Unnamed: 0 (int64, 0–16k) | text_prompt (stringlengths 110–62.1k) | code_prompt (stringlengths 37–152k) |
---|---|---|
14,200 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bidirectional LSTM on IMDB
Author
Step1: Build the model
Step2: Load the IMDB movie review sentiment data
Step3: Train and evaluate the model
You can use the trained model hosted on Hugging Face Hub and try the demo on Hugging Face Spaces. | Python Code:
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
max_features = 20000 # Only consider the top 20k words
maxlen = 200 # Only consider the first 200 words of each movie review
Explanation: Bidirectional LSTM on IMDB
Author: fchollet<br>
Date created: 2020/05/03<br>
Last modified: 2020/05/03<br>
Description: Train a 2-layer bidirectional LSTM on the IMDB movie review sentiment classification dataset.
Setup
End of explanation
# Input for variable-length sequences of integers
inputs = keras.Input(shape=(None,), dtype="int32")
# Embed each integer in a 128-dimensional vector
x = layers.Embedding(max_features, 128)(inputs)
# Add 2 bidirectional LSTMs
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
x = layers.Bidirectional(layers.LSTM(64))(x)
# Add a classifier
outputs = layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)
model.summary()
Explanation: Build the model
End of explanation
(x_train, y_train), (x_val, y_val) = keras.datasets.imdb.load_data(
num_words=max_features
)
print(len(x_train), "Training sequences")
print(len(x_val), "Validation sequences")
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen)
x_val = keras.preprocessing.sequence.pad_sequences(x_val, maxlen=maxlen)
Explanation: Load the IMDB movie review sentiment data
End of explanation
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=32, epochs=2, validation_data=(x_val, y_val))
Explanation: Train and evaluate the model
You can use the trained model hosted on Hugging Face Hub and try the demo on Hugging Face Spaces.
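As a quick local sanity check (an aside, not part of the original example), you could also run the trained model on a few padded validation reviews; this minimal sketch assumes the cells above have been executed:
probs = model.predict(x_val[:3])              # sigmoid outputs between 0 and 1
print((probs > 0.5).astype(int).ravel())      # 1 = predicted positive sentiment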
End of explanation |
14,201 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Images
Images are just arrays of data, where the data tells us the colors in the image. It will get a little more complicated than this, as we'll see below, but this is the general idea. Since colors are typically represented by three dimensions, image arrays are typically [M x N x 3], and sometimes [M x N x 4], where the final entry of the last dimension contains the alpha or transparency value.
Step1: Note
We'll use an image of Grace Hopper for our sample image. Grace was one of the first computer programmers, invented the first computer compiler, and was a US Navy Rear Admiral. She's so important that matplotlib contains a picture of her!
A. Reading in and viewing images
Reading
matplotlib
There is a basic read function in matplotlib.pyplot
Step2: What does hoppermpl look like and contain?
Step3: ... just a bunch of numbers in an array with shape [M x N x 3].
Python Imaging Library
The Python Imaging Library (PIL) is a package for image manipulation that is in wide use. We'll use the Pillow branch of PIL, which is the name of the fork still being developed and maintained. Image is contained within PIL.
Step4: What does hopperpil look like and contain?
Step5: The PIL PngImageFile object defaults to a convenient view of the picture itself.
Viewing
We have a sneak peek at the photo of Grace Hopper from the PIL Image object, but we'll also want to be able to plot the image in other ways over which we have more control.
Let's try the way we've been plotting a lot of our data
Step6: Why didn't that work?
When we've used pcolor or contourf in the past, we've always used a 2D array of data (or a single slice of a 3d array). However, this data is 3D due to having red, green, and blue values. Thus, there are too many dimensions to plot it this way.
Instead, we need to use special image-based functions to plot RGB data, for example, imshow
Step7: Notice that the x-axis 0 value is, as usual, at the left side of the figure. However, the y-axis 0 value is at the top of the figure instead of the typical bottom. This makes the origin for the coordinate axes at the top left instead of the bottom left. This is the convention for image data.
B. Converting between colorspaces
In RGB, colorspace is represented as a cube of values from 0 to 1 (or 0 to 255 or 1 to 256, depending on the specific algorithm) for each of red, green, and blue, which, when combined, represent many colors. The Hopper images are currently in RGB. However, RGB is but one representation of color. We could, instead, represent color by its hue, saturation, and value (HSV), where hue is a circular property from red to yellow to blue and back to red, saturation is the vividness of the color, and value or brightness goes from black to white. And there are many others.
There are at least a handful of Python packages out there you can use to convert color triplets between colorspaces, including colorspacious which has more options, but we'll use scikit-image.
Step8: So the HSV representation is still an array of numbers of the same shape, but they are for sure different
Step9: What is wrong here? For one thing, she is upside down. Another is that she is still colored though didn't we just eliminate all but one color channel?
We can fix the flip in plotting by either flipping the axes by hand or by using a function that is meant to plot image data, like matshow.
Step10: Grace is being colored by the default colormap, giving her a strange look. Let's choose the grayscale colormap to match our expectations in what we're doing here.
Step11: Exercise
How good is this representation of the photo in grayscale? Try the other two channels and compare, side-by-side. Which gives the best representation? Why?
Exercise
How else might we use the given RGB data to represent the image in grayscale? Play around with different approaches and be ready to discuss why one is better than another.
We can also just use a built-in function for conversion to grayscale, such as from scikit-image
Step12: D. Data in png files
Image file format png is worth a specific discussion due to its use in applications like satellite data. The pixel format of the pixels in a png file can have different numbers of dimensions, representing different things. We'll focus on two cases here
Step13: Next we examine a sea surface temperature (SST) image. Here is the edited data note from the site
Step14: This has shape [M x N] instead of [M x N x 3], so we have used matshow instead of imshow to plot it. Still, the plot doesn't look very good, does it? The land has been colored as red, which is taking up part of our 0-255 data range. Let's examine this further with a histogram of the values in the data set.
Step15: We see a suspicious pattern in the data
Step16: We need to change this information into a colormap. To do so, we need an [N x 3] array of the colormap values, where N is probably going to be 256 but doesn't have to be. Then we convert this into a colormap object.
Step17: So where exactly is the cut off for the range of data values? Here we examine the colormap values
Step18: Looks like the highest data value is 235, so everything above that can be masked out.
also: x and y coordinates
Step19: Exercise
Continue below to finish up the plot.
Mask out the land (contained in index)
Step20: Filtering
Step21: Filtering without paying attention to the dimensions of the array altered the colors of the image. But, if we instead filter in space for each channel individually
Step22: Exercise
Modify the sigma parameter, and see what happens to the image.
Gradients
Now, let's see if we can find gradients in this image. To make it easier, let's make a grayscale representation of the image by summing the RGB channels.
Step23: We use a Sobel filter (Sobel Operator) to quickly search calculate gradients in the image array.
Step24: Interpolation
Quick review of interpolation. When you have an image, or a data array on a uniform grid, map_coordinates is the best way to interpolate.
Step25: Rotation
Step26: Exercise
Try some other rotations. Does the increase in image size make sense based on trigonometry? What happens with a 90deg
rotation?
Look at the documentation, and try different modes. What's the difference between 'constant' and 'wrap'?
Try rotating back and forth 15 degrees at least 10 times, using various modes (and be sure to set 'reshape=False' to prevent the image from growing over the iterations).
An example of edge detection
Let's use some of these tools to generate a sample image – a rotated square with softened edges, and some noise added.
Step27: Now, try to detect the edges of this feature using the scikit image canny edge detection algorithm
Step28: Exercise
Try different values of sigma to see if you can isolate the square as the only feature detected.
Now let's find the convex hull of the edges that we detected (hopefully only the single square now)
Step29: This would work even for an 'open' object,
Step30: Other feature detection
Here we use an image from the Hubble telescope showing stars and galaxies as bright dots. We want to detect the galaxies automatically.
We look at three algorithms for doing this.
Step31: Here is the Laplacian of Gaussian method as an example. How many galaxies are found depends on the threshold parameter especially.
Step32: Here we show the three algorithms. The Laplacian of Gaussian (LoG) is the most accurate and slowest approach. The Difference of Gaussian (DoG) is a faster approximation of LoG approach. The Determinant of Hessian is the fastest approach but is not accurate for small blobs. More details are available online. | Python Code:
import requests # from webscraping
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
import cmocean
import cartopy
from PIL import Image # this is the pillow package
from skimage import color
from scipy import ndimage
from io import BytesIO
Explanation: Images
Images are just arrays of data, where the data tells us the colors in the image. It will get a little more complicated than this, as we'll see below, but this is the general idea. Since colors are typically represented by three dimensions, image arrays are typically [M x N x 3], and sometimes [M x N x 4], where the final entry of the last dimension contains the alpha or transparency value.
End of explanation
hoppermpl = plt.imread(matplotlib.cbook.get_sample_data("grace_hopper.png"))
Explanation: Note
We'll use an image of Grace Hopper for our sample image. Grace was one of the first computer programmers, invented the first computer compiler, and was a US Navy Rear Admiral. She's so important that matplotlib contains a picture of her!
A. Reading in and viewing images
Reading
matplotlib
There is a basic read function in matplotlib.pyplot: imread:
End of explanation
print(hoppermpl.shape, type(hoppermpl))
hoppermpl
Explanation: What does hoppermpl look like and contain?
End of explanation
hopperpil = Image.open(matplotlib.cbook.get_sample_data("grace_hopper.png"))
Explanation: ... just a bunch of numbers in an array with shape [M x N x 3].
Python Imaging Library
The Python Imaging Library (PIL) is a package for image manipulation that is in wide use. We'll use the Pillow branch of PIL, which is the name of the fork still being developed and maintained. Image is contained within PIL.
End of explanation
print(type(hopperpil))
hopperpil
Explanation: What does hopperpil look like and contain?
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
ax.pcolormesh(hoppermpl)
Explanation: The PIL PngImageFile object defaults to a convenient view of the picture itself.
Viewing
We have a sneak peek at the photo of Grace Hopper from the PIL Image object, but we'll also want to be able to plot the image in other ways over which we have more control.
Let's try the way we've been plotting a lot of our data: pcolormesh or contourf:
End of explanation
fig = plt.figure(figsize=(14, 14))
ax1 = fig.add_subplot(1, 2, 1)
ax1.imshow(hoppermpl)
ax1.set_title('data via matplotlib')
# Get an array of data from PIL object
hopperpilarr = np.asarray(hopperpil)
ax2 = fig.add_subplot(1, 2, 2)
ax2.imshow(hopperpilarr)
ax2.set_title('data via PIL')
Explanation: Why didn't that work?
When we've used pcolor or contourf in the past, we've always used a 2D array of data (or a single slice of a 3d array). However, this data is 3D due to having red, green, and blue values. Thus, there are too many dimensions to plot it this way.
Instead, we need to use special image-based functions to plot RGB data, for example, imshow:
End of explanation
hopperhsv = color.convert_colorspace(hoppermpl, "RGB", "HSV")
hopperhsv
plt.plot(hoppermpl[:,:,0], hopperhsv[:,:,0], '.k');
Explanation: Notice that the x-axis 0 value is, as usual, at the left side of the figure. However, the y-axis 0 value is at the top of the figure instead of the typical bottom. This makes the origin for the coordinate axes at the top left instead of the bottom left. This is the convention for image data.
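If you prefer the usual mathematical convention, imshow also accepts origin='lower'; a small aside (not in the original notebook) — you then need to reverse the rows yourself so the portrait is not drawn upside down:
plt.imshow(hoppermpl[::-1], origin='lower')  # flip the rows and put y=0 at the bottom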
B. Converting between colorspaces
In RGB, colorspace is represented as a cube of values from 0 to 1 (or 0 to 255 or 1 to 256, depending on the specific algorithm) for each of red, green, and blue, which, when combined, represent many colors. The Hopper images are currently in RGB. However, RGB is but one representation of color. We could, instead, represent color by its hue, saturation, and value (HSV), where hue is a circular property from red to yellow to blue and back to red, saturation is the vividness of the color, and value or brightness goes from black to white. And there are many others.
There are at least a handful of Python packages out there you can use to convert color triplets between colorspaces, including colorspacious which has more options, but we'll use scikit-image.
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
ax.pcolormesh(hoppermpl[:,:,0])
Explanation: So the HSV representation is still an array of numbers of the same shape, but they are for sure different: if they were the same, plotting them against each other would give a 1-1 correspondence.
C. Converting to grayscale
An image can be represented by shades of gray instead of in 3D colorspace; when you convert to grayscale from 3D colorspace, you inherently discard information. There are many ways of doing this transformation (this link is a great resource).
How might we convert to grayscale? We have RGB information, which is more than we need. What if we just take one channel?
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
ax.matshow(hoppermpl[:,:,0])
Explanation: What is wrong here? For one thing, she is upside down. Another is that she is still colored though didn't we just eliminate all but one color channel?
We can fix the flip in plotting by either flipping the axes by hand or by using a function that is meant to plot image data, like matshow.
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
ax.matshow(hoppermpl[:,:,0], cmap='gray')
Explanation: Grace is being colored by the default colormap, giving her a strange look. Let's choose the grayscale colormap to match our expectations in what we're doing here.
End of explanation
hoppergray = color.rgb2gray(hoppermpl)
print(hoppergray.shape)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.matshow(hoppergray, cmap='gray')
Explanation: Exercise
How good is this representation of the photo in grayscale? Try the other two channels and compare, side-by-side. Which gives the best representation? Why?
Exercise
How else might we use the given RGB data to represent the image in grayscale? Play around with different approaches and be ready to discuss why one is better than another.
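One common starting point (offered here only as a hint, not the notebook's answer) is a weighted sum of the channels using the classic luma weights, which tracks perceived brightness better than any single channel does:
gray_lum = 0.299*hoppermpl[:,:,0] + 0.587*hoppermpl[:,:,1] + 0.114*hoppermpl[:,:,2]
plt.matshow(gray_lum, cmap='gray')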
We can also just use a built-in function for conversion to grayscale, such as from scikit-image:
End of explanation
# RGB
image_loc = 'http://optics.marine.usf.edu/subscription/modis/GCOOS/2016/daily/091/A20160911855.1KM.GCOOS.PASS.L3D_RRC.RGB.png'
response = requests.get(image_loc) # choose one of the files to show as an example
img = Image.open(BytesIO(response.content))
rgb = np.asarray(img)
print(rgb.shape)
plt.imshow(rgb)
Explanation: D. Data in png files
Image file format png is worth a specific discussion due to its use in applications like satellite data. The pixel format of the pixels in a png file can have different numbers of dimensions, representing different things. We'll focus on two cases here: the [M x N x 3] and [M x N] cases.
Returning to our web scraping example using satellite data, we find that different types of satellite data products have differently-sized arrays. Note that when you go to the website and examine the information associated with various satellite products, you get hints about how many channels of data it should contain.
First we examine an RGB composite image. The (edited) note associated with this data on the website is as follows:
RGB: Red-Green-Blue composite image showing clouds, ocean, and land. The resulting reflectance in the three MODIS bands (645 nm: R; 555 nm: G; 469 nm: B) is stretched to 0-255 to obtain the RGB image.
This turns out to be pretty straight-forward to plot if we just treat the data we've read in as an image:
End of explanation
# SST
image_loc = 'http://optics.marine.usf.edu/subscription/modis/GCOOS/2016/daily/091/A20160911855.1KM.GCOOS.PASS.L3D.SST.png'
response = requests.get(image_loc) # choose one of the files to show as an example
img = Image.open(BytesIO(response.content))
index = np.asarray(img)
print(index.shape)
plt.matshow(index)
Explanation: Next we examine a sea surface temperature (SST) image. Here is the edited data note from the site:
SST: Sea Surface Temperature (in Degree C) estimated using the SeaDAS processing software (default product) with a multi-channel non-linear regression algorithm (Brown and Minnett, 1999). The MODIS standard product MOD35 (Ackerman et al., 2010) is used to discriminate clouds from water, and a cloudmask (grey color) is overlaid on the image.
What is this telling us? The data in the image is not represented in three channels like in the previous example, but in a single channel or index. It looks like it is represented in 3D colorspace, but really what we are seeing is a single channel of data being mapped using a colormap, just like in any of our typical data plots using pcolormesh, etc. This means that we are working to access the data points themselves, which we will then want to plot with our own colormap for representation.
End of explanation
n, bins, patches = plt.hist(index.flatten(), range=[0,255], bins=256) # use 256 bins, one for each color representation in the data.
Explanation: This has shape [M x N] instead of [M x N x 3], so we have used matshow instead of imshow to plot it. Still, the plot doesn't look very good, does it? The land has been colored as red, which is taking up part of our 0-255 data range. Let's examine this further with a histogram of the values in the data set.
End of explanation
img.getpalette()
Explanation: We see a suspicious pattern in the data: there is a reasonable-looking spread of data in the lower part of the available bins, then nothing, then some big peaks with high, singular values (without a spread). This is telling us that the data itself is in the lower part of the representation range, and other parts of the image are represented with reserved larger values.
The histogram values give us a strong clue about this. We can also directly examine the colormap used in this data to figure out the range of data. The PIL function getpalette tells us this information as a list of RGB values:
End of explanation
# the -1 in reshape lets that dimension be what it needs to be
palette = np.asarray(img.getpalette()).reshape(-1, 3) # change list to array, then reshape into [Nx3]
palette.shape
cmap = cmocean.tools.cmap(palette) # Create a colormap object
plt.matshow(index, cmap=cmap, vmin=0, vmax=255) # use the colormap object
plt.colorbar()
Explanation: We need to change this information into a colormap. To do so, we need an [N x 3] array of the colormap values, where N is probably going to be 256 but doesn't have to be. Then we convert this into a colormap object.
End of explanation
plt.plot(palette)
# plt.gca().set_xlim(230, 250)
Explanation: So where exactly is the cut off for the range of data values? Here we examine the colormap values:
End of explanation
lon = np.linspace(-98, -79, index.shape[1]) # know the number of longitudes must match corresponding number in image array
lat = np.linspace(18, 31, index.shape[0])
lat = lat[::-1] # flipping it makes plotting later work immediately
Explanation: Looks like the highest data value is 235, so everything above that can be masked out.
also: x and y coordinates
We want the appropriate x and y coordinates to go with our image. There is information about this on the data page:
The Gulf of Mexico Coastal Ocean Observing System region is an area bounded within these coordinates: 31°N 18°N 79°W and 98°W.
...
All images are mapped to a cylindrical equidistant projection. Images are at 1 kilometer resolution.
A cylindrical equidistant projection is just lon/lat.
End of explanation
image_loc = 'https://upload.wikimedia.org/wikipedia/commons/c/c4/PM5544_with_non-PAL_signals.png'
response = requests.get(image_loc)
img = Image.open(BytesIO(response.content)) # using PIL
index = np.asarray(img)
plt.imshow(index)
Explanation: Exercise
Continue below to finish up the plot.
Mask out the land (contained in index):
Make a new colormap instance that includes only the data range and not the masking values (since palette also contains color information for the land):
Plot the satellite data. What should the range of data be? Be sure to show the colorbar to check your work.
How about a good colormap to finish off the plot?
Ok. So we have a plot with a reasonable range for the data and the image looks pretty good. What do these values represent, though? The color index probably doesn't actually have values from datamin to datamax. Rather, we have to determine the range of the data that was used in the originally plotted colormap and transform the values to span the correct range.
How do we do this? To start, we need to know the colorbar min and max that were used in the original image. It turns out that while this information is not on the png, it is on the google earth representation. Here is a direct link to that data page so we can click around.
Exercise
Find the min/max values of the data. Then think about how to convert your index data into temperature data within this range.
Once you've converted the data, make a proper plot of the satellite data!
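If you get stuck, here is a hedged sketch of the masking and rescaling steps; datamin and datamax are placeholders for the values you find on the data page, not numbers taken from this notebook:
sst_index = np.ma.masked_greater(index, 235)  # drop the land/cloud codes above the data range
# sst_degC = datamin + (datamax - datamin) * sst_index / 235.0  # linear rescale once datamin/datamax are known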
Image Analysis
Let's start with a simple image, but keep in mind that these techniques could be applied also to data arrays that aren't images.
End of explanation
findex = ndimage.gaussian_filter(index, 2.0) # filters in all 'three' dimensions, including channel...
plt.imshow(findex) # ...probably not what we want.
Explanation: Filtering
End of explanation
sigma = 2.0 # Standard deviation of the gaussian kernel. Bigger sigma == more smoothing.
findex = np.zeros_like(index)
for channel in range(3):
findex[:, :, channel] = ndimage.gaussian_filter(index[:, :, channel], sigma=sigma)
plt.imshow(findex)
Explanation: Filtering without paying attention to the dimensions of the array altered the colors of the image. But, if we instead filter in space for each channel individually:
End of explanation
gsindex = index.sum(axis=-1)
fig = plt.figure(figsize=(7.68, 5.76), dpi=100)
ax = fig.add_axes([0, 0, 1, 1])
ax.axis('off')
plt.imshow(gsindex, cmap='gray')
Explanation: Exercise
Modify the sigma parameter, and see what happens to the image.
Gradients
Now, let's see if we can find gradients in this image. To make it easier, let's make a grayscale representation of the image by summing the RGB channels.
End of explanation
# FINDING GRADIENTS
from scipy.ndimage import sobel, generic_gradient_magnitude
d_gsindex = ndimage.generic_gradient_magnitude(gsindex, sobel)
# Note screen resolution is about 100dpi, so let's make sure the image is big enough to see all the points.
fig = plt.figure(figsize=(7.68, 5.76))
ax = fig.add_axes([0, 0, 1, 1])
ax.axis('off')
ax.matshow(d_gsindex, cmap='gray')
Explanation: We use a Sobel filter (Sobel Operator) to quickly calculate gradients in the image array.
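Equivalently (an aside, not in the original notebook), you can build the gradient magnitude from the two directional Sobel passes yourself:
dx = sobel(gsindex, axis=1)   # horizontal derivative
dy = sobel(gsindex, axis=0)   # vertical derivative
grad_mag = np.hypot(dx, dy)   # same idea as generic_gradient_magnitude(gsindex, sobel)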
End of explanation
# INTERPOLATION/MAPPING
x = 768*np.random.rand(50000)
y = 578*np.random.rand(50000)
xy = np.vstack((y, x))
z = ndimage.map_coordinates(gsindex, xy)
plt.scatter(x, y, 10, z, edgecolor='none')
Explanation: Interpolation
Quick review of interpolation. When you have an image, or a data array on a uniform grid, map_coordinates is the best way to interpolate.
End of explanation
# ROTATING
rgsindex = ndimage.rotate(gsindex, 15, mode='wrap')
fig = plt.figure(figsize=(7.68, 5.76), dpi=100)
ax = fig.add_axes([0, 0, 1, 1])
ax.axis('off')
plt.imshow(rgsindex, cmap='gray')
# Note, the image size increased to accommodate the rotation.
print(rgsindex.shape, gsindex.shape)
Explanation: Rotation
End of explanation
im = np.zeros((128, 128))
im[32:-32, 32:-32] = 1
im = ndimage.rotate(im, 15, mode='constant')
im = ndimage.gaussian_filter(im, 4)
im += 0.2 * np.random.random(im.shape)
plt.imshow(im, cmap='viridis')
Explanation: Exercise
Try some other rotations. Does the increase in image size make sense based on trigonometry? What happens with a 90deg
rotation?
Look at the documentation, and try different modes. What's the difference between 'constant' and 'wrap'?
Try rotating back and forth 15 degrees at least 10 times, using various modes (and be sure to set 'reshape=False' to prevent the image from growing over the iterations).
An example of edge detection
Let's use some of these tools to generate a sample image – a rotated square with softened edges, and some noise added.
End of explanation
from skimage import feature
edges = feature.canny(im, sigma=1) # sigma=1 is the default
plt.imshow(edges, cmap='viridis')
Explanation: Now, try to detect the edges of this feature using the scikit image canny edge detection algorithm:
End of explanation
from skimage.morphology import convex_hull_image
chull = convex_hull_image(edges)
plt.imshow(chull, cmap='viridis')
Explanation: Exercise
Try different values of sigma to see if you can isolate the square as the only feature detected.
Now let's find the convex hull of the edges that we detected (hopefully only the single square now):
End of explanation
diag_mask = np.triu(np.ones(im.shape))
edges = edges.astype('float') * diag_mask
chull = convex_hull_image(edges)
fig, axs = plt.subplots(1, 2)
axs[0].imshow(edges, cmap='viridis')
axs[1].imshow(chull, cmap='viridis')
Explanation: This would work even for an 'open' object,
End of explanation
from skimage import data
from skimage.feature import blob_dog, blob_log, blob_doh
from skimage.color import rgb2gray
image = data.hubble_deep_field()[0:500, 0:500]
image_gray = rgb2gray(image)
plt.imshow(image_gray, cmap='gray')
Explanation: Other feature detection
Here we use an image from the Hubble telescope showing stars and galaxies as bright dots. We want to detect the galaxies automatically.
We look at three algorithms for doing this.
End of explanation
blobs_log = blob_log(image_gray, max_sigma=30, num_sigma=10, threshold=.4)
# the data are x, y, sigma for all the blobs. Lets make a quick plot.
y = blobs_log[:, 0]
x = blobs_log[:, 1]
sigma = blobs_log[:, 2]
# Calculate the radius of the blob from sigma, which is given in the docs as:
r = sigma*np.sqrt(2)
# represent marker size with r^2 to approximate area, and use log10(r) to give a spread in colors
plt.scatter(x, -y, r**2, np.log10(r), cmap='viridis', edgecolor='none')
plt.colorbar()
plt.axis('tight')
plt.gca().set_aspect(1.0)
Explanation: Here is the Laplacian of Gaussian method as an example. How many galaxies are found depends on the threshold parameter especially.
End of explanation
blobs_dog = blob_dog(image_gray, max_sigma=30, threshold=.1)
# For this method, the radius is again found by multiplying by sqrt(2)
blobs_dog[:, 2] = blobs_dog[:, 2] * np.sqrt(2)
# for this method, the third output parameter is already approximately the blob radius
blobs_doh = blob_doh(image_gray, max_sigma=30, threshold=.01)
blobs_list = [blobs_log, blobs_dog, blobs_doh]
colors = ['yellow', 'lime', 'red']
titles = ['Laplacian of Gaussian', 'Difference of Gaussian',
'Determinant of Hessian']
fig, axes = plt.subplots(1, 3, figsize=(14, 4), sharex=True, sharey=True)
plt.tight_layout()
# loop over the parameters in `zip` simultaneously
for blobs, color, title, ax in zip(blobs_list, colors, titles, axes):
ax.set_title(title)
# show the original image
ax.imshow(image, interpolation='nearest')
ax.set_axis_off()
# plot the detected blobs over the image to compare
for blob in blobs:
y, x, r = blob
c = plt.Circle((x, y), r, color=color, linewidth=2, fill=False)
ax.add_patch(c)
plt.show()
Explanation: Here we show the three algorithms. The Laplacian of Gaussian (LoG) is the most accurate and slowest approach. The Difference of Gaussian (DoG) is a faster approximation of LoG approach. The Determinant of Hessian is the fastest approach but is not accurate for small blobs. More details are available online.
End of explanation |
14,202 | Given the following text description, write Python code to implement the functionality described.
Description:
Program to find the largest and smallest ASCII valued characters in a string
Function that returns the largest alphabet ; Initializing max alphabet to 'A' ; Find largest alphabet ; Returning largest element ; Function that returns the smallest alphabet ; Initializing smallest alphabet to 'z' ; Find smallest alphabet ; Returning smallest alphabet ; Driver code ; Character array ; Calculating size of the string ; Calling functions and print returned value
| Python Code:
def largest_alphabet(a, n):
    # Initializing max alphabet to 'A'
    max_ch = 'A'
    # Find the largest (highest ASCII value) character
    for i in range(n):
        if a[i] > max_ch:
            max_ch = a[i]
    # Returning largest element
    return max_ch

def smallest_alphabet(a, n):
    # Initializing smallest alphabet to 'z'
    min_ch = 'z'
    # Find the smallest (lowest ASCII value) character
    for i in range(n):
        if a[i] < min_ch:
            min_ch = a[i]
    # Returning smallest alphabet
    return min_ch

# Driver code
if __name__ == '__main__':
    # Character array
    a = "GeEksforGeeks"
    # Calculating size of the string
    size = len(a)
    # Calling functions and printing the returned values
    print("Largest and smallest alphabet is : ", end="")
    print(largest_alphabet(a, size), end=" and ")
    print(smallest_alphabet(a, size))
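As an aside (not part of the GeeksforGeeks solution), Python's built-ins give the same answer in one line, since string characters compare by their character codes:
print(max("GeEksforGeeks"), min("GeEksforGeeks"))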
|
14,203 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
From scratch
Step1: Term frequencies
Term frequency indicates how often each word appears in the document. The intuition for including term frequency in the tf-idf calculation is that the more frequently a word appears in a single document, the more important that term is to the document.
tf(t,d) = count of t in document / number of words in document
Step2: Inverse document frequencies
The inverse document frequency component of the tf-idf score penalizes terms that appear more frequently across a corpus. The intuition is that words that appear more frequently in the corpus give less insight into the topic or meaning of an individual document, and should thus be deprioritized.
We can calculate the inverse document frequency for some term t across a corpus using
idf(t) = log(n/occurrence of t in documents) + 1
Smoothing idf
Step3: TF-IDF
tf-idf(t, d) = tf(t, d) * idf(t)
Step4: TF-IDF using sklearn
Step5: Search algorithm | Python Code:
import pandas as pd
import numpy as np
Explanation: From scratch
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
term_frequencies = vectorizer.fit_transform(corpus_cleaned).toarray()
term_frequencies
# Visualize term_frequencies
df_tf = pd.DataFrame(
term_frequencies.T,
index=vectorizer.get_feature_names(),
columns=corpus_cleaned
)
df_tf
Explanation: Term frequencies
Term frequency indicates how often each word appears in the document. The intuition for including term frequency in the tf-idf calculation is that the more frequently a word appears in a single document, the more important that term is to the document.
tf(t,d) = count of t in document / number of words in document
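Note that scikit-learn's CountVectorizer (used in the next cell) keeps raw counts rather than dividing by the document length; if you want the normalized form from the formula above, it is a one-liner on top of the count matrix built below (shown here as an aside):
tf_normalized = term_frequencies / term_frequencies.sum(axis=1, keepdims=True)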
End of explanation
n_samples, n_features = term_frequencies.shape
doc_frequency = (term_frequencies > 0).sum(axis=0)  # number of documents containing each term
inverse_doc_frequency = np.log(
(1 + n_samples) / (1 + doc_frequency)
) + 1
inverse_doc_frequency
# Inverse Document Frequency using sklearn
from sklearn.feature_extraction.text import TfidfTransformer
transformer = TfidfTransformer(norm=None, smooth_idf=True)
transformer.fit(term_frequencies)
inverse_doc_frequency = transformer.idf_
inverse_doc_frequency
# Visualize inverse_doc_frequency
df_itf = pd.DataFrame(
inverse_doc_frequency,
index=vectorizer.get_feature_names(),
columns=['idf'])
df_itf
Explanation: Inverse document frequencies
The inverse document frequency component of the tf-idf score penalizes terms that appear more frequently across a corpus. The intuition is that words that appear more frequently in the corpus give less insight into the topic or meaning of an individual document, and should thus be deprioritized.
We can calculate the inverse document frequency for some term t across a corpus using
idf(t) = log(n/occurrence of t in documents) + 1
Smoothing idf: As we cannot divide by 0, the constant "1" is added to the numerator and denominator of the idf as if an extra document was seen containing every term in the collection exactly once, which prevents zero divisions:
smoothed_idf(t) = log [ (1 + n) / (1 + df(t)) ] + 1
The important take away from the equation is that as the number of documents with the term t increases, the inverse document frequency decreases (due to the nature of the log function).
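A quick numeric check of that behaviour (an aside, using n = 3 documents):
np.log((1 + 3) / (1 + 1)) + 1   # term in 1 of 3 documents -> about 1.69
np.log((1 + 3) / (1 + 3)) + 1   # term in all 3 documents  -> exactly 1.0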
End of explanation
df_tf * df_itf.values
Explanation: TF-IDF
tf-idf(t, d) = tf(t, d) * idf(t)
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(norm=None)
tfidf_scores = vectorizer.fit_transform(corpus_cleaned).toarray()
tfidf_scores
pd.DataFrame(
tfidf_scores.T,
index=vectorizer.get_feature_names(),
columns=corpus_cleaned
)
Explanation: TF-IDF using sklearn
End of explanation
the_raven = '''
Once upon a midnight dreary, while I pondered, weak and weary,
Over many a quaint and curious volume of forgotten lore,
While I nodded, nearly napping, suddenly there came a tapping,
As of some one gently rapping, rapping at my chamber door.
“‘Tis some visiter,” I muttered, “tapping at my chamber door—
Only this, and nothing more.”
Ah, distinctly I remember it was in the bleak December,
And each separate dying ember wrought its ghost upon the floor.
Eagerly I wished the morrow;—vainly I had sought to borrow
From my books surcease of sorrow—sorrow for the lost Lenore—
For the rare and radiant maiden whom the angels name Lenore—
Nameless here for evermore.
And the silken sad uncertain rustling of each purple curtain
Thrilled me—filled me with fantastic terrors never felt before;
So that now, to still the beating of my heart, I stood repeating
“‘Tis some visiter entreating entrance at my chamber door—
Some late visiter entreating entrance at my chamber door;—
This it is, and nothing more.”
Presently my soul grew stronger; hesitating then no longer,
“Sir,” said I, “or Madam, truly your forgiveness I implore;
But the fact is I was napping, and so gently you came rapping,
And so faintly you came tapping, tapping at my chamber door,
That I scarce was sure I heard you “—here I opened wide the door;——
Darkness there and nothing more.
Deep into that darkness peering, long I stood there wondering, fearing,
Doubting, dreaming dreams no mortal ever dared to dream before;
But the silence was unbroken, and the darkness gave no token,
And the only word there spoken was the whispered word, “Lenore!”
This I whispered, and an echo murmured back the word, “Lenore!”—
Merely this, and nothing more.
Back into the chamber turning, all my soul within me burning,
Soon I heard again a tapping somewhat louder than before.
“Surely,” said I, “surely that is something at my window lattice;
Let me see, then, what thereat is, and this mystery explore—
Let my heart be still a moment and this mystery explore;—
‘Tis the wind and nothing more!”
Open here I flung the shutter, when, with many a flirt and flutter,
In there stepped a stately raven of the saintly days of yore;
Not the least obeisance made he; not an instant stopped or stayed he;
But, with mien of lord or lady, perched above my chamber door—
Perched upon a bust of Pallas just above my chamber door—
Perched, and sat, and nothing more.
Then this ebony bird beguiling my sad fancy into smiling,
By the grave and stern decorum of the countenance it wore,
“Though thy crest be shorn and shaven, thou,” I said, “art sure no craven,
Ghastly grim and ancient raven wandering from the Nightly shore—
Tell me what thy lordly name is on the Night’s Plutonian shore!”
Quoth the raven “Nevermore.”
Much I marvelled this ungainly fowl to hear discourse so plainly,
Though its answer little meaning—little relevancy bore;
For we cannot help agreeing that no living human being
Ever yet was blessed with seeing bird above his chamber door—
Bird or beast upon the sculptured bust above his chamber door,
With such name as “Nevermore.”
But the raven, sitting lonely on the placid bust, spoke only
That one word, as if his soul in that one word he did outpour.
Nothing farther then he uttered—not a feather then he fluttered—
Till I scarcely more than muttered “Other friends have flown before—
On the morrow he will leave me, as my hopes have flown before.”
Then the bird said “Nevermore.”
Startled at the stillness broken by reply so aptly spoken,
“Doubtless,” said I, “what it utters is its only stock and store
Caught from some unhappy master whom unmerciful Disaster
Followed fast and followed faster till his songs one burden bore—
Till the dirges of his Hope that melancholy burden bore
Of “Never—nevermore.”
But the raven still beguiling all my sad soul into smiling,
Straight I wheeled a cushioned seat in front of bird, and bust and door;
Then, upon the velvet sinking, I betook myself to thinking
Fancy unto fancy, thinking what this ominous bird of yore—
What this grim, ungainly, ghastly, gaunt and ominous bird of yore
Meant in croaking “Nevermore.”
This I sat engaged in guessing, but no syllable expressing
To the fowl whose fiery eyes now burned into my bosom’s core;
This and more I sat divining, with my head at ease reclining
On the cushion’s velvet lining that the lamplght gloated o’er,
But whose velvet violet lining with the lamplight gloating o’er,
She shall press, ah, nevermore!
Then, methought, the air grew denser, perfumed from an unseen censer
Swung by Angels whose faint foot-falls tinkled on the tufted floor.
“Wretch,” I cried, “thy God hath lent thee—by these angels he hath sent
thee
Respite—respite and nepenthe from thy memories of Lenore;
Quaff, oh quaff this kind nepenthe and forget this lost Lenore!”
Quoth the raven, “Nevermore.”
“Prophet!” said I, “thing of evil!—prophet still, if bird or devil!—
Whether Tempter sent, or whether tempest tossed thee here ashore,
Desolate yet all undaunted, on this desert land enchanted—
On this home by Horror haunted—tell me truly, I implore—
Is there—is there balm in Gilead?—tell me—tell me, I implore!”
Quoth the raven, “Nevermore.”
“Prophet!” said I, “thing of evil—prophet still, if bird or devil!
By that Heaven that bends above us—by that God we both adore—
Tell this soul with sorrow laden if, within the distant Aidenn,
It shall clasp a sainted maiden whom the angels name Lenore—
Clasp a rare and radiant maiden whom the angels name Lenore.”
Quoth the raven, “Nevermore.”
“Be that word our sign of parting, bird or fiend!” I shrieked, upstarting—
“Get thee back into the tempest and the Night’s Plutonian shore!
Leave no black plume as a token of that lie thy soul hath spoken!
Leave my loneliness unbroken!—quit the bust above my door!
Take thy beak from out my heart, and take thy form from off my door!”
Quoth the raven, “Nevermore.”
And the raven, never flitting, still is sitting, still is sitting
On the pallid bust of Pallas just above my chamber door;
And his eyes have all the seeming of a demon’s that is dreaming,
And the lamp-light o’er him streaming throws his shadow on the floor;
And my soul from out that shadow that lies floating on the floor
Shall be lifted—nevermore!
'''
raven = the_raven.split('.')
len(raven)
raven[0]
raven_cleaned = [" ".join(preprocess_text(txt)) for txt in raven]
raven_cleaned[0]
# Build tf-idf lookup table
vectorizer = TfidfVectorizer(norm=None)
tfidf_scores = vectorizer.fit_transform(raven_cleaned).toarray()
df_tfidf = pd.DataFrame(
tfidf_scores.T,
index=vectorizer.get_feature_names()
)
df_tfidf
# Get most relevant docs for "the bird"
terms = 'the bird'.split(' ')
search = None
for term in terms:
if search is None:
search = df_tfidf.loc[term]
else:
search += df_tfidf.loc[term]
search = search.sort_values(ascending=False)
search
from IPython.display import display, HTML
i = 0
for idx in search.index[:5]:
i += 1
html = raven[idx] \
.replace('\n', '<br>')
for term in terms:
html = html.replace(term, '<marked>' + term +'</marked>')
display(HTML('<style>marked{background:lightskyblue}</style>'
+ '<h3>Result ' + str(i) + '</h3>'
+ html))
Explanation: Search algorithm
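An alternative scoring approach (not in the original notebook) is cosine similarity between a tf-idf vector of the whole query and each stanza; a minimal sketch reusing the already-fitted vectorizer:
from sklearn.metrics.pairwise import cosine_similarity
query_vec = vectorizer.transform(['the bird'])
sims = cosine_similarity(query_vec, tfidf_scores).ravel()
top5 = sims.argsort()[::-1][:5]   # indices of the five most similar stanzas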
End of explanation |
14,204 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GA Python
Here is a simple example of using a GA in Python using PyOptSparse and my wrapper available here.
Step1: Here is where we define the problem and choose an optimizer. Various options exist, I've set a few, see documentation for details.
Step2: Now we can run the optimizer and parse the results.
Step3: NSGA, like many genetic algorithms, doesn't have any speicific convergence criteria other than the maximum number of generations. I set it at 200 in this case. Notice that the answer is ok, but not super great.
Let's also try with SNOPT and start fairly far away (and I won't supply gradients) | Python Code:
def rosen(x):
f = (1 - x[0])**2 + 100*(x[1] - x[0]**2)**2
c = []
return f, c
Explanation: GA Python
Here is a simple example of using a GA in Python using PyOptSparse and my wrapper available here.
End of explanation
from pyoptsparse import NSGA2
# choose optimizer and define options
optimizer = NSGA2()
optimizer.setOption('maxGen', 200)
optimizer.setOption('PopSize', 40)
optimizer.setOption('pMut_real', 0.01)
optimizer.setOption('pCross_real', 1.0)
Explanation: Here is where we define the problem and choose an optimizer. Various options exist, I've set a few, see documentation for details.
End of explanation
from pyoptwrapper import optimize
x0 = [4.0, 4.0]
lb = [-5.0, -5.0]
ub = [5.0, 5.0]
xopt, fopt, info = optimize(rosen, x0, lb, ub, optimizer)
print('results:', xopt, fopt, info)
Explanation: Now we can run the optimizer and parse the results.
End of explanation
from pyoptsparse import SNOPT
optimizer = SNOPT()
xopt, fopt, info = optimize(rosen, x0, lb, ub, optimizer)
print('results:', xopt, fopt, info)
Explanation: NSGA, like many genetic algorithms, doesn't have any specific convergence criteria other than the maximum number of generations. I set it at 200 in this case. Notice that the answer is ok, but not super great.
Let's also try with SNOPT and start fairly far away (and I won't supply gradients):
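If you did want to hand SNOPT analytic gradients, the Rosenbrock gradient is easy to write down (how it gets passed in depends on the wrapper's interface, which is not shown here):
def rosen_grad(x):
    dfdx0 = -2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2)
    dfdx1 = 200*(x[1] - x[0]**2)
    return [dfdx0, dfdx1]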
End of explanation |
14,205 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Embedded Actions in <span style="font-variant:small-caps;">Antlr</span> Grammars
Step1: The grammar shown above has no semantic actions (with the exception of the skip action).
We extend this grammar now with semantic actions so that we can actually compute something.
This grammar is stored in the file Calculator.g4. It describes a language for a
symbolic calculator
Step2: First, we have to generate both the scanner and the parser.
Step3: We can use the system command ls to see which files have been generated by <span style="font-variant:small-caps;">Antlr</span>.
Step4: The files CalculatorLexer.py and CalculatorParser.py contain the generated scanner and parser, respectively. We have to import these files. Furthermore, the runtime of
<span style="font-variant
Step5: Let us parse and evaluate the input that we read from a prompt. | Python Code:
!cat -n Program.g4
Explanation: Embedded Actions in <span style="font-variant:small-caps;">Antlr</span> Grammars
The pure grammar is stored in the file Grammar.g4.
End of explanation
!cat -n Calculator.g4
Explanation: The grammar shown above has no semantic actions (with the exception of the skip action).
We extend this grammar now with semantic actions so that we can actually compute something.
This grammar is stored in the file Calculator.g4. It describes a language for a
symbolic calculator: This calculator is able to evaluate arithmetic expressions and, furthermore,
lets us store the results of our computations in variables.
End of explanation
!antlr4 -Dlanguage=Python3 Calculator.g4
Explanation: First, we have to generate both the scanner and the parser.
End of explanation
!ls -l
Explanation: We can use the system command ls to see which files have been generated by <span style="font-variant:small-caps;">Antlr</span>.
If you are using a windows system you have to use the command dir instead.
End of explanation
from CalculatorLexer import CalculatorLexer
from CalculatorParser import CalculatorParser
import antlr4
Explanation: The files CalculatorLexer.py and CalculatorParser.py contain the generated scanner and parser, respectively. We have to import these files. Furthermore, the runtime of
<span style="font-variant:small-caps;">Antlr</span>
needs to be imported.
End of explanation
def main():
parser = CalculatorParser(None) # generate parser without lexer
parser.Values = {}
line = input('> ')
while line != '':
input_stream = antlr4.InputStream(line)
lexer = CalculatorLexer(input_stream)
token_stream = antlr4.CommonTokenStream(lexer)
parser.setInputStream(token_stream)
parser.start()
line = input('> ')
return parser.Values
main()
!rm *.py *.tokens *.interp
!rm -r __pycache__/
!ls -l
Explanation: Let us parse and evaluate the input that we read from a prompt.
End of explanation |
14,206 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Imports
Step1: Configuration
Step2: Read images and labels [WORK REQUIRED]
Use fileset=tf.data.Dataset.list_files to scan the data folder
Iterate through the dataset of filenames
Step3: Useful code snippets
Decode a JPEG in Tensorflow
Step4: Decode a JPEG and extract folder name in TF | Python Code:
import os, sys, math
import numpy as np
from matplotlib import pyplot as plt
if 'google.colab' in sys.modules: # Colab-only Tensorflow version selector
%tensorflow_version 2.x
import tensorflow as tf
print("Tensorflow version " + tf.__version__)
#@title "display utilities [RUN ME]"
def display_9_images_from_dataset(dataset):
plt.figure(figsize=(13,13))
subplot=331
for i, (image, label) in enumerate(dataset):
plt.subplot(subplot)
plt.axis('off')
plt.imshow(image.numpy().astype(np.uint8))
plt.title(label.numpy().decode("utf-8"), fontsize=16)
subplot += 1
if i==8:
break
plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
Explanation: Imports
End of explanation
GCS_PATTERN = 'gs://flowers-public/*/*.jpg'
CLASSES = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips'] # flower labels (folder names in the data)
Explanation: Configuration
End of explanation
nb_images = len(tf.io.gfile.glob(GCS_PATTERN))
print("Pattern matches {} images.".format(nb_images))
#
# YOUR CODE GOES HERE
#
#display_9_images_from_dataset(dataset)
Explanation: Read images and labels [WORK REQUIRED]
Use fileset=tf.data.Dataset.list_files to scan the data folder
Iterate through the dataset of filenames: for filename in fileset:... .
Does it work ? Yes, but if you print the filename you get Tensors containing strings.
To display the string only, you can use filename.numpy(). This works on any Tensorflow tensor.
tip: to limit the size of the dataset for display, you can use Dataset.take(). Like this: for data in dataset.take(10): ....
Use tf.data.Dataset.map to decode the JPEG files. You will find useful TF code snippets below.
Iterate on the image dataset. You can use .numpy().shape to only see the data sizes.
Are all images of the same size ?
Now create a training dataset: you have images but you also need labels:
the labels (flower names) are the directory names. You will find useful TF code snippets below for parsing them.
If you do "return image, label" in the decoding function, you will have a Dataset of pairs (image, label).
You can see the flowers and their labels with the display_9_images_from_dataset function. It expects the Dataset to have (image, label) elements.
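One possible solution sketch, in case you want to check your work (it uses the decode_jpeg_and_label helper defined further down):
dataset = tf.data.Dataset.list_files(GCS_PATTERN)
dataset = dataset.map(decode_jpeg_and_label)
display_9_images_from_dataset(dataset)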
End of explanation
def decode_jpeg(filename):
bits = tf.io.read_file(filename)
image = tf.image.decode_jpeg(bits)
return image
Explanation: Useful code snippets
Decode a JPEG in Tensorflow
End of explanation
def decode_jpeg_and_label(filename):
bits = tf.io.read_file(filename)
image = tf.image.decode_jpeg(bits)
# parse flower name from containing directory
label = tf.strings.split(tf.expand_dims(filename, axis=-1), sep='/')
label = label.values[-2]
return image, label
Explanation: Decode a JPEG and extract folder name in TF
End of explanation |
14,207 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CTA AEFF interpolation
There was a report of possible bugs with CTA DC-1 AEFF interpolation via email.
In this notebook we have a quick look.
As far as I can see, everything is as expected and as good as it can be
(given the somewhat noisy IRFs from CTA, and assuming we're not introducing smoothing in Gammapy for now).
Step1: Bins and nodes
Everything looks as expected to me.
Note that CTA IRFs currently are produced from diffuse photons and IRFs computed in offset bins of 0 to 1 deg, 1 to 2 deg and so on. In Gammapy, we choose to put the node for the interpolation at the bin center in offset, i.e. the first node is at 0.5 deg, the second at 1.5 deg and so on.
Step2: Peek
Let's have a quick look at AEFF, and especially at the extrapolation to offset = 0 deg.
Everything looks as expected
Step3: Evaluate at nodes
We're using bilinear interpolation in energy and offset, so evaluating at nodes should give the exact same values as the data array.
Step4: Extrapolation
When extrapolating to energies below the lowest node at 0.014 TeV, negative effective area values do occur.
Running analyses that include energies so low will probably cause issues. So for now to get correct results | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import astropy.units as u
from astropy.table import Table
from gammapy.irf import EffectiveAreaTable2D
Explanation: CTA AEFF interpolation
There was a report of possible bugs with CTA DC-1 AEFF interpolation via email.
In this notebook we have a quick look.
As far as I can see, everything is as expected and as good as it can be
(given the somewhat noisy IRFs from CTA, and assuming we're not introducing smoothing in Gammapy for now).
End of explanation
filename = '1dc/1dc/caldb/data/cta/1dc/bcf/North_z20_50h/irf_file.fits'
aeff = EffectiveAreaTable2D.read(filename)
# Just to compare, what the raw IRF BINTABLE HDU contains
table = Table.read(filename, hdu='EFFECTIVE AREA')
table
print(aeff)
energy_axis = aeff.data.axes[0]
print(energy_axis)
energy_axis.nodes.value
offset_axis = aeff.data.axes[1]
print(offset_axis)
print(offset_axis.nodes.value)
Explanation: Bins and nodes
Everything looks as expected to me.
Note that CTA IRFs currently are produced from diffuse photons and IRFs computed in offset bins of 0 to 1 deg, 1 to 2 deg and so on. In Gammapy, we choose to put the node for the interpolation at the bin center in offset, i.e. the first node is at 0.5 deg, the second at 1.5 deg and so on.
End of explanation
aeff.peek()
for offset in np.arange(0, 2, 0.5):
val = aeff.data.evaluate(offset=offset*u.deg)
plt.plot(energy_axis.nodes.value, val, label=offset)
plt.legend()
Explanation: Peek
Let's have a quick look at AEFF, and especially at the extrapolation to offset = 0 deg.
Everything looks as expected: the AEFF values at offset = 0 deg are the linear extrapolation of the values at the two nearest nodes from offset = 0.5 deg and 1.5 deg.
End of explanation
# If no energy and offset is passed, the interpolator is called at the nodes by default
val = aeff.data.evaluate()
val2 = aeff.data.data
diff = val - val2
print(diff.max())
Explanation: Evaluate at nodes
We're using bilinear interpolation in energy and offset, so evaluating at nodes should give the exact same values as the data array.
End of explanation
energy = np.logspace(-2, 2, 300) * u.TeV
val = aeff.data.evaluate(energy=energy, offset=0.5*u.deg)
plt.plot(energy, val, 'o')
plt.plot(energy_axis.nodes.value, aeff.data.data[:, 0], 'o')
plt.xlim(0.005, 0.03)
plt.ylim(-10000, 10000)
plt.
Explanation: Extrapolation
When extrapolating to energies below the lowest node at 0.014 TeV, negative effective area values do occur.
Running analyses that include energies so low will probably cause issues. So for now to get correct results: just put a minumum energy > 0.015 TeV.
We can discuss if we want to clip AEFF to values >= 0, or pad the data array with rows of zeros so that the interpolator does the right thing directly.
End of explanation |
14,208 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Fluorinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Fluorinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'fgoals-f3-l', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: CAS
Source ID: FGOALS-F3-L
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:44
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
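For illustration only, a completed author cell might look like the line below; the name and email are hypothetical placeholders, not actual document authors.
# Hypothetical example -- replace with the real author details
DOC.set_author("Jane Doe", "jane.doe@example.org")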
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
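For illustration only, after running the property cell above the free-text overview could be recorded as shown below; the descriptive sentence is a hypothetical placeholder, not the official FGOALS-f3-L documentation.
# Hypothetical example -- placeholder text, not official model metadata
DOC.set_value("Global hydrostatic atmosphere component with a finite-volume "
              "dynamical core and standard physics parameterisations.")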
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
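For illustration only, one of the valid choices listed in the cell above could be recorded as shown below; whether several approximations are added through repeated set_value calls (the property has cardinality 1.N) is an assumption to check against the notebook help page.
# Hypothetical example using a valid choice from the list above
DOC.set_value("primitive equations")
# DOC.set_value("hydrostatic")  # further approximations, if applicable (assumed to append)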
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
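For illustration only, this boolean would be filled with one of the two listed choices; True below is an arbitrary placeholder, not a statement about FGOALS-f3-L.
# Hypothetical example -- placeholder choice, not actual model metadata
DOC.set_value(True)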
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
14,209 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">Authorship Attribution using Machine Learning
</h1>
<img src="images/title.png" height="233" width="500">
Overview and Motivation
The objective of this project is to investigate application of Machine Learning algorithms for authorship attribution. Authorship attribution refers to the process of identifying the writer of a piece of text.
It has a wide range of applications in many diverse domains such as
Step1: 2.3 Machine Readable Catalog Reader
Gutenberg website also maintains catalogs in machine readable formats that can be used to create a database of the books available via the project. The catalogs are available at the following URL http
Step2: The marcs.csv file generated by reading catalog in MARCS format allows one to get to any book by any author using a web browser. Theoretically this database can be fed into a python script using urllib and urllib2 modules to download this data on the local filesystem. However when I tried to download the files using this urllib script this was refused by the Gutenberg website as well. It makes sense as it is a portal being maintained by donations on a limited resource infrastructure so bots accessing it frequently will put unwarranted load on Gutenberg server/servers.
2.4 Project Gutenberg Custom ISO Creator
Project Gutenberg Custom ISO Creator is a beta project at the time of this project and can be accessed at
http
Step3: 2.7 Removing duplicate author names
One of the problems is that in the Gutenberg catalog the same author is often listed under different spellings of the name. This is addressed by measuring the Levenshtein distance between the names: if the distance is 1 there is a strong likelihood that the books are by the same author.
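As a quick, purely illustrative sketch of the idea (the spellings below are hypothetical; the same distance function imported later in the code is assumed):
from Levenshtein._levenshtein import distance
# a single-character difference gives a Levenshtein distance of 1
print distance("Charles Dickens", "Charles Dickins")   # -> 1
print distance("Charles Dickens", "Jane Austen")       # -> much larger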
Step4: The next step was to find the correct spellings via wikipedia and other sources and add another field called preferred.
The name in the preferred field is the correct name. This step involves intensive manual labor as one needs to make sure that authors with similar names are not lumped together. I deleted the records with "first" and "second" containing two different authors with the same name. The end result is as shown below.
Step5: The next step in cleaning is to put back the correct name in the original dataframe.
Step8: 2.8 Corpus Creation
Now that we have all the files copied over to S3 and a CSV file containing records of all these files we can create our corpus. In this corpus each author should have just one document containing all his/her works. In order to create corpus we therefore need to concatenate books from each author together. I used python gutenberg library to strip Gutenberg header and footer from each book before adding them to the document.
Corpus Creation requires a number of python libraries including
Step9: Now we are at a stage where we can derive corpus of interest from the Gutenberg corpus. This corpus of interest shall contain works of the authors we are interested in for our authorship attribution model building.
Step11: 2.9 Exploratory Data Analysis
Once we have the corpus in S3 with each document named as author name and with contents containing all the works available in the corpus for that particular author, we can conduct some basic exploratory data analysis using this corpus.
2.9.1 Find the authors with highest document size to corpus size ratios in percentage
Step13: This means that Shakespeare and Dickens constitute almost 42% of our corpus. Please note that this percentage is relative to our corpus, not to Gutenberg as a whole.
2.9.2 Find vocabulary richness ratio (VRR)
Next we would like to find out the richness of vocabulary for each author. We should remove all the stop words and find the total number of words in each author's works. The richness is not computed in terms of absolute vocabulary size but instead as the ratio of unique words to total words (excluding stop words), expressed as a percentage. VRR is also known as lexical richness.
First of all we need to install stopwords corpus of nltk on AWS EMR Master node.
In order to accomplish that we need to ssh into the Master node and run the following command.
sudo /home/hadoop/anaconda2/bin/python -m nltk.downloader -d /usr/share/nltk_data stopwords
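A minimal sketch of how VRR could be computed for a single author document (assuming text holds that author's concatenated works; the variable names are illustrative):
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
tokens = [w for w in text.lower().split() if w.isalpha()]
content_words = [w for w in tokens if w not in stop_words]
vrr = 100.0 * len(set(content_words)) / len(content_words)   # unique/total words as a percentage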
Step15: This graph shows something unexpected. The bars that have a lot more red and only a small proportion of blue (such as Jane Austen) mean that the corpus contains a lot of text from that particular author but there is a lot of repetition in the usage of words. This could either mean that there is a lot of noisy data in the document or genuine repetition of words.
2.9.3 Word Frequency Distribution
Step16: The distribution almost resembles Zipf's distribution (as expected).
3. Model Planning
There are many diverse approaches employed by different researchers in authorship attribution; however, in this project I decided to use only the Bag of Words approach, based on the assumption that every author has certain favourite words that he/she repeats in his/her written work.
I also decided to give preference to the Scikit-Learn toolkit for the machine learning algorithms, as it offers a wider choice of learning algorithms than MLlib.
3.1 Utility Functions for Classification
These functions are adapted from functions used in lab 6 of CS109. Copyright remains with the Harvard Extension School staff.
Step19: 3.2 Vocabulary Creation
The first step is to build our vocabulary by using all unique words from the corpus
Step20: Now we need to combine all these individual vocabularies into corpus vocabulary
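A minimal sketch of the combination step (assuming author_vocabularies is a list of per-author word sets; the name is illustrative):
corpus_vocabulary = sorted(set().union(*author_vocabularies))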
Step21: It is important to mention that these words may contain proper nouns that we could remove by POS tagging, but in authorship attribution our assumption is that some authors have favourite character names, so they are kept.
Step22: This vocabulary will be passed to CountVectorizer
3.3 Feature Selection
Careful feature selection is imperative for building good machine learning models. From the same dataset one may select different features for different problems. For sentiment analysis, adjectives are often selected as features and punctuation may not be critical, but an authorship attribution task may use syntactic features of a document. In this stage we can also employ dimensionality reduction to reduce the feature space.
In this project, I have used "Bag of Words" approach. As the name implies, bag of words approach represents text of a document as a set of terms and does not give any significance to context or order.
Step23: 3.3 Feature Extraction
Feature Extraction Process involves converting textual data to numerical feature vectors that can be used as input to machine learning algorithms. I have mainly used CountVectorizer for feature extraction.
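For illustration, a minimal sketch of the kind of transformation CountVectorizer performs (the documents and vocabulary here are hypothetical):
from sklearn.feature_extraction.text import CountVectorizer
docs = ["the whale hunted the sea", "the sea hunted back"]
vectorizer = CountVectorizer(vocabulary=["whale", "sea", "hunted"])
X = vectorizer.fit_transform(docs)
print X.toarray()   # each row is the term-count vector of one document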
Step24: 4. Model Building
In this phase I converted the feature set into training and testing portions and used the following approach to model building (a minimal sketch is given after the list).
1) Select a machine learning algorithm
2) Conduct KFold Cross Validation for finding optimal parameters
3) Fit the model on training data with optimal parameters
4) Test the model on testing data
5) Compute Accuracy
Repeat the steps for each model.
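A minimal sketch of steps 1-5 for a single classifier (X is assumed to be the document-term matrix and y the author labels; the parameter grid is illustrative):
from sklearn.cross_validation import train_test_split
from sklearn.grid_search import GridSearchCV
from sklearn.naive_bayes import MultinomialNB
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
grid = GridSearchCV(MultinomialNB(), param_grid={'alpha': [0.01, 0.1, 1.0]}, cv=5)
grid.fit(X_train, y_train)                     # KFold cross validation over alpha
print grid.best_params_
print grid.score(X_test, y_test)               # accuracy on held-out data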
4.1 Training Testing Split
Step25: 4.2 K Nearest Neighbors (KNN) Classifier
KNN is a non-parametric classification algorithm. It does not make any assumption about the distribution of the underlying data and is very fast. It takes a majority vote of the K nearest neighbors to decide which class a sample belongs to.
Step26: 4.3 Support Vector Machines
Support Vector Machines (SVM) is an extension of Support Vector Classifier. It classifies features by using separating hyperplanes.
Step27: 4.4 Naive Bayes
Naive Bayes is a probabilistic classification method based on Bayes' theorem. A naive Bayes classifier assumes that the absence or presence of a feature of a class is not related to the absence or presence of other features. This is also called conditional independence.
Mathematically, for a class $C$ and features $x_1, \dots, x_n$, this can be written as $P(C \mid x_1, \dots, x_n) \propto P(C)\prod_{i=1}^{n} P(x_i \mid C)$.
Step28: 4.5 Random Forest
Random Forest is an ensemble method that fits a number of decision tree classifiers on various subsamples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting.
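A minimal sketch (RandomForestClassifier from sklearn.ensemble is assumed; the parameter values are illustrative):
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
print rf.score(X_test, y_test)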
Step29: 4.6 Final Analysis of Multiclass classification
Looking at the accuracy results, Multinomial Naive Bayes and SVM have given the maximum prediction accuracy on the testing data. We shall use Multinomial Naive Bayes for the operational implementation.
Step30: 5. Results
The results of the project can be summarized as responses to the following questions.
1) Can "Bag of Words" as features give acceptable classification accuracy?
All classifiers gave good accuracy, making Bag of Words a suitable technique for authorship attribution.
2) Which algorithms perform best with Bag of Words?
SVM and Naive Bayes have performed best for Bag of Words based authorship attribution.
3) How can Big Data processing engines such as Apache Spark aid in preparing data for Machine Learning algorithms?
Spark was used throughout this project and managed to clean 12 GB of raw Gutenberg data in a matter of minutes.
4) How can one clean Gutenberg data to be used for different NLP related research projects?
Cleaning Gutenberg data took a long time. NLTK provides a subset of Gutenberg corpus but it would be interesting to scale up authorship attribution for the full set of authors using complete Gutenberg data.
5) Can AWS EMR be used as a viable cloud based Big Data platform?
I have found AWS EMR to be very convenient. EMR clusters with m3.xlarge with 3 nodes are more than enough for most operations but the cleaning stage required 5 EC2 nodes of m3.xlarge.
6. Operational Implementation
In this phase the model should be saved and used via command line.
6.1 Saving the model
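A minimal sketch of what saving could look like (best_model and vectorizer are assumed to be the fitted classifier and CountVectorizer; cPickle is used purely for illustration):
import cPickle as pickle
with open('author_attribution_model.pkl', 'wb') as f:
    pickle.dump(best_model, f)
# the vectorizer must be saved too so new text is transformed identically at prediction time
with open('author_attribution_vectorizer.pkl', 'wb') as f:
    pickle.dump(vectorizer, f)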
Step31: 6.2 Script to Load Text File and Give Result | Python Code:
%matplotlib inline
import sys
import re
import os
import csv
import codecs
import string
import json
import boto
import pattern
import pandas as pd
import seaborn as sns
import numpy as np
import scipy as sp
import nltk as nl
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from bs4 import BeautifulSoup
from pymarc import MARCReader
from gutenberg.cleanup import strip_headers
from nltk.tokenize import word_tokenize
from boto.s3.key import Key
from pyspark.sql import SQLContext
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from Levenshtein._levenshtein import distance
from sklearn.grid_search import GridSearchCV
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.cross_validation import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import classification_report
from sklearn import svm
#Configure Pandas and Seaborne
sns.set_style("whitegrid")
sns.set_context("poster")
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
mpl.style.use('ggplot')
#pd.options.display.mpl_style = 'default'
Explanation: <h1 align="center">Authorship Attribution using Machine Learning
</h1>
<img src="images/title.png" height="233" width="500">
Overview and Motivation
The objective of this project is to investigate application of Machine Learning algorithms for authorship attribution. Authorship attribution refers to the process of identifying the writer of a piece of text.
It has a wide range of applications in many diverse domains such as :
- Academia
- History of Literature
- Homeland Security
- Criminal Investigation
The basic idea behind Authorship Attribution is to analyse text document such as a letter, book, transcript of telephonic conversation, email or social media post under pseudonym to find the author.
Some typical applications are plagiarism detection, finding the author of an anonymously published literary work, identifying an individual terrorist or a terrorist organisation from their written letters or threat emails and shortlisting potential criminals.
The main goal of the project is to develop a model to detect the author/writer of a piece of text from a list of authors/writers given a piece of text.
I have divided the project into following phases:
1) Discovery
2) Data Preparation
3) Model Planning
4) Model Building
5) Result
6) Operational Implementation
1. Discovery
In this phase I consulted a number of research papers, online resources and some books to learn how other data scientists have addressed authorship attribution problem. The references are given at the end of this notebook.
1.1 Authorship Attribution Overview
Authorship attribution employs stylometry for finding the author of a piece of text. Stylometry can be defined as the study of linguistic style of written text. It involves identifying unique style markers in a given piece of text.The style markers are those features that one can find repeating in different texts writtern by the same writer.This repeating style markers are called 'writer invariant'.
Stylometry analysis uses lexical, synctactic, structural and content specific style markers that distinguish an author from other authors. Lexical descriptors provide statistics such as total number of words/characters, average number of words per sentence and distribution of word length etc.Synctactic features focus on structure of the sentences such as usage of punctuation marks whereas the structural markers look into the organisation of text into paragraphs, headings etc[1].
Some approaches take into account n-grams (combination of words) as features while others only use set of terms called 'bag of words' approach.
1.2 Related Work
Different researchers have tried different machine learning algorithms for authorship attribution. Some of the main algorithms that I found in the papers that I consulted include K-nearest neighbors, Bayesian, Support Vector Machines(SVM), Feed Forward Multilayer Perceptrons (MLP) and ensembles using combination of these algorithms.
In 2007 Bozkurt, Bağlıoğlu and Uyar[2] found that 'Bag of Words' approach with SVM gave very high accuracy. In 2007 Stańczyk and Cyran[1] used ANN and found that highest classification ratio is granted by the exploitation of syntactic textual features.
In 2014 Pratanwanich and Lio[3] have used Supervised Author Topic (SAT) model that is based on probabilistic generative model and has exhibited same performance as Random Forests.
1.3. Initial Questions & Context
The main questions for this project were:
1) Can "Bag of Words" as features give acceptable classification accuracy?
2) Which algorithms perform best with Bag of Words?
3) How can Big Data processing engines such as Apache Spark aid in preparing data for Machine Learning algorithms?
4) How can one clean Gutenberg data to be used for different NLP related research projects?
5) Can AWS EMR be used as a viable cloud based Big Data platform?
2. Data Preparation
This phase was more data engineering than data preparation as it included writing configuration & bootstrap scripts for AWS EMR, building conda packages for the python libraries that were not available in default conda repositories such as Gutenberg library.
The main source of data for this project was Gutenberg site with public domain eBooks in the text format.
2.1 Project Gutenberg
Project Gutenberg (http://www.gutenberg.org) at the time of this project offers more than 50,000 eBooks in different languages available for download in different file formats.This project however is focused on books written in English and in plain text form. The first challenge therefore was to identify and download all the books written in English in plain text form.
Project Gutenberg website clearly states on the main page that :
The Project Gutenberg website is for human users only. Any real or perceived use of automated tools to access our site will result in a block of your IP address
2.2 Robot Site Access
On more investigation I found some information on Gutenberg website on how to download data as via robot at the following URL.
http://www.gutenberg.org/wiki/Gutenberg%3aInformation_About_Robot_Access_to_our_Pages
The information given on this page allows wget based data access by using the following command.
wget -w 2 -m -H "http://www.gutenberg.org/robot/harvest?filetypes[]=html"
I created a bash shell script and used it to download the data on my local machine however on executing this script I got following error.
Resolving www.gutenberg.org (www.gutenberg.org)... 152.19.134.47
Connecting to www.gutenberg.org (www.gutenberg.org)|152.19.134.47|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
I searched online if others have come across the same problem and found a number of different sites listing that many other users have come across the same problem. I also tried some of the suggestions but none of them worked.
My next instinct was to mirror the project and then host it behind a local version of Apache Webserver and use BeautifulSoup4 or wget to retrieve the data. The Gutenberg website has provided a rsync based method to clone the whole site http://www.gutenberg.org/wiki/Gutenberg:Mirroring_How-To.
I used the following command to download the main collection.
rsync -av --del [email protected]::gutenberg gutenberglocal
where gutenberglocal is the directory created to hold all the site content. This worked fine but could only download 25GB of 650GB in one day due slow network connection which meant finding another way to download the data fortunately http://pgiso.pglaf.org/ provides a way to download the data.
The site requires range of eText numbers to be entered in a form before it can create an ISO image to be downloaded.
I therefore first wrote a script to read catalog in MARC format to find all books available in English language.
The following import section is common for the whole notebook.
End of explanation
#Language of interest
LANGUAGE = "eng"
#Constants for MARC File format
LANGUAGE_RECORD_FIELD = '008'
URI_RECORD_FIELD = '856'
LANGUAGE_CODE_START_INDEX = 41
LANGUAGE_CODE_END_INDEX = 44
#Function : clean_metadata
#Purpose : Function to clean metadata records
# Removes special characeters
def clean_metadata(raw):
if raw is not None:
pattern = '[^a-zA-Z0-9 ]'
prog = re.compile(pattern)
cleaned = prog.sub('', raw)
return cleaned
#Function : get_metadata
#Purpose : Function to retrieve metadata from MARC record
def get_metadata(record):
#Get language :MARC Code 008
language_record = str(record[LANGUAGE_RECORD_FIELD])
if language_record is not None:
if len(language_record) > LANGUAGE_CODE_END_INDEX:
language_code = language_record[LANGUAGE_CODE_START_INDEX:LANGUAGE_CODE_END_INDEX]
#Only proceed if language is language of interest
if(language_code == LANGUAGE):
#Find URI to access file: MARC Code 856
url = str(record[URI_RECORD_FIELD]['u'])
title = record.title()
if title is None:
title = "Unknown"
author = record.author()
if author is None:
author = "Unknown"
title = clean_metadata(title.encode('utf-8'))
author = clean_metadata(author.encode('utf-8'))
return (title,author,url)
#Function : get_etext_number
#Purpose : Given metadata, retrieves the eText number of the book
def get_etext_number(metadata):
if metadata is not None:
url = metadata[2]
filename = url[url.rindex('/')+1:]
return filename
etexts = []
#Remove previous marcs.csv if it exists
if os.path.exists('marcs.csv'):
os.remove('marcs.csv')
with open('marcs.csv', 'wb') as csvfile:
filewriter = csv.writer(csvfile)
with open('data/catalog.marc','r') as fh:
reader = MARCReader(fh)
for record in reader:
#Get metadata for the book
metadata = get_metadata(record)
if metadata is not None:
filewriter.writerow([metadata[1],metadata[0],metadata[2]])
etext = get_etext_number(metadata)
if etext is not None:
etexts.append(int(etext))
print "Minimum eText is:"+ str(min(etexts))
print "Mazimum eText is:"+ str(max(etexts))
Explanation: 2.3 Machine Readable Catalog Reader
Gutenberg website also maintains catalogs in machine readable formats that can be used to create a database of the books available via the project. The catalogs are available at the following URL http://gutenberg.pglaf.org/cache/generated/feeds/
in Resource Description Format(RDF) and MARC 21 formats. MARC 21 format is a machine readable format for communicating bibliographic and related information. I used the catalog.marc.bz2 file downloaded from aforementioned website and pymarc (Library to read MARC 21 files) to create a script that takes catalog.marc file as input to create marcs.csv file and also prints out the eText range for Gutenberg files. In order to write this script one needs to understand the fields in MARC 21. I studied the format given at http://www.loc.gov/marc/.
The source code is as given under:
End of explanation
#Utility script to convert Gutenberg data index file to CSV
#Gutenberg data when downloaded from pgiso.pglaf.org comes with
#index file containing metadata about downloaded eBooks
#This utility script converts that metadata into CSV format
#Function : get_book_name
#Purpose : Retrieves book name from the raw text
def get_book_name(raw):
if raw is not None:
pattern = '[^a-zA-Z0-9_ ]'
prog = re.compile(pattern)
raw = prog.sub('', raw)
return raw
else:
return "Unknown"
#Function : get_author_name
#Purpose : Retrieves author's first and last name from the raw text
def get_author_name(raw):
if raw is not None:
raw = raw.replace(';',',')
pattern = '[^a-zA-Z, ]'
prog = re.compile(pattern)
raw = prog.sub('',raw)
raw = raw.strip()
names = raw.strip(',')
names = names.split(',')
if len(names)>1:
return names[1]+ " " + names[0]
elif len(names)==1:
return names[0]
else:
return "Unknown"
#Function : get_modified_url
#Purpose : If user provides custom base url add that to
# file name
def get_modified_url(original,custom_base):
url_parts = original.split('/')
return custom_base + '/' + url_parts[2] + '/' + url_parts[3]
#Function : get_book_records
#Purpose : Function to retrieve book record
def get_book_records(file,base_url=None):
book_records = []
url = ""
author_name = ""
book_name = ""
try:
fh_index_file = codecs.open(file,'r','utf-8')
index_data = fh_index_file.read()
except IOError as e:
print "I/O Error".format(e.errno, e.strerror)
sys.exit(2)
soup = BeautifulSoup(index_data,'html.parser')
for link in soup.find_all('a',href=True):
#skip useless links
if link['href'] == '' or link['href'].startswith('#'):
continue
url = link.string
if base_url is not None:
url = get_modified_url(url,base_url)
etext = link.find_previous_sibling('font',text='EText-No.').next_sibling
book_name = get_book_name(link.find_previous_sibling('font',text='Title:').next_sibling)
author_name=get_author_name(link.find_previous_sibling('font',text='Author:').next_sibling)
book_records.append({'etext':etext,'author':author_name,'book':book_name,'url':url})
return book_records
#Function : write_csv_file
#Purpose : Writes book records to csv file
def write_csv_file(book_records):
if os.path.exists('data/pgindex.csv'):
os.remove('data/pgindex.csv')
with open('data/pgindex.csv', 'w') as csvfile:
fieldnames = ['etext', 'author','book','url']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
for record in book_records:
writer.writerow(record)
book_records_ = get_book_records('data/index.htm')
write_csv_file(book_records_)
Explanation: The marcs.csv file generated by reading catalog in MARCS format allows one to get to any book by any author using a web browser. Theoretically this database can be fed into a python script using urllib and urllib2 modules to download this data on the local filesystem. However when I tried to download the files using this urllib script this was refused by the Gutenberg website as well. It makes sense as it is a portal being maintained by donations on a limited resource infrastructure so bots accessing it frequently will put unwarranted load on Gutenberg server/servers.
2.4 Project Gutenberg Custom ISO Creator
Project Gutenberg Custom ISO Creator is a beta project at the time of this project and can be accessed at
http://pgiso.pglaf.org/.
It basically takes a user query and generates an ISO file to be burnt on DVD with files meeting that query. We can use the eText range obtained by MARC catalog reader script given in the previous step to form my query for downloading the English language books in text format. This is likely to include all English books written until 2014 as that's when the MARCS 21 file for gutenberg was last modified.
<img src="images/pglaf_website.jpg" height="233" width="500">
When Add These ETexts button is pressed followed by Create ISO button the next screen prompts to give user email address.We used a Single-page index file option. Once the ISO is prepared an email is dispatched by the system to the email address given by the user. This email contains the link to the ISO created by the system.
2.5 ISO to AWS S3
Next I launched an EMR cluster on AWS and used wget to download the data from link received in email from pgiso system of Project Gutenberg. I also created S3 bucket to store all the raw data and data generated from modelling and analysis steps. In order to copy raw data I configured awscli on local machine.
The script to create AWS EMR and the associated configuration file (script/spark-conf.json) is available in the script subdirectory of the project. The install-anaconda script needs to be copied to the S3 bucket and the name of the S3 bucket should be added in the create-emr-cluster script available in the scripts directory.
Once the cluster is launched one can find the public DNS name of the master node and ssh into the master to download the data.
On EC2 shell of the master I used:
wget -c -O Englishbooks.iso "link received in pgiso email"
Now one can mount this image and copy the data to S3 bucket using the following commands.
sudo mkdir -p /mnt/disk
mount -o loop EnglishBooks.iso /mnt/disk
cd /mnt/disk/cache
aws s3 cp generated s3://cs109-gutenberg/raw --recursive
These aforementioned steps copied all the data to s3 bucket named cs109-gutenberg.The ISO image created by Gutenberg also contains an index.htm file with list of all files. If opened in browser the structure of the index
<img src="images/pglaf_index_file.jpg" >
This is semistructured html data that I needed to convert into a structured CSV format. I wrote the following python utility script to convert index.htm to csv file.
2.6 Index.htm to CSV
Please note that complete script is given in scripts/pgindextocsv.py and accepts the base url as input argument. Base URL should be the DNS based name of S3 bucket. The code below is extracted from pgindextocsv to demonstrate the creation of CSV file containing information about books and authors.
Since we copied the files in raw folder on S3 we should use the pgindextocsv as follows on shell:
./pgindextocsv.py ../data/index.htm raw
This will create the correct URLs in pgindex.csv for files in the S3 bucket.
End of explanation
#Read pgindex file as dataframe
df = pd.read_csv('data/pgindex.csv')
#Many authors are the same but are spelt differently
#It is therefore important to find authors with similar names
#Levenshtein distance can be used to measure the difference between two strings
authors_distance = []
#Reset the index after sorting so that the positional lookups below really compare
#alphabetically adjacent names
df2 = df.sort_values(['author'])
author_list = df2['author'].reset_index(drop=True)
index_range = range(len(author_list))
for i in index_range:
if (i+1 < len(author_list)):
distance_ = distance(str(author_list[i]),str(author_list[i+1]))
if (distance_ == 1) or (distance_ == 2):
authors_distance.append(dict(first=str(author_list[i]),second = str(author_list[i+1])))
similar_names = pd.DataFrame(authors_distance)
similar_names.to_csv('data/similar_names.csv')
similar_names.head()
print len(similar_names.index)
Explanation: 2.7 Removing duplicate author names
One of the problems is that in the Gutenberg catalog the same author appears under different spellings of the name. This is handled by measuring the Levenshtein distance between the names: if the distance is 1 or 2, there is a strong likelihood that the books are by the same author.
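For intuition, the Levenshtein distance counts the minimum number of single-character insertions, deletions or substitutions needed to turn one string into the other. A couple of toy calls (assuming distance here comes from the python-Levenshtein package, as in the cell above) look like this:
# Illustrative only: small edit distances usually indicate the same author
# entered with a typo or a slightly different spelling.
from Levenshtein import distance
print distance('Dickens, Charles', 'Dickens, Charls')   # 1: one deleted letter
print distance('Twain, Mark', 'Twain, Marc')            # 1: one substituted letter
print distance('Austen, Jane', 'Bronte, Anne')          # much larger: clearly different names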
End of explanation
df_similar_authors = pd.read_csv('data/corrected_author_names.csv')
df_similar_authors.head()
Explanation: The next step was to find the correct spellings via wikipedia and other sources and add another field called preferred.
The name in the preferred field is the correct name. This step involves intensive manual labor as one needs to make sure that authors with similar names are not lumped together. I deleted the records with "first" and "second" containing two different authors with the same name. The end result is as shown below.
End of explanation
# Read the original file
df = pd.read_csv('data/pgindex.csv')
df_out = df.copy()
index_range = range(len(df_similar_authors.index))
first_entry = df_similar_authors['first'].values
second_entry = df_similar_authors['second'].values
preferred_entry = df_similar_authors['preferred'].values
#Replace first choice
for i in index_range:
df_out.loc[df_out.author == first_entry[i],'author'] = preferred_entry[i]
df_out.loc[df_out.author == second_entry[i],'author'] = preferred_entry[i]
#Remove duplicate rows
df_final = df_out.drop_duplicates(['etext'])
#Save as CSV
df_final.to_csv('data/corrected_pgindex.csv')
df_clean = pd.DataFrame.from_csv('data/corrected_pgindex.csv')
#Broadcast df_clean dataframe
sc.broadcast(df_clean)
#Find authors that are agencies and/or anonymous
df_unwanted_authors =df_clean[df_clean["author"].str.contains('agency',na=False)
| df_clean["author"].str.contains('presidents',na=False)
| df_clean["author"].str.contains('various',na=False)
| df_clean["author"].str.contains('anonymous',na=False)]
unwanted_authors = df_unwanted_authors['author'].values
unwanted_authors = list(set(unwanted_authors))
Explanation: The next step in cleaning is to put back the correct name in the original dataframe.
End of explanation
#Get SQL Context
sqlContext = SQLContext(sc)
#Convert Pandas data frame to Spark DataFrame and save in cache
sdf = sqlContext.createDataFrame(df_clean)
sdf.cache()
#Create Corpus
#IMPORTANT: This step requires raw data in a bucket on S3
#S3 Bucket name storing raw Gutenberg English books
S3_BUCKET_NAME = 'cs109-gutenberg'
#Connect with S3
s3 = boto.connect_s3()
bucket = s3.get_bucket(S3_BUCKET_NAME)
sc.broadcast(bucket)
def get_author_book_pair(key):
"""Convert Gutenberg books in txt format to author book pair.
:param key: An s3 key path string.
:return: A tuple (author, book_contents) where book_contents is the contents of the
book in a string with gutenberg header and footer removed.
"""
s_key = str(key).strip()
contents = key.get_contents_as_string()
if contents is not None:
contents = unicode(contents, 'utf-8')
book = strip_headers(contents).strip()
#Remove special characters and digits
pattern = '[^\w+.\s+,:;?\'-]'
prog = re.compile(pattern,re.UNICODE)
book = prog.sub('', book)
book = re.sub(" \d+ |\d+.",'', book)
#Find the author for this book
#This portion requires refactoring(nan authors)
start_index = s_key.find(',')
if start_index != -1:
s_key = s_key[s_key.find(',')+1:len(s_key)-1].lower()
result = df_clean[df_clean['url'].str.strip() == s_key]
if(len(result) == 0):
s_key = s_key+".utf8"
result = df_clean[df_clean['url'].str.strip() == s_key]
if(len(result) == 0):
author = "Unknown"
else:
author = str(result['author'].iloc[0])
author = author.replace(' ','_')
else:
author = "Unknown"
book = "Unknown"
return (author,book)
def save_document_to_s3_corpus(author_books):
"""Save the result of reduceByKey in S3.
:param author_books: A tuple containing the author name and all his/her books' content as text.
"""
key = bucket.get_key('corpus/'+author_books[0]+'.txt')
if key is None:
#Create the key
k = Key(bucket)
k.key = 'corpus/'+author_books[0]+'.txt'
k.set_contents_from_string(author_books[1])
else:
previous_contents = key.get_contents_as_string()
previous_contents = unicode(previous_contents, 'utf-8')
updated_document = previous_contents + author_books[1]
key.set_contents_from_string(updated_document)
#Get All Keys
keys = bucket.list(prefix = 'raw/')
#Save the documents to S3
rdd = sc.parallelize(keys).map(get_author_book_pair).reduceByKey(lambda x,y: x+y,200).foreach(save_document_to_s3_corpus)
#Clean the S3 corpus folders that contain works of unidentified authors
# or multiple authors and agencies
#S3 Bucket name storing raw Gutenberg English books
S3_BUCKET_NAME = 'cs109-gutenberg'
#Connect with S3
s3 = boto.connect_s3()
bucket = s3.get_bucket(S3_BUCKET_NAME)
for key_suffix in unwanted_authors:
key_ = Key(bucket)
key_suffix = key_suffix.replace(' ','_')
key_.key = 'corpus/'+key_suffix+'.txt'
bucket.delete_key(key_)
Explanation: 2.8 Corpus Creation
Now that we have all the files copied over to S3 and a CSV file containing records of all these files, we can create our corpus. In this corpus each author should have just one document containing all of his/her works, so we need to concatenate the books from each author together. I used the python gutenberg library to strip the Gutenberg header and footer from each book before adding it to the document.
Corpus Creation requires a number of python libraries including:
1. pandas
2. gutenberg
3. boto
We need to launch an Amazon EMR cluster with a custom bootstrap action to install and configure
the Anaconda distribution of Python and the other required libraries:
gutenberg
boto
The next step is to connect to the master node and launch Spark jobs to concatenate the books by each author.
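As a quick reminder of what the cleanup step does to a single raw e-text, here is a minimal illustration on a toy string (not an actual Gutenberg file); the marker lines are an assumption modelled on real Gutenberg boilerplate.
# Minimal illustration of the gutenberg cleanup call used in the Spark job above.
from gutenberg.cleanup import strip_headers

raw = ("*** START OF THIS PROJECT GUTENBERG EBOOK EXAMPLE ***\n"
       "Actual text of the book goes here.\n"
       "*** END OF THIS PROJECT GUTENBERG EBOOK EXAMPLE ***\n")
# strip_headers removes the boilerplate marker lines, keeping the body text
print strip_headers(raw).strip()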
End of explanation
#Retrieve S3 Keys for selected authors
S3_BUCKET_NAME = 'cs109-gutenberg'
#Connect with S3
s3 = boto.connect_s3()
bucket = s3.get_bucket(S3_BUCKET_NAME)
famous_authors = ['charles_dickens','william_shakespeare','jane_austen','james_joyce','mark_twain','oscar_wilde','edgar_allan_poe',
'francis_bacon_st_albans','christopher_marlowe','joseph_conrad','agatha_christie','dh_lawrence']
corpus_keys=[]
url_prefix = 'corpus/'
document_extension = '.txt'
for author in famous_authors:
key = bucket.get_key(url_prefix+author+document_extension)
if key is not None:
corpus_keys.append(key)
Explanation: Now we are at a stage where we can derive a corpus of interest from the Gutenberg corpus. This corpus of interest will contain the works of the authors we are interested in for building our authorship attribution model.
End of explanation
# Find authors contribution to corpus size as percentage
#S3 Bucket name storing raw Gutenberg English books
S3_BUCKET_NAME = 'cs109-gutenberg'
#Connect with S3
s3 = boto.connect_s3()
bucket = s3.get_bucket(S3_BUCKET_NAME)
sc.broadcast(bucket)
#Get All Keys (only to be used if the analysis is on the whole gutenberg corpus)
#keys = bucket.list(prefix = 'corpus/')
#Get total size
corpus_size = 0
for key in corpus_keys:
corpus_size = corpus_size + key.size
def get_author_document_size(key):
"""Compute document size for each document.
:param key: An s3 key path string.
:return: A tuple (author, document_size)
"""
s_key = str(key)
s_key = s_key[s_key.find('/')+1:-5]
percentage = (float(key.size)/corpus_size)
return (s_key,percentage)
author_contribution = sc.parallelize(corpus_keys).map(get_author_document_size).collect()
df_author_contribution = pd.DataFrame(author_contribution,columns=['author','contribution'])
df_author_contribution = df_author_contribution.sort_values('contribution',ascending=False)
print df_author_contribution
#Save as CSV
df_author_contribution.to_csv('data/author_contribution.csv')
Explanation: 2.9 Exploratory Data Analysis
Once we have the corpus in S3, with each document named after an author and containing all the works available in the corpus for that particular author, we can conduct some basic exploratory data analysis.
2.9.1 Find the authors with the highest document-size to corpus-size ratios (as percentages)
End of explanation
#Broadcast stopwords and lemmatizer to all nodes (Approximately 127 stop words)
stopwords_ = stopwords.words('english')
sc.broadcast(stopwords_)
#Broadcast WordNetLemmatizer()
wordnet_lemmatizer_ = WordNetLemmatizer()
sc.broadcast(wordnet_lemmatizer_)
# Find Vocabulary Richness Ratio(VRR)
# These words will not include stop words
def get_author_vrr_pair(key):
"""Find the ratio of unique words to total words used by each author.
:param key: An s3 key path string.
:return: A tuple (author, vrr)
"""
s_key = str(key)
s_key = s_key[s_key.find('/')+1:-5]
contents = key.get_contents_as_string()
vocab_richness_ratio = 0.0
if contents is not None:
contents = unicode(contents, 'utf-8')
prog = re.compile('[\t\n\r\f\v\d\']',re.UNICODE)
contents = re.sub(prog,' ',contents).lower()
#Remove punctuations
prog=re.compile('[!\"#$%&\'()*+\,-./:;<=>?@[\]^_`{|}~]',re.UNICODE)
contents = re.sub(prog,' ',contents)
words = word_tokenize(contents)
#Remove stop words lemmatize and remove punctuations
#Also remove noisy single alphabets
vocab = []
for word in words:
word=word.strip()
if len(word)>1:
if word not in stopwords_:
vocab.append(wordnet_lemmatizer_.lemmatize(word))
#Size of vocabulary
vocab_size = len(vocab)
unique_vocab = list(set(vocab))
unique_vocab_size = len(unique_vocab)
vocab_richness_ratio = float(unique_vocab_size)/vocab_size
return (s_key,vocab_richness_ratio)
author_vrr = sc.parallelize(corpus_keys).map(get_author_vrr_pair).collect()
df_author_vrr = pd.DataFrame(author_vrr,columns=['author','vrr'])
df_author_vrr = df_author_vrr.sort_values('vrr',ascending=False)
#Save as CSV
df_author_vrr.to_csv('data/author_vrr.csv')
#Create a joint dataframe
df_vrr_contribution = pd.merge(df_author_vrr, df_author_contribution, on=['author'])
print df_vrr_contribution
plt.figure()
#Plot VRR and Contribution
ax = df_vrr_contribution.plot(kind='barh',stacked=True,figsize=(12,8),colormap='Paired')
c=ax.set_yticklabels(df_vrr_contribution.author)
Explanation: This means that Shakespeare and Dickens constitute almost 42% of our corpus. Please note that this percentage is of our corpus, not of Project Gutenberg as a whole.
2.9.2 Find vocabulary richness ratio (VRR)
Next we would like to find out the richness of vocabulary for each author. We remove all the stop words and count the total number of words in each author's works. Richness is not measured as an absolute vocabulary size but as the ratio of unique words to total words (excluding stop words). VRR is also known as lexical richness.
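A toy example of the ratio being computed, on a made-up token list with stop words already removed:
# Vocabulary richness ratio on a tiny, hypothetical token list
tokens = ['whale', 'sea', 'ship', 'whale', 'captain', 'sea', 'whale']
vrr = float(len(set(tokens))) / len(tokens)
print vrr   # 4 unique words / 7 total words ~= 0.57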
First of all we need to install stopwords corpus of nltk on AWS EMR Master node.
In order to accomplish that we need to ssh into the Master node and run the following command.
sudo /home/hadoop/anaconda2/bin/python -m nltk.downloader -d /usr/share/nltk_data stopwords
End of explanation
#Remove punctuations and compute word frequency
#Compute Vocabulary
def get_author_vocabulary(key):
"""Find vocabulary of each author.
:param key: An s3 key path string.
:return: A tuple (author, vocabulary list)
"""
s_key = str(key)
s_key = s_key[s_key.find('/')+1:-5]
contents = key.get_contents_as_string()
vocab_richness_ratio = 0.0
if contents is not None:
contents = unicode(contents, 'utf-8')
prog = re.compile('[\t\n\r\f\v\d\']',re.UNICODE)
contents = re.sub(prog,' ',contents).lower()
#Remove punctuations
prog=re.compile('[!\"#$%&\'()*+\,-./:;<=>?@[\]^_`{|}~]',re.UNICODE)
contents = re.sub(prog,' ',contents)
words = word_tokenize(contents)
vocab = []
#Remove noisy single alphabets
for word in words:
word=word.strip()
if len(word)>1:
if word not in stopwords_:
vocab.append(wordnet_lemmatizer_.lemmatize(word))
return (s_key,vocab)
author_vocabulary = sc.parallelize(corpus_keys).map(get_author_vocabulary).collect()
#Compute and Plot frequency distribution
n_tuples = len(author_vocabulary)
authors_index = range(n_tuples)
all_words = []
for i in authors_index:
all_words = all_words + author_vocabulary[i][1]
f, axarr = plt.subplots(1, figsize=(12, 5))
fdist = nl.FreqDist(all_words)
fdist.plot(50,cumulative=False)
Explanation: This graph shows something unexpected. The bars that are mostly red with only a small proportion of blue (such as Jane Austen) mean that the corpus contains a lot of text from that particular author but with a lot of repetition in word usage. This could mean either that there is a lot of noisy data in the document or that the author genuinely repeats words.
2.9.3 Word Frequency Distribution
End of explanation
#Generic utility function to perforn K-Fold cross validation
# to find optimal parameters for a given classifier
def cv_optimize(clf, parameters, X, y, n_jobs=1, n_folds=5, score_func=None):
if score_func:
gs = GridSearchCV(clf, param_grid=parameters, cv=n_folds, n_jobs=n_jobs, scoring=score_func)
else:
gs = GridSearchCV(clf, param_grid=parameters, n_jobs=n_jobs, cv=n_folds)
gs.fit(X, y)
print "Best Parameters:", gs.best_params_, gs.best_score_, gs.grid_scores_
best = gs.best_estimator_
return best
#Generic utility function to classify given classifier, its parameters and features
def do_classify(clf, parameters, x_train, y_train,x_test,y_test, score_func=None, n_folds=5, n_jobs=1):
if parameters:
clf = cv_optimize(clf, parameters, x_train, y_train, n_jobs=n_jobs, n_folds=n_folds, score_func=score_func)
clf=clf.fit(x_train, y_train)
training_accuracy = clf.score(x_train, y_train)
test_accuracy = clf.score(x_test, y_test)
print "Accuracy on training data: %0.2f" % (training_accuracy)
print "Accuracy on test data: %0.2f" % (test_accuracy)
#print confusion_matrix(y_test, clf.predict(x_test))#
return clf,x_train,x_test,y_train,y_test
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
Explanation: The distribution closely resembles Zipf's distribution (as expected).
3. Model Planning
There are many diverse approaches to authorship attribution in the literature; in this project I decided to use only the Bag of Words approach, based on the assumption that every author has certain favourite words that he/she repeats across his/her written work.
I also decided to give preference to scikit-learn for the machine learning algorithms, as it offers a wider choice of learning algorithms than MLlib.
3.1 Utility Functions for Classification
These functions are adapted from functions used in lab 6 of CS109. Copyright remains with the Harvard Extension School staff.
End of explanation
# Save Corpus and Compute Vocabulary
def get_author_vocabulary_pair(key):
"""Find vocabulary of each author.
:param key: An s3 key path string.
:return: A tuple (author, vocabulary list)
"""
s_key = str(key)
s_key = s_key[s_key.find('/')+1:-5]
contents = key.get_contents_as_string()
vocab_richness_ratio = 0.0
if contents is not None:
contents = unicode(contents, 'utf-8')
prog = re.compile('[\t\n\r\f\v\d\']',re.UNICODE)
contents = re.sub(prog,' ',contents).lower()
#Remove punctuations
prog=re.compile('[!\"#$%&\'()*+\,-./:;<=>?@[\]^_`{|}~]',re.UNICODE)
contents = re.sub(prog,' ',contents)
words = word_tokenize(contents)
#Remove stop words
vocab = []
for word in words:
word=word.strip()
if len(word)>1:
if word not in stopwords_:
vocab.append(wordnet_lemmatizer_.lemmatize(word))
unique_vocab = list(set(vocab))
return (s_key,unique_vocab)
#Compute vocabulary text document without removing
#unique words
def get_author_document_pair(key):
"""Return the cleaned document of each author.
:param key: An s3 key path string.
:return: A tuple (author, document) where document is the cleaned text.
"""
s_key = str(key)
s_key = s_key[s_key.find('/')+1:-5]
contents = key.get_contents_as_string()
if contents is not None:
contents = unicode(contents, 'utf-8')
prog = re.compile('[\t\n\r\f\v\d\']',re.UNICODE)
contents = re.sub(prog,' ',contents).lower()
#Remove punctuations
prog=re.compile('[!\"#$%&\'()*+\,-./:;<=>?@[\]^_`{|}~]',re.UNICODE)
contents = re.sub(prog,' ',contents)
words = word_tokenize(contents)
#Remove stop words and punctuations
vocab = []
for word in words:
word=word.strip()
if len(word)>1:
if word not in stopwords_:
vocab.append(wordnet_lemmatizer_.lemmatize(word))
document = ' '.join(w for w in vocab)
return (s_key,document)
#Save corpus in S3 as JSON
corpus_rdd = sc.parallelize(corpus_keys).map(get_author_document_pair)
corpus_rdd.map(lambda x: json.dumps({"author":x[0],"vocabulary":x[1]})) \
.saveAsTextFile('s3://'+S3_BUCKET_NAME+'/acorpus.json')
#Find unique words by each author
vocab_rdd = sc.parallelize(corpus_keys).map(get_author_vocabulary_pair)
vocabulary = vocab_rdd.collect()
#Size of vocabulary for each author
n_tuples = len(vocabulary)
authors_index = range(n_tuples)
author_vocabularysize_pairs = [(vocabulary[i][0],len(vocabulary[i][1])) for i in authors_index]
df_author_vocabularysize_pairs=pd.DataFrame(author_vocabularysize_pairs,columns=['author','vocabulary_size'] )
df_author_vocabularysize_pairs=df_author_vocabularysize_pairs.sort_values('vocabulary_size',ascending=False)
print df_author_vocabularysize_pairs
Explanation: 3.2 Vocabulary Creation
The first step is to build our vocabulary by using all unique words from the corpus
End of explanation
#Combine all individual vocabularies
corpus_all_words = []
for i in authors_index:
corpus_all_words = corpus_all_words + vocabulary[i][1]
vocabulary_ = list(set(corpus_all_words))
print "Corpus Vocabulary Size:"+str(len(vocabulary_))+" words"
Explanation: Now we need to combine all these individual vocabularies into corpus vocabulary
End of explanation
#Create vocabulary RDD
vocabrdd = sc.parallelize([word for word in vocabulary_]).map(lambda l: l)
#Create vocabulary tuple
vocabtups = (vocabrdd.map(lambda word: (word, 1))
.reduceByKey(lambda a, b: a + b)
.map(lambda (x,y): x)
.zipWithIndex()
).cache()
vocab = vocabtups.collectAsMap()
Explanation: It is important to mention that these words may contain proper nouns that could be removed by POS tagging, but in authorship attribution our assumption is that some authors have favourite character names, so we keep them.
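For reference, this is roughly how one could filter proper nouns out with NLTK's POS tagger if we did want to drop character names (not done in this project). It assumes the averaged perceptron tagger data is installed, and note that our corpus is lowercased, which would make NNP tagging unreliable anyway.
# Hypothetical alternative, NOT used here: drop proper nouns (NNP/NNPS) via POS tags.
import nltk
words = ['Elizabeth', 'walked', 'to', 'Pemberley', 'slowly']
tagged = nltk.pos_tag(words)            # requires the 'averaged_perceptron_tagger' data
kept = [w for w, tag in tagged if tag not in ('NNP', 'NNPS')]
print kept                              # typically leaves ['walked', 'to', 'slowly']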
End of explanation
#Take Samples from Corpus
S3_BUCKET_NAME='cs109-gutenberg'
#Function to get feature samples
def get_features_sample(author_document):
author_label = author_document[0]
document = author_document[1]
complete_sample =[]
sample_size = 200
num_of_samples = 20
index = range(num_of_samples)
words = word_tokenize(document)
for i in index:
sample = np.random.choice(words,sample_size,replace=False)
complete_sample.append(sample)
author_sample = {}
author_sample[author_label] = complete_sample
return (author_label,author_sample)
#Read vocabulary from S3 as dataframe
vocab_df = sqlContext.read.json('s3://'+S3_BUCKET_NAME+'/acorpus.json')
#Take as many samples as minimum from each vocabulary
# Sampling may not be required for clusters with EC2 instances with better
# CPU and Memory resources
sampled_vocab_rdd = vocab_df.map(lambda x:get_features_sample(x))
#Create a dictionary with
#every sample matched with label
def create_author_sample_dictionary(feature_sample):
author_label = feature_sample[0]
document_dictionary = feature_sample[1]
dictionary_keys = document_dictionary.keys()
sample_dictionary_list=[]
sample_dictionary = {}
for key in dictionary_keys:
document_array = document_dictionary.get(key)
for samples in document_array:
sample_dictionary = {}
sample_dictionary[key] = samples.tolist()
sample_dictionary_list.append(sample_dictionary)
return (author_label,sample_dictionary_list)
author_sample_rdd = sampled_vocab_rdd.map(lambda x:create_author_sample_dictionary(x))
sample_dictionary = author_sample_rdd.flatMap(lambda x:x[1]).collect()
#Create Data Frame for feature sets
data =[]
for dictionary in sample_dictionary:
keys = dictionary.keys()
values_array = dictionary.get(keys[0])
author_data_pair =[]
author_data_pair.append(keys[0])
author_data_pair.append(' '.join(w for w in values_array))
data.append(author_data_pair)
df_features = pd.DataFrame(data, columns=['author','sample'])
#Convert author names to numerical values
author_name_value_dict = {}
for i,author in enumerate(famous_authors):
author_name_value_dict[author]=i
df_features['author'] = df_features['author'].map(author_name_value_dict)
Explanation: This vocabulary will be passed to CountVectorizer
3.3 Feature Selection
Careful feature selection is imperative for building good machine learning models. From the same dataset one may select different features for different problems: for sentiment analysis, adjectives are selected as features and punctuation may not be critical, whereas an authorship attribution task may use syntactic features of a document. At this stage we can also employ dimensionality reduction to shrink the feature space.
In this project I have used the "Bag of Words" approach. As the name implies, bag of words represents the text of a document as a multiset of terms and gives no significance to context or word order.
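A tiny illustration of the representation (word order is lost, only counts remain):
# Toy bag-of-words example: two short "documents" with the same words in different order
from sklearn.feature_extraction.text import CountVectorizer
toy = CountVectorizer()
m = toy.fit_transform(['the cat sat on the mat', 'the mat sat on the cat'])
print toy.get_feature_names()
print m.toarray()   # identical rows: word order carries no information in bag of words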
End of explanation
#Extract Vector Features
vectorizer = CountVectorizer(vocabulary=vocab)
#Create x,y i.e features, responses
vectorized_features = vectorizer.fit_transform(df_features['sample'])
X = vectorized_features
y = df_features['author'].values
Explanation: 3.4 Feature Extraction
The feature extraction process converts textual data into numerical feature vectors that can be used as input to machine learning algorithms. I have mainly used CountVectorizer for feature extraction.
End of explanation
xtrain, xtest, ytrain, ytest = train_test_split(X,y)
Explanation: 4. Model Building
In this phase I converted the feature set to training and testing portions and used following approach to model building.
1) Select a machine learning algorithm
2) Conduct KFold Cross Validation for finding optimal parameters
3) Fit the model on training data with optimal parameters
4) Test the model on testing data
5) Compute Accuracy
Repeat the steps for each model.
4.1 Training Testing Split
End of explanation
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier()
parameters = {"n_neighbors":[2,4,6,8]}
model = do_classify(clf,parameters,xtrain,ytrain,xtest,ytest)
Explanation: 4.2 K Nearest Neighbors (KNN) Classifier
KNN is a non-parametric classification algorithm. It makes no assumption about the distribution of the underlying data and requires essentially no training; it takes a majority vote of the K nearest neighbours to decide which class a sample belongs to.
End of explanation
from sklearn.svm import LinearSVC
clf_linearSVC = LinearSVC()
parameters = {"C": [0.001, 0.01, 0.1, 1, 10, 100, 1000]}
model = do_classify(clf_linearSVC,parameters,xtrain,ytrain,xtest,ytest)
Explanation: 4.3 Support Vector Machines
Support Vector Machines (SVMs) are an extension of the support vector classifier. They classify samples by finding separating hyperplanes with maximal margin between classes.
End of explanation
clf = MultinomialNB()
parameters = {"alpha": [0.01, 0.1, 0.5, 1]}
result = do_classify(clf,parameters,xtrain,ytrain,xtest,ytest)
Explanation: 4.4 Naive Bayes
Naive Bayes is a probabilistic classification method based on Bayes' theorem. A naive Bayes classifier assumes that the presence or absence of a feature is unrelated to the presence or absence of any other feature, given the class. This is called conditional independence.
Mathematically this can be written as:
$$P(c|d) \propto P(d|c) P(c) $$
$$P(d|c) = \prod_k P(t_k | c) $$
where $c$ is the class, $d$ denotes the document and $t_k$ is the $k$-th term. In other words, the probability of class $c$ given document $d$ is proportional to the product of the prior probability of the class and the probability of observing document $d$ under class $c$.
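As a tiny numeric illustration with made-up values: suppose a two-word document $d = (t_1, t_2)$ and two authors with priors $P(c_1)=0.6$ and $P(c_2)=0.4$. If $P(t_1|c_1)=0.02$, $P(t_2|c_1)=0.01$ while $P(t_1|c_2)=0.005$, $P(t_2|c_2)=0.004$, then
$$P(c_1|d) \propto 0.6 \times 0.02 \times 0.01 = 1.2 \times 10^{-4}$$
$$P(c_2|d) \propto 0.4 \times 0.005 \times 0.004 = 8 \times 10^{-6}$$
so the document is attributed to $c_1$. In practice MultinomialNB works with log-probabilities and Laplace smoothing controlled by alpha, which is what the alpha grid search in this section tunes.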
End of explanation
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier()
parameters = {"n_estimators": [10, 20, 30, 40]}
result = do_classify(clf,parameters,xtrain,ytrain,xtest,ytest)
Explanation: 4.5 Random Forest
Random Forest is an ensemble method that fits a number of decision tree classifiers on various subsamples of the dataset and uses averaging to improve predictive accuracy and control over-fitting.
End of explanation
#Get Confusion matrix
clf = MultinomialNB(alpha=0.01)
clf.fit(xtrain,ytrain)
training_accuracy = clf.score(xtrain, ytrain)
test_accuracy = clf.score(xtest, ytest)
print confusion_matrix(ytest,clf.predict(xtest))
print "Accuracy on training data: %0.2f" % (training_accuracy)
print "Accuracy on test data: %0.2f" % (test_accuracy)
#Print Classification Report
print classification_report(ytest,clf.predict(xtest))
Explanation: 4.6 Final Analysis of Multiclass classification
Looking at the accuracy results, Multinomial Naive Bayes and the linear SVM gave the highest prediction accuracy on the test data. We shall use Multinomial Naive Bayes for the operational implementation.
End of explanation
#We can save the model as pickel file
from sklearn.externals import joblib
clf_linearSVC = LinearSVC(C=0.01)
clf_linearSVC.fit(xtrain,ytrain)
joblib.dump(clf_linearSVC, './data/svm.pkl')
joblib.dump(clf, './data/classifier.pkl')
joblib.dump(vectorizer,'./data/vectorizer.pkl')
Explanation: 5. Results
The results of the project can be summarized as responses to the following questions.
1) Can "Bag of Words" as features give acceptable classification accuracy?
All classifiers have given good accuracies making it a suitable technique for Authorship attribution.
2) Which algorithms perform best with Bag of Words?
SVM and Naive Bayes have performed best for Bag of Words based authorship attribution.
3) How can Big Data processing engines such as Apache Spark aid in preparing data for Machine Learning algorithms?
Spark was used throughout this project and managed to clean 12 GB of raw Gutenberg data in a matter of minutes.
4) How can one clean Gutenberg data to be used for different NLP related research projects?
Cleaning Gutenberg data took a long time. NLTK provides a subset of Gutenberg corpus but it would be interesting to scale up authorship attribution for the full set of authors using complete Gutenberg data.
5) Can AWS EMR be used as a viable cloud based Big Data platform?
I have found AWS EMR to be very convenient. EMR clusters with m3.xlarge with 3 nodes are more than enough for most operations but the cleaning stage required 5 EC2 nodes of m3.xlarge.
6. Operational Implementation
In this phase the model should be saved and used via command line.
6.1 Saving the model
End of explanation
#This code is given in the scripts folder and should be run
#from the command line
#!/usr/bin/env python
# This script can be used to detect author's name from sample of his/her works
from sklearn.externals import joblib
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
import re
import sys
#Model file path
model_file = '../data/classifier.pkl'
vectorizer_file = '../data/vectorizer.pkl'
# The machine learning model used in this case is only trained with the following authors.
famous_authors = ['charles_dickens','william_shakespeare','jane_austen','james_joyce','mark_twain','oscar_wilde','edgar_allan_poe',
'francis_bacon_st_albans','christopher_marlowe','joseph_conrad','agatha_christie','dh_lawrence']
#Function : print_help
#Purpose : Function to display help message
def print_help():
print "Usage :"+sys.argv[0]+" <path to sample text file>"
print " Where sample text file contains text by one of authors given above"
#Function : get_author_text
#Purpose : Convert raw text document to tokens
def get_author_text(sample_file):
try:
with open(sample_file,'r') as file:
data = file.read()
except IOError as e:
print "I/O Error".format(e.errno, e.strerror)
sys.exit(2)
#Set language for stopwords
stopwords_ = stopwords.words('english')
#Instantiate Lemmatizer
wordnet_lemmatizer_ = WordNetLemmatizer()
#Clean the sample data
contents = unicode(data, 'utf-8')
prog = re.compile('[\t\n\r\f\v\d\']',re.UNICODE)
contents = re.sub(prog,' ',contents).lower()
#Remove punctuations
prog=re.compile('[!\"#$%&\'()*+\,-./:;<=>?@[\]^_`{|}~]',re.UNICODE)
contents = re.sub(prog,' ',contents)
words = word_tokenize(contents)
#Remove stop words and punctuations
vocab = []
for word in words:
word=word.strip()
if len(word)>1:
if word not in stopwords_:
vocab.append(wordnet_lemmatizer_.lemmatize(word))
return vocab
#Check input arguments
if (len(sys.argv) < 2):
print_help()
sys.exit(1)
text = get_author_text(sys.argv[1])
clf = joblib.load(model_file)
svm = joblib.load('../data/svm.pkl')
vectorizer = joblib.load(vectorizer_file)
features = vectorizer.transform(text)
nb_prediction= clf.predict(features).tolist()
svm_prediction = svm.predict(features).tolist()
print nb_prediction
print svm_prediction
Explanation: 6.2 Script to Load Text File and Give Result
End of explanation |
14,210 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Randomization
In the previous chapter, we saw how randomization eliminates selection bias. Let's explain what we mean by randomization, describe several ways we might want to randomly assign treatments, and discuss the components other than the assignment that can be randomized.
Randomization refers to using "a known, well-understood probabilistic scheme" to assign treatments to units (Oehlert, 2010). Randomization "ensures that assignment to the treatment group is statistically independent of all observed or unobserved variables" (Gerber and Green, 2012).
Simple Random Assignment
With simple random assignment, every unit has the same probability of being assigned to a particular treatment group. The probability can be anything greater than zero and less than one. This will approximately determine the number of units in each group. For example, assuming a single treatment group and a single control group, if the probability is 0.75, about 75% will be assigned to the treatment group.
Let's imagine we have 10 units to which we assign a treatment with 0.5 probability. Will our groups be balanced? That is, will we have 5 units in the treatment group and 5 units in the control group? Let's find out.
Step1: This counts the number of successes—think of "success" as being assigned to the treatment group—in 10 independent trials, where success occurs 50% of the time.
Each time you run the cell above, you'll get a different result—it's not always 5! This is a drawback of simple random assignment.
[Y]ou could flip a coin to assign each of 10 [units] to the treatment condition, but there is only a 24.6% chance of ending up with exactly 5 [units] in treatment and 5 in control (Gerber and Green, 2012)
So that others may reproduce our assignments, we can use a random seed. This is highly recommended, though, in practice, we won't use np.random.binomial(). (Note
Step2: Complete Random Assignment
If, instead, we'd like to assign exactly $m$ of $N$ units to the treatment group, we can use complete random assignment. Here, as before, each unit has an identical probability of being assigned to the treatment group. Gerber and Green describe three ways to implement complete random assignment
Step3: Here, using the seed of 42, units 1, 3, 4, 5, and 8 get assigned to the treatment group.
Randomly Order | Python Code:
import numpy as np
n, p = 10, 0.5
np.random.binomial(n, p)
Explanation: Randomization
In the previous chapter, we saw how randomization eliminates selection bias. Let's explain what we mean by randomization, describe several ways we might want to randomly assign treatments, and discuss the components other than the assignment that can be randomized.
Randomization refers to using "a known, well-understood probabilistic scheme" to assign treatments to units (Oehlert, 2010). Randomization "ensures that assignment to the treatment group is statistically independent of all observed or unobserved variables" (Gerber and Green, 2012).
Simple Random Assignment
With simple random assignment, every unit has the same probability of being assigned to a particular treatment group. The probability can be anything greater than zero and less than one. This will approximately determine the number of units in each group. For example, assuming a single treatment group and a single control group, if the probability is 0.75, about 75% will be assigned to the treatment group.
Let's imagine we have 10 units to which we assign a treatment with 0.5 probability. Will our groups be balanced? That is, will we have 5 units in the treatment group and 5 units in the control group? Let's find out.
End of explanation
np.random.seed(42)
np.random.binomial(n, p)
Explanation: This counts the number of successes—think of "success" as being assigned to the treatment group—in 10 independent trials, where success occurs 50% of the time.
Each time you run the cell above, you'll get a different result—it's not always 5! This is a drawback of simple random assignment.
[Y]ou could flip a coin to assign each of 10 [units] to the treatment condition, but there is only a 24.6% chance of ending up with exactly 5 [units] in treatment and 5 in control (Gerber and Green, 2012)
So that others may reproduce our assignments, we can use a random seed. This is highly recommended, though, in practice, we won't use np.random.binomial(). (Note: I'll always use 42 as the seed.)
End of explanation
from math import factorial
possible_combinations = factorial(10) // (factorial(5) * factorial(10 - 5))
import random
from itertools import combinations
# enumerate the possible ways to select m of N units
enumerated = list(combinations(range(10), 5))
# randomly select one of those allocations
random.seed(42)
select = random.randint(0, possible_combinations-1)
treatment = enumerated[select]
print(list(treatment))
Explanation: Complete Random Assignment
If, instead, we'd like to assign exactly $m$ of $N$ units to the treatment group, we can use complete random assignment. Here, as before, each unit has an identical probability of being assigned to the treatment group. Gerber and Green describe three ways to implement complete random assignment:
randomly select units until there are $m$ of them in the treatment group
enumerate all of the possible ways to select $m$ of $N$ units and randomly select one of those allocations
randomly order the $N$ units and select the first $m$
Let's show examples for the second and third approaches.
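For completeness, a minimal sketch of the first approach (keep drawing units at random until $m$ distinct units are selected) could look like this; by symmetry it also selects every possible group of 5 with equal probability:
import random

random.seed(42)
units = list(range(10))
m = 5
treatment = set()
while len(treatment) < m:          # draw until we have m distinct units
    treatment.add(random.choice(units))
print(sorted(treatment))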
Enumerate
There are
$$\frac{n!}{r!(n - r)!} = \frac{10!}{5!5!} = 252$$
possible ways to select 5 of 10 units.
We can enumerate these combinations using the itertools module.
End of explanation
units = list(range(10))
random.seed(42)
random.shuffle(units)
units[:5]
Explanation: Here, using the seed of 42, units 1, 3, 4, 5, and 8 get assigned to the treatment group.
Randomly Order
End of explanation |
14,211 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Layerwise Sequential Unit Variance (LSUV)
Getting the MNIST data and a CNN
Jump_to lesson 11 video
Step1: Now we're going to look at the paper All You Need is a Good Init, which introduces Layer-wise Sequential Unit-Variance (LSUV). We initialize our neural net with the usual technique, then we pass a batch through the model and check the outputs of the linear and convolutional layers. We can then rescale the weights according to the actual variance we observe on the activations, and subtract the mean we observe from the initial bias. That way we will have activations that stay normalized.
We repeat this process until we are satisfied with the mean/variance we observe.
Let's start by looking at a baseline
Step2: Now we recreate our model and we'll try again with LSUV. Hopefully, we'll get better results!
Step3: Helper function to get one batch of a given dataloader, with the callbacks called to preprocess it.
Step4: We only want the outputs of convolutional or linear layers. To find them, we need a recursive function. We can use sum(list, []) to concatenate the lists the function finds (sum applies the + operate between the elements of the list you pass it, beginning with the initial state in the second argument).
Step5: This is a helper function to grab the mean and std of the output of a hooked layer.
Step6: So now we can look at the mean and std of the conv layers of our model.
Step7: We first adjust the bias terms to make the means 0, then we adjust the standard deviations to make the stds 1 (with a threshold of 1e-3). The mdl(xb) is not None clause is just there to pass xb through mdl and compute all the activations so that the hooks get updated.
Step8: We execute that initialization on all the conv layers in order
Step9: Note that the mean doesn't exactly stay at 0. since we change the standard deviation after by scaling the weight.
Then training is beginning on better grounds.
Step10: LSUV is particularly useful for more complex and deeper architectures that are hard to initialize to get unit variance at the last layer.
Export | Python Code:
x_train,y_train,x_valid,y_valid = get_data()
x_train,x_valid = normalize_to(x_train,x_valid)
train_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)
nh,bs = 50,512
c = y_train.max().item()+1
loss_func = F.cross_entropy
data = DataBunch(*get_dls(train_ds, valid_ds, bs), c)
mnist_view = view_tfm(1,28,28)
cbfs = [Recorder,
partial(AvgStatsCallback,accuracy),
CudaCallback,
partial(BatchTransformXCallback, mnist_view)]
nfs = [8,16,32,64,64]
class ConvLayer(nn.Module):
def __init__(self, ni, nf, ks=3, stride=2, sub=0., **kwargs):
super().__init__()
self.conv = nn.Conv2d(ni, nf, ks, padding=ks//2, stride=stride, bias=True)
self.relu = GeneralRelu(sub=sub, **kwargs)
def forward(self, x): return self.relu(self.conv(x))
@property
def bias(self): return -self.relu.sub
@bias.setter
def bias(self,v): self.relu.sub = -v
@property
def weight(self): return self.conv.weight
learn,run = get_learn_run(nfs, data, 0.6, ConvLayer, cbs=cbfs)
Explanation: Layerwise Sequential Unit Variance (LSUV)
Getting the MNIST data and a CNN
Jump_to lesson 11 video
End of explanation
run.fit(2, learn)
Explanation: Now we're going to look at the paper All You Need is a Good Init, which introduces Layer-wise Sequential Unit-Variance (LSUV). We initialize our neural net with the usual technique, then we pass a batch through the model and check the outputs of the linear and convolutional layers. We can then rescale the weights according to the actual variance we observe on the activations, and subtract the mean we observe from the initial bias. That way we will have activations that stay normalized.
We repeat this process until we are satisfied with the mean/variance we observe.
Let's start by looking at a baseline:
End of explanation
learn,run = get_learn_run(nfs, data, 0.6, ConvLayer, cbs=cbfs)
Explanation: Now we recreate our model and we'll try again with LSUV. Hopefully, we'll get better results!
End of explanation
#export
def get_batch(dl, run):
run.xb,run.yb = next(iter(dl))
for cb in run.cbs: cb.set_runner(run)
run('begin_batch')
return run.xb,run.yb
xb,yb = get_batch(data.train_dl, run)
Explanation: Helper function to get one batch of a given dataloader, with the callbacks called to preprocess it.
End of explanation
#export
def find_modules(m, cond):
if cond(m): return [m]
return sum([find_modules(o,cond) for o in m.children()], [])
def is_lin_layer(l):
lin_layers = (nn.Conv1d, nn.Conv2d, nn.Conv3d, nn.Linear, nn.ReLU)
return isinstance(l, lin_layers)
mods = find_modules(learn.model, lambda o: isinstance(o,ConvLayer))
mods
Explanation: We only want the outputs of convolutional or linear layers. To find them, we need a recursive function. We can use sum(list, []) to concatenate the lists the function finds (sum applies the + operator between the elements of the list you pass it, beginning with the initial state in the second argument).
End of explanation
def append_stat(hook, mod, inp, outp):
d = outp.data
hook.mean,hook.std = d.mean().item(),d.std().item()
mdl = learn.model.cuda()
Explanation: This is a helper function to grab the mean and std of the output of a hooked layer.
End of explanation
with Hooks(mods, append_stat) as hooks:
mdl(xb)
for hook in hooks: print(hook.mean,hook.std)
Explanation: So now we can look at the mean and std of the conv layers of our model.
End of explanation
#export
def lsuv_module(m, xb):
h = Hook(m, append_stat)
while mdl(xb) is not None and abs(h.mean) > 1e-3: m.bias -= h.mean
while mdl(xb) is not None and abs(h.std-1) > 1e-3: m.weight.data /= h.std
h.remove()
return h.mean,h.std
Explanation: We first adjust the bias terms to make the means 0, then we adjust the standard deviations to make the stds 1 (with a threshold of 1e-3). The mdl(xb) is not None clause is just there to pass xb through mdl and compute all the activations so that the hooks get updated.
End of explanation
for m in mods: print(lsuv_module(m, xb))
Explanation: We execute that initialization on all the conv layers in order:
End of explanation
%time run.fit(2, learn)
Explanation: Note that the mean doesn't exactly stay at 0, since we change the standard deviation afterwards by scaling the weight.
Then training begins on better grounds.
End of explanation
!python notebook2script.py 07a_lsuv.ipynb
Explanation: LSUV is particularly useful for more complex and deeper architectures that are hard to initialize to get unit variance at the last layer.
Export
End of explanation |
14,212 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook shows an analysis of the Falcon-9 upper stage S-band telemetry frames. It is based on r00t.cz's analysis.
The frames are CCSDS Reed-Solomon frames with an interleaving depth of 5, a (255,239) code, and an (uncoded) frame size of 1195 bytes.
Step1: The first byte of all the frames is 0xe0. Here we see that one of the frames has an error in this byte.
Step2: The next three bytes form a header composed of a 13 bit frame counter and an 11 bit field that indicates where the first packet inside the payload starts (akin to a first header pointer in CCSDS protocols).
Step3: Valid packets contain a 2 byte header where the 4 MSBs are set to 1 and the remaining 12 bits indicate the size of the packet payload in bytes (so the total packet size is this value plus 2). Using this header, the packets can be defragmented in the same way as CCSDS Space Packets transmitted using the M_PDU protocol.
Step4: Only ~76% of the frames payload contains packets. The rest is padding.
Step5: After the 2 byte header, the next 8 bytes of the packet can be used to identify its source or type.
Step6: Some packets have 64-bit timestamps starting 3 bytes after the packet source ID. These give nanoseconds since the GPS epoch.
Step7: Video packets
Video packets are stored in a particular source ID. If we remove the first 25 and last 2 bytes of these packets, we obtain 5 188-byte transport stream packets.
Step8: Only around 28% of the transmitted data is the transport stream video.
Step9: GPS log | Python Code:
x = np.fromfile('falcon9_frames_20210324_084608.u8', dtype = 'uint8')
x = x.reshape((-1, 1195))
Explanation: This notebook shows an analysis of the Falcon-9 upper stage S-band telemetry frames. It is based on r00t.cz's analysis.
The frames are CCSDS Reed-Solomon frames with an interleaving depth of 5, a (255,239) code, and an (uncoded) frame size of 1195 bytes.
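As a quick consistency check on those numbers: each of the 5 interleaved Reed-Solomon codewords is 255 bytes, of which $255 - 239 = 16$ are parity, so the decoded (uncoded) frame is $5 \times 239 = 1195$ bytes, which is exactly the row size used when reshaping the raw data.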
End of explanation
collections.Counter(x[:,0])
Explanation: The first byte of all the frames is 0xe0. Here we see that one of the frames has an error in this byte.
End of explanation
header = np.unpackbits(x[:,1:4], axis = 1)
counter = header[:,:13]
counter = np.concatenate((np.zeros((x.shape[0], 3), dtype = 'uint8'), counter), axis = 1)
counter = np.packbits(counter, axis = 1)
counter = counter.ravel().view('uint16').byteswap()
start_offset = header[:,-11:]
start_offset = np.concatenate((np.zeros((x.shape[0], 5), dtype = 'uint8'), start_offset), axis = 1)
start_offset = np.packbits(start_offset, axis = 1)
start_offset = start_offset.ravel().view('uint16').byteswap()
plt.plot(counter, '.')
plt.title('Falcon-9 frame counter')
plt.ylabel('13-bit frame counter')
plt.xlabel('Decoded frame');
Explanation: The next three bytes form a header composed of a 13 bit frame counter and an 11 bit field that indicates where the first packet inside the payload starts (akin to a first header pointer in CCSDS protocols).
End of explanation
def packet_len(packet):
packet = np.frombuffer(packet[:2], dtype = 'uint8')
return (packet.view('uint16').byteswap()[0] & 0xfff) + 2
def valid_packet(packet):
return packet[0] >> 4 == 0xf
def defrag(x, counter, start_offset):
packet = bytearray()
frame_count = None
for frame, count, first in zip(x, counter, start_offset):
frame = frame[4:]
if frame_count is not None \
and count != ((frame_count + 1) % 2**13):
# broken stream
packet = bytearray()
frame_count = count
if first == 0x7fe:
# only idle
continue
elif first == 0x7ff:
# no packet starts
if packet:
packet.extend(frame)
continue
if packet:
packet.extend(frame[:first])
packet = bytes(packet)
yield packet, frame_count
while True:
packet = bytearray(frame[first:][:2])
if len(packet) < 2:
# not full header inside frame
break
first += 2
if not valid_packet(packet):
# padding found
packet = bytearray()
break
length = packet_len(packet) - 2
packet.extend(frame[first:][:length])
first += length
if first > len(frame):
# packet does not end in this frame
break
packet = bytes(packet)
yield packet, frame_count
packet = bytearray()
if first == len(frame):
# packet just ends in this frame
break
packets = list(defrag(x, counter, start_offset))
Explanation: Valid packets contain a 2 byte header where the 4 MSBs are set to 1 and the remaining 12 bits indicate the size of the packet payload in bytes (so the total packet size is this value plus 2). Using this header, the packets can be defragmented in the same way as CCSDS Space Packets transmitted using the M_PDU protocol.
End of explanation
sum([len(p[0]) for p in packets])/x[:,4:].size
Explanation: Only ~76% of the frames' payload contains packets. The rest is padding.
End of explanation
source_ids = [p[0][2:10].hex().upper() for p in packets]
collections.Counter(source_ids)
Explanation: After the 2 byte header, the next 8 bytes of the packet can be used to identify its source or type.
End of explanation
timestamps = np.datetime64('1980-01-06') + \
np.array([np.frombuffer(p[0][13:][:8], dtype = 'uint64').byteswap()[0] for p in packets]) \
* np.timedelta64(1, 'ns')
timestamps_valid = (timestamps >= np.datetime64('2021-01-01')) & (timestamps <= np.datetime64('2022-01-01'))
plt.plot(timestamps[timestamps_valid],
np.array([p[1] for p in packets])[timestamps_valid], '.')
plt.title('Falcon-9 packet timestamps')
plt.xlabel('Timestamp (GPS time)')
plt.ylabel('Frame counter');
Explanation: Some packets have 64-bit timestamps starting 3 bytes after the packet source ID. These give nanoseconds since the GPS epoch.
End of explanation
video_source = '01123201042E1403'
video_packets = [p for p,s in zip(packets, source_ids)
if s == video_source]
video_ts = bytes().join([p[0][25:-2] for p in video_packets])
Explanation: Video packets
Video packets are stored in a particular source ID. If we remove the first 25 and last 2 bytes of these packets, we obtain 5 188-byte transport stream packets.
End of explanation
len(video_ts)/sum([len(p[0]) for p in packets])
with open('/tmp/falcon9.ts', 'wb') as f:
f.write(video_ts)
ts = np.frombuffer(video_ts, dtype = 'uint8').reshape((-1,188))
# sync byte 71 = 0x47
np.unique(ts[:,0])
# TEI = 0
np.unique(ts[:,1] >> 7)
pusi = (ts[:,1] >> 6) & 1
# priority = 0
np.unique((ts[:,1] >> 5) & 1)
pid = ts[:,1:3].ravel().view('uint16').byteswap() & 0x1fff
np.unique(pid)
for p in np.unique(pid):
print(f'PID {p} ratio {np.average(pid == p) * 100:.1f}%')
# TSC = 0
np.unique(ts[:,3] >> 6)
adaptation = (ts[:,3] >> 4) & 0x3
np.unique(adaptation)
continuity = ts[:,3] & 0xf
for p in np.unique(pid):
print('PID', p, 'PUSI values', np.unique(pusi[pid == p]),
'adaptation field values', np.unique(adaptation[pid == p]))
pcr_pid = ts[pid == 511]
pcr = np.concatenate((np.zeros((pcr_pid.shape[0], 2), dtype = 'uint8'), pcr_pid[:,6:12]), axis = 1)
pcr = pcr.view('uint64').byteswap().ravel()
pcr_base = pcr >> 15
pcr_extension = pcr & 0x1ff
pcr_value = (pcr_base * 300 + pcr_extension) / 27e6
video_timestamps = timestamps[[s == video_source for s in source_ids]]
ts_timestamps = np.repeat(video_timestamps, 5)
pcr_pid_timestamps = ts_timestamps[pid == 511]
plt.plot(pcr_pid_timestamps, pcr_value, '.')
plt.title('Falcon-9 PCR timestamps')
plt.ylabel('PID 511 PCR (s)')
plt.xlabel('Packet timestamp (GPS time)');
Explanation: Only around 28% of the transmitted data is the transport stream video.
End of explanation
gps_source = '0117FE0800320303'
gps_packets = [p for p,s in zip(packets, source_ids)
if s == gps_source]
gps_log = ''.join([str(g[0][25:-2], encoding = 'ascii') for g in gps_packets])
with open('/tmp/gps.txt', 'w') as f:
f.write(gps_log)
print(gps_log)
Explanation: GPS log
End of explanation |
14,213 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ENV / ATM 415
Step1: Go ahead and edit the Python code cell above to do something different. To evaluate whatever is in the cell, just press shift-enter.
Notice that you are free to jump around and evaluate cells in any order you want. The effect is exactly like typing each cell into a Python console in the order that you evaluate them.
Your assignment
Answer all questions below, using the example code as a guide.
You will submit your work as a single notebook file. You can use this file as a template, or create a new, empty notebook. Select Markdown for cells that contain your written answers to the questions below.
Save your notebook as
[your last name]_Assignment03.ipynb
(so for example, my submission would be Rose_Assignment03.ipynb).
The easiest way to do this is just click on the title text at the top of the window. Currently it says Assignment03. When you click on it, you get a prompt for a new notebook name.
Try to make sure your notebook runs from start to finish without error. Do this
Step2: Question 1
(Primer Section 3.8, Review question 1)
List 10 questions that a strictly zero-dimensional climate model cannot answer. For at least five of the 10 questions, add your explanation of why.
Your answer here...
Question 2
(Primer Section 3.8, Review question 2)
The similarities between the first two EBMs (those of Budyko and Sellers) are fairly obvious -- what are they? Now list at least a few differences between these very early EBMs.
Your answer here...
Question 3
Using the function climlab.solar.insolation.daily_insolation(), calculate the incoming solar radiation (insolation) at three different latitudes | Python Code:
# This is an example of a Python code cell.
# Note that I can include text as long as I use the # symbol (Python comment)
# Results of my code will display below the input
print 3+5
Explanation: ENV / ATM 415: Climate Laboratory, Spring 2016
Assignment 3
Out: Tuesday February 23, 2016
Due: Thursday March 3, 2016 at 10:15 am.
About this document
This file is a Jupyter notebook (also formerly called IPython notebook).
Each cell contains either a block of Python code, or some formatted text.
To open this document, you should launch your Jupyter notebook server by typing
jupyter notebook
from your command line (or use the ipython-notebook button on the Anaconda launcher).
Basic navigation
To select a particular cell for editing, just double-click on it.
There is a pull-down menu at the top of the window, used for setting the content of each cell. A text cell like this one will say Markdown.
End of explanation
# We usually want to begin every notebook by setting up our tools:
# graphics in the notebook, rather than in separate windows
%matplotlib inline
# Some standard imports
import numpy as np
import matplotlib.pyplot as plt
# We need the custom climlab package for this assignment
import climlab
Explanation: Go ahead and edit the Python code cell above to do something different. To evaluate whatever is in the cell, just press shift-enter.
Notice that you are free to jump around and evaluate cells in any order you want. The effect is exactly like typing each cell into a Python console in the order that you evaluate them.
Your assignment
Answer all questions below, using the example code as a guide.
You will submit your work as a single notebook file. You can use this file as a template, or create a new, empty notebook. Select Markdown for cells that contain your written answers to the questions below.
Save your notebook as
[your last name]_Assignment03.ipynb
(so for example, my submission would be Rose_Assignment03.ipynb).
The easiest way to do this is just click on the title text at the top of the window. Currently it says Assignment03. When you click on it, you get a prompt for a new notebook name.
Try to make sure your notebook runs from start to finish without error. Do this:
Save your work
From the Kernel menu, select Restart (this will wipe out any variables stored in memory).
From the Cell menu, select Run All. This will run each cell in your notebook in order.
Did it reach the end without error, and with the results you expected?
Yes: Good.
No: Find and fix the errors. (remember that the Python interpreter only knows what has already been defined in previous cells. The order of evaluation matters)
Save your work and submit your notebook file by email to brose@albany.edu
End of explanation
# This is a code cell.
# Use the '+' button on the toolbar above to add new cells.
# Use the arrow buttons to reorder cells.
Explanation: Question 1
(Primer Section 3.8, Review question 1)
List 10 questions that a strictly zero-dimensional climate model cannot answer. For at least five of the 10 questions, add your explanation of why.
Your answer here...
Question 2
(Primer Section 3.8, Review question 2)
The similarities between the first two EBMs (those of Budyko and Sellers) are fairly obvious -- what are they? Now list at least a few differences between these very early EBMs.
Your answer here...
Question 3
Using the function climlab.solar.insolation.daily_insolation(), calculate the incoming solar radiation (insolation) at three different latitudes:
- the equator
- 45ºN
- the North Pole.
Use present-day orbital parameters.
a) Make a well-labeled graph that shows all three insolation curves on the same plot. The x
axis of your graph should be days of the calendar year (beginning January 1), and the y axis should be insolation in W/m2. Include a legend showing which curve corresponds to which latitude.
b) Comment on the very different shapes of these three curves.
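A minimal starting point might look like the sketch below. It assumes the daily_insolation(lat, day) call signature from the climlab version used in class, with latitude in degrees and day as day of year; adapt it to build the full three-latitude plot.
# Hypothetical starter sketch: compute insolation at 45N for every day of the year.
from climlab.solar.insolation import daily_insolation
import numpy as np

days = np.arange(1, 366)
Q45 = daily_insolation(45., days)   # W/m2, one value per calendar day
print(Q45.shape)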
End of explanation |
14,214 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <img src="images/utfsm.png" alt="" width="200px" align="right"/>
USM Numérica
Errors in Python
Objectives
Learn to diagnose and fix common errors in Python.
Learn common debugging techniques.
0.1 Instructions
The instructions for installing and using an IPython notebook can be found at the following link.
After downloading and opening this notebook, remember
Step2: Contents
Introduction
Debugging techniques.
About this Notebook
There are 4 challenges
Step6: 2.2 Debug
Step8: 2.3 Debug
Step10: 2.4 Debug | Python Code:
IPython Notebook v4.0 for Python 3.0
Additional libraries: IPython, pdb
Content under CC-BY 4.0 license. Code under MIT license.
(c) Sebastian Flores, Christopher Cooper, Alberto Rubio, Pablo Bunout.
# Configuration to reload modules and libraries dynamically
%reload_ext autoreload
%autoreload 2
# Configuration for inline plots
%matplotlib inline
# Style configuration
from IPython.core.display import HTML
HTML(open("./style/style.css", "r").read())
Explanation: <img src="images/utfsm.png" alt="" width="200px" align="right"/>
USM Numérica
Errors in Python
Objectives
Learn to diagnose and fix common errors in Python.
Learn common debugging techniques.
0.1 Instructions
The instructions for installing and using an IPython notebook can be found at the following link.
After downloading and opening this notebook, remember:
* Work through the problems sequentially.
* Save constantly with Ctrl-S to avoid surprises.
* In the code cells, replace FIX_ME with the corresponding code.
* Run each code cell using Ctrl-Enter
0.2 Licensing and Configuration
Run the following cell with Ctrl-S.
End of explanation
import numpy as np
def promedio_positivo(a):
pos_mean = a[a>0].mean()
return pos_mean
N = 100
x = np.linspace(-1,1,N)
y = 0.5 - x**2 # Do not change this line
print(promedio_positivo(y))
# Error 1:
# Error 2:
# Error 3:
# Error 4:
# Error 5:
Explanation: Contents
Introduction
Debugging techniques.
About the Notebook
There are 4 challenges:
* In every case, document what you find. Write the errors you detect as a #comment or comment.
* In challenge 1: Run the cell, read the output and fix the code. Comment the 5 errors in the same cell.
* In challenge 2: Run the cell and find the errors using print. Comment the 3 errors in the same cell.
* In challenge 3: Run the file ./mat281_code/desafio_3.py and find the 3 errors using pdb.set_trace()
* In challenge 4: Run the file ./mat281_code/desafio_4.py and find the 3 errors using IPython.embed()
1. Introduction
Debugging: removing errors from a computer program.
* Easily 40-60% of the time spent creating a program.
* No program is free of bugs/errors.
* It is impossible to guarantee 100% safe usage by the user.
* Computer programs have inconsistencies/implementation errors.
* Hardware can have errors too!
1. Introduction
Why are they called bugs?
There are records in Thomas Edison's correspondence, from 1878, in which he spoke of bugs to refer to errors in his inventions. The term was used occasionally in the computing domain. In 1947, the Mark II computer showed an error. While looking for its origin, the technicians found a moth that had gotten into the machine.
<img src="images/bug.jpg" alt="" width="600px" align="middle"/>
The whole story is at the following Wikipedia link (in English).
2. Debugging Techniques
Read the output produced by Python for possible errors
Using print
Using pdb: the Python debugger
Conditional launching of IPython embed
2.1 Debug: Reading the error output
When the program does not work and raises an error, it is usually easy to fix.
The error message will give the line where the error is detected and the type of error.
PROS:
* Self-explanatory
* Easy to detect and repair
CONS:
* Not all mistakes raise an error, particularly conceptual mistakes.
2.1.1 List of common errors
The most common errors in a program are the following:
* SyntaxError:
* Parentheses do not close properly.
* Missing quotes in a string.
* Missing colon when defining an if-elif-else block, a function, or a loop.
* NameError:
* A variable is used that does not exist (name misspelled, or defined after the point where it is used)
* The function or variable has not been defined yet.
* The required module has not been imported
* IOError: The file to open does not exist.
* KeyError: The key does not exist in the dictionary.
* TypeError: The function cannot be applied to the object.
* IndentationError: The code blocks are not well defined. Check the indentation.
A classic error that is hard to detect is the unintended assignment: writing $a=b$ when you really want to test the equality $a==b$.
Challenge 1
Fix the following Python program so that it works. It contains 5 errors. Note the errors as comments in the code.
When it runs without errors, it should return the value 0.333384348536
End of explanation
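Before moving on, here is a tiny self-contained illustration of the assignment-vs-comparison pitfall mentioned above.
# Tiny illustration of the a = b vs a == b pitfall.
a, b = 1, 2
if a == b:        # comparison: what you usually want inside an 'if'
    print("equal")
# if a = b:       # assignment inside an 'if' condition is a SyntaxError in Python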
def fibonacci(n):
Must return the list with the first n Fibonacci numbers.
For n<1, return [].
For n=1, return [1].
For n=2, return [1,1].
For n=3, return [1,1,2].
For n=4, return [1,1,2,3].
And so on
a = 1
b = 1
fib = [a,b]
count = 2
if n<1:
return []
if n=1:
return [1]
while count <= n:
aux = a
a = b
b = aux + b
count += 1
fib.append(aux)
return fib
print "fibonacci(-1):", fibonacci(-1) # Deberia ser []
print "fibonacci(0):", fibonacci(0) # Deberia ser []
print "fibonacci(1):", fibonacci(1) # Deberia ser [1]
print "fibonacci(2):", fibonacci(2) # Deberia ser [1,1]
print "fibonacci(3):", fibonacci(3) # Deberia ser [1,1,2]
print "fibonacci(5):", fibonacci(5) # Deberia ser [1,1,2,3,5]
print "fibonacci(10):", fibonacci(10) # Deberia ser ...
ERRORS FOUND:
1)
2)
3)
Explanation: 2.2 Debug: Using print
Using print is the simplest and most common technique, appropriate when the errors are simple.
PRO:
* Easy and quick to implement.
* Lets you inspect variable values throughout an entire program
CONS:
* Requires writing more complicated expressions to study more than one variable at a time.
* Printing does not help much when studying multidimensional data (arrays, matrices, large dictionaries).
* Removing many print statements can be cumbersome in a large program.
* Inappropriate if running the program takes too long (for example if it has to read a file from disk), because prints are usually inserted over several runs while "chasing" the value of a variable.
Tip
If you want to inspect the variable mi_variable_con_error, use
print("!!!" + str(mi_variable_con_error))
or
print(mi_variable_con_error) #!!!
This way it is easier to see in the output where the variable was printed, and once the bug is fixed it is also easier to remove the print expressions that were inserted for debugging (they will not be confused with the prints that are genuinely part of the program).
Challenge 2
Find out why the program behaves incorrectly, using print wherever it seems appropriate.
Do not delete the print statements you introduced; just comment them out with #.
Fix the defect and indicate with a comment in the code where the error was.
End of explanation
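A tiny self-contained illustration of this convention (the function and values are made up for the example):
# Minimal print-debugging demo: the "!!!" marker makes temporary prints easy to spot and remove later.
def buggy_sum(values):
    total = 0
    for v in values:
        total += v
        print("!!! v = " + str(v) + ", total = " + str(total))
    return total

print(buggy_sum([1, 2, 3]))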
# Challenge 3 - Errors found in ./mat281_code/desafio_3.py
The following errors were detected:
1- FIX ME - COMMENT HERE
2- FIX ME - COMMENT HERE
3- FIX ME - COMMENT HERE
Explanation: 2.3 Debug: Using pdb
Python ships with a default debugger: pdb (the Python debugger), which is similar to gdb (the C debugger).
PRO:
* Lets you inspect the actual state of the machine at a given moment.
* Lets you execute the following instructions.
CONS:
* Requires knowing the commands.
* Does not have tab completion like IPython.
pdb works in a similar way to breakpoints in MATLAB.
You need to do the following:
Import the library
import pdb
Request that the inspection be run at the lines that potentially contain the error. To do this, insert on a new line, with the proper indentation, the following:
pdb.set_trace()
Run the program as you normally would:
$ python mi_programa_con_error.py
After doing the above, pdb executes all the instructions up to the first pdb.set_trace() and returns the terminal to the user so you can inspect the variables and review the code. The main commands to memorize are:
n + Enter: executes the next instruction (line).
c + Enter: continues execution of the program until the next pdb.set_trace() or the end of the program.
l + Enter: shows which line is currently being executed.
p mi_variable + Enter: prints the variable mi_variable.
Enter: repeats the last action performed in pdb.
2.3.1 Example
Run the file ./mat281_code/ejemplo_pdb.py and follow the instructions you will get:
$ python ./mat281_code/ejemplo_pdb.py
Challenge 3
Use pdb to debug the file ./mat281_code/desafio_3.py.
Challenge 3 consists of finding 3 errors in a faulty implementation of the secant method:
link wikipedia
Instructions:
* After using pdb.set_trace(), do not delete the line you created; just comment it out with # so its use can be reviewed.
* Note in the cell below the errors you found in the file ./mat281_code/desafio_3.py
End of explanation
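A minimal, self-contained sketch of where pdb.set_trace() would go (the function here is just an example, not part of the challenge files):
# Minimal pdb sketch: uncomment the set_trace() line and run as a script to get the (Pdb) prompt.
import pdb

def cumulative_sum(values):
    total = 0
    for v in values:
        # pdb.set_trace()   # pause here; then use 'p total', 'n', 'c' to inspect and continue
        total += v
    return total

print(cumulative_sum([1, 2, 3]))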
# Challenge 4 - Errors found in ./mat281_code/desafio_4.py
The following errors were detected:
1- FIX ME - COMMENT HERE
2- FIX ME - COMMENT HERE
3- FIX ME - COMMENT HERE
Explanation: 2.4 Debug: Using IPython
PRO:
* Lets you inspect the actual state of the machine at a given moment.
* Lets you evaluate any expression easily.
* Lets you plot, print matrices, etc.
* Has IPython's tab completion.
* Has all the power of IPython (%who, %whos, etc.)
CONS:
* Does not let you step to the next instruction like n+Enter in pdb.
IPython works as follows:
Import the library
import IPython
Request that the inspection be run at the lines that potentially contain the error. To do this, insert on a new line, with the proper indentation, the following:
IPython.embed()
Run the program as you normally would:
$ python mi_programa_con_error.py
After doing the above, Python executes all the instructions up to the first IPython.embed() and returns the interactive IPython terminal to the user at the selected point, so you can inspect the variables and review the code.
To exit IPython, use Ctrl+d.
2.4.1 Example
Run the file ./mat281_code/ejemplo_ipython.py and follow the instructions you will get:
$ python ./mat281_code/ejemplo_ipython.py
Challenge 4
Use IPython to debug the file ./mat281_code/desafio_4.py.
Challenge 4 consists of repairing a faulty implementation of the bisection method:
link wikipedia
Instructions:
* After using IPython.embed(), do not delete the line; just comment it out with # so its use can be reviewed.
* Note in the cell below the errors you found in the file ./mat281_code/desafio_4.py
End of explanation |
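And the analogous minimal sketch for IPython.embed() (example function only):
# Minimal IPython.embed() sketch: uncomment the embed() line to drop into an interactive shell there.
import IPython

def midpoint(a, b):
    m = (a + b) / 2.0
    # IPython.embed()   # opens an IPython shell with 'a', 'b' and 'm' in scope; exit with Ctrl+D
    return m

print(midpoint(0.0, 10.0))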
14,215 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AdaptiveMD
Example 4 - Custom Task objects
0. Imports
Step1: Let's open our test project by its name. If you completed the first examples this should all work out of the box.
Step2: Open all connections to the MongoDB and Session so we can get started.
Let's see again where we are. These numbers will depend on whether you run this notebook for the first time or just continue again. Unless you delete your project it will accumulate models and files over time, as is our ultimate goal.
Step3: Now restore our old ways to generate tasks by loading the previously used generators.
Step4: A simple task
A task is in essence a bash script-like description of what should be executed by the worker. It has details about files to be linked to the working directory, bash commands to be executed and some meta information about what should happen in case we succeed or fail.
The execution structure
Let's first explain briefly how a task is executed and what its components are. This was originally built so that it is compatible with radical.pilot and still is. So, if you are familiar with it, all of the following information should sound very familiar.
A task is executed from within a unique directory that only exists for this particular task. These are located in adaptivemd/workers/ and look like
worker.0x5dcccd05097611e7829b000000000072L/
the long number is a hex representation of the UUID of the task. Just if you are curious type
print hex(my_task.__uuid__)
Then we change directory to this folder write a running.sh bash script and execute it. This script is created from the task definition and also depends on your resource setting (which basically only contain the path to the workers directory, etc)
The script is divided into 1 or 3 parts depending on which Task class you use. The main Task uses a single list of commands, while PrePostTask has the following structure
Pre-Exec
Step5: We are linking a lot of files to the worker directory and change the name for the .pdb in the process. Then call the actual python script that runs openmm. And finally move the output.dcd and the restart file back to the trajectory folder.
There is a way to list lots of things about tasks and we will use it a lot to see our modifications.
Step6: Modify a task
As long as a task is not saved and hence placed in the queue, it can be altered in any way. All of the 3 / 5 phases can be changed separately. You can add things to the staging phases or bash phases or change the command. So, let's do that now
Add a bash line
First, a Task is very similar to a list of bash commands and you can simply append (or prepend) a command. A text line will be interpreted as a bash command.
Step7: As expected this line was added to the end of the script.
Add staging actions
To set staging is more difficult. The reason is that you normally have no idea where files are located and hence writing a copy or move is impossible. This is why the staging commands are not bash lines but objects that hold information about the actual file transaction to be done. There are some task methods that help you move files, but files themselves can also generate these commands for you.
Let's move one trajectory (directory) around a little more as an example
Step8: This looks like in the script. The default for a copy is to move a file or folder to the worker directory under the same name, but you can give it another name/location if you use that as an argument. Note that since trajectories are a directory you need to give a directory name (which end in a /)
Step9: If you want to move it not to the worker directory you have to specify the location and you can do so with the prefixes (shared
Step10: Besides .copy you can also .move or .link files.
Step11: Local files
Let's mention these because they require special treatment. We cannot (like RP can) copy files to the HPC, we need to store them in the DB first.
Step12: Make sure you use file
Step13: Note that now there are 3 / in the filename, two from the
Step14: For local files you normally use .transfer, but copy, move or link work as well. Still, there is no difference since the file only exists in the DB now and copying from the DB to a place on the HPC results in a simple file creation.
Now, we want to add a command to the staging and see what happens.
Step15: We now have one more transfer command. But something else has changed. There is one more file listed as required. So, the task can only run if that file exists, but since we loaded it into the DB, it exists (for us). For example, the newly created trajectory 25.dcd does not exist yet. If that were a requirement, the task would fail. But let's check that it exists.
Step16: Okay, we have now the PDB file staged and so any real bash commands could work with a file ntl9.pdb. Alright, so let's output its stats.
Step17: Note that usually you place these stage commands at the top or your script.
Now we could run this task, as before and see, if it works. (Make sure you still have a worker running)
Step18: And check, that the task is running
Step19: If we did not screw up the task, it should have succeeded and we can look at the STDOUT.
Step20: Well, great, we have the pointless output and the stats of the newly staged file ntl9.pdb
How does a real script look like
Just for fun let's create the same scheduler that the adaptivemdworker uses, but from inside this notebook.
Step21: If you really wanted to use the worker you need to initialize it and it will create directories and stage files for the generators, etc. For that you need to call sc.enter(project), but since we only want it to parse our tasks, we only set the project without invoking initialization. You should normally not do that.
Step22: Now we can use a function .task_to_script that will parse a task into a bash script. So this is really what would be run on your machine now.
Step23: Now you see that all file paths have been properly interpreted to work. See that there is a comment about a temporary file from the DB that is then renamed. This is a little trick to be compatible with RPs way of handling files. (TODO
Step24: And voila, the path has changed to a relative path from the working directory of the worker. Note that you see here the line we added in the very beginning of example 1 to our resource!
A Task from scratch
If you want to start a new task you can begin with
Step25: as we did before.
Just start adding staging and bash commands and you are done. When you create a task you can assign it a generator, then the system will assume that this task was generated by that generator, so don't do it for you custom tasks, unless you generated them in a generator. Setting this allows you to tell a worker only to run tasks of certain types.
The Python RPC Task
The tasks so far are very powerful, but they lack the possibility to call a python function. Since we are using python here, it would be great to really pretend to call a python function from here and not take the detour of writing a python bash executable with arguments, etc... An example for this is the PyEmma generator which uses this capability.
Let's do an example of this as well. Assume we have a python function in a file (you need to have your code in a file so far so that we can copy the file to the HPC if necessary). Let's create the .py file now.
Step26: Now create a PythonTask instead
Step27: and the call function has changed. Note that also now you can still add all the bash and stage commands as before. A PythonTask is also a subclass of PrePostTask so we have a .pre and .post phase available.
Step28: We call the function my_func with one argument
Step29: Well, interesting. What this actually does is to write the input arguments to the function into a temporary .json file on the worker, (in RP on the local machine and then transfers it to remote), rename it to input.json and read it in the _run_.py. This is still a little clumsy, but needs to be this way to be RP compatible which only works with files! Look at the actual script.
You see, that we really copy the .py file that contains the source code to the worker directory. All that is done automatically. A little caution on this. You can either write a function in a single file or use any installed package, but in this case the same package needs to be installed on the remote machine as well!
Let's run it and see what happens.
Step30: And wait until the task is done
Step31: The default settings will automatically save the content from the resulting output.json in the DB an you can access the data that was returned from the task at .output. In our example the result was just the size of a the file in bytes
Step32: And you can use this information in an adaptive script to make decisions.
success callback
The last thing we did not talk about is the possibility to also call a function with the returned data automatically on successful execution. Since this function is executed on the worker we (so far) only support function calls with the following restrictions.
you can call a function of the related generator class. for this you need to create the task using PythonTask(generator)
the function name you want to call is stored in task.then_func_name. So you can write a generator class with several possible outcomes and chose the function for each task.
The Generator needs to be part of adaptivemd
So in the case of modeller.execute we create a PythonTask that references the following functions
Step33: So we will call the default then_func of modeller or the class modeller is of.
Step34: These callbacks are called with the current project, the resulting data (which is in the modeller case a Model object) and array of initial inputs.
This is the actual code of the callback
py
@staticmethod
def then_func(project, task, model, inputs) | Python Code:
import sys, os
from adaptivemd import (
Project, Task, File, PythonTask
)
Explanation: AdaptiveMD
Example 4 - Custom Task objects
0. Imports
End of explanation
project = Project('tutorial')
Explanation: Let's open our test project by its name. If you completed the first examples this should all work out of the box.
End of explanation
print project.files
print project.generators
print project.models
Explanation: Open all connections to the MongoDB and Session so we can get started.
Let's see again where we are. These numbers will depend on whether you run this notebook for the first time or just continue again. Unless you delete your project it will accumulate models and files over time, as is our ultimate goal.
End of explanation
engine = project.generators['openmm']
modeller = project.generators['pyemma']
pdb_file = project.files['initial_pdb']
Explanation: Now restore our old ways to generate tasks by loading the previously used generators.
End of explanation
task = engine.task_run_trajectory(project.new_trajectory(pdb_file, 100))
task.script
Explanation: A simple task
A task is in essence a bash script-like description of what should be executed by the worker. It has details about files to be linked to the working directory, bash commands to be executed and some meta information about what should happen in case we succeed or fail.
The execution structure
Let's first explain briefly how a task is executed and what its components are. This was originally built so that it is compatible with radical.pilot and still is. So, if you are familiar with it, all of the following information should sound very familiar.
A task is executed from within a unique directory that only exists for this particular task. These are located in adaptivemd/workers/ and look like
worker.0x5dcccd05097611e7829b000000000072L/
the long number is a hex representation of the UUID of the task. Just if you are curious type
print hex(my_task.__uuid__)
Then we change directory to this folder, write a running.sh bash script, and execute it. This script is created from the task definition and also depends on your resource setting (which basically only contains the path to the workers directory, etc.)
The script is divided into 1 or 3 parts depending on which Task class you use. The main Task uses a single list of commands, while PrePostTask has the following structure
Pre-Exec: Things to happen before the main command (optional)
Main: the main commands are executed
Post-Exec: Things to happen after the main command (optional)
Okay, lots of theory, now some real code for running a task that generated a trajectory
End of explanation
print task.description
Explanation: We are linking a lot of files to the worker directory and change the name for the .pdb in the process. Then call the actual python script that runs openmm. And finally move the output.dcd and the restart file back to the trajectory folder.
There is a way to list lots of things about tasks and we will use it a lot to see our modifications.
End of explanation
task.append('echo "This new line is pointless"')
print task.description
Explanation: Modify a task
As long as a task is not saved and hence placed in the queue, it can be altered in any way. All of the 3 / 5 phases can be changed separately. You can add things to the staging phases or bash phases or change the command. So, let's do that now
Add a bash line
First, a Task is very similar to a list of bash commands and you can simply append (or prepend) a command. A text line will be interpreted as a bash command.
End of explanation
traj = project.trajectories.one
transaction = traj.copy()
print transaction
Explanation: As expected this line was added to the end of the script.
Add staging actions
To set staging is more difficult. The reason is that you normally have no idea where files are located and hence writing a copy or move is impossible. This is why the staging commands are not bash lines but objects that hold information about the actual file transaction to be done. There are some task methods that help you move files, but files themselves can also generate these commands for you.
Let's move one trajectory (directory) around a little more as an example
End of explanation
transaction = traj.copy('new_traj/')
print transaction
Explanation: This looks just like it does in the script. The default for a copy is to copy a file or folder to the worker directory under the same name, but you can give it another name/location if you use that as an argument. Note that since trajectories are a directory you need to give a directory name (which ends in a /)
End of explanation
transaction = traj.copy('staging:///cached_trajs/')
print transaction
Explanation: If you want to move it not to the worker directory you have to specify the location and you can do so with the prefixes (shared://, sandbox://, staging:// as explained in the previous examples)
End of explanation
transaction = pdb_file.copy('staging:///delete.pdb')
print transaction
transaction = pdb_file.move('staging:///delete.pdb')
print transaction
transaction = pdb_file.link('staging:///delete.pdb')
print transaction
Explanation: Besides .copy you can also .move or .link files.
End of explanation
new_pdb = File('file://../files/ntl9/ntl9.pdb').load()
Explanation: Local files
Let's mention these because they require special treatment. Unlike RP, we cannot copy files directly to the HPC; we need to store them in the DB first.
End of explanation
print new_pdb.location
Explanation: Make sure you use file:// to indicate that you are using a local file. The above example uses a relative path which will be replaced by an absolute one, otherwise we ran into trouble once we open the project at a different directory.
End of explanation
print new_pdb.get_file()[:300]
Explanation: Note that now there are 3 / in the filename, two from the :// and one from the root directory of your machine
The load() at the end really loads the file and when you save this File now it will contain the content of the file. You can access this content as seen in the previous example.
End of explanation
transaction = new_pdb.transfer()
print transaction
task.append(transaction)
print task.description
Explanation: For local files you normally use .transfer, but copy, move or link work as well. Still, there is no difference since the file only exists in the DB now and copying from the DB to a place on the HPC results in a simple file creation.
Now, we want to add a command to the staging and see what happens.
End of explanation
new_pdb.exists
Explanation: We now have one more transfer command. But something else has changed. There is one more file listed as required. So, the task can only run if that file exists, but since we loaded it into the DB, it exists (for us). For example, the newly created trajectory 25.dcd does not exist yet. If that were a requirement, the task would fail. But let's check that it exists.
End of explanation
task.append('stat ntl9.pdb')
Explanation: Okay, we have now the PDB file staged and so any real bash commands could work with a file ntl9.pdb. Alright, so let's output its stats.
End of explanation
project.queue(task)
Explanation: Note that usually you place these stage commands at the top of your script.
Now we could run this task as before and see if it works. (Make sure you still have a worker running)
End of explanation
task.state
Explanation: And check, that the task is running
End of explanation
print task.stdout
Explanation: If we did not screw up the task, it should have succeeded and we can look at the STDOUT.
End of explanation
from adaptivemd import WorkerScheduler
sc = WorkerScheduler(project.resource)
Explanation: Well, great, we have the pointless output and the stats of the newly staged file ntl9.pdb
What does a real script look like?
Just for fun let's create the same scheduler that the adaptivemdworker uses, but from inside this notebook.
End of explanation
sc.project = project
Explanation: If you really wanted to use the worker you need to initialize it and it will create directories and stage files for the generators, etc. For that you need to call sc.enter(project), but since we only want it to parse our tasks, we only set the project without invoking initialization. You should normally not do that.
End of explanation
print '\n'.join(sc.task_to_script(task))
Explanation: Now we can use a function .task_to_script that will parse a task into a bash script. So this is really what would be run on your machine now.
End of explanation
task = Task()
task.append('touch staging:///my_file.txt')
print '\n'.join(sc.task_to_script(task))
Explanation: Now you see that all file paths have been properly interpreted to work. See that there is a comment about a temporary file from the DB that is then renamed. This is a little trick to be compatible with RPs way of handling files. (TODO: We might change this to just write to the target file. Need to check if that is still consistent)
A note on file locations
One problem with bash scripts is that when you create the tasks you have no idea where the files are actually located. To get around this, the created bash script is scanned for paths that contain the prefixes we are used to, and these are interpreted in the context of the worker / scheduler. The worker is the only instance that knows everything necessary, so this is the place to fix that problem.
Let's see that in a little example, where we create an empty file in the staging area.
End of explanation
task = Task()
Explanation: And voila, the path has changed to a relative path from the working directory of the worker. Note that you see here the line we added in the very beginning of example 1 to our resource!
A Task from scratch
If you want to start a new task you can begin with
End of explanation
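For illustration, here is a small sketch (using only calls already shown earlier in this notebook) of fleshing out such a fresh task with one staging action and one bash line:
# Small sketch using only APIs demonstrated above; new_pdb is the DB-backed file loaded earlier.
custom_task = Task()
custom_task.append(new_pdb.transfer())           # stage the DB-backed PDB into the worker directory
custom_task.append('echo "custom task ran"')     # plain bash command, as before
print custom_task.description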
%%file my_rpc_function.py
def my_func(f):
import os
print f
return os.path.getsize(f)
Explanation: as we did before.
Just start adding staging and bash commands and you are done. When you create a task you can assign it a generator, then the system will assume that this task was generated by that generator, so don't do it for you custom tasks, unless you generated them in a generator. Setting this allows you to tell a worker only to run tasks of certain types.
The Python RPC Task
The tasks so far are very powerful, but they lack the possibility to call a python function. Since we are using python here, it would be great to really pretend to call a python function from here and not take the detour of writing a python bash executable with arguments, etc... An example for this is the PyEmma generator which uses this capability.
Let's do an example of this as well. Assume we have a python function in a file (you need to have your code in a file so far so that we can copy the file to the HPC if necessary). Let's create the .py file now.
End of explanation
task = PythonTask()
Explanation: Now create a PythonTask instead
End of explanation
from my_rpc_function import my_func
Explanation: and the call function has changed. Note that also now you can still add all the bash and stage commands as before. A PythonTask is also a subclass of PrePostTask so we have a .pre and .post phase available.
End of explanation
task.call(my_func, f=project.trajectories.one)
print task.description
Explanation: We call the function my_func with one argument
End of explanation
project.queue(task)
Explanation: Well, interesting. What this actually does is to write the input arguments to the function into a temporary .json file on the worker, (in RP on the local machine and then transfers it to remote), rename it to input.json and read it in the _run_.py. This is still a little clumsy, but needs to be this way to be RP compatible which only works with files! Look at the actual script.
You see, that we really copy the .py file that contains the source code to the worker directory. All that is done automatically. A little caution on this. You can either write a function in a single file or use any installed package, but in this case the same package needs to be installed on the remote machine as well!
Let's run it and see what happens.
End of explanation
project.wait_until(task.is_done)
Explanation: And wait until the task is done
End of explanation
task.output
Explanation: The default settings will automatically save the content from the resulting output.json in the DB and you can access the data that was returned from the task at .output. In our example the result was just the size of the file in bytes
End of explanation
task = modeller.execute(project.trajectories)
task.then_func_name
Explanation: And you can use this information in an adaptive script to make decisions.
success callback
The last thing we did not talk about is the possibility to also call a function with the returned data automatically on successful execution. Since this function is executed on the worker we (so far) only support function calls with the following restrictions.
you can call a function of the related generator class. for this you need to create the task using PythonTask(generator)
the function name you want to call is stored in task.then_func_name. So you can write a generator class with several possible outcomes and choose the function for each task.
The Generator needs to be part of adaptivemd
So in the case of modeller.execute we create a PythonTask that references the following functions
End of explanation
help(modeller.then_func)
Explanation: So we will call the default then_func of modeller or the class modeller is of.
End of explanation
project.close()
Explanation: These callbacks are called with the current project, the resulting data (which is in the modeller case a Model object) and array of initial inputs.
This is the actual code of the callback
py
@staticmethod
def then_func(project, task, model, inputs):
# add the input arguments for later reference
model.data['input']['trajectories'] = inputs['kwargs']['files']
model.data['input']['pdb'] = inputs['kwargs']['topfile']
project.models.add(model)
All it does is to add some of the input parameters to the model for later reference and then store the model in the project. You are free to define all sorts of actions here, even queue new tasks.
Next, we will talk about the factories for Task objects, called generators. There we will actually write a new class that does some stuff with the results.
End of explanation |
14,216 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quiz: Hypothesis Testing Practice
According to a survey, 75% of restaurant workers say that they experience significant stress at work that negatively affects their personal lives. A large restaurant chain surveys 100 of its employees to find out whether the stress level of workers in its restaurants differs from the average. 67 of the 100 workers reported a high level of stress.
Compute the achieved significance level (p-value); round your answer to four decimal places.
Step1: <b>
Now suppose that in another restaurant chain only 22 of 50 workers experience significant stress. The hypothesis that 22/50 is consistent with 75% in the whole population is rejected by the method you used in the previous problem. How can this be explained? Select all possible options.
Step2: <b>
The Wage Tract is a nature reserve in Thomas County, Georgia, USA, whose trees have not been affected by human activity since the time of the first settlers. For a 200x200 m plot of the reserve we have the coordinates of the pines (sn is the coordinate in the north-south direction, we in the west-east direction, both from 0 to 200).
pines.txt
Let's check whether the spatial distribution of the pines can be considered uniform, or whether they grow in clusters.
Load the data, split the plot into 5x5 identical 40x40 m squares, and count the number of pines in each square (to get the same result as ours, use the scipy.stats.binned_statistic_2d function).
If the pines really do grow uniformly, what is the expected average number of pines per square? The correct answer has two digits after the decimal point.
Step3: <b>
To compare the distribution of the pines with a uniform one, compute the value of the chi-square statistic for the resulting 5x5 squares. Round your answer to two digits after the decimal point. | Python Code:
from __future__ import division
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
n = 100
prob = 0.75
F_H0 = stats.binom(n, prob)
x = np.linspace(0,100,101)
plt.bar(x, F_H0.pmf(x), align = 'center')
plt.xlim(60, 90)
plt.show()
print('p-value: %.4f' % stats.binom_test(67, 100, prob))
Explanation: Quiz: Hypothesis Testing Practice
According to a survey, 75% of restaurant workers say that they experience significant stress at work that negatively affects their personal lives. A large restaurant chain surveys 100 of its employees to find out whether the stress level of workers in its restaurants differs from the average. 67 of the 100 workers reported a high level of stress.
Compute the achieved significance level (p-value); round your answer to four decimal places.
End of explanation
print('p-value: %.10f' % stats.binom_test(22, 50, prob))
Explanation: <b>
Now suppose that in another restaurant chain only 22 of 50 workers experience significant stress. The hypothesis that 22/50 is consistent with 75% in the whole population is rejected by the method you used in the previous problem. How can this be explained? Select all possible options.
End of explanation
pines_data = pd.read_table('pines.txt')
pines_data.describe()
pines_data.head()
sns.pairplot(pines_data, size=4);
sn_num, we_num = 5, 5
trees_bins = stats.binned_statistic_2d(pines_data.sn, pines_data.we, None, statistic='count', bins=[sn_num, we_num])
trees_squares_num = trees_bins.statistic
trees_squares_num
trees_bins.x_edge
trees_bins.y_edge
mean_trees_num = np.sum(trees_squares_num) / 25
print(mean_trees_num)
Explanation: <b>
The Wage Tract is a nature reserve in Thomas County, Georgia, USA, whose trees have not been affected by human activity since the time of the first settlers. For a 200x200 m plot of the reserve we have the coordinates of the pines (sn is the coordinate in the north-south direction, we in the west-east direction, both from 0 to 200).
pines.txt
Let's check whether the spatial distribution of the pines can be considered uniform, or whether they grow in clusters.
Load the data, split the plot into 5x5 identical 40x40 m squares, and count the number of pines in each square (to get the same result as ours, use the scipy.stats.binned_statistic_2d function).
If the pines really do grow uniformly, what is the expected average number of pines per square? The correct answer has two digits after the decimal point.
End of explanation
stats.chisquare(trees_squares_num.flatten(), ddof = 0)
Explanation: <b>
To compare the distribution of the pines with a uniform one, compute the value of the chi-square statistic for the resulting 5x5 squares. Round your answer to two digits after the decimal point.
End of explanation |
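As a quick sanity check, the same statistic can be computed directly from its definition, using the variables defined above:
# Manual verification of the chi-square statistic: sum((observed - expected)^2 / expected).
observed = trees_squares_num.flatten()
expected = np.full_like(observed, mean_trees_num, dtype=float)  # uniform null: same expected count per square
print(np.sum((observed - expected) ** 2 / expected))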
14,217 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
You'll need to download some resources for NLTK (the natural language toolkit) in order to do the kind of processing we want on all the mailing list text. In particular, for this notebook you'll need punkt, the Punkt Tokenizer Models.
To download, from an interactive Python shell, run
Step1: Group the dataframe by the month and year, and aggregate the counts for the checkword during each month to get a quick histogram of how frequently that word has been used over time. | Python Code:
df = pd.DataFrame(columns=["MessageId","Date","From","In-Reply-To","Count"])
for row in archives[0].data.iterrows():
try:
w = row[1]["Body"].replace("'", "")
k = re.sub(r'[^\w]', ' ', w)
k = k.lower()
t = nltk.tokenize.word_tokenize(k)
subdict = {}
count = 0
for g in t:
try:
word = st.stem(g)
except:
print g
pass
if word == checkword:
count += 1
if count == 0:
continue
else:
subdict["MessageId"] = row[0]
subdict["Date"] = row[1]["Date"]
subdict["From"] = row[1]["From"]
subdict["In-Reply-To"] = row[1]["In-Reply-To"]
subdict["Count"] = count
df = df.append(subdict,ignore_index=True)
except:
if row[1]["Body"] is None:
print '!!! Detected an email with an empty Body field...'
else: print 'error'
df[:5]  # DataFrame of information about the particular word.
Explanation: You'll need to download some resources for NLTK (the natural language toolkit) in order to do the kind of processing we want on all the mailing list text. In particular, for this notebook you'll need punkt, the Punkt Tokenizer Models.
To download, from an interactive Python shell, run:
import nltk
nltk.download()
And in the graphical UI that appears, choose "punkt" from the All Packages tab and Download.
End of explanation
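If you prefer to skip the GUI, the same resource can be fetched non-interactively (a small sketch):
# Non-interactive download of just the 'punkt' tokenizer models.
import nltk
nltk.download('punkt')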
df.groupby([df.Date.dt.year, df.Date.dt.month]).agg({'Count':np.sum}).plot(y='Count')
Explanation: Group the dataframe by the month and year, and aggregate the counts for the checkword during each month to get a quick histogram of how frequently that word has been used over time.
End of explanation |
14,218 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Estimators Deep Dive
The purpose of this tutorial is to explain the details of how to create a premade TensorFlow estimator, how training and evaluation work with different configurations, and how the model is exported for serving. The tutorial covers the following points
Step1: Download Data
UCI Adult Dataset
Step2: The training data includes 32,561 records, while the evaluation data includes 16,278 records.
Step3: Dataset Metadata
Step4: 1. Data Input Function
Use tf.data.Dataset APIs
Step5: 2. Create feature columns
<br/>
<img valign="middle" src="images/tf-feature-columns.jpeg" width="800">
Base feature columns
1. numeric_column
2. categorical_column_with_vocabulary_list
3. categorical_column_with_vocabulary_file
4. categorical_column_with_identity
5. categorical_column_with_hash_buckets
Extended feature columns
1. bucketized_column
2. indicator_column
3. crossing_column
4. embedding_column
Step6: 3. Instantiate a Wide and Deep Estimator
<br/>
<img valign="middle" src="images/dnn-wide-deep.jpeg">
Step7: 4. Implement Train and Evaluate Experiment
<img valign="middle" src="images/tf-estimators.jpeg" width="900">
Delete the model_dir file if you don't want a Warm Start
* If not deleted, and you change the model, it will error.
TrainSpec
* Set shuffle in the input_fn to True
* Set num_epochs in the input_fn to None
* Set max_steps. One batch (feed-forward pass & backpropagation)
corresponds to 1 training step.
EvalSpec
* Set shuffle in the input_fn to False
* Set Set num_epochs in the input_fn to 1
* Set steps to None if you want to use all the evaluation data.
* Otherwise, set steps to the number of batches you want to use for evaluation, and set shuffle to True.
* Set start_delay_secs to 0 to start evaluation as soon as a checkpoint is produced.
* Set throttle_secs to 0 to re-evaluate as soon as a new checkpoint is produced.
Step8: Set Parameters and Run Configurations.
Set model_dir in the run_config
If the data size is known, training steps, with respect to epochs would be
Step9: Run Experiment
Step10: 5. Export your trained model
Implement serving input receiver function
Step11: Export to saved_model
Step12: Test saved_model
Step13: Export the Model during Training and Evaluation
Saved models are exported under <model_dir>/export/<folder_name>.
* Latest Exporter
Step14: 6. Early Stopping
stop_if_higher_hook
stop_if_lower_hook
stop_if_no_increase_hook
stop_if_no_decrease_hook
Step15: 7. Using Distribution Strategy for Utilising Multiple GPUs
Step16: 8. Extending a Premade Estimator
Add an evaluation metric
tf.metrics
tf.estimator.add_metric
Step17: Add Forward Features
tf.estimator.forward_features
This is very useful for batch prediction, in order to map instances to their predictions
Step18: 9. Adaptive learning rate
exponential_decay
consine_decay
linear_cosine_decay
consine_decay_restarts
polynomial decay
piecewise_constant_decay | Python Code:
try:
COLAB = True
from google.colab import auth
auth.authenticate_user()
except:
pass
RANDOM_SEED = 19831006
import os
import math
import multiprocessing
import pandas as pd
from datetime import datetime
import tensorflow as tf
print "TensorFlow : {}".format(tf.__version__)
tf.enable_eager_execution()
print "Eager Execution Enabled: {}".format(tf.executing_eagerly())
Explanation: TensorFlow Estimators Deep Dive
The purpose of this tutorial is to explain the details of how to create a premade TensorFlow estimator, how training and evaluation work with different configurations, and how the model is exported for serving. The tutorial covers the following points:
Implementing Input function with tf.data APIs.
Creating Feature columns.
Creating a Wide and Deep model with a premade estimator.
Configuring Train and evaluate parameters.
Exporting trained models for serving.
Implementing Early stopping.
Distribution Strategy for multi-GPUs.
Extending premade estimators.
Adaptive learning rate.
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/sme_academy/01_tf_estimator_deepdive.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<img valign="middle" src="images/tf-layers.jpeg" width="400">
End of explanation
DATA_DIR='data'
!mkdir $DATA_DIR
!gsutil cp gs://cloud-samples-data/ml-engine/census/data/adult.data.csv $DATA_DIR
!gsutil cp gs://cloud-samples-data/ml-engine/census/data/adult.test.csv $DATA_DIR
TRAIN_DATA_FILE = os.path.join(DATA_DIR, 'adult.data.csv')
EVAL_DATA_FILE = os.path.join(DATA_DIR, 'adult.test.csv')
!wc -l $TRAIN_DATA_FILE
!wc -l $EVAL_DATA_FILE
Explanation: Download Data
UCI Adult Dataset: https://archive.ics.uci.edu/ml/datasets/adult
Predict whether income exceeds $50K/yr based on census data. Also known as "Census Income" dataset.
End of explanation
HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',
'marital_status', 'occupation', 'relationship', 'race', 'gender',
'capital_gain', 'capital_loss', 'hours_per_week',
'native_country', 'income_bracket']
pd.read_csv(TRAIN_DATA_FILE, names=HEADER).head()
Explanation: The training data includes 32,561 records, while the evaluation data includes 16,278 records.
End of explanation
HEADER = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',
'marital_status', 'occupation', 'relationship', 'race', 'gender',
'capital_gain', 'capital_loss', 'hours_per_week',
'native_country', 'income_bracket']
HEADER_DEFAULTS = [[0], [''], [0], [''], [0], [''], [''], [''], [''], [''],
[0], [0], [0], [''], ['']]
NUMERIC_FEATURE_NAMES = ['age', 'education_num', 'capital_gain', 'capital_loss', 'hours_per_week']
CATEGORICAL_FEATURE_WITH_VOCABULARY = {
'workclass': ['State-gov', 'Self-emp-not-inc', 'Private', 'Federal-gov', 'Local-gov', '?', 'Self-emp-inc', 'Without-pay', 'Never-worked'],
'relationship': ['Not-in-family', 'Husband', 'Wife', 'Own-child', 'Unmarried', 'Other-relative'],
'gender': [' Male', 'Female'], 'marital_status': [' Never-married', 'Married-civ-spouse', 'Divorced', 'Married-spouse-absent', 'Separated', 'Married-AF-spouse', 'Widowed'],
'race': [' White', 'Black', 'Asian-Pac-Islander', 'Amer-Indian-Eskimo', 'Other'],
'education': ['Bachelors', 'HS-grad', '11th', 'Masters', '9th', 'Some-college', 'Assoc-acdm', 'Assoc-voc', '7th-8th', 'Doctorate', 'Prof-school', '5th-6th', '10th', '1st-4th', 'Preschool', '12th'],
}
CATEGORICAL_FEATURE_WITH_HASH_BUCKETS = {
'native_country': 60,
'occupation': 20
}
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_WITH_VOCABULARY.keys() + CATEGORICAL_FEATURE_WITH_HASH_BUCKETS.keys()
TARGET_NAME = 'income_bracket'
TARGET_LABELS = [' <=50K', ' >50K']
WEIGHT_COLUMN_NAME = 'fnlwgt'
Explanation: Dataset Metadata
End of explanation
def process_features(features, target):
for feature_name in CATEGORICAL_FEATURE_WITH_VOCABULARY.keys() + CATEGORICAL_FEATURE_WITH_HASH_BUCKETS.keys():
features[feature_name] = tf.strings.strip(features[feature_name])
features['capital_total'] = features['capital_gain'] - features['capital_loss']
return features, target
def make_input_fn(file_pattern, batch_size, num_epochs=1, shuffle=False):
def _input_fn():
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
column_names=HEADER,
column_defaults=HEADER_DEFAULTS,
label_name=TARGET_NAME,
field_delim=',',
use_quote_delim=True,
header=False,
num_epochs=num_epochs,
shuffle=shuffle,
shuffle_buffer_size=(5 * batch_size),
shuffle_seed=RANDOM_SEED,
num_parallel_reads=multiprocessing.cpu_count(),
sloppy=True,
)
return dataset.map(process_features).cache()
return _input_fn
# You need to run tf.enable_eager_execution() at the top.
dataset = make_input_fn(TRAIN_DATA_FILE, batch_size=1)()
for features, target in dataset.take(1):
print "Input Features:"
for key in features:
print "{}:{}".format(key, features[key])
print ""
print "Target:"
print target
Explanation: 1. Data Input Function
Use tf.data.Dataset APIs: list_files(), skip(), map(), filter(), batch(), shuffle(), repeat(), prefetch(), cache(), etc.
Use tf.data.experimental.make_csv_dataset to read and parse CSV data files.
Use tf.data.experimental.make_batched_features_dataset to read and parse TFRecords data files.
End of explanation
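For completeness, here is a hedged sketch of the TFRecords variant mentioned above. The feature_spec below is an assumed schema for illustration only; adapt it to the features actually stored in your TFRecords files.
# Hedged sketch: reading TFRecords with make_batched_features_dataset (feature_spec is an assumed schema).
def make_tfrecords_input_fn(file_pattern, batch_size, num_epochs=1, shuffle=False):
    feature_spec = {
        'age': tf.FixedLenFeature([], tf.float32),
        'workclass': tf.FixedLenFeature([], tf.string),
        'income_bracket': tf.FixedLenFeature([], tf.string),
    }
    def _input_fn():
        dataset = tf.data.experimental.make_batched_features_dataset(
            file_pattern=file_pattern,
            batch_size=batch_size,
            features=feature_spec,          # parsing spec instead of CSV column names/defaults
            label_key='income_bracket',
            num_epochs=num_epochs,
            shuffle=shuffle)
        return dataset
    return _input_fn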
def create_feature_columns():
wide_columns = []
deep_columns = []
for column in NUMERIC_FEATURE_NAMES:
# Create numeric columns.
numeric_column = tf.feature_column.numeric_column(column)
deep_columns.append(numeric_column)
for column in CATEGORICAL_FEATURE_WITH_VOCABULARY:
# Create categorical columns with vocab.
vocabolary = CATEGORICAL_FEATURE_WITH_VOCABULARY[column]
categorical_column = tf.feature_column.categorical_column_with_vocabulary_list(
column, vocabolary)
wide_columns.append(categorical_column)
# Create embeddings of the categorical columns.
embed_size = int(math.sqrt(len(vocabolary)))
embedding_column = tf.feature_column.embedding_column(
categorical_column, embed_size)
deep_columns.append(embedding_column)
for column in CATEGORICAL_FEATURE_WITH_HASH_BUCKETS:
# Create categorical columns with hashing.
hash_columns = tf.feature_column.categorical_column_with_hash_bucket(
column,
hash_bucket_size=CATEGORICAL_FEATURE_WITH_HASH_BUCKETS[column])
wide_columns.append(hash_columns)
# Create indicators for hashing columns.
indicator_column = tf.feature_column.indicator_column(hash_columns)
deep_columns.append(indicator_column)
# Create bucktized column.
age_bucketized = tf.feature_column.bucketized_column(
deep_columns[0], boundaries = [18, 25, 30, 35, 40, 45, 50, 55, 60]
)
wide_columns.append(age_bucketized)
# Create crossing column.
education_X_occupation = tf.feature_column.crossed_column(
['education', 'workclass'], hash_bucket_size=int(1e4))
wide_columns.append(education_X_occupation)
# Create embeddings for crossing column.
education_X_occupation_embedded = tf.feature_column.embedding_column(
education_X_occupation, dimension=10)
deep_columns.append(education_X_occupation_embedded)
return wide_columns, deep_columns
wide_columns, deep_columns = create_feature_columns()
print ""
print "Wide columns:"
for column in wide_columns:
print column
print ""
print "Deep columns:"
for column in deep_columns:
print column
Explanation: 2. Create feature columns
<br/>
<img valign="middle" src="images/tf-feature-columns.jpeg" width="800">
Base feature columns
1. numeric_column
2. categorical_column_with_vocabulary_list
3. categorical_column_with_vocabulary_file
4. categorical_column_with_identity
5. categorical_column_with_hash_buckets
Extended feature columns
1. bucketized_column
2. indicator_column
3. crossing_column
4. embedding_column
End of explanation
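Two of the base column types listed above are not used in create_feature_columns(). A hedged sketch of what they look like follows; the vocabulary file name and the 'weekday' feature are made up purely for illustration.
# Hedged sketch of the remaining base column types (file name and 'weekday' feature are hypothetical).
education_from_file = tf.feature_column.categorical_column_with_vocabulary_file(
    key='education', vocabulary_file='education_vocab.txt', vocabulary_size=16)
weekday_column = tf.feature_column.categorical_column_with_identity(
    key='weekday', num_buckets=7, default_value=0)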
def create_estimator(params, run_config):
wide_columns, deep_columns = create_feature_columns()
estimator = tf.estimator.DNNLinearCombinedClassifier(
n_classes=len(TARGET_LABELS),
label_vocabulary=TARGET_LABELS,
weight_column=WEIGHT_COLUMN_NAME,
dnn_feature_columns=deep_columns,
dnn_optimizer=tf.train.AdamOptimizer(
learning_rate=params.learning_rate),
dnn_hidden_units=params.hidden_units,
dnn_dropout=params.dropout,
dnn_activation_fn=tf.nn.relu,
batch_norm=True,
linear_feature_columns=wide_columns,
linear_optimizer='Ftrl',
config=run_config
)
return estimator
Explanation: 3. Instantiate a Wide and Deep Estimator
<br/>
<img valign="middle" src="images/dnn-wide-deep.jpeg">
End of explanation
def run_experiment(estimator, params, run_config,
resume=False, train_hooks=None, exporters=None):
print "Resume training {}: ".format(resume)
print "Epochs: {}".format(epochs)
print "Batch size: {}".format(params.batch_size)
print "Steps per epoch: {}".format(steps_per_epoch)
print "Training steps: {}".format(params.max_steps)
print "Learning rate: {}".format(params.learning_rate)
print "Hidden Units: {}".format(params.hidden_units)
print "Dropout probability: {}".format(params.dropout)
print "Save a checkpoint and evaluate afer {} step(s)".format(run_config.save_checkpoints_steps)
print "Keep the last {} checkpoint(s)".format(run_config.keep_checkpoint_max)
print ""
tf.logging.set_verbosity(tf.logging.INFO)
if not resume:
if tf.gfile.Exists(run_config.model_dir):
print "Removing previous artefacts..."
tf.gfile.DeleteRecursively(run_config.model_dir)
else:
print "Resuming training..."
# Create train specs.
train_spec = tf.estimator.TrainSpec(
input_fn = make_input_fn(
TRAIN_DATA_FILE,
batch_size=params.batch_size,
num_epochs=None, # Run until the max_steps is reached.
shuffle=True
),
max_steps=params.max_steps,
hooks=train_hooks
)
# Create eval specs.
eval_spec = tf.estimator.EvalSpec(
input_fn = make_input_fn(
EVAL_DATA_FILE,
batch_size=params.batch_size,
),
exporters=exporters,
start_delay_secs=0,
throttle_secs=0,
steps=None # Set to limit number of steps for evaluation.
)
time_start = datetime.utcnow()
print "Experiment started at {}".format(time_start.strftime("%H:%M:%S"))
print "......................................."
# Run train and evaluate.
tf.estimator.train_and_evaluate(
estimator=estimator,
train_spec=train_spec,
eval_spec=eval_spec)
time_end = datetime.utcnow()
print "......................................."
print "Experiment finished at {}".format(time_end.strftime("%H:%M:%S"))
print ""
time_elapsed = time_end - time_start
print "Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds())
Explanation: 4. Implement Train and Evaluate Experiment
<img valign="middle" src="images/tf-estimators.jpeg" width="900">
Delete the model_dir file if you don't want a Warm Start
* If not deleted, and you change the model, it will error.
TrainSpec
* Set shuffle in the input_fn to True
* Set num_epochs in the input_fn to None
* Set max_steps. One batch (feed-forward pass & backpropagation)
corresponds to 1 training step.
EvalSpec
* Set shuffle in the input_fn to False
* Set Set num_epochs in the input_fn to 1
* Set steps to None if you want to use all the evaluation data.
* Otherwise, set steps to the number of batches you want to use for evaluation, and set shuffle to True.
* Set start_delay_secs to 0 to start evaluation as soon as a checkpoint is produced.
* Set throttle_secs to 0 to re-evaluate as soon as a new checkpoint is produced.
End of explanation
class Parameters():
pass
MODELS_LOCATION = 'gs://ksalama-gcs-cloudml/others/models/census'
MODEL_NAME = 'dnn_classifier'
model_dir = os.path.join(MODELS_LOCATION, MODEL_NAME)
os.environ['MODEL_DIR'] = model_dir
TRAIN_DATA_SIZE = 32561
params = Parameters()
params.learning_rate = 0.001
params.hidden_units = [128, 128, 128]
params.dropout = 0.15
params.batch_size = 128
# Set number of steps with respect to epochs.
epochs = 5
steps_per_epoch = int(math.ceil(TRAIN_DATA_SIZE / params.batch_size))
params.max_steps = steps_per_epoch * epochs
run_config = tf.estimator.RunConfig(
tf_random_seed=RANDOM_SEED,
save_checkpoints_steps=steps_per_epoch, # Save a checkpoint after each epoch, evaluate the model after each epoch.
keep_checkpoint_max=3, # Keep the 3 most recently produced checkpoints.
model_dir=model_dir,
save_summary_steps=100, # Summary steps for Tensorboard.
log_step_count_steps=50
)
Explanation: Set Parameters and Run Configurations.
Set model_dir in the run_config
If the data size is known, training steps, with respect to epochs would be: (training_size / batch_size) * epochs
By default, a checkpoint is saved every 600 secs. That is, the model is evaluated only every 10mins.
To change this behaviour, set one of the following parameters in the run_config
save_checkpoints_secs: Save checkpoints every this many seconds.
save_checkpoints_steps: Save checkpoints every this many steps.
Set the number of the checkpoints to keep using keep_checkpoint_max
End of explanation
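As a hedged illustration of the time-based alternative mentioned above (reusing RANDOM_SEED and model_dir already defined in this notebook; the 120-second interval is only an example):
# Sketch only: checkpoint on a wall-clock schedule instead of a step schedule.
run_config_secs = tf.estimator.RunConfig(
    tf_random_seed=RANDOM_SEED,
    save_checkpoints_secs=120,  # mutually exclusive with save_checkpoints_steps
    keep_checkpoint_max=3,
    model_dir=model_dir
)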
if COLAB:
from tensorboardcolab import *
TensorBoardColab(graph_path=model_dir)
estimator = create_estimator(params, run_config)
run_experiment(estimator, params, run_config)
print model_dir
!gsutil ls {model_dir}
Explanation: Run Experiment
End of explanation
def make_serving_input_receiver_fn():
inputs = {}
for feature_name in FEATURE_NAMES:
dtype = tf.float32 if feature_name in NUMERIC_FEATURE_NAMES else tf.string
inputs[feature_name] = tf.placeholder(shape=[None], dtype=dtype)
# What is wrong here?
return tf.estimator.export.build_raw_serving_input_receiver_fn(inputs)
Explanation: 5. Export your trained model
Implement serving input receiver function
End of explanation
export_dir = os.path.join(model_dir, 'export')
# Delete export directory if exists.
if tf.gfile.Exists(export_dir):
tf.gfile.DeleteRecursively(export_dir)
# Export the estimator as a saved_model.
estimator.export_savedmodel(
export_dir_base=export_dir,
serving_input_receiver_fn=make_serving_input_receiver_fn()
)
!gsutil ls gs://ksalama-gcs-cloudml/others/models/census/dnn_classifier/export/1552582374
%%bash
saved_models_base=${MODEL_DIR}/export/
saved_model_dir=$(gsutil ls ${saved_models_base} | tail -n 1)
saved_model_cli show --dir=${saved_model_dir} --all
Explanation: Export to saved_model
End of explanation
export_dir = os.path.join(model_dir, 'export')
tf.gfile.ListDirectory(export_dir)[-1]
saved_model_dir = os.path.join(export_dir, tf.gfile.ListDirectory(export_dir)[-1])
print(saved_model_dir)
print ""
predictor_fn = tf.contrib.predictor.from_saved_model(
export_dir = saved_model_dir,
signature_def_key="predict"
)
output = predictor_fn(
{
'age': [34.0],
'workclass': ['Private'],
'education': ['Doctorate'],
'education_num': [10.0],
'marital_status': ['Married-civ-spouse'],
'occupation': ['Prof-specialty'],
'relationship': ['Husband'],
'race': ['White'],
'gender': ['Male'],
'capital_gain': [0.0],
'capital_loss': [0.0],
'hours_per_week': [40.0],
'native_country':['Egyptian']
}
)
print(output)
Explanation: Test saved_model
End of explanation
def _accuracy_bigger(best_eval_result, current_eval_result):
metric = 'accuracy'
return best_eval_result[metric] < current_eval_result[metric]
params.max_steps = 1000
params.hidden_units = [128, 128]
params.dropout = 0
run_config = tf.estimator.RunConfig(
tf_random_seed=RANDOM_SEED,
save_checkpoints_steps=200,
keep_checkpoint_max=1,
model_dir=model_dir,
log_step_count_steps=50
)
exporter = tf.estimator.BestExporter(
compare_fn=_accuracy_bigger,
event_file_pattern='eval_{}/*.tfevents.*'.format(datetime.utcnow().strftime("%H%M%S")),
name="estimate", # Saved models are exported under /export/estimate/
serving_input_receiver_fn=make_serving_input_receiver_fn(),
exports_to_keep=1
)
estimator = create_estimator(params, run_config)
run_experiment(estimator, params, run_config, exporters = [exporter])
!gsutil ls {model_dir}/export/estimate
Explanation: Export the Model during Training and Evaluation
Saved models are exported under <model_dir>/export/<folder_name>.
* Latest Exporter: exports a model after each evaluation.
* specify the maximum number of exported models to keep using exports_to_keep param.
* Final Exporter: exports only the very last evaluated checkpoint of the model (see the sketch after this list).
* Best Exporter: runs every time the newly evaluated checkpoint is better than any existing model.
* specify the maximum number of exported models to keep using exports_to_keep param.
* It uses the evaluation events stored under the eval folder.
End of explanation
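A minimal sketch of the Latest and Final exporter types listed above, reusing make_serving_input_receiver_fn from this notebook; the names "latest"/"final" and exports_to_keep=3 are illustrative choices:
latest_exporter = tf.estimator.LatestExporter(
    name="latest",  # saved models land under /export/latest/
    serving_input_receiver_fn=make_serving_input_receiver_fn(),
    exports_to_keep=3
)
final_exporter = tf.estimator.FinalExporter(
    name="final",  # exported once, for the final evaluated checkpoint
    serving_input_receiver_fn=make_serving_input_receiver_fn()
)
# These could be passed alongside (or instead of) the BestExporter above, e.g.:
# run_experiment(estimator, params, run_config, exporters=[latest_exporter, final_exporter])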
early_stopping_hook = tf.contrib.estimator.stop_if_no_increase_hook(
estimator,
'accuracy',
max_steps_without_increase=100,
run_every_secs=None,
run_every_steps=500
)
params.max_steps = 1000000
params.hidden_units = [128, 128]
params.dropout = 0
run_config = tf.estimator.RunConfig(
tf_random_seed=RANDOM_SEED,
save_checkpoints_steps=500,
keep_checkpoint_max=1,
model_dir=model_dir,
log_step_count_steps=100
)
run_experiment(estimator, params, run_config, exporters = [exporter], train_hooks=[early_stopping_hook])
Explanation: 6. Early Stopping
stop_if_higher_hook
stop_if_lower_hook
stop_if_no_increase_hook
stop_if_no_decrease_hook
End of explanation
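For contrast with stop_if_no_increase_hook used above, a hedged sketch of a threshold-based variant; the 0.85 accuracy target is only an example value:
stop_at_target_hook = tf.contrib.estimator.stop_if_higher_hook(
    estimator,
    metric_name='accuracy',
    threshold=0.85,       # stop training once eval accuracy exceeds this value
    run_every_secs=None,
    run_every_steps=500
)
# run_experiment(estimator, params, run_config, exporters=[exporter], train_hooks=[stop_at_target_hook])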
strategy = None
num_gpus = len([device_name for device_name in tf.contrib.eager.list_devices()
if '/device:GPU' in device_name])
print "GPUs available: {}".format(num_gpus)
if num_gpus > 1:
strategy = tf.distribute.MirroredStrategy()
params.batch_size = int(math.ceil(params.batch_size / num_gpus))
run_config = tf.estimator.RunConfig(
tf_random_seed=RANDOM_SEED,
save_checkpoints_steps=200,
model_dir=model_dir,
train_distribute=strategy
)
estimator = create_estimator(params, run_config)
run_experiment(estimator, params, run_config)
Explanation: 7. Using Distribution Strategy for Utilising Multiple GPUs
End of explanation
def metric_fn(labels, predictions):
metrics = {}
label_index = tf.contrib.lookup.index_table_from_tensor(tf.constant(TARGET_LABELS)).lookup(labels)
one_hot_labels = tf.one_hot(label_index, len(TARGET_LABELS))
metrics['micro_accuracy'] = tf.metrics.mean_per_class_accuracy(
labels=label_index,
predictions=predictions['class_ids'],
num_classes=2)
metrics['f1_score'] = tf.contrib.metrics.f1_score(
labels=one_hot_labels,
predictions=predictions['probabilities'])
return metrics
params.max_steps = 1
estimator = create_estimator(params, run_config)
estimator = tf.contrib.estimator.add_metrics(estimator, metric_fn)
run_experiment(estimator, params, run_config)
Explanation: 8. Extending a Premade Estimator
Add an evaluation metric
tf.metrics
tf.contrib.estimator.add_metrics
End of explanation
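An assumed extension of the metric_fn pattern above — adding precision and recall via tf.metrics; extra_metric_fn is a hypothetical name and this sketch mirrors the label lookup already used in metric_fn:
def extra_metric_fn(labels, predictions):
    # Convert string labels into integer ids, as in metric_fn above.
    label_index = tf.contrib.lookup.index_table_from_tensor(tf.constant(TARGET_LABELS)).lookup(labels)
    return {
        'precision': tf.metrics.precision(labels=label_index, predictions=predictions['class_ids']),
        'recall': tf.metrics.recall(labels=label_index, predictions=predictions['class_ids'])
    }
# estimator = tf.contrib.estimator.add_metrics(estimator, extra_metric_fn)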
estimator = tf.contrib.estimator.forward_features(estimator, keys="row_identifier")
def make_serving_input_receiver_fn():
inputs = {}
for feature_name in FEATURE_NAMES:
dtype = tf.float32 if feature_name in NUMERIC_FEATURE_NAMES else tf.string
inputs[feature_name] = tf.placeholder(shape=[None], dtype=dtype)
processed_inputs,_ = process_features(inputs, None)
processed_inputs['row_identifier'] = tf.placeholder(shape=[None], dtype=tf.string)
return tf.estimator.export.build_raw_serving_input_receiver_fn(processed_inputs)
export_dir = os.path.join(model_dir, 'export')
if tf.gfile.Exists(export_dir):
tf.gfile.DeleteRecursively(export_dir)
estimator.export_savedmodel(
export_dir_base=export_dir,
serving_input_receiver_fn=make_serving_input_receiver_fn()
)
%%bash
saved_models_base=${MODEL_DIR}/export/
saved_model_dir=$(gsutil ls ${saved_models_base} | tail -n 1)
saved_model_cli show --dir=${saved_model_dir} --all
export_dir = os.path.join(model_dir, 'export')
tf.gfile.ListDirectory(export_dir)[-1]
saved_model_dir = os.path.join(export_dir, tf.gfile.ListDirectory(export_dir)[-1])
print(saved_model_dir)
print ""
predictor_fn = tf.contrib.predictor.from_saved_model(
export_dir = saved_model_dir,
signature_def_key="predict"
)
output = predictor_fn(
{ 'row_identifier': ['key0123'],
'age': [34.0],
'workclass': ['Private'],
'education': ['Doctorate'],
'education_num': [10.0],
'marital_status': ['Married-civ-spouse'],
'occupation': ['Prof-specialty'],
'relationship': ['Husband'],
'race': ['White'],
'gender': ['Male'],
'capital_gain': [0.0],
'capital_loss': [0.0],
'hours_per_week': [40.0],
'native_country':['Egyptian']
}
)
print(output)
Explanation: Add Forward Features
tf.contrib.estimator.forward_features
This is very useful for batch prediction, in order to map instances to their predictions (the forwarded key ties each input row to its output).
End of explanation
def create_estimator(params, run_config):
wide_columns, deep_columns = create_feature_columns()
def _update_optimizer(initial_learning_rate, decay_steps):
# learning_rate = tf.train.exponential_decay(
# initial_learning_rate,
# global_step=tf.train.get_global_step(),
# decay_steps=decay_steps,
# decay_rate=0.9
# )
learning_rate = tf.train.cosine_decay_restarts(
initial_learning_rate,
tf.train.get_global_step(),
first_decay_steps=50,
t_mul=2.0,
m_mul=1.0,
alpha=0.0,
)
tf.summary.scalar('learning_rate', learning_rate)
return tf.train.AdamOptimizer(
learning_rate=learning_rate)  # use the decayed learning-rate schedule computed above
estimator = tf.estimator.DNNLinearCombinedClassifier(
n_classes=len(TARGET_LABELS),
label_vocabulary=TARGET_LABELS,
weight_column=WEIGHT_COLUMN_NAME,
dnn_feature_columns=deep_columns,
dnn_optimizer=lambda: _update_optimizer(params.learning_rate, params.max_steps),
dnn_hidden_units=params.hidden_units,
dnn_dropout=params.dropout,
batch_norm=True,
linear_feature_columns=wide_columns,
linear_optimizer='Ftrl',
config=run_config
)
return estimator
params.learning_rate = 0.1
params.max_steps = 1000
run_config = tf.estimator.RunConfig(
tf_random_seed=RANDOM_SEED,
save_checkpoints_steps=200,
model_dir=model_dir,
)
if COLAB:
from tensorboardcolab import *
TensorBoardColab(graph_path=model_dir)
estimator = create_estimator(params, run_config)
run_experiment(estimator, params, run_config)
Explanation: 9. Adaptive learning rate
exponential_decay
cosine_decay
linear_cosine_decay
cosine_decay_restarts
polynomial_decay
piecewise_constant_decay
End of explanation |
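A hedged sketch of the exponential_decay variant listed above; it could be swapped into _update_optimizer in place of cosine_decay_restarts (the 0.9 decay rate and staircase setting are illustrative):
def _exponential_lr(initial_learning_rate, decay_steps):
    # Decays the learning rate by a factor of 0.9 every `decay_steps` global steps.
    return tf.train.exponential_decay(
        initial_learning_rate,
        global_step=tf.train.get_global_step(),
        decay_steps=decay_steps,
        decay_rate=0.9,
        staircase=True
    )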
14,219 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NOTE
This notebook will make more sense (provide speed-up) once the LLVM backend is exposed in the python wrappers for SymEngine. I need to get back working on that here.
In this notebook we will use symengine to increase the performance of our callbacks produced by lambdify in SymPy.
Step1: The ODEsys class and convenience functions from previous notebook (35) have been put in two modules for easy importing. Recapping what we did last
Step2: so that is the benchmark to beat.
Step3: Just to see that everything looks alright | Python Code:
import json
import numpy as np
from scipy2017codegen.odesys import ODEsys
from scipy2017codegen.chem import mk_rsys
Explanation: NOTE
This notebook will make more sense (provide speed-up) once the LLVM backend is exposed in the python wrappers for SymEngine. I need to get back working on that here.
In this notebook we will use symengine to increase the performance of our callbacks produced by lambdify in SymPy.
End of explanation
watrad_data = json.load(open('../scipy2017codegen/data/radiolysis_300_Gy_s.json'))
watrad = mk_rsys(ODEsys, **watrad_data)
tout = np.logspace(-6, 3, 200) # close to one hour of operation
c0 = {'H2O': 55.4e3, 'H+': 1e-4, 'OH-': 1e-4}
y0 = [c0.get(symb.name, 0) for symb in watrad.y]
%timeit yout, info = watrad.integrate_odeint(tout, y0)
Explanation: The ODEsys class and convenience functions from previous notebook (35) have been put in two modules for easy importing. Recapping what we did last:
End of explanation
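Before switching to SymEngine below, here is a minimal sketch (my assumption, not taken from the repo) of what a plain SymPy-based callback factory with the same (args, exprs) signature would look like — this is essentially the baseline being benchmarked above:
import sympy as sym

def _sympy_lambdify(args, exprs):
    # Plain SymPy/NumPy callback; ODEsys presumably uses something like this by default.
    return sym.lambdify(args, exprs, modules='numpy')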
import symengine as se
import sympy as sym  # needed for the isinstance check in _lambdify below
def _lambdify(args, exprs):
if isinstance(exprs, sym.MutableDenseMatrix):
exprs = se.DenseMatrix(exprs.shape[0], exprs.shape[1], exprs.tolist())
lmb = se.Lambdify(args, exprs)
return lambda *args: lmb(args)
watrad_symengine = mk_rsys(ODEsys, **watrad_data, lambdify=_lambdify)
%timeit watrad_symengine.integrate_odeint(tout, y0)
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: so that is the benchmark to beat.
End of explanation
fig, ax = plt.subplots(1, 1, figsize=(14, 6))
watrad_symengine.plot_result(tout, *watrad_symengine.integrate_odeint(tout, y0), ax=ax)
ax.set_xscale('log')
ax.set_yscale('log')
Explanation: Just to see that everything looks alright:
End of explanation |
14,220 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
San Diego Burrito Analytics
Step1: Load data
Step3: Vitalness metric
Step5: Savior metric | Python Code:
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
import statsmodels.api as sm
import pandasql
import seaborn as sns
sns.set_style("white")
Explanation: San Diego Burrito Analytics: Data characterization
Scott Cole
1 July 2016
This notebook applies nonlinear techniques to analyze the contributions of burrito dimensions to the overall burrito rating.
Create the ‘vitalness’ metric. For each dimension, identify the burritos that scored below average (defined as 2 or lower), then calculate the linear model’s predicted overall score and compare it to the actual overall score. For what dimensions is this distribution not symmetric around 0?
If this distribution trends greater than 0 (Overall_predict - Overall_actual), that means that the actual score is lower than the predicted score. This means that this metric is ‘vital’ and that it being bad will make the whole burrito bad
If vitalness < 0, then the metric being really bad actually doesn’t affect the overall burrito as much as it should.
In opposite theme, make the ‘saving’ metric for all burritos in which the dimension was 4.5 or 5
For those that are significantly different from 0, quantify the effect size. (e.g. a burrito with a 2 or lower rating for this metric: its overall rating will be disproportionately impacted by XX points).
For the dimensions, how many are nonzero? If all of them are 0, then burritos are perfectly linear, which would be weird. If many of them are nonzero, then burritos are highly nonlinear.
NOTE: A neural network is not recommended because we should have about 30x as many examples as weights (for a 3-layer neural network with 4 nodes in each of the first 2 layers and 1 in the last layer, that would be roughly 16 + 4 = 20 weights), so we would need around 600 burritos. One option would be to artificially create data.
Default imports
End of explanation
import util
df = util.load_burritos()
N = df.shape[0]
Explanation: Load data
End of explanation
def vitalness(df, dim, rating_cutoff = 2,
metrics = ['Hunger','Tortilla','Temp','Meat','Fillings','Meatfilling',
'Uniformity','Salsa','Wrap']):
# Fit GLM to get predicted values
dffull = df[np.hstack((metrics,'overall'))].dropna()
X = sm.add_constant(dffull[metrics])
y = dffull['overall']
my_glm = sm.GLM(y,X)
res = my_glm.fit()
dffull['overallpred'] = res.fittedvalues
# Make exception for Meat:filling in order to avoid pandasql error
if dim == 'Meat:filling':
dffull = dffull.rename(columns={'Meat:filling':'Meatfilling'})
dim = 'Meatfilling'
# Compare predicted and actual overall ratings for each metric below the rating cutoff
import pandasql
    q = """
    SELECT
    overall, overallpred
    FROM
    dffull
    WHERE
    """
q = q + dim + ' <= ' + np.str(rating_cutoff)
df2 = pandasql.sqldf(q.lower(), locals())
return sp.stats.ttest_rel(df2.overall,df2.overallpred)
vital_metrics = ['Hunger','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Wrap']
for metric in vital_metrics:
print metric
if metric == 'Volume':
rating_cutoff = .7
else:
rating_cutoff = 1
print vitalness(df,metric,rating_cutoff=rating_cutoff, metrics=vital_metrics)
Explanation: Vitalness metric
End of explanation
def savior(df, dim, rating_cutoff = 2,
metrics = ['Hunger','Tortilla','Temp','Meat','Fillings','Meatfilling',
'Uniformity','Salsa','Wrap']):
# Fit GLM to get predicted values
dffull = df[np.hstack((metrics,'overall'))].dropna()
X = sm.add_constant(dffull[metrics])
y = dffull['overall']
my_glm = sm.GLM(y,X)
res = my_glm.fit()
dffull['overallpred'] = res.fittedvalues
# Make exception for Meat:filling in order to avoid pandasql error
if dim == 'Meat:filling':
dffull = dffull.rename(columns={'Meat:filling':'Meatfilling'})
dim = 'Meatfilling'
# Compare predicted and actual overall ratings for each metric below the rating cutoff
import pandasql
    q = """
    SELECT
    overall, overallpred
    FROM
    dffull
    WHERE
    """
q = q + dim + ' >= ' + np.str(rating_cutoff)
df2 = pandasql.sqldf(q.lower(), locals())
print len(df2)
return sp.stats.ttest_rel(df2.overall,df2.overallpred)
vital_metrics = ['Hunger','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Wrap']
for metric in vital_metrics:
print metric
print savior(df,metric,rating_cutoff=5, metrics=vital_metrics)
print 'Volume'
vital_metrics = ['Hunger','Tortilla','Temp','Meat','Fillings','Meat:filling',
'Uniformity','Salsa','Wrap','Volume']
print savior(df,'Volume',rating_cutoff=.9,metrics=vital_metrics)
Explanation: Savior metric
End of explanation |
14,221 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine learning on large volumes of text
Mario Graff ([email protected], [email protected])
Sabino Miranda ([email protected])
Daniela Moctezuma ([email protected])
Eric S. Tellez ([email protected])
CONACYT, INFOTEC y CentroGEO
https
Step1: diac, dup and punc
Autoridades de la Ciudad de México aclaran que el equipo del cineasta mexicano no fue asaltado, pero sí una riña ahhh.
Step2: emo
Hoy es un día feliz
Step3: lc
@mgraffg pon atención para sacar 10 en http
Step4: num
@mgraffg pon atención para sacar 10 en http
Step5: url
@mgraffg pon atención para sacar 10 en http
Step6: usr
@mgraffg pon atención para sacar 10 en http
Step7: Tokenizers
The tokenizers are in fact a list of tokenizers; tokenizer is defined as an element of $\wp{(\text{n-words} \cup \text{q-grams} \cup \text{skip-grams})} \setminus {\emptyset}$
| name | values | description |
|-----------|---------------------|--------------------------------------|
| n-words | ${1,2,3}$ | Length of word n-grams (n-words) |
| q-grams | ${1,2,3,4,5,6,7}$ | Length of character q-grams |
| skip-grams | ${(2,1), (3, 1), (2, 2), (3, 2)}$ | List of skip-grams|
configurations
Step8: n-words
que buena esta la platica
Step9: q-grams
que buena esta la platica
Step10: skip-grams
que buena esta la platica
Step11: Why is it robust to errors?
Consider the following texts $T=I_like_vanilla$, $T' = I_lik3_vanila$
To fix ideas, suppose the Jaccard coefficient is used as the similarity measure, i.e.
$$\frac{|{{I, like, vanilla}} \cap {{I, lik3, vanila}}|}{|{{I, like, vanilla}} \cup {{I, lik3, vanila}}|} = 0.2$$
$$Q^T_3 = { I_l, _li, lik, ike, ke_, e_v, _va, van, ani, nil, ill, lla }$$
$$Q^{T'}_3 = { I_l, _li, lik, ik3, k3_, 3_v, _va, van, ani, nil, ila }$$
Under the same measure
$$\frac{|Q^T_3 \cap Q^{T'}_3|}{|Q^T_3 \cup Q^{T'}_3|} = 0.448.$$
It can be seen that these sets are more similar than the word-tokenized ones
The idea is that a learning algorithm gets a bit more support to determine that $T$ is similar to $T'$
Term weighting
| name | values | description |
|-----------|---------------------|--------------------------------------|
| token_min_filter | ${0.01, 0.03, 0.1, 0.30, -1, -5, -10}$ | Low-frequency filter |
| token_max_filter | ${0.9, 99, 1.0}$ | High-frequency filter |
| tfidf | yes, no | Determines whether TFIDF term weighting is applied |
About the weighting
Token weighting is fixed to TFIDF. Its name comes from the formulation $tf \times idf$
$tf$ is the term frequency; it is a measure of the local importance of term $t$ in document $d$; in normalized form it is defined as
Step12: TFIDF
buen dia microtc | Python Code:
from microtc.textmodel import norm_chars
text = "Autoridades de la Ciudad de México aclaran que el equipo del cineasta mexicano no fue asaltado, pero sí una riña ahhh."
Explanation: Machine learning on large volumes of text
Mario Graff ([email protected], [email protected])
Sabino Miranda ([email protected])
Daniela Moctezuma ([email protected])
Eric S. Tellez ([email protected])
CONACYT, INFOTEC y CentroGEO
https://github.com/ingeotec
Vector representation of text
Normalization
Tokenization (n-words, q-grams, skip-grams)
Term weighting (TFIDF)
Similarity measures
Supervised learning
General learning model; training, test, score (accuracy, recall, precision, f1)
Support vector machines (SVM)
Genetic programming (EvoDAG)
Distant supervision
$\mu$TC
Pipeline of transformations
Parameter optimization
Classifiers
Using $\mu$TC
Applications
Sentiment analysis
Authorship attribution
News classification
Spam
Gender and age
Conclusions
Natural Language Processing (NLP)
$d=s_1\cdots s_n$ is a document where $s \in \Sigma$, and $\Sigma$ is an alphabet of size $\sigma = |\Sigma|$
Twitter would have: $26^{140} \simeq 1.248 \times 10^{198}$ possible messages
Rules about which symbols can be joined
The notion of terms or words, i.e., morphology
Rules about how words can be combined, i.e., syntax and grammar
An extremely complicated problem
Rules
Variants
Exceptions
Errors
Concepts that appear differently across languages
In addition, there is the semantic problem:
A term $s_i$ can have different meanings (antonyms)
The opposite also exists, $s_i \not= s_j$ but identical in meaning (synonyms)
In both cases, the precise meaning depends on the context
There are also approximate cases of all of the above
Irony, sarcasm, etc.
... there are very many open problems. NLP is complicated; in fact, it is AI-complete
Text categorization
The problem consists of, given a text $d$, determining the category (or categories) it belongs to within a previously known set $C$ of categories.
More formally:
Given a set of categories $\cal{C} = {c_1, ..., c_m}$, determine the subset of categories
$C_d \in \wp(\cal{C})$ to which $d$ belongs.
Note that $C_d$ can be empty or all of $\cal{C}$.
Text classification
Text classification is a specialization of the categorization problem where $|C_d| = 1$, that is, $d$ can be assigned to exactly one category.
It is a problem of interest in industry and academia, with varied applications across different areas of knowledge.
Sentiment analysis
Authorship attribution, e.g., gender, age, style, etc.
Spam detection
News categorization
Language identification
Our Approach
Because of its complexity, working in NLP involves a large number of open problems; in particular, we focus on the classification of informally written text (e.g., Twitter).
A standard pipeline is used for this
Theoretical approach (many simplifications)
Logic
Linguistics
Semantics
The practical approach assumes many things
The language is fixed
The problem is fixed
It is assumed that the more sophisticated the techniques used, the better the results
Both approaches assume the absence of errors
Our approach is based on:
* Machine learning
* Combinatorial optimization
Characteristics:
* Language independent
* Robust to errors
It is composed of:
* A series of text transformation functions
* A series of tokenizers
* Word filters
* Term weighting algorithms
Multilingual normalizers
| name | values | description |
|-----------|---------------------|--------------------------------------|
| del-punc | yes, no | Determines whether punctuation should be removed |
| del-d1 | yes, no | Determines whether repeated letters should be deleted |
| del-diac | yes, no | Determines whether non-spacing symbols (diacritics) should be removed |
| lc | yes, no | Determines whether symbols should be normalized to lowercase |
| emo | remove, group, none | Controls how emoticons are handled |
| num | remove, group, none | ........................ numbers |
| url | remove, group, none | ........................ urls |
| usr | remove, group, none | ........................ users |
End of explanation
diac = norm_chars(text, del_diac=True, del_dup=False, del_punc=False).replace('~', ' ')
Markdown("## diac\n" + diac)
dup = norm_chars(text, del_diac=False, del_dup=True, del_punc=False).replace('~', ' ')
Markdown("## dup\n" + dup)
punc = norm_chars(text, del_diac=False, del_dup=False, del_punc=True).replace('~', ' ')
Markdown("## punc\n" + punc)
from microtc.emoticons import EmoticonClassifier
from microtc.params import OPTION_GROUP, OPTION_DELETE
text = "Hoy es un día feliz :) :) o no :( "
Explanation: diac, dup and punc
Autoridades de la Ciudad de México aclaran que el equipo del cineasta mexicano no fue asaltado, pero sí una riña ahhh.
End of explanation
emo = EmoticonClassifier()
group = emo.replace(text, OPTION_GROUP)
delete = emo.replace(text, OPTION_DELETE)
Markdown("## delete\n%s\n## group\n%s" % (delete, group))
from IPython.core.display import Markdown
import re
text = "@mgraffg pon atención para sacar 10 en http://github.com/INGEOTEC"
Explanation: emo
Hoy es un día feliz :) :) o no :(
End of explanation
lc = text.lower()
print(lc)
Explanation: lc
@mgraffg pon atención para sacar 10 en http://github.com/INGEOTEC
End of explanation
delete = re.sub(r"\d+\.?\d+", "", text)
group = re.sub(r"\d+\.?\d+", "_num", text)
Markdown("## delete\n%s\n## group\n%s" % (delete, group))
Explanation: num
@mgraffg pon atención para sacar 10 en http://github.com/INGEOTEC
End of explanation
delete = re.sub(r"https?://\S+", "", text)
group = re.sub(r"https?://\S+", "_url", text)
Markdown("## delete\n%s\n## group\n%s" % (delete, group))
Explanation: url
@mgraffg pon atención para sacar 10 en http://github.com/INGEOTEC
End of explanation
delete = re.sub(r"@\S+", "", text)
group = re.sub(r"@\S+", "_usr", text)
Markdown("## delete\n%s\n## group\n%s" % (delete, group))
Explanation: usr
@mgraffg pon atención para sacar 10 en http://github.com/INGEOTEC
End of explanation
from microtc.textmodel import TextModel
text = "que buena esta la platica"
model = TextModel([], token_list=[-1, -2])
Explanation: Tokenizers
The tokenizers are in fact a list of tokenizers; tokenizer is defined as an element of $\wp{(\text{n-words} \cup \text{q-grams} \cup \text{skip-grams})} \setminus {\emptyset}$
| name | values | description |
|-----------|---------------------|--------------------------------------|
| n-words | ${1,2,3}$ | Length of word n-grams (n-words) |
| q-grams | ${1,2,3,4,5,6,7}$ | Length of character q-grams |
| skip-grams | ${(2,1), (3, 1), (2, 2), (3, 2)}$ | List of skip-grams|
configurations: 16383
End of explanation
model = TextModel([], token_list=[-1])
words = model.tokenize(text)
model = TextModel([], token_list=[-2])
biw = model.tokenize(text)
Markdown("## -1\n %s\n## -2\n%s" % (", ".join(words), ", ".join(biw)))
Explanation: n-words
que buena esta la platica
End of explanation
model = TextModel([], token_list=[3])
words = model.tokenize(text)
model = TextModel([], token_list=[4])
biw = model.tokenize(text)
Markdown("## 3\n %s\n## 4\n%s" % (", ".join(words), ", ".join(biw)))
Explanation: q-grams
que buena esta la platica
End of explanation
model = TextModel([], token_list=[(2, 1)])
words = model.tokenize(text)
model = TextModel([], token_list=[(2, 2)])
biw = model.tokenize(text)
Markdown("## (2, 1)\n %s\n## (2, 2)\n%s" % (", ".join(words), ", ".join(biw)))
Explanation: skip-grams
que buena esta la platica
End of explanation
docs = ["buen dia microtc", "excelente dia", "buenas tardes",
"las vacas me deprimen", "odio los lunes", "odio el trafico",
"la computadora", "la mesa", "la ventana"]
l = ["* " + x for x in docs]
Markdown("# Corpus\n" + "\n".join(l))
Explanation: Why is it robust to errors?
Consider the following texts $T=I_like_vanilla$, $T' = I_lik3_vanila$
To fix ideas, suppose the Jaccard coefficient is used as the similarity measure, i.e.
$$\frac{|{{I, like, vanilla}} \cap {{I, lik3, vanila}}|}{|{{I, like, vanilla}} \cup {{I, lik3, vanila}}|} = 0.2$$
$$Q^T_3 = { I_l, _li, lik, ike, ke_, e_v, _va, van, ani, nil, ill, lla }$$
$$Q^{T'}_3 = { I_l, _li, lik, ik3, k3_, 3_v, _va, van, ani, nil, ila }$$
Under the same measure
$$\frac{|Q^T_3 \cap Q^{T'}_3|}{|Q^T_3 \cup Q^{T'}_3|} = 0.448.$$
It can be seen that these sets are more similar than the word-tokenized ones
The idea is that a learning algorithm gets a bit more support to determine that $T$ is similar to $T'$
Term weighting
| name | values | description |
|-----------|---------------------|--------------------------------------|
| token_min_filter | ${0.01, 0.03, 0.1, 0.30, -1, -5, -10}$ | Low-frequency filter |
| token_max_filter | ${0.9, 99, 1.0}$ | High-frequency filter |
| tfidf | yes, no | Determines whether TFIDF term weighting is applied |
About the weighting
Token weighting is fixed to TFIDF. Its name comes from the formulation $tf \times idf$
$tf$ is the term frequency; it is a measure of the local importance of term $t$ in document $d$; in normalized form it is defined as:
$$tf(t,d) = \frac{freq(t, d)}{\max_{w \in d}{freq(w, d)}}$$
the more times $t$ appears in document $d$, the more important it is
$idf$ stands for inverse document frequency; it is a global measure over the collection $D$, defined as:
$$ idf(t,d) = log{\frac{|D|}{1+|{d \in D: t \in d}|}} $$
the more times $t$ appears in the collection, the more common and less discriminative the term is; therefore, less important
End of explanation
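A small illustrative computation of the q-gram Jaccard argument above, in plain Python (independent of microtc; the helper names are my own):
def qgrams(text, q=3):
    return {text[i:i + q] for i in range(len(text) - q + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b)

t1, t2 = "I_like_vanilla", "I_lik3_vanila"
print(jaccard(set(t1.split('_')), set(t2.split('_'))))  # word tokens: 0.2
print(jaccard(qgrams(t1), qgrams(t2)))                  # character 3-grams: ~0.45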
from microtc.textmodel import TextModel
model = TextModel(docs, token_list=[-1])
print(model[docs[0]])
Explanation: TFIDF
buen dia microtc
End of explanation |
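A rough hand-computed sketch of the tf-idf formulas above for the toy corpus docs; this is only illustrative and microtc's exact weighting/normalization may differ:
import math
from collections import Counter

def tfidf_weights(corpus):
    tokenized = [doc.split() for doc in corpus]
    df = Counter(t for doc in tokenized for t in set(doc))
    weights = []
    for doc in tokenized:
        freq = Counter(doc)
        max_freq = max(freq.values())
        weights.append({t: (freq[t] / max_freq) * math.log(len(corpus) / (1 + df[t]))
                        for t in freq})
    return weights

print(tfidf_weights(docs)[0])  # weights for "buen dia microtc"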
14,222 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Departamento de Física - Faculdade de Ciências e Tecnologia da Universidade de Coimbra
Computational Physics - Worksheet 6 - Matrix Diagonalization
Rafael Isaque Santos - 2012144694 - Licenciatura em Física
Step1: Estimate of $\lambda$ at each odd iteration
Step2: Computation using the method without a stopping criterion, as a function of k instead
Returning
Step3: Computation of eigenvalues and eigenvectors using a built-in numpy function
Step4: Exercise 2 - Cyclic Jacobi Method | Python Code:
import numpy as np
import numpy.linalg as linalg
import matplotlib.pyplot as pl
%matplotlib inline
def npower(matrix, k): # power method (n-th power iteration)
x = np.array([1, 0, 0])
print(0, x, x.transpose() @ matrix @ x)
for i in range(k):
x = matrix @ x
x_norm = linalg.norm(x)
xk = x / x_norm
eig = xk.transpose() @ A @ xk
print(i+1, xk, eig)
return xk, eig
def npower_eps(matrix, lambda_diff):
x0 = np.array([1, 0, 0])
xk = matrix @ x0
xk = xk / linalg.norm(xk)
k = 1
def eig(vector): return vector.transpose() @ matrix @ vector
print(k, eig(xk))
while np.abs(eig(xk) - eig(x0)) > lambda_diff:
x0 = xk
xk = matrix @ xk
xk /= linalg.norm(xk)
k += 1
if k%2 != 0: print(k, eig(xk))
return k, eig(xk), xk, eig(x0), x0, np.abs(eig(x0) - eig(xk))
A = np.array([[1., 1, 1/2], [1, 1, 1/4], [1/2, 1/4, 2]])
Explanation: Departamento de Física - Faculdade de Ciências e Tecnologia da Universidade de Coimbra
Computational Physics - Worksheet 6 - Matrix Diagonalization
Rafael Isaque Santos - 2012144694 - Licenciatura em Física
End of explanation
npower_eps(A, 1e-5)
Explanation: Estimate of $\lambda$ at each odd iteration:
Below: the number of iterations required, the eigenvalue found at the end, its associated eigenvector, the eigenvalue and eigenvector from the previous step, and the difference between the last two eigenvalue estimates.
End of explanation
npower(A, 6)
npower(A, 12)
Explanation: Computation using the method without a stopping criterion, as a function of k instead
Returning:
+ Iteration
+ Normalized vector
+ Eigenvalue
End of explanation
linalg.eig(A)
Explanation: Computation of eigenvalues and eigenvectors using a built-in numpy function
End of explanation
def jacobi(A_i, eps, nitmax):
A = np.copy(A_i) # para cortar dependências
m = len(A)
iteration = 0
Q = np.identity(m)
def off(mat):
off_sum = 0
for i in range(m):
for j in range(m):
if j != i: off_sum += mat[i, j]**2
return np.sqrt(off_sum)
def frobenius_norm(mat):
norm = 0
for i in range(m):
for j in range(m):
norm += mat[i, j]**2
return np.sqrt(norm)
while (off(A) > eps and iteration < nitmax):
j, k, ajk = 0, 0, 0.
for ji in range(m-1):
for ki in range(ji+1, m):
absjk = abs(A[ji, ki])
if absjk >= ajk:
j, k, ajk = ji, ki, absjk
def CSjk(mati, j, k):
mat = np.copy(mati)
if mat[j, j] == mat[k, k]:
C, S = np.cos(np.pi / 4), np.sin(np.pi / 4)
else:
tau = 2*mat[j, k] / (mat[k, k] - mat[j, j])
chi = 1 / np.sqrt(1 + tau**2)
C = np.sqrt((1 + chi) / 2)
S = np.sign(tau) * np.sqrt((1 - chi) / 2)
return C, S
C, S = CSjk(A, j, k)
A_l = np.zeros_like(A)
for r in range(m):
if r != j and r != k:
A_l[r, j] = C * A[r, j] - S * A[r, k]
A_l[j, r] = C * A[r, j] - S * A[r, k]
A_l[r, k] = S * A[r, j] + C * A[r, k]
A_l[k, r] = S * A[r, j] + C * A[r, k]
for s in range(m):
if s != j and s != k:
A_l[r, s] = np.copy(A[r, s])
A_l[j, j] = np.copy((C**2 * A[j, j]) + (S**2 * A[k, k]) - (2 * S * C * A[j, k]))
A_l[j, k] = S * C * (A[j, j] - A[k, k]) + ((C**2 - S**2) * A[j, k])
# A_l[j, k] = 0
A_l[k, j] = np.copy(A_l[j, k])
A_l[k, k] = np.copy((S**2 * A[j, j]) + (C**2 * A[k, k]) + (2 * S * C * A[j, k]))
A = A_l
Q_l = np.zeros_like(Q)
for r in range(m):
for s in range(m):
if s != j and s!= k:
Q_l[r, s] = np.copy(Q[r, s])
Q_l[r, j] = C * Q[r, j] - S * Q[r, k]
Q_l[r, k] = S * Q[r, j] + C * Q[r, k]
Q = Q_l
iteration += 1
D = Q.transpose().dot(A_i).dot(Q)
return A, off(A), D, Q, iteration, off(A_i), frobenius_norm(A_i)
a, oa, d, q, it, oi, fi = jacobi(A, 1e-4, 100)
print('D:')
print( d)
print('Q:')
print(q)
print('Iterações:', it)
oi**2 / fi**2
oa**2 / fi**2
linalg.eig(A)
oi**2 / 8.625
oa**2 / 8.625
np.cos(np.pi/4)
df_5pts = lambda f, x, h: (-3*f(x+4*h) + 16*f(x+3*h) - 36*f(x+2*h) + 48*f(x+h) - 25*f(x)) / (12*h)
df2_5pts = lambda f, x, h: df_5pts(df_5pts, x, h)
def ui(x):
    # Left unfinished in the original; assuming the harmonic potential u(x) = x**2,
    # consistent with the xi[i]**2 term used in schrodingerpvp below.
    return x**2
def schrodingerpvp(xmin, xmax, subint, eps):
h = (xmax - xmin) / subint
xi = [xmin + i for i in range(subint)]
# ui = [lambda x: u(x) for x in xi]
D = np.zeros((subint-1, subint-1))
for i in range(subint-2):
D[i, i] = (2 / h**2) + xi[i]**2
if i==0: D[i, 1] = - 1 / h**2
elif i== subint-2: D[i, i-1] = - 1 / h**2
else:
D[i, i+1] = -1 / h**2
D[i, i-1] = -1 / h**2
Dt, odt, lam, vec, it, od, frobd = jacobi(D, eps, 100000)
return lam, vec, D
l, v, D = schrodingerpvp(-10, 10, 50, 1e-4)
linalg.eigvals(D)
Explanation: Exercise 2 - Cyclic Jacobi Method
End of explanation |
14,223 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to visualizing data in the eeghdf files
Getting started
The EEG is stored in hierarchical data format (HDF5). This format is widely used, open, and supported in many languages, e.g., matlab, R, python, C, etc.
Here, I will use the h5py library in python
Step1: The data is stored hierarchically in an hdf5 file as a tree of keys and values.
It is possible to inspect the file using standard hdf5 tools.
Below we show the keys and values associated with the root of the tree. This shows that there is a "patient" group and a group "record-0"
Step2: We can focus on the patient group and access it via hdf['patient'] as if it was a python dictionary. Here are the key,value pairs in that group. Note that the patient information has been anonymized. Everyone is given the same set of birthdays. This shows that this file is for Subject 2619, who is male.
Step3: Now we look at how the waveform data is stored. By convention, the first record is called "record-0" and it contains the waveform data as well as the approximate time (relative to the birthdate) at which the study was done, as well as technical information like the number of channels, electrode names and sample rate.
Step4: We can then grab the actual waveform data and visualize it.
Step5: Simple visualization of EEG (brief absence seizure)
Step6: Annotations
It was not a coincidence that I chose this time in the record. I used the annotations to focus on a portion of the record which was marked as having a seizure.
You can access the clinical annotations via rec['edf_annotations']
Step7: It is easy then to find the annotations related to seizures | Python Code:
# import libraries
from __future__ import print_function, division, unicode_literals
%matplotlib inline
# %matplotlib notebook
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import h5py
from pprint import pprint
import stacklineplot # local copy
# matplotlib.rcParams['figure.figsize'] = (18.0, 12.0)
matplotlib.rcParams['figure.figsize'] = (12.0, 8.0)
hdf = h5py.File('./archive/YA2741G2_1-1+.eeghdf')
Explanation: Introduction to visualizing data in the eeghdf files
Getting started
The EEG is stored in hierarchical data format (HDF5). This format is widely used, open, and supported in many languages, e.g., matlab, R, python, C, etc.
Here, I will use the h5py library in python
End of explanation
list(hdf.items())
Explanation: The data is stored hierarchically in an hdf5 file as a tree of keys and values.
It is possible to inspect the file using standard hdf5 tools.
Below we show the keys and values associated with the root of the tree. This shows that there is a "patient" group and a group "record-0"
End of explanation
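One small sketch of walking the whole tree with h5py itself, in the spirit of the standard hdf5 tools mentioned above (show_node is just an illustrative helper name):
# Print every group/dataset name in the file, with dataset shapes where available.
def show_node(name, obj):
    print(name, getattr(obj, 'shape', ''))

hdf.visititems(show_node)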
list(hdf['patient'].attrs.items())
Explanation: We can focus on the patient group and access it via hdf['patient'] as if it was a python dictionary. Here are the key,value pairs in that group. Note that the patient information has been anonymized. Everyone is given the same set of birthdays. This shows that this file is for Subject 2619, who is male.
End of explanation
rec = hdf['record-0']
list(rec.attrs.items())
# here is the list of data arrays stored in the record
list(rec.items())
rec['physical_dimensions'][:]
rec['prefilters'][:]
rec['signal_digital_maxs'][:]
rec['signal_digital_mins'][:]
rec['signal_physical_maxs'][:]
Explanation: Now we look at how the waveform data is stored. By convention, the first record is called "record-0" and it contains the waveform data as well as the approximate time (relative to the birthdate) at which the study was done, as well as technical information like the number of channels, electrode names and sample rate.
End of explanation
signals = rec['signals']
labels = rec['signal_labels']
electrode_labels = [str(s,'ascii') for s in labels]
numbered_electrode_labels = ["%d:%s" % (ii, str(labels[ii], 'ascii')) for ii in range(len(labels))]
Explanation: We can then grab the actual waveform data and visualize it.
End of explanation
# search identified spasms at 1836, 1871, 1901, 1939
stacklineplot.show_epoch_centered(signals, 1476,epoch_width_sec=15,chstart=0, chstop=19, fs=rec.attrs['sample_frequency'], ylabels=electrode_labels, yscale=3.0)
plt.title('Absence Seizure');
Explanation: Simple visualization of EEG (brief absence seizure)
End of explanation
annot = rec['edf_annotations']
antext = [s.decode('utf-8') for s in annot['texts'][:]]
starts100ns = [xx for xx in annot['starts_100ns'][:]] # process the bytes into text and lists of start times
df = pd.DataFrame(data=antext, columns=['text']) # load into a pandas data frame
df['starts100ns'] = starts100ns
df['starts_sec'] = df['starts100ns']/10**7
del df['starts100ns']
Explanation: Annotations
It was not a coincidence that I chose this time in the record. I used the annotations to focus on a portion of the record which was marked as having a seizure.
You can access the clinical annotations via rec['edf_annotations']
End of explanation
df[df.text.str.contains('sz',case=False)]
Explanation: It is easy then to find the annotations related to seizures
End of explanation |
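A hedged follow-up sketch: pull the start time (in seconds) of the first seizure-related annotation and re-center the plot on it, reusing the objects already defined above:
sz_events = df[df.text.str.contains('sz', case=False)]
if len(sz_events):
    sz_start_sec = int(sz_events.iloc[0]['starts_sec'])
    stacklineplot.show_epoch_centered(signals, sz_start_sec, epoch_width_sec=15,
                                      chstart=0, chstop=19,
                                      fs=rec.attrs['sample_frequency'],
                                      ylabels=electrode_labels, yscale=3.0)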
14,224 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview
The key to delivery efficiency is that customers must first be assigned to their nearest kitchen.
We do this by sorting customers by distance, starting from the farthest from the customers' center point (sum/total of lat and long).
That solution is not yet optimal, but it is close.
The optimal solution = sort starting from the outermost customer.
Customers are then assigned to their nearest kitchen; if it is already full, they are assigned to the second-nearest kitchen, and so on.
This yields groups of customers assigned to each kitchen.
Drivers are then assigned per group based on angle (degree) and distance.
They are assigned not only by distance, in order to optimize the 1-hour delivery window.
Grouping customer to the best kitchen
Step1: Assign driver in group based on degree and distance | Python Code:
# Find the center point of the customers (used for the distance sort below)
# long
long_centroid = sum(customer['long'])/len(customer)
# lat
lat_centroid = sum(customer['lat'])/len(customer)
# Find distance from customer point to central customer point
customer['distSort'] = np.sqrt( (customer.long-long_centroid)**2 + (customer.lat-lat_centroid)**2)
# Sort by longest distance
customer = customer.sort_values(['distSort'], ascending=False)  # sort the customer frame itself by the new distSort column
# Data already sorted from outermost customer
# For each row in the column, assign the customer to the nearest kitchen,
# if the kitchen already full, assign customer to the second nearest kitchen and so on.
# NOT FINISHED YET
clusters = []
for row in customer['distSort']:
    #clusters.append(cluster)
    pass  # TODO: assignment logic not implemented yet
data['cluster'] = clusters
# Data visualization: customers assigned to their kitchen
def visualize(data):
x = data['long']
y = data['lat']
Cluster = data['cluster']
fig = plt.figure()
ax = fig.add_subplot(111)
scatter = ax.scatter(x,y,c=Cluster, cmap=plt.cm.Paired, s=10, label='customer')
ax.scatter(kitchen['long'],kitchen['lat'], s=10, c='r', marker="x", label='second')
ax.set_xlabel('longitude')
ax.set_ylabel('latitude')
plt.colorbar(scatter)
fig.show()
# Visualization Example customer assigned to kitchen (without following constraint)
# THIS IS ONLY EXAMPLE
y = kitchen['kitchenName']
X = pd.DataFrame(kitchen.drop('kitchenName', axis=1))
clf = NearestCentroid()
clf.fit(X, y)
pred = clf.predict(customer)
customer1['cluster'] = pd.Series(pred, index=customer1.index)
customer['cluster'] = pd.Series(pred, index=customer.index)
visualize(customer)
# Count customer order assigned to Kitchen
dapurMiji = (customer1.where(customer1['cluster'] == 0))['qtyOrdered'].sum()
dapurNusantara = (customer1.where(customer1['cluster'] == 1))['qtyOrdered'].sum()
familiaCatering = (customer1.where(customer1['cluster'] == 2))['qtyOrdered'].sum()
pondokRawon = (customer1.where(customer1['cluster'] == 3))['qtyOrdered'].sum()
roseCatering = (customer1.where(customer1['cluster'] == 4))['qtyOrdered'].sum()
tigaKitchenCatering = (customer1.where(customer1['cluster'] == 5))['qtyOrdered'].sum()
ummuUwais = (customer1.where(customer1['cluster'] == 6))['qtyOrdered'].sum()
d = {'Dapur Miji': dapurMiji , 'Dapur Nusantara': dapurNusantara, 'Familia Catering': familiaCatering, 'Pondok Rawon': pondokRawon,'Rose Catering': roseCatering, 'Tiga Kitchen Catering': tigaKitchenCatering, 'Ummu Uwais': ummuUwais}
# print(customer.cluster.value_counts())
# Print sum of assigned
print(d)
Explanation: Overview
The key to delivery efficiency is that customers must first be assigned to their nearest kitchen.
We do this by sorting customers by distance, starting from the farthest from the customers' center point (sum/total of lat and long).
That solution is not yet optimal, but it is close.
The optimal solution = sort starting from the outermost customer.
Customers are then assigned to their nearest kitchen; if it is already full, they are assigned to the second-nearest kitchen, and so on.
This yields groups of customers assigned to each kitchen.
Drivers are then assigned per group based on angle (degree) and distance.
They are assigned not only by distance, in order to optimize the 1-hour delivery window.
Grouping customer to the best kitchen
End of explanation
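A hedged sketch of the capacity-aware greedy assignment described above (the corresponding cell earlier in this notebook is unfinished). KITCHEN_CAPACITY and reliance on the 'long', 'lat' and 'qtyOrdered' columns are assumptions based on the surrounding code:
KITCHEN_CAPACITY = 400  # assumed per-kitchen order capacity

def assign_to_kitchens(customer, kitchen, capacity=KITCHEN_CAPACITY):
    load = {k: 0 for k in kitchen.index}
    assignment = []
    for idx, row in customer.iterrows():          # customer is already sorted outermost-first
        dists = np.sqrt((kitchen['long'] - row['long'])**2 + (kitchen['lat'] - row['lat'])**2)
        for k in dists.sort_values().index:       # nearest kitchen first, then second nearest, ...
            if load[k] + row['qtyOrdered'] <= capacity:
                load[k] += row['qtyOrdered']
                assignment.append(k)
                break
        else:
            assignment.append(dists.idxmin())     # all kitchens full: fall back to the nearest one
    customer['cluster'] = assignment
    return customer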
# Get degree for each customer in the cluster
def getDegree(data):
# distance
# center long lat (start of routing)
center_latitude = None   # per kitchen (to be filled in with each kitchen's start point) - unfinished
center_longitude = None  # per kitchen
degrees = []
degree = 0
# For each row in the column,
for row in data['longitude']:
degrees = np.rint(np.rad2deg(np.arctan2((data['latitude']-center_latitude),(data['longitude']-center_longitude))))
# center at Pulogadung
data['degrees'] = degrees
return data
# Assign drivers from the kitchen to customers based on angle (degree) and distance
# Main priority is the degree, so no driver ends up with only the nearby customers
# Enforcing the 1-hour maximum delivery time is not worked out yet, but at least driver distances are fairly balanced
# Special case: if the degree is small but the distance is very far, use a new driver.
# NOT FINISHED YET
Explanation: Assign driver in group based on degree and distance
End of explanation |
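A hedged sketch of the per-group driver assignment idea (sort a kitchen's customers by degree, then by distance, and cut them into routes). DRIVER_BATCH is an assumed number of orders per driver, and the 'degrees' and 'distSort' columns come from the earlier cells:
DRIVER_BATCH = 30  # assumed max orders one driver can deliver within the 1-hour window

def assign_drivers(group):
    # group: customers of one kitchen cluster, with 'degrees' and 'distSort' columns.
    ordered = group.sort_values(['degrees', 'distSort'])
    ordered['driver'] = np.arange(len(ordered)) // DRIVER_BATCH
    return ordered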
14,225 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
General examples / Example 2
Step1: Test data:
* iris is a dict-like object.
| Item | Description |
| -- | -- |
| ('target_names', (3L,)) | There are three iris species: setosa, versicolor, virginica |
| ('data', (150L, 4L)) | 150 samples with four features |
| ('target', (150L,)) | Which species each of the 150 samples is |
| DESCR | Description of the dataset |
| feature_names | Meaning of the four features |
(2) PCA and SelectKBest
PCA(n_components = number of principal components)
Step2: (3) FeatureUnion
Use sklearn.pipeline.FeatureUnion to combine principal component analysis (PCA) and univariate selection (SelectKBest).<br />
Finally, obtain the selected features.
Step3: (4) Finding the best result
Scikit-learn's support vector machine classification library provides simple, easy-to-understand commands: SVC() builds an estimator object, whose .fit() and .predict() methods are then used for training and prediction.
Use GridSearchCV cross-validation to obtain a grid of scores computed over the parameter grid and find the best point in that grid.
Finally, print the parameters represented by that point. | Python Code:
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.grid_search import GridSearchCV
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
iris = load_iris()
X, y = iris.data, iris.target
Explanation: General examples / Example 2: Concatenating multiple feature extraction methods
http://scikit-learn.org/stable/auto_examples/feature_stacker.html
In many real-world cases there are several ways to extract features from a dataset, and multiple methods are often combined to obtain good features. This example shows how to use FeatureUnion to combine features obtained by PCA and univariate selection. Although combining them is not particularly helpful on this dataset, it illustrates how FeatureUnion is used.
The main goals of this example:
* Use the iris dataset
* Use FeatureUnion
(1) Data import and description
First import the iris dataset, using from sklearn.datasets import load_iris to load the data
Prepare X (feature data) and y (target data)
End of explanation
# This dataset is way too high-dimensional. Better do PCA:
pca = PCA(n_components=2)
# Maybe some original features were good, too?
selection = SelectKBest(k=1)
Explanation: Test data:
* iris is a dict-like object.
| Item | Description |
| -- | -- |
| ('target_names', (3L,)) | There are three iris species: setosa, versicolor, virginica |
| ('data', (150L, 4L)) | 150 samples with four features |
| ('target', (150L,)) | Which species each of the 150 samples is |
| DESCR | Description of the dataset |
| feature_names | Meaning of the four features |
(2) PCA and SelectKBest
PCA(n_components = number of principal components): Principal Component Analysis (PCA) is a commonly used method for reducing data dimensionality. Its principle is to find new axes such that, when the data is projected onto them, the variance is maximized; this reduces the dimensionality while trying to preserve the characteristics of the original data points.
SelectKBest(score_func, k): score_func is the scoring function used to select features, and k sets how many features to keep.
End of explanation
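A small optional check, not part of the original example: fitting SelectKBest on its own exposes the univariate scores (ANOVA F-values by default) it uses, which helps interpret the choice of k:
selection_scores = SelectKBest(k=1).fit(X, y).scores_
print(selection_scores)  # one score per iris feature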
# Build estimator from PCA and Univariate selection:
combined_features = FeatureUnion([("pca", pca), ("univ_select", selection)])
# Use combined features to transform dataset:
X_features = combined_features.fit(X, y).transform(X)
Explanation: (3) FeatureUnion
Use sklearn.pipeline.FeatureUnion to combine principal component analysis (PCA) and univariate selection (SelectKBest).<br />
Finally, obtain the selected features.
End of explanation
svm = SVC(kernel="linear")
# Do grid search over k, n_components and C:
pipeline = Pipeline([("features", combined_features), ("svm", svm)])
param_grid = dict(features__pca__n_components=[1, 2, 3],
features__univ_select__k=[1, 2],
svm__C=[0.1, 1, 10])
grid_search = GridSearchCV(pipeline, param_grid=param_grid, verbose=10)
grid_search.fit(X, y)
print(grid_search.best_estimator_)
Explanation: (4) Finding the best result
Scikit-learn's support vector machine classification library provides simple, easy-to-understand commands: SVC() builds an estimator object, whose .fit() and .predict() methods are then used for training and prediction.
Use GridSearchCV cross-validation to obtain a grid of scores computed over the parameter grid and find the best point in that grid.
Finally, print the parameters represented by that point.
End of explanation |
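A short follow-up sketch for inspecting the optimum found above; both attributes are available on GridSearchCV after fitting:
print(grid_search.best_score_)   # best cross-validated score on the parameter grid
print(grid_search.best_params_)  # parameter combination that achieved it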
14,226 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step9: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step10: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = lambda x: 1.0/(1+ np.exp(-x))
def sigmoid(self, x):
return 1 / (1 + np.exp(-x))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)# signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
output_errors = targets - final_outputs
# Output layer error is the difference between desired target and actual output.
# TODO: Backpropagated error
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)
# errors propagated to the hidden layer
hidden_grad = hidden_outputs * (1 - hidden_outputs)
# hidden layer gradients
# TODO: Update the weights
self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T)
# update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * np.dot((hidden_errors * hidden_grad), inputs.T)
# update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)# signals from hidden layer
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs# signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers: a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network, calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
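(For reference, the slope of $y = x$ is 1. With that, the updates computed by the completed train() method above can be written compactly as
$$\delta_o = y - \hat{y}, \qquad \delta_h = \big(W_{ho}^{\top}\,\delta_o\big)\odot h \odot (1-h),$$
$$W_{ho} \leftarrow W_{ho} + \eta\,\delta_o\,h^{\top}, \qquad W_{ih} \leftarrow W_{ih} + \eta\,\delta_h\,x^{\top},$$
where $x$ is the input vector, $h$ the hidden activations, $\hat{y}$ the network output, $\eta$ the learning rate and $\odot$ an element-wise product. This is only a reference sketch in this notebook's notation, not additional required code.)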
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
import sys
### Set the hyperparameters here ###
epochs = 2000
learning_rate = 0.03
hidden_nodes = 10
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    for record, target in zip(train_features.loc[batch].values,
                              train_targets.loc[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
# plt.ylim(ymax=0.5)
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate the model's predictions will be. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, the model won't have enough capacity to learn, and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation |
14,227 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task
Step1: Define path to data
Step2: A few basic libraries that we'll need for the initial exercises
Step3: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
Step4: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline
Step5: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object
Step6: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder
Step7: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
Step8: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array contains just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
Step9: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
Step10: The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four
Step11: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
Step12: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
Step13: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
Step14: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras
Step15: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
Step16: Here's a few examples of the categories we just imported
Step17: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition
Step18: ...and here's the fully-connected definition.
Step19: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model
Step20: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
Step21: We'll learn about what these different blocks do later in the course. For now, it's enough to know that
Step22: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
Step23: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
Step24: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data
Step25: From here we can use exactly the same steps as before to look at predictions from the model.
Step26: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label. | Python Code:
%matplotlib inline
Explanation: Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to this week's task: 'Dogs vs Cats'
We're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): "State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task". So if we can beat 80%, then we will be at the cutting edge as of 2013!
Basic setup
There isn't too much to do to get started - just a few simple configuration steps.
This shows plots in the web page itself - we always wants to use this when using jupyter notebook:
End of explanation
path = "data/dogscats/"
# path = "data/dogscats/sample/"
Explanation: Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
End of explanation
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
Explanation: A few basic libraries that we'll need for the initial exercises:
End of explanation
from imp import reload
import utils; reload(utils)
from utils import plots
Explanation: We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
End of explanation
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
batch_size=8
# Import our class, and instantiate
import vgg16; reload(vgg16)
from vgg16 import Vgg16
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
# batches = vgg.get_batches(path+'train', batch_size=batch_size)
# val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
# vgg.finetune(batches)
# vgg.fit(batches, val_batches, nb_epoch=1)
Explanation: Use a pretrained VGG model with our Vgg16 class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, Vgg16, which makes using the VGG 16 model very straightforward.
The punchline: state of the art custom model in 7 lines of code
Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
End of explanation
# vgg = Vgg16()
Explanation: The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
Use Vgg16 for basic image recognition
Let's start off by using the Vgg16 class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object:
End of explanation
batches = vgg.get_batches(path+'train', batch_size=4)
Explanation: Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
Let's grab batches of data from our training folder:
End of explanation
imgs,labels = next(batches)
Explanation: (BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
Batches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
End of explanation
plots(imgs, titles=labels)
Explanation: As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array contains just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
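For instance (a tiny illustration, not part of the original lesson), the one-hot vectors for three categories are just the rows of a 3x3 identity matrix:
np.eye(3)   # row 0 = cats, row 1 = dogs, row 2 = kangaroos: a single 1 per row, the rest 0's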
End of explanation
vgg.predict(imgs, True)
Explanation: We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
End of explanation
vgg.classes[:4]
Explanation: The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four:
End of explanation
batch_size=8
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
Explanation: (Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().
We create our batches just like before, making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
End of explanation
vgg.finetune(batches)
Explanation: Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
End of explanation
vgg.fit(batches, val_batches, nb_epoch=1)
Explanation: Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
End of explanation
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential, Model
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers import Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
Explanation: That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras:
End of explanation
FILES_PATH = 'http://files.fast.ai/models/'; CLASS_FILE='imagenet_class_index.json'
# Keras' get_file() is a handy function that downloads files, and caches them for re-use later
fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')
with open(fpath) as f: class_dict = json.load(f)
# Convert dictionary with string indexes into an array
classes = [class_dict[str(i)][1] for i in range(len(class_dict))]
Explanation: Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
End of explanation
classes[:5]
Explanation: Here's a few examples of the categories we just imported:
End of explanation
def ConvBlock(layers, model, filters):
for i in range(layers):
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(filters, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
Explanation: Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:
End of explanation
def FCBlock(model):
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
Explanation: ...and here's the fully-connected definition.
End of explanation
# Mean of each channel as provided by VGG researchers
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))
def vgg_preprocess(x):
x = x - vgg_mean # subtract mean
    return x[:, ::-1]    # reverse channel axis rgb->bgr (VGG expects BGR ordering)
Explanation: When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:
End of explanation
def VGG_16():
model = Sequential()
model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))
ConvBlock(2, model, 64)
ConvBlock(2, model, 128)
ConvBlock(3, model, 256)
ConvBlock(3, model, 512)
ConvBlock(3, model, 512)
model.add(Flatten())
FCBlock(model)
FCBlock(model)
model.add(Dense(1000, activation='softmax'))
return model
Explanation: Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
End of explanation
model = VGG_16()
Explanation: We'll learn about what these different blocks do later in the course. For now, it's enough to know that:
Convolution layers are for finding patterns in images
Dense (fully connected) layers are for combining patterns across an image
Now that we've defined the architecture, we can create the model like any python object:
End of explanation
fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')
model.load_weights(fpath)
Explanation: As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
End of explanation
batch_size = 4
Explanation: Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call predict() on them.
End of explanation
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True,
batch_size=batch_size, class_mode='categorical'):
return gen.flow_from_directory(path+dirname, target_size=(224,224),
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
Explanation: Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:
End of explanation
batches = get_batches('train', batch_size=batch_size)
val_batches = get_batches('valid', batch_size=batch_size)
imgs,labels = next(batches)
# This shows the 'ground truth'
plots(imgs, titles=labels)
Explanation: From here we can use exactly the same steps as before to look at predictions from the model.
End of explanation
def pred_batch(imgs):
preds = model.predict(imgs)
idxs = np.argmax(preds, axis=1)
print('Shape: {}'.format(preds.shape))
print('First 5 classes: {}'.format(classes[:5]))
print('First 5 probabilities: {}\n'.format(preds[0, :5]))
print('Predictions prob/class: ')
for i in range(len(idxs)):
idx = idxs[i]
print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))
pred_batch(imgs)
Explanation: The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with np.argmax()) we can find the predicted label.
End of explanation |
14,228 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Recursive Parser for Arithmetic Expressions
In this notebook we implement a simple recursive descent parser for arithmetic expressions.
This parser will implement the following grammar
Step1: The function tokenize receives a string s as argument and returns a list of tokens.
The string s is supposed to represent an arithmetical expression.
Note
Step2: Implementing the Recursive Descent Parser
The function parse takes a string s as input and parses this string according to the recursive grammar
shown above. The function returns the floating point number that results from evaluating the expression given in s.
Step3: The function parseExpr implements the following grammar rule
Step4: The function parseExprRest implements the following grammar rules
Step5: The function parseProduct implements the following grammar rule
Step6: The function parseProductRest implements the following grammar rules
Step7: The function parseFactor implements the following grammar rules
Step8: Testing | Python Code:
import re
Explanation: A Recursive Parser for Arithmetic Expressions
In this notebook we implement a simple recursive descent parser for arithmetic expressions.
This parser will implement the following grammar:
$$
\begin{eqnarray*}
\mathrm{expr}        & \rightarrow & \mathrm{product}\;\;\mathrm{exprRest} \\[0.2cm]
\mathrm{exprRest}    & \rightarrow & \texttt{'+'} \;\;\mathrm{product}\;\;\mathrm{exprRest} \\
                     & \mid        & \texttt{'-'} \;\;\mathrm{product}\;\;\mathrm{exprRest} \\
                     & \mid        & \varepsilon \\[0.2cm]
\mathrm{product}     & \rightarrow & \mathrm{factor}\;\;\mathrm{productRest} \\[0.2cm]
\mathrm{productRest} & \rightarrow & \texttt{'*'} \;\;\mathrm{factor}\;\;\mathrm{productRest} \\
                     & \mid        & \texttt{'/'} \;\;\mathrm{factor}\;\;\mathrm{productRest} \\
                     & \mid        & \varepsilon \\[0.2cm]
\mathrm{factor}      & \rightarrow & \texttt{'('} \;\;\mathrm{expr} \;\;\texttt{')'} \\
                     & \mid        & \texttt{NUMBER}
\end{eqnarray*}
$$
Implementing a Scanner
We implement a scanner with the help of the module re.
End of explanation
def tokenize(s):
'''Transform the string s into a list of tokens. The string s
is supposed to represent an arithmetic expression.
'''
lexSpec = r'''([ \t]+) | # blanks and tabs
([1-9][0-9]*|0) | # number
([()]) | # parentheses
([-+*/]) | # arithmetical operators
(.) # unrecognized character
'''
tokenList = re.findall(lexSpec, s, re.VERBOSE)
result = []
for ws, number, parenthesis, operator, error in tokenList:
if ws: # skip blanks and tabs
continue
elif number:
result += [ number ]
elif parenthesis:
result += [ parenthesis ]
elif operator:
result += [ operator ]
else:
result += [ f'ERROR({error})']
return result
tokenize('1 + (2 + @ 34 - 2**0)/7')
Explanation: The function tokenize receives a string s as argument and returns a list of tokens.
The string s is supposed to represent an arithmetical expression.
Note:
1. We need to set the flag re.VERBOSE in our call of the function findall
below because otherwise we are not able to format the regular expression lexSpec the way
we have done it.
2. The regular expression lexSpec contains 5 parenthesized groups. Therefore,
findall returns a list of 5-tuples where the 5 components correspond to the 5
groups of the regular expression.
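For instance, a smaller two-group pattern shows the shape of the result (an illustration, not part of the original notebook):
re.findall(r'([0-9]+)|([-+*/()])', '12+3')   # [('12', ''), ('', '+'), ('3', '')]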
End of explanation
def parse(s):
TL = tokenize(s)
result, Rest = parseExpr(TL)
assert Rest == [], f'Parse Error: could not parse {TL}'
return result
Explanation: Implementing the Recursive Descent Parser
The function parse takes a string s as input and parses this string according to the recursive grammar
shown above. The function returns the floating point number that results from evaluating the expression given in s.
End of explanation
def parseExpr(TL):
product, Rest = parseProduct(TL)
return parseExprRest(product, Rest)
Explanation: The function parseExpr implements the following grammar rule:
$$ \mathrm{expr} \rightarrow \;\mathrm{product}\;\;\mathrm{exprRest} $$
It takes a token list TL as its input and returns a pair of the form (value, Rest) where
- value is the result of evaluating the arithmetical expression
that is represented by TL and
- Rest is a list of those tokens that have not been consumed during the parse process.
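Once all of the parse functions below are defined, the (value, Rest) contract looks like this (an illustration, not part of the original notebook):
parseExpr(['1', '+', '2', '*', '3'])        # (7.0, [])
parseExpr(['1', '+', '2', ')', '*', '3'])   # (3.0, [')', '*', '3']); parsing stops at the unconsumed ')'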
End of explanation
def parseExprRest(Sum, TL):
if TL == []:
return Sum, []
elif TL[0] == '+':
product, Rest = parseProduct(TL[1:])
return parseExprRest(Sum + product, Rest)
elif TL[0] == '-':
product, Rest = parseProduct(TL[1:])
return parseExprRest(Sum - product, Rest)
else:
return Sum, TL
Explanation: The function parseExprRest implements the following grammar rules:
$$
\begin{eqnarray*}
\mathrm{exprRest} & \rightarrow & \texttt{'+'} \;\;\mathrm{product}\;\;\mathrm{exprRest} \\
                  & \mid        & \texttt{'-'} \;\;\mathrm{product}\;\;\mathrm{exprRest} \\
                  & \mid        & \varepsilon
\end{eqnarray*}
$$
It takes two arguments:
- sum is the value that has already been parsed,
- TL is the list of tokens that still need to be consumed.
It returns a pair of the form (value, Rest) where
- value is the result of evaluating the arithmetical expression
that is represented by TL and
- Rest is a list of those tokens that have not been consumed during the parse process.
End of explanation
def parseProduct(TL):
factor, Rest = parseFactor(TL)
return parseProductRest(factor, Rest)
Explanation: The function parseProduct implements the following grammar rule:
$$ \mathrm{product} \rightarrow \;\mathrm{factor}\;\;\mathrm{productRest} $$
It takes one argument:
- TL is the list of tokens that need to be consumed.
It returns a pair of the form (value, Rest) where
- value is the result of evaluating the arithmetical expression
that is represented by TL and
- Rest is a list of those tokens that have not been consumed while trying to parse a product.
End of explanation
def parseProductRest(product, TL):
if TL == []:
return product, []
elif TL[0] == '*':
factor, Rest = parseFactor(TL[1:])
return parseProductRest(product * factor, Rest)
elif TL[0] == '/':
factor, Rest = parseFactor(TL[1:])
return parseProductRest(product / factor, Rest)
else:
return product, TL
Explanation: The function parseProductRest implements the following grammar rules:
$$
\begin{eqnarray*}
\mathrm{productRest} & \rightarrow & \texttt{'*'} \;\;\mathrm{factor}\;\;\mathrm{productRest} \\
                     & \mid        & \texttt{'/'} \;\;\mathrm{factor}\;\;\mathrm{productRest} \\
                     & \mid        & \varepsilon
\end{eqnarray*}
$$
It takes two arguments:
- product is the value that has already been parsed,
- TL is the list of tokens that still need to be consumed.
It returns a pair of the form (value, Rest) where
- value is the result of evaluating the arithmetical expression
that is represented by TL and
- Rest is a list of those tokens that have not been consumed while trying to parse the rest of a product.
End of explanation
def parseFactor(TL):
if TL[0] == '(':
expr, Rest = parseExpr(TL[1:])
assert Rest[0] == ')', 'Parse Error: expected ")"'
return expr, Rest[1:]
else:
return float(TL[0]), TL[1:]
Explanation: The function parseFactor implements the following grammar rules:
$$
\begin{eqnarray*}
\mathrm{factor} & \rightarrow & \texttt{'('} \;\;\mathrm{expr} \;\;\texttt{')'} \\
                & \mid        & \texttt{NUMBER}
\end{eqnarray*}
$$
It takes one argument:
- TL is the list of tokens that still need to be consumed.
It returns a pair of the form (value, Rest) where
- value is the result of evaluating the arithmetical expression
that is represented by TL and
- Rest is a list of those tokens that have not been consumed while trying to parse a factor.
End of explanation
def test(s):
r1 = parse(s)
r2 = eval(s)
assert r1 == r2
return r1
test('11+22*(33-44)/(5-10*5/(4-3))')
test('0*11+22*(33-44)/(5-10*5/(4-3))')
Explanation: Testing
End of explanation |
14,229 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Imbalanced Weighted Binary Classification
Step1: Get data
Step2: Create model
Step3: Train unweighted loss model
Step4: Now train weighted loss model
Step5: Grid search
Step6: Plot results
Step7: Conclusion
Let's look at the top 10 sorted descending AUC_ROC and AUC_PR values and the corresponding positive weights.
AUC_ROC
Step8: AUC_PR | Python Code:
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
print(tf.__version__)
Explanation: Imbalanced Weighted Binary Classification
End of explanation
df = pd.read_csv(filepath_or_buffer = "UCI_Credit_Card.csv")
df.head()
df.describe()
FEATURE_NAMES = list(df.columns)
NUMERIC_FEATURE_NAMES = "LIMIT_BAL,AGE,BILL_AMT1,BILL_AMT2,BILL_AMT3,BILL_AMT4,BILL_AMT5,BILL_AMT6,PAY_AMT1,PAY_AMT2,PAY_AMT3,PAY_AMT4,PAY_AMT5,PAY_AMT6".split(',')
CATEGORICAL_FEATURE_NAMES = "SEX,EDUCATION,MARRIAGE,PAY_0,PAY_2,PAY_3,PAY_4,PAY_5,PAY_6".split(',')
LABEL_NAME = FEATURE_NAMES[-1]
train_rows = int(len(df) * 0.9)
eval_rows = len(df) - train_rows
print("train_rows = {} & eval_rows = {}".format(train_rows, eval_rows))
Explanation: Get data
End of explanation
def train_input_fn(df, batch_size = 128):
#1. Convert dataframe into correct (features,label) format for Estimator API
dataset = tf.data.Dataset.from_tensor_slices(
tensors = (dict(df[FEATURE_NAMES]), df[LABEL_NAME]))
# Note:
# If we returned now, the Dataset would iterate over the data once
# in a fixed order, and only produce a single element at a time.
#2. Shuffle, repeat, and batch the examples.
dataset = dataset.shuffle(buffer_size = 1000).repeat(count = None).batch(batch_size = batch_size)
return dataset
def eval_input_fn(df, batch_size = 128):
#1. Convert dataframe into correct (features,label) format for Estimator API
dataset = tf.data.Dataset.from_tensor_slices(
tensors = (dict(df[FEATURE_NAMES]), df[LABEL_NAME]))
#2.Batch the examples.
dataset = dataset.batch(batch_size = batch_size)
return dataset
def create_feature_columns():
numeric_columns = [tf.feature_column.numeric_column(key = key)
for key in NUMERIC_FEATURE_NAMES]
categorical_columns = [tf.feature_column.indicator_column(
categorical_column = tf.feature_column.categorical_column_with_vocabulary_list(
key = key,
vocabulary_list = list(df[key].unique())))
for key in CATEGORICAL_FEATURE_NAMES]
feature_columns = numeric_columns + categorical_columns
return feature_columns
def model_fn(features, labels, mode, params):
input_layer = tf.feature_column.input_layer(
features = features,
feature_columns = create_feature_columns())
logits = tf.layers.dense(
        inputs = input_layer,
units = 1,
activation = None)
# shape = (current_batch_size, 1)
probabilities = tf.nn.sigmoid(x = logits)
# shape = (current_batch_size,)
class_ids = tf.where(
condition = probabilities < 0.5,
x = tf.zeros_like(tensor = probabilities, dtype = tf.float64),
y = tf.ones_like(tensor = probabilities, dtype = tf.float64))
if mode == tf.estimator.ModeKeys.TRAIN or mode == tf.estimator.ModeKeys.EVAL:
labels = tf.expand_dims(
input = tf.cast(x = labels, dtype = tf.float32),
axis = -1)
loss = tf.reduce_mean(
input_tensor = tf.nn.weighted_cross_entropy_with_logits(
targets = labels,
logits = logits,
pos_weight = params["pos_weight"]))
if mode == tf.estimator.ModeKeys.TRAIN:
train_op = tf.contrib.layers.optimize_loss(
loss = loss,
global_step = tf.train.get_global_step(),
learning_rate = params["learning_rate"],
optimizer = "Adam")
eval_metric_ops = None
else:
train_op = None
eval_metric_ops = {
"accuracy": tf.metrics.accuracy(
labels = labels,
predictions = class_ids),
"true_positives_at_thresholds": tf.metrics.true_positives_at_thresholds(
labels = labels,
predictions = probabilities,
thresholds = list(np.arange(0.0, 1.005, 0.005))),
"false_negatives_at_thresholds": tf.metrics.false_negatives_at_thresholds(
labels = labels,
predictions = probabilities,
thresholds = list(np.arange(0.0, 1.005, 0.005))),
"false_positives_at_thresholds": tf.metrics.false_positives_at_thresholds(
labels = labels,
predictions = probabilities,
thresholds = list(np.arange(0.0, 1.005, 0.005))),
"true_negatives_at_thresholds": tf.metrics.true_negatives_at_thresholds(
labels = labels,
predictions = probabilities,
thresholds = list(np.arange(0.0, 1.005, 0.005))),
"precision_at_thresholds": tf.metrics.precision_at_thresholds(
labels = labels,
predictions = probabilities,
thresholds = list(np.arange(0.0, 1.005, 0.005))),
"recall_at_thresholds": tf.metrics.recall_at_thresholds(
labels = labels,
predictions = probabilities,
thresholds = list(np.arange(0.0, 1.005, 0.005))),
"auc_roc": tf.metrics.auc(
labels = labels,
predictions = probabilities,
curve = "ROC",
summation_method = "careful_interpolation"),
"auc_pr": tf.metrics.auc(
labels = labels,
predictions = probabilities,
curve = "PR",
summation_method = "careful_interpolation")}
else:
loss = None
train_op = None
eval_metric_ops = None
return tf.estimator.EstimatorSpec(
mode = mode,
predictions = {
"probabilities": probabilities,
"class_ids": class_ids},
loss = loss,
train_op = train_op,
eval_metric_ops = eval_metric_ops,
export_outputs = {
"classes": tf.estimator.export.PredictOutput(
outputs = {
"probabilities": probabilities,
"class_ids": class_ids})})
def serving_input_fn():
numeric_feature_placeholders = {key: tf.placeholder(
dtype = tf.float64,
shape = [None])
for key in NUMERIC_FEATURE_NAMES}
categorical_feature_placeholders = {key: tf.placeholder(
dtype = tf.int64,
shape = [None])
for key in CATEGORICAL_FEATURE_NAMES}
feature_placeholders = {**numeric_feature_placeholders,
**categorical_feature_placeholders}
features = feature_placeholders
return tf.estimator.export.ServingInputReceiver(
features = features,
receiver_tensors = feature_placeholders)
def train_and_evaluate(output_dir, hparams):
# Ensure filewriter cache is clear for TensorBoard events file
tf.summary.FileWriterCache.clear()
EVAL_INTERVAL = 60
estimator = tf.estimator.Estimator(
model_fn = model_fn,
params = hparams,
config = tf.estimator.RunConfig(
save_checkpoints_secs = EVAL_INTERVAL,
tf_random_seed = 1),
model_dir = output_dir)
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: train_input_fn(
df = df[:train_rows],
batch_size = hparams["train_batch_size"]),
max_steps = hparams["train_steps"])
exporter = tf.estimator.LatestExporter(
name = "exporter",
serving_input_receiver_fn = serving_input_fn)
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: eval_input_fn(
df = df[train_rows:],
batch_size = hparams["eval_batch_size"]),
steps = None,
exporters = exporter,
throttle_secs = EVAL_INTERVAL)
tf.estimator.train_and_evaluate(
estimator = estimator,
train_spec = train_spec,
eval_spec = eval_spec)
return estimator
hparams = {}
hparams["train_batch_size"] = 128
hparams["eval_batch_size"] = 128
hparams["learning_rate"] = 0.01
hparams["train_steps"] = 1000
Explanation: Create model
End of explanation
tf.logging.set_verbosity(v = tf.logging.INFO)
UNWEIGHTED_MODEL_DIR = "unweighted_trained"
hparams["pos_weight"] = 1.0
shutil.rmtree(path = UNWEIGHTED_MODEL_DIR, ignore_errors = True) # start fresh each time
unweighted_estimator = train_and_evaluate(UNWEIGHTED_MODEL_DIR, hparams)
Explanation: Train unweighted loss model
End of explanation
negative_positive_ratio = 1.0 / df[LABEL_NAME].mean()
negative_positive_ratio
tf.logging.set_verbosity(v = tf.logging.INFO)
WEIGHTED_MODEL_DIR = "weighted_trained"
hparams["pos_weight"] = negative_positive_ratio
shutil.rmtree(path = WEIGHTED_MODEL_DIR, ignore_errors = True) # start fresh each time
weighted_estimator = train_and_evaluate(WEIGHTED_MODEL_DIR, hparams)
Explanation: Now train weighted loss model
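The loss minimized in model_fn above is tf.nn.weighted_cross_entropy_with_logits, which for a label $y$ and predicted probability $p = \sigma(\mathrm{logit})$ computes, per example,
$$\ell = -\big(w_{+}\,y\,\log p + (1 - y)\,\log(1 - p)\big),$$
where $w_{+}$ is pos_weight. Here pos_weight is set to negative_positive_ratio, the inverse of the positive-class frequency, so mistakes on the rare default class are weighted more heavily; the exact value is a tunable knob that the grid search below explores.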
End of explanation
metrics_list = []
tf.logging.set_verbosity(v = tf.logging.ERROR)
for pos_weight in list(np.arange(0.1, 10.1, 0.1)):
hparams["pos_weight"] = pos_weight
UNWEIGHTED_MODEL_DIR = "unweighted_trained"
shutil.rmtree(path = UNWEIGHTED_MODEL_DIR, ignore_errors = True) # start fresh each time
weighted_estimator = tf.estimator.Estimator(
model_fn = model_fn,
params = hparams,
config = tf.estimator.RunConfig(save_checkpoints_secs = 600, tf_random_seed = 1),
model_dir = UNWEIGHTED_MODEL_DIR)
weighted_estimator.train(
input_fn = lambda: train_input_fn(
df = df[:train_rows],
batch_size = 128),
steps = 1000)
metrics = weighted_estimator.evaluate(
input_fn = lambda: eval_input_fn(
df = df[train_rows:],
batch_size = 128))
metrics_list.append(metrics)
print("pos_weight = {}, metrics['accuracy'] = {}, metrics['auc_roc'] = {}, metrics['auc_pr'] = {}".format(pos_weight, metrics["accuracy"], metrics["auc_roc"], metrics["auc_pr"]))
Explanation: Grid search
End of explanation
import seaborn as sns
sns.set(rc = {"figure.figsize": (15,10)})
sns.lineplot(
x = "pos_weight",
y = "accuracy",
data = {
"pos_weight": list(np.arange(0.1, 10.1, 0.1)),
"accuracy": [metric["accuracy"]
for metric in metrics_list]})
sns.lineplot(
x = "pos_weight",
y = "auc_roc",
data = {
"pos_weight": list(np.arange(0.1, 10.1, 0.1)),
"auc_roc": [metric["auc_roc"]
for metric in metrics_list]})
sns.lineplot(
x = "pos_weight",
y = "auc_pr",
data = {
"pos_weight": list(np.arange(0.1, 10.1, 0.1)),
"auc_pr": [metric["auc_pr"]
for metric in metrics_list]})
sns.lineplot(
data = pd.DataFrame(
data = {
"accuracy": [metric["accuracy"]
for metric in metrics_list],
"auc_roc": [metric["auc_roc"]
for metric in metrics_list],
"auc_pr": [metric["auc_pr"]
for metric in metrics_list]},
index = list(np.arange(0.1, 10.1, 0.1))))
accuracy_arr = np.array([metric["accuracy"]
for metric in metrics_list])
accuracy_arr /= np.sum(accuracy_arr)
auc_roc_arr = np.array([metric["auc_roc"]
for metric in metrics_list])
auc_roc_arr /= np.sum(auc_roc_arr)
auc_pr_arr = np.array([metric["auc_pr"]
for metric in metrics_list])
auc_pr_arr /= np.sum(auc_pr_arr)
sns.lineplot(
data = pd.DataFrame(
data = {
"accuracy": accuracy_arr,
"auc_roc": auc_roc_arr,
"auc_pr": auc_pr_arr,
"avg": (accuracy_arr * 0.2 + auc_roc_arr * 0.1 + auc_pr_arr * 0.7)},
index = list(np.arange(0.1, 10.1, 0.1))))
sns.lineplot(
data = pd.DataFrame(
data = {
"avg": (accuracy_arr * 0.2 + auc_roc_arr * 0.1 + auc_pr_arr * 0.7)},
index = list(np.arange(0.1, 10.1, 0.1))))
Explanation: Plot results
End of explanation
auc_roc_val = np.array(object = [metric["auc_roc"]
for metric in metrics_list])
auc_roc_idx = np.flip(m = np.argsort(a = auc_roc_val))
auc_roc_sorted = np.stack(
arrays = [auc_roc_val[auc_roc_idx],
np.arange(0.1, 10.1, 0.1)[auc_roc_idx]],
axis = 1)
print(auc_roc_sorted[0:10])
auc_roc_baseline_idx = np.where(auc_roc_sorted[:, 1] == 1.0)[0]
print("pos_weight of 1.0 is {0} highest AUC_ROC with value {1}".format(int(auc_roc_baseline_idx), auc_roc_sorted[auc_roc_baseline_idx][0][0]))
Explanation: Conclusion
Let's look at the top 10 sorted descending AUC_ROC and AUC_PR values and the corresponding positive weights.
AUC_ROC
End of explanation
auc_pr_val = np.array(object = [metric["auc_pr"]
for metric in metrics_list])
auc_pr_idx = np.flip(m = np.argsort(a = auc_pr_val))
auc_pr_sorted = np.stack(
arrays = [auc_pr_val[auc_pr_idx],
np.arange(0.1, 10.1, 0.1)[auc_pr_idx]],
axis = 1)
print(auc_pr_sorted[0:10])
auc_pr_baseline_idx = np.where(auc_pr_sorted[:, 1] == 1.0)[0]
print("pos_weight of 1.0 is {0} highest AUC_PR with value {1}".format(int(auc_pr_baseline_idx), auc_pr_sorted[auc_pr_baseline_idx][0][0]))
Explanation: AUC_PR
End of explanation |
14,230 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CSCS530 Winter 2015
Complex Systems 530 - Computer Modeling of Complex Systems (Winter 2015)
Course ID
Step1: Random number generation and seeds
Basic reading on random number generation
Step2: With a seeded RNG | Python Code:
%matplotlib inline
# Imports
import numpy
import numpy.random
import matplotlib.pyplot as plt
Explanation: CSCS530 Winter 2015
Complex Systems 530 - Computer Modeling of Complex Systems (Winter 2015)
Course ID: CMPLXSYS 530
Course Title: Computer Modeling of Complex Systems
Term: Winter 2015
Schedule: Wednesdays and Friday, 1:00-2:30PM ET
Location: 120 West Hall (http://www.lsa.umich.edu/cscs/research/computerlab)
Teachers: Mike Bommarito and Sarah Cherng
View this repository on NBViewer
End of explanation
# Let's make a random draw without seeding/controlling our RNG
for n in range(3):
print("Draw {0}".format(n))
X = numpy.random.uniform(size=10)
print(X)
print(X.mean())
print("=" * 16 + "\n")
Explanation: Random number generation and seeds
Basic reading on random number generation:
http://en.wikipedia.org/wiki/Random_number_generation
On Determinism
The second method uses computational algorithms that can produce long sequences of apparently random results, which are in fact completely determined by a shorter initial value, known as a seed or key. The latter type are often called pseudorandom number generators. These types of generators do not typically rely on sources of naturally occurring entropy, though they may be periodically seeded by natural sources, they are non-blocking i.e. not rate-limited by an external event.
A "random number generator" based solely on deterministic computation cannot be regarded as a "true" random number generator in the purest sense of the word, since their output is inherently predictable if all seed values are known. In practice however they are sufficient for most tasks. Carefully designed and implemented pseudo-random number generators can even be certified for security-critical cryptographic purposes, as is the case with the yarrow algorithm and fortuna (PRNG). (The former being the basis of the /dev/random source of entropy on FreeBSD, AIX, Mac OS X, NetBSD and others. OpenBSD also uses a pseudo-random number algorithm based on ChaCha20 known as arc4random.[5])
On distributions
Random numbers uniformly distributed between 0 and 1 can be used to generate random numbers of any desired distribution by passing them through the inverse cumulative distribution function (CDF) of the desired distribution. Inverse CDFs are also called quantile functions. To generate a pair of statistically independent standard normally distributed random numbers (x, y), one may first generate the polar coordinates (r, θ), where r² ~ χ² with 2 degrees of freedom and θ ~ UNIFORM(0, 2π) (see Box–Muller transform).
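As a quick illustration of the inverse-CDF idea (not part of the original notes), uniform draws can be turned into Exponential(1) draws with that distribution's quantile function:
u = numpy.random.uniform(size=1000)   # U ~ Uniform(0, 1)
x = -numpy.log(1.0 - u)               # F^{-1}(u) for F(x) = 1 - exp(-x)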
Without a seeded RNG
End of explanation
# Now let's try again with a fixed seed
seed = 0
# Let's make a random draw without seeding/controlling our RNG
for n in range(3):
print("Draw {0}".format(n))
rs = numpy.random.RandomState(seed)
Y = rs.uniform(size=10)
print(Y)
print(Y.mean())
print("=" * 16 + "\n")
Explanation: With a seeded RNG
End of explanation |
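As an aside (an editorial addition, not the notebook's code), newer numpy releases recommend the Generator API for the same reproducibility pattern; seeding it likewise gives identical draws on every run:
python
rng = numpy.random.default_rng(0)
print(rng.uniform(size=10))
print(numpy.random.default_rng(0).uniform(size=10))  # identical to the line above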
14,231 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples
Importing libraries
Step1: datacleaning
The datacleaning module is used to clean and organize the data into 51 CSV files corresponding to the 50 states of the US and the District of Columbia.
The wrapping function clean_all_data takes all the data sets as input and sorts the data into CSV files of the states.
The CSVs are stored in the Cleaned Data directory which is under the Data directory.
Step2: missing_data
The missing_data module is used to estimate the missing data of the GDP (from 1960 - 1962) and determine the values of the predictors (from 2016-2020).
The wrapping function predict_all takes the CSV files of the states as input and stores the predicted missing values in the same CSV files.
The CSVs generated replace the previous CSV files in the Cleaned Data directory which is under the Data directory.
Step3: ridge_prediction
The ridge_prediction module is used to predict the future values of energies like wind energy, solar energy, hydro energy and nuclear energy from 2016-2020 using ridge regression.
The wrapping function ridge_predict_all takes the CSV files of the states as input and stores the future values of the energies in another CSV file under Ridge Regression folder under the Predicted Data directory.
Step4: svr_prediction
The svr_prediction module is used to predict the future values of energies like wind energy, solar energy, hydro energy and nuclear energy from 2016-2020 using Support Vector Regression
The wrapping function SVR_predict_all takes the CSV files of the states as input and stores the future values of the energies in another CSV file under SVR folder under the Predicted Data directory.
Step5: plots
Visualizations is done using Tableau software. The Tableau workbook for the predicted data is included in the repository. The Tableau dashboard created for this data is illustrated below | Python Code:
from ceo import data_cleaning
from ceo import missing_data
from ceo import svr_prediction
from ceo import ridge_prediction
Explanation: Examples
Importing libraries
End of explanation
data_cleaning.clean_all_data()
Explanation: datacleaning
The datacleaning module is used to clean and organize the data into 51 CSV files corresponding to the 50 states of the US and the District of Columbia.
The wrapping function clean_all_data takes all the data sets as input and sorts the data into CSV files of the states.
The CSVs are stored in the Cleaned Data directory which is under the Data directory.
End of explanation
missing_data.predict_all()
Explanation: missing_data
The missing_data module is used to estimate the missing data of the GDP (from 1960 - 1962) and determine the values of the predictors (from 2016-2020).
The wrapping function predict_all takes the CSV files of the states as input and stores the predicted missing values in the same CSV files.
The CSVs generated replace the previous CSV files in the Cleaned Data directory which is under the Data directory.
End of explanation
ridge_prediction.ridge_predict_all()
Explanation: ridge_prediction
The ridge_prediction module is used to predict the future values of energies like wind energy, solar energy, hydro energy and nuclear energy from 2016-2020 using ridge regression.
The wrapping function ridge_predict_all takes the CSV files of the states as input and stores the future values of the energies in another CSV file under Ridge Regression folder under the Predicted Data directory.
End of explanation
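The internals of ridge_predict_all are not shown here; purely as an illustration of the general ridge fit/predict pattern (with made-up data and column choices, not the ceo package's API), scikit-learn could be used like this:
python
import numpy as np
from sklearn.linear_model import Ridge
years = np.arange(1990, 2016).reshape(-1, 1)   # hypothetical predictor
energy = 2.5 * years.ravel() - 4500.0          # hypothetical energy series
model = Ridge(alpha=1.0).fit(years, energy)
print(model.predict(np.arange(2016, 2021).reshape(-1, 1)))  # 2016-2020 forecast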
svr_prediction.SVR_predict_all()
Explanation: svr_prediction
The svr_prediction module is used to predict the future values of energies like wind energy, solar energy, hydro energy and nuclear energy from 2016-2020 using Support Vector Regression
The wrapping function SVR_predict_all takes the CSV files of the states as input and stores the future values of the energies in another CSV file under SVR folder under the Predicted Data directory.
End of explanation
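Similarly, a generic sketch of the support-vector-regression fit/predict pattern (again with made-up data, not the ceo internals):
python
import numpy as np
from sklearn.svm import SVR
years = np.arange(1990, 2016).reshape(-1, 1)   # hypothetical predictor
energy = 2.5 * years.ravel() - 4500.0          # hypothetical energy series
svr = SVR(kernel="rbf", C=100.0).fit(years, energy)
print(svr.predict(np.arange(2016, 2021).reshape(-1, 1)))  # 2016-2020 forecast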
%%HTML
<div class='tableauPlaceholder' id='viz1489609724011' style='position: relative'><noscript><a href='#'><img alt='Clean Energy Production in the contiguous United States(in million kWh) ' src='https://public.tableau.com/static/images/PB/PB87S38NW/1_rss.png' style='border: none' /></a></noscript><object class='tableauViz' style='display:none;'><param name='host_url' value='https%3A%2F%2Fpublic.tableau.com%2F' /> <param name='path' value='shared/PB87S38NW' /> <param name='toolbar' value='yes' /><param name='static_image' value='https://public.tableau.com/static/images/PB/PB87S38NW/1.png' /> <param name='animate_transition' value='yes' /><param name='display_static_image' value='yes' /><param name='display_spinner' value='yes' /><param name='display_overlay' value='yes' /><param name='display_count' value='yes' /></object></div> <script type='text/javascript'> var divElement = document.getElementById('viz1489609724011'); var vizElement = divElement.getElementsByTagName('object')[0]; vizElement.style.width='1004px';vizElement.style.height='869px'; var scriptElement = document.createElement('script'); scriptElement.src = 'https://public.tableau.com/javascripts/api/viz_v1.js'; vizElement.parentNode.insertBefore(scriptElement, vizElement); </script>
Explanation: plots
Visualizations is done using Tableau software. The Tableau workbook for the predicted data is included in the repository. The Tableau dashboard created for this data is illustrated below:
End of explanation |
14,232 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Suggestions for lab exercises.
Variables and assignment
Exercise 1
Remember that $n! = n \times (n - 1) \times \dots \times 2 \times 1$. Compute $15!$, assigning the result to a sensible variable name.
Solution
Step1: Exercise 2
Using the math module, check your result for $15$ factorial. You should explore the help for the math library and its functions, using eg tab-completion, the spyder inspector, or online sources.
Solution
Step2: Exercise 3
Stirling's approximation gives that, for large enough $n$,
\begin{equation}
n! \simeq \sqrt{2 \pi} n^{n + 1/2} e^{-n}.
\end{equation}
Using functions and constants from the math library, compare the results of $n!$ and Stirling's approximation for $n = 5, 10, 15, 20$. In what sense does the approximation improve?
Solution
Step4: We see that the relative error decreases, whilst the absolute error grows (significantly).
Basic functions
Exercise 1
Write a function to calculate the volume of a cuboid with edge lengths $a, b, c$. Test your code on sample values such as
$a=1, b=1, c=1$ (result should be $1$);
$a=1, b=2, c=3.5$ (result should be $7.0$);
$a=0, b=1, c=1$ (result should be $0$);
$a=2, b=-1, c=1$ (what do you think the result should be?).
Solution
Step6: In later cases, after having covered exceptions, I would suggest raising a NotImplementedError for negative edge lengths.
Exercise 2
Write a function to compute the time (in seconds) taken for an object to fall from a height $H$ (in metres) to the ground, using the formula
\begin{equation}
h(t) = \frac{1}{2} g t^2.
\end{equation}
Use the value of the acceleration due to gravity $g$ from scipy.constants.g. Test your code on sample values such as
$H = 1$m (result should be $\approx 0.452$s);
$H = 10$m (result should be $\approx 1.428$s);
$H = 0$m (result should be $0$s);
$H = -1$m (what do you think the result should be?).
Solution
Step8: Exercise 3
Write a function that computes the area of a triangle with edge lengths $a, b, c$. You may use the formula
\begin{equation}
A = \sqrt{s (s - a) (s - b) (s - c)}, \qquad s = \frac{a + b + c}{2}.
\end{equation}
Construct your own test cases to cover a range of possibilities.
Step9: Floating point numbers
Exercise 1
Computers cannot, in principle, represent real numbers perfectly. This can lead to problems of accuracy. For example, if
\begin{equation}
x = 1, \qquad y = 1 + 10^{-14} \sqrt{3}
\end{equation}
then it should be true that
\begin{equation}
10^{14} (y - x) = \sqrt{3}.
\end{equation}
Check how accurately this equation holds in Python and see what this implies about the accuracy of subtracting two numbers that are close together.
Solution
Step10: We see that the first three digits are correct. This isn't too surprising
Step12: There is a difference in the fifth significant figure in both solutions in the first case, which gets to the third (arguably the second) significant figure in the second case. Comparing to the limiting solutions above, we see that the larger root is definitely more accurately captured with the first formula than the second (as the result should be bigger than $10^{-2n}$).
In the second case we have divided by a very small number to get the big number, which loses accuracy.
Exercise 5
The standard definition of the derivative of a function is
\begin{equation}
\left. \frac{\text{d} f}{\text{d} x} \right|_{x=X} = \lim_{\delta \to 0} \frac{f(X + \delta) - f(X)}{\delta}.
\end{equation}
We can approximate this by computing the result for a finite value of $\delta$
Step13: Exercise 6
The function $f_1(x) = e^x$ has derivative with the exact value $1$ at $x=0$. Compute the approximate derivative using your function above, for $\delta = 10^{-2 n}$ with $n = 1, \dots, 7$. You should see the results initially improve, then get worse. Why is this?
Solution
Step15: We have a combination of floating point inaccuracies
Step16: Exercise 2
500 years ago some believed that the number $2^n - 1$ was prime for all primes $n$. Use your function to find the first prime $n$ for which this is not true.
Solution
We could do this many ways. This "elegant" solution says
Step17: Exercise 3
The Mersenne primes are those that have the form $2^n-1$, where $n$ is prime. Use your previous solutions to generate all the $n < 40$ that give Mersenne primes.
Solution
Step19: Exercise 4
Write a function to compute all prime factors of an integer $n$, including their multiplicities. Test it by printing the prime factors (without multiplicities) of $n = 17, \dots, 20$ and the multiplicities (without factors) of $n = 48$.
Note
One effective solution is to return a dictionary, where the keys are the factors and the values are the multiplicities.
Solution
This solution uses the trick of immediately dividing $n$ by any divisor
Step21: Exercise 5
Write a function to generate all the integer divisors, including 1, but not including $n$ itself, of an integer $n$. Test it on $n = 16, \dots, 20$.
Note
You could use the prime factorization from the previous exercise, or you could do it directly.
Solution
Here we will do it directly.
Step23: Exercise 6
A perfect number $n$ is one where the divisors sum to $n$. For example, 6 has divisors 1, 2, and 3, which sum to 6. Use your previous solution to find all perfect numbers $n < 10,000$ (there are only four!).
Solution
We can do this much more efficiently than the code below using packages such as numpy, but this is a "bare python" solution.
Step24: Exercise 7
Using your previous functions, check that all perfect numbers $n < 10,000$ can be written as $2^{k-1} \times (2^k - 1)$, where $2^k-1$ is a Mersenne prime.
Solution
In fact we did this above already
Step25: It's worth thinking about the operation counts of the various functions implemented here. The implementations are inefficient, but even in the best case you see how the number of operations (and hence computing time required) rapidly increases.
Logistic map
Partly taken from Newman's book, p 120.
The logistic map builds a sequence of numbers ${ x_n }$ using the relation
\begin{equation}
x_{n+1} = r x_n \left( 1 - x_n \right),
\end{equation}
where $0 \le x_0 \le 1$.
Exercise 1
Write a program that calculates the first $N$ members of the sequence, given as input $x_0$ and $r$ (and, of course, $N$).
Solution
Step26: Exercise 2
Fix $x_0=0.5$. Calculate the first 2,000 members of the sequence for $r=1.5$ and $r=3.5$ Plot the last 100 members of the sequence in both cases.
What does this suggest about the long-term behaviour of the sequence?
Solution
Step27: This suggests that, for $r=1.5$, the sequence has settled down to a fixed point. In the $r=3.5$ case it seems to be moving between four points repeatedly.
Exercise 3
Fix $x_0 = 0.5$. For each value of $r$ between $1$ and $4$, in steps of $0.01$, calculate the first 2,000 members of the sequence. Plot the last 1,000 members of the sequence on a plot where the $x$-axis is the value of $r$ and the $y$-axis is the values in the sequence. Do not plot lines - just plot markers (e.g., use the 'k.' plotting style).
Solution
Step28: Exercise 4
For iterative maps such as the logistic map, one of three things can occur
Step29: Exercise 2
Check the points $c=0$ and $c=\pm 2 \pm 2 \text{i}$ and ensure they do what you expect. (What should you expect?)
Solution
Step30: Exercise 3
Write a function that, given $N$
generates an $N \times N$ grid spanning $c = x + \text{i} y$, for $-2 \le x \le 2$ and $-2 \le y \le 2$;
returns an $N\times N$ array containing one if the associated grid point is in the Mandelbrot set, and zero otherwise.
Solution
Step31: Exercise 4
Using the function imshow from matplotlib, plot the resulting array for a $100 \times 100$ array to make sure you see the expected shape.
Solution
Step32: Exercise 5
Modify your functions so that, instead of returning whether a point is inside the set or not, it returns the logarithm of the number of iterations it takes. Plot the result using imshow again.
Solution
Step33: Exercise 6
Try some higher resolution plots, and try plotting only a section to see the structure. Note this is not a good way to get high accuracy close up images!
Solution
This is a simple example
Step34: Equivalence classes
An equivalence class is a relation that groups objects in a set into related subsets. For example, if we think of the integers modulo $7$, then $1$ is in the same equivalence class as $8$ (and $15$, and $22$, and so on), and $3$ is in the same equivalence class as $10$. We use the tilde $3 \sim 10$ to denote two objects within the same equivalence class.
Here, we are going to define the positive integers programmatically from equivalent sequences.
Exercise 1
Define a python class Eqint. This should be
Initialized by a sequence;
Store the sequence;
Define its representation (via the __repr__ function) to be the integer length of the sequence;
Redefine equality (via the __eq__ function) so that two eqints are equal if their sequences have same length.
Solution
Step35: Exercise 2
Define a zero object from the empty list, and three one objects, from a single object list, tuple, and string. For example
python
one_list = Eqint([1])
one_tuple = Eqint((1,))
one_string = Eqint('1')
Check that none of the one objects equal the zero object, but all equal the other one objects. Print each object to check that the representation gives the integer length.
Solution
Step36: Exercise 3
Redefine the class by including an __add__ method that combines the two sequences. That is, if a and b are Eqints then a+b should return an Eqint defined from combining a and bs sequences.
Note
Adding two different types of sequences (eg, a list to a tuple) does not work, so it is better to either iterate over the sequences, or to convert to a uniform type before adding.
Solution
Step37: Exercise 4
Check your addition function by adding together all your previous Eqint objects (which will need re-defining, as the class has been redefined). Print the resulting object to check you get 3, and also print its internal sequence.
Solution
Step38: Exercise 5
We will sketch a construction of the positive integers from nothing.
Define an empty list positive_integers.
Define an Eqint called zero from the empty list. Append it to positive_integers.
Define an Eqint called next_integer from the Eqint defined by a copy of positive_integers (ie, use Eqint(list(positive_integers)). Append it to positive_integers.
Repeat step 3 as often as needed.
Use this procedure to define the Eqint equivalent to $10$. Print it, and its internal sequence, to check.
Solution
Step39: Rational numbers
Instead of working with floating point numbers, which are not "exact", we could work with the rational numbers $\mathbb{Q}$. A rational number $q \in \mathbb{Q}$ is defined by the numerator $n$ and denominator $d$ as $q = \frac{n}{d}$, where $n$ and $d$ are coprime (ie, have no common divisor other than $1$).
Exercise 1
Find a python function that finds the greatest common divisor (gcd) of two numbers. Use this to write a function normal_form that takes a numerator and divisor and returns the coprime $n$ and $d$. Test this function on $q = \frac{3}{2}$, $q = \frac{15}{3}$, and $q = \frac{20}{42}$.
Solution
Step41: Exercise 2
Define a class Rational that uses the normal_form function to store the rational number in the appropriate form. Define a __repr__ function that prints a string that looks like $\frac{n}{d}$ (hint
Step43: Exercise 3
Overload the __add__ function so that you can add two rational numbers. Test it on $\frac{1}{2} + \frac{1}{3} + \frac{1}{6} = 1$.
Solution
Step45: Exercise 4
Overload the __mul__ function so that you can multiply two rational numbers. Test it on $\frac{1}{3} \times \frac{15}{2} \times \frac{2}{5} = 1$.
Solution
Step47: Exercise 5
Overload the __rmul__ function so that you can multiply a rational by an integer. Check that $\frac{1}{2} \times 2 = 1$ and $\frac{1}{2} + (-1) \times \frac{1}{2} = 0$. Also overload the __sub__ function (using previous functions!) so that you can subtract rational numbers and check that $\frac{1}{2} - \frac{1}{2} = 0$.
Solution
Step49: Exercise 6
Overload the __float__ function so that float(q) returns the floating point approximation to the rational number q. Test this on $\frac{1}{2}, \frac{1}{3}$, and $\frac{1}{11}$.
Solution
Step51: Exercise 7
Overload the __lt__ function to compare two rational numbers. Create a list of rational numbers where the denominator is $n = 2, \dots, 11$ and the numerator is the floored integer $n/2$, ie n//2. Use the sorted function on that list (which relies on the __lt__ function).
Solution
Step53: Exercise 8
The Wallis formula for $\pi$ is
\begin{equation}
\pi = 2 \prod_{n=1}^{\infty} \frac{ (2 n)^2 }{(2 n - 1) (2 n + 1)}.
\end{equation}
We can define a partial product $\pi_N$ as
\begin{equation}
\pi_N = 2 \prod_{n=1}^{N} \frac{ (2 n)^2 }{(2 n - 1) (2 n + 1)},
\end{equation}
each of which are rational numbers.
Construct a list of the first 20 rational number approximations to $\pi$ and print them out. Print the sorted list to show that the approximations are always increasing. Then convert them to floating point numbers, construct a numpy array, and subtract this array from $\pi$ to see how accurate they are.
Solution
Step54: The shortest published Mathematical paper
A candidate for the shortest mathematical paper ever shows the following result
Step55: Exercise 2
The more interesting statement in the paper is that
\begin{equation}
27^5 + 84^5 + 110^5 + 133^5 = 144^5.
\end{equation}
[is] the smallest instance in which four fifth powers sum to a fifth power.
Interpreting "the smallest instance" to mean the solution where the right hand side term (the largest integer) is the smallest, we want to use python to check this statement.
You may find the combinations function from the itertools package useful.
Step56: The combinations function returns all the combinations (ignoring order) of r elements from a given list. For example, take a list of length 6, [1, 2, 3, 4, 5, 6] and compute all the combinations of length 4
Step57: We can already see that the number of terms to consider is large.
Note that we have used the list function to explicitly get a list of the combinations. The combinations function returns a generator, which can be used in a loop as if it were a list, without storing all elements of the list.
How fast does the number of combinations grow? The standard formula says that for a list of length $n$ there are
\begin{equation}
\begin{pmatrix} n \\ k \end{pmatrix} = \frac{n!}{k! (n-k)!}
\end{equation}
combinations of length $k$. For $k=4$ as needed here we will have $n (n-1) (n-2) (n-3) / 24$ combinations. For $n=144$ we therefore have
Step58: Exercise 2a
Show, by getting python to compute the number of combinations $N = \begin{pmatrix} n \ 4 \end{pmatrix}$ that $N$ grows roughly as $n^4$. To do this, plot the number of combinations and $n^4$ on a log-log scale. Restrict to $n \le 50$.
Solution
Step59: With 17 million combinations to work with, we'll need to be a little careful how we compute.
One thing we could try is to loop through each possible "smallest instance" (the term on the right hand side) in increasing order. We then check all possible combinations of left hand sides.
This is computationally very expensive as we repeat a lot of calculations. We repeatedly recalculate combinations (a bad idea). We repeatedly recalculate the powers of the same number.
Instead, let us try creating the list of all combinations of powers once.
Exercise 2b
Construct a numpy array containing all integers in $1, \dots, 144$ to the fifth power.
Construct a list of all combinations of four elements from this array.
Construct a list of sums of all these combinations.
Loop over one list and check if the entry appears in the other list (ie, use the in keyword).
Solution
Step60: Then calculate the sums
Step61: Finally, loop through the sums and check to see if it matches any possible term on the RHS
Step63: Lorenz attractor
The Lorenz system is a set of ordinary differential equations which can be written
\begin{equation}
\frac{\text{d} \vec{v}}{\text{d} t} = \vec{f}(\vec{v})
\end{equation}
where the variables in the state vector $\vec{v}$ are
\begin{equation}
\vec{v} = \begin{pmatrix} x(t) \\ y(t) \\ z(t) \end{pmatrix}
\end{equation}
and the function defining the ODE is
\begin{equation}
\vec{f} = \begin{pmatrix} \sigma \left( y(t) - x(t) \right) \\ x(t) \left( \rho - z(t) \right) - y(t) \\ x(t) y(t) - \beta z(t) \end{pmatrix}.
\end{equation}
The parameters $\sigma, \rho, \beta$ are all real numbers.
Exercise 1
Write a function dvdt(v, t, params) that returns $\vec{f}$ given $\vec{v}, t$ and the parameters $\sigma, \rho, \beta$.
Solution
Step64: Exercise 2
Fix $\sigma=10, \beta=8/3$. Set initial data to be $\vec{v}(0) = \vec{1}$. Using scipy, specifically the odeint function of scipy.integrate, solve the Lorenz system up to $t=100$ for $\rho=13, 14, 15$ and $28$.
Plot your results in 3d, plotting $x, y, z$.
Solution
Step65: Exercise 3
Fix $\rho = 28$. Solve the Lorenz system twice, up to $t=40$, using the two different initial conditions $\vec{v}(0) = \vec{1}$ and $\vec{v}(0) = \vec{1} + \vec{10^{-5}}$.
Show four plots. Each plot should show the two solutions on the same axes, plotting $x, y$ and $z$. Each plot should show $10$ units of time, ie the first shows $t \in [0, 10]$, the second shows $t \in [10, 20]$, and so on.
Solution
Step66: This shows the sensitive dependence on initial conditions that is characteristic of chaotic behaviour.
Systematic ODE solving with sympy
We are interested in the solution of
\begin{equation}
\frac{\text{d} y}{\text{d} t} = e^{-t} - y^n, \qquad y(0) = 1,
\end{equation}
where $n > 1$ is an integer. The "minor" change from the above examples mean that sympy can only give the solution as a power series.
Exercise 1
Compute the general solution as a power series for $n = 2$.
Solution
Step67: Exercise 2
Investigate the help for the dsolve function to straightforwardly impose the initial condition $y(0) = 1$ using the ics argument. Using this, compute the specific solutions that satisfy the ODE for $n = 2, \dots, 10$.
Solution
Step68: Exercise 3
Using the removeO command, plot each of these solutions for $t \in [0, 1]$.
Step70: Twin primes
A twin prime is a pair $(p_1, p_2)$ such that both $p_1$ and $p_2$ are prime and $p_2 = p_1 + 2$.
Exercise 1
Write a generator that returns twin primes. You can use the generators above, and may want to look at the itertools module together with its recipes, particularly the pairwise recipe.
Solution
Note
Step71: Now we can generate pairs using the pairwise recipe
Step73: We could examine the results of the two primes directly. But an efficient solution is to use python's filter function. To do this, first define a function checking if the pair are twin primes
Step75: Then use the filter function to define another generator
Step76: Now check by finding the twin primes with $N<20$
Step78: Exercise 2
Find how many twin primes there are with $p_2 < 1000$.
Solution
Again there are many solutions, but the itertools recipes include the quantify pattern. Looking ahead to exercise 3 we'll define
Step79: Exercise 3
Let $\pi_N$ be the number of twin primes such that $p_2 < N$. Plot how $\pi_N / N$ varies with $N$ for $N=2^k$ and $k = 4, 5, \dots 16$. (You should use a logarithmic scale where appropriate!)
Solution
We've now done all the hard work and can use the solutions above.
Step80: For those that have checked Wikipedia, you'll see Brun's theorem which suggests a specific scaling, that $\pi_N$ is bounded by $C N / \log(N)^2$. Checking this numerically on this data
Step83: A basis for the polynomials
In the section on classes we defined a Monomial class to represent a polynomial with leading coefficient $1$. As the $N+1$ monomials $1, x, x^2, \dots, x^N$ form a basis for the vector space of polynomials of order $N$, $\mathbb{P}^N$, we can use the Monomial class to return this basis.
Exercise 1
Define a generator that will iterate through this basis of $\mathbb{P}^N$ and test it on $\mathbb{P}^3$.
Solution
Again we first take the definition of the crucial class from the notes.
Step85: Now we can define the first basis
Step86: Then test it on $\mathbb{P}^N$
Step88: This looks horrible, but is correct. To really make this look good, we need to improve the output. If we use
Step89: then we can deal with the uglier cases, and re-running the test we get
Step91: An even better solution would be to use the numpy.unique function as in this stackoverflow answer (the second one!) to get the frequency of all the roots.
Exercise 2
An alternative basis is given by the monomials
\begin{align}
p_0(x) &= 1, \\ p_1(x) &= 1-x, \\ p_2(x) &= (1-x)(2-x), \\ \dots & \quad \dots, \\ p_N(x) &= \prod_{n=1}^N (n-x).
\end{align}
Define a generator that will iterate through this basis of $\mathbb{P}^N$ and test it on $\mathbb{P}^4$.
Solution
Step93: I am too lazy to work back through the definitions and flip all the signs; it should be clear how to do this!
Exercise 3
Use these generators to write another generator that produces a basis of $\mathbb{P^3} \times \mathbb{P^4}$.
Solution
Hopefully by now you'll be aware of how useful itertools is!
Step95: I've cheated here as I haven't introduced the yield from syntax (which returns an iterator from a generator). We could write this out instead as | Python Code:
fifteen_factorial = 15*14*13*12*11*10*9*8*7*6*5*4*3*2*1
print(fifteen_factorial)
Explanation: Suggestions for lab exercises.
Variables and assignment
Exercise 1
Remember that $n! = n \times (n - 1) \times \dots \times 2 \times 1$. Compute $15!$, assigning the result to a sensible variable name.
Solution
End of explanation
import math
print(math.factorial(15))
print("Result correct?", math.factorial(15) == fifteen_factorial)
Explanation: Exercise 2
Using the math module, check your result for $15$ factorial. You should explore the help for the math library and its functions, using eg tab-completion, the spyder inspector, or online sources.
Solution
End of explanation
print(math.factorial(5), math.sqrt(2*math.pi)*5**(5+0.5)*math.exp(-5))
print(math.factorial(10), math.sqrt(2*math.pi)*10**(10+0.5)*math.exp(-10))
print(math.factorial(15), math.sqrt(2*math.pi)*15**(15+0.5)*math.exp(-15))
print(math.factorial(20), math.sqrt(2*math.pi)*20**(20+0.5)*math.exp(-20))
print("Absolute differences:")
print(math.factorial(5) - math.sqrt(2*math.pi)*5**(5+0.5)*math.exp(-5))
print(math.factorial(10) - math.sqrt(2*math.pi)*10**(10+0.5)*math.exp(-10))
print(math.factorial(15) - math.sqrt(2*math.pi)*15**(15+0.5)*math.exp(-15))
print(math.factorial(20) - math.sqrt(2*math.pi)*20**(20+0.5)*math.exp(-20))
print("Relative differences:")
print((math.factorial(5) - math.sqrt(2*math.pi)*5**(5+0.5)*math.exp(-5)) / math.factorial(5))
print((math.factorial(10) - math.sqrt(2*math.pi)*10**(10+0.5)*math.exp(-10)) / math.factorial(10))
print((math.factorial(15) - math.sqrt(2*math.pi)*15**(15+0.5)*math.exp(-15)) / math.factorial(15))
print((math.factorial(20) - math.sqrt(2*math.pi)*20**(20+0.5)*math.exp(-20)) / math.factorial(20))
Explanation: Exercise 3
Stirling's approximation gives that, for large enough $n$,
\begin{equation}
n! \simeq \sqrt{2 \pi} n^{n + 1/2} e^{-n}.
\end{equation}
Using functions and constants from the math library, compare the results of $n!$ and Stirling's approximation for $n = 5, 10, 15, 20$. In what sense does the approximation improve?
Solution
End of explanation
def cuboid_volume(a, b, c):
Compute the volume of a cuboid with edge lengths a, b, c.
Volume is abc. Only makes sense if all are non-negative.
Parameters
----------
a : float
Edge length 1
b : float
Edge length 2
c : float
Edge length 3
Returns
-------
volume : float
The volume a*b*c
if (a < 0.0) or (b < 0.0) or (c < 0.0):
print("Negative edge length makes no sense!")
return 0
return a*b*c
print(cuboid_volume(1,1,1))
print(cuboid_volume(1,2,3.5))
print(cuboid_volume(0,1,1))
print(cuboid_volume(2,-1,1))
Explanation: We see that the relative error decreases, whilst the absolute error grows (significantly).
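An optional extra check (an editorial sketch, not part of the exercise): comparing in log space with math.lgamma avoids huge numbers and shows the logarithmic error shrinking roughly like 1/(12 n):
python
from math import lgamma, log, pi
for n in (5, 20, 100, 1000):
    log_exact = lgamma(n + 1)                              # log(n!)
    log_stirling = 0.5*log(2.0*pi) + (n + 0.5)*log(n) - n  # log of the approximation
    print(n, log_exact - log_stirling)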
Basic functions
Exercise 1
Write a function to calculate the volume of a cuboid with edge lengths $a, b, c$. Test your code on sample values such as
$a=1, b=1, c=1$ (result should be $1$);
$a=1, b=2, c=3.5$ (result should be $7.0$);
$a=0, b=1, c=1$ (result should be $0$);
$a=2, b=-1, c=1$ (what do you think the result should be?).
Solution
End of explanation
def fall_time(H):
Give the time in seconds for an object to fall to the ground
from H metres.
Parameters
----------
H : float
Starting height (metres)
Returns
-------
T : float
Fall time (seconds)
from math import sqrt
from scipy.constants import g
if (H < 0):
print("Negative height makes no sense!")
return 0
return sqrt(2.0*H/g)
print(fall_time(1))
print(fall_time(10))
print(fall_time(0))
print(fall_time(-1))
Explanation: In later cases, after having covered exceptions, I would suggest raising a NotImplementedError for negative edge lengths.
Exercise 2
Write a function to compute the time (in seconds) taken for an object to fall from a height $H$ (in metres) to the ground, using the formula
\begin{equation}
h(t) = \frac{1}{2} g t^2.
\end{equation}
Use the value of the acceleration due to gravity $g$ from scipy.constants.g. Test your code on sample values such as
$H = 1$m (result should be $\approx 0.452$s);
$H = 10$m (result should be $\approx 1.428$s);
$H = 0$m (result should be $0$s);
$H = -1$m (what do you think the result should be?).
Solution
End of explanation
def triangle_area(a, b, c):
Compute the area of a triangle with edge lengths a, b, c.
Area is sqrt(s (s-a) (s-b) (s-c)).
s is (a+b+c)/2.
Only makes sense if all are non-negative.
Parameters
----------
a : float
Edge length 1
b : float
Edge length 2
c : float
Edge length 3
Returns
-------
area : float
The triangle area.
from math import sqrt
if (a < 0.0) or (b < 0.0) or (c < 0.0):
print("Negative edge length makes no sense!")
return 0
s = 0.5 * (a + b + c)
return sqrt(s * (s-a) * (s-b) * (s-c))
print(triangle_area(1,1,1)) # Equilateral; answer sqrt(3)/4 ~ 0.433
print(triangle_area(3,4,5)) # Right triangle; answer 6
print(triangle_area(1,1,0)) # Not a triangle; answer 0
print(triangle_area(-1,1,1)) # Not a triangle; exception or 0.
Explanation: Exercise 3
Write a function that computes the area of a triangle with edge lengths $a, b, c$. You may use the formula
\begin{equation}
A = \sqrt{s (s - a) (s - b) (s - c)}, \qquad s = \frac{a + b + c}{2}.
\end{equation}
Construct your own test cases to cover a range of possibilities.
End of explanation
from math import sqrt
x = 1.0
y = 1.0 + 1e-14 * sqrt(3.0)
print("The calculation gives {}".format(1e14*(y-x)))
print("The result should be {}".format(sqrt(3.0)))
Explanation: Floating point numbers
Exercise 1
Computers cannot, in principle, represent real numbers perfectly. This can lead to problems of accuracy. For example, if
\begin{equation}
x = 1, \qquad y = 1 + 10^{-14} \sqrt{3}
\end{equation}
then it should be true that
\begin{equation}
10^{14} (y - x) = \sqrt{3}.
\end{equation}
Check how accurately this equation holds in Python and see what this implies about the accuracy of subtracting two numbers that are close together.
Solution
End of explanation
a = 1e-3
b = 1e3
c = a
formula1_n3_plus = (-b + sqrt(b**2 - 4.0*a*c))/(2.0*a)
formula1_n3_minus = (-b - sqrt(b**2 - 4.0*a*c))/(2.0*a)
formula2_n3_plus = (2.0*c)/(-b + sqrt(b**2 - 4.0*a*c))
formula2_n3_minus = (2.0*c)/(-b - sqrt(b**2 - 4.0*a*c))
print("For n=3, first formula, solutions are {} and {}.".format(formula1_n3_plus,
formula1_n3_minus))
print("For n=3, second formula, solutions are {} and {}.".format(formula2_n3_plus,
formula2_n3_minus))
a = 1e-4
b = 1e4
c = a
formula1_n4_plus = (-b + sqrt(b**2 - 4.0*a*c))/(2.0*a)
formula1_n4_minus = (-b - sqrt(b**2 - 4.0*a*c))/(2.0*a)
formula2_n4_plus = (2.0*c)/(-b + sqrt(b**2 - 4.0*a*c))
formula2_n4_minus = (2.0*c)/(-b - sqrt(b**2 - 4.0*a*c))
print("For n=4, first formula, solutions are {} and {}.".format(formula1_n4_plus,
formula1_n4_minus))
print("For n=4, second formula, solutions are {} and {}.".format(formula2_n4_plus,
formula2_n4_minus))
Explanation: We see that the first three digits are correct. This isn't too surprising: we expect 16 digits of accuracy for a floating point number, but $x$ and $y$ are identical for the first 14 digits.
Exercise 2
The standard quadratic formula gives the solutions to
\begin{equation}
a x^2 + b x + c = 0
\end{equation}
as
\begin{equation}
x = \frac{-b \pm \sqrt{b^2 - 4 a c}}{2 a}.
\end{equation}
Show that, if $a = 10^{-n} = c$ and $b = 10^n$ then
\begin{equation}
x = \frac{10^{2 n}}{2} \left( -1 \pm \sqrt{1 - 4 \times 10^{-4n}} \right).
\end{equation}
Using the expansion (from Taylor's theorem)
\begin{equation}
\sqrt{1 - 4 \times 10^{-4 n}} \simeq 1 - 2 \times 10^{-4 n} + \dots, \qquad n \gg 1,
\end{equation}
show that
\begin{equation}
x \simeq -10^{2 n} + 10^{-2 n} \quad \text{and} \quad -10^{-2n}, \qquad n \gg 1.
\end{equation}
Solution
This is pen-and-paper work; each step should be re-arranging.
Exercise 3
By multiplying and dividing by $-b \mp \sqrt{b^2 - 4 a c}$, check that we can also write the solutions to the quadratic equation as
\begin{equation}
x = \frac{2 c}{-b \mp \sqrt{b^2 - 4 a c}}.
\end{equation}
Solution
Using the difference of two squares we get
\begin{equation}
x = \frac{b^2 - \left( b^2 - 4 a c \right)}{2a \left( -b \mp \sqrt{b^2 - 4 a c} \right)}
\end{equation}
which re-arranges to give the required solution.
Exercise 4
Using Python, calculate both solutions to the quadratic equation
\begin{equation}
10^{-n} x^2 + 10^n x + 10^{-n} = 0
\end{equation}
for $n = 3$ and $n = 4$ using both formulas. What do you see? How has floating point accuracy caused problems here?
Solution
End of explanation
def g(f, X, delta):
Approximate the derivative of a given function at a point.
Parameters
----------
f : function
Function to be differentiated
X : real
Point at which the derivative is evaluated
delta : real
Step length
Returns
-------
g : real
Approximation to the derivative
return (f(X+delta) - f(X)) / delta
Explanation: There is a difference in the fifth significant figure in both solutions in the first case, which gets to the third (arguably the second) significant figure in the second case. Comparing to the limiting solutions above, we see that the larger root is definitely more accurately captured with the first formula than the second (as the result should be bigger than $10^{-2n}$).
In the second case we have divided by a very small number to get the big number, which loses accuracy.
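A sketch of the usual workaround (not asked for in the exercise): compute whichever root avoids the cancellation, then recover the other from the product of the roots, which is c/a:
python
from math import sqrt, copysign
def stable_quadratic_roots(a, b, c):
    q = -0.5 * (b + copysign(sqrt(b**2 - 4.0*a*c), b))
    return q/a, c/q   # the product of the two roots is c/a
print(stable_quadratic_roots(1e-4, 1e4, 1e-4))   # close to -1e8 and -1e-8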
Exercise 5
The standard definition of the derivative of a function is
\begin{equation}
\left. \frac{\text{d} f}{\text{d} x} \right|_{x=X} = \lim_{\delta \to 0} \frac{f(X + \delta) - f(X)}{\delta}.
\end{equation}
We can approximate this by computing the result for a finite value of $\delta$:
\begin{equation}
g(x, \delta) = \frac{f(x + \delta) - f(x)}{\delta}.
\end{equation}
Write a function that takes as inputs a function of one variable, $f(x)$, a location $X$, and a step length $\delta$, and returns the approximation to the derivative given by $g$.
Solution
End of explanation
from math import exp
for n in range(1, 8):
print("For n={}, the approx derivative is {}.".format(n, g(exp, 0.0, 10**(-2.0*n))))
Explanation: Exercise 6
The function $f_1(x) = e^x$ has derivative with the exact value $1$ at $x=0$. Compute the approximate derivative using your function above, for $\delta = 10^{-2 n}$ with $n = 1, \dots, 7$. You should see the results initially improve, then get worse. Why is this?
Solution
End of explanation
def isprime(n):
Checks to see if an integer is prime.
Parameters
----------
n : integer
Number to check
Returns
-------
isprime : Boolean
If n is prime
# No number less than 2 can be prime
if n < 2:
return False
# We only need to check for divisors up to sqrt(n)
for m in range(2, int(n**0.5)+1):
if n%m == 0:
return False
# If we've got this far, there are no divisors.
return True
for n in range(50):
if isprime(n):
print("Function says that {} is prime.".format(n))
Explanation: We have a combination of floating point inaccuracies: in the numerator we have two terms that are nearly equal, leading to a very small number. We then divide two very small numbers. This is inherently inaccurate.
This does not mean that you can't calculate derivatives to high accuracy, but alternative approaches are definitely recommended.
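For example (an illustrative sketch, not part of the exercise), a centred difference has truncation error of order delta squared rather than delta, so it reaches much better accuracy before round-off takes over:
python
from math import exp
def g_centred(f, X, delta):
    return (f(X + delta) - f(X - delta)) / (2.0 * delta)
for n in range(1, 8):
    print("For n={}, the centred approx derivative is {}.".format(n, g_centred(exp, 0.0, 10**(-2.0*n))))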
Prime numbers
Exercise 1
Write a function that tests if a number is prime. Test it by writing out all prime numbers less than 50.
Solution
This is a "simple" solution, but not efficient.
End of explanation
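Since the note above calls trial division inefficient, here is a minimal sketch of the usual faster alternative when many primes are needed at once, a sieve of Eratosthenes:
python
def primes_below(N):
    is_prime = [False, False] + [True] * (N - 2)
    for p in range(2, int(N**0.5) + 1):
        if is_prime[p]:
            for multiple in range(p*p, N, p):
                is_prime[multiple] = False
    return [p for p, flag in enumerate(is_prime) if flag]
print(primes_below(50))   # should match the primes printed above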
n = 2
while (not isprime(n)) or (isprime(2**n-1)):
n += 1
print("The first n such that 2^n-1 is not prime is {}.".format(n))
Explanation: Exercise 2
500 years ago some believed that the number $2^n - 1$ was prime for all primes $n$. Use your function to find the first prime $n$ for which this is not true.
Solution
We could do this many ways. This "elegant" solution says:
Start from the smallest possible $n$ (2).
Check if $n$ is prime. If not, add one to $n$.
If $n$ is prime, check if $2^n-1$ is prime. If it is, add one to $n$.
If both those logical checks fail, we have found the $n$ we want.
End of explanation
for n in range(2, 41):
if isprime(n) and isprime(2**n-1):
print("n={} is such that 2^n-1 is prime.".format(n))
Explanation: Exercise 3
The Mersenne primes are those that have the form $2^n-1$, where $n$ is prime. Use your previous solutions to generate all the $n < 40$ that give Mersenne primes.
Solution
End of explanation
def prime_factors(n):
Generate all the prime factors of n.
Parameters
----------
n : integer
Number to be checked
Returns
-------
factors : dict
Prime factors (keys) and multiplicities (values)
factors = {}
m = 2
while m <= n:
if n%m == 0:
factors[m] = 1
n //= m
while n%m == 0:
factors[m] += 1
n //= m
m += 1
return factors
for n in range(17, 21):
print("Prime factors of {} are {}.".format(n, prime_factors(n).keys()))
print("Multiplicities of prime factors of 48 are {}.".format(prime_factors(48).values()))
Explanation: Exercise 4
Write a function to compute all prime factors of an integer $n$, including their multiplicities. Test it by printing the prime factors (without multiplicities) of $n = 17, \dots, 20$ and the multiplicities (without factors) of $n = 48$.
Note
One effective solution is to return a dictionary, where the keys are the factors and the values are the multiplicities.
Solution
This solution uses the trick of immediately dividing $n$ by any divisor: this means we never have to check the divisor for being prime.
End of explanation
def divisors(n):
Generate all integer divisors of n.
Parameters
----------
n : integer
Number to be checked
Returns
-------
divs : list
All integer divisors, including 1.
divs = [1]
m = 2
while m <= n/2:
if n%m == 0:
divs.append(m)
m += 1
return divs
for n in range(16, 21):
print("The divisors of {} are {}.".format(n, divisors(n)))
Explanation: Exercise 5
Write a function to generate all the integer divisors, including 1, but not including $n$ itself, of an integer $n$. Test it on $n = 16, \dots, 20$.
Note
You could use the prime factorization from the previous exercise, or you could do it directly.
Solution
Here we will do it directly.
End of explanation
def isperfect(n):
Check if a number is perfect.
Parameters
----------
n : integer
Number to check
Returns
-------
isperfect : Boolean
Whether it is perfect or not.
divs = divisors(n)
sum_divs = 0
for d in divs:
sum_divs += d
return n == sum_divs
for n in range(2,10000):
if (isperfect(n)):
factors = prime_factors(n)
print("{} is perfect.\n"
"Divisors are {}.\n"
"Prime factors {} (multiplicities {}).".format(
n, divisors(n), factors.keys(), factors.values()))
Explanation: Exercise 6
A perfect number $n$ is one where the divisors sum to $n$. For example, 6 has divisors 1, 2, and 3, which sum to 6. Use your previous solution to find all perfect numbers $n < 10,000$ (there are only four!).
Solution
We can do this much more efficiently than the code below using packages such as numpy, but this is a "bare python" solution.
End of explanation
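The numpy version alluded to above might look something like this (an illustrative sketch, not the solution used here):
python
import numpy
def sum_divisors_numpy(n):
    m = numpy.arange(1, n//2 + 1)        # candidate divisors up to n/2
    return m[n % m == 0].sum()           # keep exact divisors and sum them
print([n for n in range(2, 10000) if sum_divisors_numpy(n) == n])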
%timeit isperfect(2**(3-1)*(2**3-1))
%timeit isperfect(2**(5-1)*(2**5-1))
%timeit isperfect(2**(7-1)*(2**7-1))
%timeit isperfect(2**(13-1)*(2**13-1))
Explanation: Exercise 7
Using your previous functions, check that all perfect numbers $n < 10,000$ can be written as $2^{k-1} \times (2^k - 1)$, where $2^k-1$ is a Mersenne prime.
Solution
In fact we did this above already:
$6 = 2^{2-1} \times (2^2 - 1)$. 2 is the first number on our Mersenne list.
$28 = 2^{3-1} \times (2^3 - 1)$. 3 is the second number on our Mersenne list.
$496 = 2^{5-1} \times (2^5 - 1)$. 5 is the third number on our Mersenne list.
$8128 = 2^{7-1} \times (2^7 - 1)$. 7 is the fourth number on our Mersenne list.
Exercise 8 (bonus)
Investigate the timeit function in python or IPython. Use this to measure how long your function takes to check that, if $k$ on the Mersenne list then $n = 2^{k-1} \times (2^k - 1)$ is a perfect number, using your functions. Stop increasing $k$ when the time takes too long!
Note
You could waste considerable time on this, and on optimizing the functions above to work efficiently. It is not worth it, other than to show how rapidly the computation time can grow!
Solution
End of explanation
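Outside IPython, the standard-library timeit module gives the same kind of measurement; a minimal sketch:
python
import timeit
t = timeit.timeit("isperfect(2**(7-1)*(2**7-1))", globals=globals(), number=5)
print("5 calls took {:.3f} seconds".format(t))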
def logistic(x0, r, N = 1000):
sequence = [x0]
xn = x0
for n in range(N):
xnew = r*xn*(1.0-xn)
sequence.append(xnew)
xn = xnew
return sequence
Explanation: It's worth thinking about the operation counts of the various functions implemented here. The implementations are inefficient, but even in the best case you see how the number of operations (and hence computing time required) rapidly increases.
Logistic map
Partly taken from Newman's book, p 120.
The logistic map builds a sequence of numbers ${ x_n }$ using the relation
\begin{equation}
x_{n+1} = r x_n \left( 1 - x_n \right),
\end{equation}
where $0 \le x_0 \le 1$.
Exercise 1
Write a program that calculates the first $N$ members of the sequence, given as input $x_0$ and $r$ (and, of course, $N$).
Solution
End of explanation
import numpy
from matplotlib import pyplot
%matplotlib inline
x0 = 0.5
N = 2000
sequence1 = logistic(x0, 1.5, N)
sequence2 = logistic(x0, 3.5, N)
pyplot.plot(sequence1[-100:], 'b-', label = r'$r=1.5$')
pyplot.plot(sequence2[-100:], 'k-', label = r'$r=3.5$')
pyplot.xlabel(r'$n$')
pyplot.ylabel(r'$x$')
pyplot.show()
Explanation: Exercise 2
Fix $x_0=0.5$. Calculate the first 2,000 members of the sequence for $r=1.5$ and $r=3.5$ Plot the last 100 members of the sequence in both cases.
What does this suggest about the long-term behaviour of the sequence?
Solution
End of explanation
import numpy
from matplotlib import pyplot
%matplotlib inline
r_values = numpy.linspace(1.0, 4.0, 401)
x0 = 0.5
N = 2000
for r in r_values:
sequence = logistic(x0, r, N)
pyplot.plot(r*numpy.ones_like(sequence[1000:]), sequence[1000:], 'k.')
pyplot.xlabel(r'$r$')
pyplot.ylabel(r'$x$')
pyplot.show()
Explanation: This suggests that, for $r=1.5$, the sequence has settled down to a fixed point. In the $r=3.5$ case it seems to be moving between four points repeatedly.
Exercise 3
Fix $x_0 = 0.5$. For each value of $r$ between $1$ and $4$, in steps of $0.01$, calculate the first 2,000 members of the sequence. Plot the last 1,000 members of the sequence on a plot where the $x$-axis is the value of $r$ and the $y$-axis is the values in the sequence. Do not plot lines - just plot markers (e.g., use the 'k.' plotting style).
Solution
End of explanation
def in_Mandelbrot(c, n_iterations = 100):
z0 = 0.0 + 0j
in_set = True
n = 0
zn = z0
while in_set and (n < n_iterations):
n += 1
znew = zn**2 + c
in_set = abs(znew) < 2.0
zn = znew
return in_set
Explanation: Exercise 4
For iterative maps such as the logistic map, one of three things can occur:
The sequence settles down to a fixed point.
The sequence rotates through a finite number of values. This is called a limit cycle.
The sequence generates an infinite number of values. This is called deterministic chaos.
Using just your plot, or new plots from this data, work out approximate values of $r$ for which there is a transition from fixed points to limit cycles, from limit cycles of a given number of values to more values, and the transition to chaos.
Solution
The first transition is at $r \approx 3$, the next at $r \approx 3.45$, the next at $r \approx 3.55$. The transition to chaos appears to happen before $r=4$, but it's not obvious exactly where.
Mandelbrot
The Mandelbrot set is also generated from a sequence, ${ z_n }$, using the relation
\begin{equation}
z_{n+1} = z_n^2 + c, \qquad z_0 = 0.
\end{equation}
The members of the sequence, and the constant $c$, are all complex. The point in the complex plane at $c$ is in the Mandelbrot set only if the $|z_n| < 2$ for all members of the sequence. In reality, checking the first 100 iterations is sufficient.
Note: the python notation for a complex number $x + \text{i} y$ is x + yj: that is, j is used to indicate $\sqrt{-1}$. If you know the values of x and y then x + yj constructs a complex number; if they are stored in variables you can use complex(x, y).
Exercise 1
Write a function that checks if the point $c$ is in the Mandelbrot set.
Solution
End of explanation
c_values = [0.0, 2+2j, 2-2j, -2+2j, -2-2j]
for c in c_values:
print("Is {} in the Mandelbrot set? {}.".format(c, in_Mandelbrot(c)))
Explanation: Exercise 2
Check the points $c=0$ and $c=\pm 2 \pm 2 \text{i}$ and ensure they do what you expect. (What should you expect?)
Solution
End of explanation
import numpy
def grid_Mandelbrot(N):
x = numpy.linspace(-2.0, 2.0, N)
X, Y = numpy.meshgrid(x, x)
C = X + 1j*Y
grid = numpy.zeros((N, N), int)
for nx in range(N):
for ny in range(N):
grid[nx, ny] = int(in_Mandelbrot(C[nx, ny]))
return grid
Explanation: Exercise 3
Write a function that, given $N$
generates an $N \times N$ grid spanning $c = x + \text{i} y$, for $-2 \le x \le 2$ and $-2 \le y \le 2$;
returns an $N\times N$ array containing one if the associated grid point is in the Mandelbrot set, and zero otherwise.
Solution
End of explanation
from matplotlib import pyplot
%matplotlib inline
pyplot.imshow(grid_Mandelbrot(100))
Explanation: Exercise 4
Using the function imshow from matplotlib, plot the resulting array for a $100 \times 100$ array to make sure you see the expected shape.
Solution
End of explanation
from math import log
def log_Mandelbrot(c, n_iterations = 100):
z0 = 0.0 + 0j
in_set = True
n = 0
zn = z0
while in_set and (n < n_iterations):
n += 1
znew = zn**2 + c
in_set = abs(znew) < 2.0
zn = znew
return log(n)
def log_grid_Mandelbrot(N):
x = numpy.linspace(-2.0, 2.0, N)
X, Y = numpy.meshgrid(x, x)
C = X + 1j*Y
    grid = numpy.zeros((N, N))  # float array: entries hold log(iteration count)
for nx in range(N):
for ny in range(N):
grid[nx, ny] = log_Mandelbrot(C[nx, ny])
return grid
from matplotlib import pyplot
%matplotlib inline
pyplot.imshow(log_grid_Mandelbrot(100))
Explanation: Exercise 5
Modify your functions so that, instead of returning whether a point is inside the set or not, it returns the logarithm of the number of iterations it takes. Plot the result using imshow again.
Solution
End of explanation
pyplot.imshow(log_grid_Mandelbrot(1000)[600:800,400:600])
Explanation: Exercise 6
Try some higher resolution plots, and try plotting only a section to see the structure. Note this is not a good way to get high accuracy close up images!
Solution
This is a simple example:
End of explanation
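For genuinely higher resolutions, a vectorised escape-time calculation is far faster than the nested Python loops above; a hedged sketch with numpy, mirroring the counting in log_Mandelbrot:
python
import numpy
from matplotlib import pyplot
def log_grid_Mandelbrot_vectorised(N, n_iterations=100):
    x = numpy.linspace(-2.0, 2.0, N)
    X, Y = numpy.meshgrid(x, x)
    C = X + 1j*Y
    Z = numpy.zeros_like(C)
    count = numpy.zeros(C.shape, dtype=int)
    active = numpy.ones(C.shape, dtype=bool)   # points with |z| still below 2
    for _ in range(n_iterations):
        count[active] += 1                     # mirrors n += 1 in log_Mandelbrot
        Z[active] = Z[active]**2 + C[active]
        active &= numpy.abs(Z) < 2.0
    return numpy.log(count)
pyplot.imshow(log_grid_Mandelbrot_vectorised(500))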
class Eqint(object):
def __init__(self, sequence):
self.sequence = sequence
def __repr__(self):
return str(len(self.sequence))
def __eq__(self, other):
return len(self.sequence)==len(other.sequence)
Explanation: Equivalence classes
An equivalence class is a relation that groups objects in a set into related subsets. For example, if we think of the integers modulo $7$, then $1$ is in the same equivalence class as $8$ (and $15$, and $22$, and so on), and $3$ is in the same equivalence class as $10$. We use the tilde $3 \sim 10$ to denote two objects within the same equivalence class.
Here, we are going to define the positive integers programmatically from equivalent sequences.
Exercise 1
Define a python class Eqint. This should be
Initialized by a sequence;
Store the sequence;
Define its representation (via the __repr__ function) to be the integer length of the sequence;
Redefine equality (via the __eq__ function) so that two eqints are equal if their sequences have same length.
Solution
End of explanation
zero = Eqint([])
one_list = Eqint([1])
one_tuple = Eqint((1,))
one_string = Eqint('1')
print("Is zero equivalent to one? {}, {}, {}".format(zero == one_list,
zero == one_tuple,
zero == one_string))
print("Is one equivalent to one? {}, {}, {}.".format(one_list == one_tuple,
one_list == one_string,
one_tuple == one_string))
print(zero)
print(one_list)
print(one_tuple)
print(one_string)
Explanation: Exercise 2
Define a zero object from the empty list, and three one objects, from a single object list, tuple, and string. For example
python
one_list = Eqint([1])
one_tuple = Eqint((1,))
one_string = Eqint('1')
Check that none of the one objects equal the zero object, but all equal the other one objects. Print each object to check that the representation gives the integer length.
Solution
End of explanation
class Eqint(object):
def __init__(self, sequence):
self.sequence = sequence
def __repr__(self):
return str(len(self.sequence))
def __eq__(self, other):
return len(self.sequence)==len(other.sequence)
def __add__(a, b):
return Eqint(tuple(a.sequence) + tuple(b.sequence))
Explanation: Exercise 3
Redefine the class by including an __add__ method that combines the two sequences. That is, if a and b are Eqints then a+b should return an Eqint defined from combining a and bs sequences.
Note
Adding two different types of sequences (eg, a list to a tuple) does not work, so it is better to either iterate over the sequences, or to convert to a uniform type before adding.
Solution
End of explanation
zero = Eqint([])
one_list = Eqint([1])
one_tuple = Eqint((1,))
one_string = Eqint('1')
sum_eqint = zero + one_list + one_tuple + one_string
print("The sum is {}.".format(sum_eqint))
print("The internal sequence is {}.".format(sum_eqint.sequence))
Explanation: Exercise 4
Check your addition function by adding together all your previous Eqint objects (which will need re-defining, as the class has been redefined). Print the resulting object to check you get 3, and also print its internal sequence.
Solution
End of explanation
positive_integers = []
zero = Eqint([])
positive_integers.append(zero)
N = 10
for n in range(1,N+1):
positive_integers.append(Eqint(list(positive_integers)))
print("The 'final' Eqint is {}".format(positive_integers[-1]))
print("Its sequence is {}".format(positive_integers[-1].sequence))
print("That is, it contains all Eqints with length less than 10.")
Explanation: Exercise 5
We will sketch a construction of the positive integers from nothing.
Define an empty list positive_integers.
Define an Eqint called zero from the empty list. Append it to positive_integers.
Define an Eqint called next_integer from the Eqint defined by a copy of positive_integers (ie, use Eqint(list(positive_integers)). Append it to positive_integers.
Repeat step 3 as often as needed.
Use this procedure to define the Eqint equivalent to $10$. Print it, and its internal sequence, to check.
Solution
End of explanation
def normal_form(numerator, denominator):
    from math import gcd  # fractions.gcd was removed in Python 3.9; math.gcd behaves the same here
factor = gcd(numerator, denominator)
return numerator//factor, denominator//factor
print(normal_form(3, 2))
print(normal_form(15, 3))
print(normal_form(20, 42))
Explanation: Rational numbers
Instead of working with floating point numbers, which are not "exact", we could work with the rational numbers $\mathbb{Q}$. A rational number $q \in \mathbb{Q}$ is defined by the numerator $n$ and denominator $d$ as $q = \frac{n}{d}$, where $n$ and $d$ are coprime (ie, have no common divisor other than $1$).
Exercise 1
Find a python function that finds the greatest common divisor (gcd) of two numbers. Use this to write a function normal_form that takes a numerator and divisor and returns the coprime $n$ and $d$. Test this function on $q = \frac{3}{2}$, $q = \frac{15}{3}$, and $q = \frac{20}{42}$.
Solution
End of explanation
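For comparison (not part of the exercise), the standard library already provides exact rational arithmetic through fractions.Fraction, which normalises automatically:
python
from fractions import Fraction
print(Fraction(3, 2), Fraction(15, 3), Fraction(20, 42))   # 3/2, 5, 10/21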
class Rational(object):
A rational number.
def __init__(self, numerator, denominator):
n, d = normal_form(numerator, denominator)
self.numerator = n
self.denominator = d
return None
def __repr__(self):
max_length = max(len(str(self.numerator)), len(str(self.denominator)))
if self.denominator == 1:
frac = str(self.numerator)
else:
numerator = str(self.numerator)+'\n'
bar = max_length*'-'+'\n'
denominator = str(self.denominator)
frac = numerator+bar+denominator
return frac
q1 = Rational(3, 2)
print(q1)
q2 = Rational(15, 3)
print(q2)
q3 = Rational(20, 42)
print(q3)
Explanation: Exercise 2
Define a class Rational that uses the normal_form function to store the rational number in the appropriate form. Define a __repr__ function that prints a string that looks like $\frac{n}{d}$ (hint: use len(str(number)) to find the number of digits of an integer). Test it on the cases above.
Solution
End of explanation
class Rational(object):
A rational number.
def __init__(self, numerator, denominator):
n, d = normal_form(numerator, denominator)
self.numerator = n
self.denominator = d
return None
def __add__(a, b):
numerator = a.numerator * b.denominator + b.numerator * a.denominator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __repr__(self):
max_length = max(len(str(self.numerator)), len(str(self.denominator)))
if self.denominator == 1:
frac = str(self.numerator)
else:
numerator = str(self.numerator)+'\n'
bar = max_length*'-'+'\n'
denominator = str(self.denominator)
frac = numerator+bar+denominator
return frac
print(Rational(1,2) + Rational(1,3) + Rational(1,6))
Explanation: Exercise 3
Overload the __add__ function so that you can add two rational numbers. Test it on $\frac{1}{2} + \frac{1}{3} + \frac{1}{6} = 1$.
Solution
End of explanation
class Rational(object):
A rational number.
def __init__(self, numerator, denominator):
n, d = normal_form(numerator, denominator)
self.numerator = n
self.denominator = d
return None
def __add__(a, b):
numerator = a.numerator * b.denominator + b.numerator * a.denominator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __mul__(a, b):
numerator = a.numerator * b.numerator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __repr__(self):
max_length = max(len(str(self.numerator)), len(str(self.denominator)))
if self.denominator == 1:
frac = str(self.numerator)
else:
numerator = str(self.numerator)+'\n'
bar = max_length*'-'+'\n'
denominator = str(self.denominator)
frac = numerator+bar+denominator
return frac
print(Rational(1,3)*Rational(15,2)*Rational(2,5))
Explanation: Exercise 4
Overload the __mul__ function so that you can multiply two rational numbers. Test it on $\frac{1}{3} \times \frac{15}{2} \times \frac{2}{5} = 1$.
Solution
End of explanation
class Rational(object):
    """A rational number."""
def __init__(self, numerator, denominator):
n, d = normal_form(numerator, denominator)
self.numerator = n
self.denominator = d
return None
def __add__(a, b):
numerator = a.numerator * b.denominator + b.numerator * a.denominator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __mul__(a, b):
numerator = a.numerator * b.numerator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __rmul__(self, other):
numerator = self.numerator * other
return Rational(numerator, self.denominator)
def __sub__(a, b):
return a + (-1)*b
def __repr__(self):
max_length = max(len(str(self.numerator)), len(str(self.denominator)))
if self.denominator == 1:
frac = str(self.numerator)
else:
numerator = str(self.numerator)+'\n'
bar = max_length*'-'+'\n'
denominator = str(self.denominator)
frac = numerator+bar+denominator
return frac
half = Rational(1,2)
print(2*half)
print(half+(-1)*half)
print(half-half)
Explanation: Exercise 5
Overload the __rmul__ function so that you can multiply a rational by an integer. Check that $\frac{1}{2} \times 2 = 1$ and $\frac{1}{2} + (-1) \times \frac{1}{2} = 0$. Also overload the __sub__ function (using previous functions!) so that you can subtract rational numbers and check that $\frac{1}{2} - \frac{1}{2} = 0$.
Solution
End of explanation
class Rational(object):
    """A rational number."""
def __init__(self, numerator, denominator):
n, d = normal_form(numerator, denominator)
self.numerator = n
self.denominator = d
return None
def __add__(a, b):
numerator = a.numerator * b.denominator + b.numerator * a.denominator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __mul__(a, b):
numerator = a.numerator * b.numerator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __rmul__(self, other):
numerator = self.numerator * other
return Rational(numerator, self.denominator)
def __sub__(a, b):
return a + (-1)*b
def __float__(a):
return float(a.numerator) / float(a.denominator)
def __repr__(self):
max_length = max(len(str(self.numerator)), len(str(self.denominator)))
if self.denominator == 1:
frac = str(self.numerator)
else:
numerator = str(self.numerator)+'\n'
bar = max_length*'-'+'\n'
denominator = str(self.denominator)
frac = numerator+bar+denominator
return frac
print(float(Rational(1,2)))
print(float(Rational(1,3)))
print(float(Rational(1,11)))
Explanation: Exercise 6
Overload the __float__ function so that float(q) returns the floating point approximation to the rational number q. Test this on $\frac{1}{2}, \frac{1}{3}$, and $\frac{1}{11}$.
Solution
End of explanation
class Rational(object):
    """A rational number."""
def __init__(self, numerator, denominator):
n, d = normal_form(numerator, denominator)
self.numerator = n
self.denominator = d
return None
def __add__(a, b):
numerator = a.numerator * b.denominator + b.numerator * a.denominator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __mul__(a, b):
numerator = a.numerator * b.numerator
denominator = a.denominator * b.denominator
return Rational(numerator, denominator)
def __rmul__(self, other):
numerator = self.numerator * other
return Rational(numerator, self.denominator)
def __sub__(a, b):
return a + (-1)*b
def __float__(a):
return float(a.numerator) / float(a.denominator)
def __lt__(a, b):
return a.numerator * b.denominator < a.denominator * b.numerator
def __repr__(self):
max_length = max(len(str(self.numerator)), len(str(self.denominator)))
if self.denominator == 1:
frac = str(self.numerator)
else:
numerator = '\n'+str(self.numerator)+'\n'
bar = max_length*'-'+'\n'
denominator = str(self.denominator)
frac = numerator+bar+denominator
return frac
q_list = [Rational(n//2, n) for n in range(2, 12)]
print(sorted(q_list))
Explanation: Exercise 7
Overload the __lt__ function to compare two rational numbers. Create a list of rational numbers where the denominator is $n = 2, \dots, 11$ and the numerator is the floored integer $n/2$, ie n//2. Use the sorted function on that list (which relies on the __lt__ function).
Solution
End of explanation
def wallis_rational(N):
    """The partial product approximation to pi using the first N terms of Wallis' formula.

    Parameters
    ----------
    N : int
        Number of terms in product

    Returns
    -------
    partial : Rational
        A rational number approximation to pi
    """
partial = Rational(2,1)
for n in range(1, N+1):
partial = partial * Rational((2*n)**2, (2*n-1)*(2*n+1))
return partial
pi_list = [wallis_rational(n) for n in range(1, 21)]
print(pi_list)
print(sorted(pi_list))
import numpy
print(numpy.pi-numpy.array(list(map(float, pi_list))))
Explanation: Exercise 8
The Wallis formula for $\pi$ is
\begin{equation}
\pi = 2 \prod_{n=1}^{\infty} \frac{ (2 n)^2 }{(2 n - 1) (2 n + 1)}.
\end{equation}
We can define a partial product $\pi_N$ as
\begin{equation}
\pi_N = 2 \prod_{n=1}^{N} \frac{ (2 n)^2 }{(2 n - 1) (2 n + 1)},
\end{equation}
each of which are rational numbers.
Construct a list of the first 20 rational number approximations to $\pi$ and print them out. Print the sorted list to show that the approximations are always increasing. Then convert them to floating point numbers, construct a numpy array, and subtract this array from $\pi$ to see how accurate they are.
Solution
End of explanation
lhs = 27**5 + 84**5 + 110**5 + 133**5
rhs = 144**5
print("Does the LHS {} equal the RHS {}? {}".format(lhs, rhs, lhs==rhs))
Explanation: The shortest published Mathematical paper
A candidate for the shortest mathematical paper ever shows the following result:
\begin{equation}
27^5 + 84^5 + 110^5 + 133^5 = 144^5.
\end{equation}
This is interesting as
This is a counterexample to a conjecture by Euler ... that at least $n$ $n$th powers are required to sum to an $n$th power, $n > 2$.
Exercise 1
Using python, check the equation above is true.
Solution
End of explanation
import numpy
import itertools
Explanation: Exercise 2
The more interesting statement in the paper is that
\begin{equation}
27^5 + 84^5 + 110^5 + 133^5 = 144^5.
\end{equation}
[is] the smallest instance in which four fifth powers sum to a fifth power.
Interpreting "the smallest instance" to mean the solution where the right hand side term (the largest integer) is the smallest, we want to use python to check this statement.
You may find the combinations function from the itertools package useful.
End of explanation
input_list = numpy.arange(1, 7)
combinations = list(itertools.combinations(input_list, 4))
print(combinations)
Explanation: The combinations function returns all the combinations (ignoring order) of r elements from a given list. For example, take a list of length 6, [1, 2, 3, 4, 5, 6] and compute all the combinations of length 4:
End of explanation
n_combinations = 144*143*142*141/24
print("Number of combinations of 4 objects from 144 is {}".format(n_combinations))
Explanation: We can already see that the number of terms to consider is large.
Note that we have used the list function to explicitly get a list of the combinations. The combinations function returns a generator, which can be used in a loop as if it were a list, without storing all elements of the list.
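For instance, a quick illustration of that lazy behaviour (values are illustrative only):
lazy = itertools.combinations([1, 2, 3, 4], 2)
print(next(lazy))   # (1, 2) -- produced on demand, nothing is stored up front
print(list(lazy))   # the remaining combinations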
How fast does the number of combinations grow? The standard formula says that for a list of length $n$ there are
\begin{equation}
\begin{pmatrix} n \\ k \end{pmatrix} = \frac{n!}{k! (n-k)!}
\end{equation}
combinations of length $k$. For $k=4$ as needed here we will have $n (n-1) (n-2) (n-3) / 24$ combinations. For $n=144$ we therefore have
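(The code cell above evaluates this product; as a quick cross-check, assuming Python 3.8+ where math.comb is available:)
import math
print(math.comb(144, 4))  # 17178876, matching the product formula above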
End of explanation
from matplotlib import pyplot
%matplotlib inline
n = numpy.arange(5, 51)
N = numpy.zeros_like(n)
for i, n_c in enumerate(n):
combinations = list(itertools.combinations(numpy.arange(1,n_c+1), 4))
N[i] = len(combinations)
pyplot.figure(figsize=(12,6))
pyplot.loglog(n, N, linestyle='None', marker='x', color='k', label='Combinations')
pyplot.loglog(n, n**4, color='b', label=r'$n^4$')
pyplot.xlabel(r'$n$')
pyplot.ylabel(r'$N$')
pyplot.legend(loc='upper left')
pyplot.show()
Explanation: Exercise 2a
Show, by getting python to compute the number of combinations $N = \begin{pmatrix} n \\ 4 \end{pmatrix}$, that $N$ grows roughly as $n^4$. To do this, plot the number of combinations and $n^4$ on a log-log scale. Restrict to $n \le 50$.
Solution
End of explanation
nmax=145
range_to_power = numpy.arange(1, nmax)**5
lhs_combinations = list(itertools.combinations(range_to_power, 4))
Explanation: With 17 million combinations to work with, we'll need to be a little careful how we compute.
One thing we could try is to loop through each possible "smallest instance" (the term on the right hand side) in increasing order. We then check all possible combinations of left hand sides.
This is computationally very expensive as we repeat a lot of calculations. We repeatedly recalculate combinations (a bad idea). We repeatedly recalculate the powers of the same number.
Instead, let us try creating the list of all combinations of powers once.
Exercise 2b
Construct a numpy array containing all integers in $1, \dots, 144$ to the fifth power.
Construct a list of all combinations of four elements from this array.
Construct a list of sums of all these combinations.
Loop over one list and check if the entry appears in the other list (ie, use the in keyword).
Solution
End of explanation
lhs_sums = []
for lhs_terms in lhs_combinations:
lhs_sums.append(numpy.sum(numpy.array(lhs_terms)))
Explanation: Then calculate the sums:
End of explanation
for i, lhs in enumerate(lhs_sums):
if lhs in range_to_power:
rhs_primitive = int(lhs**(0.2))
lhs_primitive = (numpy.array(lhs_combinations[i])**(0.2)).astype(int)
print("The LHS terms are {}.".format(lhs_primitive))
print("The RHS term is {}.".format(rhs_primitive))
Explanation: Finally, loop through the sums and check to see if it matches any possible term on the RHS:
End of explanation
def dvdt(v, t, sigma, rho, beta):
    """Define the Lorenz system.

    Parameters
    ----------
    v : list
        State vector
    t : float
        Time
    sigma : float
        Parameter
    rho : float
        Parameter
    beta : float
        Parameter

    Returns
    -------
    dvdt : list
        RHS defining the Lorenz system
    """
x, y, z = v
return [sigma*(y-x), x*(rho-z)-y, x*y-beta*z]
Explanation: Lorenz attractor
The Lorenz system is a set of ordinary differential equations which can be written
\begin{equation}
\frac{\text{d} \vec{v}}{\text{d} \vec{t}} = \vec{f}(\vec{v})
\end{equation}
where the variables in the state vector $\vec{v}$ are
\begin{equation}
\vec{v} = \begin{pmatrix} x(t) \\ y(t) \\ z(t) \end{pmatrix}
\end{equation}
and the function defining the ODE is
\begin{equation}
\vec{f} = \begin{pmatrix} \sigma \left( y(t) - x(t) \right) \\ x(t) \left( \rho - z(t) \right) - y(t) \\ x(t) y(t) - \beta z(t) \end{pmatrix}.
\end{equation}
The parameters $\sigma, \rho, \beta$ are all real numbers.
Exercise 1
Write a function dvdt(v, t, params) that returns $\vec{f}$ given $\vec{v}, t$ and the parameters $\sigma, \rho, \beta$.
Solution
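As a quick sanity check of the function defined above (the parameter values here are only illustrative):
print(dvdt([1.0, 1.0, 1.0], 0.0, 10.0, 28.0, 8.0/3.0))  # [0.0, 26.0, -1.666...]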
End of explanation
import numpy
from scipy.integrate import odeint
v0 = [1.0, 1.0, 1.0]
sigma = 10.0
beta = 8.0/3.0
t_values = numpy.linspace(0.0, 100.0, 5000)
rho_values = [13.0, 14.0, 15.0, 28.0]
v_values = []
for rho in rho_values:
params = (sigma, rho, beta)
v = odeint(dvdt, v0, t_values, args=params)
v_values.append(v)
%matplotlib inline
from matplotlib import pyplot
from mpl_toolkits.mplot3d.axes3d import Axes3D
fig = pyplot.figure(figsize=(12,6))
for i, v in enumerate(v_values):
ax = fig.add_subplot(2,2,i+1,projection='3d')
ax.plot(v[:,0], v[:,1], v[:,2])
ax.set_xlabel(r'$x$')
ax.set_ylabel(r'$y$')
ax.set_zlabel(r'$z$')
ax.set_title(r"$\rho={}$".format(rho_values[i]))
pyplot.show()
Explanation: Exercise 2
Fix $\sigma=10, \beta=8/3$. Set initial data to be $\vec{v}(0) = \vec{1}$. Using scipy, specifically the odeint function of scipy.integrate, solve the Lorenz system up to $t=100$ for $\rho=13, 14, 15$ and $28$.
Plot your results in 3d, plotting $x, y, z$.
Solution
End of explanation
t_values = numpy.linspace(0.0, 40.0, 4000)
rho = 28.0
params = (sigma, rho, beta)
v_values = []
v0_values = [[1.0,1.0,1.0],
[1.0+1e-5,1.0+1e-5,1.0+1e-5]]
for v0 in v0_values:
v = odeint(dvdt, v0, t_values, args=params)
v_values.append(v)
fig = pyplot.figure(figsize=(12,6))
line_colours = 'by'
for tstart in range(4):
ax = fig.add_subplot(2,2,tstart+1,projection='3d')
for i, v in enumerate(v_values):
ax.plot(v[tstart*1000:(tstart+1)*1000,0],
v[tstart*1000:(tstart+1)*1000,1],
v[tstart*1000:(tstart+1)*1000,2],
color=line_colours[i])
ax.set_xlabel(r'$x$')
ax.set_ylabel(r'$y$')
ax.set_zlabel(r'$z$')
ax.set_title(r"$t \in [{},{}]$".format(tstart*10, (tstart+1)*10))
pyplot.show()
Explanation: Exercise 3
Fix $\rho = 28$. Solve the Lorenz system twice, up to $t=40$, using the two different initial conditions $\vec{v}(0) = \vec{1}$ and $\vec{v}(0) = \vec{1} + \vec{10^{-5}}$.
Show four plots. Each plot should show the two solutions on the same axes, plotting $x, y$ and $z$. Each plot should show $10$ units of time, ie the first shows $t \in [0, 10]$, the second shows $t \in [10, 20]$, and so on.
Solution
End of explanation
import sympy
sympy.init_printing()
y = sympy.Function('y')  # y must be an undefined function so that y(t) can appear in the ODE
t = sympy.symbols('t')
sympy.dsolve(sympy.diff(y(t), t) + y(t)**2 - sympy.exp(-t), y(t))
Explanation: This shows the sensitive dependence on initial conditions that is characteristic of chaotic behaviour.
Systematic ODE solving with sympy
We are interested in the solution of
\begin{equation}
\frac{\text{d} y}{\text{d} t} = e^{-t} - y^n, \qquad y(0) = 1,
\end{equation}
where $n > 1$ is an integer. The "minor" change from the above examples means that sympy can only give the solution as a power series.
Exercise 1
Compute the general solution as a power series for $n = 2$.
Solution
End of explanation
for n in range(2, 11):
ode_solution = sympy.dsolve(sympy.diff(y(t), t) + y(t)**n - sympy.exp(-t), y(t),
ics = {y(0) : 1})
print(ode_solution)
Explanation: Exercise 2
Investigate the help for the dsolve function to straightforwardly impose the initial condition $y(0) = 1$ using the ics argument. Using this, compute the specific solutions that satisfy the ODE for $n = 2, \dots, 10$.
Solution
End of explanation
%matplotlib inline
for n in range(2, 11):
ode_solution = sympy.dsolve(sympy.diff(y(t), t) + y(t)**n - sympy.exp(-t), y(t),
ics = {y(0) : 1})
sympy.plot(ode_solution.rhs.removeO(), (t, 0, 1));
Explanation: Exercise 3
Using the removeO command, plot each of these solutions for $t \in [0, 1]$.
End of explanation
def all_primes(N):
    """Return all primes less than or equal to N.

    Parameters
    ----------
    N : int
        Maximum number

    Returns
    -------
    prime : generator
        Prime numbers
    """
primes = []
for n in range(2, N+1):
is_n_prime = True
for p in primes:
if n%p == 0:
is_n_prime = False
break
if is_n_prime:
primes.append(n)
yield n
Explanation: Twin primes
A twin prime is a pair $(p_1, p_2)$ such that both $p_1$ and $p_2$ are prime and $p_2 = p_1 + 2$.
Exercise 1
Write a generator that returns twin primes. You can use the generators above, and may want to look at the itertools module together with its recipes, particularly the pairwise recipe.
Solution
Note: we need to first pull in the generators introduced in that notebook
End of explanation
from itertools import tee
def pair_primes(N):
"Generate consecutive prime pairs, using the itertools recipe"
a, b = tee(all_primes(N))
next(b, None)
return zip(a, b)
Explanation: Now we can generate pairs using the pairwise recipe:
End of explanation
def check_twin(pair):
    """Take in a pair of integers, check if they differ by 2."""
p1, p2 = pair
return p2-p1 == 2
Explanation: We could examine the results of the two primes directly. But an efficient solution is to use python's filter function. To do this, first define a function checking if the pair are twin primes:
End of explanation
def twin_primes(N):
    """Return all twin primes."""
return filter(check_twin, pair_primes(N))
Explanation: Then use the filter function to define another generator:
End of explanation
for tp in twin_primes(20):
print(tp)
Explanation: Now check by finding the twin primes with $N<20$:
End of explanation
def pi_N(N):
    """Use the quantify pattern from itertools to count the number of twin primes."""
return sum(map(check_twin, pair_primes(N)))
pi_N(1000)
Explanation: Exercise 2
Find how many twin primes there are with $p_2 < 1000$.
Solution
Again there are many solutions, but the itertools recipes has the quantify pattern. Looking ahead to exercise 3 we'll define:
End of explanation
import numpy
from matplotlib import pyplot
%matplotlib inline
N = numpy.array([2**k for k in range(4, 17)])
twin_prime_fraction = numpy.array(list(map(pi_N, N))) / N
pyplot.semilogx(N, twin_prime_fraction)
pyplot.xlabel(r"$N$")
pyplot.ylabel(r"$\pi_N / N$")
pyplot.show()
Explanation: Exercise 3
Let $\pi_N$ be the number of twin primes such that $p_2 < N$. Plot how $\pi_N / N$ varies with $N$ for $N=2^k$ and $k = 4, 5, \dots 16$. (You should use a logarithmic scale where appropriate!)
Solution
We've now done all the hard work and can use the solutions above.
End of explanation
pyplot.semilogx(N, twin_prime_fraction * numpy.log(N)**2)
pyplot.xlabel(r"$N$")
pyplot.ylabel(r"$\pi_N \times \log(N)^2 / N$")
pyplot.show()
Explanation: For those that have checked Wikipedia, you'll see Brun's theorem which suggests a specific scaling, that $\pi_N$ is bounded by $C N / \log(N)^2$. Checking this numerically on this data:
End of explanation
class Polynomial(object):
    """Representing a polynomial."""
explanation = "I am a polynomial"
def __init__(self, roots, leading_term):
self.roots = roots
self.leading_term = leading_term
self.order = len(roots)
def __repr__(self):
string = str(self.leading_term)
for root in self.roots:
if root == 0:
string = string + "x"
elif root > 0:
string = string + "(x - {})".format(root)
else:
string = string + "(x + {})".format(-root)
return string
def __mul__(self, other):
roots = self.roots + other.roots
leading_term = self.leading_term * other.leading_term
return Polynomial(roots, leading_term)
def explain_to(self, caller):
print("Hello, {}. {}.".format(caller,self.explanation))
print("My roots are {}.".format(self.roots))
return None
class Monomial(Polynomial):
    """Representing a monomial, which is a polynomial with leading term 1."""
explanation = "I am a monomial"
def __init__(self, roots):
Polynomial.__init__(self, roots, 1)
def __repr__(self):
string = ""
for root in self.roots:
if root == 0:
string = string + "x"
elif root > 0:
string = string + "(x - {})".format(root)
else:
string = string + "(x + {})".format(-root)
return string
Explanation: A basis for the polynomials
In the section on classes we defined a Monomial class to represent a polynomial with leading coefficient $1$. As the $N+1$ monomials $1, x, x^2, \dots, x^N$ form a basis for the vector space of polynomials of order $N$, $\mathbb{P}^N$, we can use the Monomial class to return this basis.
Exercise 1
Define a generator that will iterate through this basis of $\mathbb{P}^N$ and test it on $\mathbb{P}^3$.
Solution
Again we first take the definition of the crucial class from the notes.
End of explanation
def basis_pN(N):
    """A generator for the simplest basis of P^N."""
for n in range(N+1):
yield Monomial(n*[0])
Explanation: Now we can define the first basis:
End of explanation
for poly in basis_pN(3):
print(poly)
Explanation: Then test it on $\mathbb{P}^N$:
End of explanation
class Monomial(Polynomial):
    """Representing a monomial, which is a polynomial with leading term 1."""
explanation = "I am a monomial"
def __init__(self, roots):
Polynomial.__init__(self, roots, 1)
def __repr__(self):
if len(self.roots):
string = ""
n_zero_roots = len(self.roots) - numpy.count_nonzero(self.roots)
if n_zero_roots == 1:
string = "x"
elif n_zero_roots > 1:
string = "x^{}".format(n_zero_roots)
else: # Monomial degree 0.
string = "1"
for root in self.roots:
if root > 0:
string = string + "(x - {})".format(root)
elif root < 0:
string = string + "(x + {})".format(-root)
return string
Explanation: This looks horrible, but is correct. To really make this look good, we need to improve the output. If we use
End of explanation
for poly in basis_pN(3):
print(poly)
Explanation: then we can deal with the uglier cases, and re-running the test we get
End of explanation
def basis_pN_variant(N):
    """A generator for the 'sum' basis of P^N."""
for n in range(N+1):
yield Monomial(range(n+1))
for poly in basis_pN_variant(4):
print(poly)
Explanation: An even better solution would be to use the numpy.unique function as in this stackoverflow answer (the second one!) to get the frequency of all the roots.
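A minimal sketch of that idea (a hypothetical helper, not the implementation used in this notebook):
def repr_from_roots(roots):
    # Count how often each root occurs, then emit one factor per distinct root.
    values, counts = numpy.unique(numpy.array(list(roots)), return_counts=True)
    factors = []
    for value, count in zip(values, counts):
        base = "x" if value == 0 else ("(x - {})".format(value) if value > 0 else "(x + {})".format(-value))
        factors.append(base if count == 1 else base + "^{}".format(count))
    return "".join(factors) if factors else "1"

print(repr_from_roots([0, 0, 2]))  # x^2(x - 2)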
Exercise 2
An alternative basis is given by the monomials
\begin{align}
p_0(x) &= 1, \\ p_1(x) &= 1-x, \\ p_2(x) &= (1-x)(2-x), \\ \dots & \quad \dots, \\ p_N(x) &= \prod_{n=1}^N (n-x).
\end{align}
Define a generator that will iterate through this basis of $\mathbb{P}^N$ and test it on $\mathbb{P}^4$.
Solution
End of explanation
from itertools import product
def basis_product():
    """Basis of the product space."""
yield from product(basis_pN(3), basis_pN_variant(4))
for p1, p2 in basis_product():
print("Basis element is ({}) X ({}).".format(p1, p2))
Explanation: I am too lazy to work back through the definitions and flip all the signs; it should be clear how to do this!
Exercise 3
Use these generators to write another generator that produces a basis of $\mathbb{P^3} \times \mathbb{P^4}$.
Solution
Hopefully by now you'll be aware of how useful itertools is!
End of explanation
def basis_product_long_form():
    """Basis of the product space (without using yield from)."""
    prod = product(basis_pN(3), basis_pN_variant(4))
    for p in prod:
        yield p
for p1, p2 in basis_product_long_form():
    print("Basis element is ({}) X ({}).".format(p1, p2))
Explanation: I've cheated here as I haven't introduced the yield from syntax (which returns an iterator from a generator). We could write this out instead as
End of explanation |
14,233 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hillary x Trump Project
In this project we will use tweets related to the last US presidential election, in which Hillary Clinton and Donald Trump contested the final vote. The idea is to use the supervised learning methods we have studied to classify tweets into two categories
Step1: The dict_ object holds every text together with its corresponding class.
In the pre-processing stage we will carry out a few operations
Step2: Most frequent bigrams and trigrams for Hillary
Step3: Most frequent bigrams and trigrams for Trump
Step4: Building a bag of words
Step5: At the end of this process our dataset is already split into two variables
Step6: Attention | Python Code:
import pandas as pd
import nltk
df = pd.read_csv("https://www.data2learning.com/machinelearning/datasets/tweets.csv")
dataset = df[['text','handle']]
dict_ = dataset.T.to_dict("list")
Explanation: Hillary x Trump Project
In this project we will use tweets related to the last US presidential election, in which Hillary Clinton and Donald Trump contested the final vote. The goal is to use the supervised learning methods we have studied to classify tweets into two categories: Hillary and Trump.
The first step was to obtain a set of tweets published by the two candidates' official Twitter accounts. For that we will use this dataset made available on Kaggle. With this data we will build a model capable of learning, from a set of words, whether a text was typed by Hillary's account or by Trump's.
Once this model is built, we will classify a new set of data related to the American election and assign each tweet to one of the two discourses. The idea is to identify tweets that are closer to Hillary's discourse and those that are closer to Trump's, where "discourse" means the tweets each account published.
For this test set we will use a subset of tweets from this dataset, which contains tweets posted on the day of the American election.
To display the results we will build an HTML page. Besides the automatic analysis, this page will show information about the terms most frequently used by the candidates' accounts.
So the first step is to build the candidates' tweet base and extract the most relevant information. We will work here in the Jupyter Notebook first to test the methods. At the end, a JSON file will be generated and read by the HTML page. An example of the populated page can be found at this link.
Let's get started ;)
Pre-processing the candidates' tweet base
End of explanation
from unicodedata import normalize, category
from nltk.tokenize import regexp_tokenize
from collections import Counter
from nltk.corpus import stopwords
import re
def pre_process_text(text):
    # Regular expression used to extract patterns from the text. The following are recognised (in order; the | symbol separates patterns):
    # - https links, http links, www links, words, user names (starting with @), hashtags (starting with #)
pattern = r'(https://[^"\' ]+|www.[^"\' ]+|http://[^"\' ]+|[a-zA-Z]+|\@\w+|\#\w+)'
    # Build the list of stopwords
english_stop = stopwords.words(['english'])
users_cited = []
hash_tags = []
tokens = []
text = text.lower()
patterns = regexp_tokenize(text, pattern)
users_cited = [e for e in patterns if e[0] == '@']
hashtags = [e for e in patterns if e[0] == '#']
tokens = [e for e in patterns if e[:4] != 'http']
tokens = [e for e in tokens if e[:4] != 'www.']
tokens = [e for e in tokens if e[0] != '#']
tokens = [e for e in tokens if e[0] != '@']
tokens = [e for e in tokens if e not in english_stop]
tokens = [e for e in tokens if len(e) > 3]
return users_cited, hashtags, tokens
users_cited_h = [] # users mentioned by Hillary
users_cited_t = [] # users mentioned by Trump
hashtags_h = [] # Hillary's hashtags
hashtags_t = [] # Trump's hashtags
words_h = [] # words that appeared in Hillary's tweets
words_t = [] # words that appeared in Trump's tweets
all_tokens = [] # every token, used to build the final vocabulary
all_texts = [] # every processed text with its class
for d in dict_:
text_ = dict_[d][0]
class_ = dict_[d][1]
users_, hash_, tokens_ = pre_process_text(text_)
if class_ == "HillaryClinton":
class_ = "hillary"
users_cited_h += users_
hashtags_h += hash_
words_h += tokens_
elif class_ == "realDonaldTrump":
class_ = "trump"
users_cited_t += users_
hashtags_t += hash_
words_t += tokens_
temp_dict = {
'text': " ".join(tokens_),
'class_': class_
}
all_tokens += tokens_
all_texts.append(temp_dict)
print("Termos mais frequentes ditos por Hillary:")
print()
hillary_frequent_terms = nltk.FreqDist(words_h).most_common(10)
for word in hillary_frequent_terms:
print(word[0])
print("Termos mais frequentes ditos por Trump:")
print()
trump_frequent_terms = nltk.FreqDist(words_t).most_common(10)
for word in trump_frequent_terms:
print(word[0])
Explanation: The dict_ object holds every text together with its corresponding class.
In the pre-processing stage we will carry out a few operations:
remove hashtags, user names and links from the texts. This information is kept in separate lists to be used later.
stopwords, punctuation symbols and very short words are removed;
numerals are also discarded, keeping only words.
These pre-processing steps depend on the goal of the work. Depending on the classification task it may be of interest to keep such symbols. For our work, only the words themselves matter.
For this task we will also use NLTK, a toolkit aimed at natural language processing.
We will wrap this in a method, since we will reuse it later on the test set:
End of explanation
# Get the most frequent bigrams and trigrams
from nltk.collocations import BigramCollocationFinder, TrigramCollocationFinder
from nltk.metrics import BigramAssocMeasures, TrigramAssocMeasures
bcf = BigramCollocationFinder.from_words(words_h)
tcf = TrigramCollocationFinder.from_words(words_h)
bcf.apply_freq_filter(3)
tcf.apply_freq_filter(3)
result_bi = bcf.nbest(BigramAssocMeasures.raw_freq, 5)
result_tri = tcf.nbest(TrigramAssocMeasures.raw_freq, 5)
hillary_frequent_bitrigram = []
for r in result_bi:
w_ = " ".join(r)
print(w_)
hillary_frequent_bitrigram.append(w_)
print()
for r in result_tri:
w_ = " ".join(r)
print(w_)
hillary_frequent_bitrigram.append(w_)
Explanation: Most frequent bigrams and trigrams for Hillary
End of explanation
bcf = BigramCollocationFinder.from_words(words_t)
tcf = TrigramCollocationFinder.from_words(words_t)
bcf.apply_freq_filter(3)
tcf.apply_freq_filter(3)
result_bi = bcf.nbest(BigramAssocMeasures.raw_freq, 5)
result_tri = tcf.nbest(TrigramAssocMeasures.raw_freq, 5)
trump_frequent_bitrigram = []
for r in result_bi:
w_ = " ".join(r)
print(w_)
trump_frequent_bitrigram.append(w_)
print()
for r in result_tri:
w_ = " ".join(r)
print(w_)
trump_frequent_bitrigram.append(w_)
Explanation: Most frequent bigrams and trigrams for Trump
End of explanation
# Each token list is joined into a single string that represents one tweet
# Each class is stored in a parallel list (hillary, trump)
# Instances: [t1, t2, t3, t4]
# Classes: [c1, c2, c3, c4]
all_tweets = []
all_class = []
for t in all_texts:
all_tweets.append(t['text'])
all_class.append(t['class_'])
print("Criar o bag of words...\n")
# Number of features, i.e. columns of the table
max_features = 2000
from sklearn.feature_extraction.text import CountVectorizer
# Initialize the "CountVectorizer" object, which is scikit-learn's
# bag of words tool.
vectorizer = CountVectorizer(analyzer = "word", \
tokenizer = None, \
preprocessor = None, \
stop_words = None, \
max_features = max_features)
# fit_transform() does two functions: First, it fits the model
# and learns the vocabulary; second, it transforms our training data
# into feature vectors. The input to fit_transform should be a list of
# strings.
X = vectorizer.fit_transform(all_tweets)
# Numpy arrays are easy to work with, so convert the result to an
# array
X = X.toarray()
y = all_class
print("Train data: OK!")
X.shape
Explanation: Building a bag of words
End of explanation
# Test the models from this point on
Explanation: At the end of this process our dataset is already split into two variables: X and y.
X is the bag of words: each row corresponds to one tweet and each column to a word in the dataset's vocabulary. Each cell holds the number of times that word appears in that tweet; if the word is not present, the value is 0.
y is the class of each tweet: hillary for tweets from the @HillaryClinton account and trump for tweets from @realDonaldTrump.
Testing different models
We will test the different models we have studied on this dataset and pick the one that generalises best. To evaluate the models we will use 10-fold cross-validation. Apply the models studied:
KNN (try different values of K and pick the best)
Decision tree
SVM (vary the value of C and pick the best)
In addition, test two other models:
RandomForest
Naive Bayes
For these two, look up how to use them in scikit-learn.
For each model print the training accuracy and the mean accuracy over the 10 cross-validation folds; a minimal sketch is given below. The best model should be chosen from the cross-validation mean.
The best model will then be used to classify other texts extracted from Twitter and in the web page implementation.
Note: given the amount of data, some models may take a few minutes to run.
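A minimal sketch of such a comparison, to go in the empty code cell above (assuming scikit-learn; the model list and hyper-parameters are illustrative, not the required answer):
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB

models = {
    'KNN (k=5)': KNeighborsClassifier(n_neighbors=5),
    'Decision tree': DecisionTreeClassifier(),
    'Linear SVM (C=1)': LinearSVC(C=1.0),
    'Random forest': RandomForestClassifier(n_estimators=100),
    'Naive Bayes': MultinomialNB(),
}
for name, model in models.items():
    cv_scores = cross_val_score(model, X, y, cv=10)
    train_acc = model.fit(X, y).score(X, y)
    print("{}: train accuracy {:.3f}, 10-fold CV mean {:.3f}".format(name, train_acc, cv_scores.mean()))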
End of explanation
hillary_frequent_hashtags = nltk.FreqDist(hashtags_h).most_common(10)
trump_frequent_hashtags = nltk.FreqDist(hashtags_t).most_common(10)
dict_web = {
'hillary_information': {
'frequent_terms': hillary_frequent_terms,
'frequent_bitrigram': hillary_frequent_bitrigram,
'frequent_hashtags': hillary_frequent_hashtags
},
'trump_information': {
'frequent_terms': trump_frequent_terms,
'frequent_bitrigram': trump_frequent_bitrigram,
'frequent_hashtags': trump_frequent_hashtags
},
'classified_information': {
'hillary_terms': hillary_classified_frequent_terms,
'hillary_bigram': hillary_classified_bitrigram,
'trump_terms': trump_classified_frequent_terms,
'trump_bigram': trump_classified_bitrigram,
'texts_classified': all_classified
}
}
with open('data.json', 'w') as outfile:
json.dump(dict_web, outfile)
Explanation: Note: the following tasks will be made available after the first part is handed in, so you do not need to submit what is asked below. Once the due date has passed, I will publish the complete notebook. Feel free, however, to attempt the next task as a learning exercise; it is a good one ;)
Using the best model on new texts
We will run the best classifier on a set of new, unlabelled texts. They were posted during the day of the American election. The idea is to automatically identify which tweets are closer to Hillary Clinton's discourse and which are closer to Donald Trump's; a rough sketch of this step follows below.
This task will be carried out in class after the best-model assignment is handed in.
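As a rough sketch of that step (here vectorizer is the CountVectorizer fitted above, while best_model and the example texts are placeholders for whatever wins the comparison):
# In practice each new text should first go through pre_process_text, as was done for the training base.
new_texts = ["example tweet about the election", "another example tweet"]  # placeholder data
new_X = vectorizer.transform(new_texts).toarray()
print(list(zip(new_texts, best_model.predict(new_X))))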
Generating the .json read by the web page
This is the JSON generated after the best model has been tested. This task will also be carried out in class after the model tests are handed in.
End of explanation |
14,234 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Run full timeseries simulations
In this section, we will learn how to
Step1: Get timeseries inputs
Step2: Prepare PV array parameters
Step3: Run single timestep with PVEngine and inspect results
Instantiate the PVEngine class and fit it to the data
Step4: The user can run a simulation for a single timestep and plot the returned PV array
Step5: The user can inspect the results very easily thanks to the simple geometry API
Step6: Run multiple timesteps with PVEngine
The users can also obtain a "report" that will look like whatever the users want, and which will rely on the simple geometry API shown above.
Here is an example
Step7: A function that builds a report needs to be specified, otherwise nothing will be returned by the simulation.
Here is an example of a report function that will return the total incident irradiance ('qinc') on the back surface of the rightmost PV row.
A good way to get started building the reporting function is to use the example provided in the report.py module of the pvfactors package.
Step8: Now we can run the timeseries simulation again using the same engine but a different report function.
Step9: We can see in the printed output the new report generated by the simulation run.
For convenience, we've been using dictionaries as the data structure holding the reports, but it could be anything else, like numpy arrays, pandas dataframes, etc.
Run one or multiple timesteps with the run_timeseries_engine() function
The same thing can be accomplished using a function from the run.py module of the pvfactors package.
But only the report will be returned. | Python Code:
# Import external libraries
import os
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
import pandas as pd
import warnings
# Settings
%matplotlib inline
np.set_printoptions(precision=3, linewidth=300)
warnings.filterwarnings('ignore')
# Paths
LOCAL_DIR = os.getcwd()
DATA_DIR = os.path.join(LOCAL_DIR, 'data')
filepath = os.path.join(DATA_DIR, 'test_df_inputs_MET_clearsky_tucson.csv')
Explanation: Run full timeseries simulations
In this section, we will learn how to:
run full timeseries simulations using the PVEngine class, and visualize some of the results
run full timeseries simulations using the run_timeseries_engine() function
Imports and settings
End of explanation
def export_data(fp):
tz = 'US/Arizona'
df = pd.read_csv(fp, index_col=0)
df.index = pd.DatetimeIndex(df.index).tz_convert(tz)
return df
df = export_data(filepath)
df_inputs = df.iloc[:24, :]
# Plot the data
f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3))
df_inputs[['dni', 'dhi']].plot(ax=ax1)
df_inputs[['solar_zenith', 'solar_azimuth']].plot(ax=ax2)
df_inputs[['surface_tilt', 'surface_azimuth']].plot(ax=ax3)
plt.show()
# Use a fixed albedo
albedo = 0.2
Explanation: Get timeseries inputs
End of explanation
pvarray_parameters = {
'n_pvrows': 3, # number of pv rows
'pvrow_height': 1, # height of pvrows (measured at center / torque tube)
'pvrow_width': 1, # width of pvrows
'axis_azimuth': 0., # azimuth angle of rotation axis
'gcr': 0.4, # ground coverage ratio
'rho_front_pvrow': 0.01, # pv row front surface reflectivity
'rho_back_pvrow': 0.03, # pv row back surface reflectivity
}
Explanation: Prepare PV array parameters
End of explanation
from pvfactors.engine import PVEngine
from pvfactors.geometry import OrderedPVArray
# Create ordered PV array
pvarray = OrderedPVArray.init_from_dict(pvarray_parameters)
# Create engine
engine = PVEngine(pvarray)
# Fit engine to data
engine.fit(df_inputs.index, df_inputs.dni, df_inputs.dhi,
df_inputs.solar_zenith, df_inputs.solar_azimuth,
df_inputs.surface_tilt, df_inputs.surface_azimuth,
albedo)
Explanation: Run single timestep with PVEngine and inspect results
Instantiate the PVEngine class and fit it to the data
End of explanation
# Get the PV array
pvarray = engine.run_full_mode(fn_build_report=lambda pvarray: pvarray)
# Plot pvarray shapely geometries
f, ax = plt.subplots(figsize=(10, 3))
pvarray.plot_at_idx(15, ax, with_surface_index=True)
ax.set_title(df.index[15])
plt.show()
Explanation: The user can run a simulation for a single timestep and plot the returned PV array
End of explanation
# Get the calculated outputs from the pv array
center_row_front_incident_irradiance = pvarray.ts_pvrows[1].front.get_param_weighted('qinc')
left_row_back_reflected_incident_irradiance = pvarray.ts_pvrows[0].back.get_param_weighted('reflection')
right_row_back_isotropic_incident_irradiance = pvarray.ts_pvrows[2].back.get_param_weighted('isotropic')
print("Incident irradiance on front surface of middle pv row: \n{} W/m2"
.format(center_row_front_incident_irradiance))
print("Reflected irradiance on back surface of left pv row: \n{} W/m2"
.format(left_row_back_reflected_incident_irradiance))
print("Isotropic irradiance on back surface of right pv row: \n{} W/m2"
.format(right_row_back_isotropic_incident_irradiance))
Explanation: The user can inspect the results very easily thanks to the simple geometry API
End of explanation
# Create a function that will build a report
from pvfactors.report import example_fn_build_report
# Run full simulation
report = engine.run_full_mode(fn_build_report=example_fn_build_report)
# Print results (report is defined by report function passed by user)
df_report = pd.DataFrame(report, index=df_inputs.index)
df_report.iloc[6:11]
f, ax = plt.subplots(1, 2, figsize=(10, 3))
df_report[['qinc_front', 'qinc_back']].plot(ax=ax[0])
df_report[['iso_front', 'iso_back']].plot(ax=ax[1])
plt.show()
Explanation: Run multiple timesteps with PVEngine
Users can also obtain a "report" in whatever format they want, which relies on the simple geometry API shown above.
Here is an example:
End of explanation
def new_fn_build_report(pvarray): return {'total_inc_back': pvarray.ts_pvrows[1].back.get_param_weighted('qinc')}
Explanation: A function that builds a report needs to be specified, otherwise nothing will be returned by the simulation.
Here is an example of a report function that will return the total incident irradiance ('qinc') on the back surface of the center PV row (ts_pvrows[1]).
A good way to get started building the reporting function is to use the example provided in the report.py module of the pvfactors package.
End of explanation
# Run full simulation using new report function
new_report = engine.run_full_mode(fn_build_report=new_fn_build_report)
# Print results
df_new_report = pd.DataFrame(new_report, index=df_inputs.index)
df_new_report.iloc[6:11]
f, ax = plt.subplots(figsize=(5, 3))
df_new_report.plot(ax=ax)
plt.show()
Explanation: Now we can run the timeseries simulation again using the same engine but a different report function.
End of explanation
# import function
from pvfactors.run import run_timeseries_engine
# run simulation using new_fn_build_report
report_from_fn = run_timeseries_engine(new_fn_build_report, pvarray_parameters, df_inputs.index,
df_inputs.dni, df_inputs.dhi,
df_inputs.solar_zenith, df_inputs.solar_azimuth,
df_inputs.surface_tilt, df_inputs.surface_azimuth,
albedo)
# make a dataframe out of the report
df_report_from_fn = pd.DataFrame(report_from_fn, index=df_inputs.index)
f, ax = plt.subplots(figsize=(5, 3))
df_report_from_fn.plot(ax=ax)
plt.show()
Explanation: We can see in the printed output the new report generated by the simulation run.
For convenience, we've been using dictionaries as the data structure holding the reports, but it could be anything else, like numpy arrays, pandas dataframes, etc.
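For example, a sketch of a report function that returns a pandas DataFrame directly (the quantities chosen are just an illustration):
def df_fn_build_report(pvarray):
    # Package the same kind of quantities as before, but as a DataFrame instead of a dict
    # (pandas was already imported above as pd).
    return pd.DataFrame({
        'qinc_front_center': pvarray.ts_pvrows[1].front.get_param_weighted('qinc'),
        'qinc_back_center': pvarray.ts_pvrows[1].back.get_param_weighted('qinc'),
    })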
Run one or multiple timesteps with the run_timeseries_engine() function
The same thing can be accomplished using a function from the run.py module of the pvfactors package.
But only the report will be returned.
End of explanation |
14,235 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create A Dictionary
Step2: Convert Dictionary To Feature Matrix
Step3: View Feature Names | Python Code:
from sklearn.feature_extraction import DictVectorizer
Explanation: Title: Loading Features From Dictionaries
Slug: loading_features_from_dictionaries
Summary: Loading Features From Dictionaries
Date: 2016-11-01 12:00
Category: Machine Learning
Tags: Preprocessing Structured Data
Authors: Chris Albon
Preliminaries
End of explanation
staff = [{'name': 'Steve Miller', 'age': 33.},
{'name': 'Lyndon Jones', 'age': 12.},
{'name': 'Baxter Morth', 'age': 18.}]
Explanation: Create A Dictionary
End of explanation
# Create an object for our dictionary vectorizer
vec = DictVectorizer()
# Fit then transform the staff dictionary with vec, then output an array
vec.fit_transform(staff).toarray()
Explanation: Convert Dictionary To Feature Matrix
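As a small variation (not needed above), DictVectorizer can return a dense array directly through its sparse argument:
vec_dense = DictVectorizer(sparse=False)
vec_dense.fit_transform(staff)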
End of explanation
# Get Feature Names
vec.get_feature_names()
Explanation: View Feature Names
End of explanation |
14,236 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Imports" data-toc-modified-id="Imports-1"><span class="toc-item-num">1 </span>Imports</a></div><div class="lev1 toc-item"><a href="#Load-the-data-from-disk-and-set-up-the-dataframes" data-toc-modified-id="Load-the-data-from-disk-and-set-up-the-dataframes-2"><span class="toc-item-num">2 </span>Load the data from disk and set up the dataframes</a></div><div class="lev1 toc-item"><a href="#Show-a-heatmap-of-how-many-texts-you've-exchanged" data-toc-modified-id="Show-a-heatmap-of-how-many-texts-you've-exchanged-3"><span class="toc-item-num">3 </span>Show a heatmap of how many texts you've exchanged</a></div><div class="lev1 toc-item"><a href="#Table-and-graph-of-who-you-text-the-most" data-toc-modified-id="Table-and-graph-of-who-you-text-the-most-4"><span class="toc-item-num">4 </span>Table and graph of who you text the most</a></div><div class="lev1 toc-item"><a href="#Steamgraph" data-toc-modified-id="Steamgraph-5"><span class="toc-item-num">5 </span>Steamgraph</a></div><div class="lev3 toc-item"><a href="#Dump-the-necessary-data-to-JS" data-toc-modified-id="Dump-the-necessary-data-to-JS-501"><span class="toc-item-num">5.0.1 </span>Dump the necessary data to JS</a></div><div class="lev3 toc-item"><a href="#Draw-the-graph!" data-toc-modified-id="Draw-the-graph!-502"><span class="toc-item-num">5.0.2 </span>Draw the graph!</a></div><div class="lev1 toc-item"><a href="#Wordcloud" data-toc-modified-id="Wordcloud-6"><span class="toc-item-num">6 </span>Wordcloud</a></div><div class="lev3 toc-item"><a href="#Define-the-helper-method" data-toc-modified-id="Define-the-helper-method-601"><span class="toc-item-num">6.0.1 </span>Define the helper method</a></div><div class="lev3 toc-item"><a href="#Texts-you've-sent" data-toc-modified-id="Texts-you've-sent-602"><span class="toc-item-num">6.0.2 </span>Texts you've sent</a></div><div class="lev3 toc-item"><a href="#Texts-to/from-a-specific-contact" data-toc-modified-id="Texts-to/from-a-specific-contact-603"><span class="toc-item-num">6.0.3 </span>Texts to/from a specific contact</a></div><div class="lev1 toc-item"><a href="#Diving-deeper-into-the-actual-text" data-toc-modified-id="Diving-deeper-into-the-actual-text-7"><span class="toc-item-num">7 </span>Diving deeper into the actual text</a></div><div class="lev3 toc-item"><a href="#Visualize-a-word-tree-of-texts-exchanged-with-a-specific-contact" data-toc-modified-id="Visualize-a-word-tree-of-texts-exchanged-with-a-specific-contact-701"><span class="toc-item-num">7.0.1 </span>Visualize a word tree of texts exchanged with a specific contact</a></div><div class="lev3 toc-item"><a href="#Preprocessing-and-data-munging-for-TFIDF" data-toc-modified-id="Preprocessing-and-data-munging-for-TFIDF-702"><span class="toc-item-num">7.0.2 </span>Preprocessing and data munging for TFIDF</a></div><div class="lev3 toc-item"><a href="#Create-TFIDF-matrix-for-all-contacts" data-toc-modified-id="Create-TFIDF-matrix-for-all-contacts-703"><span class="toc-item-num">7.0.3 </span>Create TFIDF matrix for all contacts</a></div><div class="lev3 toc-item"><a href="#Helper-methods-to-leverage-the-TFIDF-matrix" data-toc-modified-id="Helper-methods-to-leverage-the-TFIDF-matrix-704"><span class="toc-item-num">7.0.4 </span>Helper methods to leverage the TFIDF matrix</a></div><div class="lev3 toc-item"><a href="#Words-that-identify-a-specific-contact" data-toc-modified-id="Words-that-identify-a-specific-contact-705"><span class="toc-item-num">7.0.5 </span>Words that identify a specific 
contact</a></div><div class="lev3 toc-item"><a href="#Words-that-identify-the-difference-between-two-contacts" data-toc-modified-id="Words-that-identify-the-difference-between-two-contacts-706"><span class="toc-item-num">7.0.6 </span>Words that identify the difference between two contacts</a></div><div class="lev1 toc-item"><a href="#Looking-at-language-progression-over-the-years" data-toc-modified-id="Looking-at-language-progression-over-the-years-8"><span class="toc-item-num">8 </span>Looking at language progression over the years</a></div><div class="lev3 toc-item"><a href="#Helper-methods-for-looking-at-TFIDF-by-year" data-toc-modified-id="Helper-methods-for-looking-at-TFIDF-by-year-801"><span class="toc-item-num">8.0.1 </span>Helper methods for looking at TFIDF by year</a></div><div class="lev3 toc-item"><a href="#My-top-words-over-the-years" data-toc-modified-id="My-top-words-over-the-years-802"><span class="toc-item-num">8.0.2 </span>My top words over the years</a></div><div class="lev3 toc-item"><a href="#Top-words-over-the-years-from/to-a-specific-contact" data-toc-modified-id="Top-words-over-the-years-from/to-a-specific-contact-803"><span class="toc-item-num">8.0.3 </span>Top words over the years from/to a specific contact</a></div>
See the README for an explanation of how this code runs and functions.
Contact michaeldezube at gmail dot com with questions.
# Imports
Step1: Load the data from disk and set up the dataframes
Step3: Use fully_merged_messages_df and address_book_df for analysis, they contain all messages with columns for the sender and all contacts, respectively
Show a heatmap of how many texts you've exchanged
Step4: Table and graph of who you text the most
Step5: Steamgraph
Dump the necessary data to JS
Step6: Draw the graph!
Step7: Wordcloud
Define the helper method
Step8: Texts you've sent
Step9: Texts to/from a specific contact
Step10: Diving deeper into the actual text
Visualize a word tree of texts exchanged with a specific contact
Step11: Preprocessing and data munging for TFIDF
Step12: Create TFIDF matrix for all contacts
Note the methods below focus on texts received from these contacts, not texts you've sent to them.
Step13: Helper methods to leverage the TFIDF matrix
Step14: Words that identify a specific contact
Step15: Words that identify the difference between two contacts
Step19: Looking at language progression over the years
Helper methods for looking at TFIDF by year
Step20: My top words over the years
This offers an interesting insight into the main topics over the years.
Step23: Top words over the years from/to a specific contact
This offers an interesting insight into the main topics over the years. | Python Code:
from __future__ import print_function
from __future__ import division
import copy
import json
import re
import string
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import seaborn # To improve the chart styling.
import wordtree
from IPython.display import display
from IPython.display import HTML
from IPython.display import Javascript
from wordcloud import STOPWORDS
import ipywidgets as widgets
from wordcloud import WordCloud
import iphone_connector
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Imports" data-toc-modified-id="Imports-1"><span class="toc-item-num">1 </span>Imports</a></div><div class="lev1 toc-item"><a href="#Load-the-data-from-disk-and-set-up-the-dataframes" data-toc-modified-id="Load-the-data-from-disk-and-set-up-the-dataframes-2"><span class="toc-item-num">2 </span>Load the data from disk and set up the dataframes</a></div><div class="lev1 toc-item"><a href="#Show-a-heatmap-of-how-many-texts-you've-exchanged" data-toc-modified-id="Show-a-heatmap-of-how-many-texts-you've-exchanged-3"><span class="toc-item-num">3 </span>Show a heatmap of how many texts you've exchanged</a></div><div class="lev1 toc-item"><a href="#Table-and-graph-of-who-you-text-the-most" data-toc-modified-id="Table-and-graph-of-who-you-text-the-most-4"><span class="toc-item-num">4 </span>Table and graph of who you text the most</a></div><div class="lev1 toc-item"><a href="#Steamgraph" data-toc-modified-id="Steamgraph-5"><span class="toc-item-num">5 </span>Steamgraph</a></div><div class="lev3 toc-item"><a href="#Dump-the-necessary-data-to-JS" data-toc-modified-id="Dump-the-necessary-data-to-JS-501"><span class="toc-item-num">5.0.1 </span>Dump the necessary data to JS</a></div><div class="lev3 toc-item"><a href="#Draw-the-graph!" data-toc-modified-id="Draw-the-graph!-502"><span class="toc-item-num">5.0.2 </span>Draw the graph!</a></div><div class="lev1 toc-item"><a href="#Wordcloud" data-toc-modified-id="Wordcloud-6"><span class="toc-item-num">6 </span>Wordcloud</a></div><div class="lev3 toc-item"><a href="#Define-the-helper-method" data-toc-modified-id="Define-the-helper-method-601"><span class="toc-item-num">6.0.1 </span>Define the helper method</a></div><div class="lev3 toc-item"><a href="#Texts-you've-sent" data-toc-modified-id="Texts-you've-sent-602"><span class="toc-item-num">6.0.2 </span>Texts you've sent</a></div><div class="lev3 toc-item"><a href="#Texts-to/from-a-specific-contact" data-toc-modified-id="Texts-to/from-a-specific-contact-603"><span class="toc-item-num">6.0.3 </span>Texts to/from a specific contact</a></div><div class="lev1 toc-item"><a href="#Diving-deeper-into-the-actual-text" data-toc-modified-id="Diving-deeper-into-the-actual-text-7"><span class="toc-item-num">7 </span>Diving deeper into the actual text</a></div><div class="lev3 toc-item"><a href="#Visualize-a-word-tree-of-texts-exchanged-with-a-specific-contact" data-toc-modified-id="Visualize-a-word-tree-of-texts-exchanged-with-a-specific-contact-701"><span class="toc-item-num">7.0.1 </span>Visualize a word tree of texts exchanged with a specific contact</a></div><div class="lev3 toc-item"><a href="#Preprocessing-and-data-munging-for-TFIDF" data-toc-modified-id="Preprocessing-and-data-munging-for-TFIDF-702"><span class="toc-item-num">7.0.2 </span>Preprocessing and data munging for TFIDF</a></div><div class="lev3 toc-item"><a href="#Create-TFIDF-matrix-for-all-contacts" data-toc-modified-id="Create-TFIDF-matrix-for-all-contacts-703"><span class="toc-item-num">7.0.3 </span>Create TFIDF matrix for all contacts</a></div><div class="lev3 toc-item"><a href="#Helper-methods-to-leverage-the-TFIDF-matrix" data-toc-modified-id="Helper-methods-to-leverage-the-TFIDF-matrix-704"><span class="toc-item-num">7.0.4 </span>Helper methods to leverage the TFIDF matrix</a></div><div class="lev3 toc-item"><a href="#Words-that-identify-a-specific-contact" data-toc-modified-id="Words-that-identify-a-specific-contact-705"><span class="toc-item-num">7.0.5 </span>Words that identify a specific 
contact</a></div><div class="lev3 toc-item"><a href="#Words-that-identify-the-difference-between-two-contacts" data-toc-modified-id="Words-that-identify-the-difference-between-two-contacts-706"><span class="toc-item-num">7.0.6 </span>Words that identify the difference between two contacts</a></div><div class="lev1 toc-item"><a href="#Looking-at-language-progression-over-the-years" data-toc-modified-id="Looking-at-language-progression-over-the-years-8"><span class="toc-item-num">8 </span>Looking at language progression over the years</a></div><div class="lev3 toc-item"><a href="#Helper-methods-for-looking-at-TFIDF-by-year" data-toc-modified-id="Helper-methods-for-looking-at-TFIDF-by-year-801"><span class="toc-item-num">8.0.1 </span>Helper methods for looking at TFIDF by year</a></div><div class="lev3 toc-item"><a href="#My-top-words-over-the-years" data-toc-modified-id="My-top-words-over-the-years-802"><span class="toc-item-num">8.0.2 </span>My top words over the years</a></div><div class="lev3 toc-item"><a href="#Top-words-over-the-years-from/to-a-specific-contact" data-toc-modified-id="Top-words-over-the-years-from/to-a-specific-contact-803"><span class="toc-item-num">8.0.3 </span>Top words over the years from/to a specific contact</a></div>
See the README for an explanation of how this code runs and functions.
Contact michaeldezube at gmail dot com with questions.
# Imports
End of explanation
%matplotlib inline
matplotlib.style.use('ggplot')
pd.set_option('display.max_colwidth', 1000)
iphone_connector.initialize()
fully_merged_messages_df, address_book_df = iphone_connector.get_cleaned_fully_merged_messages()
full_names = set(address_book_df.full_name) # Handy set to check for misspellings later on.
fully_merged_messages_df.full_name.replace('nan nan nan', 'Unknown', inplace=True)
WORDS_PER_PAGE = 450 # Based upon http://wordstopages.com/
print('\nTotal pages if all texts were printed: {0:,d} (Arial size 12, single spaced)\n'.format(
sum(fully_merged_messages_df.text.apply(lambda x: len(x.split())))//WORDS_PER_PAGE))
fully_merged_messages_df = fully_merged_messages_df.reset_index(drop=True)
fully_merged_messages_df
address_book_df
Explanation: Load the data from disk and set up the dataframes
End of explanation
def plot_year_month_heatmap(df, trim_incomplete=True, search_term=None, figsize=(18, 10)):
    """Plots a heatmap of the dataframe grouped by year and month.

    Args:
        df: The dataframe, must contain a column named `date`.
        trim_incomplete: If true, don't plot rows that lack 12 full months of data. Default True.
        search_term: A case insensitive term to require in all rows of the dataframe's `text`
            column. Default None.
        figsize: The size of the plot as a tuple. Default (18, 10).
    """
if search_term:
df = df[df['text'].str.contains(search_term, case=False)]
month_year_messages = pd.DataFrame(df['date'])
month_year_messages['year'] = month_year_messages.apply(lambda row: row.date.year, axis=1)
month_year_messages['month'] = month_year_messages.apply(lambda row: row.date.month, axis=1)
month_year_messages = month_year_messages.drop('date', axis=1)
month_year_messages_pivot = month_year_messages.pivot_table(index='year',
columns='month',
aggfunc=len, dropna=True)
if trim_incomplete:
month_year_messages_pivot = month_year_messages_pivot[month_year_messages_pivot.count(axis=1) == 12]
if month_year_messages_pivot.shape[0] == 0:
print('After trimming rows that didn\'t have 12 months, no rows remained, bailing out.')
return
f, ax = plt.subplots(figsize=figsize)
seaborn.heatmap(month_year_messages_pivot, annot=True, fmt=".0f", square=True, cmap="YlGnBu", ax=ax)
# Plot all text messages exchanges over the years.
plot_year_month_heatmap(fully_merged_messages_df, search_term='')
Explanation: Use fully_merged_messages_df and address_book_df for analysis, they contain all messages with columns for the sender and all contacts, respectively
Show a heatmap of how many texts you've exchanged
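For instance, the same helper can be restricted to texts containing a given word (the search term below is purely illustrative):
plot_year_month_heatmap(fully_merged_messages_df, search_term='birthday')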
End of explanation
# Helper method to better support py2 and py3.
def convert_unicode_to_str_if_needed(unicode_or_str):
if type(unicode_or_str).__name__ == 'unicode':
return unicode_or_str.encode('utf-8')
return unicode_or_str
# Note "Unknown" means the number was not found in your address book.
def get_message_counts(dataframe):
return pd.Series({'Texts sent': dataframe[dataframe.is_from_me == 1].shape[0],
'Texts received': dataframe[dataframe.is_from_me == 0].shape[0],
'Texts exchanged': dataframe.shape[0]})
messages_grouped = fully_merged_messages_df.groupby('full_name').apply(get_message_counts)
messages_grouped = messages_grouped.sort_values(by='Texts exchanged', ascending=False)
widgets.interact(messages_grouped.head,
n=widgets.IntSlider(min=5, max=50, step=1, value=5, continuous_update=False,
description='Number of people to show:'))
# Helper method so we can wrap it with interact().
def _plot_most_common_text(top_n=10):
messages_grouped.head(top_n).plot(figsize=(20,10), kind='bar')
widgets.interact(_plot_most_common_text,
top_n=widgets.IntSlider(min=5, max=100, step=1, value=5, continuous_update=False,
description='Number of people to show:'))
Explanation: Table and graph of who you text the most
End of explanation
# Restrict to the top N people you text the most so the steamgraph is legible.
TOP_N = 10 # Freely change this value.
sliced_df = fully_merged_messages_df[fully_merged_messages_df.full_name.isin(messages_grouped.head(TOP_N).index)]
grouped_by_month = sliced_df.groupby([
sliced_df.apply(lambda x: x.date.strftime('%Y/%m'), axis=1),
'full_name']
)['text'].count().to_frame()
grouped_by_month = grouped_by_month.sort_index()
# We create a dense dataframe for every year/month combination so even if a person didn't text in a specific
# year/month, we have a 0 so the steamgraph can propertly graph the value.
grouped_by_month_dense = grouped_by_month.unstack().fillna(0).stack()
# Dump the dataframe to a global JS variable so we can access it in our JS code.
# TODO(mdezube): Dump out as JSON instead.
formatted_for_steamgraph = grouped_by_month_dense.reset_index(level=1)
formatted_for_steamgraph.index.name = 'date'
formatted_for_steamgraph.columns = ['key', 'value']
Javascript("window.csvAsString='{}'".format(formatted_for_steamgraph.to_csv(index_label='date').replace('\n', '\\n')))
Explanation: Steamgraph
Dump the necessary data to JS
End of explanation
%%javascript
// Draw the streamgraph using d3.
element.append('<div class="chart" style="height:600px; width:100%"></div>')
element.append('<style>.axis path, .axis line' +
'{fill: none; stroke: #000;stroke-width: 2px; shape-rendering: crispEdges;}' +
'</style>')
element.append("<script src='d3.min.js'></script>")
element.append("<script src='colorbrewer.min.js'></script>")
element.append("<script src='steamgraph.js'></script>")
// Choose your favorite from https://bl.ocks.org/mbostock/5577023
var colorBrewerPalette = "Spectral";
// Set a timeout to let the JS scripts actually load into memory, this is a bit of a hack but works reliably.
setTimeout(function(){createSteamgraph(csvAsString, colorBrewerPalette)}, 200);
Explanation: Draw the graph!
End of explanation
def generate_cloud(texts, max_words=30):
# Add more words here if you want to ignore them:
my_stopwords = STOPWORDS.copy()
my_stopwords.update(['go', 'ya', 'come', 'back', 'good', 'sound'])
words = ' '.join(texts).lower()
wordcloud = WordCloud(font_path='CabinSketch-Bold.ttf',
stopwords=my_stopwords,
background_color='black',
width=800,
height=600,
relative_scaling=1,
max_words=max_words
).generate_from_text(words)
print('Based on {0:,} texts'.format(len(texts)))
fig, ax = plt.subplots(figsize=(15,10))
ax.imshow(wordcloud)
ax.axis('off')
plt.show()
Explanation: Wordcloud
Define the helper method
End of explanation
# Word cloud of the top 25 words I use based on the most recent 30,000 messages.
texts_from_me = fully_merged_messages_df[fully_merged_messages_df.is_from_me == 1].text[-30000:]
widgets.interact(
generate_cloud,
texts=widgets.fixed(texts_from_me),
max_words=widgets.IntSlider(min=5,max=50,step=1,value=10, continuous_update=False,
description='Max words to show:'))
Explanation: Texts you've sent
End of explanation
def _word_cloud_specific_contact(max_words, from_me, contact):
contact = convert_unicode_to_str_if_needed(contact)
if contact not in full_names:
print('{} not found'.format(contact))
return
sliced_df = fully_merged_messages_df[(fully_merged_messages_df.full_name == contact) &
(fully_merged_messages_df.is_from_me == from_me)].text
generate_cloud(sliced_df, max_words)
widgets.interact(
_word_cloud_specific_contact,
max_words=widgets.IntSlider(min=5, max=50, step=1, value=10,
continuous_update=False, description='Max words to show:'),
from_me=widgets.RadioButtons(
options={'Show messages FROM me': True, 'Show messages TO me': False}, description=' '),
contact=widgets.Text(value='Mom', description='Contact name:')
)
Explanation: Texts to/from a specific contact
End of explanation
# Note this requires an internet connection to load Google's JS library.
def get_json_for_word_tree(contact):
df = fully_merged_messages_df[(fully_merged_messages_df.full_name == contact)]
print('Exchanged {0:,} texts with {1}'.format(df.shape[0], contact))
array_for_json = [[text[1]] for text in df.text.iteritems()]
array_for_json.insert(0, [['Phrases']])
return json.dumps(array_for_json)
CONTACT_NAME = 'Mom'
ROOT_WORD = 'feel'
HTML(wordtree.get_word_tree_html(get_json_for_word_tree('Mom'),
ROOT_WORD.lower(),
lowercase=True,
tree_type='double'))
Explanation: Diving deeper into the actual text
Visualize a word tree of texts exchanged with a specific contact
End of explanation
punctuation = copy.copy(string.punctuation)
punctuation += u'“”‘’\ufffc\uff0c' # Include some UTF-8 punctuation that occurred.
punct_regex = re.compile(u'[{0}]'.format(punctuation))
spaces_regex = re.compile(r'\s{2,}')
numbers_regex = re.compile(r'\d+')
def clean_text(input_str):
processed = input_str.lower()
processed = punct_regex.sub('', processed)
# Also try: processed = numbers_regex.sub('_NUMBER_', processed)
processed = numbers_regex.sub('', processed)
processed = spaces_regex.sub(' ', processed)
return processed
# The normal stopwords list contains words like "i'll" which is unprocessed.
processed_stopwords = [clean_text(word) for word in STOPWORDS]
# Group the texts by person and collapse them into a single string per person.
grouped_by_name = fully_merged_messages_df[fully_merged_messages_df.is_from_me == 0].groupby(
'full_name')['text'].apply(lambda x: ' '.join(x)).to_frame()
grouped_by_name.info(memory_usage='deep')
grouped_by_name.head(1)
Explanation: Preprocessing and data munging for TFIDF
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
from nltk import tokenize
import numpy as np
vectorizer = TfidfVectorizer(preprocessor=clean_text,
tokenizer=tokenize.WordPunctTokenizer().tokenize,
stop_words=processed_stopwords,
ngram_range=(1, 2), max_df=.9, max_features=50000)
tfidf_transformed_dataset = vectorizer.fit_transform(grouped_by_name.text)
word_list = pd.Series(vectorizer.get_feature_names())
print('TFIDF sparse matrix is {0}MB'.format(tfidf_transformed_dataset.data.nbytes / 1024 / 1024))
print('TFIDF matrix has shape: {0}'.format(tfidf_transformed_dataset.shape))
Explanation: Create TFIDF matrix for all contacts
Note the methods below focus on texts received from these contacts, not texts you've sent to them.
End of explanation
def get_word_summary_for_contact(contact, top_n=25):
contact = convert_unicode_to_str_if_needed(contact)
tfidf_record = _get_tfidf_record_for_contact(contact)
if tfidf_record is None:
print('"{0}" was not found.'.format(contact))
return
sorted_indices = tfidf_record.argsort()[::-1]
return pd.DataFrame({'Word': word_list.iloc[sorted_indices[:top_n]]}).reset_index(drop=True)
def get_word_summary_for_diffs(contact, other_contact, top_n=25):
contact = convert_unicode_to_str_if_needed(contact)
other_contact = convert_unicode_to_str_if_needed(other_contact)
tfidf_record_contact = _get_tfidf_record_for_contact(contact)
tfidf_record_other_contact = _get_tfidf_record_for_contact(other_contact)
if tfidf_record_contact is None or tfidf_record_other_contact is None:
# Print out the first contact not found.
contact_not_found = contact if tfidf_record_contact is None else other_contact
print('"{0}" was not found.'.format(contact_not_found))
return
sorted_indices = (tfidf_record_contact - tfidf_record_other_contact).argsort()[::-1]
return pd.DataFrame({'Word': word_list.iloc[sorted_indices[:top_n]]}).reset_index(drop=True)
# Returns the row in the TFIDF matrix for a given contact by name.
def _get_tfidf_record_for_contact(contact):
if contact not in grouped_by_name.index:
return None
row = np.argmax(grouped_by_name.index == contact)
return tfidf_transformed_dataset.getrow(row).toarray().squeeze()
Explanation: Helper methods to leverage the TFIDF matrix
End of explanation
widgets.interact(
get_word_summary_for_contact,
contact=widgets.Text(value='Mom', description='Contact name:', placeholder='Enter name'),
top_n=widgets.IntSlider(min=10, max=100, step=1, value=5, description='Max words to show:')
)
Explanation: Words that identify a specific contact
End of explanation
widgets.interact(
get_word_summary_for_diffs,
contact=widgets.Text(description='1st Contact:', placeholder='Enter 1st name'),
other_contact=widgets.Text(description='2nd Contact:', placeholder='Enter 2nd name'),
top_n=widgets.IntSlider(description='Max words to show:', min=10, max=100, step=1, value=5)
)
Explanation: Words that identify the difference between two contacts
End of explanation
def top_words_by_year_from_tfidf(tfidf_by_year, years_as_list, top_n=15):
Returns a dataframe of the top words for each year by their TFIDF score.
To determine the "top", we look at one year's TFIDF - avg(other years' TFIDFs)
Args:
tfidf_by_year: TFIDF matrix with as many rows as entries in years_as_list
years_as_list: Years that are represented in the TFIDF matrix
top_n: Number of top words per year to include in the result
# Densify the tfidf matrix so we can operate on it.
tfidf_by_year_dense = tfidf_by_year.toarray()
df_by_year = []
for i in range(tfidf_by_year_dense.shape[0]):
this_year = years_as_list[i]
tfidf_this_year = tfidf_by_year_dense[i]
tfidf_other_years = np.delete(tfidf_by_year_dense, i, axis=0).mean(axis=0)
sorted_indices = (tfidf_this_year - tfidf_other_years).argsort()[::-1]
df = pd.DataFrame({this_year: word_list.iloc[sorted_indices[:top_n]]})
df = df.reset_index(drop=True)
df_by_year.append(df)
return pd.concat(df_by_year, axis=1)
def top_words_by_year_from_df(slice_of_texts_df, top_n=15, min_texts_required=100):
Returns a dataframe of the top words for each year by their TFIDF score.
Top is determined by the `top_words_by_year_from_tfidf` method.
Args:
slice_of_texts_df: A dataframe with the text messages to process
top_n: Number of top words per year to include in the result
min_texts_required: Number of texts to require in each year to not drop the record
grouped_by_year_tfidf, years = _tfidf_by_year(slice_of_texts_df, min_texts_required)
return top_words_by_year_from_tfidf(grouped_by_year_tfidf, years, top_n)
def _tfidf_by_year(slice_of_texts_df, min_texts_required=100):
Returns a TFIDF matrix of the texts grouped by year.
Years with less than `min_texts_required` texts will be dropped.
grouper = slice_of_texts_df.date.apply(lambda x: x.year)
grouped_by_year = slice_of_texts_df.groupby(grouper).apply(
lambda row: pd.Series({'count': len(row.date), 'text': ' '.join(row.text)})
)
# Drops years with less than min_texts_required texts since they won't be very meaningful.
years_to_drop = grouped_by_year[grouped_by_year['count'] < min_texts_required].index
print('Dropping year(s): {0}, each had fewer than {1} texts.'.format(
', '.join(str(year) for year in years_to_drop), min_texts_required))
grouped_by_year = grouped_by_year[grouped_by_year['count'] >= min_texts_required]
grouped_by_year.index.name = 'year'
if grouped_by_year.shape[0] == 0:
print('Bailing out, no years found with at least {0} texts.'.format(min_texts_required))
return None
grouped_by_year_tfidf = vectorizer.transform(grouped_by_year['text'])
print('Found {0} years with more than {1} texts each.'.format(grouped_by_year_tfidf.shape[0],
min_texts_required))
return grouped_by_year_tfidf, grouped_by_year.index
Explanation: Looking at language progression over the years
Helper methods for looking at TFIDF by year
End of explanation
top_words_by_year_from_df(fully_merged_messages_df[fully_merged_messages_df.is_from_me == 1],
top_n=15)
Explanation: My top words over the years
This offers an interesting insight into the main topics over the years.
End of explanation
# Wrapper method so we can use interact().
def _top_words_by_year_for_contact(contact, from_me, top_n):
contact = convert_unicode_to_str_if_needed(contact)
if contact not in full_names:
print('"{0}" not found'.format(contact))
return
# Slice to texts from/to the contact.
df = fully_merged_messages_df[(fully_merged_messages_df.is_from_me == from_me) &
(fully_merged_messages_df.full_name == contact)]
return top_words_by_year_from_df(df, top_n)
widgets.interact(
_top_words_by_year_for_contact,
contact=widgets.Text(value='Mom', description='Contact name:', placeholder='Enter name'),
from_me=widgets.RadioButtons(
options={'Show messages FROM me': True, 'Show messages TO me': False}, description=' '),
top_n=widgets.IntSlider(min=15, max=100, step=1, value=5, description='Max words to show:')
)
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
def _top_words_by_cluster_from_tfidf(
cluster_id,
tfidf_per_sender,
cluster_for_tfidf_index,
top_n=15,
):
Returns a dataframe of the top words for each cluster by their TFIDF score.
To determine the "top", we look at one cluster's TFIDF - avg(other clusters' TFIDFs)
Args:
cluster_id: The cluster we want to find the top words for (referred to as "given cluster")
tfidf_per_sender: TFIDF matrix with as many rows as entries in cluster_for_tfidf_index
cluster_for_tfidf_index: Cluster assignment for each entry in tfidf_per_sender
top_n: Number of top words per cluster to include in the result
# First, we separate the given cluster we want to consider from all other entries.
this_cluster_records = tfidf_per_sender[cluster_for_tfidf_index == cluster_id]
other_cluster_records = tfidf_per_sender[cluster_for_tfidf_index != cluster_id]
# Next, we calculate the mean for each: the given cluster and the rest of the corpus
mean_this_cluster = np.asarray(this_cluster_records.mean(axis=0)).squeeze()
mean_other_cluster = np.asarray(other_cluster_records.mean(axis=0)).squeeze()
# Finally, we identify the words for which the given cluster shows the biggest difference.
difference = mean_this_cluster - mean_other_cluster
most_different_indicies = difference.argsort()
# Only display top_n
return most_different_indicies[::-1][:top_n]
def _tfidf_by_sender(messages_df, min_texts_required=100):
Returns a TFIDF matrix of the texts grouped by sender.
Message exchanges with less than `min_texts_required` texts will be dropped.
# First we group messages by name, then we merge each conversation into one string.
grouped_by_name = messages_df.groupby("full_name").apply(
lambda row: pd.Series({'count': len(row.full_name), 'text': ' '.join(row.text)})
)
# Drop all conversations that don't meet the requirements for minimum number of messages.
grouped_by_name = grouped_by_name[grouped_by_name['count'] >= min_texts_required]
grouped_by_name.index.name = 'full_name'
# Bail if we have no data
if grouped_by_name.shape[0] == 0:
print('Bailing out, no conversations found with at least {0} texts.'.format(min_texts_required))
return None
grouped_by_name_tfidf = vectorizer.transform(grouped_by_name['text'])
print('Found {0} conversations with at least than {1} texts each.'.format(grouped_by_name_tfidf.shape[0],
min_texts_required))
return grouped_by_name_tfidf, grouped_by_name.index
# Get the TFIDF vector for each data point and the list of receivers.
tfidf_per_sender, names_sender = _tfidf_by_sender(fully_merged_messages_df[fully_merged_messages_df.is_from_me == 0])
# First, we reduce the dimensionality of the dataset.
# This reduces the difference between the clusters found by KMeans and the 2D graphic of the clusters.
tfidf_sender_reduced_dim = TruncatedSVD(n_components=7).fit_transform(tfidf_per_sender)
# Let's run KMeans clustering on the data.
NUMBER_OF_CLUSTERS = 7
kmeans_tfidf_sender = KMeans(n_clusters=NUMBER_OF_CLUSTERS)
tfidf_per_sender_cluster_assignment = kmeans_tfidf_sender.fit_transform(tfidf_sender_reduced_dim).argmin(axis=1)
# We further reduce the dimensionality of the data, so that we can graph it.
tfidf_per_sender_2d = TruncatedSVD(n_components=2).fit_transform(tfidf_sender_reduced_dim)
clustered_tfidf_by_sender_df = pd.DataFrame({
"x": tfidf_per_sender_2d[:,0],
"y": tfidf_per_sender_2d[:,1],
"name": names_sender,
"group": ["Cluster: " + str(e) for e in tfidf_per_sender_cluster_assignment],
})
clustered_tfidf_by_sender_df.head()
import plotly.offline as py
import plotly.figure_factory as ff
import plotly.graph_objs as go
py.init_notebook_mode(connected=True)
clusters = clustered_tfidf_by_sender_df.group.unique()
def plot_data(cluster_selection):
traces = []
top_words = None
if cluster_selection == "All":
clusters_to_plot = clusters
else:
clusters_to_plot = [cluster_selection]
top_words_indexes = _top_words_by_cluster_from_tfidf(
int(cluster_selection[-1]),
tfidf_per_sender,
tfidf_per_sender_cluster_assignment
)[0:10]
top_words = word_list.iloc[top_words_indexes].to_frame()
top_words.columns = ['Top Words In Cluster']
top_words = top_words.reset_index(drop=True)
for cluster in clusters_to_plot:
cluster_data = clustered_tfidf_by_sender_df[clustered_tfidf_by_sender_df.group == cluster]
scatter = go.Scatter(
x=cluster_data["x"],
y=cluster_data["y"],
text=cluster_data["name"],
mode = 'markers',
name=cluster
)
traces.append(scatter)
py.iplot(traces)
return top_words
cluster_selection = widgets.Dropdown(
options=["All"] + list(clusters),
value="All",
description="Cluster: "
)
print('We\'ve clustered your contacts by their word usage, hover over the dots to see which '
'cluster each person is in. Adjust the dropdown to restrict to a cluster.\nDots closer '
'to each other indicate the people talk similarly.')
widgets.interact(
plot_data,
cluster_selection=cluster_selection,
)
display(cluster_selection)
Explanation: Top words over the years from/to a specific contact
This offers an interesting insight into the main topics over the years.
End of explanation |
14,237 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Programming Assignment
Step1: Составление корпуса
Step2: Наша коллекция небольшая, и целиком помещается в оперативную память. Gensim может работать с такими данными и не требует их сохранения на диск в специальном формате. Для этого коллекция должна быть представлена в виде списка списков, каждый внутренний список соответствует отдельному документу и состоит из его слов. Пример коллекции из двух документов
Step3: У объекта dictionary есть полезная переменная dictionary.token2id, позволяющая находить соответствие между ингредиентами и их индексами.
Обучение модели
Вам может понадобиться документация LDA в gensim.
Задание 1. Обучите модель LDA с 40 темами, установив количество проходов по коллекции 5 и оставив остальные параметры по умолчанию.
Затем вызовите метод модели show_topics, указав количество тем 40 и количество токенов 10, и сохраните результат (топы ингредиентов в темах) в отдельную переменную. Если при вызове метода show_topics указать параметр formatted=True, то топы ингредиентов будет удобно выводить на печать, если formatted=False, будет удобно работать со списком программно. Выведите топы на печать, рассмотрите темы, а затем ответьте на вопрос
Step4: Фильтрация словаря
В топах тем гораздо чаще встречаются первые три рассмотренных ингредиента, чем последние три. При этом наличие в рецепте курицы, яиц и грибов яснее дает понять, что мы будем готовить, чем наличие соли, сахара и воды. Таким образом, даже в рецептах есть слова, часто встречающиеся в текстах и не несущие смысловой нагрузки, и поэтому их не желательно видеть в темах. Наиболее простой прием борьбы с такими фоновыми элементами — фильтрация словаря по частоте. Обычно словарь фильтруют с двух сторон
Step5: Задание 2. У объекта dictionary2 есть переменная dfs — это словарь, ключами которого являются id токена, а элементами — число раз, сколько слово встретилось во всей коллекции. Сохраните в отдельный список ингредиенты, которые встретились в коллекции больше 4000 раз. Вызовите метод словаря filter_tokens, подав в качестве первого аргумента полученный список популярных ингредиентов. Вычислите две величины
Step6: Сравнение когерентностей
Задание 3. Постройте еще одну модель по корпусу corpus2 и словарю dictionary2, остальные параметры оставьте такими же, как при первом построении модели. Сохраните новую модель в другую переменную (не перезаписывайте предыдущую модель). Не забудьте про фиксирование seed!
Затем воспользуйтесь методом top_topics модели, чтобы вычислить ее когерентность. Передайте в качестве аргумента соответствующий модели корпус. Метод вернет список кортежей (топ токенов, когерентность), отсортированных по убыванию последней. Вычислите среднюю по всем темам когерентность для каждой из двух моделей и передайте в функцию save_answers3.
Step7: Считается, что когерентность хорошо соотносится с человеческими оценками интерпретируемости тем. Поэтому на больших текстовых коллекциях когерентность обычно повышается, если убрать фоновую лексику. Однако в нашем случае этого не произошло.
Изучение влияния гиперпараметра alpha
В этом разделе мы будем работать со второй моделью, то есть той, которая построена по сокращенному корпусу.
Пока что мы посмотрели только на матрицу темы-слова, теперь давайте посмотрим на матрицу темы-документы. Выведите темы для нулевого (или любого другого) документа из корпуса, воспользовавшись методом get_document_topics второй модели
Step8: Также выведите содержимое переменной .alpha второй модели
Step9: У вас должно получиться, что документ характеризуется небольшим числом тем. Попробуем поменять гиперпараметр alpha, задающий априорное распределение Дирихле для распределений тем в документах.
Задание 4. Обучите третью модель
Step10: Таким образом, гиперпараметр alpha влияет на разреженность распределений тем в документах. Аналогично гиперпараметр eta влияет на разреженность распределений слов в темах.
LDA как способ понижения размерности
Иногда, распределения над темами, найденные с помощью LDA, добавляют в матрицу объекты-признаки как дополнительные, семантические, признаки, и это может улучшить качество решения задачи. Для простоты давайте просто обучим классификатор рецептов на кухни на признаках, полученных из LDA, и измерим точность (accuracy).
Задание 5. Используйте модель, построенную по сокращенной выборке с alpha по умолчанию (вторую модель). Составьте матрицу $\Theta = p(t|d)$ вероятностей тем в документах; вы можете использовать тот же метод get_document_topics, а также вектор правильных ответов y (в том же порядке, в котором рецепты идут в переменной recipes). Создайте объект RandomForestClassifier со 100 деревьями, с помощью функции cross_val_score вычислите среднюю accuracy по трем фолдам (перемешивать данные не нужно) и передайте в функцию save_answers5.
Step11: Для такого большого количества классов это неплохая точность. Вы можете попроовать обучать RandomForest на исходной матрице частот слов, имеющей значительно большую размерность, и увидеть, что accuracy увеличивается на 10–15%. Таким образом, LDA собрал не всю, но достаточно большую часть информации из выборки, в матрице низкого ранга.
LDA — вероятностная модель
Матричное разложение, использующееся в LDA, интерпретируется как следующий процесс генерации документов.
Для документа $d$ длины $n_d$
Step12: Интерпретация построенной модели
Вы можете рассмотреть топы ингредиентов каждой темы. Большиснтво тем сами по себе похожи на рецепты; в некоторых собираются продукты одного вида, например, свежие фрукты или разные виды сыра.
Попробуем эмпирически соотнести наши темы с национальными кухнями (cuisine). Построим матрицу $A$ размера темы $x$ кухни, ее элементы $a_{tc}$ — суммы $p(t|d)$ по всем документам $d$, которые отнесены к кухне $c$. Нормируем матрицу на частоты рецептов по разным кухням, чтобы избежать дисбаланса между кухнями. Следующая функция получает на вход объект модели, объект корпуса и исходные данные и возвращает нормированную матрицу $A$. Ее удобно визуализировать с помощью seaborn. | Python Code:
import json
with open("recipes.json") as f:
recipes = json.load(f)
print recipes[0]
Explanation: Programming Assignment:
Готовим LDA по рецептам
Как вы уже знаете, в тематическом моделировании делается предположение о том, что для определения тематики порядок слов в документе не важен; об этом гласит гипотеза «мешка слов». Сегодня мы будем работать с несколько нестандартной для тематического моделирования коллекцией, которую можно назвать «мешком ингредиентов», потому что она состоит из рецептов блюд разных кухонь. Тематические модели ищут слова, которые часто вместе встречаются в документах, и составляют из них темы. Мы попробуем применить эту идею к рецептам и найти кулинарные «темы». Эта коллекция хороша тем, что не требует предобработки. Кроме того, эта задача достаточно наглядно иллюстрирует принцип работы тематических моделей.
Для выполнения заданий, помимо часто используемых в курсе библиотек, потребуются модули json и gensim. Первый входит в дистрибутив Anaconda, второй можно поставить командой
pip install gensim
Построение модели занимает некоторое время. На ноутбуке с процессором Intel Core i7 и тактовой частотой 2400 МГц на построение одной модели уходит менее 10 минут.
Загрузка данных
Коллекция дана в json-формате: для каждого рецепта известны его id, кухня (cuisine) и список ингредиентов, в него входящих. Загрузить данные можно с помощью модуля json (он входит в дистрибутив Anaconda):
End of explanation
from gensim import corpora, models
import numpy as np
Explanation: Составление корпуса
End of explanation
texts = [recipe["ingredients"] for recipe in recipes]
dictionary = corpora.Dictionary(texts) # составляем словарь
corpus = [dictionary.doc2bow(text) for text in texts] # составляем корпус документов
print texts[0]
print corpus[0]
Explanation: Наша коллекция небольшая, и целиком помещается в оперативную память. Gensim может работать с такими данными и не требует их сохранения на диск в специальном формате. Для этого коллекция должна быть представлена в виде списка списков, каждый внутренний список соответствует отдельному документу и состоит из его слов. Пример коллекции из двух документов:
[["hello", "world"], ["programming", "in", "python"]]
Преобразуем наши данные в такой формат, а затем создадим объекты corpus и dictionary, с которыми будет работать модель.
End of explanation
np.random.seed(76543)
# здесь код для построения модели:
# обучение модели
%time ldamodel = models.LdaMulticore(corpus, id2word=dictionary, num_topics=40, passes=5, workers=1)
topics = ldamodel.show_topics(num_topics=40, num_words=10, formatted=False)
words = []
for topic in topics:
for word_prob in topic[1]:
word, _ = word_prob
words.append(word)
c_salt = words.count('salt')
print('"salt" counter: %d' % c_salt)
c_sugar = words.count('sugar')
print('"sugar" counter: %d' % c_sugar)
c_water = words.count('water')
print('"water" counter: %d' % c_water)
c_mushrooms = words.count('mushrooms')
print('"mushrooms" counter: %d' % c_mushrooms)
c_chicken = words.count('chicken')
print('"chicken" counter: %d' % c_chicken)
c_eggs = words.count('eggs')
print('"eggs" counter: %d' % c_eggs)
def save_answers1(c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs):
with open("cooking_LDA_pa_task1.txt", "w") as fout:
fout.write(" ".join([str(el) for el in [c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs]]))
save_answers1(c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs)
Explanation: У объекта dictionary есть полезная переменная dictionary.token2id, позволяющая находить соответствие между ингредиентами и их индексами.
Обучение модели
Вам может понадобиться документация LDA в gensim.
Задание 1. Обучите модель LDA с 40 темами, установив количество проходов по коллекции 5 и оставив остальные параметры по умолчанию.
Затем вызовите метод модели show_topics, указав количество тем 40 и количество токенов 10, и сохраните результат (топы ингредиентов в темах) в отдельную переменную. Если при вызове метода show_topics указать параметр formatted=True, то топы ингредиентов будет удобно выводить на печать, если formatted=False, будет удобно работать со списком программно. Выведите топы на печать, рассмотрите темы, а затем ответьте на вопрос:
Сколько раз ингредиенты "salt", "sugar", "water", "mushrooms", "chicken", "eggs" встретились среди топов-10 всех 40 тем? При ответе не нужно учитывать составные ингредиенты, например, "hot water".
Передайте 6 чисел в функцию save_answers1 и загрузите сгенерированный файл в форму.
У gensim нет возможности фиксировать случайное приближение через параметры метода, но библиотека использует numpy для инициализации матриц. Поэтому, по утверждению автора библиотеки, фиксировать случайное приближение нужно командой, которая написана в следующей ячейке. Перед строкой кода с построением модели обязательно вставляйте указанную строку фиксации random.seed.
End of explanation
import copy
dictionary2 = copy.deepcopy(dictionary)
Explanation: Фильтрация словаря
В топах тем гораздо чаще встречаются первые три рассмотренных ингредиента, чем последние три. При этом наличие в рецепте курицы, яиц и грибов яснее дает понять, что мы будем готовить, чем наличие соли, сахара и воды. Таким образом, даже в рецептах есть слова, часто встречающиеся в текстах и не несущие смысловой нагрузки, и поэтому их не желательно видеть в темах. Наиболее простой прием борьбы с такими фоновыми элементами — фильтрация словаря по частоте. Обычно словарь фильтруют с двух сторон: убирают очень редкие слова (в целях экономии памяти) и очень частые слова (в целях повышения интерпретируемости тем). Мы уберем только частые слова.
End of explanation
frequent_words_id = [num for num, cnt in dictionary2.dfs.iteritems() if cnt>=4000]
frequent_words = [dictionary2[num] for num in frequent_words_id]
frequent_words
dictionary2.filter_tokens(bad_ids=frequent_words_id)
dict_size_before = len(dictionary.dfs)
dict_size_after = len(dictionary2.dfs)
print('Original dictionary size: %d' % dict_size_before)
print('Reduced dictionary size: %d' % dict_size_after)
corpus2 = [dictionary2.doc2bow(text) for text in texts]
corpus_size_before = 0
for text in corpus:
corpus_size_before += len(text)
corpus_size_after = 0
for text in corpus2:
corpus_size_after += len(text)
print('Original corpus size: %d' % corpus_size_before)
print('Reduced corpus size: %d' % corpus_size_after)
def save_answers2(dict_size_before, dict_size_after, corpus_size_before, corpus_size_after):
with open("cooking_LDA_pa_task2.txt", "w") as fout:
fout.write(" ".join([str(el) for el in [dict_size_before, dict_size_after, corpus_size_before, corpus_size_after]]))
save_answers2(dict_size_before, dict_size_after, corpus_size_before, corpus_size_after)
Explanation: Задание 2. У объекта dictionary2 есть переменная dfs — это словарь, ключами которого являются id токена, а элементами — число раз, сколько слово встретилось во всей коллекции. Сохраните в отдельный список ингредиенты, которые встретились в коллекции больше 4000 раз. Вызовите метод словаря filter_tokens, подав в качестве первого аргумента полученный список популярных ингредиентов. Вычислите две величины: dict_size_before и dict_size_after — размер словаря до и после фильтрации.
Затем, используя новый словарь, создайте новый корпус документов, corpus2, по аналогии с тем, как это сделано в начале ноутбука. Вычислите две величины: corpus_size_before и corpus_size_after — суммарное количество ингредиентов в корпусе (для каждого документа вычислите число различных ингредиентов в нем и просуммируйте по всем документам) до и после фильтрации.
Передайте величины dict_size_before, dict_size_after, corpus_size_before, corpus_size_after в функцию save_answers2 и загрузите сгенерированный файл в форму.
End of explanation
np.random.seed(76543)
# здесь код для построения модели:
# обучение модели
%time ldamodel2 = models.LdaMulticore(corpus2, id2word=dictionary2, num_topics=40, passes=5, workers=1)
coherence = np.mean( [coh[1] for coh in ldamodel.top_topics(corpus)] )
print(coherence)
coherence2 = np.mean( [coh[1] for coh in ldamodel2.top_topics(corpus2)] )
print(coherence2)
def save_answers3(coherence, coherence2):
with open("cooking_LDA_pa_task3.txt", "w") as fout:
fout.write(" ".join(["%3f"%el for el in [coherence, coherence2]]))
save_answers3(coherence, coherence2)
Explanation: Сравнение когерентностей
Задание 3. Постройте еще одну модель по корпусу corpus2 и словарю dictionary2, остальные параметры оставьте такими же, как при первом построении модели. Сохраните новую модель в другую переменную (не перезаписывайте предыдущую модель). Не забудьте про фиксирование seed!
Затем воспользуйтесь методом top_topics модели, чтобы вычислить ее когерентность. Передайте в качестве аргумента соответствующий модели корпус. Метод вернет список кортежей (топ токенов, когерентность), отсортированных по убыванию последней. Вычислите среднюю по всем темам когерентность для каждой из двух моделей и передайте в функцию save_answers3.
End of explanation
print(ldamodel2.get_document_topics(corpus2[0]))
words_0 = [dictionary2[num] for num, _ in ldamodel2.get_document_topics(corpus2[0])]
words_0
Explanation: Считается, что когерентность хорошо соотносится с человеческими оценками интерпретируемости тем. Поэтому на больших текстовых коллекциях когерентность обычно повышается, если убрать фоновую лексику. Однако в нашем случае этого не произошло.
Изучение влияния гиперпараметра alpha
В этом разделе мы будем работать со второй моделью, то есть той, которая построена по сокращенному корпусу.
Пока что мы посмотрели только на матрицу темы-слова, теперь давайте посмотрим на матрицу темы-документы. Выведите темы для нулевого (или любого другого) документа из корпуса, воспользовавшись методом get_document_topics второй модели:
End of explanation
ldamodel2.alpha
Explanation: Также выведите содержимое переменной .alpha второй модели:
End of explanation
np.random.seed(76543)
# здесь код для построения модели:
# обучение модели
%time ldamodel3 = models.LdaMulticore(corpus2, id2word=dictionary2, alpha=1, num_topics=40, passes=5, workers=1)
print(ldamodel3.get_document_topics(corpus2[0]))
words_0 = [dictionary2[num] for num, _ in ldamodel3.get_document_topics(corpus2[0])]
words_0
count_model2 = 0
count_model3 = 0
for doc in corpus2:
count_model2 += len(ldamodel2.get_document_topics(doc, minimum_probability=0.01))
count_model3 += len(ldamodel3.get_document_topics(doc, minimum_probability=0.01))
print('Number of elements with probability higher than 0.2')
print('Model 2 (alpha="symmetric"): %d' % count_model2)
print('Model 3 (alpha=1): %d' % count_model3)
def save_answers4(count_model2, count_model3):
with open("cooking_LDA_pa_task4.txt", "w") as fout:
fout.write(" ".join([str(el) for el in [count_model2, count_model3]]))
save_answers4(count_model2, count_model3)
np.random.seed(76543)
# здесь код для построения модели:
# обучение модели
%time ldamodel4 = models.LdaMulticore(corpus2, id2word=dictionary2, alpha=1, passes=5, workers=1)
count_model4 = 0
for doc in corpus2:
count_model4 += len(ldamodel4.get_document_topics(doc, minimum_probability=0.01))
print('Number of elements with probability higher than 0.2')
print('Model 2 (alpha="symmetric"): %d' % count_model2)
print('Model 3 (alpha=1): %d' % count_model4)
save_answers4(count_model2, count_model4)
Explanation: У вас должно получиться, что документ характеризуется небольшим числом тем. Попробуем поменять гиперпараметр alpha, задающий априорное распределение Дирихле для распределений тем в документах.
Задание 4. Обучите третью модель: используйте сокращенный корпус (corpus2 и dictionary2) и установите параметр alpha=1, passes=5. Не забудьте про фиксацию seed! Выведите темы новой модели для нулевого документа; должно получиться, что распределение над множеством тем практически равномерное. Чтобы убедиться в том, что во второй модели документы описываются гораздо более разреженными распределениями, чем в третьей, посчитайте суммарное количество элементов, превосходящих 0.01, в матрицах темы-документы обеих моделей. Другими словами, запросите темы модели для каждого документа с параметром minimum_probability=0.01 и просуммируйте число элементов в получаемых массивах. Передайте две суммы (сначала для модели с alpha по умолчанию, затем для модели в alpha=1) в функцию save_answers4.
End of explanation
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import cross_val_score
def save_answers5(accuracy):
with open("cooking_LDA_pa_task5.txt", "w") as fout:
fout.write(str(accuracy))
Explanation: Таким образом, гиперпараметр alpha влияет на разреженность распределений тем в документах. Аналогично гиперпараметр eta влияет на разреженность распределений слов в темах.
LDA как способ понижения размерности
Иногда, распределения над темами, найденные с помощью LDA, добавляют в матрицу объекты-признаки как дополнительные, семантические, признаки, и это может улучшить качество решения задачи. Для простоты давайте просто обучим классификатор рецептов на кухни на признаках, полученных из LDA, и измерим точность (accuracy).
Задание 5. Используйте модель, построенную по сокращенной выборке с alpha по умолчанию (вторую модель). Составьте матрицу $\Theta = p(t|d)$ вероятностей тем в документах; вы можете использовать тот же метод get_document_topics, а также вектор правильных ответов y (в том же порядке, в котором рецепты идут в переменной recipes). Создайте объект RandomForestClassifier со 100 деревьями, с помощью функции cross_val_score вычислите среднюю accuracy по трем фолдам (перемешивать данные не нужно) и передайте в функцию save_answers5.
End of explanation
def generate_recipe(model, num_ingredients):
theta = np.random.dirichlet(model.alpha)
for i in range(num_ingredients):
t = np.random.choice(np.arange(model.num_topics), p=theta)
topic = model.show_topic(t, topn=model.num_terms)
topic_distr = [x[1] for x in topic]
terms = [x[0] for x in topic]
w = np.random.choice(terms, p=topic_distr)
print w
Explanation: Для такого большого количества классов это неплохая точность. Вы можете попроовать обучать RandomForest на исходной матрице частот слов, имеющей значительно большую размерность, и увидеть, что accuracy увеличивается на 10–15%. Таким образом, LDA собрал не всю, но достаточно большую часть информации из выборки, в матрице низкого ранга.
LDA — вероятностная модель
Матричное разложение, использующееся в LDA, интерпретируется как следующий процесс генерации документов.
Для документа $d$ длины $n_d$:
1. Из априорного распределения Дирихле с параметром alpha сгенерировать распределение над множеством тем: $\theta_d \sim Dirichlet(\alpha)$
1. Для каждого слова $w = 1, \dots, n_d$:
1. Сгенерировать тему из дискретного распределения $t \sim \theta_{d}$
1. Сгенерировать слово из дискретного распределения $w \sim \phi_{t}$.
Подробнее об этом в Википедии.
В контексте нашей задачи получается, что, используя данный генеративный процесс, можно создавать новые рецепты. Вы можете передать в функцию модель и число ингредиентов и сгенерировать рецепт :)
End of explanation
import pandas
import seaborn
from matplotlib import pyplot as plt
%matplotlib inline
def compute_topic_cuisine_matrix(model, corpus, recipes):
# составляем вектор целевых признаков
targets = list(set([recipe["cuisine"] for recipe in recipes]))
# составляем матрицу
tc_matrix = pandas.DataFrame(data=np.zeros((model.num_topics, len(targets))), columns=targets)
for recipe, bow in zip(recipes, corpus):
recipe_topic = model.get_document_topics(bow)
for t, prob in recipe_topic:
tc_matrix[recipe["cuisine"]][t] += prob
# нормируем матрицу
target_sums = pandas.DataFrame(data=np.zeros((1, len(targets))), columns=targets)
for recipe in recipes:
target_sums[recipe["cuisine"]] += 1
return pandas.DataFrame(tc_matrix.values/target_sums.values, columns=tc_matrix.columns)
def plot_matrix(tc_matrix):
plt.figure(figsize=(10, 10))
seaborn.heatmap(tc_matrix, square=True)
# Визуализируйте матрицу
Explanation: Интерпретация построенной модели
Вы можете рассмотреть топы ингредиентов каждой темы. Большиснтво тем сами по себе похожи на рецепты; в некоторых собираются продукты одного вида, например, свежие фрукты или разные виды сыра.
Попробуем эмпирически соотнести наши темы с национальными кухнями (cuisine). Построим матрицу $A$ размера темы $x$ кухни, ее элементы $a_{tc}$ — суммы $p(t|d)$ по всем документам $d$, которые отнесены к кухне $c$. Нормируем матрицу на частоты рецептов по разным кухням, чтобы избежать дисбаланса между кухнями. Следующая функция получает на вход объект модели, объект корпуса и исходные данные и возвращает нормированную матрицу $A$. Ее удобно визуализировать с помощью seaborn.
End of explanation |
14,238 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What's new in PyKE 3?
Developed since 2012, PyKE offers a user-friendly way to inspect and analyze the pixels and lightcurves obtained by NASA's Kepler and K2.
The latest version of PyKE, v3.1, was released in January 2018 and adds a new object-oriented Python API which is intended to aid the development of custom pipelines and tools by the community.
Step1: Introducing a generic LightCurve class
The most notable change is the introduction of a generic LightCurve class which provides operations that are intended to suit time series data from any astronomical survey. A light curve is simply instantiated as follows
Step2: A LightCurve object provides easy access to a range of common operations, such as fold(), flatten(), remove_outliers(), cdpp(), plot(), and more. To demonstrate these operations, let's create a LightCurve object from a KeplerLightCurveFile we obtain from the data archive at MAST
Step3: Now lc is a LightCurve object on which you can run operations. For example, we can plot it
Step4: We can access several of the metadata properties
Step5: We can access the time and flux as arrays
Step6: We don't particularly care about the long-term trends, so let's use a Savitzky-Golay filter to flatten the lightcurve
Step7: We can also compute the CDPP noise metric
Step8: Target Pixel File (TPF)
PyKE 3.1 includes class called KeplerTargetPixelFile which is used to handle target pixel files
Step9: A KeplerTargetPixelFile can be instantiated either from a local file or a url
Step10: Additionally, we can mask out cadences that are flagged using the quality_bitmask argument in the constructor
Step11: Furthermore, we can mask out pixel values using the aperture_mask argument. The default behaviour is to use
all pixels that have real values. This argument can also get a string value 'kepler-pipeline', in which case the default aperture used by Kepler's pipeline is applied.
Step12: The TPF objects stores both data and a few metadata information, e.g., channel number, EPIC number, reference column and row, module, and shape. The whole header is also available
Step13: The pixel fluxes time series can be accessed using the flux property
Step14: This shows that our TPF is a 35 x 35 image recorded over 3209 cadences.
One can visualize the pixel data at a given cadence using the plot method
Step15: We can perform aperture photometry using the method to_lightcurve
Step16: Let's see how the previous light curve compares against the 'SAP_FLUX' produced by Kepler's pipeline. For that, we are going to explore the KeplerLightCurveFile class
Step17: Now, let's correct this light curve using by fitting cotrending basis vectors. That can be achived either with the KeplerCBVCorrector class or the compute_cotrended_lightcurve in KeplerLightCurveFile. Let's try the latter
Step18: Utility functions
PyKE has included two convinience functions to convert between module.output to channel and vice-versa
Step19: PyKE 3.1 includes KeplerQualityFlags class which encodes the meaning of the Kepler QUALITY bitmask flags as documented in the Kepler Archive Manual (Table 2.3)
Step20: It also can handle multiple flags
Step21: A few quality flags are already computed
Step22: Pixel Response Function (PRF) Photometry
PyKE 3.1 also includes tools to perform PRF Photometry
Step23: For that, let's create a SceneModel which will be fitted to the object of the following TPF
Step24: We also need to define prior distributions on the parameters of our SceneModel model. Those parameters are
the flux, center positions of the target, and a constant background level. We can do that with oktopus | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
from matplotlib import rcParams
rcParams["figure.figsize"] = (14, 5)
Explanation: What's new in PyKE 3?
Developed since 2012, PyKE offers a user-friendly way to inspect and analyze the pixels and lightcurves obtained by NASA's Kepler and K2.
The latest version of PyKE, v3.1, was released in January 2018 and adds a new object-oriented Python API which is intended to aid the development of custom pipelines and tools by the community.
End of explanation
from pyke import LightCurve
lc = LightCurve(time=[1, 2, 3], flux=[78.4, 79.6, 76.5])
Explanation: Introducing a generic LightCurve class
The most notable change is the introduction of a generic LightCurve class which provides operations that are intended to suit time series data from any astronomical survey. A light curve is simply instantiated as follows:
End of explanation
from pyke import KeplerLightCurveFile
lcfile = KeplerLightCurveFile("https://archive.stsci.edu/missions/kepler/lightcurves/0119/011904151/kplr011904151-2010009091648_llc.fits")
lc = lcfile.SAP_FLUX
Explanation: A LightCurve object provides easy access to a range of common operations, such as fold(), flatten(), remove_outliers(), cdpp(), plot(), and more. To demonstrate these operations, let's create a LightCurve object from a KeplerLightCurveFile we obtain from the data archive at MAST:
End of explanation
lc.plot()
Explanation: Now lc is a LightCurve object on which you can run operations. For example, we can plot it:
End of explanation
lc.keplerid
lc.channel
lc.quarter
Explanation: We can access several of the metadata properties:
End of explanation
lc.time[:10]
lc.flux[:10]
Explanation: We can access the time and flux as arrays:
End of explanation
detrended_lc, _ = lc.flatten(polyorder=1)
detrended_lc.plot()
folded_lc = detrended_lc.fold(period=0.837495, phase=0.92)
folded_lc.plot();
Explanation: We don't particularly care about the long-term trends, so let's use a Savitzky-Golay filter to flatten the lightcurve:
End of explanation
lc.cdpp()
Explanation: We can also compute the CDPP noise metric:
End of explanation
from pyke import KeplerTargetPixelFile
Explanation: Target Pixel File (TPF)
PyKE 3.1 includes class called KeplerTargetPixelFile which is used to handle target pixel files:
End of explanation
tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c14/'
'200100000/82000/ktwo200182949-c14_lpd-targ.fits.gz')
Explanation: A KeplerTargetPixelFile can be instantiated either from a local file or a url:
End of explanation
tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c14/'
'200100000/82000/ktwo200182949-c14_lpd-targ.fits.gz',
quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)
Explanation: Additionally, we can mask out cadences that are flagged using the quality_bitmask argument in the constructor:
End of explanation
tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c14/'
'200100000/82000/ktwo200182949-c14_lpd-targ.fits.gz',
aperture_mask='kepler-pipeline',
quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)
tpf.aperture_mask
Explanation: Furthermore, we can mask out pixel values using the aperture_mask argument. The default behaviour is to use
all pixels that have real values. This argument can also get a string value 'kepler-pipeline', in which case the default aperture used by Kepler's pipeline is applied.
End of explanation
tpf.header(ext=0)
Explanation: The TPF objects stores both data and a few metadata information, e.g., channel number, EPIC number, reference column and row, module, and shape. The whole header is also available:
End of explanation
tpf.flux.shape
Explanation: The pixel fluxes time series can be accessed using the flux property:
End of explanation
tpf.plot(frame=1)
Explanation: This shows that our TPF is a 35 x 35 image recorded over 3209 cadences.
One can visualize the pixel data at a given cadence using the plot method:
End of explanation
lc = tpf.to_lightcurve()
plt.figure(figsize=[17, 4])
plt.plot(lc.time, lc.flux)
Explanation: We can perform aperture photometry using the method to_lightcurve:
End of explanation
from pyke.lightcurve import KeplerLightCurveFile
klc = KeplerLightCurveFile('https://archive.stsci.edu/missions/k2/lightcurves/'
'c14/200100000/82000/ktwo200182949-c14_llc.fits',
quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)
sap_lc = klc.SAP_FLUX
plt.figure(figsize=[17, 4])
plt.plot(lc.time, lc.flux)
plt.plot(sap_lc.time, sap_lc.flux)
plt.ylabel('Flux (e-/s)')
plt.xlabel('Time (BJD - 2454833)')
Explanation: Let's see how the previous light curve compares against the 'SAP_FLUX' produced by Kepler's pipeline. For that, we are going to explore the KeplerLightCurveFile class:
End of explanation
klc_corrected = klc.compute_cotrended_lightcurve(cbvs=range(1, 17))
plt.figure(figsize=[17, 4])
plt.plot(klc_corrected.time, klc_corrected.flux)
plt.ylabel('Flux (e-/s)')
plt.xlabel('Time (BJD - 2454833)')
pdcsap_lc = klc.PDCSAP_FLUX
plt.figure(figsize=[17, 4])
plt.plot(klc_corrected.time, klc_corrected.flux)
plt.plot(pdcsap_lc.time, pdcsap_lc.flux)
plt.ylabel('Flux (e-/s)')
plt.xlabel('Time (BJD - 2454833)')
Explanation: Now, let's correct this light curve using by fitting cotrending basis vectors. That can be achived either with the KeplerCBVCorrector class or the compute_cotrended_lightcurve in KeplerLightCurveFile. Let's try the latter:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from pyke.utils import module_output_to_channel, channel_to_module_output
module_output_to_channel(module=19, output=3)
channel_to_module_output(67)
Explanation: Utility functions
PyKE has included two convinience functions to convert between module.output to channel and vice-versa:
End of explanation
from pyke.utils import KeplerQualityFlags
KeplerQualityFlags.decode(1)
Explanation: PyKE 3.1 includes KeplerQualityFlags class which encodes the meaning of the Kepler QUALITY bitmask flags as documented in the Kepler Archive Manual (Table 2.3):
End of explanation
KeplerQualityFlags.decode(1 + 1024 + 1048576)
Explanation: It also can handle multiple flags:
End of explanation
KeplerQualityFlags.decode(KeplerQualityFlags.DEFAULT_BITMASK)
KeplerQualityFlags.decode(KeplerQualityFlags.CONSERVATIVE_BITMASK)
Explanation: A few quality flags are already computed:
End of explanation
from pyke.prf import PRFPhotometry, SceneModel, SimpleKeplerPRF
Explanation: Pixel Response Function (PRF) Photometry
PyKE 3.1 also includes tools to perform PRF Photometry:
End of explanation
tpf = KeplerTargetPixelFile('https://archive.stsci.edu/missions/k2/target_pixel_files/c14/'
'201500000/43000/ktwo201543306-c14_lpd-targ.fits.gz',
quality_bitmask=KeplerQualityFlags.CONSERVATIVE_BITMASK)
tpf.plot(frame=100)
scene = SceneModel(prfs=[SimpleKeplerPRF(channel=tpf.channel, shape=tpf.shape[1:],
column=tpf.column, row=tpf.row)])
Explanation: For that, let's create a SceneModel which will be fitted to the object of the following TPF:
End of explanation
from oktopus.prior import UniformPrior
unif_prior = UniformPrior(lb=[0, 1090., 706., 0.],
ub=[1e5, 1096., 712., 1e5])
scene.plot(*unif_prior.mean)
prf_phot = PRFPhotometry(scene_model=scene, prior=unif_prior)
results = prf_phot.fit(tpf.flux + tpf.flux_bkg)
plt.imshow(prf_phot.residuals[1], origin='lower')
plt.colorbar()
flux = results[:, 0]
xcenter = results[:, 1]
ycenter = results[:, 2]
bkg_density = results[:, 3]
plt.figure(figsize=[17, 4])
plt.plot(tpf.time, flux)
plt.ylabel('Flux (e-/s)')
plt.xlabel('Time (BJD - 2454833)')
plt.figure(figsize=[17, 4])
plt.plot(tpf.time, xcenter)
plt.ylabel('Column position')
plt.xlabel('Time (BJD - 2454833)')
plt.figure(figsize=[17, 4])
plt.plot(tpf.time, ycenter)
plt.ylabel('Row position')
plt.xlabel('Time (BJD - 2454833)')
plt.figure(figsize=[17, 4])
plt.plot(tpf.time, bkg_density)
plt.ylabel('Background density')
plt.xlabel('Time (BJD - 2454833)')
Explanation: We also need to define prior distributions on the parameters of our SceneModel model. Those parameters are
the flux, center positions of the target, and a constant background level. We can do that with oktopus:
End of explanation |
14,239 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="images/logo.jpg" style="display
Step2: <p style="text-align
Step3: <p style="text-align
Step4: <p style="text-align
Step5: <p style="text-align
Step6: <p style="text-align
Step7: <p style="text-align
Step8: <p style="text-align
Step9: <p style="text-align
Step10: <p style="text-align
Step11: <p style="text-align
Step12: <p style="text-align
Step13: <figure>
<img src="images/try_except_flow_full.svg?v=5" style="width
Step14: <p style="text-align
Step15: <p style="text-align
Step16: <span style="text-align
Step17: <p style="text-align
Step18: <span style="text-align
Step19: <p style="text-align
Step21: <div class="align-center" style="display
Step22: <span style="text-align
Step23: <p style="text-align
Step26: <span style="text-align
Step28: <p style="text-align
Step29: <p style="text-align
Step30: <p style="text-align
Step31: <span style="text-align | Python Code:
import os
import zipfile
Explanation: <img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="Logo of the Python learning project: a cartoon snake in yellow and blue winding between the letters of the course name, Learning Python. The slogan above the course name reads: a free project for learning programming in Hebrew.">
<span style="text-align: left; direction: ltr; float: left;">Exceptions – Part 2</span>
<span style="text-align: left; direction: ltr; float: left; clear: both;">Introduction</span>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
    In the previous notebook we faced exceptions for the first time.<br>
    We learned to break error messages down into their components and extract useful information from them, took a closer look at how the Traceback works, and discussed the different exception types in Python.<br>
    We also met the keywords <code>try</code> and <code>except</code> for the first time, and learned how to use them to handle exceptions.
</p>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
    We talked about how handling exceptions can prevent the program from crashing, and also pointed out that it is worth choosing carefully which exceptions to handle.<br>
    We made it clear that handling exceptions indiscriminately may create "silent failures" that Python will not report and that will be hard to track down.
</p>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
    Finally, we showed that errors in Python are simply instances created from a class that represents the exception type.<br>
    We saw how to get access to that instance from within the <code>except</code>, and looked at the impressive inheritance tree of Python's exception types.
</p>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
    In this notebook we will keep learning about exception handling.<br>
    By the end of it you will be able to raise an exception yourself and create exception types of your own.<br>
    On top of that, you will learn about more advanced capabilities related to exception handling in Python, and about good working habits for anything involving exceptions.
</p>
<span style="text-align: left; direction: ltr; float: left; clear: both;">Cleaning Up</span>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
    Sometimes it is important to make sure that a line of code runs no matter what, even if everything around it goes up in flames.<br>
    Usually this happens when we open some resource (a file, a connection to a website) and need to delete or close that resource when the operation is done.<br>
    In such cases, we want that line to run even if an exception was raised while the code was executing.
</p>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
    As an example, let's try to compress all the images in the images directory into an archive using the <var>zipfile</var> module.<br>
    There is nothing to worry about: the module is fairly intuitive and easy to use.<br>
    All we need to do is create a <var>ZipFile</var> instance and call its <var>write</var> method to add files to the archive.<br>
    If you feel comfortable doing so, this is a good time to write the solution yourself. If not, make sure you fully understand the following cells.
</p>
<p style="text-align: left; direction: ltr; float: left; clear: both;">
    We'll start by importing the relevant modules:
</p>
End of explanation
def get_file_paths_from_folder(folder):
    """Yield paths for all the files in `folder`."""
for file in os.listdir(folder):
path = os.path.join(folder, file)
yield path
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
וככלי עזר, נכתוב generator שמקבל כפרמטר נתיב לתיקייה, ומחזיר את הנתיב לכל הקבצים שבה:
</p>
End of explanation
def zip_folder(folder_name):
our_zipfile = zipfile.ZipFile('images.zip', 'w')
for file in get_file_paths_from_folder(folder_name):
our_zipfile.write(file)
our_zipfile.close()
zip_folder('images')
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
עכשיו נכתוב פונקציה שיוצרת קובץ ארכיון חדש, מוסיפה אליו את הקבצים שבתיקיית התמונות וסוגרת את קובץ הארכיון:
</p>
End of explanation
def zip_folder(folder_name):
our_zipfile = zipfile.ZipFile('images.zip', 'w')
try:
for file in get_file_paths_from_folder(folder_name):
our_zipfile.write(file)
except Exception as error:
print(f"Critical failure occurred: {error}.")
our_zipfile.close()
zip_folder('NON_EXISTING_DIRECTORY')
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
אבל מה יקרה אם תיקיית התמונות גדולה במיוחד ונגמר לנו המקום בזיכרון של המחשב?<br>
מה יקרה אם אין לנו גישה לאחד הקבצים והקריאה של אותו קובץ תיכשל?<br>
נטפל במקרים שבהם פייתון תתריע על חריגה:
</p>
End of explanation
try:
1 / 0
finally:
print("+-----------------+")
print("| Executed anyway |")
print("+-----------------+")
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
התא למעלה מפר עיקרון חשוב שדיברנו עליו:<br>
עדיף שלא לתפוס את החריגה אם לא יודעים בדיוק מה הסוג שלה, למה היא התרחשה וכיצד לטפל בה.<br>
אבל רגע! אם לא נתפוס את החריגה, כיצד נוודא שהקוד שלנו סגר את קובץ הארכיון באופן מסודר לפני שהתוכנה קרסה?
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
זה הזמן להכיר את מילת המפתח <code>finally</code>, שבאה אחרי ה־<code>except</code> או במקומו.<br>
השורות שכתובות ב־<code>finally</code> יתבצעו <em>תמיד</em>, גם אם הקוד קרס בגלל חריגה.<br>
שימוש ב־<code>finally</code> ייראה כך:
</p>
End of explanation
def stubborn_finally_example():
try:
return True
finally:
print("This line will be executed anyway.")
stubborn_finally_example()
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
שימו לב שאף על פי שהקוד שנמצא בתוך ה־<code>try</code> קרס, ה־<code>finally</code> התבצע.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
למעשה, <code>finally</code> עקשן כל כך שהוא יתבצע אפילו אם היה <code>return</code>:
</p>
End of explanation
def zip_folder(folder_name):
our_zipfile = zipfile.ZipFile('images.zip', 'w')
try:
for file in get_file_paths_from_folder(folder_name):
our_zipfile.write(file)
finally:
our_zipfile.close()
print(f"Is our_zipfiles closed?... {our_zipfile}")
zip_folder('images')
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
נשתמש במנגנון הזה כדי לוודא שקובץ הארכיון באמת ייסגר בסופו של דבר, ללא תלות במה שיקרה בדרך:
</p>
End of explanation
zip_folder('NO_SUCH_DIRECTORY')
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ונבדוק שזה יעבוד גם אם נספק תיקייה לא קיימת, לדוגמה:
</p>
End of explanation
def zip_folder(folder_name):
our_zipfile = zipfile.ZipFile('images.zip', 'w')
try:
for file in get_file_paths_from_folder(folder_name):
our_zipfile.write(file)
except FileNotFoundError as err:
print(f"Critical error: {err}.\nArchive is probably incomplete.")
finally:
our_zipfile.close()
print(f"Is our_zipfiles closed?... {our_zipfile}")
zip_folder('NO_SUCH_DIRECTORY')
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
יופי! עכשיו כשראינו התרעה על חריגת <var>FileNotFoundError</var> כשמשתמש הכניס נתיב לא תקין לתיקייה, ראוי שנטפל בה:
</p>
End of explanation
def read_file(path):
try:
princess = open(path, 'r')
except FileNotFoundError as err:
print(f"Can't find file '{path}'.\n{err}.")
return None
else:
text = princess.read()
princess.close()
return text
print(read_file('resources/castle.txt'))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
יותר טוב!<br>
היתרון בצורת הכתיבה הזו הוא שגם אם תהיה התרעה על חריגה שאינה מסוג <var>FileNotFoundError</var> והתוכנה תקרוס,<br>
נוכל להיות בטוחים שקובץ הארכיון נסגר כראוי.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">הכול בסדר</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
עד כה למדנו על 3 מילות מפתח שקשורות במנגנון לטיפול בחריגות של פייתון: <code>try</code>, <code>except</code> ו־<code>finally</code>.<br>
אלו רעיונות מרכזיים בטיפול בחריגות, ותוכלו למצוא אותם בצורות כאלו ואחרות בכל שפת תכנות עכשווית שמאפשרת טיפול בחריגות.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אלא שבפייתון ישנה מילת מפתח נוספת שהתגנבה למנגנון הטיפול בחריגות: <code>else</code>.<br>
תחת מילת המפתח הזו יופיעו פעולות שנרצה לבצע רק אם הקוד שב־<code>try</code> רץ במלואו בהצלחה,<br>
או במילים אחרות: באף שלב לא הייתה התרעה על חריגה; אף לא <code>except</code> אחד התבצע.
</p>
End of explanation
def read_file(path):
try:
princess = open(path, 'r')
text = princess.read()
princess.close()
return text
except FileNotFoundError as err:
print(f"Can't find file '{path}'.\n{err}.")
return None
print(read_file('resources/castle.txt'))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
"אבל רגע", ישאלו חדי העין מביניכם.<br>
"הרי המטרה היחידה של <code>else</code> היא להריץ קוד אם הקוד שב־<code>try</code> רץ עד סופו,<br>
אז למה שלא פשוט נכניס אותו כבר לתוך ה־<code>try</code>, מייד אחרי הקוד שרצינו לבצע?"<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
וזו שאלה שיש בה היגיון רב –<br>
הרי קוד שקורס ב־<code>try</code> ממילא גורם לכך שהקוד שנמצא אחריו ב־<code>try</code> יפסיק לרוץ.<br>
אז למה לא פשוט לשים שם את קוד ההמשך? מה רע בקטע הקוד הבא?
</p>
End of explanation
def read_file(path):
try:
princess = open(path, 'r')
text = princess.read()
except (FileNotFoundError, PermissionError) as err:
print(f"Can't find file '{path}'.\n{err}.")
text = None
else:
princess.close()
finally:
return text
print(read_file('resources/castle.txt3'))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
ההבדל הוא רעיוני בעיקרו.<br>
המטרה שלנו היא להעביר את הרעיון שמשתקף מהקוד שלנו לקוראו בצורה נהירה יותר, קצת כמו בספר טוב.<br>
מילת המפתח <code>else</code> תעזור לקורא להבין איפה חשבנו שעשויה להיות ההתרעה על החריגה,<br>
ואיפה אנחנו רוצים להמשיך ולהריץ קוד פייתון שקשור לאותו קוד.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
ישנו יתרון נוסף בהפרדת הקוד ל־<code>try</code> ול־<code>else</code> –<br>
השיטה הזו עוזרת לנו להפריד בין הקוד שבו ייתפסו התרעות על חריגות, לבין הקוד שירוץ אחריו ושבו לא יטופלו חריגות.<br>
כיוון שהשורות שנמצאות בתוך ה־<code>else</code> לא נמצאות בתוך ה־<code>try</code>, פייתון לא תתפוס התרעות על חריגות שהתרחשו במהלך הרצתן.<br>
שיטה זו עוזרת לנו ליישם את כלל האצבע שמורה לנו לתפוס התרעות על חריגות באופן ממוקד – <br>
בעזרת <code>else</code> לא נתפוס התרעות על חריגות בקוד שבו לא התכוונו מלכתחילה לתפוס התרעות על חריגות.
</p>
<div class="align-center" style="display: flex; text-align: right; direction: rtl; clear: both;">
<div style="display: flex; width: 10%; float: right; clear: both;">
<img src="images/exercise.svg" style="height: 50px !important;" alt="תרגול">
</div>
<div style="width: 70%">
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כתבו פונקציה בשם <var>print_item</var> שמקבלת כפרמטר ראשון רשימה, וכפרמטר שני מספר ($n$).<br>
הפונקציה תדפיס את האיבר ה־$n$־י ברשימה.<br>
טפלו בכל ההתרעות על חריגות שעלולות להיווצר בעקבות הרצת הפונקציה.
</p>
</div>
<div style="display: flex; width: 20%; border-right: 0.1rem solid #A5A5A5; padding: 1rem 2rem;">
<p style="text-align: center; direction: rtl; justify-content: center; align-items: center; clear: both;">
<strong>חשוב!</strong><br>
פתרו לפני שתמשיכו!
</p>
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לסיכום, ניצור קטע קוד שמשתמש בכל מילות המפתח שלמדנו בהקשר של טיפול בחריגות:
</p>
End of explanation
raise ValueError("Just an example.")
Explanation: <figure>
<img src="images/try_except_flow_full.svg?v=5" style="width: 700px; margin-right: auto; margin-left: auto; text-align: center;" alt="בתמונה יש תרשים זרימה המציג כיצד פייתון קוראת את הקוד במבנה try-except-else-finally. התרשים בסגנון קומיקסי עם אימוג'ים. החץ ממנו נכנסים לתרשים הוא 'התחל ב־try' עם סמלון של דגל מרוצים, שמוביל לעיגול שבו כתוב 'הרץ את השורה המוזחת הבאה בתוך ה־try'. מתוך עיגול זה יש חץ לעצמו, שבו כתוב 'אין התראה על חריגה' עם סמלון של וי ירוק, וחץ נוסף שבו כתוב 'אין שורות נוספות ב־try' עם סמלון של וי ירוק שמוביל לעיגול 'הרץ את השורות המוזחות בתוך else, אם יש כזה'. מעיגול זה יוצא חץ נוסף ל'הרץ את השורות המוזחות בתוך finally, אם יש כאלו'. מהעיגול האחרון שהוזכר יוצא חץ כלפי מטה לכיוון מלל עם דגל מרוצים שעליו כתוב 'סוף'. מהעיגול הראשון שהוזכר, 'הרץ את השורה המוזחת הבאה בתוך ה־try', יוצא גם חץ שעליו כתוב 'התרעה על חריגה' עם סמלון של פיצוץ, ומוביל לעיגול שבו כתוב 'חפש except עם סוג החריגה'. מעיגול זה יוצאים שני חצים: הראשון 'לא קיים' (החץ אדום מקווקו), עם סמלון של איקס אדום שמוביל לעיגול ללא מוצא בו כתוב 'זרוק התרעה על חריגה', שמוביל (בעזרת חץ אדום מקווקו) לשרשרת עיגולים ללא מוצא. בראשון כתוב 'הרץ את השורות המוזחות בתוך finally, אם יש כזה', והוא מצביע בעזרת חץ אדום מקווקו על עיגול נוסף בו כתוב 'חדול מהרצת התוכנית'. על החץ השני שיוצא מ'חפש except עם סוג החריגה' כתוב 'קיים' עם סמלון של וי ירוק, והוא מוביל לעיגול 'הרץ את השורות המוזחות בתוך ה־except'. ממנו יש חץ לעיגול שתואר מקודם, 'הרץ את השורות המוזחות בתוך ה־finally, אם יש כזה', ומוביל לכיתוב 'סוף הטיפול בשגיאות. המשך בהרצת התוכנית.' עם דגל מרוץ. כל החצים באיור ירוקים פרט לחצים שהוזכרו כאדומים."/>
<figcaption style="margin-top: 2rem; text-align: center; direction: rtl;">
תרשים זרימה המציג כיצד פייתון קוראת את הקוד במבנה <code>try</code>, <code>except</code>, <code>else</code>, <code>finally</code>.
</figcaption>
</figure>
<span style="text-align: right; direction: rtl; float: right; clear: both;">תרגיל ביניים: פותחים שעון</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כתבו פונקציה בשם <var>estimate_read_time</var>, שמקבלת נתיב לקובץ, ומודדת בתוך כמה זמן פייתון קוראת את הקובץ.<br>
על הפונקציה להוסיף לקובץ בשם log.txt שורה שבה כתוב את שם הקובץ שניסיתם לקרוא, ובתוך כמה שניות פייתון קראה את הקובץ.<br>
הפונקציה תטפל בכל מקרי הקצה ובהתרעות על חריגות שבהם היא עלולה להיתקל.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">יצירת התרעה על חריגה</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
עד כה התמקדנו בטיפול בהתרעות על חריגות שעלולות להיווצר במהלך ריצת התוכנית.<br>
בהגיענו לכתוב תוכניות גדולות יותר שמתכנתים אחרים ישתמשו בהן, לעיתים קרובות נרצה ליצור בעצמנו התרעות על חריגות.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
התרעה על חריגה, כפי שלמדנו, היא דרך לדווח למתכנת שמשהו בעייתי התרחש בזמן ריצת התוכנית.<br>
נוכל ליצור התרעות כאלו בעצמנו, כדי להודיע על בעיות אפשריות למתכנתים שמשתמשים בקוד שלנו.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
יצירת התרעה על חריגה היא עניין פשוט למדי שמורכב מ־3 חלקים:<br>
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>שימוש במילת המפתח <code>raise</code>.</li>
<li>ציון סוג החריגה שעליה אנחנו הולכים להתריע – <var>ValueError</var>, לדוגמה.</li>
<li>בסוגריים אחרי כן – הודעה שתתאר למתכנת שישתמש בקוד את הבעיה.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
זה ייראה כך:
</p>
End of explanation
def _check_time_fields(hour, minute, second, microsecond, fold):
if not 0 <= hour <= 23:
raise ValueError('hour must be in 0..23', hour)
if not 0 <= minute <= 59:
raise ValueError('minute must be in 0..59', minute)
if not 0 <= second <= 59:
raise ValueError('second must be in 0..59', second)
if not 0 <= microsecond <= 999999:
raise ValueError('microsecond must be in 0..999999', microsecond)
if fold not in (0, 1):
raise ValueError('fold must be either 0 or 1', fold)
return hour, minute, second, microsecond, fold
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
נראה דוגמה לקוד אמיתי שמממש התרעה על חריגה.<br>
<a href="https://github.com/python/cpython/blob/578c3955e0222ec7b3146197467fbb0fcfae12fe/Lib/datetime.py#L397">הקוד הבא</a> לקוח מהמודול <var>datetime</var>, והוא רץ בכל פעם <a href="https://github.com/python/cpython/blob/578c3955e0222ec7b3146197467fbb0fcfae12fe/Lib/datetime.py#L1589">שמבקשים ליצור</a> מופע חדש של תאריך.<br>
שימו לב כיצד יוצר המודול בודק את כל אחד מחלקי התאריך, ואם הערך חורג מהטווח שהוגדר – הוא מתריע על חריגה עם הודעת חריגה ממוקדת:
</p>
End of explanation
DAYS = [
'Sunday', 'Monday', 'Tuesday', 'Wednesday',
'Thursday', 'Friday', 'Saturday',
]
def get_day_by_number(number):
try:
return DAYS[number - 1]
except IndexError:
raise ValueError("The number parameter must be between 1 and 7.")
for i in range(1, 9):
print(get_day_by_number(i))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
מטרת הפונקציה היא להבין אם השעה שהועברה ל־<var>datetime</var> תקינה.<br>
בפונקציה, בודקים אם השעה היא מספר בטווח 0–23, אם מספר הדקות הוא מספר בטווח 0–59 וכן הלאה.<br>
אם אחד התנאים לא מתקיים – מתריעים למתכנת שניסה ליצור את מופע התאריך על חריגה.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
הקוד משתמש בתעלול מבורך – ביצירת מופע ממחלקה של חריגה, אפשר להשתמש ביותר מפרמטר אחד.<br>
הפרמטר הראשון תמיד יוקדש להודעת השגיאה, אבל אפשר להשתמש בשאר הפרמטרים כדי להעביר מידע נוסף על החריגה.<br>
בדרך כלל מעבירים שם מידע על הערכים שגרמו לבעיה, או את הערכים עצמם.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">תרגיל ביניים: סכו"ם</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
בתור רשת לכלי עבודה אתם מנסים לספור את מלאי ה<b>ס</b>ולמות, <b>כ</b>רסומות <b>ומ</b>חרטות שקיימים אצלכם.<br>
כתבו מחלקה שמייצגת חנות (<var>Store</var>), ולה 3 תכונות:<br>
מספר הסולמות (<var>ladders</var>), מספר הכרסומות (<var>millings</var>) ומספר המחרטות (<var>lathes</var>) במלאי.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כתבו פונקציה בשם <var>count_inventory</var> שמקבלת רשימת מופעים של חנויות, ומחזירה את מספר הפריטים הכולל במלאי.<br>
צרו התרעות על חריגות במידת הצורך, בין אם במחלקה ובין אם בפונקציה.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">טכניקות בניהול חריגות</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">מיקוד החריגה</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
טכניקה מעניינת שמשתמשים בה מדי פעם היא ניסוח מחדש של התרעה על חריגה.<br>
נבחר לנהוג כך כשהניסוח מחדש יעזור לנו למקד את מי שישתמש בקוד שלנו.<br>
בטכניקה הזו נתפוס בעזרת <code>try</code> חריגה מסוג מסוים, וב־<code>except</code> ניצור התרעה חדשה על חריגה עם הודעת שגיאה משלנו.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נראה דוגמה:
</p>
End of explanation
ADDRESS_BOOK = {
'Padfoot': '12 Grimmauld Place, London, UK',
'Jerry': 'Apartment 5A, 129 West 81st Street, New York, New York',
'Clark': '344 Clinton St., Apt. 3B, Metropolis, USA',
}
def get_address_by_name(name):
try:
return ADDRESS_BOOK[name]
except KeyError as err:
with open('errors.txt', 'a') as errors:
errors.write(str(err))
raise KeyError(str(err))
for name in ('Padfoot', 'Clark', 'Jerry', 'The Ink Spots'):
print(get_address_by_name(name))
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">טיפול והתרעה</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
טכניקה נוספת היא ביצוע פעולות מסוימות במהלך ה־<code>except</code>, והתרעה על החריגה מחדש.<br>
השימוש בטכניקה הזו נפוץ מאוד.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
שימוש בה הוא מעין סיפור קצר בשלושה חלקים:
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>תופסים את החריגה.</li>
<li>מבצעים פעולות רלוונטיות כמו:
<ul>
<li>מתעדים את התרחשות החריגה במקום חיצוני, כמו קובץ, או אפילו מערכת ייעודית לניהול שגיאות.</li>
<li>מבטלים את הפעולות שכן הספקנו לעשות לפני שהייתה התרעה על חריגה.</li>
</ul>
</li>
<li>מקפיצים מחדש את החריגה – את אותה חריגה בדיוק או אחת מדויקת יותר.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לדוגמה:
</p>
End of explanation
def get_address_by_name(name):
try:
return ADDRESS_BOOK[name]
except KeyError as err:
with open('errors.txt', 'a') as errors:
errors.write(str(err))
raise
for name in ('Padfoot', 'Clark', 'Jerry', 'The Ink Spots'):
print(get_address_by_name(name))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
למעשה, הרעיון של התרעה מחדש על חריגה הוא כה נפוץ, שמפתחי פייתון יצרו עבורו מעין קיצור.<br>
אם אתם נמצאים בתוך <code>except</code> ורוצים לזרוק בדיוק את החריגה שתפסתם, פשוט כתבו <code>raise</code> בלי כלום אחריו:
</p>
End of explanation
class AddressUnknownError(Exception):
pass
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">יצירת חריגה משלנו</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
בתוכנות גדולות במיוחד נרצה ליצור סוגי חריגות משלנו.<br>
נוכל לעשות זאת בקלות אם נירש ממחלקה קיימת שמייצגת חריגה:
</p>
End of explanation
def get_address_by_name(name):
try:
return ADDRESS_BOOK[name]
except KeyError:
raise AddressUnknownError(f"Can't find the address of {name}.")
for name in ('Padfoot', 'Clark', 'Jerry', 'The Ink Spots'):
print(get_address_by_name(name))
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
בשלב זה, נוכל להתריע על חריגה בעזרת סוג החריגה שיצרנו:
</p>
End of explanation
class DrunkUserError(Exception):
    """Exception raised for errors in the input."""
def __init__(self, name, bac, *args, **kwargs):
super().__init__(*args, **kwargs)
self.name = name
self.bac = bac # Blood Alcohol Content
def __str__(self):
return (
f"{self.name} must not drriiiive!!! @_@"
f"\nBAC: {self.bac}"
)
def start_driving(username, blood_alcohol_content):
if blood_alcohol_content > 0.024:
raise DrunkUserError(username, blood_alcohol_content)
return True
start_driving("Kipik", 0.05)
Explanation: <div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; float: right; ">
<img src="images/tip.png" style="height: 50px !important;" alt="טיפ!" title="טיפ!">
</div>
<div style="width: 90%;">
נהוג לסיים את שמות המחלקות המייצגות חריגה במילה <em>Error</em>.
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
זכרו שהירושה כאן משפיעה על הדרך שבה תטופל החריגה שלכם.<br>
אם, נניח, <var>AddressUnknownError</var> הייתה יורשת מ־<var>KeyError</var>, ולא מ־<var>Exception</var>,<br>
זה אומר שכל מי שהיה עושה <code>except KeyError</code> היה תופס גם חריגות מסוג <var>AddressUnknownError</var>.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
יש לא מעט יתרונות ליצירת שגיאות משל עצמנו:
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li>המתכנתים שמשתמשים בפונקציה יכולים לתפוס התרעות ספציפיות יותר.</li>
<li>הקוד הופך לבהיר יותר עבור הקורא ועבור מי שמקבל את ההתרעה על החריגה.</li>
<li>בזכות רעיון הירושה, אפשר לספק לחריגות הללו התנהגות מותאמת אישית.</li>
</ol>
<div class="align-center" style="display: flex; text-align: right; direction: rtl;">
<div style="display: flex; width: 10%; ">
<img src="images/deeper.svg?a=1" style="height: 50px !important;" alt="העמקה" title="העמקה">
</div>
<div style="width: 90%">
<p style="text-align: right; direction: rtl;">
כבכל ירושה, תוכלו לדרוס את הפעולות <code>__init__</code> ו־<code>__str__</code> של מחלקת־העל שממנה ירשתם.<br>
דריסה כזו תספק לכם גמישות רבה בהגדרת החריגות שיצרתם ובשימוש בהן.
</p>
</div>
</div>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נראה דוגמה קצרצרה ליצירת חריגה מותאמת אישית:
</p>
End of explanation
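A tiny supplementary sketch of the inheritance point made above (the class name MissingRecordError is hypothetical, not part of the course material): because except matches subclasses, an exception class that inherits from KeyError is also caught by an except KeyError block.

class MissingRecordError(KeyError):
    pass

try:
    raise MissingRecordError("record 42 not found")
except KeyError as err:
    # MissingRecordError inherits from KeyError, so this handler catches it as well
    print(f"Caught as KeyError: {err!r}")
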
def get_nth_char(string, n):
n = n - 1 # string[0] is the first char (n = 1)
if isinstance(string, (str, bytes)) and n < len(string):
return string[n]
return ''
print(get_nth_char("hello", 1))
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">נימוסים והליכות</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
טיפול בחריגות היא הדרך הטובה ביותר להגיב על התרחשויות לא סדירות ולנהל אותן בקוד הפייתון שאנחנו כותבים.<br>
כפי שכבר ראינו במחברות קודמות, בכלים מורכבים ומתקדמים יש יותר מקום לטעויות, וקווים מנחים יעזרו לנו להתנהל בצורה נכונה.<br>
נעבור על כמה כללי אצבע ורעיונות מועילים שיקלו עליכם לעבוד נכון עם חריגות:
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">טיפול ממוקד</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
באופן כללי, נעדיף להיות כמה שיותר ממוקדים בטיפול בחריגות.<br>
כשאנחנו מטפלים בחריגה, אנחנו יוצאים מנקודת הנחה שאנחנו יודעים מה הבעיה וכיצד יש לטפל בה.<br>
לדוגמה, אם משתמש הזין ערך שלא נתמך בקוד שלנו, נרצה לעצור את קריסת התוכנית ולבקש ממנו להזין ערך מתאים.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לא נרצה, לדוגמה, לתפוס התרעות על חריגות שלא התכוונו לתפוס מלכתחילה.<br>
אנחנו מעוניינים לטפל רק בבעיות שאנחנו יודעים שעלולות להתרחש.<br>
אם ישנה בעיה שאנחנו לא יודעים עליה – אנחנו מעדיפים שפייתון תצעק כדי שנדע שהיא קיימת.<br>
"השתקה" של בעיות שאנחנו לא יודעים על קיומן היא פתח לתקלים בלתי צפויים וחמורים אף יותר.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
בקוד, הנקודה הזו תבוא לידי ביטוי כשנכתוב אחרי ה־<code>except</code> את רשימת סוגי החריגות שבהן נטפל.<br>
נשתדל שלא לטפל ב־<var>Exception</var>, משום שאז נתפוס כל סוג חריגה שיורש ממנה (כמעט כולם).<br>
נשתדל גם לא לדחוס אחרי ה־<code>except</code> סוגי חריגות שאנחנו לא יודעים אם הם רלוונטיים או לא.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
יתרה מזאת, טיפול בשגיאות יתבצע רק על קוד שאנחנו יודעים שעלול לגרום להתרעה על חריגה.<br>
קוד שלא קשור לחריגה שהולכת להתרחש – לא יהיה חלק מהליך הטיפול בשגיאות.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
בקוד, הנקודה הזו תבוא לידי ביטוי בכך שבתוך ה־<code>try</code> יוזחו כמה שפחות שורות קוד.<br>
תחת ה־<code>try</code> נכתוב אך ורק את הקוד שעלול להתריע על חריגה, ושום דבר מעבר לו.<br>
כך נדע שאנחנו לא תופסים בטעות חריגות שלא התכוונו לתפוס מלכתחילה.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">חריגות הן עבור המתכנת</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אנחנו מעוניינים שהמתכנת שישתמש בקוד יקבל התרעות על חריגות שיבהירו לו מהן הבעיות בקוד שכתב, ויאפשרו לו לטפל בהן.<br>
אם כתבנו מודול או פונקציה שמתכנת אחר הולך להשתמש בה, לדוגמה, נקפיד ליצור התרעות על חריגות שיעזרו לו לנווט בקוד שלנו.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לעומת המתכנת, אנחנו שואפים שמי שישתמש בתוכנית (הלקוח של המוצר, נניח) לעולם לא יצטרך להתמודד עם התרעות על חריגות.<br>
התוכנית לא אמורה לקרוס בגלל חריגה אף פעם, אלא לטפל בחריגה ולחזור לפעולה תקינה.<br>
אם החריגה קיצונית ומחייבת את הפסקת הריצה של התוכנית, עלינו לפעול בצורה אחראית:<br>
נבצע שמירה מסודרת של כמה שיותר פרטים על הודעת השגיאה, נסגור חיבורים למשאבים, נמחק קבצים שיצרנו ונכבה את התוכנה בצורה מסודרת.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">EAFP או LBYL</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
בכל הקשור לשפות תכנות, ישנן שתי גישות נפוצות לטיפול במקרי קצה בתוכנית.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
הגישה הראשונה נקראת <var>LBYL</var>, או Look Before You Leap ("הסתכל לפני שאתה קופץ").<br>
גישה זו דוגלת בבדיקת השטח לפני ביצוע כל פעולה.<br>
הפעולה תתבצע לבסוף, רק כשנהיה בטוחים שהרצתה חוקית ולא גורמת להתרעה על חריגה.<br>
קוד שכתב מי שדוגל בשיטה הזו מתאפיין בשימוש תדיר במילת המפתח <code>if</code>.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
הגישה השנייה נקראת <var>EAFP</var>, או Easier to Ask for Forgiveness than Permission ("קל יותר לבקש סליחה מלבקש רשות").<br>
גישה זו דוגלת בביצוע פעולות מבלי לבדוק לפני כן את היתכנותן, ותפיסה של התרעה על חריגה אם היא מתרחשת.<br>
קוד שכתב מי שדוגל בשיטה הזו מתאפיין בשימוש תדיר במבני <code>try-except</code>.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נראה שתי דוגמאות להבדלים בגישות.<br>
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">דוגמה 1: מספר תו במחרוזת</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נכתוב פונקציה שמקבלת מחרוזת ומיקום ($n$), ומחזירה את התו במיקום ה־$n$־י במחרוזת.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לפניכם הקוד בגישת LBYL, ובו אנחנו מנסים לבדוק בזהירות אם אכן מדובר במחרוזת, ואם יש בה לפחות $n$ תווים.<br>
רק אחרי שאנחנו מוודאים שכל דרישות הקדם מתקיימות, אנחנו ניגשים לבצע את הפעולה.
</p>
End of explanation
def get_nth_char(string, n):
try:
return string[n - 1]
except (IndexError, TypeError) as e:
print(e)
return ''
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
והנה אותו קוד בגישת EAFP. הפעם פשוט ננסה לאחזר את התו, ונסמוך על מבנה ה־<code>try-except</code> שיתפוס עבורנו את החריגות:
</p>
End of explanation
import os
import pathlib
def is_path_writable(filepath):
    """Return if the path is writable."""
path = pathlib.Path(filepath)
directory = path.parent
is_dir_writable = directory.is_dir() and os.access(directory, os.W_OK)
is_exists = path.exists()
is_file_writable = path.is_file() and os.access(path, os.W_OK)
return is_dir_writable and ((not is_exists) or is_file_writable)
def write_textfile(filepath, text):
    """Safely write `text` to `filepath`."""
if is_path_writable(filepath):
with open(filepath, 'w', encoding='utf-8') as f:
f.write(text)
return True
return False
write_textfile("not_worms.txt", "What the holy hand grenade was that?")
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">דוגמה 2: כתיבה לקובץ</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
נתכנת פונקציה שמקבלת נתיב לקובץ וטקסט, וכותבת את הטקסט לקובץ.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
הנה הקוד בגישת LBYL, ובו אנחנו מנסים לבדוק בזהירות אם הקובץ אכן בטוח לכתיבה.<br>
רק אחרי שאנחנו מוודאים שיש לנו גישה אליו, שאכן מדובר בקובץ ושאפשר לכתוב אליו, אנחנו מבצעים את הכתיבה לקובץ.
</p>
End of explanation
import os
import pathlib
def write_textfile(filepath, text):
    """Safely write `text` to `filepath`."""
try:
with open(filepath, 'w', encoding='utf-8') as f:
f.write(text)
except (ValueError, OSError) as e:
print(e)
return False
return True
write_textfile("not_worms.txt", "What the holy hand grenade was that?")
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
והנה אותו קוד בגישת EAFP. הפעם פשוט ננסה לכתוב לקובץ, ונסמוך על מבנה ה־<code>try-except</code> שיתפוס עבורנו את החריגות:
</p>
End of explanation
try:
# Code
...
except Exception:
pass
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
מתכנתי פייתון נוטים יותר לתכנות בגישת EAFP.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">אחריות אישית</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
טיפול בחריגה ימנע מהתוכנה לקרוס, ועשוי להחביא את העובדה שהייתה בעיה בזרימת התוכנית.<br>
לרוב זה מצוין ובדיוק מה שאנחנו רוצים, אבל מתכנתים בתחילת דרכם עלולים להתפתות לנצל את העובדה הזו יתר על המידה.<br>
לפניכם דוגמה לקטע קוד שחניכים רבים משתמשים בו בתחילת דרכם:
</p>
End of explanation
# Example 1
class PhoneNumberNotFound(Exception):
pass
# Example 2
def get_key(d, k, default=None):
try:
return d[k]
except:
return default
# Example 3
def write_file(path, text):
try:
f = open(path, 'w')
f.write(text)
f.close()
except IOError:
pass
# Example 4
PHONEBOOK = {'867-5309': 'Jenny'}
def get_name_by_phone(phonebook, phone_number):
if phone_number not in phonebook:
raise ValueError("person_number not in phonebook")
return phonebook[phone_number]
phone_number = input("Hi Mr. User!\nEnter phone:")
get_name_by_phone(PHONEBOOK, phone_number)
# Example 5
def my_sum(items):
try:
total = 0
for element in items:
total = total + element
return total
except TypeError:
return 0
Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;">
הטריק הזה נקרא "השתקת חריגות".<br>
ברוב המוחלט של המקרים זה לא מה שאנחנו רוצים.<br>
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
השתקת החריגה עלולה לגרום לתקל בהמשך ריצת התוכנית, ויהיה לנו קשה מאוד לאתר אותו בעתיד.<br>
פעמים רבות השתקה שכזו מעידה על כך שהחריגה נתפסה מוקדם מדי.<br>
במקרים כאלו, עדיף לטפל בהתרעה על החריגה בפונקציה שקראה למקום שבו התרחשה ההתרעה על החריגה.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
אם תגיעו למצב שבו אתם משתיקים חריגות, עצרו ושאלו את עצמכם אם זה הפתרון הטוב ביותר.<br>
לרוב, עדיף יהיה לטפל בהתרעה על החריגה ולהביא את התוכנה למצב תקין,<br>
או לפחות לשמור את פרטי ההתרעה לקובץ המתעד את ההתרעות על החריגות שהתרחשו בזמן ריצת התוכנה.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">תרגילים</span>
<span style="text-align: right; direction: rtl; float: right; clear: both;">באנו להנמיך</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לפניכם דוגמאות קוד מחרידות להפליא.<br>
תקנו אותן כך שיתאימו לנימוסים והליכות שלמדנו בסוף המחברת.<br>
היעזרו באינטרנט במידת הצורך.
</p>
End of explanation
def digest(key, data):
S = list(range(256))
j = 0
for i in range(256):
j = (j + S[i] + ord(key[i % len(key)])) % 256
S[i], S[j] = S[j], S[i]
j = 0
y = 0
for char in data:
j = (j + 1) % 256
y = (y + S[j]) % 256
S[j], S[y] = S[y], S[j]
yield chr(ord(char) ^ S[(S[j] + S[y]) % 256])
def decrypt(key, message):
return ''.join(digest(key, message))
Explanation: <span style="text-align: right; direction: rtl; float: right; clear: both;">באנו להרים</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כתבו פונקציה המיועדת למתכנתים בחברת "The Syndicate".<br>
הפונקציה תקבל כפרמטרים נתיב לקובץ (<var>filepath</var>) ומספר שורה (<var>line_number</var>).<br>
הפונקציה תחזיר את מה שכתוב בקובץ שנתיבו הוא <var>filepath</var> בשורה שמספרה הוא <var>line_number</var>.<br>
נהלו את השגיאות היטב. בכל פעם שישנה התרעה על חריגה, כתבו אותה לקובץ log.txt עם חותמת זמן וההודעה.
</p>
<span style="text-align: right; direction: rtl; float: right; clear: both;">ילד שלי מוצלח</span>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
צפנת פענח ניסה להעביר ליוליוס פואמה מעניינת שכתב.<br>
בניסיוננו להתחקות אחר עקבותיו של צפנת פענח, ניסינו לשים את ידינו על המסר – אך גילינו שהוא מוצפן.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
בתיקיית resources מצורפים שני קבצים: users.txt ו־passwords.txt.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כל שורה בקובץ users.txt נראית כך:
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
העמודה הראשונה מייצגת את מספר המשתמש, העמודה השנייה מייצגת את שמו ושאר העמודות מייצגות פרטים מזהים עליו.<br>
העמודות מופרדות בתו |.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כל שורה בקובץ בקובץ passwords.txt נראית כך:
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
שתי העמודות הראשונות הן מספרי המשתמש, כפי שהם מוגדרים ב־users.txt.<br>
העמודה השלישית היא סיסמת ההתקשרות ביניהם.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
כתבו את הפונקציות הבאות:
</p>
<ol style="text-align: right; direction: rtl; float: right; clear: both;">
<li><var>load_file</var> – טוענת קובץ טבלאי שהשורה הראשונה שבו היא כותרת, והעמודות שבו מופרדת זו מזו בתו |.<br>
הפונקציה תחזיר רשימה של מילונים. כל מילון ברשימה ייצג שורה בקובץ. המפתחות של כל מילון יהיו שמות השדות מהכותרת.</li>
<li><var>get_user_id</var> – שמקבלת את שם המשתמש, ומחזירה את מספר המשתמש שלו.</li>
<li><var>get_password</var> – שמקבלת שני מספרים סידוריים של משתמשים ומחזירה את סיסמת ההתקשרות בינם.</li>
<li><var>decrypt_file</var> – שמקבלת מפתח ונתיב לקובץ, ומפענחת אותו באמצעות הפונקציה <var>decrypt</var>.</li>
</ol>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
לצורך פתרון החידה, מצאו את סיסמת ההתקשרות של המשתמשים Zaphnath Paaneah ו־Gaius Iulius Caesar.<br>
פענחו בעזרתה את המסר הסודי שבקובץ message.txt.
</p>
<p style="text-align: right; direction: rtl; float: right; clear: both;">
השתמשו בתרגיל כדי לתרגל את מה שלמדתם בנושא טיפול בחריגות.
</p>
End of explanation |
14,240 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create the reference solar abundance file
Here we join the Asplund+ (2009) data with other sources of information to create our output solar_abundances.fits file.
Step1: Read in the Asplund+ (2009) data
For convenience, cull to Z < 75.
Step2: Add column for references
Step3: Simple text list of element names
Step4: Oxygen...issues.
Step5: Adopt the Steffen+ (2015) oxygen abundance
Step6: Adopt the Amarsi+ (2018) oxygen abundance
Amarsi+ (2018) have questioned the results of Steffen+ (2015). Their derived abundance is equivalent to the original Asplund version. It appears the solar oxygen abundance is still in question.
Step7: Add mean atomic mass
Step8: Now we have all the information we want in the combined table atoms.
Step9: Write to output file
Step10: Mean metal abundances by mass
Step11: Calculate X, Y, Z | Python Code:
output_file = 'solarabundances.fits'
import matplotlib.pyplot as plt
import numpy as np
from astropy.io import fits
from astropy.table import Table,Column,join
Explanation: Create the reference solar abundance file
Here we join the Asplund+ (2009) data with other sources of information to create our output solar_abundances.fits file.
End of explanation
fl = 'asplund2009_abundances.txt'
asplund = Table.read(fl,format='ascii',comment=';')
asplund.remove_column('DIFF_PH_MET')
#Excise Z>75 data
gd = (asplund['Z'] <= 75)
solar = asplund[gd]
Explanation: Read in the Asplund+ (2009) data
For convenience, cull to Z < 75.
End of explanation
reference_column = Column(['Asplund+ (2009)']*np.size(solar), name='Reference', dtype='U30')
solar.add_column(reference_column)
Explanation: Add column for references:
End of explanation
element_names=np.chararray.strip(solar['ELEMENT'])
Explanation: Simple text list of element names:
End of explanation
o_indx = np.where((element_names.lower() == 'O'.lower()))
solar[o_indx]
Explanation: Oxygen...issues.
End of explanation
steffen = [8.76, 0.02]
solar['BEST'][o_indx] = steffen[0] # Steffen abundance
solar['ERR'][o_indx] = steffen[1]
solar['PHOTO'][o_indx] = steffen[0] # Steffen abundance
solar['PHOTO_ERR'][o_indx] = steffen[1]
solar['Reference'][o_indx] = 'Steffen+ (2015)'
#Total mass:
mmm = (10.**(solar['PHOTO'][0]-12.)/0.7381)
#Old oxygen mass:
print("Old O mass fraction: {0:0.5f}".format((10.**(asplund['PHOTO'][o_indx]-12.)*16./mmm)[0]))
#New oxygen mass:
print("New O mass fraction: {0:0.5f}".format((10.**(solar['BEST'][o_indx]-12.)*16./mmm)[0]))
solar[o_indx]
Explanation: Adopt the Steffen+ (2015) oxygen abundance
End of explanation
amarsi = [8.69, 0.03]
solar['BEST'][o_indx] = amarsi[0] # Amarsi abundance
solar['ERR'][o_indx] = amarsi[1]
solar['PHOTO'][o_indx] = amarsi[0] # Amarsi abundance
solar['PHOTO_ERR'][o_indx] = amarsi[1]
solar['Reference'][o_indx] = 'Amarsi+ (2018);Asplund+ (2009)'
#Total mass:
mmm = (10.**(solar['PHOTO'][0]-12.)/0.7381)
#Old oxygen mass:
print("Old O mass fraction: {0:0.5f}".format((10.**(asplund['PHOTO'][o_indx]-12.)*16./mmm)[0]))
#New oxygen mass:
print("New O mass fraction: {0:0.5f}".format((10.**(solar['BEST'][o_indx]-12.)*16./mmm)[0]))
solar[o_indx]
Explanation: Adopt the Amarsi+ (2018) oxygen abundance
Amarsi+ (2018) have questioned the results of Steffen+ (2015). Their derived abundance is equivalent to the original Asplund version. It appears the solar oxygen abundance is still in question.
End of explanation
iso = Table.read('isotopes.dat',format='ascii',comment=';;')
iso.keep_columns(['Z','AtomicWeight'])
atoms = join(solar,iso,keys='Z')
Explanation: Add mean atomic mass
End of explanation
atoms[0:5]
Explanation: Now we have all the information we want in the combined table atoms.
End of explanation
atoms.write(output_file,overwrite=True)
Explanation: Write to output file
End of explanation
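A quick, optional verification sketch, assuming the cell above has just written solarabundances.fits to the working directory; the column names used here (Z, ELEMENT, BEST, ERR, Reference) are the ones defined earlier in this notebook.

from astropy.table import Table

check = Table.read('solarabundances.fits')
oxygen = check[check['Z'] == 8]  # the oxygen row
print(oxygen['ELEMENT', 'BEST', 'ERR', 'Reference'])
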
# The abundance by mass:
eps = (10.**(atoms['BEST']-12.)*atoms['AtomicWeight']).data
total_mass = eps.sum()
plt.plot(atoms['Z'],np.log10(eps/total_mass),'ro-',markersize=4,lw=1);
Explanation: Mean metal abundances by mass
End of explanation
X = eps[0]/total_mass
Y = eps[1]/total_mass
zzz = (atoms['Z'] > 2)
Z = eps[zzz].sum()/total_mass
print("\nMean abundances by mass: X=H, Y=He, Z=metals; mean mass per H atom, µ.\n")
print("\tX = {0:0.4f}".format(X))
print("\tY = {0:0.4f}".format(Y))
print("\tZ = {0:0.4f}".format(Z))
print("\tµ = {0:0.4f}".format(1./X))
Explanation: Calculate X, Y, Z
End of explanation |
14,241 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Post a photo
Step1: Post a photo to an album
Step2: Retrieve the existing albums in my account
Step3: Exercise 1 – Create a photo album called Cursos. Then add the 4 images that are available in the course folder.
Important – The description (name) of each image must be the image's file name, without the extension (.jpg or .png).
Use the built-in library called os. To list every file in the current folder, use the os.listdir('.') method. Below is an example that saves all the files in the current folder into a list.
```python
import os
arquivos = os.listdir('.')
print(arquivos)
['.ipynb_checkpoints', 'Analise-Exploratoria.png', 'aula6-parte1.ipynb', 'aula6-parte2.ipynb', 'aula6-parte3.ipynb', 'Banner-Iot.png', 'banners_hadoop01.png', 'Extensao-Big-Data-01.jpg', 'fia.jpg', 'Untitled.ipynb', 'Untitled1.ipynb', 'Untitled2.ipynb']
``` | Python Code:
import facebook
access_token = 'EAACUzLmOZC7kBAPfCPMRBG23rGoY3iQWKJMIO7ESZCp0LPZCwQQv0AoQeEtBm9IyNDi5yP2RHMGzCzjquLb4ZCWUHLZA6vY1Pp6x8oFXZA7IMissQbporZAwUoIZCuZBoOBrWQDxi8PUUZCb96uWmSwB2ZBqEIwnvZCRZBnqJjGZBQJVhl1gZDZD'
api = facebook.GraphAPI(access_token)
api.version
foto = open("fia.jpg", "rb")
api.put_photo(foto, name="Logo FIA")
api.put_photo?
Explanation: Post a photo
End of explanation
import simplejson as json
Explanation: Post a photo to an album
End of explanation
albuns = api.get_object('me/albums')
decodificar = json.dumps(albuns, sort_keys=True, indent=4)
print(decodificar)
albuns['data'][1] # Retrieve the Cursos album
#id_album = api.put_object('me', 'albums', name='FIA')
id_album = albuns['data'][1]['id']
api.put_photo(foto, album_path=id_album + '/photos')
Explanation: Retrieve the existing albums in my account:
End of explanation
import os
arquivos = os.listdir('.')
print(arquivos)
Explanation: Exercise 1 – Create a photo album called Cursos. Then add the 4 images that are available in the course folder (a sketch of one possible solution appears after this cell).
Important – The description (name) of each image must be the image's file name, without the extension (.jpg or .png).
Use the built-in library called os. To list every file in the current folder, use the os.listdir('.') method. Below is an example that saves all the files in the current folder into a list.
```python
import os
arquivos = os.listdir('.')
print(arquivos)
['.ipynb_checkpoints', 'Analise-Exploratoria.png', 'aula6-parte1.ipynb', 'aula6-parte2.ipynb', 'aula6-parte3.ipynb', 'Banner-Iot.png', 'banners_hadoop01.png', 'Extensao-Big-Data-01.jpg', 'fia.jpg', 'Untitled.ipynb', 'Untitled1.ipynb', 'Untitled2.ipynb']
```
End of explanation |
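A minimal sketch of one possible solution to Exercise 1, reusing the calls shown above (put_object to create the album, put_photo to add images with a description). It assumes the api object from the earlier cells is available and that the four course images (.jpg / .png files) sit in the current folder; the variable names and the extension filter are illustrative.

import os

album = api.put_object('me', 'albums', name='Cursos')
album_id = album['id']

for filename in os.listdir('.'):
    name, extension = os.path.splitext(filename)
    if extension.lower() in ('.jpg', '.png'):
        with open(filename, 'rb') as image:
            # The exercise asks for the file name without its extension as the description
            api.put_photo(image, album_path=album_id + '/photos', name=name)
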
14,242 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize GAR Global Flood Hazard Map with Python
Flooding is one of the most damaging natural hazards, accounting for 31% of all economic losses worldwide resulting from natural hazards(European Commission, 2007; UNISDR and CRED, 2015). Over the period 1980-2013, flood losses exceeded $1 trillion globally, and resulted in ca. 220,000 fatalities (Re, 2014). Moreover, with the frequency and magnitude of flood disasters projected to increase due to both climate change and growing population exposure [UNISDR 2009; Jongman et al., 2012], flooding is one of the key societal challenges for this century.
Quantifying flood hazard is an essential component of resilience planning, emergency response, and mitigation, including insurance (Trigg et al., 2016). Flood hazard maps (showing the probability and magnitude of flood events over an area) and flood risk assessment maps (showing potential consequences of a flood event in terms of affected population and assets, and expected economic damages) can increase preparedness and improve land use planning and management in flood prone areas. On the other hand, reliable and fast flood forecasting tools are crucial to develop effective emergency response strategies and to prevent and reduce impacts (Dottori et al., 2016).
Thanks to mathematical models for predicting and mapping flood hazard and risk, model outputs are now available and being used to address science and management questions related to flood risk, including the issue of how these risks could change in the future due to climate change and socioeconomic development(e.g.,Dottori et al., 2016).
In this notebook, the 2015 GAR global flood hazard layer will be visualized using Python. The data is a probabilistic model based on available streamflow data from 8,000 stations around the world. The GAR model then calculates potential discharge at selected points along rivers and the resulting flood extent. Flood hazard is reported at 1 km resolution for 25-, 50-, 100-, 500-, and 1,000-year return periods. The flood hazard map has been prepared down to country levels. Australia is taken as an example.
1. Load all needed libraries
Step1: 2. Load data
2.1 Read data and mask arr<=0.0
Step2: 2.2 Prepare coordinates
Step3: 3. Visualize | Python Code:
import numpy as np
import numpy.ma as ma
from osgeo import gdal
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
%matplotlib inline
Explanation: Visualize GAR Global Flood Hazard Map with Python
Flooding is one of the most damaging natural hazards, accounting for 31% of all economic losses worldwide resulting from natural hazards(European Commission, 2007; UNISDR and CRED, 2015). Over the period 1980-2013, flood losses exceeded $1 trillion globally, and resulted in ca. 220,000 fatalities (Re, 2014). Moreover, with the frequency and magnitude of flood disasters projected to increase due to both climate change and growing population exposure [UNISDR 2009; Jongman et al., 2012], flooding is one of the key societal challenges for this century.
Quantifying flood hazard is an essential component of resilience planning, emergency response, and mitigation, including insurance (Trigg et al., 2016). Flood hazard maps (showing the probability and magnitude of flood events over an area) and flood risk assessment maps (showing potential consequences of a flood event in terms of affected population and assets, and expected economic damages) can increase preparedness and improve land use planning and management in flood prone areas. On the other hand, reliable and fast flood forecasting tools are crucial to develop effective emergency response strategies and to prevent and reduce impacts (Dottori et al., 2016).
Thanks to mathematical models for predicting and mapping flood hazard and risk, model outputs are now available and being used to address science and management questions related to flood risk, including the issue of how these risks could change in the future due to climate change and socioeconomic development(e.g.,Dottori et al., 2016).
In this notebook, the 2015 GAR global flood hazard layer will be visualized using Python. The data is a probabilistic model based on available streamflow data from 8,000 stations around the world. The GAR model then calculates potential discharge at selected points along rivers and the resulting flood extent. Flood hazard is reported at 1 km resolution for 25-, 50-, 100-, 500-, and 1,000-year return periods. The flood hazard map has been prepared down to country levels. Australia is taken as an example.
1. Load all needed libraries
End of explanation
geo = gdal.Open(r'data\Hazard_AUS__1000.grd')  # raw string so the backslash is not treated as an escape
arr = geo.ReadAsArray()
arr = ma.masked_less_equal(arr, 0.0, copy=True)
Explanation: 2. Load data
2.1 Read data and mask arr<=0.0
End of explanation
x_coords = np.arange(geo.RasterXSize)
y_coords = np.arange(geo.RasterYSize)
(upper_left_x, x_size, x_rotation, upper_left_y, y_rotation, y_size) = geo.GetGeoTransform()
x_coords = x_coords * x_size + upper_left_x + (x_size / 2) # add half the cell size
y_coords = y_coords * y_size + upper_left_y + (y_size / 2) # to centre the point
Explanation: 2.2 Prepare coordinates
End of explanation
fig = plt.figure(figsize=(9, 15))
ax = fig.add_subplot(1, 1, 1)
m = Basemap(projection='cyl', resolution='i',
llcrnrlon=min(x_coords), llcrnrlat=min(y_coords),
urcrnrlon=max(x_coords), urcrnrlat=max(y_coords))
x, y = m(*np.meshgrid(x_coords, y_coords))
#m.arcgisimage(service='World_Terrain_Base', xpixels = 3500, dpi=500, verbose= True)
cs = m.contourf(x, y, arr, cmap='RdBu_r')
m.drawcoastlines()
m.drawrivers()
m.drawstates()
cb = m.colorbar(cs, pad="1%", size="3%",)
plt.title('Flood Hazard 1000 years (cm)')
Explanation: 3. Visualize
End of explanation |
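A small follow-up sketch, assuming the masked array arr from section 2.1 is still in memory: a few summary numbers for the hazard layer (depths are in cm, as in the plot title).

flooded_cells = arr.count()  # unmasked cells, i.e. cells with hazard depth > 0
print('Flooded cells: {}'.format(flooded_cells))
print('Max water depth (cm): {:.1f}'.format(float(arr.max())))
print('Mean water depth (cm): {:.1f}'.format(float(arr.mean())))
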
14,243 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Diagonalizing operators
Step1: Periodic A
Step2: $v_k = \omega^{jk}, j \in \{0,1,\ldots,N-1\}$
Step3: To see eigenvalues, divide the product $Av$ by $v$ | Python Code:
import numpy as np
import scipy.linalg as LA
# Example from Strang, 1999
A0 = LA.circulant([2,-1,0,-1])
print(A0)
# LA.LU
Lam, V = LA.eig(A0)
print(Lam)
print(V)
print(V[:, 0])
LA.norm(V, axis=1)
LA.norm(V[:, 0])
1/np.sqrt(2)
Explanation: Diagonalizing operators:
End of explanation
N = A0.shape[0]
omega = np.exp(1j*2*np.pi / N)
print(omega)
Explanation: Periodic A: Diagonalized by the DFT
The columns of the N = 4 DFT matrix are eigenvectors of the $A_0$ matrix.
Using $\omega = e^{i 2 \pi / n}$,
End of explanation
Vs = []
for j in range(N):
v = omega ** (j * np.arange(0, N))
Vs.append(v)
print(f"{j = }, {np.around(v, 2)}")
Explanation: $v_k = \omega^{jk}, j \in \{0,1,\ldots,N-1\}$
End of explanation
for j in range(N):
lam = np.around((A0 @ Vs[j]) / Vs[j], 2)
print(f"{j=}, {lam}")
print(V[:, 1])
LA.eig?
import numpy.linalg as NLA
L2, V2 = NLA.eig(A0.astype(complex))
np.around(V2, 2)
Explanation: To see eigenvalues, divide the product $Av$ by $v$:
End of explanation |
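A supplementary check, written as a self-contained sketch (it re-defines A0 so it can run on its own): building the full N x N unitary DFT matrix F, whose column j is the eigenvector (omega**(j*k))_k / sqrt(N), and conjugating A0 with it should give a diagonal matrix whose entries are the circulant eigenvalues 2 - 2*cos(2*pi*j/N).

import numpy as np
import scipy.linalg as LA

A0 = LA.circulant([2, -1, 0, -1])
N = A0.shape[0]
omega = np.exp(1j * 2 * np.pi / N)

# Unitary DFT matrix: entry (k, j) is omega**(j*k) / sqrt(N)
F = np.array([[omega ** (j * k) for j in range(N)] for k in range(N)]) / np.sqrt(N)

D = F.conj().T @ A0 @ F  # should be (numerically) diagonal
print(np.around(D.real, 10))
print(2 - 2 * np.cos(2 * np.pi * np.arange(N) / N))
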
14,244 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Watson Visual Recognition Training with Spectrogram Images from SETI Signal Data
https://www.ibm.com/watson/developercloud/visual-recognition/api/v3/
https://www.ibm.com/watson/developercloud/doc/visual-recognition/customizing.html
https://github.com/watson-developer-cloud/python-sdk
https://github.com/watson-developer-cloud/python-sdk/blob/master/watson_developer_cloud/visual_recognition_v3.py
Step1: <br/>
Init the Watson Visual Recognition Python Library
you may need to install the SDK first
Step2: <br/>
Look For Existing Custom Classifier
Use an existing custom classifier (and update) if one exists, else a new custom classifier will be created
Step3: <br/>
Send the Images Archives to the Watson Visual Recognition Service for Training
https://www.ibm.com/watson/developercloud/doc/visual-recognition/customizing.html
https://www.ibm.com/watson/developercloud/visual-recognition/api/v3/
https://github.com/watson-developer-cloud/python-sdk
Step4: <br/>
Take a Random Data File for Testing
Take a random data file from the test set
Create a Spectrogram Image
Step5: <br/>
Run the Complete Test Set
Step6: Generate CSV file for Scoreboard
Here's an example of what the CSV file should look like for submission to the scoreboard, although in this case we only have 4 classes instead of 7.
NOTE | Python Code:
#!pip install --user --upgrade watson-developer-cloud
import os
#Making a local folder to put my data.
#NOTE: YOU MUST do something like this on a Spark Enterprise cluster at the hackathon so that
#you can put your data into a separate local file space. Otherwise, you'll likely collide with
#your fellow participants.
my_team_name_data_folder = 'my_team_name_data_folder'
mydatafolder = os.environ['PWD'] + '/' + my_team_name_data_folder + '/zipfiles'
if os.path.exists(mydatafolder) is False:
os.makedirs(mydatafolder)
!ls -al $mydatafolder
from __future__ import division
import cStringIO
import glob
import json
import numpy
import os
import re
import requests
import time
import timeit
import zipfile
import copy
from random import randint
import matplotlib.pyplot as plt
import numpy as np
import ibmseti
from watson_developer_cloud import VisualRecognitionV3
apiVer = VisualRecognitionV3.latest_version #'2016-05-20'
classifier_prefix = 'setisignals'
#You can sign up with WatsonVR through Bluemix to get a key
#However, Hackathon participants will be provided with a WATSON VR key that has more free API calls per day.
apiKey = 'WATSON-VISUAL-RECOGNITION-API-KEY'  # replace with your Watson Visual Recognition API key
Explanation: Watson Visual Recognition Training with Spectrogram Images from SETI Signal Data
https://www.ibm.com/watson/developercloud/visual-recognition/api/v3/
https://www.ibm.com/watson/developercloud/doc/visual-recognition/customizing.html
https://github.com/watson-developer-cloud/python-sdk
https://github.com/watson-developer-cloud/python-sdk/blob/master/watson_developer_cloud/visual_recognition_v3.py
<hr>
Install the Watson Developer Cloud Python SDK
Install the Python SDK if it has not been previously installed: !pip install --upgrade watson-developer-cloud
Restart the kernel, after installing the SDK
End of explanation
vr = VisualRecognitionV3(apiVer, api_key=apiKey)
Explanation: <br/>
Init the Watson Visual Recognition Python Library
you may need to install the SDK first: !pip install --upgrade watson-developer-cloud
you will need the API key from the Watson Visual Recognition Service
End of explanation
## View all of your classifiers
classifiers = vr.list_classifiers()
print json.dumps(classifiers, indent=2)
## Run this cell ONLY IF you want to REMOVE all classifiers
# Otherwise, the subsequent cell will append images to the `classifier_prefix` classifier
classifiers = vr.list_classifiers()
for c in classifiers['classifiers']:
vr.delete_classifier(c['classifier_id'])
classifiers = vr.list_classifiers()
print json.dumps(classifiers, indent=2)
#Create new classifier, or get the ID for the latest SETISIGNALS classifier
classifier_id = None
classifier = None
classifiers = vr.list_classifiers()
for c in classifiers['classifiers']:
if c['status'] == 'ready' and (classifier_prefix in c['classifier_id']):
classifier_id = c['classifier_id']
if classifier_id is not None:
classifier = vr.get_classifier(classifier_id)
print '\r\nFound classifer:\r\n\r\n{}'.format(json.dumps(classifier, indent=2))
else:
print 'No custom classifier available\r\n'
print(json.dumps(classifiers, indent=2))
Explanation: <br/>
Look For Existing Custom Classifier
Use an existing custom classifier (and update) if one exists, else a new custom classifier will be created
End of explanation
squiggle = sorted(glob.glob('{}/classification_*_squiggle.zip'.format(mydatafolder)))
narrowband = sorted(glob.glob('{}/classification_*_narrowband.zip'.format(mydatafolder)))
narrowbanddrd = sorted(glob.glob('{}/classification_*_narrowbanddrd.zip'.format(mydatafolder)))
noise = sorted(glob.glob('{}/classification_*_noise.zip'.format(mydatafolder)))
sq = len(squiggle)
nb = len(narrowband)
nd = len(narrowbanddrd)
ns = len(noise)
## Possible todo here: Try using the 'noise' as a "negative" example when training Watson. See the Watson documentation.
num = max(sq, nb, nd, ns)
#num = max(sq, nb, nd)
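# A hedged sketch of the "noise as negative examples" variant mentioned above (untested;
# whether this SDK version's keyword is `negative_examples` should be checked against the
# Watson Visual Recognition documentation before use):
#
#     with open(squiggle[0], 'rb') as squiggle_p, \
#          open(narrowband[0], 'rb') as narrowband_p, \
#          open(narrowbanddrd[0], 'rb') as narrowbanddrd_p, \
#          open(noise[0], 'rb') as noise_p:
#         classifier = vr.create_classifier(
#             classifier_prefix,
#             squiggle_positive_examples=squiggle_p,
#             narrowband_positive_examples=narrowband_p,
#             narrowbanddrd_positive_examples=narrowbanddrd_p,
#             negative_examples=noise_p)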
if classifier_id is None:
print 'Adding custom classifier ... this may take awhile'
else:
print 'Updating custom classifier {} ... this may take awhile'.format(classifier_id)
for i in range(num):
squiggle_p = open(squiggle[i], 'rb') if i < sq else None
narrowband_p = open(narrowband[i], 'rb') if i < nb else None
narrowbanddrd_p = open(narrowbanddrd[i], 'rb') if i < nd else None
noise_p = open(noise[i], 'rb') if i < ns else None
if classifier_id is None:
# print 'Creating with\r\n{}\r\n{}\r\n{}\r'.format(squiggle_p, narrowband_p, narrowbanddrd_p) #use this line if going to use 'noise' as negative example
print 'Creating with\r\n{}\r\n{}\r\n{}\r\n{}\r'.format(squiggle_p, narrowband_p, narrowbanddrd_p, noise_p)
classifier = vr.create_classifier(
classifier_prefix,
squiggle_positive_examples = squiggle_p,
narrowband_positive_examples = narrowband_p,
narrowbanddrd_positive_examples = narrowbanddrd_p,
noise_positive_examples = noise_p #remove this if going to use noise as 'negative' examples
)
classifier_id = classifier['classifier_id']
else:
print 'Updating with\r\n{}\r\n{}\r\n{}\r\n{}\r'.format(squiggle_p, narrowband_p, narrowbanddrd_p, noise_p)
# print 'Updating with\r\n{}\r\n{}\r\n{}\r'.format(squiggle_p, narrowband_p, narrowbanddrd_p) #use this line if going to use 'noise' as negative example
classifier = vr.update_classifier(
classifier_id,
squiggle_positive_examples = squiggle_p,
narrowband_positive_examples = narrowband_p,
narrowbanddrd_positive_examples = narrowbanddrd_p,
noise_positive_examples = noise_p #remove this if going to use noise as 'negative' examples
)
if squiggle_p is not None:
squiggle_p.close()
if narrowband_p is not None:
narrowband_p.close()
if narrowbanddrd_p is not None:
narrowbanddrd_p.close()
if noise_p is not None:
noise_p.close()
if classifier is not None:
print('Classifier: {}'.format(classifier_id))
status = classifier['status']
startTimer = timeit.default_timer()
while status in ['training', 'retraining']:
print('Status: {}'.format(status))
time.sleep(10)
classifier = vr.get_classifier(classifier_id)
status = classifier['status']
stopTimer = timeit.default_timer()
print '{} took {} minutes'.format('Training' if i == 0 else 'Retraining', int(stopTimer - startTimer) / 60)
print(json.dumps(vr.get_classifier(classifier_id), indent=2))
Explanation: <br/>
Send the Images Archives to the Watson Visual Recognition Service for Training
https://www.ibm.com/watson/developercloud/doc/visual-recognition/customizing.html
https://www.ibm.com/watson/developercloud/visual-recognition/api/v3/
https://github.com/watson-developer-cloud/python-sdk
End of explanation
zz = zipfile.ZipFile(mydatafolder + '/' + 'testset_1_narrowband.zip')
test_list = zz.namelist()
randomSignal = zz.open(test_list[10],'r')
from IPython.display import Image
squigImg = randomSignal.read()
Image(squigImg)
#note - have to 'open' this again because it was already .read() out in the line above
randomSignal = zz.open(test_list[10],'r')
url_result = vr.classify(images_file=randomSignal, classifier_ids=classifier_id, threshold=0.0)
print(json.dumps(url_result, indent=2))
Explanation: <br/>
Take a Random Data File for Testing
Take a random data file from the test set
Create a Spectrogram Image
End of explanation
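A small convenience sketch, assuming url_result from the cell above: pull out the single highest-scoring class from the nested images/classifiers/classes response structure (the same structure the bulk test below walks through).

top_class = max(
    url_result['images'][0]['classifiers'][0]['classes'],
    key=lambda c: c['score'])
print('Top class: {} ({:.2f})'.format(top_class['class'], top_class['score']))
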
#Create a dictionary object to store results from Watson
from collections import defaultdict
class_list = ['squiggle', 'noise', 'narrowband', 'narrowbanddrd']
results_group_by_class = {}
for classification in class_list:
results_group_by_class[classification] = defaultdict(list)
failed_to_classify_uuid_list = []
print classifier_id
results_group_by_class
# locate test archives that were produced in step 3 and add them to the test set
test_set = []
for classification in class_list:
test_set = numpy.concatenate((test_set, sorted(glob.glob('{}/testset_*_{}.zip'.format(mydatafolder, classification)))))
for image_archive_name in test_set:
image_count = 0
# count number of images in <image_archive_name>
with zipfile.ZipFile(image_archive_name,'r') as image_archive:
images = image_archive.namelist()
image_count = len(images)
# bulk classify images in <image_archive_name>
with open(image_archive_name, 'rb') as images_file:
print 'Running test ({} images) for {}... this may take a while.'.format(image_count, image_archive_name)
startTimer = timeit.default_timer()
classify_results = vr.classify(images_file=images_file, classifier_ids=[classifier_id], threshold=0.0)
# print(json.dumps(classify_results, indent=2))
# identify class from ZIP file name, e.g. testset_10_squiggle.zip
mo = re.match('^(.+)_(\d+)_(.+)\.zip$',image_archive_name.split('/')[-1])
classification = mo.group(3)
resdict = results_group_by_class[classification]
passed = 0
for classify_result in classify_results['images']:
pngfilename = classify_result['image'].split('/')[-1]
uuid = pngfilename.split('.')[0]
maxscore = 0
maxscoreclass = None
if "error" in classify_result:
# print error information
print classify_result
#add to failed list
failed_to_classify_uuid_list.append(uuid)
else:
classifiers_arr = classify_result['classifiers']
score_list = []
for classifier_result in classifiers_arr:
for class_result in classifier_result['classes']:
score_list.append((class_result['class'],class_result['score']))
if class_result['score'] > maxscore:
maxscore = class_result['score']
maxscoreclass = class_result['class']
#sort alphabetically
score_list.sort(key = lambda x: x[0])
score_list = map(lambda x:x[1], score_list)
if maxscoreclass is None:
print 'Failed: {} - Actual: {}, No classification returned'.format(pngfilename, classification)
#print(json.dumps(classify_result, indent=2))
elif maxscoreclass != classification:
print 'Failed: {} - Actual: {}, Watson Predicted: {} ({})'.format(pngfilename, classification, maxscoreclass, maxscore)
else:
passed += 1
print 'Passed: {} - Actual: {}, Watson Predicted: {} ({})'.format(pngfilename, classification, maxscoreclass, maxscore)
if maxscoreclass is not None:
resdict['signal_classification'].append(classification)
resdict['uuid'].append(uuid)
resdict['watson_class'].append(maxscoreclass)
resdict['watson_class_score'].append(maxscore)
resdict['scores'].append(score_list)
else:
#add to failed list
failed_to_classify_uuid_list.append(uuid)
stopTimer = timeit.default_timer()
print 'Test Score: {}% ({} of {} Passed)'.format(int((float(passed) / image_count) * 100), passed, image_count)
print 'Tested {} images in {} minutes'.format(image_count, int(stopTimer - startTimer) / 60)
print "DONE."
import pickle
pickle.dump(results_group_by_class, open(mydatafolder + '/' + "watson_results.pickle", "w"))
watson_results = pickle.load(open(mydatafolder + '/' + "watson_results.pickle","r"))
# reorganize the watson_results dictionary to extract
# a list of [true_class, [scores], estimated_class] and
# use these for measuring our model's performance
class_scores = []
for k in watson_results.keys():
class_scores += zip(watson_results[k]['uuid'], watson_results[k]['signal_classification'], watson_results[k]['scores'], watson_results[k]['watson_class'] )
class_scores[100]
from sklearn.metrics import classification_report
import sklearn
y_train = [x[1] for x in class_scores]
y_pred = [x[3] for x in class_scores]
y_prob = [x[2] for x in class_scores]
#we normalize the Watson score values to 1 in order to use them in the log_loss calculation even though the Watson VR scores are not true class prediction probabilities
y_prob = map(lambda x: (x, sum(x)), y_prob)
y_prob = map(lambda x: [y / x[1] for y in x[0]], y_prob)
print sklearn.metrics.classification_report(y_train,y_pred)
print sklearn.metrics.confusion_matrix(y_train,y_pred)
print("Classification accuracy: %0.6f" % sklearn.metrics.accuracy_score(y_train,y_pred) )
print("Log Loss: %0.6f" % sklearn.metrics.log_loss(y_train,y_prob) )
Explanation: <br/>
Run the Complete Test Set
End of explanation
import csv
my_output_results = my_team_name_data_folder + '/' + 'watson_scores.csv'
with open(my_output_results, 'w') as csvfile:
fwriter = csv.writer(csvfile, delimiter=',')
for row in class_scores:
fwriter.writerow([row[0]] + row[2])
!cat $my_team_name_data_folder/watson_scores.csv
Explanation: Generate CSV file for Scoreboard
Here's an example of what the CSV file should look like for submission to the scoreboard. Although, in this case, we only have 4 classes instead of 7.
NOTE: This uses the PNG files created in the Step 3 notebook, which only contain the BASIC4 data set. The code challenge and hackathon will be based on the Primary Data Set, which contains 7 signal classes.
This only shows you how to create a CSV file. You'll need to take the primary test set data, create PNGs for them, package them into zips, then modify the code above to send those zip files to Watson.
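For reference, here is a minimal sketch of how a folder of PNGs might be packaged into zip archives before being sent to Watson for classification (the folder name, batch size, and archive names below are assumptions, not values from this notebook):
```python
import glob, zipfile
png_files = sorted(glob.glob(mydatafolder + '/primary_test_pngs/*.png'))  # assumed location of the PNGs
batch_size = 100
for start in range(0, len(png_files), batch_size):
    zip_name = '{}/primary_testset_{}.zip'.format(mydatafolder, start // batch_size)
    with zipfile.ZipFile(zip_name, 'w') as archive:
        for png in png_files[start : start + batch_size]:
            archive.write(png, arcname=png.split('/')[-1])
```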
End of explanation |
14,245 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Gentle Introduction to HARK
This notebook provides a simple, hands-on tutorial for first time HARK users -- and potentially first time Python users. It does not go "into the weeds" - we have hidden some code cells that do boring things that you don't need to digest on your first experience with HARK. Our aim is to convey a feel for how the toolkit works.
For readers for whom this is your very first experience with Python, we have put important Python concepts in boldface. For those for whom this is the first time they have used a Jupyter notebook, we have put Jupyter instructions in italics. Only cursory definitions (if any) are provided here. If you want to learn more, there are many online Python and Jupyter tutorials.
Step1: Your First HARK Model
Step2: The $\texttt{PerfForesightConsumerType}$ class contains within itself the python code that constructs the solution for the perfect foresight model we are studying here, as specifically articulated in these lecture notes.
To create an instance of $\texttt{PerfForesightConsumerType}$, we simply call the class as if it were a function, passing as arguments the specific parameter values we want it to have. In the hidden cell below, we define a $\textbf{dictionary}$ named $\texttt{PF_dictionary}$ with these parameter values
Step3: Let's make an object named $\texttt{PFexample}$ which is an instance of the $\texttt{PerfForesightConsumerType}$ class. The object $\texttt{PFexample}$ will bundle together the abstract mathematical description of the solution embodied in $\texttt{PerfForesightConsumerType}$, and the specific set of parameter values defined in $\texttt{PF_dictionary}$. Such a bundle is created passing $\texttt{PF_dictionary}$ to the class $\texttt{PerfForesightConsumerType}$
Step4: In $\texttt{PFexample}$, we now have defined the problem of a particular infinite horizon perfect foresight consumer who knows how to solve this problem.
Solving an Agent's Problem
To tell the agent actually to solve the problem, we call the agent's $\texttt{solve}$ method. (A method is essentially a function that an object runs that affects the object's own internal characteristics -- in this case, the method adds the consumption function to the contents of $\texttt{PFexample}$.)
The cell below calls the $\texttt{solve}$ method for $\texttt{PFexample}$
Step5: Running the $\texttt{solve}$ method creates the attribute of $\texttt{PFexample}$ named $\texttt{solution}$. In fact, every subclass of $\texttt{AgentType}$ works the same way
Step6: One of the results proven in the associated the lecture notes is that, for the specific problem defined above, there is a solution in which the ratio $c = C/P$ is a linear function of the ratio of market resources to permanent income, $m = M/P$.
This is why $\texttt{cFunc}$ can be represented by a linear interpolation. It can be plotted between an $m$ ratio of 0 and 10 using the command below.
Step7: The figure illustrates one of the surprising features of the perfect foresight model
Step8: Yikes! Let's take a look at the bottom of the consumption function. In the cell below, the bounds of the plot_funcs function are set to display down to the lowest defined value of the consumption function.
Step9: Changing Agent Parameters
Suppose you wanted to change one (or more) of the parameters of the agent's problem and see what that does. We want to compare consumption functions before and after we change parameters, so let's make a new instance of $\texttt{PerfForesightConsumerType}$ by copying $\texttt{PFexample}$.
Step10: You can assign new parameters to an AgentType with the assign_parameter method. For example, we could make the new agent less patient
Step11: (Note that you can pass a list of functions to plot_funcs as the first argument rather than just a single function. Lists are written inside of [square brackets].)
Let's try to deal with the "problem" of massive human wealth by making another consumer who has essentially no future income. We can virtually eliminate human wealth by making the permanent income growth factor $\textit{very}$ small.
In $\texttt{PFexample}$, the agent's income grew by 1 percent per period -- his $\texttt{PermGroFac}$ took the value 1.01. What if our new agent had a growth factor of 0.01 -- his income shrinks by 99 percent each period? In the cell below, set $\texttt{NewExample}$'s discount factor back to its original value, then set its $\texttt{PermGroFac}$ attribute so that the growth factor is 0.01 each period.
Important
Step12: Now $\texttt{NewExample}$'s consumption function has the same slope (MPC) as $\texttt{PFexample}$, but it emanates from (almost) zero-- he has basically no future income to borrow against!
If you'd like, use the cell above to alter $\texttt{NewExample}$'s other attributes (relative risk aversion, etc) and see how the consumption function changes. However, keep in mind that \textit{no solution exists} for some combinations of parameters. HARK should let you know if this is the case if you try to solve such a model.
Your Second HARK Model
Step13: As before, we need to import the relevant subclass of $\texttt{AgentType}$ into our workspace, then create an instance by passing the dictionary to the class as if the class were a function.
Step14: Now we can solve our new agent's problem just like before, using the $\texttt{solve}$ method.
Step15: Changing Constructed Attributes
In the parameter dictionary above, we chose values for HARK to use when constructing its numeric representation of $F_t$, the joint distribution of permanent and transitory income shocks. When $\texttt{IndShockExample}$ was created, those parameters ($\texttt{TranShkStd}$, etc) were used by the constructor or initialization method of $\texttt{IndShockConsumerType}$ to construct an attribute called $\texttt{IncomeDstn}$.
Suppose you were interested in changing (say) the amount of permanent income risk. From the section above, you might think that you could simply change the attribute $\texttt{TranShkStd}$, solve the model again, and it would work.
That's almost true-- there's one extra step. $\texttt{TranShkStd}$ is a primitive input, but it's not the thing you actually want to change. Changing $\texttt{TranShkStd}$ doesn't actually update the income distribution... unless you tell it to (just like changing an agent's preferences does not change the consumption function that was stored for the old set of parameters -- until you invoke the $\texttt{solve}$ method again). In the cell below, we invoke the method $\texttt{update_income_process}$ so HARK knows to reconstruct the attribute $\texttt{IncomeDstn}$.
Step16: In the cell below, use your blossoming HARK skills to plot the consumption function for $\texttt{IndShockExample}$ and $\texttt{OtherExample}$ on the same figure. | Python Code:
# This cell has a bit of initial setup. You can click the triangle to the left to expand it.
# Click the "Run" button immediately above the notebook in order to execute the contents of any cell
# WARNING: Each cell in the notebook relies upon results generated by previous cells
# The most common problem beginners have is to execute a cell before all its predecessors
# If you do this, you can restart the kernel (see the "Kernel" menu above) and start over
import matplotlib.pyplot as plt
import numpy as np
import HARK
from copy import deepcopy
mystr = lambda number : "{:.4f}".format(number)
from HARK.utilities import plot_funcs
Explanation: A Gentle Introduction to HARK
This notebook provides a simple, hands-on tutorial for first time HARK users -- and potentially first time Python users. It does not go "into the weeds" - we have hidden some code cells that do boring things that you don't need to digest on your first experience with HARK. Our aim is to convey a feel for how the toolkit works.
For readers for whom this is your very first experience with Python, we have put important Python concepts in boldface. For those for whom this is the first time they have used a Jupyter notebook, we have put Jupyter instructions in italics. Only cursory definitions (if any) are provided here. If you want to learn more, there are many online Python and Jupyter tutorials.
End of explanation
from HARK.ConsumptionSaving.ConsIndShockModel import PerfForesightConsumerType
Explanation: Your First HARK Model: Perfect Foresight
We start with almost the simplest possible consumption model: A consumer with CRRA utility
\begin{equation}
U(C) = \frac{C^{1-\rho}}{1-\rho}
\end{equation}
has perfect foresight about everything except the (stochastic) date of death, which occurs with constant probability implying a "survival probability" $\newcommand{\LivPrb}{\aleph}\LivPrb < 1$. Permanent labor income $P_t$ grows from period to period by a factor $\Gamma_t$. At the beginning of each period $t$, the consumer has some amount of market resources $M_t$ (which includes both market wealth and currrent income) and must choose how much of those resources to consume $C_t$ and how much to retain in a riskless asset $A_t$ which will earn return factor $R$. The agent's flow of utility $U(C_t)$ from consumption is geometrically discounted by factor $\beta$. Between periods, the agent dies with probability $\mathsf{D}_t$, ending his problem.
The agent's problem can be written in Bellman form as:
\begin{eqnarray}
V_t(M_t,P_t) &=& \max_{C_t}~U(C_t) + \beta \aleph V_{t+1}(M_{t+1},P_{t+1}), \\
& s.t. & \\
%A_t &=& M_t - C_t, \\
M_{t+1} &=& R (M_{t}-C_{t}) + Y_{t+1}, \\
P_{t+1} &=& \Gamma_{t+1} P_t, \\
\end{eqnarray}
A particular perfect foresight agent's problem can be characterized by values of risk aversion $\rho$, discount factor $\beta$, and return factor $R$, along with sequences of income growth factors ${ \Gamma_t }$ and survival probabilities ${\mathsf{\aleph}_t}$. To keep things simple, let's forget about "sequences" of income growth and mortality, and just think about an $\textit{infinite horizon}$ consumer with constant income growth and survival probability.
Representing Agents in HARK
HARK represents agents solving this type of problem as $\textbf{instances}$ of the $\textbf{class}$ $\texttt{PerfForesightConsumerType}$, a $\textbf{subclass}$ of $\texttt{AgentType}$. To make agents of this class, we must import the class itself into our workspace. (Run the cell below in order to do this).
End of explanation
# This cell defines a parameter dictionary. You can expand it if you want to see what that looks like.
PF_dictionary = {
'CRRA' : 2.5,
'DiscFac' : 0.96,
'Rfree' : 1.03,
'LivPrb' : [0.98],
'PermGroFac' : [1.01],
'T_cycle' : 1,
'cycles' : 0,
'AgentCount' : 10000
}
# To those curious enough to open this hidden cell, you might notice that we defined
# a few extra parameters in that dictionary: T_cycle, cycles, and AgentCount. Don't
# worry about these for now.
Explanation: The $\texttt{PerfForesightConsumerType}$ class contains within itself the python code that constructs the solution for the perfect foresight model we are studying here, as specifically articulated in these lecture notes.
To create an instance of $\texttt{PerfForesightConsumerType}$, we simply call the class as if it were a function, passing as arguments the specific parameter values we want it to have. In the hidden cell below, we define a $\textbf{dictionary}$ named $\texttt{PF_dictionary}$ with these parameter values:
| Param | Description | Code | Value |
| :---: | --- | --- | :---: |
| $\rho$ | Relative risk aversion | $\texttt{CRRA}$ | 2.5 |
| $\beta$ | Discount factor | $\texttt{DiscFac}$ | 0.96 |
| $R$ | Risk free interest factor | $\texttt{Rfree}$ | 1.03 |
| $\aleph$ | Survival probability | $\texttt{LivPrb}$ | 0.98 |
| $\Gamma$ | Income growth factor | $\texttt{PermGroFac}$ | 1.01 |
For now, don't worry about the specifics of dictionaries. All you need to know is that a dictionary lets us pass many arguments wrapped up in one simple data structure.
End of explanation
PFexample = PerfForesightConsumerType(**PF_dictionary)
# the asterisks ** basically say "here come some arguments" to PerfForesightConsumerType
Explanation: Let's make an object named $\texttt{PFexample}$ which is an instance of the $\texttt{PerfForesightConsumerType}$ class. The object $\texttt{PFexample}$ will bundle together the abstract mathematical description of the solution embodied in $\texttt{PerfForesightConsumerType}$, and the specific set of parameter values defined in $\texttt{PF_dictionary}$. Such a bundle is created passing $\texttt{PF_dictionary}$ to the class $\texttt{PerfForesightConsumerType}$:
End of explanation
PFexample.solve()
Explanation: In $\texttt{PFexample}$, we now have defined the problem of a particular infinite horizon perfect foresight consumer who knows how to solve this problem.
Solving an Agent's Problem
To tell the agent actually to solve the problem, we call the agent's $\texttt{solve}$ method. (A method is essentially a function that an object runs that affects the object's own internal characteristics -- in this case, the method adds the consumption function to the contents of $\texttt{PFexample}$.)
The cell below calls the $\texttt{solve}$ method for $\texttt{PFexample}$
End of explanation
PFexample.solution[0].cFunc
Explanation: Running the $\texttt{solve}$ method creates the attribute of $\texttt{PFexample}$ named $\texttt{solution}$. In fact, every subclass of $\texttt{AgentType}$ works the same way: The class definition contains the abstract algorithm that knows how to solve the model, but to obtain the particular solution for a specific instance (parameterization/configuration), that instance must be instructed to $\texttt{solve()}$ its problem.
The $\texttt{solution}$ attribute is always a $\textit{list}$ of solutions to a single period of the problem. In the case of an infinite horizon model like the one here, there is just one element in that list -- the solution to all periods of the infinite horizon problem. The consumption function stored as the first element (element 0) of the solution list can be retrieved by:
End of explanation
mPlotTop=10
plot_funcs(PFexample.solution[0].cFunc,0.,mPlotTop)
Explanation: One of the results proven in the associated lecture notes is that, for the specific problem defined above, there is a solution in which the ratio $c = C/P$ is a linear function of the ratio of market resources to permanent income, $m = M/P$.
This is why $\texttt{cFunc}$ can be represented by a linear interpolation. It can be plotted between an $m$ ratio of 0 and 10 using the command below.
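You can also evaluate the interpolated consumption function directly at chosen values of $m$ (a quick sketch, reusing the numpy alias imported at the top of this notebook):
```python
import numpy as np
m_check = np.array([0., 1., 5., 10.])
print(PFexample.solution[0].cFunc(m_check))   # the consumption-to-permanent-income ratio c at each m
```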
End of explanation
humanWealth = PFexample.solution[0].hNrm
mMinimum = PFexample.solution[0].mNrmMin
print("This agent's human wealth is " + str(humanWealth) + ' times his current income level.')
print("This agent's consumption function is defined (consumption is positive) down to m_t = " + str(mMinimum))
Explanation: The figure illustrates one of the surprising features of the perfect foresight model: A person with zero money should be spending at a rate more than double their income (that is, $\texttt{cFunc}(0.) \approx 2.08$ - the intersection on the vertical axis). How can this be?
The answer is that we have not incorporated any constraint that would prevent the agent from borrowing against the entire PDV of future earnings-- human wealth. How much is that? What's the minimum value of $m_t$ where the consumption function is defined? We can check by retrieving the $\texttt{hNrm}$ attribute of the solution, which calculates the value of human wealth normalized by permanent income:
End of explanation
plot_funcs(PFexample.solution[0].cFunc,
mMinimum,
mPlotTop)
Explanation: Yikes! Let's take a look at the bottom of the consumption function. In the cell below, the bounds of the plot_funcs function are set to display down to the lowest defined value of the consumption function.
End of explanation
NewExample = deepcopy(PFexample)
Explanation: Changing Agent Parameters
Suppose you wanted to change one (or more) of the parameters of the agent's problem and see what that does. We want to compare consumption functions before and after we change parameters, so let's make a new instance of $\texttt{PerfForesightConsumerType}$ by copying $\texttt{PFexample}$.
End of explanation
NewExample.assign_parameters(DiscFac = 0.90)
NewExample.solve()
mPlotBottom = mMinimum
plot_funcs([PFexample.solution[0].cFunc,
NewExample.solution[0].cFunc],
mPlotBottom,
mPlotTop)
Explanation: You can assign new parameters to an AgentType with the assign_parameter method. For example, we could make the new agent less patient:
End of explanation
# Revert NewExample's discount factor and make his future income minuscule
# print("your lines here")
# Compare the old and new consumption functions
plot_funcs([PFexample.solution[0].cFunc,NewExample.solution[0].cFunc],0.,10.)
Explanation: (Note that you can pass a list of functions to plot_funcs as the first argument rather than just a single function. Lists are written inside of [square brackets].)
Let's try to deal with the "problem" of massive human wealth by making another consumer who has essentially no future income. We can virtually eliminate human wealth by making the permanent income growth factor $\textit{very}$ small.
In $\texttt{PFexample}$, the agent's income grew by 1 percent per period -- his $\texttt{PermGroFac}$ took the value 1.01. What if our new agent had a growth factor of 0.01 -- his income shrinks by 99 percent each period? In the cell below, set $\texttt{NewExample}$'s discount factor back to its original value, then set its $\texttt{PermGroFac}$ attribute so that the growth factor is 0.01 each period.
Important: Recall that the model at the top of this document said that an agent's problem is characterized by a sequence of income growth factors, but we tabled that concept. Because $\texttt{PerfForesightConsumerType}$ treats $\texttt{PermGroFac}$ as a time-varying attribute, it must be specified as a list (with a single element in this case).
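One possible way to fill in that cell (a sketch; 0.96 is the original discount factor from $\texttt{PF_dictionary}$):
```python
NewExample.assign_parameters(DiscFac = 0.96)        # back to the original value
NewExample.assign_parameters(PermGroFac = [0.01])   # note the single-element list
NewExample.solve()
```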
End of explanation
# This cell defines a parameter dictionary for making an instance of IndShockConsumerType.
IndShockDictionary = {
'CRRA': 2.5, # The dictionary includes our original parameters...
'Rfree': 1.03,
'DiscFac': 0.96,
'LivPrb': [0.98],
'PermGroFac': [1.01],
'PermShkStd': [0.1], # ... and the new parameters for constructing the income process.
'PermShkCount': 7,
'TranShkStd': [0.1],
'TranShkCount': 7,
'UnempPrb': 0.05,
'IncUnemp': 0.3,
'BoroCnstArt': 0.0,
'aXtraMin': 0.001, # aXtra parameters specify how to construct the grid of assets.
'aXtraMax': 50., # Don't worry about these for now
'aXtraNestFac': 3,
'aXtraCount': 48,
'aXtraExtra': [None],
'vFuncBool': False, # These booleans indicate whether the value function should be calculated
'CubicBool': False, # and whether to use cubic spline interpolation. You can ignore them.
'aNrmInitMean' : -10.,
'aNrmInitStd' : 0.0, # These parameters specify the (log) distribution of normalized assets
'pLvlInitMean' : 0.0, # and permanent income for agents at "birth". They are only relevant in
'pLvlInitStd' : 0.0, # simulation and you don't need to worry about them.
'PermGroFacAgg' : 1.0,
'T_retire': 0, # What's this about retirement? ConsIndShock is set up to be able to
'UnempPrbRet': 0.0, # handle lifecycle models as well as infinite horizon problems. Swapping
'IncUnempRet': 0.0, # out the structure of the income process is easy, but ignore for now.
'T_age' : None,
'T_cycle' : 1,
'cycles' : 0,
'AgentCount': 10000,
'tax_rate':0.0,
}
# Hey, there's a lot of parameters we didn't tell you about! Yes, but you don't need to
# think about them for now.
Explanation: Now $\texttt{NewExample}$'s consumption function has the same slope (MPC) as $\texttt{PFexample}$, but it emanates from (almost) zero-- he has basically no future income to borrow against!
If you'd like, use the cell above to alter $\texttt{NewExample}$'s other attributes (relative risk aversion, etc) and see how the consumption function changes. However, keep in mind that \textit{no solution exists} for some combinations of parameters. HARK should let you know if this is the case if you try to solve such a model.
Your Second HARK Model: Adding Income Shocks
Linear consumption functions are pretty boring, and you'd be justified in feeling unimpressed if all HARK could do was plot some lines. Let's look at another model that adds two important layers of complexity: income shocks and (artificial) borrowing constraints.
Specifically, our new type of consumer receives two income shocks at the beginning of each period: a completely transitory shock $\theta_t$ and a completely permanent shock $\psi_t$. Moreover, lenders will not let the agent borrow money such that his ratio of end-of-period assets $A_t$ to permanent income $P_t$ is less than $\underline{a}$. As with the perfect foresight problem, this model can be framed in terms of normalized variables, e.g. $m_t \equiv M_t/P_t$. (See here for all the theory).
\begin{eqnarray}
v_t(m_t) &=& \max_{c_t} ~ U(c_t) ~ + \phantom{\LivPrb} \beta \mathbb{E} [(\Gamma_{t+1}\psi_{t+1})^{1-\rho} v_{t+1}(m_{t+1}) ], \\
a_t &=& m_t - c_t, \\
a_t &\geq& \underset{\bar{}}{a}, \\
m_{t+1} &=& R/(\Gamma_{t+1} \psi_{t+1}) a_t + \theta_{t+1}, \\
\mathbb{E}[\psi]=\mathbb{E}[\theta] &=& 1, \\
u(c) &=& \frac{c^{1-\rho}}{1-\rho}.
\end{eqnarray}
HARK represents agents with this kind of problem as instances of the class $\texttt{IndShockConsumerType}$. To create an $\texttt{IndShockConsumerType}$, we must specify the same set of parameters as for a $\texttt{PerfForesightConsumerType}$, as well as an artificial borrowing constraint $\underline{a}$ and a sequence of income shocks. It's easy enough to pick a borrowing constraint -- say, zero -- but how would we specify the distributions of the shocks? Can't the joint distribution of permanent and transitory shocks be just about anything?
Yes, and HARK can handle whatever correlation structure a user might care to specify. However, the default behavior of $\texttt{IndShockConsumerType}$ is that the distribution of permanent income shocks is mean one lognormal, and the distribution of transitory shocks is mean one lognormal augmented with a point mass representing unemployment. The distributions are independent of each other by default, and by default are approximated with $N$ point equiprobable distributions.
Let's make an infinite horizon instance of $\texttt{IndShockConsumerType}$ with the same parameters as our original perfect foresight agent, plus the extra parameters to specify the income shock distribution and the artificial borrowing constraint. As before, we'll make a dictionary:
| Param | Description | Code | Value |
| :---: | --- | --- | :---: |
| $\underline{a}$ | Artificial borrowing constraint | $\texttt{BoroCnstArt}$ | 0.0 |
| $\sigma_\psi$ | Underlying stdev of permanent income shocks | $\texttt{PermShkStd}$ | 0.1 |
| $\sigma_\theta$ | Underlying stdev of transitory income shocks | $\texttt{TranShkStd}$ | 0.1 |
| $N_\psi$ | Number of discrete permanent income shocks | $\texttt{PermShkCount}$ | 7 |
| $N_\theta$ | Number of discrete transitory income shocks | $\texttt{TranShkCount}$ | 7 |
| $\mho$ | Unemployment probability | $\texttt{UnempPrb}$ | 0.05 |
| $\underset{\bar{}}{\theta}$ | Transitory shock when unemployed | $\texttt{IncUnemp}$ | 0.3 |
End of explanation
from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType
IndShockExample = IndShockConsumerType(**IndShockDictionary)
Explanation: As before, we need to import the relevant subclass of $\texttt{AgentType}$ into our workspace, then create an instance by passing the dictionary to the class as if the class were a function.
End of explanation
IndShockExample.solve()
plot_funcs(IndShockExample.solution[0].cFunc,0.,10.)
Explanation: Now we can solve our new agent's problem just like before, using the $\texttt{solve}$ method.
End of explanation
OtherExample = deepcopy(IndShockExample) # Make a copy so we can compare consumption functions
OtherExample.assign_parameters(PermShkStd = [0.2]) # Double permanent income risk (note that it's a one element list)
OtherExample.update_income_process() # Call the method to reconstruct the representation of F_t
OtherExample.solve()
Explanation: Changing Constructed Attributes
In the parameter dictionary above, we chose values for HARK to use when constructing its numeric representation of $F_t$, the joint distribution of permanent and transitory income shocks. When $\texttt{IndShockExample}$ was created, those parameters ($\texttt{TranShkStd}$, etc) were used by the constructor or initialization method of $\texttt{IndShockConsumerType}$ to construct an attribute called $\texttt{IncomeDstn}$.
Suppose you were interested in changing (say) the amount of permanent income risk. From the section above, you might think that you could simply change the attribute $\texttt{TranShkStd}$, solve the model again, and it would work.
That's almost true-- there's one extra step. $\texttt{TranShkStd}$ is a primitive input, but it's not the thing you actually want to change. Changing $\texttt{TranShkStd}$ doesn't actually update the income distribution... unless you tell it to (just like changing an agent's preferences does not change the consumption function that was stored for the old set of parameters -- until you invoke the $\texttt{solve}$ method again). In the cell below, we invoke the method $\texttt{update_income_process}$ so HARK knows to reconstruct the attribute $\texttt{IncomeDstn}$.
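After running the cell below, you can inspect what was rebuilt (a sketch; the attribute name follows this notebook's text, and the object's internal layout differs across HARK versions):
```python
print(type(OtherExample.IncomeDstn))
print(OtherExample.IncomeDstn[0])   # the discretized income-shock distribution for the (single) period
```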
End of explanation
# Use the line(s) below to plot the consumption functions against each other
Explanation: In the cell below, use your blossoming HARK skills to plot the consumption function for $\texttt{IndShockExample}$ and $\texttt{OtherExample}$ on the same figure.
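One possible completion (a sketch, reusing plot_funcs the same way as the earlier cells):
```python
plot_funcs([IndShockExample.solution[0].cFunc, OtherExample.solution[0].cFunc], 0., 10.)
```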
End of explanation |
14,246 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute envelope correlations in source space
Compute envelope correlations of orthogonalized activity
Step1: Here we do some things in the name of speed, such as crop (which will
hurt SNR) and downsample. Then we compute SSP projectors and apply them.
Step2: Now we band-pass filter our data and create epochs.
Step3: Compute the forward and inverse
Step4: Compute label time series and do envelope correlation
Step5: Compute the degree and plot it | Python Code:
# Authors: Eric Larson <[email protected]>
# Sheraz Khan <[email protected]>
# Denis Engemann <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.connectivity import envelope_correlation
from mne.minimum_norm import make_inverse_operator, apply_inverse_epochs
from mne.preprocessing import compute_proj_ecg, compute_proj_eog
data_path = mne.datasets.brainstorm.bst_resting.data_path()
subjects_dir = op.join(data_path, 'subjects')
subject = 'bst_resting'
trans = op.join(data_path, 'MEG', 'bst_resting', 'bst_resting-trans.fif')
src = op.join(subjects_dir, subject, 'bem', subject + '-oct-6-src.fif')
bem = op.join(subjects_dir, subject, 'bem', subject + '-5120-bem-sol.fif')
raw_fname = op.join(data_path, 'MEG', 'bst_resting',
'subj002_spontaneous_20111102_01_AUX.ds')
Explanation: Compute envelope correlations in source space
Compute envelope correlations of orthogonalized activity
:footcite:HippEtAl2012,KhanEtAl2018 in source space using resting state
CTF data.
End of explanation
raw = mne.io.read_raw_ctf(raw_fname, verbose='error')
raw.crop(0, 60).pick_types(meg=True, eeg=False).load_data().resample(80)
raw.apply_gradient_compensation(3)
projs_ecg, _ = compute_proj_ecg(raw, n_grad=1, n_mag=2)
projs_eog, _ = compute_proj_eog(raw, n_grad=1, n_mag=2, ch_name='MLT31-4407')
raw.info['projs'] += projs_ecg
raw.info['projs'] += projs_eog
raw.apply_proj()
cov = mne.compute_raw_covariance(raw) # compute before band-pass of interest
Explanation: Here we do some things in the name of speed, such as crop (which will
hurt SNR) and downsample. Then we compute SSP projectors and apply them.
End of explanation
raw.filter(14, 30)
events = mne.make_fixed_length_events(raw, duration=5.)
epochs = mne.Epochs(raw, events=events, tmin=0, tmax=5.,
baseline=None, reject=dict(mag=8e-13), preload=True)
del raw
Explanation: Now we band-pass filter our data and create epochs.
End of explanation
src = mne.read_source_spaces(src)
fwd = mne.make_forward_solution(epochs.info, trans, src, bem)
inv = make_inverse_operator(epochs.info, fwd, cov)
del fwd, src
Explanation: Compute the forward and inverse
End of explanation
labels = mne.read_labels_from_annot(subject, 'aparc_sub',
subjects_dir=subjects_dir)
epochs.apply_hilbert() # faster to apply in sensor space
stcs = apply_inverse_epochs(epochs, inv, lambda2=1. / 9., pick_ori='normal',
return_generator=True)
label_ts = mne.extract_label_time_course(
stcs, labels, inv['src'], return_generator=True)
corr = envelope_correlation(label_ts, verbose=True)
# let's plot this matrix
fig, ax = plt.subplots(figsize=(4, 4))
ax.imshow(corr, cmap='viridis', clim=np.percentile(corr, [5, 95]))
fig.tight_layout()
Explanation: Compute label time series and do envelope correlation
End of explanation
threshold_prop = 0.15 # percentage of strongest edges to keep in the graph
degree = mne.connectivity.degree(corr, threshold_prop=threshold_prop)
stc = mne.labels_to_stc(labels, degree)
stc = stc.in_label(mne.Label(inv['src'][0]['vertno'], hemi='lh') +
mne.Label(inv['src'][1]['vertno'], hemi='rh'))
brain = stc.plot(
clim=dict(kind='percent', lims=[75, 85, 95]), colormap='gnuplot',
subjects_dir=subjects_dir, views='dorsal', hemi='both',
smoothing_steps=25, time_label='Beta band')
Explanation: Compute the degree and plot it
End of explanation |
14,247 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create a new (initially empty) viewer. This starts a webserver in a background thread, which serves a copy of the Neuroglancer client, and which also can serve local volume data and handles sending and receiving Neuroglancer state updates.
Step1: Print a link to the viewer (only valid while the notebook kernel is running). Note that while the Viewer is running, anyone with the link can obtain any authentication credentials that the neuroglancer Python module obtains. Therefore, be very careful about sharing the link, and keep in mind that sharing the notebook will likely also share viewer links.
Step2: Add some example layers using the precomputed data source (HHMI Janelia FlyEM FIB-25 dataset).
Step3: Display a numpy array as an additional layer. A reference to the numpy array is kept only as long as the layer remains in the viewer.
Move the viewer position.
Step5: Hide the segmentation layer.
Step6: Modify the overlay volume, and call invalidate() to notify the Neuroglancer client.
Step7: Select a couple segments.
Step8: Print the neuroglancer viewer state. The Neuroglancer Python library provides a set of Python objects that wrap the JSON-encoded viewer state. viewer.state returns a read-only snapshot of the state. To modify the state, use the viewer.txn() function, or viewer.set_state.
Step9: Print the set of selected segments.|
Step10: Update the state by calling set_state directly.
Step11: Bind the 't' key in neuroglancer to a Python action.
Step12: Change the view layout to 3-d.
Step13: Take a screenshot (useful for creating publication figures, or for generating videos). While capturing the screenshot, we hide the UI and specify the viewer size so that we get a result independent of the browser size.
Step14: Change the view layout to show the segmentation side by side with the image, rather than overlayed. This can also be done from the UI by dragging and dropping. The side by side views by default have synchronized position, orientation, and zoom level, but this can be changed.
Step15: Remove the overlay layer.
Step16: Create a publicly sharable URL to the viewer state (only works for external data sources, not layers served from Python). The Python objects for representing the viewer state (neuroglancer.ViewerState and friends) can also be used independently from the interactive Python-tied viewer to create Neuroglancer links.
Step17: Stop the Neuroglancer web server, which invalidates any existing links to the Python-tied viewer. | Python Code:
viewer = neuroglancer.Viewer()
Explanation: Create a new (initially empty) viewer. This starts a webserver in a background thread, which serves a copy of the Neuroglancer client, and which also can serve local volume data and handles sending and receiving Neuroglancer state updates.
End of explanation
viewer
Explanation: Print a link to the viewer (only valid while the notebook kernel is running). Note that while the Viewer is running, anyone with the link can obtain any authentication credentials that the neuroglancer Python module obtains. Therefore, be very careful about sharing the link, and keep in mind that sharing the notebook will likely also share viewer links.
End of explanation
with viewer.txn() as s:
s.layers['image'] = neuroglancer.ImageLayer(source='precomputed://gs://neuroglancer-public-data/flyem_fib-25/image')
s.layers['segmentation'] = neuroglancer.SegmentationLayer(source='precomputed://gs://neuroglancer-public-data/flyem_fib-25/ground_truth', selected_alpha=0.3)
Explanation: Add some example layers using the precomputed data source (HHMI Janelia FlyEM FIB-25 dataset).
End of explanation
with viewer.txn() as s:
s.voxel_coordinates = [3000, 3000, 3000]
Explanation: Display a numpy array as an additional layer. A reference to the numpy array is kept only as long as the layer remains in the viewer.
Move the viewer position.
End of explanation
with viewer.txn() as s:
s.layers['segmentation'].visible = False
import cloudvolume
image_vol = cloudvolume.CloudVolume('https://storage.googleapis.com/neuroglancer-public-data/flyem_fib-25/image',
mip=0, bounded=True, progress=False, provenance={})
a = np.zeros((200,200,200), np.uint8)
def make_thresholded(threshold):
a[...] = image_vol[3000:3200,3000:3200,3000:3200][...,0] > threshold
make_thresholded(110)
# This volume handle can be used to notify the viewer that the data has changed.
volume = neuroglancer.LocalVolume(
a,
dimensions=neuroglancer.CoordinateSpace(
names=['x', 'y', 'z'],
units='nm',
scales=[8, 8, 8],
),
voxel_offset=[3000, 3000, 3000])
with viewer.txn() as s:
s.layers['overlay'] = neuroglancer.ImageLayer(
source=volume,
# Define a custom shader to display this mask array as red+alpha.
shader=
void main() {
float v = toNormalized(getDataValue(0)) * 255.0;
emitRGBA(vec4(v, 0.0, 0.0, v));
}
,
)
Explanation: Hide the segmentation layer.
End of explanation
make_thresholded(100)
volume.invalidate()
Explanation: Modify the overlay volume, and call invalidate() to notify the Neuroglancer client.
End of explanation
with viewer.txn() as s:
s.layers['segmentation'].segments.update([1752, 88847])
s.layers['segmentation'].visible = True
Explanation: Select a couple segments.
End of explanation
viewer.state
Explanation: Print the neuroglancer viewer state. The Neuroglancer Python library provides a set of Python objects that wrap the JSON-encoded viewer state. viewer.state returns a read-only snapshot of the state. To modify the state, use the viewer.txn() function, or viewer.set_state.
End of explanation
viewer.state.layers['segmentation'].segments
Explanation: Print the set of selected segments.
End of explanation
import copy
new_state = copy.deepcopy(viewer.state)
new_state.layers['segmentation'].segments.add(10625)
viewer.set_state(new_state)
Explanation: Update the state by calling set_state directly.
End of explanation
num_actions = 0
def my_action(s):
global num_actions
num_actions += 1
with viewer.config_state.txn() as st:
st.status_messages['hello'] = ('Got action %d: mouse position = %r' %
(num_actions, s.mouse_voxel_coordinates))
print('Got my-action')
print(' Mouse position: %s' % (s.mouse_voxel_coordinates,))
print(' Layer selected values: %s' % (s.selected_values,))
viewer.actions.add('my-action', my_action)
with viewer.config_state.txn() as s:
s.input_event_bindings.viewer['keyt'] = 'my-action'
s.status_messages['hello'] = 'Welcome to this example'
Explanation: Bind the 't' key in neuroglancer to a Python action.
End of explanation
with viewer.txn() as s:
s.layout = '3d'
s.projection_scale = 3000
Explanation: Change the view layout to 3-d.
End of explanation
from ipywidgets import Image
screenshot = viewer.screenshot(size=[1000, 1000])
screenshot_image = Image(value=screenshot.screenshot.image)
screenshot_image
Explanation: Take a screenshot (useful for creating publication figures, or for generating videos). While capturing the screenshot, we hide the UI and specify the viewer size so that we get a result independent of the browser size.
End of explanation
with viewer.txn() as s:
s.layout = neuroglancer.row_layout(
[neuroglancer.LayerGroupViewer(layers=['image', 'overlay']),
neuroglancer.LayerGroupViewer(layers=['segmentation'])])
Explanation: Change the view layout to show the segmentation side by side with the image, rather than overlayed. This can also be done from the UI by dragging and dropping. The side by side views by default have synchronized position, orientation, and zoom level, but this can be changed.
End of explanation
with viewer.txn() as s:
s.layout = neuroglancer.row_layout(
[neuroglancer.LayerGroupViewer(layers=['image']),
neuroglancer.LayerGroupViewer(layers=['segmentation'])])
Explanation: Remove the overlay layer.
End of explanation
print(neuroglancer.to_url(viewer.state))
Explanation: Create a publicly sharable URL to the viewer state (only works for external data sources, not layers served from Python). The Python objects for representing the viewer state (neuroglancer.ViewerState and friends) can also be used independently from the interactive Python-tied viewer to create Neuroglancer links.
End of explanation
neuroglancer.stop()
Explanation: Stop the Neuroglancer web server, which invalidates any existing links to the Python-tied viewer.
End of explanation |
14,248 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Hidden Treasure
This ebook contains few lesser known Python gems. As usual, I will try to keep them updated and will continue to expand. If you wish to add any new, send them to me at (funmayank @ yahoo . co . in).
Variables
In-place value swapping
Step1: Unicode identifier
Python 3 allows to have unicode identifier's, which allows non-english speaking users to code.
Step2: Integer
Negative round
round is a function to round off the numbers and its normal usage is as follows
Step3: The second parameter defines the decimal number to which the number to rounded of. But if we provide a -ve number to it then it starts rounding of the number itself instead of decimal digit as shown in the below example
Step4: pow power - pow() can calculate (x ** y) % z
Step6: String
Multi line strings
In python we can have multiple ways to achieve multi line strings.
Using triple quotes
Step7: Using brackets "( )"
Step8: Print String multiple times
using string multiply with int results in concatinating string that number of times. Lets print a line on console using -.
Step9: Search substring in string
Step10: Join list of strings
Step11: Reverse the string
There are few methods to reverse the string, but two are most common
using slices
Step12: List / Tuple
tuple / list unpacking
Step13: List/tuple multiplication ;)
similar to String we can literally multiply string and tuples with integer as shown below
Step14: Array Transpose using zip
Step15: enumerate with predefined starting index
Step16: Now, lets change the starting index to 10
Step17: Reverse the list
built-in keyword reversed allows the list to be reversed.
Step18: Flattening of list
Step19: Method 1
Step20: NOTE
Step21: Method 2
Step22: NOTE
Step23: Lets update code to handle this situation
Step24: Method 3
Step25: NOTE
Step26: Method 4
Step27: NOTE
Step28: Method 5
Step29: NOTE
Step30: Method 6
Step31: NOTE
Step32: Infinite Recursion
Step33: lets check if really we have infinite recursion, with the following code. We should get RuntimeError
Step34: Both the variables are still pointing to same list, thus change in one will change another also.
Step35: Deepcopy a list
Step36: !!! ouch moment !!!
Step37: rescue using Deep copy
Step38: Dictionaries
Reverse the key values in unique dictionary
Step39: Method 1
Step40: Method 2
Step41: Method 3
Step42: Method 4
Step43: Creating dictionaries
Multiple methods can be used to create a dictionary. We are going to cover few of the cool ones.
Using two lists
Step44: Using arguments
Step45: list of tuples
Step46: By adding two dictionary using copy and update
Step47: ```python
for Python >= 3.5
Step48: if
Conditional Assignment
Step49: Functions
default arguments
Dangerous mutable default arguments
Step51: TODO
Step52: Function arguments
Step53: Finally returns the ultimate return
Step54: OOPS
Attributes
Dynamically added attributes
Step55: operators
Chaining comparison operators
Step56: enumerate
Wrap an iterable with enumerate and it will yield the item along with its index.
Step58: Generators
Sending values into generator functions
https
Step60: Descriptor
http
Step61: I/O
with
open multiple files in a single with.
Step62: Exception
Re-raising exceptions
Step63: !!! Easter Eggs !!!
Step64: Lets encrypt our code using cot13 | Python Code:
a = 10
b = "TEST"
a, b = b, a
print(a, b)
Explanation: Python Hidden Treasure
This ebook contains a few lesser-known Python gems. As usual, I will try to keep them updated and will continue to expand the list. If you wish to add any new ones, send them to me at (funmayank @ yahoo . co . in).
Variables
In-place value swapping
End of explanation
हिन्दी = 10
print(हिन्दी)
Explanation: Unicode identifier
Python 3 allows unicode identifiers, which lets non-English speaking users write code using names in their own language.
End of explanation
num = round(283746.32321, 1)
print(num)
Explanation: Integer
Negative round
round is a function to round off the numbers and its normal usage is as follows
End of explanation
num = round(283746.32321, -2)
print(num)
num = round(283746.32321, -1)
print(num)
num = round(283746.32321, -4)
print(num)
Explanation: The second parameter defines the decimal place to which the number is rounded off. But if we provide a -ve number, round starts rounding off the integer part of the number itself instead of the decimal digits, as shown in the example below
End of explanation
x, y, z = 1019292929191, 1029228322, 222224
pow(x, y, z)
# Do not run this, please. it will take forever.
##### (x ** y) % z
Explanation: pow power - pow() can calculate (x ** y) % z
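With small numbers you can check that the three-argument form gives the same answer without ever materializing the huge intermediate power (a quick sketch):
```python
print(pow(7, 128, 13))    # modular exponentiation, computed efficiently
print((7 ** 128) % 13)    # same result, but this builds the full 7**128 first
```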
End of explanation
txt = The Supreme Lord said: The indestructible, transcendental living
entity is called Brahman and his eternal nature is called the
self. Action pertaining to the development of these material
bodies is called karma, or fruitive activities.
print(txt)
Explanation: String
Multi line strings
In python we can have multiple ways to achieve multi line strings.
Using triple quotes
End of explanation
txt = ("The Supreme Lord said: The indestructible, transcendental living"
"entity is called Brahman and his eternal nature is called the "
"self. Action pertaining to the development of these material"
"bodies is called karma, or fruitive activities.")
print(txt)
txt = "The Supreme Lord said: The indestructible, transcendental living " \
"entity is called Brahman and his eternal nature is called the"
print(txt)
Explanation: Using brackets "( )"
End of explanation
print("~^*" * 10)
Explanation: Print String multiple times
Multiplying a string by an int repeats (concatenates) the string that many times. Let's print a line on the console using -.
End of explanation
print("ash" in "ashwini")
print("ash" is ['a', 's', 'h'])
print("ash" is ('a', 's', 'h'))
print("ash" is 'ash')
### Implicit concatenation without "+" operator
name = "Mayank" " " "Johri"
print(name)
try:
name = "Mayank" " " "Johri" ' .'
print(name)
except SyntaxError:
pass
Explanation: Search substring in string
End of explanation
list_cities = ["Bhopal", "New Delhi", "Agra", "Mumbai", "Aligarh", "Hyderabad"]
# Lets join the list of string in string using `join`
str_cities = ", ".join(list_cities)
print(str_cities)
list_cities = ("Bhopal", "New Delhi", "Agra", "Mumbai", "Aligarh", "Hyderabad")
# Lets join the list of string in string using `join`
str_cities = ", ".join(list_cities)
print(str_cities)
Explanation: Join list of strings
End of explanation
txt = "The Mother Earth"
print(txt[::-1])
txt = "The Mother Earth"
print("".join(list(reversed(txt))))
Explanation: Reverse the string
There are a few methods to reverse a string, but two are the most common
using slices
End of explanation
a, b, *remaining = (1, 2, 3, 4, 5, "test")
print(a, b)
print(remaining)
a, b, *remaining = [1, 2, 3, 4, 5, "test"]
print(a, b)
print(remaining)
first,*middle,last = (1, 2, 3, 4, 5, 6, 7, 8)
print(first, last)
print(middle)
first,*middle,last = [1, 2, 3, 4, 5, 6, 7, 8]
print(first, last)
print(middle)
Explanation: List / Tuple
tuple / list unpacking
End of explanation
lst = [1, 2, 3]
print(lst * 3)
lst = (1, 2, 3)
print(lst * 3)
Explanation: List/tuple multiplication ;)
similar to String we can literally multiply string and tuples with integer as shown below
End of explanation
a = [(1,2), (3,4), (5,6)]
print(list(zip(a)))
print("*" * 33)
print(list(zip(*a)))
a = [(1, 2, 7),
(3, 4, 8),
(5, 6, 9)]
print(list(zip(a)))
print("*" * 33)
print(list(zip(*a)))
Explanation: Array Transpose using zip
End of explanation
lst = ["Ashwini", "Banti", "Bhaiya", "Mayank", "Shashank", "Rahul" ]
list(enumerate(lst))
Explanation: enumerate with predefined starting index
End of explanation
print(list(enumerate(lst, 10)))
Explanation: Now, lets change the starting index to 10
End of explanation
lst = [1, 2, 3, 4, 53]
print(list(reversed(lst)))
print(lst[::-1])
Explanation: Reverse the list
built-in keyword reversed allows the list to be reversed.
End of explanation
l = [[1,2], [3], [4,5], [6], [7, 8, 9]]
l1 = [[1,2], 3, [4,5], [6], [7, 8, 9]]
l2 = [[1,2], [3], [4,5], [6], [[7, 8], 9], 10]
Explanation: Flattening of list
End of explanation
from itertools import chain
flattened_list = list(chain(*l))
print(flattened_list)
Explanation: Method 1:
End of explanation
from itertools import chain
try:
flattened_list = list(chain(*l1))
print(flattened_list)
except:
print("Error !!!")
from itertools import chain
flattened_list = list(chain(*l2))
print(flattened_list)
Explanation: NOTE: this method will fail if any of the element is non list item as shown in the below example
End of explanation
flattened_list = [y for x in l for y in x]
print(flattened_list)
Explanation: Method 2:
End of explanation
flattened_list = [y for x in l1 for y in x]
print(flattened_list)
Explanation: NOTE: this method will fail if any of the element is non list item as shown in the below example
End of explanation
flattened_list = [si for i in l1 for si in (i if isinstance(i, list) else [i])]
print(flattened_list)
Explanation: Lets update code to handle this situation
End of explanation
flattened_list = sum(l, [])
print(flattened_list)
Explanation: Method 3:
End of explanation
sum(l1, [])
Explanation: NOTE: this method will fail if any of the element is non list item as shown in the below example
End of explanation
flattened_list = []
for x in l:
for y in x:
flattened_list.append(y)
print(flattened_list)
Explanation: Method 4:
End of explanation
flattened_list = []
for x in l1:
for y in x:
flattened_list.append(y)
print(flattened_list)
Explanation: NOTE: this method will fail if any of the element is non list item as shown in the below example
End of explanation
from functools import reduce
flattened_list = reduce(lambda x, y: x + y, l)
print(flattened_list)
Explanation: Method 5:
End of explanation
flattened_list = reduce(lambda x, y: x + y, l1)
print(flattened_list)
Explanation: NOTE: this method will fail if any of the element is non list item as shown in the below example
End of explanation
import operator
flattened_list = reduce(operator.add, l)
print(flattened_list)
Explanation: Method 6:
End of explanation
import operator
flattened_list = reduce(operator.add, l1)
print(flattened_list)
Explanation: NOTE: this method will fail if any of the element is non list item as shown in the below example
End of explanation
lst = [1, 2]
lst.append(lst)
print(lst)
Explanation: Infinite Recursion
End of explanation
ori = [1, 2, 3, 4, 5, 6]
dup = ori
print(id(ori))
print(id(dup))
Explanation: lets check if really we have infinite recursion, with the following code. We should get RuntimeError: maximum recursion depth exceeded in comparison error later in the execution.
```python
def test(lst):
    for a in lst:
        if isinstance(a, list):
            print("A", a)
            test(a)
        print(a)
test(lst)
```
Copy a list
End of explanation
dup.insert(0, 29)
print(ori)
print(dup)
Explanation: Both variables still point to the same list, so a change made through one is visible through the other as well.
End of explanation
ori = [1, 2, 3, 4, 5, 6]
dup = ori[:]
print(id(ori))
print(id(dup))
dup.insert(0, 29)
print(ori)
print(dup)
Explanation: Deepcopy a list
End of explanation
ori = [1, 2, 3, [4], 5, 6]
dup = ori[:]
print(id(ori))
print(id(dup))
ori[3].append(10)
print(ori)
print(dup)
print(id(ori[3]))
print(id(dup[4]))
print(id(ori))
print(id(dup))
Explanation: !!! ouch moment !!!
End of explanation
from copy import deepcopy
ori = [1, 2, 3, [4], 5, 6]
dup = deepcopy(ori)
print(ori)
print(dup)
print(id(ori[3]))
print(id(dup[3]))
print(id(ori))
print(id(dup))
ori[3].append(10)
print(ori)
print(dup)
print(id(ori[3]))
print(id(dup[3]))
print(id(ori))
print(id(dup))
Explanation: rescue using Deep copy
End of explanation
states_capitals = {'MP': 'Bhopal', 'UP': 'Lucknow', 'Rajasthan': 'Jaipur'}
Explanation: Dictionaries
Reverse the key values in unique dictionary
End of explanation
capitals_states = dict(zip(*list(zip(*states_capitals.items()))[::-1]))
print(capitals_states)
Explanation: Method 1:
End of explanation
capitals_states = dict([v, k] for k, v in states_capitals.items())
print(capitals_states)
Explanation: Method 2:
End of explanation
capitals_states = dict(zip(states_capitals.values(), states_capitals.keys()))
print(capitals_states)
Explanation: Method 3:
End of explanation
capitals_states = {states_capitals[k] : k for k in states_capitals}
print(capitals_states)
Explanation: Method 4:
End of explanation
states = ["MP", "UP", "Rajasthan"]
capitals = ["Bhopal", "Lucknow", "Jaipur"]
states_capitals = dict(zip(states, capitals))
print(states_capitals)
Explanation: Creating dictionaries
Multiple methods can be used to create a dictionary. We are going to cover few of the cool ones.
Using two lists
End of explanation
states_capitals = dict(MP='Bhopal', Rajasthan='Jaipur', UP='Lucknow')
print(states_capitals)
Explanation: Using arguments
End of explanation
states_capitals = dict([('MP', 'Bhopal'), ('UP', 'Lucknow'), ('Rajasthan', 'Jaipur')])
print(states_capitals)
Explanation: list of tuples
End of explanation
a = {'MP': 'Bhopal', 'UP': 'Lucknow', 'Rajasthan': 'Jaipur'}
b = {'Jaipur': 'Rajasthan', 'Bhopal': 'MP', 'Lucknow': 'UP'}
c = a.copy()
c.update(b)
print(c)
Explanation: By adding two dictionaries using copy and update
End of explanation
def double_bubble(x):
yield x
yield x*x
d = {k: v for k, v in (double_bubble(i) for i in range(5))}  # each double_bubble(i) yields exactly two values, which unpack as k, v
print(d)
{chr(97+i)*2 : i for i in range(5)}
Explanation: ```python
# for Python >= 3.5: https://www.python.org/dev/peps/pep-0448
c = {**a, **b}
print(c)
```
Using dictionary comprehension
End of explanation
y = 10
x = 3 if (y == 1) else 2
print(x)
x = 3 if (y == 1) else 2 if (y == -1) else 1
print(x)
Explanation: if
Conditional Assignment
End of explanation
def foo(x=[]):
x.append(1)
print(x)
foo()
foo()
foo()
# instead use:
def fun(x=None):
if x is None:
x = []
x.append(1)
print(x)
fun()
fun()
fun()
Explanation: Functions
default arguments
Dangerous mutable default arguments
End of explanation
def draw_point(x, y):
You can unpack a list or a dictionary as
function arguments using * and **.
print(x, y)
point_foo = (3, 4)
point_bar = {'y': 3, 'x': 2}
draw_point(*point_foo)
draw_point(**point_bar)
Explanation: The same trap applies to any mutable default object -- a dict or set default behaves just like the list above.
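For instance, a dict default accumulates entries across calls in exactly the same way (an illustrative sketch; register is a made-up name, not from the original text):
```python
def register(name, seen={}):   # the dict is created once, at function definition time
    seen[name] = True
    return seen

print(register('a'))   # {'a': True}
print(register('b'))   # {'a': True, 'b': True} -- 'a' is still there
```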
Function argument unpacking
End of explanation
def letsEcho():
test = "Hello"
print(test)
letsEcho.test = "Welcome"
print(letsEcho.test)
letsEcho()
print(letsEcho.test)
Explanation: Function arguments
End of explanation
def dum_dum():
try:
return '`dum dum` returning from try'
finally:
return '`dum dum` returning from finally'
print(dum_dum())
Explanation: finally returns the ultimate return: a return inside the finally block overrides the return from the try block.
End of explanation
class Test():
def __getattribute__(self, name):
f = lambda: " ".join([name, name[::-1]])
return f
t = Test()
# Any attribute name, even one never defined, is resolved at runtime by __getattribute__
t.rev()
Explanation: OOPS
Attributes
Dynamically added attributes
End of explanation
x = 5
1 < x < 100
1 < x > 100
1 > x > 100
1 > x < 100
10 < x < 20
x < 10 < x*10 < 100
x < 10 < x*10 < 50
x < 10 < x*10 <= 50
10 > x <= 9
5 == x > 4
x == 5 > 4
Explanation: operators
Chaining comparison operators
End of explanation
a = ['a', 'b', 'c', 'd', 'e']
for index, item in enumerate(a): print (index, item)
Explanation: enumerate
Wrap an iterable with enumerate and it will yield the item along with its index.
End of explanation
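As a small extra sketch (not in the original notes), enumerate also accepts a start value, which is handy for 1-based numbering.
```python
a = ['a', 'b', 'c', 'd', 'e']
for index, item in enumerate(a, start=1):
    print(index, item)  # numbering begins at 1 instead of 0
```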
def mygen():
    """Yield 5 until something else is passed back via send()"""
a = 5
while True:
f = (yield a) #yield a and possibly get f in return
if f is not None:
a = f #store the new value
g = mygen()
print(next(g))
print(next(g))
g.send(7)
print(next(g))
print(next(g))
g.send(17)
print(next(g))
print(next(g))
Explanation: Generators
Sending values into generator functions
https://www.python.org/dev/peps/pep-0342/, also please read http://www.dabeaz.com/coroutines/
End of explanation
def seek_next_line(f):
    """The iter(callable, until_value) function repeatedly calls
    callable and yields its result until until_value is returned."""
for c in iter(lambda: f.read(1),'\n'):
pass
Explanation: Descriptor
http://users.rcn.com/python/download/Descriptor.htm
Iterators
iter() can take a callable argument
End of explanation
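The descriptor link above is given without any code, so here is a minimal, hedged sketch of the descriptor protocol (the class and attribute names are invented for illustration).
```python
class Celsius(object):
    """A data descriptor that stores a temperature on the owning instance."""
    def __get__(self, instance, owner):
        return instance.__dict__.get('_celsius', 0.0)
    def __set__(self, instance, value):
        instance.__dict__['_celsius'] = float(value)

class Thermometer(object):
    temperature = Celsius()  # attribute access is routed through the descriptor

t = Thermometer()
t.temperature = 21.5
print(t.temperature)
```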
try:
with open('a', 'w') as a, open('b', 'w') as b:
pass
except IOError as e:
print ('Operation failed: %s' % e.strerror)
#### write file using `print`
with open("outfile.txt" , "w+") as outFile:
print('Modern Standard Hindi is a standardised and sanskritised register of the Hindustani language.', file=outFile)
Explanation: I/O
with
open multiple files in a single with.
End of explanation
# Python 2 syntax
try:
some_operation()
except SomeError, e:
if is_fatal(e):
raise
handle_nonfatal(e)
def some_operation():
raise Exception
def is_fatal(e):
return True
# Python 3 syntax
try:
some_operation()
except Exception as e:
if is_fatal(e):
raise
handle_nonfatal(e)
Explanation: Exception
Re-raising exceptions:
End of explanation
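A related, hedged sketch (Python 3 only, not in the original notes): explicit exception chaining with raise ... from keeps the original traceback attached to the new exception.
```python
def parse_config(text):
    try:
        return int(text)
    except ValueError as e:
        # the original ValueError is preserved as __cause__ of the new exception
        raise RuntimeError("bad config value: %r" % text) from e
```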
from __future__ import braces
import __hello__
Explanation: !!! Easter Eggs !!!
End of explanation
import codecs
s = 'The Zen of Python, by Tim Peters'
enc = codecs.getencoder( "rot-13" )
dec = codecs.getdecoder("rot-13")
os = enc( s )[0]
print(os)
print(dec(os)[0])
import this
Explanation: Let's encrypt our code using rot13
End of explanation |
14,249 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2017 Google LLC.
Step1: # TensorFlow Programming Concepts
Learning objectives:
* Learn the basics of the TensorFlow programming model, focusing on the following concepts:
  * tensors
  * operations
  * graphs
  * sessions
* Build a simple TensorFlow program that creates a default graph, and a session that runs the graph
Note: please read through this tutorial carefully. The TensorFlow programming model is probably different from others that you have encountered, and consequently may not be as intuitive as you'd expect.
## Overview of Concepts
TensorFlow gets its name from tensors, which are arrays of arbitrary dimensionality. Using TensorFlow, you can manipulate tensors with a very high number of dimensions. That said, most of the time you will work with one or more of the following low-dimensional tensors:
A scalar is a 0-d array (a 0th-order tensor). For example, 'Howdy' or 5
A vector is a 1-d array (a 1st-order tensor). For example, [2, 3, 5, 7, 11] or [5]
A matrix is a 2-d array (a 2nd-order tensor). For example, [[3.1, 8.2, 5.9][4.3, -2.7, 6.5]]
TensorFlow operations create, destroy, and manipulate tensors. Most of the lines of code in a typical TensorFlow program are operations.
A TensorFlow graph (also known as a computational graph or a dataflow graph) is a graph data structure. Many TensorFlow programs consist of a single graph, but TensorFlow programs may optionally create multiple graphs. A graph's nodes are operations; a graph's edges are tensors. Tensors flow through the graph, manipulated at each node by an operation. The output tensor of one operation often becomes the input tensor to a subsequent operation. TensorFlow implements a lazy execution model, meaning that nodes are only computed when needed, based on the needs of associated nodes.
Tensors can be stored in the graph as constants or variables. As you might guess, constants hold tensors whose values can't change, while variables hold tensors whose values can change. However, what you may not have guessed is that constants and variables are just more operations in the graph. A constant is an operation that always returns the same tensor value. A variable is an operation that will return whichever tensor has been assigned to it.
To define a constant, use the tf.constant operator and pass in its value. For example:
x = tf.constant([5.2])
Similarly, you can create a variable like this:
y = tf.Variable([5])
Or you can create the variable first and then subsequently assign a value like this (note that you always have to specify a default value):
y = tf.Variable([0])
y = y.assign([5])
Once you've defined some constants or variables, you can combine them with other operations like tf.add. When you evaluate the tf.add operation, it will call your tf.constant or tf.Variable operations to get their values and then return a new tensor with the sum of those values.
Graphs must run within a TensorFlow session, which holds the state for the graph(s) it runs:
with tf.Session() as sess
Step2: Don't forget to execute the preceding code block (the import statements).
Other common import statements include the following:
import matplotlib.pyplot as plt # Dataset visualization.
import numpy as np # Low-level numerical Python library.
import pandas as pd # Higher-level numerical Python library.
TensorFlow provides a default graph. However, we recommend explicitly creating your own Graph instead, to facilitate tracking state (e.g., you may wish to work with a different Graph in each cell).
Step3: ## Exercise: Introduce a Third Operand
Modify the code listing above so that it adds three integers instead of two:
Define a third scalar integer constant, z, and assign it a value of 4.
Add z to sum to yield a new sum.
Hint: See the API docs for tf.add() for more details on its function signature.
Re-run the modified code block. Did the program generate the correct grand total?
### Solution
Click below for the solution. | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2017 Google LLC.
End of explanation
import tensorflow as tf
Explanation: # TensorFlow Programming Concepts
Learning objectives:
* Learn the basics of the TensorFlow programming model, focusing on the following concepts:
  * tensors
  * operations
  * graphs
  * sessions
* Build a simple TensorFlow program that creates a default graph, and a session that runs the graph
Note: please read through this tutorial carefully. The TensorFlow programming model is probably different from others that you have encountered, and consequently may not be as intuitive as you'd expect.
## Overview of Concepts
TensorFlow gets its name from tensors, which are arrays of arbitrary dimensionality. Using TensorFlow, you can manipulate tensors with a very high number of dimensions. That said, most of the time you will work with one or more of the following low-dimensional tensors:
A scalar is a 0-d array (a 0th-order tensor). For example, 'Howdy' or 5
A vector is a 1-d array (a 1st-order tensor). For example, [2, 3, 5, 7, 11] or [5]
A matrix is a 2-d array (a 2nd-order tensor). For example, [[3.1, 8.2, 5.9][4.3, -2.7, 6.5]]
TensorFlow operations create, destroy, and manipulate tensors. Most of the lines of code in a typical TensorFlow program are operations.
A TensorFlow graph (also known as a computational graph or a dataflow graph) is a graph data structure. Many TensorFlow programs consist of a single graph, but TensorFlow programs may optionally create multiple graphs. A graph's nodes are operations; a graph's edges are tensors. Tensors flow through the graph, manipulated at each node by an operation. The output tensor of one operation often becomes the input tensor to a subsequent operation. TensorFlow implements a lazy execution model, meaning that nodes are only computed when needed, based on the needs of associated nodes.
Tensors can be stored in the graph as constants or variables. As you might guess, constants hold tensors whose values can't change, while variables hold tensors whose values can change. However, what you may not have guessed is that constants and variables are just more operations in the graph. A constant is an operation that always returns the same tensor value. A variable is an operation that will return whichever tensor has been assigned to it.
To define a constant, use the tf.constant operator and pass in its value. For example:
x = tf.constant([5.2])
Similarly, you can create a variable like this:
y = tf.Variable([5])
Or you can create the variable first and then subsequently assign a value like this (note that you always have to specify a default value):
y = tf.Variable([0])
y = y.assign([5])
Once you've defined some constants or variables, you can combine them with other operations like tf.add. When you evaluate the tf.add operation, it will call your tf.constant or tf.Variable operations to get their values and then return a new tensor with the sum of those values.
Graphs must run within a TensorFlow session, which holds the state for the graph(s) it runs:
with tf.Session() as sess:
  initialization = tf.global_variables_initializer()
  print(y.eval())
When working with tf.Variable, you must explicitly initialize the variables by calling tf.global_variables_initializer at the start of your session, as shown above.
Note: A session can distribute graph execution across multiple machines (assuming the program is run on a distributed computation framework). For more information, see Distributed TensorFlow.
Summary
TensorFlow programming is essentially a two-step process:
Assemble constants, variables, and operations into a graph.
Evaluate those constants, variables, and operations within a session.
## Creating a Simple TensorFlow Program
Let's look at how to code a simple TensorFlow program that adds two constants.
### Provide import statements
As with nearly all Python programs, you'll begin by specifying some import statements.
The set of import statements required to run a TensorFlow program depends, of course, on the features your program will access. At a minimum, you must provide the import tensorflow statement in all TensorFlow programs:
End of explanation
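The explanation above mentions tf.Variable and tf.global_variables_initializer, but the cells in this notebook only use constants. The following is a minimal, hedged sketch of that variable workflow in the same TensorFlow 1.x style (the graph and variable names are illustrative).
```python
g_var = tf.Graph()
with g_var.as_default():
    y = tf.Variable([0], name="y_var")
    assign_y = y.assign([5])
    init = tf.global_variables_initializer()

with tf.Session(graph=g_var) as sess:
    sess.run(init)       # variables must be initialized before use
    sess.run(assign_y)   # apply the assignment operation
    print(y.eval())      # -> [5]
```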
from __future__ import print_function
import tensorflow as tf
# Create a graph.
g = tf.Graph()
# Establish the graph as the "default" graph.
with g.as_default():
# Assemble a graph consisting of the following three operations:
# * Two tf.constant operations to create the operands.
# * One tf.add operation to add the two operands.
x = tf.constant(8, name="x_const")
y = tf.constant(5, name="y_const")
sum = tf.add(x, y, name="x_y_sum")
# Now create a session.
# The session will run the default graph.
with tf.Session() as sess:
print(sum.eval())
Explanation: Don't forget to execute the preceding code block (the import statements).
Other common import statements include the following:
import matplotlib.pyplot as plt # Dataset visualization.
import numpy as np # Low-level numerical Python library.
import pandas as pd # Higher-level numerical Python library.
TensorFlow provides a default graph. However, we recommend explicitly creating your own Graph instead, to facilitate tracking state (e.g., you may wish to work with a different Graph in each cell).
End of explanation
# Create a graph.
g = tf.Graph()
# Establish our graph as the "default" graph.
with g.as_default():
# Assemble a graph consisting of three operations.
# (Creating a tensor is an operation.)
x = tf.constant(8, name="x_const")
y = tf.constant(5, name="y_const")
sum = tf.add(x, y, name="x_y_sum")
# Task 1: Define a third scalar integer constant z.
z = tf.constant(4, name="z_const")
# Task 2: Add z to `sum` to yield a new sum.
new_sum = tf.add(sum, z, name="x_y_z_sum")
# Now create a session.
# The session will run the default graph.
with tf.Session() as sess:
# Task 3: Ensure the program yields the correct grand total.
print(new_sum.eval())
Explanation: ## Exercise: Introduce a Third Operand
Modify the code listing above so that it adds three integers instead of two:
Define a third scalar integer constant, z, and assign it a value of 4.
Add z to sum to yield a new sum.
Hint: See the API docs for tf.add() for more details on its function signature.
Re-run the modified code block. Did the program generate the correct grand total?
### Solution
Click below for the solution.
End of explanation |
14,250 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
340-Plotting and Fitting Data
This is the same set of data and fitting function as in the "Intro to Matlab" document.
Data and error bars
Step1: Fitting function to the data
For physical reasons we expect our data is described by a circle.
The equation of a circle with radius $a$ centered at $(x,y)=(b,c)$ is given by
$$(x-b)^2+(y-c)^2 = a^2$$
Let's rewrite this in terms of $y$,
$$y=-\sqrt{a^2-(x-b)^2}+c$$
We define the function and then want to find the best estimates for $a, b, c$ consistent with our data.
Step2: Here are the initial guesses for the parameters $a$, $b$, and $c$ to pass to the fitting function.
Step3: The 'curve_fit' function gets the best y by adjusting the parameters 'p'.
Step4: Now we use the fitted parameters in our function to compare with the data. | Python Code:
%pylab inline
# mathematical routines are expecting 'array'
x = array([-10, -9, -8, -7, -6, -5, -4, -3, 0]);
y = array([2.65, 2.10, 1.90, 1.40, 1.00, 0.80, 0.60, 0.30, 0.00]);
ey = array([0.1, 0.1, 0.1, 0.1, 0.05, 0.05, 0.05, 0.05, 0.2]);
# Plot the data with error bars
errorbar(x,y,ey,linestyle = '',marker = 'o') # no connecting line, circle
# Don’t forget axes labels
xlabel('x (mm)')
ylabel('y (mm)')
axis([-12,0.5,-0.5,3])
grid(True)
Explanation: 340-Plotting and Fitting Data
This is the same set of data and fitting function as in the "Intro to Matlab" document.
Data and error bars
End of explanation
def myfun(x,a,b,c):
ans = -sqrt(a**2-(x-b)**2)+c # this is y, "the function to be fit"
return ans
Explanation: Fitting function to the data
For physical reasons we expect our data is described by a circle.
The equation of a circle with radius $a$ centered at $(x,y)=(b,c)$ is given by
$$(x-b)^2+(y-c)^2 = a^2$$
Let's rewrite this in terms of $y$,
$$y=-\sqrt{a^2-(x-b)^2}+c$$
We define the function and then want to find the best estimates for $a, b, c$ consistent with our data.
End of explanation
p0 = [15, 0, 15]
Explanation: Here are the initial guesses for the parameters $a$, $b$, and $c$ to pass to the fitting function.
End of explanation
from scipy.optimize import curve_fit # import the curve fitting function
plsq, pcov = curve_fit(myfun, x, y, p0, ey) # curve fit returns p and covariance matrix
# these give the parameters and the uncertainties
print('a = %.3f +/- %.3f' % (plsq[0], sqrt(pcov[0,0])))
print('b = %.3f +/- %.3f' % (plsq[1], sqrt(pcov[1,1])))
print('c = %.3f +/- %.3f' % (plsq[2], sqrt(pcov[2,2])))
Explanation: The 'curve_fit' function gets the best y by adjusting the parameters 'p'.
End of explanation
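As an optional, hedged check of fit quality (not in the original notebook), the reduced chi-square compares the residuals with the error bars; a value near 1 suggests the circle model is consistent with the data.
```python
# reduced chi-square: sum of squared, error-weighted residuals
# divided by (number of points - number of fitted parameters)
residuals = (y - myfun(x, *plsq)) / ey
chi2_red = sum(residuals**2) / (len(x) - len(plsq))
print('reduced chi-square = %.2f' % chi2_red)
```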
xlots = linspace(-11,0.5) # need lots of data points for smooth curve
yfit = myfun(xlots,plsq[0],plsq[1],plsq[2]) # use fit results for a, b, c
errorbar(x,y,ey,linestyle = '',marker = 'o')
xlabel('x (mm)')
ylabel('y (mm)')
plot(xlots,yfit)
title('Least-squares fit to data')
legend(['data','Fit'])
axis([-12,0.5,-0.5,3])
grid(True)
Explanation: Now we use the fitted parameters in our function to compare with the data.
End of explanation |
14,251 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploration of Prudential Life Insurance Data
Data retrieved from
Step1: Define categorical data types
Step2: Importing life insurance data set
The following variables are all categorical (nominal)
Step3: Grouping of various categorical data sets
Histograms and descriptive statistics for Risk Response, Ins_Age, BMI, Wt
Step4: Histograms and descriptive statistics for Product_Info_1-7
Step5: Split dataframes into categorical, continuous, discrete, dummy, and response
Step6: Descriptive statistics and scatter plot relating Product_Info_2 and Response | Python Code:
# Importing libraries
%pylab inline
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from sklearn import preprocessing
import numpy as np
# Convert variable data into categorical, continuous, discrete,
# and dummy variable lists the following into a dictionary
Explanation: Exploration of Prudential Life Insurance Data
Data retrieved from:
https://www.kaggle.com/c/prudential-life-insurance-assessment
File descriptions:
train.csv - the training set, contains the Response values
test.csv - the test set, you must predict the Response variable for all rows in this file
sample_submission.csv - a sample submission file in the correct format
Data fields:
Variable | Description
-------- | ------------
Id | A unique identifier associated with an application.
Product_Info_1-7 | A set of normalized variables relating to the product applied for
Ins_Age | Normalized age of applicant
Ht | Normalized height of applicant
Wt | Normalized weight of applicant
BMI | Normalized BMI of applicant
Employment_Info_1-6 | A set of normalized variables relating to the employment history of the applicant.
InsuredInfo_1-6 | A set of normalized variables providing information about the applicant.
Insurance_History_1-9 | A set of normalized variables relating to the insurance history of the applicant.
Family_Hist_1-5 | A set of normalized variables relating to the family history of the applicant.
Medical_History_1-41 | A set of normalized variables relating to the medical history of the applicant.
Medical_Keyword_1-48 | A set of dummy variables relating to the presence of/absence of a medical keyword being associated with the application.
Response | This is the target variable, an ordinal variable relating to the final decision associated with an application
The following variables are all categorical (nominal):
Product_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7, Insurance_History_1, Insurance_History_2, Insurance_History_3, Insurance_History_4, Insurance_History_7, Insurance_History_8, Insurance_History_9, Family_Hist_1, Medical_History_2, Medical_History_3, Medical_History_4, Medical_History_5, Medical_History_6, Medical_History_7, Medical_History_8, Medical_History_9, Medical_History_11, Medical_History_12, Medical_History_13, Medical_History_14, Medical_History_16, Medical_History_17, Medical_History_18, Medical_History_19, Medical_History_20, Medical_History_21, Medical_History_22, Medical_History_23, Medical_History_25, Medical_History_26, Medical_History_27, Medical_History_28, Medical_History_29, Medical_History_30, Medical_History_31, Medical_History_33, Medical_History_34, Medical_History_35, Medical_History_36, Medical_History_37, Medical_History_38, Medical_History_39, Medical_History_40, Medical_History_41
The following variables are continuous:
Product_Info_4, Ins_Age, Ht, Wt, BMI, Employment_Info_1, Employment_Info_4, Employment_Info_6, Insurance_History_5, Family_Hist_2, Family_Hist_3, Family_Hist_4, Family_Hist_5
The following variables are discrete:
Medical_History_1, Medical_History_10, Medical_History_15, Medical_History_24, Medical_History_32
Medical_Keyword_1-48 are dummy variables.
My thoughts are as follows:
The main dependent variable is the Risk Response (1-8)
Which variables are correlated with the risk response?
How do I perform correlation analysis between variables?
Import libraries
End of explanation
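One of the questions above is how to perform correlation analysis between variables. A minimal, hedged sketch using pandas (the column choice is illustrative, and it assumes the training DataFrame d has been loaded as in the cells below):
```python
# Pearson correlation between a few continuous variables and the Response
cont_cols = ['Ins_Age', 'Ht', 'Wt', 'BMI', 'Response']
print(d[cont_cols].corr()['Response'])
```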
s = ["Product_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7, Insurance_History_1, Insurance_History_2, Insurance_History_3, Insurance_History_4, Insurance_History_7, Insurance_History_8, Insurance_History_9, Family_Hist_1, Medical_History_2, Medical_History_3, Medical_History_4, Medical_History_5, Medical_History_6, Medical_History_7, Medical_History_8, Medical_History_9, Medical_History_11, Medical_History_12, Medical_History_13, Medical_History_14, Medical_History_16, Medical_History_17, Medical_History_18, Medical_History_19, Medical_History_20, Medical_History_21, Medical_History_22, Medical_History_23, Medical_History_25, Medical_History_26, Medical_History_27, Medical_History_28, Medical_History_29, Medical_History_30, Medical_History_31, Medical_History_33, Medical_History_34, Medical_History_35, Medical_History_36, Medical_History_37, Medical_History_38, Medical_History_39, Medical_History_40, Medical_History_41",
"Product_Info_4, Ins_Age, Ht, Wt, BMI, Employment_Info_1, Employment_Info_4, Employment_Info_6, Insurance_History_5, Family_Hist_2, Family_Hist_3, Family_Hist_4, Family_Hist_5",
"Medical_History_1, Medical_History_10, Medical_History_15, Medical_History_24, Medical_History_32"]
varTypes = dict()
#Very hacky way of inserting and appending ID and Response columns to the required dataframes
#Make this better
varTypes['categorical'] = s[0].split(', ')
#varTypes['categorical'].insert(0, 'Id')
#varTypes['categorical'].append('Response')
varTypes['continuous'] = s[1].split(', ')
#varTypes['continuous'].insert(0, 'Id')
#varTypes['continuous'].append('Response')
varTypes['discrete'] = s[2].split(', ')
#varTypes['discrete'].insert(0, 'Id')
#varTypes['discrete'].append('Response')
varTypes['dummy'] = ["Medical_Keyword_"+str(i) for i in range(1,49)]
varTypes['dummy'].insert(0, 'Id')
varTypes['dummy'].append('Response')
#Prints out each of the variable types as a check
#for i in iter(varTypes['dummy']):
#print i
Explanation: Define categorical data types
End of explanation
#Import training data
d = pd.read_csv('prud_files/train.csv')
def normalize_df(d):
min_max_scaler = preprocessing.MinMaxScaler()
x = d.values.astype(np.float)
return pd.DataFrame(min_max_scaler.fit_transform(x))
# Import training data
d = pd.read_csv('prud_files/train.csv')
#Separation into groups
df_cat = pd.DataFrame(d, columns=["Id","Response"]+varTypes["categorical"])
df_disc = pd.DataFrame(d, columns=["Id","Response"]+varTypes["discrete"])
df_cont = pd.DataFrame(d, columns=["Id","Response"]+varTypes["continuous"])
d_cat = df_cat.copy()
#normalizes the columns for binary classification
norm_product_info_2 = [pd.get_dummies(d_cat["Product_Info_2"])]
a = pd.DataFrame(normalize_df(d_cat["Response"]))
a.columns=["nResponse"]
d_cat = pd.concat([d_cat, a], axis=1, join='outer')
for x in varTypes["categorical"]:
try:
a = pd.DataFrame(normalize_df(d_cat[x]))
a.columns=[str("n"+x)]
d_cat = pd.concat([d_cat, a], axis=1, join='outer')
except Exception as e:
print e.args
print "Error on "+str(x)+" w error: "+str(e)
d_cat.iloc[:,62:66].head(5)
# Normalization of columns
# Create a minimum and maximum processor object
# Define various group by data streams
df = d
gb_PI1 = df.groupby('Product_Info_1')
gb_PI2 = df.groupby('Product_Info_2')
gb_Ins_Age = df.groupby('Ins_Age')
gb_Ht = df.groupby('Ht')
gb_Wt = df.groupby('Wt')
gb_response = df.groupby('Response')
#Outputs the rows of the different categorical groups
for c in df.columns:
if (c in varTypes['categorical']):
if(c != 'Id'):
a = [ str(x)+", " for x in df.groupby(c).groups ]
print c + " : " + str(a)
df_prod_info = pd.DataFrame(d, columns=(["Response"]+ [ "Product_Info_"+str(x) for x in range(1,8)]))
df_emp_info = pd.DataFrame(d, columns=(["Response"]+ [ "Employment_Info_"+str(x) for x in range(1,6)]))
df_bio = pd.DataFrame(d, columns=["Response", "Ins_Age", "Ht", "Wt","BMI"])
df_med_kw = pd.DataFrame(d, columns=(["Response"]+ [ "Medical_Keyword_"+str(x) for x in range(1,48)])).add(axis=[ "Medical_Keyword_"+str(x) for x in range(1,48)])
df_med_kw.describe()
df.head(5)
df.describe()
Explanation: Importing life insurance data set
The following variables are all categorical (nominal):
Product_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7, Insurance_History_1, Insurance_History_2, Insurance_History_3, Insurance_History_4, Insurance_History_7, Insurance_History_8, Insurance_History_9, Family_Hist_1, Medical_History_2, Medical_History_3, Medical_History_4, Medical_History_5, Medical_History_6, Medical_History_7, Medical_History_8, Medical_History_9, Medical_History_11, Medical_History_12, Medical_History_13, Medical_History_14, Medical_History_16, Medical_History_17, Medical_History_18, Medical_History_19, Medical_History_20, Medical_History_21, Medical_History_22, Medical_History_23, Medical_History_25, Medical_History_26, Medical_History_27, Medical_History_28, Medical_History_29, Medical_History_30, Medical_History_31, Medical_History_33, Medical_History_34, Medical_History_35, Medical_History_36, Medical_History_37, Medical_History_38, Medical_History_39, Medical_History_40, Medical_History_41
The following variables are continuous:
Product_Info_4, Ins_Age, Ht, Wt, BMI, Employment_Info_1, Employment_Info_4, Employment_Info_6, Insurance_History_5, Family_Hist_2, Family_Hist_3, Family_Hist_4, Family_Hist_5
The following variables are discrete:
Medical_History_1, Medical_History_10, Medical_History_15, Medical_History_24, Medical_History_32
Medical_Keyword_1-48 are dummy variables.
End of explanation
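A small, hedged sanity check (not in the original notebook) is to confirm that every column name listed in varTypes actually exists in the loaded DataFrame before the plots below are produced:
```python
# Columns listed in varTypes but missing from the DataFrame, per group
for group, cols in varTypes.items():
    missing = [c for c in cols if c not in df.columns]
    print("{} - missing columns: {}".format(group, missing))
```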
plt.figure(0)
plt.title("Categorical - Histogram for Risk Response")
plt.xlabel("Risk Response (1-7)")
plt.ylabel("Frequency")
plt.hist(df.Response)
plt.savefig('images/hist_Response.png')
print df.Response.describe()
print ""
plt.figure(1)
plt.title("Continuous - Histogram for Ins_Age")
plt.xlabel("Normalized Ins_Age [0,1]")
plt.ylabel("Frequency")
plt.hist(df.Ins_Age)
plt.savefig('images/hist_Ins_Age.png')
print df.Ins_Age.describe()
print ""
plt.figure(2)
plt.title("Continuous - Histogram for BMI")
plt.xlabel("Normalized BMI [0,1]")
plt.ylabel("Frequency")
plt.hist(df.BMI)
plt.savefig('images/hist_BMI.png')
print df.BMI.describe()
print ""
plt.figure(3)
plt.title("Continuous - Histogram for Wt")
plt.xlabel("Normalized Wt [0,1]")
plt.ylabel("Frequency")
plt.hist(df.Wt)
plt.savefig('images/hist_Wt.png')
print df.Wt.describe()
print ""
plt.show()
Explanation: Grouping of various categorical data sets
Histograms and descriptive statistics for Risk Response, Ins_Age, BMI, Wt
End of explanation
for i in range(1,8):
print "The iteration is: "+str(i)
print df['Product_Info_'+str(i)].describe()
print ""
plt.figure(i)
if(i == 4):
plt.title("Continuous - Histogram for Product_Info_"+str(i))
plt.xlabel("Normalized value: [0,1]")
plt.ylabel("Frequency")
else:
plt.title("Categorical - Histogram of Product_Info_"+str(i))
plt.xlabel("Categories")
plt.ylabel("Frequency")
if(i == 2):
df.Product_Info_2.value_counts().plot(kind='bar')
else:
plt.hist(df['Product_Info_'+str(i)])
plt.savefig('images/hist_Product_Info_'+str(i)+'.png')
plt.show()
Explanation: Histograms and descriptive statistics for Product_Info_1-7
End of explanation
catD = df.loc[:,varTypes['categorical']]
contD = df.loc[:,varTypes['continuous']]
disD = df.loc[:,varTypes['discrete']]
dummyD = df.loc[:,varTypes['dummy']]
respD = df.loc[:,['Id','Response']]
Explanation: Split dataframes into categorical, continuous, discrete, dummy, and response
End of explanation
prod_info = [ "Product_Info_"+str(i) for i in range(1,8)]
a = catD.loc[:, prod_info[1]]
stats = catD.groupby(prod_info[1]).describe()
c = gb_PI2.Response.count()
plt.figure(0)
plt.scatter(range(len(c)), c.values)  # number of responses per Product_Info_2 category
plt.figure(0)
plt.title("Histogram of " + prod_info[1])
plt.xlabel("Categories " + str((a.describe())['count']))
plt.ylabel("Frequency")
for i in range(1,8):
a = catD.loc[:, "Product_Info_"+str(i)]
    if(i != 4):
print a.describe()
print ""
plt.figure(i)
plt.title("Histogram of "+"Product_Info_"+str(i))
plt.xlabel("Categories " + str((catD.groupby(key).describe())['count']))
plt.ylabel("Frequency")
#fig, axes = plt.subplots(nrows = 1, ncols = 2)
#catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key))
#catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key))
if a.dtype in (np.int64, np.float, float, int):
a.hist()
# Random functions
#catD.Product_Info_1.describe()
#catD.loc[:, prod_info].groupby('Product_Info_2').describe()
#df[varTypes['categorical']].hist()
catD.head(5)
#Exploration of the discrete data
disD.describe()
disD.head(5)
#Iterate through each categorical column of data
#Perform a 2D histogram later
i=0
for key in varTypes['categorical']:
#print "The category is: {0} with value_counts: {1} and detailed tuple: {2} ".format(key, l.count(), l)
plt.figure(i)
plt.title("Histogram of "+str(key))
plt.xlabel("Categories " + str((df.groupby(key).describe())['count']))
#fig, axes = plt.subplots(nrows = 1, ncols = 2)
#catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key))
#catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key))
if df[key].dtype in (np.int64, np.float, float, int):
df[key].hist()
i+=1
#Iterate through each 'discrete' column of data
#Perform a 2D histogram later
i=0
for key in varTypes['discrete']:
#print "The category is: {0} with value_counts: {1} and detailed tuple: {2} ".format(key, l.count(), l)
plt.figure(i)
fig, axes = plt.subplots(nrows = 1, ncols = 2)
#Histogram based on normalized value counts of the data set
disD[key].value_counts().hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key))
#Cumulative histogram based on normalized value counts of the data set
disD[key].value_counts().hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key))
i+=1
#2D Histogram
i=0
for key in varTypes['categorical']:
#print "The category is: {0} with value_counts: {1} and detailed tuple: {2} ".format(key, l.count(), l)
plt.figure(i)
#fig, axes = plt.subplots(nrows = 1, ncols = 2)
    x = df[key]
    y = df['Response']
    # hist2d needs two equal-length numeric columns
    if x.dtype in (np.int64, np.float, float, int):
        plt.hist2d(x, y, bins=40, norm=LogNorm())
        plt.colorbar()
#catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key))
#catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key))
i+=1
#Iterate through each categorical column of data
#Perform a 2D histogram later
i=0
for key in varTypes['categorical']:
#print "The category is: {0} with value_counts: {1} and detailed tuple: {2} ".format(key, l.count(), l)
plt.figure(i)
#fig, axes = plt.subplots(nrows = 1, ncols = 2)
#catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key))
#catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key))
if df[key].dtype in (np.int64, np.float, float, int):
#(1.*df[key].value_counts()/len(df[key])).hist()
df[key].value_counts(normalize=True).plot(kind='bar')
i+=1
df.loc[:, 'Product_Info_1']
Explanation: Descriptive statistics and scatter plot relating Product_Info_2 and Response
End of explanation |
14,252 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The purpose of the "compare" notebooks is to assess the quality of the calibrations that were performed. They compare the aggregate expenditures or quantities from the Budget des Familles survey, after calibration, with those from the national accounts. Here the calibrations are performed on fuel expenditures.
Import general-purpose modules
Step1: Import modules specific to Openfisca
Step2: Import a new colour palette
Step3: Import the csv files giving the aggregate amounts of the quantities consumed recorded in the BdF surveys. These amounts are computed in compute_quantite_carburants
Step4: Import the csv files giving the aggregate quantities according to the Comptes du Transport.
Step5: Build the graphs comparing the consumption obtained from BdF with the national accounts | Python Code:
from __future__ import division
import pkg_resources
import os
import pandas as pd
from pandas import concat
import seaborn
Explanation: The purpose of the "compare" notebooks is to assess the quality of the calibrations that were performed. They compare the aggregate expenditures or quantities from the Budget des Familles survey, after calibration, with those from the national accounts. Here the calibrations are performed on fuel expenditures.
Import general-purpose modules
End of explanation
from openfisca_france_indirect_taxation.examples.utils_example import graph_builder_line
Explanation: Import modules specific to Openfisca
End of explanation
seaborn.set_palette(seaborn.color_palette("Set2", 12))
%matplotlib inline
Explanation: Import a new colour palette
End of explanation
assets_directory = os.path.join(
pkg_resources.get_distribution('openfisca_france_indirect_taxation').location
)
quantite_bdf = pd.DataFrame()
produits = ['carburants', 'diesel', 'essence']
for element in produits:
quantite = pd.DataFrame.from_csv(os.path.join(assets_directory,
'openfisca_france_indirect_taxation', 'assets', 'quantites',
'quantites_{}_consommees_bdf.csv'.format(element)), sep = ',', header = -1)
quantite.rename(columns = {1: '{} bdf'.format(element)}, inplace = True)
quantite.index = quantite.index.str.replace('en milliers de m3 en ', '')
quantite = quantite.sort_index()
quantite_bdf = concat([quantite, quantite_bdf], axis = 1)
Explanation: Import the csv files giving the aggregate amounts of the quantities consumed recorded in the BdF surveys. These amounts are computed in compute_quantite_carburants
End of explanation
quantite_carbu_vp_france = pd.read_csv(os.path.join(assets_directory,
'openfisca_france_indirect_taxation', 'assets', 'quantites',
'quantite_carbu_vp_france.csv'), sep = ';')
quantite_carbu_vp_france['Unnamed: 0'] = quantite_carbu_vp_france['Unnamed: 0'].astype(str)
quantite_carbu_vp_france = quantite_carbu_vp_france.set_index('Unnamed: 0')
quantite_carbu_vp_france.rename(columns = {'essence': 'essence agregat'}, inplace = True)
quantite_carbu_vp_france.rename(columns = {'diesel': 'diesel agregat'}, inplace = True)
quantite_carbu_vp_france['carburants agregat'] = quantite_carbu_vp_france.sum(axis = 1)
comparaison_bdf_agregats = concat([quantite_carbu_vp_france, quantite_bdf], axis = 1)
comparaison_bdf_agregats = comparaison_bdf_agregats.dropna()
Explanation: Import the csv files giving the aggregate quantities according to the Comptes du Transport.
End of explanation
print 'Comparaison pour l essence'
graph_builder_line(comparaison_bdf_agregats[['essence agregat'] + ['essence bdf']])
print 'Comparaison pour le diesel'
graph_builder_line(comparaison_bdf_agregats[['diesel agregat'] + ['diesel bdf']])
print 'Comparaison sur l ensemble des carburants'
graph_builder_line(comparaison_bdf_agregats[['carburants agregat'] + ['carburants bdf']])
Explanation: Build the graphs comparing the consumption obtained from BdF with the national accounts
End of explanation |
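As an optional, hedged extension (not in the original notebook), the relative gap between the survey-based series and the national aggregates can be computed directly from the comparison DataFrame built above:
```python
# Relative gap (in %) between BdF quantities and the national aggregates
for element in ['essence', 'diesel', 'carburants']:
    ecart = (comparaison_bdf_agregats['{} bdf'.format(element)] /
             comparaison_bdf_agregats['{} agregat'.format(element)] - 1) * 100
    print('{}: mean relative gap = {:.1f}%'.format(element, ecart.mean()))
```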
14,253 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
2. Mathematical Groundwork
Previous
Step1: Import section specific modules
Step5: 2.11 Least-squares Minimization<a id='groundwork
Step6: The three functions defined above will be used frequently during the Levenberg-Marquardt solution procedure. The following few lines of code just set up the values we need to call the Levenberg-Marquardt solver.
Step8: The following plots show the observed data and the curve corresponding to our initial guess for the parameters.
Step9: The above is the main function of the Levenberg-Marquardt algorithm. The code may appear daunting at first, but all it does is implement the Levenberg-Marquardt update rule and some checks of convergence. We can now apply it to the problem with relative ease to obtain a numerical solution for our parameter vector.
Step10: We can now compare our numerical result with both the truth and the data. The following plot shows the various quantities of interest.
Step11: The fitted values are so close to the true values that it is almost impossible to differentiate between the red and green lines in the above plot. The true values have been omitted from the following plot to make it clearer that the numerical solution does an excellent job of arriving at the correct parameter values.
Step12: A final, important thing to note is that the Levenberg-Marquardt algorithm is already implemented in Python. It is used in scipy.optimise.leastsq. This is often useful for doing rapid numerical solution without the need for an analytic Jacobian. As a simple proof, we can call the built-in method to verify our results.
Step13: In this case, the built-in method clearly fails. I have done this deliberately to illustrate a point - a given implementation of an algorithm might not be the best one for your application. In this case, the manner in which the tuning parameters are handled prevents the solution from converging correctly. This can be avoided by choosing a starting guess closer to the truth and once again highlights the importance of initial values in problems of this type. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
2. Mathematical Groundwork
Previous: 2.10 Linear Algebra
Next: 2.12 Solid Angle
Import standard modules:
End of explanation
from scipy.optimize import leastsq
plt.rcParams['figure.figsize'] = (18, 6)
from IPython.display import HTML
HTML('../style/code_toggle.html')
Explanation: Import section specific modules:
End of explanation
def sinusoid(x, t):
    """
    Returns a vector containing the values of a sinusoid with parameters x evaluated at points t.

    INPUTS:
    t       Value of independent variable at the sampled points.
    x       Vector of parameters.
    """
x1 = x[0] #Amplitude
x2 = x[1] #Frequency
x3 = x[2] #Phase-shift
return x1*np.sin(2*np.pi*x2*t + x3)
def sinusoid_jacobian(x, t):
    """
    Returns the Jacobian corresponding to the function defined in sinusoid.

    INPUTS:
    t       Value of independent variable at the sampled points.
    x       Vector of parameters.
    """
x1 = x[0] #Amplitude
x2 = x[1] #Frequency
x3 = x[2] #Phase-shift
jacobian = np.empty([t.shape[0], x.shape[0]])
jacobian[:,0] = np.sin(2*np.pi*x2*t + x3)
jacobian[:,1] = 2*np.pi*t*x1*np.cos(2*np.pi*x2*t + x3)
jacobian[:,2] = x1*np.cos(2*np.pi*x2*t + x3)
return jacobian
def sinusoid_residual(x, t, d):
    """
    Returns a vector containing the residual values.

    INPUTS:
    d       Vector of measured values.
    t       Value of independent variable at the sampled points.
    x       Vector of parameters.
    """
return d - sinusoid(x, t)
Explanation: 2.11 Least-squares Minimization<a id='groundwork:sec:leastsquares'></a>
In the field of radio interferometry, we often encounter problems which must be solved numerically. One such problem is least-squares minimization. Conceptually, it is very simple. Given a model and some data, we want to find the values of a set of parameters which minimize the difference between our model and our observations.
Firstly, we need to phrase the problem in simple mathematics. Let us start by defining the quantities of interest and the function we wish to minimize.
We will refer to our data vector as $\mathbf{d}$ and our model vector as $\mathbf{m}$. These vectors contain the measured values and those predicted by the model respectively. We wish to minimize the $L^2$ or Euclidean vector norm of their difference. Whilst you may not have called it that in the past, you have almost certainly encountered it,
$$\lVert\mathbf{r}\rVert = \lVert\mathbf{d}-\mathbf{m}\rVert = \sqrt{\sum\limits_{i=1}^N(d_i - m_i)^2}.$$
$\mathbf{r}$ is the residual vector and it is a measure of the difference between the values predicted by our model and the observed values.
It is important to note that in general $\mathbf{m}$ is a function of a number of parameters, such as $(x_1, x_2, x_3, \dots)$. These parameters form the parameter vector $\mathbf{x}$ which is what we ultimately want to determine.
There are many methods which solve problems of the given form. However, we will stick to explaining two of the most commonly used in radio interferometry. Specifically, these are Gauss-Newton and Levenberg–Marquardt; both of which are technically non-linear least squares solvers. They can be applied to linear problems too.
We will not present the derivations of the methods, although they are readily available. The Gauss-Newton update rule is given by the following:
$$\delta \mathbf{x} = {(\mathbb{J}^T\mathbb{J})}^{-1} \mathbb{J}^T \mathbf{r}.$$
This is far simpler than it may at first appear. $\delta \mathbf{x}$ is simply the update to the current best guess of the parameter vector. $\mathbb{J}$ is the Jacobian of the problem which we will discuss in detail shortly. $(\cdot)^T$ denotes a matrix transpose, and $(\cdot)^{-1}$ denotes a matrix inverse. $\mathbf{r}$ is still the residual vector. In practice we use an iterative algorithm which starts from some initial guess which is updated in accordance with:
$$x_{k+1} = x_{k} + \delta x$$
The Jacobian is simply a matrix of the first derivatives of the model term relative to the parameter vector. This can be written analytically for a model vector of length $M$ and a parameter vector of length $N$ as:
$$\mathbb{J} = \frac{\delta \mathbf{m}}{\delta \mathbf{x}} = \begin{bmatrix}
\frac{\delta m_1}{\delta x_1} & \frac{\delta m_1}{\delta x_2} & \dots & \frac{\delta m_1}{\delta x_N} \\
\frac{\delta m_2}{\delta x_1} & \frac{\delta m_2}{\delta x_2} & \dots & \frac{\delta m_2}{\delta x_N} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\delta m_M}{\delta x_1} & \frac{\delta m_M}{\delta x_2} & \dots & \frac{\delta m_M}{\delta x_N}
\end{bmatrix}$$
This convention is somewhat unique to the radio interferometry problem. The Jacobian is usually defined as the derivative of the residual vector relative to the parameter vector. This has an associated change of sign, though it doesn't alter the algorithm. For the sake of consistency, we will stick to the positive, interferometric convention.
For the sake of completeness, we will also introduce the Levenberg-Marquardt update rule. It is used more frequently as it has better convergence behaviour than basic Gauss-Newton. The update rule itself is very similar:
$$\delta \mathbf{x} = {(\mathbb{J}^T\mathbb{J}+\lambda_{LM} \mathbf{D})}^{-1} \mathbb{J}^T \mathbf{r}.$$
The addition of the $\lambda_{LM} \mathbf{D}$ factor leads to the Levenberg-Marquardt algorithm being referred to as a damped least squares method. There is a degree of choice regarding the matrix $\mathbf{D}$. However, in practice it is usually the identity matrix, $\mathbf{I}$, or a matrix containing the diagonal entries of $\mathbb{J}^T\mathbb{J}$. The lambda factor is used to tune the algorithm and improve its convergence. The choice of lambda is largely heuristic, and a value which works in one case may fail completely in another. This highlights an important fact regarding least squares methods: the choice of starting parameters does alter the behaviour of the algorithm. This will become clearer as we proceed.
It is useful to note that, when we implement these methods, it is worth implementing Levenberg-Marquardt as setting $\lambda$ to zero will always return us to the Gauss-Newton approach.
Armed with the mathematical background above, we can now begin to implement a rudimentary Levenberg-Marquardt solver. We will start with a relatively simple function and demonstrate how we set up the algorithm. The function which we will use is as follows:
$$ m_i = x_1 \sin(2\pi x_2 t_i + x_3).$$
This is the equation for a simple sinusoid. The parameters $x_1$, $x_2$ and $x_3$ have been used in place of the more traditional $A$, $\nu$ and $\phi$ in order to maintain a consistent notation.
As we will not be performing a true experiment, we will obtain our "measured" values by adding some gaussian noise to our model signal.
Before we begin implementing the solver, we will write out the derivatives of the model function. These will be used to construct the Jacobian. The derivatives are as follows:
$$ \frac{\delta m_i}{\delta x_1} = \sin(2\pi x_2 t_i + x_3)$$
$$ \frac{\delta m_i}{\delta x_2} = 2 \pi t_i x_1 \cos(2\pi x_2 t_i + x_3)$$
$$ \frac{\delta m_i}{\delta x_3} = x_1 \cos(2\pi x_2 t_i + x_3).$$
These derivatives are very easy to calculate analytically. Note that $t$ is the independent variable for the problem and its index, $i$, simply enumerates the number of samples we have, which is in turn simply the length of our residual vector.
End of explanation
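Before coding the full solver, a single Gauss-Newton update (the $\lambda = 0$ case of the rule above) can be written directly with numpy. This is a hedged sketch that reuses the three functions defined above; the trial values are invented for illustration and are not the notebook's actual setup.
```python
# One Gauss-Newton step, i.e. the Levenberg-Marquardt update with lambda = 0.
x_trial = np.array([8., 43.5, 1.05])
t_trial = np.linspace(-0.06, 0.06, 50)
d_trial = sinusoid(np.array([10., 33.3, 0.52]), t_trial)

J = sinusoid_jacobian(x_trial, t_trial)
r = sinusoid_residual(x_trial, t_trial, d_trial)
delta_x = np.linalg.pinv(J.T.dot(J)).dot(J.T.dot(r))
print(x_trial + delta_x)  # updated parameter estimate after one step
```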
t = np.arange(-0.06, 0.06, 0.06/300) #The points at which we will be taking our "measurements"
noise = 2*np.random.normal(size=(t.shape[0])) #A noise vector which we will use to manufacture "real" measurements.
true_x = np.array([10., 33.3, 0.52]) #The true values of our parameter vector.
x = np.array([8., 43.5, 1.05]) #Initial guess of parameter vector for our solver.
d = sinusoid(true_x, t) + noise #Our "observed" data, constructed from our true parameter values and the noise vector.
m = sinusoid(x, t) #Our fitted function using the initial guess parameters.
Explanation: The three functions defined above will be used frequently during the Levenberg-Marquardt solution procedure. The following few lines of code just set up the values we need to call the Levenberg-Marquardt solver.
End of explanation
plt.plot(t, d)
plt.plot(t, m)
plt.show()
def levenberg_marquardt(d, t, x, r_func, j_func, maxit=100, lamda=1, K=10, eps1=1e-6, eps2=1e-6):
    """
    Returns a vector containing the optimal parameter values found by the algorithm.

    INPUTS:
    d       Vector of measured values.
    t       Value of independent variable at the sampled points.
    x       Vector of parameters.
    r_func  Function which generates the residual vector.
    j_func  Function which generates the Jacobian.
    maxit   Maximum number of iterations.
    lamda   Initial value of tuning parameter.
    K       Initial value of retuning factor.
    eps1    First tolerance parameter - triggers when residual is below this number.
    eps2    Second tolerance parameter - triggers when relative changes to the parameter
            vector are below this number.
    """
#Initialises some important values and stores the original lamda value.
r = r_func(x, t, d)
old_chi = np.linalg.norm(r)
olamda = lamda
it = 0
while True:
#Heavy lifting portion of the algorithm. Computes the parameter update.
        #This is just the implementation of the mathematical update rule.
J = j_func(x, t)
JT = J.T
JTJ = JT.dot(J)
JTJdiag = np.eye(JTJ.shape[0])*JTJ
JTJinv = np.linalg.pinv(JTJ + lamda*JTJdiag)
JTr = JT.dot(r)
delta_x = JTJinv.dot(JTr)
x += delta_x
#Convergence tests. If a solution has been found, returns the result.
#The chi value is the norm of the residual and is used to determine
#whether the solution is improving. If the chi value is sufficiently
#small, the function terminates. The second test checks to see whether
#or not the solution is improving, and terminates if it isn't.
r = r_func(x, t, d)
new_chi = np.linalg.norm(r)
if new_chi < eps1:
return x
elif np.linalg.norm(delta_x) < eps2*(np.linalg.norm(x) + eps2):
return x
#Tuning stage. If the parameter update was good, continue and restore lamda.
#If the update was bad, scale lamda by K and revert last update.
if new_chi > old_chi:
x -= delta_x
lamda = lamda*K
else:
old_chi = new_chi
lamda = olamda
#If the number of iterations grows too large, return the last value of x.
it += 1
if it >= maxit:
return x
Explanation: The following plots show the observed data and the curve corresponding to our initial guess for the parameters.
End of explanation
solved_x = levenberg_marquardt(d, t, x, sinusoid_residual, sinusoid_jacobian)
print solved_x
Explanation: The above is the main function of the Levenberg-Marquardt algorithm. The code may appear daunting at first, but all it does is implement the Levenberg-Marquardt update rule and some checks of convergence. We can now apply it to the problem with relative ease to obtain a numerical solution for our parameter vector.
End of explanation
plt.plot(t, d, label="Data")
plt.plot(t, sinusoid(solved_x, t), label="LM")
plt.plot(t, sinusoid(true_x, t), label="Truth")
plt.xlabel("t")
plt.legend(loc='upper right')
plt.show()
Explanation: We can now compare our numerical result with both the truth and the data. The following plot shows the various quantities of interest.
End of explanation
plt.plot(t, d, label="Data")
plt.plot(t, sinusoid(solved_x, t), label="LM")
plt.xlabel("t")
plt.legend(loc='upper right')
plt.show()
Explanation: The fitted values are so close to the true values that it is almost impossible to differentiate between the red and green lines in the above plot. The true values have been omitted from the following plot to make it clearer that the numerical solution does an excellent job of arriving at the correct parameter values.
End of explanation
x = np.array([8., 43.5, 1.05])
leastsq_x = leastsq(sinusoid_residual, x, args=(t, d))
print "scipy.optimize.leastsq: ", leastsq_x[0]
print "Our LM: ", solved_x
plt.plot(t, d, label="Data")
plt.plot(t, sinusoid(leastsq_x[0], t), label="optimize.leastsq")
plt.xlabel("t")
plt.legend(loc='upper right')
plt.show()
Explanation: A final, important thing to note is that the Levenberg-Marquardt algorithm is already implemented in Python. It is used in scipy.optimize.leastsq. This is often useful for doing rapid numerical solution without the need for an analytic Jacobian. As a simple proof, we can call the built-in method to verify our results.
End of explanation
x = np.array([8., 35., 1.05])
leastsq_x = leastsq(sinusoid_residual, x, args=(t, d))
print "scipy.optimize.leastsq: ", leastsq_x[0]
print "Our LM: ", solved_x
plt.plot(t, d, label="Data")
plt.plot(t, sinusoid(leastsq_x[0], t), label="optimize.leastsq")
plt.xlabel("t")
plt.legend(loc='upper right')
plt.show()
Explanation: In this case, the built-in method clearly fails. I have done this deliberately to illustrate a point - a given implementation of an algorithm might not be the best one for your application. In this case, the manner in which the tuning parameters are handled prevents the solution from converging correctly. This can be avoided by choosing a starting guess closer to the truth and once again highlights the importance of initial values in problems of this type.
End of explanation |
14,254 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load the airport and flight data from Cloudant
Step1: Build the vertices and edges dataframe from the data
Step2: Install GraphFrames package using PixieDust packageManager
The GraphFrames package to install depends on the environment.
Spark 1.6
graphframes
Step3: Create the GraphFrame from the Vertices and Edges Dataframes
Step4: Compute the degree for each vertex in the graph
The degree of a vertex is the number of edges incident to the vertex. In a directed graph, in-degree is the number of edges where the vertex is the destination and out-degree is the number of edges where the vertex is the source. With GraphFrames, there are degrees, outDegrees and inDegrees properties that return a DataFrame containing the id of the vertex and the number of edges. We then sort them in descending order
Step5: Compute a list of shortest paths for each vertex to a specified list of landmarks
For this we use the shortestPaths api that returns DataFrame containing the properties for each vertex plus an extra column called distances that contains the number of hops to each landmark.
In the following code, we use BOS and LAX as the landmarks
Step6: Compute the pageRank for each vertex in the graph
PageRank is a famous algorithm used by Google Search to rank vertices in a graph by order of importance. To compute pageRank, we'll use the pageRank api that returns a new graph in which the vertices have a new pagerank column representing the pagerank score for the vertex and the edges have a new weight column representing the edge weight that contributed to the pageRank score. We'll then display the vertice ids and associated pageranks sorted descending
Step7: Search routes between 2 airports with specific criteria
In this section, we want to find all the routes between Boston and San Francisco operated by United Airlines with at most 2 hops. To accomplish this, we use the bfs (Breadth First Search) api that returns a DataFrame containing the shortest path between matching vertices. For clarity we will only keep the edges when displaying the results
Step8: Find all airports that do not have direct flights between each other
In this section, we'll use a very powerful graphFrames search feature that uses a pattern called a motif to find nodes. The pattern we'll use is "(a)-[]->(b);(b)-[]->(c);!(a)-[]->(c)", which searches for all nodes a, b and c such that there is an edge (a,b) and an edge (b,c) but no edge (a,c).
Also, because the search is computationally expensive, we reduce the number of edges by grouping the flights that have the same src and dst.
Step9: Compute the strongly connected components for this graph
Strongly Connected Components are components for which each vertex is reachable from every other vertex. To compute them, we'll use the stronglyConnectedComponents api that returns a DataFrame containing all the vertices with the addition of a component column that has the component id in which the vertex belongs to. We then group all the rows by components and aggregate the sum of all the member vertices. This gives us a good idea of the components distribution in the graph
Step10: Detect communities in the graph using Label Propagation algorithm
Label Propagation algorithm is a popular algorithm for finding communities within a graph. It has the advantage to be computationally inexpensive and thus works well with large graphs. To compute the communities, we'll use the labelPropagation api that returns a DataFrame containing all the vertices with the addition of a label column that has the label id for the communities in which the vertex belongs to. Similar to the strongly connected components, we'll then group all the rows by label and aggregate the sum of all the member vertices.
Step11: Use AggregateMessages to compute the average flight delays by originating airport
AggregateMessages api is not currently available in Python, so we use PixieDust Scala bridge to call out the Scala API
Note | Python Code:
cloudantHost='dtaieb.cloudant.com'
cloudantUserName='weenesserliffircedinvers'
cloudantPassword='72a5c4f939a9e2578698029d2bb041d775d088b5'
airports = sqlContext.read.format("com.cloudant.spark").option("cloudant.host",cloudantHost)\
.option("cloudant.username",cloudantUserName).option("cloudant.password",cloudantPassword)\
.option("schemaSampleSize", "-1").load("flight-metadata")
airports.cache()
airports.registerTempTable("airports")
import pixiedust
# Display the airports data
display(airports)
flights = sqlContext.read.format("com.cloudant.spark").option("cloudant.host",cloudantHost)\
.option("cloudant.username",cloudantUserName).option("cloudant.password",cloudantPassword)\
.option("schemaSampleSize", "-1").load("pycon_flightpredict_training_set")
flights.cache()
flights.registerTempTable("training")
# Display the flights data
display(flights)
Explanation: Load the airport and flight data from Cloudant
End of explanation
from pyspark.sql import functions as f
from pyspark.sql.types import *
rdd = flights.rdd.flatMap(lambda s: [s.arrivalAirportFsCode, s.departureAirportFsCode]).distinct()\
.map(lambda row:[row])
vertices = airports.join(
sqlContext.createDataFrame(rdd, StructType([StructField("fs",StringType())])), "fs"
).dropDuplicates(["fs"]).withColumnRenamed("fs","id")
print(vertices.count())
edges = flights.withColumnRenamed("arrivalAirportFsCode","dst")\
.withColumnRenamed("departureAirportFsCode","src")\
.drop("departureWeather").drop("arrivalWeather").drop("pt_type").drop("_id").drop("_rev")
print(edges.count())
Explanation: Build the vertices and edges dataframe from the data
End of explanation
import pixiedust
if sc.version.startswith('1.6.'): # Spark 1.6
pixiedust.installPackage("graphframes:graphframes:0.5.0-spark1.6-s_2.11")
elif sc.version.startswith('2.'): # Spark 2.1, 2.0
pixiedust.installPackage("graphframes:graphframes:0.5.0-spark2.1-s_2.11")
pixiedust.installPackage("com.typesafe.scala-logging:scala-logging-api_2.11:2.1.2")
pixiedust.installPackage("com.typesafe.scala-logging:scala-logging-slf4j_2.11:2.1.2")
print("done")
Explanation: Install GraphFrames package using PixieDust packageManager
The GraphFrames package to install depends on the environment.
Spark 1.6
graphframes:graphframes:0.5.0-spark1.6-s_2.11
Spark 2.x
graphframes:graphframes:0.5.0-spark2.1-s_2.11
In addition, recent versions of graphframes have dependencies on other packages which will need to also be installed:
com.typesafe.scala-logging:scala-logging-api_2.11:2.1.2
com.typesafe.scala-logging:scala-logging-slf4j_2.11:2.1.2
Note: After installing packages, the kernel will need to be restarted and all the previous cells re-run (including the install package cell).
End of explanation
from graphframes import GraphFrame
g = GraphFrame(vertices, edges)
display(g)
Explanation: Create the GraphFrame from the Vertices and Edges Dataframes
End of explanation
from pyspark.sql.functions import *
degrees = g.degrees.sort(desc("degree"))
display( degrees )
Explanation: Compute the degree for each vertex in the graph
The degree of a vertex is the number of edges incident to the vertex. In a directed graph, in-degree is the number of edges where the vertex is the destination and out-degree is the number of edges where the vertex is the source. With GraphFrames, there are degrees, outDegrees and inDegrees properties that return a DataFrame containing the id of the vertex and the number of edges. We then sort them in descending order
End of explanation
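The text above also mentions the inDegrees and outDegrees properties; a hedged sketch of the same pattern applied to those directed counts:
```python
# Same idea for the directed degree counts
in_degrees = g.inDegrees.sort(desc("inDegree"))
out_degrees = g.outDegrees.sort(desc("outDegree"))
display(in_degrees)
```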
r = g.shortestPaths(landmarks=["BOS", "LAX"]).select("id", "distances")
display(r)
Explanation: Compute a list of shortest paths for each vertex to a specified list of landmarks
For this we use the shortestPaths api that returns DataFrame containing the properties for each vertex plus an extra column called distances that contains the number of hops to each landmark.
In the following code, we use BOS and LAX as the landmarks
End of explanation
from pyspark.sql.functions import *
ranks = g.pageRank(resetProbability=0.20, maxIter=5)
rankedVertices = ranks.vertices.select("id","pagerank").orderBy(desc("pagerank"))
rankedEdges = ranks.edges.select("src", "dst", "weight").orderBy(desc("weight") )
ranks = GraphFrame(rankedVertices, rankedEdges)
display(ranks)
Explanation: Compute the pageRank for each vertex in the graph
PageRank is a famous algorithm used by Google Search to rank vertices in a graph by order of importance. To compute pageRank, we'll use the pageRank api that returns a new graph in which the vertices have a new pagerank column representing the pagerank score for the vertex and the edges have a new weight column representing the edge weight that contributed to the pageRank score. We'll then display the vertice ids and associated pageranks sorted descending:
End of explanation
paths = g.bfs(fromExpr="id='BOS'",toExpr="id = 'SFO'",edgeFilter="carrierFsCode='UA'", maxPathLength = 2)\
.drop("from").drop("to")
paths.cache()
display(paths)
Explanation: Search routes between 2 airports with specific criteria
In this section, we want to find all the routes between Boston and San Francisco operated by United Airlines with at most 2 hops. To accomplish this, we use the bfs (Breadth First Search) api that returns a DataFrame containing the shortest path between matching vertices. For clarity we will only keep the edges when displaying the results
End of explanation
from pyspark.sql.functions import *
h = GraphFrame(g.vertices, g.edges.select("src","dst")\
.groupBy("src","dst").agg(count("src").alias("count")))
query = h.find("(a)-[]->(b);(b)-[]->(c);!(a)-[]->(c)").drop("b")
query.cache()
display(query)
Explanation: Find all airports that do not have direct flights between each other
In this section, we'll use a very powerful GraphFrames search feature based on patterns called motifs. The pattern we'll use is "(a)-[]->(b);(b)-[]->(c);!(a)-[]->(c)", which searches for all vertices a, b and c such that there is an edge from a to b and an edge from b to c, but no edge from a to c.
Also, because the search is computationally expensive, we reduce the number of edges by grouping the flights that have the same src and dst.
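To illustrate the motif syntax a bit further (a hypothetical extra query, not part of the original analysis), edges can also be named and referenced; for example, this would look for airport pairs connected in both directions, deduplicated by ordering the ids:
round_trips = h.find("(a)-[e1]->(b); (b)-[e2]->(a)").filter("a.id < b.id")
display(round_trips)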
End of explanation
from pyspark.sql.functions import *
components = g.stronglyConnectedComponents(maxIter=10).select("id","component")\
.groupBy("component").agg(count("id").alias("count")).orderBy(desc("count"))
display(components)
Explanation: Compute the strongly connected components for this graph
Strongly Connected Components are components in which each vertex is reachable from every other vertex. To compute them, we'll use the stronglyConnectedComponents api, which returns a DataFrame containing all the vertices with an additional component column holding the id of the component to which the vertex belongs. We then group the rows by component and count the member vertices, which gives a good idea of the component size distribution in the graph.
End of explanation
from pyspark.sql.functions import *
communities = g.labelPropagation(maxIter=5).select("id", "label")\
.groupBy("label").agg(count("id").alias("count")).orderBy(desc("count"))
display(communities)
Explanation: Detect communities in the graph using Label Propagation algorithm
Label Propagation is a popular algorithm for finding communities within a graph. It has the advantage of being computationally inexpensive and thus works well with large graphs. To compute the communities, we'll use the labelPropagation api, which returns a DataFrame containing all the vertices with an additional label column holding the id of the community to which the vertex belongs. As with the strongly connected components, we then group the rows by label and count the member vertices.
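As an optional follow-up sketch (my own addition, reusing the api shown above), the member airports of the largest community could be listed like this:
labels = g.labelPropagation(maxIter=5)
top_label = labels.groupBy("label").count().orderBy(desc("count")).first()["label"]
display(labels.filter(labels.label == top_label))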
End of explanation
%%scala
import org.graphframes.lib.AggregateMessages
import org.apache.spark.sql.functions.{avg,desc,floor}
// For each airport, average the delays of the departing flights
val msgToSrc = AggregateMessages.edge("deltaDeparture")
val __agg = g.aggregateMessages
.sendToSrc(msgToSrc) // send each flight delay to source
.agg(floor(avg(AggregateMessages.msg)).as("averageDelays")) // average up all delays
.orderBy(desc("averageDelays"))
.limit(10)
__agg.cache()
__agg.show()
display(__agg)
Explanation: Use AggregateMessages to compute the average flight delays by originating airport
The AggregateMessages api is not currently available in Python, so we use the PixieDust Scala bridge to call out to the Scala API.
Note: Notice that PixieDust is automatically rebinding the python GraphFrame variable g into a scala GraphFrame with same name
End of explanation |
14,255 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An early result in the study of human dynamic systems is the claim that response times to email follow a power law distribution (http
Step1: We will look at messages in our archive that are responses to other messages and how long after the original email the response was made. | Python Code:
from bigbang.archive import Archive
import pandas as pd
arx = Archive("ipython-dev",archive_dir="../archives")
print arx.data.shape
arx.data.drop_duplicates(subset=('From','Date'),inplace=True)
Explanation: An early result in the study of human dynamic systems is the claim that response times to email follow a power law distribution (http://cds.cern.ch/record/613536/). This result has been built on by others (http://www.uvm.edu/~pdodds/files/papers/others/2004/johansen2004.pdf, http://dx.doi.org/10.1103/physreve.83.056101). However, Clauset, Shalizi, and Newman (citation needed) have challenged the pervasive use discovery of powerlaws, claiming that these studies often depend on unsound statistics.
Here we apply the method of power law distribution fitting and testing to the email response times of several public mailing lists.
End of explanation
response_times = []
for x in list(arx.data.iterrows()):
if x[1]['In-Reply-To'] is not None:
try:
d1 = arx.data.loc[x[1]['In-Reply-To']]['Date']
if isinstance(d1,pd.Series):
d1 = d1[0]
d2 = x[1]['Date']
rt = (d2 - d1)
response_times.append(rt.total_seconds())
except AttributeError as e:
print e
except TypeError as e:
print e
except KeyError as e:
# print e -- suppress error
pass
len(response_times)
import matplotlib.pyplot as plt
%matplotlib inline
plt.semilogy(sorted(response_times,reverse=True))
import powerlaw
f = powerlaw.Fit(response_times)
print f.power_law.alpha
print f.xmin
print f.D
R, p = f.distribution_compare('power_law', 'lognormal')
print R,p
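To sanity-check the fit visually, the powerlaw package also exposes plotting helpers (a minimal sketch based on my understanding of that api; the exponential comparison is an extra, hypothetical check not in the original analysis):
fig = f.plot_ccdf(linewidth=2)
f.power_law.plot_ccdf(ax=fig, color='r', linestyle='--')
f.lognormal.plot_ccdf(ax=fig, color='g', linestyle='--')
R_exp, p_exp = f.distribution_compare('power_law', 'exponential')
print R_exp, p_exp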
Explanation: We will look at messages in our archive that are responses to other messages and how long after the original email the response was made.
End of explanation |
14,256 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 4
Step1: 1. Using Pandas to download closing price data
Now, as a function
Step2: Once the packages are loaded, we need to define the tickers of the stocks to be used, the download source (Yahoo in this case, although Google is also possible) and the dates of interest. With these, the DataReader function from the pandas_datareader package will download the requested prices.
Note
Step3: Note
Step4: 3. Asset selection
Step5: 4. Portfolio optimization | Python Code:
#import the packages to be used
import pandas as pd
import pandas_datareader.data as web
import numpy as np
from sklearn.cluster import KMeans
import datetime
from datetime import datetime
import scipy.stats as stats
import scipy as sp
import scipy.optimize as optimize
import scipy.cluster.hierarchy as hac
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
#some options for Python
pd.set_option('display.notebook_repr_html', True)
pd.set_option('display.max_columns', 6)
pd.set_option('display.max_rows', 10)
pd.set_option('display.width', 78)
pd.set_option('precision', 3)
Explanation: Class 4: Portfolios and Risk - Selection
Juan Diego Sánchez Torres,
Professor, MAF ITESO
Department of Mathematics and Physics
[email protected]
Tel. 3669-34-34 Ext. 3069
Office: Cubicle 4, Building J, 2nd floor
1. Motivation
First of all, in order to download prices and option information from Yahoo, we need to load some Python packages. In this case, the main package will be Pandas. We will also use Scipy and Numpy for the necessary mathematics, and Matplotlib and Seaborn to plot the data series.
End of explanation
def get_historical_closes(ticker, start_date, end_date):
p = web.DataReader(ticker, "yahoo", start_date, end_date).sort_index('major_axis')
d = p.to_frame()['Adj Close'].reset_index()
d.rename(columns={'minor': 'Ticker', 'Adj Close': 'Close'}, inplace=True)
pivoted = d.pivot(index='Date', columns='Ticker')
pivoted.columns = pivoted.columns.droplevel(0)
return pivoted
Explanation: 1. Using Pandas to download closing price data
Now, as a function
End of explanation
data=get_historical_closes(['AA','AAPL','AMZN','MSFT','KO','NVDA', '^GSPC'], '2011-01-01', '2016-12-31')
closes=data[['AA','AAPL','AMZN','MSFT','KO','NVDA']]
sp=data[['^GSPC']]
closes.plot(figsize=(8,6));
Explanation: Once the packages are loaded, we need to define the tickers of the stocks to be used, the download source (Yahoo in this case, although Google is also possible) and the dates of interest. With these, the DataReader function from the pandas_datareader package will download the requested prices.
Note: Python distributions usually do not ship with the pandas_datareader package by default, so it needs to be installed separately. The following command installs the package in Anaconda:
*conda install -c conda-forge pandas-datareader*
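If you are not using Anaconda, installing with pip should work as well (assuming a standard Python environment):
pip install pandas-datareader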
End of explanation
def calc_daily_returns(closes):
return np.log(closes/closes.shift(1))[1:]
daily_returns=calc_daily_returns(closes)
daily_returns.plot(figsize=(8,6));
daily_returns.corr()
def calc_annual_returns(daily_returns):
grouped = np.exp(daily_returns.groupby(lambda date: date.year).sum())-1
return grouped
annual_returns = calc_annual_returns(daily_returns)
annual_returns
def calc_portfolio_var(returns, weights=None):
if (weights is None):
weights = np.ones(returns.columns.size)/returns.columns.size
sigma = np.cov(returns.T,ddof=0)
var = (weights * sigma * weights.T).sum()
return var
calc_portfolio_var(annual_returns)
def sharpe_ratio(returns, weights = None, risk_free_rate = 0.015):
n = returns.columns.size
if weights is None: weights = np.ones(n)/n
var = calc_portfolio_var(returns, weights)
means = returns.mean()
return (means.dot(weights) - risk_free_rate)/np.sqrt(var)
sharpe_ratio(annual_returns)
Explanation: Note: To download data from the Mexican stock exchange (BMV), the ticker must carry the MX extension.
For example: MEXCHEM.MX, LABB.MX, GFINBURO.MX and GFNORTEO.MX.
2. Formulating portfolio risk
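For reference, the two quantities computed in the code below can be written compactly (my own summary of the code, with weight vector $w$, covariance matrix $\Sigma$, mean return vector $\mu$ and risk-free rate $r_f$):
$$\sigma_p^2 = w^T \Sigma\, w, \qquad SR = \frac{w^T \mu - r_f}{\sqrt{w^T \Sigma\, w}}$$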
End of explanation
daily_returns_mean=daily_returns.mean()
daily_returns_mean
daily_returns_std=daily_returns.std()
daily_returns_std
daily_returns_ms=pd.concat([daily_returns_mean, daily_returns_std], axis=1)
daily_returns_ms
random_state = 10
y_pred = KMeans(n_clusters=3, random_state=random_state).fit_predict(daily_returns_ms)
plt.scatter(daily_returns_mean, daily_returns_std, c=y_pred);
plt.axis([-0.01, 0.01, 0, 0.05]);
corr_mat=daily_returns.corr(method='spearman')
corr_mat
Z = hac.linkage(corr_mat, 'single')
# Plot the dendrogram
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
hac.dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=8., # font size for the x axis labels
)
plt.show()
selected=closes[['AAPL', 'AMZN']]
selected.plot(figsize=(8,6));
daily_returns_sel=calc_daily_returns(selected)
daily_returns_sel.plot(figsize=(8,6));
annual_returns_sel = calc_annual_returns(daily_returns_sel)
annual_returns_sel
Explanation: 3. Asset selection
End of explanation
def target_func(x, cov_matrix, mean_vector, r):
f = float(-(x.dot(mean_vector) - r) / np.sqrt(x.dot(cov_matrix).dot(x.T)))
return f
def optimal_portfolio(profits, r, allow_short=True):
x = np.ones(len(profits.T))
mean_vector = np.mean(profits)
cov_matrix = np.cov(profits.T)
cons = ({'type': 'eq','fun': lambda x: np.sum(x) - 1})
if not allow_short:
bounds = [(0, None,) for i in range(len(x))]
else:
bounds = None
minimize = optimize.minimize(target_func, x, args=(cov_matrix, mean_vector, r), bounds=bounds,
constraints=cons)
return minimize
opt=optimal_portfolio(annual_returns_sel, 0.015)
opt
annual_returns_sel.dot(opt.x)
asp=calc_annual_returns(calc_daily_returns(sp))
asp
def objfun(W, R, target_ret):
stock_mean = np.mean(R,axis=0)
port_mean = np.dot(W,stock_mean)
cov=np.cov(R.T)
port_var = np.dot(np.dot(W,cov),W.T)
penalty = 2000*abs(port_mean-target_ret)
return np.sqrt(port_var) + penalty
def calc_efficient_frontier(returns):
result_means = []
result_stds = []
result_weights = []
means = returns.mean()
min_mean, max_mean = means.min(), means.max()
nstocks = returns.columns.size
for r in np.linspace(min_mean, max_mean, 150):
weights = np.ones(nstocks)/nstocks
bounds = [(0,1) for i in np.arange(nstocks)]
constraints = ({'type': 'eq', 'fun': lambda W: np.sum(W) - 1})
results = optimize.minimize(objfun, weights, (returns, r), method='SLSQP', constraints = constraints, bounds = bounds)
if not results.success: # handle error
            raise Exception(results.message)
result_means.append(np.round(r,4)) # 4 decimal places
std_=np.round(np.std(np.sum(returns*results.x,axis=1)),6)
result_stds.append(std_)
result_weights.append(np.round(results.x, 5))
return {'Means': result_means, 'Stds': result_stds, 'Weights': result_weights}
frontier_data = calc_efficient_frontier(annual_returns_sel)
def plot_efficient_frontier(ef_data):
plt.figure(figsize=(12,8))
plt.title('Efficient Frontier')
    plt.xlabel('Standard deviation of the portfolio (risk)')
plt.ylabel('Return of the portfolio')
plt.plot(ef_data['Stds'], ef_data['Means'], '--');
plot_efficient_frontier(frontier_data)
Explanation: 4. Portfolio optimization
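For reference, target_func above is (up to sign) the Sharpe-ratio objective, so optimal_portfolio approximately solves (my own reading of the code, with target rate $r$):
$$\max_{w}\ \frac{w^T \mu - r}{\sqrt{w^T \Sigma\, w}} \quad \text{s.t.} \quad \sum_i w_i = 1,$$
optionally with $w_i \ge 0$ when short selling is not allowed.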
End of explanation |
14,257 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div class="alert alert-block alert-info" style="margin-top
Step1: Set the random seed
Step2: create a linear regression object, as our input and output will be two we set the parameters accordingly
Step3: we can use the diagram to represent the model or object
<img src = "https
Step4: we can create a tensor with two rows representing one sample of data
Step5: we can make a prediction
Step6: each row in the following tensor represents a different sample
Step7: we can make a prediction using multiple samples | Python Code:
from torch import nn
import torch
Set the random seed:
torch.manual_seed(1)
Explanation: <div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="http://cocl.us/pytorch_link_top"><img src = "http://cocl.us/Pytorch_top" width = 950, align = "center"></a>
<img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 200, align = "center">
<h1 align=center><font size = 5>Linear Regression with Multiple Outputs </font></h1>
# Table of Contents
In this lab, we will review how to make a prediction for Linear Regression with Multiple Output.
<div class="alert alert-block alert-info" style="margin-top: 20px">
<li><a href="#ref2">Build Custom Modules </a></li>
<br>
<p></p>
Estimated Time Needed: <strong>15 min</strong>
</div>
<hr>
<a id="ref1"></a>
<h2 align=center>Class Linear </h2>
End of explanation
class linear_regression(nn.Module):
def __init__(self,input_size,output_size):
super(linear_regression,self).__init__()
self.linear=nn.Linear(input_size,output_size)
def forward(self,x):
yhat=self.linear(x)
return yhat
Explanation: Set the random seed:
End of explanation
model=linear_regression(2,2)
Explanation: Create a linear regression object; since both the input and the output are two-dimensional, we set the parameters accordingly
End of explanation
list(model.parameters())
Explanation: we can use the diagram to represent the model or object
<img src = "https://ibm.box.com/shared/static/icmwnxru7nytlhnq5x486rffea9ncpk7.png" width = 600, align = "center">
we can see the parameters
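As a small optional check (a sketch, not part of the original lab), the named parameters can also be inspected through the state dictionary:
model.state_dict()
print(model.linear.weight.shape, model.linear.bias.shape)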
End of explanation
x=torch.tensor([[1.0,3.0]])
Explanation: we can create a tensor with one row and two columns, representing a single sample with two features
End of explanation
yhat=model(x)
yhat
Explanation: we can make a prediction
End of explanation
X=torch.tensor([[1.0,1.0],[1.0,2.0],[1.0,3.0]])
Explanation: each row in the following tensor represents a different sample
End of explanation
Yhat=model(X)
Yhat
Explanation: we can make a prediction using multiple samples
End of explanation |
14,258 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
You all know word clouds
Step1: We see that <tt>wordcloud</tt> just did some text preprocessing (like removing the "!") and rendered the texts.
That's it!
Advanced example
The example above wasn't very impressive and expressive. So we use <tt>nltk</tt> to retrieve some big texts from a corpus. In the following word cloud, we want to display the main topics of some movie reviews.
Step2: We also need some basic text preprocessing
Step3: Let's do some basic text preprocessing by only keeping alphabetic characters and removing the stop words.
Step4: We join the text array together to get a long string and display it as word cloud.
Step5: Let's move on to some more interesting visualizations.
HTML webpage
We can read any website via Python's built-in <tt>request</tt> library. In this example, we read the aim42 (a guide for software architecture improvement; very valuable and interesting!) directly from the guide's website. We just need to get the response's text, that delivers the website as pure HTML.
Step6: We use <tt>beautifulsoup</tt> so get only the text from the HTML.
Step7: Again, we clean up the data by only processing any alphabetic text and keeping all the tokens that aren't in the stop word list.
Step8: And again, we produce a word cloud. For a nicer styling, we set some arguments in the constructor.
Step9: Here we are! The aim42 guide is (amongst other things) about data, system and architecture
Step10: As above, we join the names and create a word cloud.
Step11: Well, I guess the PetClinic software is about "pets" ;-)
PowerPoint Presentation
Last but not least, to our original goal
Step12: Cleaning up the tokens is a little bit tricky because we have German as well as English words in it. We also need some additional stop words for words that aren't common but don't make sense in a word cloud.
And we filter out the words that are contained in the stop list.
Step13: And now finally | Python Code:
from wordcloud import WordCloud
WordCloud().generate("Hello reader!").to_image()
Explanation: Introduction
You all know word clouds:
They give you a quick overview of the top topics of your blog, book, source code – or presentation. The latter was the one that got me thinking: How cool would it be if you start your presentation with a word cloud of the main topics of your talk? And: How easy is it to do that completely automated with Python?!
So let's go crazy this time with word cloud computing for different data sources like texts, web sites, source code and PowerPoint presentations!
The Idea
The main idea is to read any text in any format, preprocess it and throw it against a word cloud producing library. For this, we need some main libraries that do all the work for us:
<tt>nltk</tt> for some big texts and for cleaning up our input texts
<tt>beautifulsoup</tt> for getting raw texts from HTML websites
<tt>pygments</tt> for retrieving identifiers from source code
<tt>python-pptx</tt> for extracting texts from PowerPoint slides
<tt>wordcloud</tt> for producing, well, a word cloud
So let's get started!
Basic example
In this section, I demonstrate how easy it is to produce a word cloud from some given text with <tt>wordcloud</tt>. We import the <tt>WordCloud</tt> class and use it directly to generate a word cloud with two words.
End of explanation
from nltk.corpus import movie_reviews
movie_reviews.words()[:10]
Explanation: We see that <tt>wordcloud</tt> just did some text preprocessing (like removing the "!") and rendered the texts.
That's it!
Advanced example
The example above wasn't very impressive and expressive. So we use <tt>nltk</tt> to retrieve some big texts from a corpus. In the following word cloud, we want to display the main topics of some movie reviews.
End of explanation
from nltk.corpus import stopwords
english_stopword_tokens = stopwords.words('english')
english_stopword_tokens[:10]
Explanation: We also need some basic text preprocessing: removing the language's common words via a stop list for that language. The stop list is just a different kind of corpus – a list of text tokens.
End of explanation
movie_reviews_tokens = [s for s in movie_reviews.words()
if s.isalpha() and
not s in english_stopword_tokens]
movie_reviews_tokens[:5]
Explanation: Let's do some basic text preprocessing by only keeping alphabetic characters and removing the stop words.
End of explanation
movie_reviews_texts = " ".join(movie_reviews_tokens)
WordCloud().generate(movie_reviews_texts).to_image()
Explanation: We join the text array together to get a long string and display it as word cloud.
End of explanation
import requests
webpage = requests.get("http://aim42.github.io/")
webpage.text[:100]
Explanation: Let's move on to some more interesting visualizations.
HTML webpage
We can read any website via Python's built-in <tt>request</tt> library. In this example, we read the aim42 (a guide for software architecture improvement; very valuable and interesting!) directly from the guide's website. We just need to get the response's text, that delivers the website as pure HTML.
End of explanation
from bs4 import BeautifulSoup
parsed_content = BeautifulSoup(webpage.text, 'html.parser')
text_content = parsed_content.body.get_text()
text_content[:100]
Explanation: We use <tt>beautifulsoup</tt> to get only the text from the HTML.
End of explanation
content_tokens = []
for line in text_content.split("\n"):
for token in line.split(" "):
if token.isalpha() and not token in english_stopword_tokens:
content_tokens.append(token.lower())
content_tokens[0:5]
Explanation: Again, we clean up the data by only processing any alphabetic text and keeping all the tokens that aren't in the stop word list.
End of explanation
text = " ".join(content_tokens)
WordCloud(max_font_size=40,
scale=1.5,
background_color="white").generate(text).to_image()
Explanation: And again, we produce a word cloud. For a nicer styling, we set some arguments in the constructor.
End of explanation
import glob
from pygments.token import Token
from pygments.lexers.jvm import JavaLexer
import re
LEXER = JavaLexer()
CAMEL_CASE_1_PATTERN = re.compile(r'(.)([A-Z][a-z]+)')
CAMEL_CASE_2_PATTERN = re.compile(r'([a-z0-9])([A-Z])')
WORD_BOUNDARY_PATTERN = re.compile(r'[^a-zA-Z]')
JAVA_STOP_WORDS = set(["byte", "short", "int", "long",
"float", "double", "char", "string",
"object", "java", "get", "set", "is"])
STOP_LIST = JAVA_STOP_WORDS | set(english_stopword_tokens)
MIN_WORD_LENGTH = 3
def break_tokens(tokens):
tokens = CAMEL_CASE_1_PATTERN.sub(r'\1 \2', tokens)
tokens = CAMEL_CASE_2_PATTERN.sub(r'\1 \2', tokens)
tokens = WORD_BOUNDARY_PATTERN.sub(' ', tokens)
return tokens.split(' ')
def filter_token(tokens_of_name):
filtered_tokens = []
for token in tokens_of_name:
if len(token) >= MIN_WORD_LENGTH and token.lower() not in STOP_LIST:
filtered_tokens.append(token.lower())
return filtered_tokens
def extract_names(file_path, token_types):
extracted_names = []
with open(file_path) as source_code_file:
source_code_content = source_code_file.read()
for token_type, tokens in LEXER.get_tokens(source_code_content):
if token_type in token_types:
tokens_of_name = break_tokens(tokens)
extracted_names.extend(filter_token(tokens_of_name))
return extracted_names
def extract_names_from_source_code(root_dir, glob_pattern, relevant_token_types):
file_paths = glob.glob(root_dir + glob_pattern, recursive=True)
filtered_names = []
for file_path in file_paths:
names = extract_names(file_path, relevant_token_types)
if len(names) > 0:
filtered_names.extend(names)
return filtered_names
relevant_token_types = [Token.Name]
names = extract_names_from_source_code(
r'../../spring-petclinic/src/main',
'/**/*.java',
relevant_token_types)
names[:5]
Explanation: Here we are! The aim42 guide is (amongst other things) about data, system and architecture :-).
Source code
Next, we move over to source code!
It's nice to get a quick overview of what your software is all about. Our study object, in this case, is the Spring PetClinic. For our nice word cloud, we need to read all the source code files, extract the identifiers, handle some programming language specific cases and remove common words via the stop list.
And here comes some magic all in one dirty little code block (I promise to get through it step by step in another blog post/notebook):
We use <tt>glob</tt> to get a list of source code paths and read the content.
We use <tt>pygments</tt>'s <tt>JavaLexer</tt> to retrieve just the tokens we want.
We use some regex voodoo to break Java's CamelCase names into separate words.
We filter out common Java and English words via a stop list.
We set a minimal number of characters a word has to consist of.
End of explanation
names_text = " ".join(names)
WordCloud(width=360, height=360).generate(names_text).to_image()
Explanation: As above, we join the names and create a word cloud.
End of explanation
from pptx import Presentation
PPTX = r'data/talk.pptx'
prs = Presentation(PPTX)
text_fragments = []
for slide in prs.slides:
# store title for later replacement
title = ""
if slide.shapes.title and slide.shapes.title.text:
title = slide.shapes.title.text
# read the slide's notes
if slide.has_notes_slide:
notes = slide.notes_slide.notes_text_frame.text
note_tokens = notes.split(" ")
text_fragments.extend(s for s in note_tokens if s.isalpha())
# read the slide's text
for shape in slide.shapes:
if not shape.has_text_frame:
continue
text_frame_text = shape.text_frame.text.replace(title, "")
text_frame_tokens = text_frame_text.split(" ")
text_fragments.extend(s for s in text_frame_tokens if s.isalpha())
text_fragments[:5]
Explanation: Well, I guess the PetClinic software is about "pets" ;-)
PowerPoint Presentation
Last but not least, to our original goal: creating a word cloud for my PowerPoint presentation (in German and English).
We use <tt>python-pptx</tt> to wind our way through the presentation.
End of explanation
german_stopword_tokens = stopwords.words('german')
stop_word_tokens = set(german_stopword_tokens) | set(english_stopword_tokens)
custom_stop_words = ["Nichts", "unserer", "viele", "großen", "Du", "Deiner"]
stop_word_tokens = stop_word_tokens | set(custom_stop_words)
text_tokens = [token for token
in text_fragments
if token not in stop_word_tokens]
text_tokens[:5]
Explanation: Cleaning up the tokens is a little bit tricky because we have German as well as English words in it. We also need some additional stop words for words that aren't common but don't make sense in a word cloud.
And we filter out the words that are contained in the stop list.
End of explanation
text = " ".join(text_tokens)
WordCloud(min_font_size=30,
max_font_size=80,
scale=0.5,
width=960,
height=720).generate(text).to_image()
Explanation: And now finally: The word cloud of my PowerPoint presentation! :-)
End of explanation |
14,259 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CEOS Data Cube - Water Analysis Notebook
Description
Step1: First, we must connect to our data cube. We can then query the contents of the data cube we have connected to, including both the metadata and the actual data.
Step2: Obtain the metadata of our cube... Initially, we need to get the platforms and products in the cube. The rest of the metadata will be dependent on these two options.
Step3: Execute the following code and then use the generated form to choose your desired platfrom and product.
Step4: With the platform and product, we can get the rest of the metadata. This includes the resolution of a pixel, the latitude/longitude extents, and the minimum and maximum dates available of the chosen platform/product combination.
Step5: Execute the following code and then use the generated form to choose the extents of your desired data.
Step6: Now that we have filled out the above two forms, we have enough information to query our data cube. The following code snippet ends with the actual Data Cube query, which will return the dataset with all the data matching our query.
Step7: At this point, we have finished accessing our data cube and we can turn to analyzing our data. In this example, we will run the WOfS algorithm. The wofs_classify function, seen below, will return a modified dataset, where a value of 1 indicates the pixel has been classified as water by the WoFS algorithm and 0 represents the pixel is non-water.
For more information on the WOfS algorithm, refer to
Step8: Execute the following code and then use the generated form to choose your desired acquisition date. The following two code blocks are only necessary if you would like to see the water mask of a single acquisition date.
Step9: With all of the pixels classified as either water/non-water, let's perform a time series analysis over our derived water class. The function, perform_timeseries_analysis, takes in a dataset of 3 dimensions (time, latitude, and longitude), then sums the values of each pixel over time. It also keeps track of the number of clear observations we have at each pixel. We can then normalize each pixel to determine areas at risk of flooding. The normalization calculation is simply
Step10: The following plots visualize the results of our timeseries analysis. You may change the color scales with the cmap option. For color scales available for use by cmap, see http | Python Code:
%matplotlib inline
from datetime import datetime
import numpy as np
import datacube
from dc_water_classifier import wofs_classify
from dc_utilities import perform_timeseries_analysis
import dc_au_colormaps
from dc_notebook_utilities import *
Explanation: CEOS Data Cube - Water Analysis Notebook
Description: This Python notebook allows users to directly interact with a CEOS-formatted data cube to perform analyses for water management. The following steps will allow users to connect to a data cube, define the analysis location and time period (extent of latitude/longitude and dates), and then run the Australian Water Observations from Space (WOFS) algorithm. The outputs of the WOFS algorithm include static and time series pixel-level water observations for any pixel. These results provide critical information for water management that will allow users to assess water cycle dynamics, historical water extent and the risk of floods and droughts. Future versions may consider the addition of water quality parameters (e.g. Total Suspended Matter, Chlorophyll-A, CDOM), coastal erosion analyses and in-situ precipitation and surface temperature data.
Import necessary Data Cube libraries and dependencies.
End of explanation
dc = datacube.Datacube(app='dc-water-analysis')
api = datacube.api.API(datacube=dc)
Explanation: First, we must connect to our data cube. We can then query the contents of the data cube we have connected to, including both the metadata and the actual data.
End of explanation
# Get available products
products = dc.list_products()
platform_names = list(set(products.platform))
product_names = list(products.name)
Explanation: Obtain the metadata of our cube... Initially, we need to get the platforms and products in the cube. The rest of the metadata will be dependent on these two options.
End of explanation
product_values = create_platform_product_gui(platform_names, product_names)
Explanation: Execute the following code and then use the generated form to choose your desired platfrom and product.
End of explanation
# Save the form values
platform = product_values[0].value
product = product_values[1].value
# Get the pixel resolution of the selected product
resolution = products.resolution[products.name == product]
lat_dist = resolution.values[0][0]
lon_dist = resolution.values[0][1]
# Get the extents of the cube
descriptor = api.get_descriptor({'platform': platform})[product]
min_date = descriptor['result_min'][0]
min_lat = descriptor['result_min'][1]
min_lon = descriptor['result_min'][2]
min_date_str = str(min_date.year) + '-' + str(min_date.month) + '-' + str(min_date.day)
min_lat_rounded = round(min_lat, 3)
min_lon_rounded = round(min_lon, 3)
max_date = descriptor['result_max'][0]
max_lat = descriptor['result_max'][1]
max_lon = descriptor['result_max'][2]
max_date_str = str(max_date.year) + '-' + str(max_date.month) + '-' + str(max_date.day)
max_lat_rounded = round(max_lat, 3) #calculates latitude of the pixel's center
max_lon_rounded = round(max_lon, 3) #calculates longitude of the pixel's center
# Display metadata
generate_metadata_report(min_date_str, max_date_str,
min_lon_rounded, max_lon_rounded, lon_dist,
min_lat_rounded, max_lat_rounded, lat_dist)
show_map_extents(min_lon_rounded, max_lon_rounded, min_lat_rounded, max_lat_rounded)
Explanation: With the platform and product, we can get the rest of the metadata. This includes the resolution of a pixel, the latitude/longitude extents, and the minimum and maximum dates available of the chosen platform/product combination.
End of explanation
extent_values = create_extents_gui(min_date_str, max_date_str,
min_lon_rounded, max_lon_rounded,
min_lat_rounded, max_lat_rounded)
Explanation: Execute the following code and then use the generated form to choose the extents of your desired data.
End of explanation
# Save form values
start_date = datetime.strptime(extent_values[0].value, '%Y-%m-%d')
end_date = datetime.strptime(extent_values[1].value, '%Y-%m-%d')
min_lon = extent_values[2].value
max_lon = extent_values[3].value
min_lat = extent_values[4].value
max_lat = extent_values[5].value
# Query the Data Cube
dataset_in = dc.load(platform=platform,
product=product,
time=(start_date, end_date),
lon=(min_lon, max_lon),
lat=(min_lat, max_lat))
Explanation: Now that we have filled out the above two forms, we have enough information to query our data cube. The following code snippet ends with the actual Data Cube query, which will return the dataset with all the data matching our query.
End of explanation
water_class = wofs_classify(dataset_in)
Explanation: At this point, we have finished accessing our data cube and we can turn to analyzing our data. In this example, we will run the WOfS algorithm. The wofs_classify function, seen below, will return a modified dataset, where a value of 1 indicates the pixel has been classified as water by the WoFS algorithm and 0 represents the pixel is non-water.
For more information on the WOfS algorithm, refer to:
Mueller, et al. (2015) "Water observations from space: Mapping surface water from 25 years of Landsat imagery across Australia." Remote Sensing of Environment.
End of explanation
acq_dates = list(water_class.time.values.astype(str))
acq_date_input = create_acq_date_gui(acq_dates)
# Save form value
acq_date = acq_date_input.value
acq_date_index = acq_dates.index(acq_date)
# Get water class for selected acquisition date and mask no data values
water_class_for_acq_date = water_class.wofs[acq_date_index]
water_class_for_acq_date.values = water_class_for_acq_date.values.astype('float')
water_class_for_acq_date.values[water_class_for_acq_date.values == -9999] = np.nan
water_observations_for_acq_date_plot = water_class_for_acq_date.plot(cmap='BuPu')
Explanation: Execute the following code and then use the generated form to choose your desired acquisition date. The following two code blocks are only necessary if you would like to see the water mask of a single acquisition date.
End of explanation
time_series = perform_timeseries_analysis(water_class)
Explanation: With all of the pixels classified as either water/non-water, let's perform a time series analysis over our derived water class. The function, perform_timeseries_analysis, takes in a dataset of 3 dimensions (time, latitude, and longitude), then sums the values of each pixel over time. It also keeps track of the number of clear observations we have at each pixel. We can then normalize each pixel to determine areas at risk of flooding. The normalization calculation is simply:
$$\text{normalized\_water\_observations} = \dfrac{\text{total\_water\_observations}}{\text{total\_clear\_observations}}$$.
The output of each of the three calculations can be seen below.
End of explanation
normalized_water_observations_plot = time_series.normalized_data.plot(cmap='dc_au_WaterSummary')
total_water_observations_plot = time_series.total_data.plot(cmap='dc_au_WaterObservations')
total_clear_observations_plot = time_series.total_clean.plot(cmap='dc_au_ClearObservations')
Explanation: The following plots visualize the results of our timeseries analysis. You may change the color scales with the cmap option. For color scales available for use by cmap, see http://matplotlib.org/examples/color/colormaps_reference.html. You can also define discrete color scales by using the levels and colors. For example:
normalized_water_observations_plot = normalized_water_observations.plot(levels=3, colors=['#E5E5FF', '#4C4CFF', '#0000FF'])
normalized_water_observations_plot = normalized_water_observations.plot(levels=[0.00, 0.50, 1.01], colors=['#E5E5FF', '#0000FF'])
For more examples on how you can modify plots, see http://xarray.pydata.org/en/stable/plotting.html.
End of explanation |
14,260 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
High-level Plotting with Pandas and Seaborn
In 2016, there are more options for generating plots in Python than ever before
Step1: Notice that by default a line plot is drawn, and light background is included. These decisions were made on your behalf by pandas.
All of this can be changed, however
Step2: Similarly, for a DataFrame
Step3: As an illustration of the high-level nature of Pandas plots, we can split multiple series into subplots with a single argument for plot
Step4: Or, we may want to have some series displayed on the secondary y-axis, which can allow for greater detail and less empty space
Step5: If we would like a little more control, we can use matplotlib's subplots function directly, and manually assign plots to its axes
Step6: Bar plots
Bar plots are useful for displaying and comparing measurable quantities, such as counts or volumes. In Pandas, we just use the plot method with a kind='bar' argument.
For this series of examples, let's load up the Titanic dataset
Step7: Another way of comparing the groups is to look at the survival rate, by adjusting for the number of people in each group.
Step8: Histograms
Frequenfly it is useful to look at the distribution of data before you analyze it. Histograms are a sort of bar graph that displays relative frequencies of data values; hence, the y-axis is always some measure of frequency. This can either be raw counts of values or scaled proportions.
For example, we might want to see how the fares were distributed aboard the titanic
Step9: The hist method puts the continuous fare values into bins, trying to make a sensible décision about how many bins to use (or equivalently, how wide the bins are). We can override the default value (10)
Step10: There are algorithms for determining an "optimal" number of bins, each of which varies somehow with the number of observations in the data series.
Step11: A density plot is similar to a histogram in that it describes the distribution of the underlying data, but rather than being a pure empirical representation, it is an estimate of the underlying "true" distribution. As a result, it is smoothed into a continuous line plot. We create them in Pandas using the plot method with kind='kde', where kde stands for kernel density estimate.
Step12: Often, histograms and density plots are shown together
Step13: Here, we had to normalize the histogram (normed=True), since the kernel density is normalized by definition (it is a probability distribution).
We will explore kernel density estimates more in the next section.
Boxplots
A different way of visualizing the distribution of data is the boxplot, which is a display of common quantiles; these are typically the quartiles and the lower and upper 5 percent values.
Step14: You can think of the box plot as viewing the distribution from above. The blue crosses are "outlier" points that occur outside the extreme quantiles.
One way to add additional information to a boxplot is to overlay the actual data; this is generally most suitable with small- or moderate-sized data series.
Step15: When data are dense, a couple of tricks used above help the visualization
Step16: Scatterplots
To look at how Pandas does scatterplots, let's look at a small dataset in wine chemistry.
Step17: Scatterplots are useful for data exploration, where we seek to uncover relationships among variables. There are no scatterplot methods for Series or DataFrame objects; we must instead use the matplotlib function scatter.
Step18: We can add additional information to scatterplots by assigning variables to either the size of the symbols or their colors.
Step19: To view scatterplots of a large numbers of variables simultaneously, we can use the scatter_matrix function that was recently added to Pandas. It generates a matrix of pair-wise scatterplots, optiorally with histograms or kernel density estimates on the diagonal.
Step20: Seaborn
Seaborn is a modern data visualization tool for Python, created by Michael Waskom. Seaborn's high-level interface makes it easy to visually explore your data, by being able to easily iterate through different plot types and layouts with minimal hand-coding. In this way, Seaborn complements matplotlib (which we will learn about later) in the data science toolbox.
An easy way to see how Seaborn can immediately improve your data visualization, is by setting the plot style using one of its sevefral built-in styles.
Here is a simple pandas plot before Seaborn
Step21: Seaborn is conventionally imported using the sns alias. Simply importing Seaborn invokes the default Seaborn settings. These are generally more muted colors with a light gray background and subtle white grid lines.
Step22: Customizing Seaborn Figure Aesthetics
Seaborn manages plotting parameters in two general groups
Step23: The figure still looks heavy, with the axes distracting from the lines in the boxplot. We can remove them with despine
Step24: Finally, we can give the plot yet more space by specifying arguments to despine; specifically, we can move axes away from the figure elements (via offset) and minimize the length of the axes to the lowest and highest major tick value (via trim)
Step25: The second set of figure aesthetic parameters controls the scale of the plot elements.
There are four default scales that correspond to different contexts that a plot may be intended for use with.
paper
notebook
talk
poster
The default is notebook, which is optimized for use in Jupyter notebooks. We can change the scaling with set_context
Step26: Each of the contexts can be fine-tuned for more specific applications
Step27: The detailed settings are available in the plotting.context
Step28: Seaborn works hand-in-hand with pandas to create publication-quality visualizations quickly and easily from DataFrame and Series data.
For example, we can generate kernel density estimates of two sets of simulated data, via the kdeplot function.
Step29: distplot combines a kernel density estimate and a histogram.
Step30: If kdeplot is provided with two columns of data, it will automatically generate a contour plot of the joint KDE.
Step31: Similarly, jointplot will generate a shaded joint KDE, along with the marginal KDEs of the two variables.
Step32: Notice in the above, we used a context manager to temporarily assign a white axis stype to the plot. This is a great way of changing the defaults for just one figure, without having to set and then reset preferences.
You can do this with a number of the seaborn defaults. Here is a dictionary of the style settings
Step33: To explore correlations among several variables, the pairplot function generates pairwise plots, along with histograms along the diagonal, and a fair bit of customization.
Step34: Plotting Small Multiples on Data-aware Grids
The pairplot above is an example of replicating the same visualization on different subsets of a particular dataset. This facilitates easy visual comparisons among groups, making otherwise-hidden patterns in complex data more apparent.
Seaborn affords a flexible means for generating plots on "data-aware grids", provided that your pandas DataFrame is structured appropriately. In particular, you need to organize your variables into columns and your observations (replicates) into rows. Using this baseline pattern of organization, you can take advantage of Seaborn's functions for easily creating lattice plots from your dataset.
FacetGrid is a Seaborn object for plotting mutliple variables simulaneously as trellis plots. Variables can be assigned to one of three dimensions of the FacetGrid
Step35: The FacetGrid's map method then allows a third variable to be plotted in each grid cell, according to the plot type passed. For example, a distplot will generate both a histogram and kernel density estimate for age, according each combination of sex and passenger class as follows
Step36: To more fully explore trellis plots in Seaborn, we will use a biomedical dataset. These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia from nine U.S. sites.
Randomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37)
Response variable
Step37: Notice that this data represents time series of individual patients, comprised of follow-up measurements at 2-4 week intervals following treatment.
As a first pass, we may wish to see how the trajectories of outcomes vary from patient to patient. Using pointplot, we can create a grid of plots to represent the time series for each patient. Let's just look at the first 12 patients
Step38: Where pointplot is particularly useful is in representing the central tendency and variance of multiple replicate measurements. Having examined individual responses to treatment, we may now want to look at the average response among treatment groups. Where there are mutluple outcomes (y variable) for each predictor (x variable), pointplot will plot the mean, and calculate the 95% confidence interval for the mean, using bootstrapping
Step39: Notice that to enforce the desired order of the facets (lowest to highest treatment level), the labels were passed as a col_order argument to FacetGrid.
Let's revisit the distplot function to look at how the disribution of the outcome variables vary by time and treatment. Instead of a histogram, however, we will here include the "rug", which are just the locations of individual data points that were used to fit the kernel density estimate.
Step40: displot can also fit parametric data models (instead of a kde). For example, we may wish to fit the data to normal distributions. We can used the distributions included in the SciPy package; Seaborn knows how to use these distributions to generate a fit to the data.
Step41: We can take the statistical analysis a step further, by using regplot to conduct regression analyses.
For example, we can simultaneously examine the relationship between age and the primary outcome variable as a function of both the treatment received and the week of the treatment by creating a scatterplot of the data, and fitting a linear relationship between age and twstrs
Step42: Exercise
From the AIS subdirectory of the data directory, import both the vessel_information table and transit_segments table and join them. Use the resulting table to create a faceted scatterplot of segment length (seg_length) and average speed (avg_sog) as a trellis plot by flag and vessel type.
To simplify the plot, first generate a subset of the data that includes only the 5 most commont ship types and the 5 most common countries. | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
normals = pd.Series(np.random.normal(size=10))
normals.plot()
Explanation: High-level Plotting with Pandas and Seaborn
In 2016, there are more options for generating plots in Python than ever before:
matplotlib
Pandas
Seaborn
ggplot
Bokeh
pygal
Plotly
These packages vary with respect to their APIs, output formats, and complexity. A package like matplotlib, while powerful, is a relatively low-level plotting package, that makes very few assumptions about what constitutes good layout (by design), but has a lot of flexiblility to allow the user to completely customize the look of the output.
On the other hand, Seaborn and Pandas include methods for DataFrame and Series objects that are relatively high-level, and that make reasonable assumptions about how the plot should look. This allows users to generate publication-quality visualizations in a relatively automated way.
End of explanation
normals.cumsum().plot(grid=True)
Explanation: Notice that by default a line plot is drawn, and light background is included. These decisions were made on your behalf by pandas.
All of this can be changed, however:
End of explanation
variables = pd.DataFrame({'normal': np.random.normal(size=100),
'gamma': np.random.gamma(1, size=100),
'poisson': np.random.poisson(size=100)})
variables.cumsum(0).plot()
Explanation: Similarly, for a DataFrame:
End of explanation
variables.cumsum(0).plot(subplots=True, grid=True)
Explanation: As an illustration of the high-level nature of Pandas plots, we can split multiple series into subplots with a single argument for plot:
End of explanation
variables.cumsum(0).plot(secondary_y='normal', grid=True)
Explanation: Or, we may want to have some series displayed on the secondary y-axis, which can allow for greater detail and less empty space:
End of explanation
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(12, 4))
for i,var in enumerate(['normal','gamma','poisson']):
variables[var].cumsum(0).plot(ax=axes[i], title=var)
axes[0].set_ylabel('cumulative sum')
Explanation: If we would like a little more control, we can use matplotlib's subplots function directly, and manually assign plots to its axes:
End of explanation
titanic = pd.read_excel("../data/titanic.xls", "titanic")
titanic.head()
titanic.groupby('pclass').survived.sum().plot.bar()
titanic.groupby(['sex','pclass']).survived.sum().plot.barh()
death_counts = pd.crosstab([titanic.pclass, titanic.sex], titanic.survived.astype(bool))
death_counts.plot.bar(stacked=True, color=['black','gold'], grid=True)
Explanation: Bar plots
Bar plots are useful for displaying and comparing measurable quantities, such as counts or volumes. In Pandas, we just use the plot method with a kind='bar' argument.
For this series of examples, let's load up the Titanic dataset:
End of explanation
death_counts.div(death_counts.sum(1).astype(float), axis=0).plot.barh(stacked=True, color=['black','gold'])
Explanation: Another way of comparing the groups is to look at the survival rate, by adjusting for the number of people in each group.
End of explanation
titanic.fare.hist(grid=False)
Explanation: Histograms
Frequently it is useful to look at the distribution of data before you analyze it. Histograms are a sort of bar graph that displays relative frequencies of data values; hence, the y-axis is always some measure of frequency. This can either be raw counts of values or scaled proportions.
For example, we might want to see how the fares were distributed aboard the titanic:
End of explanation
titanic.fare.hist(bins=30)
Explanation: The hist method puts the continuous fare values into bins, trying to make a sensible decision about how many bins to use (or equivalently, how wide the bins are). We can override the default value (10):
End of explanation
sturges = lambda n: int(np.log2(n) + 1)
square_root = lambda n: int(np.sqrt(n))
from scipy.stats import kurtosis
doanes = lambda data: int(1 + np.log(len(data)) + np.log(1 + kurtosis(data) * (len(data) / 6.) ** 0.5))
n = len(titanic)
sturges(n), square_root(n), doanes(titanic.fare.dropna())
titanic.fare.hist(bins=doanes(titanic.fare.dropna()))
Explanation: There are algorithms for determining an "optimal" number of bins, each of which varies somehow with the number of observations in the data series.
End of explanation
titanic.fare.dropna().plot.kde(xlim=(0,600))
Explanation: A density plot is similar to a histogram in that it describes the distribution of the underlying data, but rather than being a pure empirical representation, it is an estimate of the underlying "true" distribution. As a result, it is smoothed into a continuous line plot. We create them in Pandas using the plot method with kind='kde', where kde stands for kernel density estimate.
End of explanation
titanic.fare.hist(bins=doanes(titanic.fare.dropna()), normed=True, color='lightseagreen')
titanic.fare.dropna().plot.kde(xlim=(0,600), style='r--')
Explanation: Often, histograms and density plots are shown together:
End of explanation
titanic.boxplot(column='fare', by='pclass', grid=False)
Explanation: Here, we had to normalize the histogram (normed=True), since the kernel density is normalized by definition (it is a probability distribution).
We will explore kernel density estimates more in the next section.
Boxplots
A different way of visualizing the distribution of data is the boxplot, which is a display of common quantiles; these are typically the quartiles and the lower and upper 5 percent values.
End of explanation
bp = titanic.boxplot(column='age', by='pclass', grid=False)
for i in [1,2,3]:
y = titanic.age[titanic.pclass==i].dropna()
# Add some random "jitter" to the x-axis
x = np.random.normal(i, 0.04, size=len(y))
plt.plot(x, y.values, 'r.', alpha=0.2)
Explanation: You can think of the box plot as viewing the distribution from above. The blue crosses are "outlier" points that occur outside the extreme quantiles.
One way to add additional information to a boxplot is to overlay the actual data; this is generally most suitable with small- or moderate-sized data series.
End of explanation
# Write your answer here
Explanation: When data are dense, a couple of tricks used above help the visualization:
reducing the alpha level to make the points partially transparent
adding random "jitter" along the x-axis to avoid overstriking
Exercise
Using the Titanic data, create kernel density estimate plots of the age distributions of survivors and victims.
End of explanation
wine = pd.read_table("../data/wine.dat", sep='\s+')
attributes = ['Grape',
'Alcohol',
'Malic acid',
'Ash',
'Alcalinity of ash',
'Magnesium',
'Total phenols',
'Flavanoids',
'Nonflavanoid phenols',
'Proanthocyanins',
'Color intensity',
'Hue',
'OD280/OD315 of diluted wines',
'Proline']
wine.columns = attributes
Explanation: Scatterplots
To look at how Pandas does scatterplots, let's look at a small dataset in wine chemistry.
End of explanation
wine.plot.scatter('Color intensity', 'Hue')
Explanation: Scatterplots are useful for data exploration, where we seek to uncover relationships among variables. There are no scatterplot methods for Series or DataFrame objects; we must instead use the matplotlib function scatter.
End of explanation
wine.plot.scatter('Color intensity', 'Hue', s=wine.Alcohol*100, alpha=0.5)
wine.plot.scatter('Color intensity', 'Hue', c=wine.Grape)
wine.plot.scatter('Color intensity', 'Hue', c=wine.Alcohol*100, cmap='hot')
Explanation: We can add additional information to scatterplots by assigning variables to either the size of the symbols or their colors.
End of explanation
_ = pd.scatter_matrix(wine.loc[:, 'Alcohol':'Flavanoids'], figsize=(14,14), diagonal='kde')
Explanation: To view scatterplots of a large number of variables simultaneously, we can use the scatter_matrix function that was recently added to Pandas. It generates a matrix of pair-wise scatterplots, optionally with histograms or kernel density estimates on the diagonal.
End of explanation
normals.plot()
Explanation: Seaborn
Seaborn is a modern data visualization tool for Python, created by Michael Waskom. Seaborn's high-level interface makes it easy to visually explore your data, by being able to easily iterate through different plot types and layouts with minimal hand-coding. In this way, Seaborn complements matplotlib (which we will learn about later) in the data science toolbox.
An easy way to see how Seaborn can immediately improve your data visualization, is by setting the plot style using one of its sevefral built-in styles.
Here is a simple pandas plot before Seaborn:
End of explanation
import seaborn as sns
normals.plot()
Explanation: Seaborn is conventionally imported using the sns alias. Simply importing Seaborn invokes the default Seaborn settings. These are generally more muted colors with a light gray background and subtle white grid lines.
End of explanation
sns.set_style('whitegrid')
sns.boxplot(x='pclass', y='age', data=titanic)
sns.set_style('ticks')
sns.boxplot(x='pclass', y='age', data=titanic)
Explanation: Customizing Seaborn Figure Aesthetics
Seaborn manages plotting parameters in two general groups:
setting components of aesthetic style of the plot
scaling elements of the figure
This default theme is called darkgrid; there are a handful of preset themes:
darkgrid
whitegrid
dark
white
ticks
Each are suited to partiular applications. For example, in more "data-heavy" situations, one might want a lighter background.
We can apply an alternate theme using set_style:
End of explanation
sns.boxplot(x='pclass', y='age', data=titanic)
sns.despine()
Explanation: The figure still looks heavy, with the axes distracting from the lines in the boxplot. We can remove them with despine:
End of explanation
sns.boxplot(x='pclass', y='age', data=titanic)
sns.despine(offset=20, trim=True)
Explanation: Finally, we can give the plot yet more space by specifying arguments to despine; specifically, we can move axes away from the figure elements (via offset) and minimize the length of the axes to the lowest and highest major tick value (via trim):
End of explanation
sns.set_context('paper')
sns.boxplot(x='pclass', y='age', data=titanic)
sns.despine(offset=20, trim=True)
sns.set_context('poster')
sns.boxplot(x='pclass', y='age', data=titanic)
sns.despine(offset=20, trim=True)
Explanation: The second set of figure aesthetic parameters controls the scale of the plot elements.
There are four default scales that correspond to different contexts that a plot may be intended for use with.
paper
notebook
talk
poster
The default is notebook, which is optimized for use in Jupyter notebooks. We can change the scaling with set_context:
End of explanation
sns.set_context('notebook', font_scale=0.5, rc={'lines.linewidth': 0.5})
sns.boxplot(x='pclass', y='age', data=titanic)
sns.despine(offset=20, trim=True)
Explanation: Each of the contexts can be fine-tuned for more specific applications:
End of explanation
sns.plotting_context()
Explanation: The detailed settings are available from plotting_context:
End of explanation
data = np.random.multivariate_normal([0, 0], [[5, 2], [2, 2]], size=2000)
data = pd.DataFrame(data, columns=['x', 'y'])
data.head()
sns.set()
for col in 'xy':
sns.kdeplot(data[col], shade=True)
Explanation: Seaborn works hand-in-hand with pandas to create publication-quality visualizations quickly and easily from DataFrame and Series data.
For example, we can generate kernel density estimates of two sets of simulated data, via the kdeplot function.
End of explanation
sns.distplot(data['x'])
Explanation: distplot combines a kernel density estimate and a histogram.
End of explanation
sns.kdeplot(data);
cmap = {1:'Reds', 2:'Blues', 3:'Greens'}
for grape in cmap:
alcohol, phenols = wine.loc[wine.Grape==grape, ['Alcohol', 'Total phenols']].T.values
sns.kdeplot(alcohol, phenols,
cmap=cmap[grape], shade=True, shade_lowest=False, alpha=0.3)
Explanation: If kdeplot is provided with two columns of data, it will automatically generate a contour plot of the joint KDE.
End of explanation
with sns.axes_style('white'):
sns.jointplot("Alcohol", "Total phenols", wine, kind='kde');
Explanation: Similarly, jointplot will generate a shaded joint KDE, along with the marginal KDEs of the two variables.
End of explanation
sns.axes_style()
with sns.axes_style('white', {'font.family': ['serif']}):
sns.jointplot("Alcohol", "Total phenols", wine, kind='kde');
Explanation: Notice that in the above we used a context manager to temporarily assign a white axis style to the plot. This is a great way of changing the defaults for just one figure, without having to set and then reset preferences.
You can do this with a number of the seaborn defaults. Here is a dictionary of the style settings:
End of explanation
titanic = titanic[titanic.age.notnull() & titanic.fare.notnull()]
sns.pairplot(titanic, vars=['age', 'fare', 'pclass', 'sibsp'], hue='survived', palette="muted", markers='+')
Explanation: To explore correlations among several variables, the pairplot function generates pairwise plots, along with histograms along the diagonal, and a fair bit of customization.
End of explanation
sns.FacetGrid(titanic, col="sex", row="pclass")
Explanation: Plotting Small Multiples on Data-aware Grids
The pairplot above is an example of replicating the same visualization on different subsets of a particular dataset. This facilitates easy visual comparisons among groups, making otherwise-hidden patterns in complex data more apparent.
Seaborn affords a flexible means for generating plots on "data-aware grids", provided that your pandas DataFrame is structured appropriately. In particular, you need to organize your variables into columns and your observations (replicates) into rows. Using this baseline pattern of organization, you can take advantage of Seaborn's functions for easily creating lattice plots from your dataset.
FacetGrid is a Seaborn object for plotting multiple variables simultaneously as trellis plots. Variables can be assigned to one of three dimensions of the FacetGrid:
rows
columns
colors (hue)
Let's use the titanic dataset to create a trellis plot that represents 3 variables at a time. This consists of 2 steps:
Create a FacetGrid object that relates two variables in the dataset in a grid of pairwise comparisons.
Add the actual plot (distplot) that will be used to visualize each comparison.
The first step creates a set of axes, according to the dimensions passed as row and col. These axes are empty, however:
End of explanation
g = sns.FacetGrid(titanic, col="sex", row="pclass")
g.map(sns.distplot, 'age')
Explanation: The FacetGrid's map method then allows a third variable to be plotted in each grid cell, according to the plot type passed. For example, a distplot will generate both a histogram and kernel density estimate for age, for each combination of sex and passenger class, as follows:
End of explanation
cdystonia = pd.read_csv('../data/cdystonia.csv')
cdystonia.head()
Explanation: To more fully explore trellis plots in Seaborn, we will use a biomedical dataset. These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia from nine U.S. sites.
Randomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37)
Response variable: total score on Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment)
TWSTRS measured at baseline (week 0) and weeks 2, 4, 8, 12, 16 after treatment began
End of explanation
g = sns.FacetGrid(cdystonia[cdystonia.patient<=12], col='patient', col_wrap=4)
g.map(sns.pointplot, 'week', 'twstrs', color='0.5')
Explanation: Notice that this data represents time series of individual patients, comprised of follow-up measurements at 2-4 week intervals following treatment.
As a first pass, we may wish to see how the trajectories of outcomes vary from patient to patient. Using pointplot, we can create a grid of plots to represent the time series for each patient. Let's just look at the first 12 patients:
End of explanation
ordered_treat = ['Placebo', '5000U', '10000U']
g = sns.FacetGrid(cdystonia, col='treat', col_order=ordered_treat)
g.map(sns.pointplot, 'week', 'twstrs', color='0.5')
Explanation: pointplot is particularly useful for representing the central tendency and variance of multiple replicate measurements. Having examined individual responses to treatment, we may now want to look at the average response among treatment groups. Where there are multiple outcomes (y variable) for each predictor (x variable), pointplot will plot the mean, and calculate the 95% confidence interval for the mean, using bootstrapping:
End of explanation
g = sns.FacetGrid(cdystonia, row='treat', col='week')
g.map(sns.distplot, 'twstrs', hist=False, rug=True)
Explanation: Notice that to enforce the desired order of the facets (lowest to highest treatment level), the labels were passed as a col_order argument to FacetGrid.
Let's revisit the distplot function to look at how the distribution of the outcome variable varies by time and treatment. Instead of a histogram, however, we will here include the "rug", which marks the locations of the individual data points that were used to fit the kernel density estimate.
End of explanation
from scipy.stats import norm
g = sns.FacetGrid(cdystonia, row='treat', col='week')
g.map(sns.distplot, 'twstrs', kde=False, fit=norm)
Explanation: distplot can also fit parametric data models (instead of a kde). For example, we may wish to fit the data to normal distributions. We can use the distributions included in the SciPy package; Seaborn knows how to use these distributions to generate a fit to the data.
End of explanation
g = sns.FacetGrid(cdystonia, col='treat', row='week')
g.map(sns.regplot, 'age', 'twstrs')
Explanation: We can take the statistical analysis a step further, by using regplot to conduct regression analyses.
For example, we can simultaneously examine the relationship between age and the primary outcome variable as a function of both the treatment received and the week of the treatment by creating a scatterplot of the data, and fitting a linear relationship between age and twstrs:
End of explanation
segments = pd.read_csv('../data/AIS/transit_segments.csv')
segments.head()
Explanation: Exercise
From the AIS subdirectory of the data directory, import both the vessel_information table and transit_segments table and join them. Use the resulting table to create a faceted scatterplot of segment length (seg_length) and average speed (avg_sog) as a trellis plot by flag and vessel type.
To simplify the plot, first generate a subset of the data that includes only the 5 most common ship types and the 5 most common countries.
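A rough sketch of one possible approach; the vessel_information filename, its index column, and the type/flag column names are assumptions here, so adjust them to the actual table:
import matplotlib.pyplot as plt
vessels = pd.read_csv('../data/AIS/vessel_information.csv', index_col='mmsi')
merged = segments.merge(vessels, left_on='mmsi', right_index=True)
top_types = merged.type.value_counts().index[:5]
top_flags = merged.flag.value_counts().index[:5]
subset = merged[merged.type.isin(top_types) & merged.flag.isin(top_flags)]
g = sns.FacetGrid(subset, row='flag', col='type')
g.map(plt.scatter, 'seg_length', 'avg_sog', alpha=0.3)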
End of explanation |
14,261 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Postprocessing
This notebook visualizes the output of the deep neural network and plots the associated ROC curve.
Step2: A 5-layer neural network was trained to separate $B \rightarrow \rho \gamma$ decays from the kinematically similar and topologically identical mode $B \rightarrow K^* \gamma$. The neural network output for the training and validation data is plotted to check if the network has overfit.
Step3: Receiver Operating Characteristic
The true positive rate (recall) is plotted against the false positive rate (probability of false alarm). Used to evaluate classifier performance as we vary its discrimination threshold. The classifier output is a continuous random variable $X$. Given a threshold parameter $T$, the instance is classified as signal if $X>T$ and background otherwise. The random variable $X$ should follow a probability density $f_{sig}(x)$ if it is a true signal event, and $f_{bkg}(x)$ otherwise. The respective rates are therefore given by the tail integrals of these densities | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
def output_probs(network_output, y):
# Break network output down into signal and background components
labels = np.argmax(y, 1)
sig_indices = np.where(labels == 1)
bkg_indices = np.where(labels == 0)
sig_output = network_output[sig_indices][:,1]
bkg_output = network_output[bkg_indices][:,1]
return sig_output, bkg_output
def normalize_weights(x):
# Weights to normalize output histograms
normalizing_weights = np.ones(x.shape[0])*1/x.shape[0]
return normalizing_weights
def NN_output(network_output, onehot_y, meta, nbins = 50):
# Plot neural network output
sea_green = '#54ff9f'
cornflower = '#6495ED'
labels = np.argmax(onehot_y, 1)
sig_indices = np.where(labels == 1)
bkg_indices = np.where(labels == 0)
sig_output = network_output[sig_indices][:,1]
bkg_output = network_output[bkg_indices][:,1]
plt.figure()
plt.axes([.1,.1,.8,.7])
plt.figtext(.5,.9, r'$\mathrm{NN \; Output}$', fontsize=12, ha='center')
plt.figtext(.5,.86, meta, fontsize=8, ha='center')
sns.distplot(sig_output, color = sea_green, label = r'$\mathrm{Signal}$', bins = nbins, kde = False)
sns.distplot(bkg_output, color = cornflower, label = r'$\mathrm{Crossfeed}$', bins = nbins, kde = False)
plt.xlabel(r'$\mathrm{Signal \; Probability}$')
plt.ylabel(r'$\mathrm{Entries/bin}$')
plt.legend(loc='best')
plt.savefig("graphs/" + "NNoutput.pdf", format='pdf', dpi=1000)
plt.show()
plt.gcf().clear()
def NN_output_train_test(network_output_test, network_output_train, y_test, y_train, meta, nbins = 50):
# Plot neural network output for train, test instances to check overtraining
sea_green = '#54ff9f'
cornflower = '#6495ED'
sig_output_train, bkg_output_train = output_probs(network_output_train, y_train)
sig_output_test, bkg_output_test = output_probs(network_output_test, y_test)
plt.figure()
plt.axes([.1,.1,.8,.7])
plt.figtext(.5,.9, r'$\mathrm{NN \; Output}$', fontsize=12, ha='center')
plt.figtext(.5,.86, meta, fontsize=8, ha='center')
# Plot the training sample as filled histograms
sns.distplot(sig_output_train, color = sea_green, label = r'$\mathrm{Signal}$',bins = nbins, kde = False,
hist_kws={'weights': normalize_weights(sig_output_train)})
sns.distplot(bkg_output_train, color = cornflower, label = r'$\mathrm{Crossfeed}$',bins=nbins, kde = False,
hist_kws={'weights': normalize_weights(bkg_output_train)})
hist, bins = np.histogram(sig_output_test, bins = nbins, weights = normalize_weights(sig_output_test))
center = (bins[:-1] + bins[1:])/2
plt.errorbar(center, hist, fmt='.',c = sea_green, label = r'$\mathrm{Signal \;(test)}$', markersize='10')
hist, bins = np.histogram(bkg_output_test, bins = nbins, weights = normalize_weights(bkg_output_test))
center = (bins[:-1] + bins[1:])/2
plt.errorbar(center, hist, fmt='.',c = cornflower, label = r'$\mathrm{Crossfeed \;(test)}$', markersize='10')
plt.xlabel(r'$\mathrm{Signal \; Probability}$')
plt.ylabel(r'$\mathrm{Normalized \; Entries/bin}$')
plt.legend(loc='best')
plt.savefig("graphs/" + "NNoutput_traintestcheck.pdf", format='pdf', dpi=1000)
plt.show()
plt.gcf().clear()
def plot_ROC_curve(y, network_output, meta):
Plots the receiver-operating characteristic curve
Inputs: y: One-hot encoded binary labels
network_output: NN output probabilities
Output: AUC: Area under the ROC Curve
from sklearn.metrics import roc_curve, auc
# Get class output scores
y_score = network_output[:,1]
y_truth = np.argmax(y,1)
# Compute ROC curve, integrate
fpr, tpr, thresholds = roc_curve(y_truth, y_score)
roc_auc = auc(fpr, tpr)
plt.figure()
plt.axes([.1,.1,.8,.7])
plt.figtext(.5,.9, r'$\mathrm{Receiver \;operating \;characteristic}$', fontsize=15, ha='center')
plt.figtext(.5,.85, meta,fontsize=10,ha='center')
plt.plot(fpr, tpr, color='darkorange',
lw=2, label='ROC curve - custom (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=1.0, linestyle='--')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel(r'$\mathrm{False \;Positive \;Rate}$')
plt.ylabel(r'$\mathrm{True \;Positive \;Rate}$')
plt.legend(loc="lower right")
plt.savefig("graphs/" + "NN_ROCcurve.pdf",format='pdf', dpi=1000)
plt.show()
plt.gcf().clear()
Explanation: Postprocessing
This notebook visualizes the output of the deep neural network and plots the associated ROC curve.
End of explanation
import pickle
# Load previously saved network output, and network architecture
network_output_train = np.load('persistance/rho0/neuralnet/n_train.npy')
y_train = np.load('persistance/rho0/neuralnet/y_train.npy')
network_output_test = np.load('persistance/rho0/neuralnet/n_test.npy')
y_test = np.load('persistance/rho0/neuralnet/y_test.npy')
NN_meta = pickle.load(open('persistance/rho0/neuralnet/rho0_arch.p', 'rb'))
NN_output(network_output_train, y_train, meta = NN_meta)
NN_output_train_test(network_output_test, network_output_train, y_test, y_train, meta = NN_meta)
Explanation: A 5-layer neural network was trained to separate $B \rightarrow \rho \gamma$ decays from the kinematically similar and topologically identical mode $B \rightarrow K^* \gamma$. The neural network output for the training and validation data is plotted to check if the network has overfit.
End of explanation
# Here the ROC threshold is evaluated at the default of 0.5
plot_ROC_curve(y_train, network_output_train, meta = NN_meta)
Explanation: Receiver Operating Characteristic
The true positive rate (recall) is plotted against the false positive rate (probability of false alarm). Used to evaluate classifier performance as we vary its discrimination threshold. The classifier output is a continuous random variable $X$. Given a threshold parameter $T$, the instance is classified as signal if $X>T$ and background otherwise. The random variable $X$ should follow a probability density $f_{sig}(x)$ if it is a true signal event, and $f_{bkg}(x)$ otherwise. The respective rates are therefore given by the tail integrals of these densities:
$$ \mathbf{TPR}(T) = \int_T^{\infty} dx \; f_{sig}(x), \; \; \mathbf{FPR}(T) = \int_T^{\infty} dx \; f_{bkg}(x), $$
The ROC curve plots $\mathbf{TPR}(T)$ versus $\mathbf{FPR}(T)$ with the discrimination threshold as the varying parameter. The optimal point in ROC - space is $(0,1)$ in the upper left corner - the error-free point. The gradient and area of the ROC curve are also useful metrics. The latter will be used as a testing metric, and is given by:
$$ \mathrm{AUC} = \theta = \int_{-\infty}^{\infty} dT \; \mathbf{TPR}(T) \; \frac{d\; \mathbf{FPR}}{dT}(T) $$
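As an illustrative aside (not part of the original analysis), the empirical versions of these rates at a single threshold can be computed directly from simulated scores:
import numpy as np
rng = np.random.default_rng(0)
sig_scores = rng.normal(0.7, 0.15, size=10000)  # stand-in for draws from f_sig
bkg_scores = rng.normal(0.3, 0.15, size=10000)  # stand-in for draws from f_bkg
T = 0.5
TPR = (sig_scores > T).mean()  # fraction of signal passing the cut
FPR = (bkg_scores > T).mean()  # fraction of background passing the cut
print(TPR, FPR)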
End of explanation |
14,262 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
If you have not already read it, you may want to start with the first tutorial
Step1: The data for our two surveys are stored in two separate CSV files included with the documentation. We will load separate RVData instances for the two data sets and append these objects to a list of datasets
Step2: In the plot below, the two data sets are shown in different colors
Step3: To tell The Joker to handle additional linear parameters to account for offsets in absolute velocity, we must define a new parameter for the offset between survey 1 and survey 2 and specify a prior. Here we will assume a Gaussian prior on the offset, centered on 0, but with a 10 km/s standard deviation. We then pass this in to JokerPrior.default() (all other parameters here use the default prior) through the v0_offsets argument
Step4: The rest should look familiar
Step5: Note that the new parameter, dv0_1, now appears in the returned samples above.
If we pass these samples in to the plot_rv_curves function, the data from other surveys is, by default, shifted by the mean value of the offset before plotting
Step6: However, the above behavior can be disabled by setting apply_mean_v0_offset=False. Note that with this set, the inferred orbit will not generally pass through data that suffer from a measurable offset
Step7: As introduced in the previous tutorial, we can also continue generating samples by initializing and running standard MCMC
Step8: Here the true offset is 4.8 km/s, so it looks like we recover this value!
A full corner plot of the MCMC samples | Python Code:
import astropy.table as at
import astropy.units as u
from astropy.visualization.units import quantity_support
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import corner
import pymc3 as pm
import pymc3_ext as pmx
import exoplanet as xo
import exoplanet.units as xu
import arviz as az
import thejoker as tj
# set up a random number generator to ensure reproducibility
rnd = np.random.default_rng(seed=42)
Explanation: If you have not already read it, you may want to start with the first tutorial: Getting started with The Joker.
Inferring calibration offsets between instruments
In addition to the default linear parameters (see Tutorial 1, or the documentation for JokerSamples.default()), The Joker allows adding linear parameters to account for possible calibration offsets between instruments. For example, there may be an absolute velocity offset between two spectrographs. Below we will demonstrate how to simultaneously infer and marginalize over a constant velocity offset between two simulated surveys of the same "star".
First, some imports we will need later:
End of explanation
data = []
for filename in ['data-survey1.ecsv', 'data-survey2.ecsv']:
tbl = at.QTable.read(filename)
_data = tj.RVData.guess_from_table(tbl, t_ref=tbl.meta['t_ref'])
data.append(_data)
Explanation: The data for our two surveys are stored in two separate CSV files included with the documentation. We will load separate RVData instances for the two data sets and append these objects to a list of datasets:
End of explanation
for d, color in zip(data, ['tab:blue', 'tab:red']):
_ = d.plot(color=color)
Explanation: In the plot below, the two data sets are shown in different colors:
End of explanation
with pm.Model() as model:
dv0_1 = xu.with_unit(pm.Normal('dv0_1', 0, 10),
u.km/u.s)
prior = tj.JokerPrior.default(
P_min=2*u.day, P_max=256*u.day,
sigma_K0=30*u.km/u.s,
sigma_v=100*u.km/u.s,
v0_offsets=[dv0_1])
Explanation: To tell The Joker to handle additional linear parameters to account for offsets in absolute velocity, we must define a new parameter for the offset between survey 1 and survey 2 and specify a prior. Here we will assume a Gaussian prior on the offset, centered on 0, but with a 10 km/s standard deviation. We then pass this in to JokerPrior.default() (all other parameters here use the default prior) through the v0_offsets argument:
End of explanation
prior_samples = prior.sample(size=1_000_000,
random_state=rnd)
joker = tj.TheJoker(prior, random_state=rnd)
joker_samples = joker.rejection_sample(data, prior_samples,
max_posterior_samples=128)
joker_samples
Explanation: The rest should look familiar: The code below is identical to previous tutorials, in which we generate prior samples and then rejection sample with The Joker:
End of explanation
_ = tj.plot_rv_curves(joker_samples, data=data)
Explanation: Note that the new parameter, dv0_1, now appears in the returned samples above.
If we pass these samples in to the plot_rv_curves function, the data from other surveys is, by default, shifted by the mean value of the offset before plotting:
End of explanation
_ = tj.plot_rv_curves(joker_samples, data=data,
apply_mean_v0_offset=False)
Explanation: However, the above behavior can be disabled by setting apply_mean_v0_offset=False. Note that with this set, the inferred orbit will not generally pass through data that suffer from a measurable offset:
End of explanation
with prior.model:
mcmc_init = joker.setup_mcmc(data, joker_samples)
trace = pmx.sample(
tune=500, draws=500,
start=mcmc_init,
cores=1, chains=2)
az.summary(trace, var_names=prior.par_names)
Explanation: As introduced in the previous tutorial, we can also continue generating samples by initializing and running standard MCMC:
End of explanation
mcmc_samples = joker.trace_to_samples(trace, data)
mcmc_samples.wrap_K()
df = mcmc_samples.tbl.to_pandas()
colnames = mcmc_samples.par_names
colnames.pop(colnames.index('s'))
_ = corner.corner(df[colnames])
Explanation: Here the true offset is 4.8 km/s, so it looks like we recover this value!
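As a quick numerical check, a one-line sketch (it assumes the dv0_1 column of the samples table carries the offset in the same km/s units used above):
print(df['dv0_1'].mean(), df['dv0_1'].std())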
A full corner plot of the MCMC samples:
End of explanation |
14,263 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div class="alert alert-block alert-info" style="margin-top
Step1: <code>plot_channels</code>
Step2: <code>show_data</code>
Step4: Create some toy data
Step5: <code>plot_activation</code>
Step6: Utility function for computing output of convolutions
takes a tuple of (h,w) and returns a tuple of (h,w)
Step7: <a id="ref1"></a>
<h2 align=center>Prepare Data </h2>
Load the training dataset with 10000 samples
Step8: Load the validating dataset
Step9: The data type is long
Data Visualization
Each element in the rectangular tensor corresponds to a number representing a pixel intensity as demonstrated by the following image.
Print out the third label
Step10: Plot the third sample
<a id="ref3"></a>
Build a Convolutional Neural Network Class
The input image is 11 x 11; the following layers will change the size of the activations
Step11: Build a Convolutional Network class with two Convolutional layers and one fully connected layer. Pre-determine the size of the final output matrix. The parameters in the constructor are the number of output channels for the first and second layer.
Step12: <a id="ref3"></a>
<h2> Define the Convolutional Neural Network Classifier, Criterion function, Optimizer and Train the Model </h2>
There are 2 output channels for the first layer, and 1 output channel for the second layer
Step13: Print the model parameters with the object
Step14: Plot the model parameters for the kernels before training the kernels. The kernels are initialized randomly.
Step15: Define the loss function
Step16: Define the optimizer class
Step17: Define the data loader
Step18: Train the model and determine validation accuracy
Step19: <a id="ref3"></a>
<h2 align=center>Analyse Results</h2>
Plot the loss and accuracy on the validation data
Step20: View the results of the parameters for the Convolutional layers
Step21: Consider the following sample
Step22: Determine the activations
Step23: Plot the activation maps stored in out
Step24: Save the output of the activation after flattening
Step25: Try the same thing for a sample where y=0 | Python Code:
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
import matplotlib.pylab as plt
import numpy as np
import pandas as pd
torch.manual_seed(4)
Explanation: <div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="http://cocl.us/pytorch_link_top"><img src = "http://cocl.us/Pytorch_top" width = 950, align = "center">
<img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 200, align = "center">
<h1 align=center><font size = 5>Convolutional Neural Network Simple example </font></h1>
# Table of Contents
In this lab, we will use a Convolutional Neural Network to classify horizontal and vertical lines
<div class="alert alert-block alert-info" style="margin-top: 20px">
<li><a href="#ref0">Helper functions </a></li>
<li><a href="#ref1"> Prepare Data </a></li>
<li><a href="#ref2">Convolutional Neral Network </a></li>
<li><a href="#ref3">Define Softmax , Criterion function, Optimizer and Train the Model</a></li>
<li><a href="#ref4">Analyse Results</a></li>
<br>
<p></p>
Estimated Time Needed: <strong>25 min</strong>
</div>
<hr>
<a id="ref0"></a>
<h2 align=center>Helper functions </h2>
End of explanation
def plot_channels(W):
#number of output channels
n_out=W.shape[0]
#number of input channels
n_in=W.shape[1]
w_min=W.min().item()
w_max=W.max().item()
fig, axes = plt.subplots(n_out,n_in)
fig.subplots_adjust(hspace = 0.1)
out_index=0
in_index=0
#plot outputs as rows inputs as columns
for ax in axes.flat:
if in_index>n_in-1:
out_index=out_index+1
in_index=0
ax.imshow(W[out_index,in_index,:,:], vmin=w_min, vmax=w_max, cmap='seismic')
ax.set_yticklabels([])
ax.set_xticklabels([])
in_index=in_index+1
plt.show()
Explanation: <code>plot_channels</code>: plot out the parameters of the Convolutional layers
End of explanation
def show_data(dataset,sample):
plt.imshow(dataset.x[sample,0,:,:].numpy(),cmap='gray')
plt.title('y='+str(dataset.y[sample].item()))
plt.show()
Explanation: <code>show_data</code>: plot out data sample
End of explanation
from torch.utils.data import Dataset, DataLoader
class Data(Dataset):
def __init__(self,N_images=100,offset=0,p=0.9, train=False):
p: probability that a pixel is white
N_images: number of images
offset: set a random vertical and horizontal offset for the images; should be less than 3
if train==True:
np.random.seed(1)
#make images multiple of 3
N_images=2*(N_images//2)
images=np.zeros((N_images,1,11,11))
start1=3
start2=1
self.y=torch.zeros(N_images).type(torch.long)
for n in range(N_images):
if offset>0:
low=int(np.random.randint(low=start1, high=start1+offset, size=1))
high=int(np.random.randint(low=start2, high=start2+offset, size=1))
else:
low=4
high=1
if n<=N_images//2:
self.y[n]=0
images[n,0,high:high+9,low:low+3]= np.random.binomial(1, p, (9,3))
elif n>N_images//2:
self.y[n]=1
images[n,0,low:low+3,high:high+9] = np.random.binomial(1, p, (3,9))
self.x=torch.from_numpy(images).type(torch.FloatTensor)
self.len=self.x.shape[0]
del(images)
np.random.seed(0)
def __getitem__(self,index):
return self.x[index],self.y[index]
def __len__(self):
return self.len
Explanation: Create some toy data
End of explanation
def plot_activations(A,number_rows= 1,name=""):
A=A[0,:,:,:].detach().numpy()
n_activations=A.shape[0]
print(n_activations)
A_min=A.min().item()
A_max=A.max().item()
if n_activations==1:
# Plot the image.
plt.imshow(A[0,:], vmin=A_min, vmax=A_max, cmap='seismic')
else:
fig, axes = plt.subplots(number_rows, n_activations//number_rows)
fig.subplots_adjust(hspace = 0.4)
for i,ax in enumerate(axes.flat):
if i< n_activations:
# Set the label for the sub-plot.
ax.set_xlabel( "activation:{0}".format(i+1))
# Plot the image.
ax.imshow(A[i,:], vmin=A_min, vmax=A_max, cmap='seismic')
ax.set_xticks([])
ax.set_yticks([])
plt.show()
Explanation: <code>plot_activation</code>: plot out the activations of the Convolutional layers
End of explanation
def conv_output_shape(h_w, kernel_size=1, stride=1, pad=0, dilation=1):
#by Duane Nielsen
from math import floor
if type(kernel_size) is not tuple:
kernel_size = (kernel_size, kernel_size)
h = floor( ((h_w[0] + (2 * pad) - ( dilation * (kernel_size[0] - 1) ) - 1 )/ stride) + 1)
w = floor( ((h_w[1] + (2 * pad) - ( dilation * (kernel_size[1] - 1) ) - 1 )/ stride) + 1)
return h, w
Explanation: Utility function for computing output of convolutions
takes a tuple of (h,w) and returns a tuple of (h,w)
End of explanation
N_images=10000
train_dataset=Data(N_images=N_images)
Explanation: <a id="ref1"></a>
<h2 align=center>Prepare Data </h2>
Load the training dataset with 10000 samples
End of explanation
validation_dataset=Data(N_images=1000,train=False)
validation_dataset
Explanation: Load the validating dataset
End of explanation
show_data(train_dataset,0)
show_data(train_dataset,N_images//2+2)
Explanation: The data type is long
Data Visualization
Each element in the rectangular tensor corresponds to a number representing a pixel intensity as demonstrated by the following image.
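To see those raw values directly, a quick illustrative check:
print(train_dataset.x[0, 0, :, :])  # the 11 x 11 array of 0/1 pixel intensities
print(train_dataset.y[0])           # the corresponding label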
Print out the third label
End of explanation
out=conv_output_shape((11,11), kernel_size=2, stride=1, pad=0, dilation=1)
print(out)
out1=conv_output_shape(out, kernel_size=2, stride=1, pad=0, dilation=1)
print(out1)
out2=conv_output_shape(out1, kernel_size=2, stride=1, pad=0, dilation=1)
print(out2)
out3=conv_output_shape(out2, kernel_size=2, stride=1, pad=0, dilation=1)
print(out3)
Explanation: Plot the third sample
<a id="ref3"></a>
Build a Convolutional Neural Network Class
The input image is 11 x 11; the following layers will change the size of the activations:
<ul>
<li>convolutional layer</li>
<li>max pooling layer</li>
<li>convolutional layer</li>
<li>max pooling layer</li>
</ul>
The following lines of code change the image before we get to the fully connected layer with the following parameters <code>kernel_size</code>, <code>stride</code> and <code> pad</code>.
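To make the link to the fully connected layer explicit, here is a compact version of that size bookkeeping (a sketch reusing the conv_output_shape helper defined earlier):
h_w = (11, 11)
for layer in ['cnn1', 'maxpool1', 'cnn2', 'maxpool2']:
    h_w = conv_output_shape(h_w, kernel_size=2, stride=1)
    print(layer, h_w)
# ends at (7, 7), which is why the model below uses nn.Linear(out_2*7*7, 2)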
End of explanation
class CNN(nn.Module):
def __init__(self,out_1=2,out_2=1):
super(CNN,self).__init__()
#first Convolutional layers
self.cnn1=nn.Conv2d(in_channels=1,out_channels=out_1,kernel_size=2,padding=0)
#activation function
self.relu1=nn.ReLU()
#max pooling
self.maxpool1=nn.MaxPool2d(kernel_size=2 ,stride=1)
#second Convolutional layers
self.cnn2=nn.Conv2d(in_channels=out_1,out_channels=out_2,kernel_size=2,stride=1,padding=0)
#activation function
self.relu2=nn.ReLU()
#max pooling
self.maxpool2=nn.MaxPool2d(kernel_size=2 ,stride=1)
#fully connected layer
self.fc1=nn.Linear(out_2*7*7,2)
def forward(self,x):
#first Convolutional layers
out=self.cnn1(x)
#activation function
out=self.relu1(out)
#max pooling
out=self.maxpool1(out)
#second Convolutional layer
out=self.cnn2(out)
#activation function
out=self.relu2(out)
#max pooling
out=self.maxpool2(out)
#flatten output
out=out.view(out.size(0),-1)
#fully connected layer
out=self.fc1(out)
return out
def activations(self,x):
#outputs activation this is not necessary just for fun
z1=self.cnn1(x)
a1=self.relu1(z1)
out=self.maxpool1(a1)
z2=self.cnn2(out)
a2=self.relu2(z2)
out=self.maxpool2(a2)
out=out.view(out.size(0),-1)
return z1,a1,z2,a2,out
Explanation: Build a Convolutional Network class with two Convolutional layers and one fully connected layer. Pre-determine the size of the final output matrix. The parameters in the constructor are the number of output channels for the first and second layer.
End of explanation
model=CNN(2,1)
Explanation: <a id="ref3"></a>
<h2> Define the Convolutional Neural Network Classifier, Criterion function, Optimizer and Train the Model </h2>
There are 2 output channels for the first layer, and 1 output channel for the second layer
End of explanation
model
Explanation: Print the model parameters with the object
End of explanation
plot_channels(model.state_dict()['cnn1.weight'])
plot_channels(model.state_dict()['cnn2.weight'])
Explanation: Plot the model parameters for the kernels before training the kernels. The kernels are initialized randomly.
End of explanation
criterion=nn.CrossEntropyLoss()
Explanation: Define the loss function
End of explanation
learning_rate=0.001
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
Explanation: Define the optimizer class
End of explanation
train_loader=torch.utils.data.DataLoader(dataset=train_dataset,batch_size=10)
validation_loader=torch.utils.data.DataLoader(dataset=validation_dataset,batch_size=20)
Explanation: Define the data loader
End of explanation
n_epochs=10
loss_list=[]
accuracy_list=[]
N_test=len(validation_dataset)
#n_epochs
for epoch in range(n_epochs):
for x, y in train_loader:
#clear gradient
optimizer.zero_grad()
#make a prediction
z=model(x)
# calculate loss
loss=criterion(z,y)
# calculate gradients of parameters
loss.backward()
# update parameters
optimizer.step()
correct=0
#perform a prediction on the validation data
for x_test, y_test in validation_loader:
z=model(x_test)
_,yhat=torch.max(z.data,1)
correct+=(yhat==y_test).sum().item()
accuracy=correct/N_test
accuracy_list.append(accuracy)
loss_list.append(loss.data)
Explanation: Train the model and determine validation accuracy
End of explanation
fig, ax1 = plt.subplots()
color = 'tab:red'
ax1.plot(loss_list,color=color)
ax1.set_xlabel('epoch',color=color)
ax1.set_ylabel('total loss',color=color)
ax1.tick_params(axis='y', color=color)
ax2 = ax1.twinx()
color = 'tab:blue'
ax2.set_ylabel('accuracy', color=color)
ax2.plot( accuracy_list, color=color)
ax2.tick_params(axis='y', labelcolor=color)
fig.tight_layout()
Explanation: <a id="ref3"></a>
<h2 align=center>Analyse Results</h2>
Plot the loss and accuracy on the validation data:
End of explanation
model.state_dict()['cnn1.weight']
plot_channels(model.state_dict()['cnn1.weight'])
model.state_dict()['cnn1.weight']
plot_channels(model.state_dict()['cnn2.weight'])
Explanation: View the results of the parameters for the Convolutional layers
End of explanation
show_data(train_dataset,N_images//2+2)
Explanation: Consider the following sample
End of explanation
out=model.activations(train_dataset[N_images//2+2][0].view(1,1,11,11))
Explanation: Determine the activations
End of explanation
plot_activations(out[0],number_rows=1,name="first feature map")
plt.show()
plot_activations(out[2],number_rows=1,name="first feature map")
plt.show()
plot_activations(out[3],number_rows=1,name="first feature map")
plt.show()
Explanation: Plot the activation maps stored in out
End of explanation
out1=out[4][0].detach().numpy()
Explanation: Save the output of the activation after flattening
End of explanation
out0=model.activations(train_dataset[100][0].view(1,1,11,11))[4][0].detach().numpy()
out0
plt.subplot(2, 1, 1)
plt.plot( out1, 'b')
plt.title('Flatted Activation Values ')
plt.ylabel('Activation')
plt.xlabel('index')
plt.subplot(2, 1, 2)
plt.plot(out0, 'r')
plt.xlabel('index')
plt.ylabel('Activation')
Explanation: Try the same thing for a sample where y=0
End of explanation |
14,264 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-lr', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: EC-EARTH3-LR
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
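For example (the name and address below are purely hypothetical placeholders):
# DOC.set_author("Jane Doe", "jane.doe@example.org")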
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
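For instance, a model coupled through OASIS3-MCT would record the value like this (illustrative only; pick the entry from the valid choices listed above that matches your model):
# DOC.set_value("OASIS3-MCT")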
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
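Purely as an illustration (the choice shown is arbitrary): ENUM properties such as this one are answered by passing one of the strings listed under "Valid Choices" in the cell above, e.g.
DOC.set_value("E")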
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
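As a minimal illustration (the value shown is arbitrary): BOOLEAN properties take a bare Python boolean rather than a quoted string, e.g.
DOC.set_value(True)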
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
14,265 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Income Inequality between high earners and low earners
A critique of http
Step1: Getting the data
Before going into the purely visual aspects and how effective they are at conveying a story, I want to understand what data we are dealing with. At the bottom of the graph, there is a bit.ly URL that points to a google drive document. Adding export?format=xlsx will allow us to download this document as an excel spreadsheet, which can then be sliced and diced easily with the pandas analytics module.
Step2: First issue with the data, right away we can see the wide range of dates. Let's look at the date distribution. We probably would want to use only 2010 if it represents enough data. We will make a note of <b>39.99</b> as the average Gini coefficient over all those years.
Step3: We will get just the data for 2009. Not only it is recent, but it is plenty of data points to represent at once. This will also address the other issue with the data
Step4: This is already way easier to compare than the original infographic. Perhaps not as snazzy, but at least it gives us a start in trying to understand the data. But it is just that, a start. One angle would be to investigate how much above average is the Gini for the US. But I would also want to have the measures, including the average from the same year. A quick comparison of the two distributions (2009 vs all the data) shows how sampling on 2009 skews toward a higher Gini.
Step5: Comparing with GDP, population, gender inequality, even subjective "satisfaction indexes" and the like would be much more interesting. To tell a real story, we need to show some correlation, and provide some narrative and/or visualization to explain Gini. At the end of the day, perhaps the real story is that Gini is not a great universal indicator.
Colors
Where the graph at http | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
sns.set(palette = sns.dark_palette("skyblue", 8, reverse=True))
Explanation: Income Inequality between high earners and low earners
A critique of http://www.informationisbeautiful.net/visualizations/what-are-wallst-protestors-angry-about/
End of explanation
!wget 'https://docs.google.com/spreadsheets/d/1N_Hc-xKr7DQc8bZAvLROGWr5Cr-A6MfGnH91fFW3ZwA/export?format=xlsx&id=1N_Hc-xKr7DQc8bZAvLROGWr5Cr-A6MfGnH91fFW3ZwA' -O wallstreet.xlsx
df = pd.read_excel('wallstreet.xlsx', skiprows=1, index_col = 'Country')
df.describe()
Explanation: Getting the data
Before going into the purely visual aspects and how effective they are at conveying a story, I want to understand what data we are dealing with. At the bottom of the graph, there is a bit.ly URL that points to a google drive document. Adding export?format=xlsx will allow us to download this document as an excel spreadsheet, which can then be sliced and diced easily with the pandas analytics module.
End of explanation
df['Year'].hist(bins=22) # 22 bins so I get every year as a distinct sum
Explanation: First issue with the data, right away we can see the wide range of dates. Let's look at the date distribution. We probably would want to use only 2010 if it represents enough data. We will make a note of <b>39.99</b> as the average Gini coefficient over all those years.
End of explanation
gini_df = df[(df.Year==2009)|(df.index=='United States')]['Gini'] # Only 2009, and choose only the gini columns (and the index, country)
gini_df
current_ax = gini_df.plot(kind='barh', color=sns.color_palette()[0])
current_ax.set_title('Gini index (%) in 2009')
current_ax.vlines(39.99, 0, len(gini_df), color=sns.color_palette()[2])
Explanation: We will get just the data for 2009. Not only it is recent, but it is plenty of data points to represent at once. This will also address the other issue with the data: in the raw form, it is too numerous and will overload the reader if presented as is. We will also load the US data, since it is supposed to tell the story of <b>'occupy wallstreet'</b>. If we are missing further critical data, we can always add a specific data point later, as we are keeping the original data frame untouched.
End of explanation
ax = df['Gini'].plot(kind='kde')
gini_df.plot(kind='kde', ax=ax) #overlay 2009 vs all years/countries
Explanation: This is already way easier to compare than the original infographic. Perhaps not as snazzy, but at least it gives us a start in trying to understand the data. But it is just that, a start. One angle would be to investigate how much above average is the Gini for the US. But I would also want to have the measures, including the average from the same year. A quick comparison of the two distributions (2009 vs all the data) shows how sampling on 2009 skews toward a higher Gini.
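One quick, purely illustrative way to put a number on that first angle (how far the US sits above the rest of the 2009 sample) is to compare it against the mean of the same series:
us_gini = gini_df.loc['United States']
print('US Gini: {:.1f}, 2009 mean: {:.1f}, difference: {:+.1f}'.format(us_gini, gini_df.mean(), us_gini - gini_df.mean()))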
End of explanation
current_ax = gini_df.plot(kind='barh', color=sns.color_palette()[0])
current_ax.patches[list(gini_df.index).index("United States")].set_facecolor('#cc5555')
current_ax.set_title('Gini index (%) in 2009')
current_ax.vlines(39.99, 0, len(gini_df), color=sns.color_palette()[2])
current_ax.annotate('Average for\n1989-2010',
(40, 2),
xytext=(20, 10),
textcoords='offset points',
arrowprops=dict(arrowstyle='-|>'))
Explanation: Comparing with GDP, population, gender inequality, even subjective "satisfaction indexes" and the like would be much more interesting. To tell a real story, we need to show some correlation, and provide some narrative and/or visualization to explain Gini. At the end of the day, perhaps the real story is that Gini is not a great universal indicator.
Colors
Where the graph at http://www.informationisbeautiful.net/visualizations/what-are-wallst-protestors-angry-about/ was using a very gradual change in hue based on the value (redundant, the width and the number already shows this), it is so subtle that it doesn't show any significant difference between two consecutive rows.
A better use of color is to highlight our focus, or for reference lines. With that in mind, let's enhance our bar plot with judicious use of color for making it quicker to spot the US data.
End of explanation |
14,266 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
More on SQLalchemy filter and operators
Step1: Let's start from a simple query and see how we can use different operators to refine it.
Step2: equals ( == )
Step3: not equals ( != )
Step4: LIKE ( like % ) NB like is case-insensitive
Step5: IN ( in_ )
Step6: NOT IN ( ~ in_ )
Step7: The AND and OR operators need to be explicitly imported
Step8: AND ( and_ )
Step9: Let's try using the same two constraints as consecutive filter calls.
Step10: and_ returns the same results as using two filters one after the other.
Let's try to pass two constraints directly to the same filter call.
Step11: Again we're getting the same result as if we used and_.
OR ( or_ )
Step12: Getting deeper in the query object
Let's check exactly what the outputs() function returns
Step13: outputs( ) is a method of an ARCCSSive session which is actually using the SQLalchemy Session.query( ) method.
For example in this case is equivalent to
db.query( )
results is a query object. This means that we haven't yet retrieve any actual value from the Instance table.
What SQLalchemy has done up to now is to generate an SQL query statement from our input arguments to pass
to the database.
The SQL statement is executed only when we explicitly retrieve the results to use them.
This why the query is always instantaneous
Try to run
results=db.output()
that effectivily retrieves the entire Istance table and see how long it takes.
Step14: We can directly loop through the query object returned or we can use one of the methods that return a value, as
Step15: If we specified all the 5 constraints we can pass to the outputs function we should always get only one row back, since you cannot have two rows sharing all these values.
In this case we can use the one( ) method to return that row.
Step16: Let'see what happens if you use one( ) with a query that returns multiple rows.
Step17: This generates an error, so we should use only when we are expecting one row back or if we want to generate two different responses inc ase a query returns one or many rows.
If we have multiple rows returned by the query we use can use the method first() to get only the first result.
Step18: Another useful method of the query is order_by( ). | Python Code:
! module use /g/data3/hh5/public/modules
! module load conda/analysis27
from ARCCSSive import CMIP5
from ARCCSSive.CMIP5.Model import Instance
from ARCCSSive.CMIP5.other_functions import unique
db=CMIP5.connect()
Explanation: More on SQLalchemy filter and operators
End of explanation
results=db.outputs(ensemble='r1i1p1',experiment='rcp45',mip='day')
results.count()
Explanation: Let's start from a simple query and see how we can use different operators to refine it.
End of explanation
miroc5=results.filter(Instance.model == 'MIROC5')
miroc5.count()
unique(miroc5,'model')
Explanation: equals ( == )
End of explanation
not_miroc5=results.filter(Instance.model != 'MIROC5')
not_miroc5.count()
'MIROC5' in unique(not_miroc5,'model')
Explanation: not equals ( != )
End of explanation
miroc_models=results.filter(Instance.model.like('MIROC%'))
# miroc_models=results.filter(Instance.model.like('miroc%'))
miroc_models.count()
unique(miroc_models,'model')
Explanation: LIKE ( like % ) NB like is case-insensitive
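A side note (a sketch, not from the original notebook): if you want the case-insensitive intent to be explicit rather than relying on the backend, SQLAlchemy columns also provide ilike(), which can be used the same way:
miroc_models_ci=results.filter(Instance.model.ilike('miroc%'))
miroc_models_ci.count()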
End of explanation
tasmin_tasmax=results.filter(Instance.variable.in_(['tasmin','tasmax']))
tasmin_tasmax.count()
unique(tasmin_tasmax,'variable')
Explanation: IN ( in_ )
End of explanation
not_tasmin_tasmax=results.filter(~Instance.variable.in_(['tasmin','tasmax']))
not_tasmin_tasmax.count()
print(unique(not_tasmin_tasmax,'variable'))
Explanation: NOT IN ( ~ in_ )
End of explanation
from sqlalchemy import and_, or_
Explanation: The AND and OR operators need to be explicitly imported
End of explanation
miroc5_tas=results.filter(and_(Instance.model == 'MIROC5',Instance.variable == 'tas'))
print( miroc5_tas.count() )
print( unique(miroc5_tas,'model') )
print( unique(miroc5_tas,'variable') )
Explanation: AND ( and_ )
End of explanation
miroc5_tas=results.filter(Instance.model == 'MIROC5').filter(Instance.variable == 'tas')
print( miroc5_tas.count() )
print( unique(miroc5_tas,'model') )
print( unique(miroc5_tas,'variable') )
Explanation: Let's try using the same two constraints as consecutive filter calls.
End of explanation
miroc5_tas=results.filter(Instance.model == 'MIROC5', Instance.variable == 'tas')
print( miroc5_tas.count() )
print( unique(miroc5_tas,'model') )
print( unique(miroc5_tas,'variable') )
Explanation: and_ returns the same results as using two filters one after the other.
Let's try to pass two constraints directly to the same filter call.
End of explanation
miroc5_or_clt=results.filter(or_(Instance.model == 'MIROC5', Instance.variable == 'clt'))
miroc5_or_clt.count()
for o in miroc5_or_clt:
print( o.model, o.variable )
Explanation: Again we're getting the same result as if we used and_.
OR ( or_ )
End of explanation
results=db.outputs(variable='tas',experiment='historical',mip='Amon',model='MIROC5')
print(type(results))
Explanation: Getting deeper in the query object
Let's check exactly what the outputs() function returns
End of explanation
results=db.outputs()
results.count()
Explanation: outputs( ) is a method of an ARCCSSive session which is actually using the SQLalchemy Session.query( ) method.
For example, in this case it is equivalent to
db.query( )
results is a query object. This means that we haven't yet retrieved any actual values from the Instance table.
What SQLalchemy has done up to now is to generate an SQL query statement from our input arguments to pass
to the database.
The SQL statement is executed only when we explicitly retrieve the results to use them.
This is why the query is always instantaneous.
Try to run
results=db.outputs()
which effectively retrieves the entire Instance table, and see how long it takes.
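A quick way to see this laziness in action (a sketch; calling str() on a query is standard SQLAlchemy and shows the generated SQL without running it):
lazy=db.outputs(model='MIROC5')
print(str(lazy))    # prints the generated SELECT statement, nothing hits the database yet
rows=lazy.all()     # only this call actually executes the query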
End of explanation
results=db.outputs(variable='tas',experiment='historical',mip='Amon',model='MIROC5').all()
print(type(results))
print( results )
Explanation: We can directly loop through the query object returned or we can use one of the methods that return a value, as: all( ), one( ) and first( ).
End of explanation
result=db.outputs(variable='tas',experiment='historical',mip='Amon',model='MIROC5',ensemble='r1i1p1').one()
print(type(result))
Explanation: If we specify all five of the constraints that we can pass to the outputs function, we should always get only one row back, since you cannot have two rows sharing all these values.
In this case we can use the one( ) method to return that row.
End of explanation
result=db.outputs(variable='tas',experiment='historical',mip='Amon',model='MIROC5').one()
Explanation: Let's see what happens if you use one( ) with a query that returns multiple rows.
End of explanation
result=db.outputs(variable='tas',experiment='historical',mip='Amon',model='MIROC5').first()
print(type(result))
Explanation: This generates an error, so we should use one( ) only when we are expecting a single row back, or if we want to generate two different responses in case a query returns one or many rows.
If we have multiple rows returned by the query, we can use the method first() to get only the first result.
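first() is also the forgiving choice when a query might match nothing at all, since it returns None instead of raising (a sketch; the model name below is deliberately fake):
empty=db.outputs(variable='tas',model='NOT_A_REAL_MODEL')
print( empty.first() )    # None when there are no matching rows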
End of explanation
results=db.outputs(variable='tas',experiment='historical',mip='Amon',model='MIROC5').order_by(Instance.ensemble)
for o in results:
print(o.ensemble)
Explanation: Another useful method of the query is order_by( ).
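As a small extension (a sketch; desc() is standard SQLAlchemy), the sort order can also be reversed:
for o in results.order_by(Instance.ensemble.desc()):
    print(o.ensemble)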
End of explanation |
14,267 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Athena
Modelling the Effect of Relative Humidity on Surface Temperature
In this short tutorial, we'll use Athena to build an equation that models the relationship between hour of day, humidity, and surface temperature. The data is from a hobbyist weather station that is owned by ArabiaWeather.
Step1: Fix the random seed so that we can obtain reproducible results.
Step2: Importing Data and Data Sanitization
We read in our data from a comma separated values (CSV) file using the Pandas library. We are only interested in a select few columns in the data so we select only them.
Step3: We have to sanitize the data before we start fitting equations to it. We need a function to convert time to a percentage of the day (eg
Step4: Athena includes some helper functions to assist in equation building; one of them is the split_dataframe function. This will randomly split the data into x% training and (100-x)% testing sets.
Step5: Athena can also automatically normalize your data; normalizing data to fit between 0 and 1 greatly assists curve fitting because most of the built in equations are designed to work optimally on that numerical range. A parameter map first splits columns into normalized and non-normalized, then maps each column name in the data to a short-hand name to be used in the equation. Here we will map time to t, and humidity to h.
Step6: Using the Athena Framework
Every equation building session in Athena operates out of something called a Framework; as its name suggests, an Athena framework will consilidate all your datasets, equations, hyper-parameters, etc in one Python class. Beginning a Framework class requires passing a Python dictionary containing hyper-parameters; you can leave this empty and the default parameters will be used instead.
Step7: We can initialize our datasets along with their parameter map using the Dataset class. We then feed this into the Framework class by using the function add_dataset. Our data is now sanitized and inside our framework, ready for processing and equation building.
Step8: Introduction to the Additive Model
There are many types of model equations we can use through Athena as a base for our equation building blocks. The simplest of which is the Additive Model; this means that every function in this model is added - for example, if we have f(a), g(b), and h(c), applying these to an additive model will create an equation of the form f(a) + g(b) + h(c). There are more complex models available to the user, such as the multiplicative model and the composite model, all of which are explained in more advanced tutorials.
Step9: We get the target column in both training and testing datasets and assign them as floats in training_targets and testing_targets to be used later in accuracy measurement.
Step10: Creating the components of an additive model is simple and similar in fashion to creating a neural network model in Keras, for example. We first add a Bias function, the simplest kind of function which adds a variable number to our equation. Then we add four SimpleSinusoidal functions and pass them the name of the column we want the function to work on; since temperature changes sinusoidally with time, we add four sine functions that have variable amplitudes, frequencies, and phase shifts to our equation (thisis very similar to taking the first four terms in an infintie Fourier series). Finally, because we assume that humidity changes exponentially with temperature, we add two FlexiblePower functions to reflect this assumption.
Step11: Here comes the fun part
Step12: We can get the equation from Athena by using the framework function produce_equation. This will return a sympy equation; sympy is, at its core, a powerful Python library for symbolic mathematics - we will need it for a bunch of cool things, like automatic symbolic simplification, pretty printing, and even outputting LaTeX for use in an academic paper.
Step13: Visualizing the Result with Sympy and Matplotlib
When in an interactive Python notebook, we can output a pretty simplified version of our temperature equation to the notebook and examine the coefficients we produced. Upon simple inspection, we can see we inferred an inverse correlation, also known as a negative correlation, between humidity and temperature. That is, the more we increase humidity, we will see an exponential decrease in temperature.
Step14: Sympy lets us create a fast Python function from our generated equation, so that we can quickly substitute large arrays of numbers to produce results for table or chart production later.
Step15: Let's observe the effect of increasing humidity on surface temperature throughout the hours of the day. We can generate the y-axis for each humidity level by utilizing the function we created before. We'll plot the temperatures at varying humidity levels on the same graph. | Python Code:
import pandas as pd
from dateutil.parser import parse
from athena.equations import *
from athena.framework import Framework
from athena.dataset import Dataset
from athena.model import AdditiveModel
from athena.helpers import *
Explanation: Introduction to Athena
Modelling the Effect of Relative Humidity on Surface Temperature
In this short tutorial, we'll use Athena to build an equation that models the relationship between hour of day, humidity, and surface temperature. The data is from a hobbyist weather station that is owned by ArabiaWeather.
End of explanation
np.random.seed(seed = 4)
Explanation: Fix the random seed so that we can obtain reproducible results.
End of explanation
data_frame = pd.read_csv('test_data.csv')
data_frame = data_frame[["time", "temp", "humidity"]]
Explanation: Importing Data and Data Sanitization
We read in our data from a comma separated values (CSV) file using the Pandas library. We are only interested in a select few columns in the data so we select only them.
End of explanation
def get_hour(x):
y = parse(x)
return y.hour + y.minute/60.0
data_frame["time"] = [get_hour(x)/24.0 for x in data_frame["time"].values]
Explanation: We have to sanitize the data before we start fitting equations to it. We need a function to convert time to a percentage of the day (eg: 12:00 would become 50%). This will help a great deal when we want to fit a sine wave to the hour of day to relate it to surface temperature.
End of explanation
training_df, testing_df = split_dataframe(data_frame, 0.9)
Explanation: Athena includes some helper functions to assist in equation building; one of them is the split_dataframe function. This will randomly split the data into x% training and (100-x)% testing sets.
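A tiny sanity check on the split (illustrative only) is to look at the resulting row counts:
print(len(training_df), len(testing_df))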
End of explanation
parameters_map = {
"normalized": {
},
"not_normalized": {
"time": "t",
"humidity": "h",
},
"target": "temp"
}
Explanation: Athena can also automatically normalize your data; normalizing data to fit between 0 and 1 greatly assists curve fitting because most of the built in equations are designed to work optimally on that numerical range. A parameter map first splits columns into normalized and non-normalized, then maps each column name in the data to a short-hand name to be used in the equation. Here we will map time to t, and humidity to h.
End of explanation
max_iterations = int(1e4)
starter_learning_rate = 0.0005
momentum = 0.95
framework_parameters = {
"starting_lr": starter_learning_rate,
"max_iterations": max_iterations,
"momentum": momentum,
}
fw = Framework(framework_parameters)
Explanation: Using the Athena Framework
Every equation building session in Athena operates out of something called a Framework; as its name suggests, an Athena framework will consolidate all your datasets, equations, hyper-parameters, etc. in one Python class. Beginning a Framework class requires passing a Python dictionary containing hyper-parameters; you can leave this empty and the default parameters will be used instead.
End of explanation
fw.add_dataset(Dataset(training_df, testing_df, parameters_map))
Explanation: We can initialize our datasets along with their parameter map using the Dataset class. We then feed this into the Framework class by using the function add_dataset. Our data is now sanitized and inside our framework, ready for processing and equation building.
End of explanation
model = AdditiveModel(fw)
Explanation: Introduction to the Additive Model
There are many types of model equations we can use through Athena as a base for our equation building blocks. The simplest of which is the Additive Model; this means that every function in this model is added - for example, if we have f(a), g(b), and h(c), applying these to an additive model will create an equation of the form f(a) + g(b) + h(c). There are more complex models available to the user, such as the multiplicative model and the composite model, all of which are explained in more advanced tutorials.
End of explanation
training_targets = fw.dataset.training_targets
testing_targets = fw.dataset.testing_targets
Explanation: We get the target column in both training and testing datasets and assign them as floats in training_targets and testing_targets to be used later in accuracy measurement.
End of explanation
model.add(Bias)
for i in range(4):
model.add(SimpleSinusoidal, "time")
for i in range(2):
model.add(FlexiblePower, "humidity")
fw.initialize(model, training_targets)
Explanation: Creating the components of an additive model is simple and similar in fashion to creating a neural network model in Keras, for example. We first add a Bias function, the simplest kind of function which adds a variable number to our equation. Then we add four SimpleSinusoidal functions and pass them the name of the column we want the function to work on; since temperature changes sinusoidally with time, we add four sine functions that have variable amplitudes, frequencies, and phase shifts to our equation (this is very similar to taking the first four terms in an infinite Fourier series). Finally, because we assume that humidity changes exponentially with temperature, we add two FlexiblePower functions to reflect this assumption.
End of explanation
for step in range(int(fw.max_iters + 1)):
fw.run_learning_step()
if step % int(fw.max_iters / 10) == 0:
print("\n", "=" * 40, "\n", round(step / fw.max_iters * 100), "% \n", "=" * 40, sep="", end="\n")
training_t = training_targets, fw.get_training_predictions()
testing_t = testing_targets, fw.get_testing_predictions()
try:
for j, k in list(zip(["Training", "Testing "], [training_t, testing_t])):
print(j, end = "\t")
print_statistics(*k)
except Exception as e:
print("Error! {}".format(e))
Explanation: Here comes the fun part: iteratively fitting our equation. Athena, using Tensorflow as its backend, will automatically differentiate the equation we want to fit our data to, then find the roots of the differential using whichever optimization algorithm we specified in the framework (the default algorithm is the Adam Stochastic Optimization algorithm). At every 10% of the fitting process, we output some accuracy metrics of our equation for both the training and testing datasets; in this example, we output the Pearson correlation coefficient, then the root mean squared error, 90th, 95th and 99th percentile errors, in that order.
End of explanation
equation = fw.produce_equation()
fw.session.close()
Explanation: We can get the equation from Athena by using the framework function produce_equation. This will return a sympy equation; sympy is, at its core, a powerful Python library for symbolic mathematics - we will need it for a bunch of cool things, like automatic symbolic simplification, pretty printing, and even outputting LaTeX for use in an academic paper.
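For example, the LaTeX output just mentioned is a single call away, using sympy's standard latex() helper:
from sympy import latex
print(latex(equation))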
End of explanation
from sympy import N, nsimplify, init_printing
init_printing()
N(nsimplify(equation, tolerance=1e-4), 2)
% pylab inline
import seaborn as sns
sns.set_style('whitegrid')
from sympy import lambdify
from sympy.abc import t, h
Explanation: Visualizing the Result with Sympy and Matplotlib
When in an interactive Python notebook, we can output a pretty simplified version of our temperature equation to the notebook and examine the coefficients we produced. Upon simple inspection, we can see we inferred an inverse correlation, also known as a negative correlation, between humidity and temperature. That is, the more we increase humidity, we will see an exponential decrease in temperature.
End of explanation
y_axis = lambdify((t, h), equation, "numpy")
Explanation: Sympy lets us create a fast Python function from our generated equation, so that we can quickly substitute large arrays of numbers to produce results for table or chart production later.
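A quick spot check of the resulting function (the values are arbitrary): predicted temperature at 14:00, i.e. t = 14/24, and 40% relative humidity.
print(y_axis(14.0 / 24.0, 40.0))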
End of explanation
plt.figure(figsize=(8, 8))
x_axis = np.linspace(0.0, 1.0, 100)
for humidity in range(5, 100, 20):
plt.plot(x_axis * 24.0, y_axis(x_axis, np.array([humidity])), label='{}%'.format(humidity))
plt.legend()
plt.xlabel('Time of Day (hrs)', fontsize=16)
plt.ylabel('Temperature (°C)', fontsize=16)
plt.show()
Explanation: Let's observe the effect of increasing humidity on surface temperature throughout the hours of the day. We can generate the y-axis for each humidity level by utilizing the function we created before. We'll plot the temperatures at varying humidity levels on the same graph.
End of explanation |
14,268 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hello world
(press space)
This is how you do slides with ipython notebooks!
Formatting is simple, with markdown
...your python love will help you...
Step1: How cool to have live code
inside your live slideshow!?!?!
Just another one | Python Code:
a = "Hello"
b = "World"
print a,b + "!"
Explanation: Hello world
(press space)
This is how you do slides with ipython notebooks!
Formatting is simple, with markdown
...your python love will help you...
End of explanation
# Please consider also that you can re-use
# variables defined in older slides ;)
print type(a + b)
Explanation: How cool to have live code
inside your live slideshow!?!?!
Just another one
End of explanation |
14,269 | Given the following text description, write Python code to implement the functionality described.
Description:
Count numbers less than N containing digits from the given set : Digit DP
Python3 implementation to find the count of numbers possible less than N , such that every digit is from the given set of digits ; Function to convert integer into the string ; Recursive function to find the count of numbers possible less than N , such that every digit is from the given set of digits ; Base case ; Condition when the subproblem is computed previously ; Condition when the number chosen till now is definietly smaller than the given number N ; Loop to traverse all the digits of the given set ; Loop to traverse all the digits from the given set ; Store the solution for current subproblem ; Function to count the numbers less then N from given set of digits ; Converting the number to string ; Find the solution of all the number equal to the length of the given number N ; Loop to find the number less in in the length of the given number ; Driver Code ; Function Call
| Python Code:
import numpy as np ;
dp = np . ones(( 15 , 2 ) ) * - 1 ;
def convertToString(num ) :
return str(num ) ;
def calculate(pos , tight , D , sz , num ) :
if(pos == len(num ) ) :
return 1 ;
if(dp[pos ][tight ] != - 1 ) :
return dp[pos ][tight ] ;
val = 0 ;
if(tight == 0 ) :
for i in range(sz ) :
if(D[i ] <(ord(num[pos ] ) - ord('0' ) ) ) :
val += calculate(pos + 1 , 1 , D , sz , num ) ;
elif(D[i ] == ord(num[pos ] ) - ord('0' ) ) :
val += calculate(pos + 1 , tight , D , sz , num ) ;
else :
for i in range(sz ) :
val += calculate(pos + 1 , tight , D , sz , num ) ;
dp[pos ][tight ] = val ;
return dp[pos ][tight ] ;
def countNumbers(D , N , sz ) :
num = convertToString(N ) ;
length = len(num ) ;
ans = calculate(0 , 0 , D , sz , num ) ;
for i in range(1 , length ) :
ans += calculate(i , 1 , D , sz , num ) ;
return ans ;
if __name__ == "__main__":
sz = 3 ;
D =[1 , 4 , 9 ] ;
N = 10 ;
print(countNumbers(D , N , sz ) ) ;
|
14,270 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: 高斯分布
高斯分布,也称为正态分布,广泛应用于连续型随机变量分布的模型中。
对于一元变量x的情形,高斯分布可以写成如下的形式:
$$\mathcal{N}(x|\mu,\sigma^2)=\frac{1}{(2\pi\sigma^2)^{1/2}}exp{-\frac{1}{2\sigma^2}(x-\mu)^2}$$
其中$\mu$是均值,$\sigma^2$是方差。
对于D维向量$\textbf{x}$,多元高斯分布的形式为:
$$\mathcal{N}(\textbf{x}|\textbf{$\mu$},\Sigma)=\frac{1}{(2\pi)^{D/2}|\Sigma|^{1/2}}exp{-\frac{1}{2}(\textbf{x}-\textbf{$\mu$})^T\Sigma^{-1}(\textbf{x}-\textbf{$\mu$})}$$
其中,$\textbf{$\mu$}$是一个D维均值向量,$\Sigma$是一个D*D的协方差矩阵,$|\Sigma|$是$\Sigma$的行列式。
高斯分布有着优良的性质,便于推导,很多时候会得到解析解。 一元高斯分布是个钟形癿曲线,大部分都集中在均值附近,朝两边癿概率呈指数衰减,这个可以用契比雪夫不等式来说明,偏离均值超过3个标准差的概率就非常低。
1. 拉普拉斯中心极限定理
拉普拉斯提出的中心极限定理(central limit theorem)告诉我们,对于某些温和的情况,一组随机变量之和(当然也是随机变量)的概率分布随着和式中项的数量的增加而逐渐趋向高斯分布。
下面的代码说明,多个均匀分布之和的均值的概率分布,随着N的增加,分布趋向于高斯分布
Step2: 2. 高斯分布的几何形式
高斯对于x的依赖体现在二次型$\Delta^2=(\textbf{x}-\textbf{$\mu$})^T\Sigma^{-1}(\textbf{x}-\textbf{$\mu$})$上。$\Delta$被称为$\textbf{$\mu$}$和$\textbf{x}$之间的马氏距离(Mahalanobis distance)。当$\Sigma$是单位矩阵时,就变成了欧式距离。对于x空间中这个二次型事常数的曲面,高斯分布也是常数。
现在考虑协方差矩阵的特征向量方程$$\Sigma\textbf{$\mu$}_i=\lambda_i\textbf{$\mu$}_i$$
其中$i=1,...,D$。
由于$\Sigma$是实对称矩阵,因此它的特征值也是实数,并且特征向量可以被选成是单位正交的。
协方差矩阵可以表示成特征向量的展开形式$$\Sigma=\sum_\limits{i=1}^D\lambda_i\textbf{u}i\textbf{u}_i^T$$
协方差矩阵的逆矩阵可以表示为$$\Sigma^{-1}=\sum\limits{i=1}^D\frac{1}{\lambda_i}\textbf{u}_i\textbf{u}_i^T$$
于是二次型就变成了$$\Delta^2=\sum_\limits{i=1}^D\frac{y_i^2}{\lambda_i}$$
其中定义$y_i=\textbf{u}_i^T(\textbf{x}-\textbf{$\mu$})$。
我们把${y_i}$表示成单位正交向量$\textbf{u}_i$关于原始的$x_i$坐标经过平移和旋转后形成的新的坐标系。
定义$\textbf{y}=(y_1,...,y_D)^T$,我们有$$\textbf{y}=\textbf{U}(\textbf{x}-\textbf{$\mu$})$$
其中$\textbf{U}$是一个矩阵,它的行是向量$\textbf{u}_i^T$。
如果所有的特征值$\lambda_i$都是正数,那么这些曲面表示椭球面,椭球中心位于$\textbf{$\mu$}$,椭球的轴的方向沿着$\textbf{u}_i$,沿着轴向的缩放因子为$\lambda_i^{\frac{1}{2}}$。如下图所示:
高斯分布的局限
高斯分布的局限主要体现在其自由参数的数量和单峰分布上。
对于一般的协方差矩阵,其参数的总数随着维度D的增长呈平方的方式增长,为了简化参数,可以将协方差矩阵约束成对角矩阵或者各向同性协方差矩阵(正比于单位矩阵),这样虽然限制了概率分布的自由度的数量,并且很容易求协方差矩阵的逆矩阵,但却大大限制了概率密度的形式,限制了描述模型中相关性的能力。
高斯分布本质上是单峰的(只有一个最大值),而不能很好近似多峰分布。后面,我们会引入潜在变量来解决这一问题。通过引入离散型潜在变量,相当多的多峰分布可以使用混合高斯分布来描述;通过引入连续型潜在变量可以产生出一种模型,该模型的自由参数可以被控制成与数据空间的维度D无关,同时仍然允许模型描述数据集里主要的相关性关系。
不同形式的协方差矩阵对应的概率密度曲线
Step3: 下面是一般形式的协方差矩阵对应的密度轮廓线,协方差矩阵有D(D+1)/2个独立参数,参数的总数随着D以平方的方式增长,大矩阵计算和求逆困难。
Step4: 下面协方差矩阵是对角矩阵的情况,椭圆的轮廓线与坐标轴对齐。概率密度模型中有总数为2D个独立参数。
Step5: 下面协方差矩阵正比于单位矩阵,该协方差矩阵又被称为各向同性协方差矩阵,轮廓线是同心圆。这使得模型有D+1个独立的参数。 | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import uniform
from scipy.stats import binom
from scipy.stats import norm as norm_dist
def uniform_central_limit(n, length):
    """
    @param:
        n: number of uniform samples averaged per draw; length: number of draws of the averaged variable
    @return:
        rv_mean: array of length `length` holding samples of the averaged random variable
        gaussian: a normal distribution fitted to rv_mean
    """
rv_mean = np.zeros(length)
    for i in range(n):  # range (not xrange) keeps this compatible with Python 3
rv = uniform.rvs(size=length)
rv_mean = rv_mean + rv
rv_mean = rv_mean / n
gaussian_params = norm_dist.fit(rv_mean)
gaussian = norm_dist(gaussian_params[0], gaussian_params[1])
return rv_mean, gaussian
fig = plt.figure(figsize=(14,12))
x = np.linspace(0,1,100)
for i, n in enumerate([1,2,10,20,50,100]):
ax = fig.add_subplot(3,2,i+1)
data, gaussian = uniform_central_limit(n, 1000)
ax.hist(data, bins=20, normed=True)
plt.plot(x, gaussian.pdf(x), "r", lw=2)
plt.title("n=%d" % n)
plt.show()
Explanation: 高斯分布
高斯分布,也称为正态分布,广泛应用于连续型随机变量分布的模型中。
对于一元变量x的情形,高斯分布可以写成如下的形式:
$$\mathcal{N}(x|\mu,\sigma^2)=\frac{1}{(2\pi\sigma^2)^{1/2}}exp{-\frac{1}{2\sigma^2}(x-\mu)^2}$$
其中$\mu$是均值,$\sigma^2$是方差。
对于D维向量$\textbf{x}$,多元高斯分布的形式为:
$$\mathcal{N}(\textbf{x}|\textbf{$\mu$},\Sigma)=\frac{1}{(2\pi)^{D/2}|\Sigma|^{1/2}}exp{-\frac{1}{2}(\textbf{x}-\textbf{$\mu$})^T\Sigma^{-1}(\textbf{x}-\textbf{$\mu$})}$$
其中,$\textbf{$\mu$}$是一个D维均值向量,$\Sigma$是一个D*D的协方差矩阵,$|\Sigma|$是$\Sigma$的行列式。
高斯分布有着优良的性质,便于推导,很多时候会得到解析解。 一元高斯分布是个钟形癿曲线,大部分都集中在均值附近,朝两边癿概率呈指数衰减,这个可以用契比雪夫不等式来说明,偏离均值超过3个标准差的概率就非常低。
1. 拉普拉斯中心极限定理
拉普拉斯提出的中心极限定理(central limit theorem)告诉我们,对于某些温和的情况,一组随机变量之和(当然也是随机变量)的概率分布随着和式中项的数量的增加而逐渐趋向高斯分布。
下面的代码说明,多个均匀分布之和的均值的概率分布,随着N的增加,分布趋向于高斯分布
End of explanation
import matplotlib.mlab as mlab
from mpl_toolkits.mplot3d import Axes3D
def plot_2d_normal(mux, muy, sigmaxx, sigmayy, sigmaxy):
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot()
x = np.arange(0, 5, 0.1)
y = np.arange(0, 5, 0.1)
x, y = np.meshgrid(x, y)
z = mlab.bivariate_normal(x, y, sigmaxx, sigmayy, mux, muy, sigmaxy)
ret = plt.contourf(x, y, z, cmap=plt.get_cmap('coolwarm'))
fig.colorbar(ret, shrink=0.5, aspect=5)
plt.show()
Explanation: 2. 高斯分布的几何形式
高斯对于x的依赖体现在二次型$\Delta^2=(\textbf{x}-\textbf{$\mu$})^T\Sigma^{-1}(\textbf{x}-\textbf{$\mu$})$上。$\Delta$被称为$\textbf{$\mu$}$和$\textbf{x}$之间的马氏距离(Mahalanobis distance)。当$\Sigma$是单位矩阵时,就变成了欧式距离。对于x空间中这个二次型事常数的曲面,高斯分布也是常数。
现在考虑协方差矩阵的特征向量方程$$\Sigma\textbf{$\mu$}_i=\lambda_i\textbf{$\mu$}_i$$
其中$i=1,...,D$。
由于$\Sigma$是实对称矩阵,因此它的特征值也是实数,并且特征向量可以被选成是单位正交的。
协方差矩阵可以表示成特征向量的展开形式$$\Sigma=\sum_\limits{i=1}^D\lambda_i\textbf{u}i\textbf{u}_i^T$$
协方差矩阵的逆矩阵可以表示为$$\Sigma^{-1}=\sum\limits{i=1}^D\frac{1}{\lambda_i}\textbf{u}_i\textbf{u}_i^T$$
于是二次型就变成了$$\Delta^2=\sum_\limits{i=1}^D\frac{y_i^2}{\lambda_i}$$
其中定义$y_i=\textbf{u}_i^T(\textbf{x}-\textbf{$\mu$})$。
我们把${y_i}$表示成单位正交向量$\textbf{u}_i$关于原始的$x_i$坐标经过平移和旋转后形成的新的坐标系。
定义$\textbf{y}=(y_1,...,y_D)^T$,我们有$$\textbf{y}=\textbf{U}(\textbf{x}-\textbf{$\mu$})$$
其中$\textbf{U}$是一个矩阵,它的行是向量$\textbf{u}_i^T$。
如果所有的特征值$\lambda_i$都是正数,那么这些曲面表示椭球面,椭球中心位于$\textbf{$\mu$}$,椭球的轴的方向沿着$\textbf{u}_i$,沿着轴向的缩放因子为$\lambda_i^{\frac{1}{2}}$。如下图所示:
高斯分布的局限
高斯分布的局限主要体现在其自由参数的数量和单峰分布上。
对于一般的协方差矩阵,其参数的总数随着维度D的增长呈平方的方式增长,为了简化参数,可以将协方差矩阵约束成对角矩阵或者各向同性协方差矩阵(正比于单位矩阵),这样虽然限制了概率分布的自由度的数量,并且很容易求协方差矩阵的逆矩阵,但却大大限制了概率密度的形式,限制了描述模型中相关性的能力。
高斯分布本质上是单峰的(只有一个最大值),而不能很好近似多峰分布。后面,我们会引入潜在变量来解决这一问题。通过引入离散型潜在变量,相当多的多峰分布可以使用混合高斯分布来描述;通过引入连续型潜在变量可以产生出一种模型,该模型的自由参数可以被控制成与数据空间的维度D无关,同时仍然允许模型描述数据集里主要的相关性关系。
不同形式的协方差矩阵对应的概率密度曲线
End of explanation
plot_2d_normal(2.5, 2.5, 1.0, 1.0, 0.8)
Explanation: 下面是一般形式的协方差矩阵对应的密度轮廓线,协方差矩阵有D(D+1)/2个独立参数,参数的总数随着D以平方的方式增长,大矩阵计算和求逆困难。
End of explanation
plot_2d_normal(2.5, 2.5, 1.0, 0.6, 0)
Explanation: 下面协方差矩阵是对角矩阵的情况,椭圆的轮廓线与坐标轴对齐。概率密度模型中有总数为2D个独立参数。
End of explanation
plot_2d_normal(2.5, 2.5, 1.0, 1.0, 0)
Explanation: 下面协方差矩阵正比于单位矩阵,该协方差矩阵又被称为各向同性协方差矩阵,轮廓线是同心圆。这使得模型有D+1个独立的参数。
End of explanation |
14,271 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Word Tokens
Step2: Load Stop Words
Step3: Remove Stop Words | Python Code:
# Load library
from nltk.corpus import stopwords
# You will have to download the set of stop words the first time
import nltk
nltk.download('stopwords')
Explanation: Title: Remove Stop Words
Slug: remove_stop_words
Summary: How to remove stop words from unstructured text data for machine learning in Python.
Date: 2016-09-09 12:00
Category: Machine Learning
Tags: Preprocessing Text
Authors: Chris Albon
Preliminaries
End of explanation
# Create word tokens
tokenized_words = ['i', 'am', 'going', 'to', 'go', 'to', 'the', 'store', 'and', 'park']
Explanation: Create Word Tokens
End of explanation
# Load stop words
stop_words = stopwords.words('english')
# Show stop words
stop_words[:5]
Explanation: Load Stop Words
End of explanation
# Remove stop words
[word for word in tokenized_words if word not in stop_words]
Explanation: Remove Stop Words
End of explanation |
14,272 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
expression.thompson
Generate the Thompson automaton from an expression.
Caveats
Step1: You may, however, use a labelset which does not feature a "one", in which case the context of the automaton will be different from the one of the expression.
Step2: Weights
Weights are supported.
Step3: Note however that you may generate invalid automata | Python Code:
import vcsn
from IPython.display import display
vcsn.context('lan_char, b').expression('a[bc]d').thompson()
vcsn.context('law_char, b').expression("'aa'[bc]'dd'").thompson()
Explanation: expression.thompson
Generate the Thompson automaton from an expression.
Caveats:
- it is not guaranteed that Result.is_valid()
- the context of the result might be different from the original context: spontaneous-transition support is required.
Properties:
- Result.proper().is_isomorphic(r.standard())
See also:
- expression.automaton
Examples
The Thompson procedure generates an automaton with spontaneous-transitions, which requires a labelset that feature a "one" label. The nullableset and wordset labelsets (and their compositions) does support a "one" label.
End of explanation
vcsn.context('lal_char, b').expression("a").thompson().context()
Explanation: You may, however, use a labelset which does not feature a "one", in which case the context of the automaton will be different from the one of the expression.
End of explanation
r = vcsn.context('lan_char(abc), q').expression('(<1/6>a*+<1/3>b*)*')
r
t = r.thompson()
t
t.proper()
r.standard()
Explanation: Weights
Weights are supported.
End of explanation
t = vcsn.context('lan_char(abc), q').expression('\e*').thompson()
t
t.is_valid()
Explanation: Note however that you may generate invalid automata:
End of explanation |
14,273 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TFRecord and tf.Example
Learning Objectives
Understand the TFRecord format for storing data
Understand the tf.Example message type
Read and Write a TFRecord file
Introduction
In this notebook, you create, parse, and use the tf.Example message, and then serialize, write, and read tf.Example messages to and from .tfrecord files. To read data efficiently it can be helpful to serialize your data and store it in a set of files (100-200MB each) that can each be read linearly. This is especially true if the data is being streamed over a network. This can also be useful for caching any data-preprocessing.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
The TFRecord format
The TFRecord format is a simple format for storing a sequence of binary records. Protocol buffers are a cross-platform, cross-language library for efficient serialization of structured data. Protocol messages are defined by .proto files, these are often the easiest way to understand a message type.
The tf.Example message (or protobuf) is a flexible message type that represents a {"string"
Step4: Please ignore any incompatibility warnings and errors.
tf.Example
Data types for tf.Example
Fundamentally, a tf.Example is a {"string"
Step5: Note
Step6: Lab Task #1b
Step7: Creating a tf.Example message
Suppose you want to create a tf.Example message from existing data. In practice, the dataset may come from anywhere, but the procedure of creating the tf.Example message from a single observation will be the same
Step9: Each of these features can be coerced into a tf.Example-compatible type using one of _bytes_feature, _float_feature, _int64_feature. You can then create a tf.Example message from these encoded features
Step10: For example, suppose you have a single observation from the dataset, [False, 4, bytes('goat'), 0.9876]. You can create and print the tf.Example message for this observation using create_message(). Each single observation will be written as a Features message as per the above. Note that the tf.Example message is just a wrapper around the Features message
Step11: Lab Task #1c
Step12: TFRecords format details
A TFRecord file contains a sequence of records. The file can only be read sequentially.
Each record contains a byte-string, for the data-payload, plus the data-length, and CRC32C (32-bit CRC using the Castagnoli polynomial) hashes for integrity checking.
Each record is stored in the following formats
Step13: Applied to a tuple of arrays, it returns a dataset of tuples
Step14: Use the tf.data.Dataset.map method to apply a function to each element of a Dataset.
The mapped function must operate in TensorFlow graph mode—it must operate on and return tf.Tensors. A non-tensor function, like serialize_example, can be wrapped with tf.py_function to make it compatible.
Lab Task 2a
Step15: Lab Task 2b
Step16: And write them to a TFRecord file
Step17: Reading a TFRecord file
You can also read the TFRecord file using the tf.data.TFRecordDataset class.
More information on consuming TFRecord files using tf.data can be found here.
Lab Task 2c
Step18: At this point the dataset contains serialized tf.train.Example messages. When iterated over it returns these as scalar string tensors.
Use the .take method to only show the first 10 records.
Note
Step19: These tensors can be parsed using the function below. Note that the feature_description is necessary here because datasets use graph-execution, and need this description to build their shape and type signature
Step20: Alternatively, use tf.parse example to parse the whole batch at once. Apply this function to each item in the dataset using the tf.data.Dataset.map method
Step21: Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but you will only display the first 10. The data is displayed as a dictionary of features. Each item is a tf.Tensor, and the numpy element of this tensor displays the value of the feature
Step22: Here, the tf.parse_example function unpacks the tf.Example fields into standard tensors.
TFRecord files in Python
The tf.io module also contains pure-Python functions for reading and writing TFRecord files.
Writing a TFRecord file
Next, write the 10,000 observations to the file test.tfrecord. Each observation is converted to a tf.Example message, then written to file. You can then verify that the file test.tfrecord has been created
Step23: Reading a TFRecord file
These serialized tensors can be easily parsed using tf.train.Example.ParseFromString
Step24: Walkthrough
Step25: Write the TFRecord file
As before, encode the features as types compatible with tf.Example. This stores the raw image string feature, as well as the height, width, depth, and arbitrary label feature. The latter is used when you write the file to distinguish between the cat image and the bridge image. Use 0 for the cat_in_snow image, and 1 for the williamsburg_bridge image.
Step26: Notice that all of the features are now stored in the tf.Example message. Next, functionalize the code above and write the example messages to a file named images.tfrecords
Step27: Read the TFRecord file
You now have the file—images.tfrecords—and can now iterate over the records in it to read back what you wrote. Given that in this example you will only reproduce the image, the only feature you will need is the raw image string. Extract it using the getters described above, namely example.features.feature['image_raw'].bytes_list.value[0]. You can also use the labels to determine which record is the cat and which one is the bridge
Step28: Recover the images from the TFRecord file | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install -q tf-nightly
import IPython.display as display
import numpy as np
import tensorflow as tf
print("TensorFlow version: ", tf.version.VERSION)
Explanation: TFRecord and tf.Example
Learning Objectives
Understand the TFRecord format for storing data
Understand the tf.Example message type
Read and Write a TFRecord file
Introduction
In this notebook, you create, parse, and use the tf.Example message, and then serialize, write, and read tf.Example messages to and from .tfrecord files. To read data efficiently it can be helpful to serialize your data and store it in a set of files (100-200MB each) that can each be read linearly. This is especially true if the data is being streamed over a network. This can also be useful for caching any data-preprocessing.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
The TFRecord format
The TFRecord format is a simple format for storing a sequence of binary records. Protocol buffers are a cross-platform, cross-language library for efficient serialization of structured data. Protocol messages are defined by .proto files, these are often the easiest way to understand a message type.
The tf.Example message (or protobuf) is a flexible message type that represents a {"string": value} mapping. It is designed for use with TensorFlow and is used throughout the higher-level APIs such as TFX.
Note: While useful, these structures are optional. There is no need to convert existing code to use TFRecords, unless you are using tf.data and reading data is still the bottleneck to training. See Data Input Pipeline Performance for dataset performance tips.
Load necessary libraries
We will start by importing the necessary libraries for this lab.
End of explanation
# TODO 1a
# The following functions can be used to convert a value to a type compatible
# with tf.Example.
def _bytes_feature(value):
    """Returns a bytes_list from a string / byte."""
if isinstance(value, type(tf.constant(0))):
value = (
value.numpy()
) # BytesList won't unpack a string from an EagerTensor.
return # TODO: Complete the code here.
def _float_feature(value):
    """Returns a float_list from a float / double."""
return # TODO: Complete the code here.
def _int64_feature(value):
    """Returns an int64_list from a bool / enum / int / uint."""
return # TODO: Complete the code here.
Explanation: Please ignore any incompatibility warnings and errors.
tf.Example
Data types for tf.Example
Fundamentally, a tf.Example is a {"string": tf.train.Feature} mapping.
The tf.train.Feature message type can accept one of the following three types (See the .proto file for reference). Most other generic types can be coerced into one of these:
tf.train.BytesList (the following types can be coerced)
string
byte
tf.train.FloatList (the following types can be coerced)
float (float32)
double (float64)
tf.train.Int64List (the following types can be coerced)
bool
enum
int32
uint32
int64
uint64
Lab Task #1a: In order to convert a standard TensorFlow type to a tf.Example-compatible tf.train.Feature, you can use the shortcut functions below. Note that each function takes a scalar input value and returns a tf.train.Feature containing one of the three list types above. Complete the TODOs below using these types.
End of explanation
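
If you want to check your work after attempting the TODOs, one possible completion of the three helpers (these mirror the standard tf.train.Feature wrappers; treat this as a reference sketch, not the only valid answer) is:

```python
def _bytes_feature(value):
    """Returns a bytes_list from a string / byte."""
    if isinstance(value, type(tf.constant(0))):
        value = value.numpy()  # BytesList won't unpack a string from an EagerTensor.
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))


def _float_feature(value):
    """Returns a float_list from a float / double."""
    return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))


def _int64_feature(value):
    """Returns an int64_list from a bool / enum / int / uint."""
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
```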
print(_bytes_feature(b"test_string"))
print(_bytes_feature(b"test_bytes"))
print(_float_feature(np.exp(1)))
print(_int64_feature(True))
print(_int64_feature(1))
Explanation: Note: To stay simple, this example only uses scalar inputs. The simplest way to handle non-scalar features is to use tf.serialize_tensor to convert tensors to binary-strings. Strings are scalars in tensorflow. Use tf.parse_tensor to convert the binary-string back to a tensor.
Below are some examples of how these functions work. Note the varying input types and the standardized output types. If the input type for a function does not match one of the coercible types stated above, the function will raise an exception (e.g. _int64_feature(1.0) will error out, since 1.0 is a float, so should be used with the _float_feature function instead):
End of explanation
feature = _float_feature(np.exp(1))
# TODO 1b
# TODO: Complete the code here
Explanation: Lab Task #1b: All proto messages can be serialized to a binary-string using the .SerializeToString method. Use this method to complete the below TODO:
End of explanation
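
One possible completion of the TODO above, assuming the feature helpers from Lab Task #1a are already filled in, is simply to call the serializer on the proto:

```python
serialized = feature.SerializeToString()
serialized
```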
# The number of observations in the dataset.
n_observations = int(1e4)
# Boolean feature, encoded as False or True.
feature0 = np.random.choice([False, True], n_observations)
# Integer feature, random from 0 to 4.
feature1 = np.random.randint(0, 5, n_observations)
# String feature
strings = np.array([b"cat", b"dog", b"chicken", b"horse", b"goat"])
feature2 = strings[feature1]
# Float feature, from a standard normal distribution
feature3 = np.random.randn(n_observations)
Explanation: Creating a tf.Example message
Suppose you want to create a tf.Example message from existing data. In practice, the dataset may come from anywhere, but the procedure of creating the tf.Example message from a single observation will be the same:
Within each observation, each value needs to be converted to a tf.train.Feature containing one of the 3 compatible types, using one of the functions above.
You create a map (dictionary) from the feature name string to the encoded feature value produced in #1.
The map produced in step 2 is converted to a Features message.
In this notebook, you will create a dataset using NumPy.
This dataset will have 4 features:
a boolean feature, False or True with equal probability
an integer feature uniformly randomly chosen from [0, 5]
a string feature generated from a string table by using the integer feature as an index
a float feature from a standard normal distribution
Consider a sample consisting of 10,000 independently and identically distributed observations from each of the above distributions:
End of explanation
def serialize_example(feature0, feature1, feature2, feature3):
    """Creates a tf.Example message ready to be written to a file."""
# Create a dictionary mapping the feature name to the tf.Example-compatible
# data type.
feature = {
"feature0": _int64_feature(feature0),
"feature1": _int64_feature(feature1),
"feature2": _bytes_feature(feature2),
"feature3": _float_feature(feature3),
}
# Create a Features message using tf.train.Example.
example_proto = tf.train.Example(
features=tf.train.Features(feature=feature)
)
return example_proto.SerializeToString()
Explanation: Each of these features can be coerced into a tf.Example-compatible type using one of _bytes_feature, _float_feature, _int64_feature. You can then create a tf.Example message from these encoded features:
End of explanation
# This is an example observation from the dataset.
example_observation = []
serialized_example = serialize_example(False, 4, b"goat", 0.9876)
serialized_example
Explanation: For example, suppose you have a single observation from the dataset, [False, 4, bytes('goat'), 0.9876]. You can create and print the tf.Example message for this observation using create_message(). Each single observation will be written as a Features message as per the above. Note that the tf.Example message is just a wrapper around the Features message:
End of explanation
# TODO 1c
example_proto = # TODO: Complete the code here
example_proto
Explanation: Lab Task #1c: To decode the message use the tf.train.Example.FromString method and complete the below TODO
End of explanation
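
One possible completion for the decoding step (again a reference sketch, using the serialized_example produced earlier):

```python
example_proto = tf.train.Example.FromString(serialized_example)
example_proto
```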
tf.data.Dataset.from_tensor_slices(feature1)
Explanation: TFRecords format details
A TFRecord file contains a sequence of records. The file can only be read sequentially.
Each record contains a byte-string, for the data-payload, plus the data-length, and CRC32C (32-bit CRC using the Castagnoli polynomial) hashes for integrity checking.
Each record is stored in the following formats:
uint64 length
uint32 masked_crc32_of_length
byte data[length]
uint32 masked_crc32_of_data
The records are concatenated together to produce the file. CRCs are
described here, and
the mask of a CRC is:
masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul
Note: There is no requirement to use tf.Example in TFRecord files. tf.Example is just a method of serializing dictionaries to byte-strings. Lines of text, encoded image data, or serialized tensors (using tf.io.serialize_tensor, and
tf.io.parse_tensor when loading). See the tf.io module for more options.
TFRecord files using tf.data
The tf.data module also provides tools for reading and writing data in TensorFlow.
Writing a TFRecord file
The easiest way to get the data into a dataset is to use the from_tensor_slices method.
Applied to an array, it returns a dataset of scalars:
End of explanation
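
As a small aside, the CRC masking formula quoted above can be written directly in Python; masked_crc32c below is a hypothetical helper for illustration only (TFRecord readers and writers handle this for you):

```python
def masked_crc32c(crc):
    # Rotate the 32-bit CRC right by 15 bits, then add the mask constant,
    # keeping every intermediate result within 32 bits.
    rotated = ((crc >> 15) | (crc << 17)) & 0xFFFFFFFF
    return (rotated + 0xA282EAD8) & 0xFFFFFFFF
```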
features_dataset = tf.data.Dataset.from_tensor_slices(
(feature0, feature1, feature2, feature3)
)
features_dataset
# Use `take(1)` to only pull one example from the dataset.
for f0, f1, f2, f3 in features_dataset.take(1):
print(f0)
print(f1)
print(f2)
print(f3)
Explanation: Applied to a tuple of arrays, it returns a dataset of tuples:
End of explanation
# TODO 2a
# TODO: Your code goes here
tf_serialize_example(f0, f1, f2, f3)
Explanation: Use the tf.data.Dataset.map method to apply a function to each element of a Dataset.
The mapped function must operate in TensorFlow graph mode—it must operate on and return tf.Tensors. A non-tensor function, like serialize_example, can be wrapped with tf.py_function to make it compatible.
Lab Task 2a: Using tf.py_function requires you to specify the shape and type information that is otherwise unavailable:
End of explanation
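
One possible completion for the wrapper called above (a reference sketch; the key point is that tf.py_function wraps the eager serialize_example so it can run inside a graph):

```python
def tf_serialize_example(f0, f1, f2, f3):
    tf_string = tf.py_function(
        serialize_example,
        (f0, f1, f2, f3),  # pass these tensors to the wrapped Python function
        tf.string)         # the wrapped function returns a tf.string scalar
    return tf.reshape(tf_string, ())  # the result is a scalar
```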
# TODO 2b
serialized_features_dataset = #TODO : Complete the code here.
serialized_features_dataset
def generator():
for features in features_dataset:
yield serialize_example(*features)
serialized_features_dataset = tf.data.Dataset.from_generator(
generator, output_types=tf.string, output_shapes=()
)
serialized_features_dataset
Explanation: Lab Task 2b: Apply this function to each element in the features_dataset using the map function and complete below TODO:
End of explanation
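
One possible completion of the TODO above, using Dataset.map with the wrapper from Lab Task 2a:

```python
serialized_features_dataset = features_dataset.map(tf_serialize_example)
serialized_features_dataset
```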
filename = "test.tfrecord"
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(serialized_features_dataset)
Explanation: And write them to a TFRecord file:
End of explanation
# TODO 2c
# TODO: Your code goes here
Explanation: Reading a TFRecord file
You can also read the TFRecord file using the tf.data.TFRecordDataset class.
More information on consuming TFRecord files using tf.data can be found here.
Lab Task 2c: Complete the below TODO by using TFRecordDatasets which is useful for standardizing input data and optimizing performance.
End of explanation
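
One possible completion of the TODO above (it produces the raw_dataset that the following cells iterate over):

```python
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
```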
for raw_record in raw_dataset.take(10):
print(repr(raw_record))
Explanation: At this point the dataset contains serialized tf.train.Example messages. When iterated over it returns these as scalar string tensors.
Use the .take method to only show the first 10 records.
Note: iterating over a tf.data.Dataset only works with eager execution enabled.
End of explanation
# Create a description of the features.
feature_description = {
"feature0": tf.io.FixedLenFeature([], tf.int64, default_value=0),
"feature1": tf.io.FixedLenFeature([], tf.int64, default_value=0),
"feature2": tf.io.FixedLenFeature([], tf.string, default_value=""),
"feature3": tf.io.FixedLenFeature([], tf.float32, default_value=0.0),
}
def _parse_function(example_proto):
# Parse the input `tf.Example` proto using the dictionary above.
return tf.io.parse_single_example(example_proto, feature_description)
Explanation: These tensors can be parsed using the function below. Note that the feature_description is necessary here because datasets use graph-execution, and need this description to build their shape and type signature:
End of explanation
parsed_dataset = raw_dataset.map(_parse_function)
parsed_dataset
Explanation: Alternatively, use tf.parse example to parse the whole batch at once. Apply this function to each item in the dataset using the tf.data.Dataset.map method:
End of explanation
for parsed_record in parsed_dataset.take(10):
print(repr(parsed_record))
Explanation: Use eager execution to display the observations in the dataset. There are 10,000 observations in this dataset, but you will only display the first 10. The data is displayed as a dictionary of features. Each item is a tf.Tensor, and the numpy element of this tensor displays the value of the feature:
End of explanation
# Write the `tf.Example` observations to the file.
with tf.io.TFRecordWriter(filename) as writer:
for i in range(n_observations):
example = serialize_example(
feature0[i], feature1[i], feature2[i], feature3[i]
)
writer.write(example)
!du -sh {filename}
Explanation: Here, the tf.parse_example function unpacks the tf.Example fields into standard tensors.
TFRecord files in Python
The tf.io module also contains pure-Python functions for reading and writing TFRecord files.
Writing a TFRecord file
Next, write the 10,000 observations to the file test.tfrecord. Each observation is converted to a tf.Example message, then written to file. You can then verify that the file test.tfrecord has been created:
End of explanation
filenames = [filename]
raw_dataset = tf.data.TFRecordDataset(filenames)
raw_dataset
for raw_record in raw_dataset.take(1):
example = tf.train.Example()
example.ParseFromString(raw_record.numpy())
print(example)
Explanation: Reading a TFRecord file
These serialized tensors can be easily parsed using tf.train.Example.ParseFromString:
End of explanation
cat_in_snow = tf.keras.utils.get_file(
"320px-Felis_catus-cat_on_snow.jpg",
"https://storage.googleapis.com/download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg",
)
williamsburg_bridge = tf.keras.utils.get_file(
"194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg",
"https://storage.googleapis.com/download.tensorflow.org/example_images/194px-New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg",
)
display.display(display.Image(filename=cat_in_snow))
display.display(
display.HTML(
'Image cc-by: <a "href=https://commons.wikimedia.org/wiki/File:Felis_catus-cat_on_snow.jpg">Von.grzanka</a>'
)
)
display.display(display.Image(filename=williamsburg_bridge))
display.display(
display.HTML(
'<a "href=https://commons.wikimedia.org/wiki/File:New_East_River_Bridge_from_Brooklyn_det.4a09796u.jpg">From Wikimedia</a>'
)
)
Explanation: Walkthrough: Reading and writing image data
This is an end-to-end example of how to read and write image data using TFRecords. Using an image as input data, you will write the data as a TFRecord file, then read the file back and display the image.
This can be useful if, for example, you want to use several models on the same input dataset. Instead of storing the image data raw, it can be preprocessed into the TFRecords format, and that can be used in all further processing and modelling.
First, let's download this image of a cat in the snow and this photo of the Williamsburg Bridge, NYC under construction.
Fetch the images
End of explanation
image_labels = {
cat_in_snow: 0,
williamsburg_bridge: 1,
}
# This is an example, just using the cat image.
image_string = open(cat_in_snow, "rb").read()
label = image_labels[cat_in_snow]
# Create a dictionary with features that may be relevant.
def image_example(image_string, label):
image_shape = tf.image.decode_jpeg(image_string).shape
feature = {
"height": _int64_feature(image_shape[0]),
"width": _int64_feature(image_shape[1]),
"depth": _int64_feature(image_shape[2]),
"label": _int64_feature(label),
"image_raw": _bytes_feature(image_string),
}
return tf.train.Example(features=tf.train.Features(feature=feature))
for line in str(image_example(image_string, label)).split("\n")[:15]:
print(line)
print("...")
Explanation: Write the TFRecord file
As before, encode the features as types compatible with tf.Example. This stores the raw image string feature, as well as the height, width, depth, and arbitrary label feature. The latter is used when you write the file to distinguish between the cat image and the bridge image. Use 0 for the cat_in_snow image, and 1 for the williamsburg_bridge image.
End of explanation
# Write the raw image files to `images.tfrecords`.
# First, process the two images into `tf.Example` messages.
# Then, write to a `.tfrecords` file.
record_file = "images.tfrecords"
with tf.io.TFRecordWriter(record_file) as writer:
for filename, label in image_labels.items():
image_string = open(filename, "rb").read()
tf_example = image_example(image_string, label)
writer.write(tf_example.SerializeToString())
!du -sh {record_file}
Explanation: Notice that all of the features are now stored in the tf.Example message. Next, functionalize the code above and write the example messages to a file named images.tfrecords:
End of explanation
raw_image_dataset = tf.data.TFRecordDataset("images.tfrecords")
# Create a dictionary describing the features.
image_feature_description = {
"height": tf.io.FixedLenFeature([], tf.int64),
"width": tf.io.FixedLenFeature([], tf.int64),
"depth": tf.io.FixedLenFeature([], tf.int64),
"label": tf.io.FixedLenFeature([], tf.int64),
"image_raw": tf.io.FixedLenFeature([], tf.string),
}
def _parse_image_function(example_proto):
# Parse the input tf.Example proto using the dictionary above.
return tf.io.parse_single_example(example_proto, image_feature_description)
parsed_image_dataset = raw_image_dataset.map(_parse_image_function)
parsed_image_dataset
Explanation: Read the TFRecord file
You now have the file—images.tfrecords—and can now iterate over the records in it to read back what you wrote. Given that in this example you will only reproduce the image, the only feature you will need is the raw image string. Extract it using the getters described above, namely example.features.feature['image_raw'].bytes_list.value[0]. You can also use the labels to determine which record is the cat and which one is the bridge:
End of explanation
for image_features in parsed_image_dataset:
image_raw = image_features["image_raw"].numpy()
display.display(display.Image(data=image_raw))
Explanation: Recover the images from the TFRecord file:
End of explanation |
14,274 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Groupwise Correlation Toy Example
Step1: ## The main helper function splits input arrays into groups
Step2: The parse_replicates function requires two arrays
Step3: The main wrapper function first calculates within group combinations, then between group combinations. (it currently doesn't contain the code for betweens)
Step4: Create combinations of replicates using the wrapper function | Python Code:
import sys, getopt, math
import itertools as itt
from scipy.stats.stats import pearsonr
x2reps = ['xa1', 'xa2', 'xb1', 'xb2', 'xc1', 'xc2', 'xd1', 'xd2']
y2reps = ['ya1', 'ya2', 'yb1', 'yb2', 'yc1', 'yc2', 'yd1', 'yd2']
x3reps = ['xa1', 'xa2', 'xa3', 'xb1', 'xb2', 'xb3', 'xc1', 'xc2', 'xc3', 'xd1', 'xd2', 'xd3']
y3reps = ['ya1', 'ya2', 'ya3', 'yb1', 'yb2', 'yb3', 'yc1', 'yc2', 'yc3', 'yd1', 'yd2', 'yd3']
x4reps = ['xa1', 'xa2', 'xa3', 'xa4', 'xb1', 'xb2', 'xb3', 'xb4', 'xc1', 'xc2', 'xc3', 'xc4', 'xd1', 'xd2', 'xd3', 'xd4']
y4reps = ['ya1', 'ya2', 'ya3', 'ya4', 'yb1', 'yb2', 'yb3', 'yb4', 'yc1', 'yc2', 'yc3', 'yc4', 'yd1', 'yd2', 'yd3', 'yd4']
x23reps = ['xa1', 'xa2', 'xb1', 'xb2', 'xc1', 'xc2', 'xc3', 'xd1', 'xd2', 'xd3']
y23reps = ['ya1', 'ya2', 'yb1', 'yb2', 'yc1', 'yc2', 'yc3', 'yd1', 'yd2', 'yd3']
Explanation: Groupwise Correlation Toy Example
End of explanation
def parse_replicates(data_in, replicates):
group_data= []
#data_in = [array of ungrouped replicate data]
start = 0
end = replicates[0]
for i in range(0,len(replicates)):
group_data.append(data_in[start:end])
start = end
if i == len(replicates)-1:
end = end + replicates[i]
else:
end = end + replicates[i+1]
return group_data
Explanation: ## The main helper function splits input arrays into groups
End of explanation
x2parse = parse_replicates(x2reps, [2,2,2,2])
y2parse = parse_replicates(y2reps, [2,2,2,2])
x3parse = parse_replicates(x3reps, [3,3,3,3])
y3parse = parse_replicates(y3reps, [3,3,3,3])
x4parse = parse_replicates(x4reps, [4,4,4,4])
y4parse = parse_replicates(y4reps, [4,4,4,4])
x23parse = parse_replicates(x23reps, [2,2,3,3])
y23parse = parse_replicates(y23reps, [2,2,3,3])
x23parse
Explanation: The parse_replicates function requires two arrays: 1) the data, and 2) number of replicates per group.
End of explanation
def get_comb(x_in, y_in, xreps, yreps):
assert len(xreps) == len(yreps)
xparse = parse_replicates(x_in, xreps)
yparse = parse_replicates(y_in, yreps)
xperm = [list(itt.permutations(x, len(x))) for x in xparse]
yperm = [list(itt.permutations(y, len(y))) for y in yparse]
within = [[zip(j,k) for j,k in list(itt.product(x,y))[0:]] for x,y in zip(xperm,yperm)]
groups = []
for group in within:
groups.append(group)
betweens = list(itt.product(*groups))
return betweens
x2perm = [list(itt.permutations(x, len(x))) for x in x2parse]
y2perm = [list(itt.permutations(y, len(y))) for y in y2parse]
x3perm = [list(itt.permutations(x, len(x))) for x in x3parse]
y3perm = [list(itt.permutations(y, len(y))) for y in y3parse]
x4perm = [list(itt.permutations(x, len(x))) for x in x4parse]
y4perm = [list(itt.permutations(y, len(y))) for y in y4parse]
x23perm = [list(itt.permutations(x, len(x))) for x in x23parse]
y23perm = [list(itt.permutations(y, len(y))) for y in y23parse]
print len(x2perm[0])
print len(x3perm[0])
print len(x4perm[0])
print len(x23perm[2])
x23perm
zip2reps = zip(x2perm, y2perm)
zip23reps = zip(x3perm, y23perm)
zip23reps
product2reps = list(itt.product(x2reps, y2reps))
product2perms = list(itt.product(x2perm, y2perm))
print len(x2reps)
print len(product2reps)
print len(product2perms)
print product2reps
print product2perms
[[zip(j,k) for j,k in list(itt.product(x,y))[0:((len(x)+len(y))/2)]] for x,y in zip(x2perm,y2perm)]
#[itt.product(x,y) for x,y in x2zip]
[[zip(j,k) for j,k in list(itt.product(x,y))[0:((len(x)+len(y))/2)]] for x,y in zip(x3perm,y23perm)]
Explanation: The main wrapper function first calculates within group combinations, then between group combinations. (it currently doesn't contain the code for betweens)
End of explanation
comb2 = get_comb(x2reps, y2reps, [2,2,2,2], [2,2,2,2])
comb3 = get_comb(x3reps, y3reps, [3,3,3,3], [3,3,3,3])
comb4 = get_comb(x4reps, y4reps, [4,4,4,4], [4,4,4,4])
comb23 = get_comb(x23reps, y23reps, [2,2,3,3], [2,2,3,3])
comb323 = get_comb(x3reps, y23reps, [3,3,3,3], [2,2,3,3])
comb2
len(get_comb(x2reps, y2reps, [2,2,2,2], [2,2,2,2]))
Explanation: Create combinations of replicates using the wrapper function
End of explanation |
14,275 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Validating Configuration Settings with Batfish
Network engineers routinely need to validate configuration settings of various devices in their network. In a multi-vendor network, this validation can be hard and few tools exist today to enable this basic task. However, the vendor-independent models of Batfish and its querying mechanisms make such validation almost trivial.
In this notebook, we show how to validate configuration settings with Batfish. More specifically, we examine how the configuration of NTP servers can be validated. The same validation scenarios can be performed for other configuration settings of nodes (such as dns servers, tacacs servers, snmp communities, VRFs, etc.) interfaces (such as MTU, bandwidth, input and output access lists, state, etc.), VRFs, BGP and OSPF sessions, and more.
Check out a video demo of this notebook here.
Initializing our Network and Snapshot
SNAPSHOT_PATH below can be updated to point to a custom snapshot directory, see the Batfish instructions for how to package data for analysis.<br>
More example networks are available in the networks folder of the Batfish repository.
Step1: The network snapshot that we initialized above is illustrated below. You can download/view devices' configuration files here. We will focus on the validation for the six border routers.
Extracting configured NTP servers
This can be done using the nodeProperties() question.
Step2: The .frame() function call above returns a Pandas data frame that contains the answer.
Validating NTP Servers Configuration
Depending on the network's policy, there are several possible validation scenarios for NTP-servers configuration
Step3: Validation scenario 2
Step4: Because as1border1 has no configured NTP servers, it clearly violates our assertion, and so does as2border2 which has a configured server but not one that is present in the reference set.
Validation scenario 3
Step5: As we can see, all border nodes violate this condition.
A slightly advanced version of pandas filtering can also show us which configured NTP servers are missing or extra (compared to the reference set) at each node.
Step6: Validation scenario 4
Step7: Note that there is an extra property in this dictionary that we don't care about comparing right now
Step8: Continue exploring
We showed you how to extract the database of configured NTP servers for every node and how to test that the settings are correct for a variety of desired test configurations. The underlying principles can be applied to other network configurations, such as interfaceProperties, bgpProcessConfiguration, ospfProcessConfiguration etc.
For example interfaceProperties() question can be used to fetch properties like interface MTU using a simple command. | Python Code:
# Import packages
%run startup.py
bf = Session(host="localhost")
# Initialize a network and snapshot
NETWORK_NAME = "example_network"
SNAPSHOT_NAME = "example_snapshot"
SNAPSHOT_PATH = "networks/example"
bf.set_network(NETWORK_NAME)
bf.init_snapshot(SNAPSHOT_PATH, name=SNAPSHOT_NAME, overwrite=True)
Explanation: Validating Configuration Settings with Batfish
Network engineers routinely need to validate configuration settings of various devices in their network. In a multi-vendor network, this validation can be hard and few tools exist today to enable this basic task. However, the vendor-independent models of Batfish and its querying mechanisms make such validation almost trivial.
In this notebook, we show how to validate configuration settings with Batfish. More specifically, we examine how the configuration of NTP servers can be validated. The same validation scenarios can be performed for other configuration settings of nodes (such as dns servers, tacacs servers, snmp communities, VRFs, etc.) interfaces (such as MTU, bandwidth, input and output access lists, state, etc.), VRFs, BGP and OSPF sessions, and more.
Check out a video demo of this notebook here.
Initializing our Network and Snapshot
SNAPSHOT_PATH below can be updated to point to a custom snapshot directory, see the Batfish instructions for how to package data for analysis.<br>
More example networks are available in the networks folder of the Batfish repository.
End of explanation
# Set the property that we want to extract
COL_NAME = "NTP_Servers"
# Extract NTP servers for all routers with 'border' in their name
node_props = bf.q.nodeProperties(
nodes="/border/",
properties=COL_NAME).answer().frame()
node_props
Explanation: The network snapshot that we initialized above is illustrated below. You can download/view devices' configuration files here. We will focus on the validation for the six border routers.
Extracting configured NTP servers
This can be done using the nodeProperties() question.
End of explanation
# Find nodes that have no NTP servers configured
ns_violators = node_props[node_props[COL_NAME].apply(
lambda x: len(x) == 0)]
ns_violators
Explanation: The .frame() function call above returns a Pandas data frame that contains the answer.
Validating NTP Servers Configuration
Depending on the network's policy, there are several possible validation scenarios for NTP-servers configuration:
1. Every node has at least one NTP server configured.
2. Every node has at least one NTP server configured from the reference set.
3. Every node has the reference set of NTP servers configured.
4. Every node has NTP servers that match those in a per-node database.
We demonstrate each scenario below.
Validation scenario 1: Every node has at least one NTP server configured
Now that we have the list of NTP servers, let's check if at least one server is configured on the border routers. We accomplish that by using (lambda expressions) to identify nodes where the list is empty.
End of explanation
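
If you want to turn this style of check into a hard test (for example in CI), one option, sketched here and not part of the original walkthrough, is to assert that the violators frame is empty:

```python
violators = node_props[node_props[COL_NAME].apply(lambda x: len(x) == 0)]
assert violators.empty, "Nodes with no NTP servers configured:\n{}".format(violators)
```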
# Define the reference set of NTP servers
ref_ntp_servers = set(["23.23.23.23"])
# Find nodes that have no NTP server in common with the reference set
ns_violators = node_props[node_props[COL_NAME].apply(
lambda x: len(ref_ntp_servers.intersection(set(x))) == 0)]
ns_violators
Explanation: Validation scenario 2: Every node has at least one NTP server configured from the reference set.
Now if we want to validate that configured NTP servers should contain at least one NTP server from a reference set, we can use the command below. It identifies any node whose configured set of NTP servers does not overlap with the reference set at all.
End of explanation
# Find violating nodes whose configured NTP servers do not match the reference set
ns_violators = node_props[node_props[COL_NAME].apply(
lambda x: ref_ntp_servers != set(x))]
ns_violators
Explanation: Because as1border1 has no configured NTP servers, it clearly violates our assertion, and so does as2border2 which has a configured server but not one that is present in the reference set.
Validation scenario 3: Every node has the reference set of NTP servers configured
A common use case for validating NTP servers involves checking that the set of NTP servers exactly matches a desired reference set. Such validation is quite straightforward as well.
End of explanation
# Find extra and missing servers at each node
ns_extra = node_props[COL_NAME].map(lambda x: set(x) - ref_ntp_servers)
ns_missing = node_props[COL_NAME].map(lambda x: ref_ntp_servers - set(x))
# Join these columns up with the node columns for a complete view
diff_df = pd.concat([node_props["Node"],
ns_extra.rename('extra-{}'.format(COL_NAME)),
ns_missing.rename('missing-{}'.format(COL_NAME))],
axis=1)
diff_df
Explanation: As we can see, all border nodes violate this condition.
A slightly advanced version of pandas filtering can also show us which configured NTP servers are missing or extra (compared to the reference set) at each node.
End of explanation
# Mock reference-node-data, presumably taken from an external database
database = {'as1border1': {'NTP_Servers': ['23.23.23.23'],
'DNS_Servers': ['1.1.1.1']},
'as1border2': {'NTP_Servers': ['23.23.23.23'],
'DNS_Servers': ['1.1.1.1']},
'as2border1': {'NTP_Servers': ['18.18.18.18', '23.23.23.23'],
'DNS_Servers': ['2.2.2.2']},
'as2border2': {'NTP_Servers': ['18.18.18.18'],
'DNS_Servers': ['1.1.1.1']},
'as3border1': {'NTP_Servers': ['18.18.18.18', '23.23.23.23'],
'DNS_Servers': ['2.2.2.2']},
'as3border2': {'NTP_Servers': ['18.18.18.18', '23.23.23.23'],
'DNS_Servers': ['2.2.2.2']},
}
Explanation: Validation scenario 4: Every node has NTP servers that match those in a per-node database.
Every node should match its reference set of NTP Servers which may be stored in an external database. This check enables easy validation of configuration settings that differ acorss nodes.
We assume data from the database is fetched in the following format, where node names are dictionary keys and specific properties are defined in a property-keyed dictionary per node.
End of explanation
# Transpose database data so each node has its own row
database_df = pd.DataFrame(data=database).transpose()
# Index on node for easier comparison
df_node_props = node_props.set_index('Node')
# Select only columns present in node_props (get rid of the extra dns-servers column)
df_db_node_props = database_df[df_node_props.columns].copy()
# Convert server lists into sets to support arithmetic below
df_node_props[COL_NAME] = df_node_props[COL_NAME].apply(set)
df_db_node_props[COL_NAME] = df_db_node_props[COL_NAME].apply(set)
# Figure out what servers are in the configs but not the database and vice versa
missing_servers = (df_db_node_props - df_node_props).rename(
columns={COL_NAME: 'missing-{}'.format(COL_NAME)})
extra_servers = (df_node_props - df_db_node_props).rename(
columns={COL_NAME: 'extra-{}'.format(COL_NAME)})
result = pd.concat([missing_servers, extra_servers], axis=1, sort=False)
result
Explanation: Note that there is an extra property in this dictionary that we don't care about comparing right now: dns-server. We will filter out this property below, before comparing the data from Batfish to that in the database.
After a little massaging, the database and Batfish data can be compared to generate two sets of servers: missing (i.e., present in the database but not in the configurations) and extra (i.e., present in the configurations but not in the database).
End of explanation
# Extract interface MTU for Ethernet0/0 interfaces on border routers
interface_mtu = bf.q.interfaceProperties(
interfaces="/border/[Ethernet0/0]",
properties="MTU").answer().frame()
interface_mtu
Explanation: Continue exploring
We showed you how to extract the database of configured NTP servers for every node and how to test that the settings are correct for a variety of desired test configurations. The underlying principles can be applied to other network configurations, such as interfaceProperties, bgpProcessConfiguration, ospfProcessConfiguration etc.
For example interfaceProperties() question can be used to fetch properties like interface MTU using a simple command.
End of explanation |
14,276 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to create a Deployment
In this notebook, we show you how to create a Deployment that runs 3 replica Pods. The Deployment owns a ReplicaSet, which is created and managed for us by the Deployment controller. We also show how to carry out a RollingUpdate to a new version of the Deployment, how a rollback to the older version could be done, and how to delete the Deployment.
Step1: Load config from default location
Step2: Create Deployment object
Step3: Fill required Deployment fields (apiVersion, kind, and metadata)
Step4: A Deployment also needs a .spec section
Step5: Add Pod template in .spec.template section
Step6: Pod template container description
Step7: Create Deployment
Step8: Update container image
Step9: Apply update (RollingUpdate)
Step10: Delete Deployment | Python Code:
from kubernetes import client, config
Explanation: How to create a Deployment
In this notebook, we show you how to create a Deployment that runs 3 replica Pods. The Deployment owns a ReplicaSet, which is created and managed for us by the Deployment controller. We also show how to carry out a RollingUpdate to a new version of the Deployment, how a rollback to the older version could be done, and how to delete the Deployment.
End of explanation
config.load_kube_config()
apps_api = client.AppsV1Api()
Explanation: Load config from default location
End of explanation
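
As an aside, when the same code runs inside a cluster there is no local kubeconfig; a common pattern (not used in this notebook) is to try the in-cluster loader first and fall back to load_kube_config:

```python
from kubernetes import config

try:
    config.load_incluster_config()   # service-account credentials mounted into the Pod
except config.ConfigException:
    config.load_kube_config()        # fall back to the local kubeconfig
```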
deployment = client.V1Deployment()
Explanation: Create Deployment object
End of explanation
deployment.api_version = "apps/v1"
deployment.kind = "Deployment"
deployment.metadata = client.V1ObjectMeta(name="nginx-deployment")
Explanation: Fill required Deployment fields (apiVersion, kind, and metadata)
End of explanation
spec = client.V1DeploymentSpec()
spec.replicas = 3
# apps/v1 Deployments require a selector that matches the Pod template labels set below
spec.selector = client.V1LabelSelector(match_labels={"app": "nginx"})
Explanation: A Deployment also needs a .spec section
End of explanation
spec.template = client.V1PodTemplateSpec()
spec.template.metadata = client.V1ObjectMeta(labels={"app": "nginx"})
spec.template.spec = client.V1PodSpec()
Explanation: Add Pod template in .spec.template section
End of explanation
container = client.V1Container()
container.name="nginx"
container.image="nginx:1.7.9"
container.ports = [client.V1ContainerPort(container_port=80)]
spec.template.spec.containers = [container]
deployment.spec = spec
Explanation: Pod template container description
End of explanation
apps_api.create_namespaced_deployment(namespace="default", body=deployment)
Explanation: Create Deployment
End of explanation
deployment.spec.template.spec.containers[0].image = "nginx:1.9.1"
Explanation: Update container image
End of explanation
apps_api.replace_namespaced_deployment(name="nginx-deployment", namespace="default", body=deployment)
Explanation: Apply update (RollingUpdate)
End of explanation
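
The introduction mentions rolling back as well; one way to sketch that with this client is to patch the image back to the previous tag. This is an illustrative addition, not part of the original notebook:

```python
rollback_patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{"name": "nginx", "image": "nginx:1.7.9"}]
            }
        }
    }
}
apps_api.patch_namespaced_deployment(
    name="nginx-deployment", namespace="default", body=rollback_patch)
```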
apps_api.delete_namespaced_deployment(name="nginx-deployment", namespace="default", body=client.V1DeleteOptions(propagation_policy="Foreground", grace_period_seconds=5))
Explanation: Delete Deployment
End of explanation |
14,277 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pythonを使って顔ランドマークで遊んでみよう
今回はPythonを使ったプログラミングをやってみます。ただの数値計算では面白くないので
WebCAMを使って自分の顔をキャプチャ
顔検出
顔ランドマーク検出
ランドマークを使って何かやる
という流れです。
使うパッケージ
この例では
OpenCV
Step1: これでdlibとcv2が使えるようになりました。dlib.あるいはcv2.の後に関数名を付けることでそれぞれの機能を呼び出せます。早速WebCAMを使えるようにしましょう。
Step2: カメラのタリーが光りましたか? 光らない場合は括弧の中の数字を1や2に変えてみて下さい。
次に画像をキャプチャします。カメラに目線を送りながら次のセルを実行しましょう。
Step3: capはWebCAMを使うための操縦桿(ハンドル)と思って下さい。それにread(読め)と命令した訳です。では,成功したか確認しましょう。readという関数(機能)は成功したか否かの結果と,画像を返してくれます。
Step4: Trueと出ましたか? 出ていれば成功です。画像を見てみましょう。
Step5: 自分の顔が出てきましたか? waitKey(2000)は2000ms待って終了する意味です。この2000を0にすると特別な意味になり,入力待ちになります。(ウィンドウを選択してアクティブな状態にしてから何かキーを押して下さい。Outに何か数字が出るでしょう。この数字はキーの認識番号とでも思って下さい。)
2. 顔検出
さて,顔検出をやってみます。OpenCVにも機能がありますがdlibの機能を使います。
Step6: detectorはdlibのget(よこせ) frontal(正面の) face(顔) detector(検出器)の結果。という意味です。要するに今度は顔検出の操縦桿がdetectorということです。では早速使ってみましょう。
Step7: "1"以上の数字が出てきたら成功です。これは検出した顔の数です。1行目で画像imgから, upsamplingを1回だけして(色々な大きさの顔に対応する処理),その結果をdetsに入れてます。
ではdetsの中身を見てみましょう。
Step8: rectangle(xxx, xxx, xxx, xxx)と出てきましたね。これはdlibのrectangleというモノです。訳がわからないのでdlib.rectangle?と実行してみましょう。
Step9: 恐らく
```Python
Docstring
Step10: 答えは四角形の左上,右下の座標です。では画像に四角形を重ねてみましょう。ここではcv2の機能を使います。使い方を見て実行してみましょう。
Step11: 顔に四角形が重なりましたか?失敗した場合には顔が正面を向いていないか,rectangleに渡す座標が間違えています。ちなみにこれを連続的に実行すると以下のようになります。(ウィンドウをアクティブにしてESCキーを押すと止まります)
Step12: 3. 顔ランドマーク検出
いよいよ顔ランドマークです。顔ランドマークは学習済みのデータ,shape_predictor_68_face_landmarks.datを使います。これは顔ランドマーク68点を検出できます。その前に仕切り直しです。また顔をカメラに向けて以下を実行して下さい。
Step13: では顔ランドマークの検出器の操縦桿を作りましょう。
Step14: もし,エラーが出てしまったらshape_predictor_68_face_landmarks.datファイルがこのノートブックファイルと同じ場所にないせいです。ネットからダウンロードしましょう。下のセルがdlib.netからbz2圧縮されたファイルを展開して保存する処理なので,一度実行していれば大丈夫です)
Step15: 手順としてはdetectorで顔検出し,predictorで検出した顔領域内の顔ランドマークを検出,という流れです。
Step16: 結果を入れたshapeを見てみようと思ったらdlib.full_object_detection at ....と出てきました。?を使って調べてみましょう。
Step17: ```Python
Docstring
Step18: Python
Docstring
Step19: 出ました。0番です。さて,どこでしょう。これはググってみましょう。ついでにdlib.pointも調べてみましょう。
Step20: ```Python
Docstring
Step21: では取り敢えず右目を囲ってみましょう。左端は36番のx,上端は38番のy,右端は39番のx,下端は41番のyを使ってみます。長くなるのでそれぞれx1, y1, x2, y2に代入してしまいましょう。
Step22: そしてimgに四角形を書き込んでみましょう。
Step23: 先程の連続処理に手を加えてみましょう。
Step24: 4. 顔ランドマークを使って何かやる
さて,最後です。ランドマークを使って雑コラをしてみます。とりあえず改変OKなものを探してここから拾ってきました。
また仕切り直しですのでカメラを見て下のセルを実行しましょう。
Step25: 今度は両目を覆いたいので(x1, y1) = (17のx, 19のy), (x2, y2) = (26のx, 29のy)としました。
Step26: では囲えてるか確認しましょう。
Step27: では画像の一部置き換えです。Pythonを使うと簡単ですが注意が必要です。
Python
置き換える画像の読み込み(cv2.imread)
置き換える画像をリサイズ(cv2.resize),サイズは(x2 - x1, y2 - y1)
元画像[yの範囲, xの範囲] = リサイズした置き換える画像
となります。Pythonは通常「行,列」で扱っているので3行目はxとyが逆になっています。
Step28: さて,確認してみましょう。
Step29: では連続処理にしてみましょう。 | Python Code:
import dlib
import cv2
Explanation: Let's play with facial landmarks in Python
This time we will do some programming in Python. Plain number crunching is not much fun, so we will:
Capture your own face with a webcam
Detect the face
Detect facial landmarks
Do something with the landmarks
That is the overall flow.
Packages used
In this example we use
OpenCV: image processing library (cv2)
dlib: machine learning library
1. Capture your own face with a webcam
First, we declare that we will use OpenCV (cv2) and dlib. It is something like #include in C.
With the cell in the green (edit) state (press Enter if it is blue), press SHIFT+Enter. This runs the Python statements inside the In[?] cell.
If no Error-like message appears, it worked. The messages are in English, but you can read them with a little effort.
End of explanation
cap = cv2.VideoCapture(0)
Explanation: Now dlib and cv2 are ready to use. Appending a function name after dlib. or cv2. calls the corresponding functionality. Let's get the webcam working right away.
End of explanation
ret, img = cap.read()
Explanation: Did the camera's tally light come on? If not, try changing the number inside the parentheses to 1 or 2.
Next we capture an image. Look toward the camera while running the next cell.
End of explanation
print(ret)
Explanation: Think of cap as the control stick (handle) for using the webcam. We just told it to read. Now let's check whether that succeeded. The read function returns a success flag and the image.
End of explanation
cv2.imshow('image', img)
cv2.waitKey(2000)
Explanation: Did it print True? If so, it worked. Let's look at the image.
End of explanation
detector = dlib.get_frontal_face_detector()
Explanation: Did your face appear? waitKey(2000) means wait 2000 ms and then finish. Setting the 2000 to 0 has a special meaning: it waits for key input. (Select the window to make it active, then press any key. Some number will appear in Out; think of it as the key's identification code.)
2. Face detection
Now let's try face detection. OpenCV has this functionality too, but we will use dlib's.
End of explanation
dets = detector(img, 1)
len(dets)
Explanation: detector holds the result of dlib's get_frontal_face_detector, that is, "give me a frontal face detector". In short, detector is now the control stick for face detection. Let's use it right away.
End of explanation
dets[0]
Explanation: If the number printed is 1 or more, it worked. This is the number of detected faces. The first line runs the detector on the image img with one round of upsampling (a step that handles faces of various sizes) and stores the result in dets.
Now let's look at what is inside dets.
End of explanation
dlib.rectangle?
Explanation: It printed rectangle(xxx, xxx, xxx, xxx). This is a dlib rectangle object. Since that is not very informative, let's run dlib.rectangle? to learn more.
End of explanation
print(dets[0].left())
print(dets[0].top())
print(dets[0].right())
print(dets[0].bottom())
Explanation: You should see something like
```Python
Docstring: This object represents a rectangular area of an image.
Init docstring:
init( (object)arg1) -> None
init( (object)arg1, (int)left, (int)top, (int)right, (int)bottom) -> None
File:
Type: class
```
That is roughly what gets displayed. Without going into detail: given a rectangle with left, top, right and bottom, you can probably guess what they mean.
End of explanation
cv2.rectangle?
img = cv2.rectangle(img, (dets[0].left(), dets[0].top()), (dets[0].right(), dets[0].bottom()), (255, 0, 0))
cv2.imshow('image', img)
cv2.waitKey(2000)
Explanation: The answer is the coordinates of the rectangle's top-left and bottom-right corners. Now let's draw the rectangle on the image. Here we use cv2; check the usage and then run it.
End of explanation
import cv2
import dlib
cap = cv2.VideoCapture(0)
detector = dlib.get_frontal_face_detector()
key = 0
while key != 27:
ret, img = cap.read()
dets = detector(img, 1)
if len(dets) > 0:
img = cv2.rectangle(img, (dets[0].left(), dets[0].top()), (dets[0].right(), dets[0].bottom()), (255, 0, 0))
cv2.imshow('image', img)
else:
cv2.imshow('image', img)
key = cv2.waitKey(10)
Explanation: 顔に四角形が重なりましたか?失敗した場合には顔が正面を向いていないか,rectangleに渡す座標が間違えています。ちなみにこれを連続的に実行すると以下のようになります。(ウィンドウをアクティブにしてESCキーを押すと止まります)
End of explanation
ret, img = cap.read()
Explanation: 3. Facial landmark detection
At last, facial landmarks. We use the pre-trained data shape_predictor_68_face_landmarks.dat, which can detect 68 facial landmark points. Before that, let's start over: face the camera again and run the following.
End of explanation
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')
Explanation: Now let's create the control stick for the facial landmark detector.
End of explanation
import urllib.request
urllib.request.urlretrieve("http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2", "shape_predictor_68_face_landmarks.dat.bz2")
import bz2
f = bz2.open("shape_predictor_68_face_landmarks.dat.bz2", "rb")
d = f.read()
f.close()
f = open("shape_predictor_68_face_landmarks.dat","wb")
f.write(d)
f.close()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')
Explanation: If you get an error, it is because the shape_predictor_68_face_landmarks.dat file is not in the same folder as this notebook. Let's download it from the net. (The cell below downloads the bz2-compressed file from dlib.net, decompresses it and saves it, so you only need to run it once.)
End of explanation
dets = detector(img, 1)
shape = predictor(img, dets[0])
shape
Explanation: The procedure is: detect the face with detector, then detect the facial landmarks inside the detected face region with predictor.
End of explanation
dlib.full_object_detection?
Explanation: Trying to look at shape, which holds the result, just prints dlib.full_object_detection at .... Let's investigate it with ?.
End of explanation
shape.parts?
Explanation: ```Python
Docstring: This object represents the location of an object in an image along with the positions of each of its constituent parts.
Init docstring:
init( (object)arg1) -> None
init( (object)arg1, (object)arg2, (object)arg3) -> object :
requires
- rect: dlib rectangle
- parts: list of dlib points
File:
Type: class
```
Apparently it has rect and parts. rect is probably the same thing as dets. What about parts?
End of explanation
shape.parts()[0]
Explanation: Python
Docstring:
parts( (full_object_detection)arg1) -> points :
A vector of dlib points representing all of the parts.
Type: method
That is what comes out. It says that calling it returns a vector packed with point locations. You can pick the n-th element of the vector with [].
End of explanation
dlib.point?
Explanation: There it is: point number 0. So where is that on the face? Let's google it. While we are at it, let's also look into dlib.point.
End of explanation
print(shape.parts()[0].x)
print(shape.parts()[0].y)
Explanation: ```Python
Docstring: This object represents a single point of integer coordinates that maps directly to a dlib::point.
Init docstring:
init( (object)arg1) -> None
init( (object)arg1, (int)x, (int)y) -> None
File:
Type: class
```
Since that is what it says, it looks like we can get the coordinates via x and y.
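As a small extra illustration (not in the original notebook), you can loop over all 68 points returned by shape.parts() and draw them with cv2.circle to see where each landmark sits:
```Python
img_points = img.copy()
for p in shape.parts():
    cv2.circle(img_points, (p.x, p.y), 2, (0, 255, 0), -1)
cv2.imshow('image', img_points)
cv2.waitKey(2000)
```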
End of explanation
x1 = shape.parts()[36].x
y1 = shape.parts()[38].y
x2 = shape.parts()[39].x
y2 = shape.parts()[41].y
Explanation: For a start, let's box the right eye. We will use the x of point 36 for the left edge, the y of point 38 for the top edge, the x of point 39 for the right edge and the y of point 41 for the bottom edge. Since that gets long, let's assign them to x1, y1, x2 and y2.
End of explanation
img = cv2.rectangle(img, (x1, y1), (x2, y2), (0, 0, 255))
cv2.imshow('image', img)
cv2.waitKey(2000)
Explanation: Now let's draw the rectangle into img.
End of explanation
import cv2
import dlib
cap = cv2.VideoCapture(0)
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')
key = 0
while key != 27:
ret, img = cap.read()
dets = detector(img, 1)
if len(dets) > 0:
shape = predictor(img, dets[0])
x1 = shape.parts()[36].x
y1 = shape.parts()[38].y
x2 = shape.parts()[39].x
y2 = shape.parts()[41].y
img = cv2.rectangle(img, (dets[0].left(), dets[0].top()), (dets[0].right(), dets[0].bottom()), (255, 0, 0))
img = cv2.rectangle(img, (x1, y1), (x2, y2), (0, 0, 255))
cv2.imshow('image', img)
key = cv2.waitKey(10)
Explanation: Let's modify the earlier continuous-processing loop.
End of explanation
ret, img = cap.read()
dets = detector(img, 1)
shape = predictor(img, dets[0])
Explanation: 4. Do something with the facial landmarks
Now for the last part. Let's make a quick-and-dirty photo collage using the landmarks. I looked for an image that is OK to modify and picked one up from here.
We are starting over again, so look at the camera and run the cell below.
End of explanation
x1 = shape.parts()[17].x
y1 = shape.parts()[19].y
x2 = shape.parts()[26].x
y2 = shape.parts()[29].y
Explanation: This time we want to cover both eyes, so we set (x1, y1) = (x of point 17, y of point 19) and (x2, y2) = (x of point 26, y of point 29).
End of explanation
img = cv2.rectangle(img, (x1, y1), (x2, y2), (0, 0, 255))
cv2.imshow('image', img)
cv2.waitKey(2000)
Explanation: Let's check that the region is boxed correctly.
End of explanation
img2 = cv2.imread('cartoon-718659_640.png', cv2.IMREAD_ANYCOLOR)
newSize = (x2 - x1, y2 - y1)
img3 = cv2.resize(img2, newSize)
img[y1:y2, x1:x2] = img3
Explanation: Now we replace part of the image. With Python this is easy, but it needs some care.
Python
Read the replacement image (cv2.imread)
Resize the replacement image (cv2.resize) to size (x2 - x1, y2 - y1)
original_image[y range, x range] = resized replacement image
That is all. Note that Python normally works in (row, column) order, so x and y are swapped in the third line.
End of explanation
cv2.imshow('image', img)
cv2.waitKey(2000)
Explanation: Now let's check the result.
End of explanation
import cv2
import dlib
cap = cv2.VideoCapture(0)
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')
img2 = cv2.imread('cartoon-718659_640.png', cv2.IMREAD_ANYCOLOR)
key = 0
while key != 27:
ret, img = cap.read()
dets = detector(img, 1)
if len(dets) > 0:
shape = predictor(img, dets[0])
x1 = shape.parts()[17].x
y1 = shape.parts()[19].y
x2 = shape.parts()[26].x
y2 = shape.parts()[29].y
newSize = (x2 - x1, y2 - y1)
img3 = cv2.resize(img2, newSize)
img[y1:y2, x1:x2] = img3
cv2.imshow('image', img)
key = cv2.waitKey(10)
Explanation: Now let's turn it into a continuous loop.
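One small addition (a suggestion, not in the original): when you are done, release the camera and close the windows; you can also keep the last composited frame with cv2.imwrite.
```Python
cv2.imwrite('last_frame.png', img)   # optional: save the last frame
cap.release()
cv2.destroyAllWindows()
```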
End of explanation |
14,278 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Theano, Lasagne
and why they matter
got no lasagne?
Install the bleeding edge version from here
Step1: theano teaser
Doing the very same thing
Step2: How does it work?
1 You define inputs of your future function;
2 You write a recipe for some transformation of inputs;
3 You compile it;
You have just got a function!
The gobbledegooky version
Step3: Compiling
So far we were using "symbolic" variables and transformations
Defining the recipe for computation, but not computing anything
To use the recipe, one should compile it
Step4: Debugging
Compilation can take a while for big functions
To avoid waiting, one can evaluate transformations without compiling
Without compilation, the code runs slower, so consider reducing input size
Step5: When debugging, it's usually a good idea to reduce the scale of your computation. E.g. if you train on batches of 128 objects, debug on 2-3.
If it's imperative that you run a large batch of data, consider compiling with mode='debug' instead
Your turn
Step6: Shared variables
The inputs and transformations only exist when function is called
Shared variables always stay in memory like global variables
Shared variables can be included into a symbolic graph
They can be set and evaluated using special methods
but they can't change value arbitrarily during symbolic graph computation
we'll cover that later;
Hint
Step7: Your turn
Step8: T.grad - why theano matters
Theano can compute derivatives and gradients automatically
Derivatives are computed symbolically, not numerically
Limitations
Step9: Why that rocks
Step10: Almost done - Updates
updates are a way of changing shared variables after a function call.
technically it's a dictionary {shared_variable
Step11: Logistic regression example (4 pts)
Implement the regular logistic regression training algorithm
Tips
Step12: lasagne
lasagne is a library for neural network building and training
it's a low-level library with almost seamless integration with theano
For a demo we shall solve the same digit recognition problem, but at a different scale
* images are now 28x28
* 10 different digits
* 50k samples
Step13: Defining network architecture
Step14: Then you could simply
define loss function manually
compute error gradient over all weights
define updates
But that's a whole lot of work and life's short
not to mention life's too short to wait for SGD to converge
Instead, we shall use Lasagne builtins
Step15: That's all, now let's train it!
We got a lot of data, so it's recommended that you use SGD
So let's implement a function that splits the training sample into minibatches
Step16: Training loop
Step17: A better network ( 4+ pts )
The quest is to create a network that gets at least 99% at test set
In case you tried several architectures and have a detailed report - 97.5% "is fine too".
+1 bonus point each 0.1% past 99%
More points for creative approach
There is a mini-report at the end that you will have to fill in. We recommend to read it first and fill in while you are iterating.
Tips on what can be done | Python Code:
import numpy as np
def sum_squares(N):
return <student.Implement_me()>
%%time
sum_squares(10**8)
Explanation: Theano, Lasagne
and why they matter
got no lasagne?
Install the bleeding edge version from here: http://lasagne.readthedocs.org/en/latest/user/installation.html
Warming up
Implement a function that computes the sum of squares of numbers from 0 to N
Use numpy or python
An array of numbers 0 to N - numpy.arange(N)
End of explanation
import theano
import theano.tensor as T
#I gonna be function parameter
N = T.scalar("a dimension",dtype='int32')
#i am a recipe on how to produce sum of squares of arange of N given N
result = (T.arange(N)**2).sum()
#Compiling the recipe of computing "result" given N
sum_function = theano.function(inputs = [N],outputs=result)
%%time
sum_function(10**8)
Explanation: theano teaser
Doing the very same thing
End of explanation
#Inputs
example_input_integer = T.scalar("scalar input",dtype='float32')
example_input_tensor = T.tensor4("four dimensional tensor input") #dtype = theano.config.floatX by default
# don't worry, we won't actually need the tensor
input_vector = T.vector("my vector", dtype='int32') # vector of integers
#Transformations
#transformation: elementwise multiplication
double_the_vector = input_vector*2
#elementwise cosine
elementwise_cosine = T.cos(input_vector)
#difference between squared vector and vector itself
vector_squares = input_vector**2 - input_vector
#Practice time:
#create two vectors of size float32
my_vector = student.init_float32_vector()
my_vector2 = student.init_one_more_such_vector()
#Write a transformation(recipe):
#(vec1)*(vec2) / (sin(vec1) +1)
my_transformation = student.implementwhatwaswrittenabove()
print( my_transformation)
#it's okay it aint a number
#What's inside the transformation
theano.printing.debugprint(my_transformation)
Explanation: How does it work?
1 You define inputs of your future function;
2 You write a recipe for some transformation of inputs;
3 You compile it;
You have just got a function!
The gobbledegooky version: you define a function as a symbolic computation graph.
There are two main kinds of entities: "Inputs" and "Transformations"
Both can be numbers, vectors, matrices, tensors, etc.
Both can be integers, floats or booleans (uint8) of various sizes.
An input is a placeholder for function parameters.
N from example above
Transformations are the recipes for computing something given inputs and transformation
(T.arange(N)**2).sum() is a chain of 3 sequential transformations of N
Theano mirrors most of the numpy vector syntax
You can almost always go with replacing "np.function" with "T.function" aka "theano.tensor.function"
np.mean -> T.mean
np.arange -> T.arange
np.cumsum -> T.cumsum
and so on.
builtin operations also work that way
np.arange(10).mean() -> T.arange(10).mean()
Once upon a blue moon the functions have different names or locations (e.g. T.extra_ops)
Ask us or google it
Still confused? We're gonna fix that.
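As a tiny side-by-side sketch (an addition, not part of the original seminar): the same mean computed eagerly with numpy and symbolically with theano.
```Python
print(np.arange(10).mean())                          # numpy: computed immediately
n = T.scalar("n", dtype="int64")                     # theano: a symbolic input...
mean_fn = theano.function([n], T.arange(n).mean())   # ...compiled into a callable
print(mean_fn(10))                                   # same value, computed on call
```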
End of explanation
inputs = [<two vectors that my_transformation depends on>]
outputs = [<What do we compute (can be a list of several transformation)>]
# The next lines compile a function that takes two vectors and computes your transformation
my_function = theano.function(
inputs,outputs,
allow_input_downcast=True #automatic type casting for input parameters (e.g. float64 -> float32)
)
#using the compiled function with python lists:
print("using python lists:")
print(my_function([1,2,3],[4,5,6]))
print()
#Or using numpy arrays:
#btw, that 'float' dtype is cast to the second parameter's dtype, which is float32
print("using numpy arrays:")
print(my_function(np.arange(10),
np.linspace(5,6,10,dtype='float')))
Explanation: Compiling
So far we were using "symbolic" variables and transformations
Defining the recipe for computation, but not computing anything
To use the recipe, one should compile it
End of explanation
#a dictionary of inputs
my_function_inputs = {
my_vector:[1,2,3],
my_vector2:[4,5,6]
}
# evaluate my_transformation
# has to match with compiled function output
print(my_transformation.eval(my_function_inputs))
# can compute transformations on the fly
print ("add 2 vectors", (my_vector + my_vector2).eval(my_function_inputs))
#!WARNING! if your transformation only depends on some inputs,
#do not provide the rest of them
print ("vector's shape:", my_vector.shape.eval({
my_vector:[1,2,3]
}))
Explanation: Debugging
Compilation can take a while for big functions
To avoid waiting, one can evaluate transformations without compiling
Without compilation, the code runs slower, so consider reducing input size
End of explanation
# Quest #1 - implement a function that computes a mean squared error of two input vectors
# Your function has to take 2 vectors and return a single number
<student.define_inputs_and_transformations()>
compute_mse =<student.compile_function()>
# Tests
from sklearn.metrics import mean_squared_error
for n in [1,5,10,10**3]:
elems = [np.arange(n),np.arange(n,0,-1), np.zeros(n),
np.ones(n),np.random.random(n),np.random.randint(100,size=n)]
for el in elems:
for el_2 in elems:
true_mse = np.array(mean_squared_error(el,el_2))
my_mse = compute_mse(el,el_2)
if not np.allclose(true_mse,my_mse):
print ('Wrong result:')
print ('mse(%s,%s)'%(el,el_2))
print ("should be: %f, but your function returned %f"%(true_mse,my_mse))
raise ValueError("Something is wrong")
print ("All tests passed")
Explanation: When debugging, it's usually a good idea to reduce the scale of your computation. E.g. if you train on batches of 128 objects, debug on 2-3.
If it's imperative that you run a large batch of data, consider compiling with mode='debug' instead
Your turn: Mean Squared Error (2 pts)
End of explanation
#creating shared variable
shared_vector_1 = theano.shared(np.ones(10,dtype='float64'))
#evaluating shared variable (outside the symbolic graph)
print ("initial value",shared_vector_1.get_value())
# within a symbolic graph you use them just as any other input or transformation, no "get value" needed
#setting new value
shared_vector_1.set_value( np.arange(5) )
#getting that new value
print ("new value", shared_vector_1.get_value())
#Note that the vector changed shape
#This is entirely allowed... unless your graph is hard-wired to work with some fixed shape
Explanation: Shared variables
The inputs and transformations only exist when function is called
Shared variables always stay in memory like global variables
Shared variables can be included into a symbolic graph
They can be set and evaluated using special methods
but they can't change value arbitrarily during symbolic graph computation
we'll cover that later;
Hint: such variables are a perfect place to store network parameters
e.g. weights or some metadata
End of explanation
# Write a recipe (transformation) that computes an elementwise transformation of shared_vector and input_scalar
#Compile as a function of input_scalar
input_scalar = T.scalar('coefficient',dtype='float32')
scalar_times_shared = <student.write_recipe()>
shared_times_n = <student.compile_function()>
print("shared:", shared_vector_1.get_value())
print("shared_times_n(5)", shared_times_n(5))
print("shared_times_n(-0.5)", shared_times_n(-0.5))
#Changing value of vector 1 (output should change)
shared_vector_1.set_value([-1,0,1])
print("shared:", shared_vector_1.get_value())
print("shared_times_n(5)", shared_times_n(5))
print("shared_times_n(-0.5)", shared_times_n(-0.5))
Explanation: Your turn
End of explanation
my_scalar = T.scalar(name='input',dtype='float64')
scalar_squared = T.sum(my_scalar**2)
#a derivative of scalar_squared with respect to my_scalar
derivative = T.grad(scalar_squared,my_scalar)
fun = theano.function([my_scalar],scalar_squared)
grad = theano.function([my_scalar],derivative)
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(-3,3)
x_squared = list(map(fun,x))
x_squared_der = list(map(grad,x))
plt.plot(x, x_squared,label="x^2")
plt.plot(x, x_squared_der, label="derivative")
plt.legend()
Explanation: T.grad - why theano matters
Theano can compute derivatives and gradients automatically
Derivatives are computed symbolically, not numerically
Limitations:
* You can only compute a gradient of a scalar transformation over one or several scalar or vector (or tensor) transformations or inputs.
* A transformation has to have float32 or float64 dtype throughout the whole computation graph
* derivative over an integer has no mathematical sense
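A minimal illustration (added here, not from the original): the gradient of a scalar loss with respect to a whole vector input is itself a vector.
```Python
v = T.vector("v", dtype="float64")
vector_loss = T.sum(v ** 2)
grad_fn = theano.function([v], T.grad(vector_loss, v))
print(grad_fn([1.0, -2.0, 3.0]))   # -> [ 2. -4.  6.]
```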
End of explanation
my_vector = T.vector("my_vector", dtype='float64')  # note: the first argument is the name; dtype must be passed explicitly
#Compute the gradient of the next weird function over my_scalar and my_vector
#warning! Trying to understand the meaning of that function may result in permanent brain damage
weird_psychotic_function = ((my_vector+my_scalar)**(1+T.var(my_vector)) +1./T.arcsinh(my_scalar)).mean()/(my_scalar**2 +1) + 0.01*T.sin(2*my_scalar**1.5)*(T.sum(my_vector)* my_scalar**2)*T.exp((my_scalar-4)**2)/(1+T.exp((my_scalar-4)**2))*(1.-(T.exp(-(my_scalar-4)**2))/(1+T.exp(-(my_scalar-4)**2)))**2
der_by_scalar,der_by_vector = <student.compute_grad_over_scalar_and_vector()>
compute_weird_function = theano.function([my_scalar,my_vector],weird_psychotic_function)
compute_der_by_scalar = theano.function([my_scalar,my_vector],der_by_scalar)
#Plotting your derivative
vector_0 = [1,2,3]
scalar_space = np.linspace(0,7)
y = [compute_weird_function(x,vector_0) for x in scalar_space]
plt.plot(scalar_space,y,label='function')
y_der_by_scalar = [compute_der_by_scalar(x,vector_0) for x in scalar_space]
plt.plot(scalar_space,y_der_by_scalar,label='derivative')
plt.grid();plt.legend()
Explanation: Why that rocks
End of explanation
# Multiply shared vector by a number and save the product back into shared vector
inputs = [input_scalar]
outputs = [scalar_times_shared] #return vector times scalar
my_updates = {
shared_vector_1:scalar_times_shared #and write this same result bach into shared_vector_1
}
compute_and_save = theano.function(inputs, outputs, updates=my_updates)
shared_vector_1.set_value(np.arange(5))
#initial shared_vector_1
print ("initial shared value:" ,shared_vector_1.get_value())
# evaluating the function (shared_vector_1 will be changed)
print ("compute_and_save(2) returns",compute_and_save(2))
#evaluate new shared_vector_1
print ("new shared value:" ,shared_vector_1.get_value())
Explanation: Almost done - Updates
updates are a way of changing shared variables after a function call.
technically it's a dictionary {shared_variable : a recipe for new value} which has to be provided when the function is compiled
That's how it works:
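To make the pattern concrete, here is a deliberately unrelated toy example (a sketch, not the solution to the exercise below): gradient descent driven purely by updates.
```Python
w_toy = theano.shared(np.float64(0.0))
toy_loss = (w_toy - 3.0) ** 2
toy_step = theano.function([], toy_loss,
                           updates={w_toy: w_toy - 0.1 * T.grad(toy_loss, w_toy)})
for _ in range(50):
    toy_step()
print(w_toy.get_value())   # converges towards 3.0
```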
End of explanation
from sklearn.datasets import load_digits
mnist = load_digits(2)
X,y = mnist.data, mnist.target
print ("y [shape - %s]:"%(str(y.shape)),y[:10])
print ("X [shape - %s]:"%(str(X.shape)))
print (X[:3])
print (y[:10])
# inputs and shareds
shared_weights = <student.code_me()>
input_X = <student.code_me()>
input_y = <student.code_me()>
predicted_y = <predicted probabilities for input_X>
loss = <logistic loss (scalar, mean over sample)>
grad = <gradient of loss over model weights>
updates = {
shared_weights: <new weights after gradient step>
}
train_function = <compile function that takes X and y, returns log loss and updates weights>
predict_function = <compile function that takes X and computes probabilities of y>
from sklearn.cross_validation import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y)
from sklearn.metrics import roc_auc_score
for i in range(5):
loss_i = train_function(X_train,y_train)
print ("loss at iter %i:%.4f"%(i,loss_i))
print ("train auc:",roc_auc_score(y_train,predict_function(X_train)))
print ("test auc:",roc_auc_score(y_test,predict_function(X_test)))
print ("resulting weights:")
plt.imshow(shared_weights.get_value().reshape(8,-1))
plt.colorbar()
Explanation: Logistic regression example (4 pts)
Implement the regular logistic regression training algorithm
Tips:
* Weights fit in as a shared variable
* X and y are potential inputs
* Compile 2 functions:
* train_function(X,y) - returns error and computes weights' new values (through updates)
* predict_fun(X) - just computes probabilities ("y") given data
We shall train on a two-class MNIST dataset
* please note that target y are {0,1} and not {-1,1} as in some formulae
End of explanation
from mnist import load_dataset
X_train,y_train,X_val,y_val,X_test,y_test = load_dataset()
print(X_train.shape, y_train.shape)
import lasagne
input_X = T.tensor4("X")
#input dimension (None means "Arbitrary" and only works at the first axis [samples])
input_shape = [None,1,28,28]
target_y = T.vector("target Y integer",dtype='int32')
Explanation: lasagne
lasagne is a library for neural network building and training
it's a low-level library with almost seamless integration with theano
For a demo we shall solve the same digit recognition problem, but at a different scale
* images are now 28x28
* 10 different digits
* 50k samples
End of explanation
#Input layer (auxiliary)
input_layer = lasagne.layers.InputLayer(shape = input_shape,input_var=input_X)
#fully connected layer, that takes input layer and applies 50 neurons to it.
# nonlinearity here is sigmoid as in logistic regression
# you can give a name to each layer (optional)
dense_1 = lasagne.layers.DenseLayer(input_layer,num_units=50,
nonlinearity = lasagne.nonlinearities.sigmoid,
name = "hidden_dense_layer")
#fully connected output layer that takes dense_1 as input and has 10 neurons (1 for each digit)
#We use softmax nonlinearity to make probabilities add up to 1
dense_output = lasagne.layers.DenseLayer(dense_1,num_units = 10,
nonlinearity = lasagne.nonlinearities.softmax,
name='output')
#network prediction (theano-transformation)
y_predicted = lasagne.layers.get_output(dense_output)
#all network weights (shared variables)
all_weights = lasagne.layers.get_all_params(dense_output)
print (all_weights)
Explanation: Defining network architecture
End of explanation
#Mean categorical crossentropy as a loss function - similar to logistic loss but for multiclass targets
loss = lasagne.objectives.categorical_crossentropy(y_predicted,target_y).mean()
#prediction accuracy
accuracy = lasagne.objectives.categorical_accuracy(y_predicted,target_y).mean()
#This function computes gradient AND composes weight updates just like you did earlier
updates_sgd = lasagne.updates.sgd(loss, all_weights,learning_rate=0.01)
#function that computes loss and updates weights
train_fun = theano.function([input_X,target_y],[loss,accuracy],updates= updates_sgd)
#function that just computes accuracy
accuracy_fun = theano.function([input_X,target_y],accuracy)
Explanation: Then you could simply
define loss function manually
compute error gradient over all weights
define updates
But that's a whole lot of work and life's short
not to mention life's too short to wait for SGD to converge
Instead, we shall use Lasagne builtins
End of explanation
# An auxiliary function that returns mini-batches for neural network training
#Parameters
# X - a tensor of images with shape (many, 1, 28, 28), e.g. X_train
# y - a vector of answers for corresponding images e.g. Y_train
#batch_size - a single number - the intended size of each batches
#What do need to implement
# 1) Shuffle data
# - Gotta shuffle X and y the same way not to break the correspondence between X_i and y_i
# 3) Split data into minibatches of batch_size
# - If data size is not a multiple of batch_size, make one last batch smaller.
# 4) return a list (or an iterator) of pairs
# - (a batch of images, the corresponding answers from y for that batch)
def iterate_minibatches(X, y, batchsize):
<return an iterable of (X_batch, y_batch) batches of images and answers for them>
#
#
#
#
#
#
#
#
#
#
#
#
#
#
#
#
#
#
#
#
#
#
#
# You feel lost and wish you stayed home tonight?
# Go search for a similar function at
# https://github.com/Lasagne/Lasagne/blob/master/examples/mnist.py
Explanation: That's all, now let's train it!
We got a lot of data, so it's recommended that you use SGD
So let's implement a function that splits the training sample into minibatches
End of explanation
import time
num_epochs = 100 #amount of passes through the data
batch_size = 50 #number of samples processed at each function call
for epoch in range(num_epochs):
# In each epoch, we do a full pass over the training data:
train_err = 0
train_acc = 0
train_batches = 0
start_time = time.time()
for batch in iterate_minibatches(X_train, y_train,batch_size):
inputs, targets = batch
train_err_batch, train_acc_batch= train_fun(inputs, targets)
train_err += train_err_batch
train_acc += train_acc_batch
train_batches += 1
# And a full pass over the validation data:
val_acc = 0
val_batches = 0
for batch in iterate_minibatches(X_val, y_val, batch_size):
inputs, targets = batch
val_acc += accuracy_fun(inputs, targets)
val_batches += 1
# Then we print the results for this epoch:
print("Epoch {} of {} took {:.3f}s".format(
epoch + 1, num_epochs, time.time() - start_time))
print(" training loss (in-iteration):\t\t{:.6f}".format(train_err / train_batches))
print(" train accuracy:\t\t{:.2f} %".format(
train_acc / train_batches * 100))
print(" validation accuracy:\t\t{:.2f} %".format(
val_acc / val_batches * 100))
test_acc = 0
test_batches = 0
for batch in iterate_minibatches(X_test, y_test, 500):
inputs, targets = batch
acc = accuracy_fun(inputs, targets)
test_acc += acc
test_batches += 1
print("Final results:")
print(" test accuracy:\t\t{:.2f} %".format(
test_acc / test_batches * 100))
if test_acc / test_batches * 100 > 99:
print ("Achievement unlocked: 80lvl Warlock!")
else:
print ("We need more magic!")
Explanation: Training loop
End of explanation
from mnist import load_dataset
X_train,y_train,X_val,y_val,X_test,y_test = load_dataset()
print(X_train.shape, y_train.shape)
import lasagne
input_X = T.tensor4("X")
#input dimension (None means "Arbitrary" and only works at the first axis [samples])
input_shape = [None,1,28,28]
target_y = T.vector("target Y integer",dtype='int32')
#Input layer (auxiliary)
input_layer = lasagne.layers.InputLayer(shape = input_shape,input_var=input_X)
<student.code_neural_network_architecture()>
dense_output = <your network output>
# Network predictions (theano-transformation)
y_predicted = lasagne.layers.get_output(dense_output)
#All weights (shared variables)
# "trainable" flag means not to return auxilary params like batch mean (for batch normalization)
all_weights = lasagne.layers.get_all_params(dense_output,trainable=True)
print (all_weights)
#loss function
loss = <loss function>
#<optionally add regularization>
accuracy = <mean accuracy score for evaluation>
#weight updates
updates = <try different update methods>
#A function that accepts X and y, returns loss functions and performs weight updates
train_fun = theano.function([input_X,target_y],[loss,accuracy],updates=updates)
#A function that just computes accuracy given X and y
accuracy_fun = theano.function([input_X,target_y],accuracy)
# training iterations
num_epochs = <how many times to iterate over the entire training set>
batch_size = <how many samples are processed at a single function call>
for epoch in range(num_epochs):
# In each epoch, we do a full pass over the training data:
train_err = 0
train_acc = 0
train_batches = 0
start_time = time.time()
for batch in iterate_minibatches(X_train, y_train,batch_size):
inputs, targets = batch
train_err_batch, train_acc_batch= train_fun(inputs, targets)
train_err += train_err_batch
train_acc += train_acc_batch
train_batches += 1
# And a full pass over the validation data:
val_acc = 0
val_batches = 0
for batch in iterate_minibatches(X_val, y_val, batch_size):
inputs, targets = batch
val_acc += accuracy_fun(inputs, targets)
val_batches += 1
# Then we print the results for this epoch:
print("Epoch {} of {} took {:.3f}s".format(
epoch + 1, num_epochs, time.time() - start_time))
print(" training loss (in-iteration):\t\t{:.6f}".format(train_err / train_batches))
print(" train accuracy:\t\t{:.2f} %".format(
train_acc / train_batches * 100))
print(" validation accuracy:\t\t{:.2f} %".format(
val_acc / val_batches * 100))
test_acc = 0
test_batches = 0
for batch in iterate_minibatches(X_test, y_test, 500):
inputs, targets = batch
acc = accuracy_fun(inputs, targets)
test_acc += acc
test_batches += 1
print("Final results:")
print(" test accuracy:\t\t{:.2f} %".format(
test_acc / test_batches * 100))
if test_acc / test_batches * 100 > 99:
print ("Achievement unlocked: 80lvl Warlock!")
else:
print ("We need more magic!")
Explanation: A better network ( 4+ pts )
The quest is to create a network that gets at least 99% at test set
In case you tried several architectures and have a detailed report - 97.5% "is fine too".
+1 bonus point each 0.1% past 99%
More points for creative approach
There is a mini-report at the end that you will have to fill in. We recommend to read it first and fill in while you are iterating.
Tips on what can be done:
Network size
MOAR neurons,
MOAR layers,
Convolutions are almost imperative
Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn!
Regularize to prevent overfitting
Add some L2 weight norm to the loss function, theano will do the rest
Can be done manually or via - http://lasagne.readthedocs.org/en/latest/modules/regularization.html
Better optimization - rmsprop, nesterov_momentum, adadelta, adagrad and so on.
Converge faster and sometimes reach better optima
It might make sense to tweak learning rate, other learning parameters, batch size and number of epochs
Dropout - to prevent overfitting
lasagne.layers.DropoutLayer(prev_layer, p=probability_to_zero_out)
Convolution layers
network = lasagne.layers.Conv2DLayer(prev_layer,
num_filters = n_neurons,
filter_size = (filter width, filter height),
nonlinearity = some_nonlinearity)
Warning! Training convolutional networks can take long without GPU.
If you are CPU-only, we still recommend trying a simple convolutional architecture
a perfect option is to set it up to run overnight and check on it in the morning.
Plenty other layers and architectures
http://lasagne.readthedocs.org/en/latest/modules/layers.html
batch normalization, pooling, etc
Nonlinearities in the hidden layers
tanh, relu, leaky relu, etc
There is a template for your solution below that you can opt to use or throw away and write it your way
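For orientation only, a stack along these lines is one possible starting point (an untested sketch with made-up layer sizes; tune everything yourself):
```Python
net = lasagne.layers.InputLayer(shape=(None, 1, 28, 28), input_var=input_X)
net = lasagne.layers.Conv2DLayer(net, num_filters=16, filter_size=(3, 3),
                                 nonlinearity=lasagne.nonlinearities.rectify)
net = lasagne.layers.MaxPool2DLayer(net, pool_size=(2, 2))
net = lasagne.layers.DropoutLayer(net, p=0.25)
net = lasagne.layers.DenseLayer(net, num_units=128,
                                nonlinearity=lasagne.nonlinearities.rectify)
dense_output = lasagne.layers.DenseLayer(net, num_units=10,
                                         nonlinearity=lasagne.nonlinearities.softmax)
```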
End of explanation |
14,279 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Making prettier (and more impactful) plots
Making prettier plots is part matter-of-taste, part an appreciation for optical perception. These days, there are a number of things you can do to make prettier plots. The guiding philosophy for these bits of advice is that it's better to start with little, and add more elements to the plot only if they actually add information (see the work of Edward Tufte).
Step1: The default plots created with matplotlib aren't bad, but they do have elements that are, at best, unnecessary. At worst, these elements detract from the display of quantitative information. We want to change this. First, let's change matplotlib's style with a built-in style sheet.
Step2: This produces something a bit more pleasing to the eye, with what probably amounts to better coloration. It replaced the box with a grid, however. Although this is useful for some plots, in particular panels of plots in which one needs to compare across different plots, it is often just unnecessary noise.
Using seaborn can help with this. We'll import seaborn's helper functions without importing its style
Step3: It almost looks like we took two steps forward and one step back
Step4: We can also go further, moving the axes a bit so they distract even less from the data, which should be front-and-center.
Step5: Now let's do some refining. Figures for exploratory work can be any size that's convenient, but when making figures for a publication, you must consider the real size of the figure in the final printed form. Considering a page in the U.S. is typically 8.5 x 11 inches, a typical figure should be no more than 4 inches wide to fit in a single column of the page. We can adjust figure sizes by giving matplotlib a bit more detail
Step6: We added some axes labels, too. Because this is a timeseries, we deliberately made the height of the figure less than the width. This is because timeseries are difficult to interpret when the variations with time are smashed together. Tufte's general rule is that no line in the timeseries be greater than 45$^\circ$; we would have a hard time doing that here with such noisy data, but going wider than tall is a step in the right direction.
Plotting to files
We can save figures in a variety of formats. It's useful to save a version as a PDF so that it can be postprocessed using vector graphics tools like Inkscape and Adobe Illustrator, but because vector graphics must be rendered by the viewer on load, it's useful to also write out a PNG.
PNGs are raster graphics
Step7: We can view the resulting PNG directly
Step8: Woah...something's wrong. The figure doesn't fit in the frame! This is because the figure elements were adjusted after the figure object was created, and so some of these elements, including the axis labels, are beyond the figure's edges. We can usually fix this with a call to plt.tight_layout to ensure everything fits in the plots we write out.
Step9: Okay...better. But it looks like the labels are a bit too big to make these dimensions work well. We can adjust these directly by changing matplotlib's settings. These are the same settings you might have set defaults for in your matplotlibrc file.
Step10: Much better!
Going further
Because we know this is an oscillating function, it might make sense to also plot a histogram of its values to get a sense of their distribution. We can accomplish this in one figure using subplots defined with a GridSpec object. There are many ways of defining the spatial extent of axes within a figure, but for this case this is probably the easiest.
Step11: Don't like the color? We can use seaborn to get at different colors in the color palette | Python Code:
# we'll use the pythonic pyplot interface
import matplotlib.pyplot as plt
# necessary for the notebook to render the plots inline
%matplotlib inline
import numpy as np
np.random.seed(42)
x = np.linspace(0, 40, 1000)
y = np.sin(np.linspace(0, 10*np.pi, 1000))
y += np.random.randn(len(x))
plt.plot(x, y)
Explanation: Making prettier (and more impactful) plots
Making prettier plots is part matter-of-taste, part an appreciation for optical perception. These days, there are a number of things you can do to make prettier plots. The guiding philosophy for these bits of advice is that it's better to start with little, and add more elements to the plot only if they actually add information (see the work of Edward Tufte).
End of explanation
# this gives us a style and color palette similar to ggplot2
plt.style.use('ggplot')
plt.plot(x, y)
Explanation: The default plots created with matplotlib aren't bad, but they do have elements that are, at best, unnecessary. At worst, these elements detract from the display of quantitative information. We want to change this. First, let's change matplotlib's style with a built-in style sheet.
End of explanation
# import seaborn's helpful functions without applying its style
import seaborn.apionly as sns
# importing seaborn can sometimes reset matplotlib's style to default
plt.style.use('ggplot')
# this will remove the noisy grids the 'ggplot' style gives
sns.set_style('ticks')
plt.plot(x, y)
Explanation: This produces something a bit more pleasing to the eye, with what probably amounts to better coloration. It replaced the box with a grid, however. Although this is useful for some plots, in particular panels of plots in which one needs to compare across different plots, it is often just unnecessary noise.
Using seaborn can help with this. We'll import seaborn's helper functions without importing its style:
End of explanation
plt.plot(x, y)
sns.despine()
Explanation: It almost looks like we took two steps forward and one step back: now we have a box again. But seaborn provides a useful function for removing axis lines: despine.
End of explanation
plt.plot(x, y)
sns.despine(offset=10)
Explanation: We can also go further, moving the axes a bit so they distract even less from the data, which should be front-and-center.
End of explanation
fig = plt.figure(figsize=(4, 2))
ax = fig.add_subplot(1,1,1)
ax.plot(x, y)
sns.despine(offset=10, ax=ax)
# let's add some axes labels to boot
ax.set_ylabel(r'displacement ($\AA$)')
ax.set_xlabel('time (ns)')
Explanation: Now let's do some refining. Figures for exploratory work can be any size that's convenient, but when making figures for a publication, you must consider the real size of the figure in the final printed form. Considering a page in the U.S. is typically 8.5 x 11 inches, a typical figure should be no more than 4 inches wide to fit in a single column of the page. We can adjust figure sizes by giving matplotlib a bit more detail:
End of explanation
fig = plt.figure(figsize=(4, 2))
ax = fig.add_subplot(1,1,1)
ax.plot(x, y)
sns.despine(offset=10, ax=ax)
# let's add some axes labels to boot
ax.set_ylabel(r'displacement ($\AA$)')
ax.set_xlabel('time (ns)')
fig.savefig('testfigure.pdf')
fig.savefig('testfigure.png', dpi=300)
Explanation: We added some axes labels, too. Because this is a timeseries, we deliberately made the height of the figure less than the width. This is because timeseries are difficult to interpret when the variations with time are smashed together. Tufte's general rule is that no line in the timeseries be greater than 45$^\circ$; we would have a hard time doing that here with such noisy data, but going wider than tall is a step in the right direction.
Plotting to files
We can save figures in a variety of formats. It's useful to save a version as a PDF so that it can be postprocessed using vector graphics tools like Inkscape and Adobe Illustrator, but because vector graphics must be rendered by the viewer on load, it's useful to also write out a PNG.
PNGs are raster graphics: they are just a matrix of pixels with four components (red, green, blue, and alpha (transparency)). This means they are quick to render with your favorite viewer, even if the plot originally had hundreds of thousands of points. However, they are not so great for making posters and final publication-quality figures, since they cannot be scaled to any size like vector graphics.
End of explanation
from IPython.display import Image
Image(filename='testfigure.png')
Explanation: We can view the resulting PNG directly:
End of explanation
fig = plt.figure(figsize=(4, 2))
ax = fig.add_subplot(1,1,1)
ax.plot(x, y)
sns.despine(offset=10, ax=ax)
# let's add some axes labels to boot
ax.set_ylabel(r'displacement ($\AA$)')
ax.set_xlabel('time (ns)')
# and now we'll also refine the y-axis ticks a bit
ax.set_ylim(-4.5, 4.5)
ax.set_yticks(np.linspace(-4, 4, 5))
ax.set_yticks(np.linspace(-3, 3, 4), minor=True)
plt.tight_layout()
fig.savefig('testfigure.pdf')
fig.savefig('testfigure.png', dpi=300)
Image(filename='testfigure.png')
Explanation: Woah...something's wrong. The figure doesn't fit in the frame! This is because the figure elements were adjusted after the figure object was created, and so some of these elements, including the axis labels, are beyond the figure's edges. We can usually fix this with a call to plt.tight_layout to ensure everything fits in the plots we write out.
End of explanation
# we can override matplotlib's settings; for example, changing the font size
plt.rcParams['font.size'] = 8
fig = plt.figure(figsize=(4, 2))
ax = fig.add_subplot(1,1,1)
ax.plot(x, y)
sns.despine(offset=10, ax=ax)
# let's add some axes labels to boot
ax.set_ylabel(r'displacement ($\AA$)')
ax.set_xlabel('time (ns)')
# and now we'll also refine the y-axis ticks a bit
ax.set_ylim(-4.5, 4.5)
ax.set_yticks(np.linspace(-4, 4, 5))
ax.set_yticks(np.linspace(-3, 3, 4), minor=True)
plt.tight_layout()
fig.savefig('testfigure.pdf')
fig.savefig('testfigure.png', dpi=300)
Image(filename='testfigure.png')
Explanation: Okay...better. But it looks like the labels are a bit too big to make these dimensions work well. We can adjust these directly by changing matplotlib's settings. These are the same settings you might have set defaults for in your matplotlibrc file.
End of explanation
from matplotlib import gridspec
fig = plt.figure(figsize=(7, 3))
gs = gridspec.GridSpec(1, 2, width_ratios=[3,1] )
# plot the timeseries
ax0 = plt.subplot(gs[0])
ax0.plot(x, y)
# let's add some axes labels to boot
ax0.set_ylabel(r'displacement ($\AA$)')
ax0.set_xlabel('time (ns)')
# and now we'll also refine the y-axis ticks a bit
ax0.set_ylim(-4.5, 4.5)
ax0.set_yticks(np.linspace(-4, 4, 5))
ax0.set_yticks(np.linspace(-3, 3, 4), minor=True)
# plot the distribution
ax1 = plt.subplot(gs[1])
ax1.hist(y, histtype='step', bins=40, normed=True, orientation='horizontal')
# this will remove the grid and the ticks on the x-axis
ax1.set_xticks([])
ax1.grid(False)
ax1.set_xlim((0,.5))
ax1.set_ylim(-4.5, 4.5)
ax1.set_yticks(np.linspace(-3, 3, 4), minor=True)
sns.despine(ax=fig.axes[0], offset=10)
sns.despine(ax=fig.axes[1], bottom=True)
plt.tight_layout()
Explanation: Much better!
Going further
Because we know this is an oscillating function, it might make sense to also plot a histogram of its values to get a sense of their distribution. We can accomplish this in one figure using subplots defined with a GridSpec object. There are many ways of defining the spatial extent of axes within a figure, but for this case this is probably the easiest.
End of explanation
sns.palplot(sns.color_palette())
from matplotlib import gridspec
fig = plt.figure(figsize=(7, 3))
gs = gridspec.GridSpec(1, 2, width_ratios=[3,1] )
# plot the timeseries
ax0 = plt.subplot(gs[0])
ax0.plot(x, y, color=sns.color_palette()[1])
# let's add some axes labels to boot
ax0.set_ylabel(r'displacement ($\AA$)')
ax0.set_xlabel('time (ns)')
# and now we'll also refine the y-axis ticks a bit
ax0.set_ylim(-4.5, 4.5)
ax0.set_yticks(np.linspace(-4, 4, 5))
ax0.set_yticks(np.linspace(-3, 3, 4), minor=True)
# plot the distribution
ax1 = plt.subplot(gs[1])
ax1.hist(y, histtype='step', bins=40, normed=True, orientation='horizontal', color=sns.color_palette()[1])
# this will remove the grid and the ticks on the x-axis
ax1.set_xticks([])
ax1.grid(False)
ax1.set_xlim((0,.5))
ax1.set_ylim(-4.5, 4.5)
ax1.set_yticks(np.linspace(-3, 3, 4), minor=True)
sns.despine(ax=fig.axes[0], offset=10)
sns.despine(ax=fig.axes[1], bottom=True)
plt.tight_layout()
Explanation: Don't like the color? We can use seaborn to get at different colors in the color palette:
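As an extra illustration (an addition to the original), you can also preview a completely different named palette and pick individual colors from it:
```Python
sns.palplot(sns.color_palette('husl', 8))      # eight evenly spaced hues
muted_green = sns.color_palette('muted')[1]    # a single color from another palette
```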
End of explanation |
14,280 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Snow drift potential
Step1: Accessing netcdf file via thredds
The wind speeds in ...metcoop_default... are post-processed and based on FX (highest 10 min average within the hour). FX is approximately 10% higher than the hourly average.
Step3: Calculating wind speed in one grid cell over the prognosis time
Step4: Plotting wind speed
Step5: Cluster wind speeds depending on the standard deviation of the wind direction (upper and lower limits) before applying a wind speed threshold.
Step6: See more at
Step7: Defining drift potential
See Gompertz function
Step8: Comparison to Föhn's model
Step9: Using real data from AROME model | Python Code:
%matplotlib inline
import netCDF4
import numpy as np
import pylab as plt
plt.rcParams['figure.figsize'] = (14, 5)
Explanation: Snow drift potential
End of explanation
ncdata = netCDF4.Dataset('http://thredds.met.no/thredds/dodsC/arome25/arome_metcoop_default2_5km_latest.nc')
x_wind_v = ncdata.variables['x_wind_10m'] # x component wrt the senorge grid - not true East!!!
y_wind_v = ncdata.variables['y_wind_10m'] # y component wrt the senorge grid - not true North!!!
lat_v = ncdata.variables['latitude']
lon_v = ncdata.variables['longitude']
time_v = ncdata.variables['time']
t = netCDF4.num2date(time_v[:], time_v.units)
Explanation: Accessing netcdf file via thredds
The wind speeds in ...metcoop_default... are post-processed and based on FX (highest 10 min average within the hour). FX is approximately 10% higher than the hourly average.
End of explanation
i_x = 200
i_y = 400
c_lat = lat_v[i_x,i_y]
c_lon = lon_v[i_x,i_y]
x_wind = x_wind_v[:,i_y,i_x]
y_wind = y_wind_v[:,i_y,i_x]
x_avg = np.mean(x_wind)
y_avg = np.mean(y_wind)
avg_wind_speed = np.sqrt(x_avg**2 + y_avg**2)
wind_speed = np.sqrt(x_wind**2 + y_wind**2)
wind_direction = np.arctan2(x_wind, y_wind) * 180 / np.pi
# using (x, y) results in N=0, W=-90, E=90, S=+/-180
# using (y, x) results in N=90, W=+/-180, E=0, S=-90
# The wind direction is most likely affected by the down scaling of the wind speed vectors; MET will provide a separate variable
# of wind direction in the netcdf files on thredds that is related to the original 2.5 km resolution.
st_threshold = 7.0 # (m/s); snow transport threshold varies depending on snow surface conditions
rel_wind_speed = np.where(wind_speed > st_threshold)
print(type(rel_wind_speed), len(rel_wind_speed), len(wind_speed))
Explanation: Calculating wind speed in one grid cell over the prognosis time
End of explanation
plt.figure()
plt.plot(t, wind_speed)
plt.plot(t, x_wind, label='x-wind', color='g')
plt.plot(t, y_wind, label='y-wind', color='k')
plt.axhline(y=0, color='lightgrey')
plt.axhline(y=st_threshold, color='lightgrey')
plt.ylabel('Wind speed (m/s)')
plt.title('Wind speed at {0:.2f}E and {1:.2f}N'.format(c_lon, c_lat))
plt.show()
plt.figure()
plt.plot(t, wind_direction)
plt.axhline(y=0, color='lightgrey')
plt.ylabel('Wind direction (deg)')
plt.title('Wind direction at {0:.2f}E and {1:.2f}N'.format(c_lon, c_lat))
plt.show()
Explanation: Plotting wind speed
End of explanation
def avg_wind_dir(uav, vav):
if uav == 0:
if vav == 0:
return 0.0
else:
if vav > 0:
return 360.0
else:
return 180.0
else:
if uav > 0:
return 90.0-180.0 / np.pi * np.arctan(vav/uav) # had to swap 90 and 270 between if-else to get it right
else:
return 270.0-180.0 / np.pi * np.arctan(vav/uav)
Explanation: Cluster wind speeds depending on the standard deviation of the wind direction (upper and lower limits) before applying a wind speed threshold.
End of explanation
# test avg_wind_dir()
uav = np.array([1., 1., -1., -1., 0.0, 1.0, 0.0])
vav = np.array([1., -1., 1., -1., 1.0, 0.0, 0.0])
exp_res = [45.0, 135.0, 315.0, 225.0, 360.0, 90.0, 0.0]
res = [avg_wind_dir(u, v) for u, v in zip(uav, vav)]
print(res, res==exp_res)
u = np.array([-10., 10., -10.])
v = np.array([1., -1., -1.])
res = [avg_wind_dir(x, y) for x, y in zip(u, v)]
print(res)
uav = np.mean(u)
vav = np.mean(v)
avg_dir = avg_wind_dir(uav, vav)
print(uav, vav, avg_dir)
Explanation: See more at: http://www.weatherapi.net/calculate-average-wind-direction/#sthash.VhvKzU0S.dpuf
End of explanation
def drift_potential(u, a=1.2, b=15, c=.16):
'''
Using a Gompertz function (a subclass of sigmoid functions) to resample the experimentally derived snow transport curve by
Föhn et al. 1980 of the form 8e-5 * u^3.
u: wind speed in m/s
a: is an asymptote; something like maximum possible additional snow depth
b: defines the displacement along the x-axis; kind of a delay before snow transport starts;
snow surface hardness will influence 'b'
c: defines the growth rate; a measure for how quickly snow transport increases with increasing wind speeds;
snow surface hardness and concurrent snowfall will influence 'c'
Default values for 'a', 'b', and 'c' represent best fit to Föhn's model.
TODO:
- link a, b, and c to snow surface conditions available from the seNorge model.
'''
# Additional loading by wind redistribution on leeward slopes
hs_wind_foehn = 8e-5 * u**3.0
hs_wind = a * np.exp(-b * np.exp(-c * u))
return hs_wind, hs_wind_foehn
Explanation: Defining drift potential
See Gompertz function
End of explanation
dummy_wind = np.arange(0,35) # m/s
dummy_hs, hs_foehn = drift_potential(dummy_wind, a=1.2, b=15, c=.16)
plt.figure()
plt.axhline(y=0.05, linestyle='--', color='g') # lower limit for little snow transport
plt.axhline(y=0.2, linestyle='--', color='y') # lower limit for intermediate snow transport
plt.axhline(y=0.5, linestyle='--', color='r') # lower limit for severe snow transport
plt.plot(dummy_wind, hs_foehn, color='0.5', label='Föhn et.al, 1980')
plt.plot(dummy_wind, dummy_hs, label='snow drift potential')
plt.ylabel('Additional snow height')
plt.legend(loc=2)
plt.show()
Explanation: Comparison to Föhn's model:
End of explanation
hs_wind, hsf = drift_potential(wind_speed)
plt.figure()
plt.plot(t, hs_wind)
plt.ylabel('Additional snow height (m)')
ax_wind = plt.gca().twinx()
ax_wind.plot(t, wind_speed, color='k')
ax_wind.set_ylabel('Wind speed (m/s)')
plt.show()
Explanation: Using real data from AROME model
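As a possible next step (not in the original notebook), the drift potential can be binned into the same classes used as horizontal limits in the plot above:
```Python
limits = [0.05, 0.2, 0.5]   # little / intermediate / severe additional snow height (m)
labels = np.array(['none', 'little', 'intermediate', 'severe'])
drift_class = labels[np.digitize(hs_wind, limits)]
print(drift_class[:10])
```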
End of explanation |
14,281 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Get Facebook statuses from Python
Download status updates and comments from Facebook pages and Facebook groups.
Uses the Facebook GraphAPI.
Facebook access token
To use the Facebook GraphAPI, you need an access token. It's basically a key that unlocks the service.
How to get access token
Step1: Define Facebook functions
Create some functions that will scrape Facebook pages.
There is a lot of code here, but you'll only interact with these functions
Step2: Get your own Facebook
Step3: Get information about Facebook page
Step4: Get status messages from Facebook page
The status messages are stored as a dictionary object.
You can see the contents by printing them like print(status["id"]). Here are all the available names | Python Code:
accesstoken = "XXX"
Explanation: Get Facebook statuses from Python
Download status updates and comments from Facebook pages and Facebook groups.
Uses the Facebook GraphAPI.
Facebook access token
To use the Facebook GraphAPI, you need an access token. It's basically a key that unlocks the service.
How to get access token:
Go to https://developers.facebook.com/tools/explorer/
Click button Get Token and then Get User Access Token.
The box Select Permissions appears. Check all boxes and click Get Access Token.
Facebook will ask for your permission. Press OK.
Copy the access token - the long string that starts with something like E3OGYAACENWbS6CRz7qiFudEose0cBAMdqw...
Come back here and replace XXX below with your access token.
End of explanation
# Install the facepy package to connect to the Facebook API. If it doesn't work, try pip3 instead.
!pip install facepy
# Import the "facepy" library that talks to Facebooks API.
from facepy.exceptions import OAuthError
from facepy import GraphAPI
import datetime
# Function that connects to the Facebook GraphAPI and returns information about you.
def getme():
print("Fetching yourself...")
graph = GraphAPI(accesstoken, version="2.11")
melist = graph.get("me?fields=id,name,email,birthday", page=True, retry=2, limit=1)
for me in melist:
print("Done.")
return(me)
print("Couldn't find you...")
# Function that gets information about a Facebook page.
def getpage(id):
graph = GraphAPI(accesstoken, version="2.11")
page = graph.get(str(id) + "?fields=id,name,link,fan_count", page=False, retry=2, limit=1)
return(page)
# Function to get statuses from a Facebook page.
def getstatuses(id, limit=0):
fields = "permalink_url,message,link,created_time,type,from,name,id,likes.limit(1).summary(true),comments.limit(1).summary(true),shares"
if limit == 0:
print("Fetching statuses (this might take some while, consider changing the limit to speed things up)...")
else:
print("Fetching statuses (limited to latest {0})...".format(limit))
graph = GraphAPI(accesstoken, version="2.11")
pages = graph.get(str(id) + "/feed?fields=" + fields, page=True, retry=2, limit=1)
l = process_pager(pages, limit)
print("Done.")
print("Got {0} statuses.".format(len(l)))
return(l)
# Function that process pager from facepy and cycle through each status message.
def process_pager(pages, limit):
l = []
i = 0
for page in pages:
for status in page["data"]:
l.append(process_status(status))
i = i + 1
if i >= limit and limit != 0:
break
if i >= limit and limit != 0:
break
return(l)
# Function that processes a status message into a more easy-to-use dictionary.
def process_status(status):
status_dict = {
"fromname": status["from"]["name"],
"fromid": status["from"]["id"],
"id": status["id"],
"type": status["type"],
"created": process_date(status["created_time"]),
"message": "" if "message" not in status.keys() else str(status["message"].encode("utf-8")),
"link": "" if "link" not in status.keys() else status["link"],
"linkname": "" if "name" not in status.keys() else status["name"].encode("utf-8"),
"likes": 0 if "likes" not in status.keys() else status["likes"]["summary"]["total_count"],
"comments": 0 if "comments" not in status.keys() else status["comments"]["summary"]["total_count"],
"shares": 0 if "shares" not in status.keys() else status["shares"]["count"],
"permalink": status["permalink_url"]
}
return(status_dict)
# Function that convert dates from Facebook to yyy-mm-dd hh:mm:ss.
def process_date(strdate):
dt = datetime.datetime.strptime(strdate, "%Y-%m-%dT%H:%M:%S+0000")
#dt = dt + datetime.timedelta(hours = -6) # About -6 hours in Swedish time.
dt = dt.strftime("%Y-%m-%d %H:%M:%S")
return(dt)
Explanation: Define Facebook functions
Create some functions that will scrape Facebook pages.
There is a lot of code here, but you'll only interact with these functions:
getme() will get information about yourself.
getpage(id) will get information about a page by its page ID, slug name or URL, and return page information.
getstatuses(pageid) will get status updates from a page by its page ID, and return a list of statuses.
End of explanation
me = getme()
me
# Your name.
print(me["name"])
# Your ID.
print(me["id"])
# Your birthday.
print(me["birthday"])
Explanation: Get your own Facebook profile information
End of explanation
guardian = getpage("http://facebook.com/theguardian")
guardian
guardian["fan_count"]
print("{0} ({1}) has {2} fans and ID {3}.".format(guardian["name"], guardian["link"], guardian["fan_count"], guardian["id"]))
Explanation: Get information about a Facebook page
End of explanation
# Get Facebook status updates from The Guardian (PageID: 10513336322).
guardian_statuses = getstatuses(10513336322, limit=20)
# Show info about each status message.
for status in guardian_statuses:
print("Created: " + status["created"])
print("Permalink: " + status["permalink"])
print("Message: " + status["message"][:60])
print("Info: {0} likes, {1} shares, {2} comments".format(status["likes"], status["shares"], status["comments"]))
print()
# Let's count the number of links among all status messages.
# Counter to store the number of links.
i = 0
# Get the total number of status messages. len() returns the number of items.
total_statuses = len(guardian_statuses)
# How many statuses are links? Do a for-loop and increment the counter by 1 if it is a link.
for status in guardian_statuses:
if status["type"] == "link":
i = i + 1
print("There are {0} status messages and {1} of them are links.".format(total_statuses, i))
# Descriptive statistics: how many likes did they get in total?
total_likes = 0
for status in guardian_statuses:
total_likes = total_likes + status["likes"]
print("Total {0} likes.".format(total_likes))
Explanation: Get status messages from a Facebook page
Each status message is stored as a dictionary object.
You can see the contents by printing them, like print(status["id"]). Here are all the available keys:
| Status | Description |
| :-------- | :------------ |
| status["fromname"] | name of sender |
| status["fromid"] | ID of sender |
| status["id"] | ID of status message |
| status["type"] | type of status message (e.g., link, event, picture) |
| status["created"] | date when message was published |
| status["message"] | status message |
| status["link"] | URL link in the status message |
| status["linkname"] | name of link that status message may contain |
| status["likes"] | number of likes status message got |
| status["comments"] | number of comments status message got |
| status["shares"] | number of shares status message got |
| status["permalink"] | URL link to Facebook post |
End of explanation |
14,282 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of global temperature
In this example we show some analysis of global surface temperature fields. The data used is the NCEP reanalysis data, which we first download.
Step1: EOF analysis of NCEP temperature field
As we might be interested in the characteristic spatiotemporal patterns of the air temperature field, we perform an EOF analysis. This is done using the EOF object.
Step2: We now plot the resulting patterns for the first 4 EOF's and also their temporal evolution.
Step3: Let's now have a look at the global mean temperature and its anomalies
We use LinePlot for this, which automatically performs an area-weighted mean calculation.
In addition, we also estimate the long-term linear trend and plot its slope as a map | Python Code:
%matplotlib inline
import os
from pycmbs.data import Data
from pycmbs.mapping import map_plot
# we download some NCEP data
if not os.path.exists('air.mon.mean.nc'):
!wget --ftp-user=anonymous --ftp-password=nothing ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.derived/surface/air.mon.mean.nc
ncep = Data('air.mon.mean.nc', 'air', read=True, label='NCEP air')
Explanation: Analysis of global temperature
In this example we show some analysis of global surface temperature fields. The data used is the NCEP reanalysis data, which we first download.
End of explanation
from pycmbs.diagnostic import EOF
E = EOF(ncep, anomalies=True)
Explanation: EOF analysis of NCEP temperature field
As we might be interested in the characteristic spatiotemporal patterns of the air temperature field, we perform an EOF analysis. This is done using the EOF object.
End of explanation
E.plot_EOF([0,1,2,3],use_basemap=True,show_coef=True)
Explanation: We now plot the resulting patterns for the first 4 EOF's and also their temporal evolution.
End of explanation
from pycmbs.plots import LinePlot
import matplotlib.pyplot as plt
f = plt.figure(figsize=(15,8))
ax1 = f.add_subplot(2,1,1)
#ax2 = f.add_subplot(2,1,2)
ax2 = ax1.twinx()
ax3 = f.add_subplot(2,2,3)
ax4 = f.add_subplot(2,2,4)
L = LinePlot(ax=ax1)
L.plot(ncep, color='grey')
L.ax.grid()
L.legend(loc='lower right')
L1 = LinePlot(ax=ax2)
L1.plot(ncep.get_deseasonalized_anomaly(base='all'), color='red') # calculate anomalies on the fly
L1.ax.grid()
L1.legend(loc='lower left')
# calculate longterm trend
R, S, I, P = ncep.temporal_trend(return_object=True) #, pthres=0.05)
f = map_plot(S, cmap_data = 'RdBu_r', vmin=-0.0001, vmax=0.0001, use_basemap=True, ax=ax3)
# we plot EOF in addition
E.plot_EOF([0], use_basemap=True, ax=ax4)
f.savefig('../temperature_trend.pdf', bbox_inches='tight')
Explanation: Let's now have a look at the global mean temperature and its anomalies
We use LinePlot for this, which automatically performs an area-weighted mean calculation.
In addition, we also estimate the long-term linear trend and plot its slope as a map
End of explanation |
14,283 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
100 pandas puzzles
Inspired by 100 Numpy exercises, here are 100* short puzzles for testing your knowledge of pandas' power.
Since pandas is a large library with many different specialist features and functions, these exercises focus mainly on the fundamentals of manipulating data (indexing, grouping, aggregating, cleaning), making use of the core DataFrame and Series objects.
Many of the exercises here are straightforward in that the solutions require no more than a few lines of code (in pandas or NumPy... don't go using pure Python or Cython!). Choosing the right methods and following best practices is the underlying goal.
The exercises are loosely divided into sections. Each section has a difficulty rating; these ratings are subjective, of course, but should be seen as a rough guide as to how inventive the required solution is.
If you're just starting out with pandas and you are looking for some other resources, the official documentation is very extensive. In particular, some good places to get a broader overview of pandas are...
10 minutes to pandas
pandas basics
tutorials
cookbook and idioms
Enjoy the puzzles!
* the list of exercises is not yet complete! Pull requests or suggestions for additional exercises, corrections and improvements are welcomed.
Importing pandas
Getting started and checking your pandas setup
Difficulty
Step1: 2. Print the version of pandas that has been imported.
Step2: 3. Print out all the version information of the libraries that are required by the pandas library.
Step3: DataFrame basics
A few of the fundamental routines for selecting, sorting, adding and aggregating data in DataFrames
Difficulty
Step4: 5. Display a summary of the basic information about this DataFrame and its data.
Step5: 6. Return the first 3 rows of the DataFrame df.
Step6: 7. Select just the 'animal' and 'age' columns from the DataFrame df.
Step7: 8. Select the data in rows [3, 4, 8] and in columns ['animal', 'age'].
Step8: 9. Select only the rows where the number of visits is greater than 3.
Step9: 10. Select the rows where the age is missing, i.e. is NaN.
Step10: 11. Select the rows where the animal is a cat and the age is less than 3.
Step11: 12. Select the rows the age is between 2 and 4 (inclusive).
Step12: 13. Change the age in row 'f' to 1.5.
Step13: 14. Calculate the sum of all visits (the total number of visits).
Step14: 15. Calculate the mean age for each different animal in df.
Step15: 16. Append a new row 'k' to df with your choice of values for each column. Then delete that row to return the original DataFrame.
Step16: 17. Count the number of each type of animal in df.
Step17: 18. Sort df first by the values in the 'age' in decending order, then by the value in the 'visit' column in ascending order.
Step18: 19. The 'priority' column contains the values 'yes' and 'no'. Replace this column with a column of boolean values
Step19: 20. In the 'animal' column, change the 'snake' entries to 'python'.
Step20: 21. For each animal type and each number of visits, find the mean age. In other words, each row is an animal, each column is a number of visits and the values are the mean ages (hint
Step21: DataFrames
Step22: 23. Given a DataFrame of numeric values, say
python
df = pd.DataFrame(np.random.random(size=(5, 3))) # a 5x3 frame of float values
how do you subtract the row mean from each element in the row?
Step23: 24. Suppose you have DataFrame with 10 columns of real numbers, for example
Step24: 25. How do you count how many unique rows a DataFrame has (i.e. ignore all rows that are duplicates)?
Step25: The next three puzzles are slightly harder...
26. You have a DataFrame that consists of 10 columns of floating-point numbers. Suppose that exactly 5 entries in each row are NaN values. For each row of the DataFrame, find the column which contains the third NaN value.
(You should return a Series of column labels.)
Step26: 27. A DataFrame has a column of groups 'grps' and and column of numbers 'vals'. For example
Step27: 28. A DataFrame has two integer columns 'A' and 'B'. The values in 'A' are between 1 and 100 (inclusive). For each group of 10 consecutive integers in 'A' (i.e. (0, 10], (10, 20], ...), calculate the sum of the corresponding values in column 'B'.
Step28: DataFrames
Step29: Here's an alternative approach based on a cookbook recipe
Step30: And another approach using a groupby
Step31: 30. Consider a DataFrame containing rows and columns of purely numerical data. Create a list of the row-column index locations of the 3 largest values.
Step32: 31. Given a DataFrame with a column of group IDs, 'grps', and a column of corresponding integer values, 'vals', replace any negative values in 'vals' with the group mean.
Step33: 32. Implement a rolling mean over groups with window size 3, which ignores NaN value. For example consider the following DataFrame
Step34: Series and DatetimeIndex
Exercises for creating and manipulating Series with datetime data
Difficulty
Step35: 34. Find the sum of the values in s for every Wednesday.
Step36: 35. For each calendar month in s, find the mean of values.
Step37: 36. For each group of four consecutive calendar months in s, find the date on which the highest value occurred.
Step38: 37. Create a DateTimeIndex consisting of the third Thursday in each month for the years 2015 and 2016.
Step39: Cleaning Data
Making a DataFrame easier to work with
Difficulty
Step40: 39. The From_To column would be better as two separate columns! Split each string on the underscore delimiter _ to give a new temporary DataFrame with the correct values. Assign the correct column names to this temporary DataFrame.
Step41: 40. Notice how the capitalisation of the city names is all mixed up in this temporary DataFrame. Standardise the strings so that only the first letter is uppercase (e.g. "londON" should become "London".)
Step42: 41. Delete the From_To column from df and attach the temporary DataFrame from the previous questions.
Step43: 42. In the Airline column, you can see some extra puctuation and symbols have appeared around the airline names. Pull out just the airline name. E.g. '(British Airways. )' should become 'British Airways'.
Step44: 43. In the RecentDelays column, the values have been entered into the DataFrame as a list. We would like each first value in its own column, each second value in its own column, and so on. If there isn't an Nth value, the value should be NaN.
Expand the Series of lists into a DataFrame named delays, rename the columns delay_1, delay_2, etc. and replace the unwanted RecentDelays column in df with delays.
Step45: The DataFrame should look much better now.
Using MultiIndexes
Go beyond flat DataFrames with additional index levels
Difficulty
Step46: 45. Check the index of s is lexicographically sorted (this is a necessary proprty for indexing to work correctly with a MultiIndex).
Step47: 46. Select the labels 1, 3 and 6 from the second level of the MultiIndexed Series.
Step48: 47. Slice the Series s; slice up to label 'B' for the first level and from label 5 onwards for the second level.
Step49: 48. Sum the values in s for each label in the first level (you should have Series giving you a total for labels A, B and C).
Step50: 49. Suppose that sum() (and other methods) did not accept a level keyword argument. How else could you perform the equivalent of s.sum(level=1)?
Step51: 50. Exchange the levels of the MultiIndex so we have an index of the form (letters, numbers). Is this new Series properly lexsorted? If not, sort it.
Step52: Minesweeper
Generate the numbers for safe squares in a Minesweeper grid
Difficulty
Step53: 52. For this DataFrame df, create a new column of zeros (safe) and ones (mine). The probability of a mine occuring at each location should be 0.4.
Step54: 53. Now create a new column for this DataFrame called 'adjacent'. This column should contain the number of mines found on adjacent squares in the grid.
(E.g. for the first row, which is the entry for the coordinate (0, 0), count how many mines are found on the coordinates (0, 1), (1, 0) and (1, 1).)
Step55: 54. For rows of the DataFrame that contain a mine, set the value in the 'adjacent' column to NaN.
Step56: 55. Finally, convert the DataFrame to grid of the adjacent mine counts
Step57: Plotting
Visualize trends and patterns in data
Difficulty
Step58: 57. Columns in your DataFrame can also be used to modify colors and sizes. Bill has been keeping track of his performance at work over time, as well as how good he was feeling that day, and whether he had a cup of coffee in the morning. Make a plot which incorporates all four features of this DataFrame.
(Hint
Step59: 58. What if we want to plot multiple things? Pandas allows you to pass in a matplotlib Axis object for plots, and plots will also return an Axis object.
Make a bar plot of monthly revenue with a line plot of monthly advertising spending (numbers in millions)
df = pd.DataFrame({"revenue"
Step60: Now we're finally ready to create a candlestick chart, which is a very common tool used to analyze stock price data. A candlestick chart shows the opening, closing, highest, and lowest price for a stock during a time window. The color of the "candle" (the thick part of the bar) is green if the stock closed above its opening price, or red if below.
This was initially designed to be a pandas plotting challenge, but it just so happens that this type of plot is just not feasible using pandas' methods. If you are unfamiliar with matplotlib, we have provided a function that will plot the chart for you so long as you can use pandas to get the data into the correct format.
Your first step should be to get the data in the correct format using pandas' time-series grouping function. We would like each candle to represent an hour's worth of data. You can write your own aggregation function which returns the open/high/low/close, but pandas has a built-in which also does this.
The below cell contains helper functions. Call day_stock_data() to generate a DataFrame containing the prices a hypothetical stock sold for, and the time the sale occurred. Call plot_candlestick(df) on your properly aggregated and formatted stock data to print the candlestick chart.
Step61: 59. Generate a day's worth of random stock data, and aggregate / reformat it so that it has hourly summaries of the opening, highest, lowest, and closing prices
Step62: 60. Now that you have your properly-formatted data, try to plot it yourself as a candlestick chart. Use the plot_candlestick(df) function above, or matplotlib's plot documentation if you get stuck. | Python Code:
import pandas as pd
Explanation: 100 pandas puzzles
Inspired by 100 Numpy exercises, here are 100* short puzzles for testing your knowledge of pandas' power.
Since pandas is a large library with many different specialist features and functions, these exercises focus mainly on the fundamentals of manipulating data (indexing, grouping, aggregating, cleaning), making use of the core DataFrame and Series objects.
Many of the exercises here are straightforward in that the solutions require no more than a few lines of code (in pandas or NumPy... don't go using pure Python or Cython!). Choosing the right methods and following best practices is the underlying goal.
The exercises are loosely divided into sections. Each section has a difficulty rating; these ratings are subjective, of course, but should be seen as a rough guide as to how inventive the required solution is.
If you're just starting out with pandas and you are looking for some other resources, the official documentation is very extensive. In particular, some good places to get a broader overview of pandas are...
10 minutes to pandas
pandas basics
tutorials
cookbook and idioms
Enjoy the puzzles!
* the list of exercises is not yet complete! Pull requests or suggestions for additional exercises, corrections and improvements are welcomed.
Importing pandas
Getting started and checking your pandas setup
Difficulty: easy
1. Import pandas under the name pd.
End of explanation
pd.__version__
Explanation: 2. Print the version of pandas that has been imported.
End of explanation
pd.show_versions()
Explanation: 3. Print out all the version information of the libraries that are required by the pandas library.
End of explanation
df = pd.DataFrame(data, index=labels)
Explanation: DataFrame basics
A few of the fundamental routines for selecting, sorting, adding and aggregating data in DataFrames
Difficulty: easy
Note: remember to import numpy using:
python
import numpy as np
Consider the following Python dictionary data and Python list labels:
``` python
data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'],
'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3],
'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
```
(This is just some meaningless data I made up with the theme of animals and trips to a vet.)
4. Create a DataFrame df from this dictionary data which has the index labels.
End of explanation
df.info()
# ...or...
df.describe()
Explanation: 5. Display a summary of the basic information about this DataFrame and its data.
End of explanation
df.iloc[:3]
# or equivalently
df.head(3)
Explanation: 6. Return the first 3 rows of the DataFrame df.
End of explanation
df.loc[:, ['animal', 'age']]
# or
df[['animal', 'age']]
Explanation: 7. Select just the 'animal' and 'age' columns from the DataFrame df.
End of explanation
df.loc[df.index[[3, 4, 8]], ['animal', 'age']]
Explanation: 8. Select the data in rows [3, 4, 8] and in columns ['animal', 'age'].
End of explanation
df[df['visits'] > 3]
Explanation: 9. Select only the rows where the number of visits is greater than 3.
End of explanation
df[df['age'].isnull()]
Explanation: 10. Select the rows where the age is missing, i.e. is NaN.
End of explanation
df[(df['animal'] == 'cat') & (df['age'] < 3)]
Explanation: 11. Select the rows where the animal is a cat and the age is less than 3.
End of explanation
df[df['age'].between(2, 4)]
Explanation: 12. Select the rows the age is between 2 and 4 (inclusive).
End of explanation
df.loc['f', 'age'] = 1.5
Explanation: 13. Change the age in row 'f' to 1.5.
End of explanation
df['visits'].sum()
Explanation: 14. Calculate the sum of all visits (the total number of visits).
End of explanation
df.groupby('animal')['age'].mean()
Explanation: 15. Calculate the mean age for each different animal in df.
End of explanation
df.loc['k'] = [5.5, 'dog', 'no', 2]
# and then deleting the new row...
df = df.drop('k')
Explanation: 16. Append a new row 'k' to df with your choice of values for each column. Then delete that row to return the original DataFrame.
End of explanation
df['animal'].value_counts()
Explanation: 17. Count the number of each type of animal in df.
End of explanation
df.sort_values(by=['age', 'visits'], ascending=[False, True])
Explanation: 18. Sort df first by the values in the 'age' column in descending order, then by the value in the 'visits' column in ascending order.
End of explanation
df['priority'] = df['priority'].map({'yes': True, 'no': False})
Explanation: 19. The 'priority' column contains the values 'yes' and 'no'. Replace this column with a column of boolean values: 'yes' should be True and 'no' should be False.
End of explanation
df['animal'] = df['animal'].replace('snake', 'python')
Explanation: 20. In the 'animal' column, change the 'snake' entries to 'python'.
End of explanation
df.pivot_table(index='animal', columns='visits', values='age', aggfunc='mean')
Explanation: 21. For each animal type and each number of visits, find the mean age. In other words, each row is an animal, each column is a number of visits and the values are the mean ages (hint: use a pivot table).
End of explanation
df.loc[df['A'].shift() != df['A']]
Explanation: DataFrames: beyond the basics
Slightly trickier: you may need to combine two or more methods to get the right answer
Difficulty: medium
The previous section was a tour through some basic but essential DataFrame operations. Below are some ways that you might need to cut your data, but for which there is no single "out of the box" method.
22. You have a DataFrame df with a column 'A' of integers. For example:
python
df = pd.DataFrame({'A': [1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 7]})
How do you filter out rows which contain the same integer as the row immediately above?
End of explanation
df.sub(df.mean(axis=1), axis=0)
Explanation: 23. Given a DataFrame of numeric values, say
python
df = pd.DataFrame(np.random.random(size=(5, 3))) # a 5x3 frame of float values
how do you subtract the row mean from each element in the row?
End of explanation
df.sum().idxmin()
Explanation: 24. Suppose you have a DataFrame with 10 columns of real numbers, for example:
python
df = pd.DataFrame(np.random.random(size=(5, 10)), columns=list('abcdefghij'))
Which column of numbers has the smallest sum? (Find that column's label.)
End of explanation
len(df) - df.duplicated(keep=False).sum()
# or perhaps more simply...
len(df.drop_duplicates(keep=False))
Explanation: 25. How do you count how many unique rows a DataFrame has (i.e. ignore all rows that are duplicates)?
End of explanation
(df.isnull().cumsum(axis=1) == 3).idxmax(axis=1)
Explanation: The next three puzzles are slightly harder...
26. You have a DataFrame that consists of 10 columns of floating--point numbers. Suppose that exactly 5 entries in each row are NaN values. For each row of the DataFrame, find the column which contains the third NaN value.
(You should return a Series of column labels.)
End of explanation
df.groupby('grp')['vals'].nlargest(3).sum(level=0)
Explanation: 27. A DataFrame has a column of groups 'grps' and a column of numbers 'vals'. For example:
python
df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'),
'vals': [12,345,3,1,45,14,4,52,54,23,235,21,57,3,87]})
For each group, find the sum of the three greatest values.
End of explanation
df.groupby(pd.cut(df['A'], np.arange(0, 101, 10)))['B'].sum()
Explanation: 28. A DataFrame has two integer columns 'A' and 'B'. The values in 'A' are between 1 and 100 (inclusive). For each group of 10 consecutive integers in 'A' (i.e. (0, 10], (10, 20], ...), calculate the sum of the corresponding values in column 'B'.
End of explanation
izero = np.r_[-1, (df['X'] == 0).nonzero()[0]] # indices of zeros
idx = np.arange(len(df))
df['Y'] = idx - izero[np.searchsorted(izero - 1, idx) - 1]
# http://stackoverflow.com/questions/30730981/how-to-count-distance-to-the-previous-zero-in-pandas-series/
# credit: Behzad Nouri
Explanation: DataFrames: harder problems
These might require a bit of thinking outside the box...
...but all are solvable using just the usual pandas/NumPy methods (and so avoid using explicit for loops).
Difficulty: hard
29. Consider a DataFrame df where there is an integer column 'X':
python
df = pd.DataFrame({'X': [7, 2, 0, 3, 4, 2, 5, 0, 3, 4]})
For each value, count the difference back to the previous zero (or the start of the Series, whichever is closer). These values should therefore be [1, 2, 0, 1, 2, 3, 4, 0, 1, 2]. Make this a new column 'Y'.
End of explanation
x = (df['X'] != 0).cumsum()
y = x != x.shift()
df['Y'] = y.groupby((y != y.shift()).cumsum()).cumsum()
Explanation: Here's an alternative approach based on a cookbook recipe:
End of explanation
df['Y'] = df.groupby((df['X'] == 0).cumsum()).cumcount()
# We're off by one before we reach the first zero.
first_zero_idx = (df['X'] == 0).idxmax()
df['Y'].iloc[0:first_zero_idx] += 1
Explanation: And another approach using a groupby:
End of explanation
df.unstack().sort_values()[-3:].index.tolist()
# http://stackoverflow.com/questions/14941261/index-and-column-for-the-max-value-in-pandas-dataframe/
# credit: DSM
Explanation: 30. Consider a DataFrame containing rows and columns of purely numerical data. Create a list of the row-column index locations of the 3 largest values.
End of explanation
def replace(group):
mask = group<0
group[mask] = group[~mask].mean()
return group
df.groupby(['grps'])['vals'].transform(replace)
# http://stackoverflow.com/questions/14760757/replacing-values-with-groupby-means/
# credit: unutbu
Explanation: 31. Given a DataFrame with a column of group IDs, 'grps', and a column of corresponding integer values, 'vals', replace any negative values in 'vals' with the group mean.
End of explanation
g1 = df.groupby(['group'])['value'] # group values
g2 = df.fillna(0).groupby(['group'])['value'] # fillna, then group values
s = g2.rolling(3, min_periods=1).sum() / g1.rolling(3, min_periods=1).count() # compute means
s.reset_index(level=0, drop=True).sort_index() # drop/sort index
# http://stackoverflow.com/questions/36988123/pandas-groupby-and-rolling-apply-ignoring-nans/
Explanation: 32. Implement a rolling mean over groups with window size 3, which ignores NaN values. For example, consider the following DataFrame:
```python
df = pd.DataFrame({'group': list('aabbabbbabab'),
'value': [1, 2, 3, np.nan, 2, 3,
np.nan, 1, 7, 3, np.nan, 8]})
df
group value
0 a 1.0
1 a 2.0
2 b 3.0
3 b NaN
4 a 2.0
5 b 3.0
6 b NaN
7 b 1.0
8 a 7.0
9 b 3.0
10 a NaN
11 b 8.0
```
The goal is to compute the Series:
0 1.000000
1 1.500000
2 3.000000
3 3.000000
4 1.666667
5 3.000000
6 3.000000
7 2.000000
8 3.666667
9 2.000000
10 4.500000
11 4.000000
E.g. the first window of size three for group 'b' has values 3.0, NaN and 3.0 and occurs at row index 5. Instead of being NaN the value in the new column at this row index should be 3.0 (just the two non-NaN values are used to compute the mean (3+3)/2)
End of explanation
dti = pd.date_range(start='2015-01-01', end='2015-12-31', freq='B')
s = pd.Series(np.random.rand(len(dti)), index=dti)
Explanation: Series and DatetimeIndex
Exercises for creating and manipulating Series with datetime data
Difficulty: easy/medium
pandas is fantastic for working with dates and times. These puzzles explore some of this functionality.
33. Create a DatetimeIndex that contains each business day of 2015 and use it to index a Series of random numbers. Let's call this Series s.
End of explanation
s[s.index.weekday == 2].sum()
Explanation: 34. Find the sum of the values in s for every Wednesday.
End of explanation
s.resample('M').mean()
Explanation: 35. For each calendar month in s, find the mean of values.
End of explanation
s.groupby(pd.TimeGrouper('4M')).idxmax()
Explanation: 36. For each group of four consecutive calendar months in s, find the date on which the highest value occurred.
End of explanation
pd.date_range('2015-01-01', '2016-12-31', freq='WOM-3THU')
Explanation: 37. Create a DateTimeIndex consisting of the third Thursday in each month for the years 2015 and 2016.
End of explanation
df['FlightNumber'] = df['FlightNumber'].interpolate().astype(int)
Explanation: Cleaning Data
Making a DataFrame easier to work with
Difficulty: easy/medium
It happens all the time: someone gives you data containing malformed strings, Python lists and missing data. How do you tidy it up so you can get on with the analysis?
Take this monstrosity as the DataFrame to use in the following puzzles:
python
df = pd.DataFrame({'From_To': ['LoNDon_paris', 'MAdrid_miLAN', 'londON_StockhOlm',
'Budapest_PaRis', 'Brussels_londOn'],
'FlightNumber': [10045, np.nan, 10065, np.nan, 10085],
'RecentDelays': [[23, 47], [], [24, 43, 87], [13], [67, 32]],
'Airline': ['KLM(!)', '<Air France> (12)', '(British Airways. )',
'12. Air France', '"Swiss Air"']})
(It's some flight data I made up; it's not meant to be accurate in any way.)
38. Some values in the FlightNumber column are missing. These numbers are meant to increase by 10 with each row, so 10055 and 10075 need to be put in place. Fill in these missing numbers and make the column an integer column (instead of a float column).
End of explanation
temp = df.From_To.str.split('_', expand=True)
temp.columns = ['From', 'To']
Explanation: 39. The From_To column would be better as two separate columns! Split each string on the underscore delimiter _ to give a new temporary DataFrame with the correct values. Assign the correct column names to this temporary DataFrame.
End of explanation
temp['From'] = temp['From'].str.capitalize()
temp['To'] = temp['To'].str.capitalize()
Explanation: 40. Notice how the capitalisation of the city names is all mixed up in this temporary DataFrame. Standardise the strings so that only the first letter is uppercase (e.g. "londON" should become "London".)
End of explanation
df = df.drop('From_To', axis=1)
df = df.join(temp)
Explanation: 41. Delete the From_To column from df and attach the temporary DataFrame from the previous questions.
End of explanation
df['Airline'] = df['Airline'].str.extract('([a-zA-Z\s]+)', expand=False).str.strip()
# note: using .strip() gets rid of any leading/trailing spaces
Explanation: 42. In the Airline column, you can see some extra punctuation and symbols have appeared around the airline names. Pull out just the airline name. E.g. '(British Airways. )' should become 'British Airways'.
End of explanation
# there are several ways to do this, but the following approach is possibly the simplest
delays = df['RecentDelays'].apply(pd.Series)
delays.columns = ['delay_{}'.format(n) for n in range(1, len(delays.columns)+1)]
df = df.drop('RecentDelays', axis=1).join(delays)
Explanation: 43. In the RecentDelays column, the values have been entered into the DataFrame as a list. We would like each first value in its own column, each second value in its own column, and so on. If there isn't an Nth value, the value should be NaN.
Expand the Series of lists into a DataFrame named delays, rename the columns delay_1, delay_2, etc. and replace the unwanted RecentDelays column in df with delays.
End of explanation
letters = ['A', 'B', 'C']
numbers = list(range(10))
mi = pd.MultiIndex.from_product([letters, numbers])
s = pd.Series(np.random.rand(30), index=mi)
Explanation: The DataFrame should look much better now.
Using MultiIndexes
Go beyond flat DataFrames with additional index levels
Difficulty: medium
Previous exercises have seen us analysing data from DataFrames equipped with a single index level. However, pandas also gives you the possibility of indexing your data using multiple levels. This is very much like adding new dimensions to a Series or a DataFrame. For example, a Series is 1D, but by using a MultiIndex with 2 levels we gain much of the same functionality as a 2D DataFrame.
The set of puzzles below explores how you might use multiple index levels to enhance data analysis.
To warm up, we'll make a Series with two index levels.
44. Given the lists letters = ['A', 'B', 'C'] and numbers = list(range(10)), construct a MultiIndex object from the product of the two lists. Use it to index a Series of random numbers. Call this Series s.
End of explanation
s.index.is_lexsorted()
# or more verbosely...
s.index.lexsort_depth == s.index.nlevels
Explanation: 45. Check the index of s is lexicographically sorted (this is a necessary property for indexing to work correctly with a MultiIndex).
End of explanation
s.loc[:, [1, 3, 6]]
Explanation: 46. Select the labels 1, 3 and 6 from the second level of the MultiIndexed Series.
End of explanation
s.loc[pd.IndexSlice[:'B', 5:]]
# or equivalently without IndexSlice...
s.loc[slice(None, 'B'), slice(5, None)]
Explanation: 47. Slice the Series s; slice up to label 'B' for the first level and from label 5 onwards for the second level.
End of explanation
s.sum(level=0)
Explanation: 48. Sum the values in s for each label in the first level (you should have Series giving you a total for labels A, B and C).
End of explanation
# One way is to use .unstack()...
# This method should convince you that s is essentially
# just a regular DataFrame in disguise!
s.unstack().sum(axis=0)
Explanation: 49. Suppose that sum() (and other methods) did not accept a level keyword argument. How else could you perform the equivalent of s.sum(level=1)?
End of explanation
new_s = s.swaplevel(0, 1)
# check
new_s.index.is_lexsorted()
# sort
new_s = new_s.sort_index()
Explanation: 50. Exchange the levels of the MultiIndex so we have an index of the form (letters, numbers). Is this new Series properly lexsorted? If not, sort it.
End of explanation
p = pd.tools.util.cartesian_product([np.arange(X), np.arange(Y)])
df = pd.DataFrame(np.asarray(p).T, columns=['x', 'y'])
Explanation: Minesweeper
Generate the numbers for safe squares in a Minesweeper grid
Difficulty: medium to hard
If you've ever used an older version of Windows, there's a good chance you've played with [Minesweeper](https://en.wikipedia.org/wiki/Minesweeper_(video_game)). If you're not familiar with the game, imagine a grid of squares: some of these squares conceal a mine. If you click on a mine, you lose instantly. If you click on a safe square, you reveal a number telling you how many mines are found in the squares that are immediately adjacent. The aim of the game is to uncover all squares in the grid that do not contain a mine.
In this section, we'll make a DataFrame that contains the necessary data for a game of Minesweeper: coordinates of the squares, whether the square contains a mine and the number of mines found on adjacent squares.
51. Let's suppose we're playing Minesweeper on a 5 by 4 grid, i.e.
X = 5
Y = 4
To begin, generate a DataFrame df with two columns, 'x' and 'y' containing every coordinate for this grid. That is, the DataFrame should start:
x y
0 0 0
1 0 1
2 0 2
End of explanation
# One way is to draw samples from a binomial distribution.
df['mine'] = np.random.binomial(1, 0.4, X*Y)
Explanation: 52. For this DataFrame df, create a new column of zeros (safe) and ones (mine). The probability of a mine occuring at each location should be 0.4.
End of explanation
# Here is one way to solve using merges.
# It's not necessary the optimal way, just
# the solution I thought of first...
df['adjacent'] = \
df.merge(df + [ 1, 1, 0], on=['x', 'y'], how='left')\
.merge(df + [ 1, -1, 0], on=['x', 'y'], how='left')\
.merge(df + [-1, 1, 0], on=['x', 'y'], how='left')\
.merge(df + [-1, -1, 0], on=['x', 'y'], how='left')\
.merge(df + [ 1, 0, 0], on=['x', 'y'], how='left')\
.merge(df + [-1, 0, 0], on=['x', 'y'], how='left')\
.merge(df + [ 0, 1, 0], on=['x', 'y'], how='left')\
.merge(df + [ 0, -1, 0], on=['x', 'y'], how='left')\
.iloc[:, 3:]\
.sum(axis=1)
# An alternative solution is to pivot the DataFrame
# to form the "actual" grid of mines and use convolution.
# See https://github.com/jakevdp/matplotlib_pydata2013/blob/master/examples/minesweeper.py
from scipy.signal import convolve2d
mine_grid = df.pivot_table(columns='x', index='y', values='mine')
counts = convolve2d(mine_grid.astype(complex), np.ones((3, 3)), mode='same').real.astype(int)
df['adjacent'] = (counts - mine_grid).ravel('F')
Explanation: 53. Now create a new column for this DataFrame called 'adjacent'. This column should contain the number of mines found on adjacent squares in the grid.
(E.g. for the first row, which is the entry for the coordinate (0, 0), count how many mines are found on the coordinates (0, 1), (1, 0) and (1, 1).)
End of explanation
df.loc[df['mine'] == 1, 'adjacent'] = np.nan
Explanation: 54. For rows of the DataFrame that contain a mine, set the value in the 'adjacent' column to NaN.
End of explanation
df.drop('mine', axis=1)\
.set_index(['y', 'x']).unstack()
Explanation: 55. Finally, convert the DataFrame to grid of the adjacent mine counts: columns are the x coordinate, rows are the y coordinate.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
df = pd.DataFrame({"xs":[1,5,2,8,1], "ys":[4,2,1,9,6]})
df.plot.scatter("xs", "ys", color = "black", marker = "x")
Explanation: Plotting
Visualize trends and patterns in data
Difficulty: medium
To really get a good understanding of the data contained in your DataFrame, it is often essential to create plots: if you're lucky, trends and anomalies will jump right out at you. This functionality is baked into pandas and the puzzles below explore some of what's possible with the library.
56. Pandas is highly integrated with the plotting library matplotlib, and makes plotting DataFrames very user-friendly! Plotting in a notebook environment usually makes use of the following boilerplate:
python
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
matplotlib is the plotting library which pandas' plotting functionality is built upon, and it is usually aliased to plt.
%matplotlib inline tells the notebook to show plots inline, instead of creating them in a separate window.
plt.style.use('ggplot') is a style theme that most people find agreeable, based upon the styling of R's ggplot package.
For starters, make a scatter plot of this random data, but use black X's instead of the default markers.
df = pd.DataFrame({"xs":[1,5,2,8,1], "ys":[4,2,1,9,6]})
Consult the documentation if you get stuck!
End of explanation
df = pd.DataFrame({"productivity":[5,2,3,1,4,5,6,7,8,3,4,8,9],
"hours_in" :[1,9,6,5,3,9,2,9,1,7,4,2,2],
"happiness" :[2,1,3,2,3,1,2,3,1,2,2,1,3],
"caffienated" :[0,0,1,1,0,0,0,0,1,1,0,1,0]})
df.plot.scatter("hours_in", "productivity", s = df.happiness * 30, c = df.caffienated)
Explanation: 57. Columns in your DataFrame can also be used to modify colors and sizes. Bill has been keeping track of his performance at work over time, as well as how good he was feeling that day, and whether he had a cup of coffee in the morning. Make a plot which incorporates all four features of this DataFrame.
(Hint: If you're having trouble seeing the plot, try multiplying the Series which you choose to represent size by 10 or more)
The chart doesn't have to be pretty: this isn't a course in data viz!
df = pd.DataFrame({"productivity":[5,2,3,1,4,5,6,7,8,3,4,8,9],
"hours_in" :[1,9,6,5,3,9,2,9,1,7,4,2,2],
"happiness" :[2,1,3,2,3,1,2,3,1,2,2,1,3],
"caffienated" :[0,0,1,1,0,0,0,0,1,1,0,1,0]})
End of explanation
df = pd.DataFrame({"revenue":[57,68,63,71,72,90,80,62,59,51,47,52],
"advertising":[2.1,1.9,2.7,3.0,3.6,3.2,2.7,2.4,1.8,1.6,1.3,1.9],
"month":range(12)
})
ax = df.plot.bar("month", "revenue", color = "green")
df.plot.line("month", "advertising", secondary_y = True, ax = ax)
ax.set_xlim((-1,12))
Explanation: 58. What if we want to plot multiple things? Pandas allows you to pass in a matplotlib Axis object for plots, and plots will also return an Axis object.
Make a bar plot of monthly revenue with a line plot of monthly advertising spending (numbers in millions)
df = pd.DataFrame({"revenue":[57,68,63,71,72,90,80,62,59,51,47,52],
"advertising":[2.1,1.9,2.7,3.0,3.6,3.2,2.7,2.4,1.8,1.6,1.3,1.9],
"month":range(12)
})
End of explanation
#This function is designed to create semi-interesting random stock price data
import numpy as np
def float_to_time(x):
return str(int(x)) + ":" + str(int(x%1 * 60)).zfill(2) + ":" + str(int(x*60 % 1 * 60)).zfill(2)
def day_stock_data():
#NYSE is open from 9:30 to 4:00
time = 9.5
price = 100
results = [(float_to_time(time), price)]
while time < 16:
elapsed = np.random.exponential(.001)
time += elapsed
if time > 16:
break
price_diff = np.random.uniform(.999, 1.001)
price *= price_diff
results.append((float_to_time(time), price))
df = pd.DataFrame(results, columns = ['time','price'])
df.time = pd.to_datetime(df.time)
return df
def plot_candlestick(agg):
fig, ax = plt.subplots()
for time in agg.index:
ax.plot([time.hour] * 2, agg.loc[time, ["high","low"]].values, color = "black")
ax.plot([time.hour] * 2, agg.loc[time, ["open","close"]].values, color = agg.loc[time, "color"], linewidth = 10)
ax.set_xlim((8,16))
ax.set_ylabel("Price")
ax.set_xlabel("Hour")
ax.set_title("OHLC of Stock Value During Trading Day")
plt.show()
Explanation: Now we're finally ready to create a candlestick chart, which is a very common tool used to analyze stock price data. A candlestick chart shows the opening, closing, highest, and lowest price for a stock during a time window. The color of the "candle" (the thick part of the bar) is green if the stock closed above its opening price, or red if below.
This was initially designed to be a pandas plotting challenge, but it just so happens that this type of plot is just not feasible using pandas' methods. If you are unfamiliar with matplotlib, we have provided a function that will plot the chart for you so long as you can use pandas to get the data into the correct format.
Your first step should be to get the data in the correct format using pandas' time-series grouping function. We would like each candle to represent an hour's worth of data. You can write your own aggregation function which returns the open/high/low/close, but pandas has a built-in which also does this.
The below cell contains helper functions. Call day_stock_data() to generate a DataFrame containing the prices a hypothetical stock sold for, and the time the sale occurred. Call plot_candlestick(df) on your properly aggregated and formatted stock data to print the candlestick chart.
End of explanation
df = day_stock_data()
df.head()
df.set_index("time", inplace = True)
agg = df.resample("H").ohlc()
agg.columns = agg.columns.droplevel()
agg["color"] = (agg.close > agg.open).map({True:"green",False:"red"})
agg.head()
Explanation: 59. Generate a day's worth of random stock data, and aggregate / reformat it so that it has hourly summaries of the opening, highest, lowest, and closing prices
End of explanation
plot_candlestick(agg)
Explanation: 60. Now that you have your properly-formatted data, try to plot it yourself as a candlestick chart. Use the plot_candlestick(df) function above, or matplotlib's plot documentation if you get stuck.
End of explanation |
14,284 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Project Euler, first problem
Step1: Let's create a predicate that returns True if a number is a multiple of 3 or 5 and False otherwise.
Step2: Given the predicate function P a suitable program is
Step3: So one step through all seven terms brings the counter to 15 and the total to 60.
Step4: We only want the terms less than 1000.
Step5: That means we want to run the full list of numbers sixty-six times to get to 990 and then the first four numbers 3 2 1 3 to get to 999.
Step6: This form uses no extra storage and produces no unused summands. It's
good but there's one more trick we can apply. The list of seven terms
takes up at least seven bytes. But notice that all of the terms are less
than four, and so each can fit in just two bits. We could store all
seven terms in just fourteen bits and use masking and shifts to pick out
each term as we go. This will use less space and save time loading whole
integer terms from the list.
3 2 1 3 1 2 3
0b 11 10 01 11 01 10 11 == 14811
Step7: And so we have at last
Step8: Let's refactor.
14811 7 [PE1.2] times pop
14811 4 [PE1.2] times pop
14811 n [PE1.2] times pop
n 14811 swap [PE1.2] times pop
Step9: Now we can simplify the definition above
Step10: Here's our joy program all in one place. It doesn't make so much sense, but if you have read through the above description of how it was derived I hope it's clear.
PE1.1 == + [+] dupdip
PE1.2 == [3 & PE1.1] dupdip 2 >>
PE1.3 == 14811 swap [PE1.2] times pop
PE1 == 0 0 66 [7 PE1.3] times 4 PE1.3 pop
Generator Version
It's a little clunky iterating sixty-six times though the seven numbers then four more. In the Generator Programs notebook we derive a generator that can be repeatedly driven by the x combinator to produce a stream of the seven numbers repeating over and over again.
Step11: We know from above that we need sixty-six times seven then four more terms to reach up to but not over one thousand.
Step12: Here they are...
Step13: ...and they do sum to 999.
Step14: Now we can use PE1.1 to accumulate the terms as we go, and then pop the generator and the counter from the stack when we're done, leaving just the sum.
Step15: A little further analysis renders iteration unnecessary.
Consider finding the sum of the positive integers less than or equal to ten.
Step16: Instead of summing them, observe
Step17: Generalizing to Blocks of Terms
We can apply the same reasoning to the PE1 problem.
Between 0 and 990 inclusive there are sixty-six "blocks" of seven terms each, starting with
Step18: (Interesting that the sequence of seven numbers appears again in the rightmost digit of each term.)
Step19: Since there are sixty-six blocks and we are pairing them up, there must be thirty-three pairs, each of which sums to 6945. We also have these additional unpaired terms between 990 and 1000 | Python Code:
from notebook_preamble import J, V, define
Explanation: Project Euler, first problem: "Multiples of 3 and 5"
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
End of explanation
define('P == [3 % not] dupdip 5 % not or')
V('80 P')
Explanation: Let's create a predicate that returns True if a number is a multiple of 3 or 5 and False otherwise.
End of explanation
define('PE1.1 == + [+] dupdip')
V('0 0 3 PE1.1')
V('0 0 [3 2 1 3 1 2 3] [PE1.1] step')
Explanation: Given the predicate function P a suitable program is:
PE1 == 1000 range [P] filter sum
This function generates a list of the integers from 0 to 999, filters
that list by P, and then sums the result.
Logically this is fine, but pragmatically we are doing more work than we
should be; we generate one thousand integers but actually use less than
half of them. A better solution would be to generate just the multiples
we want to sum, and to add them as we go rather than storing them and
adding summing them at the end.
At first I had the idea to use two counters and increase them by three
and five, respectively. This way we only generate the terms that we
actually want to sum. We have to proceed by incrementing the counter
that is lower, or if they are equal, the three counter, and we have to
take care not to double add numbers like 15 that are multiples of both
three and five.
This seemed a little clunky, so I tried a different approach.
Consider the first few terms in the series:
3 5 6 9 10 12 15 18 20 21 ...
Subtract each number from the one after it (subtracting 0 from 3):
3 5 6 9 10 12 15 18 20 21 24 25 27 30 ...
0 3 5 6 9 10 12 15 18 20 21 24 25 27 ...
-------------------------------------------
3 2 1 3 1 2 3 3 2 1 3 1 2 3 ...
You get this lovely repeating palindromic sequence:
3 2 1 3 1 2 3
To make a counter that increments by factors of 3 and 5 you just add
these differences to the counter one-by-one in a loop.
To make use of this sequence to increment a counter and sum terms as we
go we need a function that will accept the sum, the counter, and the next
term to add, and that adds the term to the counter and a copy of the
counter to the running sum. This function will do that:
PE1.1 == + [+] dupdip
End of explanation
1000 / 15
66 * 15
1000 - 990
Explanation: So one step through all seven terms brings the counter to 15 and the total to 60.
End of explanation
999 - 990
Explanation: We only want the terms less than 1000.
End of explanation
define('PE1 == 0 0 66 [[3 2 1 3 1 2 3] [PE1.1] step] times [3 2 1 3] [PE1.1] step pop')
J('PE1')
Explanation: That means we want to run the full list of numbers sixty-six times to get to 990 and then the first four numbers 3 2 1 3 to get to 999.
End of explanation
0b11100111011011
define('PE1.2 == [3 & PE1.1] dupdip 2 >>')
V('0 0 14811 PE1.2')
V('3 3 3702 PE1.2')
V('0 0 14811 7 [PE1.2] times pop')
Explanation: This form uses no extra storage and produces no unused summands. It's
good but there's one more trick we can apply. The list of seven terms
takes up at least seven bytes. But notice that all of the terms are less
than four, and so each can fit in just two bits. We could store all
seven terms in just fourteen bits and use masking and shifts to pick out
each term as we go. This will use less space and save time loading whole
integer terms from the list.
3 2 1 3 1 2 3
0b 11 10 01 11 01 10 11 == 14811
End of explanation
define('PE1 == 0 0 66 [14811 7 [PE1.2] times pop] times 14811 4 [PE1.2] times popop')
J('PE1')
Explanation: And so we have at last:
End of explanation
define('PE1.3 == 14811 swap [PE1.2] times pop')
Explanation: Let's refactor.
14811 7 [PE1.2] times pop
14811 4 [PE1.2] times pop
14811 n [PE1.2] times pop
n 14811 swap [PE1.2] times pop
End of explanation
define('PE1 == 0 0 66 [7 PE1.3] times 4 PE1.3 pop')
J('PE1')
Explanation: Now we can simplify the definition above:
End of explanation
define('PE1.terms == [0 swap [dup [pop 14811] [] branch [3 &] dupdip 2 >>] dip rest cons]')
J('PE1.terms 21 [x] times')
Explanation: Here's our joy program all in one place. It doesn't make so much sense, but if you have read through the above description of how it was derived I hope it's clear.
PE1.1 == + [+] dupdip
PE1.2 == [3 & PE1.1] dupdip 2 >>
PE1.3 == 14811 swap [PE1.2] times pop
PE1 == 0 0 66 [7 PE1.3] times 4 PE1.3 pop
Generator Version
It's a little clunky iterating sixty-six times though the seven numbers then four more. In the Generator Programs notebook we derive a generator that can be repeatedly driven by the x combinator to produce a stream of the seven numbers repeating over and over again.
End of explanation
J('7 66 * 4 +')
Explanation: We know from above that we need sixty-six times seven then four more terms to reach up to but not over one thousand.
End of explanation
J('PE1.terms 466 [x] times pop')
Explanation: Here they are...
End of explanation
J('[PE1.terms 466 [x] times pop] run sum')
Explanation: ...and they do sum to 999.
End of explanation
J('0 0 PE1.terms 466 [x [PE1.1] dip] times popop')
Explanation: Now we can use PE1.1 to accumulate the terms as we go, and then pop the generator and the counter from the stack when we're done, leaving just the sum.
End of explanation
J('[10 9 8 7 6 5 4 3 2 1] sum')
Explanation: A little further analysis renders iteration unnecessary.
Consider finding the sum of the positive integers less than or equal to ten.
End of explanation
define('F == dup ++ * 2 floordiv')
V('10 F')
Explanation: Instead of summing them, observe:
10 9 8 7 6
+ 1 2 3 4 5
---- -- -- -- --
11 11 11 11 11
11 * 5 = 55
From the above example we can deduce that the sum of the first N positive integers is:
(N + 1) * N / 2
(The formula also works for odd values of N, I'll leave that to you if you want to work it out or you can take my word for it.)
End of explanation
J('[3 5 6 9 10 12 15] reverse [978 980 981 984 985 987 990] zip')
J('[3 5 6 9 10 12 15] reverse [978 980 981 984 985 987 990] zip [sum] map')
Explanation: Generalizing to Blocks of Terms
We can apply the same reasoning to the PE1 problem.
Between 0 and 990 inclusive there are sixty-six "blocks" of seven terms each, starting with:
[3 5 6 9 10 12 15]
And ending with:
[978 980 981 984 985 987 990]
If we reverse one of these two blocks and sum pairs...
End of explanation
J('[ 3 5 6 9 10 12 15] reverse [978 980 981 984 985 987 990] zip [sum] map sum')
Explanation: (Interesting that the sequence of seven numbers appears again in the rightmost digit of each term.)
End of explanation
J('6945 33 * [993 995 996 999] cons sum')
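# Editor's check, in plain Python rather than Joy: thirty-three paired blocks of 6945
# plus the four unpaired terms below 1000 give the expected total.
33 * 6945 + sum([993, 995, 996, 999])  # -> 233168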
Explanation: Since there are sixty-six blocks and we are pairing them up, there must be thirty-three pairs, each of which sums to 6945. We also have these additional unpaired terms between 990 and 1000:
993 995 996 999
So we can give the "sum of all the multiples of 3 or 5 below 1000" like so:
End of explanation |
14,285 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
04-notebook-rough-draft for final project
I am working on the Kaggle Grupo Bimbo competition dataset for this project.
Link to Grupo Bimbo Kaggle competition
Step1: Part 1. Identify the Problem
Problem
Step2: Part 3. Parse, Mine, and Refine the data
Perform exploratory data analysis and verify the quality of the data.
Check columns and counts to drop any non-generic or near-empty columns
Step3: Check for missing values and drop or impute
Step4: Wrangle the data to address any issues from above checks
Step5: Perform exploratory data analysis
Step6: Check and convert all data types to numerical
Step7: Part 4. Build a Model
Create a cross validation split, select and build a model, evaluate the model, and refine the model
Create cross validation sets
Step8: Build a model
Step9: Evaluate the model
Step10: Part 5
Step11: Load Kaggle test data, make predictions using model, and generate submission file
Step12: Kaggle score | Python Code:
import numpy as np
import pandas as pd
from sklearn import cross_validation
from sklearn import metrics
from sklearn import linear_model
from sklearn import ensemble
#QUESTION - what is diff bw random forest classifier and rf regressor? (A classifier predicts discrete class labels, a regressor predicts continuous values; demand is continuous, so a regressor is the right fit here.)
import seaborn as sns # uncommented: seaborn is used for the factorplot later in this notebook
import matplotlib.pyplot as plt
sns.set(style="whitegrid", font_scale=1)
%matplotlib inline
Explanation: 04-notebook-rough-draft for final project
I am working on the Kaggle Grupo Bimbo competition dataset for this project.
Link to Grupo Bimbo Kaggle competition: Kaggle-GrupoBimbo (https://www.kaggle.com/c/grupo-bimbo-inventory-demand)
End of explanation
# Load train data
# Given size of training data, I chose to use only 10% for speed reasons
# QUESTION - how can I randomize with python? I used SQL to create the random sample below (a pandas .sample sketch follows the head check).
df_train = pd.read_csv("../train_random10percent.csv")
# Check head
df_train.head()
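# Editor's sketch (answering the QUESTION above): the random 10% subset could be drawn in
# pandas instead of SQL, assuming the full ../train.csv exists and fits in memory:
# df_train = pd.read_csv("../train.csv").sample(frac=0.1, random_state=42)
# The same .sample method also gives a quick random peek at the rows already loaded:
df_train.sample(3)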
# Load test data
df_test = pd.read_csv("../test.csv")
# Check head. I noticed that I will have to drop certain columns so that test and train sets have the same features.
df_test.head()
# Given that I cannot use a significant number of the variables in the train data, I created additional features using the mean.
# I grouped on product ID since I will ultimately be predicting demand for each product.
df_train_mean = df_train.groupby('Producto_ID').mean().add_suffix('_mean').reset_index()
df_train_mean.head()
# From above, adding 2 additional features: the average sales units and the average demand.
df_train2 = df_train.merge(df_train_mean[['Producto_ID','Venta_uni_hoy_mean', 'Demanda_uni_equil_mean']],how='inner',on='Producto_ID')
df_train2.sample(5)
# Adding features to the test set in order to match train set
df_test2 = df_test.merge(df_train_mean[['Producto_ID','Venta_uni_hoy_mean', 'Demanda_uni_equil_mean']],how='left',on='Producto_ID')
df_test2.head()
Explanation: Part 1. Identify the Problem
Problem: Given various sales/client/product data, we want to predict demand for each product at each store on a weekly basis. Per the train dataset, the average demand for a product at a store per week is 7.2 units. However, this does not factor in cases in which store managers under-predict demand for a product, which we can see when returns=0 for that week. There are 74,180,464 records in the train data, of which 71,636,003 records have returns=0, or approx 96%. This generally means that managers probably often under-predict product demand (unless they are exactly on the money, which seems unlikely).
Goals: The goal is to predict demand for each product at each store on a weekly basis while avoiding under-predicting demand.
Hypothesis: As stated previously, the average product demand at a store per week is 7.2 units per the train data. However, given the likelihood of managers underpredicint product demand, I hypothesize a good model should return a number higher than 7.2 units to more accurately predict demand.
Part 2. Acquire the Data
Kaggle has provided five files for this dataset:
train.csv: Use for building a model (contains target variable "Demanda_uni_equil")
test.csv: Use for submission file (fill in for target variable "Demanda_uni_equil")
cliente_tabla.csv: Contains client names (can be joined with train/test on Cliente_ID)
producto_tabla.csv: Contains product names (can be join with train/test on Producto_ID)
town_state.csv: Contains town and state (can be join with train/test on Agencia_ID)
Notes: I will further split train.csv to generate my own cross validation set. However, I will use all of train.csv to train my final model since Kaggle has already supplied a test dataset. Additionally, I am only using a random 10% of the train data given to me for EDA and model development. Using the entire train dataset proved to be too time consuming for the quick iternations needed for initial modeling building and EDA efforts. I plan to use 100% of the train dataset once I build a model I'm comfortable with. I may have to explore using EC2 for this effort.
End of explanation
# Check columns
print "train dataset columns:"
print df_train2.columns.values
print
print "test dataset columns:"
print df_test2.columns.values
# Check counts
print "train dataset counts:"
print df_train2.count()
print
print "test dataset counts:"
print df_test2.count()
Explanation: Part 3. Parse, Mine, and Refine the data
Perform exploratory data analysis and verify the quality of the data.
Check columns and counts to drop any non-generic or near-empty columns
End of explanation
# Check counts for missing values in each column
print "train dataset missing values:"
print df_train2.isnull().sum()
print
print "test dataset missing values:"
print df_test2.isnull().sum()
Explanation: Check for missing values and drop or impute
End of explanation
# Drop columns not included in test dataset
df_train2 = df_train2.drop(['Venta_uni_hoy', 'Venta_hoy', 'Dev_uni_proxima', 'Dev_proxima'], axis=1)
# Check data
df_train2.head()
# Drop blank values in test set and replace with mean
# Replace missing values for venta_uni_hoy_mean using mean
df_test2.loc[(df_test2['Venta_uni_hoy_mean'].isnull()), 'Venta_uni_hoy_mean'] = df_test2['Venta_uni_hoy_mean'].dropna().mean()
# Replace missing values for demand using mean
df_test2.loc[(df_test2['Demanda_uni_equil_mean'].isnull()), 'Demanda_uni_equil_mean'] = df_test2['Demanda_uni_equil_mean'].dropna().mean()
print "test dataset missing values:"
print df_test2.isnull().sum()
Explanation: Wrangle the data to address any issues from above checks
End of explanation
# Get summary statistics for data
df_train2.describe()
#Show box plot of demand by week
sns.factorplot(
x='Semana',
y='Demanda_uni_equil',
data=df_train2,
kind='box')
Explanation: Perform exploratory data analysis
End of explanation
# Check data types
df_train.dtypes
#these are all numerical but are not continuous values and therefore don't have relative significant to one another, except for week
#however, creating dummy variables for all these is too memory intensive. as such, might have to explore using a random forest model
#in addition to the linear regression model
Explanation: Check and convert all data types to numerical
End of explanation
#create cross validation sets
#set target variable name
target = 'Demanda_uni_equil'
#set X and y
X = df_train2.drop([target], axis=1)
y = df_train2[target]
# create separate training and test sets with 60/40 train/test split
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size= .4)
#QUESTION - do i have to cross validate when using kaggle data? prob not.
Explanation: Part 4. Build a Model
Create a cross validation split, select and build a model, evaluate the model, and refine the model
Create cross validation sets
End of explanation
#create linear regression object
#lm = linear_model.LinearRegression()
#create random forest object
#rf = ensemble.RandomForestClassifier(n_estimators=10) - did not work due to memory errors during fitting
rf = ensemble.RandomForestRegressor(n_estimators=100)
#train the model using the training data
rf.fit(X_train, y_train)
Explanation: Build a model
End of explanation
# Check score on test set
print "Score: %0.3f" % rf.score(X_test,y_test)
#score on test set using 100 n_estimators was .46
Explanation: Evaluate the model
End of explanation
# Set target variable name
target = 'Demanda_uni_equil'
# Set X_train and y_train
X_train = df_train2.drop([target], axis=1)
y_train = df_train2[target]
# Build tuned model
#create linear regression object
#lm = linear_model.LinearRegression()
#create random forest object
rf = ensemble.RandomForestRegressor(n_estimators=50)
#n_estimators is too memory instensive for 30gb of ram, so trying 50
#train the model using the training data
#lm.fit(X_train,y_train)
rf.fit(X_train,y_train)
# Score tuned model
print "Score: %0.3f" % rf.score(X_train, y_train)
#score is .906 when n_estimators=10, .93 when n_estimators = 50
Explanation: Part 5: Present the Results
Generate summary of findings and kaggle submission file.
NOTE: For the purposes of generating summary narratives and kaggle submission, we can train the model on the entire training data provided in train.csv.
Load Kaggle training data and use entire data to train tuned model
End of explanation
df_test.head()
#create data frame for submission
df_sub = df_test[['id']]
#df_test2 = df_test2.drop('id', axis=1)
#predict using tuned model
df_sub['Demanda_uni_equil'] = rf.predict(df_test2)
df_sub.describe()
d = df_sub['Demanda_uni_equil']
d[d<0] = 0
df_sub.describe()
# Write submission file
df_sub.to_csv("mysubmission5.csv", index=False)
Explanation: Load Kaggle test data, make predictions using model, and generate submission file
End of explanation
#notes
#do somethign wtih time series to check for seasonality or general trending?
#look into boosting
#look into products features
Explanation: Kaggle score :
using linear regression 0.75682
using random forest: 0.65784
End of explanation |
14,286 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In-Class Coding Lab
Step1: Reading from the file
Let's start with some code to read the lines of text from CCL-mbox-tiny.txt this code reads the contents of the file one line at a time and prints those contents back out.
f.readlines() reads the file line-by-line. NOTE
Step2: 1.1 You Code
Step3: Finding the SPAM Confidence lines
Next, we'll focus on only getting lines addressing lines in the mailbox file that start with X-DSPAM-Confidence
Step4: Parsing out the confidence value
The final step is to figure out how to parse out the confidence value from the string.
For example for the given line
Step5: Putting it all together
Now that we have all the working parts, let's put it all together.
0. use the file named 'mbox-short.txt'
1. line count is 0
2. total confidence is 0
3. open mailbox file
4. for each line in file
5. if line starts with `X-DSPAM-Confidence
Step6: Who are these emails from?
Now that you got it working once, let's repeat the process to discover who sent each email. The approach is similar to the spam confidence example but instead we search for lines that start with From
Step7: Metacognition
Rate your comfort level with this week's material so far.
1 ==> I don't understand this at all yet and need extra help. If you choose this please try to articulate that which you do not understand to the best of your ability in the questions and comments section below.
2 ==> I can do this with help or guidance from other people or resources. If you choose this level, please indicate HOW this person helped you in the questions and comments section below.
3 ==> I can do this on my own without any help.
4 ==> I can do this on my own and can explain/teach how to do it to others.
--== Double-Click Here then Enter a Number 1 through 4 Below This Line ==--
Questions And Comments
Record any questions or comments you have about this lab that you would like to discuss in your recitation. It is expected you will have questions if you did not complete the code sections correctly. Learning how to articulate what you do not understand is an important skill of critical thinking. Write them down here so that you remember to ask them in your recitation. We expect you will take responsilbity for your learning and ask questions in class.
--== Double-click Here then Enter Your Questions Below this Line ==-- | Python Code:
! curl https://raw.githubusercontent.com/mafudge/datasets/master/ist256/07-Files/mbox-tiny.txt -o mbox-tiny.txt
! curl https://raw.githubusercontent.com/mafudge/datasets/master/ist256/07-Files/mbox-short.txt -o mbox-short.txt
Explanation: In-Class Coding Lab: Files
The goals of this lab are to help you to understand:
Reading data from a file all at once or one line at a time.
Searching for data in files
Parsing text data to numerical data.
How to build complex programs incrementally.
Parsing Email Headers
For this lab, we will write a program to read data from a mailbox file like mbox-tiny.txt or mbox-short.txt. These files contain raw email data, and in that data are attributes like who the message is To, From, the subject and SPAM confidence number for each message, like this:
X-DSPAM-Confidence:0.8475
Our goal will be to find each of these lines in the file, and extract the confidence number (In this case 0.8475), with the end-goal of calculating the average SPAM Confidence of all the emails in the file.
Getting the files we need
Run this code to fetch the files we need for this lab. This linux code downloads them from the internet and saves them to your folder on jupyterhub.
End of explanation
filename = "mbox-tiny.txt"
with open(filename, 'r') as f:
for line in f.readlines():
print(line.strip())
Explanation: Reading from the file
Let's start with some code to read the lines of text from CCL-mbox-tiny.txt this code reads the contents of the file one line at a time and prints those contents back out.
f.readlines() reads the file line-by-line. NOTE: We could read this file all at once, but it would be more difficult to process that way.
line.strip() is required to remove the end-line character from each line since the print() function includes one already.
End of explanation
# TODO debut this code to print the number of lines in the file
line_count = 0
filename = "mbox-tiny.txt"
with open(filename, 'r')
for line in f.readlines():
line_count = 1
print("there are {line_count} lines in the file")
Explanation: 1.1 You Code: Debug
The following code should print the number of lines of text in the file 'mbox-tiny.txt. There should be 332 lines. Debug this code to get it working.
There should be 332 lines.
End of explanation
filename = "mbox-tiny.txt"
with open(filename, 'r') as f:
for line in f.readlines():
if line.startswith("X-DSPAM-Confidence:"):
print(line.strip())
Explanation: Finding the SPAM Confidence lines
Next, we'll focus on only getting lines addressing lines in the mailbox file that start with X-DSPAM-Confidence:. We do this by including an if statement inside the for loop.
This is a very common pattern in computing used to search through massive amouts of data.
Rather than print ALL 332 lines in mbox-tiny.txt we only print lines that begin with X-DSPAM-Confidence: There are only 5 such rows in this file.
End of explanation
# TODO: Write code here
line = 'X-DSPAM-Confidence: 0.8475'
Explanation: Parsing out the confidence value
The final step is to figure out how to parse out the confidence value from the string.
For example for the given line: X-DSPAM-Confidence: 0.8475 we need to get the value 0.8475 as a float.
The strategy here is to use the string .replace() method to replace X-DSPAM-Confidence: with an empty string"". After we do that we can call the float() function to parse the string number to a float.
1.2 You Code
Write code to parse the value 0.8475 from the text string 'X-DSPAM-Confidence: 0.8475'.
End of explanation
#TODO Write Code here
Explanation: Putting it all together
Now that we have all the working parts, let's put it all together.
0. use the file named 'mbox-short.txt'
1. line count is 0
2. total confidence is 0
3. open mailbox file
4. for each line in file
5. if line starts with `X-DSPAM-Confidence:`
6. remove `X-DSPAM-Confidence:` from line and convert to float
7. increment line count
8. add spam confidence to total confidence
9. print average confidence (total confidence/line count)
1.3 You Code
End of explanation
#TODO Write Code here
Explanation: Who are these emails from?
Now that you got it working once, let's repeat the process to discover who sent each email. The approach is similar to the spam confidence example but instead we search for lines that start with From:. For example:
From: [email protected]
To extact the email we remove the From: portion from the line.
0. use the file named 'mbox-short.txt'
1. open mailbox file
2. for each line in file
3. if line starts with `From:`
4. remove `From:` from line and strip out any remaining whitespace
5. print email
1.4 You code
End of explanation
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit()
Explanation: Metacognition
Rate your comfort level with this week's material so far.
1 ==> I don't understand this at all yet and need extra help. If you choose this please try to articulate that which you do not understand to the best of your ability in the questions and comments section below.
2 ==> I can do this with help or guidance from other people or resources. If you choose this level, please indicate HOW this person helped you in the questions and comments section below.
3 ==> I can do this on my own without any help.
4 ==> I can do this on my own and can explain/teach how to do it to others.
--== Double-Click Here then Enter a Number 1 through 4 Below This Line ==--
Questions And Comments
Record any questions or comments you have about this lab that you would like to discuss in your recitation. It is expected you will have questions if you did not complete the code sections correctly. Learning how to articulate what you do not understand is an important skill of critical thinking. Write them down here so that you remember to ask them in your recitation. We expect you will take responsilbity for your learning and ask questions in class.
--== Double-click Here then Enter Your Questions Below this Line ==--
End of explanation |
14,287 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Non-uniform random distributions
In the previous section we learned how to generate random deviates with
a uniform probability distribution in an interval $[a,b]$. This
distributioon is normalized, so that $$\int _a^b {P(x)dx}=1.$$ Hence,
$P(x)=1/(b-a)$.
Now, suppose that we generate a sequence ${x_i}$ and we take some
function of it to generate ${y(x_i)}={y_i}$. This new sequence is
going to be distributed according to some probability density $P(y)$,
such that $$P(y)dy=P(x)dx$$ or $$P(y)=P(x)\frac{dx}{dy}.$$
If we want to generate a desired normalized distribution $P(y)$, we need
to solve the differential equation
Step1: von Neumann rejection
A simple and ingenious method for generating random points with a
probability distribution $P(x)$ was deduced by von Neumann. Draw a plot
with you probability distribution, and on the same graph, plot another
curve $f(x)$ which has finite area and lies everywhere above your
original distribution. We will call $f(x)$ the “comparison function”.
Generate random pairs $(x_i,y_i)$ with uniform distribution inside
$f(x)$. Whenever the point lies inside the area of the original
probability, we accept it, otherwise, we reject it. All the accepted
points will be uniformly distributed within the original area, and
therefore will have the desired distribution. The fraction of points
accepted/rejected will deppend on the ratio between the two areas. The
closer the comparison function $f(x)$ resembles $P(x)$, the more points
will be accepted. Ideally, for $P(x)=f(x)$, all the points will be
accepted, and none rejected. However, in practice, this is not always
possible, but we can try to pick $f(x)$ such that we minimize the
fraction of rejected points.
It only remains how to pick a number with probability $f(x)$. For this
purpose, we utilize the method shown in the previous section, using a
function whose indefinite intergral is know analitically, and is also
analitically invertible. We then pick a random number $x$ and retrieve
the corresponding $y(x)$ according to ([random_invert]). Then, we
generate a second random number and we use the rejection criterion.
An equivalent procedure consists of picking the second number between 0
and 1 and accept or reject according to wether is it respectively less
than or greater than the ratio $P(x)/f(x)$. Clearly, if $f(x)=P(x)$ all the points will be accepted.
Step2: Challenge 9.1 | Python Code:
%matplotlib inline
import numpy as np
from matplotlib import pyplot
N = 10000
r = np.random.random(N)
xlambda = 0.1
x = -np.log(r)/xlambda
binwidth=xlambda*5
pyplot.hist(x,bins=np.arange(0.,100., binwidth),density=True);
pyplot.plot(np.arange(0.,100.,binwidth),xlambda*np.exp(-xlambda*np.arange(0.,100.,binwidth)),ls='-',c='red',lw=3);
Explanation: Non-uniform random distributions
In the previous section we learned how to generate random deviates with
a uniform probability distribution in an interval $[a,b]$. This
distributioon is normalized, so that $$\int _a^b {P(x)dx}=1.$$ Hence,
$P(x)=1/(b-a)$.
Now, suppose that we generate a sequence ${x_i}$ and we take some
function of it to generate ${y(x_i)}={y_i}$. This new sequence is
going to be distributed according to some probability density $P(y)$,
such that $$P(y)dy=P(x)dx$$ or $$P(y)=P(x)\frac{dx}{dy}.$$
If we want to generate a desired normalized distribution $P(y)$, we need
to solve the differential equation: $$\frac{dx}{dy}=P(y).$$ But the
solution of this is $$x=\int _0^y {P(y')dy'}=F(y).$$ Therefore,
$$y(x)=F^{-1}(x),
$$ where $F^{-1}$ is the inverse of $F$.
Exponential distribution
As an example, let us take $y(x)=-\ln{(x)}$ with $P(x)$ representing a
uniform distribution in the interval $[0,1]$. Then
$$P(y)=\frac{dx}{dy}=e^{-y},$$ which is distributed exponentially. This
distribution occurs frequently in real problems such as the radioactive
decay of nuclei. You can also see that the quantity $y/\lambda$ has the
distribution $\lambda
e^{-\lambda y}$.
End of explanation
N = 100000
xmax = 60
ymax = xlambda
rx = np.random.random(N)*xmax
ry = np.random.random(N)*ymax
values = []
Nin = 0
for i in range(N):
if(ry[i] <= xlambda*np.exp(-xlambda*rx[i])):
# Accept
values.append(rx[i])
Nin += 1
x = np.asarray(values)
print("Acceptance Ratio: ",Nin/float(N))
binwidth=xlambda*5
pyplot.hist(x,bins=np.arange(0.,100., binwidth),density=True);
pyplot.plot(np.arange(0.,100.,binwidth),xlambda*np.exp(-xlambda*np.arange(0.,100.,binwidth)),ls='-',c='red',lw=3);
Explanation: von Neumann rejection
A simple and ingenious method for generating random points with a
probability distribution $P(x)$ was deduced by von Neumann. Draw a plot
with you probability distribution, and on the same graph, plot another
curve $f(x)$ which has finite area and lies everywhere above your
original distribution. We will call $f(x)$ the “comparison function”.
Generate random pairs $(x_i,y_i)$ with uniform distribution inside
$f(x)$. Whenever the point lies inside the area of the original
probability, we accept it, otherwise, we reject it. All the accepted
points will be uniformly distributed within the original area, and
therefore will have the desired distribution. The fraction of points
accepted/rejected will deppend on the ratio between the two areas. The
closer the comparison function $f(x)$ resembles $P(x)$, the more points
will be accepted. Ideally, for $P(x)=f(x)$, all the points will be
accepted, and none rejected. However, in practice, this is not always
possible, but we can try to pick $f(x)$ such that we minimize the
fraction of rejected points.
It only remains how to pick a number with probability $f(x)$. For this
purpose, we utilize the method shown in the previous section, using a
function whose indefinite intergral is know analitically, and is also
analitically invertible. We then pick a random number $x$ and retrieve
the corresponding $y(x)$ according to ([random_invert]). Then, we
generate a second random number and we use the rejection criterion.
An equivalent procedure consists of picking the second number between 0
and 1 and accept or reject according to wether is it respectively less
than or greater than the ratio $P(x)/f(x)$. Clearly, if $f(x)=P(x)$ all the points will be accepted.
End of explanation
N = 100000
x = np.zeros(N)
delta = 2.
sigma = 20.
sigma2 = sigma**2
def metropolis(xold):
xtrial = np.random.random()
xtrial = xold+(2*xtrial-1)*delta
weight = np.exp(-0.5*(xtrial**2-xold**2)/sigma2)
# weight = np.exp(-0.5*(xtrial-xold)/sigma2)
# if(xtrial < 0):
# weight = 0
xnew = xold
if(weight >= 1): #Accept
xnew = xtrial
else:
r = np.random.random()
if(r <= weight): #Accept
xnew = xtrial
return xnew
xwalker = 20.
Nwarmup = 5
for i in range(Nwarmup):
xwalker = metropolis(xwalker)
x[0] = xwalker
tot = x[0]
for i in range(1,N):
x0 = x[i-1]
for j in range(10):
x0 = metropolis(x0)
x[i] = metropolis(x0)
binwidth=sigma/10
pyplot.hist(x,bins=np.arange(-50,50., binwidth),density=True);
norm = 1./(sigma*np.sqrt(2*np.pi))
pyplot.plot(np.arange(-50.,50.,binwidth),norm*np.exp(-0.5*np.arange(-50.,50.,binwidth)**2/sigma2),ls='-',c='red',lw=3);
Explanation: Challenge 9.1:
Improve the acceptance ratio by using a linear function $f(x)=1-\alpha x$, with a ppropriate choice of $\alpha$
Random walk methods: the Metropolis algorithm
Suppose that we want to generate random variables according to an
arbitrary probability density $P(x)$. The Metropolis algorithm produces
a “random walk” of points ${x_i}$ whose asymptotic probability
approaches $P(x)$ after a large number of steps. The random walk is
defined by a “transition probability” $w(x_i \rightarrow x_j)$ for one
value $x_i$ to another $x_j$ in order that the distribution of points
$x_0$, $x_1$, $x_2$, ... converges to $P(x)$. In can be shown that it is
sufficient (but not necessary) to satisfy the “detailed balance”
condition $$p(x_i)w(x_i \rightarrow x_j) = p(x_j)w(x_j \rightarrow x_i).
$$ This relation dos not specify $w(x_i \rightarrow x_j)$
uniquely. A simple choice is
$$w(x_i \rightarrow x_j)=\min{\left[ 1,\frac{P(x_j)}{P(x_i)} \right] }.$$
This choice can be described by the following steps. Suppose that the
“random walker” is a position $x_n$. To generate $x_{n+1}$ we
choose a trial position $x_t=x_n+\delta _n$ , where the $\delta _n$
is a random number in the interval $[-\delta ,\delta]$.
Calculate $w=P(x_t)/P(x_n)$.
If $w \geq 1$ we accept the change and let $x_{n+1}=x_t$.
If $w \leq 1$, generate a random number $r$.
If $r \leq w$, accept the change and let $x_{n+1} = x_t$.
If the trial change is not accepted, the let $x_{n+1}=x_n$.
It is necessary to sample a number of points of the random walk before
the asymptotic probability $P(x)$ is attained. How do we choose the
“step size” $\delta$? If $\delta$ is too large, only a small fraction of
changes will be accepted and the sampling will be inefficient. If
$\delta$ is too small, a large number will be accepted, but it would
take too long to sample $P(x)$ over the whole interval of interest.
Ideally, we want at least 1/3-1/2 of the trial steps to be accepted. We
also want to choose $x_0$ such that the distribution ${x_i}$ converges
to $P(x)$ as quickly as possible. An obvious choice is to begin the
random walk at the point where $P(x)$ is maximum.
Exercise 9.1: The Gaussian distribution
Use the Metropolis algorithm to generate a Gaussian distribution
$P(x)=A \exp{(-x^2/2\sigma ^2)}$. Is the numerical value of the
normalization constant $A$ relevant? Determine the qualitative
dependence of the acceptance ratio and the equilibrium time on the
maximum step size $\delta$. One possible criterion for equilibrium
is that $\langle x^2
\rangle \approx \sigma ^2$. For $\sigma = 1$, what is a reasonable
choice of $\delta$? (choose $x_0 = 0$.)
Plot the asymptotic probability distribution generated by the
Metropolis algorithm.
End of explanation |
14,288 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Astro 283 Homework 6</h1>
Bijan Pourhamzeh
Step1: <h3> Problem 1 </h3>
To estimate the values of $(\alpha,\beta)$, we maximize the posterior function $p(\alpha,\beta\mid{D})$ with respect to $\alpha$ and $\beta$. From Baye's rule, and assuming the prior $p(\alpha,\beta)$ is uniform, this is equivalent to maximizing the likelihood function since
$$
p(\alpha,\beta\mid{D}) \propto p({D}\mid\alpha,\beta) = \prod_i p(x_i\mid\alpha,\beta)
$$
where
$$
p(x_i\mid \alpha,\beta) = \left{
\begin{array}{ll}
\alpha^{-1}\exp\left(-\frac{x_i+\beta}{\alpha}\right)I_0\left(\frac{2\sqrt{x_i\beta}}{\alpha}\right) & \quad x_i\geq 0\
0 & \quad\text{otherwise}
\end{array}
\right.
$$
Using <code>scipy.optimize.fmin</code>, we find that the parameters which maximize the posterior function are
\begin{eqnarray}
\alpha &\approx& 7.23\
\beta &\approx& 7.87\times 10^{-4}
\end{eqnarray}
To compare this fit to one of a Gaussian that has a variance equal to the mean, we evaluate the ratio
\begin{eqnarray}
\frac{P(R\mid{D})}{P(G\mid{D})} &=& \frac{\int p(\alpha,\beta,R\mid{D}) d\alpha d\beta}{\int p(\mu,G\mid{D})d\mu}\
&=& \frac{\int p({D}\mid\alpha,\beta,R)p(\alpha,\beta) d\alpha d\beta}{\int p({D}\mid\mu,G)p(\mu) d\mu}\
&=& \frac{\mu^\text{max}-\mu^\text{min}}{(\alpha^\text{max}-\alpha^\text{min})(\beta^\text{max}-\beta^\text{min})}\frac{\int p({D}\mid\alpha,\beta,R) d\alpha d\beta}{\int p({D}\mid\mu,G) d\mu}\
&\approx& \frac{\mu^\text{max}-\mu^\text{min}}{(\alpha^\text{max}-\alpha^\text{min})(\beta^\text{max}-\beta^\text{min})}\frac{p({D}\mid\alpha_0,\beta_0,R)}{p({D}\mid\mu_0,G)}\frac{\int f(\alpha,\beta,R) d\alpha d\beta}{\int g(\mu,G) d\mu}
\end{eqnarray}
where we make the approximation that the likelihoods can be approximated by the likelihoods at the best fit values times some functions $f,g$ which are functions of the parameters, best-fit values, and errors associated with the fit. We can assume that $f,g$ are approximately Gaussian, which means the integrals will be proportional to the errors, $\delta\alpha,\delta\beta,\delta\mu$. This gives us
$$
\frac{P(R\mid{D})}{P(G\mid{D})} \sim \frac{\delta\alpha\delta\beta}{(\alpha^\text{max}-\alpha^\text{min})(\beta^\text{max}-\beta^\text{min})}\frac{\mu^\text{max}-\mu^\text{min}}{\delta\mu}\frac{p({D}\mid\alpha_0,\beta_0,R)}{p({D}\mid\mu_0,G)}
$$
We can evaluate the last factor numerically, which comes to
$$
\frac{p({D}\mid\alpha_0,\beta_0,R)}{p({D}\mid\mu_0,G)} \sim 10^{51}
$$
The factors in front we assume are not close to this order of magnitude, so we can say that
$$
\frac{P(R\mid{D})}{P(G\mid{D})} \gg 1
$$
and therefore the Rice distribution is a much better fit than the Poisson-like Gaussian.
Step2: <h3>Problem 2</h3>
Given the 3d galaxy model, we obtain an image by integrating along the first axis to obtain a perfect scene. Since the axes are pixels, integation just amounts to summing along the first index. We want to convolve the point spread function (PSF) with this image. We do this in the following steps | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import iv
from scipy.optimize import fmin
from csv import reader
from astropy.io import fits
from __future__ import print_function
Explanation: <h1>Astro 283 Homework 6</h1>
Bijan Pourhamzeh
End of explanation
#Read in student samples
samples = reader(open('rice.dat', 'rt'))
x_dat = []
for sam in samples:
x_dat.append(float(sam[0]))
x_dat = np.array(x_dat)
#print(x_dat)
def rice_likelihood(params, x):
'''Returns p(x|\alpha,\beta) defined above'''
return 1/params[0]*np.exp(-(x+params[1])/params[0])*iv(0,2*np.sqrt(x*params[1])/params[0])
#Negative likelihood so we can use fmin
neg_RL = lambda params, x: -1*np.prod(rice_likelihood(params,x))
def gaussian_(mu, x):
'''Returns a gaussian with variance equal to its mean'''
return 1/np.sqrt(2*np.pi*mu)*np.exp(-0.5*(x-mu)**2/mu)
neg_G = lambda mu, x: -1*np.prod(gaussian_(mu,x))
#Optimize!
initial_r = np.array([5,0])
opt_r = fmin(neg_RL, initial_r, args=(x_dat,))
initial_g = np.array([np.mean(x_dat)])
opt_g = fmin(neg_G, initial_g, args=(x_dat,))
print(opt_r)
print(opt_g)
#Plot to see how it looks. Meh....
plt.hist(x_dat, normed=True)
x_vals = np.arange(0,30,.01)
plt.plot(x_vals, rice_likelihood(opt_r,x_vals), 'r-')
plt.plot(x_vals, gaussian_(opt_g,x_vals), 'g-')
plt.title('Best fit Rice/Gaussian distribution and sampled points')
#Contour plot of posterior function, unnormalized
alp = np.linspace(2,11,100)
bet = np.linspace(0,5,100)
prob = np.array([[neg_RL((a,b),x_dat) for a in alp] for b in bet])
plt.contourf(alp, bet, prob)
plt.colorbar()
plt.xlabel('alpha')
plt.ylabel('beta')
plt.title('Unnormalized posterior for Rice estimate')
#Ratio of Rice likelihood at best fit parameter to Gaussian at best fit
third_factor = neg_RL(opt_r,x_dat)/neg_G(opt_g,x_dat)
print(third_factor)
Explanation: <h3> Problem 1 </h3>
To estimate the values of $(\alpha,\beta)$, we maximize the posterior function $p(\alpha,\beta\mid{D})$ with respect to $\alpha$ and $\beta$. From Baye's rule, and assuming the prior $p(\alpha,\beta)$ is uniform, this is equivalent to maximizing the likelihood function since
$$
p(\alpha,\beta\mid{D}) \propto p({D}\mid\alpha,\beta) = \prod_i p(x_i\mid\alpha,\beta)
$$
where
$$
p(x_i\mid \alpha,\beta) = \left{
\begin{array}{ll}
\alpha^{-1}\exp\left(-\frac{x_i+\beta}{\alpha}\right)I_0\left(\frac{2\sqrt{x_i\beta}}{\alpha}\right) & \quad x_i\geq 0\
0 & \quad\text{otherwise}
\end{array}
\right.
$$
Using <code>scipy.optimize.fmin</code>, we find that the parameters which maximize the posterior function are
\begin{eqnarray}
\alpha &\approx& 7.23\
\beta &\approx& 7.87\times 10^{-4}
\end{eqnarray}
To compare this fit to one of a Gaussian that has a variance equal to the mean, we evaluate the ratio
\begin{eqnarray}
\frac{P(R\mid{D})}{P(G\mid{D})} &=& \frac{\int p(\alpha,\beta,R\mid{D}) d\alpha d\beta}{\int p(\mu,G\mid{D})d\mu}\
&=& \frac{\int p({D}\mid\alpha,\beta,R)p(\alpha,\beta) d\alpha d\beta}{\int p({D}\mid\mu,G)p(\mu) d\mu}\
&=& \frac{\mu^\text{max}-\mu^\text{min}}{(\alpha^\text{max}-\alpha^\text{min})(\beta^\text{max}-\beta^\text{min})}\frac{\int p({D}\mid\alpha,\beta,R) d\alpha d\beta}{\int p({D}\mid\mu,G) d\mu}\
&\approx& \frac{\mu^\text{max}-\mu^\text{min}}{(\alpha^\text{max}-\alpha^\text{min})(\beta^\text{max}-\beta^\text{min})}\frac{p({D}\mid\alpha_0,\beta_0,R)}{p({D}\mid\mu_0,G)}\frac{\int f(\alpha,\beta,R) d\alpha d\beta}{\int g(\mu,G) d\mu}
\end{eqnarray}
where we make the approximation that the likelihoods can be approximated by the likelihoods at the best fit values times some functions $f,g$ which are functions of the parameters, best-fit values, and errors associated with the fit. We can assume that $f,g$ are approximately Gaussian, which means the integrals will be proportional to the errors, $\delta\alpha,\delta\beta,\delta\mu$. This gives us
$$
\frac{P(R\mid{D})}{P(G\mid{D})} \sim \frac{\delta\alpha\delta\beta}{(\alpha^\text{max}-\alpha^\text{min})(\beta^\text{max}-\beta^\text{min})}\frac{\mu^\text{max}-\mu^\text{min}}{\delta\mu}\frac{p({D}\mid\alpha_0,\beta_0,R)}{p({D}\mid\mu_0,G)}
$$
We can evaluate the last factor numerically, which comes to
$$
\frac{p({D}\mid\alpha_0,\beta_0,R)}{p({D}\mid\mu_0,G)} \sim 10^{51}
$$
The factors in front we assume are not close to this order of magnitude, so we can say that
$$
\frac{P(R\mid{D})}{P(G\mid{D})} \gg 1
$$
and therefore the Rice distribution is a much better fit than the Poisson-like Gaussian.
End of explanation
#Read in FITS file and plot PSF
model_file = fits.open('../../hw6prob2_model.fits')
model_data = model_file[0].data
#print(model_data.shape)
psf_file = fits.open('../../hw6prob2_psf.fits')
psf_data = psf_file[0].data
plt.contourf(psf_data)
plt.colorbar()
plt.title('Point spread function (PSF)')
plt.xlabel('[pix]')
plt.ylabel('[pix]')
#Integrate model along the slow axis.
model_int = np.sum(model_data, axis=0)
#print(model_int.shape)
plt.contourf(model_int)
plt.colorbar()
plt.title('Integrated model')
plt.xlabel('[pix]')
plt.ylabel('[pix]')
def zero_pad_2d(a, num):
'''take an 2d array and pad with zeros around edges
output is an array with size (num, num)'''
a = np.asarray(a)
size = a.shape
out = np.zeros((num,num))
out[int((num-size[0])/2):int((num+size[0])/2),int((num-size[1])/2):int((num+size[1])/2)] = a
return out
#from timeit import timeit
def my_fft_1d(a, inverse=False, first_iter=True):
'''Computes a fast fourier transform of a 1d array, a.
Returns a complex array of the same size. Uses the Cooley-Tukey algorithm'''
n = len(a)
out = np.zeros(n, dtype=complex)
overall_factor = 1
if inverse:
s = -1
if first_iter:
overall_factor = float(1/n)
else:
s = 1
if n == 1:
out[0] = a[0]
else:
out[0:int(n/2)] = my_fft_1d(a[0::2], inverse, first_iter=False)
out[int(n/2):] = my_fft_1d(a[1::2], inverse, first_iter=False)
for k in range(0,int(n/2)):
t = out[k]
out[k] = t + np.exp(-s*2*np.pi*1j*k/n)*out[k+int(n/2)]
out[k+int(n/2)] = t - np.exp(-s*2*np.pi*1j*k/n)*out[k+int(n/2)]
return overall_factor*out
def my_fft_2d(a, inverse=False):
'''Computes 2d FFT using my_fft_1d'''
a = np.asarray(a)
n1, n2 = a.shape
if n1 != n2:
print('Input must be a square array! Shape is', (n1,n2))
return None
N = n1
out = np.zeros((N,N), dtype=complex)
#FFT columns
for i in range(0,N):
out[:,i] = my_fft_1d(a[:,i], inverse, first_iter=True)
#FFT rows
for j in range(0,N):
out[j,:] = my_fft_1d(out[j,:], inverse, first_iter=True)
return out
#Zero pad data
model_int_p = zero_pad_2d(model_int, 256)
psf_data_p = zero_pad_2d(psf_data, 256)
#FFT padded data
model_int_f = my_fft_2d(model_int_p)
psf_data_f = my_fft_2d(psf_data_p)
#Do convolution in Fourier space, i.e. element-wise multiplication
conv_f = model_int_f*psf_data_f
#Inverse FFT back
conv = my_fft_2d(conv_f, inverse=True)
#Plot result
plt.contourf(np.real(np.fft.fftshift(conv)))
plt.colorbar()
plt.title('Galaxy image convolved with PSF')
plt.xlabel('[pix]')
plt.ylabel('[pix]')
Explanation: <h3>Problem 2</h3>
Given the 3d galaxy model, we obtain an image by integrating along the first axis to obtain a perfect scene. Since the axes are pixels, integation just amounts to summing along the first index. We want to convolve the point spread function (PSF) with this image. We do this in the following steps:
Zero-pad the arrays by inserting them in the center of a (256, 256) array of zeros.
Perform an FFT on each of the arrays.
Do the convolution on the fourier transformed arrays. This is jst element-wise multiplication.
Inverse FFT back.
Do an FFT shift to recenter the resulting image. Nice!
The results are shown below. The FFT method written below follows a version of the Cooley-Tukey FFT algorithm, which was described in class. I checked that it gives the same result as the numpy FFT method.
End of explanation |
14,289 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding Trends
R.A. Collenteur (University of Graz), O.N. Ebbens (Artesia)
In this notebook it is explained how to use linear and step trend models to improve the simulation of groundwater levels.
Step1: 1. Modeling a linear trend
In this first example we look at a model where a linear trend is used to improve the simulation of the groundwater levels. The linear trend is modeled using the LinearTrend stress model. We start with a model where we try to explain the groundwater level fluctuations using precipitation and evaporation. A simple non-linear recharge model is used to translate these fluxes into recharge and finally groundwater levels.
Step2: Add linear trend to the model
Clearly the model fit with the data in the above figure is not so good. Looking at the model residuals (simulation - observation) we can observe a steady upward trend in the residuals. Let's try and add a linear trend to the model to improve the groundwater level simulation.
Step3: 2. Modeling a step trend
In this example the modeling of step trends in groundwater level time series is explored. Step trends can be used when a system change has taken place during the observation period, for example a lowering of the surrounding water levels. Here we model a groundwater level time series observed near the city of Eindhoven in the Netherlands that has undergone a structural change during the time of observation. The change has taken place in 2012, but unfortunately no observations are available for the period when the change was made.
model with precipitation and evaporation
First a model with only precipitation and potential evaporation as explanatory variables is created. It can be observed that the peak in the groundwater levels after 2012 lie about 0.5 meters lower that the peaks before 2012. This can also be observed by studying the model residuals, which show a different mean for the period before 2012 and after 2012.
Step4: Adding a step trend
Clearly, the model creating above has room for improvements. Since we know a system change has taken place around 2012, we can try and simulate this system change by adding a step trend. We do not know the absolute size of this step trend, only the time when it has taken place (1st of July 2012). The StepModel is available in Pastas to add a step trend to a model. This stress model does not require a independent time series, but a(indication of) the date of the system change instead. By deault the StepModel applies an instantaneous step in the groundwater level, using the One response function. it is however also possible to choose a more gradual change by applying another type of response function.
Step5: Diagnostic checking
In the following, the uncertainty of the estimated step trend is evaluated. Before we quantify the uncertainty, we perform some model diagnostic checks on the noise time series using ps.plots.diagnostics(). From this plot, we can say that the noise looks normally distributed and has no significant autocorrelation. Thus we may use the standard errors of the estimated parameters to quantify the uncertainty of the estimated step trend.
Step6: 3. Uncertainty of step trend
Below we draw the 95% confidence interval of the step trend. We may use this confidence interval to interpret the results. For example, if we know or expect that the step trend is negative, we can reject the step trend from the model if the confidence interval also includes positive step trends. In this example this is not the case and the estimated step trend is always negative as expected.
Step7: 4. Effect of system changes
Sometimes changes to a groundwater system cannot be easily modelled using a step trend. For example when the system change causes a different response of the groundwater to recharge. In that case there are other methods to visualise the effect of the system change on the groundwater head. Two of these methods are shown below | Python Code:
import pandas as pd
import pastas as ps
ps.set_log_level("ERROR")
ps.show_versions()
Explanation: Adding Trends
R.A. Collenteur (University of Graz), O.N. Ebbens (Artesia)
In this notebook it is explained how to use linear and step trend models to improve the simulation of groundwater levels.
End of explanation
# Load the input data
evap = ps.read_knmi("../data/etmgeg_260.txt", variables="EV24").series * 1e3
rain = ps.read_knmi("../data/etmgeg_260.txt", variables="RH").series * 1e3
head = ps.read_dino("../data/B32C0609001_1.csv")
# Create a Pastas model
ml = ps.Model(head)
# Add a recharge model
rch = ps.rch.FlexModel()
rm = ps.RechargeModel(rain, evap, recharge=rch, rfunc=ps.Exponential, name="rch")
ml.add_stressmodel(rm)
# Solve and plot the model
ml.solve(noise=False, tmin="1990", report=False) # Get better initial estimated first
ml.solve(noise=True, tmin="1990", initial=False, report=False)
ml.plots.results(figsize=(10, 6));
Explanation: 1. Modeling a linear trend
In this first example we look at a model where a linear trend is used to improve the simulation of the groundwater levels. The linear trend is modeled using the LinearTrend stress model. We start with a model where we try to explain the groundwater level fluctuations using precipitation and evaporation. A simple non-linear recharge model is used to translate these fluxes into recharge and finally groundwater levels.
End of explanation
# Add a linear trend
tm = ps.LinearTrend(start="1990-01-01", end="2020-01-01", name="trend")
ml.add_stressmodel(tm)
# Solve the model
ml.solve(noise=False, tmin="1990", report=False) # Get better initial estimated first
ml.solve(noise=True, tmin="1990", initial=False, report=False)
ml.plots.results(figsize=(10, 6));
Explanation: Add linear trend to the model
Clearly the model fit with the data in the above figure is not so good. Looking at the model residuals (simulation - observation) we can observe a steady upward trend in the residuals. Let's try and add a linear trend to the model to improve the groundwater level simulation.
End of explanation
p = pd.read_csv("../data/nb18_rain.csv", index_col=0, parse_dates=True, squeeze=True)
e = pd.read_csv("../data/nb18_evap.csv", index_col=0, parse_dates=True, squeeze=True)
h = pd.read_csv("../data/nb18_head.csv", index_col=0, parse_dates=True, squeeze=True)
ml = ps.Model(h.iloc[::10])
sm = ps.RechargeModel(p, e, name="recharge", rfunc=ps.Exponential, recharge=ps.rch.Linear())
ml.add_stressmodel(sm)
ml.solve(report=False)
ml.plots.results(figsize=(10,5));
Explanation: 2. Modeling a step trend
In this example the modeling of step trends in groundwater level time series is explored. Step trends can be used when a system change has taken place during the observation period, for example a lowering of the surrounding water levels. Here we model a groundwater level time series observed near the city of Eindhoven in the Netherlands that has undergone a structural change during the time of observation. The change has taken place in 2012, but unfortunately no observations are available for the period when the change was made.
model with precipitation and evaporation
First a model with only precipitation and potential evaporation as explanatory variables is created. It can be observed that the peak in the groundwater levels after 2012 lie about 0.5 meters lower that the peaks before 2012. This can also be observed by studying the model residuals, which show a different mean for the period before 2012 and after 2012.
End of explanation
step = ps.StepModel(tstart=pd.Timestamp("2012-07-01"), name="step", up=None)
ml.add_stressmodel(step)
ml.solve(report=False)
ml.plots.results(figsize=(10,5));
Explanation: Adding a step trend
Clearly, the model creating above has room for improvements. Since we know a system change has taken place around 2012, we can try and simulate this system change by adding a step trend. We do not know the absolute size of this step trend, only the time when it has taken place (1st of July 2012). The StepModel is available in Pastas to add a step trend to a model. This stress model does not require a independent time series, but a(indication of) the date of the system change instead. By deault the StepModel applies an instantaneous step in the groundwater level, using the One response function. it is however also possible to choose a more gradual change by applying another type of response function.
End of explanation
ml.plots.diagnostics(figsize=(10, 4));
Explanation: Diagnostic checking
In the following, the uncertainty of the estimated step trend is evaluated. Before we quantify the uncertainty, we perform some model diagnostic checks on the noise time series using ps.plots.diagnostics(). From this plot, we can say that the noise looks normally distributed and has no significant autocorrelation. Thus we may use the standard errors of the estimated parameters to quantify the uncertainty of the estimated step trend.
End of explanation
ci = ml.fit.ci_contribution("step", alpha=0.05)
axes = ml.plots.results(adjust_height=False, figsize=(10,5))
axes[-2].fill_between(ci.index, ci.iloc[:, 0], ci.iloc[:, 1], zorder=-10, alpha=0.5);
Explanation: 3. Uncertainty of step trend
Below we draw the 95% confidence interval of the step trend. We may use this confidence interval to interpret the results. For example, if we know or expect that the step trend is negative, we can reject the step trend from the model if the confidence interval also includes positive step trends. In this example this is not the case and the estimated step trend is always negative as expected.
End of explanation
# method 1
ml_before = ps.Model(h.iloc[::10][:"2012-07-01"])
sm = ps.RechargeModel(p, e, name="recharge", rfunc=ps.Exponential, recharge=ps.rch.Linear())
ml_before.add_stressmodel(sm)
ml_before.solve(report=False)
ax = h.iloc[::10].plot(marker='.', color='k', ls='none', label='head', figsize=(10, 3))
ml_before.simulate().plot(ax=ax, label='model fit')
ml_before.simulate(tmin="2012-07-01",
tmax=h.iloc[::10].index[-1]).plot(ax=ax, label='simulation')
ax.legend(ncol=3)
ax.grid()
# method 2
ml_after = ps.Model(h.iloc[::10][pd.Timestamp("2012-07-01"):])
sm = ps.RechargeModel(p, e, name="recharge", rfunc=ps.Exponential,
recharge=ps.rch.Linear())
ml_after.add_stressmodel(sm)
ml_after.solve(report=False)
ax = h.iloc[::10].plot(marker='.', color='k', ls='none', label='head', figsize=(10, 3))
ml_after.simulate().plot(ax=ax, label='model fit')
ml_after.simulate(tmin=h.iloc[::10].index[0],
tmax=pd.Timestamp("2012-07-01")).plot(ax=ax, label='simulation')
ax.legend(ncol=3)
ax.grid()
Explanation: 4. Effect of system changes
Sometimes changes to a groundwater system cannot be easily modelled using a step trend. For example when the system change causes a different response of the groundwater to recharge. In that case there are other methods to visualise the effect of the system change on the groundwater head. Two of these methods are shown below:
Fit the model on the observations before the system change. Use this model to simulate the groundwater head after the system change. The differences between simulated groundwater heads an observations are an indication of the effect of the system change on the groundwater head.
similar to method 1. Now the model is fit on the period after the system change and groundwater heads are simulated for the period before the system change.
End of explanation |
14,290 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GA4GH 1000 Genome Variant Service Example
This example illustrates how to access the different variant calls implemented within the variant service.
Initialize the client
In this step we create a client object which will be used to communicate with the server. It is initialized using the URL.
Step1: Search variant annotation sets method
Response returns a list of sets of variant annotations, with the pertaining info fields
Step2: Search variant annotations method
This request returns ---
Step3: Get variant annotation set method
This call returns a specific set when the id of the wanted set is provided. | Python Code:
import ga4gh_client.client as client
c = client.HttpClient("http://1kgenomes.ga4gh.org")
Explanation: GA4GH 1000 Genome Variant Service Example
This example illustrates how to access the different variant calls implemented within the variant service.
Initialize the client
In this step we create a client object which will be used to communicate with the server. It is initialized using the URL.
End of explanation
for variant_annotation_sets in c.search_variant_annotation_sets(variant_set_id="WyIxa2dlbm9tZXMiLCJ2cyIsImZ1bmN0aW9uYWwtYW5ub3RhdGlvbiJd"):
print "\nName: {},".format(variant_annotation_sets.name)
print" Id: {},".format(variant_annotation_sets.id)
print" Variant Set Id: {},".format(variant_annotation_sets.variant_set_id)
print" Analysis Id: {},".format(variant_annotation_sets.analysis.id)
print" Analysis Created: {}\n".format(variant_annotation_sets.analysis.created)
for info in variant_annotation_sets.analysis.info:
print"{}: {}".format(info, variant_annotation_sets.analysis.info[info].values[0].string_value)
Explanation: Search variant annotation sets method
Response returns a list of sets of variant annotations, with the pertaining info fields
End of explanation
counter = 6
for variant_annotations in c.search_variant_annotations(variant_annotation_set_id="WyIxa2dlbm9tZXMiLCJ2cyIsImZ1bmN0aW9uYWwtYW5ub3RhdGlvbiIsImZ1bmN0aW9uYWwtYW5ub3RhdGlvbiJd", reference_name="1", start=0, end=1000000):
if counter <= 0:
break
counter -= 1
print"Id: {},".format(variant_annotations.id)
print" Variant Id: {},".format(variant_annotations.variant_id)
print" Variant Annotation Set Id: {}".format(variant_annotations.variant_annotation_set_id)
print" Created: {}".format(variant_annotations.created)
print" Transcript Effects Id: {},".format(variant_annotations.transcript_effects[0].id)
print" Featured Id: {},".format(variant_annotations.transcript_effects[0].feature_id)
print" Alternate Bases: {},".format(variant_annotations.transcript_effects[0].alternate_bases)
print" Effects Id: {},".format(variant_annotations.transcript_effects[0].effects[0].id)
print" Effect Term: {},".format(variant_annotations.transcript_effects[0].effects[0].term)
print" Effect Sorce Name: {},".format(variant_annotations.transcript_effects[0].effects[0].source_name)
print" Effect Source Version: {}\n".format(variant_annotations.transcript_effects[0].effects[0].source_version)
Explanation: Search variant annotations method
This request returns ---
End of explanation
variant_annotation_set = c.get_variant_annotation_set(variant_annotation_set_id="WyIxa2dlbm9tZXMiLCJ2cyIsImZ1bmN0aW9uYWwtYW5ub3RhdGlvbiIsImZ1bmN0aW9uYWwtYW5ub3RhdGlvbiJd")
print"Name: {}".format(variant_annotation_set.name)
print" Id: {} ".format(variant_annotation_set.id)
print" Variant Set Id: {}".format(variant_annotation_set.variant_set_id)
print" Analysis Id: {},".format(variant_annotation_set.analysis.id)
print" Analysis Created: {},\n".format(variant_annotation_set.analysis.created)
for info in variant_annotation_set.analysis.info:
print"{}: {},".format(info, variant_annotation_set.analysis.info[info].values[0].string_value)
Explanation: Get variant annotation set method
This call returns a specific set when the id of the wanted set is provided.
End of explanation |
14,291 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Contract a Grid Circuit
Shallow circuits on a planar grid with low-weight observables permit easy contraction.
Note
Step1: Make an example circuit topology
We'll use entangling gates according to this topology and compute the value of an observable on the red nodes.
Step2: Circuit
Step3: Observable
Step4: The contraction
The value of the observable is $\langle 0 | U^\dagger (ZZ) U |0 \rangle$.
Step5: We can simplify the circuit
By cancelling the "forwards" and "backwards" part of the circuit that are outside of the light-cone of the observable, we can reduce the number of gates to consider --- and sometimes the number of qubits involved at all. To see this in action, run the following cell and then keep re-running the following cell to watch gates disappear from the circuit.
Step6: (try re-running the following cell to watch the circuit get smaller)
Step7: Utility function to fully-simplify
We provide this utility function to fully simplify a circuit.
Step8: Turn it into a Tensor Netowork
We explicitly "cap" the tensor network with <0..0| bras so the entire thing contracts to the expectation value $\langle 0 | U^\dagger (ZZ) U |0 \rangle$.
Step9: rank_simplify effectively folds together 1- and 2-qubit gates
In practice, using this is faster than running the circuit optimizer to remove gates that cancel themselves, but please benchmark for your particular use case.
Step10: The tensor contraction path tells us how expensive this will be
Step11: Do the contraction
Step12: Big Circuit | Python Code:
import numpy as np
import networkx as nx
import cirq
import quimb
import quimb.tensor as qtn
from cirq.contrib.svg import SVGCircuit
import cirq.contrib.quimb as ccq
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
sns.set_style('ticks')
plt.rc('axes', labelsize=16, titlesize=16)
plt.rc('xtick', labelsize=14)
plt.rc('ytick', labelsize=14)
plt.rc('legend', fontsize=14, title_fontsize=16)
# theme colors
QBLUE = '#1967d2'
QRED = '#ea4335ff'
QGOLD = '#fbbc05ff'
QGREEN = '#34a853ff'
QGOLD2 = '#ffca28'
QBLUE2 = '#1e88e5'
Explanation: Contract a Grid Circuit
Shallow circuits on a planar grid with low-weight observables permit easy contraction.
Note: this notebook relies on unreleased Cirq features. If you want to try these features, make sure you install cirq via pip install cirq --pre.
Imports
End of explanation
width = 3
height = 4
graph = nx.grid_2d_graph(width, height)
rs = np.random.RandomState(52)
nx.set_edge_attributes(graph, name='weight',
values={e: np.round(rs.uniform(), 2) for e in graph.edges})
zz_inds = ((width//2, (height//2-1)), (width//2, (height//2)))
nx.draw_networkx(graph,
pos={n:n for n in graph.nodes},
node_color=[QRED if node in zz_inds else QBLUE for node in graph.nodes])
Explanation: Make an example circuit topology
We'll use entangling gates according to this topology and compute the value of an observable on the red nodes.
End of explanation
qubits = [cirq.GridQubit(*n) for n in graph]
circuit = cirq.Circuit(
cirq.H.on_each(qubits),
ccq.get_grid_moments(graph),
cirq.Moment([cirq.rx(0.456).on_each(qubits)]),
)
SVGCircuit(circuit)
Explanation: Circuit
End of explanation
ZZ = cirq.Z(cirq.GridQubit(*zz_inds[0])) * cirq.Z(cirq.GridQubit(*zz_inds[1]))
ZZ
Explanation: Observable
End of explanation
tot_c = ccq.circuit_for_expectation_value(circuit, ZZ)
SVGCircuit(tot_c)
Explanation: The contraction
The value of the observable is $\langle 0 | U^\dagger (ZZ) U |0 \rangle$.
End of explanation
compressed_c = tot_c.copy()
print(len(list(compressed_c.all_operations())), len(compressed_c.all_qubits()))
Explanation: We can simplify the circuit
By cancelling the "forwards" and "backwards" part of the circuit that are outside of the light-cone of the observable, we can reduce the number of gates to consider --- and sometimes the number of qubits involved at all. To see this in action, run the following cell and then keep re-running the following cell to watch gates disappear from the circuit.
End of explanation
ccq.MergeNQubitGates(n_qubits=2).optimize_circuit(compressed_c)
ccq.MergeNQubitGates(n_qubits=1).optimize_circuit(compressed_c)
compressed_c = cirq.drop_negligible_operations(compressed_c, atol=1e-6)
compressed_c = cirq.drop_empty_moments(compressed_c)
print(len(list(compressed_c.all_operations())), len(compressed_c.all_qubits()))
SVGCircuit(compressed_c)
Explanation: (try re-running the following cell to watch the circuit get smaller)
End of explanation
ccq.simplify_expectation_value_circuit(tot_c)
SVGCircuit(tot_c)
# simplification might eliminate qubits entirely for large graphs and
# shallow `p`, so re-get the current qubits.
qubits = sorted(tot_c.all_qubits())
print(len(qubits))
Explanation: Utility function to fully-simplify
We provide this utility function to fully simplify a circuit.
End of explanation
tensors, qubit_frontier, fix = ccq.circuit_to_tensors(
circuit=tot_c, qubits=qubits)
end_bras = [
qtn.Tensor(
data=quimb.up().squeeze(),
inds=(f'i{qubit_frontier[q]}_q{q}',),
tags={'Q0', 'bra0'}) for q in qubits
]
tn = qtn.TensorNetwork(tensors + end_bras)
tn.graph(color=['Q0', 'Q1', 'Q2'])
plt.show()
Explanation: Turn it into a Tensor Netowork
We explicitly "cap" the tensor network with <0..0| bras so the entire thing contracts to the expectation value $\langle 0 | U^\dagger (ZZ) U |0 \rangle$.
End of explanation
tn.rank_simplify(inplace=True)
tn.graph(color=['Q0', 'Q1', 'Q2'])
Explanation: rank_simplify effectively folds together 1- and 2-qubit gates
In practice, using this is faster than running the circuit optimizer to remove gates that cancel themselves, but please benchmark for your particular use case.
End of explanation
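To quantify what the simplification did, you can compare the remaining tensor count with the number of circuit operations (attribute and method names assumed from recent quimb/cirq versions).
# Network size after rank_simplify vs. original operation count
print(tn.num_tensors, 'tensors remain for', len(list(tot_c.all_operations())), 'operations plus the bras')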
path_info = tn.contract(get='path-info')
path_info.opt_cost / int(3e9) # estimated contraction time in seconds, assuming ~3 GFLOP/s throughput
path_info.largest_intermediate * 128 / 8 / 1024 / 1024 / 1024 # memory of the largest intermediate tensor in GiB (128-bit complex entries, 16 bytes each)
Explanation: The tensor contraction path tells us how expensive this will be
End of explanation
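The two magic-number expressions above can be made more explicit; note that the throughput figure is an assumption, not a measurement.
# Resource estimate with named constants (3 GFLOP/s is an assumed throughput;
# complex128 entries are 16 bytes each)
ASSUMED_FLOPS = 3e9
print(f'~{path_info.opt_cost / ASSUMED_FLOPS:.3g} s (assumed), '
      f'~{path_info.largest_intermediate * 16 / 1024**3:.3g} GiB largest intermediate')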
zz = tn.contract(inplace=True)
zz = np.real_if_close(zz)
print(zz)
Explanation: Do the contraction
End of explanation
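For a grid this small (12 qubits) you can cross-check the contracted value against a dense simulation. Depending on your Cirq version these helpers may be named final_state_vector / expectation_from_state_vector or final_wavefunction / expectation_from_wavefunction.
# Optional sanity check against a dense statevector simulation (small systems only)
qs = sorted(circuit.all_qubits())
psi = cirq.final_state_vector(circuit, qubit_order=qs)
print(np.real_if_close(ZZ.expectation_from_state_vector(psi, {q: i for i, q in enumerate(qs)})))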
width = 8
height = 8
graph = nx.grid_2d_graph(width, height)
rs = np.random.RandomState(52)
nx.set_edge_attributes(graph, name='weight',
values={e: np.round(rs.uniform(), 2) for e in graph.edges})
zz_inds = ((width//2, (height//2-1)), (width//2, (height//2)))
nx.draw_networkx(graph,
pos={n:n for n in graph.nodes},
node_color=[QRED if node in zz_inds else QBLUE for node in graph.nodes])
qubits = [cirq.GridQubit(*n) for n in graph]
circuit = cirq.Circuit(
cirq.H.on_each(qubits),
ccq.get_grid_moments(graph),
cirq.Moment([cirq.rx(0.456).on_each(qubits)]),
)
ZZ = cirq.Z(cirq.GridQubit(*zz_inds[0])) * cirq.Z(cirq.GridQubit(*zz_inds[1]))
ZZ
ccq.tensor_expectation_value(circuit, ZZ)
Explanation: Big Circuit
End of explanation |
14,292 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Research Module Usage
We start with some useful imports and constant definitions
Step1: Reducing Extra Dataset Loads
Running Research Sequentially
In the previous tutorial we learned how to use Research to run experiments multiple times and with varying parameters.
Firstly we define a dataset to work with and a pipeline that reads this dataset
Step2: Then we define a grid of parameters whose nodes will be used to form separate experiments
Step3: These parameters can be passed to model's configs using named expressions.
Step4: After that we define a pipeline to run during our experiments. We initialise a pipeline variable 'loss' to store loss on each iteration
Step5: Each research is assigned a name and writes its results to a folder with this name. The names must be unique, so if one attempts to run a research with a name that already exists, an error will be thrown. In the cell below we clear the results of previous research runs so as to allow multiple runs of a research. This is done solely for purposes of this tutorial and should not be done in real work
Step6: Finally we define a Research that runs the pipeline substituting its parameters using different nodes of the grid, and saves values of the 'loss' named expressions to results.
Step7: 16 experiments are run (4 grid nodes x 4 repetitions) each consisting of 10 iterations.
We can load results of the research and see that the table has 160 entries.
Step8: Branches
Step9: Since every root is now assigned to 8 branches, there are only 2 jobs.
We can see that the whole research duration is reduced.
In this toy example we use only 10 iterations to make the effect of reduced dataset load more visible.
The number of results entries is the same.
Step10: Functions on Root
If each job has several branches, they are all executed in parallel threads. To run a function on root, one should add it with on_root=True.
Functions on root have required parameters iteration and experiments and optional keyword parameters. They are not allowed to return anything
Step11: Improving Performance
Research can run experiments in parallel if the number of workers is defined in the workers parameter.
Each worker starts in a separate process and performs one or several jobs assigned to it. Moreover, if several GPUs are accessible one can pass indices of GPUs to use via the devices parameter.
Following parameters are also useful to control research execution
Step12: Cross-validation
One can easily perform cross-validation with Research
Firstly we will define a dataset
Step13: Next, we define our train and test pipelines. To perform cross-validation, you can define train and test datasets as mnist_train.CV(C('fold')).train and mnist_test.CV(C('fold')).test, correspondingly.
Step14: Then multiply your Domain by Option('fold', [0, 1, 2]).
Step15: We can now load results, specifying which folds to get if needed | Python Code:
import sys
import os
import shutil
import warnings
warnings.filterwarnings('ignore')
from tensorflow import logging
logging.set_verbosity(logging.ERROR)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import matplotlib
%matplotlib inline
sys.path.append('../../..')
from batchflow import Pipeline, B, C, V, D, L
from batchflow.opensets import MNIST
from batchflow.models.tf import VGG7, VGG16
from batchflow.research import Research, Option, Results, RP
BATCH_SIZE=64
ITERATIONS=1000
TEST_EXECUTE_FREQ=100
def clear_previous_results(res_name):
if os.path.exists(res_name):
shutil.rmtree(res_name)
Explanation: Advanced Research Module Usage
We start with some useful imports and constant definitions
End of explanation
mnist = MNIST()
train_root = mnist.train.p.run_later(BATCH_SIZE, shuffle=True, n_epochs=None)
Explanation: Reducing Extra Dataset Loads
Running Research Sequentially
In the previous tutorial we learned how to use Research to run experiments multiple times and with varying parameters.
Firstly we define a dataset to work with and a pipeline that reads this dataset
End of explanation
domain = Option('layout', ['cna', 'can']) * Option('bias', [True, False])
Explanation: Then we define a grid of parameters whose nodes will be used to form separate experiments
End of explanation
model_config={
'inputs/images/shape': B('image_shape'),
'inputs/labels/classes': D('num_classes'),
'inputs/labels/name': 'targets',
'initial_block/inputs': 'images',
'body/block/layout': C('layout'),
'common/conv/use_bias': C('bias'),
}
Explanation: These parameters can be passed to model's configs using named expressions.
End of explanation
train_template = (Pipeline()
.init_variable('loss')
.init_model('dynamic', VGG7, 'conv', config=model_config)
.to_array()
.train_model('conv',
images=B('images'), labels=B('labels'),
fetches='loss', save_to=V('loss', mode='w'))
)
Explanation: After that we define a pipeline to run during our experiments. We initialise a pipeline variable 'loss' to store loss on each iteration
End of explanation
res_name = 'simple_research'
clear_previous_results(res_name)
Explanation: Each research is assigned a name and writes its results to a folder with this name. The names must be unique, so if one attempts to run a research with a name that already exists, an error will be thrown. In the cell below we clear the results of previous research runs so as to allow multiple runs of a research. This is done solely for purposes of this tutorial and should not be done in real work
End of explanation
research = (Research()
.add_pipeline(train_root + train_template, variables='loss')
.init_domain(domain, n_reps=4))
research.run(n_iters=10, name=res_name, bar=True)
Explanation: Finally we define a Research that runs the pipeline substituting its parameters using different nodes of the grid, and saves values of the 'loss' named expressions to results.
End of explanation
research.load_results().df.info()
Explanation: 16 experiments are run (4 grid nodes x 4 repetitions) each consisting of 10 iterations.
We can load results of the research and see that the table has 160 entries.
End of explanation
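A quick way to summarize the loaded results is a pandas groupby. The column names below ('layout', 'bias', 'loss') follow the options and variables defined above; adjust them if your results table stores the configuration differently (for example in a single 'config' column).
# Average loss per configuration and iteration (column names assumed as above)
df = research.load_results().df
print(df.groupby(['layout', 'bias', 'iteration'])['loss'].mean().tail())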
model_config={
'inputs/images/shape': B('image_shape'),
'inputs/labels/classes': 10,
'inputs/labels/name': 'targets',
'initial_block/inputs': 'images',
'body/block/layout': C('layout'),
'common/conv/use_bias': C('bias'),
}
train_template = (Pipeline()
.init_variable('loss')
.init_model('dynamic', VGG7, 'conv', config=model_config)
.to_array()
.train_model('conv',
images=B('images'), labels=B('labels'),
fetches='loss', save_to=V('loss', mode='w'))
)
res_name = 'no_extra_dataload_research'
clear_previous_results(res_name)
research = (Research()
.add_pipeline(root=train_root, branch=train_template, variables='loss')
.init_domain(domain, n_reps=4))
research.run(n_iters=10, branches=8, name=res_name, bar=True)
Explanation: Branches: Reducing Data Loading and Preprocessing
Each experiment can be divided into 2 stages: root stage that is roughly same for all experiments (for example, data loading and preprocessing) and branch stage that varies. If data loading and preprocessing take significant time one can use the batches generated on a single root stage to feed to several branches that belong to different experiments.
For example, if you want to test 4 different models, and your workflow includes some complicated data preprocessing and augmentation that is done separately for each model, you may want to do preprocessing and augmentation once and feed the resulting batches of data to all these 4 models.
Figure above shows the difference.
On the left, simple workflow is shown. Same steps of common preprocessing are performed 4 times, and the batches that are generated after different runs of common stages are also different due to shuffling and possible randomisation inside common steps.
On the right, common steps are performed once on the root stage and the very same batches are passed to different branches. This has the advantage of reducing extra computations but it also reduces variability because all models get exactly the same pieces of data.
To perform root-branch division, one should pass root and branch parameters to add_pipeline() and define number of branches per root via branches parameter of run().
A root with corresponding branches is called a job. Note that different roots still produce different batches.
One constraint when using branches is that branch pipelines do not calculate dataset variables properly, so we have to redefine model_config and train_template and hard-code 'inputs/labels/classes' parameter
End of explanation
research.load_results().df.info()
Explanation: Since every root is now assigned to 8 branches, there are only 2 jobs.
We can see that the whole research duration is reduced.
In this toy example we use only 10 iterations to make the effect of reduced dataset load more visible.
The number of results entries is the same.
End of explanation
res_name = 'on_root_research'
clear_previous_results(res_name)
def function_on_root():
print('on root')
research = (Research()
.add_callable(function_on_root, execute="#0", on_root=True)
.add_pipeline(root=train_root, branch=train_template, variables='loss')
.init_domain(domain, n_reps=4)
)
research.run(branches=8, n_iters=10, name=res_name, bar=True)
Explanation: Functions on Root
If each job has several branches, they are all executed in parallel threads. To run a function on root, one should add it with on_root=True.
Functions on root have required parameters iteration and experiments and optional keyword parameters. They are not allowed to return anything
End of explanation
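The toy callable above takes no arguments; a sketch that uses the documented signature (required iteration and experiments plus optional keyword parameters, returning nothing) could look like this. The function name here is hypothetical.
# Hypothetical on_root callable following the signature described above
def report_on_root(iteration, experiments, prefix='on root'):
    print(prefix, 'iteration:', iteration, 'experiments in job:', len(experiments))
# research.add_callable(report_on_root, execute="#0", on_root=True)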
model_config={
**model_config,
'device': C('device'), # it's technical parameter for TFModel
}
test_root = mnist.test.p.run_later(BATCH_SIZE, shuffle=True, n_epochs=1) #Note n_epochs=1
test_template = (Pipeline()
.init_variable('predictions')
.init_variable('metrics')
.import_model('conv', C('import_from'))
.to_array()
.predict_model('conv',
images=B('images'), labels=B('labels'),
fetches='predictions', save_to=V('predictions'))
.gather_metrics('class', targets=B('labels'), predictions=V('predictions'),
fmt='logits', axis=-1, save_to=V('metrics', mode='a')))
research = (Research()
.add_pipeline(root=train_root, branch=train_template, variables='loss', name='train_ppl',
dump=TEST_EXECUTE_FREQ)
.add_pipeline(root=test_root, branch=test_template, name='test_ppl',
execute=TEST_EXECUTE_FREQ, run=True, import_from=RP('train_ppl'))
.get_metrics(pipeline='test_ppl', metrics_var='metrics', metrics_name='accuracy',
returns='accuracy',
execute=TEST_EXECUTE_FREQ,
dump=TEST_EXECUTE_FREQ)
.init_domain(domain, n_reps=4))
res_name = 'faster_research'
clear_previous_results(res_name)
research.run(n_iters=ITERATIONS, name=res_name, bar=True,
branches=2, workers=2, devices=[0, 1],
timeout=2, trials=1)
results = research.load_results().df
results.info()
Explanation: Improving Performance
Research can run experiments in parallel if the number of workers is defined in the workers parameter.
Each worker starts in a separate process and performs one or several jobs assigned to it. Moreover, if several GPUs are accessible one can pass indices of GPUs to use via the devices parameter.
Following parameters are also useful to control research execution:
* timeout in run specifies time in minutes to kill non-responding job, default value is 5
* trials in run specifies number of attempts to restart a job, default=2
* dump in add_pipeline, add_callable and get_metrics tells how often results are written to disk and cleared. By default results are dumped on the last iteration, but if they consume too much memory one may want to do it more often. The format is same as execute
End of explanation
mnist_train = MNIST().train
mnist_train.cv_split(n_splits=3)
Explanation: Cross-validation
One can easily perform cross-validation with Research
Firstly we will define a dataset: we will use the train subset of MNIST
End of explanation
model_config={
'inputs/images/shape': B('image_shape'),
'inputs/labels/classes': D('num_classes'),
'inputs/labels/name': 'targets',
'initial_block/inputs': 'images',
'body/block/layout': C('layout'),
}
train_template = (Pipeline()
.init_variable('train_loss')
.init_model('dynamic', VGG7, 'conv', config=model_config)
.to_array()
.train_model('conv',
images=B('images'), labels=B('labels'),
fetches='loss', save_to=V('train_loss', mode='w'))
.run_later(BATCH_SIZE, shuffle=True, n_epochs=None)) << mnist_train.CV(C('fold')).train
test_template = (Pipeline()
.init_variable('predictions')
.init_variable('metrics')
.import_model('conv', C('import_from'))
.to_array()
.predict_model('conv',
images=B('images'), labels=B('labels'),
fetches='predictions', save_to=V('predictions'))
.gather_metrics('class', targets=B('labels'), predictions=V('predictions'),
fmt='logits', axis=-1, save_to=V('metrics', mode='a'))
.run_later(BATCH_SIZE, shuffle=True, n_epochs=1)) << mnist_train.CV(C('fold')).test
Explanation: Next, we define our train and test pipelines. To perform cross-validation, you can define train and test datasets as mnist_train.CV(C('fold')).train and mnist_test.CV(C('fold')).test, correspondingly.
End of explanation
domain = Option('layout', ['cna', 'can']) * Option('fold', [0, 1, 2])
research = (Research()
.add_pipeline(train_template, dataset=mnist_train, variables='train_loss', name='train_ppl')
.add_pipeline(test_template, dataset=mnist_train, name='test_ppl',
execute=TEST_EXECUTE_FREQ, run=True, import_from=RP('train_ppl'))
.get_metrics(pipeline='test_ppl', metrics_var='metrics', metrics_name='accuracy', returns='accuracy',
execute=TEST_EXECUTE_FREQ)
.init_domain(domain))
res_name = 'cv_research'
clear_previous_results(res_name)
research.run(n_iters=ITERATIONS, name=res_name, bar=True, workers=1, devices=[0])
Explanation: Then multiply your Domain by Option('fold', [0, 1, 2]).
End of explanation
results = research.load_results(fold=0).df
results.sample(5)
from matplotlib import pyplot as plt
test_results = Results(path='cv_research', names= 'test_ppl_metrics',
concat_config=True, drop_columns=False).df
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
for i, (config, df) in enumerate(test_results.groupby('config')):
x, y = i//2, i%2
df.pivot(index='iteration', columns='fold', values='accuracy').plot(ax=ax[y])
ax[y].set_title(config)
ax[y].set_xlabel('iteration')
ax[y].set_ylabel('accuracy')
ax[y].grid(True)
ax[y].legend()
Explanation: We can now load results, specifying which folds to get if needed
End of explanation |
14,293 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Network Structure Learning in Pomegranate
author
Step1: The structure attribute returns a tuple of tuples, where each inner tuple corresponds to that node in the graph (and the column of data learned on). The numbers in that inner tuple correspond to the parents of that node. The results from this structure are that node 3 has node 1 as a parent, that node 2 has node 0 as a parent, and so forth. It seems to faithfully recapture the underlying dependencies in the data.
Now, two algorithms for performing search-and-score were mentioned, the traditional shortest path algorithm and the A* algorithm. These both work by essentially turning the Bayesian network structure learning problem into a shortest path problem over an 'order graph.' This order graph is a lattice made up of layers of variable sets from the BNSL problem, with the root node having no variables, the leaf node having all variables, and layer i in the lattice having all subsets of variables of size i. Each path from the root to the leaf represents a certain topological sort of the variables, with the shortest path corresponding to the optimal topological sort and Bayesian network. Details can be found <a href="http
Step2: These results show that the A* algorithm is both computationally faster and requires far less memory than the traditional algorithm, making it a better default for the 'exact' algorithm. The amount of memory used by the BNSL process is under 'increment', not 'peak memory', as 'peak memory' returns the total memory used by everything, while increment shows the difference in peak memory before and after the function has run.
Approximate Learning
Step3: Approximate Learning
Step4: Comparison
We can then compare the algorithms directly to each other on the digits dataset as we expand the number of pixels to consider.
Step5: We can see the expected results-- that the A* algorithm works faster than the shortest path, the greedy one faster than that, and Chow-Liu the fastest. The purple and cyan lines superimpose on the right plot as they produce graphs with the same score, followed closely by the greedy algorithm and then Chow-Liu performing the worst.
Constraint Graphs
Now, sometimes you have prior information about how groups of nodes are connected to each other and want to exploit that. This can take the form of a global ordering, where variables can be ordered in such a manner that edges only go from left to right, for example. However, sometimes you have layers in your network where variables are a part of these layers and can only have parents in another layer.
Lets consider a diagnostics Bayesian network like the following (no need to read code, the picture is all that is important for now)
Step6: This network contains three layer, with symptoms on the bottom (low energy, bloating, loss of appetite, vomitting, and abdominal cramps), diseases in the middle (overian cancer, lactose intolerance, and pregnancy), and genetic tests on the top for three different genetic mutations. The edges in this graph are constrainted such that symptoms are explained by diseases, and diseases can be partially explained by genetic mutations. There are no edges from diseases to genetic conditions, and no edges from genetic conditions to symptoms. If we were going to design a more efficient search algorithm, we would want to exploit this fact to drastically reduce the search space of graphs.
Before presenting a solution, lets also consider another situation. In some cases you can define a global ordering of the variables, meaning you can order them from left to right and ensure that edges only go from the left to the right. This can represent some temporal separation (things on the left happen before things on the right), physical separation, or anything else. This would also dramatically reduce the search space.
In addition to reducing the search space, an efficient algorithm can exploit this layered structure. A key property of most scoring functions is the idea of "global parameter independence", meaning that that the parents of node A are independent of the parents of node B assuming that they do not form a cycle in the graph. If you have a layered structure, either like in the diagnostics network or through a global ordering, it is impossible to form a cycle in the graph through any valid assignment of parent values. This means that the parents for each node can be identified independently, drastically reducing the runtime of the algorithm.
Now, sometimes we know ~some things~ about the structure of the variables, but nothing about the others. For example, we might have a partial ordering on some variables but not know anything about the others. We could enforce an arbitrary ordering on the others, but this may not be well justified. In essence, we'd like to exploit whatever information we have.
Abstractly, we can think about this in terms of constraint graphs. Lets say you have some symptoms, diseases, and genetic tests, and don't a priori know the connection between all of these pieces, but you do know the previous layer structure. You can define a "constraint graph" which is made up of three nodes, "symptoms", "diseases", and "genetic mutations". There is a directed edge from genetic mutations to diseases, and a directed edge from diseases to symptoms. This specifies that genetic mutations can be parents to diseases, and diseases to symptoms. It would look like the following
Step7: All variables corresponding to these categories would be put in their appropriate name. This would define a scaffold for structure learning.
Now, we can do the same thing for a global ordering. Lets say we have 3 variables in an order from 0-2.
Step8: In this graph, we're saying that variable 0 can be a parent for 1 or 2, and that variable 1 can be a parent for variable 2. In the same way that putting multiple variables in a node of the constraint graph allowed us to define layers, putting a single variable in the nodes of a constraint graph can allow us to define an ordering.
To be specific, lets say we want to find the parents of the variables in node 1 given that those variables parents can only come from the variables in node 0. We can independently find the best parents for each variable in node 1 from the set of those in node 0. This is significantly faster than trying to find the best Bayesian network of all variables in nodes 0 and 1. We can also do the same thing for the variables in node 2 by going through the variables in both nodes 0 and 1 to find the best parent set for the variables in node 2.
However, there are some cases where we know nothing about the parent structure of some variables. This can be solved by including self-loops in the graph, where a node is its own parent. This means that we know nothing about the parent structure of the variables in that node and that the full exponential time algorithm will have to be run. The naive structure learning algorithm can be thought of as putting all variables in a single node in the constraint graph and putting a self-loop on that node.
We are thus left with two procedures; one for solving edges which are self edges, and one for solving edges which are not. Even though we have to use the exponential time procedure on variables in nodes with self loops, it will still be significantly faster because we will be using less variables (except in the naive case).
Frequently though we will have some information about some of the nodes of the graph even if we don't have information about all of the nodes. Lets take the case where we know some variables have no children but can have parents, and know nothing about the other variables.
Step9: In this situation we would have to run the exponential time algorithm on the variables in node 0 to find the optimal parents, and then run the independent parents algorithm on the variables in node 1 drawing only from the variables in node 0. To be specific
Step10: We see that reconstructed perfectly here. Lets see what would happen if we didn't use the exact algorithm.
Step11: It looks like we got three desirable attributes by using a constraint graph. The first is that there was over an order of magnitude speed improvement in finding the optimal graph. The second is that we were able to remove some edges we didn't want in the final Bayesian network, such as those between 11, 13, and 14. We also removed the edge between 1 and 12 and 1 and 3, which are spurious given the model that we originally defined. The third desired attribute is that we can specify the direction of some of the edges and get a better causal model.
Lets take a look at how big of a model we can learn given a three layer constraint graph like before. | Python Code:
%pylab inline
%load_ext memory_profiler
from pomegranate import BayesianNetwork
import seaborn, time
seaborn.set_style('whitegrid')
X = numpy.random.randint(2, size=(2000, 7))
X[:,3] = X[:,1]
X[:,6] = X[:,1]
X[:,0] = X[:,2]
X[:,4] = X[:,5]
model = BayesianNetwork.from_samples(X, algorithm='exact')
print model.structure
model.plot()
Explanation: Bayesian Network Structure Learning in Pomegranate
author: Jacob Schreiber <br>
contact: [email protected]
Learning the structure of Bayesian networks can be complicated for two main reasons: (1) difficulties in inferring causality and (2) the super-exponential number of directed edges that could exist in a dataset. The first issue presents itself when the structure lerning algorithm considers only correlation or another measure of co-occurrence to determine if an edge should exist. The first point presents challenges which deserve a far more in depth treatment unrelated to implementations in pomegranate, so instead this tutorial will focus on how pomegranate implements fast Bayesian network structure learning. It will also cover a new concept called the "constraint graph" which can be used to massively speed up structure search while also making causality assignment a bit more reasonable.
Introduction to Bayesian Network Structure Learning
Most methods for Bayesian network structure learning (BNSL) can be put into one of the following three categories:
(1) Search and Score: The most intuitive method is that of 'search and score,' where one searches over the space of all possible directed acyclic graphs (DAGs) and identifies the one that minimizes some objective function. Typical objective functions attempt to balance the log probability of the data given the model (the likelihood) with the complexity of the model to encourage sparser models. A naive implementation of this search is super-exponential in time with the number of variables, and becomes infeasible when considering even less than a dozen variables. However, dynamic programming can efficiently remove the many repeated calculations and reduce this to be simply exponential in time. This allows exact BNSL to scale to ~25-30 variables. In addition, the A* algorithm can be used to smartly search the space and reduce computational time even further by not even considering all possibile networks.
(2) Constraint learning: These methods typically involve calculating some measure of correlation or co-occurrence to identify an undirected backbone of edges that could exist, and then prune these edges systematically until a DAG is reached. A common method is to iterate over all triplets of variables to identify conditional independencies that specify both presence and direction of the edges. This algorithm is asymptotically faster (quadratic in time) than search-and-score, but it does not have a simple probabilistic interpretation.
(3) Approximate algorithms: In many real world examples, one wishes to merge the interpretability of the search and score method with the attractiveness of the task finishing before the universe ends. To this end, several heuristics have been developed with different properties to yield good structures in a reasonable amount of time. These methods include the Chow-Liu tree building algorithm, the hill-climbing algorithm, and optimal reinsertion, though there are others.
pomegranate currently implements a search-and-score method based on the minimum description length score which utilizes the dynamic programming and A* algorithm (DP/A*), a greedy algorithm based off of DP/A*, and the Chow-Liu tree building algorithm, though there are plans to soon add other algorithms.
Structure Learning in pomegranate
Exact Learning
Structure learning in pomegranate is done using the from_samples method. All you pass in is the samples, their associated weights (if not uniform), and the algorithm which you'd like to use, and it will learn the network for you using the dynamic programming implementation. Lets see a quick synthetic example to make sure that appropriate connections are found. Lets add connections between variables 1, 3, 6, and variables 0 and 2, and variables 4 and 5.
End of explanation
from sklearn.datasets import load_digits
X, y = load_digits(10, True)
X = X > numpy.mean(X)
plt.figure(figsize=(14, 4))
plt.subplot(131)
plt.imshow(X[0].reshape(8, 8), interpolation='nearest')
plt.grid(False)
plt.subplot(132)
plt.imshow(X[1].reshape(8, 8), interpolation='nearest')
plt.grid(False)
plt.subplot(133)
plt.imshow(X[2].reshape(8, 8), interpolation='nearest')
plt.grid(False)
X = X[:,:18]
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact-dp') # << BNSL done here!
t1 = time.time() - tic
p1 = model.log_probability(X).sum()
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact')
t2 = time.time() - tic
p2 = model.log_probability(X).sum()
print "Shortest Path"
print "Time (s): ", t1
print "P(D|M): ", p1
%memit BayesianNetwork.from_samples(X, algorithm='exact-dp')
print
print "A* Search"
print "Time (s): ", t2
print "P(D|M): ", p2
%memit BayesianNetwork.from_samples(X, algorithm='exact')
Explanation: The structure attribute returns a tuple of tuples, where each inner tuple corresponds to that node in the graph (and the column of data learned on). The numbers in that inner tuple correspond to the parents of that node. The results from this structure are that node 3 has node 1 as a parent, that node 2 has node 0 as a parent, and so forth. It seems to faithfully recapture the underlying dependencies in the data.
Now, two algorithms for performing search-and-score were mentioned, the traditional shortest path algorithm and the A* algorithm. These both work by essentially turning the Bayesian network structure learning problem into a shortest path problem over an 'order graph.' This order graph is a lattice made up of layers of variable sets from the BNSL problem, with the root node having no variables, the leaf node having all variables, and layer i in the lattice having all subsets of variables of size i. Each path from the root to the leaf represents a certain topological sort of the variables, with the shortest path corresponding to the optimal topological sort and Bayesian network. Details can be found <a href="http://url.cs.qc.cuny.edu/publications/Yuan11learning.pdf">here</a>. The traditional shortest path algorithm calculates the values of all edges in the order lattice before finding the shortest path, while the A* algorithm searches only a subset of the order lattice and begins searching immediately. Both methods yield optimal Bayesian networks.
A major problem that arises in the traditional shortest path algorithm is that the size of the order graph grows exponentially with the number of variables, and can make tasks infeasible that have otherwise-reasonable computational times. While the A* algorithm is faster computationally, another advantage is that it uses a much smaller amount of memory since it doesn't explore the full order graph, and so can be applied to larger problems.
In order to see the differences between these algorithms in practice, let's turn to the task of learning a Bayesian network over the digits dataset. The digits dataset is comprised of over a thousand 8x8 pictures of handwritten digits. We binarize the values into 'on' or 'off' for simplicity, and try to learn dependencies between the pixels.
End of explanation
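Since the structure tuple is a bit opaque, a small helper that prints it as parent -> child edges can be handy; it relies only on the structure attribute described above.
# Print the learned structure as edges (column index of parent -> child)
for child, parents in enumerate(model.structure):
    for parent in parents:
        print parent, '->', child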
tic = time.time()
model = BayesianNetwork.from_samples(X) # << Default BNSL setting
t = time.time() - tic
p = model.log_probability(X).sum()
print "Greedy"
print "Time (s): ", t
print "P(D|M): ", p
%memit BayesianNetwork.from_samples(X)
Explanation: These results show that the A* algorithm is both computationally faster and requires far less memory than the traditional algorithm, making it a better default for the 'exact' algorithm. The amount of memory used by the BNSL process is under 'increment', not 'peak memory', as 'peak memory' returns the total memory used by everything, while increment shows the difference in peak memory before and after the function has run.
Approximate Learning: Greedy Search (pomegranate default)
A natural heuristic when a non-greedy algorithm is too slow is to consider the greedy version. This simple implementation iteratively finds the best variable to add to the growing topological sort, allowing the new variable to draw only from variables already in the topological sort. This is the default in pomegranate because it has a nice balance between producing good (often optimal) graphs and having a small computational cost and memory footprint. However, there is no guarantee that this produces the globally optimal graph.
Let's see how it performs on the same dataset as above.
End of explanation
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='chow-liu') # << Default BNSL setting
t = time.time() - tic
p = model.log_probability(X).sum()
print "Chow-Liu"
print "Time (s): ", t
print "P(D|M): ", p
%memit BayesianNetwork.from_samples(X, algorithm='chow-liu')
Explanation: Approximate Learning: Chow-Liu Trees
However, there are even cases where the greedy heuristic is too slow, for example hundreds of variables. One of the first heuristics for BNSL is that of Chow-Liu trees, which learns the optimal tree from data. Essentially it calculates the mutual information between all pairs of variables and then finds the maximum spanning tree. A root node has to be input to turn the undirected edges based on mutual information into directed edges for the Bayesian network. The algorithm is is $O(d^{2})$ and practically is extremely fast and memory efficient, though it produces structures with a worse $P(D|M)$.
End of explanation
X, _ = load_digits(10, True)
X = X > numpy.mean(X)
t1, t2, t3, t4 = [], [], [], []
p1, p2, p3, p4 = [], [], [], []
n_vars = range(8, 19)
for i in n_vars:
X_ = X[:,:i]
tic = time.time()
model = BayesianNetwork.from_samples(X_, algorithm='exact-dp') # << BNSL done here!
t1.append(time.time() - tic)
p1.append(model.log_probability(X_).sum())
tic = time.time()
model = BayesianNetwork.from_samples(X_, algorithm='exact')
t2.append(time.time() - tic)
p2.append(model.log_probability(X_).sum())
tic = time.time()
model = BayesianNetwork.from_samples(X_, algorithm='greedy')
t3.append(time.time() - tic)
p3.append(model.log_probability(X_).sum())
tic = time.time()
model = BayesianNetwork.from_samples(X_, algorithm='chow-liu')
t4.append(time.time() - tic)
p4.append(model.log_probability(X_).sum())
plt.figure(figsize=(14, 4))
plt.subplot(121)
plt.title("Time to Learn Structure", fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.ylabel("Time (s)", fontsize=14)
plt.xlabel("Variables", fontsize=14)
plt.plot(n_vars, t1, c='c', label="Exact Shortest")
plt.plot(n_vars, t2, c='m', label="Exact A*")
plt.plot(n_vars, t3, c='g', label="Greedy")
plt.plot(n_vars, t4, c='r', label="Chow-Liu")
plt.legend(fontsize=14, loc=2)
plt.subplot(122)
plt.title("$P(D|M)$ with Resulting Model", fontsize=14)
plt.xlabel("Variables", fontsize=14)
plt.ylabel("logp", fontsize=14)
plt.plot(n_vars, p1, c='c', label="Exact Shortest")
plt.plot(n_vars, p2, c='m', label="Exact A*")
plt.plot(n_vars, p3, c='g', label="Greedy")
plt.plot(n_vars, p4, c='r', label="Chow-Liu")
plt.legend(fontsize=14)
Explanation: Comparison
We can then compare the algorithms directly to each other on the digits dataset as we expand the number of pixels to consider.
End of explanation
from pomegranate import DiscreteDistribution, ConditionalProbabilityTable, Node
BRCA1 = DiscreteDistribution({0: 0.999, 1: 0.001})
BRCA2 = DiscreteDistribution({0: 0.985, 1: 0.015})
LCT = DiscreteDistribution({0: 0.950, 1: 0.050})
OC = ConditionalProbabilityTable([[0, 0, 0, 0.999],
[0, 0, 1, 0.001],
[0, 1, 0, 0.750],
[0, 1, 1, 0.250],
[1, 0, 0, 0.700],
[1, 0, 1, 0.300],
[1, 1, 0, 0.050],
[1, 1, 1, 0.950]], [BRCA1, BRCA2])
LI = ConditionalProbabilityTable([[0, 0, 0.99],
[0, 1, 0.01],
[1, 0, 0.20],
[1, 1, 0.80]], [LCT])
PREG = DiscreteDistribution({0: 0.90, 1: 0.10})
LE = ConditionalProbabilityTable([[0, 0, 0.99],
[0, 1, 0.01],
[1, 0, 0.25],
[1, 1, 0.75]], [OC])
BLOAT = ConditionalProbabilityTable([[0, 0, 0, 0.85],
[0, 0, 1, 0.15],
[0, 1, 0, 0.70],
[0, 1, 1, 0.30],
[1, 0, 0, 0.40],
[1, 0, 1, 0.60],
[1, 1, 0, 0.10],
[1, 1, 1, 0.90]], [OC, LI])
LOA = ConditionalProbabilityTable([[0, 0, 0, 0.99],
[0, 0, 1, 0.01],
[0, 1, 0, 0.30],
[0, 1, 1, 0.70],
[1, 0, 0, 0.95],
[1, 0, 1, 0.05],
[1, 1, 0, 0.95],
[1, 1, 1, 0.05]], [PREG, OC])
VOM = ConditionalProbabilityTable([[0, 0, 0, 0, 0.99],
[0, 0, 0, 1, 0.01],
[0, 0, 1, 0, 0.80],
[0, 0, 1, 1, 0.20],
[0, 1, 0, 0, 0.40],
[0, 1, 0, 1, 0.60],
[0, 1, 1, 0, 0.30],
[0, 1, 1, 1, 0.70],
[1, 0, 0, 0, 0.30],
[1, 0, 0, 1, 0.70],
[1, 0, 1, 0, 0.20],
[1, 0, 1, 1, 0.80],
[1, 1, 0, 0, 0.05],
[1, 1, 0, 1, 0.95],
[1, 1, 1, 0, 0.01],
[1, 1, 1, 1, 0.99]], [PREG, OC, LI])
AC = ConditionalProbabilityTable([[0, 0, 0, 0.95],
[0, 0, 1, 0.05],
[0, 1, 0, 0.01],
[0, 1, 1, 0.99],
[1, 0, 0, 0.40],
[1, 0, 1, 0.60],
[1, 1, 0, 0.20],
[1, 1, 1, 0.80]], [PREG, LI])
s1 = Node(BRCA1, name="BRCA1")
s2 = Node(BRCA2, name="BRCA2")
s3 = Node(LCT, name="LCT")
s4 = Node(OC, name="OC")
s5 = Node(LI, name="LI")
s6 = Node(PREG, name="PREG")
s7 = Node(LE, name="LE")
s8 = Node(BLOAT, name="BLOAT")
s9 = Node(LOA, name="LOA")
s10 = Node(VOM, name="VOM")
s11 = Node(AC, name="AC")
model = BayesianNetwork("Hut")
model.add_nodes(s1, s2, s3, s4, s5, s6, s7, s8, s9, s10, s11)
model.add_edge(s1, s4)
model.add_edge(s2, s4)
model.add_edge(s3, s5)
model.add_edge(s4, s7)
model.add_edge(s4, s8)
model.add_edge(s4, s9)
model.add_edge(s4, s10)
model.add_edge(s5, s8)
model.add_edge(s5, s10)
model.add_edge(s5, s11)
model.add_edge(s6, s9)
model.add_edge(s6, s10)
model.add_edge(s6, s11)
model.bake()
plt.figure(figsize=(14, 10))
model.plot()
plt.show()
Explanation: We can see the expected results-- that the A* algorithm works faster than the shortest path, the greedy one faster than that, and Chow-Liu the fastest. The purple and cyan lines superimpose on the right plot as they produce graphs with the same score, followed closely by the greedy algorithm and then Chow-Liu performing the worst.
Constraint Graphs
Now, sometimes you have prior information about how groups of nodes are connected to each other and want to exploit that. This can take the form of a global ordering, where variables can be ordered in such a manner that edges only go from left to right, for example. However, sometimes you have layers in your network where variables are a part of these layers and can only have parents in another layer.
Lets consider a diagnostics Bayesian network like the following (no need to read code, the picture is all that is important for now):
End of explanation
import networkx
from pomegranate.utils import plot_networkx
constraints = networkx.DiGraph()
constraints.add_edge('genetic conditions', 'diseases')
constraints.add_edge('diseases', 'symptoms')
plot_networkx(constraints)
Explanation: This network contains three layers, with symptoms on the bottom (low energy, bloating, loss of appetite, vomiting, and abdominal cramps), diseases in the middle (ovarian cancer, lactose intolerance, and pregnancy), and genetic tests on the top for three different genetic mutations. The edges in this graph are constrained such that symptoms are explained by diseases, and diseases can be partially explained by genetic mutations. There are no edges from diseases to genetic conditions, and no edges from genetic conditions to symptoms. If we were going to design a more efficient search algorithm, we would want to exploit this fact to drastically reduce the search space of graphs.
Before presenting a solution, lets also consider another situation. In some cases you can define a global ordering of the variables, meaning you can order them from left to right and ensure that edges only go from the left to the right. This can represent some temporal separation (things on the left happen before things on the right), physical separation, or anything else. This would also dramatically reduce the search space.
In addition to reducing the search space, an efficient algorithm can exploit this layered structure. A key property of most scoring functions is the idea of "global parameter independence", meaning that the parents of node A are independent of the parents of node B assuming that they do not form a cycle in the graph. If you have a layered structure, either like in the diagnostics network or through a global ordering, it is impossible to form a cycle in the graph through any valid assignment of parent values. This means that the parents for each node can be identified independently, drastically reducing the runtime of the algorithm.
Now, sometimes we know ~some things~ about the structure of the variables, but nothing about the others. For example, we might have a partial ordering on some variables but not know anything about the others. We could enforce an arbitrary ordering on the others, but this may not be well justified. In essence, we'd like to exploit whatever information we have.
Abstractly, we can think about this in terms of constraint graphs. Lets say you have some symptoms, diseases, and genetic tests, and don't a priori know the connection between all of these pieces, but you do know the previous layer structure. You can define a "constraint graph" which is made up of three nodes, "symptoms", "diseases", and "genetic mutations". There is a directed edge from genetic mutations to diseases, and a directed edge from diseases to symptoms. This specifies that genetic mutations can be parents to diseases, and diseases to symptoms. It would look like the following:
End of explanation
constraints = networkx.DiGraph()
constraints.add_edge(0, 1)
constraints.add_edge(1, 2)
constraints.add_edge(0, 2)
plot_networkx(constraints)
Explanation: All variables corresponding to these categories would be put in their appropriate name. This would define a scaffold for structure learning.
Now, we can do the same thing for a global ordering. Lets say we have 3 variables in an order from 0-2.
End of explanation
constraints = networkx.DiGraph()
constraints.add_edge(0, 1)
constraints.add_edge(0, 0)
plot_networkx(constraints)
Explanation: In this graph, we're saying that variable 0 can be a parent for 1 or 2, and that variable 1 can be a parent for variable 2. In the same way that putting multiple variables in a node of the constraint graph allowed us to define layers, putting a single variable in the nodes of a constraint graph can allow us to define an ordering.
To be specific, lets say we want to find the parents of the variables in node 1 given that those variables parents can only come from the variables in node 0. We can independently find the best parents for each variable in node 1 from the set of those in node 0. This is significantly faster than trying to find the best Bayesian network of all variables in nodes 0 and 1. We can also do the same thing for the variables in node 2 by going through the variables in both nodes 0 and 1 to find the best parent set for the variables in node 2.
However, there are some cases where we know nothing about the parent structure of some variables. This can be solved by including self-loops in the graph, where a node is its own parent. This means that we know nothing about the parent structure of the variables in that node and that the full exponential time algorithm will have to be run. The naive structure learning algorithm can be thought of as putting all variables in a single node in the constraint graph and putting a self-loop on that node.
We are thus left with two procedures; one for solving edges which are self edges, and one for solving edges which are not. Even though we have to use the exponential time procedure on variables in nodes with self loops, it will still be significantly faster because we will be using less variables (except in the naive case).
Frequently though we will have some information about some of the nodes of the graph even if we don't have information about all of the nodes. Lets take the case where we know some variables have no children but can have parents, and know nothing about the other variables.
End of explanation
numpy.random.seed(6)
X = numpy.random.randint(2, size=(200, 15))
X[:,1] = X[:,7]
X[:,12] = 1 - X[:,7]
X[:,5] = X[:,3]
X[:,13] = X[:,11]
X[:,14] = X[:,11]
a = networkx.DiGraph()
b = tuple((0, 1, 2, 3, 4))
c = tuple((5, 6, 7, 8, 9))
d = tuple((10, 11, 12, 13, 14))
a.add_edge(b, c)
a.add_edge(c, d)
print "Constraint Graph"
plot_networkx(a)
plt.show()
print "Learned Bayesian Network"
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact', constraint_graph=a)
plt.figure(figsize=(16, 8))
model.plot()
plt.show()
print "pomegranate time: ", time.time() - tic, model.structure
Explanation: In this situation we would have to run the exponential time algorithm on the variables in node 0 to find the optimal parents, and then run the independent parents algorithm on the variables in node 1 drawing only from the variables in node 0. To be specific:
(1) Use exponential time procedure to find optimal structure amongst variables in node 0
(2) Use independent-parents procedure to find the best parents of variables in node 1, restricting the parents to be in node 0
(3) Concatenate these parent sets together to get the optimal structure of the network given the constraints.
We can generalize this to any arbitrary constraint graph:
(1) Use exponential time procedure to find optimal structure amongst variables in nodes with self loops (including parents from other nodes if needed)
(2) Use independent-parents procedure to find the best parents of variables in a node given the constraint that the parents must come from variables in the node which is this node's parent
(3) Concatenate these parent sets together to get the optimal structure of the network given the constraints.
According to the global parameter independence property of Bayesian networks, this procedure will give the globally optimal Bayesian network while exploring a significantly smaller part of the network.
pomegranate supports constraint graphs in an extremely easy-to-use manner. Lets say that we have a graph with three layers like the diagnostic model, and five variables in each layer. We can define the constraint graph as a networkx DiGraph, with the nodes being tuples containing the column ids of the variables belonging to that node.
In this case, we're saying that (0, 1, 2, 3, 4) is the first node, (5, 6, 7, 8, 9) is the second node, and (10, 11, 12, 13, 14) is the final node. Lets make nodes 1, 7, and 12 related, 11, 13, 14 related, and 3 and 5 related. In this case, there should be an edge from 1 to 7, and 7 to 12. 11, 13, and 14 are all a part of the same layer and so that connection should be ignored, and then there should be a connection from 3 to 5.
End of explanation
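The self-loop case described above is just as easy to express; here is a sketch with hypothetical column groups (columns 0-4 fully unconstrained among themselves, columns 5-9 only allowed to draw parents from 0-4).
# Constraint graph with a self-loop (hypothetical column grouping)
unknown = tuple(range(5))
children = tuple(range(5, 10))
cg = networkx.DiGraph()
cg.add_edge(unknown, unknown) # self-loop: run the full search within this group
cg.add_edge(unknown, children) # these columns may only take parents from `unknown`
# model = BayesianNetwork.from_samples(X, algorithm='exact', constraint_graph=cg)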
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact')
plt.figure(figsize=(16, 8))
model.plot()
plt.show()
print "pomegranate time: ", time.time() - tic, model.structure
Explanation: We see that the structure is reconstructed perfectly here. Lets see what would happen if we didn't use the constraint graph.
End of explanation
constraint_times, times = [], []
x = numpy.arange(1, 7)
for i in x:
symptoms = tuple(range(i))
diseases = tuple(range(i, i*2))
genetic = tuple(range(i*2, i*3))
constraints = networkx.DiGraph()
constraints.add_edge(genetic, diseases)
constraints.add_edge(diseases, symptoms)
X = numpy.random.randint(2, size=(2000, i*3))
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact', constraint_graph=constraints)
constraint_times.append( time.time() - tic )
tic = time.time()
model = BayesianNetwork.from_samples(X, algorithm='exact')
times.append( time.time() - tic )
plt.figure(figsize=(14, 6))
plt.title('Time To Learn Bayesian Network', fontsize=18)
plt.xlabel("Number of Variables", fontsize=14)
plt.ylabel("Time (s)", fontsize=14)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.plot( x*3, times, linewidth=3, color='c', label='Exact')
plt.plot( x*3, constraint_times, linewidth=3, color='m', label='Constrained')
plt.legend(loc=2, fontsize=16)
plt.yscale('log')
Explanation: It looks like we got three desirable attributes by using a constraint graph. The first is that there was over an order of magnitude speed improvement in finding the optimal graph. The second is that we were able to remove some edges we didn't want in the final Bayesian network, such as those between 11, 13, and 14. We also removed the edge between 1 and 12 and 1 and 3, which are spurious given the model that we originally defined. The third desired attribute is that we can specify the direction of some of the edges and get a better causal model.
Lets take a look at how big of a model we can learn given a three layer constraint graph like before.
End of explanation |
14,294 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook a Q learner with dyna and a custom predictor will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value).
Step1: Let's show the symbols data, to see how good the recommender has to be.
Step2: Let's run the trained agent, with the test set
First a non-learning test
Step3: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
Step4: What are the metrics for "holding the position"? | Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
import pickle
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent_predictor import AgentPredictor
from functools import partial
from sklearn.externals import joblib
NUM_THREADS = 1
LOOKBACK = -1
STARTING_DAYS_AHEAD = 252
POSSIBLE_FRACTIONS = [0.0, 1.0]
DYNA = 1
BASE_DAYS = 112
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Crop the final days of the test set as a workaround to make dyna work
# (the env, only has the market calendar up to a certain time)
data_test_df = data_test_df.iloc[:-DYNA]
total_data_test_df = total_data_test_df.loc[:data_test_df.index[-1]]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_in_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS)
estimator_close = joblib.load('../../data/best_predictor.pkl')
estimator_volume = joblib.load('../../data/best_volume_predictor.pkl')
agents = [AgentPredictor(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.999,
dyna_iterations=DYNA,
name='Agent_{}'.format(i),
estimator_close=estimator_close,
estimator_volume=estimator_volume,
env=env,
prediction_window=BASE_DAYS) for i in index]
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
Explanation: In this notebook a Q learner with dyna and a custom predictor will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value).
End of explanation
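The "desired fraction" action can be pictured with a small hypothetical helper (not part of the recommender package) that converts a target fraction of the portfolio value into a share order.
# Hypothetical illustration of the action semantics (not from recommender.*)
def shares_to_trade(target_fraction, portfolio_value, price, current_shares):
    target_shares = int(target_fraction * portfolio_value / price)
    return target_shares - current_shares  # positive -> buy, negative -> sell
# e.g. shares_to_trade(1.0, 10000.0, 250.0, 0) == 40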
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 4
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
import pickle
with open('../../data/dyna_q_with_predictor.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
Explanation: Let's show the symbols data, to see how good the recommender has to be.
End of explanation
TEST_DAYS_AHEAD = 112
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
Explanation: Let's run the trained agent, with the test set
First a non-learning test: this scenario would be worse than what is possible (in fact, the q-learner can learn from past samples in the test set without compromising the causality).
End of explanation
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
Explanation: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
End of explanation
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))))
Explanation: What are the metrics for "holding the position"?
End of explanation |
14,295 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualization 1
Step1: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
Step2: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate random data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Visualization 1: Matplotlib Basics Exercises
End of explanation
y=np.random.randn(30)
x=np.random.randn(30)
plt.scatter(x,y, color="r",s=50, marker='x',alpha=.9)
plt.xlabel('Random Values for X')
plt.ylabel('Random Values for Y')
plt.title("My Random Values")
Explanation: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
End of explanation
data=np.random.randn(50)
plt.hist(data, bins=10,color='g',align='left')
plt.xlabel('Value')
plt.ylabel('Number of Random Numbers')
plt.title('My Histogram')
Explanation: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate random data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title.
End of explanation |
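For reference, plt.hist also accepts an explicit sequence of bin edges and further style options; for example (illustrative values, using the data array defined above):
plt.hist(data, bins=np.arange(-3, 3.5, 0.5), color='g', alpha=0.7, edgecolor='k')  # explicit bin edges every 0.5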
14,296 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We have 2 banners promoting a new sport club.
The first banner is aggressive
Step1: To decide which banner is better we run an experiment. We show both banners to random clients and draw conclusions from the data we get.
Imagine we ran an experiment and computed the Conversion Rate as #Conversions / #Shows. That's a point estimate = 💩
To make statistically significant conclusions we need to use confidence intervals or hypothesis testing methods.
Let's simplify everything a bit and sample from the true distribution directly. In practice we usually can't afford that and use bootstrapping (delta method, etc.), but since we have the world model we don't need the bootstrap.
Step2: Regret - the money we lost on an experiment. If we had a magic oracle that told us which banner is the best without any experiments, we would save ~ 347k - 343k = 4k €
You may also compute regret as the number of "bad" banner shows. The loss on each show is the same, so the regret is 1M shows.
Since we know how the real world behaves, let's check which banner is actually better.
Step3: Usually you have too many factors and it's hard to say whether two banners have really different conversion rates. The smaller the difference, the more audience you need to detect it.
So, you want some early-stopping method plus a tool to compare more than 2 banners at the same time.
There are some tools in classic statistics. But they look overcomplicated compared to the following approach. The other benefit of Multiarmed Bandits
Step4: $Pr(A|B) = \frac{Pr(B|A)Pr(A)}{Pr(B)}$
$X$ - events (click/no-click)
The CTR is a distribution; obviously it's defined on [0,1] (the domain of the Beta distribution).
$Pr(CTR|X) = \frac{Pr(X|CTR)Pr(CTR)}{Pr(X)} = \frac{Binomial(CTR) * Beta(\alpha, \beta)}{\int{Binomial(CTR) * Beta(\alpha, \beta)}} = \frac{Bernoulli(CTR) * Beta(\alpha, \beta)}{Const}$
Beta
Step5: What banner to show
Let's create a lottery. On each show <s>sample</s> draw a dice out of Beta distribution and have a CTR point estimation. | Python Code:
crossfitters_ratio = .48
aggressive = {"crossfitters": .68, "runners": .04}
neutral = {"crossfitters": .28, "runners": .4}
def test_banner(banner, shows):
runners_dist = stats.bernoulli(banner["runners"])
crossfitters_dist = stats.bernoulli(banner["crossfitters"])
crossfitters_cnt = stats.bernoulli(crossfitters_ratio).rvs(shows).sum()
runners_cnt = shows - crossfitters_cnt
crossfitters_hits = crossfitters_dist.rvs(crossfitters_cnt).sum()
runners_hits = runners_dist.rvs(runners_cnt).sum()
return crossfitters_hits + runners_hits
Explanation: We have 2 banners promoting a new sport club.
The first banner is aggressive: it focuses on the weight equipment we have and is very attractive to crossfitters, but it completely fails to convince runners. The other banner focuses on the cardio machines we have and is much more attractive to runners; it is still attractive to crossfitters, just not as cool as the first one.
Actually we don't know this; it's just what the designers had in mind when they created these banners.
Also, let's imagine we don't know the website visitors' interests, and ideally we show just one banner which is "the best" in general.
Let's define the world model and use it from now on as a black box. We use bernoulli instead of binomial to make everything transparent.
End of explanation
%%time
revenue_agressive = [test_banner(aggressive, 100) for _ in range(1000)]
revenue_neutral = [test_banner(neutral, 100) for _ in range(1000)]
sns.distplot(revenue_agressive, label="agressive")
sns.distplot(revenue_neutral, label="neutral")
plt.legend()
%%time
revenue_agressive = [test_banner(aggressive, 1000) for _ in range(1000)]
revenue_neutral = [test_banner(neutral, 1000) for _ in range(1000)]
sns.distplot(revenue_agressive)
sns.distplot(revenue_neutral)
%%time
revenue_agressive = [test_banner(aggressive, 100000) for _ in range(1000)]
revenue_neutral = [test_banner(neutral, 100000) for _ in range(1000)]
sns.distplot(revenue_agressive)
sns.distplot(revenue_neutral)
%%time
revenue_agressive = [test_banner(aggressive, 1000000) for _ in range(1000)]
revenue_neutral = [test_banner(neutral, 1000000) for _ in range(1000)]
sns.distplot(revenue_agressive)
sns.distplot(revenue_neutral)
Explanation: To decide which banner is better we run an experiment. We show both banners to random clients and draw conclusions from the data we get.
Imagine we ran an experiment and computed the Conversion Rate as #Conversions / #Shows. That's a point estimate = 💩
To make statistically significant conclusions we need to use confidence intervals or hypothesis testing methods.
Let's simplify everything a bit and sample from the true distribution directly. In practice we usually can't afford that and use bootstrapping (delta method, etc.), but since we have the world model we don't need the bootstrap.
End of explanation
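Without the world model, one way to get an interval instead of a point estimate from real experiment data is a simple bootstrap over the observed 0/1 conversions. A sketch; the clicks array here is simulated stand-in data, not actual experiment output:
clicks = stats.bernoulli(.34).rvs(10000)   # stand-in for the observed conversions of one banner
boot_means = [np.random.choice(clicks, len(clicks), replace=True).mean() for _ in range(1000)]
print(np.percentile(boot_means, [2.5, 97.5]))   # ~95% confidence interval for the conversion rate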
.48 * .68 + .52 * .04, .48 * .28 + .52 * .4
Explanation: Regret - the money we lost on an experiment. If we had a magic oracle that told us which banner is the best without any experiments, we would save ~ 347k - 343k = 4k €
You may also compute regret as the number of "bad" banner shows. The loss on each show is the same, so the regret is 1M shows.
Since we know how the real world behaves, let's check which banner is actually better.
End of explanation
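Spelling the expected conversion rates out: the aggressive banner converts 0.48 * 0.68 + 0.52 * 0.04 = 0.3472 of shows, the neutral one 0.48 * 0.28 + 0.52 * 0.40 = 0.3424. Showing the worse banner 1,000,000 times therefore costs about (0.3472 - 0.3424) * 1,000,000 ≈ 4,800 conversions, which is the scale of the ~4k € regret quoted earlier (the euro figure additionally assumes roughly 1 € per conversion, which is implied rather than stated).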
xs = np.linspace(0, 1, 100)
plt.plot(xs, stats.beta(1, 1).pdf(xs), label="alpha = 1 beta = 1")
plt.plot(xs, stats.beta(.1, .1).pdf(xs), label="alpha = .1 beta = .1")
plt.plot(xs, stats.beta(7, 3).pdf(xs), label="alpha = 7 beta = 3")
plt.legend()
Explanation: Usually you have too many factors and it's hard to say whether two banners have really different conversion rates. The smaller the difference, the more audience you need to detect it.
So, you want some early-stopping method plus a tool to compare more than 2 banners at the same time.
There are some tools in classic statistics, but they look overcomplicated compared to the following approach. The other benefit of Multiarmed Bandits: you can use Contextual Multiarmed Bandits when you have additional information about users (gender, city, etc.).
Multiarmed bandit [Thompson sampling]
<img src="https://www.abtasty.com/content/uploads/img_5559fcc451925.png" width="200px" align="left"/>
<img src="https://vignette.wikia.nocookie.net/matrix/images/d/da/Spoon_Boy_Neo_Bends.jpg/revision/latest/scale-to-width-down/266?cb=20130119092916" width="200px" align="right"/>
The CTR doesn't exist as a single number, but we have a CTR distribution.
In fact, it is a distribution of our knowledge about the CTR.
Let's assume the CTR follows a Beta distribution. <s>Because of the conjugate prior</s> Because I like the Beta distribution.
We don't have any data-supported prior knowledge about the true CTR, so it's better to use a non-informative prior than some particular value.
Don't use prejudices/preconceptions as a prior. Use either a non-informative prior or something supported by data. Otherwise you will be losing money while the model corrects your prior beliefs with the data.
This is how the Beta distribution looks:
End of explanation
xs = np.linspace(0, 1, 100)
plt.plot(xs, stats.beta(7, 3).pdf(xs), label="alpha = 7 beta = 3")
plt.plot(xs, stats.beta(70, 30).pdf(xs), label="alpha = 21 beta = 9")
plt.plot(xs, stats.beta(700, 300).pdf(xs), label="alpha = 70 beta = 30")
plt.axvline(.7, 0, 1, color="red")
plt.legend()
Explanation: $Pr(A|B) = \frac{Pr(B|A)Pr(A)}{Pr(B)}$
$X$ - events (click/no-click)
The CTR is a distribution; obviously it's defined on [0,1] (the domain of the Beta distribution).
$Pr(CTR|X) = \frac{Pr(X|CTR)Pr(CTR)}{Pr(X)} = \frac{Binomial(CTR) * Beta(\alpha, \beta)}{\int{Binomial(CTR) * Beta(\alpha, \beta)}} = \frac{Bernoulli(CTR) * Beta(\alpha, \beta)}{Const}$
Beta: $\frac{p^{\alpha - 1}(1 - p)^{\beta - 1}}{\mathrm {B}(\alpha, \beta)}$ Binomial: $\binom{N}{k} p^k(1 - p)^{N - k}$, where p is a success probability, which is distributed as Beta
$
Pr(CTR|X) = (p^{(\alpha + k) - 1} (1 - p)^{(\beta + N - k) - 1}) / Const
$
It has the shape of a Beta distribution: $p^{\alpha - 1} (1 - p)^{\beta - 1}$
$\alpha_{new} = \alpha + k$
$\beta_{new} = \beta + N - k$
But we don't know the normalization constant: this curve may lie above or below the Beta density with the same parameters. However, we know the posterior is some distribution, so the area under the curve must equal 1. The only possible option is that the Pr(CTR|X) curve is exactly the Beta distribution with parameters $\alpha_{new}$ and $\beta_{new}$.
$\alpha$ & $\beta$ correspond to the number of successes / failures
The more data we've seen, the more confident we are in the estimate.
End of explanation
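The conjugate update derived above boils down to two counters; a minimal sketch of the posterior update used in the loops below:
def update_posterior(alpha, beta, clicks, shows):
    # Beta(alpha, beta) prior + observed clicks out of shows -> Beta(alpha + clicks, beta + shows - clicks)
    return alpha + clicks, beta + (shows - clicks)

# e.g. starting from the flat prior Beta(1, 1), 7 clicks out of 10 shows gives Beta(8, 4)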
crossfitters_ratio = .48
aggressive = {"crossfitters": .68, "runners": .04}
neutral = {"crossfitters": .28, "runners": .4}
agressive_beta = {"alpha": 1, "beta": 1}
neutral_beta = {"alpha": 1, "beta": 1}
regret = 0
revenue = 0
for _ in tqdm(range(2000000)):
aggresive_score = stats.beta(agressive_beta["alpha"], agressive_beta["beta"]).rvs()
neutral_score = stats.beta(neutral_beta["alpha"], neutral_beta["beta"]).rvs()
user_type = "crossfitters" if stats.bernoulli(crossfitters_ratio).rvs() > 0 else "runners"
if aggresive_score > neutral_score:
click = stats.bernoulli(aggressive[user_type]).rvs()
if click:
agressive_beta["alpha"] += 1
else:
agressive_beta["beta"] += 1
else:
regret += 1
click = stats.bernoulli(neutral[user_type]).rvs()
if click:
neutral_beta["alpha"] += 1
else:
neutral_beta["beta"] += 1
revenue += click
regret, revenue
agressive_beta, neutral_beta
agressive_beta["alpha"] / (agressive_beta["alpha"] + agressive_beta["beta"]), neutral_beta["alpha"] / (neutral_beta["alpha"] + neutral_beta["beta"])
crossfitters_ratio = .48
aggressive = {"crossfitters": .68, "runners": .04}
neutral = {"crossfitters": .28, "runners": .4}
agressive_beta = {"alpha": 1, "beta": 1}
neutral_beta = {"alpha": 1, "beta": 1}
regret = 0
revenue = 0
for _ in tqdm(range(200000)):
aggresive_score = stats.beta(agressive_beta["alpha"], agressive_beta["beta"]).rvs()
neutral_score = stats.beta(neutral_beta["alpha"], neutral_beta["beta"]).rvs()
user_type = "crossfitters" if stats.bernoulli(crossfitters_ratio).rvs() > 0 else "runners"
if aggresive_score > neutral_score:
click = stats.bernoulli(aggressive[user_type]).rvs()
if click:
agressive_beta["alpha"] += 1
else:
agressive_beta["beta"] += 1
else:
regret += 1
click = stats.bernoulli(neutral[user_type]).rvs()
if click:
neutral_beta["alpha"] += 1
else:
neutral_beta["beta"] += 1
revenue += click
xs = np.linspace(0.33, 0.36, 100)
plt.plot(xs, stats.beta(agressive_beta["alpha"], agressive_beta["beta"]).pdf(xs), label="agressive")
plt.plot(xs, stats.beta(neutral_beta["alpha"], neutral_beta["beta"]).pdf(xs), label="neutral")
plt.legend()
regret, revenue
agressive_beta, neutral_beta
agressive_beta["alpha"] / (agressive_beta["alpha"] + agressive_beta["beta"]), neutral_beta["alpha"] / (neutral_beta["alpha"] + neutral_beta["beta"])
Explanation: What banner to show
Let's create a lottery. On each show, <s>sample</s> draw a die from the Beta distribution and get a CTR point estimate.
End of explanation |
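The draw is what keeps exploration alive: always picking the larger posterior mean (alpha / (alpha + beta)) would lock onto an early leader, while sampling still gives the apparently worse banner a chance in proportion to how plausible it remains. For a single decision the two rules differ only in this line (illustrative, reusing the posteriors fitted above):
greedy_pick = np.argmax([b["alpha"] / (b["alpha"] + b["beta"]) for b in (agressive_beta, neutral_beta)])
thompson_pick = np.argmax([stats.beta(b["alpha"], b["beta"]).rvs() for b in (agressive_beta, neutral_beta)])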
14,297 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exceptions
An exception is an event which occurs during the execution of a program and disrupts the normal flow of the program's instructions. When Python code generates an exception, it must either handle the exception immediately or raise it; otherwise execution terminates.
Exceptions are objects, organized in a tree.
1. Handling (solving) exceptions
1.1 Catching exceptions
Exceptions are used to handle errors or unusual situations.
Step1: 1.2 Defining alternatives with else
The else statement solves the previous problem (running unwanted code).
Step2: 1.3 Finally, the finally code is always executed
No matter what happened in the rest of the sections of the try statement (even if an exception is thrown in those sections).
Step3: 1.4 Discriminating exceptions
The except statement accepts the type of exception as an argument. This is suitable to refine the exception handling.
Step4: 1.5 Handling several exceptions in the same way
Step5: 2. Raising exceptions
Sometimes we don't want (or don't know
Step6: 3. Creating (new type of) exceptions
Exceptions can be created, for example, to increase the functionality of an existing one. All exceptions must be derived (directly or indirectly) from the Exception class.
Step7: 4. Asserting
Assertions are statements that throw an exception (AssertionError) when some condition is true. For this reason they are used in testing time. Assertions are ignored when the interpreter is invoked in release mode (using the -O flag). | Python Code:
# Note: You must interrupt the kernel (see the menu) in order to simulate <ctrl>+c.
try:
text = input('Please, enter something (or stop the kernel): ')
except:
print('Sorry, something wrong happened :-(')
# This command never should be executed if you didn't provide an input
print('You entered "{}".'.format(text))
Explanation: Exceptions
An exception is an event which occurs during the execution of a program and disrupts the normal flow of the program's instructions. When Python code generates an exception, it must either handle the exception immediately or raise it; otherwise execution terminates.
Exceptions are objects, organized in a tree.
1. Handling (solving) exceptions
1.1 Catching exceptions
Exceptions are used to handle errors or unusual situations.
End of explanation
try:
text = input('Please, enter something: ')
except:
print('Sorry, something wrong happened :-(')
else:
# Now this statement is executed only if you provided an input
print('You entered "{}".'.format(text))
Explanation: 1.2 Defining alternatives with else
The else statement solves the previous problem (running unwanted code).
End of explanation
try:
text = input('Please, enter something: ')
except:
print('Sorry, something wrong happened :-(')
else:
print('You entered "{}".'.format(text))
finally:
# This will always be executed, whether an exception was raised or not.
print('Thanks for your interaction!')
Explanation: 1.3 Finally, the finally code is always executed
No matter what happened in the rest of the sections of the try statement (even if an exception is thrown in those sections).
End of explanation
try:
text = input('Please, enter something: ')
except EOFError: # Exception specific for input()
print('Sorry, you didn\'t enter anything (<ctrl>+d) :-(')
except KeyboardInterrupt: # Exception raised when a program is interrupted
print('Sorry, you cancelled the input (<ctrl>+c) :-(')
else:
print('You entered "{}".'.format(text))
finally:
print('Thanks for your interaction!')
Explanation: 1.4 Discriminating exceptions
The except statement accepts the type of exception as an argument. This is suitable to refine the exception handling.
End of explanation
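Because exceptions are organized in a class tree, the order of except clauses matters: the first matching clause wins, so placing a broad base class first shadows more specific handlers after it. A small illustration:
try:
    1 / 0
except ArithmeticError:        # the base class matches first...
    print('generic arithmetic problem')
except ZeroDivisionError:      # ...so this more specific handler is never reached
    print('division by zero')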
try:
x = 1/0
except (ArithmeticError, ZeroDivisionError):
print('Some arithmetic issue has arisen :-/')
Explanation: 1.5 Handling several exceptions in the same way
End of explanation
def keyboard_input():
try:
text = input('Please, enter something: ')
return text
except KeyboardInterrupt: # Exception raised when a program is interrupted
print("Sorry, you can't cancel the input :-(")
raise
while True:
try:
print('You entered:', keyboard_input())
break
except KeyboardInterrupt:
print('Please, try again')
Explanation: 2. Raising exceptions
Sometimes we don't want (or don't know :-) how to manage an exception in the current function (or method). In this case, the exception can be propagated upwards to the code that called (directly or indirectly) the code raising the exception. Exceptions generated by a statement are documented and accessible through the built-in help() function.
End of explanation
class SmallStack_full(Exception):
pass
class SmallStack_empty(Exception):
pass
class SmallStack():
'''A stack structure with 10 slots.'''
def __init__(self):
'''Create the stack.'''
self.stack = [None]*10
self.counter = 0
def push(self, x)->None:
'''Put "x" on the stack.
Raises SmallStack_full upon fullness.
'''
if self.counter < 10:
self.stack[self.counter] = x
self.counter += 1
else:
raise SmallStack_full
def pop(self)->object:
'''Remove the last element inserted in the stack.
Raises SmallStack_empty upon emptiness.
'''
if self.counter > 0:
self.counter -= 1
return self.stack[self.counter]
else:
raise SmallStack_empty
s = SmallStack()
try:
for i in range(100):
s.push(i)
print(i)
except SmallStack_full:
print('The stack is full. i={}'.format(i))
try:
for i in range(100):
print(i, s.pop())
except SmallStack_empty:
print('The stack is empty. i={}'.format(i))
Explanation: 3. Creating (new type of) exceptions
Exceptions can be created, for example, to increase the functionality of an existing one. All exceptions must be derived (directly or indirectly) from the Exception class.
End of explanation
! cat testing_assertions.py
! python testing_assertions.py
! python -O testing_assertions.py
Explanation: 4. Asserting
Assertions are statements that raise an exception (AssertionError) when some condition is false. For this reason they are used at testing time. Assertions are ignored when the interpreter is invoked in release mode (using the -O flag).
End of explanation |
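The file testing_assertions.py is not included in this excerpt; a minimal hypothetical stand-in that reproduces the described behaviour (an AssertionError on a normal run, no error under python -O) could be:
# testing_assertions.py (hypothetical content)
x = 1
assert x < 0, 'x was expected to be negative'
print('this line is only reached when assertions are disabled (python -O)')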
14,298 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Figure
Step1: values from Okanoya paper below (KOUMURA_OKANOYA_NOTE_ERROR_RATES) are taken from this table | Python Code:
TRAIN_DUR_IND_MAP = {
k:v for k, v in zip(
sorted(curve_df['train_set_dur'].unique()),
sorted(curve_df['train_set_dur_ind'].unique())
)
}
Explanation: Figure
End of explanation
SAVE_FIG = True
sns.set("paper")
KOUMURA_OKANOYA_NOTE_ERROR_RATES = {
120. : 0.84,
480. : 0.46,
}
KOUMURA_OKANOYA_X = np.asarray([TRAIN_DUR_IND_MAP[k] for k in KOUMURA_OKANOYA_NOTE_ERROR_RATES.keys()])
KOUMURA_OKANOYA_Y = np.asarray(list(KOUMURA_OKANOYA_NOTE_ERROR_RATES.values()))
# max width in inches is 7.5
# https://journals.plos.org/ploscompbiol/s/figures
FIGSIZE = (7.5, 3.75)
DPI = 300
fig = plt.figure(constrained_layout=True, figsize=FIGSIZE, dpi=DPI)
gs = fig.add_gridspec(nrows=4, ncols=2, hspace=0.005)
ax_arr = []
ax_arr.append(fig.add_subplot(gs[0, 0]))
ax_arr.append(fig.add_subplot(gs[:2, 1]))
ax_arr.append(fig.add_subplot(gs[1:, 0]))
ax_arr.append(fig.add_subplot(gs[2:, 1]))
ax_arr = np.array(ax_arr).reshape(2, 2)
ax_arr[0,0].get_shared_x_axes().join(*ax_arr[:, 0].tolist())
ax_arr[0,0].get_shared_x_axes().join(*ax_arr[:, 1].tolist())
for col in range(2):
ax_arr[0,col].spines['bottom'].set_visible(False)
ax_arr[1, col].spines['top'].set_visible(False)
ax_arr[1, col].xaxis.tick_bottom()
metric_list = ['avg_error', 'avg_segment_error_rate']
ylabels = ['Frame error (%)', 'Segment error rate\n(edits per segment)']
for col, (metric, ylabel) in enumerate(zip(metric_list, ylabels)):
for row in range(2):
# ax_ind = row * 2 + col
ax = ax_arr[row, col]
if row == 1 and col == 1:
legend = 'full'
else:
legend = False
sns.lineplot(x='train_set_dur_ind', y=metric, hue='bird', data=curve_df, ci='sd', linewidth=2, ax=ax, legend=legend)
sns.lineplot(x='train_set_dur_ind', y=metric,
linestyle='dashed', color='k', linewidth=4,
data=curve_df, ci=None, label='mean', ax=ax, legend=legend)
if metric == 'avg_segment_error_rate' and row == 0:
scatter = ax.scatter(KOUMURA_OKANOYA_X, KOUMURA_OKANOYA_Y, s=20)
ax.legend(handles=(scatter,), labels=('Koumura\nOkanoya 2016',), loc='upper left')
ax.set_ylabel('')
if row == 0:
ax.set_xticklabels([])
ax.set_xlabel('')
elif row == 1:
ax.set_xlabel('Training set duration (s)', fontsize=10)
ax.set_xticks(list(TRAIN_DUR_IND_MAP.values()))
ax.set_xticklabels(sorted(curve_df['train_set_dur'].unique().astype(int)), rotation=45)
# zoom-in / limit the view to different portions of the data
ax_arr[0, 0].set_ylim(12, 100)
ax_arr[1, 0].set_ylim(0, 8)
ax_arr[0, 1].set_ylim(0.35, 0.95)
ax_arr[1, 1].set_ylim(0.0, 0.12)
bigax_col0 = fig.add_subplot(gs[:, 0], frameon=False)
bigax_col1 = fig.add_subplot(gs[:, 1], frameon=False)
labelpads = (2, 10)
panel_labels = ['A', 'B']
for ylabel, labelpad, panel_label, ax in zip(ylabels,
labelpads,
panel_labels,
[bigax_col0, bigax_col1]):
ax.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False)
ax.grid(False)
ax.set_ylabel(ylabel, fontsize=10, labelpad=labelpad)
ax.text(-0.2, 1., panel_label, transform=ax.transAxes,
fontsize=12, fontweight='bold', va='top', ha='right')
# get handles from lower right axes legend, then remove and re-create outside
handles, _ = ax_arr[1, 1].get_legend_handles_labels()
ax_arr[1, 1].get_legend().remove()
bigax_col1.legend(handles=handles, bbox_to_anchor=(1.35, 1))
for row in range(2):
for col in range(2):
ax_arr[row, col].spines['left'].set_color('black')
ax_arr[row, col].spines['left'].set_linewidth(0.5)
if row == 1:
ax_arr[row, col].spines['bottom'].set_color('black')
ax_arr[row, col].spines['bottom'].set_linewidth(0.5)
for ax_ in ax_arr.ravel():
ax_.tick_params(axis='both', which='major', labelsize=8)
fig.set_constrained_layout_pads(hspace=-0.05, wspace=0.0)
if SAVE_FIG:
plt.savefig(
REPO_ROOT.joinpath('doc/article/figures/fig4/fig4-learning-curves.png')
)
plt.savefig(
REPO_ROOT.joinpath('doc/article/figures/fig4/fig4-learning-curves.svg')
)
plt.savefig(
REPO_ROOT.joinpath('doc/article/figures/fig4/fig4-learning-curves.tiff')
)
Explanation: values from Okanoya paper below (KOUMURA_OKANOYA_NOTE_ERROR_RATES) are taken from this table:
https://doi.org/10.1371/journal.pone.0159188.t001
Their "note error rate" is what we call "segment error rate".
We chose the values from their models that achieved the lowest error rate.
End of explanation |
14,299 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Exercise -
Follow along with the instructions in bold. Watch the solutions video if you get stuck!
The Data
Source
Step1: Use pandas to read the csv of the monthly-milk-production.csv file and set index_col='Month'
Step2: Check out the head of the dataframe
Step3: Make the index a time series by using
Step4: Plot out the time series data.
Step5: Train Test Split
Let's attempt to predict a year's worth of data. (12 months or 12 steps into the future)
Create a test train split using indexing (hint
Step6: Scale the Data
Use sklearn.preprocessing to scale the data using the MinMaxScaler. Remember to only fit_transform on the training data, then transform the test data. You shouldn't fit on the test data as well, otherwise you are assuming you would know about future behavior!
Step8: Batch Function
We'll need a function that can feed batches of the training data. We'll need to do several things that are listed out as steps in the comments of the function. Remember to reference the previous batch method from the lecture for hints. Try to fill out the function template below, this is a pretty hard step, so feel free to reference the solutions!
Step9: Setting Up The RNN Model
Import TensorFlow
Step10: The Constants
Define the constants in a single cell. You'll need the following (in parenthesis are the values I used in my solution, but you can play with some of these)
Step11: Create Placeholders for X and y. (You can change the variable names if you want). The shape for these placeholders should be [None,num_time_steps-1,num_inputs] and [None, num_time_steps-1, num_outputs] The reason we use num_time_steps-1 is because each of these will be one step shorter than the original time steps size, because we are training the RNN network to predict one point into the future based on the input sequence.
Step12: Now create the RNN Layer, you have complete freedom over this, use tf.contrib.rnn and choose anything you want, OutputProjectionWrappers, BasicRNNCells, BasicLSTMCells, MultiRNNCell, GRUCell etc... Keep in mind not every combination will work well! (If in doubt, the solutions used an Outputprojection Wrapper around a basic LSTM cell with relu activation.
Step13: Now pass in the cells variable into tf.nn.dynamic_rnn, along with your first placeholder (X)
Step14: Loss Function and Optimizer
Create a Mean Squared Error Loss Function and use it to minimize an AdamOptimizer, remember to pass in your learning rate.
Step15: Initialize the global variables
Step16: Create an instance of tf.train.Saver()
Step17: Session
Run a tf.Session that trains on the batches created by your next_batch function. Also add an a loss evaluation for every 100 training iterations. Remember to save your model after you are done training.
Step18: Predicting Future (Test Data)
Show the test_set (the last 12 months of your original complete data set)
Step19: Now we want to attempt to predict these 12 months of data, using only the training data we had. To do this we will feed in a seed training_instance of the last 12 months of the training_set of data to predict 12 months into the future. Then we will be able to compare our generated 12 months to our actual true historical values from the test set!
Generative Session
NOTE
Step20: Show the result of the predictions.
Step21: Grab the portion of the results that are the generated values and apply inverse_transform on them to turn them back into milk production value units (lbs per cow). Also reshape the results to be (12,1) so we can easily add them to the test_set dataframe.
Step22: Create a new column on the test_set called "Generated" and set it equal to the generated results. You may get a warning about this, feel free to ignore it.
Step23: View the test_set dataframe.
Step24: Plot out the two columns for comparison. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Time Series Exercise -
Follow along with the instructions in bold. Watch the solutions video if you get stuck!
The Data
Source: https://datamarket.com/data/set/22ox/monthly-milk-production-pounds-per-cow-jan-62-dec-75#!ds=22ox&display=line
Monthly milk production: pounds per cow. Jan 62 - Dec 75
Import numpy pandas and matplotlib
End of explanation
data = pd.read_csv("./data/monthly-milk-production.csv", index_col = 'Month')
Explanation: Use pandas to read the csv of the monthly-milk-production.csv file and set index_col='Month'
End of explanation
data.head()
Explanation: Check out the head of the dataframe
End of explanation
data.index = pd.to_datetime(data.index)
Explanation: Make the index a time series by using:
milk.index = pd.to_datetime(milk.index)
End of explanation
data.plot()
Explanation: Plot out the time series data.
End of explanation
data.info()
training_set = data.head(156)
test_set = data.tail(12)
Explanation: Train Test Split
Let's attempt to predict a year's worth of data. (12 months or 12 steps into the future)
Create a test train split using indexing (hint: use .head() or .tail() or .iloc[]). We don't want a random train test split; we want to specify that the last 12 months of data are the test set, with everything before it as the training set.
End of explanation
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
training_set = scaler.fit_transform(training_set)
test_set_scaled = scaler.transform(test_set)
Explanation: Scale the Data
Use sklearn.preprocessing to scale the data using the MinMaxScaler. Remember to only fit_transform on the training data, then transform the test data. You shouldn't fit on the test data as well, otherwise you are assuming you would know about future behavior!
End of explanation
def next_batch(training_data, batch_size, steps):
'''INPUT: Data, Batch Size, Time Steps per batch
OUTPUT: A tuple of y time series results. y[:,:-1] and y[:,1:]'''
# STEP 1: Use np.random.randint to set a random starting point index for the batch.
# Remember that each batch needs to have the same number of steps in it.
# This means you should limit the starting point to len(data)-steps
random_start = np.random.randint(0, len(training_data) - steps)
# STEP 2: Now that you have a starting index you'll need to index the data from
# the random start to random start + steps + 1. Then reshape this data to be (1,steps+1)
# Create Y data for time series in the batches
y_batch = np.array(training_data[random_start : random_start + steps + 1]).reshape(1, steps+1)
# STEP 3: Return the batches. You'll have two batches to return y[:,:-1] and y[:,1:]
# You'll need to reshape these into tensors for the RNN to .reshape(-1,steps,1)
return y_batch[:, :-1].reshape(-1, steps, 1), y_batch[:, 1:].reshape(-1, steps, 1)
Explanation: Batch Function
We'll need a function that can feed batches of the training data. We'll need to do several things that are listed out as steps in the comments of the function. Remember to reference the previous batch method from the lecture for hints. Try to fill out the function template below, this is a pretty hard step, so feel free to reference the solutions!
End of explanation
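As a quick sanity check of what this returns, the two arrays have identical shapes and the second is simply the first shifted one month ahead (using the scaled training_set defined above):
xb, yb = next_batch(training_set, batch_size=1, steps=12)
print(xb.shape, yb.shape)   # both (1, 12, 1); yb is xb shifted one step into the future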
import tensorflow as tf
Explanation: Setting Up The RNN Model
Import TensorFlow
End of explanation
num_inputs = 1
num_time_steps = 12
num_neurons = 100
num_outputs = 1
learning_rate = 0.03
num_train_iter = 4000
batch_size = 1
Explanation: The Constants
Define the constants in a single cell. You'll need the following (in parenthesis are the values I used in my solution, but you can play with some of these):
* Number of Inputs (1)
* Number of Time Steps (12)
* Number of Neurons per Layer (100)
* Number of Outputs (1)
* Learning Rate (0.03)
* Number of Iterations for Training (4000)
* Batch Size (1)
End of explanation
X = tf.placeholder(tf.float32, [None, num_time_steps, num_inputs])
y = tf.placeholder(tf.float32, [None, num_time_steps, num_outputs])
Explanation: Create Placeholders for X and y. (You can change the variable names if you want). The shape for these placeholders should be [None,num_time_steps-1,num_inputs] and [None, num_time_steps-1, num_outputs] The reason we use num_time_steps-1 is because each of these will be one step shorter than the original time steps size, because we are training the RNN network to predict one point into the future based on the input sequence.
End of explanation
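Concretely, the one-step-ahead setup only shifts the targets by one position; for a toy series the pairing looks like this (illustrative numbers):
# series:  [10, 11, 12, 13]
# X batch: [10, 11, 12]    (y[:, :-1] in next_batch)
# y batch: [11, 12, 13]    (y[:,  1:] in next_batch)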
cell = tf.contrib.rnn.OutputProjectionWrapper(tf.contrib.rnn.BasicLSTMCell(num_units = num_neurons, activation = tf.nn.relu), output_size = num_outputs)
Explanation: Now create the RNN Layer, you have complete freedom over this, use tf.contrib.rnn and choose anything you want, OutputProjectionWrappers, BasicRNNCells, BasicLSTMCells, MultiRNNCell, GRUCell etc... Keep in mind not every combination will work well! (If in doubt, the solutions used an OutputProjectionWrapper around a basic LSTM cell with relu activation.)
End of explanation
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype = tf.float32)
Explanation: Now pass in the cells variable into tf.nn.dynamic_rnn, along with your first placeholder (X)
End of explanation
# MSE
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate)
train = optimizer.minimize(loss)
Explanation: Loss Function and Optimizer
Create a Mean Squared Error Loss Function and use it to minimize an AdamOptimizer, remember to pass in your learning rate.
End of explanation
init = tf.global_variables_initializer()
Explanation: Initialize the global variables
End of explanation
saver = tf.train.Saver()
Explanation: Create an instance of tf.train.Saver()
End of explanation
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction = 0.75)
with tf.Session() as sess:
# Run
sess.run(init)
for iteration in range(num_train_iter):
X_batch, Y_batch = next_batch(training_set, batch_size, num_time_steps)
sess.run(train, feed_dict = {X: X_batch, y: Y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict = {X: X_batch, y: Y_batch})
print(iteration, "\tMSE:", mse)
# Save Model for Later
saver.save(sess, "./checkpoints/ex_time_series_model")
Explanation: Session
Run a tf.Session that trains on the batches created by your next_batch function. Also add a loss evaluation for every 100 training iterations. Remember to save your model after you are done training.
End of explanation
test_set
Explanation: Predicting Future (Test Data)
Show the test_set (the last 12 months of your original complete data set)
End of explanation
with tf.Session() as sess:
# Use your Saver instance to restore your saved rnn time series model
saver.restore(sess, "./checkpoints/ex_time_series_model")
# Create a numpy array for your generative seed from the last 12 months of the
# training set data. Hint: Just use tail(12) and then pass it to an np.array
train_seed = list(training_set[-12:])
## Now create a for loop that
for iteration in range(12):
X_batch = np.array(train_seed[-num_time_steps:]).reshape(1, num_time_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_batch})
train_seed.append(y_pred[0, -1, 0])
Explanation: Now we want to attempt to predict these 12 months of data, using only the training data we had. To do this we will feed in a seed training_instance of the last 12 months of the training_set of data to predict 12 months into the future. Then we will be able to compare our generated 12 months to our actual true historical values from the test set!
Generative Session
NOTE: Recall that our model is really only trained to predict 1 time step ahead, asking it to generate 12 steps is a big ask, and technically not what it was trained to do! Think of this more as generating new values based off some previous pattern, rather than trying to directly predict the future. You would need to go back to the original model and train the model to predict 12 time steps ahead to really get a higher accuracy on the test data. (Which has its limits due to the smaller size of our data set)
Fill out the session code below to generate 12 months of data based off the last 12 months of data from the training set. The hardest part about this is adjusting the arrays with their shapes and sizes. Reference the lecture for hints.
End of explanation
train_seed
Explanation: Show the result of the predictions.
End of explanation
results = scaler.inverse_transform(np.array(train_seed[12:]).reshape(12, 1))
Explanation: Grab the portion of the results that are the generated values and apply inverse_transform on them to turn them back into milk production value units (lbs per cow). Also reshape the results to be (12,1) so we can easily add them to the test_set dataframe.
End of explanation
test_set['Generated'] = results
Explanation: Create a new column on the test_set called "Generated" and set it equal to the generated results. You may get a warning about this, feel free to ignore it.
End of explanation
test_set
Explanation: View the test_set dataframe.
End of explanation
test_set.plot()
Explanation: Plot out the two columns for comparison.
End of explanation |