path (string lengths 7–265) | concatenated_notebook (string lengths 46–17M)
---|---|
matplotlib/04.12-Three-Dimensional-Plotting.ipynb | ###Markdown
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).**The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!**No changes were made to the contents of this notebook from the original.* Three-Dimensional Plotting in Matplotlib Matplotlib was initially designed with only two-dimensional plotting in mind.Around the time of the 1.0 release, some three-dimensional plotting utilities were built on top of Matplotlib's two-dimensional display, and the result is a convenient (if somewhat limited) set of tools for three-dimensional data visualization.three-dimensional plots are enabled by importing the ``mplot3d`` toolkit, included with the main Matplotlib installation:
###Code
from mpl_toolkits import mplot3d
###Output
_____no_output_____
###Markdown
Once this submodule is imported, a three-dimensional axes can be created by passing the keyword ``projection='3d'`` to any of the normal axes creation routines:
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure()
ax = plt.axes(projection='3d')
###Output
_____no_output_____
###Markdown
With this three-dimensional axes enabled, we can now plot a variety of three-dimensional plot types. Three-dimensional plotting is one of the functionalities that benefits immensely from viewing figures interactively rather than statically in the notebook; recall that to use interactive figures, you can use ``%matplotlib notebook`` rather than ``%matplotlib inline`` when running this code. Three-dimensional Points and LinesThe most basic three-dimensional plot is a line or a collection of scatter points created from sets of (x, y, z) triples.In analogy with the more common two-dimensional plots discussed earlier, these can be created using the ``ax.plot3D`` and ``ax.scatter3D`` functions.The call signature for these is nearly identical to that of their two-dimensional counterparts, so you can refer to [Simple Line Plots](04.01-Simple-Line-Plots.ipynb) and [Simple Scatter Plots](04.02-Simple-Scatter-Plots.ipynb) for more information on controlling the output.Here we'll plot a trigonometric spiral, along with some points drawn randomly near the line:
###Code
ax = plt.axes(projection='3d')
# Data for a three-dimensional line
zline = np.linspace(0, 15, 1000)
xline = np.sin(zline)
yline = np.cos(zline)
ax.plot3D(xline, yline, zline, 'gray')
# Data for three-dimensional scattered points
zdata = 15 * np.random.random(100)
xdata = np.sin(zdata) + 0.1 * np.random.randn(100)
ydata = np.cos(zdata) + 0.1 * np.random.randn(100)
ax.scatter3D(xdata, ydata, zdata, c=zdata, cmap='Greens');
###Output
_____no_output_____
###Markdown
Notice that by default, the scatter points have their transparency adjusted to give a sense of depth on the page.While the three-dimensional effect is sometimes difficult to see within a static image, an interactive view can lead to some nice intuition about the layout of the points. Three-dimensional Contour PlotsAnalogous to the contour plots we explored in [Density and Contour Plots](04.04-Density-and-Contour-Plots.ipynb), ``mplot3d`` contains tools to create three-dimensional relief plots using the same inputs.Like two-dimensional ``ax.contour`` plots, ``ax.contour3D`` requires all the input data to be in the form of two-dimensional regular grids, with the Z data evaluated at each point.Here we'll show a three-dimensional contour diagram of a three-dimensional sinusoidal function:
###Code
def f(x, y):
return np.sin(np.sqrt(x ** 2 + y ** 2))
x = np.linspace(-6, 6, 30)
y = np.linspace(-6, 6, 30)
X, Y = np.meshgrid(x, y)
Z = f(X, Y)
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.contour3D(X, Y, Z, 50, cmap='binary')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z');
###Output
_____no_output_____
###Markdown
Sometimes the default viewing angle is not optimal, in which case we can use the ``view_init`` method to set the elevation and azimuthal angles. In the following example, we'll use an elevation of 60 degrees (that is, 60 degrees above the x-y plane) and an azimuth of 35 degrees (that is, rotated 35 degrees counter-clockwise about the z-axis):
###Code
ax.view_init(60, 35)
fig
###Output
_____no_output_____
###Markdown
Again, note that this type of rotation can be accomplished interactively by clicking and dragging when using one of Matplotlib's interactive backends. Wireframes and Surface PlotsTwo other types of three-dimensional plots that work on gridded data are wireframes and surface plots.These take a grid of values and project it onto the specified three-dimensional surface, and can make the resulting three-dimensional forms quite easy to visualize.Here's an example of using a wireframe:
###Code
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot_wireframe(X, Y, Z, color='black')
ax.set_title('wireframe');
###Output
_____no_output_____
###Markdown
A surface plot is like a wireframe plot, but each face of the wireframe is a filled polygon.Adding a colormap to the filled polygons can aid perception of the topology of the surface being visualized:
###Code
ax = plt.axes(projection='3d')
ax.plot_surface(X, Y, Z, rstride=1, cstride=1,
cmap='viridis', edgecolor='none')
ax.set_title('surface');
###Output
_____no_output_____
###Markdown
Note that though the grid of values for a surface plot needs to be two-dimensional, it need not be rectilinear.Here is an example of creating a partial polar grid, which when used with the ``plot_surface`` plot can give us a slice into the function we're visualizing:
###Code
r = np.linspace(0, 6, 20)
theta = np.linspace(-0.9 * np.pi, 0.8 * np.pi, 40)
r, theta = np.meshgrid(r, theta)
X = r * np.sin(theta)
Y = r * np.cos(theta)
Z = f(X, Y)
ax = plt.axes(projection='3d')
ax.plot_surface(X, Y, Z, rstride=1, cstride=1,
cmap='viridis', edgecolor='none');
###Output
_____no_output_____
###Markdown
Surface TriangulationsFor some applications, the evenly sampled grids required by the above routines are overly restrictive and inconvenient.In these situations, the triangulation-based plots can be very useful.What if rather than an even draw from a Cartesian or a polar grid, we instead have a set of random draws?
###Code
theta = 2 * np.pi * np.random.random(1000)
r = 6 * np.random.random(1000)
x = np.ravel(r * np.sin(theta))
y = np.ravel(r * np.cos(theta))
z = f(x, y)
###Output
_____no_output_____
###Markdown
We could create a scatter plot of the points to get an idea of the surface we're sampling from:
###Code
ax = plt.axes(projection='3d')
ax.scatter(x, y, z, c=z, cmap='viridis', linewidth=0.5);
###Output
_____no_output_____
###Markdown
This leaves a lot to be desired.The function that will help us in this case is ``ax.plot_trisurf``, which creates a surface by first finding a set of triangles formed between adjacent points (remember that x, y, and z here are one-dimensional arrays):
###Code
ax = plt.axes(projection='3d')
ax.plot_trisurf(x, y, z,
cmap='viridis', edgecolor='none');
###Output
_____no_output_____
###Markdown
The result is certainly not as clean as when it is plotted with a grid, but the flexibility of such a triangulation allows for some really interesting three-dimensional plots.For example, it is actually possible to plot a three-dimensional Möbius strip using this, as we'll see next. Example: Visualizing a Möbius stripA Möbius strip is similar to a strip of paper glued into a loop with a half-twist.Topologically, it's quite interesting because despite appearances it has only a single side!Here we will visualize such an object using Matplotlib's three-dimensional tools.The key to creating the Möbius strip is to think about its parametrization: it's a two-dimensional strip, so we need two intrinsic dimensions. Let's call them $\theta$, which ranges from $0$ to $2\pi$ around the loop, and $w$ which ranges from -1 to 1 across the width of the strip:
###Code
theta = np.linspace(0, 2 * np.pi, 30)
w = np.linspace(-0.25, 0.25, 8)
w, theta = np.meshgrid(w, theta)
###Output
_____no_output_____
###Markdown
Now from this parametrization, we must determine the *(x, y, z)* positions of the embedded strip.Thinking about it, we might realize that there are two rotations happening: one is the position of the loop about its center (what we've called $\theta$), while the other is the twisting of the strip about its axis (we'll call this $\phi$). For a Möbius strip, we must have the strip make half a twist during a full loop, or $\Delta\phi = \Delta\theta/2$.
###Code
phi = 0.5 * theta
###Output
_____no_output_____
###Markdown
Now we use our recollection of trigonometry to derive the three-dimensional embedding.We'll define $r$, the distance of each point from the center, and use this to find the embedded $(x, y, z)$ coordinates:
###Code
# radius in x-y plane
r = 1 + w * np.cos(phi)
x = np.ravel(r * np.cos(theta))
y = np.ravel(r * np.sin(theta))
z = np.ravel(w * np.sin(phi))
###Output
_____no_output_____
###Markdown
Finally, to plot the object, we must make sure the triangulation is correct. The best way to do this is to define the triangulation *within the underlying parametrization*, and then let Matplotlib project this triangulation into the three-dimensional space of the Möbius strip.This can be accomplished as follows:
###Code
# triangulate in the underlying parametrization
from matplotlib.tri import Triangulation
tri = Triangulation(np.ravel(w), np.ravel(theta))
ax = plt.axes(projection='3d')
ax.plot_trisurf(x, y, z, triangles=tri.triangles,
cmap='viridis', linewidths=0.2);
ax.set_xlim(-1, 1); ax.set_ylim(-1, 1); ax.set_zlim(-1, 1);
###Output
_____no_output_____
|
biorxiv/article_distances/06_biorxiv_article_distances_cosine_abstract_tester.ipynb | ###Markdown
Find published articles missing from bioRxiv using abstracts alone
###Code
from pathlib import Path
import pickle
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import plotnine as p9
from scipy.spatial.distance import cdist
import scipy.stats
import seaborn as sns
from sklearn.metrics import roc_curve, auc, recall_score, precision_score
import tqdm
import svgutils.transform as sg
from svgutils.compose import Unit
from cairosvg import svg2png
from IPython.display import Image
from lxml import etree
###Output
_____no_output_____
###Markdown
Load Embeddings bioRxiv
###Code
biorxiv_journal_df = (
pd.read_csv(
"../journal_tracker/output/mapped_published_doi_before_update.tsv", sep="\t"
)
.rename(index=str, columns={"doi": "preprint_doi"})
.groupby("preprint_doi")
.agg(
{
"document": "last",
"category": "first",
"preprint_doi": "last",
"published_doi": "first",
"pmcid": "first",
"pmcoa": "first",
}
)
.reset_index(drop=True)
)
biorxiv_journal_df.head()
biorxiv_embed_df = pd.read_csv(
Path("../word_vector_experiment/output/")
/ "word2vec_output/"
/ "biorxiv_all_articles_300.tsv.xz",
sep="\t",
)
biorxiv_embed_df = biorxiv_embed_df.dropna()
biorxiv_embed_df.head()
biorxiv_journal_mapped_df = biorxiv_journal_df[
["document", "preprint_doi", "published_doi", "pmcid", "pmcoa"]
].merge(biorxiv_embed_df, on="document")
print(biorxiv_journal_mapped_df.shape)
biorxiv_journal_mapped_df.head()
biorxiv_embed_abstract_only_df = pd.read_csv(
Path("../word_vector_experiment/output/")
/ "word2vec_output/"
/ "biorxiv_all_articles_300_abstract_only_delete_me.tsv.xz",
sep="\t",
)
biorxiv_embed_abstract_only_df = biorxiv_embed_abstract_only_df.dropna()
biorxiv_embed_abstract_only_df.head()
###Output
_____no_output_____
###Markdown
Remove preprints with malformed abstracts
###Code
missing_abstracts = set(biorxiv_embed_df.document.tolist()).difference(
set(biorxiv_embed_abstract_only_df.document.tolist())
)
print(len(missing_abstracts))
biorxiv_journal_mapped_df = biorxiv_journal_mapped_df.query(
f"document not in {list(missing_abstracts)}"
)
print(biorxiv_journal_mapped_df.shape)
biorxiv_journal_mapped_df.head()
biorxiv_journal_mapped_abstract_df = biorxiv_journal_df[
["document", "preprint_doi", "published_doi", "pmcid", "pmcoa"]
].merge(biorxiv_embed_abstract_only_df, on="document")
print(biorxiv_journal_mapped_df.shape)
biorxiv_journal_mapped_abstract_df.head()
###Output
(70857, 305)
###Markdown
Pubmed Central
###Code
pmc_articles_df = pd.read_csv(
Path("../../pmc/exploratory_data_analysis/")
/ "output/pubmed_central_journal_paper_map.tsv.xz",
sep="\t",
).query("article_type=='research-article'")
pmc_articles_df.head()
pmc_embed_df = pd.read_csv(
Path("../../pmc/word_vector_experiment/output")
/ Path("pmc_document_vectors_300_replace.tsv.xz"),
sep="\t",
)
pmc_embed_df.head()
pmc_journal_mapped_df = (
pmc_articles_df[["doi", "pmcid"]]
.merge(pmc_embed_df, left_on="pmcid", right_on="document")
.drop("pmcid", axis=1)
)
pmc_journal_mapped_df.head()
pmc_embed_abstract_only_df = pd.read_csv(
Path("../../pmc/word_vector_experiment")
/ "output"
/ "pmc_document_vectors_300_abstract_only.tsv.xz",
sep="\t",
)
pmc_embed_abstract_only_df = pmc_embed_abstract_only_df.dropna()
pmc_embed_abstract_only_df.head()
pmc_journal_mapped_abstract_df = (
pmc_articles_df[["doi", "pmcid"]]
.merge(pmc_embed_abstract_only_df, left_on="pmcid", right_on="document")
.drop("pmcid", axis=1)
)
pmc_journal_mapped_abstract_df.head()
###Output
_____no_output_____
###Markdown
Remove Published articles with Malformed Abstracts
###Code
pmc_full_text = set(pmc_journal_mapped_df.document.tolist())
pmc_abstract = set(pmc_journal_mapped_abstract_df.document.tolist())
missing_articles = pmc_full_text.difference(pmc_abstract)
print(len(missing_articles))
pmc_journal_mapped_df = pmc_journal_mapped_df.query(
f"document not in {list(missing_articles)}"
)
###Output
19813
###Markdown
Calculate Distances biorxiv -> published versions
###Code
biorxiv_published = (
biorxiv_journal_mapped_df.query("pmcid.notnull()")
.query("pmcoa == True")
.sort_values("pmcid", ascending=True)
.drop_duplicates("pmcid")
.set_index("pmcid")
)
biorxiv_published.head()
PMC_published = (
pmc_journal_mapped_df.query(f"document in {biorxiv_published.index.tolist()}")
.sort_values("document", ascending=True)
.set_index("document")
)
PMC_published.head()
###Output
_____no_output_____
###Markdown
Full Text
###Code
article_distances = cdist(
biorxiv_published.loc[PMC_published.index.tolist()].drop(
["document", "preprint_doi", "published_doi", "pmcoa"], axis=1
),
PMC_published.drop(["doi", "journal"], axis=1),
"euclidean",
)
article_distances.shape
articles_distance_original_df = (
biorxiv_published.loc[PMC_published.index.tolist()]
.reset_index()[["document", "pmcid"]]
.assign(
distance=np.diag(article_distances, k=0), journal=PMC_published.journal.tolist()
)
)
articles_distance_original_df.head()
###Output
_____no_output_____
###Markdown
Abstracts
###Code
biorxiv_published_abstract = (
biorxiv_journal_mapped_abstract_df.query("pmcid.notnull()")
.query("pmcoa == True")
.sort_values("pmcid", ascending=True)
.drop_duplicates("pmcid")
.set_index("pmcid")
)
biorxiv_published_abstract.head()
PMC_published_abstract = (
pmc_journal_mapped_abstract_df.query(
f"document in {biorxiv_published_abstract.index.tolist()}"
)
.sort_values("document", ascending=True)
.set_index("document")
)
PMC_published_abstract.head()
article_distances = cdist(
biorxiv_published_abstract.loc[PMC_published_abstract.index.tolist()].drop(
["document", "preprint_doi", "published_doi", "pmcoa"], axis=1
),
PMC_published_abstract.drop(["doi", "journal"], axis=1),
"euclidean",
)
article_distances.shape
articles_distance_abstract_df = (
biorxiv_published_abstract.loc[PMC_published_abstract.index.tolist()]
.reset_index()[["document", "pmcid"]]
.assign(
distance=np.diag(article_distances, k=0),
journal=PMC_published_abstract.journal.tolist(),
)
)
articles_distance_abstract_df.head()
###Output
_____no_output_____
###Markdown
biorxiv -> random paper same journal
###Code
PMC_off_published = (
pmc_journal_mapped_df.drop("doi", axis=1)
.query(f"document not in {biorxiv_published.index.tolist()}")
.query(f"journal in {articles_distance_original_df.journal.unique().tolist()}")
.groupby("journal", group_keys=False)
.apply(lambda x: x.sample(1, random_state=100))
)
PMC_off_published.head()
journal_mapper = {
journal: col for col, journal in enumerate(PMC_off_published.journal.tolist())
}
list(journal_mapper.items())[0:10]
###Output
_____no_output_____
###Markdown
Full Text
###Code
off_article_dist = cdist(
biorxiv_published.loc[PMC_published.index.tolist()]
.drop(["document", "preprint_doi", "published_doi", "pmcoa"], axis=1)
.values,
PMC_off_published.drop(["document", "journal"], axis=1).values,
"euclidean",
)
off_article_dist.shape
data = []
for idx, row in tqdm.tqdm(articles_distance_original_df.iterrows()):
if row["journal"] in journal_mapper:
data.append(
{
"document": row["document"],
"pmcid": (
PMC_off_published.query(f"journal=='{row['journal']}'")
.reset_index()
.document.values[0]
),
"journal": row["journal"],
"distance": off_article_dist[idx, journal_mapper[row["journal"]]],
}
)
final_original_df = articles_distance_original_df.assign(
label="pre_vs_published"
).append(pd.DataFrame.from_records(data).assign(label="pre_vs_random"))
final_original_df.head()
###Output
_____no_output_____
###Markdown
Abstract
###Code
PMC_off_published_abstract = pmc_journal_mapped_abstract_df.query(
f"document in {PMC_off_published.document.tolist()}"
).sort_values("journal")
PMC_off_published_abstract.head()
off_article_dist = cdist(
biorxiv_published_abstract.loc[PMC_published_abstract.index.tolist()]
.drop(["document", "preprint_doi", "published_doi", "pmcoa"], axis=1)
.values,
PMC_off_published_abstract.drop(["document", "journal", "doi"], axis=1).values,
"euclidean",
)
off_article_dist.shape
remaining_journal_mapper = list(
set(PMC_off_published_abstract.journal.tolist()).intersection(
set(journal_mapper.keys())
)
)
remaining_journal_mapper = dict(
zip(sorted(remaining_journal_mapper), range(len(remaining_journal_mapper)))
)
data = []
for idx, row in tqdm.tqdm(articles_distance_abstract_df.iterrows()):
if row["journal"] in remaining_journal_mapper:
data.append(
{
"document": row["document"],
"pmcid": (
PMC_off_published_abstract.query(f"journal=='{row['journal']}'")
.reset_index()
.document.values[0]
),
"journal": row["journal"],
"distance": off_article_dist[
idx, remaining_journal_mapper[row["journal"]]
],
}
)
final_abstract_df = articles_distance_abstract_df.assign(
label="pre_vs_published"
).append(pd.DataFrame.from_records(data).assign(label="pre_vs_random"))
final_abstract_df.head()
final_abstract_df = biorxiv_journal_df[["document", "preprint_doi"]].merge(
final_abstract_df
)
final_abstract_df.to_csv(
"output/annotated_links/article_distances_abstract_only.tsv", sep="\t", index=False
)
final_abstract_df.head()
###Output
_____no_output_____
###Markdown
Distribution plot
###Code
g = (
p9.ggplot(
final_original_df.replace(
{
"pre_vs_published": "preprint-published",
"pre_vs_random": "preprint-random",
}
)
)
+ p9.aes(x="label", y="distance")
+ p9.geom_violin(fill="#a6cee3")
+ p9.labs(x="Document Pair Groups", y="Euclidean Distance")
+ p9.theme_seaborn(context="paper", style="ticks", font="Arial", font_scale=2)
+ p9.theme(figure_size=(11, 8.5))
)
print(g)
g = (
p9.ggplot(
final_abstract_df.replace(
{
"pre_vs_published": "preprint-published",
"pre_vs_random": "preprint-random",
}
)
)
+ p9.aes(x="label", y="distance")
+ p9.geom_violin(fill="#a6cee3")
+ p9.labs(x="Document Pair Groups", y="Euclidean Distance")
+ p9.theme_seaborn(context="paper", style="ticks", font="Arial", font_scale=2)
+ p9.theme(figure_size=(11, 8.5))
)
print(g)
###Output
_____no_output_____
###Markdown
Examine the top N predictions using Recall and Precision
###Code
data_rows = []
for df, model_label in zip(
[final_original_df, final_abstract_df], ["Full Text", "Abstract Only"]
):
for k in tqdm.tqdm(range(1, 34503, 200)):
recall = recall_score(
df.sort_values("distance").iloc[0:k].label.tolist(),
["pre_vs_published"] * k
if k <= df.shape[0]
else ["pre_vs_published"] * df.shape[0],
pos_label="pre_vs_published",
)
precision = precision_score(
df.sort_values("distance").iloc[0:k].label.tolist(),
["pre_vs_published"] * k
if k <= df.shape[0]
else ["pre_vs_published"] * df.shape[0],
pos_label="pre_vs_published",
)
data_rows.append(
{"recall": recall, "precision": precision, "N": k, "model": model_label}
)
plot_df = pd.DataFrame.from_records(data_rows)
plot_df.head()
g = (
p9.ggplot(plot_df, p9.aes(x="N", y="recall", color="model"))
+ p9.geom_point()
+ p9.labs(x="Top N predictions", y="Recall")
)
g.save("output/figures/abstract_vs_full_text_top_k_recall.png", dpi=600)
print(g)
g = (
p9.ggplot(plot_df, p9.aes(x="N", y="precision", color="model"))
+ p9.geom_point()
+ p9.labs(x="Top N predictions", y="Precision")
)
g.save("output/figures/abstract_vs_full_text_top_k_precision.png", dpi=600)
print(g)
###Output
/home/danich1/anaconda3/envs/annorxiver/lib/python3.7/site-packages/plotnine/ggplot.py:729: PlotnineWarning: Saving 6.4 x 4.8 in image.
/home/danich1/anaconda3/envs/annorxiver/lib/python3.7/site-packages/plotnine/ggplot.py:730: PlotnineWarning: Filename: output/figures/abstract_vs_full_text_top_k_precision.png
|
notebooks/Sparkify on IBM Watson.ipynb | ###Markdown
ModelingSplit the full dataset into train, test, and validation sets. Test out several of the machine learning methods you learned. Evaluate the accuracy of the various models, tuning parameters as necessary. Determine your winning model based on test accuracy and report results on the validation set. Since the churned users are a fairly small subset, I suggest using F1 score as the metric to optimize. Make train, test and validation sets
###Code
### Make train, test and validation sets
train, test, validation = df_ML.randomSplit([0.7, 0.15, 0.15], seed = 44)
print(train.count())
print(test.count())
print(validation.count())
###Output
298
79
71
###Markdown
Make Pipelines
###Code
# index and encode categorical features gender, level and state
stringIndexer_forGender = StringIndexer(inputCol="gender", outputCol="indexed_gender", handleInvalid = 'skip')
stringIndexer_forLast_level = StringIndexer(inputCol="last_level", outputCol="indexed_last_level", handleInvalid = 'skip')
stringIndexer_forLocation = StringIndexer(inputCol="location_first", outputCol="indexed_location", handleInvalid = 'skip')
encoder = OneHotEncoderEstimator(inputCols=["indexed_gender", "indexed_last_level", "indexed_location"],
outputCols=["gender_feat", "last_level_feat", "location_feat"],
handleInvalid = 'keep')
# create vector for features
features = ['gender_feat', 'last_level_feat', 'location_feat', 'time_after_id_creation(day)','avg_total_sessionId_afterCreation','avg_itemInSession_afterCreation',
'avg_thumbsup_afterCreation','avg_thumbsdown_afterCreation','avg_rolladvert_afterCreation','avg_addfriend_afterCreation',
'avg_addplaylist_afterCreation','avg_error_afterCreation','avg_logout_afterCreation','avg_total_Top100_artist_alltime','avg_total_Top100_song_week']
assembler = VectorAssembler(inputCols=features, outputCol="assembled_features")
scaler = MinMaxScaler(inputCol="assembled_features" , outputCol="scaled_features")
# initialize random forest classifier
rf = RandomForestClassifier(featuresCol="scaled_features",labelCol="churn")
# initialize logistic regression (point it at the scaled features and churn label like the other models)
lr = LogisticRegression(featuresCol="scaled_features", labelCol="churn", maxIter=5, threshold=0.3)
# initialize a GBTClassifier
gbt = GBTClassifier(featuresCol="scaled_features", labelCol="churn", maxIter=5, maxDepth=3)
# assemble pipelines
pipeline_rf = Pipeline(stages = [stringIndexer_forGender, stringIndexer_forLast_level, stringIndexer_forLocation, encoder, assembler, scaler ,rf])
pipeline_lr = Pipeline(stages = [stringIndexer_forGender, stringIndexer_forLast_level, stringIndexer_forLocation, encoder, assembler, scaler ,lr])
pipeline_gbt = Pipeline(stages = [stringIndexer_forGender, stringIndexer_forLast_level, stringIndexer_forLocation, encoder, assembler, scaler, gbt])
# random forest
##
starttime = datetime.now()
model_rf = pipeline_rf.fit(train)
pred_train_rf = model_rf.transform(train)
pred_test_rf = model_rf.transform(test)
pred_validation_rf = model_rf.transform(validation)
## train_set
predictionAndLabels_rf = pred_train_rf.rdd.map(lambda x: (float(x.prediction), float(x.churn)))
# Instantiate metrics object
metrics_rf = MulticlassMetrics(predictionAndLabels_rf )
# F1 score
print("F1 score of training set: ", metrics_rf.fMeasure())
print("Precision of training set: ", metrics_rf.precision(1))
print("Recall of training set: " , metrics_rf.recall(1))
print(metrics_rf.confusionMatrix().toArray())
print()
## test_set
predictionAndLabels_rf = pred_test_rf.rdd.map(lambda x: (float(x.prediction), float(x.churn)))
# Instantiate metrics object
metrics_rf = MulticlassMetrics(predictionAndLabels_rf)
# F1 score
print("F1 score of test set: ", metrics_rf.fMeasure())
print("Precision of test set: ", metrics_rf.precision(1))
print("Recall of test set: " , metrics_rf.recall(1))
print(metrics_rf.confusionMatrix().toArray())
print(datetime.now() - starttime)
print()
## validation_set
predictionAndLabels_rf = pred_validation_rf.rdd.map(lambda x: (float(x.prediction), float(x.churn)))
# Instantiate metrics object
metrics_rf = MulticlassMetrics(predictionAndLabels_rf)
# F1 score
print("F1 score of validation set: ", metrics_rf.fMeasure())
print("Precision of validation set: ", metrics_rf.precision(1))
print("Recall of validation set: ", metrics_rf.recall(1))
print(metrics_rf.confusionMatrix().toArray())
print(datetime.now() - starttime)
# # Logistic regression
# ##
# starttime = datetime.now()
# model_lr = pipeline_lr.fit(train)
# pred_train_lr = model_lr.transform(train)
# pred_test_lr = model_lr.transform(test)
# pred_validation_lr = model_lr.transform(validation)
# ## train_set
# predictionAndLabels_lr = pred_train_lr.rdd.map(lambda x: (float(x.prediction), float(x.churn)))
# # Instantiate metrics object
# metrics_lr = MulticlassMetrics(predictionAndLabels_lr)
# # F1 score
# print("F1 score of training set: ", metrics_lr.fMeasure())
# print("Precision of training set: ", metrics_lr.precision(1))
# print("Recall of training set: " , metrics_lr.recall(1))
# print(metrics_lr.confusionMatrix().toArray())
# print()
# ## test_set
# predictionAndLabels_lr = pred_test_lr.rdd.map(lambda x: (float(x.prediction), float(x.churn)))
# # Instantiate metrics object
# metrics_lr = MulticlassMetrics(predictionAndLabels_lr)
# # F1 score
# print("F1 score of test set: ", metrics_lr.fMeasure())
# print("Precision of test set: ", metrics_lr.precision(1))
# print("Recall of test set: ", metrics_lr.recall(1))
# print(metrics_lr.confusionMatrix().toArray())
# print(datetime.now() - starttime)
# print()
# ## validation_set
# predictionAndLabels_lr = pred_validation_lr.rdd.map(lambda x: (float(x.prediction), float(x.churn)))
# # Instantiate metrics object
# metrics_lr = MulticlassMetrics(predictionAndLabels_lr)
# # F1 score
# print("F1 score of validation set: " , metrics_lr.fMeasure())
# print("Precision of validation set: " , metrics_lr.precision(1))
# print("Recall on validation set: ", metrics_lr.recall(1))
# print(metrics_lr.confusionMatrix().toArray())
# print(datetime.now() - starttime)
# GBT
##
starttime = datetime.now()
model_gbt = pipeline_gbt.fit(train)
pred_train_gbt = model_gbt.transform(train)
pred_test_gbt = model_gbt.transform(test)
pred_validation_gbt = model_gbt.transform(validation)
## train_set
predictionAndLabels_gbt = pred_train_gbt.rdd.map(lambda x: (float(x.prediction), float(x.churn)))
# Instantiate metrics object
metrics_gbt = MulticlassMetrics(predictionAndLabels_gbt)
# F1 score
print("F1 score of training set: ", metrics_gbt.fMeasure())
print("Precision of training set: ", metrics_gbt.precision(1))
print("Recall of training set: ", metrics_gbt.recall(1))
print(metrics_gbt.confusionMatrix().toArray())
print()
## test_set
predictionAndLabels_gbt = pred_test_gbt.rdd.map(lambda x: (float(x.prediction), float(x.churn)))
# Instantiate metrics object
metrics_gbt = MulticlassMetrics(predictionAndLabels_gbt)
# F1 score
print("F1 score of test set: ", metrics_gbt.fMeasure())
print("Precision of test set: " , metrics_gbt.precision(1))
print("Recall of test set: " , metrics_gbt.recall(1))
print(metrics_gbt.confusionMatrix().toArray())
print(datetime.now() - starttime)
print()
## valid_set
predictionAndLabels_gbt = pred_validation_gbt.rdd.map(lambda x: (float(x.prediction), float(x.churn)))
# Instantiate metrics object
metrics_gbt = MulticlassMetrics(predictionAndLabels_gbt)
# F1 score
print("F1 score of validation set: ", metrics_gbt.fMeasure())
print("Precision of validation set: ", metrics_gbt.precision(1))
print("Recall of validation set: ", metrics_gbt.recall(1))
print(metrics_gbt.confusionMatrix().toArray())
print(datetime.now() - starttime)
###Output
F1 score of training set: 0.9295302013422819
Precision of training set: 0.896551724137931
Recall of training set: 0.7761194029850746
[[225. 6.]
[ 15. 52.]]
F1 score of test set: 0.863013698630137
Precision of test set: 0.8333333333333334
Recall of test set: 0.5555555555555556
[[53. 2.]
[ 8. 10.]]
0:02:19.446047
F1 score of validation set: 0.8208955223880597
Precision of validation set: 0.5454545454545454
Recall of validation set: 0.46153846153846156
[[49. 5.]
[ 7. 6.]]
0:02:49.543222
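The project brief at the top of this section also calls for tuning parameters as necessary; a minimal sketch of how that could be wired up for the random forest pipeline with Spark's `CrossValidator` is shown below. The grid values, fold count, and metric choice are illustrative assumptions rather than tuned settings, and the sketch reuses `rf`, `pipeline_rf`, and `train` from the cells above.
```python
# Hedged sketch of hyperparameter tuning for pipeline_rf (illustrative grid values).
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

param_grid = (
    ParamGridBuilder()
    .addGrid(rf.numTrees, [20, 50])
    .addGrid(rf.maxDepth, [5, 10])
    .build()
)
evaluator = MulticlassClassificationEvaluator(
    labelCol="churn", predictionCol="prediction", metricName="f1"
)
cv = CrossValidator(
    estimator=pipeline_rf,
    estimatorParamMaps=param_grid,
    evaluator=evaluator,
    numFolds=3,
    seed=44,
)
cv_model = cv.fit(train)  # best fitted pipeline is available as cv_model.bestModel
```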
|
notebooks/building_production_ml_systems/labs/4b_streaming_data_inference.ipynb | ###Markdown
Working with Streaming DataLearning Objectives 1. Learn how to process real-time data for ML models using Cloud Dataflow 2. Learn how to serve online predictions using real-time data IntroductionIt can be useful to leverage real time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline which can be non-trivial. Typically you will have the following: - A series of IoT devices generating and sending data from the field in real-time (in our case these are the taxis) - A messaging bus to that receives and temporarily stores the IoT data (in our case this is Cloud Pub/Sub) - A streaming processing service that subscribes to the messaging bus, windows the messages and performs data transformations on each window (in our case this is Cloud Dataflow) - A persistent store to keep the processed data (in our case this is BigQuery)These steps happen continuously and in real-time, and are illustrated by the blue arrows in the diagram below. Once this streaming data pipeline is established, we need to modify our model serving to leverage it. This simply means adding a call to the persistent store (BigQuery) to fetch the latest real-time data when a prediction request comes in. This flow is illustrated by the red arrows in the diagram below. In this lab we will address how to process real-time data for machine learning models. We will use the same data as our previous 'taxifare' labs, but with the addition of `trips_last_5min` data as an additional feature. This is our proxy for real-time traffic.
###Code
import os
import shutil
import googleapiclient.discovery
import numpy as np
import tensorflow as tf
from google import api_core
from google.api_core.client_options import ClientOptions
from google.cloud import bigquery
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
# Change below if necessary
PROJECT = !gcloud config get-value project # noqa: E999
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
%env PROJECT=$PROJECT
%env BUCKET=$BUCKET
%env REGION=$REGION
%%bash
gcloud config set project $PROJECT
gcloud config set ai_platform/region $REGION
###Output
_____no_output_____
###Markdown
Re-train our model with `trips_last_5min` featureIn this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook `4a_streaming_data_training.ipynb`. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for `trips_last_5min` in the model and the dataset. Simulate Real Time Taxi DataSince we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub.Inspect the `iot_devices.py` script in the `taxicab_traffic` folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery. In production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub. To execute the `iot_devices.py` script, launch a terminal and navigate to the `asl-ml-immersion/notebooks/building_production_ml_systems/labs` directory. Then run the following two commands. ```bashPROJECT_ID=$(gcloud config get-value project)python3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID``` You will see new messages being published every 5 seconds. **Keep this terminal open** so it continues to publish events to the Pub/Sub topic. If you open [Pub/Sub in your Google Cloud Console](https://console.cloud.google.com/cloudpubsub/topic/list), you should be able to see a topic called `taxi_rides`. Create a BigQuery table to collect the processed dataIn the next section, we will create a dataflow pipeline to write processed taxifare data to a BigQuery Table, however that table does not yet exist. Execute the following commands to create a BigQuery dataset called `taxifare` and a table within that dataset called `traffic_realtime`.
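For orientation on the simulation step described above, here is a minimal, hypothetical sketch of the kind of publisher loop that `iot_devices.py` implements. The `taxi_rides` topic name comes from the lab, but the payload contents, message rate, and function name are illustrative assumptions, not the actual script.
```python
# Hypothetical sketch of a Pub/Sub publisher loop (NOT the contents of iot_devices.py).
# Assumes the google-cloud-pubsub client library and the `taxi_rides` topic used in this lab.
import json
import time

from google.cloud import pubsub_v1


def publish_simulated_trips(project_id, topic="taxi_rides", burst_size=30, pause_secs=5):
    """Publish small bursts of fake taxi-trip messages to mimic realtime traffic."""
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(project_id, topic)
    while True:
        for i in range(burst_size):
            payload = json.dumps({"ride_id": i}).encode("utf-8")
            publisher.publish(topic_path, payload)  # returns a future; fire-and-forget here
        time.sleep(pause_secs)  # roughly matches the ~5-second cadence described above
```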
###Code
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created.")
except api_core.exceptions.Conflict:
print("Dataset already exists.")
###Output
_____no_output_____
###Markdown
Next, we create a table called `traffic_realtime` and set up the schema.
###Code
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
bq.create_table(table)
print("Table created.")
except api_core.exceptions.Conflict:
print("Table already exists.")
###Output
_____no_output_____
###Markdown
Launch Streaming Dataflow PipelineNow that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline.The pipeline is defined in `./taxicab_traffic/streaming_count.py`. Open that file and inspect it. There are 5 transformations being applied: - Read from PubSub - Window the messages - Count number of messages in the window - Format the count for BigQuery - Write results to BigQuery**TODO:** Open the file ./taxicab_traffic/streaming_count.py and find the TODO there. Specify a sliding window that is 5 minutes long, and gets recalculated every 15 seconds. Hint: Reference the [beam programming guide](https://beam.apache.org/documentation/programming-guide/windowing) for guidance. To check your answer reference the solution. For the second transform, we specify a sliding window that is 5 minutes long, and recalculate values every 15 seconds. In a new terminal, launch the dataflow pipeline using the command below. You can change the `BUCKET` variable, if necessary. Here it is assumed to be your `PROJECT_ID`. ```bashPROJECT_ID=$(gcloud config get-value project)REGION=$(gcloud config get-value ai_platform/region)BUCKET=$PROJECT_ID change as necessarypython3 ./taxicab_traffic/streaming_count.py \ --input_topic taxi_rides \ --runner=DataflowRunner \ --project=$PROJECT_ID \ --region=$REGION \ --temp_location=gs://$BUCKET/dataflow_streaming``` Once you've submitted the command above you can examine the progress of that job in the [Dataflow section of Cloud console](https://console.cloud.google.com/dataflow). Explore the data in the table After a few moments, you should also see new data written to your BigQuery table as well. Re-run the query periodically to observe new data streaming in! You should see a new row every 15 seconds.
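For the windowing TODO called out above, a sliding window that is 5 minutes long and recalculated every 15 seconds can be expressed in the Beam Python SDK roughly as in the sketch below; the transform label and helper function are illustrative, and this is a hint rather than the graded solution in `streaming_count.py`.
```python
# Hedged sketch of the sliding-window transform (a hint, not the solution file itself).
# SlidingWindows takes its size and period in seconds: a 5-minute window, recomputed every 15 seconds.
import apache_beam as beam
from apache_beam.transforms.window import SlidingWindows


def window_messages(messages):
    """Apply a 5-minute sliding window that advances every 15 seconds."""
    return messages | "window" >> beam.WindowInto(SlidingWindows(size=5 * 60, period=15))
```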
###Code
%%bigquery
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 10
###Output
_____no_output_____
###Markdown
Make predictions from the new dataIn the rest of the lab, we'll referece the model we trained and deployed from the previous labs, so make sure you have run the code in the `4a_streaming_data_training.ipynb` notebook. The `add_traffic_last_5min` function below will query the `traffic_realtime` table to find the most recent traffic information and add that feature to our instance for prediction. **Exercise.** Complete the code in the function below. Write a SQL query that will return the most recent entry in `traffic_realtime` and add it to the instance.
###Code
# TODO 2a. Write a function to take most recent entry in `traffic_realtime`
# table and add it to instance.
def add_traffic_last_5min(instance):
bq = bigquery.Client()
query_string = """
TODO: Your code goes here
"""
trips = bq.query(query_string).to_dataframe()["trips_last_5min"][0]
instance["traffic_last_5min"] = # TODO: Your code goes here.
return instance
###Output
_____no_output_____
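To check your work on the exercise above, one possible completion is sketched below; it simply pulls the newest row from the `taxifare.traffic_realtime` table created earlier (the same query as the exploration cell above, with `LIMIT 1`) and is offered as a reference sketch rather than the only acceptable answer.
```python
# One possible completion of the exercise above (a reference sketch, not the official solution).
def add_traffic_last_5min(instance):
    """Add the most recent `trips_last_5min` value from BigQuery to the prediction instance."""
    bq = bigquery.Client()
    query_string = """
    SELECT
      *
    FROM
      `taxifare.traffic_realtime`
    ORDER BY
      time DESC
    LIMIT 1
    """
    trips = bq.query(query_string).to_dataframe()["trips_last_5min"][0]
    instance["traffic_last_5min"] = int(trips)  # attach the realtime traffic feature
    return instance
```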
###Markdown
The `traffic_realtime` table is updated in realtime using Cloud Pub/Sub and Dataflow so, if you run the cell below periodically, you should see the `traffic_last_5min` feature added to the instance and change over time.
###Code
add_traffic_last_5min(
instance={
"dayofweek": 4,
"hourofday": 13,
"pickup_longitude": -73.99,
"pickup_latitude": 40.758,
"dropoff_latitude": 41.742,
"dropoff_longitude": -73.07,
}
)
###Output
_____no_output_____
###Markdown
Finally, we'll use the python api to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predicitons change with time as our realtime traffic information changes as well. **Exercise.** Complete the code below to call prediction on an instance incorporating realtime traffic info. You should- use the function `add_traffic_last_5min` to add the most recent realtime traffic data to the prediction instance- call prediction on your model for this realtime instance and save the result as a variable called `response`- parse the json of `response` to print the predicted taxifare cost
###Code
# TODO 2b. Write code to call prediction on instance using realtime traffic info.
# Hint: Look at the "Serving online predictions" section of this page https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routine-keras
MODEL_NAME = "taxifare"
VERSION_NAME = "traffic"
service = googleapiclient.discovery.build("ml", "v1", cache_discovery=False)
name = "projects/{}/models/{}/versions/{}".format(
PROJECT, MODEL_NAME, VERSION_NAME
)
instance = # TODO
response = # TODO
if "error" in response:
raise RuntimeError(response["error"])
else:
print(response["predictions"][0]["output_1"][0])
###Output
_____no_output_____
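Similarly, a hedged sketch of one way to complete the prediction cell is shown below. It assumes the `taxifare` model with a `traffic` version was deployed in notebook `4a_streaming_data_training.ipynb`, and it reuses the response-parsing logic already given in the cell above.
```python
# A hedged sketch of the exercise completion (assumes the model/version deployed in notebook 4a).
MODEL_NAME = "taxifare"
VERSION_NAME = "traffic"

service = googleapiclient.discovery.build("ml", "v1", cache_discovery=False)
name = f"projects/{PROJECT}/models/{MODEL_NAME}/versions/{VERSION_NAME}"

# Build an instance enriched with the latest realtime traffic feature.
instance = add_traffic_last_5min(
    {
        "dayofweek": 4,
        "hourofday": 13,
        "pickup_longitude": -73.99,
        "pickup_latitude": 40.758,
        "dropoff_latitude": 41.742,
        "dropoff_longitude": -73.07,
    }
)

# Call the deployed model for an online prediction and print the predicted fare.
response = (
    service.projects()
    .predict(name=name, body={"instances": [instance]})
    .execute()
)

if "error" in response:
    raise RuntimeError(response["error"])
print(response["predictions"][0]["output_1"][0])
```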
###Markdown
Working with Streaming DataLearning Objectives 1. Learn how to process real-time data for ML models using Cloud Dataflow 2. Learn how to serve online predictions using real-time data IntroductionIt can be useful to leverage real time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline which can be non-trivial. Typically you will have the following: - A series of IoT devices generating and sending data from the field in real-time (in our case these are the taxis) - A messaging bus to that receives and temporarily stores the IoT data (in our case this is Cloud Pub/Sub) - A streaming processing service that subscribes to the messaging bus, windows the messages and performs data transformations on each window (in our case this is Cloud Dataflow) - A persistent store to keep the processed data (in our case this is BigQuery)These steps happen continuously and in real-time, and are illustrated by the blue arrows in the diagram below. Once this streaming data pipeline is established, we need to modify our model serving to leverage it. This simply means adding a call to the persistent store (BigQuery) to fetch the latest real-time data when a prediction request comes in. This flow is illustrated by the red arrows in the diagram below. In this lab we will address how to process real-time data for machine learning models. We will use the same data as our previous 'taxifare' labs, but with the addition of `trips_last_5min` data as an additional feature. This is our proxy for real-time traffic.
###Code
import numpy as np
import os
import googleapiclient.discovery
import shutil
import tensorflow as tf
from google import api_core
from google.cloud import bigquery
from google.api_core.client_options import ClientOptions
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
# Change below if necessary
PROJECT = !gcloud config get-value project # noqa: E999
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
%env PROJECT=$PROJECT
%env BUCKET=$BUCKET
%env REGION=$REGION
%%bash
gcloud config set project $PROJECT
gcloud config set ai_platform/region $REGION
###Output
_____no_output_____
###Markdown
Re-train our model with `trips_last_5min` featureIn this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook `4a_streaming_data_training.ipynb`. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for `trips_last_5min` in the model and the dataset. Simulate Real Time Taxi DataSince we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub.Inspect the `iot_devices.py` script in the `taxicab_traffic` folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery. In production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub. To execute the `iot_devices.py` script, launch a terminal and navigate to the `asl-ml-immersion/notebooks/building_production_ml_systems/labs` directory. Then run the following two commands. ```bashPROJECT_ID=$(gcloud config get-value project)python3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID``` You will see new messages being published every 5 seconds. **Keep this terminal open** so it continues to publish events to the Pub/Sub topic. If you open [Pub/Sub in your Google Cloud Console](https://console.cloud.google.com/cloudpubsub/topic/list), you should be able to see a topic called `taxi_rides`. Create a BigQuery table to collect the processed dataIn the next section, we will create a dataflow pipeline to write processed taxifare data to a BigQuery Table, however that table does not yet exist. Execute the following commands to create a BigQuery dataset called `taxifare` and a table within that dataset called `traffic_realtime`.
###Code
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created.")
except api_core.exceptions.Conflict:
print("Dataset already exists.")
###Output
_____no_output_____
###Markdown
Next, we create a table called `traffic_realtime` and set up the schema.
###Code
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
bq.create_table(table)
print("Table created.")
except api_core.exceptions.Conflict:
print("Table already exists.")
###Output
_____no_output_____
###Markdown
Launch Streaming Dataflow PipelineNow that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline.The pipeline is defined in `./taxicab_traffic/streaming_count.py`. Open that file and inspect it. There are 5 transformations being applied: - Read from PubSub - Window the messages - Count number of messages in the window - Format the count for BigQuery - Write results to BigQuery**TODO:** Open the file ./taxicab_traffic/streaming_count.py and find the TODO there. Specify a sliding window that is 5 minutes long, and gets recalculated every 15 seconds. Hint: Reference the [beam programming guide](https://beam.apache.org/documentation/programming-guide/windowing) for guidance. To check your answer reference the solution. For the second transform, we specify a sliding window that is 5 minutes long, and recalculate values every 15 seconds. In a new terminal, launch the dataflow pipeline using the command below. You can change the `BUCKET` variable, if necessary. Here it is assumed to be your `PROJECT_ID`. ```bashPROJECT_ID=$(gcloud config get-value project)REGION=$(gcloud config get-value ai_platform/region)BUCKET=$PROJECT_ID change as necessarypython3 ./taxicab_traffic/streaming_count.py \ --input_topic taxi_rides \ --runner=DataflowRunner \ --project=$PROJECT_ID \ --region=$REGION \ --temp_location=gs://$BUCKET/dataflow_streaming``` Once you've submitted the command above you can examine the progress of that job in the [Dataflow section of Cloud console](https://console.cloud.google.com/dataflow). Explore the data in the table After a few moments, you should also see new data written to your BigQuery table as well. Re-run the query periodically to observe new data streaming in! You should see a new row every 15 seconds.
###Code
%%bigquery
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 10
###Output
_____no_output_____
###Markdown
Make predictions from the new data

In the rest of the lab, we'll reference the model we trained and deployed in the previous labs, so make sure you have run the code in the `4a_streaming_data_training.ipynb` notebook. The `add_traffic_last_5min` function below will query the `traffic_realtime` table to find the most recent traffic information and add that feature to our instance for prediction.

**Exercise.** Complete the code in the function below. Write a SQL query that returns the most recent entry in `traffic_realtime` and add it to the instance.
###Code
# TODO 2a. Write a function to take most recent entry in `traffic_realtime`
# table and add it to instance.
def add_traffic_last_5min(instance):
bq = bigquery.Client()
query_string = """
TODO: Your code goes here
"""
trips = bq.query(query_string).to_dataframe()["trips_last_5min"][0]
instance["traffic_last_5min"] = # TODO: Your code goes here.
return instance
###Output
_____no_output_____
###Markdown
The `traffic_realtime` table is updated in real time using Cloud Pub/Sub and Dataflow, so if you run the cell below periodically, you should see the `traffic_last_5min` feature added to the instance, with its value changing over time.
###Code
add_traffic_last_5min(
instance={
"dayofweek": 4,
"hourofday": 13,
"pickup_longitude": -73.99,
"pickup_latitude": 40.758,
"dropoff_latitude": 41.742,
"dropoff_longitude": -73.07,
}
)
###Output
_____no_output_____
###Markdown
Finally, we'll use the Python API to call predictions on an instance, using the real-time traffic information in our prediction. Just as above, you should notice that the resulting predictions change over time as the real-time traffic information changes.

**Exercise.** Complete the code below to call prediction on an instance incorporating real-time traffic info. You should
- use the function `add_traffic_last_5min` to add the most recent real-time traffic data to the prediction instance
- call prediction on your model for this real-time instance and save the result as a variable called `response`
- parse the JSON of `response` to print the predicted taxifare cost
###Code
# TODO 2b. Write code to call prediction on instance using realtime traffic info.
#Hint: Look at the "Serving online predictions" section of this page https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routine-keras
MODEL_NAME = 'taxifare'
VERSION_NAME = 'traffic'
service = googleapiclient.discovery.build('ml', 'v1', cache_discovery=False)
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT,
MODEL_NAME,
VERSION_NAME)
instance = # TODO
response = # TODO
if 'error' in response:
raise RuntimeError(response['error'])
else:
print(response['predictions'][0]['output_1'][0])
###Output
_____no_output_____
###Markdown
Working with Streaming Data

Learning Objectives
1. Learn how to process real-time data for ML models using Cloud Dataflow
2. Learn how to serve online predictions using real-time data

Introduction

It can be useful to leverage real-time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline, which can be non-trivial. Typically you will have the following:
- A series of IoT devices generating and sending data from the field in real time (in our case these are the taxis)
- A messaging bus that receives and temporarily stores the IoT data (in our case this is Cloud Pub/Sub)
- A stream processing service that subscribes to the messaging bus, windows the messages, and performs data transformations on each window (in our case this is Cloud Dataflow)
- A persistent store to keep the processed data (in our case this is BigQuery)

These steps happen continuously and in real time, and are illustrated by the blue arrows in the diagram below. Once this streaming data pipeline is established, we need to modify our model serving to leverage it. This simply means adding a call to the persistent store (BigQuery) to fetch the latest real-time data when a prediction request comes in. This flow is illustrated by the red arrows in the diagram below.

In this lab we will address how to process real-time data for machine learning models. We will use the same data as our previous 'taxifare' labs, but with the addition of `trips_last_5min` as an additional feature. This is our proxy for real-time traffic.
###Code
!pip install --user apache-beam[gcp]
###Output
Requirement already satisfied: apache-beam[gcp] in /opt/conda/lib/python3.7/site-packages (2.17.0)
Requirement already satisfied: grpcio<2,>=1.12.1 in /opt/conda/lib/python3.7/site-packages (from apache-beam[gcp]) (1.33.2)
Requirement already satisfied: python-dateutil<3,>=2.8.0 in /opt/conda/lib/python3.7/site-packages (from apache-beam[gcp]) (2.8.1)
Requirement already satisfied: avro-python3<2.0.0,>=1.8.1; python_version >= "3.0" in /opt/conda/lib/python3.7/site-packages (from apache-beam[gcp]) (1.10.0)
Requirement already satisfied: protobuf<4,>=3.5.0.post1 in /opt/conda/lib/python3.7/site-packages (from apache-beam[gcp]) (3.13.0)
Requirement already satisfied: pytz>=2018.3 in /opt/conda/lib/python3.7/site-packages (from apache-beam[gcp]) (2020.4)
Requirement already satisfied: pymongo<4.0.0,>=3.8.0 in /opt/conda/lib/python3.7/site-packages (from apache-beam[gcp]) (3.11.0)
Requirement already satisfied: future<1.0.0,>=0.16.0 in /opt/conda/lib/python3.7/site-packages (from apache-beam[gcp]) (0.18.2)
Requirement already satisfied: pydot<2,>=1.2.0 in /opt/conda/lib/python3.7/site-packages (from apache-beam[gcp]) (1.4.1)
Requirement already satisfied: hdfs<3.0.0,>=2.1.0 in /opt/conda/lib/python3.7/site-packages (from apache-beam[gcp]) (2.5.8)
Requirement already satisfied: crcmod<2.0,>=1.7 in /opt/conda/lib/python3.7/site-packages (from apache-beam[gcp]) (1.7)
Requirement already satisfied: fastavro<0.22,>=0.21.4 in /opt/conda/lib/python3.7/site-packages (from apache-beam[gcp]) (0.21.24)
Requirement already satisfied: mock<3.0.0,>=1.0.1 in /home/jupyter/.local/lib/python3.7/site-packages (from apache-beam[gcp]) (2.0.0)
Requirement already satisfied: pyarrow<0.16.0,>=0.15.1; python_version >= "3.0" or platform_system != "Windows" in /home/jupyter/.local/lib/python3.7/site-packages (from apache-beam[gcp]) (0.15.1)
Requirement already satisfied: oauth2client<4,>=2.0.1 in /opt/conda/lib/python3.7/site-packages (from apache-beam[gcp]) (3.0.0)
Processing /home/jupyter/.cache/pip/wheels/0d/e7/b6/0dd30343ceca921cfbd91f355041bd9c69e0f40b49f25b7b8a/httplib2-0.12.0-py3-none-any.whl
Requirement already satisfied: dill<0.3.1,>=0.3.0 in /opt/conda/lib/python3.7/site-packages (from apache-beam[gcp]) (0.3.0)
Requirement already satisfied: google-cloud-bigtable<1.1.0,>=0.31.1; extra == "gcp" in /home/jupyter/.local/lib/python3.7/site-packages (from apache-beam[gcp]) (1.0.0)
Requirement already satisfied: google-apitools<0.5.29,>=0.5.28; extra == "gcp" in /opt/conda/lib/python3.7/site-packages (from apache-beam[gcp]) (0.5.28)
Requirement already satisfied: google-cloud-pubsub<1.1.0,>=0.39.0; extra == "gcp" in /home/jupyter/.local/lib/python3.7/site-packages (from apache-beam[gcp]) (1.0.2)
Requirement already satisfied: google-cloud-datastore<1.8.0,>=1.7.1; extra == "gcp" in /home/jupyter/.local/lib/python3.7/site-packages (from apache-beam[gcp]) (1.7.4)
Requirement already satisfied: google-cloud-bigquery<1.18.0,>=1.6.0; extra == "gcp" in /home/jupyter/.local/lib/python3.7/site-packages (from apache-beam[gcp]) (1.17.1)
Requirement already satisfied: google-cloud-core<2,>=0.28.1; extra == "gcp" in /opt/conda/lib/python3.7/site-packages (from apache-beam[gcp]) (1.3.0)
Requirement already satisfied: cachetools<4,>=3.1.0; extra == "gcp" in /home/jupyter/.local/lib/python3.7/site-packages (from apache-beam[gcp]) (3.1.1)
Requirement already satisfied: six>=1.5.2 in /opt/conda/lib/python3.7/site-packages (from grpcio<2,>=1.12.1->apache-beam[gcp]) (1.15.0)
Requirement already satisfied: setuptools in /opt/conda/lib/python3.7/site-packages (from protobuf<4,>=3.5.0.post1->apache-beam[gcp]) (50.3.2)
Requirement already satisfied: pyparsing>=2.1.4 in /opt/conda/lib/python3.7/site-packages (from pydot<2,>=1.2.0->apache-beam[gcp]) (2.4.7)
Requirement already satisfied: requests>=2.7.0 in /opt/conda/lib/python3.7/site-packages (from hdfs<3.0.0,>=2.1.0->apache-beam[gcp]) (2.24.0)
Requirement already satisfied: docopt in /opt/conda/lib/python3.7/site-packages (from hdfs<3.0.0,>=2.1.0->apache-beam[gcp]) (0.6.2)
Requirement already satisfied: pbr>=0.11 in /opt/conda/lib/python3.7/site-packages (from mock<3.0.0,>=1.0.1->apache-beam[gcp]) (5.5.1)
Requirement already satisfied: numpy>=1.14 in /opt/conda/lib/python3.7/site-packages (from pyarrow<0.16.0,>=0.15.1; python_version >= "3.0" or platform_system != "Windows"->apache-beam[gcp]) (1.19.4)
Requirement already satisfied: rsa>=3.1.4 in /opt/conda/lib/python3.7/site-packages (from oauth2client<4,>=2.0.1->apache-beam[gcp]) (4.6)
Requirement already satisfied: pyasn1-modules>=0.0.5 in /opt/conda/lib/python3.7/site-packages (from oauth2client<4,>=2.0.1->apache-beam[gcp]) (0.2.8)
Requirement already satisfied: pyasn1>=0.1.7 in /opt/conda/lib/python3.7/site-packages (from oauth2client<4,>=2.0.1->apache-beam[gcp]) (0.4.8)
Requirement already satisfied: google-api-core[grpc]<2.0.0dev,>=1.14.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigtable<1.1.0,>=0.31.1; extra == "gcp"->apache-beam[gcp]) (1.22.4)
Requirement already satisfied: grpc-google-iam-v1<0.13dev,>=0.12.3 in /opt/conda/lib/python3.7/site-packages (from google-cloud-bigtable<1.1.0,>=0.31.1; extra == "gcp"->apache-beam[gcp]) (0.12.3)
Requirement already satisfied: fasteners>=0.14 in /opt/conda/lib/python3.7/site-packages (from google-apitools<0.5.29,>=0.5.28; extra == "gcp"->apache-beam[gcp]) (0.15)
Requirement already satisfied: google-resumable-media<0.5.0dev,>=0.3.1 in /home/jupyter/.local/lib/python3.7/site-packages (from google-cloud-bigquery<1.18.0,>=1.6.0; extra == "gcp"->apache-beam[gcp]) (0.4.1)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache-beam[gcp]) (1.25.11)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.7/site-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache-beam[gcp]) (2020.11.8)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache-beam[gcp]) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache-beam[gcp]) (3.0.4)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigtable<1.1.0,>=0.31.1; extra == "gcp"->apache-beam[gcp]) (1.52.0)
Requirement already satisfied: google-auth<2.0dev,>=1.21.1 in /opt/conda/lib/python3.7/site-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigtable<1.1.0,>=0.31.1; extra == "gcp"->apache-beam[gcp]) (1.23.0)
Requirement already satisfied: monotonic>=0.1 in /opt/conda/lib/python3.7/site-packages (from fasteners>=0.14->google-apitools<0.5.29,>=0.5.28; extra == "gcp"->apache-beam[gcp]) (1.5)
Installing collected packages: httplib2
Successfully installed httplib2-0.12.0
###Markdown
**Restart** the kernel before proceeding further (On the Notebook menu - Kernel - Restart Kernel).
###Code
import os
import googleapiclient.discovery
import shutil
from google.cloud import bigquery
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
BUCKET = "qwiklabs-gcp-04-8722038efd75"
PROJECT = "qwiklabs-gcp-04-8722038efd75"
REGION = "us-west1"
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
###Output
_____no_output_____
###Markdown
Re-train our model with `trips_last_5min` feature

In this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook `training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs/4a_streaming_data_training.ipynb`. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to TensorFlow module, but note the added feature for `trips_last_5min` in the model and the dataset.

Simulate Real Time Taxi Data

Since we don’t actually have real-time taxi data, we will synthesize it using a simple Python script. The script publishes events to Google Cloud Pub/Sub.

Inspect the `iot_devices.py` script in the `taxicab_traffic` folder. It is configured to send about 2,000 trip messages every five minutes, with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery. In production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub.

To execute the `iot_devices.py` script, launch a terminal and navigate to the `training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/labs` directory. Then run the following two commands.

```bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
python3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID
```

You will see new messages being published every 5 seconds. **Keep this terminal open** so it continues to publish events to the Pub/Sub topic. If you open [Pub/Sub in your Google Cloud Console](https://console.cloud.google.com/cloudpubsub/topic/list), you should be able to see a topic called `taxi_rides`.

Create a BigQuery table to collect the processed data

In the next section, we will create a Dataflow pipeline to write processed taxifare data to a BigQuery table; however, that table does not yet exist. Execute the following commands to create a BigQuery dataset called `taxifare` and a table within that dataset called `traffic_realtime`.
###Code
bq = bigquery.Client()
dataset = bigquery.Dataset(bq.dataset("taxifare"))
try:
bq.create_dataset(dataset) # will fail if dataset already exists
print("Dataset created.")
except:
print("Dataset already exists.")
###Output
Dataset already exists.
###Markdown
Next, we create a table called `traffic_realtime` and set up the schema.
###Code
dataset = bigquery.Dataset(bq.dataset("taxifare"))
table_ref = dataset.table("traffic_realtime")
SCHEMA = [
bigquery.SchemaField("trips_last_5min", "INTEGER", mode="REQUIRED"),
bigquery.SchemaField("time", "TIMESTAMP", mode="REQUIRED"),
]
table = bigquery.Table(table_ref, schema=SCHEMA)
try:
bq.create_table(table)
print("Table created.")
except:
print("Table already exists.")
###Output
Table already exists.
###Markdown
Launch Streaming Dataflow Pipeline

Now that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming Dataflow pipeline. The pipeline is defined in `./taxicab_traffic/streaming_count.py`. Open that file and inspect it. There are 5 transformations being applied:
- Read from Pub/Sub
- Window the messages
- Count the number of messages in the window
- Format the count for BigQuery
- Write results to BigQuery

**TODO:** Open the file `./taxicab_traffic/streaming_count.py` and find the TODO there. Specify a sliding window that is 5 minutes long and gets recalculated every 15 seconds. Hint: Reference the [beam programming guide](https://beam.apache.org/documentation/programming-guide/windowing) for guidance. To check your answer, reference the solution. For the second transform, we specify a sliding window that is 5 minutes long, and recalculate values every 15 seconds.

In a new terminal, launch the Dataflow pipeline using the command below. You can change the `BUCKET` variable if necessary. Here it is assumed to be your `PROJECT_ID`.

```bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
BUCKET=$PROJECT_ID # CHANGE AS NECESSARY
python3 ./taxicab_traffic/streaming_count.py \
    --input_topic taxi_rides \
    --runner=DataflowRunner \
    --project=$PROJECT_ID \
    --temp_location=gs://$BUCKET/dataflow_streaming
```

Once you've submitted the command above, you can examine the progress of that job in the [Dataflow section of Cloud console](https://console.cloud.google.com/dataflow).

Explore the data in the table

After a few moments, you should also see new data written to your BigQuery table. Re-run the query below periodically to observe new data streaming in; you should see a new row every 15 seconds.
###Code
%load_ext google.cloud.bigquery
%%bigquery
SELECT
*
FROM
`taxifare.traffic_realtime`
ORDER BY
time DESC
LIMIT 10
###Output
_____no_output_____
###Markdown
Make predictions from the new data

In the rest of the lab, we'll reference the model we trained and deployed in the previous labs, so make sure you have run the code in the `train.ipynb` notebook. The `add_traffic_last_5min` function below will query the `traffic_realtime` table to find the most recent traffic information and add that feature to our instance for prediction.

**Exercise.** Complete the code in the function below. Write a SQL query that returns the most recent entry in `traffic_realtime` and add it to the instance.
###Code
# TODO 2a. Write a function to take most recent entry in `traffic_realtime` table and add it to instance.
def add_traffic_last_5min(instance):
bq = bigquery.Client()
query_string = """
SELECT
*
FROM
taxifare.traffic_realtime
ORDER BY
time DESC
LIMIT 1
"""
trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]
instance['traffic_last_5min'] = int(trips)
return instance
###Output
_____no_output_____
###Markdown
The `traffic_realtime` table is updated in real time using Cloud Pub/Sub and Dataflow, so if you run the cell below periodically, you should see the `traffic_last_5min` feature added to the instance, with its value changing over time.
###Code
add_traffic_last_5min(instance={'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07})
###Output
_____no_output_____
###Markdown
Finally, we'll use the Python API to call predictions on an instance, using the real-time traffic information in our prediction. Just as above, you should notice that the resulting predictions change over time as the real-time traffic information changes.

**Exercise.** Complete the code below to call prediction on an instance incorporating real-time traffic info. You should
- use the function `add_traffic_last_5min` to add the most recent real-time traffic data to the prediction instance
- call prediction on your model for this real-time instance and save the result as a variable called `response`
- parse the JSON of `response` to print the predicted taxifare cost
###Code
#!pip3 install google-api-python-client==1.12.2 httplib2==0.18.1
!pip3 install google-api-python-client==1.12.2
!pip3 install httplib2==0.18.1
# TODO 2b. Write code to call prediction on instance using realtime traffic info.
#Hint: Look at the "Serving online predictions" section of this page https://cloud.google.com/ml-engine/docs/tensorflow/custom-prediction-routine-keras
MODEL_NAME = 'taxifare'
VERSION_NAME = 'traffic'
service = googleapiclient.discovery.build('ml', 'v1', cache_discovery=False)
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT,
MODEL_NAME,
VERSION_NAME)
instance = add_traffic_last_5min({'dayofweek': 4,
'hourofday': 13,
'pickup_longitude': -73.99,
'pickup_latitude': 40.758,
'dropoff_latitude': 41.742,
'dropoff_longitude': -73.07})
#instance = # TODO: Your code goes here.
# TODO: Your code goes here:
response = service.projects().predict(
name=name,
body={'instances': [instance]}
).execute()
if 'error' in response:
raise RuntimeError(response['error'])
else:
print(response['predictions'][0]['output_1'][0]) # TODO: Your code goes here
###Output
-14.840583801269531
|
src/data/0_fetch_game_reviews.ipynb | ###Markdown
1. User Reviews via Steam API (https://partner.steamgames.com/doc/store/getreviews)
###Code
# import packages
import os
import sys
import time
import json
import numpy as np
import urllib.parse
import urllib.request
from tqdm import tqdm
import plotly.express as px
from datetime import datetime
from googletrans import Translator
import pandas as pd
from pandas import json_normalize
# list package ver. etc.
print("Python version")
print (sys.version)
print("Version info.")
print (sys.version_info)
print('---------------')
%reload_ext watermark
%watermark -v -p os,sys,time,json,numpy,urllib,tqdm,plotly,datetime,pandas
###Output
CPython 3.8.3
IPython 7.15.0
os unknown
sys 3.8.3 (default, May 19 2020, 18:47:26)
[GCC 7.3.0]
time unknown
json 2.0.9
numpy 1.18.5
urllib unknown
tqdm 4.46.1
plotly 4.8.1
datetime unknown
pandas 1.0.4
###Markdown
---
Data Dictionary:
- Response:
  - success - 1 if the query was successful
  - query_summary - Returned in the first request
  - recommendationid - The unique id of the recommendation
  - author
    - steamid - the user’s SteamID
    - num_games_owned - number of games owned by the user
    - num_reviews - number of reviews written by the user
    - playtime_forever - lifetime playtime tracked in this app
    - playtime_last_two_weeks - playtime tracked in the past two weeks for this app
    - playtime_at_review - playtime when the review was written
    - last_played - time for when the user last played
  - language - language the user indicated when authoring the review
  - review - text of written review
  - timestamp_created - date the review was created (unix timestamp)
  - timestamp_updated - date the review was last updated (unix timestamp)
  - voted_up - true means it was a positive recommendation
  - votes_up - the number of users that found this review helpful
  - votes_funny - the number of users that found this review funny
  - weighted_vote_score - helpfulness score
  - comment_count - number of comments posted on this review
  - steam_purchase - true if the user purchased the game on Steam
  - received_for_free - true if the user checked a box saying they got the app for free
  - written_during_early_access - true if the user posted this review while the game was in Early Access
  - developer_response - text of the developer response, if any
  - timestamp_dev_responded - Unix timestamp of when the developer responded, if applicable

---
Source: https://partner.steamgames.com/doc/store/getreviews

1.1 Import
###Code
working_dir = os.getcwd()
print(working_dir)
print(working_dir[:-8]+"data/raw/game_reviews_raw.csv")
output_dir = working_dir[:-8]+"data/raw/game_reviews_raw.csv"
print(output_dir)
# generate game review df
#steam 'chunks' their json files (the game reviews) in sets of 100
#ending with a signature, a 'cursor'. This cursor is then pasted
#onto the same url, to 'grab' the next chunk and so on.
#This sequence block with an 'end cursor' of 'AoJ4tey90tECcbOXSw=='
#set variables
url_base = 'https://store.steampowered.com/appreviews/393380?json=1&filter=updated&language=all&review_type=all&purchase_type=all&num_per_page=100&cursor='
working_dir = os.getcwd()
output_dir = working_dir[:-8]+"data/raw/game_reviews_raw.csv"
#first pass
url = urllib.request.urlopen("https://store.steampowered.com/appreviews/393380?json=1&filter=updated&language=all&review_type=all&purchase_type=all&num_per_page=100&cursor=*")
data = json.loads(url.read().decode())
next_cursor = data['cursor']
next_cursor = next_cursor.replace('+', '%2B')
df1 = json_normalize(data['reviews'])
print(next_cursor)
#add results till stopcursor met, then send all results to csv
while True:
time.sleep(0.5) # Sleep for half-second
url_temp = url_base + next_cursor
url = urllib.request.urlopen(url_temp)
data = json.loads(url.read().decode())
next_cursor = data['cursor']
next_cursor = next_cursor.replace('+', '%2B')
df2 = json_normalize(data['reviews'])
df1 = pd.concat([df1, df2])
print(next_cursor)
if next_cursor == 'AoJ44PCp0tECd4WXSw==' or next_cursor == '*':
df_steam_reviews = df1
df1 = None
        df_steam_reviews.to_csv(output_dir, index=False)
print('All finished! Check raw data directory for output.')
break
#the hash below is each 'cursor' I loop through until the 'end cursor'.
#this is just my way to monitor the download.
# inspect columns
print(df_steam_reviews.info(verbose=True))
# inspect shape
print(df_steam_reviews.shape)
# inspect df
df_steam_reviews
# save that sheet
df_steam_reviews.to_csv('squad_reviews.csv', index=False)
###Output
_____no_output_____
###Markdown
1.2 Clean
###Code
#search for presence of empty cells
df_steam_reviews.isnull().sum(axis = 0)
#drop empty cols 'timestamp_dev_responded' and 'developer_response'
df_steam_reviews = df_steam_reviews.drop(['timestamp_dev_responded', 'developer_response'], axis=1)
# convert unix timestamp columns to datetime format
def time_to_clean(x):
return datetime.fromtimestamp(x)
df_steam_reviews['timestamp_created'] = df_steam_reviews['timestamp_created'].apply(time_to_clean)
df_steam_reviews['timestamp_updated'] = df_steam_reviews['timestamp_updated'].apply(time_to_clean)
df_steam_reviews['author.last_played'] = df_steam_reviews['author.last_played'].apply(time_to_clean)
# inspect
df_steam_reviews
# save that sheet
df_steam_reviews.to_csv('game_reviews.csv', index=False)
###Output
_____no_output_____
###Markdown
Misc
###Code
# list of free weekends:
# Squad Free Weekend - Nov 2016
# Squad Free Weekend - Apr 2017
# Squad Free Weekend - Nov 2017
# Squad Free Weekend - Jun 2018
# Squad Free Weekend - Nov 2018
# Squad Free Weekend - Jul 2019
# Squad Free Weekend - Nov 2019
# list of major patch days:
# v1 - July 1 2015
# v2 - Oct 31 2015
# v3 - Dec 15 2015
# v4 - ?
# v5 - Mar 30 2016
# v6 - May 26 2016
# v7 - Aug 7 2016
# v8 - Nov 1 2016
# v9 - Mar 9 2017
# v10 Feb 5 2018
# v11 Jun 6 2018
# v12 Nov 29 2018
# v13 May ? 2019
# v14 Jun 28 2019
# v15 Jul 22 2019
# v16 Oct 10 2019
# v17 Nov 25 2019
# v18 ?
# v19 May 2 2020
###Output
_____no_output_____
###Markdown

###Code
#v2 (from https://cloud.google.com/translate/docs/simple-translate-call#translate_translate_text-python)
# translate/spellcheck via the google-cloud-translate pkg
import six
from google.cloud import translate_v2 as translate

def time_to_translate(x):
    if x is None:  # ignore the 'NaN' reviews
        return 'NaN'
    translate_client = translate.Client()
    if isinstance(x, six.binary_type):
        x = x.decode('utf-8')
    # target_language defaults to English; the API returns the translated string
    result = translate_client.translate(x)
    return result['translatedText']

#print(time_to_translate('hola'))
# scratch
df_steam_reviews = pd.read_csv('squad_reviews.csv', low_memory=False)
df_steam_reviews
# display reviews
fig = px.histogram(df_steam_reviews, x="timestamp_created", color="voted_up", width=1000, height=500, title='Positive(True)/Negative(False) Reviews')
fig.show()
# translate/spellcheck t
t['review.translated'] = t['review'].progress_apply(time_to_translate)
t.to_csv('t.csv', index=False)
###Output
_____no_output_____ |
apogee_SL510.ipynb | ###Markdown
Import and sort all caravela apogee_SL510 eurec4a files, creating a dataframe from this:
###Code
# imports needed by this notebook
from datetime import datetime
from glob import glob

import numpy as np
import pandas as pd
from tqdm import tqdm

files_list = glob(r'E:\Eurec4a_master\Caravela\apogee_SL_510\*\*'+ '/*SL_510*')
files_list.sort()
li = []
for filename in tqdm(files_list):
    df = pd.read_csv(filename, header=0, sep=',', index_col=False)
    li.append(df)
sl = pd.concat(li, axis=0)
sl
###Output
_____no_output_____
###Markdown
Parse the timestamp to a datetime format
###Code
dt = []
for i in tqdm(sl['PC Timestamp[UTC]']):
dt.append(datetime.strptime(i, '%Y/%m/%d %H:%M:%S.%f'))
sl['dt [UTC]'] = dt
sl = sl[(sl['dt [UTC]'] >= '2020-01-22 00:00:00.000')] #select data from Caravela's launch onwards
sl
###Output
_____no_output_____
###Markdown
Tidy this up to drop the columns we don't need
###Code
sl = sl.reset_index()
sl = sl.drop(['index','PC Time Zone','PC Timestamp[UTC]' ],axis=1)
###Output
_____no_output_____
###Markdown
check for any gaps in timeseries larger than 2 seconds
###Code
time_diff = sl['dt [UTC]'].values[1:] - sl['dt [UTC]'].values[:-1]
for i in np.arange(1, len(time_diff)):
if np.timedelta64(3599, 's') >= time_diff[i] > np.timedelta64(2, 's'):
print('gap starts at', sl['dt [UTC]'][i], 'and lasts for',
np.timedelta64(time_diff[i],'s'))
if time_diff[i] > np.timedelta64(3599, 's'):
print('gap starts at', sl['dt [UTC]'][i], 'and lasts for',
np.timedelta64(time_diff[i],'s'), ' - approximately', np.timedelta64(time_diff[i],'h'))
###Output
gap starts at 2020-01-22 20:35:34.193000 and lasts for 21092 seconds - approximately 5 hours
gap starts at 2020-01-27 12:49:42.753000 and lasts for 91447 seconds - approximately 25 hours
gap starts at 2020-01-29 17:47:36.976000 and lasts for 20 seconds
gap starts at 2020-01-31 18:08:06.895000 and lasts for 14 seconds
gap starts at 2020-02-01 15:29:12.123000 and lasts for 2 seconds
gap starts at 2020-02-04 10:53:07.094000 and lasts for 15 seconds
gap starts at 2020-02-07 08:06:33.486000 and lasts for 2 seconds
gap starts at 2020-02-09 21:46:59.899000 and lasts for 3 seconds
gap starts at 2020-02-13 11:54:22.136000 and lasts for 2 seconds
gap starts at 2020-02-14 05:56:17.835000 and lasts for 2 seconds
gap starts at 2020-02-14 11:01:33.949000 and lasts for 2 seconds
gap starts at 2020-02-21 20:45:34.363000 and lasts for 2 seconds
###Markdown
Convert to iso time as this is a universally accepted format
###Code
a = []
for i in tqdm(range(0,len(sl['dt [UTC]']))):
a.append(sl['dt [UTC]'][i].isoformat())
sl['datetime [UTC]'] = a
sl = sl.drop(['dt [UTC]'], axis = 1)
sl.to_csv('CARAVELA_SL510.csv',index = None)
###Output
_____no_output_____
###Markdown
testing the file we just created
###Code
import matplotlib.pyplot as plt
baa = pd.read_csv('CARAVELA_SL510.csv')# import file to test it
z = []
for i in tqdm(baa['datetime [UTC]']):
z.append(datetime.fromisoformat(i))
baa['dt'] = z
fig,ax = plt.subplots(1,1, figsize=(18, 15))
ax.plot(baa['dt'], baa['SL-510-SS[W m-2]'])
ax.set_ylabel('SL-510-SS[W m-2]')
ax.set_xlabel('Date')
###Output
_____no_output_____ |
.ipynb_checkpoints/ClassNotes-checkpoint.ipynb | ###Markdown
IT Workshop Basics Sheet

This sheet will be regularly updated and available at: https://github.com/siddhartha18101/python_spring_21

How to run code in jupyter nb:
1. Click "+" to add a new cell
2. Write some Python code
3. Click run and ensure that "Code" is selected in dropdown

How to write plain text in jupyter nb:
1. Click "+" to add a new cell
2. Write something in plain english
3. Click the dropdown where it will show "Code" by default and change it to "Markdown"
4. Click run

Basics
###Code
print("Hello World")
###Output
Hello World
###Markdown
Working with numbers:
1. int
2. float
###Code
a = 5
type(a)
a = 5.5
type(a)
a = 5
b = 10
c = 7.9
print(c)
c = int(c)
print(c)
type(c)
###Output
7
###Markdown
Working with Boolean values:
###Code
a = True
print(a)
a = 1
a = bool(1)
print(a)
###Output
True
###Markdown
Working with String:
1. String length
2. String slice
3. String to upper case
4. Last element of a string
5. String concat
###Code
a = "Hello"
###Output
_____no_output_____
###Markdown
H  E  L  L  O
0  1  2  3  4
-5 -4 -3 -2 -1
###Code
print(a[0])
print(a[-1])
a = a[1:]
print(a)
###Output
ello
###Markdown
Similar way to C:
a[x:y]
x will be included
y-1 will be included
i = x
i++
###Code
b = a[0:-1]
print(b)
b = b.upper()
print(b)
b.isupper()
a = "he"
b = "e"
c = a+b
print(c)
a = "1"
a*5
len(a)
###Output
_____no_output_____
###Markdown
Type conversions
###Code
a = 5.5
b = int(a)
b
a = "1"
a*5
a = float(a)
a
a*5
a = 10
b = float(a)
b
a = 1234
a = str(a)
a
import sys
a = sys.maxsize
a
###Output
_____no_output_____
###Markdown
Getting User input
###Code
a = input()
print(a)
print("What is your cgpa")
cgpa = input()
type(cgpa)
type(cgpa)
cgpa = int(cgpa)
type(cgpa)
cgpa = int(input())
type(cgpa)
print("abcd",end = " ")
print("abcd")
print("efgh")
print("abcd",end = "")
print("efgh")
a = "hello"
a = a.capitalize()
a
###Output
_____no_output_____
###Markdown
Potassium Current: $\displaystyle{I_K=g_K(v-E_K)}$ $\displaystyle{g_K(v,t)=\bar{g}_Kn^4(v)}$ $\displaystyle{n \underset{\alpha_n(v)}{\stackrel{\beta_n(v)}{\rightleftharpoons}} (1-n)}$ $\displaystyle{\frac{dn}{dt}=\alpha_n(1-n)-\beta_nn}$ For voltage clamp $v$ is constant so $\alpha_n$ and $\beta_n$ are constant Solving the ODE: $\displaystyle{n(t)=n_{\infty}-(n_{\infty}-n_0)e^{-t/\tau_n}}$ where $\displaystyle{n_{\infty}=\frac{\alpha_n}{\alpha_n+\beta_n}}$ and $\displaystyle{\tau_n=\frac{1}{\alpha_n+\beta_n}}$ $\displaystyle{\alpha_n(v)=\frac{0.01(10-v)}{e^{\frac{10-v}{10}}-1}, \beta_n(v)=0.125e^{\frac{-v}{80}}}$
###Code
## Importing libraries
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
np.seterr(divide='ignore', invalid='ignore');
## Defining functions for the potassium current
def alpha_n(v):
return 0.01*(10-v)/(np.exp((10-v)/10)-1)
def beta_n(v):
return 0.125*np.exp(-v/80)
def n_inf(v):
return alpha_n(v)/(alpha_n(v)+beta_n(v))
def tau_n(v):
return 1/(alpha_n(v)+beta_n(v))
## Creating a voltage clamp experiment
dt=1e-3
T_end=50
t=np.arange(0,T_end+dt,dt)
On=5 #Time of applying vc
Off=30 #Time of removing vc
def voltage_clamp(v0,vC):
v=t*0+v0
v[(t>On)*(t<Off)]=vC
return v
## plotting the gating variable for different voltage clamps
tempL=[]
v0=0 #initial voltage, vC is the voltage clamp value
fig = plt.figure()
for vC in np.arange(0,35,5):
v=voltage_clamp(0,vC)
n1=n_inf(v)-(n_inf(v)-n_inf(v0))*np.exp(-(t-On)/tau_n(v))
n2=n_inf(v)-(n_inf(v)-np.max(n1))*np.exp(-(t-Off)/tau_n(v))
n=v
n[t<Off]=n1[t<Off]
n[t>=Off]=n2[t>=Off]
tempL+=plt.plot(t,n**4,label='$v_C=$'+str(vC))
plt.xlabel('Time [ms]')
plt.ylabel('$n^4$')
plt.ylim([0,0.3])
labels = [l.get_label() for l in tempL]
plt.legend(tempL, labels);
plt.show()
###Output
_____no_output_____
###Markdown
Where is the line for $v_C=10$ ? The denominator of $\alpha(v)=\frac{0.01(10-v)}{e^{\frac{10-v}{10}}-1}$ goes to $0$ for $v=10$. We can use L'Hospital's rule: $\displaystyle{\lim_{v\to10}\alpha_n=\frac{\frac{d}{dv}(0.01(10-v))}{\frac{d}{dv}(e^{\frac{10-v}{10}}-1)}=\frac{-0.01}{-0.1e^{\frac{10-v}{10}}}=0.1}$
###Code
## Taking care of 0/0 for alpha_n
def alpha_n(v):
temp=(0.01*(10-v)/(np.exp((10-v)/10)-1))
return np.where(v!=10,temp,0.1)
## Redoing the voltage clamp for Vc=10
vC=10
plt.plot()
v=voltage_clamp(0,vC)
n1=n_inf(v)-(n_inf(v)-n_inf(v0))*np.exp(-(t-On)/tau_n(v))
n2=n_inf(v)-(n_inf(v)-np.max(n1))*np.exp(-(t-Off)/tau_n(v))
n=v
n[t<Off]=n1[t<Off]
n[t>=Off]=n2[t>=Off]
plt.plot(t,n**4,'g');
###Output
_____no_output_____
###Markdown
Numerical Solution (Euler's Method): $\displaystyle{\frac{dn}{dt}=f(n)\Rightarrow\frac{dn}{dt}=\frac{n_{t+dt}-n_t}{dt}=f(n_t)\Rightarrow n_{t+dt}=n_t+dt\times f(n_t)}$ $\displaystyle{\frac{dn}{dt}=\alpha_n(1-n)-\beta_nn\Rightarrow n_{t+dt}=n_t+dt\times [\alpha_n(1-n_t)-\beta_nn_t]}$
###Code
## Analytical solution
dt=1e-3
T_end=50
t=np.arange(0,T_end+dt,dt)
On=5
Off=30
v0=0
vC=10
fig = plt.figure()
plt.plot()
v=voltage_clamp(0,vC)
n1=n_inf(v)-(n_inf(v)-n_inf(v0))*np.exp(-(t-On)/tau_n(v))
n2=n_inf(v)-(n_inf(v)-np.max(n1))*np.exp(-(t-Off)/tau_n(v))
n=v
n[t<Off]=n1[t<Off]
n[t>=Off]=n2[t>=Off]
plt.plot(t[0:-1:1000],n[0:-1:1000]**4,'o')
plt.xlabel('Time [ms]')
plt.ylabel('$n^4$')
plt.ylim([0,0.08])
plt.legend(["Analytical solution"])
plt.show()
## Numerical Solution using first order explicit Euler's method
dt=0.1
T_end=50
t=np.arange(0,T_end+dt,dt)
v=v0
n_num=np.zeros(len(t))+n_inf(v)
for i in range(0,len(t)-1):
n_num[i+1]=n_num[i]+dt*(alpha_n(v)*(1-n_num[i])-beta_n(v)*n_num[i])
if t[i]>On and t[i]<Off:
v=vC
else:
v=v0
plt.plot(t,n_num**4)
plt.legend(["Analytical solution","Numerical solution"]);
###Output
_____no_output_____
###Markdown
Stochastic potassium channel: $\displaystyle{C_1 \underset{\beta_n}{\stackrel {4\alpha_n}{\rightleftharpoons}} C_2 \underset{2\beta_n}{\stackrel{3\alpha_n}{\rightleftharpoons}} C_3 \underset{3\beta_n}{\stackrel{2\alpha_n}{\rightleftharpoons}} C_4 \underset{4\beta_n}{\stackrel{ \alpha_n}{\rightleftharpoons}} O }$ Initial conditions: $\displaystyle{P_{C_1}=(1-n_{\infty}(v))^4}$ $\displaystyle{ P_{C_2}={4 \choose 3}(1-n_{\infty}(v))^3n_{\infty}(v)}$ $\displaystyle{ P_{C_3}={4 \choose 2}(1-n_{\infty}(v))^2n_{\infty}(v)^2}$ $\displaystyle{ P_{C_4}={4 \choose 1}(1-n_{\infty}(v))^1n_{\infty}(v)^3}$ $\displaystyle{P_{O}=n_{\infty}(v)^4}$ Draw a random number $r$ from a uniform distribution [0,1]:
If $\displaystyle{0<r\le P_{C_1}}$ the channel is in state $C_1$
If $\displaystyle{P_{C_1}<r\le P_{C_1}+P_{C_2}}$ the channel is in state $C_2$
If $\displaystyle{P_{C_1}+P_{C_2}<r\le P_{C_1}+P_{C_2}+P_{C_3}}$ the channel is in state $C_3$
If $\displaystyle{P_{C_1}+P_{C_2}+P_{C_3}<r\le P_{C_1}+P_{C_2}+P_{C_3}+P_{C_4}}$ the channel is in state $C_4$
Otherwise the channel is in state $O$
Hint: it is easier to use the cumulative sum. Run the following code for different values of membrane potential and number of channels to see how the distribution of initial states changes.
###Code
#The initial distribution of channels' state
from scipy.special import comb
V=0 #Initial voltage
N=1000 #Number of Channels
P=np.empty(5)
P[0]=comb(4,4)*(1-n_inf(V))**4*n_inf(V)**0
P[1]=comb(4,3)*(1-n_inf(V))**3*n_inf(V)**1
P[2]=comb(4,2)*(1-n_inf(V))**2*n_inf(V)**2
P[3]=comb(4,1)*(1-n_inf(V))**1*n_inf(V)**3
P[4]=comb(4,0)*(1-n_inf(V))**0*n_inf(V)**4
cP=np.cumsum(P)
S=np.empty(N)
for i in np.arange(0,N):
r1=np.random.uniform()
S[i]=np.nonzero(r1<cP)[0][0]+1
fig = plt.figure()
plt.hist(S,bins=np.arange(0.5, 6, step=1.0),density=True,ec='black');
plt.xticks(np.arange(1., 6, step=1.0),['$C_1$','$C_2$','$C_3$','$C_4$','$O$'])
plt.xlabel('States')
plt.title('$V=$'+str(V));
###Output
_____no_output_____
###Markdown
The transition of a channel between two states follows a Poisson process: $\displaystyle{P(\tau)=ke^{-k\tau}}$ where $k$ is the rate of the Poisson process and $\tau$ is the time that the transition occurs. $\displaystyle{P(t\le\tau)=CDF(\tau)=1-e^{-k\tau}=r_2\Rightarrow \tau=-\frac{ln(1-r_2)}{k}=-\frac{ln(r_3)}{k}}$ Event and Transition rates: $\displaystyle{C_1 \underset{\beta_n}{\stackrel {4\alpha_n}{\rightleftharpoons}} C_2 \underset{2\beta_n}{\stackrel{3\alpha_n}{\rightleftharpoons}} C_3 \underset{3\beta_n}{\stackrel{2\alpha_n}{\rightleftharpoons}} C_4 \underset{4\beta_n}{\stackrel{ \alpha_n}{\rightleftharpoons}} O }$ $\displaystyle{k_{C_1}=4\alpha_n}$ $\displaystyle{k_{C_2}=3\alpha_n+\beta_n}$ $\displaystyle{k_{C_3}=2\alpha_n+2\beta_n}$ $\displaystyle{k_{C_4}= \alpha_n+3\beta_n}$ $\displaystyle{k_{ O}=4\beta_n}$ $\displaystyle{P_{C_1 \to C_2}}=1$ $\displaystyle{P_{C_2 \to C_3}=\frac{3\alpha_n}{3\alpha_n+\beta_n}}$ $\displaystyle{P_{C_2 \to C_1}=\frac{\beta_n}{3\alpha_n+\beta_n}}$ $\displaystyle{P_{C_3 \to C_4}=\frac{2\alpha_n}{2\alpha_n+2\beta_n}}$ $\displaystyle{P_{C_3 \to C_2}=\frac{2\beta_n}{2\alpha_n+2\beta_n}}$ $\displaystyle{P_{C_4 \to O}=\frac{\alpha_n}{\alpha_n+3\beta_n}}$ $\displaystyle{P_{C_4 \to C_3}=\frac{3\beta_n}{\alpha_n+3\beta_n}}$ $\displaystyle{P_{O \to C_4}}=1$
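A compact sketch of a single update step of this scheme, for one channel (illustrative only; the full simulation in the next cell interleaves these steps with the voltage-clamp switching):

```python
# One Gillespie-style step for a single channel: dwell time + which neighbor state to jump to.
import numpy as np

def next_transition(state, alpha, beta, rng=np.random):
    # exit rates k_S and "move toward O" probabilities for states C1..C4, O (indexed 0..4)
    k = [4*alpha, 3*alpha + beta, 2*alpha + 2*beta, alpha + 3*beta, 4*beta]
    p_up = [1.0,
            3*alpha / (3*alpha + beta),
            2*alpha / (2*alpha + 2*beta),
            alpha / (alpha + 3*beta),
            0.0]
    tau = -np.log(rng.uniform()) / k[state]                      # exponential dwell time
    new_state = state + 1 if rng.uniform() < p_up[state] else state - 1
    return tau, new_state
```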
###Code
#The stochastic simulation
Tend=80
On=20
Off=50
dt=0.01
T=np.arange(0,Tend,dt)
N=1000
Open=np.zeros([N,len(T)])
v0=0
P=np.empty(5)
P[0]=comb(4,4)*(1-n_inf(v0))**4*n_inf(v0)**0
P[1]=comb(4,3)*(1-n_inf(v0))**3*n_inf(v0)**1
P[2]=comb(4,2)*(1-n_inf(v0))**2*n_inf(v0)**2
P[3]=comb(4,1)*(1-n_inf(v0))**1*n_inf(v0)**3
P[4]=comb(4,0)*(1-n_inf(v0))**0*n_inf(v0)**4
cP=np.cumsum(P)
k=np.empty(5)
K=np.empty(5)
vC=60
for i in range(0,N):
S=np.zeros(1)
v=v0
k[0]=4*alpha_n(v)+0*beta_n(v)
k[1]=3*alpha_n(v)+1*beta_n(v)
k[2]=2*alpha_n(v)+2*beta_n(v)
k[3]=1*alpha_n(v)+3*beta_n(v)
k[4]=0*alpha_n(v)+4*beta_n(v)
K[0]=1.0
K[1]=3*alpha_n(v)/(3*alpha_n(v)+1*beta_n(v))
K[2]=2*alpha_n(v)/(2*alpha_n(v)+2*beta_n(v))
K[3]=1*alpha_n(v)/(1*alpha_n(v)+3*beta_n(v))
K[4]=1.0
r1=np.random.uniform()
S[0]=np.nonzero(r1<cP)[0][0]
t=0
j=0
States=np.zeros(len(T))+S
while t<Tend:
r3=np.random.uniform()
tau=-np.log(r3)/k[int(S[j])]
r4=np.random.uniform()
if S[j]==0:
S=np.append(S,1)
elif S[j]==1:
if r4<K[1]:
S=np.append(S,2)
else:
S=np.append(S,0)
elif S[j]==2:
if r4<K[2]:
S=np.append(S,3)
else:
S=np.append(S,1)
elif S[j]==3:
if r4<K[3]:
S=np.append(S,4)
else:
S=np.append(S,2)
else:
S=np.append(S,3)
j=j+1
t=t+tau
States[T>=t]=S[j]
if t>On and t<Off-dt:
if v==v0:
t=On
v=vC
k[0]=4*alpha_n(v)+0*beta_n(v)
k[1]=3*alpha_n(v)+1*beta_n(v)
k[2]=2*alpha_n(v)+2*beta_n(v)
k[3]=1*alpha_n(v)+3*beta_n(v)
k[4]=0*alpha_n(v)+4*beta_n(v)
K[0]=1.0
K[1]=3*alpha_n(v)/(3*alpha_n(v)+1*beta_n(v))
K[2]=2*alpha_n(v)/(2*alpha_n(v)+2*beta_n(v))
K[3]=1*alpha_n(v)/(1*alpha_n(v)+3*beta_n(v))
K[4]=1.0
if t>Off:
if v==vC:
t=Off
v=v0
k[0]=4*alpha_n(v)+0*beta_n(v)
k[1]=3*alpha_n(v)+1*beta_n(v)
k[2]=2*alpha_n(v)+2*beta_n(v)
k[3]=1*alpha_n(v)+3*beta_n(v)
k[4]=0*alpha_n(v)+4*beta_n(v)
K[0]=1.0
K[1]=3*alpha_n(v)/(3*alpha_n(v)+1*beta_n(v))
K[2]=2*alpha_n(v)/(2*alpha_n(v)+2*beta_n(v))
K[3]=1*alpha_n(v)/(1*alpha_n(v)+3*beta_n(v))
K[4]=1.0
temp=States==4
temp[temp==True]=1.0
Open[i,:]=temp
temp=np.mean(Open,0)
fig,ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(T,temp);
ax1.set_xlabel('Time[ms]')
ax1.set_ylabel('$<O>$',color='b')
ax1.set_ylim([0,0.8]);
# Deterministic simulation
v=v0
n_num=np.zeros(len(T))+n_inf(v)
for i in range(0,len(T)-1):
n_num[i+1]=n_num[i]+dt*(alpha_n(v)*(1-n_num[i])-beta_n(v)*n_num[i])
if T[i]>On and T[i]<Off:
v=vC
else:
v=v0
ax2.plot(T,n_num**4,'r')
ax2.set_ylabel('$n^4$',color='r')
ax2.set_ylim([0,0.8]);
###Output
_____no_output_____
###Markdown
$\displaystyle{\sigma^2=Np(1-p),\;p=n^4}$ for the number of open channels; the per-channel open indicator, whose empirical variance is plotted below, has variance $p(1-p)$.
###Code
# Variance
temp=np.var(Open,0)
fig,ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(T,temp);
ax1.set_xlabel('Time[ms]')
ax1.set_ylabel('$<O^2>-<O>^2$',color='b')
ax1.set_ylim([0,0.3]);
ax2.plot(T,n_num**4*(1-n_num**4),'r')
ax2.set_ylabel('$(1-n^4)n^4$',color='r')
ax2.set_ylim([0,0.3]);
###Output
_____no_output_____
###Markdown
Sodium Current: $\displaystyle{I_{Na}=g_{Na}(v-E_{Na})}$ $\displaystyle{g_{Na}(v,t)=\bar{g}_{Na}m^3(v)h(v)}$ ${\displaystyle\frac{dm}{dt}=\alpha_m(1-m)-\beta_mm}$ ${\displaystyle\frac{dh}{dt}=\alpha_h(1-h)-\beta_hh}$ ${\displaystyle\alpha_m(v)=\frac{0.1(25-v)}{e^{\frac{25-v}{10}}-1}, \beta_m(v)=4e^{\frac{-v}{18}}}$ ${\displaystyle\alpha_h(v)=0.07e^{\frac{-v}{20}},\beta_h(v)=\frac{1}{1+e^{\frac{30-v}{10}}}}$
###Code
# Functions for sodium
def alpha_m(v):
temp=0.1*(25-v)/(np.exp((25-v)/10)-1)
return np.where(v!=25,temp,1.0)
def beta_m(v):
return 4.0*np.exp(-v/18)
def m_inf(v):
return alpha_m(v)/(alpha_m(v)+beta_m(v))
def tau_m(v):
return 1/(alpha_m(v)+beta_m(v))
# Functions for sodium
def alpha_h(v):
return 0.07*np.exp(-v/20)
def beta_h(v):
return 1.0/(1+np.exp((30-v)/10))
def h_inf(v):
return alpha_h(v)/(alpha_h(v)+beta_h(v))
def tau_h(v):
return 1/(alpha_h(v)+beta_h(v))
# Voltage clamp for m
dt=1e-3
T_end=50
t=np.arange(0,T_end+dt,dt)
On=5
Off=30
tempL=[]
v0=0
fig = plt.figure()
for vC in np.arange(0,35,5):
v=voltage_clamp(0,vC)
m1=m_inf(v)-(m_inf(v)-m_inf(v0))*np.exp(-(t-On)/tau_m(v))
m2=m_inf(v)-(m_inf(v)-np.max(m1))*np.exp(-(t-Off)/tau_m(v))
m=v
m[t<Off]=m1[t<Off]
m[t>=Off]=m2[t>=Off]
tempL+=plt.plot(t,m**3,label='$v_C=$'+str(vC))
plt.xlabel('Time [ms]')
plt.ylabel('$m^3$')
plt.ylim([0,0.3])
labels = [l.get_label() for l in tempL]
plt.legend(tempL, labels);
plt.show()
# Voltage clamp for h
tempL=[]
v0=0
fig = plt.figure()
for vC in np.arange(0,35,5):
v=voltage_clamp(0,vC)
h1=h_inf(v)-(h_inf(v)-h_inf(v0))*np.exp(-(t-On)/tau_h(v))
h2=h_inf(v)-(h_inf(v)-np.min(h1))*np.exp(-(t-Off)/tau_h(v))
h=v
h[t<Off]=h1[t<Off]
h[t>=Off]=h2[t>=Off]
tempL+=plt.plot(t,h,label='$v_C=$'+str(vC))
plt.xlabel('Time [ms]')
plt.ylabel('$h$')
plt.ylim([0,1.0])
labels = [l.get_label() for l in tempL]
plt.legend(tempL, labels)
plt.show();
###Output
_____no_output_____
###Markdown
Steady state values for all the channels
###Code
V=np.arange(-40,100,0.1)
Ns=n_inf(V)
Ms=m_inf(V)
Hs=h_inf(V)
fig = plt.figure()
plt.plot(V,Ns,V,Ms,V,Hs)
plt.xlabel('Membrane potential-$V_{rest}$ [mV]')
plt.ylim([0,1.0])
plt.legend(['$n_\infty$','$m_\infty$','$h_\infty$'])
plt.show()
###Output
_____no_output_____
###Markdown
Time constants for all the channels
###Code
V=np.arange(-40,100,0.1)
Ns=tau_n(V)
Ms=tau_m(V)
Hs=tau_h(V)
fig = plt.figure()
plt.plot(V,Ns,V,Ms,V,Hs)
plt.xlabel('Membrane potential-$V_{rest}$ [mV]')
# plt.ylim([0,1.0])
plt.legend([r'$ \tau_n $',r'$\tau_m$',r'$\tau_h$'])
plt.show()
###Output
_____no_output_____
###Markdown
Hodgkin–Huxley model:  The currents that were not accounted for are included as leak currents: $\displaystyle{I_L=\bar{g}_L(v-E_L)}$ Putting all the currents together we have: $\displaystyle{I_C+I_{Na}+I_K+I_L-I_{ext}=0}$ $\displaystyle{C_m\frac{dv}{dt}+g_{Na}(v-E_{Na})+g_{K}(v-E_{K})+g_{L}(v-E_{L})-I_{ext}=0}$ Conductances: $\displaystyle{g_K(v,t)=\bar{g}_Kn^4(v)}$ $\displaystyle{g_{Na}(v,t)=\bar{g}_{Na}m^3(v)h(v)}$ $\displaystyle{\frac{dn}{dt}=\alpha_n(1-n)-\beta_nn}$ ${\displaystyle\frac{dm}{dt}=\alpha_m(1-m)-\beta_mm}$ ${\displaystyle\frac{dh}{dt}=\alpha_h(1-h)-\beta_hh}$ Rates: $\displaystyle{\alpha_n(v)=\frac{0.01(10-v)}{e^{\frac{10-v}{10}}-1}, \beta_n(v)=0.125e^{\frac{-v}{80}}}$ ${\displaystyle\alpha_m(v)=\frac{0.1(25-v)}{e^{\frac{25-v}{10}}-1}, \beta_m(v)=4e^{\frac{-v}{18}}}$ ${\displaystyle\alpha_h(v)=0.07e^{\frac{-v}{20}},\beta_h(v)=\frac{1}{1+e^{\frac{30-v}{10}}}}$ Constants: $\displaystyle{C_M=1.0\mu Fcm^{-2}}$ $\displaystyle{E_K=-12mV,\bar{g}_{K}=36mScm^{-2}}$ $\displaystyle{E_{Na}=115mV,\bar{g}_{Na}=120mScm^{-2}}$ $\displaystyle{E_L=10.6mV,\bar{g}_{L}=0.3mScm^{-2}}$ We can use Euler's method again to solve the system of equations for $v,n,m,h$:
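As a quick sanity check of these constants, the resting potential is the voltage where the steady-state ionic currents cancel; a minimal sketch (assuming the `n_inf`, `m_inf`, and `h_inf` helpers defined earlier in this notebook):

```python
# Solve for the membrane potential where the total steady-state ionic current is zero.
from scipy.optimize import brentq

def steady_state_current(v):
    I_K  = 36.0  * n_inf(v)**4            * (v - (-12.0))
    I_Na = 120.0 * m_inf(v)**3 * h_inf(v) * (v - 115.0)
    I_L  = 0.3   * (v - 10.6)
    return I_K + I_Na + I_L

v_rest = brentq(steady_state_current, -20.0, 20.0)
print(v_rest)  # close to 0, i.e. the model is expressed relative to the resting potential
```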
###Code
# function to time march HH model
def HHmodel(v0,I_ext):
C_m=1
E_K=-12
g_K=36
E_Na=115
g_Na=120
E_L=10.6
g_L=0.3
v=np.zeros(len(t))+v0
n=np.zeros(len(t))+n_inf(v0)
m=np.zeros(len(t))+m_inf(v0)
h=np.zeros(len(t))+h_inf(v0)
for i in range(0,len(t)-1):
I_K =g_K *n[i]**4 *(v[i]-E_K)
I_Na=g_Na*m[i]**3*h[i]*(v[i]-E_Na)
I_L =g_L *(v[i]-E_L)
n[i+1]=n[i]+dt*(alpha_n(v[i])*(1-n[i])-beta_n(v[i])*n[i])
m[i+1]=m[i]+dt*(alpha_m(v[i])*(1-m[i])-beta_m(v[i])*m[i])
h[i+1]=h[i]+dt*(alpha_h(v[i])*(1-h[i])-beta_h(v[i])*h[i])
v[i+1]=v[i]+dt*(-I_K-I_Na-I_L+I_ext[i])/C_m
Solution={'v':v,'n':n,'m':m,'h':h}
return Solution
# Function for plotting HH model states
def plotHH(Key):
fig,ax1 = plt.subplots()
ax2 = ax1.twinx()
for Items in Key:
State=Solution[Items]
ax1.plot(t,State);
ax1.set_xlabel('Time[ms]')
if Items=='v':
Label='Membrane potential-$V_{rest}$ $[mV]$'
ax1.set_ylim([-20,120]);
else:
Label='Gating variable'
ax1.set_ylabel(Label)
ax1.legend(Key)
ax2.plot(t,I_ext,'r')
ax2.set_ylabel('I$_{ext}$$[\mu A/cm^2]$',color='r')
ax2.set_ylim([5*np.min(I_ext)-np.max(I_ext),5*np.max(I_ext)-np.min(I_ext)]);
###Output
_____no_output_____
###Markdown
Small injected currents do not cause action potentials:
###Code
%matplotlib inline
dt=0.01
T_end=50
t=np.arange(0,T_end+dt,dt)
On=20
Off=25
I_ext=np.zeros(len(t))
I_ext[(t>On)*(t<Off)]=2
v0=0
Solution=HHmodel(v0,I_ext)
plotHH(['v'])
plotHH(['n','m','h'])
###Output
_____no_output_____
###Markdown
Large enough injected currents lead to action potentials:
###Code
dt=0.01
T_end=50
t=np.arange(0,T_end+dt,dt)
On=20
Off=25
I_ext=np.zeros(len(t))
I_ext[(t>On)*(t<Off)]=3
v0=0
Solution=HHmodel(v0,I_ext)
plotHH(['v'])
plotHH(['n','m','h'])
g_K=36
g_Na=120
fig,ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(t,g_K*Solution['n']**4,t,g_Na*Solution['m']**3*Solution['h']);
ax1.set_ylabel('Conductance $[mscm^2]$')
ax1.set_xlabel('Time $[ms]$')
ax1.legend(['$g_K$','$g_{Na}$'])
ax2.plot(t,I_ext,'r')
ax2.set_ylabel('$I[\mu A/cm^2]$',color='r')
ax2.set_ylim([5*np.min(I_ext)-np.max(I_ext),5*np.max(I_ext)-np.min(I_ext)]);
###Output
_____no_output_____
###Markdown
What happens if we provide a second pulse?
###Code
dt=0.01
T_end=60
t=np.arange(0,T_end+dt,dt)
On1=20
Off1=25
On2=40
Off2=45
I_ext=np.zeros(len(t))
I_ext[(t>On1)*(t<Off1)]=3
I_ext[(t>On2)*(t<Off2)]=3
v0=0
Solution=HHmodel(v0,I_ext)
plotHH(['v'])
###Output
_____no_output_____
###Markdown
Is it possible to generate action potentials with shorter pluses?
###Code
dt=0.01
T_end=80
t=np.arange(0,T_end+dt,dt)
On1=20
Off1=21
On2=40
Off2=41
On3=60
Off3=61
I_ext=np.zeros(len(t))
I_ext[(t>On1)*(t<Off1)]=3
I_ext[(t>On2)*(t<Off2)]=7
I_ext[(t>On3)*(t<Off3)]=7
v0=0
Solution=HHmodel(v0,I_ext)
plotHH(['v'])
plotHH(['n','m','h'])
###Output
_____no_output_____
###Markdown
What happens when the pulses are closer to each other?
###Code
dt=0.01
T_end=60
t=np.arange(0,T_end+dt,dt)
On1=20
Off1=21
On2=40
Off2=41
I_ext=np.zeros(len(t))
I_ext[(t>On1)*(t<Off1)]=8
I_ext[(t>On2)*(t<Off2)]=8
v0=0
Solution=HHmodel(v0,I_ext)
plotHH(['v'])
dt=0.01
T_end=60
t=np.arange(0,T_end+dt,dt)
On1=20
Off1=21
On2=36
Off2=37
I_ext=np.zeros(len(t))
I_ext[(t>On1)*(t<Off1)]=8
I_ext[(t>On2)*(t<Off2)]=8
v0=0
Solution=HHmodel(v0,I_ext)
plotHH(['v'])
dt=0.01
T_end=60
t=np.arange(0,T_end+dt,dt)
On1=20
Off1=21
On2=36
Off2=37
I_ext=np.zeros(len(t))
I_ext[(t>On1)*(t<Off1)]=8
I_ext[(t>On2)*(t<Off2)]=9
v0=0
Solution=HHmodel(v0,I_ext)
plotHH(['v'])
dt=0.01
T_end=60
t=np.arange(0,T_end+dt,dt)
On1=20
Off1=21
On2=30
Off2=31
I_ext=np.zeros(len(t))
I_ext[(t>On1)*(t<Off1)]=8
I_ext[(t>On2)*(t<Off2)]=9
v0=0
Solution=HHmodel(v0,I_ext)
plotHH(['v'])
plotHH(['n','m','h'])
dt=0.01
T_end=60
t=np.arange(0,T_end+dt,dt)
On1=20
Off1=21
On2=30
Off2=31
I_ext=np.zeros(len(t))
I_ext[(t>On1)*(t<Off1)]=8
I_ext[(t>On2)*(t<Off2)]=41
v0=0
Solution=HHmodel(v0,I_ext)
plotHH(['v'])
dt=0.01
T_end=60
t=np.arange(0,T_end+dt,dt)
On1=20
Off1=21
On2=26
Off2=27
I_ext=np.zeros(len(t))
I_ext[(t>On1)*(t<Off1)]=8
I_ext[(t>On2)*(t<Off2)]=500
v0=0
Solution=HHmodel(v0,I_ext)
plotHH(['v'])
dt=0.01
T_end=60
t=np.arange(0,T_end+dt,dt)
On1=20
Off1=21
On2=23
Off2=24
I_ext=np.zeros(len(t))
I_ext[(t>On1)*(t<Off1)]=8
I_ext[(t>On2)*(t<Off2)]=500
v0=0
Solution=HHmodel(v0,I_ext)
plotHH(['v'])
plotHH(['n','m','h'])
###Output
_____no_output_____
###Markdown
The period during which even a strong second stimulus cannot cause a second action potential is called the *absolute refractory* period. The period during which only a strong second stimulus can cause a second action potential is called the *relative refractory* period. What happens during a hyperpolarization stimulus?
###Code
dt=0.01
T_end=50
t=np.arange(0,T_end+dt,dt)
On=20
Off=25
I_ext=np.zeros(len(t))
I_ext[(t>On)*(t<Off)]=-5
v0=0
Solution=HHmodel(v0,I_ext)
plotHH(['v'])
plotHH(['n','m','h'])
###Output
_____no_output_____
###Markdown
The effect of temperature

The rates of activation and inactivation increase with increasing temperature: $\displaystyle{Q_{10}=\frac{\text{rate at } T+10^\circ \mathrm{C}}{\text{rate at } T}}$ $\displaystyle{\alpha(v,T_2)=\alpha(v,T_1)Q_{10}^{\frac{T_2-T_1}{10}}}$ $\displaystyle{Q_{10}}$ is about 3 for the rates in the H-H model.
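A minimal sketch of how this scaling could be wrapped around the rate functions defined earlier in the notebook; the reference temperature of 6.3 °C is an assumption here (the classic squid-axon value):

```python
# Scale any rate function by Q10 relative to a reference temperature (assumed 6.3 °C).
Q10 = 3.0
T_ref = 6.3  # °C, assumed reference temperature

def with_temperature(rate_fn, T):
    """Return a rate function evaluated as if the temperature were T (°C)."""
    phi = Q10 ** ((T - T_ref) / 10.0)
    return lambda v: phi * rate_fn(v)

# example: the potassium activation rate is roughly 3x faster at 16.3 °C
alpha_n_warm = with_temperature(alpha_n, 16.3)
print(alpha_n(20.0), alpha_n_warm(20.0))
```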
###Code
A=0.05/4.0
B=-2.05/2.0
C=2.0
def F(x):
return A*x**4+B*x**2+C*x
def dF(x):
return 4*A*x**3+2*B*x+C
xx=np.arange(-12,12,0.01)
yy=F(xx)
fig = plt.figure()
plt.plot(xx,yy)
plt.xlabel('x')
plt.ylabel('Potential')
plt.title('Potential field: $F(x)=Ax^4+Bx^2+C$');
###Output
_____no_output_____
###Markdown
$\displaystyle{\frac{dx}{dt}=V}$ $\displaystyle{\frac{dV}{dt}=-F'(x)}$
###Code
%matplotlib inline
import time
from IPython import display
dt=0.01
T_end=50
t=np.arange(0,T_end+dt,dt)
X0=[(-1.-np.sqrt(1.+C/A))/2.,1.,(-1.+np.sqrt(1.+C/A))/2.]
x=np.zeros(len(t))+X0[1]
V=np.zeros(len(t))
dX=0
fig = plt.figure()
Ax = fig.add_subplot(111)
plt.xlabel('x')
plt.ylabel('Potential')
for i in range(0,len(t)-1):
x[i+1]=x[i]+dt*(V[i]+dX)
V[i+1]=V[i]+dt*(-dF(x[i]))
if t[i]>10 and t[i]<10+0.05:
dX=0.1
else:
dX=0
if i%20==0:
Ax.cla()
plt.title(round(t[i],2))
plt.plot(xx,yy)
plt.plot(x[i],F(x[i]),'r.',markersize=15)
display.display(plt.gcf())
display.clear_output(wait=True)
time.sleep(0.001)
###Output
_____no_output_____
###Markdown
Linear stability analysis: $\displaystyle{\frac{d}{dt}\begin{pmatrix} x \\V\end{pmatrix}=\begin{pmatrix} V \\-F'(x)\end{pmatrix}\Rightarrow \frac{d}{dt}\boldsymbol{X}=f(\boldsymbol{X})}$ $\displaystyle{J=\begin{pmatrix} \partial f_1/\partial x_1 & \partial f_1/\partial x_2\\ \partial f_2/\partial x_1 & \partial f_2/\partial x_2\end{pmatrix}= \begin{pmatrix}0 & 1\\-F''(x) & 0\end{pmatrix}}$ $\displaystyle{|J-\lambda I|=0\Rightarrow \lambda^2=-F''(x_0)}$ $\displaystyle{F''(x_0)>0: \text{ Stable (a minimum of the potential)}}$ $\displaystyle{F''(x_0)<0: \text{ Unstable (a maximum of the potential)}}$ $\displaystyle{x_0 \text{ are the fixed points where } \frac{dx}{dt}=0 \text{ and } \frac{dV}{dt}=0 }$ We can use the same concept for the H-H model: $\displaystyle{\frac{d}{dt}\begin{pmatrix} n \\m\\h\\v\end{pmatrix}=\begin{pmatrix} \alpha_n(1-n)-\beta_nn \\ \alpha_m(1-m)-\beta_mm \\ \alpha_h(1-h)-\beta_hh \\ [-\bar{g}_{K}n^4(v-E_{K})-\bar{g}_{Na}m^3h(v-E_{Na})-\bar{g}_{L}(v-E_{L})+I_{ext}]/C_m \end{pmatrix}}$ The analysis is more complicated yet still possible. However, let's focus on the fast dynamics of $m$ and $v$: ${\displaystyle\frac{dm}{dt}=\alpha_m(1-m)-\beta_mm}$ ${\displaystyle\frac{dv}{dt}=[-\bar{g}_{K}n_{\infty}^4(v-E_{K})-\bar{g}_{Na}m^3h_{\infty}(v-E_{Na})-\bar{g}_{L}(v-E_{L})+I_{ext}]/C_m}$ The lines where ${\displaystyle\frac{dm}{dt}=0}$ and ${\displaystyle\frac{dv}{dt}=0}$ are called nullclines.
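Before the nullcline code below, here is a quick numerical check of the particle-in-a-well classification (a sketch; it reuses the constants `A`, `B`, and `C` defined with the potential earlier):

```python
# Find the fixed points F'(x0)=0 and classify them via the sign of F''(x0).
import numpy as np

def ddF(x):  # F''(x)
    return 12*A*x**2 + 2*B

fixed_points = np.roots([4*A, 0.0, 2*B, C])   # roots of F'(x) = 4Ax^3 + 2Bx + C
for x0 in sorted(np.real(fixed_points)):      # all three roots are real for this potential
    stability = "stable" if ddF(x0) > 0 else "unstable"
    print(f"x0 = {x0:7.2f}   F''(x0) = {ddF(x0):6.2f}   -> {stability}")
```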
###Code
n0=n_inf(0)
h0=h_inf(0)
def v_inf(m):
E_K=-12
g_K=36
E_Na=115
g_Na=120
E_L=10.6
g_L=0.3
return (g_Na*m**3*h0*E_Na+g_K*n0**4*E_K+g_L*E_L)/(g_Na*m**3*h0+g_K*n0**4+g_L)
###Output
_____no_output_____
###Markdown
Phase plane analysis:
###Code
fig = plt.figure(figsize=(8, 6))
vv=np.arange(-5,120,0.1)
y=m_inf(vv)
x=v_inf(y)
plt.plot(x,y,vv,y)
plt.xlabel('v')
plt.ylabel('m')
plt.legend(['$dv/dt=0$','$dm/dt=0$']);
def FastHHmodel(v0,m0,I_ext):
C_m=1
E_K=-12
g_K=36
E_Na=115
g_Na=120
E_L=10.6
g_L=0.3
v=np.zeros(len(t))+v0
n0=n_inf(0)
m=np.zeros(len(t))+m0
h0=h_inf(0)
for i in range(0,len(t)-1):
I_K =g_K *n0**4 *(v[i]-E_K)
I_Na=g_Na*m[i]**3*h0*(v[i]-E_Na)
I_L =g_L *(v[i]-E_L)
m[i+1]=m[i]+dt*(alpha_m(v[i])*(1-m[i])-beta_m(v[i])*m[i])
v[i+1]=v[i]+dt*(-I_K-I_Na-I_L+I_ext[i])/C_m
Solution={'v':v,'m':m}
return Solution
dt=0.01
T_end=4
t=np.arange(0,T_end+dt,dt)
On=20
Off=25
I_ext=np.zeros(len(t))
I_ext[(t>On)*(t<Off)]=0
Solution=FastHHmodel(-5,0.2,I_ext)
fig = plt.figure(figsize=(8, 6))
plt.plot(t,Solution['v'])
plt.xlabel('t')
plt.ylabel('v');
def Trace(c):
xx=Solution['v']
yy=Solution['m']
color=c+'.'
for i in range(0,len(t)-1):
if i%5==0:
plt.title(round(t[i],2))
plt.plot(xx[i],yy[i],color,markersize=10);
display.display(plt.gcf())
display.clear_output(wait=True)
fig = plt.figure(figsize=(8, 6))
vv=np.arange(-5,120,0.1)
y=m_inf(vv)
x=v_inf(y)
plt.plot(x,y,vv,y)
plt.xlabel('v')
plt.ylabel('m')
plt.legend(['$dv/dt=0$','$dm/dt=0$'])
Solution=FastHHmodel(-5,0.20,I_ext)
Trace('r')
Solution=FastHHmodel(-5,0.25,I_ext)
Trace('b')
Solution=FastHHmodel(5,0.05,I_ext)
Trace('y')
Solution=FastHHmodel(0,0.5,I_ext)
Trace('m')
Solution=FastHHmodel(-5,0,I_ext)
Trace('c')
Solution=FastHHmodel(25,0.05,I_ext)
Trace('g')
dt=0.01
T_end=20
t=np.arange(0,T_end+dt,dt)
On=0
Off=5
I_ext=np.zeros(len(t))
I_ext[(t>On)*(t<Off)]=5
v0=0
Solution=HHmodel(v0,I_ext)
fig = plt.figure(figsize=(8, 6))
plt.plot(x,y,vv,y)
plt.xlabel('v')
plt.ylabel('m')
plt.legend(['$dv/dt=0$','$dm/dt=0$']);
xx=Solution['v']
yy=Solution['m']
for i in range(0,len(t)-1):
if i%5==0:
plt.title(round(t[i],2))
plt.plot(xx[i],yy[i],'r.',markersize=10);
display.display(plt.gcf())
display.clear_output(wait=True)
###Output
_____no_output_____ |
demos/more/malware-hypergraph/Malware Hypergraph.ipynb | ###Markdown
Finding Correlations in a CSV of Malware Events via Hypergraph Views

To find patterns and outliers in CSVs and event data, Graphistry provides the hypergraph transform. As an example, this notebook examines different malware files reported to a security vendor. It reveals phenomena such as:
* The malware files cluster into several families
* The nodes central to a cluster reveal attributes specific to a strain of malware
* The nodes bordering a cluster reveal attributes that show up in a strain, but are unique to each instance in that strain
* Several families have attributes connecting them, suggesting they had the same authors

Load CSV
###Code
import pandas as pd
import graphistry as g
#graphistry.register(key='...')
df = pd.read_csv('barncat.1k.csv', encoding = "utf8")
print("# samples", len(df))
eval(df[:10]['value'].tolist()[0])
#avoid double counting
df3 = df[df['value'].str.contains("{")]
df3[:1]
#Unpack 'value' json
import json
df4 = pd.concat([df3.drop('value', axis=1), df3.value.apply(json.loads).apply(pd.Series)], axis=1)  # axis=1 joins the unpacked JSON fields back onto their rows
len(df4)
df4[:1]
###Output
_____no_output_____
###Markdown
Default Hypergraph Transform

The hypergraph transform creates:
* A node for every row
* A node for every unique value in a column (so the same value appearing in several columns yields multiple nodes)
* An edge connecting a row to its values

When multiple rows share similar values, they will cluster together. When a row has unique values, those will form a ring around only that node.
###Code
g.hypergraph(df4[:50])['graph'].plot()
###Output
('# links', 200)
('# event entities', 50)
('# attrib entities', 102)
###Markdown
Configured Hypergraph TransformWe clean up the visualization in a few ways:1. Categorize hash codes as in the same family. This simplifies coloring by the generated 'category' field. If columns share the same value, such as two columns using md5 values, this would also cause them to only create 1 node per hash, instead of per-column instance.2. Not show a lot of attributes as nodes, such as numbers and datesRunning `help(graphistry.hypergraph)` reveals more options.
###Code
g.hypergraph(
df4,
opts={
'CATEGORIES': {
'hash': ['sha1', 'sha256', 'md5'],
'section': [x for x in df4.columns if 'section_' in x]
},
'SKIP': ['event_id', 'InstallFlag', 'type', 'val', 'Date', 'date', 'Port', 'FTPPort', 'Origin', 'category', 'comment', 'to_ids']
})['graph'].plot()
###Output
('# links', 2350)
('# event entities', 204)
('# attrib entities', 1156)
|
notebooks/fairness-interpretability-dashboard-loan-allocation.ipynb | ###Markdown
Assess Fairness, Explore Interpretability, and Mitigate Fairness Issues This notebook demonstrates how to use [InterpretML](interpret.ml), [Fairlearn](fairlearn.org), and the Responsible AI Widget's Fairness and Interpretability dashboards to understand a model trained on the Census dataset. This dataset is a classification problem - given a range of data about 32,000 individuals, predict whether their annual income is above or below fifty thousand dollars per year.For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan.We will first train a fairness-unaware predictor, load its global and local explanations, and use the interpretability and fairness dashboards to demonstrate how this model leads to unfair decisions (under a specific notion of fairness called *demographic parity*). We then mitigate unfairness by applying the `GridSearch` algorithm from `Fairlearn` package. Install Required Packages
###Code
# %pip install --upgrade fairlearn
# %pip install --upgrade interpret-community
# %pip install --upgrade raiwidgets
###Output
_____no_output_____
###Markdown
After installing packages, you must close and reopen the notebook as well as restarting the kernel. Load and preprocess the data setFor simplicity, we import the data set from the `fairlearn` package, which contains the data in a cleaned format. We start by importing the various modules we're going to use:
###Code
from fairlearn.reductions import GridSearch
from fairlearn.reductions import DemographicParity, ErrorRate
from fairlearn.datasets import fetch_adult
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn import svm, neighbors, tree
from sklearn.compose import ColumnTransformer, make_column_selector
from sklearn.preprocessing import LabelEncoder,StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
import pandas as pd
import numpy as np
# SHAP Tabular Explainer
from interpret.ext.blackbox import KernelExplainer
from interpret.ext.blackbox import MimicExplainer
from interpret.ext.glassbox import LGBMExplainableModel
###Output
_____no_output_____
###Markdown
We can now load and inspect the data:
###Code
dataset = fetch_adult(as_frame=True)
X_raw, y = dataset['data'], dataset['target']
X_raw["race"].value_counts().to_dict()
###Output
_____no_output_____
###Markdown
We are going to treat the sex of each individual as a protected attribute (where 0 indicates female and 1 indicates male), and in this particular case we are going to separate this attribute out and drop it from the main data. We then perform some standard data preprocessing steps to convert the data into a format suitable for the ML algorithms.
###Code
sensitive_features = X_raw[['sex','race']]
le = LabelEncoder()
y = le.fit_transform(y)
###Output
_____no_output_____
###Markdown
Finally, we split the data into training and test sets:
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test, sensitive_features_train, sensitive_features_test = \
train_test_split(X_raw, y, sensitive_features,
test_size = 0.2, random_state=0, stratify=y)
# Work around indexing bug
X_train = X_train.reset_index(drop=True)
sensitive_features_train = sensitive_features_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
sensitive_features_test = sensitive_features_test.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Training a fairness-unaware predictorTo show the effect of `Fairlearn` we will first train a standard ML predictor that does not incorporate fairness. For speed of demonstration, we use a simple logistic regression estimator from `sklearn`:
###Code
numeric_transformer = Pipeline(
steps=[
("impute", SimpleImputer()),
("scaler", StandardScaler()),
]
)
categorical_transformer = Pipeline(
[
("impute", SimpleImputer(strategy="most_frequent")),
("ohe", OneHotEncoder(handle_unknown="ignore")),
]
)
preprocessor = ColumnTransformer(
transformers=[
("num", numeric_transformer, make_column_selector(dtype_exclude="category")),
("cat", categorical_transformer, make_column_selector(dtype_include="category")),
]
)
model = Pipeline(
steps=[
("preprocessor", preprocessor),
(
"classifier",
LogisticRegression(solver="liblinear", fit_intercept=True),
),
]
)
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Generate Model Explanations
###Code
# Using SHAP KernelExplainer
# clf.steps[-1][1] returns the trained classification model
explainer = MimicExplainer(model.steps[-1][1],
X_train,
LGBMExplainableModel,
features=X_raw.columns,
classes=['Rejected', 'Approved'],
transformations=preprocessor)
###Output
_____no_output_____
###Markdown
Generate global explanationsExplain overall model predictions (global explanation)
###Code
### Note we downsample the test data since visualization dashboard can't handle the full dataset
global_explanation = explainer.explain_global(X_test[:1000])
global_explanation.get_feature_importance_dict()
###Output
_____no_output_____
###Markdown
Generate local explanationsExplain local data points (individual instances)
###Code
# You can pass a specific data point or a group of data points to the explain_local function
# E.g., Explain the first data point in the test set
instance_num = 1
local_explanation = explainer.explain_local(X_test[:instance_num])
# Get the prediction for the first member of the test set and explain why model made that prediction
prediction_value = model.predict(X_test[:instance_num])[0]  # prediction for the same (first) data point explained above
sorted_local_importance_values = local_explanation.get_ranked_local_values()[prediction_value]
sorted_local_importance_names = local_explanation.get_ranked_local_names()[prediction_value]
print('local importance values: {}'.format(sorted_local_importance_values))
print('local importance names: {}'.format(sorted_local_importance_names))
###Output
_____no_output_____
###Markdown
Visualize model explanationsLoad the interpretability visualization dashboard
###Code
from raiwidgets import ExplanationDashboard
ExplanationDashboard(global_explanation, model, dataset=X_test[:1000], true_y=y_test[:1000])
###Output
_____no_output_____
###Markdown
We can load this predictor into the Fairness dashboard, and examine how it is unfair: Assess model fairness Load the fairness visualization dashboard
###Code
from raiwidgets import FairnessDashboard
y_pred = model.predict(X_test)
FairnessDashboard(sensitive_features=sensitive_features_test,
y_true=y_test,
y_pred=y_pred)
###Output
_____no_output_____
###Markdown
Looking at the disparity in accuracy, we see that males have an error rate about three times greater than the females. More interesting is the disparity in opportunity - males are offered loans at three times the rate of females.Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact. Mitigation with Fairlearn (GridSearch)The `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each.For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (are approved for a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). We are using this metric for the sake of simplicity; in general, the appropriate fairness metric will not be obvious.
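Stated as a formula, the demographic parity constraint used here requires (one common formulation; the disparity reported below is essentially the gap between the two sides):

$$P(\hat{Y}=1 \mid A=\text{female}) \;=\; P(\hat{Y}=1 \mid A=\text{male})$$

that is, the selection rate must not depend on the sensitive attribute $A$.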
###Code
# Fairlearn is not yet fully compatible with Pipelines, so we have to pass the estimator only
X_train_prep = preprocessor.transform(X_train).toarray()
X_test_prep = preprocessor.transform(X_test).toarray()
sweep = GridSearch(LogisticRegression(solver="liblinear", fit_intercept=True),
constraints=DemographicParity(),
grid_size=70)
###Output
_____no_output_____
###Markdown
Our algorithms provide `fit()` and `predict()` methods, so they behave in a similar manner to other ML packages in Python. We do however have to specify two extra arguments to `fit()` - the column of protected attribute labels, and also the number of predictors to generate in our sweep.After `fit()` completes, we extract the full set of predictors from the `GridSearch` object.
###Code
sweep.fit(X_train_prep, y_train,
sensitive_features=sensitive_features_train.sex)
predictors = sweep.predictors_
###Output
_____no_output_____
###Markdown
We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the sensitive feature). In general, one might not want to do this, since there may be other considerations beyond the strict optimization of error and disparity (of the given protected attribute).
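As a rough, self-contained sketch of that filtering idea (strict Pareto dominance on (accuracy, disparity) pairs, simplified relative to the exact criterion coded in the next cell):

```python
# Hypothetical helper, for intuition only: drop any model for which some other
# model achieves lower-or-equal disparity with at least as high accuracy.
def pareto_front(points):
    kept = []
    for acc, disp in points:
        dominated = any(a >= acc and d <= disp and (a, d) != (acc, disp)
                        for a, d in points)
        if not dominated:
            kept.append((acc, disp))
    return kept

print(pareto_front([(0.80, 0.20), (0.78, 0.10), (0.75, 0.15)]))
# -> [(0.8, 0.2), (0.78, 0.1)]: the (0.75, 0.15) model is dominated by (0.78, 0.10)
```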
###Code
accuracies, disparities = [], []
for predictor in predictors:
accuracy_metric_frame = MetricFrame(accuracy_score, y_train, predictor.predict(X_train_prep), sensitive_features=sensitive_features_train.sex)
selection_rate_metric_frame = MetricFrame(selection_rate, y_train, predictor.predict(X_train_prep), sensitive_features=sensitive_features_train.sex)
accuracies.append(accuracy_metric_frame.overall)
disparities.append(selection_rate_metric_frame.difference())
all_results = pd.DataFrame({"predictor": predictors, "accuracy": accuracies, "disparity": disparities})
all_models_dict = {"unmitigated": model.steps[-1][1]}
dominant_models_dict = {"unmitigated": model.steps[-1][1]}
base_name_format = "grid_{0}"
row_id = 0
for row in all_results.itertuples():
model_name = base_name_format.format(row_id)
all_models_dict[model_name] = row.predictor
accuracy_for_lower_or_eq_disparity = all_results["accuracy"][all_results["disparity"] <= row.disparity]
if row.accuracy >= accuracy_for_lower_or_eq_disparity.max():
dominant_models_dict[model_name] = row.predictor
row_id = row_id + 1
###Output
_____no_output_____
###Markdown
We can construct predictions for all the models, and also for the dominant models:
###Code
from raiwidgets import FairnessDashboard
dashboard_all = {}
for name, predictor in all_models_dict.items():
value = predictor.predict(X_test_prep)
dashboard_all[name] = value
dominant_all = {}
for name, predictor in dominant_models_dict.items():
dominant_all[name] = predictor.predict(X_test_prep)
FairnessDashboard(sensitive_features=sensitive_features_test,
y_true=y_test,
y_pred=dominant_all)
###Output
_____no_output_____
###Markdown
Assess Fairness, Explore Interpretability, and Mitigate Fairness Issues This notebook demonstrates how to use [InterpretML](interpret.ml), [Fairlearn](fairlearn.org), and the Responsible AI Widget's Fairness and Interpretability dashboards to understand a model trained on the Census dataset. This dataset is a classification problem - given a range of data about 32,000 individuals, predict whether their annual income is above or below fifty thousand dollars per year.For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan.We will first train a fairness-unaware predictor, load its global and local explanations, and use the interpretability and fairness dashboards to demonstrate how this model leads to unfair decisions (under a specific notion of fairness called *demographic parity*). We then mitigate unfairness by applying the `GridSearch` algorithm from `Fairlearn` package. Install Required Packages
###Code
%pip install --upgrade fairlearn
%pip install --upgrade interpret-community
%pip install --upgrade raiwidgets
###Output
_____no_output_____
###Markdown
After installing packages, you must close and reopen the notebook as well as restarting the kernel. Load and preprocess the data setFor simplicity, we import the data set from the `fairlearn` package, which contains the data in a cleaned format. We start by importing the various modules we're going to use:
###Code
from fairlearn.reductions import GridSearch
from fairlearn.reductions import DemographicParity, ErrorRate
from fairlearn.datasets import fetch_adult
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn import svm, neighbors, tree
from sklearn.compose import ColumnTransformer, make_column_selector
from sklearn.preprocessing import LabelEncoder,StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
import pandas as pd
import numpy as np
# SHAP Tabular Explainer
from interpret.ext.blackbox import KernelExplainer
from interpret.ext.blackbox import MimicExplainer
from interpret.ext.glassbox import LGBMExplainableModel
###Output
_____no_output_____
###Markdown
We can now load and inspect the data:
###Code
dataset = fetch_adult(as_frame=True)
X_raw, y = dataset['data'], dataset['target']
X_raw["race"].value_counts().to_dict()
###Output
_____no_output_____
###Markdown
We are going to treat the sex of each individual as a protected attribute (where 0 indicates female and 1 indicates male), and in this particular case we are going to separate this attribute out and drop it from the main data. We then perform some standard data preprocessing steps to convert the data into a format suitable for the ML algorithms.
###Code
sensitive_features = X_raw[['sex','race']]
le = LabelEncoder()
y = le.fit_transform(y)
###Output
_____no_output_____
###Markdown
Finally, we split the data into training and test sets:
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test, sensitive_features_train, sensitive_features_test = \
train_test_split(X_raw, y, sensitive_features,
test_size = 0.2, random_state=0, stratify=y)
# Work around indexing bug
X_train = X_train.reset_index(drop=True)
sensitive_features_train = sensitive_features_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
sensitive_features_test = sensitive_features_test.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Training a fairness-unaware predictorTo show the effect of `Fairlearn` we will first train a standard ML predictor that does not incorporate fairness. For speed of demonstration, we use a simple logistic regression estimator from `sklearn`:
###Code
numeric_transformer = Pipeline(
steps=[
("impute", SimpleImputer()),
("scaler", StandardScaler()),
]
)
categorical_transformer = Pipeline(
[
("impute", SimpleImputer(strategy="most_frequent")),
("ohe", OneHotEncoder(handle_unknown="ignore")),
]
)
preprocessor = ColumnTransformer(
transformers=[
("num", numeric_transformer, make_column_selector(dtype_exclude="category")),
("cat", categorical_transformer, make_column_selector(dtype_include="category")),
]
)
model = Pipeline(
steps=[
("preprocessor", preprocessor),
(
"classifier",
LogisticRegression(solver="liblinear", fit_intercept=True),
),
]
)
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Generate Model Explanations
###Code
# Using SHAP KernelExplainer
# clf.steps[-1][1] returns the trained classification model
explainer = MimicExplainer(model.steps[-1][1],
X_train,
LGBMExplainableModel,
features=X_raw.columns,
classes=['Rejected', 'Approved'],
transformations=preprocessor)
###Output
_____no_output_____
###Markdown
Generate global explanationsExplain overall model predictions (global explanation)
###Code
### Note we downsample the test data since visualization dashboard can't handle the full dataset
global_explanation = explainer.explain_global(X_test[:1000])
global_explanation.get_feature_importance_dict()
###Output
_____no_output_____
###Markdown
Generate local explanationsExplain local data points (individual instances)
###Code
# You can pass a specific data point or a group of data points to the explain_local function
# E.g., Explain the first data point in the test set
instance_num = 1
local_explanation = explainer.explain_local(X_test[:instance_num])
# Get the prediction for the first member of the test set and explain why model made that prediction
prediction_value = model.predict(X_test[:instance_num])[0]  # prediction for the same (first) data point explained above
sorted_local_importance_values = local_explanation.get_ranked_local_values()[prediction_value]
sorted_local_importance_names = local_explanation.get_ranked_local_names()[prediction_value]
print('local importance values: {}'.format(sorted_local_importance_values))
print('local importance names: {}'.format(sorted_local_importance_names))
###Output
_____no_output_____
###Markdown
Visualize model explanationsLoad the interpretability visualization dashboard
###Code
from raiwidgets import ExplanationDashboard
ExplanationDashboard(global_explanation, model, dataset=X_test[:1000], true_y=y_test[:1000])
###Output
_____no_output_____
###Markdown
We can load this predictor into the Fairness dashboard, and examine how it is unfair: Assess model fairness Load the fairness visualization dashboard
###Code
from raiwidgets import FairnessDashboard
y_pred = model.predict(X_test)
FairnessDashboard(sensitive_features=sensitive_features_test,
y_true=y_test,
y_pred=y_pred)
###Output
_____no_output_____
###Markdown
Looking at the disparity in accuracy, we see that males have an error rate about three times greater than the females. More interesting is the disparity in opportunity - males are offered loans at three times the rate of females.Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact. Mitigation with Fairlearn (GridSearch)The `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each.For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (are approved for a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). We are using this metric for the sake of simplicity; in general, the appropriate fairness metric will not be obvious.
###Code
# Fairlearn is not yet fully compatible with Pipelines, so we have to pass the estimator only
X_train_prep = preprocessor.transform(X_train).toarray()
X_test_prep = preprocessor.transform(X_test).toarray()
sweep = GridSearch(LogisticRegression(solver="liblinear", fit_intercept=True),
constraints=DemographicParity(),
grid_size=70)
###Output
_____no_output_____
###Markdown
Our algorithms provide `fit()` and `predict()` methods, so they behave in a similar manner to other ML packages in Python. We do however have to specify two extra arguments to `fit()` - the column of protected attribute labels, and also the number of predictors to generate in our sweep.After `fit()` completes, we extract the full set of predictors from the `GridSearch` object.
###Code
sweep.fit(X_train_prep, y_train,
sensitive_features=sensitive_features_train.sex)
predictors = sweep.predictors_
###Output
_____no_output_____
###Markdown
We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the sensitive feature). In general, one might not want to do this, since there may be other considerations beyond the strict optimization of error and disparity (of the given protected attribute).
###Code
accuracies, disparities = [], []
for predictor in predictors:
accuracy_metric_frame = MetricFrame(accuracy_score, y_train, predictor.predict(X_train_prep), sensitive_features=sensitive_features_train.sex)
selection_rate_metric_frame = MetricFrame(selection_rate, y_train, predictor.predict(X_train_prep), sensitive_features=sensitive_features_train.sex)
accuracies.append(accuracy_metric_frame.overall)
disparities.append(selection_rate_metric_frame.difference())
all_results = pd.DataFrame( {"predictor": predictors, "accuracy": accuracies, "disparity": disparities})
all_models_dict = {"unmitigated": model.steps[-1][1]}
dominant_models_dict = {"unmitigated": model.steps[-1][1]}
base_name_format = "grid_{0}"
row_id = 0
for row in all_results.itertuples():
model_name = base_name_format.format(row_id)
all_models_dict[model_name] = row.predictor
accuracy_for_lower_or_eq_disparity = all_results["accuracy"][all_results["disparity"] <= row.disparity]
if row.accuracy >= accuracy_for_lower_or_eq_disparity.max():
dominant_models_dict[model_name] = row.predictor
row_id = row_id + 1
###Output
_____no_output_____
###Markdown
We can construct predictions for all the models, and also for the dominant models:
###Code
from raiwidgets import FairnessDashboard
dashboard_all = {}
for name, predictor in all_models_dict.items():
value = predictor.predict(X_test_prep)
dashboard_all[name] = value
dominant_all = {}
for name, predictor in dominant_models_dict.items():
dominant_all[name] = predictor.predict(X_test_prep)
FairnessDashboard(sensitive_features=sensitive_features_test,
y_true=y_test,
y_pred=dominant_all)
###Output
_____no_output_____
###Markdown
Assess Fairness, Explore Interpretability, and Mitigate Fairness Issues This notebook demonstrates how to use [InterpretML](interpret.ml), [Fairlearn](fairlearn.org), and the Responsible AI Widget's Fairness and Interpretability dashboards to understand a model trained on the Census dataset. This dataset is a classification problem - given a range of data about 32,000 individuals, predict whether their annual income is above or below fifty thousand dollars per year.For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan.We will first train a fairness-unaware predictor, load its global and local explanations, and use the interpretability and fairness dashboards to demonstrate how this model leads to unfair decisions (under a specific notion of fairness called *demographic parity*). We then mitigate unfairness by applying the `GridSearch` algorithm from `Fairlearn` package. Install Required Packages
###Code
%pip install --upgrade fairlearn
%pip install --upgrade interpret-community
%pip install --upgrade raiwidgets
###Output
_____no_output_____
###Markdown
After installing packages, you must close and reopen the notebook as well as restarting the kernel. Load and preprocess the data setFor simplicity, we import the data set from the `fairlearn` package, which contains the data in a cleaned format. We start by importing the various modules we're going to use:
###Code
from fairlearn.reductions import GridSearch
from fairlearn.reductions import DemographicParity, ErrorRate
from fairlearn.datasets import fetch_adult
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn import svm, neighbors, tree
from sklearn.compose import ColumnTransformer, make_column_selector
from sklearn.preprocessing import LabelEncoder,StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
import pandas as pd
import numpy as np
# SHAP Tabular Explainer
from interpret.ext.blackbox import KernelExplainer
from interpret.ext.blackbox import MimicExplainer
from interpret.ext.glassbox import LGBMExplainableModel
###Output
_____no_output_____
###Markdown
We can now load and inspect the data:
###Code
dataset = fetch_adult(as_frame=True)
X_raw, y = dataset['data'], dataset['target']
X_raw["race"].value_counts().to_dict()
###Output
_____no_output_____
###Markdown
We are going to treat the sex of each individual as a protected attribute (where 0 indicates female and 1 indicates male), and in this particular case we are going to separate this attribute out and drop it from the main data. We then perform some standard data preprocessing steps to convert the data into a format suitable for the ML algorithms.
###Code
sensitive_features = X_raw[['sex','race']]
le = LabelEncoder()
y = le.fit_transform(y)
###Output
_____no_output_____
###Markdown
Finally, we split the data into training and test sets:
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test, sensitive_features_train, sensitive_features_test = \
train_test_split(X_raw, y, sensitive_features,
test_size = 0.2, random_state=0, stratify=y)
# Work around indexing bug
X_train = X_train.reset_index(drop=True)
sensitive_features_train = sensitive_features_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
sensitive_features_test = sensitive_features_test.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Training a fairness-unaware predictorTo show the effect of `Fairlearn` we will first train a standard ML predictor that does not incorporate fairness. For speed of demonstration, we use a simple logistic regression estimator from `sklearn`:
###Code
numeric_transformer = Pipeline(
steps=[
("impute", SimpleImputer()),
("scaler", StandardScaler()),
]
)
categorical_transformer = Pipeline(
[
("impute", SimpleImputer(strategy="most_frequent")),
("ohe", OneHotEncoder(handle_unknown="ignore")),
]
)
preprocessor = ColumnTransformer(
transformers=[
("num", numeric_transformer, make_column_selector(dtype_exclude="category")),
("cat", categorical_transformer, make_column_selector(dtype_include="category")),
]
)
model = Pipeline(
steps=[
("preprocessor", preprocessor),
(
"classifier",
LogisticRegression(solver="liblinear", fit_intercept=True),
),
]
)
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Generate Model Explanations
###Code
# Using SHAP KernelExplainer
# clf.steps[-1][1] returns the trained classification model
explainer = MimicExplainer(model.steps[-1][1],
X_train,
LGBMExplainableModel,
features=X_raw.columns,
classes=['Rejected', 'Approved'],
transformations=preprocessor)
###Output
_____no_output_____
###Markdown
Generate global explanationsExplain overall model predictions (global explanation)
###Code
### Note we downsample the test data since visualization dashboard can't handle the full dataset
global_explanation = explainer.explain_global(X_test[:1000])
global_explanation.get_feature_importance_dict()
###Output
_____no_output_____
###Markdown
Generate local explanationsExplain local data points (individual instances)
###Code
# You can pass a specific data point or a group of data points to the explain_local function
# E.g., Explain the first data point in the test set
instance_num = 1
local_explanation = explainer.explain_local(X_test[:instance_num])
# Get the prediction for the first member of the test set and explain why model made that prediction
prediction_value = model.predict(X_test[:instance_num])[0]  # prediction for the same (first) data point explained above
sorted_local_importance_values = local_explanation.get_ranked_local_values()[prediction_value]
sorted_local_importance_names = local_explanation.get_ranked_local_names()[prediction_value]
print('local importance values: {}'.format(sorted_local_importance_values))
print('local importance names: {}'.format(sorted_local_importance_names))
###Output
_____no_output_____
###Markdown
Visualize model explanationsLoad the interpretability visualization dashboard
###Code
from raiwidgets import ExplanationDashboard
ExplanationDashboard(global_explanation, model, dataset=X_test[:1000], true_y=y_test[:1000])
###Output
_____no_output_____
###Markdown
We can load this predictor into the Fairness dashboard, and examine how it is unfair: Assess model fairness Load the fairness visualization dashboard
###Code
from raiwidgets import FairnessDashboard
y_pred = model.predict(X_test)
FairnessDashboard(sensitive_features=sensitive_features_test,
y_true=y_test,
y_pred=y_pred)
###Output
_____no_output_____
###Markdown
Looking at the disparity in accuracy, we see that males have an error rate about three times greater than the females. More interesting is the disparity in opportunity - males are offered loans at three times the rate of females.Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact. Mitigation with Fairlearn (GridSearch)The `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each.For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (are approved for a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). We are using this metric for the sake of simplicity; in general, the appropriate fairness metric will not be obvious.
###Code
# Fairlearn is not yet fully compatible with Pipelines, so we have to pass the estimator only
X_train_prep = preprocessor.transform(X_train).toarray()
X_test_prep = preprocessor.transform(X_test).toarray()
sweep = GridSearch(LogisticRegression(solver="liblinear", fit_intercept=True),
constraints=DemographicParity(),
grid_size=70)
###Output
_____no_output_____
###Markdown
Our algorithms provide `fit()` and `predict()` methods, so they behave in a similar manner to other ML packages in Python. We do however have to specify two extra arguments to `fit()` - the column of protected attribute labels, and also the number of predictors to generate in our sweep.After `fit()` completes, we extract the full set of predictors from the `GridSearch` object.
###Code
sweep.fit(X_train_prep, y_train,
sensitive_features=sensitive_features_train.sex)
predictors = sweep.predictors_
###Output
_____no_output_____
###Markdown
We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the sensitive feature). In general, one might not want to do this, since there may be other considerations beyond the strict optimization of error and disparity (of the given protected attribute).
###Code
accuracies, disparities = [], []
for predictor in predictors:
accuracy_metric_frame = MetricFrame(accuracy_score, y_train, predictor.predict(X_train_prep), sensitive_features=sensitive_features_train.sex)
selection_rate_metric_frame = MetricFrame(selection_rate, y_train, predictor.predict(X_train_prep), sensitive_features=sensitive_features_train.sex)
accuracies.append(accuracy_metric_frame.overall)
disparities.append(selection_rate_metric_frame.difference())
all_results = pd.DataFrame({"predictor": predictors, "accuracy": accuracies, "disparity": disparities})
all_models_dict = {"unmitigated": model.steps[-1][1]}
dominant_models_dict = {"unmitigated": model.steps[-1][1]}
base_name_format = "grid_{0}"
row_id = 0
for row in all_results.itertuples():
model_name = base_name_format.format(row_id)
all_models_dict[model_name] = row.predictor
accuracy_for_lower_or_eq_disparity = all_results["accuracy"][all_results["disparity"] <= row.disparity]
if row.accuracy >= accuracy_for_lower_or_eq_disparity.max():
dominant_models_dict[model_name] = row.predictor
row_id = row_id + 1
###Output
_____no_output_____
###Markdown
We can construct predictions for all the models, and also for the dominant models:
###Code
from raiwidgets import FairnessDashboard
dashboard_all = {}
for name, predictor in all_models_dict.items():
value = predictor.predict(X_test_prep)
dashboard_all[name] = value
dominant_all = {}
for name, predictor in dominant_models_dict.items():
dominant_all[name] = predictor.predict(X_test_prep)
FairnessDashboard(sensitive_features=sensitive_features_test,
y_true=y_test,
y_pred=dominant_all)
###Output
_____no_output_____ |
python/plot.ipynb | ###Markdown
Read latency
###Code
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
file_names = []
for file_name in os.listdir('./eclogs/data/opt-6files'):
if 'read' in file_name:
file_names.append(file_name)
file_names
read_latency_row = pd.read_csv('eclogs/data/opt-6files/' + file_names[11], names = ['file', 'block', 'ms'], sep=' ')
for index in read_latency_row.index:
if read_latency_row['ms'][index] > 1000:
read_latency_row = read_latency_row.drop(index)
read_latency_row = read_latency_row.reset_index(drop=True)
read_latency_blk = read_latency_row.groupby(['file', 'block']).mean().unstack()
read_latency_row
# read_latency_node_mean = read_latency_row.drop(columns=['block']).groupby(['file', 'node']).mean().unstack()
# read_latency_node_mean
block_location = pd.read_csv('eclogs/workersToWrite.txt',names = range(read_latency_blk.shape[-1]), sep = ' ')
read_latency_row['node'] = [block_location.at[read_latency_row.at[i, 'file'], read_latency_row.at[i, 'block']] for i in range(read_latency_row.shape[0])]
# read_latency_node = read_latency_row.drop('block', axis = 1).groupby(['file', 'node']).mean().unstack()
read_latency_node = pd.DataFrame(dict(list(read_latency_row.groupby('node')['ms'])))
read_latency_node_dic = dict()
for column in read_latency_node.columns:
read_latency_node_dic[column] = read_latency_node[column].dropna().values
maxlen = 0
for key in read_latency_node_dic.keys():
if key == 0:
continue
if len(read_latency_node_dic[key]) > maxlen:
maxlen = len(read_latency_node_dic[key])
for key in read_latency_node_dic.keys():
idx_to_drop = []
if key == 0:
continue
for i in range(len(read_latency_node_dic[key])):
if read_latency_node_dic[key][i] >= 1000:
idx_to_drop.append(i)
read_latency_node_dic[key] = np.delete(read_latency_node_dic[key], idx_to_drop)
# read_latency_node
for key in read_latency_node_dic.keys():
line = ''
if key == 0:
continue
for i in range(maxlen):
line += str(read_latency_node_dic[key][i%len(read_latency_node_dic[key])]) + ' '
print(line + ';')
# read_latency_node_dic
###Output
206.0 271.0 183.0 265.0 185.0 185.0 180.0 188.0 183.0 183.0 321.0 318.0 185.0 188.0 194.0 282.0 250.0 182.0 179.0 185.0 197.0 185.0 206.0 390.0 196.0 272.0 261.0 184.0 187.0 430.0 286.0 369.0 400.0 591.0 243.0 239.0 184.0 483.0 598.0 594.0 612.0 634.0 697.0 762.0 654.0 422.0 423.0 411.0 187.0 221.0 222.0 515.0 413.0 376.0 276.0 282.0 428.0 334.0 348.0 331.0 371.0 681.0 590.0 239.0 242.0 190.0 192.0 183.0 184.0 185.0 396.0 399.0 354.0 337.0 341.0 271.0 266.0 182.0 317.0 369.0 271.0 558.0 516.0 525.0 323.0 181.0 354.0 365.0 506.0 322.0 330.0 366.0 370.0 368.0 197.0 264.0 353.0 422.0 357.0 245.0 187.0 182.0 181.0 186.0 183.0 183.0 185.0 244.0 214.0 268.0 269.0 186.0 181.0 185.0 183.0 187.0 186.0 187.0 206.0 271.0 183.0 265.0 185.0 185.0 180.0 188.0 183.0 183.0 321.0 318.0 185.0 188.0 194.0 282.0 250.0 182.0 179.0 185.0 197.0 185.0 206.0 390.0 196.0 272.0 261.0 184.0 187.0 430.0 286.0 369.0 400.0 591.0 243.0 239.0 184.0 483.0 598.0 594.0 612.0 634.0 697.0 762.0 654.0 422.0 423.0 411.0 187.0 221.0 222.0 515.0 413.0 376.0 276.0 282.0 428.0 334.0 348.0 331.0 371.0 681.0 590.0 239.0 242.0 190.0 ;
214.0 181.0 179.0 183.0 191.0 184.0 192.0 185.0 225.0 181.0 181.0 183.0 189.0 200.0 232.0 214.0 285.0 265.0 227.0 227.0 357.0 351.0 181.0 224.0 249.0 213.0 193.0 312.0 307.0 448.0 447.0 194.0 188.0 185.0 214.0 223.0 290.0 284.0 180.0 182.0 216.0 492.0 260.0 249.0 183.0 192.0 184.0 286.0 343.0 347.0 189.0 182.0 200.0 184.0 181.0 350.0 350.0 181.0 182.0 236.0 544.0 502.0 200.0 339.0 196.0 188.0 193.0 183.0 181.0 361.0 372.0 259.0 258.0 185.0 181.0 214.0 181.0 179.0 183.0 191.0 184.0 192.0 185.0 225.0 181.0 181.0 183.0 189.0 200.0 232.0 214.0 285.0 265.0 227.0 227.0 357.0 351.0 181.0 224.0 249.0 213.0 193.0 312.0 307.0 448.0 447.0 194.0 188.0 185.0 214.0 223.0 290.0 284.0 180.0 182.0 216.0 492.0 260.0 249.0 183.0 192.0 184.0 286.0 343.0 347.0 189.0 182.0 200.0 184.0 181.0 350.0 350.0 181.0 182.0 236.0 544.0 502.0 200.0 339.0 196.0 188.0 193.0 183.0 181.0 361.0 372.0 259.0 258.0 185.0 181.0 214.0 181.0 179.0 183.0 191.0 184.0 192.0 185.0 225.0 181.0 181.0 183.0 189.0 200.0 232.0 214.0 285.0 265.0 227.0 227.0 357.0 351.0 181.0 224.0 249.0 213.0 193.0 312.0 307.0 448.0 447.0 194.0 188.0 185.0 ;
415.0 240.0 182.0 180.0 228.0 189.0 184.0 184.0 185.0 186.0 179.0 284.0 322.0 366.0 372.0 199.0 327.0 293.0 187.0 186.0 180.0 320.0 308.0 238.0 181.0 311.0 308.0 229.0 223.0 247.0 249.0 181.0 185.0 184.0 179.0 463.0 399.0 482.0 180.0 375.0 445.0 431.0 634.0 578.0 582.0 614.0 187.0 258.0 257.0 287.0 250.0 182.0 183.0 178.0 270.0 434.0 428.0 281.0 473.0 466.0 476.0 399.0 209.0 506.0 460.0 392.0 280.0 251.0 333.0 293.0 327.0 227.0 184.0 333.0 342.0 313.0 577.0 562.0 604.0 647.0 677.0 660.0 664.0 706.0 821.0 835.0 816.0 788.0 545.0 608.0 635.0 630.0 720.0 755.0 759.0 942.0 915.0 677.0 719.0 961.0 848.0 795.0 813.0 733.0 833.0 780.0 794.0 823.0 775.0 706.0 646.0 811.0 913.0 943.0 938.0 992.0 879.0 687.0 731.0 780.0 843.0 782.0 725.0 755.0 759.0 712.0 750.0 698.0 680.0 649.0 774.0 467.0 479.0 529.0 548.0 610.0 610.0 651.0 743.0 582.0 709.0 883.0 877.0 822.0 560.0 382.0 364.0 210.0 180.0 279.0 184.0 196.0 377.0 211.0 527.0 567.0 498.0 605.0 642.0 471.0 196.0 183.0 421.0 303.0 184.0 182.0 182.0 194.0 187.0 182.0 183.0 226.0 181.0 183.0 179.0 184.0 182.0 184.0 249.0 182.0 184.0 186.0 184.0 190.0 ;
198.0 194.0 185.0 179.0 182.0 285.0 237.0 322.0 179.0 507.0 535.0 333.0 229.0 244.0 185.0 184.0 450.0 305.0 201.0 198.0 183.0 329.0 187.0 292.0 293.0 181.0 293.0 466.0 184.0 183.0 182.0 187.0 179.0 458.0 495.0 228.0 178.0 179.0 178.0 178.0 279.0 315.0 405.0 179.0 184.0 365.0 285.0 271.0 212.0 354.0 182.0 182.0 186.0 247.0 243.0 187.0 184.0 179.0 186.0 176.0 177.0 182.0 178.0 198.0 194.0 185.0 179.0 182.0 285.0 237.0 322.0 179.0 507.0 535.0 333.0 229.0 244.0 185.0 184.0 450.0 305.0 201.0 198.0 183.0 329.0 187.0 292.0 293.0 181.0 293.0 466.0 184.0 183.0 182.0 187.0 179.0 458.0 495.0 228.0 178.0 179.0 178.0 178.0 279.0 315.0 405.0 179.0 184.0 365.0 285.0 271.0 212.0 354.0 182.0 182.0 186.0 247.0 243.0 187.0 184.0 179.0 186.0 176.0 177.0 182.0 178.0 198.0 194.0 185.0 179.0 182.0 285.0 237.0 322.0 179.0 507.0 535.0 333.0 229.0 244.0 185.0 184.0 450.0 305.0 201.0 198.0 183.0 329.0 187.0 292.0 293.0 181.0 293.0 466.0 184.0 183.0 182.0 187.0 179.0 458.0 495.0 228.0 178.0 179.0 178.0 178.0 279.0 315.0 405.0 179.0 184.0 365.0 285.0 271.0 212.0 354.0 182.0 182.0 186.0 247.0 243.0 187.0 184.0 179.0 ;
###Markdown
Record the mean block-transfer latency for each strategy and file size
###Code
# mean_time = read_latency_node.mean().mean()
mean_time = read_latency_row['ms'].mean()
with open('eclogs/MeanLatencyOfNodes.txt', 'a') as f:
f.write(str(mean_time) + ' ')
mean_time
meantime_filesize = pd.read_csv('eclogs/MeanLatencyOfNodes.txt',header = None, sep = ' ').dropna(axis=1)
filesize = [25, 50, 75, 100, 125, 150, 175, 200, 225, 250]
meantime_filesize.loc[0]
fig, ax = plt.subplots(1, figsize = (10, 10))
ax.plot(filesize, meantime_filesize.loc[0])
ax.plot(filesize, meantime_filesize.loc[1])
ax.plot(filesize, meantime_filesize.loc[2])
ax.plot(filesize, meantime_filesize.loc[3])
ax.set_xlabel('filesize', size = 20)
ax.set_ylabel('ms', size = 20)
ax.legend(['opt', 'random', 'round', 'sprout'])
ax.set_title('mean block fetching latency', size = 20)
fig, ax = plt.subplots(1, figsize = (10, 10))
ax.boxplot(read_latency_node_dic.values())
ax.set_title('mean latency on per node')
ax.set_xticklabels(["node0", "node1", "node2", "node3", "node4",])
###Output
_____no_output_____
###Markdown
Node service-capacity statistics
###Code
size = [100, 200, 300, 400, 500, 600]
time = [[]]*4
time[0] = [105, 198, 282, 372, 464, 543]
time[1] = [112, 193, 279, 361, 474, 543]
time[2] = [68, 117, 173, 221, 271, 324]
time[3] = [109, 195, 281, 354, 434, 535]
plt.scatter(size, time[3])
[[]]*2
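# Rough service-capacity estimate (a sketch reusing only the size/time lists
# above): the points look close to linear, so fit time ~ slope*size + intercept
# per node with a least-squares line and read off the per-unit service time.
for node, t_ms in enumerate(time):
    slope, intercept = np.polyfit(size, t_ms, 1)
    print('node %d: ~%.3f ms per unit size (intercept %.1f ms)' % (node, slope, intercept))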
###Output
_____no_output_____
###Markdown
Decoding latency
###Code
decode_latency_row = pd.read_csv('eclogs/decodingLatency.txt', names = ['num', 'ms'], sep = ' ')
###Output
_____no_output_____
###Markdown
Group by number of blocks
###Code
def yforx(x, y):
    # Group the y values by their x value into {x: [y, ...]}, padding shorter
    # lists with None so the dict can be loaded straight into a DataFrame.
dic = dict.fromkeys(set(x))
length = list()
for key in dic.keys():
dic[key] = list()
for idx in range(len(x)):
dic[x[idx]].append(y[idx])
for key in dic.keys():
length.append(len(dic[key]))
for key in dic.keys():
dic[key].extend([None] * (max(length) - len(dic[key])))
return dic
decode_latency = pd.DataFrame(yforx(decode_latency_row['num'], decode_latency_row['ms'])).astype('float64')
# print(decode_latency[1].dropna().values)
decode_latency_row
fig1, ax1 = plt.subplots(1, figsize = (15, 10))
ax1.scatter(decode_latency_row['num'], decode_latency_row['ms'])
ax1.set_title('decoding latency for number of blocks')
ax1.scatter(decode_latency_row['num'].unique(), decode_latency_row.groupby(['num']).mean(), color = 'r', s = 100)
plt.xlabel('decoding number')
plt.ylabel('decoding latency')
plt.savefig(fname = 'decoding latency.png', format = 'png')
from scipy.stats import chi2, f
chi2y = chi2.pdf(np.arange(0.001,50, 0.1), 10, loc = 5.3, scale = 0.3) * 0.11
fy =f.pdf(np.arange(0.001,50, 0.1), 4, 2)
fig, axes = plt.subplots(nrows = decode_latency.shape[1],ncols = 2, figsize = (20, 20), sharex = True, sharey = False)
if(decode_latency.shape[1] == 1):
    # Only one block count present: 'i' from the loop below is undefined here,
    # so index the single column directly.
    col = decode_latency.columns[0]
    axes[0].scatter(range(len(decode_latency[col])), decode_latency[col].sort_values())
    axes[0].set_title('decoding latency rank for '+str(col)+' blocks')
    axes[1].set_title('decoding latency distribution for '+str(col)+' blocks')
    axes[1].hist(decode_latency[col].dropna().values, bins = 50, density = 1)
else:
for i in range(len(axes)):
axes[i, 0].scatter(range(len(decode_latency[i + 1])), decode_latency[i + 1].sort_values())
axes[i, 0].set_title('decoding latency rank for '+str(i+1)+' blocks')
axes[i, 1].set_title('decoding latency distribution for '+str(i+1)+' blocks')
axes[i, 1].hist(decode_latency[i + 1].dropna().values, bins = 50, density = 1)
plt.xlabel('decoding latency')
plt.ylabel('frequency')
# axes[i, 1].plot(chi2y)
plt.savefig(fname="解码时延2.png",format="png")
###Output
_____no_output_____
###Markdown
Compute the read strategy from the probabilities p_i
###Code
import pandas as pd
p = pd.read_csv('eclogs/p-sprout.txt',header = None, sep = '\s+')
# lmp = pd.read_csv('eclogs/lmp.txt',header = None, sep = '\s+')
p
k = 3
C = 7
N = 4
# p['cache'] = k - p.sum(axis = 1)
# data = []
# for i in range(len(lmp)):
# data.extend(lmp.values[i])
def quick_sort(lists):
    # Simple recursive quicksort returning the values in ascending order.
    # Note: it pops from its argument, so callers pass in a copy (list(...)).
if not lists:
return []
assert isinstance(lists, list)
if len(lists) == 1:
return lists
pivot = lists.pop()
llist, rlist = [], []
for x in lists:
if x>pivot:
rlist.append(x)
else:
llist.append(x)
return quick_sort(llist) + [pivot] + quick_sort(rlist)
def sort_topk(s, k):
    # Return the k largest values in ascending order (so [0] is the k-th largest).
    # return sorted(s)[:k]
    return quick_sort(s)[-k:]
# def determine_palcement():
# topk = sort_topk(data, C)
# kth = topk[0]
# for i in range(lmp.shape[1]):
# for j in range(lmp.shape[0]):
# if lmp[i][j] >= kth:
# # p[i][j] = 0
# continue
# # placement = (0.1 - p) < 0
# placement = (p > 0)
# print(p)
# return placement
p
import math
workersToWrite = list()
# placement = determine_palcement()
for index in p.index:
# vc = placement.loc[index].value_counts()
# new_row = [0] * int(N - sum(np.ceil(p.loc[index] - 0.01)))
new_row = [0] * int(N - sum(np.ceil(p.loc[index])))
tmp = []
cnt0 = (p.loc[index] == 0).astype(int).sum()
kth = sort_topk(list(p.loc[index]), k - cnt0)[0]
for i in range(p.shape[1]):
if (p.loc[index][i] >= kth) & (p.loc[index][i] != 0):
tmp.append(i+1)
new_row.extend(tmp)
tmp = []
for i in range(p.shape[1]):
if (p.loc[index][i] < kth) & (p.loc[index][i] != 0):
tmp.append(i+1)
new_row.extend(tmp)
workersToWrite.append(new_row)
pd.DataFrame(workersToWrite).to_csv('eclogs/workersToWrite.txt', index=False, header=False, sep = ' ')
workersToWrite
circle_times = 40
read_list = list()
while(len(read_list) < circle_times*len(p)):
rand = pd.DataFrame(np.random.rand(p.shape[0], p.shape[1]))
read_matrix = (rand - p) < 0
for index in read_matrix.index:
new_row = []
while(len(new_row) != k):
rand = pd.DataFrame(np.random.rand(p.shape[0], p.shape[1]))
read_matrix = (rand - p) < 0
vc = read_matrix.loc[index].value_counts()
new_row = list(np.arange(workersToWrite[index].count(0)))
if(len(new_row) > k):
new_row = new_row[:k]
tmp = []
for j in range(0, p.shape[1]):
if (read_matrix.loc[index][j] == True) and (j+1 in workersToWrite[index]):
tmp.append(workersToWrite[index].index(j+1))
new_row.extend(tmp)
print(new_row)
read_list.append(new_row)
pd.DataFrame(read_list).to_csv('eclogs/blocksToRead.txt', index=False, header=False, sep = ' ')
np.arange(workersToWrite[3].count(0))
# np.random.randint(0, len(t))
import time
time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
###Output
_____no_output_____
###Markdown
Matplotlib example **Run the cell below to import some packages and show a line plot.**
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 20, 100)
plt.plot(x, np.sin(x))
plt.show()
###Output
_____no_output_____
###Markdown
Univariate tests
###Code
# Generate data
rng = 20201124
np.random.seed(rng)
n = 200
norm1 = np.random.normal(loc=-4.0, scale=1.0, size=int(n/2))
norm2 = np.random.normal(loc=+4.0, scale=1.0, size=int(n/2))
data_uni = np.concatenate((norm1, norm2))
np.savetxt("../resources/csv/in/data_uni.csv", data_uni, fmt='%1.5f')
# Generate grid
grid_uni = np.arange(-10, +10, 0.1)
np.savetxt("../resources/csv/in/grid_uni.csv", grid_uni, fmt='%1.5f')
# True density of data
true_pdf = 0.5 * stats.norm.pdf(grid_uni, -4.0, 1.0) + \
0.5 * stats.norm.pdf(grid_uni, +4.0, 1.0)
# Iterations to plot the density of
iters = [0, 10, 100, 500, 898]
###Output
_____no_output_____
###Markdown
Fixed values hyperprior
###Code
# Run the executable
cmd = ["../build/run",
"Neal8", str(rng), "1", "6000", "5000",
"NNIG", "../resources/asciipb/nnig_ngg_prior.asciipb",
"DP", "../resources/asciipb/dp_gamma_prior.asciipb",
"",
"../resources/csv/in/data_uni.csv", "../resources/csv/in/grid_uni.csv",
"../resources/csv/out/uf_dens.csv", "../resources/csv/out/uf_mass.csv",
"../resources/csv/out/uf_nclu.csv", "../resources/csv/out/uf_clus.csv"
]
subprocess.run(cmd, capture_output=True)
fig = plt.figure(figsize=(21,7))
# Densities
matr = np.genfromtxt("../resources/csv/out/uf_dens.csv", delimiter=',')
ax1 = fig.add_subplot(131)
for it in iters:
ax1.plot(grid_uni, np.exp(matr[it, :]), linewidth=0.5)
ax1.plot(grid_uni, np.exp(np.mean(matr, axis=0)), linewidth=1.0,
linestyle='--', color="black")
ax1.plot(grid_uni, true_pdf, linewidth=1.0, color="red")
ax1.legend(iters + ["mean", "true"])
ax1.set_title("Univariate densities")
# Total masses
masses = np.genfromtxt("../resources/csv/out/uf_mass.csv", delimiter='\n')
ax2 = fig.add_subplot(132)
ax2.plot(masses)
ax2.set_title("Total masses over iterations")
# Number of clusters
num_clust = np.genfromtxt("../resources/csv/out/uf_nclu.csv", delimiter='\n')
ax3 = fig.add_subplot(133)
ax3.vlines(np.arange(len(num_clust)), num_clust - 0.3, num_clust + 0.3)
ax3.set_title("Number of clusters over iterations")
fig.show()
###Output
_____no_output_____
###Markdown
NGG hyperprior
###Code
# Run the executable
cmd = ["../build/run",
"Neal2", str(rng), "0", "7000", "5000",
"NNIG", "../resources/asciipb/nnig_ngg_prior.asciipb",
"DP", "../resources/asciipb/dp_gamma_prior.asciipb",
"",
"../resources/csv/in/data_uni.csv", "../resources/csv/in/grid_uni.csv",
"../resources/csv/out/un_dens.csv", "../resources/csv/out/un_mass.csv",
"../resources/csv/out/un_nclu.csv", "../resources/csv/out/un_clus.csv"
]
subprocess.run(cmd, capture_output=True)
matr.shape
fig = plt.figure(figsize=(21,7))
# Densities
matr = np.genfromtxt("../resources/csv/out/un_dens.csv", delimiter=',')
ax1 = fig.add_subplot(131)
for it in iters:
ax1.plot(grid_uni, np.exp(matr[it, :]), linewidth=0.5)
ax1.plot(grid_uni, np.exp(np.mean(matr, axis=0)), linewidth=1.0,
linestyle='--', color="black")
ax1.plot(grid_uni, true_pdf, linewidth=1.0, color="red")
ax1.legend(iters + ["mean", "true"])
ax1.set_title("Univariate densities")
# Total masses
masses = np.genfromtxt("../resources/csv/out/un_mass.csv", delimiter='\n')
ax2 = fig.add_subplot(132)
ax2.plot(masses)
ax2.set_title("Total masses over iterations")
# Number of clusters
num_clust = np.genfromtxt("../resources/csv/out/un_nclu.csv", delimiter='\n')
ax3 = fig.add_subplot(133)
ax3.plot(num_clust)
ax3.set_title("Number of clusters over iterations")
fig.show()
###Output
_____no_output_____
###Markdown
Multivariate tests
###Code
# Generate data
rng = 20201124
np.random.seed(rng)
n = 60
data_multi = np.zeros((n,2))
n2 = int(n/2)
data_multi[0:n2,0] = np.random.normal(loc=-3.0, scale=1.0, size=n2)
data_multi[0:n2,1] = np.random.normal(loc=-2.0, scale=1.0, size=n2)
data_multi[n2:n,0] = np.random.normal(loc=+3.0, scale=1.0, size=n2)
data_multi[n2:n,1] = np.random.normal(loc=+2.0, scale=1.0, size=n2)
np.savetxt("../resources/csv/in/data_multi.csv", data_multi, fmt='%1.5f')
# Generate grid
xx = np.arange(-7.0, +7.1, 0.5)
yy = np.arange(-6.0, +5.1, 0.5)
grid_multi = np.array(np.meshgrid(xx, yy)).T.reshape(-1, 2)
np.savetxt("../resources/csv/in/grid_multi.csv", grid_multi, fmt='%1.5f')
###Output
_____no_output_____
###Markdown
Fixed values hyperprior
###Code
# Run the executable
cmd = ["../build/run",
"N8", str(rng), "0", "1000", "100",
"NNW", "../resources/asciipb/nnw_fixed_prior.asciipb",
"DP", "../resources/asciipb/dp_gamma_prior.asciipb",
"",
"../resources/csv/in/data_multi.csv", "../resources/csv/in/grid_multi.csv",
"../resources/csv/out/mf_dens.csv", "../resources/csv/out/mf_mass.csv",
"../resources/csv/out/mf_nclu.csv", "../resources/csv/out/mf_clus.csv"
]
subprocess.run(cmd, capture_output=True)
fig = plt.figure(figsize=(21,7))
# Density
matr = np.genfromtxt("../resources/csv/out/mf_dens.csv", delimiter=',')
ax1 = fig.add_subplot(131, projection='3d')
ax1.scatter(grid_multi[:,0], grid_multi[:,1], np.exp(np.mean(matr, axis=1)))
ax1.set_title("Mean multivariate density")
# Total masses
masses = np.genfromtxt("../resources/csv/out/mf_mass.csv", delimiter='\n')
ax2 = fig.add_subplot(132)
ax2.plot(masses)
ax2.set_title("Total masses over iterations")
# Number of clusters
num_clust = np.genfromtxt("../resources/csv/out/mf_nclu.csv", delimiter='\n')
ax3 = fig.add_subplot(133)
ax3.plot(num_clust)
ax3.set_title("Number of clusters over iterations")
fig.show()
###Output
_____no_output_____
###Markdown
NGIW hyperprior
###Code
# Run the executable
cmd = ["../build/run",
"N8", str(rng), "0", "1000", "100",
"NNW", "../resources/asciipb/nnw_ngiw_prior.asciipb",
"DP", "../resources/asciipb/dp_gamma_prior.asciipb",
"",
"../resources/csv/in/data_multi.csv", "../resources/csv/in/grid_multi.csv",
"../resources/csv/out/mn_dens.csv", "../resources/csv/out/mn_mass.csv",
"../resources/csv/out/mn_nclu.csv", "../resources/csv/out/mn_clus.csv"
]
subprocess.run(cmd, capture_output=True)
fig = plt.figure(figsize=(21,7))
# Density
matr = np.genfromtxt("../resources/csv/out/mn_dens.csv", delimiter=',')
mean_dens = np.exp(np.mean(matr, axis=1)).reshape(-1, 1)
plot_data = pd.DataFrame(np.hstack([grid_multi, mean_dens]),
columns=["x", "y", "z"])
Z = plot_data.pivot_table(index='x', columns='y', values='z').T.values
X_unique = np.sort(plot_data.x.unique())
Y_unique = np.sort(plot_data.y.unique())
X, Y = np.meshgrid(X_unique, Y_unique)
ax1 = fig.add_subplot(131) #, projection='3d')
ax1.contour(X, Y, Z)
ax1.set_title("Mean multivariate density")
# Total masses
masses = np.genfromtxt("../resources/csv/out/mn_mass.csv", delimiter='\n')
ax2 = fig.add_subplot(132)
ax2.plot(masses)
ax2.set_title("Total masses over iterations")
# Number of clusters
num_clust = np.genfromtxt("../resources/csv/out/mn_nclu.csv", delimiter='\n')
ax3 = fig.add_subplot(133)
ax3.vlines(np.arange(len(num_clust)), num_clust - 0.3, num_clust + 0.3)
ax3.set_title("Number of clusters over iterations")
fig.show()
###Output
_____no_output_____
###Markdown
Univariate tests
###Code
# Generate data
rng = 20201124
np.random.seed(rng)
n = 200
norm1 = np.random.normal(loc=-4.0, scale=1.0, size=int(n/2))
norm2 = np.random.normal(loc=+4.0, scale=1.0, size=int(n/2))
data_uni = np.concatenate((norm1, norm2))
np.savetxt("../resources/csv/in/data_uni.csv", data_uni, fmt='%1.5f')
# Generate grid
grid_uni = np.arange(-10, +10, 0.1)
np.savetxt("../resources/csv/in/grid_uni.csv", grid_uni, fmt='%1.5f')
# True density of data
true_pdf = 0.5 * stats.norm.pdf(grid_uni, -4.0, 1.0) + \
0.5 * stats.norm.pdf(grid_uni, +4.0, 1.0)
# Iterations to plot the density of
iters = [0, 10, 100, 500, 898]
###Output
_____no_output_____
###Markdown
Fixed values hyperprior
###Code
# Run the executable
cmd = ["../build/run",
"N8", str(rng), "5", "1000", "100",
"NNIG", "../resources/asciipb/nnig_ngg_prior.asciipb",
"DP", "../resources/asciipb/dp_gamma_prior.asciipb",
"",
"../resources/csv/in/data_uni.csv", "../resources/csv/in/grid_uni.csv",
"../resources/csv/out/uf_dens.csv", "../resources/csv/out/uf_mass.csv",
"../resources/csv/out/uf_nclu.csv", "../resources/csv/out/uf_clus.csv"
]
subprocess.run(cmd, capture_output=True)
fig = plt.figure(figsize=(21,7))
# Densities
matr = np.genfromtxt("../resources/csv/out/uf_dens.csv", delimiter=',')
ax1 = fig.add_subplot(131)
for it in iters:
ax1.plot(grid_uni, np.exp(matr[:,it]), linewidth=0.5)
ax1.plot(grid_uni, np.exp(np.mean(matr, axis=1)), linewidth=1.0,
linestyle='--', color="black")
ax1.plot(grid_uni, true_pdf, linewidth=1.0, color="red")
ax1.legend(iters + ["mean", "true"])
ax1.set_title("Univariate densities")
# Total masses
masses = np.genfromtxt("../resources/csv/out/uf_mass.csv", delimiter='\n')
ax2 = fig.add_subplot(132)
ax2.plot(masses)
ax2.set_title("Total masses over iterations")
# Number of clusters
num_clust = np.genfromtxt("../resources/csv/out/uf_nclu.csv", delimiter='\n')
ax3 = fig.add_subplot(133)
ax3.vlines(np.arange(len(num_clust)), num_clust - 0.3, num_clust + 0.3)
ax3.set_title("Number of clusters over iterations")
fig.show()
###Output
_____no_output_____
###Markdown
NGG hyperprior
###Code
# Run the executable
cmd = ["../build/run",
"Neal2", str(rng), "0", "2000", "1000",
"NNIG", "../resources/asciipb/nnig_ngg_prior.asciipb",
"DP", "../resources/asciipb/dp_gamma_prior.asciipb",
"",
"../resources/csv/in/data_uni.csv", "../resources/csv/in/grid_uni.csv",
"../resources/csv/out/un_dens.csv", "../resources/csv/out/un_mass.csv",
"../resources/csv/out/un_nclu.csv", "../resources/csv/out/un_clus.csv"
]
subprocess.run(cmd, capture_output=True)
fig = plt.figure(figsize=(21,7))
# Densities
matr = np.genfromtxt("../resources/csv/out/un_dens.csv", delimiter=',')
ax1 = fig.add_subplot(131)
for it in iters:
ax1.plot(grid_uni, np.exp(matr[:,it]), linewidth=0.5)
ax1.plot(grid_uni, np.exp(np.mean(matr, axis=1)), linewidth=1.0,
linestyle='--', color="black")
ax1.plot(grid_uni, true_pdf, linewidth=1.0, color="red")
ax1.legend(iters + ["mean", "true"])
ax1.set_title("Univariate densities")
# Total masses
masses = np.genfromtxt("../resources/csv/out/un_mass.csv", delimiter='\n')
ax2 = fig.add_subplot(132)
ax2.plot(masses)
ax2.set_title("Total masses over iterations")
# Number of clusters
num_clust = np.genfromtxt("../resources/csv/out/un_nclu.csv", delimiter='\n')
ax3 = fig.add_subplot(133)
ax3.plot(num_clust)
ax3.set_title("Number of clusters over iterations")
fig.show()
###Output
_____no_output_____
###Markdown
Multivariate tests
###Code
# Generate data
rng = 20201124
np.random.seed(rng)
n = 60
data_multi = np.zeros((n,2))
n2 = int(n/2)
data_multi[0:n2,0] = np.random.normal(loc=-3.0, scale=1.0, size=n2)
data_multi[0:n2,1] = np.random.normal(loc=-2.0, scale=1.0, size=n2)
data_multi[n2:n,0] = np.random.normal(loc=+3.0, scale=1.0, size=n2)
data_multi[n2:n,1] = np.random.normal(loc=+2.0, scale=1.0, size=n2)
np.savetxt("../resources/csv/in/data_multi.csv", data_multi, fmt='%1.5f')
# Generate grid
xx = np.arange(-7.0, +7.1, 0.5)
yy = np.arange(-6.0, +5.1, 0.5)
grid_multi = np.array(np.meshgrid(xx, yy)).T.reshape(-1, 2)
np.savetxt("../resources/csv/in/grid_multi.csv", grid_multi, fmt='%1.5f')
###Output
_____no_output_____
###Markdown
Fixed values hyperprior
###Code
# Run the executable
cmd = ["../build/run",
"N8", str(rng), "0", "1000", "100",
"NNW", "../resources/asciipb/nnw_fixed_prior.asciipb",
"DP", "../resources/asciipb/dp_gamma_prior.asciipb",
"",
"../resources/csv/in/data_multi.csv", "../resources/csv/in/grid_multi.csv",
"../resources/csv/out/mf_dens.csv", "../resources/csv/out/mf_mass.csv",
"../resources/csv/out/mf_nclu.csv", "../resources/csv/out/mf_clus.csv"
]
subprocess.run(cmd, capture_output=True)
fig = plt.figure(figsize=(21,7))
# Density
matr = np.genfromtxt("../resources/csv/out/mf_dens.csv", delimiter=',')
ax1 = fig.add_subplot(131, projection='3d')
ax1.scatter(grid_multi[:,0], grid_multi[:,1], np.exp(np.mean(matr, axis=1)))
ax1.set_title("Mean multivariate density")
# Total masses
masses = np.genfromtxt("../resources/csv/out/mf_mass.csv", delimiter='\n')
ax2 = fig.add_subplot(132)
ax2.plot(masses)
ax2.set_title("Total masses over iterations")
# Number of clusters
num_clust = np.genfromtxt("../resources/csv/out/mf_nclu.csv", delimiter='\n')
ax3 = fig.add_subplot(133)
ax3.plot(num_clust)
ax3.set_title("Number of clusters over iterations")
fig.show()
###Output
_____no_output_____
###Markdown
NGIW hyperprior
###Code
# Run the executable
cmd = ["../build/run",
"N8", str(rng), "0", "1000", "100",
"NNW", "../resources/asciipb/nnw_ngiw_prior.asciipb",
"DP", "../resources/asciipb/dp_gamma_prior.asciipb",
"",
"../resources/csv/in/data_multi.csv", "../resources/csv/in/grid_multi.csv",
"../resources/csv/out/mn_dens.csv", "../resources/csv/out/mn_mass.csv",
"../resources/csv/out/mn_nclu.csv", "../resources/csv/out/mn_clus.csv"
]
subprocess.run(cmd, capture_output=True)
fig = plt.figure(figsize=(21,7))
# Density
matr = np.genfromtxt("../resources/csv/out/mn_dens.csv", delimiter=',')
mean_dens = np.exp(np.mean(matr, axis=1)).reshape(-1, 1)
plot_data = pd.DataFrame(np.hstack([grid_multi, mean_dens]),
columns=["x", "y", "z"])
Z = plot_data.pivot_table(index='x', columns='y', values='z').T.values
X_unique = np.sort(plot_data.x.unique())
Y_unique = np.sort(plot_data.y.unique())
X, Y = np.meshgrid(X_unique, Y_unique)
ax1 = fig.add_subplot(131) #, projection='3d')
ax1.contour(X, Y, Z)
ax1.set_title("Mean multivariate density")
# Total masses
masses = np.genfromtxt("../resources/csv/out/mn_mass.csv", delimiter='\n')
ax2 = fig.add_subplot(132)
ax2.plot(masses)
ax2.set_title("Total masses over iterations")
# Number of clusters
num_clust = np.genfromtxt("../resources/csv/out/mn_nclu.csv", delimiter='\n')
ax3 = fig.add_subplot(133)
ax3.vlines(np.arange(len(num_clust)), num_clust - 0.3, num_clust + 0.3)
ax3.set_title("Number of clusters over iterations")
fig.show()
###Output
_____no_output_____
###Markdown
Univariate tests
###Code
# Generate data
rng = 20201124
np.random.seed(rng)
n = 200
norm1 = np.random.normal(loc=-4.0, scale=1.0, size=int(n/2))
norm2 = np.random.normal(loc=+4.0, scale=1.0, size=int(n/2))
data_uni = np.concatenate((norm1, norm2))
np.savetxt("../resources/csv/in/uni_data.csv", data_uni, fmt='%1.5f')
# Generate grid
uni_grid = np.arange(-10, +10, 0.1)
np.savetxt("../resources/csv/in/uni_grid.csv", uni_grid, fmt='%1.5f')
# True density of data
true_pdf = 0.5 * stats.norm.pdf(uni_grid, -4.0, 1.0) + \
0.5 * stats.norm.pdf(uni_grid, +4.0, 1.0)
# Iterations to plot the density of
iters = [0, 10, 100, 500, 898]
###Output
_____no_output_____
###Markdown
NGG hyperprior
###Code
# Run the executable
cmd = ("../build/run ../algo_settings.asciipb "
"NNIG ../resources/asciipb/nnig_ngg_prior.asciipb "
"DP ../resources/asciipb/dp_gamma_prior.asciipb '' "
"../resources/csv/in/uni_data.csv ../resources/csv/in/uni_grid.csv "
"../resources/csv/out/uni_dens.csv ../resources/csv/out/uni_nclu.csv "
"../resources/csv/out/uni_clus.csv").split()
subprocess.run(cmd, capture_output=True)
fig = plt.figure(figsize=(21,7))
# Densities
matr = np.genfromtxt("../resources/csv/out/uni_dens.csv", delimiter=',')
ax1 = fig.add_subplot(131)
for it in iters:
ax1.plot(uni_grid, np.exp(matr[it, :]), linewidth=0.5)
ax1.plot(uni_grid, np.exp(np.mean(matr, axis=0)), linewidth=1.0,
linestyle='--', color="black")
ax1.plot(uni_grid, true_pdf, linewidth=1.0, color="red")
ax1.legend(iters + ["mean", "true"])
ax1.set_title("Univariate densities")
# # Total masses
# masses = np.genfromtxt("../resources/csv/out/un_mass.csv", delimiter='\n')
# ax2 = fig.add_subplot(132)
# ax2.plot(masses)
# ax2.set_title("Total masses over iterations")
# Number of clusters
num_clust = np.genfromtxt("../resources/csv/out/uni_nclu.csv", delimiter='\n')
ax3 = fig.add_subplot(133)
ax3.plot(num_clust)
ax3.set_title("Number of clusters over iterations")
fig.show()
###Output
_____no_output_____
###Markdown
Fixed values hyperprior (TODO)
###Code
# Run the executable
cmd = ("../build/run algo_settings.asciipb"
"NNIG", "../resources/asciipb/nnig_ngg_prior.asciipb",
"DP", "../resources/asciipb/dp_gamma_prior.asciipb",
"",
"../resources/csv/in/data_uni.csv", "../resources/csv/in/uni_grid.csv",
"../resources/csv/out/uf_dens.csv", "../resources/csv/out/uf_mass.csv",
"../resources/csv/out/uf_nclu.csv", "../resources/csv/out/uf_clus.csv"
).split()
subprocess.run(cmd, capture_output=True)
fig = plt.figure(figsize=(21,7))
# Densities
matr = np.genfromtxt("../resources/csv/out/uf_dens.csv", delimiter=',')
ax1 = fig.add_subplot(131)
for it in iters:
ax1.plot(uni_grid, np.exp(matr[it, :]), linewidth=0.5)
ax1.plot(uni_grid, np.exp(np.mean(matr, axis=0)), linewidth=1.0,
linestyle='--', color="black")
ax1.plot(uni_grid, true_pdf, linewidth=1.0, color="red")
ax1.legend(iters + ["mean", "true"])
ax1.set_title("Univariate densities")
# Total masses
masses = np.genfromtxt("../resources/csv/out/uf_mass.csv", delimiter='\n')
ax2 = fig.add_subplot(132)
ax2.plot(masses)
ax2.set_title("Total masses over iterations")
# Number of clusters
num_clust = np.genfromtxt("../resources/csv/out/uf_nclu.csv", delimiter='\n')
ax3 = fig.add_subplot(133)
ax3.vlines(np.arange(len(num_clust)), num_clust - 0.3, num_clust + 0.3)
ax3.set_title("Number of clusters over iterations")
fig.show()
###Output
_____no_output_____
###Markdown
Multivariate tests (TODO)
###Code
# Generate data
rng = 20201124
np.random.seed(rng)
n = 500
multi_data = np.zeros((n,2))
n2 = int(n/2)
multi_data[0:n2,0] = np.random.normal(loc=-3.0, scale=1.0, size=n2)
multi_data[0:n2,1] = np.random.normal(loc=-2.0, scale=1.0, size=n2)
multi_data[n2:n,0] = np.random.normal(loc=+3.0, scale=1.0, size=n2)
multi_data[n2:n,1] = np.random.normal(loc=+2.0, scale=1.0, size=n2)
np.savetxt("../resources/csv/in/multi_data.csv", multi_data, fmt='%1.5f')
# Generate grid
xx = np.arange(-7.0, +7.1, 0.1)
yy = np.arange(-6.0, +5.1, 0.1)
multi_grid = np.array(np.meshgrid(xx, yy)).T.reshape(-1, 2)
np.savetxt("../resources/csv/in/multi_grid.csv", multi_grid, fmt='%1.5f')
###Output
_____no_output_____
###Markdown
Fixed values hyperprior
###Code
# Run the executable
cmd = ["../build/run",
"N8", str(rng), "0", "1000", "100",
"NNW", "../resources/asciipb/nnw_fixed_prior.asciipb",
"DP", "../resources/asciipb/dp_gamma_prior.asciipb",
"",
"../resources/csv/in/multi_data.csv", "../resources/csv/in/multi_grid.csv",
"../resources/csv/out/mf_dens.csv", "../resources/csv/out/mf_mass.csv",
"../resources/csv/out/mf_nclu.csv", "../resources/csv/out/mf_clus.csv"
]
subprocess.run(cmd, capture_output=True)
fig = plt.figure(figsize=(21,7))
# Density
matr = np.genfromtxt("../resources/csv/out/multi_dens.csv", delimiter=',')
ax1 = fig.add_subplot(121, projection='3d')
ax1.scatter(multi_grid[:,0], multi_grid[:,1], np.exp(np.mean(matr, axis=1)))
ax1.set_title("Mean multivariate density")
# Number of clusters
num_clust = np.genfromtxt("../resources/csv/out/multi_nclu.csv", delimiter='\n')
ax3 = fig.add_subplot(122)
ax3.plot(num_clust)
ax3.set_title("Number of clusters over iterations")
fig.show()
###Output
_____no_output_____
###Markdown
NGIW hyperprior
###Code
# Run the executable
cmd = ["../build/run",
"N8", str(rng), "0", "1000", "100",
"NNW", "../resources/asciipb/nnw_ngiw_prior.asciipb",
"DP", "../resources/asciipb/dp_gamma_prior.asciipb",
"",
"../resources/csv/in/multi_data.csv", "../resources/csv/in/multi_grid.csv",
"../resources/csv/out/mn_dens.csv", "../resources/csv/out/mn_mass.csv",
"../resources/csv/out/mn_nclu.csv", "../resources/csv/out/mn_clus.csv"
]
subprocess.run(cmd, capture_output=True)
fig = plt.figure(figsize=(21,7))
# Density
matr = np.genfromtxt("../resources/csv/out/multi_dens.csv", delimiter=',')
mean_dens = np.exp(np.mean(matr, axis=0)).reshape(-1, 1)
plot_data = pd.DataFrame(np.hstack([multi_grid, mean_dens]),
columns=["x", "y", "z"])
Z = plot_data.pivot_table(index='x', columns='y', values='z').T.values
X_unique = np.sort(plot_data.x.unique())
Y_unique = np.sort(plot_data.y.unique())
X, Y = np.meshgrid(X_unique, Y_unique)
ax1 = fig.add_subplot(121) #, projection='3d')
ax1.contour(X, Y, Z)
ax1.set_title("Mean multivariate density")
# # Number of clusters
# num_clust = np.genfromtxt("../resources/csv/out/mn_nclu.csv", delimiter='\n')
# ax3 = fig.add_subplot(122)
# ax3.vlines(np.arange(len(num_clust)), num_clust - 0.3, num_clust + 0.3)
# ax3.set_title("Number of clusters over iterations")
fig.show()
matr = np.genfromtxt("../resources/csv/out/multi_dens.csv", delimiter=',')
matr.shape
###Output
_____no_output_____ |
Notebooks/Day1.ipynb | ###Markdown
Day 1, Sonar Sweep

Count the number of times a depth measurement increases from the previous measurement.
- 199 (N/A - no previous measurement)
- 200 (increased)
- 208 (increased)
- 210 (increased)
- 200 (decreased)
- 207 (increased)
- 240 (increased)
- 269 (increased)
- 260 (decreased)
- 263 (increased)

> Sample answer: 7
###Code
let getData = System.IO.File.ReadLines("../Data/Day1.txt") |> Seq.map int
// let sample = [|199;200;208;210;200;207;240;269;260;263|]
getData
|> Seq.pairwise
|> Seq.where (fun (x,y) -> y > x)
|> Seq.length
###Output
_____no_output_____
###Markdown
Count the number of times the sum of measurements in this sliding window increases from the previous sum. So, compare A with B, then compare B with C, then C with D, and so on. Stop when there aren't enough measurements left to create a new three-measurement sum.
- 199 A
- 200 A B
- 208 A B C
- 210 B C D
- 200 E C D
- 207 E F D
- 240 E F G
- 269 F G H
- 260 G H
- 263 H

> The measurements in the first window are marked A (199, 200, 208); their sum is 199 + 200 + 208 = 607.
- A: 607 (N/A - no previous sum)
- B: 618 (increased)
- C: 618 (no change)
- D: 617 (decreased)
- E: 647 (increased)
- F: 716 (increased)
- G: 769 (increased)
- H: 792 (increased)

> Sample answer: 5
###Code
getData
|> Seq.windowed 3
|> Seq.map (fun w -> Seq.sum w)
|> Seq.pairwise
|> Seq.where (fun (x,y) -> y > x)
|> Seq.length
###Output
_____no_output_____ |
labs/lab10_time_series/Subsequence Mining.ipynb | ###Markdown
Subsequence mining of frequent temporal patterns
Jay Urbain, PhD

Frequent patterns

A frequent pattern is a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. Frequent pattern mining was first proposed by Agrawal, Imielinski, and Swami [AIS93] in the context of frequent itemsets and association rule mining. The motivation is to identify the inherent regularities, or frequent patterns, in data.

For example: What medical events frequently co-occur within a medical encounter?
- Lower back pain, spinal stenosis, ibuprofen
- Diabetes, metformin, foot exam
- Myocardial ischemia, beta-blockers, ACE inhibitors

For the medical events above, what might frequently occur in subsequent medical encounters?
- Oxycontin, spinal fusion surgery
- Low EF, myocardial ischemia
- STEMI, angioplasty

Each medical event, e.g., diabetes, metformin, ischemia, is considered an item. An itemset is a set of one or more items where the ordering does not matter. Items within an itemset are considered concurrent, e.g., STEMI and angioplasty.

A $k$-itemset is $X = \{x_1, \ldots, x_k\}$. The absolute support, or support count, of $X$ is the frequency of occurrence of the itemset $X$. The relative support, $s$, is the fraction of transactions that contain $X$, i.e., the probability that a transaction contains $X$. An itemset $X$ is *frequent* if $X$'s support is no less than a minimum support threshold.

Association rules

Rules can be defined from frequent itemsets. The problem is to find all rules $X => Y$ with minimum support and confidence.
- Support, $s$, is the probability that a transaction contains $X \& Y$, i.e., $p(X,Y)$.
- Confidence, $c$, is the conditional probability that a transaction (encounter) having $X$ also contains $Y$, i.e., $p(Y|X)$.

Example: Let minsup = $0.5$ and minconf = $0.5$ for the encounter database below. Frequent patterns: *myocardial ischemia:3, beta-blockers:2, ACE inhibitors:2, diabetes:2, metformin:2*; foot exam:1 would not be a frequent pattern. Association rules: *myocardial ischemia, beta-blockers $=>$ ACE inhibitors* would have support = 2/4 and confidence = 2/2. There are many more.

Encounter ID and medical events:
- 10: myocardial ischemia, beta-blockers, ACE inhibitors
- 20: diabetes, metformin, foot exam
- 30: myocardial ischemia, diabetes, metformin, beta-blockers, ACE inhibitors
- 40: diabetes, myocardial ischemia, metformin

Subsequences

A subsequence is a sequence that can be derived from another sequence by deleting some elements without changing the order of the remaining elements. For example, the sequence $\{A,B,D\}$ is a subsequence of $\{A,B,C,D,E,F\}$. Subsequences are suitable for mining patient event histories since they can model frequently occurring sequential patterns across patient event histories without the necessity to match each element within a sequence. For example, $\{diabetes, low EF, STEMI\}$ would be a common subsequence across the two following patient event sequences:
- $\{diabetes, metformin, lung cancer, low EF, STEMI\}$
- $\{diabetes, ischemia, low EF, STEMI\}$

*Note: A subsequence should not be confused with a substring, which requires matching consecutive elements.*

Mining subsequence patterns

Frequent Pattern Mining - spark.mllib

Mining frequent items, itemsets, subsequences, or other substructures is usually among the first steps to analyze a large-scale dataset, and has been an active research topic in data mining for years. We refer users to Wikipedia's association rule learning for more information.
spark.mllib provides a parallel implementation of FP-growth, a popular algorithm for mining frequent itemsets.

FP-growth

The FP-growth algorithm is described in the paper Han et al., Mining frequent patterns without candidate generation, where "FP" stands for frequent pattern. Given a dataset of transactions, the first step of FP-growth is to calculate item frequencies and identify frequent items. Different from Apriori-like algorithms designed for the same purpose, the second step of FP-growth uses a suffix-tree (FP-tree) structure to encode transactions without generating candidate sets explicitly, which are usually expensive to generate. After the second step, the frequent itemsets can be extracted from the FP-tree. In spark.mllib, a parallel version of FP-growth called PFP is implemented, as described in Li et al., PFP: Parallel FP-growth for query recommendation. PFP distributes the work of growing FP-trees based on the suffixes of transactions, and is hence more scalable than a single-machine implementation. We refer users to the papers for more details.

spark.mllib's FP-growth implementation takes the following (hyper-)parameters:
- minSupport: the minimum support for an itemset to be identified as frequent. For example, if an item appears in 3 out of 5 transactions, it has a support of 3/5 = 0.6.
- numPartitions: the number of partitions used to distribute the work.

Given a set of sequences, find the complete set of *frequent* subsequences. Consider each sequence as an ordering over the patient's event history. Given the sequence ${(ef) (ab) (df) c b}$, each element, i.e., $(ef), (ab), (df), c, b$, may contain a set of items. Items within an element are considered unordered (they happen concurrently) and are listed alphabetically to avoid ambiguity. For example, events within a patient encounter could be considered items within an element set.

Example: $\{a(bc)dc\}$ is a subsequence of $\{a(abc)(ac)d(cf)\}$.

Example: Given a minimum support threshold min_sup = 2, $\{(ab)c\}$ is a sequential pattern in the following sequence database:
- SID 10: {a(abc)(ac)d(cf)}
- SID 20: {(ad)c(bc)(ae)}
- SID 30: {(ef)(ab)(df)cb}
- SID 40: {eg(af)cbc}

FPGrowth implements the FP-growth algorithm. It takes an RDD of transactions, where each transaction is a list of items of a generic type. Calling FPGrowth.train with transactions returns an FPGrowthModel that stores the frequent itemsets with their frequencies. Refer to the FPGrowth Python docs for more details on the API.

The Oracle data warehouse query below, a basic aggregate query that is the first step for generating a sequence database, takes approximately 9 minutes (588 seconds). This query generates itemsets, i.e., medical events per encounter.
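As a minimal sanity check, the rule example and the subsequence-containment example above can be reproduced in plain Python; the sketch below assumes only the toy encounter table and toy sequences from this section, not the real encounter data.
###Code
# Minimal sketch: verify support = 2/4 and confidence = 2/2 for the rule
# {myocardial ischemia, beta-blockers} => {ACE inhibitors} on the toy encounter table above.
encounters = [
    {"myocardial ischemia", "beta-blockers", "ACE inhibitors"},            # encounter 10
    {"diabetes", "metformin", "foot exam"},                                # encounter 20
    {"myocardial ischemia", "diabetes", "metformin",
     "beta-blockers", "ACE inhibitors"},                                   # encounter 30
    {"diabetes", "myocardial ischemia", "metformin"},                      # encounter 40
]
X = {"myocardial ischemia", "beta-blockers"}
Y = {"ACE inhibitors"}
n_X = sum(X <= e for e in encounters)          # encounters containing X
n_XY = sum((X | Y) <= e for e in encounters)   # encounters containing X and Y
print("support =", n_XY / len(encounters))     # 2/4 = 0.5
print("confidence =", n_XY / n_X)              # 2/2 = 1.0

# Itemset-sequence containment, matching the definition above: each element of `sub`
# must be a subset of some element of `seq`, with matches at strictly increasing positions.
def is_subsequence(sub, seq):
    i = 0
    for itemset in sub:
        while i < len(seq) and not set(itemset) <= set(seq[i]):
            i += 1
        if i == len(seq):
            return False
        i += 1
    return True

print(is_subsequence(["a", "bc", "d", "c"], ["a", "abc", "ac", "d", "cf"]))  # True
###Output
_____no_output_____
###Markdown
The same relative-support definition is what the Spark FPGrowth and PrefixSpan calls below compute at scale.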
###Code
from IPython.display import HTML
HTML('''
<style>
table {float:left}
</style>
<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
from pyspark.mllib.fpm import FPGrowth
x = np.random.normal(0., 10, 10000)
plt.hist(x,50)
plt.show()
from pyspark.sql import SQLContext, HiveContext
from pyspark.sql.types import *
from datetime import datetime
import time
from pyspark.sql.functions import *
from pyspark.mllib.fpm import FPGrowth, PrefixSpan
sqlContext = HiveContext(sc)
dataf = sc.textFile("/Users/jayurbain/Dropbox/machine-learning/machine-learning/data/sample_fpgrowth.txt")
print dataf.take(10)
print type(dataf)
# fpgrowth
transactions = dataf.map(lambda line: line.strip().split(' '))
print transactions.take(5)
model = FPGrowth.train(transactions, minSupport=0.2, numPartitions=10)
result = model.freqItemsets().collect()
# for fi in result:
# print(fi)
for i in result:
print '(', ', '.join(i.items), ')', 'freq=', str(i.freq)
data = [
[["a", "b"], ["c"]],
[["a"], ["c", "b"], ["a", "b"]],
[["a", "b"], ["e"]],
[["f"]]]
rdd = sc.parallelize(data, 2)
model = FPGrowth.train(transactions, minSupport=0.2, numPartitions=10)
result = model.freqItemsets().collect()
for i in result:
print '(', ', '.join(i.items), ')', 'freq=', str(i.freq)
sorted(model.freqItemsets().collect())
data = [
[["a", "b"], ["c"]],
[["a"], ["c", "b"], ["a", "b"]],
[["a", "b"], ["e"]],
[["f"]]]
rdd = sc.parallelize(data, 2)
#rdd = sc.parallelize(transactions, 2)
print rdd.take(5)
model = PrefixSpan.train(rdd)
sorted(model.freqSequences().collect())
###Output
[[['a', 'b'], ['c']], [['a'], ['c', 'b'], ['a', 'b']], [['a', 'b'], ['e']], [['f']]]
|
TBPP_train.ipynb | ###Markdown
Data
###Code
from data_synthtext import GTUtility
with open('gt_util_synthtext_seglink.pkl', 'rb') as f:
gt_util = pickle.load(f)
gt_util_train, gt_util_val = gt_util.split(0.9)
###Output
_____no_output_____
###Markdown
Model
###Code
# TextBoxes++
model = TBPP512(softmax=False)
weights_path = 'models/ssd512_voc_weights_fixed.hdf5'
freeze = ['conv1_1', 'conv1_2',
'conv2_1', 'conv2_2',
'conv3_1', 'conv3_2', 'conv3_3',
#'conv4_1', 'conv4_2', 'conv4_3',
#'conv5_1', 'conv5_2', 'conv5_3',
]
batch_size = 24
experiment = 'tbpp512fl_synthtext'
# TextBoxes++ + DenseNet
model = TBPP512_dense(softmax=False)
weights_path = None
freeze = []
batch_size = 6
experiment = 'dsodtbpp512fl_synthtext'
prior_util = PriorUtil(model)
if weights_path is not None:
load_weights(model, weights_path)
###Output
_____no_output_____
###Markdown
Training
###Code
epochs = 100
initial_epoch = 0
gen_train = InputGenerator(gt_util_train, prior_util, batch_size, model.image_size)
gen_val = InputGenerator(gt_util_val, prior_util, batch_size, model.image_size)
for layer in model.layers:
layer.trainable = not layer.name in freeze
checkdir = './checkpoints/' + time.strftime('%Y%m%d%H%M') + '_' + experiment
if not os.path.exists(checkdir):
os.makedirs(checkdir)
with open(checkdir+'/source.py','wb') as f:
source = ''.join(['# In[%i]\n%s\n\n' % (i, In[i]) for i in range(len(In))])
f.write(source.encode())
#optim = keras.optimizers.SGD(lr=1e-3, momentum=0.9, decay=0, nesterov=True)
optim = keras.optimizers.Adam(lr=1e-3, beta_1=0.9, beta_2=0.999, epsilon=0.001, decay=0.0)
# weight decay
regularizer = keras.regularizers.l2(5e-4) # None if disabled
#regularizer = None
for l in model.layers:
if l.__class__.__name__.startswith('Conv'):
l.kernel_regularizer = regularizer
loss = TBPPFocalLoss(lambda_conf=10000.0, lambda_offsets=1.0)
model.compile(optimizer=optim, loss=loss.compute, metrics=loss.metrics)
print(checkdir.split('/')[-1])
history = model.fit_generator(
gen_train.generate(),
steps_per_epoch=int(gen_train.num_batches/4),
epochs=epochs,
verbose=1,
callbacks=[
keras.callbacks.ModelCheckpoint(checkdir+'/weights.{epoch:03d}.h5', verbose=1, save_weights_only=True),
Logger(checkdir),
#LearningRateDecay()
],
validation_data=gen_val.generate(),
validation_steps=gen_val.num_batches,
class_weight=None,
max_queue_size=1,
workers=1,
#use_multiprocessing=False,
initial_epoch=initial_epoch,
#pickle_safe=False, # will use threading instead of multiprocessing, which is lighter on memory use but slower
)
from utils.model import calc_memory_usage, count_parameters
count_parameters(model)
calc_memory_usage(model)
# frequency of class instance in local ground truth, used for weighting the focal loss
s = np.zeros(gt_util.num_classes)
for i in range(1000):#range(gt_util.num_samples):
egt = prior_util.encode(gt_util.data[i])
s += np.sum(egt[:,-gt_util.num_classes:], axis=0)
sn = np.asarray(np.sum(s))/s
print(np.array(sn, dtype=np.int32))
print(sn/np.sum(sn))
###Output
_____no_output_____
###Markdown
Data
###Code
from data_synthtext import GTUtility
with open('gt_util_synthtext_seglink.pkl', 'rb') as f:
gt_util = pickle.load(f)
gt_util_train, gt_util_val = gt_util.split(0.9)
###Output
_____no_output_____
###Markdown
Model
###Code
# TextBoxes++
model = TBPP512(softmax=False)
weights_path = './models/ssd512_voc_weights_fixed.hdf5'
freeze = ['conv1_1', 'conv1_2',
'conv2_1', 'conv2_2',
'conv3_1', 'conv3_2', 'conv3_3',
#'conv4_1', 'conv4_2', 'conv4_3',
#'conv5_1', 'conv5_2', 'conv5_3',
]
batch_size = 24
experiment = 'tbpp512fl_synthtext'
# TextBoxes++ with DSOD backbone
model = TBPP512_dense(softmax=False)
weights_path = None
freeze = []
batch_size = 6
experiment = 'dsodtbpp512fl_synthtext'
# TextBoxes++ with dense blocks and separable convolution
model = TBPP512_dense_separable()
weights_path = None
freeze = []
batch_size = 8
experiment = 'dstbpp512fl_synthtext'
prior_util = PriorUtil(model)
if weights_path is not None:
load_weights(model, weights_path)
###Output
_____no_output_____
###Markdown
Training
###Code
epochs = 100
initial_epoch = 0
#optimizer = tf.optimizers.SGD(learning_rate=1e-3, momentum=0.9, decay=0, nesterov=True)
optimizer = tf.optimizers.Adam(learning_rate=1e-3, beta_1=0.9, beta_2=0.999, epsilon=0.001, decay=0.0)
loss = TBPPFocalLoss(lambda_conf=10000.0, lambda_offsets=1.0)
#regularizer = None
regularizer = keras.regularizers.l2(5e-4) # None if disabled
gen_train = InputGenerator(gt_util_train, prior_util, batch_size, model.image_size, augmentation=False)
gen_val = InputGenerator(gt_util_val, prior_util, batch_size, model.image_size, augmentation=False)
dataset_train, dataset_val = gen_train.get_dataset(), gen_val.get_dataset()
iterator_train, iterator_val = iter(dataset_train), iter(dataset_val)
checkdir = './checkpoints/' + time.strftime('%Y%m%d%H%M') + '_' + experiment
if not os.path.exists(checkdir):
os.makedirs(checkdir)
with open(checkdir+'/source.py','wb') as f:
source = ''.join(['# In[%i]\n%s\n\n' % (i, In[i]) for i in range(len(In))])
f.write(source.encode())
print(checkdir)
for l in model.layers:
l.trainable = not l.name in freeze
if regularizer and l.__class__.__name__.startswith('Conv'):
model.add_loss(lambda l=l: regularizer(l.kernel))
metric_util = MetricUtility(loss.metric_names, logdir=checkdir)
@tf.function
def step(x, y_true, training=False):
if training:
with tf.GradientTape() as tape:
y_pred = model(x, training=True)
metric_values = loss.compute(y_true, y_pred)
total_loss = metric_values['loss']
if len(model.losses):
total_loss += tf.add_n(model.losses)
gradients = tape.gradient(total_loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
else:
y_pred = model(x, training=True)
metric_values = loss.compute(y_true, y_pred)
return metric_values
for k in tqdm(range(initial_epoch, epochs), 'total', leave=False):
print('\nepoch %i/%i' % (k+1, epochs))
metric_util.on_epoch_begin()
for i in tqdm(range(gen_train.num_batches//4), 'training', leave=False):
x, y_true = next(iterator_train)
metric_values = step(x, y_true, training=True)
metric_util.update(metric_values, training=True)
model.save_weights(checkdir+'/weights.%03i.h5' % (k+1,))
for i in tqdm(range(gen_val.num_batches), 'validation', leave=False):
x, y_true = next(iterator_val)
metric_values = step(x, y_true, training=False)
metric_util.update(metric_values, training=False)
metric_util.on_epoch_end(verbose=1)
from utils.model import calc_memory_usage, count_parameters
count_parameters(model)
calc_memory_usage(model)
# frequency of class instance in local ground truth, used for weighting the focal loss
s = np.zeros(gt_util.num_classes)
for i in range(1000):#range(gt_util.num_samples):
egt = prior_util.encode(gt_util.data[i])
s += np.sum(egt[:,-gt_util.num_classes:], axis=0)
sn = np.asarray(np.sum(s))/s
print(np.array(sn, dtype=np.int32))
print(sn/np.sum(sn))
###Output
_____no_output_____
###Markdown
Data
###Code
from data_synthtext import GTUtility
with open('gt_util_synthtext_seglink.pkl', 'rb') as f:
gt_util = pickle.load(f)
gt_util_train, gt_util_val = gt_util.split(gt_util, split=0.95)
###Output
_____no_output_____
###Markdown
Model
###Code
# TextBoxes++
model = TBPP512(softmax=False)
weights_path = 'models/ssd512_voc_weights_fixed.hdf5'
freeze = ['conv1_1', 'conv1_2',
'conv2_1', 'conv2_2',
'conv3_1', 'conv3_2', 'conv3_3',
#'conv4_1', 'conv4_2', 'conv4_3',
#'conv5_1', 'conv5_2', 'conv5_3',
]
batch_size = 24
experiment = 'tbpp512fl_synthtext'
# TextBoxes++ + DenseNet
model = TBPP512_dense(softmax=False)
weights_path = None
freeze = []
batch_size = 6
experiment = 'dsodtbpp512fl_synthtext'
prior_util = PriorUtil(model)
if weights_path is not None:
load_weights(model, weights_path)
###Output
_____no_output_____
###Markdown
Training
###Code
epochs = 100
initial_epoch = 0
gen_train = InputGenerator(gt_util_train, prior_util, batch_size, model.image_size)
gen_val = InputGenerator(gt_util_val, prior_util, batch_size, model.image_size)
for layer in model.layers:
layer.trainable = not layer.name in freeze
checkdir = './checkpoints/' + time.strftime('%Y%m%d%H%M') + '_' + experiment
if not os.path.exists(checkdir):
os.makedirs(checkdir)
with open(checkdir+'/source.py','wb') as f:
source = ''.join(['# In[%i]\n%s\n\n' % (i, In[i]) for i in range(len(In))])
f.write(source.encode())
#optim = keras.optimizers.SGD(lr=1e-3, momentum=0.9, decay=0, nesterov=True)
optim = keras.optimizers.Adam(lr=1e-3, beta_1=0.9, beta_2=0.999, epsilon=0.001, decay=0.0)
# weight decay
regularizer = keras.regularizers.l2(5e-4) # None if disabled
regularizer = None
for l in model.layers:
if l.__class__.__name__.startswith('Conv'):
l.kernel_regularizer = regularizer
loss = TBPPFocalLoss()
model.compile(optimizer=optim, loss=loss.compute, metrics=loss.metrics)
history = model.fit_generator(
gen_train.generate(),
steps_per_epoch=int(gen_train.num_batches/4),
epochs=epochs,
verbose=1,
callbacks=[
keras.callbacks.ModelCheckpoint(checkdir+'/weights.{epoch:03d}.h5', verbose=1, save_weights_only=True),
Logger(checkdir),
#LearningRateDecay()
],
validation_data=gen_val.generate(),
validation_steps=gen_val.num_batches,
class_weight=None,
max_queue_size=1,
workers=1,
#use_multiprocessing=False,
initial_epoch=initial_epoch,
#pickle_safe=False, # will use threading instead of multiprocessing, which is lighter on memory use but slower
)
from ssd_utils import calc_memory_usage, count_parameters
count_parameters(model)
calc_memory_usage(model)
# frequency of class instance in local ground truth, used for weighting the focal loss
s = np.zeros(gt_util.num_classes)
for i in range(1000):#range(gt_util.num_samples):
egt = prior_util.encode(gt_util.data[i])
s += np.sum(egt[:,-gt_util.num_classes:], axis=0)
sn = np.asarray(np.sum(s))/s
print(np.array(sn, dtype=np.int32))
print(sn/np.sum(sn))
###Output
_____no_output_____
###Markdown
Data
###Code
from data_synthtext import GTUtility
with open('gt_util_synthtext_seglink.pkl', 'rb') as f:
gt_util = pickle.load(f)
gt_util_train, gt_util_val = gt_util.split(0.9)
###Output
_____no_output_____
###Markdown
Model
###Code
# TextBoxes++
model = TBPP512(softmax=False)
weights_path = './models/ssd512_voc_weights_fixed.hdf5'
freeze = ['conv1_1', 'conv1_2',
'conv2_1', 'conv2_2',
'conv3_1', 'conv3_2', 'conv3_3',
#'conv4_1', 'conv4_2', 'conv4_3',
#'conv5_1', 'conv5_2', 'conv5_3',
]
batch_size = 24
experiment = 'tbpp512fl_synthtext'
# TextBoxes++ with DSOD backbone
model = TBPP512_dense(softmax=False)
weights_path = None
freeze = []
batch_size = 6
experiment = 'dsodtbpp512fl_synthtext'
# TextBoxes++ with dense blocks and separable convolution
model = TBPP512_dense_separable()
weights_path = None
freeze = []
batch_size = 8
experiment = 'dstbpp512fl_synthtext'
prior_util = PriorUtil(model)
if weights_path is not None:
load_weights(model, weights_path)
###Output
_____no_output_____
###Markdown
Training
###Code
epochs = 100
initial_epoch = 0
#optimizer = keras.optimizers.SGD(lr=1e-3, momentum=0.9, decay=0, nesterov=True)
optimizer = keras.optimizers.Adam(lr=1e-3, beta_1=0.9, beta_2=0.999, epsilon=0.001, decay=0.0)
loss = TBPPFocalLoss(lambda_conf=10000.0, lambda_offsets=1.0)
#regularizer = None
regularizer = keras.regularizers.l2(5e-4) # None if disabled
gen_train = InputGenerator(gt_util_train, prior_util, batch_size, model.image_size, augmentation=False)
gen_val = InputGenerator(gt_util_val, prior_util, batch_size, model.image_size, augmentation=False)
dataset_train, dataset_val = gen_train.get_dataset(), gen_val.get_dataset()
iterator_train, iterator_val = iter(dataset_train), iter(dataset_val)
checkdir = './checkpoints/' + time.strftime('%Y%m%d%H%M') + '_' + experiment
if not os.path.exists(checkdir):
os.makedirs(checkdir)
with open(checkdir+'/source.py','wb') as f:
source = ''.join(['# In[%i]\n%s\n\n' % (i, In[i]) for i in range(len(In))])
f.write(source.encode())
print(checkdir)
for l in model.layers:
l.trainable = not l.name in freeze
if regularizer and l.__class__.__name__.startswith('Conv'):
model.add_loss(lambda l=l: regularizer(l.kernel))
metric_util = MetricUtility(loss.metric_names, logdir=checkdir)
@tf.function
def step(x, y_true, training=False):
if training:
with tf.GradientTape() as tape:
y_pred = model(x, training=True)
metric_values = loss.compute(y_true, y_pred)
total_loss = metric_values['loss']
if len(model.losses):
total_loss += tf.add_n(model.losses)
gradients = tape.gradient(total_loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
else:
y_pred = model(x, training=True)
metric_values = loss.compute(y_true, y_pred)
return metric_values
for k in tqdm(range(initial_epoch, epochs), 'total', leave=False):
print('\nepoch %i/%i' % (k+1, epochs))
metric_util.on_epoch_begin()
for i in tqdm(range(gen_train.num_batches//4), 'training', leave=False):
x, y_true = next(iterator_train)
metric_values = step(x, y_true, training=True)
metric_util.update(metric_values, training=True)
model.save_weights(checkdir+'/weights.%03i.h5' % (k+1,))
for i in tqdm(range(gen_val.num_batches), 'validation', leave=False):
x, y_true = next(iterator_val)
metric_values = step(x, y_true, training=False)
metric_util.update(metric_values, training=False)
metric_util.on_epoch_end(verbose=1)
from utils.model import calc_memory_usage, count_parameters
count_parameters(model)
calc_memory_usage(model)
# frequency of class instance in local ground truth, used for weighting the focal loss
s = np.zeros(gt_util.num_classes)
for i in range(1000):#range(gt_util.num_samples):
egt = prior_util.encode(gt_util.data[i])
s += np.sum(egt[:,-gt_util.num_classes:], axis=0)
sn = np.asarray(np.sum(s))/s
print(np.array(sn, dtype=np.int32))
print(sn/np.sum(sn))
###Output
_____no_output_____ |
chartbusters-prediction-foretell-the-popularity_v54.ipynb | ###Markdown
- Unique_ID : Unique Identifier.
- Name : Name of the Artist.
- Genre : Genre of the Song.
- Country : Origin Country of Artist.
- Song_Name : Name of the Song.
- Timestamp : Release Date and Time.
- Views : Number of times the song was played/viewed (Target/Dependent Variable).
- Comments : Count of comments for the song.
- Likes : Count of Likes.
- Popularity : Popularity score for the artist.
- Followers : Number of Followers.

Import libraries
###Code
import re
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
import seaborn as sns
from tqdm.notebook import tqdm
sns.set_style('darkgrid')
warnings.filterwarnings('ignore')
pd.set_option('display.max_columns', 500)
###Output
_____no_output_____
###Markdown
Import datasets
###Code
train = pd.read_csv('/kaggle/input/Data_Train.csv')
test = pd.read_csv('/kaggle/input/Data_Test.csv')
sub = pd.read_csv('/kaggle/input/Sample_Submission.csv')
train.shape, test.shape, sub.shape
###Output
_____no_output_____
###Markdown
Data exploration
###Code
train.duplicated().sum(), test.duplicated().sum()
train.head(2)
train.info()
train.isnull().sum()
train.nunique()
test.nunique()
###Output
_____no_output_____
###Markdown
Data pre-processing
###Code
train['Timestamp'] = pd.to_datetime(train['Timestamp'])
test['Timestamp'] = pd.to_datetime(test['Timestamp'])
train = train.sort_values('Timestamp').reset_index(drop = True)
test = test.sort_values('Timestamp').reset_index(drop = True)
df = train.append(test, ignore_index=True, sort=False)
df.shape
df.info()
df.head(2)
df['Other_artist'] = df['Song_Name'].str.count('feat|Feat|FEAT')
df['Year'] = pd.to_datetime(df['Timestamp']).dt.year
df['Month'] = pd.to_datetime(df['Timestamp']).dt.month
df['Day'] = pd.to_datetime(df['Timestamp']).dt.day
df['Hour'] = pd.to_datetime(df['Timestamp']).dt.hour
df['Minutes'] = pd.to_datetime(df['Timestamp']).dt.minute
df['Seconds'] = pd.to_datetime(df['Timestamp']).dt.second
df['Dayofweek'] = pd.to_datetime(df['Timestamp']).dt.dayofweek
df['DayOfyear'] = pd.to_datetime(df['Timestamp']).dt.dayofyear
df['WeekOfyear'] = pd.to_datetime(df['Timestamp']).dt.weekofyear
df['Likes'] = df['Likes'].str.replace(',','')
df['Likes'] = df['Likes'].replace({'K': '*1e3', 'M': '*1e6'}, regex=True).map(pd.eval)
df['Popularity'] = df['Popularity'].str.replace(',','')
df['Popularity'] = df['Popularity'].replace({'K': '*1e3', 'M': '*1e6'}, regex=True).map(pd.eval)
agg_func = {
'Comments': ['sum'],
'Likes': ['sum'],
'Popularity': ['sum'],
'Followers': ['sum']
}
agg_name = df.groupby(['Year','Name']).agg(agg_func)
agg_name.columns = [ 'YN_' + ('_'.join(col).strip()) for col in agg_name.columns.values]
agg_name.reset_index(inplace=True)
df = df.merge(agg_name, on=['Year','Name'], how='left')
del agg_name
agg_func = {
'Comments': ['mean','min','max','sum','median'],
'Likes': ['mean','min','max','sum','median'],
'Popularity': ['mean','min','max','sum','median'],
'Followers': ['mean','sum']
}
agg_name = df.groupby('Name').agg(agg_func)
agg_name.columns = [ 'Name_' + ('_'.join(col).strip()) for col in agg_name.columns.values]
agg_name.reset_index(inplace=True)
df = df.merge(agg_name, on=['Name'], how='left')
del agg_name
df['Followers / Popularity'] = df['Followers'] / df['Popularity']
df['Followers / Comments'] = df['Followers'] / df['Comments']
df['Followers / Likes'] = df['Followers'] / df['Likes']
df['Popularity / Followers'] = df['Popularity'] / df['Followers']
df['Popularity / Comments'] = df['Popularity'] / df['Comments']
df['Popularity / Likes'] = df['Popularity'] / df['Likes']
df['Likes / Followers'] = df['Likes'] / df['Followers']
df['Likes / Popularity'] = df['Likes'] / df['Popularity']
df['Likes / Comments'] = df['Likes'] / df['Comments']
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df['Name'] = le.fit_transform(df['Name'])
df = pd.get_dummies(df, columns=['Genre'], drop_first=True)
df.drop(['Country','Song_Name','Timestamp'], axis=1, inplace=True)
train_df = df[df['Views'].isnull()!=True]
test_df = df[df['Views'].isnull()==True]
test_df.drop('Views', axis=1, inplace=True)
train_df = train_df.replace([np.inf, -np.inf], np.nan)
train_df = train_df.fillna(0)
test_df = test_df.replace([np.inf, -np.inf], np.nan)
test_df = test_df.fillna(0)
train_df.shape, test_df.shape
###Output
_____no_output_____
###Markdown
Train test split
###Code
X = train_df.drop(labels=['Views'], axis=1)
y = train_df['Views'].values
from sklearn.model_selection import train_test_split
X_train, X_cv, y_train, y_cv = train_test_split(X, y, test_size=0.25, random_state=42)
X_train.shape, y_train.shape, X_cv.shape, y_cv.shape
X_train.tail(2)
###Output
_____no_output_____
###Markdown
Build the model
###Code
from math import sqrt
from sklearn.metrics import accuracy_score, mean_squared_error
from sklearn.ensemble import GradientBoostingRegressor
gb = GradientBoostingRegressor(verbose=1, learning_rate=0.2, n_estimators=1000, random_state=42, subsample=0.8)
gb.fit(X_train, y_train)
y_pred = gb.predict(X_cv)
print('RMSE', sqrt(mean_squared_error(y_cv, y_pred)))
feature_imp = pd.DataFrame(sorted(zip(gb.feature_importances_, X.columns), reverse=True)[:60], columns=['Value','Feature'])
plt.figure(figsize=(12,10))
sns.barplot(x="Value", y="Feature", data=feature_imp.sort_values(by="Value", ascending=False))
plt.title('Gradient Boosting Features')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Predict on test set
###Code
Xtest = test_df
from sklearn.model_selection import KFold
errgb = []
y_pred_totgb = []
fold = KFold(n_splits=10, shuffle=True, random_state=42)
for train_index, test_index in fold.split(X):
X_train, X_test = X.loc[train_index], X.loc[test_index]
y_train, y_test = y[train_index], y[test_index]
gb = GradientBoostingRegressor(learning_rate=0.2, n_estimators=1000, random_state=42, subsample=0.8)
gb.fit(X_train, y_train)
y_pred = gb.predict(X_test)
print('RMSE', sqrt(mean_squared_error(y_test, y_pred)))
errgb.append(sqrt(mean_squared_error(y_test, y_pred)))
p = gb.predict(Xtest)
y_pred_totgb.append(p)
np.mean(errgb)
final = np.mean(y_pred_totgb,0).round()
final
for i in range(10):
sub = pd.DataFrame({'Unique_ID':test['Unique_ID'],'Views': y_pred_totgb[i]})
sub.to_excel('fold_'+str(i)+'_Output.xlsx', index=False)
sub = pd.DataFrame({'Unique_ID':test['Unique_ID'],'Views': final})
sub.head()
from IPython.display import HTML
import pandas as pd
import numpy as np
import base64
def create_download_link(df, title = "Download CSV file", filename = "submission.csv"):
csv = df.to_csv(index=False)
b64 = base64.b64encode(csv.encode())
payload = b64.decode()
html = '<a download="{filename}" href="data:text/csv;base64,{payload}" target="_blank">{title}</a>'
html = html.format(payload=payload,title=title,filename=filename)
return HTML(html)
create_download_link(sub)
###Output
_____no_output_____ |
dev/descriptives/monthly_descriptives.ipynb | ###Markdown
Descriptive Statistics
Basic statistics about monthly trends.
Last updated: 19.06.2018.
Created by: Orsi Vasarhelyi
###Code
import json
import os
import pickle
import psycopg2
import pandas as pd
import sqlalchemy
import sys
sys.path.append("..")
from connect_db import db_connection
import matplotlib.pyplot as plt
import seaborn as sns
username='ovasarhelyi'
cred_location = '/mnt/data/'+username+'/TPT_tourism/connect_db/data_creds_redshift.json.nogit'
db = db_connection.DBConnection(cred_location)
###Output
_____no_output_____
###Markdown
Read MCC data
###Code
# read the country codes in
mcc=pd.read_csv('/mnt/data/shared/mcc-mnc-table.csv')
mcc_country=mcc.drop_duplicates("MCC", keep='first')[['MCC', 'Country']]
mcc_country['mcc']=mcc_country['MCC'].astype(int)
def clean_guam(row):
if row['Country']=='Guam':
return 'United States'
else:
return row['Country']
mcc_country['Country']=mcc_country.apply(clean_guam,1)
###Output
_____no_output_____
###Markdown
Histogram about number of locations visited in Italy (by months) by country
###Code
#Histogram about number of locations visited in Italy (by months)
query1="""select
extract(month from time_stamp) as given_month,
mcc,
count(distinct location_id) as num_loc_in_italy
from tpt.tuscany.vodafone
group by extract(month from time_stamp), mcc
order by given_month, mcc desc"""
num_loc_month= db.sql_query_to_data_frame(query1)
num_loc_month.head()
s1 = pd.DataFrame.from_dict({'num_ppl_in_italy':[0,0],
'index':[3,4]})
s2 = pd.DataFrame.from_dict({'num_loc_in_italy':[0,0],
'index':[3,4]})
b=pd.DataFrame(num_loc_month.groupby("given_month")['num_loc_in_italy'].sum())
ax1=b.append(s2.set_index('index')).sort_index().plot(kind='bar')
ax1.set_ylim(ymin=0)
sns.despine()
plt.legend("")
country_loc=pd.DataFrame(num_loc_month.groupby("mcc")['num_loc_in_italy'].sum().sort_values(ascending=False))
locs_country=country_loc.join(mcc_country.set_index('mcc'))
###Output
_____no_output_____
###Markdown
Number of people by country, month in whole Italy
###Code
query2="""select
extract(month from time_stamp) as given_month,
mcc,
count(distinct customer_id) as num_ppl_in_italy
from tpt.tuscany.vodafone
group by extract(month from time_stamp), mcc
order by given_month, mcc desc"""
num_ppl_month= db.sql_query_to_data_frame(query2)
num_ppl_month.head()
a2=pd.DataFrame(num_ppl_month.groupby("given_month")['num_ppl_in_italy'].sum())
ax2=a2.append(s1.set_index('index')).sort_index().plot(kind='bar', color='grey', title='Number of unique visitors in Tuscany per month')
ax2.set_ylim(ymin=0)
sns.despine()
plt.legend("")
b.append(s2.set_index('index')).sort_index().join(a2.append(s1.set_index('index')).sort_index()).plot(kind='bar', alpha=0.78)
sns.despine()
###Output
_____no_output_____
###Markdown
Num locations vs num people
###Code
b=pd.DataFrame(num_ppl_month.groupby("mcc")['num_ppl_in_italy'].sum())
mcc_country.index=mcc_country.mcc.index.astype(float)
ppl_loc=country_loc.join(b).join(mcc_country.set_index('mcc'))
ppl_loc.set_index('Country')[['num_loc_in_italy', 'num_ppl_in_italy']][:20].sort_values('num_ppl_in_italy', ascending=True).plot(kind='barh', figsize=(10,10))
sns.despine()
###Output
_____no_output_____ |
days/day11/Day11.ipynb | ###Markdown
Day 11
* Computational Physics (PHYS 202)
* Cal Poly, Spring 2015
* Brian E. Granger

In class
* Go over midterm
  - Discuss solutions
  - Grade distribution
* Coding tips and help
  - Difference between `return` and `print`.
  - Transforming data between different container types.
  - How to count things.
  - Comments on performance.
* Fetch today's material:
  - nbgrader fetch phys202-2015 day11
  - nbgrader fetch phys202-2015 assignment08

Coding tips and help

Difference between `return` and `print`

A function that has no `return` statement will always return `None`:
###Code
def f(x):
print(x**2)
a = f(2)
print(a)
###Output
_____no_output_____
###Markdown
If you want a function to do anything useful, you have to return something:
###Code
def g(x):
return x**2
b = g(2)
print(b)
###Output
4
###Markdown
Transforming container types String can be turned into lists and tuples:
###Code
import random
alpha = 'abcdefghijklmnopqrstuvwxyz'
l = [random.choice(alpha) for i in range(10)]
l
s = ''.join(l)
s
for c in s:
print(c)
[c.upper() for c in s]
list(s)
tuple(s)
digits = str(1001023039)
list(digits)
[int(d) for d in digits]
###Output
_____no_output_____
###Markdown
How to count things
###Code
def random_string(n):
return ''.join([random.choice(alpha) for i in range(n)])
random_string(100)
def count0(seq):
counts = {}
for s in seq:
counts[s] = seq.count(s)
return counts
rs = random_string(10)
count0(rs), rs
%timeit count0(random_string(10000))
def count1(seq):
counts = {}
for s in seq:
if s in counts:
counts[s] += 1
else:
counts[s] = 1
return counts
rs = random_string(10)
count1(rs), rs
%timeit count1(random_string(10000))
from collections import defaultdict
def count2(seq):
counts = defaultdict(int)
for s in seq:
counts[s] += 1
return counts
rs = random_string(10)
count2(rs), rs
%timeit count2(random_string(10000))
from collections import Counter
def count3(seq):
return dict(Counter(seq))
rs = random_string(10)
count3(rs), rs
%timeit count3(random_string(10000))
###Output
100 loops, best of 3: 11.6 ms per loop
###Markdown
Day 11
* Computational Physics (PHYS 202)
* Cal Poly, Spring 2015
* Brian E. Granger

In class
* Go over midterm
  - Discuss solutions
  - Grade distribution
* Coding tips and help
  - Difference between `return` and `print`.
  - Transforming data between different container types.
  - How to count things.
  - Comments on performance.
* Fetch today's material:
  - nbgrader fetch phys202-2015 day11
  - nbgrader fetch phys202-2015 assignment08

Coding tips and help

Difference between `return` and `print`

A function that has no `return` statement will always return `None`:
###Code
def f(x):
print(x**2)
a = f(2)
print(a)
###Output
None
###Markdown
If you want a function to do anything useful, you have to return something:
###Code
def g(x):
return x**2
b = g(2)
print(b)
###Output
4
###Markdown
Transforming container types String can be turned into lists and tuples:
###Code
import random
alpha = 'abcdefghijklmnopqrstuvwxyz'
l = [random.choice(alpha) for i in range(10)]
l
s = ''.join(l)
s
for c in s:
print(c)
[c.upper() for c in s]
list(s)
tuple(s)
digits = str(1001023039)
list(digits)
[int(d) for d in digits]
###Output
_____no_output_____
###Markdown
How to count things
###Code
def random_string(n):
return ''.join([random.choice(alpha) for i in range(n)])
random_string(100)
def count0(seq):
counts = {}
for s in seq:
counts[s] = seq.count(s)
return counts
rs = random_string(10)
count0(rs), rs
%timeit count0(random_string(10000))
def count1(seq):
counts = {}
for s in seq:
if s in counts:
counts[s] += 1
else:
counts[s] = 1
return counts
rs = random_string(10)
count1(rs), rs
%timeit count1(random_string(10000))
from collections import defaultdict
def count2(seq):
counts = defaultdict(int)
for s in seq:
counts[s] += 1
return counts
rs = random_string(10)
count2(rs), rs
%timeit count2(random_string(10000))
from collections import Counter
def count3(seq):
return dict(Counter(seq))
rs = random_string(10)
count3(rs), rs
%timeit count3(random_string(10000))
###Output
100 loops, best of 3: 11.6 ms per loop
###Markdown
Day 11
* Computational Physics (PHYS 202)
* Cal Poly, Spring 2015
* Brian E. Granger

In class
* Go over midterm
  - Discuss solutions
  - Grade distribution
* Coding tips and help
  - Difference between `return` and `print`.
  - Transforming data between different container types.
  - How to count things.
  - Comments on performance.
* Fetch today's material:
  - nbgrader fetch phys202-2015 day11
  - nbgrader fetch phys202-2015 assignment08

Coding tips and help

Difference between `return` and `print`

A function that has no `return` statement will always return `None`:
###Code
def f(x):
print(x**2)
a = f(2)
print(a)
###Output
None
###Markdown
If you want a function to do anything useful, you have to return something:
###Code
def g(x):
return x**2
b = g(2)
print(b)
###Output
4
###Markdown
Transforming container types String can be turned into lists and tuples:
###Code
import random
alpha = 'abcdefghijklmnopqrstuvwxyz'
l = [random.choice(alpha) for i in range(10)]
l
s = ''.join(l)
s
for c in s:
print(c)
[c.upper() for c in s]
list(s)
tuple(s)
digits = str(1001023039)
list(digits)
[int(d) for d in digits]
###Output
_____no_output_____
###Markdown
How to count things
###Code
def random_string(n):
return ''.join([random.choice(alpha) for i in range(n)])
random_string(100)
def count0(seq):
counts = {}
for s in seq:
counts[s] = seq.count(s)
return counts
rs = random_string(10)
count0(rs), rs
%timeit count0(random_string(10000))
def count1(seq):
counts = {}
for s in seq:
if s in counts:
counts[s] += 1
else:
counts[s] = 1
return counts
rs = random_string(10)
count1(rs), rs
%timeit count1(random_string(10000))
from collections import defaultdict
def count2(seq):
counts = defaultdict(int)
for s in seq:
counts[s] += 1
return counts
rs = random_string(10)
count2(rs), rs
%timeit count2(random_string(10000))
from collections import Counter
def count3(seq):
return dict(Counter(seq))
rs = random_string(10)
count3(rs), rs
%timeit count3(random_string(10000))
###Output
100 loops, best of 3: 11.6 ms per loop
|
Sesiones/Ejemplos Sesion 5/Sesion5.ipynb | ###Markdown
File manipulation
###Code
manejador_archivo = open('mbox.txt')
print(manejador_archivo)
#This open fails: the file does not exist
manejador_archivo = open('stuff.txt')
man_archivo = open('mbox-short.txt')
contador = 0
for linea in man_archivo:
contador = contador + 1
print('Contador de líneas:', contador)
manejador_archivo = open('mbox-short.txt')
inp = manejador_archivo.read()
print(len(inp))
print(inp[:20])
man_a = open('mbox-short.txt')
contador = 0
for linea in man_a:
linea = linea.rstrip() # Remove the whitespace from the right-hand side of the string
if linea.startswith('From:'):
print(linea)
man_a = open('mbox-short.txt')
for linea in man_a:
linea = linea.rstrip()
# Skip the lines we are not interested in
if not linea.startswith('From:'):
continue
# Process the line we are interested in
print(linea)
man_a = open('mbox-short.txt')
for linea in man_a:
linea = linea.rstrip()
if linea.find('@uct.ac.za') == -1: continue
print(linea)
narchivo = input('Ingresa un nombre de archivo: ')
man_a = open(narchivo)
contador = 0
for linea in man_a:
if linea.startswith('Subject:'):
contador = contador + 1
print('Hay', contador, 'lineas de asunto (subject) en', narchivo)
narchivo = input('Ingresa un nombre de archivo: ')
try:
man_a = open(narchivo)
except:
print('No se puede abrir el archivo:', narchivo)
exit()
contador = 0
for linea in man_a:
if linea.startswith('Subject:'):
contador = contador + 1
print('Hay', contador, 'lineas de asunto (subject) en', narchivo)
fsal = open('salida.txt', 'w')
print(fsal)
linea1 = "Aquí está el zarzo,\n"
fsal.write(linea1)
linea2 = 'el símbolo de nuestra tierra.\n'
fsal.write(linea2)
fsal.close()
#Create a new folder
import os
os.makedirs("Practica")
#List the contents of a folder
os.listdir("./")
#Show the current working directory
os.getcwd()
#Show the size in bytes of the file passed as a parameter
os.path.getsize("Practica")
#Is the given parameter a file?
os.path.isfile("Practica")
#Is the given parameter a folder?
os.path.isdir("Practica")
#Rename a file
os.rename("Practica","PracticaV1")
os.listdir("./")
#Delete a file
os.chdir("PracticaV1")
archivo = open(os.getcwd()+'/datos.txt', 'w')
archivo.write("Se Feliz!")
archivo.close()
os.getcwd()
os.listdir("./")
os.remove(os.getcwd()+"/datos.txt")
os.listdir("./")
###Output
_____no_output_____ |
Exercise.ipynb | ###Markdown
###Code
!git clone https://github.com/knalin55/Object-Detection-and-Marking
import cv2
import numpy as np
from google.colab.patches import cv2_imshow
import os
import PIL
from PIL import Image
from keras.utils import to_categorical
from keras.optimizers import SGD
import random
import keras
from keras.layers import Layer, UpSampling2D, MaxPooling2D, ReLU, Activation, Conv2D,MaxPooling2D, Softmax, Concatenate, Input, Flatten, Dense, Convolution2D, BatchNormalization, Activation, Reshape,Conv2D
from keras.models import Model, load_model, save_model
from keras.regularizers import l2
import keras.backend as K
from keras.models import load_model
from skimage.feature import match_template
from numpy import reshape
from matplotlib.pyplot import imshow
import math
###Output
_____no_output_____
###Markdown
Libraries

Major libraries used:
* OpenCV (cv2): a computer vision library with a variety of advanced functions.
* Pillow (PIL): an image processing library mostly used for simpler tasks such as cropping or adding basic filters.
* NumPy (np): a library used mostly for handling arrays and matrices.
* Keras: used for building model architectures for neural networks in Python.
* Math: a library used for mathematical tasks.

How computers interpret images

Computers interpret an image as a matrix of pixel values. The value of each pixel is a three-element array, holding one value each for Blue, Green and Red (the primary colours), though not necessarily in that order. These colour values range from 0 to 255.
###Code
#Reading image with Pillow
im = Image.open('/content/Object-Detection-and-Marking/Data/3.jpg')
image_pil = np.asarray(im)
print("This is how Pillow see the image")
#Showing image using openCV format
cv2_imshow(image_pil)
#Reading image with openCV
im = cv2.imread('/content/Object-Detection-and-Marking/Data/3.jpg')
print("This is how openCV see the image")
#Showing image using openCV format
cv2_imshow(im)
###Output
This is how openCV see the image
###Markdown
What is the difference between reading an image using openCV and Pillow?
###Code
#Answer: openCV use [Blue, Green, Red] channel while Pillow use [Red, Green, Blue] channel
###Output
_____no_output_____
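###Markdown
A minimal numerical check of this answer: if the two readers differ only in channel order, reversing OpenCV's last axis should reproduce the Pillow array (up to possible tiny JPEG-decoding differences). The sketch below reuses the `im` and `image_pil` arrays defined above.
###Code
# Minimal sketch: `im` holds the BGR array from cv2.imread and `image_pil` the RGB array
# from Pillow. Reversing the channel axis of `im` should make the two arrays match.
diff = np.abs(im[:, :, ::-1].astype(int) - image_pil.astype(int))
print("max per-pixel difference:", diff.max())
###Output
_____no_output_____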
###Markdown
Image Matrix
###Code
print("This is how computer interprets image using openCV")
print("Shape of matrix is " +str(im.shape))
print(im)
###Output
This is how computer interprets image using openCV
Shape of matrix is (225, 225, 3)
[[[255 255 255]
[255 255 255]
[255 255 255]
...
[255 255 255]
[255 255 255]
[255 255 255]]
[[255 255 255]
[255 255 255]
[255 255 255]
...
[255 255 255]
[255 255 255]
[255 255 255]]
[[255 255 255]
[255 255 255]
[255 255 255]
...
[255 255 255]
[255 255 255]
[255 255 255]]
...
[[255 255 255]
[255 255 255]
[255 255 255]
...
[255 255 255]
[255 255 255]
[255 255 255]]
[[255 255 255]
[255 255 255]
[255 255 255]
...
[255 255 255]
[255 255 255]
[255 255 255]]
[[255 255 255]
[255 255 255]
[255 255 255]
...
[255 255 255]
[255 255 255]
[255 255 255]]]
###Markdown
Check the image matrix of Pillow by writing code in the cell below or just un-commenting the whole cell.
###Code
#Write code here
#Un-comment below
#im = Image.open('/content/Object-Detection-and-Marking/Data/3.jpg')
#image_pil = np.asarray(im)
#print("This is how computer interprets image using Pillow")
#print("Shape of matrix is " +str(im.shape))
#print(im)
###Output
_____no_output_____
###Markdown
How to change the shape of image matrix from (Height, Width, 3) to (Height, Width).
###Code
#Converting the colour image to a grayscale image so that each pixel value is a single element instead of three values (BGR)
#To get this single value we use Gray = 0.114 * Blue + 0.299 * Red + 0.587 * Green
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
cv2_imshow(imgray)
print("Shape of matrix is " +str(imgray.shape))
print(imgray)
###Output
_____no_output_____
###Markdown
Create your own single-channel image by using the image matrix and averaging the BGR values. You can write code in the cell below or just un-comment the whole cell.
###Code
#Write code here
#Un-comment below
#im = cv2.imread('/content/Object-Detection-and-Marking/Data/3.jpg')
#new_im=[]
#for key in im:
# new_element=[]
# for i in key:
# weigh= np.ndarray((3,), buffer=np.array([1, 0.114 , 0.587 , 0.299]), offset=np.int_().itemsize, dtype=float)
# avg= np.average(i, weights=weigh )
# new_element.append(avg)
# new_im.append(new_element)
#new_im= np.asarray(new_im)
#cv2_imshow(new_im)
###Output
_____no_output_____
###Markdown
Some methods for Object Counting without a Neural Network A possible way to count balls in an image is to use edge-detection based object detection methods. **Contour Method** Contours are defined as the curves joining all the continuous points along a boundary that have the same intensity. Here, we use contours for object detection.
###Code
i=2
im = cv2.imread("/content/Object-Detection-and-Marking/Data/"+str(i)+'.jpg')
output = im.copy()
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
#Grayscale pixel values greater than the threshold will be set to the maximum value (250), the rest to 0.
ret, thresh = cv2.threshold(imgray, 220, 250, 0)
#Find Contours in mage
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
#Draw the Contours on image
cv2.drawContours(output, contours, -1, (0,255,0), 3)
print("No. of Contours: "+ str(len(contours)))
cv2_imshow(output)
###Output
No. of Contours: 9
###Markdown
For which of the images (select an image by changing i to 1-5) does the Contour method produce the best and the worst results?
###Code
#i= write number here
im = cv2.imread("/content/Object-Detection-and-Marking/Data/"+str(i)+'.jpg')
output = im.copy()
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 220, 250, 0)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(output, contours, -1, (0,255,0), 3)
print("No. of Contours: "+ str(len(contours)))
cv2_imshow(output)
###Output
No. of Contours: 9
###Markdown
**Hough Gradient Method**
###Code
i=2
im = cv2.imread("/content/Object-Detection-and-Marking/Data/"+str(i)+'.jpg')
output = im.copy()
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
# Apply Hough transform on the grayscale image.
circles = cv2.HoughCircles(imgray, cv2.HOUGH_GRADIENT, 1.2, 80,\
param1 = 50,param2 = 30, minRadius = 0, maxRadius = 0)
if circles is not None:
print("No. of Circles: "+ str(len(circles[0])))
circles = np.round(circles[0, :]).astype("int")
# loop over the (x, y) coordinates and radius of the circles
for (x, y, r) in circles:
# corresponding to the center of the circle
cv2.circle(output, (x, y), r, (0, 255, 0), 4)
cv2_imshow(output)
###Output
No. of Circles: 7
###Markdown
For which of the images (select an image by changing i to 1-5) does the Hough Gradient Method produce the best and the worst results?
###Code
#i= write number here
im = cv2.imread("/content/Object-Detection-and-Marking/Data/"+str(i)+'.jpg')
output = im.copy()
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
# Apply Hough transform on the grayscale image.
circles = cv2.HoughCircles(imgray, cv2.HOUGH_GRADIENT, 1.2, 80,\
param1 = 50,param2 = 30, minRadius = 0, maxRadius = 0)
if circles is not None:
print("No. of Circles: "+ str(len(circles[0])))
circles = np.round(circles[0, :]).astype("int")
# loop over the (x, y) coordinates and radius of the circles
for (x, y, r) in circles:
# corresponding to the center of the circle
cv2.circle(output, (x, y), r, (0, 255, 0), 4)
cv2_imshow(output)
###Output
No. of Circles: 7
###Markdown
So, what are some of the challenges we noticed here? Slides **Object counting with a Neural Network** What other option do we have, then? Train a neural network.
###Code
#Let's use MobileNetV2 architecture (a pre built CNN architecture)
mob = keras.applications.MobileNetV2(
input_shape=(60,60,1),
alpha=1.0,
include_top=True,
weights=None,
input_tensor=None,
pooling=None,
classes=2)
mob.summary()
sgd= SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
mob.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
#Sample_CNN_model.
input1= Input((60,60,1))
conv1 = Conv2D(32, (4,4), padding='same')(input1)
act1 = Activation("relu")(conv1)
conv2 = Conv2D(32, (3,3), padding='same')(act1)
act2 = Activation("relu")(conv2)
maxp = MaxPooling2D((2,2))(act2)
at = Flatten()(maxp)
dense = Dense(64, activation='sigmoid')(at)
out = Dense(2, activation='softmax')(dense)
model = Model(inputs= input1, outputs= out)
model.summary()
sgd= SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
###Output
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 60, 60, 1)] 0
_________________________________________________________________
conv2d (Conv2D) (None, 60, 60, 32) 544
_________________________________________________________________
activation (Activation) (None, 60, 60, 32) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 60, 60, 32) 9248
_________________________________________________________________
activation_1 (Activation) (None, 60, 60, 32) 0
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 30, 30, 32) 0
_________________________________________________________________
flatten (Flatten) (None, 28800) 0
_________________________________________________________________
dense (Dense) (None, 64) 1843264
_________________________________________________________________
dense_1 (Dense) (None, 2) 130
=================================================================
Total params: 1,853,186
Trainable params: 1,853,186
Non-trainable params: 0
_________________________________________________________________
###Markdown
Now it's your turn to write your own CNN model; name it exercise_model.
###Code
#Write your code here.
###Output
_____no_output_____
###Markdown
HOW WE CREATE THE DATASET FOR TRAINING We manually crop diameter*diameter windows enclosing the circular ends of individual balls, such that the center of the circle (ball) and of the square (window) coincide approximately. In the case of partially visible balls one just tries to make its center coincide with the window's, even if another rod gets inside the window. Save this data in a folder. Then diameter*diameter crops are saved from the original image by shifting the window a few pixels at a time (4 pixels in the code below), with the restriction that the crop is not already present in the manually cropped images folder. The exact step size is of no particular concern; we are just trying to create a large dataset while covering the whole image. A dictionary is also created with the image names as keys and label 0 as values. We won't run this code as the dataset is too big, but you can try it later from our github repository. (P.S. We haven't uploaded the training data, soon we will)
###Code
directory = "Data/label_1" #Location of folder where we kept the manually cropped images.
path1 = "Data/label_0" #Location of folder where we will save our label '0' images
labels={}
n=0
count=0
for i in range(np.int64((im.size[0]- 60)/4) + 1):
for k in range(np.int64((im.size[1] - 60)/4) + 1):
count = 0
x = ""
y = ""
left = 4*i
right = 4*i + 60
top = 4*k
bottom = 4*k + 60
m = im.crop((left, top, right, bottom))
for files in os.listdir(directory): #Here we check whether there is any common image in data crated manually and image created above code, if there is, then we don't save it in the folder where we are saving our label '0' images.
file = Image.open(os.path.join(directory,files))
if np.array_equal(np.asarray(m), np.asarray(file)):
break
else:
count=count+1
if count == 189: #Here, 416 is the number of label '1' images we got while manually cropping.
x = x.join(['1_', str(n)]) #The images would be saved as 1_1.jpg, 1_2.jpg,... . Here, '1_' is insignificant.
labels[x] = '0'
y = y.join([x, '.jpeg'])
m = m.save(os.path.join(path1, y))
n = n+1
num_lab_0=n-1 #We will need this number in generating the training dataset
###Output
_____no_output_____
###Markdown
Data created manually is labelled 1, but there is far less of it than data labelled 0. So, what can we do to solve this class imbalance? A solution can be to rotate and transpose the manually created data to make new label 1 data: the rotations and flips give 7 additional images for each manually cropped one.
###Code
im=Image.open('/content/Object-Detection-and-Marking/Data/2.jpg')
print("Actual image.")
im
###Output
Actual image.
###Markdown
Rotate Image
###Code
print("Rotated image.")
new_im=im.rotate(90)
new_im
###Output
Rotated image.
###Markdown
Transpose of image
###Code
print("Transposed image.")
new_im= im.transpose(Image.FLIP_LEFT_RIGHT)
new_im
###Output
Transposed image.
###Markdown
We won't access training data here but you can try it later from our github repository. (P.S. We haven't uploaded the training data, soon we will)
###Code
# Code to create augmented label '1' data (rotations and flips of the manual crops) so that the classes are better balanced
for files in os.listdir(directory):
file = Image.open(os.path.join(directory,files))
file_t = file.transpose(Image.FLIP_LEFT_RIGHT)
for i in range(4):
x = ""
p = ""
x = x.join(["1_", str(n)])
p = p.join([x, '.jpeg'])
rot = file.rotate(90*i)
labels[x]='1'
rot = rot.save(os.path.join(path1,p))
n=n+1
for i in range(4):
x = ""
p = ""
x = x.join(["1_", str(n)])
p = p.join([x, '.jpeg'])
rot = file_t.rotate(90*i)
labels[x]='1'
rot = rot.save(os.path.join(path1,p))
n=n+1
###Output
_____no_output_____
###Markdown
We will now train the pre-built model MobileNetV2 Still the data with label 0 is much larger than the data with label 1. Should we randomly choose from the label 0 data to make their ratio comparable (that is, 1:3)?
###Code
#As the data with label 0 is much larger than the data with label 1, we randomly choose from the label 0 data to make
# the ratio roughly 1:3. But we don't want to miss the surrounding images of the label 1 data for better learning,
# hence we redo the choosing after every two epochs.
for j in range(15):
lis = random.choices(range(num_lab_0), k=6400)
for i in range(num_lab_0, n):
lis.append(i)
lis = random.sample(lis, len(lis))
image_train = np.zeros((len(lis), 60, 60, 1))
lab = np.zeros((len(lis), 1))
i=0
for keys in lis:
a = ""
b = ""
a = a.join(["1_", str(keys), ".jpeg"])
b = b.join(["1_", str(keys)])
path_im = os.path.join(path1, a)
img_t = np.asarray(Image.open(path_im).convert('L')).reshape((60,60,1))
image_train[i] = img_t
lab[i] = labels[b]
i=i+1
lab = to_categorical(lab, 2)
#We take 10% of the training set as validation data
mob.fit(x= image_train , y=lab, epochs=2, verbose=1, shuffle=True, validation_split= 0.1, batch_size=28)
#Saving the trained model
mob.save('steel_bar_count.h5')
###Output
_____no_output_____
###Markdown
Similarly you can save your trained model (Exercise_model) too. Testing the model
###Code
#model path
model = load_model(r'/content/Object-Detection-and-Marking/ball_count_MobnetV2.h5')
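# Similarly, the model written in the earlier exercise could be saved and loaded back
# (a sketch, assuming it was trained and named exercise_model):
# exercise_model.save('exercise_model.h5')
# exercise_model = load_model('exercise_model.h5')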
def reshape_img(b,x):
b = b.resize((np.int64(b.size[0]/x), np.int64(b.size[1]/x)))
left= 0
right= b.size[0] - (b.size[0]%2)
top= 0
bottom= b.size[1] - (b.size[1]%2)
b= b.crop((left, top, right, bottom))
return b
def find_radius(image):
image = np.asarray(image)
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
ab=0
ba=0
for thr in range(2, 13):
_, binary = cv2.threshold(gray, 10*thr + 5, 255, cv2.THRESH_BINARY)
max1 = []
rad = []
ind=0
for i in range(10,50):
cir = np.zeros((2*i,2*i))
circ = cv2.circle(np.asarray(cir), (i,i), radius = i, color=(255, 255), thickness=-1)
template = circ
result= match_template(binary, template)
max1.append(np.amax(result))
rad.append(i)
radius = rad[max1.index(max(max1))]
print("radius =", radius)
return radius
###Output
_____no_output_____
###Markdown
Create your custom radius-finding code by playing with the threshold on images 1.jpg and 2.jpg
###Code
#Write your code here
#Path of 1.jpg is "/content/Object-Detection-and-Marking/Data/1.jpg"
#Path of 2.jpg is "/content/Object-Detection-and-Marking/Data/2.jpg"
def find_centres(img):
radii = find_radius(img)
x=0
if radii<20:
print("Balls are too small to count.")
print(" Please give nearer image.")
quit()
if 20<=radii<=30:
x=1
elif 30<radii:
x= math.ceil(radii/30)
# sample = reshape_img(img,x)
# b = np.asarray(sample.convert('L')).reshape(sample.size[0], sample.size[1], 1)
b = reshape_img(img, x)
z= []
x= []
y= []
for i in range(np.int64((b.size[0]- 60)/3) + 1):
for k in range(np.int64((b.size[1] - 60)/3) + 1):
left = 3*i
right = 3*i + 60
top = 3*k
bottom = 3*k + 60
count=0
m = b.crop((left, top, right, bottom))
m = reshape(np.asarray(m.convert('L')), (1,60,60,1))
pred = model.predict(m)
if pred[0][1] >= 0.3:
z.append(pred[0][1])
x.append(np.int64((left+right)/2))
y.append(np.int64((top+bottom)/2))
x1 = np.array(x)
y1= np.array(y)
z1= np.array(z)
x1.resize(len(x),1)
y1.resize(len(x),1)
z1.resize(len(x),1)
xy= np.concatenate([x1,y1,z1], axis=-1)
xy = xy.tolist()
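    # The nested loops below act as a crude non-maximum suppression:
    # within every 8x8 window only the candidate centre with the highest
    # prediction score is kept, so overlapping detections of the same ball are merged.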
for i in range(np.int64((b.size[0] - 8)/2) + 1):
for k in range(np.int64((b.size[1] - 8)/2) + 1):
inde=[]
p=0
left = 2*i
right = 2*i + 8
top = 2*k
bottom = 2*k + 8
yx=[]
for item in xy:
if left <= item[0] <= right and top <= item[1] <= bottom:
if p >= item[2]:
try:
xa = xy.index(xy[xy.index(item)])
except:
break
else:
xy.remove(xy[xy.index(item)])
elif 0 < p < item[2]:
try:
xa = xy.index(xy[xy.index(yx)])
except:
break
else:
xy.remove(xy[xy.index(yx)])
p=item[2]
else:
p=item[2]
yx=item
xy = np.array(xy)
#marking the centres
for i in range(len(xy)):
b = cv2.circle(np.asarray(b), (np.int64(xy[i][0]),np.int64(xy[i][1])), radius = 0, color=(255, 0, 0), thickness=2)
img = Image.fromarray(b, 'RGB')
#byte_io = BytesIO()
img.save('output.jpeg')
print("Number of balls = ", len(xy))
#image path
img= Image.open(r'/content/Object-Detection-and-Marking/Data/balls.jpeg')
find_centres(img)
Image.open("/content/Object-Detection-and-Marking/output_MobNetV2.jpeg")
###Output
_____no_output_____
###Markdown
Let's see another example: trying to count a stack of steel rods
###Code
#model path
model = load_model(r'/content/Object-Detection-and-Marking/steel_bar_coun.h5')
def find_radius(image):
image = np.asarray(image)
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
ab=0
ba=0
for thr in range(2, 13):
_, binary = cv2.threshold(gray, 10*thr + 5, 25, cv2.THRESH_BINARY)
max1=[]
ind=0
for i in range(1,50):
cir = np.zeros((2*i,2*i))
circ = cv2.circle(np.asarray(cir), (i,i), radius = i, color=(255, 255), thickness=-1)
template = circ
result= match_template(binary, template)
max1.append(np.amax(result))
max_l=0
max_val=0
ind=0
for i in range(6):
max1.remove(max1[0])
max_val= max(max1)
max_l= 7 + max1.index(max_val)
if ba <= max_val:
ba = max_val
ab = max_l
radii=ab/ba
print("radius =", radii)
return radii
def find_centres(img):
radii = find_radius(img)
x=0
if radii<7:
print("Rods are too small to count.")
print(" Please give nearer image.")
quit()
if 7<=radii<=10:
x=1
elif 10<radii:
x= math.ceil(radii/10)
b= reshape_img(img,x)
z= []
x= []
y= []
for i in range(np.int64((b.size[0]- 20)/2) + 1):
for k in range(np.int64((b.size[1] - 20)/2) + 1):
left = 2*i
right = 2*i + 20
top = 2*k
bottom = 2*k + 20
count=0
m = b.crop((left, top, right, bottom))
m= reshape(np.asarray(m), (1,20,20,3))
pred= model.predict(m)
if pred[0][1] >= 0.3:
z.append(pred[0][1])
x.append(np.int64((left+right)/2))
y.append(np.int64((top+bottom)/2))
x1 = np.array(x)
y1= np.array(y)
z1= np.array(z)
x1.resize(len(x),1)
y1.resize(len(x),1)
z1.resize(len(x),1)
xy= np.concatenate([x1,y1,z1], axis=-1)
xy = xy.tolist()
for i in range(np.int64((b.size[0] - 8)/2) + 1):
for k in range(np.int64((b.size[1] - 8)/2) + 1):
inde=[]
p=0
left = 2*i
right = 2*i + 8
top = 2*k
bottom = 2*k + 8
yx=[]
for item in xy:
if left <= item[0] <= right and top <= item[1] <= bottom:
if p >= item[2]:
try:
xa = xy.index(xy[xy.index(item)])
except:
break
else:
xy.remove(xy[xy.index(item)])
elif 0 < p < item[2]:
try:
xa = xy.index(xy[xy.index(yx)])
except:
break
else:
xy.remove(xy[xy.index(yx)])
p=item[2]
else:
p=item[2]
yx=item
xy = np.array(xy)
#marking the centres
for i in range(len(xy)):
b = cv2.circle(np.asarray(b), (np.int64(xy[i][0]),np.int64(xy[i][1])), radius = 0, color=(255, 0, 0), thickness=2)
img = Image.fromarray(b, 'RGB')
img.show()
img.save(r'output.jpg')
print("Number of rods = ", len(xy))
#image path
img= Image.open(r'/content/Object-Detection-and-Marking/Data/rod_test.jpg')
find_centres(img)
Image.open("/content/Object-Detection-and-Marking/output_rod_saved.jpg")
###Output
_____no_output_____
###Markdown
Ex - GroupBy Introduction: GroupBy can be summarized as Split-Apply-Combine. Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Check out this [Diagram](http://i.imgur.com/yjNkiwL.png) Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
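###Markdown
 As a warm-up, here is a minimal sketch (with a made-up toy table, not the drinks dataset) of what Split-Apply-Combine means in pandas.
###Code
import pandas as pd
# A toy frame to illustrate Split-Apply-Combine
toy = pd.DataFrame({'continent': ['EU', 'EU', 'AS'], 'beer_servings': [100, 200, 50]})
# split by 'continent', apply mean() to each group, combine the results into one Series
print(toy.groupby('continent')['beer_servings'].mean())
###Output
_____no_output_____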
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv). Step 3. Assign it to a variable called drinks.
###Code
drinks=pd.read_csv('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv')
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
drinks.groupby(['continent'])['beer_servings'].mean().sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
drinks
drinks.columns
drinks.groupby(['continent'])['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
drinks.groupby(['continent']).mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
drinks.groupby(['continent']).median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
drinks.groupby(['continent'])['spirit_servings'].agg(['mean','max','min'])
###Output
_____no_output_____
###Markdown
Particle physics data-analysis with CMS open data Welcome to the exercise where real data from CMS experiment at CERN is used for a simple particle physics data-analysis. The goal for the exercise is to discover the appearance of Z boson, determine the mass and the lifetime of Z and compare the results to the known values of these.In the exercise invariant mass values will be calculated for muon pairs that are detected in the CMS detector. A histogram will be made from the calculated invariant mass values. After that a Breit-Wigner fit will be made to the histogram. With the fitted Breit-Wigner function it will be possible to determine the mass and the lifetime of Z boson.In the end there will be also a quick look about how a pseudorapidity effects to the mass distribution of muon pairs.The structure of the exercise is following:- theory background- calculation of invariant masses- making the histogram- fitting the function to the histogram- analysing the histogram- looking the histogram of the whole range of data- the effect of pseudorapidity to the mass distributionNow take a relaxed position and read the theory background first. Understanding the theory is essential for reaching the goal and learning from the exercise. So take your time and enjoy the fascination of particle physics! Theory background Particle physics is the field of physics where structures of matter and radiation and interactions between them are studied. In experimental particle physics research is made by accelerating particles and colliding them to others or to solid targets. This is done with the _particle accelerators_. The collisions are examined with _particle detectors_.World's biggest particle accelerator, Large Hadron Collider (LHC), is located at CERN, the European Organization for Nuclear Research. LHC is 27 kilometers long circle-shaped synchrotron accelerator. LHC is located in the tunnel 100 meters underground on the border of France and Switzerland (image 1). Image 1: The LHC accelerator and the four detectors around it. © CERN [1] In 2012 the ATLAS and CMS experiments at CERN made an announcement that they had observed a new particle which mass was equal to the predicted mass of the Higgs boson. The Higgs boson and the Higgs field related to it explain the origin of the mass of particles. In 2013 Peter Higgs and François Englert, who predicted the Higgs boson theoretically, were awarded with the Nobel prize in physics. Accelerating particles The LHC mainly accelerates protons. The proton source of the LHC is a bottle of hydrogen. Protons are produced by stripping the electrons away from the hydrogen atoms with help of an electric field.Accelerating process starts already before the LHC. Before the protons arrive in the LHC they will be accelerated with electric fields and directed with magnetic fields in Linac 2, Proton Synchrotron Booster, Proton Synchrotron and Super Proton Synchrotron accelerators. After those the protons will receive energy of 450 GeV. Also the protons will be directed into constantly spreaded bunches in two different proton beams. Each beam contains 2808 proton bunches located about 7,5 meters from each others. Each of these bunches include $1\text{,}2\cdot 10^{11}$ protons.After the pre-accelerating the two proton beams are directed to the LHC accelerator. The beams will circulate in opposite directions in two different vacuum tubes. Image 2 shows a part of the LHC accelerator opened with the two vacuum tubes inside. 
Each of the proton beams will reach the energy of about 7 TeV (7000 GeV) in LHC. Image 2: Part of the LHC accelerator opened. © CERN [2] Particle collisions are created by crossing these two beams that are heading in opposite directions. When two proton bunches cross not all of the protons collide with each others. Only about 40 protons per bunch will collide and so create about 20 collisions. But because the bunches are travelling so fast, there will be about 40 million bunch crosses per one second in the LHC. That means there will be 800 million proton collisions every second in the LHC. That's a lot of action!The maximum energy in collisions is 14 TeV. However in most cases the collision energy is smaller than that because when protons collide it is really the quarks and gluons which collide with each others. So all of the energy of the protons won't be transmitted to the collision.When the protons collide the collision energy can be transformed into mass ($E=mc^2$). So it is possible that new particles are produced in the collisions. By examining and measuring the particles created in collisions, researchers try to understand better for example the dark matter, antimatter and the constitution of all matter.In image 3 there is a visualisation of some particles created in one collision event. These particles are detected with the CMS detector. Image 3: A visualised collision event. Video The acceleration and collision processes are summarised well in the short video below. Watch the video from the start until 1:15 to get a picture about these processes. You can start the video by running the code cell below (click the cell and then press CTRL + ENTER).
###Code
from IPython.display import HTML
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/pQhbhpU9Wrg" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>')
###Output
_____no_output_____
###Markdown
Examining particle collisions Particle collisions are examined with _particle detectors_. In LHC particle beams are crossed in four different sections. These sections are the locations of four particle detectors of LHC: ATLAS, LHCb, ALICE and CMS (check the image 1). This exercise focuses on the CMS detector and on the data it collects.CMS, the Compact Muon Solenoid, is a general-purpose detector. Goals of the CMS are for example studying the standard model, searching for extra dimensions and searching for particles that could make up dark matter.Simplified, the goal of the CMS detector is to detect particles that are created in collisions and measure different quantities from them. The detector consists of different detectors that can detect and measure different particles. The structure of the CMS detector is shown in the image 4. Image 4: The CMS detector opened. © CERN [3] The detectors form an onion-like structure to the CMS. This structure ensures that as many as possible particles from the collision is detected.Different particles act differently in the detectors of the CMS. Image 5 shows the cross-section of the CMS detector. The particle beams would travel in and out from the plane. Image 5 also demonstrates how different particles act in the CMS. Image 5: The cross-section of the CMS and different particle interactions in it. © CERN [4] Innermost part is the silicon tracker. The silicon tracker makes it possible to reconstruct trajectories of charged particles. Charged particles interact electromagnetically with the tracker and make the tracker to create an electric pulse. An intense magnetic field bends the trajectories of the charged particles. With the curvature of the trajectories shown by the pulses created in the tracker, it is possible to calculate the momenta of the charged particles.Particle energies can be measured with help of the calorimeters. Electrons and photons will stop to the Electromagnetic Calorimeter (ECAL). Hadrons, for example protons or neutrons, will pass through the ECAL but will be stopped in the Hadron Calorimeter (HCAL).ECAL is made from lead tungstate crystals that will produce light when electrons and photons pass through them. The amount of light produced is propotional to the energy of the particle. So it is possible to determine the energy of the particle stopped in ECAL with the photodetectors. Also the operation of the HCAL is based on detecting light.Only muons and weakly interacting particles like neutrinos will pass both the ECAL and HCAL. Energies and momenta of muons can be determined with the muon chambers. The detection of the momentum is based on electrical pulses that muons create in the different sections of the muon chambers. Energies of muons can't be measured directly, but the energies will be determined by calculating them from the other measured quantities.Neutrinos can't be detected directly with the CMS, but the existence of them can be derived with the help of missing energy. It is possible that the total energy of the particles detected in a collision is smaller than the energy before the collision. This makes a conflict with the energy conservation. The situation indicates that something has been left undetected in the collision, so there is a possibility that neutrons are created in the collision. Question 1 This exercise focuses on muons that are detected with the CMS detector. How can you describe the behaviour and detection of muons in the CMS? 
Recording the data As mentioned above, there happens about billion particle collision in the CMS in one second. The detector can detect all of these but it would be impossible to record all data from these collisions. Instead right after a collision different trigger systems will decide whether the collision has been potentially interesting or not. Non-interesting collision will not be recorded. This multi-staged triggering process reduces the amount of recorded collisions from billion to about thousand collisions per second.Data collected from collisions will be saved to AOD (Analysis Object Data) files that can be opened with the ROOT program (https://root.cern.ch/). Structures of the files are very complicated so those can't be handled for example in simple data tables.In this exercise a CSV file format is used instead of the AOD format. A CSV file is just a regular text file that contains different values separated with commas (check the image 6). These files can be easily read and handled with the Python programming language. Image 6: An example of the structure of the CSV file. Indirect detection of particles Not every particle can be detected directly as explained above with the CMS or other particle detectors. Interesting processes are often short-lived. These processes can be searched throughout long-lived processes so detecting is then indirect.For example the Z boson (the particle that mediates weak interaction) can't be detected directly with the CMS since the lifetime of the Z is very short. That means that the Z boson will decay before it even reaches the silicon detector of the CMS.How it is possible to detect the Z boson then? A solution to this question comes from the decay process of the Z boson. If particles that originate from the decay of the Z are prossible to detect, it is also possible to deduce the existence of the Z. So the detection is indirect.The Z boson can decay with 24 different ways. In this exercise only one of these is observed: the decay of the Z to the muon $\mu^-$ and the antimuon $\mu^+$. This decay process is shown as a Feynman diagram in the image 7. Image 7: The process where the Z boson decays to the muon and the antimuon. Muons that are created in the decay process can be detected with the CMS. But just the detection of the muon and the antimuon isn't a sufficient evidence of the existence of the Z. The detected two muons could originate from any of processes that will happen in the collision event (there are many different processes going on the same time). Because of this the mass of the Z is also needed to be reconstructed. The invariant mass The mass of the Z boson can be determined with the help of a concept called _invariant mass_. Let's next derive loosely an expression for the invariant mass.Let's observe a situation where a particle with mass $M$ and energy $E$ decays to two particles with masses $m_1$ and $m_2$, and energies $E_1$ and $E_2$. Energy $E$ and momentum $\vec{p}$ is concerved in the decay process so $E = E_1 +E_2$ and $\vec{p} = \vec{p}_1+ \vec{p}_2$.Particles will obey the relativistic dispersion relation:$$Mc^2 = \sqrt{E^2 - c^2\vec{p}^2}.$$And with the concervation of energy and momentum this can be shown as$$Mc^2 = \sqrt{(E_1+E_2)^2 - c^2(\vec{p_1} + \vec{p_2})^2}$$$$=\sqrt{E_1^2+2E_1E_2+E_2^2 -c^2\vec{p_1}^2-2c^2\vec{p_1}\cdot\vec{p_2}-c^2\vec{p_2}^2}$$$$=\sqrt{2E_1E_2 - 2c^2 |\vec{p_1}||\vec{p_2}|\cos(\theta)+m_1^2c^4+m_2^2c^4}. 
\qquad (1)$$The relativistic dispersion relation can be brought to the following format$$M^2c^4 = E^2 - c^2\vec{p}^2$$$$E = \sqrt{c^2\vec{p}^2 + M^2c^4},$$from where by setting $c = 1$ (very common in particle physics) and by assuming masses of the particles very small compared to momenta, it is possible to get the following:$$E = \sqrt{\vec{p}^2 + M^2} = |\vec{p}|\sqrt{1+\frac{M^2}{\vec{p}^2}}\stackrel{M<<|\vec{p}|}{\longrightarrow}|\vec{p}|.$$By applying the result $E = |\vec{p}|$ derived above and the setting $c=1$ to the equation (1), it can be reduced to the format$$M=\sqrt{2E_1E_2(1-\cos(\theta))},$$where $\theta$ is the angle between the momentum vector of the particles. With this equation it is possible to calculate the invariant mass for the particle pair if energies of the particles and the angle $\theta$ is known.In experimental particle physics the equation for the invariant mass is often in the form$$M = \sqrt{2p_{T1}p_{T2}( \cosh(\eta_1-\eta_2)-\cos(\phi_1-\phi_2) )}, \qquad (2)$$where transverse momentum $p_T$ is the component of the momentum of the particle that is perpendicular to the particle beam, $\eta$ the pseudorapidity and $\phi$ the azimuth angle. The pseudorapidity is defined with the $\theta$ with the equation $\eta = -\ln(\tan(\frac{\theta}{2}))$. So basically the pseudorapidity describes an angle. Also $\phi$ is describing an angle.Image 8 expresses $\theta$, $\eta$ and $\phi$ in the CMS detector. The particle beams will travel to the z-direction. Image 8 also shows that because of the determination of $\eta$ it goes to 0 when $\theta = 90^{\circ}$ and to $\infty$ when $\theta = 0^{\circ}$. Image 8: Quantities $\theta$, $\eta$ and $\phi$ in the CMS detector. Reconstructing the Z mass With the invariant mass it is possible to prove the existence of the Z boson. In this exercise only the decay of the Z to two muons shown in the image 7 is handled.This exercise uses data that contains collisions where two muons have been detected (among with many of other particles). It is possible to calculate an invariant mass value for the muon pair in an one collision event with the equation (2). And this can be repeated for a great amount of collision events.If the invariant mass of the muon pair is equal to the mass of the Z boson it can be verified that the muon pair originates from the deacay of the Z. And if the invariant mass of the muon pair gets some other value the muons will originate from some other processes. __So the invariant mass can be used as an evidence about the existence of the Z boson__. Identifying the Z boson In practice the identification of the Z boson goes in the following way. The invariant mass for two muons is calculaetd for the great amount of collision events. Then a histogram is made from the calcuated values. The histogram shows how many invariant mass values will be in each bin of the histogram.If a peak (many invariant mass values near the same bin compared to other bins) is formed in the histogram, it can prove that in the collision events there has been a particle which mass corresponds to the peak. After that it is possible to fit a function to the histogram and determine the mass and the lifetime of the Z from the parameters of the fitted function. Question 2 Let's practice the calculation of the invariant mass with the following task. 
Let's assume that for one muon pair the following values have been measured or determined:- $p_{T1} = 58,6914$ GeV/c- $p_{T2} = 45,7231$ GeV/c- $\eta_1 = -1,02101$- $\eta_2 = -0,37030$- $\phi_1 = 0,836256$ rad- $\phi_2 = 2,741820$ radCalculate the invariant mass value for this single pair of muons.Compare the calculated value to the mass of the Z boson reported by the Particle Data Group (PDG, http://pdg.lbl.gov/). What do you notice? Can you make sure conclusions from your notifications?That's the end of the theory part of this exercise. You can now move on to analysing the data. Calculating the invariant mass In this section the data-analysis is started by calculating the invariant masses of the muon pairs that are detected in the collision events. Analysis will be done with the Python programming language.The data used in the analysis has been collected by the CMS detector in 2011. From the original data a CSV file containing only some of the collision events and information has been derived. The original data is saved in AOD format that can be read with ROOT program. Open the link http://opendata.cern.ch/record/17 and take a look how large the original datafile is from the section _Characteristics_.From the original datafile only collision events with exactly two muons detected have been selected to the CSV file. The selection is done with the code similar to the one in the link http://opendata.cern.ch/record/552. In practice the code will select wanted values from the original file and write them to the CSV file. You can get an example of a CSV file by clicking the link http://opendata.cern.ch/record/545 and downloading one of the CSV files from the bottom of the page to your computer.The CSV file used in this excercise is already saved to the same repository than this notebook file. Now let's get the file with Python and start the analysis! Initialisation and getting the data In the code cell below needed Python modules _pandas_, _numpy_ and _matplotlib.pyplot_ are imported and named as _pd_, _np_ and _plt_. Modules are files that contain functions and commands for Python language. Modules are imported because not all of the things needed in the exercise could be done with the Python's built-in functions.Also the data file from the repository is imported and saved to the variable named `ds`. __Don't change the name of the variable.__ The file is imported with the function `read_csv()` from the pandas module. So in the code there has to be an reference to pandas module (that we named as _pd_) in front of the function.First we want to figure out how many collision events (or in this case data rows) there are in the data file. Add to the code cell below needed code to print out the number of rows of the imported file. With Python printing is done with the `print()` function where the thing that is wanted to be printed will be written inside the brackets. The length of an object can be determined with the `len()` function. Inside the brackets will be written the variable which length is wanted to be determined.You can run the code cell by clicking it active and then pressing CTRL + ENTER. Feel free to test different solutions for printing the length of the file.After you have printed the number of the rows in the datafile, you can move on to the next section. First try to figure it out yourself, but if you get stuck click on the hints below. Hint 1 The data was saved to the variable that was named as "ds". 
Hint 2 Write the function "len()" inside the function "print()": "print(len(variablename))", where variablename refers to the name of your variable.
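 Before moving on to the data, here is a minimal sketch (not part of the original exercise) of how the numbers given in Question 2 could be plugged into equation (2); it assumes the decimal commas in the listed values are read as decimal points.
###Code
import numpy as np
# Values from Question 2, written with decimal points
pt1, pt2 = 58.6914, 45.7231
eta1, eta2 = -1.02101, -0.37030
phi1, phi2 = 0.836256, 2.741820
# Equation (2): the invariant mass of the muon pair, to be compared with the Z mass from the PDG
M = np.sqrt(2*pt1*pt2*(np.cosh(eta1 - eta2) - np.cos(phi1 - phi2)))
print(M)
###Output
_____no_output_____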
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
ds = pd.read_csv('DoubleMuRun2011A.csv')
# This is a comment separated with #-symbol. Comments do not affect to the code.
# Add your own code to print the number of collision events in the datafile!
###Output
_____no_output_____
###Markdown
What does the file look like? The file was saved as a _DataFrame_ structure (practically a table) of _pandas_ module in a variable called `ds`. Next print the five first rows of the file to look properly how does the file look. With the `print()` function it is possible to print a variable inside the brackets. With the function _variablename_`.head()` you can get the first five rows of the data file by changing the _variablename_ with the name of your variable.Write a code that prints the five first rows of the data file and run the code cell by clicking it active and pressing CTRL + ENTER. First try to figure it out yourself, but if you get stuck click on the hint below. Hint Hint: "print(variablename.head())" The "\\" symbols in the output tell that a row won't fit totally on a screen but continues to next rows of the output. The first row shows which information about muon pairs the file contains. For example E1 is the energy of the first muon and E2 the energy of the second etc. Here are the different values listed:- Run = number of the run where data has been collected from- Event = number of the collision event- Type = type of the muon, global muon (G) has been measured both in the silicon tracker and muon chambers, tracker myon (T) has been measured only in the silicon tracker (these classifications are hypotheses since the type cannot be known absolutely)- E = energy of the muon- px, py, pz = different coordinates of momentum of the muon- pt = transverse momentum, that is the component of momentum of the muon that is perpendicular to the particle beams- eta = $\eta$ = pseudorapidity, a coordinate describing an angle (check the image 8)- phi = $\phi$ = azimuth angle, also a coordinate describing an angle (check the image 8)- Q = electrical charge of the muon Calculating the invariant mass Next calculate invariant mass values for muon pairs in each event with the different values from the data file. You have to write a proper equation only once since code executes the equation automatically for each row of the file.For example if you would like to sum the electrical charges of two muons for each event and save results in a variable _charges_, it could be done with the following code:```charges = ds.Q1 + ds.Q2```So you have to tell in the code that Q1 and Q2 refer to values in the variable `ds`. This can be done by adding the variable name separated with a dot in front of the value that is wanted, as in the example above.There are square root, cosine and hyperbolic cosine terms in the equation of invariant mass. Those can be fetched from the _numpy_ module that we named as _np_. You can get a square root with the function `np.sqrt()`, a cosine with `np.cos()` and a hyperbolic cosine with `np.cosh()`. Naturally inside the brackets there will be anything that is inside the square root or brackets in the equation too.__Write below a code__ that will calculate the invariant mass value for muon pairs in each collision event in the data file. Save the values calculated in the variable `invariant_mass` that is already written in the code cell. Don't change the name of the variable.After running, the code will print the first five values that are calculated. Also the output will tell if the values are correct. This is done with a small piece of code at the end of the cell.You can get help from the theory part. Also use the hints below if you get stuck. But first try different solutions by yourself and try to figure it out without the hints! 
Hint 1 Use the equation (2) of the theory part for the calculation. Hint 2 When you write different quantities of the equation to your code, remember to refer to the variable from where you want to get the quantities. For example if you would need the quantity "pt1", write "ds.pt1" to the code. Hint 3 In practice write the equation (2) to one line to the code after the text "invariant_mass = ". Remember that you can get a cosine, a hyperbolic cosine and a square root from "numpy" module with the way that is described above. Also remember to tell from which variable you want to get the different quantities (hint 2).
###Code
invariant_mass =
print('The first five values calculated (in units GeV):')
print(invariant_mass[0:5])
# Rest of the code is for checking if the values are correct. You don't have to change that.
if 14.31 <= invariant_mass.values[4] <= 14.32:
print('Invariant mass values are correct!')
else:
print('Calculated values are not yet correct. Please check the calculation one more time.')
print('Remember: don´t change the name of the variable invariant_mass.')
###Output
The first five values calculated (in units GeV):
###Markdown
Making the histogram Next let's make a histogram from the calculated invariant mass values. The histogram describes how the values are distributed, that is, how many values there has been in each bin of the histogram. In the image 9 there is a histogram that represents how the amount of cash in a wallet has been distributed for some random group of people. One can see from the histogram that for example the most common amount of cash has been 10–15 euros (12 persons have had this). Image 9: An example histogram from the distribution of the amount of cash. Creating the histogram Histograms can be created with Python with the _matplotlib.pyplot_ module that was imported before and named as _plt_. With the function `plt.hist()` it is possible to create a histogram by giving different parameters inside the brackets. These parameters can be examined from https://matplotlib.org/devdocs/api/_as_gen/matplotlib.pyplot.hist.html.Now only the first three of the parameters are needed: a variable from which values the histogram is created (_x)_, number of bins (_bins_) and the lower and upper range of the bins (_range_).Write down a code that will create a histogram from the invariant mass values that were calculated. Because this exercise focuses on the Z boson, set the range wisely to get the values near the mass of the Z boson. Use the Z boson mass value that you looked earlier from the Particle Data Group as a reference.Try what is the best amount of bins to make a clear histogram. You can try different values and see how they affect to the histogram.In the code there are already lines for naming the axes and the title of the histogram. Also there are comments marked with symbols. These comments doesn't affect to the functionality of the code.If you get stuck use the hints below. But try to create the histogram without using the hints! Hint 1 The invariant mass values that you have calculated are saved in the variable "invariant_mass". Hint 2 The function is in the form "plt.hist(x, bins=0, range=(0,0))", where x will be replaced with the name of the variable that contains the data that is wanted to be used in the histogram (in our case the invariant masses). The zeroes will be replaced with the wanted amount of bins and with the lower and upper limits of the histogram. Hint 3 Try different bin values between 50 and 200.
###Code
# Write down there a code that will create the histogram.
# Let's name the axes and the title. Don't change these.
plt.xlabel('Invariant mass [GeV]')
plt.ylabel('Number of events')
plt.title('Histogram of invariant mass values of two muons. \n')
plt.show()
###Output
_____no_output_____
###Markdown
Question 3 Describe the histogram. What information you can get from it? Fitting the function to the histogram To get information about mass and lifetime of the detected resonance, a function that describes the distribution of the invariant masses must be fitted to the values of the histogram. In our case the values follow a Breit-Wigner distribution:$$N(E) = \frac{K}{(E-M)^2 + \frac{\Gamma^2}{4}},$$where $E$ is the energy, $M$ the maximum of the distribution (equals to the mass of the particle that is detected in the resonance), $\Gamma$ the full width at half maximum (FWHM) or the decay width of the distribution and $K$ a constant.The Breit-Wigner distribution can also be expressed in the following form:$$\frac{ \frac{2\sqrt{2}M\Gamma\sqrt{M^2(M^2+\Gamma^2)} }{\pi\sqrt{M^2+\sqrt{M^2(M^2+\Gamma^2)}}} }{(E^2-M^2)^2 + M^2\Gamma^2},$$where the constant $K$ is written open.The decay width $\Gamma$ and the lifetime $\tau$ of the particle detected in the resonance are related in the following way:$$\Gamma \equiv \frac{\hbar}{\tau},$$where $\hbar$ is the reduced Planck's constant.With the code below it is possible to optimize a function that represents Breit-Wigner distribution to the values of the histogram. The function is already written in the code. It is now your task to figure out which the values of the maximum of the distribution $M$ and the full width at half maximum of the distribution $\Gamma$ could approximately be. The histogram that was created earlier will help in this task.Write these initial guesses in the code in the line `initials = [THE INITIAL GUESS FOR GAMMA, THE INITIAL GUESS FOR M, -2, 200, 13000]`. In other words replace the two comments in that line with the values that you derived.Notice that the initial guesses for parameters _a, b_ and _A_ have been already given. Other comments in the code can be left untouched. From them you can get information about what is happening in the code.After running the code Jupyter will print the values of the different parameters as a result of the optimization. Also uncertainties of the values and a graph of the fitted function are printed. The uncertainties will be received from the covariance matrix that the fitting function `curve_fit` will return. Hint 1 Think how M and gamma could be determined with the help of the histogram. Look from the histogram that you created that which would approximately be the values of M and gamma. Hint 2 If you figured out the initial guesses to be for example gamma = 12 and M = 1300 (note that these values are just random examples!) write them to the code in the form "initials = [12, 1300, -2, 200, 13000]".
###Code
# Let's limit the fit near to the peak of the histogram.
lowerlimit = 70
upperlimit = 110
bins = 100
# Let's select the invariant mass values that are inside the limitations.
limitedmasses = invariant_mass[(invariant_mass > lowerlimit) & (invariant_mass < upperlimit)]
#Let's create a histogram of the selected values.
histogram = plt.hist(limitedmasses, bins=bins, range=(lowerlimit,upperlimit))
# In y-axis the number of the events per each bin (can be got from the variable histogram).
# In x-axis the centers of the bins.
y = histogram[0]
x = 0.5*( histogram[1][0:-1] + histogram[1][1:] )
# Let's define a function that describes Breit-Wigner distribution for the fit.
# E is the energy, gamma is the decay width, M the maximum of the distribution
# and a, b and A different parameters that are used for noticing the effect of
# the background events for the fit.
def breitwigner(E, gamma, M, a, b, A):
return a*E+b+A*( (2*np.sqrt(2)*M*gamma*np.sqrt(M**2*(M**2+gamma**2)))/(np.pi*np.sqrt(M**2+np.sqrt(M**2*(M**2+gamma**2)))) )/((E**2-M**2)**2+M**2*gamma**2)
# Initial values for the optimization in the following order:
# gamma (the full width at half maximum (FWHM) of the distribution)
# M (the maximum of the distribution)
# a (the slope that is used for noticing the effect of the background)
# b (the y intercept that is used for noticing the effect of the background)
# A (the "height" of the Breit-Wigner distribution)
initials = [#THE INITIAL GUESS FOR GAMMA, #THE INITIAL GUESS FOR M, -2, 200, 13000]
# Let's import the module that is used in the optimization, run the optimization
# and calculate the uncertainties of the optimized parameters.
from scipy.optimize import curve_fit
best, covariance = curve_fit(breitwigner, x, y, p0=initials, sigma=np.sqrt(y))
error = np.sqrt(np.diag(covariance))
# Let's print the values and uncertainties that are obtained from the optimization.
print("The values and the uncertainties from the optimization")
print("")
first = "The value of the decay width (gamma) = {} +- {}".format(best[0], error[0])
second = "The value of the maximum of the distribution (M) = {} +- {}".format(best[1], error[1])
third = "a = {} +- {}".format(best[2], error[2])
fourth = "b = {} +- {}".format(best[3], error[3])
fifth = "A = {} +- {}".format(best[4], error[4])
print(first)
print(second)
print(third)
print(fourth)
print(fifth)
plt.plot(x, breitwigner(x, *best), 'r-', label='gamma = {}, M = {}'.format(best[0], best[1]))
plt.xlabel('Invariant mass [GeV]')
plt.ylabel('Number of events')
plt.title('The Breit-Wigner fit')
plt.legend()
plt.show()
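# A possible follow-up for Question 5 below (a sketch, not part of the original code):
# the lifetime can be estimated from the fitted decay width with tau = hbar / gamma,
# using hbar ~ 6.582e-25 GeV*s and the fitted gamma (best[0]) in GeV:
# tau = 6.582e-25 / best[0]
# print("Estimated lifetime of the Z boson: {} s".format(tau))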
###Output
_____no_output_____
###Markdown
Notification 1: If the fitted function does not follow the histogram well, go back and check the intial guesses. Notification 2: In fitting the so called background of the mass distribution is taken into account. The background basically means muon pairs that come from other decay processes than from the decay of the Z boson. The background is taken into account in the code in the line that follows the command `def breitwigner`. The fit is adapted in the background with the term `a*E+b+A`, where $aE + b$ takes care of the linear part of the background and $A$ the height of the background. Notification 3: Even more correct way for doing the fit and getting the values and the uncertainties from it would be to iterate the fit several times. In the iteration a next step would take initial guesses from the previous fit. Analysing the histogram Question 4 What can you say about the appearance of the Z boson based on the histogram and the fitted function?Can you define the mass of the Z with the uncertainty? How?Explain your answers with the help from the theory part and other sources. Question 5 Calculate the lifetime $\tau$ of the Z boson with the uncertainty by using the fit.Compare the calculated value to the known lifetime of the Z. What do you notice? What could possibly explain your observations? Question 6 When was the Z boson detected first time and what is the physical meaning of the Z? Question 7 If energy and momentum could be measured by infinite accuracy, would there be an one exact peak that differs from the other distribution, or an distribution in the histogram on the location of the mass of the Z? Justify your answer. The histogram of the whole data As an example let's also create a histogram from the all of the invariant masses in the data file without limiting near to the peak of the Z boson.Run the code cell below to make that kind of histogram. Notice that the y-axis is logarithmic and the x-axis has logarithms to base 10 of the values of the invariant masses ( $\log_{10}(\text{value of the mass})$ ). So for example it is possible to calculate the invariant mass value in units GeV corresponding to the x-axis value of 0.5 with the following way:$$\log_{10}(\text{mass}) = 0.5$$$$10^{\log_{10}(\text{mass})} = 10^{0.5}$$$$\text{mass} = 10^{0.5} \approx 3.1622 \text{GeV}$$
###Code
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
ds = pd.read_csv('DoubleMuRun2011A.csv')
invariant_mass_1 = ds['M']
no_bins = 500
# Let's calculate the logarithms of the masses and weighs.
inv_mass_log = np.log10(invariant_mass_1)
weights = []
for a in invariant_mass_1:
weights.append(no_bins/np.log(10)/a)
# Let's plot the weighted histogram.
plt.hist(inv_mass_log, no_bins, range=(-0.5,2.5), weights=weights, lw=0, color="darkgrey")
plt.yscale('log')
# Naming the labels and the title.
plt.xlabel('log10(invariant mass) [log10(GeV)]')
plt.ylabel('Number of the events')
plt.title('The histogram of the invariant masses of two muons \n')
plt.show()
###Output
_____no_output_____
###Markdown
Question 8 Compare the histogram that you created to the histogram published by the CMS experiment in the image 10 below. What can you notice? Use the Particle Data Group web site if needed. Image 10: The histogram of the invariant masses published by the CMS experiment. © CMS Collaboration [5] Effect of pseudorapidity to the mass distribution In this final section it will be shortly studied how does pseudorapidities of muons that are detected in the CMS detector affect to the mass distribution.As it was told in the theory part, pseudorapidity $\eta$ describes an angle of which the detected particle has differed from the particle beam (z-axis). Pseudorapidity is determined with the angle $\theta$ mentioned before with the equation$$\eta = -\ln(\tan(\frac{\theta}{2}))$$For recap the image 8 is shown again below. From the image one can see that a small pseudorapidity in practice means that the particle has differed lot from the particle beam. And vice versa: greater pseudorapidity means that the particle has continued almost among the beam line after the collision. Image 8: Quantities $\theta$, $\eta$ and $\phi$ in the CMS detector. The image 11 below shows a situation where two particle beams from left and right collide. The image shows two muons with different pseudorapidities. The muon with the smaller pseudorapidity hits the barrel part of the detector when the muon with the greater pseudorapidity goes to the endcap of the detector. There are also muon chambers in the both ends of the detector so these muons can also be detected. Image 11: Two particles with different pseudorapidities in the CMS detector. In this final section it will be studied that how does pseudorapidities of muons that are detected in the CMS detector affect to the mass distribution. For doing that, two different histograms will be made: an one with only muon pairs with small pseudorapidities and an one with great pseduorapidities. The histograms will be made with the familiar method from the earlier part of this exercise. Selecting the events Next let’s create two variables for dividing the events: `small_etas` and `great_etas`. To the first one will be saved only collision events where pseudorapidities of the both detected muons have been small (for example under 0.38). And respectively to the second those whose pseudorapidities have been great (for example over 1.52). Absolute values will be used because $\eta$ can get also negative values.Complete the code cell below by determining the variables `small_etas` and `great_etas` in a way that the division described above will be made. You will need the following functions:- `ds[condition]` selects from the variable `ds` only events which fulfill the condition written inside the brackets. There can also be more than one condition. Then the function is in the form `ds[(condition1) & (condition2)]`- an example of this could be a function where from the variable `example` only rows where the values of the columns `a` and `b` have been both greater than 8 would be selected: `example[(example.a > 8) & (example.b > 8)]`- you can get the absolute values with the function `np.absolute()` from the _numpy_ module- pseudorapidity of the first muon is `ds.eta1` and the second `ds.eta2`- ”greater than” and ”smaller than” comparisons can be made in Python straight with the symbols > and <- Python uses a dot as a decimal separator (for example 0.38) Hint 1 Remember to define the small values in a way that both eta1 and eta2 have been smaller than 0.38. And same for the large values. 
Hint 2 Remember to tell from which variable you want to get the values of the pseudorapidities (write ds.eta1 or ds.eta2). Remember to use "np." in front of the absolute value function. Hint 3 The first variable with its conditions is "great_etas = ds[(np.absolute(ds.eta1) > 1.52) & (np.absolute(ds.eta2) > 1.52)]" and the second is "small_etas = ds[(np.absolute(ds.eta1) < 0.38) & (np.absolute(ds.eta2) < 0.38)]".
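If you would like to get a feel for the $\eta$–$\theta$ relation quoted above before making the selection, the minimal sketch below (not part of the exercise; the chosen angles are only illustrative) converts a few polar angles into pseudorapidities:

```
import numpy as np

# Pseudorapidity from the polar angle theta, measured from the beam (z) axis:
# eta = -ln( tan(theta / 2) )
def pseudorapidity(theta_rad):
    return -np.log(np.tan(theta_rad / 2))

# A track perpendicular to the beam (theta = 90 degrees) has eta = 0;
# the closer the track stays to the beam line, the larger eta becomes.
for theta_deg in [90, 45, 10, 1]:
    eta = pseudorapidity(np.radians(theta_deg))
    print('theta = %3d degrees -> eta = %.2f' % (theta_deg, eta))
```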
###Code
# Let's import the needed modules.
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# With this line the data is imported and saved to the variable "ds".
ds = pd.read_csv('DoubleMuRun2011A.csv')
# Select the events where both muons have a large |eta| (cut values as in Hint 3 above).
great_etas = ds[(np.absolute(ds.eta1) > 1.52) & (np.absolute(ds.eta2) > 1.52)]
# Select the events where both muons have a small |eta|.
small_etas = ds[(np.absolute(ds.eta1) < 0.38) & (np.absolute(ds.eta2) < 0.38)]
# Let's print out some information about the selection
print('Amount of all events = %d' % len(ds))
print('Amount of the events where the pseudorapidity of the both muons have been large = %d' %len(great_etas))
print('Amount of the events where the pseudorapidity of the both muons have been small = %d' %len(small_etas))
###Output
_____no_output_____
###Markdown
Creating the histograms Run the code cell below to create separate histograms from the events with small and with great values of pseudorapidities. The cell will get the invariant masses for both of the selections and will create the histograms out of them near to the peak that refers to the Z boson.
###Code
# Let's separate the invariant masses of the large and small pseudorapidity
# events for making the histograms.
inv_mass_great = great_etas['M']
inv_mass_small = small_etas['M']
# Let's use the matplotlib.pyplot module to create a custom size
# figure where the two histograms will be plotted.
f = plt.figure(1)
f.set_figheight(15)
f.set_figwidth(15)
plt.subplot(211)
plt.hist(inv_mass_great, bins=120, range=(60,120))
plt.ylabel('great etas, number of events', fontsize=20)
plt.subplot(212)
plt.hist(inv_mass_small, bins=120, range=(60,120))
plt.ylabel('small etas, number of events', fontsize=20)
plt.xlabel('invariant mass [GeV]', fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
Particle physics data-analysis with CMS open data Welcome to the exercise where real data from CMS experiment at CERN is used for a simple particle physics data-analysis. The goal for the exercise is to discover the appearance of Z boson, determine the mass and the lifetime of Z and compare the results to the known values of these.In the exercise invariant mass values will be calculated for muon pairs that are detected in the CMS detector. A histogram will be made from the calculated invariant mass values. After that a Breit-Wigner fit will be made to the histogram. With the fitted Breit-Wigner function it will be possible to determine the mass and the lifetime of Z boson.In the end there will be also a quick look about how a pseudorapidity effects to the mass distribution of muon pairs.The structure of the exercise is following:- theory background- calculation of invariant masses- making the histogram- fitting the function to the histogram- analysing the histogram- looking the histogram of the whole range of data- the effect of pseudorapidity to the mass distributionNow take a relaxed position and read the theory background first. Understanding the theory is essential for reaching the goal and learning from the exercise. So take your time and enjoy the fascination of particle physics! Theory background Particle physics is the field of physics where structures of matter and radiation and interactions between them are studied. In experimental particle physics research is made by accelerating particles and colliding them to others or to solid targets. This is done with the _particle accelerators_. The collisions are examined with _particle detectors_.World's biggest particle accelerator, Large Hadron Collider (LHC), is located at CERN, the European Organization for Nuclear Research. LHC is 27 kilometers long circle-shaped synchrotron accelerator. LHC is located in the tunnel 100 meters underground on the border of France and Switzerland (image 1). Image 1: The LHC accelerator and the four detectors around it. © CERN [1] In 2012 the ATLAS and CMS experiments at CERN made an announcement that they had observed a new particle which mass was equal to the predicted mass of the Higgs boson. The Higgs boson and the Higgs field related to it explain the origin of the mass of particles. In 2013 Peter Higgs and François Englert, who predicted the Higgs boson theoretically, were awarded with the Nobel prize in physics. Accelerating particles The LHC mainly accelerates protons. The proton source of the LHC is a bottle of hydrogen. Protons are produced by stripping the electrons away from the hydrogen atoms with help of an electric field.Accelerating process starts already before the LHC. Before the protons arrive in the LHC they will be accelerated with electric fields and directed with magnetic fields in Linac 2, Proton Synchrotron Booster, Proton Synchrotron and Super Proton Synchrotron accelerators. After those the protons will receive energy of 450 GeV. Also the protons will be directed into constantly spreaded bunches in two different proton beams. Each beam contains 2808 proton bunches located about 7,5 meters from each others. Each of these bunches include $1\text{,}2\cdot 10^{11}$ protons.After the pre-accelerating the two proton beams are directed to the LHC accelerator. The beams will circulate in opposite directions in two different vacuum tubes. Image 2 shows a part of the LHC accelerator opened with the two vacuum tubes inside. 
Each of the proton beams will reach the energy of about 7 TeV (7000 GeV) in LHC. Image 2: Part of the LHC accelerator opened. © CERN [2] Particle collisions are created by crossing these two beams that are heading in opposite directions. When two proton bunches cross not all of the protons collide with each others. Only about 40 protons per bunch will collide and so create about 20 collisions. But because the bunches are travelling so fast, there will be about 40 million bunch crosses per one second in the LHC. That means there will be 800 million proton collisions every second in the LHC. That's a lot of action!The maximum energy in collisions is 14 TeV. However in most cases the collision energy is smaller than that because when protons collide it is really the quarks and gluons which collide with each others. So all of the energy of the protons won't be transmitted to the collision.When the protons collide the collision energy can be transformed into mass ($E=mc^2$). So it is possible that new particles are produced in the collisions. By examining and measuring the particles created in collisions, researchers try to understand better for example the dark matter, antimatter and the constitution of all matter.In image 3 there is a visualisation of some particles created in one collision event. These particles are detected with the CMS detector. Image 3: A visualised collision event. Video The acceleration and collision processes are summarised well in the short video below. Watch the video from the start until 1:15 to get a picture about these processes. You can start the video by running the code cell below (click the cell and then press CTRL + ENTER).
###Code
from IPython.display import HTML
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/pQhbhpU9Wrg" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>')
###Output
_____no_output_____
###Markdown
Examining particle collisions Particle collisions are examined with _particle detectors_. In the LHC the particle beams are crossed in four different sections. These sections are the locations of the four particle detectors of the LHC: ATLAS, LHCb, ALICE and CMS (check image 1). This exercise focuses on the CMS detector and on the data it collects. CMS, the Compact Muon Solenoid, is a general-purpose detector. The goals of CMS include, for example, studying the standard model, searching for extra dimensions and searching for particles that could make up dark matter. Simplified, the goal of the CMS detector is to detect particles that are created in collisions and measure different quantities from them. The detector consists of different sub-detectors that can detect and measure different particles. The structure of the CMS detector is shown in image 4. Image 4: The CMS detector opened. © CERN [3] The sub-detectors form an onion-like structure in CMS. This structure ensures that as many particles from the collision as possible are detected. Different particles act differently in the sub-detectors of CMS. Image 5 shows the cross-section of the CMS detector. The particle beams would travel in and out from the plane. Image 5 also demonstrates how different particles act in CMS. Image 5: The cross-section of the CMS and different particle interactions in it. © CERN [4] The innermost part is the silicon tracker. The silicon tracker makes it possible to reconstruct the trajectories of charged particles. Charged particles interact electromagnetically with the tracker and make it produce an electric pulse. An intense magnetic field bends the trajectories of the charged particles. From the curvature of the trajectories shown by the pulses created in the tracker, it is possible to calculate the momenta of the charged particles. Particle energies can be measured with the help of the calorimeters. Electrons and photons will stop in the Electromagnetic Calorimeter (ECAL). Hadrons, for example protons or neutrons, will pass through the ECAL but will be stopped in the Hadron Calorimeter (HCAL). The ECAL is made from lead tungstate crystals that produce light when electrons and photons pass through them. The amount of light produced is proportional to the energy of the particle, so it is possible to determine the energy of a particle stopped in the ECAL with the photodetectors. The operation of the HCAL is also based on detecting light. Only muons and weakly interacting particles like neutrinos will pass through both the ECAL and HCAL. The energies and momenta of muons can be determined with the muon chambers. The detection of the momentum is based on the electrical pulses that muons create in the different sections of the muon chambers. The energies of muons can't be measured directly; instead they are determined by calculating them from the other measured quantities. Neutrinos can't be detected directly with CMS, but their existence can be inferred with the help of missing energy. It is possible that the total energy of the particles detected in a collision is smaller than the energy before the collision. This conflicts with energy conservation. The situation indicates that something has been left undetected in the collision, so there is a possibility that neutrinos were created in the collision. Question 1 This exercise focuses on muons that are detected with the CMS detector. How can you describe the behaviour and detection of muons in CMS?
Recording the data As mentioned above, there happens about billion particle collision in the CMS in one second. The detector can detect all of these but it would be impossible to record all data from these collisions. Instead right after a collision different trigger systems will decide whether the collision has been potentially interesting or not. Non-interesting collision will not be recorded. This multi-staged triggering process reduces the amount of recorded collisions from billion to about thousand collisions per second.Data collected from collisions will be saved to AOD (Analysis Object Data) files that can be opened with the ROOT program (https://root.cern.ch/). Structures of the files are very complicated so those can't be handled for example in simple data tables.In this exercise a CSV file format is used instead of the AOD format. A CSV file is just a regular text file that contains different values separated with commas (check the image 6). These files can be easily read and handled with the Python programming language. Image 6: An example of the structure of the CSV file. Indirect detection of particles Not every particle can be detected directly as explained above with the CMS or other particle detectors. Interesting processes are often short-lived. These processes can be searched throughout long-lived processes so detecting is then indirect.For example the Z boson (the particle that mediates weak interaction) can't be detected directly with the CMS since the lifetime of the Z is very short. That means that the Z boson will decay before it even reaches the silicon detector of the CMS.How it is possible to detect the Z boson then? A solution to this question comes from the decay process of the Z boson. If particles that originate from the decay of the Z are prossible to detect, it is also possible to deduce the existence of the Z. So the detection is indirect.The Z boson can decay with 24 different ways. In this exercise only one of these is observed: the decay of the Z to the muon $\mu^-$ and the antimuon $\mu^+$. This decay process is shown as a Feynman diagram in the image 7. Image 7: The process where the Z boson decays to the muon and the antimuon. Muons that are created in the decay process can be detected with the CMS. But just the detection of the muon and the antimuon isn't a sufficient evidence of the existence of the Z. The detected two muons could originate from any of processes that will happen in the collision event (there are many different processes going on the same time). Because of this the mass of the Z is also needed to be reconstructed. The invariant mass The mass of the Z boson can be determined with the help of a concept called _invariant mass_. Let's next derive loosely an expression for the invariant mass.Let's observe a situation where a particle with mass $M$ and energy $E$ decays to two particles with masses $m_1$ and $m_2$, and energies $E_1$ and $E_2$. Energy $E$ and momentum $\vec{p}$ is concerved in the decay process so $E = E_1 +E_2$ and $\vec{p} = \vec{p}_1+ \vec{p}_2$.Particles will obey the relativistic dispersion relation:$$Mc^2 = \sqrt{E^2 - c^2\vec{p}^2}.$$And with the conservation of energy and momentum this can be shown as$$Mc^2 = \sqrt{(E_1+E_2)^2 - c^2(\vec{p_1} + \vec{p_2})^2}$$$$=\sqrt{E_1^2+2E_1E_2+E_2^2 -c^2\vec{p_1}^2-2c^2\vec{p_1}\cdot\vec{p_2}-c^2\vec{p_2}^2}$$$$=\sqrt{2E_1E_2 - 2c^2 |\vec{p_1}||\vec{p_2}|\cos(\theta)+m_1^2c^4+m_2^2c^4}. 
\qquad (1)$$The relativistic dispersion relation can be brought to the following format$$M^2c^4 = E^2 - c^2\vec{p}^2$$$$E = \sqrt{c^2\vec{p}^2 + M^2c^4},$$from where by setting $c = 1$ (very common in particle physics) $$M = \sqrt{(E)^2 - (\vec{p})^2} = \sqrt{(E_1+E_2)^2 - (\vec{p_1} + \vec{p_2})^2}, \qquad (2)$$and$$E = \sqrt{\vec{p}^2 + M^2}, \qquad (3)$$and by assuming masses of the particles very small compared to momenta, it is possible to get the following:$$E = \sqrt{\vec{p}^2 + M^2} = |\vec{p}|\sqrt{1+\frac{M^2}{\vec{p}^2}}\stackrel{M<<|\vec{p}|}{\longrightarrow}|\vec{p}|.$$By applying the result $E = |\vec{p}|$ derived above and the setting $c=1$ to the equation (1), it can be reduced to the format$$M=\sqrt{2E_1E_2(1-\cos(\theta))},$$where $\theta$ is the angle between the momentum vector of the particles. With this equation it is possible to calculate the invariant mass for the particle pair if energies of the particles and the angle $\theta$ is known.In experimental particle physics the equation for the invariant mass is often in the form$$M = \sqrt{2p_{T1}p_{T2}( \cosh(\eta_1-\eta_2)-\cos(\phi_1-\phi_2) )}, \qquad (4)$$where transverse momentum $p_T$ is the component of the momentum of the particle that is perpendicular to the particle beam, $\eta$ the pseudorapidity and $\phi$ the azimuth angle. The pseudorapidity is defined with the $\theta$ with the equation $\eta = -\ln(\tan(\frac{\theta}{2}))$. So basically the pseudorapidity describes an angle. Also $\phi$ is describing an angle.Image 8 expresses $\theta$, $\eta$ and $\phi$ in the CMS detector. The particle beams will travel to the z-direction. Image 8 also shows that because of the determination of $\eta$ it goes to 0 when $\theta = 90^{\circ}$ and to $\infty$ when $\theta = 0^{\circ}$. Image 8: Quantities $\theta$, $\eta$ and $\phi$ in the CMS detector. Reconstructing the Z mass With the invariant mass it is possible to prove the existence of various particles. In this notebook we look at many particles but focus on the decay of the Z to two muons shown in the image 7 and then later the decay of the Higgs to two Z bosons.This exercise uses data that contains collisions where two muons have been detected (among with many of other particles). It is possible to calculate an invariant mass value for the muon pair in an one collision event with the equation (2). And this can be repeated for a great amount of collision events.If the invariant mass of the muon pair is equal to the mass of the Z boson it can be verified that the muon pair originates from the deacay of the Z. And if the invariant mass of the muon pair gets some other value the muons will originate from some other processes. __So the invariant mass can be used as an evidence about the existence of a particle__. Looking at the muon pair invariant mass spectrum Below is the histogram published by the CMS experiment of the invariant mass of muon pairs. Notice the labelled peaks in the histogram? These peaks are evidence for particles decaying to muon pairs. Use the Particle Data Group web site if want to know more about these particles. Image 9: The histogram of the invariant masses published by the CMS experiment. © CMS Collaboration [5] Reproducing the muon pair invariant mass distribution Let's look at some CMS data of muon pairs where the muon pair's invariant mass has been calculated to reproduce the histogram shown in Image 9. Run the code cell below to make that kind of histogram. 
Notice that the y-axis is logarithmic and the x-axis has logarithms to base 10 of the values of the invariant masses ( $\log_{10}(\text{value of the mass})$ ). So for example it is possible to calculate the invariant mass value in units GeV corresponding to the x-axis value of 0.5 with the following way:$$\log_{10}(\text{mass}) = 0.5$$$$10^{\log_{10}(\text{mass})} = 10^{0.5}$$$$\text{mass} = 10^{0.5} \approx 3.1622 \text{GeV}$$
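As a quick check of the axis conversion described above, the same back-conversion can be done in a couple of lines of Python (the example readings are arbitrary):

```
# Convert x-axis readings log10(mass / GeV) back to masses in GeV.
for log_mass in [0.0, 0.5, 1.0, 2.0]:
    print('log10(mass) = %.1f -> mass = %.4f GeV' % (log_mass, 10**log_mass))
```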
###Code
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
ds = pd.read_csv('DoubleMuRun2011A.csv')
invariant_mass_1 = ds['M']
no_bins = 500
# Let's calculate the logarithms of the masses and the weights.
inv_mass_log = np.log10(invariant_mass_1)
# Each event is weighted by no_bins / (ln(10) * m), computed here in one vectorized step.
weights = no_bins / (np.log(10) * invariant_mass_1)
# Let's plot the weighted histogram.
plt.hist(inv_mass_log, no_bins, range=(-0.5,2.5), weights=weights, lw=0, color="darkgrey")
plt.yscale('log')
# Naming the labels and the title.
plt.xlabel('log10(invariant mass) [log10(GeV)]')
plt.ylabel('Number of the events')
plt.title('The histogram of the invariant masses of two muons \n')
plt.show()
###Output
_____no_output_____
###Markdown
Identifying the Z boson We have just seen we can identify particles by reconstructing the invariant mass of the particles they decay to. We can look at any of these particles and study its properties. For example lets focus on the Z boson. In practice the identification of the Z boson goes in the following way. The invariant mass for two muons is calculated for the great amount of collision events. Then a histogram is made from the calculated values. The histogram shows how many invariant mass values will be in each bin of the histogram.If a peak (many invariant mass values near the same bin compared to other bins) is formed in the histogram, it can prove that in the collision events there has been a particle with a mass that corresponds to the peak. After that it is possible to fit a function to the histogram and determine the mass and the lifetime of the Z from the parameters of the fitted function. In the histogram we plotted the invariant mass was calculated for us. Let's calculate the invariant mass ourselves Question 2 Let's practice the calculation of the invariant mass with the following task. Let's assume that for one muon pair the following values have been measured or determined:- $p_{T1} = 58,6914$ GeV/c- $p_{T2} = 45,7231$ GeV/c- $\eta_1 = -1,02101$- $\eta_2 = -0,37030$- $\phi_1 = 0,836256$ rad- $\phi_2 = 2,741820$ radCalculate the invariant mass value for this single pair of muons.Compare the calculated value to the mass of the Z boson reported by the Particle Data Group (PDG, http://pdg.lbl.gov/). What do you notice? Can you make sure conclusions from your notifications?That's the end of the theory part of this exercise. You can now move on to analysing the data. Calculating the invariant mass using pseudorapidity In this section the data-analysis is started by calculating the invariant masses of the muon pairs that are detected in the collision events. Analysis will be done with the Python programming language.The data used in the analysis has been collected by the CMS detector in 2011. From the original data a CSV file containing only some of the collision events and information has been derived. The original data is saved in AOD format that can be read with ROOT program. Open the link http://opendata.cern.ch/record/17 and take a look how large the original datafile is from the section _Characteristics_.From the original datafile only collision events with exactly two muons detected have been selected to the CSV file. The selection is done with the code similar to the one in the link http://opendata.cern.ch/record/552. In practice the code will select wanted values from the original file and write them to the CSV file. You can get an example of a CSV file by clicking the link http://opendata.cern.ch/record/545 and downloading one of the CSV files from the bottom of the page to your computer.The CSV file used in this excercise is already saved to the same repository than this notebook file. Now let's get the file with Python and start the analysis! Initialisation and getting the data In the code cell below needed Python modules _pandas_, _numpy_ and _matplotlib.pyplot_ are imported and named as _pd_, _np_ and _plt_. Modules are files that contain functions and commands for Python language. Modules are imported because not all of the things needed in the exercise could be done with the Python's built-in functions.Also the data file from the repository is imported and saved to the variable named `ds`. 
__Don't change the name of the variable.__ The file is imported with the function `read_csv()` from the pandas module. So in the code there has to be an reference to pandas module (that we named as _pd_) in front of the function.First we want to figure out how many collision events (or in this case data rows) there are in the data file. Add to the code cell below needed code to print out the number of rows of the imported file. With Python printing is done with the `print()` function where the thing that is wanted to be printed will be written inside the brackets. The length of an object can be determined with the `len()` function. Inside the brackets will be written the variable which length is wanted to be determined.You can run the code cell by clicking it active and then pressing CTRL + ENTER. Feel free to test different solutions for printing the length of the file.After you have printed the number of the rows in the datafile, you can move on to the next section. First try to figure it out yourself, but if you get stuck click on the hints below. Hint 1 The data was saved to the variable that was named as "ds". Hint 2 Write the function "len()" inside the function "print()": "print(len(variablename))", where variablename refers to the name of your variable.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
ds = pd.read_csv('DoubleMuRun2011A.csv')
print(len(ds))
# This is a comment, marked with the #-symbol. Comments do not affect the code.
# The line print(len(ds)) above already prints the number of collision events in the datafile.
###Output
_____no_output_____
###Markdown
What does the file look like? The file was saved as a _DataFrame_ structure (practically a table) of _pandas_ module in a variable called `ds`. Next print the five first rows of the file to look properly how does the file look. With the `print()` function it is possible to print a variable inside the brackets. With the function _variablename_`.head()` you can get the first five rows of the data file by changing the _variablename_ with the name of your variable.Write a code that prints the five first rows of the data file and run the code cell by clicking it active and pressing CTRL + ENTER. First try to figure it out yourself, but if you get stuck click on the hint below. Hint Hint: "print(variablename.head())"
###Code
ds.head(10)
###Output
_____no_output_____
###Markdown
The "\\" symbols in the output tell that a row won't fit totally on a screen but continues to next rows of the output. The first row shows which information about muon pairs the file contains. For example E1 is the energy of the first muon and E2 the energy of the second etc. Here are the different values listed:- Run = number of the run where data has been collected from- Event = number of the collision event- Type = type of the muon, global muon (G) has been measured both in the silicon tracker and muon chambers, tracker myon (T) has been measured only in the silicon tracker (these classifications are hypotheses since the type cannot be known absolutely)- E = energy of the muon- px, py, pz = different coordinates of momentum of the muon- pt = transverse momentum, that is the component of momentum of the muon that is perpendicular to the particle beams- eta = $\eta$ = pseudorapidity, a coordinate describing an angle (check the image 8)- phi = $\phi$ = azimuth angle, also a coordinate describing an angle (check the image 8)- Q = electrical charge of the muon Calculating the invariant mass Next calculate invariant mass values for muon pairs in each event with the different values from the data file. You have to write a proper equation only once since code executes the equation automatically for each row of the file.For example if you would like to sum the electrical charges of two muons for each event and save results in a variable _charges_, it could be done with the following code:```charges = ds.Q1 + ds.Q2```So you have to tell in the code that Q1 and Q2 refer to values in the variable `ds`. This can be done by adding the variable name separated with a dot in front of the value that is wanted, as in the example above.There are square root, cosine and hyperbolic cosine terms in the equation of invariant mass. Those can be fetched from the _numpy_ module that we named as _np_. You can get a square root with the function `np.sqrt()`, a cosine with `np.cos()` and a hyperbolic cosine with `np.cosh()`. Naturally inside the brackets there will be anything that is inside the square root or brackets in the equation too.__Write below a code__ that will calculate the invariant mass value for muon pairs in each collision event in the data file. Save the values calculated in the variable `invariant_mass` that is already written in the code cell. Don't change the name of the variable.After running, the code will print the first five values that are calculated. Also the output will tell if the values are correct. This is done with a small piece of code at the end of the cell.You can get help from the theory part. Also use the hints below if you get stuck. But first try different solutions by yourself and try to figure it out without the hints! Hint 1 Use the equation (4) of the theory part for the calculation. Alternatively you can use equation (2). Hint 2 When you write different quantities of the equation to your code, remember to refer to the variable from where you want to get the quantities. For example if you would need the quantity "pt1", write "ds.pt1" to the code. Hint 3 In practice write the equation (2 or 4) to one line to the code after the text "invariant_mass = ". Remember that you can get a cosine, a hyperbolic cosine and a square root from "numpy" module with the way that is described above. Also remember to tell from which variable you want to get the different quantities (hint 2).
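Before writing the calculation for the whole data file, it can help to check equation (4) on a single pair of numbers. The sketch below simply plugs in the muon-pair values quoted in Question 2 of the theory part (written here with decimal points); if everything is right, the result should come out close to the Z boson mass of about 91 GeV.

```
import numpy as np

# Values for one muon pair, taken from Question 2 in the theory part.
pt1, pt2 = 58.6914, 45.7231        # transverse momenta [GeV]
eta1, eta2 = -1.02101, -0.37030    # pseudorapidities
phi1, phi2 = 0.836256, 2.741820    # azimuth angles [rad]

# Equation (4): M = sqrt( 2 pt1 pt2 ( cosh(eta1 - eta2) - cos(phi1 - phi2) ) )
M = np.sqrt(2*pt1*pt2*(np.cosh(eta1 - eta2) - np.cos(phi1 - phi2)))
print('Invariant mass of this muon pair = %.2f GeV' % M)
```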
###Code
#Using Equation 4.
invariant_mass = np.sqrt(2*ds.pt1*ds.pt2*(np.cosh(ds.eta1-ds.eta2)-np.cos(ds.phi1-ds.phi2)))
print('The first five values calculated (in units GeV):')
print(invariant_mass[0:5])
# Rest of the code is for checking if the values are correct. You don't have to change that.
if 14.31 <= invariant_mass.values[4] <= 14.32:
print('Invariant mass values are correct!')
else:
print('Calculated values are not yet correct. Please check the calculation one more time.')
print("Remember: don't change the name of the variable invariant_mass.")
###Output
_____no_output_____
###Markdown
Calculating the invariant mass using Equation 2 Given the energy, momentum and mass of each particle, you can use equation (2) to calculate the invariant mass of their parent particle.
###Code
#No pseudorapidity!
#Mass of the Muon
muMass = 0.105658
#Momentum squared for the two muons
p1_squared = (ds.px1)**2 + (ds.py1)**2 + (ds.pz1)**2
p2_squared = (ds.px2)**2 + (ds.py2)**2 + (ds.pz2)**2
#Energy of the two muons
e1 = np.sqrt(p1_squared + (muMass*muMass))
e2 = np.sqrt(p2_squared + (muMass*muMass))
#Total Energy of the two muons
epair = e1 + e2
#Momentum squared of the muon pair vector
ptpair_squared = (ds.px1 + ds.px2)**2 + (ds.py1 + ds.py2)**2 + (ds.pz1 + ds.pz2)**2
invariant_mass = np.sqrt(epair**2 - ptpair_squared)
print('The first five values calculated (in units GeV):')
print(invariant_mass[0:5])
# Rest of the code is for checking if the values are correct. You don't have to change that.
if 14.31 <= invariant_mass.values[4] <= 14.32:
print('Invariant mass values are correct!')
else:
print('Calculated values are not yet correct. Please check the calculation one more time.')
print('Remember: don´t change the name of the variable invariant_mass.')
###Output
_____no_output_____
###Markdown
Making the histogram Next let's make a histogram from the calculated invariant mass values. The histogram describes how the values are distributed, that is, how many values there has been in each bin of the histogram. In the image 9 there is a histogram that represents how the amount of cash in a wallet has been distributed for some random group of people. One can see from the histogram that for example the most common amount of cash has been 10–15 euros (12 persons have had this). Image 9: An example histogram from the distribution of the amount of cash. Creating the histogram Histograms can be created with Python with the _matplotlib.pyplot_ module that was imported before and named as _plt_. With the function `plt.hist()` it is possible to create a histogram by giving different parameters inside the brackets. These parameters can be examined from https://matplotlib.org/devdocs/api/_as_gen/matplotlib.pyplot.hist.html.Now only the first three of the parameters are needed: a variable from which values the histogram is created (_x)_, number of bins (_bins_) and the lower and upper range of the bins (_range_).Write down a code that will create a histogram from the invariant mass values that were calculated. Because this exercise focuses on the Z boson, set the range wisely to get the values near the mass of the Z boson. Use the Z boson mass value that you looked earlier from the Particle Data Group as a reference.Try what is the best amount of bins to make a clear histogram. You can try different values and see how they affect to the histogram.In the code there are already lines for naming the axes and the title of the histogram. Also there are comments marked with symbols. These comments doesn't affect to the functionality of the code.If you get stuck use the hints below. But try to create the histogram without using the hints! Hint 1 The invariant mass values that you have calculated are saved in the variable "invariant_mass". Hint 2 The function is in the form "plt.hist(x, bins=0, range=(0,0))", where x will be replaced with the name of the variable that contains the data that is wanted to be used in the histogram (in our case the invariant masses). The zeroes will be replaced with the wanted amount of bins and with the lower and upper limits of the histogram. Hint 3 Try different bin values between 50 and 200.
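If you want to see quickly how the bin count changes the picture, a small loop like the sketch below draws the same data with a few different values (it assumes the variable `invariant_mass` from the earlier cell is still defined, and the bin counts are only examples):

```
import matplotlib.pyplot as plt

# Draw the same invariant mass values with a few different bin counts.
fig = plt.figure(figsize=(8, 10))
for i, nbins in enumerate([50, 120, 200]):
    plt.subplot(3, 1, i + 1)
    plt.hist(invariant_mass, bins=nbins, range=(60, 120))
    plt.ylabel('%d bins' % nbins)
plt.xlabel('Invariant mass [GeV]')
plt.show()
```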
###Code
# Write here the code that will create the histogram.
plt.hist(invariant_mass, bins=120, range=(60,120))
# Let's name the axes and the title. Don't change these.
plt.xlabel('Invariant mass [GeV]')
plt.ylabel('Number of events')
plt.title('Histogram of invariant mass values of two muons. \n')
plt.show()
###Output
_____no_output_____
###Markdown
Question 3 Describe the histogram. What information you can get from it? Fitting the function to the histogram To get information about mass and lifetime of the detected resonance, a function that describes the distribution of the invariant masses must be fitted to the values of the histogram. In our case the values follow a Breit-Wigner distribution:$$N(E) = \frac{K}{(E-M)^2 + \frac{\Gamma^2}{4}},$$where $E$ is the energy, $M$ the maximum of the distribution (equals to the mass of the particle that is detected in the resonance), $\Gamma$ the full width at half maximum (FWHM) or the decay width of the distribution and $K$ a constant.The Breit-Wigner distribution can also be expressed in the following form:$$\frac{ \frac{2\sqrt{2}M\Gamma\sqrt{M^2(M^2+\Gamma^2)} }{\pi\sqrt{M^2+\sqrt{M^2(M^2+\Gamma^2)}}} }{(E^2-M^2)^2 + M^2\Gamma^2},$$where the constant $K$ is written open.The decay width $\Gamma$ and the lifetime $\tau$ of the particle detected in the resonance are related in the following way:$$\Gamma \equiv \frac{\hbar}{\tau},$$where $\hbar$ is the reduced Planck's constant.With the code below it is possible to optimize a function that represents Breit-Wigner distribution to the values of the histogram. The function is already written in the code. It is now your task to figure out which the values of the maximum of the distribution $M$ and the full width at half maximum of the distribution $\Gamma$ could approximately be. The histogram that was created earlier will help in this task.Write these initial guesses in the code in the line `initials = [THE INITIAL GUESS FOR GAMMA, THE INITIAL GUESS FOR M, -2, 200, 13000]`. In other words replace the two comments in that line with the values that you derived.Notice that the initial guesses for parameters _a, b_ and _A_ have been already given. Other comments in the code can be left untouched. From them you can get information about what is happening in the code.After running the code Jupyter will print the values of the different parameters as a result of the optimization. Also uncertainties of the values and a graph of the fitted function are printed. The uncertainties will be received from the covariance matrix that the fitting function `curve_fit` will return. Hint 1 Think how M and gamma could be determined with the help of the histogram. Look from the histogram that you created that which would approximately be the values of M and gamma. Hint 2 If you figured out the initial guesses to be for example gamma = 12 and M = 1300 (note that these values are just random examples!) write them to the code in the form "initials = [12, 1300, -2, 200, 13000]".
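Once the fit below has returned a value for the decay width, turning it into a lifetime with $\Gamma \equiv \frac{\hbar}{\tau}$ is a one-line calculation. A minimal sketch is shown here; the 2.5 GeV width is only a placeholder, so substitute the value (and uncertainty) that your own fit gives.

```
# Reduced Planck constant in units of GeV * s.
hbar = 6.582e-25

gamma = 2.5          # decay width from the fit [GeV] (placeholder value)
tau = hbar / gamma   # lifetime in seconds
print('Lifetime tau = %.2e s' % tau)
```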
###Code
# Let's limit the fit near to the peak of the histogram.
lowerlimit = 70
upperlimit = 110
bins = 100
# Let's select the invariant mass values that are inside the limitations.
limitedmasses = invariant_mass[(invariant_mass > lowerlimit) & (invariant_mass < upperlimit)]
#Let's create a histogram of the selected values.
histogram = plt.hist(limitedmasses, bins=bins, range=(lowerlimit,upperlimit))
# In y-axis the number of the events per each bin (can be got from the variable histogram).
# In x-axis the centers of the bins.
y = histogram[0]
x = 0.5*( histogram[1][0:-1] + histogram[1][1:] )
# Let's define a function that describes Breit-Wigner distribution for the fit.
# E is the energy, gamma is the decay width, M the maximum of the distribution
# and a, b and A different parameters that are used for noticing the effect of
# the background events for the fit.
def breitwigner(E, gamma, M, a, b, A):
return a*E+b+A*( (2*np.sqrt(2)*M*gamma*np.sqrt(M**2*(M**2+gamma**2)))/(np.pi*np.sqrt(M**2+np.sqrt(M**2*(M**2+gamma**2)))) )/((E**2-M**2)**2+M**2*gamma**2)
# Initial values for the optimization in the following order:
# gamma (the full width at half maximum (FWHM) of the distribution)
# M (the maximum of the distribution)
# a (the slope that is used for noticing the effect of the background)
# b (the y intercept that is used for noticing the effect of the background)
# A (the "height" of the Breit-Wigner distribution)
#initials = [#THE INITIAL GUESS FOR GAMMA, #THE INITIAL GUESS FOR M, -2, 200, 13000]
initials = [10,90,-2,150,13000]
# Let's import the module that is used in the optimization, run the optimization
# and calculate the uncertainties of the optimized parameters.
from scipy.optimize import curve_fit
best, covariance = curve_fit(breitwigner, x, y, p0=initials, sigma=np.sqrt(y))
error = np.sqrt(np.diag(covariance))
# Let's print the values and uncertainties that are got from the optimization.
print("The values and the uncertainties from the optimization")
print("")
first = "The value of the decay width (gamma) = {} +- {}".format(best[0], error[0])
second = "The value of the maximum of the distribution (M) = {} +- {}".format(best[1], error[1])
third = "a = {} +- {}".format(best[2], error[2])
fourth = "b = {} +- {}".format(best[3], error[3])
fifth = "A = {} +- {}".format(best[4], error[4])
print(first)
print(second)
print(third)
print(fourth)
print(fifth)
plt.plot(x, breitwigner(x, *best), 'r-', label='gamma = {}, M = {}'.format(best[0], best[1]))
plt.xlabel('Invariant mass [GeV]')
plt.ylabel('Number of events')
plt.title('The Breit-Wigner fit')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Notification 1: If the fitted function does not follow the histogram well, go back and check the initial guesses. Notification 2: The fit also takes into account the so-called background of the mass distribution. The background basically means muon pairs that come from other decay processes than the decay of the Z boson. The background is taken into account in the code in the line that follows the command `def breitwigner`: the term $aE + b$ describes the linear part of the background, while $A$ sets the height of the Breit-Wigner peak. Notification 3: An even more rigorous way of doing the fit and extracting the values and uncertainties would be to iterate the fit several times, with each step taking its initial guesses from the previous fit. Analysing the histogram Question 4 What can you say about the appearance of the Z boson based on the histogram and the fitted function? Can you determine the mass of the Z with its uncertainty? How? Explain your answers with help from the theory part and other sources. Question 5 Calculate the lifetime $\tau$ of the Z boson with its uncertainty by using the fit. Compare the calculated value to the known lifetime of the Z. What do you notice? What could possibly explain your observations? Question 6 When was the Z boson first detected and what is the physical significance of the Z? Question 7 If energy and momentum could be measured with infinite accuracy, would there be one sharp peak standing out from the rest of the distribution, or still a spread-out distribution in the histogram at the location of the Z mass? Justify your answer. Question 8 Looking for Higgs to 4 lepton decays Now that we can reconstruct invariant masses we can look to find the mass of the Higgs via its decay to two Z bosons. As the Z boson is not stable and decays, we can identify the Z boson by its decay to two leptons as above. Consequently the Higgs boson can end up decaying to 4 leptons. We can look at the final states electron-positron and electron-positron, electron-positron and muon- muon+, as well as muon- muon+ muon- muon+. To calculate the invariant mass of the Higgs we need to know the mass of the particles in the final state. There are three different mass configurations here, so three different calculations. We could look at the invariant mass distribution of each state and then add them together to get the final distribution. $p >> m$ Alternatively, because these are high energy collisions, the momenta of the muons and electrons are much higher than their masses. Therefore the mass contribution to the energy is negligible, so we can just set the mass of the electron and muon to zero. Now we can add all the electron-positron and electron-positron, electron-positron and muon- muon+, as well as muon- muon+ muon- muon+ data together, as their only physical difference is their mass and we have set this to zero. Have a try! Calculate the invariant mass of all the datasets in one go! 4 lepton invariant mass
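To see how good the $p \gg m$ approximation actually is, the short sketch below compares the exact muon energy $E = \sqrt{p^2 + m^2}$ with the massless estimate $E \approx |p|$ for a few illustrative momenta:

```
import numpy as np

mu_mass = 0.105658  # muon mass [GeV]

# Compare the exact energy with the massless approximation E = |p|.
for p in [5.0, 20.0, 50.0]:
    e_exact = np.sqrt(p**2 + mu_mass**2)
    rel_diff = (e_exact - p) / e_exact
    print('|p| = %5.1f GeV : exact E = %.6f GeV, relative difference = %.1e' % (p, e_exact, rel_diff))
```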
###Code
ds_2e2mu_2011 = pd.read_csv('2e2mu_2011.csv')
ds_2e2mu_2012 = pd.read_csv('2e2mu_2012.csv')
ds_4e_2011 = pd.read_csv('4e_2011.csv')
ds_4e_2012 = pd.read_csv('4e_2012.csv')
ds_4mu_2011 = pd.read_csv('4mu_2011.csv')
ds_4mu_2012 = pd.read_csv('4mu_2012.csv')
ds = pd.concat([ds_2e2mu_2011, ds_2e2mu_2012, ds_4e_2011, ds_4e_2012, ds_4mu_2011, ds_4mu_2012], axis=0, ignore_index=True)
#Particles 1 and 2 are the electron and positron
#Particles 3 and 4 are the muon- and muon+
#Mass of the Muon
muMass = 0.105658
#Mass of the electron
eMass = 0.000511
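# Note: with the p >> m approximation used below, these lepton masses are not
# actually needed in the calculation; they are kept here only for reference.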
#Momentum squared for the 4 leptons
p1_squared = (ds.px1)**2 + (ds.py1)**2 + (ds.pz1)**2
p2_squared = (ds.px2)**2 + (ds.py2)**2 + (ds.pz2)**2
p3_squared = (ds.px3)**2 + (ds.py3)**2 + (ds.pz3)**2
p4_squared = (ds.px4)**2 + (ds.py4)**2 + (ds.pz4)**2
#Energy of the leptons
e1 = np.sqrt(p1_squared)
e2 = np.sqrt(p2_squared)
e3 = np.sqrt(p3_squared)
e4 = np.sqrt(p4_squared)
#Total Energy of the four leptons
epair = e1 + e2 + e3 + e4
#Momentum squared of the four leptons
ptleptons_squared = (ds.px1 + ds.px2 + ds.px3 + ds.px4)**2 + (ds.py1 + ds.py2+ ds.py3 + ds.py4)**2 + (ds.pz1 + ds.pz2 + ds.pz3 + ds.pz4)**2
invariant_mass_4l = np.sqrt(epair**2 - ptleptons_squared)
# Write here the code that will create the histogram.
plt.hist(invariant_mass_4l, bins=60, range=(45,180))
# Let's name the axes and the title. Don't change these.
plt.xlabel('Invariant mass [GeV]')
plt.ylabel('Number of events')
plt.title('Histogram of invariant mass values of four leptons. \n')
plt.arrow(70, 10, 18, -2.5,length_includes_head=True, width=0.2, fc='b', ec='b')
plt.arrow(123, 9, 0, -5,length_includes_head=True, width=0.5, fc='b', ec='b')
plt.text(118, 10.5, 'Higgs Boson', fontsize=12)
plt.text(60, 10.5, 'Z Boson', fontsize=12)
plt.show()
###Output
_____no_output_____
###Markdown
Combining the 4 lepton data we start to see hints of the Higgs particle decaying to 4 leptons at a mass of around 126 $GeV/c^2$. Compare to the CMS analysis We can compare our distribution to the CMS analysis. Bear in mind that the image shown below uses more data and a more sophisticated analysis, for example looking at events with more than 4 leptons, where the additional leptons can come from other particles in the event. Image 10: Distribution of the reconstructed four-lepton invariant mass in the low-mass range. © CMS Collaboration [6] In the end Now you have completed the exercise. Feel free to go back, test some different values in the code and see what happens. You can also create a new code cell by clicking "INSERT" -> "Insert Cell Below" and try writing some code of your own too! More information about the CERN Open Data can be found at http://opendata.cern.ch/. Further Work If you have finished all the exercises above and would like to do more, look at the section below on pseudorapidity. Effect of pseudorapidity on the mass distribution In this final section we briefly study how the pseudorapidities of the muons detected in the CMS detector affect the mass distribution. As explained in the theory part, the pseudorapidity $\eta$ describes the angle by which a detected particle deviates from the particle beam (z-axis). Pseudorapidity is defined via the angle $\theta$ mentioned before through the equation $$\eta = -\ln(\tan(\frac{\theta}{2}))$$ For recap, image 8 is shown again below. From the image one can see that a small pseudorapidity in practice means that the particle has deviated a lot from the particle beam. And vice versa: a large pseudorapidity means that the particle has continued almost along the beam line after the collision. Image 8: Quantities $\theta$, $\eta$ and $\phi$ in the CMS detector. Image 11 below shows a situation where two particle beams coming from the left and the right collide. The image shows two muons with different pseudorapidities. The muon with the smaller pseudorapidity hits the barrel part of the detector, while the muon with the larger pseudorapidity goes to the endcap of the detector. There are also muon chambers at both ends of the detector, so these muons can be detected as well. Image 11: Two particles with different pseudorapidities in the CMS detector. To study how the pseudorapidity affects the mass distribution, two different histograms will be made: one with only muon pairs with small pseudorapidities and one with only muon pairs with large pseudorapidities. The histograms will be made with the familiar method from the earlier part of this exercise. Selecting the events Next let's create two variables for dividing the events: `small_etas` and `great_etas`. The first will contain only collision events where the pseudorapidities of both detected muons are small (for example under 0.38), and the second those where both pseudorapidities are large (for example over 1.52). Absolute values are used because $\eta$ can also take negative values. Complete the code cell below by defining the variables `small_etas` and `great_etas` so that this division is made. You will need the following functions: - `ds[condition]` selects from the variable `ds` only the events which fulfill the condition written inside the brackets. There can also be more than one condition.
Then the selection takes the form `ds[(condition1) & (condition2)]` - an example: to select from the variable `example` only the rows where the values of both columns `a` and `b` are greater than 8, write `example[(example.a > 8) & (example.b > 8)]` - you can get absolute values with the function `np.absolute()` from the _numpy_ module - the pseudorapidity of the first muon is `ds.eta1` and of the second `ds.eta2` - "greater than" and "smaller than" comparisons can be made in Python directly with the symbols > and < - Python uses a dot as a decimal separator (for example 0.38) Hint 1 Remember to define the small values so that both eta1 and eta2 are smaller than 0.38, and similarly for the large values. Hint 2 Remember to tell from which variable you want to get the values of the pseudorapidities (write ds.eta1 or ds.eta2). Remember to use "np." in front of the absolute value function. Hint 3 The first variable with its conditions is "great_etas = ds[(np.absolute(ds.eta1) > 1.52) & (np.absolute(ds.eta2) > 1.52)]" and the second is "small_etas = ds[(np.absolute(ds.eta1) < 0.38) & (np.absolute(ds.eta2) < 0.38)]".
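If the selection syntax still feels abstract, the small self-contained sketch below applies the same kind of two-condition cut to a made-up table (the numbers are invented purely for illustration):

```
import pandas as pd
import numpy as np

# A tiny made-up table with two pseudorapidity columns.
example = pd.DataFrame({'eta1': [0.10, -2.00, 1.80, -0.20],
                        'eta2': [-0.30, 1.90, -1.70, 0.05]})

# Keep only the rows where both |eta1| and |eta2| pass the cut.
central = example[(np.absolute(example.eta1) < 0.38) & (np.absolute(example.eta2) < 0.38)]
forward = example[(np.absolute(example.eta1) > 1.52) & (np.absolute(example.eta2) > 1.52)]

print(central)  # rows where both values are central (small |eta|)
print(forward)  # rows where both values are close to the beam line (large |eta|)
```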
###Code
# Let's import the needed modules.
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# With this line the data is imported and saved to the variable "ds".
ds = pd.read_csv('DoubleMuRun2011A.csv')
# Select the events where both muons have a large |eta| (cut values as in Hint 3 above).
great_etas = ds[(np.absolute(ds.eta1) > 1.52) & (np.absolute(ds.eta2) > 1.52)]
# Select the events where both muons have a small |eta|.
small_etas = ds[(np.absolute(ds.eta1) < 0.38) & (np.absolute(ds.eta2) < 0.38)]
# Let's print out some information about the selection
print('Amount of all events = %d' % len(ds))
print('Amount of the events where the pseudorapidity of the both muons have been large = %d' %len(great_etas))
print('Amount of the events where the pseudorapidity of the both muons have been small = %d' %len(small_etas))
###Output
_____no_output_____
###Markdown
Creating the histograms Run the code cell below to create separate histograms from the events with small and with great values of pseudorapidities. The cell will get the invariant masses for both of the selections and will create the histograms out of them near to the peak that refers to the Z boson.
###Code
# Let's separate the invariant masses of the large and small pseudorapidity
# events for making the histograms.
inv_mass_great = great_etas['M']
inv_mass_small = small_etas['M']
# Let's use the matplotlib.pyplot module to create a custom size
# figure where the two histograms will be plotted.
f = plt.figure(1)
f.set_figheight(15)
f.set_figwidth(15)
plt.subplot(211)
plt.hist(inv_mass_great, bins=120, range=(60,120))
plt.ylabel('great etas, number of events', fontsize=20)
plt.subplot(212)
plt.hist(inv_mass_small, bins=120, range=(60,120))
plt.ylabel('small etas, number of events', fontsize=20)
plt.xlabel('invariant mass [GeV]', fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
Particle physics data-analysis with CMS open data Welcome to the RAL Particle Physics masterclass computer exercise, here we will use real data from the CMS experiment at CERN for a simple particle physics data-analysis. The goal of the exercise is to understand how particles are discovered, as an example we will look at the discovery of the Z boson .In the exercise, invariant mass values will be calculated for muon pairs that are detected in the CMS detector. A histogram will be made from the calculated invariant mass value, and the mass of the Z estimated.Finally, we will also look at 4-lepton events and try to identify the Higgs boson .The structure of the exercise is:- Theory background- Identifying events from event displays- Computer exercise: - Introduction to computing and python - Loading the data - Making some plots - Calculating the invariant mass - Looking for Higgs to 4-lepton decaysIf you complete the exercise and have time left, there are two possible extension exercises: - The effect of pseudorapidity on the Z mass distribution - Fitting the Z mass distribution to determine the mass and lifetime of the Z boson Part1 : Theory background Particle physics is the field of physics where structures of matter and radiation and the interactions between them are studied. In experimental particle physics, research is performed by accelerating particles and colliding them either with other particles or with solid targets. This is done with _particle accelerators_ and the collisions are examined with _particle detectors_.The world's largest particle accelerator, the Large Hadron Collider (LHC), is located at CERN, the European Organization for Nuclear Research. The LHC is a 27 kilometers long circle-shaped synchrotron accelerator. The LHC is located in a tunnel 100 meters underground on the border of France and Switzerland (image 1). Image 1: The LHC accelerator and the four detectors around it. © CERN [1] In 2012 the ATLAS and CMS experiments at CERN made an announcement that they had observed a new particle with a mass equal to the predicted mass of the Higgs boson. The Higgs boson and the Higgs field related to it explain the origin of the mass of particles. In 2013 Peter Higgs and François Englert, who predicted the Higgs boson theoretically, were awarded the Nobel prize in physics. Accelerating particles The LHC mainly accelerates protons. The proton source of the LHC is a bottle of hydrogen. Protons are produced by stripping the electrons away from the hydrogen atoms with the help of an electric field.The process of accelerating the protons starts before the LHC. Before the protons arrive in the LHC they are accelerated with electric fields and directed with magnetic fields in smaller accelerators(Linac 2, Proton Synchrotron Booster, Proton Synchrotron and Super Proton Synchrotron). After these the protons have an energy of 450 GeV. The protons are injected into the LHC in two different beampipes, each beam contains 2808 proton bunches located about 7.5 meters from each other. Each of these bunches include $1\text{.}2\cdot 10^{11}$ protons.The two beams circulate in opposite directions in two different vacuum tubes. Image 2 shows a part of the LHC accelerator opened with the two vacuum tubes visible inside. Each of the proton beams will reach the energy of about 7 TeV (7000 GeV) in the LHC. Image 2: Part of the LHC accelerator opened. © CERN [2] Particle collisions are created by crossing these two beams that are heading in opposite directions. 
Because the bunches are travelling so fast, there will be about 40 million bunch crossings per second in the LHC. When two proton bunches cross, not all of the protons collide with each other. Only about 40 protons per bunch will collide and so create about 20 collisions. But that means there will be 800 million proton collisions every second in the LHC. That's a lot of action! The maximum energy of these collisions is 14 TeV. However, in most cases the collision energy is smaller than that, because when protons collide it is really their constituents, the quarks and gluons, which collide with each other. So not all of the energy of the protons is transmitted to the collision. When the protons collide the energy of the collision can be transformed into mass ($E=mc^2$) and new particles are produced in the collisions. These new particles are ejected from the collision area, a bit like a small explosion. By examining and measuring the particles created in collisions, researchers try to understand better the known particles which make up our universe and search for new particles which could explain puzzles such as dark matter. Video The acceleration and collision processes are summarised well in the short video below. Watch the video from the start until 1:15 to get a picture of these processes. You can start the video by running the code cell below (click the cell and then press SHIFT + ENTER).
###Code
from IPython.display import HTML
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/pQhbhpU9Wrg" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>')
###Output
_____no_output_____
###Markdown
Examining particle collisions At the LHC the proton beams are brought together to colide at four different points. In order to study the particles produced by the collisions, particle detectors are built around the collision points. The four particle detectors at the LHC are ATLAS, LHCb, ALICE and CMS (check Image 1). These detectors are like very large digital cameras and take a "picture" of the particles emerging from the collision.In Image 3 there is a visualisation of some particles created in one collision event seen at the CMS (Compact Muon Solenoid) detector. Image 3: A visualised collision event. This exercise uses data recorded by the CMS detector so lets look in more detail at this detector....Simplified, the goal of the CMS detector is to detect particles that are created in collisions and measure different quantities about them (charge, energy, momentum, etc.). The CMS detector consists of different sub-detectors which form an onion-like structure around the collision point. This structure ensures that as many particles as possible from the collision are detected and measured. Image 4: The CMS detector opened. © CERN [3] Different particles act differently in the different sub-detectors of CMS. Image 5 shows a cross-section of the CMS detector. The particle beams would travel in and out from the plane. Image 5 also demonstrates how different particles can be identified in the detector. Image 5: The cross-section of the CMS and different particle interactions in it. © CERN [4] Let's look at the different parts of the detector: Tracker The innermost part is the silicon tracker. The silicon tracker makes it possible to reconstruct trajectories of charged particles. Charged particles interact electromagnetically with the tracker and create an electric pulse. An intense magnetic field bends the trajectories of the charged particles. With the curvature of the trajectories shown by the pulses created in the tracker, it is possible to calculate the momenta of the charged particles. Calorimeter Particle energies can be measured with help of the calorimeters. Electrons and photons will stop to the Electromagnetic Calorimeter (ECAL). Hadrons, for example protons or neutrons, will pass through the ECAL but will be stopped in the Hadron Calorimeter (HCAL). ECAL is made from lead tungstate crystals that will produce light when electrons and photons pass through them. The amount of light produced is propotional to the energy of the particle. So it is possible to determine the energy of the particle stopped in ECAL with the photodetectors. The operation of the HCAL is also based on detecting light. Muon detector Only muons and very weakly interacting particles like neutrinos will pass through both the ECAL and HCAL without being stopped. Energies and momenta of muons can be determined with the muon chambers. The detection of the momentum is based on electrical pulses that muons create in the different sections of the muon chambers. Energies of muons can't be measured directly, but the energies will be determined by calculating them from the other measured quantities.Neutrinos can't be detected directly in the detector (they only interact very weakly and pass right through the detector), but the existence of them can be derived with the help of missing energy. It is possible that the total energy of the particles detected in a collision is smaller than the energy before the collision. Yet, we know that energy must be conserved. 
This situation indicates that something was undetected in the collision; this "missing energy" is assumed to be due to neutrinos created in the collision. Part2 : Looking at some events We can look at some more event displays by downloading the file here If you want to look at more events they can be found at this link Click the “folder” icon, click “Open files from the Web” and the “Education” folder Indirect detection of particles As we have seen, not every particle can be detected directly with the particle detectors. Interesting particles are often short-lived and decay essentially at the interaction point so never reach the detectors. These processes can be searched for via their long-lived decay products; this is indirect detection. For example the Z boson (the particle that mediates the weak interaction) can't be detected directly with the CMS since the lifetime of the Z is very short. That means that the Z boson will decay before it even reaches the silicon detector of the CMS. How is it possible to detect the Z boson then? A solution to this question comes from the decay process of the Z boson. If the particles that originate from the decay of the Z can be detected, it is also possible to deduce the existence of the Z. So the detection is indirect. The Z boson can decay in many ways (24 in fact) and in this exercise we will look at one of these: the decay of the Z to a muon ($\mu^-$) and an antimuon ($\mu^+$). This decay process is shown as a Feynman diagram in Image 6 below. Image 6: Feynman diagram of the process where the Z boson decays to a muon and an antimuon. As we have just seen in the event displays, the muons that are created from the decay of the Z can be detected. But just the detection of the muon and the antimuon isn't sufficient evidence for the existence of the Z, as they could have originated from another process (there are many different processes which can lead to the same final state). Assuming that the muon-antimuon pair came from the decay of a single "mother" particle, we can use their momentum and energy to calculate the invariant mass of that particle. With the invariant mass it is possible to prove the existence of particles. In our example, we can take all the muon-antimuon pairs recorded by the detector and calculate the invariant mass for each pair. If we get a different answer each time then the muon-antimuon pair were just a random combination. If the answer is always the same it indicates that the muon-antimuon pair came from a single particle with a specific mass. We can make a plot showing the calculated mass value for each muon-antimuon pair. A peak in this plot (i.e. lots of pairs with the same mass value) would prove that the muon pairs came from a single particle with that specific mass value. __So the invariant mass can be used as evidence for the existence of a particle__. In this notebook we will look at some real data from muon pairs, plot the mass of the muon pairs and look at the particles we find. Then we will find out how to calculate the mass ourselves. The different parts of the exercise are: 1) Introduction to python, Jupyter notebooks and some simple programming 2) Loading the data 3) Making some simple plots 4) Make a plot of the invariant mass of the muon pair 5) Calculating the invariant mass yourself 6) Apply the same principle to the 4-particle decay of the Higgs boson. Now to get started... 
Part3 : Computer exercise Exercise 1 : An introduction to python and programming This is a Jupyter notebook, where you can have text "cells" (like this text here) and code "cells", i.e. boxes where you can write python code to be executed (like the one below). No need to install anything or find compilers; it is all done for you in the background. It is useful to save the workbook as you work through the exercise (just in case of problems), using "File" -> "Save Notebook". We will be using python as the programming language: it is easy to get started, for example just type "1 + 1" in the cell below then click on "Run"->"Run Selected Cell" above, or click "SHIFT" & "ENTER" at the same time.
###Code
1+1
###Output
_____no_output_____
###Markdown
Try some other maths functions for yourself: use "-", "*", "/"
###Code
4/2
###Output
_____no_output_____
###Markdown
Now try something more advanced, for example sqrt(4)
###Code
sqrt(4)
###Output
_____no_output_____
###Markdown
Oops, that failed: basic python can do some mathematical operations but not everything. For anything more complex, we need additional software packages or "modules". Here we import "numpy", a maths module (run the cell below):
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Now we can try sqrt again using numpy: np.sqrt(4)
###Code
# Try out np.sqrt - This is a comment separated with #-symbol.
np.sqrt(4)
###Output
_____no_output_____
###Markdown
You can try some other values yourself, e.g. np.sqrt(16), np.sqrt(81). Note that starting a line with "#" marks the line as a comment; a comment line doesn't affect the functionality of the code. Finally, you will need to be able to raise numbers to a power. This is done with "** n", where n is the power you wish to raise to. Try "3**2" in the cell below. You can try some other calculations as well. What is "2\*\*4", "3\*\*3" ? Exercise 2 : Loading the data The data used in the analysis has been collected by the CMS detector in 2011. From the original data only those collision events with exactly two muons have been selected and the information stored on a CSV file. The CSV file used in this exercise is already saved to the same repository as this notebook file. Now let's get the file with Python and start the analysis! In the code cell below some needed Python modules, _pandas_ and _numpy_, are imported and named as _pd_ and _np_. Modules are files that contain functions and commands for the Python language. Modules are imported because not all of the things needed in the exercise could be done with Python's built-in functions. Run the cell below to import the data file ('DoubleMuRun2011A.csv'). Note that the file is saved to the variable named `ds`. __Don't change the name of the variable.__ The file is imported with the function `read_csv()` from the pandas module. So in the code there has to be a reference to the pandas module (that we named as _pd_) in front of the function.
###Code
import pandas as pd
import numpy as np
ds = pd.read_csv('DoubleMuRun2011A.csv')
###Output
_____no_output_____
###Markdown
How many events? First we want to figure out how many collision events (or in this case data rows) there are in the data file. Add the code needed to print out the number of rows of the imported file to the code cell below. The length of an object can be determined with the `len()` function; write the variable whose length you want to determine inside the brackets. Feel free to test different solutions for printing the length of the file. After you have printed the number of rows in the datafile, you can move on to the next section. First try to figure it out yourself, but if you get stuck click on the hints below. Hint 1 The data was saved to the variable named "ds". Hint 2 Use the function len(), for example len(variablename), where variablename refers to the name of your variable.
###Code
# Add your own code to print the number of collision events in the datafile!
len(ds)
###Output
_____no_output_____
###Markdown
Answer `len(ds)` What does the data look like? The file was saved as a _DataFrame_ structure (practically a table) of the _pandas_ module in a variable called `ds`. Next we will print the first five rows of the file to look at what is inside. With the function _variablename_.`head(N)` you can get the first N elements of _variablename_. You can get the first rows of the data file by changing _variablename_ to the name of your dataset variable. Write code that prints the first five rows of the data file and run the code cell. First try to figure it out yourself, but if you get stuck click on the answer below.
###Code
ds.head(5)
###Output
_____no_output_____
###Markdown
Answer ds.head(5) The first row shows the information about muon pairs contained in the file. For example E1 is the energy of the first muon and E2 the energy of the second etc. Here are the different values listed:- Run = number of the run where data has been collected from- Event = number of the collision event- Type = type of the muon, global muon (G) has been measured both in the silicon tracker and muon chambers, tracker muon (T) has been measured only in the silicon tracker (these classifications are hypotheses since the type cannot be known absolutely)- E = energy of the muon- px, py, pz = different coordinates of the momentum of the muon (remember momentum is a vector, $z$ is along the beamline, $x$ and $y$ are perpendicular to the beam)- pt = transverse momentum, that is the component of momentum of the muon that is perpendicular to the particle beams- eta = $\eta$ = pseudorapidity, a coordinate describing the angle the particle makes with the beamline- phi = $\phi$ = azimuth angle, also a coordinate describing an angle - this time in the x-y plane- Q = electrical charge of the muon Exercise 3 : Making some plots Next let's plot some of the values from the file in a histogram.A histogram describes how values are distributed, that is, how many values fall in each bin of the histogram. In Image 7 below there is a histogram that represents how the amount of cash in a wallet has been distributed for some random group of people. One can see from the histogram that, for example, the most common amount of cash was 10–15 euros (12 people had this). Image 7: An example histogram from the distribution of the amount of cash. Histograms can be created in python with the _matplotlib.pyplot_ module. Run the cell below to import this module as _plt_.
###Code
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Now we can plot something... Let's try _'px1'_ (this is the x-component of the momentum vector for muon 1). The function _plt.hist()_ is used to create a histogram by giving different parameters inside the brackets. The full list of parameters can be seen at https://matplotlib.org/devdocs/api/_as_gen/matplotlib.pyplot.hist.html. For now, we will only use the first three: plt.hist('variable name', bins = BINS, range=(low end of range, high end of range)) 'variable name' : the variable from whose values the histogram is created (here "px1") 'bins' : number of bins for the histogram 'range' : the lower and upper range of the bins The function _plt.show()_ is used to display the histogram. Uncomment the "plt.hist" line in the cell below and fill in some values for bins and range.
###Code
# fill variable px1 with px1 from the file
px1 = ds['px1']
# now use plt.hist to make a histogram
plt.hist(px1, bins= 500 , range=(-50 , 50))
plt.show()
###Output
_____no_output_____
###Markdown
Answer `px1 = ds['px1']` `plt.hist(px1, bins=100, range=(-20.,20.))` `plt.show()` You can vary the bins and range until you have something suitable. We can add axes labels and a title using "plt.xlabel(' label')", "plt.ylabel(' label')" and "plt.title(' label')". Try that in the cell below
###Code
# First add your plt.hist() line here
plt.hist(px1, bins= 500 , range=(-50 , 50))
# add labels and title
plt.xlabel('x-component of momentum [GeV]')
plt.ylabel('Number of events')
plt.title('Histogram of px for muon 1. \n')
plt.show()
###Output
_____no_output_____
###Markdown
You can also plot some of the other muon properties using the variables we printed above. Exercise 4 : Plotting the invariant mass Next, let's look at the invariant mass; this has already been calculated and stored in the file as "M". Write the code to make a plot of the invariant mass.
###Code
M = ds['M']
plt.hist(M, bins= 200 , range=(0 , 150))
plt.show()
###Output
_____no_output_____
###Markdown
Hint 1 First fill a variable "invariant_mass_1" with the invariant mass ("M") from the file. Hint 2 Use "plt.hist" to make a histogram of the invariant_mass_1 values. Remember to input the number of bins and the range. Answer `invariant_mass_1 = ds['M']` `# remember to input number of bins and range (0.5-150 works well)` `no_bins = 500` `# use plt.hist to plot the invariant_mass_1 variable` `plt.hist(invariant_mass_1, no_bins, range=(0.5,120.), color="darkgrey")` `plt.show()` Looking at the muon pair invariant mass spectrum Below is the histogram published by the CMS experiment of the invariant mass of muon pairs. Does it look like yours? Image 8: The histogram of the invariant masses published by the CMS experiment. © CMS Collaboration [5] Not quite... That's because the CMS plot uses log scales on the axes to make the plot clearer. We can change our plot to log axes using plt.yscale('log') and plt.xscale('log'). Try that in the cell below
###Code
#You need to add you plt.hist line here
plt.hist(M, bins= 200 , range=(0 , 150))
plt.yscale('log')
plt.xscale('log')
plt.show()
###Output
_____no_output_____
###Markdown
Now it should look more similar. The plot shows a smooth 'background' of random coincidences and on top of that some 'peaks'. Each of these peaks is evidence for a particle decaying to muon pairs. The peaks correspond to known particles and have been given labels in the CMS plot. You can use the Particle Data Group website if you want to know more about these particles. If we saw a peak at a point where no known particle was expected, this would be evidence of a new particle discovery. Now try changing the range of the histogram to look at different parts of the mass spectrum - you can zoom in on the individual peaks (particles). For example in the range 2.5-4 you can see the 'J/psi' particle.
###Code
# You need to add a plt.hist function with your bins and range. Change the range to zoom in on
# different regions of the plot (for example range=(2.5, 4) shows the J/psi peak).
plt.hist(M, bins=500, range=(0, 150))
# remember plt.show() to plot the histogram to the screen
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 5 : Calculating the invariant mass We have seen that the invariant mass can be used to identify particles. Now let's calculate the invariant mass of the muon pairs for ourselves. Equation for invariant mass First, we loosely derive the equation for the invariant mass. Let's assume we have a particle with mass $M$ and energy $E$ which decays to two particles with masses $m_1$ and $m_2$, and energies $E_1$ and $E_2$. Energy $E$ and momentum $\vec{p}$ are conserved in the decay process, so $E = E_1 + E_2$ and $\vec{p} = \vec{p}_1 + \vec{p}_2$. Particles will obey the relativistic dispersion relation: $$Mc^2 = \sqrt{E^2 - c^2\vec{p}^2}.$$ With the conservation of energy and momentum this can be written as $$Mc^2 = \sqrt{(E_1+E_2)^2 - c^2(\vec{p_1} + \vec{p_2})^2}$$<!--$$=\sqrt{E_1^2+2E_1E_2+E_2^2 -c^2\vec{p_1}^2-2c^2\vec{p_1}\cdot\vec{p_2}-c^2\vec{p_2}^2}$$$$=\sqrt{2E_1E_2 - 2c^2 |\vec{p_1}||\vec{p_2}|\cos(\theta)+m_1^2c^4+m_2^2c^4}. \qquad (1)$$The relativistic dispersion relation can be brought to the following format$$M^2c^4 = E^2 - c^2\vec{p}^2$$$$E = \sqrt{c^2\vec{p}^2 + M^2c^4},$$--> from which, by setting $c = 1$ (very common in particle physics), we get $$M = \sqrt{E^2 - \vec{p}^2} = \sqrt{(E_1+E_2)^2 - (\vec{p_1} + \vec{p_2})^2}. \qquad (2)$$ For those that like maths, a fuller derivation of this can be found here How to do this in python In python, you only need to write a proper equation once, since the code executes the equation automatically for each row of the file. For example if you would like to sum the electrical charges of the two muons for each event and save the results in a variable _charges_, it could be done with the following code: ```charges = ds.Q1 + ds.Q2``` So you have to tell the code that Q1 and Q2 refer to values in the variable `ds`. This is done by adding the variable name, separated with a dot, in front of the value that is wanted, as in the example above. Remember that you can use 'sqrt' from the _numpy_ module that we named as _np_. You can get a square root with the function `np.sqrt()`. Naturally, inside the brackets there will be anything that is inside the square root or brackets in the equation too. __In the cell below write code__ that will calculate the invariant mass value for the muon pairs in each collision event in the data file. You need to use the muons' energy and momentum and then use equation 2 to calculate the invariant mass of the parent particle. The energy of each particle can be calculated from: $$E_1^2 = \vec{p_1}^2 + m_{1}^2$$ Remember that momentum is a vector, so: $$\vec{p_1}^2 = (p_1^x)^2 + (p_1^y)^2 + (p_1^z)^2 $$ where $p_1^x$ is the $x$-component of the momentum of particle 1. Save the calculated values in a variable called `invariant_mass`. There are some comments in the cell below to help you with the different steps. There are also some hints - only use these if you are really stuck! Hint 1 When you write the different quantities of the equation in your code, remember to refer to the variable from which you want to get the quantities. For example if you need the quantity "pt1", write "ds.pt1" in the code. Hint 2 Use the equations above for each step, for example to calculate the momentum squared of muon1 : p1_squared = (ds.px1)**2 + (ds.py1)**2 + (ds.pz1)**2 Hint 3 To calculate the energy of muon1 : e1 = np.sqrt(p1_squared + (muMass**2))
###Code
# You need the Mass of the Muon to calculate the energy
muMass = 0.105658
# Momentum squared for the two individual muons
p1_squared = (ds.px1)**2 + (ds.py1)**2 + (ds.pz1)**2
p2_squared = (ds.px2)**2 + (ds.py2)**2 + (ds.pz2)**2
# Energy of the two individual muons
E1 = np.sqrt(p1_squared+(muMass**2))
E2 = np.sqrt(p2_squared+(muMass**2))
# Total Energy of the two muons
E = E1+E2
# Momentum squared of the muon pair vector (p1+p2) - remember to add the vectors before squaring
px = ds.px1 + ds.px2
py = ds.py1 + ds.py2
pz = ds.pz1 + ds.pz2
# Invariant mass of the muon pair, save this in a variable called "invariant_mass"
invariant_mass = np.sqrt((E)**2 - px**2 - py**2 - pz**2)
###Output
_____no_output_____
###Markdown
Now, if you run the cell below, the code will print the first five mass values that are calculated and will tell if the calculation is correct.
###Code
print('The first five values calculated (in units GeV):')
print(invariant_mass[0:5])
# Rest of the code is for checking if the values are correct. You don't have to change that.
if 14.31 <= invariant_mass.values[4] <= 14.32:
print('Invariant mass values are correct!')
else:
print('Calculated values are not yet correct. Please check the calculation one more time.')
    print("Remember: don't change the name of the variable invariant_mass.")
###Output
The first five values calculated (in units GeV):
0 17.492160
1 11.553405
2 9.163621
3 12.477441
4 14.315873
dtype: float64
Invariant mass values are correct!
###Markdown
Creating the histogram Next, let's create a histogram from the invariant mass values that you have calculated. Here we want to focus on the Z boson, so set the range wisely to get the values near the mass of the Z boson. Try different numbers of bins to make a clear histogram, and see how the different values affect the histogram. Add axis labels and a title to the histogram. If you get stuck use the hints below. But try to create the histogram without using the hints! Hint 1 The invariant mass values that you have calculated are saved in the variable "invariant_mass". Hint 2 The histogram function is in the form "plt.hist(x, bins=0, range=(0,0))", where x will be replaced with the name of the variable that contains the data to be used in the histogram (in our case the invariant masses). The zeroes will be replaced with the desired number of bins and with the lower and upper limits of the histogram. Hint 3 Try different bin values between 50 and 200. Hint 4 A good mass range for the Z boson is 60-120 GeV
###Code
# Write down the code to create and plot the histogram (use plt.hist as we did earlier),
# following the hints: bins between 50 and 200 and a 60-120 GeV range around the Z peak.
plt.hist(invariant_mass, bins=100, range=(60, 120))
plt.xlabel('Invariant mass [GeV]')
plt.ylabel('Number of events')
plt.show()
###Output
_____no_output_____
###Markdown
Question 1 : Describe the histogram. What information can you get from it? Answer The position of the peak of the histogram on the x-axis tells you the mass of the Z boson. Exercise 6 : Looking for Higgs to 4 lepton decays Now that we can reconstruct invariant masses we can look to find the mass of the Higgs via its decay to two Z bosons. As the Z boson is not stable and decays, we can identify it by its decay to two leptons as above. Consequently the Higgs boson can end up decaying to 4 leptons. We can look at the final states electron-positron and electron-positron ($e^+ e^- e^+ e^-$), electron-positron and muon-antimuon ($e^+ e^- \mu^+ \mu^-$), as well as $\mu^+ \mu^- \mu^+ \mu^-$ Image 9: Feynman diagrams for Higgs to 4-lepton decays To calculate the invariant mass of the Higgs we need to know the mass of the particles in the final state. There are three different mass configurations here, so three different calculations. We could look at the invariant mass distribution of each diagram above and then add them together to get the final distribution. But because these are high energy collisions, the masses of the electron and muon are very small compared to the momentum of the particles: $p >> m$ Therefore the mass contribution to the energy is negligible, so: $ E^2 = \vec{p}^2 + m^2 \approx \vec{p}^2$ Now we can add all the $e^+ e^- e^+ e^-$, $e^+ e^- \mu^+ \mu^-$ and $\mu^+ \mu^- \mu^+ \mu^-$ data together, as their only physical difference is their mass and we have set this to zero. The mass equation is now: $ M = \sqrt{(E)^2 - (\vec{p})^2} = \sqrt{(E_1+E_2+E_3+E_4)^2 - (\vec{p_1} + \vec{p_2} + \vec{p_3} + \vec{p_4})^2} $ and $ {E_1}^2 = \vec{p_1}^2 $. Note that the energy of each particle is already in the dataset. For example, taking the concatenated dataset below, the energy of particle 1 is ds2.E1, the energy of particle 2 is ds2.E2 ... etc. Have a try! Calculate the invariant mass of all the datasets in one go! 4 lepton invariant mass
###Code
# Here we load the data for the different final sets of particles
ds_2e2mu_2011 = pd.read_csv('2e2mu_2011.csv')
ds_2e2mu_2012 = pd.read_csv('2e2mu_2012.csv')
ds_4e_2011 = pd.read_csv('4e_2011.csv')
ds_4e_2012 = pd.read_csv('4e_2012.csv')
ds_4mu_2011 = pd.read_csv('4mu_2011.csv')
ds_4mu_2012 = pd.read_csv('4mu_2012.csv')
# Here we concatenate the 6 datasets into one called "ds2"
ds2 = pd.concat([ds_2e2mu_2011, ds_2e2mu_2012, ds_4e_2011, ds_4e_2012, ds_4mu_2011, ds_4mu_2012], axis=0, ignore_index=True)
#Total Energy of the four leptons
E = ds2.E1 + ds2.E2 + ds2.E3 + ds2.E4
#Total momentum in the x direction of the four leptons
px = ds2.px1 + ds2.px2 + ds2.px3 + ds2.px4
#Total momentum in the y direction of the four leptons
py = ds2.py1 + ds2.py2 + ds2.py3 + ds2.py4
#Total momentum in the z direction of the four leptons
pz = ds2.pz1 + ds2.pz2 + ds2.pz3 + ds2.pz4
# Now calculate the invariant mass using Equation (2) above and assign it to a variable called 'invariant_mass_2e2mu'
invariant_mass_2e2mu = np.sqrt((E)**2 - px**2 - py**2 - pz**2)
###Output
_____no_output_____
###Markdown
Run the cell below to plot your mass values. You should see peaks where the arrows are, corresponding to the Z and Higgs bosons
###Code
# Write down there a code that will create the histogram.
plt.hist(invariant_mass_2e2mu, bins=60, range=(45,180))
# Let's name the axes and the title. Don't change these.
plt.xlabel('Invariant mass [GeV]')
plt.ylabel('Number of events')
plt.title('Histogram of invariant mass values of four leptons. \n')
plt.arrow(70, 10, 18, -1.8,length_includes_head=True, width=0.2, fc='r', ec='r')
plt.arrow(125, 9, 0, -3.5,length_includes_head=True, width=0.5, fc='r', ec='r')
plt.text(118, 10.5, 'Higgs Boson', fontsize=12)
plt.text(60, 10.5, 'Z Boson', fontsize=12)
plt.show()
###Output
_____no_output_____
###Markdown
Combining the 4 lepton data we start to see hints of the Higgs particle decaying to 4 leptons at a mass of around 126 $GeV/c^2$. Compare to the CMS analysis We can compare our distribution to the CMS analysis. Bear in mind that the image produced below uses more data and a more sophisticated analysis, for example looking at events with more than 4 leptons, where the additional leptons can come from other particles in the event. Image 10: Distribution of the reconstructed four-lepton invariant mass in the low-mass range. © CMS Collaboration [6] In the end Now you have completed the exercise. Feel free to go back and test some different values in the code and see what happens. You can also create a new code cell by clicking "INSERT" -> "Insert Cell Below" and try to write some of your own code too! More information about the CERN Open Data can be found at http://opendata.cern.ch/. Further Work Extension exercise 1 : Effect of pseudorapidity on the mass distribution If you have finished all the exercises above and would like to do more, look at the sections below on fitting the Z mass plot and the effect of pseudorapidity. In this final section, we will study how the pseudorapidities of the muons that are detected in the CMS detector alter the mass distribution. Pseudorapidity (denoted by $\eta$) is a measure of the angle the detected particle makes with the particle beam (z-axis). The angle itself is called $\theta$ (see diagram below). Pseudorapidity is then determined with the equation: $$\eta = -\ln(\tan(\frac{\theta}{2}))$$ From the image one can see that, in practice, a large pseudorapidity means that the particle has continued almost along the beam-line after the collision. And vice versa: a small pseudorapidity means that the particle is more perpendicular to the beam-line. Image 11: Quantities $\theta$, $\eta$ and $\phi$ in the CMS detector. Image 12 below shows a situation where two particle beams from left and right collide. The image shows two muons with different pseudorapidities. The muon with the smaller pseudorapidity hits the barrel part of the detector, while the muon with the greater pseudorapidity goes to the endcap of the detector. There are also muon chambers at both ends of the detector, so these muons can also be detected. Image 12: Two particles with different pseudorapidities in the CMS detector. In this final section, two different histograms will be made: one using only muon pairs with small pseudorapidities and one using only those with large pseudorapidities. We can then study how the pseudorapidities of the muons that are detected in the CMS detector affect the mass distribution. Selecting the events Next let’s create two variables for dividing the events: `small_etas` and `large_etas`. To the first one we will save only collision events where the pseudorapidities of both detected muons are small (for example under 0.38). To the second one we save only those events where the pseudorapidities are both large (for example over 1.52). Absolute values will be used because $\eta$ can have both positive and negative values. Complete the code cell below by determining the variables `small_etas` and `large_etas` so that the division described above is made. You will need the following functions: - `ds[condition]` selects from the variable `ds` only events which fulfill the condition written inside the brackets. There can also be more than one condition. 
Then the function is in the form `ds[(condition1) & (condition2)]`- an example of this could be selecting, from the variable `example`, only the rows where the values of the columns `a` and `b` are both greater than 8: `example[(example.a > 8) & (example.b > 8)]`- you can get the absolute values with the function `np.absolute()` from the _numpy_ module- the pseudorapidity of the first muon is `ds.eta1` and of the second `ds.eta2`- ”greater than” and ”smaller than” comparisons can be made in Python directly with the symbols > and <- Python uses a dot as a decimal separator (for example 0.38) Hint 1 Remember to define the small values in a way that both eta1 and eta2 are smaller than 0.38, and the same for the large values. Hint 2 Remember to tell from which variable you want to get the values of the pseudorapidities (write ds.eta1 or ds.eta2). Remember to use "np." in front of the absolute value function. Hint 3 The first variable with the conditions is "large_etas = ds[(np.absolute(ds.eta1) > 1.52) & (np.absolute(ds.eta2) > 1.52)]" and the second "small_etas = ds[(np.absolute(ds.eta1) < 0.38) & (np.absolute(ds.eta2) < 0.38)]".
###Code
# Let's import the needed modules.
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# With this line the data is imported and saved to the variable "ds".
ds = pd.read_csv('DoubleMuRun2011A.csv')
# Define new variables "large_etas" and "small_etas" which contain only those events in "ds" which satisfy some condition
# Selections following Hint 3 above: both muons with |eta| > 1.52 (large) or |eta| < 0.38 (small)
large_etas = ds[(np.absolute(ds.eta1) > 1.52) & (np.absolute(ds.eta2) > 1.52)]
small_etas = ds[(np.absolute(ds.eta1) < 0.38) & (np.absolute(ds.eta2) < 0.38)]
# Let's print out some information about the selection
print('Total number of events = %d' % len(ds))
print('Number of events where the pseudorapidity of both muons is large = %d' % len(large_etas))
print('Number of events where the pseudorapidity of both muons is small = %d' % len(small_etas))
###Output
_____no_output_____
###Markdown
Creating the histograms Now create separate histograms from the events with small and with large values of pseudorapidity. You need to fill "inv_mass_large" and "inv_mass_small" with the invariant mass of the events in your large and small eta datasets. The cell will get the invariant masses for both of the selections and will create the histograms out of them near the peak of the Z boson. Hint 1 You can access the invariant mass values ('M') for the large eta selection with: large_etas['M']
###Code
# Let's get the invariant masses of the large and small pseudorapidity
# events for making the histograms.
# Following Hint 1 above: take the invariant mass column 'M' from each selection
inv_mass_large = large_etas['M']
inv_mass_small = small_etas['M']
# Let's use the matplotlib.pyplot module to create a custom size
# figure where the two histograms will be plotted.
f = plt.figure(1)
f.set_figheight(15)
f.set_figwidth(15)
plt.subplot(211)
plt.hist(inv_mass_large, bins=120, range=(60,120))
plt.ylabel('large etas, number of events', fontsize=20)
plt.subplot(212)
plt.hist(inv_mass_small, bins=120, range=(60,120))
plt.ylabel('small etas, number of events', fontsize=20)
plt.xlabel('invariant mass [GeV]', fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
Question 2 Compare the two histograms that were created above. In what way does the pseudorapidity of the muons affect the mass distribution? What could possibly explain your observations? First try to think of the explanation by yourself, then you can open the explanation below to see how you did. Click here to open the explanation From the histograms one can see that the events where the pseudorapidity of both muons is small produce a narrower peak than the events where the muons have large pseudorapidities. That means that the resolution of the invariant masses is worse with larger pseudorapidities. The worse resolution follows from the fact that the resolution of the transverse momentum ($p_T$, the component of momentum that is perpendicular to the particle beams) is worse for muons with greater pseudorapidities. This can be seen for example from image 21 on page 32 of the CMS paper https://arxiv.org/pdf/1206.4071.pdf The explanation for the effect of the pseudorapidity on the resolution is that the particles which enter the endcap of the detector (larger pseudorapidities) are more likely to interact with the material of the detector than the muons with smaller pseudorapidities (see the detector images above). In these interactions muons will lose some of their energy. This slightly disturbs the fitting of the trajectories of the muons and the measurement of the transverse momentum. The measurement of the transverse momentum also depends on, for example, the orientation of the muon chambers, the amount of material in the detector and the magnetic field. It can be assumed that these things are less well known for particles that have larger pseudorapidities. Extension exercise 2 : Fitting a function to the Z mass histogram To get information about the mass and lifetime of the detected resonance, a function that describes the distribution of the invariant masses must be fitted to the values of the histogram. In our case the values follow a Breit-Wigner distribution: $$N(E) = \frac{K}{(E-M)^2 + \frac{\Gamma^2}{4}},$$ where $E$ is the energy, $M$ the maximum of the distribution (equal to the mass of the particle detected in the resonance), $\Gamma$ the full width at half maximum (FWHM) or the decay width of the distribution, and $K$ a constant. The Breit-Wigner distribution can also be expressed in the following form: $$\frac{ \frac{2\sqrt{2}M\Gamma\sqrt{M^2(M^2+\Gamma^2)} }{\pi\sqrt{M^2+\sqrt{M^2(M^2+\Gamma^2)}}} }{(E^2-M^2)^2 + M^2\Gamma^2},$$ where the constant $K$ is written out explicitly. The decay width $\Gamma$ and the lifetime $\tau$ of the particle detected in the resonance are related in the following way: $$\Gamma \equiv \frac{\hbar}{\tau},$$ where $\hbar$ is the reduced Planck constant. With the code below it is possible to fit a function that represents the Breit-Wigner distribution to the values of the histogram. The function is already written in the code. It is now your task to figure out what the values of the maximum of the distribution $M$ and the full width at half maximum $\Gamma$ could approximately be. The histogram that was created earlier will help in this task. Write these initial guesses in the code in the line `initials = [THE INITIAL GUESS FOR GAMMA, THE INITIAL GUESS FOR M, -2, 200, 13000]`. In other words, replace the two comments in that line with the values that you derived. Notice that the initial guesses for the parameters _a, b_ and _A_ have already been given. Other comments in the code can be left untouched. 
From them you can get information about what is happening in the code. After running the code, Jupyter will print the values of the different parameters as a result of the optimization. The uncertainties of the values and a graph of the fitted function are also printed. The uncertainties are obtained from the covariance matrix that the fitting function `curve_fit` returns. Hint 1 Think about how M and gamma could be determined with the help of the histogram. Look at the histogram that you created and estimate approximately what the values of M and gamma would be. Hint 2 If you figured out the initial guesses to be for example gamma = 12 and M = 1300 (note that these values are just random examples!) write them in the code in the form "initials = [12, 1300, -2, 200, 13000]".
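As an aside, once you have a value for the decay width you can turn it into a lifetime using the relation $\Gamma \equiv \hbar/\tau$ quoted above. A minimal sketch of that conversion (the width used here is a hypothetical example value, not a result of this fit):
```
from scipy.constants import hbar, eV

gamma_GeV = 4.0                  # hypothetical decay width in GeV, not a fit result
gamma_J = gamma_GeV * 1e9 * eV   # convert GeV to joules
tau = hbar / gamma_J             # lifetime in seconds, tau = hbar / Gamma
print('Estimated lifetime: {:.2e} s'.format(tau))
```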
###Code
ds = pd.read_csv('DoubleMuRun2011A.csv')
invariant_mass = ds['M']
# Let's limit the fit near to the peak of the histogram.
lowerlimit = 70
upperlimit = 110
bins = 100
# Let's select the invariant mass values that are inside the limitations.
limitedmasses = invariant_mass[(invariant_mass > lowerlimit) & (invariant_mass < upperlimit)]
#Let's create a histogram of the selected values.
histogram = plt.hist(limitedmasses, bins=bins, range=(lowerlimit,upperlimit))
# In y-axis the number of the events per each bin (can be got from the variable histogram).
# In x-axis the centers of the bins.
y = histogram[0]
x = 0.5*( histogram[1][0:-1] + histogram[1][1:] )
# Let's define a function that describes Breit-Wigner distribution for the fit.
# E is the energy, gamma is the decay width, M the maximum of the distribution
# and a, b and A different parameters that are used for noticing the effect of
# the background events for the fit.
def breitwigner(E, gamma, M, a, b, A):
return a*E+b+A*( (2*np.sqrt(2)*M*gamma*np.sqrt(M**2*(M**2+gamma**2)))/(np.pi*np.sqrt(M**2+np.sqrt(M**2*(M**2+gamma**2)))) )/((E**2-M**2)**2+M**2*gamma**2)
# Initial values for the optimization in the following order:
# gamma (the full width at half maximum (FWHM) of the distribution)
# M (the maximum of the distribution)
# a (the slope that is used for noticing the effect of the background)
# b (the y intercept that is used for noticing the effect of the background)
# A (the "height" of the Breit-Wigner distribution)
# Approximate initial guesses read off the Z-peak histogram (assumption: maximum near 91 GeV, width of a few GeV)
initials = [4, 91, -2, 200, 13000]
# Let's import the module that is used in the optimization, run the optimization
# and calculate the uncertainties of the optimized parameters.
from scipy.optimize import curve_fit
best, covariance = curve_fit(breitwigner, x, y, p0=initials, sigma=np.sqrt(y))
error = np.sqrt(np.diag(covariance))
# Let's print the values and uncertainties that are got from the optimization.
print("The values and the uncertainties from the optimization")
print("")
first = "The value of the decay width (gamma) = {} +- {}".format(best[0], error[0])
second = "The value of the maximum of the distribution (M) = {} +- {}".format(best[1], error[1])
third = "a = {} +- {}".format(best[2], error[2])
fourth = "b = {} +- {}".format(best[3], error[3])
fifth = "A = {} +- {}".format(best[4], error[4])
print(first)
print(second)
print(third)
print(fourth)
print(fifth)
plt.plot(x, breitwigner(x, *best), 'r-', label='gamma = {}, M = {}'.format(best[0], best[1]))
plt.xlabel('Invariant mass [GeV]')
plt.ylabel('Number of event')
plt.title('The Breit-Wigner fit')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Exercise File Question 1: Installing Library/Package using pip with or without "!"
###Code
pip install numpy
!pip install pandas
###Output
_____no_output_____
###Markdown
Logic Homework Initialization
###Code
from utils import expr
import numpy as np
from kb import DpllPropKB, FCPropKB
from draw import draw
import sudoku_maps as maps
from field_var import field_var
from ask_solution import ask_solution
###Output
_____no_output_____
###Markdown
Sudoku In this exercise we will work with 2x2 sudoku, i.e. sudoku where the numbers of each row, column, and square block go from 1 to 4. Your task is to implement the knowledge base needed to solve the sudoku, given the sudoku's initial state. The initialized sudoku is given as an array - an np.ndarray to be precise - of shape (4,4), where the first dimension denotes the rows and the second the columns. In other words, $sudoku[x][y]$ contains the number in square xy. The sudoku array contains the following values: 0,1,2,3,4. The number 0 means that the corresponding square is initialized as empty.
###Code
# choose a sudoku map: possible values: 1, 2, 3, 4, 5
sudoku_index = 1
sudoku = getattr(maps, "sudoku"+str(sudoku_index))
##as you can see, sudoku is a 4x4 array.
print('The sudoku array: ', sudoku)
##a better overview of the sudoku array
print('Pretty print of the sudoku array: ')
for row in sudoku:
print(row)
#You can access the array elements with sudoku[x][y]
print('The number at position 0,0 is ', sudoku[0][0])
print('The number at position 1,2 is ', sudoku[1][2])
###Output
_____no_output_____
###Markdown
Knowledge Base generation Variables:For simplicity, in this homework there is only one variable: $V_{n, x, y}$, which means that at position [x, y] there is number n, with $x= 0, ..., 3$; $y = 0, ..., 3$; and $n = 1, ..., 4$. For example, $V_{2, 1, 2}$ means that the square where the second row and the third column meet contains number two. The field_var method will help you generate correct variables:
###Code
###Example
#in square 1,2 there is number two
V212 = field_var(value=2, x=1, y=2)
###Output
_____no_output_____
###Markdown
Always use this method to generate a field variable. Your Knowledge Base Your task is to implement the knowledge base in order to solve a 2x2 Sudoku, given the initialized sudoku. The Sudoku must be solved according to the rules: - Valid numbers for each grid square are 1, 2, 3, and 4. - Each row and each column must contain all valid numbers. - Each square block has to contain all the valid numbers within its squares. The initial state of the sudoku is saved in the sudoku variable. Sample knowledge base generation
###Code
def generate_knowledge_example(initialized_sudoku):
kb = []
    ##remember to add the initial state of the sudoku to the knowledge base
    x, y = 0, 0  # example coordinates only; a real solution would loop over every square
    if initialized_sudoku[x][y] == 1:
new_proposition = field_var(1,x, y) # V1xy
kb.append(new_proposition)
new_proposition = field_var(1,x, y) + " | ~" + field_var(2, x, y) # V1xy ∨ ¬V2xy
kb.append(new_proposition)
new_proposition = field_var(3,1, 1) + " ==> " + field_var(3,1, 1) # V311 ==> V311
kb.append(new_proposition)
new_proposition = field_var(2, x, y) + " & ~" + field_var(4, x, y) + " <=> False" # V2xy ∧ ¬V4xy <=> False
kb.append(new_proposition)
new_proposition = field_var(3,1, 1) + " <== " + field_var(3,1, 1) # V311 <== V311
kb.append(new_proposition)
return kb
###Output
_____no_output_____
###Markdown
Your Task Implement the function $generate\_knowledge$ in $generate\_knowledge.py$ such that the function takes an initialized sudoku as input and outputs the knowledge base. Feel free to define in $generate\_knowledge.py$ any helper function you may need, but do not import any additional modules or packages, otherwise your solution will be marked as failed. This exercise is easily solvable without any additional packages. Refer to the function generate_knowledge_example in the previous cell for correct syntax, and see the illustrative sketch below.
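As an illustration only (this is not the full solution, and the helper name `at_least_one_number_clauses` is our own): one of the rules, that every square must contain at least one of the valid numbers, could be encoded with the same string syntax as the example above, roughly like this:
```
from field_var import field_var

def at_least_one_number_clauses():
    # one clause per square: V1xy | V2xy | V3xy | V4xy
    clauses = []
    for x in range(4):
        for y in range(4):
            clauses.append(" | ".join(field_var(n, x, y) for n in range(1, 5)))
    return clauses
```
The row, column and block constraints, plus the initial state of the sudoku, would need to be added in a similar way.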
###Code
from generate_knowledge import generate_knowledge
###Output
_____no_output_____
###Markdown
Configuration Choose a knowledge base class: possible values: - "Dpll" - use this knowledge base for DPLL proving - this works for every kind of knowledge base - "FC" - use this knowledge base for proving with forward chaining - warning: the knowledge base should only contain clauses in the following forms for this to work: - α & .. & β ==> γ & .. & δ - α & .. & β <== γ & .. & δ - α & .. & β <=> γ & .. & δ - α & .. & β
###Code
# possible values: "Dpll", "FC"
# kb_gen = "FC"
kb_gen = "Dpll"
KB = globals()[kb_gen+"PropKB"]
###Output
_____no_output_____
###Markdown
Inference
###Code
kb = KB() # create empty knowledge base
print("feed knowledge base with knowledge..")
for str_expr in generate_knowledge(sudoku):
kb.tell(expr(str_expr))
# check if the knowledge base is obviously wrong (you can remove this if it is too slow)
print("scan knowledge base for contradictions..")
assert not kb.has_contradicting_knowledge()
sudoku_solution = ask_solution(kb)
print('Pretty print of the sudoku array: ')
for row in sudoku_solution:
print(row)
##draw the inferred solution. Initial numbers are in orange, inferred numbers in green.
draw(sudoku_solution, sudoku)
###Output
_____no_output_____
###Markdown
Time Series Workshop − ExerciseODSC Kiev − April 14, 2018Michael Schmidthttps://github.com/mds47/time-series-workshop Setup
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display, HTML
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.base import BaseEstimator, RegressorMixin
# Re-run this cell to reset your recent score history
attempt_scores = {}
###Output
_____no_output_____
###Markdown
Scoring Code (DO NOT EDIT) This scoring code defines functions that will be called by the editable sections that follow.
###Code
def fit_model(t, X, y, distance, apriori_columns=None, log_transform=False,
test_size=0.2, estimator=None, show=False, exercise=''):
"""Fits a forecasting model to predict a future value and displays its accuracy.
Parameters
----------
t : pandas.Series, shape = (n_samples, )
The timestamps of each row in the data
X : pandas.DataFrame, shape = (n_samples, n_features)
The input features used for making predictions
y : pandas.Series, shape = (n_samples, )
The target values for each row
distance : int
The distance to forecast. Columns of X that are not listed in apriori_columns
        will be shifted by this amount.
apriori_columns : list[str]
List of column names in X that should not be shifted by distance
log_transform : bool
Optional settings to enable log transform of the target
test_size : float or int
The percent or count of rows to use for the test set at the end of the time series
estimator : sklearn.base.BaseEstimator
Override the estimator used to fit the model
show : bool
Display a summary of the score, fit to the data, and feature impact
Returns
-------
estimator : sklearn.base.BaseEstimator
The fitted estimator model
test_score : float
The test split score (MASE by default)
"""
# apply forecast distance to non a priori columns
if apriori_columns is None:
apriori_columns = []
shift_cols = [c for c in X.columns if c not in apriori_columns]
X[shift_cols] = X[shift_cols].shift(distance)
# ignore missing value rows
non_null = X.dropna(how='any').index.values
# get row indices for train and test set
if isinstance(test_size, float):
test_size = int(test_size * len(t))
train = non_null[:-test_size]
test = non_null[-test_size:]
# standardize
numeric_cols = X.select_dtypes(include=[np.number]).columns
centers = X.loc[train, numeric_cols].mean(axis=0)
scales = np.maximum(1e-15, X.loc[train, numeric_cols].std(axis=0))
X[numeric_cols] = (X[numeric_cols] - centers)/scales
# fit model
if estimator is None:
estimator = ElasticNetCV(n_alphas=100, l1_ratio=0.9, cv=TimeSeriesSplit(5),
eps=0.00001, tol=0.00001, max_iter=10000,
selection='random', random_state=123)
if log_transform:
estimator = MultiplicativeEstimator(estimator)
estimator.fit(X.loc[train, :], y[train])
y_pred = estimator.predict(X.loc[test, :])
naive = y.shift(distance).loc[test]
test_score = mean_absolute_error(y[test], y_pred)/mean_absolute_error(y[test], naive)
# track the recent scores
if exercise not in attempt_scores:
attempt_scores[exercise] = []
attempt_scores[exercise].append(test_score)
if show:
display(HTML('<h4>rows: {}, columns: {}, distance: {}, test_size: {}, test_error: {}</h4>'.format(
X.shape[0], X.shape[1], distance, test_size, test_score)))
show_attempt_summary(exercise)
# plot the fit
display(HTML('<h3>Model Fit:</h3>'))
plt.figure(figsize=(9,3))
plt.plot(t, y, '.', c='gray', alpha=0.7)
plt.plot(t[train], estimator.predict(X.loc[train, :]), 'b-', lw=2, alpha=0.7)
plt.plot(t[test], estimator.predict(X.loc[test, :]), 'r-', lw=2, alpha=0.7)
plt.annotate('Test Error: {:%}'.format(test_score),
xy=(0, 1), xytext=(12, -12), va='top', fontweight='bold',
xycoords='axes fraction', textcoords='offset points')
plt.show()
# show feature importances
display(HTML('<h3>Important Features:</h3>'))
if hasattr(estimator, 'coef_'):
coef = pd.Series(estimator.coef_, index=X.columns, name='Importance')
importances = coef.abs() / y.std()
importances = importances[importances > 0]
importances.loc['(others)'] = 0
importances.sort_values(ascending=False, inplace=True)
display(importances.to_frame().style.bar(color='orange'))
else:
display(HTML('<i>Not available</i>'))
return estimator, test_score
def show_attempt_summary(ex):
"""Displays the recent scores"""
if len(attempt_scores[ex]) > 1:
score_delta = attempt_scores[ex][-1] - attempt_scores[ex][-2]
if score_delta < 0:
color = 'green'
elif score_delta < 1e-6:
color = 'gray'
else:
color = 'red'
message = '<h3>Score Change: <span style="color:{};">{:+}</span></h3>'.format(color, score_delta)
display(HTML(message))
display(HTML('<h3>Recent Errors:</h3>'))
plt.figure(figsize=(9,1))
plt.title('Your Recent Errors')
plt.plot(np.arange(len(attempt_scores[ex])), attempt_scores[ex], 'mo-', mec='white', ms=5)
plt.axhline(min(attempt_scores[ex]), linestyle='--', alpha=0.5, color='gray')
plt.axhline(attempt_scores[ex][-1], linestyle='-', alpha=0.5, color='gray')
plt.xlim(0, len(attempt_scores[ex]))
plt.ylim([0.95*min(attempt_scores[ex]), 1.05*np.nanpercentile(attempt_scores[ex], 95)])
plt.show()
class MultiplicativeEstimator(BaseEstimator, RegressorMixin):
"""Wrapper class that applies a log transform to the target."""
def __init__(self, estimator):
self.estimator = estimator
def fit(self, X, y, *args, **kwargs):
self.estimator.fit(X, np.log(y), *args, **kwargs)
return self
def predict(self, X):
return np.exp(self.estimator.predict(X))
def __getattr__(self, name):
return getattr(self.estimator, name)
###Output
_____no_output_____
###Markdown
--- Exercise − Electricity Usage Maximize the test set accuracy for the dataset below. You are encouraged to create lagged features and rolling window statistic features, as well as a priori features. You create features by adding columns to the dataframe `X` below. For example: ```X['y mean 10'] = y.rolling(10).mean(); X['y max 10'] = y.rolling(10).max(); ...``` You should edit the skeleton code in the next cell and run it in order to see your score. Be sure to use distinct names for each feature. You can create a name programmatically using python string formatting such as `feature_name = '{} lag {}'.format('y', 1)`. Lags You can create a lag of a variable using the `y.shift(n)` member. For example, `X['y lag 10'] = data['y'].shift(10)` would add the variable y lagged by 10 rows. Rolling statistics You can create statistics derived from a rolling window using the `y.rolling(n)` member. For example, `X['rolling mean 7'] = y.rolling(7).mean()` would add the rolling mean of y over the past 7 rows. For more details on other functions that can be called on the window (e.g. `min`, `max`, `sum`, etc), see the pandas documentation here: https://pandas.pydata.org/pandas-docs/stable/computation.html#window-functions A priori Features A priori features are features that are known in advance and are not lagged − for example, features derived from the data like the day of the week. Be careful not to use any variables derived from the target or any other type of covariate. You can specify a feature as a priori by appending its name to the `apriori_columns` array: ```X['day of week'] = t.dt.dayofweek; apriori_columns.append('day of week')``` You can view other datetime properties supported by the `.dt` syntax above here: https://pandas.pydata.org/pandas-docs/stable/api.html#datetimelike-properties Scoring You can score your features by calling `fit_model(..., show=True)`. The `show=True` parameter will print a summary of your recent model scores, show a plot of the fitted model to the data, and display a table of the feature importance for each feature that you created. If a feature does not appear it means it was not used by the model.
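For instance, here is a sketch of a few candidate features that reuse only the constructs described above (the feature names and window lengths are arbitrary examples, and the snippet assumes `t`, `X`, `y` and `apriori_columns` have been set up as in the cell below):
```
# lagged target values
X['lag 1'] = y.shift(1)
# rolling window statistics
X['rolling mean 7'] = y.rolling(7).mean()
X['rolling max 28'] = y.rolling(28).max()
# an a priori calendar feature
X['day of week'] = t.dt.dayofweek
apriori_columns.append('day of week')
```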
###Code
# SETUP: Read data and setup common variables (DO NOT EDIT)
data = pd.read_csv('data/turkish_electricity_demand.csv', parse_dates=['date'])
t = data['date']
X = pd.DataFrame()
y = data['y']
numeric_columns = data.select_dtypes(include=[np.number])
apriori_columns = []
# -----------------------------------------
# EDIT BELOW: Add Features to X
# -----------------------------------------
# EXAMPLE: Nearest lag
X['lag 0'] = y.shift(0)
# EXAMPLE: first day of month indicator (a priori)
X['is_month_start'] = t.dt.is_month_start
apriori_columns.append('is_month_start')
# SCORING: Fit model and and display fit and score (DO NOT EDIT)
model, score = fit_model(
t, X, y,
apriori_columns=apriori_columns,
log_transform=False,
distance=7,
show=True,
exercise='electricity',
)
###Output
_____no_output_____
###Markdown
--- Another Dataset − Stock Data For fun, try to forecast the price of a stock into the future. Stock data is notoriously difficult to forecast. Explore this data to see if any signals exist and what accuracy is possible, if any. This data was collected via the Yahoo Finance API: ```import pandas_datareader.data as web; data = web.DataReader(['IBM', 'GOOG', 'AAPL', 'MSFT'], 'yahoo', '2010-01-01', '2017-04-14'); data = data.sort_index(); data = data['Close']; data.to_csv('stock_close_prices.csv')```
###Code
# SETUP: Read data and setup common variables (DO NOT EDIT)
data = pd.read_csv('data/stock_close_prices.csv', parse_dates=['Date'])
t = data['Date']
X = pd.DataFrame()
y = data['AAPL']
numeric_columns = data.select_dtypes(include=[np.number])
apriori_columns = []
# -----------------------------------------
# EDIT BELOW: Add Features to X
# -----------------------------------------
# EXAMPLE: Nearest lag
X['lag 0'] = y.shift(0)
# EXAMPLE: first day of month indicator (a priori)
X['is_month_start'] = t.dt.is_month_start
apriori_columns.append('is_month_start')
# SCORING: Fit model and and display fit and score (DO NOT EDIT)
model, score = fit_model(
t, X, y,
apriori_columns=apriori_columns,
log_transform=False,
distance=28,
show=True,
exercise='stocks',
)
###Output
_____no_output_____
###Markdown
Overview This exercise uses the Jupyter and Python you have learned in the tutorials to manipulate, plot, and then analyse some experimental data. You will be given data for the **vapour pressure** of CO2. This is the pressure of a gas when it is in equilibrium with a condensed phase (solid or liquid). The vapour pressure approximately varies with temperature according to the Clausius-Clapeyron equation. If you have not yet seen the derivation of this equation, it is not essential for this exercise, but is included [below](#clausius_clapeyron_derivation) if you are interested. Integrating the Clausius-Clapeyron equation gives a **linear** relationship between $\ln p$ and $1/T$, which means for a given phase equilibrium (i.e. solid—gas or liquid—gas) a plot of $\ln p$ against $1/T$ gives (approximately) a straight line. Furthermore, as explained below, the **slope** of this line is proportional to the **phase transition enthalpy** for these two phases. This means that experimental **vapour pressure** data can be used to fit a straight line (linear regression) according to the Clausius-Clapeyron equation. This fitting allows you to describe the range of temperatures and pressures where either solid and gas, or solid and liquid, or all three phases, are in equilibrium, and to calculate various enthalpy changes for phase transitions. Assessment When you have finished the exercise, save your completed notebook, using **File > Save and Checkpoint** in the Jupyter menu. Then upload your notebook for assessment using Moodle. Please make sure that you upload the `Exercise.ipynb` file, and that it is not an old version of the notebook (check the modification date and time before you upload). This notebook contains cells marked `# TEST CELL`. These contain hidden `assert` statements that will be used to test your code and calculate your mark. The comments in each cell describe what is being tested. Because your notebook will be marked by running your code, you should check that everything works as you expect when running from top to bottom. Because notebook cells can be run in any order, it is possible to have code that looks correct, but that gives errors when run by someone else. When you are happy with your notebook, you can test it by selecting **Kernel > Restart & Run All** from the Jupyter menu. Finding the Triple Point of CO2 This is the phase diagram of CO2, which shows the ranges of temperature and pressure where different phases are stable. The solid lines on this diagram are **phase-coexistence lines**, which describe the temperatures and pressures where two phases are in equilibrium. These lines describe the conditions (pressure and temperature) for (a) solid—gas phase equilibrium. (b) solid–liquid equilibrium. (c) liquid–gas equilibrium. All three solid lines meet at the point marked in blue.
This is the **triple point**, and is the pressure and temperature where all three phases coexist; solid, liquid, and gas are all in equilibrium. The phase-coexistence lines have slopes given by the [Clapeyron equation](#clapeyron_derivation),\begin{equation}\frac{\mathrm{d}p}{\mathrm{d}T}= \frac{\Delta H_\mathrm{m}}{T\Delta V_\mathrm{m}} .\end{equation}For phase coexistence between solid and gas (sublimation) or liquid and gas (vapourisation), the slopes are approximately given by the [Clausius-Clapeyron equation](#clausius_clapeyron_derivation),\begin{equation}\frac{\mathrm{d}p}{\mathrm{d}T} = \frac{p \Delta H_\mathrm{m}}{RT^2},\end{equation}which can be [integrated](#integrated_CC_equation) to give\begin{equation}\ln p = - \frac{\Delta H_\mathrm{m}}{RT} +\mathrm{constant}.\end{equation}More detailed derivations of these equations are given at the bottom of this notebook. Exercise The vapour pressure of CO2 is given in the table below for different temperatures:\begin{array}{ccccccc}T\,\mathrm{[K]} & 196 & 206 & 211 & 221 & 226 & 236 \\p\,[10^5\,\mathrm{Pa}] & 1.146 & 2.479 & 3.558 & 6.296 & 7.704 & 11.212\end{array} 1. Preliminary Data Plotting Plot these data in the form $\ln p$ versus $1/T$. Create two `numpy` arrays, called `temperature` and `pressure`, to store the data you have been given. Then use these to convert the data into the correct format for plotting, with this stored in two more arrays, `inverse_temperature` and `log_pressure`. You might need to convert into SI units.
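For reference (it is not needed for the tasks below), the integration step behind the last equation is just separation of variables, treating $\Delta H_\mathrm{m}$ as approximately constant over the temperature range:\begin{equation}\frac{\mathrm{d}p}{p} = \frac{\Delta H_\mathrm{m}}{R}\frac{\mathrm{d}T}{T^2} \quad\Rightarrow\quad \ln p = -\frac{\Delta H_\mathrm{m}}{RT} + \mathrm{constant}.\end{equation}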
###Code
# importing the modules you will need
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
temperature = ◽◽◽
pressure = ◽◽◽
# TEST CELL
# - check `temperature` is correct.
# - check `pressure` is correct.
# Total marks: 1
inverse_temperature = ◽◽◽
log_pressure = ◽◽◽
# TEST CELL
# - check `inverse_temperature` is correct.
# - check `log_pressure` is correct.
# Total marks: 1
plt.plot( ◽◽◽, ◽◽◽, 'o' )
plt.xlabel( ◽◽◽ )
plt.ylabel( ◽◽◽ )
plt.show()
###Output
_____no_output_____
###Markdown
You should have a plot that shows **two** subsets of the data, each following a different straight line relationship. This means the data collected follow two coexistence lines, corresponding to the solid--gas _and_ liquid-gas phase equilibria. By considering which data are high-temperature, and which are low-temperature, and using the phase diagram above, you should be able to assign one region of the data to the solid--gas coexistence line, and the other to the liquid-gas coexistence line.Replot the data so that the high temperature and low temperature data are shown as distinct data sets.
###Code
plt.plot( ◽◽◽, ◽◽◽, 'o', label='high T' ) # High temperature data points
plt.plot( ◽◽◽, ◽◽◽, 's', label='low T' ) # Low temperature data points
plt.xlabel( ◽◽◽ )
plt.ylabel( ◽◽◽ )
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
2. Calculating ΔHsub and ΔHvapBy performing separate [linear regressions](Tutorial%205.ipynb#Linear-Regression) on the low temperature data and high temperature data, calculate * the latent heat of sublimation, in J. * the latent heat of vapourisation, in J. Make sure to check which slices of `inverse_temperature` and `log_pressure` correspond to high and low temperature.The latent heat of sublimation is the enthalpy change to convert one mole of a substance from solid to gas at constant pressure. The latent heat of vapourisation is the enthalpy change to convert one mole of a substance from liquid to gas at constant pressure.
###Code
from scipy.stats import linregress
slope_high_T, intercept_high_T, rvalue, pvalue, stderr = linregress( ◽◽◽, ◽◽◽ )
slope_low_T, intercept_low_T, rvalue, pvalue, stderr = linregress( ◽◽◽, ◽◽◽ )
# TEST CELL
# - check `slope_high_T` is correct.
# - check `slope_low_T` is correct.
# - check `intercept_high_T` is correct.
# - check `intercept_low_T` is correct.
# Total marks: 3
###Output
_____no_output_____
###Markdown
To calculate $\Delta H_\mathrm{sub}$ and $\Delta H_\mathrm{vap}$ from the fitted slopes you need the gas constant $R$.You could look this up and enter it by hand, but a more reliable option is to use [`scipy.constants`](https://docs.scipy.org/doc/scipy/reference/constants.html), which gives a tabulated list of physical constants and unit conversions.
###Code
from scipy.constants import R
print( R )
delta_H_vap = ◽◽◽
delta_H_sub = ◽◽◽
# TEST CELL
# - check `delta_H_vap` is correct.
# - check `delta_H_sub` is correct.
# Total marks: 4
###Output
_____no_output_____
###Markdown
3. Calculating ΔHfusIn 2. you calculated the enthalpy changes for converting from solid to gas ($\Delta H_\mathrm{sub}$) and from liquid to gas ($\Delta H_\mathrm{vap}$).The latent heat of fusion, $\Delta H_\mathrm{fus}$, is the enthalpy change to convert one mole of a substance from solid to liquid at constant pressure.Using your results from 2. (for example, by constructing a Hess cycle) calculate the latent heat of fusion, in J.
###Code
delta_H_fus = ◽◽◽
# TEST CELL
# - check `delta_H_fus` is correct.
# Total marks: 1
###Output
_____no_output_____
###Markdown
4. Graphically Estimating the Triple Point of CO2Using your linear regression results, replot the experimental data, and add lines of best fit. Each line follows the integrated Clausius-Clapeyron equation for that particular phase equilibrium: one line describes the temperatures and pressures where liquid and gas are in equilibrium, and the other describes the temperatures and pressures where solid and gas are in equilibrium. At the point where these cross, both these things are true, and all three phases are in equilibrium. This is the **triple point** (the green dot in the phase diagram).Estimate the temperature and pressure of the triple point from your graph.Because you are interested in where your lines of best fit cross, when you generate data for plotting these you need to use the full (inverse) temperature range.
###Code
ln_p_high_T = ◽◽◽ * inverse_temperature + ◽◽◽
ln_p_low_T = ◽◽◽ * inverse_temperature + ◽◽◽
plt.plot( ◽◽◽, ◽◽◽, 'o' ) # high T experimental data
plt.plot( ◽◽◽, ◽◽◽, 'o' ) # low T experimental data
plt.plot( ◽◽◽, ◽◽◽, '-' ) # liquid-gas coexistence line
plt.plot( ◽◽◽, ◽◽◽, '-' ) # solid-gas coexistence line
plt.xlabel( ◽◽◽ )
plt.ylabel( ◽◽◽ )
plt.show()
from math import exp
estimated_log_pressure = ◽◽◽
estimated_inverse_temperature = ◽◽◽
estimated_pressure = ◽◽◽
estimated_temperature = ◽◽◽
print( "The triple point of CO2 is at P={} Pa and T={} K (estimated).".format( estimated_pressure, estimated_temperature ) )
# TEST CELL
# - check `estimated_pressure` is approximately correct.
# - check `estimated_temperature` is approximately correct.
# Total marks: 2
###Output
_____no_output_____
###Markdown
The `print` statement in the previous cell uses `"string {}".format()` to insert your calculated results into the string for printing. The values stored in these variables are inserted into the `{}` brackets in turn. 5. Directly Calculating the Triple Point of CO2Everything you have done to this point could have been done using a calculator and graph paper. Because you have done this analysis computationally, however, you are not restricted to estimating the pressure and temperature of the triple point, but can directly calculate it. By solving the pair of simultaneous equations below (this bit by hand), derive expressions for the temperature and pressure of the triple point. Write these solutions as code, and use the fitted high- and low-temperature slopes and intercepts to calculate the triple point.\begin{equation}y = m_1 x + c_1\end{equation}\begin{equation}y = m_2 x + c_2\end{equation}
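For reference, eliminating $y$ between the two lines gives the standard result (the symbols are those in the equations above; this is just the by-hand elimination written out):\begin{equation}x = \frac{c_2 - c_1}{m_1 - m_2}, \qquad y = m_1 x + c_1,\end{equation}where $x$ is the inverse temperature at the crossing point and $y$ is the corresponding value of $\ln p$.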
###Code
fitted_inverse_temperature = ◽◽◽
fitted_temperature = ◽◽◽
fitted_log_pressure = ◽◽◽
fitted_pressure = ◽◽◽
print( "The triple point of CO2 is at P={:.3f} Pa and T={:.3f} K (estimated).".format( fitted_pressure, fitted_temperature ) )
# TEST CELL
# - check `fitted_pressure` is correct.
# - check `fitted_temperature` is correct.
# Total marks: 2
###Output
_____no_output_____
###Markdown
Decision Trees, Random Forest, and Gradient Boosting Trees Import and Prepare the Data pandas provides an excellent data reading and querying module, the DataFrame, which allows you to import structured data and perform SQL-like queries. We also use the mglearn package to help us visualize the data and models. Here we import some house price records from Trulia. For more about extracting data from Trulia, please check my previous tutorial. We use the house type as the dependent variable and the house ages and house prices as the independent variables. To visualize the trees, we use the graphviz python module. You also need to download the graphviz Executable Package on your computer.
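As a quick standalone illustration of the column-stacking step used in the next cell (the values here are invented, not taken from the Trulia data):

```python
import numpy as np

# two 1D columns combined into a single 2D feature array of floats
built_in = np.array([1995, 2003, 2010])
price = np.array([250000, 310000, 420000])
X_demo = np.column_stack((built_in.astype(float), price.astype(float)))
print(X_demo.shape)  # (3, 2)
print(X_demo.dtype)  # float64
```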
###Code
import sklearn
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
%matplotlib inline
import pandas
import numpy as np
import mglearn
from collections import Counter
from sklearn.metrics import cohen_kappa_score
import graphviz # use graphviz to visualize trees
import os
os.environ["PATH"] += os.pathsep + r"C:\Program Files (x86)\Graphviz2.38\bin" # link to the graphviz executable package
df = pandas.read_excel('house_price_label.xlsx')
# combine multiple columns into a 2D array
# also convert the integer data to float data
X = np.column_stack((df.built_in.astype(float),df.price.astype(float)))
y = df.house_type
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size =0.3,stratify = y, random_state=0)
# for classification, make sure a stratified splitting method is selected
mglearn.discrete_scatter(X[:,0],X[:,1],y) # use mglearn to visualize data
plt.legend(y,loc='best')
plt.xlabel('build_in')
plt.ylabel('house price')
plt.show()
###Output
_____no_output_____
###Markdown
Decision Trees A decision tree uses a tree structure to represent many possible decision paths and an outcome for each path. To build a tree, the algorithm searches over all possible questions or splits and finds the one that is most informative about the target variable. The quality of a question or split is measured with Gini Impurity or with Information Gain, which is based on Entropy. Here we use DecisionTreeClassifier to classify the house types based on house ages and house prices.
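As a small self-contained sketch of the two impurity measures mentioned above (illustrative only, not part of the notebook's pipeline):

```python
import numpy as np
from collections import Counter

def gini_impurity(labels):
    # 1 minus the sum of squared class probabilities; 0 means the node is pure
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(labels):
    # Shannon entropy in bits; information gain is the drop in entropy after a split
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

print(gini_impurity(['condo', 'condo', 'house', 'house']))  # 0.5, maximally mixed for two classes
print(entropy(['condo', 'condo', 'house', 'house']))        # 1.0 bit
```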
###Code
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)
print("Training set accuracy: {:.2f}".format(tree.score(X_train, y_train)))
print ("Training Kappa: {:.3f}".format(cohen_kappa_score(y_train,tree.predict(X_train))))
print("Test set accuracy: {:.2f}".format(tree.score(X_test, y_test)))
print ("Test Kappa: {:.3f}".format(cohen_kappa_score(y_test,tree.predict(X_test))))
###Output
Training set accuracy: 0.99
Training Kappa: 0.982
Test set accuracy: 0.87
Test Kappa: 0.631
###Markdown
It is very easy (and very bad) to build decision trees that are overfitted to the training data and that do not generalize well to unseen data. We can control the complexity of the tree by limiting the maximum depth of the tree, the maximum number of leaves, and more.
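The next cell demonstrates `max_depth`; other scikit-learn constraints such as `max_leaf_nodes` and `min_samples_leaf` can be combined in the same way. A hedged sketch (the particular values are not tuned for this dataset, and it assumes the `X_train`/`y_train` split created earlier):

```python
from sklearn.tree import DecisionTreeClassifier

# cap the number of leaves and require a minimum number of samples per leaf
pruned_tree = DecisionTreeClassifier(max_leaf_nodes=20, min_samples_leaf=5, random_state=0)
pruned_tree.fit(X_train, y_train)
print("Test set accuracy: {:.2f}".format(pruned_tree.score(X_test, y_test)))
```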
###Code
tree = DecisionTreeClassifier(max_depth=6, random_state=0)
tree.fit(X_train, y_train)
print("Training set accuracy: {:.2f}".format(tree.score(X_train, y_train)))
print ("Training Kappa: {:.3f}".format(cohen_kappa_score(y_train,tree.predict(X_train))))
print("Test set accuracy: {:.2f}".format(tree.score(X_test, y_test)))
print ("Test Kappa: {:.3f}".format(cohen_kappa_score(y_test,tree.predict(X_test))))
###Output
Training set accuracy: 0.90
Training Kappa: 0.715
Test set accuracy: 0.83
Test Kappa: 0.467
###Markdown
We can visualize and analyze the tree using the graphviz module.
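If the Graphviz executable is not available, one hedged alternative is scikit-learn's built-in `plot_tree` (present in recent scikit-learn releases), applied to the fitted `tree` from above:

```python
from sklearn.tree import plot_tree
import matplotlib.pyplot as plt

plt.figure(figsize=(12, 8))
plot_tree(tree, feature_names=['build_in', 'price'],
          class_names=sorted(set(y)), filled=True, impurity=False)
plt.show()
```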
###Code
from sklearn.tree import export_graphviz
export_graphviz(tree, out_file="tree.dot", impurity=False, filled=True,
feature_names = ['build_in','price'],class_names= list(set(y)))
with open("tree.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph )
###Output
_____no_output_____
###Markdown
Random Forest Given how closely decision trees can fit themselves to their training data, it’s not surprising that decision trees tend to overfit. One way of avoiding this is a technique called Random Forest, in which we build multiple decision trees and let them vote on how to classify inputs. Here we use RandomForestClassifier to classify the house types based on house ages and house prices.
###Code
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators=5, random_state=0)
forest.fit(X_train, y_train)
print("Training set accuracy: {:.2f}".format(forest.score(X_train, y_train)))
print ("Training Kappa: {:.3f}".format(cohen_kappa_score(y_train,forest.predict(X_train))))
print("Test set accuracy: {:.2f}".format(forest.score(X_test, y_test)))
print ("Test Kappa: {:.3f}".format(cohen_kappa_score(y_test,forest.predict(X_test))))
###Output
Training set accuracy: 0.97
Training Kappa: 0.909
Test set accuracy: 0.85
Test Kappa: 0.596
###Markdown
If we build many trees, all of which work well and overfit in different ways, we can reduce the amount of overfitting by averaging their results. This reduction in overfitting, while retaining the predictive power of the trees, can be shown in the following charts.
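Besides the per-tree decision regions plotted in the next cell, the fitted `forest` from above also exposes importances averaged over its trees; a short sketch (assuming the model trained earlier in this notebook):

```python
# impurity-based importances of the two input features, averaged over the trees
for name, importance in zip(['built_in', 'price'], forest.feature_importances_):
    print("{}: {:.3f}".format(name, importance))
```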
###Code
fig, axes = plt.subplots(2, 3, figsize=(20, 10))
for i, ax in enumerate(axes.ravel()):
if i<5:
sub_tree = forest.estimators_[i]
ax.set_title("Tree {}".format(i+1))
mglearn.discrete_scatter(X_train[:,0],X_train[:,1],sub_tree.predict(X_train),ax=ax) # use mglearn to visualize data
else:
ax.set_title("Forest")
mglearn.discrete_scatter(X_train[:,0],X_train[:,1],forest.predict(X_train),ax=ax) # use mglearn to visualize data
ax.set_xlabel("built_in")
ax.set_ylabel("house price")
ax.legend(y,loc='best')
###Output
_____no_output_____
###Markdown
Gradient Boosting Trees Gradient Boosting Tree works by building trees in a serial manner, where each tree tries to correct the mistakes of the previous one. Here we use GradientBoostingClassifier to classify the house types based on house ages and house prices.
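To see the "each tree corrects the previous one" behaviour directly, one hedged sketch is to track test accuracy as boosting stages are added, using `staged_predict` on a freshly fitted model (it reuses the `X_train`/`X_test` split created earlier):

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

demo_gbrt = GradientBoostingClassifier(n_estimators=100, random_state=0)
demo_gbrt.fit(X_train, y_train)

# accuracy on the test set after 1, 2, ..., 100 boosting stages
staged_acc = [accuracy_score(y_test, y_pred) for y_pred in demo_gbrt.staged_predict(X_test)]
print("After 1 tree   : {:.2f}".format(staged_acc[0]))
print("After 100 trees: {:.2f}".format(staged_acc[-1]))
```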
###Code
from sklearn.ensemble import GradientBoostingClassifier
gbrt = GradientBoostingClassifier(random_state=0)
gbrt.fit(X_train, y_train)
print("Training set accuracy: {:.2f}".format(gbrt.score(X_train, y_train)))
print ("Training Kappa: {:.3f}".format(cohen_kappa_score(y_train,gbrt.predict(X_train))))
print("Test set accuracy: {:.2f}".format(gbrt.score(X_test, y_test)))
print ("Test Kappa: {:.3f}".format(cohen_kappa_score(y_test,gbrt.predict(X_test))))
###Output
Training set accuracy: 0.98
Training Kappa: 0.933
Test set accuracy: 0.86
Test Kappa: 0.608
###Markdown
With Gradient Boosting Trees, we can also control the max depth and the learning rate to reduce or increase the model complexity.
###Code
gbrt = GradientBoostingClassifier(random_state=0, max_depth=1)
gbrt.fit(X_train, y_train)
print("Training set accuracy: {:.2f}".format(gbrt.score(X_train, y_train)))
print ("Training Kappa: {:.3f}".format(cohen_kappa_score(y_train,gbrt.predict(X_train))))
print("Test set accuracy: {:.2f}".format(gbrt.score(X_test, y_test)))
print ("Test Kappa: {:.3f}".format(cohen_kappa_score(y_test,gbrt.predict(X_test))))
gbrt = GradientBoostingClassifier(random_state=0, learning_rate=0.01)
gbrt.fit(X_train, y_train)
print("Training set accuracy: {:.2f}".format(gbrt.score(X_train, y_train)))
print ("Training Kappa: {:.3f}".format(cohen_kappa_score(y_train,gbrt.predict(X_train))))
print("Test set accuracy: {:.2f}".format(gbrt.score(X_test, y_test)))
print ("Test Kappa: {:.3f}".format(cohen_kappa_score(y_test,gbrt.predict(X_test))))
###Output
Training set accuracy: 0.88
Training Kappa: 0.612
Test set accuracy: 0.85
Test Kappa: 0.505
|
_resources/Extreme Learning Machine.ipynb | ###Markdown
Extreme Learning MachineSample implementation of ELM that will classify the MNIST dataset. Please visit the [blog post](https://petlew.com/elm/machine%20learning/neural%20networks/elm-basics/) for more theoretical background.
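In outline (my own summary of the code below, written with generic symbols): for inputs $X$, the hidden layer uses fixed random weights $W$ and biases $b$, and only the output weights $\beta$ are learned, via a least-squares solve with the Moore-Penrose pseudoinverse,\begin{equation}H = \sigma\left(XW + b\right), \qquad \beta = H^{+} Y,\end{equation}where $\sigma$ is the sigmoid activation, $Y$ holds the one-hot encoded labels, and $H^{+}$ is computed with `np.linalg.pinv`.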
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
I am adding a `strict` parameter due to a misunderstanding (probably my own) of the [paper](http://axon.cs.byu.edu/~martinez/classes/678/Presentations/Yao.pdf) on which I've based this notebook. In equation _4_ we can clearly see that each row has the same set of random weights and biases. This, unfortunately, leads to very poor performance.
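A small standalone sketch of why the strict variant hurts: tiling a single random row produces a rank-1 weight matrix, so every hidden pre-activation depends only on the (scaled) sum of the input pixels (the sizes below are illustrative):

```python
import numpy as np

input_length, hidden_nodes = 784, 5
strict_weights = np.tile(np.random.normal(size=(1, hidden_nodes)), (input_length, 1))
print(np.linalg.matrix_rank(strict_weights))  # 1 -> hidden units are not independent
free_weights = np.random.normal(size=(input_length, hidden_nodes))
print(np.linalg.matrix_rank(free_weights))    # 5 -> full column rank (almost surely)
```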
###Code
def generate_hidden_values(X_input, hidden_nodes, strict=False):
_, single_input_length = X_input.shape
random_hidden_bias = np.random.normal(size=(1, hidden_nodes))
if strict:
random_hidden_node_values = np.tile(np.random.normal(size=(1, hidden_nodes)), (single_input_length, 1))
else:
random_hidden_node_values = np.random.normal(size=(single_input_length, hidden_nodes))
return (random_hidden_node_values, random_hidden_bias)
def calculate_hidden_layer(X_input, hidden_node_values, hidden_bias, activation_func):
X_length, _ = X_input.shape
return activation_func(np.dot(X_input, hidden_node_values) + np.tile(hidden_bias, (X_length, 1)))
def solve_with_least_squares(hidden_layer, y_labels):
return np.dot(np.linalg.pinv(hidden_layer), y_labels)
# Overkill, but the laziest possible way to get MNIST data.
import tensorflow as tf
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
def flatten_image(images):
n, x, y = images.shape
return images.reshape((n, x * y))
def one_hot_encode(labels, classes=10):
y_encoded = np.zeros([labels.shape[0], classes])
for i in range(labels.shape[0]):
y_encoded[i][labels[i]] = 1
return y_encoded
def sigmoid(x):
return 1. / (1. + np.exp(-x))
flatten_image(x_train).shape
print(one_hot_encode(y_train))
# Training
import time
def train(hidden_nodes):
start = time.perf_counter()
flat_x_train = flatten_image(x_train)
encoded_y_train = one_hot_encode(y_train)
random_hidden_node_values, random_hidden_bias = generate_hidden_values(flat_x_train, hidden_nodes, False)
hidden_layer_training = calculate_hidden_layer(flat_x_train, random_hidden_node_values, random_hidden_bias, sigmoid)
beta_layer = solve_with_least_squares(hidden_layer_training, encoded_y_train)
end = time.perf_counter()
return random_hidden_node_values, random_hidden_bias, beta_layer, end - start
# Testing
def test(random_hidden_node_values, random_hidden_bias, beta_layer):
flat_x_test = flatten_image(x_test)
encoded_y_test = one_hot_encode(y_test)
hidden_layer_test = calculate_hidden_layer(flat_x_test, random_hidden_node_values, random_hidden_bias, sigmoid)
predictions = np.dot(hidden_layer_test, beta_layer)
correct = 0
total = y_test.shape[0]
for i in range(total):
predicted = np.argmax(predictions[i])
test = np.argmax(encoded_y_test[i])
correct = correct + (1 if predicted == test else 0)
accuracy = correct/total
return accuracy
results = []
np.random.seed(28)
# Just for warm up
random_hidden_node_values, random_hidden_bias, beta_layer, training_time = train(100)
for hdn_nds in [1, 2, 5, 10, 20, 25, 50, 100, 150, 200, 250, 350, 500, 1000, 1500, 2000, 2500, 3000, 4000, 5000]:
random_hidden_node_values, random_hidden_bias, beta_layer, training_time = train(hdn_nds)
acc = test(random_hidden_node_values, random_hidden_bias, beta_layer)
results.append((hdn_nds, acc, training_time))
results
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax1 = plt.subplots()
t = [node for (node, acc, time) in results]
s1 = [acc for (node, acc, time) in results]
ax1.plot(t, s1, 'b-')
ax1.set_xlabel('Number of hidden nodes')
# Make the y-axis label, ticks and tick labels match the line color.
ax1.set_ylabel('Accuracy', color='b')
ax1.tick_params('y', colors='b')
ax2 = ax1.twinx()
s2 = [time for (node, acc, time) in results]
ax2.plot(t, s2, 'r-')
ax2.set_ylabel('Time', color='r')
ax2.tick_params('y', colors='r')
fig.tight_layout()
plt.show()
###Output
_____no_output_____ |
notebooks/P02_prepare_spectrogram_to_tfrecords.ipynb | ###Markdown
Prepare Spectrogram Images into TFRecords
###Code
%load_ext autoreload
%autoreload 2
import os
import glob
labels = list(set([os.path.split(f)[-1] for f in glob.glob('/data/spec/*')]))
n_labels = len(labels)
label_dictionary = dict(zip(labels, list(range(n_labels))))
inverse_label_dictionary = {v: k for k, v in label_dictionary.items()}
print(label_dictionary)
print(inverse_label_dictionary)
###Output
{'cat': 0, 'happy': 1, 'bed': 2}
{0: 'cat', 1: 'happy', 2: 'bed'}
###Markdown
Write a wrapper in multiprocessing.pool for istarmap()
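For orientation, a minimal usage sketch of the wrapper defined in the next cell (the import path and worker count are assumptions; importing the module monkey-patches `Pool.istarmap`):

```python
from multiprocessing import Pool
from tqdm import tqdm
import src.multiprocessing_wrapper  # registers Pool.istarmap (module path assumed)

def add(a, b):
    return a + b

args = [(i, 2 * i) for i in range(100)]
# istarmap yields results lazily, so tqdm can report progress as tasks finish
with Pool(processes=4) as pool:
    results = [r for r in tqdm(pool.istarmap(add, args), total=len(args))]
```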
###Code
%%writefile /code/src/multiprocessing_wrapper.py
import multiprocessing.pool as mpp
def istarmap(self, func, iterable, chunksize=1):
"""starmap-version of imap
"""
self._check_running()
if chunksize < 1:
raise ValueError(
"Chunksize must be 1+, not {0:n}".format(
chunksize))
task_batches = mpp.Pool._get_tasks(func, iterable, chunksize)
result = mpp.IMapIterator(self)
self._taskqueue.put(
(
self._guarded_task_generation(result._job,
mpp.starmapstar,
task_batches),
result._set_length
))
return (item for chunk in result for item in chunk)
mpp.Pool.istarmap = istarmap
###Output
Overwriting /code/src/multiprocessing_wrapper.py
###Markdown
Some Useful Functions Module to create shards of tfrecords
###Code
%%writefile /code/src/utilities/tfrecords_save.py
import os
import numpy as np
from tqdm import tqdm
from multiprocessing import (
Pool,
cpu_count
)
import tensorflow as tf
from PIL import Image
from typing import Tuple, List
from .. import multiprocessing_wrapper
IMAGE_SIZE = (20, 30)
label_dictionary = {
'bed': 0,
'cat': 1,
'happy': 2
}
## helper functions to build the tfrecords protocol buffer
# https://www.tensorflow.org/tutorials/load_data/tfrecord
def _bytes_feature(value):
"""Returns a bytes_list from a string / byte."""
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _int64_feature(value):
"""Returns an int64_list from a bool / enum / int / uint."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
def _float_feature(value):
"""Returns a float_list from a float / double."""
return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))
def create_tfrecords_in_shards(
data_filenames: List[str],
record_index: int,
dataset_name: str,
output_path: str = '/data/spec_tfrecords'
) -> Tuple[int, str]:
# set up paths
savepath = os.path.join(output_path, dataset_name)
savefile = os.path.join(savepath, f'{dataset_name}_{record_index:03d}.tfrecords')
os.makedirs(savepath, exist_ok=True)
number_of_samples_written = 0
with tf.io.TFRecordWriter(savefile) as record_writer:
for input_file in data_filenames:
label_str = input_file.split(os.path.sep)[-2]
label_int = label_dictionary[label_str]
image = Image.open(input_file).convert('RGB')
image = image.resize(IMAGE_SIZE[::-1], Image.ANTIALIAS)
image_np = np.asarray(image)
# encode and write to tfrecord
example = tf.train.Example(features=tf.train.Features(
feature={
'image': _bytes_feature(image_np.tobytes()),
'label': _int64_feature(label_int)
}))
record_writer.write(example.SerializeToString())
number_of_samples_written += 1
return number_of_samples_written, savefile
def multi_create_tfrecords_in_shards(
data_filenames: List[str],
dataset_name: str,
num_files_to_create: int,
num_workers: int
) -> Tuple[int, List[str]]:
split_data_filenames = np.array_split(data_filenames, num_files_to_create)
tfrecord_args = zip(split_data_filenames, range(0, num_files_to_create), [dataset_name]*num_files_to_create)
tfrecords_func_outputs = []
with Pool(processes=num_workers) as pool_:
with tqdm(total = num_files_to_create) as pbar:
pbar.set_description(f"Processing {dataset_name}: ")
for tfrecords_func_output in pool_.istarmap(create_tfrecords_in_shards, tfrecord_args):
tfrecords_func_outputs.append(tfrecords_func_output)
pbar.update(1)
sample_counts = sum(x[0] for x in tfrecords_func_outputs)
output_files = [x[1] for x in tfrecords_func_outputs]
return sample_counts, output_files
for label in labels:
print(label, len(glob.glob(f'/data/spec/{label}/*')))
# shuffle and split
from sklearn.model_selection import train_test_split
image_files = glob.glob(f'/data/spec/*/*')
train_filenames, validation_test_filenames = train_test_split(
image_files,
train_size = 0.8,
test_size = 0.2,
random_state = 24601
)
validation_filenames, test_filenames = train_test_split(
validation_test_filenames,
train_size = 0.5,
test_size = 0.5,
random_state = 24601
)
cases = ['train', 'validation', 'test']
case_id = 2
filenames = dict(
train = train_filenames,
test = test_filenames,
validation = validation_filenames
)
print(f"Case: {cases[case_id]} \n")
filenames_x = filenames[cases[case_id]]
sample_size = 0
for label in labels:
x = [f for f in filenames_x if f.split(os.path.sep)[-2] == label]
print(label, len(x))
sample_size += len(x)
print()
print(sample_size)
###Output
Case: test
cat 153
happy 188
bed 178
519
###Markdown
Write the script to create tfrecords
###Code
%%writefile ../scripts/generate_spec_tfrecords.py
import sys
sys.path.append('..')
import glob
from typing import NoReturn
from multiprocessing import cpu_count
from sklearn.model_selection import train_test_split
from src.utilities.tfrecords_save import multi_create_tfrecords_in_shards
def main(
num_files_to_create: int = 96,
    num_workers: int = cpu_count() - 1
) -> dict:
    # shard count and worker pool size are taken from the parameters above
filenames = {
'train': None,
'test' : None,
'validation' : None
}
image_files = glob.glob(f'/data/spec/*/*')
train_filenames, validation_test_filenames = train_test_split(
image_files,
train_size = 0.8,
test_size = 0.2,
random_state = 24601
)
validation_filenames, test_filenames = train_test_split(
validation_test_filenames,
train_size = 0.5,
test_size = 0.5,
random_state = 24601
)
num_test_samples, test_tfrecords = multi_create_tfrecords_in_shards(
data_filenames = test_filenames,
dataset_name = 'test',
num_files_to_create = num_files_to_create,
num_workers = num_workers
)
num_validation_samples, validation_tfrecords = multi_create_tfrecords_in_shards(
data_filenames = validation_filenames,
dataset_name = 'validation',
num_files_to_create = num_files_to_create,
num_workers = num_workers
)
num_train_samples, train_tfrecords = multi_create_tfrecords_in_shards(
data_filenames = train_filenames,
dataset_name = 'train',
num_files_to_create = num_files_to_create,
num_workers = num_workers
)
filenames['train'] = train_tfrecords
filenames['test'] = test_tfrecords
filenames['validation'] = validation_tfrecords
return filenames
if __name__ == "__main__":
main()
import sys
sys.path.append('..')
from scripts.generate_spec_tfrecords import main
tfrecords = main()
###Output
Processing test: : 100%|██████████| 96/96 [00:08<00:00, 11.54it/s]
Processing validation: : 100%|██████████| 96/96 [00:10<00:00, 9.04it/s]
Processing train: : 100%|██████████| 96/96 [01:21<00:00, 1.18it/s]
###Markdown
Test to view the data from the shards of tfrecords Read TFRecords
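Before building the full loading module in the next cell, a quick hedged way to sanity-check a shard is to parse one raw record directly (the glob pattern matches the output directory used above):

```python
import glob
import tensorflow as tf

raw_dataset = tf.data.TFRecordDataset(glob.glob('/data/spec_tfrecords/test/*')[:1])
for raw_record in raw_dataset.take(1):
    example = tf.train.Example()
    example.ParseFromString(raw_record.numpy())
    print(list(example.features.feature.keys()))  # expect ['image', 'label'] (order may vary)
```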
###Code
%%writefile /code/src/utilities/tfrecords_load.py
from functools import partial
import tensorflow as tf
def _parse_sample_function(example_proto):
    # NOTE: unused legacy helper; it expects a `sample_feature_description` dict defined elsewhere
    return tf.io.parse_single_example(example_proto, sample_feature_description)
IMAGE_SIZE = (20, 30)
def decode_image(image):
image = tf.io.decode_raw(image, tf.uint8)
image = tf.reshape(image, [*IMAGE_SIZE, 3])
return image
def read_tfrecord(example, labeled):
tfrecord_format = (
{
"image": tf.io.FixedLenFeature([], tf.string),
"label": tf.io.FixedLenFeature([], tf.int64),
}
if labeled
else {
"image": tf.io.FixedLenFeature([], tf.string),
}
)
example = tf.io.parse_single_example(example, tfrecord_format)
image = decode_image(example["image"])
if labeled:
num_classes = 3
label = tf.cast(example["label"], tf.int32)
label = tf.one_hot(label, num_classes)
return image, label
return image
def load_dataset(filenames, labeled=True):
ignore_order = tf.data.Options()
ignore_order.experimental_deterministic = False
dataset = tf.data.TFRecordDataset(
filenames
)
dataset = dataset.with_options(
ignore_order
)
dataset = dataset.map(
partial(read_tfrecord, labeled = labeled),
num_parallel_calls=tf.data.AUTOTUNE
)
return dataset
BATCH_SIZE = 64
def get_dataset(filenames, labeled=True):
dataset = load_dataset(
filenames,
labeled = labeled
)
return (dataset
.shuffle(2048)
.prefetch(buffer_size = tf.data.AUTOTUNE)
.batch(BATCH_SIZE)
)
def get_samplesize(filenames):
dataset = tf.data.TFRecordDataset(
filenames
)
return sum(1 for _ in dataset)
###Output
Overwriting /code/src/utilities/tfrecords_load.py
###Markdown
Run test to view a batch in the test dataset
###Code
tfrecords = {
'train' : glob.glob('/data/spec_tfrecords/train/*'),
'test' : glob.glob('/data/spec_tfrecords/test/*'),
'validation' : glob.glob('/data/spec_tfrecords/validation/*')
}
import matplotlib.pyplot as plt
import sys
sys.path.append('..')
from src.utilities.tfrecords_load import get_dataset
test_dataset = get_dataset(tfrecords['test'], labeled=True)
image_batch, label_batch = next(iter(test_dataset))
def show_batch(image_batch, label_batch):
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(image_batch[n] / 255.0)
plt.title(inverse_label_dictionary[label_batch[n].argmax()])
plt.axis("off")
show_batch(image_batch.numpy(), label_batch.numpy())
###Output
2021-11-10 13:14:02.060113: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-11-10 13:14:02.460103: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
###Markdown
Run test to view the sample size of each dataset
###Code
from src.utilities.tfrecords_load import get_samplesize
num_train_samples = get_samplesize(glob.glob('/data/spec_tfrecords/train/*'))
print("Counted {} samples in train set.".format(num_train_samples))
num_validation_samples = get_samplesize(glob.glob('/data/spec_tfrecords/validation/*'))
print("Counted {} samples in validation set.".format(num_validation_samples))
num_test_samples = get_samplesize(glob.glob('/data/spec_tfrecords/test/*'))
print("Counted {} samples in test set.".format(num_test_samples))
###Output
Counted 4150 samples in train set.
Counted 519 samples in validation set.
Counted 519 samples in test set.
|
open-metadata-resources/open-metadata-labs/ui-labs/ui-asset-search.ipynb | ###Markdown
 ODPi Egeria Hands-On Lab Welcome to the UI Asset Search Lab IntroductionODPi Egeria is an open source project that provides open standards and implementation libraries to connect tools, catalogs and platforms together so they can share information (called metadata) about data and the technology that supports it.In this hands-on lab you will get a chance to have hands-on experience with the UI of Egeria, by searching for assets we add along the way. The ScenarioThe ODPi Egeria team use the personas and scenarios from the fictitious company called Coco Pharmaceuticals. (See https://opengovernance.odpi.org/coco-pharmaceuticals/ for more information).As part of the huge business transformation that Coco Pharmaceuticals has embarked on, theyhave created a data lake for managing data for research, analytics, exchange between their internal organizations and business partners (such as hospitals). As a result, the data lake has to bedesigned to handle a wide variety of data, including some highly sensitive and regulated data.In this lab we first create some assets. Afterwards, the UI where we can search for assets is shown. The main characters engaged in this lab are the data analyst named [Peter Profile](https://github.com/odpi/data-governance/blob/master/docs/coco-pharmaceuticals/personas/peter-profile.md), and the data scientist named [Callie Quartile](https://github.com/odpi/data-governance/blob/master/docs/coco-pharmaceuticals/personas/callie-quartile.md). Peter Profile Callie Quartile Setting upCoco Pharmaceuticals make widespread use of ODPi Egeria for tracking and managing their data and related assets.Figure 1 below shows their metadata servers and the Open Metadata and Governance (OMAG) Server Platforms that are hosting them. Each metadata server supports a department in the organization. The servers are distributed across the platform to even out the workload. Servers can be moved to a different platform if needed.> **Figure 1:** Coco Pharmaceuticals' OMAG Server PlatformsThe code below sets up the network addresses for the three platforms. This varies depending on whether you are running them locally, through **docker-compose** or on **kubernetes**.
###Code
%run ../common/environment-check.ipynb
###Output
_____no_output_____
###Markdown
----Callie is using the data lake metadata server called `cocoMDS4`. This server is hosted on the Data Lake OMAG Server Platform. It enables business users and the executive team to access data from the data lake.Check that `cocoMDS4` is running. If any of the platforms are not running, follow [this link to set up and run the platform](https://egeria.odpi.org/open-metadata-resources/open-metadata-labs/). If any server is reporting that it is not configured thenrun the steps in the **[Server Configuration](../egeria-server-config.ipynb)** lab to configurethe servers. Then re-run the previous step to ensure all of the servers are started. ---- Exercise 1 Generating some assetsBefore we can interact with the UI, we have to create some data in the metadata repository. This section is a simplified version of the [Building a Data Catalog Lab](../asset-management-labs/building-a-data-catalog.ipynb). If you haven't done so already, it is recommended that you also check those labs out.For this part of the exercises, Peter is going to create some CSV files using Asset Owner OMAS located in `cocoMDS1`. The code block below describes their basic attributes.
###Code
csv_file_list = [
{
"displayName": "List of patients",
"description": "Basic information regarding patients recorded in February 2020.",
"fullPath": "file://secured/research/patients.csv"
},
{
"displayName": "Log of treatments",
"description": "Treatments carried out for patients in 2019.",
"fullPath": "file://secured/research/treatments.csv"
},
]
###Output
_____no_output_____
###Markdown
Afterwards, we will have our Data Analyst, [Peter Profile](https://github.com/odpi/data-governance/blob/master/docs/coco-pharmaceuticals/personas/peter-profile.md), add these CSV files into the repository. You can see two lists of GUIDs for the two CSV files we have added. For a more thorough explanation for this step, check out the [Building a Data Catalog lab](../asset-management-labs/building-a-data-catalog.ipynb).
###Code
for csv_file_specs in csv_file_list:
response_guid = assetOwnerCreateCSVAsset(
serverName=cocoMDS1Name,
serverPlatformName=cocoMDS1PlatformName,
serverPlatformURL=cocoMDS1PlatformURL,
userId=petersUserId,
**csv_file_specs
)
printGUIDList(response_guid)
###Output
_____no_output_____
###Markdown
Exercise 2 Search for assets in the UI Now we can get the URL through the environment variable, with a default value of `https://localhost:8443`:
###Code
uiURL = os.environ.get('uiExternalURL', 'https://localhost:8443')
print(uiURL)
###Output
_____no_output_____
###Markdown
 Egeria Hands-On Lab Welcome to the UI Asset Search Lab IntroductionEgeria is an open source project that provides open standards and implementation libraries to connect tools, catalogs and platforms together so they can share information (called metadata) about data and the technology that supports it.In this hands-on lab you will get a chance to have hands-on experience with the UI of Egeria, by searching for assets we add along the way. The ScenarioThe Egeria team use the personas and scenarios from the fictitious company called [Coco Pharmaceuticals](https://egeria-project.org/practices/coco-pharmaceuticals/).As part of the huge business transformation that Coco Pharmaceuticals has embarked on, theyhave created a data lake for managing data for research, analytics, exchange between their internal organizations and business partners (such as hospitals). As a result, the data lake has to bedesigned to handle a wide variety of data, including some highly sensitive and regulated data.In this lab we first create some assets. Afterwards, the UI where we can search for assets is shown. The main characters engaged in this lab are the data analyst named [Peter Profile](https://egeria-project.org/practices/coco-pharmaceuticals/personas/peter-profile/), and the data scientist named [Callie Quartile](https://egeria-project.org/practices/coco-pharmaceuticals/personas/callie-quartile/). Peter Profile Callie Quartile Setting upCoco Pharmaceuticals make widespread use of ODPi Egeria for tracking and managing their data and related assets.Figure 1 below shows their metadata servers and the Open Metadata and Governance (OMAG) Server Platforms that are hosting them. Each metadata server supports a department in the organization. The servers are distributed across the platform to even out the workload. Servers can be moved to a different platform if needed.> **Figure 1:** Coco Pharmaceuticals' OMAG Server PlatformsThe code below sets up the network addresses for the three platforms. This varies depending on whether you are running them locally, or on **kubernetes**.
###Code
%run ../common/environment-check.ipynb
###Output
_____no_output_____
###Markdown
----Callie is using the data lake metadata server called `cocoMDS4`. This server is hosted on the Data Lake OMAG Server Platform. It enables business users and the executive team to access data from the data lake.Check that `cocoMDS4` is running. If any of the platforms are not running, follow [this link to set up and run the platform](https://egeria-project.org/education/open-metadata-labs/overview/). If any server is reporting that it is not configured thenrun the steps in the **[Server Configuration](../egeria-server-config.ipynb)** lab to configurethe servers. Then re-run the previous step to ensure all of the servers are started. ---- Exercise 1 Generating some assetsBefore we can interact with the UI, we have to create some data in the metadata repository. This section is a simplified version of the [Building a Data Catalog Lab](../asset-management-labs/building-a-data-catalog.ipynb). If you haven't done so already, it is recommended that you also check those labs out.For this part of the exercises, Peter is going to create some CSV files using Asset Owner OMAS located in `cocoMDS1`. The code block below describes their basic attributes.
###Code
csv_file_list = [
{
"displayName": "List of patients",
"description": "Basic information regarding patients recorded in February 2020.",
"fullPath": "file://secured/research/patients.csv"
},
{
"displayName": "Log of treatments for patients",
"description": "Treatments carried out for patients in 2019.",
"fullPath": "file://secured/research/treatments.csv"
},
]
###Output
_____no_output_____
###Markdown
Afterwards, we will have our Data Analyst, [Peter Profile](https://egeria-project.org/practices/coco-pharmaceuticals/personas/peter-profile/), add these CSV files into the repository. You can see two lists of GUIDs for the two CSV files we have added. For a more thorough explanation for this step, check out the [Building a Data Catalog lab](../asset-management-labs/building-a-data-catalog.ipynb).
###Code
for csv_file_specs in csv_file_list:
response_guid = assetOwnerCreateCSVAsset(
serverName=cocoMDS1Name,
serverPlatformName=cocoMDS1PlatformName,
serverPlatformURL=cocoMDS1PlatformURL,
userId=petersUserId,
**csv_file_specs
)
printGUIDList(response_guid)
###Output
_____no_output_____
###Markdown
 Egeria Hands-On Lab Welcome to the UI Asset Search Lab IntroductionEgeria is an open source project that provides open standards and implementation libraries to connect tools, catalogs and platforms together so they can share information (called metadata) about data and the technology that supports it.In this hands-on lab you will get a chance to have hands-on experience with the UI of Egeria, by searching for assets we add along the way. The ScenarioThe ODPi Egeria team use the personas and scenarios from the fictitious company called Coco Pharmaceuticals. (See https://opengovernance.odpi.org/coco-pharmaceuticals/ for more information).As part of the huge business transformation that Coco Pharmaceuticals has embarked on, theyhave created a data lake for managing data for research, analytics, exchange between their internal organizations and business partners (such as hospitals). As a result, the data lake has to bedesigned to handle a wide variety of data, including some highly sensitive and regulated data.In this lab we first create some assets. Afterwards, the UI where we can search for assets is shown. The main characters engaged in this lab are the data analyst named [Peter Profile](https://github.com/odpi/data-governance/blob/master/docs/coco-pharmaceuticals/personas/peter-profile.md), and the data scientist named [Callie Quartile](https://github.com/odpi/data-governance/blob/master/docs/coco-pharmaceuticals/personas/callie-quartile.md). Peter Profile Callie Quartile Setting upCoco Pharmaceuticals make widespread use of ODPi Egeria for tracking and managing their data and related assets.Figure 1 below shows their metadata servers and the Open Metadata and Governance (OMAG) Server Platforms that are hosting them. Each metadata server supports a department in the organization. The servers are distributed across the platform to even out the workload. Servers can be moved to a different platform if needed.> **Figure 1:** Coco Pharmaceuticals' OMAG Server PlatformsThe code below sets up the network addresses for the three platforms. This varies depending on whether you are running them locally, or on **kubernetes**.
###Code
%run ../common/environment-check.ipynb
###Output
_____no_output_____
###Markdown
----Callie is using the data lake metadata server called `cocoMDS4`. This server is hosted on the Data Lake OMAG Server Platform. It enables business users and the executive team to access data from the data lake.Check that `cocoMDS4` is running. If any of the platforms are not running, follow [this link to set up and run the platform](https://egeria.odpi.org/open-metadata-resources/open-metadata-labs/). If any server is reporting that it is not configured thenrun the steps in the **[Server Configuration](../egeria-server-config.ipynb)** lab to configurethe servers. Then re-run the previous step to ensure all of the servers are started. ---- Exercise 1 Generating some assetsBefore we can interact with the UI, we have to create some data in the metadata repository. This section is a simplified version of the [Building a Data Catalog Lab](../asset-management-labs/building-a-data-catalog.ipynb). If you haven't done so already, it is recommended that you also check those labs out.For this part of the exercises, Peter is going to create some CSV files using Asset Owner OMAS located in `cocoMDS1`. The code block below describes their basic attributes.
###Code
csv_file_list = [
{
"displayName": "List of patients",
"description": "Basic information regarding patients recorded in February 2020.",
"fullPath": "file://secured/research/patients.csv"
},
{
"displayName": "Log of treatments",
"description": "Treatments carried out for patients in 2019.",
"fullPath": "file://secured/research/treatments.csv"
},
]
###Output
_____no_output_____
###Markdown
Afterwards, we will have our Data Analyst, [Peter Profile](https://github.com/odpi/data-governance/blob/master/docs/coco-pharmaceuticals/personas/peter-profile.md), add these CSV files into the repository. You can see two lists of GUIDs for the two CSV files we have added. For a more thorough explanation for this step, check out the [Building a Data Catalog lab](../asset-management-labs/building-a-data-catalog.ipynb).
###Code
for csv_file_specs in csv_file_list:
response_guid = assetOwnerCreateCSVAsset(
serverName=cocoMDS1Name,
serverPlatformName=cocoMDS1PlatformName,
serverPlatformURL=cocoMDS1PlatformURL,
userId=petersUserId,
**csv_file_specs
)
printGUIDList(response_guid)
###Output
_____no_output_____
###Markdown
Exercise 2 Search for assets in the UI> **Important:** When running this lab using kubernetes deployment, make sure that you [expose the Egeria UI](https://odpi.github.io/egeria-docs/guides/operations/kubernetes/charts/lab/accessing-the-egeria-ui) running in the container to your local network and access it via localhost. Now we can get the URL through the environment variable, with a default value of `https://localhost:8443`:
###Code
uiURL = os.environ.get('uiExternalURL', 'https://localhost:8443')
print(uiURL)
###Output
_____no_output_____
###Markdown
 Egeria Hands-On Lab Welcome to the UI Asset Search Lab IntroductionEgeria is an open source project that provides open standards and implementation libraries to connect tools, catalogs and platforms together so they can share information (called metadata) about data and the technology that supports it.In this hands-on lab you will get a chance to have hands-on experience with the UI of Egeria, by searching for assets we add along the way. The ScenarioThe ODPi Egeria team use the personas and scenarios from the fictitious company called Coco Pharmaceuticals. (See https://opengovernance.odpi.org/coco-pharmaceuticals/ for more information).As part of the huge business transformation that Coco Pharmaceuticals has embarked on, theyhave created a data lake for managing data for research, analytics, exchange between their internal organizations and business partners (such as hospitals). As a result, the data lake has to bedesigned to handle a wide variety of data, including some highly sensitive and regulated data.In this lab we first create some assets. Afterwards, the UI where we can search for assets is shown. The main characters engaged in this lab are the data analyst named [Peter Profile](https://github.com/odpi/data-governance/blob/master/docs/coco-pharmaceuticals/personas/peter-profile.md), and the data scientist named [Callie Quartile](https://github.com/odpi/data-governance/blob/master/docs/coco-pharmaceuticals/personas/callie-quartile.md). Peter Profile Callie Quartile Setting upCoco Pharmaceuticals make widespread use of ODPi Egeria for tracking and managing their data and related assets.Figure 1 below shows their metadata servers and the Open Metadata and Governance (OMAG) Server Platforms that are hosting them. Each metadata server supports a department in the organization. The servers are distributed across the platform to even out the workload. Servers can be moved to a different platform if needed.> **Figure 1:** Coco Pharmaceuticals' OMAG Server PlatformsThe code below sets up the network addresses for the three platforms. This varies depending on whether you are running them locally, or on **kubernetes**.
###Code
%run ../common/environment-check.ipynb
###Output
_____no_output_____
###Markdown
----Callie is using the data lake metadata server called `cocoMDS4`. This server is hosted on the Data Lake OMAG Server Platform. It enables business users and the executive team to access data from the data lake.Check that `cocoMDS4` is running. If any of the platforms are not running, follow [this link to set up and run the platform](https://egeria.odpi.org/open-metadata-resources/open-metadata-labs/). If any server is reporting that it is not configured thenrun the steps in the **[Server Configuration](../egeria-server-config.ipynb)** lab to configurethe servers. Then re-run the previous step to ensure all of the servers are started. ---- Exercise 1 Generating some assetsBefore we can interact with the UI, we have to create some data in the metadata repository. This section is a simplified version of the [Building a Data Catalog Lab](../asset-management-labs/building-a-data-catalog.ipynb). If you haven't done so already, it is recommended that you also check those labs out.For this part of the exercises, Peter is going to create some CSV files using Asset Owner OMAS located in `cocoMDS1`. The code block below describes their basic attributes.
###Code
csv_file_list = [
{
"displayName": "List of patients",
"description": "Basic information regarding patients recorded in February 2020.",
"fullPath": "file://secured/research/patients.csv"
},
{
"displayName": "Log of treatments",
"description": "Treatments carried out for patients in 2019.",
"fullPath": "file://secured/research/treatments.csv"
},
]
###Output
_____no_output_____
###Markdown
Afterwards, we will have our Data Analyst, [Peter Profile](https://github.com/odpi/data-governance/blob/master/docs/coco-pharmaceuticals/personas/peter-profile.md), add these CSV files into the repository. You can see two lists of GUIDs for the two CSV files we have added. For a more thorough explanation for this step, check out the [Building a Data Catalog lab](../asset-management-labs/building-a-data-catalog.ipynb).
###Code
for csv_file_specs in csv_file_list:
response_guid = assetOwnerCreateCSVAsset(
serverName=cocoMDS1Name,
serverPlatformName=cocoMDS1PlatformName,
serverPlatformURL=cocoMDS1PlatformURL,
userId=petersUserId,
**csv_file_specs
)
printGUIDList(response_guid)
###Output
_____no_output_____
###Markdown
Exercise 2 Search for assets in the UI Now we can get the URL through the environment variable, with a default value of `https://localhost:8443`:
###Code
uiURL = os.environ.get('uiExternalURL', 'https://localhost:8443')
print(uiURL)
###Output
_____no_output_____
###Markdown
 Egeria Hands-On Lab Welcome to the UI Asset Search Lab IntroductionEgeria is an open source project that provides open standards and implementation libraries to connect tools, catalogs and platforms together so they can share information (called metadata) about data and the technology that supports it.In this hands-on lab you will get a chance to have hands-on experience with the UI of Egeria, by searching for assets we add along the way. The ScenarioThe ODPi Egeria team use the personas and scenarios from the fictitious company called Coco Pharmaceuticals. (See https://opengovernance.odpi.org/coco-pharmaceuticals/ for more information).As part of the huge business transformation that Coco Pharmaceuticals has embarked on, theyhave created a data lake for managing data for research, analytics, exchange between their internal organizations and business partners (such as hospitals). As a result, the data lake has to bedesigned to handle a wide variety of data, including some highly sensitive and regulated data.In this lab we first create some assets. Afterwards, the UI where we can search for assets is shown. The main characters engaged in this lab are the data analyst named [Peter Profile](https://github.com/odpi/data-governance/blob/master/docs/coco-pharmaceuticals/personas/peter-profile.md), and the data scientist named [Callie Quartile](https://github.com/odpi/data-governance/blob/master/docs/coco-pharmaceuticals/personas/callie-quartile.md). Peter Profile Callie Quartile Setting upCoco Pharmaceuticals make widespread use of ODPi Egeria for tracking and managing their data and related assets.Figure 1 below shows their metadata servers and the Open Metadata and Governance (OMAG) Server Platforms that are hosting them. Each metadata server supports a department in the organization. The servers are distributed across the platform to even out the workload. Servers can be moved to a different platform if needed.> **Figure 1:** Coco Pharmaceuticals' OMAG Server PlatformsThe code below sets up the network addresses for the three platforms. This varies depending on whether you are running them locally, or on **kubernetes**.
###Code
%run ../common/environment-check.ipynb
###Output
_____no_output_____
###Markdown
----Callie is using the data lake metadata server called `cocoMDS4`. This server is hosted on the Data Lake OMAG Server Platform. It enables business users and the executive team to access data from the data lake.Check that `cocoMDS4` is running. If any of the platforms are not running, follow [this link to set up and run the platform](https://egeria.odpi.org/open-metadata-resources/open-metadata-labs/). If any server is reporting that it is not configured thenrun the steps in the **[Server Configuration](../egeria-server-config.ipynb)** lab to configurethe servers. Then re-run the previous step to ensure all of the servers are started. ---- Exercise 1 Generating some assetsBefore we can interact with the UI, we have to create some data in the metadata repository. This section is a simplified version of the [Building a Data Catalog Lab](../asset-management-labs/building-a-data-catalog.ipynb). If you haven't done so already, it is recommended that you also check those labs out.For this part of the exercises, Peter is going to create some CSV files using Asset Owner OMAS located in `cocoMDS1`. The code block below describes their basic attributes.
###Code
csv_file_list = [
{
"displayName": "List of patients",
"description": "Basic information regarding patients recorded in February 2020.",
"fullPath": "file://secured/research/patients.csv"
},
{
"displayName": "Log of treatments for patients",
"description": "Treatments carried out for patients in 2019.",
"fullPath": "file://secured/research/treatments.csv"
},
]
###Output
_____no_output_____
###Markdown
Afterwards, we will have our Data Analyst, [Peter Profile](https://github.com/odpi/data-governance/blob/master/docs/coco-pharmaceuticals/personas/peter-profile.md), add these CSV files into the repository. You can see two lists of GUIDs for the two CSV files we have added. For a more thorough explanation for this step, check out the [Building a Data Catalog lab](../asset-management-labs/building-a-data-catalog.ipynb).
###Code
for csv_file_specs in csv_file_list:
response_guid = assetOwnerCreateCSVAsset(
serverName=cocoMDS1Name,
serverPlatformName=cocoMDS1PlatformName,
serverPlatformURL=cocoMDS1PlatformURL,
userId=petersUserId,
**csv_file_specs
)
printGUIDList(response_guid)
###Output
_____no_output_____
###Markdown
 ODPi Egeria Hands-On Lab Welcome to the UI Asset Search Lab IntroductionODPi Egeria is an open source project that provides open standards and implementation libraries to connect tools, catalogs and platforms together so they can share information (called metadata) about data and the technology that supports it.In this hands-on lab you will get a chance to have hands-on experience with the UI of Egeria, by searching for assets we add along the way. The ScenarioThe ODPi Egeria team use the personas and scenarios from the fictitious company called Coco Pharmaceuticals. (See https://opengovernance.odpi.org/coco-pharmaceuticals/ for more information).As part of the huge business transformation that Coco Pharmaceuticals has embarked on, theyhave created a data lake for managing data for research, analytics, exchange between their internal organizations and business partners (such as hospitals). As a result, the data lake has to bedesigned to handle a wide variety of data, including some highly sensitive and regulated data.In this lab we first create some assets. Afterwards, the UI where we can search for assets is shown. The main characters engaged in this lab are the data analyst named [Peter Profile](https://github.com/odpi/data-governance/blob/master/docs/coco-pharmaceuticals/personas/peter-profile.md), and the data scientist named [Callie Quartile](https://github.com/odpi/data-governance/blob/master/docs/coco-pharmaceuticals/personas/callie-quartile.md). Peter Profile Callie Quartile Setting upCoco Pharmaceuticals make widespread use of ODPi Egeria for tracking and managing their data and related assets.Figure 1 below shows their metadata servers and the Open Metadata and Governance (OMAG) Server Platforms that are hosting them. Each metadata server supports a department in the organization. The servers are distributed across the platform to even out the workload. Servers can be moved to a different platform if needed.> **Figure 1:** Coco Pharmaceuticals' OMAG Server PlatformsThe code below sets up the network addresses for the three platforms. This varies depending on whether you are running them locally, through **docker-compose** or on **kubernetes**.
###Code
%run ../common/environment-check.ipynb
###Output
_____no_output_____
###Markdown
----Callie is using the data lake metadata server called `cocoMDS4`. This server is hosted on the Data Lake OMAG Server Platform. It enables business users and the executive team to access data from the data lake.Check that `cocoMDS4` is running. If any of the platforms are not running, follow [this link to set up and run the platform](https://egeria.odpi.org/open-metadata-resources/open-metadata-labs/). If any server is reporting that it is not configured thenrun the steps in the **[Server Configuration](../egeria-server-config.ipynb)** lab to configurethe servers. Then re-run the previous step to ensure all of the servers are started. ---- Exercise 1 Generating some assetsBefore we can interact with the UI, we have to create some data in the metadata repository. This section is a simplified version of the [Building a Data Catalog Lab](../asset-management-labs/building-a-data-catalog.ipynb). If you haven't done so already, it is recommended that you also check those labs out.For this part of the exercises, Peter is going to create some CSV files using Asset Owner OMAS located in `cocoMDS1`. The code block below describes their basic attributes.
###Code
csv_file_list = [
{
"displayName": "List of patients",
"description": "Basic information regarding patients recorded in February 2020.",
"fullPath": "file://secured/research/patients.csv"
},
{
"displayName": "Log of treatments",
"description": "Treatments carried out for patients in 2019.",
"fullPath": "file://secured/research/treatments.csv"
},
]
###Output
_____no_output_____
###Markdown
Afterwards, we will have our Data Analyst, [Peter Profile](https://github.com/odpi/data-governance/blob/master/docs/coco-pharmaceuticals/personas/peter-profile.md), add these CSV files into the repository. You can see two lists of GUIDs for the two CSV files we have added. For a more thorough explanation for this step, check out the [Building a Data Catalog lab](../asset-management-labs/building-a-data-catalog.ipynb).
###Code
for csv_file_specs in csv_file_list:
response_guid = assetOwnerCreateCSVAsset(
serverName=cocoMDS1Name,
serverPlatformName=cocoMDS1PlatformName,
serverPlatformURL=cocoMDS1PlatformURL,
userId=petersUserId,
**csv_file_specs
)
printGUIDList(response_guid)
###Output
_____no_output_____
###Markdown
Exercise 2 Search for assets in the UI Now we can get the URL through the environment variable, with a default value of `https://localhost:8443`:
###Code
uiURL = os.environ.get('uiExternalURL', 'https://localhost:8443')
print(uiURL)
###Output
_____no_output_____ |
src/notebooks/conversation_between_blender_and_eliza.ipynb | ###Markdown
 Conversation between BlenderBot and Eliza In this notebook, we create a loop in which BlenderBot has a conversation with Eliza. We send an initial prompt to BlenderBot to start a conversation. We try to extract triples from the BlenderBot response and store these in the eKG. Next we capture Eliza's response and BlenderBot's response continuously until we meet the stop condition. In principle, this conversation can go on forever. At the end, we save the scenario in EMISSOR. Before running, start GraphDB and make sure that the repository used below exists (this notebook connects to a repository called `blender4`). GraphDB can be downloaded from: https://graphdb.ontotext.com Import the necessary modules
###Code
import json
import os
import time
import uuid
from datetime import date
from datetime import datetime
from random import getrandbits, choice
import pathlib
import pprint
import spacy
# general imports for EMISSOR and the BRAIN
import emissor as em
import requests
from cltl import brain
from cltl.brain.long_term_memory import LongTermMemory
from cltl.brain.utils.helper_functions import brain_response_to_json
from cltl.combot.backend.api.discrete import UtteranceType
from cltl.reply_generation.data.sentences import GREETING, ASK_NAME, ELOQUENCE, TALK_TO_ME
from cltl.reply_generation.lenka_replier import LenkaReplier
from cltl.triple_extraction.api import Chat, UtteranceHypothesis
from emissor.persistence import ScenarioStorage
from emissor.representation.annotation import AnnotationType, Token, NER
from emissor.representation.container import Index
from emissor.representation.scenario import Modality, ImageSignal, TextSignal, Mention, Annotation, Scenario
#!python -m spacy download en
###Output
_____no_output_____
###Markdown
Import the chatbot utility functions
###Code
import sys
import os
src_path = os.path.abspath(os.path.join('..'))
if src_path not in sys.path:
sys.path.append(src_path)
#### The next utils are needed for the interaction and creating triples and capsules
import chatbots.util.driver_util as d_util
import chatbots.util.text_util as t_util
import chatbots.util.capsule_util as c_util
import chatbots.intentions.talk as talk
import chatbots.intentions.get_to_know_you as friend
import chatbots.bots.eliza as eliza
###Output
_____no_output_____
###Markdown
Import a conversation agent pipeline
###Code
#from transformers import AutoModelForCausalLM, AutoTokenizer, AutoModel, AutoModelWithLMHead
#import torch
#tokenizer = AutoTokenizer.from_pretrained('microsoft/DialoGPT-medium')
#model = AutoModelForCausalLM.from_pretrained('microsoft/DialoGPT-medium')
#tokenizer = AutoTokenizer.from_pretrained('gpt2')
#model = AutoModelForCausalLM.from_pretrained('gpt2')
#tokenizer = AutoTokenizer.from_pretrained("manueltonneau/bert-base-cased-conversational-nli")
#model = AutoModel.from_pretrained("manueltonneau/bert-base-cased-conversational-nli")
#tokenizer = AutoTokenizer.from_pretrained("xlnet-large-cased")
#model = AutoModelForCausalLM.from_pretrained("xlnet-large-cased")
#tokenizer = AutoTokenizer.from_pretrained("t5-small")
#model = AutoModelWithLMHead.from_pretrained("t5-small")
#### Needed to suppress messages from DialgGPT
#import os
#os.environ["TOKENIZERS_PARALLELISM"] = "false"
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration
mname = 'facebook/blenderbot-400M-distill'
model = BlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotTokenizer.from_pretrained(mname)
UTTERANCE = " I am curious. Has blenderbot live in location?"
inputs = tokenizer([UTTERANCE], return_tensors='pt')
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids))
###Output
["<s> I'm not sure, but I do know that they have been around since the late 19th century.</s>"]
###Markdown
Standard initialisation of a scenarioWe initialise a scenario in the standard way by creating a unique folder and setting the AGENT and HUMAN_NAME and HUMAN_ID variables. Throughout this scenario, the HUMAN_NAME and HUMAN_ID will be used as the source for the utterances.
###Code
from random import getrandbits
import requests
##### Setting the location
place_id = getrandbits(8)
location = None
try:
location = requests.get("https://ipinfo.io").json()
except:
print("failed to get the IP location")
##### Setting the agents
AGENT = "Leolani"
HUMAN_NAME = "BLENDERBOT"
HUMAN_ID = "BLENDERBOT"
### The name of your scenario
scenario_id = datetime.today().strftime("%Y-%m-%d-%H_%M_%S")
### Specify the path to an existing data folder where your scenario is created and saved as a subfolder
# Find the repository root dir
parent, dir_name = (d_util.__file__, "_")
while dir_name and dir_name != "src":
parent, dir_name = os.path.split(parent)
root_dir = parent
scenario_path = os.path.abspath(os.path.join(root_dir, 'data'))
if not os.path.exists(scenario_path) :
os.mkdir(scenario_path)
print("Created a data folder for storing the scenarios", scenario_path)
### Define the folders where the images and rdf triples are saved
imagefolder = scenario_path + "/" + scenario_id + "/" + "image"
rdffolder = scenario_path + "/" + scenario_id + "/" + "rdf"
### Create the scenario folder, the json files and a scenarioStorage and scenario in memory
scenarioStorage = d_util.create_scenario(scenario_path, scenario_id)
scenario_ctrl = scenarioStorage.create_scenario(scenario_id, int(time.time() * 1e3), None, AGENT)
###Output
Directories for 2022-05-03-19_43_11 created in /Users/piek/PycharmProjects/cltl-chatbots/data
###Markdown
 Specifying the BRAIN We specify the BRAIN in GraphDB and use the scenario path just defined for storing the RDF triples produced in EMISSOR. If you set *clear_all* to *True*, the triple store is emptied (memory erased) and the basic ontological models are reloaded. Setting it to *False* means you add things to the current memory.
###Code
log_path = pathlib.Path(rdffolder)
my_brain = brain.LongTermMemory(address="http://localhost:7200/repositories/blender4",
log_dir=log_path,
clear_all=True)
###Output
2022-05-03 19:44:13,740 - INFO - cltl.brain.basic_brain.LongTermMemory - Booted
###Markdown
Create an instance of a replier
###Code
replier = eliza.ElizaImpl()
###Output
_____no_output_____
###Markdown
Initialise a chat with the HUMAN_ID to keep track of the dialogue history
###Code
chat = Chat(HUMAN_ID)
nlp = spacy.load("en_core_web_sm")
#nlp= spacy.load('en') # other languages: de, es, pt, fr, it, nl
from cltl.triple_extraction.cfg_analyzer import CFGAnalyzer
analyzer = CFGAnalyzer()
#CFG test
item = {'utterance': "I like dogs."}
chat.add_utterance(item['utterance'])
analyzer.analyze(chat.last_utterance)
print(chat.last_utterance, chat.last_utterance.triples)
###Output
2022-05-03 19:45:08 - INFO - cltl.triple_extraction.Chat - BLENDERBOT 000: "I like dogs."
2022-05-03 19:45:10 - INFO - cltl.triple_extraction.CFGAnalyzer - Found 1 triples
2022-05-03 19:45:11 - INFO - cltl.triple_extraction.GeneralStatementAnalyzer - Utterance type: "STATEMENT"
2022-05-03 19:45:11 - INFO - cltl.triple_extraction.GeneralStatementAnalyzer - RDF triplet subject: {"label": "BLENDERBOT", "type": ["agent"]}
2022-05-03 19:45:11 - INFO - cltl.triple_extraction.GeneralStatementAnalyzer - RDF triplet predicate: {"label": "like", "type": ["emotion"]}
2022-05-03 19:45:11 - INFO - cltl.triple_extraction.GeneralStatementAnalyzer - RDF triplet object: {"label": "dogs.", "type": ["agent"]}
2022-05-03 19:45:11 - INFO - cltl.triple_extraction.GeneralStatementAnalyzer - Perspective certainty: POSSIBLE
2022-05-03 19:45:11 - INFO - cltl.triple_extraction.GeneralStatementAnalyzer - Perspective polarity: POSITIVE
2022-05-03 19:45:11 - INFO - cltl.triple_extraction.GeneralStatementAnalyzer - Perspective sentiment: NEUTRAL
2022-05-03 19:45:11 - INFO - cltl.triple_extraction.GeneralStatementAnalyzer - Perspective emotion: UNDERSPECIFIED
###Markdown
Start the interaction
###Code
context_size = 5
def get_answer_from_blender(leolani_prompt:str, history_list:[]):
answer = ""
sentences = []
history = ""
    for i, his in enumerate(history_list):
if i==context_size:
break
history += his +". "
input_prompt = history+leolani_prompt
print('input_prompt:',input_prompt)
bot_input_ids = tokenizer(input_prompt, return_tensors='pt')
chat_history_ids = model.generate(**bot_input_ids)
utteranceList = tokenizer.batch_decode(chat_history_ids)
print('utteranceList:', utteranceList)
answer = utteranceList[0].strip('</s>')
print("ANSWER", answer)
doc = nlp(answer)
for s in doc.sents:
sentence = ""
for token in s:
if token.text==',':
sentences.append(sentence)
else:
sentence += token.text+" "
sentences.append(sentence)
return sentences
def process_text_with_cfg(scenario: Scenario,
place_id: str,
location: str,
human_id: str,
textSignal: TextSignal,
chat: Chat,
analyzer: CFGAnalyzer,
my_brain: LongTermMemory,
                print_details: bool = False):
capsule = None
response = None
response_json = None
### Next, we get all possible triples
chat.add_utterance(c_util.seq_to_text(textSignal.seq))
analyzer.analyze(chat.last_utterance)
if print_details:
print('Last utterance:', c_util.seq_to_text(textSignal.seq))
print('CFG Tripels:', chat.last_utterance.triples)
for extracted_triple in chat.last_utterance.triples:
print(extracted_triple['utterance_type'], extracted_triple)
capsule = c_util.scenario_utterance_to_capsule(scenario,place_id,location, textSignal,human_id, extracted_triple)
response = my_brain.update(capsule, reason_types=False, create_label=True)
def process_triple_spacy(scenario: Scenario,
place_id: str,
location: str,
speaker: str,
hearer:str,
textSignal: TextSignal,
my_brain: LongTermMemory,
nlp,
                print_details: bool = False):
response = None
triples = t_util.get_subj_amod_triples_with_spacy(textSignal, nlp, speaker, hearer)
triples.extend(t_util.get_subj_obj_triples_with_spacy(textSignal, nlp, speaker, hearer))
if print_details:
print('spacy Triples', triples)
for triple in triples:
capsule = c_util.scenario_pred_to_capsule(scenario,
place_id,
location,
textSignal,
speaker,
triple[1],
triple[0],
triple[2])
if print_details:
print('Triple spacy Capsule:')
pprint.pprint(capsule)
try:
response = my_brain.update(capsule, reason_types=False, create_label=True)
response_json = brain_response_to_json(response)
except:
print('Error:', response)
print_details=True
max_context=50
t1 = datetime.now()
history = []
#### Initial prompt by the system from which we create a TextSignal and store it
leolani_prompt = f"{choice(TALK_TO_ME)}"
history.append(leolani_prompt)
print('\n\t'+AGENT + ": " + leolani_prompt)
textSignal = d_util.create_text_signal_with_speaker_annotation(scenario_ctrl, leolani_prompt, AGENT)
scenario_ctrl.append_signal(textSignal)
#BLENDERBOT
repetition = []
utterance = ""
response_json = None
response_json_list = []
replies = []
#### Get input and loop
#while (datetime.now()-t1).seconds <= 3600 or no_reply_count<15:
while True:
# BLENDER
answer = ""
sentences = get_answer_from_blender(leolani_prompt, history)
leolani_prompt = ""
if not sentences:
if len(response_json_list)>0:
for response_json in response_json_list:
if response_json['statement']:
leolani_prompt += replier.reply_to_statement(response_json, proactive=True, persist=True)
response_json_list = []
else:
leolani_prompt = f"{choice(TALK_TO_ME)}"
print('\n\t'+AGENT + ": " + leolani_prompt)
textSignal = d_util.create_text_signal_with_speaker_annotation(scenario_ctrl, leolani_prompt, AGENT)
scenario_ctrl.append_signal(textSignal)
else:
for utterance in sentences:
if utterance in repetition:
print('Repeating', utterance)
#utterance = None
else:
repetition.append(utterance)
if utterance:
utterance = utterance.strip()
if utterance.endswith(' .'):
utterance = utterance[:-2]+'.'
print('\n\tFixed period '+HUMAN_NAME + ": " + utterance)
if utterance.endswith(' ?'):
utterance = utterance[:-2]+'?'
print('\n\tFixed question '+HUMAN_NAME + ": " + utterance)
textSignal = d_util.create_text_signal_with_speaker_annotation(scenario_ctrl, utterance, HUMAN_ID)
scenario_ctrl.append_signal(textSignal)
#### Process input and generate reply
process_text_with_cfg(scenario_ctrl, place_id, location, HUMAN_ID, textSignal, chat, analyzer, my_brain, print_details)
process_triple_spacy(scenario_ctrl,place_id,location,HUMAN_ID,AGENT, textSignal,my_brain,nlp,print_details)
leolani_prompt = replier.respond(utterance)
print('\n\t'+AGENT + ": " + leolani_prompt)
textSignal = d_util.create_text_signal_with_speaker_annotation(scenario_ctrl, leolani_prompt, AGENT)
scenario_ctrl.append_signal(textSignal)
history.append(leolani_prompt)
###Output
Leolani: Would you like to chat? I'll do my best to keep up
input_prompt: Would you like to chat? I'll do my best to keep up
###Markdown
Save the scenario data
###Code
scenario_ctrl.scenario.ruler.end = int(time.time() * 1e3)
scenarioStorage.save_scenario(scenario_ctrl)
###Output
_____no_output_____ |
Intro to SQL/2 Select, From & Where/exercise-select-from-where.ipynb | ###Markdown
**This notebook is an exercise in the [SQL](https://www.kaggle.com/learn/intro-to-sql) course. You can reference the tutorial at [this link](https://www.kaggle.com/dansbecker/select-from-where).**--- IntroductionTry writing some **SELECT** statements of your own to explore a large dataset of air pollution measurements.Run the cell below to set up the feedback system.
###Code
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.sql.ex2 import *
print("Setup Complete")
###Output
_____no_output_____
###Markdown
The code cell below fetches the `global_air_quality` table from the `openaq` dataset. We also preview the first five rows of the table.
###Code
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "openaq" dataset
dataset_ref = client.dataset("openaq", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Construct a reference to the "global_air_quality" table
table_ref = dataset_ref.table("global_air_quality")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the "global_air_quality" table
client.list_rows(table, max_results=5).to_dataframe()
###Output
_____no_output_____
###Markdown
Exercises 1) Units of measurementWhich countries have reported pollution levels in units of "ppm"? In the code cell below, set `first_query` to an SQL query that pulls the appropriate entries from the `country` column.In case it's useful to see an example query, here's some code from the tutorial:```query = """ SELECT city FROM `bigquery-public-data.openaq.global_air_quality` WHERE country = 'US' """```
###Code
# Query to select countries with units of "ppm"
first_query = """
SELECT country
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'ppm'
""" # Your code goes here
# Set up the query (cancel the query if it would use too much of
# your quota, with the limit set to 10 GB)
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)
first_query_job = client.query(first_query, job_config=safe_config)
# API request - run the query, and return a pandas DataFrame
first_results = first_query_job.to_dataframe()
# View top few rows of results
print(first_results.head())
# Check your answer
q_1.check()
###Output
_____no_output_____
###Markdown
For the solution, uncomment the line below.
###Code
#q_1.solution()
###Output
_____no_output_____
###Markdown
2) High air qualityWhich pollution levels were reported to be exactly 0? - Set `zero_pollution_query` to select **all columns** of the rows where the `value` column is 0.- Set `zero_pollution_results` to a pandas DataFrame containing the query results.
###Code
# Query to select all columns where pollution levels are exactly 0
zero_pollution_query = """
SELECT *
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE value = 0
""" # Your code goes here
# Set up the query
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)
query_job = client.query(zero_pollution_query, job_config=safe_config)
# API request - run the query and return a pandas DataFrame
zero_pollution_results = query_job.to_dataframe() # Your code goes here
print(zero_pollution_results.head())
# Check your answer
q_2.check()
###Output
_____no_output_____
###Markdown
For the solution, uncomment the line below.
###Code
#q_2.solution()
###Output
_____no_output_____ |
dataset-test/Visualizations/.ipynb_checkpoints/Final_Visualization-checkpoint.ipynb | ###Markdown
PLEASE USE THE BELOW OUTLINED GRAPHS
###Code
# pandas, matplotlib and seaborn are used throughout this notebook
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df2 = pd.read_excel('gender.xlsx')
df2.head()
plt.figure(figsize=(12, 10))
g = sns.swarmplot(x="Male_P", y="Female_P", data=df2, hue = "Course",size = 20);
g.set(xticklabels = []);
g.set_title("Hello")
g = sns.pairplot(df2, hue="Course")
df2['Total'] = df2['Total'].apply(lambda x: 100)
df2 = df2.sort_values(by='Female_P', ascending=True)
df2 = df2.reset_index(drop=True)
# Histogram Visualisation on Dataset
fig = plt.figure(figsize=(12,6))
sns.set_color_codes("pastel")
sns.barplot(x="Total", y="Course", data=df2,
label="Total", color="b")
g = sns.set_color_codes("muted")
g = sns.barplot(x="Female_P", y="Course", data=df2,
color="pink")
g.set_title("Female")
# Histogram Visualisation on Dataset
fig = plt.figure(figsize=(12,6))
sns.set_color_codes("pastel")
sns.barplot(x="Total", y="Course", data=df2,
label="Total", color="b")
g = sns.set_color_codes("muted")
g = sns.barplot(x="Male", y="Course", data=df2,
palette="cool")
g.set_title("Male")
g.set_xlabel("")
plt.figure(figsize=(12, 10))
g = sns.factorplot(x="Male_P", y="Course", data = df2, kind="bar", size=12, palette='coolwarm');
g = sns.factorplot(x="Total", y="Course", data = df2, kind="bar", size=12, palette='coolwarm');
###Output
_____no_output_____ |
SparkNLP--Text_Preprocessing_with_Annotators_Transformers.ipynb | ###Markdown
 [](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Public/2.Text_Preprocessing_with_SparkNLP_Annotators_Transformers.ipynb) 2. Text Preprocessing with Spark NLP **Note** Read this article if you want to understand the basic concepts in Spark NLP.https://towardsdatascience.com/introduction-to-spark-nlp-foundations-and-basic-components-part-i-c83b7629ed59 Colab Setup
###Code
! pip install -q pyspark==3.1.2 spark-nlp
###Output
     |████████████████████████████████| 212.4 MB 14 kB/s
     |████████████████████████████████| 130 kB 21.6 MB/s
     |████████████████████████████████| 198 kB 49.4 MB/s
  Building wheel for pyspark (setup.py) ... done
###Markdown
if you want to work with Spark 2.3 ```import os Install java! apt-get update -qq! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null!wget -q https://archive.apache.org/dist/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz!tar xf spark-2.3.0-bin-hadoop2.7.tgz!pip install -q findsparkos.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]os.environ["SPARK_HOME"] = "/content/spark-2.3.0-bin-hadoop2.7"! java -versionimport findsparkfindspark.init()from pyspark.sql import SparkSession! pip install --ignore-installed -q spark-nlp==2.7.5import sparknlpspark = sparknlp.start(spark23=True)``` 1. Annotators and Transformer Concepts In Spark NLP, all Annotators are either Estimators or Transformers as we see in Spark ML. An Estimator in Spark ML is an algorithm which can be fit on a DataFrame to produce a Transformer. E.g., a learning algorithm is an Estimator which trains on a DataFrame and produces a model. A Transformer is an algorithm which can transform one DataFrame into another DataFrame. E.g., an ML model is a Transformer that transforms a DataFrame with features into a DataFrame with predictions.In Spark NLP, there are two types of annotators: AnnotatorApproach and AnnotatorModelAnnotatorApproach extends Estimators from Spark ML, which are meant to be trained through fit(), and AnnotatorModel extends Transformers which are meant to transform data frames through transform().Some of Spark NLP annotators have a Model suffix and some do not. The model suffix is explicitly stated when the annotator is the result of a training process. Some annotators, such as Tokenizer are transformers but do not contain the suffix Model since they are not trained, annotators. Model annotators have a pre-trained() on its static object, to retrieve the public pre-trained version of a model.Long story short, if it trains on a DataFrame and produces a model, it’s an AnnotatorApproach; and if it transforms one DataFrame into another DataFrame through some models, it’s an AnnotatorModel (e.g. WordEmbeddingsModel) and it doesn’t take Model suffix if it doesn’t rely on a pre-trained annotator while transforming a DataFrame (e.g. Tokenizer). By convention, there are three possible names:Approach — Trainable annotatorModel — Trained annotatornothing — Either a non-trainable annotator with pre-processingstep or shorthand for a modelSo for example, Stemmer doesn’t say Approach nor Model, however, it is a Model. On the other hand, Tokenizer doesn’t say Approach nor Model, but it has a TokenizerModel(). Because it is not “training” anything, but it is doing some preprocessing before converting into a Model.When in doubt, please refer to official documentation and API reference.Even though we will do many hands-on practices in the following articles, let us give you a glimpse to let you understand the difference between AnnotatorApproach and AnnotatorModel.As stated above, Tokenizer is an AnnotatorModel. So we need to call fit() and then transform(). Now let’s see how this can be done in Spark NLP using Annotators and Transformers. Assume that we have the following steps that need to be applied one by one on a data frame.- Split text into sentences- Tokenize- Normalize- Get word embeddings  What’s actually happening under the hood?When we fit() on the pipeline with Spark data frame (df), its text column is fed into DocumentAssembler() transformer at first and then a new column “document” is created in Document type (AnnotatorType). 
As we mentioned before, this transformer is basically the initial entry point to Spark NLP for any Spark data frame. Then its document column is fed into SentenceDetector() (AnnotatorApproach) and the text is split into an array of sentences and a new column “sentences” in Document type is created. Then “sentences” column is fed into Tokenizer() (AnnotatorModel) and each sentence is tokenized and a new column “token” in Token type is created. And so on.
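As a rough illustration of the four steps listed above, here is a minimal sketch of that pipeline (the embeddings stage assumes the public `glove_100d` pretrained English model, which is downloaded on first use, and the `spark` session started in the next cell; the column names are just our choice):

```
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, Normalizer, WordEmbeddingsModel
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
sentenceDetector = SentenceDetector().setInputCols(["document"]).setOutputCol("sentences")
tokenizer = Tokenizer().setInputCols(["sentences"]).setOutputCol("token")
normalizer = Normalizer().setInputCols(["token"]).setOutputCol("normalized")

# the embeddings stage is a pre-trained AnnotatorModel, so nothing is trained here
embeddings = WordEmbeddingsModel.pretrained("glove_100d") \
    .setInputCols(["sentences", "normalized"]) \
    .setOutputCol("embeddings")

pipeline = Pipeline(stages=[documentAssembler, sentenceDetector, tokenizer, normalizer, embeddings])

# fit() on any DataFrame with a 'text' column returns a PipelineModel that can transform() new data
model = pipeline.fit(spark.createDataFrame([["Peter is a very good person."]]).toDF("text"))
```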
###Code
import sparknlp
spark = sparknlp.start()
print("Spark NLP version", sparknlp.version())
print("Apache Spark version:", spark.version)
###Output
Spark NLP version 3.3.2
Apache Spark version: 3.1.2
###Markdown
Create Spark Dataframe
###Code
text = 'Peter Parker is a nice guy and lives in New York'
spark_df = spark.createDataFrame([[text]]).toDF("text")
spark_df.show(truncate=False)
from pyspark.sql.types import StringType, IntegerType
# if you want to create a spark datafarme from a list of strings
text_list = ['Peter Parker is a nice guy and lives in New York.', 'Bruce Wayne is also a nice guy and lives in Gotham City.']
spark.createDataFrame(text_list, StringType()).toDF("text").show(truncate=80)
# https://www.geeksforgeeks.org/python-lambda-anonymous-functions-filter-map-reduce/
from pyspark.sql import Row
spark.createDataFrame(list(map(lambda x: Row(text=x), text_list))).show(truncate=80)
!wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jupyter/annotation/english/spark-nlp-basics/sample-sentences-en.txt
with open('./sample-sentences-en.txt') as f:
print (f.read())
spark_df = spark.read.text('./sample-sentences-en.txt').toDF('text')
spark_df.show(truncate=False)
spark_df.select('text').show(truncate=False)
textFiles = spark.sparkContext.wholeTextFiles("./*.txt",4)
print(textFiles)
spark_df_folder = textFiles.toDF(schema=['path','text'])
spark_df_folder.show(truncate=30)
spark_df_folder.select('text').take(1)
spark_df_folder.select('text').collect()
###Output
_____no_output_____
###Markdown
Transformers What are we going to do if our DataFrame doesn’t have columns in those type? Here comes transformers. In Spark NLP, we have five different transformers that are mainly used for getting the data in or transform the data from one AnnotatorType to another. Here is the list of transformers:`DocumentAssembler`: To get through the NLP process, we need to get raw data annotated. This is a special transformer that does this for us; it creates the first annotation of type Document which may be used by annotators down the road.`TokenAssembler`: This transformer reconstructs a Document type annotation from tokens, usually after these have been, lemmatized, normalized, spell checked, etc, to use this document annotation in further annotators.`Doc2Chunk`: Converts DOCUMENT type annotations into CHUNK type with the contents of a chunkCol.`Chunk2Doc` : Converts a CHUNK type column back into DOCUMENT. Useful when trying to re-tokenize or do further analysis on a CHUNK result.`Finisher`: Once we have our NLP pipeline ready to go, we might want to use our annotation results somewhere else where it is easy to use. The Finisher outputs annotation(s) values into a string. each annotator accepts certain types of columns and outputs new columns in another type (we call this AnnotatorType).In Spark NLP, we have the following types: `Document`, `token`, `chunk`, `pos`, `word_embeddings`, `date`, `entity`, `sentiment`, `named_entity`, `dependency`, `labeled_dependency`. That is, the DataFrame you have needs to have a column from one of these types if that column will be fed into an annotator; otherwise, you’d need to use one of the Spark NLP transformers. 2. Document Assembler In Spark NLP, we have five different transformers that are mainly used for getting the data in or transform the data from one AnnotatorType to another. That is, the DataFrame you have needs to have a column from one of these types if that column will be fed into an annotator; otherwise, you’d need to use one of the Spark NLP transformers. Here is the list of transformers: DocumentAssembler, TokenAssembler, Doc2Chunk, Chunk2Doc, and the Finisher.So, let’s start with DocumentAssembler(), an entry point to Spark NLP annotators. To get through the process in Spark NLP, we need to get raw data transformed into Document type at first. DocumentAssembler() is a special transformer that does this for us; it creates the first annotation of type Document which may be used by annotators down the road.DocumentAssembler() comes from sparknlp.base class and has the following settable parameters. See the full list here and the source code here.`setInputCol()` -> the name of the column that will be converted. We can specify only one column here. It can read either a String column or an Array[String]`setOutputCol()` -> optional : the name of the column in Document type that is generated. We can specify only one column here. Default is ‘document’`setIdCol()` -> optional: String type column with id information`setMetadataCol()` -> optional: Map type column with metadata information`setCleanupMode()` -> optional: Cleaning up options, possible values:```disabled: Source kept as original. This is a default.inplace: removes new lines and tabs.inplace_full: removes new lines and tabs but also those which were converted to strings (i.e. \n)shrink: removes new lines and tabs, plus merging multiple spaces and blank lines to a single space.shrink_full: remove new lines and tabs, including stringified values, plus shrinking spaces and blank lines.```
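For instance, if your DataFrame also carries an id column, here is a small sketch of passing it through with `setIdCol` (the `id` column name and the sample row are made up for illustration):

```
from sparknlp.base import DocumentAssembler

df_with_id = spark.createDataFrame([(1, "Peter is a very good person.")], ["id", "text"])

documentAssembler_with_id = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document") \
    .setIdCol("id") \
    .setCleanupMode("shrink")

documentAssembler_with_id.transform(df_with_id).select("document").show(truncate=False)
```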
###Code
spark_df.show()
spark_df.show(truncate=False)
###Output
+-----------------------------------------------------------------------------+
|text |
+-----------------------------------------------------------------------------+
|Peter is a very good person. |
|My life in Russia is very interesting. |
|John and Peter are brothers. However they don't support each other that much.|
|Lucas Nogal Dunbercker is no longer happy. He has a good car though. |
|Europe is very culture rich. There are huge churches! and big houses! |
+-----------------------------------------------------------------------------+
###Markdown
spark-nlp [API](https://nlp.johnsnowlabs.com/api/com/johnsnowlabs/nlp/DocumentAssembler.htmlsetCleanupMode(v:String):DocumentAssembler.this.type)
###Code
# CleanupMode
# https://nlp.johnsnowlabs.com/api/com/johnsnowlabs/nlp/DocumentAssembler.html#setCleanupMode(v:String):DocumentAssembler.this.type
from sparknlp.base import *
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")\
.setCleanupMode("shrink")
doc_df = documentAssembler.transform(spark_df)
doc_df.show(truncate=30)
###Output
_____no_output_____
###Markdown
 At first, we define DocumentAssembler with desired parameters and then transform the data frame with it. The most important point to pay attention to here is that you need to use a String or Array[String] type column in .setInputCol(). So it doesn't have to be named text; you just use the column name as it is.
###Code
doc_df.printSchema()
doc_df.select('document.result','document.begin','document.end').show(truncate=False)
###Output
+-------------------------------------------------------------------------------+-----+----+
|result |begin|end |
+-------------------------------------------------------------------------------+-----+----+
|[Peter is a very good person.] |[0] |[27]|
|[My life in Russia is very interesting.] |[0] |[37]|
|[John and Peter are brothers. However they don't support each other that much.]|[0] |[76]|
|[Lucas Nogal Dunbercker is no longer happy. He has a good car though.] |[0] |[67]|
|[Europe is very culture rich. There are huge churches! and big houses!] |[0] |[68]|
+-------------------------------------------------------------------------------+-----+----+
###Markdown
The new column is in an array of struct type and has the parameters shown above. The annotators and transformers all come with universal metadata that would be filled down the road depending on the annotators being used. Unless you want to append other Spark NLP annotators to DocumentAssembler(), you don’t need to know what all these parameters mean for now. So we will talk about them in the following articles. You can access all these parameters with {column name}.{parameter name}.Let’s print out the first item’s result.
###Code
doc_df.select("document.result").take(1)
###Output
_____no_output_____
###Markdown
If we would like to flatten the document column, we can do as follows.
###Code
import pyspark.sql.functions as F
doc_df.withColumn(
"tmp",
F.explode("document"))\
.select("tmp.*")\
.show(truncate=False)
###Output
+-------------+-----+---+-----------------------------------------------------------------------------+---------------+----------+
|annotatorType|begin|end|result |metadata |embeddings|
+-------------+-----+---+-----------------------------------------------------------------------------+---------------+----------+
|document |0 |27 |Peter is a very good person. |{sentence -> 0}|[] |
|document |0 |37 |My life in Russia is very interesting. |{sentence -> 0}|[] |
|document |0 |76 |John and Peter are brothers. However they don't support each other that much.|{sentence -> 0}|[] |
|document |0 |67 |Lucas Nogal Dunbercker is no longer happy. He has a good car though. |{sentence -> 0}|[] |
|document |0 |68 |Europe is very culture rich. There are huge churches! and big houses! |{sentence -> 0}|[] |
+-------------+-----+---+-----------------------------------------------------------------------------+---------------+----------+
###Markdown
3. Sentence Detector Finds sentence bounds in raw text. `setCustomBounds(string)`: Custom sentence separator text e.g. `["\n"]``setUseCustomOnly(bool)`: Use only custom bounds without considering those of Pragmatic Segmenter. Defaults to false. Needs customBounds.`setUseAbbreviations(bool)`: Whether to consider abbreviation strategies for better accuracy but slower performance. Defaults to true.`setExplodeSentences(bool)`: Whether to split sentences into different Dataset rows. Useful for higher parallelism in fat rows. Defaults to false.
###Code
from sparknlp.annotator import *
# we feed the document column coming from Document Assembler
sentenceDetector = SentenceDetector()\
.setInputCols(['document'])\
.setOutputCol('sentences')
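# For example (a sketch, not used below): to split only on a custom delimiter such as a
# semicolon, the custom-bounds options can be combined like this:
custom_sentencer = SentenceDetector()\
    .setInputCols(['document'])\
    .setOutputCol('custom_sentences')\
    .setCustomBounds([';'])\
    .setUseCustomOnly(True)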
sentenceDetector.extractParamMap()
doc_df.show(truncate=False)
sent_df = sentenceDetector.transform(doc_df)
sent_df.show(truncate=False)
sent_df.select('sentences').take(3)
sent_df.select('sentences.result').take(3)
text ='The patient was prescribed 1 capsule of Advil for 5 days. He was seen by the endocrinology service and she was discharged on 40 units of insulin glargine at night, 12 units of insulin lispro with meals, and metformin 1000 mg two times a day. It was determined that all SGLT2 inhibitors should be discontinued indefinitely fro 3 months.'
text
spark_df = spark.createDataFrame([[text]]).toDF("text")
spark_df.show(truncate=False)
spark_df.show(truncate=50)
doc_df = documentAssembler.transform(spark_df)
sent_df = sentenceDetector.transform(doc_df)
sent_df.show(truncate=True)
sent_df.select('sentences.result').take(1)
sentenceDetector.setExplodeSentences(True)
sent_df = sentenceDetector.transform(doc_df)
sent_df.show(truncate=50)
sent_df.select('sentences.result').show(truncate=False)
from pyspark.sql import functions as F
sent_df.select(F.explode('sentences.result')).show(truncate=False)
###Output
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|col |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|The patient was prescribed 1 capsule of Advil for 5 days. |
|He was seen by the endocrinology service and she was discharged on 40 units of insulin glargine at night, 12 units of insulin lispro with meals, and metformin 1000 mg two times a day.|
|It was determined that all SGLT2 inhibitors should be discontinued indefinitely fro 3 months. |
+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
###Markdown
Sentence Detector DL
###Code
sentencerDL = SentenceDetectorDLModel\
.pretrained("sentence_detector_dl", "en") \
.setInputCols(["document"]) \
.setOutputCol("sentences")
sent_dl_df = sentencerDL.transform(doc_df)
sent_dl_df.select(F.explode('sentences.result')).show(truncate=False)
documenter = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetector()\
.setInputCols(['document'])\
.setOutputCol('sentences')
sentencerDL = SentenceDetectorDLModel\
.pretrained("sentence_detector_dl", "en") \
.setInputCols(["document"]) \
.setOutputCol("sentences")
sd_pipeline = PipelineModel(stages=[documenter, sentenceDetector])
sd_model = LightPipeline(sd_pipeline)
# DL version
sd_dl_pipeline = PipelineModel(stages=[documenter, sentencerDL])
sd_dl_model = LightPipeline(sd_dl_pipeline)
text = """John loves Mary.Mary loves Peter
Peter loves Helen .Helen loves John;
Total: four people involved."""
# sd_model
for anno in sd_model.fullAnnotate(text)[0]["sentences"]:
print("{}\t{}\t{}\t{}".format(
anno.metadata["sentence"], anno.begin, anno.end, anno.result))
# sd_dl_model
for anno in sd_dl_model.fullAnnotate(text)[0]["sentences"]:
print("{}\t{}\t{}\t{}".format(
anno.metadata["sentence"], anno.begin, anno.end, anno.result))
###Output
0 0 15 John loves Mary.
1 16 32 Mary loves Peter
2 33 51 Peter loves Helen .
3 52 68 Helen loves John;
4 71 98 Total: four people involved.
###Markdown
 Tokenizer Identifies tokens with tokenization open standards. It is an **AnnotatorApproach, so it requires .fit()**.A few rules will help customizing it if defaults do not fit user needs.setExceptions(StringArray): List of tokens to not alter at all. Allows composite tokens like two worded tokens that the user may not want to split.`addException(String)`: Add a single exception`setExceptionsPath(String)`: Path to txt file with list of token exceptions`caseSensitiveExceptions(bool)`: Whether to follow case sensitiveness for matching exceptions in text`contextChars(StringArray)`: List of 1 character string to rip off from tokens, such as parenthesis or question marks. Ignored if using prefix, infix or suffix patterns.`splitChars(StringArray)`: List of 1 character string to split tokens inside, such as hyphens. Ignored if using infix, prefix or suffix patterns.`splitPattern (String)`: Pattern to separate from the inside of tokens. Takes priority over splitChars.`setTargetPattern`: Basic regex rule to identify a candidate for tokenization. Defaults to \\S+ which means anything not a space`setSuffixPattern`: Regex to identify subtokens that are in the end of the token. Regex has to end with \\z and must contain groups (). Each group will become a separate token within the prefix. Defaults to non-letter characters. e.g. quotes or parenthesis`setPrefixPattern`: Regex to identify subtokens that come in the beginning of the token. Regex has to start with \\A and must contain groups (). Each group will become a separate token within the prefix. Defaults to non-letter characters. e.g. quotes or parenthesis`addInfixPattern`: Add an extension pattern regex with groups to the top of the rules (will target first, from more specific to the more general).`minLength`: Set the minimum allowed length for each token`maxLength`: Set the maximum allowed length for each token
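As a small example of the length filters listed above, this sketch keeps only tokens between 3 and 15 characters long (the thresholds are arbitrary):

```
length_filtered_tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token") \
    .setMinLength(3) \
    .setMaxLength(15)
```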
###Code
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
tokenizer.extractParamMap()
text = 'Peter Parker (Spiderman) is a nice guy and lives in New York but has no e-mail!'
spark_df = spark.createDataFrame([[text]]).toDF("text")
doc_df = documentAssembler.transform(spark_df)
token_df = tokenizer.fit(doc_df).transform(doc_df)
token_df.show(truncate=100)
token_df.select('token.result').take(1)
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token") \
.setSplitChars(['-']) \
.setContextChars(['?', '!']) \
    .addException("New York")
token_df = tokenizer.fit(doc_df).transform(doc_df)
token_df.select('token.result').take(1)
###Output
_____no_output_____
###Markdown
Regex Tokenizer
###Code
from pyspark.sql.types import StringType
content = "1. T1-T2 DATE**[12/24/13] $1.99 () (10/12), ph+ 90%"
pattern = "\\s+|(?=[-.:;*+,$&%\\[\\]])|(?<=[-.:;*+,$&%\\[\\]])"
df = spark.createDataFrame([content], StringType()).withColumnRenamed("value", "text")
documenter = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetector()\
.setInputCols(['document'])\
.setOutputCol('sentence')
regexTokenizer = RegexTokenizer() \
.setInputCols(["sentence"]) \
.setOutputCol("regexToken") \
.setPattern(pattern) \
.setPositionalMask(False)
docPatternRemoverPipeline = Pipeline().setStages([
documenter,
sentenceDetector,
regexTokenizer
])
result = docPatternRemoverPipeline.fit(df).transform(df)
result.show(10, False)
import pyspark.sql.functions as F
result_df = result.select(F.explode('regexToken.result').alias('regexToken')).toPandas()
result_df
###Output
_____no_output_____
###Markdown
Stacking Spark NLP Annotators in Spark ML Pipeline Spark NLP provides an easy API to integrate with Spark ML Pipelines and all the Spark NLP annotators and transformers can be used within Spark ML Pipelines. So, it’s better to explain Pipeline concept through Spark ML official documentation.What is a Pipeline anyway? In machine learning, it is common to run a sequence of algorithms to process and learn from data. Apache Spark ML represents such a workflow as a Pipeline, which consists of a sequence of PipelineStages (Transformers and Estimators) to be run in a specific order.In simple terms, a pipeline chains multiple Transformers and Estimators together to specify an ML workflow. We use Pipeline to chain multiple Transformers and Estimators together to specify our machine learning workflow.The figure below is for the training time usage of a Pipeline.  A Pipeline is specified as a sequence of stages, and each stage is either a Transformer or an Estimator. These stages are run in order, and the input DataFrame is transformed as it passes through each stage. That is, the data are passed through the fitted pipeline in order. Each stage’s transform() method updates the dataset and passes it to the next stage. With the help of Pipelines, we can ensure that training and test data go through identical feature processing steps.Now let’s see how this can be done in Spark NLP using Annotators and Transformers. Assume that we have the following steps that need to be applied one by one on a data frame.- Split text into sentences- TokenizeAnd here is how we code this pipeline up in Spark NLP.
###Code
from pyspark.ml import Pipeline
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetector()\
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer() \
.setInputCols(["sentences"]) \
.setOutputCol("token")
nlpPipeline = Pipeline(stages=[
documentAssembler,
sentenceDetector,
tokenizer
])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
spark_df = spark.read.text('./sample-sentences-en.txt').toDF('text')
spark_df.show(truncate=False)
result = pipelineModel.transform(spark_df)
result.show(truncate=20)
result.printSchema()
result.select('sentences.result').take(3)
result.select('token').take(3)[2]
###Output
_____no_output_____
###Markdown
Normalizer Removes all dirty characters from text following a regex pattern and transforms words based on a provided dictionary`setCleanupPatterns(patterns)`: Regular expressions list for normalization, defaults [^A-Za-z]`setLowercase(value)`: lowercase tokens, default false`setSlangDictionary(path)`: txt file with delimited words to be transformed into something else
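If you also want to expand slang or abbreviations, `setSlangDictionary` points to a delimited text file mapping each slang form to its replacement; a sketch, where `slangs.txt` is a hypothetical file with comma-separated lines such as `gr8,great`:

```
normalizer_with_slang = Normalizer() \
    .setInputCols(["token"]) \
    .setOutputCol("normalized") \
    .setLowercase(True) \
    .setSlangDictionary("slangs.txt", ",")
```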
###Code
import string
string.punctuation
from sparknlp.base import *
from sparknlp.annotator import *
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
normalizer = Normalizer() \
.setInputCols(["token"]) \
.setOutputCol("normalized")\
.setLowercase(True)\
.setCleanupPatterns(["[^\w\d\s]"]) # remove punctuations (keep alphanumeric chars)
# if we don't set CleanupPatterns, it will only keep alphabet letters ([^A-Za-z])
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
normalizer
])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
pipelineModel.stages
result = pipelineModel.transform(spark_df)
result.show(truncate=20)
result.select('token').take(2)
result.select('normalized.result').take(2)
result.select('normalized').take(2)
###Output
_____no_output_____
###Markdown
 Document Normalizer The DocumentNormalizer is an annotator that can be used after the DocumentAssembler to normalize documents once they have been processed and indexed. It takes as input annotated documents of type Array[AnnotatorType.DOCUMENT] and gives as output annotated documents of type AnnotatorType.DOCUMENT. Parameters are: - inputCol: input column name string which targets a column of type Array(AnnotatorType.DOCUMENT). - outputCol: output column name string which targets a column of type AnnotatorType.DOCUMENT. - action: action string to perform applying regex patterns, i.e. (clean | extract). Default is "clean". - cleanupPatterns: normalization regex patterns whose matches will be removed from the document. Default is "<[^>]*>" (e.g., it removes all HTML tags). - replacement: replacement string to apply when regexes match. Default is " ". - lowercase: whether to convert strings to lowercase. Default is False. - removalPolicy: policy used to remove patterns from the text. Valid policy values are: "all", "pretty_all", "first", "pretty_first". Default is "pretty_all". - encoding: file encoding to apply on normalized documents. Supported encodings are: UTF_8, UTF_16, US_ASCII, ISO-8859-1, UTF-16BE, UTF-16LE. Default is "UTF-8".
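The same pattern-based cleaning works for more than HTML tags; for instance, here is a sketch that strips email addresses instead (the regex and output column name are our own, and only the setters used in the cell below appear):

```
email_cleaner = DocumentNormalizer() \
    .setInputCols("document") \
    .setOutputCol("noEmailDocument") \
    .setAction("clean") \
    .setPatterns(["[\\w.-]+@[\\w-]+\\.[\\w.]+"]) \
    .setReplacement(" ") \
    .setPolicy("pretty_all")
```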
###Code
text = '''
<div id="theworldsgreatest" class='my-right my-hide-small my-wide toptext' style="font-family:'Segoe UI',Arial,sans-serif">
THE WORLD'S LARGEST WEB DEVELOPER SITE
<h1 style="font-size:300%;">THE WORLD'S LARGEST WEB DEVELOPER SITE</h1>
<p style="font-size:160%;">Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum..</p>
</div>
</div>'''
spark_df = spark.createDataFrame([[text]]).toDF("text")
spark_df.show(truncate=False)
documentNormalizer = DocumentNormalizer() \
.setInputCols("document") \
.setOutputCol("normalizedDocument")
documentNormalizer.extractParamMap()
documentAssembler = DocumentAssembler() \
.setInputCol('text') \
.setOutputCol('document')
#default
cleanUpPatterns = ["<[^>]*>"]
documentNormalizer = DocumentNormalizer() \
.setInputCols("document") \
.setOutputCol("normalizedDocument") \
.setAction("clean") \
.setPatterns(cleanUpPatterns) \
.setReplacement(" ") \
.setPolicy("pretty_all") \
.setLowercase(True)
docPatternRemoverPipeline = Pipeline() \
.setStages([
documentAssembler,
documentNormalizer])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = docPatternRemoverPipeline.fit(empty_df)
result = pipelineModel.transform(spark_df)
result.select('normalizedDocument.result').show(truncate=False)
###Output
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|result |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|[ the world's largest web developer site the world's largest web developer site lorem ipsum is simply dummy text of the printing and typesetting industry. lorem ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. it has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. it was popularised in the 1960s with the release of letraset sheets containing lorem ipsum passages, and more recently with desktop publishing software like aldus pagemaker including versions of lorem ipsum..]|
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
###Markdown
for more examples : https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/annotation/english/document-normalizer/document_normalizer_notebook.ipynb Stopwords Cleaner This annotator excludes from a sequence of strings (e.g. the output of a Tokenizer, Normalizer, Lemmatizer, and Stemmer) and drops all the stop words from the input sequences. Functions:`setStopWords`: The words to be filtered out. Array[String]`setCaseSensitive`: Whether to do a case sensitive comparison over the stop words.
###Code
stopwords_cleaner = StopWordsCleaner()\
.setInputCols("token")\
.setOutputCol("cleanTokens")\
.setCaseSensitive(False)\
#.setStopWords(["no", "without"]) (e.g. read a list of words from a txt)
stopwords_cleaner.getStopWords()
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
stopwords_cleaner
])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
spark_df = spark.read.text('./sample-sentences-en.txt').toDF('text')
result = pipelineModel.transform(spark_df)
result.show()
result.select('cleanTokens.result').take(1)
###Output
_____no_output_____
###Markdown
Token Assembler
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetector()\
.setInputCols(['document'])\
.setOutputCol('sentences')
tokenizer = Tokenizer() \
.setInputCols(["sentences"]) \
.setOutputCol("token")
normalizer = Normalizer() \
.setInputCols(["token"]) \
.setOutputCol("normalized")\
    .setLowercase(False)
stopwords_cleaner = StopWordsCleaner()\
.setInputCols("normalized")\
.setOutputCol("cleanTokens")\
      .setCaseSensitive(False)
tokenassembler = TokenAssembler()\
.setInputCols(["sentences", "cleanTokens"]) \
.setOutputCol("clean_text")
nlpPipeline = Pipeline(stages=[
documentAssembler,
sentenceDetector,
tokenizer,
normalizer,
stopwords_cleaner,
tokenassembler
])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(spark_df)
result.show()
# if we use TokenAssembler().setPreservePosition(True), the original borders will be preserved (dropped & unwanted chars will be replaced by spaces)
result.select('clean_text').take(1)
result.select('text', F.explode('clean_text.result').alias('clean_text')).show(truncate=False)
result.select('text', F.explode('clean_text.result').alias('clean_text')).toPandas()
import pyspark.sql.functions as F
result.withColumn(
"tmp",
F.explode("clean_text")) \
.select("tmp.*").select("begin","end","result","metadata.sentence").show(truncate = False)
# if we hadn't used Sentence Detector, this would be what we got. (tokenizer gets document instead of sentences column)
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
tokenassembler = TokenAssembler()\
.setInputCols(["document", "cleanTokens"]) \
.setOutputCol("clean_text")
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
normalizer,
stopwords_cleaner,
tokenassembler
])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(spark_df)
result.select('text', 'clean_text.result').show(truncate=False)
result.withColumn(
"tmp",
F.explode("clean_text")) \
.select("tmp.*").select("begin","end","result","metadata.sentence").show(truncate = False)
###Output
+-----+---+---------------------------------------------------+--------+
|begin|end|result |sentence|
+-----+---+---------------------------------------------------+--------+
|0 |16 |Peter good person |0 |
|0 |22 |life Russia interesting |0 |
|0 |44 |John Peter brothers However dont support much |0 |
|0 |50 |Lucas Nogal Dunbercker longer happy good car though|0 |
|0 |43 |Europe culture rich huge churches big houses |0 |
+-----+---+---------------------------------------------------+--------+
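###Markdown
If later annotators need token-level input from the assembled clean text, the clean text has to be tokenized again (see the note below). A minimal sketch reusing the stages defined above (the `clean_token` column name is ours):
###Code
# re-tokenize the assembled clean_text so downstream annotators can consume tokens again
tokenizer_clean = Tokenizer() \
    .setInputCols(["clean_text"]) \
    .setOutputCol("clean_token")

nlpPipeline_retokenized = Pipeline(stages=[
    documentAssembler,
    sentenceDetector,
    tokenizer,
    normalizer,
    stopwords_cleaner,
    tokenassembler,
    tokenizer_clean
])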
###Markdown
**IMPORTANT NOTE:** If you have other steps & annotators in your pipeline that need to use the tokens from the cleaned text (assembled tokens), you will need to tokenize the processed text again, as the original text has probably changed completely. Stemmer Returns hard-stems out of words with the objective of retrieving the meaningful part of the word
###Code
stemmer = Stemmer() \
.setInputCols(["token"]) \
.setOutputCol("stem")
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
stemmer
])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(spark_df)
result.show()
result.select('stem.result').show(truncate=False)
import pyspark.sql.functions as F
result_df = result.select(F.explode(F.arrays_zip('token.result', 'stem.result')).alias("cols")) \
.select(F.expr("cols['0']").alias("token"),
F.expr("cols['1']").alias("stem")).toPandas()
result_df.head(10)
###Output
_____no_output_____
###Markdown
Lemmatizer Retrieves lemmas out of words with the objective of returning a base dictionary word
###Code
!wget -q https://raw.githubusercontent.com/mahavivo/vocabulary/master/lemmas/AntBNC_lemmas_ver_001.txt
lemmatizer = Lemmatizer() \
.setInputCols(["token"]) \
.setOutputCol("lemma") \
.setDictionary("./AntBNC_lemmas_ver_001.txt", value_delimiter ="\t", key_delimiter = "->")
lemmatizer.extractParamMap()
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
stemmer = Stemmer() \
.setInputCols(["token"]) \
.setOutputCol("stem")
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
stemmer,
lemmatizer
])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(spark_df)
result.show()
result.select('lemma.result').show(truncate=False)
result_df = result.select(F.explode(F.arrays_zip('token.result', 'stem.result', 'lemma.result')).alias("cols")) \
.select(F.expr("cols['0']").alias("token"),
F.expr("cols['1']").alias("stem"),
F.expr("cols['2']").alias("lemma")).toPandas()
result_df.head(10)
###Output
_____no_output_____
###Markdown
NGram Generator The NGramGenerator annotator takes as input a sequence of strings (e.g. the output of a `Tokenizer`, `Normalizer`, `Stemmer`, `Lemmatizer`, or `StopWordsCleaner`). The parameter n determines the number of terms in each n-gram. The output consists of a sequence of n-grams, where each n-gram is represented by a space-delimited string of n consecutive words with annotatorType `CHUNK`, the same as the Chunker annotator. Functions: `setN:` number of elements per n-gram (>=1) `setEnableCumulative:` whether to calculate just the actual n-grams or all n-grams from 1 through n `setDelimiter:` glue character used to join the tokens
###Code
ngrams_cum = NGramGenerator() \
.setInputCols(["token"]) \
.setOutputCol("ngrams") \
.setN(3) \
.setEnableCumulative(True)\
.setDelimiter("_") # Default is space
 # .setN(3) with .setEnableCumulative(True) produces unigrams, bigrams and trigrams.
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
ngrams_cum
])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(spark_df)
result.select('ngrams.result').show(truncate=200)
ngrams_nonCum = NGramGenerator() \
.setInputCols(["token"]) \
.setOutputCol("ngrams_v2") \
.setN(3) \
.setEnableCumulative(False)\
.setDelimiter("_") # Default is space
ngrams_nonCum.transform(result).select('ngrams_v2.result').show(truncate=200)
###Output
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| result|
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| [Peter_is_a, is_a_very, a_very_good, very_good_person, good_person_.]|
| [My_life_in, life_in_Russia, in_Russia_is, Russia_is_very, is_very_interesting, very_interesting_.]|
|[John_and_Peter, and_Peter_are, Peter_are_brothers, are_brothers_., brothers_._However, ._However_they, However_they_don't, they_don't_support, don't_support_each, support_each_other, each_other_th...|
| [Lucas_Nogal_Dunbercker, Nogal_Dunbercker_is, Dunbercker_is_no, is_no_longer, no_longer_happy, longer_happy_., happy_._He, ._He_has, He_has_a, has_a_good, a_good_car, good_car_though, car_though_.]|
|[Europe_is_very, is_very_culture, very_culture_rich, culture_rich_., rich_._There, ._There_are, There_are_huge, are_huge_churches, huge_churches_!, churches_!_and, !_and_big, and_big_houses, big_ho...|
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
###Markdown
TextMatcher Annotator to match entire phrases (by token) provided in a file against a Document. Functions: `setEntities(path, format, options)`: provides a file with phrases to match. Default: looks up path in configuration. `path`: a path to a file that contains the entities in the specified format. `readAs`: the format of the file, can be one of {ReadAs.LINE_BY_LINE, ReadAs.SPARK_DATASET}. Defaults to LINE_BY_LINE. `options`: a map of additional parameters. Defaults to {“format”: “text”}. `entityValue`: value for the entity metadata field to indicate which chunk comes from which TextMatcher when there are multiple TextMatchers. `mergeOverlapping`: whether to merge overlapping matched chunks. Defaults to false. `caseSensitive`: whether the match is case sensitive. Defaults to true.
###Code
entity_extractor = TextMatcher() \
.setInputCols(["document",'token'])\
.setOutputCol("matched_entities")\
entity_extractor.extractParamMap()
! wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/tutorials/Certification_Trainings/Public/data/news_category_train.csv
news_df = spark.read \
.option("header", True) \
.csv("news_category_train.csv")
news_df.show(5, truncate=50)
# write the target entities to txt file
entities = ['Wall Street', 'USD', 'stock', 'NYSE']
with open ('financial_entities.txt', 'w') as f:
for i in entities:
f.write(i+'\n')
entities = ['soccer', 'world cup', 'Messi', 'FC Barcelona']
with open ('sport_entities.txt', 'w') as f:
for i in entities:
f.write(i+'\n')
documentAssembler = DocumentAssembler()\
.setInputCol("description")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
financial_entity_extractor = TextMatcher() \
.setInputCols(["document",'token'])\
.setOutputCol("financial_entities")\
.setEntities("financial_entities.txt")\
.setCaseSensitive(False)\
.setEntityValue('financial_entity')
sport_entity_extractor = TextMatcher() \
.setInputCols(["document",'token'])\
.setOutputCol("sport_entities")\
.setEntities("sport_entities.txt")\
.setCaseSensitive(False)\
.setEntityValue('sport_entity')
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
financial_entity_extractor,
sport_entity_extractor
])
empty_df = spark.createDataFrame([['']]).toDF("description")
pipelineModel = nlpPipeline.fit(empty_df)
result = pipelineModel.transform(news_df)
result.select('financial_entities.result','sport_entities.result').take(3)
result.select('description','financial_entities.result','sport_entities.result')\
.toDF('text','financial_matches','sport_matches').filter((F.size('financial_matches')>1) | (F.size('sport_matches')>1))\
.show(truncate=70)
result_df = result.select(F.explode(F.arrays_zip('financial_entities.result', 'financial_entities.begin', 'financial_entities.end')).alias("cols")) \
                       .select(F.expr("cols['0']").alias("financial_entities"),
F.expr("cols['1']").alias("begin"),
F.expr("cols['2']").alias("end")).toPandas()
result_df.head(10)
###Output
_____no_output_____
###Markdown
RegexMatcher
###Code
! wget -q https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/resources/en/pubmed/pubmed-sample.csv
pubMedDF = spark.read\
.option("header", "true")\
.csv("./pubmed-sample.csv")\
.filter("AB IS NOT null")\
.withColumnRenamed("AB", "text")\
.drop("TI")
pubMedDF.show(truncate=50)
rules = '''
renal\s\w+, started with 'renal'
cardiac\s\w+, started with 'cardiac'
\w*ly\b, ending with 'ly'
\S*\d+\S*, match any word that contains numbers
(\d+).?(\d*)\s*(mg|ml|g), match medication metrics
'''
with open('regex_rules.txt', 'w') as f:
f.write(rules)
RegexMatcher().extractParamMap()
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
regex_matcher = RegexMatcher()\
.setInputCols('document')\
.setStrategy("MATCH_ALL")\
.setOutputCol("regex_matches")\
.setExternalRules(path='./regex_rules.txt', delimiter=',')
nlpPipeline = Pipeline(stages=[
documentAssembler,
regex_matcher
])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
match_df = pipelineModel.transform(pubMedDF)
match_df.select('regex_matches.result').take(3)
match_df.select('text','regex_matches.result')\
.toDF('text','matches').filter(F.size('matches')>1)\
.show(truncate=70)
###Output
+----------------------------------------------------------------------+----------------------------------------------------------------------+
| text| matches|
+----------------------------------------------------------------------+----------------------------------------------------------------------+
|The human KCNJ9 (Kir 3.3, GIRK3) is a member of the G-protein-activ...|[inwardly, family, spansapproximately, byapproximately, approximate...|
|BACKGROUND: At present, it is one of the most important issues for ...|[previously, previously, intravenously, previously, 25, mg/m(2), 1,...|
|OBJECTIVE: To investigate the relationship between preoperative atr...|[renal failure, cardiac surgery, cardiac surgery, cardiac surgical,...|
|Combined EEG/fMRI recording has been used to localize the generator...|[normally, significantly, effectively, analy, only, considerably, 2...|
|Statistical analysis of neuroimages is commonly approached with int...|[analy, commonly, overly, normally, thatsuccessfully, recently, ana...|
|The synthetic DOX-LNA conjugate was characterized by proton nuclear...| [wasanaly, substantially]|
|Our objective was to compare three different methods of blood press...|[daily, only, Conversely, Hourly, hourly, Hourly, hourly, hourly, h...|
|We conducted a phase II study to assess the efficacy and tolerabili...|[analy, respectively, generally, 5-fluorouracil, (5-FU)-, 5-FU-base...|
|"Monomeric sarcosine oxidase (MSOX) is a flavoenzyme that catalyzes...|[cataly, methylgly, gly, ethylgly, dimethylgly, spectrally, practic...|
|We presented the tachinid fly Exorista japonica with moving host mo...| [fly, fly, fly, fly, fly]|
|The literature dealing with the water conducting properties of sapw...| [generally, mathematically, especially]|
|A novel approach to synthesize chitosan-O-isopropyl-5'-O-d4T monoph...|[efficiently, poly, chitosan-O-isopropyl-5'-O-d4T, Chitosan-d4T, 1....|
|An HPLC-ESI-MS-MS method has been developed for the quantitative de...|[chromatographically, respectively, successfully, C18, (n=5), 95.0%...|
|The localizing and lateralizing values of eye and head ictal deviat...| [early, early]|
|OBJECTIVE: To evaluate the effectiveness and acceptability of expec...|[weekly, respectively, theanaly, 2006, 2007,, 2, 66, 1), 30patients...|
|We report the results of a screen for genetic association with urin...|[poly, threepoly, significantly, analy, actually, anextremely, only...|
|Intraparenchymal pericatheter cyst is rarely reported. Obstruction ...| [rarely, possibly, unusually, Early]|
|PURPOSE: To compare the effectiveness, potential advantages and com...|[analy, comparatively, wassignificantly, respectively, a7-year, 155...|
|We have demonstrated a new type of all-optical 2 x 2 switch by usin...|[approximately, fully, approximately, approximately, approximately,...|
|Physalis peruviana (PP) is a widely used medicinal herb for treatin...|[widely, (20,, 40,, 60,, 80, 95%, 100, 95%, (82.3%), onFeCl2-ascorb...|
+----------------------------------------------------------------------+----------------------------------------------------------------------+
only showing top 20 rows
###Markdown
MultiDateMatcher Extracts exact dates and normalizes relative date-time phrases. The default anchor date is the date the code is run.
###Code
MultiDateMatcher().extractParamMap()
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
date_matcher = MultiDateMatcher() \
.setInputCols('document') \
.setOutputCol("date") \
.setDateFormat("yyyy/MM/dd")
date_pipeline = PipelineModel(stages=[
documentAssembler,
date_matcher
])
sample_df = spark.createDataFrame([['I saw him yesterday and he told me that he will visit us next week']]).toDF("text")
result = date_pipeline.transform(sample_df)
result.select('date.result').show(truncate=False)
###Output
+------------------------+
|result |
+------------------------+
|[2021/10/14, 2021/10/06]|
+------------------------+
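###Markdown
The same pipeline also picks up multiple explicit, absolute dates in a single sentence. A quick sketch (the sample sentence is ours):
###Code
explicit_df = spark.createDataFrame(
    [['I met her on 04/22/2019 and we signed the contract on 2019/05/10.']]
).toDF("text")

date_pipeline.transform(explicit_df).select('date.result').show(truncate=False)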
###Markdown
Text Cleaning with UDF
###Code
text = '<h1 style="color: #5e9ca0;">Have a great <span style="color: #2b2301;">birth</span> day!</h1>'
text_df = spark.createDataFrame([[text]]).toDF("text")
import re
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType, IntegerType
clean_text = lambda s: re.sub(r'<[^>]*>', '', s)
text_df.withColumn('cleaned', udf(clean_text, StringType())('text')).select('text','cleaned').show(truncate= False)
find_not_alnum_count = lambda s: len([i for i in s if not i.isalnum() and i!=' '])
find_not_alnum_count("it's your birth day!")
text = '<h1 style="color: #5e9ca0;">Have a great <span style="color: #2b2301;">birth</span> day!</h1>'
find_not_alnum_count(text)
text_df.withColumn('cleaned', udf(find_not_alnum_count, IntegerType())('text')).select('text','cleaned').show(truncate= False)
###Output
+----------------------------------------------------------------------------------------------+-------+
|text |cleaned|
+----------------------------------------------------------------------------------------------+-------+
|<h1 style="color: #5e9ca0;">Have a great <span style="color: #2b2301;">birth</span> day!</h1>|23 |
+----------------------------------------------------------------------------------------------+-------+
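###Markdown
The same tag-stripping can also be done without a Python UDF by using Spark's built-in regexp_replace, which stays inside the JVM and usually runs faster. A minimal sketch:
###Code
# strip HTML tags with a native Spark SQL function instead of a Python UDF
text_df.withColumn('cleaned', F.regexp_replace('text', r'<[^>]*>', ''))\
    .select('text', 'cleaned').show(truncate=False)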
###Markdown
Finisher ***Finisher:*** Once we have our NLP pipeline ready to go, we might want to use our annotation results somewhere else where they are easy to use. The Finisher outputs annotation values as strings. If we just want the desired output column in the final dataframe, we can use Finisher to drop previous stages in the final output and get the `result` from the process. This is very handy when you want to use the output of a Spark NLP annotator as an input to another Spark ML transformer. Settable parameters are: `setInputCols()` `setOutputCols()` `setCleanAnnotations(True)` -> whether to remove intermediate annotations `setValueSplitSymbol(“”)` -> character used to split values within an annotation `setAnnotationSplitSymbol(“@”)` -> character used to split values between annotations `setIncludeMetadata(False)` -> whether to include metadata keys. Sometimes useful in some annotations. `setOutputAsArray(False)` -> whether to output as Array. Useful as input for other Spark transformers.
###Code
finisher = Finisher() \
.setInputCols(["regex_matches"]) \
.setIncludeMetadata(False) # set to False to remove metadata
nlpPipeline = Pipeline(stages=[
documentAssembler,
regex_matcher,
finisher
])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
match_df = pipelineModel.transform(pubMedDF)
match_df.show(truncate = 50)
match_df.printSchema()
match_df.filter(F.size('finished_regex_matches')>2).show(truncate = 50)
###Output
+--------------------------------------------------+--------------------------------------------------+
| text| finished_regex_matches|
+--------------------------------------------------+--------------------------------------------------+
|The human KCNJ9 (Kir 3.3, GIRK3) is a member of...|[inwardly, family, spansapproximately, byapprox...|
|BACKGROUND: At present, it is one of the most i...|[previously, previously, intravenously, previou...|
|OBJECTIVE: To investigate the relationship betw...|[renal failure, cardiac surgery, cardiac surger...|
|Combined EEG/fMRI recording has been used to lo...|[normally, significantly, effectively, analy, o...|
|Statistical analysis of neuroimages is commonly...|[analy, commonly, overly, normally, thatsuccess...|
|Our objective was to compare three different me...|[daily, only, Conversely, Hourly, hourly, Hourl...|
|We conducted a phase II study to assess the eff...|[analy, respectively, generally, 5-fluorouracil...|
|"Monomeric sarcosine oxidase (MSOX) is a flavoe...|[cataly, methylgly, gly, ethylgly, dimethylgly,...|
|We presented the tachinid fly Exorista japonica...| [fly, fly, fly, fly, fly]|
|The literature dealing with the water conductin...| [generally, mathematically, especially]|
|A novel approach to synthesize chitosan-O-isopr...|[efficiently, poly, chitosan-O-isopropyl-5'-O-d...|
|An HPLC-ESI-MS-MS method has been developed for...|[chromatographically, respectively, successfull...|
|OBJECTIVE: To evaluate the effectiveness and ac...|[weekly, respectively, theanaly, 2006, 2007,, 2...|
|We report the results of a screen for genetic a...|[poly, threepoly, significantly, analy, actuall...|
|Intraparenchymal pericatheter cyst is rarely re...| [rarely, possibly, unusually, Early]|
|PURPOSE: To compare the effectiveness, potentia...|[analy, comparatively, wassignificantly, respec...|
|We have demonstrated a new type of all-optical ...|[approximately, fully, approximately, approxima...|
|Physalis peruviana (PP) is a widely used medici...|[widely, (20,, 40,, 60,, 80, 95%, 100, 95%, (82...|
|We report the discovery of a series of substitu...|[highly, potentially, highly, respectively, tub...|
|The purpose of this study was to identify and c...|[family, Nearly, only, 43, 10, 44%, 32%, 64%, 4...|
+--------------------------------------------------+--------------------------------------------------+
only showing top 20 rows
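###Markdown
The other Finisher parameters listed above can be set the same way. A small sketch, assuming we want an array-valued output column and want to keep the intermediate annotation columns:
###Code
finisher_array = Finisher() \
    .setInputCols(["regex_matches"]) \
    .setOutputCols(["finished_matches"]) \
    .setOutputAsArray(True) \
    .setCleanAnnotations(False) \
    .setIncludeMetadata(False)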
###Markdown
LightPipeline https://medium.com/spark-nlp/spark-nlp-101-lightpipeline-a544e93f20f1 LightPipelines are Spark NLP specific pipelines, equivalent to a Spark ML Pipeline, but meant to deal with smaller amounts of data. They’re useful when working with small datasets, debugging results, or when running either training or prediction from an API that serves one-off requests. Spark NLP LightPipelines are Spark ML pipelines converted into a single-machine, multi-threaded task, becoming more than 10x faster for smaller amounts of data (small is relative, but 50k sentences is roughly a good maximum). To use them, we simply plug in a trained (fitted) pipeline and then annotate plain text. We don't even need to convert the input text to a DataFrame in order to feed it into a pipeline that accepts a DataFrame as input in the first place. This feature is quite useful when it comes to getting a prediction for a few lines of text from a trained ML model. **It is nearly 10x faster than using a Spark ML Pipeline.** `LightPipeline(someTrainedPipeline).annotate(someStringOrArray)`
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
tokenizer = Tokenizer() \
.setInputCols(["document"]) \
.setOutputCol("token")
stemmer = Stemmer() \
.setInputCols(["token"]) \
.setOutputCol("stem")
nlpPipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
stemmer,
lemmatizer
])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipelineModel = nlpPipeline.fit(empty_df)
pipelineModel.transform(spark_df).show()
from sparknlp.base import LightPipeline
light_model = LightPipeline(pipelineModel)
light_result = light_model.annotate("John and Peter are brothers. However they don't support each other that much.")
light_result.keys()
list(zip(light_result['token'], light_result['stem'], light_result['lemma']))
light_result = light_model.fullAnnotate("John and Peter are brothers. However they don't support each other that much.")
light_result
text_list= ["How did serfdom develop in and then leave Russia ?",
"There will be some exciting breakthroughs in NLP this year."]
light_model.annotate(text_list)
###Output
_____no_output_____ |
assignments/PythonBasics/Assignment_20.ipynb | ###Markdown
1. Set the variable test1 to the string 'This is a test of the emergency text system,' and save test1 to a file named test.txt.
###Code
test1 = 'This is a test of the emergency text system,'
my_file = open('test.txt', 'w')
my_file.write(test1)
my_file.close()
###Output
_____no_output_____
###Markdown
2. Read the contents of the file test.txt into the variable test2. Is there a difference between test1 and test2?
###Code
my_file = open('test.txt', 'r')
test2 = my_file.readline()
print(test2) # just for reference
my_file.close()
if test1 == test2:
print("Both test1 and test2 are same")
###Output
This is a test of the emergency text system,
Both test1 and test2 are same
###Markdown
3. Create a CSV file called books.csv by using these lines:
title,author,year
The Weirdstone of Brisingamen,Alan Garner,1960
Perdido Street Station,China Miéville,2000
Thud!,Terry Pratchett,2005
The Spellman Files,Lisa Lutz,2007
Small Gods,Terry Pratchett,1992
###Code
import csv
rows =[ ['title','author','year'],
['The Weirdstone of Brisingamen','Alan Garner',1960],
['Perdido Street Station','China Miéville',2000],
['Thud!','Terry Pratchett',2005],
['The Spellman Files','Lisa Lutz',2007],
['Small Gods','Terry Pratchett',1992]]
with open('books.csv','w',newline='') as file:
writer = csv.writer(file)
writer.writerows(rows)
with open('books.csv','r',newline='') as file:
for line in file.readlines():
print(line)
###Output
title,author,year
The Weirdstone of Brisingamen,Alan Garner,1960
Perdido Street Station,China Miéville,2000
Thud!,Terry Pratchett,2005
The Spellman Files,Lisa Lutz,2007
Small Gods,Terry Pratchett,1992
###Markdown
4. Use the sqlite3 module to create a SQLite database called books.db, and a table called books with these fields: title (text), author (text), and year (integer).
###Code
import sqlite3
conn = sqlite3.connect('books.db')
c = conn.cursor()
c.execute('DROP TABLE IF EXISTS books')
c.execute('create table books(title varchar(20),author varchar(20), year int)')
conn.commit()
###Output
_____no_output_____
###Markdown
5. Read books.csv and insert its data into the book table.
###Code
import pandas as pd
read_books = pd.read_csv('books.csv',encoding='unicode_escape')
read_books.to_sql('books', conn, if_exists='append', index = False)
###Output
_____no_output_____
###Markdown
6. Select and print the title column from the book table in alphabetical order.
###Code
c.execute('select title from books order by title asc')
print(c.fetchall())
###Output
[('Perdido Street Station',), ('Small Gods',), ('The Spellman Files',), ('The Weirdstone of Brisingamen',), ('Thud!',)]
###Markdown
7. From the book table, select and print all columns in the order of publication.
###Code
c.execute('select title, author,year from books order by year')
#print(c.fetchall())
df = pd.DataFrame(c.fetchall(), columns=['title','author','year'])
df
###Output
_____no_output_____
###Markdown
8. Use the sqlalchemy module to connect to the sqlite3 database books.db that you just made in exercise 6.
###Code
import sqlalchemy
engine = sqlalchemy.create_engine("sqlite:///books.db")
rows = engine.execute('select * from books')
for i in rows:
print(i)
###Output
('The Weirdstone of Brisingamen', 'Alan Garner', 1960)
('Perdido Street Station', 'China Miéville', 2000)
('Thud!', 'Terry Pratchett', 2005)
('The Spellman Files', 'Lisa Lutz', 2007)
('Small Gods', 'Terry Pratchett', 1992)
###Markdown
9. Install the Redis server and the Python redis library (pip install redis) on your computer. Create a Redis hash called test with the fields count (1) and name ('Fester Bestertester'). Print all the fields for test.
###Code
# pip install redis
import redis
conn = redis.Redis()
conn.delete('test')
conn.hmset('test', {'count': 1, 'name': 'Fester Bestertester'})
conn.hgetall('test')
###Output
_____no_output_____
###Markdown
10. Increment the count field of test and print it.
###Code
conn.hincrby('test','count', 3)
###Output
_____no_output_____ |
python/optimization/pairwise_comparison/pairwise_comparison_blog.ipynb | ###Markdown
Introduction I started as a Data Scientist at a startup, A42 Labs, in November and hit the ground running FAST in the world of data science consulting. I was quickly onboarded and rolled into a client project as the lead Data Scientist within a week of starting. The experience has been highly challenging and rewarding, with exponential improvements in my Python and SQL code. Perhaps the most beneficial aspect of this first client project has been the need to optimize code to fit within the parameters of our local programming requirements. This post highlights the approaches used to eliminate code loops, and prepare client data for row-wise regex string comparisons. The Problem The client in question (Client A) is a large, Fortune-ranked software company. There are plugins that can be utilized with their software that are developed either internally or by 3rd party partners. These plugins represent a large revenue stream for Client A, engage thousands of partners who develop and sell these plugins, hundreds of thousands of customers who purchase and use them, and cover a wide range of functionality and industries. Internal teams at Client A would like to leverage this data to report utilization and revenue back to partners, and build machine learning models to make plugin recommendations to customers. However, through many iterations of legacy data systems, historical data quality issues have occurred. Specifically, links between the plugin data and the partners who own/create the plugins have been lost, and no unique key exists between the data. Working with client-side SMEs, A42 Labs built a rule-based algorithm to match nearly 13,000 plugins with nearly 12,000 potential partners using regex string matching between the plugin information and the partner information where available. Due to security constraints from Client A, this process had to be conducted on a local environment, specifically a MacBook Pro with 16GB of RAM. This led to initial code iterations that could take more than a day to run. Since this was a highly iterative process working with SMEs to validate and update the algorithm matching rules, improving code efficiency was a priority when I started on the project to eliminate bottlenecks. The Solution The rule-based algorithm was built to match plugins to their partners based on information found in the plugin and partner registration data. This repeatable, semi-automated process for matching plugins to partners relied on automatically identifying a set of candidate partner matches using regular expression string searching. Since Python Pandas is designed for vectorized manipulations, it's important to avoid the use of loops in the code logic. Pandas reads all data into memory, so looping over rows is very inefficient. The A42 Labs team decided to leverage the apply method to compare plugin and partner string information in a row-wise, vectorized fashion. An additional benefit of using this method is that, once the solution is sent to the client-side production team, it can more easily be converted to a parallel process, such as using PySpark, since row-wise operations can be sharded across parallel processes. To execute this logic, we built a pairwise comparison table of all plugins matched with all possible partners. This process alone was eating A LOT of computation time by the time I joined the project. Here's a breakdown of how I optimized the creation of the pairwise comparison table. 
Imports Since this project used proprietary data from Client A, we'll illustrate the process using some fake data. [Faker](https://faker.readthedocs.io/en/master/) is an awesome Python library for creating fake profile data! I found out about Faker from Khuyen Tran, an amazing data science blogger and programmer. Check out all her great work [here](https://khuyentran1401.github.io/).
###Code
import numpy as np
import pandas as pd
import time
from faker import Faker
fake = Faker()
###Output
_____no_output_____
###Markdown
Create fake data
###Code
profiles = [fake.profile() for i in range(1000)]
profiles = pd.DataFrame(profiles)
###Output
_____no_output_____
###Markdown
Split data We'll create 1000 fake profiles and then "break" the data into two dataframes to simulate the disjointed data from our project with Client A. We will also take 95% of the records from dataframe 2 to simulate the unequal sizes of our original data from Client A.
###Code
#Split dataframe into 2 dataframes
data1 = profiles.iloc[:,[0,1,2,3,4,5,6,7]]
data2 = profiles.iloc[:,[8,9,10,11,12]]
data2 = data2.sample(frac = 0.95)
data1.head()
data2.head()
###Output
_____no_output_____
###Markdown
Now that we've created our fake data sources, let's see how large our pairwise comparison table will be:
###Code
# Number of plugins
print(data1.shape)
# Number of partners
print(data2.shape)
print(data1.shape[0] * data2.shape[0])
###Output
(1000, 8)
(950, 5)
950000
###Markdown
We'll be creating a pairwise comparison table of every row from dataset 1 matched with every row of dataset 2. This is 950,000 records. For context, our problem with Client A required comparing nearly 13,000 rows of plugins with nearly 12,000 potential partners - that's a pairwise comparison table of nearly 155 million! Method 1: Append When I first started at A42 Labs in November and started taking over the code base for this project, the existing code relied on nested for-loops, itterows, and append to create the pairwise comparison table, and output the table to a csv in chunks. The itterows and append methods, and the repeated IO operations lead to succesful yet computational inefficent process. Given the size of the data for Client A, this took over 18 hours to run; a process that was not sustainable for the required iterations. On our small size of fake data below this method takes over 5 minutes.
###Code
%%time
cols = ['job',
'company',
'ssn',
'residence',
'current_location',
'blood_group',
'website',
'username',
'name',
'sex',
'address',
'mail',
'birthdate']
pairwise_lst = []
rowcnt = 0
# append rows in batches of 15000
for i_out, row_out in data1.iterrows():
for i_in, row_in in data2.iterrows():
pairwise_lst.append(row_out.append(row_in))
rowcnt += 1
if rowcnt % 15000 == 0:
pairwise = pd.DataFrame(pairwise_lst,columns = cols)
pairwise.to_csv('pairwise_comparison.csv', mode='a', header=False, index=False)
pairwise_lst = []
pairwise = pd.read_csv('pairwise_comparison.csv', header=None, names = cols)
print(pairwise.shape)
pairwise.head()
###Output
(4170000, 13)
CPU times: user 5min 58s, sys: 1.84 s, total: 6min
Wall time: 6min
###Markdown
Method 2: Merge The more I understood the project and data requirements, the more I realized that the proposed pairwise comparison table was simply a cartesian join of the two disjoint datasets. Assigning both datasets an arbitrary key value of 1, we can merge using an outer join on that key.
###Code
%%time
pairwise_merge = data1.assign(key=1).merge(data2.assign(key=1), how='outer', on='key')
print(pairwise_merge.shape)
pairwise_merge.head()
###Output
(950000, 14)
CPU times: user 102 ms, sys: 14.8 ms, total: 116 ms
Wall time: 116 ms
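###Markdown
As a side note: on pandas 1.2 or newer, the same cartesian product can be written without the dummy key by using a cross join. A minimal sketch, assuming a recent pandas version:
###Code
# cross join: every row of data1 paired with every row of data2 (pandas >= 1.2)
pairwise_cross = data1.merge(data2, how='cross')
print(pairwise_cross.shape)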
###Markdown
Our operation has now improved from nearly 6 minutes to 145 milliseconds, a roughly 2,500x improvement! Method 3: Concatenation Another method I found thanks to every programmer's favorite resource, StackOverflow, was to concatenate every row from one dataset onto the other dataset. This replicates the rows of the first dataframe by the length of the second dataframe, and then concatenates the data along their columns.
###Code
%%time
def pairwise_merge_concat(*args):
if len(args) == 2:
if len(args[0]) > len(args[1]):
df_1 = pd.concat([args[0]] * len(args[1]), ignore_index=True).sort_values(by=[args[0].columns[0]]).reset_index(drop=True)
df_2 = pd.concat([args[1]] * len(args[0]), ignore_index=True)
            return pd.concat([df_1, df_2], axis=1)
pairwise_concat = pairwise_merge_concat(data1, data2)
print(pairwise_concat.shape)
pairwise_concat.head()
###Output
(950000, 13)
CPU times: user 879 ms, sys: 47 ms, total: 926 ms
Wall time: 919 ms
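###Markdown
With the pairwise table built, the row-wise regex comparison described in The Solution can be expressed with apply. A toy sketch on the fake data (the matching rule here is made up and only stands in for the real client rules):
###Code
import re

def surname_in_company(row):
    # hypothetical rule: flag a candidate match if the profile's surname appears in the company name
    surname = row['name'].split()[-1]
    return bool(re.search(re.escape(surname), row['company'], flags=re.IGNORECASE))

pairwise_merge['candidate_match'] = pairwise_merge.apply(surname_in_company, axis=1)
print(pairwise_merge['candidate_match'].sum())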
|
notebooks/00A Intro PyTorch.ipynb | ###Markdown
Let's train a very simple non-linear multivariate regression model using PyTorch.
###Code
import numpy as np
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torch.autograd import Variable
###Output
_____no_output_____
###Markdown
Create a simple PyTorch Module. This Module models the function $y = M_2 (M_1 x + b_1) + b_2$ and we want to minimize $L = \| y - y_{observed} \|^2 + \lambda_1 \|M_1\|_F^2 + \lambda_2 \|M_2\|_F^2$, where $x$ and $y$ are vectors and we're taking the (squared) Frobenius norm of our weight matrices. Note that $M_1$ and $M_2$ aren't necessarily square. In fact, if we set the shapes of $M_1$ and $M_2$ to be small we can try to "squeeze" the data into a smaller space and effectively build in dimensionality reduction. This whole model is slightly more complicated than vanilla linear regression, and is now something like quadratic (because of the two matrix multiplies) vector regression (because we're predicting a vector outcome, not a scalar one). We can split the above model into three steps, which (1) initialize parameters, (2) compute a prediction, and (3) compute the loss. - Declare that you'll be optimizing two linear functions. This saves space for $M_1$, $M_2$, $b_1$ and $b_2$, but not $x$ or $y$. Putting it in the `__init__` function of an `nn.Module` is special: PyTorch will remember that these parameters are optimizable. - `forward` tells us how to combine our parameters to make a prediction $y$. - `loss` tells us how to compare our prediction vector to our observed vector, plus how to apply our regularizer.
###Code
class Bottleneck(nn.Module):
def __init__(self, n_in_cols, n_out_cols, n_hidden=1):
super(Bottleneck, self).__init__()
        # n_in_cols -> n_hidden (the bottleneck)
        self.lin1 = nn.Linear(n_in_cols, n_hidden)
        # n_hidden -> n_out_cols
self.lin2 = nn.Linear(n_hidden, n_out_cols)
def forward(self, x):
# x is a minibatch of rows of our features
# x is of shape (batch_size, 9)
hidden = self.lin1(x)
# y is a minibatch of our predictions
y = self.lin2(hidden)
return y
def loss(self, prediction, target, lam1=1e-3, lam2=1e-3):
# This is just the mean squared error
loss_mse = ((prediction - target)**2.0).sum()
# This computes our Frobenius norm over both matrices
# Note that we can access the Linear model's variables
# directly if we'd like. No tricks here!
loss_reg_m1 = (self.lin1.weight**2.0 * lam1).sum()
loss_reg_m2 = (self.lin2.weight**2.0 * lam2).sum()
loss = loss_mse + loss_reg_m1 + loss_reg_m2
return loss
###Output
_____no_output_____
###Markdown
Let's make up some fake data to fit. Annoyingly, it has to be `float32` or `int64`. $Y = X M + \text{noise}$
###Code
X = np.random.normal(size=(2000, 9)).astype(np.float32)
Y = np.random.normal(size=(2000, 4)) + np.dot(X, np.random.normal(size=(9, 4)))
Y = Y.astype(np.float32)
X.shape, Y.shape
from matplotlib import pyplot as plt
plt.scatter(X[:, 0], Y[:, 1], lw=0.0, s=1.0)
X.shape, Y.shape
###Output
_____no_output_____
###Markdown
Initialize the model. Note that we'll also initialize the "optimizer". Check out [this link](http://ruder.io/optimizing-gradient-descent/) to learn more about different optimizers. For now, `Adam` is a good choice.
###Code
model = Bottleneck(9, 4, 3)
o = optim.Adam(model.parameters())
o
model.lin1.bias
model.lin2.weight.data.shape
model.lin1.bias.data.shape
model.lin1.bias.grad
from random import shuffle
def chunks(X, Y, size):
"""Yield successive `size` chunks from X & Y."""
starts = list(range(0, len(X), size))
shuffle(starts)
for i in starts:
yield (X[i:i + size], Y[i:i + size])
batch_size = 64
losses = []
for epoch in range(400):
for itr, (feature, target) in enumerate(chunks(X, Y, batch_size)):
# This zeros the gradients on every parameter.
# This is easy to miss and hard to troubleshoot.
o.zero_grad()
# Convert
feature = Variable(torch.from_numpy(feature))
target = Variable(torch.from_numpy(target))
# Compute a prediction for these features
prediction = model.forward(feature)
# Compute a loss given what the true target outcome was
loss = model.loss(prediction, target)
# break
# Backpropagate: compute the direction / gradient every model parameter
# defined in your __init__ should move in in order to minimize this loss
# However, we're not actually changing these parameters, we're just storing
# how they should change.
loss.backward()
# Now take a step & update the model parameters. The optimizer uses the gradient at
# defined on every parameter in our model and nudges it in that direction.
o.step()
# Record the loss per example
losses.append(loss.data.numpy() / len(feature))
if epoch % 10 == 0 and itr ==0:
print(epoch, loss.data)
batch_size
loss
model.lin1.weight
o.step()
model.lin1.weight
model.lin1.weight.grad
###Output
_____no_output_____
###Markdown
I can introspect my model and get the parameters out:
###Code
model.lin1.weight.data.numpy()
model.lin1.bias.data.numpy()
###Output
_____no_output_____
###Markdown
I can also see that the loss is simply a scalar:
###Code
loss
###Output
_____no_output_____
###Markdown
You can see that the gradient is zero before we call `loss.backward()`
###Code
model.lin1.bias.grad
###Output
_____no_output_____
###Markdown
...And non-zero afterwards.
###Code
model.lin1.bias.grad
model.lin1.bias.data
###Output
_____no_output_____
###Markdown
And after we run `o.step()` we'll notice that the bias parameter has been updated:
###Code
model.lin1.bias.data.numpy()
###Output
_____no_output_____
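###Markdown
To see the whole zero_grad / backward / step cycle in one place, here is a minimal sketch on a single minibatch, reusing the model, optimizer and data defined above:
###Code
# one explicit optimization cycle
xb = Variable(torch.from_numpy(X[:64]))
yb = Variable(torch.from_numpy(Y[:64]))

o.zero_grad()                                   # clear any stale gradients
before = model.lin1.bias.data.clone()
loss = model.loss(model.forward(xb), yb)        # forward pass + loss
loss.backward()                                 # populate .grad on every parameter
print(model.lin1.bias.grad)                     # gradients are now non-zero
o.step()                                        # nudge parameters along the gradient
print((model.lin1.bias.data - before).numpy())  # the bias actually moved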
###Markdown
Let's check on convergence:
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
n = len(losses)
smooth = np.convolve(losses, np.ones((n,))/n, mode='valid')
plt.figure(figsize=(12, 12))
plt.plot(losses[::10])
plt.plot(smooth, c='r')
###Output
_____no_output_____ |
client/workflows/demos/census-end-to-end-s3-example.ipynb | ###Markdown
Logistic Regression with Grid Search (scikit-learn)
###Code
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
###Output
_____no_output_____
###Markdown
This example features:- **scikit-learn**'s `LogisticRegression` model- **verta**'s Python client logging grid search results- **verta**'s Python client retrieving the best run from the grid search to calculate full training accuracy- predictions against a deployed model
###Code
HOST = "app.verta.ai"
PROJECT_NAME = "Census Income Classification - S3 test"
EXPERIMENT_NAME = "Logistic Regression"
# import os
# os.environ['VERTA_EMAIL'] = ''
# os.environ['VERTA_DEV_KEY'] = ''
###Output
_____no_output_____
###Markdown
Imports
###Code
from __future__ import print_function
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import itertools
import os
import time
import six
import numpy as np
import pandas as pd
import sklearn
from sklearn import model_selection
from sklearn import linear_model
from sklearn import metrics
try:
import wget
except ImportError:
!pip install wget # you may need pip3
import wget
###Output
_____no_output_____
###Markdown
--- Log Workflow This section demonstrates logging model metadata and training artifacts to ModelDB. Instantiate Client
###Code
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
###Output
_____no_output_____
###Markdown
Prepare Data
###Code
dataset = client.set_dataset(name="Census Income S3", type="s3")
version = dataset.create_version(bucket_name="verta-starter")
DATASET_PATH = "./"
train_data_filename = DATASET_PATH + "census-train.csv"
test_data_filename = DATASET_PATH + "census-test.csv"
def download_starter_dataset(bucket_name):
if not os.path.exists(DATASET_PATH + "census-train.csv"):
train_data_url = "http://s3.amazonaws.com/" + bucket_name + "/census-train.csv"
if not os.path.isfile(train_data_filename):
wget.download(train_data_url)
if not os.path.exists(DATASET_PATH + "census-test.csv"):
test_data_url = "http://s3.amazonaws.com/" + bucket_name + "/census-test.csv"
if not os.path.isfile(test_data_filename):
wget.download(test_data_url)
download_starter_dataset("verta-starter")
df_train = pd.read_csv(train_data_filename)
X_train = df_train.iloc[:,:-1]
y_train = df_train.iloc[:, -1]
df_train.head()
###Output
_____no_output_____
###Markdown
Prepare Hyperparameters
###Code
hyperparam_candidates = {
'C': [1e-6, 1e-4],
'solver': ['lbfgs'],
'max_iter': [15, 28],
}
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
for values
in itertools.product(*hyperparam_candidates.values())]
###Output
_____no_output_____
###Markdown
Train Models
###Code
def run_experiment(hyperparams):
# create object to track experiment run
run = client.set_experiment_run()
# create validation split
(X_val_train, X_val_test,
y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True)
# log hyperparameters
run.log_hyperparameters(hyperparams)
print(hyperparams, end=' ')
# create and train model
model = linear_model.LogisticRegression(**hyperparams)
model.fit(X_train, y_train)
# calculate and log validation accuracy
val_acc = model.score(X_val_test, y_val_test)
run.log_metric("val_acc", val_acc)
print("Validation accuracy: {:.4f}".format(val_acc))
# create deployment artifacts
model_api = ModelAPI(X_train, y_train)
requirements = ["scikit-learn"]
# save and log model
run.log_model(model, model_api=model_api)
run.log_requirements(requirements)
# log dataset snapshot as version
run.log_dataset_version("train", version)
# log Git information as code version
run.log_code()
for hyperparams in hyperparam_sets:
run_experiment(hyperparams)
###Output
_____no_output_____
###Markdown
--- Revisit Workflow This section demonstrates querying and retrieving runs via the Client. Retrieve Best Run
###Code
best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0]
print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc")))
best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))
###Output
_____no_output_____
###Markdown
Train on Full Dataset
###Code
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate Accuracy on Full Training Set
###Code
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
###Output
_____no_output_____
###Markdown
--- Deployment and Live Predictions This section demonstrates model deployment and predictions, if supported by your version of ModelDB.
###Code
model_id = 'YOUR_MODEL_ID'
run = client.set_experiment_run(id=model_id)
###Output
_____no_output_____
###Markdown
Log Training Data for Reference
###Code
run.log_training_data(X_train, y_train)
###Output
_____no_output_____
###Markdown
Prepare "Live" Data
###Code
df_test = pd.read_csv(test_data_filename)
X_test = df_test.iloc[:,:-1]
###Output
_____no_output_____
###Markdown
Deploy Model
###Code
run.deploy(wait=True)
run
###Output
_____no_output_____
###Markdown
Query Deployed Model
###Code
deployed_model = run.get_deployed_model()
for x in itertools.cycle(X_test.values.tolist()):
print(deployed_model.predict([x]))
time.sleep(.5)
###Output
_____no_output_____
###Markdown
Logistic Regression with Grid Search (scikit-learn)
###Code
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
###Output
_____no_output_____
###Markdown
This example features:- **scikit-learn**'s `LogisticRegression` model- **verta**'s Python client logging grid search results- **verta**'s Python client retrieving the best run from the grid search to calculate full training accuracy- predictions against a deployed model
###Code
HOST = "app.verta.ai"
PROJECT_NAME = "Census Income Classification - S3 test"
EXPERIMENT_NAME = "Logistic Regression"
# import os
# os.environ['VERTA_EMAIL'] = ''
# os.environ['VERTA_DEV_KEY'] = ''
###Output
_____no_output_____
###Markdown
Imports
###Code
from __future__ import print_function
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import itertools
import os
import time
import six
import numpy as np
import pandas as pd
import sklearn
from sklearn import model_selection
from sklearn import linear_model
from sklearn import metrics
try:
import wget
except ImportError:
!pip install wget # you may need pip3
import wget
###Output
_____no_output_____
###Markdown
--- Log Workflow This section demonstrates logging model metadata and training artifacts to ModelDB. Instantiate Client
###Code
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
###Output
_____no_output_____
###Markdown
Prepare Data
###Code
from verta.dataset import S3
dataset = client.set_dataset(name="Census Income S3")
version = dataset.create_version(S3("s3://verta-starter"))
DATASET_PATH = "./"
train_data_filename = DATASET_PATH + "census-train.csv"
test_data_filename = DATASET_PATH + "census-test.csv"
def download_starter_dataset(bucket_name):
if not os.path.exists(DATASET_PATH + "census-train.csv"):
train_data_url = "http://s3.amazonaws.com/" + bucket_name + "/census-train.csv"
if not os.path.isfile(train_data_filename):
wget.download(train_data_url)
if not os.path.exists(DATASET_PATH + "census-test.csv"):
test_data_url = "http://s3.amazonaws.com/" + bucket_name + "/census-test.csv"
if not os.path.isfile(test_data_filename):
wget.download(test_data_url)
download_starter_dataset("verta-starter")
df_train = pd.read_csv(train_data_filename)
X_train = df_train.iloc[:,:-1]
y_train = df_train.iloc[:, -1]
df_train.head()
###Output
_____no_output_____
###Markdown
Prepare Hyperparameters
###Code
hyperparam_candidates = {
'C': [1e-6, 1e-4],
'solver': ['lbfgs'],
'max_iter': [15, 28],
}
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
for values
in itertools.product(*hyperparam_candidates.values())]
###Output
_____no_output_____
###Markdown
Train Models
###Code
def run_experiment(hyperparams):
# create object to track experiment run
run = client.set_experiment_run()
# create validation split
(X_val_train, X_val_test,
y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True)
# log hyperparameters
run.log_hyperparameters(hyperparams)
print(hyperparams, end=' ')
# create and train model
model = linear_model.LogisticRegression(**hyperparams)
model.fit(X_train, y_train)
# calculate and log validation accuracy
val_acc = model.score(X_val_test, y_val_test)
run.log_metric("val_acc", val_acc)
print("Validation accuracy: {:.4f}".format(val_acc))
# create deployment artifacts
model_api = ModelAPI(X_train, y_train)
requirements = ["scikit-learn"]
# save and log model
run.log_model(model, model_api=model_api)
run.log_requirements(requirements)
# log dataset snapshot as version
run.log_dataset_version("train", version)
# log Git information as code version
run.log_code()
for hyperparams in hyperparam_sets:
run_experiment(hyperparams)
###Output
_____no_output_____
###Markdown
--- Revisit Workflow This section demonstrates querying and retrieving runs via the Client. Retrieve Best Run
###Code
best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0]
print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc")))
best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))
###Output
_____no_output_____
###Markdown
Train on Full Dataset
###Code
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate Accuracy on Full Training Set
###Code
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
###Output
_____no_output_____
###Markdown
--- Deployment and Live Predictions This section demonstrates model deployment and predictions, if supported by your version of ModelDB.
###Code
model_id = 'YOUR_MODEL_ID'
run = client.set_experiment_run(id=model_id)
###Output
_____no_output_____
###Markdown
Log Training Data for Reference
###Code
run.log_training_data(X_train, y_train)
###Output
_____no_output_____
###Markdown
Prepare "Live" Data
###Code
df_test = pd.read_csv(test_data_filename)
X_test = df_test.iloc[:,:-1]
###Output
_____no_output_____
###Markdown
Deploy Model
###Code
run.deploy(wait=True)
run
###Output
_____no_output_____
###Markdown
Query Deployed Model
###Code
deployed_model = run.get_deployed_model()
for x in itertools.cycle(X_test.values.tolist()):
print(deployed_model.predict([x]))
time.sleep(.5)
###Output
_____no_output_____
###Markdown
Logistic Regression with Grid Search (scikit-learn)
###Code
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
###Output
_____no_output_____
###Markdown
This example builds on our [basic census income classification example](census-end-to-end.ipynb) by incorporating [S3 data versioning](https://docs.verta.ai/en/master/api/api/versioning.html#verta.dataset.S3).
###Code
HOST = "app.verta.ai"
PROJECT_NAME = "Census Income Classification - S3 Data"
EXPERIMENT_NAME = "Logistic Regression"
# import os
# os.environ['VERTA_EMAIL'] = ''
# os.environ['VERTA_DEV_KEY'] = ''
###Output
_____no_output_____
###Markdown
Imports
###Code
from __future__ import print_function
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import itertools
import os
import time
import six
import numpy as np
import pandas as pd
import sklearn
from sklearn import model_selection
from sklearn import linear_model
from sklearn import metrics
try:
import wget
except ImportError:
!pip install wget # you may need pip3
import wget
###Output
_____no_output_____
###Markdown
--- Log Workflow This section demonstrates logging model metadata and training artifacts to ModelDB. Instantiate Client
###Code
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
###Output
_____no_output_____
###Markdown
Prepare Data
###Code
from verta.dataset import S3
dataset = client.set_dataset(name="Census Income S3")
version = dataset.create_version(S3("s3://verta-starter"))
DATASET_PATH = "./"
train_data_filename = DATASET_PATH + "census-train.csv"
test_data_filename = DATASET_PATH + "census-test.csv"
def download_starter_dataset(bucket_name):
if not os.path.exists(DATASET_PATH + "census-train.csv"):
train_data_url = "http://s3.amazonaws.com/" + bucket_name + "/census-train.csv"
if not os.path.isfile(train_data_filename):
wget.download(train_data_url)
if not os.path.exists(DATASET_PATH + "census-test.csv"):
test_data_url = "http://s3.amazonaws.com/" + bucket_name + "/census-test.csv"
if not os.path.isfile(test_data_filename):
wget.download(test_data_url)
download_starter_dataset("verta-starter")
df_train = pd.read_csv(train_data_filename)
X_train = df_train.iloc[:,:-1]
y_train = df_train.iloc[:, -1]
df_train.head()
###Output
_____no_output_____
###Markdown
Prepare Hyperparameters
###Code
hyperparam_candidates = {
'C': [1e-6, 1e-4],
'solver': ['lbfgs'],
'max_iter': [15, 28],
}
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
for values
in itertools.product(*hyperparam_candidates.values())]
###Output
_____no_output_____
###Markdown
Train Models
###Code
def run_experiment(hyperparams):
# create object to track experiment run
run = client.set_experiment_run()
# create validation split
(X_val_train, X_val_test,
y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True)
# log hyperparameters
run.log_hyperparameters(hyperparams)
print(hyperparams, end=' ')
# create and train model
model = linear_model.LogisticRegression(**hyperparams)
model.fit(X_val_train, y_val_train)
# calculate and log validation accuracy
val_acc = model.score(X_val_test, y_val_test)
run.log_metric("val_acc", val_acc)
print("Validation accuracy: {:.4f}".format(val_acc))
# create deployment artifacts
model_api = ModelAPI(X_train, y_train)
requirements = ["scikit-learn"]
# save and log model
run.log_model(model, model_api=model_api)
run.log_requirements(requirements)
# log dataset snapshot as version
run.log_dataset_version("train", version)
for hyperparams in hyperparam_sets:
run_experiment(hyperparams)
###Output
_____no_output_____
###Markdown
--- Revisit Workflow This section demonstrates querying and retrieving runs via the Client. Retrieve Best Run
###Code
best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0]
print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc")))
best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))
###Output
_____no_output_____
###Markdown
Train on Full Dataset
###Code
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate Accuracy on Full Training Set
###Code
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
###Output
_____no_output_____
###Markdown
--- Deployment and Live Predictions This section demonstrates model deployment and predictions, if supported by your version of ModelDB.
###Code
model_id = 'YOUR_MODEL_ID'
run = client.set_experiment_run(id=model_id)
###Output
_____no_output_____
###Markdown
Log Training Data for Reference
###Code
run.log_training_data(X_train, y_train)
###Output
_____no_output_____
###Markdown
Prepare "Live" Data
###Code
df_test = pd.read_csv(test_data_filename)
X_test = df_test.iloc[:,:-1]
###Output
_____no_output_____
###Markdown
Deploy Model
###Code
run.deploy(wait=True)
run
###Output
_____no_output_____
###Markdown
Query Deployed Model
###Code
deployed_model = run.get_deployed_model()
for x in itertools.cycle(X_test.values.tolist()):
print(deployed_model.predict([x]))
time.sleep(.5)
###Output
_____no_output_____
###Markdown
Logistic Regression with Grid Search (scikit-learn)
###Code
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
###Output
_____no_output_____
###Markdown
This example builds on our [basic census income classification example](census-end-to-end.ipynb) by incorporating [S3 data versioning](https://verta.readthedocs.io/en/master/_autogen/verta.dataset.S3.html).
###Code
HOST = "app.verta.ai"
PROJECT_NAME = "Census Income Classification - S3 Data"
EXPERIMENT_NAME = "Logistic Regression"
# import os
# os.environ['VERTA_EMAIL'] = ''
# os.environ['VERTA_DEV_KEY'] = ''
###Output
_____no_output_____
###Markdown
Imports
###Code
from __future__ import print_function
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import itertools
import os
import time
import six
import numpy as np
import pandas as pd
import sklearn
from sklearn import model_selection
from sklearn import linear_model
from sklearn import metrics
try:
import wget
except ImportError:
!pip install wget # you may need pip3
import wget
###Output
_____no_output_____
###Markdown
--- Log Workflow This section demonstrates logging model metadata and training artifacts to ModelDB. Instantiate Client
###Code
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
###Output
_____no_output_____
###Markdown
Prepare Data
###Code
from verta.dataset import S3
dataset = client.set_dataset(name="Census Income S3")
version = dataset.create_version(S3("s3://verta-starter"))
DATASET_PATH = "./"
train_data_filename = DATASET_PATH + "census-train.csv"
test_data_filename = DATASET_PATH + "census-test.csv"
def download_starter_dataset(bucket_name):
if not os.path.exists(DATASET_PATH + "census-train.csv"):
train_data_url = "http://s3.amazonaws.com/" + bucket_name + "/census-train.csv"
if not os.path.isfile(train_data_filename):
wget.download(train_data_url)
if not os.path.exists(DATASET_PATH + "census-test.csv"):
test_data_url = "http://s3.amazonaws.com/" + bucket_name + "/census-test.csv"
if not os.path.isfile(test_data_filename):
wget.download(test_data_url)
download_starter_dataset("verta-starter")
df_train = pd.read_csv(train_data_filename)
X_train = df_train.iloc[:,:-1]
y_train = df_train.iloc[:, -1]
df_train.head()
###Output
_____no_output_____
###Markdown
Prepare Hyperparameters
###Code
hyperparam_candidates = {
'C': [1e-6, 1e-4],
'solver': ['lbfgs'],
'max_iter': [15, 28],
}
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
for values
in itertools.product(*hyperparam_candidates.values())]
###Output
_____no_output_____
###Markdown
Train Models
###Code
def run_experiment(hyperparams):
# create object to track experiment run
run = client.set_experiment_run()
# create validation split
(X_val_train, X_val_test,
y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True)
# log hyperparameters
run.log_hyperparameters(hyperparams)
print(hyperparams, end=' ')
# create and train model
model = linear_model.LogisticRegression(**hyperparams)
model.fit(X_val_train, y_val_train)
# calculate and log validation accuracy
val_acc = model.score(X_val_test, y_val_test)
run.log_metric("val_acc", val_acc)
print("Validation accuracy: {:.4f}".format(val_acc))
# create deployment artifacts
model_api = ModelAPI(X_train, y_train)
requirements = ["scikit-learn"]
# save and log model
run.log_model(model, model_api=model_api)
run.log_requirements(requirements)
# log dataset snapshot as version
run.log_dataset_version("train", version)
for hyperparams in hyperparam_sets:
run_experiment(hyperparams)
###Output
_____no_output_____
###Markdown
--- Revisit Workflow This section demonstrates querying and retrieving runs via the Client. Retrieve Best Run
###Code
best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0]
print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc")))
best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))
###Output
_____no_output_____
###Markdown
Train on Full Dataset
###Code
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate Accuracy on Full Training Set
###Code
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
###Output
_____no_output_____
###Markdown
--- Deployment and Live Predictions This section demonstrates model deployment and predictions, if supported by your version of ModelDB.
###Code
model_id = 'YOUR_MODEL_ID'
run = client.set_experiment_run(id=model_id)
###Output
_____no_output_____
###Markdown
Prepare "Live" Data
###Code
df_test = pd.read_csv(test_data_filename)
X_test = df_test.iloc[:,:-1]
###Output
_____no_output_____
###Markdown
Deploy Model
###Code
run.deploy(wait=True)
run
###Output
_____no_output_____
###Markdown
Query Deployed Model
###Code
deployed_model = run.get_deployed_model()
for x in itertools.cycle(X_test.values.tolist()):
print(deployed_model.predict([x]))
time.sleep(.5)
###Output
_____no_output_____
###Markdown
Logistic Regression with Grid Search (scikit-learn)
###Code
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
###Output
_____no_output_____
###Markdown
This example features: - **scikit-learn**'s `LogisticRegression` model - **verta**'s Python client logging grid search results - **verta**'s Python client retrieving the best run from the grid search to calculate full training accuracy - predictions against a deployed model
###Code
HOST = "app.verta.ai"
PROJECT_NAME = "Census Income Classification - S3 test"
EXPERIMENT_NAME = "Logistic Regression"
# import os
# os.environ['VERTA_EMAIL'] = ''
# os.environ['VERTA_DEV_KEY'] = ''
###Output
_____no_output_____
###Markdown
Imports
###Code
from __future__ import print_function
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import itertools
import os
import time
import six
import numpy as np
import pandas as pd
import sklearn
from sklearn import model_selection
from sklearn import linear_model
from sklearn import metrics
try:
import wget
except ImportError:
!pip install wget # you may need pip3
import wget
###Output
_____no_output_____
###Markdown
--- Log Workflow Instantiate Client
###Code
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
###Output
_____no_output_____
###Markdown
Prepare Data
###Code
dataset = client.set_dataset(name="Census Income", type="s3")
version = dataset.create_version(bucket_name="verta-starter")
DATASET_PATH = "./"
def download_starter_dataset(bucket_name):
if not os.path.exists(DATASET_PATH + "census-train.csv"):
train_data_url = "http://s3.amazonaws.com/" + bucket_name + "/census-train.csv"
if not os.path.isfile(train_data_filename):
wget.download(train_data_url)
if not os.path.exists(DATASET_PATH + "census-test.csv"):
test_data_url = "http://s3.amazonaws.com/" + bucket_name + "/census-test.csv"
if not os.path.isfile(test_data_filename):
wget.download(test_data_url)
train_data_filename = DATASET_PATH + "census-train.csv"
test_data_filename = DATASET_PATH + "census-test.csv"
download_starter_dataset("verta-starter")
df_train = pd.read_csv(train_data_filename)
X_train = df_train.iloc[:,:-1]
y_train = df_train.iloc[:, -1]
df_train.head()
###Output
_____no_output_____
###Markdown
Prepare Hyperparameters
###Code
hyperparam_candidates = {
'C': [1e-6, 1e-4],
'solver': ['lbfgs'],
'max_iter': [15, 28],
}
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
for values
in itertools.product(*hyperparam_candidates.values())]
###Output
_____no_output_____
###Markdown
Train Models
###Code
def run_experiment(hyperparams):
# create object to track experiment run
run = client.set_experiment_run()
# create validation split
(X_val_train, X_val_test,
y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True)
# log hyperparameters
run.log_hyperparameters(hyperparams)
print(hyperparams, end=' ')
# create and train model
model = linear_model.LogisticRegression(**hyperparams)
model.fit(X_val_train, y_val_train)
# calculate and log validation accuracy
val_acc = model.score(X_val_test, y_val_test)
run.log_metric("val_acc", val_acc)
print("Validation accuracy: {:.4f}".format(val_acc))
# create deployment artifacts
model_api = ModelAPI(X_train, y_train)
requirements = ["scikit-learn"]
# save and log model
run.log_model(model, model_api=model_api)
run.log_requirements(requirements)
run.log_training_data(X_train, y_train)
# log dataset snapshot as version
run.log_dataset_version("train", version)
# log Git information as code version
run.log_code()
for hyperparams in hyperparam_sets:
run_experiment(hyperparams)
###Output
_____no_output_____
###Markdown
--- Revisit Workflow Retrieve Best Run
###Code
best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0]
print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc")))
best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))
###Output
_____no_output_____
###Markdown
Train on Full Dataset
###Code
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate Accuracy on Full Training Set
###Code
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
###Output
_____no_output_____
###Markdown
--- Make Live Predictions
###Code
model_id = 'YOUR_MODEL_ID'
###Output
_____no_output_____
###Markdown
Prepare Data
###Code
df_test = pd.read_csv(test_data_filename)
X_test = df_test.iloc[:,:-1]
###Output
_____no_output_____
###Markdown
Load Deployed Model
###Code
from verta._demo_utils import DeployedModel
deployed_model = DeployedModel(HOST, model_id)
###Output
_____no_output_____
###Markdown
Query Deployed Model
###Code
for x in itertools.cycle(X_test.values.tolist()):
print(deployed_model.predict([x]))
time.sleep(.5)
###Output
_____no_output_____
###Markdown
Logistic Regression with Grid Search (scikit-learn)
###Code
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
###Output
_____no_output_____
###Markdown
This example features: - **scikit-learn**'s `LogisticRegression` model - **verta**'s Python client logging grid search results - **verta**'s Python client retrieving the best run from the grid search to calculate full training accuracy - predictions against a deployed model
###Code
HOST = "app.verta.ai"
PROJECT_NAME = "Census Income Classification - S3 test"
EXPERIMENT_NAME = "Logistic Regression"
# import os
# os.environ['VERTA_EMAIL'] = ''
# os.environ['VERTA_DEV_KEY'] = ''
###Output
_____no_output_____
###Markdown
Imports
###Code
from __future__ import print_function
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import itertools
import os
import time
import six
import numpy as np
import pandas as pd
import sklearn
from sklearn import model_selection
from sklearn import linear_model
from sklearn import metrics
try:
import wget
except ImportError:
!pip install wget # you may need pip3
import wget
###Output
_____no_output_____
###Markdown
--- Log Workflow This section demonstrates logging model metadata and training artifacts to ModelDB. Instantiate Client
###Code
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
###Output
_____no_output_____
###Markdown
Prepare Data
###Code
dataset = client.set_dataset(name="Census Income S3", type="s3")
version = dataset.create_version(bucket_name="verta-starter")
DATASET_PATH = "./"
def download_starter_dataset(bucket_name):
if not os.path.exists(DATASET_PATH + "census-train.csv"):
train_data_url = "http://s3.amazonaws.com/" + bucket_name + "/census-train.csv"
if not os.path.isfile(train_data_filename):
wget.download(train_data_url)
if not os.path.exists(DATASET_PATH + "census-test.csv"):
test_data_url = "http://s3.amazonaws.com/" + bucket_name + "/census-test.csv"
if not os.path.isfile(test_data_filename):
wget.download(test_data_url)
train_data_filename = DATASET_PATH + "census-train.csv"
test_data_filename = DATASET_PATH + "census-test.csv"
download_starter_dataset("verta-starter")
df_train = pd.read_csv(train_data_filename)
X_train = df_train.iloc[:,:-1]
y_train = df_train.iloc[:, -1]
df_train.head()
###Output
_____no_output_____
###Markdown
Prepare Hyperparameters
###Code
hyperparam_candidates = {
'C': [1e-6, 1e-4],
'solver': ['lbfgs'],
'max_iter': [15, 28],
}
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
for values
in itertools.product(*hyperparam_candidates.values())]
###Output
_____no_output_____
###Markdown
Train Models
###Code
def run_experiment(hyperparams):
# create object to track experiment run
run = client.set_experiment_run()
# create validation split
(X_val_train, X_val_test,
y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True)
# log hyperparameters
run.log_hyperparameters(hyperparams)
print(hyperparams, end=' ')
# create and train model
model = linear_model.LogisticRegression(**hyperparams)
model.fit(X_val_train, y_val_train)
# calculate and log validation accuracy
val_acc = model.score(X_val_test, y_val_test)
run.log_metric("val_acc", val_acc)
print("Validation accuracy: {:.4f}".format(val_acc))
# create deployment artifacts
model_api = ModelAPI(X_train, y_train)
requirements = ["scikit-learn"]
# save and log model
run.log_model(model, model_api=model_api)
run.log_requirements(requirements)
# log dataset snapshot as version
run.log_dataset_version("train", version)
# log Git information as code version
run.log_code()
for hyperparams in hyperparam_sets:
run_experiment(hyperparams)
###Output
_____no_output_____
###Markdown
--- Revisit Workflow This section demonstrates querying and retrieving runs via the Client. Retrieve Best Run
###Code
best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0]
print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc")))
best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))
###Output
_____no_output_____
###Markdown
Train on Full Dataset
###Code
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate Accuracy on Full Training Set
###Code
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
###Output
_____no_output_____
###Markdown
--- Deployment and Live Predictions This section demonstrates model deployment and predictions, if supported by your version of ModelDB.
###Code
model_id = 'YOUR_MODEL_ID'
run = client.set_experiment_run(id=model_id)
###Output
_____no_output_____
###Markdown
Log Training Data for Reference
###Code
run.log_training_data(X_train, y_train)
###Output
_____no_output_____
###Markdown
Prepare "Live" Data
###Code
df_test = pd.read_csv(test_data_filename)
X_test = df_test.iloc[:,:-1]
###Output
_____no_output_____
###Markdown
Deploy Model
###Code
run.deploy(wait=True)
run
###Output
_____no_output_____
###Markdown
Query Deployed Model
###Code
deployed_model = run.get_deployed_model()
for x in itertools.cycle(X_test.values.tolist()):
print(deployed_model.predict([x]))
time.sleep(.5)
###Output
_____no_output_____ |
notebooks/03-classifier_evaluation.ipynb | ###Markdown
Classifier Evaluation In this notebook we test a number of different classifiers in order to find the three best-performing ones. Those three are then combined into an ensemble model to form our final classifier.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import sys
sys.path.append('./lib')
import numpy as np
import pandas as pd
import time
from pathlib import Path
from sklearn.metrics import log_loss, classification_report, accuracy_score
from tqdm import tqdm_notebook as tqdm
from lib.definitions import RANDOM_SEED
import random
random.seed(RANDOM_SEED)
import numpy.random
numpy.random.seed(RANDOM_SEED)
import os
os.environ['PYTHONHASHSEED']=str(RANDOM_SEED)
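# note: PYTHONHASHSEED only takes full effect when it is set before the Python process starts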
import tensorflow
tensorflow.set_random_seed(RANDOM_SEED)
from lib.definitions import SPLITS_BASE_PATH
SPLITS_BASE_PATH.mkdir(parents=True, exist_ok=True)
###Output
_____no_output_____
###Markdown
Find the top 3 best performing classifiers
###Code
from lib.dataset import load_split, evaluate_estimator, evaluate_splits
from lib.classifiers import classifier_factories
results = {}
for (name, factory) in classifier_factories.items():
t0 = time.time()
print(f'Evaluating "{name}"')
scores = evaluate_splits(factory)
duration = time.time() - t0
print(f'[Took {duration}s]\n')
results[name] = {
'scores': scores,
'duration': duration
}
results
###Output
_____no_output_____
###Markdown
The 3 best performing classifiers are: Random Forest, Naive Bayes and Quadratic Discriminant Analysis. In the next section we are going to create an ensemble classifier out of those 3.
###Code
best_classifiers = [
'Random Forest',
'Naive Bayes',
'Quadratic Discriminant Analysis'
]
###Output
_____no_output_____
###Markdown
Assemble the Ensemble Classifier
###Code
def create_ensemble_estimator(verbose, random_state, n_jobs):
from sklearn.ensemble import VotingClassifier
estimators = list(
map(
lambda name: (name, classifier_factories[name](verbose=verbose, random_state=RANDOM_SEED, n_jobs=-1)),
best_classifiers))
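# voting='soft' averages the predicted class probabilities of the three estimators instead of taking a majority vote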
return VotingClassifier(estimators, voting='soft')
print(f'Evaluating "Ensemble Classifier"')
t0 = time.time()
scores = evaluate_splits(create_ensemble_estimator)
duration = time.time() - t0
print(f'[Took {duration}s]\n')
scores
###Output
Evaluating "Ensemble Classifier"
[Took 2.948072910308838s]
|
Notebooks/MultiVariableLinearRegression.ipynb | ###Markdown
| Feature | Value |
| ----------- | ----------- |
| Pearson's Correlation | 0.66 |
| R^2 | 0.44 |
| MSE | 0.56 |
| RMSE | 0.75 |
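(A quick consistency check we added: RMSE = √MSE = √0.56 ≈ 0.75, matching the last two rows.)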
###Code
corr_matrix = df_hfi.corr()
fig, ax = plt.subplots(figsize=(30,30))
sns.heatmap(corr_matrix, annot=True, ax=ax, cmap="YlGnBu")
plt.show()
predictors = ['Country Name', 'Property Rights', 'Government Integrity', 'Trade Freedom', 'Financial Freedom', 'Tax Burden % of GDP',
'GDP per Capita (PPP)', 'EPI']
hfi_epi = df_hfi[predictors]
hfi_epi = hfi_epi.dropna()
hfi_epi.sort_values('EPI', ascending=False).head(10)
predictors = ['Property Rights', 'Government Integrity', 'Trade Freedom',
'Financial Freedom', 'Tax Burden % of GDP', 'GDP per Capita (PPP)']
X = hfi_epi[predictors]
y = hfi_epi[['EPI']]
linearFit = LinearRegression().fit(X, y)
for predictor, coefficient in zip(predictors, linearFit.coef_[0]):
print('{}: {:.2f}'.format(predictor, coefficient))
X_norm = StandardScaler().fit_transform(X)
y_norm = StandardScaler().fit_transform(y)
linearFitNorm = LinearRegression().fit(X_norm, y_norm)
y_hat = linearFitNorm.predict(X_norm)
for predictor, coefficient in zip(predictors, linearFitNorm.coef_[0]):
print(f'{predictor}: {coefficient:.2f}')
print(f'R^2 Score: {linearFitNorm.score(X_norm, y_norm):.2f}')
print(f'MSE: {mean_squared_error(y_norm, y_hat):.2f}')
print(f'RMSE: {math.sqrt(mean_squared_error(y_norm, y_hat)):.2f}')
###Output
Property Rights: 0.01
Government Integrity: 0.28
Trade Freedom: 0.11
Financial Freedom: 0.05
Tax Burden % of GDP: 0.36
GDP per Capita (PPP): 0.29
R^2 Score: 0.82
MSE: 0.1849637077934119
RMSE: 0.4300740724496327
###Markdown
| Feature | SLR | MLR |
| ----------- | ----------- | ----------- |
| Pearson's Correlation | 0.66 | |
| R^2 | 0.44 | 0.82 |
| MSE | 0.56 | 0.18 |
| RMSE | 0.75 | 0.43 |
###Code
ax = sns.barplot(x=linearFitNorm.coef_[0], y=predictors, palette="Blues_d",
order=['Tax Burden % of GDP', 'GDP per Capita (PPP)', 'Government Integrity',
'Trade Freedom','Financial Freedom','Property Rights'])
ax.figure.dpi = 120
###Output
_____no_output_____ |
docs/tutorials/average_optimizers_callback.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Model Averaging Run in Google Colab View source on GitHub Overview This notebook demonstrates how to use the Moving Average Optimizer along with the Model Average Checkpoint from the tensorflow addons package. Moving Averaging > The advantage of Moving Averaging is that it is less prone to rampant loss shifts or irregular data representation in the latest batch. It gives a smoothed and more general idea of the model training until some point. Stochastic Averaging > Stochastic Weight Averaging converges to wider optima. By doing so, it resembles geometric ensembling. SWA is a simple method to improve model performance when used as a wrapper around other optimizers, averaging results from different points of the trajectory of the inner optimizer. Model Average Checkpoint > ```callbacks.ModelCheckpoint``` doesn't give you the option to save moving average weights in the middle of training, which is why Model Average Optimizers require a custom callback. Using the ```update_weights``` parameter, ```ModelAverageCheckpoint``` allows you to: 1. Assign the moving average weights to the model, and save them. 2. Keep the old non-averaged weights, but have the saved model use the average weights. Setup
###Code
try:
%tensorflow_version 2.x
except:
pass
import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np
import os
###Output
_____no_output_____
###Markdown
Build Model
###Code
def create_model(opt):
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu', name='dense_1'),
tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer=opt,
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Prepare Dataset
###Code
#Load Fashion MNIST dataset
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255.0
labels = labels.astype(np.int32)
fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)
test_images, test_labels = test
###Output
_____no_output_____
###Markdown
We will be comparing three optimizers here:* Unwrapped SGD* SGD with Moving Average* SGD with Stochastic Weight AveragingAnd see how they perform with the same model.
###Code
#Optimizers
sgd = tf.keras.optimizers.SGD(0.01)
moving_avg_sgd = tfa.optimizers.MovingAverage(sgd)
stocastic_avg_sgd = tfa.optimizers.SWA(sgd)
###Output
_____no_output_____
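###Markdown
A rough intuition for the two wrappers (our own sketch, not taken from the TensorFlow Addons documentation): a moving-average optimizer keeps an exponential average of the weights, `w_avg = decay * w_avg + (1 - decay) * w`, updated after every step, while SWA keeps a plain mean of weight snapshots collected along the training trajectory.
###Code
# Minimal NumPy illustration of the two averaging rules (assumption: purely
# illustrative, not how tfa.optimizers.MovingAverage / tfa.optimizers.SWA are implemented)
import numpy as np

def exponential_moving_average(w_avg, w, decay=0.99):
    # EMA update applied after every optimizer step
    return decay * w_avg + (1.0 - decay) * w

def stochastic_weight_average(snapshots):
    # SWA: plain mean of the collected weight snapshots
    return np.mean(np.stack(snapshots), axis=0)

w = np.zeros(3)
w_avg = np.zeros(3)
snapshots = []
for step in range(5):
    w = w + np.array([0.10, -0.20, 0.05])         # stand-in for an SGD update
    w_avg = exponential_moving_average(w_avg, w)  # MovingAverage-style tracking
    snapshots.append(w.copy())                    # SWA-style snapshot
print("EMA weights:", w_avg)
print("SWA weights:", stochastic_weight_average(snapshots))
###Output
_____no_output_____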
###Markdown
Both the ```MovingAverage``` and ```SWA``` (Stochastic Weight Averaging) optimizers use ```ModelAverageCheckpoint```.
###Code
#Callback
checkpoint_path = "./training/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_dir,
save_weights_only=True,
verbose=1)
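# update_weights=True assigns the moving-average weights to the model before each checkpoint is saved (see the overview above)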
avg_callback = tfa.callbacks.AverageModelCheckpoint(filepath=checkpoint_dir,
update_weights=True)
###Output
_____no_output_____
###Markdown
Train Model Vanilla SGD Optimizer
###Code
#Build Model
model = create_model(sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[cp_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
10000/10000 - 0s - loss: 87.3869 - accuracy: 0.7872
Loss : 87.38689237976074
Accuracy : 0.7872
###Markdown
Moving Average SGD
###Code
#Build Model
model = create_model(moving_avg_sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
10000/10000 - 0s - loss: 87.3869 - accuracy: 0.7872
Loss : 87.38689237976074
Accuracy : 0.7872
###Markdown
Stochastic Weight Average SGD
###Code
#Build Model
model = create_model(stocastic_avg_sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
10000/10000 - 0s - loss: 87.3869 - accuracy: 0.7872
Loss : 87.38689237976074
Accuracy : 0.7872
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Model Averaging Run in Google Colab View source on GitHub Overview This notebook demonstrates how to use the Moving Average Optimizer along with the Model Average Checkpoint from the tensorflow addons package. Moving Averaging > The advantage of Moving Averaging is that it is less prone to rampant loss shifts or irregular data representation in the latest batch. It gives a smoothed and more general idea of the model training until some point. Stochastic Averaging > Stochastic Weight Averaging converges to wider optima. By doing so, it resembles geometric ensembling. SWA is a simple method to improve model performance when used as a wrapper around other optimizers, averaging results from different points of the trajectory of the inner optimizer. Model Average Checkpoint > `callbacks.ModelCheckpoint` doesn't give you the option to save moving average weights in the middle of training, which is why Model Average Optimizers require a custom callback. Using the ```update_weights``` parameter, ```ModelAverageCheckpoint``` allows you to: 1. Assign the moving average weights to the model, and save them. 2. Keep the old non-averaged weights, but have the saved model use the average weights. Setup
###Code
try:
%tensorflow_version 2.x
except:
pass
import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np
import os
###Output
_____no_output_____
###Markdown
Build Model
###Code
def create_model(opt):
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu', name='dense_1'),
tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer=opt,
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Prepare Dataset
###Code
#Load Fashion MNIST dataset
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255.0
labels = labels.astype(np.int32)
fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)
test_images, test_labels = test
###Output
_____no_output_____
###Markdown
We will be comparing three optimizers here:* Unwrapped SGD* SGD with Moving Average* SGD with Stochastic Weight AveragingAnd see how they perform with the same model.
###Code
#Optimizers
sgd = tf.keras.optimizers.SGD(0.01)
moving_avg_sgd = tfa.optimizers.MovingAverage(sgd)
stocastic_avg_sgd = tfa.optimizers.SWA(sgd)
###Output
_____no_output_____
###Markdown
Both the ```MovingAverage``` and ```SWA``` (Stochastic Weight Averaging) optimizers use ```ModelAverageCheckpoint```.
###Code
#Callback
checkpoint_path = "./training/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_dir,
save_weights_only=True,
verbose=1)
avg_callback = tfa.callbacks.AverageModelCheckpoint(filepath=checkpoint_dir,
update_weights=True)
###Output
_____no_output_____
###Markdown
Train Model Vanilla SGD Optimizer
###Code
#Build Model
model = create_model(sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[cp_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
10000/10000 - 0s - loss: 87.3869 - accuracy: 0.7872
Loss : 87.38689237976074
Accuracy : 0.7872
###Markdown
Moving Average SGD
###Code
#Build Model
model = create_model(moving_avg_sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
10000/10000 - 0s - loss: 87.3869 - accuracy: 0.7872
Loss : 87.38689237976074
Accuracy : 0.7872
###Markdown
Stochastic Weight Average SGD
###Code
#Build Model
model = create_model(stocastic_avg_sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
10000/10000 - 0s - loss: 87.3869 - accuracy: 0.7872
Loss : 87.38689237976074
Accuracy : 0.7872
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Model Averaging View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Overview This notebook demonstrates how to use the Moving Average Optimizer along with the Model Average Checkpoint from the tensorflow addons package. Moving Averaging > The advantage of Moving Averaging is that it is less prone to rampant loss shifts or irregular data representation in the latest batch. It gives a smoothed and more general idea of the model training until some point. Stochastic Averaging > Stochastic Weight Averaging converges to wider optima. By doing so, it resembles geometric ensembling. SWA is a simple method to improve model performance when used as a wrapper around other optimizers, averaging results from different points of the trajectory of the inner optimizer. Model Average Checkpoint > `callbacks.ModelCheckpoint` doesn't give you the option to save moving average weights in the middle of training, which is why Model Average Optimizers require a custom callback. Using the ```update_weights``` parameter, ```ModelAverageCheckpoint``` allows you to: 1. Assign the moving average weights to the model, and save them. 2. Keep the old non-averaged weights, but have the saved model use the average weights. Setup
###Code
!pip install -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np
import os
###Output
_____no_output_____
###Markdown
Build Model
###Code
def create_model(opt):
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer=opt,
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Prepare Dataset
###Code
#Load Fashion MNIST dataset
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255.0
labels = labels.astype(np.int32)
fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)
test_images, test_labels = test
###Output
_____no_output_____
###Markdown
We will be comparing three optimizers here:* Unwrapped SGD* SGD with Moving Average* SGD with Stochastic Weight AveragingAnd see how they perform with the same model.
###Code
#Optimizers
sgd = tf.keras.optimizers.SGD(0.01)
moving_avg_sgd = tfa.optimizers.MovingAverage(sgd)
stocastic_avg_sgd = tfa.optimizers.SWA(sgd)
###Output
_____no_output_____
###Markdown
Both the ```MovingAverage``` and ```SWA``` (Stochastic Weight Averaging) optimizers use ```ModelAverageCheckpoint```.
###Code
#Callback
checkpoint_path = "./training/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_dir,
save_weights_only=True,
verbose=1)
avg_callback = tfa.callbacks.AverageModelCheckpoint(filepath=checkpoint_dir,
update_weights=True)
###Output
_____no_output_____
###Markdown
Train Model Vanilla SGD Optimizer
###Code
#Build Model
model = create_model(sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[cp_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
Moving Average SGD
###Code
#Build Model
model = create_model(moving_avg_sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
Stochastic Weight Average SGD
###Code
#Build Model
model = create_model(stocastic_avg_sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Model Averaging View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Overview This notebook demonstrates how to use the Moving Average Optimizer along with the Model Average Checkpoint from the tensorflow addons package. Moving Averaging > The advantage of Moving Averaging is that it is less prone to rampant loss shifts or irregular data representation in the latest batch. It gives a smoothed and more general idea of the model training until some point. Stochastic Averaging > Stochastic Weight Averaging converges to wider optima. By doing so, it resembles geometric ensembling. SWA is a simple method to improve model performance when used as a wrapper around other optimizers, averaging results from different points of the trajectory of the inner optimizer. Model Average Checkpoint > `callbacks.ModelCheckpoint` doesn't give you the option to save moving average weights in the middle of training, which is why Model Average Optimizers require a custom callback. Using the ```update_weights``` parameter, ```ModelAverageCheckpoint``` allows you to: 1. Assign the moving average weights to the model, and save them. 2. Keep the old non-averaged weights, but have the saved model use the average weights. Setup
###Code
!pip install -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np
import os
###Output
_____no_output_____
###Markdown
Build Model
###Code
def create_model(opt):
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer=opt,
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Prepare Dataset
###Code
#Load Fashion MNIST dataset
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255.0
labels = labels.astype(np.int32)
fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)
test_images, test_labels = test
###Output
_____no_output_____
###Markdown
We will be comparing three optimizers here:* Unwrapped SGD* SGD with Moving Average* SGD with Stochastic Weight AveragingAnd see how they perform with the same model.
###Code
#Optimizers
sgd = tf.keras.optimizers.SGD(0.01)
moving_avg_sgd = tfa.optimizers.MovingAverage(sgd)
stocastic_avg_sgd = tfa.optimizers.SWA(sgd)
###Output
_____no_output_____
###Markdown
Both the ```MovingAverage``` and ```SWA``` (Stochastic Weight Averaging) optimizers use ```ModelAverageCheckpoint```.
###Code
#Callback
checkpoint_path = "./training/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_dir,
save_weights_only=True,
verbose=1)
avg_callback = tfa.callbacks.AverageModelCheckpoint(filepath=checkpoint_dir,
update_weights=True)
###Output
_____no_output_____
###Markdown
Train Model Vanilla SGD Optimizer
###Code
#Build Model
model = create_model(sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[cp_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
Moving Average SGD
###Code
#Build Model
model = create_model(moving_avg_sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
Stochastic Weight Average SGD
###Code
#Build Model
model = create_model(stocastic_avg_sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Model Averaging View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Overview This notebook demonstrates how to use the Moving Average Optimizer along with the Model Average Checkpoint from the tensorflow addons package. Moving Averaging > The advantage of Moving Averaging is that it is less prone to rampant loss shifts or irregular data representation in the latest batch. It gives a smoothed and more general idea of the model training until some point. Stochastic Averaging > Stochastic Weight Averaging converges to wider optima. By doing so, it resembles geometric ensembling. SWA is a simple method to improve model performance when used as a wrapper around other optimizers, averaging results from different points of the trajectory of the inner optimizer. Model Average Checkpoint > `callbacks.ModelCheckpoint` doesn't give you the option to save moving average weights in the middle of training, which is why Model Average Optimizers require a custom callback. Using the ```update_weights``` parameter, ```ModelAverageCheckpoint``` allows you to: 1. Assign the moving average weights to the model, and save them. 2. Keep the old non-averaged weights, but have the saved model use the average weights. Setup
###Code
!pip install -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np
import os
###Output
_____no_output_____
###Markdown
Build Model
###Code
def create_model(opt):
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer=opt,
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Prepare Dataset
###Code
#Load Fashion MNIST dataset
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255.0
labels = labels.astype(np.int32)
fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)
test_images, test_labels = test
###Output
_____no_output_____
###Markdown
We will be comparing three optimizers here:* Unwrapped SGD* SGD with Moving Average* SGD with Stochastic Weight AveragingAnd see how they perform with the same model.
###Code
#Optimizers
sgd = tf.keras.optimizers.SGD(0.01)
moving_avg_sgd = tfa.optimizers.MovingAverage(sgd)
stocastic_avg_sgd = tfa.optimizers.SWA(sgd)
###Output
_____no_output_____
###Markdown
Both the ```MovingAverage``` and ```SWA``` (Stochastic Weight Averaging) optimizers use ```ModelAverageCheckpoint```.
###Code
#Callback
checkpoint_path = "./training/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_dir,
save_weights_only=True,
verbose=1)
avg_callback = tfa.callbacks.AverageModelCheckpoint(filepath=checkpoint_dir,
update_weights=True)
###Output
_____no_output_____
###Markdown
Train Model Vanilla SGD Optimizer
###Code
#Build Model
model = create_model(sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[cp_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
Moving Average SGD
###Code
#Build Model
model = create_model(moving_avg_sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
Stochastic Weight Average SGD
###Code
#Build Model
model = create_model(stocastic_avg_sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Model Averaging View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Overview This notebook demonstrates how to use the Moving Average Optimizer along with the Model Average Checkpoint from the tensorflow addons package. Moving Averaging > The advantage of Moving Averaging is that it is less prone to rampant loss shifts or irregular data representation in the latest batch. It gives a smoothed and more general idea of the model training until some point. Stochastic Averaging > Stochastic Weight Averaging converges to wider optima. By doing so, it resembles geometric ensembling. SWA is a simple method to improve model performance when used as a wrapper around other optimizers, averaging results from different points of the trajectory of the inner optimizer. Model Average Checkpoint > `callbacks.ModelCheckpoint` doesn't give you the option to save moving average weights in the middle of training, which is why Model Average Optimizers require a custom callback. Using the ```update_weights``` parameter, ```ModelAverageCheckpoint``` allows you to: 1. Assign the moving average weights to the model, and save them. 2. Keep the old non-averaged weights, but have the saved model use the average weights. Setup
###Code
!pip install -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np
import os
###Output
_____no_output_____
###Markdown
Build Model
###Code
def create_model(opt):
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer=opt,
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Prepare Dataset
###Code
#Load Fashion MNIST dataset
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255.0
labels = labels.astype(np.int32)
fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)
test_images, test_labels = test
###Output
_____no_output_____
###Markdown
We will be comparing three optimizers here:* Unwrapped SGD* SGD with Moving Average* SGD with Stochastic Weight AveragingAnd see how they perform with the same model.
###Code
#Optimizers
sgd = tf.keras.optimizers.SGD(0.01)
moving_avg_sgd = tfa.optimizers.MovingAverage(sgd)
stocastic_avg_sgd = tfa.optimizers.SWA(sgd)
###Output
_____no_output_____
###Markdown
Both the ```MovingAverage``` and ```SWA``` (Stochastic Weight Averaging) optimizers use ```ModelAverageCheckpoint```.
###Code
#Callback
checkpoint_path = "./training/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_dir,
save_weights_only=True,
verbose=1)
avg_callback = tfa.callbacks.AverageModelCheckpoint(filepath=checkpoint_dir,
update_weights=True)
###Output
_____no_output_____
###Markdown
Train Model Vanilla SGD Optimizer
###Code
#Build Model
model = create_model(sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[cp_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
Moving Average SGD
###Code
#Build Model
model = create_model(moving_avg_sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____
###Markdown
Stochastic Weight Average SGD
###Code
#Build Model
model = create_model(stocastic_avg_sgd)
#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])
#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
###Output
_____no_output_____ |
tutorial_3_matplotlib.ipynb | ###Markdown
[Fundamental Python Data Science Libraries: A Cheatsheet (Part 3/4)](https://hackernoon.com/fundamental-python-data-science-libraries-a-cheatsheet-part-3-4-6c2aecc697a4) by [Lauren Glass](https://www.linkedin.com/in/laurenjglass/), [Hackernoon](https://hackernoon.com/), Aug. 7, 2018 Matplotlib This library is the go-to Python visualization package! It allows you to create rich images displaying your data with Python code. This library is extensive, but this article will focus on two objects: the Figure and the Axes.
###Code
# Load libraries
import matplotlib.pyplot as plt
#will lead to static images of your plot embedded in the notebook
%matplotlib inline
import numpy as np
# Create data:
x = np.array([1,2,3,4,5,6])
y = np.array([1,4,9,16,25,36])
# Create figure
# Figure is a blank canvas
fig = plt.figure(figsize=(8,5), dpi=100) # 800x500 pixel image
# Add axes at specific position (fractions of fig width and height)
position = [0.1, 0.1, 0.8, 0.8] # left, bottom, width, height
axes = fig.add_axes(position)
# Plot a line
axes.plot(x, y, label="growth") # label keyword used later!
axes.set_xlabel('X Axis')
axes.set_ylabel('Y Axis')
axes.set_title("Simple Line")
# Save the image
fig.savefig("file1.jpg")
# Legends
# Figure is a blank canvas
fig = plt.figure(figsize=(8,5), dpi=100) # 800x500 pixel image
# Add axes at specific position (fractions of fig width and height)
position = [0.1, 0.1, 0.8, 0.8] # left, bottom, width, height
axes = fig.add_axes(position)
# Plot a line
axes.plot(x, y, label="growth") # label keyword used later!
axes.set_xlabel('X Axis')
axes.set_ylabel('Y Axis')
axes.set_title("Simple Line")
# Location options: 0 = Best (auto), 1 = Upper Right, 2 = Upper Left,
# 3 = Lower Left, 4 = Lower Right
axes.legend(loc=0)
# Save the image
fig.savefig("file2.jpg")
# Colors & Lines
# Figure is a blank canvas
fig = plt.figure(figsize=(8,5), dpi=100) # 800x500 pixel image
# Add axes at specific position (fractions of fig width and height)
position = [0.1, 0.1, 0.8, 0.8] # left, bottom, width, height
axes = fig.add_axes(position)
# Plot a line
axes.plot(x, y, label="growth") # label keyword used later!
axes.set_xlabel('X Axis')
axes.set_ylabel('Y Axis')
axes.set_title("Simple Line")
# Use the keywords in the plot method
benchmark_data = [5,5,5,5,5,5]
axes.plot(x, benchmark_data, label="benchmark", color="r", alpha=.5, linewidth=1, linestyle ='-', marker='+', markersize=4)
axes.legend(loc=0)
# Save the image
fig.savefig("file3.jpg")
# Axes Range & Tick Marks
# Figure is a blank canvas
fig = plt.figure(figsize=(8,5), dpi=100) # 800x500 pixel image
# Add axes at specific position (fractions of fig width and height)
position = [0.1, 0.1, 0.8, 0.8] # left, bottom, width, height
axes = fig.add_axes(position)
# Plot a line
axes.plot(x, y, label="growth") # label keyword used later!
axes.set_xlabel('X Axis')
axes.set_ylabel('Y Axis')
axes.set_title("Simple Line")
# Control the range of the axes
axes.set_xlim([1, 6])
axes.set_ylim([1, 50]) # increasing y axis maximum to 50, instead of 35
#axes.axis("tight") # to get auto tight fitted axes, do this
# Control the tick lines
axes.set_xticks([1, 2, 3, 4, 5, 6])
axes.set_yticks([0, 25, 50])
# Control the labels of the tick lines
axes.set_xticklabels(["2018-07-0{0}".format(d) for d in range(1,7)])
axes.set_yticklabels([0, 25, 50])
axes.legend(loc=0)
fig.savefig("file4.jpg")
# Subplots
# 2 graphs side by side
fig1, axes1 = plt.subplots(nrows=1, ncols=2, figsize=(8,5), dpi=100)
# Set up first graph
axes1[0].plot(x, x**2, color='r')
axes1[0].set_xlabel("x")
axes1[0].set_ylabel("y")
axes1[0].set_title("Squared")
# Set up second graph
axes1[1].plot(x, x**3, color='b')
axes1[1].set_xlabel("x")
axes1[1].set_ylabel("y")
axes1[1].set_title("Cubed")
# Automatically adjust the positions of the axes so there is no overlap
fig1.tight_layout()
fig1.savefig("file5.jpg")
###Output
_____no_output_____ |
homework_9/homework_9.ipynb | ###Markdown
Session 9 HomeworkIn this homework, we'll deploy the dogs vs cats model we trained in the previous homework.Download the model from here: https://github.com/alexeygrigorev/large-datasets/releases/download/dogs-cats-model/dogs_cats_10_0.687.h5
###Code
!wget https://github.com/alexeygrigorev/large-datasets/releases/download/dogs-cats-model/dogs_cats_10_0.687.h5 -O dogs_cats.h5
!python -V
import numpy as np
import tensorflow as tf
from tensorflow import keras
tf.__version__
###Output
_____no_output_____
###Markdown
Question 1Now convert this model from Keras to TF-Lite format.What's the size of the converted model?
###Code
model = keras.models.load_model('dogs_cats.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('dogs_cats.tflite', 'wb') as f_out:
f_out.write(tflite_model)
ls -lh | grep dogs_cats.tflite
###Output
-rw-r--r-- 1 root root 43M Dec 1 12:26 dogs_cats.tflite
###Markdown
**Answer**: Size of converted binary: 43M Question 2To be able to use this model, we need to know the index of the input and the index of the output. What's the output index for this model?
###Code
import tensorflow.lite as tflite
interpreter = tflite.Interpreter(model_path='dogs_cats.tflite')
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output_index = interpreter.get_output_details()[0]['index']
print(f'Output index: {output_index}')
###Output
Output index: 13
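###Markdown
For reference (an added sketch), the same detail dictionaries also expose the expected input shape and dtype, which is useful for the preprocessing steps below:
###Code
input_details = interpreter.get_input_details()[0]
print('input shape:', input_details['shape'])
print('input dtype:', input_details['dtype'])
###Output
_____no_output_____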
###Markdown
Preparing the imageYou'll need some code for downloading and resizing images. You can use this code:```pythonfrom io import BytesIOfrom urllib import requestfrom PIL import Imagedef download_image(url): with request.urlopen(url) as resp: buffer = resp.read() stream = BytesIO(buffer) img = Image.open(stream) return imgdef prepare_image(img, target_size): if img.mode != 'RGB': img = img.convert('RGB') img = img.resize(target_size, Image.NEAREST) return img```For that, you'll need to have pillow installed:```bashpip install pillow```Let's download and resize this image: https://upload.wikimedia.org/wikipedia/commons/9/9a/Pug_600.jpgBased on [the solution of the previous homework](https://github.com/alexeygrigorev/mlbookcamp-code/blob/master/course-zoomcamp/08-deep-learning/CNN_solution.ipynb),what should be the target size for the image?
###Code
pip install pillow
from io import BytesIO
from urllib import request
from PIL import Image
def download_image(url):
with request.urlopen(url) as resp:
buffer = resp.read()
stream = BytesIO(buffer)
img = Image.open(stream)
return img
def prepare_image(img, target_size):
if img.mode != 'RGB':
img = img.convert('RGB')
img = img.resize(target_size, Image.NEAREST)
return img
url = 'https://upload.wikimedia.org/wikipedia/commons/9/9a/Pug_600.jpg'
img = download_image(url)
prepared_img = prepare_image(img, (150, 150))
###Output
_____no_output_____
###Markdown
Question 3Now we need to turn the image into a numpy array and pre-process it. > Tip: Check the previous homework. What was the pre-processing > we did there?After the pre-processing, what's the value in the first pixel, the R channel?
###Code
def preprocess_input(x):
x /= 255
return x
from tensorflow.keras.preprocessing import image
x = np.array(prepared_img, dtype='float32')
preprocess_input(x)[0][0][0]
###Output
_____no_output_____
###Markdown
Question 4Now let's apply this model to this image. What's the output of the model?
###Code
# the TF-Lite interpreter expects a batch dimension, so wrap the image in a batch of one
X = np.array([x])
interpreter.set_tensor(input_index, X)
interpreter.invoke()
preds = interpreter.get_tensor(output_index)
classes = [
'dog',
'cat'
]
dict(zip(classes, preds[0]))
###Output
_____no_output_____
###Markdown
Prepare the lambda code Now you need to copy all the code into a separate python file. You will need to use this file for the next two questions.Tip: you can test this file locally with `ipython` or Jupyter Notebook by importing the file and invoking the function from this file. Docker For the next two questions, we'll use a Docker image that I already prepared. This is the Dockerfile that I used for creating the image:```dockerFROM public.ecr.aws/lambda/python:3.8COPY cats-dogs-v2.tflite .```And pushed it to [`agrigorev/zoomcamp-cats-dogs-lambda:v2`](https://hub.docker.com/r/agrigorev/zoomcamp-cats-dogs-lambda/tags).> Note: The image already contains a model and it's not the same model> as the one we used for questions 1-4.
###Code
!pip3 install --extra-index-url https://google-coral.github.io/py-repo/ tflite_runtime
!pip install keras_image_helper
#!/usr/bin/env python
# coding: utf-8
import tflite_runtime.interpreter as tflite
from io import BytesIO
from urllib import request
from PIL import Image
import numpy as np
interpreter = tflite.Interpreter(model_path='dogs_cats.tflite')
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output_index = interpreter.get_output_details()[0]['index']
classes = [
'dog',
'cat'
]
def download_image(url):
with request.urlopen(url) as resp:
buffer = resp.read()
stream = BytesIO(buffer)
img = Image.open(stream)
return img
def prepare_image(img, target_size):
if img.mode != 'RGB':
img = img.convert('RGB')
img = img.resize(target_size, Image.NEAREST)
return img
def predict(url):
img = download_image(url)
prepared_img = prepare_image(img, (150, 150))
    x = np.array(prepared_img, dtype='float32')
    x /= 255  # same rescaling applied in the notebook cells above
    X = np.array([x])  # the interpreter expects a batch dimension
    interpreter.set_tensor(input_index, X)
interpreter.invoke()
preds = interpreter.get_tensor(output_index)
float_predictions = preds[0].tolist()
return dict(zip(classes, float_predictions))
def lambda_handler(event, context):
url = event['url']
result = predict(url)
return result
###Output
_____no_output_____
###Markdown
Question 5Now let's extend this docker image, install all the required librariesand add the code for lambda.You don't need to include the model in the image. It's already included. The name of the file with the model is `cats-dogs-v2.tflite` and it's in the current workdir in the image (see the Dockerfile above for the reference).What's the image id of the base image? In the build logs (on Linux), you'll see a log like that:```$ docker some-command-for-buildingSending build context to Docker daemon 2.048kBStep 1/N : FROM agrigorev/zoomcamp-cats-dogs-lambda:v2 ---> XXXXXXXXXXXXStep 2/N : ....```You need to get this `XXXXXXXXXXXX`. On MacOS and Windows, the logs for `docker build` are different. To get the image id there, you can use `docker image ls -a`.
###Code
###Output
_____no_output_____
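###Markdown
One possible way to extend the base image (a hedged sketch, not the graded answer; the extra packages and the `lambda_function.py` file name are assumptions) is to write a small Dockerfile and build it, which prints the base image id in the `Step 1` line of the build log:
###Code
%%writefile Dockerfile
FROM agrigorev/zoomcamp-cats-dogs-lambda:v2
# The packages below are assumptions -- install whatever the lambda code needs
RUN pip install pillow tflite-runtime
COPY lambda_function.py .
CMD [ "lambda_function.lambda_handler" ]
###Output
_____no_output_____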
###Markdown
Question 6Now run the container locally.Score this image: https://upload.wikimedia.org/wikipedia/commons/1/18/Vombatus_ursinus_-Maria_Island_National_Park.jpgWhat's the output from the model?
###Code
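# A hedged sketch of running the extended image locally and invoking the
# lambda handler through the runtime emulator (the image tag is an assumption):
#   docker build -t zoomcamp-cats-dogs .
#   docker run -it --rm -p 8080:8080 zoomcamp-cats-dogs
import requests

event = {'url': 'https://upload.wikimedia.org/wikipedia/commons/1/18/Vombatus_ursinus_-Maria_Island_National_Park.jpg'}
lambda_url = 'http://localhost:8080/2015-03-31/functions/function/invocations'
print(requests.post(lambda_url, json=event).json())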
###Output
_____no_output_____ |
Deep_Reinforcement_Learning_Connect_4.ipynb | ###Markdown
Deep Reinforcement Learning using AlphaZero methodologyPlease see https://applied-data.science/blog/how-to-build-your-own-alphazero-ai-using-python-and-keras/ for further notes on the codebase
###Code
!rm -rf DeepReinforcementLearning
!git clone https://github.com/AppliedDataSciencePartners/DeepReinforcementLearning.git
!pip install -r ./DeepReinforcementLearning/requirements.txt
###Output
Cloning into 'DeepReinforcementLearning'...
remote: Enumerating objects: 168, done.[K
remote: Total 168 (delta 0), reused 0 (delta 0), pack-reused 168[K
Receiving objects: 100% (168/168), 2.64 MiB | 18.63 MiB/s, done.
Resolving deltas: 100% (79/79), done.
Collecting absl-py==0.1.12 (from -r ./DeepReinforcementLearning/requirements.txt (line 1))
[?25l Downloading https://files.pythonhosted.org/packages/d0/53/cd8524308bb662bb8493ff140c76d329f6e88742e5cd18ee9b3f090ff60d/absl-py-0.1.12.tar.gz (79kB)
[K 100% |████████████████████████████████| 81kB 3.2MB/s
[?25hCollecting appnope==0.1.0 (from -r ./DeepReinforcementLearning/requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/87/a9/7985e6a53402f294c8f0e8eff3151a83f1fb901fa92909bb3ff29b4d22af/appnope-0.1.0-py2.py3-none-any.whl
Collecting astor==0.6.2 (from -r ./DeepReinforcementLearning/requirements.txt (line 3))
Downloading https://files.pythonhosted.org/packages/b2/91/cc9805f1ff7b49f620136b3a7ca26f6a1be2ed424606804b0fbcf499f712/astor-0.6.2-py2.py3-none-any.whl
Collecting bleach==1.5.0 (from -r ./DeepReinforcementLearning/requirements.txt (line 4))
Downloading https://files.pythonhosted.org/packages/33/70/86c5fec937ea4964184d4d6c4f0b9551564f821e1c3575907639036d9b90/bleach-1.5.0-py2.py3-none-any.whl
Requirement already satisfied: cycler==0.10.0 in /usr/local/lib/python3.6/dist-packages (from -r ./DeepReinforcementLearning/requirements.txt (line 5)) (0.10.0)
Collecting decorator==4.2.1 (from -r ./DeepReinforcementLearning/requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/e1/5a/53db15bf367d2028bdc6700dbdf1bdfab46b9f208b7516952817c0808118/decorator-4.2.1-py2.py3-none-any.whl
Collecting gast==0.2.0 (from -r ./DeepReinforcementLearning/requirements.txt (line 7))
Downloading https://files.pythonhosted.org/packages/5c/78/ff794fcae2ce8aa6323e789d1f8b3b7765f601e7702726f430e814822b96/gast-0.2.0.tar.gz
Collecting graphviz==0.8.2 (from -r ./DeepReinforcementLearning/requirements.txt (line 8))
Downloading https://files.pythonhosted.org/packages/05/e4/8fcc76823534d47f079c0ff1b3d8b57784e8fba63ceb1ded32c9f4dd993c/graphviz-0.8.2-py2.py3-none-any.whl
Collecting grpcio==1.10.0 (from -r ./DeepReinforcementLearning/requirements.txt (line 9))
[?25l Downloading https://files.pythonhosted.org/packages/97/ec/a32bb323eeb32236d2faa7876231225c40fba12771c50a44880155e80c20/grpcio-1.10.0-cp36-cp36m-manylinux1_x86_64.whl (7.5MB)
[K 100% |████████████████████████████████| 7.5MB 3.3MB/s
[?25hCollecting h5py==2.7.1 (from -r ./DeepReinforcementLearning/requirements.txt (line 10))
[?25l Downloading https://files.pythonhosted.org/packages/f2/b8/a63fcc840bba5c76e453dd712dbca63178a264c8990e0086b72965d4e954/h5py-2.7.1-cp36-cp36m-manylinux1_x86_64.whl (5.4MB)
[K 100% |████████████████████████████████| 5.4MB 8.0MB/s
[?25hCollecting html5lib==0.9999999 (from -r ./DeepReinforcementLearning/requirements.txt (line 11))
[?25l Downloading https://files.pythonhosted.org/packages/ae/ae/bcb60402c60932b32dfaf19bb53870b29eda2cd17551ba5639219fb5ebf9/html5lib-0.9999999.tar.gz (889kB)
[K 100% |████████████████████████████████| 890kB 20.2MB/s
[?25hCollecting ipykernel==4.8.2 (from -r ./DeepReinforcementLearning/requirements.txt (line 12))
[?25l Downloading https://files.pythonhosted.org/packages/ab/3f/cd624c835aa3336a9110d0a99e15070f343b881b7d651ab1375ef226a3ac/ipykernel-4.8.2-py3-none-any.whl (108kB)
[K 100% |████████████████████████████████| 112kB 33.0MB/s
[?25hCollecting ipython==6.2.1 (from -r ./DeepReinforcementLearning/requirements.txt (line 13))
[?25l Downloading https://files.pythonhosted.org/packages/e1/87/294b718125085559b56453be87d90777863173470167e5f1d5de20b9eea3/ipython-6.2.1-py3-none-any.whl (745kB)
[K 100% |████████████████████████████████| 747kB 21.4MB/s
[?25hRequirement already satisfied: ipython-genutils==0.2.0 in /usr/local/lib/python3.6/dist-packages (from -r ./DeepReinforcementLearning/requirements.txt (line 14)) (0.2.0)
Collecting jedi==0.11.1 (from -r ./DeepReinforcementLearning/requirements.txt (line 15))
[?25l Downloading https://files.pythonhosted.org/packages/50/ca/d71f5a427601c98eadabfd73104cacbec8cc230e8416158decf61a48b0c6/jedi-0.11.1-py2.py3-none-any.whl (250kB)
[K 100% |████████████████████████████████| 256kB 26.9MB/s
[?25hCollecting jupyter-client==5.2.3 (from -r ./DeepReinforcementLearning/requirements.txt (line 16))
[?25l Downloading https://files.pythonhosted.org/packages/94/dd/fe6c4d683b09eb05342bd2816b7779663f71762b4fa9c2d5203d35d17354/jupyter_client-5.2.3-py2.py3-none-any.whl (89kB)
[K 100% |████████████████████████████████| 92kB 28.6MB/s
[?25hRequirement already satisfied: jupyter-core==4.4.0 in /usr/local/lib/python3.6/dist-packages (from -r ./DeepReinforcementLearning/requirements.txt (line 17)) (4.4.0)
Collecting Keras==2.1.5 (from -r ./DeepReinforcementLearning/requirements.txt (line 18))
[?25l Downloading https://files.pythonhosted.org/packages/ba/65/e4aff762b8696ec0626a6654b1e73b396fcc8b7cc6b98d78a1bc53b85b48/Keras-2.1.5-py2.py3-none-any.whl (334kB)
[K 100% |████████████████████████████████| 337kB 28.1MB/s
[?25hRequirement already satisfied: kiwisolver==1.0.1 in /usr/local/lib/python3.6/dist-packages (from -r ./DeepReinforcementLearning/requirements.txt (line 19)) (1.0.1)
Collecting Markdown==2.6.11 (from -r ./DeepReinforcementLearning/requirements.txt (line 20))
[?25l Downloading https://files.pythonhosted.org/packages/6d/7d/488b90f470b96531a3f5788cf12a93332f543dbab13c423a5e7ce96a0493/Markdown-2.6.11-py2.py3-none-any.whl (78kB)
[K 100% |████████████████████████████████| 81kB 27.2MB/s
[?25hCollecting matplotlib==2.2.2 (from -r ./DeepReinforcementLearning/requirements.txt (line 21))
[?25l Downloading https://files.pythonhosted.org/packages/49/b8/89dbd27f2fb171ce753bb56220d4d4f6dbc5fe32b95d8edc4415782ef07f/matplotlib-2.2.2-cp36-cp36m-manylinux1_x86_64.whl (12.6MB)
[K 100% |████████████████████████████████| 12.6MB 3.1MB/s
[?25hCollecting numpy==1.14.2 (from -r ./DeepReinforcementLearning/requirements.txt (line 22))
[?25l Downloading https://files.pythonhosted.org/packages/6e/dc/92c0f670e7b986829fc92c4c0208edb9d72908149da38ecda50d816ea057/numpy-1.14.2-cp36-cp36m-manylinux1_x86_64.whl (12.2MB)
[K 100% |████████████████████████████████| 12.2MB 2.1MB/s
[?25hCollecting parso==0.1.1 (from -r ./DeepReinforcementLearning/requirements.txt (line 23))
[?25l Downloading https://files.pythonhosted.org/packages/c6/2f/96f54499c920070ccc1bffaee115a6a0cf1a0e7ece34b8faa7ee632688dd/parso-0.1.1-py2.py3-none-any.whl (91kB)
[K 100% |████████████████████████████████| 92kB 29.1MB/s
[?25hCollecting pexpect==4.4.0 (from -r ./DeepReinforcementLearning/requirements.txt (line 24))
[?25l Downloading https://files.pythonhosted.org/packages/05/80/3a8f823aadc85a8ba3744d95ba591cbe999fbca5d97429a056fd82c4ea92/pexpect-4.4.0-py2.py3-none-any.whl (56kB)
[K 100% |████████████████████████████████| 61kB 24.5MB/s
[?25hCollecting pickleshare==0.7.4 (from -r ./DeepReinforcementLearning/requirements.txt (line 25))
Downloading https://files.pythonhosted.org/packages/9f/17/daa142fc9be6b76f26f24eeeb9a138940671490b91cb5587393f297c8317/pickleshare-0.7.4-py2.py3-none-any.whl
Collecting prompt-toolkit==1.0.15 (from -r ./DeepReinforcementLearning/requirements.txt (line 26))
[?25l Downloading https://files.pythonhosted.org/packages/04/d1/c6616dd03701e7e2073f06d5c3b41b012256e42b72561f16a7bd86dd7b43/prompt_toolkit-1.0.15-py3-none-any.whl (247kB)
[K 100% |████████████████████████████████| 256kB 18.5MB/s
[?25hCollecting protobuf==3.5.2.post1 (from -r ./DeepReinforcementLearning/requirements.txt (line 27))
[?25l Downloading https://files.pythonhosted.org/packages/74/ad/ecd865eb1ba1ff7f6bd6bcb731a89d55bc0450ced8d457ed2d167c7b8d5f/protobuf-3.5.2.post1-cp36-cp36m-manylinux1_x86_64.whl (6.4MB)
[K 100% |████████████████████████████████| 6.4MB 6.9MB/s
[?25hCollecting ptyprocess==0.5.2 (from -r ./DeepReinforcementLearning/requirements.txt (line 28))
Downloading https://files.pythonhosted.org/packages/ff/4e/fa4a73ccfefe2b37d7b6898329e7dbcd1ac846ba3a3b26b294a78a3eb997/ptyprocess-0.5.2-py2.py3-none-any.whl
Collecting pydot==1.2.4 (from -r ./DeepReinforcementLearning/requirements.txt (line 29))
[?25l Downloading https://files.pythonhosted.org/packages/c3/f1/e61d6dfe6c1768ed2529761a68f70939e2569da043e9f15a8d84bf56cadf/pydot-1.2.4.tar.gz (132kB)
[K 100% |████████████████████████████████| 133kB 34.7MB/s
[?25hCollecting pydot-ng==1.0.0 (from -r ./DeepReinforcementLearning/requirements.txt (line 30))
Downloading https://files.pythonhosted.org/packages/de/64/86b0502c3644190c0b9fed0e378ee18f31b1f0262bdead1eb9ac1d404529/pydot_ng-1.0.0.tar.gz
Collecting Pygments==2.2.0 (from -r ./DeepReinforcementLearning/requirements.txt (line 31))
[?25l Downloading https://files.pythonhosted.org/packages/02/ee/b6e02dc6529e82b75bb06823ff7d005b141037cb1416b10c6f00fc419dca/Pygments-2.2.0-py2.py3-none-any.whl (841kB)
[K 100% |████████████████████████████████| 849kB 23.9MB/s
[?25hCollecting pyparsing==2.2.0 (from -r ./DeepReinforcementLearning/requirements.txt (line 32))
[?25l Downloading https://files.pythonhosted.org/packages/6a/8a/718fd7d3458f9fab8e67186b00abdd345b639976bc7fb3ae722e1b026a50/pyparsing-2.2.0-py2.py3-none-any.whl (56kB)
[K 100% |████████████████████████████████| 61kB 24.8MB/s
[?25hCollecting python-dateutil==2.7.1 (from -r ./DeepReinforcementLearning/requirements.txt (line 33))
[?25l Downloading https://files.pythonhosted.org/packages/95/27/d6be8938e2cd9c956c2c6c0b3253e1c62d6db29a52b477943da3c3ec728c/python_dateutil-2.7.1-py2.py3-none-any.whl (212kB)
[K 100% |████████████████████████████████| 215kB 31.8MB/s
[?25hCollecting pytz==2018.3 (from -r ./DeepReinforcementLearning/requirements.txt (line 34))
[?25l Downloading https://files.pythonhosted.org/packages/3c/80/32e98784a8647880dedf1f6bf8e2c91b195fe18fdecc6767dcf5104598d6/pytz-2018.3-py2.py3-none-any.whl (509kB)
[K 100% |████████████████████████████████| 512kB 23.2MB/s
[?25hCollecting PyYAML==3.12 (from -r ./DeepReinforcementLearning/requirements.txt (line 35))
[?25l Downloading https://files.pythonhosted.org/packages/4a/85/db5a2df477072b2902b0eb892feb37d88ac635d36245a72a6a69b23b383a/PyYAML-3.12.tar.gz (253kB)
[K 100% |████████████████████████████████| 256kB 32.1MB/s
[?25hRequirement already satisfied: pyzmq==17.0.0 in /usr/local/lib/python3.6/dist-packages (from -r ./DeepReinforcementLearning/requirements.txt (line 36)) (17.0.0)
Collecting scipy==1.0.1 (from -r ./DeepReinforcementLearning/requirements.txt (line 37))
[?25l Downloading https://files.pythonhosted.org/packages/2c/13/eb888fcc83f14d114dee794c3491477ce156caa9f456b7bef1112dde36b5/scipy-1.0.1-cp36-cp36m-manylinux1_x86_64.whl (50.0MB)
[K 100% |████████████████████████████████| 50.0MB 723kB/s
[?25hRequirement already satisfied: simplegeneric==0.8.1 in /usr/local/lib/python3.6/dist-packages (from -r ./DeepReinforcementLearning/requirements.txt (line 38)) (0.8.1)
Requirement already satisfied: six==1.11.0 in /usr/local/lib/python3.6/dist-packages (from -r ./DeepReinforcementLearning/requirements.txt (line 39)) (1.11.0)
Collecting tensorboard==1.6.0 (from -r ./DeepReinforcementLearning/requirements.txt (line 40))
[?25l Downloading https://files.pythonhosted.org/packages/b0/67/a8c91665987d359211dcdca5c8b2a7c1e0876eb0702a4383c1e4ff76228d/tensorboard-1.6.0-py3-none-any.whl (3.0MB)
[K 100% |████████████████████████████████| 3.1MB 12.5MB/s
[?25hCollecting tensorflow==1.6.0 (from -r ./DeepReinforcementLearning/requirements.txt (line 41))
[?25l Downloading https://files.pythonhosted.org/packages/d9/0f/fbd8bb92459c75db93040f80702ebe4ba83a52cdb6ad930654c31dc0b711/tensorflow-1.6.0-cp36-cp36m-manylinux1_x86_64.whl (45.8MB)
[K 100% |████████████████████████████████| 45.9MB 872kB/s
[?25hRequirement already satisfied: termcolor==1.1.0 in /usr/local/lib/python3.6/dist-packages (from -r ./DeepReinforcementLearning/requirements.txt (line 42)) (1.1.0)
Collecting tornado==5.0.1 (from -r ./DeepReinforcementLearning/requirements.txt (line 43))
[?25l Downloading https://files.pythonhosted.org/packages/66/60/5b34caa5014eb3f1deb16d0e72cc08abeec7a9c9823486da7984ddadc95f/tornado-5.0.1.tar.gz (504kB)
[K 100% |████████████████████████████████| 512kB 20.5MB/s
[?25hRequirement already satisfied: traitlets==4.3.2 in /usr/local/lib/python3.6/dist-packages (from -r ./DeepReinforcementLearning/requirements.txt (line 44)) (4.3.2)
Requirement already satisfied: wcwidth==0.1.7 in /usr/local/lib/python3.6/dist-packages (from -r ./DeepReinforcementLearning/requirements.txt (line 45)) (0.1.7)
Collecting Werkzeug==0.14.1 (from -r ./DeepReinforcementLearning/requirements.txt (line 46))
[?25l Downloading https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl (322kB)
[K 100% |████████████████████████████████| 327kB 22.4MB/s
[?25hRequirement already satisfied: setuptools>=18.5 in /usr/local/lib/python3.6/dist-packages (from ipython==6.2.1->-r ./DeepReinforcementLearning/requirements.txt (line 13)) (40.9.0)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from tensorboard==1.6.0->-r ./DeepReinforcementLearning/requirements.txt (line 40)) (0.33.1)
Building wheels for collected packages: absl-py, gast, html5lib, pydot, pydot-ng, PyYAML, tornado
Building wheel for absl-py (setup.py) ... [?25ldone
[?25h Stored in directory: /root/.cache/pip/wheels/f3/6d/ce/98cfc871c5846580a58a0167483395a24b2728ab6770fcc218
Building wheel for gast (setup.py) ... [?25ldone
[?25h Stored in directory: /root/.cache/pip/wheels/9a/1f/0e/3cde98113222b853e98fc0a8e9924480a3e25f1b4008cedb4f
Building wheel for html5lib (setup.py) ... [?25ldone
[?25h Stored in directory: /root/.cache/pip/wheels/50/ae/f9/d2b189788efcf61d1ee0e36045476735c838898eef1cad6e29
Building wheel for pydot (setup.py) ... [?25ldone
[?25h Stored in directory: /root/.cache/pip/wheels/6a/a5/14/25541ebcdeaf97a37b6d05c7ff15f5bd20f5e91b99d313e5b4
Building wheel for pydot-ng (setup.py) ... [?25ldone
[?25h Stored in directory: /root/.cache/pip/wheels/ef/d1/b6/e2b937c79d99d49b7db9233832a425c7e6b787486f5831b302
Building wheel for PyYAML (setup.py) ... [?25ldone
[?25h Stored in directory: /root/.cache/pip/wheels/03/05/65/bdc14f2c6e09e82ae3e0f13d021e1b6b2481437ea2f207df3f
Building wheel for tornado (setup.py) ... [?25ldone
[?25h Stored in directory: /root/.cache/pip/wheels/09/6b/dd/9c4fa388872cddb7e88ee7c457d2213c65ae74b8099cb4c5ca
Successfully built absl-py gast html5lib pydot pydot-ng PyYAML tornado
[31mtfds-nightly 1.0.2.dev201904090105 has requirement protobuf>=3.6.1, but you'll have protobuf 3.5.2.post1 which is incompatible.[0m
[31mtensorflow-metadata 0.13.0 has requirement protobuf<4,>=3.7, but you'll have protobuf 3.5.2.post1 which is incompatible.[0m
[31mspacy 2.0.18 has requirement numpy>=1.15.0, but you'll have numpy 1.14.2 which is incompatible.[0m
[31mnetworkx 2.3 has requirement decorator>=4.3.0, but you'll have decorator 4.2.1 which is incompatible.[0m
[31mmagenta 0.3.19 has requirement tensorflow>=1.12.0, but you'll have tensorflow 1.6.0 which is incompatible.[0m
[31mjupyter-console 6.0.0 has requirement prompt-toolkit<2.1.0,>=2.0.0, but you'll have prompt-toolkit 1.0.15 which is incompatible.[0m
[31mimgaug 0.2.8 has requirement numpy>=1.15.0, but you'll have numpy 1.14.2 which is incompatible.[0m
[31mgoogleapis-common-protos 1.5.9 has requirement protobuf>=3.6.0, but you'll have protobuf 3.5.2.post1 which is incompatible.[0m
[31mgoogle-colab 1.0.0 has requirement ipykernel~=4.6.0, but you'll have ipykernel 4.8.2 which is incompatible.[0m
[31mgoogle-colab 1.0.0 has requirement ipython~=5.5.0, but you'll have ipython 6.2.1 which is incompatible.[0m
[31mgoogle-colab 1.0.0 has requirement tornado~=4.5.0, but you'll have tornado 5.0.1 which is incompatible.[0m
[31mfastai 1.0.51 has requirement numpy>=1.15, but you'll have numpy 1.14.2 which is incompatible.[0m
[31mdopamine-rl 1.0.5 has requirement absl-py>=0.2.2, but you'll have absl-py 0.1.12 which is incompatible.[0m
[31mdatascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.[0m
[31mcvxpy 1.0.15 has requirement scipy>=1.1.0, but you'll have scipy 1.0.1 which is incompatible.[0m
[31malbumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.8 which is incompatible.[0m
Installing collected packages: absl-py, appnope, astor, html5lib, bleach, decorator, gast, graphviz, protobuf, grpcio, numpy, h5py, ptyprocess, pexpect, prompt-toolkit, pickleshare, parso, jedi, Pygments, ipython, python-dateutil, tornado, jupyter-client, ipykernel, scipy, PyYAML, Keras, Markdown, pyparsing, pytz, matplotlib, pydot, pydot-ng, Werkzeug, tensorboard, tensorflow
Found existing installation: absl-py 0.7.1
Uninstalling absl-py-0.7.1:
Successfully uninstalled absl-py-0.7.1
Found existing installation: astor 0.7.1
Uninstalling astor-0.7.1:
Successfully uninstalled astor-0.7.1
Found existing installation: html5lib 1.0.1
Uninstalling html5lib-1.0.1:
Successfully uninstalled html5lib-1.0.1
Found existing installation: bleach 3.1.0
Uninstalling bleach-3.1.0:
Successfully uninstalled bleach-3.1.0
Found existing installation: decorator 4.4.0
Uninstalling decorator-4.4.0:
Successfully uninstalled decorator-4.4.0
Found existing installation: gast 0.2.2
Uninstalling gast-0.2.2:
Successfully uninstalled gast-0.2.2
Found existing installation: graphviz 0.10.1
Uninstalling graphviz-0.10.1:
Successfully uninstalled graphviz-0.10.1
Found existing installation: protobuf 3.7.1
Uninstalling protobuf-3.7.1:
Successfully uninstalled protobuf-3.7.1
Found existing installation: grpcio 1.15.0
Uninstalling grpcio-1.15.0:
Successfully uninstalled grpcio-1.15.0
Found existing installation: numpy 1.16.2
Uninstalling numpy-1.16.2:
Successfully uninstalled numpy-1.16.2
Found existing installation: h5py 2.8.0
Uninstalling h5py-2.8.0:
Successfully uninstalled h5py-2.8.0
Found existing installation: ptyprocess 0.6.0
Uninstalling ptyprocess-0.6.0:
Successfully uninstalled ptyprocess-0.6.0
Found existing installation: pexpect 4.7.0
Uninstalling pexpect-4.7.0:
Successfully uninstalled pexpect-4.7.0
Found existing installation: prompt-toolkit 1.0.16
Uninstalling prompt-toolkit-1.0.16:
Successfully uninstalled prompt-toolkit-1.0.16
Found existing installation: pickleshare 0.7.5
Uninstalling pickleshare-0.7.5:
Successfully uninstalled pickleshare-0.7.5
Found existing installation: parso 0.4.0
Uninstalling parso-0.4.0:
Successfully uninstalled parso-0.4.0
Found existing installation: jedi 0.13.3
Uninstalling jedi-0.13.3:
Successfully uninstalled jedi-0.13.3
Found existing installation: Pygments 2.1.3
Uninstalling Pygments-2.1.3:
Successfully uninstalled Pygments-2.1.3
Found existing installation: ipython 5.5.0
Uninstalling ipython-5.5.0:
Successfully uninstalled ipython-5.5.0
Found existing installation: python-dateutil 2.5.3
Uninstalling python-dateutil-2.5.3:
Successfully uninstalled python-dateutil-2.5.3
Found existing installation: tornado 4.5.3
Uninstalling tornado-4.5.3:
Successfully uninstalled tornado-4.5.3
Found existing installation: jupyter-client 5.2.4
Uninstalling jupyter-client-5.2.4:
Successfully uninstalled jupyter-client-5.2.4
Found existing installation: ipykernel 4.6.1
Uninstalling ipykernel-4.6.1:
Successfully uninstalled ipykernel-4.6.1
Found existing installation: scipy 1.2.1
Uninstalling scipy-1.2.1:
Successfully uninstalled scipy-1.2.1
Found existing installation: PyYAML 3.13
Uninstalling PyYAML-3.13:
Successfully uninstalled PyYAML-3.13
Found existing installation: Keras 2.2.4
Uninstalling Keras-2.2.4:
Successfully uninstalled Keras-2.2.4
Found existing installation: Markdown 3.1
Uninstalling Markdown-3.1:
Successfully uninstalled Markdown-3.1
Found existing installation: pyparsing 2.4.0
Uninstalling pyparsing-2.4.0:
Successfully uninstalled pyparsing-2.4.0
Found existing installation: pytz 2018.9
Uninstalling pytz-2018.9:
Successfully uninstalled pytz-2018.9
Found existing installation: matplotlib 3.0.3
Uninstalling matplotlib-3.0.3:
Successfully uninstalled matplotlib-3.0.3
Found existing installation: pydot 1.3.0
Uninstalling pydot-1.3.0:
Successfully uninstalled pydot-1.3.0
Found existing installation: pydot-ng 2.0.0
Uninstalling pydot-ng-2.0.0:
Successfully uninstalled pydot-ng-2.0.0
Found existing installation: Werkzeug 0.15.2
Uninstalling Werkzeug-0.15.2:
Successfully uninstalled Werkzeug-0.15.2
Found existing installation: tensorboard 1.13.1
Uninstalling tensorboard-1.13.1:
Successfully uninstalled tensorboard-1.13.1
Found existing installation: tensorflow 1.13.1
Uninstalling tensorflow-1.13.1:
Successfully uninstalled tensorflow-1.13.1
Successfully installed Keras-2.1.5 Markdown-2.6.11 PyYAML-3.12 Pygments-2.2.0 Werkzeug-0.14.1 absl-py-0.1.12 appnope-0.1.0 astor-0.6.2 bleach-1.5.0 decorator-4.2.1 gast-0.2.0 graphviz-0.8.2 grpcio-1.10.0 h5py-2.7.1 html5lib-0.9999999 ipykernel-4.8.2 ipython-6.2.1 jedi-0.11.1 jupyter-client-5.2.3 matplotlib-2.2.2 numpy-1.14.2 parso-0.1.1 pexpect-4.4.0 pickleshare-0.7.4 prompt-toolkit-1.0.15 protobuf-3.5.2.post1 ptyprocess-0.5.2 pydot-1.2.4 pydot-ng-1.0.0 pyparsing-2.2.0 python-dateutil-2.7.1 pytz-2018.3 scipy-1.0.1 tensorboard-1.6.0 tensorflow-1.6.0 tornado-5.0.1
###Markdown
1. First load the core libraries
###Code
%cd ./DeepReinforcementLearning
!ls
# -*- coding: utf-8 -*-
# %matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
np.set_printoptions(suppress=True)
from shutil import copyfile
import random
from importlib import reload
from keras.utils import plot_model
from game import Game, GameState
from agent import Agent
from memory import Memory
from model import Residual_CNN
from funcs import playMatches, playMatchesBetweenVersions
import loggers as lg
from settings import run_folder, run_archive_folder
import initialise
import pickle
###Output
/usr/local/lib/python3.6/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
###Markdown
2. Now run this block to start the learning processThis block loops forever, continually learning from new game data.The current best model and memories are saved in the run folder, so you can kill the process and restart from the last checkpoint.
###Code
lg.logger_main.info('=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*')
lg.logger_main.info('=*=*=*=*=*=. NEW LOG =*=*=*=*=*')
lg.logger_main.info('=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*')
env = Game()
# If loading an existing neural network, copy the config file to root
if initialise.INITIAL_RUN_NUMBER != None:
copyfile(run_archive_folder + env.name + '/run' + str(initialise.INITIAL_RUN_NUMBER).zfill(4) + '/config.py', './config.py')
import config
######## LOAD MEMORIES IF NECESSARY ########
if initialise.INITIAL_MEMORY_VERSION == None:
memory = Memory(config.MEMORY_SIZE)
else:
print('LOADING MEMORY VERSION ' + str(initialise.INITIAL_MEMORY_VERSION) + '...')
memory = pickle.load( open( run_archive_folder + env.name + '/run' + str(initialise.INITIAL_RUN_NUMBER).zfill(4) + "/memory/memory" + str(initialise.INITIAL_MEMORY_VERSION).zfill(4) + ".p", "rb" ) )
######## LOAD MODEL IF NECESSARY ########
# create an untrained neural network objects from the config file
current_NN = Residual_CNN(config.REG_CONST, config.LEARNING_RATE, (2,) + env.grid_shape, env.action_size, config.HIDDEN_CNN_LAYERS)
best_NN = Residual_CNN(config.REG_CONST, config.LEARNING_RATE, (2,) + env.grid_shape, env.action_size, config.HIDDEN_CNN_LAYERS)
#If loading an existing neural network, set the weights from that model
if initialise.INITIAL_MODEL_VERSION != None:
best_player_version = initialise.INITIAL_MODEL_VERSION
print('LOADING MODEL VERSION ' + str(initialise.INITIAL_MODEL_VERSION) + '...')
m_tmp = best_NN.read(env.name, initialise.INITIAL_RUN_NUMBER, best_player_version)
current_NN.model.set_weights(m_tmp.get_weights())
best_NN.model.set_weights(m_tmp.get_weights())
#otherwise just ensure the weights on the two players are the same
else:
best_player_version = 0
best_NN.model.set_weights(current_NN.model.get_weights())
#copy the config file to the run folder
copyfile('./config.py', run_folder + 'config.py')
plot_model(current_NN.model, to_file=run_folder + 'models/model.png', show_shapes = True)
print('\n')
######## CREATE THE PLAYERS ########
current_player = Agent('current_player', env.state_size, env.action_size, config.MCTS_SIMS, config.CPUCT, current_NN)
best_player = Agent('best_player', env.state_size, env.action_size, config.MCTS_SIMS, config.CPUCT, best_NN)
#user_player = User('player1', env.state_size, env.action_size)
iteration = 0
while 1:
iteration += 1
reload(lg)
reload(config)
print('ITERATION NUMBER ' + str(iteration))
lg.logger_main.info('BEST PLAYER VERSION: %d', best_player_version)
print('BEST PLAYER VERSION ' + str(best_player_version))
######## SELF PLAY ########
print('SELF PLAYING ' + str(config.EPISODES) + ' EPISODES...')
_, memory, _, _ = playMatches(best_player, best_player, config.EPISODES, lg.logger_main, turns_until_tau0 = config.TURNS_UNTIL_TAU0, memory = memory)
print('\n')
memory.clear_stmemory()
if len(memory.ltmemory) >= config.MEMORY_SIZE:
######## RETRAINING ########
print('RETRAINING...')
current_player.replay(memory.ltmemory)
print('')
if iteration % 5 == 0:
pickle.dump( memory, open( run_folder + "memory/memory" + str(iteration).zfill(4) + ".p", "wb" ) )
lg.logger_memory.info('====================')
lg.logger_memory.info('NEW MEMORIES')
lg.logger_memory.info('====================')
memory_samp = random.sample(memory.ltmemory, min(1000, len(memory.ltmemory)))
for s in memory_samp:
current_value, current_probs, _ = current_player.get_preds(s['state'])
best_value, best_probs, _ = best_player.get_preds(s['state'])
lg.logger_memory.info('MCTS VALUE FOR %s: %f', s['playerTurn'], s['value'])
lg.logger_memory.info('CUR PRED VALUE FOR %s: %f', s['playerTurn'], current_value)
lg.logger_memory.info('BES PRED VALUE FOR %s: %f', s['playerTurn'], best_value)
lg.logger_memory.info('THE MCTS ACTION VALUES: %s', ['%.2f' % elem for elem in s['AV']] )
lg.logger_memory.info('CUR PRED ACTION VALUES: %s', ['%.2f' % elem for elem in current_probs])
lg.logger_memory.info('BES PRED ACTION VALUES: %s', ['%.2f' % elem for elem in best_probs])
lg.logger_memory.info('ID: %s', s['state'].id)
lg.logger_memory.info('INPUT TO MODEL: %s', current_player.model.convertToModelInput(s['state']))
s['state'].render(lg.logger_memory)
######## TOURNAMENT ########
print('TOURNAMENT...')
scores, _, points, sp_scores = playMatches(best_player, current_player, config.EVAL_EPISODES, lg.logger_tourney, turns_until_tau0 = 0, memory = None)
print('\nSCORES')
print(scores)
print('\nSTARTING PLAYER / NON-STARTING PLAYER SCORES')
print(sp_scores)
#print(points)
print('\n\n')
if scores['current_player'] > scores['best_player'] * config.SCORING_THRESHOLD:
best_player_version = best_player_version + 1
best_NN.model.set_weights(current_NN.model.get_weights())
best_NN.write(env.name, best_player_version)
else:
print('MEMORY SIZE: ' + str(len(memory.ltmemory)))
###Output
WARNING:tensorflow:From /content/DeepReinforcementLearning/loss.py:15: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.
See tf.nn.softmax_cross_entropy_with_logits_v2.
ITERATION NUMBER 1
BEST PLAYER VERSION 0
SELF PLAYING 30 EPISODES...
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
MEMORY SIZE: 1576
ITERATION NUMBER 2
BEST PLAYER VERSION 0
SELF PLAYING 30 EPISODES...
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
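###Markdown
The hyperparameters driving the loop above all come from `config.py`; a quick way to inspect the values in use (an added sketch using only the names referenced in the training loop):
###Code
import config
print('self-play episodes per iteration:', config.EPISODES)
print('MCTS simulations per move:', config.MCTS_SIMS)
print('long-term memory size:', config.MEMORY_SIZE)
print('evaluation episodes:', config.EVAL_EPISODES)
print('scoring threshold:', config.SCORING_THRESHOLD)
###Output
_____no_output_____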
###Markdown
The following panels are not involved in the learning process Play matches between versions (use -1 for a human player)
###Code
from game import Game
from funcs import playMatchesBetweenVersions
import loggers as lg
env = Game()
playMatchesBetweenVersions(env, 1, 1, 1, 10, lg.logger_tourney, 0)
###Output
_____no_output_____
###Markdown
Pass a particular game state through the neural network (setup below for Connect4)
###Code
gs = GameState(np.array([
0,0,0,0,0,0,0,
0,0,0,0,0,0,0,
0,0,0,0,0,0,0,
0,0,0,0,0,0,0,
0,0,0,0,0,0,0,
0,0,0,0,0,0,0
]), 1)
preds = current_player.get_preds(gs)
print(preds)
###Output
_____no_output_____
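###Markdown
`get_preds` returns the value head, the policy probabilities and the allowed actions (the same three-way unpacking used in the memory-logging block of the training loop); a small added sketch for inspecting them:
###Code
value, probs, allowed_actions = preds
print('predicted value for the empty board:', value)
print('policy vector length:', len(probs))
print('number of allowed actions:', len(allowed_actions))
###Output
_____no_output_____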
###Markdown
See the layers of the current neural network
###Code
current_player.model.viewLayers()
###Output
_____no_output_____
###Markdown
Output a diagram of the neural network architecture
###Code
from keras.utils import plot_model
plot_model(current_NN.model, to_file=run_folder + 'models/model.png', show_shapes = True)
###Output
_____no_output_____ |
src/user_guide/magic_env.ipynb | ###Markdown
Running cell in modified environments * **Difficulty level**: easy* **Time needed to learn**: 10 minutes or less* **Key points**: * Magic `%env --new` runs the cell in a fresh SoS environment * Magic `%env --tempdir` runs the cell in a temporary directory that will be removed after the completion of the cell * Magic `%env --expect-error` expects an error from the cell * Magic `%env --allow-error` ignores errors from the cell * Magic `%env --set KEY=VAR` sets an environment variable * Magic `%env --prepend-path PATH` prepends `PATH` to `$PATH` Running with fresh SoS environment Sometimes when you are developing a script you would like to make sure that the script can be executed independently. In this case you can run it in a sandbox, which runs the cell in a fresh SoS dictionary.For example, when you define a variable `filename`,
###Code
filename = 'magic_env.ipynb'
###Output
_____no_output_____
###Markdown
you can use it in other cells
###Code
sh: expand=True
wc -l {filename}
###Output
483 magic_env.ipynb
###Markdown
But you can not use it directly in a new SoS environment (thus cannot be used directly as a workflow step),
###Code
%env --new --expect-error
sh: expand=True
wc -l {filename}
###Output
ExecuteError: [0]:
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
script_522225025603812617 in <module>
sh(fr"""wc -l {filename}
----> """)
NameError: name 'filename' is not defined
###Markdown
This would remind you that `filename` is not defined in the script and you will need to define it as a parameter and pass it from command line if you would like to expect it as a complete workflow with magic `%run`:
###Code
%run --filename={filename} -v1
parameter: filename=str
sh: expand=True
wc -l {filename}
###Output
_____no_output_____
###Markdown
Using a temporary directory (option `--tempdir`) With option `--tempdir`, this magic will create a temporary directory and set it as the current directory before the execution of the cell, and remove the directory after the completion of the cell. For example, when you execute a workflow to create `sandbox.txt` with `%env --tempdir`, this file will not exist in the current directory after the completion of the cell.
###Code
%env --tempdir
output: 'sandbox.txt'
_output.touch()
!ls sandbox.txt
###Output
ls: sandbox.txt: No such file or directory
###Markdown
Expect an error (option `--expect-error`) Sometimes, for example, in the documentation of SoS, we would intentionally generate an error to demonstrate a problem. The error does not matter when we execute the notebook interactively, but it would stop the execution of the notebook with `Run All Cells` from Jupyter, or when the notebook is executed in batch mode with `papermill --engine sos`.To fix this problem, as has been done in some previous cells of this document, you can use magic `%env --expect-error`, which lets SoS know that the cell will return an error. The magic will return `ok` if an `error` is received, and actually `error` if an `ok` is received. Allow errors (option `--allow-error`) Similar to `%env --expect-error`, `%env --allow-error` will tolerate an error returned from the cell, so it will return `ok` if the cell returns `error`, or `ok`. Set environment variables (option `--set`) Option `--set` sets environment variables for the execution of a cell, which is meant to be temporary since the variables will be unset as soon as the execution of the cell is completed.The values of this option can be one or more of * `KEY=VALUE` pair which sets the value of environment variable `KEY` to `VALUE` * `KEY` which sets the value of environment variable `KEY` to `''`
###Code
%env --set LOGGING='DEBUG'
sh:
echo $LOGGING
sh:
echo $LOGGING
###Output
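###Markdown
A minimal added sketch of `--allow-error` described above (not one of the original examples): the failing shell command below would normally mark the cell as failed, but with the option the cell still reports `ok`.
###Code
%env --allow-error
sh:
    exit 1
###Output
_____no_output_____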
###Markdown
Set `PATH` (option `--prepend-path`) If you would like to use a command in a specific path instead of the one in the system `$PATH`, you can use option `env` for actions (see [SoS actions](sos_actions) for details). You can also modify the `PATH` of the current cell by using option `--prepend-path`.For example,
###Code
R:
R.Version()$version.string
%env --prepend-path /usr/local/bin
R:
R.Version()$version.string
###Output
[1] "R version 3.5.2 (2018-12-20)"
|
notebooks/deanna_nash/analysis/compute_ivt_ar_detection_s2s_forecase.ipynb | ###Markdown
Compute ivtx and ivty netcdf for AR detection algorithm**Author: Deanna Nash**This notebook computes ivtx and ivty using the ECMWF QUV.grb files and writes to a netCDF that is compliant for use with the Guan and Waliser AR detection algorithm.
###Code
%matplotlib inline
import sys
from netCDF4 import Dataset
import netCDF4 as nc
import cftime
from datetime import datetime, timedelta
from netCDF4 import num2date, date2num
import time as time2
import numpy as np
import pandas as pd
import xarray as xr
import eofs
from eofs.standard import Eof
import glob
# you need intake-esm V 2020.11.4 and intake V 0.6.0
# import tensorflow as tf
import cartopy
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter
from shapely.geometry.polygon import LinearRing
import matplotlib as mpl
import matplotlib.pyplot as plt
from netCDF4 import Dataset
from matplotlib import cm
import copy
import fsspec
import intake
path_to_data = '/glade/scratch/dlnash/data/ECMWF/'
aneesh_data = '/glade/scratch/acsubram/S2S_Database/'
###Output
_____no_output_____
###Markdown
Compute ivtx and ivty for selected initialization date
###Code
import re
# for file in glob.glob(aneesh_data + "QUV_20170119.grb"):
# Read Control Run
ds = aneesh_data + "QUV_20170119.grb"
dsopen = xr.open_dataset(ds, engine='cfgrib')
u = dsopen['u']
v = dsopen['v']
q = dsopen['q']
time_ctl = dsopen['time']
lon_ctl = dsopen['longitude']
lat_ctl = dsopen['latitude']
ens_ctl = dsopen['number']
step_ctl = dsopen['step']
prs_ctl = dsopen['isobaricInhPa']
# calculate the zonal and meridional horizontal vapour transport
# Pressure levels: 1000 925 850 700 500 300 200
#200 - 300mb
qu0 = np.mean(q.sel(isobaricInhPa=[200,300]),axis=2) * np.mean(u.sel(isobaricInhPa=[200,300]),axis=2)*10000/9.8
#300 - 500mb
qu1 = np.mean(q.sel(isobaricInhPa=[300,500]),axis=2) * np.mean(u.sel(isobaricInhPa=[300,500]),axis=2)*20000/9.8
#500 - 700 mb
qu2 = np.mean(q.sel(isobaricInhPa=[500,700]),axis=2) * np.mean(u.sel(isobaricInhPa=[500,700]),axis=2)*20000/9.8
#700 - 850 mb
qu3 = np.mean(q.sel(isobaricInhPa=[700,850]),axis=2) * np.mean(u.sel(isobaricInhPa=[700,850]),axis=2)*15000/9.8
#850 - 925 mb
qu4 = np.mean(q.sel(isobaricInhPa=[850,925]),axis=2) * np.mean(u.sel(isobaricInhPa=[850,925]),axis=2)*7500/9.8
#925 - 1000 mb
qu5 = np.mean(q.sel(isobaricInhPa=[925,1000]),axis=2) * np.mean(u.sel(isobaricInhPa=[925,1000]),axis=2)*7500/9.8
qu = qu0+qu1+qu2+qu3+qu4+qu5
#200 - 300mb
qv0 = np.mean(q.sel(isobaricInhPa=[200,300]),axis=2) * np.mean(v.sel(isobaricInhPa=[200,300]),axis=2)*10000/9.8
#300 - 500mb
qv1 = np.mean(q.sel(isobaricInhPa=[300,500]),axis=2) * np.mean(v.sel(isobaricInhPa=[300,500]),axis=2)*20000/9.8
#500 - 700 mb
qv2 = np.mean(q.sel(isobaricInhPa=[500,700]),axis=2) * np.mean(v.sel(isobaricInhPa=[500,700]),axis=2)*20000/9.8
#700 - 850 mb
qv3 = np.mean(q.sel(isobaricInhPa=[700,850]),axis=2) * np.mean(v.sel(isobaricInhPa=[700,850]),axis=2)*15000/9.8
#850 - 925 mb
qv4 = np.mean(q.sel(isobaricInhPa=[850,925]),axis=2) * np.mean(v.sel(isobaricInhPa=[850,925]),axis=2)*7500/9.8
#925 - 1000 mb
qv5 = np.mean(q.sel(isobaricInhPa=[925,1000]),axis=2) * np.mean(v.sel(isobaricInhPa=[925,1000]),axis=2)*7500/9.8
qv = qv0+qv1+qv2+qv3+qv4+qv5
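# (Added sketch) the IVT magnitude can be derived from the two components,
# handy as a sanity check on the vertically integrated transport
ivt_mag = np.sqrt(qu**2 + qv**2)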
tmp = xr.Dataset({'ivtx': qu,
'ivty': qv})
tmp['ivtx'] = qu
tmp['ivty'] = qv
# tmp['lev'] = ('lev', [1])
# tmp = tmp.set_coords('lev')
## hack for times so they aren't in a gregorian calendar which the algorithm did not like
# initialization date
t1 = pd.date_range(start='2017-01-19', end='2017-01-19', freq='1D')
# valid times
times_lst = pd.date_range(start='2017-01-19', end='2017-03-06', freq='1D')
times_lst
dates=times_lst.tolist()
units="days since 1900-01-01 00"
cal = 'standard'
timez = nc.date2num(dates, units, calendar=cal)
time1 = nc.date2num(t1.tolist(), units, calendar=cal)
# cftime.date2index(times_lst, nctime, calendar=None, select='exact', has_year_zero=None)
# cftime.date2num(dates, units, calendar=None)
# dns = xr.Dataset(
# {
# "ivtx": (['ens', 'time', 'lat', 'lon'], tmp.ivtx.values),
# "ivty": (['ens', 'time', 'lat', 'lon'], tmp.ivty.values)
# },
# coords={
# "ens": tmp.number.values, # ensemble number
# "time": timez, # days since initialization date
# "lat":tmp.latitude.values,
# "lon":tmp.longitude.values,
# },)
# # dimensions:
# # lon = 240 ;
# # lat = 121 ;
# # lev = 47 ; # change to step
# # time = UNLIMITED ; // (1 currently)
# # ens = 11 ; # number
# dns = dns.expand_dims(dim={"lev":1}) # initialization date
# dns['lev'] = ('lev', [1])
# dns = dns.set_coords('lev')
# dns
dns = xr.Dataset(
{
"ivtx": (['ens', 'lev', 'lat', 'lon'], tmp.ivtx.values),
"ivty": (['ens', 'lev', 'lat', 'lon'], tmp.ivty.values)
},
coords={
"ens": tmp.number.values, # ensemble number
"lev": np.arange(len(tmp.valid_time.values)), # days since initialization date
"lat":tmp.latitude.values,
"lon":tmp.longitude.values,
},)
# dimensions:
# lon = 240 ;
# lat = 121 ;
# lev = 47 ; # change to step
# time = UNLIMITED ; // (1 currently)
# ens = 11 ; # number
# add time dimension
dns = dns.expand_dims(dim={"time":1}) # initialization date
# dns['time'] = ('time', [time1])
dns = dns.assign(time=lambda dns: time1)
dns = dns.set_coords('time')
# update time attributes
dns.time.attrs = dict(
units="days since 1900-01-01 00",
calendar='standard'
)
print(dns.time.attrs)
# reorder dimensions
dns = dns.transpose('ens', 'time', 'lev', 'lat', 'lon')
dns
# write to ds with time as unlimited dimension
dns.to_netcdf(path_to_data+"IVT_20170119.nc", unlimited_dims=['time'])
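# (Added sketch) quick verification that the file has the expected dimensions
# and that 'time' was written as the unlimited (record) dimension
with nc.Dataset(path_to_data + "IVT_20170119.nc") as check:
    print(check.dimensions)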
###Output
_____no_output_____ |
docs/Vive Position Calibration using Several Points and Minimisation.ipynb | ###Markdown
Vive Position Calibration using Several Points and MinimisationThe following script is made to calibrate the position of the base stations using several points and formulating a minimisation problem to fit the positions of the two base stations to each other before transforming this system to adhere to a spatially defined coordinate system.Though theoretically a better approach than the 4-point solution, this method has been known to run into local minima that keep the optimizer from converging satisfactorily. We know this is in part caused by the systematic deviations in angle we observe (see [validation](Vive Validation.ipynb) under section "Base Station Angle" for details).
###Code
%matplotlib notebook
import math
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import scipy.optimize as opt
## Rotation matrices
def rotate_x(ang):
return np.array([[1,0,0],[0,np.cos(ang),-np.sin(ang)],[0,np.sin(ang),np.cos(ang)]])
def rotate_y(ang):
return np.array([[np.cos(ang),0,np.sin(ang)],[0,1,0],[-np.sin(ang),0,np.cos(ang)]])
def rotate_z(ang):
return np.array([[np.cos(ang),-np.sin(ang),0],[np.sin(ang),np.cos(ang),0],[0,0,1]])
def rotate_zyx(x,y,z):
return np.matmul(rotate_x(x),np.matmul(rotate_y(y),rotate_z(z)))
## Rotation helpers
def transform_to_lh_view(pt, pose):
rotation = rotate_zyx(pose[3],pose[4],pose[5])
return np.matmul(rotation,pt-pose[0:3])
def measure_samples(samples, pose):
output = np.zeros_like(samples)
for i in range(0,samples.shape[1]):
output[...,i] = transform_to_lh_view(samples[...,i],pose)
output[...,i] = output[...,i]/np.linalg.norm(output[...,i])
return output
## Distance between lines
def line_distance(P,u,Q,v):
w0 = np.array(P)-np.array(Q)
a = np.dot(u,u)
b = np.dot(u,v)
c = np.dot(v,v)
d = np.dot(u,w0)
e = np.dot(v,w0)
return np.linalg.norm(w0 + ((b*e-c*d)*u-(a*e-b*d)*v)/(a*c-b*b))
def line_distance_front(p1,v1,p2,v2):
W0 = np.array(p1) - np.array(p2)
a = np.dot(v1, v1)
b = np.dot(v1, v2)
c = np.dot(v2, v2)
d = np.dot(v1, W0)
e = np.dot(v2, W0)
denom = a*c - b*b
s = (b*e - c*d) / denom
t = (a*e - b*d) / denom
P = p1 + v1*s
Q = p2 + v2*t
# return distance only for line in front of base station
if (np.dot(P-p1, v1) < 0):
P = p1
if (np.dot(Q-p2, v2) < 0):
Q = p2
return np.linalg.norm(P-Q)
def rotate_zyx_to_angles(R):
x = -math.atan2(R[1,2],R[2,2])
y = math.asin(R[0,2])
z = -math.atan2(R[0,1],R[0,0])
return np.array([x,y,z])
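# (Added sketch) round-trip check of the Euler-angle helpers: composing a
# rotation and decomposing it again should recover the original angles.
print(rotate_zyx_to_angles(rotate_zyx(0.1, 0.2, 0.3)))  # expect ~[0.1 0.2 0.3]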
def normVec(v):
return v/np.linalg.norm(v)
## Plotting helper functions
def transform_to_pose(vec, pose):
rotated = np.matmul(rotate_zyx(pose[3],pose[4],pose[5]),vec)
return rotated + pose[0:3]
def transform_vector_to_pose(vec, pose):
return np.matmul(rotate_zyx(pose[3],pose[4],pose[5]),vec)
def plot_axes(ax, pose, color):
scale = 0.5
ps = np.array([[0,0,1],[0,0,0],[0,1,0],[0,0,0],[1,0,0]]).T
ps = scale*ps
transformed = np.zeros_like(ps)
for i in range(0,ps.shape[1]):
transformed[...,i] = transform_to_pose(ps[...,i],pose)
ax.plot(transformed[0,...],transformed[1,...],transformed[2,...],'-o',markersize=5,markevery=10,color=color)
def plot_measured_lines(ax, pose, samples, color, length):
rotation = rotate_zyx(pose[3],pose[4],pose[5])
measured = measure_samples(samples, pose)
for i in range(0,measured.shape[1]):
line = measured[...,i]/np.linalg.norm(measured[...,i])*length
rotated = np.matmul(rotation,line)+pose[0:3]
ax.plot([pose[0],rotated[0]],[pose[1],rotated[1]],[pose[2],rotated[2]],'--',color=color)
def plot_measured_lines2(ax, pose, measured, color, length):
rotation = rotate_zyx(pose[3],pose[4],pose[5])
#measured = measure_samples(samples, pose)
for i in range(0,measured.shape[1]):
line = measured[...,i]/np.linalg.norm(measured[...,i])*length
rotated = np.matmul(rotation,line)+pose[0:3]
ax.plot([pose[0],rotated[0]],[pose[1],rotated[1]],[pose[2],rotated[2]],'-',color=color, alpha=0.6, lw=0.4)
## hAngle is angle in horizontal plane, vAngle in vertical.
def measured_angles_to_vector(hAngle, vAngle):
y = np.sin(hAngle)
x = np.sin(vAngle)
z = (1-np.sin(hAngle)**2-np.sin(vAngle)**2)**0.5
return np.array([x,y,z]).T
### Reading data-points or input manually
#data = np.loadtxt(open("FILENAME", "rb"), delimiter=",", skiprows=1)
data = np.array([[-0.1324063 , -0.20348653, 0.1092956 , -0.1373839 ],
[-0.06606588, -0.241081 , 0.03299012, -0.07948918],
[ 0.00734387, -0.28111848, -0.03585261, -0.02582142],
[ 0.08710127, -0.32440883, -0.0993386 , 0.02290227],
[-0.1917443 , -0.25598393, 0.17656827, -0.0700229 ],
[-0.25709023, -0.3136574 , 0.2359915 , -0.00850857],
[-0.24115813, -0.42059075, 0.19790666, 0.08808769],
[-0.13624286, -0.31807803, 0.11247162, 0.0033521 ],
[-0.03372648, -0.35028003, 0.01777376, 0.04133479],
[ 0.04336819, -0.44409848, -0.02935516, 0.12057842],
[-0.1347303 , -0.49336973, 0.1077308 , 0.14414993],
[-0.24685544, -0.19140019, 0.26410059, -0.1655845 ],
[-0.1484878 , -0.133989 , 0.1371715 , -0.25629743],
[-0.0516663 , -0.16025093, -0.00291017, -0.19854753],
[-0.12865594, -0.08504003, 0.10553519, -0.35872361],
[-0.2139658 , -0.06953147, 0.2577041 , -0.5022223 ],
[ 0.178072 , -0.24148273, -0.18318553, 0.14830857],
[-0.02108021, 0.02674724, -0.09501194, -0.13155845],
[ 0.0229672 , 0.01056387, -0.14915167, -0.08649187],
[ 0.07452043, -0.00889037, -0.20379097, -0.0382662 ],
[ 0.11964547, -0.02824217, -0.24264547, 0.0034279 ],
[-0.24950913, -0.27755487, 0.20589887, 0.1490889 ],
[-0.51117283, -0.3576123 , 0.4150047 , 0.192826 ],
[-0.32757758, -0.14601715, 0.34110336, -0.00121924],
[-0.13620307, 0.0257063 , 0.1072575 , -0.3498338 ],
[-0.25448853, -0.06841807, 0.28306863, -0.12624657],
[ 0.02915703, -0.21974855, -0.05952135, 0.11216968],
[-0.13984384, -0.15606174, 0.0910619 , 0.32435094],
[-0.1298462 , 0.03105397, 0.07921527, 0.11882097],
[-0.00194477, -0.03513213, -0.05924167, 0.22513607],
[ 0.14433763, -0.09971667, -0.164468 , 0.2986605 ],
[-0.2611565 , -0.0442019 , 0.22897707, 0.21709793],
[-0.41650053, -0.156089 , 0.33573983, 0.31695747],
[-0.13630217, -0.11456693, 0.08837517, 0.29882127]])
meas1 = measured_angles_to_vector(data[:,0], data[:,1]).T #transformToCoordinateSystem( measured_angles_to_vector(data[:,0], data[:,1]), Qb )
meas2 = measured_angles_to_vector(data[:,2], data[:,3]).T
print(measured_angles_to_vector(0,-np.pi/2))
print(sum(meas1[0]/len(meas1[2])))
print(sum(meas2[0]/len(meas2[2])))
###Output
-0.177828383914
0.00895868445869
###Markdown
1. Determining Position of C Relative to BDataset of angles measured from the base stations is given. We assume B is located at (0,0,0) with rotation (0,0,0), and construct and solve an optimization problem to determine the position of C. Constraints: - the distance between the base stations is 1 unit. - the base stations need to point in opposite directions. - C needs to be in front of B.
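For reference, the per-point cost is the standard closest distance between two lines $P + s\,u$ and $Q + t\,v$: with $w_0 = P - Q$, $a = u\cdot u$, $b = u\cdot v$, $c = v\cdot v$, $d = u\cdot w_0$, $e = v\cdot w_0$,

$$ s^* = \frac{be - cd}{ac - b^2}, \qquad t^* = \frac{ae - bd}{ac - b^2}, \qquad d_{\min} = \lVert w_0 + s^*u - t^*v \rVert, $$

with the closest points clamped back to a station's origin when they would fall behind it (this is what line_distance_front above does).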
###Code
## Solve optimization problem
# Initial guess
q1 = [0.0, 0, 0, 0, 0, 0]
q2 = [1, 0, 1, 0, 3*np.pi/2, np.pi] # Pretty good guess to start with
dists = []
# Objective function
def objective(pose):
sum = 0
P = q1[0:3]
Q = pose[0:3]
rotation = rotate_zyx(pose[3],pose[4],pose[5])
distss = []
for i in range(meas1.shape[1]):
u = meas1[...,i]
v = np.matmul(rotation, meas2[...,i])
dist = line_distance_front(P,u,Q,v)
distss.append(dist)
sum += dist
dists.append(distss)
return sum
# Constraints
def distance(pose):
return np.linalg.norm(pose[0:3])-1
cstr_distance = { 'type': 'eq', 'fun': distance }
def point_opposite(pose):
rotation = rotate_zyx(pose[3],pose[4],pose[5])
z1 = np.array([0,0,1])
z2 = np.matmul(rotation, np.array([0,0,1]))
return -np.dot(z1,z2)
cstr_point_opposite = { 'type': 'ineq', 'fun': point_opposite}
def point_towards(pose):
rotation = rotate_zyx(pose[3],pose[4],pose[5])
z1 = np.array(pose[0:3])
z2 = np.matmul(rotation, np.array([0,0,1]))
return -np.dot(z1,z2)
cstr_point_towards = { 'type': 'ineq', 'fun': point_towards}
# Bounds (translation positive in z - in front of the other lighthouse, and rotations in [-pi, pi])
bounds = [
(-1, 1),
(-1, 1),
(0, 1),
(-np.pi, np.pi),
(-np.pi, np.pi),
(-np.pi, np.pi)
]
## Do optimization
res = opt.minimize(objective,q2,
method='SLSQP',
jac=False,
bounds=bounds,
constraints=[cstr_distance,cstr_point_opposite,cstr_point_towards],
options={'disp': True, 'ftol': 1e-9, 'maxiter': 1000}
)
## Plot resulting estimate
fig2 = plt.figure()
ax2 = fig2.add_subplot(111, projection='3d')
# Shift estimate to position of 2nd LH
P1 = q1
P2 = res['x']
# Poses of LH
plot_axes(ax2,P1,'r')
plot_axes(ax2,P2,'c')
plot_axes(ax2,q2,'b')
ax2.set_xlim([-1,1])
ax2.set_ylim([-.5,1.5])
ax2.set_zlim([-.5,1.5])
## plot with lines
fig2 = plt.figure()
ax2 = fig2.add_subplot(111, projection='3d')
Po1 = P1#[0, 0, 2.5, -np.pi/6, np.pi/6, np.pi/4]
Po2 = P2#[2.8, 2.8, 2.5, np.pi/6, -np.pi/6, 5*np.pi/4]
#Po1 = [0, 0, 0, 0, 0, 0]
#Po2 = [1, 0, 0, 0, 0, 0]
plot_axes(ax2,Po1,'r')
plot_axes(ax2,Po2,'c')
ax2.set_xlim3d(-1,1)
ax2.set_ylim3d(-1,1)
ax2.set_zlim3d(-1,1)
plot_measured_lines2(ax2, Po1, meas1, "k", 1.5)
plot_measured_lines2(ax2, Po2, meas2, "b", 1.5)
plt.figure()
[plt.plot(range(len(d)), d) for d in dists]
plt.legend(range(len(dists[0])))
plt.xlabel('Iteration')
plt.ylabel('Cost (shortest distance between lines)')
plt.title('Cost for each point with optimization iteration')
def position(p1,v1,p2,v2):
# Point of intersection
W0 = p1 - p2
a = np.dot(v1, v1)
b = np.dot(v1, v2)
c = np.dot(v2, v2)
d = np.dot(v1, W0)
e = np.dot(v2, W0)
denom = a*c - b*b
s = (b*e - c*d) / denom
t = (a*e - b*d) / denom
P = p1 + v1*s
Q = p2 + v2*t
point = (P+Q)/2
return point
def positionFromMeasurement(ang, P1, P2):
va = measured_angles_to_vector(ang[0], ang[1])
vb = measured_angles_to_vector(ang[2], ang[3])
v1 = transform_vector_to_pose(va,P1)
v2 = transform_vector_to_pose(vb,P2)
return position(P1[0:3],v1,P2[0:3],v2)
data = [[-0.1327934, -0.20342617, -0.044094, -0.29171177],
[-0.0661195, -0.24125297, -0.1219269, -0.23382247],
[0.00722473, -0.28129783, -0.19208193, -0.18038883],
[-0.1922547, -0.25594557, 0.023179, -0.224814],
[-0.25785843, -0.3135612, 0.0824434, -0.1634431]]
aa = np.array(data[2])
ab = np.array(data[0])
ac = np.array(data[4])
pa = positionFromMeasurement(aa, P1, P2)
pb = positionFromMeasurement(ab, P1, P2)
pc = positionFromMeasurement(ac, P1, P2)
s = np.vstack((pa,pb,pc)).T
ax2.scatter(s[0], s[1], s[2])
###Output
_____no_output_____
###Markdown
2. Translation to Local Cartesian Coordinate LocationKnowing the relative position of C with respect to B, we are now ready to embed the local coordinate system we have obtained in the Cartesian coordinates we wish to define for the room. This is obtained through three transformations:- translation - we define a point to be the origin of the system- rotation - we define the xy-plane as the floor or table surface and define an x-axis direction - scaling - we define two points in space with known separationWe use a set distance between pairs of points to determine the scale of the setup (distance between base stations). We collect multiple pairs of data points separated by 1 m and average. There should be a mechanism here to alert if there is significant variance and to discard outliers.
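Put compactly (a sketch of what the next cell computes): each point or station position $p$ expressed in the local frame is mapped to room coordinates by

$$ p_{\text{room}} = s\,R\,(p - t), \qquad R = M^{-1},\quad M = [\,\hat\imath\ \ \hat\jmath\ \ \hat k\,],\quad s = \frac{1}{\lVert v_1 \rVert}, $$

where $t$ is the chosen origin (here the point pb), the columns of $M$ are the unit axes built from the two reference vectors $v_1$ and $v_2$, and $s$ rescales the setup so that the reference pair of points ends up exactly one unit apart.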
###Code
PB = np.array(P1, copy=True)
PC = np.array(P2, copy=True)
v1 = pa-pb
v2 = pc-pb
n = normVec((np.cross(v1, v2)))
# Translation
t = pb
PB[0:3] = PB[0:3]-t
PC[0:3] = PC[0:3]-t
# Rotate location
iVec = normVec(v1)
kVec = normVec(-np.cross(v1,v2))
jVec = normVec(np.cross(kVec,iVec))
M = np.vstack((iVec, jVec, kVec)).T
R = np.linalg.inv(M)
PB[0:3] = np.matmul(R,PB[0:3])
PC[0:3] = np.matmul(R,PC[0:3])
def rotate_and_pose(R,vec,P):
return normVec(np.matmul(R, transform_vector_to_pose(vec, P)))
# Rotate orientation
iVecB = rotate_and_pose(R,[1,0,0],PB)
jVecB = rotate_and_pose(R,[0,1,0],PB)
kVecB = rotate_and_pose(R,[0,0,1],PB)
iVecC = rotate_and_pose(R,[1,0,0],PC)
jVecC = rotate_and_pose(R,[0,1,0],PC)
kVecC = rotate_and_pose(R,[0,0,1],PC)
rotB = np.array([iVecB, jVecB, kVecB]).T
rotC = np.array([iVecC, jVecC, kVecC]).T
PB[3:6] = rotate_zyx_to_angles(rotB)
PC[3:6] = rotate_zyx_to_angles(rotC)
# Scaling
s = 1/np.linalg.norm(v1)
PB[0:3] = np.multiply(PB[0:3], s)
PC[0:3] = np.multiply(PC[0:3], s)
## Plot resulting estimate
fig2 = plt.figure()
ax2 = fig2.add_subplot(111, projection='3d')
# Poses of LH
plot_axes(ax2,PB,'r')
plot_axes(ax2,PC,'c')
paNew = positionFromMeasurement(aa, PB, PC)
pbNew = positionFromMeasurement(ab, PB, PC)
pcNew = positionFromMeasurement(ac, PB, PC)
sNew = np.vstack((paNew,pbNew,pcNew)).T
ax2.scatter(sNew[0], sNew[1], sNew[2])
ax2.set_xlim3d(-3,3)
ax2.set_ylim3d(-3,3)
ax2.set_zlim3d(-3,3)
print(PB, PC)
# Test:
# p,-2.943105,4.667657,3.507365,,
pb
ang1 = [-0.132575,-0.203491,0.109485,-0.137350]
print(positionFromMeasurement(ac, PB, PC))
print(ab)
#rotate_zyx_to_angles()
print(pa, pb, pc)
print(P1, P2)
###Output
[-0.32867392 0.11266406 1.20976395] [-0.265582 -0.08522318 1.31570361] [-0.38348953 -0.23138988 1.14435701]
[0.0, 0, 0, 0, 0, 0] [ 0.98648107 -0.01760844 1. 0.04484837 4.6760137 3.09510888]
|
examples/nested/example_low.ipynb | ###Markdown
Example notebook showing how to use the nested sampler with a lower number of live points and MCMC steps
###Code
import os
import sys
import torch
import logging
from getdist import plots, MCSamples
import getdist
import numpy as np
path = os.path.realpath(os.path.join(os.getcwd(), '../..'))
sys.path.insert(0, path)
from nnest import NestedSampler
from nnest.likelihoods import *
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
# Likelihood + prior
#like = Himmelblau(2)
#transform = lambda x: 5*x
#like = Rosenbrock(4)
#transform = lambda x: 5*x
#like = Gaussian(2, 0.9)
#transform = lambda x: 3*x
#like = Eggbox(2)
#transform = lambda x: 5*np.pi*x
like = GaussianShell(2)
transform = lambda x: 5*x
#like = GaussianMix(2)
#transform = lambda x: 5*x
sampler = NestedSampler(like.x_dim, like, transform=transform, num_live_points=100, hidden_dim=16,
num_blocks=3, flow='spline')
sampler.run(strategy=['rejection_prior', 'rejection_flow', 'mcmc'], mcmc_steps=5*like.x_dim)
print(sampler.logz)
mc = MCSamples(samples=sampler.samples, weights=sampler.weights, loglikes=-sampler.loglikes)
print(mc.getEffectiveSamples())
print(mc.getMargeStats())
print(mc.likeStats)
g = plots.getSubplotPlotter(width_inch=8)
g.triangle_plot(mc, filled=True)
###Output
_____no_output_____ |
scripts/tractatus-timeline/tractatus_diary.ipynb | ###Markdown
edits to original doc: philosophy.docx- in word doc, go to Edit > Find > Advanced Find & Replace- in "Find what" field, type: <[A-Za-z]- select "Use Wildcards" and (at bottom) Format > Font > Italic- in "Replace" field, type: ∞^&- this should insert a "∞" before every italicized word- & do the same for underlinesnew file = philosophy emph.docx
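As a small illustration (the word here is hypothetical, not taken from the document): an italicized Sinn comes out of the Word export as ∞Sinn, which the script below rewrites as <i>Sinn; the ¡ marker used for underlined words is mapped to <u> in the same way.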
###Code
import docx
from guess_language import guessLanguage
import re
import json
doc = docx.Document("diary.docx")
all_paras = doc.paragraphs
para_list=[]
for para in all_paras:
text=para.text
text=re.sub("∞","<i>",text)
text=re.sub("¡","<u>",text)
para_list.append(text)
para_text='\n'.join(para_list)
# split into sections
split_mark="¬"
para_text=re.sub("Ms-10",split_mark+"Ms-10",para_text) # insert split marker
sections0=re.split(split_mark,para_text)
sections=[]
for sec in sections0:
if len(sec)>3: # incomplete sections?
sec=re.sub("·","•",sec)
#sec=re.sub("\(|\)","†",sec)
sections.append(sec)
#sections.sort()
print(len(sections))
print(sections[1])
# fields and their corresponding regular expressions-
fields=[("manuscript","Ms-.*"), #Ms-101,12r[2] et 13r[1] (1914--0902) (NB)
("date","\n\s*([0-9]+[.][0-9]+[.][0-9]+[.]*)")] #2.9.14.
# grab into json
nodes=[]
empties=[]
multiples=[]
sos=[]
sections.sort()
for section in sections:
text=[]
# metadata
s={}
e=[]
m=[]
for field in fields:
line=re.findall(field[1],section)
if len(line)==1:
s[field[0]]=line[0]
elif len(line)==0:
s[field[0]]=""
e.append(s)
elif len(line)>1:
#s[field[0]]=line
#OR
for i in range(len(line)):
s[field[0]+str(i+1)]=line[i]
m.append(s)
if len(e)>0:
new=(section,e)
empties.append(new)
if m!=[]:
new=(section,m[-1])
multiples.append(new)
# translations
subsections=re.split("\n",section)
ger=[]
ger_ind=[]
eng=[]
eng_ind=[]
for subsec in subsections:
if (guessLanguage(subsec) == 'de'):
ger.append(subsec)
ger_ind.append(subsections.index(subsec))
elif (guessLanguage(subsec) == 'en'):
eng.append(subsec)
eng_ind.append(subsections.index(subsec))
if eng_ind!=[]:
s["eng"]='\n'.join(eng)
else:
s["eng"]=""
if ger_ind!=[]:
s["ger"]='\n'.join(ger)
else:
s["ger"]=""
if s["manuscript"]!='':
nodes.append(s)
print(len(nodes))
print(len(empties))
print(len(multiples))
with open('stern_hz_diary.json', 'w', encoding='utf8') as f:
json.dump(nodes, f, indent=4, ensure_ascii=False)
# View empties
for e in empties:
print(e[0],'\n',e[1],'\n\n•••\n')
# View multiples
for m in multiples:
print(m[0],'\n',m[1],'\n\n\n')
###Output
Ms-102,16v[2]
15.11.14.
Lese jetzt in Emersons Essays. Vielleicht werden sie einen guten Einfluß auf mich haben. Ziemlich gearbeitet. ——.
Reading Emerson's essays now. Maybe they will have a good influence on me. Worked fairly well. ——.
16.11.14.
{'manuscript': 'Ms-102,16v[2]', 'date1': '15.11.14.', 'date2': '16.11.14.', 'eng': "Reading Emerson's essays now. Maybe they will have a good influence on me. Worked fairly well. ——.\n16.11.14.\n", 'ger': 'Lese jetzt in Emersons Essays. Vielleicht werden sie einen guten Einfluß auf mich haben. Ziemlich gearbeitet. ——.'}
Ms-102,63v[2]
28.2.15.
1.3.15
Nicht gearbeitet. Keine Nachricht von David! Unentschiedener und wechselnder Stimmung.
Didn’t work. No news from David! Indecisive and unsettled mood.
{'manuscript': 'Ms-102,63v[2]', 'date1': '28.2.15.', 'date2': '1.3.15', 'eng': 'Didn’t work. No news from David! Indecisive and unsettled mood.\n', 'ger': 'Nicht gearbeitet. Keine Nachricht von David! Unentschiedener und wechselnder Stimmung.'}
Ms-102,63v[3]
2.3.15.
3.3.15.
Nicht gearbeitet. Gestern abend einen momentanen Lichtblick. Keine Nachricht von David! — Abends gemütlich bei Scholz. Sonst im allgemeinen trüber Stimmung.
Didn’t work. A momentary glimmer of light yesterday evening. No news from David! — Pleasant evening at Scholz’s. Otherwise, a generally gloomy mood.
{'manuscript': 'Ms-102,63v[3]', 'date1': '2.3.15.', 'date2': '3.3.15.', 'eng': 'Didn’t work. A momentary glimmer of light yesterday evening. No news from David! — Pleasant evening at Scholz’s. Otherwise, a generally gloomy mood.\n', 'ger': 'Nicht gearbeitet. Gestern abend einen momentanen Lichtblick. Keine Nachricht von David! — Abends gemütlich bei Scholz. Sonst im allgemeinen trüber Stimmung.'}
Ms-102,69v[6]
4.4.15.
5.4.15.
Wechselnder Stimmung.
Unsettled mood.
{'manuscript': 'Ms-102,69v[6]\xa0', 'date1': '4.4.15.', 'date2': '5.4.15.', 'eng': '', 'ger': 'Wechselnder Stimmung.\nUnsettled mood.\n'}
Ms-102,72v[1]
5.5.15.
7.5.15.
Noch immer nicht ernannt! Immer wieder wegen meiner unklaren Stellung Unanehmlichkeiten. Wenn das noch lange so geht werde ich von hier wegzukommen trachten.
Still not appointed! Troubles because of my unclear position, again and again. If it goes on like this for much longer, I’ll try to get away from here.
{'manuscript': 'Ms-102,72v[1]\xa0', 'date1': '5.5.15.', 'date2': '7.5.15.', 'eng': 'Still not appointed! Troubles because of my unclear position, again and again. If it goes on like this for much longer, I’ll try to get away from here.\n', 'ger': 'Noch immer nicht ernannt! Immer wieder wegen meiner unklaren Stellung Unanehmlichkeiten. Wenn das noch lange so geht werde ich von hier wegzukommen trachten.'}
Ms-102,72v[2]
8.5.15.
10.5.15.
Viel Aufregung! War nahe am Weinen!!!! Fühle mich wie gebrochen und krank! Von Gemeinheit umgeben.
Much agitation! Was close to crying!!!! Feel myself broken and sick! Surrounded by viciousness.
{'manuscript': 'Ms-102,72v[2]\xa0', 'date1': '8.5.15.', 'date2': '10.5.15.', 'eng': 'Much agitation! Was close to crying!!!! Feel myself broken and sick! Surrounded by viciousness.\n', 'ger': 'Viel Aufregung! War nahe am Weinen!!!! Fühle mich wie gebrochen und krank! Von Gemeinheit umgeben.'}
Ms-102,73v[2]
25.5.15.
8.6.15.
Erneuerte Schwierigkeit wegen meiner Beförderung. Werde wahrscheinlich von hier wegkommen. Vielfach sehr niedergedrückt. Durch die Gemeinheit meiner Umgebung die mich aufs Schändlichste ausnützt. ——.
Renewed difficulty over my promotion. Will probably get away from here. Often very weighed down. Because of the viciousness of those around me who exploit me most disgracefully. ——.
{'manuscript': 'Ms-102,73v[2]\xa0', 'date1': '25.5.15.', 'date2': '8.6.15.', 'eng': 'Renewed difficulty over my promotion. Will probably get away from here. Often very weighed down. Because of the viciousness of those around me who exploit me most disgracefully. ——.\n', 'ger': 'Erneuerte Schwierigkeit wegen meiner Beförderung. Werde wahrscheinlich von hier wegkommen. Vielfach sehr niedergedrückt. Durch die Gemeinheit meiner Umgebung die mich aufs Schändlichste ausnützt. ——.'}
Ms-103,3v[1]
6.4.16.
Das Leben ist eine
7.4.16.
Tortur von der man nur zeitweise heruntergespannt wird um für weitere Qualen empfänglich zu bleiben. Ein furchtbares Sortiment von Qualen. Ein erschöpfender Marsch, eine durchhustete Nacht, eine Gesellschaft von Besoffenen, eine Gesellschaft von gemeinen und dummen Leuten. Tue Gutes und freue dich über deine Tugend. Bin krank und habe ein schlechtes Leben. Gott helfe mir. Ich bin ein armer unglücklicher Mensch. Gott erlöse mich und schenke mir den Frieden! Amen.
Life is a torture from which one is only briefly relieved so as to remain receptive to further torments. A terrible variety of torments. An exhausting march, a night of coughing, a company of drunkards, a company of vile and stupid people. Do good and rejoice in your virtue. Am sick and have a miserable life. God help me. I am a poor unhappy man. God redeem me and give me peace! Amen.
{'manuscript': 'Ms-103,3v[1]\xa0', 'date1': '6.4.16.', 'date2': '7.4.16.', 'eng': 'Life is a torture from which one is only briefly relieved so as to remain receptive to further torments. A terrible variety of torments. An exhausting march, a night of coughing, a company of drunkards, a company of vile and stupid people. Do good and rejoice in your virtue. Am sick and have a miserable life. God help me. I am a poor unhappy man. God redeem me and give me peace! Amen.\n', 'ger': 'Tortur von der man nur zeitweise heruntergespannt wird um für weitere Qualen empfänglich zu bleiben. Ein furchtbares Sortiment von Qualen. Ein erschöpfender Marsch, eine durchhustete Nacht, eine Gesellschaft von Besoffenen, eine Gesellschaft von gemeinen und dummen Leuten. Tue Gutes und freue dich über deine Tugend. Bin krank und habe ein schlechtes Leben. Gott helfe mir. Ich bin ein armer unglücklicher Mensch. Gott erlöse mich und schenke mir den Frieden! Amen.'}
|
Scikit-Learn/4) Evaluating a Model/EvaluationModel_example1_score.ipynb | ###Markdown
4. Evaluating a model Once you've trained a model, you'll want a way to measure how trustworthy its predictions are. Scikit-Learn implements 3 different methods of evaluating models.1. Estimator score() method. Calling score() on a model instance will return a metric associated with the type of model you're using. The exact metric depends on which model you're using.2. Scoring parameter. This parameter can be passed to methods such as cross_val_score() or GridSearchCV() to tell Scikit-Learn to use a specific type of scoring metric.3. Problem-specific metric functions. Similar to how the scoring parameter can be passed different scoring functions, Scikit-Learn implements these as stand-alone functions. The scoring function you use will also depend on the problem you're working on.Classification problems have different evaluation metrics and scoring functions from regression problems. 4.1 Evaluating a model with the score() method - a quick evaluation classification problem example (building a classifier to predict whether or not someone has heart disease based on their medical records)
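As a rough sketch of options 2 and 3, which this notebook doesn't use directly (assuming a classifier clf and data X, y, X_test, y_test like the ones created below):

```python
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score

# Option 2: pass the `scoring` parameter to a helper such as cross_val_score()
cv_scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")  # 5-fold cross-validated accuracy

# Option 3: call a problem-specific metric function directly on true vs. predicted labels
test_accuracy = accuracy_score(y_test, clf.predict(X_test))
```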
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
heart_df = pd.read_csv("data/heart-disease.csv")
heart_df.head() # classification dataset - supervised learning
# No. of samples in the dataset
len(heart_df)
from sklearn.model_selection import train_test_split
# Import the RandomForestClassifier model class from the ensemble module
from sklearn.ensemble import RandomForestClassifier
# Setup random seed
np.random.seed(42)
# Split the data into X (features/data) and y (target/labels)
X = heart_df.drop("target", axis=1)
y = heart_df["target"]
# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate the model (on the training set)
clf = RandomForestClassifier()
# Call the fit method on the model and pass it training data
clf.fit(X_train, y_train);
###Output
_____no_output_____
###Markdown
Once the model has been fit on the training data (X_train, y_train), we can call the score() method on it and evaluate our model on the test data, data the model has never seen before (X_test, y_test).
###Code
# Check the score of the model (on the test set)
heart_score = clf.score(X_test, y_test)
print(f"Score = {heart_score*100}%")
###Output
Score = 85.24590163934425%
###Markdown
Since clf is an instance of RandomForestClassifier, the score() method uses mean accuracy as its scoring metric -- the default metric for classification. score() makes predictions on X_test using the trained model and then compares those predictions to the actual labels y_test. Remember, you can find this by pressing SHIFT + TAB within the brackets of score() when called on a model instance. regression model
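As a quick check of that equivalence for the classifier above, before moving on to the regression example (a minimal sketch assuming the clf, X_test and y_test defined earlier); the result should match clf.score(X_test, y_test):

```python
y_preds = clf.predict(X_test)                 # predictions on the unseen test data
manual_accuracy = np.mean(y_preds == y_test)  # fraction of predictions matching the true labels
```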
###Code
# Import the Boston housing dataset of SKlearn - built in regression dataset
from sklearn.datasets import load_boston
boston = load_boston()
# Covert it to a pandas dataframe - for better inspection
# take the data key, and label the columns
boston_df = pd.DataFrame(boston["data"],columns=boston["feature_names"])
# create a target column in df by using target values from dataset
boston_df["target"] = pd.Series(boston["target"])
boston_df
# Import the RandomForestRegressor model class from the ensemble module
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
# Setup random seed
np.random.seed(42)
# Create the data
X = boston_df.drop("target", axis=1)
y = boston_df["target"]
# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate and fit the model (on the training set)
model = RandomForestRegressor()
model.fit(X_train, y_train)
# Check the score of the model (on the test set)
reg_score = model.score(X_test, y_test)
print(f"Score = {reg_score*100}%")
###Output
Score = 86.54448653350507%
|
oTranscribe_txt_to_srt_格式轉換.ipynb | ###Markdown
Converting an oTranscribe txt export to srt format
srt is the SubRip (.srt) format, which can be used for YouTube CC subtitles.
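For reference, a single SubRip cue is a sequence number, a time range, the subtitle text and a blank line; standard srt timestamps are HH:MM:SS,mmm (the script below simply prefixes the MM:SS timestamps from oTranscribe with 00: and omits the milliseconds):

```
1
00:00:05,000 --> 00:00:09,300
First subtitle line
```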
###Code
#@title Preload required modules
#@markdown This block must always be run
from google.colab import files
import re
#@title Upload the file
uploaded = files.upload()
#@title Environment settings
#@markdown Uploaded file name
input_filename='1.txt' #@param {type:"string"}
#@markdown Output file name
output_filename="srt_output.txt" #@param {type:"string"}
#@title Run the format conversion
with open(input_filename,'r',encoding='utf-8') as f:
text=f.read().replace("\xa0",' ')
re_patten=r'([0-9:]+)\s{0,2}(.*)\s?\n'
aa=re.findall(re_patten,text)
content=''
end=len(aa)-1
for idx,itm in enumerate(aa):
if len(itm)==0:
continue
content+="%d\n"%(idx+1)
if idx != end:
content+="00:{} --> 00:{}\n".format(itm[0],aa[idx+1][0])
else:
tmp=itm[0].split(":")
tmp[-1]=str(int(tmp[-1])+5)
# print(":".join(tmp))
content+="00:{} --> 00:{}\n".format(itm[0],":".join(tmp))
content+="%s\n\n"%itm[1]
with open(output_filename,"w",encoding='utf-8') as f:
f.write(content)
#@title Download the file
from google.colab import files
files.download(output_filename)
###Output
_____no_output_____ |
arrays_strings/permutation/permutation_challenge.ipynb | ###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Determine if a string is a permutation of another string.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is whitespace important? * Yes* Is this case sensitive? 'Nib', 'bin' is not a match? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* One or more None inputs -> False* One or more empty strings -> False* 'Nib', 'bin' -> False* 'act', 'cat' -> True* 'a ct', 'ca t' -> True AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/permutation/permutation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Permutations(object):
def is_permutation(self, str1, str2):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_permutation_solution.py
from nose.tools import assert_equal
class TestPermutation(object):
def test_permutation(self, func):
assert_equal(func(None, 'foo'), False)
assert_equal(func('', 'foo'), False)
assert_equal(func('Nib', 'bin'), False)
assert_equal(func('act', 'cat'), True)
assert_equal(func('a ct', 'ca t'), True)
print('Success: test_permutation')
def main():
test = TestPermutation()
permutations = Permutations()
test.test_permutation(permutations.is_permutation)
try:
permutations_alt = PermutationsAlt()
test.test_permutation(permutations_alt.is_permutation)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Determine if a string is a permutation of another string.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is whitespace important? * Yes* Is this case sensitive? 'Nib', 'bin' is not a match? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* One or more None inputs -> False* One or more empty strings -> False* 'Nib', 'bin' -> False* 'act', 'cat' -> True* 'a ct', 'ca t' -> True AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/permutation/permutation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Permutations(object):
def is_permutation(self, str1, str2):
if not str1 or not str2:
return False
str1_ = "".join(str1.split())
str2_ = "".join(str2.split())
if len(str1_) != len(str2_):
return False
for char in str1_:
if char not in str2_:
return False
return True
p = Permutations()
print(p.is_permutation("Nib", "bin"))
###Output
False
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_permutation_solution.py
import unittest
class TestPermutation(unittest.TestCase):
def test_permutation(self, func):
self.assertEqual(func(None, 'foo'), False)
self.assertEqual(func('', 'foo'), False)
self.assertEqual(func('Nib', 'bin'), False)
self.assertEqual(func('act', 'cat'), True)
self.assertEqual(func('a ct', 'ca t'), True)
self.assertEqual(func('dog', 'doggo'), False)
print('Success: test_permutation')
def main():
test = TestPermutation()
permutations = Permutations()
test.test_permutation(permutations.is_permutation)
try:
permutations_alt = PermutationsAlt()
test.test_permutation(permutations_alt.is_permutation)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
###Output
Success: test_permutation
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Determine if a string is a permutation of another string* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is whitespace important? * Yes* Is this case sensitive? 'Nib', 'bin' is not a match? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* One or more None inputs -> False* One or more empty strings -> False* 'Nib', 'bin' -> False* 'act', 'cat' -> True* 'a ct', 'ca t' -> True AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/permutation/permutation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Permutations(object):
def is_permutation(self, str1, str2):
if not (str1 and str2):
return False
str2 = list(str2)
for char in str1:
try:
str2.remove(char)
except ValueError:
return False
return not str2
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_permutation_solution.py
from nose.tools import assert_equal
class TestPermutation(object):
def test_permutation(self, func):
assert_equal(func(None, 'foo'), False)
assert_equal(func('', 'foo'), False)
assert_equal(func('Nib', 'bin'), False)
assert_equal(func('act', 'cat'), True)
assert_equal(func('a ct', 'ca t'), True)
print('Success: test_permutation')
def main():
test = TestPermutation()
permutations = Permutations()
test.test_permutation(permutations.is_permutation)
try:
permutations_alt = PermutationsAlt()
test.test_permutation(permutations_alt.is_permutation)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Determine if a string is a permutation of another string.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is whitespace important? * Yes* Is this case sensitive? 'Nib', 'bin' is not a match? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* One or more None inputs -> False* One or more empty strings -> False* 'Nib', 'bin' -> False* 'act', 'cat' -> True* 'a ct', 'ca t' -> True AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/permutation/permutation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Permutations(object):
def is_permutation(self, str1, str2):
if str1 is None or str2 is None:
return False
if len(str1) != len(str2):
return False
set1 = set(str1)
set2 = set(str2)
return len(set1 - set2) == 0 and len(set2 - set1) == 0
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_permutation_solution.py
from nose.tools import assert_equal
class TestPermutation(object):
def test_permutation(self, func):
assert_equal(func(None, 'foo'), False)
assert_equal(func('', 'foo'), False)
assert_equal(func('Nib', 'bin'), False)
assert_equal(func('act', 'cat'), True)
assert_equal(func('a ct', 'ca t'), True)
assert_equal(func('dog', 'doggo'), False)
print('Success: test_permutation')
def main():
test = TestPermutation()
permutations = Permutations()
test.test_permutation(permutations.is_permutation)
try:
permutations_alt = PermutationsAlt()
test.test_permutation(permutations_alt.is_permutation)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
###Output
Success: test_permutation
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Determine if a string is a permutation of another string* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is whitespace important? * Yes* Is this case sensitive? 'Nib', 'bin' is not a match? * Yes Test Cases* One or more empty strings -> False* 'Nib', 'bin' -> False* 'act', 'cat' -> True* 'a ct', 'ca t' -> True AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/permutation/permutation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
def permutations(str1, str2):
return sorted(str1) == sorted(str2)
###Output
_____no_output_____
###Markdown
**The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_permutation_solution.py
from nose.tools import assert_equal
class TestPermutation(object):
def test_permutation(self, func):
assert_equal(func('', 'foo'), False)
assert_equal(func('Nib', 'bin'), False)
assert_equal(func('act', 'cat'), True)
assert_equal(func('a ct', 'ca t'), True)
print('Success: test_permutation')
def main():
test = TestPermutation()
test.test_permutation(permutations)
try:
test.test_permutation(permutations_alt)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
###Output
Success: test_permutation
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Determine if a string is a permutation of another string.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is whitespace important? * Yes* Is this case sensitive? 'Nib', 'bin' is not a match? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* One or more None inputs -> False* One or more empty strings -> False* 'Nib', 'bin' -> False* 'act', 'cat' -> True* 'a ct', 'ca t' -> True AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/permutation/permutation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
from collections import defaultdict
class Permutations(object):
def histogram(self, str):
result = defaultdict(lambda: 0)
for letter in str:
result[letter] += 1
return result
def is_permutation(self, str1, str2):
if str1 and str2:
return self.histogram(str1) == self.histogram(str2)
else:
return False
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_permutation_solution.py
from nose.tools import assert_equal
class TestPermutation(object):
def test_permutation(self, func):
assert_equal(func(None, 'foo'), False)
assert_equal(func('', 'foo'), False)
assert_equal(func('Nib', 'bin'), False)
assert_equal(func('act', 'cat'), True)
assert_equal(func('a ct', 'ca t'), True)
print('Success: test_permutation')
def main():
test = TestPermutation()
permutations = Permutations()
test.test_permutation(permutations.is_permutation)
try:
permutations_alt = PermutationsAlt()
test.test_permutation(permutations_alt.is_permutation)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
###Output
Success: test_permutation
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Determine if a string is a permutation of another string.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is whitespace important? * Yes* Is this case sensitive? 'Nib', 'bin' is not a match? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* One or more None inputs -> False* One or more empty strings -> False* 'Nib', 'bin' -> False* 'act', 'cat' -> True* 'a ct', 'ca t' -> True AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/permutation/permutation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Permutations(object):
def is_permutation(self, str1, str2):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_permutation_solution.py
from nose.tools import assert_equal
class TestPermutation(object):
def test_permutation(self, func):
assert_equal(func(None, 'foo'), False)
assert_equal(func('', 'foo'), False)
assert_equal(func('Nib', 'bin'), False)
assert_equal(func('act', 'cat'), True)
assert_equal(func('a ct', 'ca t'), True)
print('Success: test_permutation')
def main():
test = TestPermutation()
permutations = Permutations()
test.test_permutation(permutations.is_permutation)
try:
permutations_alt = PermutationsAlt()
test.test_permutation(permutations_alt.is_permutation)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Determine if a string is a permutation of another string.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is whitespace important? * Yes* Is this case sensitive? 'Nib', 'bin' is not a match? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* One or more None inputs -> False* One or more empty strings -> False* 'Nib', 'bin' -> False* 'act', 'cat' -> True* 'a ct', 'ca t' -> True AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/permutation/permutation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Permutations(object):
def is_permutation(self, str1, str2):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_permutation_solution.py
import unittest
class TestPermutation(unittest.TestCase):
def test_permutation(self, func):
self.assertEqual(func(None, 'foo'), False)
self.assertEqual(func('', 'foo'), False)
self.assertEqual(func('Nib', 'bin'), False)
self.assertEqual(func('act', 'cat'), True)
self.assertEqual(func('a ct', 'ca t'), True)
self.assertEqual(func('dog', 'doggo'), False)
self.assertEqual(func('dogoo', 'dogcc'), False)
print('Success: test_permutation')
def main():
test = TestPermutation()
permutations = Permutations()
test.test_permutation(permutations.is_permutation)
try:
permutations_alt = PermutationsAlt()
test.test_permutation(permutations_alt.is_permutation)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Determine if a string is a permutation of another string.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is whitespace important? * Yes* Is this case sensitive? 'Nib', 'bin' is not a match? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* One or more None inputs -> False* One or more empty strings -> False* 'Nib', 'bin' -> False* 'act', 'cat' -> True* 'a ct', 'ca t' -> True AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/permutation/permutation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Permutations(object):
def is_permutation(self, str1, str2):
if(str1 is None or str2 is None): return False
if(str1 == "" or str2 == ""): return False
arr1 = [char for char in str1]
arr2 = [char for char in str2]
for char in arr1:
if char in arr2:
arr2.remove(char)
else:
return False
return False if len(arr2)>0 else True
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_permutation_solution.py
import unittest
class TestPermutation(unittest.TestCase):
def test_permutation(self, func):
self.assertEqual(func(None, 'foo'), False)
self.assertEqual(func('', 'foo'), False)
self.assertEqual(func('Nib', 'bin'), False)
self.assertEqual(func('act', 'cat'), True)
self.assertEqual(func('a ct', 'ca t'), True)
self.assertEqual(func('dog', 'doggo'), False)
print('Success: test_permutation')
def main():
test = TestPermutation()
permutations = Permutations()
test.test_permutation(permutations.is_permutation)
try:
permutations_alt = PermutationsAlt()
test.test_permutation(permutations_alt.is_permutation)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
###Output
Success: test_permutation
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Determine if a string is a permutation of another string.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is whitespace important? * Yes* Is this case sensitive? 'Nib', 'bin' is not a match? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* One or more None inputs -> False* One or more empty strings -> False* 'Nib', 'bin' -> False* 'act', 'cat' -> True* 'a ct', 'ca t' -> True AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/permutation/permutation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
from collections import defaultdict
class Permutations(object):
def is_permutation(self, str1, str2):
if str1 is None or str2 is None:
return False
if len(str1) != len(str2):
return False
hashmap = defaultdict(int)
for chr1 in str1:
hashmap[chr1] += 1
for chr2 in str2:
hashmap[chr2] -= 1
return max(hashmap.values()) == 0
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_permutation_solution.py
from nose.tools import assert_equal
class TestPermutation(object):
def test_permutation(self, func):
assert_equal(func(None, 'foo'), False)
assert_equal(func('', 'foo'), False)
assert_equal(func('Nib', 'bin'), False)
assert_equal(func('act', 'cat'), True)
assert_equal(func('a ct', 'ca t'), True)
print('Success: test_permutation')
def main():
test = TestPermutation()
permutations = Permutations()
test.test_permutation(permutations.is_permutation)
try:
permutations_alt = PermutationsAlt()
test.test_permutation(permutations_alt.is_permutation)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
###Output
defaultdict(<class 'int'>, {'N': 1, 'i': 0, 'b': 0, 'n': -1})
defaultdict(<class 'int'>, {'a': 0, 'c': 0, 't': 0})
defaultdict(<class 'int'>, {'a': 0, ' ': 0, 'c': 0, 't': 0})
Success: test_permutation
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Determine if a string is a permutation of another string.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is whitespace important? * Yes* Is this case sensitive? 'Nib', 'bin' is not a match? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* One or more None inputs -> False* One or more empty strings -> False* 'Nib', 'bin' -> False* 'act', 'cat' -> True* 'a ct', 'ca t' -> True AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/permutation/permutation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Permutations(object):
def is_permutation(self, str1, str2):
if not str1 or not str2:
return False
else:
seen = dict()
for i in str1:
try:
seen[i] += 1
except:
seen[i] = 1
for i in str2:
try:
seen[i] -= 1
except:
return False
return not any([seen[i] != 0 for i in seen])
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_permutation_solution.py
from nose.tools import assert_equal
class TestPermutation(object):
def test_permutation(self, func):
assert_equal(func(None, 'foo'), False)
assert_equal(func('', 'foo'), False)
assert_equal(func('Nib', 'bin'), False)
assert_equal(func('act', 'cat'), True)
assert_equal(func('a ct', 'ca t'), True)
print('Success: test_permutation')
def main():
test = TestPermutation()
permutations = Permutations()
test.test_permutation(permutations.is_permutation)
try:
permutations_alt = PermutationsAlt()
test.test_permutation(permutations_alt.is_permutation)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
###Output
Success: test_permutation
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Determine if a string is a permutation of another string.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is whitespace important? * Yes* Is this case sensitive? 'Nib', 'bin' is not a match? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* One or more None inputs -> False* One or more empty strings -> False* 'Nib', 'bin' -> False* 'act', 'cat' -> True* 'a ct', 'ca t' -> True AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/permutation/permutation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Permutations(object):
def is_permutation(self, str1, str2):
if str1 is None or str1=='':
return False
if str2 is None or str2=='':
return False
d1 = self._create_dict(str1)
d2 = self._create_dict(str2)
#for key, value in d1.items():
# if key not in d2 or d2[key] != value:
# return False
#return True
if d1==d2:
return True
else:
return False
def _create_dict(self, string):
d = dict()
for c in string:
if c in d:
d[c] += 1
else:
d[c] = 1
return d
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_permutation_solution.py
from nose.tools import assert_equal
class TestPermutation(object):
def test_permutation(self, func):
assert_equal(func(None, 'foo'), False)
assert_equal(func('', 'foo'), False)
assert_equal(func('Nib', 'bin'), False)
assert_equal(func('act', 'cat'), True)
assert_equal(func('a ct', 'ca t'), True)
print('Success: test_permutation')
def main():
test = TestPermutation()
permutations = Permutations()
test.test_permutation(permutations.is_permutation)
try:
permutations_alt = PermutationsAlt()
test.test_permutation(permutations_alt.is_permutation)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
###Output
Success: test_permutation
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Determine if a string is a permutation of another string* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is whitespace important? * Yes* Is this case sensitive? 'Nib', 'bin' is not a match? * Yes Test Cases* One or more empty strings -> False* 'Nib', 'bin' -> False* 'act', 'cat' -> True* 'a ct', 'ca t' -> True AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/permutation/permutation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
def permutations(str1, str2):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_permutation_solution.py
from nose.tools import assert_equal
class TestPermutation(object):
def test_permutation(self, func):
assert_equal(func('', 'foo'), False)
assert_equal(func('Nib', 'bin'), False)
assert_equal(func('act', 'cat'), True)
assert_equal(func('a ct', 'ca t'), True)
print('Success: test_permutation')
def main():
test = TestPermutation()
test.test_permutation(permutations)
try:
test.test_permutation(permutations_alt)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Determine if a string is a permutation of another string.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is whitespace important? * Yes* Is this case sensitive? 'Nib', 'bin' is not a match? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* One or more None inputs -> False* One or more empty strings -> False* 'Nib', 'bin' -> False* 'act', 'cat' -> True* 'a ct', 'ca t' -> True AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/permutation/permutation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Permutations(object):
def is_permutation(self, str1, str2):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_permutation_solution.py
import unittest
class TestPermutation(unittest.TestCase):
def test_permutation(self, func):
self.assertEqual(func(None, 'foo'), False)
self.assertEqual(func('', 'foo'), False)
self.assertEqual(func('Nib', 'bin'), False)
self.assertEqual(func('act', 'cat'), True)
self.assertEqual(func('a ct', 'ca t'), True)
self.assertEqual(func('dog', 'doggo'), False)
print('Success: test_permutation')
def main():
test = TestPermutation()
permutations = Permutations()
test.test_permutation(permutations.is_permutation)
try:
permutations_alt = PermutationsAlt()
test.test_permutation(permutations_alt.is_permutation)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Determine if a string is a permutation of another string.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is whitespace important? * Yes* Is this case sensitive? 'Nib', 'bin' is not a match? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* One or more None inputs -> False* One or more empty strings -> False* 'Nib', 'bin' -> False* 'act', 'cat' -> True* 'a ct', 'ca t' -> True AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/permutation/permutation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Permutations(object):
def is_permutation(self, str1, str2):
        if str1 and str2:
            # Per the constraints, None or empty inputs are not permutations
            return sorted(str1) == sorted(str2)
        return False
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_permutation_solution.py
from nose.tools import assert_equal
class TestPermutation(object):
def test_permutation(self, func):
assert_equal(func(None, 'foo'), False)
assert_equal(func('', 'foo'), False)
assert_equal(func('Nib', 'bin'), False)
assert_equal(func('act', 'cat'), True)
assert_equal(func('a ct', 'ca t'), True)
print('Success: test_permutation')
def main():
test = TestPermutation()
permutations = Permutations()
test.test_permutation(permutations.is_permutation)
try:
permutations_alt = PermutationsAlt()
test.test_permutation(permutations_alt.is_permutation)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Determine if a string is a permutation of another string.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume the string is ASCII? * Yes * Note: Unicode strings could require special handling depending on your language* Is whitespace important? * Yes* Is this case sensitive? 'Nib', 'bin' is not a match? * Yes* Can we use additional data structures? * Yes* Can we assume this fits in memory? * Yes Test Cases* One or more None inputs -> False* One or more empty strings -> False* 'Nib', 'bin' -> False* 'act', 'cat' -> True* 'a ct', 'ca t' -> True AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/arrays_strings/permutation/permutation_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
from collections import Counter
class Permutations(object):
def is_permutation(self, str1, str2):
        # Two strings are permutations when their character counts match exactly
        if str1 is None or str2 is None:
            return False
        if str1 == "" or str2 == "":
            return False
        return Counter(str1) == Counter(str2)
###Output
_____no_output_____
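###Markdown
Before running the unit test, a quick manual check against a few of the test cases listed above can confirm the behaviour (this cell is only an illustration and is not part of the original challenge):
###Code
# Quick sanity check using test cases from the Constraints / Test Cases sections
permutations = Permutations()
print(permutations.is_permutation('a ct', 'ca t'))  # whitespace matters -> True
print(permutations.is_permutation('Nib', 'bin'))    # case sensitive -> False
print(permutations.is_permutation(None, 'foo'))     # None input -> False
###Output
_____no_output_____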
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_permutation_solution.py
import unittest
class TestPermutation(unittest.TestCase):
def test_permutation(self, func):
self.assertEqual(func(None, 'foo'), False)
self.assertEqual(func('', 'foo'), False)
self.assertEqual(func('Nib', 'bin'), False)
self.assertEqual(func('act', 'cat'), True)
self.assertEqual(func('a ct', 'ca t'), True)
self.assertEqual(func('dog', 'doggo'), False)
print('Success: test_permutation')
def main():
test = TestPermutation()
permutations = Permutations()
test.test_permutation(permutations.is_permutation)
try:
permutations_alt = PermutationsAlt()
test.test_permutation(permutations_alt.is_permutation)
except NameError:
# Alternate solutions are only defined
# in the solutions file
pass
if __name__ == '__main__':
main()
###Output
Success: test_permutation
|
05. Prepare_data.ipynb | ###Markdown
5. Prepare Data[](https://www.youtube.com/watch?v=tBfGYKITno8&list=PLLBUgWXdTBDg1Qgmwt4jKtVn9BWh5-zgy "Python Data Science")Much of data science and machine learning work is getting clean data into the correct form. This may include data cleansing to remove outliers or bad information, scaling for machine learning algorithms, splitting into train and test sets, and enumeration of string data. All of this needs to happen before regression, classification, or other model training. Fortunately, there are functions that help with automating data preparation. Generate Sample DataRun the following cell to generate the sample data that is corrupted with NaN (not a number) and outliers that are corrupted data points far outside of the expected trend.
###Code
import numpy as np
import pandas as pd
np.random.seed(1)
n = 100
tt = np.linspace(0,n-1,n)
x = np.random.rand(n)+10+np.sqrt(tt)
y = np.random.normal(10,x*0.01,n)
x[1] = np.nan; y[2] = np.nan # 2 NaN (not a number)
for i in range(3): # add 3 outliers (bad data)
ri = np.random.randint(0,n)
x[ri] += np.random.rand()*100
data = pd.DataFrame(np.vstack((tt,x,y)).T,\
columns=['time','x','y'])
data.head()
###Output
_____no_output_____
###Markdown
 Visualize DataThe outliers are shown on a semi-logy plot. The `NaN` values do not show on the plot and are missing points.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.semilogy(tt,x,'r.',label='x')
plt.semilogy(tt,y,'b.',label='y')
plt.legend(); plt.xlabel('time')
plt.text(50,60,'Outliers')
plt.show()
###Output
_____no_output_____
###Markdown
 Remove Outliers and Bad DataNaN values are removed with `numpy` by identifying rows `ix` that contain `NaN`. Next, the rows are removed with `z=z[~iz]` where `~` is a bitwise `not` operator.
###Code
z = np.array([[ 1, 2],
[ np.nan, 3],
[ 4, np.nan],
[ 5, 6]])
iz = np.any(np.isnan(z), axis=1)
print(~iz)
z = z[~iz]
print(z)
###Output
_____no_output_____
###Markdown
The method `dropna` is a command to drop `NaN` rows in a `pandas` `DataFrame`. Rows 1 and 2 are dropped.
###Code
# drop any row with bad (NaN) values
data = data.dropna()
data.head()
###Output
_____no_output_____
###Markdown
There are several graphical techniques to help detect outliers. A box or histogram plot shows the 3 outlying points.
###Code
plt.boxplot(data['x'])
plt.show()
###Output
_____no_output_____
###Markdown
A Grubbs test or [other statistical measures](https://towardsdatascience.com/ways-to-detect-and-remove-the-outliers-404d16608dba) can detect outliers. The Grubbs test, in particular, assumes univariate, normally distributed data and is intended to detect only a single outlier. In practice, many outliers can be eliminated by removing points that violate a change limit or upper / lower bounds. The statement `data[data['x']<30]` keeps the rows where x is less than 30.
###Code
data = data[data['x']<30]
plt.boxplot(data['x'])
plt.show()
###Output
_____no_output_____
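###Markdown
The Grubbs test mentioned above can be written in a few lines with `scipy.stats`. The cell below is only a sketch of the idea (the 5% significance level and applying it to the already-cleaned `x` column are assumptions), not part of the original workflow:
###Code
# Sketch of a single-outlier Grubbs test (assumes roughly normal, univariate data)
import numpy as np
from scipy import stats

def grubbs_test(values, alpha=0.05):
    x = np.asarray(values, dtype=float)
    n = len(x)
    g = np.max(np.abs(x - x.mean())) / x.std(ddof=1)   # test statistic
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)         # t critical value
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return g, g_crit, g > g_crit

# On the cleaned data the most extreme point should no longer be flagged
g, g_crit, flag = grubbs_test(data['x'])
print(f'G = {g:.2f}, critical value = {g_crit:.2f}, outlier detected: {flag}')
###Output
_____no_output_____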
###Markdown
 Time ActivityWithout looking at a clock, run this cell to record 1 second intervals for 10 seconds. When you run the cell, press `Enter` everytime you think 1 second has passed. After you collect the data, use a boxplot to identify any data points in `tsec` that are outliers.
###Code
import time
from IPython.display import clear_output
tsec = []
input('Press "Enter" to record 1 second intervals'); t = time.time()
for i in range(10):
clear_output(); input('Press "Enter": ' + str(i+1))
tsec.append(time.time()-t); t = time.time()
clear_output(); print('Completed. Add boxplot to identify outliers')
# Add a boxplot to identify outliers
###Output
_____no_output_____
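###Markdown
One possible way to complete the activity, assuming the timing loop above was run and `tsec` holds the recorded intervals:
###Code
# Boxplot of the recorded intervals; points beyond the whiskers are outliers
import matplotlib.pyplot as plt
plt.boxplot(tsec)
plt.ylabel('seconds per interval')
plt.show()
###Output
_____no_output_____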
###Markdown
 Scale DataThe `sklearn` package has a `preprocessing` module to implement common scaling methods. The `StandardScalar` is shown below where each column is normalized to zero mean and a standard deviation of one. The common scaling methods `fit_transform(X)` to fit and transform, `transform(X)` to transform based on another fit, and `inverse_transform(Xs)` to scale back to the original representation.
###Code
from sklearn.preprocessing import StandardScaler
s = StandardScaler()
ds = s.fit_transform(data)
print(ds[0:5]) # print 5 rows
###Output
_____no_output_____
###Markdown
The value `ds` is returned as a `numpy` array so we need to convert it back to a `pandas` `DataFrame`, re-using the column names from `data`.
###Code
ds = pd.DataFrame(ds,columns=data.columns)
ds.head()
###Output
_____no_output_____
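###Markdown
Although not used later in this notebook, `inverse_transform` maps the scaled values back to the original units; a short sketch using the scaler `s` fitted above:
###Code
# Undo the scaling to recover the original values (within floating point error)
unscaled = pd.DataFrame(s.inverse_transform(ds), columns=data.columns)
unscaled.head()
###Output
_____no_output_____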
###Markdown
 Divide DataData is divided into train and test sets to separate a fraction of the rows for evaluating classification or regression models. A typical split is 80% for training and 20% for testing, although the range depends on how much data is available and the objective of the study.
###Code
divide = int(len(ds)*0.8)
train = ds[0:divide]
test = ds[divide:]
print(len(train),len(test))
###Output
_____no_output_____
###Markdown
The `train_test_split` is a function in `sklearn` for the specific purpose of splitting data into train and test sets. There are options such as `shuffle=True` to randomize the selection in each set.
###Code
from sklearn.model_selection import train_test_split
train,test = train_test_split(ds, test_size=0.2, shuffle=True)
print(len(train),len(test))
###Output
_____no_output_____
###Markdown
TCLab Activity Data with Bad Values & OutliersGenerate a new data file with some randomly inserted bad data (3 minutes) or read the data file from [an online link](https://apmonitor.com/do/uploads/Main/tclab_bad_data.txt) with the following code.
###Code
import tclab, time, csv
import numpy as np
try:
with tclab.TCLab() as lab:
with open('05-tclab.csv',mode='w',newline='') as f:
cw = csv.writer(f)
cw.writerow(['Time','Q1','Q2','T1','T2'])
print('t Q1 Q2 T1 T2')
for t in range(180):
T1 = lab.T1; T2 = lab.T2
# insert bad values
bad = np.random.randint(0,30)
T1=np.nan if bad==10 else T1
T2=np.nan if bad==15 else T2
# insert random number (possibly outlier)
outlier = np.random.randint(-40,150)
T1=outlier if bad==20 else T1
T2=outlier if bad==25 else T2
# change heater
if t%30==0:
Q1 = np.random.randint(0,81)
Q2 = np.random.randint(0,81)
lab.Q1(Q1); lab.Q2(Q2)
cw.writerow([t,Q1,Q2,T1,T2])
if t%10==0:
print(t,Q1,Q2,T1,T2)
time.sleep(1)
data5=pd.read_csv('05-tclab.csv')
except:
print('Connect TCLab to generate new data')
print('Importing data from online source')
url = 'http://apmonitor.com/do/uploads/Main/tclab_bad_data.txt'
data5=pd.read_csv(url)
###Output
_____no_output_____
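###Markdown
As a follow-on exercise, the cleaning steps from earlier in this notebook could be applied to `data5`. The cell below is a sketch that assumes the columns are named as in the CSV written above (`T1`, `T2`) and that readings outside roughly 15 to 100 degrees C are bad data; adjust the bounds as needed:
###Code
# Remove NaN rows and temperature readings outside assumed physical bounds
data5 = data5.dropna()
data5 = data5[(data5['T1']>15) & (data5['T1']<100)]
data5 = data5[(data5['T2']>15) & (data5['T2']<100)]
plt.boxplot([data5['T1'],data5['T2']],labels=['T1','T2'])
plt.show()
###Output
_____no_output_____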
###Markdown
5. Prepare Data[Data Science Playlist on YouTube](https://www.youtube.com/watch?v=tBfGYKITno8&list=PLLBUgWXdTBDg1Qgmwt4jKtVn9BWh5-zgy)[](https://www.youtube.com/watch?v=tBfGYKITno8&list=PLLBUgWXdTBDg1Qgmwt4jKtVn9BWh5-zgy "Python Data Science")Much of data science and machine learning work is getting clean data into the correct form. This may include data cleansing to remove outliers or bad information, scaling for machine learning algorithms, splitting into train and test sets, and enumeration of string data. All of this needs to happen before regression, classification, or other model training. Fortunately, there are functions that help with automating data preparation. Generate Sample DataRun the following cell to generate the sample data that is corrupted with NaN (not a number) and outliers that are corrupted data points far outside of the expected trend.
###Code
import numpy as np
import pandas as pd
np.random.seed(1)
n = 100
tt = np.linspace(0,n-1,n)
x = np.random.rand(n)+10+np.sqrt(tt)
y = np.random.normal(10,x*0.01,n)
x[1] = np.nan; y[2] = np.nan # 2 NaN (not a number)
for i in range(3): # add 3 outliers (bad data)
ri = np.random.randint(0,n)
x[ri] += np.random.rand()*100
data = pd.DataFrame(np.vstack((tt,x,y)).T,\
columns=['time','x','y'])
data.head()
###Output
_____no_output_____
###Markdown
 Visualize DataThe outliers are shown on a semi-logy plot. The `NaN` values do not show on the plot and are missing points.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.semilogy(tt,x,'r.',label='x')
plt.semilogy(tt,y,'b.',label='y')
plt.legend(); plt.xlabel('time')
plt.text(50,60,'Outliers')
plt.show()
###Output
_____no_output_____
###Markdown
 Remove Outliers and Bad DataNaN values are removed with `numpy` by identifying rows `ix` that contain `NaN`. Next, the rows are removed with `z=z[~iz]` where `~` is a bitwise `not` operator.
###Code
z = np.array([[ 1, 2],
[ np.nan, 3],
[ 4, np.nan],
[ 5, 6]])
iz = np.any(np.isnan(z), axis=1)
print(~iz)
z = z[~iz]
print(z)
###Output
_____no_output_____
###Markdown
The method `dropna` is a command to drop `NaN` rows in a `pandas` `DataFrame`. Rows 1 and 2 are dropped.
###Code
# drop any row with bad (NaN) values
data = data.dropna()
data.head()
###Output
_____no_output_____
###Markdown
There are several graphical techniques to help detect outliers. A box or histogram plot shows the 3 outlying points.
###Code
plt.boxplot(data['x'])
plt.show()
###Output
_____no_output_____
###Markdown
A Grubbs test or [other statistical measures](https://towardsdatascience.com/ways-to-detect-and-remove-the-outliers-404d16608dba) can detect outliers. The Grubbs test, in particular, assumes univariate, normally distributed data and is intended to detect only a single outlier. In practice, many outliers can be eliminated by removing points that violate a change limit or upper / lower bounds. The statement `data[data['x']<30]` keeps the rows where x is less than 30.
###Code
data = data[data['x']<30]
plt.boxplot(data['x'])
plt.show()
###Output
_____no_output_____
###Markdown
 Time ActivityWithout looking at a clock, run this cell to record 1 second intervals for 10 seconds. When you run the cell, press `Enter` everytime you think 1 second has passed. After you collect the data, use a boxplot to identify any data points in `tsec` that are outliers.
###Code
import time
from IPython.display import clear_output
tsec = []
input('Press "Enter" to record 1 second intervals'); t = time.time()
for i in range(10):
clear_output(); input('Press "Enter": ' + str(i+1))
tsec.append(time.time()-t); t = time.time()
clear_output(); print('Completed. Add boxplot to identify outliers')
# Add a boxplot to identify outliers
###Output
_____no_output_____
###Markdown
 Scale DataThe `sklearn` package has a `preprocessing` module to implement common scaling methods. The `StandardScalar` is shown below where each column is normalized to zero mean and a standard deviation of one. The common scaling methods `fit_transform(X)` to fit and transform, `transform(X)` to transform based on another fit, and `inverse_transform(Xs)` to scale back to the original representation.
###Code
from sklearn.preprocessing import StandardScaler
s = StandardScaler()
ds = s.fit_transform(data)
print(ds[0:5]) # print 5 rows
###Output
_____no_output_____
###Markdown
The value `ds` is returned as a `numpy` array so we need to convert it back to a `pandas` `DataFrame`, re-using the column names from `data`.
###Code
ds = pd.DataFrame(ds,columns=data.columns)
ds.head()
###Output
_____no_output_____
###Markdown
 Divide DataData is divided into train and test sets to separate a fraction of the rows for evaluating classification or regression models. A typical split is 80% for training and 20% for testing, although the range depends on how much data is available and the objective of the study.
###Code
divide = int(len(ds)*0.8)
train = ds[0:divide]
test = ds[divide:]
print(len(train),len(test))
###Output
_____no_output_____
###Markdown
The `train_test_split` is a function in `sklearn` for the specific purpose of splitting data into train and test sets. There are options such as `shuffle=True` to randomize the selection in each set.
###Code
from sklearn.model_selection import train_test_split
train,test = train_test_split(ds, test_size=0.2, shuffle=True)
print(len(train),len(test))
###Output
_____no_output_____
###Markdown
TCLab Activity Data with Bad Values & OutliersGenerate a new data file with some randomly inserted bad data (3 minutes) or read the data file from [an online link](https://apmonitor.com/do/uploads/Main/tclab_bad_data.txt) with the following code.
###Code
import tclab, time, csv
import numpy as np
try:
with tclab.TCLab() as lab:
with open('05-tclab.csv',mode='w',newline='') as f:
cw = csv.writer(f)
cw.writerow(['Time','Q1','Q2','T1','T2'])
print('t Q1 Q2 T1 T2')
for t in range(180):
T1 = lab.T1; T2 = lab.T2
# insert bad values
bad = np.random.randint(0,30)
T1=np.nan if bad==10 else T1
T2=np.nan if bad==15 else T2
# insert random number (possibly outlier)
outlier = np.random.randint(-40,150)
T1=outlier if bad==20 else T1
T2=outlier if bad==25 else T2
# change heater
if t%30==0:
Q1 = np.random.randint(0,81)
Q2 = np.random.randint(0,81)
lab.Q1(Q1); lab.Q2(Q2)
cw.writerow([t,Q1,Q2,T1,T2])
if t%10==0:
print(t,Q1,Q2,T1,T2)
time.sleep(1)
data5=pd.read_csv('05-tclab.csv')
except:
print('Connect TCLab to generate new data')
print('Importing data from online source')
url = 'http://apmonitor.com/do/uploads/Main/tclab_bad_data.txt'
data5=pd.read_csv(url)
###Output
_____no_output_____ |
Top 100 interview Questions/88. Merge Sorted Array.ipynb | ###Markdown
88. Merge Sorted ArrayGiven two sorted integer arrays nums1 and nums2, merge nums2 into nums1 as one sorted array.Note:The number of elements initialized in nums1 and nums2 are m and n respectively.You may assume that nums1 has enough space (size that is greater or equal to m + n) to hold additional elements from nums2.Example:Input:nums1 = [1,2,3,0,0,0], m = 3nums2 = [2,5,6], n = 3Output: [1,2,2,3,5,6]
###Code
from typing import List

class Solution:
    def merge(self, nums1: List[int], m: int, nums2: List[int], n: int) -> None:
        # Fill nums1 from the back so no elements are overwritten before they are used
        p1 = m - 1          # last initialized element of nums1
        p2 = n - 1          # last element of nums2
        p = m + n - 1       # last slot of nums1
        while p1 >= 0 and p2 >= 0:
            if nums1[p1] < nums2[p2]:
                nums1[p] = nums2[p2]
                p2 -= 1
            else:
                nums1[p] = nums1[p1]
                p1 -= 1
            p -= 1
        # Copy any remaining nums2 elements (leftover nums1 elements are already in place)
        nums1[:(p2+1)] = nums2[:(p2+1)]
###Output
_____no_output_____ |
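###Markdown
A quick check of the in-place merge with the example from the problem statement (added here for illustration; the LeetCode judge normally supplies the inputs):
###Code
# Expected result: [1,2,2,3,5,6]
nums1 = [1,2,3,0,0,0]
nums2 = [2,5,6]
Solution().merge(nums1, 3, nums2, 3)
print(nums1)
###Output
_____no_output_____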
tutorials/asr/Streaming_ASR.ipynb | ###Markdown
Streaming ASRIn this tutorial, we will look at one way to use one of NeMo's pretrained Conformer-CTC models for streaming inference. We will first look at some use cases where we may need streaming inference and then we will work towards developing a method for transcribing a long audio file using streaming. Why Stream?Streaming inference may be needed in one of the following scenarios:* Real-time or close to real-time inference for live transcriptions* Offline transcriptions of very long audioIn this tutorial, we will mainly focus on streaming for handling long form audio and close to real-time inference with CTC based models. For training ASR models we usually use short segments of audio (<20s) that may be smaller chunks of a long audio that is aligned with the transcriptions and segmented into smaller chunks (see [tools/](https://github.com/NVIDIA/NeMo/tree/main/tools) for some great tools to do this). For running inference on long audio files we are restricted by the available GPU memory that dictates the maximum length of audio that can be transcribed in one inference call. We will take a look at one of the ways to overcome this restriction using NeMo's Conformer-CTC ASR model. Conformer-CTCConformer-CTC models distributed with NeMo use a combination of self-attention and convolution modules to achieve the best of the two approaches, the self-attention layers can learn the global interaction while the convolutions efficiently capture the local correlations. Use of self-attention layers comes with a cost of increased memory usage at a quadratic rate with the sequence length. That means that transcribing long audio files with Conformer-CTC models needs streaming inference to break up the audio into smaller chunks. We will develop one method to do such inference through the course of this tutorial. DataTo demonstrate transcribing a long audio file we will use utterances from the dev-clean set of the [mini Librispeech corpus](https://www.openslr.org/31/).
###Code
# If something goes wrong during data processing, un-comment the following line to delete the cached dataset
# !rm -rf datasets/mini-dev-clean
!mkdir -p datasets/mini-dev-clean
!python ../../scripts/dataset_processing/get_librispeech_data.py \
--data_root "datasets/mini-dev-clean/" \
--data_sets dev_clean_2
manifest = "datasets/mini-dev-clean/dev_clean_2.json"
###Output
_____no_output_____
###Markdown
Let's create a long audio that is about 15 minutes long by concatenating audio from dev-clean and also create the corresponding concatenated transcript.
###Code
import json
def concat_audio(manifest_file, final_len=3600):
concat_len = 0
final_transcript = ""
with open("concat_file.txt", "w") as cat_f:
while concat_len < final_len:
with open(manifest_file, "r") as mfst_f:
for l in mfst_f:
row = json.loads(l.strip())
if concat_len >= final_len:
break
cat_f.write(f"file {row['audio_filepath']}\n")
final_transcript += (" " + row['text'])
concat_len += float(row['duration'])
return concat_len, final_transcript
new_duration, ref_transcript = concat_audio(manifest, 15*60)
concat_audio_path = "datasets/mini-dev-clean/concatenated_audio.wav"
!ffmpeg -t {new_duration} -safe 0 -f concat -i concat_file.txt -c copy -t {new_duration} {concat_audio_path} -y
print("Finished concatenating audio file!")
###Output
_____no_output_____
###Markdown
Streaming with CTC based modelsNow let's try to transcribe the long audio file created above using a conformer-large model.
###Code
import torch
import nemo.collections.asr as nemo_asr
import contextlib
import gc
device = 'cuda' if torch.cuda.is_available() else 'cpu'
device
###Output
_____no_output_____
###Markdown
We are mainly concerned about decoding on the GPU in this tutorial. CPU decoding may be able to handle longer files but would also not be as fast as GPU decoding. Let's check if we can run transcribe() on the long audio file that we created above.
###Code
# Clear up memory
torch.cuda.empty_cache()
gc.collect()
model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("stt_en_conformer_ctc_large", map_location=device)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# device = 'cpu' # You can transcribe even longer samples on the CPU, though it will take much longer !
model = model.to(device)
# Helper for torch amp autocast
if torch.cuda.is_available():
autocast = torch.cuda.amp.autocast
else:
@contextlib.contextmanager
def autocast():
print("AMP was not available, using FP32!")
yield
###Output
_____no_output_____
###Markdown
The call to transcribe() below should fail with a "CUDA out of memory" error when run on a GPU with 32 GB memory.
###Code
with autocast():
transcript = model.transcribe([concat_audio_path], batch_size=1)[0]
# Clear up memory
torch.cuda.empty_cache()
gc.collect()
###Output
_____no_output_____
###Markdown
Buffer mechanism for streaming long audio filesOne way to transcribe long audio with a Conformer-CTC model is to split the audio into consecutive smaller chunks and run inference on each chunk. Care should be taken to have enough context for the audio at either edge for accurate transcription. Let's introduce some terminology here to help us navigate the rest of this tutorial. * Buffer size is the length of audio on which inference is run* Chunk size is the length of new audio that is added to the buffer.An audio buffer is made up of a chunk of audio with some padded audio from the previous chunk. In order to make the best predictions with enough context for the beginning and end portions of the buffer, we only collect tokens for the middle portion of the buffer of length equal to the size of each chunk. Let's suppose that the maximum length of audio that can be transcribed with the conformer-large model is 20s; then we can use 20s as the buffer size and 15s (for example) as the chunk size, so one hour of audio is broken into 240 chunks of 15s each. Let's take a look at a few audio buffers that may be created for this audio.
###Code
# A simple iterator class to return successive chunks of samples
class AudioChunkIterator():
def __init__(self, samples, frame_len, sample_rate):
self._samples = samples
        self._chunk_len = frame_len*sample_rate  # use the frame_len argument, not a global
self._start = 0
self.output=True
def __iter__(self):
return self
def __next__(self):
if not self.output:
raise StopIteration
last = int(self._start + self._chunk_len)
if last <= len(self._samples):
chunk = self._samples[self._start: last]
self._start = last
else:
chunk = np.zeros([int(self._chunk_len)], dtype='float32')
samp_len = len(self._samples) - self._start
chunk[0:samp_len] = self._samples[self._start:len(self._samples)]
self.output = False
return chunk
# a helper function for extracting samples as a numpy array from the audio file
import soundfile as sf
import librosa  # needed by get_samples when resampling to target_sr is required
def get_samples(audio_file, target_sr=16000):
with sf.SoundFile(audio_file, 'r') as f:
dtype = 'int16'
sample_rate = f.samplerate
samples = f.read(dtype=dtype)
if sample_rate != target_sr:
samples = librosa.core.resample(samples, sample_rate, target_sr)
samples=samples.astype('float32')/32768
samples = samples.transpose()
return samples
###Output
_____no_output_____
###Markdown
Let's take a look at each chunk of speech that is used for decoding.
###Code
import matplotlib.pyplot as plt
samples = get_samples(concat_audio_path)
sample_rate = model.preprocessor._cfg['sample_rate']
chunk_len_in_secs = 1
chunk_reader = AudioChunkIterator(samples, chunk_len_in_secs, sample_rate)
count = 0
for chunk in chunk_reader:
count +=1
plt.plot(chunk)
plt.show()
if count >= 5:
break
###Output
_____no_output_____
###Markdown
Now, let's plot the actual buffers at each stage after a new chunk is added to the buffer. Audio buffer can be thought of as a fixed size queue with each incoming chunk added at the end of the buffer and the oldest samples removed from the beginning.
###Code
import numpy as np
context_len_in_secs = 1
buffer_len_in_secs = chunk_len_in_secs + 2* context_len_in_secs
buffer_len = sample_rate*buffer_len_in_secs
sampbuffer = np.zeros([buffer_len], dtype=np.float32)
chunk_reader = AudioChunkIterator(samples, chunk_len_in_secs, sample_rate)
chunk_len = sample_rate*chunk_len_in_secs
count = 0
for chunk in chunk_reader:
count +=1
sampbuffer[:-chunk_len] = sampbuffer[chunk_len:]
sampbuffer[-chunk_len:] = chunk
plt.plot(sampbuffer)
plt.show()
if count >= 5:
break
###Output
_____no_output_____
###Markdown
Now that we have a method to split the long audio into smaller chunks, we can work on transcribing the individual buffers and merging the outputs to get the transcription of the whole audio.First, we implement some helper functions to help load the buffers into the data layer.
###Code
from nemo.core.classes import IterableDataset
def speech_collate_fn(batch):
"""collate batch of audio sig, audio len
Args:
batch (FloatTensor, LongTensor): A tuple of tuples of signal, signal lengths.
This collate func assumes the signals are 1d torch tensors (i.e. mono audio).
"""
_, audio_lengths = zip(*batch)
max_audio_len = 0
has_audio = audio_lengths[0] is not None
if has_audio:
max_audio_len = max(audio_lengths).item()
audio_signal= []
for sig, sig_len in batch:
if has_audio:
sig_len = sig_len.item()
if sig_len < max_audio_len:
pad = (0, max_audio_len - sig_len)
sig = torch.nn.functional.pad(sig, pad)
audio_signal.append(sig)
if has_audio:
audio_signal = torch.stack(audio_signal)
audio_lengths = torch.stack(audio_lengths)
else:
audio_signal, audio_lengths = None, None
return audio_signal, audio_lengths
# simple data layer to pass audio signal
class AudioBuffersDataLayer(IterableDataset):
def __init__(self):
super().__init__()
def __iter__(self):
return self
def __next__(self):
if self._buf_count == len(self.signal) :
raise StopIteration
self._buf_count +=1
return torch.as_tensor(self.signal[self._buf_count-1], dtype=torch.float32), \
torch.as_tensor(self.signal_shape[0], dtype=torch.int64)
def set_signal(self, signals):
self.signal = signals
self.signal_shape = self.signal[0].shape
self._buf_count = 0
def __len__(self):
return 1
###Output
_____no_output_____
###Markdown
Next we implement a class that transcribes the audio buffers and merges the tokens corresponding to a chunk of audio within each buffer. For each buffer, we pick tokens corresponding to one chunk length of audio. The chunk within each buffer is chosen such that there is equal left and right context available to the audio within the chunk.For example, if the chunk size is 1s and the buffer size is 3s, we collect tokens corresponding to audio starting from 1s to 2s within each buffer. Conformer-CTC models have a model stride of 4, i.e., a token is produced for every 4 feature vectors in the time domain. MelSpectrogram features are generated once every 10 ms, so a token is produced for every 40 ms of audio.**Note:** The inherent assumption here is that the output tokens from the model are well aligned with the corresponding audio segments. This may not always be true for models trained with CTC loss, so this method of streaming inference may not always work with CTC based models.
###Code
from torch.utils.data import DataLoader
import math
class ChunkBufferDecoder:
def __init__(self,asr_model, stride, chunk_len_in_secs=1, buffer_len_in_secs=3):
self.asr_model = asr_model
self.asr_model.eval()
self.data_layer = AudioBuffersDataLayer()
self.data_loader = DataLoader(self.data_layer, batch_size=1, collate_fn=speech_collate_fn)
self.buffers = []
self.all_preds = []
self.chunk_len = chunk_len_in_secs
self.buffer_len = buffer_len_in_secs
assert(chunk_len_in_secs<=buffer_len_in_secs)
feature_stride = asr_model._cfg.preprocessor['window_stride']
self.model_stride_in_secs = feature_stride * stride
self.n_tokens_per_chunk = math.ceil(self.chunk_len / self.model_stride_in_secs)
self.blank_id = len(asr_model.decoder.vocabulary)
self.plot=False
@torch.no_grad()
def transcribe_buffers(self, buffers, merge=True, plot=False):
self.plot = plot
self.buffers = buffers
self.data_layer.set_signal(buffers[:])
self._get_batch_preds()
return self.decode_final(merge)
def _get_batch_preds(self):
device = self.asr_model.device
for batch in iter(self.data_loader):
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(device), audio_signal_len.to(device)
log_probs, encoded_len, predictions = self.asr_model(input_signal=audio_signal, input_signal_length=audio_signal_len)
preds = torch.unbind(predictions)
for pred in preds:
self.all_preds.append(pred.cpu().numpy())
def decode_final(self, merge=True, extra=0):
self.unmerged = []
self.toks_unmerged = []
# index for the first token corresponding to a chunk of audio would be len(decoded) - 1 - delay
delay = math.ceil((self.chunk_len + (self.buffer_len - self.chunk_len) / 2) / self.model_stride_in_secs)
decoded_frames = []
all_toks = []
for pred in self.all_preds:
ids, toks = self._greedy_decoder(pred, self.asr_model.tokenizer)
decoded_frames.append(ids)
all_toks.append(toks)
for decoded in decoded_frames:
self.unmerged += decoded[len(decoded) - 1 - delay:len(decoded) - 1 - delay + self.n_tokens_per_chunk]
if self.plot:
for i, tok in enumerate(all_toks):
plt.plot(self.buffers[i])
plt.show()
print("\nGreedy labels collected from this buffer")
print(tok[len(tok) - 1 - delay:len(tok) - 1 - delay + self.n_tokens_per_chunk])
self.toks_unmerged += tok[len(tok) - 1 - delay:len(tok) - 1 - delay + self.n_tokens_per_chunk]
print("\nTokens collected from succesive buffers before CTC merge")
print(self.toks_unmerged)
if not merge:
return self.unmerged
return self.greedy_merge(self.unmerged)
def _greedy_decoder(self, preds, tokenizer):
s = []
ids = []
for i in range(preds.shape[0]):
if preds[i] == self.blank_id:
s.append("_")
else:
pred = preds[i]
s.append(tokenizer.ids_to_tokens([pred.item()])[0])
ids.append(preds[i])
return ids, s
def greedy_merge(self, preds):
decoded_prediction = []
previous = self.blank_id
for p in preds:
if (p != previous or previous == self.blank_id) and p != self.blank_id:
decoded_prediction.append(p.item())
previous = p
hypothesis = self.asr_model.tokenizer.ids_to_text(decoded_prediction)
return hypothesis
###Output
_____no_output_____
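###Markdown
Before running the decoder, it helps to see the token bookkeeping with concrete numbers. The cell below only reproduces the index arithmetic from `ChunkBufferDecoder` for the 1s chunk / 3s buffer example in the text, assuming a model stride of 4 and a 10 ms feature stride:
###Code
# Worked example of the token arithmetic for chunk_len=1s, buffer_len=3s
import math
feature_stride = 0.01                       # seconds per feature vector
model_stride_in_secs = feature_stride * 4   # one token per 40 ms of audio
chunk_len, buffer_len = 1.0, 3.0
n_tokens_per_chunk = math.ceil(chunk_len / model_stride_in_secs)
delay = math.ceil((chunk_len + (buffer_len - chunk_len) / 2) / model_stride_in_secs)
print(f"tokens kept per buffer: {n_tokens_per_chunk}")  # 25 tokens ~ 1 s of audio
print(f"delay from buffer end:  {delay}")               # 50 tokens ~ the last 2 s
###Output
_____no_output_____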
###Markdown
To see how this chunk based decoder comes together, let's call the decoder with a few buffers we create from our long audio file.Some interesting experiments to try would be to see how changing sizes of the chunk and the context affects transcription accuracy.
###Code
chunk_len_in_secs = 4
context_len_in_secs = 2
buffer_len_in_secs = chunk_len_in_secs + 2* context_len_in_secs
n_buffers = 5
buffer_len = sample_rate*buffer_len_in_secs
sampbuffer = np.zeros([buffer_len], dtype=np.float32)
chunk_reader = AudioChunkIterator(samples, chunk_len_in_secs, sample_rate)
chunk_len = sample_rate*chunk_len_in_secs
count = 0
buffer_list = []
for chunk in chunk_reader:
count +=1
sampbuffer[:-chunk_len] = sampbuffer[chunk_len:]
sampbuffer[-chunk_len:] = chunk
buffer_list.append(np.array(sampbuffer))
if count >= n_buffers:
break
stride = 4 # 8 for Citrinet
asr_decoder = ChunkBufferDecoder(model, stride=stride, chunk_len_in_secs=chunk_len_in_secs, buffer_len_in_secs=buffer_len_in_secs )
transcription = asr_decoder.transcribe_buffers(buffer_list, plot=True)
# Final transcription after CTC merge
print(transcription)
###Output
_____no_output_____
###Markdown
Time to evaluate our streaming inference on the whole long file that we created.
###Code
# WER calculation
from nemo.collections.asr.metrics.wer import word_error_rate
# Collect all buffers from the audio file
sampbuffer = np.zeros([buffer_len], dtype=np.float32)
chunk_reader = AudioChunkIterator(samples, chunk_len_in_secs, sample_rate)
buffer_list = []
for chunk in chunk_reader:
sampbuffer[:-chunk_len] = sampbuffer[chunk_len:]
sampbuffer[-chunk_len:] = chunk
buffer_list.append(np.array(sampbuffer))
asr_decoder = ChunkBufferDecoder(model, stride=stride, chunk_len_in_secs=chunk_len_in_secs, buffer_len_in_secs=buffer_len_in_secs )
transcription = asr_decoder.transcribe_buffers(buffer_list, plot=False)
wer = word_error_rate(hypotheses=[transcription], references=[ref_transcript])
print(f"WER: {round(wer*100,2)}%")
###Output
_____no_output_____
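###Markdown
The experiment suggested earlier, varying chunk and context sizes, can be scripted as a small sweep that re-uses the helpers defined above. This is only a sketch (the particular sizes are arbitrary choices), and each configuration re-buffers and re-decodes the full 15 minute file, so it takes a while to run:
###Code
# Compare WER for a few (chunk, context) combinations
for chunk_s, context_s in [(2, 1), (4, 2), (8, 4)]:
    buf_s = chunk_s + 2 * context_s
    c_len = sample_rate * chunk_s
    buf = np.zeros([sample_rate * buf_s], dtype=np.float32)
    buffers = []
    for chunk in AudioChunkIterator(samples, chunk_s, sample_rate):
        buf[:-c_len] = buf[c_len:]
        buf[-c_len:] = chunk
        buffers.append(np.array(buf))
    decoder = ChunkBufferDecoder(model, stride=stride,
                                 chunk_len_in_secs=chunk_s,
                                 buffer_len_in_secs=buf_s)
    hyp = decoder.transcribe_buffers(buffers, plot=False)
    wer = word_error_rate(hypotheses=[hyp], references=[ref_transcript])
    print(f"chunk={chunk_s}s context={context_s}s WER={round(wer*100,2)}%")
###Output
_____no_output_____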
###Markdown
Streaming ASRIn this tutorial, we will look at one way to use one of NeMo's pretrained Conformer-CTC models for streaming inference. We will first look at some use cases where we may need streaming inference and then we will work towards developing a method for transcribing a long audio file using streaming. Why Stream?Streaming inference may be needed in one of the following scenarios:* Real-time or close to real-time inference for live transcriptions* Offline transcriptions of very long audioIn this tutorial, we will mainly focus on streaming for handling long form audio and close to real-time inference with CTC based models. For training ASR models we usually use short segments of audio (<20s) that may be smaller chunks of a long audio that is aligned with the transcriptions and segmented into smaller chunks (see [tools/](https://github.com/NVIDIA/NeMo/tree/main/tools) for some great tools to do this). For running inference on long audio files we are restricted by the available GPU memory that dictates the maximum length of audio that can be transcribed in one inference call. We will take a look at one of the ways to overcome this restriction using NeMo's Conformer-CTC ASR model. Conformer-CTCConformer-CTC models distributed with NeMo use a combination of self-attention and convolution modules to achieve the best of the two approaches, the self-attention layers can learn the global interaction while the convolutions efficiently capture the local correlations. Use of self-attention layers comes with a cost of increased memory usage at a quadratic rate with the sequence length. That means that transcribing long audio files with Conformer-CTC models needs streaming inference to break up the audio into smaller chunks. We will develop one method to do such inference through the course of this tutorial. DataTo demonstrate transcribing a long audio file we will use utterances from the dev-clean set of the [mini Librispeech corpus](https://www.openslr.org/31/).
###Code
# If something goes wrong during data processing, un-comment the following line to delete the cached dataset
# !rm -rf datasets/mini-dev-clean
!mkdir -p datasets/mini-dev-clean
!python ../../scripts/dataset_processing/get_librispeech_data.py \
--data_root "datasets/mini-dev-clean/" \
--data_sets dev_clean_2
manifest = "datasets/mini-dev-clean/dev_clean_2.json"
###Output
_____no_output_____
###Markdown
Let's create a long audio that is about 15 minutes long by concatenating audio from dev-clean and also create the corresponding concatenated transcript.
###Code
import json
def concat_audio(manifest_file, final_len=3600):
concat_len = 0
final_transcript = ""
with open("concat_file.txt", "w") as cat_f:
while concat_len < final_len:
with open(manifest_file, "r") as mfst_f:
for l in mfst_f:
row = json.loads(l.strip())
if concat_len >= final_len:
break
cat_f.write(f"file {row['audio_filepath']}\n")
final_transcript += (" " + row['text'])
concat_len += float(row['duration'])
return concat_len, final_transcript
new_duration, ref_transcript = concat_audio(manifest, 15*60)
concat_audio_path = "datasets/mini-dev-clean/concatenated_audio.wav"
!ffmpeg -t {new_duration} -safe 0 -f concat -i concat_file.txt -c copy -t {new_duration} {concat_audio_path} -y
print("Finished concatenating audio file!")
###Output
_____no_output_____
###Markdown
Streaming with CTC based modelsNow let's try to transcribe the long audio file created above using a conformer-large model.
###Code
import torch
import nemo.collections.asr as nemo_asr
import contextlib
import gc
device = 'cuda' if torch.cuda.is_available() else 'cpu'
device
###Output
_____no_output_____
###Markdown
We are mainly concerned about decoding on the GPU in this tutorial. CPU decoding may be able to handle longer files but would also not be as fast as GPU decoding. Let's check if we can run transcribe() on the long audio file that we created above.
###Code
# Clear up memory
torch.cuda.empty_cache()
gc.collect()
model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("stt_en_conformer_ctc_large", map_location=device)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# device = 'cpu' # You can transcribe even longer samples on the CPU, though it will take much longer !
model = model.to(device)
# Helper for torch amp autocast
if torch.cuda.is_available():
autocast = torch.cuda.amp.autocast
else:
@contextlib.contextmanager
def autocast():
print("AMP was not available, using FP32!")
yield
###Output
_____no_output_____
###Markdown
The call to transcribe() below should fail with a "CUDA out of memory" error when run on a GPU with 32 GB memory.
###Code
with autocast():
transcript = model.transcribe([concat_audio_path], batch_size=1)[0]
# Clear up memory
torch.cuda.empty_cache()
gc.collect()
###Output
_____no_output_____
###Markdown
Buffer mechanism for streaming long audio filesOne way to transcribe long audio with a Conformer-CTC model is to split the audio into consecutive smaller chunks and run inference on each chunk. Care should be taken to have enough context for the audio at either edge for accurate transcription. Let's introduce some terminology here to help us navigate the rest of this tutorial. * Buffer size is the length of audio on which inference is run* Chunk size is the length of new audio that is added to the buffer.An audio buffer is made up of a chunk of audio with some padded audio from the previous chunk. In order to make the best predictions with enough context for the beginning and end portions of the buffer, we only collect tokens for the middle portion of the buffer of length equal to the size of each chunk. Let's suppose that the maximum length of audio that can be transcribed with the conformer-large model is 20s; then we can use 20s as the buffer size and 15s (for example) as the chunk size, so one hour of audio is broken into 240 chunks of 15s each. Let's take a look at a few audio buffers that may be created for this audio.
###Code
# A simple iterator class to return successive chunks of samples
class AudioChunkIterator():
def __init__(self, samples, frame_len, sample_rate):
self._samples = samples
        self._chunk_len = frame_len*sample_rate  # use the frame_len argument, not a global
self._start = 0
self.output=True
def __iter__(self):
return self
def __next__(self):
if not self.output:
raise StopIteration
last = int(self._start + self._chunk_len)
if last <= len(self._samples):
chunk = self._samples[self._start: last]
self._start = last
else:
chunk = np.zeros([int(self._chunk_len)], dtype='float32')
samp_len = len(self._samples) - self._start
chunk[0:samp_len] = self._samples[self._start:len(self._samples)]
self.output = False
return chunk
# a helper function for extracting samples as a numpy array from the audio file
import soundfile as sf
def get_samples(audio_file, target_sr=16000):
with sf.SoundFile(audio_file, 'r') as f:
dtype = 'int16'
sample_rate = f.samplerate
samples = f.read(dtype=dtype)
if sample_rate != target_sr:
samples = librosa.core.resample(samples, sample_rate, target_sr)
samples=samples.astype('float32')/32768
samples = samples.transpose()
return samples
###Output
_____no_output_____
###Markdown
Let's take a look at each chunk of speech that is used for decoding.
###Code
import matplotlib.pyplot as plt
samples = get_samples(concat_audio_path)
sample_rate = model.preprocessor._cfg['sample_rate']
chunk_len_in_secs = 1
chunk_reader = AudioChunkIterator(samples, chunk_len_in_secs, sample_rate)
count = 0
for chunk in chunk_reader:
count +=1
plt.plot(chunk)
plt.show()
if count >= 5:
break
###Output
_____no_output_____
###Markdown
Now, let's plot the actual buffers at each stage after a new chunk is added to the buffer. An audio buffer can be thought of as a fixed-size queue: each incoming chunk is added at the end of the buffer and the oldest samples are removed from the beginning.
###Code
import numpy as np
context_len_in_secs = 1
buffer_len_in_secs = chunk_len_in_secs + 2* context_len_in_secs
buffer_len = sample_rate*buffer_len_in_secs
sampbuffer = np.zeros([buffer_len], dtype=np.float32)
chunk_reader = AudioChunkIterator(samples, chunk_len_in_secs, sample_rate)
chunk_len = sample_rate*chunk_len_in_secs
count = 0
for chunk in chunk_reader:
count +=1
sampbuffer[:-chunk_len] = sampbuffer[chunk_len:]
sampbuffer[-chunk_len:] = chunk
plt.plot(sampbuffer)
plt.show()
if count >= 5:
break
###Output
_____no_output_____
###Markdown
Now that we have a method to split the long audio into smaller chunks, we can work on transcribing the individual buffers and merging the outputs to get the transcription of the whole audio.First, we implement some helper functions to help load the buffers into the data layer.
###Code
from nemo.core.classes import IterableDataset
def speech_collate_fn(batch):
"""collate batch of audio sig, audio len
Args:
batch (FloatTensor, LongTensor): A tuple of tuples of signal, signal lengths.
This collate func assumes the signals are 1d torch tensors (i.e. mono audio).
"""
_, audio_lengths = zip(*batch)
max_audio_len = 0
has_audio = audio_lengths[0] is not None
if has_audio:
max_audio_len = max(audio_lengths).item()
audio_signal= []
for sig, sig_len in batch:
if has_audio:
sig_len = sig_len.item()
if sig_len < max_audio_len:
pad = (0, max_audio_len - sig_len)
sig = torch.nn.functional.pad(sig, pad)
audio_signal.append(sig)
if has_audio:
audio_signal = torch.stack(audio_signal)
audio_lengths = torch.stack(audio_lengths)
else:
audio_signal, audio_lengths = None, None
return audio_signal, audio_lengths
# simple data layer to pass audio signal
class AudioBuffersDataLayer(IterableDataset):
def __init__(self):
super().__init__()
def __iter__(self):
return self
def __next__(self):
if self._buf_count == len(self.signal) :
raise StopIteration
self._buf_count +=1
return torch.as_tensor(self.signal[self._buf_count-1], dtype=torch.float32), \
torch.as_tensor(self.signal_shape[0], dtype=torch.int64)
def set_signal(self, signals):
self.signal = signals
self.signal_shape = self.signal[0].shape
self._buf_count = 0
def __len__(self):
return 1
###Output
_____no_output_____
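###Markdown
To make the data flow of these helpers concrete, the short sketch below (illustration only; the decoder class in the next cell is what the tutorial actually uses) pushes two silent dummy buffers through the data layer and the collate function and prints the batched shapes the model would receive.
###Code
# Usage sketch for AudioBuffersDataLayer + speech_collate_fn with dummy (all-zero) buffers.
from torch.utils.data import DataLoader
dummy_buffers = [np.zeros(16000 * 3, dtype=np.float32) for _ in range(2)]  # two 3-second buffers at 16 kHz
demo_layer = AudioBuffersDataLayer()
demo_layer.set_signal(dummy_buffers)
demo_loader = DataLoader(demo_layer, batch_size=2, collate_fn=speech_collate_fn)
for audio_signal, audio_len in demo_loader:
    print(audio_signal.shape, audio_len)  # expected: torch.Size([2, 48000]) tensor([48000, 48000])
###Output
_____no_output_____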
###Markdown
Next we implement a class that transcribes audio buffers and merges the tokens corresponding to a chunk of audio within each buffer. For each buffer, we pick tokens corresponding to one chunk length of audio. The chunk within each buffer is chosen such that there is equal left and right context available to the audio within the chunk.For example, if the chunk size is 1s and the buffer size is 3s, we collect tokens corresponding to audio starting from 1s to 2s within each buffer. Conformer-CTC models have a model stride of 4, i.e., a token is produced for every 4 feature vectors in the time domain. MelSpectrogram features are generated once every 10 ms, so a token is produced for every 40 ms of audio.**Note:** The inherent assumption here is that the output tokens from the model are well aligned with the corresponding audio segments. This may not always be true for models trained with CTC loss, so this method of streaming inference may not always work with CTC-based models.
###Code
from torch.utils.data import DataLoader
import math
class ChunkBufferDecoder:
def __init__(self,asr_model, stride, chunk_len_in_secs=1, buffer_len_in_secs=3):
self.asr_model = asr_model
self.asr_model.eval()
self.data_layer = AudioBuffersDataLayer()
self.data_loader = DataLoader(self.data_layer, batch_size=1, collate_fn=speech_collate_fn)
self.buffers = []
self.all_preds = []
self.chunk_len = chunk_len_in_secs
self.buffer_len = buffer_len_in_secs
assert(chunk_len_in_secs<=buffer_len_in_secs)
feature_stride = asr_model._cfg.preprocessor['window_stride']
self.model_stride_in_secs = feature_stride * stride
self.n_tokens_per_chunk = math.ceil(self.chunk_len / self.model_stride_in_secs)
self.blank_id = len(asr_model.decoder.vocabulary)
self.plot=False
@torch.no_grad()
def transcribe_buffers(self, buffers, merge=True, plot=False):
self.plot = plot
self.buffers = buffers
self.data_layer.set_signal(buffers[:])
self._get_batch_preds()
return self.decode_final(merge)
def _get_batch_preds(self):
device = self.asr_model.device
for batch in iter(self.data_loader):
audio_signal, audio_signal_len = batch
audio_signal, audio_signal_len = audio_signal.to(device), audio_signal_len.to(device)
log_probs, encoded_len, predictions = self.asr_model(input_signal=audio_signal, input_signal_length=audio_signal_len)
preds = torch.unbind(predictions)
for pred in preds:
self.all_preds.append(pred.cpu().numpy())
def decode_final(self, merge=True, extra=0):
self.unmerged = []
self.toks_unmerged = []
# index for the first token corresponding to a chunk of audio would be len(decoded) - 1 - delay
delay = math.ceil((self.chunk_len + (self.buffer_len - self.chunk_len) / 2) / self.model_stride_in_secs)
decoded_frames = []
all_toks = []
for pred in self.all_preds:
ids, toks = self._greedy_decoder(pred, self.asr_model.tokenizer)
decoded_frames.append(ids)
all_toks.append(toks)
for decoded in decoded_frames:
self.unmerged += decoded[len(decoded) - 1 - delay:len(decoded) - 1 - delay + self.n_tokens_per_chunk]
if self.plot:
for i, tok in enumerate(all_toks):
plt.plot(self.buffers[i])
plt.show()
print("\nGreedy labels collected from this buffer")
print(tok[len(tok) - 1 - delay:len(tok) - 1 - delay + self.n_tokens_per_chunk])
self.toks_unmerged += tok[len(tok) - 1 - delay:len(tok) - 1 - delay + self.n_tokens_per_chunk]
print("\nTokens collected from succesive buffers before CTC merge")
print(self.toks_unmerged)
if not merge:
return self.unmerged
return self.greedy_merge(self.unmerged)
def _greedy_decoder(self, preds, tokenizer):
s = []
ids = []
for i in range(preds.shape[0]):
if preds[i] == self.blank_id:
s.append("_")
else:
pred = preds[i]
s.append(tokenizer.ids_to_tokens([pred.item()])[0])
ids.append(preds[i])
return ids, s
def greedy_merge(self, preds):
decoded_prediction = []
previous = self.blank_id
for p in preds:
if (p != previous or previous == self.blank_id) and p != self.blank_id:
decoded_prediction.append(p.item())
previous = p
hypothesis = self.asr_model.tokenizer.ids_to_text(decoded_prediction)
return hypothesis
###Output
_____no_output_____
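###Markdown
To make the index arithmetic inside decode_final concrete, here is a small sketch that evaluates the same formulas used for n_tokens_per_chunk and delay in the class above, plugging in the hypothetical 1 s chunk / 3 s buffer example from the earlier explanation.
###Code
# Worked example (sketch) of the token bookkeeping: 1 s chunks, 3 s buffers,
# a 10 ms feature stride and a model stride of 4, i.e. one token per 40 ms of audio.
import math
feature_stride_in_secs = 0.01
model_stride = 4
model_stride_in_secs = feature_stride_in_secs * model_stride          # 0.04 s of audio per token
chunk_secs_example, buffer_secs_example = 1, 3
tokens_per_chunk_example = math.ceil(chunk_secs_example / model_stride_in_secs)  # 25 tokens per chunk
delay_example = math.ceil((chunk_secs_example + (buffer_secs_example - chunk_secs_example) / 2) / model_stride_in_secs)  # 50
print(tokens_per_chunk_example, delay_example)
###Output
_____no_output_____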
###Markdown
To see how this chunk-based decoder comes together, let's call the decoder with a few buffers we create from our long audio file.Some interesting experiments to try would be to see how changing the chunk and context sizes affects transcription accuracy.
###Code
chunk_len_in_secs = 4
context_len_in_secs = 2
buffer_len_in_secs = chunk_len_in_secs + 2* context_len_in_secs
n_buffers = 5
buffer_len = sample_rate*buffer_len_in_secs
sampbuffer = np.zeros([buffer_len], dtype=np.float32)
chunk_reader = AudioChunkIterator(samples, chunk_len_in_secs, sample_rate)
chunk_len = sample_rate*chunk_len_in_secs
count = 0
buffer_list = []
for chunk in chunk_reader:
count +=1
sampbuffer[:-chunk_len] = sampbuffer[chunk_len:]
sampbuffer[-chunk_len:] = chunk
buffer_list.append(np.array(sampbuffer))
if count >= n_buffers:
break
stride = 4 # 8 for Citrinet
asr_decoder = ChunkBufferDecoder(model, stride=stride, chunk_len_in_secs=chunk_len_in_secs, buffer_len_in_secs=buffer_len_in_secs )
transcription = asr_decoder.transcribe_buffers(buffer_list, plot=True)
# Final transcription after CTC merge
print(transcription)
###Output
_____no_output_____
###Markdown
Time to evaluate our streaming inference on the whole long file that we created.
###Code
# WER calculation
from nemo.collections.asr.metrics.wer import word_error_rate
# Collect all buffers from the audio file
sampbuffer = np.zeros([buffer_len], dtype=np.float32)
chunk_reader = AudioChunkIterator(samples, chunk_len_in_secs, sample_rate)
buffer_list = []
for chunk in chunk_reader:
sampbuffer[:-chunk_len] = sampbuffer[chunk_len:]
sampbuffer[-chunk_len:] = chunk
buffer_list.append(np.array(sampbuffer))
asr_decoder = ChunkBufferDecoder(model, stride=stride, chunk_len_in_secs=chunk_len_in_secs, buffer_len_in_secs=buffer_len_in_secs )
transcription = asr_decoder.transcribe_buffers(buffer_list, plot=False)
wer = word_error_rate(hypotheses=[transcription], references=[ref_transcript])
print(f"WER: {round(wer*100,2)}%")
###Output
_____no_output_____
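###Markdown
As suggested earlier, one interesting follow-up experiment is to sweep a few chunk/context sizes and compare the resulting WER. The sketch below shows one way to structure such a sweep; note that it reruns inference over the whole file once per setting, so it can take a while, and the (chunk, context) values are only illustrative.
###Code
# Optional experiment (sketch): compare WER for a few hypothetical chunk/context settings.
for chunk_secs, context_secs in [(4, 2), (8, 4), (16, 2)]:
    buf_secs = chunk_secs + 2 * context_secs
    buf = np.zeros([sample_rate * buf_secs], dtype=np.float32)
    c_len = sample_rate * chunk_secs
    buffers = []
    for chunk in AudioChunkIterator(samples, chunk_secs, sample_rate):
        buf[:-c_len] = buf[c_len:]
        buf[-c_len:] = chunk
        buffers.append(np.array(buf))
    decoder = ChunkBufferDecoder(model, stride=stride, chunk_len_in_secs=chunk_secs, buffer_len_in_secs=buf_secs)
    hyp = decoder.transcribe_buffers(buffers, plot=False)
    sweep_wer = word_error_rate(hypotheses=[hyp], references=[ref_transcript])
    print(f"chunk={chunk_secs}s context={context_secs}s WER: {round(sweep_wer * 100, 2)}%")
###Output
_____no_output_____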
SVM/SVM/UpgradCaseStudy.ipynb | ###Markdown
Case Study: Support Vector Machines in Object Detection
What we will discuss today:
1. Basic introduction to Support Vector Machines
 - a. What is an SVM?
 - b. Applications of SVMs
2. Case study: digit recognition in images
 - a. The object detection problem
 - b. How to prepare the dataset
 - c. What kind of features are required
 - d. Histogram of Oriented Gradients as the features
 - e. Preparing the feature set for training our model
 - f. Training our SVM model to classify digits in images
 - g. Testing the classifier on images
What is a Support Vector Machine?
Support Vector Machines are among the most popular machine learning algorithms. An SVM is a supervised learning algorithm that can be used for both classification and regression tasks, although it is mostly used for classification. SVMs are also known as maximal margin classifiers, soft margin classifiers, linear SVMs and kernel-based SVMs.
What does it actually do?
Support vectors are simply the coordinates of individual observations. Let's understand this with an example. Suppose we have a population composed of 50% males and 50% females. Using a sample of this population, we want to create a set of rules that tells us the gender class for the rest of the population; in other words, we intend to build a robot that can identify whether a person is male or female. This is a classification problem: using some set of rules, we try to split the population into two segments. For simplicity, assume the two differentiating factors are the height of the individual and hair length. Following is a scatter plot of the sample.
As mentioned earlier, support vectors are the coordinates of individual observations; for instance, (45, 150) is a support vector that corresponds to a female. The SVM finds the frontier that best segregates the males from the females. In this case the two classes are well separated from each other, so it is easy to find such a frontier.
The question is how to choose the frontier; for the current example, the following figure shows three possible frontiers.
How do we decide which is the best frontier for this particular problem? The easiest way to interpret the SVM objective is to look at the minimum distance from a frontier to its closest support vector (which can belong to either class). For instance, the orange frontier is closest to the blue circles, and the closest blue circle is 2 units away from it. Once we have these distances for all the frontiers, we simply choose the frontier with the maximum distance to its closest support vector. Out of the three frontiers shown, the black frontier is farthest from its nearest support vector (15 units). A short code sketch of this maximal-margin idea appears just before we load the digit data below.
What is a hyperplane?
Geometry tells us that a hyperplane is a subspace of one dimension less than its ambient space. For instance, a hyperplane of an n-dimensional space is a flat subset of dimension n - 1, and by its nature it separates the space into two half-spaces. For machine learning tasks we can describe a hyperplane as:
- a linear decision surface that splits the space into two parts;
- a binary classifier.
The following figure shows such hyperplanes.
Applications of SVMs
1. Image segmentation and categorization
2. Geographic image processing
3. Handwriting recognition
4. Healthcare: analyzing a group of over a million people for myocardial infarction within a period of 10 years is one application area of support vector machines.
5. Predicting whether a person is depressed or not, based on a bag of words from a text corpus, is also conveniently solvable with an SVM.
Case Study: Object Detection and Classification using SVM
What is an image? An image is just another numerical matrix, like the ones you have seen in your maths classes; you can apply to it any algebraic operation that you can apply to any other matrix. These operations may be simple arithmetic such as addition, subtraction and multiplication, or more complex analyses such as singular value decomposition or principal component analysis. We can represent an image as follows:
The MNIST digit recognition problem in image processing
This problem is known as the "Hello World!" of the machine learning world: whether you work with conventional ML algorithms like the one used here or with state-of-the-art deep learning, almost everything related to classification starts from it. The reason for this popularity is that it is simple to understand, so we will use the same application to start our journey. Let's see what the dataset looks like and what we expect from the classifiers.
What do we want to do? We want to build a classifier that can recognize the digits in an image. How are we going to do that? The answer is quite simple: we will train a classifier to do this work for us! Before moving forward, let's see how we can approach this problem with a general machine learning framework.
Data preparation
As this is a widely published and well-studied problem, we can easily download its dataset from an online repository. The original dataset is quite large, with around 60,000 handwritten digits from different people, but we will not work with all 60k; we will load only 5,000 instances and try to come up with a solution.
Right after the short maximal-margin sketch below, we write some code to fetch the dataset and see how it really looks.
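The 2D blobs, labels and regularization constant in this sketch are made up purely for illustration; they are not part of the digit case study.
###Code
# Minimal maximal-margin sketch on toy 2D data (illustration only).
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X_toy = np.vstack([rng.randn(20, 2) + [2, 2], rng.randn(20, 2) - [2, 2]])  # two well-separated blobs
y_toy = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="linear", C=1.0).fit(X_toy, y_toy)
w = clf.coef_[0]
print("number of support vectors:", len(clf.support_vectors_))
print("margin width (2 / ||w||):", 2.0 / np.linalg.norm(w))
###Output
_____no_output_____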
###Code
import numpy as np
def load_digits(datasetPath):
# build the dataset and then split it into data
# and labels
data = np.genfromtxt(datasetPath, delimiter = ",", dtype = "uint8")
target = data[:, 0]
data = data[:, 1:].reshape(data.shape[0], 28, 28)
# return a tuple of the data and targets
return (data, target)
###Output
_____no_output_____
###Markdown
Let's plot some of the loaded instances:
###Code
data,label = load_digits('digits.csv')
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
for i in range(9):
plt.subplot(3,3,i+1)
plt.tight_layout()
plt.imshow(data[i], cmap='gray', interpolation='none')
plt.title("Digit: {}".format(label[i]))
plt.xticks([])
plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
So this is what our dataset looks like. Now, what do we do with it? This problem is not as simple as it looks: before implementing a classifier, we need to put in some work to pre-process the data so that we can build an efficient classifier. We will apply the following pre-processing operations to our dataset:
- a. Deskew the images.
- b. Re-center the image content.
We will write the following code to deskew an image.
###Code
import cv2
def deskew(image, width):
# grab the width and height of the image and compute
# moments for the image
(h, w) = image.shape[:2]
moments = cv2.moments(image)
# deskew the image by applying an affine transformation
skew = moments["mu11"] / moments["mu02"]
M = np.float32([
[1, skew, -0.5 * w * skew],
[0, 1, 0]])
image = cv2.warpAffine(image, M, (w, h),
flags = cv2.WARP_INVERSE_MAP | cv2.INTER_LINEAR)
# resize the image to have a constant width
image = cv2.resize(image, (28,28))
# return the deskewed image
return image
###Output
_____no_output_____
###Markdown
Let's see what a deskewed image looks like:
###Code
image = deskew(data[0],28)
plt.subplot(1,2,1)
plt.tight_layout()
plt.imshow(data[0], cmap='gray', interpolation='none')
plt.title("Skewed Image")
plt.subplot(1,2,2)
plt.tight_layout()
plt.imshow(image, cmap='gray', interpolation='none')
plt.title("De-Skewed Image")
plt.show()
###Output
_____no_output_____
###Markdown
Now let's see the effect of extent centering; the code for that looks like this:
###Code
import mahotas
def center_extent(image, size):
# grab the extent width and height
(eW, eH) = size
#Image Shape is
(h, w) = image.shape[:2]
#New dimension according to image aspect ratio
dim = None
# handle when the width is greater than the height
if image.shape[1] > image.shape[0]:
#image = resize(image, width = eW)
r = eW / float(w)
dim = (eW, int(h * r))
image = cv2.resize(image,dim,cv2.INTER_AREA)
# otherwise, the height is greater than the width
else:
#image = resize(image, height = eH)
r = eH / float(h)
dim = (int(w * r), eH)
image = cv2.resize(image,dim,cv2.INTER_AREA)
# allocate memory for the extent of the image and
# grab it
extent = np.zeros((eH, eW), dtype = "uint8")
    # use integer division so the offsets can be used as slice indices (works in both Python 2 and 3)
    offsetX = (eW - image.shape[1]) // 2
    offsetY = (eH - image.shape[0]) // 2
extent[offsetY:offsetY + image.shape[0], offsetX:offsetX + image.shape[1]] = image
# compute the center of mass of the image and then
# move the center of mass to the center of the image
(cY, cX) = np.round(mahotas.center_of_mass(extent)).astype("int32")
(dX, dY) = ((size[0] / 2) - cX, (size[1] / 2) - cY)
M = np.float32([[1, 0, dX], [0, 1, dY]])
extent = cv2.warpAffine(extent, M, size)
# return the extent of the image
return extent
###Output
_____no_output_____
###Markdown
Let's see how it works:
###Code
im = cv2.imread('Decenter.png')
im = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
image = center_extent(im,(28,28))
plt.subplot(1,2,1)
plt.tight_layout()
plt.imshow(im, cmap='gray', interpolation='none')
plt.title("De-Centered")
plt.subplot(1,2,2)
plt.tight_layout()
plt.imshow(image, cmap='gray', interpolation='none')
plt.title("Centered Image")
plt.show()
###Output
_____no_output_____
###Markdown
So we are ready with our pre-processing modules; our next task is to extract meaningful features from the images.
Histogram of Oriented Gradients (HOG) features
As its name suggests, HOG combines three key terms:
- Histogram
- Oriented
- Gradients
A histogram is simply a frequency map showing how many times a value appears; orientation is directly associated with angles; and gradients signify transitions in intensity. So HOG gives us a frequency map of edges (gradients) over different orientations of an image.
Let's try to understand it with an example. Suppose we have an image containing different shapes, like the one below. How can we calculate such histograms for our images? In practice, the HOG of an image is always calculated in the following manner.
How will we do it in our case? The skimage library provides a function that extracts HOG features from images, and we will use it for our images too. We will write a method for calculating the HOG of an image; its definition goes as follows:
###Code
from skimage import feature
def HOG_describe(image):
# compute HOG for the image
hist = feature.hog(image, orientations = 9,
pixels_per_cell = (8, 8),
cells_per_block = (3, 3)
)
return hist
###Output
_____no_output_____
###Markdown
Let's try our function on an image and calculate its HOG features:
###Code
hist = HOG_describe(data[0])
print(np.shape(hist))
###Output
_____no_output_____
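###Markdown
Where do these 81 numbers come from? A quick sanity check of the HOG dimensionality for the 28x28 digits used here, with the same `orientations`, `pixels_per_cell` and `cells_per_block` values passed to `feature.hog` above:
###Code
orientations, cell, block = 9, 8, 3
cells_per_axis = 28 // cell                    # 3 cells along each axis
blocks_per_axis = cells_per_axis - block + 1   # only 1 block position per axis
print(blocks_per_axis ** 2 * block ** 2 * orientations)   # 1 * 1 * 3 * 3 * 9 = 81
###Output
_____no_output_____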
###Markdown
As you can see, 81 features were extracted from the image: roughly 9 histograms with 9 bins each. Now it is time to combine all the knowledge we have gained and create a digit classifier, so let's do it. We will follow these steps to complete the task:
- Build a dataset.
- Pre-process the dataset (deskewing and centering).
- Train a classifier on the dataset.
The following method does exactly that:
###Code
from sklearn.svm import LinearSVC
import pickle
def train():
# load the dataset and initialize the data matrix
path2model = 'svm_cs.cpickle'
path2data = 'digits.csv'
#Image size
factor = 28
(digits, target) = load_digits(path2data)
data = []
# loop over the images
for image in digits:
# deskew the image, center it
image = deskew(image, factor)
image = center_extent(image, (factor, factor))
# describe the image and update the data matrix
hist = HOG_describe(image)
data.append(hist)
# train the model
model = LinearSVC()
print(model)
model.fit(data, target)
    # dump the model to file (pickle requires binary mode)
    with open(path2model, "wb") as f:
        pickle.dump(model, f)
###Output
_____no_output_____
###Markdown
Let's call the above method on our dataset and train an SVM classifier:
###Code
train()
###Output
_____no_output_____
###Markdown
All good so far; now it is time to test our classifier on real images. Let's do it:
###Code
def test():
path2model = 'svm_cs.cpickle'
path2im = 'cellphone.png'
factor = 28
# load the model
    with open(path2model, "rb") as f:
        model = pickle.load(f)
# initialize the HOG descriptor
# load the image and convert it to grayscale
image = cv2.imread(path2im)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# blur the image, find edges, and then find contours along
# the edged regions
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edged = cv2.Canny(blurred, 30, 150)
(_,cnts, _) = cv2.findContours(edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# sort the contours by their x-axis position, ensuring
# that we read the numbers from left to right
cnts = sorted([(c, cv2.boundingRect(c)[0]) for c in cnts], key = lambda x: x[1])
# loop over the contours
for (c, _) in cnts:
# compute the bounding box for the rectangle
(x, y, w, h) = cv2.boundingRect(c)
# if the width is at least 7 pixels and the height
# is at least 20 pixels, the contour is likely a digit
if w >= 7 and h >= 20:
# crop the ROI and then threshold the grayscale
# ROI to reveal the digit
roi = gray[y:y + h, x:x + w]
thresh = roi.copy()
T = mahotas.thresholding.otsu(roi)
thresh[thresh > T] = 255
thresh = cv2.bitwise_not(thresh)
# deskew the image center its extent
thresh = deskew(thresh, factor)
thresh = center_extent(thresh, (factor, factor))
# thresh = cv2.resize(thresh,(28,28))
# extract features from the image and classify it
hist = HOG_describe(thresh)
digit = model.predict(np.array([hist]))[0]
# draw a rectangle around the digit, the show what the
# digit was classified as
cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.putText(image, str(digit), (x - 10, y - 10),
cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
plt.figure(figsize=(15,10))
plt.imshow(image)
plt.show()
test()
###Output
_____no_output_____ |
examples/Pandas.ipynb | ###Markdown
Load Pandas DataFrame
###Code
import json
import numpy as np
import pandas as pd
from bqplot import DateScale, ColorScale
from py2vega.functions.type_coercing import toDate
from py2vega.functions.date_time import datetime
from ipydatagrid import Expr, DataGrid, TextRenderer, BarRenderer
n = 10000
df = pd.DataFrame(
{
"Value 1": np.random.randn(n),
"Value 2": np.random.randn(n),
"Dates": pd.date_range(end=pd.Timestamp("today"), periods=n),
}
)
text_renderer = TextRenderer(
text_color="black", background_color=ColorScale(min=-5, max=5)
)
def bar_color(cell):
date = toDate(cell.value)
return "green" if date > datetime("2000") else "red"
renderers = {
"Value 1": text_renderer,
"Value 2": text_renderer,
"Dates": BarRenderer(
bar_value=DateScale(min=df["Dates"][0], max=df["Dates"][n - 1]),
bar_color=Expr(bar_color),
format="%Y/%m/%d",
format_type="time",
),
}
grid = DataGrid(df, base_row_size=30, base_column_size=300, renderers=renderers)
grid.transform([{"type": "sort", "columnIndex": 2, "desc": True}])
grid
###Output
_____no_output_____ |
jupyter_russian/topic10_boosting/lesson10_part4_sklearn_interface.ipynb | ###Markdown
Open Machine Learning Course. Session 2
Author: Yury Kashnitsky, research programmer at Mail.ru Group and senior lecturer at the Faculty of Computer Science, HSE. The material is distributed under the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. It may be used for any purpose (edit it, correct it, build upon it) except commercial use, with mandatory attribution of the author.
Topic 10. Boosting
Part 4. XGBoost, the sklearn interface
Loading the libraries
###Code
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier
###Output
_____no_output_____
###Markdown
Loading and preparing the data
Let's look at an example: customer churn data from a telecom company.
###Code
df = pd.read_csv("../../data/telecom_churn.csv")
df.head()
###Output
_____no_output_____
###Markdown
**We simply number-encode the states, and turn the features International plan (international roaming), Voice mail plan (voice mail) and the target Churn into binary variables.**
###Code
state_enc = LabelEncoder()
df["State"] = state_enc.fit_transform(df["State"])
df["International plan"] = (df["International plan"] == "Yes").astype("int")
df["Voice mail plan"] = (df["Voice mail plan"] == "Yes").astype("int")
df["Churn"] = (df["Churn"]).astype("int")
###Output
_____no_output_____
###Markdown
**Let's split the data into training and test sets in a 7:3 ratio.**
###Code
X_train, X_test, y_train, y_test = train_test_split(
df.drop("Churn", axis=1),
df["Churn"],
test_size=0.3,
stratify=df["Churn"],
random_state=17,
)
###Output
_____no_output_____
###Markdown
Initializing the parameters
- binary classification (`'objective':'binary:logistic'`)
- limit the depth of the trees (`'max_depth':3`)
- suppress extra output (`'silent':1`)
- run 50 boosting iterations (`'n_estimators':50`)
- the gradient step is fairly large (`'eta':1`, i.e. `learning_rate` in the sklearn interface): the algorithm will learn quickly and "aggressively" (better results can be obtained by decreasing eta and increasing the number of iterations)
###Code
params = {
"objective": "binary:logistic",
"max_depth": 3,
"learning_rate": 1.0,
"silent": 1.0,
"n_estimators": 50,
}
###Output
_____no_output_____
###Markdown
Training the classifier
Here we simply pass the dictionary of parameters and the training data.
###Code
xgb_model = XGBClassifier(**params).fit(X_train, y_train)
###Output
_____no_output_____
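###Markdown
A quick, optional look at which features the boosted trees rely on; `feature_importances_` is part of the sklearn-style interface of `XGBClassifier`.
###Code
# Top features by importance (illustrative sketch).
importances = pd.Series(xgb_model.feature_importances_, index=X_train.columns)
importances.sort_values(ascending=False).head(10)
###Output
_____no_output_____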
###Markdown
Predictions for the test set
###Code
preds_prob = xgb_model.predict(X_test)
###Output
_____no_output_____
###Markdown
**Let's compute the accuracy and F1 score of the algorithm on the test set.**
###Code
predicted_labels = preds_prob > 0.5
print(
"Accuracy and F1 on the test set are: {} and {}".format(
round(accuracy_score(y_test, predicted_labels), 3),
round(f1_score(y_test, predicted_labels), 3),
)
)
###Output
_____no_output_____ |
notebook-fig/figs-slides-5-comparing-two-proportions.ipynb | ###Markdown
The Gilbert case
Shifts data
###Code
# NOTE: the imports and the `colors` palette below are assumptions; the notebook's original setup cell is not shown.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

colors = {"red": "#d62728", "blue": "#1f77b4", "orange": "#ff7f0e",
          "gray": "#7f7f7f", "lightgray": "#aaaaaa"}

shifts = pd.read_csv("data-src/gilbert-data.csv")
shifts.index = shifts.year
shifts[shifts.columns[1:]].plot(kind="bar");
fig,ax1 = plt.subplots(figsize=(8,4))
ax2 = ax1.twinx()
width=0.6
scale = 3.
colors_list = ["red", "blue", "orange"]
labels_list = ["Night", "Day", "Evening"]
xpos = np.arange(len(shifts.year))*scale
for i,shift in enumerate(shifts.columns[1:]):
ax1.bar(xpos+width*i, shifts[shift], width=width, color=colors[colors_list[i]], label=labels_list[i]);
for spine in ["bottom", "left"]:
ax1.spines[spine].set_linewidth(1)
ax1.spines[spine].set_color(colors["lightgray"])
for spine in ["top", "right"]:
ax1.spines[spine].set_visible(False)
for ax in [ax1]:
ax.set_xticks(xpos+width)
ax.set_xticklabels(shifts.year)
ax.set_ylim(0)
ax.tick_params(axis="y", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"],
labelsize=13, pad=4)
ax.tick_params(axis="x", size=0, labelcolor=colors["lightgray"],
labelsize=13, pad=4)
ax.set_xlabel("Year", size=16, color=colors["lightgray"])
ax.set_ylabel("Number of deaths", size=16, color=colors["lightgray"])
legend = ax.legend(prop=dict(size=14), loc="upper left", frameon=True, facecolor="none",
bbox_to_anchor=(0, 1.05))
for text in legend.get_texts():
text.set_color(colors["lightgray"])
for ax in [ax2]:
ax2.axis("off")
ax.axvspan(xpos[2]-1.5*width, xpos[8]-1.5*width, alpha=0.5, color=colors["gray"])
ax.text(xpos[2]-1.5*width+(xpos[8]-1.5*width-xpos[2]-1.5*width)/2,
ax.get_ylim()[1]*0.9, "Gilbert's time at the VA", color=colors["lightgray"],
ha="center", size=16)
plt.tight_layout()
plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/gilbert-case-shifts-pattern.svg", transparent=True)
###Output
_____no_output_____
###Markdown
Relative risk
###Code
data = pd.DataFrame({"shift_gilbert": [40, 217],
"shift_no_gilbert": [34, 1350]}, index=["death", "no_death"])
data
data.shift_gilbert/data.shift_gilbert.sum()
data.shift_no_gilbert/data.shift_no_gilbert.sum()
rr = (data.shift_gilbert/data.shift_gilbert.sum()).death/(data.shift_no_gilbert/data.shift_no_gilbert.sum()).death
print("Relative risk: {:.5f}".format(rr))
fig,(ax1, ax2) = plt.subplots(ncols=2, figsize=(10,4))
ax1.bar(np.arange(2), data.ix["death"], color=colors['red'], label="Death");
ax1.bar(np.arange(2), data.ix["no_death"], bottom=data.ix["death"], color=colors['blue'], label="No death");
ax2.bar(np.arange(2), data.ix["death"]/data.sum(), color=colors['red']);
ax2.bar(np.arange(2), data.ix["no_death"]/data.sum(), bottom=data.ix["death"]/data.sum(), color=colors['blue']);
for ax in [ax1, ax2]:
for spine in ["bottom", "left"]:
ax.spines[spine].set_linewidth(1)
ax.spines[spine].set_color(colors["lightgray"])
for spine in ["top", "right"]:
ax.spines[spine].set_visible(False)
ax.set_xticks(np.arange(2))
ax.set_xticklabels(["Gilbert present", "Gilbert absent"])
ax.set_ylim(0)
ax.tick_params(axis="y", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"],
labelsize=13, pad=4)
ax.tick_params(axis="x", size=0, labelcolor=colors["lightgray"],
labelsize=15, pad=6)
ax.set_xlim(-0.6, 1.6)
ax.set_xlabel("Shifts", size=16, color=colors["lightgray"])
for ax in [ax1]:
ax.set_ylabel("Number of shifts", size=16, color=colors["lightgray"])
legend = ax.legend(prop=dict(size=14), loc="upper left", frameon=True, facecolor="none",
bbox_to_anchor=(0, 1.05))
for text in legend.get_texts():
text.set_color(colors["lightgray"])
for ax in [ax2]:
ax.set_ylabel("Proportion of shifts", size=16, color=colors["lightgray"])
plt.tight_layout()
plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/gilbert-case-death-pattern-during-shifts.svg", transparent=True)
###Output
_____no_output_____
###Markdown
Null hypothesis testing (Shuffling)
###Code
data
print("Total number of shifts: {}".format(data.values.sum()))
print("Proportion of shifts for which death occurred: {}".format(data.ix["death"].sum()/data.values.sum()))
###Output
Total number of shifts: 1641
Proportion of shifts for which death occurred: 0.04509445460085314
###Markdown
If death was equally likely to happen during the shifts with and without Gilbert present, then the proportion of shifts with death should be the same for both conditions
###Code
#the population is all the shifts, with as many 1 as shift with death
all_shifts = np.zeros(data.values.sum())
all_shifts[:data.ix["no_death"].sum()]=1
#initial observed statistic (difference in relative proportions)
diff_init = data.ix["death"].shift_gilbert/data.shift_gilbert.sum()-data.ix["death"].shift_no_gilbert/data.shift_no_gilbert.sum()
n_simul = 10000
res_diff = np.zeros(n_simul) #store the data
res_relativerisk = np.zeros(n_simul) #store the data
n_shift_gilbert = data.shift_gilbert.sum() #number of shifts with Gilbert present
n_shift_no_gilbert = data.shift_no_gilbert.sum() #number of shifts without Gilbert present
#each simulation is the shuffling of the full population and the
#calculation of the difference in proportion of shifts with death
for i in range(n_simul):
np.random.shuffle(all_shifts)
with_gilbert = all_shifts[:n_shift_gilbert]
without_gilbert = all_shifts[n_shift_gilbert:]
deathprop_with_gilbert = np.sum(with_gilbert)/n_shift_gilbert
deathprop_without_gilbert = np.sum(without_gilbert)/n_shift_no_gilbert
difference = deathprop_with_gilbert-deathprop_without_gilbert
relativerisk = deathprop_with_gilbert/deathprop_without_gilbert
res_diff[i] = difference
res_relativerisk[i] = relativerisk
data.shift_gilbert.sum()
fig = plt.figure(figsize=(10,4))
ax1 = fig.add_axes([0.1, 0.15, 0.82, 0.75])
ax2 = ax1.twinx()
ax3 = ax1.twinx()
ax4 = ax1.twinx()
for ax in [ax1]:
ax.hist(res_diff, bins="auto", color=colors["blue"])
for spine in ["bottom"]:
ax.spines[spine].set_linewidth(1)
ax.spines[spine].set_color(colors["lightgray"])
for spine in ["top", "right", "left"]:
ax.spines[spine].set_visible(False)
ax.set_yticks([])
ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"],
labelsize=13, pad=4)
ax.text(0, -250, '$\hat{p}_{\mathrm{present}}-\hat{p}_{\mathrm{absent}}$', size=18, color=colors["lightgray"], ha="center")
ax.set_ylim(0)
ax.set_xlim(-0.08, 0.18)
for ax in [ax2]:
ax.axvline(diff_init, color=colors["orange"])
ax.text(diff_init-0.005, ax.get_ylim()[1]*0.85, "Observed statistic\n{:.3f}".format(diff_init), size=14, color=colors["orange"], ha="right")
for ax in [ax3]:
ax.text(diff_init+0.01, ax.get_ylim()[1]*0.5, "{:.0f} simulations\n(p-value<0.0001)".format(np.sum(res_diff>diff_init)), size=14, color=colors["lightgray"], ha="left")
for ax in [ax4]:
ax.axis("off")
#normal
norm_mu = 0
pooled_p = data.ix["death"].sum()/data.values.sum()
p1 = (data.shift_gilbert/data.shift_gilbert.sum()).death
p2 = (data.shift_no_gilbert/data.shift_no_gilbert.sum()).death
norm_sigma = np.sqrt((pooled_p*(1-pooled_p))*(1/data.shift_gilbert.sum()+1/data.shift_no_gilbert.sum()))
x = np.linspace(-0.1,0.1,100)
ynorm = stats.norm.pdf(x, norm_mu, norm_sigma)
ax.fill_between(x, ynorm, color=colors["red"], alpha=0.7)
ax.set_ylim(0)
for ax in [ax2, ax3]:
ax.set_ylim(0)
ax.axis("off")
plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/gilbert-case-resampling-differences.svg", transparent=True)
data
fig = plt.figure(figsize=(6,4))
ax1 = fig.add_axes([0.1, 0.15, 0.82, 0.75])
ax2 = ax1.twinx()
ax3 = ax1.twinx()
for ax in [ax1]:
ax.hist(res_relativerisk, bins="auto", color=colors["blue"])
for spine in ["bottom"]:
ax.spines[spine].set_linewidth(1)
ax.spines[spine].set_color(colors["lightgray"])
for spine in ["top", "right", "left"]:
ax.spines[spine].set_visible(False)
ax.set_yticks([])
ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"],
labelsize=13, pad=4)
ax.text(1, -250, 'Relative risk', size=18, color=colors["lightgray"], ha="center")
ax.set_ylim(0)
#ax.set_xlim(0.85, 7)
#for ax in [ax2]:
# ax.axvline(rr, color=colors["orange"])
for ax in [ax2, ax3]:
ax.set_ylim(0)
ax.axis("off")
#plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/salaries-ucla2014-resampling-ttest.svg", transparent=True)
###Output
_____no_output_____
###Markdown
Z test
###Code
z = ((40/257)-(34/1384))/np.sqrt((74/1641)*(1-74/1641)*(1/257+1/1384))
z
###Output
_____no_output_____
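###Markdown
The corresponding two-sided p-value under the normal approximation can be read directly off this z statistic (a minimal sketch using scipy):
###Code
from scipy import stats
2 * stats.norm.sf(abs(z))  # two-sided tail probability
###Output
_____no_output_____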
###Markdown
Confidence intervals of RR
###Code
#we keep the two groups separated and draw bootstrap samples from them
population_gilbert = np.zeros(data.shift_gilbert.sum())
population_gilbert[:data.shift_gilbert.death] = 1
population_no_gilbert = np.zeros(data.shift_no_gilbert.sum())
population_no_gilbert[:data.shift_no_gilbert.death] = 1
n_simul = 10000
res_diff = np.zeros(n_simul) #store the data
res_relativerisk = np.zeros(n_simul) #store the data
n_shift_gilbert = data.shift_gilbert.sum() #number of shifts with Gilbert present
n_shift_no_gilbert = data.shift_no_gilbert.sum() #number of shifts without Gilbert present
#each simulation is the separate bootstrap drawing from the two groups
#calculation of the statistic
for i in range(n_simul):
sample_gilbert = np.random.choice(population_gilbert, size=n_shift_gilbert)
sample_no_gilbert = np.random.choice(population_no_gilbert, size=n_shift_no_gilbert)
deathprop_with_gilbert = np.sum(sample_gilbert)/n_shift_gilbert
deathprop_without_gilbert = np.sum(sample_no_gilbert)/n_shift_no_gilbert
difference = deathprop_with_gilbert-deathprop_without_gilbert
relativerisk = deathprop_with_gilbert/deathprop_without_gilbert
res_diff[i] = difference
res_relativerisk[i] = relativerisk
fig = plt.figure(figsize=(6,4))
ax1 = fig.add_axes([0.03, 0.15, 0.9, 0.75])
ax2 = ax1.twinx()
ax3 = ax1.twinx()
for ax in [ax1]:
ax.hist(res_relativerisk, bins="auto", color=colors["blue"])
for spine in ["bottom"]:
ax.spines[spine].set_linewidth(1)
ax.spines[spine].set_color(colors["lightgray"])
for spine in ["top", "right", "left"]:
ax.spines[spine].set_visible(False)
ax.set_yticks([])
ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"],
labelsize=13, pad=4)
ax.set_xlabel("Relative risk", size=18, color=colors["lightgray"], ha="center")
ax.set_ylim(0)
#ax.set_xlim(0.85, 7)
for ax in [ax2]:
ax.axvline(rr, color=colors["orange"], ymax=0.95)
ax.text(rr, ax.get_ylim()[1]*1, "Observed statistic\n{:.2f}".format(rr), size=14, color=colors["orange"], ha="center")
for ax in [ax3]:
ax.axvline(np.percentile(res_relativerisk, 2.5), ymax=0.55, color=colors["red"], lw=2)
ax.axvline(np.percentile(res_relativerisk, 97.5), ymax=0.55, color=colors["red"], lw=2)
ax.text(np.percentile(res_relativerisk, 2.5), ax.get_ylim()[1]*0.6, "2.5$^{{th}}$\npercentile\n{:.2f}".format(np.percentile(res_relativerisk, 2.5)), color=colors["red"], size=15, ha="center")
ax.text(np.percentile(res_relativerisk, 97.5), ax.get_ylim()[1]*0.6, "97.5$^{{th}}$\npercentile\n{:.2f}".format(np.percentile(res_relativerisk, 97.5)), color=colors["red"], size=15, ha="center")
for ax in [ax2, ax3]:
ax.set_ylim(0)
ax.axis("off")
plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/gilbert-case-resampling-relativerisk-ci95.svg", transparent=True)
###Output
_____no_output_____
###Markdown
Oklahoma City Thunder (NBA team)
###Code
data = pd.DataFrame({"sellout": [3, 15],
"no_sellout": [12, 11]}, index=["win", "loss"])
data
data_r = data/data.sum()
data_r
relativerisk = data_r.no_sellout.win/data_r.sellout.win #3.13 times more likely to win if no sell out
all_games = np.zeros(data.values.sum())
all_games[:data.ix["win"].sum()]=1
n_simul = 10000
res_relativerisk = np.zeros(n_simul) #store the data
n_sellout = data.sellout.sum() #number of sell out crowd games
n_no_sellout = data.no_sellout.sum() #number of no sell out crowd games
#each simulation is the shuffling of the full population and the
#calculation of the relative risk in proportion of no sell out crowd games
for i in range(n_simul):
np.random.shuffle(all_games)
sellout = all_games[:n_sellout]
no_sellout = all_games[n_sellout:]
res_relativerisk[i] = (np.sum(no_sellout)/len(no_sellout))/(np.sum(sellout)/len(sellout))
fig = plt.figure(figsize=(6,4))
ax1 = fig.add_axes([0.1, 0.15, 0.82, 0.75])
ax2 = ax1.twinx()
ax3 = ax1.twinx()
for ax in [ax1]:
ax.hist(res_relativerisk[res_relativerisk!=np.inf], bins="auto", color=colors["blue"])
for spine in ["bottom"]:
ax.spines[spine].set_linewidth(1)
ax.spines[spine].set_color(colors["lightgray"])
for spine in ["top", "right", "left"]:
ax.spines[spine].set_visible(False)
ax.set_yticks([])
ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"],
labelsize=13, pad=4)
#ax.text(1, -250, 'Relative risk', size=18, color=colors["lightgray"], ha="center")
ax.set_xlabel('Relative risk', size=18, color=colors["lightgray"], ha="center")
ax.set_ylim(0)
ax.set_xlim(0., 6)
ax.axvline(1, color=colors["red"], ymax=1)
for ax in [ax2]:
ax.axvline(relativerisk, color=colors["orange"], ymax=0.8)
ax.text(relativerisk, ax.get_ylim()[1]*0.85, "Observed statistic\nRelative Risk = {:.1f}".format(relativerisk), size=14, color=colors["orange"], ha="center")
for ax in [ax3]:
ax.text(relativerisk+0.5, ax.get_ylim()[1]*0.5, "{:.0f} simulations$\geq${:.1f}\n{:.0f} simulations$\leq${:.2f}\n(2-tail p-value={:.3f})".format(np.sum(res_relativerisk>=relativerisk), relativerisk, np.sum(res_relativerisk<=1/relativerisk), 1/relativerisk, (np.sum(res_relativerisk>=relativerisk)+np.sum(res_relativerisk<=1/relativerisk))/10000), size=14, color=colors["lightgray"], ha="left")
ax.axvline(1/relativerisk, color=colors["orange"], ymax=0.8)
ax.text(1/relativerisk, ax.get_ylim()[1]*0.85, r"$\frac{{1}}{{3.1}}$" "\n({:.2f})".format(1/relativerisk), size=14, color=colors["orange"], ha="center")
for ax in [ax2, ax3]:
ax.set_ylim(0)
ax.axis("off")
plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/sell-out-crowd-simulation-pvalue.svg", transparent=True)
###Output
_____no_output_____
###Markdown
95% confidence intervals
###Code
#we keep the two groups separated and draw bootstrap samples from them
population_sellout = np.zeros(data.sellout.sum())
population_sellout[:data.sellout.win] = 1
population_no_sellout = np.zeros(data.no_sellout.sum())
population_no_sellout[:data.no_sellout.win] = 1
n_simul = 10000
res_relativerisk = np.zeros(n_simul) #store the data
n_sellout = data.sellout.sum() #number of sell-out crowd games
n_no_sellout = data.no_sellout.sum() #number of non-sell-out games
#each simulation is the separate bootstrap drawing from the two groups
#calculation of the statistic
for i in range(n_simul):
sample_sellout = np.random.choice(population_sellout, size=n_sellout)
sample_no_sellout = np.random.choice(population_no_sellout, size=n_no_sellout)
relativerisk = (sample_no_sellout.sum()/n_no_sellout)/(sample_sellout.sum()/n_sellout)
res_relativerisk[i] = relativerisk
fig = plt.figure(figsize=(6,4))
ax1 = fig.add_axes([0.03, 0.15, 0.9, 0.75])
ax2 = ax1.twinx()
ax3 = ax1.twinx()
for ax in [ax1]:
ax.hist(res_relativerisk[res_relativerisk!=np.inf], bins="auto", color=colors["blue"])
for spine in ["bottom"]:
ax.spines[spine].set_linewidth(1)
ax.spines[spine].set_color(colors["lightgray"])
for spine in ["top", "right", "left"]:
ax.spines[spine].set_visible(False)
ax.set_yticks([])
ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"],
labelsize=13, pad=4)
ax.set_xlabel("Relative risk", size=18, color=colors["lightgray"], ha="center")
ax.set_ylim(0)
#ax.set_xlim(0.85, 7)
for ax in [ax2]:
ax.axvline(data_r.no_sellout.win/data_r.sellout.win, color=colors["orange"], ymax=0.95)
ax.text(data_r.no_sellout.win/data_r.sellout.win, ax.get_ylim()[1]*1, "Observed statistic\n{:.2f}".format(data_r.no_sellout.win/data_r.sellout.win), size=14, color=colors["orange"], ha="center")
for ax in [ax3]:
ax.axvline(np.percentile(res_relativerisk[res_relativerisk!=np.inf], 2.5), ymax=0.55, color=colors["red"], lw=2)
ax.axvline(np.percentile(res_relativerisk[res_relativerisk!=np.inf], 97.5), ymax=0.55, color=colors["red"], lw=2)
ax.text(np.percentile(res_relativerisk[res_relativerisk!=np.inf], 2.5), ax.get_ylim()[1]*0.6, "2.5$^{{th}}$\npercentile\n{:.2f}".format(np.percentile(res_relativerisk[res_relativerisk!=np.inf], 2.5)), color=colors["red"], size=15, ha="center")
ax.text(np.percentile(res_relativerisk[res_relativerisk!=np.inf], 97.5), ax.get_ylim()[1]*0.6, "97.5$^{{th}}$\npercentile\n{:.2f}".format(np.percentile(res_relativerisk[res_relativerisk!=np.inf], 97.5)), color=colors["red"], size=15, ha="center")
for ax in [ax2, ax3]:
ax.set_ylim(0)
ax.axis("off")
plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/sell-out-crowd-simulation-ci95.svg", transparent=True)
np.percentile(res_relativerisk, 97.5)
###Output
_____no_output_____ |
cs231n/numpy_test.ipynb | ###Markdown
Get the most common values in an array
###Code
a = np.array([20, 23, 21, 19, 23, 21, 27])
c = np.bincount(a)
c
np.argmax(c)
###Output
_____no_output_____
###Markdown
Get the index array sorted by value
###Code
a = np.array([[1, 4, 5, 7, 3, 2 ,2], [1, 4, 5, 7, 3, 2 ,1]])
y = np.array([1, 4, 5, 7, 3, 2 ,1])
s = np.argsort(a[1])[0:3]
s
y.shape, y.dtype
b = [ a[1, i] for i in s ]
b
# if y were a plain Python list rather than a NumPy array, indexing it with `s` would raise a TypeError
y[s]
###Output
_____no_output_____
###Markdown
L2 Norm
###Code
x = np.random.randn(10, 4, 3, 2)
y = np.random.randn(10, 4, 3, 2)
x.shape, y.shape
np.sqrt(np.sum((x-y)**2))
np.linalg.norm(x-y)
squared_sum = np.sum((x-y)**2, axis=1)
squared_sum.shape
norm1 = np.sqrt(np.sum((x-y)**2, axis=1))
norm1.shape, norm1.dtype
norm2 = np.linalg.norm(x-y, axis=1)
norm2.shape, norm2.dtype
np.sqrt(np.sum(norm1- norm2)**2)
difference = np.linalg.norm(norm1 - norm2)
difference
squared_sum = np.zeros((3, 2))
for i in range(4):
squared_sum += (x[0, i] - y[0, i])**2
np.sqrt(squared_sum)
###Output
_____no_output_____
###Markdown
L2 Norm using broadcastUse this formula to compute the L2 norm for matrix \\(x\\) and \\(y\\): $$||x-y||^2 = ||x||^2 -2||xy||+||y||^2$$
###Code
X = np.random.rand(5,4,3,2)
Y = np.random.rand(6,4,3,2)
# flat the array
X = np.reshape(X, (X.shape[0], -1))
Y = np.reshape(Y, (Y.shape[0], -1))
X.shape, Y.shape
num_X = X.shape[0]
num_Y = Y.shape[0]
num_X, num_Y
dists = np.zeros((num_X, num_Y))
dists.shape
sum_Y = np.sum(Y ** 2, axis=1)
sum_Y.shape
sum_YY = sum_Y.reshape(1, num_Y)
sum_YY.shape
dists += sum_YY
dists.shape
sum_X = np.sum(X ** 2, axis=1)
sum_X.shape
sum_XX = sum_X.reshape(num_X, 1)
sum_XX.shape
dists += sum_XX
dists.shape
dot_XY = 2 * np.dot(X, Y.T)
dot_XY.shape
dists -= dot_XY
dists.shape
dists = np.sqrt(dists)
dists
###Output
_____no_output_____
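###Markdown
The same pairwise distances can be computed in a single broadcasted expression, which doubles as a cross-check of the step-by-step version above:
###Code
dists_check = np.sqrt(np.sum(X ** 2, axis=1)[:, None]
                      - 2 * X.dot(Y.T)
                      + np.sum(Y ** 2, axis=1)[None, :])
np.allclose(dists, dists_check)
###Output
_____no_output_____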
###Markdown
Split an array and stack them partially
###Code
x = np.random.rand(10, 2)
y = np.array_split(x, 3, axis=0)
x.shape
y[0].shape, y[1].shape, y[2].shape
x_0_2 = np.vstack((y[0], y[2]))
x_0_2.shape
x_0_2
###Output
_____no_output_____ |
data/cap/cap_2016.ipynb | ###Markdown
Common Agricultural Policy (CAP) Data
###Code
%matplotlib inline
from collections import OrderedDict
import json
import os
import pandas as pd
DAERA = pd.read_excel('input/2016_All_CAP_Search_Results_Data_P14.xlsx', sheet_name=0)
SGRPID = pd.read_excel('input/2016_All_CAP_Search_Results_Data_P14.xlsx', sheet_name=1)
WG = pd.read_excel('input/2016_All_CAP_Search_Results_Data_P14.xlsx', sheet_name=2)
RPA = pd.read_excel('input/2016_All_CAP_Search_Results_Data_P14.xlsx', sheet_name=3)
RPA2 = pd.read_excel('input/2016_All_CAP_Search_Results_Data_P14.xlsx', sheet_name=4)
RPA2.head()
[RPA2.PayingAgencyLink.isna().sum(), RPA2.PayingAgencyLink.value_counts()]
raw_cap = pd.concat([DAERA, SGRPID, WG, RPA, RPA2])
raw_cap.shape
raw_cap.count()
###Output
_____no_output_____
###Markdown
Postcode District Validation
Check the supplied postcode prefixes against a list of all valid postcode districts.
###Code
ukpostcodes = pd.read_csv('../postcodes/input/ukpostcodes.csv.gz')
ukpostcodes.shape
ukpostcodes['district'] = ukpostcodes['postcode'].str.replace(r'^(.+)\s.+$', r'\1')
ukpostcodes['sector'] = ukpostcodes['postcode'].str.replace(r'^(.+)\s([0-9]).+$', r'\1 \2')
ukpostcodes.head()
postcode_districts = ukpostcodes['district'].unique()
len(postcode_districts)
postcode_sectors = ukpostcodes['sector'].unique()
len(postcode_sectors)
pd.merge(
pd.DataFrame({'district': postcode_districts}),
raw_cap,
left_on='district', right_on='PostcodePrefix_F202B').shape
raw_cap['postcode_district'] = raw_cap['PostcodePrefix_F202B'].str.upper().str.strip()
pd.merge(
pd.DataFrame({'district': ukpostcodes['district'].unique()}),
raw_cap,
left_on='district', right_on='postcode_district').shape
raw_cap[raw_cap['postcode_district'].isin(postcode_sectors)]
def coarsen_sectors():
is_sector = raw_cap['postcode_district'].isin(postcode_sectors)
sectors = raw_cap['postcode_district'][is_sector]
raw_cap.loc[is_sector, 'postcode_district'] = sectors.str.replace(r'^(.+)\s[0-9]$', r'\1')
coarsen_sectors()
pd.merge(
pd.DataFrame({'district': postcode_districts}),
raw_cap,
left_on='district', right_on='postcode_district').shape
def find_unmatched_districts():
unmatched = raw_cap[~raw_cap['postcode_district'].isin(postcode_districts)]
pairs = unmatched[['PostcodePrefix_F202B', 'TownCity_F202C']]
return pd.DataFrame({
'unmatched': pairs.apply(lambda x: ' / '.join(x), axis=1).unique()
}).sort_values('unmatched')
find_unmatched_districts()
###Output
_____no_output_____
###Markdown
Ok, neither of these are things that we should expect to match!
###Code
cap = raw_cap[raw_cap['postcode_district'].isin(postcode_districts)].copy()
cap[[
'Year', 'BeneficiaryCode', 'BeneficiaryName_F201',
'OtherEAGFTotal', 'DirectEAGFTotal', 'RuralDevelopmentTotal',
'postcode_district']].to_pickle('output/cap_2016.pkl.gz')
###Output
_____no_output_____
###Markdown
Aggregation to Postcode District
###Code
cap_by_district = cap.groupby(['PayingAgencyLink', 'postcode_district']).aggregate(OrderedDict([
('OtherEAGFTotal', sum),
('DirectEAGFTotal', sum),
('RuralDevelopmentTotal', sum),
('Total', [sum, len]),
('Year', max)
]))
cap_by_district.reset_index(inplace=True)
cap_by_district.columns = [
'agency',
'postcode_district',
'otherEAGF',
'directEAGF',
'ruralDevelopment',
'total',
'count',
'year'
]
PROPERTY_COLUMNS = [
'otherEAGF', 'directEAGF', 'ruralDevelopment', 'total', 'count'
]
for column in PROPERTY_COLUMNS:
cap_by_district[column] = cap_by_district[column].round().astype('int32')
cap_by_district.shape
cap_by_district.head()
cap_by_district.agency.unique()
cap_by_district.describe()
###Output
_____no_output_____
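###Markdown
As a quick exploratory check before aggregating further, the districts receiving the largest total payments:
###Code
cap_by_district.sort_values('total', ascending=False).head(10)
###Output
_____no_output_____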
###Markdown
Aggregation to Postcode Area
###Code
cap['postcode_area'] = \
cap['postcode_district'].str.replace(r'^([A-Z]{1,2}).+$', r'\1')
cap.head()
cap_by_area = cap.groupby('postcode_area').sum()
cap_by_area = cap.groupby('postcode_area').aggregate(OrderedDict([
('OtherEAGFTotal', sum),
('DirectEAGFTotal', sum),
('RuralDevelopmentTotal', sum),
('Total', [sum, len]),
('Year', max)
]))
cap_by_area.reset_index(inplace=True)
cap_by_area.columns = [
'postcode_area',
'otherEAGF',
'directEAGF',
'ruralDevelopment',
'total',
'count',
'year'
]
print(cap_by_area['total'].max()) # still a 32-bit integer?
for column in PROPERTY_COLUMNS:
cap_by_area[column] = cap_by_area[column].round().astype('int32')
cap_by_area.head()
cap_by_area.describe()
cap_by_area.to_pickle('output/cap_by_area_2016.pkl.gz')
###Output
_____no_output_____ |
demo/MMSegmentation_Tutorial_RainFilter_Simple_conv.ipynb | ###Markdown
MMSegmentation Tutorial
Welcome to MMSegmentation! In this tutorial, we demo:
* how to do inference with MMSeg trained weights;
* how to train on your own dataset and visualize the results.
Install MMSegmentation
This step may take several minutes. We use PyTorch 1.5.0 and CUDA 10.1 for this tutorial. You may install other versions by changing the version number in the pip install command.
###Code
# Check nvcc version
!nvcc -V
# Check GCC version
!gcc --version
# Check Pytorch installation
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
# Check MMSegmentation installation
import mmseg
print(mmseg.__version__)
###Output
1.7.1 True
0.14.1
###Markdown
Run Inference with MMSeg trained weight
###Code
from mmseg.apis import inference_segmentor, init_segmentor, show_result_pyplot
from mmseg.core.evaluation import get_palette
###Output
/home/shirakawa/miniconda3/envs/mmsegmentation/lib/python3.8/site-packages/mmcv/utils/registry.py:249: UserWarning: The old API of register_module(module, force=False) is deprecated and will be removed, please use the new API register_module(name=None, force=False, module=None) instead.
warnings.warn(
###Markdown
Train a semantic segmentation model on a new dataset
To train on a customized dataset, the following steps are necessary:
1. Add a new dataset class.
2. Create a config file accordingly.
3. Perform training and evaluation.
Add a new dataset
Datasets in MMSegmentation require images and semantic segmentation maps to be placed in folders with the same prefix. To support a new dataset, we may need to modify the original file structure. In this tutorial, we give an example of converting the dataset. You may refer to [docs](https://github.com/open-mmlab/mmsegmentation/docs/tutorials/new_dataset.md) for details about dataset reorganization. We use the [Stanford Background Dataset](http://dags.stanford.edu/projects/scenedataset.html) as an example. The dataset contains 715 images chosen from the existing public datasets [LabelMe](http://labelme.csail.mit.edu), [MSRC](http://research.microsoft.com/en-us/projects/objectclassrecognition), [PASCAL VOC](http://pascallin.ecs.soton.ac.uk/challenges/VOC) and [Geometric Context](http://www.cs.illinois.edu/homes/dhoiem/). Images from these datasets are mainly outdoor scenes, each approximately 320-by-240 pixels. In this tutorial, we use the region annotations as labels. There are 8 classes in total, i.e. sky, tree, road, grass, water, building, mountain, and foreground object.
###Code
# download and unzip
#!wget http://dags.stanford.edu/data/iccv09Data.tar.gz -O /var/datasets/standford_background.tar.gz
#!tar xf /var/datasets/standford_background.tar.gz
#!tar xf /var/datasets/standford_background.tar.gz
# Let's take a look at the dataset
import mmcv
import matplotlib.pyplot as plt
img = mmcv.imread('./iccv09Data/images/6000124.jpg')
plt.figure(figsize=(8, 6))
plt.imshow(mmcv.bgr2rgb(img))
plt.show()
import os.path as osp
import numpy as np
from PIL import Image
# convert dataset annotation to semantic segmentation map
data_root = 'iccv09Data'
img_dir = 'images'
ann_dir = 'labels'
# define class and plaette for better visualization
classes = ('sky', 'tree', 'road', 'grass', 'water', 'bldg', 'mntn', 'fg obj')
palette = [[128, 128, 128], [129, 127, 38], [120, 69, 125], [53, 125, 34],
[0, 11, 123], [118, 20, 12], [122, 81, 25], [241, 134, 51]]
"""
for file in mmcv.scandir(osp.join(data_root, ann_dir), suffix='.regions.txt'):
seg_map = np.loadtxt(osp.join(data_root, ann_dir, file)).astype(np.uint8)
seg_img = Image.fromarray(seg_map).convert('P')
seg_img.putpalette(np.array(palette, dtype=np.uint8))
seg_img.save(osp.join(data_root, ann_dir, file.replace('.regions.txt',
'.png')))
"""
# Let's take a look at the segmentation map we got
import matplotlib.patches as mpatches
"""
img = Image.open('iccv09Data/labels/6000124.png')
plt.figure(figsize=(8, 6))
im = plt.imshow(np.array(img.convert('RGB')))
# create a patch (proxy artist) for every color
patches = [mpatches.Patch(color=np.array(palette[i])/255.,
label=classes[i]) for i in range(8)]
# put those patched as legend-handles into the legend
plt.legend(handles=patches, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.,
fontsize='large')
plt.show()
"""
###Output
_____no_output_____
###Markdown
After downloading the data, we need to implement the `load_annotations` function in the new dataset class `StandfordBackgroundDataset`.
###Code
from mmseg.datasets.builder import DATASETS
from mmseg.datasets.custom import CustomDataset
@DATASETS.register_module()
class StandfordBackgroundDataset(CustomDataset):
CLASSES = classes
PALETTE = palette
def __init__(self, split, **kwargs):
super().__init__(img_suffix='.jpg', seg_map_suffix='.png',
split=split, **kwargs)
assert osp.exists(self.img_dir) and self.split is not None
###Output
_____no_output_____
###Markdown
Create a config file
In the next step, we need to modify the config for training. To accelerate the process, we fine-tune the model from trained weights.
###Code
from mmcv import Config
cfg = Config.fromfile('../configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py')
cfg = Config.fromfile('../configs/unet/deeplabv3_unet_s5-d16_64x64_40k_rain_filtering.py')
cfg = Config.fromfile('../configs/unet/fcn_unet_s4-d16_32x32_40k_rain_filtering.py')
#cfg = Config.fromfile('../configs/simple_convnet/fcn_simple_conv_s1_32x32_40k_rain_filtering.py')
###Output
_____no_output_____
###Markdown
Since the given config is used to train PSPNet on the Cityscapes dataset, we need to modify it accordingly for our new dataset.
Train and Evaluation
###Code
cfg.data.test
cfg.data.test
from mmseg.datasets import build_dataset
from mmseg.models import build_segmentor
from mmseg.apis import train_segmentor
cfg.data.train.data_root = '../data/rain_filtering_v2'
cfg.data.test.data_root = '../data/rain_filtering_v2'
cfg.data.test.test_mode = True
datasets = [build_dataset(cfg.data.test)]
cfg.model
#cfg.model.train_cfg = None
model = build_segmentor(cfg.model)#, test_cfg=cfg.get('test_cfg'))
model
#%%debug
ee = datasets[0].__getitem__(0)
q = ee['img'][0]
q.shape
ee.keys()
ee['img_metas'][0]
out = model.backbone(q[np.newaxis])
out[0].shape
out[1].shape
out[2].shape
out[3].shape
model.backbone(q[0][np.newaxis]).shape
rr = model.backbone(q[0][np.newaxis]).shape
model.decode_head
model.inference(q[0][np.newaxis], ee['img_metas'][0], rescale=1)
for i, data in enumerate(datasets[0]):
print(i)
assert 1==0
data['img']
model(ee['img'])
model.backbone(ee['img'].data[np.newaxis])[0].shape
ee['img'].data[np.newaxis].shape
ee['img'].data[np.newaxis]
(4 - 5.805868e+00) / 1.6501048e+00
datasets[0].prepare_train_img(1)['img'].data[5].mean(0)
datasets[0].prepare_train_img(13)['img'].data[5]
datasets[0].prepare_test_img(1)['img'].data.shape
model.CLASSES = datasets[0].CLASSES
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
cfg.pretty_text
train_segmentor(model, datasets, cfg, distributed=False, validate=True,
meta=dict())
###Output
2021-06-05 11:30:55,144 - mmseg - INFO - Loaded 298 images
2021-06-05 11:30:55,145 - mmseg - INFO - Start running, host: [email protected], work_dir: /home/shirakawa/projects/openmmlab/KS_work/mmsegmentation/demo
2021-06-05 11:30:55,145 - mmseg - INFO - workflow: [('train', 1)], max: 40000 iters
###Markdown
Inference with trained model
###Code
img = mmcv.imread('iccv09Data/images/6000124.jpg')
model.cfg = cfg
result = inference_segmentor(model, img)
plt.figure(figsize=(8, 6))
show_result_pyplot(model, img, result, palette)
from mmseg.datasets.builder import DATASETS
from mmseg.datasets.custom import CustomDataset
from mmseg.datasets.KSdataset import ParticleDataset
from mmcv import Config
cfg = Config.fromfile('../configs/unet/deeplabv3_unet_s5-d16_64x64_40k_rain_filtering.py')
#cfg = Config.fromfile('../configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py')
from mmseg.datasets import build_dataset
from mmseg.models import build_segmentor
from mmseg.apis import train_segmentor
# Build the dataset
datasets = [build_dataset(cfg.data.train)]
# Build the detector
#model = build_segmentor(
# cfg.model, train_cfg=cfg.model.train_cfg, test_cfg=cfg.model.test_cfg)
model = build_segmentor(
cfg.model)
# Add an attribute for visualization convenience
model.CLASSES = datasets[0].CLASSES
# Create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
train_segmentor(model, datasets, cfg, distributed=False, validate=True,
meta=dict())
datasets[0].CLASSES
###Output
_____no_output_____ |
BIG_GAME - Hackerearth/3. Bagging.ipynb | ###Markdown
Dropping columns -- ['Won_Championship','Team_Value','Playing_Style','Coach_Experience_Level']
###Code
y = training_data.Won_Championship
training_data = training_data.drop(columns=['Won_Championship','Team_Value','Playing_Style','Coach_Experience_Level'],axis=1)
#training_data = training_data.drop(columns=['Won_Championship','Coach_Experience_Level','Number_Of_Wins_This_Season','Number_Of_Injured_Players'],axis=1)
x_train,x_test, y_train, y_test = train_test_split(training_data,y,test_size=0.2)
bag = BaggingClassifier(n_estimators=100,oob_score=True,bootstrap_features=True)
bag.fit(x_train,y_train)
#bag.fit(training_data,y)
prediction = bag.predict(x_test)
acc = 100 * (f1_score(y_test,prediction,average='binary'))  # note: this is the F1 score (as a percentage), not classification accuracy
acc
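# Out-of-bag estimate that bagging provides for free (available because oob_score=True above);
# a quick, optional check alongside the held-out F1 score.
print('OOB score:', bag.oob_score_)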
cols = training_data.columns
test_data = pd.read_csv('test.csv')
event_id = test_data['ID']
print(test_data.shape)
test_data = test_data.drop(columns=['ID','Team_Value','Playing_Style','Coach_Experience_Level'],axis=1)
#test_data['Team_Value'] = le_Team_Value.fit_transform(test_data['Team_Value'])
#test_data['Playing_Style'] = le_Playing_Style.fit_transform(test_data['Playing_Style'])
test_data['Number_Of_Injured_Players'] = le_Number_Of_Injured_Players.fit_transform(test_data['Number_Of_Injured_Players'])
#test_data['Coach_Experience_Level'] = le_Coach_Experience_Level.fit_transform(test_data['Coach_Experience_Level'])
predictions = bag.predict(test_data)
result_df = pd.DataFrame({'ID':event_id,'Won_Championship':predictions})
result_df.to_csv('Prediction.csv',index=False)
###Output
(3500, 9)
|
notebooks/fun/choose_your_own_adventure.ipynb | ###Markdown
Choose Your Own Adventure Game
This sample is for a simple Choose Your Own Adventure style game. You could implement this in a console application or other user interface, so this notebook is meant to show you the "raw" assertion logic happening behind the scenes.
###Code
import os, sys
sys.path.insert(1, os.path.abspath('..\\..'))
from thoughts.rules_engine import RulesEngine
import pprint
engine = RulesEngine()
###Output
_____no_output_____
###Markdown
Define the KB Rules (World)
###Code
rules = [
{ "#when": {"game-event": "start"},
"#then": [{"#output": "You are standing in a scary woods at night."},
{"#output": "There are even scarier sounds coming from the north."},
{"#output": "To go north, turn to page 15."},
{"#output": "To stand there and whimper like a 3-year old, turn to page 10."}]
},
{ "#when": "10",
"#then": [{"#output": "You cry, and cry and cry and cry."},
{"game-event": "start"}]
},
{ "#when": "15",
"#then": [{"#output": "North?? OK...."},
{"#output": "You go north (a terrible choice, btw) and run into goblins."},
{"#output": "To try talking with the goblins, turn to page 32."},
{"#output": "To try sneaking past the goblins, turn to page 50."}]
},
{ "#when": "32",
"#then": [{"#output": "You try talking with the goblins."},
{"#output": "Unfortunately, they do not speak your language and become murderous."},
{"#output": "Roll a die to see if you escape them."},
           {"#output": "If you rolled a 2 or lower, then turn to page 60."},
           {"#output": "If you rolled a 3 or higher, then turn to page 65."}]
},
{ "#when": "50",
"#then": [{"#output": "Your sneaky plan does not work."},
{"#output": "Unfortunately, they drag you back into their lair and keep you as a pet."},
{"#output": "GAME OVER"}]
},
{ "#when": "60",
"#then": [{"#output": "You missed. That's extremely bad."},
{"#output": "Unfortunately, they knock yout out and take all of your money."},
           {"#output": "Unfortunately, they knock you out and take all of your money."},
},
{ "#when": "65",
"#then": [{"#output": "Great job! You sneak past the goblins!"},
           {"#output": "Then you went on to live happily ever after."},
{"#output": "GAME OVER"}]
}
]
engine.load_rules_from_list(rules, "adventure-game")
###Output
_____no_output_____
###Markdown
Start the Game
Any good game has a beginning. Start the game using the initial assertion to get things going. There's nothing special here about the words "game-event" or "start"; you can use any designation as long as it triggers the rules you need.
###Code
response = engine.process({"game-event": "start"})
###Output
You are standing in a scary woods at night.
There are even scarier sounds coming from the north.
To go north, turn to page 15.
To stand there and whimper like a 3-year old, turn to page 10.
###Markdown
Go North (Turn to Page 15)
Uh oh, we ran into goblins! Go north by asserting "15", which will match the corresponding when rule and return the matching then portion for that rule.
###Code
response = engine.process("15")
###Output
North?? OK....
You go north (a terrible choice, btw) and run into goblins.
To try talking with the goblins, turn to page 32.
To try sneaking past the goblins, turn to page 50.
###Markdown
Try Talking to the Goblins (Turn to Page 32)
Let's see if we can talk our way out of this. Assert "32" to turn to page 32.
###Code
response = engine.process("32")
###Output
You try talking with the goblins.
Unfortunately, they do not speak your language and become murderous.
Roll a die to see if you escape them.
If you rolled a 2 or lower, then turn to page 60.
If you rolled a 3 or higher, then turn to page 65.
###Markdown
Roll a 2 or Lower (Turn to Page 60)
Run! Roll a die to see what happens next. Let's assume we roll a 2. Assert "60" and check the result.
###Code
response = engine.process("60")
###Output
You missed. That's extremely bad.
Unfortunately, they knock you out and take all of your money.
GAME OVER
|
predictions/landslides/predict.ipynb | ###Markdown
NASA Space Apps 2020
Automated Detection of Hazards
____________________
Here we do predictions and save them to the DB.
###Code
import time
import datetime
import keras
import mysql.connector
import pandas as pd
import math
import numpy as np  # used below for np.array
###Output
_____no_output_____
###Markdown
*Our columns: landslide_type, landslide_size, trigger, injuries, fatalities*
Make predictions for the next days
###Code
model = keras.models.load_model("./model.hdf5")
###Output
_____no_output_____
###Markdown
MYSQL
###Code
mydb = mysql.connector.connect(
host="localhost",
user="root",
password="",
database="stati"
)
print(mydb)
###Output
_____no_output_____
###Markdown
Points to predict
###Code
#POINTS FOR DEMO
points = [
[47.027963, 28.837133,"Moldova"],
[47.756670, 27.908760,"Moldova"],
[47.375748, 28.822655,"Moldova"],
[50.400586, 30.451840,"Ukraine"],
[49.811988, 23.997584,"Ukraine"],
[49.391303, 26.991186,"Ukraine"],
[44.337761, 26.009808,"Romania"],
[46.585592, 23.492343,"Romania"],
[44.250436, 23.753458,"Romania"]
]
###Output
_____no_output_____
###Markdown
Prepare data info
###Code
df=pd.read_csv("landslides.csv")
df["landslide_type"]=df["landslide_type"].str.lower()
df["landslide_size"]=df["landslide_size"].str.lower()
df["trigger"]=df["trigger"].str.lower()
df = df[df.landslide_type != "unknown"]
df = df[df.trigger != "unknown"]
df=df.fillna(0)
df = df[df.trigger != 0]
landslide_types = list(set(df['landslide_type']))
landslide_sizes = list(set(df['landslide_size']))
triggers = list(set(df['trigger']))
# 1st day: run the model for each point and write one row per point to the database
for p in points:
    # model input appears to be [day offset, latitude, longitude]
    data = model.predict(x=np.array([[1.001, p[0], p[1]]]))[0]
    mycursor = mydb.cursor()
    print(data)
    sql = "INSERT INTO redalert (la, lo, hazard, types, size, trig, injuries, fatalities, prob_trig, country) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)"
    # decode the numeric outputs back into category labels via their list indices;
    # the fractional part of the trigger output is stored as a rough trigger probability
    val = (p[0], p[1], "landslides",
           landslide_types[math.floor(data[0])], landslide_sizes[math.floor(data[1])], triggers[math.floor(data[2])],
           str(data[3]), str(data[4]), data[2] - math.floor(data[2]), p[2])
    mycursor.execute(sql, val)
    mydb.commit()
###Output
_____no_output_____ |
notebooks/02_from_raw/03_CUR_column_subset.ipynb | ###Markdown
03 :: CUR column subset selection
###Code
import pandas as pd
import numpy as np
import os
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="whitegrid")
# Commonly used constants.
slides = [
'B02_D1', 'B02_E1', 'B03_C2', 'B03_D2', 'B04_D1',
'B04_E1', 'B05_D2', 'B05_E2', 'B06_E1', 'B07_C2',
'N02_C1', 'N02_D1', 'N03_C2', 'N03_D2', 'N04_D1',
'N04_E1', 'N05_C2', 'N05_D2', 'N06_D2', 'N07_C1']
lcpm_parquet = '/media/tmo/data/work/datasets/02_ST/lcpm/lcpm.parquet'
meta_parquet = '/media/tmo/data/work/datasets/02_ST/meta/meta.parquet'
%%time
lcpm_df = pd.read_parquet(lcpm_parquet)
meta_df = pd.read_parquet(meta_parquet)
st_df = lcpm_df.merge(meta_df, how='inner', on='spot_UID')
st_df.info()
st_df.head()
###Output
_____no_output_____
###Markdown
--- Compute Leverage Scores
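For context: if $X \approx U_k \Sigma_k V_k^T$ is the rank-$k$ truncated SVD of the expression matrix, the (unnormalized) leverage score of column $j$ is $\ell_j = \sum_{i=1}^{k} V_{ij}^2$, and analogously $\sum_{j=1}^{k} U_{ij}^2$ for row $i$; dividing these sums by $k$ gives the statistical leverage scores commonly used for CUR column sampling. The helper functions in the next cell compute exactly these sums from `U` and `V`.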
###Code
gene_columns = lcpm_df.columns[1:-1]
ex_matrix = lcpm_df[gene_columns].to_numpy()  # .as_matrix() was removed in newer pandas; .to_numpy() is the current equivalent
ex_matrix.shape
from scipy.linalg import svd
%%time
U, s, V = svd(ex_matrix)
pd.DataFrame(s[:100]).plot(logy=True, figsize=(20,8))
plt.show()
row_k = 60
col_k = 60 # guess based on the plot above
def to_lev_df(lev_values):
return pd.DataFrame(lev_values, columns=['leverage'])
def to_row_lev_scores(U, k):
row_lev_values = np.sum(U[:,:k]**2, axis=1)
return to_lev_df(row_lev_values)
def to_col_lev_scores(V, k):
col_lev_values = np.sum(V[:k,:]**2,axis=0)
return to_lev_df(col_lev_values)
row_lev_df = to_row_lev_scores(U, row_k)
col_lev_df = to_col_lev_scores(V, col_k)
col_lev_stats = col_lev_df.describe()
col_lev_min = col_lev_stats.loc['min'][0]
col_lev_std = col_lev_stats.loc['std'][0]
ranked_gene_lev_df = pd.DataFrame(gene_columns) \
.merge(col_lev_df.sort_values(by='leverage', ascending=False),
left_index=True,
right_index=True) \
.sort_values(by='leverage', ascending=False)
ranked_gene_lev_df.columns = ['gene', 'leverage']
ranked_gene_lev_df.sort_values(by='leverage', ascending=False)[:2000].plot(figsize=(20,8), use_index=False)
plt.show()
ranked_gene_lev_df.to_csv('ranked_gene_leverage.tsv', sep='\t')
ranked_gene_lev_df.head(50)
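# Added sketch (not part of the original analysis): a simple deterministic CUR-style
# column subset keeps the c highest-leverage genes ranked above.
c = 500  # illustrative choice for the number of columns to keep
selected_genes = ranked_gene_lev_df['gene'].head(c).tolist()
C_matrix = lcpm_df[selected_genes]  # the "C" factor of a CUR decomposition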
###Output
_____no_output_____ |
notebooks/tutorial_delira.ipynb | ###Markdown
Delira Introduction*Last updated: 09.05.2019*Authors: Justus Schock, Christoph Haarburger Loading DataTo train your network you first need to load your training data (and probably also your validation data). This chapter will therefore deal with `delira`'s capabilities to load your data (and apply some augmentation). The DatasetThere are mainly two ways to load your data: Lazy or non-lazy. Loading in a lazy way means that you load the data just in time and keep the used memory to a bare minimum. This has, however, the disadvantage that your loading function could be a bottleneck since all postponed operations may have to wait until the needed data samples are loaded. In a no-lazy way, one would preload all data to RAM before starting any other operations. This has the advantage that there cannot be a loading bottleneck during latter operations. This advantage comes at cost of a higher memory usage and a (possibly) huge latency at the beginning of each experiment. Both ways to load your data are implemented in `delira` and they are named `BaseLazyDataset`and `BaseCacheDataset`. In the following steps you will only see the `BaseLazyDataset` since exchanging them is trivial. All Datasets (including the ones you might want to create yourself later) must be derived of `delira.data_loading.AbstractDataset` to ensure a minimum common API.The dataset's `__init__` has the following signature:```pythondef __init__(self, data_path, load_fn, **load_kwargs):```This means, you have to pass the path to the directory containing your data (`data_path`), a function to load a single sample of your data (`load_fn`). To get a single sample of your dataset after creating it, you can index it like this: `dataset[0]`.Additionally you can iterate over your dataset just like over any other `python` iterator via```pythonfor sample in dataset: do your stuff here```or enumerate it via```pythonfor idx, sample in enumerate(dataset): do your stuff here```.The missing argument `**load_kwargs` accepts an arbitrary amount of additional keyword arguments which are directly passed to your loading function.An example of how loading your data may look like is given below:```pythonfrom delira.data_loading import BaseLazyDataset, default_load_fn_2ddataset_train = BaseLazyDataset("/images/datasets/external/mnist/train", default_load_fn_2d, img_shape=(224, 224))```In this case all data lying in `/images/datasets/external/mnist/train` is loaded by `default_load_fn_2d`. The files containing the data must be PNG-files, while the groundtruth is defined in TXT-files. The `default_load_fn_2d` needs the additional argument `img_shape` which is passed as keyword argument via `**load_kwargs`.> **Note:** for reproducability we decided to use some wrapped PyTorch datasets for this introduction. Now, let's just initialize our trainset:
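A cached (non-lazy) counterpart could look like the following sketch; it assumes `BaseCacheDataset` lives in the same module and takes the same arguments as `BaseLazyDataset`, which is what the note on exchanging them suggests.
```python
from delira.data_loading import BaseCacheDataset, default_load_fn_2d

# preloads every sample into RAM up front instead of loading on demand
dataset_train_cached = BaseCacheDataset("/images/datasets/external/mnist/train",
                                        default_load_fn_2d, img_shape=(224, 224))
```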
###Code
from delira.data_loading import TorchvisionClassificationDataset
dataset_train = TorchvisionClassificationDataset("mnist", train=True,
img_shape=(224, 224))
###Output
_____no_output_____
###Markdown
Getting a single sample of your dataset with dataset_train[0] will produce:
###Code
dataset_train[0]
###Output
_____no_output_____
###Markdown
which means, that our data is stored in a dictionary containing the keys `data` and `label`, each of them holding the corresponding numpy arrays. The dataloading works on `numpy` purely and is thus backend agnostic. It does not matter in which format or with which library you load/preprocess your data, but at the end it must be converted to numpy arraysFor validation purposes another dataset could be created with the test data like this:
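For instance, a quick sanity check using only the indexing and the `data`/`label` keys described above might look like this:
```python
sample = dataset_train[0]
print(type(sample["data"]), sample["data"].shape)  # numpy array holding the image
print(sample["label"])                             # the corresponding class label
```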
###Code
dataset_val = TorchvisionClassificationDataset("mnist", train=False,
img_shape=(224, 224))
###Output
_____no_output_____
###Markdown
The DataloaderThe Dataloader wraps your dataset to provide the ability to load whole batches with an abstract interface. To create a dataloader, one would have to pass the following arguments to its `__init__`: the previously created `dataset`.Additionally, it is possible to pass the `batch_size` defining the number of samples per batch, the total number of batches (`num_batches`), which will be the number of samples in your dataset divided by the batchsize per default, a random `seed` for always getting the same behaviour of random number generators and a [`sampler`]() defining your sampling strategy. This would create a dataloader for your `dataset_train`:
###Code
from delira.data_loading import BaseDataLoader
batch_size = 32
loader_train = BaseDataLoader(dataset_train, batch_size)
###Output
_____no_output_____
###Markdown
Since the batch_size has been set to 32, the loader will load 32 samples as one batch.Even though it would be possible to train your network with an instance of `BaseDataLoader`, `malira` also offers a different approach that covers multithreaded data loading and augmentation: The DatamanagerThe data manager is implemented as `delira.data_loading.BaseDataManager` and wraps a `DataLoader`. It also encapsulates augmentations. Having a view on the `BaseDataManager`'s signature, it becomes obvious that it accepts the same arguments as the [`DataLoader`](The-Dataloader). You can either pass a `dataset` or a combination of path, dataset class and load function. Additionally, you can pass a custom dataloder class if necessary and a sampler class to choose a sampling algorithm. The parameter `transforms` accepts augmentation transformations as implemented in `batchgenerators`. Augmentation is applied on the fly using `n_process_augmentation` threads.All in all the DataManager is the recommended way to generate batches from your dataset.The following example shows how to create a data manager instance:
###Code
from delira.data_loading import BaseDataManager
from batchgenerators.transforms.abstract_transforms import Compose
from batchgenerators.transforms.sample_normalization_transforms import MeanStdNormalizationTransform
batchsize = 64
transforms = Compose([MeanStdNormalizationTransform(mean=1*[0], std=1*[1])])
data_manager_train = BaseDataManager(dataset_train, # dataset to use
batchsize, # batchsize
n_process_augmentation=1, # number of augmentation processes
transforms=transforms) # augmentation transforms
###Output
_____no_output_____
###Markdown
The approach to initialize a DataManager from a datapath takes more arguments since, in opposite to initializaton from dataset, it needs all the arguments which are necessary to internally create a dataset.Since we want to validate our model we have to create a second manager containing our `dataset_val`:
###Code
data_manager_val = BaseDataManager(dataset_val,
batchsize,
n_process_augmentation=1,
transforms=transforms)
###Output
_____no_output_____
###Markdown
That's it - we just finished loading our data!Iterating over a DataManager is possible in simple loops:
###Code
from tqdm.auto import tqdm # utility for progress bars
# create actual batch generator from DataManager
batchgen = data_manager_val.get_batchgen()
for data in tqdm(batchgen):
pass # here you can access the data of the current batch
###Output
_____no_output_____
###Markdown
SamplerIn previous section samplers have been already mentioned but not yet explained. A sampler implements an algorithm how a batch should be assembled from single samples in a dataset. `delira` provides the following sampler classes in it's subpackage `delira.data_loading.sampler`:* `AbstractSampler`* `SequentialSampler`* `PrevalenceSequentialSampler`* `RandomSampler`* `PrevalenceRandomSampler`* `WeightedRandomSampler`* `LambdaSampler`The `AbstractSampler` implements no sampling algorithm but defines a sampling API and thus all custom samplers must inherit from this class. The `Sequential` sampler builds batches by just iterating over the samples' indices in a sequential way. Following this, the `RandomSampler` builds batches by randomly drawing the samples' indices with replacement. If the class each sample belongs to is known for each sample at the beginning, the `PrevalenceSequentialSampler` and the `PrevalenceRandomSampler` perform a per-class sequential or random sampling and building each batch with the exactly same number of samples from each class. The `WeightedRandomSampler`accepts custom weights to give specific samples a higher probability during random sampling than others.The `LambdaSampler` is a wrapper for a custom sampling function, which can be passed to the wrapper during it's initialization, to ensure API conformity.It can be passed to the DataLoader or DataManager as class (argument `sampler_cls`) or as instance (argument `sampler`). ModelsSince the purpose of this framework is to use machine learning algorithms, there has to be a way to define them. Defining models is straight forward. `delira` provides a class `delira.models.AbstractNetwork`. *All models must inherit from this class*.To inherit this class four functions must be implemented in the subclass:* `__init__`* `closure`* `prepare_batch`* `__call__` `__init__`The `__init__`function is a classes constructor. In our case it builds the entire model (maybe using some helper functions). If writing your own custom model, you have to override this method.> **Note:** If you want the best experience for saving your model and completely recreating it during the loading process you need to take care of a few things:> * if using `torchvision.models` to build your model, always import it with `from torchvision import models as t_models`> * register all arguments in your custom `__init__` in the abstract class. A init_prototype could look like this:>```pythondef __init__(self, in_channels: int, n_outputs: int, **kwargs): """ Parameters ---------- in_channels: int number of input_channels n_outputs: int number of outputs (usually same as number of classes) """ register params by passing them as kwargs to parent class __init__ only params registered like this will be saved! super().__init__(in_channels=in_channels, n_outputs=n_outputs, **kwargs)``` `closure`The `closure`function defines one batch iteration to train the network. This function is needed for the framework to provide a generic trainer function which works with all kind of networks and loss functions.The closure function must implement all steps from forwarding, over loss calculation, metric calculation, logging (for which `delira.logging_handlers` provides some extensions for pythons logging module), and the actual backpropagation.It is called with an empty optimizer-dict to evaluate and should thus work with optional optimizers. 
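As a minimal sketch of choosing a sampling strategy, the keyword names below mirror the data manager construction shown later in this notebook (`sampler_cls`); treat it as an illustration rather than the definitive API:
```python
from delira.data_loading import BaseDataManager, RandomSampler, SequentialSampler

# draw training batches randomly, but iterate the validation set in order
manager_train = BaseDataManager(dataset_train, 64, n_process_augmentation=1,
                                transforms=None, sampler_cls=RandomSampler)
manager_val = BaseDataManager(dataset_val, 64, n_process_augmentation=1,
                              transforms=None, sampler_cls=SequentialSampler)
```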
`prepare_batch`The `prepare_batch`function defines the transformation from loaded data to match the networks input and output shape and pushes everything to the right device. Abstract Networks for specific Backends PyTorchAt the time of writing, PyTorch is the only backend which is supported, but other backends are planned.In PyTorch every network should be implemented as a subclass of `torch.nn.Module`, which also provides a `__call__` method.This results in sloghtly different requirements for PyTorch networks: instead of implementing a `__call__` method, we simply call the `torch.nn.Module.__call__` and therefore have to implement the `forward` method, which defines the module's behaviour and is internally called by `torch.nn.Module.__call__` (among other stuff). To give a default behaviour suiting most cases and not have to care about internals, `delira` provides the `AbstractPyTorchNetwork` which is a more specific case of the `AbstractNetwork` for PyTorch modules. `forward`The `forward` function defines what has to be done to forward your input through your network and must return a dictionary. Assuming your network has three convolutional layers stored in `self.conv1`, `self.conv2` and `self.conv3` and a ReLU stored in `self.relu`, a simple `forward` function could look like this:```pythondef forward(self, input_batch: torch.Tensor): out_1 = self.relu(self.conv1(input_batch)) out_2 = self.relu(self.conv2(out_1)) out_3 = self.conv3(out2) return {"pred": out_3}``` `prepare_batch`The default `prepare_batch` function for PyTorch networks looks like this:```python @staticmethod def prepare_batch(batch: dict, input_device, output_device): """ Helper Function to prepare Network Inputs and Labels (convert them to correct type and shape and push them to correct devices) Parameters ---------- batch : dict dictionary containing all the data input_device : torch.device device for network inputs output_device : torch.device device for network outputs Returns ------- dict dictionary containing data in correct type and shape and on correct device """ return_dict = {"data": torch.from_numpy(batch.pop("data")).to( input_device)} for key, vals in batch.items(): return_dict[key] = torch.from_numpy(vals).to(output_device) return return_dict```and can be customized by subclassing the `AbstractPyTorchNetwork`. 
`closure example`A simple closure function for a PyTorch module could look like this:```python @staticmethod def closure(model: AbstractPyTorchNetwork, data_dict: dict, optimizers: dict, criterions={}, metrics={}, fold=0, **kwargs): """ closure method to do a single backpropagation step Parameters ---------- model : :class:`ClassificationNetworkBasePyTorch` trainable model data_dict : dict dictionary containing the data optimizers : dict dictionary of optimizers to optimize model's parameters criterions : dict dict holding the criterions to calculate errors (gradients from different criterions will be accumulated) metrics : dict dict holding the metrics to calculate fold : int Current Fold in Crossvalidation (default: 0) **kwargs: additional keyword arguments Returns ------- dict Metric values (with same keys as input dict metrics) dict Loss values (with same keys as input dict criterions) list Arbitrary number of predictions as torch.Tensor Raises ------ AssertionError if optimizers or criterions are empty or the optimizers are not specified """ assert (optimizers and criterions) or not optimizers, \ "Criterion dict cannot be emtpy, if optimizers are passed" loss_vals = {} metric_vals = {} total_loss = 0 choose suitable context manager: if optimizers: context_man = torch.enable_grad else: context_man = torch.no_grad with context_man(): inputs = data_dict.pop("data") obtain outputs from network preds = model(inputs)["pred"] if data_dict: for key, crit_fn in criterions.items(): _loss_val = crit_fn(preds, *data_dict.values()) loss_vals[key] = _loss_val.detach() total_loss += _loss_val with torch.no_grad(): for key, metric_fn in metrics.items(): metric_vals[key] = metric_fn( preds, *data_dict.values()) if optimizers: optimizers['default'].zero_grad() total_loss.backward() optimizers['default'].step() else: add prefix "val" in validation mode eval_loss_vals, eval_metrics_vals = {}, {} for key in loss_vals.keys(): eval_loss_vals["val_" + str(key)] = loss_vals[key] for key in metric_vals: eval_metrics_vals["val_" + str(key)] = metric_vals[key] loss_vals = eval_loss_vals metric_vals = eval_metrics_vals for key, val in {**metric_vals, **loss_vals}.items(): logging.info({"value": {"value": val.item(), "name": key, "env_appendix": "_%02d" % fold }}) logging.info({'image_grid': {"images": inputs, "name": "input_images", "env_appendix": "_%02d" % fold}}) return metric_vals, loss_vals, preds```> **Note:** This closure is taken from the `delira.models.classification.ClassificationNetworkBasePyTorch` Other examplesIn `delira.models` you can find exemplaric implementations of generative adversarial networks, classification and regression approaches or segmentation networks. Training ParametersTraining-parameters (often called hyperparameters) can be defined in the `delira.training.Parameters` class. 
The class accepts the parameters `batch_size` and `num_epochs` to define the batchsize and the number of epochs to train, the parameters `optimizer_cls` and `optimizer_params` to create an optimizer for training, the parameter `criterions` to specify the training criterions (whose gradients will be accumulated by default), the parameters `lr_sched_cls` and `lr_sched_params` to define the learning rate scheduling and the parameter `metrics` to specify evaluation metrics.Additionally, it is possible to pass an arbitrary number of keyword arguments to the class.It is good practice to create a `Parameters` object at the beginning and then use it for creating other objects which are needed for training, since you can use the class's attributes and changes in hyperparameters only have to be done once:
###Code
import torch
from delira.training import Parameters
from delira.data_loading import RandomSampler, SequentialSampler
params = Parameters(fixed_params={
"model": {},
"training": {
"batch_size": 64, # batchsize to use
"num_epochs": 2, # number of epochs to train
"optimizer_cls": torch.optim.Adam, # optimization algorithm to use
"optimizer_params": {'lr': 1e-3}, # initialization parameters for this algorithm
"criterions": {"CE": torch.nn.CrossEntropyLoss()}, # the loss function
"lr_sched_cls": None, # the learning rate scheduling algorithm to use
"lr_sched_params": {}, # the corresponding initialization parameters
"metrics": {} # and some evaluation metrics
}
})
# recreating the data managers with the batchsize of the params object
manager_train = BaseDataManager(dataset_train, params.nested_get("batch_size"), 1,
transforms=None, sampler_cls=RandomSampler,
n_process_loading=4)
manager_val = BaseDataManager(dataset_val, params.nested_get("batch_size"), 3,
transforms=None, sampler_cls=SequentialSampler,
n_process_loading=4)
###Output
_____no_output_____
###Markdown
TrainerThe `delira.training.NetworkTrainer` class provides functions to train a single network by passing attributes from your parameter object, a `save_freq` to specify how often your model should be saved (`save_freq=1` indicates every epoch, `save_freq=2` every second epoch etc.) and `gpu_ids`. If you don't pass any ids at all, your network will be trained on CPU (and probably take a lot of time). If you specify 1 id, the network will be trained on the GPU with the corresponding index and if you pass multiple `gpu_ids` your network will be trained on multiple GPUs in parallel.> **Note:** The GPU indices are refering to the devices listed in `CUDA_VISIBLE_DEVICES`. E.g if `CUDA_VISIBLE_DEVICES` lists GPUs 3, 4, 5 then gpu_id 0 will be the index for GPU 3 etc.> **Note:** training on multiple GPUs is not recommended for easy and small networks, since for these networks the synchronization overhead is far greater than the parallelization benefit.Training your network might look like this:
###Code
from delira.training import PyTorchNetworkTrainer
from delira.models.classification import ClassificationNetworkBasePyTorch
# path where checkpoints should be saved
save_path = "./results/checkpoints"
model = ClassificationNetworkBasePyTorch(in_channels=1, n_outputs=10)
trainer = PyTorchNetworkTrainer(network=model,
save_path=save_path,
criterions=params.nested_get("criterions"),
optimizer_cls=params.nested_get("optimizer_cls"),
optimizer_params=params.nested_get("optimizer_params"),
metrics=params.nested_get("metrics"),
lr_scheduler_cls=params.nested_get("lr_sched_cls"),
lr_scheduler_params=params.nested_get("lr_sched_params"),
gpu_ids=[0]
)
#trainer.train(params.nested_get("num_epochs"), manager_train, manager_val)
###Output
_____no_output_____
###Markdown
ExperimentThe `delira.training.AbstractExperiment` class needs an experiment name, a path to save it's results to, a parameter object, a model class and the keyword arguments to create an instance of this class. It provides methods to perform a single training and also a method for running a kfold-cross validation. In order to create it, you must choose the `PyTorchExperiment`, which is basically just a subclass of the `AbstractExperiment` to provide a general setup for PyTorch modules. Running an experiment could look like this:
###Code
from delira.training import PyTorchExperiment
from delira.training.train_utils import create_optims_default_pytorch
# Add model parameters to Parameter class
params.fixed.model = {"in_channels": 1, "n_outputs": 10}
experiment = PyTorchExperiment(params=params,
model_cls=ClassificationNetworkBasePyTorch,
name="TestExperiment",
save_path="./results",
optim_builder=create_optims_default_pytorch,
gpu_ids=[0])
experiment.run(manager_train, manager_val)
###Output
_____no_output_____
###Markdown
An `Experiment` is the most abstract (and recommended) way to define, train and validate your network. Logging Previous class and function definitions used pythons's `logging` library. As extensions for this library `delira` provides a package (`delira.logging`) containing handlers to realize different logging methods. To use these handlers simply add them to your logger like this:```pythonlogger.addHandler(logging.StreamHandler())```Nowadays, delira mainly relies on [trixi](https://github.com/MIC-DKFZ/trixi/) for logging and provides only a `MultiStreamHandler` and a `TrixiHandler`, which is a binding to `trixi`'s loggers and integrates them into the python `logging` module `MultiStreamHandler`The `MultiStreamHandler` accepts an arbitrary number of streams during initialization and writes the message to all of it's streams during logging. Logging with `Visdom` - The `trixi` Loggers[`Visdom`](https://github.com/facebookresearch/visdom) is a tool designed to visualize your logs. To use this tool you need to open a port on the machine you want to train on via `visdom -port YOUR_PORTNUMBER` Afterwards just add the handler of your choice to the logger. For more detailed information and customization have a look at [this](https://github.com/facebookresearch/visdom) website.Logging the scalar tensors containing `1`, `2`, `3`, `4` (at the beginning; will increase to show epochwise logging) with the corresponding keys `"one"`, `"two"`, `"three"`, `"four"` and two random images with the keys `"prediction"` and `"groundtruth"` would look like this:
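As a small sketch of the `MultiStreamHandler`, assuming the streams are passed directly as constructor arguments (as described above) and that the class is importable from `delira.logging`:
```python
import sys
import logging
from delira.logging import MultiStreamHandler

# write every log record both to stdout and to a file
handler = MultiStreamHandler(sys.stdout, open("train.log", "a"))
logging.getLogger().addHandler(handler)
```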
###Code
NUM_ITERS = 4
# import logging handler and logging module
from delira.logging import TrixiHandler
from trixi.logger import PytorchVisdomLogger
import logging
# configure logging module (and root logger)
logger_kwargs = {
'name': 'test_env', # name of logging environment
'port': 9999 # visdom port to connect to
}
logger_cls = PytorchVisdomLogger
# configure logging module (and root logger)
logging.basicConfig(level=logging.INFO,
handlers=[TrixiHandler(logger_cls, **logger_kwargs)])
# derive logger from root logger
# (don't do `logger = logging.Logger("...")` since this will create a new
# logger which is unrelated to the root logger
logger = logging.getLogger("Test Logger")
# create dict containing the scalar numbers as torch.Tensor
scalars = {"one": torch.Tensor([1]),
"two": torch.Tensor([2]),
"three": torch.Tensor([3]),
"four": torch.Tensor([4])}
# create dict containing the images as torch.Tensor
# pytorch awaits tensor dimensionality of
# batchsize x image channels x height x width
images = {"prediction": torch.rand(1, 3, 224, 224),
"groundtruth": torch.rand(1, 3, 224, 224)}
# Simulate 4 Epochs
for i in range(4*NUM_ITERS):
logger.info({"image_grid": {"images": images["prediction"], "name": "predictions"}})
for key, val_tensor in scalars.items():
logger.info({"value": {"value": val_tensor.item(), "name": key}})
scalars[key] += 1
###Output
_____no_output_____
###Markdown
Delira Introduction*Last updated: 09.05.2019*Authors: Justus Schock, Christoph Haarburger Loading DataTo train your network you first need to load your training data (and probably also your validation data). This chapter will therefore deal with `delira`'s capabilities to load your data (and apply some augmentation). The DatasetThere are mainly two ways to load your data: Lazy or non-lazy. Loading in a lazy way means that you load the data just in time and keep the used memory to a bare minimum. This has, however, the disadvantage that your loading function could be a bottleneck since all postponed operations may have to wait until the needed data samples are loaded. In a no-lazy way, one would preload all data to RAM before starting any other operations. This has the advantage that there cannot be a loading bottleneck during latter operations. This advantage comes at cost of a higher memory usage and a (possibly) huge latency at the beginning of each experiment. Both ways to load your data are implemented in `delira` and they are named `BaseLazyDataset`and `BaseCacheDataset`. In the following steps you will only see the `BaseLazyDataset` since exchanging them is trivial. All Datasets (including the ones you might want to create yourself later) must be derived of `delira.data_loading.AbstractDataset` to ensure a minimum common API.The dataset's `__init__` has the following signature:```pythondef __init__(self, data_path, load_fn, **load_kwargs):```This means, you have to pass the path to the directory containing your data (`data_path`), a function to load a single sample of your data (`load_fn`). To get a single sample of your dataset after creating it, you can index it like this: `dataset[0]`.Additionally you can iterate over your dataset just like over any other `python` iterator via```pythonfor sample in dataset: do your stuff here```or enumerate it via```pythonfor idx, sample in enumerate(dataset): do your stuff here```.The missing argument `**load_kwargs` accepts an arbitrary amount of additional keyword arguments which are directly passed to your loading function.An example of how loading your data may look like is given below:```pythonfrom delira.data_loading import BaseLazyDataset, default_load_fn_2ddataset_train = BaseLazyDataset("/images/datasets/external/mnist/train", default_load_fn_2d, img_shape=(224, 224))```In this case all data lying in `/images/datasets/external/mnist/train` is loaded by `default_load_fn_2d`. The files containing the data must be PNG-files, while the groundtruth is defined in TXT-files. The `default_load_fn_2d` needs the additional argument `img_shape` which is passed as keyword argument via `**load_kwargs`.> **Note:** for reproducability we decided to use some wrapped PyTorch datasets for this introduction. Now, let's just initialize our trainset:
###Code
from delira.data_loading import TorchvisionClassificationDataset
dataset_train = TorchvisionClassificationDataset("mnist", train=True,
img_shape=(224, 224))
###Output
_____no_output_____
###Markdown
Getting a single sample of your dataset with dataset_train[0] will produce:
###Code
dataset_train[0]
###Output
_____no_output_____
###Markdown
which means, that our data is stored in a dictionary containing the keys `data` and `label`, each of them holding the corresponding numpy arrays. The dataloading works on `numpy` purely and is thus backend agnostic. It does not matter in which format or with which library you load/preprocess your data, but at the end it must be converted to numpy arraysFor validation purposes another dataset could be created with the test data like this:
###Code
dataset_val = TorchvisionClassificationDataset("mnist", train=False,
img_shape=(224, 224))
###Output
_____no_output_____
###Markdown
The DataloaderThe Dataloader wraps your dataset to provide the ability to load whole batches with an abstract interface. To create a dataloader, one would have to pass the following arguments to its `__init__`: the previously created `dataset`.Additionally, it is possible to pass the `batch_size` defining the number of samples per batch, the total number of batches (`num_batches`), which will be the number of samples in your dataset divided by the batchsize per default, a random `seed` for always getting the same behaviour of random number generators and a [`sampler`]() defining your sampling strategy. This would create a dataloader for your `dataset_train`:
###Code
from delira.data_loading import DataLoader
batch_size = 32
loader_train = DataLoader(dataset_train, batch_size)
###Output
_____no_output_____
###Markdown
Since the batch_size has been set to 32, the loader will load 32 samples as one batch.Even though it would be possible to train your network with an instance of `DataLoader`, `malira` also offers a different approach that covers multithreaded data loading and augmentation: The DatamanagerThe data manager is implemented as `delira.data_loading.DataManager` and wraps a `DataLoader`. It also encapsulates augmentations. Having a view on the `DataManager`'s signature, it becomes obvious that it accepts the same arguments as the [`DataLoader`](The-Dataloader). You can either pass a `dataset` or a combination of path, dataset class and load function. Additionally, you can pass a custom dataloder class if necessary and a sampler class to choose a sampling algorithm. The parameter `transforms` accepts augmentation transformations as implemented in `batchgenerators`. Augmentation is applied on the fly using `n_process_augmentation` threads.All in all the DataManager is the recommended way to generate batches from your dataset.The following example shows how to create a data manager instance:
###Code
from delira.data_loading import DataManager
from batchgenerators.transforms.abstract_transforms import Compose
from batchgenerators.transforms.sample_normalization_transforms import MeanStdNormalizationTransform
batchsize = 64
transforms = Compose([MeanStdNormalizationTransform(mean=1*[0], std=1*[1])])
data_manager_train = DataManager(dataset_train, # dataset to use
batchsize, # batchsize
n_process_augmentation=1, # number of augmentation processes
transforms=transforms) # augmentation transforms
###Output
_____no_output_____
###Markdown
The approach to initialize a DataManager from a datapath takes more arguments since, in opposite to initializaton from dataset, it needs all the arguments which are necessary to internally create a dataset.Since we want to validate our model we have to create a second manager containing our `dataset_val`:
###Code
data_manager_val = DataManager(dataset_val,
batchsize,
n_process_augmentation=1,
transforms=transforms)
###Output
_____no_output_____
###Markdown
That's it - we just finished loading our data!Iterating over a DataManager is possible in simple loops:
###Code
from tqdm.auto import tqdm # utility for progress bars
# create actual batch generator from DataManager
batchgen = data_manager_val.get_batchgen()
for data in tqdm(batchgen):
pass # here you can access the data of the current batch
###Output
_____no_output_____
###Markdown
SamplerIn previous section samplers have been already mentioned but not yet explained. A sampler implements an algorithm how a batch should be assembled from single samples in a dataset. `delira` provides the following sampler classes in it's subpackage `delira.data_loading.sampler`:* `AbstractSampler`* `SequentialSampler`* `PrevalenceSequentialSampler`* `RandomSampler`* `PrevalenceRandomSampler`* `WeightedRandomSampler`* `LambdaSampler`The `AbstractSampler` implements no sampling algorithm but defines a sampling API and thus all custom samplers must inherit from this class. The `Sequential` sampler builds batches by just iterating over the samples' indices in a sequential way. Following this, the `RandomSampler` builds batches by randomly drawing the samples' indices with replacement. If the class each sample belongs to is known for each sample at the beginning, the `PrevalenceSequentialSampler` and the `PrevalenceRandomSampler` perform a per-class sequential or random sampling and building each batch with the exactly same number of samples from each class. The `WeightedRandomSampler`accepts custom weights to give specific samples a higher probability during random sampling than others.The `LambdaSampler` is a wrapper for a custom sampling function, which can be passed to the wrapper during it's initialization, to ensure API conformity.It can be passed to the DataLoader or DataManager as class (argument `sampler_cls`) or as instance (argument `sampler`). ModelsSince the purpose of this framework is to use machine learning algorithms, there has to be a way to define them. Defining models is straight forward. `delira` provides a class `delira.models.AbstractNetwork`. *All models must inherit from this class*.To inherit this class four functions must be implemented in the subclass:* `__init__`* `closure`* `prepare_batch`* `__call__` `__init__`The `__init__`function is a classes constructor. In our case it builds the entire model (maybe using some helper functions). If writing your own custom model, you have to override this method.> **Note:** If you want the best experience for saving your model and completely recreating it during the loading process you need to take care of a few things:> * if using `torchvision.models` to build your model, always import it with `from torchvision import models as t_models`> * register all arguments in your custom `__init__` in the abstract class. A init_prototype could look like this:>```pythondef __init__(self, in_channels: int, n_outputs: int, **kwargs): """ Parameters ---------- in_channels: int number of input_channels n_outputs: int number of outputs (usually same as number of classes) """ register params by passing them as kwargs to parent class __init__ only params registered like this will be saved! super().__init__(in_channels=in_channels, n_outputs=n_outputs, **kwargs)``` `closure`The `closure`function defines one batch iteration to train the network. This function is needed for the framework to provide a generic trainer function which works with all kind of networks and loss functions.The closure function must implement all steps from forwarding, over loss calculation, metric calculation, logging (for which `delira.logging_handlers` provides some extensions for pythons logging module), and the actual backpropagation.It is called with an empty optimizer-dict to evaluate and should thus work with optional optimizers. 
`prepare_batch`The `prepare_batch`function defines the transformation from loaded data to match the networks input and output shape and pushes everything to the right device. Abstract Networks for specific Backends PyTorchAt the time of writing, PyTorch is the only backend which is supported, but other backends are planned.In PyTorch every network should be implemented as a subclass of `torch.nn.Module`, which also provides a `__call__` method.This results in sloghtly different requirements for PyTorch networks: instead of implementing a `__call__` method, we simply call the `torch.nn.Module.__call__` and therefore have to implement the `forward` method, which defines the module's behaviour and is internally called by `torch.nn.Module.__call__` (among other stuff). To give a default behaviour suiting most cases and not have to care about internals, `delira` provides the `AbstractPyTorchNetwork` which is a more specific case of the `AbstractNetwork` for PyTorch modules. `forward`The `forward` function defines what has to be done to forward your input through your network and must return a dictionary. Assuming your network has three convolutional layers stored in `self.conv1`, `self.conv2` and `self.conv3` and a ReLU stored in `self.relu`, a simple `forward` function could look like this:```pythondef forward(self, input_batch: torch.Tensor): out_1 = self.relu(self.conv1(input_batch)) out_2 = self.relu(self.conv2(out_1)) out_3 = self.conv3(out2) return {"pred": out_3}``` `prepare_batch`The default `prepare_batch` function for PyTorch networks looks like this:```python @staticmethod def prepare_batch(batch: dict, input_device, output_device): """ Helper Function to prepare Network Inputs and Labels (convert them to correct type and shape and push them to correct devices) Parameters ---------- batch : dict dictionary containing all the data input_device : torch.device device for network inputs output_device : torch.device device for network outputs Returns ------- dict dictionary containing data in correct type and shape and on correct device """ return_dict = {"data": torch.from_numpy(batch.pop("data")).to( input_device)} for key, vals in batch.items(): return_dict[key] = torch.from_numpy(vals).to(output_device) return return_dict```and can be customized by subclassing the `AbstractPyTorchNetwork`. 
`closure example`A simple closure function for a PyTorch module could look like this:```python @staticmethod def closure(model: AbstractPyTorchNetwork, data_dict: dict, optimizers: dict, criterions={}, metrics={}, fold=0, **kwargs): """ closure method to do a single backpropagation step Parameters ---------- model : :class:`ClassificationNetworkBasePyTorch` trainable model data_dict : dict dictionary containing the data optimizers : dict dictionary of optimizers to optimize model's parameters criterions : dict dict holding the criterions to calculate errors (gradients from different criterions will be accumulated) metrics : dict dict holding the metrics to calculate fold : int Current Fold in Crossvalidation (default: 0) **kwargs: additional keyword arguments Returns ------- dict Metric values (with same keys as input dict metrics) dict Loss values (with same keys as input dict criterions) list Arbitrary number of predictions as torch.Tensor Raises ------ AssertionError if optimizers or criterions are empty or the optimizers are not specified """ assert (optimizers and criterions) or not optimizers, \ "Criterion dict cannot be emtpy, if optimizers are passed" loss_vals = {} metric_vals = {} total_loss = 0 choose suitable context manager: if optimizers: context_man = torch.enable_grad else: context_man = torch.no_grad with context_man(): inputs = data_dict.pop("data") obtain outputs from network preds = model(inputs)["pred"] if data_dict: for key, crit_fn in criterions.items(): _loss_val = crit_fn(preds, *data_dict.values()) loss_vals[key] = _loss_val.detach() total_loss += _loss_val with torch.no_grad(): for key, metric_fn in metrics.items(): metric_vals[key] = metric_fn( preds, *data_dict.values()) if optimizers: optimizers['default'].zero_grad() total_loss.backward() optimizers['default'].step() else: add prefix "val" in validation mode eval_loss_vals, eval_metrics_vals = {}, {} for key in loss_vals.keys(): eval_loss_vals["val_" + str(key)] = loss_vals[key] for key in metric_vals: eval_metrics_vals["val_" + str(key)] = metric_vals[key] loss_vals = eval_loss_vals metric_vals = eval_metrics_vals for key, val in {**metric_vals, **loss_vals}.items(): logging.info({"value": {"value": val.item(), "name": key, "env_appendix": "_%02d" % fold }}) logging.info({'image_grid': {"images": inputs, "name": "input_images", "env_appendix": "_%02d" % fold}}) return metric_vals, loss_vals, preds```> **Note:** This closure is taken from the `delira.models.classification.ClassificationNetworkBasePyTorch` Other examplesIn `delira.models` you can find exemplaric implementations of generative adversarial networks, classification and regression approaches or segmentation networks. Training ParametersTraining-parameters (often called hyperparameters) can be defined in the `delira.training.Parameters` class. 
The class accepts the parameters `batch_size` and `num_epochs` to define the batchsize and the number of epochs to train, the parameters `optimizer_cls` and `optimizer_params` to create an optimizer for training, the parameter `criterions` to specify the training criterions (whose gradients will be accumulated by default), the parameters `lr_sched_cls` and `lr_sched_params` to define the learning rate scheduling and the parameter `metrics` to specify evaluation metrics.Additionally, it is possible to pass an arbitrary number of keyword arguments to the class.It is good practice to create a `Parameters` object at the beginning and then use it for creating other objects which are needed for training, since you can use the class's attributes and changes in hyperparameters only have to be done once:
###Code
import torch
from delira.training import Parameters
from delira.data_loading import RandomSampler, SequentialSampler
params = Parameters(fixed_params={
"model": {},
"training": {
"batch_size": 64, # batchsize to use
"num_epochs": 2, # number of epochs to train
"optimizer_cls": torch.optim.Adam, # optimization algorithm to use
"optimizer_params": {'lr': 1e-3}, # initialization parameters for this algorithm
"criterions": {"CE": torch.nn.CrossEntropyLoss()}, # the loss function
"lr_sched_cls": None, # the learning rate scheduling algorithm to use
"lr_sched_params": {}, # the corresponding initialization parameters
"metrics": {} # and some evaluation metrics
}
})
# recreating the data managers with the batchsize of the params object
manager_train = DataManager(dataset_train, params.nested_get("batch_size"), 1,
transforms=None, sampler_cls=RandomSampler,
n_process_loading=4)
manager_val = DataManager(dataset_val, params.nested_get("batch_size"), 3,
transforms=None, sampler_cls=SequentialSampler,
n_process_loading=4)
###Output
_____no_output_____
###Markdown
TrainerThe `delira.training.NetworkTrainer` class provides functions to train a single network by passing attributes from your parameter object, a `save_freq` to specify how often your model should be saved (`save_freq=1` indicates every epoch, `save_freq=2` every second epoch etc.) and `gpu_ids`. If you don't pass any ids at all, your network will be trained on CPU (and probably take a lot of time). If you specify 1 id, the network will be trained on the GPU with the corresponding index and if you pass multiple `gpu_ids` your network will be trained on multiple GPUs in parallel.> **Note:** The GPU indices are refering to the devices listed in `CUDA_VISIBLE_DEVICES`. E.g if `CUDA_VISIBLE_DEVICES` lists GPUs 3, 4, 5 then gpu_id 0 will be the index for GPU 3 etc.> **Note:** training on multiple GPUs is not recommended for easy and small networks, since for these networks the synchronization overhead is far greater than the parallelization benefit.Training your network might look like this:
###Code
from delira.training import PyTorchNetworkTrainer
from delira.models.classification import ClassificationNetworkBasePyTorch
# path where checkpoints should be saved
save_path = "./results/checkpoints"
model = ClassificationNetworkBasePyTorch(in_channels=1, n_outputs=10)
trainer = PyTorchNetworkTrainer(network=model,
save_path=save_path,
criterions=params.nested_get("criterions"),
optimizer_cls=params.nested_get("optimizer_cls"),
optimizer_params=params.nested_get("optimizer_params"),
metrics=params.nested_get("metrics"),
lr_scheduler_cls=params.nested_get("lr_sched_cls"),
lr_scheduler_params=params.nested_get("lr_sched_params"),
gpu_ids=[0]
)
#trainer.train(params.nested_get("num_epochs"), manager_train, manager_val)
###Output
_____no_output_____
###Markdown
ExperimentThe `delira.training.AbstractExperiment` class needs an experiment name, a path to save it's results to, a parameter object, a model class and the keyword arguments to create an instance of this class. It provides methods to perform a single training and also a method for running a kfold-cross validation. In order to create it, you must choose the `PyTorchExperiment`, which is basically just a subclass of the `AbstractExperiment` to provide a general setup for PyTorch modules. Running an experiment could look like this:
###Code
from delira.training import PyTorchExperiment
from delira.training.train_utils import create_optims_default_pytorch
# Add model parameters to Parameter class
params.fixed.model = {"in_channels": 1, "n_outputs": 10}
experiment = PyTorchExperiment(params=params,
model_cls=ClassificationNetworkBasePyTorch,
name="TestExperiment",
save_path="./results",
optim_builder=create_optims_default_pytorch,
gpu_ids=[0])
experiment.run(manager_train, manager_val)
###Output
_____no_output_____
###Markdown
An `Experiment` is the most abstract (and recommended) way to define, train and validate your network. Logging Previous class and function definitions used pythons's `logging` library. As extensions for this library `delira` provides a package (`delira.logging`) containing handlers to realize different logging methods. To use these handlers simply add them to your logger like this:```pythonlogger.addHandler(logging.StreamHandler())```Nowadays, delira mainly relies on [trixi](https://github.com/MIC-DKFZ/trixi/) for logging and provides only a `MultiStreamHandler` and a `TrixiHandler`, which is a binding to `trixi`'s loggers and integrates them into the python `logging` module `MultiStreamHandler`The `MultiStreamHandler` accepts an arbitrary number of streams during initialization and writes the message to all of it's streams during logging. Logging with `Visdom` - The `trixi` Loggers[`Visdom`](https://github.com/facebookresearch/visdom) is a tool designed to visualize your logs. To use this tool you need to open a port on the machine you want to train on via `visdom -port YOUR_PORTNUMBER` Afterwards just add the handler of your choice to the logger. For more detailed information and customization have a look at [this](https://github.com/facebookresearch/visdom) website.Logging the scalar tensors containing `1`, `2`, `3`, `4` (at the beginning; will increase to show epochwise logging) with the corresponding keys `"one"`, `"two"`, `"three"`, `"four"` and two random images with the keys `"prediction"` and `"groundtruth"` would look like this:
###Code
NUM_ITERS = 4
# import logging handler and logging module
from delira.logging import TrixiHandler
from trixi.logger import PytorchVisdomLogger
import logging
# configure logging module (and root logger)
logger_kwargs = {
'name': 'test_env', # name of logging environment
'port': 9999 # visdom port to connect to
}
logger_cls = PytorchVisdomLogger
# configure logging module (and root logger)
logging.basicConfig(level=logging.INFO,
handlers=[TrixiHandler(logger_cls, **logger_kwargs)])
# derive logger from root logger
# (don't do `logger = logging.Logger("...")` since this will create a new
# logger which is unrelated to the root logger
logger = logging.getLogger("Test Logger")
# create dict containing the scalar numbers as torch.Tensor
scalars = {"one": torch.Tensor([1]),
"two": torch.Tensor([2]),
"three": torch.Tensor([3]),
"four": torch.Tensor([4])}
# create dict containing the images as torch.Tensor
# pytorch awaits tensor dimensionality of
# batchsize x image channels x height x width
images = {"prediction": torch.rand(1, 3, 224, 224),
"groundtruth": torch.rand(1, 3, 224, 224)}
# Simulate 4 Epochs
for i in range(4*NUM_ITERS):
logger.info({"image_grid": {"images": images["prediction"], "name": "predictions"}})
for key, val_tensor in scalars.items():
logger.info({"value": {"value": val_tensor.item(), "name": key}})
scalars[key] += 1
###Output
_____no_output_____ |
utils.ipynb | ###Markdown
Various Utilities Train Test SplitSplit a list of images and their annotations (xmls) into train and validation sets for training
###Code
from sklearn.model_selection import train_test_split
from pathlib import Path
from shutil import move
# sort both listings so each image stays paired with its annotation file
# (glob order is not guaranteed, which would silently misalign X and y)
X = sorted(Path("images").glob("*.jpeg"))
y = sorted(Path("annotations/xmls/").glob("*.xml"))
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
def reorganize_train_val(train_files, val_files, base_folder):
for folder, files in [("train", train_files), ("val", val_files)]:
dest_dir = Path(base_folder)/folder
dest_dir.mkdir(parents=True, exist_ok=True)
for f in files: move(f.as_posix(), dest_dir)
reorganize_train_val(X_train, X_val, "images/")
reorganize_train_val(y_train, y_val, "annotations/xmls/")
###Output
_____no_output_____
###Markdown
XML To CSVCreate a CSV with information of annotations from XML generated by LabelImg app
###Code
from xml_to_csv import xml_to_csv
xml_to_csv("annotations/xmls/train/").to_csv("annotations/train_labels.csv", index=False)
xml_to_csv("annotations/xmls/val/").to_csv("annotations/val_labels.csv", index=False)
###Output
_____no_output_____
###Markdown
Create Preparation Folder
###Code
# NOTE: `files`, `os` and `re` are assumed to be defined/imported in an earlier cell.
for i in files:
    if '#' in i:
        subfiles = os.listdir(os.path.join(i))
        if 'preparation' not in subfiles:
            os.mkdir(os.path.join(i, 'preparation'))
        preparation_folder = os.path.join(i, 'preparation')
        preparation_folder_files = os.listdir(preparation_folder)
        for j in ['session', 'practice']:
            a = []
            for k in preparation_folder_files:
                if re.search(j, k):
                    a.append(k)
            a  # files matching `j`; only inspected interactively here
matched_file  # undefined in this cell -- left over from interactive exploration
###Output
_____no_output_____
###Markdown
Move Files to Preparation Inserting banners
###Code
banner = '''- Book + Lessons [here ↗](https://sotastica.com/reservar)
- Subscribe to my [Blog ↗](https://blog.pythonassembly.com/)
- **Keep in touch** with me on [LinkedIn ↗](https://www.linkedin.com/in/jsulopz)'''
import os
block_folders = os.listdir()
block_folders.sort()
import re
import pandas as pd
df = pd.Series(block_folders)
mask = df.str.match(r'^I')
df = df[mask].values
block = df[0]
block_chapters = [i for i in os.listdir(block) if '#' in i]
chapter = block_chapters[0]
chapter_files = pd.Series(os.listdir(os.path.join(block, chapter)))
chapter_files
mask_files = chapter_files.str.contains('practice|session')
df = chapter_files[mask_files]
df_complete_path = block + '/' + chapter + '/' + df
path_files = df_complete_path.values
path_file = path_files[0]
with open(path_file, 'r+') as f:  # 'rw' is not a valid open() mode; 'r+' gives read/write access
    pass  # banner insertion was left unfinished in the original notebook
###Output
_____no_output_____
###Markdown
###Code
#!pip install twython
import pandas as pd
import re
from twython import Twython
import json
from nltk.stem import WordNetLemmatizer
import nltk
nltk.download('wordnet')
import pickle
def searchTweets(query, result_type='popular', count=1000, lang='en'):
""" returns a dict"""
# Load credentials from json file
with open("/content/drive/MyDrive/Lighthouselabs/Project_Planning/Final_Project/twitter_credentials.json", "r") as file:
creds = json.load(file)
# Instantiate an object
python_tweets = Twython(creds['CONSUMER_KEY'], creds['CONSUMER_SECRET'])
# Create our query
query = {'q': query,
'result_type': result_type,
'count': count,
'lang': lang,
}
# Search tweets
dict_ = {'user': [], 'date': [], 'text': [], 'favorite_count': [], 'location':[]}
for status in python_tweets.search(**query)['statuses']:
dict_['user'].append(status['user']['screen_name'])
dict_['date'].append(status['created_at'])
dict_['text'].append(status['text'])
dict_['favorite_count'].append(status['favorite_count'])
dict_['location'].append(status['user']['location'])
return dict_
# Defining dictionary containing all emojis with their meanings.
emojis = {':)': 'smile', ':-)': 'smile', ';d': 'wink', ':-E': 'vampire', ':(': 'sad',
':-(': 'sad', ':-<': 'sad', ':P': 'raspberry', ':O': 'surprised',
':-@': 'shocked', ':@': 'shocked',':-$': 'confused', ':\\': 'annoyed',
':#': 'mute', ':X': 'mute', ':^)': 'smile', ':-&': 'confused', '$_$': 'greedy',
'@@': 'eyeroll', ':-!': 'confused', ':-D': 'smile', ':-0': 'yell', 'O.o': 'confused',
'<(-_-)>': 'robot', 'd[-_-]b': 'dj', ":'-)": 'sadsmile', ';)': 'wink',
';-)': 'wink', 'O:-)': 'angel','O*-)': 'angel','(:-D': 'gossip', '=^.^=': 'cat'}
## Defining set containing all stopwords in english.
stopwordlist = ['a', 'about', 'above', 'after', 'again', 'ain', 'all', 'am', 'an',
'and','any','are', 'as', 'at', 'be', 'because', 'been', 'before',
'being', 'below', 'between','both', 'by', 'can', 'd', 'did', 'do',
'does', 'doing', 'down', 'during', 'each','few', 'for', 'from',
'further', 'had', 'has', 'have', 'having', 'he', 'her', 'here',
'hers', 'herself', 'him', 'himself', 'his', 'how', 'i', 'if', 'in',
'into','is', 'it', 'its', 'itself', 'just', 'll', 'm', 'ma',
'me', 'more', 'most','my', 'myself', 'now', 'o', 'of', 'on', 'once',
'only', 'or', 'other', 'our', 'ours','ourselves', 'out', 'own', 're',
's', 'same', 'she', "shes", 'should', "shouldve",'so', 'some', 'such',
't', 'than', 'that', "thatll", 'the', 'their', 'theirs', 'them',
'themselves', 'then', 'there', 'these', 'they', 'this', 'those',
'through', 'to', 'too','under', 'until', 'up', 've', 'very', 'was',
'we', 'were', 'what', 'when', 'where','which','while', 'who', 'whom',
'why', 'will', 'with', 'won', 'y', 'you', "youd","youll", "youre",
"youve", 'your', 'yours', 'yourself', 'yourselves']
def preprocess(textdata):
processedText = []
# Create Lemmatizer and Stemmer.
wordLemm = WordNetLemmatizer()
# Defining regex patterns.
urlPattern = r"((http://)[^ ]*|(https://)[^ ]*|( www\.)[^ ]*)"
userPattern = r'@[^\s]+'
alphaPattern = "[^a-zA-Z0-9]"
sequencePattern = r"(.)\1\1+"
seqReplacePattern = r"\1\1"
for tweet in textdata:
tweet = tweet.lower()
# Replace all URls with 'URL'
tweet = re.sub(urlPattern,' URL',tweet)
# Replace all emojis.
for emoji in emojis.keys():
tweet = tweet.replace(emoji, "EMOJI" + emojis[emoji])
# Replace @USERNAME to 'USER'.
tweet = re.sub(userPattern,' USER', tweet)
# Replace all non alphabets.
tweet = re.sub(alphaPattern, " ", tweet)
# Replace 3 or more consecutive letters by 2 letter.
tweet = re.sub(sequencePattern, seqReplacePattern, tweet)
tweetwords = ''
for word in tweet.split():
# Checking if the word is a stopword.
#if word not in stopwordlist:
if len(word)>1:
# Lemmatizing the word.
word = wordLemm.lemmatize(word)
tweetwords += (word+' ')
processedText.append(tweetwords)
return processedText
def load_models():
'''
Replace '..path/' by the path of the saved models.
'''
# Load the vectoriser.
file = open('/content/drive/MyDrive/Lighthouselabs/Project_Planning/Final_Project/vectoriser.pickle', 'rb')
vectoriser = pickle.load(file)
file.close()
# Load the LR Model.
file = open('/content/drive/MyDrive/Lighthouselabs/Project_Planning/Final_Project/LogisticRegression.pickle', 'rb')
LRmodel = pickle.load(file)
file.close()
## Load the BNB Model.
#file = open('/content/drive/MyDrive/Lighthouselabs/Project_Planning/Final_Project/NaiveBayes.pickle', 'rb')
#BNBModel = pickle.load(file)
#file.close()
return vectoriser, LRmodel #, BNBModel
def getConfidence(sentiment, probaScore):
data = []
for i in range(len(sentiment)):
data.append(round(probaScore[i][sentiment[i]]*100,2))
return data
def predict(vectoriser, model, text):
# Predict the sentiment
textdata = vectoriser.transform(preprocess(text))
sentiment = model.predict(textdata)
probaScore = model.predict_proba(textdata)
confidence = getConfidence(sentiment, probaScore)
# Make a list of text with sentiment.
data = []
for text, pred, conf in zip(text, sentiment, confidence):
data.append((text,pred, conf))
# Convert the list into a Pandas DataFrame.
df = pd.DataFrame(data, columns = ['text','sentiment', 'confidence'])
df = df.replace([0,1], ["Negative","Positive"])
return df
def getSentiment(query, result_type='popular', count=1000, lang='en'):
try:
#get tweets
tweets = searchTweets(query=query, result_type=result_type, count=count, lang=lang)
#import model
vectoriser, LRmodel = load_models()
#predict
df = predict(vectoriser, LRmodel, tweets['text'])
return df
except ValueError:
print('No Results')
vectoriser, LRmodel = load_models()
TEXT = ['This text is just a test, i love DataScience!']
pred = predict(vectoriser, LRmodel, TEXT)
pred
import requests
query = 'covid'
parameters = {'q':f'{query}','lang':'en','count':100}
r = requests.get('https://api.twitter.com/1.1/search/tweets.json?', params=parameters)
print(r.url)
!pip install twitter
from twitter import *
import json
from IPython.display import JSON
with open("/content/drive/MyDrive/Lighthouselabs/Project_Planning/Final_Project/twitter_credentials.json", "r") as file:
creds = json.load(file)
t = Twitter(
auth=OAuth(creds['ACCESS_TOKEN'], creds['ACCESS_SECRET'], creds['CONSUMER_KEY'], creds['CONSUMER_SECRET']))
result = t.search.tweets(q="data science", lang = 'en', count = 100, result_type = 'mixed')
len(result['statuses'])
count = result['search_metadata']['count']
# Search tweets
dict_ = {'user': [], 'date': [], 'text': [], 'location':[]}
for i in range(count):
dict_['user'].append(result['statuses'][i]['user']['screen_name'])
dict_['date'].append(result['statuses'][i]['created_at'])
dict_['text'].append(result['statuses'][i]['text'])
dict_['location'].append(result['statuses'][i]['user']['location'])
import pandas as pd
pd.DataFrame(dict_)
def searchTweets(query, count=100, result_type='popular'):
""" returns a dict"""
# Load credentials from json file
with open("/content/drive/MyDrive/Lighthouselabs/Project_Planning/Final_Project/twitter_credentials.json", "r") as file:
creds = json.load(file)
# Instantiate an object
t = Twitter(
auth=OAuth(creds['ACCESS_TOKEN'], creds['ACCESS_SECRET'], creds['CONSUMER_KEY'], creds['CONSUMER_SECRET']))
# Create our query
result = t.search.tweets(q=query, count = count, result_type = result_type, lang = 'en')
# Search tweets
dict_ = {'user': [], 'date': [], 'text': [], 'location':[]}
for i in range(count):
try:
dict_['user'].append(result['statuses'][i]['user']['screen_name'])
dict_['date'].append(result['statuses'][i]['created_at'])
dict_['text'].append(result['statuses'][i]['text'])
dict_['location'].append(result['statuses'][i]['user']['location'])
except (IndexError, KeyError):
continue
return dict_
result = getSentiment('covid')
result
vectoriser, LRmodel = load_models()
textdata = vectoriser.transform(preprocess(['I Hate Python!','I love Python!', 'python is not bad', 'python is not good']))
sentiment = LRmodel.predict(textdata)
probaScore = LRmodel.predict_proba(textdata)
confidence = getConfidence(sentiment, probaScore)
probaScore
sentiment
confidence
getConfidence(sentiment, probaScore)
###Output
_____no_output_____
###Markdown
Utils Module Notebook Utils
###Code
import sys
import io
import ipywidgets
widget_table = {}
def create_text_widget( name, placeholder, default_value="" ):
if name in widget_table:
widget = widget_table[name]
if name not in widget_table:
widget = ipywidgets.Text( description = name, placeholder = placeholder, value=default_value )
widget_table[name] = widget
display(widget)
return widget
class StatusIndicator:
def __init__(self):
self.previous_status = None
self.need_newline = False
def update( self, status ):
if self.previous_status != status:
if self.need_newline:
sys.stdout.write("\n")
sys.stdout.write( status + " ")
self.need_newline = True
self.previous_status = status
else:
sys.stdout.write(".")
self.need_newline = True
sys.stdout.flush()
def end(self):
if self.need_newline:
sys.stdout.write("\n")
###Output
_____no_output_____
###Markdown
Fcst_utils
###Code
import time
import json
import gzip
import boto3
import botocore.exceptions
import pandas as pd
import matplotlib.pyplot as plt
import util.notebook_utils
def wait_till_delete(callback, check_time = 5, timeout = None):
elapsed_time = 0
while timeout is None or elapsed_time < timeout:
try:
out = callback()
except botocore.exceptions.ClientError as e:
# When given the resource not found exception, deletion has occurred
if e.response['Error']['Code'] == 'ResourceNotFoundException':
print('Successful delete')
return
else:
raise
time.sleep(check_time) # units of seconds
elapsed_time += check_time
raise TimeoutError( "Forecast resource deletion timed-out." )
def wait(callback, time_interval = 10):
status_indicator = util.notebook_utils.StatusIndicator()
while True:
status = callback()['Status']
status_indicator.update(status)
if status in ('ACTIVE', 'CREATE_FAILED'): break
time.sleep(time_interval)
status_indicator.end()
return (status=="ACTIVE")
def load_exact_sol(fname, item_id, is_schema_perm=False):
exact = pd.read_csv(fname, header = None)
exact.columns = ['item_id', 'timestamp', 'target']
if is_schema_perm:
exact.columns = ['timestamp', 'target', 'item_id']
return exact.loc[exact['item_id'] == item_id]
def get_or_create_iam_role( role_name ):
iam = boto3.client("iam")
assume_role_policy_document = {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "forecast.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
try:
create_role_response = iam.create_role(
RoleName = role_name,
AssumeRolePolicyDocument = json.dumps(assume_role_policy_document)
)
role_arn = create_role_response["Role"]["Arn"]
print("Created", role_arn)
except iam.exceptions.EntityAlreadyExistsException:
print("The role " + role_name + " exists, ignore to create it")
role_arn = boto3.resource('iam').Role(role_name).arn
print("Attaching policies")
iam.attach_role_policy(
RoleName = role_name,
PolicyArn = "arn:aws:iam::aws:policy/AmazonForecastFullAccess"
)
iam.attach_role_policy(
RoleName=role_name,
PolicyArn='arn:aws:iam::aws:policy/AmazonS3FullAccess',
)
print("Waiting for a minute to allow IAM role policy attachment to propagate")
time.sleep(60)
print("Done.")
return role_arn
def delete_iam_role( role_name ):
iam = boto3.client("iam")
iam.detach_role_policy( PolicyArn = "arn:aws:iam::aws:policy/AmazonS3FullAccess", RoleName = role_name )
iam.detach_role_policy( PolicyArn = "arn:aws:iam::aws:policy/AmazonForecastFullAccess", RoleName = role_name )
iam.delete_role(RoleName=role_name)
def plot_forecasts(fcsts, exact, freq = '1H', forecastHorizon=24, time_back = 80):
p10 = pd.DataFrame(fcsts['Forecast']['Predictions']['p10'])
p50 = pd.DataFrame(fcsts['Forecast']['Predictions']['p50'])
p90 = pd.DataFrame(fcsts['Forecast']['Predictions']['p90'])
pred_int = p50['Timestamp'].apply(lambda x: pd.Timestamp(x))
fcst_start_date = pred_int.iloc[0]
fcst_end_date = pred_int.iloc[-1]
time_int = exact['timestamp'].apply(lambda x: pd.Timestamp(x))
plt.plot(time_int[-time_back:],exact['target'].values[-time_back:], color = 'r')
plt.plot(pred_int, p50['Value'].values, color = 'k')
plt.fill_between(p50['Timestamp'].values,
p10['Value'].values,
p90['Value'].values,
color='b', alpha=0.3);
plt.axvline(x=pd.Timestamp(fcst_start_date), linewidth=3, color='g', ls='dashed')
plt.axvline(x=pd.Timestamp(fcst_end_date), linewidth=3, color='g', ls='dashed')
plt.xticks(rotation=30)
plt.legend(['Target', 'Forecast'], loc = 'lower left')
def extract_gz( src, dst ):
print( f"Extracting {src} to {dst}" )
with open(dst, 'wb') as fd_dst:
with gzip.GzipFile( src, 'rb') as fd_src:
data = fd_src.read()
fd_dst.write(data)
print("Done.")
###Output
_____no_output_____
###Markdown
EDA
###Code
import seaborn as sb
plt.rcParams['figure.figsize'] = (30,20)
sb.set()
sunspots.head()
q, w = zip(*(sorted(distributions_classes.items())))
plt.barh(q, w)
plt.savefig('all_classes.png')
# a, b = zip(distributions_classes.items())
a, b = zip(*distributions_classes1.items())
plt.bar(a, b)
zip(*sorted(distributions_classes.items()))
size_distr = Counter(size for size in sunspots['size'])
r, c = zip(*size_distr.items())
sunspots
ern = [int(e) for e in er]
plt.hist(ern, bins=200);
dist_over_Z = {}
for letter in distributions_classes1:
dist_over_Z[letter] = Counter([class_ for class_ in sunspots['class'] if class_[0]==letter])
dist_over_Z
###Output
_____no_output_____
###Markdown
Utils > Collection of useful functions.
###Code
#hide
from nbdev.showdoc import *
#export
import os
import numpy as np
from typing import Iterable, TypeVar, Generator
from plum import dispatch
from pathlib import Path
from functools import reduce
function = type(lambda: ())
T = TypeVar('T')
###Output
_____no_output_____
###Markdown
Basics
###Code
#export
def identity(x: T) -> T:
"""Indentity function."""
return x
#export
def simplify(x):
"""Return an object of an iterable if it is lonely."""
@dispatch
def _simplify(x):
if callable(x):
try:
return x()
except TypeError:
pass
return x
@dispatch
def _simplify(i: Iterable): return next(i.__iter__()) if len(i) == 1 else i
return _simplify(x)
###Output
_____no_output_____
###Markdown
The simplify function is used to de-nest an iterable with a single element in it, for instance [1], while leaving everything else unchanged. It can also exchange a function for the result of calling it with its default arguments.
###Code
simplify({1})
simplify(simplify)(lambda x='lul': 2*x)
#export
def listify(x, *args):
"""Convert `x` to a `list`."""
if args:
x = (x,) + args
if x is None:
result = []
elif isinstance(x, list): result = x
elif isinstance(x, str) or hasattr(x, "__array__") or hasattr(x, "iloc"):
result = [x]
elif isinstance(x, (Iterable, Generator)):
result = list(x)
else:
result = [x]
return result
###Output
_____no_output_____
###Markdown
What's very convenient is that it leaves lists invariant (it doesn't nest them into a new list).
###Code
listify([1, 2])
listify(1, 2, 3)
#export
def setify(x, *args):
"""Convert `x` to a `set`."""
return set(listify(x, *args))
setify(1, 2, 3)
#export
def tuplify(x, *args):
"""Convert `x` to a `tuple`."""
return tuple(listify(x, *args))
tuplify(1)
#export
def merge_tfms(*tfms):
"""Merge two dictionnaries by stacking common key into list."""
def _merge_tfms(tf1, tf2):
return {
k: simplify(listify(setify(listify(tf1.get(k)) + listify(tf2.get(k)))))
for k in {**tf1, **tf2}
}
return reduce(_merge_tfms, tfms, dict())
merge_tfms(
{'animals': ['cats', 'dog'], 'colors': 'blue'},
{'animals': 'cats', 'colors': 'red', 'OS': 'i use arch btw'}
)
#export
def compose(*functions):
"""Compose an arbitrary number of functions."""
def _compose(fn1, fn2):
return lambda x: fn1(fn2(x))
return reduce(_compose, functions, identity)
#export
def pipe(*functions):
"""Pipe an arbitrary number of functions."""
return compose(*functions[::-1])
#export
def flow(data, *functions):
"""Flow `data` through a list of functions."""
return pipe(*functions)(data)
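# A couple of quick checks of the intended behaviour (the lambdas are throwaway examples):
# compose applies functions right-to-left, while pipe/flow apply them left-to-right.
assert compose(lambda x: x + 1, lambda x: x * 2)(3) == 7  # (3 * 2) + 1
assert flow(3, lambda x: x + 1, lambda x: x * 2) == 8     # (3 + 1) * 2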
###Output
_____no_output_____
###Markdown
File manipulation helper
###Code
#export
def get_files(path, extensions=None, recurse=False, folders=None, followlinks=True):
"""Get all those file names."""
path = Path(path)
folders = listify(folders)
extensions = setify(extensions)
extensions = {e.lower() for e in extensions}
def simple_getter(p, fs, extensions=None):
p = Path(p)
res = [
p / f
for f in fs
if not f.startswith(".")
and ((not extensions) or f'.{f.split(".")[-1].lower()}' in extensions)
]
return res
if recurse:
result = []
for i, (p, d, f) in enumerate(os.walk(path, followlinks=followlinks)):
if len(folders) != 0 and i == 0:
d[:] = [o for o in d if o in folders]
else:
d[:] = [o for o in d if not o.startswith(".")]
if len(folders) != 0 and i == 0 and "." not in folders:
continue
result += simple_getter(p, f, extensions)
else:
f = [o.name for o in os.scandir(path) if o.is_file()]
result = simple_getter(path, f, extensions)
return list(map(str, result))
# export
from fastcore.all import *
@patch
def decompress(self: Path, dest='.'):
pass
#export
@patch
def compress(self: Path, dest='.', keep_copy=True):
pass
#export
def save_array(array, fname, suffix):
"""Save an array with the given name and suffix."""
if not suffix.startswith("."):
suffix = "." + suffix
fname = Path(fname)
return np.save(fname.with_suffix(suffix), array)
def save_dataset(data):
raise NotImplementedError
###Output
_____no_output_____
###Markdown
###Code
from __future__ import print_function
from collections import defaultdict, deque
import datetime
import pickle
import time
import torch
import torch.distributed as dist
import errno
import os
class SmoothedValue(object):
"""Track a series of values and provide access to smoothed values over a
window or the global series average.
"""
def __init__(self, window_size=20, fmt=None):
if fmt is None:
fmt = "{median:.4f} ({global_avg:.4f})"
self.deque = deque(maxlen=window_size)
self.total = 0.0
self.count = 0
self.fmt = fmt
def update(self, value, n=1):
self.deque.append(value)
self.count += n
self.total += value * n
def synchronize_between_processes(self):
"""
Warning: does not synchronize the deque!
"""
if not is_dist_avail_and_initialized():
return
t = torch.tensor([self.count, self.total], dtype=torch.float64, device='cuda')
dist.barrier()
dist.all_reduce(t)
t = t.tolist()
self.count = int(t[0])
self.total = t[1]
@property
def median(self):
d = torch.tensor(list(self.deque))
return d.median().item()
@property
def avg(self):
d = torch.tensor(list(self.deque), dtype=torch.float32)
return d.mean().item()
@property
def global_avg(self):
return self.total / self.count
@property
def max(self):
return max(self.deque)
@property
def value(self):
return self.deque[-1]
def __str__(self):
return self.fmt.format(
median=self.median,
avg=self.avg,
global_avg=self.global_avg,
max=self.max,
value=self.value)
def all_gather(data):
"""
Run all_gather on arbitrary picklable data (not necessarily tensors)
Args:
data: any picklable object
Returns:
list[data]: list of data gathered from each rank
"""
world_size = get_world_size()
if world_size == 1:
return [data]
# serialized to a Tensor
buffer = pickle.dumps(data)
storage = torch.ByteStorage.from_buffer(buffer)
tensor = torch.ByteTensor(storage).to("cuda")
# obtain Tensor size of each rank
local_size = torch.tensor([tensor.numel()], device="cuda")
size_list = [torch.tensor([0], device="cuda") for _ in range(world_size)]
dist.all_gather(size_list, local_size)
size_list = [int(size.item()) for size in size_list]
max_size = max(size_list)
# receiving Tensor from all ranks
# we pad the tensor because torch all_gather does not support
# gathering tensors of different shapes
tensor_list = []
for _ in size_list:
tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device="cuda"))
if local_size != max_size:
padding = torch.empty(size=(max_size - local_size,), dtype=torch.uint8, device="cuda")
tensor = torch.cat((tensor, padding), dim=0)
dist.all_gather(tensor_list, tensor)
data_list = []
for size, tensor in zip(size_list, tensor_list):
buffer = tensor.cpu().numpy().tobytes()[:size]
data_list.append(pickle.loads(buffer))
return data_list
def reduce_dict(input_dict, average=True):
"""
Args:
input_dict (dict): all the values will be reduced
average (bool): whether to do average or sum
Reduce the values in the dictionary from all processes so that all processes
have the averaged results. Returns a dict with the same fields as
input_dict, after reduction.
"""
world_size = get_world_size()
if world_size < 2:
return input_dict
with torch.no_grad():
names = []
values = []
# sort the keys so that they are consistent across processes
for k in sorted(input_dict.keys()):
names.append(k)
values.append(input_dict[k])
values = torch.stack(values, dim=0)
dist.all_reduce(values)
if average:
values /= world_size
reduced_dict = {k: v for k, v in zip(names, values)}
return reduced_dict
class MetricLogger(object):
def __init__(self, delimiter="\t"):
self.meters = defaultdict(SmoothedValue)
self.delimiter = delimiter
def update(self, **kwargs):
for k, v in kwargs.items():
if isinstance(v, torch.Tensor):
v = v.item()
assert isinstance(v, (float, int))
self.meters[k].update(v)
def __getattr__(self, attr):
if attr in self.meters:
return self.meters[attr]
if attr in self.__dict__:
return self.__dict__[attr]
raise AttributeError("'{}' object has no attribute '{}'".format(
type(self).__name__, attr))
def __str__(self):
loss_str = []
for name, meter in self.meters.items():
loss_str.append(
"{}: {}".format(name, str(meter))
)
return self.delimiter.join(loss_str)
def synchronize_between_processes(self):
for meter in self.meters.values():
meter.synchronize_between_processes()
def add_meter(self, name, meter):
self.meters[name] = meter
def log_every(self, iterable, print_freq, header=None):
i = 0
if not header:
header = ''
start_time = time.time()
end = time.time()
iter_time = SmoothedValue(fmt='{avg:.4f}')
data_time = SmoothedValue(fmt='{avg:.4f}')
space_fmt = ':' + str(len(str(len(iterable)))) + 'd'
if torch.cuda.is_available():
log_msg = self.delimiter.join([
header,
'[{0' + space_fmt + '}/{1}]',
'eta: {eta}',
'{meters}',
'time: {time}',
'data: {data}',
'max mem: {memory:.0f}'
])
else:
log_msg = self.delimiter.join([
header,
'[{0' + space_fmt + '}/{1}]',
'eta: {eta}',
'{meters}',
'time: {time}',
'data: {data}'
])
MB = 1024.0 * 1024.0
for obj in iterable:
data_time.update(time.time() - end)
yield obj
iter_time.update(time.time() - end)
if i % print_freq == 0 or i == len(iterable) - 1:
eta_seconds = iter_time.global_avg * (len(iterable) - i)
eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))
if torch.cuda.is_available():
print(log_msg.format(
i, len(iterable), eta=eta_string,
meters=str(self),
time=str(iter_time), data=str(data_time),
memory=torch.cuda.max_memory_allocated() / MB))
else:
print(log_msg.format(
i, len(iterable), eta=eta_string,
meters=str(self),
time=str(iter_time), data=str(data_time)))
i += 1
end = time.time()
total_time = time.time() - start_time
total_time_str = str(datetime.timedelta(seconds=int(total_time)))
print('{} Total time: {} ({:.4f} s / it)'.format(
header, total_time_str, total_time / len(iterable)))
def collate_fn(batch):
return tuple(zip(*batch))
def warmup_lr_scheduler(optimizer, warmup_iters, warmup_factor):
def f(x):
if x >= warmup_iters:
return 1
alpha = float(x) / warmup_iters
return warmup_factor * (1 - alpha) + alpha
return torch.optim.lr_scheduler.LambdaLR(optimizer, f)
def mkdir(path):
try:
os.makedirs(path)
except OSError as e:
if e.errno != errno.EEXIST:
raise
def setup_for_distributed(is_master):
"""
This function disables printing when not in master process
"""
import builtins as __builtin__
builtin_print = __builtin__.print
def print(*args, **kwargs):
force = kwargs.pop('force', False)
if is_master or force:
builtin_print(*args, **kwargs)
__builtin__.print = print
def is_dist_avail_and_initialized():
if not dist.is_available():
return False
if not dist.is_initialized():
return False
return True
def get_world_size():
if not is_dist_avail_and_initialized():
return 1
return dist.get_world_size()
def get_rank():
if not is_dist_avail_and_initialized():
return 0
return dist.get_rank()
def is_main_process():
return get_rank() == 0
def save_on_master(*args, **kwargs):
if is_main_process():
torch.save(*args, **kwargs)
def init_distributed_mode(args):
if 'RANK' in os.environ and 'WORLD_SIZE' in os.environ:
args.rank = int(os.environ["RANK"])
args.world_size = int(os.environ['WORLD_SIZE'])
args.gpu = int(os.environ['LOCAL_RANK'])
elif 'SLURM_PROCID' in os.environ:
args.rank = int(os.environ['SLURM_PROCID'])
args.gpu = args.rank % torch.cuda.device_count()
else:
print('Not using distributed mode')
args.distributed = False
return
args.distributed = True
torch.cuda.set_device(args.gpu)
args.dist_backend = 'nccl'
print('| distributed init (rank {}): {}'.format(
args.rank, args.dist_url), flush=True)
torch.distributed.init_process_group(backend=args.dist_backend, init_method=args.dist_url,
world_size=args.world_size, rank=args.rank)
torch.distributed.barrier()
setup_for_distributed(args.rank == 0)
###Output
_____no_output_____
###Markdown
This contains general utilities used in different modules Import
###Code
# export
import hashlib
import json
import math
import os
import re
import time
import warnings
from pathlib import Path
import ipykernel
import nbdev.export
import numpy as np
import requests
import seaborn as sns
import torch
from IPython.display import Javascript, display
from notebook.notebookapp import list_running_servers
from PIL import Image
from shapely.geometry import Point
from shapely.geometry.polygon import Polygon
from torchvision import transforms
from IPython.core.debugger import set_trace
###Output
_____no_output_____
###Markdown
numpy/torch conversion stuff In general I'd like to have functions which work for both numpy and torch since the APIs aren't exactly the same. The approach I've taken is to write the function in `torch` (if possible) and then add a function decorator `numpyify` to allow it to work with `numpy` arrays. This approach is far from perfect but it seems to work for most of my cases. `args_loop` assumes arguments are grouped in tuples or dictionaries, which is mostly true as `*args` and `**kwargs` are tuples and dictionaries, respectively. This will loop over and apply `callback` to each argument. Again, not perfect, but it seems to work for most cases.
###Code
# export
def args_loop(args, callback):
if isinstance(args, tuple): return tuple(args_loop(arg, callback) for arg in args)
elif isinstance(args, dict): return {key: args_loop(arg, callback) for key,arg in args.items()}
else: return callback(args)
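# Small check: the callback is applied to every argument while the (tuple, dict)
# structure itself is left intact.
assert args_loop((1, {'a': 2, 'b': (3, 4)}), lambda x: x * 10) == (10, {'a': 20, 'b': (30, 40)})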
###Output
_____no_output_____
###Markdown
`Formatter` is a base class which will format input arguments; it also keeps track of whether any arguments were successfully formatted. Again, not perfect but it works in most cases.
###Code
# export
class Formatter():
def __init__(self): self.formatted = False
def predicate(self, arg): raise NotImplementedError('Please implement predicate')
def formatter(self, arg): raise NotImplementedError('Please implement formatter')
def callback(self, arg):
if self.predicate(arg):
self.formatted = True
return self.formatter(arg)
else:
return arg
def __call__(self, args):
self.formatted = False
return args_loop(args, self.callback)
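# Minimal example subclass (the names here are just for illustration): cast Python ints
# to floats and record whether anything was actually formatted.
class Int2float(Formatter):
    def predicate(self, arg): return isinstance(arg, int)
    def formatter(self, arg): return float(arg)

int2float = Int2float()
assert int2float((1, {'a': 2.5})) == (1.0, {'a': 2.5})
assert int2float.formatted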
###Output
_____no_output_____
###Markdown
`torch2np` will convert arguments from torch tensors to numpy arrays. It should work for all cases (tensors which require gradients and cuda tensors) and will use the same underlying data and `dtype`. NOTE: The `formatted` flag for `torch2np` is not thread safe; calling `torch2np = Torch2np()` per invocation should make it safe
###Code
# export
class Torch2np(Formatter):
def predicate(self, arg): return isinstance(arg, torch.Tensor)
def formatter(self, arg): return arg.detach().cpu().numpy()
torch2np = Torch2np()
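# Small check: tensors become numpy arrays, everything else passes through untouched.
out = torch2np((torch.ones(2), {'x': 3}))
assert isinstance(out[0], np.ndarray) and out[1]['x'] == 3
assert torch2np.formatted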
###Output
_____no_output_____
###Markdown
`np2torch` will convert to a tensor with the same `dtype`. It's not as general as `torch2np` since the output tensor might need to be on the gpu. NOTE: The `formatted` flag for `np2torch` is not thread safe; calling `np2torch = Np2torch()` per invocation should make it safe
###Code
# export
class Np2torch(Formatter):
def predicate(self, arg): return isinstance(arg, np.ndarray)
def formatter(self, arg): return torch.from_numpy(arg)
np2torch = Np2torch()
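# Small check: numpy arrays become tensors with the same dtype.
out = np2torch(np.ones(2, dtype=np.float32))
assert isinstance(out, torch.Tensor) and out.dtype == torch.float32
assert np2torch.formatted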
###Output
_____no_output_____
###Markdown
`numpyify` decorator will allow functions designed for torch tensors to also work for numpy arrays. NOTE: this will fail if the input does not contain a `torch.tensor` (i.e. like the `eye` function, which takes an integer and returns a tensor). To ensure this works, one of the inputs must be a `torch.tensor`.
###Code
# export
def numpyify(f):
def _numpyify(*args, **kwargs):
np2torch = Np2torch() # For thread safety, make a local instantiation
args, kwargs = np2torch((args, kwargs))
out = f(*args, **kwargs)
if np2torch.formatted: out = torch2np(out) # Convert back
return out
return _numpyify
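# Sketch of what the decorator buys us (`_double` is a throwaway example): a torch-only
# function now accepts numpy arrays and hands back numpy arrays, while torch inputs
# still give torch outputs.
@numpyify
def _double(x): return x * 2
assert isinstance(_double(np.ones(3)), np.ndarray)
assert isinstance(_double(torch.ones(3)), torch.Tensor)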
###Output
_____no_output_____
###Markdown
Tests `assert_allclose` checks if two things, `A` and `B`, are close to each other. I use `np.allclose` because it's more robust than `torch.allclose` (i.e. numpy's version will work with datatypes other than `np.array`s, like `bool`s and `int`s). NOTE: I'm assuming the format of the inputs is the same; if not, I'm assuming this is programmer error.
###Code
# export
def _assert_allclose(A, B, **kwargs):
if isinstance(A, tuple):
for a,b in zip(A,B): _assert_allclose(a, b, **kwargs) # Possibly add "strict" keyword here
elif isinstance(A, dict):
for key in A.keys() | B.keys(): _assert_allclose(A[key], B[key], **kwargs)
else:
try: assert(np.allclose(A, B, **kwargs))
except: assert(np.all(A == B))
# export
def assert_allclose(A, B, **kwargs): _assert_allclose(*torch2np((A, B)), **kwargs)
A = torch.rand((4,3))
B = np.random.normal(size=(4,3))
A, B
assert_allclose(A, A+1e-5, atol=1e-5)
assert_allclose((A, (B, {'test': 1.})), (A+1e-5, (B+1e-5, {'test': 1 + 1e-5})), atol=1e-5)
###Output
_____no_output_____
###Markdown
Since we have multiple functions that should work for torch and numpy, we should have a way to test for both without having to write duplicate tests.
###Code
# export
def assert_allclose_f(f, x, y, **kwargs):
if not isinstance(x, tuple): x = (x,)
assert_allclose(f(*x), y, **kwargs)
# export
def assert_allclose_f_ttn(f, x, y, **kwargs): # ttn == "torch, then numpy"
torch2np = Torch2np()
assert_allclose_f(f, x, y, **kwargs) # Torch test
x, y = torch2np((x, y))
assert(torch2np.formatted) # Make sure something was converted
assert_allclose_f(f, x, y, **kwargs) # Numpy test
###Output
_____no_output_____
###Markdown
General For some reason I can't find a built-in that will reverse a list and return it without going through some iterable dance. TODO: get this to work for torch tensors
###Code
# export
def reverse(A): return A[::-1]
assert_allclose(reverse(['a', 'b', 'c']), ['c', 'b', 'a'])
###Output
_____no_output_____
###Markdown
`shape` returns the tensor shape as a tensor of the same type as the input tensor. Convenient if you need to do arithmetic based on the size of the tensor.
###Code
# export
@numpyify
def shape(A, dtype=None):
if dtype is None: dtype = A.dtype
return A.new_tensor(A.shape, dtype=dtype)
A = torch.rand(3,4)
assert_allclose_f_ttn(shape, A, torch.FloatTensor([3, 4]))
# export
@numpyify
def stackify(A, dim=0):
if isinstance(A, tuple): return torch.stack([stackify(a, dim) for a in A], dim)
return A
x = (tuple(torch.FloatTensor([1,2])), tuple(torch.FloatTensor([1,2])))
assert_allclose_f_ttn(stackify, (x,), torch.FloatTensor([[1, 2],[1, 2]]))
###Output
_____no_output_____
###Markdown
PyTorch doesn't have an equivalent of `np.delete`. NOTE: temporarily not `numpyified` because I need `np.array(,dtype=np.object)` to work, since torch does not support jagged/nested tensors yet.
###Code
# export
def delete(A, idx_delete):
idx = torch.ones(len(A), dtype=torch.bool)
idx[idx_delete] = False
return A[idx]
A = torch.FloatTensor([[1,2,3],
[4,5,6],
[7,8,9]])
assert_allclose_f_ttn(delete, (A, 1), torch.FloatTensor([[1, 2, 3],
[7, 8, 9]]))
# export
def rescale(A, r1, r2): return (A-r1[0])/(r1[1]-r1[0])*(r2[1]-r2[0])+r2[0]
A = torch.FloatTensor([1,2])
assert_allclose_f_ttn(rescale, (A, torch.FloatTensor([1,2]), torch.FloatTensor([2,4])),
torch.FloatTensor([2, 4]))
###Output
_____no_output_____
###Markdown
Points/Vectors Kind of hard to keep a clean distinction between points and vectors even though they technically aren't the same thing. General point stuff `singlify` decorator will allow functions designed for multiple points to work for single point inputs. Points should have a shape of `(N, [2,3])` whereas a single point will have a shape of `([2,3])`, so it's convenient to just define the function for multiple points and use the decorator so it will also work for a single point. NOTE: first argument must be `ps`
###Code
# export
def singlify(f):
def _singlify(ps, *args, **kwargs):
single = len(ps.shape) == 1
if single: ps = ps[None]
ps = f(ps, *args, **kwargs)
if single: ps = ps[0]
return ps
return _singlify
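# Quick check with a throwaway points function (`_shift_x` is just for illustration):
# the same code handles a batch of points with shape (N, 2) and a single point with shape (2,).
@singlify
def _shift_x(ps): return ps + ps.new_tensor([1., 0.])
assert _shift_x(torch.zeros(2)).shape == (2,)
assert _shift_x(torch.zeros(4, 2)).shape == (4, 2)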
###Output
_____no_output_____
###Markdown
`augment` will add ones to points; useful for affine and homography xforms
###Code
# export
@numpyify
@singlify
def augment(ps): return torch.cat([ps, ps.new_ones((len(ps), 1))], dim=1)
ps = torch.FloatTensor([[0.1940, 0.2536],
[0.2172, 0.1626],
[0.9834, 0.2700],
[0.5324, 0.7137]])
assert_allclose_f_ttn(augment, ps, torch.FloatTensor([[0.1940, 0.2536, 1.0000],
[0.2172, 0.1626, 1.0000],
[0.9834, 0.2700, 1.0000],
[0.5324, 0.7137, 1.0000]]))
###Output
_____no_output_____
###Markdown
`deaugment` will remove last column; might wanna add check to make sure column contains ones
###Code
# export
@singlify
def deaugment(ps): return ps[:, 0:-1]
ps = torch.FloatTensor([[0.1940, 0.2536, 1.0000],
[0.2172, 0.1626, 1.0000],
[0.9834, 0.2700, 1.0000],
[0.5324, 0.7137, 1.0000]])
assert_allclose_f_ttn(deaugment, ps, torch.FloatTensor([[0.1940, 0.2536],
[0.2172, 0.1626],
[0.9834, 0.2700],
[0.5324, 0.7137]]))
###Output
_____no_output_____
###Markdown
`normalize` will divide by last column and remove it
###Code
# export
@singlify
def normalize(ps): return deaugment(ps/ps[:, [-1]])
ps = torch.FloatTensor([[0.1940, 0.2536, 2.0000],
[0.2172, 0.1626, 3.0000],
[0.9834, 0.2700, 4.0000],
[0.5324, 0.7137, 5.0000]])
assert_allclose_f_ttn(normalize, ps, torch.FloatTensor([[0.0970, 0.1268],
[0.0724, 0.0542],
[0.2458, 0.0675],
[0.1065, 0.1427]]), atol=1e-4)
###Output
_____no_output_____
###Markdown
Bounding box stuff `ps_bb` is points bounding box
###Code
# export
@numpyify
def ps_bb(ps): return stackify((torch.min(ps, dim=0).values, torch.max(ps, dim=0).values))
ps = torch.FloatTensor([[0.1940, 0.2536],
[0.2172, 0.1626],
[0.9834, 0.2700],
[0.5324, 0.7137]])
assert_allclose_f_ttn(ps_bb, ps, torch.FloatTensor([[0.194 , 0.1626],
[0.9834, 0.7137]]))
###Output
_____no_output_____
###Markdown
`array_bb` is array bounding box
###Code
# export
@numpyify
def array_bb(arr, dtype=None):
if dtype is None: dtype = arr.dtype
return arr.new_tensor([[0,0], [arr.shape[1]-1, arr.shape[0]-1]], dtype=dtype)
arr = torch.rand(5,4)
assert_allclose_f_ttn(array_bb, arr, torch.FloatTensor([[0, 0],
[3, 4]]))
###Output
_____no_output_____
###Markdown
`bb_sz` returns the size of a bounding box
###Code
# export
@numpyify
def bb_sz(bb): return stackify((bb[1,1]-bb[0,1]+1, bb[1,0]-bb[0,0]+1))
bb = torch.FloatTensor([[0,0],[5,4]])
assert_allclose_f_ttn(bb_sz, bb, torch.FloatTensor([5, 6]))
###Output
_____no_output_____
###Markdown
`bb_grid` is bounding box grid; i,j is swapped to x,y
###Code
# export
@numpyify
def bb_grid(bb, dtype=None):
if dtype is None: dtype = bb.dtype
return stackify(reverse(torch.meshgrid(torch.arange(bb[0,1], bb[1,1]+1, dtype=dtype, device=bb.device),
torch.arange(bb[0,0], bb[1,0]+1, dtype=dtype, device=bb.device))))
bb = torch.FloatTensor([[0,0],[2,1]])
assert_allclose_f_ttn(bb_grid, bb, torch.FloatTensor([[[0, 1, 2],
[0, 1, 2]],
[[0, 0, 0],
[1, 1, 1]]]))
###Output
_____no_output_____
###Markdown
`bb_array` applies bounding box to array and returns the sub array
###Code
# export
@numpyify
def bb_array(arr, bb):
bb = bb.long() # Must be long for indexing; cannot round either incase input bb is type long
return arr[bb[0,1]:bb[1,1]+1, bb[0,0]:bb[1,0]+1]
arr = torch.FloatTensor([[1,2,3],[4,5,6],[7,8,9]])
bb = torch.FloatTensor([[1,1],[2,2]])
assert_allclose_f_ttn(bb_array, (arr, bb), torch.FloatTensor([[5, 6],[8, 9]]))
# export
def is_p_in_bb(p, bb): return p[0] >= bb[0,0] and p[1] >= bb[0,1] and p[0] <= bb[1,0] and p[1] <= bb[1,1]
p1 = torch.FloatTensor([0.0, 0.0])
p2 = torch.FloatTensor([1.5, 1.5])
bb = torch.FloatTensor([[1,1], [2,2]])
assert_allclose_f_ttn(is_p_in_bb, (p1, bb), False)
assert_allclose_f_ttn(is_p_in_bb, (p2, bb), True)
# export
def is_bb_in_bb(bb1, bb2): return is_p_in_bb(bb1[0], bb2) and is_p_in_bb(bb1[1], bb2)
bb1 = torch.FloatTensor([[1,1],[5,5]])
bb2 = torch.FloatTensor([[2,2],[4,4]])
bb3 = torch.FloatTensor([[8,8],[9,9]])
assert_allclose_f_ttn(is_bb_in_bb, (bb2, bb1), True)
assert_allclose_f_ttn(is_bb_in_bb, (bb3, bb1), False)
###Output
_____no_output_____
###Markdown
Boundary stuff
###Code
# export
def is_p_in_b(p, b): return Polygon(b).contains(Point(*p))
b = torch.FloatTensor([[0,0],[0,1],[1,1],[1,0]])
p1 = torch.FloatTensor([0.5, 0.5])
p2 = torch.FloatTensor([1.5, 1.5])
assert_allclose_f_ttn(is_p_in_b, (p1, b), True)
assert_allclose_f_ttn(is_p_in_b, (p2, b), False)
# export
def bb2b(bb): return bb[[[0,0],[0,1],[1,1],[1,0]],
[[0,0],[0,1],[0,1],[0,0]]]
bb = torch.FloatTensor([[1,1],[5,5]])
assert_allclose_f_ttn(bb2b, bb, torch.FloatTensor([[1., 1.],
[1., 5.],
[5., 5.],
[5., 1.]]))
###Output
_____no_output_____
###Markdown
Point geometries `grid2ps` converts grid to points
###Code
# export
@numpyify
def grid2ps(X, Y, order='row'):
if order == 'row': return stackify((X.flatten(), Y.flatten()), dim=1)
elif order == 'col': return grid2ps(X.T, Y.T, order='row')
else: raise RuntimeError(f'Unrecognized option: {order}')
X = torch.FloatTensor([[1,2],[1,2]])
Y = torch.FloatTensor([[1,1],[2,2]])
assert_allclose_f_ttn(grid2ps, (X, Y, 'row'), torch.FloatTensor([[1, 1],
[2, 1],
[1, 2],
[2, 2]]))
assert_allclose_f_ttn(grid2ps, (X, Y, 'col'), torch.FloatTensor([[1, 1],
[1, 2],
[2, 1],
[2, 2]]))
# export
@numpyify
def array_ps(arr, dtype=None):
if dtype is None: dtype = arr.dtype
return grid2ps(*bb_grid(array_bb(arr), dtype))
arr = torch.rand(3,2)
assert_allclose_f_ttn(array_ps, arr, torch.FloatTensor([[0, 0],
[1, 0],
[0, 1],
[1, 1],
[0, 2],
[1, 2]]))
###Output
_____no_output_____
###Markdown
`crrgrid` is a centered rectangular rectangle grid. NOTE: not yet numpyified
###Code
# export
def crrgrid(num_h, num_w, spacing_h, spacing_w, dtype, device=None):
h, w = spacing_h*(num_h-1), spacing_w*(num_w-1)
y = torch.linspace(-h/2, h/2, int(num_h), dtype=dtype, device=device)
x = torch.linspace(-w/2, w/2, int(num_w), dtype=dtype, device=device)
return grid2ps(*reverse(torch.meshgrid(y, x)), 'col')
assert_allclose_f_ttn(crrgrid, (2, 2, 1, 1, torch.float), torch.FloatTensor([[-0.5000, -0.5000],
[-0.5000, 0.5000],
[ 0.5000, -0.5000],
[ 0.5000, 0.5000]]))
###Output
_____no_output_____
###Markdown
`csrgrid` is a centered square rectangle grid. NOTE: not yet numpyified
###Code
# export
def csrgrid(num_h, num_w, spacing, dtype, device=None):
return crrgrid(num_h, num_w, spacing, spacing, dtype, device)
assert_allclose_f_ttn(csrgrid, (2, 2, 1, torch.float), torch.FloatTensor([[-0.5, -0.5],
[-0.5, 0.5],
[ 0.5, -0.5],
[ 0.5, 0.5]]))
###Output
_____no_output_____
###Markdown
`csdgrid` is a centered square diamond grid. NOTE: not yet numpyified
###Code
# export
def csdgrid(num_h, num_w, spacing, fo, dtype, device=None):
h, w = spacing*(num_h-1), spacing*(num_w-1)
xs_grid = torch.linspace(-w/2, w/2, int(num_w), dtype=dtype, device=device)
ys_grid = torch.linspace(-h/2, h/2, int(num_h), dtype=dtype, device=device)
ps = []
for x_grid in xs_grid:
if fo: ys, fo = ys_grid[0::2], False
else: ys, fo = ys_grid[1::2], True
xs = torch.full((len(ys),), x_grid, dtype=dtype, device=device)
ps.append(stackify((xs, ys), dim=1))
return torch.cat(ps)
assert_allclose_f_ttn(csdgrid, (5, 4, 1, True, torch.float), torch.FloatTensor([[-1.5, -2],
[-1.5, 0],
[-1.5, 2],
[-0.5, -1],
[-0.5, 1],
[ 0.5, -2],
[ 0.5, 0],
[ 0.5, 2],
[ 1.5, -1],
[ 1.5, 1]]))
###Output
_____no_output_____
###Markdown
`cfpgrid` is a centered four point grid. NOTE: not yet numpyified
###Code
# export
def cfpgrid(h, w, dtype, device=None): return crrgrid(2, 2, h, w, dtype, device)
assert_allclose_f_ttn(cfpgrid, (2 ,2, torch.float), torch.FloatTensor([[-1, -1],
[-1, 1],
[ 1, -1],
[ 1, 1]]))
###Output
_____no_output_____
###Markdown
General vector stuff `unitize` will make the norm of each vector 1. NOTE: Handling of the zero vector might be a little tricky
###Code
# export
@numpyify
@singlify
def unitize(vs): return vs/torch.norm(vs, dim=1, keepdim=True)
vs = torch.FloatTensor([[0.1940, 0.2536],
[0.2172, 0.1626],
[0.9834, 0.2700],
[0.5324, 0.7137]])
assert_allclose_f_ttn(unitize, vs, torch.FloatTensor([[0.6075896, 0.7942511],
[0.8005304, 0.5992921],
[0.9643143, 0.2647598],
[0.5979315, 0.8015471]]))
# export
@numpyify
def cross_mat(v):
zero = v.new_tensor(0)
return stackify((( zero, -v[2], v[1]),
( v[2], zero, -v[0]),
(-v[1], v[0], zero)))
v = torch.FloatTensor([1,2,3])
assert_allclose_f_ttn(cross_mat, v, torch.FloatTensor([[ 0,-3, 2],
[ 3, 0,-1],
[-2, 1, 0]]))
assert_allclose(cross_mat(v)@v, 0)
###Output
_____no_output_____
###Markdown
Transforms `pmm` is point matrix multiplication
###Code
# export
@singlify
def pmm(ps, A, aug=False):
if aug: ps = augment(ps)
ps = ([email protected]).T
if aug: ps = normalize(ps) # works for both affine and homography transforms
return ps
ps = torch.FloatTensor([[0.1940, 0.2536],
[0.2172, 0.1626],
[0.9834, 0.2700],
[0.5324, 0.7137]])
A = torch.FloatTensor([[0.9571, 0.5551],
[0.8914, 0.2626]])
assert_allclose_f_ttn(pmm, (ps, A), torch.FloatTensor([[0.3265, 0.2395],
[0.2981, 0.2363],
[1.0911, 0.9475],
[0.9057, 0.6620]]), atol=1e-4)
###Output
_____no_output_____
###Markdown
Spherical coordinates
###Code
@numpyify
@singlify
def cart2spherical(ps):
x, y, z = ps[:,0], ps[:,1], ps[:,2]
r = torch.sqrt(x**2 + y**2 + z**2)
theta = torch.atan2(torch.sqrt(x**2 + y**2), z)
phi = torch.atan2(y, x)
return stackify((r, theta, phi), 1)
ps = torch.FloatTensor([[0.4082785 , 0.1256695 , 0.54343185],
[0.93478803, 0.40557636, 0.40224384],
[0.48831708, 0.82743735, 0.68537884]])
assert_allclose_f_ttn(cart2spherical, ps, torch.FloatTensor([[0.69123247, 0.66619615, 0.29860039],
[1.09550032, 1.19482283, 0.40935945],
[1.18019079, 0.95116431, 1.03764654]]))
@numpyify
@singlify
def spherical2cart(ps):
r, theta, phi = ps[:,0], ps[:,1], ps[:,2]
x = r*torch.sin(theta)*torch.cos(phi)
y = r*torch.sin(theta)*torch.sin(phi)
z = r*torch.cos(theta)
return stackify((x, y, z), dim=1)
ps = torch.FloatTensor([[0.69123247, 0.66619615, 0.29860039],
[1.09550032, 1.19482283, 0.40935945],
[1.18019079, 0.95116431, 1.03764654]])
assert_allclose_f_ttn(spherical2cart, ps, torch.FloatTensor([[0.4082785 , 0.1256695 , 0.5434318 ],
[0.9347881 , 0.4055764 , 0.40224388],
[0.4883171 , 0.82743734, 0.68537885]]))
ps = torch.rand(4,3)
ps
assert_allclose(ps, spherical2cart(cart2spherical(ps)))
###Output
_____no_output_____
###Markdown
Point conditioning `condition_mat` is typically used to "condition" points to improve conditioning; its inverse is usually applied afterwards. It sets the mean of the points to zero and the average distance to `sqrt(2)`. I use the term "condition" here so I don't get confused with "normalization" which is used above.
###Code
# export
@numpyify
def condition_mat(ps):
zero, one = ps.new_tensor(0), ps.new_tensor(1)
xs, ys = ps[:, 0], ps[:, 1]
mean_x, mean_y = xs.mean(), ys.mean()
s_m = math.sqrt(2)*len(ps)/(torch.sqrt((xs-mean_x)**2+(ys-mean_y)**2)).sum()
return stackify((( s_m, zero, -mean_x*s_m),
(zero, s_m, -mean_y*s_m),
(zero, zero, one)))
ps = torch.FloatTensor([[0.5466, 0.0889],
[0.1493, 0.6591],
[0.5600, 0.0352],
[0.7287, 0.5892]])
assert_allclose_f_ttn(condition_mat, ps, torch.FloatTensor([[ 4.0950, 0.0000, -2.0317],
[ 0.0000, 4.0950, -1.4050],
[ 0.0000, 0.0000, 1.0000]]), atol=1e-4)
# export
def condition(ps):
T = condition_mat(ps)
return pmm(ps, T, aug=True), T
ps = torch.FloatTensor([[0.5466, 0.0889],
[0.1493, 0.6591],
[0.5600, 0.0352],
[0.7287, 0.5892]])
ps_cond, T = condition(ps)
assert_allclose(ps_cond.mean(), 0, atol=1e-6)
assert_allclose(ps_cond.norm(dim=1).mean(), math.sqrt(2), atol=1e-6)
###Output
_____no_output_____
###Markdown
Homographies `homography` estimates a homography between two sets of points
###Code
# export
@numpyify
def homography(ps1, ps2):
# Condition and augment points
(ps1_cond, T1), (ps2_cond, T2) = map(condition, [ps1, ps2])
ps1_cond, ps2_cond = map(augment, [ps1_cond, ps2_cond])
# Form homogeneous system
L = torch.cat([torch.cat([ps1_cond, torch.zeros_like(ps1_cond), -ps2_cond[:, 0:1]*ps1_cond], dim=1),
torch.cat([torch.zeros_like(ps1_cond), ps1_cond, -ps2_cond[:, 1:2]*ps1_cond], dim=1)])
# Solution is the last column of V
H12_cond = torch.svd(L, some=False).V[:,-1].reshape(3,3)
# Undo conditioning
H12 = torch.inverse(T2)@H12_cond@T1
H12 = H12/H12[2,2] # Sets H12[2,2] to 1
return H12
ps1 = torch.FloatTensor([[-350, -350],
[-350, 350],
[ 350, -350],
[ 350, 350]])
ps2 = torch.FloatTensor([[ 970, 517],
[ 156, 498],
[ 973, 1317],
[ 192, 1279]])
assert_allclose_f_ttn(homography, (ps1, ps2),
torch.FloatTensor([[ 6.252e-02, -1.117e+00, 5.6786e+02],
[ 1.183e+00, -8.049e-03, 9.1087e+02],
[ 6.000e-05, 3.650e-05, 1.0000e+00]]), atol=1e-3)
###Output
_____no_output_____
###Markdown
Rotations `approx_R` gives the nearest rotational approximation to the input matrix (Frobenius norm?). Note that for a proper rotation the determinant must be +1, which is checked afterwards.
###Code
# export
@numpyify
def approx_R(R):
one = R.new_tensor(1)
[U,_,V] = torch.svd(R)
R = [email protected]
if not torch.isclose(torch.det(R), one):
R = R.new_full((3,3), math.nan)
return R
R = torch.FloatTensor([[0.0958, 0.8441, 0.2009],
[0.7877, 0.9110, 0.9277],
[0.3727, 0.7262, 0.1417]])
assert_allclose_f_ttn(approx_R, R, torch.FloatTensor([[-0.5659, 0.8152, 0.1237],
[ 0.5155, 0.2328, 0.8247],
[ 0.6434, 0.5304, -0.5520]]), atol=1e-4)
###Output
_____no_output_____
###Markdown
Euler
###Code
# export
@numpyify
def euler2R(euler):
s, c = torch.sin, torch.cos
e_x, e_y, e_z = euler
return stackify((
(c(e_y)*c(e_z), c(e_z)*s(e_x)*s(e_y) - c(e_x)*s(e_z), s(e_x)*s(e_z) + c(e_x)*c(e_z)*s(e_y)),
(c(e_y)*s(e_z), c(e_x)*c(e_z) + s(e_x)*s(e_y)*s(e_z), c(e_x)*s(e_y)*s(e_z) - c(e_z)*s(e_x)),
( -s(e_y), c(e_y)*s(e_x), c(e_x)*c(e_y))
))
euler = torch.FloatTensor([0.2748, 0.0352, 0.4496])
assert_allclose_f_ttn(euler2R, euler, torch.FloatTensor([[ 0.9001, -0.4097, 0.1484],
[ 0.4343, 0.8710, -0.2297],
[-0.0352, 0.2712, 0.9619]]), atol=1e-4)
# export
@numpyify
def R2euler(R):
return stackify((torch.atan2( R[2, 1], R[2, 2]),
torch.atan2(-R[2, 0], torch.sqrt(R[0, 0]**2+R[1, 0]**2)),
torch.atan2( R[1, 0], R[0, 0])))
R = torch.FloatTensor([[ 0.9001, -0.4097, 0.1484],
[ 0.4343, 0.8710, -0.2297],
[-0.0352, 0.2712, 0.9619]])
assert_allclose_f_ttn(R2euler, R, torch.FloatTensor([0.2748, 0.0352, 0.4496]), atol=1e-4)
###Output
_____no_output_____
###Markdown
ensure euler => R => euler
###Code
euler = torch.FloatTensor([0.2748, 0.0352, 0.4496])
assert_allclose(R2euler(euler2R(euler)), euler)
###Output
_____no_output_____
###Markdown
Rodrigues Rodrigues conversion from: https://www2.cs.duke.edu/courses/fall13/compsci527/notes/rodrigues.pdf
###Code
# export
@numpyify
def rodrigues2R(r):
zero = r.new_tensor(0)
theta = torch.norm(r)
if theta > math.pi: warnings.warn('Theta greater than pi')
if torch.isclose(theta, zero): return torch.eye(3, dtype=r.dtype, device=r.device)
u = r/theta
return torch.eye(3, dtype=r.dtype, device=r.device)*torch.cos(theta) + \
(1-torch.cos(theta))*u[:,None]@u[:,None].T + \
cross_mat(u)*torch.sin(theta)
theta = math.pi/4
k = torch.FloatTensor([ 0.8155, 0.0937, -0.5711])
r = theta*k
assert_allclose_f_ttn(rodrigues2R, r, torch.FloatTensor([[ 0.9019, 0.4262, -0.0702],
[-0.3814, 0.7097, -0.5923],
[-0.2027, 0.5610, 0.8026]]), atol=1e-4)
# export
@numpyify
def R2rodrigues(R):
zero, one, pi = R.new_tensor(0), R.new_tensor(1), R.new_tensor(math.pi)
A = (R-R.T)/2
rho = A[[2,0,1],[1,2,0]]
s = torch.norm(rho)
c = (R.trace()-1)/2
if torch.isclose(s, zero) and torch.isclose(c, one):
r = R.new_zeros(3)
elif torch.isclose(s, zero) and torch.isclose(c, -one):
V = R + torch.eye(3, dtype=R.dtype, device=R.device)
v = V[:, torch.where(~torch.isclose(torch.norm(V, dim=0), zero))[0][0]] # Just get first non-zero
u = unitize(v)
def S_half(r):
if torch.isclose(torch.norm(r), pi) and \
((torch.isclose(r[0], zero) and torch.isclose(r[1], zero) and r[2] < 0) or \
(torch.isclose(r[0], zero) and r[1] < 0) or \
(r[0] < 0)):
return -r
else:
return r
r = S_half(u*math.pi)
elif not torch.isclose(s, zero):
u = rho/s
theta = torch.atan2(s,c)
r = u*theta
else: raise RuntimeError('This shouldnt happen; please debug')
return r
R = torch.FloatTensor([[ 0.9019, 0.4262, -0.0702],
[-0.3814, 0.7097, -0.5923],
[-0.2027, 0.5610, 0.8026]])
assert_allclose_f_ttn(R2rodrigues, R, torch.FloatTensor([ 0.6405, 0.0736, -0.4485]), atol=1e-4)
r = torch.FloatTensor([.1,.2,.3])
assert_allclose(R2rodrigues(rodrigues2R(r)), r)
###Output
_____no_output_____
###Markdown
Rigid Rigid transforms are a rotation followed by translation.
###Code
# export
@numpyify
def Rt2M(R, t):
M = torch.cat([R, t[:,None]], dim=1)
M = torch.cat([M, M.new_tensor([[0,0,0,1]])])
return M
R = torch.FloatTensor([[-0.5659, 0.8152, 0.1237],
[ 0.5155, 0.2328, 0.8247],
[ 0.6434, 0.5304, -0.5520]])
t = torch.FloatTensor([1,2,3])
assert_allclose_f_ttn(Rt2M, (R,t), torch.FloatTensor([[-0.5659, 0.8152, 0.1237, 1.0000],
[ 0.5155, 0.2328, 0.8247, 2.0000],
[ 0.6434, 0.5304, -0.5520, 3.0000],
[ 0.0000, 0.0000, 0.0000, 1.0000]]))
# export
def M2Rt(M): return M[0:3,0:3], M[0:3,3]
M = torch.FloatTensor([[-0.5659, 0.8152, 0.1237, 1.0000],
[ 0.5155, 0.2328, 0.8247, 2.0000],
[ 0.6434, 0.5304, -0.5520, 3.0000],
[ 0.0000, 0.0000, 0.0000, 1.0000]])
assert_allclose_f_ttn(M2Rt, M, (torch.FloatTensor([[-0.5659, 0.8152, 0.1237],
[ 0.5155, 0.2328, 0.8247],
[ 0.6434, 0.5304, -0.5520]]), torch.FloatTensor([1,2,3])))
# export
def invert_rigid(M):
R, t = M2Rt(M)
return Rt2M(R.T, -R.T@t)
M = torch.FloatTensor([[-0.5659, 0.8152, 0.1237, 1.0000],
[ 0.5155, 0.2328, 0.8247, 2.0000],
[ 0.6434, 0.5304, -0.5520, 3.0000],
[ 0.0000, 0.0000, 0.0000, 1.0000]])
assert_allclose_f_ttn(invert_rigid, M, torch.inverse(M), atol=1e-3)
# export
def mult_rigid(M1, M2):
R1, t1 = M2Rt(M1)
R2, t2 = M2Rt(M2)
return Rt2M(R1@R2, R1@t2+t1)
M = torch.FloatTensor([[-0.5659, 0.8152, 0.1237, 1.0000],
[ 0.5155, 0.2328, 0.8247, 2.0000],
[ 0.6434, 0.5304, -0.5520, 3.0000],
[ 0.0000, 0.0000, 0.0000, 1.0000]])
assert_allclose_f_ttn(mult_rigid, (M,M), M@M)
###Output
_____no_output_____
###Markdown
More Vector stuff Stuff placed here mainly because it requires stuff from the transforms section. For a random unit vector uniformly sampled over a sphere, if you randomly sample `theta` and `phi` directly, you will oversample near the poles. Instead, you can do an "equal area" projection of the sphere onto a cylinder (i.e. a rectangle), uniformly sample the cylinder/rectangle, and then project the point back. More info here: https://math.stackexchange.com/a/44691 NOTE: `torch.rand` is random uniform between `[0,1)`, so it is rescaled as needed. Also, since `random_unit` doesn't take in a `tensor`, it cannot be `numpyified`
###Code
# export
def random_unit(dtype=None, device=None):
r = torch.ones(1, dtype=dtype, device=device)[0]
theta = torch.acos(rescale(torch.rand(1, dtype=dtype, device=device)[0], [0,1], [-1,1]))
phi = rescale(torch.rand(1, dtype=dtype, device=device)[0], [0,1], [0, 2*math.pi])
return spherical2cart(stackify((r, theta, phi)))
assert_allclose(torch.norm(random_unit()), 1)
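# Rough statistical sketch of the claim above: for a uniform sample on the sphere the
# z-coordinate is itself uniform on [-1, 1], so the mean of |z| should sit near 0.5,
# whereas oversampling the poles would push it toward 1. The tolerance is deliberately
# loose since this is a statistical check.
vs = stackify(tuple(random_unit() for _ in range(2000)))
assert torch.abs(vs[:, 2].abs().mean() - 0.5) < 0.05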
###Output
_____no_output_____
###Markdown
`v_v_angle` is vector-vector angle and returns the angle between two vectors
###Code
# export
@numpyify
def v_v_angle(a, b):
x = torch.dot(a,b)/(torch.norm(a)*torch.norm(b))
return torch.acos(x.clamp(-1, 1)) # Precision errors can make this go outside domain [-1,1] => NaNs
a = torch.FloatTensor([ 0.5568, -0.4851, -0.6743])
b = torch.FloatTensor([-0.8482, -0.4175, -0.3260])
assert_allclose_f_ttn(v_v_angle, (a, b), 1.6207424465344742, atol=1e-5)
assert_allclose_f_ttn(v_v_angle, (a, a), 0, atol=1e-5)
###Output
_____no_output_____
###Markdown
`v_v_R` is vector vector rotation matrix and returns the rotation matrix between the two vectors
###Code
# export
@numpyify
def v_v_R(v1, v2):
zero = v1.new_tensor(0)
theta = v_v_angle(v1, v2)
if torch.isclose(theta, zero): return torch.eye(3, dtype=v1.dtype, device=v1.device)
v3 = unitize(torch.cross(v1, v2))
return rodrigues2R(theta*v3)
v1 = torch.FloatTensor([ 0.5568, -0.4851, -0.6743])
v2 = torch.FloatTensor([-0.8482, -0.4175, -0.3260])
assert_allclose_f_ttn(v_v_R, (v1,v2), torch.FloatTensor([[-0.03390428, 0.54606884, 0.83705395],
[-0.74174788, 0.54757324, -0.3872643 ],
[-0.66982131, -0.63401291, 0.38648033]]))
assert_allclose_f_ttn(v_v_R, (v1,v1), torch.FloatTensor([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]]))
assert_allclose(unitize(pmm(v1, v_v_R(v1,v2))), unitize(v2), atol=1e-4)
###Output
_____no_output_____
###Markdown
Line stuff
###Code
# export
@numpyify
def pm2l(p, m):
zero, one = p.new_tensor(0), p.new_tensor(1)
x, y = p
if not torch.isfinite(m): a, b, c = one, zero, -x
else: a, b, c = m, -one, y-m*x
return stackify((a,b,c))
p = torch.FloatTensor([1.5, 2.5])
m = torch.FloatTensor([0.5])[0]
assert_allclose_f_ttn(pm2l, (p, m), torch.FloatTensor([ 0.5000, -1.0000, 1.7500]))
# export
@numpyify
def ps2l(p1, p2):
zero = p1.new_tensor(0)
x1, y1 = p1
x2, y2 = p2
if not torch.isclose(x2-x1, zero): m = (y2-y1)/(x2-x1)
else: m = p1.new_tensor(math.inf)
return pm2l(p1, m)
p1 = torch.FloatTensor([1.5, 2.5])
p2 = torch.FloatTensor([2.5, 3.5])
assert_allclose_f_ttn(ps2l, (p1, p2), torch.FloatTensor([ 1.0000, -1.0000, 1.000]))
# export
@numpyify
def pld(p, l):
x, y = p
a, b, c = l
return torch.abs(a*x + b*y + c)/torch.sqrt(a**2 + b**2)
p = torch.FloatTensor([0.5, 1.5])
l = torch.FloatTensor([1.0000, 0.0000, -1.5000])
assert_allclose_f_ttn(pld, (p, l), 1)
# export
@numpyify
def l_l_intersect(l1, l2):
a1, b1, c1 = l1
a2, b2, c2 = l2
return stackify(((-c1*b2 + b1*c2)/(a1*b2 - b1*a2), (-a1*c2 + c1*a2)/(a1*b2 - b1*a2)))
l1 = torch.FloatTensor([.1, .2, .3])
l2 = torch.FloatTensor([.4, .2, .2])
assert_allclose_f_ttn(l_l_intersect, (l1, l2), torch.FloatTensor([ 0.3333, -1.6667]), atol=1e-4)
###Output
_____no_output_____
###Markdown
`b_ls` gets lines of boundary
###Code
# export
@numpyify
def b_ls(b):
return stackify(tuple(ps2l(b[idx], b[torch.remainder(idx+1, len(b))]) for idx in torch.arange(len(b))))
b = torch.FloatTensor([[1., 1.],
[1., 4.],
[5., 4.],
[5., 1.]])
assert_allclose_f_ttn(b_ls, b, torch.FloatTensor([[ 1., 0., -1.],
[ 0., -1., 4.],
[ 1., 0., -5.],
[-0., -1., 1.]]))
###Output
_____no_output_____
###Markdown
TODO: handle edge cases
###Code
# export
@numpyify
def b_l_intersect(b, l):
ps = []
for idx in torch.arange(len(b)):
p1, p2 = b[idx], b[torch.remainder(idx+1, len(b))]
p = l_l_intersect(ps2l(p1, p2), l)
d, d1, d2 = torch.norm(p2-p1), torch.norm(p1-p), torch.norm(p2-p)
if torch.isclose(d, d1+d2): ps.append(p)
return stackify(tuple(ps))
b = torch.FloatTensor([[-100., -100.],
[-100., 200.],
[ 200., 200.],
[ 200., -100.]])
l = torch.FloatTensor([.1, .2, .3])
assert_allclose_f_ttn(b_l_intersect, (b, l), torch.FloatTensor([[-100.0000, 48.5000],
[ 197.0000, -100.0000]]))
###Output
_____no_output_____
###Markdown
Ellipse stuff `sample_2pi` prevents accidentally sampling the same angle twice (0 and 2pi) by linspacing with an additional sample and then removing the last sample. NOTE: Get numpy version working
###Code
# export
def sample_2pi(num_samples, dtype=None, device=None):
return torch.linspace(0, 2*math.pi, int(num_samples)+1, dtype=dtype, device=device)[:-1]
assert_allclose_f_ttn(sample_2pi, 3, torch.FloatTensor([0.0000, 2.0944, 4.1888]), atol=1e-4)
# export
@numpyify
def sample_ellipse(e, num_samples):
h, k, a, b, alpha = e
thetas = sample_2pi(num_samples, e.dtype, e.device)
return stackify((a*torch.cos(alpha)*torch.cos(thetas) - b*torch.sin(alpha)*torch.sin(thetas) + h,
a*torch.sin(alpha)*torch.cos(thetas) + b*torch.cos(alpha)*torch.sin(thetas) + k), dim=1)
e = torch.FloatTensor([1, 2, 3, 4, math.pi/4])
assert_allclose_f_ttn(sample_ellipse, (e, 3), torch.FloatTensor([[ 3.1213, 4.1213],
[-2.5101, 3.3888],
[ 2.3888, -1.5101]]), atol=1e-4)
# export
@numpyify
def ellipse2conic(e):
h, k, a, b, alpha = e
A = a**2*torch.sin(alpha)**2 + b**2*torch.cos(alpha)**2
B = 2*(b**2 - a**2)*torch.sin(alpha)*torch.cos(alpha)
C = a**2*torch.cos(alpha)**2 + b**2*torch.sin(alpha)**2
D = -2*A*h - B*k
E = -B*h - 2*C*k
F = A*h**2 + B*h*k + C*k**2 - a**2*b**2
return stackify((( A, B/2, D/2),
(B/2, C, E/2),
(D/2, E/2, F)))
e = torch.FloatTensor([1, 2, 3, 4, math.pi/4])
assert_allclose_f_ttn(ellipse2conic, e, torch.FloatTensor([[ 12.5000, 3.5000, -19.5000],
[ 3.5000, 12.5000, -28.5000],
[-19.5000, -28.5000, -67.5000]]))
# export
@numpyify
def conic2ellipse(Aq):
zero, pi = Aq.new_tensor(0), Aq.new_tensor(math.pi)
A = Aq[0, 0]
B = 2*Aq[0, 1]
C = Aq[1, 1]
D = 2*Aq[0, 2]
E = 2*Aq[1, 2]
F = Aq[2, 2]
# Return nans if input conic is not ellipse
if torch.any(~torch.isfinite(Aq)) or torch.isclose(B**2-4*A*C, zero) or B**2-4*A*C > 0:
return Aq.new_full((5,), math.nan)
# Equations below are from https://math.stackexchange.com/a/820896/39581
# "coefficient of normalizing factor"
q = 64*(F*(4*A*C-B**2)-A*E**2+B*D*E-C*D**2)/(4*A*C-B**2)**2
# distance between center and focal point
s = 1/4*torch.sqrt(torch.abs(q)*torch.sqrt(B**2+(A-C)**2))
# ellipse parameters
h = (B*E-2*C*D)/(4*A*C-B**2)
k = (B*D-2*A*E)/(4*A*C-B**2)
a = 1/8*torch.sqrt(2*torch.abs(q)*torch.sqrt(B**2+(A-C)**2)-2*q*(A+C))
b = torch.sqrt(a**2-s**2)
# Get alpha; note that range of alpha is [0, pi)
if torch.isclose(q*A-q*C, zero) and torch.isclose(q*B, zero): alpha = zero # Circle
elif torch.isclose(q*A-q*C, zero) and q*B > 0: alpha = 1/4*pi
elif torch.isclose(q*A-q*C, zero) and q*B < 0: alpha = 3/4*pi
elif q*A-q*C > 0 and (torch.isclose(q*B, zero) or q*B > 0): alpha = 1/2*torch.atan(B/(A-C))
elif q*A-q*C > 0 and q*B < 0: alpha = 1/2*torch.atan(B/(A-C)) + pi
elif q*A-q*C < 0: alpha = 1/2*torch.atan(B/(A-C)) + 1/2*pi
else: raise RuntimeError('"Impossible" condition reached; please debug')
return stackify((h, k, a, b, alpha))
Aq = torch.FloatTensor([[ 9.56324965, -1.90407389, -5.75510187],
[ -1.90407389, 15.43675035, -28.96942682],
[ -5.75510187, -28.96942682, -80.3060445 ]])
assert_allclose_f_ttn(conic2ellipse, Aq,
torch.FloatTensor([1.0000, 2.0000, 4.0000, 3.0000, 0.2876]), atol=1e-4)
assert_allclose(ellipse2conic(conic2ellipse(Aq)), Aq)
###Output
_____no_output_____
###Markdown
General image processing
###Code
# export
def rgb2gray(arr): # From Pillow documentation
return arr[:,:,0]*(299/1000) + arr[:,:,1]*(587/1000) + arr[:,:,2]*(114/1000)
arr = torch.FloatTensor([[[.1,.2,.3],[.2,.2,.2]]])
assert_allclose_f_ttn(rgb2gray, arr, torch.FloatTensor([[0.1815, 0.2000]]))
# export
@numpyify
def imresize(arr, sz, mode='bilinear', align_corners=True):
if not isinstance(sz, tuple): sz = tuple((shape(arr)//(shape(arr)/sz).min()).long())
return torch.nn.functional.interpolate(arr[None, None, :, :],
size=sz,
mode=mode,
align_corners=align_corners).squeeze(0).squeeze(0)
arr = torch.FloatTensor([[1,1,1,1],
[2,2,2,2]])
assert_allclose_f_ttn(imresize, (arr, 3),
torch.FloatTensor([[1.0000, 1.0000, 1.0000, 1.0000, 1.0000, 1.0000],
[1.5000, 1.5000, 1.5000, 1.5000, 1.5000, 1.5000],
[2.0000, 2.0000, 2.0000, 2.0000, 2.0000, 2.0000]]))
###Output
_____no_output_____
###Markdown
`conv2d` is actually cross-correlation; flipping the kernel in both spatial dimensions recovers the true convolution (a quick demonstration is at the end of the code cell below). This is mainly just a helper function to do 2D convolutions easily.
###Code
# export
@numpyify
def conv2d(arr, kernel, **kwargs):
return torch.nn.functional.conv2d(arr[None,None], kernel[None, None], **kwargs).squeeze(0).squeeze(0)
arr = torch.FloatTensor([[1,2,3],
[3,2,1],
[1,1,1]])
kernel = torch.FloatTensor([[-0.25, 0.25],
[-0.25, 0.25]])
assert_allclose_f_ttn(conv2d, (arr, kernel), torch.FloatTensor([[ 0.0000, 0.0000],
[-0.2500, -0.2500]]))
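# Quick demonstration (a sketch): flipping the kernel in both spatial dimensions turns the
# cross-correlation performed by conv2d into the true convolution.
kernel_flipped = torch.flip(kernel, dims=(0, 1))
true_conv = conv2d(arr, kernel_flipped)  # 'valid' convolution of arr with the original kernel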
###Output
_____no_output_____
###Markdown
`pad` is just a helper function to pad 2D arrays easily (used before 2D convolutions).
###Code
# export
@numpyify
def pad(arr, pad, mode):
return torch.nn.functional.pad(arr[None,None], pad, mode=mode).squeeze(0).squeeze(0)
arr = torch.FloatTensor([[1,2,3],
[3,2,1],
[1,1,1]])
assert_allclose_f_ttn(pad, (arr, (1,1,1,1), 'replicate'), torch.FloatTensor([[1,1,2,3,3],
[1,1,2,3,3],
[3,3,2,1,1],
[1,1,1,1,1],
[1,1,1,1,1]]))
###Output
_____no_output_____
###Markdown
`grad_array` by default uses replication to help with gradients along the edges. The default padding in `nn.functional.conv2d` is zero padding, which results in bad gradients along the edges.
###Code
# export
@numpyify
def grad_array(arr):
kernel_sobel = arr.new_tensor([[-0.1250, 0, 0.1250],
[-0.2500, 0, 0.2500],
[-0.1250, 0, 0.1250]])
arr = pad(arr, pad=(1,1,1,1), mode='replicate')
return tuple(conv2d(arr, kernel) for kernel in (kernel_sobel, kernel_sobel.T))
arr = torch.FloatTensor([[1,2,3],
[3,2,1],
[1,1,1]])
assert_allclose_f_ttn(grad_array, arr, (torch.FloatTensor([[ 0.2500, 0.5000, 0.2500],
[-0.1250, -0.2500, -0.1250],
[-0.1250, -0.2500, -0.1250]]),
torch.FloatTensor([[ 0.7500, 0.0000, -0.7500],
[-0.1250, -0.5000, -0.8750],
[-0.8750, -0.5000, -0.1250]])))
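# For comparison (a sketch): with conv2d's default zero padding, border pixels are differenced
# against implicit zeros, which inflates the gradients along the edges relative to grad_array.
kernel_sobel_x = arr.new_tensor([[-0.1250, 0, 0.1250],
                                 [-0.2500, 0, 0.2500],
                                 [-0.1250, 0, 0.1250]])
grad_x_zero_pad = conv2d(arr, kernel_sobel_x, padding=1)  # compare its border with grad_array(arr)[0]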
# export
@numpyify
def interp_array(arr, ps, align_corners=True, **kwargs):
ps = stackify((rescale(ps[:, 0], [0, arr.shape[1]-1], [-1, 1]),
rescale(ps[:, 1], [0, arr.shape[0]-1], [-1, 1])), dim=1) # ps must be rescaled to [-1,1]
return torch.nn.functional.grid_sample(arr[None, None],
ps[None, None],
align_corners=align_corners,
**kwargs).squeeze()
arr = torch.FloatTensor([[1,2,3],
[4,5,6],
[7,8,9]])
ps = array_ps(arr)*0.8
assert_allclose_f_ttn(interp_array, (arr, ps), torch.FloatTensor([1.,1.8,2.6,3.4,4.2,5.,5.8,6.6,7.4]))
###Output
_____no_output_____
###Markdown
Optimization stuff. `wlstsq` is weighted least squares: the rows of A and b are scaled by the square roots of the weights before solving an ordinary least-squares problem. A quick cross-check against the normal equations is included at the end of the code cell below.
###Code
# export
@numpyify
def wlstsq(A, b, W=None):
single = len(b.shape) == 1
if single: b = b[:, None]
if W is not None: # Weight matrix is a diagonal matrix with sqrt of the input weights
W = torch.sqrt(W.reshape(-1,1))
A, b = A*W, b*W
x = torch.lstsq(b, A).solution[:A.shape[1],:] # first n rows contains solution
if single: x = x.squeeze(1)
return x
A = torch.FloatTensor([[1, 2, 3],
[2, 3, 4],
[4, 2, 5],
[3, 3, 2],
[1, 6, 7]])
b = torch.FloatTensor([[1, 2],
[1, 2],
[1, 2],
[2, 3],
[7, 3]])
W = torch.FloatTensor([1, 2, 3, 4, 5])
assert_allclose_f_ttn(wlstsq, (A, b, W), torch.FloatTensor([[-0.5300, 0.4480],
[ 1.0744, 0.6981],
[ 0.1175, -0.2283]]), atol=1e-4)
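# Cross-check (a sketch): weighted least squares minimizes sum_i w_i * ||A_i x - b_i||^2,
# and scaling rows by sqrt(w_i) as in wlstsq is equivalent to solving the normal equations
# (A^T W A) x = A^T W b. The tolerance is loose because of float32 and the squared conditioning.
_Wd = torch.diag(W)
_x_normal = torch.inverse(A.T @ _Wd @ A) @ (A.T @ _Wd @ b)
assert torch.allclose(_x_normal, wlstsq(A, b, W), atol=1e-2)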
###Output
_____no_output_____
###Markdown
Plotting
###Code
# export
def get_colors(n): return sns.color_palette(None, n)
###Output
_____no_output_____
###Markdown
Notebook stuff. These are kind of hacky, but I like being able to rerun a notebook and have it auto save/build/convert at the end.
###Code
# export
def get_notebook_file():
id_kernel = re.search('kernel-(.*).json', ipykernel.connect.get_connection_file()).group(1)
for server in list_running_servers():
response = requests.get(requests.compat.urljoin(server['url'], 'api/sessions'),
params={'token': server.get('token', '')})
for r in json.loads(response.text):
if 'kernel' in r and r['kernel']['id'] == id_kernel:
return Path(r['notebook']['path'])
assert_allclose(get_notebook_file().as_posix(), 'utils.ipynb')
# export
def save_notebook():
file_notebook = get_notebook_file()
_get_md5 = lambda : hashlib.md5(file_notebook.read_bytes()).hexdigest()
md5_start = _get_md5()
display(Javascript('IPython.notebook.save_checkpoint();')) # Asynchronous
while md5_start == _get_md5(): time.sleep(1e-1)
# export
def build_notebook(save=True):
if save: save_notebook()
nbdev.export.notebook2script(fname=get_notebook_file().as_posix())
# export
def convert_notebook(save=True, t='markdown'):
if save: save_notebook()
os.system(f'jupyter nbconvert --to {t} {get_notebook_file().as_posix()}')
###Output
_____no_output_____
###Markdown
Build
###Code
build_notebook()
###Output
_____no_output_____
###Markdown
Utils

This notebook contains some useful tools to increase productivity through Google Colab.

1. PDF
   1. Compress: treat the file as an image and reduce its resolution and size
   2. Rotate: rotate pages
   3. Merge: join files into one
   * [ ] OCR
2. Youtube
   1. Transcript: get the transcript (subtitles) from YouTube videos
3. Wikipedia
   1. Table: import a table from a Wikipedia article
4. Github
   1. Clone: clone a repo and enable its execution
5. Download: request and download a file to a temporary folder during a Colab execution session
6. Google Drive: mount your Google Drive to work with files in a persistent way

Linux Download sample
###Code
# Download sample pdf
!curl -o input.pdf https://file-examples-com.github.io/uploads/2017/10/file-sample_150kB.pdf
###Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 139k 100 139k 0 0 663k 0 --:--:-- --:--:-- --:--:-- 663k
###Markdown
Copy Files
###Code
!cp input.pdf input1.pdf
!cp input.pdf input2.pdf
###Output
_____no_output_____
###Markdown
PDF Compress Ghostscript
###Code
!sudo apt-get clean
!sudo apt-get update
!sudo apt install ghostscript
###Output
_____no_output_____
###Markdown
* -dPDFSETTINGS=/screen: lower quality, smaller size (72 dpi)
* -dPDFSETTINGS=/ebook: better quality, but slightly larger PDFs (150 dpi)
* -dPDFSETTINGS=/prepress: output similar to the Acrobat Distiller "Prepress Optimized" setting (300 dpi)
* -dPDFSETTINGS=/printer: output similar to the Acrobat Distiller "Print Optimized" setting (300 dpi)
* -dPDFSETTINGS=/default: output intended to be useful across a wide variety of uses, possibly at the expense of a larger output file
###Code
!gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf
###Output
Get:1 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ InRelease [3,626 B]
Ign:2 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease
Ign:3 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease
Hit:4 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release
Get:5 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release [564 B]
Get:6 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release.gpg [833 B]
Get:7 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ Packages [43.2 kB]
Get:8 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
Get:9 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic InRelease [15.9 kB]
Hit:11 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:12 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Packages [66.5 kB]
Get:13 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Hit:14 http://ppa.launchpad.net/cran/libgit2/ubuntu bionic InRelease
Get:15 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [261 kB]
Get:16 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease [21.3 kB]
Get:17 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
Get:18 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [1,869 kB]
Get:19 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [304 kB]
Get:20 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main Sources [1,707 kB]
Get:21 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [2,140 kB]
Get:22 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [1,376 kB]
Get:23 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [2,296 kB]
Get:24 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [45.6 kB]
Get:25 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic/main amd64 Packages [874 kB]
Get:26 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic/main amd64 Packages [49.2 kB]
Fetched 11.3 MB in 4s (2,838 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
fonts-droid-fallback fonts-noto-mono gsfonts libcupsfilters1 libcupsimage2
libgs9 libgs9-common libijs-0.35 libjbig2dec0 poppler-data
Suggested packages:
fonts-noto ghostscript-x poppler-utils fonts-japanese-mincho
| fonts-ipafont-mincho fonts-japanese-gothic | fonts-ipafont-gothic
fonts-arphic-ukai fonts-arphic-uming fonts-nanum
The following NEW packages will be installed:
fonts-droid-fallback fonts-noto-mono ghostscript gsfonts libcupsfilters1
libcupsimage2 libgs9 libgs9-common libijs-0.35 libjbig2dec0 poppler-data
0 upgraded, 11 newly installed, 0 to remove and 39 not upgraded.
Need to get 14.1 MB of archives.
After this operation, 49.9 MB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/main amd64 fonts-droid-fallback all 1:6.0.1r16-1.1 [1,805 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic/main amd64 poppler-data all 0.4.8-2 [1,479 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic/main amd64 fonts-noto-mono all 20171026-2 [75.5 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libcupsimage2 amd64 2.2.7-1ubuntu2.8 [18.6 kB]
Get:5 http://archive.ubuntu.com/ubuntu bionic/main amd64 libijs-0.35 amd64 0.35-13 [15.5 kB]
Get:6 http://archive.ubuntu.com/ubuntu bionic/main amd64 libjbig2dec0 amd64 0.13-6 [55.9 kB]
Get:7 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libgs9-common all 9.26~dfsg+0-0ubuntu0.18.04.14 [5,092 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libgs9 amd64 9.26~dfsg+0-0ubuntu0.18.04.14 [2,265 kB]
Get:9 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 ghostscript amd64 9.26~dfsg+0-0ubuntu0.18.04.14 [51.3 kB]
Get:10 http://archive.ubuntu.com/ubuntu bionic/main amd64 gsfonts all 1:8.11+urwcyr1.0.7~pre44-4.4 [3,120 kB]
Get:11 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libcupsfilters1 amd64 1.20.2-0ubuntu3.1 [108 kB]
Fetched 14.1 MB in 2s (6,419 kB/s)
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 11.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin:
Selecting previously unselected package fonts-droid-fallback.
(Reading database ... 145483 files and directories currently installed.)
Preparing to unpack .../00-fonts-droid-fallback_1%3a6.0.1r16-1.1_all.deb ...
Unpacking fonts-droid-fallback (1:6.0.1r16-1.1) ...
Selecting previously unselected package poppler-data.
Preparing to unpack .../01-poppler-data_0.4.8-2_all.deb ...
Unpacking poppler-data (0.4.8-2) ...
Selecting previously unselected package fonts-noto-mono.
Preparing to unpack .../02-fonts-noto-mono_20171026-2_all.deb ...
Unpacking fonts-noto-mono (20171026-2) ...
Selecting previously unselected package libcupsimage2:amd64.
Preparing to unpack .../03-libcupsimage2_2.2.7-1ubuntu2.8_amd64.deb ...
Unpacking libcupsimage2:amd64 (2.2.7-1ubuntu2.8) ...
Selecting previously unselected package libijs-0.35:amd64.
Preparing to unpack .../04-libijs-0.35_0.35-13_amd64.deb ...
Unpacking libijs-0.35:amd64 (0.35-13) ...
Selecting previously unselected package libjbig2dec0:amd64.
Preparing to unpack .../05-libjbig2dec0_0.13-6_amd64.deb ...
Unpacking libjbig2dec0:amd64 (0.13-6) ...
Selecting previously unselected package libgs9-common.
Preparing to unpack .../06-libgs9-common_9.26~dfsg+0-0ubuntu0.18.04.14_all.deb ...
Unpacking libgs9-common (9.26~dfsg+0-0ubuntu0.18.04.14) ...
Selecting previously unselected package libgs9:amd64.
Preparing to unpack .../07-libgs9_9.26~dfsg+0-0ubuntu0.18.04.14_amd64.deb ...
Unpacking libgs9:amd64 (9.26~dfsg+0-0ubuntu0.18.04.14) ...
Selecting previously unselected package ghostscript.
Preparing to unpack .../08-ghostscript_9.26~dfsg+0-0ubuntu0.18.04.14_amd64.deb ...
Unpacking ghostscript (9.26~dfsg+0-0ubuntu0.18.04.14) ...
Selecting previously unselected package gsfonts.
Preparing to unpack .../09-gsfonts_1%3a8.11+urwcyr1.0.7~pre44-4.4_all.deb ...
Unpacking gsfonts (1:8.11+urwcyr1.0.7~pre44-4.4) ...
Selecting previously unselected package libcupsfilters1:amd64.
Preparing to unpack .../10-libcupsfilters1_1.20.2-0ubuntu3.1_amd64.deb ...
Unpacking libcupsfilters1:amd64 (1.20.2-0ubuntu3.1) ...
Setting up libgs9-common (9.26~dfsg+0-0ubuntu0.18.04.14) ...
Setting up fonts-droid-fallback (1:6.0.1r16-1.1) ...
Setting up gsfonts (1:8.11+urwcyr1.0.7~pre44-4.4) ...
Setting up poppler-data (0.4.8-2) ...
Setting up fonts-noto-mono (20171026-2) ...
Setting up libcupsfilters1:amd64 (1.20.2-0ubuntu3.1) ...
Setting up libcupsimage2:amd64 (2.2.7-1ubuntu2.8) ...
Setting up libjbig2dec0:amd64 (0.13-6) ...
Setting up libijs-0.35:amd64 (0.35-13) ...
Setting up libgs9:amd64 (9.26~dfsg+0-0ubuntu0.18.04.14) ...
Setting up ghostscript (9.26~dfsg+0-0ubuntu0.18.04.14) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for fontconfig (2.12.6-0ubuntu2) ...
Processing triggers for libc-bin (2.27-3ubuntu1.2) ...
/sbin/ldconfig.real: /usr/local/lib/python3.6/dist-packages/ideep4py/lib/libmkldnn.so.0 is not a symbolic link
###Markdown
Rotate
###Code
!pip install PyPDF2
from PyPDF2 import PdfFileReader, PdfFileWriter
def pdf_rotate(input_file_name="input.pdf",
output_file_name="output.pdf",
rotate_clockwise_degree=-90):
pdf_in = open(input_file_name, 'rb')
pdf_reader = PdfFileReader(pdf_in)
pdf_writer = PdfFileWriter()
for pagenum in range(pdf_reader.numPages):
page = pdf_reader.getPage(pagenum)
page.rotateClockwise(rotate_clockwise_degree)
pdf_writer.addPage(page)
pdf_out = open(output_file_name, 'wb')
pdf_writer.write(pdf_out)
pdf_out.close()
pdf_in.close()
pass
# Test
pdf_rotate(input_file_name="input.pdf",
output_file_name="output_rot.pdf",
rotate_clockwise_degree=-90)
###Output
_____no_output_____
###Markdown
Merge
###Code
!pip install PyPDF2
from PyPDF2 import PdfFileMerger
def pdf_merge(lst_input_file_names=['input1.pdf', 'input2.pdf'],
output_file_name="output.pdf"):
merger = PdfFileMerger()
for input_file_name in lst_input_file_names:
merger.append(input_file_name)
# merger.append(lst_input_file_names[1], pages=(0, 3)) # first 3 pages
# merger.append(lst_input_file_names[0])
merger.write(output_file_name)
merger.close()
pass
# Test
pdf_merge(lst_input_file_names=['input.pdf', 'input.pdf'],
output_file_name="output_merge.pdf")
###Output
_____no_output_____
###Markdown
TODO: OCR

Youtube Transcript
###Code
!pip install youtube_transcript_api
from youtube_transcript_api import YouTubeTranscriptApi
# Test
video_id = "9m7k1x9AetU" # change to a youtube video id of your own interest
raw_legend = YouTubeTranscriptApi.get_transcript(video_id, languages=['pt'])
lst = [entry['text'] for entry in raw_legend]
lst
###Output
Collecting youtube_transcript_api
Downloading https://files.pythonhosted.org/packages/21/81/c4ae5534b113f4938b482f360babbbe6fda550441a4af8e1007dba518586/youtube_transcript_api-0.3.1-py3-none-any.whl
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from youtube_transcript_api) (2.23.0)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->youtube_transcript_api) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->youtube_transcript_api) (2020.6.20)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->youtube_transcript_api) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->youtube_transcript_api) (3.0.4)
Installing collected packages: youtube-transcript-api
Successfully installed youtube-transcript-api-0.3.1
###Markdown
Wikipedia
###Code
!pip install wikipedia
import wikipedia as wp
###Output
Collecting wikipedia
Downloading https://files.pythonhosted.org/packages/67/35/25e68fbc99e672127cc6fbb14b8ec1ba3dfef035bf1e4c90f78f24a80b7d/wikipedia-1.4.0.tar.gz
Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.6/dist-packages (from wikipedia) (4.6.3)
Requirement already satisfied: requests<3.0.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from wikipedia) (2.23.0)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0,>=2.0.0->wikipedia) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0,>=2.0.0->wikipedia) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0,>=2.0.0->wikipedia) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3.0.0,>=2.0.0->wikipedia) (2020.6.20)
Building wheels for collected packages: wikipedia
Building wheel for wikipedia (setup.py) ... [?25l[?25hdone
Created wheel for wikipedia: filename=wikipedia-1.4.0-cp36-none-any.whl size=11686 sha256=5e3d9f76be01edf425dc337e975926613603206258522f1316f2a020fa1ff2d4
Stored in directory: /root/.cache/pip/wheels/87/2a/18/4e471fd96d12114d16fe4a446d00c3b38fb9efcb744bd31f4a
Successfully built wikipedia
Installing collected packages: wikipedia
Successfully installed wikipedia-1.4.0
###Markdown
Table
###Code
import pandas as pd
wp.set_lang("pt")
def import_wiki_table(page, table_nr):
"""
given page name and number of a table
return the correspondent dataframe
"""
html = wp.page(page).html().encode("UTF-8")
try:
df = pd.read_html(html, encoding='utf-8')[table_nr]
# Try 2nd table first as most pages contain contents table first
except IndexError:
raise("Erro")
print(df.to_string())
return df
# Test
page = "Número_de_parlamentares_do_Brasil_por_ano_de_eleição"
print("\n Senadores \n")
df_senador = import_wiki_table(page, 0)
###Output
_____no_output_____
###Markdown
Github Clone
###Code
!git clone https://github.com/viniciusriosfuck/hello-world
# Test
!python hello-world/main.py
###Output
Hello world!
###Markdown
Download
###Code
import requests
import time
import numpy as np
from progressbar import ProgressBar
def download_file(url, file_name, n_chunk=5):
print(f"\nDownloading\nFile: {file_name}")
r = requests.get(url, stream=True, verify=False)
# Estimates the number of bar updates
block_size_bytes = 1024 #1 kb
file_size_bytes = int(r.headers.get('Content-Length', None))
chunk_size_bytes = n_chunk * block_size_bytes
num_bars = int(np.ceil(file_size_bytes / chunk_size_bytes))
bar = ProgressBar(maxval=num_bars).start()
with open(file_name, 'wb') as f:
for i, chunk in enumerate(r.iter_content(chunk_size=chunk_size_bytes)):
f.write(chunk)
bar.update(i+1)
# Add a little sleep so you can see the bar progress
time.sleep(0.05)
return
# Test
url = "http://www.ovh.net/files/10Mb.dat" #big file test
file_name = url.split('/')[-1]
download_file(url, file_name)
###Output
_____no_output_____
###Markdown
Google Drive
###Code
from google.colab import drive
drive.mount("/content/gdrive")
###Output
_____no_output_____
###Markdown
Video
Download video
###Code
# Download sample video
!curl -o input.mp4 https://www.sample-videos.com/video123/mp4/720/big_buck_bunny_720p_1mb.mp4
###Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1030k 100 1030k 0 0 282k 0 0:00:03 0:00:03 --:--:-- 281k
###Markdown
Speed up
###Code
# 5 times, drop frames in between
!ffmpeg -i input.mp4 -vf "setpts=0.20*PTS" output.mp4
# https://stackoverflow.com/questions/63631973/how-can-i-use-python-to-speed-up-a-video-without-dropping-frame-rate
from moviepy.editor import VideoFileClip
import moviepy.video.fx.all as vfx
in_loc = 'input.mp4'
out_loc = 'output.mp4'
speed_up_factor = 5
# Import video clip
clip = VideoFileClip(in_loc)
print("fps: {}".format(clip.fps))
# Modify the FPS
clip = clip.set_fps(clip.fps * speed_up_factor)
# Apply speed up
final = clip.fx(vfx.speedx, speed_up_factor)
print("fps: {}".format(final.fps))
# cut video between
# final = final.subclip(0, 39.5)
# Save video clip
final.write_videofile(out_loc)
###Output
Imageio: 'ffmpeg-linux64-v3.3.1' was not found on your computer; downloading it now.
Try 1. Download from https://github.com/imageio/imageio-binaries/raw/master/ffmpeg/ffmpeg-linux64-v3.3.1 (43.8 MB)
Downloading: 45929032/45929032 bytes (100.0%)
Done
File saved as /root/.imageio/ffmpeg/ffmpeg-linux64-v3.3.1.
fps: 25.0
fps: 125.0
[MoviePy] >>>> Building video output.mp4
[MoviePy] Writing audio in outputTEMP_MPY_wvf_snd.mp3
###Markdown
Cut video
###Code
# https://www.geeksforgeeks.org/moviepy-applying-speed-effect-on-video-clip/
from moviepy.editor import VideoFileClip
import moviepy.video.fx.all as vfx
in_loc = 'input.mp4'
out_loc = 'output.mp4'
# seconds
start_time = 0
end_time = 0.5
VideoFileClip(in_loc).subclip(start_time, end_time).write_videofile(out_loc)
###Output
[MoviePy] >>>> Building video output.mp4
[MoviePy] Writing audio in outputTEMP_MPY_wvf_snd.mp3
###Markdown
Show video in Colab
###Code
from google.colab.patches import cv2_imshow
import cv2
cap = cv2.VideoCapture('input.mp4')
while cap.isOpened():
ret, image = cap.read()
if not ret:
break
cv2_imshow(image) # Note cv2_imshow, not cv2.imshow
cv2.waitKey(1) & 0xff
cv2.destroyAllWindows()
cap.release()
###Output
_____no_output_____
###Markdown
Table of Contents

1  Utilities
1.1  Pandas display options
1.2  Linear distance within a sphere
1.3  Haversine (great circle distance)
1.4  Unit tests

Utilities
###Code
from typing import List, Tuple, Sequence, Optional, Dict
import unittest
from math import radians as deg_2_rad, degrees as rad_2_deg
from math import cos, sin, asin, sqrt
import pandas as pd
%matplotlib inline
def hello():
"Proof of life"
print('hello from utils')
def a():
print('a here')
###Output
_____no_output_____
###Markdown
Pandas display options
###Code
def reset_pd_display(default: bool = False,
max_cols: int = None,
max_rows: int = None,
max_col_width: int = None) -> None:
" Sets / resets pandas display options. "
if default:
resets = ['max_columns', 'max_rows', 'max_colwidth']
for _ in resets:
pd.reset_option(_)
return
if max_cols:
pd.set_option('max_columns', max_cols)
if max_rows:
pd.set_option('max_rows', max_rows)
if max_col_width:
pd.set_option('max_colwidth', max_col_width)
###Output
_____no_output_____
###Markdown
Linear distance within a sphere

Function to figure out the distance between an earthquake hypocenter and the bottom of a fracking well. The USGS measures quake depth (km), magnitude, time, and location of every earthquake. From this we can derive the distance between the actual fracking site (typically 2-3 km deep) and the actual earthquake site (hypocenter).

We want the actual distance between the earthquake hypocenter and the fracking location. Since we're looking for a causal relationship fracking -> earthquakes, this is reasonable (IMHO).

Using a bit of trig (the law of cosines):

a = earth_radius - quake_depth
b = earth_radius - frack_depth
theta = (haversine_dist(fracking_site, quake_epicenter) / earth_circumference) * 2 * pi
linear_dist = c = sqrt(a**2 + b**2 - 2*a*b*cos(theta))
###Code
def linear_dist_sphere(radius: float, surf_dist: float, depth_a: float, depth_b: float) -> float:
"""
The purpose of this function is to find the actual, linear distance between the hypocenter of
    an earthquake and the bottom of a fracking well. It's a more general problem/solution, though:
returns c: Linear distance through a sphere between two points at or beneath the surface of a sphere
inputs:
radius: radius of the sphere;
surf_dist: 'great circle' distance between the epicenters of the two points;
depth_a, depth_b: depth of each of the two points beneath the surface.
"""
from math import cos, pi, sqrt
    circumference = 2 * pi * radius
    theta_rad = (surf_dist / circumference) * ( 2 * pi )
a = radius - depth_a
b = radius - depth_b
c = sqrt(a**2 + b**2 - 2 * a * b * cos(theta_rad))
return c
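# Worked example (illustrative numbers): a quake 10 km deep whose epicenter is 20 km from a
# well fracked to 3 km depth, on a sphere of radius ~6367 km:
# a = 6357, b = 6364, theta = 20/6367 rad, c = sqrt(a**2 + b**2 - 2*a*b*cos(theta)) ~= 21.2 km
assert abs(linear_dist_sphere(6367, 20, 10, 3) - 21.2) < 0.1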
###Output
_____no_output_____
###Markdown
Haversine (great circle distance)
###Code
def haversine(long0: float, lat0: float, long1: float, lat1: float) -> float:
"""
Calculate the surface (great circle) distance between two points
on the earth. Returns distance in km.
"""
RADIUS_EARTH_KM = 6_367 #3956 miles
# Degrees -> radians
    long0, lat0, long1, lat1 = map(deg_2_rad, [long0, lat0, long1, lat1])
# Haversine formula https://en.wikipedia.org/wiki/Haversine_formula
dlong = long1 - long0
dlat = lat1 - lat0
a = sin(dlat/2)**2 + cos(lat0) * cos(lat1) * sin(dlong/2)**2
c = 2 * asin(sqrt(a))
return RADIUS_EARTH_KM * c
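# Sanity check: one degree of latitude should span roughly 111 km for this Earth radius
assert abs(haversine(0, 0, 0, 1) - 111.1) < 0.5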
###Output
_____no_output_____
###Markdown
Unit tests
###Code
import unittest
import sys
def run_tests():
""" Run test in a notebook or command line.
    Note that this notebook can be converted to a straight-up
*.py script by going:
$ ipynb-py-convert <notebook name> <script name>
"""
if 'ipykernel_launcher.py' in sys.argv[0]:
unittest.main(argv=[''], exit=False)
else:
unittest.main()
class UtilTests(unittest.TestCase):
def test_haversine(self):
"""
For more distances and geo-coordinates, cf:
https://www.transtats.bts.gov/Distance.asp
https://www.latlong.net/category/airports-236-19.html
Tests are known coordinate and distances between airports.
"""
tests = (
{'p0': 'IAD', 'lat0': 38.9531, 'long0': 77.4565,
'p1': 'ORD', 'lat1': 41.9803, 'long1': 87.9090,
'target': 946.},
{'p0': 'LAS', 'lat0': 36.1699, 'long0': 115.1398,
'p1': 'JFK', 'lat1': 40.6413, 'long1': 73.7781,
'target': 3_618.},
{'p0': 'DEN', 'lat0': 39.8561, 'long0': 104.6737,
'p1': 'MCO', 'lat1': 28.4312, 'long1': 81.3081,
'target': 2_488.},
)
# Test to within 1% of the expected value.
for t in tests:
result = haversine(t['long0'], t['lat0'], t['long1'], t['lat1'])
self.assertAlmostEqual(
round(result, 0), t['target'], delta=.01*result)
def test_linear_distance_sphere(self):
# Test the linear distance between two points, given epicenters and depths; and the radius of the sphere.
tests = (
# If the quake and fracking op are at the surface, target distance ~= surface dist
{'radius': 10, 'surf_dist': 10, 'depth_a': 0, 'depth_b': 0, 'target': 10},
# ... that should not depend on the diameter of the Earth.
{'radius': 1000, 'surf_dist': 10,
'depth_a': 0, 'depth_b': 0, 'target': 10},
# Half way to the middle of the earth, the distance should be half of surface dist
{'radius': 1000, 'surf_dist': 10,
'depth_a': 500, 'depth_b': 500, 'target': 5},
# Fracking on the surface, earthquake as deep as epicenters are apart makes distance = hyp of a right triangle
{'radius': 1000, 'surf_dist': 10, 'depth_a': 10,
             'depth_b': 0, 'target': 14.142},
)
for t in tests:
result = linear_dist_sphere(
t['radius'], t['surf_dist'], t['depth_a'], t['depth_b'])
self.assertAlmostEqual(result, t['target'], delta=1)
if __name__ == '__main__':
run_tests()
###Output
..
----------------------------------------------------------------------
Ran 2 tests in 0.003s
OK
###Markdown
###Code
import numpy as np
import h5py
import scipy.io as sio
import matplotlib.pyplot as plt
import time
from scipy.signal import convolve2d
def showMat(matfile,varname):
f = h5py.File(matfile,'r')
data = f[varname]
data = np.array(data) # For converting to a NumPy array
data = data.transpose(1,2,0)
img = data[:,:,[10,20,30]]
img = img / np.max(img)
plt.figure(figsize = (20,20))
plt.imshow(img)
def getMat(matfile,varname):
f = h5py.File(matfile,'r')
data = f[varname]
data = np.array(data) # For converting to a NumPy array
data = data.transpose(1,2,0)
img = data[:,:,[10,20,30]]
img = img / np.max(img)
return img
def showPatch(matfile,varname,p1,p2):
try:
f = sio.loadmat(matfile)
data = f[varname]
data = np.array(data) # For converting to a NumPy array
except NotImplementedError:
f = h5py.File(matfile, 'r')
data = f[varname]
data = np.array(data) # For converting to a NumPy array
data = data.transpose(1,2,0)
except:
ValueError('could not read at all...')
print(p1[0],p2[0],p1[1],p2[1])
#f = h5py.File(matfile,'r')
print(data.shape)
print("p1[0]",p1[0],"p2[0]",p2[0])
data = data[p1[0]:p2[0],p1[1]:p2[1],:]
print("data shape2",data.shape)
img = data[:,:,[20,20,20]]
img = img / np.max(img)
#return img
plt.figure(figsize = (7,7))
plt.imshow(img)
def getMat(matfile):
try:
f = sio.loadmat(matfile)
except NotImplementedError:
f = h5py.File(matfile, 'r')
except:
ValueError('could not read at all...')
return f
def getPatch(matfile,varname,p1,p2):
try:
f = sio.loadmat(matfile)
data = f[varname]
data = np.array(data) # For converting to a NumPy array
except NotImplementedError:
f = h5py.File(matfile, 'r')
data = f[varname]
data = np.array(data) # For converting to a NumPy array
data = data.transpose(1,2,0)
except:
ValueError('could not read at all...')
print(p1[0],p2[0],p1[1],p2[1])
#f = h5py.File(matfile,'r')
print(data.shape)
print("p1[0]",p1[0],"p2[0]",p2[0])
data = data[p1[0]:p2[0],p1[1]:p2[1],:]
print("data shape2",data.shape)
img = data[:,:,[10,20,30]]
img = img / np.max(img)
return img
def getSpectralPatch(matfile,varname,p1,p2):
try:
f = sio.loadmat(matfile)
data = f[varname]
data = np.array(data) # For converting to a NumPy array
except NotImplementedError:
f = h5py.File(matfile, 'r')
data = f[varname]
data = np.array(data) # For converting to a NumPy array
data = data.transpose(1,2,0)
except:
ValueError('could not read at all...')
print(p1[0],p2[0],p1[1],p2[1])
#f = h5py.File(matfile,'r')
print(data.shape)
print("p1[0]",p1[0],"p2[0]",p2[0])
data = data[p1[0]:p2[0],p1[1]:p2[1],:]
print("data shape2",data.shape)
img = data
img = img / np.max(img)
return img
def decimateMatrix(data,factor,algo):
if algo == 1:
result = decimateMatrix1(data,factor)
elif algo == 2:
result = decimateMatrix2(data,factor)
return result
def decimateMatrix1(data,factor):
    # Naive loop implementation; note that the 2x2 window and the /4 below assume factor == 2
    start_time = time.time()
x,y,z = data.shape
x1dec1 = np.zeros((x//factor,y//factor));
x1dec = np.zeros((x//factor,y//factor,z))
xx = 0
yy = 0
step = factor
for k in range(z):
for i in range(0,x-1,step):
for j in range(0,y-1,step):
Xc = data[i:i+2,j:j+2,k]
x1dec1[xx,yy] = sum(sum(Xc))/4
#print('Xc=',Xc)
#print("i=",i,"j=",j,"K=",k,"[",xx,yy,"]",'x1dec[xx,yy]',x1dec[xx,yy],'sum(sum(Xc)) = ',sum(sum(Xc)))
yy=yy+1
xx=xx+1
yy=0
xx=0
x1dec[:,:,k] = x1dec1
elapsed_time = (time.time() - start_time)
print("elapsed time",elapsed_time)
return x1dec
def decimateMatrix2(data,factor):
n = factor
x,y,z = data.shape
kernel = np.ones((n, n))
msr = np.zeros((x//n,y//n,z))
for i in range(z):
convolved = convolve2d(data[:,:,i], kernel, mode='valid')
a_downsampled = convolved[::n, ::n] / (n*n)
msr[:,:,i] = a_downsampled
return msr
def decimateMatrix3(data,factor):
    # Reshape-based 2D block sum (unlike variants 1 and 2, this one expects a single 2D band);
    # after the reshape, the blocks live on axes 1 and 3, so those are the axes summed over.
    a = np.size(data,0)//factor
    b = np.size(data,1)//factor
    return np.sum(data[:a*factor, :b*factor].reshape(a, factor, b, factor), axis=(1, 3))
import torch
from torch.utils.data import Dataset as BaseDataset
import torchvision.transforms as transforms
import os
import pandas as pd
import h5py
import numpy as np
from torch.utils.data import random_split
import cv2
args = { "bands":40 }
def calculateMeasurements(batch,h):
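    # Each of the l spectral bands is masked elementwise by h, shifted horizontally by its
    # band index, and the shifted bands are summed into one 2D measurement of width m + l - 1.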
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
b,l,n,m = batch.size()
# print("calculateMsr batch size b,l,n,m:",b,l,n,m)
result = []
for k in range(b):
img = batch[k,:,:,:]
msr = torch.zeros((n,m+l-1))
for i in range(l):
temp = torch.zeros((n,m+l-1))
temp[:,i:m+i] = img[i,:,:] * h
msr = msr + temp
result.append(msr)
rsl = torch.stack(result).unsqueeze(1)
# print("calculate_msr_result",rsl.shape)
return torch.stack(result).unsqueeze(1).to(device)
def calculateMeasurementsFromMsr(batch,h):
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
b,l,n,m = batch.size()
#print("calculateMsr batch size b,l,n,m:",b,l,n,m)
result = []
for k in range(b):
img = batch[k,:,:,:]
msr = torch.zeros((n,m))
for i in range(l):
msr = msr + img[i,:,:]
result.append(msr)
rsl = torch.stack(result).unsqueeze(1)
# print("calculate_msr_result",rsl.shape)
return torch.stack(result).unsqueeze(1).to(device)
class Dataset(BaseDataset):
def __init__(self, data_dir, transform=None,data_type="train"):
path2data=os.path.join(data_dir,data_type)
filenames = os.listdir(path2data)
self.full_filenames = [os.path.join(path2data, f) for f in filenames]
csv_filename=data_type+"_labels.csv"
path2csvLabels=os.path.join(data_dir,csv_filename)
labels_df=pd.read_csv(path2csvLabels)
labels_df.set_index("filename", inplace=True)
self.labels = [labels_df.loc[filename].values[1] for filename in filenames]
if transform is not None:
print("applying transform...")
self.transform = transform
def __len__(self):
return len(self.full_filenames)
def __getitem__(self, idx):
try:
image = sio.loadmat(self.full_filenames[idx])
except NotImplementedError:
image = h5py.File(self.full_filenames[idx],'r')
except:
ValueError('could not read at all...')
image = image.get("X1")
image = np.array(image)
image = decimateMatrix(image,2,2)
print("shape after decimate...",image.shape)
image = image[182:182+96,300:300+96+40]
image = image / np.max(image)
h = np.ones((image.shape[0],image.shape[1]),np.int8)
image = self.transform(image)
image = image.unsqueeze(0)
image = calculateMeasurementsFromMsr(image,h)
print("shape after calculateMeasurementsFromMsr ", image.shape)
image = image.squeeze(0)
print("shape sent to forward ", image.shape)
image = image.float()
return image, self.labels[idx]
data_transformer = transforms.Compose(
[transforms.ToTensor(),
# transforms.Resize(96)
] )
test_real_ds = Dataset("/content/drive/MyDrive/MAESTRIA/lime_dataset/real_msr/tomas/01mar2022/toma2/vanilla",transform=data_transformer,data_type="test")
trainval_ds = Dataset("/content/drive/MyDrive/MAESTRIA/lime_dataset/resnet1b/resnet1b",transform=data_transformer,data_type="train")
test0_ds = Dataset("/content/drive/MyDrive/MAESTRIA/lime_dataset/resnet1b/resnet1b",transform=data_transformer,data_type="test")
len_dataset = len(trainval_ds)
len_train = int(0.8*len_dataset)
len_val = len_dataset-len_train
train_ds,val_ds = random_split(trainval_ds,[len_train,len_val])
import matplotlib.pyplot as plt
img, label = test_real_ds[1]
img = img.squeeze(0)
#type(img)
img = img.cpu()
#img = cv2.rotate(img,cv2.ROTATE_90_CLOCKWISE)
plt.imshow(img)
x = img.unsqueeze(0).unsqueeze(0)  # x is needed by the sum below
#print(x.shape)
x = x.sum(axis=1).unsqueeze(0).repeat(1,3,1,1)
print(x.shape)
#matfile = '/content/drive/MyDrive/MAESTRIA/lime_dataset/real_msr/tomas/01mar2022/toma2/gt/Limon9_ac1_gt.mat'
matfile = '/content/drive/MyDrive/MAESTRIA/lime_dataset/real_msr/tomas/01mar2022/toma2/vanilla/test/Limon6_wp1_vanilla.mat'
#m = getPatch(matfile,'X1',[360,633],[557,833])
m = getSpectralPatch(matfile,'X1',[0,0],[1038,1388])
m = decimateMatrix(m,2,2)
print("decimada",m.shape)
#m = m[181:278,318:416,:]
m = m[182:182+96,300:300+96+40,:]
msr = np.zeros((96,96+40))
#plt.figure(figsize=(10,10))
#plt.imshow(m[:,:,[20,25,30]])
for i in range(40):
msr = msr + m[:,:,i]
#plt.imshow(m[:,:,i])
#plt.pause(1)
plt.imshow(msr)
#h = decimateMatrix(m,2)
from scipy import io
class TrainDataset(BaseDataset):
def __init__(self, data_dir, transform=None,data_type="train"):
# path to images
path2data=os.path.join(data_dir,data_type)
# get a list of images
filenames = os.listdir(path2data)
# get the full path to images
self.full_filenames = [os.path.join(path2data, f) for f in filenames]
# labels are in a csv file named train_labels.csv
csv_filename=data_type+"_labels.csv"
path2csvLabels=os.path.join(data_dir,csv_filename)
labels_df=pd.read_csv(path2csvLabels)
# set data frame index to id
labels_df.set_index("filename", inplace=True)
# obtain labels from data frame
self.labels = [labels_df.loc[filename].values[1] for filename in filenames]
if transform is not None:
print("applying transform...")
self.transform = transform
# def get_filename(self,idx):
# return self.full_filenames[idx]
def __len__(self):
# return size of dataset
return len(self.full_filenames)
def __getitem__(self, idx):
# open image, apply transforms and return with label
image = io.loadmat(self.full_filenames[idx]) # mat file
image = image["cube"]
image = cv2.resize(image,dsize=(96,96), interpolation=cv2.INTER_CUBIC)
image = image.transpose(0,1,2)
image = self.transform(image)
return image, self.labels[idx]
data_transformer = transforms.Compose(
[transforms.ToTensor(),
# transforms.Resize(96)
] )
# trainval_ds = Dataset("./resnet_spectral",transform=data_transformer,data_type="train")
# test0_ds = Dataset("./resnet_spectral",transform=data_transformer,data_type="test")
trainval_ds = TrainDataset("/content/drive/MyDrive/MAESTRIA/lime_dataset/resnet1b/resnet1b",transform=data_transformer,data_type="train")
test0_ds = TrainDataset("/content/drive/MyDrive/MAESTRIA/lime_dataset/resnet1b/resnet1b",transform=data_transformer,data_type="test")
len_dataset = len(trainval_ds)
len_train = int(0.8*len_dataset)
len_val = len_dataset-len_train
train_ds,val_ds = random_split(trainval_ds,[len_train,len_val])
import matplotlib.pyplot as plt
from PIL import Image
h = torch.bernoulli( torch.empty((96, 96),requires_grad=False).uniform_(0,1)).round()
#plt.imshow(h)
#plt.pause(1)
#img, label = val_ds[15]
img, label = test_real_ds[4]
img = img.squeeze(0).cpu()
#img2 = img.cpu().transpose(1,2,0)
print(img.shape)
# for i in range(40):
# plt.imshow(img2[:,:,i])
# plt.pause(0.1)
# img = img.unsqueeze(0)
# img = calculateMeasurements(img,h)
# img = img.squeeze(0).squeeze(0)
# img = img.cpu()
# print(img.shape)
# print(type(img))
# plt.imshow(img)
!mkdir models
!mkdir ca
!cp /content/drive/MyDrive/best_models/best_weights_vanilla_custom40.pt ./models/best_weights.pt
# positional arguments:
# net CNN to use
# pretrained Pretrained net
# miu Constant factor for regularizer
# train_h To train H matrix
# epochs Epochs to train
# dataset Dataset to use cifar, stl10, custom
# bands Number of bands of datacube or image
# h_factor Factor for initial H matrix
# type train, test, real
#!python /content/drive/MyDrive/classifier_alexnet_spectral_cifar_trainable_H.py vanilla True 0.00001204 True 40 custom_mat 40 1.0 train
#!python /content/drive/MyDrive/MAESTRIA/programas/clasificador/classifier_alexnet_spectral_cifar_trainable_H.py vanilla True 0.0001 True 11 custom_mat_real 41 1.0 real
!python /content/drive/MyDrive/MAESTRIA/programas/clasificador/classifier_alexnet_spectral_cifar_trainable_H.py vanilla True 0.0001 True 11 custom_mat 40 1.0 test
import numpy as np
import matplotlib.pyplot as plt
confusion = np.load("./ca/confusion_test.npy")
print(confusion)
import numpy as np
import h5py
import scipy.io as sio
import matplotlib.pyplot as plt
from PIL import Image
import cv2
matfile = '/content/drive/MyDrive/MAESTRIA/lime_dataset/Medidas_01_marzo/Escenas/Limon1_wp1_vanilla.mat'
#f = h5py.File(matfile,'r')
f = sio.loadmat(matfile)
data = f['X1']
data = np.array(data) # For converting to a NumPy array
plt.figure(figsize=(10,10))
plt.imshow(data[:,:,[10,20,30]]/np.max(data))
a = np.array([[0,1], [2,3]])
a = np.kron(a, np.ones((3,3)))
n = 2
b = a.shape[0]//n
a
matfile = '/content/drive/MyDrive/MAESTRIA/lime_dataset/real_msr/tomas/01mar2022/toma2/vanilla/test/Limon6_wp1_vanilla.mat'
a = getMat(matfile)
a = a["X1"]
from scipy.signal import convolve2d
n = 2
x,y,z = a.shape
print("x,y,z",x,y,z)
kernel = np.ones((n, n))
msr = np.zeros((x//n,y//n,z))
for i in range(z):
convolved = convolve2d(a[:,:,i], kernel, mode='valid')
a_downsampled = convolved[::n, ::n] / n
msr[:,:,i] = a_downsampled
print(msr.shape)
b = decimateMatrix1(a,2)
c = decimateMatrix2(a,2)
#a[0:2,0:2,0]
d = b - c
d
aa = np.ones((2,2,1))
print(aa.shape)
bb = decimateMatrix2(aa,2)
bb
# Xc = data[i:i+1,j:j+1,k]
# x1dec1[xx,yy] = sum(sum(Xc))/4
# print("i=",i,"j=",j,"K=",k,"[",xx,yy,"]",'x1dec[xx,yy]',x1dec[xx,yy],'Xc',Xc)
# yy=yy+1
###Output
(2, 2, 1)
###Markdown
Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Very heavily inspired by the official evaluation script for SQuAD version 2.0, which was modified by the XLNet authors to update the `find_best_threshold` scripts for SQuAD V2.0.

In addition to basic functionality, we also compute additional statistics and plot precision-recall curves if an additional na_prob.json file is provided. This file is expected to map question IDs to the model's predicted probability that a question is unanswerable.

Modified version of "squad_metrics.py" adapted for CUAD.
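As a purely illustrative example (IDs and values made up), such an na_prob.json mapping might look like {"question-id-1": 0.97, "question-id-2": 0.03}, where a value close to 1 means the model is confident the question has no answer in the contract.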
###Code
import collections
import json
import math
import re
import string
import json
from transformers.models.bert import BasicTokenizer
from transformers.utils import logging
logger = logging.get_logger(__name__)
def reformat_predicted_string(remaining_contract, predicted_string):
tokens = predicted_string.split()
assert len(tokens) > 0
end_idx = 0
for i, token in enumerate(tokens):
found = remaining_contract[end_idx:].find(token)
assert found != -1
end_idx += found
if i == 0:
start_idx = end_idx
end_idx += len(tokens[-1])
return remaining_contract[start_idx:end_idx]
def find_char_start_idx(contract, preceeding_tokens, predicted_string):
contract = " ".join(contract.split())
assert predicted_string in contract
if contract.count(predicted_string) == 1:
return contract.find(predicted_string)
start_idx = 0
for token in preceeding_tokens:
found = contract[start_idx:].find(token)
assert found != -1
start_idx += found
start_idx += len(preceeding_tokens[-1])
remaining_str = contract[start_idx:]
remaining_idx = remaining_str.find(predicted_string)
assert remaining_idx != -1
return start_idx + remaining_idx
def normalize_answer(s):
"""Lower text and remove punctuation, articles and extra whitespace."""
def remove_articles(text):
regex = re.compile(r"\b(a|an|the)\b", re.UNICODE)
return re.sub(regex, " ", text)
def white_space_fix(text):
return " ".join(text.split())
def remove_punc(text):
exclude = set(string.punctuation)
return "".join(ch for ch in text if ch not in exclude)
def lower(text):
return text.lower()
return white_space_fix(remove_articles(remove_punc(lower(s))))
def get_tokens(s):
if not s:
return []
return normalize_answer(s).split()
def compute_exact(a_gold, a_pred):
return int(normalize_answer(a_gold) == normalize_answer(a_pred))
def compute_f1(a_gold, a_pred):
gold_toks = get_tokens(a_gold)
pred_toks = get_tokens(a_pred)
common = collections.Counter(gold_toks) & collections.Counter(pred_toks)
num_same = sum(common.values())
if len(gold_toks) == 0 or len(pred_toks) == 0:
# If either is no-answer, then F1 is 1 if they agree, 0 otherwise
return int(gold_toks == pred_toks)
if num_same == 0:
return 0
precision = 1.0 * num_same / len(pred_toks)
recall = 1.0 * num_same / len(gold_toks)
f1 = (2 * precision * recall) / (precision + recall)
return f1
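# Worked example: the gold answer "the cat sat" normalizes to "cat sat" (articles removed),
# the prediction "cat sat down" keeps 3 tokens, and the two share 2 tokens, so
# precision = 2/3, recall = 2/2 = 1.0, and F1 = 2*(2/3)*1.0 / ((2/3) + 1.0) = 0.8
assert abs(compute_f1("the cat sat", "cat sat down") - 0.8) < 1e-9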
def get_raw_scores(examples, preds):
"""
Computes the exact and f1 scores from the examples and the model predictions
"""
exact_scores = {}
f1_scores = {}
for example in examples:
qas_id = example.qas_id
gold_answers = [answer["text"] for answer in example.answers if normalize_answer(answer["text"])]
if not gold_answers:
# For unanswerable questions, only correct answer is empty string
gold_answers = [""]
if qas_id not in preds:
print("Missing prediction for %s" % qas_id)
continue
prediction = preds[qas_id]
exact_scores[qas_id] = max(compute_exact(a, prediction) for a in gold_answers)
f1_scores[qas_id] = max(compute_f1(a, prediction) for a in gold_answers)
return exact_scores, f1_scores
def apply_no_ans_threshold(scores, na_probs, qid_to_has_ans, na_prob_thresh):
new_scores = {}
for qid, s in scores.items():
pred_na = na_probs[qid] > na_prob_thresh
if pred_na:
new_scores[qid] = float(not qid_to_has_ans[qid])
else:
new_scores[qid] = s
return new_scores
def make_eval_dict(exact_scores, f1_scores, qid_list=None):
if not qid_list:
total = len(exact_scores)
return collections.OrderedDict(
[
("exact", 100.0 * sum(exact_scores.values()) / total),
("f1", 100.0 * sum(f1_scores.values()) / total),
("total", total),
]
)
else:
total = len(qid_list)
return collections.OrderedDict(
[
("exact", 100.0 * sum(exact_scores[k] for k in qid_list) / total),
("f1", 100.0 * sum(f1_scores[k] for k in qid_list) / total),
("total", total),
]
)
def merge_eval(main_eval, new_eval, prefix):
for k in new_eval:
main_eval["%s_%s" % (prefix, k)] = new_eval[k]
def find_best_thresh_v2(preds, scores, na_probs, qid_to_has_ans):
num_no_ans = sum(1 for k in qid_to_has_ans if not qid_to_has_ans[k])
cur_score = num_no_ans
best_score = cur_score
best_thresh = 0.0
qid_list = sorted(na_probs, key=lambda k: na_probs[k])
for i, qid in enumerate(qid_list):
if qid not in scores:
continue
if qid_to_has_ans[qid]:
diff = scores[qid]
else:
if preds[qid]:
diff = -1
else:
diff = 0
cur_score += diff
if cur_score > best_score:
best_score = cur_score
best_thresh = na_probs[qid]
has_ans_score, has_ans_cnt = 0, 0
for qid in qid_list:
if not qid_to_has_ans[qid]:
continue
has_ans_cnt += 1
if qid not in scores:
continue
has_ans_score += scores[qid]
return 100.0 * best_score / len(scores), best_thresh, 1.0 * has_ans_score / has_ans_cnt
def find_all_best_thresh_v2(main_eval, preds, exact_raw, f1_raw, na_probs, qid_to_has_ans):
best_exact, exact_thresh, has_ans_exact = find_best_thresh_v2(preds, exact_raw, na_probs, qid_to_has_ans)
best_f1, f1_thresh, has_ans_f1 = find_best_thresh_v2(preds, f1_raw, na_probs, qid_to_has_ans)
# NOTE: For CUAD, which is about finding needles in haystacks and for which different answers should be treated
# differently, these metrics don't make complete sense. We ignore them, but don't remove them for simplicity.
main_eval["best_exact"] = best_exact
main_eval["best_exact_thresh"] = exact_thresh
main_eval["best_f1"] = best_f1
main_eval["best_f1_thresh"] = f1_thresh
main_eval["has_ans_exact"] = has_ans_exact
main_eval["has_ans_f1"] = has_ans_f1
def find_best_thresh(preds, scores, na_probs, qid_to_has_ans):
num_no_ans = sum(1 for k in qid_to_has_ans if not qid_to_has_ans[k])
cur_score = num_no_ans
best_score = cur_score
best_thresh = 0.0
qid_list = sorted(na_probs, key=lambda k: na_probs[k])
for _, qid in enumerate(qid_list):
if qid not in scores:
continue
if qid_to_has_ans[qid]:
diff = scores[qid]
else:
if preds[qid]:
diff = -1
else:
diff = 0
cur_score += diff
if cur_score > best_score:
best_score = cur_score
best_thresh = na_probs[qid]
return 100.0 * best_score / len(scores), best_thresh
def find_all_best_thresh(main_eval, preds, exact_raw, f1_raw, na_probs, qid_to_has_ans):
best_exact, exact_thresh = find_best_thresh(preds, exact_raw, na_probs, qid_to_has_ans)
best_f1, f1_thresh = find_best_thresh(preds, f1_raw, na_probs, qid_to_has_ans)
main_eval["best_exact"] = best_exact
main_eval["best_exact_thresh"] = exact_thresh
main_eval["best_f1"] = best_f1
main_eval["best_f1_thresh"] = f1_thresh
def squad_evaluate(examples, preds, no_answer_probs=None, no_answer_probability_threshold=1.0):
qas_id_to_has_answer = {example.qas_id: bool(example.answers) for example in examples}
has_answer_qids = [qas_id for qas_id, has_answer in qas_id_to_has_answer.items() if has_answer]
no_answer_qids = [qas_id for qas_id, has_answer in qas_id_to_has_answer.items() if not has_answer]
if no_answer_probs is None:
no_answer_probs = {k: 0.0 for k in preds}
exact, f1 = get_raw_scores(examples, preds)
exact_threshold = apply_no_ans_threshold(
exact, no_answer_probs, qas_id_to_has_answer, no_answer_probability_threshold
)
f1_threshold = apply_no_ans_threshold(f1, no_answer_probs, qas_id_to_has_answer, no_answer_probability_threshold)
evaluation = make_eval_dict(exact_threshold, f1_threshold)
if has_answer_qids:
has_ans_eval = make_eval_dict(exact_threshold, f1_threshold, qid_list=has_answer_qids)
merge_eval(evaluation, has_ans_eval, "HasAns")
if no_answer_qids:
no_ans_eval = make_eval_dict(exact_threshold, f1_threshold, qid_list=no_answer_qids)
merge_eval(evaluation, no_ans_eval, "NoAns")
if no_answer_probs:
find_all_best_thresh(evaluation, preds, exact, f1, no_answer_probs, qas_id_to_has_answer)
return evaluation
def get_final_text(pred_text, orig_text, do_lower_case, verbose_logging=False):
"""Project the tokenized prediction back to the original text."""
# When we created the data, we kept track of the alignment between original
# (whitespace tokenized) tokens and our WordPiece tokenized tokens. So
# now `orig_text` contains the span of our original text corresponding to the
# span that we predicted.
#
# However, `orig_text` may contain extra characters that we don't want in
# our prediction.
#
# For example, let's say:
# pred_text = steve smith
# orig_text = Steve Smith's
#
# We don't want to return `orig_text` because it contains the extra "'s".
#
# We don't want to return `pred_text` because it's already been normalized
# (the SQuAD eval script also does punctuation stripping/lower casing but
# our tokenizer does additional normalization like stripping accent
# characters).
#
# What we really want to return is "Steve Smith".
#
# Therefore, we have to apply a semi-complicated alignment heuristic between
# `pred_text` and `orig_text` to get a character-to-character alignment. This
# can fail in certain cases in which case we just return `orig_text`.
def _strip_spaces(text):
ns_chars = []
ns_to_s_map = collections.OrderedDict()
for (i, c) in enumerate(text):
if c == " ":
continue
ns_to_s_map[len(ns_chars)] = i
ns_chars.append(c)
ns_text = "".join(ns_chars)
return (ns_text, ns_to_s_map)
# We first tokenize `orig_text`, strip whitespace from the result
# and `pred_text`, and check if they are the same length. If they are
# NOT the same length, the heuristic has failed. If they are the same
# length, we assume the characters are one-to-one aligned.
tokenizer = BasicTokenizer(do_lower_case=do_lower_case)
tok_text = " ".join(tokenizer.tokenize(orig_text))
start_position = tok_text.find(pred_text)
if start_position == -1:
if verbose_logging:
logger.info("Unable to find text: '%s' in '%s'" % (pred_text, orig_text))
return orig_text
end_position = start_position + len(pred_text) - 1
(orig_ns_text, orig_ns_to_s_map) = _strip_spaces(orig_text)
(tok_ns_text, tok_ns_to_s_map) = _strip_spaces(tok_text)
if len(orig_ns_text) != len(tok_ns_text):
if verbose_logging:
logger.info("Length not equal after stripping spaces: '%s' vs '%s'", orig_ns_text, tok_ns_text)
return orig_text
# We then project the characters in `pred_text` back to `orig_text` using
# the character-to-character alignment.
tok_s_to_ns_map = {}
for (i, tok_index) in tok_ns_to_s_map.items():
tok_s_to_ns_map[tok_index] = i
orig_start_position = None
if start_position in tok_s_to_ns_map:
ns_start_position = tok_s_to_ns_map[start_position]
if ns_start_position in orig_ns_to_s_map:
orig_start_position = orig_ns_to_s_map[ns_start_position]
if orig_start_position is None:
if verbose_logging:
logger.info("Couldn't map start position")
return orig_text
orig_end_position = None
if end_position in tok_s_to_ns_map:
ns_end_position = tok_s_to_ns_map[end_position]
if ns_end_position in orig_ns_to_s_map:
orig_end_position = orig_ns_to_s_map[ns_end_position]
if orig_end_position is None:
if verbose_logging:
logger.info("Couldn't map end position")
return orig_text
output_text = orig_text[orig_start_position : (orig_end_position + 1)]
return output_text
def _get_best_indexes(logits, n_best_size):
"""Get the n-best logits from a list."""
index_and_score = sorted(enumerate(logits), key=lambda x: x[1], reverse=True)
best_indexes = []
for i in range(len(index_and_score)):
if i >= n_best_size:
break
best_indexes.append(index_and_score[i][0])
return best_indexes
def _compute_softmax(scores):
"""Compute softmax probability over raw logits."""
if not scores:
return []
max_score = None
for score in scores:
if max_score is None or score > max_score:
max_score = score
exp_scores = []
total_sum = 0.0
for score in scores:
x = math.exp(score - max_score)
exp_scores.append(x)
total_sum += x
probs = []
for score in exp_scores:
probs.append(score / total_sum)
return probs
def compute_predictions_logits(
json_input_dict,
all_examples,
all_features,
all_results,
n_best_size,
max_answer_length,
do_lower_case,
output_prediction_file,
output_nbest_file,
output_null_log_odds_file,
verbose_logging,
version_2_with_negative,
null_score_diff_threshold,
tokenizer,
):
"""Write final predictions to the json file and log-odds of null if needed."""
if output_prediction_file:
logger.info(f"Writing predictions to: {output_prediction_file}")
if output_nbest_file:
logger.info(f"Writing nbest to: {output_nbest_file}")
if output_null_log_odds_file and version_2_with_negative:
logger.info(f"Writing null_log_odds to: {output_null_log_odds_file}")
example_index_to_features = collections.defaultdict(list)
for feature in all_features:
example_index_to_features[feature.example_index].append(feature)
unique_id_to_result = {}
for result in all_results:
unique_id_to_result[result.unique_id] = result
_PrelimPrediction = collections.namedtuple( # pylint: disable=invalid-name
"PrelimPrediction", ["feature_index", "start_index", "end_index", "start_logit", "end_logit"]
)
all_predictions = collections.OrderedDict()
all_nbest_json = collections.OrderedDict()
scores_diff_json = collections.OrderedDict()
contract_name_to_idx = {}
for idx in range(len(json_input_dict["data"])):
contract_name_to_idx[json_input_dict["data"][idx]["title"]] = idx
for (example_index, example) in enumerate(all_examples):
features = example_index_to_features[example_index]
contract_name = example.title
contract_index = contract_name_to_idx[contract_name]
paragraphs = json_input_dict["data"][contract_index]["paragraphs"]
assert len(paragraphs) == 1
prelim_predictions = []
# keep track of the minimum score of null start+end of position 0
score_null = 1000000 # large and positive
min_null_feature_index = 0 # the paragraph slice with min null score
null_start_logit = 0 # the start logit at the slice with min null score
null_end_logit = 0 # the end logit at the slice with min null score
for (feature_index, feature) in enumerate(features):
result = unique_id_to_result[feature.unique_id]
start_indexes = _get_best_indexes(result.start_logits, n_best_size)
end_indexes = _get_best_indexes(result.end_logits, n_best_size)
# if we could have irrelevant answers, get the min score of irrelevant
if version_2_with_negative:
feature_null_score = result.start_logits[0] + result.end_logits[0]
if feature_null_score < score_null:
score_null = feature_null_score
min_null_feature_index = feature_index
null_start_logit = result.start_logits[0]
null_end_logit = result.end_logits[0]
for start_index in start_indexes:
for end_index in end_indexes:
# We could hypothetically create invalid predictions, e.g., predict
# that the start of the span is in the question. We throw out all
# invalid predictions.
if start_index >= len(feature.tokens):
continue
if end_index >= len(feature.tokens):
continue
if start_index not in feature.token_to_orig_map:
continue
if end_index not in feature.token_to_orig_map:
continue
if not feature.token_is_max_context.get(start_index, False):
continue
if end_index < start_index:
continue
length = end_index - start_index + 1
if length > max_answer_length:
continue
prelim_predictions.append(
_PrelimPrediction(
feature_index=feature_index,
start_index=start_index,
end_index=end_index,
start_logit=result.start_logits[start_index],
end_logit=result.end_logits[end_index],
)
)
if version_2_with_negative:
prelim_predictions.append(
_PrelimPrediction(
feature_index=min_null_feature_index,
start_index=0,
end_index=0,
start_logit=null_start_logit,
end_logit=null_end_logit,
)
)
prelim_predictions = sorted(prelim_predictions, key=lambda x: (x.start_logit + x.end_logit), reverse=True)
_NbestPrediction = collections.namedtuple( # pylint: disable=invalid-name
"NbestPrediction", ["text", "start_logit", "end_logit"]
)
seen_predictions = {}
nbest = []
start_indexes = []
end_indexes = []
for pred in prelim_predictions:
if len(nbest) >= n_best_size:
break
feature = features[pred.feature_index]
if pred.start_index > 0: # this is a non-null prediction
tok_tokens = feature.tokens[pred.start_index : (pred.end_index + 1)]
orig_doc_start = feature.token_to_orig_map[pred.start_index]
orig_doc_end = feature.token_to_orig_map[pred.end_index]
orig_tokens = example.doc_tokens[orig_doc_start : (orig_doc_end + 1)]
tok_text = tokenizer.convert_tokens_to_string(tok_tokens)
# Clean whitespace
tok_text = tok_text.strip()
tok_text = " ".join(tok_text.split())
orig_text = " ".join(orig_tokens)
final_text = get_final_text(tok_text, orig_text, do_lower_case, verbose_logging)
if final_text in seen_predictions:
continue
seen_predictions[final_text] = True
start_indexes.append(orig_doc_start)
end_indexes.append(orig_doc_end)
else:
final_text = ""
seen_predictions[final_text] = True
start_indexes.append(-1)
end_indexes.append(-1)
nbest.append(_NbestPrediction(text=final_text, start_logit=pred.start_logit, end_logit=pred.end_logit))
# if we didn't include the empty option in the n-best, include it
if version_2_with_negative:
if "" not in seen_predictions:
nbest.append(_NbestPrediction(text="", start_logit=null_start_logit, end_logit=null_end_logit))
start_indexes.append(-1)
end_indexes.append(-1)
# In very rare edge cases we could only have single null prediction.
# So we just create a nonce prediction in this case to avoid failure.
if len(nbest) == 1:
nbest.insert(0, _NbestPrediction(text="empty", start_logit=0.0, end_logit=0.0))
start_indexes.append(-1)
end_indexes.append(-1)
# In very rare edge cases we could have no valid predictions. So we
# just create a nonce prediction in this case to avoid failure.
if not nbest:
nbest.append(_NbestPrediction(text="empty", start_logit=0.0, end_logit=0.0))
start_indexes.append(-1)
end_indexes.append(-1)
assert len(nbest) >= 1, "No valid predictions"
assert len(nbest) == len(start_indexes), "nbest length: {}, start_indexes length: {}".format(len(nbest), len(start_indexes))
total_scores = []
best_non_null_entry = None
for entry in nbest:
total_scores.append(entry.start_logit + entry.end_logit)
if not best_non_null_entry:
if entry.text:
best_non_null_entry = entry
probs = _compute_softmax(total_scores)
nbest_json = []
for (i, entry) in enumerate(nbest):
output = collections.OrderedDict()
output["text"] = entry.text
output["probability"] = probs[i]
output["start_logit"] = entry.start_logit
output["end_logit"] = entry.end_logit
output["token_doc_start"] = start_indexes[i]
output["token_doc_end"] = end_indexes[i]
nbest_json.append(output)
assert len(nbest_json) >= 1, "No valid predictions"
if not version_2_with_negative:
all_predictions[example.qas_id] = nbest_json[0]["text"]
else:
# predict "" iff the null score - the score of best non-null > threshold
score_diff = score_null - best_non_null_entry.start_logit - (best_non_null_entry.end_logit)
scores_diff_json[example.qas_id] = score_diff
if score_diff > null_score_diff_threshold:
all_predictions[example.qas_id] = ""
else:
all_predictions[example.qas_id] = best_non_null_entry.text
all_nbest_json[example.qas_id] = nbest_json
if output_prediction_file:
with open(output_prediction_file, "w") as writer:
writer.write(json.dumps(all_predictions, indent=4) + "\n")
if output_nbest_file:
with open(output_nbest_file, "w") as writer:
writer.write(json.dumps(all_nbest_json, indent=4) + "\n")
if output_null_log_odds_file and version_2_with_negative:
with open(output_null_log_odds_file, "w") as writer:
writer.write(json.dumps(scores_diff_json, indent=4) + "\n")
return all_predictions
###Output
_____no_output_____
###Markdown
fruit dataset
###Code
import os
import numpy as np
from PIL import Image
from skimage.color import rgb2lab
import matplotlib.pyplot as plt
root = 'data/train'
size = [224,224]
imgs_a = []
imgs_b = []
for subdir in os.listdir(root):
root_sub = os.path.join(root, subdir)
for img_name in os.listdir(root_sub):
root_img = os.path.join(root_sub, img_name)
img = Image.open(root_img).resize(size)
img = np.asarray(img)
img_lab = rgb2lab(img)
imgs_a.append(img_lab[:,:,1])
imgs_b.append(img_lab[:,:,2])
a = np.stack(imgs_a, axis=0)
b = np.stack(imgs_b, axis=0)
x = a.reshape([-1,])
y = b.reshape([-1,])
idx_x = (x > 110) | (x < -110) | np.isnan(x)
idx_y = (y > 110) | (y < -110) | np.isnan(y)
idx = idx_x | idx_y  # drop positions invalid in either channel so the (a, b) pairs stay aligned
x_new = x[~idx]
y_new = y[~idx]
print(x.shape)
print(x_new.shape)
print(y_new.shape)
h_range = np.array([[-110,110], [-110,110]])
h, xaxis, yaxis = np.histogram2d(x_new, y_new, bins=22, range=h_range, density=True)
X,Y = np.meshgrid(xaxis, yaxis)
plt.pcolormesh(X, Y, np.log(h.T))
plt.colorbar()
plt.title('logP(a,b), fruit dataset')
plt.xlabel('a')
plt.ylabel('b')
plt.show()
###Output
C:\Users\Hao\AppData\Local\Temp/ipykernel_21160/3872900061.py:4: RuntimeWarning: divide by zero encountered in log
plt.pcolormesh(X, Y, np.log(h.T))
###Markdown
\art_images\musemart\training_set\iconography
###Code
root = 'data/art_images/musemart/training_set/iconography/total'
size = [224,224]
imgs_a = []
imgs_b = []
for img_name in os.listdir(root):
root_img = os.path.join(root, img_name)
img = Image.open(root_img).resize(size)
img = np.asarray(img)
img_lab = rgb2lab(img)
imgs_a.append(img_lab[:,:,1])
imgs_b.append(img_lab[:,:,2])
a = np.stack(imgs_a, axis=0)
b = np.stack(imgs_b, axis=0)
x = a.reshape([-1,])
y = b.reshape([-1,])
idx_x = (x > 110) | (x < -110) | np.isnan(x)
idx_y = (y > 110) | (y < -110) | np.isnan(y)
idx = idx_x | idx_y  # drop positions invalid in either channel so the (a, b) pairs stay aligned
x_new = x[~idx]
y_new = y[~idx]
print(x.shape)
print(x_new.shape)
print(y_new.shape)
h_range = np.array([[-110,110], [-110,110]])
h, xaxis, yaxis = np.histogram2d(x_new, y_new, bins=22, range=h_range, density=True)
X,Y = np.meshgrid(xaxis, yaxis)
plt.pcolormesh(X, Y, np.log(h.T))
plt.colorbar()
plt.title('logP(a,b), art image dataset')
plt.xlabel('a')
plt.ylabel('b')
plt.show()
###Output
C:\Users\Hao\AppData\Local\Temp/ipykernel_21160/232295639.py:4: RuntimeWarning: divide by zero encountered in log
plt.pcolormesh(X, Y, np.log(h.T))
###Markdown
VOCdataset\test
###Code
root = 'data/VOCdataset/test'
size = [224,224]
# start from empty (0 x 224) arrays so uninitialized values don't end up in the histogram
a = np.empty((0, size[1]))
b = np.empty((0, size[1]))
for img_name in os.listdir(root):
root_img = os.path.join(root, img_name)
img = Image.open(root_img).resize(size)
img = np.asarray(img)
img_lab = rgb2lab(img)
a = np.concatenate((a, img_lab[:,:,1]), axis=0)
b = np.concatenate((b, img_lab[:,:,2]), axis=0)
x = a.reshape([-1,])
y = b.reshape([-1,])
idx_x = (x > 110) | (x < -110) | np.isnan(x)
idx_y = (y > 110) | (y < -110) | np.isnan(y)
idx = idx_x | idx_y  # drop positions invalid in either channel so the (a, b) pairs stay aligned
x_new = x[~idx]
y_new = y[~idx]
print(x.shape)
print(x_new.shape)
print(y_new.shape)
h_range = np.array([[-110,110], [-110,110]])
h, xaxis, yaxis = np.histogram2d(x_new, y_new, bins=22, range=h_range, density=True)
X,Y = np.meshgrid(xaxis, yaxis)
plt.pcolormesh(X, Y, np.log(h.T))
plt.colorbar()
plt.title('logP(a,b)')
plt.show()
###Output
C:\Users\Hao\AppData\Local\Temp/ipykernel_14676/2837664527.py:1: RuntimeWarning: divide by zero encountered in log
plt.pcolormesh(X, Y, np.log(h.T))
###Markdown
make grid
###Code
import torchvision
import torch
from torchvision import transforms
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
fruit
###Code
data_transforms = transforms.Compose([transforms.Resize([256,256]),transforms.ToTensor()])
fruit_dataset = torchvision.datasets.ImageFolder(root='data/train', transform=data_transforms)
fruit_loader = torch.utils.data.DataLoader(fruit_dataset, batch_size=32, shuffle=True)
imgs = next(iter(fruit_loader))
print(imgs[0].shape)
img_grid = torchvision.utils.make_grid(imgs[0])
# print(img_grid.shape)
img_grid = img_grid.permute([1,2,0])
plt.figure(figsize=[8,20])
plt.imshow(img_grid)
plt.axis('off')
###Output
torch.Size([3, 1034, 2066])
###Markdown
iconography
###Code
data_transforms = transforms.Compose([transforms.Resize([256,256]),transforms.ToTensor()])
fruit_dataset = torchvision.datasets.ImageFolder(root='data/art_images/musemart/training_set/iconography',
transform=data_transforms)
fruit_loader = torch.utils.data.DataLoader(fruit_dataset, batch_size=32, shuffle=True)
imgs = next(iter(fruit_loader))
img_grid = torchvision.utils.make_grid(imgs[0],padding=2)
img_grid = img_grid.permute([1,2,0])
plt.figure(figsize=[8,20])
plt.imshow(img_grid)
plt.axis('off')
###Output
_____no_output_____
###Markdown
VOC
###Code
data_transforms = transforms.Compose([transforms.Resize([256,256]),transforms.ToTensor()])
fruit_dataset = torchvision.datasets.ImageFolder(root='data/VOCdataset/test/',
transform=data_transforms)
fruit_loader = torch.utils.data.DataLoader(fruit_dataset, batch_size=32, shuffle=True)
imgs = next(iter(fruit_loader))
img_grid = torchvision.utils.make_grid(imgs[0],padding=2)
img_grid = img_grid.permute([1,2,0])
plt.figure(figsize=[8,20])
plt.imshow(img_grid)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Utility functions etc`autoreload` etc helps when testing modules created by this notebook - i.e. we can make a change via `%%writefile` and not need to restart the kernel ... most of the time (o:
###Code
%load_ext autoreload
%autoreload 2
from pathlib import Path
###Output
_____no_output_____
###Markdown
Start by making sure the utils directory exists
###Code
Path('utils').mkdir(exist_ok=True)
###Output
_____no_output_____
###Markdown
Create a `utils.all` module so that we can easily import all util packages in one go
###Code
%%writefile 'utils/all.py'
import importlib
from pathlib import Path
for py_file in Path('utils').glob('*.py'):
if 'all' == py_file.stem:
continue # we'll import every .py file in utils except utils/all.py
module = importlib.import_module(f'utils.{py_file.stem}')
# make everything in __all__ available via the current scopes global variables
globals().update({k: getattr(module, k) for k in module.__all__})
# this is like doing a `from module import *`
%%writefile 'utils/plot_history.py'
import matplotlib.pyplot as plt
__all__ = ['plot_history']
def _plot(history_dict, what, ignore_first_n):
plt.clf()
epochs = range(1, len(history_dict[what])+1-ignore_first_n)
plt.plot(epochs, history_dict[what][ignore_first_n:], 'bo', label=f'Training {what}')
if f'val_{what}' in history_dict:
plt.plot(epochs, history_dict[f'val_{what}'][ignore_first_n:], 'b', label=f'Validation {what}')
plt.title(f'Training and validation {what}')
else:
plt.title(f'Training {what}')
plt.xlabel('Epochs')
plt.ylabel(what.title())
plt.legend()
plt.show()
def plot_history(history, ignore_first_n=0):
"""Plot metrics in `history`.
`history` can be a `keras.callbacks.History` or a `keras.callbacks.History.history` like dictionary.
`ignore_first_n` number of epochs to ignore."""
history_dict = history if isinstance(history, dict) else history.history
for k in history_dict:
if k.startswith('val_'):
continue
_plot(history_dict, k, ignore_first_n)
from utils.plot_history import *
history_dict = {
'loss': [0.6907759308815002, 0.6212230920791626],
'accuracy': [0.5249999761581421, 0.7910000085830688],
'val_loss': [0.6763951182365417, 0.6168703436851501],
'val_accuracy': [0.6570000052452087, 0.7419999837875366]}
class MockHistory:
def __init__(self, history):
self.history = history
history = MockHistory(history_dict)
plot_history(history)
history_dict = {
'loss': [0.6907759308815002, 0.6212230920791626],
'accuracy': [0.5249999761581421, 0.7910000085830688]}
history = MockHistory(history_dict)
plot_history(history)
plot_history(history_dict, 1) # not very exciting to look at (o: but we can pass a dict and ignore the 1st epoch
###Output
_____no_output_____
###Markdown
TODO (WIP)This repository is still under development! Still, I wanted to release it before it faded in some remote location of my hard-drive and here it is. Keep in mind I **am not** a python developer. Any feedback is welcome.There is still a lot to do!- More detailed documentation- Export of the fields to file to store a current configuration so that it can be retrieved later- Expose an HTTP interface to generate models on demand- Integration for simulations?- Better support for assemblies- Complete the example model to show all features of the basic class, unlike now :( It starts here! Basic initialization for the notebook, so that cadquery is properly integrated, and the preview on the sidebar is rendered.
###Code
import cadquery as cq
from jupyter_cadquery.cadquery import (PartGroup, Part, Edges, Faces, Vertices, show)
from jupyter_cadquery import set_sidecar, set_defaults
set_defaults(axes=False, grid=True, axes0=True, ortho=True, transparent=True)
set_sidecar("CadQuery", init=True)
###Output
_____no_output_____
###Markdown
Additional libraries specifically needed for this notebook:
###Code
import ipywidgets as widgets
import time
import random
import hashlib
###Output
_____no_output_____
###Markdown
Helper functionsA selection of functions which will be employed across multiple classes and which are nice to have.
###Code
# Generate a (almost surely) unique name by combining a pseudo-random hash and a timestamp.
def unique_name():
return str(int(time.time()))+"_"+hashlib.md5(str(random.randint(0,99999999)).encode('utf-8')).hexdigest()
###Output
_____no_output_____
###Markdown
LOD classSupport class to describe the level of detail we want for the target model to be generated. It is pretty much the same idea as we would find in most game engines, just for CAD design. Ideally we want to use coarser LODs for simplified views or complex assemblies where performance would be affected. We want finer LODs for final models which must be exported for machining/printing.In general we can recognize the following three stages:- Basic volume: to ensure that this object can fit inside another. No holes, no fine details like textures or slots. Often it could be implemented as the convex hull of our object. It is meant to be used for cuts.- Detailed volume: it contains holes and slots, even more so when they have a functional role in the design. It is used for assemblies and fast rendering.- Complete model: in addition to holes and slots, textures, threads and any other fine detail is captured. They are meant for high quality renderings or final exports to be machined/printed.More complex and model-specific flags can be defined for finer control. A possible mapping of these stages onto the class flags is sketched right after the class definition below.
###Code
class LOD:
def __init__(self, threads=False, holes=False):
self.threads = threads
self.holes = holes
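# Illustrative presets (an assumption for this sketch, not part of the original design)
# mapping the three stages described above onto the available flags:
LOD_BASIC = LOD()                              # basic volume: no holes, no threads
LOD_DETAILED = LOD(holes=True)                 # detailed volume: functional holes/slots
LOD_COMPLETE = LOD(threads=True, holes=True)   # complete model: ready for rendering/export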
###Output
_____no_output_____
###Markdown
Model classBase class which can be used to define all future models. This implementation is arbitrary, and there are plenty of alternative paths that might have been followed.**Model** is virtual and should not be instantiated directly.
###Code
class Model:
_shapes = []
_holes = []
_refs = []
def _build(self, LOD):
raise Exception("Model is a virtual class")
return
def _fields(self):
raise Exception("Model is a virtual class")
return
def build(self, LOD = LOD()):
return self._build(LOD)
def fields(self):
return self._fields()
# Standard reduction scheme, based on union of shapes minus union of holes.
# Performance should be profiled to decide which is the best way to implement it.
def draw(self):
if len(self._shapes)==0:
raise Exception("Empty shape list")
tmp = self._shapes[0]
for v in self._shapes:
tmp = v.union(tmp)
for v in self._holes:
tmp = tmp.cut(v)
return tmp
# Show buttons to control the generation in the viewport and exporting files.
def gui(self, LOD = LOD()):
def refresh(model):
self.build(LOD)
display(self.draw())
def export_step(model):
self.build(LOD)
tmp=self.draw()
display(tmp)
tmpFile="./tmp/"+unique_name()+".step"
cq.exporters.export(tmp, tmpFile)
link.value="<a href=\""+tmpFile+"\" download>Dowload STEP</a>"
def export_mesh(model):
self.build(LOD)
tmp=self.draw()
display(tmp)
tmpFile="./tmp/"+unique_name()+".amf"
cq.exporters.export(tmp, tmpFile)
link.value="<a href=\""+tmpFile+"\" download>Dowload mesh</a>"
btn_refresh = widgets.Button(description="Refresh")
btn_export_step = widgets.Button(description="Export STEP")
btn_export_mesh = widgets.Button(description="Export mesh")
link = widgets.HTML("<a>[No file]</a>")
btn_refresh.on_click(refresh)
btn_export_step.on_click(export_step)
btn_export_mesh.on_click(export_mesh)
display(btn_refresh,btn_export_step,btn_export_mesh,link)
return
@property
def shapes(self):
        return self._shapes
@shapes.setter
def shapes(self,val):
raise Exception("shape is readonly")
@property
def holes(self):
return self._holes
@holes.setter
def holes(self,val):
raise Exception("holes is readonly")
@property
def refs(self):
return self._refs
@refs.setter
def refs(self,val):
raise Exception("refs is readonly")
###Output
_____no_output_____
###Markdown
Example of a derived modelA test class derived from Model to show the intended workflow for generating single components. In this case it is a simple prism with a square base and a cylindrical hole centered on the vertical axis.
###Code
class DerivedModel(Model):
_par = {"height":0.5}
def _fields(self):
#w=widgets.IntSlider()
#display(w)
#@widgets.interact(has_hole=True, y=1.0)
#def g(has_hole, y):
# self._shape = [has_hole]
# return (has_hole, y)
#w = widgets.interactive(g, has_hole=True, y=2.0)
#display(w)
        # height is a float, so a FloatSlider fits better than an IntSlider here
        p = widgets.FloatSlider()
        p.value = self._par["height"]
def on_change(v):
self._par["height"] = v['new']
p.observe(on_change, names='value')
display(p)
return
def _build(self, LOD):
self._shapes = [cq.Workplane("front").rect(1,0.5).extrude(self._par["height"])]
###Output
_____no_output_____
###Markdown
We can test this example implementation with a few lines of code. Interactive fields and export buttons can be easily generated! You can play around with the sliders and press refresh to see the updated model, which can also be exported by generating a temporary file and downloading it.
###Code
testModel = DerivedModel()
testModel.build()
display(testModel.draw())
testModel.fields()
testModel.gui()
###Output
Done, using side car 'Cadquery'
###Markdown
###Code
import numpy as np
from sklearn.model_selection import train_test_split, KFold
from sklearn.svm import SVR, SVC
from sklearn.metrics import mean_absolute_error, accuracy_score
def regression_holdout(X, y, seed, test_size):
    '''
    Train an SVR on a single hold-out split and print train/test MAE.
    Parameters
    ----------
    X : array-like
        Feature matrix.
    y : array-like
        Regression targets.
    seed : int
        Random state for the train/test split.
    test_size : float
        Fraction of samples held out for testing (passed to train_test_split).
    Returns
    -------
    None.
    '''
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=seed, shuffle=True)
#print("*** SEED:", SEED)
#print("Training age:", y_train.describe())
#print("Test age:", y_test.describe())
#print()
clf = SVR(kernel='rbf', degree=3, gamma='scale', coef0=0.0, tol=0.001, C=1.0, epsilon=0.1, shrinking=True, cache_size=200, verbose=False, max_iter=- 1)
clf.fit(X_train, y_train)
y_pred_train = clf.predict(X_train)
y_pred_test = clf.predict(X_test)
print("MAE train:", mean_absolute_error(y_train, y_pred_train))
print("MAE test:", mean_absolute_error(y_test, y_pred_test))
print()
################################################################################
def classification_holdout(X, y, seed, test_size, stratify):
    '''
    Train an SVC on a single hold-out split and print train/test accuracy.
    Parameters
    ----------
    X : array-like
        Feature matrix.
    y : array-like
        Class labels.
    seed : int
        Random state for the train/test split.
    test_size : float
        Fraction of samples held out for testing (passed to train_test_split).
    stratify : array-like or None
        Labels to stratify the split on (typically y), or None.
    Returns
    -------
    None.
    '''
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=seed, shuffle=True, stratify=stratify)
clf = SVC(C=1.0, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=- 1, decision_function_shape='ovr', break_ties=False, random_state=None)
clf.fit(X_train, y_train)
y_pred_train = clf.predict(X_train)
y_pred_test = clf.predict(X_test)
print("ACC train", accuracy_score(y_train, y_pred_train))
print("ACC test", accuracy_score(y_test, y_pred_test))
print()
################################################################################
def regression_CV(X, y, seed, n_folds):
    '''
    K-fold cross-validation of an SVR, printing MAE per fold.
    Parameters
    ----------
    X : array-like
        Feature matrix.
    y : array-like
        Regression targets.
    seed : int
        Random state for the K-fold shuffling.
    n_folds : int
        Number of folds.
    Returns
    -------
    MAE_train, MAE_test : list of float
        Per-fold mean absolute errors on the training and test folds.
    '''
X = np.asarray(X)
y = np.asarray(y)
kf = KFold(n_splits= n_folds, random_state=seed, shuffle=True)
Iteration = 1
MAE_train = []
MAE_test = []
for train_index, test_index in kf.split(X):
print("*** Iteration:", Iteration)
#print("TRAIN:", train_index, "TEST:", test_index)
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
clf = SVR(kernel='rbf', degree=3, gamma='scale', coef0=0.0, tol=0.001, C=1.0, epsilon=0.1, shrinking=True, cache_size=200, verbose=False, max_iter=- 1)
clf.fit(X_train, y_train)
y_pred_train = clf.predict(X_train)
y_pred_test = clf.predict(X_test)
MAE_tr= mean_absolute_error(y_train, y_pred_train)
MAE_train.append(MAE_tr)
MAE_te= mean_absolute_error(y_test, y_pred_test)
MAE_test.append(MAE_te)
print("MAE train:", MAE_tr)
print("MAE test:", MAE_te)
print()
Iteration += 1
return MAE_train, MAE_test
################################################################################
def classification_CV(X, y, seed, n_folds):
    '''
    K-fold cross-validation of an SVC, printing accuracy per fold.
    Parameters
    ----------
    X : array-like
        Feature matrix.
    y : array-like
        Class labels.
    seed : int
        Random state for the K-fold shuffling.
    n_folds : int
        Number of folds.
    Returns
    -------
    ACC_train, ACC_test : list of float
        Per-fold accuracies on the training and test folds.
    '''
X = np.asarray(X)
y = np.asarray(y)
kf = KFold(n_splits= n_folds, random_state=seed, shuffle=True)
Iteration = 1
ACC_train = []
ACC_test = []
for train_index, test_index in kf.split(X):
print("* Iteration:", Iteration)
#print("TRAIN:", train_index, "TEST:", test_index)
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
clf = SVC(C=1.0, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=- 1, decision_function_shape='ovr', break_ties=False, random_state=None)
clf.fit(X_train, y_train)
y_pred_train = clf.predict(X_train)
y_pred_test = clf.predict(X_test)
ACC_tr= accuracy_score(y_train, y_pred_train)
ACC_train.append(ACC_tr)
ACC_te= accuracy_score(y_test, y_pred_test)
ACC_test.append(ACC_te)
print("ACC train", ACC_tr)
print("ACC test", ACC_te)
print()
Iteration += 1
return ACC_train, ACC_test
################################################################################
def print_to_std(score_train, score_test, score_name):
    '''
    Print the mean and standard deviation of train/test scores.
    Parameters
    ----------
    score_train : list of float
        Scores obtained on the training folds/splits.
    score_test : list of float
        Scores obtained on the test folds/splits.
    score_name : str
        Name of the metric (e.g. "MAE" or "ACC") used in the printout.
    Returns
    -------
    None.
    '''
print("#########################################################")
print("Average " + score_name + " train:", np.mean(score_train))
print("Std " + score_name + " train:", np.std(score_train))
print("Average " + score_name + " test:", np.mean(score_test))
print("Std " + score_name + " test:", np.std(score_test))
print("#########################################################")
print()
################################################################################
def regression_holdout_val_set(X, y, seed, test_set_size, val_set_size, C):
    '''
    Hold-out model selection for SVR: pick C on a validation split, then
    retrain on train+validation and report MAE on the test split.
    Parameters
    ----------
    X : array-like
        Feature matrix.
    y : array-like
        Regression targets.
    seed : int
        Random state for the splits.
    test_set_size : float
        Fraction of the data held out for testing.
    val_set_size : float
        Fraction of the remaining data held out for validation.
    C : list of float
        Candidate values of the SVR regularization parameter C.
    Returns
    -------
    None.
    '''
X_train_val, X_test, y_train_val, y_test = train_test_split(X, y, test_size=test_set_size, random_state=seed, shuffle=True)
X_train, X_val, y_train, y_val = train_test_split(X_train_val, y_train_val, test_size=val_set_size, random_state=seed, shuffle=True)
C = C #[0.1, 1, 100, 1000]
MAE_train = []
MAE_val = []
for c in C:
print("Training with c:", c)
clf = SVR(kernel='rbf', degree=3, gamma='scale', coef0=0.0, tol=0.001, C=c, epsilon=0.1, shrinking=True, cache_size=200, verbose=False, max_iter=- 1)
clf.fit(X_train, y_train)
y_pred_train = clf.predict(X_train)
y_pred_val = clf.predict(X_val)
MAE_tr= mean_absolute_error(y_train, y_pred_train)
MAE_train.append(MAE_tr)
MAE_v= mean_absolute_error(y_val, y_pred_val)
MAE_val.append(MAE_v)
print("MAE train:", MAE_tr)
print("MAE val:", MAE_v)
print()
#MAE_best = np.min(MAE_val)
MAE_best_ind = np.argmin(MAE_val)
C_best = C[MAE_best_ind]
print("Best C param on validation set:", C_best)
clf = SVR(kernel='rbf', degree=3, gamma='scale', coef0=0.0, tol=0.001, C=C_best, epsilon=0.1, shrinking=True, cache_size=200, verbose=False, max_iter=- 1)
clf.fit(X_train_val, y_train_val)
y_pred_train_val = clf.predict(X_train_val)
y_pred_test = clf.predict(X_test)
MAE_tr_v= mean_absolute_error(y_train_val, y_pred_train_val)
MAE_te= mean_absolute_error(y_test, y_pred_test)
print("MAE train_val:", MAE_tr_v)
print("MAE test:", MAE_te)
print()
################################################################################
def classification_holdout_val_set(X, y, seed, test_set_size, val_set_size, C):
    '''
    Hold-out model selection for SVC: pick C on a validation split, then
    retrain on train+validation and report accuracy on the test split.
    Parameters
    ----------
    X : array-like
        Feature matrix.
    y : array-like
        Class labels.
    seed : int
        Random state for the splits.
    test_set_size : float
        Fraction of the data held out for testing.
    val_set_size : float
        Fraction of the remaining data held out for validation.
    C : list of float
        Candidate values of the SVC regularization parameter C.
    Returns
    -------
    None.
    '''
X_train_val, X_test, y_train_val, y_test = train_test_split(X, y, test_size=test_set_size, random_state=seed, shuffle=True)
X_train, X_val, y_train, y_val = train_test_split(X_train_val, y_train_val, test_size=val_set_size, random_state=seed, shuffle=True)
C = C #[0.1, 1, 100, 1000]
ACC_train = []
ACC_val = []
for c in C:
print("Training with c:", c)
clf = SVC(C=c, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=- 1, decision_function_shape='ovr', break_ties=False, random_state=None)
clf.fit(X_train, y_train)
y_pred_train = clf.predict(X_train)
y_pred_val = clf.predict(X_val)
ACC_tr= accuracy_score(y_train, y_pred_train)
ACC_train.append(ACC_tr)
ACC_v= accuracy_score(y_val, y_pred_val)
ACC_val.append(ACC_v)
print("ACC train:", ACC_tr)
print("ACC val:", ACC_v)
print()
#MAE_best = np.min(MAE_val)
ACC_best_ind = np.argmax(ACC_val)
C_best = C[ACC_best_ind]
print("Best C param on validation set:", C_best)
clf = SVC(C=C_best, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=- 1, decision_function_shape='ovr', break_ties=False, random_state=None)
clf.fit(X_train_val, y_train_val)
y_pred_train_val = clf.predict(X_train_val)
y_pred_test = clf.predict(X_test)
ACC_tr_v= accuracy_score(y_train_val, y_pred_train_val)
ACC_te= accuracy_score(y_test, y_pred_test)
print("ACC train_val:", ACC_tr_v)
print("ACC test:", ACC_te)
print()
################################################################################
def regression_nestedCV(X, y, seed, outer_n_folds, inner_n_folds, C):
    '''
    Nested cross-validation for SVR: the inner loop selects C, the outer
    loop estimates generalization error.
    Parameters
    ----------
    X : array-like
        Feature matrix.
    y : array-like
        Regression targets.
    seed : int
        Random state for both K-fold splitters.
    outer_n_folds : int
        Number of outer folds.
    inner_n_folds : int
        Number of inner (model-selection) folds.
    C : list of float
        Candidate values of the SVR regularization parameter C.
    Returns
    -------
    MAE_tr_val, MAE_test : list of float
        Per-outer-fold MAE on the train+validation data and on the test fold.
    '''
X = np.asarray(X)
y = np.asarray(y)
outer_kf = KFold(n_splits= outer_n_folds, random_state=seed, shuffle=True)
inner_kf = KFold(n_splits= inner_n_folds, random_state=seed, shuffle=True)
Outer_iteration = 1
MAE_tr_val = []
MAE_test = []
C = C #[1, 100]
for train_val_index, test_index in outer_kf.split(X):
print("*** Outer iteration:", Outer_iteration)
#print("TRAIN:", train_index, "TEST:", test_index)
X_train_val, X_test = X[train_val_index], X[test_index]
y_train_val, y_test = y[train_val_index], y[test_index]
MAE_tmp = []
for c in C:
print("Training with c:", c)
MAE_train = []
MAE_val = []
Inner_iteration = 1
for train_index, val_index in inner_kf.split(X_train_val):
#print("* Inner iteration:", Inner_iteration)
#print(val_index)
X_train, X_val = X_train_val[train_index], X_train_val[val_index]
y_train, y_val = y_train_val[train_index], y_train_val[val_index]
clf = SVR(kernel='rbf', degree=3, gamma='scale', coef0=0.0, tol=0.001, C=c, epsilon=0.1, shrinking=True, cache_size=200, verbose=0, max_iter=- 1)
clf.fit(X_train, y_train)
y_pred_train = clf.predict(X_train)
y_pred_val = clf.predict(X_val)
MAE_tr= mean_absolute_error(y_train, y_pred_train)
MAE_train.append(MAE_tr)
MAE_v= mean_absolute_error(y_val, y_pred_val)
MAE_val.append(MAE_v)
print("* Inner iteration: " + str(Inner_iteration) + " with MAE: " + str(MAE_v))
Inner_iteration += 1
#print("MAE", MAE_v)
MAE_tmp.append(np.mean(MAE_val))
print("Average MAE on validation set", MAE_tmp)
MAE_best_ind = np.argmin(MAE_tmp)
C_best = C[MAE_best_ind]
print("Best C param on validation set:", C_best)
clf = SVR(kernel='rbf', degree=3, gamma='scale', coef0=0.0, tol=0.001, C=C_best, epsilon=0.1, shrinking=True, cache_size=200, verbose=0, max_iter=- 1)
clf.fit(X_train_val, y_train_val)
y_pred_train_val = clf.predict(X_train_val)
y_pred_test = clf.predict(X_test)
MAE_tr_v= mean_absolute_error(y_train_val, y_pred_train_val)
MAE_tr_val.append(MAE_tr_v)
MAE_te= mean_absolute_error(y_test, y_pred_test)
MAE_test.append(MAE_te)
#print("MAE train_val:", MAE_tr_v)
#print("MAE test:", MAE_te)
#print()
Outer_iteration += 1
return MAE_tr_val, MAE_test
################################################################################
def classification_nestedCV(X, y, seed, outer_n_folds, inner_n_folds, C):
    '''
    Nested cross-validation for SVC: the inner loop selects C, the outer
    loop estimates generalization error.
    Parameters
    ----------
    X : array-like
        Feature matrix.
    y : array-like
        Class labels.
    seed : int
        Random state for both K-fold splitters.
    outer_n_folds : int
        Number of outer folds.
    inner_n_folds : int
        Number of inner (model-selection) folds.
    C : list of float
        Candidate values of the SVC regularization parameter C.
    Returns
    -------
    ACC_tr_val, ACC_test : list of float
        Per-outer-fold accuracy on the train+validation data and on the test fold.
    '''
X = np.asarray(X)
y = np.asarray(y)
outer_kf = KFold(n_splits= outer_n_folds, random_state=seed, shuffle=True)
inner_kf = KFold(n_splits= inner_n_folds, random_state=seed, shuffle=True)
Outer_iteration = 1
ACC_tr_val = []
ACC_test = []
C = C #[1, 100]
for train_val_index, test_index in outer_kf.split(X):
print("*** Outer iteration:", Outer_iteration)
#print("TRAIN:", train_index, "TEST:", test_index)
X_train_val, X_test = X[train_val_index], X[test_index]
y_train_val, y_test = y[train_val_index], y[test_index]
ACC_tmp = []
for c in C:
print("Training with c:", c)
ACC_train = []
ACC_val = []
Inner_iteration = 1
for train_index, val_index in inner_kf.split(X_train_val):
#print("* Inner iteration:", Inner_iteration)
#print(train_val_index)
#print(train_index)
X_train, X_val = X_train_val[train_index], X_train_val[val_index]
y_train, y_val = y_train_val[train_index], y_train_val[val_index]
clf = SVC(C=c, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=0, max_iter=- 1, decision_function_shape='ovr', break_ties=False, random_state=seed)
clf.fit(X_train, y_train)
y_pred_train = clf.predict(X_train)
y_pred_val = clf.predict(X_val)
ACC_tr= accuracy_score(y_train, y_pred_train)
ACC_train.append(ACC_tr)
ACC_v= accuracy_score(y_val, y_pred_val)
ACC_val.append(ACC_v)
print("* Inner iteration: " + str(Inner_iteration) + " with ACC: " + str(ACC_v))
Inner_iteration += 1
ACC_tmp.append(np.mean(ACC_val))
print("Average ACC on validation set", ACC_tmp)
ACC_best_ind = np.argmax(ACC_tmp)
C_best = C[ACC_best_ind]
print("Best C param on validation set:", C_best)
clf = SVC(C=C_best, kernel='rbf', degree=3, gamma='scale', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=0, max_iter=- 1, decision_function_shape='ovr', break_ties=False, random_state=seed)
clf.fit(X_train_val, y_train_val)
y_pred_train_val = clf.predict(X_train_val)
y_pred_test = clf.predict(X_test)
ACC_tr_v= accuracy_score(y_train_val, y_pred_train_val)
ACC_tr_val.append(ACC_tr_v)
ACC_te= accuracy_score(y_test, y_pred_test)
ACC_test.append(ACC_te)
#print("MAE train_val:", MAE_tr_v)
#print("MAE test:", MAE_te)
#print()
Outer_iteration += 1
return ACC_tr_val, ACC_test
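################################################################################
# A minimal usage sketch (illustrative only: scikit-learn's bundled diabetes data
# stands in for a real dataset, and the seed/fold values are arbitrary):
from sklearn.datasets import load_diabetes
_demo = load_diabetes()
MAE_train, MAE_test = regression_CV(_demo.data, _demo.target, seed=42, n_folds=5)
print_to_std(MAE_train, MAE_test, "MAE")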
###Output
_____no_output_____
###Markdown
utils
###Code
# export
import pkg_resources
import configparser
import yaml
# export
def load_config(*configs):
config = configparser.ConfigParser()
config.read(configs)
return config
def config(new_config=None):
default_config=pkg_resources.resource_filename('pybiotools4p','default.ini')
if None is new_config:
print('loading default_config['+default_config+']')
return load_config(default_config)
else:
print('loading default_config and '+ new_config)
return load_config(pkg_resources.resource_filename('pybiotools4p','default.ini'),new_config)
def load_yaml(*yamls):
my_dict={}
for y in yamls:
with open(y,'r') as yf:
            my_dict.update(yaml.safe_load(yf))  # safe_load avoids the unsafe default Loader
return my_dict
def default_yaml(new_yaml=None):
default_config=pkg_resources.resource_filename('pybiotools4p','default.yaml')
if None is new_yaml:
print('loading default_config['+default_config+']')
return load_yaml(default_config)
else:
print('loading default_config and '+ new_yaml)
return load_yaml(pkg_resources.resource_filename('pybiotools4p','default.yaml'),new_yaml)
def dict_to_paras(mydict):
    '''
    Render a dict of extra command-line parameters as a single string,
    e.g. {'a': 16, '--index': 'abc'} -> 'a 16 --index abc'.
    '''
return ' '.join([f'{i} {mydict[i]}' for i in mydict.keys()])
a=load_config(pkg_resources.resource_filename('pybiotools4p','default.ini'))
print(a.items('software'))
dict_to_paras({'a':16,'--index':'abc'})
a=config('tests/test.ini')
print(a.items('software'))
a=config()
print(a.items('software'))
###Output
loading default_config[/home/logan/PycharmProjects/pybiotools/pybiotools4p/default.ini]
[('fastp', 'fastp'), ('mkdir', 'mkdir'), ('fastqc', 'fastqc'), ('star', 'STAR'), ('hisat2', 'hisat2'), ('samtools', 'samtools'), ('stringtie', 'stringtie'), ('featurecounts', 'featureCounts'), ('ihr', 'IHR.R'), ('bitk', 'bitk.py'), ('rnasamba', 'rnasamba'), ('gffread', 'gffread'), ('minimap2', 'minimap2'), ('gffcompare', 'gffcompare'), ('kallisto', 'kallisto'), ('picard', 'picard -Xmx30g -XX:+UseParallelGC -XX:ParallelGCThreads=2'), ('nanopolish', 'nanopolish'), ('nanopolishcomp', 'NanopolishComp')]
###Markdown
Importing HDFSUtils
###Code
import sys
sys.path.insert(0, '/var/sds/homes/XP96619/workspace/utils/')
from HDFSUtils import HDFSUtils
###Output
_____no_output_____
###Markdown
Testing HDFSUtils
###Code
import unittest
from test.HDFSUtilsTest import HDFSUtilsTest
if __name__ == '__main__':
unittest.main(argv=[''], verbosity=3, exit=False)
###Output
test__filter_date_partitions (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__filter_dates (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__format_date_partitions (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__format_process_date (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__format_process_date_in_range (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__get_content (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__get_date (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__get_date_when_path_not_found (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__get_files (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__get_folders (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__get_jvm_content (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__sort_date_partitions (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__to_date (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__to_string_jvm (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test_get_exception_with_invalid_path (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test_get_master_with_date_partitions (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test_get_master_without_date_partitions (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test_get_raw_date_partitions (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test_init (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
----------------------------------------------------------------------
Ran 19 tests in 0.950s
OK
###Markdown
Getting the content list
###Code
path = "/data/raw/pext/data/t_pext_rcc_balance/"
list(HDFSUtils(sc).get_content(path))
###Output
_____no_output_____
###Markdown
Getting the folders list
###Code
path = "/data/raw/pext/data/t_pext_rcc_balance/"
list(HDFSUtils(sc).get_folders(path))
###Output
_____no_output_____
###Markdown
Getting the files list
###Code
path = "/data/raw/pext/data/t_pext_rcc_balance/cutoff_date=20180531"
list(HDFSUtils(sc).get_files(path))
###Output
_____no_output_____
###Markdown
Getting Partitions from a Parquet Source
###Code
# Getting all partitions
path = "/data/master/pdco/data/retailBusinessBanking/t_pdco_credit_card_mov/"
HDFSUtils(sc).get_date_partitions(path)
# Getting with partition_number:
path = "/data/master/pdco/data/retailBusinessBanking/t_pdco_credit_card_mov/"
HDFSUtils(sc).get_date_partitions(path, partition_number = 3)
# Getting with a date range:
# put the earlier date in the first element and the later date in the second.
path = "/data/master/pdco/data/retailBusinessBanking/t_pdco_credit_card_mov/"
HDFSUtils(sc).get_date_partitions(path, process_date = ["2018-08-14", "2020-01-21"])
# Getting with a cut-off date
path = "/data/master/pdco/data/retailBusinessBanking/t_pdco_credit_card_mov/"
HDFSUtils(sc).get_date_partitions(path, process_date = "2018-08-14")
# Max Partitions
path = "/data/master/pdco/data/retailBusinessBanking/t_pdco_credit_card_mov/"
HDFSUtils(sc).get_date_partitions(path, partition_number= 1)
# Min Partitions
path = "/data/master/pdco/data/retailBusinessBanking/t_pdco_credit_card_mov/"
HDFSUtils(sc).get_date_partitions(path, partition_number= 1, in_reverse = False)
# Getting with a cut-off date, adding an operation
path = "/data/master/pdco/data/retailBusinessBanking/t_pdco_credit_card_mov/"
HDFSUtils(sc).get_date_partitions(path, process_date = "2018-08-14", operation="<")
# Getting partitions from an avro source, for this we pass the date_format = '%Y%m%d'
path = "/data/raw/pext/data/t_pext_rcc_balance"
HDFSUtils(sc, date_format = '%Y%m%d').get_date_partitions(path)
###Output
_____no_output_____
###Markdown
Importing DataFrameUtils
###Code
import sys
sys.path.insert(0, '/var/sds/homes/XP96619/workspace/utils/')
from DataFrameUtils import DataFrameUtils
###Output
_____no_output_____
###Markdown
Testing DataFrameUtils
###Code
import unittest
from test.DataFrameUtilsTest import DataFrameUtilsTest
if __name__ == '__main__':
unittest.main(argv=[''], verbosity=3, exit=False)
###Output
test__add_partition_name (test.DataFrameUtilsTest.DataFrameUtilsTest) ... ok
test__concat_path_with_partition_name (test.DataFrameUtilsTest.DataFrameUtilsTest) ... ok
test__extract_extension (test.DataFrameUtilsTest.DataFrameUtilsTest) ... ok
test__extract_extension_with_invalid_extension (test.DataFrameUtilsTest.DataFrameUtilsTest) ... ok
test__extract_path (test.DataFrameUtilsTest.DataFrameUtilsTest) ... ok
test__extract_path_with_partition_name (test.DataFrameUtilsTest.DataFrameUtilsTest) ... ok
test__extract_path_with_paths (test.DataFrameUtilsTest.DataFrameUtilsTest) ... ok
test__get_format_type (test.DataFrameUtilsTest.DataFrameUtilsTest) ... ok
test__get_paths_with_process_name (test.DataFrameUtilsTest.DataFrameUtilsTest) ... ok
test__search_extension (test.DataFrameUtilsTest.DataFrameUtilsTest) ... ok
test_get_format_type_from_path (test.DataFrameUtilsTest.DataFrameUtilsTest) ... ok
test_read_dataframe (test.DataFrameUtilsTest.DataFrameUtilsTest) ... /usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216, got 192
return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216, got 192
return f(*args, **kwds)
ok
test_read_dataframe_with_path (test.DataFrameUtilsTest.DataFrameUtilsTest) ... ok
test_read_dataframe_with_path_retrieving_partition_name (test.DataFrameUtilsTest.DataFrameUtilsTest) ... ok
test_read_dataframes (test.DataFrameUtilsTest.DataFrameUtilsTest) ... ok
test_read_dataframes_with_date_range (test.DataFrameUtilsTest.DataFrameUtilsTest) ... /usr/lib/python3.5/socket.py:646: ResourceWarning: unclosed <socket.socket fd=53, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('127.0.0.1', 32970), raddr=('127.0.0.1', 39124)>
self._sock = None
ok
test__filter_date_partitions (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__filter_dates (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__format_date_partitions (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__format_process_date (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__format_process_date_in_range (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__get_content (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__get_date (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__get_date_when_path_not_found (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__get_files (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__get_folders (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__get_jvm_content (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__sort_date_partitions (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__to_date (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test__to_string_jvm (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test_get_exception_with_invalid_path (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test_get_master_with_date_partitions (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test_get_master_without_date_partitions (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test_get_raw_date_partitions (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
test_init (test.HDFSUtilsTest.HDFSUtilsTest) ... ok
----------------------------------------------------------------------
Ran 35 tests in 7.136s
OK
###Markdown
Reading without parameters.
###Code
# Reading .parquet file path without specifying partition
path = "/data/master/pctk/data/t_pctk_rcc_balance"
DataFrameUtils(spark, sc=sc).read_dataframe(path).show(2)
# Reading .parquet files by specifying partition
paths = ["/data/master/pctk/data/t_pctk_rcc_balance/cutoff_date=2020-06-30", "/data/master/pctk/data/t_pctk_rcc_balance/cutoff_date=2020-07-31"]
DataFrameUtils(spark, sc=sc).read_dataframe(paths=paths).show(2)
# Reading .parquet paths by specifying partition and retrieving the partition name
paths = ["/data/master/pctk/data/t_pctk_rcc_balance/cutoff_date=2020-06-30", "/data/master/pctk/data/t_pctk_rcc_balance/cutoff_date=2020-07-31"]
DataFrameUtils(spark, sc=sc).read_dataframe(paths=paths, options = {'basePath': "/data/master/pctk/data/t_pctk_rcc_balance/"}).show(2)
# Reading .avro file path without specifying partition
path = "/data/raw/pext/data/t_pext_rcc_balance/"
DataFrameUtils(spark, sc=sc, date_format = '%Y%m%d').read_dataframe(path).show(2)
# Reading .txt file path
path = "/in/staging/ratransmit/external/unsubs_20161031.txt"
DataFrameUtils(spark, sc=sc).read_dataframe(path, options={"delimiter":"|", "header": True}).show(2)
# Reading .csv file path
path = "/in/staging/ratransmit/external/users_20180627.csv"
DataFrameUtils(spark, sc=sc).read_dataframe(path, options={"header":True}).show(2)
# Reading .dat file path
path = "/in/staging/ratransmit/external/v_pdco_monthly_transactional_rcd_20190128.dat"
DataFrameUtils(spark, sc=sc).read_dataframe(path, options={"delimiter":"|"}).show(2)
# Reading .ctl file path
path = "/in/staging/ratransmit/external/kexc/controlFeedBack_JOURNEY_20200512.ctl"
DataFrameUtils(spark, sc=sc).read_dataframe(path).show(2)
# Reading path without partition
path = "/data/master/pdco/data/cross/v_pdco_geo_location_catalog/"
DataFrameUtils(spark, sc=sc).read_dataframe(path).show(2)
###Output
+---------+----------+---------------------------+-----------------------------+----------------------------+----------------------+----------------------+----------------+----------+--------------------+-----------+--------------------+
|entity_id|country_id|inei_address_geolocation_id|reniec_address_geolocation_id|sunat_address_geolocation_id|address_geolocation_id|geolocation_group_desc|geolocation_type|zipcode_id|address_zone_type_id|cutoff_date| audtiminsert_date|
+---------+----------+---------------------------+-----------------------------+----------------------------+----------------------+----------------------+----------------+----------+--------------------+-----------+--------------------+
| 0011| PE| 040115| null| null| 0601015| QUEQUE�A| 1| @| P| 2017-09-30|2018-09-07 20:55:...|
| 0011| PE| 040408| null| null| 0604008| MACHAGUAY| 1| @| P| 2017-09-30|2018-09-07 20:55:...|
+---------+----------+---------------------------+-----------------------------+----------------------------+----------------------+----------------------+----------------+----------+--------------------+-----------+--------------------+
only showing top 2 rows
###Markdown
Reading with parameters:
###Code
# Reading parquet file with last partition
path = "/data/master/pctk/data/t_pctk_rcc_balance"
DataFrameUtils(spark, sc=sc).read_dataframes(path, partition_number = 1).show(2)
# Reading parquet file with the last three partitions retaining the partition name
path = "/data/master/pctk/data/t_pctk_rcc_balance"
DataFrameUtils(spark, sc=sc).read_dataframes(path, partition_number = 3, options = {"basePath": path}).show(2)
# Reading parquet file from a date range
path = "/data/master/pctk/data/t_pctk_rcc_balance"
DataFrameUtils(spark, sc=sc).read_dataframes(path, process_date=["2020-05-31", "2020-07-31"], options = {"basePath": path}).show(2)
###Output
+------------------+---------------+-------------+---------------+------------------+-----------------+--------------+----------------+--------------------+-----------+
|register_type_type|sbs_customer_id|sbs_entity_id|sbs_credit_type|product_definer_id|delay_days_number|balance_amount|credit_risk_type| audtiminsert_date|cutoff_date|
+------------------+---------------+-------------+---------------+------------------+-----------------+--------------+----------------+--------------------+-----------+
| 2| 0198590495| 00006| 09| 14111302000000| 0| 39201.770000| 0|2020-06-01 02:08:...| 2020-07-31|
| 2| 0198105163| 00109| 09| 14181300000000| 0| 651.230000| 0|2020-06-01 02:08:...| 2020-07-31|
+------------------+---------------+-------------+---------------+------------------+-----------------+--------------+----------------+--------------------+-----------+
only showing top 2 rows
###Markdown
UtilitiesThis notebook contains utility code that is useful to multiple notebooks. Derivative FilterIn many control schemes that follow trajectories, higher order derivatives are often needed. This section discusses an implementation of a digital derivative based on Laplace system theory.Recall from system theory that the Laplace transform is a useful way of analyzing linear time-invariant (LTI) systems. It takes time-domain systems and expresses them in the Laplace domain (a superset of the Fourier frequency domain). The unilateral Laplace transform is defined by$$\begin{equation}F(s) = \int_0^\infty f(t) e^{-st} dt,\end{equation}$$where $t$ is a time variable and $s$ is the Laplace variable. This transform can be thought of as projecting a time-domain function onto exponentials of varying phase.This transform is particularly useful when analyzing LTI systems of differential equations. In particular, the Laplace transform of the time-derivative of a continuous system $f(t)$ can be found to be$$\begin{equation}\frac{d}{dt} f(t) \stackrel{\mathcal{L}}{\rightleftharpoons} sF(s).\end{equation}$$Therefore, differentiation in the time-domain corresponds to multiplication by $s$ in the Laplace domain. Although a pure derivative in the Laplace domain is causal, it is not realizable (i.e., can you build an RLC circuit that realizes $F(s)=s$?). Therefore, a band-limited derivative (i.e., a *dirty derivative*) is used:$$\begin{equation}\label{eq:dirty-derivative}G(s) = \frac{s}{\tau s + 1}.\end{equation}$$From analog filter design, remember that a first-order low-pass filter with cutoff frequency $\omega_c$ is defined in the Laplace domain as$$\begin{equation}\label{eq:lpf}H_{LPF}(s) = \frac{\omega_c}{s + \omega_c} = \frac{1}{s/\omega_c + 1},\end{equation}$$and a first-order high-pass filter is$$\begin{equation}\label{eq:hpf}H_{HPF}(s) = \frac{s}{s + \omega_c} = \frac{s/\omega_c}{s/\omega_c + 1}.\end{equation}$$Note that the cutoff frequency is related to the filter time-constant (and RC circuits) by$$\begin{equation}\tau = RC = \frac{1}{2\pi f_c} = \frac{1}{\omega_c}.\end{equation}$$From this discussion, we can see that the dirty derivative \eqref{eq:dirty-derivative} can be thought of as a filtered version of a pure derivative with bandwidth $1/\tau$. This is useful as the LPF will prevent the derivative operator from picking up high-frequency components of the input signal and amplifying noise.A common value of $\tau$ is $0.05$ (a bandwidth of $1/\tau = 20$ rad/s, or roughly $3$ Hz). A higher $\tau$ corresponds to less bandwidth and more rejection of high-frequency components, which results in a smoother output. Digital ImplementationIn order to implement a derivative filter in a computer, it of course must be discretized. The technique used here is to map $G(s)$ to the $z$-domain via the bilinear transform (AKA, Tustin approximation)$$\begin{equation}s \mapsto \frac{2}{T}\frac{1-z^{-1}}{1+z^{-1}} ,\end{equation}$$where $T$ is the sample period. Once we have an expression in the $z$-domain, the inverse $z$-transform can be used to give a discrete-time implementation of the dirty derivative. Resources- [SE.DSP: First-Derivative Analog Filter](https://dsp.stackexchange.com/questions/41109/first-derivative-analog-filter)- [Blog: Causal, but not Realizable](http://blog.jafma.net/2015/10/04/differentiation-derivative-is-causal-but-not-exactly-realizable/)
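As a worked sketch of that mapping (consistent with the constants used in the `DirtyDerivative` class below), substituting the bilinear transform into $G(s)$ gives$$\begin{equation}G(z) = \frac{2\left(1-z^{-1}\right)}{\left(2\tau + T\right) + \left(T - 2\tau\right)z^{-1}},\end{equation}$$and the inverse $z$-transform yields the difference equation$$\begin{equation}y[n] = \frac{2\tau - T}{2\tau + T}\,y[n-1] + \frac{2}{2\tau + T}\left(x[n] - x[n-1]\right),\end{equation}$$whose two coefficients are exactly the `a1` and `a2` constants computed in `update()` below (with $T = T_s$).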
###Code
import numpy as np
class DirtyDerivative:
"""Dirty Derivative
Provides a first-order derivative of a signal.
This class creates a filtered derivative based on a
band-limited low-pass filter with transfer function:
G(s) = s/(tau*s + 1)
This is done because a pure differentiator (D(s) = s)
is not realizable.
"""
def __init__(self, order=1, tau=0.05):
# time constant of dirty-derivative filter.
# Higher leads to increased smoothing.
self.tau = tau
# Although this class only provides a first-order
# derivative, we use this parameter to know how
# many measurements to ignore so that the incoming
# data is smooth and stable. Otherwise, the filter
# would be hit with a step function, causing
# downstream dirty derivatives to be hit with very
# large step functions.
self.order = order
# internal memory for lagged signal value
self.x_d1 = None
# Current value of derivative
self.dxdt = None
def update(self, x, Ts):
# Make sure to store the first `order` measurements,
# but don't use them until we have seen enough
# measurements to produce a stable output
if self.order > 0:
self.order -= 1
self.x_d1 = x
return np.zeros(x.shape)
# Calculate digital derivative constants
a1 = (2*self.tau - Ts)/(2*self.tau + Ts)
a2 = 2/(2*self.tau + Ts)
if self.dxdt is None:
self.dxdt = np.zeros(x.shape)
# calculate derivative
self.dxdt = a1*self.dxdt + a2*(x - self.x_d1)
# store value for next time
self.x_d1 = x
return self.dxdt
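# A minimal usage sketch (hypothetical test signal, not part of the original notebook):
# numerically differentiate a sampled sine; after the first `order` samples the output
# should approximate 2*pi*cos(2*pi*t).
Ts = 0.01
deriv = DirtyDerivative(order=1, tau=0.05)
t = np.arange(0.0, 2.0, Ts)
samples = np.sin(2*np.pi*t).reshape(-1, 1)   # one-dimensional signal, one sample per row
dxdt = np.array([deriv.update(x, Ts) for x in samples])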
###Output
_____no_output_____
###Markdown
--- Rotation MatricesThroughout this notebook, we use *right-handed*, *passive* rotations to express the same geometrical vector in various coordinate frames. When Euler angles are used, the *intrinsic* 3-2-1 (Z-Y-X) sequence is used. This means that a yaw ($\psi$) rotation is first performed in $\mathcal{F}_A$ around the $\mathbf{k}^A$ axis to get to $\mathcal{F}_B$. Then, a pitch ($\theta$) rotation is performed about the $\mathbf{j}^B$ axis of frame B to get to $\mathcal{F}_C$. Lastly, a roll ($\phi$) rotation is performed about the $\mathbf{i}^C$ axis to get to $\mathcal{F}_D$. The order of rotation is important, and if data from $\mathcal{F}_A$ is meant to be expressed in $\mathcal{F}_D$, the rotations are composed as$$\begin{equation}x^D = R_C^D(\phi) R_B^C(\theta) R_A^B(\psi)x^A = R_A^D(\phi,\theta,\psi)x^A = R_A^D x^A.\end{equation}$$Python implementations of these passive rotations are defined below.
###Code
# input angle in radians
def rotx(ph): return np.array([[1,0,0],[0,np.cos(ph),np.sin(ph)],[0,-np.sin(ph),np.cos(ph)]])
def roty(th): return np.array([[np.cos(th),0,-np.sin(th)],[0,1,0],[np.sin(th),0,np.cos(th)]])
def rotz(ps): return np.array([[np.cos(ps),np.sin(ps),0],[-np.sin(ps),np.cos(ps),0],[0,0,1]])
def rot3(ph,th,ps): return rotx(ph).dot(roty(th).dot(rotz(ps)))
# input angle in degrees
def rotxd(ph): return rotx(np.radians(ph))
def rotyd(th): return roty(np.radians(th))
def rotzd(ps): return rotz(np.radians(ps))
def rot3d(ph,th,ps): return rot3(np.radians(ph),np.radians(th),np.radians(ps))
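# Quick sanity check (illustrative angles): composing the individual passive rotations
# in the intrinsic 3-2-1 order matches the combined helper rot3d.
R_composed = rotxd(30).dot(rotyd(20)).dot(rotzd(10))
assert np.allclose(R_composed, rot3d(30, 20, 10))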
###Output
_____no_output_____
###Markdown
--- PID ControllerOne of the simplest forms of feedback control is the proportional-integral-derivative (PID) controller. It is perhaps the most widely used form of control because it is an intuitive technique for minimizing error. However, stability can only be guaranteed for second-order systems. Digital ImplementationIn order to implement a PID controller on a computer, we need to discretize our continuous expression for the integral and derivative terms of the PID controller.
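As a sketch of the discretization used in the `SimplePID` class below, the integral is approximated with the trapezoidal rule and the derivative reuses the dirty-derivative filter from the previous section:$$\begin{align}I[k] &= I[k-1] + \frac{T_s}{2}\left(e[k] + e[k-1]\right), \\D[k] &= \frac{2\tau - T_s}{2\tau + T_s}D[k-1] + \frac{2}{2\tau + T_s}\left(e[k] - e[k-1]\right),\end{align}$$so that the control output is $u[k] = k_p e[k] + k_i I[k] + k_d D[k]$, saturated to the configured limits with integrator anti-windup.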
###Code
class SimplePID:
def __init__(self, kp, ki, kd, min, max, tau=0.05):
self.kp = kp
self.ki = ki
self.kd = kd
self.min = min
self.max = max
self.tau = tau
self.derivative = 0.0
self.integral = 0.0
self.last_error = 0.0
@staticmethod
def _clamp(v, limit):
return v if np.abs(v) < limit else limit*np.sign(v)
def run(self, error, dt, derivative=None, pclamp=None):
# P term
if self.kp:
# Proportional error clamp, it specified
e = error if pclamp is None else self._clamp(error, pclamp)
p_term = self.kp * e
else:
p_term = 0.0
# D term
if self.kd:
            if derivative is not None:  # treat an explicit 0.0 as a valid supplied derivative
self.derivative = derivative
elif dt > 0.0001:
self.derivative = (2.0*self.tau - dt)/(2.0*self.tau + dt)*self.derivative + 2.0/(2.0*self.tau + dt)*(error - self.last_error)
else:
self.derivative = 0.0
d_term = self.kd * self.derivative
else:
d_term = 0.0
# I term
if self.ki:
self.integral += (dt/2.0) * (error + self.last_error)
i_term = self.ki * self.integral
else:
i_term = 0.0
# combine
u = p_term + d_term + i_term
# saturate
if u < self.min:
u_sat = self.min
elif u > self.max:
u_sat = self.max
else:
u_sat = u
# integrator anti-windup
if self.ki:
if abs(p_term + d_term) > abs(u_sat):
# PD is already saturating, so set integrator to 0 but don't let it run backwards
self.integral = 0
else:
# otherwise only let integral term at most take us just up to saturation
self.integral = (u_sat - p_term - d_term) / self.ki
# bookkeeping
self.last_error = error
return u_sat
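# A minimal usage sketch (hypothetical gains and plant, not from the original notebook):
# regulate a first-order plant x_dot = -x + u toward a setpoint of 1.0.
pid = SimplePID(kp=2.0, ki=0.5, kd=0.1, min=-5.0, max=5.0)
x, dt = 0.0, 0.01
for _ in range(500):
    u = pid.run(1.0 - x, dt)   # error = reference - state
    x += dt * (-x + u)         # crude Euler step of the plant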
###Output
_____no_output_____
###Markdown
--- Numerical IntegrationBefore our discussion of the dynamical equations of motion that describe the time evolution of the quadrotor in $\text{SE}(3)$, we should discuss how to properly integrate these differential equations numerically for use in a simulation. The two most common types of numerical integrators are Euler's method and the fourth order Runge-Kutta method, or RK4. The Euler MethodThe Euler method is an explicit algorithm that uses the limit definition of the derivative:$$\frac{df}{dt} = \lim_{h\to0} \frac{f(t + h) - f(t)}{h}.$$Let $g(t) \triangleq \frac{df}{dt}(t)$. Disregarding the limit and rearranging the difference quotient yields$$f(t+h) = f(t) + g(t)h.$$Using the step size $h$ as the sample period and using discrete notation, we have$$f[k+1] = f[k] + g[k] T_s.$$ RK4Of course, the Euler Method is a first-order, rough approximation of differential equations. Given a differential equation $\dot{y} = f(t,y)$, the RK4 method is given by$$\begin{align}y_{n+1} &= y_{n}+\frac{h}{6}\left(k_{1}+2k_{2}+2k_{3}+k_{4}\right) \\t_{n+1} &= t_{n}+h, \end{align}$$where$$\begin{align}k_{1} &= f\left(t_{n},y_{n}\right) \\k_{2} &= f\left(t_{n}+\frac{h}{2},y_{n}+\frac{h}{2}k_{1}\right) \\k_{3} &= f\left(t_{n}+\frac{h}{2},y_{n}+\frac{h}{2}k_{2}\right) \\k_{4} &= f\left(t_{n}+h,y_{n}+hk_{3}\right).\end{align}$$For an expository derivation of RK4, see [2].
###Code
def rk4(f, y, dt):
"""Runge-Kutta 4th Order
Solves an autonomous (time-invariant) differential equation of the form dy/dt = f(y).
"""
k1 = f(y)
k2 = f(y + dt/2*k1)
k3 = f(y + dt/2*k2)
k4 = f(y + dt *k3)
return y + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)
###Output
_____no_output_____
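###Markdown
For comparison with the Euler method derived above, here is a sketch of a forward-Euler step with the same interface as `rk4`; it is added for illustration so the two integrators can be swapped in the example that follows.
###Code
def euler(f, y, dt):
    """First-order forward Euler step for the autonomous equation dy/dt = f(y)."""
    return y + dt*f(y)
###Output
_____no_output_____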
###Markdown
RK4 ExampleConsider the autonomous evolution of position $x \in \mathbb{R}$ as$$\dot{x} = \cos(\omega(t))$$where$$\omega(t) =\begin{cases} 2 \pi t, & 0 < t < 2.5 \\ 4 \pi t, & 2.5 \leq t < 5 \\\end{cases}.$$Note that this equation is not smooth (there is a jump at $t = 2.5$), but we are still able to numerically integrate it.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# simulation timing
Tf = 5
Ts = 0.01
N = int(Tf/Ts)
# initial condition
x = 0
# simulation history
x_hist = np.zeros((N,1))
for i in range(N-1):
# dynamics
freq = 1 if i < N//2 else 2
v = np.cos(2*np.pi*freq*(Ts*i))
f = lambda x: v
# propagate dynamics by solving the differential equation
x = rk4(f, x, Ts)
# add to history
x_hist[i+1] = x
plt.plot(np.arange(0, Tf, Ts), x_hist)
plt.grid(); plt.xlabel('Time (s)'); plt.ylabel('Position (m)')
plt.show()
###Output
_____no_output_____
###Markdown
1.[enumerate](1) 2.[lambda](2) 3.[pandas](3) 4.[numpy](4) 5.[markdown](5) 6.[pytorch](6) 7.[plot](7) L.[Other operations](last)
###Code
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
from torchviz import make_dot
from colorama import Fore, Back
import seaborn as sns; sns.set(color_codes=True)
###Output
_____no_output_____
###Markdown
1.enumerate
###Code
# enumerate
LL = ['a','b','c']
aa = enumerate(LL)
print(aa, type(aa))
for i, e in enumerate(LL):
print(i,e)
for e in enumerate(LL,3):
print(e)
###Output
(3, 'a')
(4, 'b')
(5, 'c')
###Markdown
2.lambda * Quickly build functions
###Code
epsilon_by_frame = lambda frame_idx:math.exp(-frame_idx/100)
plt.plot([epsilon_by_frame(i) for i in range(500)])
###Output
_____no_output_____
###Markdown
* Multiple return values
###Code
multi_return_value = lambda a, b :(a+1,b+2)
a, b =multi_return_value(1,1)
print(a,b)
###Output
2 3
###Markdown
* Use together with map to quickly define and apply a function
###Code
L = [1,2,3,4]
list(map(lambda x : x**3, L))
###Output
_____no_output_____
###Markdown
* Use together with filter to select data
###Code
list(filter(lambda x: x>=2, L))
###Output
_____no_output_____
###Markdown
3.pandas * pd.qcut splits the data into n quantile-based bins, so that each bin holds an equal share of the values
###Code
pd.qcut(range(6),3)
pd.qcut(range(5), 3, labels=["good", "medium", "bad"])
###Output
_____no_output_____
###Markdown
4.numpy * Use np.random.choice for random selection
###Code
# np.random.choice(a,size=None,replace=True,p=None)
# a: a 1-D array (choose one of its elements) or an int i (choose from [0, i-1])
# size: an int gives a 1-D array of that length; a tuple gives a matrix of that shape
# replace=True means the same item may be returned more than once
# p: a weight for each element of a, i.e. its probability of being chosen
print(np.random.choice(range(6)))
print(np.random.choice(["a","b","c","d"]))
print(np.random.choice(["a","b","c","d"],p = [0,1,0,0]))
###Output
5
c
b
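###Markdown
The size and replace parameters described in the comments above can be shown with one more example (the values here are arbitrary and added for illustration).
###Code
print(np.random.choice(5, size=(2, 3)))            # a 2x3 matrix of draws from [0, 4]
print(np.random.choice(4, size=3, replace=False))  # 3 distinct values from [0, 3]
###Output
_____no_output_____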
###Markdown
* Converting images between numpy and torch.Tensor requires the transpose() function: a numpy image is H x W x C, while a torch image is C x H x W 5.markdown $${\max_{a'}}$$ 6.pytorch * Similarities and differences between using **(a)** torch.tensor.item() and **(b)** torch.tensor.cpu().numpy() Same : both move tensor values out of PyTorch into plain Python/NumPy values Different : (a) only works on a single-element tensor | (b) works both on single-element tensors and on whole arrays
###Code
test = torch.tensor(5)
print("test :", test)
print(".item() :", test.item())
print(".cpu().numpy() :", test.cpu().numpy())
test = torch.tensor([1,2])
print("test :", test)
print(".cpu().numpy() :",test.cpu().numpy())
print("")
try:
print(".item() :", test.item())
except ValueError :
print(Fore.WHITE + Back.RED + "ValueError: only one element tensors can be converted to Python scalars")
###Output
test : tensor([1, 2])
.cpu().numpy() : [1 2]
[37m[41mValueError: only one element tensors can be converted to Python scalars
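###Markdown
The H x W x C versus C x H x W point above can be shown directly; this small sketch (added for illustration) moves a NumPy image into the layout PyTorch expects and back again.
###Code
img_np = np.random.rand(32, 48, 3)                    # numpy layout: H x W x C
img_t = torch.from_numpy(img_np.transpose(2, 0, 1))   # torch layout: C x H x W
back = img_t.numpy().transpose(1, 2, 0)               # back to H x W x C
print(img_np.shape, tuple(img_t.shape), back.shape)
###Output
_____no_output_____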
###Markdown
* torchviz plotting: the package has two methods, make_dot_from_trace and make_dot; in practice I have only used make_dot. make_dot(model).render("picture") saves a PDF document, while make_dot(model).render("picture", format="png") saves a PNG image plus an image-info file
###Code
lstm_cell = torch.nn.LSTMCell(128, 128)
x = torch.randn(3, 128)
make_dot(lstm_cell(x), params=dict(list(lstm_cell.named_parameters())))
###Output
_____no_output_____
###Markdown
* Understanding .detach() 1. Think of it simply as a copy of that tensor with requires_grad = False (so the result does not depend on where exactly .detach() sits in the chain of operations) 2. Quantities output by a **neural network** have requires_grad = True by default, which means they are differentiable when .backward() is called. If a quantity you define yourself needs gradients, add requires_grad = True. For example, in image classification the label needs no gradient, while the prediction comes out of the network and is differentiable, which is exactly what we need, so .detach() can usually be ignored 3. In reinforcement learning, however, you must decide based on which derivatives the update formulas actually require 4. The [official tutorial](https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html) walks through each step of the backward pass using ```loss.grad_fn``` or ```loss.grad_fn.next_functions[0][0]```
###Code
x = torch.ones(1, requires_grad=True)
y = x**2
z = x**3
r = (y + z).sum()
r.backward()
print("x.grad:", x.grad)
make_dot(r)
x = torch.ones(10, requires_grad=True)
y = x**2
z = x.detach()**3
r = (y + z).sum()
r.backward()
print("x.grad:", x.grad)
make_dot(r)
###Output
x.grad: tensor([2., 2., 2., 2., 2., 2., 2., 2., 2., 2.])
###Markdown
* eval() && train() only change the behaviour of modules like dropout or batch norm; eval() should not be enabled during training. * cat vs stack [ref](https://stackoverflow.com/questions/54307225/whats-the-difference-between-torch-stack-and-torch-cat-functions/54307331) ```stack:``` >Concatenates sequence of tensors along a new dimension. ```cat:``` >Concatenates the given sequence of seq tensors in the given dimension. So if A and B are of shape (3, 4), torch.cat([A, B], dim=0) will be of shape (6, 4) and torch.stack([A, B], dim=0) will be of shape (2, 3, 4) (a quick shape check is sketched right after the plot example below). * Initialization of nn.Linear() 1. Default initialization : ``` stdv = 1. / math.sqrt(self.weight.size(1)) # construct and sample a uniform distribution from -stdv to stdv self.weight.data.uniform_(-stdv, stdv) if self.bias is not None: self.bias.data.uniform_(-stdv, stdv) ``` 2. Manual initialization : ``` def weights_init_(m): if isinstance(m, nn.Linear): torch.nn.init.xavier_uniform_(m.weight, gain=1) torch.nn.init.constant_(m.bias, 0) ``` 7.plot * Use seaborn to draw plots with confidence intervals like those found in papers
###Code
num_fig = 5
x_list_sub = lambda : np.arange(-1, 1, 0.1)
x_list_target = np.stack([x_list_sub() for _ in range(num_fig)], axis=1).flatten()  # pass a list; np.stack does not accept a generator
y_list_target = x_list_target ** 2 + np.random.randn(x_list_target.shape[0]) * 0.2
data = pd.DataFrame(data=dict(x=x_list_target, y=y_list_target))
data["test"] = pd.Series("left" if i<data.shape[0]/2 else "right" for i in range(data.shape[0]))
sns.lineplot(x="x", y="y", hue="test", data=data, color="g")
###Output
_____no_output_____
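###Markdown
As promised above, a quick shape check of torch.cat versus torch.stack (the tensors here are arbitrary and added for illustration).
###Code
A = torch.randn(3, 4)
B = torch.randn(3, 4)
print(torch.cat([A, B], dim=0).shape)    # torch.Size([6, 4])
print(torch.stack([A, B], dim=0).shape)  # torch.Size([2, 3, 4])
###Output
_____no_output_____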
###Markdown
Other operations * Understanding axis = 0 versus axis = 1: both describe the axis you move along; axis=0 goes across rows (down each column), axis=1 goes across columns (along each row)
###Code
a = np.arange(9).reshape(3,3)
print(a)
print("0跨行:",a.sum(0))
print("1跨列:",a.sum(1))
print("总和:",a.sum())
###Output
total: 36
###Markdown
* Understanding \__len__(self) in a class: override it so that len() can be called on the object
###Code
class Test():
def __init__(self, *num):
self.nums = num
def __len__(self):
return len(self.nums)
test = Test("A","B","C")
print(len(test),test.__len__())
###Output
3 3
###Markdown
* Similarities and differences between \*args and \*\*kwargs (some code referenced from Geeks for Geeks): both pass a variable number of arguments, and both can be handled like normal function arguments; \*args is a variable-length positional argument, while \*\*kwargs passes a variable number of key-to-value (dictionary-like) keyword arguments
###Code
# For *args
def myFun(*argv):
for arg in argv:
print(arg)
myFun('Hello', 'Welcome', 'to', 'HHQ\'utils')
# For **kwargs I distinguish two cases:
# Case 1 (defining a function with variable keyword arguments):
def myFun(**kwargs):
for key, value in kwargs.items():
print("%s == %s"%(key, value))
myFun(first ='Geeks', mid ='for', last='Geeks')
# Case 2 (passing in a dictionary of keyword arguments):
def myFun(first_list, last_list):
print("the first_list is {}, last_list is {}".format(first_list, last_list))
sample = {"first_list":[1,2],"last_list":[3,4]}
myFun(**sample)
###Output
the first_list is [1, 2], last_list is [3, 4]
###Markdown
* Extracting the elements at odd or even indices of a list: some_list[start:stop:step]
###Code
np.arange(8)[::2]
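print(np.arange(8)[1::2])  # added: the odd-index counterpart of the even-index example above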
###Output
_____no_output_____
###Markdown
* Incorrect use of a method inside a class (discovered while adding noise for DDPG)
###Code
class Test:
def __init__(self):
self.state = np.zeros(1)
    def sample_right(self):
        # `+` builds a NEW array each call, so every returned reference is a distinct object
        self.state = self.state + np.random.randn(1)
        return self.state
    def sample_error(self):
        # `+=` mutates the SAME array in place, so every returned reference aliases one object
        self.state += np.random.randn(1)
        return self.state
plt.figure(figsize=(16, 6))
plt.subplot(121)
test = Test()
right_sample = [test.sample_right() for _ in range(1000)]
error_sample = [test.sample_error() for _ in range(1000)]
plt.plot(right_sample, label="a")
plt.plot(error_sample, label="b")
plt.legend()
plt.subplot(122)
test1 = Test()
right_sample = [test1.sample_right()[0] for _ in range(1000)]
error_sample = [test1.sample_error()[0] for _ in range(1000)]
plt.plot(right_sample, label="a")
plt.plot(error_sample, label="b")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Lemmatize the octacorpus
###Code
def lemmatize(sentence):
lemma_sentence = " ".join(estnltk.Text(sentence.strip()).lemmas)
return lemma_sentence
lemmatize(sentence)
with tqdm(open("../datasets/octacorpus.txt")) as corpus_file:
lemmatized_sentences = Parallel(n_jobs=30)(delayed(lemmatize)(sentence) for sentence in corpus_file)
###Output
_____no_output_____
###Markdown
To create a new dataset by filtering the octacorpus based on a word, run the two following cells.
###Code
lemma_sentences = open("../datasets/octakorpus/octacorpus_lemmatized.txt").readlines()
word_sentences = open("../datasets/octakorpus/octacorpus.txt").readlines()
word_to_sentences('mesikäpp')
# words = ['Eesti', 'Saksamaa', 'Soome', 'Itaalia', 'Leedu', 'Šveits', '']
countries = ['Saksamaa', 'Prantsusmaa', 'Inglismaa', 'Soome', 'Hispaania', 'Läti', 'Rootsi', 'Eesti', 'Holland', 'Šveits']
cities = ['Berliin', 'Pariis', 'London', 'Helsingi', 'Madrid', 'Riia', 'Stockholm', 'Tallinn', 'Amsterdam', 'Zürich']
def word_to_sentences(word):
print(word)
filtered_lemma_sentences = []
filtered_word_sentences = []
for i, lemma_sentence in enumerate(lemma_sentences):
lemma_words = lemma_sentence.split()
if len(lemma_words) <= 1:
continue
if word in lemma_words:
filtered_lemma_sentences.append(lemma_sentence)
filtered_word_sentences.append(word_sentences[i])
open('../datasets/sentences/{}_lemmatized.txt'.format(word), 'w').writelines(filtered_lemma_sentences)
open('../datasets/sentences/{}_word.txt'.format(word), 'w').writelines(filtered_word_sentences)
Parallel(n_jobs=20)(delayed(word_to_sentences)(word) for word in countries + cities)
word1_sentences = open('../datasets/contexts/tee_jook_contexts_s_True_w_4.txt').readlines()
###Output
_____no_output_____
###Markdown
Lemma file --> contexts {window, symmetry}
###Code
sent = 'Kõrvalosades on Eesti komitee , loomeliidud , ERSP ja Saksamaa ainuke laip - interliikumine .'
sentence_to_contexts('countries', sent, True, 2)
success_indexes = []
countries = ['Saksamaa', 'Prantsusmaa', 'Inglismaa', 'Soome', 'Hispaania', 'Läti', 'Rootsi', 'Eesti', 'Holland', 'Šveits']
cities = ['Berliin', 'Helsingi', 'Pariis', 'London', 'Madrid', 'Riia', 'Stockholm', 'Amsterdam', 'Tallinn', 'Zürich']
def sentence_to_contexts(word, sentence, symmetric, window_size):
sentence_text = estnltk.Text(sentence)
df = sentence_text.get.word_texts.lemmas.postags.postag_descriptions.as_dataframe
df = df[(df.postags != 'Z') & (df.postags != 'J')].reset_index()
# the usual case.
indexes = df.loc[df.lemmas==word].index
# special cases
if word == 'countries':
indexes = df.loc[df.lemmas.isin(countries)].index
if word == 'cities':
indexes = df.loc[df.lemmas.isin(cities)].index
if word == 'TallinnTartu':
indexes = df.loc[df.lemmas.isin(['Tallinn', 'Tartu'])].index
results = []
for index in indexes:
left_context = " ".join(df.word_texts[max(index-window_size,0):index])
right_context = " ".join(df.word_texts[index+1:index+window_size+1])
word_context = "{} {}".format(left_context, right_context).strip().lower()
lemma_left_context = " ".join(df.lemmas[max(index-window_size,0):index])
lemma_right_context = " ".join(df.lemmas[index+1:index+window_size+1])
lemma_context = "{} {}".format(lemma_left_context, lemma_right_context).strip().lower()
if symmetric and (len(left_context.split()) != window_size or len(right_context.split()) != window_size):
continue
try:
model[lemma_context.split()]
results.append({'word_context': word_context, 'lemma_context': lemma_context})
except KeyError:
pass
except ValueError:
pass
return results
def sentences_to_contexts(word, sentences, symmetric, window_size):
word_contexts = []
lemma_contexts = []
for i, sentence in enumerate(sentences):
if i %1000 == 0:
print(word, i)
contexts = sentence_to_contexts(word, sentence, symmetric=symmetric, window_size=window_size)
for context in contexts:
word_contexts.append(context['word_context'])
lemma_contexts.append(context['lemma_context'])
return {'word_contexts': word_contexts, 'lemma_contexts': lemma_contexts}
# apple_sentences = open('../datasets/oun_word.txt').readlines()
# rock_sentences = open('../datasets/kivi_word.txt').readlines()
# pear_sentences = open('../datasets/pirn_word.txt').readlines()
words = [('joogitee', 'sõidutee'),
('õun', 'banaan'),
('õun', 'puder'),
('õun', 'kivi'),
('ämber', 'pang'),
('hea', 'halb'),
('countries', 'cities'),
('Eesti', 'TallinnTartu')]
words_5k = [('hea', 'halb'),
('countries', 'cities'),
('Eesti', 'TallinnTartu')]
combinations = []
for window in [2,3,4]:
for symmetric in [True, False]:
combinations.append([window, symmetric])
for word1, word2 in words_5k[1:]:
print(word1, word2)
# word1 = 'tee_sõidu'
# word2 = 'tee_jook'
word1_sentences = open('../datasets/sentences/{}_word.txt'.format(word1)).readlines()
word2_sentences = open('../datasets/sentences/{}_word.txt'.format(word2)).readlines()
Parallel(n_jobs=6)(delayed(generate_dataset)(window, symmetric, word1, word2, word1_sentences, word2_sentences)
for window, symmetric in combinations)
def generate_dataset(window, symmetric, word1, word2, word1_sentences, word2_sentences):
print(window, symmetric)
if word1 == 'joogitee':
find_word_1 = 'tee'
find_word_2 = 'tee'
else:
find_word_1 = word1
find_word_2 = word2
word1_contexts = sentences_to_contexts(find_word_1, word1_sentences[:11000], symmetric, window)['lemma_contexts']
word2_contexts = sentences_to_contexts(find_word_2, word2_sentences[:11000], symmetric, window)['lemma_contexts']
l1 = len(word1_contexts)
l2 = len(word2_contexts)
l = min(l1,l2,5000)
word1_contexts = word1_contexts[:l]
word2_contexts = word2_contexts[:l]
print(len(word1_contexts), len(word2_contexts), min(l1,l2,5000))
open('../datasets/contexts/{}_s_{}_w_{}.txt'.format(word1, symmetric, window), 'w').writelines(list(map(lambda x: x+'\n', word1_contexts)))
open('../datasets/contexts/{}_s_{}_w_{}.txt'.format(word2, symmetric, window), 'w').writelines(list(map(lambda x: x+'\n', word2_contexts)))
print(combinations)
for window, symmetric in combinations:
print(window)
print(symmetric)
symmetric = True
window = 2
apple_contexts = open('../datasets/apple_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
rock_contexts = open('../datasets/rock_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
pear_contexts = open('../datasets/pear_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
contexts = apple_contexts + rock_contexts + pear_contexts
labels = [0]*len(apple_contexts) + [1]*len(rock_contexts) + [2]*len(pear_contexts)
len(contexts)
###Output
_____no_output_____
###Markdown
contexts -> dist/sim
###Code
def cos_metric_contexts(s1, s2, sum_init, comparison, metric):
if type(s1) == str:
s1 = s1.split()
s2 = s2.split()
if len(s2) < len(s1):
s1, s2 = s2, s1
m = len(s1)
n = len(s2)
similarity_matrix = np.empty((m,n))
similarity_matrix[:] = np.NAN
for i, w1 in enumerate(s1):
for j, w2 in enumerate(s2):
similarity_matrix[i,j] = metric([model[w1]], [model[w2]])[0]
best_sum = sum_init
for perm in itertools.permutations(list(range(n)), m):
perm_sum = 0
for i, j in enumerate(perm):
perm_sum += similarity_matrix[i][j]
if comparison(perm_sum, best_sum):
best_sum = perm_sum
return best_sum/m
def cos_sim_contexts(s1, s2):
return cos_metric_contexts(s1, s2,
sum_init=-np.inf,
comparison=operator.gt,
metric=cosine_similarity)
def cos_dist_contexts(s1, s2):
return cos_metric_contexts(s1, s2,
sum_init=np.inf,
comparison=operator.lt,
metric=cosine_distances)
n = len(contexts)
similarity_matrix = np.empty((n,n))
similarity_matrix[:] = np.NAN
for i, s1 in tqdm(enumerate(contexts)):
for j, s2 in enumerate(contexts):
similarity_matrix[i][j] = cos_sim_contexts(s1, s2)
similarity_matrix
# n_similarity matrix
n = len(contexts)
similarity_matrix = np.empty((n,n))
similarity_matrix[:] = np.NAN
for i, s1 in tqdm(enumerate(contexts)):
# for j, s2 in enumerate(contexts):
# similarity_matrix[i][j] = model.n_similarity(s1.split(), s2.split())
similarity_matrix[i] = Parallel(n_jobs=1)(delayed(n_sim)(s1.split(), s2.split()) for s2 in contexts)
similarity_matrix
def n_similarity(s1, s2):
vec1 = np.mean(model[s1.split()], axis=0)
vec2 = np.mean(model[s2.split()], axis=0)
return cosine_similarity([vec1], [vec2])[0][0]
def n_distance(s1, s2):
vec1 = np.mean(model[s1.split()], axis=0)
vec2 = np.mean(model[s2.split()], axis=0)
return cosine_distances([vec1], [vec2])[0][0]
def matrix_row_sim(s1, contexts, row_length):
row = np.empty(row_length)
for j, s2 in enumerate(contexts):
# row[j] = model.n_similarity(s1.split(), s2.split())
row[j] = n_similarity(s1, s2)
return row
def matrix_row_dist(s1, contexts, row_length):
row = np.empty(row_length)
for j, s2 in enumerate(contexts):
row[j] = n_distance(s1, s2)
return row
for window in [2,3,4]:
for symmetric in [True, False]:
print(window, symmetric)
apple_contexts = open('../datasets/apple_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
rock_contexts = open('../datasets/rock_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
pear_contexts = open('../datasets/pear_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
contexts = apple_contexts + rock_contexts + pear_contexts
labels = [0]*len(apple_contexts) + [1]*len(rock_contexts) + [2]*len(pear_contexts)
n = len(contexts)
distance_matrix_rows = Parallel(n_jobs=12)(delayed(matrix_row_dist)(s1, contexts, n) for s1 in contexts)
distance_matrix = np.array(distance_matrix_rows)
filename = '../datasets/apple-rock-pear/cos_dist_w_{}_s_{}.npy'.format(window, symmetric)
np.save(filename, distance_matrix)
for metric in [cosine_similarity, cosine_distances][:1]:
for window in [2,3,4]:
for symmetric in [True, False]:
print(metric.__name__, window, symmetric)
apple_contexts = open('../datasets/apple_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
rock_contexts = open('../datasets/rock_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
pear_contexts = open('../datasets/pear_contexts_s_{}_w_{}.txt'.format(symmetric, window)).read().splitlines()
contexts = apple_contexts + rock_contexts + pear_contexts
labels = [0]*len(apple_contexts) + [1]*len(rock_contexts) + [2]*len(pear_contexts)
n = len(contexts)
matrix = np.empty((n,n))
matrix[:] = np.NAN
print('constructing matrix')
tfidf_vectorizer = TfidfVectorizer()
tfidf = tfidf_vectorizer.fit_transform(contexts)
for i in range(n):
matrix[i,:] = metric(tfidf[i], tfidf).flatten()
print('saving')
filename = '../datasets/apple-rock-pear/tfidf_{}_w_{}_s_{}.npy'.format(metric.__name__, window, symmetric)
np.save(filename, matrix)
cosine_similarity.__name__
distance_matrix = 1 - similarity_matrix
mds = MDS(n_components=2, dissimilarity='precomputed', n_jobs=28, verbose=10)
mds.fit(distance_matrix)
similarity_matrix
similarity_matrix = np.array(similarity_matrix)
plt.scatter(mds.embedding_[:,0], mds.embedding_[:,1], s=2, alpha=0.1,c=np.array(labels)+5)
mds.embedding_.shape[0]
y = np.array(labels)
# y[np.where(y==0)[0]]
labels0 = y[np.where(y==0)[0]]
labels1 = y[np.where(y==1)[0]]
labels2 = y[np.where(y==2)[0]]
y[np.where(y==0)]
mds_label0 = y[np.where(y==0)[0]]
mds_label1 = y[np.where(y==1)[0]]
mds_label2 = y[np.where(y==2)[0]]
plt.scatter(mds.embedding_[:,0][np.where(y==0)[0]], mds.embedding_[:,1][np.where(y==0)[0]], s=2, alpha=0.1)
plt.show()
plt.scatter(mds.embedding_[:,0][np.where(y==1)[0]], mds.embedding_[:,1][np.where(y==1)[0]], s=2, alpha=0.1)
plt.show()
plt.scatter(mds.embedding_[:,0][np.where(y==2)[0]], mds.embedding_[:,1][np.where(y==2)[0]], s=2, alpha=0.1)
###Output
_____no_output_____
###Markdown
Make dist from sim
###Code
for dist_file in glob.glob('../datasets/apple-rock-pear/cos_dist*'):
dist = np.load(dist_file)
sim = 1-dist
sim_filename = dist_file.replace('dist', 'sim')
np.save(sim_filename, sim)
print(dist_file, sim_filename)
###Output
_____no_output_____
###Markdown
Clipped datasets
###Code
for dist_file in glob.glob('../datasets/apple-rock-pear/cos_dist*'):
dist = np.load(dist_file)
dist[dist>1] = 1
filename = dist_file.replace('cos_dist', 'clipped_cos_dist')
np.save(filename, dist)
print(dist_file, filename)
for dist_file in glob.glob('../datasets/apple-rock-pear/clipped_cos_dist*'):
dist = np.load(dist_file)
sim = 1-dist
sim_filename = dist_file.replace('dist', 'sim')
np.save(sim_filename, sim)
print(dist_file, sim_filename)
###Output
../datasets/apple-rock-pear/clipped_cos_dist_w_2_s_False.npy ../datasets/apple-rock-pear/clipped_cos_sim_w_2_s_False.npy
../datasets/apple-rock-pear/clipped_cos_dist_w_4_s_True.npy ../datasets/apple-rock-pear/clipped_cos_sim_w_4_s_True.npy
../datasets/apple-rock-pear/clipped_cos_dist_w_3_s_True.npy ../datasets/apple-rock-pear/clipped_cos_sim_w_3_s_True.npy
../datasets/apple-rock-pear/clipped_cos_dist_w_3_s_False.npy ../datasets/apple-rock-pear/clipped_cos_sim_w_3_s_False.npy
../datasets/apple-rock-pear/clipped_cos_dist_w_2_s_True.npy ../datasets/apple-rock-pear/clipped_cos_sim_w_2_s_True.npy
../datasets/apple-rock-pear/clipped_cos_dist_w_4_s_False.npy ../datasets/apple-rock-pear/clipped_cos_sim_w_4_s_False.npy
###Markdown
utils> Utility routines.
###Code
#hide
from nbdev.showdoc import *
#export
import numpy as np
def softmax(x):
e_x = np.exp(x - x.max(axis=-1, keepdims=True))
return e_x / e_x.sum(axis=-1, keepdims=True)
np.random.seed(1)
delta = 1e-6
assert np.abs(softmax(np.random.rand(2,3)) - \
np.array([[0.33185042, 0.44943301, 0.21871657],[0.37502195, 0.32098933, 0.30398872]]) ).sum() < delta
#export
def calc_prob(n=1000, s=2.5, dim=3):
"s = 'scale': how strongly to 'push' the points towards the ends"
logits = (np.random.rand(n,dim)*2-1)*s
prob = softmax(logits)
targ = np.argmax(prob, axis=1) # target values
return prob, targ
p, t = calc_prob(n=1,s=5,dim=4)
assert np.abs(p - np.array([[0.02079038, 0.10225762, 0.17064115, 0.70631085]])).flatten().sum() < delta
#export
def one_hot(targs):
"""convert array of single target values to set of one-hot vectors"""
out = np.zeros((targs.size, targs.max()+1))
out[np.arange(targs.size),targs] = 1
return out
assert one_hot(np.random.randint(7,size=25)).shape == (25,7)
assert np.abs(one_hot(np.array([1,0,3,2])) - np.array([[0,1,0,0],[1,0,0,0],[0,0,0,1],[0,0,1,0]])).sum() < delta
#export
def on_colab(): # cf https://stackoverflow.com/questions/53581278/test-if-notebook-is-running-on-google-colab
"""Returns true if code is being executed on Colab, false otherwise"""
try:
return 'google.colab' in str(get_ipython())
except NameError: # no get_ipython, so definitely not on Colab
return False
###Output
_____no_output_____
###Markdown
Since we'll never run the test suite *on* Colab, the test should be negative:
###Code
assert not on_colab()
###Output
_____no_output_____
###Markdown
UtilsThe utils notebook contains all the methods shared between all the other notebooks
###Code
import tensorflow as tf
from tensorflow.python.keras.preprocessing import dataset_utils
import os
###Output
_____no_output_____
###Markdown
Load information about the dataset, such as the file paths (to retrieve the images' names), the labels (for each image) and the class names (of the entire dataset). With shuffle=False, the images are kept in alphanumeric order.
###Code
def ids_and_labels_from_file(path):
    '''the path must be the root directory (the folder before the class subfolders)'''
    image_paths, labels, class_names = dataset_utils.index_directory(path, labels='inferred', formats='.jpg', shuffle=False)
    ft_ids = [path.split('/')[4] for path in image_paths]  # the image id is the 5th path component in this layout
return ft_ids, labels, class_names
###Output
_____no_output_____
###Markdown
Preprocess images to move pixel values from range [0,255] to [-1,1].
###Code
def preprocess(images, labels):
images = tf.keras.applications.mobilenet_v2.preprocess_input(images)
return images, labels
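# e.g. (illustrative usage, assuming `ds` is a tf.data.Dataset yielding (images, labels) batches):
#     ds = ds.map(preprocess)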
###Output
_____no_output_____
###Markdown
Some handy util tools. Each cell provides a different function.
###Code
## Copy all training images specified in split.csv to another folder
import pandas as pd
import shutil
SPLIT_CSV = 'interim/by_dop80c_1312.split.csv'
DEST_TRAINING_PATH = 'interim/by_dop80c_1312'
split_df = pd.read_csv(SPLIT_CSV)
for index, d in split_df[split_df['train']].iterrows():
#print (d.img_filepath)
shutil.copy(d.img_filepath, DEST_TRAINING_PATH)
# copy files
import shutil
from pathlib import Path
import os
ds = 'by_dop80c_1312' # 'opendata_luftbild_dop60_1312'
FROM_DIR = Path('aerial_images_resampled/{ds}'.format(ds=ds))
DST_DIR = Path('interim/{ds}/deepforest_r1/train2'.format(ds=ds))
PATTERN = [
'1288638.722245550_6129295.399946138_1289422.201785473_6130078.879486061.tiff',
'1288638.722245550_6128569.248177430_1289422.201785473_6129352.727717352.tiff',
'1290091.025782968_6128569.248177430_1290874.505322890_6129352.727717352.tiff',
'1290817.177551676_6127843.096408719_1291600.657091599_6128626.575948643.tiff',
'1287186.418708133_6127116.944640011_1287969.898248056_6127900.424179933.tiff',
'1290091.025782968_6127843.096408719_1290874.505322890_6128626.575948643.tiff',
'1289364.874014259_6128569.248177430_1290148.353554182_6129352.727717352.tiff',
'1287912.570476842_6129295.399946138_1288696.050016765_6130078.879486061.tiff',
'1288638.722245550_6130021.551714847_1289422.201785473_6130805.031254769.tiff']
os.makedirs(DST_DIR, exist_ok=True)
for pattern in PATTERN:
for s in FROM_DIR.glob(pattern):
print(s)
shutil.copy(s, DST_DIR)
## Downsampling map tiles from a higher zoom level down to specified lower zoom levels
#
import mercantile
import supermercado
import rasterio
from rasterio import plot, transform
import numpy
import json
import os
import pathlib
import os
os.environ['GDAL_PAM_ENABLED'] = 'NO'
AREA = 'contrib/munich/munich.boundary.geojson'
SRC_ZOOM = 15
SOURCE_MAP_TILE_BAND_COUNT = 4
SOURCE_MAP_TILE_WIDTH_PX = 256
SOURCE_MAP_TILE_HEIGHT_PX = 256
SOURCE_MAP_TILE_DTYPE = numpy.uint8
SOURCE_MAP_TILE_TYPE = ".png"
OUTPUT_ZOOMS = range(0, SRC_ZOOM)
SOURCE_MAP_TILE_PATH = 'temp/diff/heatmap.500.clipped'
OUTPUT_TILE_PATH = 'temp/diff/heatmap.500.clipped'
SOURCE_MAP_TILE_EPSG = 3857 # only epsg:3857 is supported
f = open(AREA)
area = json.load(f)
f.close()
features = area["features"]
features = [f for f in supermercado.super_utils.filter_features(features)]
for z in reversed(OUTPUT_ZOOMS):
for t in supermercado.burntiles.burn(features, z):
tile = t.tolist()
#print(tile)
children = mercantile.children(tile)
temp = numpy.zeros((SOURCE_MAP_TILE_BAND_COUNT, SOURCE_MAP_TILE_HEIGHT_PX*2, SOURCE_MAP_TILE_WIDTH_PX*2), dtype=SOURCE_MAP_TILE_DTYPE)
for y in range(children[1].y, children[3].y+1):
for x in range(children[0].x, children[1].x+1):
try:
path = SOURCE_MAP_TILE_PATH if children[0].z == SRC_ZOOM else OUTPUT_TILE_PATH
child = path + "/" + str(children[0].z) + "/" + str(x) + "/" + str(y) + SOURCE_MAP_TILE_TYPE
with rasterio.open(child) as tile_src:
data_src = tile_src.read()
temp[:,
(y-children[1].y)*SOURCE_MAP_TILE_HEIGHT_PX:(y-children[1].y+1)*SOURCE_MAP_TILE_HEIGHT_PX,
(x-children[0].x)*SOURCE_MAP_TILE_WIDTH_PX :(x-children[0].x+1)*SOURCE_MAP_TILE_WIDTH_PX] = data_src[:,:,:]
except rasterio.errors.RasterioIOError as e:
pass
dest_path = OUTPUT_TILE_PATH + "/" + str(tile[2]) + "/" + str(tile[0]) + "/" + str(tile[1]) + SOURCE_MAP_TILE_TYPE
os.makedirs(pathlib.Path(dest_path).parent, exist_ok=True)
bb = mercantile.xy_bounds(tile[0], tile[1], tile[2])
tile_tf = rasterio.transform.from_bounds(bb.left, bb.bottom, bb.right, bb.top, SOURCE_MAP_TILE_WIDTH_PX, SOURCE_MAP_TILE_HEIGHT_PX)
with rasterio.open(dest_path, 'w', driver='PNG',
width=SOURCE_MAP_TILE_WIDTH_PX, height=SOURCE_MAP_TILE_HEIGHT_PX,
count=SOURCE_MAP_TILE_BAND_COUNT, dtype=SOURCE_MAP_TILE_DTYPE, nodata=0,
transform=tile_tf,
crs=rasterio.crs.CRS.from_epsg(SOURCE_MAP_TILE_EPSG)) as dst:
dst.write(temp)
#rasterio.plot.show(temp)
## convert tiff to png
#
from pathlib import Path
import os
os.environ['GDAL_PAM_ENABLED'] = 'NO'
PATH = 'aerial_images/opendata_luftbild_dop60_2017_wip/'
for path in Path(PATH).rglob('*.tiff'):
with rasterio.open(path) as src:
dest_path = path.parent.joinpath(path.stem+'.png')
with rasterio.open(dest_path, 'w',
driver='PNG',
height=src.shape[0],
width=src.shape[1],
count=src.count,
dtype=src.meta['dtype'],
nodata=0,
compress='deflate') as dst:
dst.write(src.read())
#for path in Path(PATH).rglob('*.aux.xml'):
# os.remove(path)
for path in Path(PATH).rglob('*.tiff'):
os.remove(path)
## show image and meta info
#
import rasterio
from rasterio import plot
PATH = 'temp/png/12/2177/1420.png'
with rasterio.open(PATH) as src:
print(src.meta)
rasterio.plot.show(src.read())
# calculate resolution of an XYZ map tile
import math
lat = 48.137154
def calc_resolution(lat, zoom):
    # web-mercator ground resolution: ~156543 m/px at the equator at zoom 0,
    # scaled by cos(latitude) and halved at every zoom level
    return 156543.04 * math.cos(math.radians(lat)) / (2**zoom)
print("18", calc_resolution(lat, 18), "meter/pixel") # meter per pixel
print("11",calc_resolution(lat, 11), "meter/pixel") # meter per pixel
print("9",calc_resolution(lat, 9), "meter/pixel") # meter per pixel
print("8",calc_resolution(lat, 8), "meter/pixel") # meter per pixel
print("7",calc_resolution(lat, 7), "meter/pixel") # meter per pixel
# remove "small trees" labels in labelme annotation json files
import glob
import math
import json
import pathlib
from pathlib import Path
ds = 'by_dop80c_1312' # 'opendata_luftbild_dop60_1312' #
FILTER_LABEL = "Tree"
FILTER_TYPE = "circle"
FILTER_DIAMETER = 10 # pixel
labels = glob.glob('interim/{ds}/deepforest_r1/train2/*.json'.format(ds=ds))
def diameter(points):
p1 = points[0]
p2 = points[1]
return 2 * math.sqrt(math.pow(p1[0]-p2[0],2) + math.pow(p1[1]-p2[1],2))
def filter(shape):
if shape['label'] == FILTER_LABEL and \
shape['shape_type'] == FILTER_TYPE and \
diameter(shape['points']) >= FILTER_DIAMETER:
return True
return False
for label in labels:
f = Path(label)
gjson = None
print(f)
with open(f) as json_file:
gjson = json.load(json_file)
gjson['shapes'] = [s for s in gjson['shapes'] if filter(s)]
with open(f, 'w') as outfile:
json.dump(gjson, outfile, indent=2)
# recreate labelme annotation json file from (e.g. deepforest) annotation csv
import json
import csv
import os
import pandas as pd
import glob
from pathlib import Path
PATH = "interim/by_dop80c_1312/deepforest_r1/response/crop/"
imageHeight = imageWidth = 1312
for c in glob.glob(PATH + "*.csv_"):
df = pd.read_csv(c)
files = list(df['image_path'].unique())
for file in files:
label = { "version": "4.5.10",
"flags": {},
"shapes": [],
"imagePath": Path(file).name,
"imageData": None,
"imageHeight": imageHeight,
"imageWidth": imageWidth
}
bboxes = df[df['image_path'] == file]
for index, row in bboxes.iterrows():
shape = {
"label": row["label"],
"points": [
[
row["xmin"],
row["ymin"]
],
[
row["xmax"],
row["ymax"]
]
],
"group_id": None,
"shape_type": "rectangle",
"flags": {}
}
label["shapes"].append(shape)
dest = PATH + os.path.splitext(file)[0] + ".json"
with open(dest, 'w') as outfile:
json.dump(label, outfile, indent=2)
# recreate labelme annotation json file from pickle
import json
import csv
import os
import pandas as pd
import glob
import rasterio
import torch
from torchvision.ops import nms
from urbantree.deepforest.detection import run_nms
from pathlib import Path
import numpy as np
ds = 'opendata_luftbild_dop60_1312' #'by_dop80c_1312' #
PICKL_DIR = Path('interim/{ds}/deepforest_r1/predict/b'.format(ds=ds))
IMG_DIR = Path('interim/{ds}/deepforest_r1/train2'.format(ds=ds))
for f in IMG_DIR.glob('*.tiff'):
with rasterio.open(f) as img:
imageHeight = img.height
imageWidth = img.width
df = pd.read_pickle(PICKL_DIR.joinpath(f.stem + ".pkl"))
df = run_nms(df, iou_threshold=0.1)
label = { "version": "4.5.10",
"flags": {},
"shapes": [],
"imagePath": f.name,
"imageData": None,
"imageHeight": imageHeight,
"imageWidth": imageWidth
}
for index, row in df.iterrows():
shape = {
"label": row.label,
"points": [
[
(row.xmin + row.xmax)/2.0,
(row.ymin + row.ymax)/2.0
],
[
(row.xmin + row.xmax)/2.0,
row.ymax
]
],
"group_id": None,
"shape_type": "circle",
"flags": {}
}
label["shapes"].append(shape)
dest = IMG_DIR.joinpath(f.stem +".json")
with open(dest, 'w') as outfile:
json.dump(label, outfile, indent=2)
# geojson to shapefile
import geopandas as gpd
gdf = gpd.read_file('temp/diff2/diff.geojson')
gdf.to_file('temp/diff2/diff.shp')
#gdf.to_file("temp/diff2/diff.export.geojson", driver='GeoJSON')
# Overpass API for querying Admin boundary
## {{geocodeArea:Munich}}->.searchArea;
## (
## //node["admin_level"="9"](area.searchArea);
## //way["admin_level"="9"](area.searchArea);
## relation["admin_level"="9"](area.searchArea);
## );
## out body;
## >;
## out skel qt;
import geopandas as gpd
# boundary
city = gpd.read_file('contrib/munich/munich.boundary.geojson')
district = gpd.read_file('contrib/munich/munich-admin.boundary.geojson')
## trees
trees = gpd.read_file('temp/2017.shp')
#
## calculated missing tree
#missing= gpd.read_file('contrib/munich/missing-2017-2020.geojson')
#
#missing_city = gpd.clip(missing, city)
#missing_city.to_file("temp/missing/missing-tree-in-city-2017-2020.shp")
#
#for _, row in district.iterrows():
# name = row['name'].replace(' ', '_')
# missing_district = gpd.clip(missing, row.geometry)
# missing_district.to_file("temp/missing/missing-tree-in-dist-{d}-2017-2020.shp".format(d=name))
district_6933 = district.to_crs('epsg:6933')
trees_6933 = trees.to_crs('epsg:6933')
for idx, dist in district_6933.iterrows():
trees_dist = gpd.clip(trees_6933, dist.geometry)
tree_ratio = trees_dist.geometry.area.sum() / dist.geometry.area
district_6933.loc[idx, "tree_cover_ratio"] = tree_ratio
district_6933['district_area'] = district_6933.geometry.area / 10**6
district_6933['district_name'] = district_6933.name
district_6933[['district_name','district_area','tree_cover_ratio']].to_json('temp/district_tree_cover-2017.json')
# filter bbox cluster (keep large neighborhood)
import geopandas as gpd
import rtree.index
from functools import reduce
from tqdm import tqdm
SRC = 'temp/diff/diff.shp'
OUTPUT = 'temp/diff/diff_major.4.shp'
MIN_NEIGHBORS = 4
MAX_THRESHOLD = 60 # meter
GEOMETRY_ONLY = True
bboxes = gpd.read_file(SRC).to_crs('EPSG:3857')
index = rtree.index.Index(interleaved=True)
for idx, bbox in bboxes.iterrows():
index.insert(idx, bbox.geometry.bounds, obj=bbox)
selected = []
for idx, bbox in tqdm(bboxes.iterrows(), total=len(bboxes)):
neighbors = index.nearest(bbox.geometry.bounds, MIN_NEIGHBORS+1, objects='raw')
neighbors = filter(lambda x: x is not None, neighbors)
distances = map(lambda x: bbox.geometry.distance(x.geometry), neighbors)
distances = list(filter(lambda x: x <= MAX_THRESHOLD, distances))
if len(distances) >= MIN_NEIGHBORS:
selected.append(idx)
filtered = bboxes.loc[selected]
if GEOMETRY_ONLY:
output_df = filtered.geometry
else:
output_df = filtered
output_df.to_file(OUTPUT)
output_df.to_crs('EPSG:4326').to_file(OUTPUT + ".geojson", driver='GeoJSON')
# convert shp to geojson
import geopandas as gpd
SRC = 'temp/diff/2017.shp'
OUTPUT = 'temp/diff/2017.geojson'
gpd.read_file(SRC).to_file(OUTPUT, driver='GeoJSON')
# scale a geometry
import geopandas as gpd
import json
BOUNDARY = 'contrib/munich/munich.boundary.geojson'
SCALE = 0.95
# shrunk boundary to avoid edge cases
b = gpd.read_file(BOUNDARY)
b.geometry = b.geometry.scale(xfact=SCALE, yfact=SCALE)
b.to_file('temp/2017.s.shp')
# clip tile images from geometry mask
from pathlib import Path
import imageio
from PIL import Image
import numpy as np
SIZE = 256
SRC = 'temp/diff/heatmap.500'
MASK = 'aerial_images/munich_s_area'
OUTPUT = 'temp/diff/heatmap.500.clipped'
PATTERN = "*.png"
for p in Path(SRC).rglob(PATTERN):
y = p.name
x = p.parent.name
z = p.parent.parent.name
mask = Path(MASK) / z / x / y
output = Path(OUTPUT) / z / x / y
output.parent.mkdir(parents=True,exist_ok=True)
src = imageio.imread(p)
mask = imageio.imread(mask)
idx = mask > 0
idx[:,:,0] = idx[:,:,3]
idx[:,:,1] = idx[:,:,3]
idx[:,:,2] = idx[:,:,3]
array = np.zeros([SIZE, SIZE, 4], dtype=np.uint8)
array[idx]=src[idx]
im = Image.fromarray(array)
im.save(output)
###Output
_____no_output_____
###Markdown
###Code
history=model.fit(train_images[..., np.newaxis],
train_labels, epochs=2, batch_size=256)
#train_images[..., np.newaxis] -> adds new dimension
#df= pd.DataFrame(history.history) returns history of model run, contains accuracy and loss and other metrics
#df.head()
#loss_plot= df.plot(y='loss', title='Loss vs Epoch', legend=False) used to draw loss diagram over epochs
#loss_plot.set(xlabel='Epoch', ylabel='Loss')
i=0
img = train_images[i,:,:]
plt.imshow(img)
plt.show()
###Output
_____no_output_____
###Markdown
Import Dependencies
###Code
import pandas as pd
import numpy as np
import operator
from scipy.stats import randint, uniform
from sklearn.model_selection import KFold
from sklearn.model_selection import RandomizedSearchCV
from sklearn import metrics
from sklearn.model_selection import cross_validate
###Output
_____no_output_____
###Markdown
Import Models
###Code
from xgboost.sklearn import XGBClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
# NOT RECOMMENDED! ignore warnings
import warnings
import sys
if not sys.warnoptions:
warnings.simplefilter("ignore")
import networkx as nx
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
***
###Code
# TODO: Correlation summary
class SingleModelSearch:
"""
TODO
1. Review hyperparameters grid.
3. Add average training AUC
Search for the best model type using cross validation.
Parameters
----------
n_iter: int
Number of parameter settings that are sampled.
cv: int
Number of cross validation folds.
scoring: string, default='roc_auc'
Scoring metrics (e.g: AUC, Accuracy, F1, etc)
n_jobs: int, default=None
Number of thread to use for parallel computing.
random_state: int, default=None
Random seed for RandomizedSearchCV().
Set seed to integer to get reproducible result.
verbose: boolean, default=False
Set to True to print messages when searching in progress.
Attributes
----------
model_search_params_: dict
Contains all the optimal CV parameters for every searched model.
model_search_score_: DataFrame
Contains all the best CV score for every searched model.
best_model_: dict
Contains the name, parameters and score of best model.
"""
_model = {
'logistic': LogisticRegression(max_iter=1000),
'lda': LinearDiscriminantAnalysis(),
'naivebayes': GaussianNB(),
'rf': RandomForestClassifier(n_estimators=100),
'xgboost': XGBClassifier(),
'knn': KNeighborsClassifier(),
'svc': SVC(kernel='rbf'),
'mlp': MLPClassifier(max_iter=500)
}
_params_grid = {
'logistic': {
'solver' : ['liblinear', 'saga'],
'C' : [1e-3, 1e-2, 0.1, 1, 10, 100]
},
'rf': {
'max_depth': randint(10, 100),
'max_features': ['auto', 'sqrt'],
'min_samples_leaf': randint(1, 4),
'min_samples_split': randint(2, 10),
'bootstrap': [True, False]
},
'xgboost': {
'max_depth': randint(1,6),
'min_child_weight': randint(0,6),
'subsample': uniform(loc=0.6, scale=0.4),
'colsample_bytree': uniform(loc=0.6, scale=0.4),
'gamma': [i/10.0 for i in range(0,5)],
'reg_alpha': [1e-5, 1e-2, 0.1, 1, 100]
},
'knn': {
'n_neighbors': randint(1, 100)
},
'svc': {
'gamma': [0.1, 1, 10, 100],
'C': [0.1, 1, 10, 100, 1000]
},
'mlp': {
'hidden_layer_sizes': [(50,50,50), (50,100,50), (100,)],
'activation': ['tanh', 'relu'],
'solver': ['sgd', 'adam'],
'alpha': [0.0001, 0.05],
'learning_rate': ['constant','adaptive']
}
}
def __init__(self, n_iter, cv, scoring='roc_auc', n_jobs=None,
random_state=None, verbose=False):
self.n_iter = n_iter
self.scoring = scoring
self.n_jobs = n_jobs
self.cv = cv
self.random_state = random_state
self.verbose = verbose
def _get_params_score(self, X, y, model):
"""
Find the optimal CV parameters and score for each model types.
Parameters
----------
X: DataFrame
Input features
Y: DataFrame
Input label
model: string
Name of the model to be tested. (See class variables)
Returns
-------
params: dict
Best CV model parameters.
score: int
Best CV score.
"""
# valid model name that has searching grid
if model in self._params_grid.keys():
rand_search = RandomizedSearchCV(
estimator = self._model[model],
param_distributions = self._params_grid[model],
scoring = self.scoring,
n_iter = self.n_iter,
cv = self.cv,
n_jobs = self.n_jobs,
random_state = self.random_state)
rand_search.fit(X, y)
return rand_search.best_params_, rand_search.best_score_
# valid model name without searching grid
elif model in self._model.keys():
cv_fit = cross_validate(self._model[model], X, y,
scoring=self.scoring, cv=self.cv,
n_jobs=self.n_jobs)
return self._model[model].get_params(), cv_fit['test_score'].mean()
else:
raise Exception('Invalid model! Please input a valid model name')
def fit(self, X, y):
"""
Parameters
----------
X: DataFrame
Input features
Y: DataFrame
Input label
"""
# declare dtype for desired output
self.model_search_params_ = {}
self.model_search_score_ = pd.DataFrame([], columns=['model', 'score'])
self.best_model_ = {}
for model in self._model.keys():
if self.verbose == True:
print("Currently testing for {}".format(model))
# find the CV results and optimal params
params, score= self._get_params_score(X, y, model)
# record best CV params & score
self.model_search_params_[model] = params
self.model_search_score_ = self.model_search_score_.\
append({'model': model, 'score': score},
ignore_index=True)
# sort search results in descending order
self.model_search_score_ = self.model_search_score_.\
sort_values(by='score', ascending=False).\
reset_index(drop=True)
# extract the name, params and score for best model
best_model = self.model_search_score_.\
loc[self.model_search_score_.score.idxmax(), :]
self.best_model_['name'] = best_model.model
self.best_model_['score'] = best_model.score
self.best_model_['params'] = self.model_search_params_[best_model.model]
class BlendedModelSearch(SingleModelSearch):
"""
TODO:
1. Suggest candidate models (Check correlation - choose uncorrelated)
2. Add average training AUC
Search for the candidate models for blending using cross validation.
Parameters
----------
frac: float, range within (0, 1]
Proportion of features that are sampled.
n_experiments:
Number of times to repeat the sampling plus searching process.
n_iter: int
Number of parameter settings that are sampled.
cv: int
Number of cross validation folds.
scoring: string, default='roc_auc'
Scoring metrics (e.g: AUC, Accuracy, F1, etc)
n_jobs: int, default=None
Number of thread to use for parallel computing.
random_state: int, default=None
Random seed for RandomizedSearchCV() and features sampling.
Set seed to integer to get reproducible result.
verbose: boolean, default=False
Set to True to print messages when searching in progress.
Attributes
----------
model_search_params_: dict
Contains all the optimal CV parameters for every searched model.
model_search_score_: DataFrame
Contains all the best CV score for every searched model.
"""
def __init__(self, frac, n_experiment, n_iter, cv, scoring='roc_auc',
n_jobs=None, random_state=None, verbose=False):
super().__init__(n_iter=n_iter, cv=cv, scoring=scoring, n_jobs=n_jobs,
random_state=random_state, verbose=verbose)
self.frac = frac
self.n_experiment = n_experiment
def fit(self, X, y):
"""
Parameters
----------
X: DataFrame
Input features
Y: DataFrame
Input label
"""
# declare dtype for the desired output
self.model_search_params_ = {}
self.model_search_score_ = pd.DataFrame(
[],
columns=['experiment', 'model', 'score'])
for i in range(self.n_experiment):
# sample the features randomly
X_sample = X.sample(frac=self.frac, axis=1,
random_state=self.random_state + i)
y_sample = y
# test for all model types
for model in self._model.keys():
if self.verbose == True:
print("Experiment {}: Testing for {}".format(i, model))
# find the CV results and optimal params
params, score = super()._get_params_score(
X_sample, y_sample, model)
# save the searched parameters
self.model_search_params_[(i, model)] = \
{'params': params, 'feature': list(X_sample.columns)}
# save the search results
self.model_search_score_ = \
self.model_search_score_.append(
{'experiment': i, 'model': model, 'score': score},
ignore_index=True)
# sort the search results in descending order
self.model_search_score_ = self.model_search_score_.\
sort_values(by='score', ascending=False)\
.reset_index(drop=True)
class CorrFilter:
"""
Determine highly correlated pairs.
Parameters
----------
correlation_threshold: float, range = [0, 1]
Threshold to be classified as 'highly correlated'
Attributes
----------
output: DataFrame
Shows the correlated features pairs and their
corresponding correlated coefficent.
"""
def __init__(self, correlation_threshold=0.8):
self.correlation_threshold = correlation_threshold
def fit(self, data):
corr = data.corr()
# Extract the upper triangle of the correlation matrix
        upper = corr.where(np.triu(np.ones(corr.shape), k = 1).astype(bool))
record_collinear = pd.DataFrame(columns = ['pair1', 'pair2', 'corr'])
# unique features that are correlated to some other feature
to_drop = [column for column in upper.columns
if any(upper[column].abs() > self.correlation_threshold)]
for column in to_drop:
# Find the correlated features
corr_features = list(upper.index[upper[column].abs() >
self.correlation_threshold])
# Find the correlated values
corr_values = list(upper[column][upper[column].abs() >
self.correlation_threshold])
# duplicate len(corr_feat) times
drop_features = [column for _ in range(len(corr_features))]
# Record the information (need a temp df for now)
temp_df = pd.DataFrame.from_dict({'pair1': drop_features,
'pair2': corr_features,
'corr': corr_values})
# Add to dataframe
record_collinear = record_collinear.append(temp_df, ignore_index = True)
print(record_collinear)
self.output = record_collinear
def plot(self):
"""
Visualise the correlated pairs.
"""
G = nx.from_pandas_edgelist(self.output, 'pair1', 'pair2')
pos = nx.spring_layout(G, k=0.3*1/np.sqrt(len(G.nodes())), iterations=20)
# Plot the network:
nx.draw(G, with_labels=True, node_color='orange', node_size=400,
edge_color='black', linewidths=1, font_size=10)
plt.show()
###Output
_____no_output_____
###Markdown
*** Demonstration
###Code
data = pd.read_csv("data.csv")
# Test single model mode
msearch = SingleModelSearch(n_iter=1, cv=3, verbose=True, n_jobs=-1)
msearch.fit(data.loc[:, ~data.columns.isin(['bookingID', 'label'])], data['label'])
msearch.model_search_score_
###Output
Currently testing for mlp
Currently testing for naivebayes
Currently testing for lda
Currently testing for svc
Currently testing for logistic
Currently testing for knn
Currently testing for rf
Currently testing for xgboost
###Markdown
***
###Code
# Test blended mode
msearch = BlendedModelSearch(frac=0.5, n_experiment=2, n_iter=5,
cv=3, random_state=89,
n_jobs=-1,verbose=True)
msearch.fit(data.loc[:, ~data.columns.isin(['bookingID', 'label'])], data['label'])
msearch.model_search_score_
###Output
Experiment 0: Testing for mlp
Experiment 0: Testing for naivebayes
Experiment 0: Testing for lda
Experiment 0: Testing for svc
Experiment 0: Testing for logistic
Experiment 0: Testing for knn
Experiment 0: Testing for rf
Experiment 0: Testing for xgboost
Experiment 1: Testing for mlp
Experiment 1: Testing for naivebayes
Experiment 1: Testing for lda
Experiment 1: Testing for svc
Experiment 1: Testing for logistic
Experiment 1: Testing for knn
Experiment 1: Testing for rf
Experiment 1: Testing for xgboost
###Markdown
***
###Code
# test correlation filter
corr = CorrFilter()
corr.fit(data)
corr.plot()
###Output
_____no_output_____
###Markdown
Handling osmnx graphs The graphs returned from `osmnx` are `networkx` multidigraphs, which is expected because we need to represent self-loops as well as directed and parallel roads in maps.
###Code
import osmnx as ox  # assumed import; not shown in the original cells

G = ox.graph_from_address("toronto")
type(G)
fig, ax = ox.plot_graph(G)
###Output
_____no_output_____
###Markdown
let's list all the nodes of the graph
###Code
[*G.nodes()]
###Output
_____no_output_____
###Markdown
accessing a single node
###Code
G[3597533187]
###Output
_____no_output_____
###Markdown
This is the adjacency list of this node, as a dictionary whose keys are the osmids of the neighbouring nodes; as we said before, the osmids inside the values of the dictionary are the osmids of the roads. So one of the neighbours of the node `3597533187` is `6710562607`, and the edge connecting them is `96152051`, which is "Eaton Centre level 2" and is apparently a highway. The ids of the nodes are unique and the ids of the roads are unique, but as we discussed before it is acceptable for a node and a road to share the same id without any problem. ------As you would imagine, doing anything useful with the graph requires accessing the neighbours of a node and the edge connecting them. We can do that with our graph like this.
###Code
G[3597533187][6710562607] # returning the edge between them
G[3597533187][6710562607][0]['length'] # accessing the length of the edge between them
###Output
_____no_output_____
###Markdown
Obviously, accessing the edge data and traversing a node's neighbours with the statements above would obfuscate any algorithm and make it really hard to follow and read. That is why we provide a wrapper around the graph returned from `osmnx` that offers the same results in a more idiomatic way.
###Code
# from utilitis/utils/common.py
node = Node(G, 3597533187)
###Output
_____no_output_____
###Markdown
to create the node you need to pass the whole graph and the id of that node
###Code
# accessing a give node's id -- useful when
# you have a container of these objects and
# want to retrieve one
node.osmid
neighbours = node.expand()
neighbours
###Output
_____no_output_____
###Markdown
As you can see, node.expand() returned the neighbours of that node; to inspect them you access their osmid
###Code
for node in neighbours:
print(node.osmid)
###Output
6710562607
3597533185
6710562608
###Markdown
To get the distance between two nodes, access the `distance` attribute on the child node; for example, the distance between the nodes with ids `6710562607` and `3597533187`:
###Code
# 6710562607 is the first element in neighbours list
neighbours[0].distance
###Output
_____no_output_____
###Markdown
and obviously you need to know from where a given node was expanded
###Code
(neighbours[0].parent).osmid
###Output
_____no_output_____
###Markdown
let's expand `neighbours[0]` one more time
###Code
neighbours_2 = neighbours[0].expand()
for node in neighbours_2:
print(node.osmid)
###Output
3597533187
6710500027
6710562606
###Markdown
Pay attention to the expansion sequence -- we expanded `3597533187`, then `6710562607` (its first child), and then `6710500027` (the second child). `3597533187` appears twice because there is a way from `3597533187` to `6710562607` and another way in the opposite direction. To know which node caused another node to be expanded, just access the `parent` attribute
###Code
(neighbours_2[1].parent).osmid
###Output
_____no_output_____
###Markdown
and to get the path from the first node ever expanded to the current node, invoke the function `path()`
###Code
(neighbours_2[1]).path()
###Output
_____no_output_____
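###Markdown
Putting the pieces together, here is a minimal sketch of the interface the wrapper exposes; it is an illustration only -- the real class in utilitis/utils/common.py may differ in details such as caching or error handling.
###Code
class NodeSketch:
    """Illustrative stand-in for the Node wrapper described above."""
    def __init__(self, G, osmid, parent=None):
        self.G, self.osmid, self.parent = G, osmid, parent
        # length of the road from the parent node, 0 for the node we started from
        self.distance = 0 if parent is None else G[parent.osmid][osmid][0]['length']
    def expand(self):
        # wrap every neighbour, remembering this node as its parent
        return [NodeSketch(self.G, n, parent=self) for n in self.G[self.osmid]]
    def path(self):
        # walk the parent pointers back to the first node ever expanded
        node, osmids = self, []
        while node is not None:
            osmids.append(node.osmid)
            node = node.parent
        return osmids[::-1]
###Output
_____no_output_____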
###Markdown
Random Paths In some situations we need to generate a random path between two nodes; we can do that using `randomized_search`
###Code
source(randomized_search)
route = randomized_search(G, 3715285073, 416727716)
route
ox.plot_graph_route(G, route)
###Output
_____no_output_____
###Markdown
Route as a state In some categories of algorithms, like trajectory-based search for finding the best route, we need to treat the route as a single, independent state in the search space and define how to move from one state to a neighbouring one in that space. Let's use the route we plotted above
###Code
# its length is
len(route)
###Output
_____no_output_____
###Markdown
We needed a deterministic policy that would generate the neighbouring states of a given state. The following is just a heuristic for generating the children of a route that we tried, and it gave us very good (sometimes optimal) solutions to various problems. 1. Begin with the initial route and start from the second node.2. Delete/fail that node from the graph and try to find the shortest path between the preceding node (the source of the route) and the following node (the third in the route).3. Stitch that shortest path into the initial route between the source node and the third node.4. We get a new route, the first child.5. To generate the next child we also fail the following node (the third node), so now the second and third nodes are flagged as failed in the graph and we find the shortest path from the source/first node to the fourth node; to generate the third child we fail the second, third and fourth nodes.6. We keep doing that until we have failed all the nodes from the second node up to the node just before the target/last node.7. This is the first "batch" of children; we get the second batch by starting the failures from the third node and again failing successive nodes until we reach the node adjacent to the last node in the route, the third batch by starting the failures from the fourth node, and so on with the same strategy. This means that a route of length N produces $O\left(\frac{(N-2)\,((N-2)+1)}{2}\right)$ children. The minus $2$ is because we never fail either the start/source node or the last/target node of the route. Why an upper bound in the formula for the children? Because this method of failing a list of successive nodes from the route and trying to stitch the route back together can fail for the following reasons, each of which makes us skip that child: 1. Failing the source or target node, which would obviously invalidate the heuristic.2. Failing articulation points of the graph, which would leave the source and target nodes of the route in different components. That obviously invalidates the existence of a route between the two nodes, so the child is discarded.3. Producing a child that would contain cycles. This can happen because if, for example, we failed nodes 4 through 10 and need a route from 3 to 11, producing that route might pull node 1 (the source) into the new route. When we then produce the next child we would have to fail node 1, but the algorithm does not know that node 1 is the source node, and failing it would produce an invalid child, so that child is discarded. So the example above would produce a maximum of $(26*27)/2 = 351$ routes as children if none of the points above occurred. ---------The function `children_route` returns an iterator that yields all the neighbouring states of a route, and we can for example take the first 7 children by using [`itertools.islice`](https://docs.python.org/3/library/itertools.html#itertools.islice) as follows
###Code
# unpacking the first 7 children
children = [*islice(children_route(G, route), 7)]
# to unpack all children [*children_route(G, route)]
# and as iterator you can use it in for loop
###Output
_____no_output_____
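###Markdown
The iterator used above comes from the project's own `children_route`; the sketch below is an added illustration of the heuristic described earlier. It is not the project's implementation, and it assumes `G` is the osmnx graph and `route` a list of node osmids.
###Code
import networkx as nx

def children_route_sketch(G, route):
    """Yield neighbouring routes by failing runs of consecutive intermediate nodes."""
    for start in range(1, len(route) - 1):            # first failed node of this batch
        for end in range(start, len(route) - 1):      # keep failing successive nodes
            failed = set(route[start:end + 1])
            if route[0] in failed or route[-1] in failed:
                continue                              # never fail the source or the target
            view = nx.restricted_view(G, failed, [])  # graph with the failed nodes hidden
            try:
                # stitch a shortest detour from the node before the failed run to the node after it
                detour = nx.shortest_path(view, route[start - 1], route[end + 1], weight='length')
            except nx.NetworkXNoPath:
                continue                              # an articulation point was failed; no route exists
            child = route[:start - 1] + detour + route[end + 2:]
            if len(set(child)) == len(child):         # discard children that contain cycles
                yield child
###Output
_____no_output_____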
###Markdown
let's draw each node of the route independently to keep track of what is going on in the children routes.
###Code
# don't dwell so much on the following code
import folium as fl
center_osmid = 394497127
G_gdfs = ox.graph_to_gdfs(G)
nodes_frame = G_gdfs[0]
ways_frame = G_gdfs[1]
center_node = nodes_frame.loc[center_osmid]
loc = [center_node['y'], center_node['x']]
m = fl.Map(location = loc, zoom_start = 15)
start_node = nodes_frame.loc[route[0]]
end_node = nodes_frame.loc[route[len(route)-1]]
kw = dict(fill_color='red', radius=5)
start_xy = [start_node['y'], start_node['x']]
end_xy = [end_node['y'], end_node['x']]
marker = fl.Marker(location = start_xy, **kw).add_to(m)
marker = fl.Marker(location = end_xy, **kw).add_to(m)
for u, v in zip(route[0:], route[1:]):
try:
x, y = (ways_frame.query(f'u == {u} and v == {v}').to_dict('list')['geometry'])[0].coords.xy
except:
x, y = (ways_frame.query(f'u == {v} and v == {u}').to_dict('list')['geometry'])[0].coords.xy
points = [*zip([*y],[*x])]
for u, v in zip(points[0:], points[1:]):
line = [[u, v]]
polyline = fl.PolyLine(locations = line, color = 'black', weight=2)
m.add_child(polyline)
for node in route:
node_location = nodes_frame.loc[node]
loc = [node_location['y'], node_location['x']]
marker = fl.Marker(location = loc, **kw).add_to(m)
m
draw_route(G, route)
###Output
The graph has 4539 which is a lot, we will use basic faster folium instead
###Markdown
See how, when we failed the second node of the route, we had to avoid it and choose another way to reach the third node
###Code
draw_route(G, children[0])
###Output
The graph has 4539 which is a lot, we will use basic faster folium instead
###Markdown
Here the second and third nodes are failed and we need to go around a building to get to the fourth node
###Code
draw_route(G, children[1])
draw_route(G, children[2])
draw_route(G, children[3])
draw_route(G, children[4])
draw_route(G, children[5])
draw_route(G, children[6])
###Output
The graph has 4539 which is a lot, we will use basic faster folium instead
###Markdown
Import
###Code
# export
import os
import re
import json
import time
import hashlib
from pathlib import Path
import numpy
import numba
import requests
import seaborn as sns
import ipykernel
import nbdev.export
from IPython.display import Javascript
from notebook.notebookapp import list_running_servers
from IPython.core.debugger import set_trace
###Output
_____no_output_____
###Markdown
Tests `assert_allclose` checks if two things, `A` and `B`, are close to each other.NOTE: I'm assuming the format of the inputs is the same; if not I'm assuming this is programmer error.
###Code
# export
def assert_allclose(A, B, **kwargs):
if isinstance(A, tuple):
for a,b in zip(A,B): assert_allclose(a, b, **kwargs) # Possibly add "strict" keyword here
elif isinstance(A, dict):
for key in A.keys() | B.keys(): assert_allclose(A[key], B[key], **kwargs)
else:
try: assert(numpy.allclose(A, B, **kwargs))
except: assert(numpy.all(A == B))
A = numpy.random.normal(size=(4,3))
B = numpy.random.normal(size=(4,3))
A, B
assert_allclose(A, A+1e-5, atol=1e-5)
assert_allclose((A, (B, {'test': 1.})), (A+1e-5, (B+1e-5, {'test': 1 + 1e-5})), atol=1e-5)
###Output
_____no_output_____
###Markdown
Image processing
###Code
# export
def rgb2gray(arr): # From Pillow documentation
return arr[:,:,0]*(299/1000) + arr[:,:,1]*(587/1000) + arr[:,:,2]*(114/1000)
###Output
_____no_output_____
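###Markdown
A quick usage sketch (not part of the original notebook): apply `rgb2gray` to a random RGB array and check that the colour axis is collapsed.
###Code
# Hypothetical example: an 8x8 "image" with random RGB values in [0, 255]
img_rgb = numpy.random.randint(0, 256, size=(8, 8, 3))
img_gray = rgb2gray(img_rgb)
assert img_gray.shape == (8, 8)
###Output
_____no_output_____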
###Markdown
Plotting
###Code
# export
def get_colors(n): return sns.color_palette(None, n)
###Output
_____no_output_____
###Markdown
Notebook stuff These are kind of hacky, but I like being able to rerun a notebook and have it auto save/build/convert at the end
###Code
# export
def get_notebook_file():
id_kernel = re.search('kernel-(.*).json', ipykernel.connect.get_connection_file()).group(1)
for server in list_running_servers():
response = requests.get(requests.compat.urljoin(server['url'], 'api/sessions'),
params={'token': server.get('token', '')})
for r in json.loads(response.text):
if 'kernel' in r and r['kernel']['id'] == id_kernel:
return Path(r['notebook']['path'])
assert_allclose(get_notebook_file().as_posix(), 'utils.ipynb')
# export
def save_notebook():
file_notebook = get_notebook_file()
_get_md5 = lambda : hashlib.md5(file_notebook.read_bytes()).hexdigest()
md5_start = _get_md5()
display(Javascript('IPython.notebook.save_checkpoint();')) # Asynchronous
while md5_start == _get_md5(): time.sleep(1e-1)
# export
def build_notebook(save=True):
if save: save_notebook()
nbdev.export.notebook2script(fname=get_notebook_file().as_posix())
# export
def convert_notebook(save=True, t='markdown'):
if save: save_notebook()
os.system(f'jupyter nbconvert --to {t} {get_notebook_file().as_posix()}')
###Output
_____no_output_____
###Markdown
Build
###Code
build_notebook()
###Output
_____no_output_____
###Markdown
--- Author : alvinwatner Inspired by : Aladdin Persson (https://www.youtube.com/watch?v=EoGUlvhRYpk&t=2146s)--- This notebook will create utils.py inside your Google Drive, based on the path specified in the second cell. Press **CTRL + F9** to run all cells.
###Code
from google.colab import drive
drive.mount('/content/drive')
# Path for utils.py root folder
%cd /content/drive/MyDrive/Colab Notebooks/NLP/Pytorch/Seq2seq
!ls
%%writefile utils.py
import torch
import spacy
from torchtext.data.metrics import bleu_score
import sys
def translate_sentence(model, sentence, german, english, device, max_length=50):
# print(sentence)
# sys.exit()
# Load german tokenizer
spacy_ger = spacy.load("de")
# Create tokens using spacy and everything in lower case (which is what our vocab is)
if type(sentence) == str:
tokens = [token.text.lower() for token in spacy_ger(sentence)]
else:
tokens = [token.lower() for token in sentence]
# print(tokens)
# sys.exit()
# Add <SOS> and <EOS> in beginning and end respectively
tokens.insert(0, german.init_token)
tokens.append(german.eos_token)
# Go through each german token and convert to an index
text_to_indices = [german.vocab.stoi[token] for token in tokens]
# Convert to Tensor
sentence_tensor = torch.LongTensor(text_to_indices).unsqueeze(1).to(device)
# Build encoder hidden, cell state
with torch.no_grad():
hidden, cell = model.encoder(sentence_tensor)
outputs = [english.vocab.stoi["<sos>"]]
for _ in range(max_length):
previous_word = torch.LongTensor([outputs[-1]]).to(device)
with torch.no_grad():
output, hidden, cell = model.decoder(previous_word, hidden, cell)
best_guess = output.argmax(1).item()
outputs.append(best_guess)
# Model predicts it's the end of the sentence
if output.argmax(1).item() == english.vocab.stoi["<eos>"]:
break
translated_sentence = [english.vocab.itos[idx] for idx in outputs]
# remove start token
return translated_sentence[1:]
def bleu(data, model, german, english, device):
targets = []
outputs = []
for example in data:
src = vars(example)["src"]
trg = vars(example)["trg"]
prediction = translate_sentence(model, src, german, english, device)
prediction = prediction[:-1] # remove <eos> token
targets.append([trg])
outputs.append(prediction)
return bleu_score(outputs, targets)
def save_checkpoint(state, filename="my_checkpoint.pth.tar"):
print("=> Saving checkpoint")
torch.save(state, filename)
def load_checkpoint(checkpoint, model, optimizer):
print("=> Loading checkpoint")
model.load_state_dict(checkpoint["state_dict"])
optimizer.load_state_dict(checkpoint["optimizer"])
###Output
Writing utils.py
###Markdown
Make random dir
###Code
working_dir = '/Users/tyler/Desktop/dissertation/programming/tcav_on_azure/concepts/'
#working_dir = '/home/tyler/Desktop/tcav_on_azure'
_ = '/Users/tyler/Desktop/dissertation/programming/tcav/2012_val/val'
source_images = '/Users/tyler/Desktop/dissertation/programming/data/old/tcav/2012_val/val'
images = [f for f in listdir(source_images) if isfile(join(source_images, f))]
total_images = len(images)
num_random_images = 500
start = 0
num_random_dir = 10
total_images
x = images[0]
for count,i in enumerate(images):
if count > 20000 and count < 50000 :
x_path = os.path.join(source_images,i)
subprocess.call(['rm',x_path])
x_path = os.path.join(source_images,x)
subprocess.call(['rm',x_path])
len(images)
for r in range(start,start+num_random_dir):
random_dir = join(working_dir,'N_' + str(r))
if not os.path.isdir(random_dir):
os.mkdir(random_dir)
for i in range(num_random_images):
rand = np.random.randint(0,total_images)
source_image = join(source_images,images[rand])
subprocess.call(['cp',source_image,random_dir])
###Output
_____no_output_____
###Markdown
Make concepts folders
###Code
working_dir = '/home/tyler/Desktop/tcav_on_azure'
concpet_dir = '/home/tyler/Desktop/data/images/dtd/images'
concept = 'zigzagged'
this_concept_dir = join(concpet_dir,concept)
image_list = [f for f in listdir(this_concept_dir) if isfile(join(this_concept_dir, f))]
#image_list
#working_dir = '/Users/tyler/Desktop/dissertation/programming/tcav/'
working_dir = '/home/tyler/Desktop/tcav_on_azure'
concept = 'woven'
concept_dir = working_dir + 'concepts/concept_sources/' + concept + '/'
urls_dir = concept_dir + '/urls.txt'
file = open(urls_dir,'r').readlines()
urls_list = [line.rstrip('\n') for line in file]
img_dir
urllib.request.urlretrieve(url, img_dir + str(idx) + '.jpg')
img_dir = concept_dir + 'images/'
idx = 0
for idx in range(len(urls_list)):
url = urls_list[idx]
try:
urllib.request.urlretrieve(url, img_dir + str(idx) + '.jpg')
except:
print('error on idx ' + str(idx))
#response = requests.get(url)
#img = Image.open(BytesIO(response.content))
# cp -r concept_sources/raw/* striped/
###Output
error on idx 3
error on idx 5
error on idx 7
###Markdown
Make labels.txt
###Code
def get_rank(idx, sorted_d):
rank = None
i = 0
for s, p in sorted_d:
if s == idx:
rank = i
i += 1
return rank
def get_label_from_rank(rank, top):
label = None
a = top[0][rank]
label = a[1]
return label
d = {}
i = 0
for i in range(len(pred[0])):
d[i] = pred[0][i]
sorted_d = sorted(d.items(), key=operator.itemgetter(1),reverse=True)
top = decode_predictions(pred,top=1000)
labels = []
for i in range(1000):
rank = get_rank(i, sorted_d)
label = get_label_from_rank(rank, top)
labels.append(label)
file_2 = '/Users/tyler/Desktop/dissertation/programming/tcav/labels/labels.txt'
outfile = open(file_2,'a')
for label in labels:
to_write = label
outfile.write(to_write)
outfile.write('\n')
outfile.close()
###Output
_____no_output_____
###Markdown
Util code snippetsNecessary code snippets to help with the tree classifier
###Code
# run this cell before running any other function cells
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import cv2
###Output
_____no_output_____
###Markdown
image resizeResize images to an ideal size like 224x224, 384x384, etc. *Each picture should be named after its parent folder.*
###Code
image_size = 224 # change this line to alter image size
targ_folder_name = 'sugar maple leaf' # target folder name, also the kind of the tree of those images
dest_folder_name = 'sugar maple leaf' # destination folder name, also the kind of the tree of those images
targ_path = f'./src/{targ_folder_name}/' # the path of the original images under src folder
dest_path = f'./data/train/{dest_folder_name}/' # the path of destination path under data folder
if not os.path.exists(dest_path): os.mkdir(dest_path)
def _resize_image_from_path(image_path, dim):
image = cv2.imread(image_path)
#image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (dim, dim), interpolation=cv2.INTER_AREA)
return image
def resize_and_store_images(dest_path, targ_path, img_size):
    for image_name in os.listdir(targ_path):
        try:
            # use the img_size parameter (the original accidentally used the global image_size)
            image = _resize_image_from_path(targ_path+image_name, img_size)
            print(f"Writing resized image to: {dest_path+image_name}")
            cv2.imwrite(dest_path+image_name, image)
        except:
            print(f'Invalid picture type that cv2 is unable to handle.\n File name: {image_name}')
            continue
resize_and_store_images(dest_path=dest_path, targ_path=targ_path, img_size=image_size)
###Output
e: images251.jpg
Invalid picture type that cv2 is unable to handle.
File name: images252.jpg
Invalid picture type that cv2 is unable to handle.
File name: images253.jpg
Invalid picture type that cv2 is unable to handle.
File name: images254.jpg
Invalid picture type that cv2 is unable to handle.
File name: images255.jpg
Invalid picture type that cv2 is unable to handle.
File name: images256.jpg
Invalid picture type that cv2 is unable to handle.
File name: images257.jpg
Invalid picture type that cv2 is unable to handle.
File name: images258.jpg
Invalid picture type that cv2 is unable to handle.
File name: images259.jpg
Invalid picture type that cv2 is unable to handle.
File name: images26.jpg
Invalid picture type that cv2 is unable to handle.
File name: images260.jpg
Invalid picture type that cv2 is unable to handle.
File name: images261.jpg
Invalid picture type that cv2 is unable to handle.
File name: images262.jpg
Invalid picture type that cv2 is unable to handle.
File name: images263.jpg
Invalid picture type that cv2 is unable to handle.
File name: images264.jpg
Invalid picture type that cv2 is unable to handle.
File name: images265.jpg
Invalid picture type that cv2 is unable to handle.
File name: images266.jpg
Invalid picture type that cv2 is unable to handle.
File name: images267.jpg
Invalid picture type that cv2 is unable to handle.
File name: images268.jpg
Invalid picture type that cv2 is unable to handle.
File name: images269.jpg
Invalid picture type that cv2 is unable to handle.
File name: images27.jpg
Invalid picture type that cv2 is unable to handle.
File name: images270.jpg
Invalid picture type that cv2 is unable to handle.
File name: images271.jpg
Invalid picture type that cv2 is unable to handle.
File name: images272.jpg
Invalid picture type that cv2 is unable to handle.
File name: images273.jpg
Invalid picture type that cv2 is unable to handle.
File name: images274.jpg
Invalid picture type that cv2 is unable to handle.
File name: images275.jpg
Invalid picture type that cv2 is unable to handle.
File name: images276.jpg
Invalid picture type that cv2 is unable to handle.
File name: images277.jpg
Invalid picture type that cv2 is unable to handle.
File name: images278.jpg
Invalid picture type that cv2 is unable to handle.
File name: images279.jpg
Invalid picture type that cv2 is unable to handle.
File name: images28.jpg
Invalid picture type that cv2 is unable to handle.
File name: images280.jpg
Invalid picture type that cv2 is unable to handle.
File name: images281.jpg
Invalid picture type that cv2 is unable to handle.
File name: images282.jpg
Invalid picture type that cv2 is unable to handle.
File name: images283.jpg
Invalid picture type that cv2 is unable to handle.
File name: images284.jpg
Invalid picture type that cv2 is unable to handle.
File name: images285.jpg
Invalid picture type that cv2 is unable to handle.
File name: images286.jpg
Invalid picture type that cv2 is unable to handle.
File name: images287.jpg
Invalid picture type that cv2 is unable to handle.
File name: images288.jpg
Invalid picture type that cv2 is unable to handle.
File name: images289.jpg
Invalid picture type that cv2 is unable to handle.
File name: images29.jpg
Invalid picture type that cv2 is unable to handle.
File name: images290.jpg
Invalid picture type that cv2 is unable to handle.
File name: images291.jpg
Invalid picture type that cv2 is unable to handle.
File name: images292.jpg
Invalid picture type that cv2 is unable to handle.
File name: images293.jpg
Invalid picture type that cv2 is unable to handle.
File name: images294.jpg
Invalid picture type that cv2 is unable to handle.
File name: images295.jpg
Invalid picture type that cv2 is unable to handle.
File name: images296.jpg
Invalid picture type that cv2 is unable to handle.
File name: images297.jpg
Invalid picture type that cv2 is unable to handle.
File name: images298.jpg
Invalid picture type that cv2 is unable to handle.
File name: images299.jpg
Invalid picture type that cv2 is unable to handle.
File name: images3.jpg
Invalid picture type that cv2 is unable to handle.
File name: images30.jpg
Invalid picture type that cv2 is unable to handle.
File name: images300.jpg
Invalid picture type that cv2 is unable to handle.
File name: images301.jpg
Invalid picture type that cv2 is unable to handle.
File name: images302.jpg
Invalid picture type that cv2 is unable to handle.
File name: images303.jpg
Invalid picture type that cv2 is unable to handle.
File name: images304.jpg
Invalid picture type that cv2 is unable to handle.
File name: images305.jpg
Invalid picture type that cv2 is unable to handle.
File name: images306.jpg
Invalid picture type that cv2 is unable to handle.
File name: images307.jpg
Invalid picture type that cv2 is unable to handle.
File name: images308.jpg
Invalid picture type that cv2 is unable to handle.
File name: images309.jpg
Invalid picture type that cv2 is unable to handle.
File name: images31.jpg
Invalid picture type that cv2 is unable to handle.
File name: images310.jpg
Invalid picture type that cv2 is unable to handle.
File name: images311.jpg
Invalid picture type that cv2 is unable to handle.
File name: images312.jpg
Invalid picture type that cv2 is unable to handle.
File name: images313.jpg
Invalid picture type that cv2 is unable to handle.
File name: images314.jpg
Invalid picture type that cv2 is unable to handle.
File name: images315.jpg
Invalid picture type that cv2 is unable to handle.
File name: images316.jpg
Invalid picture type that cv2 is unable to handle.
File name: images317.jpg
Invalid picture type that cv2 is unable to handle.
File name: images318.jpg
Invalid picture type that cv2 is unable to handle.
File name: images319.jpg
Invalid picture type that cv2 is unable to handle.
File name: images32.jpg
Invalid picture type that cv2 is unable to handle.
File name: images320.jpg
Invalid picture type that cv2 is unable to handle.
File name: images321.jpg
Invalid picture type that cv2 is unable to handle.
File name: images322.jpg
Invalid picture type that cv2 is unable to handle.
File name: images323.jpg
Invalid picture type that cv2 is unable to handle.
File name: images324.jpg
Invalid picture type that cv2 is unable to handle.
File name: images325.jpg
Invalid picture type that cv2 is unable to handle.
File name: images326.jpg
Invalid picture type that cv2 is unable to handle.
File name: images327.jpg
Invalid picture type that cv2 is unable to handle.
File name: images328.jpg
Invalid picture type that cv2 is unable to handle.
File name: images329.jpg
Invalid picture type that cv2 is unable to handle.
File name: images33.jpg
Invalid picture type that cv2 is unable to handle.
File name: images330.jpg
Invalid picture type that cv2 is unable to handle.
File name: images331.jpg
Invalid picture type that cv2 is unable to handle.
File name: images332.jpg
Invalid picture type that cv2 is unable to handle.
File name: images333.jpg
Invalid picture type that cv2 is unable to handle.
File name: images334.jpg
Invalid picture type that cv2 is unable to handle.
File name: images335.jpg
Invalid picture type that cv2 is unable to handle.
File name: images336.jpg
Invalid picture type that cv2 is unable to handle.
File name: images337.jpg
Invalid picture type that cv2 is unable to handle.
File name: images338.jpg
Invalid picture type that cv2 is unable to handle.
File name: images339.jpg
Invalid picture type that cv2 is unable to handle.
File name: images34.jpg
Invalid picture type that cv2 is unable to handle.
File name: images340.jpg
Invalid picture type that cv2 is unable to handle.
File name: images341.jpg
Invalid picture type that cv2 is unable to handle.
File name: images342.jpg
Invalid picture type that cv2 is unable to handle.
File name: images343.jpg
Invalid picture type that cv2 is unable to handle.
File name: images344.jpg
Invalid picture type that cv2 is unable to handle.
File name: images345.jpg
Invalid picture type that cv2 is unable to handle.
File name: images346.jpg
Invalid picture type that cv2 is unable to handle.
File name: images347.jpg
Invalid picture type that cv2 is unable to handle.
File name: images348.jpg
Invalid picture type that cv2 is unable to handle.
File name: images349.jpg
Invalid picture type that cv2 is unable to handle.
File name: images35.jpg
Invalid picture type that cv2 is unable to handle.
File name: images350.jpg
Invalid picture type that cv2 is unable to handle.
File name: images351.jpg
Invalid picture type that cv2 is unable to handle.
File name: images352.jpg
Invalid picture type that cv2 is unable to handle.
File name: images353.jpg
Invalid picture type that cv2 is unable to handle.
File name: images354.jpg
Invalid picture type that cv2 is unable to handle.
File name: images355.jpg
Invalid picture type that cv2 is unable to handle.
File name: images356.jpg
Invalid picture type that cv2 is unable to handle.
File name: images357.jpg
Invalid picture type that cv2 is unable to handle.
File name: images358.jpg
Invalid picture type that cv2 is unable to handle.
File name: images359.jpg
Invalid picture type that cv2 is unable to handle.
File name: images36.jpg
Invalid picture type that cv2 is unable to handle.
File name: images360.jpg
Invalid picture type that cv2 is unable to handle.
File name: images361.jpg
Invalid picture type that cv2 is unable to handle.
File name: images362.jpg
Invalid picture type that cv2 is unable to handle.
File name: images363.jpg
Invalid picture type that cv2 is unable to handle.
File name: images364.jpg
Invalid picture type that cv2 is unable to handle.
File name: images365.jpg
Invalid picture type that cv2 is unable to handle.
File name: images366.jpg
Invalid picture type that cv2 is unable to handle.
File name: images367.jpg
Invalid picture type that cv2 is unable to handle.
File name: images37.jpg
Invalid picture type that cv2 is unable to handle.
File name: images38.jpg
Invalid picture type that cv2 is unable to handle.
File name: images39.jpg
Invalid picture type that cv2 is unable to handle.
File name: images4.jpg
Invalid picture type that cv2 is unable to handle.
File name: images40.jpg
Invalid picture type that cv2 is unable to handle.
File name: images41.jpg
Invalid picture type that cv2 is unable to handle.
File name: images42.jpg
Invalid picture type that cv2 is unable to handle.
File name: images43.jpg
Invalid picture type that cv2 is unable to handle.
File name: images44.jpg
Invalid picture type that cv2 is unable to handle.
File name: images45.jpg
Invalid picture type that cv2 is unable to handle.
File name: images46.jpg
Invalid picture type that cv2 is unable to handle.
File name: images47.jpg
Invalid picture type that cv2 is unable to handle.
File name: images48.jpg
Invalid picture type that cv2 is unable to handle.
File name: images49.jpg
Invalid picture type that cv2 is unable to handle.
File name: images5.jpg
Invalid picture type that cv2 is unable to handle.
File name: images50.jpg
Invalid picture type that cv2 is unable to handle.
File name: images51.jpg
Invalid picture type that cv2 is unable to handle.
File name: images52.jpg
Invalid picture type that cv2 is unable to handle.
File name: images53.jpg
Invalid picture type that cv2 is unable to handle.
File name: images54.jpg
Invalid picture type that cv2 is unable to handle.
File name: images55.jpg
Invalid picture type that cv2 is unable to handle.
File name: images56.jpg
Invalid picture type that cv2 is unable to handle.
File name: images57.jpg
Invalid picture type that cv2 is unable to handle.
File name: images58.jpg
Invalid picture type that cv2 is unable to handle.
File name: images59.jpg
Invalid picture type that cv2 is unable to handle.
File name: images6.jpg
Invalid picture type that cv2 is unable to handle.
File name: images60.jpg
Invalid picture type that cv2 is unable to handle.
File name: images61.jpg
Invalid picture type that cv2 is unable to handle.
File name: images62.jpg
Invalid picture type that cv2 is unable to handle.
File name: images63.jpg
Invalid picture type that cv2 is unable to handle.
File name: images64.jpg
Invalid picture type that cv2 is unable to handle.
File name: images65.jpg
Invalid picture type that cv2 is unable to handle.
File name: images66.jpg
Invalid picture type that cv2 is unable to handle.
File name: images67.jpg
Invalid picture type that cv2 is unable to handle.
File name: images68.jpg
Invalid picture type that cv2 is unable to handle.
File name: images69.jpg
Invalid picture type that cv2 is unable to handle.
File name: images7.jpg
Invalid picture type that cv2 is unable to handle.
File name: images70.jpg
Invalid picture type that cv2 is unable to handle.
File name: images71.jpg
Invalid picture type that cv2 is unable to handle.
File name: images72.jpg
Invalid picture type that cv2 is unable to handle.
File name: images73.jpg
Invalid picture type that cv2 is unable to handle.
File name: images74.jpg
Invalid picture type that cv2 is unable to handle.
File name: images75.jpg
Invalid picture type that cv2 is unable to handle.
File name: images76.jpg
Invalid picture type that cv2 is unable to handle.
File name: images77.jpg
Invalid picture type that cv2 is unable to handle.
File name: images78.jpg
Invalid picture type that cv2 is unable to handle.
File name: images79.jpg
Invalid picture type that cv2 is unable to handle.
File name: images8.jpg
Invalid picture type that cv2 is unable to handle.
File name: images80.jpg
Invalid picture type that cv2 is unable to handle.
File name: images81.jpg
Invalid picture type that cv2 is unable to handle.
File name: images82.jpg
Invalid picture type that cv2 is unable to handle.
File name: images83.jpg
Invalid picture type that cv2 is unable to handle.
File name: images84.jpg
Invalid picture type that cv2 is unable to handle.
File name: images85.jpg
Invalid picture type that cv2 is unable to handle.
File name: images86.jpg
Invalid picture type that cv2 is unable to handle.
File name: images87.jpg
Invalid picture type that cv2 is unable to handle.
File name: images88.jpg
Invalid picture type that cv2 is unable to handle.
File name: images89.jpg
Invalid picture type that cv2 is unable to handle.
File name: images9.jpg
Invalid picture type that cv2 is unable to handle.
File name: images90.jpg
Invalid picture type that cv2 is unable to handle.
File name: images91.jpg
Invalid picture type that cv2 is unable to handle.
File name: images92.jpg
Invalid picture type that cv2 is unable to handle.
File name: images93.jpg
Invalid picture type that cv2 is unable to handle.
File name: images94.jpg
Invalid picture type that cv2 is unable to handle.
File name: images95.jpg
Invalid picture type that cv2 is unable to handle.
File name: images96.jpg
Invalid picture type that cv2 is unable to handle.
File name: images97.jpg
Invalid picture type that cv2 is unable to handle.
File name: images98.jpg
Invalid picture type that cv2 is unable to handle.
File name: images99.jpg
Invalid picture type that cv2 is unable to handle.
File name: imgbin-red-maple-maple-leaf-sugar-maple-japanese-maple-red-maple-leaf-canada-red-maple-leaf-illustration-2wk4MTtz6ix36iZrerzGgaYV8.jpg
Invalid picture type that cv2 is unable to handle.
File name: kisspng-maple-leaf-red-maple-sugar-maple-green-5b2169613cdec0.8828483915289163212493.jpg
Invalid picture type that cv2 is unable to handle.
File name: leaf-sugar-maple-pastel.jpg
Invalid picture type that cv2 is unable to handle.
File name: Leaves20and20Fruit-20Photo20Taken20by20Michael20Clayton.jpg
Invalid picture type that cv2 is unable to handle.
File name: leaves_spr07_600_465_80_s.jpg
Invalid picture type that cv2 is unable to handle.
File name: maple-leaf-sugar-maple-silver-maple-tree-png-favpng-rZwbQy4SXNEP7n8TdqpRnqFni.jpg
Invalid picture type that cv2 is unable to handle.
File name: maple-leaf-sugar-maple-tree-png-favpng-KzLRhHiX1AVcSEQjwujTwcpzE.jpg
Invalid picture type that cv2 is unable to handle.
File name: maple-leaf-tattoo-sugar-maple-drawing-leaf.jpg
Invalid picture type that cv2 is unable to handle.
File name: maple-leaf-transparent-21.png
Invalid picture type that cv2 is unable to handle.
File name: maple-leaves.jpg
Invalid picture type that cv2 is unable to handle.
File name: Maple20Petiole20Borer20200620220REPROCESSED.jpg
Invalid picture type that cv2 is unable to handle.
File name: maxresdefault.jpg
Invalid picture type that cv2 is unable to handle.
File name: Norway-and-sugar-maple-leaves.jpg
Invalid picture type that cv2 is unable to handle.
File name: photo_camera_grey600_24dp.png
Invalid picture type that cv2 is unable to handle.
File name: rawImage.jpg
Invalid picture type that cv2 is unable to handle.
File name: Real-Sugar-Maple-Leaf-Brooch-In-Iridescent-Copper-back_2000x.jpg
Invalid picture type that cv2 is unable to handle.
File name: red-sugar-maple-leaf-against-260nw-2926758.jpg
Invalid picture type that cv2 is unable to handle.
File name: red-sugar-maple-leaf-against-white-stock-photography__u10067465.jpg
Invalid picture type that cv2 is unable to handle.
File name: red-sugar-maple-leaf-on-white-background-michael-russell.jpg
Invalid picture type that cv2 is unable to handle.
File name: s051107_b.jpg
Invalid picture type that cv2 is unable to handle.
File name: s070407_b.jpg
Invalid picture type that cv2 is unable to handle.
File name: silver-and-sugar-maple-leaves-russell-shively.jpg
Invalid picture type that cv2 is unable to handle.
File name: sugar-maple-acer-saccharum-hard-maple-rock-maple-sap-syrup-ftimg-712x1024.jpg
Invalid picture type that cv2 is unable to handle.
File name: sugar-maple-image-2.jpg
Invalid picture type that cv2 is unable to handle.
File name: sugar-maple-leaf-a-symbol-of-canada-on-a-rock-in-the-fall-in-rouge-B5MK3N.jpg
Invalid picture type that cv2 is unable to handle.
File name: sugar-maple-leaf-canada.jpg
Invalid picture type that cv2 is unable to handle.
File name: sugar-maple-leaf-ii-photopoint-art.jpg
Invalid picture type that cv2 is unable to handle.
File name: sugar-maple-leaf-iv-photopoint-art.jpg
Invalid picture type that cv2 is unable to handle.
File name: sugar-maple-leaf-underside.jpg
Invalid picture type that cv2 is unable to handle.
File name: sugar-maple-leaf.jpg
Invalid picture type that cv2 is unable to handle.
File name: sugar-maple-leaves-1.jpg
Invalid picture type that cv2 is unable to handle.
File name: sugar-maple-leaves-2.jpg
Invalid picture type that cv2 is unable to handle.
File name: sugar-maple-leaves-20.jpg
Invalid picture type that cv2 is unable to handle.
File name: sugar-maple-leaves-acer-saccharum-6128368.jpg
Invalid picture type that cv2 is unable to handle.
File name: sugar-maple-leaves-fall-41141.jpg
Invalid picture type that cv2 is unable to handle.
File name: sugar-maple-silver-maple-maple-leaf-red-maple-png-favpng-BSaWwZHKtBbYaTU2b4Z9UDYTR.jpg
Invalid picture type that cv2 is unable to handle.
File name: Sugar-Maple_1024x1024.jpg
Invalid picture type that cv2 is unable to handle.
File name: SUGARMAPLE-GOLD-6_grande.jpg
Invalid picture type that cv2 is unable to handle.
File name: sugarmaple1.gif
Invalid picture type that cv2 is unable to handle.
File name: SugarMapleLeafgreen.jpg
Invalid picture type that cv2 is unable to handle.
File name: Sugar_Maple_Acer_saccharum_saccharum.jpg
Invalid picture type that cv2 is unable to handle.
File name: the-norway-maple-leaf-Image.jpg
Invalid picture type that cv2 is unable to handle.
File name: thumb_sugar-maple-leaf.jpg
Invalid picture type that cv2 is unable to handle.
File name: tree-sugar-maple-leaf.jpg
Invalid picture type that cv2 is unable to handle.
File name: Trees-of-the-Adirondacks-Sugar-Maple-Acer-saccharum-Heaven-Hill-Trails-12-September-2018-71.jpg
Invalid picture type that cv2 is unable to handle.
File name: v4-460px-Identify-Sugar-Maple-Trees-Step-2-Version-4.jpg
|
versions/2022/tools/python/pytorch_demo.ipynb | ###Markdown
Using a Pre-trained PyTorch Model for InferenceIn this demo, we will use a pre-trained model to perform inference on a single image. There are 3 components to this demo:1. Input2. Model3. OutputWe will cover these components in detail below.Let us first import the required packages.
###Code
import torch
import torchvision
import torchvision.transforms as transforms
import timm
from einops import rearrange
from PIL import Image
###Output
_____no_output_____
###Markdown
Model: Loading a pre-trained ResNet18 modelWe use a pre-trained ResNet18 model for inference. The model is available from `torchvision` or from `timm`.When we use a model for inference, we need to specify the `eval` mode. This is because the model is in `train` mode by default, and we need to disable all the dropout layers.
###Code
use_timm = False
# Download and load the pretrained ResNet-18.
if use_timm:
resnet = timm.create_model('resnet18', pretrained=True)
else:
resnet = torchvision.models.resnet18(pretrained=True)
resnet.eval()
###Output
_____no_output_____
###Markdown
Input: Loading an input imageWe can use matplotlib `image` to load an image into a numpy array. However, PyTorch `transforms` expects a PIL image. While we can convert numpy array to PIL, we can load an image directly into a PIL image.
###Code
filename = input()
# Load a PIL Image given a file name from the current directory.
img = Image.open(filename)
# Display the loaded image on notebook.
display(img)
# Resize the image to 256x256.
# Then crop the center square of the image.
# Next, convert the image to a PyTorch Tensor.
# Lastly, normalize the image so that it has mean and standard deviation as shown below.
# Reference for image transforms: https://github.com/pytorch/examples/blob/42e5b996718797e45c46a25c55b031e6768f8440/imagenet/main.py#L89-L101
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize,
])
# PIL image undergoes transforms.
img = transform(img)
# A simplified version is to simply transform the image to a tensor
#img = transforms.ToTensor()(img)
# Check img type and shape
print("Type:", img.dtype)
print("Shape:", img.shape)
###Output
_____no_output_____
###Markdown
Output: Making a predictionWe will now use `img` tensor as input to the pre-trained `resnet18` model. Before running the model for prediction, there are 2 things that we should do:1. Include a batch dimension. In this case, we are using a single image, so we need to add a batch size of 1. We use `rearrange` for this.2. Execute inference within `torch.no_grad()` context manager. This is because we do not want to track the gradients.The expected output is a `torch.Tensor` of shape `(1, 1000)`. `resnet18` was pre-trained on ImageNet1k. We can use `torch.argmax` to get the index of the maximum value.
###Code
# We need the tensor to have a batch dimension of 1.
img = rearrange(img, 'c h w -> 1 c h w')
print("New shape:", img.shape)
with torch.no_grad():
pred = resnet(img)
print("Prediction shape:", pred.shape)
pred = torch.argmax(pred, dim=1)
print("Predicted index", pred)
###Output
New shape: torch.Size([1, 3, 224, 224])
Prediction shape: torch.Size([1, 1000])
Predicted index tensor([285])
###Markdown
Human: Convert class index to labelTo make sense of the predicted index, we need to convert it to a label. We can use `https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a` to get the mapping from index to label.
###Code
import os
import urllib.request
filename = "imagenet1000_labels.txt"
url = "https://gist.githubusercontent.com/yrevar/942d3a0ac09ec9e5eb3a/raw/238f720ff059c1f82f368259d1ca4ffa5dd8f9f5/imagenet1000_clsidx_to_labels.txt"
# Download the file if it does not exist
if not os.path.isfile(filename):
urllib.request.urlretrieve(url, filename)
with open(filename) as f:
idx2label = eval(f.read())
print("Predicted label:", idx2label[pred.cpu().numpy()[0]])
###Output
Predicted label: Egyptian cat
###Markdown
Using a Pre-trained PyTorch Model for InferenceIn this demo, we will use a pre-trained model to perform inference on a single image. There are 3 components to this demo:1. Input2. Model3. OutputWe will cover these components in detail below.Let us first import the required packages.
###Code
import torch
import torchvision
import torchvision.transforms as transforms
import timm
from einops import rearrange
from PIL import Image
###Output
_____no_output_____
###Markdown
Model: Loading a pre-trained ResNet18 modelWe use a pre-trained ResNet18 model for inference. The model is available from `torchvision`.When we use a model for inference, we need to specify the `eval` mode. This is because the model is designed for training, and we need to disable all the dropout layers.
###Code
use_timm = False
# Download and load the pretrained ResNet-18.
if use_timm:
resnet = timm.create_model('resnet18', pretrained=True)
else:
resnet = torchvision.models.resnet18(pretrained=True)
resnet.eval()
###Output
_____no_output_____
###Markdown
Input: Loading an input imageWe can use matplotlib `image` to load an image into a numpy array. However, PyTorch `transforms` expects a PIL image. While we can convert numpy array to PIL, we can load an image directly into a PIL image.
###Code
filename = input()
# Load a PIL Image given a file name from the current directory.
img = Image.open(filename)
# Display the loaded image on notebook.
display(img)
# Resize the image to 256x256.
# Then crop the center square of the image.
# Next, convert the image to a PyTorch Tensor.
# Lastly, normalize the image so that it has mean and standard deviation as shown below.
# Reference for image transforms: https://github.com/pytorch/examples/blob/42e5b996718797e45c46a25c55b031e6768f8440/imagenet/main.py#L89-L101
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
transform = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize,
])
# PIL image undergoes transforms.
img = transform(img)
# A simplified version is to simply transform the image to a tensor
#img = transforms.ToTensor()(img)
# Check img type and shape
print("Type:", img.dtype)
print("Shape:", img.shape)
###Output
_____no_output_____
###Markdown
Output: Making a predictionWe will now use `img` tensor as input to the pre-trained `resnet18` model. Before running the model for prediction, there are 2 things that we should do:1. Include a batch dimension. In this case, we are using a single image, so we need to add a batch size of 1. We use `rearrange` for this.2. Execute inference within `torch.no_grad()` context manager. This is because we do not want to track the gradients.The expected output is a `torch.Tensor` of shape `(1, 1000)`. `resnet18` was pre-trained on ImageNet1k. We can use `torch.argmax` to get the index of the maximum value.
###Code
# We need the tensor to have a batch dimension of 1.
img = rearrange(img, 'c h w -> 1 c h w')
print("New shape:", img.shape)
with torch.no_grad():
pred = resnet(img)
print("Prediction shape:", pred.shape)
print(pred[0][0], pred[0][1],pred[0][284], pred[0][286])
print(pred[0][285], pred[0][999])
pred = torch.argmax(pred, dim=1)
print("Predicted index", pred)
###Output
New shape: torch.Size([1, 3, 224, 224])
Prediction shape: torch.Size([1, 1000])
tensor(1.7247) tensor(2.2064) tensor(-0.0312)
tensor(11.4601) tensor(3.2967)
Predicted index tensor([285])
###Markdown
Human: Convert class index to labelTo make sense of the predicted index, we need to convert it to a label. We can use `https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a` to get the mapping from index to label.
###Code
import os
import urllib.request
filename = "imagenet1000_labels.txt"
url = "https://gist.githubusercontent.com/yrevar/942d3a0ac09ec9e5eb3a/raw/238f720ff059c1f82f368259d1ca4ffa5dd8f9f5/imagenet1000_clsidx_to_labels.txt"
# Download the file if it does not exist
if not os.path.isfile(filename):
urllib.request.urlretrieve(url, filename)
with open(filename) as f:
idx2label = eval(f.read())
print("Predicted label:", idx2label[pred.cpu().numpy()[0]])
###Output
Predicted label: Egyptian cat
|
community/awards/teach_me_qiskit_2018/w_state/W State 1 - Multi-Qubit Systems.ipynb | ###Markdown
Trusted Notebook" width="500 px" align="left"> W state in multi-qubit systemsThe latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial.For more information about how to use the IBM Q experience (QX), consult the [tutorials](https://quantumexperience.ng.bluemix.net/qstage//tutorial?sectionId=c59b3710b928891a1420190148a72cce&pageIndex=0), or check out the [community](https://quantumexperience.ng.bluemix.net/qstage//community).*** ContributorsPierre Decoodt, Université Libre de Bruxelles
###Code
# useful additional packages
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import time
from pprint import pprint
# importing Qiskit
from qiskit import Aer, IBMQ
from qiskit.providers.ibmq import least_busy
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
# import basic plot tools
from qiskit.tools.visualization import plot_histogram
IBMQ.load_accounts()
###Output
_____no_output_____
###Markdown
Theoretical backgroundIn addition to the GHZ states, the generalized W states, as proposed by Dür, Vidal and Cirac in 2000, are a class of interesting examples of multiple-qubit entanglement. A generalized $n$ qubit W state can be written as:$$ |W_{n}\rangle \; = \; \sqrt{\frac{1}{n}} \: (\:|10...0\rangle \: + |01...0\rangle \: +...+ |00...1\rangle \:) $$Here we present circuits that deterministically produce a three-, a four- and a five-qubit W state, respectively. A 2016 paper by Firat Diker proposes an algorithm in the form of nested boxes allowing the deterministic construction of W states of any size $n$. The experimental setup proposed by the author is essentially an optical assembly including half-wave plates. The setup includes $n-1$ so-called two-qubit $F$ gates (not to be confused with Fredkin's three-qubit gate). It is possible to construct the equivalent of such an $F$ gate on a superconducting quantum computing system using transmon qubits in ground and excited states. An $F_{k,\, k+1}$ gate with control qubit $q_{k}$ and target qubit $q_{k+1}$ is obtained here by: - First a rotation around the Y-axis $R_{y}(-\theta_{k})$ applied on $q_{k+1}$ - Then a controlled Z-gate $cZ$ in any direction between the two qubits $q_{k}$ and $q_{k+1}$ - Finally a rotation around the Y-axis $R_{y}(\theta_{k})$ applied on $q_{k+1}$ The matrix representations of an $R_{y}(\theta)$ rotation and of the $cZ$ gate can be found in the "Quantum gates and linear algebra" Jupyter notebook of the Qiskit tutorial. The value of $\theta_{k}$ depends on $n$ and $k$ following the relationship:$$\theta_{k} = \arccos \left(\sqrt{\frac{1}{n-k+1}}\right) $$Note that this formula for $\theta$ is different from the one mentioned in Diker's paper. This is because we use Y-axis rotation matrices here instead of $W$ optical gates composed of half-wave plates. At the beginning, the qubits are placed in the state: $|\varphi_{0} \rangle \, = \, |10...0 \rangle$. This is followed by the application of $n-1$ successive $F$ gates. $$|\varphi_{1}\rangle = F_{n-1,\,n}\, ... \, F_{k,\, k+1}\, ... \, F_{2,\, 3} \,F_{1,\, 2}\,|\varphi_{0} \rangle \,= \; \sqrt{\frac{1}{n}} \: (\:|10...0\rangle \: + |11...0\rangle \: +...+ |11...1\rangle \:) $$Then, $n-1$ $cNOT$ gates are applied. The final circuit is: $$|W_{n}\rangle \,= cNOT_{n,\, n-1}\, cNOT_{n-1,\, n-2}...cNOT_{k,\, k-1}...cNOT_{2,\, 1}\,\,|\varphi_{1} \rangle$$Let's now launch into the adventure of deterministically producing W states, on a simulator or in the real world! Now you will have the opportunity to choose your backend. (If you run the following cells in sequence, you will end up with the local simulator, which is a good choice for a first trial).
###Code
"Choice of the backend"
# using local qasm simulator
backend = Aer.get_backend('qasm_simulator')
# using IBMQ qasm simulator
# backend = IBMQ.get_backend('ibmq_qasm_simulator')
# using real device
# backend = least_busy(IBMQ.backends(simulator=False))
flag_qx2 = True
if backend.name() == 'ibmqx4':
flag_qx2 = False
print("Your choice for the backend is: ", backend, "flag_qx2 is: ", flag_qx2)
# Here are two useful routines
# Define an F gate: Ry(-theta_k) on the target, a cZ between control and target, then Ry(theta_k) on the target
def F_gate(circ,q,i,j,n,k) :
theta = np.arccos(np.sqrt(1/(n-k+1)))
circ.ry(-theta,q[j])
circ.cz(q[i],q[j])
circ.ry(theta,q[j])
circ.barrier(q[i])
# Define the cxrv gate: a CNOT from q[i] to q[j] built from the reversed physical CNOT
# conjugated by Hadamards, since (H x H) CX_{j,i} (H x H) = CX_{i,j}; useful when the
# device coupling map only provides the CNOT in one direction
def cxrv(circ,q,i,j) :
circ.h(q[i])
circ.h(q[j])
circ.cx(q[j],q[i])
circ.h(q[i])
circ.h(q[j])
circ.barrier(q[i],q[j])
###Output
_____no_output_____
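###Markdown
As a quick check of the angle formula above, here are the rotation angles $\theta_{k}$ that the $F$ gates will use in the three-qubit case ($n=3$): $\theta_{1} = \arccos(\sqrt{1/3})$ and $\theta_{2} = \arccos(\sqrt{1/2}) = \pi/4$.
###Code
# Rotation angles theta_k = arccos(sqrt(1/(n-k+1))) for the n = 3 circuit below
for k in range(1, 3):
    theta = np.arccos(np.sqrt(1/(3 - k + 1)))
    print(f"theta_{k} = {theta:.4f} rad")
###Output
theta_1 = 0.9553 rad
theta_2 = 0.7854 rad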
###Markdown
Three-qubit W state, step 1In this section, the production of a three qubit W state will be examined step by step.In this circuit, the starting state is now: $ |\varphi_{0} \rangle \, = \, |100\rangle$.The entire circuit corresponds to: $$ |W_{3}\rangle \,=\, cNOT_{3,2}\, \, cNOT_{2,1}\, \, F_{2,3} \, \, F_{1,2} \, \, |\varphi_{0} \rangle \, $$ Run the following cell to see what happens when we first apply $F_{1,2}$.
###Code
# 3-qubit W state Step 1
n = 3
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[2]) #start is |100>
F_gate(W_states,q,2,1,3,1) # Applying F12
for i in range(3) :
W_states.measure(q[i] , c[i])
# circuits = ['W_states']
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 3-qubit (step 1) on', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 3-qubit (step 1) on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
###Output
start W state 3-qubit (step 1) on qasm_simulator N= 1024 08/01/2019 17:39:52
end W state 3-qubit (step 1) on qasm_simulator N= 1024 08/01/2019 17:39:52
###Markdown
Three-qubit W state: adding step 2In the previous step you obtained a histogram compatible with the following state:$$ |\varphi_{1} \rangle= F_{1,2}\, |\varphi_{0} \rangle\,=F_{1,2}\, \,|1 0 0 \rangle=\frac{1}{\sqrt{3}} \: |1 0 0 \rangle \: + \sqrt{\frac{2}{3}} \: |1 1 0 \rangle $$NB: Depending on the backend, the order of the qubits may be modified, but without consequence for the state finally reached. We seem far from the ultimate goal. Run the following circuit to obtain $|\varphi_{2} \rangle =F_{2,3}\, \, |\varphi_{1} \rangle$
###Code
# 3-qubit W state, first and second steps
n = 3
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[2]) #start is |100>
F_gate(W_states,q,2,1,3,1) # Applying F12
F_gate(W_states,q,1,0,3,2) # Applying F23
for i in range(3) :
W_states.measure(q[i] , c[i])
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 3-qubit (steps 1 + 2) on', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 3-qubit (steps 1 + 2) on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
###Output
start W state 3-qubit (steps 1 + 2) on qasm_simulator N= 1024 08/01/2019 17:40:00
end W state 3-qubit (steps 1 + 2) on qasm_simulator N= 1024 08/01/2019 17:40:00
###Markdown
Three-qubit W state, full circuitIn the previous step, we got a histogram compatible with the state:$$ |\varphi_{2} \rangle =F_{2,3}\, \, |\varphi_{1} \rangle=F_{2,3}\, \, (\frac{1}{\sqrt{3}} \: |1 0 0 \rangle \: + \sqrt{\frac{2}{3}} \: |1 1 0 \rangle)= \frac{1}{\sqrt{3}} \: (|1 0 0 \rangle \: + |1 1 0 \rangle \: + |1 1 1\rangle) $$NB: Again, depending on the backend, the order of the qubits may be modified, but without consequence for the state finally reached. It looks like we are nearing the goal. Indeed, two $cNOT$ gates will make it possible to create a W state. Run the following cell to see what happens. Did we succeed?
###Code
# 3-qubit W state
n = 3
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[2]) #start is |100>
F_gate(W_states,q,2,1,3,1) # Applying F12
F_gate(W_states,q,1,0,3,2) # Applying F23
if flag_qx2 : # option ibmqx2
W_states.cx(q[1],q[2]) # cNOT 21
W_states.cx(q[0],q[1]) # cNOT 32
else : # option ibmqx4
cxrv(W_states,q,1,2)
cxrv(W_states,q,0,1)
for i in range(3) :
W_states.measure(q[i] , c[i])
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 3-qubit on', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 3-qubit on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
###Output
start W state 3-qubit on qasm_simulator N= 1024 08/01/2019 17:40:04
end W state 3-qubit on qasm_simulator N= 1024 08/01/2019 17:40:04
###Markdown
Now you get a histogram compatible with the final state $|W_{3}\rangle$ through the following steps:$$ |\varphi_{3} \rangle = cNOT_{2,1}\, \, |\varphi_{2} \rangle =cNOT_{2,1}\,\frac{1}{\sqrt{3}} \: (|1 0 0 \rangle \: + |1 1 0 \rangle\: + |1 1 1\rangle) = \frac{1}{\sqrt{3}} \: (|1 0 0 \rangle \: + |0 1 0 \rangle \: + |0 1 1\rangle) $$$$ |W_{3} \rangle = cNOT_{3,2}\, \, |\varphi_{3} \rangle =cNOT_{3,2}\,\frac{1}{\sqrt{3}} \: (|1 0 0 \rangle \: + |0 1 0 \rangle \: + |0 1 1\rangle) = \frac{1}{\sqrt{3}} \: (|1 0 0 \rangle \: + |0 1 0 \rangle \: + |0 0 1\rangle) $$Bingo! Four-qubit W stateIn this section, the production of a four-qubit W state will be obtained by extending the previous circuit.In this circuit, the starting state is now: $ |\varphi_{0} \rangle \, = \, |1000\rangle$.An $F$ gate was added at the beginning of the circuit and a $cNOT$ gate was added before the measurement phase.The entire circuit corresponds to:$$ |W_{4}\rangle \,=\, cNOT_{4,3}\, \, cNOT_{3,2}\, \, cNOT_{2,1}\, \, F_{3,4} \, \, F_{2,3} \, \, F_{1,2} \, \,|\varphi_{0} \rangle \, $$ Run the following circuit and see what happens.
###Code
# 4-qubit W state
n = 4
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[3]) #start is |1000>
F_gate(W_states,q,3,2,4,1) # Applying F12
F_gate(W_states,q,2,1,4,2) # Applying F23
F_gate(W_states,q,1,0,4,3) # Applying F34
cxrv(W_states,q,2,3) # cNOT 21
if flag_qx2 : # option ibmqx2
W_states.cx(q[1],q[2]) # cNOT 32
W_states.cx(q[0],q[1]) # cNOT 43
else : # option ibmqx4
cxrv(W_states,q,1,2)
cxrv(W_states,q,0,1)
for i in range(4) :
W_states.measure(q[i] , c[i])
# circuits = ['W_states']
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 4-qubit ', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 4-qubit on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
###Output
start W state 4-qubit qasm_simulator N= 1024 08/01/2019 17:40:07
end W state 4-qubit on qasm_simulator N= 1024 08/01/2019 17:40:07
###Markdown
Now, if you used a simulator, you get a histogram clearly compatible with the state:$$ |W_{4}\rangle \;=\; \frac{1}{2} \: (\:|1000\rangle + |0100\rangle + |0010\rangle + |0001\rangle \:) $$If you used a real quantum computer, the columns of the histogram compatible with a $|W_{4}\rangle$ state are not all among the highest ones. Errors are spreading... Five-qubit W stateIn this section, a five-qubit W state will be obtained, again by extending the previous circuit.In this circuit, the starting state is now: $ |\varphi_{0} \rangle = |10000\rangle$.An $F$ gate was added at the beginning of the circuit and an additional $cNOT$ gate was added before the measurement phase.$$ |W_{5}\rangle = cNOT_{5,4} cNOT_{4,3} cNOT_{3,2} cNOT_{2,1} F_{4,5} F_{3,4} F_{2,3} F_{1,2} |\varphi_{0} \rangle $$Run the following cell and see what happens.
###Code
# 5-qubit W state
n = 5
q = QuantumRegister(n)
c = ClassicalRegister(n)
W_states = QuantumCircuit(q,c)
W_states.x(q[4]) #start is |10000>
F_gate(W_states,q,4,3,5,1) # Applying F12
F_gate(W_states,q,3,2,5,2) # Applying F23
F_gate(W_states,q,2,1,5,3) # Applying F34
F_gate(W_states,q,1,0,5,4) # Applying F45
W_states.cx(q[3],q[4]) # cNOT 21
cxrv(W_states,q,2,3) # cNOT 32
if flag_qx2 : # option ibmqx2
W_states.cx(q[1],q[2]) # cNOT 43
W_states.cx(q[0],q[1]) # cNOT 54
else : # option ibmqx4
cxrv(W_states,q,1,2)
cxrv(W_states,q,0,1)
for i in range(5) :
W_states.measure(q[i] , c[i])
shots = 1024
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('start W state 5-qubit on', backend, "N=", shots,time_exp)
result = execute(W_states, backend=backend, shots=shots)
time_exp = time.strftime('%d/%m/%Y %H:%M:%S')
print('end W state 5-qubit on', backend, "N=", shots,time_exp)
plot_histogram(result.result().get_counts(W_states))
###Output
start W state 5-qubit on qasm_simulator N= 1024 08/01/2019 17:40:10
end W state 5-qubit on qasm_simulator N= 1024 08/01/2019 17:40:10
|
fairness_benchmark/notebooks/aggregate_results_starr_20200523.ipynb | ###Markdown
Generate the cohort table
###Code
### Cohort table
cohort_df_long = (
cohort
.melt(
id_vars = ['person_id'] + attributes,
value_vars = tasks,
var_name = 'task',
value_name = 'labels'
)
.melt(
id_vars = ['person_id', 'task', 'labels'],
value_vars = attributes,
var_name = 'attribute',
value_name = 'group'
)
)
cohort_statistics_df = (
cohort_df_long
.groupby(['task', 'attribute', 'group'])
.agg(
prevalence=('labels', 'mean'),
)
.reset_index()
.groupby('attribute')
.apply(lambda x: x.pivot_table(index = 'group', columns = 'task', values = 'prevalence'))
.reset_index()
)
group_size_df = (
cohort_df_long
.groupby(['task', 'attribute', 'group'])
.agg(
size = ('labels', lambda x: x.shape[0])
)
.reset_index()
.drop(columns = 'task')
.drop_duplicates()
)
cohort_statistics_df = cohort_statistics_df.merge(group_size_df)
cohort_statistics_df = (
cohort_statistics_df
.set_index(['attribute', 'group'])
[['size'] + tasks]
)
cohort_statistics_df
## Write to Latex
table_path = './../figures/starr_20200523'
os.makedirs(table_path, exist_ok=True)
with open(os.path.join(table_path, 'cohort_table.txt'), 'w') as fp:
(
cohort_statistics_df
.reset_index().drop(columns='attribute').set_index(['group'])
.to_latex(
fp,
float_format = '%.3g',
index_names = False,
index=True
)
)
###Output
_____no_output_____
###Markdown
Get the results
###Code
def get_result_df_baseline(base_path, filename='result_df_group_standard_eval.parquet'):
"""
Gets the results for training the baseline models
"""
selected_models_path = os.path.join(
base_path,
'config',
'selected_models', '**', '*.yaml'
)
selected_models_dict = {
filename.split('/')[-2]: filename.split('/')[-1]
for filename in glob.glob(selected_models_path, recursive=True)
}
paths = [
glob.glob(
os.path.join(
base_path,
'performance',
task,
config_filename,
'**',
filename
),
recursive=True
)
for task, config_filename in selected_models_dict.items()
]
paths = list(itertools.chain(*paths))
result_df_baseline = df_dict_concat(
{
tuple(filename.split('/'))[-4:-1]:
pd.read_parquet(filename)
for filename in paths
},
['task2', 'config_filename', 'fold_id']
).drop(columns='task2')
return result_df_baseline
result_df_baseline = get_result_df_baseline(
os.path.join(
project_dir,
'experiments',
experiment_name_baseline,
)
)
result_df_baseline.task.unique()
result_df_calibration_baseline = get_result_df_baseline(
os.path.join(
project_dir,
'experiments',
experiment_name_baseline,
),
filename='calibration_result.parquet'
)
id_vars = ['fold_id', 'phase', 'config_filename', 'task', 'attribute', 'group']
result_df_calibration_baseline = result_df_calibration_baseline.melt(
id_vars = id_vars,
value_vars = set(result_df_calibration_baseline.columns) - set(id_vars),
var_name = 'metric',
value_name = 'performance'
).query('metric != "brier"')
result_df_calibration_baseline.metric.unique()
# Import fair_ova metrics
result_df_ova_baseline = get_result_df_baseline(
os.path.join(
project_dir,
'experiments',
experiment_name_baseline,
),
filename='result_df_group_fair_ova.parquet'
)
id_vars = ['fold_id', 'phase', 'config_filename', 'task', 'attribute', 'group']
result_df_ova_baseline = result_df_ova_baseline.melt(
id_vars = id_vars,
value_vars = set(result_df_ova_baseline.columns) - set(id_vars),
var_name = 'metric',
value_name = 'performance'
)
result_df_baseline = pd.concat([result_df_baseline, result_df_calibration_baseline, result_df_ova_baseline], ignore_index=True)
result_df_baseline
def flatten_multicolumns(df):
"""
    Converts multi-index columns into single-level column names
"""
df.columns = ['_'.join([el for el in col if el != '']).strip() for col in df.columns.values if len(col) > 1]
return df
result_df_baseline_mean = (
result_df_baseline
.groupby(list(set(result_df_baseline.columns) - {'fold_id', 'performance', 'performance_overall'}))
[['performance', 'performance_overall']]
.agg(['mean', 'std', 'sem'])
.reset_index()
)
result_df_baseline_mean = result_df_baseline_mean.rename(
columns={
'performance': 'performance_baseline',
'performance_overall': 'performance_overall_baseline'
}
)
result_df_baseline_mean = flatten_multicolumns(result_df_baseline_mean)
result_df_baseline_mean
result_df_baseline_mean.task.unique()
def get_result_df_fair(base_path=None, filename='result_df_group_standard_eval.parquet', paths=None):
if paths is None:
performance_path = os.path.join(
base_path,
'performance',
)
paths = glob.glob(os.path.join(performance_path, '**', filename), recursive=True)
result_df_fair = df_dict_concat(
{
tuple(file_name.split('/'))[-5:-1]:
pd.read_parquet(file_name)
for file_name in paths
},
['task2', 'sensitive_attribute', 'config_filename', 'fold_id']
).drop(columns='task2')
return result_df_fair
# Fair results
result_df_fair = get_result_df_fair(
os.path.join(
project_dir,
'experiments',
experiment_name_fair
)
)
# List config_filenames without ten results
(
result_df_fair
.groupby(
list(set(result_df_fair.columns) - set(['fold_id', 'performance', 'performance_overall']))
)
.agg(lambda x: len(x))
.query("fold_id != 10")
.reset_index()
.config_filename
.sort_values()
.unique()
)
result_df_calibration_fair = get_result_df_fair(
os.path.join(
project_dir,
'experiments',
experiment_name_fair
),
filename='calibration_result.parquet'
)
id_vars = ['fold_id', 'phase', 'config_filename', 'task', 'sensitive_attribute', 'attribute', 'group']
result_df_calibration_fair = result_df_calibration_fair.melt(
id_vars = id_vars,
value_vars = set(result_df_calibration_fair.columns) - set(id_vars),
var_name = 'metric',
value_name = 'performance'
).query('metric != "brier"')
result_df_ova_fair = get_result_df_fair(
os.path.join(
project_dir,
'experiments',
experiment_name_fair
),
filename='result_df_group_fair_ova.parquet'
)
id_vars = ['fold_id', 'phase', 'config_filename', 'task', 'sensitive_attribute', 'attribute', 'group']
result_df_ova_fair = result_df_ova_fair.melt(
id_vars = id_vars,
value_vars = set(result_df_ova_fair.columns) - set(id_vars),
var_name = 'metric',
value_name = 'performance'
)
# List config_filenames without ten results
(
result_df_ova_fair
.groupby(
list(set(result_df_ova_fair.columns) - set(['fold_id', 'performance', 'performance_overall']))
)
.agg(lambda x: len(x))
.query("fold_id != 10")
.reset_index()
.config_filename
.sort_values()
.unique()
)
result_df_fair = pd.concat([result_df_fair, result_df_calibration_fair, result_df_ova_fair], ignore_index=True)
result_df_fair_mean = (
result_df_fair
.groupby(list(set(result_df_fair.columns) - set(['fold_id', 'performance', 'performance_overall'])))
[['performance', 'performance_overall']]
.agg(['mean', 'std', 'sem'])
.reset_index()
)
result_df_fair_mean = flatten_multicolumns(result_df_fair_mean)
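# 1.96 * SEM gives the half-width of a 95% normal-approximation confidence interval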
ci_func = lambda x: x * 1.96
result_df_fair_mean = result_df_fair_mean.assign(
performance_CI = lambda x: ci_func(x['performance_sem']),
performance_overall_CI = lambda x: ci_func(x['performance_overall_sem']),
)
def label_fair_mode(df):
df['fair_mode'] = (
df['regularization_metric']
.where(~df['regularization_metric'].str.match('mmd'),
df['regularization_metric'].astype(str) + '_' + df['mmd_mode'].astype(str),
axis=0)
)
df['fair_mode'] = (
df['fair_mode']
.where(~df['fair_mode'].str.match('mean_prediction'),
df['fair_mode'].astype(str) + '_' + df['mean_prediction_mode'].astype(str),
axis=0
)
)
return df
def get_fair_config_df(base_path):
config_path = os.path.join(
base_path,
'config',
)
fair_config_files = glob.glob(
os.path.join(config_path, '**', '*.yaml'),
recursive=True
)
fair_config_dict_dict = {
tuple(file_name.split('/'))[-2:]:
yaml_read(file_name)
for file_name in fair_config_files
}
fair_config_df = df_dict_concat(
{
key: pd.DataFrame(value, index=[key])
for key, value in fair_config_dict_dict.items()
},
['task', 'config_filename']
)
fair_config_df = label_fair_mode(fair_config_df)[['task', 'config_filename', 'fair_mode', 'lambda_group_regularization']]
return fair_config_df
fair_config_df = get_fair_config_df(
os.path.join(
project_dir,
'experiments',
experiment_name_fair
)
)
fair_config_df
result_df_fair_mean.task.unique()
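# Outer merge with indicator=True so the asserts below can check that every fair-model row has a matching baseline row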
result_df = pd.merge(result_df_baseline_mean.drop(columns='config_filename'), result_df_fair_mean,
how='outer', indicator=True).merge(fair_config_df)
assert result_df_fair_mean.shape[0] == result_df.shape[0]
result_df.head()
assert result_df.query('_merge == "right_only"').shape[0] == 0
result_df.metric.unique()
result_df = result_df.query('phase == "test"')
result_df.head()
result_df.columns
result_df = result_df.drop(columns = '_merge')
result_df.to_csv(os.path.join(result_path, 'group_results.csv'), index=False)
###Output
_____no_output_____ |
notebooks/PandasKickoff.ipynb | ###Markdown
Exploring data using PandasSo far we explored Python and a few native libraries. Now we will play a little to simplify our life with tools to conduct some **data analysis**.**Pandas** is the most popular library (so far) to import and handle data in Python. Let's import some data from a CSV file**When downloading my ipynb, remember to also get the `commits_pr.csv` file**
###Code
import pandas
cpr = pandas.read_csv("commits_pr.csv")
###Output
_____no_output_____
###Markdown
It became this easy to read a CSV file!!!And more... Look at what my `cpr` is:
###Code
type(cpr)
###Output
_____no_output_____
###Markdown
Yes! A DataFrame. And it reads really nice, look:
###Code
cpr.head(15)
### We can use head() and tail() functions to see a bit less
###Output
_____no_output_____
###Markdown
Before moving forward... Explaining a little about this dataset.This dataset represents a series of Pull Requests made to a subset of projects hosted by GitHub. We worked on this data to capture a specific type of contributor, which we called *casual contributor*. These contributors are known for having a single pull request accepted in a project and not coming back (i.e., they have no long-term commitment to the project).In this specific dataset, you will find the following columns:* `user`: represents a user on GitHub (anonymized here)* `project_name`: the name of the GitHub project in which the pull request was accepted* `prog_lang`: programming language of the project* `pull_req_number`: unique identifier of the pull request* `num_commits`: number of commits sent within that specific pull request Some information about the dataframe Dimensions/shape of the dataset (lines vs. columns)
###Code
cpr.shape
###Output
_____no_output_____
###Markdown
What about the column names?
###Code
cpr.columns
###Output
_____no_output_____
###Markdown
And the datatype per column?
###Code
cpr.dtypes
###Output
_____no_output_____
###Markdown
Some more information: `info()` method prints information including the index dtype and column dtypes, non-null values and memory usage.
###Code
cpr.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 42092 entries, 0 to 42091
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 user 42092 non-null object
1 project_name 42092 non-null object
2 prog_lang 42092 non-null object
3 pull_req_number 42092 non-null int64
4 num_commits 42092 non-null int64
dtypes: int64(2), object(3)
memory usage: 1.6+ MB
###Markdown
What is the type of a specific column???
###Code
type(cpr["num_commits"])
###Output
_____no_output_____
###Markdown
A *Series* is a one-dimensional, indexed list. Each column of a dataframe is a Series. Before moving ahead, we can use the types to filter some columns. Let's say we want only the columns that store `int`:
###Code
int_columns = cpr.dtypes[cpr.dtypes == "int64"].index
int_columns
###Output
_____no_output_____
###Markdown
Now... I just want to see these columns... **BOOM**
###Code
cpr[int_columns].head()
###Output
_____no_output_____
###Markdown
What about statistical information about my DataFrame?`describe()` method provides a summary of numeric values in your dataset: mean, standard deviation, minimum, maximum, 1st quartile, 2nd quartile (median), 3rd quartile of the columns with numeric values. It also counts the non-null values in each column (are there missing values?)
###Code
cpr.describe()
###Output
_____no_output_____
###Markdown
We can do it for a Series...
###Code
#cpr["num_commits"].describe()
cpr.num_commits.describe()
#LOOK at this with a non-numeric column
cpr.prog_lang.describe() #either way works.
###Output
_____no_output_____
###Markdown
And we can get specific information per column
###Code
cpr.num_commits.median()
cpr.num_commits.mean()
cpr.num_commits.std()
###Output
_____no_output_____
###Markdown
-------------- Playing with the data: sortingWe can sort our data easily using pandas.In this example, sorting by pull request number
###Code
cpr.sort_values("pull_req_number", ascending=False, inplace=True)
###Output
_____no_output_____
###Markdown
We can sort using *many columns*, by using a list (sort will happen from the first item to the last)
###Code
cpr.head(10)
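# Sketch: passing a list sorts by several columns at once, applied from the first item to the last
cpr.sort_values(["prog_lang", "project_name", "num_commits"], ascending=False).head(10)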
###Output
_____no_output_____
###Markdown
If you want to keep the sorted version, you can use the parameter `inplace`:
###Code
cpr.sort_values(["num_commits"], ascending=False, inplace=True)
cpr.head(10)
#cpr = pandas.read_csv("commits_pr.csv") #--> to return to the original order
###Output
_____no_output_____
###Markdown
Counting the occurrences of variablesSo, to count the occurrences in a column we have to select the column first, and use the method `value_counts()`
###Code
cpr.prog_lang.value_counts()
###Output
_____no_output_____
###Markdown
But... I just want to know which languages are out there. Is there a way?*Always*
###Code
cpr["prog_lang"].unique()
###Output
_____no_output_____
###Markdown
OK! Let's do something else... Like, selecting columns and filtering dataLet's say that I just want to look at the columns programming language, project name and number of commits. I can select them and create a new DF
###Code
selected_columns = ["prog_lang", "project_name", "num_commits"]
type(selected_columns)
selected_columns = ["prog_lang", "project_name", "num_commits"]
my_subset = cpr[selected_columns]
my_subset.head(10)
###Output
_____no_output_____
###Markdown
What if now I want to filter those projects written in `C` language?
###Code
only_C = cpr[(cpr["prog_lang"]=='C') & (cpr["num_commits"]==1)]
only_C.describe()
###Output
_____no_output_____
###Markdown
We can filter whatever we want:
###Code
single_commit = cpr[cpr["num_commits"] == 1]
single_commit.describe()
###Output
_____no_output_____
###Markdown
We can create filters in variables, and use whenever we want, as well
###Code
one_commit = cpr["num_commits"]==1
language_C = cpr["prog_lang"]=="C"
multi_commit = cpr["num_commits"]>1
cpr[one_commit & language_C].head(10)
###Output
_____no_output_____
###Markdown
And... we can use OR (|) and AND(&) to play!
###Code
cpr[one_commit & language_C].head(10)
###Output
_____no_output_____
###Markdown
What if we want the pull requests with more than one commit for the projects written in "C" + those with 2 commits for the projects written in "typescript"???Let's do it!
###Code
two_commits = cpr["num_commits"]==2
typescript = cpr["prog_lang"]=="typescript"
cpr[(multi_commit & language_C) | (two_commits & typescript)]
cpr[((cpr["num_commits"]>1) & (cpr["prog_lang"]=="C")) | ((cpr["num_commits"]==2) & (cpr["prog_lang"]=="typescript"))]
###Output
_____no_output_____
###Markdown
What if I wanted to convert number of commits into a feature by creating bands of values that we define:* 1 commit = group 1* 2 - 5 commits = group 2* 6 - 20 commits = group 3* more than 20 = group 4
###Code
commits_from_2_to_5 = (cpr["num_commits"]<=5) & (cpr["num_commits"]>=2)
commits_from_6_to_20 = (cpr["num_commits"]<=20) & (cpr["num_commits"]>=6)
more_than_20_commits = cpr["num_commits"]>20
cpr.loc[one_commit, "group_commit"]=1
cpr.loc[commits_from_2_to_5, "group_commit"]=2
cpr.loc[commits_from_6_to_20, "group_commit"]=3
cpr.loc[more_than_20_commits, "group_commit"]=4
cpr.head(5)
###Output
_____no_output_____
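###Markdown
The same banding can also be done in one step with `pandas.cut`. This is only a sketch: the bin edges mirror the groups defined above, and the `group_commit_cut` column name is just for illustration.
###Code
# Sketch: right-inclusive bins (0,1], (1,5], (5,20], (20,inf) reproduce groups 1-4
cpr["group_commit_cut"] = pandas.cut(
    cpr["num_commits"],
    bins=[0, 1, 5, 20, float("inf")],
    labels=[1, 2, 3, 4]
)
cpr[["num_commits", "group_commit", "group_commit_cut"]].head(5)
###Output
_____no_output_____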
###Markdown
I challenge you:What if I wanted to know what the average of num_commits is for those pull requests in group_commit 2???
###Code
group_two = cpr["group_commit"]==2
group_two_dataframe = cpr[group_two]
group_two_dataframe.num_commits.mean()
###Output
_____no_output_____
###Markdown
I challenge you (2):Can you do that average per language?
###Code
for lang in cpr["prog_lang"].unique():
data = cpr[cpr["prog_lang"] == lang]
print(lang + ": \t" + str(data.num_commits.mean()))
cpr.groupby(["prog_lang"]).describe()
cpr.groupby(["user"]).describe()
###Output
_____no_output_____
###Markdown
Some more... Let's work with a new dataset... This is not only related to casual contributors, but all contributors
###Code
commits_complete = pandas.read_csv('commit_complete.csv')
commits_complete.sort_values('num_commits', ascending=False).head(20)
commits_complete['deletions'].corr(commits_complete['additions'])
commits_complete.corr()
commits_complete.corr(method='pearson').style.background_gradient(cmap='coolwarm')
###Output
_____no_output_____
###Markdown
Can we play with graphics? **Plot types:**- 'line' : line plot (default)- 'bar' : vertical bar plot- 'barh' : horizontal bar plot- 'hist' : histogram- 'box' : boxplot- 'kde' : Kernel Density Estimation plot- 'density' : same as 'kde'- 'area' : area plot- 'pie' : pie plot- 'scatter' : scatter plot- 'hexbin' : hexbin plot **Histogram**
###Code
cpr[multi_commit]
cpr[cpr["num_commits"]>7].num_commits.plot.hist(bins=200)
cpr[cpr["prog_lang"]=="C"].num_commits.plot.hist(bins=20, color="red", alpha=0.5)
cpr[cpr["prog_lang"]=="java"].num_commits.plot.hist(bins=20, alpha=0.5).legend(["C", "Java"])
cpr.prog_lang.str.lower().value_counts().plot.bar()
cpr[cpr["prog_lang"]== "C"].project_name.value_counts().plot.bar()
commits_complete.plot.scatter(x = "additions", y = "num_commits", color="red")
lang_c = cpr.prog_lang=="C"
lang_java = cpr.prog_lang=="java"
lang_php = cpr.prog_lang=="php"
cpr[(lang_c) | (lang_java) | (lang_php)].boxplot(by='prog_lang', column=['num_commits'])
plot = cpr[(lang_c) | (lang_java) | (lang_php)].boxplot(by='prog_lang', column=['num_commits'], showfliers=False, grid=True)
plot.set_xlabel("Language")
plot.set_ylabel("# of commits")
plot.set_title("")
###Output
_____no_output_____
###Markdown
**Just to show...**that it is possible to do statistical analysis
###Code
from scipy import stats
stats.mannwhitneyu(cpr[(lang_c)].num_commits, cpr[(lang_java)].num_commits)
###Output
_____no_output_____
###Markdown
Exporting
###Code
my_subset.to_dict()
cpr.to_csv('test.csv', sep=',')
###Output
_____no_output_____
###Markdown
Exploring data using PandasSo far we explored Python and a few native libraries. Now we will play a little to simplify our life with tools to conduct some **data analysis**.**Pandas** is the most popular library (so far) to import and handle data in Python. Let's import some data from a CSV file**When downloading my ipynb, remember to also get the `commits_pr.csv` file**
###Code
import pandas
cpr = pandas.read_csv("commits_pr.csv")
###Output
_____no_output_____
###Markdown
It became this easy to read a CSV file!!!And more... Look at what my `cpr` is:
###Code
type(cpr)
###Output
_____no_output_____
###Markdown
Yes! A DataFrame. And it reads really nice, look:
###Code
cpr.tail()
### We can use head() and tail() functions to see a bit less
###Output
_____no_output_____
###Markdown
Before moving forward... Explaining a little about this dataset.This dataset represents a series of Pull Requests made to a subset of projects hosted by GitHub. We worked on this data to capture a specific type of contributor, which we called *casual contributor*. These contributors are known for having a single pull request accepted in a project and not coming back (i.e., they have no long-term commitment to the project).In this specific dataset, you will find the following columns:* `user`: represents a user on GitHub (anonymized here)* `project_name`: the name of the GitHub project in which the pull request was accepted* `prog_lang`: programming language of the project* `pull_req_number`: unique identifier of the pull request* `num_commits`: number of commits sent within that specific pull request Some information about the dataframe Dimensions/shape of the dataset (lines vs. columns)
###Code
cpr.shape
###Output
_____no_output_____
###Markdown
What about the column names?
###Code
cpr.columns
###Output
_____no_output_____
###Markdown
And the datatype per column?
###Code
cpr.dtypes
###Output
_____no_output_____
###Markdown
Some more information: `info()` method prints information including the index dtype and column dtypes, non-null values and memory usage.
###Code
cpr.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 42092 entries, 0 to 42091
Data columns (total 5 columns):
user 42092 non-null object
project_name 42092 non-null object
prog_lang 42092 non-null object
pull_req_number 42092 non-null int64
num_commits 42092 non-null int64
dtypes: int64(2), object(3)
memory usage: 1.6+ MB
###Markdown
What is the type of a specific column???
###Code
type(cpr["num_commits"])
###Output
_____no_output_____
###Markdown
A *Series* is a one-dimensional, indexed list. Each column of a dataframe is a Series. Before moving ahead, we can use the types to filter some columns. Let's say we want only the columns that store `int`:
###Code
int_columns = cpr.dtypes[cpr.dtypes == "int64"].index
int_columns
###Output
_____no_output_____
###Markdown
Now... I just want to see these columns... **BOOM**
###Code
cpr[int_columns].head()
###Output
_____no_output_____
###Markdown
What about statistical information about my DataFrame?`describe()` method provides a summary of numeric values in your dataset: mean, standard deviation, minimum, maximum, 1st quartile, 2nd quartile (median), 3rd quartile of the columns with numeric values. It also counts the non-null values in each column (are there missing values?)
###Code
cpr.describe()
###Output
_____no_output_____
###Markdown
We can do it for a Series...
###Code
#cpr["num_commits"].describe()
cpr.num_commits.describe()
#LOOK at this with a non-numeric column
cpr.prog_lang.describe() #either way works.
###Output
_____no_output_____
###Markdown
And we can get specific information per column
###Code
cpr.num_commits.median()
cpr.num_commits.mean()
cpr.num_commits.std()
###Output
_____no_output_____
###Markdown
-------------- Playing with the data: sortingWe can sort our data easily using pandas.In this example, sorting by number of commits
###Code
cpr.sort_values("num_commits", ascending=False).head(10)
###Output
_____no_output_____
###Markdown
We can sort using *many columns*, by using a list (sort will happen from the first item to the last)
###Code
cpr.sort_values(["prog_lang", "project_name", "num_commits"], ascending=False).head(10)
cpr.head(10)
###Output
_____no_output_____
###Markdown
If you want to keep the sorted version, you can use the parameter `inplace`:
###Code
cpr.sort_values(["prog_lang", "project_name", "num_commits"], ascending=False, inplace=True)
cpr.head(10)
#cpr = pandas.read_csv("commits_pr.csv") #--> to return to the original order
###Output
_____no_output_____
###Markdown
Counting the occurrences of variablesSo, to count the occurrences in a column we have to select the column first, and use the method `value_counts()`
###Code
cpr.prog_lang.value_counts()
###Output
_____no_output_____
###Markdown
But... I just want to know which languages are out there. Is there a way?*Always*
###Code
cpr["prog_lang"].unique()
###Output
_____no_output_____
###Markdown
OK! Let's do something else... Like, selecting columns and filtering dataLet's say that I just want to look at the columns programming language, project name and number of commits. I can select them and create a new DF
###Code
selected_columns = ["prog_lang", "project_name", "num_commits"]
my_subset = cpr[selected_columns]
my_subset.head()
###Output
_____no_output_____
###Markdown
What if now I want to filter those projects written in `C` language?
###Code
only_C = cpr[(cpr["prog_lang"]=='C') & (cpr["num_commits"]==2)]
only_C.describe()
###Output
_____no_output_____
###Markdown
We can filter whatever we want:
###Code
single_commit = cpr[cpr["num_commits"] == 1]
###Output
_____no_output_____
###Markdown
We can create filters in variables, and use whenever we want, as well
###Code
one_commit = cpr["num_commits"]==1
language_C = cpr["prog_lang"]=="C"
multi_commit = cpr["num_commits"]>1
cpr[one_commit & language_C].head(10)
###Output
_____no_output_____
###Markdown
And... we can use OR (|) and AND(&) to play!
###Code
cpr[one_commit & language_C].head(10)
###Output
_____no_output_____
###Markdown
What if we want the pull requests with more than one commit for the projects written in "C" and those with 2 commits for the projects written in "typescript"???Let's do it!
###Code
#####
two_commits = cpr["num_commits"]==2
language_typescript = cpr["prog_lang"]=="typescript"
cpr[(multi_commit & language_C) | (two_commits & language_typescript)]
###Output
_____no_output_____
###Markdown
What if I wanted to convert number of commits into a feature by creating bands of values that we define:* 1 commit = group 1* 2 - 5 commits = group 2* 6 - 20 commits = group 3* more than 20 = group 4
###Code
cpr.loc[cpr["num_commits"]==1, "group_commit"]=1
cpr.loc[(cpr["num_commits"]>1) & (cpr["num_commits"]<=5), "group_commit"]=2
cpr.loc[(cpr["num_commits"]>5) & (cpr["num_commits"]<=20), "group_commit"]=3
cpr.loc[cpr["num_commits"]>20, "group_commit"]=4
cpr.group_commit = cpr.group_commit.astype('int32')
cpr.head()
###Output
_____no_output_____
###Markdown
I challenge you:What if I wanted to know what the average of num_commits is for those pull requests in group_commit 4???
###Code
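# One possible answer (sketch): filter group 4 and average num_commits
cpr[cpr["group_commit"] == 4].num_commits.mean()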
###Output
_____no_output_____
###Markdown
I challenge you (2):Can you do that average per language?
###Code
cpr[cpr["prog_lang"] == "typescript"].quantile(0.75)
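# One possible answer to the per-language challenge (sketch); add a group_commit filter if the group-4 restriction is intended
cpr.groupby("prog_lang").num_commits.mean()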
###Output
_____no_output_____
###Markdown
Some more... Let's work with a new dataset... This is not only related to casual contributors, but all contributors
###Code
commits_complete = pandas.read_csv('commit_complete.csv')
commits_complete.sort_values('num_commits', ascending=False).head(10)
commits_complete['num_commits'].corr(commits_complete['additions'])
commits_complete.corr()
commits_complete.corr(method='pearson').style.background_gradient(cmap='coolwarm')
###Output
_____no_output_____
###Markdown
Can we play with graphics? **Plot types:**- 'line' : line plot (default)- 'bar' : vertical bar plot- 'barh' : horizontal bar plot- 'hist' : histogram- 'box' : boxplot- 'kde' : Kernel Density Estimation plot- 'density' : same as 'kde'- 'area' : area plot- 'pie' : pie plot- 'scatter' : scatter plot- 'hexbin' : hexbin plot **Histogram**
###Code
cpr.num_commits.plot.hist(bins=200)
cpr[cpr["prog_lang"]=="C"].num_commits.plot.hist(bins=20, color="red", alpha=0.5)
cpr[cpr["prog_lang"]=="java"].num_commits.plot.hist(bins=20, alpha=0.5).legend(["C", "Java"])
cpr['prog_lang'].value_counts().plot.bar()
cpr[cpr["prog_lang"]== "C"].project_name.value_counts().plot.bar()
commits_complete.plot.scatter(x = "files_changed", y = "num_commits")
lang_c = cpr.prog_lang=="C"
lang_java = cpr.prog_lang=="java"
lang_php = cpr.prog_lang=="php"
cpr[(lang_c) | (lang_java) | (lang_php)].boxplot(by='prog_lang', column=['num_commits'])
plot = cpr[(lang_c) | (lang_java) | (lang_php)].boxplot(by='prog_lang', column=['num_commits'], showfliers=False, grid=False)
plot.set_xlabel("Language")
plot.set_ylabel("# of commits")
plot.set_title("")
###Output
_____no_output_____
###Markdown
**Just to show...**that it is possible to do statistical analysis
###Code
from scipy import stats
stats.mannwhitneyu(cpr[(lang_c)].num_commits, cpr[(lang_java)].num_commits)
###Output
_____no_output_____
###Markdown
Exporting
###Code
my_subset.to_dict()
cpr.to_csv('test.csv', sep=',')
###Output
_____no_output_____ |
2_BoW_Models/1_MLP/MLP_TREC.ipynb | ###Markdown
MLP Classification with TREC DatasetWe will build a text classification model using an MLP on the TREC question classification dataset. TREC comes with a standard train/test split, which we use below. Load the library
###Code
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import re
import nltk
import random
from nltk.corpus import stopwords, twitter_samples
# from nltk.tokenize import TweetTokenizer
from sklearn.model_selection import KFold
from nltk.stem import PorterStemmer
from string import punctuation
from sklearn.preprocessing import OneHotEncoder
from tensorflow.keras.preprocessing.text import Tokenizer
import time
%config IPCompleter.greedy=True
%config IPCompleter.use_jedi=False
# nltk.download('twitter_samples')
tf.config.experimental.list_physical_devices('GPU')
###Output
_____no_output_____
###Markdown
Load the Dataset
###Code
corpus = pd.read_pickle('../../0_data/TREC/TREC.pkl')
corpus.label = corpus.label.astype(int)
print(corpus.shape)
corpus
corpus.info()
corpus.groupby( by=['split','label']).count()
corpus.groupby(by='split').count()
# Separate the sentences and the labels for training and testing
train_x = list(corpus[corpus.split=='train'].sentence)
train_y = np.array(corpus[corpus.split=='train'].label)
print(len(train_x))
print(len(train_y))
test_x = list(corpus[corpus.split=='test'].sentence)
test_y = np.array(corpus[corpus.split=='test'].label)
print(len(test_x))
print(len(test_y))
###Output
5452
5452
500
500
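###Markdown
Before inspecting the vocabulary, here is a minimal sketch of the kind of bag-of-words MLP this notebook builds, assuming count features from Keras' `Tokenizer`. The vocabulary cap, layer sizes, dropout, epochs and batch size below are illustrative assumptions, not tuned settings.
###Code
# Sketch only: all hyperparameters here are illustrative assumptions
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras import layers, models
sketch_tokenizer = Tokenizer(num_words=10000)  # cap the vocabulary (assumption)
sketch_tokenizer.fit_on_texts(train_x)  # fit on the training sentences only
x_train_bow = sketch_tokenizer.texts_to_matrix(train_x, mode='count')  # bag-of-words counts
x_test_bow = sketch_tokenizer.texts_to_matrix(test_x, mode='count')
num_classes = int(train_y.max()) + 1  # assumes labels are 0..K-1
mlp = models.Sequential([
    layers.Dense(256, activation='relu', input_shape=(x_train_bow.shape[1],)),
    layers.Dropout(0.5),
    layers.Dense(num_classes, activation='softmax')
])
mlp.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
mlp.fit(x_train_bow, train_y, validation_split=0.1, epochs=10, batch_size=64)
mlp.evaluate(x_test_bow, test_y)
###Output
_____no_output_____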
###Markdown
Raw Vocabulary Size
###Code
# Build the raw vocabulary for a first inspection
tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus.sentence)
vocab_raw = tokenizer.word_index
print('\nThe vocabulary size: {}\n'.format(len(vocab_raw)))
print(vocab_raw)
###Output
The vocabulary size: 8759
{'the': 1, 'what': 2, 'is': 3, 'of': 4, 'in': 5, 'a': 6, 'how': 7, "'s": 8, 'was': 9, 'who': 10, 'to': 11, 'are': 12, 'for': 13, 'and': 14, 'did': 15, 'does': 16, "''": 17, 'do': 18, 'name': 19, 'on': 20, 'many': 21, 'where': 22, 'first': 23, 'when': 24, 'i': 25, 'you': 26, 'can': 27, 'from': 28, 'world': 29, 's': 30, 'u': 31, 'which': 32, 'that': 33, 'most': 34, 'by': 35, 'an': 36, 'country': 37, 'as': 38, 'city': 39, 'with': 40, 'have': 41, 'has': 42, 'why': 43, 'it': 44, 'there': 45, 'year': 46, 'state': 47, 'called': 48, 'be': 49, 'president': 50, 'people': 51, 'at': 52, 'get': 53, 'were': 54, 'find': 55, 'his': 56, 'american': 57, 'mean': 58, 'two': 59, 'largest': 60, 'war': 61, 'made': 62, 'new': 63, 'much': 64, 'fear': 65, 'long': 66, 'between': 67, "'": 68, 'its': 69, 'used': 70, 'word': 71, 'known': 72, 'origin': 73, 'day': 74, 'company': 75, 'kind': 76, 'movie': 77, 'about': 78, 'tv': 79, 'one': 80, 'film': 81, 'all': 82, 'famous': 83, 'stand': 84, 'invented': 85, 'make': 86, 'or': 87, 'color': 88, 'best': 89, 'game': 90, 'live': 91, 'take': 92, 'he': 93, 'your': 94, 'up': 95, 'man': 96, 'time': 97, 'old': 98, 'states': 99, 'john': 100, 'only': 101, 'into': 102, 'book': 103, 'come': 104, 'play': 105, 'river': 106, 'wrote': 107, 'my': 108, 'not': 109, 'out': 110, 'term': 111, 'born': 112, 'their': 113, 'show': 114, 'america': 115, 'star': 116, 'baseball': 117, 'highest': 118, 'south': 119, 'last': 120, 'call': 121, 'won': 122, 'team': 123, 'home': 124, 'use': 125, 'countries': 126, 'united': 127, 'four': 128, 'named': 129, 'if': 130, 'had': 131, 'population': 132, 'difference': 133, 'character': 134, 'king': 135, 'number': 136, 'after': 137, 'english': 138, 'capital': 139, 'water': 140, 'died': 141, '1': 142, 'three': 143, 'us': 144, 'become': 145, 'body': 146, 'dog': 147, 'average': 148, 'earth': 149, 'north': 150, 'die': 151, 'work': 152, 'song': 153, 'novel': 154, 'played': 155, 'some': 156, 'said': 157, 'black': 158, 'information': 159, 'go': 160, 'common': 161, 'space': 162, 'been': 163, 'york': 164, 'they': 165, 'actress': 166, 'years': 167, 'say': 168, 'computer': 169, 'actor': 170, 'will': 171, 'mountain': 172, 'woman': 173, 'located': 174, 'college': 175, 'group': 176, 'names': 177, 'during': 178, 'second': 179, 'would': 180, 'california': 181, 'longest': 182, 'sport': 183, 'food': 184, 'killed': 185, 'national': 186, 'line': 187, 'sea': 188, 'island': 189, 'drink': 190, 'part': 191, 'money': 192, 'way': 193, 'university': 194, 'like': 195, 'good': 196, 'top': 197, 'than': 198, 'times': 199, 'popular': 200, 'major': 201, 'date': 202, 'great': 203, 'should': 204, 'through': 205, 'animal': 206, "'t": 207, 'school': 208, 'person': 209, 'language': 210, 'e': 211, 'red': 212, 'west': 213, 'over': 214, 'history': 215, 'law': 216, 'big': 217, 'portrayed': 218, 'more': 219, 'her': 220, 'life': 221, 'french': 222, 'makes': 223, 'car': 224, 'cities': 225, 'different': 226, 'miles': 227, 'd': 228, 'place': 229, 'horse': 230, 'five': 231, 'meaning': 232, 'internet': 233, '2': 234, 'moon': 235, 'battle': 236, 'address': 237, 'leader': 238, 'general': 239, 'title': 240, 'each': 241, 'abbreviation': 242, 'write': 243, 'white': 244, 'causes': 245, 'international': 246, 'cost': 247, 'contains': 248, 'created': 249, 'russian': 250, 'so': 251, 'charles': 252, 'craft': 253, 'colors': 254, 'kennedy': 255, 'rate': 256, 'became': 257, 'power': 258, 'human': 259, 'form': 260, 'feet': 261, 'century': 262, 'begin': 263, 'baby': 264, 'little': 265, 'me': 266, 'biggest': 267, 'f': 268, 'british': 
269, 'c': 270, 'letter': 271, 'built': 272, 'randy': 273, 'airport': 274, 'type': 275, 'bridge': 276, 'whose': 277, 'park': 278, 'league': 279, '5': 280, 'st': 281, 'system': 282, 'nickname': 283, 'found': 284, 'features': 285, 'female': 286, 'o': 287, 'disease': 288, 'george': 289, 'eat': 290, 'no': 291, 'games': 292, 'queen': 293, 'san': 294, 'love': 295, 'high': 296, 'far': 297, 'seven': 298, 'ii': 299, 'real': 300, 'boasts': 301, 'islands': 302, 'house': 303, 'following': 304, 'comic': 305, 'blood': 306, 'air': 307, 'death': 308, 'canada': 309, 'james': 310, 'center': 311, 'william': 312, 'children': 313, 'spanish': 314, 'london': 315, 'men': 316, 'england': 317, 'someone': 318, 'office': 319, 'nixon': 320, 'germany': 321, 'player': 322, 'newspaper': 323, 'european': 324, 'animals': 325, 'product': 326, 'bowl': 327, 'japanese': 328, 'mother': 329, 'washington': 330, 'nn': 331, 'area': 332, 'ocean': 333, 'win': 334, 'television': 335, 'hole': 336, 'bill': 337, 'hit': 338, 'soft': 339, 'tree': 340, 'letters': 341, 'father': 342, 'may': 343, 'held': 344, 'oldest': 345, 'super': 346, 'series': 347, 'sun': 348, 'prime': 349, 'minister': 350, 'county': 351, 'business': 352, 'member': 353, 'before': 354, 'radio': 355, 'lawyer': 356, 'hitler': 357, 'married': 358, 'another': 359, 'runs': 360, 'fast': 361, 'building': 362, 'know': 363, 'music': 364, 'words': 365, 'happened': 366, 'chemical': 367, 'store': 368, 'ever': 369, 'definition': 370, 'singing': 371, 'percentage': 372, 'mile': 373, 'lives': 374, 'africa': 375, 'ball': 376, 'ship': 377, '3': 378, 'prize': 379, 'we': 380, 'once': 381, 'ice': 382, 'served': 383, 'soldiers': 384, 'm': 385, 'kentucky': 386, 'being': 387, 'whom': 388, 'age': 389, 'tax': 390, 'code': 391, 'lake': 392, 'christmas': 393, 'com': 394, 'travel': 395, 'golf': 396, 'six': 397, 'mississippi': 398, 'cnn': 399, 'cold': 400, 'point': 401, 'cross': 402, 'founded': 403, 'sports': 404, 'around': 405, 'department': 406, 'web': 407, 'birth': 408, 'gold': 409, 'other': 410, 'beer': 411, 'role': 412, 'girl': 413, 'texas': 414, 'jack': 415, 'them': 416, 'soviet': 417, 'indians': 418, 'original': 419, 'night': 420, 'main': 421, 'boy': 422, 'australia': 423, '10': 424, 'end': 425, 'greek': 426, 'singer': 427, 'alaska': 428, 'flag': 429, 'dick': 430, 'tuberculosis': 431, 'tell': 432, 'civil': 433, 'gas': 434, 'comedian': 435, 'story': 436, 'football': 437, 'rock': 438, 'china': 439, 'then': 440, 'olympic': 441, 'lived': 442, 'paper': 443, 'starred': 444, 'head': 445, 'hair': 446, 'now': 447, 'japan': 448, 'cartoon': 449, 'back': 450, 'acid': 451, 'pope': 452, 'musical': 453, 'discovered': 454, 'indian': 455, 'board': 456, 'street': 457, 'start': 458, 'fame': 459, 'introduced': 460, 'size': 461, 'answers': 462, '8': 463, 'saw': 464, 'down': 465, 'but': 466, 'card': 467, 'vietnam': 468, 'never': 469, 'bible': 470, 'need': 471, 'worth': 472, 'thing': 473, 'shot': 474, 'east': 475, 'originate': 476, 'former': 477, 'light': 478, 'see': 479, 'richard': 480, 'own': 481, 'claim': 482, 'while': 483, 'god': 484, 'tennis': 485, 'blue': 486, 'art': 487, 'famed': 488, 'fly': 489, 'produce': 490, 'appear': 491, 'continent': 492, 'son': 493, 'basketball': 494, 'african': 495, 'marvel': 496, 'planet': 497, 'list': 498, 'chinese': 499, 'full': 500, 'cards': 501, 'family': 502, 'sometimes': 503, 'bear': 504, 'website': 505, 'de': 506, 'under': 507, 'los': 508, 'species': 509, 'brothers': 510, 'symbol': 511, 'instrument': 512, 't': 513, 'tom': 514, 'director': 515, 'prince': 516, 'child': 517, 
'organization': 518, 'caused': 519, 'tallest': 520, 'flight': 521, '1984': 522, 'berlin': 523, 'latin': 524, 'mount': 525, 'weight': 526, 'fought': 527, 'author': 528, 'fastest': 529, 'union': 530, 'dubbed': 531, 'went': 532, 'types': 533, 'women': 534, 'give': 535, 'steven': 536, 'sioux': 537, 'temperature': 538, 'wife': 539, 'plant': 540, 'often': 541, 'wine': 542, 'chicago': 543, '15': 544, 'nuclear': 545, 'produced': 546, 'americans': 547, 'also': 548, 'europe': 549, 'look': 550, 'income': 551, 'jackson': 552, 'favorite': 553, 'record': 554, 'strip': 555, 'tall': 556, 'every': 557, 'started': 558, 'boxing': 559, 'monopoly': 560, 'motto': 561, 'beach': 562, 'believe': 563, 'cat': 564, 'l': 565, 'angeles': 566, 'j': 567, 'r': 568, 'element': 569, 'square': 570, 'eye': 571, 'desert': 572, 'thatcher': 573, 'months': 574, 'elected': 575, 'france': 576, 'jaws': 577, 'winter': 578, 'wall': 579, 'field': 580, '7': 581, 'brand': 582, 'green': 583, 'eyes': 584, 'band': 585, 'buried': 586, 'el': 587, 'heart': 588, 'peter': 589, 'million': 590, 'speed': 591, 'shakespeare': 592, 'stop': 593, 'telephone': 594, 'comes': 595, 'fire': 596, 'hand': 597, 'spoken': 598, 'government': 599, 'oil': 600, 'rights': 601, 'mail': 602, 'because': 603, 'characters': 604, 'museum': 605, 'asian': 606, 'hands': 607, 'numbers': 608, 'run': 609, 'writer': 610, 'languages': 611, 'march': 612, '6': 613, 'shea': 614, 'gould': 615, 'left': 616, 'italy': 617, 'middle': 618, 'massachusetts': 619, 'henry': 620, 'captain': 621, 'plays': 622, 'birds': 623, 'side': 624, 'per': 625, 'eggs': 626, 'orange': 627, 'van': 628, 'court': 629, 'magazine': 630, 'sound': 631, 'greatest': 632, 'presidents': 633, 'led': 634, 'german': 635, 'month': 636, 'days': 637, 'lost': 638, 'featured': 639, 'considered': 640, 'nine': 641, 'aids': 642, 'video': 643, 'cowboy': 644, 'rain': 645, 'headquarters': 646, 'small': 647, 'rule': 648, 'off': 649, 'products': 650, 'foot': 651, 'party': 652, 'oscar': 653, 'put': 654, 'cover': 655, 'build': 656, 'k': 657, 'southern': 658, 'based': 659, 'dead': 660, 'watch': 661, 'titanic': 662, 'olympics': 663, 'security': 664, 'chicken': 665, 'buy': 666, 'bond': 667, 'poet': 668, 'nobel': 669, 'opera': 670, '12': 671, 'assassinated': 672, 'roman': 673, 'empire': 674, 'governor': 675, 'titled': 676, 'painted': 677, 'this': 678, 'golden': 679, 'inside': 680, 'artist': 681, 'wear': 682, 'cars': 683, 'thomas': 684, 'lakes': 685, 'nations': 686, 'milk': 687, 'allowed': 688, 'paid': 689, 'students': 690, 'week': 691, 'stars': 692, 'brown': 693, 'ten': 694, 'jimmy': 695, 'claimed': 696, 'she': 697, 'setting': 698, 'producer': 699, 'b': 700, 'ireland': 701, 'castle': 702, 'living': 703, 'current': 704, 'elephant': 705, 'medical': 706, 'least': 707, 'got': 708, 'balls': 709, 'near': 710, 'sleep': 711, 'race': 712, 'birthday': 713, 'companies': 714, 'face': 715, 'twins': 716, 'male': 717, 'developed': 718, 'columbia': 719, 'program': 720, 'gave': 721, 'follow': 722, 'selling': 723, 'set': 724, 'federal': 725, 'town': 726, 'nationality': 727, 'wars': 728, 'mexico': 729, 'making': 730, 'formed': 731, 'spain': 732, 'currency': 733, 'louis': 734, '11': 735, 'points': 736, 'free': 737, 'inches': 738, 'caribbean': 739, 'ask': 740, 'corpus': 741, 'page': 742, 'nnp': 743, 'bird': 744, 'w': 745, 'lyrics': 746, 'books': 747, 'historical': 748, 'don': 749, 'spumante': 750, 'occur': 751, 'season': 752, 'mayor': 753, 'could': 754, 'police': 755, 'commercial': 756, 'design': 757, 'cigarette': 758, 'schools': 759, 'ray': 760, 'hero': 761, 
'came': 762, 'turn': 763, 'automobile': 764, 'remove': 765, 'followed': 766, 'online': 767, 'dollar': 768, 'pounds': 769, 'bone': 770, 'miss': 771, 'successful': 772, 'right': 773, 'serve': 774, 'mountains': 775, 'yellow': 776, 'reason': 777, 'must': 778, 'snow': 779, 'g': 780, 'always': 781, 'order': 782, '1899': 783, 'kid': 784, 'army': 785, 'secretary': 786, 'machines': 787, 'sign': 788, 'stock': 789, 'given': 790, 'electric': 791, 'diego': 792, 'expression': 793, 'contact': 794, 'players': 795, 'india': 796, 'buffalo': 797, 'paint': 798, 'without': 799, 'early': 800, 'told': 801, 'italian': 802, 'written': 803, 'murder': 804, 'mozambique': 805, 'minimum': 806, 'wage': 807, '0': 808, 'husband': 809, 'atlantic': 810, 'glass': 811, 'comics': 812, 'committee': 813, '1983': 814, 'career': 815, 'create': 816, 'celebrated': 817, 'mark': 818, 'same': 819, 'awarded': 820, 'amount': 821, 'cup': 822, 'weigh': 823, 'brain': 824, 'society': 825, 'flower': 826, 'natural': 827, 'silver': 828, 'cream': 829, 'p': 830, 'eleven': 831, 'alphabet': 832, 'fish': 833, '1963': 834, 'tokyo': 835, 'aaron': 836, 'our': 837, 'korea': 838, 'vatican': 839, 'lady': 840, 'period': 841, 'nfl': 842, 'salt': 843, 'affect': 844, 'florida': 845, 'friend': 846, 'trial': 847, 'transplant': 848, 'originally': 849, 'effect': 850, 'richest': 851, 'leave': 852, 'films': 853, 'silly': 854, 'course': 855, 'bay': 856, 'perfect': 857, 'bowling': 858, 'score': 859, 'religion': 860, 'inspired': 861, 'arch': 862, 'johnny': 863, 'fox': 864, 'sister': 865, 'reims': 866, 'painting': 867, 'read': 868, 'complete': 869, 'church': 870, 'jude': 871, 'elements': 872, 'received': 873, 'broadway': 874, 'produces': 875, "n't": 876, 'border': 877, 'album': 878, 'poem': 879, 'grow': 880, 'maurizio': 881, 'pellegrin': 882, 'cocaine': 883, 'forest': 884, 'pole': 885, 'taste': 886, 'education': 887, 'watergate': 888, 'daily': 889, 'against': 890, 'composer': 891, 'swimming': 892, '21': 893, 'dc': 894, 'vegas': 895, 'microsoft': 896, '1994': 897, 'ways': 898, 'official': 899, 'spielberg': 900, 'done': 901, 'fifth': 902, 'adult': 903, 'leading': 904, 'cancer': 905, '1991': 906, 'christian': 907, 'jewish': 908, 'declared': 909, 'chocolate': 910, 'award': 911, 'equal': 912, '24': 913, 'represented': 914, 'lee': 915, 'mrs': 916, 'emperor': 917, 'energy': 918, 'correct': 919, 'model': 920, 'jane': 921, 'any': 922, 'questions': 923, 'charlie': 924, 'pop': 925, 'appeared': 926, 'commonly': 927, 'tale': 928, 'gate': 929, 'kept': 930, 'degrees': 931, 'meant': 932, 'host': 933, 'liberty': 934, 'mccarren': 935, 'magic': 936, 'presidential': 937, 'file': 938, 'members': 939, 'drive': 940, 'turned': 941, '000': 942, 'contract': 943, 'site': 944, 'going': 945, 'stone': 946, 'pound': 947, 'bureau': 948, 'investigation': 949, 'johnson': 950, 'phone': 951, 'daughter': 952, 'lincoln': 953, 'gulf': 954, 'literary': 955, 'bottle': 956, 'away': 957, 'houses': 958, 'birthstone': 959, 'sold': 960, 'keep': 961, 'reach': 962, 'alley': 963, 'events': 964, 'francisco': 965, 'madonna': 966, 'native': 967, 'medicine': 968, 'himself': 969, 'voice': 970, 'project': 971, 'mercury': 972, 'caffeine': 973, 'took': 974, 'using': 975, 'owns': 976, 'vice': 977, 'vhs': 978, 'research': 979, 'large': 980, 'michael': 981, 'drug': 982, 'britain': 983, 'invent': 984, 'treat': 985, 'mr': 986, 'mary': 987, 'contain': 988, 'late': 989, 'jersey': 990, 'seen': 991, 'conference': 992, 'justice': 993, 'iron': 994, 'simpsons': 995, 'close': 996, 'having': 997, 'location': 998, 'usa': 999, 'muppets': 
1000, 'rocky': 1001, 'hockey': 1002, 'yankee': 1003, '27': 1004, 'franklin': 1005, 'roosevelt': 1006, 'sex': 1007, 'animated': 1008, 'fruit': 1009, 'diamond': 1010, 'non': 1011, 'single': 1012, 'asia': 1013, 'across': 1014, 'distance': 1015, 'dam': 1016, 'universe': 1017, 'working': 1018, 'harvey': 1019, 'measure': 1020, '9': 1021, 'visit': 1022, 'wings': 1023, 'volcano': 1024, 'summer': 1025, 'control': 1026, 'wind': 1027, 'mutombo': 1028, 'submarine': 1029, 'salary': 1030, 'kids': 1031, 'test': 1032, 'lion': 1033, 'russia': 1034, 'stole': 1035, 'profession': 1036, 'amendment': 1037, 'constitution': 1038, 'clothing': 1039, 'putty': 1040, 'weapon': 1041, 'prophet': 1042, 'stage': 1043, 'fuel': 1044, 'cd': 1045, 'central': 1046, 'oceans': 1047, 'medium': 1048, 'stuart': 1049, 'hamblen': 1050, 'tiger': 1051, 'election': 1052, 'claims': 1053, 'starring': 1054, 'hollywood': 1055, 'jean': 1056, 'rascals': 1057, 'standard': 1058, '1967': 1059, 'market': 1060, 'just': 1061, 'directed': 1062, 'v': 1063, 'occupation': 1064, 'nicholas': 1065, 'ventura': 1066, 'published': 1067, 'races': 1068, '1960': 1069, 'widely': 1070, 'folic': 1071, 'qigong': 1072, 'him': 1073, 'astronaut': 1074, 'rum': 1075, 'am': 1076, 'december': 1077, 'las': 1078, '1980': 1079, 'pregnancy': 1080, 'lead': 1081, '16th': 1082, 'level': 1083, 'egg': 1084, 'pink': 1085, 'social': 1086, 'silent': 1087, 'pearl': 1088, 'harbor': 1089, 'academy': 1090, 'ago': 1091, 'sink': 1092, 'calories': 1093, 'creature': 1094, 'robert': 1095, 'literature': 1096, 'join': 1097, 'professional': 1098, 'peace': 1099, 'define': 1100, 'club': 1101, 'coach': 1102, '1969': 1103, "'ll": 1104, 'act': 1105, 'troops': 1106, 'rivers': 1107, 'rogers': 1108, '1989': 1109, 'function': 1110, 'hour': 1111, 'clock': 1112, '1965': 1113, 'powerful': 1114, 'irish': 1115, 'still': 1116, 'plastic': 1117, 'republic': 1118, 'well': 1119, 'classical': 1120, 'goodall': 1121, 'bob': 1122, 'procter': 1123, 'gamble': 1124, 'carolina': 1125, 'theme': 1126, 'master': 1127, 'purpose': 1128, 'here': 1129, 'reign': 1130, 'bee': 1131, 'dictator': 1132, 'pacific': 1133, 'spent': 1134, 'kevin': 1135, 'costner': 1136, 'thunder': 1137, 'baking': 1138, 'peachy': 1139, 'oat': 1140, 'muffins': 1141, 'patent': 1142, 'ads': 1143, 'associated': 1144, 'gay': 1145, 'desmond': 1146, 'operating': 1147, 'ibm': 1148, 'compatible': 1149, 'pennsylvania': 1150, '1939': 1151, 'battery': 1152, 'substance': 1153, 'blind': 1154, 'question': 1155, 'council': 1156, 'beat': 1157, 'martin': 1158, 'chapter': 1159, 'la': 1160, 'ben': 1161, 'clean': 1162, 'email': 1163, 'try': 1164, 'growing': 1165, 'upon': 1166, 'these': 1167, 'cells': 1168, 'convicted': 1169, 'bomb': 1170, 'surrounds': 1171, 'songs': 1172, 'al': 1173, 'percent': 1174, 'station': 1175, 'ride': 1176, 'third': 1177, 'pitcher': 1178, 'road': 1179, 'dogtown': 1180, 'camp': 1181, 'congress': 1182, 'price': 1183, 'depression': 1184, 'winning': 1185, 'position': 1186, 'colin': 1187, 'powell': 1188, 'detective': 1189, 'stewart': 1190, 'january': 1191, 'advertise': 1192, 'range': 1193, 'humans': 1194, 'itself': 1195, '1998': 1196, 'diameter': 1197, 'enter': 1198, 'senate': 1199, 'mammal': 1200, 'sinn': 1201, 'fein': 1202, 'bush': 1203, 'phoenix': 1204, 'astronauts': 1205, 'attend': 1206, 'auto': 1207, 'confederate': 1208, 'criminal': 1209, 'example': 1210, 'kuwait': 1211, 'ford': 1212, 'better': 1213, 'navy': 1214, 'engines': 1215, 'matter': 1216, 'luther': 1217, 'lindbergh': 1218, 'shape': 1219, 'dickens': 1220, 'andy': 1221, 'typical': 1222, 'service': 
1223, 'appearance': 1224, 'statistics': 1225, '13': 1226, 'cereal': 1227, 'philippines': 1228, 'engine': 1229, 'magna': 1230, 'carta': 1231, 'citizen': 1232, 'garry': 1233, 'kasparov': 1234, 'clouds': 1235, 'western': 1236, 'mouth': 1237, 'restaurant': 1238, 'happen': 1239, 'zip': 1240, 'paul': 1241, 'supreme': 1242, 'writing': 1243, 'roll': 1244, 'describe': 1245, 'pull': 1246, 'tab': 1247, 'robin': 1248, 'ages': 1249, 'airplane': 1250, 'canadian': 1251, 'williams': 1252, 'georgia': 1253, '2000': 1254, 'rome': 1255, 'aspartame': 1256, 'source': 1257, 'stations': 1258, 'forces': 1259, 'creator': 1260, 'benny': 1261, 'carter': 1262, 'future': 1263, 'along': 1264, 'theory': 1265, 'caliente': 1266, 'exchange': 1267, 'sexual': 1268, 'arthur': 1269, 'prevent': 1270, 'milky': 1271, 'nation': 1272, 'pregnant': 1273, 'pro': 1274, 'motion': 1275, 'surface': 1276, 'abraham': 1277, 'mexican': 1278, 'oz': 1279, 'rubber': 1280, 'covers': 1281, 'lose': 1282, 'expectancy': 1283, 'mouse': 1284, 'chain': 1285, 'today': 1286, 'grand': 1287, 'catch': 1288, 'hot': 1289, 'secret': 1290, 'walk': 1291, 'post': 1292, 'islamic': 1293, 'counterpart': 1294, 'sing': 1295, 'coffee': 1296, 'signed': 1297, 'crown': 1298, 'asked': 1299, 'independent': 1300, 'flying': 1301, 'dinner': 1302, 'terms': 1303, 'butler': 1304, 'acronym': 1305, 'drugs': 1306, 'oscars': 1307, 'policy': 1308, 'short': 1309, 'dikembe': 1310, 'land': 1311, 'camera': 1312, 'weather': 1313, 'finger': 1314, 'sites': 1315, 'rotary': 1316, 'cpr': 1317, 'help': 1318, 'ruth': 1319, 'popeye': 1320, 'cork': 1321, 'bounty': 1322, 'hunter': 1323, 'gandhi': 1324, 'insurance': 1325, 'executed': 1326, 'scarlett': 1327, 'sells': 1328, 'nature': 1329, 'adventures': 1330, 'yahoo': 1331, 'cash': 1332, 'exist': 1333, 'next': 1334, 'door': 1335, '19th': 1336, 'painter': 1337, 'callosum': 1338, 'command': 1339, 'occupy': 1340, 'domesticated': 1341, 'bull': 1342, 'circle': 1343, 'fred': 1344, 'michelangelo': 1345, 'trip': 1346, 'purchase': 1347, 'sides': 1348, 'raise': 1349, 'iq': 1350, 'brazil': 1351, 'beatles': 1352, 'root': 1353, '14': 1354, 'wide': 1355, 'steps': 1356, 'sings': 1357, 'doing': 1358, 'involved': 1359, 'cage': 1360, 'process': 1361, 'royal': 1362, 'explorer': 1363, 'planted': 1364, '1972': 1365, 'technique': 1366, 'detect': 1367, 'defects': 1368, 'railroad': 1369, 'independence': 1370, 'brought': 1371, 'windsor': 1372, 'minutes': 1373, 'bar': 1374, '98': 1375, 'scotland': 1376, 'dt': 1377, 'ring': 1378, 'playing': 1379, 'ad': 1380, 'eruption': 1381, 'fathom': 1382, 'windows': 1383, 'seattle': 1384, 'sons': 1385, 'lowest': 1386, 'direct': 1387, 'spy': 1388, 'columbus': 1389, 'romans': 1390, 'dealt': 1391, '22': 1392, 'chemicals': 1393, 'toy': 1394, 'bought': 1395, 'stadium': 1396, 'einstein': 1397, '1942': 1398, 'latitude': 1399, 'longitude': 1400, 'rainbow': 1401, 'normal': 1402, 'ohio': 1403, 'golfer': 1404, 'equivalent': 1405, 'behind': 1406, 'pig': 1407, 'tour': 1408, 'fight': 1409, 'cleveland': 1410, 'case': 1411, 'records': 1412, 'closest': 1413, 'triangle': 1414, 'soap': 1415, 'assassination': 1416, 'failure': 1417, 'unaccounted': 1418, 'everest': 1419, 'billy': 1420, 'peak': 1421, 'jones': 1422, 'eternity': 1423, 'hold': 1424, 'sweet': 1425, 'michigan': 1426, 'gene': 1427, 'manufacturer': 1428, 'outside': 1429, 'spider': 1430, 'wheel': 1431, 'statue': 1432, 'rent': 1433, 'label': 1434, 'export': 1435, 'stamp': 1436, 'logan': 1437, 'wives': 1438, 'soup': 1439, 'poker': 1440, 'dakota': 1441, 'seaport': 1442, 'lord': 1443, 'borders': 1444, 'acted': 
1445, 'santa': 1446, 'edward': 1447, 'sang': 1448, 'israel': 1449, 'apples': 1450, 'bastille': 1451, 'contest': 1452, 'sees': 1453, 'flies': 1454, 'such': 1455, 'low': 1456, 'championship': 1457, 'maryland': 1458, 'publish': 1459, 'laugh': 1460, 'nadia': 1461, 'comaneci': 1462, 'virginia': 1463, 'chief': 1464, 'peanut': 1465, 'hearing': 1466, 'receive': 1467, '19': 1468, 'sperm': 1469, 'victoria': 1470, 'hydrogen': 1471, 'toll': 1472, 'pass': 1473, '4': 1474, 'holds': 1475, 'nothing': 1476, 'feature': 1477, 'seas': 1478, 'discover': 1479, 'horses': 1480, 'geese': 1481, '1981': 1482, 'bug': 1483, 'plants': 1484, 'hurricane': 1485, 'hugo': 1486, 'attack': 1487, 'infectious': 1488, 'fungal': 1489, 'infection': 1490, 'disc': 1491, 'jockey': 1492, 'smallest': 1493, 'congressman': 1494, 'fourth': 1495, 'killer': 1496, 'less': 1497, 'dr': 1498, 'breed': 1499, '1978': 1500, 'banned': 1501, 'force': 1502, 'takes': 1503, 'gods': 1504, 'think': 1505, 'chairman': 1506, 'tried': 1507, 'cartoons': 1508, 'august': 1509, 'fare': 1510, 'return': 1511, 'probability': 1512, '25': 1513, 'something': 1514, 'display': 1515, 'reactivity': 1516, 'teaspoon': 1517, 'stick': 1518, 'province': 1519, 'jr': 1520, 'value': 1521, 'david': 1522, 'twin': 1523, 'brother': 1524, 'marijuana': 1525, 'wolfe': 1526, 'worked': 1527, '18': 1528, 'thank': 1529, 'july': 1530, 'credit': 1531, 'co': 1532, 'colorado': 1533, 'winnie': 1534, 'pooh': 1535, 'larry': 1536, 'established': 1537, 'clark': 1538, 'mission': 1539, 'enterprise': 1540, 'chess': 1541, 'search': 1542, 'beauty': 1543, 'morning': 1544, 'canyon': 1545, 'alice': 1546, 'gain': 1547, 'forced': 1548, 'target': 1549, 'released': 1550, 'style': 1551, 'pizza': 1552, 'hearst': 1553, 'boxer': 1554, 'draft': 1555, 'gives': 1556, 'minnesota': 1557, 'cosmology': 1558, 'eisenhower': 1559, 'superman': 1560, 'science': 1561, 'round': 1562, 'legal': 1563, 'half': 1564, 'unemployment': 1565, 'meat': 1566, 'presidency': 1567, 'virgin': 1568, 'edmund': 1569, 'penned': 1570, 'important': 1571, 'trivial': 1572, 'pursuit': 1573, 'talk': 1574, 'firm': 1575, 'trail': 1576, 'colony': 1577, 'span': 1578, 'sodium': 1579, 'trials': 1580, 'jail': 1581, 'shuttle': 1582, 'sank': 1583, '1953': 1584, 'oven': 1585, '16': 1586, 'becoming': 1587, 'organ': 1588, 'medal': 1589, 'ton': 1590, 'ends': 1591, '1940': 1592, 'samuel': 1593, 'pollock': 1594, 'ileana': 1595, 'cotrubas': 1596, 'included': 1597, 'ended': 1598, '1990': 1599, 'zeppelin': 1600, 'suit': 1601, '1950': 1602, 'sullivan': 1603, 'edgar': 1604, 'poe': 1605, 'witch': 1606, 'cathedral': 1607, 'editor': 1608, 'rest': 1609, 'potlatch': 1610, 'meters': 1611, 'utah': 1612, 'eight': 1613, 'rolling': 1614, 'active': 1615, 'pittsburgh': 1616, 'oswald': 1617, 'trees': 1618, 'vegetable': 1619, 'assassinate': 1620, 'february': 1621, 'network': 1622, 'wild': 1623, 'roger': 1624, 'allen': 1625, 'software': 1626, 'exercise': 1627, 'formula': 1628, 'move': 1629, 'fawaz': 1630, 'younis': 1631, 'happens': 1632, '1956': 1633, 'shirley': 1634, 'flows': 1635, 'hamburger': 1636, 'ships': 1637, 'staff': 1638, 'tip': 1639, 'apple': 1640, 'falls': 1641, 'institute': 1642, 'carries': 1643, 'gone': 1644, 'hemingway': 1645, 'coming': 1646, 'doctor': 1647, 'continental': 1648, 'hawaii': 1649, 'businesses': 1650, 'screen': 1651, 'steinbeck': 1652, 'davis': 1653, 'pulitzer': 1654, '1957': 1655, 'communist': 1656, '1975': 1657, 'louisiana': 1658, 'exactly': 1659, 'gorbachev': 1660, 'chancellor': 1661, 'colorful': 1662, 'h': 1663, 'spacecraft': 1664, 'fields': 1665, 
'manufactured': 1666, 'derby': 1667, 'room': 1668, 'production': 1669, 'direction': 1670, 'length': 1671, 'soda': 1672, 'material': 1673, 'modern': 1674, 'primary': 1675, 'howard': 1676, 'sergeant': 1677, 'stripes': 1678, 'deep': 1679, 'carbon': 1680, 'babe': 1681, 'celebrities': 1682, 'monkey': 1683, 'contemptible': 1684, 'scoundrel': 1685, 'lunch': 1686, 'liver': 1687, 'faced': 1688, 'objects': 1689, 'yankees': 1690, 'wwii': 1691, 'bibliography': 1692, 'inventor': 1693, 'collect': 1694, 'manufactures': 1695, 'community': 1696, 'mormons': 1697, 'relative': 1698, 'speak': 1699, 'venus': 1700, 'wonder': 1701, 'learning': 1702, 'young': 1703, 'flags': 1704, 'tourist': 1705, 'attractions': 1706, 'founder': 1707, 'nun': 1708, 'nazis': 1709, 'nebraska': 1710, 'marx': 1711, 'markets': 1712, 'dew': 1713, 'jesus': 1714, 'era': 1715, 'scholar': 1716, 'bears': 1717, 'signature': 1718, 'classic': 1719, '6th': 1720, 'piano': 1721, 'mutiny': 1722, 'delaware': 1723, 'raid': 1724, 'hard': 1725, 'temple': 1726, 'sales': 1727, 'canal': 1728, 'busiest': 1729, 'rose': 1730, 'maker': 1731, 'colonies': 1732, 'revolution': 1733, 'shows': 1734, 'plan': 1735, 'hemisphere': 1736, 'northernmost': 1737, 'seized': 1738, 'tongue': 1739, 'journal': 1740, 'syndrome': 1741, 'andrew': 1742, 'stopped': 1743, 'nicknamed': 1744, 'pilot': 1745, 'began': 1746, 'rites': 1747, 'cable': 1748, 'safe': 1749, 'deer': 1750, 'broken': 1751, 'expectant': 1752, 'beethoven': 1753, 'fortune': 1754, 'harry': 1755, 'duke': 1756, 'review': 1757, 'z': 1758, 'pan': 1759, 'cc': 1760, 'steel': 1761, 'hotel': 1762, 'burned': 1763, 'november': 1764, 'loss': 1765, 'helen': 1766, 'significant': 1767, 'penny': 1768, 'success': 1769, 'getting': 1770, 'marks': 1771, 'bottles': 1772, 'christopher': 1773, 'florence': 1774, 'tower': 1775, 'according': 1776, 'genesis': 1777, 'wood': 1778, 'likely': 1779, 'teddy': 1780, 'arctic': 1781, 'paso': 1782, 'boat': 1783, 'havoc': 1784, 'match': 1785, 'hazmat': 1786, 'earn': 1787, 'piece': 1788, 'movies': 1789, 'lay': 1790, 'conrad': 1791, 'swimmer': 1792, '1948': 1793, 'association': 1794, '1973': 1795, 'iii': 1796, 'owned': 1797, 'individuals': 1798, 'disabilities': 1799, 'missouri': 1800, 'spectrum': 1801, 'instead': 1802, 'promote': 1803, 'dry': 1804, 'crop': 1805, 'korean': 1806, 'dogs': 1807, 'running': 1808, 'generator': 1809, 'nevada': 1810, 'soccer': 1811, 'marley': 1812, 'ears': 1813, '1941': 1814, 'rich': 1815, 'closing': 1816, 'beers': 1817, 'rules': 1818, 'tie': 1819, 'clothes': 1820, 'intercourse': 1821, 'www': 1822, 'autobiography': 1823, 'angels': 1824, 'mao': 1825, 'belgium': 1826, 'fresh': 1827, 'festival': 1828, 'nino': 1829, 'flavor': 1830, 'novelist': 1831, 'spice': 1832, 'includes': 1833, 'household': 1834, 'tube': 1835, 'beaver': 1836, 'jefferson': 1837, 'sensitive': 1838, 'let': 1839, 'facial': 1840, 'territory': 1841, 'muhammad': 1842, 'degas': 1843, 'illinois': 1844, 'bees': 1845, 'professor': 1846, 'incredible': 1847, 'hulk': 1848, 'dots': 1849, 'reading': 1850, 'hooligans': 1851, 'fans': 1852, 'include': 1853, 'terrorist': 1854, 'youngest': 1855, 'equipment': 1856, 'gates': 1857, 'sort': 1858, 'arnold': 1859, 'graced': 1860, 'calculator': 1861, 'bulls': 1862, '1993': 1863, 'easter': 1864, 'tutu': 1865, 'verses': 1866, 'worst': 1867, 'saint': 1868, 'abbreviated': 1869, 'increase': 1870, 'monarch': 1871, 'nazi': 1872, 'woodrow': 1873, 'wilson': 1874, 'chronic': 1875, 'movement': 1876, 'typewriter': 1877, 'keyboard': 1878, 'subway': 1879, 'kansas': 1880, 'railway': 1881, 'bombay': 1882, 
'double': 1883, 'expensive': 1884, 'virtual': 1885, 'landed': 1886, 'legs': 1887, 'lobster': 1888, 'divorce': 1889, 'recorded': 1890, 'want': 1891, 'manned': 1892, 'bank': 1893, 'determine': 1894, 'hate': 1895, 'gray': 1896, 'democratic': 1897, 'event': 1898, 'skin': 1899, 'fingers': 1900, 'vacuum': 1901, 'division': 1902, 'cube': 1903, 'glitters': 1904, 'fired': 1905, 'maria': 1906, 'hunting': 1907, 'bars': 1908, 'deadliest': 1909, 'quit': 1910, 'vietnamese': 1911, 'fiction': 1912, 'seller': 1913, 'anthony': 1914, 'hitting': 1915, 'northeast': 1916, 'rice': 1917, 'davies': 1918, 'various': 1919, 'americas': 1920, 'daughters': 1921, 'jerry': 1922, 'erected': 1923, 'scrabble': 1924, 'brooks': 1925, 'diseases': 1926, '1999': 1927, 'box': 1928, 'april': 1929, 'met': 1930, 'strong': 1931, 'global': 1932, 'nile': 1933, 'supposed': 1934, 'parks': 1935, 'boys': 1936, 'chris': 1937, 'hall': 1938, 'sentence': 1939, 'ticket': 1940, '1966': 1941, 'shall': 1942, 'companion': 1943, 'tennessee': 1944, 'russians': 1945, 'release': 1946, 'roy': 1947, 'sherlock': 1948, 'holmes': 1949, 'recipe': 1950, 'starting': 1951, 'infamous': 1952, 'format': 1953, 'competition': 1954, 'study': 1955, 'tea': 1956, 'teams': 1957, 'angel': 1958, 'perform': 1959, 'girls': 1960, 'coast': 1961, 'capita': 1962, 'cut': 1963, 'possession': 1964, 'notes': 1965, 'pigs': 1966, 'brush': 1967, 'anniversary': 1968, 'trying': 1969, 'jackie': 1970, 'battles': 1971, 'generals': 1972, 'send': 1973, 'un': 1974, 'job': 1975, 'rust': 1976, 'tracy': 1977, 'monster': 1978, 'neil': 1979, 'peninsula': 1980, 'vincent': 1981, 'gogh': 1982, 'twice': 1983, 'cap': 1984, 'treatment': 1985, 'hawaiian': 1986, 'ivy': 1987, 'remain': 1988, '1971': 1989, 'offer': 1990, 'treasure': 1991, 'leon': 1992, 'hat': 1993, 'jamiroquai': 1994, 'ms': 1995, 'homelite': 1996, 'inc': 1997, 'region': 1998, 'humpty': 1999, 'dumpty': 2000, '1933': 2001, 'harrison': 2002, '28': 2003, '1992': 2004, 'mix': 2005, 'figure': 2006, 'graduate': 2007, 'owner': 2008, 'rider': 2009, 'cow': 2010, 'etc': 2011, 'perpetual': 2012, 'calendar': 2013, 'reference': 2014, 'floor': 2015, 'easiest': 2016, 'cats': 2017, 'dwight': 2018, 'giving': 2019, 'designer': 2020, 'meter': 2021, 'separates': 2022, 'personality': 2023, 'tournament': 2024, 'polio': 2025, 'vaccine': 2026, '1936': 2027, 'save': 2028, 'scientists': 2029, 'kosovo': 2030, 'economic': 2031, 'approximate': 2032, 'tristar': 2033, 'bolivia': 2034, 'ali': 2035, 'journey': 2036, 'dracula': 2037, 'wonders': 2038, 'ancient': 2039, 'sir': 2040, 'hillary': 2041, 'joseph': 2042, 'fined': 2043, 'mystery': 2044, 'glory': 2045, 'honey': 2046, 'launched': 2047, "'re": 2048, 'pit': 2049, '2th': 2050, 'q': 2051, 'manager': 2052, 'ear': 2053, 'coastline': 2054, 'count': 2055, 'odds': 2056, 'hiv': 2057, 'populated': 2058, 'luke': 2059, 'hours': 2060, 'hass': 2061, 'imaginary': 2062, 'chronicles': 2063, 'honecker': 2064, 'architect': 2065, 'broadcast': 2066, 'malaysia': 2067, 'chance': 2068, 'armed': 2069, 'sequel': 2070, 'maneuver': 2071, 'boston': 2072, 'frank': 2073, 'establish': 2074, 'shelley': 2075, 'stanley': 2076, 'yalta': 2077, 'agent': 2078, 'humphrey': 2079, 'bogart': 2080, 'sculpture': 2081, 'front': 2082, 'cans': 2083, 'landmark': 2084, 'moore': 2085, 'variety': 2086, 'saying': 2087, 'dealer': 2088, 'performer': 2089, 'butter': 2090, 'anglican': 2091, '1923': 2092, 'putting': 2093, 'gallon': 2094, 'spill': 2095, 'scientific': 2096, 'fur': 2097, 'recently': 2098, 'toilet': 2099, 'fax': 2100, 'sam': 2101, 'dancing': 2102, 'headed': 2103, 
'armstrong': 2104, 'military': 2105, 'effects': 2106, 'idea': 2107, 'guitar': 2108, 'constellation': 2109, 'travels': 2110, 'gestation': 2111, 'kinds': 2112, 'nba': 2113, 'ash': 2114, 'outcome': 2115, 'clay': 2116, 'lucas': 2117, 'seasons': 2118, 'elvis': 2119, 'presley': 2120, 'switzerland': 2121, 'wave': 2122, 'valdez': 2123, 'representative': 2124, 'electoral': 2125, 'votes': 2126, 'flowers': 2127, '1945': 2128, 'lens': 2129, 'prayer': 2130, 'ioc': 2131, 'uses': 2132, 'faces': 2133, 'celebrity': 2134, 'thalassemia': 2135, 'method': 2136, 'drinking': 2137, 'officer': 2138, 'nautical': 2139, 'algeria': 2140, 'foods': 2141, 'anything': 2142, 'triple': 2143, 'drew': 2144, 'barrymore': 2145, 'identity': 2146, 'table': 2147, 'mc2': 2148, 'leg': 2149, 'barbie': 2150, 'usually': 2151, 'storm': 2152, 'needle': 2153, '80': 2154, 'advertising': 2155, 'ewoks': 2156, 'september': 2157, 'playboy': 2158, 'lucy': 2159, 'trade': 2160, 'ambassador': 2161, 'maclaine': 2162, 'kings': 2163, 'nhl': 2164, 'practice': 2165, 'colonists': 2166, 'darth': 2167, 'vader': 2168, 'moby': 2169, 'winners': 2170, 'vienna': 2171, 'tunnel': 2172, 'trademark': 2173, 'panama': 2174, 'hear': 2175, 'actually': 2176, 'juice': 2177, '1935': 2178, 'victor': 2179, 'saddam': 2180, 'hussein': 2181, 'compared': 2182, '17': 2183, 'organism': 2184, 'argentina': 2185, 'add': 2186, 'stephen': 2187, 'paris': 2188, 'shoe': 2189, 'dime': 2190, 'commerce': 2191, 'train': 2192, 'birthplace': 2193, 'appearances': 2194, 'beginning': 2195, 'wake': 2196, 'lack': 2197, 'everything': 2198, 'library': 2199, 'self': 2200, 'willie': 2201, "'hara": 2202, 'steal': 2203, 'ernest': 2204, 'properties': 2205, "'ve": 2206, 'joan': 2207, 'palace': 2208, 'opens': 2209, 'cop': 2210, 'check': 2211, 'christ': 2212, 'june': 2213, 'indonesia': 2214, 'godfather': 2215, 'diabetes': 2216, 'screenplay': 2217, 'spawned': 2218, 'empty': 2219, 'typist': 2220, 'hungarian': 2221, 'pay': 2222, 'flew': 2223, 'lips': 2224, 'dwarfs': 2225, 'info': 2226, 'dialing': 2227, 'oklahoma': 2228, 'wearing': 2229, 'bones': 2230, 'vote': 2231, 'purposes': 2232, 'taylor': 2233, 'sent': 2234, '1952': 2235, 'worldwide': 2236, 'penis': 2237, '1968': 2238, 'albert': 2239, 'fine': 2240, 'opened': 2241, 'sauce': 2242, 'galaxy': 2243, 'opposite': 2244, 'mikhail': 2245, 'operation': 2246, 'journalist': 2247, 'traditional': 2248, 'district': 2249, 'greeting': 2250, 'butterfield': 2251, 'orleans': 2252, 'austria': 2253, 'pitchers': 2254, 'spend': 2255, '1974': 2256, 'airline': 2257, 'ceremony': 2258, 'mediterranean': 2259, 'training': 2260, 'gateway': 2261, 'represent': 2262, 'pyramid': 2263, 'object': 2264, 'goes': 2265, 'present': 2266, 'pictures': 2267, 'biography': 2268, 'others': 2269, 'senator': 2270, 'strait': 2271, 'designed': 2272, 'solar': 2273, "'em": 2274, 'woody': 2275, 'starts': 2276, 'nasa': 2277, 'fitzgerald': 2278, 'picture': 2279, 'cooking': 2280, 'websites': 2281, 'digits': 2282, 'those': 2283, 'dollars': 2284, 'erupt': 2285, 'mentioned': 2286, 'joe': 2287, 'coca': 2288, 'cola': 2289, 'casablanca': 2290, 'struck': 2291, 'class': 2292, 'comedy': 2293, 'plane': 2294, 'winner': 2295, 'lawrence': 2296, 'toward': 2297, 'zero': 2298, 'egypt': 2299, 'map': 2300, 'charge': 2301, 'clown': 2302, 'alexander': 2303, 'doodle': 2304, 'gallons': 2305, 'again': 2306, 'x': 2307, 'vs': 2308, 'worlds': 2309, 'howdy': 2310, 'doody': 2311, '500': 2312, 'foundation': 2313, 'hymn': 2314, 'fair': 2315, 'melting': 2316, 'satellite': 2317, 'develop': 2318, 'browns': 2319, 'enzymes': 2320, 'heavier': 2321, 
'pride': 2322, 'airports': 2323, 'deal': 2324, 'jews': 2325, 'concentration': 2326, 'inch': 2327, 'articles': 2328, 'vowel': 2329, 'myrtle': 2330, 'goddess': 2331, 'robinson': 2332, 'register': 2333, '1920s': 2334, 'rode': 2335, 'tony': 2336, 'cocktail': 2337, 'total': 2338, 'apollo': 2339, 'enemy': 2340, 'wears': 2341, 'issue': 2342, 'herb': 2343, 'relationship': 2344, 'danube': 2345, 'flow': 2346, 'log': 2347, 'nns': 2348, 'valuable': 2349, 'resource': 2350, 'lice': 2351, 'zealand': 2352, 'broadcasting': 2353, 'temperatures': 2354, 'tolkien': 2355, 'architecture': 2356, 'flourish': 2357, 'rail': 2358, 'rhodes': 2359, 'champions': 2360, 'fort': 2361, 'knox': 2362, 'rabbit': 2363, 'flintstones': 2364, 'database': 2365, '5th': 2366, 'graders': 2367, 'announced': 2368, 'catholic': 2369, 'calcium': 2370, 'joined': 2371, 'andrews': 2372, 'sisters': 2373, 'pistol': 2374, 'crust': 2375, 'equity': 2376, 'orinoco': 2377, 'penn': 2378, 'landing': 2379, 'commune': 2380, 'athlete': 2381, 'clip': 2382, 'electricity': 2383, 'yet': 2384, 'colored': 2385, 'hook': 2386, 'worms': 2387, 'sale': 2388, 'valley': 2389, 'revolve': 2390, '49': 2391, 'tribe': 2392, 'labels': 2393, 'very': 2394, 'simple': 2395, 'territories': 2396, 'luck': 2397, 'pneumonia': 2398, 'attempts': 2399, 'kalahari': 2400, 'menace': 2401, 'honorary': 2402, 'areas': 2403, 'eagle': 2404, 'wedding': 2405, 'modem': 2406, 'access': 2407, 'ross': 2408, 'offices': 2409, 'requirement': 2410, 'minute': 2411, 'chappellet': 2412, 'vineyard': 2413, 'correctly': 2414, 'coal': 2415, 'sharp': 2416, 'minor': 2417, 'revolutions': 2418, 'spacewalk': 2419, 'mosquito': 2420, 'proposition': 2421, 'ground': 2422, 'bombing': 2423, 'powers': 2424, 'lantern': 2425, 'mill': 2426, 'diminutive': 2427, 'vbp': 2428, 'cry': 2429, 'arma': 2430, 'mascot': 2431, 'beanie': 2432, 'afghanistan': 2433, 'stretch': 2434, 'besides': 2435, 'wines': 2436, 'nelson': 2437, 'cuckoo': 2438, 'espionage': 2439, 'producing': 2440, 'balance': 2441, 'account': 2442, 'valentine': 2443, 'italians': 2444, 'eiffel': 2445, 'bet': 2446, 'lethal': 2447, 'injection': 2448, 'doonesbury': 2449, 'werewolf': 2450, 'miami': 2451, 'speaking': 2452, 'wallpaper': 2453, 'voting': 2454, 'awards': 2455, 'deepest': 2456, 'bloom': 2457, 'healthy': 2458, 'stands': 2459, 'caesar': 2460, 'systems': 2461, 'analysis': 2462, 'piggy': 2463, 'unknown': 2464, 'devices': 2465, 'memphis': 2466, 'dominoes': 2467, 'holiday': 2468, 'admit': 2469, 'toes': 2470, 'tend': 2471, 'motors': 2472, 'lie': 2473, 'arkansas': 2474, 'ram': 2475, 'dwarf': 2476, 'olsen': 2477, 'ulysses': 2478, 'rings': 2479, 'patented': 2480, 'grant': 2481, '1797': 2482, '185': 2483, 'ratified': 2484, 'row': 2485, 'handle': 2486, 'conditioning': 2487, 'interesting': 2488, 'danced': 2489, 'astaire': 2490, 'headquartered': 2491, 'garden': 2492, 'eskimo': 2493, 'dentist': 2494, 'teenager': 2495, 'prewitt': 2496, 'sarge': 2497, 'metal': 2498, '197': 2499, 'brightest': 2500, 'marked': 2501, 'debut': 2502, 'dna': 2503, 'maid': 2504, 'agency': 2505, 'responsible': 2506, 'whisky': 2507, 'vermouth': 2508, 'cranberry': 2509, 'stereo': 2510, 'sitcom': 2511, 'maiden': 2512, 'ethel': 2513, 'diary': 2514, 'singles': 2515, 'cooler': 2516, 'cucumber': 2517, 'moscow': 2518, 'almost': 2519, 'ranger': 2520, 'yogi': 2521, 'lovers': 2522, 'superstar': 2523, 'float': 2524, 'fraction': 2525, 'sheep': 2526, 'indies': 2527, 'importer': 2528, 'cognac': 2529, 'walker': 2530, 'aged': 2531, 'commodity': 2532, 'cent': 2533, 'zones': 2534, 'employ': 2535, 'stanford': 2536, 'innings': 
2537, 'softball': 2538, 'coney': 2539, 'gained': 2540, 'pea': 2541, 'physical': 2542, 'colleges': 2543, 'wyoming': 2544, 'stocks': 2545, 'hastings': 2546, 'monet': 2547, 'rocks': 2548, 'background': 2549, 'viscosity': 2550, 'bog': 2551, 'claus': 2552, 'tourists': 2553, 'steve': 2554, 'whitcomb': 2555, 'judson': 2556, 'consist': 2557, 'addresses': 2558, 'representatives': 2559, 'creeps': 2560, 'hearts': 2561, 'octopus': 2562, '1979': 2563, 'shouts': 2564, 'felt': 2565, 'holes': 2566, 'tenpin': 2567, 'housewife': 2568, 'daycare': 2569, 'provider': 2570, 'zodiacal': 2571, 'technical': 2572, 'debate': 2573, 'hardest': 2574, 'several': 2575, 'serving': 2576, 'tornado': 2577, 'suffer': 2578, 'artificial': 2579, 'intelligence': 2580, 'holy': 2581, 'fatal': 2582, 'trinidad': 2583, 'carl': 2584, 'handheld': 2585, 'pressure': 2586, 'equator': 2587, 'shooting': 2588, 'determined': 2589, 'concorde': 2590, 'wash': 2591, 'stops': 2592, 'chairs': 2593, 'coin': 2594, 'data': 2595, 'collection': 2596, 'tourism': 2597, 'artists': 2598, 'statues': 2599, 'amphibians': 2600, 'razor': 2601, 'costs': 2602, 'uniform': 2603, 'loved': 2604, 'killing': 2605, 'regarding': 2606, 'outer': 2607, 'fat': 2608, 'neurological': 2609, 'attacks': 2610, 'protein': 2611, 'causing': 2612, 'highway': 2613, 'mckinley': 2614, 'duck': 2615, 'talking': 2616, 'estate': 2617, 'hudson': 2618, 'layers': 2619, 'pencil': 2620, 'enough': 2621, 'ip': 2622, '55': 2623, 'might': 2624, 'files': 2625, 'pairs': 2626, 'louie': 2627, 'aldrin': 2628, 'mel': 2629, 'closed': 2630, 'ouija': 2631, 'requirements': 2632, 'handed': 2633, 'care': 2634, 'jordan': 2635, 'connecticut': 2636, 'millennium': 2637, 'draw': 2638, 'heavily': 2639, 'caffeinated': 2640, 'varian': 2641, 'associates': 2642, 'rubik': 2643, 'prepared': 2644, 'measured': 2645, 'employee': 2646, 'fictional': 2647, 'dropped': 2648, 'jogging': 2649, 'orca': 2650, 'news': 2651, 'belushi': 2652, 'saturday': 2653, 'thanksgiving': 2654, 'liner': 2655, 'hijacked': 2656, '1985': 2657, 'philadelphia': 2658, 'private': 2659, 'marlowe': 2660, 'copies': 2661, '1958': 2662, 'suite': 2663, 'bra': 2664, 'cooper': 2665, 'colt': 2666, 'plo': 2667, 'bronze': 2668, 'laser': 2669, 'lauren': 2670, 'bacall': 2671, 'faith': 2672, 'scrooge': 2673, 'student': 2674, 'amherst': 2675, 'dow': 2676, 'churchill': 2677, '1964': 2678, 'martha': 2679, 'whale': 2680, 'blacks': 2681, 'efficient': 2682, 'sistine': 2683, 'chapel': 2684, 'cuba': 2685, '26': 2686, 'betting': 2687, 'nj': 2688, 'cement': 2689, 'chromosome': 2690, 'brandenburg': 2691, 'spaces': 2692, 'groups': 2693, 'lung': 2694, 'spelling': 2695, 'stefan': 2696, 'edberg': 2697, 'phenomenon': 2698, 'photographer': 2699, 'wise': 2700, 'begins': 2701, 'condiment': 2702, '1929': 2703, 'embassy': 2704, 'costa': 2705, 'rica': 2706, 'gatsby': 2707, 'share': 2708, 'select': 2709, 'bottom': 2710, 'surname': 2711, 'massacre': 2712, 'lagoon': 2713, '1982': 2714, 'heaven': 2715, 'lou': 2716, 'gehrig': 2717, 'camaro': 2718, 'replaced': 2719, 'bert': 2720, 'homer': 2721, 'snap': 2722, 'whip': 2723, 'jogis': 2724, 'pollution': 2725, 'trilogy': 2726, 'murdered': 2727, 'cairo': 2728, 'barbados': 2729, 'hills': 2730, 'holidays': 2731, 'numerals': 2732, 'veterans': 2733, 'dana': 2734, 'invade': 2735, 'driven': 2736, '199': 2737, 'smokey': 2738, 'gross': 2739, 'gerald': 2740, 'pseudonym': 2741, 'tragedy': 2742, 'statement': 2743, 'both': 2744, 'brooklyn': 2745, 'net': 2746, 'amateur': 2747, 'legged': 2748, 'heavyweight': 2749, 'champion': 2750, 'jacques': 2751, 'cousteau': 2752, 
'carry': 2753, 'epic': 2754, 'dates': 2755, 'unique': 2756, 'message': 2757, 'colombia': 2758, 'gang': 2759, '1976': 2760, 'poetry': 2761, 'corner': 2762, 'public': 2763, 'abuse': 2764, 'hell': 2765, 'sugar': 2766, 'salesman': 2767, 'snore': 2768, 'capp': 2769, 'lengths': 2770, 'prompted': 2771, 'dozen': 2772, 'glacier': 2773, 'toast': 2774, 'revolt': 2775, 'available': 2776, 'junk': 2777, 'provides': 2778, 'guinea': 2779, 'slinky': 2780, 'scooby': 2781, 'doo': 2782, 'shevardnadze': 2783, 'villain': 2784, '1951': 2785, 'upper': 2786, 'rating': 2787, "'n": 2788, 'brazilian': 2789, 'impress': 2790, 'tomato': 2791, 'educational': 2792, 'naval': 2793, 'luxury': 2794, 'centers': 2795, 'alexandra': 2796, 'josie': 2797, 'pussycats': 2798, 'poison': 2799, 'sand': 2800, 'sierra': 2801, 'parents': 2802, 'dating': 2803, 'trek': 2804, 'spangled': 2805, 'banner': 2806, 'anthem': 2807, 'appears': 2808, 'gettysburg': 2809, 'bobby': 2810, 'walls': 2811, 'touching': 2812, 'hub': 2813, 'panic': 2814, 'sigmund': 2815, 'freud': 2816, 'uris': 2817, 'fever': 2818, 'breakfast': 2819, 'mixing': 2820, 'saltpeter': 2821, '39': 2822, 'wonderland': 2823, 'gore': 2824, 'martinis': 2825, 'rex': 2826, 'numeral': 2827, 'legendary': 2828, 'wrist': 2829, 'agree': 2830, 'really': 2831, 'rank': 2832, 'among': 2833, 'introduce': 2834, 'growth': 2835, 'literal': 2836, 'translations': 2837, 'ezra': 2838, 'differences': 2839, 'religions': 2840, 'oxygen': 2841, 'keeps': 2842, 'sided': 2843, 'depicted': 2844, 'sailor': 2845, 'racing': 2846, 'bat': 2847, 'patricia': 2848, 'kidnaped': 2849, 'viking': 2850, 'deodorant': 2851, 'triplets': 2852, 'eaten': 2853, 'phrase': 2854, 'bourbon': 2855, 'restored': 2856, 'napoleon': 2857, 'polish': 2858, 'viii': 2859, 'distinction': 2860, 'marshall': 2861, 'mae': 2862, 'comedienne': 2863, 'inauguration': 2864, 'randolph': 2865, 'fusion': 2866, 'costume': 2867, 'glove': 2868, 'rows': 2869, '35': 2870, 'millimeter': 2871, 'goat': 2872, 'naples': 2873, 'lawyers': 2874, 'salonen': 2875, 'ethnic': 2876, 'category': 2877, 'fantastic': 2878, 'ballet': 2879, 'daminozide': 2880, 'zimbabwe': 2881, 'amazing': 2882, 'fraze': 2883, 'cases': 2884, 'avery': 2885, 'political': 2886, 'kitchen': 2887, 'economy': 2888, 'seventeen': 2889, 'n': 2890, 'apart': 2891, 'synonymous': 2892, 'plot': 2893, 'stalin': 2894, 'jump': 2895, 'parrot': 2896, 'beak': 2897, 'hermann': 2898, 'condition': 2899, 'studios': 2900, 'biblical': 2901, 'traveled': 2902, 'proper': 2903, 'nominated': 2904, 'coastal': 2905, 'greece': 2906, 'grapes': 2907, 'wrath': 2908, 'li': 2909, "'l": 2910, 'abner': 2911, 'calls': 2912, 'mall': 2913, 'lies': 2914, 'baltic': 2915, 'avenue': 2916, 'press': 2917, 'attempt': 2918, 'cache': 2919, 'jazz': 2920, 'blues': 2921, 'programming': 2922, 'adopted': 2923, 'borrow': 2924, 'atom': 2925, 'aol': 2926, 'princess': 2927, 'strikes': 2928, 'ella': 2929, '1642': 2930, '1649': 2931, 'gender': 2932, 'bradbury': 2933, 'illustrated': 2934, 'processor': 2935, '1913': 2936, 'uk': 2937, 'frogs': 2938, 'stevie': 2939, 'taiwanese': 2940, 'skywalker': 2941, 'weapons': 2942, 'nero': 2943, 'barbeque': 2944, 'things': 2945, 'atm': 2946, 'fibrosis': 2947, 'majority': 2948, 'truman': 2949, 'quadruplets': 2950, 'strontium': 2951, 'purified': 2952, 'molecules': 2953, 'australian': 2954, 'graces': 2955, 'daniel': 2956, 'cousins': 2957, 'mars': 2958, 'tenses': 2959, 'norway': 2960, 'toys': 2961, 'donated': 2962, 'tammy': 2963, 'judy': 2964, 'garland': 2965, 'stationed': 2966, 'strips': 2967, 'relations': 2968, 'giraffe': 2969, 'heat': 
2970, 'livingstone': 2971, 'edith': 2972, 'smoking': 2973, 'medals': 2974, 'medieval': 2975, 'guild': 2976, 'marry': 2977, 'occurs': 2978, 'honor': 2979, 'url': 2980, 'extensions': 2981, 'gravity': 2982, 'marciano': 2983, 'details': 2984, 'underwater': 2985, 'varieties': 2986, 'falklands': 2987, 'placed': 2988, 'pascal': 2989, 'opener': 2990, 'regular': 2991, 'recognize': 2992, 'declaration': 2993, 'pickering': 2994, 'hawks': 2995, 'credited': 2996, 'rooftops': 2997, 'steam': 2998, 'frozen': 2999, 'seeing': 3000, 'candy': 3001, 'split': 3002, 'marriage': 3003, 'madrid': 3004, 'horsepower': 3005, 'boosters': 3006, 'brave': 3007, 'refer': 3008, 'hibernia': 3009, 'chefs': 3010, 'topped': 3011, 'moving': 3012, 'cowboys': 3013, 'experienced': 3014, 'mighty': 3015, 'crokinole': 3016, 'monaco': 3017, 'likes': 3018, 'helium': 3019, 'true': 3020, 'dream': 3021, 'pendulum': 3022, 'frederick': 3023, 'cubic': 3024, 'munich': 3025, 'suspension': 3026, 'soldier': 3027, 'clear': 3028, 'michener': 3029, 'subtitled': 3030, 'feudal': 3031, 'solo': 3032, 'semper': 3033, 'fidelis': 3034, 'candle': 3035, 'kiss': 3036, 'pronounce': 3037, 'squares': 3038, 'means': 3039, 'neurosurgeon': 3040, 'changed': 3041, 'jiggy': 3042, 'preacher': 3043, 'leads': 3044, 'slave': 3045, 'teeth': 3046, 'decathlon': 3047, 'volcanoes': 3048, 'victorian': 3049, 'nests': 3050, 'aristotle': 3051, '23': 3052, 'quarts': 3053, 'lebanon': 3054, 'principles': 3055, 'stains': 3056, 'frequency': 3057, 'vhf': 3058, 'edition': 3059, 'commandments': 3060, 'damage': 3061, 'math': 3062, 'proud': 3063, 'kappa': 3064, 'hepburn': 3065, 'shake': 3066, 'declare': 3067, '1961': 3068, 'guy': 3069, 'mine': 3070, 'hoover': 3071, 'bugs': 3072, 'beetle': 3073, 'spins': 3074, 'larger': 3075, 'ruled': 3076, '31': 3077, 'gymnastics': 3078, 'dot': 3079, '1962': 3080, 'frankfurt': 3081, 'feeling': 3082, 'incident': 3083, 'khrushchev': 3084, 'odors': 3085, 'calculate': 3086, 'apartment': 3087, 'report': 3088, 'industrial': 3089, 'classification': 3090, 'codes': 3091, 'rosa': 3092, 'seat': 3093, 'marine': 3094, 'adopt': 3095, 'equation': 3096, 'rear': 3097, 'weeks': 3098, 'trophy': 3099, 'commit': 3100, 'piracy': 3101, 'spread': 3102, 'buys': 3103, 'bakery': 3104, 'iraq': 3105, 'invasion': 3106, 'sinatra': 3107, 'dooby': 3108, 'maurice': 3109, 'port': 3110, 'attraction': 3111, 'lightning': 3112, 'von': 3113, 'poland': 3114, 'pepper': 3115, 'miller': 3116, 'lite': 3117, 'hamlet': 3118, 'operations': 3119, '1847': 3120, 'sydney': 3121, 'handful': 3122, 'mad': 3123, 'afternoon': 3124, 'postal': 3125, 'choo': 3126, 'since': 3127, 'shoot': 3128, 'jake': 3129, 'horoscope': 3130, 'hungary': 3131, 'uprising': 3132, 'arcadia': 3133, 'exports': 3134, 'pines': 3135, 'walter': 3136, 'huston': 3137, 'terry': 3138, 'genetics': 3139, 'nuts': 3140, 'ham': 3141, 'cotton': 3142, 'caught': 3143, 'englishman': 3144, 'canine': 3145, 'huckleberry': 3146, 'forests': 3147, 'dinosaur': 3148, 'camel': 3149, 'clara': 3150, 'learn': 3151, 'pages': 3152, 'horrors': 3153, 'jungle': 3154, 'alcoholic': 3155, 'syrup': 3156, 'vcr': 3157, 'plain': 3158, 'inventors': 3159, 'image': 3160, 'truth': 3161, 'approximately': 3162, 'fbi': 3163, 'hughes': 3164, 'milton': 3165, 'tools': 3166, 'scandinavian': 3167, 'connected': 3168, 'alan': 3169, 'leukemia': 3170, 'tail': 3171, 'elections': 3172, 'grooves': 3173, 'consecutive': 3174, 'chip': 3175, 'premiered': 3176, 'noble': 3177, 'even': 3178, 'aid': 3179, 'helps': 3180, 'verdict': 3181, 'holland': 3182, 'says': 3183, 'administration': 3184, 'defense': 
3185, 'moral': 3186, 'willy': 3187, 'uniforms': 3188, 'swap': 3189, 'stones': 3190, 'shampoo': 3191, 'banana': 3192, 'evil': 3193, 'rhett': 3194, 'leaving': 3195, 'marcos': 3196, 'treasury': 3197, 'lemmon': 3198, 'thrilled': 3199, 'cricket': 3200, 'kong': 3201, 'amazon': 3202, 'boxes': 3203, 'barnum': 3204, 'thumb': 3205, 'civilization': 3206, 'fun': 3207, 'prison': 3208, 'leper': 3209, 'ghost': 3210, 'wright': 3211, 'gin': 3212, 'youngsters': 3213, 'exxon': 3214, 'certain': 3215, 'village': 3216, 'shield': 3217, 'vehicles': 3218, 'nowadays': 3219, 'digital': 3220, 'change': 3221, 'kafka': 3222, 'engineer': 3223, 'postage': 3224, 'slow': 3225, 'condoms': 3226, 'bottled': 3227, 'ranges': 3228, 'nostradamus': 3229, 'maine': 3230, 'annual': 3231, 'meeting': 3232, 'experts': 3233, 'constructed': 3234, 'alphabetically': 3235, 'wrestler': 3236, 'tells': 3237, 'geographical': 3238, 'including': 3239, 'device': 3240, 'cardinal': 3241, 'ingredient': 3242, 'ranch': 3243, 'bligh': 3244, 'monitor': 3245, 'linked': 3246, 'nintendo': 3247, 'francis': 3248, 'pow': 3249, '1977': 3250, 'chilean': 3251, 'coup': 3252, 'punch': 3253, 'entertainer': 3254, 'directly': 3255, 'magazines': 3256, 'else': 3257, 'beautiful': 3258, 'disk': 3259, 'corn': 3260, '1812': 3261, '100': 3262, 'imposed': 3263, 'related': 3264, 'fdr': 3265, 'special': 3266, '1779': 3267, 'archie': 3268, 'farthest': 3269, 'rick': 3270, 'rise': 3271, 'buildings': 3272, 'capone': 3273, 'fairy': 3274, 'importers': 3275, '900': 3276, '740': 3277, 'yards': 3278, 'hermit': 3279, 'crabs': 3280, 'advertises': 3281, 'persian': 3282, '1988': 3283, 'carol': 3284, 'crystals': 3285, 'controls': 3286, 'ripening': 3287, 'breeds': 3288, 'bette': 3289, '196': 3290, 'satellites': 3291, 'bread': 3292, 'interview': 3293, 'hampshire': 3294, 'broke': 3295, 'polo': 3296, 'hike': 3297, 'entire': 3298, 'sled': 3299, 'defined': 3300, 'cologne': 3301, 'ready': 3302, 'deals': 3303, 'partner': 3304, 'slam': 3305, 'spaghetti': 3306, 'indiglo': 3307, 'touch': 3308, 'labor': 3309, 'parking': 3310, 'printing': 3311, 'output': 3312, 'globe': 3313, '1946': 3314, 'development': 3315, 'nursery': 3316, 'ferry': 3317, 'chances': 3318, 'pregnacy': 3319, 'penetrate': 3320, 'vagina': 3321, 'conversion': 3322, 'spears': 3323, 'action': 3324, 'par': 3325, '455': 3326, 'yard': 3327, 'fit': 3328, 'tiny': 3329, 'bonnie': 3330, 'foreclosure': 3331, 'scottish': 3332, 'edinburgh': 3333, 'bell': 3334, "'clock": 3335, 'recommended': 3336, 'scopes': 3337, '15th': 3338, 'veins': 3339, 'panel': 3340, 'actors': 3341, 'factor': 3342, 'ernie': 3343, 'yellowstone': 3344, 'palmer': 3345, 'hoffman': 3346, 'mills': 3347, 'wizard': 3348, 'frankenstein': 3349, 'doubles': 3350, 'lsd': 3351, 'alternative': 3352, 'dutch': 3353, 'renaissance': 3354, 'bust': 3355, 'episode': 3356, 'colonial': 3357, 'until': 3358, 'visitors': 3359, 'frontier': 3360, 'restore': 3361, 'scotch': 3362, 'liverpool': 3363, 'grace': 3364, 'athletic': 3365, 'underworld': 3366, 'elevation': 3367, 'structure': 3368, 'looking': 3369, 'seventh': 3370, '1853': 3371, 'within': 3372, 'principle': 3373, 'parker': 3374, 'print': 3375, 'false': 3376, 'consciousness': 3377, 'athletes': 3378, 'bed': 3379, 'rockefeller': 3380, 'warner': 3381, 'bros': 3382, 'spelled': 3383, 'y': 3384, 'mont': 3385, 'blanc': 3386, 'ill': 3387, 'fated': 3388, 'amtrak': 3389, 'eastern': 3390, 'netherlands': 3391, 'orgasm': 3392, 'cane': 3393, 'block': 3394, 'hundred': 3395, '1940s': 3396, 'donald': 3397, 'shores': 3398, 'section': 3399, 'jeep': 3400, 'leaders': 3401, 
'brunei': 3402, 'steamboat': 3403, 'marrow': 3404, 'violins': 3405, 'snail': 3406, 'lewis': 3407, 'surrendered': 3408, 'cabinet': 3409, 'da': 3410, 'wimbledon': 3411, 'chile': 3412, 'ratio': 3413, 'kangaroo': 3414, 'romantic': 3415, 'gordon': 3416, 'support': 3417, 'idealab': 3418, 'sizes': 3419, 'roller': 3420, 'legend': 3421, 'separated': 3422, 'originated': 3423, 'shared': 3424, 'geographic': 3425, 'limits': 3426, 'frog': 3427, 'channel': 3428, 'duel': 3429, 'santos': 3430, 'dumont': 3431, 'woodpecker': 3432, 'gilbert': 3433, 'figures': 3434, 'ability': 3435, 'prism': 3436, 'heroine': 3437, 'scruples': 3438, 'charley': 3439, 'wayne': 3440, 'machine': 3441, 'eighth': 3442, 'wet': 3443, 'desire': 3444, 'thought': 3445, 'pox': 3446, 'typing': 3447, 'knight': 3448, 'possible': 3449, 'hypertension': 3450, 'casting': 3451, 'poisoning': 3452, 'pineapple': 3453, 'mau': 3454, 'kills': 3455, 'jolson': 3456, 'past': 3457, 'shouldn': 3458, 'abstract': 3459, 'vessels': 3460, 'silversmiths': 3461, 'jules': 3462, 'verne': 3463, 'hood': 3464, 'harper': 3465, 'cheese': 3466, 'coconut': 3467, 'measures': 3468, 'clinton': 3469, 'defeat': 3470, 'poll': 3471, 'cause': 3472, 'developing': 3473, 'magnetic': 3474, 'nose': 3475, 'pilots': 3476, 'seizure': 3477, 'sicily': 3478, 'maximum': 3479, 'appointed': 3480, 'dicken': 3481, 'helens': 3482, 'specifically': 3483, 'heaviest': 3484, 'bald': 3485, 'bud': 3486, 'cave': 3487, 'bestselling': 3488, 'lights': 3489, 'too': 3490, 'mineral': 3491, 'spoke': 3492, 'revolutionaries': 3493, 'darkness': 3494, '48': 3495, 'namath': 3496, 'oriented': 3497, 'limit': 3498, 'drunk': 3499, 'beans': 3500, 'sailors': 3501, 'pet': 3502, 'ruin': 3503, 'ce': 3504, 'blow': 3505, 'disney': 3506, 'margaret': 3507, 'engineering': 3508, 'benefits': 3509, 'hank': 3510, 'smith': 3511, 'simpson': 3512, 'surfing': 3513, 'manufacture': 3514, 'mayonnaise': 3515, 'narrates': 3516, 'affected': 3517, 'lloyd': 3518, 'bad': 3519, 'mirror': 3520, 'opening': 3521, 'taj': 3522, 'argentine': 3523, 'shoes': 3524, 'cookies': 3525, 'mentor': 3526, 'mules': 3527, 'habitat': 3528, 'alcohol': 3529, 'simon': 3530, 'denver': 3531, 'gourd': 3532, 'combat': 3533, 'fountain': 3534, 'earthquake': 3535, 'cool': 3536, 'armor': 3537, 'jonathan': 3538, 'matterhorn': 3539, 'pins': 3540, '1937': 3541, 'hope': 3542, 'fall': 3543, 'ronald': 3544, 'reagan': 3545, 'dreams': 3546, 'yukon': 3547, 'themselves': 3548, 'dolly': 3549, 'parton': 3550, 'throat': 3551, 'pronounced': 3552, 'raised': 3553, 'floors': 3554, 'results': 3555, 'drinks': 3556, 'pepsi': 3557, 'dextropropoxyphen': 3558, 'napsylate': 3559, 'slogan': 3560, 'neon': 3561, 'layer': 3562, 'ozone': 3563, 'electrical': 3564, 'purchased': 3565, 'corporate': 3566, 'dioxide': 3567, 'removed': 3568, 'surrender': 3569, 'habeas': 3570, 'translated': 3571, 'pets': 3572, 'litmus': 3573, 'tin': 3574, 'materials': 3575, 'problems': 3576, 'tune': 3577, 'diamonds': 3578, 'teflon': 3579, 'bunker': 3580, 'flavors': 3581, 'roe': 3582, 'millions': 3583, 'casey': 3584, 'copper': 3585, 'resistance': 3586, '2112': 3587, 'existence': 3588, 'endangered': 3589, 'doll': 3590, 'sell': 3591, '1959': 3592, 'usenet': 3593, 'melissa': 3594, 'turkey': 3595, 'scouts': 3596, 'creatures': 3597, 'canary': 3598, 'helicopter': 3599, 'sundaes': 3600, 'singular': 3601, 'monument': 3602, 'testament': 3603, 'fatman': 3604, 'joint': 3605, 'couple': 3606, 'divided': 3607, 'scale': 3608, 'earthquakes': 3609, 'abandoned': 3610, 'anti': 3611, 'acne': 3612, 'muscles': 3613, 'victory': 3614, 'merchant': 3615, 
'crash': 3616, 'marco': 3617, 'climate': 3618, 'vitamin': 3619, 'fool': 3620, 'dale': 3621, 'gulliver': 3622, 'mold': 3623, 'humidity': 3624, 'ph': 3625, 'mona': 3626, 'lisa': 3627, 'poor': 3628, 'whole': 3629, 'jesse': 3630, 'alphabetical': 3631, 'fiber': 3632, 'aborigines': 3633, 'corporation': 3634, 'curious': 3635, 'inaugurated': 3636, 'please': 3637, 'beverly': 3638, 'hillbillies': 3639, 'arrow': 3640, 'iceland': 3641, 'puppy': 3642, 'coral': 3643, 'joplin': 3644, 'streak': 3645, 'legally': 3646, 'interest': 3647, 'crickets': 3648, 'disorder': 3649, 'worn': 3650, 'mo': 3651, 'issued': 3652, 'osteoporosis': 3653, 'lyndon': 3654, 'braves': 3655, 'serfdom': 3656, 'doyle': 3657, 'fowl': 3658, 'grabs': 3659, 'spotlight': 3660, 'scar': 3661, 'ozzy': 3662, 'osbourne': 3663, 'downhill': 3664, 'faster': 3665, 'costliest': 3666, 'disaster': 3667, 'industry': 3668, 'sprawling': 3669, 'repealed': 3670, 'camps': 3671, 'nails': 3672, 'annotated': 3673, 'tokens': 3674, 'martyrs': 3675, 'waterfall': 3676, 'enclose': 3677, 'chesapeake': 3678, 'spermologer': 3679, 'fivepin': 3680, 'hardware': 3681, 'chest': 3682, 'neanderthal': 3683, 'isis': 3684, 'swiss': 3685, 'racoon': 3686, 'conscious': 3687, 'colonel': 3688, 'edwin': 3689, 'drake': 3690, 'drill': 3691, 'milo': 3692, 'choose': 3693, 'witnesses': 3694, 'execution': 3695, 'doxat': 3696, 'stirred': 3697, 'shaken': 3698, 'isps': 3699, 'stranger': 3700, 'mythological': 3701, 'proficient': 3702, 'galapagos': 3703, 'belong': 3704, 'ethology': 3705, 'muslim': 3706, '86ed': 3707, 'snoopy': 3708, 'kashmir': 3709, 'loop': 3710, 'extended': 3711, 'tirana': 3712, 'titanium': 3713, 'tootsie': 3714, 'caldera': 3715, 'calluses': 3716, 'cushman': 3717, 'wakefield': 3718, 'scientology': 3719, 'footed': 3720, 'musca': 3721, 'domestica': 3722, 'enters': 3723, 'skateboarding': 3724, 'bricks': 3725, 'recycled': 3726, 'marquesas': 3727, 'shark': 3728, 'villi': 3729, 'intestine': 3730, 'wenceslas': 3731, 'shadows': 3732, 'excluded': 3733, 'anzus': 3734, 'alliance': 3735, 'shiver': 3736, 'bilbo': 3737, 'baggins': 3738, 'logo': 3739, 'cascade': 3740, 'rococo': 3741, 'passenger': 3742, 'via': 3743, 'attic': 3744, 'delilah': 3745, 'samson': 3746, 'paleozoic': 3747, 'comprised': 3748, 'defunct': 3749, 'paintball': 3750, 'chihuahuas': 3751, 'barney': 3752, 'rubble': 3753, 'drops': 3754, 'snake': 3755, 'similar': 3756, 'server': 3757, 'injury': 3758, 'lawsuit': 3759, 'acetylsalicylic': 3760, 'georgetown': 3761, 'hoya': 3762, 'chickens': 3763, 'chicks': 3764, 'bingo': 3765, 'crooner': 3766, 'packin': 3767, 'mama': 3768, 'immortals': 3769, 'bladerunner': 3770, 'transistor': 3771, 'arms': 3772, 'harlow': 3773, '1932': 3774, 'tent': 3775, 'nominations': 3776, 'securities': 3777, 'gymnophobia': 3778, 'fossils': 3779, 'banks': 3780, 'cervantes': 3781, 'quixote': 3782, 'jj': 3783, 'hostages': 3784, 'entebbe': 3785, 'inri': 3786, 'earns': 3787, 'merchandise': 3788, 'bends': 3789, 'dental': 3790, 'wanna': 3791, 'riots': 3792, 'domestic': 3793, 'marbella': 3794, 'blasted': 3795, 'mojave': 3796, 'gaulle': 3797, 'themes': 3798, 'dawson': 3799, 'creek': 3800, 'felicity': 3801, 'terrific': 3802, 'troop': 3803, 'perpetually': 3804, 'pudding': 3805, 'nonchlorine': 3806, 'bleach': 3807, 'pictorial': 3808, 'directions': 3809, 'treehouse': 3810, 'irkutsk': 3811, 'yakutsk': 3812, 'kamchatka': 3813, 'managing': 3814, 'apricot': 3815, 'horseshoes': 3816, 'bring': 3817, '401': 3818, 'gringo': 3819, 'therapy': 3820, 'elicit': 3821, 'primal': 3822, 'scream': 3823, 'shipment': 3824, 'gotham': 3825, 
'dangles': 3826, 'palate': 3827, 'dennis': 3828, 'dummy': 3829, 'degree': 3830, 'northwestern': 3831, 'alvin': 3832, 'styloid': 3833, 'jayne': 3834, 'mansfield': 3835, 'fergie': 3836, 'either': 3837, 'iberia': 3838, 'hits': 3839, 'foul': 3840, 'hamburgers': 3841, 'steakburgers': 3842, 'browser': 3843, 'mosaic': 3844, 'accompanying': 3845, 'circumcision': 3846, 'newly': 3847, 'judaism': 3848, 'betsy': 3849, 'sharks': 3850, 'snoogans': 3851, 'shrubs': 3852, 'forms': 3853, 'acreage': 3854, 'entered': 3855, 'filmmakers': 3856, 'collabrative': 3857, 'prelude': 3858, 'houdini': 3859, 'lp': 3860, 'nightmare': 3861, 'elm': 3862, 'slowest': 3863, 'stroke': 3864, 'transmitted': 3865, 'anopheles': 3866, 'mixable': 3867, 'contents': 3868, '103': 3869, 'lockerbie': 3870, 'paracetamol': 3871, 'weaknesses': 3872, 'bite': 3873, 'draws': 3874, 'gymnast': 3875, 'katie': 3876, 'terrence': 3877, 'malick': 3878, 'notre': 3879, 'dame': 3880, '84': 3881, 'nicois': 3882, 'tragic': 3883, 'owe': 3884, 'lifting': 3885, 'brontosauruses': 3886, 'rhine': 3887, 'sleepless': 3888, 'ozzie': 3889, 'harriet': 3890, 'molybdenum': 3891, 'judiciary': 3892, 'somme': 3893, 'kill': 3894, 'akita': 3895, 'hackers': 3896, 'tracking': 3897, 'maze': 3898, 'founding': 3899, 'floyd': 3900, 'budweis': 3901, 'psi': 3902, 'mare': 3903, 'nostrum': 3904, 'incompetent': 3905, 'apparel': 3906, 'pusher': 3907, 'poorly': 3908, 'micro': 3909, 'bowls': 3910, 'nasdaq': 3911, 'bph': 3912, 'dolphins': 3913, 'poop': 3914, 'attacked': 3915, 'affectionate': 3916, 'tabulates': 3917, 'ballots': 3918, 'tradition': 3919, 'halloween': 3920, 'silk': 3921, 'screening': 3922, 'gopher': 3923, 'screensaver': 3924, 'resident': 3925, 'wreaks': 3926, 'resting': 3927, 'mac': 3928, 'scares': 3929, 'zenger': 3930, 'deveopment': 3931, 'chessboard': 3932, 'videotape': 3933, 'advertizing': 3934, 'frito': 3935, 'competitor': 3936, 'trans': 3937, 'dare': 3938, 'knock': 3939, 'shoulder': 3940, 'approaches': 3941, 'doctors': 3942, 'diagnose': 3943, 'pulls': 3944, 'strings': 3945, 'speaks': 3946, 'developmental': 3947, 'stages': 3948, 'nicklaus': 3949, 'golfers': 3950, 'sonnets': 3951, 'redness': 3952, 'cheeks': 3953, 'blush': 3954, 'challengers': 3955, 'pheonix': 3956, 'crusaders': 3957, 'recapture': 3958, 'muslims': 3959, 'rupee': 3960, 'depreciates': 3961, 'gaming': 3962, 'marbles': 3963, 'arometherapy': 3964, 'mideast': 3965, 'copier': 3966, 'guarantee': 3967, 'dudley': 3968, 'whores': 3969, 'shan': 3970, 'ripping': 3971, 'contestant': 3972, 'picking': 3973, 'shower': 3974, 'rolls': 3975, 'royce': 3976, 'nightingale': 3977, 'stardust': 3978, 'necessarily': 3979, 'consecutively': 3980, 'g2': 3981, 'angles': 3982, 'isosceles': 3983, 'cockroaches': 3984, 'cbs': 3985, 'interrupted': 3986, 'bulletin': 3987, 'dreamed': 3988, 'marvin': 3989, 'basque': 3990, 'hirohito': 3991, 'euchre': 3992, 'intractable': 3993, 'plantar': 3994, 'keratoma': 3995, 'joyce': 3996, 'famine': 3997, 'kimpo': 3998, 'tails': 3999, 'predators': 4000, 'antarctica': 4001, 'alternator': 4002, 'advertised': 4003, 'quantity': 4004, 'golfing': 4005, 'accessory': 4006, 'ligurian': 4007, 'watchers': 4008, 'suffrage': 4009, 'polynesian': 4010, 'inhabit': 4011, 'admonition': 4012, '219': 4013, 'oftentimes': 4014, 'dropping': 4015, 'goalie': 4016, 'permitted': 4017, 'operant': 4018, 'facts': 4019, 'dogsledding': 4020, 'joel': 4021, 'babar': 4022, 'dumbo': 4023, 'insured': 4024, 'stardom': 4025, 'stein': 4026, 'eriksen': 4027, 'billie': 4028, 'allah': 4029, 'bicornate': 4030, 'weakness': 4031, 'equals': 4032, 'inuit': 
4033, 'dye': 4034, 'doesn': 4035, 'yesterdays': 4036, 'emil': 4037, 'goldfus': 4038, 'tornadoes': 4039, 'covrefeu': 4040, 'popcorn': 4041, 'iguana': 4042, 'gustav': 4043, '195': 4044, 'visible': 4045, 'redford': 4046, 'directorial': 4047, 'hyenas': 4048, 'cinderslut': 4049, 'skein': 4050, 'wool': 4051, 'governmental': 4052, 'dealing': 4053, 'racism': 4054, 'referees': 4055, 'concoct': 4056, 'hives': 4057, 'bogs': 4058, 'lynmouth': 4059, 'floods': 4060, 'tulip': 4061, 'madilyn': 4062, 'kahn': 4063, 'wilder': 4064, 'maximo': 4065, 'slightly': 4066, 'ahead': 4067, 'potter': 4068, 'lifesaver': 4069, 'crucifixion': 4070, 'grass': 4071, 'firewall': 4072, 'correspondent': 4073, 'reuter': 4074, 'picked': 4075, 'stalled': 4076, 'evaporate': 4077, 'deck': 4078, 'facility': 4079, 'ballcock': 4080, 'overflow': 4081, 'betty': 4082, 'boop': 4083, 'luis': 4084, 'rey': 4085, 'economists': 4086, 'dominica': 4087, 'johnnie': 4088, 'sids': 4089, 'grenada': 4090, 'clitoris': 4091, 'mandrake': 4092, 'goddamn': 4093, 'airborne': 4094, 'commercially': 4095, 'bayer': 4096, 'leverkusen': 4097, 'nanometer': 4098, 'regulation': 4099, 'brigham': 4100, 'hiding': 4101, 'scars': 4102, 'boardwalk': 4103, 'renown': 4104, 'fogs': 4105, 'janelle': 4106, 'deadwood': 4107, 'distinct': 4108, 'characterstics': 4109, 'arabian': 4110, 'traded': 4111, 'extension': 4112, 'dbf': 4113, 'derogatory': 4114, 'applied': 4115, 'painters': 4116, 'sisley': 4117, 'pissarro': 4118, 'renoir': 4119, 'wop': 4120, 'noise': 4121, 'climb': 4122, 'nepal': 4123, 'alyssa': 4124, 'milano': 4125, 'danza': 4126, 'nordic': 4127, 'vladimir': 4128, 'nabokov': 4129, 'humbert': 4130, 'myth': 4131, 'chairbound': 4132, 'basophobic': 4133, 'wrestling': 4134, 'attracts': 4135, 'mcqueen': 4136, 'cincinnati': 4137, 'fastener': 4138, '1893': 4139, 'auberge': 4140, 'booth': 4141, 'catherine': 4142, 'ribavirin': 4143, 'toothbrush': 4144, 'abominable': 4145, 'snowman': 4146, 'wander': 4147, 'alone': 4148, 'adding': 4149, 'lactobacillus': 4150, 'bulgaricus': 4151, 'classified': 4152, 'eenty': 4153, 'seed': 4154, 'approaching': 4155, 'cyclist': 4156, 'earned': 4157, 'chomper': 4158, 'comediennes': 4159, 'nora': 4160, 'wiggins': 4161, 'eunice': 4162, 'justin': 4163, 'muscle': 4164, 'veal': 4165, 'roasts': 4166, 'chops': 4167, 'amish': 4168, '187s': 4169, 'mining': 4170, 'ringo': 4171, 'stagecoach': 4172, 'coppertop': 4173, 'carrier': 4174, 'open': 4175, 'shostakovich': 4176, 'rostropovich': 4177, 'nipsy': 4178, 'russell': 4179, 'vehicle': 4180, 'shipyard': 4181, 'inspector': 4182, 'kilroy': 4183, 'designate': 4184, 'satisfactory': 4185, 'jellicle': 4186, 'evidence': 4187, 'hermitage': 4188, 'shots': 4189, 'm16': 4190, 'protanopia': 4191, 'guys': 4192, 'whoever': 4193, 'finds': 4194, 'wins': 4195, 'coronado': 4196, 'sharon': 4197, 'schwarzenegger': 4198, 'flatfish': 4199, 'stubborn': 4200, 'gummed': 4201, 'diskettes': 4202, 'accidents': 4203, 'shag': 4204, 'presided': 4205, 'airwaves': 4206, 'pearls': 4207, 'ya': 4208, 'lo': 4209, 'ove': 4210, 'naked': 4211, 'portly': 4212, 'criminologist': 4213, 'hyatt': 4214, 'checkmate': 4215, 'gina': 4216, 'advanced': 4217, 'belt': 4218, 'encyclopedia': 4219, 'sunday': 4220, 'narragansett': 4221, 'tribes': 4222, 'rhode': 4223, 'extinct': 4224, 'quarters': 4225, 'fodor': 4226, 'rowan': 4227, 'thine': 4228, 'bic': 4229, 'flame': 4230, 'clockwise': 4231, 'counterclockwise': 4232, 'wealthiest': 4233, 'producers': 4234, 'promoters': 4235, 'kingdom': 4236, 'passing': 4237, 'mourning': 4238, 'rarest': 4239, 'judith': 4240, 'rossner': 4241, 
'diane': 4242, 'keaton': 4243, 'parasites': 4244, 'powhatan': 4245, 'rolfe': 4246, 'pecan': 4247, 'nihilist': 4248, 'feynman': 4249, 'physics': 4250, 'relevant': 4251, '118': 4252, '126': 4253, '134': 4254, '142': 4255, '158': 4256, '167': 4257, '177': 4258, 'founders': 4259, 'ejaculation': 4260, 'disposable': 4261, 'cents': 4262, 'tropical': 4263, 'distributions': 4264, 'oddsmaker': 4265, 'snyder': 4266, 'autoimmune': 4267, 'sheath': 4268, 'nerve': 4269, 'gradual': 4270, 'dorsets': 4271, 'lincolns': 4272, 'oxfords': 4273, 'southdowns': 4274, 'shortest': 4275, 'hajo': 4276, 'plural': 4277, 'yes': 4278, 'copyright': 4279, 'homes': 4280, '280': 4281, 'crew': 4282, 'doctorate': 4283, 'sunnyside': 4284, 'faber': 4285, 'mongol': 4286, 'lucky': 4287, 'sprayed': 4288, 'cook': 4289, 'rawhide': 4290, 'wingspan': 4291, 'condor': 4292, 'typically': 4293, 'hairs': 4294, 'admiral': 4295, 'viceroy': 4296, 'granted': 4297, 'profits': 4298, 'voyage': 4299, 'transport': 4300, '468': 4301, 'marlin': 4302, 'harness': 4303, '193': 4304, 'attendant': 4305, 'malta': 4306, 'blackhawk': 4307, '1832': 4308, 'ante': 4309, 'mortem': 4310, 'shawn': 4311, 'settlement': 4312, '85': 4313, 'sheika': 4314, 'dena': 4315, 'farri': 4316, 'delivered': 4317, 'newscast': 4318, 'knowpost': 4319, 'liked': 4320, 'decade': 4321, 'buzz': 4322, 'permanent': 4323, 'gibson': 4324, 'boards': 4325, 'zorro': 4326, 'tabs': 4327, 'heating': 4328, 'barbara': 4329, 'nominee': 4330, 'gillette': 4331, 'olestra': 4332, 'cows': 4333, 'bow': 4334, 'enlist': 4335, 'mostly': 4336, 'quarterbacks': 4337, 'solve': 4338, 'mustard': 4339, 'warned': 4340, 'antidisestablishmentarianism': 4341, 'topophobic': 4342, 'ybarra': 4343, 'retrievers': 4344, 'tee': 4345, 'masters': 4346, 'puccini': 4347, 'boheme': 4348, 'showed': 4349, 'fondness': 4350, 'munching': 4351, 'pollen': 4352, 'curies': 4353, 'universal': 4354, 'import': 4355, 'melancholy': 4356, 'dane': 4357, 'longtime': 4358, 'yeat': 4359, 'hermits': 4360, 'filthiest': 4361, 'alive': 4362, 'elect': 4363, 'isthmus': 4364, 'dan': 4365, 'aykroyd': 4366, 'injectors': 4367, 'macy': 4368, 'parade': 4369, 'wolfman': 4370, 'touted': 4371, 'nafta': 4372, 'phonograph': 4373, 'tips': 4374, 'fireplace': 4375, 'manatees': 4376, 'certified': 4377, 'nurse': 4378, 'midwife': 4379, 'philip': 4380, 'boris': 4381, 'pasternak': 4382, 'tout': 4383, 'coined': 4384, 'cyberspace': 4385, 'neuromancer': 4386, 'multicolored': 4387, '42': 4388, 'quintillion': 4389, 'potential': 4390, 'combinations': 4391, 'rhomboideus': 4392, 'bjorn': 4393, 'borg': 4394, 'forehand': 4395, 'recruited': 4396, 'winston': 4397, 'finish': 4398, '1926': 4399, 'pokemon': 4400, 'benelux': 4401, 'petrified': 4402, 'fences': 4403, 'neighbors': 4404, 'mandy': 4405, 'costumed': 4406, 'personas': 4407, 'pym': 4408, 'conditions': 4409, 'buses': 4410, 'explorers': 4411, 'goldie': 4412, 'hawn': 4413, 'boyfriend': 4414, 'virtues': 4415, 'casinos': 4416, 'embedded': 4417, 'rev': 4418, 'falwell': 4419, 'crossword': 4420, 'nonaggression': 4421, 'pact': 4422, 'traversed': 4423, 'wwi': 4424, 'doughboys': 4425, 'corporal': 4426, 'cds': 4427, 'garth': 4428, 'vessel': 4429, 'atari': 4430, 'specializing': 4431, 'punctuation': 4432, 'drills': 4433, 'grader': 4434, 'sacred': 4435, 'gland': 4436, 'regenerate': 4437, 'gutenberg': 4438, 'bibles': 4439, 'astronomical': 4440, 'jan': 4441, 'fairground': 4442, 'folk': 4443, 'chapman': 4444, 'median': 4445, 'yousuf': 4446, 'karsh': 4447, 'shiest': 4448, 'godiva': 4449, 'chocolates': 4450, 'sullen': 4451, 'untamed': 4452, 'rainfall': 
4453, 'archy': 4454, 'mehitabel': 4455, 'glowsticks': 4456, 'barroom': 4457, 'judge': 4458, 'pecos': 4459, 'excuse': 4460, 'nato': 4461, 'meyer': 4462, 'wolfsheim': 4463, 'fixed': 4464, 'defensive': 4465, 'diplomacy': 4466, 'khyber': 4467, 'braun': 4468, 'variations': 4469, 'canfield': 4470, 'klondike': 4471, 'chivington': 4472, 'pirate': 4473, 'dialogue': 4474, 'gm': 4475, 'pageant': 4476, 'winslow': 4477, '1872': 4478, 'aim': 4479, '54c': 4480, 'shorn': 4481, 'provide': 4482, 'intake': 4483, '1990s': 4484, 'extant': 4485, 'neoclassical': 4486, 'romanticism': 4487, 'inducted': 4488, 'fats': 4489, 'blobbo': 4490, 'leno': 4491, 'rosemary': 4492, 'labianca': 4493, 'rita': 4494, 'hayworth': 4495, 'prefix': 4496, 'surnames': 4497, 'oxidation': 4498, 'boob': 4499, 'prehistoric': 4500, 'etta': 4501, 'butch': 4502, 'cassidey': 4503, 'sundance': 4504, 'laos': 4505, 'wilbur': 4506, 'reed': 4507, 'breeding': 4508, 'organized': 4509, 'pulaski': 4510, '1866': 4511, 'marion': 4512, 'actual': 4513, 'fourteenth': 4514, 'perfume': 4515, 'rosanne': 4516, 'rosanna': 4517, 'despondent': 4518, 'freddie': 4519, 'prinze': 4520, 'iraqi': 4521, 'referring': 4522, 'bandit': 4523, 'backup': 4524, '1915': 4525, '86': 4526, 'ticklish': 4527, 'canadians': 4528, 'emmigrate': 4529, 'menus': 4530, 'washed': 4531, 'vodka': 4532, 'mpilo': 4533, 'drinker': 4534, 'respirator': 4535, 'batteries': 4536, 'recommend': 4537, 'boil': 4538, 'snowboard': 4539, '200': 4540, 'suzette': 4541, 'assume': 4542, 'sutcliffe': 4543, 'emma': 4544, 'peel': 4545, 'journalism': 4546, 'befell': 4547, 'immaculate': 4548, 'conception': 4549, 'aircraft': 4550, 'carriers': 4551, 'candlemas': 4552, 'ouarterly': 4553, 'doublespeak': 4554, 'inoperative': 4555, 'tufts': 4556, 'pyrotechnic': 4557, 'jobs': 4558, 'pianos': 4559, 'motorcycles': 4560, 'argon': 4561, 'boeing': 4562, '737': 4563, '1930s': 4564, 'uber': 4565, 'cornell': 4566, '175': 4567, 'tons': 4568, 'edmonton': 4569, 'sonny': 4570, 'liston': 4571, 'succeed': 4572, 'belonged': 4573, 'zatanna': 4574, 'dispatched': 4575, 'cruiser': 4576, 'novels': 4577, 'hooters': 4578, 'swim': 4579, 'copyrighted': 4580, 'refuge': 4581, 'preserve': 4582, 'wildlife': 4583, 'wilderness': 4584, 'plc': 4585, 'acres': 4586, 'carrying': 4587, 'barkis': 4588, 'willin': 4589, 'peggy': 4590, 'hyperlink': 4591, 'artemis': 4592, 'watching': 4593, 'grammys': 4594, 'elysium': 4595, 'shalom': 4596, 'pump': 4597, 'organic': 4598, 'dwellers': 4599, 'slane': 4600, 'manhatten': 4601, 'prizes': 4602, 'rid': 4603, 'woodpeckers': 4604, 'syllables': 4605, 'hendecasyllabic': 4606, 'waco': 4607, '1885': 4608, 'mainland': 4609, 'victims': 4610, 'generation': 4611, 'dear': 4612, 'abby': 4613, 'sussex': 4614, 'ottawa': 4615, 'jell': 4616, 'incorporate': 4617, 'panoramic': 4618, 'ones': 4619, 'scene': 4620, 'dish': 4621, 'intestines': 4622, 'guam': 4623, 'executor': 4624, 'inescapable': 4625, 'purveyor': 4626, 'commission': 4627, 'microprocessors': 4628, 'microcontrollers': 4629, 'sought': 4630, 'necklaces': 4631, 'proof': 4632, 'houseplants': 4633, 'metabolize': 4634, 'carcinogens': 4635, 'enola': 4636, 'hen': 4637, 'protestant': 4638, 'supremacy': 4639, 'arab': 4640, 'strap': 4641, 'whatever': 4642, 'catalogues': 4643, 'coastlines': 4644, 'biscay': 4645, 'egyptians': 4646, 'shave': 4647, 'eyebrows': 4648, 'waverly': 4649, 'assign': 4650, 'agents': 4651, 'eduard': 4652, '000th': 4653, 'michagin': 4654, 'eh': 4655, 'approval': 4656, 'funk': 4657, 'lata': 4658, 'pc': 4659, 'hinckley': 4660, 'jodie': 4661, 'foster': 4662, 'liners': 4663, 
'trafalgar': 4664, 'carmania': 4665, 'slime': 4666, 'feathered': 4667, 'yugoslavians': 4668, 'vlaja': 4669, 'gaja': 4670, 'raja': 4671, 'stolen': 4672, 'unusual': 4673, 'mind': 4674, 'anybody': 4675, 'dying': 4676, 'wiener': 4677, 'schnitzel': 4678, 'urals': 4679, 'itch': 4680, 'dunes': 4681, 'older': 4682, 'lacan': 4683, 'riding': 4684, 'budweiser': 4685, 'fischer': 4686, 'exterminate': 4687, 'gaelic': 4688, 'transcript': 4689, 'quart': 4690, 'filling': 4691, 'fascinated': 4692, 'experimenting': 4693, 'neurasthenia': 4694, 'terrible': 4695, 'typhoid': 4696, 'jay': 4697, 'kay': 4698, '9th': 4699, 'symphony': 4700, 'explosive': 4701, 'charcoal': 4702, 'sulfur': 4703, 'mountainous': 4704, 'lhasa': 4705, 'apso': 4706, 'karenna': 4707, 'teenage': 4708, 'olives': 4709, 'rossetti': 4710, 'beata': 4711, 'beatrix': 4712, 'wiz': 4713, 'touring': 4714, 'modestly': 4715, 'stove': 4716, 'reputation': 4717, 'stealing': 4718, 'jokes': 4719, 'cyclone': 4720, 'amendements': 4721, 'passed': 4722, 'painful': 4723, 'sleet': 4724, 'freezing': 4725, 'econoline': 4726, 'f25': 4727, 'v1': 4728, 'ian': 4729, 'fleming': 4730, 'm3': 4731, 'routinely': 4732, 'dem': 4733, 'bums': 4734, 'frustrated': 4735, 'depended': 4736, 'turning': 4737, 'gaza': 4738, 'jericho': 4739, 'methodist': 4740, 'retrograde': 4741, 'breweries': 4742, 'meta': 4743, 'appropriately': 4744, 'masterson': 4745, 'cwt': 4746, 'tenants': 4747, 'adjoining': 4748, 'cabinets': 4749, 'raging': 4750, 'advised': 4751, 'listeners': 4752, 'chevrolet': 4753, 'ctbt': 4754, 'waste': 4755, 'dairy': 4756, 'laureate': 4757, 'expelled': 4758, 'timor': 4759, 'carried': 4760, 'multiple': 4761, 'births': 4762, 'ge': 4763, 'ejaculate': 4764, 'pandoro': 4765, 'throne': 4766, 'abdication': 4767, 'anka': 4768, 'install': 4769, 'tile': 4770, 'quickest': 4771, 'nail': 4772, 'zipper': 4773, 'industrialized': 4774, 'anne': 4775, 'boleyn': 4776, 'stat': 4777, 'quickly': 4778, 'thurgood': 4779, 'useful': 4780, 'battleship': 4781, 'upstaged': 4782, 'amicable': 4783, 'publisher': 4784, 'molly': 4785, 'skim': 4786, 'decided': 4787, 'sprocket': 4788, 'sponsor': 4789, 'czech': 4790, 'algiers': 4791, 'seawater': 4792, 'finnish': 4793, 'caucasian': 4794, 'stratocaster': 4795, 'sculptress': 4796, 'lightest': 4797, 'twirl': 4798, 'giant': 4799, 'masquerade': 4800, 'bikini': 4801, 'bathing': 4802, 'rubens': 4803, 'dyck': 4804, 'bruegel': 4805, 'citizens': 4806, 'examples': 4807, 'tex': 4808, 'arriving': 4809, 'mgm': 4810, 'shirtwaist': 4811, 'spiritual': 4812, 'ocho': 4813, 'rios': 4814, '32': 4815, 'mauritania': 4816, 'cookers': 4817, 'jury': 4818, 'computers': 4819, 'impact': 4820, 'salvador': 4821, 'dali': 4822, 'irate': 4823, 'oxide': 4824, 'gallery': 4825, 'viagra': 4826, 'monarchs': 4827, 'crowned': 4828, 'bellworts': 4829, 'darius': 4830, 'anna': 4831, 'anderson': 4832, 'czar': 4833, 'dita': 4834, 'beard': 4835, 'coot': 4836, 'turks': 4837, 'libya': 4838, 'fray': 4839, 'bentos': 4840, 'magoo': 4841, 'flog': 4842, 'inspiration': 4843, 'schoolteacher': 4844, 'poets': 4845, 'positions': 4846, 'succession': 4847, 'flights': 4848, '165': 4849, 'gunpowder': 4850, 'burkina': 4851, 'faso': 4852, 'stripped': 4853, 'barred': 4854, 'blondes': 4855, 'kelly': 4856, 'phalanx': 4857, 'mustachioed': 4858, 'frankie': 4859, 'utilities': 4860, 'airman': 4861, 'goering': 4862, 'storms': 4863, 'nicolo': 4864, 'paganini': 4865, 'sheila': 4866, 'burnford': 4867, 'hammer': 4868, 'believed': 4869, 'quotation': 4870, 'together': 4871, 'elevators': 4872, 'infant': 4873, 'seal': 4874, 'respones': 4875, 
'goodnight': 4876, 'mankiewicz': 4877, 'electronics': 4878, 'bulge': 4879, 'grandeur': 4880, 'destination': 4881, '1830': 4882, 'iraqis': 4883, 'adorns': 4884, 'rwanda': 4885, 'vb': 4886, 'pos': 4887, 'darwin': 4888, 'olympia': 4889, 'overlook': 4890, 'chronicled': 4891, 'katy': 4892, 'holstrum': 4893, 'glen': 4894, 'morley': 4895, 'farthings': 4896, 'abundant': 4897, 'furth': 4898, 'rednitz': 4899, 'pegnitz': 4900, 'converge': 4901, 'dixville': 4902, 'notch': 4903, 'cisalpine': 4904, 'shopping': 4905, 'protagonist': 4906, 'dostoevski': 4907, 'idiot': 4908, 'penalty': 4909, 'dismissed': 4910, 'burglary': 4911, 'meerkat': 4912, 'nicolet': 4913, 'authors': 4914, 'memory': 4915, 'midwest': 4916, 'slang': 4917, 'darn': 4918, 'tootin': 4919, 'cupboard': 4920, 'bare': 4921, 'cleaner': 4922, 'attends': 4923, 'pencey': 4924, 'prep': 4925, 'antonio': 4926, 'ducats': 4927, 'melts': 4928, 'prescription': 4929, 'voyager': 4930, 'jim': 4931, 'bohannon': 4932, 'commanders': 4933, 'alamein': 4934, 'replies': 4935, 'leia': 4936, 'confession': 4937, 'rockettes': 4938, 'elroy': 4939, 'hirsch': 4940, 'oh': 4941, 'ermal': 4942, 'seashell': 4943, 'haunted': 4944, 'bigger': 4945, 'thighs': 4946, 'cheetahs': 4947, '45mhz': 4948, 'puzzle': 4949, 'alexandre': 4950, 'dumas': 4951, 'mystical': 4952, 'ravens': 4953, 'odin': 4954, 'virus': 4955, 'hesse': 4956, 'fe': 4957, 'shock': 4958, 'dipsomaniac': 4959, 'crave': 4960, 'contributions': 4961, 'personal': 4962, 'braille': 4963, 'isle': 4964, 'pinatubo': 4965, 'officially': 4966, 'garfield': 4967, 'delegate': 4968, 'processing': 4969, 'lagos': 4970, 'greenland': 4971, '985': 4972, 'distinguishing': 4973, 'cystic': 4974, 'equivalence': 4975, 'philatelist': 4976, 'wheatfield': 4977, 'crows': 4978, 'uol': 4979, 'dimensions': 4980, 'goal': 4981, 'signs': 4982, 'recession': 4983, 'circumorbital': 4984, 'hematoma': 4985, 'governed': 4986, 'ouagadougou': 4987, 'sunflowers': 4988, 'frequent': 4989, 'enemies': 4990, 'translation': 4991, 'handicraft': 4992, 'requires': 4993, 'interlace': 4994, 'warp': 4995, 'weft': 4996, 'sunlight': 4997, 'milliseconds': 4998, 'cherokee': 4999, 'camcorders': 5000, 'conceiving': 5001, 'cody': 5002, 'biceps': 5003, 'tender': 5004, 'resignation': 5005, 'thucydides': 5006, 'boulevard': 5007, 'tonsils': 5008, 'fluorine': 5009, 'magnesium': 5010, 'mayans': 5011, 'balloon': 5012, 'posh': 5013, 'anyone': 5014, 'loomis': 5015, 'shillings': 5016, 'tarantula': 5017, 'secondary': 5018, 'sabrina': 5019, 'conductor': 5020, 'pops': 5021, 'fiedler': 5022, 'hebephrenia': 5023, 'terrorized': 5024, 'stalker': 5025, 'christina': 5026, 'peanuts': 5027, 'guitarist': 5028, 'camels': 5029, 'humps': 5030, 'hospital': 5031, 'orthopedics': 5032, 'sense': 5033, 'betrayed': 5034, 'doodyville': 5035, 'orphans': 5036, 'fund': 5037, 'bourdon': 5038, 'wisconsin': 5039, 'badgers': 5040, 'picts': 5041, 'caroll': 5042, 'baker': 5043, 'grimes': 5044, 'debbie': 5045, 'reynolds': 5046, 'noir': 5047, 'swampy': 5048, 'diplomatic': 5049, 'dressed': 5050, 'affair': 5051, 'winters': 5052, 'seeking': 5053, 'missile': 5054, 'sidewinder': 5055, 'sheboygan': 5056, 'firemen': 5057, 'nantucket': 5058, 'shipwreck': 5059, 'divers': 5060, 'exploring': 5061, '52': 5062, 'stories': 5063, 'contained': 5064, 'wharton': 5065, 'problem': 5066, '1997': 5067, 'constipation': 5068, 'symptom': 5069, 'viewing': 5070, 'ursula': 5071, 'andress': 5072, 'honeymooners': 5073, 'promising': 5074, 'submarines': 5075, 'mortgage': 5076, 'lifter': 5077, 'll': 5078, 'associaton': 5079, 'havlicek': 5080, '46': 5081, 
'227': 5082, 'happy': 5083, 'donor': 5084, 'thor': 5085, 'canon': 5086, 'eels': 5087, 'madding': 5088, 'crowd': 5089, 'graphic': 5090, 'recessed': 5091, 'filter': 5092, 'appendix': 5093, 'heir': 5094, 'raising': 5095, 'wreckage': 5096, 'andrea': 5097, 'doria': 5098, 'leprosy': 5099, 'chartered': 5100, 'vermont': 5101, 'shoreline': 5102, 'cartesian': 5103, 'diver': 5104, 'libraries': 5105, 'document': 5106, 'copy': 5107, 'burial': 5108, 'remembered': 5109, 'blaise': 5110, 'obtained': 5111, 'dams': 5112, 'metropolis': 5113, 'anorexia': 5114, 'tyler': 5115, 'tadeus': 5116, 'wladyslaw': 5117, 'konopka': 5118, 'cactus': 5119, 'sky': 5120, 'overalls': 5121, 'dungri': 5122, 'suburb': 5123, 'rainstorm': 5124, 'permanently': 5125, 'connection': 5126, 'krypton': 5127, 'daxam': 5128, 'blackjack': 5129, 'reaches': 5130, 'yemen': 5131, 'reunified': 5132, 'skrunch': 5133, 'oompas': 5134, 'organs': 5135, 'necrosis': 5136, 'magnetar': 5137, 'feud': 5138, '1891': 5139, 'stripe': 5140, 'coho': 5141, 'salmon': 5142, 'lucia': 5143, 'primitives': 5144, 'rural': 5145, 'serial': 5146, 'yell': 5147, 'hail': 5148, 'taxi': 5149, 'apartheid': 5150, 'yale': 5151, 'lock': 5152, 'loosely': 5153, 'aztec': 5154, 'populous': 5155, '576': 5156, 'knute': 5157, 'rockne': 5158, 'aldous': 5159, 'huxley': 5160, 'risk': 5161, 'venture': 5162, 'devo': 5163, 'hiccup': 5164, 'warfare': 5165, 'bombshell': 5166, 'descendents': 5167, 'tutankhamun': 5168, 'exhibit': 5169, 'transported': 5170, 'involves': 5171, 'martian': 5172, 'attire': 5173, 'pothooks': 5174, 'seriously': 5175, 'toulmin': 5176, 'logic': 5177, 'corgi': 5178, 'dangling': 5179, 'participle': 5180, 'stores': 5181, 'nightclubs': 5182, 'conceived': 5183, 'flush': 5184, 'nathan': 5185, 'hamill': 5186, 'prequel': 5187, 'disks': 5188, 'departments': 5189, 'woo': 5190, 'wu': 5191, 'dialect': 5192, 'neither': 5193, 'borrower': 5194, 'nor': 5195, 'lender': 5196, 'moxie': 5197, 'spade': 5198, 'patti': 5199, 'gestapo': 5200, 'exclusive': 5201, 'copacabana': 5202, 'ipanema': 5203, 'bavaria': 5204, 'guernsey': 5205, 'sark': 5206, 'herm': 5207, 'hence': 5208, 'overcome': 5209, 'plea': 5210, 'destructor': 5211, 'destroying': 5212, 'creativity': 5213, 'midsummer': 5214, 'ed': 5215, 'allegedly': 5216, 'obscene': 5217, 'gesture': 5218, 'allan': 5219, 'minded': 5220, 'casper': 5221, 'girlfriend': 5222, 'devoured': 5223, 'mob': 5224, 'starving': 5225, 'potsdam': 5226, 'contagious': 5227, 'recruits': 5228, 'nitrates': 5229, 'environment': 5230, 'nitrox': 5231, 'diving': 5232, 'unsuccessful': 5233, 'overthrow': 5234, 'bavarian': 5235, 'earthworms': 5236, 'pasture': 5237, 'stonehenge': 5238, 'hourly': 5239, 'workers': 5240, 'snatches': 5241, 'jerks': 5242, 'aeul': 5243, 'laid': 5244, 'relax': 5245, 'culture': 5246, 'jimi': 5247, 'hendrix': 5248, 'castor': 5249, 'pollux': 5250, 'reflections': 5251, 'milligrams': 5252, 'gram': 5253, 'stringed': 5254, 'fires': 5255, 'bolt': 5256, 'hepcats': 5257, 'angelica': 5258, 'wick': 5259, 'musician': 5260, 'prophecies': 5261, 'witches': 5262, 'macbeth': 5263, 'dominos': 5264, 'contraceptives': 5265, 'jeremy': 5266, 'piven': 5267, 'yo': 5268, 'yos': 5269, 'fossilizes': 5270, 'coprolite': 5271, 'mayo': 5272, 'clinic': 5273, 'amsterdam': 5274, 'knows': 5275, 'sewer': 5276, 'commissioner': 5277, 'provo': 5278, 'pasta': 5279, 'coulee': 5280, 'psorisis': 5281, 'disappear': 5282, '4th': 5283, 'bermuda': 5284, 'lust': 5285, 'constitutes': 5286, 'reliable': 5287, 'download': 5288, 'heretic': 5289, 'required': 5290, '1879': 5291, '1880': 5292, '1881': 5293, 
'amaretto': 5294, 'biscuits': 5295, 'ailment': 5296, 'vernal': 5297, 'equinox': 5298, 'protect': 5299, 'innocent': 5300, '1873': 5301, 'massage': 5302, 'cracker': 5303, 'manifest': 5304, 'latent': 5305, 'theories': 5306, 'styron': 5307, 'nitrogen': 5308, 'fade': 5309, 'employed': 5310, '72': 5311, 'baseemen': 5312, 'snowballs': 5313, 'rodder': 5314, 'tyrannosaurus': 5315, 'ozymandias': 5316, 'kenyan': 5317, 'safari': 5318, 'le': 5319, 'carre': 5320, 'echidna': 5321, 'adjournment': 5322, '25th': 5323, 'session': 5324, 'assembly': 5325, 'rifle': 5326, 'folklore': 5327, 'laundry': 5328, 'detergent': 5329, 'manche': 5330, 'easy': 5331, 'onassis': 5332, 'yacht': 5333, 'dirty': 5334, 'tumbled': 5335, 'marble': 5336, 'user': 5337, 'satisfaction': 5338, 'athens': 5339, 'bocci': 5340, 'void': 5341, 'pulse': 5342, 'easily': 5343, 'shirts': 5344, 'mla': 5345, 'bibliographies': 5346, 'imam': 5347, 'hussain': 5348, 'shia': 5349, 'barr': 5350, 'arabic': 5351, 'policeman': 5352, 'digitalis': 5353, 'flintknapping': 5354, 'canker': 5355, 'sores': 5356, 'castellated': 5357, 'kremlin': 5358, 'bureaucracy': 5359, 'gambler': 5360, 'consider': 5361, 'blunder': 5362, 'trinitrotoluene': 5363, 'absolute': 5364, 'mammals': 5365, 'alpha': 5366, 'theta': 5367, 'freidreich': 5368, 'wilhelm': 5369, 'ludwig': 5370, 'leichhardt': 5371, 'prussian': 5372, 'unfamiliar': 5373, 'spokespeople': 5374, 'katharine': 5375, 'fcc': 5376, 'newton': 5377, 'minow': 5378, 'miniature': 5379, 'landlocked': 5380, 'tiffany': 5381, 'magnets': 5382, 'attract': 5383, 'diana': 5384, 'determines': 5385, 'sunk': 5386, 'havana': 5387, 'bunch': 5388, 'emblazoned': 5389, 'jolly': 5390, 'alveoli': 5391, 'konigsberg': 5392, 'leos': 5393, 'herbert': 5394, 'neal': 5395, 'martini': 5396, 'bernard': 5397, 'crack': 5398, '1835': 5399, 'tolling': 5400, 'fiddlers': 5401, 'cole': 5402, 'promises': 5403, 'ethylene': 5404, 'irwin': 5405, 'widmark': 5406, 'butt': 5407, 'kicked': 5408, 'mess': 5409, 'vw': 5410, 'changes': 5411, 'pallbearer': 5412, 'pub': 5413, 'bathroom': 5414, '192': 5415, 'baja': 5416, 'mar': 5417, 'shallow': 5418, 'deadrise': 5419, 'omni': 5420, 'ultimate': 5421, 'unanswerable': 5422, 'cookbook': 5423, '198': 5424, 'liz': 5425, 'chandler': 5426, 'guiteau': 5427, 'snakes': 5428, 'superbowls': 5429, 'ers': 5430, 'gandy': 5431, 'dancer': 5432, 'biz': 5433, 'belmont': 5434, 'stakes': 5435, 'iowa': 5436, 'loco': 5437, 'rathaus': 5438, 'canning': 5439, 'summit': 5440, 'pi': 5441, 'bachelor': 5442, 'bedroom': 5443, 'cobol': 5444, 'fortran': 5445, 'supergirl': 5446, 'sic': 5447, 'woodward': 5448, 'brethren': 5449, 'surfboard': 5450, 'neighborhood': 5451, 'refusing': 5452, 'bus': 5453, 'paying': 5454, 'roulette': 5455, 'corps': 5456, 'ingmar': 5457, 'bergman': 5458, 'deltiologist': 5459, 'hans': 5460, 'henderson': 5461, 'walking': 5462, 'hog': 5463, 'shadow': 5464, 'packers': 5465, 'philosophized': 5466, 'stokes': 5467, 'lawn': 5468, 'challenge': 5469, 'nylon': 5470, 'stockings': 5471, 'expedition': 5472, 'climbing': 5473, 'hostage': 5474, 'taking': 5475, 'wart': 5476, 'spaceball': 5477, 'rabies': 5478, 'revive': 5479, 'lying': 5480, 'preface': 5481, 'foreword': 5482, 'fingernails': 5483, '123': 5484, 'calcutta': 5485, 'sterilize': 5486, 'eclairs': 5487, 'businessman': 5488, 'humor': 5489, 'enigmatic': 5490, 'acquitted': 5491, 'treason': 5492, 'jurassic': 5493, 'chemiosmotic': 5494, 'cromwell': 5495, 'ion': 5496, 'trace': 5497, 'roots': 5498, 'diet': 5499, 'courier': 5500, 'creams': 5501, 'seaweed': 5502, 'dumb': 5503, 'loveable': 5504, 'gosfield': 
5505, 'phil': 5506, 'silvers': 5507, 'usb': 5508, 'cayman': 5509, 'vegetation': 5510, 'lol': 5511, 'amelia': 5512, 'earhart': 5513, 'disappeared': 5514, 'holden': 5515, 'caulfield': 5516, 'ace': 5517, 'manfred': 5518, 'richthofen': 5519, 'invaded': 5520, 'petroleum': 5521, 'asthma': 5522, 'biloxi': 5523, 'esperanto': 5524, 'nouns': 5525, 'heuristic': 5526, 'ostriches': 5527, 'blackhawks': 5528, 'maintain': 5529, 'freed': 5530, 'slaves': 5531, 'porter': 5532, 'gift': 5533, 'magi': 5534, 'waugh': 5535, 'dust': 5536, 'linus': 5537, 'cheery': 5538, 'fellow': 5539, '9971': 5540, 'yoo': 5541, 'hoo': 5542, 'settle': 5543, 'nearest': 5544, 'inoco': 5545, 'assassinations': 5546, '1865': 5547, 'brake': 5548, 'touchdowns': 5549, 'bullets': 5550, 'bebrenia': 5551, 'amazonis': 5552, 'natick': 5553, 'gimli': 5554, 'sparkling': 5555, '33': 5556, 'christi': 5557, 'caber': 5558, 'tossing': 5559, 'houston': 5560, 'oilers': 5561, 'oakland': 5562, 'raiders': 5563, 'stockyards': 5564, 'weird': 5565, 'knife': 5566, 'kinks': 5567, 'madre': 5568, 'lapwarmers': 5569, 'bovine': 5570, 'weakest': 5571, 'garrett': 5572, 'morgan': 5573, 'dipper': 5574, 'ishmael': 5575, 'flea': 5576, 'antilles': 5577, 'comparisons': 5578, 'prices': 5579, 'multimedia': 5580, 'cloth': 5581, 'camptown': 5582, 'racetrack': 5583, 'blob': 5584, 'herculoids': 5585, 'ninety': 5586, 'theses': 5587, 'inkhorn': 5588, 'pocahontas': 5589, 'bloodhound': 5590, 'plymouth': 5591, 'bunyan': 5592, 'ox': 5593, 'dartmouth': 5594, 'environmental': 5595, 'influences': 5596, 'backstreet': 5597, 'murdering': 5598, 'budapest': 5599, 'belgrade': 5600, 'wreaked': 5601, 'marching': 5602, 'conservationist': 5603, 'spokesperson': 5604, 'grape': 5605, 'carpal': 5606, 'acidic': 5607, 'redskin': 5608, 'fan': 5609, 'belly': 5610, 'buttons': 5611, 'suburban': 5612, 'feminine': 5613, 'mystique': 5614, 'ukrainians': 5615, 'perry': 5616, '600': 5617, '387': 5618, 'airplanes': 5619, 'hound': 5620, 'daws': 5621, 'palindromic': 5622, 'sailing': 5623, 'twain': 5624, 'ethnological': 5625, 'belle': 5626, 'beast': 5627, 'microscope': 5628, 'remains': 5629, 'tap': 5630, 'grandma': 5631, 'shoplifts': 5632, 'socratic': 5633, 'hypertext': 5634, 'peller': 5635, 'wendy': 5636, 'beef': 5637, 'blatty': 5638, 'recounts': 5639, 'regan': 5640, 'macneil': 5641, 'devil': 5642, 'lions': 5643, 'pomegranate': 5644, 'magee': 5645, 'calypso': 5646, 'basilica': 5647, 'advantages': 5648, 'selecting': 5649, 'bernadette': 5650, 'peters': 5651, 'reviews': 5652, 'turbulent': 5653, 'souls': 5654, 'gothic': 5655, 'alleged': 5656, 'shroud': 5657, 'turin': 5658, 'salk': 5659, 'martialled': 5660, 'criticizing': 5661, 'insanity': 5662, 'convert': 5663, 'enrolled': 5664, 'bands': 5665, 'instruments': 5666, 'koresh': 5667, 'langston': 5668, 'achievements': 5669, 'naacp': 5670, 'grinch': 5671, 'gompers': 5672, 'rayburn': 5673, 'pita': 5674, 'peacocks': 5675, 'mate': 5676, 'obote': 5677, 'niagara': 5678, 'crewel': 5679, 'narcolepsy': 5680, '1896': 5681, 'alda': 5682, 'smithsonian': 5683, 'mixture': 5684, 'sondheim': 5685, 'ballad': 5686, 'maybe': 5687, 'kite': 5688, 'pere': 5689, 'lachaise': 5690, 'cemetery': 5691, 'occurred': 5692, 'marilyn': 5693, 'monroe': 5694, 'skunks': 5695, 'medina': 5696, 'zoological': 5697, 'ruminant': 5698, 'hyperopia': 5699, 'assigned': 5700, 'longer': 5701, 'aladdin': 5702, 'tzimisce': 5703, "'the": 5704, 'boycott': 5705, 'funeral': 5706, 'springfield': 5707, 'merrick': 5708, 'ogre': 5709, 'urged': 5710, 'outstanding': 5711, 'dynasty': 5712, 'remote': 5713, 'hurt': 5714, 'hurting': 
5715, 'ultraviolet': 5716, 'lizzie': 5717, 'borden': 5718, 'polis': 5719, 'minneapolis': 5720, 'detailed': 5721, 'manchukuo': 5722, 'learned': 5723, 'saxophone': 5724, 'gum': 5725, 'pelvic': 5726, 'carson': 5727, '7847': 5728, '5943': 5729, 'preservation': 5730, 'favoured': 5731, 'struggle': 5732, 'chickenpoxs': 5733, 'attorneys': 5734, 'sheri': 5735, 'primate': 5736, 'pigment': 5737, 'palms': 5738, 'scalene': 5739, 'bearer': 5740, 'wants': 5741, 'gets': 5742, 'chilly': 5743, 'respond': 5744, 'millenium': 5745, 'hypnotherapy': 5746, 'rcd': 5747, 'pursued': 5748, 'tweety': 5749, 'pie': 5750, 'internal': 5751, 'combustion': 5752, 'biorhythm': 5753, 'portrait': 5754, 'grilled': 5755, 'bacon': 5756, 'brunettes': 5757, 'conservancy': 5758, 'sung': 5759, 'pajamas': 5760, 'transplants': 5761, 'wee': 5762, 'winkie': 5763, 'philippine': 5764, 'ex': 5765, 'prostitute': 5766, 'pimp': 5767, 'fighting': 5768, 'cinzano': 5769, 'fiesta': 5770, 'honors': 5771, '1996': 5772, 'vocal': 5773, 'sampling': 5774, 'windmills': 5775, 'hong': 5776, 'slum': 5777, 'badge': 5778, 'courage': 5779, 'kodak': 5780, 'inuits': 5781, 'trigonometry': 5782, 'compaq': 5783, 'trading': 5784, 'lap': 5785, 'sit': 5786, 'traffic': 5787, 'cone': 5788, 'jett': 5789, 'warhol': 5790, 'visine': 5791, 'cozumel': 5792, 'teenagers': 5793, 'sixties': 5794, 'granary': 5795, 'arsenal': 5796, 'mint': 5797, 'telegraph': 5798, 'whorehouse': 5799, 'fallen': 5800, 'haunt': 5801, 'roommates': 5802, 'saved': 5803, 'tanker': 5804, 'snowiest': 5805, 'crabgrass': 5806, 'mancha': 5807, 'hawking': 5808, 'kindergarden': 5809, 'optical': 5810, 'clause': 5811, 'altered': 5812, 'amended': 5813, 'guess': 5814, 'anus': 5815, 'rectum': 5816, 'jpeg': 5817, 'bitmap': 5818, 'franz': 5819, 'battlefield': 5820, 'wheat': 5821, 'compass': 5822, 'counties': 5823, 'indiana': 5824, 'folies': 5825, 'bergeres': 5826, 'aesop': 5827, 'fable': 5828, 'swift': 5829, 'steady': 5830, 'bound': 5831, 'venetian': 5832, 'venice': 5833, 'treated': 5834, 'protection': 5835, 'limited': 5836, 'partnership': 5837, 'roses': 5838, 'embracing': 5839, 'napoleonic': 5840, 'critical': 5841, 'consisting': 5842, 'corners': 5843, 'spritsail': 5844, 'baghdad': 5845, 'multiplexer': 5846, 'centurion': 5847, 'poconos': 5848, 'nike': 5849, 'powered': 5850, 'norwegian': 5851, 'southernmost': 5852, 'brian': 5853, 'boru': 5854, '11th': 5855, 'asleep': 5856, 'norman': 5857, 'poems': 5858, 'fools': 5859, 'docklands': 5860, 'wonderbra': 5861, 'proverb': 5862, 'stitch': 5863, 'saves': 5864, 'thursday': 5865, 'telephones': 5866, 'vichyssoise': 5867, 'manson': 5868, 'rams': 5869, 'grab': 5870, 'gusto': 5871, 'portraits': 5872, 'expect': 5873, 'monthly': 5874, 'publication': 5875, 'bigfoot': 5876, 'collins': 5877, 'punishment': 5878, 'mailman': 5879, 'beasley': 5880, 'provided': 5881, 'listen': 5882, 'incubate': 5883, 'parts': 5884, 'surgeon': 5885, 'performed': 5886, 'haifa': 5887, 'yogurt': 5888, 'benedict': 5889, 'agricultural': 5890, 'electronic': 5891, 'visual': 5892, 'displays': 5893, 'corresponding': 5894, 'signals': 5895, 'goldenseal': 5896, 'composition': 5897, 'rodeo': 5898, 'iris': 5899, 'tetrinet': 5900, 'marvelous': 5901, 'spokesman': 5902, 'chiricahua': 5903, 'beryl': 5904, 'romania': 5905, 'lcd': 5906, 'amezaiku': 5907, 'brenner': 5908, '64': 5909, 'predominant': 5910, 'assisi': 5911, 'megawatts': 5912, 'consortium': 5913, 'chancery': 5914, 'sexiest': 5915, 'photograph': 5916, 'quirk': 5917, 'germanic': 5918, 'hungry': 5919, 'kisser': 5920, 'beating': 5921, 'conjugations': 5922, 'woke': 5923, 
"'etat": 5924, 'photographs': 5925, 'calhoun': 5926, 'acting': 5927, 'lunt': 5928, 'fontanne': 5929, 'exercises': 5930, 'juices': 5931, 'principal': 5932, 'poodle': 5933, 'zionism': 5934, 'bills': 5935, 'backgammon': 5936, 'volley': 5937, 'pulp': 5938, 'rabbits': 5939, 'swastika': 5940, 'stood': 5941, 'goosebumps': 5942, 'emotional': 5943, 'aroused': 5944, 'frames': 5945, 'theo': 5946, 'rousseau': 5947, 'fontaine': 5948, 'yield': 5949, 'maturity': 5950, 'bonds': 5951, 'surpassing': 5952, 'crypt': 5953, 'beneath': 5954, 'rotunda': 5955, 'superbowl': 5956, 'cribbage': 5957, 'prisoner': 5958, 'refugee': 5959, 'tsetse': 5960, 'agreement': 5961, 'serves': 5962, 'kane': 5963, 'troilism': 5964, 'flyer': 5965, 'mistakenly': 5966, 'html': 5967, 'identify': 5968, 'migrates': 5969, 'everybody': 5970, 'hocks': 5971, 'flogged': 5972, 'inga': 5973, 'nielsen': 5974, 'lund': 5975, 'lifelong': 5976, 'funnel': 5977, 'spouting': 5978, 'kissed': 5979, 'pushes': 5980, 'executive': 5981, 'cranes': 5982, 'finally': 5983, 'imprisoned': 5984, '1931': 5985, 'possum': 5986, 'cholera': 5987, 'firehole': 5988, 'base': 5989, 'indicator': 5990, 'eckley': 5991, 'stairway': 5992, 'curl': 5993, 'pushed': 5994, 'coupled': 5995, 'hump': 5996, 'hosted': 5997, 'breony': 5998, 'sardonyx': 5999, 'wallbanger': 6000, 'beholder': 6001, 'oldtime': 6002, 'guide': 6003, 'jeff': 6004, 'greenfield': 6005, 'subversive': 6006, 'collier': 6007, 'saudi': 6008, 'arabia': 6009, 'lingo': 6010, 'cambodia': 6011, 'profit': 6012, '836': 6013, 'vamp': 6014, 'portrays': 6015, 'joad': 6016, 'dustbowl': 6017, 'eli': 6018, 'lilly': 6019, 'servers': 6020, 'oilseeds': 6021, 'thru': 6022, 'bulbs': 6023, 'jar': 6024, 'mayan': 6025, 'warmup': 6026, 'pitches': 6027, 'reliever': 6028, 'dylan': 6029, 'livestock': 6030, 'creating': 6031, 'scandal': 6032, 'daring': 6033, 'gown': 6034, 'wassermann': 6035, 'specific': 6036, 'ornaments': 6037, 'communications': 6038, 'yachts': 6039, 'fig': 6040, 'newtons': 6041, 'premier': 6042, 'cigar': 6043, 'chewing': 6044, 'observed': 6045, 'feel': 6046, 'stickers': 6047, 'cisco': 6048, 'packages': 6049, 'vichy': 6050, 'kidnaping': 6051, 'termed': 6052, 'crime': 6053, '1922': 6054, 'buxom': 6055, 'blonde': 6056, 'recruitment': 6057, 'donation': 6058, 'entail': 6059, 'blythe': 6060, 'rises': 6061, 'garment': 6062, 'bradley': 6063, 'voorhees': 6064, 'barrier': 6065, 'destroyed': 6066, 'occam': 6067, 'grimace': 6068, 'mccheese': 6069, 'appalachian': 6070, 'fruits': 6071, 'survival': 6072, 'clitoridectomy': 6073, 'tampa': 6074, 'surge': 6075, 'farther': 6076, 'opposed': 6077, 'further': 6078, 'alternate': 6079, 'ran': 6080, 'nickel': 6081, 'cadmium': 6082, 'rechargeable': 6083, 'recharged': 6084, 'seats': 6085, 'batmobile': 6086, 'rummy': 6087, 'phillip': 6088, 'kramer': 6089, 'erica': 6090, 'jong': 6091, 'isadora': 6092, 'wing': 6093, 'thai': 6094, 'tournaments': 6095, 'prevailing': 6096, 'winds': 6097, 'metamorphosis': 6098, 'awakes': 6099, 'translate': 6100, 'mia': 6101, 'farrow': 6102, 'svga': 6103, 'adapter': 6104, 'g7': 6105, 'walked': 6106, 'vocalist': 6107, 'hansel': 6108, 'gretel': 6109, 'pain': 6110, 'canonize': 6111, 'nonconsecutive': 6112, 'tornados': 6113, 'lot': 6114, 'carolingian': 6115, 'merrie': 6116, 'melodies': 6117, 'reports': 6118, 'emperors': 6119, 'cabarnet': 6120, 'sauvignon': 6121, 'frosted': 6122, 'flakes': 6123, 'brief': 6124, 'conquered': 6125, 'spock': 6126, 'newspapers': 6127, 'dispose': 6128, 'garbage': 6129, 'prosecutor': 6130, 'later': 6131, 'screens': 6132, 'magnet': 6133, 'nina': 6134, 'theatre': 
6135, 'burn': 6136, '1954': 6137, 'sed': 6138, 'nomadic': 6139, 'gathering': 6140, 'caine': 6141, 'flab': 6142, 'chin': 6143, 'rhyme': 6144, 'needs': 6145, 'freedy': 6146, 'johnston': 6147, 'gametophytic': 6148, 'tissue': 6149, 'catsup': 6150, 'conifer': 6151, 'perfectly': 6152, 'textiles': 6153, 'ambassadorial': 6154, 'shays': 6155, 'rebellion': 6156, '1787': 6157, 'chromatology': 6158, 'edge': 6159, 'aclu': 6160, 'albums': 6161, 'goldfish': 6162, 'dimly': 6163, 'lit': 6164, 'prix': 6165, 'driving': 6166, 'straight': 6167, 'lesson': 6168, 'teaching': 6169, 'metric': 6170, 'kythnos': 6171, 'siphnos': 6172, 'seriphos': 6173, 'mykonos': 6174, 'skater': 6175, 'lines': 6176, 'footballs': 6177, 'savings': 6178, 'mature': 6179, 'abigail': 6180, 'arcane': 6181, 'villainous': 6182, 'opponent': 6183, 'swamp': 6184, 'harmful': 6185, 'spray': 6186, 'kenya': 6187, 'bernstein': 6188, 'fermont': 6189, 'theorem': 6190, 'tim': 6191, 'heliologist': 6192, 'prevents': 6193, 'eczema': 6194, 'seborrhea': 6195, 'psoriasis': 6196, 'antichrist': 6197, 'exclusively': 6198, 'residence': 6199, 'teats': 6200, 'kilamanjaro': 6201, 'crocodile': 6202, 'swallow': 6203, 'mushroom': 6204, 'deployed': 6205, 'microwaves': 6206, 'bullfighting': 6207, 'article': 6208, 'estimated': 6209, 'whitetail': 6210, 'farmer': 6211, 'almanac': 6212, 'assent': 6213, 'emblem': 6214, 'dartboard': 6215, 'dramatized': 6216, 'offered': 6217, 'aquatic': 6218, 'scenes': 6219, 'springs': 6220, 'brimstone': 6221, 'monk': 6222, 'burnt': 6223, 'stake': 6224, 'audio': 6225, 'afs': 6226, 'quetzalcoatl': 6227, 'sparkles': 6228, 'circulatory': 6229, 'bagdad': 6230, 'bubble': 6231, 'wrap': 6232, 'java': 6233, 'squats': 6234, 'doubleheader': 6235, 'rhymes': 6236, 'solomon': 6237, 'health': 6238, 'nutrition': 6239, 'sebastian': 6240, 'yiddish': 6241, 'theater': 6242, 'stethoscope': 6243, 'mathematical': 6244, 'millionth': 6245, 'nbc': 6246, 'congressional': 6247, 'delegation': 6248, 'erupts': 6249, 'retired': 6250, '755': 6251, 'represents': 6252, 'abbey': 6253, 'rubin': 6254, 'hayden': 6255, 'rossini': 6256, 'siskel': 6257, 'snoring': 6258, 'ridge': 6259, 'eastward': 6260, 'westward': 6261, 'flowing': 6262, 'wished': 6263, 'looked': 6264, 'cowardly': 6265, 'chiropodist': 6266, 'porphyria': 6267, 'soy': 6268, 'kurt': 6269, 'cobain': 6270, 'shine': 6271, 'clot': 6272, 'pleasure': 6273, 'fertile': 6274, 'jeans': 6275, 'calvin': 6276, 'klein': 6277, 'comfortable': 6278, 'abbie': 6279, 'dose': 6280, 'friction': 6281, 'mormon': 6282, '69': 6283, 'indianapolis': 6284, 'tucson': 6285, 'melbourne': 6286, 'compare': 6287, 'pillar': 6288, 'contemplating': 6289, 'brilliant': 6290, 'economist': 6291, 'creation': 6292, 'sally': 6293, 'dyke': 6294, 'experience': 6295, 'mythical': 6296, 'hourglass': 6297, 'scythe': 6298, 'twenty': 6299, 'didn': 6300, 'challenged': 6301, 'explore': 6302, 'sleeping': 6303, 'donate': 6304, 'truly': 6305, 'numbered': 6306, 'vats': 6307, 'judged': 6308, '1863': 6309, 'criticism': 6310, 'throw': 6311, 'housewarming': 6312, 'hurley': 6313, 'impulse': 6314, 'hardening': 6315, 'kim': 6316, 'philby': 6317, 'freddy': 6318, 'freeman': 6319, 'rona': 6320, 'barrett': 6321, 'lustrum': 6322, 'encounters': 6323, 'mathematician': 6324, 'glamorous': 6325, 'metalious': 6326, 'unleashed': 6327, 'celestials': 6328, 'paths': 6329, 'enhance': 6330, 'sporting': 6331, 'collapsed': 6332, 'erle': 6333, 'gardner': 6334, 'terrified': 6335, 'cleopatra': 6336, 'expert': 6337, 'describing': 6338, 'residents': 6339, 'lesbos': 6340, 'organizational': 6341, 'delhi': 6342, 
'indira': 6343, 'mistletoe': 6344, 'plugged': 6345, 'spectacle': 6346, 'telecast': 6347, 'amen': 6348, 'baffin': 6349, 'frobisher': 6350, 'limbo': 6351, 'credits': 6352, 'physician': 6353, 'inventions': 6354, 'bremer': 6355, 'escape': 6356, 'apostle': 6357, 'caldwell': 6358, 'zone': 6359, 'archery': 6360, 'anesthetic': 6361, 'allow': 6362, 'periodic': 6363, 'solid': 6364, 'liquid': 6365, 'tonne': 6366, 'entirely': 6367, 'deet': 6368, 'sagebrush': 6369, 'bernoulli': 6370, 'poster': 6371, 'scrum': 6372, 'improve': 6373, 'morale': 6374, 'bowler': 6375, 'facing': 6376, '37803': 6377, 'pin': 6378, 'resources': 6379, 'teachers': 6380, 'israeli': 6381, '168': 6382, 'recomended': 6383, 'switch': 6384, 'crib': 6385, 'jdr3': 6386, 'mendelevium': 6387, 'users': 6388, 'friz': 6389, 'freleng': 6390, 'ranks': 6391, 'sideburns': 6392, 'resulting': 6393, '1849': 6394, 'sutter': 6395, 'moorish': 6396, 'erich': 6397, 'melt': 6398, 'taught': 6399, 'matt': 6400, 'murdock': 6401, 'extraordinary': 6402, 'abilities': 6403, 'wile': 6404, 'coyote': 6405, 'lent': 6406, 'mandibulofacial': 6407, 'dysostosis': 6408, 'partition': 6409, 'churches': 6410, 'famously': 6411, 'warn': 6412, 'dtmf': 6413, 'sandra': 6414, 'bullock': 6415, 'blew': 6416, 'lakehurst': 6417, 'commanded': 6418, 'individual': 6419, 'tested': 6420, 'captained': 6421, 'ernst': 6422, 'lehmann': 6423, 'sprouted': 6424, 'opposition': 6425, 'konrad': 6426, 'adenauer': 6427, 'lipstick': 6428, 'wax': 6429, 'madame': 6430, 'tussaud': 6431, 'terror': 6432, 'horton': 6433, 'touched': 6434, 'shortstop': 6435, 'iditarod': 6436, 'stay': 6437, 'reinstate': 6438, 'selective': 6439, 'registration': 6440, 'pamplona': 6441, 'motor': 6442, 'collectible': 6443, '7th': 6444, 'inning': 6445, 'gitchee': 6446, 'gumee': 6447, 'tristan': 6448, 'reb': 6449, 'yank': 6450, 'guidance': 6451, 'jpl': 6452, 'goldfinger': 6453, 'hobby': 6454, 'shelf': 6455, 'beside': 6456, 'crouching': 6457, '1886': 6458, 'tub': 6459, 'treatments': 6460, 'jessica': 6461, 'gangland': 6462, 'slaughter': 6463, 'membership': 6464, 'moran': 6465, 'outfit': 6466, 'exile': 6467, 'tailors': 6468, 'elongated': 6469, 'afoot': 6470, 'goldilocks': 6471, 'kreme': 6472, 'collided': 6473, 'truck': 6474, 'swatch': 6475, 'nuremberg': 6476, 'keller': 6477, 'taken': 6478, 'track': 6479, 'etched': 6480, 'excellence': 6481, 'exposition': 6482, 'campbell': 6483, 'parma': 6484, 'traditions': 6485, 'elizabethian': 6486, 'quicker': 6487, 'sultan': 6488, 'ski': 6489, 'dolomites': 6490, 'weekend': 6491, 'monterey': 6492, 'stern': 6493, 'caul': 6494, 'propaganda': 6495, 'successfully': 6496, 'quantum': 6497, 'leaps': 6498, 'simpler': 6499, 'acoustic': 6500, 'med': 6501, 'edentulous': 6502, 'smile': 6503, 'jealousy': 6504, 'flytrap': 6505, '327': 6506, 'shelves': 6507, 'banking': 6508, 'makepeace': 6509, 'thackeray': 6510, 'kubrick': 6511, 'reproduce': 6512, 'reputed': 6513, 'priest': 6514, 'marxism': 6515, 'boiled': 6516, 'skyline': 6517, 'belize': 6518, 'paine': 6519, 'sued': 6520, 'dannon': 6521, 'yougurt': 6522, 'ron': 6523, 'raider': 6524, 'promotion': 6525, 'carroll': 6526, 'robb': 6527, 'hydroelectricity': 6528, 'taller': 6529, 'unsafe': 6530, 'antigua': 6531, 'abacus': 6532, 'popularly': 6533, 'mass': 6534, 'exposed': 6535, 'granite': 6536, 'commander': 6537, 'yorktown': 6538, '1781': 6539, 'kinsey': 6540, 'preference': 6541, 'males': 6542, 'procedure': 6543, 'drilling': 6544, 'skull': 6545, 'acheive': 6546, 'higher': 6547, 'garmat': 6548, 'karl': 6549, 'madsen': 6550, 'byzantine': 6551, 'appoint': 6552, 'splatterpunk': 
6553, 'orgin': 6554, 'xoxoxox': 6555, 'southeast': 6556, 'wang': 6557, 'joining': 6558, 'ping': 6559, 'tak': 6560, '155': 6561, 'leonardo': 6562, 'vinci': 6563, 'michaelangelo': 6564, 'machiavelli': 6565, 'fascist': 6566, 'lottery': 6567, 'haboob': 6568, 'blows': 6569, 'fabric': 6570, 'cake': 6571, 'msg': 6572, 'saks': 6573, 'zoo': 6574, 'yaroslavl': 6575, 'gemstone': 6576, 'nebbish': 6577, 'powdered': 6578, 'recognition': 6579, 'services': 6580, 'quelling': 6581, 'rebellions': 6582, 'bytes': 6583, 'terabyte': 6584, 'hooked': 6585, 'ally': 6586, 'mcbeal': 6587, 'ivan': 6588, 'iv': 6589, 'expansion': 6590, 'forged': 6591, 'cliff': 6592, 'robertson': 6593, 'damocles': 6594, 'televised': 6595, 'dondi': 6596, 'adoptive': 6597, 'grandfather': 6598, 'smelly': 6599, 'lemon': 6600, 'automobiles': 6601, 'zolotow': 6602, 'concerts': 6603, 'groundshog': 6604, 'andie': 6605, 'macdowell': 6606, 'hairy': 6607, 'chiang': 6608, 'kai': 6609, 'shek': 6610, 'hijack': 6611, 'rah': 6612, 'enlivens': 6613, 'hanover': 6614, 'cousin': 6615, 'theodore': 6616, 'arts': 6617, 'footwear': 6618, 'boats': 6619, "'neal": 6620, 'unification': 6621, 'zeros': 6622, 'trillion': 6623, 'crimean': 6624, 'eligible': 6625, 'drunken': 6626, 'drivers': 6627, 'dragged': 6628, 'terrier': 6629, 'forfeited': 6630, 'lawnmower': 6631, 'letterman': 6632, 'knicks': 6633, 'titles': 6634, 'rated': 6635, 'sony': 6636, 'playstation': 6637, 'symbolizes': 6638, 'urban': 6639, 'bells': 6640, 'bering': 6641, 'smartnet': 6642, 'synonym': 6643, 'vermicilli': 6644, 'rigati': 6645, 'zitoni': 6646, 'tubetti': 6647, 'grocer': 6648, 'fingertips': 6649, 'philosophy': 6650, 'plans': 6651, 'forerunner': 6652, 'buds': 6653, 'snickers': 6654, 'musketeers': 6655, 'sysrq': 6656, 'key': 6657, 'stricken': 6658, 'contibution': 6659, 'experiment': 6660, 'gabel': 6661, 'maris': 6662, '61': 6663, 'submerged': 6664, 'fringe': 6665, 'ossining': 6666, 'application': 6667, 'hydrosulfite': 6668, 'allsburg': 6669, 'tries': 6670, 'components': 6671, 'polyester': 6672, 'dig': 6673, 'intergovernmental': 6674, 'affairs': 6675, 'espn': 6676, 'laptop': 6677, 'natchitoches': 6678, 'pointed': 6679, 'handwriting': 6680, 'analyst': 6681, 'recovery': 6682, 'taiwan': 6683, 'hawkins': 6684, '1562': 6685, 'burr': 6686, 'cartier': 6687, 'aviator': 6688, 'tempelhol': 6689, 'igor': 6690, 'suicides': 6691, 'regardless': 6692, 'priestley': 6693, 'erase': 6694, 'licensed': 6695, 'blend': 6696, 'herbs': 6697, 'spices': 6698, 'mid': 6699, '1900s': 6700, 'janet': 6701, 'enthalpy': 6702, 'reaction': 6703, 'sired': 6704, 'hustle': 6705, 'gemini': 6706, 'grange': 6707, 'gretzky': 6708, 'nones': 6709, 'warm': 6710, 'peabody': 6711, 'sherman': 6712, 'bullwinkle': 6713, "'d": 6714, 'lovely': 6715, 'dumplings': 6716, 'celestial': 6717, '864': 6718, 'circus': 6719, 'wittenberg': 6720, 'kathryn': 6721, 'hinduism': 6722, 'denmark': 6723, 'ankle': 6724, 'sprain': 6725, '313': 6726, 'biochemists': 6727, 'alpert': 6728, 'moss': 6729, 'thermal': 6730, 'equilibrium': 6731, 'behavior': 6732, 'violates': 6733, 'accepted': 6734, 'standards': 6735, 'morality': 6736, 'magnate': 6737, 'initials': 6738, 'sleeve': 6739, 'padres': 6740, 'neurons': 6741, 'reptiles': 6742, 'ridder': 6743, 'kdge': 6744, 'executioner': 6745, 'bid': 6746, 'chamber': 6747, 'doegs': 6748, 'plumbism': 6749, 'relatives': 6750, 'tears': 6751, 'salzburg': 6752, 'shown': 6753, '188': 6754, 'arles': 6755, 'stings': 6756, 'below': 6757, 'fulton': 6758, 'infomatics': 6759, 'bios': 6760, 'keck': 6761, 'telescope': 6762, 'apartments': 6763, 
'brunswick': 6764, 'resurrectionist': 6765, 'vegetables': 6766, 'combined': 6767, 'succotash': 6768, 'reality': 6769, 'manufacturers': 6770, 'poke': 6771, 'cullion': 6772, 'safest': 6773, 'pedestrians': 6774, 'craig': 6775, 'stevens': 6776, 'meanie': 6777, 'angela': 6778, 'divide': 6779, 'mvp': 6780, '999': 6781, 'celebrations': 6782, 'fears': 6783, 'palpatine': 6784, 'wilkes': 6785, 'plantation': 6786, 'flat': 6787, 'explosion': 6788, 'sphere': 6789, 'statistical': 6790, 'barnstorming': 6791, 'dumbest': 6792, 'importance': 6793, 'magellan': 6794, 'grades': 6795, 'husbands': 6796, 'hilton': 6797, 'wilding': 6798, 'fubu': 6799, 'oop': 6800, 'moo': 6801, 'tastes': 6802, 'distinguish': 6803, 'travelers': 6804, 'covered': 6805, 'menu': 6806, 'item': 6807, 'spicey': 6808, 'supporting': 6809, 'stradivarius': 6810, 'childhood': 6811, 'ticker': 6812, '1870': 6813, 'afraid': 6814, 'debts': 6815, 'qintex': 6816, 'hates': 6817, 'mankind': 6818, 'milt': 6819, 'austerlitz': 6820, 'ty': 6821, 'cobb': 6822, 'philanthropist': 6823, 'portal': 6824, 'goodness': 6825, 'describes': 6826, 'usage': 6827, 'avoid': 6828, 'darning': 6829, 'needles': 6830, 'stingers': 6831, 'excite': 6832, 'proceed': 6833, 'vitamins': 6834, 'penguins': 6835, 'richards': 6836, 'idle': 6837, 'fordham': 6838, 'waynesburg': 6839, '12601': 6840, 'serigraph': 6841, 'hallie': 6842, 'woods': 6843, 'macarthur': 6844, '1767': 6845, '1834': 6846, 'racehorse': 6847, '20th': 6848, 'eminem': 6849, 'slim': 6850, 'shady': 6851, 'final': 6852, 'weir': 6853, 'subaru': 6854, 'endometriosis': 6855, 'geoscientist': 6856, 'robust': 6857, 'imported': 6858, 'instructor': 6859, 'judo': 6860, 'stem': 6861, 'edessa': 6862, 'levitation': 6863, 'btu': 6864, 'untouchables': 6865, 'vdrl': 6866, 'tackle': 6867, 'eagles': 6868, 'xv': 6869, 'endurance': 6870, 'hardy': 6871, 'silversmith': 6872, 'violent': 6873, 'niece': 6874, 'nephew': 6875, 'assassin': 6876, 'tumbling': 6877, 'maudie': 6878, 'frickett': 6879, 'leaky': 6880, 'valve': 6881, 'myself': 6882, 'manicure': 6883, 'circumnavigator': 6884, 'syzygy': 6885, 'waterways': 6886, '76': 6887, 'liberated': 6888, 'strasbourg': 6889, 'baseman': 6890, 'ports': 6891, 'christine': 6892, 'possessed': 6893, 'goals': 6894, 'scored': 6895, 'resembled': 6896, 'jackass': 6897, 'tattoo': 6898, 'forever': 6899, 'frommer': 6900, 'observances': 6901, 'chair': 6902, 'reserve': 6903, 'friendliness': 6904, 'scsi': 6905, 'funny': 6906, 'preferably': 6907, 'radiation': 6908, 'marzipan': 6909, 'polyorchid': 6910, 'abolished': 6911, 'permutations': 6912, 'osteichthyes': 6913, 'nasty': 6914, 'topic': 6915, 'outline': 6916, 'conformist': 6917, 'dripper': 6918, 'furlongs': 6919, 'quarter': 6920, 'recetrack': 6921, 'millimeters': 6922, 'symbolize': 6923, '1699': 6924, '172': 6925, 'foreigner': 6926, 'sum': 6927, 'genetic': 6928, 'soundtrack': 6929, 'melman': 6930, 'limestone': 6931, 'deposit': 6932, 'rising': 6933, 'swing': 6934, 'bookshop': 6935, 'silkworm': 6936, 'moth': 6937, 'domestication': 6938, 'tenths': 6939, 'marl': 6940, 'sourness': 6941, 'lan': 6942, 'activated': 6943, 'insects': 6944, 'spiracles': 6945, 'arches': 6946, 'natives': 6947, 'stevenson': 6948, 'deacon': 6949, 'brodie': 6950, 'cabinetmaker': 6951, 'burglar': 6952, 'rejection': 6953, 'rallying': 6954, 'dubliners': 6955, 'underage': 6956, 'watchman': 6957, 'wills': 6958, 'sword': 6959, 'candice': 6960, 'bergen': 6961, 'jacqueline': 6962, 'bisset': 6963, 'remake': 6964, '1943': 6965, 'acquaintance': 6966, '43rd': 6967, 'aerodynamics': 6968, 'laboratory': 6969, '1912': 
6970, 'calder': 6971, 'oas': 6972, 'forsyth': 6973, 'toppling': 6974, 'mercenaries': 6975, 'baretta': 6976, 'cockatoo': 6977, 'trader': 6978, 'conterminous': 6979, 'sequencing': 6980, 'chop': 6981, 'suey': 6982, 'satelite': 6983, 'archimedes': 6984, 'lucille': 6985, 'delicate': 6986, 'tasting': 6987, 'onion': 6988, '239': 6989, '48th': 6990, 'quotes': 6991, 'bullseye': 6992, 'darts': 6993, 'mythology': 6994, 'cunnilingus': 6995, 'reunited': 6996, 'maltese': 6997, 'falconers': 6998, 'astor': 6999, 'sidney': 7000, 'greenstreet': 7001, 'deranged': 7002, 'otto': 7003, 'octavius': 7004, 'acceptance': 7005, 'speech': 7006, 'yous': 7007, 'seuss': 7008, 'verdandi': 7009, 'dined': 7010, 'oysters': 7011, 'carpenter': 7012, 'guadalcanal': 7013, 'elk': 7014, 'badly': 7015, 'tarnished': 7016, 'brass': 7017, 'tied': 7018, 'ruble': 7019, 'irl': 7020, 'scott': 7021, '194': 7022, 'ants': 7023, 'ku': 7024, 'klux': 7025, 'klan': 7026, 'ukraine': 7027, 'hdlc': 7028, 'joins': 7029, 'spritz': 7030, 'spritzer': 7031, 'nematode': 7032, 'phobophobe': 7033, 'capitalism': 7034, 'max': 7035, 'weber': 7036, 'arson': 7037, 'refuse': 7038, 'orly': 7039, 'woodstock': 7040, 'gambling': 7041, 'task': 7042, 'bouvier': 7043, 'somene': 7044, 'solved': 7045, 'bella': 7046, 'abzug': 7047, 'sartorial': 7048, 'macdonald': 7049, 'lew': 7050, 'archer': 7051, 'superb': 7052, 'affiant': 7053, 'raced': 7054, 'threat': 7055, 'thefts': 7056, 'lickin': 7057, 'commandant': 7058, 'stalag': 7059, 'terrorism': 7060, 'accompanied': 7061, 'missions': 7062, 'est': 7063, 'pas': 7064, 'except': 7065, 'repeats': 7066, 'tampon': 7067, 'cct': 7068, 'diagram': 7069, 'ismail': 7070, 'farouk': 7071, 'enchanted': 7072, 'evening': 7073, 'supplement': 7074, 'locomotive': 7075, 'horlick': 7076, 'adventuring': 7077, 'rann': 7078, 'adam': 7079, 'strange': 7080, 'desk': 7081, 'loud': 7082, 'inaction': 7083, 'ecological': 7084, 'niche': 7085, 'fireplug': 7086, 'walt': 7087, 'none': 7088, 'employees': 7089, 'kwai': 7090, 'vending': 7091, 'distribute': 7092, 'humanitarian': 7093, 'relief': 7094, 'somalia': 7095, 'elephants': 7096, 'doris': 7097, 'certainly': 7098, 'practical': 7099, 'marketed': 7100, 'drought': 7101, '173': 7102, '732': 7103, 'mailing': 7104, 'lists': 7105, 'billingsgate': 7106, 'fishmarket': 7107, 'oj': 7108, 'detroit': 7109, 'calleda': 7110, '1928': 7111, 'thin': 7112, 'clubs': 7113, 'peasant': 7114, 'peugeot': 7115, 'continents': 7116, 'refers': 7117, 'automation': 7118, 'knighted': 7119, 'eating': 7120, 'utensils': 7121, 'handicapped': 7122, 'daylight': 7123, 'lutine': 7124, 'announce': 7125, 'chernobyl': 7126, 'accident': 7127, 'maya': 7128, 'scratch': 7129, 'ancients': 7130, 'haversian': 7131, 'canals': 7132, 'julie': 7133, 'poppins': 7134, 'status': 7135, 'predicted': 7136, 'topple': 7137, '2010': 7138, '2020': 7139, 'wasps': 7140, 'manuel': 7141, 'noriega': 7142, 'ousted': 7143, 'authorities': 7144, 'breaking': 7145, 'ladybugs': 7146, 'taft': 7147, 'benson': 7148, 'majal': 7149, 'brothel': 7150, 'doughnut': 7151, 'pompeii': 7152, 'farrier': 7153, 'saliva': 7154, 'nearsightedness': 7155, 'mayfly': 7156, 'petersburg': 7157, 'petrograd': 7158, 'dissented': 7159, 'pia': 7160, 'zadora': 7161, 'millionaire': 7162, 'compounds': 7163, 'astronomer': 7164, 'umbrellas': 7165, 'feminist': 7166, 'politics': 7167, 'zodiac': 7168, 'districts': 7169, 'snafu': 7170, 'chablis': 7171, 'vince': 7172, 'lombardi': 7173, 'coaching': 7174, 'fatalism': 7175, 'determinism': 7176, 'fractal': 7177, 'blockade': 7178, '1603': 7179, 'cameras': 7180, 'naseem': 7181, 
'hamed': 7182, 'scorpion': 7183, 'logarithmic': 7184, 'scales': 7185, 'slide': 7186, 'webster': 7187, 'circulation': 7188, 'britney': 7189, 'everyday': 7190, 'midi': 7191, 'pesth': 7192, 'buda': 7193, 'merged': 7194, 'fishing': 7195, 'pail': 7196, 'gangster': 7197, 'youngman': 7198, 'beats': 7199, 'papers': 7200, 'textile': 7201, 'snakebite': 7202, 'admitted': 7203, 'billion': 7204, 'appointments': 7205, 'worm': 7206, '1980s': 7207, 'captured': 7208, 'syrian': 7209, 'cloud': 7210, '924': 7211, 'rebounds': 7212, 'continuing': 7213, 'dialog': 7214, 'contemporary': 7215, 'issues': 7216, 'readers': 7217, 'tape': 7218, 'understand': 7219, 'cables': 7220, 'manchester': 7221, 'discontinued': 7222, 'batman': 7223, 'batcycle': 7224, 'saute': 7225, 'schematics': 7226, 'windshield': 7227, 'wiper': 7228, 'mechanism': 7229, 'archenemy': 7230, 'schoolhouse': 7231, 'schooling': 7232, 'highschool': 7233, 'crops': 7234, 'showers': 7235, 'aztecs': 7236, 'flightless': 7237, 'sawyer': 7238, 'aunt': 7239, 'micronauts': 7240, 'traveling': 7241, 'microverse': 7242, 'pugilist': 7243, 'cauliflower': 7244, 'mcpugg': 7245, 'seagull': 7246, 'ouzo': 7247, '137': 7248, 'uruguay': 7249, 'eliot': 7250, 'wordsworth': 7251, 'replica': 7252, 'disneyland': 7253, 'deserts': 7254, 'qatar': 7255, 'crisscrosses': 7256, 'urologist': 7257, 'jeroboams': 7258, 'rugby': 7259, 'warren': 7260, 'spahn': 7261, '20': 7262, 'skittles': 7263, 'qualifications': 7264, 'donating': 7265, 'olivia': 7266, 'havilland': 7267, 'pressured': 7268, 'appointing': 7269, 'conflicts': 7270, 'hairdryer': 7271, 'eats': 7272, 'sleeps': 7273, 'underground': 7274, 'portuguese': 7275, 'martial': 7276, 'sinning': 7277, 'edison': 7278, 'extends': 7279, 'alabama': 7280, 'lemurs': 7281, 'agencies': 7282, 'employment': 7283, 'verification': 7284, 'onetime': 7285, 'socialism': 7286, 'claws': 7287, 'bucks': 7288, 'condensed': 7289, 'spamming': 7290, 'scores': 7291, 'rockin': 7292, 'protects': 7293, 'realm': 7294, 'droppings': 7295, 'feat': 7296, 'homerian': 7297, 'trojan': 7298, 'vesuvius': 7299, 'prenatal': 7300, 'supercontinent': 7301, 'pangaea': 7302, 'break': 7303, 'lime': 7304, 'cherry': 7305, 'thirds': 7306, 'preston': 7307, 'snarly': 7308, 'shelleen': 7309, 'pens': 7310, 'englishmen': 7311, 'walks': 7312, '1919': 7313, 'occurrence': 7314, 'unarmed': 7315, 'protestors': 7316, 'moog': 7317, 'synthesizer': 7318, 'niigata': 7319, 'filenes': 7320, 'radiographer': 7321, 'disaccharide': 7322, 'faring': 7323, 'inhumans': 7324, 'appropriates': 7325, 'rarely': 7326, 'lavender': 7327, 'wwf': 7328, 'rude': 7329, 'porgy': 7330, 'bess': 7331, 'clone': 7332, 'larynx': 7333, 'luggage': 7334, 'flier': 7335, 'rearranged': 7336, 'lucelly': 7337, 'garcia': 7338, 'honduras': 7339, 'sneezing': 7340, 'quick': 7341, 'tbk': 7342, 'seafaring': 7343, 'swapped': 7344, 'families': 7345, 'plagues': 7346, 'wheels': 7347, 'rounded': 7348, 'matchbook': 7349, 'gregorian': 7350, 'corsica': 7351, 'hive': 7352, 'slotbacks': 7353, 'tailbacks': 7354, 'touchbacks': 7355, 'complemented': 7356, 'potatoes': 7357, 'peas': 7358, 'repeating': 7359, 'voter': 7360, 'dingoes': 7361, 'atlas': 7362, 'leoncavallo': 7363, 'prologue': 7364, 'stratton': 7365, 'southwestern': 7366, 'pomegranates': 7367, 'pharmacists': 7368, 'allies': 7369, 'avalanche': 7370, 'hernando': 7371, 'soto': 7372, 'epicenter': 7373, 'quality': 7374, 'charcter': 7375, 'chiefly': 7376, 'enormous': 7377, 'corbett': 7378, '1892': 7379, 'marino': 7380, 'tastebud': 7381, 'astroturf': 7382, 'hiemal': 7383, 'activity': 7384, 'normally': 7385, 
'beaches': 7386, "'m": 7387, 'jealous': 7388, 'prankster': 7389, 'waved': 7390, 'caboose': 7391, 'haven': 7392, 'hairless': 7393, 'volume': 7394, 'jewelry': 7395, 'pictured': 7396, 'entries': 7397, '1669': 7398, 'walden': 7399, 'puddle': 7400, 'socrates': 7401, 'obelisk': 7402, 'albee': 7403, 'regained': 7404, 'ted': 7405, 'predict': 7406, 'observing': 7407, 'cannon': 7408, 'divides': 7409, 'frenchman': 7410, 'necessary': 7411, 'stomach': 7412, 'directors': 7413, 'advisory': 7414, 'voices': 7415, 'hurdle': 7416, 'runner': 7417, 'steeplechase': 7418, 'owning': 7419, 'svhs': 7420, 'mackenzie': 7421, 'cultural': 7422, 'condemn': 7423, 'pushy': 7424, 'mtv': 7425, 'sap': 7426, 'atmosphere': 7427, 'feather': 7428, 'macaroni': 7429, 'particularly': 7430, 'photoshop': 7431, 'pitched': 7432, 'nevermind': 7433, 'steering': 7434, '1842': 7435, 'westview': 7436, 'funky': 7437, 'winkerbean': 7438, 'chick': 7439, 'breathe': 7440, 'mcgwire': 7441, 'maids': 7442, 'milking': 7443, 'celtic': 7444, 'transparent': 7445, 'limelight': 7446, 'tequila': 7447, 'galliano': 7448, 'geological': 7449, 'dunk': 7450, 'massive': 7451, 'complex': 7452, 'hohenzollerns': 7453, 'snowboarding': 7454, 'stallone': 7455, 'rhinestone': 7456, 'turnkey': 7457, 'extinction': 7458, '528': 7459, 'destroyers': 7460, 'maddox': 7461, 'turner': 7462, 'joy': 7463, 'kemper': 7464, 'genome': 7465, 'coordinate': 7466, 'mapping': 7467, 'abbreviate': 7468, 'chaplin': 7469, 'uncle': 7470, 'replied': 7471, 'begun': 7472, 'writ': 7473, 'categorized': 7474, 'bourgeoisie': 7475, 'leo': 7476, 'tolstoy': 7477, 'zapper': 7478, 'interlata': 7479, 'hanks': 7480, 'dimension': 7481, 'motown': 7482, 'anymore': 7483, 'skiing': 7484, 'calgary': 7485, 'dennison': 7486, 'railways': 7487, 'drain': 7488, 'bmw': 7489, 'biologist': 7490, 'revelation': 7491, 'mauis': 7492, 'extensively': 7493, 'grown': 7494, 'pythagoras': 7495, '1927': 7496, 'revival': 7497, 'aaa': 7498, 'liability': 7499, 'lmds': 7500, 'pointsettia': 7501, 'hiking': 7502, 'graveyard': 7503, 'writers': 7504, 'smothers': 7505, 'prewett': 7506, 'panther': 7507, 'louse': 7508, 'madeira': 7509, 'travelling': 7510, 'iberian': 7511, 'mines': 7512, 'properly': 7513, 'niagra': 7514, 'turns': 7515, '36893': 7516, 'adults': 7517, 'machinery': 7518, 'fickle': 7519, 'fate': 7520, 'sinclair': 7521, 'hide': 7522, 'seek': 7523, 'annie': 7524, 'neurotic': 7525, 'duane': 7526, 'thirst': 7527, 'quencher': 7528, 'prussia': 7529, 'node': 7530, 'tiles': 7531, 'teaspoons': 7532, 'tablespoon': 7533, 'lyricist': 7534, '3rd': 7535, 'langerhans': 7536, 'sql': 7537, 'queries': 7538, 'improved': 7539, 'radioactive': 7540, 'previous': 7541, 'commonwealth': 7542, 'taxed': 7543, '1789': 7544, 'scarlet': 7545, 'sara': 7546, 'linux': 7547, 'builders': 7548, 'mainly': 7549, 'offers': 7550, 'cad': 7551, 'doorstep': 7552, 'gasoline': 7553, 'bailey': 7554, 'stinger': 7555, 'tweezers': 7556, 'europeans': 7557, 'oceania': 7558, 'slavery': 7559, 'eldercare': 7560, 'decompose': 7561, 'contributed': 7562, 'plains': 7563, 'farmers': 7564, '1800s': 7565, '1960s': 7566, '1970s': 7567, "'50s": 7568, 'effective': 7569, 'protecting': 7570, 'comprises': 7571, 'highlands': 7572, 'lowlands': 7573, 'uplands': 7574, 'marshal': 7575, 'erwin': 7576, 'rommel': 7577, 'quiz': 7578, 'vera': 7579, 'lynn': 7580, 'meet': 7581, 'pitch': 7582, 'sweeter': 7583, 'mined': 7584, 'safety': 7585, 'constitute': 7586, 'leif': 7587, 'ericson': 7588, 'baskin': 7589, 'robbins': 7590, 'starship': 7591, 'crosstalk': 7592, 'relate': 7593, 'insb': 7594, 'thickness': 7595, 
'infrared': 7596, 'detectors': 7597, 'aftra': 7598, 'sexy': 7599, 'punchbowl': 7600, 'hill': 7601, 'ukulele': 7602, 'seccession': 7603, 'jackal': 7604, 'sweden': 7605, 'finland': 7606, 'ghana': 7607, 'denied': 7608, 'andorra': 7609, 'nestled': 7610, 'wade': 7611, 'decision': 7612, 'defreeze': 7613, 'radius': 7614, 'ellipse': 7615, 'heads': 7616, '1955': 7617, 'psychologically': 7618, 'fell': 7619, 'elizabeth': 7620, 'immigration': 7621, 'laws': 7622, 'cough': 7623, 'medication': 7624, 'tesla': 7625, 'jaco': 7626, 'pastorius': 7627, 'veronica': 7628, 'mig': 7629, 'khaki': 7630, 'chino': 7631, 'infinity': 7632, 'alloy': 7633, 'estuary': 7634, 'mevacor': 7635, 'achievement': 7636, '2001': 7637, 'odyssey': 7638, 'renaud': 7639, 'percival': 7640, 'lovell': 7641, 'rocket': 7642, 'surveyor': 7643, 'westernmost': 7644, 'abolitionists': 7645, 'tenderness': 7646, 'ruckus': 7647, 'insisted': 7648, 'clarabell': 7649, 'patrons': 7650, 'stonewall': 7651, 'greenwich': 7652, 'confucius': 7653, 'snack': 7654, 'ridges': 7655, 'tatiana': 7656, 'estonia': 7657, 'burroughs': 7658, 'chickadee': 7659, 'patients': 7660, 'senses': 7661, 'develops': 7662, 'kickoff': 7663, 'climbs': 7664, 'paleontologist': 7665, 'currently': 7666, 'captive': 7667, 'nautilus': 7668, 'rush': 7669, 'homeostasis': 7670, 'pies': 7671, 'wound': 7672, 'manufacturing': 7673, 'throwing': 7674, 'bandleader': 7675, 'cowrote': 7676, 'tisket': 7677, 'tasket': 7678, 'registers': 7679, 'trademarks': 7680, 'osmosis': 7681, 'joke': 7682, 'ancestral': 7683, 'overlooking': 7684, 'hyde': 7685, 'douglas': 7686, 'mcarthur': 7687, 'recalled': 7688, 'deadly': 7689, 'sins': 7690, 'formation': 7691, 'injuries': 7692, 'recreational': 7693, 'skating': 7694, 'lasts': 7695, 'sabres': 7696, 'impenetrable': 7697, 'fortifications': 7698, '95': 7699, 'polka': 7700, 'gran': 7701, 'bernardo': 7702, 'cuckquean': 7703, 'factors': 7704, 'teen': 7705, 'spartanburg': 7706, 'imitations': 7707, 'jellies': 7708, 'rca': 7709, 'dice': 7710, 'olive': 7711, 'oyl': 7712, 'dragonflies': 7713, 'boycotted': 7714, 'leslie': 7715, 'hornby': 7716, 'mahal': 7717, 'distinctive': 7718, 'palmiped': 7719, '139': 7720, 'papal': 7721, 'goulash': 7722, 'parachute': 7723, 'sub': 7724, 'saharan': 7725, 'spartacus': 7726, 'gladiator': 7727, 'supports': 7728, 'badaling': 7729, 'turret': 7730, 'drag': 7731, 'currents': 7732, 'shetland': 7733, 'orkney': 7734, 'ugly': 7735, 'duckling': 7736, 'tel': 7737, 'aviv': 7738, 'crossed': 7739, 'slits': 7740, 'castles': 7741, 'accommodate': 7742, 'aging': 7743, 'freckles': 7744, 'cos': 7745, 'cob': 7746, 'ct': 7747, 'psychology': 7748, 'values': 7749, 'motorcycle': 7750, 'bodies': 7751, 'visited': 7752, 'succeeded': 7753, 'nikita': 7754, 'chosen': 7755, 'chiefs': 7756, 'chef': 7757, 'coddle': 7758, 'bails': 7759, 'wicket': 7760, 'piles': 7761, 'bernini': 7762, 'bristol': 7763, 'dial': 7764, 'trainer': 7765, 'tungsten': 7766, 'quebec': 7767, 'buffett': 7768, 'concert': 7769, 'camden': 7770, 'stamps': 7771, '1st': 7772, 'sao': 7773, 'paulo': 7774, 'boc': 7775, 'boxcars': 7776, 'bestowed': 7777, 'figs': 7778, 'ripe': 7779, 'thee': 7780, 'sicilian': 7781, 'accused': 7782, 'janurary': 7783, 'billionth': 7784, 'crayon': 7785, 'crayola': 7786, 'hydroelectric': 7787, 'highways': 7788, 'binomial': 7789, 'coefficients': 7790, 'birthdate': 7791, 'suzy': 7792, 'montana': 7793, 'ussr': 7794, 'dissolved': 7795, 'edo': 7796, 'distilling': 7797, 'silence': 7798, 'lambs': 7799, 'napolean': 7800, 'jena': 7801, 'auerstadt': 7802, 'angelus': 7803, 'orbit': 7804, 'capture': 7805, 
'retirement': 7806, 'jerk': 7807, 'urgent': 7808, 'fury': 7809, 'robbers': 7810, 'nevil': 7811, 'shute': 7812, 'doomed': 7813, 'survivors': 7814, 'newsmen': 7815, 'warlock': 7816, 'forehead': 7817, 'softest': 7818, 'temperance': 7819, 'advocate': 7820, 'wielded': 7821, 'hatchet': 7822, 'saloons': 7823, 'fiji': 7824, 'cecum': 7825, 'volleyball': 7826, 'baryshnikov': 7827, 'normans': 7828, 'galloping': 7829, 'gourmet': 7830, 'nutrients': 7831, 'ninjitsu': 7832, 'kung': 7833, 'fu': 7834, 'prisoners': 7835, 'lobsters': 7836, 'wolverine': 7837, 'habits': 7838, 'fix': 7839, 'squeaky': 7840, 'thompson': 7841, 'flood': 7842, 'mosquitoes': 7843, 'bubblegum': 7844, 'carpet': 7845, 'wembley': 7846, 'sci': 7847, 'fi': 7848, 'peloponnesian': 7849, 'extremes': 7850, 'swims': 7851, 'tide': 7852, 'ebb': 7853, 'tannins': 7854, 'cheerios': 7855, 'durante': 7856, 'burst': 7857, 'commercials': 7858, 'delicacy': 7859, 'indelicately': 7860, 'pickled': 7861, '1916': 7862, 'jung': 7863, 'noodle': 7864, 'factory': 7865, 'hamilton': 7866, 'fahrenheit': 7867, 'centigrade': 7868, 'oyster': 7869, 'derived': 7870, 'biritch': 7871, 'whist': 7872, 'ado': 7873, 'collective': 7874, 'noun': 7875, 'traits': 7876, 'capricorns': 7877, 'concerning': 7878, 'custody': 7879, 'campaign': 7880, 'invention': 7881, 'conservation': 7882, 'impossible': 7883, 'ranking': 7884, 'roles': 7885, 'streetcar': 7886, 'physically': 7887, 'subject': 7888, 'mast': 7889, 'seafarers': 7890, 'kindergarten': 7891, 'mechanical': 7892, 'achieves': 7893, 'speeds': 7894, 'boilermaker': 7895, 'pilgrim': 7896, 'survivor': 7897, 'dresden': 7898, 'firestorm': 7899, 'indoor': 7900, 'inferno': 7901, '111': 7902, 'flu': 7903, 'bridges': 7904, 'upstairs': 7905, 'downstairs': 7906, 'clearer': 7907, 'monsters': 7908, 'rare': 7909, 'symptoms': 7910, 'involuntary': 7911, 'movements': 7912, 'tics': 7913, 'swearing': 7914, 'incoherent': 7915, 'vocalizations': 7916, 'grunts': 7917, 'otters': 7918, 'finn': 7919, 'heimlich': 7920, '287': 7921, 'vasco': 7922, 'gama': 7923, 'megan': 7924, 'listing': 7925, 'showtimes': 7926, 'montenegro': 7927, 'transistors': 7928, 'hazel': 7929, 'glasgow': 7930, 'ink': 7931, 'anteater': 7932, 'gleason': 7933, 'bendix': 7934, 'planned': 7935, 'berth': 7936, 'lane': 7937, 'converting': 7938, 'floating': 7939, 'pedometer': 7940, 'thousands': 7941, 'speaker': 7942, 'titans': 7943, 'suspect': 7944, 'clue': 7945, 'commentary': 7946, 'deconstructionism': 7947, 'lenny': 7948, 'bruce': 7949, 'arrested': 7950, 'returned': 7951, 'fraudulent': 7952, 'airliners': 7953, 'gliding': 7954, 'reflectors': 7955, 'sweetheart': 7956, 'darla': 7957, 'saline': 7958, 'cooling': 7959, 'justify': 7960, 'emergency': 7961, 'decrees': 7962, 'imprisoning': 7963, 'opponents': 7964, 'vesting': 7965, 'visiting': 7966, 'duvalier': 7967, 'attorney': 7968, 'ordered': 7969, 'alcatraz': 7970, 'congo': 7971, 'text': 7972, 'internet2': 7973, 'foreign': 7974, 'financial': 7975, 'button': 7976, 'gills': 7977, 'dubai': 7978, 'concrete': 7979, 'remembrance': 7980, 'kubla': 7981, 'khan': 7982, 'islam': 7983, 'maggio': 7984, 'zebras': 7985, 'considering': 7986, 'antonia': 7987, 'shimerda': 7988, 'farm': 7989, 'bucher': 7990, 'infatuation': 7991, 'genie': 7992, 'conjured': 7993, 'nancy': 7994, 'chuck': 7995, 'b12': 7996, 'owed': 7997, 'illegally': 7998, 'exact': 7999, 'sunset': 8000, 'particular': 8001, 'picasso': 8002, 'vocals': 8003, 'karnak': 8004, 'rowing': 8005, 'queensland': 8006, 'poing': 8007, 'carelessness': 8008, 'carefreeness': 8009, 'afflict': 8010, 'flash': 8011, 'tyvek': 
8012, 'zoonose': 8013, 'gunboat': 8014, 'pebbles': 8015, 'sinemet': 8016, 'selleck': 8017, 'tylo': 8018, 'volkswagen': 8019, 'natalie': 8020, 'audrey': 8021, 'sprouts': 8022, 'freeway': 8023, 'construction': 8024, 'louise': 8025, 'fletcher': 8026, 'stronger': 8027, 'vitreous': 8028, 'technology': 8029, 'cellulose': 8030, 'combatting': 8031, 'discontent': 8032, 'fang': 8033, 'tooth': 8034, 'pookie': 8035, 'burns': 8036, 'leftovers': 8037, 'imperial': 8038, 'initial': 8039, 'whiskers': 8040, 'saratoga': 8041, 'eliminates': 8042, 'germs': 8043, 'mildew': 8044, 'bullheads': 8045, 'feeding': 8046, 'pigeons': 8047, 'piazza': 8048, 'studio': 8049, 'bateau': 8050, 'lavoir': 8051, 'montmartre': 8052, 'cid': 8053, 'napalm': 8054, 'yohimbine': 8055, 'drafted': 8056, 'builder': 8057, 'ribbon': 8058, 'malls': 8059, 'decorations': 8060, 'burma': 8061, 'collector': 8062, 'johnsons': 8063, 'biographer': 8064, 'erotic': 8065, 'shortage': 8066, 'keeping': 8067, 'roads': 8068, 'hummingbird': 8069, 'ostrich': 8070, 'missionary': 8071, 'researches': 8072, '1857': 8073, 'pregnancies': 8074, 'methods': 8075, 'regulate': 8076, 'monopolies': 8077, 'denote': 8078, 'boomer': 8079, 'ferret': 8080, 'steepest': 8081, 'streets': 8082, 'giants': 8083, 'twelve': 8084, '90': 8085, 'entering': 8086, 'constantly': 8087, 'sweaty': 8088, 'sine': 8089, 'socioeconomic': 8090, 'ignores': 8091, 'friends': 8092, 'mccall': 8093, 'cruise': 8094, 'kathie': 8095, 'gifford': 8096, 'jenna': 8097, 'bras': 8098, 'cawdor': 8099, 'glamis': 8100, 'blair': 8101, 'horsemen': 8102, 'apocalypse': 8103, 'geckos': 8104, 'watts': 8105, 'kilowatt': 8106, 'jennifer': 8107, 'healer': 8108, 'inspirational': 8109, 'miracles': 8110, 'jinnah': 8111, 'esquire': 8112, 'hang': 8113, 'intranet': 8114, 'harold': 8115, 'stassen': 8116, 'caps': 8117, 'tramped': 8118, 'youth': 8119, 'noah': 8120, 'ark': 8121, 'mounted': 8122, 'guerrilla': 8123, 'coleman': 8124, 'younger': 8125, 'ridden': 8126, 'plagued': 8127, 'choice': 8128, 'height': 8129, '1925': 8130, 'pelt': 8131, 'psychiatric': 8132, 'sessions': 8133, 'thrillers': 8134, 'cortez': 8135, 'parthenon': 8136, '1895': 8137, 'wells': 8138, 'argonauts': 8139, 'dolphin': 8140, 'funded': 8141, 'elders': 8142, 'cricketer': 8143, '1898': 8144, 'dip': 8145, 'fries': 8146, 'adjacent': 8147, 'corridors': 8148, 'pentagon': 8149, 'juan': 8150, 'playwright': 8151, 'sucks': 8152, 'barbershop': 8153, 'beany': 8154, 'cecil': 8155, 'sailed': 8156, 'flora': 8157, 'pampas': 8158, 'needed': 8159, 'tailoring': 8160, 'bordering': 8161, 'due': 8162, 'morris': 8163, 'bishop': 8164, 'becomes': 8165, 'boarders': 8166, 'ekg': 8167, 'lends': 8168, 'surroundings': 8169, 'yearly': 8170, 'specimen': 8171, 'basidiomycetes': 8172, 'faults': 8173, 'asiento': 8174, 'appropriate': 8175, 'yom': 8176, 'kippur': 8177, 'mclean': 8178, 'laments': 8179, 'buddy': 8180, 'holly': 8181, 'srpska': 8182, 'krajina': 8183, 'supplier': 8184, 'cannabis': 8185, 'pergament': 8186, 'esa': 8187, 'pekka': 8188, 'crackle': 8189, 'locking': 8190, 'brakes': 8191, 'magenta': 8192, 'apache': 8193, 'hormone': 8194, 'isolationist': 8195, 'fellatio': 8196, 'characteristics': 8197, 'contribute': 8198, '1815': 8199, 'mitty': 8200, 'portraying': 8201, 'cartoonist': 8202, 'jets': 8203, 'vapor': 8204, 'childbirth': 8205, 'honda': 8206, 'ashen': 8207, 'eidologist': 8208, 'moderated': 8209, 'cohan': 8210, 'dandy': 8211, 'philebus': 8212, 'ingredients': 8213, 'proliferation': 8214, 'theresa': 8215, 'bless': 8216, 'sneeze': 8217, 'consumption': 8218, 'tire': 8219, 'spin': 8220, 
'slows': 8221, 'cartoondom': 8222, 'pluribus': 8223, 'unum': 8224, 'tft': 8225, 'dual': 8226, 'scan': 8227, 'oath': 8228, 'paradise': 8229, '47': 8230, 'cookie': 8231, 'ny': 8232, 'bang': 8233, 'koran': 8234, 'heptagon': 8235, 'wasn': 8236, 'anglicans': 8237, 'adventours': 8238, 'tours': 8239, 'becket': 8240, 'barbary': 8241, 'multicultural': 8242, 'multilingual': 8243, 'climbed': 8244, 'mt': 8245, 'photosynthesis': 8246, 'projects': 8247, '8th': 8248, 'links': 8249, 'piccadilly': 8250, 'pocket': 8251, 'billiards': 8252, 'satirized': 8253, 'countinghouse': 8254, 'counting': 8255, 'shifting': 8256, 'rom': 8257, 'headaches': 8258, 'locations': 8259, 'stained': 8260, 'window': 8261, 'version': 8262, 'commonplace': 8263, 'masons': 8264, 'enforce': 8265, 'daisy': 8266, 'moses': 8267, 'menstruation': 8268, 'makeup': 8269, 'capitalizes': 8270, 'pronoun': 8271, 'agra': 8272, 'mammoth': 8273, 'jfk': 8274, 'witness': 8275, 'hearings': 8276, 'stuck': 8277, 'friendly': 8278, 'basic': 8279, 'strokes': 8280, 'sen': 8281, 'everett': 8282, 'dirkson': 8283, "'70": 8284, 'erykah': 8285, 'badu': 8286, 'pony': 8287, 'gangsters': 8288, 'clyde': 8289, 'thalia': 8290, 'suffering': 8291, 'diphallic': 8292, 'terata': 8293, 'panties': 8294, 'pacer': 8295, 'compete': 8296, 'cured': 8297, 'cumin': 8298, 'ficus': 8299, 'aurora': 8300, 'blinking': 8301, 'aimed': 8302, 'audience': 8303, 'syringe': 8304, 'medicinal': 8305, 'barton': 8306, 'bith': 8307, 'sounded': 8308, 'chiffons': 8309, 'giza': 8310, 'historically': 8311, 'completed': 8312, 'corvette': 8313, 'lump': 8314, '191': 8315, 'mcdonald': 8316, 'horologist': 8317, 'rugs': 8318, 'attendance': 8319, 'supper': 8320, 'feed': 8321, 'purina': 8322, 'chow': 8323, 'operate': 8324, 'titus': 8325, 'sellers': 8326, 'creative': 8327, 'genius': 8328, 'hustles': 8329, 'waits': 8330, 'paraguay': 8331, 'vacations': 8332, 'conditioner': 8333, 'efficiency': 8334, 'bounded': 8335, 'tasman': 8336, 'foreman': 8337, 'victim': 8338, 'dsl': 8339, 'boss': 8340, 'multitalented': 8341, 'failed': 8342, 'ayer': 8343, 'craps': 8344, 'cult': 8345, 'marcus': 8346, 'garvey': 8347, 'cultivated': 8348, 'crazy': 8349, 'cruel': 8350, 'theatrical': 8351, 'roaring': 8352, 'forties': 8353, 'mack': 8354, 'sennett': 8355, 'lifetime': 8356, 'kilvington': 8357, 'compound': 8358, 'levine': 8359, 'hispaniola': 8360, 'powder': 8361, 'lotion': 8362, 'smell': 8363, 'janis': 8364, 'brandt': 8365, 'peruvian': 8366, 'mummified': 8367, 'pizarro': 8368, 'mackinaw': 8369, 'somebody': 8370, 'kyriakos': 8371, 'theotokopoulos': 8372, 'englishwoman': 8373, 'autry': 8374, 'regards': 8375, 'builds': 8376, 'odor': 8377, 'stated': 8378, 'rulebook': 8379, 'auh2o': 8380, 'debt': 8381, 'claiming': 8382, 'bankruptcy': 8383, 'shakespearean': 8384, 'shylock': 8385, 'entertainment': 8386, 'roosters': 8387, 'maldive': 8388, 'cullions': 8389, 'popularized': 8390, 'brillo': 8391, 'pad': 8392, 'mccain': 8393, 'rifleman': 8394, 'amazons': 8395, 'lai': 8396, 'hasidic': 8397, 'refrain': 8398, 'airforce': 8399, 'poetic': 8400, 'blank': 8401, 'verse': 8402, 'pibb': 8403, 'berry': 8404, 'blackberry': 8405, 'raspberry': 8406, 'strawberry': 8407, 'benjamin': 8408, 'ruby': 8409, 'platinum': 8410, 'hawkeye': 8411, 'seine': 8412, 'freshen': 8413, 'breath': 8414, 'toothpaste': 8415, 'hijacking': 8416, 'anita': 8417, 'bryant': 8418, 'compiled': 8419, 'propellers': 8420, 'helped': 8421, 'patents': 8422, 'malawi': 8423, 'bend': 8424, 'dimaggio': 8425, 'compile': 8426, '56': 8427, 'graffiti': 8428, 'quilting': 8429, 'stored': 8430, 'exceeded': 8431, 
'sonic': 8432, 'boom': 8433, 'iran': 8434, 'contra': 8435, 'indicate': 8436, '007': 8437, 'revolutionary': 8438, 'castro': 8439, 'botanical': 8440, 'nebuchadnezzar': 8441, 'aortic': 8442, 'abdominal': 8443, 'aneurysm': 8444, 'chloroplasts': 8445, 'deere': 8446, 'tractors': 8447, 'zebulon': 8448, 'pike': 8449, 'forward': 8450, 'thinking': 8451, 'insert': 8452, 'bagels': 8453, 'boost': 8454, 'purple': 8455, 'brew': 8456, 'cherubs': 8457, 'webpage': 8458, 'cleaveland': 8459, 'cavaliers': 8460, 'monarchy': 8461, 'isn': 8462, 'added': 8463, 'quisling': 8464, 'heineken': 8465, 'puerto': 8466, 'rico': 8467, 'repossession': 8468, 'butcher': 8469, 'spine': 8470, 'aspen': 8471, 'modesto': 8472, 'galileo': 8473, 'sears': 8474, 'autism': 8475, '1900': 8476, 'labrador': 8477, 'idaho': 8478, 'vaccination': 8479, 'epilepsy': 8480, 'biosphere': 8481, 'muddy': 8482, 'bipolar': 8483, 'cholesterol': 8484, 'macintosh': 8485, 'halfway': 8486, 'poles': 8487, 'invertebrates': 8488, 'linen': 8489, 'amitriptyline': 8490, 'shaman': 8491, 'walrus': 8492, 'turkeys': 8493, 'rip': 8494, 'winkle': 8495, 'triglycerides': 8496, 'liters': 8497, 'rays': 8498, 'fibromyalgia': 8499, 'outdated': 8500, 'yugoslavia': 8501, 'milan': 8502, 'hummingbirds': 8503, 'fargo': 8504, 'moorhead': 8505, 'bats': 8506, 'bighorn': 8507, 'newborn': 8508, 'lennon': 8509, 'ladybug': 8510, 'helpful': 8511, 'amoxicillin': 8512, 'xerophytes': 8513, 'ponce': 8514, 'desktop': 8515, 'publishing': 8516, 'cryogenics': 8517, 'reefs': 8518, 'neurology': 8519, 'ellington': 8520, 'az': 8521, 'micron': 8522, 'core': 8523, 'acupuncture': 8524, 'hindenberg': 8525, 'cubs': 8526, 'perth': 8527, 'eclipse': 8528, 'unmarried': 8529, 'thunderstorms': 8530, 'abolitionist': 8531, '1859': 8532, 'fault': 8533, 'platelets': 8534, 'severance': 8535, 'archives': 8536, 'poliomyelitis': 8537, 'philosopher': 8538, 'phi': 8539, 'beta': 8540, 'nicotine': 8541, 'b1': 8542, 'radium': 8543, 'sunspots': 8544, 'colonized': 8545, 'mongolia': 8546, 'nanotechnology': 8547, '1700': 8548, 'convicts': 8549, 'populate': 8550, 'lower': 8551, 'obtuse': 8552, 'angle': 8553, 'polymers': 8554, 'mauna': 8555, 'loa': 8556, 'astronomic': 8557, 'northern': 8558, 'acetaminophen': 8559, 'milwaukee': 8560, 'atlanta': 8561, 'absorbed': 8562, 'solstice': 8563, 'supernova': 8564, 'shawnee': 8565, 'lourve': 8566, 'pluto': 8567, 'neuropathy': 8568, 'euphrates': 8569, 'cryptography': 8570, 'composed': 8571, 'ruler': 8572, 'defeated': 8573, 'waterloo': 8574, 'wal': 8575, 'mart': 8576, '35824': 8577, 'hula': 8578, 'hoop': 8579, 'pastrami': 8580, 'enquirer': 8581, 'backbones': 8582, 'olympus': 8583, 'mons': 8584, '23rd': 8585, 'defibrillator': 8586, 'abolish': 8587, 'montreal': 8588, 'towers': 8589, 'fungus': 8590, 'frequently': 8591, 'chloride': 8592, 'spots': 8593, 'influenza': 8594, 'depletion': 8595, 'sitting': 8596, 'shiva': 8597, 'stretches': 8598, 'nigeria': 8599, 'spleen': 8600, 'phenylalanine': 8601, 'legislative': 8602, 'branch': 8603, 'sonar': 8604, 'phosphorus': 8605, 'tranquility': 8606, 'bandwidth': 8607, 'parasite': 8608, 'meteorologists': 8609, 'criterion': 8610, 'binney': 8611, '1903': 8612, 'pilates': 8613, 'depth': 8614, 'dress': 8615, 'mardi': 8616, 'gras': 8617, 'pesos': 8618, 'dodgers': 8619, 'admirals': 8620, 'glenn': 8621, 'arc': 8622, 'fortnight': 8623, 'dianetics': 8624, 'ethiopia': 8625, 'janice': 8626, 'fm': 8627, 'peyote': 8628, 'esophagus': 8629, 'mortarboard': 8630, 'chunnel': 8631, 'antacids': 8632, 'pulmonary': 8633, 'quaaludes': 8634, 'naproxen': 8635, 'strep': 8636, 'drawer': 
8637, 'hybridization': 8638, 'indigo': 8639, 'barometer': 8640, 'usps': 8641, 'strike': 8642, 'hiroshima': 8643, 'bombed': 8644, 'savannah': 8645, 'strongest': 8646, 'planets': 8647, 'mussolini': 8648, 'seize': 8649, 'persia': 8650, 'cell': 8651, 'tmj': 8652, 'yak': 8653, 'isdn': 8654, 'mozart': 8655, 'semolina': 8656, 'melba': 8657, 'ursa': 8658, 'content': 8659, 'reform': 8660, 'ontario': 8661, 'ceiling': 8662, 'stimulant': 8663, 'griffith': 8664, 'champlain': 8665, 'quicksilver': 8666, 'divine': 8667, 'width': 8668, 'toto': 8669, 'thyroid': 8670, 'ciao': 8671, 'artery': 8672, 'lungs': 8673, 'faithful': 8674, 'acetic': 8675, 'moulin': 8676, 'rouge': 8677, 'atomic': 8678, 'pathogens': 8679, 'zinc': 8680, 'snails': 8681, 'ethics': 8682, 'annuity': 8683, 'turquoise': 8684, 'muscular': 8685, 'dystrophy': 8686, 'neuschwanstein': 8687, 'propylene': 8688, 'glycol': 8689, 'instant': 8690, 'polaroid': 8691, 'carcinogen': 8692, 'nepotism': 8693, 'myopia': 8694, 'comprise': 8695, 'naturally': 8696, 'occurring': 8697, 'mason': 8698, 'dixon': 8699, 'metabolism': 8700, 'cigarettes': 8701, 'semiconductors': 8702, 'tsunami': 8703, 'kidney': 8704, 'genocide': 8705, 'monastery': 8706, 'raided': 8707, 'vikings': 8708, 'coaster': 8709, 'bangers': 8710, 'mash': 8711, 'jewels': 8712, 'ulcer': 8713, 'vertigo': 8714, 'spirometer': 8715, 'sos': 8716, 'gasses': 8717, 'troposphere': 8718, 'gypsy': 8719, 'rainiest': 8720, 'patrick': 8721, 'mixed': 8722, 'refrigerator': 8723, 'schizophrenia': 8724, 'angiotensin': 8725, 'organize': 8726, 'susan': 8727, 'catskill': 8728, 'backwards': 8729, 'forwards': 8730, 'pediatricians': 8731, 'bentonville': 8732, 'compounded': 8733, 'capers': 8734, 'antigen': 8735, 'luxembourg': 8736, 'venezuela': 8737, 'polymer': 8738, 'bulletproof': 8739, 'vests': 8740, 'thermometer': 8741, 'precious': 8742, 'pure': 8743, 'fluorescent': 8744, 'bulb': 8745, 'rheumatoid': 8746, 'arthritis': 8747, 'rowe': 8748, 'cerebral': 8749, 'palsy': 8750, 'shepard': 8751, 'historic': 8752, 'pectin': 8753, 'bio': 8754, 'diversity': 8755, '22nd': 8756, 'zambia': 8757, 'october': 8758, 'coli': 8759}
###Markdown
Data Preprocessing Define `clean_doc` function
###Code
# imports used by clean_doc: regex, punctuation list, NLTK stopwords and Porter stemmer
import re
from string import punctuation
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# remove remaining tokens that are not alphabetic
# tokens = [word for word in tokens if word.isalpha()]
# filter out stop words
tokens = [w for w in tokens if not w in stopwords]
    # filter out empty tokens left behind after punctuation removal
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
###Output
_____no_output_____
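###Markdown
As a quick sanity check we can run `clean_doc` on a single made-up question (not one from the dataset) and inspect the tokens it returns; the exact stems depend on the NLTK Porter stemmer, so treat this as an illustrative sketch.
###Code
# Illustrative only: a made-up sample question, not taken from the dataset
sample = 'what is the capital city of australia ?'
# clean_doc strips punctuation, drops stopwords and empty tokens, then stems what is left
print(clean_doc(sample))
###Output
_____no_output_____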
###Markdown
Develop VocabularyPart of preparing text for text classification involves defining and tailoring the vocabulary of words supported by the model. **We can do this by loading all of the training documents and building a set of words.** The larger the vocabulary, the more sparse the representation of each word or document, so we may decide to support all of these words, or perhaps discard some. The final chosen vocabulary can then be saved to a file for later use, such as filtering words in new documents in the future. We can use the `Counter` class and create an instance called `vocab` as follows:
###Code
from collections import Counter
vocab = Counter()
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
# Example
add_doc_to_vocab(train_x, vocab)
print(len(vocab))
vocab
vocab.items()
# #########################
# # Define the vocabulary #
# #########################
# from collections import Counter
# from nltk.corpus import stopwords
# stopwords = stopwords.words('english')
# stemmer = PorterStemmer()
# def clean_doc(doc):
# # split into tokens by white space
# tokens = doc.split()
# # prepare regex for char filtering
# re_punc = re.compile('[%s]' % re.escape(punctuation))
# # remove punctuation from each word
# tokens = [re_punc.sub('', w) for w in tokens]
# # filter out stop words
# tokens = [w for w in tokens if not w in stopwords]
# # filter out short tokens
# tokens = [word for word in tokens if len(word) >= 1]
# # Stem the token
# tokens = [stemmer.stem(token) for token in tokens]
# return tokens
# def add_doc_to_vocab(docs, vocab):
# '''
# input:
# docs: a list of sentences (docs)
# vocab: a vocabulary dictionary
# output:
# return an updated vocabulary
# '''
# for doc in docs:
# tokens = clean_doc(doc)
# vocab.update(tokens)
# return vocab
# # Separate the sentences and the labels for training and testing
# train_x = list(corpus[corpus.split=='train'].sentence)
# train_y = np.array(corpus[corpus.split=='train'].label)
# print(len(train_x))
# print(len(train_y))
# test_x = list(corpus[corpus.split=='test'].sentence)
# test_y = np.array(corpus[corpus.split=='test'].label)
# print(len(test_x))
# print(len(test_y))
# # Instantiate a vocab object
# vocab = Counter()
# vocab = add_doc_to_vocab(train_x, vocab)
# print(len(train_x), len(test_x))
# print(len(vocab))
###Output
5452
5452
500
500
5452 500
6840
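###Markdown
The vocabulary above keeps every token seen in the training data. As mentioned earlier, we may prefer to discard rare words and save the final vocabulary to a file for later reuse. Below is a minimal sketch of how that could look (the `min_occurrence` threshold and the `vocab.txt` filename are arbitrary illustrative choices, not part of the original pipeline):
###Code
# Illustrative sketch: keep only tokens that occur at least min_occurrence times
min_occurrence = 2  # arbitrary, untuned threshold
trimmed_tokens = [token for token, count in vocab.items() if count >= min_occurrence]
print('full vocab:', len(vocab), 'trimmed vocab:', len(trimmed_tokens))
# Save the trimmed vocabulary, one token per line, for filtering new documents later
with open('vocab.txt', 'w') as f:
    f.write('\n'.join(trimmed_tokens))
###Output
_____no_output_____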
###Markdown
Bag-of-Words RepresentationOnce we have defined our vocab from the training data, we need to **convert each document into a representation that we can feed to a Multilayer Perceptron model.** As a reminder, here is a summary of what we will do:- extract features from the text so the text input can be used with ML algorithms like neural networks- we do this by converting the text into a vector representation; the larger the vocab, the longer the representation- we will score the words in a document and place each score in the corresponding location in the vector representation.
###Code
def doc_to_line(doc):
tokens = clean_doc(doc)
# filter by vocab
tokens = [token for token in tokens if token in vocab]
    line = ' '.join(tokens)
return line
def clean_docs(docs):
lines = []
for doc in docs:
line = doc_to_line(doc)
lines.append(line)
return lines
print(train_x[:5])
clean_sentences = clean_docs(train_x[:5])
print()
print( clean_sentences)
###Output
['how did serfdom develop in and then leave russia ?', 'what films featured the character popeye doyle ?', "how can i find a list of celebrities ' real names ?", 'what fowl grabs the spotlight after the chinese year of the monkey ?', 'what is the full form of .com ?']
['serfdom develop leav russia', 'film featur charact popey doyl', 'find list celebr real name', 'fowl grab spotlight chines year monkey', 'full form com']
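###Markdown
Before handing this off to Keras, here is a tiny hand-rolled illustration of the scoring idea described above: each document becomes a vector as long as the vocabulary, with a score stored at each word's position. The toy vocabulary reuses tokens from the cleaned sentences above, while the toy document itself is made up purely for illustration.
###Code
# Toy example of bag-of-words scoring (made-up document; vocabulary taken from the cleaned sentences above)
toy_vocab = ['film', 'featur', 'charact', 'popey', 'doyl']
toy_doc = 'film featur film charact'.split()
# Count how often each vocabulary word appears in the document
counts = [toy_doc.count(word) for word in toy_vocab]
print(counts)   # [2, 1, 1, 0, 0]
# 'freq' scoring divides each count by the number of words in the document
freqs = [c / len(toy_doc) for c in counts]
print(freqs)    # [0.5, 0.25, 0.25, 0.0, 0.0]
###Output
_____no_output_____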
###Markdown
Bag-of-Words VectorsWe will use the **Keras API** to **convert sentences to encoded document vectors**. Although the `Tokenizer` class from TF Keras can also handle cleaning and vocabulary definition, it is better to do this ourselves so that we know exactly what we are doing.
###Code
def create_tokenizer(sentences):
    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(sentences)
return tokenizer
###Output
_____no_output_____
###Markdown
This process determines a consistent way to **convert each document to a fixed-length vector**, whose length is the total number of words in the vocabulary `vocab`. Documents can then be encoded using the Tokenizer by calling `texts_to_matrix()`. The function takes both a list of documents to encode and an encoding mode, which is the method used to score words in the document. Here we specify **freq** to score words based on their frequency within the document. This can be used to encode the loaded training and test data, for example:`Xtrain = tokenizer.texts_to_matrix(train_docs, mode='freq')``Xtest = tokenizer.texts_to_matrix(test_docs, mode='freq')`
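As an illustrative sketch (reusing the `clean_sentences` list built a few cells earlier, and assuming `Tokenizer` comes from `tensorflow.keras.preprocessing.text`), encoding just those five sentences looks like this:
###Code
# Sketch: encode the five cleaned example sentences with the Keras Tokenizer
from tensorflow.keras.preprocessing.text import Tokenizer  # harmless if already imported earlier
demo_tokenizer = create_tokenizer(clean_sentences)
demo_matrix = demo_tokenizer.texts_to_matrix(clean_sentences, mode='freq')
# One row per document; one column per word index known to the tokenizer (index 0 is reserved)
print(demo_matrix.shape)
print(demo_matrix[0])
###Output
_____no_output_____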
###Code
# #########################
# # Define the vocabulary #
# #########################
# from collections import Counter
# from nltk.corpus import stopwords
# stopwords = stopwords.words('english')
# stemmer = PorterStemmer()
# def clean_doc(doc):
# # split into tokens by white space
# tokens = doc.split()
# # prepare regex for char filtering
# re_punc = re.compile('[%s]' % re.escape(punctuation))
# # remove punctuation from each word
# tokens = [re_punc.sub('', w) for w in tokens]
# # filter out stop words
# tokens = [w for w in tokens if not w in stopwords]
# # filter out short tokens
# tokens = [word for word in tokens if len(word) >= 1]
# # Stem the token
# tokens = [stemmer.stem(token) for token in tokens]
# return tokens
# def add_doc_to_vocab(docs, vocab):
# '''
# input:
# docs: a list of sentences (docs)
# vocab: a vocabulary dictionary
# output:
# return an updated vocabulary
# '''
# for doc in docs:
# tokens = clean_doc(doc)
# vocab.update(tokens)
# return vocab
# def doc_to_line(doc, vocab):
# tokens = clean_doc(doc)
# # filter by vocab
# tokens = [token for token in tokens if token in vocab]
# line = ' '.join(tokens)
# return line
# def clean_docs(docs, vocab):
# lines = []
# for doc in docs:
# line = doc_to_line(doc, vocab)
# lines.append(line)
# return lines
# def create_tokenizer(sentences):
# tokenizer = Tokenizer()
# tokenizer.fit_on_texts(sentences)
# return tokenizer
# # Separate the sentences and the labels for training and testing
# train_x = list(corpus[corpus.split=='train'].sentence)
# train_y = np.array(corpus[corpus.split=='train'].label)
# print('train_x size: ', len(train_x))
# print('train_y size: ', len(train_y))
# test_x = list(corpus[corpus.split=='test'].sentence)
# test_y = np.array(corpus[corpus.split=='test'].label)
# print('test_x size: ', len(test_x))
# print('test_y size: ', len(test_y))
# # Instantiate a vocab object
# vocab = Counter()
# # Define a vocabulary for each fold
# vocab = add_doc_to_vocab(train_x, vocab)
# print('The number of vocab: ', len(vocab))
# # Clean the sentences
# train_x = clean_docs(train_x, vocab)
# test_x = clean_docs(test_x, vocab)
# # Define the tokenizer
# tokenizer = create_tokenizer(train_x)
# # encode data using freq mode
# Xtrain = tokenizer.texts_to_matrix(train_x, mode='freq')
# Xtest = tokenizer.texts_to_matrix(test_x, mode='freq')
###Output
train_x size: 5452
train_y size: 5452
test_x size: 500
test_y size: 500
The number of vocab: 6840
###Markdown
Training and Testing the Model 3 MLP Model 3Now, we will build Multilayer Perceptron (MLP) models to classify each encoded question into one of the six label categories.As you might have expected, the models are simply feedforward networks built from fully connected layers, called `Dense` in the `Keras` library.We will define our MLP neural network with very little trial and error, so it cannot be considered tuned for this problem. The configuration is as follows:- First hidden layer with 100 neurons, ReLU activation, and dropout of 0.5- Second hidden layer with 50 neurons, ReLU activation, and dropout of 0.5- Output layer with 6 neurons (one per category) and softmax activation- Optimizer: Adam (a solid default choice)- Loss function: sparse categorical cross-entropy (suited for a multiclass classification problem)
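Because the labels here are integer class ids rather than one-hot vectors, sparse categorical cross-entropy can consume them directly. A tiny made-up illustration of this loss, separate from the training pipeline:
###Code
# Illustrative sketch: sparse categorical cross-entropy on integer labels and predicted probabilities
import numpy as np
import tensorflow as tf
y_true = np.array([0, 3, 5])                                # made-up integer labels, one of 6 classes each
y_pred = tf.nn.softmax(tf.random.uniform((3, 6)), axis=-1)  # made-up probability rows for 3 samples
loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
print(loss.numpy())                                         # one loss value per sample
###Output
_____no_output_____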
###Code
def train_mlp_3(train_x, train_y, batch_size = 50, epochs = 10, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense( units=100, activation='relu', input_shape=(n_words,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=50, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=6, activation='softmax')
])
model.compile( loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size, epochs, verbose)
return model
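# Early stopping: halt training once validation accuracy has not improved for 10
# consecutive epochs and restore the weights of the best epoch (this is what later
# prints the "Restoring model weights from the end of the best epoch." messages).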
callbacks = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', min_delta=0,
patience=10, verbose=2,
mode='auto', restore_best_weights=True)
###Output
_____no_output_____
###Markdown
Train and Test the Model
###Code
#########################
# Define the vocabulary #
#########################
from collections import Counter
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# filter out stop words
tokens = [w for w in tokens if not w in stopwords]
# filter out short tokens
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
def doc_to_line(doc, vocab):
tokens = clean_doc(doc)
# filter by vocab
tokens = [token for token in tokens if token in vocab]
line = ' '.join(tokens)
return line
def clean_docs(docs, vocab):
lines = []
for doc in docs:
line = doc_to_line(doc, vocab)
lines.append(line)
return lines
def create_tokenizer(sentences):
tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences)
return tokenizer
def train_mlp_3(train_x, train_y, test_x, test_y, batch_size = 50, epochs = 20, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense( units=100, activation='relu', input_shape=(n_words,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=50, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=6, activation='softmax')
])
model.compile( loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
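    # validation_data is passed so that the EarlyStopping callback defined earlier can monitor 'val_accuracy'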
model.fit(train_x, train_y, batch_size, epochs, verbose, callbacks = [callbacks], validation_data=(test_x, test_y))
return model
# Separate the sentences and the labels for training and testing
train_x = list(corpus[corpus.split=='train'].sentence)
train_y = np.array(corpus[corpus.split=='train'].label)
print('train_x size: ', len(train_x))
print('train_y size: ', len(train_y))
test_x = list(corpus[corpus.split=='test'].sentence)
test_y = np.array(corpus[corpus.split=='test'].label)
print('test_x size: ', len(test_x))
print('test_y size: ', len(test_y))
# Instantiate a vocab object
vocab = Counter()
# Define a vocabulary for each fold
vocab = add_doc_to_vocab(train_x, vocab)
print('The number of vocab: ', len(vocab))
# Clean the sentences
train_x = clean_docs(train_x, vocab)
test_x = clean_docs(test_x, vocab)
# Define the tokenizer
tokenizer = create_tokenizer(train_x)
# encode data using freq mode
Xtrain = tokenizer.texts_to_matrix(train_x, mode='freq')
Xtest = tokenizer.texts_to_matrix(test_x, mode='freq')
# train the model
model = train_mlp_3(Xtrain, train_y, Xtest, test_y, epochs = 30)
# evaluate the model
loss, acc = model.evaluate(Xtest, test_y, verbose=0)
print('Test Accuracy: {}'.format(acc*100))
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 100) 684200
_________________________________________________________________
dropout (Dropout) (None, 100) 0
_________________________________________________________________
dense_1 (Dense) (None, 50) 5050
_________________________________________________________________
dropout_1 (Dropout) (None, 50) 0
_________________________________________________________________
dense_2 (Dense) (None, 6) 306
=================================================================
Total params: 689,556
Trainable params: 689,556
Non-trainable params: 0
_________________________________________________________________
###Markdown
Comparing the Word Scoring Methods When we use the `texts_to_matrix()` function, we are given 4 different methods for scoring words:- `binary`: words are marked as 1 (present) or 0 (absent)- `count`: words are counted by their number of occurrences (integer)- `tfidf`: words are scored by their frequency of occurrence in their own document, but are penalized if they are common across all documents- `freq`: words are scored by their frequency of occurrence in their own document
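The short sketch below makes the difference between the four modes concrete on two made-up toy documents (illustrative only, not part of the notebook's data):
###Code
# Compare the four texts_to_matrix scoring modes on toy documents
from tensorflow.keras.preprocessing.text import Tokenizer

toy_docs = ['good good movie', 'bad movie']
toy_tok = Tokenizer()
toy_tok.fit_on_texts(toy_docs)
for toy_mode in ['binary', 'count', 'tfidf', 'freq']:
    print(toy_mode)
    print(toy_tok.texts_to_matrix(toy_docs, mode=toy_mode))
###Output
_____no_output_____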
###Code
# prepare bag-of-words encoding of docs
def prepare_data(train_docs, test_docs, mode):
# create the tokenizer
tokenizer = Tokenizer()
# fit the tokenizer on the documents
tokenizer.fit_on_texts(train_docs)
# encode training data set
Xtrain = tokenizer.texts_to_matrix(train_docs, mode=mode)
# encode test data set
Xtest = tokenizer.texts_to_matrix(test_docs, mode=mode)
return Xtrain, Xtest
#########################
# Define the vocabulary #
#########################
from collections import Counter
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# filter out stop words
tokens = [w for w in tokens if not w in stopwords]
# filter out short tokens
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
def doc_to_line(doc, vocab):
tokens = clean_doc(doc)
# filter by vocab
tokens = [token for token in tokens if token in vocab]
line = ' '.join(tokens)
return line
def clean_docs(docs, vocab):
lines = []
for doc in docs:
line = doc_to_line(doc, vocab)
lines.append(line)
return lines
# prepare bag-of-words encoding of docs
def prepare_data(train_docs, test_docs, mode):
# create the tokenizer
tokenizer = Tokenizer()
# fit the tokenizer on the documents
tokenizer.fit_on_texts(train_docs)
# encode training data set
Xtrain = tokenizer.texts_to_matrix(train_docs, mode=mode)
# encode test data set
Xtest = tokenizer.texts_to_matrix(test_docs, mode=mode)
return Xtrain, Xtest
def train_mlp_3(train_x, train_y, test_x, test_y, batch_size = 50, epochs = 20, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense( units=100, activation='relu', input_shape=(n_words,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=50, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=6, activation='softmax')
])
model.compile( loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size, epochs, verbose, callbacks = [callbacks], validation_data=(test_x, test_y))
return model
# Separate the sentences and the labels for training and testing
train_x = list(corpus[corpus.split=='train'].sentence)
train_y = np.array(corpus[corpus.split=='train'].label)
print('train_x size: ', len(train_x))
print('train_y size: ', len(train_y))
test_x = list(corpus[corpus.split=='test'].sentence)
test_y = np.array(corpus[corpus.split=='test'].label)
print('test_x size: ', len(test_x))
print('test_y size: ', len(test_y))
# Run Experiment of 4 different modes
modes = ['binary', 'count', 'tfidf', 'freq']
results = pd.DataFrame()
for mode in modes:
print('mode: ', mode)
# Instantiate a vocab object
vocab = Counter()
# Define a vocabulary for each fold
vocab = add_doc_to_vocab(train_x, vocab)
# Clean the sentences
train_x = clean_docs(train_x, vocab)
test_x = clean_docs(test_x, vocab)
# encode data using freq mode
Xtrain, Xtest = prepare_data(train_x, test_x, mode)
# train the model
model = train_mlp_3(Xtrain, train_y, Xtest, test_y, verbose=0, epochs = 30)
# evaluate the model
loss, acc = model.evaluate(Xtest, test_y, verbose=0)
print('Test Accuracy: {}'.format(acc*100))
results[mode] = [acc*100]
print()
print(results)
###Output
train_x size: 5452
train_y size: 5452
test_x size: 500
test_y size: 500
mode: binary
Restoring model weights from the end of the best epoch.
Epoch 00021: early stopping
Test Accuracy: 74.40000176429749
mode: count
Restoring model weights from the end of the best epoch.
Epoch 00014: early stopping
Test Accuracy: 76.2000024318695
mode: tfidf
Restoring model weights from the end of the best epoch.
Epoch 00018: early stopping
Test Accuracy: 72.79999852180481
mode: freq
Restoring model weights from the end of the best epoch.
Epoch 00013: early stopping
Test Accuracy: 72.60000109672546
binary count tfidf freq
0 74.400002 76.200002 72.799999 72.600001
###Markdown
Summary
###Code
import seaborn as sns
results.boxplot()
plt.show()
results
report = results
report = report.to_excel('BoW_MLP_TREC_3.xlsx', sheet_name='model_3')
###Output
_____no_output_____
###Markdown
Training and Testing the Model 2 MLP Model 2 Now we will build a Multilayer Perceptron (MLP) model to classify the encoded questions into the six TREC question categories. As you might have expected, the model is simply a feedforward network with fully connected layers, called `Dense` in the `Keras` library. The network is defined with very little trial and error, so it cannot be considered tuned for this problem. The configuration is as follows:- First hidden layer with 100 neurons and ReLU activation function- Dropout layer with p = 0.5- Output layer with 6 neurons and softmax activation function- Optimizer: Adam (the best-performing learning algorithm in these experiments so far)- Loss function: sparse categorical cross-entropy (suited for a multiclass classification problem)
###Code
def train_mlp_2(train_x, train_y, batch_size = 50, epochs = 10, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense( units=100, activation='relu', input_shape=(n_words,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=6, activation='softmax')
])
model.compile( loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size, epochs, verbose)
return model
###Output
_____no_output_____
###Markdown
Comparing the Word Scoring Methods
###Code
#########################
# Define the vocabulary #
#########################
from collections import Counter
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# filter out stop words
tokens = [w for w in tokens if not w in stopwords]
# filter out short tokens
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
def doc_to_line(doc, vocab):
tokens = clean_doc(doc)
# filter by vocab
tokens = [token for token in tokens if token in vocab]
line = ' '.join(tokens)
return line
def clean_docs(docs, vocab):
lines = []
for doc in docs:
line = doc_to_line(doc, vocab)
lines.append(line)
return lines
# prepare bag-of-words encoding of docs
def prepare_data(train_docs, test_docs, mode):
# create the tokenizer
tokenizer = Tokenizer()
# fit the tokenizer on the documents
tokenizer.fit_on_texts(train_docs)
# encode training data set
Xtrain = tokenizer.texts_to_matrix(train_docs, mode=mode)
# encode test data set
Xtest = tokenizer.texts_to_matrix(test_docs, mode=mode)
return Xtrain, Xtest
def train_mlp_2(train_x, train_y, test_x, test_y, batch_size = 50, epochs = 20, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense( units=100, activation='relu', input_shape=(n_words,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=6, activation='softmax')
])
model.compile( loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size, epochs, verbose, callbacks = [callbacks], validation_data=(test_x, test_y))
return model
# Separate the sentences and the labels for training and testing
train_x = list(corpus[corpus.split=='train'].sentence)
train_y = np.array(corpus[corpus.split=='train'].label)
print('train_x size: ', len(train_x))
print('train_y size: ', len(train_y))
test_x = list(corpus[corpus.split=='test'].sentence)
test_y = np.array(corpus[corpus.split=='test'].label)
print('test_x size: ', len(test_x))
print('test_y size: ', len(test_y))
# Run Experiment of 4 different modes
modes = ['binary', 'count', 'tfidf', 'freq']
results = pd.DataFrame()
for mode in modes:
print('mode: ', mode)
# Instantiate a vocab object
vocab = Counter()
# Define a vocabulary for each fold
vocab = add_doc_to_vocab(train_x, vocab)
# Clean the sentences
train_x = clean_docs(train_x, vocab)
test_x = clean_docs(test_x, vocab)
# encode data using freq mode
Xtrain, Xtest = prepare_data(train_x, test_x, mode)
# train the model
model = train_mlp_2(Xtrain, train_y, Xtest, test_y, verbose=0, epochs = 30)
# evaluate the model
loss, acc = model.evaluate(Xtest, test_y, verbose=0)
print('Test Accuracy: {}'.format(acc*100))
results[mode] = [acc*100]
print()
print(results)
###Output
train_x size: 5452
train_y size: 5452
test_x size: 500
test_y size: 500
mode: binary
Restoring model weights from the end of the best epoch.
Epoch 00015: early stopping
Test Accuracy: 75.59999823570251
mode: count
Restoring model weights from the end of the best epoch.
Epoch 00015: early stopping
Test Accuracy: 75.0
mode: tfidf
Restoring model weights from the end of the best epoch.
Epoch 00012: early stopping
Test Accuracy: 72.79999852180481
mode: freq
Restoring model weights from the end of the best epoch.
Epoch 00020: early stopping
Test Accuracy: 74.19999837875366
binary count tfidf freq
0 75.599998 75.0 72.799999 74.199998
###Markdown
Summary
###Code
results.boxplot()
plt.show()
results
report = results
report = report.to_excel('BoW_MLP_TREC_2.xlsx', sheet_name='model_2')
###Output
_____no_output_____
###Markdown
Training and Testing the Model 1 MLP Model 1 Now we will build a Multilayer Perceptron (MLP) model to classify the encoded questions into the six TREC question categories. As you might have expected, the model is simply a feedforward network with fully connected layers, called `Dense` in the `Keras` library. The network is defined with very little trial and error, so it cannot be considered tuned for this problem. The configuration is as follows:- First hidden layer with 50 neurons and ReLU activation function- Dropout layer with p = 0.5- Output layer with 6 neurons and softmax activation function- Optimizer: Adam (the best-performing learning algorithm in these experiments so far)- Loss function: sparse categorical cross-entropy (suited for a multiclass classification problem)
###Code
def train_mlp_1(train_x, train_y, batch_size = 50, epochs = 10, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense( units=50, activation='relu', input_shape=(n_words,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=6, activation='softmax')
])
model.compile( loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size, epochs, verbose)
return model
###Output
_____no_output_____
###Markdown
Comparing the Word Scoring Methods
###Code
#########################
# Define the vocabulary #
#########################
from collections import Counter
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
stemmer = PorterStemmer()
def clean_doc(doc):
# split into tokens by white space
tokens = doc.split()
# prepare regex for char filtering
re_punc = re.compile('[%s]' % re.escape(punctuation))
# remove punctuation from each word
tokens = [re_punc.sub('', w) for w in tokens]
# filter out stop words
tokens = [w for w in tokens if not w in stopwords]
# filter out short tokens
tokens = [word for word in tokens if len(word) >= 1]
# Stem the token
tokens = [stemmer.stem(token) for token in tokens]
return tokens
def add_doc_to_vocab(docs, vocab):
'''
input:
docs: a list of sentences (docs)
vocab: a vocabulary dictionary
output:
return an updated vocabulary
'''
for doc in docs:
tokens = clean_doc(doc)
vocab.update(tokens)
return vocab
def doc_to_line(doc, vocab):
tokens = clean_doc(doc)
# filter by vocab
tokens = [token for token in tokens if token in vocab]
line = ' '.join(tokens)
return line
def clean_docs(docs, vocab):
lines = []
for doc in docs:
line = doc_to_line(doc, vocab)
lines.append(line)
return lines
# prepare bag-of-words encoding of docs
def prepare_data(train_docs, test_docs, mode):
# create the tokenizer
tokenizer = Tokenizer()
# fit the tokenizer on the documents
tokenizer.fit_on_texts(train_docs)
# encode training data set
Xtrain = tokenizer.texts_to_matrix(train_docs, mode=mode)
# encode test data set
Xtest = tokenizer.texts_to_matrix(test_docs, mode=mode)
return Xtrain, Xtest
def train_mlp_1(train_x, train_y, test_x, test_y, batch_size = 50, epochs = 20, verbose =2):
n_words = train_x.shape[1]
model = tf.keras.models.Sequential([
tf.keras.layers.Dense( units=50, activation='relu', input_shape=(n_words,)),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense( units=6, activation='softmax')
])
model.compile( loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(train_x, train_y, batch_size, epochs, verbose, callbacks = [callbacks], validation_data=(test_x, test_y))
return model
# Separate the sentences and the labels for training and testing
train_x = list(corpus[corpus.split=='train'].sentence)
train_y = np.array(corpus[corpus.split=='train'].label)
print('train_x size: ', len(train_x))
print('train_y size: ', len(train_y))
test_x = list(corpus[corpus.split=='test'].sentence)
test_y = np.array(corpus[corpus.split=='test'].label)
print('test_x size: ', len(test_x))
print('test_y size: ', len(test_y))
# Run Experiment of 4 different modes
modes = ['binary', 'count', 'tfidf', 'freq']
results = pd.DataFrame()
for mode in modes:
print('mode: ', mode)
# Instantiate a vocab object
vocab = Counter()
# Define a vocabulary for each fold
vocab = add_doc_to_vocab(train_x, vocab)
# Clean the sentences
train_x = clean_docs(train_x, vocab)
test_x = clean_docs(test_x, vocab)
# encode data using freq mode
Xtrain, Xtest = prepare_data(train_x, test_x, mode)
# train the model
model = train_mlp_1(Xtrain, train_y, Xtest, test_y, verbose=0, epochs = 30)
# evaluate the model
loss, acc = model.evaluate(Xtest, test_y, verbose=0)
print('Test Accuracy: {}'.format(acc*100))
results[mode] = [acc*100]
print()
print(results)
###Output
train_x size: 5452
train_y size: 5452
test_x size: 500
test_y size: 500
mode: binary
Restoring model weights from the end of the best epoch.
Epoch 00015: early stopping
Test Accuracy: 75.0
mode: count
Restoring model weights from the end of the best epoch.
Epoch 00015: early stopping
Test Accuracy: 74.59999918937683
mode: tfidf
Restoring model weights from the end of the best epoch.
Epoch 00016: early stopping
Test Accuracy: 73.60000014305115
mode: freq
Restoring model weights from the end of the best epoch.
Epoch 00021: early stopping
Test Accuracy: 74.00000095367432
binary count tfidf freq
0 75.0 74.599999 73.6 74.000001
###Markdown
Summary
###Code
results.boxplot()
plt.show()
results
report = results
report = report.to_excel('BoW_MLP_TREC_1.xlsx', sheet_name='model_1')
###Output
_____no_output_____ |
bronze/Q28_Quantum_State.ipynb | ###Markdown
$ \newcommand{\bra}[1]{\langle #1|} $$ \newcommand{\ket}[1]{|#1\rangle} $$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $$ \newcommand{\dot}[2]{ #1 \cdot #2} $$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $$ \newcommand{\mypar}[1]{\left( #1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $$ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $ Quantum State _prepared by Abuzer Yakaryilmaz_[](https://youtu.be/6OE96rgQz8s) _The overall probability must be 1 when we observe a quantum system._For example, the following vectors cannot be valid quantum states:$$ \myvector{ \dfrac{1}{2} \\ \dfrac{1}{2} } \mbox{ and } \myvector{ \dfrac{\sqrt{3}}{2} \\ \dfrac{1}{\sqrt{2}} }.$$For the first vector, the probabilities of observing the states $\ket{0} $ and $ \ket{1} $ are $ \dfrac{1}{4} $. So, the overall probability of getting a result is $ \dfrac{1}{4} + \dfrac{1}{4} = \dfrac{1}{2} $, which is less than 1.For the second vector, the probabilities of observing the states $\ket{0} $ and $ \ket{1} $ are respectively $ \dfrac{3}{4} $ and $ \dfrac{1}{2} $. So, the overall probability of getting a result is $ \dfrac{3}{4} + \dfrac{1}{2} = \dfrac{5}{4} $, which is greater than 1. The summation of amplitude squares must be 1 for a valid quantum state. More formally, a quantum state can be represented by a vector having length 1, and vice versa.The summation of amplitude squares gives the square of the length of the vector.But this summation is 1, and its square root is also 1. So, we can use the term length in the definition. Technical notes: We represent a quantum state as $ \ket{u} $ instead of $ u $. Remember the relation between the length and dot product: $ \norm{u} = \sqrt{\dot{u}{u}} $. In quantum computation, we use inner product instead of dot product, which is defined on complex numbers. 
By using bra-ket notation, $ \norm{ \ket{u} } = \sqrt{ \braket{u}{u} } = 1 $, or equivalently $ \braket{u}{u} = 1 $, where $ \braket{u}{u} $ is a short form of $ \bra{u}\ket{u} $. For real-valued vectors, $ \braket{v}{v} = \dot{v}{v} $. Task 1 If the following vectors are valid quantum states defined with real numbers, then what can be the values of $a$ and $b$?$$ \ket{v} = \myrvector{a \\ -0.1 \\ -0.3 \\ 0.4 \\ 0.5} ~~~~~ \mbox{and} ~~~~~ \ket{u} = \myrvector{ \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{b}} \\ -\frac{1}{\sqrt{3}} }.$$
###Code
#
# your code is here
# (you may find the values by hand (in mind) as well)
#
###Output
_____no_output_____
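###Markdown
As a quick numeric sanity check of the two invalid example vectors shown above (a minimal sketch, not part of the original notebook):
###Code
# squared amplitudes of the first example vector: (1/2)^2 + (1/2)^2 = 1/2 < 1
print(0.5**2 + 0.5**2)
# squared amplitudes of the second example vector: (sqrt(3)/2)^2 + (1/sqrt(2))^2 = 3/4 + 1/2 = 5/4 > 1
print((3**0.5 / 2)**2 + (1 / 2**0.5)**2)
###Output
_____no_output_____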
###Markdown
click for our solution Quantum Operators Once the quantum state is defined, the definition of quantum operator is very easy.Any length preserving (square) matrix is a quantum operator, and vice versa. Task 2Remember Hadamard operator:$$ H = \hadamard.$$ Randomly create a 2-dimensional quantum state, and test whether Hadamard operator preserves its length or not.Write a function that returns a randomly created 2-dimensional quantum state.Hint: Pick two random values between -100 and 100 for the amplitudes of state 0 and state 1 Find an appropriate normalization factor to divide each amplitude such that the length of quantum state should be 1 Write a function that determines whether a given vector is a valid quantum state or not.(Due to precision problem, the summation of squares may not be exactly 1 but very close to 1, e.g., 0.9999999999999998.)Repeat 10 times: Randomly pick a quantum state Check whether the picked quantum state is valid Multiply Hadamard operator with the randomly created quantum state Check whether the quantum state in result is valid
###Code
#
# you may define your first function in a separate cell
#
from random import randrange
def random_quantum_state():
# quantum state
quantum_state=[0,0]
#
#
#
return quantum_state
#
# your code is here
#
###Output
_____no_output_____ |
RNA_loading_human-Copy1.ipynb | ###Markdown
load tons of datasets (~60,000 RNAseq samples)
###Code
from taigapy import TaigaClient
tc = TaigaClient()
from depmapomics import tracker as track
from depmapomics import expressions
import dalmatian as dm
from gsheets import Sheets
import pandas as pd
import numpy as np
MY_ID = '~/.client_secret.json'
MYSTORAGE_ID = "~/.storage.json"
Sheets.from_files(MY_ID, MYSTORAGE_ID)
#autoreload
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
CCLE + TCGA
###Code
# load from taiga public (figshare link)
# load internal expression,
# latest version can be found at https://depmap.org/portal/download/
# can also be loaded like so pd.read_csv('gs://ccle_default_params/celligner_ex/CCLE_expression.csv.gz', index_col=0)
CCLE_expression = tc.get(name='internal-21q3-fe4c',
file='CCLE_expression_full') #40,000x1,500
# load TCGA expression
# this dataset was generated from ,using this script:
# can be found here: pd.read_csv('gs://ccle_default_params/celligner_ex/TCGA_expression.csv.gz', index_col=0)
TCGA_expression = tc.get(name='celligner-input-9827',
file='tumor_expression') # 40,000x13,000
# loading annotations
CCLE_annotation = track.getTracker() # the function uses pygsheets to load this: REFSHEET_URL=https://docs.google.com/spreadsheets/d/1Pgb5fIClGnErEqzxpU7qqX6ULpGTDjvzWwDN8XUJKIY
# Sheets.from_files(MY_ID, MYSTORAGE_ID).get(REFSHEET_URL).sheets[0].to_frame(index_col=0)
# you can also get it from pd.read_csv('gs://ccle_default_params/celligner_ex/CCLE_annotation.csv.gz', index_col=0)
# we have .. replicates in CCLE
len(CCLE_annotation[(CCLE_annotation.blacklist==0)&(CCLE_annotation.version>1)&(CCLE_annotation.datatype=="rna")])
# can be loaded from
# pd.read_csv('gs://ccle_default_params/celligner_ex/TCGA_annotation.csv.gz', index_col=0)
TCGA_annotation = tc.get(name='celligner-input-9827',
file='tumor_annotations') # generated manually
CCLE_annotation.iloc[0, :-25]
rename = {
"stripped_cell_line_name": "sample_id",
"primary_disease": "disease_type",
"subtype": 'disease_subtype',
'sampleID': 'sample_id',
'age_at_dx':"age",
"gender": "sex",
"site_id": "site_id",
"lineage": "primary_site",
"disease": "disease_type",
"subtype": "disease_subtype",
'Tumor_type': 'tumor_type',
'Sample_type': 'sample_type',
'RNA_Seq_cancertype': 'disease_type',
"CCLF_ID": 'sample_id',
"Sequencing on Tissue or Cell model? (MT confirm)": 'tissue_type',
"Days to First Agg": 'exp_date',
"Contamination % (First Agg)": 'contamination',
"collection": 'origin',
"Original Material Type": 'history',
"sampleID": 'sample_id',
'lineage': 'tissue_type',
'subtype': "disease_type",
"type": "cell_type",
"Sex": "sex",
'Phase':'stage',
'sample_source': 'patient_id',
    'tc': 'tumor_purity'
}
CCLE_annotation = CCLE_annotation.rename(columns=rename)[['origin', 'sequencing_type', 'doublingt','hasebv'] + list(rename.values())]
CCLE_annotation['method']="bulk"
CCLE_annotation['cell_type']="historical_CL"
TCGA_annotation.iloc[0]
TCGA_annotation = TCGA_annotation.rename(columns=rename)[rename.values()]
TCGA_annotation['method']="bulk"
TCGA_annotation['cell_type']="tumor"
TCGA_annotation['metastasis']="Primary"
###Output
_____no_output_____
###Markdown
CCLF
###Code
cclf_orga_info = tc.get(name='cclf-organoids-c23d', version=1, file='cclf_orga_info')
cclf_orga_info = cclf_orga_info.rename(columns=rename)[rename.values()]
cclf_orga_rnaseq = tc.get(name='cclf-organoids-c23d', version=1, file='cclf_orga_rnaseq').T # 40,000x24
cclf_orga_info.index = [i.split("_")[1] for i in cclf_orga_info.sample_id]
cclf_orga_rnaseq.index = [i.split('_')[0][:-1] for i in cclf_orga_rnaseq.index]
cclf_orga_info
cclf_orga_info['sequencer'] =
cclf_orga_info['method'] =
#cclf other
cclfrna = dm.WorkspaceManager("nci-mimoun-bi-org/CCLF_RNA_2_0").get_samples() #40,000x160
cclfrna_anno = cclfrna[["external_id_rna"]].replace({'NA': np.nan})
cclfrna_annot = Sheets.from_files(MY_ID, MYSTORAGE_ID).get("https://docs.google.com/spreadsheets/d/1O9IV_v2vMbebkk_KDWu3LdKBQ16c8lThJKiiWvRxMUo").sheets[2].to_frame()
cclfrna_annot2 = Sheets.from_files(MY_ID, MYSTORAGE_ID).get("https://docs.google.com/spreadsheets/d/1O9IV_v2vMbebkk_KDWu3LdKBQ16c8lThJKiiWvRxMUo").sheets[3].to_frame()
# get it from https://docs.google.com/spreadsheets/d/1O9IV_v2vMbebkk_KDWu3LdKBQ16c8lThJKiiWvRxMUo and get
#files, failed, _, _, lowqual, _ = await expressions.postProcess("nci-mimoun-bi-org/CCLF_RNA_2_0", "all_samples", samplesetToLoad = "all_samples", compute_enrichment=False, trancriptLevelCols = ['rsem_transcripts_expected_count', 'rsem_transcripts_tpm'], geneLevelCols = ["rsem_genes_tpm", "rsem_genes_expected_count"], save_output="data/")
#cclfrna = files['rsem_genes_tpm']
cclfrna = pd.read_csv('data/expression_genes_tpm.csv', index_col=0)
ina = (cclfrna_annot2['Passage Number'].isna() | (cclfrna_annot2['Passage Number']=="Unknown")) & ~(cclfrna_annot2["Passage Number on Receipt"].isna() | (cclfrna_annot2["Passage Number on Receipt"]=="Unknown"))
cclfrna_annot2.loc[ina, "Passage Number"] = cclfrna_annot2.loc[ina, "Passage Number on Receipt"].values
ina = (cclfrna_annot2['Gender'].isna() | (cclfrna_annot2['Gender']=="Unknown")) & ~(cclfrna_annot2["Gender.1"].isna() | (cclfrna_annot2["Gender.1"]=="Unknown"))
cclfrna_annot2.loc[ina, "Gender"] = cclfrna_annot2.loc[ina, "Gender.1"].values
ina = (cclfrna_annot2['Gender'].isna() | (cclfrna_annot2['Gender']=="Unknown")) & ~(cclfrna_annot2["FP Gender"].isna() | (cclfrna_annot2["FP Gender"]=="Unknown"))
cclfrna_annot2.loc[ina, "Gender"] = cclfrna_annot2.loc[ina, "FP Gender"].values
ina = (cclfrna_annot2['Race'].isna() | (cclfrna_annot2['Race']=="Unknown")) & ~(cclfrna_annot2["Ethnicity"].isna() | (cclfrna_annot2["Ethnicity"]=="Unknown"))
cclfrna_annot2.loc[ina, "Race"] = cclfrna_annot2.loc[ina, "Ethnicity"].values
cclfrna_annot2.iloc[0]
cclfrna_annot.iloc[0]
cclfrna_annot2 = cclfrna_annot2.set_index('Collaborator Sample ID')[["Age",
"Gender",
"Tumor Type",
"Tissue Site",
"Primary Disease",
"Race",
"Culture Medium",
"Passage Number",]]
cclfrna_annot = cclfrna_annot[[
'Sequencing on Tissue or Cell model? (MT confirm)',
'External ID for BAM',
'Product',
'RIN',
'Collaborator Sample ID',
'Original Material Type',
'Collaborator Participant ID',
'Aggregated',
'Actual Seq Technology',
'Contamination %',
]].set_index('Collaborator Sample ID', drop=True)
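# NOTE: `h` is a helper module that is never imported in this notebook (presumably from the
# authors' utility package); `h.dups(index)` is assumed to return the index values that
# appear more than once. A minimal stand-in under that assumption could be:
# from collections import Counter
# def dups(seq):
#     return [k for k, v in Counter(seq).items() if v > 1]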
for val in h.dups(cclfrna_annot2.index):
for i in range(len(cclfrna_annot2.loc[val])-1):
if cclfrna_annot2.loc[val].iloc[0].isna().sum() > cclfrna_annot2.loc[val].iloc[i+1].isna().sum():
cclfrna_annot2.iloc[np.argwhere(cclfrna_annot2.index == val).flatten()[0]] = cclfrna_annot2.loc[val].iloc[i+1].values
cclfrna_annot2 = cclfrna_annot2[~cclfrna_annot2.index.duplicated(keep='first')]
for val in h.dups(cclfrna_annot.index):
for i in range(len(cclfrna_annot.loc[val])-1):
if cclfrna_annot.loc[val].iloc[0].isna().sum() > cclfrna_annot.loc[val].iloc[i+1].isna().sum():
cclfrna_annot.iloc[np.argwhere(cclfrna_annot.index == val).flatten()[0]] = cclfrna_annot.loc[val].iloc[i+1].values
cclfrna_annot = cclfrna_annot[~cclfrna_annot.index.duplicated(keep='first')]
cclfrna_annot = pd.concat([cclfrna_annot, cclfrna_annot2], axis=1)
for i, val in cclfrna_annot.iterrows():
cclfrna_anno.loc[cclfrna_anno.external_id_rna==i, cclfrna_annot.columns] = val.values
del cclfrna_annot
###Output
_____no_output_____
###Markdown
MET500 and PDXs
###Code
met500_meta.iloc[0]
tcga_dict = {
"LAML": "Acute Myeloid Leukemia",
"ACC": "Adrenocortical carcinoma",
"BLCA": "Bladder Urothelial Carcinoma",
"BOCA": "Bone Cancer",
"LGG": "Brain Lower Grade Glioma",
"BRCA": "Breast invasive carcinoma",
"CESC": "Cervical squamous cell carcinoma and endocervical adenocarcinoma",
"CHOL": "Cholangiocarcinoma",
"CLLE": "Chronic Lymphocytic Leukemia",
"CMDI": "Chronic Myeloid Disorders",
"COAD": "Colon adenocarcinoma",
"COLO": "Colorectal Cancer",
"COADREAD": "Colorectal cancer",
"EOPC": "Early Onset Prostate Cancer",
"ESAD": "Esophageal Adenocarcinoma",
"ESCA": "Esophageal carcinoma",
"CHOL": "Gallbladder cancer",
"GBM": "Glioblastoma multiforme",
"HNSC": "Head and Neck squamous cell carcinoma",
"KDNY": "Kidney Cancer",
"KICH": "Kidney Chromophobe",
"KIRC": "Kidney renal clear cell carcinoma",
"KIRP": "Kidney renal papillary cell carcinoma",
"LIRI": "Liver Cancer",
"LICA": "Liver Cancer",
"LINC": "Liver Cancer",
"HCC": "Liver hepatocellular carcinoma",
"LIHC": "Liver hepatocellular carcinoma",
"LGG": "Lower Grade GLioma",
"LUNG": "Lung Cancer",
"LUAD": "Lung adenocarcinoma",
"LUSC": "Lung squamous cell carcinoma",
"DLBC": "Lymphoid Neoplasm Diffuse Large B-cell Lymphoma",
"MCTP": "MCTP",
"MALY": "Malignant Lymphoma",
"MESO": "Mesothelioma",
"NBL": "Neuroblastoma",
"ORCA": "Oral Cancer",
"MISC": "Other Cancer",
"OV": "Ovarian serous cystadenocarcinoma",
"PACA": "Pancreatic Cancer",
"PAEN": "Pancreatic Cancer Endocrine neoplasms",
"PAAD": "Pancreatic adenocarcinoma",
"PBCA": "Pediatric Brain Cancer",
"PCPG": "Pheochromocytoma and Paraganglioma",
"PRAD": "Prostate adenocarcinoma",
"READ": "Rectum adenocarcinoma",
"RECA": "Renal Cancer",
"SARC": "Sarcoma",
"SECR": "Secretory Cancer",
"SKCM": "Skin Cutaneous Melanoma",
"STAD": "Stomach adenocarcinoma",
"TGCT": "Testicular Germ Cell Tumor",
"TGCT": "Testicular Germ Cell Tumors",
"THYM": "Thymoma",
"THYM": "Thymoma",
"THCA": "Thyroid carcinoma",
"UCS": "Uterine Carcinosarcoma",
"UCEC": "Uterine Corpus Endometrial Carcinoma",
"UVM": "Uveal Melanoma",
"ACC": "adrenocortical carcinoma",
}
# met500
met500_meta = tc.get(name='met500-fc3c', file='met500_meta')
met500_TPM = tc.get(name='met500-fc3c', file='met500_TPM') #20,979x868 matrix
# map the TCGA cohort codes to full disease names
met500_meta = met500_meta.replace({"cohort": tcga_dict})
#Novartis_PDX
Novartis_PDX_ann = tc.get(name='pdx-data-3d29', file='Novartis_PDX_ann')
Novartis_PDX_TPM = tc.get(name='pdx-data-3d29', file='Novartis_PDX_TPM').T # 38,087x445
#pediatric_PDX
pediatric_PDX_ann = tc.get(name='pdx-data-3d29', file='pediatric_PDX_ann')
pediatric_PDX_TPM = tc.get(name='pdx-data-3d29', file='pediatric_PDX_TPM') #80,000x250
met500_meta = met500_meta.rename(columns={**rename, **{'subtype': "disease_type"}}).set_index('sample_id', drop=True)[rename.values()]
met500_meta['sequencer'] = ""
met500_meta['method'] = ""
pediatric_PDX_ann.iloc[0]
[(i.split('me patient as ')[-1].split(' (')[0],v) if type(i) is str and 'ame patient' in i else '' for v, i in pediatric_PDX_ann[["sampleID","Other_info1"]].values]
pediatric_PDX_ann['participant_id'] = pediatric_PDX_ann.index
#created frrom manual inspection
samepatient = [('NCH-CA-2', 'NCH-CA-1'), ('ALL-105', 'ALL-102', "ALL-115"), ('ALL-46', 'ALL-121'), ('ALL-25', 'ALL-61'), ('ALL-81', 'ALL-80'), ('ALL-32', 'ALL-90'), ('ALL-58', 'ALL-123'), ('ALL-82', 'ALL-83'), ("COG-N-623x", "COG-N-603x"), ("COG-N-453x","COG-N-452x"), ("COG-N-618x", "COG-N-619x"), ('22909PNET', '9850PNET'), ('OS-34', 'OS-34-SJ'), ('OS-36', 'OS-36-SJ', 'OS-32'), ('Rh-30R', 'Rh-30')]
for val in samepatient:
for i in val[1:]:
pediatric_PDX_ann.loc[i, 'participant_id']=val[0]
pediatric_PDX_ann['age'] = ['adult' if i =='Adult' else 'child' for i in pediatric_PDX_ann['Other_info1']]
pediatric_PDX_ann = pediatric_PDX_ann.rename(columns={**rename, **{'subtype': "disease_type"}})[rename.values()]
pediatric_PDX_ann['sequencer'] = ""
pediatric_PDX_ann['method'] = ""
Novartis_PDX_ann.iloc[0]
Novartis_PDX_ann = Novartis_PDX_ann.rename(columns=rename).set_index('sample_id', drop=True)
Novartis_PDX_ann['sequencer'] = ""
Novartis_PDX_ann['method'] = ""
###Output
_____no_output_____
###Markdown
tumor inf elife
###Code
elife_tumorinf = tc.get(name='tumor-infiltration-3307', version=1, file='elife_tumorinf')
elife_tumorinf = elife_tumorinf.rename(columns={"Bcells": "B-cell", "CAFs": "CAF", "CD4_Tcells": "CD4_T-cells", "CD8_Tcells": "CD8_T-cells","macrophage": "macrophage", "Endothelial": "endothelial", "NKcells": "NK-cell"})
elife_tumorinf_ann = pd.DataFrame()
elife_tumorinf_ann["sample_ID"] = elife_tumorinf.columns
elife_tumorinf_ann["tissue_type"] = elife_tumorinf.columns
elife_tumorinf_ann["cell_type"] = "normal"
elife_tumorinf_ann['sequencer'] = ""
elife_tumorinf_ann['method'] = ""
elife_tumorinf_ann['age'] = ""
elife_tumorinf_ann['sex'] = ""
###Output
_____no_output_____
###Markdown
tirosh's melanoma
###Code
melanoma = tc.get(name='tirosh-melanoma-scrnaseq-60f0', file='melanoma')
melanoma.columns = [i.replace('-', '_').replace('Cy', "CY").replace('cy', "CY").replace('CY88C', 'CY88_C').replace('CY89A', "CY89_A").replace('CY89C', 'CY89_C').replace('CY89F', 'CY89_F').replace('CY89N', 'CY89_N').replace('CY94C', 'CY94_C') for i in melanoma.columns]
melanoma_ann = pd.DataFrame()
typ={1:"normal", 2:"tumor",0: np.nan}
orig={1:"melanoma", 2:"B-cell", 3: "macrophage", 4: "endothelial", 5: "CAF", 6:"NK-cell", 0: np.nan}
melanoma_ann['age'] = [int(i) for i in melanoma.loc['tumor']]
melanoma_ann["cell_type"] = [typ[int(i)] for i in melanoma.loc['malignant(1=no,2=yes,0=unresolved)']]
melanoma_ann['tissue_type'] = [orig[int(i)] for i in melanoma.loc['non-malignant cell type (1=T,2=B,3=Macro.4=Endo.,5=CAF;6=NK)']]
melanoma_ann['name'] = [i.split('_')[0] for i in melanoma.columns]
melanoma_ann['sample_id'] = melanoma.columns
melanoma_ann['other'] = [i.split('_')[-2] for i in melanoma.columns]
melanoma_ann['sequencer'] =
melanoma_ann['method'] =
###Output
_____no_output_____
###Markdown
GTEX
###Code
from anndata import AnnData, read_h5ad
#! curl https://storage.googleapis.com/gtex_analysis_v9/snrna_seq_data/GTEx_8_tissues_snRNAseq_atlas_071421.public_obs.h5ad --output temp/gtex_8_atlas_public.h5ad
## GTEX additional
# https://storage.googleapis.com/gtex_external_datasets/eyegex_data/rna_seq_data/EyeGEx_retina_combined_genelevel_expectedcounts_byrid_nooutlier.tpm.matrix.gct
# https://storage.googleapis.com/gtex_external_datasets/eyegex_data/annotations/EyeGEx_meta_combined_inferior_retina_summary_deidentified_geo_ids.csv
gtex_v9 = read_h5ad("temp/gtex_8_atlas_public.h5ad") #209,126 × 17,695
gtex_v9.obs = gtex_v9.obs[["Age_bin","Sex","Sample ID", "Participant ID", "RIN score from PAXgene tissue Aliquot", "Tissue"]].rename(columns={**rename, **{"tissue": "tissue_type", "Tissue": "collection_site"}})
gtex_v9.obs['sequencer']="Illumina TrueSeq"
# gtex_add =  # 80,000 x 500
###Output
_____no_output_____
###Markdown
THEIS LAB scRNAseq datasets
###Code
# https://theislab.github.io/sfaira-portal/Datasets  #50,000x13,000
###Output
_____no_output_____
###Markdown
HCMI
###Code
# HCMI dataset
# Code to generate this dataset can be found here:
# https://github.com/broadinstitute/hcmi-processing/blob/main/hcmi-rna-analysis-210226.ipynb
hcmi_ltpm = tc.get(name='hcmi-data-ac4b', file='hcmi_ltpm').T # 60486 x 157
hcmi_sample_info = tc.get(name='hcmi-data-ac4b', file='hcmi_sample_info')
#sample_info = tc.get(name='hcmi-data-ac4b', file='sample-info')
set(hcmi_sample_info.Race)
hcmi_sample_info.iloc[0]
hcmi_sample_info.columns
# rename {'Case ID': 'patient_ID', 'Primary Site': 'primary_site', 'subtype', 'lineage',
###Output
_____no_output_____
###Markdown
GDSC
###Code
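# E-MTAB-3610: ArrayExpress accession holding the Affymetrix expression profiles of the ~1,000 GDSC cell lines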
for i in range(1):
    val = "E-MTAB-3610.raw." + str(i + 1) + ".zip"  # str() needed; the raw archives are numbered from 1
! curl https://www.ebi.ac.uk/arrayexpress/files/E-MTAB-3610/$val -o temp/$val
! gunzip temp/$val # 40,000x1200
###Output
_____no_output_____
###Markdown
L1000 dataset
###Code
# you will need python2
import cmapPy.pandasGEXpress.parse_gct as pg
pg.parse('temp/'+"level5_beta_trt_misc_n8283x12328.gctx")
# you will need R > 4.0
# https://www.charlesbordet.com/en/how-to-upgrade-to-R-4-0-0-on-debian/#the-naive-solution
! R -e "if(!requireNamespace('BiocManager', quietly = TRUE)){install.packages('BiocManager', repos='http://cran.us.r-project.org')};BiocManager::install('cmapR');"
folder = "gs://ccle_default_params/celligner_ex/"
for val in ["level5_beta_ctl_n58022x12328.gctx",
"level5_beta_trt_cp_n720216x12328.gctx",
"level5_beta_trt_misc_n8283x12328.gctx",]:
#"level5_beta_trt_oe_n34171x12328.gctx",
#"level5_beta_trt_sh_n238351x12328.gctx",
#"level5_beta_trt_xpr_n142901x12328.gctx",]:
cmd = "gsutil cp " + folder + val + " temp/"
! $cmd
    res = pg.parse('temp/'+val)
###Output
Copying gs://ccle_default_params/celligner_ex/level5_beta_trt_cp_n720216x12328.gctx...
==> NOTE: You are downloading one or more large file(s), which would
run significantly faster if you enabled sliced object downloads. This
feature is enabled by default but requires that compiled crcmod be
installed (see "gsutil help crcmod").
| [1 files][ 33.1 GiB/ 33.1 GiB] 51.2 MiB/s
Operation completed over 1 objects/33.1 GiB.
###Markdown
encode
###Code
todl = h.fileToList('data/encode_rna.txt')
todl
# 40,000 x 1100
report = pd.read_csv('data/encode_report.tsv', sep="\t", skiprows=1)
report.iloc[0]
###Output
_____no_output_____
###Markdown
NCI 60 tumor cell atlas other random datasets from SRA ICGC
###Code
# 40,000 x 2,000 ALL NONE TCGA ICGC
for i in ["https://dcc.icgc.org/api/v1/download\?fn\=/current/Projects/BOCA-FR/exp_seq.BOCA-FR.tsv.gz",
"https://dcc.icgc.org/api/v1/download\?fn\=/current/Projects/BPLL-FR/exp_seq.BPLL-FR.tsv.gz",
"https://dcc.icgc.org/api/v1/download\?fn\=/current/Projects/BRCA-KR/exp_seq.BRCA-KR.tsv.gz",
"https://dcc.icgc.org/api/v1/download\?fn\=/current/Projects/LICA-FR/exp_seq.LICA-FR.tsv.gz",
"https://dcc.icgc.org/api/v1/download\?fn\=/current/Projects/LIRI-JP/exp_seq.LIRI-JP.tsv.gz",
"https://dcc.icgc.org/api/v1/download\?fn\=/current/Projects/ORCA-IN/exp_seq.ORCA-IN.tsv.gz",
"https://dcc.icgc.org/api/v1/download\?fn\=/current/Projects/OV-AU/exp_seq.OV-AU.tsv.gz",
"https://dcc.icgc.org/api/v1/download\?fn\=/current/Projects/PACA-AU/exp_seq.PACA-AU.tsv.gz",
"https://dcc.icgc.org/api/v1/download\?fn\=/current/Projects/PACA-CA/exp_seq.PACA-CA.tsv.gz",
"https://dcc.icgc.org/api/v1/download\?fn\=/current/Projects/PRAD-CA/exp_seq.PRAD-CA.tsv.gz",
"https://dcc.icgc.org/api/v1/download\?fn\=/current/Projects/PRAD-FR/exp_seq.PRAD-FR.tsv.gz",
"https://dcc.icgc.org/api/v1/download\?fn\=/current/Projects/CLLE-ES/exp_seq.CLLE-ES.tsv.gz",
"https://dcc.icgc.org/api/v1/download\?fn\=/current/Projects/MALY-DE/exp_seq.MALY-DE.tsv.gz",
"https://dcc.icgc.org/api/v1/download\?fn\=/current/Projects/PAEN-AU/exp_seq.PAEN-AU.tsv.gz",
"https://dcc.icgc.org/api/v1/download\?fn\=/current/Projects/RECA-EU/exp_seq.RECA-EU.tsv.gz"]:
    ! wget $i -P data/  # -P saves the file into data/ (-o would only write the wget log there)
val = pd.read_csv('data/download?fn=%2Fcurrent%2FProjects%2FBOCA-FR%2Fexp_seq.BOCA-FR.tsv.gz', sep='\t')
val
###Output
_____no_output_____
###Markdown
st jude
###Code
pd.read_csv('') # 40,000 x 3500
###Output
_____no_output_____
###Markdown
DUOS datasets
###Code
#https://duos.broadinstitute.org/dataset_catalog
###Output
_____no_output_____ |
tensorflow_red_simple/tensorflow_red_simple.ipynb | ###Markdown
[Curso de Redes Neuronales](https://curso-redes-neuronales-unison.github.io/Temario/) A simple multilayer neural network using TensorFlow [**Julio Waissman Vilanova**](http://mat.uson.mx/~juliowaissman/), September 27, 2017. This notebook shows the basic example of a simple multilayer network applied to the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. It is essentially a translation of the example developed by [Aymeric Damien](https://github.com/aymericdamien/TensorFlow-Examples/)
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
1. Loading the data First we load the files used for training. For other kinds of problems it is necessary to go through a process known as *data wrangling*, which is usually done with the help of *Pandas*.
###Code
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
###Output
Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
###Markdown
For learning to make sense, the training set and the test set must be kept well separated (for large datasets this is the usual option). As we can see, both the images and the labels are split into training and test files. The `mnist` object is a tensorflow object containing 3 tensorflow-style objects: *test*, *train* and *validation*, which in turn contain *numpy* *ndarrays*. The structure is the same for each dataset. Let us look at it:
###Code
print("Tipo de images: {}".format(type(mnist.train.images)))
print("Tipo de epochs_completed: {}".format(type(mnist.train.epochs_completed)))
print("Tipo de labels: {}".format(type(mnist.train.labels)))
print("Tipo de nest_batch: {}".format(type(mnist.train.next_batch)))
print("Tipo de num_examples: {}".format(type(mnist.train.num_examples)))
###Output
Tipo de images: <class 'numpy.ndarray'>
Tipo de epochs_completed: <class 'int'>
Tipo de labels: <class 'numpy.ndarray'>
Tipo de nest_batch: <class 'method'>
Tipo de num_examples: <class 'int'>
###Markdown
How to generate a dataset for use inside TensorFlow is the subject of another notebook. For now, let us focus on how to build a neural network quickly and painlessly. Still, let us look at a few values that will be useful for building the network.
###Code
print("Forma del ndarray con las imágenes: {}".format(mnist.train.images.shape))
print("Forma del ndarray con las etiquetas: {}".format(mnist.train.labels.shape))
print("-" * 79)
print("Número de imagenes de entrenamiento: {}".format(mnist.train.images.shape[0]))
print("Tamaño de las imagenes: {}".format(mnist.train.images.shape[1]))
print("Clases diferentes: {}".format(mnist.train.labels.shape[1]))
###Output
Forma del ndarray con las imágenes: (55000, 784)
Forma del ndarray con las etiquetas: (55000, 10)
-------------------------------------------------------------------------------
Número de imagenes de entrenamiento: 55000
Tamaño de las imagenes: 784
Clases diferentes: 10
###Markdown
2. Building the neural network To make the neural network as generic as possible, so that we can reuse it in other projects, we will set the base parameters independently of how the network is initialized and of how it is built. Let us start with a generic function that builds a neural network with two hidden layers. I add no further comments because, with the experience of the previous notebooks, the construction of the network speaks for itself.
###Code
def red_neuronal_dos_capas_ocultas(x, pesos, sesgos):
"""
Genera una red neuronal de dos capas para usar en TensorFlow
Parámetros
----------
pesos: un diccionario con tres etiquetas: 'h1', 'h2' y 'ho'
en donde cada una es una tf.Variable conteniendo una
matriz de dimensión [num_neuronas_capa_anterior, num_neuronas_capa]
sesgos: un diccionario con tres etiquetas: 'b1', 'b2' y 'bo'
en donde cada una es una tf.Variable conteniendo un
vector de dimensión [numero_de_neuronas_capa]
Devuelve
--------
Un ops de tensorflow que calcula la salida de una red neuronal
con dos capas ocultas, y activaciones RELU.
"""
# Primera capa oculta con activación ReLU
capa_1 = tf.matmul(x, pesos['h1'])
capa_1 = tf.add(capa_1, sesgos['b1'])
capa_1 = tf.nn.relu(capa_1)
# Segunda capa oculta con activación ReLU
capa_2 = tf.matmul(capa_1, pesos['h2'])
capa_2 = tf.add(capa_2, sesgos['b2'])
capa_2 = tf.nn.relu(capa_2)
# Capa de salida con activación lineal
# En Tensorflow la salida es siempre lineal, y luego se especifica
# la función de salida a la hora de calcularla como vamos a ver
# más adelante
capa_salida = tf.matmul(capa_2, pesos['ho']) + sesgos['bo']
return capa_salida
###Output
_____no_output_____
###Markdown
And now we need some way to generate the input variables of the neural network. Fortunately we know exactly what we need, so we will write a function that creates the weight and bias variables. For the moment, rather crudely, we will simply generate them as random numbers drawn from a $\mathcal{N}(0, 1)$ distribution.
###Code
def inicializa_pesos(entradas, n1, n2, salidas):
"""
Genera un diccionario con pesos
para ser utilizado en la función red_neuronal_dos_capas_ocultas
Parámetros
----------
entradas: Número de neuronas en la capa de entrada
n1: Número de neuronas en la primer capa oculta
n2: Número de neuronas en la segunda capa oculta
salidas: Número de neuronas de salida
Devuelve
--------
Dos diccionarios, uno con los pesos por capa y otro con los sesgos por capa
"""
pesos = {
'h1': tf.Variable(tf.random_normal([entradas, n1])),
'h2': tf.Variable(tf.random_normal([n1, n2])),
'ho': tf.Variable(tf.random_normal([n2, salidas]))
}
sesgos = {
'b1': tf.Variable(tf.random_normal([n1])),
'b2': tf.Variable(tf.random_normal([n2])),
'bo': tf.Variable(tf.random_normal([salidas]))
}
return pesos, sesgos
###Output
_____no_output_____
###Markdown
Now we need to set the parameters of the network topology. Note that we could have set these parameters in the first cell, if the goal is to keep varying them in order to pick the ones that give the best performance.
###Code
num_entradas = 784 # Lo sabemos por la inspección que hicimos a mnist
num_salidas = 10 # Ídem
# Aqui es donde podemos jugar
num_neuronas_capa_1 = 256
num_neuronas_capa_2 = 256
###Output
_____no_output_____
###Markdown
Let's build the network! For this we need to create the inputs with a placeholder and create our network topology. Note that the shape of x will be [None, num_entradas], which means that the number of rows is unknown (or variable).
###Code
# La entrada a la red neuronal
x = tf.placeholder("float", [None, num_entradas])
# Los pesos y los sesgos
w, b = inicializa_pesos(num_entradas, num_neuronas_capa_1, num_neuronas_capa_2, num_salidas)
# Crea la red neuronal
estimado = red_neuronal_dos_capas_ocultas(x, w, b)
###Output
_____no_output_____
###Markdown
It might seem that everything is ready. However, something very important is still missing: we have not said which error criterion (loss) we are going to use, nor which optimization (learning) method we have decided to apply. First let us define the cost we want to minimize; that cost is a function of the estimate versus the true value, so we need another data input for the output data. Without any doubt, the cost that best describes this problem is the *softmax* cross-entropy.
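For reference, for a mini-batch of $m$ examples with one-hot labels $y^{(k)}$ and logits $z^{(k)}$, the cost computed below (softmax cross-entropy averaged over the batch) is $$ J = -\frac{1}{m} \sum_{k=1}^{m} \sum_{i} y_i^{(k)} \log \frac{e^{z_i^{(k)}}}{\sum_j e^{z_j^{(k)}}}. $$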
###Code
# Creamos la variable de datos de salida conocidos
y = tf.placeholder("float", [None, num_salidas])
# Definimos la función de costo
costo = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=estimado, labels=y))
###Output
_____no_output_____
###Markdown
And now we define which learning function we are going to use. There are many learning functions in tensorflow, which can be found under `tf.train.`. Among them we can see some familiar ones from the course, such as plain gradient descent, momentum, rprop and rmsprop, among others. Almost all of the optimization (learning) functions end their name with `Optimizer`. In this case we will use a method known as the *Adam algorithm*, which can be consulted [here](http://arxiv.org/pdf/1412.6980.pdf). The method uses two different moment estimates and, in practice, tends to give very interesting results. Which is the best method? That depends on your problem and on the amount of data you have. The best thing is to experiment with several methods to understand their advantages and disadvantages. In any case, the optimization method must be initialized with a learning rate.
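For reference, Adam keeps exponentially decaying averages of the gradient and of its element-wise square, $m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t$ and $v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2$, corrects their bias with $\hat{m}_t = m_t/(1-\beta_1^t)$ and $\hat{v}_t = v_t/(1-\beta_2^t)$, and updates the parameters as $\theta_t = \theta_{t-1} - \alpha \, \hat{m}_t/(\sqrt{\hat{v}_t} + \epsilon)$ (these are the update rules from the paper linked above).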
###Code
alfa = 0.001
optimizador = tf.train.AdamOptimizer(learning_rate=alfa)
paso_entrenamiento = optimizador.minimize(costo)
###Output
_____no_output_____
###Markdown
3. Running the session using mini-batches Now that the neural network is ready, we will run it using the Adam algorithm in mini-batch form. To keep control over the problem, we will set a maximum number of epochs (learning cycles), the size of the mini-batches, and how often (in epochs) we want to see how the network is evolving. Since training a neural network only makes sense if we want to use it for recognition afterwards, it would make no sense to train it, lose it, and have to retrain it every time. Remember that when the session is closed, everything held in memory is erased. For this we will use a special op called `Saver`, which lets us save the neural network to a file and use it later in another session (in another script, computer, ...).
###Code
archivo_modelo = "/tmp/rnn2.ckpt"
saver = tf.train.Saver()
###Output
_____no_output_____
###Markdown
Since everything runs inside a session, it cannot be done in pieces (if we use `with`, which should be the only way we start a session). I will therefore try to keep the code well commented.
###Code
numero_epochs = 30
tamano_minibatch = 100
display_step = 1
# Muy importante la primera vez que se ejecuta inicializar todas las variables
init = tf.global_variables_initializer()
# La manera correcta de iniciar una sesión y realizar calculos
with tf.Session() as sess:
sess.run(init)
# Ciclos de entrenamiento
for epoch in range(numero_epochs):
# Inicializa el costo promedio de todos los minibatches en 0
avg_cost = 0.
# Calcula el número de minibatches que se pueden usar
total_batch = int(mnist.train.num_examples/tamano_minibatch)
# Por cada minibatch
for i in range(total_batch):
# Utiliza un generador incluido en mnist que obtiene
# tamano_minibatch ejemplos selecionados aleatoriamente del total
batch_x, batch_y = mnist.train.next_batch(tamano_minibatch)
# Ejecuta la ops del paso_entrenamiento para aprender
# y la del costo, con el fin de mostrar el aprendizaje
_, c = sess.run([paso_entrenamiento, costo], feed_dict={x: batch_x, y: batch_y})
# Calcula el costo del minibatch y lo agrega al costo total
avg_cost += c / total_batch
# Muestra los resultados
if epoch % display_step == 0:
print (("Epoch: " + str(epoch)).ljust(20)
+ ("Costo: " + str(avg_cost)))
# Guarda la sesión en el archivo rnn2.cptk
saver.save(sess, archivo_modelo)
print("Se acabaron los epochs, saliendo de la sesión de tensorflow.")
###Output
Epoch: 0 Costo: 187.72642700542107
Epoch: 1 Costo: 43.19682508035141
Epoch: 2 Costo: 27.412228202819808
Epoch: 3 Costo: 19.27745437516407
Epoch: 4 Costo: 14.184633423740198
Epoch: 5 Costo: 10.54863675079563
Epoch: 6 Costo: 7.9121140013465805
Epoch: 7 Costo: 6.108162654860973
Epoch: 8 Costo: 4.629566703145594
Epoch: 9 Costo: 3.4602970071999177
Epoch: 10 Costo: 2.502782579066773
Epoch: 11 Costo: 1.8987668838607643
Epoch: 12 Costo: 1.5786736396973031
Epoch: 13 Costo: 1.1323068128340523
Epoch: 14 Costo: 0.9644100692850527
Epoch: 15 Costo: 0.7322741474965192
Epoch: 16 Costo: 0.6440696595873219
Epoch: 17 Costo: 0.5339284736222762
Epoch: 18 Costo: 0.5359366274544372
Epoch: 19 Costo: 0.4570502999219788
Epoch: 20 Costo: 0.33340147908575163
Epoch: 21 Costo: 0.456141080354728
Epoch: 22 Costo: 0.3586877974441566
Epoch: 23 Costo: 0.37086191789254497
Epoch: 24 Costo: 0.38918990505555584
Epoch: 25 Costo: 0.30872195632927024
Epoch: 26 Costo: 0.28585485328015636
Epoch: 27 Costo: 0.3069413245319734
Epoch: 28 Costo: 0.36104231996463104
Epoch: 29 Costo: 0.27113897660028896
Se acabaron los epochs, saliendo de la sesión de tensorflow.
###Markdown
Now let's check how well the learning went when the network is applied to data that was not used for training. For this we will use two extra ops: one to define the operation that marks examples as correctly or incorrectly estimated, and another to compute the average of correctly estimated examples. To compute the correctly estimated examples we will use `tf.cast`, which lets us convert the boolean results to a numeric tensor type.
###Code
prediction_correcta = tf.equal(tf.argmax(estimado, 1), tf.argmax(y, 1))
precision = tf.reduce_mean(tf.cast(prediction_correcta, "float"))
###Output
_____no_output_____
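###Markdown
To make the accuracy ops more concrete, here is a minimal sketch (not part of the original exercise) that evaluates the same `tf.argmax` / `tf.equal` / `tf.cast` chain on a pair of toy arrays; the names `y_toy` and `est_toy` are made up for the illustration.
###Code
import numpy as np

# Two fake one-hot labels and two fake network outputs: the first row agrees,
# the second does not, so the expected accuracy is 0.5.
y_toy = np.array([[0., 1.], [1., 0.]])
est_toy = np.array([[0.2, 0.8], [0.3, 0.7]])

aciertos_toy = tf.equal(tf.argmax(est_toy, 1), tf.argmax(y_toy, 1))
precision_toy = tf.reduce_mean(tf.cast(aciertos_toy, "float"))

with tf.Session() as sess:
    print(sess.run(precision_toy))  # should print 0.5
###Output
_____no_output_____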
###Markdown
Now we will open a new session, restore the values from the previous session, and run the graph in order to evaluate the `precision` op, but this time with a feed dictionary containing the test data.
###Code
with tf.Session() as sess:
sess.run(init)
saver.restore(sess, archivo_modelo)
porcentaje_acierto = sess.run(precision, feed_dict={x: mnist.test.images,
y: mnist.test.labels})
print("Precisión: {}".format(porcentaje_acierto))
###Output
INFO:tensorflow:Restoring parameters from /tmp/rnn2.ckpt
Precisión: 0.9555000066757202
###Markdown
4. Answer the following questions 1. What happens if you increase the number of epochs? When does increasing the epochs stop being useful? 2. What happens if you increase or decrease the learning rate? 3. Use at least 2 other optimization methods (available in TensorFlow), tune them and compare them. Which of the methods do you like best, and why would you prefer some over others? 4. What happens if you change the size of the mini-batches? 5. What would you do if you left a learning process half-finished (at 10 epochs, for example) and wanted to train the network 10 more epochs, and tomorrow wanted to train it another 10 epochs? **To answer the questions, add as many cells with comments and code as needed.** Take advantage of the fact that *Jupyter* notebooks let you build a kind of personalized tutorial. For question 1:
###Code
archivo_modelo = "/tmp/rnn2_mas_epochs_.ckpt"
saver = tf.train.Saver()
numero_epochs = 250
tamano_minibatch = 100
display_step = 25
# Muy importante la primera vez que se ejecuta inicializar todas las variables
init = tf.global_variables_initializer()
# La manera correcta de iniciar una sesión y realizar calculos
with tf.Session() as sess:
sess.run(init)
# Ciclos de entrenamiento
for epoch in range(numero_epochs):
# Inicializa el costo promedio de todos los minibatches en 0
avg_cost = 0.
# Calcula el número de minibatches que se pueden usar
total_batch = int(mnist.train.num_examples/tamano_minibatch)
# Por cada minibatch
for i in range(total_batch):
# Utiliza un generador incluido en mnist que obtiene
# tamano_minibatch ejemplos selecionados aleatoriamente del total
batch_x, batch_y = mnist.train.next_batch(tamano_minibatch)
# Ejecuta la ops del paso_entrenamiento para aprender
# y la del costo, con el fin de mostrar el aprendizaje
_, c = sess.run([paso_entrenamiento, costo], feed_dict={x: batch_x, y: batch_y})
# Calcula el costo del minibatch y lo agrega al costo total
avg_cost += c / total_batch
# Muestra los resultados
if epoch % display_step == 0:
print (("Epoch: " + str(epoch)).ljust(20)
+ ("Costo: " + str(avg_cost)))
# Guarda la sesión en el archivo rnn2.cptk
saver.save(sess, archivo_modelo)
print("Se acabaron los epochs, saliendo de la sesión de tensorflow.")
prediction_correcta = tf.equal(tf.argmax(estimado, 1), tf.argmax(y, 1))
precision = tf.reduce_mean(tf.cast(prediction_correcta, "float"))
with tf.Session() as sess:
sess.run(init)
saver.restore(sess, archivo_modelo)
porcentaje_acierto = sess.run(precision, feed_dict={x: mnist.test.images,
y: mnist.test.labels})
print("Precisión: {}".format(porcentaje_acierto))
###Output
INFO:tensorflow:Restoring parameters from /tmp/rnn2_mas_epochs_.ckpt
Precisión: 0.970300018787384
###Markdown
The accuracy increased, but we see that the cost stopped decreasing after about 150 epochs. For question 2:
###Code
alfa = 0.1
optimizador = tf.train.AdamOptimizer(learning_rate=alfa)
paso_entrenamiento = optimizador.minimize(costo)
archivo_modelo = "/tmp/rnn2_paso_de_aprendizaje.ckpt"
saver = tf.train.Saver()
numero_epochs = 20
tamano_minibatch = 100
display_step = 3
# Muy importante la primera vez que se ejecuta inicializar todas las variables
init = tf.global_variables_initializer()
# La manera correcta de iniciar una sesión y realizar calculos
with tf.Session() as sess:
sess.run(init)
# Ciclos de entrenamiento
for epoch in range(numero_epochs):
# Inicializa el costo promedio de todos los minibatches en 0
avg_cost = 0.
# Calcula el número de minibatches que se pueden usar
total_batch = int(mnist.train.num_examples/tamano_minibatch)
# Por cada minibatch
for i in range(total_batch):
# Utiliza un generador incluido en mnist que obtiene
# tamano_minibatch ejemplos selecionados aleatoriamente del total
batch_x, batch_y = mnist.train.next_batch(tamano_minibatch)
# Ejecuta la ops del paso_entrenamiento para aprender
# y la del costo, con el fin de mostrar el aprendizaje
_, c = sess.run([paso_entrenamiento, costo], feed_dict={x: batch_x, y: batch_y})
# Calcula el costo del minibatch y lo agrega al costo total
avg_cost += c / total_batch
# Muestra los resultados
if epoch % display_step == 0:
print (("Epoch: " + str(epoch)).ljust(20)
+ ("Costo: " + str(avg_cost)))
# Guarda la sesión en el archivo rnn2.cptk
saver.save(sess, archivo_modelo)
print("Se acabaron los epochs, saliendo de la sesión de tensorflow.")
with tf.Session() as sess:
sess.run(init)
saver.restore(sess, archivo_modelo)
porcentaje_acierto = sess.run(precision, feed_dict={x: mnist.test.images,
y: mnist.test.labels})
print("Precisión: {}".format(porcentaje_acierto))
###Output
INFO:tensorflow:Restoring parameters from /tmp/rnn2_paso_de_aprendizaje.ckpt
Precisión: 0.1898999959230423
###Markdown
We see that it does not reach as good an accuracy as before (quite a bad one, in fact), and the cost oscillates a lot and never really goes down. It does not converge. For question 3, AdagradOptimizer and AdadeltaOptimizer will be used.
###Code
alfa = 0.001
optimizador = tf.train.AdagradOptimizer(learning_rate=alfa)
paso_entrenamiento = optimizador.minimize(costo)
archivo_modelo = "/tmp/rnn2_adagrad.ckpt"
saver = tf.train.Saver()
numero_epochs = 30
tamano_minibatch = 100
display_step = 5
# Muy importante la primera vez que se ejecuta inicializar todas las variables
init = tf.global_variables_initializer()
# La manera correcta de iniciar una sesión y realizar calculos
with tf.Session() as sess:
sess.run(init)
# Ciclos de entrenamiento
for epoch in range(numero_epochs):
# Inicializa el costo promedio de todos los minibatches en 0
avg_cost = 0.
# Calcula el número de minibatches que se pueden usar
total_batch = int(mnist.train.num_examples/tamano_minibatch)
# Por cada minibatch
for i in range(total_batch):
# Utiliza un generador incluido en mnist que obtiene
# tamano_minibatch ejemplos selecionados aleatoriamente del total
batch_x, batch_y = mnist.train.next_batch(tamano_minibatch)
# Ejecuta la ops del paso_entrenamiento para aprender
# y la del costo, con el fin de mostrar el aprendizaje
_, c = sess.run([paso_entrenamiento, costo], feed_dict={x: batch_x, y: batch_y})
# Calcula el costo del minibatch y lo agrega al costo total
avg_cost += c / total_batch
# Muestra los resultados
if epoch % display_step == 0:
print (("Epoch: " + str(epoch)).ljust(20)
+ ("Costo: " + str(avg_cost)))
# Guarda la sesión en el archivo rnn2.cptk
saver.save(sess, archivo_modelo)
print("Se acabaron los epochs, saliendo de la sesión de tensorflow.")
prediction_correcta = tf.equal(tf.argmax(estimado, 1), tf.argmax(y, 1))
precision = tf.reduce_mean(tf.cast(prediction_correcta, "float"))
with tf.Session() as sess:
sess.run(init)
saver.restore(sess, archivo_modelo)
porcentaje_acierto = sess.run(precision, feed_dict={x: mnist.test.images,
y: mnist.test.labels})
print("Precisión: {}".format(porcentaje_acierto))
alfa = 0.001
optimizador = tf.train.AdadeltaOptimizer(learning_rate=alfa)
paso_entrenamiento = optimizador.minimize(costo)
archivo_modelo = "/tmp/rnn2_adaDelta_.ckpt"
saver = tf.train.Saver()
numero_epochs = 30
tamano_minibatch = 100
display_step = 5
# Muy importante la primera vez que se ejecuta inicializar todas las variables
init = tf.global_variables_initializer()
# La manera correcta de iniciar una sesión y realizar calculos
with tf.Session() as sess:
sess.run(init)
# Ciclos de entrenamiento
for epoch in range(numero_epochs):
# Inicializa el costo promedio de todos los minibatches en 0
avg_cost = 0.
# Calcula el número de minibatches que se pueden usar
total_batch = int(mnist.train.num_examples/tamano_minibatch)
# Por cada minibatch
for i in range(total_batch):
# Utiliza un generador incluido en mnist que obtiene
# tamano_minibatch ejemplos selecionados aleatoriamente del total
batch_x, batch_y = mnist.train.next_batch(tamano_minibatch)
# Ejecuta la ops del paso_entrenamiento para aprender
# y la del costo, con el fin de mostrar el aprendizaje
_, c = sess.run([paso_entrenamiento, costo], feed_dict={x: batch_x, y: batch_y})
# Calcula el costo del minibatch y lo agrega al costo total
avg_cost += c / total_batch
# Muestra los resultados
if epoch % display_step == 0:
print (("Epoch: " + str(epoch)).ljust(20)
+ ("Costo: " + str(avg_cost)))
# Guarda la sesión en el archivo rnn2.cptk
saver.save(sess, archivo_modelo)
print("Se acabaron los epochs, saliendo de la sesión de tensorflow.")
prediction_correcta = tf.equal(tf.argmax(estimado, 1), tf.argmax(y, 1))
precision = tf.reduce_mean(tf.cast(prediction_correcta, "float"))
with tf.Session() as sess:
sess.run(init)
saver.restore(sess, archivo_modelo)
porcentaje_acierto = sess.run(precision, feed_dict={x: mnist.test.images,
y: mnist.test.labels})
print("Precisión: {}".format(porcentaje_acierto))
###Output
Epoch: 0 Costo: 1779.5958884499285
Epoch: 5 Costo: 1627.3125770152694
Epoch: 10 Costo: 1489.3762926136367
Epoch: 15 Costo: 1372.3628688742892
Epoch: 20 Costo: 1272.8280913751776
Epoch: 25 Costo: 1187.4538899369675
Se acabaron los epochs, saliendo de la sesión de tensorflow.
INFO:tensorflow:Restoring parameters from /tmp/rnn2_adaDelta_.ckpt
Precisión: 0.14190000295639038
###Markdown
Based only on the accuracy obtained, I would choose AdamOptimizer. The results are AdamOptimizer > AdagradOptimizer > AdadeltaOptimizer, with accuracies of 95 > 80 > 14. For question 4, a mini-batch size of 200 will be used instead of 100.
###Code
alfa = 0.001
optimizador = tf.train.AdamOptimizer(learning_rate=alfa)
paso_entrenamiento = optimizador.minimize(costo)
archivo_modelo = "/tmp/rnn2_adaDelta.ckpt"
saver = tf.train.Saver()
numero_epochs = 30
tamano_minibatch = 200
display_step = 3
# Muy importante la primera vez que se ejecuta inicializar todas las variables
init = tf.global_variables_initializer()
# La manera correcta de iniciar una sesión y realizar calculos
with tf.Session() as sess:
sess.run(init)
# Ciclos de entrenamiento
for epoch in range(numero_epochs):
# Inicializa el costo promedio de todos los minibatches en 0
avg_cost = 0.
# Calcula el número de minibatches que se pueden usar
total_batch = int(mnist.train.num_examples/tamano_minibatch)
# Por cada minibatch
for i in range(total_batch):
# Utiliza un generador incluido en mnist que obtiene
# tamano_minibatch ejemplos selecionados aleatoriamente del total
batch_x, batch_y = mnist.train.next_batch(tamano_minibatch)
# Ejecuta la ops del paso_entrenamiento para aprender
# y la del costo, con el fin de mostrar el aprendizaje
_, c = sess.run([paso_entrenamiento, costo], feed_dict={x: batch_x, y: batch_y})
# Calcula el costo del minibatch y lo agrega al costo total
avg_cost += c / total_batch
# Muestra los resultados
if epoch % display_step == 0:
print (("Epoch: " + str(epoch)).ljust(20)
+ ("Costo: " + str(avg_cost)))
# Guarda la sesión en el archivo rnn2.cptk
saver.save(sess, archivo_modelo)
print("Se acabaron los epochs, saliendo de la sesión de tensorflow.")
prediction_correcta = tf.equal(tf.argmax(estimado, 1), tf.argmax(y, 1))
precision = tf.reduce_mean(tf.cast(prediction_correcta, "float"))
with tf.Session() as sess:
sess.run(init)
saver.restore(sess, archivo_modelo)
porcentaje_acierto = sess.run(precision, feed_dict={x: mnist.test.images,
y: mnist.test.labels})
print("Precisión: {}".format(porcentaje_acierto))
###Output
Epoch: 0 Costo: 271.70641891479494
Epoch: 3 Costo: 29.89705223430287
Epoch: 6 Costo: 14.67652648882433
Epoch: 9 Costo: 7.942085644115106
Epoch: 12 Costo: 4.275069047924754
Epoch: 15 Costo: 2.184900130168914
Epoch: 18 Costo: 1.158415535677547
Epoch: 21 Costo: 0.6386113788557456
Epoch: 24 Costo: 0.3631481176766401
Epoch: 27 Costo: 0.26874979827378664
Se acabaron los epochs, saliendo de la sesión de tensorflow.
INFO:tensorflow:Restoring parameters from /tmp/rnn2_adaDelta.ckpt
Precisión: 0.9455999732017517
###Markdown
Increasing the mini-batch size uses more memory, so the process is more costly memory-wise. And curiously, we see that the accuracy came out a bit lower. For question 5, first, if I wanted to save the model and keep a checkpoint for every epoch, I would modify the default limit of 5 that TensorFlow sets, as follows:
###Code
saver = tf.train.Saver(max_to_keep=None)
###Output
_____no_output_____
###Markdown
And I could write a function like the following:
###Code
import os

# Sketch of a save method, meant to live inside a class that holds self.saver and self.sess.
def save(self, epoch):
    model_name = "MODEL_save"
    checkpoint_dir = os.path.join(model_name)
    if not os.path.exists(checkpoint_dir):
        os.makedirs(checkpoint_dir)
    # One checkpoint per epoch plus a "latest" checkpoint without a step suffix.
    self.saver.save(self.sess, checkpoint_dir + '/model', global_step=epoch)
    self.saver.save(self.sess, checkpoint_dir + '/model')
    print("path for saved %s" % checkpoint_dir)
###Output
_____no_output_____ |
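###Markdown
As a complement to question 5, here is a minimal sketch of how training could be resumed (assuming the graph above is still defined and that a checkpoint such as `/tmp/rnn2.ckpt` already exists): restore the weights instead of re-initializing them, run a few more epochs, and save again. The variable `epochs_extra` is just an illustrative name.
###Code
epochs_extra = 10

with tf.Session() as sess:
    # Restore the previously saved weights instead of running sess.run(init),
    # so training continues from where it was left off.
    saver.restore(sess, "/tmp/rnn2.ckpt")
    for epoch in range(epochs_extra):
        total_batch = int(mnist.train.num_examples / tamano_minibatch)
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(tamano_minibatch)
            sess.run(paso_entrenamiento, feed_dict={x: batch_x, y: batch_y})
    # Save again so the same restore-train-save cycle can be repeated tomorrow.
    saver.save(sess, "/tmp/rnn2.ckpt")
###Output
_____no_output_____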
.ipynb_checkpoints/2-1-Deriving-N-Grams-from-Text-checkpoint.ipynb | ###Markdown
Deriving N-Grams from Text Based on [N-Gram-Based Text Categorization: Categorizing Text With Python by Alejandro Nolla](http://blog.alejandronolla.com/2013/05/20/n-gram-based-text-categorization-categorizing-text-with-python/)What are n-grams? See [here](http://cloudmark.github.io/Language-Detection/). 1. Tokenization
###Code
s = "Le temps est un grand maître, dit-on, le malheur est qu'il tue ses élèves."
s = s.lower()
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer("[a-zA-Z'`éèî]+")
s_tokenized = tokenizer.tokenize(s)
s_tokenized
from nltk.util import ngrams
generated_4grams = []
for word in s_tokenized:
generated_4grams.append(list(ngrams(word, 4, pad_left=True, pad_right=True, left_pad_symbol='_', right_pad_symbol='_'))) # n = 4.
generated_4grams
###Output
_____no_output_____
###Markdown
It seems that `generated_4grams` needs flattening since it's supposed to be a list of 4-grams:
###Code
generated_4grams = [word for sublist in generated_4grams for word in sublist]
generated_4grams[:10]
###Output
_____no_output_____
###Markdown
2. Obtaining n-grams (n = 4)
###Code
ng_list_4grams = generated_4grams
for idx, val in enumerate(generated_4grams):
ng_list_4grams[idx] = ''.join(val)
ng_list_4grams
###Output
_____no_output_____
###Markdown
3. Sorting n-grams by frequency (n = 4)
###Code
freq_4grams = {}
for ngram in ng_list_4grams:
if ngram not in freq_4grams:
freq_4grams.update({ngram: 1})
else:
ngram_occurrences = freq_4grams[ngram]
freq_4grams.update({ngram: ngram_occurrences + 1})
from operator import itemgetter # The operator module exports a set of efficient functions corresponding to the intrinsic operators of Python. For example, operator.add(x, y) is equivalent to the expression x + y.
freq_4grams_sorted = sorted(freq_4grams.items(), key=itemgetter(1), reverse=True)[0:300] # We only keep the 300 most popular n-grams. This was suggested in the original paper written about n-grams.
freq_4grams_sorted
###Output
_____no_output_____
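###Markdown
The same frequency table can be built more compactly with `collections.Counter`; this is just an alternative sketch of the counting step above, not part of the original recipe.
###Code
from collections import Counter

# Counter performs the counting loop above in a single call; most_common(300)
# mirrors the "keep the 300 most popular n-grams" rule.
freq_4grams_counter = Counter(ng_list_4grams)
freq_4grams_counter.most_common(300)[:10]
###Output
_____no_output_____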
###Markdown
4. Obtaining n-grams for multiple values of n To get n-grams for n = 1, 2, 3 and 4 we can use:
###Code
from nltk import everygrams
s_clean = ' '.join(s_tokenized) # For the code below we need the raw sentence as opposed to the tokens.
s_clean
def ngram_extractor(sent):
return [''.join(ng) for ng in everygrams(sent.replace(' ', '_ _'), 1, 4)
if ' ' not in ng and '\n' not in ng and ng != ('_',)]
ngram_extractor(s_clean)
###Output
_____no_output_____ |
source/examples/basics/gog/facet_grid.ipynb | ###Markdown
Facet GridFacets divide a plot into subplots based on the values of one or more discrete variable.See [facet_grid()](https://jetbrains.github.io/lets-plot-docs/pages/api/lets_plot.facet_grid.htmllets_plot.facet_grid).
###Code
import pandas as pd
from lets_plot import *
LetsPlot.setup_html()
df = pd.read_csv('https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/mpg.csv')
ggplot(df, aes('cty', 'hwy')) + geom_point() + facet_grid(x='fl', y='year')
###Output
_____no_output_____
###Markdown
Facet GridFacets divide a plot into subplots based on the values of one or more discrete variable.See [facet_grid()](https://jetbrains.github.io/lets-plot-docs/pages/api/lets_plot.facet_grid.htmllets_plot.facet_grid).
###Code
import pandas as pd
from lets_plot import *
LetsPlot.setup_html()
df = pd.read_csv('https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/mpg.csv')
ggplot(df, aes('cty', 'hwy')) + geom_point() + facet_grid(x='fl', y='year')
###Output
_____no_output_____ |
exploration/Perspective_API.ipynb | ###Markdown
About this Notebook The goal of this notebook is to benchmark a toxicity model on the Kaggle/Jigsaw Wikipedia comments dataset labeled for toxicity. I will use the Perspective API from Jigsaw for this.
###Code
import json
import numpy as np
import pandas as pd
import os
import requests
import time
from tqdm import tqdm
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
pd.options.display.max_rows = 999
from toxicity import constants, data, features, text_preprocessing, model, metrics, visualize
###Output
_____no_output_____
###Markdown
Load data
###Code
df_train = data.load(constants.INPUT_PATH, filter=False)
###Output
_____no_output_____
###Markdown
EDA
###Code
print(df_train.info())
df_train.head()
print(df_train.describe())
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 223549 entries, 0 to 223548
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 223549 non-null object
1 comment_text 223549 non-null object
2 toxic 223549 non-null int64
3 severe_toxic 223549 non-null int64
4 obscene 223549 non-null int64
5 threat 223549 non-null int64
6 insult 223549 non-null int64
7 identity_hate 223549 non-null int64
dtypes: int64(6), object(2)
memory usage: 13.6+ MB
None
toxic severe_toxic obscene threat \
count 223549.000000 223549.000000 223549.000000 223549.000000
mean 0.095657 0.008777 0.054306 0.003082
std 0.294121 0.093272 0.226621 0.055431
min 0.000000 0.000000 0.000000 0.000000
25% 0.000000 0.000000 0.000000 0.000000
50% 0.000000 0.000000 0.000000 0.000000
75% 0.000000 0.000000 0.000000 0.000000
max 1.000000 1.000000 1.000000 1.000000
insult identity_hate
count 223549.000000 223549.000000
mean 0.050566 0.009470
std 0.219110 0.096852
min 0.000000 0.000000
25% 0.000000 0.000000
50% 0.000000 0.000000
75% 0.000000 0.000000
max 1.000000 1.000000
###Markdown
Example comments
###Code
df_train.comment_text
###Output
_____no_output_____
###Markdown
Make Perspective API requests
###Code
api_key = os.environ.get("PERSPECTIVE_API_KEY")
url = ('https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze' +
'?key=' + api_key)
def make_data_dict(text):
data_dict = {
'comment': {'text': text},
'languages': ['en'],
'requestedAttributes': {'TOXICITY': {}}
}
return data_dict
def return_score(data_dict):
    response = requests.post(url=url, data=json.dumps(data_dict))
    response_dict = json.loads(response.content)
    try:
        score = response_dict["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    except:
        # If the API did not return a score, print the raw response for debugging
        # and return None instead of raising a NameError on the next line.
        print(json.dumps(response_dict))
        score = None
    return score
#Start fresh or from populated list
try:
tox_scores
except:
tox_scores = []
#Iterate through the comment text to send to the API. Add sleep to manage rate limiting.
scores_length=len(tox_scores)
print('Number of scores: ' + str(scores_length))
for i, comment in enumerate(df_train.comment_text[scores_length:]):
if i%50==1:
print(i)
time.sleep(60)
data_dict = make_data_dict(comment)
score = return_score(data_dict)
tox_scores.append(score)
#Write out scores to csv
pd.DataFrame(tox_scores).to_csv("perspective_tox_scores.csv", index=False, header=None)
#Load scores
#tox_scores = pd.read_csv("perspective_tox_scores.csv", header=None)
pred = np.array(tox_scores)>.5
metrics.run_metrics(pred, tox_scores, df_train.toxic[:len(tox_scores)], visualize=True)
###Output
Average precision-recall score: 0.62
[[39944 2282]
[ 256 4346]]
Accuracy Score: 0.95
|
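###Markdown
The fixed `time.sleep(60)` every 50 comments above is a blunt way to stay under the quota; a sketch of a retry wrapper with exponential backoff could look like the following (the function name `return_score_with_retry` and the retry parameters are made up for illustration).
###Code
def return_score_with_retry(data_dict, max_retries=3, base_wait=10):
    # Hypothetical wrapper around return_score: retry with exponential backoff
    # when the request raises or the API does not return a score.
    for attempt in range(max_retries):
        try:
            score = return_score(data_dict)
        except Exception:
            score = None
        if score is not None:
            return score
        time.sleep(base_wait * (2 ** attempt))  # wait 10s, 20s, 40s, ...
    return None
###Output
_____no_output_____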
python/coursera_python/deeplearning_ai_Andrew_Ng/4_CNN/Keras+-+Tutorial+-+Happy+House+v2.ipynb | ###Markdown
Keras tutorial - the Happy HouseWelcome to the first assignment of week 2. In this assignment, you will:1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK. 2. See how you can in a couple of hours build a deep learning algorithm.Why are we using Keras? Keras was developed to enable deep learning engineers to build and experiment with different models very quickly. Just as TensorFlow is a higher-level framework than Python, Keras is an even higher-level framework and provides additional abstractions. Being able to go from idea to result with the least possible delay is key to finding good models. However, Keras is more restrictive than the lower-level frameworks, so there are some very complex models that you can implement in TensorFlow but not (without more difficulty) in Keras. That being said, Keras will work fine for many common models. In this exercise, you'll work on the "Happy House" problem, which we'll explain below. Let's load the required packages and solve the problem of the Happy House!
###Code
import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from kt_utils import *
import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
%matplotlib inline
###Output
Using TensorFlow backend.
###Markdown
**Note**: As you can see, we've imported a lot of functions from Keras. You can use them easily just by calling them directly in the notebook. Ex: `X = Input(...)` or `X = ZeroPadding2D(...)`. 1 - The Happy House For your next vacation, you decided to spend a week with five of your friends from school. It is a very convenient house with many things to do nearby. But the most important benefit is that everybody has committed to be happy when they are in the house. So anyone wanting to enter the house must prove their current state of happiness. **Figure 1** : **the Happy House** As a deep learning expert, to make sure the "Happy" rule is strictly applied, you are going to build an algorithm that uses pictures from the front door camera to check if the person is happy or not. The door should open only if the person is happy. You have gathered pictures of your friends and yourself, taken by the front-door camera. The dataset is labeled. Run the following code to normalize the dataset and learn about its shapes.
###Code
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
###Output
number of training examples = 600
number of test examples = 150
X_train shape: (600, 64, 64, 3)
Y_train shape: (600, 1)
X_test shape: (150, 64, 64, 3)
Y_test shape: (150, 1)
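###Markdown
To get a feel for the data before building the model, one quick sanity check (a small sketch, not part of the graded assignment) is to display one of the training images together with its label:
###Code
# Show the first training image; its label is 1 for "happy" and 0 for "not happy".
index = 0
plt.imshow(X_train_orig[index])
print("y = " + str(Y_train[index, 0]))
###Output
_____no_output_____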
###Markdown
**Details of the "Happy" dataset**:- Images are of shape (64,64,3)- Training: 600 pictures- Test: 150 picturesIt is now time to solve the "Happy" Challenge. 2 - Building a model in KerasKeras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.Here is an example of a model in Keras:```pythondef model(input_shape): Define the input placeholder as a tensor with shape input_shape. Think of this as your input image! X_input = Input(input_shape) Zero-Padding: pads the border of X_input with zeroes X = ZeroPadding2D((3, 3))(X_input) CONV -> BN -> RELU Block applied to X X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X) X = BatchNormalization(axis = 3, name = 'bn0')(X) X = Activation('relu')(X) MAXPOOL X = MaxPooling2D((2, 2), name='max_pool')(X) FLATTEN X (means convert it to a vector) + FULLYCONNECTED X = Flatten()(X) X = Dense(1, activation='sigmoid', name='fc')(X) Create model. This creates your Keras model instance, you'll use this instance to train/test the model. model = Model(inputs = X_input, outputs = X, name='HappyModel') return model```Note that Keras uses a different convention with variable names than we've previously used with numpy and TensorFlow. In particular, rather than creating and assigning a new variable on each step of forward propagation such as `X`, `Z1`, `A1`, `Z2`, `A2`, etc. for the computations for the different layers, in Keras code each line above just reassigns `X` to a new value using `X = ...`. In other words, during each step of forward propagation, we are just writing the latest value in the commputation into the same variable `X`. The only exception was `X_input`, which we kept separate and did not overwrite, since we needed it at the end to create the Keras model instance (`model = Model(inputs = X_input, ...)` above). **Exercise**: Implement a `HappyModel()`. This assignment is more open-ended than most. We suggest that you start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. But after that, come back and take initiative to try out other model architectures. For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. You can also use other functions such as `AveragePooling2D()`, `GlobalMaxPooling2D()`, `Dropout()`. **Note**: You have to be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying it to.
###Code
# GRADED FUNCTION: HappyModel
def HappyModel(input_shape):
"""
Implementation of the HappyModel.
Arguments:
input_shape -- shape of the images of the dataset
Returns:
model -- a Model() instance in Keras
"""
### START CODE HERE ###
# Feel free to use the suggested outline in the text above to get started, and run through the whole
    # exercise (including the later portions of this notebook) once. Then come back and try out other
# network architectures as well.
# Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
X_input = Input(input_shape)
# Zero-Padding: pads the border of X_input with zeroes
X = ZeroPadding2D((3, 3))(X_input)
# CONV -> BN -> RELU Block applied to X
X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
# MAXPOOL
X = MaxPooling2D((2, 2), name='max_pool')(X)
# FLATTEN X (means convert it to a vector) + FULLYCONNECTED
X = Flatten()(X)
X = Dense(1, activation='sigmoid', name='fc')(X)
# Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
model = Model(inputs = X_input, outputs = X, name='HappyModel')
### END CODE HERE ###
return model
###Output
_____no_output_____
###Markdown
You have now built a function to describe your model. To train and test this model, there are four steps in Keras:1. Create the model by calling the function above2. Compile the model by calling `model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])`3. Train the model on train data by calling `model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)`4. Test the model on test data by calling `model.evaluate(x = ..., y = ...)`If you want to know more about `model.compile()`, `model.fit()`, `model.evaluate()` and their arguments, refer to the official [Keras documentation](https://keras.io/models/model/).**Exercise**: Implement step 1, i.e. create the model.
###Code
### START CODE HERE ### (1 line)
happyModel = HappyModel(X_train.shape[1:])
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
**Exercise**: Implement step 2, i.e. compile the model to configure the learning process. Choose the 3 arguments of `compile()` wisely. Hint: the Happy Challenge is a binary classification problem.
###Code
### START CODE HERE ### (1 line)
happyModel.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
**Exercise**: Implement step 3, i.e. train the model. Choose the number of epochs and the batch size.
###Code
### START CODE HERE ### (1 line)
happyModel.fit(X_train, Y_train, epochs=40, batch_size=50)
### END CODE HERE ###
###Output
Epoch 1/40
600/600 [==============================] - 15s - loss: 2.2096 - acc: 0.5350
Epoch 2/40
600/600 [==============================] - 15s - loss: 0.5858 - acc: 0.7800
Epoch 3/40
600/600 [==============================] - 15s - loss: 0.2601 - acc: 0.9033
Epoch 4/40
600/600 [==============================] - 15s - loss: 0.2847 - acc: 0.8850
Epoch 5/40
600/600 [==============================] - 15s - loss: 0.1676 - acc: 0.9267
Epoch 6/40
600/600 [==============================] - 15s - loss: 0.1609 - acc: 0.9333
Epoch 7/40
600/600 [==============================] - 15s - loss: 0.1099 - acc: 0.9617
Epoch 8/40
600/600 [==============================] - 15s - loss: 0.0831 - acc: 0.9800
Epoch 9/40
600/600 [==============================] - 15s - loss: 0.0741 - acc: 0.9783
Epoch 10/40
600/600 [==============================] - 15s - loss: 0.0896 - acc: 0.9667
Epoch 11/40
600/600 [==============================] - 15s - loss: 0.0628 - acc: 0.9817
Epoch 12/40
600/600 [==============================] - 15s - loss: 0.0582 - acc: 0.9850
Epoch 13/40
600/600 [==============================] - 15s - loss: 0.0666 - acc: 0.9833
Epoch 14/40
600/600 [==============================] - 15s - loss: 0.0671 - acc: 0.9800
Epoch 15/40
600/600 [==============================] - 15s - loss: 0.0546 - acc: 0.9867
Epoch 16/40
600/600 [==============================] - 15s - loss: 0.0439 - acc: 0.9883
Epoch 17/40
600/600 [==============================] - 15s - loss: 0.0409 - acc: 0.9917
Epoch 18/40
600/600 [==============================] - 15s - loss: 0.0370 - acc: 0.9883
Epoch 19/40
600/600 [==============================] - 15s - loss: 0.0360 - acc: 0.9917
Epoch 20/40
600/600 [==============================] - 15s - loss: 0.0277 - acc: 0.9933
Epoch 21/40
600/600 [==============================] - 15s - loss: 0.0262 - acc: 0.9933
Epoch 22/40
600/600 [==============================] - 15s - loss: 0.0344 - acc: 0.9900
Epoch 23/40
600/600 [==============================] - 15s - loss: 0.0339 - acc: 0.9917
Epoch 24/40
600/600 [==============================] - 15s - loss: 0.0388 - acc: 0.9850
Epoch 25/40
600/600 [==============================] - 15s - loss: 0.0416 - acc: 0.9883
Epoch 26/40
600/600 [==============================] - 15s - loss: 0.0266 - acc: 0.9933
Epoch 27/40
600/600 [==============================] - 15s - loss: 0.0291 - acc: 0.9917
Epoch 28/40
600/600 [==============================] - 15s - loss: 0.0277 - acc: 0.9950
Epoch 29/40
600/600 [==============================] - 15s - loss: 0.0205 - acc: 0.9967
Epoch 30/40
600/600 [==============================] - 15s - loss: 0.0170 - acc: 0.9983
Epoch 31/40
600/600 [==============================] - 15s - loss: 0.0164 - acc: 0.9983
Epoch 32/40
600/600 [==============================] - 15s - loss: 0.0354 - acc: 0.9883
Epoch 33/40
600/600 [==============================] - 15s - loss: 0.0228 - acc: 0.9900
Epoch 34/40
600/600 [==============================] - 15s - loss: 0.0218 - acc: 0.9933
Epoch 35/40
600/600 [==============================] - 15s - loss: 0.0171 - acc: 0.9950
Epoch 36/40
600/600 [==============================] - 15s - loss: 0.0239 - acc: 0.9917
Epoch 37/40
600/600 [==============================] - 15s - loss: 0.0301 - acc: 0.9917
Epoch 38/40
600/600 [==============================] - 15s - loss: 0.0175 - acc: 0.9950
Epoch 39/40
600/600 [==============================] - 15s - loss: 0.0115 - acc: 0.9967
Epoch 40/40
600/600 [==============================] - 15s - loss: 0.0092 - acc: 1.0000
###Markdown
Note that if you run `fit()` again, the `model` will continue to train with the parameters it has already learnt instead of reinitializing them.**Exercise**: Implement step 4, i.e. test/evaluate the model.
###Code
### START CODE HERE ### (1 line)
preds = happyModel.evaluate(X_test, Y_test, batch_size=32, verbose=1, sample_weight=None)
### END CODE HERE ###
print()
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
###Output
150/150 [==============================] - 2s
Loss = 0.534771858056
Test Accuracy = 0.786666667461
###Markdown
If your `happyModel()` function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets.To give you a point of comparison, our model gets around **95% test accuracy in 40 epochs** (and 99% train accuracy) with a mini batch size of 16 and "adam" optimizer. But our model gets decent accuracy after just 2-5 epochs, so if you're comparing different models you can also train a variety of models on just a few epochs and see how they compare. If you have not yet achieved a very good accuracy (let's say more than 80%), here're some things you can play around with to try to achieve it:- Try using blocks of CONV->BATCHNORM->RELU such as:```pythonX = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)X = BatchNormalization(axis = 3, name = 'bn0')(X)X = Activation('relu')(X)```until your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You are encoding useful information in a volume with a lot of channels. You can then flatten the volume and use a fully-connected layer.- You can use MAXPOOL after such blocks. It will help you lower the dimension in height and width.- Change your optimizer. We find Adam works well. - If the model is struggling to run and you get memory issues, lower your batch_size (12 is usually a good compromise)- Run on more epochs, until you see the train accuracy plateauing. Even if you have achieved a good accuracy, please feel free to keep playing with your model to try to get even better results. **Note**: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. But just for the purpose of this assignment, we won't worry about that here. 3 - ConclusionCongratulations, you have solved the Happy House challenge! Now, you just need to link this model to the front-door camera of your house. We unfortunately won't go into the details of how to do that here. **What we would like you to remember from this assignment:**- Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures. Are there any applications of deep learning to your daily life that you'd like to implement using Keras? - Remember how to code a model in Keras and the four steps leading to the evaluation of your model on the test set. Create->Compile->Fit/Train->Evaluate/Test. 4 - Test with your own image (Optional)Congratulations on finishing this assignment. You can now take a picture of your face and see if you could enter the Happy House. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right (0 is unhappy, 1 is happy)! The training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try!
###Code
### START CODE HERE ###
img_path = 'images/my_image.jpg'
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print(happyModel.predict(x))
###Output
[[ 1.]]
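###Markdown
As one way to act on the architecture tips above (purely a sketch of an alternative, not the graded solution), a variant with two CONV->BN->RELU->MAXPOOL blocks before the dense layer could look like this; the name `HappyModelDeeper` is made up.
###Code
def HappyModelDeeper(input_shape):
    # Hypothetical deeper variant: two CONV -> BN -> RELU -> MAXPOOL blocks.
    X_input = Input(input_shape)

    X = ZeroPadding2D((1, 1))(X_input)
    X = Conv2D(16, (3, 3), strides=(1, 1), name='conv0')(X)
    X = BatchNormalization(axis=3, name='bn0')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((2, 2), name='max_pool0')(X)

    X = Conv2D(32, (3, 3), strides=(1, 1), name='conv1')(X)
    X = BatchNormalization(axis=3, name='bn1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((2, 2), name='max_pool1')(X)

    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)

    return Model(inputs=X_input, outputs=X, name='HappyModelDeeper')
###Output
_____no_output_____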
###Markdown
5 - Other useful functions in Keras (Optional)Two other basic features of Keras that you'll find useful are:- `model.summary()`: prints the details of your layers in a table with the sizes of its inputs/outputs- `plot_model()`: plots your graph in a nice layout. You can even save it as ".png" using SVG() if you'd like to share it on social media ;). It is saved in "File" then "Open..." in the upper bar of the notebook.Run the following code.
###Code
happyModel.summary()
plot_model(happyModel, to_file='HappyModel.png')
SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))
###Output
_____no_output_____ |
doc/source/interface/termination.ipynb | ###Markdown
.. _nb_termination:
###Code
## Termination Criterion
Whenever an algorithm is executed, it needs to decide whether the next iteration should be started. For single-objective algorithms, a naive implementation can consider the relative improvement in the last $n$ generations.
### Default Termination ('default')
We have recently developed a termination criterion that is set if no termination is supplied to the `minimize()` method:
###Output
_____no_output_____
###Markdown
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize

problem = get_problem("zdt1")
algorithm = NSGA2(pop_size=100)

res = minimize(problem, algorithm, pf=False, seed=2, verbose=False)
print(res.algorithm.n_gen)
###Code
This allows you to terminate based on a couple of criteria, which are also explained later on this page.
Commonly used are the movement in the design space `x_tol` and the convergence in the constraint `cv_tol` and objective space `f_tol`.
To provide an upper bound for the algorithm, we recommend supplying a maximum number of generations `n_max_gen` or function evaluations `n_max_evals`.
Moreover, it is worth mentioning that the tolerance termination is based on a sliding window: not only the last, but a sequence of the last `n_last` generations is used to compare the tolerances with a bound defined by the user.
By default for multi-objective problems, the termination will be set to
###Output
_____no_output_____
###Markdown
from pymoo.util.termination.default import MultiObjectiveDefaultTermination

termination = MultiObjectiveDefaultTermination(
    x_tol=1e-8,
    cv_tol=1e-6,
    f_tol=0.0025,
    nth_gen=5,
    n_last=30,
    n_max_gen=1000,
    n_max_evals=100000
)
###Code
And for single-objective optimization to
###Output
_____no_output_____
###Markdown
from pymoo.util.termination.default import SingleObjectiveDefaultTermination

termination = SingleObjectiveDefaultTermination(
    x_tol=1e-8,
    cv_tol=1e-6,
    f_tol=1e-6,
    nth_gen=5,
    n_last=20,
    n_max_gen=1000,
    n_max_evals=100000
)
###Code
.. _nb_n_eval:
###Output
_____no_output_____
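###Markdown
The termination object constructed above can simply be passed as the termination argument of `minimize()`; a minimal sketch, reusing the ZDT1 setup from the example above:
###Code
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize
from pymoo.util.termination.default import MultiObjectiveDefaultTermination

problem = get_problem("zdt1")
algorithm = NSGA2(pop_size=100)

# Same default parameters as shown above, now passed explicitly to minimize().
termination = MultiObjectiveDefaultTermination(x_tol=1e-8,
                                               cv_tol=1e-6,
                                               f_tol=0.0025,
                                               nth_gen=5,
                                               n_last=30,
                                               n_max_gen=1000,
                                               n_max_evals=100000)

res = minimize(problem, algorithm, termination, pf=False, seed=2, verbose=False)
print(res.algorithm.n_gen)
###Output
_____no_output_____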
###Markdown
Number of Evaluations ('n_eval') The termination can simply be reached by providing an upper bound for the number of function evaluations. Whenever in an iteration, the number of function evaluations is greater than this upper bound the algorithm terminates.
###Code
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem, get_termination
from pymoo.optimize import minimize
problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = get_termination("n_eval", 300)
res = minimize(problem,
algorithm,
termination,
pf=problem.pareto_front(),
seed=1,
verbose=True)
###Output
============================================================
n_gen | n_eval | igd | gd | hv
============================================================
1 | 100 | 2.083143534 | 2.470275711 | 0.00000E+00
2 | 200 | 2.083143534 | 2.541455861 | 0.00000E+00
3 | 300 | 1.439763149 | 2.254136798 | 0.00000E+00
###Markdown
.. _nb_n_gen:
###Code
### Number of Generations ('n_gen')
Moreover, the number of generations / iterations can be limited as well.
###Output
_____no_output_____
###Markdown
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem, get_termination
from pymoo.optimize import minimize

problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = get_termination("n_gen", 10)

res = minimize(problem,
               algorithm,
               termination,
               pf=problem.pareto_front(),
               seed=1,
               verbose=True)
###Code
.. _nb_time:
###Output
_____no_output_____
###Markdown
Based on Time ('time') The termination can also be based on the time of the algorithm to be executed. For instance, to run an algorithm for 3 seconds the termination can be defined by `get_termination("time", "00:00:03")` or for 1 hour and 30 minutes by `get_termination("time", "01:30:00")`.
###Code
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem, get_termination
from pymoo.optimize import minimize
problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = get_termination("time", "00:00:03")
res = minimize(problem,
algorithm,
termination,
pf=problem.pareto_front(),
seed=1,
verbose=False)
print(res.algorithm.n_gen)
###Output
410
###Markdown
.. _nb_xtol:
###Code
### Design Space Tolerance ('x_tol')
Also, we can track the change in the design space. For a parameter explanation, please have a look at 'ftol'.
###Output
_____no_output_____
###Markdown
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize
from pymoo.util.termination.x_tol import DesignSpaceToleranceTermination

problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = DesignSpaceToleranceTermination(tol=0.0025, n_last=20)

res = minimize(problem,
               algorithm,
               termination,
               pf=problem.pareto_front(),
               seed=1,
               verbose=False)
print(res.algorithm.n_gen)
###Code
.. _nb_ftol:
###Output
_____no_output_____
###Markdown
Objective Space Tolerance ('f_tol')The most interesting stopping criterion is to use objective space change to decide whether to terminate the algorithm. Here, we mostly use a simple and efficient procedure to determine whether to stop or not. We aim to improve it further in the future. If somebody is interested in collaborating, please let us know.The parameters of our implementation are:**tol**: What is the tolerance in the objective space on average. If the value is below this bound, we terminate.**n_last**: To make the criterion more robust, we consider the last $n$ generations and take the maximum. This considers the worst case in a window.**n_max_gen**: As a fallback, the generation number can be used. For some problems, the termination criterion might not be reached; however, an upper bound for generations can be defined to stop in that case.**nth_gen**: Defines whenever the termination criterion is calculated by default, every 10th generation.
###Code
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize
from pymoo.util.termination.f_tol import MultiObjectiveSpaceToleranceTermination
from pymoo.visualization.scatter import Scatter
problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = MultiObjectiveSpaceToleranceTermination(tol=0.0025,
n_last=30,
nth_gen=5,
n_max_gen=None,
n_max_evals=None)
res = minimize(problem,
algorithm,
termination,
pf=True,
seed=1,
verbose=False)
print("Generations", res.algorithm.n_gen)
plot = Scatter(title="ZDT3")
plot.add(problem.pareto_front(use_cache=False, flatten=False), plot_type="line", color="black")
plot.add(res.F, color="red", alpha=0.8, s=20)
plot.show()
###Output
Generations 165
###Markdown
.. _nb_termination:
###Code
## Termination Criterion
Whenever an algorithm is executed, it needs to decide whether the next iteration should be started. For single-objective algorithms, a naive implementation can consider the relative improvement in the last $n$ generations.
### Default Termination ('default')
We have recently developed a termination criterion that is set if no termination is supplied to the `minimize()` method:
###Output
_____no_output_____
###Markdown
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize

problem = get_problem("zdt1")
algorithm = NSGA2(pop_size=100)

res = minimize(problem, algorithm, pf=False, seed=2, verbose=False)
print(res.algorithm.n_gen)
###Code
This allows you to terminate based on a couple of criteria, which are also explained later on this page.
Commonly used are the movement in the design space `x_tol` and the convergence in the constraint `cv_tol` and objective space `f_tol`.
To provide an upper bound for the algorithm, we recommend supplying a maximum number of generations `n_max_gen` or function evaluations `n_max_evals`.
Moreover, it is worth mentioning that the tolerance termination is based on a sliding window: not only the last, but a sequence of the last `n_last` generations is used to compare the tolerances with a bound defined by the user.
By default for multi-objective problems, the termination will be set to
###Output
_____no_output_____
###Markdown
from pymoo.util.termination.default import MultiObjectiveDefaultTermination

termination = MultiObjectiveDefaultTermination(
    x_tol=1e-8,
    cv_tol=1e-6,
    f_tol=0.0025,
    nth_gen=5,
    n_last=30,
    n_max_gen=1000,
    n_max_evals=100000
)
###Code
And for single-objective optimization to
###Output
_____no_output_____
###Markdown
from pymoo.util.termination.default import SingleObjectiveDefaultTermination

termination = SingleObjectiveDefaultTermination(
    x_tol=1e-8,
    cv_tol=1e-6,
    f_tol=1e-6,
    nth_gen=5,
    n_last=20,
    n_max_gen=1000,
    n_max_evals=100000
)
###Code
.. _nb_n_eval:
###Output
_____no_output_____
###Markdown
Number of Evaluations ('n_eval') The termination can simply be reached by providing an upper bound for the number of function evaluations. Whenever in an iteration, the number of function evaluations is greater than this upper bound the algorithm terminates.
###Code
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem, get_termination
from pymoo.optimize import minimize
problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = get_termination("n_eval", 300)
res = minimize(problem,
algorithm,
termination,
pf=problem.pareto_front(),
seed=1,
verbose=True)
###Output
============================================================
n_gen | n_eval | igd | gd | hv
============================================================
1 | 100 | 2.083143534 | 2.470275711 | 0.00000E+00
2 | 200 | 2.083143534 | 2.541455861 | 0.00000E+00
3 | 300 | 1.439763149 | 2.254136798 | 0.00000E+00
###Markdown
.. _nb_n_gen:
###Code
### Number of Generations ('n_gen')
Moreover, the number of generations / iterations can be limited as well.
###Output
_____no_output_____
###Markdown
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem, get_termination
from pymoo.optimize import minimize

problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = get_termination("n_gen", 10)

res = minimize(problem,
               algorithm,
               termination,
               pf=problem.pareto_front(),
               seed=1,
               verbose=True)
###Code
.. _nb_time:
###Output
_____no_output_____
###Markdown
Based on Time ('time') The termination can also be based on the time of the algorithm to be executed. For instance, to run an algorithm for 3 seconds the termination can be defined by `get_termination("time", "00:00:03")` or for 1 hour and 30 minutes by `get_termination("time", "01:30:00")`.
###Code
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem, get_termination
from pymoo.optimize import minimize
problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = get_termination("time", "00:00:03")
res = minimize(problem,
algorithm,
termination,
pf=problem.pareto_front(),
seed=1,
verbose=False)
print(res.algorithm.n_gen)
###Output
441
###Markdown
.. _nb_xtol:
###Code
### Design Space Tolerance ('x_tol')
Also, we can track the change in the design space. For a parameter explanation, please have a look at 'ftol'.
###Output
_____no_output_____
###Markdown
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize
from pymoo.util.termination.x_tol import DesignSpaceToleranceTermination

problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = DesignSpaceToleranceTermination(tol=0.0025, n_last=20)

res = minimize(problem,
               algorithm,
               termination,
               pf=problem.pareto_front(),
               seed=1,
               verbose=False)
print(res.algorithm.n_gen)
###Code
.. _nb_ftol:
###Output
_____no_output_____
###Markdown
Objective Space Tolerance ('f_tol')The most interesting stopping criterion is to use objective space change to decide whether to terminate the algorithm. Here, we mostly use a simple and efficient procedure to determine whether to stop or not. We aim to improve it further in the future. If somebody is interested in collaborating, please let us know.The parameters of our implementation are:**tol**: What is the tolerance in the objective space on average. If the value is below this bound, we terminate.**n_last**: To make the criterion more robust, we consider the last $n$ generations and take the maximum. This considers the worst case in a window.**n_max_gen**: As a fallback, the generation number can be used. For some problems, the termination criterion might not be reached; however, an upper bound for generations can be defined to stop in that case.**nth_gen**: Defines whenever the termination criterion is calculated by default, every 10th generation.
###Code
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize
from pymoo.util.termination.f_tol import MultiObjectiveSpaceToleranceTermination
from pymoo.visualization.scatter import Scatter
problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = MultiObjectiveSpaceToleranceTermination(tol=0.0025,
n_last=30,
nth_gen=5,
n_max_gen=None,
n_max_evals=None)
res = minimize(problem,
algorithm,
termination,
pf=True,
seed=1,
verbose=False)
print("Generations", res.algorithm.n_gen)
plot = Scatter(title="ZDT3")
plot.add(problem.pareto_front(use_cache=False, flatten=False), plot_type="line", color="black")
plot.add(res.F, color="red", alpha=0.8, s=20)
plot.show()
###Output
Generations 165
###Markdown
.. _nb_termination:
###Code
## Termination Criterion
Whenever an algorithm is executed, it needs to decide whether the next iteration should be started. For single-objective algorithms, a naive implementation can consider the relative improvement in the last $n$ generations.
### Default Termination ('default')
We have recently developed a termination criterion that is set if no termination is supplied to the `minimize()` method:
###Output
_____no_output_____
###Markdown
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize

problem = get_problem("zdt1")
algorithm = NSGA2(pop_size=100)

res = minimize(problem, algorithm, pf=False, seed=2, verbose=False)
print(res.algorithm.n_gen)
###Code
This allows you to terminate based on a couple of criteria, which are also explained later on this page.
Commonly used are the movement in the design space `x_tol` and the convergence in the constraint `cv_tol` and objective space `f_tol`.
To provide an upper bound for the algorithm, we recommend supplying a maximum number of generations `n_max_gen` or function evaluations `n_max_evals`.
Moreover, it is worth mentioning that the tolerance termination is based on a sliding window: not only the last, but a sequence of the last `n_last` generations is used to compare the tolerances with a bound defined by the user.
By default for multi-objective problems, the termination will be set to
###Output
_____no_output_____
###Markdown
from pymoo.util.termination.default import MultiObjectiveDefaultTermination

termination = MultiObjectiveDefaultTermination(
    x_tol=1e-8,
    cv_tol=1e-6,
    f_tol=0.0025,
    nth_gen=5,
    n_last=30,
    n_max_gen=1000,
    n_max_evals=100000
)
###Code
And for single-objective optimization to
###Output
_____no_output_____
###Markdown
from pymoo.util.termination.default import SingleObjectiveDefaultTermination

termination = SingleObjectiveDefaultTermination(
    x_tol=1e-8,
    cv_tol=1e-6,
    f_tol=1e-6,
    nth_gen=5,
    n_last=20,
    n_max_gen=1000,
    n_max_evals=100000
)
###Code
.. _nb_n_eval:
###Output
_____no_output_____
###Markdown
Number of Evaluations ('n_eval') The termination can simply be reached by providing an upper bound for the number of function evaluations. Whenever in an iteration, the number of function evaluations is greater than this upper bound the algorithm terminates.
###Code
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem, get_termination
from pymoo.optimize import minimize
problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = get_termination("n_eval", 300)
res = minimize(problem,
algorithm,
termination,
pf=problem.pareto_front(),
seed=1,
verbose=True)
###Output
============================================================
n_gen | n_eval | igd | gd | hv
============================================================
1 | 100 | 2.083143534 | 2.470275711 | 0.00000E+00
2 | 200 | 2.083143534 | 2.541455861 | 0.00000E+00
3 | 300 | 1.439763149 | 2.254136798 | 0.00000E+00
###Markdown
.. _nb_n_gen:
###Code
### Number of Generations ('n_gen')
Moreover, the number of generations / iterations can be limited as well.
###Output
_____no_output_____
###Markdown
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem, get_termination
from pymoo.optimize import minimize

problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = get_termination("n_gen", 10)

res = minimize(problem,
               algorithm,
               termination,
               pf=problem.pareto_front(),
               seed=1,
               verbose=True)
###Code
.. _nb_time:
###Output
_____no_output_____
###Markdown
Based on Time ('time') The termination can also be based on the time of the algorithm to be executed. For instance, to run an algorithm for 3 seconds the termination can be defined by `get_termination("time", "00:00:03")` or for 1 hour and 30 minutes by `get_termination("time", "01:30:00")`.
###Code
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem, get_termination
from pymoo.optimize import minimize
problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = get_termination("time", "00:00:03")
res = minimize(problem,
algorithm,
termination,
pf=problem.pareto_front(),
seed=1,
verbose=False)
print(res.algorithm.n_gen)
###Output
392
###Markdown
.. _nb_xtol:
###Code
### Design Space Tolerance ('x_tol')
Also, we can track the change in the design space. For a parameter explanation, please have a look at 'ftol'.
###Output
_____no_output_____
###Markdown
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize
from pymoo.util.termination.x_tol import DesignSpaceToleranceTermination

problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = DesignSpaceToleranceTermination(tol=0.0025, n_last=20)

res = minimize(problem,
               algorithm,
               termination,
               pf=problem.pareto_front(),
               seed=1,
               verbose=False)
print(res.algorithm.n_gen)
###Code
.. _nb_ftol:
###Output
_____no_output_____
###Markdown
Objective Space Tolerance ('f_tol') The most interesting stopping criterion is to use the change in the objective space to decide whether to terminate the algorithm. Here, we mostly use a simple and efficient procedure to determine whether to stop or not. We aim to improve it further in the future. If somebody is interested in collaborating, please let us know. The parameters of our implementation are:
**tol**: The tolerance in the objective space, on average. If the value falls below this bound, we terminate.
**n_last**: To make the criterion more robust, we consider the last $n$ generations and take the maximum. This considers the worst case in a window.
**n_max_gen**: As a fallback, the generation number can be used. For some problems, the termination criterion might never be reached; however, an upper bound for the number of generations can be defined to stop in that case.
**nth_gen**: Defines how often the termination criterion is calculated; by default, every 10th generation.
###Code
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize
from pymoo.util.termination.f_tol import MultiObjectiveSpaceToleranceTermination
from pymoo.visualization.scatter import Scatter
problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = MultiObjectiveSpaceToleranceTermination(tol=0.0025,
n_last=30,
nth_gen=5,
n_max_gen=None,
n_max_evals=None)
res = minimize(problem,
algorithm,
termination,
pf=True,
seed=1,
verbose=False)
print("Generations", res.algorithm.n_gen)
plot = Scatter(title="ZDT3")
plot.add(problem.pareto_front(use_cache=False, flatten=False), plot_type="line", color="black")
plot.add(res.F, color="red", alpha=0.8, s=20)
plot.show()
###Output
Generations 165
###Markdown
.. _nb_termination:
###Code
## Termination Criterion
Whenever an algorithm is executed, it needs to be decided whether the next iteration should be started or not. For single-objective algorithms, a naive implementation can consider the relative improvement over the last $n$ generations (a minimal sketch of this idea follows).
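As an illustration only (this is not pymoo's implementation), a minimal sketch of such a naive relative-improvement check could look like the following; `history` is assumed to be a list holding the best objective value of each generation:

```python
def naive_has_converged(history, n_last=20, rel_tol=1e-6):
    # Naive single-objective check: stop once the relative improvement of the
    # best objective value over the last `n_last` generations falls below `rel_tol`.
    if len(history) < n_last + 1:
        return False
    old, new = history[-(n_last + 1)], history[-1]
    improvement = abs(old - new) / (abs(old) + 1e-12)  # guard against division by zero
    return improvement < rel_tol
```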
### Default Termination ('default')
We have recently developed a termination criterion that is used if no termination is supplied to the `minimize()` method:
###Output
_____no_output_____
###Markdown
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize

problem = get_problem("zdt1")
algorithm = NSGA2(pop_size=100)

res = minimize(problem, algorithm, pf=False, seed=2, verbose=False)

print(res.algorithm.n_gen)
###Code
This allows you to terminate based on a couple of criteria, which are also explained later on this page.
Commonly used are the movement in the design space `x_tol` and the convergence in the constraint space `cv_tol` and objective space `f_tol`.
To provide an upper bound for the algorithm, we also recommend supplying a maximum number of generations `n_max_gen` or function evaluations `n_max_evals`.
Moreover, it is worth mentioning that the tolerance-based termination uses a sliding window: not only the last generation, but a sequence of the `n_last` generations is used to compare the tolerances against a bound defined by the user.
By default, for multi-objective problems the termination will be set to
###Output
_____no_output_____
###Markdown
from pymoo.util.termination.default import MultiObjectiveDefaultTermination

termination = MultiObjectiveDefaultTermination(x_tol=1e-8,
                                               cv_tol=1e-6,
                                               f_tol=0.0025,
                                               nth_gen=5,
                                               n_last=30,
                                               n_max_gen=1000,
                                               n_max_evals=100000)
###Code
And for single-objective optimization to
###Output
_____no_output_____
###Markdown
from pymoo.util.termination.default import SingleObjectiveDefaultTermination

termination = SingleObjectiveDefaultTermination(x_tol=1e-8,
                                                cv_tol=1e-6,
                                                f_tol=1e-6,
                                                nth_gen=5,
                                                n_last=20,
                                                n_max_gen=1000,
                                                n_max_evals=100000)
###Code
.. _nb_n_eval:
###Output
_____no_output_____
###Markdown
Number of Evaluations ('n_eval') Termination can simply be triggered by providing an upper bound for the number of function evaluations. Whenever the number of function evaluations in an iteration exceeds this upper bound, the algorithm terminates.
###Code
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem, get_termination
from pymoo.optimize import minimize
problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = get_termination("n_eval", 300)
res = minimize(problem,
algorithm,
termination,
pf=problem.pareto_front(),
seed=1,
verbose=True)
###Output
============================================================
n_gen | n_eval | igd | gd | hv
============================================================
1 | 100 | 2.083143534 | 2.470275711 | 0.00000E+00
2 | 200 | 2.083143534 | 2.541455861 | 0.00000E+00
3 | 300 | 1.439763149 | 2.254136798 | 0.00000E+00
###Markdown
.. _nb_n_gen:
###Code
### Number of Generations ('n_gen')
Moreover, the number of generations / iterations can be limited as well.
###Output
_____no_output_____
###Markdown
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem, get_termination
from pymoo.optimize import minimize

problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = get_termination("n_gen", 10)

res = minimize(problem,
               algorithm,
               termination,
               pf=problem.pareto_front(),
               seed=1,
               verbose=True)
###Code
.. _nb_time:
###Output
_____no_output_____
###Markdown
Based on Time ('time') The termination can also be based on the elapsed execution time of the algorithm. For instance, to run an algorithm for 3 seconds, the termination can be defined by `get_termination("time", "00:00:03")`, or for 1 hour and 30 minutes by `get_termination("time", "01:30:00")`.
###Code
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem, get_termination
from pymoo.optimize import minimize
problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = get_termination("time", "00:00:03")
res = minimize(problem,
algorithm,
termination,
pf=problem.pareto_front(),
seed=1,
verbose=False)
print(res.algorithm.n_gen)
###Output
467
###Markdown
.. _nb_xtol:
###Code
### Design Space Tolerance ('x_tol')
Also, we can track the change in the design space. For an explanation of the parameters, please have a look at 'f_tol'.
###Output
_____no_output_____
###Markdown
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize
from pymoo.util.termination.x_tol import DesignSpaceToleranceTermination

problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = DesignSpaceToleranceTermination(tol=0.0025, n_last=20)

res = minimize(problem,
               algorithm,
               termination,
               pf=problem.pareto_front(),
               seed=1,
               verbose=False)

print(res.algorithm.n_gen)
###Code
.. _nb_ftol:
###Output
_____no_output_____
###Markdown
Objective Space Tolerance ('f_tol') Probably the most interesting stopping criterion is to use the change in the objective space to decide whether to continue or not. Here, we mostly use a simple and efficient procedure to determine whether to stop or not. We aim to improve it further in the future. If somebody is interested in collaborating, please let us know. The parameters of our implementation are:
**tol**: The tolerance in the objective space, on average. If the value falls below this bound, we terminate.
**n_last**: To make the criterion more robust, we consider the last $n$ generations and take the maximum. This considers the worst case in a window.
**n_max_gen**: As a fallback, the generation number can be used. For some problems, the termination criterion might never be reached; however, an upper bound for the number of generations can be defined to stop in that case.
**nth_gen**: Defines how often the termination criterion is calculated; by default, every 10th generation.
###Code
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize
from pymoo.util.termination.f_tol import MultiObjectiveSpaceToleranceTermination
from pymoo.visualization.scatter import Scatter
problem = get_problem("zdt3")
algorithm = NSGA2(pop_size=100)
termination = MultiObjectiveSpaceToleranceTermination(tol=0.0025,
n_last=30,
nth_gen=5,
n_max_gen=None,
n_max_evals=None)
res = minimize(problem,
algorithm,
termination,
pf=True,
seed=1,
verbose=False)
print("Generations", res.algorithm.n_gen)
plot = Scatter(title="ZDT3")
plot.add(problem.pareto_front(use_cache=False, flatten=False), plot_type="line", color="black")
plot.add(res.F, color="red", alpha=0.8, s=20)
plot.show()
###Output
Generations 165
|
_docs/nbs/T632722-PyTorch-Fundamentals-Part-1.ipynb | ###Markdown
Note that we are building a tensor of differences, taking their square element-wise, and finally producing a scalar loss function by averaging all of the elements in the resulting tensor. It is a mean square loss. We can now initialize the parameters and invoke the model:
###Code
w = torch.ones(())
b = torch.zeros(())
t_p = model(t_u, w, b)
t_p
###Output
_____no_output_____
###Markdown
and check the value of the loss:
###Code
loss = loss_fn(t_p, t_c)
loss
###Output
_____no_output_____
###Markdown
We implemented the model and the loss in this section. We’ve finally reached the meat of the example: how do we estimate w and b such that the loss reaches a minimum? We’ll first work things out by hand and then learn how to use PyTorch’s superpowers to solve the same problem in a more general, off-the-shelf way. We’ll optimize the loss function with respect to the parameters using the gradient descent algorithm. Gradient descent is actually a very simple idea, and it scales up surprisingly well to large neural network models with millions of parameters. Let’s start with a mental image. Suppose we are in front of a machine sporting two knobs, labeled w and b. We are allowed to see the value of the loss on a screen, and we are told to minimize that value. Not knowing the effect of the knobs on the loss, we start fiddling with them and decide for each knob which direction makes the loss decrease. We decide to rotate both knobs in their direction of decreasing loss. Suppose we’re far from the optimal value: we’d likely see the loss decrease quickly and then slow down as it gets closer to the minimum. We notice that at some point, the loss climbs back up again, so we invert the direction of rotation for one or both knobs. We also learn that when the loss changes slowly, it’s a good idea to adjust the knobs more finely, to avoid reaching the point where the loss goes back up. After a while, eventually, we converge to a minimum. Gradient descent is not that different from the scenario we just described. The idea is to compute the rate of change of the loss with respect to each parameter, and modify each parameter in the direction of decreasing loss. Just like when we were fiddling with the knobs, we can estimate the rate of change by adding a small number to w and b and seeing how much the loss changes in that neighborhood:
###Code
delta = 0.1
loss_rate_of_change_w = \
(loss_fn(model(t_u, w + delta, b), t_c) -
loss_fn(model(t_u, w - delta, b), t_c)) / (2.0 * delta)
###Output
_____no_output_____
###Markdown
This is saying that in the neighborhood of the current values of w and b, a unit increase in w leads to some change in the loss. If the change is negative, then we need to increase w to minimize the loss, whereas if the change is positive, we need to decrease w. By how much? Applying a change to w that is proportional to the rate of change of the loss is a good idea, especially when the loss has several parameters: we apply a change to those that exert a significant change on the loss. It is also wise to change the parameters slowly in general, because the rate of change could be dramatically different at a distance from the neighborhood of the current w value. Therefore, we typically should scale the rate of change by a small factor. This scaling factor has many names; the one we use in machine learning is learning_rate:
###Code
learning_rate = 1e-2
w = w - learning_rate * loss_rate_of_change_w
###Output
_____no_output_____
###Markdown
We can do the same with b:
###Code
loss_rate_of_change_b = \
(loss_fn(model(t_u, w, b + delta), t_c) -
loss_fn(model(t_u, w, b - delta), t_c)) / (2.0 * delta)
b = b - learning_rate * loss_rate_of_change_b
###Output
_____no_output_____
###Markdown
This represents the basic parameter-update step for gradient descent. By reiterating these evaluations (and provided we choose a small enough learning rate), we will converge to an optimal value of the parameters for which the loss computed on the given data is minimal. We’ll show the complete iterative process soon, but the way we just computed our rates of change is rather crude and needs an upgrade before we move on. Let’s see why and how. Computing the rate of change by using repeated evaluations of the model and loss in order to probe the behavior of the loss function in the neighborhood of w and b doesn’t scale well to models with many parameters. Also, it is not always clear how large the neighborhood should be. We chose delta equal to 0.1 in the previous section, but it all depends on the shape of the loss as a function of w and b. If the loss changes too quickly compared to delta, we won’t have a very good idea of in which direction the loss is decreasing the most.What if we could make the neighborhood infinitesimally small? That’s exactly what happens when we analytically take the derivative of the loss with respect to a parameter. In a model with two or more parameters like the one we’re dealing with, we compute the individual derivatives of the loss with respect to each parameter and put them in a vector of derivatives: the gradient. In order to compute the derivative of the loss with respect to a parameter, we can apply the chain rule and compute the derivative of the loss with respect to its input (which is the output of the model), times the derivative of the model with respect to the parameter:
###Code
def dloss_fn(t_p, t_c):
dsq_diffs = 2 * (t_p - t_c) / t_p.size(0)
return dsq_diffs
def dmodel_dw(t_u, w, b):
return t_u
def dmodel_db(t_u, w, b):
return 1.0
###Output
_____no_output_____
###Markdown
Putting all of this together, the function returning the gradient of the loss with respect to w and b is:
###Code
def grad_fn(t_u, t_c, t_p, w, b):
dloss_dtp = dloss_fn(t_p, t_c)
dloss_dw = dloss_dtp * dmodel_dw(t_u, w, b)
dloss_db = dloss_dtp * dmodel_db(t_u, w, b)
return torch.stack([dloss_dw.sum(), dloss_db.sum()])
###Output
_____no_output_____
###Markdown
We now have everything in place to optimize our parameters. Starting from a tentative value for a parameter, we can iteratively apply updates to it for a fixed number of iterations, or until w and b stop changing. There are several stopping criteria; for now, we’ll stick to a fixed number of iterations.
###Code
def training_loop(n_epochs, learning_rate, params, t_u, t_c,
print_params=True):
for epoch in range(1, n_epochs + 1):
w, b = params
t_p = model(t_u, w, b)
loss = loss_fn(t_p, t_c)
grad = grad_fn(t_u, t_c, t_p, w, b)
params = params - learning_rate * grad
if epoch in {1, 2, 3, 10, 11, 99, 100, 4000, 5000}:
print('Epoch %d, Loss %f' % (epoch, float(loss)))
if print_params:
print(' Params:', params)
print(' Grad: ', grad)
if epoch in {4, 12, 101}:
print('...')
if not torch.isfinite(loss).all():
break
return params
training_loop(
n_epochs = 100,
learning_rate = 1e-2,
params = torch.tensor([1.0, 0.0]),
t_u = t_u,
t_c = t_c)
###Output
Epoch 1, Loss 1763.884766
Params: tensor([-44.1730, -0.8260])
Grad: tensor([4517.2964, 82.6000])
Epoch 2, Loss 5802484.500000
Params: tensor([2568.4011, 45.1637])
Grad: tensor([-261257.4062, -4598.9702])
Epoch 3, Loss 19408029696.000000
Params: tensor([-148527.7344, -2616.3931])
Grad: tensor([15109614.0000, 266155.6875])
...
Epoch 10, Loss 90901105189019073810297959556841472.000000
Params: tensor([3.2144e+17, 5.6621e+15])
Grad: tensor([-3.2700e+19, -5.7600e+17])
Epoch 11, Loss inf
Params: tensor([-1.8590e+19, -3.2746e+17])
Grad: tensor([1.8912e+21, 3.3313e+19])
###Markdown
Wait, what happened? Our training process literally blew up, leading to losses becoming inf. This is a clear sign that params is receiving updates that are too large, and their values start oscillating back and forth as each update overshoots and the next overcorrects even more. The optimization process is unstable: it diverges instead of converging to a minimum. We want to see smaller and smaller updates to params, not larger. How can we limit the magnitude of learning_rate * grad? Well, that looks easy. We could simply choose a smaller learning_rate, and indeed, the learning rate is one of the things we typically change when training does not go as well as we would like. We usually change learning rates by orders of magnitude, so we might try with 1e-3 or 1e-4, which would decrease the magnitude of the updates by orders of magnitude. Let’s go with 1e-4 and see how it works out:
###Code
training_loop(
n_epochs = 100,
learning_rate = 1e-4,
params = torch.tensor([1.0, 0.0]),
t_u = t_u,
t_c = t_c)
###Output
Epoch 1, Loss 1763.884766
Params: tensor([ 0.5483, -0.0083])
Grad: tensor([4517.2964, 82.6000])
Epoch 2, Loss 323.090515
Params: tensor([ 0.3623, -0.0118])
Grad: tensor([1859.5493, 35.7843])
Epoch 3, Loss 78.929634
Params: tensor([ 0.2858, -0.0135])
Grad: tensor([765.4666, 16.5122])
...
Epoch 10, Loss 29.105247
Params: tensor([ 0.2324, -0.0166])
Grad: tensor([1.4803, 3.0544])
Epoch 11, Loss 29.104168
Params: tensor([ 0.2323, -0.0169])
Grad: tensor([0.5781, 3.0384])
...
Epoch 99, Loss 29.023582
Params: tensor([ 0.2327, -0.0435])
Grad: tensor([-0.0533, 3.0226])
Epoch 100, Loss 29.022667
Params: tensor([ 0.2327, -0.0438])
Grad: tensor([-0.0532, 3.0226])
###Markdown
Nice--the behavior is now stable. But there’s another problem: the updates to parameters are very small, so the loss decreases very slowly and eventually stalls. We could obviate this issue by making learning_rate adaptive: that is, change according to the magnitude of updates.However, there’s another potential troublemaker in the update term: the gradient itself. Let’s go back and look at grad at epoch 1 during optimization. We can see that the first-epoch gradient for the weight is about 50 times larger than the gradient for the bias. This means the weight and bias live in differently scaled spaces. If this is the case, a learning rate that’s large enough to meaningfully update one will be so large as to be unstable for the other; and a rate that’s appropriate for the other won’t be large enough to meaningfully change the first. That means we’re not going to be able to update our parameters unless we change something about our formulation of the problem. We could have individual learning rates for each parameter, but for models with many parameters, this would be too much to bother with; it’s babysitting of the kind we don’t like.There’s a simpler way to keep things in check: changing the inputs so that the gradients aren’t quite so different. We can make sure the range of the input doesn’t get too far from the range of -1.0 to 1.0, roughly speaking. In our case, we can achieve something close enough to that by simply multiplying t_u by 0.1:
###Code
t_un = 0.1 * t_u
###Output
_____no_output_____
###Markdown
Here, we denote the normalized version of t_u by appending an n to the variable name. At this point, we can run the training loop on our normalized input:
###Code
training_loop(
n_epochs = 100,
learning_rate = 1e-2,
params = torch.tensor([1.0, 0.0]),
t_u = t_un,
t_c = t_c)
###Output
Epoch 1, Loss 80.364342
Params: tensor([1.7761, 0.1064])
Grad: tensor([-77.6140, -10.6400])
Epoch 2, Loss 37.574913
Params: tensor([2.0848, 0.1303])
Grad: tensor([-30.8623, -2.3864])
Epoch 3, Loss 30.871077
Params: tensor([2.2094, 0.1217])
Grad: tensor([-12.4631, 0.8587])
...
Epoch 10, Loss 29.030489
Params: tensor([ 2.3232, -0.0710])
Grad: tensor([-0.5355, 2.9295])
Epoch 11, Loss 28.941877
Params: tensor([ 2.3284, -0.1003])
Grad: tensor([-0.5240, 2.9264])
...
Epoch 99, Loss 22.214186
Params: tensor([ 2.7508, -2.4910])
Grad: tensor([-0.4453, 2.5208])
Epoch 100, Loss 22.148710
Params: tensor([ 2.7553, -2.5162])
Grad: tensor([-0.4446, 2.5165])
###Markdown
Let’s run the loop for enough iterations to see the changes in params get small. We’ll change n_epochs to 5,000:
###Code
params = training_loop(
n_epochs = 5000,
learning_rate = 1e-2,
params = torch.tensor([1.0, 0.0]),
t_u = t_un,
t_c = t_c,
print_params = False)
print('W={} and b={}'.format(params[0].item(), params[1].item()))
###Output
W=5.367083549499512 and b=-17.301189422607422
###Markdown
Good: our loss decreases while we change parameters along the direction of gradient descent. It doesn’t go exactly to zero; this could mean there aren’t enough iterations to converge to zero, or that the data points don’t sit exactly on a line. As we anticipated, our measurements were not perfectly accurate, or there was noise involved in the reading.But look: the values for w and b look an awful lot like the numbers we need to use to convert Celsius to Fahrenheit (after accounting for our earlier normalization when we multiplied our inputs by 0.1). The exact values would be w=5.5556 and b=-17.7778. Our fancy thermometer was showing temperatures in Fahrenheit the whole time. No big discovery, except that our gradient descent optimization process works! Let’s revisit something we did right at the start: plotting our data. Seriously, this is the first thing anyone doing data science should do. Always plot the heck out of the data:
###Code
t_p = model(t_un, *params)
fig = plt.figure(dpi=80)
plt.xlabel("Temperature (°Fahrenheit)")
plt.ylabel("Temperature (°Celsius)")
plt.plot(t_u.numpy(), t_p.detach().numpy())
plt.plot(t_u.numpy(), t_c.numpy(), 'o')
plt.show()
###Output
_____no_output_____
###Markdown
In our little adventure, we just saw a simple example of backpropagation: we computed the gradient of a composition of functions--the model and the loss--with respect to their innermost parameters (w and b) by propagating derivatives backward using the chain rule. The basic requirement here is that all functions we’re dealing with can be differentiated analytically. If this is the case, we can compute the gradient--what we earlier called “the rate of change of the loss”--with respect to the parameters in one sweep.Even if we have a complicated model with millions of parameters, as long as our model is differentiable, computing the gradient of the loss with respect to the parameters amounts to writing the analytical expression for the derivatives and evaluating them once. Granted, writing the analytical expression for the derivatives of a very deep composition of linear and nonlinear functions is not a lot of fun.9 It isn’t particularly quick, either. This is when PyTorch tensors come to the rescue, with a PyTorch component called autograd. PyTorch tensors can remember where they come from, in terms of the operations and parent tensors that originated them, and they can automatically provide the chain of derivatives of such operations with respect to their inputs. This means we won’t need to derive our model by hand; given a forward expression, no matter how nested, PyTorch will automatically provide the gradient of that expression with respect to its input parameters. At this point, the best way to proceed is to rewrite our thermometer calibration code, this time using autograd, and see what happens. First, we recall our model and loss function.
###Code
t_c = torch.tensor([0.5, 14.0, 15.0, 28.0, 11.0, 8.0,
3.0, -4.0, 6.0, 13.0, 21.0])
t_u = torch.tensor([35.7, 55.9, 58.2, 81.9, 56.3, 48.9,
33.9, 21.8, 48.4, 60.4, 68.4])
t_un = 0.1 * t_u
def model(t_u, w, b):
return w * t_u + b
def loss_fn(t_p, t_c):
squared_diffs = (t_p - t_c)**2
return squared_diffs.mean()
###Output
_____no_output_____
###Markdown
Let’s again initialize a parameters tensor:
###Code
params = torch.tensor([1.0, 0.0], requires_grad=True)
params.grad
###Output
_____no_output_____
###Markdown
All we have to do to populate it is to start with a tensor with requires_grad set to True, then call the model and compute the loss, and then call backward on the loss tensor:
###Code
loss = loss_fn(model(t_u, *params), t_c)
loss.backward()
params.grad
###Output
_____no_output_____
###Markdown
When we compute our loss while the parameters w and b require gradients, in addition to performing the actual computation, PyTorch creates the autograd graph with the operations (in black circles) as nodes, as shown in the top row of figure 5.10. When we call loss.backward(), PyTorch traverses this graph in the reverse direction to compute the gradients, as shown by the arrows in the bottom row of the figure. Calling backward will lead derivatives to accumulate at leaf nodes: if we use the gradient for a parameter update and then call backward again, the new gradients are summed on top of the old ones, giving an incorrect value. In order to prevent this from occurring, we need to zero the gradient explicitly at each iteration. We can do this easily using the in-place zero_ method:
###Code
if params.grad is not None:
params.grad.zero_()
params.grad
###Output
_____no_output_____
###Markdown
Having this reminder drilled into our heads, let’s see what our autograd-enabled training code looks like, start to finish:
###Code
def training_loop(n_epochs, learning_rate, params, t_u, t_c):
for epoch in range(1, n_epochs + 1):
if params.grad is not None:
params.grad.zero_()
t_p = model(t_u, *params)
loss = loss_fn(t_p, t_c)
loss.backward()
with torch.no_grad():
params -= learning_rate * params.grad
if epoch % 500 == 0:
print('Epoch %d, Loss %f' % (epoch, float(loss)))
return params
training_loop(
n_epochs = 5000,
learning_rate = 1e-2,
params = torch.tensor([1.0, 0.0], requires_grad=True),
t_u = t_un,
t_c = t_c)
###Output
Epoch 500, Loss 7.860115
Epoch 1000, Loss 3.828538
Epoch 1500, Loss 3.092191
Epoch 2000, Loss 2.957698
Epoch 2500, Loss 2.933134
Epoch 3000, Loss 2.928648
Epoch 3500, Loss 2.927830
Epoch 4000, Loss 2.927679
Epoch 4500, Loss 2.927652
Epoch 5000, Loss 2.927647
###Markdown
The result is the same as we got previously. Good for us! It means that while we are capable of computing derivatives by hand, we no longer need to. Earlier, we used vanilla gradient descent for optimization, which worked fine for our simple case. Needless to say, there are several optimization strategies and tricks that can assist convergence, especially when models get complicated. This saves us from the boilerplate busywork of having to update each and every parameter to our model ourselves. The torch module has an optim submodule where we can find classes implementing different optimization algorithms. Each optimizer exposes two methods: zero_grad and step. zero_grad zeroes the grad attribute of all the parameters passed to the optimizer upon construction. step updates the value of those parameters according to the optimization strategy implemented by the specific optimizer.
###Code
import torch.optim as optim
dir(optim)
###Output
_____no_output_____
###Markdown
Let’s create params and instantiate a gradient descent optimizer:
###Code
params = torch.tensor([1.0, 0.0], requires_grad=True)
learning_rate = 1e-5
optimizer = optim.SGD([params], lr=learning_rate)
###Output
_____no_output_____
###Markdown
Anyway, let’s take our fancy new optimizer for a spin:
###Code
t_p = model(t_u, *params)
loss = loss_fn(t_p, t_c)
loss.backward()
optimizer.step()
params
###Output
_____no_output_____
###Markdown
The value of params is updated upon calling step without us having to touch it ourselves! What happens is that the optimizer looks into params.grad and updates params, subtracting learning_rate times grad from it, exactly as in our former hand-rolled code.Ready to stick this code in a training loop? Nope! The big gotcha almost got us--we forgot to zero out the gradients. Had we called the previous code in a loop, gradients would have accumulated in the leaves at every call to backward, and our gradient descent would have been all over the place! Here’s the loop-ready code, with the extra zero_grad at the correct spot (right before the call to backward):
###Code
params = torch.tensor([1.0, 0.0], requires_grad=True)
learning_rate = 1e-2
optimizer = optim.SGD([params], lr=learning_rate)
t_p = model(t_un, *params)
loss = loss_fn(t_p, t_c)
optimizer.zero_grad()
loss.backward()
optimizer.step()
params
###Output
_____no_output_____
###Markdown
Perfect! See how the optim module helps us abstract away the specific optimization scheme? All we have to do is provide a list of params to it (that list can be extremely long, as is needed for very deep neural network models), and we can forget about the details.Let’s update our training loop accordingly:
###Code
def training_loop(n_epochs, optimizer, params, t_u, t_c):
for epoch in range(1, n_epochs + 1):
t_p = model(t_u, *params)
loss = loss_fn(t_p, t_c)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if epoch % 500 == 0:
print('Epoch %d, Loss %f' % (epoch, float(loss)))
return params
params = torch.tensor([1.0, 0.0], requires_grad=True)
learning_rate = 1e-2
optimizer = optim.SGD([params], lr=learning_rate)
training_loop(
n_epochs = 5000,
optimizer = optimizer,
params = params,
t_u = t_un,
t_c = t_c)
###Output
Epoch 500, Loss 7.860120
Epoch 1000, Loss 3.828538
Epoch 1500, Loss 3.092191
Epoch 2000, Loss 2.957698
Epoch 2500, Loss 2.933134
Epoch 3000, Loss 2.928648
Epoch 3500, Loss 2.927830
Epoch 4000, Loss 2.927679
Epoch 4500, Loss 2.927652
Epoch 5000, Loss 2.927647
###Markdown
Again, we get the same result as before. Great: this is further confirmation that we know how to descend a gradient by hand! In order to test more optimizers, all we have to do is instantiate a different optimizer, say Adam, instead of SGD. The rest of the code stays as it is. Pretty handy stuff.We won’t go into much detail about Adam; suffice to say that it is a more sophisticated optimizer in which the learning rate is set adaptively. In addition, it is a lot less sensitive to the scaling of the parameters--so insensitive that we can go back to using the original (non-normalized) input t_u, and even increase the learning rate to 1e-1, and Adam won’t even blink:
###Code
params = torch.tensor([1.0, 0.0], requires_grad=True)
learning_rate = 1e-1
optimizer = optim.Adam([params], lr=learning_rate)
training_loop(
n_epochs = 2000,
optimizer = optimizer,
params = params,
t_u = t_u,
t_c = t_c)
###Output
Epoch 500, Loss 7.612900
Epoch 1000, Loss 3.086700
Epoch 1500, Loss 2.928579
Epoch 2000, Loss 2.927644
###Markdown
Now, we will change another thing: the model architecture. Logically, a linear model makes sense, but we will fit a non-linear neural net anyway to see how things change. Recall that t_u and t_c were two 1D tensors of size B. Thanks to broadcasting, we could write our linear model as w * x + b, where w and b were two scalar parameters. This worked because we had a single input feature: if we had two, we would need to add an extra dimension to turn that 1D tensor into a matrix with samples in the rows and features in the columns. That's exactly what we need to do to switch to using nn.Linear. We reshape our B inputs to B × Nin, where Nin is 1. That is easily done with unsqueeze:
###Code
t_c = [0.5, 14.0, 15.0, 28.0, 11.0, 8.0, 3.0, -4.0, 6.0, 13.0, 21.0]
t_u = [35.7, 55.9, 58.2, 81.9, 56.3, 48.9, 33.9, 21.8, 48.4, 60.4, 68.4]
t_c = torch.tensor(t_c).unsqueeze(1)
t_u = torch.tensor(t_u).unsqueeze(1)
t_u.shape
###Output
_____no_output_____
###Markdown
To guard against overfitting, we will also shuffle the data and split it into training and validation sets:
###Code
n_samples = t_u.shape[0]
n_val = int(0.2 * n_samples)
shuffled_indices = torch.randperm(n_samples)
train_indices = shuffled_indices[:-n_val]
val_indices = shuffled_indices[-n_val:]
train_indices, val_indices
t_u_train = t_u[train_indices]
t_c_train = t_c[train_indices]
t_u_val = t_u[val_indices]
t_c_val = t_c[val_indices]
t_un_train = 0.1 * t_u_train
t_un_val = 0.1 * t_u_val
###Output
_____no_output_____
###Markdown
We’re done; let’s update our training code. First, we replace our handmade model with nn.Linear(1,1), and then we need to pass the linear model parameters to the optimizer:
###Code
import torch.nn as nn
linear_model = nn.Linear(1, 1)
linear_model(t_un_val)
###Output
_____no_output_____
###Markdown
Earlier, it was our responsibility to create parameters and pass them as the first argument to optim.SGD. Now we can use the parameters method to ask any nn.Module for a list of parameters owned by it or any of its submodules:
###Code
linear_model.weight
linear_model.bias
###Output
_____no_output_____
###Markdown
At this point, the SGD optimizer has everything it needs. When optimizer.step() is called, it will iterate through each Parameter and change it by an amount proportional to what is stored in its grad attribute. Pretty clean design. Let's take a look at the training loop now:
###Code
def training_loop(n_epochs, optimizer, model, loss_fn, t_u_train, t_u_val,
t_c_train, t_c_val):
for epoch in range(1, n_epochs + 1):
t_p_train = model(t_u_train)
loss_train = loss_fn(t_p_train, t_c_train)
t_p_val = model(t_u_val)
loss_val = loss_fn(t_p_val, t_c_val)
optimizer.zero_grad()
loss_train.backward()
optimizer.step()
if epoch == 1 or epoch % 1000 == 0:
print(f"Epoch {epoch}, Training loss {loss_train.item():.4f},"
f" Validation loss {loss_val.item():.4f}")
def loss_fn(t_p, t_c):
squared_diffs = (t_p - t_c)**2
return squared_diffs.mean()
linear_model = nn.Linear(1, 1)
optimizer = optim.SGD(linear_model.parameters(), lr=1e-2)
training_loop(
n_epochs = 3000,
optimizer = optimizer,
model = linear_model,
loss_fn = loss_fn,
t_u_train = t_un_train,
t_u_val = t_un_val,
t_c_train = t_c_train,
t_c_val = t_c_val)
print()
print(linear_model.weight)
print(linear_model.bias)
linear_model = nn.Linear(1, 1)
optimizer = optim.SGD(linear_model.parameters(), lr=1e-2)
training_loop(
n_epochs = 3000,
optimizer = optimizer,
model = linear_model,
loss_fn = nn.MSELoss(),
t_u_train = t_un_train,
t_u_val = t_un_val,
t_c_train = t_c_train,
t_c_val = t_c_val)
print()
print(linear_model.weight)
print(linear_model.bias)
###Output
Epoch 1, Training loss 149.1001, Validation loss 82.8564
Epoch 1000, Training loss 3.7170, Validation loss 1.8634
Epoch 2000, Training loss 2.8091, Validation loss 4.0064
Epoch 3000, Training loss 2.7798, Validation loss 4.5222
Parameter containing:
tensor([[5.4920]], requires_grad=True)
Parameter containing:
tensor([-18.3081], requires_grad=True)
###Markdown
It’s been a long journey--there has been a lot to explore for these 20-something lines of code we require to define and train a model. Hopefully by now the magic involved in training has vanished and left room for the mechanics. What we learned so far will allow us to own the code we write instead of merely poking at a black box when things get more complicated.There’s one last step left to take: replacing our linear model with a neural network as our approximating function. We said earlier that using a neural network will not result in a higher-quality model, since the process underlying our calibration problem was fundamentally linear. However, it’s good to make the leap from linear to neural network in a controlled environment so we won’t feel lost later. ```nn``` provides a simple way to concatenate modules through the nn.Sequential container:
###Code
seq_model = nn.Sequential(
nn.Linear(1, 13),
nn.Tanh(),
nn.Linear(13, 1))
seq_model
###Output
_____no_output_____
###Markdown
Calling model.parameters() will collect weight and bias from both the first and second linear modules. It’s instructive to inspect the parameters in this case by printing their shapes:
###Code
[param.shape for param in seq_model.parameters()]
###Output
_____no_output_____
###Markdown
These are the tensors that the optimizer will get. Again, after we call backward() on the loss, all parameters are populated with their grad, and the optimizer then updates their values accordingly during the optimizer.step() call. Not that different from our previous linear model, eh? After all, they're both differentiable models that can be trained using gradient descent. A few notes on parameters of nn.Modules. When inspecting parameters of a model made up of several submodules, it is handy to be able to identify parameters by name. There's a method for that, called named_parameters:
###Code
for name, param in seq_model.named_parameters():
print(name, param.shape)
###Output
0.weight torch.Size([13, 1])
0.bias torch.Size([13])
2.weight torch.Size([1, 13])
2.bias torch.Size([1])
###Markdown
The name of each module in Sequential is just the ordinal with which the module appears in the arguments. Interestingly, Sequential also accepts an OrderedDict, in which we can name each module passed to Sequential:
###Code
from collections import OrderedDict
seq_model = nn.Sequential(OrderedDict([
('hidden_linear', nn.Linear(1, 8)),
('hidden_activation', nn.Tanh()),
('output_linear', nn.Linear(8, 1))
]))
seq_model
###Output
_____no_output_____
###Markdown
This allows us to get more explanatory names for submodules:
###Code
for name, param in seq_model.named_parameters():
print(name, param.shape)
###Output
hidden_linear.weight torch.Size([8, 1])
hidden_linear.bias torch.Size([8])
output_linear.weight torch.Size([1, 8])
output_linear.bias torch.Size([1])
###Markdown
Running the training loop:
###Code
optimizer = optim.SGD(seq_model.parameters(), lr=1e-3) # <1>
training_loop(
n_epochs = 5000,
optimizer = optimizer,
model = seq_model,
loss_fn = nn.MSELoss(),
t_u_train = t_un_train,
t_u_val = t_un_val,
t_c_train = t_c_train,
t_c_val = t_c_val)
print('output', seq_model(t_un_val))
print('answer', t_c_val)
print('hidden', seq_model.hidden_linear.weight.grad)
###Output
Epoch 1, Training loss 195.6297, Validation loss 110.7736
Epoch 1000, Training loss 3.8279, Validation loss 7.3894
Epoch 2000, Training loss 3.4167, Validation loss 11.3598
Epoch 3000, Training loss 1.9194, Validation loss 8.5613
Epoch 4000, Training loss 1.3862, Validation loss 6.8235
Epoch 5000, Training loss 1.2873, Validation loss 6.1687
output tensor([[13.2770],
[-0.0607]], grad_fn=<AddmmBackward>)
answer tensor([[15.],
[ 3.]])
hidden tensor([[-0.0094],
[-0.0067],
[ 0.0140],
[-0.0049],
[ 0.0139],
[ 0.0015],
[-0.0025],
[-0.0173]])
###Markdown
We can also evaluate the model on all of the data and see how it differs from a line:
###Code
t_range = torch.arange(20., 90.).unsqueeze(1)
fig = plt.figure(dpi=80)
plt.xlabel("Fahrenheit")
plt.ylabel("Celsius")
plt.plot(t_u.numpy(), t_c.numpy(), 'o')
plt.plot(t_range.numpy(), seq_model(0.1 * t_range).detach().numpy(), 'c-')
plt.plot(t_u.numpy(), seq_model(0.1 * t_u).detach().numpy(), 'kx')
plt.show()
###Output
_____no_output_____
###Markdown
We can appreciate that the neural network has a tendency to overfit, as we discussed in chapter 5, since it tries to chase the measurements, including the noisy ones. Even our tiny neural network has too many parameters to fit the few measurements we have. It doesn’t do a bad job, though, overall. Let's also try on some other settings:
###Code
neuron_count = 20
seq_model = nn.Sequential(OrderedDict([
('hidden_linear', nn.Linear(1, neuron_count)),
('hidden_activation', nn.Tanh()),
('output_linear', nn.Linear(neuron_count, 1))
]))
optimizer = optim.SGD(seq_model.parameters(), lr=1e-4)
training_loop(
n_epochs = 5000,
optimizer = optimizer,
model = seq_model,
loss_fn = nn.MSELoss(),
t_u_train = t_un_train,
t_u_val = t_un_val,
t_c_train = t_c_train,
t_c_val = t_c_val)
t_range = torch.arange(20., 90.).unsqueeze(1)
fig = plt.figure(dpi=80)
plt.xlabel("Fahrenheit")
plt.ylabel("Celsius")
plt.plot(t_u.numpy(), t_c.numpy(), 'o')
plt.plot(t_range.numpy(), seq_model(0.1 * t_range).detach().numpy(), 'c-')
plt.plot(t_u.numpy(), seq_model(0.1 * t_u).detach().numpy(), 'kx')
plt.show()
###Output
_____no_output_____
###Markdown
Dataset Classes PyTorch supports map- and iterable-style dataset classes. A map-style dataset is derived from the abstract class torch.utils.data.Dataset. It implements the __getitem__() and __len__() functions, and represents a map from (possibly nonintegral) indices/keys to data samples. For example, such a dataset, when accessed with dataset[idx], could read the idx-th image and its corresponding label from a folder on the disk. Map-style datasets are more commonly used than iterable-style datasets, and all datasets that represent a map made from keys or data samples should use this subclass. The simplest way to create your own dataset class is to subclass the map-style torch.utils.data.Dataset class and override the __getitem__() and __len__() functions with your own code. All subclasses should overwrite __getitem__(), which fetches a data sample for a given key. Subclasses can also optionally overwrite __len__(), which returns the size of the dataset and is used by many Sampler implementations and the default options of DataLoader. An iterable-style dataset, on the other hand, is derived from the torch.utils.data.IterableDataset abstract class. It implements the __iter__() protocol and represents an iterable over data samples. This type of dataset is typically used when reading data from a database or a remote server, as well as data generated in real time. Iterable datasets are useful when random reads are expensive or uncertain, and when the batch size depends on fetched data. The Dataset class returns a dataset object that includes data and information about the data. The dataset and sampler objects are not iterables, meaning you cannot run a for loop on them. The dataloader object solves this problem. The DataLoader class combines a dataset with a sampler and returns an iterable (a minimal sketch of a custom map-style dataset appears just before the model code below). The PyTorch NN Module (torch.nn) One of the most powerful features of PyTorch is its Python module torch.nn, which makes it easy to design and experiment with new models. The following code illustrates how you can create a simple model with torch.nn. In this example, we will create a fully connected model called SimpleNet. It consists of an input layer, hidden layers, and an output layer; it takes in 2,048 input values and returns 2 output values for classification:
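To make the map-style idea above concrete before we look at the model, here is a minimal sketch of a custom dataset wrapped in a DataLoader; the class name `PairDataset` and the random tensors are purely illustrative:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class PairDataset(Dataset):
    """Map-style dataset: maps an integer index to one (input, target) sample."""
    def __init__(self, inputs, targets):
        assert len(inputs) == len(targets)
        self.inputs = inputs
        self.targets = targets

    def __len__(self):
        # used by samplers and by DataLoader's default options
        return len(self.inputs)

    def __getitem__(self, idx):
        # fetch the sample for a given index/key
        return self.inputs[idx], self.targets[idx]

# The DataLoader combines the dataset with a sampler and returns an iterable of minibatches.
ds = PairDataset(torch.rand(100, 2048), torch.randint(0, 2, (100,)))
loader = DataLoader(ds, batch_size=16, shuffle=True)
for x_batch, y_batch in loader:
    pass  # each iteration yields up to 16 samples: inputs of shape (B, 2048), labels of shape (B,)
```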
###Code
import torch.nn as nn
import torch.nn.functional as F
class SimpleNet(nn.Module):
def __init__(self):
super(SimpleNet, self).__init__()
self.fc1 = nn.Linear(2048, 256)
self.fc2 = nn.Linear(256, 64)
self.fc3 = nn.Linear(64,2)
def forward(self, x):
x = x.view(-1, 2048)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.softmax(self.fc3(x),dim=1)
return x
###Output
_____no_output_____
###Markdown
Creating a model in PyTorch is said to be very “Pythonic,” meaning it creates objects in the preferred Python fashion. We first create a new subclass called SimpleNet that inherits from the nn.Module class, and then we define the __init__() and forward() methods. The __init__() function initializes the model parameters and the forward() function defines how data is passed through our model.In __init__(), we call the super() function to execute the parent nn.Module class’s __init__() method to initialize the class parameters. Then we define some layers using the nn.Linear module.The forward() function defines how data is passed through the network. In the forward() function, we first use view() to reshape the input into a 2,048-element vector, then we process the input through each layer and apply relu() activation functions. Finally, we apply the softmax() function and return the output.
###Code
simplenet = SimpleNet()
print(simplenet)
input = torch.rand(2048)
output = simplenet(input)
###Output
_____no_output_____
###Markdown
The Perceptron: The Simplest Neural Network The simplest neural network unit is a perceptron. The perceptron was historically and very loosely modeled after the biological neuron. As with a biological neuron, there is input and output, and “signals” flow from the inputs to the outputs.Each perceptron unit has an input (x), an output (y), and three “knobs”: a set of weights (w), a bias (b), and an activation function (f). The weights and the bias are learned from the data, and the activation function is handpicked depending on the network designer’s intuition of the network and its target outputs.
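In equation form, a single perceptron unit computes

$$\hat{y} = f(\mathbf{w} \cdot \mathbf{x} + b),$$

where the code below uses the sigmoid function as the activation $f$.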
###Code
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
%matplotlib inline
# Global Settings
LEFT_CENTER = (3, 3)
RIGHT_CENTER = (3, -2)
seed = 1337
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
np.random.seed(seed)
class Perceptron(nn.Module):
""" A Perceptron is one Linear layer """
def __init__(self, input_dim):
"""
Args:
input_dim (int): size of the input features
"""
super(Perceptron, self).__init__()
self.fc1 = nn.Linear(input_dim, 1)
def forward(self, x_in):
"""The forward pass of the MLP
Args:
x_in (torch.Tensor): an input data tensor.
x_in.shape should be (batch, input_dim)
Returns:
the resulting tensor. tensor.shape should be (batch, 1)
"""
return torch.sigmoid(self.fc1(x_in))
###Output
_____no_output_____
###Markdown
In machine learning, it is a common practice to create synthetic data with well-understood properties when trying to understand an algorithm. For this section, we use synthetic data for the task of classifying two-dimensional points into one of two classes. To construct the data, we sample the points from two different parts of the xy-plane, creating an easy-to-learn situation for the model. The goal of the model is to classify the stars (⋆) as one class, and the circles (◯) as another class. This is visualized on the righthand side, where everything above the line is classified differently than everything below the line.
###Code
def get_toy_data(batch_size, left_center=LEFT_CENTER, right_center=RIGHT_CENTER):
x_data = []
y_targets = np.zeros(batch_size)
for batch_i in range(batch_size):
if np.random.random() > 0.5:
x_data.append(np.random.normal(loc=left_center))
else:
x_data.append(np.random.normal(loc=right_center))
y_targets[batch_i] = 1
return torch.tensor(x_data, dtype=torch.float32), torch.tensor(y_targets, dtype=torch.float32)
x_data, y_truth = get_toy_data(batch_size=1000)
x_data = x_data.data.numpy()
y_truth = y_truth.data.numpy()
left_x = []
right_x = []
left_colors = []
right_colors = []
for x_i, y_true_i in zip(x_data, y_truth):
color = 'black'
if y_true_i == 0:
left_x.append(x_i)
left_colors.append(color)
else:
right_x.append(x_i)
right_colors.append(color)
left_x = np.stack(left_x)
right_x = np.stack(right_x)
_, ax = plt.subplots(1, 1, figsize=(10,4))
ax.scatter(left_x[:, 0], left_x[:, 1], color=left_colors, marker='*', s=100)
ax.scatter(right_x[:, 0], right_x[:, 1], facecolor='white', edgecolor=right_colors, marker='o', s=100)
plt.axis('off');
def visualize_results(perceptron, x_data, y_truth, n_samples=1000, ax=None, epoch=None,
title='', levels=[0.3, 0.4, 0.5], linestyles=['--', '-', '--']):
y_pred = perceptron(x_data)
y_pred = (y_pred > 0.5).long().data.numpy().astype(np.int32)
x_data = x_data.data.numpy()
y_truth = y_truth.data.numpy().astype(np.int32)
n_classes = 2
all_x = [[] for _ in range(n_classes)]
all_colors = [[] for _ in range(n_classes)]
colors = ['black', 'white']
markers = ['o', '*']
for x_i, y_pred_i, y_true_i in zip(x_data, y_pred, y_truth):
all_x[y_true_i].append(x_i)
if y_pred_i == y_true_i:
all_colors[y_true_i].append("white")
else:
all_colors[y_true_i].append("black")
#all_colors[y_true_i].append(colors[y_pred_i])
all_x = [np.stack(x_list) for x_list in all_x]
if ax is None:
_, ax = plt.subplots(1, 1, figsize=(10,10))
for x_list, color_list, marker in zip(all_x, all_colors, markers):
ax.scatter(x_list[:, 0], x_list[:, 1], edgecolor="black", marker=marker, facecolor=color_list, s=300)
xlim = (min([x_list[:,0].min() for x_list in all_x]),
max([x_list[:,0].max() for x_list in all_x]))
ylim = (min([x_list[:,1].min() for x_list in all_x]),
max([x_list[:,1].max() for x_list in all_x]))
# hyperplane
xx = np.linspace(xlim[0], xlim[1], 30)
yy = np.linspace(ylim[0], ylim[1], 30)
YY, XX = np.meshgrid(yy, xx)
xy = np.vstack([XX.ravel(), YY.ravel()]).T
Z = perceptron(torch.tensor(xy, dtype=torch.float32)).detach().numpy().reshape(XX.shape)
ax.contour(XX, YY, Z, colors='k', levels=levels, linestyles=linestyles)
plt.suptitle(title)
if epoch is not None:
plt.text(xlim[0], ylim[1], "Epoch = {}".format(str(epoch)))
lr = 0.01
input_dim = 2
batch_size = 1000
n_epochs = 12
n_batches = 5
perceptron = Perceptron(input_dim=input_dim)
optimizer = optim.Adam(params=perceptron.parameters(), lr=lr)
bce_loss = nn.BCELoss()
losses = []
x_data_static, y_truth_static = get_toy_data(batch_size)
fig, ax = plt.subplots(1, 1, figsize=(10,5))
visualize_results(perceptron, x_data_static, y_truth_static, ax=ax, title='Initial Model State')
plt.axis('off')
#plt.savefig('initial.png')
change = 1.0
last = 10.0
epsilon = 1e-3
epoch = 0
while change > epsilon or epoch < n_epochs or last > 0.3:
#for epoch in range(n_epochs):
for _ in range(n_batches):
optimizer.zero_grad()
x_data, y_target = get_toy_data(batch_size)
y_pred = perceptron(x_data).squeeze()
loss = bce_loss(y_pred, y_target)
loss.backward()
optimizer.step()
loss_value = loss.item()
losses.append(loss_value)
change = abs(last - loss_value)
last = loss_value
fig, ax = plt.subplots(1, 1, figsize=(10,5))
visualize_results(perceptron, x_data_static, y_truth_static, ax=ax, epoch=epoch,
title=f"{loss_value}; {change}")
plt.axis('off')
epoch += 1
#plt.savefig('epoch{}_toylearning.png'.format(epoch))
_, axes = plt.subplots(1,2,figsize=(12,4))
axes[0].scatter(left_x[:, 0], left_x[:, 1], facecolor='white',edgecolor='black', marker='o', s=300)
axes[0].scatter(right_x[:, 0], right_x[:, 1], facecolor='white', edgecolor='black', marker='*', s=300)
axes[0].axis('off');
visualize_results(perceptron, x_data_static, y_truth_static, epoch=None, levels=[0.5], ax=axes[1])
axes[1].axis('off');
plt.savefig('perceptron_final.png')
plt.savefig('perceptron_final.pdf')
###Output
_____no_output_____
###Markdown
The Multilayer Perceptron The multilayer perceptron is considered one of the most basic neural network building blocks. The simplest MLP is an extension to the perceptron. The perceptron takes the data vector as input and computes a single output value. In an MLP, many perceptrons are grouped so that the output of a single layer is a new vector instead of a single output value. In PyTorch, this is done simply by setting the number of output features in the Linear layer. An additional aspect of an MLP is that it combines multiple layers with a nonlinearity in between each layer.The simplest MLP is composed of three stages of representation and two Linear layers. The first stage is the input vector. This is the vector that is given to the model. Given the input vector, the first Linear layer computes a hidden vector—the second stage of representation. The hidden vector is called such because it is the output of a layer that’s between the input and the output. What do we mean by “output of a layer”? One way to understand this is that the values in the hidden vector are the output of different perceptrons that make up that layer. Using this hidden vector, the second Linear layer computes an output vector. In a multiclass setting, the size of the output vector is equal to the number of classes. Always, the final hidden vector is mapped to the output vector using a combination of Linear layer and a nonlinearity. Let’s take a look at the XOR example described earlier and see what would happen with a perceptron versus an MLP. In this example, we train both the perceptron and an MLP in a binary classification task: identifying stars and circles. Each data point is a 2D coordinate. Without diving into the implementation details yet, the final model predictions are shown in figure below. In this plot, incorrectly classified data points are filled in with black, whereas correctly classified data points are not filled in. In the left panel, you can see that the perceptron has difficulty in learning a decision boundary that can separate the stars and circles, as evidenced by the filled in shapes. However, the MLP (right panel) learns a decision boundary that classifies the stars and circles much more accurately.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
seed = 1337
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
class MultilayerPerceptron(nn.Module):
def __init__(self, input_dim, hidden_dim, output_dim):
"""
Args:
input_dim (int): the size of the input vectors
hidden_dim (int): the output size of the first Linear layer
output_dim (int): the output size of the second Linear layer
"""
super(MultilayerPerceptron, self).__init__()
self.fc1 = nn.Linear(input_dim, hidden_dim)
self.fc2 = nn.Linear(hidden_dim, output_dim)
def forward(self, x_in, apply_softmax=False):
"""The forward pass of the MLP
Args:
x_in (torch.Tensor): an input data tensor.
x_in.shape should be (batch, input_dim)
apply_softmax (bool): a flag for the softmax activation
should be false if used with the Cross Entropy losses
Returns:
the resulting tensor. tensor.shape should be (batch, output_dim)
"""
intermediate = F.relu(self.fc1(x_in))
output = self.fc2(intermediate)
if apply_softmax:
output = F.softmax(output, dim=1)
return output
###Output
_____no_output_____
###Markdown
Let's instantiate the MLP. Due to the generality of the MLP implementation, we can model inputs of any size. To demonstrate, we use an input dimension of size 3, an output dimension of size 4, and a hidden dimension of size 100. Notice how in the output of the print statement, the number of units in each layer nicely line up to produce an output of dimension 4 for an input of dimension 3.
###Code
batch_size = 2 # number of samples input at once
input_dim = 3
hidden_dim = 100
output_dim = 4
# Initialize model
mlp = MultilayerPerceptron(input_dim, hidden_dim, output_dim)
print(mlp)
###Output
MultilayerPerceptron(
(fc1): Linear(in_features=3, out_features=100, bias=True)
(fc2): Linear(in_features=100, out_features=4, bias=True)
)
###Markdown
We can quickly test the “wiring” of the model by passing some random inputs. Because the model is not yet trained, the outputs are random. Doing this is a useful sanity check before spending time training a model. Notice how PyTorch’s interactivity allows us to do all this in real time during development, in a way not much different from using NumPy or Pandas.
###Code
def describe(x):
print("Type: {}".format(x.type()))
print("Shape/size: {}".format(x.shape))
print("Values: \n{}".format(x))
x_input = torch.rand(batch_size, input_dim)
describe(x_input)
y_output = mlp(x_input, apply_softmax=False)
describe(y_output)
###Output
Type: torch.FloatTensor
Shape/size: torch.Size([2, 4])
Values:
tensor([[-0.2456, 0.0723, 0.1589, -0.3294],
[-0.3497, 0.0828, 0.3391, -0.4271]], grad_fn=<AddmmBackward>)
###Markdown
It is important to learn how to read inputs and outputs of PyTorch models. In the preceding example, the output of the MLP model is a tensor that has two rows and four columns. The rows in this tensor correspond to the batch dimension, which is the number of data points in the minibatch. The columns are the final feature vectors for each data point. In some cases, such as in a classification setting, the feature vector is a prediction vector. The name “prediction vector” means that it corresponds to a probability distribution. What happens with the prediction vector depends on whether we are currently conducting training or performing inference. During training, the outputs are used as is with a loss function and a representation of the target class labels. However, if you want to turn the prediction vector into probabilities, an extra step is required. Specifically, you require the softmax activation function, which is used to transform a vector of values into probabilities. The softmax function has many roots. In physics, it is known as the Boltzmann or Gibbs distribution; in statistics, it’s multinomial logistic regression; and in the natural language processing (NLP) community it’s known as the maximum entropy (MaxEnt) classifier.7 Whatever the name, the intuition underlying the function is that large positive values will result in higher probabilities, and lower negative values will result in smaller probabilities.
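For reference, the softmax function maps a vector of scores $x$ to probabilities via

$$\mathrm{softmax}(x)_i = \frac{e^{x_i}}{\sum_j e^{x_j}},$$

so every output is positive and the outputs sum to 1, as the next cell confirms.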
###Code
y_output = mlp(x_input, apply_softmax=True)
describe(y_output)
###Output
Type: torch.FloatTensor
Shape/size: torch.Size([2, 4])
Values:
tensor([[0.2087, 0.2868, 0.3127, 0.1919],
[0.1832, 0.2824, 0.3649, 0.1696]], grad_fn=<SoftmaxBackward>)
###Markdown
To conclude, MLPs are stacked Linear layers that map tensors to other tensors. Nonlinearities are used between each pair of Linear layers to break the linear relationship and allow for the model to twist the vector space around. In a classification setting, this twisting should result in linear separability between classes. Additionally, you can use the softmax function to interpret MLP outputs as probabilities, but you should not use softmax with specific loss functions, because the underlying implementations can leverage superior mathematical/computational shortcuts. Image Neural Model
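The image model below follows that advice: its `forward()` returns raw logits, and training uses `nn.CrossEntropyLoss`, which applies log-softmax internally. A minimal sketch of the equivalence, using made-up tensors, might look like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 10)           # raw model outputs for a batch of 4 samples
targets = torch.tensor([1, 0, 3, 9])  # class labels for the batch

# CrossEntropyLoss expects raw logits and applies log-softmax internally...
loss_ce = nn.CrossEntropyLoss()(logits, targets)
# ...which is equivalent to applying log_softmax ourselves and using NLLLoss.
loss_nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), targets)

print(torch.allclose(loss_ce, loss_nll))  # True
```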
###Code
import time
import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision.transforms as transforms
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Define transforms for data preprocessing
transform = transforms.Compose([
transforms.ToTensor()
])
# Datasets
trainset = torchvision.datasets.FashionMNIST(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.FashionMNIST(root='./data', train=False, download=True, transform=transform)
# Dataloaders to feed the data in batches
trainloader = torch.utils.data.DataLoader(trainset, batch_size=1000, shuffle=True, num_workers=2)
testloader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False, num_workers=2)
print('No. of train images: {}'.format(len(trainset)))
print('No. of test images: {}'.format(len(testset)))
print('No. of train batches: {}'.format(len(trainloader)))
print('No. of test batches: {}'.format(len(testloader)))
class Network(nn.Module):
def __init__(self):
super(Network, self).__init__()
self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)
self.pool = nn.MaxPool2d(kernel_size = 2, stride = 2)
self.fc1 = nn.Linear(12 * 4 * 4, 120)
self.fc2 = nn.Linear(120, 60)
self.fc3 = nn.Linear(60, 10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.pool(x)
x = self.conv2(x)
x = F.relu(x)
x = self.pool(x)
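        # shape check: a 1x28x28 FashionMNIST image gives 6x24x24 after conv1,
        # 6x12x12 after pooling, 12x8x8 after conv2 and 12x4x4 after this pool,
        # which is why the flatten below uses 12 * 4 * 4 features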
x = x.reshape(-1, 12 * 4 * 4)
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
x = F.relu(x)
x = self.fc3(x)
return x
def find_acc(pred, label):
"""pixelwise accuracy"""
correct = pred.argmax(dim = 1).eq(label)
accuracy = correct.to(torch.float32).mean().item() * 100
return accuracy
def train(network, epoch, criterion, optimizer, trainloader):
loss_train = 0
acc_train = 0
network.train()
    for images, labels in trainloader:  # iterate over every batch exactly once per epoch
# move the images and labels to GPU
images = images.to(device)
labels = labels.to(device)
pred = network(images)
# clear all the gradients before calculating them
optimizer.zero_grad()
# find the loss for the current step
loss_train_step = criterion(pred , labels)
# find accuracy
acc_train_step = find_acc(pred, labels)
# calculate the gradients
loss_train_step.backward()
# update the parameters
optimizer.step()
loss_train += loss_train_step.item()
acc_train += acc_train_step
loss_train /= len(trainloader)
    acc_train /= len(trainloader)
return loss_train, acc_train
def validate(network, epoch, criterion, testloader):
loss_valid = 0
acc_valid = 0
network.eval()
    for images, labels in testloader:  # iterate over every test batch exactly once
# move the images and labels to GPU
images = images.to(device)
labels = labels.to(device)
pred = network(images)
        # no parameter updates happen during validation; we only accumulate loss and accuracy
# find the loss and acc for the current step
loss_valid_step = criterion(pred , labels)
acc_valid_step = find_acc(pred, labels)
loss_valid += loss_valid_step.item()
acc_valid += acc_valid_step
    loss_valid /= len(testloader)
acc_valid /= len(testloader)
return loss_valid, acc_valid
network = Network().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(network.parameters(), lr=0.005)
num_epochs = 10
start_time = time.time()
for epoch in range(1, num_epochs+1):
loss_train, acc_train = train(network, epoch, criterion, optimizer, trainloader)
loss_valid, acc_valid = validate(network, epoch, criterion, testloader)
print('Epoch: {} Train Loss: {:.4f} Train Acc: {:.4f} Valid Loss: {:.4f} Valid Acc: {:.4f}'.format(epoch, loss_train, acc_train, loss_valid, acc_valid))
print("Time Elapsed : {:.4f}s".format(time.time() - start_time))
torch.save(network.state_dict(), "model.pt")  # PyTorch checkpoints conventionally use .pt/.pth
def test_model(model):
start_time = time.time()
num_correct = 0
accuracy = 0
with torch.no_grad():
for batch in testloader:
images, labels = batch
images = images.to(device)
labels = labels.to(device)
total_images = len(testset)
pred = model(images)
num_correct_batch = pred.argmax(dim = 1).eq(labels).sum().item()
accuracy_batch = pred.argmax(dim = 1).eq(labels).float().mean().item()
num_correct += num_correct_batch
accuracy += accuracy_batch
accuracy /= len(testloader)
print('Number of test images: {}'.format(total_images))
print('Number of correct predictions: {}'.format(num_correct))
print('Accuracy: {}'.format(accuracy * 100))
print("Time Elapsed : {:.4f}s".format(time.time() - start_time))
# test the trained network
test_model(network)
###Output
Number of test images: 10000
Number of correct predictions: 5580
Accuracy: 55.799999803304665
Time Elapsed : 1.5537s
###Markdown
Linear Regression
###Code
import numpy as np
import torch
import matplotlib.pyplot as plt
from matplotlib import colors
plt.rcParams.update({'font.size': 16})
x = torch.rand(20, 5)
x
input_dim = 1
output_dim = 1
W = 2 * np.random.rand(output_dim, input_dim) - 1
b = 2 * np.random.rand(output_dim) - 1
true_model = lambda x: W @ x + b
n_train = 1000
noise_level = 0.04
# Generate a random set of n_train samples
X_train = np.random.rand(n_train, input_dim)
y_train = np.array([true_model(x) for x in X_train])
# Add some noise
y_train += noise_level * np.random.standard_normal(size=y_train.shape)
if input_dim == output_dim == 1:
fig = plt.figure()
fig.clf()
ax = fig.gca()
ax.plot(X_train, y_train, '.')
ax.grid(True)
ax.set_xlabel('X_train')
ax.set_ylabel('y_train')
class VectorialDataset(torch.utils.data.Dataset):
def __init__(self, input_data, output_data):
super(VectorialDataset, self).__init__()
self.input_data = torch.tensor(input_data.astype('f'))
self.output_data = torch.tensor(output_data.astype('f'))
def __len__(self):
return self.input_data.shape[0]
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
sample = (self.input_data[idx, :],
self.output_data[idx, :])
return sample
training_set = VectorialDataset(input_data=X_train, output_data=y_train)
training_set[10:12]
train_loader = torch.utils.data.DataLoader(training_set,
batch_size=120,
shuffle=True)
import torch.nn as nn
import torch
class LinearModel(nn.Module):
def __init__(self, input_dim, output_dim):
super(LinearModel, self).__init__()
self.input_dim = input_dim
self.output_dim = output_dim
self.linear = nn.Linear(self.input_dim, self.output_dim, bias=True)
def forward(self, x):
out = self.linear(x)
return out
def reset(self):
self.linear.reset_parameters()
model = LinearModel(input_dim, output_dim)
model
list(model.parameters())
model.linear.weight
model.linear.bias
x = torch.randn(5, input_dim)
model.forward(x)
[model.linear.weight @ xx + model.linear.bias for xx in x]
if input_dim == output_dim == 1:
fig = plt.figure()
fig.clf()
ax = fig.gca()
ax.plot(training_set.input_data, training_set.output_data, '.')
ax.plot(training_set.input_data, model.forward(training_set.input_data).detach().numpy(), '.')
ax.grid(True)
ax.set_xlabel('X_train')
ax.legend(['y_train', 'model(X_train)'])
import torch.nn as nn
loss_fun = nn.MSELoss(reduction='mean')
x = torch.tensor(np.array([1, 2, 1]).astype('f'))
z = torch.tensor(np.array([0, 0, 0]).astype('f'))
loss_fun(x, z)
if input_dim == output_dim == 1:
state_dict = model.state_dict()
ww, bb = np.meshgrid(np.linspace(-2, 2, 30), np.linspace(-2, 2, 30))
loss_values = 0 * ww
for i in range(ww.shape[0]):
for j in range(ww.shape[1]):
state_dict['linear.weight'] = torch.tensor([[ww[i, j]]])
state_dict['linear.bias'] = torch.tensor([bb[i, j]])
model.load_state_dict(state_dict)
loss_values[i, j] = loss_fun(model.forward(training_set.input_data), training_set.output_data)
fig = plt.figure(figsize=(10, 8))
fig.clf()
ax = fig.gca()
    levels = np.logspace(np.log10(np.min(loss_values)), np.log10(np.max(loss_values)), 20)
c=ax.contourf(ww, bb, loss_values, levels=levels, norm=colors.LogNorm())
plt.colorbar(c)
ax.plot(W[0], b, 'r*', markersize=10)
ax.set_ylabel('bias')
ax.set_xlabel('weight')
ax.legend(['(W, b)'])
ax.grid(True)
x = torch.randn(1, input_dim)
y = torch.randn(1, output_dim)
model.zero_grad()
loss = loss_fun(model.forward(x), y)
loss.backward()
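# The block below cross-checks autograd against the hand-derived gradients of the
# squared error (w*x + b - y)**2: 2*x*(w*x + b - y) for the weight and
# 2*(w*x + b - y) for the bias; each printed pair should match.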
if input_dim == output_dim == 1:
print(model.linear.weight.grad)
print(2 * x * (model.linear.weight * x + model.linear.bias - y))
print(model.linear.bias.grad)
print(2 * (model.linear.weight * x + model.linear.bias - y))
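# Plain gradient descent done "by hand": gradients are read off the parameters, the
# updated weight and bias are written back through the state_dict (no torch.optim yet),
# and the trajectory is recorded so it can be drawn over the loss surface afterwards.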
if input_dim == output_dim == 1:
num_iter = 200
lr = 0.5 # 0.01
train_hist = {}
train_hist['weight'] = []
train_hist['bias'] = []
model.reset()
state_dict = model.state_dict()
for _ in range(num_iter):
model.zero_grad()
loss = loss_fun(model.forward(training_set.input_data), training_set.output_data)
loss.backward()
w = model.linear.weight.item()
b = model.linear.bias.item()
dw = model.linear.weight.grad.item()
db = model.linear.bias.grad.item()
state_dict['linear.weight'] += torch.tensor([-lr * dw])
state_dict['linear.bias'] += torch.tensor([-lr * db])
model.load_state_dict(state_dict)
train_hist['weight'].append(w)
train_hist['bias'].append(b)
for label in train_hist:
train_hist[label] = np.array(train_hist[label])
if input_dim == output_dim == 1:
fig = plt.figure(figsize=(8, 8))
fig.clf()
ax = fig.gca()
    levels = np.logspace(np.log10(np.min(loss_values)), np.log10(np.max(loss_values)), 20)
ax.contourf(ww, bb, loss_values, levels=levels, norm=colors.LogNorm())
ax.set_xlabel('weight')
ax.set_ylabel('bias')
ax.grid(True)
ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)
ax.plot(train_hist['weight'], train_hist['bias'], '.-b')
ax.plot(W[0], b, 'r*', markersize=10)
ax.legend(['optim', '(W, b)'])
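# Same model, now trained with torch.optim: Adam with L2 weight decay, iterating over
# minibatches from the DataLoader instead of hand-coded full-batch updates.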
lr = 0.1
weight_decay = 5e-4
optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
n_epochs = 100
train_hist = {}
train_hist['loss'] = []
if input_dim == output_dim == 1:
train_hist['weight'] = []
train_hist['bias'] = []
# Initialize training
model.reset()
model.train()
for epoch in range(n_epochs):
for idx, batch in enumerate(train_loader):
optimizer.zero_grad()
loss = loss_fun(model.forward(batch[0]), batch[1])
loss.backward()
optimizer.step()
train_hist['loss'].append(loss.item())
if input_dim == output_dim == 1:
train_hist['weight'].append(model.linear.weight.item())
train_hist['bias'].append(model.linear.bias.item())
print('[Epoch %4d/%4d] [Batch %4d/%4d] Loss: % 2.2e' % (epoch + 1, n_epochs,
idx + 1, len(train_loader),
loss.item()))
model.eval()
if input_dim == output_dim == 1:
n_test = 500
X_test = np.random.rand(n_test, input_dim)
y_pred = []
state_dict = model.state_dict()
for idx in range(len(train_hist['weight'])):
state_dict['linear.weight'] = torch.tensor([[train_hist['weight'][idx]]])
state_dict['linear.bias'] = torch.tensor([train_hist['bias'][idx]])
model.load_state_dict(state_dict)
y_pred.append(model.forward(torch.tensor(X_test.astype('f'))).detach().numpy())
if input_dim == output_dim == 1:
fig = plt.figure(figsize=(15, 5))
fig.clf()
ax = fig.add_subplot(1, 3, 1)
    levels = np.logspace(np.log10(np.min(loss_values)), np.log10(np.max(loss_values)), 20)
ax.contourf(ww, bb, loss_values, levels=levels, norm=colors.LogNorm())
ax.plot(train_hist['weight'], train_hist['bias'], '.-b')
ax.plot(W[0], b, 'r*', markersize=10)
ax.set_xlabel('weight')
ax.set_ylabel('bias')
ax.legend(['optim', '(W, b)'])
ax.grid(True)
ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)
ax = fig.add_subplot(1, 3, 2)
ax.loglog(np.abs(train_hist['loss']))
ax.set_xlabel('Iter')
ax.set_ylabel('Loss')
ax.grid(True)
ax = fig.add_subplot(1, 3, 3)
ax.plot(X_train, y_train, '.')
a=ax.plot(X_test, y_pred[0], '-', alpha=0.1)
for y in y_pred[1:]:
ax.plot(X_test, y, '-', alpha=0.1, color=a[0].get_color())
ax.plot(X_test, y_pred[-1], 'k')
ax.grid(True)
fig.tight_layout()
else:
fig = plt.figure()
fig.clf()
ax = fig.gca()
ax.loglog(np.abs(train_hist['loss']))
ax.set_xlabel('Iter')
ax.set_ylabel('Loss')
ax.grid(True)
###Output
_____no_output_____
###Markdown
PyTorch Fundamentals Part 1 The Hot Problem
###Code
import numpy as np
import torch
from matplotlib import pyplot as plt
%matplotlib inline
torch.set_printoptions(edgeitems=2, linewidth=75)
###Output
_____no_output_____
###Markdown
We just got back from a trip to some obscure location, and we brought back a fancy, wall-mounted analog thermometer. It looks great, and it's a perfect fit for our living room. Its only flaw is that it doesn't show units. Not to worry, we've got a plan: we'll build a dataset of readings and corresponding temperature values in our favorite units, choose a model, adjust its weights iteratively until a measure of the error is low enough, and finally be able to interpret the new readings in units we understand.
We'll start by making a note of temperature data in good old Celsius and measurements from our new thermometer, and figure things out. After a couple of weeks, here's the data:
###Code
t_c = [0.5, 14.0, 15.0, 28.0, 11.0, 8.0, 3.0, -4.0, 6.0, 13.0, 21.0]
t_u = [35.7, 55.9, 58.2, 81.9, 56.3, 48.9, 33.9, 21.8, 48.4, 60.4, 68.4]
t_c = torch.tensor(t_c)
t_u = torch.tensor(t_u)
###Output
_____no_output_____
###Markdown
Here, the t_c values are temperatures in Celsius, and the t_u values are our unknown units. We can expect noise in both measurements, coming from the devices themselves and from our approximate readings. For convenience, we’ve already put the data into tensors; we’ll use it in a minute. A quick plot of our data tells us that it’s noisy, but we think there’s a pattern here.
###Code
fig = plt.figure(dpi=80)
plt.xlabel("Measurement")
plt.ylabel("Temperature (°Celsius)")
plt.plot(t_u.numpy(), t_c.numpy(), 'o')
plt.show()
###Output
_____no_output_____
###Markdown
In the absence of further knowledge, we assume the simplest possible model for converting between the two sets of measurements, just like Kepler might have done. The two may be linearly related--that is, multiplying t_u by a factor and adding a constant, we may get the temperature in Celsius (up to an error that we omit): t_c = w * t_u + b.
Is this a reasonable assumption? Probably; we'll see how well the final model performs. We chose to name w and b after weight and bias, two very common terms for linear scaling and the additive constant--we'll bump into those all the time.
OK, now we need to estimate w and b, the parameters in our model, based on the data we have. We must do it so that the temperatures we obtain from running the unknown readings t_u through the model are close to the temperatures we actually measured in Celsius. If that sounds like fitting a line through a set of measurements, well, yes, because that's exactly what we're doing. We'll go through this simple example using PyTorch and realize that training a neural network will essentially involve changing the model for a slightly more elaborate one, with a few (or a metric ton) more parameters.
Let's flesh it out again: we have a model with some unknown parameters, and we need to estimate those parameters so that the error between predicted outputs and measured values is as low as possible. We notice that we still need to define exactly what we mean by a measure of the error. Such a measure, which we refer to as the loss function, should be high if the error is high and should ideally be as low as possible for a perfect match. Our optimization process should therefore aim at finding w and b so that the loss function is at a minimum.
###Code
def model(t_u, w, b):
return w * t_u + b
###Output
_____no_output_____
###Markdown
A loss function (or cost function) is a function that computes a single numerical value that the learning process will attempt to minimize. The calculation of loss typically involves taking the difference between the desired outputs for some training samples and the outputs actually produced by the model when fed those samples. In our case, that would be the difference between the predicted temperatures t_p output by our model and the actual measurements: t_p - t_c.
We need to make sure the loss function makes the loss positive both when t_p is greater than and when it is less than the true t_c, since the goal is for t_p to match t_c. We have a few choices, the most straightforward being |t_p - t_c| and (t_p - t_c)^2. Based on the mathematical expression we choose, we can emphasize or discount certain errors. Conceptually, a loss function is a way of prioritizing which errors to fix from our training samples, so that our parameter updates result in adjustments to the outputs for the highly weighted samples instead of changes to some other samples' output that had a smaller loss.
Both of the example loss functions have a clear minimum at zero and grow monotonically as the predicted value moves further from the true value in either direction. Because the steepness of the growth also monotonically increases away from the minimum, both of them are said to be convex. Since our model is linear, the loss as a function of w and b is also convex. Cases where the loss is a convex function of the model parameters are usually great to deal with because we can find a minimum very efficiently through specialized algorithms. However, we will instead use less powerful but more generally applicable methods in this chapter. We do so because for the deep neural networks we are ultimately interested in, the loss is not a convex function of the inputs.
For our two loss functions |t_p - t_c| and (t_p - t_c)^2, we notice that the square of the differences behaves more nicely around the minimum: the derivative of the error-squared loss with respect to t_p is zero when t_p equals t_c. The absolute value, on the other hand, has an undefined derivative right where we'd like to converge. This is less of an issue in practice than it looks like, but we'll stick to the square of differences for the time being. We're expecting t_u, w, and b to be the input tensor, weight parameter, and bias parameter, respectively. In our model, the parameters will be PyTorch scalars (aka zero-dimensional tensors), and the product operation will use broadcasting to yield the returned tensors. Anyway, time to define our loss:
###Code
def loss_fn(t_p, t_c):
squared_diffs = (t_p - t_c)**2
return squared_diffs.mean()
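# Added illustration (not part of the original cell): evaluate the loss for the
# naive initial guess w = 1, b = 0; the parameters are zero-dimensional tensors
# and broadcasting scales every reading, so the loss should come out large.
w = torch.ones(())
b = torch.zeros(())
t_p = model(t_u, w, b)
print(loss_fn(t_p, t_c))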
###Output
_____no_output_____ |
Linear Regression Model/Linear_Regression.ipynb | ###Markdown
LinearRegression
###Code
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(train_X,train_y)
preds_y = lr.predict(test_X)
from sklearn.metrics import mean_squared_error
mean_squared_error(test_y,preds_y)
import pickle
filename = 'finalized_model.sav'
pickle.dump(lr, open(filename, 'wb'))
###Output
_____no_output_____ |
scratch/011_graphs.ipynb | ###Markdown
strogatz
###Code
# make page
paper_size = '11x17 inches'
border:float=20
paper = utils.Paper(paper_size)
drawbox = paper.get_drawbox(border)
DEGREE = 13
SCALE = 15
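# connected_watts_strogatz_graph(n, k, p) builds a small-world graph with n nodes
# (DEGREE here is the node count), k nearest-neighbour links per node and rewiring
# probability p; p_gen below sweeps p from 0.2 to 1.0 across the page.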
(xbins, ybins), (xs, ys) = gp.overlay_grid(drawbox, xstep=40, ystep=40, flatmesh=True)
p_gen = lambda x: np.interp(x, [xs.min(), xs.max()], [0.2, 1.] )
_p_gen = gp.make_callable(p_gen)
k_gen = 3
_k_gen = gp.make_callable(k_gen)
df = pd.DataFrame({
'x':xs,
'y':ys,
'k':_k_gen(xs),
'p':_p_gen(xs)
})
df['k'] = df['k'].astype(int)
new_rows = []
for i, row in df.iterrows():
k = row['k'].astype(int)
G = nx.connected_watts_strogatz_graph(n=DEGREE, k=k, p=row['p'])
gg = GraphGram(graph=G, layout_method='kamada_kawai_layout',
xoff=row['x'], yoff=row['y'], scale=SCALE)
new_row = row.to_dict()
new_row['geometry'] = gg.lines
new_rows.append(new_row)
gdf = geopandas.GeoDataFrame(new_rows)
layers = []
layers.append(gp.merge_LineStrings(gdf.geometry))
sk = vsketch.Vsketch()
sk.size(paper.page_format_mm)
sk.scale('1mm')
for i, layer in enumerate(layers):
sk.stroke(i+1)
sk.geometry(layer)
tolerance=0.5
sk.vpype(f'linesort')
sk.display()
savedir='/mnt/c/code/side/plotter_images/oned_outputs'
filename = '0149_graph_test.svg'
savepath = Path(savedir).joinpath(filename).as_posix()
sk.save(savepath)
###Output
_____no_output_____
###Markdown
strogatz
###Code
# make page
paper_size = '11x17 inches'
border:float=18
paper = utils.Paper(paper_size)
drawbox = paper.get_drawbox(border)
DEGREE = 33
SCALE = 8
(xbins, ybins), (xs, ys) = gp.overlay_grid(drawbox, xstep=20, ystep=20, flatmesh=True)
p_gen = lambda x: np.interp(x, [xs.min(), xs.max()], [0., 0.6] )
_p_gen = gp.make_callable(p_gen)
k_gen = 2
_k_gen = gp.make_callable(k_gen)
df = pd.DataFrame({
'x':xs,
'y':ys,
'k':_k_gen(xs),
'p':_p_gen(xs)
})
df['k'] = df['k'].astype(int)
new_rows = []
for i, row in df.iterrows():
k = row['k'].astype(int)
G = nx.connected_watts_strogatz_graph(n=DEGREE, k=k, p=row['p'])
gg = GraphGram(graph=G, layout_method='spring_layout',
xoff=row['x'], yoff=row['y'], scale=SCALE)
bezs = []
for ls in gg.lines:
bez = gp.LineString_to_jittered_bezier(
ls, xstd=0., ystd=0., normalized=True, n_eval_points=2)
bezs.append(bez)
bezs = gp.merge_LineStrings(bezs)
new_row = row.to_dict()
new_row['geometry'] = bezs
new_rows.append(new_row)
gdf = geopandas.GeoDataFrame(new_rows)
layers = []
layers.append(gp.merge_LineStrings(gdf.geometry))
sk = vsketch.Vsketch()
sk.size(paper.page_format_mm)
sk.scale('1mm')
for i, layer in enumerate(layers):
sk.stroke(i+1)
sk.geometry(layer)
tolerance= 0.5
sk.vpype(f'linemerge --tolerance {tolerance}mm linesort')
sk.display()
savedir='/mnt/c/code/side/plotter_images/oned_outputs'
filename = '0151_strogatz_graphs.svg'
savepath = Path(savedir).joinpath(filename).as_posix()
sk.save(savepath)
###Output
_____no_output_____
###Markdown
binary tree
###Code
# make page
paper_size = '11x17 inches'
border:float=18
paper = utils.Paper(paper_size)
drawbox = paper.get_drawbox(border)
DEGREE = 33
SCALE = 20
(xbins, ybins), (xs, ys) = gp.overlay_grid(drawbox, xstep=50, ystep=50, flatmesh=True)
r_gen = lambda x: int(np.interp(x, [xs.min(), xs.max()], [2, 8] ))
_r_gen = gp.make_callable(r_gen)
k_gen = 3
_k_gen = gp.make_callable(k_gen)
df = pd.DataFrame({
'x':xs,
'y':ys,
'k':_k_gen(xs),
'p':_p_gen(xs)
})
df['k'] = df['k'].astype(int)
nx.number_of_nonisomorphic_trees(10)
new_rows = []
for i, row in df.iterrows():
k = row['k'].astype(int)
G = nx.connected_watts_strogatz_graph(n=DEGREE, k=k, p=row['p'])
gg = GraphGram(graph=G, layout_method='spectral_layout',
xoff=row['x'], yoff=row['y'], scale=SCALE)
bezs = []
for ls in gg.lines:
bez = gp.LineString_to_jittered_bezier(
ls, xstd=0., ystd=0., normalized=True, n_eval_points=2)
bezs.append(bez)
bezs = gp.merge_LineStrings(bezs)
new_row = row.to_dict()
new_row['geometry'] = bezs
new_rows.append(new_row)
gdf = geopandas.GeoDataFrame(new_rows)
layers = []
layers.append(gp.merge_LineStrings(gdf.geometry))
sk = vsketch.Vsketch()
sk.size(paper.page_format_mm)
sk.scale('1mm')
for i, layer in enumerate(layers):
sk.stroke(i+1)
sk.geometry(layer)
tolerance= 0.5
sk.vpype(f'linemerge --tolerance {tolerance}mm linesort')
sk.display()
savedir='/mnt/c/code/side/plotter_images/oned_outputs'
filename = '0151_strogatz_graphs.svg'
savepath = Path(savedir).joinpath(filename).as_posix()
sk.save(savepath)
###Output
_____no_output_____
###Markdown
strogatz
###Code
# make page
paper_size = '23.42x16.92 inches'
border:float=35
paper = utils.Paper(paper_size)
drawbox = paper.get_drawbox(border)
DEGREE = 33
SCALE = 8
(xbins, ybins), (xs, ys) = gp.overlay_grid(drawbox, xstep=20, ystep=20, flatmesh=True)
p_gen = lambda x: np.interp(x, [xs.min(), xs.max()], [0., 0.6] )
_p_gen = gp.make_callable(p_gen)
k_gen = 2
_k_gen = gp.make_callable(k_gen)
df = pd.DataFrame({
'x':xs,
'y':ys,
'k':_k_gen(xs),
'p':_p_gen(xs)
})
df['k'] = df['k'].astype(int)
new_rows = []
for i, row in df.iterrows():
k = row['k'].astype(int)
G = nx.connected_watts_strogatz_graph(n=DEGREE, k=k, p=row['p'])
gg = GraphGram(graph=G, layout_method='spring_layout',
xoff=row['x'], yoff=row['y'], scale=SCALE)
bezs = []
for ls in gg.lines:
bez = gp.LineString_to_jittered_bezier(
ls, xstd=0., ystd=0., normalized=True, n_eval_points=2)
bezs.append(bez)
bezs = gp.merge_LineStrings(bezs)
new_row = row.to_dict()
new_row['geometry'] = bezs
new_rows.append(new_row)
gdf = geopandas.GeoDataFrame(new_rows)
layers = []
layers.append(gp.merge_LineStrings(gdf.geometry))
sk = vsketch.Vsketch()
sk.size(paper.page_format_mm)
sk.scale('1mm')
for i, layer in enumerate(layers):
sk.stroke(i+1)
sk.geometry(layer)
tolerance= 0.5
sk.vpype(f'linemerge --tolerance {tolerance}mm linesort')
sk.display()
savedir='/mnt/c/code/side/plotter_images/oned_outputs'
filename = '0151_strogatz_graphs.svg'
savepath = Path(savedir).joinpath(filename).as_posix()
sk.save(savepath)
###Output
_____no_output_____ |
1. Single Degree of Freedom Oscillation/1. SDOF Code.ipynb | ###Markdown
Problem 2
###Code
import numpy as np
import sympy as sp
import scipy as sc
import matplotlib.pyplot as plt
from math import e
%matplotlib inline
###Output
_____no_output_____
###Markdown
The given equation is $x(t)=A_1e^{s_1t} + A_2e^{s_2t}$; thus we can find the values of $A_1$ and $A_2$ by substituting $A_2=1-A_1$ and $A_1=\frac{-s_2}{s_1-s_2}$ using the given initial conditions. Then get the values of $s_1$ and $s_2$ from the provided equation for parts A to D using the different values of $a$ and $b$. Part A
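The coefficient formulas used in every part follow from the implied initial conditions $x(0)=1$ and $\dot{x}(0)=0$: setting $t=0$ gives $A_1+A_2=1$; differentiating, $\dot{x}(t)=A_1 s_1 e^{s_1 t}+A_2 s_2 e^{s_2 t}$, so $\dot{x}(0)=A_1 s_1 + A_2 s_2=0$, and solving the pair yields $A_1=\frac{-s_2}{s_1-s_2}$ and $A_2=1-A_1$.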
###Code
a=1
b=0.25
s1=-a/2 + np.sqrt((a/2)**2 - b**2)
s2=-a/2 - np.sqrt((a/2)**2 - b**2)
print("s1=",s1,"s2=",s2)
A1=-s2/(s1-s2)
A2=1-A1
# A1=A2=1
print("A1=",A1,"A2=",A2)
t=np.arange(0,2*np.pi,0.1)
x=A1*(e**(s1*t)) + A2*(e**(s2*t))
# # plt.ylim(-10,10,0.5)
# plt.xlim(-6,6,0.5)
plt.xlabel("Period")
plt.ylabel("x(t)")
plt.plot(t,x)
plt.legend(['for a=1 & b=0.25'])
plt.show()
###Output
s1= -0.0669872981077807 s2= -0.9330127018922193
A1= 1.0773502691896257 A2= -0.07735026918962573
###Markdown
Part B
###Code
a=-1
b=0.25
s1=-a/2 + np.sqrt((a/2)**2 - b**2)
s2=-a/2 - np.sqrt((a/2)**2 - b**2)
print("s1=",s1,"s2=",s2)
A1=-s2/(s1-s2)
A2=1-A1
print("A1=",A1,"A2=",A2)
t=np.arange(0,2*np.pi,0.1)
x=A1*(e**(s1*t)) + A2*(e**(s2*t))
plt.xlabel("Period")
plt.ylabel("x(t)")
plt.plot(t,x)
plt.legend(['for a=-1 & b=0.25'])
plt.show()
###Output
s1= 0.9330127018922193 s2= 0.0669872981077807
A1= -0.0773502691896258 A2= 1.0773502691896257
###Markdown
Part C
###Code
a=1
b=1
s1=complex(-0.5,0.5)
s2=complex(-0.5,-0.5)
print("s1=",s1,"s2=",s2)
A1=-s1/(s1-s2)
A2=1-A1
print("A1=",A1,"A2=",A2)
t=np.arange(0,2*np.pi,0.1)
x=A1.real*(e**(s1.real*t)) + A2.real*(e**(s2.real*t)) #ignoring imaginary parts
plt.xlabel("Period")
plt.ylabel("x(t)")
plt.plot(t,x)
plt.legend(['for a=1 & b=1'])
plt.show()
###Output
s1= (-0.5+0.5j) s2= (-0.5-0.5j)
A1= (-0.5-0.5j) A2= (1.5+0.5j)
###Markdown
Part D
###Code
a=-1
b=1
s1=complex(0.5,0.5)
s2=complex(0.5,-0.5)
print("s1=",s1,"s2=",s2)
A1=-s2/(s1-s2)
A2=1-A1
print("A1=",A1,"A2=",A2)
t=np.arange(0,2*np.pi,0.1)
x=A1.real*(e**(s1.real*t)) + A2.real*(e**(s2.real*t)) #ignoring imaginary parts
plt.xlabel("Period")
plt.ylabel("x(t)")
plt.plot(t,x)
plt.legend(['for a=-1 & b=1'])
plt.show()
###Output
s1= (0.5+0.5j) s2= (0.5-0.5j)
A1= (0.5+0.5j) A2= (0.5-0.5j)
###Markdown
Part E
###Code
#Undamped oscillation which is pretty much a straight line.
a=0
b=1
s1=complex(0,1)
s2=complex(0,-1)
print("s1=",s1,"s2=",s2)
A1=-s2/(s1-s2)
A2=1-A1
print("A1=",A1,"A2=",A2)
t=np.arange(0,2*np.pi,0.1)
x=A1.real*(e**(s1.real*t)) + A2.real*(e**(s2.real*t)) #ignoring imaginary parts
plt.xlabel("Period")
plt.ylabel("x(t)")
plt.plot(t,x)
plt.legend(['for a=0 & b=1'])
plt.show()
###Output
s1= 1j s2= -1j
A1= (0.5+0j) A2= (0.5+0j)
|
coursera_ml/a2_w1_s3_SparkML_SVM.ipynb | ###Markdown
This notebook is designed to run in an IBM Watson Studio default runtime (NOT the Watson Studio Apache Spark Runtime), as the default runtime with 1 vCPU is free of charge. Therefore, we install Apache Spark in local mode for test purposes only. Please don't use it in production.
In case you are facing issues, please read the following two documents first:
https://github.com/IBM/skillsnetwork/wiki/Environment-Setup
https://github.com/IBM/skillsnetwork/wiki/FAQ
Then, please feel free to ask:
https://coursera.org/learn/machine-learning-big-data-apache-spark/discussions/all
Please make sure to follow the guidelines before asking a question:
https://github.com/IBM/skillsnetwork/wiki/FAQ#im-feeling-lost-and-confused-please-help-me
If running outside Watson Studio, this should work as well. In case you are running in an Apache Spark context outside Watson Studio, please remove the Apache Spark setup in the first notebook cells.
###Code
from IPython.display import Markdown, display
def printmd(string):
display(Markdown('# <span style="color:red">'+string+'</span>'))
if ('sc' in locals() or 'sc' in globals()):
printmd('<<<<<!!!!! It seems that you are running in a IBM Watson Studio Apache Spark Notebook. Please run it in an IBM Watson Studio Default Runtime (without Apache Spark) !!!!!>>>>>')
!pip install pyspark==2.4.5
try:
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
except ImportError as e:
printmd('<<<<<!!!!! Please restart your kernel after installing Apache Spark !!!!!>>>>>')
sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
spark = SparkSession \
.builder \
.getOrCreate()
###Output
_____no_output_____
###Markdown
In case you want to learn how ETL is done, please run the following notebook first and update the file name below accordingly: https://github.com/IBM/coursera/blob/master/coursera_ml/a2_w1_s3_ETL.ipynb
###Code
# delete files from previous runs
!rm -f hmp.parquet*
# download the file containing the data in PARQUET format
!wget https://github.com/IBM/coursera/raw/master/hmp.parquet
# create a dataframe out of it
df = spark.read.parquet('hmp.parquet')
# register a corresponding query table
df.createOrReplaceTempView('df')
splits = df.randomSplit([0.8, 0.2])
df_train = splits[0]
df_test = splits[1]
from pyspark.ml.feature import StringIndexer
from pyspark.ml.feature import OneHotEncoder
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import MinMaxScaler
indexer = StringIndexer(inputCol="class", outputCol="label")
encoder = OneHotEncoder(inputCol="label", outputCol="labelVec")
vectorAssembler = VectorAssembler(inputCols=["x","y","z"],
outputCol="features")
normalizer = MinMaxScaler(inputCol="features", outputCol="features_norm")
from pyspark.ml.classification import LinearSVC
lsvc = LinearSVC(maxIter=10, regParam=0.1)
df.createOrReplaceTempView('df')
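# LinearSVC is a binary classifier, so keep only two of the activity classes
# before splitting into training and test sets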
df_two_class = spark.sql("select * from df where class in ('Use_telephone','Standup_chair')")
splits = df_two_class.randomSplit([0.8, 0.2])
df_train = splits[0]
df_test = splits[1]
from pyspark.ml import Pipeline
pipeline = Pipeline(stages=[indexer, encoder, vectorAssembler, normalizer,lsvc])
model = pipeline.fit(df_train)
prediction = model.transform(df_train)
from pyspark.ml.evaluation import BinaryClassificationEvaluator
# Evaluate model
evaluator = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction")
evaluator.evaluate(prediction)
prediction = model.transform(df_test)
evaluator = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction")
evaluator.evaluate(prediction)
###Output
_____no_output_____ |
F_Camp/c19_RNN_Hamlet.ipynb | ###Markdown
Load Data
###Code
import nltk, random
import numpy as np
import torch, torch.nn as nn
from torch.autograd import Variable
nltk.download("gutenberg")
raw = nltk.corpus.gutenberg.raw("shakespeare-hamlet.txt")
print(len(raw), '\n')
print(raw[:500])
###Output
162881
[The Tragedie of Hamlet by William Shakespeare 1599]
Actus Primus. Scoena Prima.
Enter Barnardo and Francisco two Centinels.
Barnardo. Who's there?
Fran. Nay answer me: Stand & vnfold
your selfe
Bar. Long liue the King
Fran. Barnardo?
Bar. He
Fran. You come most carefully vpon your houre
Bar. 'Tis now strook twelue, get thee to bed Francisco
Fran. For this releefe much thankes: 'Tis bitter cold,
And I am sicke at heart
Barn. Haue you had quiet Guard?
Fran. Not
###Markdown
Char to Dic: a character-level RNN rather than a word-level one
###Code
char2index = {}
index2char = []
for char in raw :
if char not in char2index.keys() :
char2index[char] = len(char2index)
index2char.append(char)
char2index
len(char2index)
char2vec = {}
eye = np.eye(len(char2index)) # identity matrix --> each row is a one-hot encoding
for item in char2index.keys() :
char2vec[item] = eye[char2index[item],:]
char2vec['a']
char2vec['b']
# convert every character of the text document into a one-hot data matrix
data = np.array([char2vec[char] for char in raw])
data.shape
len(data[0])
data.shape[1]
###Output
_____no_output_____
###Markdown
Define Model Parameters:
* input_size – The number of expected features in the input x
* hidden_size – The number of features in the hidden state h
* num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two RNNs together to form a stacked RNN, with the second RNN taking in outputs of the first RNN and computing the final results. Default: 1
* nonlinearity – The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'
* bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
* batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False
* dropout – If non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last layer, with dropout probability equal to dropout. Default: 0
* bidirectional – If True, becomes a bidirectional RNN. Default: False
###Code
class CharRNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size, num_layers):
super(CharRNN, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.output_size = output_size
self.num_layers = num_layers
        # self.rnn = nn.GRU(input_size, hidden_size, num_layers) # GRU does not accept a nonlinearity argument
# self.rnn = nn.RNN(input_size, hidden_size, num_layers, dropout=0.5)
self.rnn = nn.RNN(input_size, hidden_size, num_layers)
self.fc = nn.Linear(hidden_size, output_size)
def forward(self, input, hidden):
out, hidden = self.rnn(input.view(1,1,-1), hidden) # 1*1*67
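        # nn.RNN expects (seq_len, batch, input_size); we feed one one-hot
        # character at a time, so both seq_len and batch are 1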
out = self.fc(out.view(1,-1))
return out, hidden
def init_hidden(self):
        hidden = Variable(torch.zeros(self.num_layers, 1, self.hidden_size)).cuda() # initialize the hidden state to zeros
return hidden
# input_size, hidden_size, output_size, num_layers
model = CharRNN(data.shape[1], 500, data.shape[1], 1).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss = nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
Training
###Code
step = 100
num_epochs = 5
for epoch in range(num_epochs):
# starting point
sp = list(range(0, len(data) - 2 * step, step))
sp = np.add(sp, random.randint(0, step))
random.shuffle(sp)
print(len(sp))
for i in range(len(sp)) :
hidden = model.init_hidden()
cost = 0
for pos in range(sp[i], sp[i] + step):
X = Variable(torch.from_numpy(data[pos]).type(torch.FloatTensor)).cuda()
y = torch.from_numpy(data[pos+1]).cuda()
_, y = y.max(dim=0)
y = y.unsqueeze(0)
pred, hidden = model(X, hidden)
cost += loss(pred, Variable(y).cuda())
cost.backward()
        nn.utils.clip_grad_norm_(model.parameters(), 5) # clip gradients to prevent explosion
optimizer.step()
if (i+1) == len(sp):
print('Epoch [%d/%d], Iter [%d/%d] Loss: %.4f'%(epoch+1, num_epochs, i+1, len(sp), cost.item()))
start_num = 1
text = index2char[start_num]
model.eval()
hidden = model.init_hidden()
X_test = Variable(torch.from_numpy(data[start_num]).type(torch.FloatTensor)).cuda()
for i in range(500) :
pre, hidden = model(X_test, hidden)
    temp = pre.cpu().data.numpy()[0] # raw scores for the next character
best_5 = np.argsort(temp)[::-1][:5]
# softmax
temp = np.exp(temp[best_5])
temp = temp / temp.sum()
pre = np.random.choice(best_5, 1, p = temp)[0]
curr_char = index2char[pre]
text += curr_char
    # feed the sampled character back in as the next input
X_test = Variable(torch.from_numpy(char2vec[curr_char]).type(torch.FloatTensor)).cuda()
print("* Generated Text : \n", text)
###Output
* Generated Text :
That in what is the fore in to the King on a Poltendinge
Ham. A sight and mence this wat has whan in when with as the ferriner and
Polon. There holdon that the heare of tise wall seefe:
Anght and stone
Hor. I my Lort
Hor. There shat with has, wath his do a Prechiosens. But bout not mige thos of in mone same of your who eat ingre sore werce,
I know mide in heaue the fantring of on of in hend
To his not, where'd and serale: who han heere
A there windent of my Laer? she thy fould war
|
Udemy/Refactored_Py_DS_ML_Bootcamp-master/14-K-Nearest-Neighbors/01-K Nearest Neighbors with Python.ipynb | ###Markdown
K Nearest Neighbors with Python. You've been given a classified data set from a company! They've hidden the feature column names but have given you the data and the target classes. We'll try to use KNN to create a model that directly predicts a class for a new data point based off of the features. Let's grab it and use it! Import Libraries
###Code
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
Get the Data. Set index_col=0 to use the first column as the index.
###Code
df = pd.read_csv("Classified Data",index_col=0)
df.head()
###Output
_____no_output_____
###Markdown
Standardize the Variables. Because the KNN classifier predicts the class of a given test observation by identifying the observations that are nearest to it, the scale of the variables matters. Any variables that are on a large scale will have a much larger effect on the distance between the observations, and hence on the KNN classifier, than variables that are on a small scale.
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(df.drop('TARGET CLASS',axis=1))
scaled_features = scaler.transform(df.drop('TARGET CLASS',axis=1))
df_feat = pd.DataFrame(scaled_features,columns=df.columns[:-1])
df_feat.head()
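# Added check (illustration): after StandardScaler each feature should have
# mean close to 0 and standard deviation close to 1
df_feat.describe().loc[['mean', 'std']]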
###Output
_____no_output_____
###Markdown
Train Test Split
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(scaled_features,df['TARGET CLASS'],
test_size=0.30)
###Output
_____no_output_____
###Markdown
Using KNN. Remember that we are trying to come up with a model to predict whether someone will be in the TARGET CLASS or not. We'll start with k=1.
###Code
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train,y_train)
pred = knn.predict(X_test)
###Output
_____no_output_____
###Markdown
Predictions and Evaluations. Let's evaluate our KNN model!
###Code
from sklearn.metrics import classification_report,confusion_matrix
print(confusion_matrix(y_test,pred))
print(classification_report(y_test,pred))
###Output
precision recall f1-score support
0 0.91 0.87 0.89 143
1 0.89 0.92 0.90 157
avg / total 0.90 0.90 0.90 300
###Markdown
Choosing a K Value. Let's go ahead and use the elbow method to pick a good K value:
###Code
error_rate = []
# Will take some time
for i in range(1,40):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train,y_train)
pred_i = knn.predict(X_test)
error_rate.append(np.mean(pred_i != y_test))
plt.figure(figsize=(10,6))
plt.plot(range(1,40),error_rate,color='blue', linestyle='dashed', marker='o',
markerfacecolor='red', markersize=10)
plt.title('Error Rate vs. K Value')
plt.xlabel('K')
plt.ylabel('Error Rate')
###Output
_____no_output_____
###Markdown
Here we can see that after around K>23 the error rate just tends to hover around 0.06-0.05. Let's retrain the model with that and check the classification report!
###Code
# FIRST A QUICK COMPARISON TO OUR ORIGINAL K=1
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train,y_train)
pred = knn.predict(X_test)
print('WITH K=1')
print('\n')
print(confusion_matrix(y_test,pred))
print('\n')
print(classification_report(y_test,pred))
# NOW WITH K=23
knn = KNeighborsClassifier(n_neighbors=23)
knn.fit(X_train,y_train)
pred = knn.predict(X_test)
print('WITH K=23')
print('\n')
print(confusion_matrix(y_test,pred))
print('\n')
print(classification_report(y_test,pred))
###Output
WITH K=23
[[132 11]
[ 5 152]]
precision recall f1-score support
0 0.96 0.92 0.94 143
1 0.93 0.97 0.95 157
avg / total 0.95 0.95 0.95 300
|
examples/Tutorial 2.ipynb | ###Markdown
Riskfolio-Lib Tutorial: Part II: Portfolio Optimization with Risk Factors 1. Downloading the data:
###Code
import numpy as np
import pandas as pd
import yfinance as yf
yf.pdr_override()
pd.options.display.float_format = '{:.4%}'.format
# Date range
start = '2016-01-01'
end = '2019-12-30'
# Tickers of assets
assets = ['JCI', 'TGT', 'CMCSA', 'CPB', 'MO', 'NBL', 'APA', 'MMC', 'JPM',
'ZION', 'PSA', 'AGN', 'BAX', 'BMY', 'LUV', 'PCAR', 'TXT', 'DHR',
'DE', 'MSFT', 'HPQ', 'SEE', 'VZ', 'CNP', 'NI']
assets.sort()
# Tickers of factors
factors = ['MTUM', 'QUAL', 'VLUE', 'SIZE', 'USMV']
factors.sort()
tickers = assets + factors
tickers.sort()
# Downloading data
data = yf.download(tickers, start = start, end = end)
data = data.loc[:,('Adj Close', slice(None))]
data.columns = tickers
# Calculating returns
X = data[factors].pct_change().dropna()
Y = data[assets].pct_change().dropna()
display(X.head())
###Output
_____no_output_____
###Markdown
2. Estimating Mean Variance Portfolios 2.1 Estimating the loadings matrix.This part is just to visualize how Riskfolio-Lib calculate a loadings matrix.
###Code
import riskfolio.ParamsEstimation as pe
step = 'Forward' # Could be Forward or Backward stepwise regression
loadings = pe.loadings_matrix(X=X, Y=Y, stepwise=step)
loadings.style.format("{:.4f}").background_gradient(cmap='RdYlGn')
###Output
_____no_output_____
###Markdown
2.2 Calculating the portfolio that maximizes Sharpe ratio.
###Code
import riskfolio.Portfolio as pf
# Building the portfolio object
port = pf.Portfolio(returns=Y)
# Calculating optimum portfolio
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
port.factors = X
port.factors_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
# Estimate optimal portfolio:
model='FM' # Factor Model
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = False # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.T)
###Output
_____no_output_____
###Markdown
2.3 Plotting portfolio composition
###Code
import riskfolio.PlotFunctions as plf
# Plotting the composition of the portfolio
ax = plf.plot_pie(w=w, title='Sharpe FM Mean Variance', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
2.4 Calculate efficient frontier
###Code
points = 50 # Number of points of the frontier
frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist)
display(frontier.T.head())
# Plotting the efficient frontier
label = 'Max Risk Adjusted Return Portfolio' # Title of point
mu = port.mu_fm # Expected returns
cov = port.cov_fm # Covariance matrix
returns = port.returns_fm # Returns of the assets
ax = plf.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm,
rf=rf, alpha=0.01, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
# Plotting efficient frontier composition
ax = plf.plot_frontier_area(w_frontier=frontier, cmap="tab20", height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
3. Optimization with Constraints on Risk Factors 3.1 Statistics of Risk Factors
###Code
# Displaying factors statistics
display(loadings.min())
display(loadings.max())
display(X.corr())
###Output
_____no_output_____
###Markdown
3.2 Creating Constraints on Risk Factors
###Code
# Creating risk factors constraints
import riskfolio.ConstraintsFunctions as cf
constraints = {'Disabled': [False, False, False, False, False],
'Factor': ['MTUM', 'QUAL', 'SIZE', 'USMV', 'VLUE'],
'Sign': ['<=', '<=', '<=', '>=', '<='],
'Value': [-0.3, 0.8, 0.4, 0.8 , 0.9],}
constraints = pd.DataFrame(constraints)
display(constraints)
C, D = cf.factors_constraints(constraints, loadings)
port.ainequality = C
port.binequality = D
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.T)
###Output
_____no_output_____
###Markdown
To check that the constraints are satisfied, I will regress the portfolio returns on the risk factors:
###Code
import statsmodels.api as sm
X1 = sm.add_constant(X)
y = np.matrix(returns) * np.matrix(w)
results = sm.OLS(y, X1).fit()
coefs = results.params
print(coefs)
###Output
const 0.0237%
MTUM -30.0000%
QUAL 9.1258%
SIZE -0.0480%
USMV 101.2775%
VLUE 21.3800%
dtype: float64
###Markdown
3.3 Plotting portfolio composition
###Code
ax = plf.plot_pie(w=w, title='Sharpe FM Mean Variance', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
3.4 Calculate efficient frontier
###Code
points = 50 # Number of points of the frontier
frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist)
display(frontier.T.head())
# Plotting efficient frontier composition
ax = plf.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm,
rf=rf, alpha=0.01, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
# Plotting efficient frontier composition
ax = plf.plot_frontier_area(w_frontier=frontier, cmap="tab20", height=6, width=10, ax=None)
display(returns)
###Output
_____no_output_____
###Markdown
4. Estimating Portfolios Using Risk Factors with Other Risk Measures. In this part I will calculate optimal portfolios for several risk measures. I will find the portfolios that maximize the risk-adjusted return for all available risk measures. 4.1 Calculate Optimal Portfolios for Several Risk Measures. I will maintain the constraints on risk factors.
###Code
# Risk Measures available:
#
# 'MV': Standard Deviation.
# 'MAD': Mean Absolute Deviation.
# 'MSV': Semi Standard Deviation.
# 'FLPM': First Lower Partial Moment (Omega Ratio).
# 'SLPM': Second Lower Partial Moment (Sortino Ratio).
# 'CVaR': Conditional Value at Risk.
# 'WR': Worst Realization (Minimax)
# 'MDD': Maximum Drawdown of uncompounded returns (Calmar Ratio).
# 'ADD': Average Drawdown of uncompounded returns.
# 'CDaR': Conditional Drawdown at Risk of uncompounded returns.
# port.reset_linear_constraints() # To reset linear constraints (factor constraints)
rms = ['MV', 'MAD', 'MSV', 'FLPM', 'SLPM',
'CVaR', 'WR', 'MDD', 'ADD', 'CDaR']
w_s = pd.DataFrame([])
# When we use hist = True the risk measures all calculated
# using historical returns, while when hist = False the
# risk measures are calculated using the expected returns
# based on risk factor model: R = a + B * F
hist = False
for i in rms:
w = port.optimization(model=model, rm=i, obj=obj, rf=rf, l=l, hist=hist)
w_s = pd.concat([w_s, w], axis=1)
w_s.columns = rms
w_s.style.format("{:.2%}").background_gradient(cmap='YlGn')
import matplotlib.pyplot as plt
# Plotting a comparison of assets weights for each portfolio
fig = plt.gcf()
fig.set_figwidth(14)
fig.set_figheight(6)
ax = fig.subplots(nrows=1, ncols=1)
w_s.plot.bar(ax=ax)
w_s = pd.DataFrame([])
# When we use hist = True the risk measures all calculated
# using historical returns, while when hist = False the
# risk measures are calculated using the expected returns
# based on risk factor model: R = a + B * F
hist = True
for i in rms:
w = port.optimization(model=model, rm=i, obj=obj, rf=rf, l=l, hist=hist)
w_s = pd.concat([w_s, w], axis=1)
w_s.columns = rms
w_s.style.format("{:.2%}").background_gradient(cmap='YlGn')
import matplotlib.pyplot as plt
# Plotting a comparison of assets weights for each portfolio
fig = plt.gcf()
fig.set_figwidth(14)
fig.set_figheight(6)
ax = fig.subplots(nrows=1, ncols=1)
w_s.plot.bar(ax=ax)
###Output
_____no_output_____
###Markdown
Riskfolio-Lib Tutorial: __[Financionerioncios](https://financioneroncios.wordpress.com)__ __[Orenji](https://www.orenj-i.net)__ __[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)__ __[Dany Cajas](https://www.linkedin.com/in/dany-cajas/)__ Tutorial 2: Portfolio Optimization with Risk Factors using Stepwise Regression 1. Downloading the data:
###Code
import numpy as np
import pandas as pd
import yfinance as yf
import warnings
warnings.filterwarnings("ignore")
pd.options.display.float_format = '{:.4%}'.format
# Date range
start = '2016-01-01'
end = '2019-12-30'
# Tickers of assets
assets = ['JCI', 'TGT', 'CMCSA', 'CPB', 'MO', 'APA', 'MMC', 'JPM',
'ZION', 'PSA', 'BAX', 'BMY', 'LUV', 'PCAR', 'TXT', 'TMO',
'DE', 'MSFT', 'HPQ', 'SEE', 'VZ', 'CNP', 'NI', 'T', 'BA']
assets.sort()
# Tickers of factors
factors = ['MTUM', 'QUAL', 'VLUE', 'SIZE', 'USMV']
factors.sort()
tickers = assets + factors
tickers.sort()
# Downloading data
data = yf.download(tickers, start = start, end = end)
data = data.loc[:,('Adj Close', slice(None))]
data.columns = tickers
# Calculating returns
X = data[factors].pct_change().dropna()
Y = data[assets].pct_change().dropna()
display(X.head())
###Output
_____no_output_____
###Markdown
2. Estimating Mean Variance Portfolios 2.1 Estimating the loadings matrix.This part is just to visualize how Riskfolio-Lib calculates a loadings matrix.
###Code
import riskfolio as rp
step = 'Forward' # Could be Forward or Backward stepwise regression
loadings = rp.loadings_matrix(X=X, Y=Y, stepwise=step)
loadings.style.format("{:.4f}").background_gradient(cmap='RdYlGn')
###Output
_____no_output_____
###Markdown
2.2 Calculating the portfolio that maximizes Sharpe ratio.
###Code
# Building the portfolio object
port = rp.Portfolio(returns=Y)
# Calculating optimal portfolio
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
port.factors = X
port.factors_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
# Estimate optimal portfolio:
port.alpha = 0.05
model='FM' # Factor Model
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = False # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.T)
###Output
_____no_output_____
###Markdown
2.3 Plotting portfolio composition
###Code
# Plotting the composition of the portfolio
ax = rp.plot_pie(w=w, title='Sharpe FM Mean Variance', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
2.4 Calculate efficient frontier
###Code
points = 50 # Number of points of the frontier
frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist)
display(frontier.T.head())
# Plotting the efficient frontier
label = 'Max Risk Adjusted Return Portfolio' # Title of point
mu = port.mu_fm # Expected returns
cov = port.cov_fm # Covariance matrix
returns = port.returns_fm # Returns of the assets
ax = rp.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm,
rf=rf, alpha=0.05, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
# Plotting efficient frontier composition
ax = rp.plot_frontier_area(w_frontier=frontier, cmap="tab20", height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
3. Optimization with Constraints on Risk Factors 3.1 Statistics of Risk Factors
###Code
# Displaying factors statistics
display(loadings.min())
display(loadings.max())
display(X.corr())
###Output
_____no_output_____
###Markdown
3.2 Creating Constraints on Risk Factors
###Code
# Creating risk factors constraints
constraints = {'Disabled': [False, False, False, False, False],
'Factor': ['MTUM', 'QUAL', 'SIZE', 'USMV', 'VLUE'],
'Sign': ['<=', '<=', '<=', '>=', '<='],
'Value': [-0.3, 0.8, 0.4, 0.8 , 0.9],
'Relative Factor': ['', 'USMV', '', '', '']}
constraints = pd.DataFrame(constraints)
display(constraints)
C, D = rp.factors_constraints(constraints, loadings)
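# Note (informal reading, not from the original notebook): C and D encode the
# constraint table above as linear inequalities on the portfolio weights; they are
# attached to the optimization problem through port.ainequality / port.binequality below.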
port.ainequality = C
port.binequality = D
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.T)
###Output
_____no_output_____
###Markdown
To check if the constraints are verified, I will make a regression among the portfolio returns and risk factors:
###Code
import statsmodels.api as sm
X1 = sm.add_constant(X)
y = np.matrix(returns) * np.matrix(w)
results = sm.OLS(y, X1).fit()
coefs = results.params
print(coefs)
###Output
const 0.0229%
MTUM -30.0000%
QUAL 15.4387%
SIZE 1.8659%
USMV 92.6051%
VLUE 21.8624%
dtype: float64
###Markdown
3.3 Plotting portfolio composition
###Code
ax = rp.plot_pie(w=w, title='Sharpe FM Mean Variance', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
3.4 Calculate efficient frontier
###Code
points = 50 # Number of points of the frontier
frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist)
display(frontier.T.head())
# Plotting efficient frontier composition
ax = rp.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm,
rf=rf, alpha=0.05, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
# Plotting efficient frontier composition
ax = rp.plot_frontier_area(w_frontier=frontier, cmap="tab20", height=6, width=10, ax=None)
display(returns)
###Output
_____no_output_____
###Markdown
4. Estimating Portfolios Using Risk Factors with Other Risk MeasuresIn this part I will calculate optimal portfolios for several risk measures. I will find the portfolios that maximize the risk-adjusted return for all available risk measures. 4.1 Calculate Optimal Portfolios for Several Risk Measures.I will maintain the constraints on risk factors.
###Code
# Risk Measures available:
#
# 'MV': Standard Deviation.
# 'MAD': Mean Absolute Deviation.
# 'MSV': Semi Standard Deviation.
# 'FLPM': First Lower Partial Moment (Omega Ratio).
# 'SLPM': Second Lower Partial Moment (Sortino Ratio).
# 'CVaR': Conditional Value at Risk.
# 'EVaR': Entropic Value at Risk.
# 'WR': Worst Realization (Minimax)
# 'MDD': Maximum Drawdown of uncompounded cumulative returns (Calmar Ratio).
# 'ADD': Average Drawdown of uncompounded cumulative returns.
# 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
# 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
# 'UCI': Ulcer Index of uncompounded cumulative returns.
# port.reset_linear_constraints() # To reset linear constraints (factor constraints)
rms = ['MV', 'MAD', 'MSV', 'FLPM', 'SLPM', 'CVaR',
'EVaR', 'WR', 'MDD', 'ADD', 'CDaR', 'UCI', 'EDaR']
w_s = pd.DataFrame([])
# When we use hist = True the risk measures are all calculated
# using historical returns, while when hist = False the
# risk measures are calculated using the expected returns
# based on risk factor model: R = a + B * F
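# Rough illustration (an added assumption, not original code): with intercepts a,
# loadings B (assets x factors) and factor returns F, the factor-model asset returns
# behind hist=False are approximately R_fm = a + F @ B.T, which is presumably what
# port.returns_fm used earlier holds.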
hist = False
for i in rms:
w = port.optimization(model=model, rm=i, obj=obj, rf=rf, l=l, hist=hist)
w_s = pd.concat([w_s, w], axis=1)
w_s.columns = rms
w_s.style.format("{:.2%}").background_gradient(cmap='YlGn')
import matplotlib.pyplot as plt
# Plotting a comparison of assets weights for each portfolio
fig = plt.gcf()
fig.set_figwidth(14)
fig.set_figheight(6)
ax = fig.subplots(nrows=1, ncols=1)
w_s.plot.bar(ax=ax)
w_s = pd.DataFrame([])
# When we use hist = True the risk measures are all calculated
# using historical returns, while when hist = False the
# risk measures are calculated using the expected returns
# based on risk factor model: R = a + B * F
hist = True
for i in rms:
w = port.optimization(model=model, rm=i, obj=obj, rf=rf, l=l, hist=hist)
w_s = pd.concat([w_s, w], axis=1)
w_s.columns = rms
w_s.style.format("{:.2%}").background_gradient(cmap='YlGn')
import matplotlib.pyplot as plt
# Plotting a comparison of assets weights for each portfolio
fig = plt.gcf()
fig.set_figwidth(14)
fig.set_figheight(6)
ax = fig.subplots(nrows=1, ncols=1)
w_s.plot.bar(ax=ax)
###Output
_____no_output_____
###Markdown
Riskfolio-Lib Tutorial: __[Financionerioncios](https://financioneroncios.wordpress.com)____[Orenji](https://www.orenj-i.net)____[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)____[Dany Cajas](https://www.linkedin.com/in/dany-cajas/)__ Part II: Portfolio Optimization with Risk Factors using Stepwise Regression 1. Downloading the data:
###Code
import numpy as np
import pandas as pd
import yfinance as yf
import warnings
warnings.filterwarnings("ignore")
yf.pdr_override()
pd.options.display.float_format = '{:.4%}'.format
# Date range
start = '2016-01-01'
end = '2019-12-30'
# Tickers of assets
assets = ['JCI', 'TGT', 'CMCSA', 'CPB', 'MO', 'NBL', 'APA', 'MMC', 'JPM',
'ZION', 'PSA', 'BAX', 'BMY', 'LUV', 'PCAR', 'TXT', 'DHR',
'DE', 'MSFT', 'HPQ', 'SEE', 'VZ', 'CNP', 'NI']
assets.sort()
# Tickers of factors
factors = ['MTUM', 'QUAL', 'VLUE', 'SIZE', 'USMV']
factors.sort()
tickers = assets + factors
tickers.sort()
# Downloading data
data = yf.download(tickers, start = start, end = end)
data = data.loc[:,('Adj Close', slice(None))]
data.columns = tickers
# Calculating returns
X = data[factors].pct_change().dropna()
Y = data[assets].pct_change().dropna()
display(X.head())
###Output
_____no_output_____
###Markdown
2. Estimating Mean Variance Portfolios 2.1 Estimating the loadings matrix.This part is just to visualize how Riskfolio-Lib calculates a loadings matrix.
###Code
import riskfolio.ParamsEstimation as pe
step = 'Forward' # Could be Forward or Backward stepwise regression
loadings = pe.loadings_matrix(X=X, Y=Y, stepwise=step)
loadings.style.format("{:.4f}").background_gradient(cmap='RdYlGn')
###Output
_____no_output_____
###Markdown
2.2 Calculating the portfolio that maximizes Sharpe ratio.
###Code
import riskfolio.Portfolio as pf
# Building the portfolio object
port = pf.Portfolio(returns=Y)
# Calculating optimum portfolio
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
port.factors = X
port.factors_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
# Estimate optimal portfolio:
model='FM' # Factor Model
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = False # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.T)
###Output
_____no_output_____
###Markdown
2.3 Plotting portfolio composition
###Code
import riskfolio.PlotFunctions as plf
# Plotting the composition of the portfolio
ax = plf.plot_pie(w=w, title='Sharpe FM Mean Variance', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
2.3 Calculate efficient frontier
###Code
points = 50 # Number of points of the frontier
frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist)
display(frontier.T.head())
# Plotting the efficient frontier
label = 'Max Risk Adjusted Return Portfolio' # Title of point
mu = port.mu_fm # Expected returns
cov = port.cov_fm # Covariance matrix
returns = port.returns_fm # Returns of the assets
ax = plf.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm,
rf=rf, alpha=0.01, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
# Plotting efficient frontier composition
ax = plf.plot_frontier_area(w_frontier=frontier, cmap="tab20", height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
3. Optimization with Constraints on Risk Factors 3.1 Statistics of Risk Factors
###Code
# Displaying factors statistics
display(loadings.min())
display(loadings.max())
display(X.corr())
###Output
_____no_output_____
###Markdown
3.2 Creating Constraints on Risk Factors
###Code
# Creating risk factors constraints
import riskfolio.ConstraintsFunctions as cf
constraints = {'Disabled': [False, False, False, False, False],
'Factor': ['MTUM', 'QUAL', 'SIZE', 'USMV', 'VLUE'],
'Sign': ['<=', '<=', '<=', '>=', '<='],
'Value': [-0.3, 0.8, 0.4, 0.8 , 0.9],}
constraints = pd.DataFrame(constraints)
display(constraints)
C, D = cf.factors_constraints(constraints, loadings)
port.ainequality = C
port.binequality = D
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.T)
###Output
_____no_output_____
###Markdown
To check if the constraints are verified, I will make a regression among the portfolio returns and risk factors:
###Code
import statsmodels.api as sm
X1 = sm.add_constant(X)
y = np.matrix(returns) * np.matrix(w)
results = sm.OLS(y, X1).fit()
coefs = results.params
print(coefs)
###Output
const 0.0235%
MTUM -30.0000%
QUAL 9.5310%
SIZE -0.2347%
USMV 100.9194%
VLUE 21.5617%
dtype: float64
###Markdown
3.3 Plotting portfolio composition
###Code
ax = plf.plot_pie(w=w, title='Sharpe FM Mean Variance', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
3.4 Calculate efficient frontier
###Code
points = 50 # Number of points of the frontier
frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist)
display(frontier.T.head())
# Plotting efficient frontier composition
ax = plf.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm,
rf=rf, alpha=0.01, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
# Plotting efficient frontier composition
ax = plf.plot_frontier_area(w_frontier=frontier, cmap="tab20", height=6, width=10, ax=None)
display(returns)
###Output
_____no_output_____
###Markdown
4. Estimating Portfolios Using Risk Factors with Other Risk MeasuresIn this part I will calculate optimal portfolios for several risk measures. I will find the portfolios that maximize the risk-adjusted return for all available risk measures. 4.1 Calculate Optimal Portfolios for Several Risk Measures.I will maintain the constraints on risk factors.
###Code
# Risk Measures available:
#
# 'MV': Standard Deviation.
# 'MAD': Mean Absolute Deviation.
# 'MSV': Semi Standard Deviation.
# 'FLPM': First Lower Partial Moment (Omega Ratio).
# 'SLPM': Second Lower Partial Moment (Sortino Ratio).
# 'CVaR': Conditional Value at Risk.
# 'WR': Worst Realization (Minimax)
# 'MDD': Maximum Drawdown of uncompounded returns (Calmar Ratio).
# 'ADD': Average Drawdown of uncompounded returns.
# 'CDaR': Conditional Drawdown at Risk of uncompounded returns.
# port.reset_linear_constraints() # To reset linear constraints (factor constraints)
rms = ['MV', 'MAD', 'MSV', 'FLPM', 'SLPM',
'CVaR', 'WR', 'MDD', 'ADD', 'CDaR']
w_s = pd.DataFrame([])
# When we use hist = True the risk measures are all calculated
# using historical returns, while when hist = False the
# risk measures are calculated using the expected returns
# based on risk factor model: R = a + B * F
hist = False
for i in rms:
w = port.optimization(model=model, rm=i, obj=obj, rf=rf, l=l, hist=hist)
w_s = pd.concat([w_s, w], axis=1)
w_s.columns = rms
w_s.style.format("{:.2%}").background_gradient(cmap='YlGn')
import matplotlib.pyplot as plt
# Plotting a comparison of assets weights for each portfolio
fig = plt.gcf()
fig.set_figwidth(14)
fig.set_figheight(6)
ax = fig.subplots(nrows=1, ncols=1)
w_s.plot.bar(ax=ax)
w_s = pd.DataFrame([])
# When we use hist = True the risk measures are all calculated
# using historical returns, while when hist = False the
# risk measures are calculated using the expected returns
# based on risk factor model: R = a + B * F
hist = True
for i in rms:
w = port.optimization(model=model, rm=i, obj=obj, rf=rf, l=l, hist=hist)
w_s = pd.concat([w_s, w], axis=1)
w_s.columns = rms
w_s.style.format("{:.2%}").background_gradient(cmap='YlGn')
import matplotlib.pyplot as plt
# Plotting a comparison of assets weights for each portfolio
fig = plt.gcf()
fig.set_figwidth(14)
fig.set_figheight(6)
ax = fig.subplots(nrows=1, ncols=1)
w_s.plot.bar(ax=ax)
###Output
_____no_output_____
###Markdown
Riskfolio-Lib Tutorial: __[Financionerioncios](https://financioneroncios.wordpress.com)____[Orenji](https://www.orenj-i.net)____[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)____[Dany Cajas](https://www.linkedin.com/in/dany-cajas/)__ Part II: Portfolio Optimization with Risk Factors using Stepwise Regression 1. Downloading the data:
###Code
import numpy as np
import pandas as pd
import yfinance as yf
import warnings
warnings.filterwarnings("ignore")
yf.pdr_override()
pd.options.display.float_format = '{:.4%}'.format
# Date range
start = '2016-01-01'
end = '2019-12-30'
# Tickers of assets
assets = ['JCI', 'TGT', 'CMCSA', 'CPB', 'MO', 'NBL', 'APA', 'MMC', 'JPM',
'ZION', 'PSA', 'BAX', 'BMY', 'LUV', 'PCAR', 'TXT', 'DHR',
'DE', 'MSFT', 'HPQ', 'SEE', 'VZ', 'CNP', 'NI']
assets.sort()
# Tickers of factors
factors = ['MTUM', 'QUAL', 'VLUE', 'SIZE', 'USMV']
factors.sort()
tickers = assets + factors
tickers.sort()
# Downloading data
data = yf.download(tickers, start = start, end = end)
data = data.loc[:,('Adj Close', slice(None))]
data.columns = tickers
# Calculating returns
X = data[factors].pct_change().dropna()
Y = data[assets].pct_change().dropna()
display(X.head())
###Output
_____no_output_____
###Markdown
2. Estimating Mean Variance Portfolios 2.1 Estimating the loadings matrix.This part is just to visualize how Riskfolio-Lib calculates a loadings matrix.
###Code
import riskfolio.ParamsEstimation as pe
step = 'Forward' # Could be Forward or Backward stepwise regression
loadings = pe.loadings_matrix(X=X, Y=Y, stepwise=step)
loadings.style.format("{:.4f}").background_gradient(cmap='RdYlGn')
###Output
_____no_output_____
###Markdown
2.2 Calculating the portfolio that maximizes Sharpe ratio.
###Code
import riskfolio.Portfolio as pf
# Building the portfolio object
port = pf.Portfolio(returns=Y)
# Calculating optimum portfolio
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
port.factors = X
port.factors_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
# Estimate optimal portfolio:
model='FM' # Factor Model
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = False # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.T)
###Output
_____no_output_____
###Markdown
2.3 Plotting portfolio composition
###Code
import riskfolio.PlotFunctions as plf
# Plotting the composition of the portfolio
ax = plf.plot_pie(w=w, title='Sharpe FM Mean Variance', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
2.3 Calculate efficient frontier
###Code
points = 50 # Number of points of the frontier
frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist)
display(frontier.T.head())
# Plotting the efficient frontier
label = 'Max Risk Adjusted Return Portfolio' # Title of point
mu = port.mu_fm # Expected returns
cov = port.cov_fm # Covariance matrix
returns = port.returns_fm # Returns of the assets
ax = plf.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm,
rf=rf, alpha=0.01, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
# Plotting efficient frontier composition
ax = plf.plot_frontier_area(w_frontier=frontier, cmap="tab20", height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
3. Optimization with Constraints on Risk Factors 3.1 Statistics of Risk Factors
###Code
# Displaying factors statistics
display(loadings.min())
display(loadings.max())
display(X.corr())
###Output
_____no_output_____
###Markdown
3.2 Creating Constraints on Risk Factors
###Code
# Creating risk factors constraints
import riskfolio.ConstraintsFunctions as cf
constraints = {'Disabled': [False, False, False, False, False],
'Factor': ['MTUM', 'QUAL', 'SIZE', 'USMV', 'VLUE'],
'Sign': ['<=', '<=', '<=', '>=', '<='],
'Value': [-0.3, 0.8, 0.4, 0.8 , 0.9],}
constraints = pd.DataFrame(constraints)
display(constraints)
C, D = cf.factors_constraints(constraints, loadings)
port.ainequality = C
port.binequality = D
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.T)
###Output
_____no_output_____
###Markdown
To check if the constraints are verified, I will make a regression among the portfolio returns and risk factors:
###Code
import statsmodels.api as sm
X1 = sm.add_constant(X)
y = np.matrix(returns) * np.matrix(w)
results = sm.OLS(y, X1).fit()
coefs = results.params
print(coefs)
###Output
const 0.0235%
MTUM -30.0000%
QUAL 9.5310%
SIZE -0.2347%
USMV 100.9194%
VLUE 21.5617%
dtype: float64
###Markdown
3.3 Plotting portfolio composition
###Code
ax = plf.plot_pie(w=w, title='Sharpe FM Mean Variance', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
3.4 Calculate efficient frontier
###Code
points = 50 # Number of points of the frontier
frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist)
display(frontier.T.head())
# Plotting efficient frontier composition
ax = plf.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm,
rf=rf, alpha=0.01, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
# Plotting efficient frontier composition
ax = plf.plot_frontier_area(w_frontier=frontier, cmap="tab20", height=6, width=10, ax=None)
display(returns)
###Output
_____no_output_____
###Markdown
4. Estimating Portfolios Using Risk Factors with Other Risk MeasuresIn this part I will calculate optimal portfolios for several risk measures. I will find the portfolios that maximize the risk-adjusted return for all available risk measures. 4.1 Calculate Optimal Portfolios for Several Risk Measures.I will maintain the constraints on risk factors.
###Code
# Risk Measures available:
#
# 'MV': Standard Deviation.
# 'MAD': Mean Absolute Deviation.
# 'MSV': Semi Standard Deviation.
# 'FLPM': First Lower Partial Moment (Omega Ratio).
# 'SLPM': Second Lower Partial Moment (Sortino Ratio).
# 'CVaR': Conditional Value at Risk.
# 'WR': Worst Realization (Minimax)
# 'MDD': Maximum Drawdown of uncompounded returns (Calmar Ratio).
# 'ADD': Average Drawdown of uncompounded returns.
# 'CDaR': Conditional Drawdown at Risk of uncompounded returns.
# port.reset_linear_constraints() # To reset linear constraints (factor constraints)
rms = ['MV', 'MAD', 'MSV', 'FLPM', 'SLPM',
'CVaR', 'WR', 'MDD', 'ADD', 'CDaR']
w_s = pd.DataFrame([])
# When we use hist = True the risk measures are all calculated
# using historical returns, while when hist = False the
# risk measures are calculated using the expected returns
# based on risk factor model: R = a + B * F
hist = False
for i in rms:
w = port.optimization(model=model, rm=i, obj=obj, rf=rf, l=l, hist=hist)
w_s = pd.concat([w_s, w], axis=1)
w_s.columns = rms
w_s.style.format("{:.2%}").background_gradient(cmap='YlGn')
import matplotlib.pyplot as plt
# Plotting a comparison of assets weights for each portfolio
fig = plt.gcf()
fig.set_figwidth(14)
fig.set_figheight(6)
ax = fig.subplots(nrows=1, ncols=1)
w_s.plot.bar(ax=ax)
w_s = pd.DataFrame([])
# When we use hist = True the risk measures are all calculated
# using historical returns, while when hist = False the
# risk measures are calculated using the expected returns
# based on risk factor model: R = a + B * F
hist = True
for i in rms:
w = port.optimization(model=model, rm=i, obj=obj, rf=rf, l=l, hist=hist)
w_s = pd.concat([w_s, w], axis=1)
w_s.columns = rms
w_s.style.format("{:.2%}").background_gradient(cmap='YlGn')
import matplotlib.pyplot as plt
# Plotting a comparison of assets weights for each portfolio
fig = plt.gcf()
fig.set_figwidth(14)
fig.set_figheight(6)
ax = fig.subplots(nrows=1, ncols=1)
w_s.plot.bar(ax=ax)
###Output
_____no_output_____
###Markdown
Riskfolio-Lib Tutorial: __[Financionerioncios](https://financioneroncios.wordpress.com)____[Orenji](https://www.orenj-i.com)____[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)____[Dany Cajas](https://www.linkedin.com/in/dany-cajas/)__ Part II: Portfolio Optimization with Risk Factors 1. Downloading the data:
###Code
import numpy as np
import pandas as pd
import yfinance as yf
yf.pdr_override()
pd.options.display.float_format = '{:.4%}'.format
# Date range
start = '2016-01-01'
end = '2019-12-30'
# Tickers of assets
assets = ['JCI', 'TGT', 'CMCSA', 'CPB', 'MO', 'NBL', 'APA', 'MMC', 'JPM',
'ZION', 'PSA', 'AGN', 'BAX', 'BMY', 'LUV', 'PCAR', 'TXT', 'DHR',
'DE', 'MSFT', 'HPQ', 'SEE', 'VZ', 'CNP', 'NI']
assets.sort()
# Tickers of factors
factors = ['MTUM', 'QUAL', 'VLUE', 'SIZE', 'USMV']
factors.sort()
tickers = assets + factors
tickers.sort()
# Downloading data
data = yf.download(tickers, start = start, end = end)
data = data.loc[:,('Adj Close', slice(None))]
data.columns = tickers
# Calculating returns
X = data[factors].pct_change().dropna()
Y = data[assets].pct_change().dropna()
display(X.head())
###Output
_____no_output_____
###Markdown
2. Estimating Mean Variance Portfolios 2.1 Estimating the loadings matrix.This part is just to visualize how Riskfolio-Lib calculates a loadings matrix.
###Code
import riskfolio.ParamsEstimation as pe
step = 'Forward' # Could be Forward or Backward stepwise regression
loadings = pe.loadings_matrix(X=X, Y=Y, stepwise=step)
loadings.style.format("{:.4f}").background_gradient(cmap='RdYlGn')
###Output
_____no_output_____
###Markdown
2.2 Calculating the portfolio that maximizes Sharpe ratio.
###Code
import riskfolio.Portfolio as pf
# Building the portfolio object
port = pf.Portfolio(returns=Y)
# Calculating optimum portfolio
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
port.factors = X
port.factors_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
# Estimate optimal portfolio:
model='FM' # Factor Model
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = False # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.T)
###Output
_____no_output_____
###Markdown
2.3 Plotting portfolio composition
###Code
import riskfolio.PlotFunctions as plf
# Plotting the composition of the portfolio
ax = plf.plot_pie(w=w, title='Sharpe FM Mean Variance', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
2.3 Calculate efficient frontier
###Code
points = 50 # Number of points of the frontier
frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist)
display(frontier.T.head())
# Plotting the efficient frontier
label = 'Max Risk Adjusted Return Portfolio' # Title of point
mu = port.mu_fm # Expected returns
cov = port.cov_fm # Covariance matrix
returns = port.returns_fm # Returns of the assets
ax = plf.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm,
rf=rf, alpha=0.01, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
# Plotting efficient frontier composition
ax = plf.plot_frontier_area(w_frontier=frontier, cmap="tab20", height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
3. Optimization with Constraints on Risk Factors 3.1 Statistics of Risk Factors
###Code
# Displaying factors statistics
display(loadings.min())
display(loadings.max())
display(X.corr())
###Output
_____no_output_____
###Markdown
3.2 Creating Constraints on Risk Factors
###Code
# Creating risk factors constraints
import riskfolio.ConstraintsFunctions as cf
constraints = {'Disabled': [False, False, False, False, False],
'Factor': ['MTUM', 'QUAL', 'SIZE', 'USMV', 'VLUE'],
'Sign': ['<=', '<=', '<=', '>=', '<='],
'Value': [-0.3, 0.8, 0.4, 0.8 , 0.9],}
constraints = pd.DataFrame(constraints)
display(constraints)
C, D = cf.factors_constraints(constraints, loadings)
port.ainequality = C
port.binequality = D
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.T)
###Output
_____no_output_____
###Markdown
To check if the constraints are verified, I will make a regression among the portfolio returns and risk factors:
###Code
import statsmodels.api as sm
X1 = sm.add_constant(X)
y = np.matrix(returns) * np.matrix(w)
results = sm.OLS(y, X1).fit()
coefs = results.params
print(coefs)
###Output
const 0.0237%
MTUM -30.0000%
QUAL 9.1258%
SIZE -0.0480%
USMV 101.2775%
VLUE 21.3800%
dtype: float64
###Markdown
3.3 Plotting portfolio composition
###Code
ax = plf.plot_pie(w=w, title='Sharpe FM Mean Variance', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
3.4 Calculate efficient frontier
###Code
points = 50 # Number of points of the frontier
frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist)
display(frontier.T.head())
# Plotting efficient frontier composition
ax = plf.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm,
rf=rf, alpha=0.01, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
# Plotting efficient frontier composition
ax = plf.plot_frontier_area(w_frontier=frontier, cmap="tab20", height=6, width=10, ax=None)
display(returns)
###Output
_____no_output_____
###Markdown
4. Estimating Portfolios Using Risk Factors with Other Risk MeasuresIn this part I will calculate optimal portfolios for several risk measures. I will find the portfolios that maximize the risk-adjusted return for all available risk measures. 4.1 Calculate Optimal Portfolios for Several Risk Measures.I will maintain the constraints on risk factors.
###Code
# Risk Measures available:
#
# 'MV': Standard Deviation.
# 'MAD': Mean Absolute Deviation.
# 'MSV': Semi Standard Deviation.
# 'FLPM': First Lower Partial Moment (Omega Ratio).
# 'SLPM': Second Lower Partial Moment (Sortino Ratio).
# 'CVaR': Conditional Value at Risk.
# 'WR': Worst Realization (Minimax)
# 'MDD': Maximum Drawdown of uncompounded returns (Calmar Ratio).
# 'ADD': Average Drawdown of uncompounded returns.
# 'CDaR': Conditional Drawdown at Risk of uncompounded returns.
# port.reset_linear_constraints() # To reset linear constraints (factor constraints)
rms = ['MV', 'MAD', 'MSV', 'FLPM', 'SLPM',
'CVaR', 'WR', 'MDD', 'ADD', 'CDaR']
w_s = pd.DataFrame([])
# When we use hist = True the risk measures are all calculated
# using historical returns, while when hist = False the
# risk measures are calculated using the expected returns
# based on risk factor model: R = a + B * F
hist = False
for i in rms:
w = port.optimization(model=model, rm=i, obj=obj, rf=rf, l=l, hist=hist)
w_s = pd.concat([w_s, w], axis=1)
w_s.columns = rms
w_s.style.format("{:.2%}").background_gradient(cmap='YlGn')
import matplotlib.pyplot as plt
# Plotting a comparison of assets weights for each portfolio
fig = plt.gcf()
fig.set_figwidth(14)
fig.set_figheight(6)
ax = fig.subplots(nrows=1, ncols=1)
w_s.plot.bar(ax=ax)
w_s = pd.DataFrame([])
# When we use hist = True the risk measures are all calculated
# using historical returns, while when hist = False the
# risk measures are calculated using the expected returns
# based on risk factor model: R = a + B * F
hist = True
for i in rms:
w = port.optimization(model=model, rm=i, obj=obj, rf=rf, l=l, hist=hist)
w_s = pd.concat([w_s, w], axis=1)
w_s.columns = rms
w_s.style.format("{:.2%}").background_gradient(cmap='YlGn')
import matplotlib.pyplot as plt
# Plotting a comparison of assets weights for each portfolio
fig = plt.gcf()
fig.set_figwidth(14)
fig.set_figheight(6)
ax = fig.subplots(nrows=1, ncols=1)
w_s.plot.bar(ax=ax)
###Output
_____no_output_____
###Markdown
Riskfolio-Lib Tutorial: __[Financionerioncios](https://financioneroncios.wordpress.com)____[Orenji](https://www.orenj-i.net)____[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)____[Dany Cajas](https://www.linkedin.com/in/dany-cajas/)__ Tutorial 2: Portfolio Optimization with Risk Factors using Stepwise Regression 1. Downloading the data:
###Code
import numpy as np
import pandas as pd
import yfinance as yf
import warnings
warnings.filterwarnings("ignore")
yf.pdr_override()
pd.options.display.float_format = '{:.4%}'.format
# Date range
start = '2016-01-01'
end = '2019-12-30'
# Tickers of assets
assets = ['JCI', 'TGT', 'CMCSA', 'CPB', 'MO', 'APA', 'MMC', 'JPM',
'ZION', 'PSA', 'BAX', 'BMY', 'LUV', 'PCAR', 'TXT', 'TMO',
'DE', 'MSFT', 'HPQ', 'SEE', 'VZ', 'CNP', 'NI', 'T', 'BA']
assets.sort()
# Tickers of factors
factors = ['MTUM', 'QUAL', 'VLUE', 'SIZE', 'USMV']
factors.sort()
tickers = assets + factors
tickers.sort()
# Downloading data
data = yf.download(tickers, start = start, end = end)
data = data.loc[:,('Adj Close', slice(None))]
data.columns = tickers
# Calculating returns
X = data[factors].pct_change().dropna()
Y = data[assets].pct_change().dropna()
display(X.head())
###Output
_____no_output_____
###Markdown
2. Estimating Mean Variance Portfolios 2.1 Estimating the loadings matrix.This part is just to visualize how Riskfolio-Lib calculates a loadings matrix.
###Code
import riskfolio.ParamsEstimation as pe
step = 'Forward' # Could be Forward or Backward stepwise regression
loadings = pe.loadings_matrix(X=X, Y=Y, stepwise=step)
loadings.style.format("{:.4f}").background_gradient(cmap='RdYlGn')
###Output
_____no_output_____
###Markdown
2.2 Calculating the portfolio that maximizes Sharpe ratio.
###Code
import riskfolio.Portfolio as pf
# Building the portfolio object
port = pf.Portfolio(returns=Y)
# Calculating optimum portfolio
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
port.factors = X
port.factors_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
# Estimate optimal portfolio:
port.alpha = 0.05
model='FM' # Factor Model
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = False # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.T)
###Output
_____no_output_____
###Markdown
2.3 Plotting portfolio composition
###Code
import riskfolio.PlotFunctions as plf
# Plotting the composition of the portfolio
ax = plf.plot_pie(w=w, title='Sharpe FM Mean Variance', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
2.3 Calculate efficient frontier
###Code
points = 50 # Number of points of the frontier
frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist)
display(frontier.T.head())
# Plotting the efficient frontier
label = 'Max Risk Adjusted Return Portfolio' # Title of point
mu = port.mu_fm # Expected returns
cov = port.cov_fm # Covariance matrix
returns = port.returns_fm # Returns of the assets
ax = plf.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm,
rf=rf, alpha=0.05, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
# Plotting efficient frontier composition
ax = plf.plot_frontier_area(w_frontier=frontier, cmap="tab20", height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
3. Optimization with Constraints on Risk Factors 3.1 Statistics of Risk Factors
###Code
# Displaying factors statistics
display(loadings.min())
display(loadings.max())
display(X.corr())
###Output
_____no_output_____
###Markdown
3.2 Creating Constraints on Risk Factors
###Code
# Creating risk factors constraints
import riskfolio.ConstraintsFunctions as cf
constraints = {'Disabled': [False, False, False, False, False],
'Factor': ['MTUM', 'QUAL', 'SIZE', 'USMV', 'VLUE'],
'Sign': ['<=', '<=', '<=', '>=', '<='],
'Value': [-0.3, 0.8, 0.4, 0.8 , 0.9],}
constraints = pd.DataFrame(constraints)
display(constraints)
C, D = cf.factors_constraints(constraints, loadings)
port.ainequality = C
port.binequality = D
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.T)
###Output
_____no_output_____
###Markdown
To check if the constraints are verified, I will make a regression among the portfolio returns and risk factors:
###Code
import statsmodels.api as sm
X1 = sm.add_constant(X)
y = np.matrix(returns) * np.matrix(w)
results = sm.OLS(y, X1).fit()
coefs = results.params
print(coefs)
###Output
const 0.0229%
MTUM -30.0000%
QUAL 15.4395%
SIZE 1.8657%
USMV 92.6045%
VLUE 21.8622%
dtype: float64
###Markdown
3.3 Plotting portfolio composition
###Code
ax = plf.plot_pie(w=w, title='Sharpe FM Mean Variance', others=0.05, nrow=25, cmap = "tab20",
height=6, width=10, ax=None)
###Output
_____no_output_____
###Markdown
3.4 Calculate efficient frontier
###Code
points = 50 # Number of points of the frontier
frontier = port.efficient_frontier(model=model, rm=rm, points=points, rf=rf, hist=hist)
display(frontier.T.head())
# Plotting efficient frontier composition
ax = plf.plot_frontier(w_frontier=frontier, mu=mu, cov=cov, returns=returns, rm=rm,
rf=rf, alpha=0.05, cmap='viridis', w=w, label=label,
marker='*', s=16, c='r', height=6, width=10, ax=None)
# Plotting efficient frontier composition
ax = plf.plot_frontier_area(w_frontier=frontier, cmap="tab20", height=6, width=10, ax=None)
display(returns)
###Output
_____no_output_____
###Markdown
4. Estimating Portfolios Using Risk Factors with Other Risk MeasuresIn this part I will calculate optimal portfolios for several risk measures. I will find the portfolios that maximize the risk-adjusted return for all available risk measures. 4.1 Calculate Optimal Portfolios for Several Risk Measures.I will maintain the constraints on risk factors.
###Code
# Risk Measures available:
#
# 'MV': Standard Deviation.
# 'MAD': Mean Absolute Deviation.
# 'MSV': Semi Standard Deviation.
# 'FLPM': First Lower Partial Moment (Omega Ratio).
# 'SLPM': Second Lower Partial Moment (Sortino Ratio).
# 'CVaR': Conditional Value at Risk.
# 'EVaR': Entropic Value at Risk.
# 'WR': Worst Realization (Minimax)
# 'MDD': Maximum Drawdown of uncompounded cumulative returns (Calmar Ratio).
# 'ADD': Average Drawdown of uncompounded cumulative returns.
# 'CDaR': Conditional Drawdown at Risk of uncompounded cumulative returns.
# 'EDaR': Entropic Drawdown at Risk of uncompounded cumulative returns.
# 'UCI': Ulcer Index of uncompounded cumulative returns.
# port.reset_linear_constraints() # To reset linear constraints (factor constraints)
rms = ['MV', 'MAD', 'MSV', 'FLPM', 'SLPM', 'CVaR',
'EVaR', 'WR', 'MDD', 'ADD', 'CDaR', 'UCI', 'EDaR']
w_s = pd.DataFrame([])
# When we use hist = True the risk measures are all calculated
# using historical returns, while when hist = False the
# risk measures are calculated using the expected returns
# based on risk factor model: R = a + B * F
hist = False
for i in rms:
w = port.optimization(model=model, rm=i, obj=obj, rf=rf, l=l, hist=hist)
w_s = pd.concat([w_s, w], axis=1)
w_s.columns = rms
w_s.style.format("{:.2%}").background_gradient(cmap='YlGn')
import matplotlib.pyplot as plt
# Plotting a comparison of assets weights for each portfolio
fig = plt.gcf()
fig.set_figwidth(14)
fig.set_figheight(6)
ax = fig.subplots(nrows=1, ncols=1)
w_s.plot.bar(ax=ax)
w_s = pd.DataFrame([])
# When we use hist = True the risk measures are all calculated
# using historical returns, while when hist = False the
# risk measures are calculated using the expected returns
# based on risk factor model: R = a + B * F
hist = True
for i in rms:
w = port.optimization(model=model, rm=i, obj=obj, rf=rf, l=l, hist=hist)
w_s = pd.concat([w_s, w], axis=1)
w_s.columns = rms
w_s.style.format("{:.2%}").background_gradient(cmap='YlGn')
import matplotlib.pyplot as plt
# Plotting a comparison of assets weights for each portfolio
fig = plt.gcf()
fig.set_figwidth(14)
fig.set_figheight(6)
ax = fig.subplots(nrows=1, ncols=1)
w_s.plot.bar(ax=ax)
###Output
_____no_output_____ |
exploring feature importances.ipynb | ###Markdown
Exploring feature importances Data from the Kaggle competition ["Two Sigma Connect: Rental Listing Inquiries"](https://www.kaggle.com/c/two-sigma-connect-rental-listing-inquiries) Goal: predict the popularity ("interest_level") of an apartment from its characteristics (some of the information available in the Kaggle dataset is not used here). The apartments are located in New York.
###Code
features = ['bathrooms','bedrooms','price','longitude','latitude']
df = pd.read_json("data/train.json")
X = df[features]
y = df['interest_level']
y = y.replace('low',1)
y = y.replace('medium',2)
y = y.replace('high',3)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.33, random_state=42)
X.head()
sns.countplot(y)
###Output
_____no_output_____
###Markdown
This is an imbalanced-class problem (class 1 is the majority class).
###Code
def get_score_for_each_class(model, X, y):
"""
function to compute an accuracy score for each class
"""
for c in [1,2,3]:
idx = y[y==c].index
y_pred = model.predict(X.loc[idx])
print (f'accuracy for y={c} : {accuracy_score(y_pred, y.loc[idx])}')
###Output
_____no_output_____
###Markdown
Feature importances from the xgboost library We will first compare the different feature importances available directly in the xgboost library.
###Code
hyperparam_xgb = {'silent': 1, 'subsample': 0.7,
'n_estimators': 100, 'max_depth': 6, 'objective':'multi:softmax','num_class':3}
xgb_model = xgb.XGBClassifier(**hyperparam_xgb)
xgb_model.fit(X_train, y_train)
pred_train = pd.Series(xgb_model.predict(X_train))
print(f"score sur l'ensemble de train {accuracy_score(pred_train, y_train)}")
pred_val = pd.Series(xgb_model.predict(X_val))
print(f"score sur l'ensemble de test {accuracy_score(pred_val, y_val)}")
print ("accuracy pour l'ensemble de train pour chaque classe : ")
get_score_for_each_class(xgb_model, X_train, y_train)
print ("accuracy pour l'ensemble de test pour chaque classe : ")
get_score_for_each_class(xgb_model, X_val, y_val)
###Output
accuracy pour l'ensemble de train pour chaque classe :
accuracy for y=1 : 0.9634454513767864
accuracy for y=2 : 0.324453915823122
accuracy for y=3 : 0.3750479846449136
accuracy pour l'ensemble de test pour chaque classe :
accuracy for y=1 : 0.9331980232968584
accuracy for y=2 : 0.20585864015049718
accuracy for y=3 : 0.22285251215559157
###Markdown
In short: performance is good mainly for the majority class 1, which is therefore much easier to predict. Methods (and metrics) suited to imbalanced-class problems would be needed to get better results. feature importance : weight (what is used in model.feature_importances_)= The number of times a feature is used to split the data across all trees
###Code
xgb.plot_importance(xgb_model)
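# xgboost's plot_importance defaults to importance_type='weight', i.e. the
# split-count importance described above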
plt.title("feature importance : weight")
###Output
_____no_output_____
###Markdown
feature importance : cover= The average coverage across all splits the feature is used in, i.e. the number of times a feature is used to split the data across all trees, weighted by the number of training data points that go through those splits.
###Code
xgb.plot_importance(xgb_model, importance_type="cover")
plt.title("feature importance : cover")
###Output
_____no_output_____
###Markdown
feature importance : gain= The average gain across all splits the feature is used in, i.e. the average training loss reduction gained when using a feature for splitting.
###Code
xgb.plot_importance(xgb_model, importance_type="gain")
plt.title("feature importance : gain")
###Output
_____no_output_____
###Markdown
Conclusion: the different feature importances all give different results! So we do not know which one to trust. Feature importances from sklearn's RandomForest
###Code
rf_model = RandomForestClassifier(
n_estimators=100,
min_samples_leaf=1,
n_jobs=-1,
oob_score=True)
rf_model.fit(X_train, y_train)
pred_train = pd.Series(rf_model.predict(X_train))
print(f"score sur l'ensemble de train {accuracy_score(pred_train, y_train)}")
pred_val = pd.Series(rf_model.predict(X_val))
print(f"score sur l'ensemble de test {accuracy_score(pred_val, y_val)}")
print ("accuracy pour l'ensemble de train pour chaque classe : ")
get_score_for_each_class(rf_model, X_train, y_train)
print ("accuracy pour l'ensemble de test pour chaque classe : ")
get_score_for_each_class(rf_model, X_val, y_val)
###Output
accuracy pour l'ensemble de train pour chaque classe :
accuracy for y=1 : 0.9713750435691879
accuracy for y=2 : 0.82059136920618
accuracy for y=3 : 0.8142034548944338
accuracy pour l'ensemble de test pour chaque classe :
accuracy for y=1 : 0.8668372749735263
accuracy for y=2 : 0.3104004299919377
accuracy for y=3 : 0.2941653160453809
###Markdown
With these hyperparameters there is more overfitting with RandomForest than with Xgboost, but the results are better for the under-represented classes 2 and 3. Feature importances: Mean Decrease in Impurity (MDI) Also called Gini importance. In sklearn it is the total decrease in node impurity (weighted by the probability of reaching that node, which is approximated by the proportion of samples reaching that node) averaged over all trees of the ensemble. __Corresponds to the "gain" importance of xgboost.__
###Code
def plot_mdi_importance(rf_model, features):
tree_feature_importances = (rf_model.feature_importances_)
sorted_idx = tree_feature_importances.argsort()
y_ticks = np.arange(0, len(features))
fig, ax = plt.subplots()
ax.barh(y_ticks, tree_feature_importances[sorted_idx])
ax.set_yticklabels(pd.Series(features)[sorted_idx])
ax.set_yticks(y_ticks)
ax.set_title("Random Forest Feature Importances (MDI)")
fig.tight_layout()
plt.show()
plot_mdi_importance(rf_model, features)
###Output
_____no_output_____
###Markdown
The importances are different yet again... note, however, that they cannot be compared directly with those of the xgboost model: the models are different, and since __feature importances depend on the model__, it is entirely possible for different models to rely on different variables! Permutation importance For a given dataset and a feature i, we compare the performance obtained on: - the data as-is - the data in which the values of feature i are shuffled across rows. This importance is available in sklearn since version 0.22.
###Code
X.head()
def plot_permutation_importance(rf_model, X, y):
result = permutation_importance(rf_model, X, y, n_repeats=5,
random_state=42, n_jobs=1)
sorted_idx = result.importances_mean.argsort()
fig, ax = plt.subplots()
ax.boxplot(result.importances[sorted_idx].T,
vert=False, labels=X.columns[sorted_idx])
ax.set_title("Permutation Importances")
fig.tight_layout()
plt.show()
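# A minimal hand-rolled sketch of the same idea (illustrative only; this helper is
# not part of the original notebook and is not used by the plots below): shuffle one
# feature's column and measure the average drop in accuracy.
def manual_permutation_importance(model, X, y, feature, n_repeats=5):
    import numpy as np
    from sklearn.metrics import accuracy_score
    base = accuracy_score(y, model.predict(X))  # score on the intact data
    drops = []
    for _ in range(n_repeats):
        X_perm = X.copy()
        X_perm[feature] = np.random.permutation(X_perm[feature].values)
        drops.append(base - accuracy_score(y, model.predict(X_perm)))
    return float(np.mean(drops))  # average score drop = permutation importance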
plot_permutation_importance(rf_model, X_train, y_train)
plot_permutation_importance(rf_model, X_val, y_val)
###Output
_____no_output_____
###Markdown
Important remarks: The values differ between the train and test sets. In particular, on the test set the importance values are very small: for values below 0.10, the feature has little influence on the performance metric used. Here one might conclude that longitude+latitude cause overfitting and should be removed, but doing so gives worse results:
###Code
X_train2 = X_train[['bathrooms','bedrooms','price']]
X_val2 = X_val[['bathrooms','bedrooms','price']]
rf_model2 =clone(rf_model)
rf_model2.fit(X_train2, y_train)
pred_train = pd.Series(rf_model2.predict(X_train2))
print(f"train : {+accuracy_score(pred_train, y_train)}")
get_score_for_each_class(rf_model2, X_train2, y_train)
pred_val = pd.Series(rf_model2.predict(X_val2))  # recompute predictions with the reduced-feature model
print(f"test :{accuracy_score(pred_val, y_val)}")
get_score_for_each_class(rf_model2, X_val2, y_val)
###Output
train : 0.7362770300922425
accuracy for y=1 : 0.9641861275705821
accuracy for y=2 : 0.2153702717101758
accuracy for y=3 : 0.22955854126679462
test :0.6963222201756002
accuracy for y=1 : 0.9324920578891635
accuracy for y=2 : 0.12416017199677506
accuracy for y=3 : 0.14019448946515398
###Markdown
Indeed, removing them reduces overfitting... but it also degrades performance, especially for classes 2 and 3. - permutation importance, however, looks at overall performance: we might want to evaluate performance with a specific metric, for example one giving more weight to the small classes (this is possible!) - and the latitude and longitude variables should probably be permuted together: an arbitrary longitude does not necessarily make sense with an arbitrary latitude Random feature We can try adding a random variable to see how important it looks in the predictions. Ideally its importance should be low...
###Code
X_train3 = X_train.copy()
X_train3['random'] = np.random.random(size=len(X_train3))
X_val3 = X_val.copy()
X_val3['random'] = np.random.random(size=len(X_val3))
rf_model3 = RandomForestClassifier(
n_estimators=100,
min_samples_leaf=1,
n_jobs=-1,
oob_score=True)
rf_model3.fit(X_train3, y_train)
plot_mdi_importance(rf_model3, features+['random'])
###Output
_____no_output_____
###Markdown
Yet the random variable looks important! This is because: - it is a continuous variable, which shows up more easily in the tree splits and can therefore appear more important under feature importances that depend on the number of splits, as here - the MDI importance depends on the training data, not on the test data
###Code
plot_permutation_importance(rf_model3, X_train3, y_train)
plot_permutation_importance(rf_model3, X_val3, y_val)
###Output
_____no_output_____
###Markdown
With permutation importance, the variable still seems to have some influence on the training data, but not on the test data: it is most likely used essentially for overfitting.
###Code
explainer = shap.TreeExplainer(rf_model3)
X_train3_sample = X_train3.sample(1000)
shap_values_train3 = explainer.shap_values(X_train3_sample.values)
shap.summary_plot(shap_values_train3, X_train3_sample, plot_type="bar")
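# For a multi-class model, the bar summary plot shows (roughly) the mean absolute
# SHAP value of each feature, broken down by class.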
explainer = shap.TreeExplainer(rf_model3)
X_val3_sample = X_val3.sample(1000)
shap_values_val3 = explainer.shap_values(X_val3_sample.values)
shap.summary_plot(shap_values_val3, X_val3_sample, plot_type="bar")
###Output
_____no_output_____
###Markdown
Correlations between variables Duplicating a variable We test the effect of correlation between variables by duplicating a variable.
###Code
rf_model_bis = RandomForestClassifier(
n_estimators=100,
min_samples_leaf=1,
n_jobs=-1,
oob_score=True)
X_train_bis = X_train.copy()
X_val_bis = X_val.copy()
X_train_bis['bedrooms_bis'] = X_train_bis['bedrooms']
X_val_bis['bedrooms_bis'] = X_val_bis['bedrooms']
rf_model_bis.fit(X_train_bis, y_train)
plot_permutation_importance(rf_model_bis, X_val_bis, y_val)
explainer = shap.TreeExplainer(rf_model_bis)
X_val_sample = X_val_bis.sample(1000)
shap_values_val = explainer.shap_values(X_val_sample.values)
shap.summary_plot(shap_values_val, X_val_sample, plot_type="bar")
###Output
_____no_output_____
###Markdown
By duplicating "bedrooms", the variable can look less important, because the importance is shared between bedrooms and bedrooms_bis. Permuting several columns (several columns are permuted to assess their grouped effect, but the variables of a given observation are not permuted "together") The rfpimp library https://github.com/parrt/random-forest-importances specializes in permutation importance. A longer discussion of feature importances (which inspired this notebook) is available at https://explained.ai/rf-importance/index.html In particular, it allows several variables to be permuted at the same time.
###Code
I = rfpimp.importances(rf_model_bis, X_val_bis, y_val)
rfpimp.plot_importances(I)
I = rfpimp.importances(rf_model_bis, X_val_bis, y_val, features=['price','latitude','longitude','bathrooms',['bedrooms','bedrooms_bis']])
rfpimp.plot_importances(I)
I = rfpimp.importances(rf_model_bis, X_val_bis, y_val, features=['price',['latitude','longitude'],'bathrooms',['bedrooms','bedrooms_bis']])
rfpimp.plot_importances(I)
###Output
_____no_output_____
###Markdown
__When a variable's importance is low, it means:__ - either the variable is not important - or the variable is highly correlated with one or more other variables Duplicating a variable with added noise
###Code
rf_model_bis = RandomForestClassifier(
n_estimators=100,
min_samples_leaf=1,
n_jobs=-1,
oob_score=True)
X_train_bis = X_train.copy()
X_val_bis = X_val.copy()
X_train_bis['bedrooms_bis'] = X_train_bis['bedrooms']+np.random.random(len(X_train_bis)) * 1
X_val_bis['bedrooms_bis'] = X_val_bis['bedrooms']+np.random.random(len(X_val_bis)) * 1
rf_model_bis.fit(X_train_bis, y_train)
plot_permutation_importance(rf_model_bis, X_val_bis, y_val)
###Output
_____no_output_____
###Markdown
Correlations between the variables
###Code
rfpimp.plot_corr_heatmap(X_train_bis, figsize=(5,5), value_fontsize=8)
###Output
_____no_output_____
###Markdown
Using another metric
###Code
def plot_permutation_importance2(rf_model, X, y):
result = permutation_importance(rf_model, X, y, n_repeats=10,
random_state=42, n_jobs=1,
scoring='balanced_accuracy')
sorted_idx = result.importances_mean.argsort()
fig, ax = plt.subplots()
ax.boxplot(result.importances[sorted_idx].T,
vert=False, labels=X.columns[sorted_idx])
ax.set_title("Permutation Importances")
fig.tight_layout()
plt.show()
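# balanced_accuracy is the average of recall over the classes, so the small
# classes 2 and 3 weigh as much as the majority class 1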
# metric: "accuracy"
plot_permutation_importance(rf_model, X_val, y_val)
# metric: "balanced_accuracy"
plot_permutation_importance2(rf_model, X_val, y_val)
###Output
_____no_output_____ |
Copy_of_Sentiment_Classfication.ipynb | ###Markdown
###Code
#connecting to google drive
from google.colab import drive
drive.mount('/content/drive')
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 1.x
except Exception:
pass
import tensorflow as tf
tf.__version__
# importing libraries
import pandas as pd
import numpy as np
# reading data from google drive
features_sentiment = pd.read_csv('./drive/My Drive/Sentiment_analysis/Data/Copy of bert_features_sentiment.csv', sep=',', header = None)
features_stance = pd.read_csv('./drive/My Drive/Sentiment_analysis/Data/Copy of bert_features_stance.csv', sep=',', header = None)
twitter_stance = pd.read_csv('./drive/My Drive/Sentiment_analysis/Data/Copy of twitter_stance.csv', sep=',', header = None)
data_sentiment = pd.read_csv('./drive/My Drive/Sentiment_analysis/Data/Copy of data_sentiment.csv', sep=',', header = None)
# column 3 of data_sentiment is assumed to hold the 1-4 sentiment labels
sentiment = data_sentiment[3]
sentiment = np.array(sentiment)
#load the vectors embedded by BERT
vector_bert = pd.read_csv('./drive/My Drive/Colab Notebooks/bert_features_sentiment.csv', sep=',', header = None)
vector_bert
label_new = [0]*len(sentiment)
for i in range(len(sentiment)):
if sentiment[i] == 1:
label_new[i]=0
elif sentiment[i] == 2:
label_new[i]=1
elif sentiment[i] == 3:
label_new[i]=2
elif sentiment[i] == 4:
label_new[i]=3
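# the 1-4 sentiment labels are shifted to 0-3 so they can serve as class indices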
import keras
import numpy as np
from math import floor
import tensorflow as tf
from keras import backend as K
from keras.models import Sequential, Model
from keras.layers import Input, LSTM, RepeatVector, Layer
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.optimizers import SGD, RMSprop, Adam
from keras import objectives
class VAE:
def __init__(self, input_dim, latent_dim, hidden_dims, batch_size, optimizer='rmsprop', epsilon_std = .01):
self.input_dim = input_dim
self.latent_dim = latent_dim
self.hidden_dims = hidden_dims
self.batch_size = batch_size
self.optimizer = optimizer
self.epsilon_std = epsilon_std
self.build_model()
def build_model(self):
input_layer = Input(batch_shape=(self.batch_size, self.input_dim))
self.build_encoder(input_layer)
self.build_decoder()
self.autoencoder = Model(input_layer, self.x_decoded_mean)
vae_loss = self._get_vae_loss()
self.autoencoder.compile(optimizer=self.optimizer, loss=vae_loss)
def build_encoder(self, input_layer):
prev_layer = input_layer
for q in self.hidden_dims:
hidden = Dense(q, activation='relu')(prev_layer)
prev_layer = hidden
self._build_z_layers(hidden)
self.encoder = Model(input_layer, self.z_mean)
def _build_z_layers(self, hidden_layer):
self.z_mean = Dense(self.latent_dim)(hidden_layer)
self.z_log_sigma = Dense(self.latent_dim)(hidden_layer)
def build_decoder(self):
z = self._get_sampling_layer()
prev_layer = z
for q in self.hidden_dims:
hidden = Dense(q, activation='relu')(prev_layer)
prev_layer = hidden
self.x_decoded_mean = Dense(self.input_dim, activation='sigmoid')(prev_layer)
# Build the stand-alone generator
generator_input = Input((self.latent_dim,))
prev_layer = generator_input
for q in self.hidden_dims:
hidden = Dense(q, activation='relu')(prev_layer)
prev_layer = hidden
gen_x_decoded_mean = Dense(self.input_dim, activation='sigmoid')(prev_layer)
self.generator = Model(generator_input, gen_x_decoded_mean)
def _get_sampling_layer(self):
def sampling(args):
z_mean, z_log_sigma = args
epsilon = K.random_normal(shape=(self.batch_size, self.latent_dim),
mean=0., stddev=self.epsilon_std)
return z_mean + z_log_sigma * epsilon
return Lambda(sampling, output_shape=(self.latent_dim,))([self.z_mean, self.z_log_sigma])
def _get_vae_loss(self):
z_log_sigma = self.z_log_sigma
z_mean = self.z_mean
def vae_loss(x, x_decoded_mean):
reconstruction_loss = objectives.mse(x, x_decoded_mean)
kl_loss = - 0.5 * K.mean(1 + z_log_sigma - K.square(z_mean) - K.exp(z_log_sigma))
return reconstruction_loss + kl_loss
return vae_loss
class VAE_LSTM(VAE):
def __init__(self, input_dim, latent_dim, hidden_dims, timesteps, batch_size, optimizer='rmsprop', epsilon_std = .01):
self.input_dim = input_dim
self.latent_dim = latent_dim
self.hidden_dims = hidden_dims
self.batch_size = batch_size
self.timesteps = timesteps
self.optimizer = optimizer
self.epsilon_std = epsilon_std
self.build_model()
def build_model(self):
input_layer = Input(shape=(self.timesteps, self.input_dim,))
self.build_encoder(input_layer)
self.build_decoder()
self.autoencoder = Model(input_layer, self.x_decoded_mean)
vae_loss = self._get_vae_loss()
self.autoencoder.compile(optimizer=self.optimizer, loss=vae_loss)
def build_encoder(self, input_layer):
prev_layer = input_layer
for q in self.hidden_dims:
hidden = LSTM(q)(prev_layer)
prev_layer = hidden
self._build_z_layers(hidden)
self.encoder = Model(input_layer, self.z_mean)
def build_decoder(self):
z = self._get_sampling_layer()
prev_layer = RepeatVector(self.timesteps)(z)
for q in self.hidden_dims:
hidden = LSTM(q, return_sequences=True)(prev_layer)
prev_layer = hidden
self.x_decoded_mean = LSTM(self.input_dim, return_sequences=True)(prev_layer)
# Build the stand-alone generator
generator_input = Input((self.latent_dim,))
prev_layer = RepeatVector(self.timesteps)(generator_input)
for q in self.hidden_dims:
hidden = LSTM(q, return_sequences=True)(prev_layer)
prev_layer = hidden
gen_x_decoded_mean = LSTM(self.input_dim, return_sequences=True)(prev_layer)
self.generator = Model(generator_input, gen_x_decoded_mean)
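# The BERT sentence vectors below (assumed to be 1024-dimensional, based on the reshape
# to [N, 1, 1024]) are treated as sequences of a single timestep, so each LSTM layer
# effectively sees one vector per sample.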
N = len(vector_bert)
train = np.array(vector_bert)
train = train.reshape([N,1,1024])
batch_size = 50
epochs = 300
input_dim = train.shape[-1]
timesteps = train.shape[1]
model = VAE_LSTM(input_dim=input_dim, latent_dim=100, hidden_dims=[32], timesteps=timesteps, batch_size=batch_size)
vae, encoder, generator = model.autoencoder, model.encoder, model.generator
#tf.compat.v1.disable_eager_execution()
vae.fit(train[:floor(N/batch_size)*batch_size],train[:floor(N/batch_size)*batch_size], shuffle=True, epochs=epochs, batch_size=batch_size, validation_data=(train[N-1-batch_size:N-1],train[N-1-batch_size:N-1]))
###Output
WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
###Markdown
###Code
import pandas as pd
vector_vae = encoder.predict(np.array(train), batch_size = batch_size)
pd.DataFrame(vector_vae).to_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/vector_vae.csv', index=False)
vector_vae = pd.read_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/vector_vae.csv')
#there are only 2 samples in the class with label 0, so ADASYN cannot be applied to it
#just remove those samples before oversampling
vector_vae = np.array(vector_vae)
index = []
for i in range(np.size(label_new, 0)):
if label_new[i] == 0:
index.append(i)
if len(index) != 0:
vector_vae = np.delete(vector_vae, index, axis=0)
label_new = np.delete(label_new, index, axis=0)
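# ADASYN (Adaptive Synthetic Sampling) oversamples the minority classes by generating
# synthetic examples, focusing on samples that are harder to learn (those with many
# majority-class neighbours). It needs more than a couple of samples per class, which
# is why the nearly empty class 0 was removed above.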
from imblearn.over_sampling import ADASYN
ada = ADASYN(random_state=42)
vector_vae_balanced, label_new_balanced = ada.fit_resample(vector_vae, label_new)
import pandas as pd
pd.DataFrame(vector_vae_balanced).to_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/vector_vae_balanced.csv', index=False)
pd.DataFrame(label_new_balanced).to_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/label_new_balanced.csv', index=False)
###Output
_____no_output_____
###Markdown
###Code
from torch import nn
from torch.utils.data import Dataset
import numpy as np
import torch
import pandas as pd
from torch import nn, optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from sklearn import preprocessing
from sklearn.model_selection import train_test_split, cross_validate, StratifiedKFold
class Net(nn.Module):
def __init__(self, in_dim, n_hidden_1, n_hidden_2, out_dim):
super(Net, self).__init__()
self.droprate = 0.95
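# Note: a dropout probability of 0.95 is unusually aggressive; it zeroes 95% of the
# activations during training and acts as very strong regularisation.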
self.layer1 = nn.Sequential(nn.Linear(in_dim, n_hidden_1), nn.Dropout(p=self.droprate), nn.BatchNorm1d(n_hidden_1), nn.ReLU(True))
self.layer2 = nn.Sequential(nn.Linear(n_hidden_1, n_hidden_2), nn.Dropout(p=self.droprate), nn.BatchNorm1d(n_hidden_2), nn.ReLU(True))
self.layer3 = nn.Sequential(nn.Linear(n_hidden_2, out_dim), nn.Dropout(p=self.droprate))
def forward(self, x):
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
return x
class CustomDataset(Dataset):
def __init__(self):
self.data = torch.from_numpy(pd.read_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/vector_vae_balanced.csv').values).float()
self.labels = torch.from_numpy(pd.read_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/label_new_balanced.csv').values).float()
def __len__(self):
return len(self.labels)
def __getitem__(self, idx):
sample = self.data[idx,:]
labels = self.labels[idx]
return sample, labels
def process(X_train, X_test, y_train, y_test):
scaler = preprocessing.StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
X_train = np.array(X_train)
X_test = np.array(X_test)
Y_train = np.array(y_train)
Y_test = np.array(y_test)
X_train = torch.from_numpy(X_train).float()
X_test = torch.from_numpy(X_test).float()
Y_train = torch.from_numpy(Y_train).squeeze().to(torch.int64)
Y_test = torch.from_numpy(Y_test).squeeze().to(torch.int64)
train_dataset = []
test_dataset = []
for i in range(len(X_train)):
train_dataset.append((X_train[i],Y_train[i]))
for i in range(len(X_test)):
test_dataset.append((X_test[i],Y_test[i]))
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
return train_loader, test_loader
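# Note: in process() above the StandardScaler is fit on the training split only and then
# applied to the test split, which avoids leaking test-set statistics into preprocessing.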
#do not standardize the one hot features
def process_sentiment(X_train, X_test, y_train, y_test):
sentiment_train = X_train[:,len(X_train[0])-5:len(X_train[0])]
sentiment_test = X_test[:,len(X_test[0])-5:len(X_test[0])]
X_train = X_train[:,:len(X_train[0])-5]
X_test = X_test[:,:len(X_test[0])-5]
scaler = preprocessing.StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
X_train = np.hstack((X_train, sentiment_train))
X_test = np.hstack((X_test, sentiment_test))
Y_train = np.array(y_train)
Y_test = np.array(y_test)
X_train = torch.from_numpy(X_train).float()
X_test = torch.from_numpy(X_test).float()
Y_train = torch.from_numpy(Y_train).squeeze().to(torch.int64)
Y_test = torch.from_numpy(Y_test).squeeze().to(torch.int64)
train_dataset = []
test_dataset = []
for i in range(len(X_train)):
train_dataset.append((X_train[i],Y_train[i]))
for i in range(len(X_test)):
test_dataset.append((X_test[i],Y_test[i]))
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
return train_loader, test_loader
def classifier(input_dim, train_loader, test_loader, totEpoch, num_class, len_test):
model = Net(input_dim, 100, 30, 4)
if torch.cuda.is_available():
model = model.cuda()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
epoch = 0
train_loss_list = []
loss_list = []
accuracy_list = []
precision_list = []
recall_list = []
F1_list = []
F1_list_train = []
for epoch2 in range(0, 0 + totEpoch):
# model.eval()
num_TP = [0]*num_class
num_FP = [0]*num_class
num_FN = [0]*num_class
eval_loss = 0
eval_acc = 0
eval_TP = [0]*num_class
eval_FP = [0]*num_class
eval_FN = [0]*num_class
precision = [0]*num_class
recall = [0]*num_class
F1 = [0]*num_class
for data in train_loader:
img, label = data
if torch.cuda.is_available():
img = img.cuda()
label = label.cuda()
else:
img = Variable(img)
label = Variable(label)
out = model(img)
loss = criterion(out, label)
print_loss = loss.data.item()
_, pred = torch.max(out, 1)
num_correct = (pred == label).sum()
for i in range(num_class):
num_TP[i] = (((pred == i) & (label == i))).sum()
num_FP[i] = (((pred == i) & (label != i))).sum()
num_FN[i] = (((pred != i) & (label == i))).sum()
eval_TP[i] += num_TP[i].item()
eval_FP[i] += num_FP[i].item()
eval_FN[i] += num_FN[i].item()
eval_acc += num_correct.item()
optimizer.zero_grad()
loss.backward()
optimizer.step()
epoch+=1
for i in range(num_class):
if (eval_TP[i] + eval_FP[i]) == 0:
precision[i] = 1
else:
precision[i] = eval_TP[i] / (eval_TP[i] + eval_FP[i])
if (eval_TP[i] + eval_FN[i]) == 0:
recall[i] = 1
else:
recall[i] = eval_TP[i] / (eval_TP[i] + eval_FN[i])
if (precision[i]+recall[i]) == 0:
F1[i] = 0
else:
F1[i] = 2*precision[i]*recall[i]/(precision[i]+recall[i])
print('Train F1: {:.6f}'.format(
sum(F1)/num_class,
))
F1_list_train.append(sum(F1)/num_class)
model.eval()
print('epoch: {}, loss: {:.4}'.format(epoch2, loss.data.item()))
train_loss_list.append(loss.data.item())
# model.eval()
num_TP = [0]*num_class
num_FP = [0]*num_class
num_FN = [0]*num_class
eval_loss = 0
eval_acc = 0
eval_TP = [0]*num_class
eval_FP = [0]*num_class
eval_FN = [0]*num_class
precision = [0]*num_class
recall = [0]*num_class
F1 = [0]*num_class
for data in test_loader:
img, label = data
# img = img.view(img.size(0), -1)
if torch.cuda.is_available():
img = img.cuda()
label = label.cuda()
out = model(img)
loss = criterion(out, label)
eval_loss += loss.data.item()*label.size(0)
_, pred = torch.max(out, 1)
num_correct = (pred == label).sum()
for i in range(num_class):
num_TP[i] = (((pred == i) & (label == i))).sum()
num_FP[i] = (((pred == i) & (label != i))).sum()
num_FN[i] = (((pred != i) & (label == i))).sum()
eval_TP[i] += num_TP[i].item()
eval_FP[i] += num_FP[i].item()
eval_FN[i] += num_FN[i].item()
eval_acc += num_correct.item()
for i in range(num_class):
if (eval_TP[i] + eval_FP[i]) == 0:
precision[i] = 1
else:
precision[i] = eval_TP[i] / (eval_TP[i] + eval_FP[i])
if (eval_TP[i] + eval_FN[i]) == 0:
recall[i] = 1
else:
recall[i] = eval_TP[i] / (eval_TP[i] + eval_FN[i])
if (precision[i]+recall[i]) == 0:
F1[i] = 0
else:
F1[i] = 2*precision[i]*recall[i]/(precision[i]+recall[i])
print('Test Loss: {:.6f}, Acc: {:.6f}, Pre: {:.6f}, Rec: {:.6f}, F1: {:.6f}'.format(
eval_loss / (len_test),
eval_acc / (len_test),
sum(precision)/num_class,
sum(recall)/num_class,
sum(F1)/num_class
))
loss_list.append(loss.data.item())
accuracy_list.append(eval_acc / (len_test))
precision_list.append(sum(precision)/num_class)
recall_list.append(sum(recall)/num_class)
F1_list.append(sum(F1)/num_class)
return [np.array(accuracy_list), np.array(precision_list), np.array(recall_list), np.array(F1_list), np.array(F1_list_train), np.array(loss_list), np.array(train_loss_list)]
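# The per-class precision/recall/F1 bookkeeping above amounts to a macro-averaged F1
# score (up to how empty classes are handled). As a sanity check (a sketch, not part of
# the original pipeline), the same quantity could be computed with scikit-learn from
# accumulated predictions and labels:
#
# from sklearn.metrics import f1_score
# macro_f1 = f1_score(all_labels, all_preds, average='macro')
#
# where all_labels and all_preds are hypothetical arrays collected over the test loader.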
def train_model(vector_vae, label, input_dim, totEpoch, num_class, random_s):
X_train, X_test, y_train, y_test = train_test_split(vector_vae, label, test_size=0.2, random_state = random_s)
train_loader, test_loader = process_sentiment(X_train, X_test, y_train, y_test)
results = classifier(input_dim, train_loader, test_loader, totEpoch, num_class, len(y_test))
return results
def train_cross_val(input_dim, vector_vae, label, totEpoch, num_class, k):
results = [np.array([0.0]*totEpoch),np.array([0.0]*totEpoch),np.array([0.0]*totEpoch),np.array([0.0]*totEpoch),np.array([0.0]*totEpoch),np.array([0.0]*totEpoch),np.array([0.0]*totEpoch)]
kf = StratifiedKFold(n_splits=k)
c = 0
var_test, var_train = [], []
F1_train, F1_test = [], []
Loss_train, Loss_test = [], []
for train_index, test_index in kf.split(vector_vae, label):
print('The ', c, ' th fold cross validation:')
X_train = vector_vae[train_index]
y_train = label[train_index]
X_test = vector_vae[test_index]
y_test = label[test_index]
train_loader, test_loader = process_sentiment(X_train, X_test, y_train, y_test)
result_list= classifier(input_dim, train_loader, test_loader, totEpoch, num_class, len(y_test))
var_test.append(result_list[3])
var_train.append(result_list[4])
F1_train.append(result_list[4])
F1_test.append(result_list[3])
Loss_train.append(result_list[6])
Loss_test.append(result_list[5])
for i in range(7):
results[i] += result_list[i]
for i in range(7):
results[i] /= k
var_test = np.array(var_test)
var_train = np.array(var_train)
results.append(np.var(var_train, axis = 0))
results.append(np.var(var_test, axis = 0))
return results, F1_train, F1_test, Loss_train, Loss_test
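# train_cross_val averages each per-epoch metric curve over the k folds and then appends
# the across-fold variance of the train and test F1 curves, so `results` ends up holding
# nine arrays of length totEpoch; the raw per-fold F1 and loss curves are returned as well.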
import torch
import pandas as pd
from torch import nn, optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import numpy as np
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.model_selection import train_test_split, cross_validate, KFold
batch_size = 128
learning_rate = 0.002
vector_vae = torch.from_numpy(pd.read_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/vector_vae_balanced.csv').values).float()
label = torch.from_numpy(pd.read_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/label_new_balanced.csv').values).float()
totEpoch = 100
num_class = 3
results, F1_train, F1_test, Loss_train, Loss_test = train_cross_val(100, vector_vae, label, totEpoch, num_class, 5)
import pandas as pd
pd.DataFrame(results[0]).to_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/accuracy_list.csv', index=False)
pd.DataFrame(results[1]).to_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/precision_list.csv', index=False)
pd.DataFrame(results[2]).to_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/recall_list.csv', index=False)
pd.DataFrame(results[3]).to_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/F1_list.csv', index=False)
pd.DataFrame(results[4]).to_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/F1_list_train.csv', index=False)
pd.DataFrame(results[5]).to_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/loss_list.csv', index=False)
pd.DataFrame(results[6]).to_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/train_loss_list.csv', index=False)
pd.DataFrame(results[7]).to_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/train_var_list.csv', index=False)
pd.DataFrame(results[8]).to_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/test_var_list.csv', index=False)
pd.DataFrame(F1_train).to_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/F1_train.csv', index=False)
pd.DataFrame(F1_test).to_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/F1_test.csv', index=False)
pd.DataFrame(Loss_train).to_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/Loss_train.csv', index=False)
pd.DataFrame(Loss_test).to_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/Loss_test.csv', index=False)
import torch
import pandas as pd
from torch import nn, optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import numpy as np
import matplotlib.pyplot as plt
accuracy_list = torch.from_numpy(pd.read_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/accuracy_list.csv').values).float()
precision_list = torch.from_numpy(pd.read_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/precision_list.csv').values).float()
recall_list = torch.from_numpy(pd.read_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/recall_list.csv').values).float()
F1_list = torch.from_numpy(pd.read_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/F1_list.csv').values).float()
F1_list_train = torch.from_numpy(pd.read_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/F1_list_train.csv').values).float()
loss_list = torch.from_numpy(pd.read_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/loss_list.csv').values).float()
train_loss_list = torch.from_numpy(pd.read_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/train_loss_list.csv').values).float()
train_var_list = torch.from_numpy(pd.read_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/train_var_list.csv').values).float()
test_var_list = torch.from_numpy(pd.read_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/test_var_list.csv').values).float()
totEpoch = 100
x = range(0, totEpoch)
plt.figure(figsize=(14,3))
grid = plt.GridSpec(3, 2, wspace=0.5, hspace=0.5)
plt.subplot(grid[:,0])
plt.plot(x, F1_list_train, color="b", marker='o',markersize='1.5',markeredgecolor='b',markeredgewidth = 1.5, label = 'Train F1 score')
plt.plot(x, F1_list, color="r", marker='o',markersize='1.5',markeredgecolor='r',markeredgewidth = 1.5, label = 'Test F1 score')
plt.plot(x, train_var_list, color="g", marker='o',markersize='1.5',markeredgecolor='g',markeredgewidth = 1.5, label = 'Train variance')
#plt.plot(x, test_var_list, color="c", marker='o',markersize='1.5',markeredgecolor='c',markeredgewidth = 1.5, label = 'Test variance')
plt.legend()
plt.title('F1 score vs epochs')
plt.xlabel('epochs')
plt.ylabel('F1 score')
plt.subplot(grid[:,1])
plt.plot(x, train_loss_list, color="red", marker='o',markersize='1.5',markeredgecolor='b',markeredgewidth = 1.5, label = 'Train Loss')
plt.plot(x, loss_list, color="red", marker='o',markersize='1.5',markeredgecolor='b',markeredgewidth = 1.5, label = 'Test Loss')
plt.legend()
plt.title('Loss vs epochs')
plt.xlabel('epochs')
plt.ylabel('Loss')
plt.show()
import pandas as pd
F1_list = pd.read_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/F1_list.csv').values
F1_list = F1_list.reshape(len(F1_list)).tolist()
test_var_list = pd.read_csv('./drive/My Drive/Colab Notebooks/Sentiment Classification/test_var_list.csv').values
test_var_list = test_var_list.reshape(len(test_var_list)).tolist()
print(max(F1_list))
test_var_list[F1_list.index(max(F1_list))]
###Output
0.7731749602948487
|
Untitled19.ipynb | ###Markdown
Number Guessing Game with Python
###Code
# to import random module
import random
# to create a range of random numbers between 1-100
n = random.randrange(1,100)
# to take a user input to enter a number
guess = int(input("Enter any number: "))
while n!= guess: # means if n is not equal to the input guess
# if guess is smaller than n
if guess < n:
print("Too low")
# to again ask for input
guess = int(input("Enter number again: "))
# if guess is greater than n
elif guess > n:
print("Too high!")
# to again ask for the user input
guess = int(input("Enter number again: "))
# if guess gets equals to n terminate the while loop
else:
break
print("you guessed it right!!")
###Output
Enter any number: 50
Too high!
Enter number again: 25
Too high!
Enter number again: 10
Too low
Enter number again: 17
Too high!
Enter number again: 13
Too high!
Enter number again: 12
you guessed it right!!
###Markdown
astype(): convert float values to int values
###Code
import pandas as pd
nba=pd.read_csv('nba.csv').dropna(how='all')
nba.tail(5)
nba['College'].fillna('0',inplace = True)
nba.tail(5)
nba['Salary'].fillna(0,inplace = True)
nba.tail(5)
nba.info()
nba['Salary']=nba['Salary'].astype("int")
nba.head(5)
nba['Age'].astype("int",implace=True).head(5)
nba['Age'].astype("float",implace=True).head(5)
nba['Position'].head(5)
nba.info()
nba["Position"].nunique()
nba["Position"]=nba["Position"].astype("category")
nba.info()
nba["Team"]=nba["Team"].astype("category")
nba.info()
nba['Weight'].dtype
nba
nba['Weight'].astype("int")
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
###Code
import tensorflow as tf
import os
import tensorflow_datasets as tfds
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.patches as patches
# We used the dataset from https://github.com/toppev/is-pineapple-pizza but wrote the code for this project ourselves
image_w , image_h = 128,128
model = tf.keras.models.Sequential([
# RGB, because pineapple is yellow
tf.keras.layers.Conv2D(6, (2,2), activation='relu', input_shape=(image_h,image_w,3)),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(8, (2,2), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(16, (2,2), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(16, (2,2), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512),
tf.keras.layers.Dense(258),
tf.keras.layers.Dense(128),
tf.keras.layers.Dropout(0.8),
tf.keras.layers.Dense(64),
tf.keras.layers.Dense(10),
tf.keras.layers.Dense(1,activation="sigmoid"),
])
model.summary()
# print(model.output_shape)
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss='binary_crossentropy',metrics=["accuracy"])
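# A single sigmoid output trained with binary_crossentropy treats this as a binary
# pineapple vs. not-pineapple classifier; predictions above 0.5 can be read as "pineapple".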
!wget http://pizzagan.csail.mit.edu/pizzaGANdata.zip
!unzip pizzaGANdata.zip
! mkdir not_pineapple
! mkdir pineapple
# https://github.com/toppev/is-pineapple-pizza/blob/main/dataset/categorize.py
import os
from shutil import copyfile
images = sorted(os.listdir("pizzaGANdata/images"))
max_images = 5000
test_images = 50
pineapples = 0
not_pineapples = 0
with open("pizzaGANdata/imageLabels.txt") as f:
index = 0
for line in f:
index += 1
is_pineapple = line.endswith('1\n')
target = 'not_pineapple/' + str(not_pineapples) + '.jpg'
if is_pineapple:
target = 'pineapple/' + str(pineapples) + '.jpg'
if pineapples >= max_images:
continue
pineapples += 1
else:
if not_pineapples >= max_images:
continue
not_pineapples += 1
if len(images) > index:
copyfile('pizzaGANdata/images/' + images[index - 1], target)
print('Pineapples: ' + str(pineapples))
print('Not pineapples: ' + str(not_pineapples))
!cat pizzaGANdata/imageLabels.txt
X = []
Y = []
X
import random
not_pineapples_list = random.sample(os.listdir("not_pineapple"),pineapples)
pineapples_list = os.listdir("pineapple")
for i in not_pineapples_list:
X.append(np.array(Image.open("not_pineapple/"+i).resize((128,128)).convert("RGB")))
Y.append(0)
for i in pineapples_list:
X.append(np.array(Image.open("pineapple/"+i)))
Y.append(1)
X = np.array(X)
Y = np.array(Y)
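# Note: pixel values are left in the 0-255 range here; rescaling (e.g. X = X / 255.0) is a
# common preprocessing step for CNNs and may help convergence, but it is not part of the
# original code.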
model.fit(X, Y, epochs=50, shuffle=True)
###Output
_____no_output_____
###Markdown
Analysis of Olympics DatasetThe dataset comprises records of all the events held at the Olympic Games between 1896 and 2012.
###Code
# Specify your own path for the location of the dataset here first
filepath = 'C:/Users/Ankit/Desktop/DataCamp/all_medalists.csv'
# Importing pandas as pd
import pandas as pd
# Storing the contents at the specified filepath as a
# DataFrame named medals
medals = pd.read_csv(filepath)
# Visualizing the DataFrame
#print(medals.head())
#print(medals.tail())
#print(medals.describe())
print(medals.info())
#print(medals.columns)
# Using .value_counts() for ranking the top 15 countries
# by total number of medals
# .value_counts() sorts the values by default
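# A sketch of the ranking described above (assuming the country code column is named
# 'NOC', as in the standard all_medalists.csv dataset; adjust the name if it differs).
medal_counts = medals['NOC'].value_counts()
print(medal_counts.head(15))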
###Output
_____no_output_____ |
how-to-use-azureml/automated-machine-learning/remote-attach/auto-ml-remote-attach.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Remote Execution using attach**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use the scikit-learn's [20newsgroup](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html) to showcase how you can use AutoML to handle text data with remote attach.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Attach an existing DSVM to a workspace.3. Configure AutoML using `AutoMLConfig`.4. Train the model using the DSVM.5. Explore the results.6. Test the best fitted model.In addition this notebook showcases the following features- **Parallel** executions for iterations- **Asynchronous** tracking of progress- **Cancellation** of individual iterations or the entire run- Retrieving models for any iteration or logged metric- Specifying AutoML settings as `**kwargs`- Handling **text** data using the `preprocess` flag SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import os
import numpy as np
import pandas as pd
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# Choose a name for the run history container in the workspace.
experiment_name = 'automl-remote-attach'
project_folder = './sample_projects/automl-remote-attach'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Attach a Remote Linux DSVMTo use a remote Docker compute target:1. Create a Linux DSVM in Azure, following these [quick instructions](https://docs.microsoft.com/en-us/azure/machine-learning/desktop-workbench/how-to-create-dsvm-hdi). Make sure you use the Ubuntu flavor (not CentOS). Make sure that disk space is available under `/tmp` because AutoML creates files under `/tmp/azureml_run`s. The DSVM should have more cores than the number of parallel runs that you plan to enable. It should also have at least 4GB per core.2. Enter the IP address, user name and password below.**Note:** By default, SSH runs on port 22 and you don't need to change the port number below. If you've configured SSH to use a different port, change `dsvm_ssh_port` accordinglyaddress. [Read more](https://docs.microsoft.com/en-us/azure/virtual-machines/troubleshooting/detailed-troubleshoot-ssh-connection) on changing SSH ports for security reasons.
###Code
from azureml.core.compute import ComputeTarget, RemoteCompute
import time
# Add your VM information below
# If a compute with the specified compute_name already exists, it will be used and the dsvm_ip_addr, dsvm_ssh_port,
# dsvm_username and dsvm_password will be ignored.
compute_name = 'mydsvmb'
dsvm_ip_addr = '<<ip_addr>>'
dsvm_ssh_port = 22
dsvm_username = '<<username>>'
dsvm_password = '<<password>>'
if compute_name in ws.compute_targets:
print('Using existing compute.')
dsvm_compute = ws.compute_targets[compute_name]
else:
attach_config = RemoteCompute.attach_configuration(address=dsvm_ip_addr, username=dsvm_username, password=dsvm_password, ssh_port=dsvm_ssh_port)
ComputeTarget.attach(workspace=ws, name=compute_name, attach_configuration=attach_config)
while ws.compute_targets[compute_name].provisioning_state == 'Creating':
time.sleep(1)
dsvm_compute = ws.compute_targets[compute_name]
if dsvm_compute.provisioning_state == 'Failed':
print('Attach failed.')
print(dsvm_compute.provisioning_errors)
dsvm_compute.detach()
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
# create a new RunConfig object
conda_run_config = RunConfiguration(framework="python")
# Set compute target to the Linux DSVM
conda_run_config.target = dsvm_compute
cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy'])
conda_run_config.environment.python.conda_dependencies = cd
###Output
_____no_output_____
###Markdown
DataFor remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.In this example, the `get_data()` function returns a [dictionary](README.mdgetdata).
###Code
if not os.path.exists(project_folder):
os.makedirs(project_folder)
%%writefile $project_folder/get_data.py
import numpy as np
from sklearn.datasets import fetch_20newsgroups
def get_data():
remove = ('headers', 'footers', 'quotes')
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
data_train = fetch_20newsgroups(subset = 'train', categories = categories,
shuffle = True, random_state = 42,
remove = remove)
X_train = np.array(data_train.data).reshape((len(data_train.data),1))
y_train = np.array(data_train.target)
return { "X" : X_train, "y" : y_train }
###Output
_____no_output_____
###Markdown
TrainYou can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local excutions too.**Note:** When using Remote DSVM, you can't pass Numpy arrays directly to the fit method.|Property|Description||-|-||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**max_concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM.||**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.||**enable_cache**|Setting this to *True* enables preprocess done once and reuse the same preprocessed data for all the iterations. Default value is True.|**max_cores_per_iteration**|Indicates how many cores on the compute target would be used to train a single pipeline.Default is *1*; you can set it to *-1* to use all cores.|
###Code
automl_settings = {
"iteration_timeout_minutes": 60,
"iterations": 4,
"n_cross_validations": 5,
"primary_metric": 'AUC_weighted',
"preprocess": True,
"max_cores_per_iteration": 2
}
automl_config = AutoMLConfig(task = 'classification',
path = project_folder,
run_configuration=conda_run_config,
data_script = project_folder + "/get_data.py",
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run.
###Code
remote_run = experiment.submit(automl_config)
remote_run
###Output
_____no_output_____
###Markdown
Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under `/tmp/azureml_run/{iterationid}/azureml-logs`**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
# Wait until the run finishes.
remote_run.wait_for_completion(show_output = True)
###Output
_____no_output_____
###Markdown
Pre-process cache cleanupThe preprocessed data is cached in the user's default file store. When the run has completed, the cache can be cleaned by running the cell below.
###Code
remote_run.clean_preprocessor_cache()
###Output
_____no_output_____
###Markdown
Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log.
###Code
children = list(remote_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(1)
rundata
###Output
_____no_output_____
###Markdown
Cancelling RunsYou can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions.
###Code
# Cancel the ongoing experiment and stop scheduling new iterations.
# remote_run.cancel()
# Cancel iteration 1 and move onto iteration 2.
# remote_run.cancel_iteration(1)
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
###Code
best_run, fitted_model = remote_run.get_output()
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Best Model Based on Any Other MetricShow the run and the model which has the smallest `accuracy` value:
###Code
# lookup_metric = "accuracy"
# best_run, fitted_model = remote_run.get_output(metric = lookup_metric)
###Output
_____no_output_____
###Markdown
Model from a Specific Iteration
###Code
iteration = 0
zero_run, zero_model = remote_run.get_output(iteration = iteration)
###Output
_____no_output_____
###Markdown
Test
###Code
# Load test data.
from pandas_ml import ConfusionMatrix
from sklearn.datasets import fetch_20newsgroups
remove = ('headers', 'footers', 'quotes')
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
data_test = fetch_20newsgroups(subset = 'test', categories = categories,
shuffle = True, random_state = 42,
remove = remove)
X_test = np.array(data_test.data).reshape((len(data_test.data),1))
y_test = data_test.target
# Test our best pipeline.
y_pred = fitted_model.predict(X_test)
y_pred_strings = [data_test.target_names[i] for i in y_pred]
y_test_strings = [data_test.target_names[i] for i in y_test]
cm = ConfusionMatrix(y_test_strings, y_pred_strings)
print(cm)
cm.plot()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Remote Execution using attach**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use the scikit-learn's [20newsgroup](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html) to showcase how you can use AutoML to handle text data with remote attach.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Attach an existing DSVM to a workspace.3. Configure AutoML using `AutoMLConfig`.4. Train the model using the DSVM.5. Explore the results.6. Test the best fitted model.In addition this notebook showcases the following features- **Parallel** executions for iterations- **Asynchronous** tracking of progress- **Cancellation** of individual iterations or the entire run- Retrieving models for any iteration or logged metric- Specifying AutoML settings as `**kwargs`- Handling **text** data using the `preprocess` flag SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import os
import numpy as np
import pandas as pd
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# Choose a name for the run history container in the workspace.
experiment_name = 'automl-remote-attach'
project_folder = './sample_projects/automl-remote-attach'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Attach a Remote Linux DSVMTo use a remote Docker compute target:1. Create a Linux DSVM in Azure, following these [quick instructions](https://docs.microsoft.com/en-us/azure/machine-learning/desktop-workbench/how-to-create-dsvm-hdi). Make sure you use the Ubuntu flavor (not CentOS). Make sure that disk space is available under `/tmp` because AutoML creates files under `/tmp/azureml_run`s. The DSVM should have more cores than the number of parallel runs that you plan to enable. It should also have at least 4GB per core.2. Enter the IP address, user name and password below.**Note:** By default, SSH runs on port 22 and you don't need to change the port number below. If you've configured SSH to use a different port, change `dsvm_ssh_port` accordinglyaddress. [Read more](https://docs.microsoft.com/en-us/azure/virtual-machines/troubleshooting/detailed-troubleshoot-ssh-connection) on changing SSH ports for security reasons.
###Code
from azureml.core.compute import ComputeTarget, RemoteCompute
import time
# Add your VM information below
# If a compute with the specified compute_name already exists, it will be used and the dsvm_ip_addr, dsvm_ssh_port,
# dsvm_username and dsvm_password will be ignored.
compute_name = 'mydsvmb'
dsvm_ip_addr = '<<ip_addr>>'
dsvm_ssh_port = 22
dsvm_username = '<<username>>'
dsvm_password = '<<password>>'
if compute_name in ws.compute_targets:
print('Using existing compute.')
dsvm_compute = ws.compute_targets[compute_name]
else:
attach_config = RemoteCompute.attach_configuration(address=dsvm_ip_addr, username=dsvm_username, password=dsvm_password, ssh_port=dsvm_ssh_port)
ComputeTarget.attach(workspace=ws, name=compute_name, attach_configuration=attach_config)
while ws.compute_targets[compute_name].provisioning_state == 'Creating':
time.sleep(1)
dsvm_compute = ws.compute_targets[compute_name]
if dsvm_compute.provisioning_state == 'Failed':
print('Attach failed.')
print(dsvm_compute.provisioning_errors)
dsvm_compute.detach()
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
# create a new RunConfig object
conda_run_config = RunConfiguration(framework="python")
# Set compute target to the Linux DSVM
conda_run_config.target = dsvm_compute
cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy','py-xgboost<=0.80'])
conda_run_config.environment.python.conda_dependencies = cd
###Output
_____no_output_____
###Markdown
DataFor remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.In this example, the `get_data()` function returns a [dictionary](README.mdgetdata).
###Code
if not os.path.exists(project_folder):
os.makedirs(project_folder)
%%writefile $project_folder/get_data.py
import numpy as np
from sklearn.datasets import fetch_20newsgroups
def get_data():
remove = ('headers', 'footers', 'quotes')
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
data_train = fetch_20newsgroups(subset = 'train', categories = categories,
shuffle = True, random_state = 42,
remove = remove)
X_train = np.array(data_train.data).reshape((len(data_train.data),1))
y_train = np.array(data_train.target)
return { "X" : X_train, "y" : y_train }
###Output
_____no_output_____
###Markdown
TrainYou can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local excutions too.**Note:** When using Remote DSVM, you can't pass Numpy arrays directly to the fit method.|Property|Description||-|-||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**max_concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM.||**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.||**enable_cache**|Setting this to *True* enables preprocess done once and reuse the same preprocessed data for all the iterations. Default value is True.|**max_cores_per_iteration**|Indicates how many cores on the compute target would be used to train a single pipeline.Default is *1*; you can set it to *-1* to use all cores.|
###Code
automl_settings = {
"iteration_timeout_minutes": 60,
"iterations": 4,
"n_cross_validations": 5,
"primary_metric": 'AUC_weighted',
"preprocess": True,
"max_cores_per_iteration": 2
}
automl_config = AutoMLConfig(task = 'classification',
path = project_folder,
run_configuration=conda_run_config,
data_script = project_folder + "/get_data.py",
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run.
###Code
remote_run = experiment.submit(automl_config)
remote_run
###Output
_____no_output_____
###Markdown
Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under `/tmp/azureml_run/{iterationid}/azureml-logs`**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
# Wait until the run finishes.
remote_run.wait_for_completion(show_output = True)
###Output
_____no_output_____
###Markdown
Pre-process cache cleanupThe preprocessed data is cached in the user's default file store. When the run has completed, the cache can be cleaned by running the cell below.
###Code
remote_run.clean_preprocessor_cache()
###Output
_____no_output_____
###Markdown
Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log.
###Code
children = list(remote_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(1)
rundata
###Output
_____no_output_____
###Markdown
Cancelling RunsYou can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions.
###Code
# Cancel the ongoing experiment and stop scheduling new iterations.
# remote_run.cancel()
# Cancel iteration 1 and move onto iteration 2.
# remote_run.cancel_iteration(1)
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
###Code
best_run, fitted_model = remote_run.get_output()
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Best Model Based on Any Other MetricShow the run and the model which has the smallest `accuracy` value:
###Code
# lookup_metric = "accuracy"
# best_run, fitted_model = remote_run.get_output(metric = lookup_metric)
###Output
_____no_output_____
###Markdown
Model from a Specific Iteration
###Code
iteration = 0
zero_run, zero_model = remote_run.get_output(iteration = iteration)
###Output
_____no_output_____
###Markdown
Test
###Code
# Load test data.
from pandas_ml import ConfusionMatrix
from sklearn.datasets import fetch_20newsgroups
remove = ('headers', 'footers', 'quotes')
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
data_test = fetch_20newsgroups(subset = 'test', categories = categories,
shuffle = True, random_state = 42,
remove = remove)
X_test = np.array(data_test.data).reshape((len(data_test.data),1))
y_test = data_test.target
# Test our best pipeline.
y_pred = fitted_model.predict(X_test)
y_pred_strings = [data_test.target_names[i] for i in y_pred]
y_test_strings = [data_test.target_names[i] for i in y_test]
cm = ConfusionMatrix(y_test_strings, y_pred_strings)
print(cm)
cm.plot()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Automated Machine Learning_**Remote Execution using attach**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use the scikit-learn's [20newsgroup](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html) to showcase how you can use AutoML to handle text data with remote attach.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Attach an existing DSVM to a workspace.3. Configure AutoML using `AutoMLConfig`.4. Train the model using the DSVM.5. Explore the results.6. Viewing the engineered names for featurized data and featurization summary for all raw features.7. Test the best fitted model.In addition this notebook showcases the following features- **Parallel** executions for iterations- **Asynchronous** tracking of progress- **Cancellation** of individual iterations or the entire run- Retrieving models for any iteration or logged metric- Specifying AutoML settings as `**kwargs`- Handling **text** data using the `preprocess` flag SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import os
import numpy as np
import pandas as pd
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# Choose a name for the run history container in the workspace.
experiment_name = 'automl-remote-attach'
project_folder = './sample_projects/automl-remote-attach'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Attach a Remote Linux DSVMTo use a remote Docker compute target:1. Create a Linux DSVM in Azure, following these [instructions](https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro). Make sure you use the Ubuntu flavor (not CentOS). Make sure that disk space is available under `/tmp` because AutoML creates files under `/tmp/azureml_run`s. The DSVM should have more cores than the number of parallel runs that you plan to enable. It should also have at least 4GB per core.2. Enter the IP address, user name and password below.**Note:** By default, SSH runs on port 22 and you don't need to change the port number below. If you've configured SSH to use a different port, change `dsvm_ssh_port` accordinglyaddress. [Read more](https://docs.microsoft.com/en-us/azure/virtual-machines/troubleshooting/detailed-troubleshoot-ssh-connection) on changing SSH ports for security reasons.
###Code
from azureml.core.compute import ComputeTarget, RemoteCompute
import time
# Add your VM information below
# If a compute with the specified compute_name already exists, it will be used and the dsvm_ip_addr, dsvm_ssh_port,
# dsvm_username and dsvm_password will be ignored.
compute_name = 'mydsvmb'
dsvm_ip_addr = '<<ip_addr>>'
dsvm_ssh_port = 22
dsvm_username = '<<username>>'
dsvm_password = '<<password>>'
if compute_name in ws.compute_targets:
print('Using existing compute.')
dsvm_compute = ws.compute_targets[compute_name]
else:
attach_config = RemoteCompute.attach_configuration(address=dsvm_ip_addr, username=dsvm_username, password=dsvm_password, ssh_port=dsvm_ssh_port)
ComputeTarget.attach(workspace=ws, name=compute_name, attach_configuration=attach_config)
while ws.compute_targets[compute_name].provisioning_state == 'Creating':
time.sleep(1)
dsvm_compute = ws.compute_targets[compute_name]
if dsvm_compute.provisioning_state == 'Failed':
print('Attach failed.')
print(dsvm_compute.provisioning_errors)
dsvm_compute.detach()
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
import pkg_resources
# create a new RunConfig object
conda_run_config = RunConfiguration(framework="python")
# Set compute target to the Linux DSVM
conda_run_config.target = dsvm_compute
pandas_dependency = 'pandas==' + pkg_resources.get_distribution("pandas").version
cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy','py-xgboost<=0.80',pandas_dependency])
conda_run_config.environment.python.conda_dependencies = cd
###Output
_____no_output_____
###Markdown
DataFor remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from a blob storage or local disk in this file.In this example, the `get_data()` function returns a [dictionary](README.mdgetdata).
###Code
if not os.path.exists(project_folder):
os.makedirs(project_folder)
%%writefile $project_folder/get_data.py
import numpy as np
from sklearn.datasets import fetch_20newsgroups
def get_data():
remove = ('headers', 'footers', 'quotes')
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
data_train = fetch_20newsgroups(subset = 'train', categories = categories,
shuffle = True, random_state = 42,
remove = remove)
X_train = np.array(data_train.data).reshape((len(data_train.data),1))
y_train = np.array(data_train.target)
return { "X" : X_train, "y" : y_train }
###Output
_____no_output_____
###Markdown
TrainYou can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local excutions too.**Note:** When using Remote DSVM, you can't pass Numpy arrays directly to the fit method.|Property|Description||-|-||**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracyAUC_weightedaverage_precision_score_weightednorm_macro_recallprecision_score_weighted||**iteration_timeout_minutes**|Time limit in minutes for each iteration.||**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.||**n_cross_validations**|Number of cross validation splits.||**max_concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM.||**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.||**enable_cache**|Setting this to *True* enables preprocess done once and reuse the same preprocessed data for all the iterations. Default value is True.|**max_cores_per_iteration**|Indicates how many cores on the compute target would be used to train a single pipeline.Default is *1*; you can set it to *-1* to use all cores.|
###Code
automl_settings = {
"iteration_timeout_minutes": 60,
"iterations": 4,
"n_cross_validations": 5,
"primary_metric": 'AUC_weighted',
"preprocess": True,
"max_cores_per_iteration": 2
}
automl_config = AutoMLConfig(task = 'classification',
path = project_folder,
run_configuration=conda_run_config,
data_script = project_folder + "/get_data.py",
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run.
###Code
remote_run = experiment.submit(automl_config)
remote_run
###Output
_____no_output_____
###Markdown
Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under `/tmp/azureml_run/{iterationid}/azureml-logs`**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
# Wait until the run finishes.
remote_run.wait_for_completion(show_output = True)
###Output
_____no_output_____
###Markdown
Pre-process cache cleanupThe preprocessed data is cached in the user's default file store. When the run has completed, the cache can be cleaned by running the cell below.
###Code
remote_run.clean_preprocessor_cache()
###Output
_____no_output_____
###Markdown
Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log.
###Code
children = list(remote_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(1)
rundata
###Output
_____no_output_____
###Markdown
Cancelling RunsYou can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions.
###Code
# Cancel the ongoing experiment and stop scheduling new iterations.
# remote_run.cancel()
# Cancel iteration 1 and move onto iteration 2.
# remote_run.cancel_iteration(1)
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
###Code
best_run, fitted_model = remote_run.get_output()
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
View the engineered names for featurized dataBelow we display the engineered feature names generated for the featurized data using the preprocessing featurization.
###Code
fitted_model.named_steps['datatransformer'].get_engineered_feature_names()
###Output
_____no_output_____
###Markdown
View the featurization summaryBelow we display the featurization that was performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:-- Raw feature name- Number of engineered features formed out of this raw feature- Type detected- If feature was dropped- List of feature transformations for the raw feature
###Code
fitted_model.named_steps['datatransformer'].get_featurization_summary()
###Output
_____no_output_____
###Markdown
Best Model Based on Any Other MetricShow the run and the model which has the smallest `accuracy` value:
###Code
# lookup_metric = "accuracy"
# best_run, fitted_model = remote_run.get_output(metric = lookup_metric)
###Output
_____no_output_____
###Markdown
Model from a Specific Iteration
###Code
iteration = 0
zero_run, zero_model = remote_run.get_output(iteration = iteration)
###Output
_____no_output_____
###Markdown
Test
###Code
# Load test data.
from pandas_ml import ConfusionMatrix
from sklearn.datasets import fetch_20newsgroups
remove = ('headers', 'footers', 'quotes')
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
data_test = fetch_20newsgroups(subset = 'test', categories = categories,
shuffle = True, random_state = 42,
remove = remove)
X_test = np.array(data_test.data).reshape((len(data_test.data),1))
y_test = data_test.target
# Test our best pipeline.
y_pred = fitted_model.predict(X_test)
y_pred_strings = [data_test.target_names[i] for i in y_pred]
y_test_strings = [data_test.target_names[i] for i in y_test]
cm = ConfusionMatrix(y_test_strings, y_pred_strings)
print(cm)
cm.plot()
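# A hedged alternative for environments where pandas_ml is unavailable: scikit-learn
# (>= 0.22) can build and plot the same confusion matrix from the string labels
# computed above. This is a minimal sketch using only the variables defined in this cell.
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
labels = list(data_test.target_names)
cm_sk = confusion_matrix(y_test_strings, y_pred_strings, labels=labels)
ConfusionMatrixDisplay(cm_sk, display_labels=labels).plot()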
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Remote Execution using attach**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use the scikit-learn's [20newsgroup](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html) to showcase how you can use AutoML to handle text data with remote attach.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Attach an existing DSVM to a workspace.3. Configure AutoML using `AutoMLConfig`.4. Train the model using the DSVM.5. Explore the results.6. Viewing the engineered names for featurized data and featurization summary for all raw features.7. Test the best fitted model.In addition this notebook showcases the following features- **Parallel** executions for iterations- **Asynchronous** tracking of progress- **Cancellation** of individual iterations or the entire run- Retrieving models for any iteration or logged metric- Specifying AutoML settings as `**kwargs`- Handling **text** data using the `preprocess` flag SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import os
import numpy as np
import pandas as pd
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# Choose a name for the run history container in the workspace.
experiment_name = 'automl-remote-attach'
project_folder = './sample_projects/automl-remote-attach'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Attach a Remote Linux DSVMTo use a remote Docker compute target:1. Create a Linux DSVM in Azure, following these [instructions](https://docs.microsoft.com/en-us/azure/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro). Make sure you use the Ubuntu flavor (not CentOS). Make sure that disk space is available under `/tmp` because AutoML creates files under `/tmp/azureml_runs`. The DSVM should have more cores than the number of parallel runs that you plan to enable. It should also have at least 4 GB per core.2. Enter the IP address, user name and password below.**Note:** By default, SSH runs on port 22 and you don't need to change the port number below. If you've configured SSH to use a different port, change `dsvm_ssh_port` accordingly. [Read more](https://docs.microsoft.com/en-us/azure/virtual-machines/troubleshooting/detailed-troubleshoot-ssh-connection) on changing SSH ports for security reasons.
###Code
from azureml.core.compute import ComputeTarget, RemoteCompute
import time
# Add your VM information below
# If a compute with the specified compute_name already exists, it will be used and the dsvm_ip_addr, dsvm_ssh_port,
# dsvm_username and dsvm_password will be ignored.
compute_name = 'mydsvmb'
dsvm_ip_addr = '<<ip_addr>>'
dsvm_ssh_port = 22
dsvm_username = '<<username>>'
dsvm_password = '<<password>>'
if compute_name in ws.compute_targets:
print('Using existing compute.')
dsvm_compute = ws.compute_targets[compute_name]
else:
attach_config = RemoteCompute.attach_configuration(address=dsvm_ip_addr, username=dsvm_username, password=dsvm_password, ssh_port=dsvm_ssh_port)
ComputeTarget.attach(workspace=ws, name=compute_name, attach_configuration=attach_config)
while ws.compute_targets[compute_name].provisioning_state == 'Creating':
time.sleep(1)
dsvm_compute = ws.compute_targets[compute_name]
if dsvm_compute.provisioning_state == 'Failed':
print('Attached failed.')
print(dsvm_compute.provisioning_errors)
dsvm_compute.detach()
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
import pkg_resources
# create a new RunConfig object
conda_run_config = RunConfiguration(framework="python")
# Set compute target to the Linux DSVM
conda_run_config.target = dsvm_compute
pandas_dependency = 'pandas==' + pkg_resources.get_distribution("pandas").version
cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy','py-xgboost<=0.80',pandas_dependency])
conda_run_config.environment.python.conda_dependencies = cd
###Output
_____no_output_____
###Markdown
DataFor remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from blob storage or from local disk in this file.In this example, the `get_data()` function returns a [dictionary](README.md#getdata).
###Code
if not os.path.exists(project_folder):
os.makedirs(project_folder)
%%writefile $project_folder/get_data.py
import numpy as np
from sklearn.datasets import fetch_20newsgroups
def get_data():
remove = ('headers', 'footers', 'quotes')
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
data_train = fetch_20newsgroups(subset = 'train', categories = categories,
shuffle = True, random_state = 42,
remove = remove)
X_train = np.array(data_train.data).reshape((len(data_train.data),1))
y_train = np.array(data_train.target)
return { "X" : X_train, "y" : y_train }
###Output
_____no_output_____
###Markdown
TrainYou can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local executions too.**Note:** When using a Remote DSVM, you can't pass Numpy arrays directly to the fit method.

|Property|Description|
|-|-|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|
|**n_cross_validations**|Number of cross validation splits.|
|**max_concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM.|
|**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.|
|**enable_cache**|Setting this to *True* enables the preprocessing to be done once, with the same preprocessed data reused for all iterations. Default value is True.|
|**max_cores_per_iteration**|Indicates how many cores on the compute target would be used to train a single pipeline. Default is *1*; you can set it to *-1* to use all cores.|
###Code
automl_settings = {
"iteration_timeout_minutes": 60,
"iterations": 4,
"n_cross_validations": 5,
"primary_metric": 'AUC_weighted',
"preprocess": True,
"max_cores_per_iteration": 2
}
automl_config = AutoMLConfig(task = 'classification',
path = project_folder,
run_configuration=conda_run_config,
data_script = project_folder + "/get_data.py",
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run.
###Code
remote_run = experiment.submit(automl_config)
remote_run
###Output
_____no_output_____
###Markdown
Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under `/tmp/azureml_run/{iterationid}/azureml-logs`**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
# Wait until the run finishes.
remote_run.wait_for_completion(show_output = True)
###Output
_____no_output_____
###Markdown
Pre-process cache cleanupThe preprocessed data is cached at the user's default file store. When the run is completed, the cache can be cleaned by running the cell below.
###Code
remote_run.clean_preprocessor_cache()
###Output
_____no_output_____
###Markdown
Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log.
###Code
children = list(remote_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(1)
rundata
###Output
_____no_output_____
###Markdown
Cancelling RunsYou can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions.
###Code
# Cancel the ongoing experiment and stop scheduling new iterations.
# remote_run.cancel()
# Cancel iteration 1 and move onto iteration 2.
# remote_run.cancel_iteration(1)
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
###Code
best_run, fitted_model = remote_run.get_output()
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
View the engineered names for featurized dataBelow we display the engineered feature names that the preprocessing featurization generated for the featurized data.
###Code
fitted_model.named_steps['datatransformer'].get_engineered_feature_names()
###Output
_____no_output_____
###Markdown
View the featurization summaryBelow we display the featurization that was performed on the different raw features in the user data. For each raw feature in the user data, the following information is displayed:
- Raw feature name
- Number of engineered features formed out of this raw feature
- Type detected
- Whether the feature was dropped
- List of feature transformations for the raw feature
###Code
fitted_model.named_steps['datatransformer'].get_featurization_summary()
###Output
_____no_output_____
###Markdown
Best Model Based on Any Other MetricShow the run and the model which has the best `accuracy` value:
###Code
# lookup_metric = "accuracy"
# best_run, fitted_model = remote_run.get_output(metric = lookup_metric)
###Output
_____no_output_____
###Markdown
Model from a Specific Iteration
###Code
iteration = 0
zero_run, zero_model = remote_run.get_output(iteration = iteration)
###Output
_____no_output_____
###Markdown
Test
###Code
# Load test data.
from pandas_ml import ConfusionMatrix
from sklearn.datasets import fetch_20newsgroups
remove = ('headers', 'footers', 'quotes')
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
data_test = fetch_20newsgroups(subset = 'test', categories = categories,
shuffle = True, random_state = 42,
remove = remove)
X_test = np.array(data_test.data).reshape((len(data_test.data),1))
y_test = data_test.target
# Test our best pipeline.
y_pred = fitted_model.predict(X_test)
y_pred_strings = [data_test.target_names[i] for i in y_pred]
y_test_strings = [data_test.target_names[i] for i in y_test]
cm = ConfusionMatrix(y_test_strings, y_pred_strings)
print(cm)
cm.plot()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Remote Execution using attach**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use the scikit-learn's [20newsgroup](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html) to showcase how you can use AutoML to handle text data with remote attach.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Attach an existing DSVM to a workspace.3. Configure AutoML using `AutoMLConfig`.4. Train the model using the DSVM.5. Explore the results.6. Test the best fitted model.In addition this notebook showcases the following features- **Parallel** executions for iterations- **Asynchronous** tracking of progress- **Cancellation** of individual iterations or the entire run- Retrieving models for any iteration or logged metric- Specifying AutoML settings as `**kwargs`- Handling **text** data using the `preprocess` flag SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import os
import numpy as np
import pandas as pd
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
ws = Workspace.from_config()
# Choose a name for the run history container in the workspace.
experiment_name = 'automl-remote-attach'
project_folder = './sample_projects/automl-remote-attach'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
###Output
_____no_output_____
###Markdown
Opt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics = True)
###Output
_____no_output_____
###Markdown
Attach a Remote Linux DSVMTo use a remote Docker compute target:1. Create a Linux DSVM in Azure, following these [quick instructions](https://docs.microsoft.com/en-us/azure/machine-learning/desktop-workbench/how-to-create-dsvm-hdi). Make sure you use the Ubuntu flavor (not CentOS). Make sure that disk space is available under `/tmp` because AutoML creates files under `/tmp/azureml_runs`. The DSVM should have more cores than the number of parallel runs that you plan to enable. It should also have at least 4 GB per core.2. Enter the IP address, user name and password below.**Note:** By default, SSH runs on port 22 and you don't need to change the port number below. If you've configured SSH to use a different port, change `dsvm_ssh_port` accordingly. [Read more](https://render.githubusercontent.com/documentation/sdk/ssh-issue.md) on changing SSH ports for security reasons.
###Code
from azureml.core.compute import ComputeTarget, RemoteCompute
import time
# Add your VM information below
# If a compute with the specified compute_name already exists, it will be used and the dsvm_ip_addr, dsvm_ssh_port,
# dsvm_username and dsvm_password will be ignored.
compute_name = 'mydsvmb'
dsvm_ip_addr = '<<ip_addr>>'
dsvm_ssh_port = 22
dsvm_username = '<<username>>'
dsvm_password = '<<password>>'
if compute_name in ws.compute_targets:
print('Using existing compute.')
dsvm_compute = ws.compute_targets[compute_name]
else:
attach_config = RemoteCompute.attach_configuration(address=dsvm_ip_addr, username=dsvm_username, password=dsvm_password, ssh_port=dsvm_ssh_port)
ComputeTarget.attach(workspace=ws, name=compute_name, attach_configuration=attach_config)
while ws.compute_targets[compute_name].provisioning_state == 'Creating':
time.sleep(1)
dsvm_compute = ws.compute_targets[compute_name]
if dsvm_compute.provisioning_state == 'Failed':
print('Attached failed.')
print(dsvm_compute.provisioning_errors)
dsvm_compute.detach()
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
# create a new RunConfig object
conda_run_config = RunConfiguration(framework="python")
# Set compute target to the Linux DSVM
conda_run_config.target = dsvm_compute
cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy'])
conda_run_config.environment.python.conda_dependencies = cd
###Output
_____no_output_____
###Markdown
DataFor remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from blob storage or from local disk in this file.In this example, the `get_data()` function returns a [dictionary](README.md#getdata).
###Code
if not os.path.exists(project_folder):
os.makedirs(project_folder)
%%writefile $project_folder/get_data.py
import numpy as np
from sklearn.datasets import fetch_20newsgroups
def get_data():
remove = ('headers', 'footers', 'quotes')
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
data_train = fetch_20newsgroups(subset = 'train', categories = categories,
shuffle = True, random_state = 42,
remove = remove)
X_train = np.array(data_train.data).reshape((len(data_train.data),1))
y_train = np.array(data_train.target)
return { "X" : X_train, "y" : y_train }
###Output
_____no_output_____
###Markdown
TrainYou can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local executions too.**Note:** When using a Remote DSVM, you can't pass Numpy arrays directly to the fit method.

|Property|Description|
|-|-|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|
|**n_cross_validations**|Number of cross validation splits.|
|**max_concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM.|
|**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.|
|**enable_cache**|Setting this to *True* enables the preprocessing to be done once, with the same preprocessed data reused for all iterations. Default value is True.|
|**max_cores_per_iteration**|Indicates how many cores on the compute target would be used to train a single pipeline. Default is *1*; you can set it to *-1* to use all cores.|
###Code
automl_settings = {
"iteration_timeout_minutes": 60,
"iterations": 4,
"n_cross_validations": 5,
"primary_metric": 'AUC_weighted',
"preprocess": True,
"max_cores_per_iteration": 2
}
automl_config = AutoMLConfig(task = 'classification',
path = project_folder,
run_configuration=conda_run_config,
data_script = project_folder + "/get_data.py",
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run.
###Code
remote_run = experiment.submit(automl_config)
remote_run
###Output
_____no_output_____
###Markdown
Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under `/tmp/azureml_run/{iterationid}/azureml-logs`**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
# Wait until the run finishes.
remote_run.wait_for_completion(show_output = True)
###Output
_____no_output_____
###Markdown
Pre-process cache cleanupThe preprocessed data is cached at the user's default file store. When the run is completed, the cache can be cleaned by running the cell below.
###Code
remote_run.clean_preprocessor_cache()
###Output
_____no_output_____
###Markdown
Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log.
###Code
children = list(remote_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(1)
rundata
###Output
_____no_output_____
###Markdown
Cancelling RunsYou can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions.
###Code
# Cancel the ongoing experiment and stop scheduling new iterations.
# remote_run.cancel()
# Cancel iteration 1 and move onto iteration 2.
# remote_run.cancel_iteration(1)
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
###Code
best_run, fitted_model = remote_run.get_output()
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Best Model Based on Any Other MetricShow the run and the model which has the best `accuracy` value:
###Code
# lookup_metric = "accuracy"
# best_run, fitted_model = remote_run.get_output(metric = lookup_metric)
###Output
_____no_output_____
###Markdown
Model from a Specific Iteration
###Code
iteration = 0
zero_run, zero_model = remote_run.get_output(iteration = iteration)
###Output
_____no_output_____
###Markdown
Test
###Code
# Load test data.
from pandas_ml import ConfusionMatrix
from sklearn.datasets import fetch_20newsgroups
remove = ('headers', 'footers', 'quotes')
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
data_test = fetch_20newsgroups(subset = 'test', categories = categories,
shuffle = True, random_state = 42,
remove = remove)
X_test = np.array(data_test.data).reshape((len(data_test.data),1))
y_test = data_test.target
# Test our best pipeline.
y_pred = fitted_model.predict(X_test)
y_pred_strings = [data_test.target_names[i] for i in y_pred]
y_test_strings = [data_test.target_names[i] for i in y_test]
cm = ConfusionMatrix(y_test_strings, y_pred_strings)
print(cm)
cm.plot()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning: Remote Execution using attachIn this example we use the scikit-learn's [20newsgroup](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html) to showcase how you can use AutoML to handle text data with remote attach.Make sure you have executed the [configuration](../configuration.ipynb) before running this notebook.In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Attach an existing DSVM to a workspace.3. Configure AutoML using `AutoMLConfig`.4. Train the model using the DSVM.5. Explore the results.6. Test the best fitted model.In addition this notebook showcases the following features- **Parallel** executions for iterations- **Asynchronous** tracking of progress- **Cancellation** of individual iterations or the entire run- Retrieving models for any iteration or logged metric- Specifying AutoML settings as `**kwargs`- Handling **text** data using the `preprocess` flag Create an ExperimentAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
import os
import random
from matplotlib import pyplot as plt
from matplotlib.pyplot import imshow
import numpy as np
import pandas as pd
from sklearn import datasets
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
from azureml.train.automl.run import AutoMLRun
ws = Workspace.from_config()
# Choose a name for the run history container in the workspace.
experiment_name = 'automl-remote-dsvm-blobstore'
project_folder = './sample_projects/automl-remote-dsvm-blobstore'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
pd.DataFrame(data=output, index=['']).T
###Output
_____no_output_____
###Markdown
DiagnosticsOpt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics = True)
###Output
_____no_output_____
###Markdown
Attach a Remote Linux DSVMTo use a remote Docker compute target:1. Create a Linux DSVM in Azure, following these [quick instructions](https://docs.microsoft.com/en-us/azure/machine-learning/desktop-workbench/how-to-create-dsvm-hdi). Make sure you use the Ubuntu flavor (not CentOS). Make sure that disk space is available under `/tmp` because AutoML creates files under `/tmp/azureml_runs`. The DSVM should have more cores than the number of parallel runs that you plan to enable. It should also have at least 4 GB per core.2. Enter the IP address, user name and password below.**Note:** By default, SSH runs on port 22 and you don't need to change the port number below. If you've configured SSH to use a different port, change `dsvm_ssh_port` accordingly. [Read more](https://render.githubusercontent.com/documentation/sdk/ssh-issue.md) on changing SSH ports for security reasons.
###Code
from azureml.core.compute import ComputeTarget, RemoteCompute
import time
# Add your VM information below
# If a compute with the specified compute_name already exists, it will be used and the dsvm_ip_addr, dsvm_ssh_port,
# dsvm_username and dsvm_password will be ignored.
compute_name = 'mydsvmb'
dsvm_ip_addr = '<<ip_addr>>'
dsvm_ssh_port = 22
dsvm_username = '<<username>>'
dsvm_password = '<<password>>'
if compute_name in ws.compute_targets:
print('Using existing compute.')
dsvm_compute = ws.compute_targets[compute_name]
else:
attach_config = RemoteCompute.attach_configuration(address=dsvm_ip_addr, username=dsvm_username, password=dsvm_password, ssh_port=dsvm_ssh_port)
ComputeTarget.attach(workspace=ws, name=compute_name, attach_configuration=attach_config)
while ws.compute_targets[compute_name].provisioning_state == 'Creating':
time.sleep(1)
dsvm_compute = ws.compute_targets[compute_name]
if dsvm_compute.provisioning_state == 'Failed':
print('Attached failed.')
print(dsvm_compute.provisioning_errors)
dsvm_compute.detach()
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
# create a new RunConfig object
conda_run_config = RunConfiguration(framework="python")
# Set compute target to the Linux DSVM
conda_run_config.target = dsvm_compute
cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy'])
conda_run_config.environment.python.conda_dependencies = cd
###Output
_____no_output_____
###Markdown
Create Get Data FileFor remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from blob storage or from local disk in this file.In this example, the `get_data()` function returns a [dictionary](README.md#getdata).
###Code
if not os.path.exists(project_folder):
os.makedirs(project_folder)
%%writefile $project_folder/get_data.py
import numpy as np
from sklearn.datasets import fetch_20newsgroups
def get_data():
remove = ('headers', 'footers', 'quotes')
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
data_train = fetch_20newsgroups(subset = 'train', categories = categories,
shuffle = True, random_state = 42,
remove = remove)
X_train = np.array(data_train.data).reshape((len(data_train.data),1))
y_train = np.array(data_train.target)
return { "X" : X_train, "y" : y_train }
###Output
_____no_output_____
###Markdown
Configure AutoML You can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local executions too.**Note:** When using a Remote DSVM, you can't pass Numpy arrays directly to the fit method.

|Property|Description|
|-|-|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|
|**n_cross_validations**|Number of cross validation splits.|
|**max_concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM.|
|**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.|
|**enable_cache**|Setting this to *True* enables the preprocessing to be done once, with the same preprocessed data reused for all iterations. Default value is True.|
|**max_cores_per_iteration**|Indicates how many cores on the compute target would be used to train a single pipeline. Default is *1*; you can set it to *-1* to use all cores.|
###Code
automl_settings = {
"iteration_timeout_minutes": 60,
"iterations": 4,
"n_cross_validations": 5,
"primary_metric": 'AUC_weighted',
"preprocess": True,
"max_cores_per_iteration": 2
}
automl_config = AutoMLConfig(task = 'classification',
path = project_folder,
run_configuration=conda_run_config,
data_script = project_folder + "/get_data.py",
**automl_settings
)
###Output
_____no_output_____
###Markdown
Train the Models Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run.
###Code
remote_run = experiment.submit(automl_config)
###Output
_____no_output_____
###Markdown
Exploring the Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under `/tmp/azureml_run/{iterationid}/azureml-logs`**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
# Wait until the run finishes.
remote_run.wait_for_completion(show_output = True)
###Output
_____no_output_____
###Markdown
Pre-process cache cleanupThe preprocessed data is cached at the user's default file store. When the run is completed, the cache can be cleaned by running the cell below.
###Code
remote_run.clean_preprocessor_cache()
###Output
_____no_output_____
###Markdown
Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log.
###Code
children = list(remote_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(1)
rundata
###Output
_____no_output_____
###Markdown
Cancelling RunsYou can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions.
###Code
# Cancel the ongoing experiment and stop scheduling new iterations.
# remote_run.cancel()
# Cancel iteration 1 and move onto iteration 2.
# remote_run.cancel_iteration(1)
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
###Code
best_run, fitted_model = remote_run.get_output()
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Best Model Based on Any Other MetricShow the run and the model which has the best `accuracy` value:
###Code
# lookup_metric = "accuracy"
# best_run, fitted_model = remote_run.get_output(metric = lookup_metric)
###Output
_____no_output_____
###Markdown
Model from a Specific Iteration
###Code
iteration = 0
zero_run, zero_model = remote_run.get_output(iteration = iteration)
###Output
_____no_output_____
###Markdown
Testing the Fitted Model
###Code
# Load test data.
from pandas_ml import ConfusionMatrix
from sklearn.datasets import fetch_20newsgroups
remove = ('headers', 'footers', 'quotes')
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
data_test = fetch_20newsgroups(subset = 'test', categories = categories,
shuffle = True, random_state = 42,
remove = remove)
X_test = np.array(data_test.data).reshape((len(data_test.data),1))
y_test = data_test.target
# Test our best pipeline.
y_pred = fitted_model.predict(X_test)
y_pred_strings = [data_test.target_names[i] for i in y_pred]
y_test_strings = [data_test.target_names[i] for i in y_test]
cm = ConfusionMatrix(y_test_strings, y_pred_strings)
print(cm)
cm.plot()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Automated Machine Learning_**Remote Execution using attach**_ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train)1. [Results](Results)1. [Test](Test) IntroductionIn this example we use the scikit-learn's [20newsgroup](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_20newsgroups.html) to showcase how you can use AutoML to handle text data with remote attach.Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.In this notebook you will learn how to:1. Create an `Experiment` in an existing `Workspace`.2. Attach an existing DSVM to a workspace.3. Configure AutoML using `AutoMLConfig`.4. Train the model using the DSVM.5. Explore the results.6. Test the best fitted model.In addition this notebook showcases the following features- **Parallel** executions for iterations- **Asynchronous** tracking of progress- **Cancellation** of individual iterations or the entire run- Retrieving models for any iteration or logged metric- Specifying AutoML settings as `**kwargs`- Handling **text** data using the `preprocess` flag SetupAs part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments.
###Code
import logging
import os
import random
from matplotlib import pyplot as plt
from matplotlib.pyplot import imshow
import numpy as np
import pandas as pd
from sklearn import datasets
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.train.automl import AutoMLConfig
from azureml.train.automl.run import AutoMLRun
ws = Workspace.from_config()
# Choose a name for the run history container in the workspace.
experiment_name = 'automl-remote-attach'
project_folder = './sample_projects/automl-remote-attach'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Experiment Name'] = experiment.name
pd.set_option('display.max_colwidth', -1)
pd.DataFrame(data=output, index=['']).T
###Output
_____no_output_____
###Markdown
Opt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics = True)
###Output
_____no_output_____
###Markdown
Attach a Remote Linux DSVMTo use a remote Docker compute target:1. Create a Linux DSVM in Azure, following these [quick instructions](https://docs.microsoft.com/en-us/azure/machine-learning/desktop-workbench/how-to-create-dsvm-hdi). Make sure you use the Ubuntu flavor (not CentOS). Make sure that disk space is available under `/tmp` because AutoML creates files under `/tmp/azureml_runs`. The DSVM should have more cores than the number of parallel runs that you plan to enable. It should also have at least 4 GB per core.2. Enter the IP address, user name and password below.**Note:** By default, SSH runs on port 22 and you don't need to change the port number below. If you've configured SSH to use a different port, change `dsvm_ssh_port` accordingly. [Read more](https://render.githubusercontent.com/documentation/sdk/ssh-issue.md) on changing SSH ports for security reasons.
###Code
from azureml.core.compute import ComputeTarget, RemoteCompute
import time
# Add your VM information below
# If a compute with the specified compute_name already exists, it will be used and the dsvm_ip_addr, dsvm_ssh_port,
# dsvm_username and dsvm_password will be ignored.
compute_name = 'mydsvmb'
dsvm_ip_addr = '<<ip_addr>>'
dsvm_ssh_port = 22
dsvm_username = '<<username>>'
dsvm_password = '<<password>>'
if compute_name in ws.compute_targets:
print('Using existing compute.')
dsvm_compute = ws.compute_targets[compute_name]
else:
attach_config = RemoteCompute.attach_configuration(address=dsvm_ip_addr, username=dsvm_username, password=dsvm_password, ssh_port=dsvm_ssh_port)
ComputeTarget.attach(workspace=ws, name=compute_name, attach_configuration=attach_config)
while ws.compute_targets[compute_name].provisioning_state == 'Creating':
time.sleep(1)
dsvm_compute = ws.compute_targets[compute_name]
if dsvm_compute.provisioning_state == 'Failed':
print('Attached failed.')
print(dsvm_compute.provisioning_errors)
dsvm_compute.detach()
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
# create a new RunConfig object
conda_run_config = RunConfiguration(framework="python")
# Set compute target to the Linux DSVM
conda_run_config.target = dsvm_compute
cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]'], conda_packages=['numpy'])
conda_run_config.environment.python.conda_dependencies = cd
###Output
_____no_output_____
###Markdown
DataFor remote executions you should author a `get_data.py` file containing a `get_data()` function. This file should be in the root directory of the project. You can encapsulate code to read data either from blob storage or from local disk in this file.In this example, the `get_data()` function returns a [dictionary](README.md#getdata).
###Code
if not os.path.exists(project_folder):
os.makedirs(project_folder)
%%writefile $project_folder/get_data.py
import numpy as np
from sklearn.datasets import fetch_20newsgroups
def get_data():
remove = ('headers', 'footers', 'quotes')
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
data_train = fetch_20newsgroups(subset = 'train', categories = categories,
shuffle = True, random_state = 42,
remove = remove)
X_train = np.array(data_train.data).reshape((len(data_train.data),1))
y_train = np.array(data_train.target)
return { "X" : X_train, "y" : y_train }
###Output
_____no_output_____
###Markdown
TrainYou can specify `automl_settings` as `**kwargs` as well. Also note that you can use a `get_data()` function for local executions too.**Note:** When using a Remote DSVM, you can't pass Numpy arrays directly to the fit method.

|Property|Description|
|-|-|
|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: accuracy, AUC_weighted, average_precision_score_weighted, norm_macro_recall, precision_score_weighted|
|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|
|**iterations**|Number of iterations. In each iteration AutoML trains a specific pipeline with the data.|
|**n_cross_validations**|Number of cross validation splits.|
|**max_concurrent_iterations**|Maximum number of iterations that would be executed in parallel. This should be less than the number of cores on the DSVM.|
|**preprocess**|Setting this to *True* enables AutoML to perform preprocessing on the input to handle *missing data*, and to perform some common *feature extraction*.|
|**enable_cache**|Setting this to *True* enables the preprocessing to be done once, with the same preprocessed data reused for all iterations. Default value is True.|
|**max_cores_per_iteration**|Indicates how many cores on the compute target would be used to train a single pipeline. Default is *1*; you can set it to *-1* to use all cores.|
###Code
automl_settings = {
"iteration_timeout_minutes": 60,
"iterations": 4,
"n_cross_validations": 5,
"primary_metric": 'AUC_weighted',
"preprocess": True,
"max_cores_per_iteration": 2
}
automl_config = AutoMLConfig(task = 'classification',
path = project_folder,
run_configuration=conda_run_config,
data_script = project_folder + "/get_data.py",
**automl_settings
)
###Output
_____no_output_____
###Markdown
Call the `submit` method on the experiment object and pass the run configuration. For remote runs the execution is asynchronous, so you will see the iterations get populated as they complete. You can interact with the widgets and models even when the experiment is running to retrieve the best model up to that point. Once you are satisfied with the model, you can cancel a particular iteration or the whole run.
###Code
remote_run = experiment.submit(automl_config)
remote_run
###Output
_____no_output_____
###Markdown
Results Widget for Monitoring RunsThe widget will first report a "loading" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.You can click on a pipeline to see run properties and output logs. Logs are also available on the DSVM under `/tmp/azureml_run/{iterationid}/azureml-logs`**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details.
###Code
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
# Wait until the run finishes.
remote_run.wait_for_completion(show_output = True)
###Output
_____no_output_____
###Markdown
Pre-process cache cleanupThe preprocessed data is cached at the user's default file store. When the run is completed, the cache can be cleaned by running the cell below.
###Code
remote_run.clean_preprocessor_cache()
###Output
_____no_output_____
###Markdown
Retrieve All Child RunsYou can also use SDK methods to fetch all the child runs and see individual metrics that we log.
###Code
children = list(remote_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(1)
rundata
###Output
_____no_output_____
###Markdown
Cancelling RunsYou can cancel ongoing remote runs using the `cancel` and `cancel_iteration` functions.
###Code
# Cancel the ongoing experiment and stop scheduling new iterations.
# remote_run.cancel()
# Cancel iteration 1 and move onto iteration 2.
# remote_run.cancel_iteration(1)
###Output
_____no_output_____
###Markdown
Retrieve the Best ModelBelow we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*.
###Code
best_run, fitted_model = remote_run.get_output()
print(best_run)
print(fitted_model)
###Output
_____no_output_____
###Markdown
Best Model Based on Any Other MetricShow the run and the model which has the best `accuracy` value:
###Code
# lookup_metric = "accuracy"
# best_run, fitted_model = remote_run.get_output(metric = lookup_metric)
###Output
_____no_output_____
###Markdown
Model from a Specific Iteration
###Code
iteration = 0
zero_run, zero_model = remote_run.get_output(iteration = iteration)
###Output
_____no_output_____
###Markdown
Test
###Code
# Load test data.
from pandas_ml import ConfusionMatrix
from sklearn.datasets import fetch_20newsgroups
remove = ('headers', 'footers', 'quotes')
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
data_test = fetch_20newsgroups(subset = 'test', categories = categories,
shuffle = True, random_state = 42,
remove = remove)
X_test = np.array(data_test.data).reshape((len(data_test.data),1))
y_test = data_test.target
# Test our best pipeline.
y_pred = fitted_model.predict(X_test)
y_pred_strings = [data_test.target_names[i] for i in y_pred]
y_test_strings = [data_test.target_names[i] for i in y_test]
cm = ConfusionMatrix(y_test_strings, y_pred_strings)
print(cm)
cm.plot()
###Output
_____no_output_____ |
labs/laboratorio_10.ipynb | ###Markdown
MAT281 - Lab N°04 Lab objectives* Reinforce basic concepts of dimensionality reduction. Contents* [Problem 01](#p1) I.- Problem 01**Breast cancer** is a malignant proliferation of the epithelial cells that line the mammary ducts or lobules. It is a clonal disease: a single cell, as the result of a series of somatic or germline mutations, acquires the ability to divide without control or order, reproducing until it forms a tumor. The resulting tumor, which begins as a mild abnormality, becomes serious, invades neighboring tissues and finally spreads to other parts of the body.The dataset is called `BC.csv` and contains information on different patients with tumors (benign or malignant) together with some characteristics of each tumor.The features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe the characteristics of the cell nuclei present in the image.Details can be found in [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].The first step is to load the dataset:
###Code
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier,DecisionTreeRegressor
import random
from sklearn.metrics import classification_report
%matplotlib inline
sns.set_palette("deep", desat=.6)
sns.set(rc={'figure.figsize':(11.7,8.27)})
# load data
df = pd.read_csv(os.path.join("data","BC.csv"), sep=",")
df['diagnosis'] = df['diagnosis'] .replace({'M':1,'B':0})
df.head()
###Output
_____no_output_____
###Markdown
Based on the information presented, answer the following questions:1. Normalize the numeric columns with **StandardScaler**.2. Produce a correlation plot and identify any collinearity.3. Fit a PCA with **n_components = 10**, plot the explained variance and the cumulative explained variance, and interpret them.4. Return a dataframe with the principal components.5. Apply at least three classification models and, for each model, compute the value of its metrics.
###Code
x = df.loc[:, df.drop(['diagnosis', 'id'],axis=1).columns].values
y = df.loc[:, ['diagnosis']].values
x = StandardScaler().fit_transform(x)
df_corr = df.drop(['diagnosis', 'id'],axis=1).corr()
sns.heatmap(df_corr, annot=False)
###Output
_____no_output_____
###Markdown
We can see that there is collinearity between the variables "radius_mean", "area_mean", "perimeter_mean", "radius_worst", "area_worst" and "perimeter_worst" (a quick numeric check is sketched at the start of the next code cell).
###Code
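# Quick check of the collinearity noted above (a minimal sketch; the column names
# are the ones listed in the note and are assumed to be present in df):
collinear_cols = ["radius_mean", "area_mean", "perimeter_mean",
                  "radius_worst", "area_worst", "perimeter_worst"]
# Pairwise correlations close to 1 confirm the collinearity
print(df[collinear_cols].corr().round(2))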
columns=[]
for i in range(1,11):
st= 'PC'
st= 'PC'+ str(i)
columns.append(st)
pca = PCA(n_components=10)
principalComponents = pca.fit_transform(x)
# plot the variance explained by each component
percent_variance = np.round(pca.explained_variance_ratio_* 100, decimals =2)
plt.figure(figsize=(12,4))
plt.bar(x= range(1,11), height=percent_variance, tick_label=columns)
plt.ylabel('Percentate of Variance Explained')
plt.xlabel('Principal Component')
plt.title('PCA Scree Plot')
plt.show()
string='PC1'
columns_2=[str]
for i in range(1,len(columns)):
string = string + '+' + columns[i]
columns_2.append(string)
# plot the cumulative sum of explained variance across components
percent_variance_cum = np.cumsum(percent_variance)
plt.figure(figsize=(12,4))
plt.bar(x= range(1,11), height=percent_variance_cum, tick_label=columns_2)
plt.ylabel('Percentate of Variance Explained')
plt.xlabel('Principal Component Cumsum')
plt.title('PCA Scree Plot')
plt.show()
percent_variance_cum
columns[:7]
###Output
_____no_output_____
###Markdown
The first 7 principal components explain about 91% of the variance in the variables (the sketch at the start of the next code cell shows how to let PCA pick this number automatically).
###Code
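# Alternative sketch: instead of hard-coding 7 components, PCA accepts a float in
# (0, 1) for n_components and keeps just enough components to reach that fraction
# of explained variance.
pca_check = PCA(n_components=0.91)
pca_check.fit(x)
print(pca_check.n_components_)  # components needed to explain at least 91% of the variance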
pca = PCA(n_components=7)
principalComponents = pca.fit_transform(x)
principalDataframe = pd.DataFrame(data = principalComponents, columns = columns[:7])
targetDataframe = df[['diagnosis']]
newDataframe = pd.concat([principalDataframe, targetDataframe],axis = 1)
newDataframe.head()
###Output
_____no_output_____
###Markdown
Train Models
###Code
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import Perceptron
from sklearn import tree
X_train, X_test, Y_train, Y_test = train_test_split(principalDataframe, targetDataframe, test_size=0.2, random_state = 42)
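# Note (a sketch): if the classes were strongly imbalanced, passing
# stratify=targetDataframe to train_test_split would keep the class
# proportions similar in the train and test splits.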
###Output
_____no_output_____
###Markdown
SVC
###Code
regr_1= SVC()
regr_1.fit(X_train,Y_train)
target_names = ['clase 0', 'clase 1']
print(classification_report(Y_test, regr_1.predict(X_test), target_names=target_names))
###Output
precision recall f1-score support
clase 0 0.97 0.99 0.98 71
clase 1 0.98 0.95 0.96 43
accuracy 0.97 114
macro avg 0.97 0.97 0.97 114
weighted avg 0.97 0.97 0.97 114
###Markdown
Decision Tree
###Code
regr_2=tree.DecisionTreeClassifier()
regr_2.fit(X_train,Y_train)
target_names = ['clase 0', 'clase 1']
print(classification_report(Y_test, regr_2.predict(X_test), target_names=target_names))
###Output
precision recall f1-score support
clase 0 0.97 0.93 0.95 71
clase 1 0.89 0.95 0.92 43
accuracy 0.94 114
macro avg 0.93 0.94 0.94 114
weighted avg 0.94 0.94 0.94 114
###Markdown
Logistic Regression
###Code
regr_3=LogisticRegression()
regr_3.fit(X_train,Y_train)
target_names = ['clase 0', 'clase 1']
print(classification_report(Y_test, regr_3.predict(X_test), target_names=target_names))
###Output
precision recall f1-score support
clase 0 0.99 0.99 0.99 71
clase 1 0.98 0.98 0.98 43
accuracy 0.98 114
macro avg 0.98 0.98 0.98 114
weighted avg 0.98 0.98 0.98 114
###Markdown
Perceptron
###Code
regr_4=Perceptron()
regr_4.fit(X_train,Y_train)
target_names = ['clase 0', 'clase 1']
print(classification_report(Y_test, regr_4.predict(X_test), target_names=target_names))
###Output
precision recall f1-score support
clase 0 0.97 0.99 0.98 71
clase 1 0.98 0.95 0.96 43
accuracy 0.97 114
macro avg 0.97 0.97 0.97 114
weighted avg 0.97 0.97 0.97 114
###Markdown
MAT281 - Lab N°04 Lab objectives* Reinforce basic concepts of dimensionality reduction. Contents* [Problem 01](#p1) I.- Problem 01**Breast cancer** is a malignant proliferation of the epithelial cells that line the mammary ducts or lobules. It is a clonal disease: a single cell, as the result of a series of somatic or germline mutations, acquires the ability to divide without control or order, reproducing until it forms a tumor. The resulting tumor, which begins as a mild abnormality, becomes serious, invades neighboring tissues and finally spreads to other parts of the body.The dataset is called `BC.csv` and contains information on different patients with tumors (benign or malignant) together with some characteristics of each tumor.The features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe the characteristics of the cell nuclei present in the image.Details can be found in [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].The first step is to load the dataset:
###Code
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
%matplotlib inline
sns.set_palette("deep", desat=.6)
sns.set(rc={'figure.figsize':(11.7,8.27)})
# load data
df = pd.read_csv(os.path.join("data","BC.csv"), sep=",")
df['diagnosis'] = df['diagnosis'] .replace({'M':1,'B':0})
df.head()
###Output
_____no_output_____
###Markdown
Based on the information presented, answer the following questions:1. Normalize the numeric columns with **StandardScaler**.2. Produce a correlation plot and identify any collinearity.3. Fit a PCA with **n_components = 10**, plot the explained variance and the cumulative explained variance, and interpret them.4. Return a dataframe with the principal components.5. Apply at least three classification models and, for each model, compute the value of its metrics.
###Code
#1
l=df.columns
f=l.drop(['id','diagnosis'])
scaler = StandardScaler()
df[f]=scaler.fit_transform(df[f])
df.head()
#2
df['target'] = df['diagnosis']  # use the diagnosis label as the target
x = df.loc[:, f].values
y = df.loc[:, ['target']].values
x = StandardScaler().fit_transform(x)
#2
f, ax=plt.subplots(figsize=(9,9))
corr=df.corr()
sns.heatmap(
corr,
cmap=sns.diverging_palette(210,10,as_cmap=True),
square=True,
ax=ax
)
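# (Added sketch) Besides the heatmap, the collinearity can be made explicit by
# listing the feature pairs whose absolute correlation exceeds a threshold.
# This assumes the `corr` matrix computed above is still in scope.
high_corr = (
    corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))
        .stack()
        .loc[lambda s: s.abs() > 0.9]
        .sort_values(ascending=False)
)
print(high_corr)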
#3
pca=PCA(n_components=10)
principalComponents = pca.fit_transform(x)
percent_variance=np.round(pca.explained_variance_ratio_*100,decimals=2)
columns=['PC1','PC2','PC3','PC4','PC5','PC6','PC7','PC8','PC9','PC10']
plt.figure(figsize=(9,9))
plt.bar(x=range(1,11), height=percent_variance, tick_label=columns)
plt.ylabel('Percentage of variance explained')
plt.xlabel('Principal Component')
plt.title('PCA Scree Plot')
plt.show()
#3 (cumulative variance)
percent_variance_cum = np.cumsum(percent_variance)
columns = ['PC1', 'PC1,2', 'PC1to3', 'PC1to4', 'PC1to5', 'PC1to6', 'PC1to7', 'PC1to8', 'PC1to9', 'PC1to10']
plt.figure(figsize=(12,8))
plt.bar(x= range(1,11), height=percent_variance_cum, tick_label=columns)
plt.ylabel('Percentage of Variance Explained')
plt.xlabel('Principal Component Cumsum')
plt.title('PCA Scree Plot')
plt.show()
#4
pca = PCA(n_components=7)
principalComponents = pca.fit_transform(x)
principalDataframe = pd.DataFrame(data = principalComponents,
columns=['PC1','PC2','PC3','PC4','PC5','PC6','PC7'])
targetDataframe = df[['target']]
newDataframe = pd.concat([principalDataframe, targetDataframe],axis = 1)
newDataframe.head()
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°04 Lab objectives* Reinforce basic concepts of dimensionality reduction. Contents* [Problem 01](p1) I.- Problem 01**Breast cancer** is a malignant proliferation of the epithelial cells that line the mammary ducts or lobules. It is a clonal disease: a single cell, through a series of somatic or germline mutations, acquires the ability to divide without control or order, reproducing until it forms a tumor. The resulting tumor, which begins as a mild anomaly, becomes serious, invades neighboring tissues and finally spreads to other parts of the body.The dataset is called `BC.csv` and contains information on different patients with (benign or malignant) tumors and some characteristics of those tumors.The features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image.The details can be found in [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].The first step is to load the dataset:
###Code
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
%matplotlib inline
sns.set_palette("deep", desat=.6)
sns.set(rc={'figure.figsize':(11.7,8.27)})
# load the data
df = pd.read_csv(os.path.join("data","BC.csv"), sep=",")
df['diagnosis'] = df['diagnosis'] .replace({'M':1,'B':0})
df.head()
###Output
_____no_output_____
###Markdown
Based on the information presented, answer the following questions: 1. Normalize the numeric columns with **StandardScaler**.
###Code
scaler = StandardScaler()
x=scaler.fit_transform(df.drop(['id','diagnosis'],axis=1))
y=df.loc[:,['id','diagnosis']]
###Output
_____no_output_____
###Markdown
2. Make a correlation plot. Identify the presence of collinearity.
###Code
fig, ax = plt.subplots(figsize=(10, 10))
df_corre=df.drop(['id','diagnosis'],axis=1).corr()
sns.heatmap(df_corre)
###Output
_____no_output_____
###Markdown
3. Fit a PCA with **n_components = 10**. Plot the variance and the cumulative variance. Interpret the result.
###Code
pca = PCA(n_components=10)
principalComponents = pca.fit_transform(x)
principalDataframe = pd.DataFrame(data = principalComponents, columns = ['PC1', 'PC2','PC3', 'PC4', 'PC5', 'PC6','PC7', 'PC8', 'PC9', 'PC10'])
targetDataframe = df[['diagnosis']]
percent_variance = np.round(pca.explained_variance_ratio_* 100, decimals =2)
columns = ['PC1', 'PC2','PC3', 'PC4', 'PC5', 'PC6','PC7', 'PC8', 'PC9', 'PC10']
plt.figure(figsize=(15,4))
plt.bar(x= range(1,11), height=percent_variance, tick_label=columns)
plt.ylabel('Percentage of Variance Explained')
plt.xlabel('Principal Component')
plt.title('PCA Scree Plot')
percent_variance_cum = np.cumsum(percent_variance)
columns = ['PC1', 'PC1+PC2', 'PC1+PC2+PC3', 'PC1+...+PC4', 'PC1+...+PC5', 'PC1+...+PC6', 'PC1+...+PC7', 'PC1+...+PC8', 'PC1+...+PC9', 'PC1+...+PC10']
plt.figure(figsize=(15,4))
plt.bar(x= range(1,11), height=percent_variance_cum, tick_label=columns)
plt.ylabel('Percentage of Variance Explained')
plt.xlabel('Principal Component Cumsum')
plt.title('PCA Scree Plot')
plt.show()
###Output
_____no_output_____
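###Markdown
As a complement to the scree plots above, the short check below (a sketch that assumes the fitted `pca` object from the previous cell is still in scope) prints the cumulative share of variance explained as components are added, which helps justify how many components to keep in the next question.
###Code
# Cumulative explained variance per number of retained components.
cum_var = np.cumsum(pca.explained_variance_ratio_)
for k, v in enumerate(cum_var, start=1):
    print(f'{k:2d} components -> {v:.2%} of the variance explained')
###Output
_____no_output_____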
###Markdown
4. Return a dataframe with the principal components.
###Code
newDataframe = pd.concat([principalDataframe, targetDataframe],axis = 1)
newDataframe.head()
###Output
_____no_output_____
###Markdown
5. Apply at least three classification models. For each model, compute the value of its metrics.
###Code
# split dataset
X_train, X_test, Y_train, Y_test = train_test_split(principalDataframe, targetDataframe, test_size=0.2, random_state = 42)
Y_test['diagnosis']
from sklearn.linear_model import LogisticRegression
rlog = LogisticRegression()
rlog.fit(X_train, Y_train) # ajustando el modelo
# metrics
from metrics_classification import *
from sklearn.metrics import confusion_matrix
y_true1 = list(Y_test['diagnosis'])
y_pred1 = list(rlog.predict(X_test))
print('\nConfusion matrix:\n')
print(confusion_matrix(y_true1,y_pred1))
df_temp = pd.DataFrame(
{
'y':y_true1,
'yhat':y_pred1
}
)
df_metrics = summary_metrics(df_temp)
print("")
print(df_metrics)
from sklearn import svm
clf = svm.SVC()
clf.fit(X_train, Y_train)
from metrics_classification import *
from sklearn.metrics import confusion_matrix
y_true2 = list(Y_test['diagnosis'])
y_pred2= list(clf.predict(X_test))
print('\nConfusion matrix:\n')
print(confusion_matrix(y_true2,y_pred2))
df_temp = pd.DataFrame(
{
'y':y_true2,
'yhat':y_pred2
}
)
df_metrics = summary_metrics(df_temp)
print("")
print(df_metrics)
from sklearn.ensemble import RandomForestClassifier
clf1 = RandomForestClassifier(max_depth=2, random_state=42)
clf1.fit(X_train, Y_train)
from metrics_classification import *
from sklearn.metrics import confusion_matrix
y_true3 = list(Y_test['diagnosis'])
y_pred3= list(clf1.predict(X_test))
print('\nConfusion matrix:\n')
print(confusion_matrix(y_true3,y_pred3))
df_temp = pd.DataFrame(
{
'y':y_true3,
'yhat':y_pred3
}
)
df_metrics = summary_metrics(df_temp)
print("")
print(df_metrics)
###Output
_____no_output_____
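###Markdown
The `summary_metrics` helper used above comes from a local `metrics_classification` module that is not included in this notebook. For readers without that module, the cell below is an equivalent sketch using scikit-learn's built-in report for the last model (it assumes `clf1`, `X_test` and `Y_test` from the cells above are still defined).
###Code
from sklearn.metrics import classification_report

# Same kind of per-class summary (precision, recall, F1) as summary_metrics.
print(classification_report(Y_test['diagnosis'], clf1.predict(X_test)))
###Output
_____no_output_____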
###Markdown
MAT281 - Laboratorio N°04 Lab objectives* Reinforce basic concepts of dimensionality reduction. Contents* [Problem 01](p1) I.- Problem 01**Breast cancer** is a malignant proliferation of the epithelial cells that line the mammary ducts or lobules. It is a clonal disease: a single cell, through a series of somatic or germline mutations, acquires the ability to divide without control or order, reproducing until it forms a tumor. The resulting tumor, which begins as a mild anomaly, becomes serious, invades neighboring tissues and finally spreads to other parts of the body.The dataset is called `BC.csv` and contains information on different patients with (benign or malignant) tumors and some characteristics of those tumors.The features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image.The details can be found in [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].The first step is to load the dataset:
###Code
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
%matplotlib inline
sns.set_palette("deep", desat=.6)
sns.set(rc={'figure.figsize':(11.7,8.27)})
# load the data
df = pd.read_csv(os.path.join("data","BC.csv"), sep=",")
df['diagnosis'] = df['diagnosis'] .replace({'M':1,'B':0})
df.head()
###Output
_____no_output_____
###Markdown
Based on the information presented, answer the following questions:1. Normalize the numeric columns with **StandardScaler**.2. Make a correlation plot. Identify the presence of collinearity.3. Fit a PCA with **n_components = 10**. Plot the variance and the cumulative variance. Interpret the result.4. Return a dataframe with the principal components.5. Apply at least three classification models. For each model, compute the value of its metrics.
###Code
# Check whether any column contains NaN values
df.isna().any().unique()
X = df.loc[:, df.columns[2:]].values
y = df.loc[:, ['diagnosis']].values
# The idea is to split the data into two sets: the features (X) and the labels (y).
X = StandardScaler().fit_transform(X)
# Then, standardize the features.
predictor_names = df.columns[2:]
n_show = min(len(predictor_names),50)
corrmat = df[predictor_names[:n_show]].corr()
fig, ax = plt.subplots(figsize=(10, 10))
sns.heatmap(corrmat, vmax=.9, square=True, ax=ax,cmap="PuBuGn")
ax.set_title("Correlación Lineal Predictores vs Predictores")
plt.show()
###Output
_____no_output_____
###Markdown
We can see certain symmetric blocks where the linear correlation is notably strong. Next, a PCA fit with n_components = 10 is performed, producing plots of the variance and the cumulative variance.
###Code
pca = PCA(n_components=10)  # PCA model with 10 components
aa = pca.fit_transform(X)  # principal components obtained from the model above
# plot of the variance explained by each component
percent_variance = np.round(pca.explained_variance_ratio_* 100, decimals =2)
columns = ['PC1', 'PC2', 'PC3', 'PC4', 'PC5', 'PC6', 'PC7', 'PC8', 'PC9', 'PC10'] # "Principal Components"
plt.figure(figsize=(12,4))
plt.bar(x= range(1,11), height=percent_variance, tick_label=columns)
plt.ylabel('Percentage of Variance Explained')
plt.xlabel('Principal Component')
plt.title('PCA Scree Plot')
plt.show()
# plot of the cumulative sum of the explained variance
percent_variance_cum = np.cumsum(percent_variance)
columns = ['PC1',
'PC1+PC2',
'PC1+PC2+PC3',
'PC1+PC2+PC3+PC4',
'PC1+PC2+PC3+PC4+PC5',
'PC1+PC2+PC3+PC4+PC5+PC6',
'PC1+PC2+PC3+PC4+PC5+PC6+PC7',
'PC1+PC2+PC3+PC4+PC5+PC6+PC7+PC8',
'PC1+PC2+PC3+PC4+PC5+PC6+PC7+PC8+PC9',
'PC1+PC2+PC3+PC4+PC5+PC6+PC7+PC8+PC9+PC10']
plt.figure(figsize=(12,4))
plt.bar(x= range(1,11), height=percent_variance_cum, tick_label=columns)
plt.xticks(ticks=range(1, 11), rotation='vertical')
plt.ylabel('Percentage of Variance Explained')
plt.xlabel('Principal Component Cumsum')
plt.title('PCA Scree Plot')
plt.show()
principalDataframe = pd.DataFrame(data = aa,
columns = ['PC1', 'PC2', 'PC3', 'PC4', 'PC5', 'PC6', 'PC7', 'PC8', 'PC9', 'PC10'])
targetDataframe = df[['diagnosis']]
df_red = pd.concat([principalDataframe, targetDataframe],axis = 1)
df_red.head()
###Output
_____no_output_____ |
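###Markdown
Question 5 of the lab (applying at least three classification models) is not carried out in this version of the notebook. The cell below is a minimal sketch of how the reduced dataframe `df_red` built above could feed one such model, assuming scikit-learn is available; the remaining models would follow the same pattern.
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Principal components as features, diagnosis as the label.
X_red = df_red.drop(columns=['diagnosis'])
y_red = df_red['diagnosis']
X_tr, X_te, y_tr, y_te = train_test_split(X_red, y_red, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
###Output
_____no_output_____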
notebooks/train-diqa-base.ipynb | ###Markdown
IntroductionIn this tutorial, we will implement the Deep CNN-Based Blind Image Quality Predictor (DIQA) methodology proposed by Jongio Kim, Anh-Duc Nguyen, and Sanghoon Lee [1]. Also, I will go through the following TensorFlow 2.0 concepts:- Download and prepare a dataset using a *tf.data.Dataset builder*.- Define a TensorFlow input pipeline to pre-process the dataset records using the *tf.data* API.- Create the CNN model using the *tf.keras* functional API.- Define a custom training loop for the objective error map model.- Train the objective error map and subjective score model.- Use the trained subjective score model to make predictions.*Note: Some of the functions are implemented in [utils.py](https://github.com/ocampor/image-quality/blob/master/notebooks/utils.py) as they are out of the guide's scope.* What is DIQA?DIQA is an original proposal that focuses on solving some of the most concerning challenges of applying deep learning to image quality assessment (IQA). The advantages against other methodologies are:- The model is not limited to work exclusively with Natural Scene Statistics (NSS) images [1].- Prevents overfitting by splitting the training into two phases (1) feature learning and (2) mapping learned features to subjective scores. ProblemThe cost of generating datasets for IQA is high since it requires expert supervision. Therefore, the fundamental IQA benchmarks are comprised of solely a few thousands of records. The latter complicates the creation of deep learning models because they require large amounts of training samples to generalize.As an example, let's consider the most frequently used datasets to train and evaluate IQA methods [Live](https://live.ece.utexas.edu/research/quality/subjective.htm), [TID2008](http://www.ponomarenko.info/tid2008.htm), [TID2013](http://www.ponomarenko.info/tid2013.htm), [CSIQ](http://vision.eng.shizuoka.ac.jp/mod/page/view.php?id=23). An overall summary of each dataset is contained in the next table:| Dataset | References | Distortions | Severity | Total Samples ||---------|------------|-------------|----------|---------------|| LiveIQA | 29 | 5 | 5 | 1011 || TID2008 | 25 | 17 | 5 | 1701 || TID2013 | 25 | 24 | 5 | 3025 || CSIQ | 30 | 6 | 5 | 930 |The total amount of samples does not exceed 4,000 records for any of them. DatasetThe IQA benchmarks only contain a limited amount of records that might not be enough to train a CNN. However, for this guide purpose, we are going to use the [Live](https://live.ece.utexas.edu/research/quality/subjective.htm) dataset. It is comprised of 29 reference images, and 5 different distortions with 5 severity levels each.The first task is to download and prepare the dataset. I have created a couple of TensorFlow dataset buildersfor image quality assessment and published them in the [image-quality](https://github.com/ocampor/image-quality) package. The buildersare an interface defined by [tensorflow-datasets](https://www.tensorflow.org/datasets). *Note: This process might take several minutes because of the size of the dataset (700 megabytes).*
###Code
builder = imquality.datasets.LiveIQA()
builder.download_and_prepare()
###Output
_____no_output_____
###Markdown
After downloading and preparing the data, turn the builder into a dataset, and shuffle it. Note that the batch is equal to 1. The reason is that each image has a different shape. Increasing the batch TensorFlow will raise an error.
###Code
ds = builder.as_dataset(shuffle_files=True)['train']
ds = ds.shuffle(1024).batch(1)
###Output
_____no_output_____
###Markdown
The output is a generator; therefore, to access it using the bracket operator [ ] causes an error. There are two ways to access the images in the generator. The first way is to turn the generator into an iterator and extract a single sample at a time using the *next* function.
###Code
next(iter(ds)).keys()
###Output
_____no_output_____
###Markdown
As you can see, the output is a dictionary that contains the tensor representation for the distorted image, the reference image, and the subjective score (dmos).Another way is to extract samples from the generator by taking samples with a for loop:
###Code
for features in ds.take(2):
distorted_image = features['distorted_image']
reference_image = features['reference_image']
dmos = tf.round(features['dmos'][0], 2)
distortion = features['distortion'][0]
    print(f'The image has a dmos of {dmos}, distortion {distortion}'
          f' and shape {distorted_image.shape}')
show_images([reference_image, distorted_image])
###Output
_____no_output_____
###Markdown
Methodology Image NormalizationThe first step for DIQA is to pre-process the images. The image is converted into grayscale, and then a low-pass filter is applied. The low-pass filter is defined as:\begin{align*}\hat{I} = I_{gray} - I^{low}\end{align*}where the low-frequency image is the result of the following algorithm:1. Blur the grayscale image.2. Downscale it by a factor of 1 / 4.3. Upscale it back to the original size.The main reasons for this normalization are (1) the Human Visual System (HVS) is not sensitive to changes in the low-frequency band, and (2) image distortions barely affect the low-frequency component of images.
###Code
def image_preprocess(image: tf.Tensor) -> tf.Tensor:
image = tf.cast(image, tf.float32)
image = tf.image.rgb_to_grayscale(image)
image_low = gaussian_filter(image, 16, 7 / 6)
image_low = rescale(image_low, 1 / 4, method=tf.image.ResizeMethod.BICUBIC)
image_low = tf.image.resize(image_low, size=image_shape(image), method=tf.image.ResizeMethod.BICUBIC)
return image - tf.cast(image_low, image.dtype)
for features in ds.take(2):
distorted_image = features['distorted_image']
reference_image = features['reference_image']
I_d = image_preprocess(distorted_image)
I_d = tf.image.grayscale_to_rgb(I_d)
I_d = image_normalization(I_d, 0, 1)
show_images([reference_image, I_d])
###Output
_____no_output_____
###Markdown
**Fig 1.** On the left, the original image. On the right, the image after applying the low-pass filter. Objective Error MapFor the first model, objective errors are used as a proxy to take advantage of the effect of increasing data. The loss function is defined by the mean squared error between the predicted and ground-truth error maps.\begin{align*}\mathbf{e}_{gt} = err(\hat{I}_r, \hat{I}_d)\end{align*}and *err(·)* is an error function. For this implementation, the authors recommend using\begin{align*}\mathbf{e}_{gt} = | \hat{I}_r - \hat{I}_d | ^ p\end{align*}with *p=0.2*. The latter is to prevent that the values in the error map are small or close to zero.
###Code
def error_map(reference: tf.Tensor, distorted: tf.Tensor, p: float=0.2) -> tf.Tensor:
assert reference.dtype == tf.float32 and distorted.dtype == tf.float32, 'dtype must be tf.float32'
return tf.pow(tf.abs(reference - distorted), p)
for features in ds.take(3):
reference_image = features['reference_image']
I_r = image_preprocess(reference_image)
I_d = image_preprocess(features['distorted_image'])
e_gt = error_map(I_r, I_d, 0.2)
I_d = image_normalization(tf.image.grayscale_to_rgb(I_d), 0, 1)
e_gt = image_normalization(tf.image.grayscale_to_rgb(e_gt), 0, 1)
show_images([reference_image, I_d, e_gt])
###Output
_____no_output_____
###Markdown
**Fig 2.** On the left, the original image. In the middle, the pre-processed image, and finally, the image representation of the error map. Reliability MapAccording to the authors, the model is likely to fail to predict images with homogeneous regions. To prevent it, they propose a reliability function. The assumption is that blurry areas have lower reliability than textured ones. The reliability function is defined as\begin{align*}\mathbf{r} = \frac{2}{1 + exp(-\alpha|\hat{I}_d|)} - 1\end{align*}where α controls the saturation property of the reliability map. The positive part of a sigmoid is used to assign sufficiently large values to pixels with low intensity.
###Code
def reliability_map(distorted: tf.Tensor, alpha: float) -> tf.Tensor:
    assert distorted.dtype == tf.float32, 'The Tensor must be of dtype tf.float32'
return 2 / (1 + tf.exp(- alpha * tf.abs(distorted))) - 1
###Output
_____no_output_____
###Markdown
The previous definition might directly affect the predicted score. Therefore, the average reliability map is used instead.\begin{align*}\mathbf{\hat{r}} = \frac{1}{\frac{1}{H_rW_r}\sum_{(i,j)}\mathbf{r}(i,j)}\mathbf{r}\end{align*}For the Tensorflow function, we just calculate the reliability map and divide it by its mean.
###Code
def average_reliability_map(distorted: tf.Tensor, alpha: float) -> tf.Tensor:
r = reliability_map(distorted, alpha)
return r / tf.reduce_mean(r)
for features in ds.take(2):
reference_image = features['reference_image']
I_d = image_preprocess(features['distorted_image'])
r = average_reliability_map(I_d, 1)
r = image_normalization(tf.image.grayscale_to_rgb(r), 0, 1)
show_images([reference_image, r], cmap='gray')
###Output
_____no_output_____
###Markdown
**Fig 3.** On the left, the original image, and on the right, its average reliability map. Loss functionThe loss function is defined as the mean square error of the product between the reliability map and the objective error map. The error is the difference between the predicted error map and the ground-truth error map.\begin{align*}\mathcal{L}_1(\hat{I}_d; \theta_f, \theta_g) = ||g(f(\hat{I}_d, \theta_f), \theta_g) - \mathbf{e}_{gt}) \odot \mathbf{\hat{r}}||^2_2\end{align*}The loss function requires to multiply the error by the reliability map; therefore, we cannot use the default loss implementation *tf.loss.MeanSquareError*.
###Code
def loss(model, x, y_true, r):
y_pred = model(x)
return tf.reduce_mean(tf.square((y_true - y_pred) * r))
###Output
_____no_output_____
###Markdown
After creating the custom loss, we need to tell TensorFlow how to differentiate it. The good thing is that we can take advantage of [automatic differentiation](https://www.tensorflow.org/tutorials/customization/autodiff) using *tf.GradientTape*.
###Code
def gradient(model, x, y_true, r):
with tf.GradientTape() as tape:
loss_value = loss(model, x, y_true, r)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
OptimizerThe authors suggested using a Nadam optimizer with a learning rate of *2e-4*.
###Code
optimizer = tf.optimizers.Nadam(learning_rate=2 * 10 ** -4)
###Output
_____no_output_____
###Markdown
Training Objective Error ModelFor the training phase, it is convenient to utilize the *tf.data* input pipelines to produce a much cleaner and readable code. The only requirement is to create the function to apply to the input.
###Code
def calculate_error_map(features):
I_d = image_preprocess(features['distorted_image'])
I_r = image_preprocess(features['reference_image'])
r = rescale(average_reliability_map(I_d, 0.2), 1 / 4)
e_gt = rescale(error_map(I_r, I_d, 0.2), 1 / 4)
return (I_d, e_gt, r)
###Output
_____no_output_____
###Markdown
Then, map the *tf.data.Dataset* to the *calculate_error_map* function.
###Code
train = ds.map(calculate_error_map)
###Output
_____no_output_____
###Markdown
Applying the transformation is executed in almost no time. The reason is that the processor is not performing any operation to the data yet, it happens on demand. This concept is commonly called [lazy-evaluation](https://wiki.python.org/moin/Generators).So far, the following components are implemented:1. The generator that pre-processes the input and calculates the target.2. The loss and gradient functions required for the custom training loop.3. The optimizer function.The only missing bits are the models' definition.  **Fig 4.** Architecture of the objective error model and subjective score model. In the previous image, it is depicted how:- The pre-processed image gets into the convolutional neural network (CNN). - It is transformed by 8 convolutions with the Relu activation function and "same" padding. This is defined as f(·).- The output of f(·) is processed by the last convolution with a linear activation function. This is defined as g(·).
###Code
input = tf.keras.Input(shape=(None, None, 1), batch_size=1, name='original_image')
f = Conv2D(48, (3, 3), name='Conv1', activation='relu', padding='same')(input)
f = Conv2D(48, (3, 3), name='Conv2', activation='relu', padding='same', strides=(2, 2))(f)
f = Conv2D(64, (3, 3), name='Conv3', activation='relu', padding='same')(f)
f = Conv2D(64, (3, 3), name='Conv4', activation='relu', padding='same', strides=(2, 2))(f)
f = Conv2D(64, (3, 3), name='Conv5', activation='relu', padding='same')(f)
f = Conv2D(64, (3, 3), name='Conv6', activation='relu', padding='same')(f)
f = Conv2D(128, (3, 3), name='Conv7', activation='relu', padding='same')(f)
f = Conv2D(128, (3, 3), name='Conv8', activation='relu', padding='same')(f)
g = Conv2D(1, (1, 1), name='Conv9', padding='same', activation='linear')(f)
objective_error_map = tf.keras.Model(input, g, name='objective_error_map')
objective_error_map.summary()
###Output
_____no_output_____
###Markdown
For the custom training loop, it is necessary to:1. Define a metric to measure the performance of the model.2. Calculate the loss and the gradients.3. Use the optimizer to update the weights.4. Print the accuracy.
###Code
for epoch in range(1):
epoch_accuracy = tf.keras.metrics.MeanSquaredError()
step = 0
for I_d, e_gt, r in train:
loss_value, gradients = gradient(objective_error_map, I_d, e_gt, r)
optimizer.apply_gradients(zip(gradients, objective_error_map.trainable_weights))
epoch_accuracy(e_gt, objective_error_map(I_d))
if step % 100 == 0:
print('step %s: mean loss = %s' % (step, epoch_accuracy.result()))
step += 1
###Output
_____no_output_____
###Markdown
*Note: It would be a good idea to use the Spearman’s rank-order correlation coefficient (SRCC) or Pearson’s linear correlation coefficient (PLCC) as accuracy metrics.* Subjective Score ModelTo create the subjective score model, let's use the output of f(·) to train a regressor.
###Code
v = GlobalAveragePooling2D(data_format='channels_last')(f)
h = Dense(128, activation='relu')(v)
h = Dense(1)(h)
subjective_error = tf.keras.Model(input, h, name='subjective_error')
subjective_error.compile(
optimizer=optimizer,
loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanSquaredError()])
subjective_error.summary()
###Output
_____no_output_____
###Markdown
Training a model with the fit method of *tf.keras.Model* expects a dataset that returns two arguments. The first one is the input, and the second one is the target.
###Code
def calculate_subjective_score(features):
I_d = image_preprocess(features['distorted_image'])
mos = features['dmos']
return (I_d, mos)
train = ds.map(calculate_subjective_score)
###Output
_____no_output_____
###Markdown
Then, *fit* the subjective score model.
###Code
history = subjective_error.fit(train, epochs=1)
###Output
_____no_output_____
###Markdown
PredictionPredicting with the already trained model is simple. Just use the *predict* method in the model.
###Code
sample = next(iter(ds))
I_d = image_preprocess(sample['distorted_image'])
target = sample['dmos'][0]
prediction = subjective_error.predict(I_d)[0][0]
print(f'the predicted value is: {prediction:.4f} and target is: {target:.4f}')
###Output
_____no_output_____
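###Markdown
As suggested in the note above, rank-based correlation metrics are more informative than MSE for image quality assessment. The cell below is a rough evaluation sketch (assuming `ds`, `image_preprocess` and the trained `subjective_error` model are still in scope): it scores a subsample of the dataset and reports SRCC and PLCC. A proper evaluation would use a held-out test split rather than training data.
###Code
import scipy.stats

targets, predictions = [], []
for features in ds.take(100):  # subsample for speed; iterate the full split for a real evaluation
    I_d = image_preprocess(features['distorted_image'])
    targets.append(float(features['dmos'][0]))
    predictions.append(float(subjective_error.predict(I_d)[0][0]))

srcc = scipy.stats.spearmanr(targets, predictions).correlation
plcc = scipy.stats.pearsonr(targets, predictions)[0]
print(f'SRCC: {srcc:.4f}  PLCC: {plcc:.4f}')
###Output
_____no_output_____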
###Markdown
IntroductionIn this tutorial, we will implement the Deep CNN-Based Blind Image Quality Predictor (DIQA) methodology proposed by Jongio Kim, Anh-Duc Nguyen, and Sanghoon Lee [1]. Also, I will go through the following TensorFlow 2.0 concepts:- Download and prepare a dataset using a *tf.data.Dataset builder*.- Define a TensorFlow input pipeline to pre-process the dataset records using the *tf.data* API.- Create the CNN model using the *tf.keras* functional API.- Define a custom training loop for the objective error map model.- Train the objective error map and subjective score model.- Use the trained subjective score model to make predictions.*Note: Some of the functions are implemented in [utils.py](https://github.com/ocampor/image-quality/blob/master/notebooks/utils.py) as they are out of the guide's scope.* What is DIQA?DIQA is an original proposal that focuses on solving some of the most concerning challenges of applying deep learning to image quality assessment (IQA). The advantages against other methodologies are:- The model is not limited to work exclusively with Natural Scene Statistics (NSS) images [1].- Prevents overfitting by splitting the training into two phases (1) feature learning and (2) mapping learned features to subjective scores. ProblemThe cost of generating datasets for IQA is high since it requires expert supervision. Therefore, the fundamental IQA benchmarks are comprised of solely a few thousands of records. The latter complicates the creation of deep learning models because they require large amounts of training samples to generalize.As an example, let's consider the most frequently used datasets to train and evaluate IQA methods [Live](https://live.ece.utexas.edu/research/quality/subjective.htm), [TID2008](http://www.ponomarenko.info/tid2008.htm), [TID2013](http://www.ponomarenko.info/tid2013.htm), [CSIQ](http://vision.eng.shizuoka.ac.jp/mod/page/view.php?id=23). An overall summary of each dataset is contained in the next table:| Dataset | References | Distortions | Severity | Total Samples ||---------|------------|-------------|----------|---------------|| LiveIQA | 29 | 5 | 5 | 1011 || TID2008 | 25 | 17 | 5 | 1701 || TID2013 | 25 | 24 | 5 | 3025 || CSIQ | 30 | 6 | 5 | 930 |The total amount of samples does not exceed 4,000 records for any of them. DatasetThe IQA benchmarks only contain a limited amount of records that might not be enough to train a CNN. However, for this guide purpose, we are going to use the [Live](https://live.ece.utexas.edu/research/quality/subjective.htm) dataset. It is comprised of 29 reference images, and 5 different distortions with 5 severity levels each.The first task is to download and prepare the dataset. I have created a couple of TensorFlow dataset buildersfor image quality assessment and published them in the [image-quality](https://github.com/ocampor/image-quality) package. The buildersare an interface defined by [tensorflow-datasets](https://www.tensorflow.org/datasets). *Note: This process might take several minutes because of the size of the dataset (700 megabytes).*
###Code
#dl_config = tfds.download.DownloadConfig(register_checksums=True)
builder = imquality.datasets.Tid2013()
builder.download_and_prepare()
###Output
_____no_output_____
###Markdown
After downloading and preparing the data, turn the builder into a dataset, and shuffle it. Note that the batch is equal to 1. The reason is that each image has a different shape. Increasing the batch TensorFlow will raise an error.
###Code
ds = builder.as_dataset(shuffle_files=True)['train']
ds = ds.shuffle(1024).batch(1)
###Output
_____no_output_____
###Markdown
The output is a generator; therefore, to access it using the bracket operator [ ] causes an error. There are two ways to access the images in the generator. The first way is to turn the generator into an iterator and extract a single sample at a time using the *next* function.
###Code
next(iter(ds)).keys()
###Output
_____no_output_____
###Markdown
As you can see, the output is a dictionary that contains the tensor representation for the distorted image, the reference image, and the subjective score (dmos).Another way is to extract samples from the generator by taking samples with a for loop:
###Code
for features in ds.take(2):
distorted_image = features['distorted_image']
reference_image = features['reference_image']
dmos = tf.round(features['dmos'][0], 2)
distortion = features['distortion'][0]
    print(f'The image has a dmos of {dmos}, distortion {distortion}'
          f' and shape {distorted_image.shape}')
show_images([reference_image, distorted_image])
###Output
_____no_output_____
###Markdown
Methodology Image NormalizationThe first step for DIQA is to pre-process the images. The image is converted into grayscale, and then a low-pass filter is applied. The low-pass filter is defined as:\begin{align*}\hat{I} = I_{gray} - I^{low}\end{align*}where the low-frequency image is the result of the following algorithm:1. Blur the grayscale image.2. Downscale it by a factor of 1 / 4.3. Upscale it back to the original size.The main reasons for this normalization are (1) the Human Visual System (HVS) is not sensitive to changes in the low-frequency band, and (2) image distortions barely affect the low-frequency component of images.
###Code
def image_preprocess(image: tf.Tensor) -> tf.Tensor:
image = tf.cast(image, tf.float32)
image = tf.image.rgb_to_grayscale(image)
image_low = gaussian_filter(image, 16, 7 / 6)
image_low = rescale(image_low, 1 / 4, method=tf.image.ResizeMethod.BICUBIC)
image_low = tf.image.resize(image_low, size=image_shape(image), method=tf.image.ResizeMethod.BICUBIC)
return image - tf.cast(image_low, image.dtype)
for features in ds.take(2):
distorted_image = features['distorted_image']
reference_image = features['reference_image']
I_d = image_preprocess(distorted_image)
I_d = tf.image.grayscale_to_rgb(I_d)
I_d = image_normalization(I_d, 0, 1)
show_images([reference_image, I_d])
###Output
_____no_output_____
###Markdown
**Fig 1.** On the left, the original image. On the right, the image after applying the low-pass filter. Objective Error MapFor the first model, objective errors are used as a proxy to take advantage of the effect of increasing data. The loss function is defined by the mean squared error between the predicted and ground-truth error maps.\begin{align*}\mathbf{e}_{gt} = err(\hat{I}_r, \hat{I}_d)\end{align*}and *err(·)* is an error function. For this implementation, the authors recommend using\begin{align*}\mathbf{e}_{gt} = | \hat{I}_r - \hat{I}_d | ^ p\end{align*}with *p=0.2*. The latter is to prevent that the values in the error map are small or close to zero.
###Code
def error_map(reference: tf.Tensor, distorted: tf.Tensor, p: float=0.2) -> tf.Tensor:
assert reference.dtype == tf.float32 and distorted.dtype == tf.float32, 'dtype must be tf.float32'
return tf.pow(tf.abs(reference - distorted), p)
for features in ds.take(3):
reference_image = features['reference_image']
I_r = image_preprocess(reference_image)
I_d = image_preprocess(features['distorted_image'])
e_gt = error_map(I_r, I_d, 0.2)
I_d = image_normalization(tf.image.grayscale_to_rgb(I_d), 0, 1)
e_gt = image_normalization(tf.image.grayscale_to_rgb(e_gt), 0, 1)
show_images([reference_image, I_d, e_gt])
###Output
_____no_output_____
###Markdown
**Fig 2.** On the left, the original image. In the middle, the pre-processed image, and finally, the image representation of the error map. Reliability MapAccording to the authors, the model is likely to fail to predict images with homogeneous regions. To prevent it, they propose a reliability function. The assumption is that blurry areas have lower reliability than textured ones. The reliability function is defined as\begin{align*}\mathbf{r} = \frac{2}{1 + exp(-\alpha|\hat{I}_d|)} - 1\end{align*}where α controls the saturation property of the reliability map. The positive part of a sigmoid is used to assign sufficiently large values to pixels with low intensity.
###Code
def reliability_map(distorted: tf.Tensor, alpha: float) -> tf.Tensor:
    assert distorted.dtype == tf.float32, 'The Tensor must be of dtype tf.float32'
return 2 / (1 + tf.exp(- alpha * tf.abs(distorted))) - 1
###Output
_____no_output_____
###Markdown
The previous definition might directly affect the predicted score. Therefore, the average reliability map is used instead.\begin{align*}\mathbf{\hat{r}} = \frac{1}{\frac{1}{H_rW_r}\sum_{(i,j)}\mathbf{r}(i,j)}\mathbf{r}\end{align*}For the Tensorflow function, we just calculate the reliability map and divide it by its mean.
###Code
def average_reliability_map(distorted: tf.Tensor, alpha: float) -> tf.Tensor:
r = reliability_map(distorted, alpha)
return r / tf.reduce_mean(r)
for features in ds.take(2):
reference_image = features['reference_image']
I_d = image_preprocess(features['distorted_image'])
r = average_reliability_map(I_d, 1)
r = image_normalization(tf.image.grayscale_to_rgb(r), 0, 1)
show_images([reference_image, r], cmap='gray')
###Output
_____no_output_____
###Markdown
**Fig 3.** On the left, the original image, and on the right, its average reliability map. Loss functionThe loss function is defined as the mean square error of the product between the reliability map and the objective error map. The error is the difference between the predicted error map and the ground-truth error map.\begin{align*}\mathcal{L}_1(\hat{I}_d; \theta_f, \theta_g) = ||g(f(\hat{I}_d, \theta_f), \theta_g) - \mathbf{e}_{gt}) \odot \mathbf{\hat{r}}||^2_2\end{align*}The loss function requires to multiply the error by the reliability map; therefore, we cannot use the default loss implementation *tf.loss.MeanSquareError*.
###Code
def loss(model, x, y_true, r):
y_pred = model(x)
return tf.reduce_mean(tf.square((y_true - y_pred) * r))
###Output
_____no_output_____
###Markdown
After creating the custom loss, we need to tell TensorFlow how to differentiate it. The good thing is that we can take advantage of [automatic differentiation](https://www.tensorflow.org/tutorials/customization/autodiff) using *tf.GradientTape*.
###Code
def gradient(model, x, y_true, r):
with tf.GradientTape() as tape:
loss_value = loss(model, x, y_true, r)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
OptimizerThe authors suggested using a Nadam optimizer with a learning rate of *2e-4*.
###Code
optimizer = tf.optimizers.Nadam(learning_rate=2 * 10 ** -4)
###Output
_____no_output_____
###Markdown
Training Objective Error ModelFor the training phase, it is convenient to utilize the *tf.data* input pipelines to produce a much cleaner and readable code. The only requirement is to create the function to apply to the input.
###Code
def calculate_error_map(features):
I_d = image_preprocess(features['distorted_image'])
I_r = image_preprocess(features['reference_image'])
r = rescale(average_reliability_map(I_d, 0.2), 1 / 4)
e_gt = rescale(error_map(I_r, I_d, 0.2), 1 / 4)
return (I_d, e_gt, r)
###Output
_____no_output_____
###Markdown
Then, map the *tf.data.Dataset* to the *calculate_error_map* function.
###Code
train = ds.map(calculate_error_map)
###Output
_____no_output_____
###Markdown
Applying the transformation is executed in almost no time. The reason is that the processor is not performing any operation to the data yet, it happens on demand. This concept is commonly called [lazy-evaluation](https://wiki.python.org/moin/Generators).So far, the following components are implemented:1. The generator that pre-processes the input and calculates the target.2. The loss and gradient functions required for the custom training loop.3. The optimizer function.The only missing bits are the models' definition.  **Fig 4.** Architecture of the objective error model and subjective score model. In the previous image, it is depicted how:- The pre-processed image gets into the convolutional neural network (CNN). - It is transformed by 8 convolutions with the Relu activation function and "same" padding. This is defined as f(·).- The output of f(·) is processed by the last convolution with a linear activation function. This is defined as g(·).
###Code
input = tf.keras.Input(shape=(None, None, 1), batch_size=1, name='original_image')
f = Conv2D(48, (3, 3), name='Conv1', activation='relu', padding='same')(input)
f = Conv2D(48, (3, 3), name='Conv2', activation='relu', padding='same', strides=(2, 2))(f)
f = Conv2D(64, (3, 3), name='Conv3', activation='relu', padding='same')(f)
f = Conv2D(64, (3, 3), name='Conv4', activation='relu', padding='same', strides=(2, 2))(f)
f = Conv2D(64, (3, 3), name='Conv5', activation='relu', padding='same')(f)
f = Conv2D(64, (3, 3), name='Conv6', activation='relu', padding='same')(f)
f = Conv2D(128, (3, 3), name='Conv7', activation='relu', padding='same')(f)
f = Conv2D(128, (3, 3), name='Conv8', activation='relu', padding='same')(f)
g = Conv2D(1, (1, 1), name='Conv9', padding='same', activation='linear')(f)
objective_error_map = tf.keras.Model(input, g, name='objective_error_map')
objective_error_map.summary()
###Output
_____no_output_____
###Markdown
For the custom training loop, it is necessary to:1. Define a metric to measure the performance of the model.2. Calculate the loss and the gradients.3. Use the optimizer to update the weights.4. Print the accuracy.
###Code
for epoch in range(1):
epoch_accuracy = tf.keras.metrics.MeanSquaredError()
step = 0
for I_d, e_gt, r in train:
loss_value, gradients = gradient(objective_error_map, I_d, e_gt, r)
optimizer.apply_gradients(zip(gradients, objective_error_map.trainable_weights))
epoch_accuracy(e_gt, objective_error_map(I_d))
if step % 100 == 0:
print('step %s: mean loss = %s' % (step, epoch_accuracy.result()))
step += 1
###Output
_____no_output_____
###Markdown
*Note: It would be a good idea to use the Spearman’s rank-order correlation coefficient (SRCC) or Pearson’s linear correlation coefficient (PLCC) as accuracy metrics.* Subjective Score ModelTo create the subjective score model, let's use the output of f(·) to train a regressor.
###Code
v = GlobalAveragePooling2D(data_format='channels_last')(f)
h = Dense(128, activation='relu')(v)
h = Dense(1)(h)
subjective_error = tf.keras.Model(input, h, name='subjective_error')
subjective_error.compile(
optimizer=optimizer,
loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanSquaredError()])
subjective_error.summary()
###Output
_____no_output_____
###Markdown
Training a model with the fit method of *tf.keras.Model* expects a dataset that returns two arguments. The first one is the input, and the second one is the target.
###Code
def calculate_subjective_score(features):
I_d = image_preprocess(features['distorted_image'])
mos = features['dmos']
return (I_d, mos)
train = ds.map(calculate_subjective_score)
###Output
_____no_output_____
###Markdown
Then, *fit* the subjective score model.
###Code
history = subjective_error.fit(train, epochs=1)
###Output
_____no_output_____
###Markdown
PredictionPredicting with the already trained model is simple. Just use the *predict* method in the model.
###Code
sample = next(iter(ds))
I_d = image_preprocess(sample['distorted_image'])
target = sample['dmos'][0]
prediction = subjective_error.predict(I_d)[0][0]
print(f'the predicted value is: {prediction:.4f} and target is: {target:.4f}')
###Output
_____no_output_____
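###Markdown
The same `predict` call can be applied to any image on disk, not only to dataset samples. The cell below is an illustrative sketch (the file name is a placeholder and a JPEG/PNG input is assumed): the image is decoded, given a batch dimension, pre-processed with `image_preprocess`, and scored by the trained model.
###Code
# Decode a single RGB image from disk (placeholder file name).
image = tf.io.decode_image(tf.io.read_file('my_image.jpg'), channels=3)
image = tf.expand_dims(image, axis=0)  # add the batch dimension expected by the model
I_d = image_preprocess(image)
score = subjective_error.predict(I_d)[0][0]
print(f'predicted quality score: {score:.4f}')
###Output
_____no_output_____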
###Markdown
IntroductionIn this tutorial, we will implement the Deep CNN-Based Blind Image Quality Predictor (DIQA) methodology proposed by Jongio Kim, Anh-Duc Nguyen, and Sanghoon Lee [1]. Also, I will go through the following TensorFlow 2.0 concepts:- Download and prepare a dataset using a *tf.data.Dataset builder*.- Define a TensorFlow input pipeline to pre-process the dataset records using the *tf.data* API.- Create the CNN model using the *tf.keras* functional API.- Define a custom training loop for the objective error map model.- Train the objective error map and subjective score model.- Use the trained subjective score model to make predictions.*Note: Some of the functions are implemented in [utils.py](https://github.com/ocampor/image-quality/blob/master/notebooks/utils.py) as they are out of the guide's scope.* What is DIQA?DIQA is an original proposal that focuses on solving some of the most concerning challenges of applying deep learning to image quality assessment (IQA). The advantages against other methodologies are:- The model is not limited to work exclusively with Natural Scene Statistics (NSS) images [1].- Prevents overfitting by splitting the training into two phases (1) feature learning and (2) mapping learned features to subjective scores. ProblemThe cost of generating datasets for IQA is high since it requires expert supervision. Therefore, the fundamental IQA benchmarks are comprised of solely a few thousands of records. The latter complicates the creation of deep learning models because they require large amounts of training samples to generalize.As an example, let's consider the most frequently used datasets to train and evaluate IQA methods [Live](https://live.ece.utexas.edu/research/quality/subjective.htm), [TID2008](http://www.ponomarenko.info/tid2008.htm), [TID2013](http://www.ponomarenko.info/tid2013.htm), [CSIQ](http://vision.eng.shizuoka.ac.jp/mod/page/view.php?id=23). An overall summary of each dataset is contained in the next table:| Dataset | References | Distortions | Severity | Total Samples ||---------|------------|-------------|----------|---------------|| LiveIQA | 29 | 5 | 5 | 1011 || TID2008 | 25 | 17 | 5 | 1701 || TID2013 | 25 | 24 | 5 | 3025 || CSIQ | 30 | 6 | 5 | 930 |The total amount of samples does not exceed 4,000 records for any of them. DatasetThe IQA benchmarks only contain a limited amount of records that might not be enough to train a CNN. However, for this guide purpose, we are going to use the [Live](https://live.ece.utexas.edu/research/quality/subjective.htm) dataset. It is comprised of 29 reference images, and 5 different distortions with 5 severity levels each.The first task is to download and prepare the dataset. I have created a couple of TensorFlow dataset buildersfor image quality assessment and published them in the [image-quality](https://github.com/ocampor/image-quality) package. The buildersare an interface defined by [tensorflow-datasets](https://www.tensorflow.org/datasets). *Note: This process might take several minutes because of the size of the dataset (700 megabytes).*
###Code
builder = imquality.datasets.LiveIQA()
builder.download_and_prepare()
###Output
_____no_output_____
###Markdown
After downloading and preparing the data, turn the builder into a dataset, and shuffle it. Note that the batch is equal to 1. The reason is that each image has a different shape. Increasing the batch TensorFlow will raise an error.
###Code
ds = builder.as_dataset(shuffle_files=True)['train']
ds = ds.shuffle(1024).batch(1)
###Output
_____no_output_____
###Markdown
The output is a generator; therefore, to access it using the bracket operator [ ] causes an error. There are two ways to access the images in the generator. The first way is to turn the generator into an iterator and extract a single sample at a time using the *next* function.
###Code
next(iter(ds)).keys()
###Output
_____no_output_____
###Markdown
As you can see, the output is a dictionary that contains the tensor representation for the distorted image, the reference image, and the subjective score (dmos).Another way is to extract samples from the generator by taking samples with a for loop:
###Code
for features in ds.take(2):
distorted_image = features['distorted_image']
reference_image = features['reference_image']
dmos = tf.round(features['dmos'][0], 2)
distortion = features['distortion'][0]
print(f'The distortion of the image is {dmos} with'
f' a distortion {distortion} and shape {distorted_image.shape}')
show_images([reference_image, distorted_image])
###Output
_____no_output_____
###Markdown
Methodology Image NormalizationThe first step for DIQA is to pre-process the images. The image is converted into grayscale, and then a low-pass filter is applied. The low-pass filter is defined as:\begin{align*}\hat{I} = I_{gray} - I^{low}\end{align*}where the low-frequency image is the result of the following algorithm:1. Blur the grayscale image.2. Downscale it by a factor of 1 / 4.3. Upscale it back to the original size.The main reasons for this normalization are (1) the Human Visual System (HVS) is not sensitive to changes in the low-frequency band, and (2) image distortions barely affect the low-frequency component of images.
###Code
def image_preprocess(image: tf.Tensor) -> tf.Tensor:
image = tf.cast(image, tf.float32)
image = tf.image.rgb_to_grayscale(image)
image_low = gaussian_filter(image, 16, 7 / 6)
image_low = rescale(image_low, 1 / 4, method=tf.image.ResizeMethod.BICUBIC)
image_low = tf.image.resize(image_low, size=image_shape(image), method=tf.image.ResizeMethod.BICUBIC)
return image - tf.cast(image_low, image.dtype)
for features in ds.take(2):
distorted_image = features['distorted_image']
reference_image = features['reference_image']
I_d = image_preprocess(distorted_image)
I_d = tf.image.grayscale_to_rgb(I_d)
I_d = image_normalization(I_d, 0, 1)
show_images([reference_image, I_d])
###Output
_____no_output_____
###Markdown
**Fig 1.** On the left, the original image. On the right, the image after applying the low-pass filter. Objective Error MapFor the first model, objective errors are used as a proxy to take advantage of the effect of increasing data. The loss function is defined by the mean squared error between the predicted and ground-truth error maps.\begin{align*}\mathbf{e}_{gt} = err(\hat{I}_r, \hat{I}_d)\end{align*}and *err(·)* is an error function. For this implementation, the authors recommend using\begin{align*}\mathbf{e}_{gt} = | \hat{I}_r - \hat{I}_d | ^ p\end{align*}with *p=0.2*. The latter is to prevent that the values in the error map are small or close to zero.
###Code
def error_map(reference: tf.Tensor, distorted: tf.Tensor, p: float=0.2) -> tf.Tensor:
assert reference.dtype == tf.float32 and distorted.dtype == tf.float32, 'dtype must be tf.float32'
return tf.pow(tf.abs(reference - distorted), p)
for features in ds.take(3):
reference_image = features['reference_image']
I_r = image_preprocess(reference_image)
I_d = image_preprocess(features['distorted_image'])
e_gt = error_map(I_r, I_d, 0.2)
I_d = image_normalization(tf.image.grayscale_to_rgb(I_d), 0, 1)
e_gt = image_normalization(tf.image.grayscale_to_rgb(e_gt), 0, 1)
show_images([reference_image, I_d, e_gt])
###Output
_____no_output_____
###Markdown
**Fig 2.** On the left, the original image. In the middle, the pre-processed image, and finally, the image representation of the error map. Reliability MapAccording to the authors, the model is likely to fail to predict images with homogeneous regions. To prevent it, they propose a reliability function. The assumption is that blurry areas have lower reliability than textured ones. The reliability function is defined as\begin{align*}\mathbf{r} = \frac{2}{1 + exp(-\alpha|\hat{I}_d|)} - 1\end{align*}where α controls the saturation property of the reliability map. The positive part of a sigmoid is used to assign sufficiently large values to pixels with low intensity.
###Code
def reliability_map(distorted: tf.Tensor, alpha: float) -> tf.Tensor:
    assert distorted.dtype == tf.float32, 'The Tensor must be of dtype tf.float32'
return 2 / (1 + tf.exp(- alpha * tf.abs(distorted))) - 1
###Output
_____no_output_____
###Markdown
The previous definition might directly affect the predicted score. Therefore, the average reliability map is used instead.\begin{align*}\mathbf{\hat{r}} = \frac{1}{\frac{1}{H_rW_r}\sum_{(i,j)}\mathbf{r}(i,j)}\mathbf{r}\end{align*}For the Tensorflow function, we just calculate the reliability map and divide it by its mean.
###Code
def average_reliability_map(distorted: tf.Tensor, alpha: float) -> tf.Tensor:
r = reliability_map(distorted, alpha)
return r / tf.reduce_mean(r)
for features in ds.take(2):
reference_image = features['reference_image']
I_d = image_preprocess(features['distorted_image'])
r = average_reliability_map(I_d, 1)
r = image_normalization(tf.image.grayscale_to_rgb(r), 0, 1)
show_images([reference_image, r], cmap='gray')
###Output
_____no_output_____
###Markdown
**Fig 3.** On the left, the original image, and on the right, its average reliability map. Loss functionThe loss function is defined as the mean square error of the product between the reliability map and the objective error map. The error is the difference between the predicted error map and the ground-truth error map.\begin{align*}\mathcal{L}_1(\hat{I}_d; \theta_f, \theta_g) = ||g(f(\hat{I}_d, \theta_f), \theta_g) - \mathbf{e}_{gt}) \odot \mathbf{\hat{r}}||^2_2\end{align*}The loss function requires to multiply the error by the reliability map; therefore, we cannot use the default loss implementation *tf.loss.MeanSquareError*.
###Code
def loss(model, x, y_true, r):
y_pred = model(x)
return tf.reduce_mean(tf.square((y_true - y_pred) * r))
###Output
_____no_output_____
###Markdown
After creating the custom loss, we need to tell TensorFlow how to differentiate it. The good thing is that we can take advantage of [automatic differentiation](https://www.tensorflow.org/tutorials/customization/autodiff) using *tf.GradientTape*.
###Code
def gradient(model, x, y_true, r):
with tf.GradientTape() as tape:
loss_value = loss(model, x, y_true, r)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
OptimizerThe authors suggested using a Nadam optimizer with a learning rate of *2e-4*.
###Code
optimizer = tf.optimizers.Nadam(learning_rate=2 * 10 ** -4)
###Output
_____no_output_____
###Markdown
Training Objective Error ModelFor the training phase, it is convenient to utilize the *tf.data* input pipelines to produce a much cleaner and readable code. The only requirement is to create the function to apply to the input.
###Code
def calculate_error_map(features):
I_d = image_preprocess(features['distorted_image'])
I_r = image_preprocess(features['reference_image'])
r = rescale(average_reliability_map(I_d, 0.2), 1 / 4)
e_gt = rescale(error_map(I_r, I_d, 0.2), 1 / 4)
return (I_d, e_gt, r)
###Output
_____no_output_____
###Markdown
Then, map the *tf.data.Dataset* to the *calculate_error_map* function.
###Code
train = ds.map(calculate_error_map)
###Output
_____no_output_____
###Markdown
Applying the transformation is executed in almost no time. The reason is that the processor is not performing any operation to the data yet, it happens on demand. This concept is commonly called [lazy-evaluation](https://wiki.python.org/moin/Generators).So far, the following components are implemented:1. The generator that pre-processes the input and calculates the target.2. The loss and gradient functions required for the custom training loop.3. The optimizer function.The only missing bits are the models' definition.  **Fig 4.** Architecture of the objective error model and subjective score model. In the previous image, it is depicted how:- The pre-processed image gets into the convolutional neural network (CNN). - It is transformed by 8 convolutions with the Relu activation function and "same" padding. This is defined as f(·).- The output of f(·) is processed by the last convolution with a linear activation function. This is defined as g(·).
###Code
input = tf.keras.Input(shape=(None, None, 1), batch_size=1, name='original_image')
f = Conv2D(48, (3, 3), name='Conv1', activation='relu', padding='same')(input)
f = Conv2D(48, (3, 3), name='Conv2', activation='relu', padding='same', strides=(2, 2))(f)
f = Conv2D(64, (3, 3), name='Conv3', activation='relu', padding='same')(f)
f = Conv2D(64, (3, 3), name='Conv4', activation='relu', padding='same', strides=(2, 2))(f)
f = Conv2D(64, (3, 3), name='Conv5', activation='relu', padding='same')(f)
f = Conv2D(64, (3, 3), name='Conv6', activation='relu', padding='same')(f)
f = Conv2D(128, (3, 3), name='Conv7', activation='relu', padding='same')(f)
f = Conv2D(128, (3, 3), name='Conv8', activation='relu', padding='same')(f)
g = Conv2D(1, (1, 1), name='Conv9', padding='same', activation='linear')(f)
objective_error_map = tf.keras.Model(input, g, name='objective_error_map')
objective_error_map.summary()
###Output
_____no_output_____
###Markdown
For the custom training loop, it is necessary to:1. Define a metric to measure the performance of the model.2. Calculate the loss and the gradients.3. Use the optimizer to update the weights.4. Print the accuracy.
###Code
for epoch in range(1):
epoch_accuracy = tf.keras.metrics.MeanSquaredError()
step = 0
for I_d, e_gt, r in train:
loss_value, gradients = gradient(objective_error_map, I_d, e_gt, r)
optimizer.apply_gradients(zip(gradients, objective_error_map.trainable_weights))
epoch_accuracy(e_gt, objective_error_map(I_d))
if step % 100 == 0:
print('step %s: mean loss = %s' % (step, epoch_accuracy.result()))
step += 1
###Output
_____no_output_____
###Markdown
*Note: It would be a good idea to use the Spearman’s rank-order correlation coefficient (SRCC) or Pearson’s linear correlation coefficient (PLCC) as accuracy metrics.* Subjective Score ModelTo create the subjective score model, let's use the output of f(·) to train a regressor.
###Code
v = GlobalAveragePooling2D(data_format='channels_last')(f)
h = Dense(128, activation='relu')(v)
h = Dense(1)(h)
subjective_error = tf.keras.Model(input, h, name='subjective_error')
subjective_error.compile(
optimizer=optimizer,
loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanSquaredError()])
subjective_error.summary()
###Output
_____no_output_____
###Markdown
Training a model with the *fit* method of *tf.keras.Model* expects a dataset that yields two elements: the first is the input, and the second is the target.
###Code
def calculate_subjective_score(features):
I_d = image_preprocess(features['distorted_image'])
mos = features['dmos']
return (I_d, mos)
train = ds.map(calculate_subjective_score)
###Output
_____no_output_____
###Markdown
Then, *fit* the subjective score model.
###Code
history = subjective_error.fit(train, epochs=1)
###Output
_____no_output_____
###Markdown
PredictionPredicting with the already trained model is simple. Just use the *predict* method in the model.
###Code
sample = next(iter(ds))
I_d = image_preprocess(sample['distorted_image'])
target = sample['dmos'][0]
prediction = subjective_error.predict(I_d)[0][0]
print(f'the predicted value is: {prediction:.4f} and target is: {target:.4f}')
###Output
_____no_output_____ |
ipynb/Holy-See.ipynb | ###Markdown
Holy See* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Holy-See.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Holy See", weeks=5);
overview("Holy See");
compare_plot("Holy See", normalise=True);
# load the data
cases, deaths = get_country_data("Holy See")
# get population of the region for future normalisation:
inhabitants = population("Holy See")
print(f'Population of "Holy See": {inhabitants} people')
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 1000 rows
pd.set_option("max_rows", 1000)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Holy-See.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Holy See* Homepage of project: https://oscovida.github.io* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Holy-See.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Holy See");
# load the data
cases, deaths, region_label = get_country_data("Holy See")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Holy-See.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____
###Markdown
Holy See* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Holy-See.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Holy See", weeks=5);
overview("Holy See");
compare_plot("Holy See", normalise=True);
# load the data
cases, deaths = get_country_data("Holy See")
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 500 rows
pd.set_option("max_rows", 500)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Holy-See.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____ |
h2Copy_of_LS_DS_222_assignment.ipynb | ###Markdown
Lambda School Data Science*Unit 2, Sprint 2, Module 2*--- Random Forests Assignment- [ ] Read [“Adopting a Hypothesis-Driven Workflow”](https://outline.com/5S5tsB), a blog post by a Lambda DS student about the Tanzania Waterpumps challenge.- [ ] Continue to participate in our Kaggle challenge.- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features.- [ ] Try Ordinal Encoding.- [ ] Try a Random Forest Classifier.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Do more exploratory data analysis, data cleaning, feature engineering, and feature selection.- [ ] Try other [categorical encodings](https://contrib.scikit-learn.org/categorical-encoding/).- [ ] Get and plot your feature importances.- [ ] Make visualizations and share on Slack. ReadingTop recommendations in _**bold italic:**_ Decision Trees- A Visual Introduction to Machine Learning, [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/), and _**[Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)**_- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.htmladvantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) Random Forests- [_An Introduction to Statistical Learning_](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 8: Tree-Based Methods- [Coloring with Random Forests](http://structuringtheunstructured.blogspot.com/2017/11/coloring-with-random-forests.html)- _**[Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/)**_ Categorical encoding for trees- [Are categorical variables getting lost in your random forests?](https://roamanalytics.com/2016/10/28/are-categorical-variables-getting-lost-in-your-random-forests/)- [Beyond One-Hot: An Exploration of Categorical Variables](http://www.willmcginnis.com/2015/11/29/beyond-one-hot-an-exploration-of-categorical-variables/)- _**[Categorical Features and Encoding in Decision Trees](https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931)**_- _**[Coursera — How to Win a Data Science Competition: Learn from Top Kagglers — Concept of mean encoding](https://www.coursera.org/lecture/competitive-data-science/concept-of-mean-encoding-b5Gxv)**_- [Mean (likelihood) encodings: a comprehensive study](https://www.kaggle.com/vprokopev/mean-likelihood-encodings-a-comprehensive-study)- [The Mechanics of Machine Learning, Chapter 6: Categorically Speaking](https://mlbook.explained.ai/catvars.html) Imposter Syndrome- [Effort Shock and Reward Shock (How The Karate Kid Ruined The Modern World)](http://www.tempobook.com/2014/07/09/effort-shock-and-reward-shock/)- [How to manage impostor syndrome in data 
science](https://towardsdatascience.com/how-to-manage-impostor-syndrome-in-data-science-ad814809f068)- ["I am not a real data scientist"](https://brohrer.github.io/imposter_syndrome.html)- _**[Imposter Syndrome in Data Science](https://caitlinhudon.com/2018/01/19/imposter-syndrome-in-data-science/)**_ More Categorical Encodings**1.** The article **[Categorical Features and Encoding in Decision Trees](https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931)** mentions 4 encodings:- **"Categorical Encoding":** This means using the raw categorical values as-is, not encoded. Scikit-learn doesn't support this, but some tree algorithm implementations do. For example, [Catboost](https://catboost.ai/), or R's [rpart](https://cran.r-project.org/web/packages/rpart/index.html) package.- **Numeric Encoding:** Synonymous with Label Encoding, or "Ordinal" Encoding with random order. We can use [category_encoders.OrdinalEncoder](https://contrib.scikit-learn.org/categorical-encoding/ordinal.html).- **One-Hot Encoding:** We can use [category_encoders.OneHotEncoder](http://contrib.scikit-learn.org/categorical-encoding/onehot.html).- **Binary Encoding:** We can use [category_encoders.BinaryEncoder](http://contrib.scikit-learn.org/categorical-encoding/binary.html).**2.** The short video **[Coursera — How to Win a Data Science Competition: Learn from Top Kagglers — Concept of mean encoding](https://www.coursera.org/lecture/competitive-data-science/concept-of-mean-encoding-b5Gxv)** introduces an interesting idea: use both X _and_ y to encode categoricals.Category Encoders has multiple implementations of this general concept:- [CatBoost Encoder](http://contrib.scikit-learn.org/categorical-encoding/catboost.html)- [James-Stein Encoder](http://contrib.scikit-learn.org/categorical-encoding/jamesstein.html)- [Leave One Out](http://contrib.scikit-learn.org/categorical-encoding/leaveoneout.html)- [M-estimate](http://contrib.scikit-learn.org/categorical-encoding/mestimate.html)- [Target Encoder](http://contrib.scikit-learn.org/categorical-encoding/targetencoder.html)- [Weight of Evidence](http://contrib.scikit-learn.org/categorical-encoding/woe.html)Category Encoder's mean encoding implementations work for regression problems or binary classification problems. For multi-class classification problems, you will need to temporarily reformulate it as binary classification. For example:```pythonencoder = ce.TargetEncoder(min_samples_leaf=..., smoothing=...) Both parameters > 1 to avoid overfittingX_train_encoded = encoder.fit_transform(X_train, y_train=='functional')X_val_encoded = encoder.transform(X_train, y_val=='functional')```For this reason, mean encoding won't work well within pipelines for multi-class classification problems.**3.** The **[dirty_cat](https://dirty-cat.github.io/stable/)** library has a Target Encoder implementation that works with multi-class classification.```python dirty_cat.TargetEncoder(clf_type='multiclass-clf')```It also implements an interesting idea called ["Similarity Encoder" for dirty categories](https://www.slideshare.net/GaelVaroquaux/machine-learning-on-non-curated-data-154905090).However, it seems like dirty_cat doesn't handle missing values or unknown categories as well as category_encoders does. And you may need to use it with one column at a time, instead of with your whole dataframe.**4. 
[Embeddings](https://www.kaggle.com/learn/embeddings)** can work well with sparse / high cardinality categoricals._**I hope it’s not too frustrating or confusing that there’s not one “canonical” way to encode categoricals. It’s an active area of research and experimentation! Maybe you can make your own contributions!**_ SetupYou can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab (run the code cell below).
###Code
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
import pandas as pd
from sklearn.model_selection import train_test_split
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
test0 = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
train.shape, test0.shape
# Split train into train & val
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
stratify=train['status_group'], random_state=42)
train.shape, val.shape
###Output
_____no_output_____
###Markdown
Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features.Try Ordinal Encoding.Try a Random Forest Classifier.
###Code
import numpy as np
def wrangle(X):
"""Wrangle train, validate, and test sets in the same way"""
# Prevent SettingWithCopyWarning
X = X.copy()
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these values like zero.
X['latitude'] = X['latitude'].replace(-2e-08, 0)
# When columns have zeros and shouldn't, they are like null values.
# So we will replace the zeros with nulls, and impute missing values later.
# Also create a "missing indicator" column, because the fact that
# values are missing may be a predictive signal.
cols_with_zeros = ['longitude', 'latitude', 'construction_year',
'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
X[col+'_MISSING'] = X[col].isnull()
# Drop duplicate columns
duplicates = ['quantity_group', 'payment_type']
X = X.drop(columns=duplicates)
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
X['years_MISSING'] = X['years'].isnull()
# return the wrangled dataframe
return X
train = wrangle(train)
val = wrangle(val)
test = wrangle(test0)
# The status_group column is the target
target = 'status_group'
# Get a dataframe with all train columns except the target
train_features = train.drop(columns=[target])
# Get a list of the numeric features
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
# Get a series with the cardinality of the nonnumeric features
cardinality = train_features.select_dtypes(exclude='number').nunique()
# Get a list of all categorical features with cardinality <= 50
categorical_features = cardinality[cardinality <= 50].index.tolist()
# Combine the lists
features = numeric_features + categorical_features
# Arrange data into X features matrix and y target vector
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
train.describe(exclude='number').T.unique.sum()
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
%%time
# This pipeline is identical to the example cell above,
# except we're replacing one-hot encoder with "ordinal" encoder
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(random_state=0, n_jobs=-1)
)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
# Predict on test
y_pred = pipeline.predict(X_test)
submission = test0[['id']].copy()
submission['status_group'] = y_pred
submission.to_csv('waterpumps-submission2.csv', index=False)
!head 'waterpumps-submission2.csv'
###Output
id,status_group
50785,functional
51630,functional
17168,functional
45559,non functional
49871,functional
52449,functional
24806,functional
28965,non functional
36301,non functional
###Markdown
Do more exploratory data analysis, data cleaning, feature engineering, and feature selection.Try other categorical encodings.Get and plot your feature importances.Make visualizations and share on Slack. Comparing ordinal encoding across logistic regression, decision tree, and random forest classifiers for an analysis comparison.
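The feature-importance step mentioned above is not shown in the cells that follow; a minimal sketch of what it could look like with the random forest pipeline fitted earlier (assuming the `pipeline` variable from the cell above is still in scope):
```python
import matplotlib.pyplot as plt
import pandas as pd

# Ordinal encoding preserves the original column order, so X_train.columns
# lines up with the forest's feature_importances_ array.
rf = pipeline.named_steps['randomforestclassifier']
importances = pd.Series(rf.feature_importances_, index=X_train.columns)
importances.sort_values().plot.barh(figsize=(8, 10))
plt.title('Random forest feature importances')
plt.show()
```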
###Code
X_train.columns.to_list()
feature = 'gps_height'
X_train[feature].value_counts()
from sklearn.linear_model import LogisticRegressionCV
lr = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
StandardScaler(),
LogisticRegressionCV(multi_class='auto', solver='lbfgs', cv=5, n_jobs=-1)
)
lr.fit(X_train[[feature]], y_train)
score = lr.score(X_val[[feature]], y_val)
print('Logistic Regression, Validation Accuracy', score)
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
dt = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(),
DecisionTreeClassifier(random_state=42)
)
dt.fit(X_train[[feature]], y_train)
score = dt.score(X_val[[feature]], y_val)
print('Decision Tree, Validation Accuracy', score)
import graphviz
from sklearn.tree import export_graphviz
model = dt.named_steps['decisiontreeclassifier']
encoder = dt.named_steps['ordinalencoder']
encoded_columns = encoder.transform(X_val[[feature]]).columns
dot_data = export_graphviz(model,
out_file=None,
max_depth=5,
feature_names=encoded_columns,
class_names=model.classes_,
impurity=False,
filled=True,
proportion=True,
rounded=True)
display(graphviz.Source(dot_data))
###Output
_____no_output_____
###Markdown
Helper function to visualize predicted probabilities
###Code
import itertools
import seaborn as sns
def pred_heatmap(model, X, features, class_index=-1, title='', num=100):
"""
Visualize predicted probabilities, for classifier fit on 2 numeric features
Parameters
----------
model : scikit-learn classifier, already fit
X : pandas dataframe, which was used to fit model
features : list of strings, column names of the 2 numeric features
class_index : integer, index of class label
title : string, title of plot
num : int, number of grid points for each feature
Returns
-------
y_pred_proba : numpy array, predicted probabilities for class_index
"""
feature1, feature2 = features
min1, max1 = X[feature1].min(), X[feature1].max()
min2, max2 = X[feature2].min(), X[feature2].max()
x1 = np.linspace(min1, max1, num)
x2 = np.linspace(max2, min2, num)
combos = list(itertools.product(x1, x2))
y_pred_proba = model.predict_proba(combos)[:, class_index]
pred_grid = y_pred_proba.reshape(num, num).T
table = pd.DataFrame(pred_grid, columns=x1, index=x2)
sns.heatmap(table, vmin=0, vmax=1)
plt.xticks([])
plt.yticks([])
plt.xlabel(feature1)
plt.ylabel(feature2)
plt.title(title)
plt.show()
return y_pred_proba
%matplotlib inline
import matplotlib.pyplot as plt
from ipywidgets import interact
# Instructions
# 1. Choose two features
# 2. Run this code cell
# 3. Interact with the widget sliders
feature1 = 'longitude'
feature2 = 'quantity'
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
def get_X_y(df, feature1, feature2, target):
features = [feature1, feature2]
X = df[features]
y = df[target]
X = X.fillna(X.median())
X = ce.OrdinalEncoder().fit_transform(X)
return X, y
def compare_models(max_depth=1, n_estimators=1):
models = [DecisionTreeClassifier(max_depth=max_depth),
RandomForestClassifier(max_depth=max_depth, n_estimators=n_estimators),
LogisticRegression(solver='lbfgs', multi_class='auto')]
for model in models:
name = model.__class__.__name__
model.fit(X, y)
pred_heatmap(model, X, [feature1, feature2], class_index=0, title=name)
X, y = get_X_y(train, feature1, feature2, target='status_group')
interact(compare_models, max_depth=(1,6,1), n_estimators=(10,40,10));
# Do-it-yourself Bagging Ensemble of Decision Trees (like a Random Forest)
# Instructions
# 1. Choose two features
# 2. Run this code cell
# 3. Interact with the widget sliders
feature1 = 'longitude'
feature2 = 'latitude'
def waterpumps_bagging(max_depth=1, n_estimators=1):
predicteds = []
for i in range(n_estimators):
title = f'Tree {i+1}'
bootstrap_sample = train.sample(n=len(train), replace=True)
X, y = get_X_y(bootstrap_sample, feature1, feature2, target='status_group')
tree = DecisionTreeClassifier(max_depth=max_depth)
tree.fit(X, y)
predicted = pred_heatmap(tree, X, [feature1, feature2], class_index=0, title=title)
predicteds.append(predicted)
ensembled = np.vstack(predicteds).mean(axis=0)
title = f'Ensemble of {n_estimators} trees, with max_depth={max_depth}'
sns.heatmap(ensembled.reshape(100, 100).T, vmin=0, vmax=1)
plt.title(title)
plt.xlabel(feature1)
plt.ylabel(feature2)
plt.xticks([])
plt.yticks([])
plt.show()
interact(waterpumps_bagging, max_depth=(1,6,1), n_estimators=(2,5,1));
###Output
_____no_output_____ |
CleanData_FilterData.ipynb | ###Markdown
**Problem Statement**Glass factory float lines generate significant amounts of wasted heat, which represents an opportunity for heat reclamation. The problem lies in converting the heat to useful electricity with an economically viable waste heat recovery system. Glass factories currently source electricity from the grid at competitive rates for industrial consumption. For our solution to be cost-competitive with electricity generated from fossil fuel plants, it must supply electricity at a lower comparative price.We intend to explore methods by which we can supply electricity at a lower comparative price to fossil fuel plants. The first method is based on the price that we can generate electricity with our thermophotovoltaic (TPV) waste heat recovery system. Some estimates show electricity rates for TPVs at around USD 0.06/kwh - which is already half the price of fossil fuel-generated electricity (quoted at an average of USD 0.12/kwh). However, TPVs are expensive to make and the price of the system would factor into the price of electricity produced by the system. We intend to explore the cost-competitiveness of our system using datasets on state-level electrical prices compared to our best-case-scenario for TPV electricity rates (USD 0.06/kwh). **How will the problem be tackled and solved?**We will generate maps of electrical price, float line location, and float line number at each glass site. These maps will rely on geopandas and geoplot for visualization. We will perform K-means cluster analysis on the maps to identify trends in glass factory locations. We will use visualization methods like a histogram of electrical price for float glass plants to identify what we can charge in each geographical area. The primary assumption is that float glass plants are paying the state-wide industrial rate for electricity and not more or less. **What are the parameters around your problem statement to make it simpleenough to solve but not trivial?**- The first parameter around our problem statement is cleaning and filtering out-of-scope glass plants from our datasets to constrain our analysis to US-based plants. - The next parameter is ensuring the dataset value types are correct for the purpose of processing in our model - e.g. each column has a list of properly formatted strings or floats for the purpose of geolocation. - The final parameter is actually performing the geolocation and cluster analysis on our datasets and creating clear visualizations from the results. These parameters are complex but achievable and require using various libraries to simplify the solution. The solution is non-trivial in that it provides us with new insights into electricity price geographics and how they associate with float glass plant location. **What is the core business or research problem you are solving? Why is itimportant? This should be concise and clear.**The core business problem we are trying to solve is *to determine how much we can charge for electricity at each float glass line we install our waste heat recovery system in*.Because our waste heat recovery system generates electricity for the glass plants, we want to charge them for electricity at a rate lower than what they currently pay. In order to know how much we can charge to remain competitive with on-grid electrical prices, we need to find out how much is currently being paid at each float glass plant; this depends on the geographic location of each plant, since electricity price varies by state. 
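The K-means clustering step mentioned in the plan above is not executed in this notebook (which focuses on cleaning and filtering the data). A minimal sketch of what it could look like, using made-up plant coordinates rather than real data:
```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical (lat, lon) pairs standing in for geocoded plant locations.
latlon = np.array([[41.6, -87.5],
                   [40.2, -85.4],
                   [33.5, -86.8],
                   [35.1, -90.0]])
labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(latlon)
print(labels)
```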
**Have you annotated your notebook with clear explanations to walk the readerthrough, step by step?**All operations are commented and any referenced code is included above each operation. **Does your annotation communicate alternative paths considered in the analysisand decisions made for why particular directions were chosen?**The rationale for most operations has been explained in the comments. Comments alternatively explain what the operation aims to achieve.
###Code
#Importing relevant libraries for data framing
import numpy as np
import pandas as pd
#Importing csv module
#Reference code: https://www.geeksforgeeks.org/working-csv-files-python/
import csv
###Output
_____no_output_____
###Markdown
**Is the data collected and loaded appropriately?**Data collection is from official World of Glass datasets. Loading is done through the csv module in python. Datasets will be converted to csv files and read into the working environment as matrices. The original matrices will be cleaned and filtered for addition into pandas dataframes and subsequent addition into geopandas geo-dataframes. **Does dataset have the capacity to address the question posed in the problemStatement**The datasets include all major US-based float glass lines and the city they are located in plus industrial electricity prices for each state in 2019. These datasets are sufficient to answer the problem posed. **How were the data initially collected? Units? Any metadata should be linked or mentioned.**Data was collected from three sources: World of Glass dataset on glass plant information globally; Electrical Price rates from the US Electricity Information Administration; lat-lon coordinates from the Nominatim OpenStreetMap web database were pulled using reverse geocoding functions from the geopandas library.**Datasets are as follows:**- Electrical Prices: https://www.eia.gov/electricity/monthly/xls/table_5_06_a.xlsx- Glass Plant Locations: https://members.glass.org/cvweb/cgi-bin/msascartdll.dll/ProductInfo?productcd=WOGFLOAT- Lat-Lon Coordinates: https://nominatim.openstreetmap.org/ui/search.html
###Code
#Declaring World of Glass dataset file name
WOG = "WoG_float_061220_fin.csv"
#Declaring Electrical Rate Average dataset file name
ERA = "Average_Price_of_Electricity.csv"
#Creating header list and row list for datasets
"""World of Glass Matrix"""
WOGhead = []
WOGrows = []
"""Electrical Rate Average Matrix"""
ERAhead = []
ERArows = []
###Output
_____no_output_____
###Markdown
**Was data ingested, sufficiently cleaned, in a format that makes it amenable forvisualization, model building and analysis?**Datasets were properly ingested, cleaned, loaded, and formatted for the purpose of visualization, model building, and analysis. Specifically, the data is loaded below as csv files and processed into dataframes and eventually geo-dataframes. Irrelevant classes are discarded for the purpose of simplifying the datasets. New index values are contributed to the dataframes to supplement the analysis (e.g. adding “ ,US” to each city to constrict adding lat-long coordinates for only US-based cities).
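As a side note on the design choice, the same ingestion could also be done directly with pandas instead of the csv module; a sketch (not the approach taken below):
```python
import pandas as pd

# WOG and ERA hold the csv file names declared in the cell above.
WOGDF_direct = pd.read_csv(WOG)
ERADF_direct = pd.read_csv(ERA)
```
The cells below stick with the csv module, which keeps the header and row handling explicit.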
###Code
#Read in csv file
"""Assembling WOG Matrix Header and Rows"""
with open(WOG, 'r') as csvfile:
# creating a csv reader object
csvreader = csv.reader(csvfile)
# extracting header names through first row
WOGhead = next(csvreader)
# extracting each data row one by one
for row in csvreader:
WOGrows.append(row)
"""Assembling ERA Matrix Header and Rows"""
with open(ERA, 'r') as csvfile:
# creating a csv reader object
csvreader = csv.reader(csvfile)
# extracting header names through first row
ERAhead = next(csvreader)
# extracting each data row one by one
for row in csvreader:
ERArows.append(row)
#Transpose WOG rows to columns
#Reference code: https://note.nkmk.me/en/python-list-transpose/
WOGcolsnp = np.array(WOGrows).T
"""Convert to list for input into DataFrame"""
WOGcols = WOGcolsnp.tolist()
#Creating WOG DataFrame
"""Initializing DataFrame"""
WOGDF = pd.DataFrame()
for i in range(len(WOGhead)):
WOGDF[WOGhead[i]] = WOGcols[i]
#Transpose ERA rows to columns
ERAcolsnp = np.array(ERArows).T
"""Convert to list for input into DataFrame"""
ERAcols = ERAcolsnp.tolist()
#Creating ERA DataFrame
"""Initializing DataFrame"""
ERADF = pd.DataFrame()
for i in range(len(ERAhead)):
ERADF[ERAhead[i]] = ERAcols[i]
#ERADF
#Filtering WOG DataFrame for relevant classes
#Relevant classes: Company Name, Country, City, State, Number of Lines
#Relevant classes index: 0, 6, 7, 8, 11
RelIndex = [0,6,7, 8, 11]
"""Initializing Filtered DataFrame"""
WOGDFF = pd.DataFrame()
for i in RelIndex:
WOGDFF[WOGhead[i]] = WOGcols[i]
#WOGDFF
###Output
_____no_output_____
###Markdown
**How were missing data handled**Missing data did not exist between relevant rows and so we did not need to perform any interpolation or extrapolation of values. There was empty data past a certain row in our World of Glass dataset, but this was due to the original csv format. The empty data was simply dropped from the dataframe.
###Code
#Removing unpopulated rows beyond index 237
for i in range(237,len(WOGDFF[WOGhead[0]])-1):
WOGDFF = WOGDFF.drop([i])
WOGDFF
#Filtering ERA DataFrame for relevant classes
#Relevant classes: (State), Industrial 12/1/2020
#Relevant classes index: 0, 5
RelIndex = [0,5]
"""Initializing Filtered DataFrame"""
ERADFF = pd.DataFrame()
for i in RelIndex:
ERADFF[ERAhead[i]] = ERAcols[i]
ERADFF
#Correct ERA and WOG Column Header Names
WOGDFF.columns = ['Company Name', 'Country', 'City', 'State', 'Number of Lines']
WOGDFF
ERADFF.columns = ['State', 'Electrical Price']
ERADFF
#State vector for filter
#Source Code: https://gist.github.com/JeffPaine/3083347
state_names = ["AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DC", "DE", "FL", "GA",
"HI", "ID", "IL", "IN", "IA", "KS", "KY", "LA", "ME", "MD",
"MA", "MI", "MN", "MS", "MO", "MT", "NE", "NV", "NH", "NJ",
"NM", "NY", "NC", "ND", "OH", "OK", "OR", "PA", "RI", "SC",
"SD", "TN", "TX", "UT", "VT", "VA", "WA", "WV", "WI", "WY"]
#Filtering on rows for US States in WOGDFF
WOGDFF_states = pd.DataFrame()
for i in state_names:
is_state = WOGDFF['State'] == i
addstate = WOGDFF[is_state]
WOGDFF_states = WOGDFF_states.append(addstate)
WOGDFF_states
FloatLinesCount = WOGDFF_states['Number of Lines'].tolist()
FloatLinesCount = [int(i) for i in FloatLinesCount]
TotalFloatLines = sum(FloatLinesCount)
TotalFloatLines
#State vector for filter
#Source Code: https://gist.github.com/norcal82/e4c7e8113f377db184bb
state_names_ERA = ["Alaska", "Alabama", "Arkansas", "American Samoa", "Arizona", "California", "Colorado", "Connecticut", "District of Columbia", "Delaware", "Florida", "Georgia", "Guam", "Hawaii", "Iowa", "Idaho", "Illinois", "Indiana", "Kansas", "Kentucky", "Louisiana", "Massachusetts", "Maryland", "Maine", "Michigan", "Minnesota", "Missouri", "Mississippi", "Montana", "North Carolina", "North Dakota", "Nebraska", "New Hampshire", "New Jersey", "New Mexico", "Nevada", "New York", "Ohio", "Oklahoma", "Oregon", "Pennsylvania", "Puerto Rico", "Rhode Island", "South Carolina", "South Dakota", "Tennessee", "Texas", "Utah", "Virginia", "Virgin Islands", "Vermont", "Washington", "Wisconsin", "West Virginia", "Wyoming"]
#Filtering on rows for US States in ERADFF
ERADFF_states = pd.DataFrame()
for i in state_names_ERA:
is_state = ERADFF['State'] == i
addstate = ERADFF[is_state]
ERADFF_states = ERADFF_states.append(addstate)
ERADFF_states
#Finding out directory path
import os
os.getcwd()
#Storing our cleaned and filtered dataframes
WOGDFF_states.to_csv(r'C:\\Users\\Sleepwalk\\1 Data Analytics Final Project Data\WOGDFF_states.csv', index = False)
ERADFF_states.to_csv(r'C:\\Users\\Sleepwalk\\1 Data Analytics Final Project Data\ERADFF_states.csv', index = False)
###Output
_____no_output_____ |
examples/TimeCircle.ipynb | ###Markdown
TimeCircle* enumerates different frequencies limited to multiples of seconds, minutes, hours or days/weeks* annual or monthly phenomena (e.g. day of month) are **not** modelled (see `DateCircle`)* The implementation is very simple and based on POSIX seconds
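A rough sketch of the underlying idea — an assumption based on the description above, not the library's actual code: a timestamp is mapped onto a circle of period T seconds via sin/cos, so that times just before and just after a period boundary end up close together:
```python
import numpy as np
import pandas as pd

t = pd.Timestamp('2020-01-01 18:00:00').timestamp()  # POSIX seconds
T = 24 * 3600                                        # one-day period

sin_feat = np.sin(2 * np.pi * t / T)
cos_feat = np.cos(2 * np.pi * t / T)
print(sin_feat, cos_feat)
```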
###Code
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load Modules
###Code
import sys
sys.path.append('..')
from datefeatures import TimeCircle
import numpy as np
import pandas as pd
from randdate import randdate
from datetime import datetime
from sklearn.pipeline import Pipeline, FeatureUnion
from mlxtend.feature_selection import ColumnSelector
###Output
_____no_output_____
###Markdown
Example 1
###Code
# generate fake dates
X = np.c_[np.array(randdate(10)), np.array(randdate(10))]
# transform date variable to fetures
cmp = TimeCircle()
cmp.fit(X)
Z = cmp.transform(X)
Z.head()
cmp.feature_names_
###Output
_____no_output_____
###Markdown
Example 2
###Code
# generate fake dates
X = np.c_[np.array(randdate(10)), np.array(randdate(10))]
# emulate missing value
X[1,0] = np.nan
# transform date variable to fetures
cmp = TimeCircle(freq = {'d': [1, 2, 7]}, out=['sin'])
cmp.fit(X)
Z = cmp.transform(X)
Z.head()
###Output
_____no_output_____
###Markdown
Example 3
###Code
n_samples = 100000
X = np.c_[np.array(randdate(n_samples)), np.array(randdate(n_samples)), np.array(randdate(n_samples))]
freq = {
's': [1, 2, 3, 4, 6, 10, 12, 15, 20, 30, 40, 45], # 1-59, e.g. range(1, 60)
'm': [1, 2, 3, 4, 6, 10, 12, 15, 20, 30, 40, 45], # 1-59, e.g. range(1, 60)
'h': [1, 2, 3, 4, 6, 9, 12, 15, 18], # 1-23, e.g. range(1,24)
'd': [1, 2, 3, 7, 14, 21, 28] # any number of days, e.g. 1-7, n*7
}
cmp = TimeCircle(freq = freq, out=['sin', 'cos'])
%time Z = cmp.fit_transform(X)
###Output
CPU times: user 2min 20s, sys: 3.35 s, total: 2min 23s
Wall time: 2min 30s
###Markdown
Example 4
###Code
# generate fake dates
n_samples = 5
X = np.c_[np.array(randdate(n_samples))]
X[1,0] = np.nan
# make pipeline
pipe = Pipeline(steps=[
('pre', TimeCircle(freq = {'d': [1, 2, 7]}, out=['sin', 'cos']))
])
Z = pipe.fit_transform(X)
Z
###Output
_____no_output_____
###Markdown
Example 5
###Code
# generate fake dates
n_samples = 5
X = pd.DataFrame(data=randdate(n_samples), columns=['this_date'])
X['some_numbers'] = np.random.randn(n_samples)
X
# make pipeline
pipe = Pipeline(steps=[
# process column by column
('col_by_col', FeatureUnion(transformer_list=[
('dates', Pipeline(steps=[
('sel1', ColumnSelector(cols=('this_date'))),
('pre1', TimeCircle(freq = {'d': [1, 2, 7]}, out=['sin', 'cos']))
])),
('numbers', ColumnSelector(cols=('some_numbers')))
]))
# do some other stuff ..
])
Z = pipe.fit_transform(X)
Z
colnam = list(pipe.steps[0][1].transformer_list[0][1].steps[1][1].feature_names_)
colnam += ['some_numbers']
colnam
pd.DataFrame(Z, columns=colnam)
###Output
_____no_output_____ |
dea_materials/day2/P_moree_cotton_farms.ipynb | ###Markdown
Cotton farms in MoreeMoree is a town in northern New South Wales, Australia. It is located on the banks of the Mehi River, in the centre of the Jim plains. Its name comes from an Aboriginal word for “rising sun”, “long spring”, or “water hole".Moree is a major agricultural centre, noted for its part in the Australian cotton-growing industry which was established there in the early 1960s. The town is renowned by its healing artesian hot spring baths. At the 2016 census, the town of Moree had a population of 7,383. Australia’s cotton growing season lasts approximately six months, starting in September/October (planting) and ending in March/April (picking). Irrigation water availability is a limiting factor in cotton production. Water-use efficiency has increased by approximately 240 percent since the 1970’s and Australian cotton growers are now recognised as the most water-use efficient in the world and three times more efficient than the global average. [Source: agriculture.gov.au](http://www.agriculture.gov.au)The demand for cotton worldwide has been steadily increasing in recent years. The good quality of Australian cotton is recognised worldwide and constitutes an attractive product for farmers. Evolution of worldwide cotton demand since year 2000. (Source: Abares) Your task:After many years working in an office, you have decided to become a cotton farmer. You are looking for properties for sale near Moree but you have heard at the local pub, about the big differences in productivity between lands. There are a couple of properties that fit your budget and you would like to find out which one has historically performed better in the past years. You cannot trust much the numbers that the previous owners are giving to you and want to find a more independent method to inform your decision. Luckily, in your previous job you had to use the DEA for very different purposes and you decide to give it a go.You know about the Fractional Cover product and you decide to use it for comparing the performance for these two properties, over the last growing season.Fractional Cover represents the proportion of the land surface that is bare (BS), covered by photosynthetic vegetation (PV), or non-photosynthetic vegetation (NPV). The green (PV) fraction includes leaves and grass, the non-photosynthetic fraction (NPV) includes branches, dry grass and dead leaf litter, and the bare soil (BS) fraction includes bare soil or rock. You expect to see an increase in the bare soil fraction over the past years. Load packagesYou start by loading the usual Python libraries to start working on this project.
###Code
%matplotlib inline
import datacube
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sys
import xarray as xr
###Output
_____no_output_____
###Markdown
Setting upWe create the DEA object and list the products that are currently available on the DEA containing the string `fc` which is the code that indicates Fractional Cover.
###Code
dc = datacube.Datacube(app='dc-FC')
products = dc.list_products()
display_columns = ['name', 'description']
display_rows = [1]
dc_products = products[display_columns]
dc_products[dc_products['name'].str.contains("fc")]
###Output
_____no_output_____
###Markdown
Property AYou start by putting together Fractional Cover for Landsat 8 data for the first property.
###Code
dc_moree = datacube.Datacube(app="Moree_CottonFarms")
query = {'lat': (-29.34, -29.42),
'lon': (149.78, 149.91),
'time':('2018-06-01', '2019-06-01')}
fc_propA = dc_moree.load(product='ls8_fc_albers', **query)
# Cloud filtering functionality for FC is currently being developed so we manually filter cloudly images
fc_propA = fc_propA.isel(time=[0,2,4,5,6,7,9,10,11,12,13,16,19,20,22])
fc_propA
###Output
_____no_output_____
###Markdown
Visualising the 3 fractions togetherYou have 23 clean images of the area around the first property for the last year. You create a function to plot the 3 fraction plus the Unmixing Error (UE) variable which gives an indication about the uncertainty in the computation of the fractions.
###Code
import matplotlib.gridspec as gridspec
def plot_fractions(ds, scene):
#set up our images on a grid using gridspec
plt.figure(figsize=(12,8))
gs = gridspec.GridSpec(2,2) # set up a 2 x 2 grid of 4 images for better presentation
ax1=plt.subplot(gs[0,0])
ds.PV.isel(time=scene).plot(cmap='gist_earth_r')
ax1.set_title('PV')
ax2=plt.subplot(gs[1,0])
ds.BS.isel(time=scene).plot(cmap='Oranges')
ax2.set_title('BS')
ax3=plt.subplot(gs[0,1])
ds.NPV.isel(time=scene).plot(cmap='copper')
ax3.set_title('NPV')
ax4=plt.subplot(gs[1,1])
ds.UE.isel(time=scene).plot(cmap='magma')
ax4.set_title('UE')
plt.tight_layout()
plt.show()
plot_fractions(fc_propA, 0)
###Output
_____no_output_____
###Markdown
Detecting cropping landsLand dedicated to cropping goes through a series of changes during the season. At the beginning of the season, the land is ploughed and presents no vegetation; in the middle of the season, the fields are green. Later in the growing season, depending on the type of crop, fields start yellowing until harvest time.The changes in land dedicated to crops are usually larger than in the surrounding lands. The PV fraction in the fractional cover product comes in handy for identifying the parts of the land dedicated to growing crops. If we consider the temporal evolution of an individual pixel in the PV images, we'll observe that the variability is usually larger for crop farms than for other types of land.Making use of this variability through time, you are going to identify the cropping areas around Moree.
###Code
land_std = fc_propA.PV.std(dim='time')
print("Shape:",land_std.shape)
print("Min: {:02f}, Max: {:02f}, Mean {:02f}".format(np.nanmin(land_std), np.nanmax(land_std), np.nanmean(land_std)))### Now we load and look at some data
###Output
_____no_output_____
###Markdown
Masking croplandsNow you look at the maximum and minimum values and determine a threshold value to create a mask that leaves out all non-cropping lands. After some trial and error, 20 seems to give reasonable results. You use this mask to pick the farm areas out of the original images.
###Code
(land_std > 20).plot(cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Using masks within XArray DatasetsXArray contains a function `.where` to apply binary masks to all the `DataArrays` in a `Dataset`. Here you use it to filter your data with the cropping land mask first and then with an unmixing error threshold. As you can see, you can chain as many operations as you want onto a `Dataset`. The expression evaluates from left to right, so the order of the operations affects the result and also the performance (time) of the operation.You create this expression, which filters your data twice, and then plot the result, specifying a colormap and minimum and maximum values for the colour scale.
###Code
mean_error = fc_propA.UE.mean(dim='time')
fc_propA.PV.where(land_std>20).where(mean_error<=20.0).isel(time=3).plot(cmap='YlGn', vmax=120, vmin=0)
###Output
_____no_output_____
###Markdown
Interactive plotsYou have seen someone in a blog post demonstrating a very cool Jupyter notebook functionality to create dynamic visualisations. Once you have located the cropping areas, you decide to create an interactive time-lapse version of the last cropping season for the first property.
###Code
from ipywidgets import interactive
def plot_field(t):
fc_propA.PV.where(land_std>20).where(mean_error<=20.0).isel(time=t).plot(cmap='YlGn', vmax=120, vmin=0)
interactive_plot = interactive(plot_field, t=(0, fc_propA.time.shape[0]-1))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
###Output
_____no_output_____
###Markdown
Temporal plotsFinally you want to visualise the temporal evolution of the cropping lands over the last year. You are quite surprised to realise that there are what look like two growing seasons: the first in spring and a second, more productive one in autumn.
###Code
plt.figure(figsize=(10,2))
fc_propA.PV.where(land_std>20).where(mean_error<=20.0).mean(dim=['x','y']).plot()
###Output
_____no_output_____
###Markdown
Property BNow that you know how to perform the analysis you repeat the process for the second farm.
###Code
query = {'lat': (-29.30, -29.38),
'lon': (149.65, 149.78),
'time':('2018-06-01', '2019-06-01')}
fc_propB = dc_moree.load(product='ls8_fc_albers', **query)
print(fc_propB.time.shape)
# Cloud filtering functionality for FC is currently being developed so we manually filter cloudy images
fc_propB = fc_propB.isel(time=[0,2,4,5,6,7,9,10,11,12,13,16,19,20,22])
fc_propB
###Output
_____no_output_____
###Markdown
Detecting cropping lands
###Code
land_std = fc_propB.PV.std(dim='time')
print("Shape:",land_std.shape)
print("Min: {:02f}, Max: {:02f}, Mean {:02f}".format(np.nanmin(land_std), np.nanmax(land_std), np.nanmean(land_std)))### Now we load and look at some data
(land_std > 20).plot(cmap='Greys_r')
###Output
_____no_output_____
###Markdown
Plotting fractions
###Code
plot_fractions(fc_propB, 0)
###Output
_____no_output_____
###Markdown
Interactive year time-lapse
###Code
def plot_field(t):
fc_propB.PV.where(land_std>20).isel(time=t).plot(cmap='YlGn', vmax=120, vmin=0)
interactive_plot = interactive(plot_field, t=(0, fc_propB.time.shape[0]-1))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
###Output
_____no_output_____
###Markdown
Year vegetation evolution
###Code
plt.figure(figsize=(10,2))
fc_propB.PV.where(land_std>20).mean(dim=['x','y']).plot()
###Output
_____no_output_____ |
Chris_Tolbert_LS_DS_112_Loading_Data.ipynb | ###Markdown
Lambda School Data Science - Loading DataData comes in many shapes and sizes - we'll start by loading tabular data, usually in csv format.Data set sources:- https://archive.ics.uci.edu/ml/datasets.html- https://github.com/awesomedata/awesome-public-datasets- https://registry.opendata.aws/ (beyond scope for now, but good to be aware of)Let's start with an example - [data about flags](https://archive.ics.uci.edu/ml/datasets/Flags). Lecture example - flag data
###Code
# Step 1 - find the actual file to download
# From navigating the page, clicking "Data Folder"
flag_data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data'
# You can "shell out" in a notebook for more powerful tools
# https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html
# Funny extension, but on inspection looks like a csv
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data
# Extensions are just a norm! You have to inspect to be sure what something is
# Step 2 - load the data
# How to deal with a csv? 🐼
import pandas as pd
flag_data = pd.read_csv(flag_data_url)
# Step 3 - verify we've got *something*
flag_data.head()
# Step 4 - Looks a bit odd - verify that it is what we want
flag_data.count()
!curl https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data | wc
# So we have 193 observations with funny names, file has 194 rows
# Looks like the file has no header row, but read_csv assumes it does
#help(pd.read_csv)
# Alright, we can pass header=None to fix this
flag_data = pd.read_csv(flag_data_url, header=None)
flag_data.head()
flag_data.count()
flag_data.isna().sum()
###Output
_____no_output_____
###Markdown
Yes, but what does it *mean*?This data is fairly nice - it was "donated" and is already "clean" (no missing values). But there are no variable names - so we have to look at the codebook (also from the site).```1. name: Name of the country concerned2. landmass: 1=N.America, 2=S.America, 3=Europe, 4=Africa, 4=Asia, 6=Oceania3. zone: Geographic quadrant, based on Greenwich and the Equator; 1=NE, 2=SE, 3=SW, 4=NW4. area: in thousands of square km5. population: in round millions6. language: 1=English, 2=Spanish, 3=French, 4=German, 5=Slavic, 6=Other Indo-European, 7=Chinese, 8=Arabic, 9=Japanese/Turkish/Finnish/Magyar, 10=Others7. religion: 0=Catholic, 1=Other Christian, 2=Muslim, 3=Buddhist, 4=Hindu, 5=Ethnic, 6=Marxist, 7=Others8. bars: Number of vertical bars in the flag9. stripes: Number of horizontal stripes in the flag10. colours: Number of different colours in the flag11. red: 0 if red absent, 1 if red present in the flag12. green: same for green13. blue: same for blue14. gold: same for gold (also yellow)15. white: same for white16. black: same for black17. orange: same for orange (also brown)18. mainhue: predominant colour in the flag (tie-breaks decided by taking the topmost hue, if that fails then the most central hue, and if that fails the leftmost hue)19. circles: Number of circles in the flag20. crosses: Number of (upright) crosses21. saltires: Number of diagonal crosses22. quarters: Number of quartered sections23. sunstars: Number of sun or star symbols24. crescent: 1 if a crescent moon symbol present, else 025. triangle: 1 if any triangles present, 0 otherwise26. icon: 1 if an inanimate image present (e.g., a boat), otherwise 027. animate: 1 if an animate image (e.g., an eagle, a tree, a human hand) present, 0 otherwise28. text: 1 if any letters or writing on the flag (e.g., a motto or slogan), 0 otherwise29. topleft: colour in the top-left corner (moving right to decide tie-breaks)30. botright: Colour in the bottom-left corner (moving left to decide tie-breaks)```Exercise - read the help for `read_csv` and figure out how to load the data with the above variable names. One pitfall to note - with `header=None` pandas generated variable names starting from 0, but the above list starts from 1... Your assignment - pick a dataset and do something like the aboveThis is purposely open-ended - you can pick any data set you wish. It is highly advised you pick a dataset from UCI or a similar "clean" source.If you get that done and want to try more challenging or exotic things, go for it! Use documentation as illustrated above, and follow the 20-minute rule (that is - ask for help if you're stuck).If you have loaded a few traditional datasets, see the following section for suggested stretch goals.
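For the exercise above, one possible way to attach the codebook's variable names — a sketch, using the column list from the codebook and the `flag_data_url` defined in the lecture cells:
```python
import pandas as pd

col_names = ['name', 'landmass', 'zone', 'area', 'population', 'language',
             'religion', 'bars', 'stripes', 'colours', 'red', 'green', 'blue',
             'gold', 'white', 'black', 'orange', 'mainhue', 'circles',
             'crosses', 'saltires', 'quarters', 'sunstars', 'crescent',
             'triangle', 'icon', 'animate', 'text', 'topleft', 'botright']

flag_data = pd.read_csv(flag_data_url, header=None, names=col_names)
flag_data.head()
```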
###Code
# TODO your work here!
# And note you should write comments, descriptions, and add new
# code and text blocks as needed
import pandas as pd
import numpy as np
pd.options.display.max_columns = None #allow all columns to be shown
#pd.set_option("display.max_rows",30000)
# This dataset covers all NBA games from 1946 to 2014. It uses both Elo
# and CARM-Elo ratings and includes each team's probability of winning.
nba_carmelo_url = 'https://projects.fivethirtyeight.com/nba-model/nba_elo.csv'
nba_data = pd.read_csv(nba_carmelo_url)
nba_data.describe()
nba_data.head()
nba_data.tail()
# Replace NaN with empty string in playoff column
nba_data_clear = nba_data.replace(np.nan, '', regex=True)
nba_data_clear.isna().sum().sum()
nba_data_clear.shape
nba_data_clear.isna().sum()
# Many null counts come from empty cells. Columns like playoff are not really
# missing data: a null there just means the game was not a playoff game.
# Comparing orgiginal with completed
nba_data.isnull().sum()
nba_data.isnull().sum().sum()
#nba_data_clean = nba_data.replace(np.nan, '', regex=True)
nba_data['playoff'] = nba_data['playoff'].fillna(False)  # mark non-playoff games as False
nba_data_clear.describe()
###Output
_____no_output_____ |
Basic Commends in Python.ipynb | ###Markdown
Basic Commands in Python * pip install **package name*** pip uninstall **package name*** pip list (lists all the packages installed on your machine) How to update a package1. pip install --upgrade **package name** (upgrades the package from the installed version to the newest available version)2. pip list --outdated (lists outdated packages (excluding editables) and the latest version available)3. pip install -U **package name** (shorthand for the --upgrade option above)
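For instance, the upgrade-related commands can be run straight from a notebook cell (the package name here is just an illustrative choice):
```
!pip list --outdated
!pip install --upgrade requests
```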
###Code
!pip list
# Use json formatting
!pip list --format=json
# Use freeze formatting
!pip list --format=freeze
# Use legacy formatting
!pip list --format=columns
import this
## Show information about a package
!pip show numpy
### Show all information about a package
!pip show --verbose numpy
## Search for “peppercorn”
!pip search numpy
!pip search peppercorn
###Output
peppercorn (0.6) - A library for converting a token stream into a data structure for use in web form posts
pepperedform (0.6.1) - Helpers for using peppercorn with formprocess.
|
final_project/script/analysis.ipynb | ###Markdown
This notebook includes exploratory data analysis of the Seattle Police Department's Call Data
###Code
import utils
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
pd.options.display.max_rows = 999
df = pd.read_csv('/Users/allen/Documents/Data_512/Data/Call_Data_filtered.csv')
# make YearMonth datetime type
df['YearMonth'] = pd.to_datetime(df['YearMonth'])
# filter priority 9 for this project
df = df[df['Priority'] != 9]
###Output
_____no_output_____
###Markdown
EDA Average response time over time
###Code
# regroup priority
df['priority'] = df['Priority'].apply(utils.group_priority)
# aggregate response_time on priority and YearMonth
response_by_priority = df.groupby(['priority','YearMonth'])['response_time'].mean().reset_index()
# set themes
colors = ["red", "tomato", "steelblue", "forestgreen"]
palette = sns.color_palette(colors)
# plot response_time by priority overtime
sns.set_style("whitegrid")
fig, ax = plt.subplots(figsize=(7,4))
sns.lineplot(x="YearMonth", y="response_time", hue="priority", palette=palette, data=response_by_priority)
ax.set_xlabel('Year')
ax.set_ylabel('Response Time (mins)')
ax.set_title('Response Time by Priority Overtime', fontsize=15);
###Output
/anaconda3/envs/env_0/lib/python3.7/site-packages/pandas/plotting/_matplotlib/converter.py:103: FutureWarning: Using an implicitly registered datetime converter for a matplotlib plotting method. The converter was registered by pandas on import. Future versions of pandas will require you to explicitly register matplotlib converters.
To register the converters:
>>> from pandas.plotting import register_matplotlib_converters
>>> register_matplotlib_converters()
warnings.warn(msg, FutureWarning)
###Markdown
Response time by call type
###Code
# aggregate response time by call type
response_by_call_type = df.groupby(['Priority','Call Type']).agg(
response_time = ('response_time','mean'),
n = ('response_time','count')
).reset_index().sort_values(['Priority','response_time'])
fig, ax = plt.subplots(figsize=(5,4))
response_by_call_type[(response_by_call_type['Priority'] == 1) &
(response_by_call_type['Call Type'] != 'ONVIEW') &
(response_by_call_type['Call Type'] != 'IN PERSON COMPLAINT')][['Call Type','response_time']] \
.set_index('Call Type') \
.sort_values('response_time', ascending=False) \
.plot.barh(edgecolor='black', color='green', legend=False, alpha=0.6, ax=ax)
ax.set_title('Response Time by Call Type', fontsize=15)
ax.set_ylabel('')
ax.set_xlabel('Time (mins)');
###Output
_____no_output_____
###Markdown
Compare 911 calls to non-911 calls
###Code
fig, ax = plt.subplots(figsize=(7,4))
sns.barplot(x='Priority', y='response_time', hue='Call Type',
data=response_by_call_type[(response_by_call_type['Call Type'] == '911') |
(response_by_call_type['Call Type'] == 'TELEPHONE OTHER, NOT 911')])
ax.set_ylabel('Time (mins)')
ax.set_title('Response Time of 911 and Not 911 Calls by Priority', fontsize=15)
ax.legend(loc='upper left', bbox_to_anchor=(0.2, -0.13), shadow=True, ncol=2);
###Output
_____no_output_____
###Markdown
Does SPD respond to text messages significantly faster than to 911 calls?
###Code
from scipy import stats
stats.ttest_ind(df[(df['Priority'] == 1) & (df['Call Type'] == 'TEXT MESSAGE')]['response_time'],
df[(df['Priority'] == 1) & (df['Call Type'] == '911')]['response_time'],
equal_var=False)
###Output
_____no_output_____
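###Markdown
To make the decision rule explicit, a small sketch of the same test with a conventional 0.05 threshold (the cutoff is an assumption, not something fixed by the data):
###Code
# Sketch: unpack the Welch t-test result and compare the p-value against alpha
alpha = 0.05
t_stat, p_value = stats.ttest_ind(
    df[(df['Priority'] == 1) & (df['Call Type'] == 'TEXT MESSAGE')]['response_time'],
    df[(df['Priority'] == 1) & (df['Call Type'] == '911')]['response_time'],
    equal_var=False)
print(f't = {t_stat:.3f}, p = {p_value:.3f}, significant at {alpha}: {p_value < alpha}')
###Output
_____no_output_____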
###Markdown
The p-value is not significant, so there is no evidence that the response time to text messages is significantly faster than the response time to 911 calls. Response time by event type
###Code
response_by_event_type = df.groupby(['Priority','Initial Call Type']).agg(
response_time = ('response_time','mean'),
n = ('response_time','count')
).reset_index().sort_values(['response_time'])
top_n = 15
priority = 1
fig, ax = plt.subplots(figsize=(9,5))
response_by_event_type[(response_by_event_type['Priority'] == priority) &
(response_by_event_type['n'] > 50)].head(top_n) \
.sort_values('response_time', ascending=False) \
.plot.barh(x='Initial Call Type', y='response_time', edgecolor='black', color='green', alpha=0.6, legend=False, ax=ax)
ax.set_ylabel('')
ax.set_xlabel('Response Time (mins)')
ax.set_title('Top {} Events by Response Time'.format(top_n), fontsize=15);
###Output
_____no_output_____ |
exercises/ppi_solved.ipynb | ###Markdown
Exercícios de "playing Python interpreter" Instruções gerais:Em cada exercício, tente prever qual o _output_ da célula. Para confirmar, **corra a célula e compare**!Estes exercícios estão divididos em secções, de acordo com alguns conceitos da linguagem Python.Versão da linguagem: 3.6Módulos adicionais necessários: nenhum ciclos `for`, função `range()`, indexação
###Code
for n in [2, 3, 4]:
print(n, n**2)
for n in [2, 3, 4]:
if n**2 > 10:
print(n, n**2)
for n in '234ABC':
print(n, n*2)
for n in '2,3,4,ABC':
print(n, n*2)
for n in '2,3,4,ABC':
if n in 'ABRACADABRA':
print(n)
print(n*2)
for n in {'2': 'A', 'B': 3, 4: 'ABC'}:
print(n)
print(n*2)
d = {'2': 'A',
'B': 3,
4: 'ABC'}
for i in d.items():
print(i)
d = {'2': 'A',
'B': 3,
4: 'ABC'}
for k, v in d.items():
print(k, '-', v)
for n in range(3):
print(n, n**2)
for n in range(3, 6, 2):
print(n, n**2)
for n in range(len('0'), len('40')):
print(n, n**2)
a = [2, 3, 4]
b = 'ABRACADABRA'
c = {'2': 'A', 'B': 3, 4: 'ABC'}
print(a, b, c)
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
print(a, b, c)
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
print(list(a), list(b), list(c))
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
print(str(a), str(b), str(c))
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for n in a:
print(n)
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for n in a:
print(n, b[n])
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for n in a:
if n in c:
print(n, c[n])
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for n in b:
if n in c:
print(n, c[n])
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for n in range(len(a)):
if b[n] in c:
print(n, c[b[n]])
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for n in range(len(a)):
print(a[n], len(c))
print(n)
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for n in b:
if n in c[4]:
print(n)
###Output
A
B
A
C
A
A
B
A
###Markdown
Ciclos `for` encaixados
###Code
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for i in a:
for j in b:
if j == 'R':
print(i, j)
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for i in a:
print(i)
for j in b:
if j == 'R':
print(j)
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for i in a:
print(i)
for j in b:
if j == 'R':
print(j)
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for i in a:
print(i)
for j in b:
if j == 'R':
print(i)
print(j)
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for i in range(len(a)):
for j in b:
if j == 'R':
print(i, j)
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for i in range(len(a)):
for j in range(len(b)):
if j == 2:
print(i, j)
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for i in range(len(a)):
for j in range(len(b)):
if j == i:
print(a[i], b[j])
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for i in range(len(a)):
for j in range(len(b)):
if j == len(a):
print(b[j])
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for i in b:
for j in c[4]:
if i == j:
print(i)
###Output
A
B
A
C
A
A
B
A
###Markdown
Listas em compreensão
###Code
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
print([b[n] for n in a])
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
print([b[n] for n in a if n in c])
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
print([n*2 for n in a if n in c])
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
print([n*2 for n in range(len(b)) if n in c])
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
print([c[n] for n in b if n in c])
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
print([m for m in [b[n] for n in a] if m in 'ABC'])
a = [i for i in range(100)]
b = [i**2 for i in a if i < 10]
c = [i for i in b if i%2 == 0]
print(c)
###Output
[0, 4, 16, 36, 64]
###Markdown
Slices
###Code
seq = 'ATGGCGAACCGGCTAG'
print(seq[2:])
print(seq[:2])
print(seq[1:])
print(seq[:1])
print(seq[:-1])
print(seq[1:-1])
seq = list('ATGGCGAACCGGCTAG')
print(seq[2:])
print(seq[:2])
print(seq[1:])
print(seq[:1])
print(seq[:-1])
print(seq[1:-1])
seq = 'ATGGCGAACCGGCTAG'
print(seq[::3])
seq = 'ATGGCGAACCGGCTAGTA'
for i in range(0, len(seq), 3):
if seq[i] == 'G':
print(seq[i: i+3])
###Output
GCG
GTA
###Markdown
`zip()` e `enumerate()`
###Code
seq = 'ATGGCGAACCGGCTAGTA'
for i in enumerate(seq):
print(i)
seq = 'ATGGCGAACCGGCTAGTA'
for i, b in enumerate(seq):
if b == 'G':
print(i, b)
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for i, k in enumerate(a):
print(i, k)
for i, k in enumerate(b):
print(i, k)
for i, k in enumerate(c):
print(i, k)
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for x in zip(a, b):
print(x)
for i, j in zip(a, b):
print(i, j)
a, b, c = [2, 3, 4], 'ABRACADABRA', {'2': 'A', 'B': 3, 4: 'ABC'}
for i, j in zip(b, c):
print(i, j)
# muitos pormenores juntos...
seq = 'ATGGCGAACCGGCTAGTA'
b1 = seq[ ::3]
b2 = seq[1::3]
b3 = seq[2::3]
c = [''.join(x) for x in zip(b1,b2,b3)]
print(c)
###Output
['ATG', 'GCG', 'AAC', 'CGG', 'CTA', 'GTA']
###Markdown
`strip()` , `split()` , `join()`, `replace()`
###Code
seq = 'ATGGCGAACCGGCTAGTA'
cods = [''.join(x) for x in zip(seq[ ::3],seq[1::3],seq[2::3])]
print('-'.join(cods))
a = '''Um pequeno texto,
que ocupa
várias linhas,
algumas vazias'''
print(''.join(a.split('\n')))
a = '''Um pequeno texto,
que ocupa
várias linhas,
algumas vazias'''
print(a.replace('\n', ' '))
a = '''Um pequeno texto,
que ocupa
várias linhas,
algumas vazias'''
print(a.replace('\n\n', '\n').replace('\n', ' '))
a = '''Um pequeno texto,
que ocupa
várias linhas,
algumas vazias'''
print(a.split())
a = '''Um pequeno texto,
que ocupa
várias linhas,
algumas vazias'''
print(' '.join([w.strip(',') for w in a.split()]))
a = '''Um pequeno texto,
que ocupa
várias linhas,
algumas vazias'''
print([w.strip(',') for w in a.split() if w[0] in 'aeiou'])
###Output
['ocupa', 'algumas']
|
Kaggle/Cassava Leaf Disease Classification/cassava-leaf-disease-tpu-tensorflow-inference.ipynb | ###Markdown
Cassava Leaf Disease - TPU Tensorflow - Inference- This is the inference part of the work, the training notebook can be found here [Cassava Leaf Disease - TPU Tensorflow - Training](https://www.kaggle.com/dimitreoliveira/cassava-leaf-disease-tpu-tensorflow-training)- keras-applications GitHub repository can be found [here](https://www.kaggle.com/dimitreoliveira/kerasapplications)- efficientnet GitHub repository can be found [here](https://www.kaggle.com/dimitreoliveira/efficientnet-git)- Dataset source `resized` [128x128](https://www.kaggle.com/dimitreoliveira/cassava-leaf-disease-tfrecords-128x128), [256x256](https://www.kaggle.com/dimitreoliveira/cassava-leaf-disease-tfrecords-256x256), [384x384](https://www.kaggle.com/dimitreoliveira/cassava-leaf-disease-tfrecords-384x384), [512x512](https://www.kaggle.com/dimitreoliveira/cassava-leaf-disease-tfrecords-512x512)- Dataset source `center cropped` [128x128](https://www.kaggle.com/dimitreoliveira/cassava-leaf-disease-tfrecords-center-128x128), [256x256](https://www.kaggle.com/dimitreoliveira/cassava-leaf-disease-tfrecords-center-256x256), [384x384](https://www.kaggle.com/dimitreoliveira/cassava-leaf-disease-tfrecords-center-384x384), [512x512](https://www.kaggle.com/dimitreoliveira/cassava-leaf-disease-tfrecords-center-512x512)- Dataset source [discussion thread](https://www.kaggle.com/c/cassava-leaf-disease-classification/discussion/198744)- Dataset [creation source](https://www.kaggle.com/dimitreoliveira/cassava-leaf-disease-stratified-tfrecords-256x256) Dependencies
###Code
!pip install --quiet /kaggle/input/kerasapplications
!pip install --quiet /kaggle/input/efficientnet-git
import math, os, re, warnings, random, glob
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras import Sequential, Model
import efficientnet.tfkeras as efn
def seed_everything(seed=0):
random.seed(seed)
np.random.seed(seed)
tf.random.set_seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
os.environ['TF_DETERMINISTIC_OPS'] = '1'
seed = 0
seed_everything(seed)
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Hardware configuration
###Code
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
print(f'Running on TPU {tpu.master()}')
except ValueError:
tpu = None
if tpu:
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
else:
strategy = tf.distribute.get_strategy()
AUTO = tf.data.experimental.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
###Output
REPLICAS: 1
###Markdown
Model parameters
###Code
BATCH_SIZE = 16 * REPLICAS
HEIGHT = 512
WIDTH = 512
CHANNELS = 3
N_CLASSES = 5
TTA_STEPS = 5 # Do TTA if > 0
###Output
_____no_output_____
###Markdown
Augmentation
###Code
def data_augment(image, label):
p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_1 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_2 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_pixel_3 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# Flips
image = tf.image.random_flip_left_right(image)
image = tf.image.random_flip_up_down(image)
if p_spatial > .75:
image = tf.image.transpose(image)
# Rotates
if p_rotate > .75:
image = tf.image.rot90(image, k=3) # rotate 270º
elif p_rotate > .5:
image = tf.image.rot90(image, k=2) # rotate 180º
elif p_rotate > .25:
image = tf.image.rot90(image, k=1) # rotate 90º
# Pixel-level transforms
if p_pixel_1 >= .4:
image = tf.image.random_saturation(image, lower=.7, upper=1.3)
if p_pixel_2 >= .4:
image = tf.image.random_contrast(image, lower=.8, upper=1.2)
if p_pixel_3 >= .4:
image = tf.image.random_brightness(image, max_delta=.1)
return image, label
###Output
_____no_output_____
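###Markdown
As a quick illustration of the augmentation above, a sketch that runs it once on a dummy tensor (the all-zeros image is only a stand-in for a decoded photo):
###Code
# Illustration only: apply the random augmentation to a dummy image and confirm the shape is unchanged
dummy_image = tf.zeros([HEIGHT, WIDTH, CHANNELS], dtype=tf.float32)
aug_image, _ = data_augment(dummy_image, label=None)
print(aug_image.shape, aug_image.dtype)
###Output
_____no_output_____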
###Markdown
Auxiliary functions
###Code
# Datasets utility functions
def get_name(file_path):
parts = tf.strings.split(file_path, os.path.sep)
name = parts[-1]
return name
def decode_image(image_data):
image = tf.image.decode_jpeg(image_data, channels=3)
image = tf.cast(image, tf.float32) / 255.0
# image = center_crop(image)
return image
def center_crop(image):
image = tf.reshape(image, [600, 800, CHANNELS]) # Original shape
h, w = image.shape[0], image.shape[1]
if h > w:
image = tf.image.crop_to_bounding_box(image, (h - w) // 2, 0, w, w)
else:
image = tf.image.crop_to_bounding_box(image, 0, (w - h) // 2, h, h)
image = tf.image.resize(image, [HEIGHT, WIDTH]) # Expected shape
return image
def resize_image(image, label):
image = tf.image.resize(image, [HEIGHT, WIDTH])
image = tf.reshape(image, [HEIGHT, WIDTH, CHANNELS])
return image, label
def process_path(file_path):
name = get_name(file_path)
img = tf.io.read_file(file_path)
img = decode_image(img)
return img, name
def get_dataset(files_path, shuffled=False, tta=False, extension='jpg'):
dataset = tf.data.Dataset.list_files(f'{files_path}*{extension}', shuffle=shuffled)
dataset = dataset.map(process_path, num_parallel_calls=AUTO)
if tta:
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.map(resize_image, num_parallel_calls=AUTO)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO)
return dataset
def count_data_items(filenames):
n = [int(re.compile(r"-([0-9]*)\.").search(filename).group(1)) for filename in filenames]
return np.sum(n)
###Output
_____no_output_____
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/cassava-leaf-disease-classification/'
submission = pd.read_csv(f'{database_base_path}sample_submission.csv')
display(submission.head())
TEST_FILENAMES = tf.io.gfile.glob(f'{database_base_path}test_tfrecords/ld_test*.tfrec')
NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)
print(f'GCS: test: {NUM_TEST_IMAGES}')
model_path_list = glob.glob('/kaggle/input/cassava-leaf-disease-tpu-tensorflow-training/*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
model_path_list_2 = glob.glob('/kaggle/input/cassava-leaf-disease-training-with-tpu-v2-pods/*.h5')
model_path_list_2.sort()
print('Models to predict:')
print(*model_path_list_2, sep='\n')
###Output
Models to predict:
/kaggle/input/cassava-leaf-disease-training-with-tpu-v2-pods/model_0.h5
/kaggle/input/cassava-leaf-disease-training-with-tpu-v2-pods/model_1.h5
/kaggle/input/cassava-leaf-disease-training-with-tpu-v2-pods/model_2.h5
/kaggle/input/cassava-leaf-disease-training-with-tpu-v2-pods/model_3.h5
/kaggle/input/cassava-leaf-disease-training-with-tpu-v2-pods/model_4.h5
###Markdown
Model
###Code
def model_fn(input_shape, N_CLASSES):
inputs = L.Input(shape=input_shape, name='inputs')
base_model = efn.EfficientNetB3(input_tensor=inputs,
include_top=False,
weights=None,
pooling='avg')
model = tf.keras.Sequential([
base_model,
L.Dropout(.25),
L.Dense(N_CLASSES, activation='softmax', name='output')
])
return model
def model_fn_2(input_shape, N_CLASSES):
inputs = L.Input(shape=input_shape, name='inputs')
base_model = efn.EfficientNetB4(input_tensor=inputs,
include_top=False,
weights=None,
pooling='avg')
x = L.Dropout(.5)(base_model.output)
output = L.Dense(N_CLASSES, activation='softmax', name='output')(x)
model = Model(inputs=inputs, outputs=output)
return model
with strategy.scope():
model = model_fn((None, None, CHANNELS), N_CLASSES)
model_2 = model_fn_2((None, None, CHANNELS), N_CLASSES)
model.summary()
# model_2.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
efficientnet-b3 (Model) (None, 1536) 10783528
_________________________________________________________________
dropout (Dropout) (None, 1536) 0
_________________________________________________________________
output (Dense) (None, 5) 7685
=================================================================
Total params: 10,791,213
Trainable params: 10,703,917
Non-trainable params: 87,296
_________________________________________________________________
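###Markdown
For a quick sense of scale, the two ensemble members can be compared by parameter count (a small sketch using Keras' `count_params`):
###Code
# Rough size comparison of the two backbones used in the ensemble
print(f'EfficientNetB3 model: {model.count_params():,} parameters')
print(f'EfficientNetB4 model: {model_2.count_params():,} parameters')
###Output
_____no_output_____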
###Markdown
Test set predictions
###Code
files_path = f'{database_base_path}test_images/'
test_preds = np.zeros((len(os.listdir(files_path)), N_CLASSES))
print('First model')
for model_path in model_path_list:
print(model_path)
K.clear_session()
model.load_weights(model_path)
if TTA_STEPS > 0:
test_ds = get_dataset(files_path, tta=True)
for step in range(TTA_STEPS):
print(f'TTA step {step+1}/{TTA_STEPS}')
x_test = test_ds.map(lambda image, image_name: image)
test_preds += model.predict(x_test) / (TTA_STEPS * len(model_path_list))
else:
test_ds = get_dataset(files_path, tta=False)
x_test = test_ds.map(lambda image, image_name: image)
test_preds += model.predict(x_test) / len(model_path_list)
print('\nSecond model')
for model_path in model_path_list_2:
print(model_path)
K.clear_session()
model_2.load_weights(model_path)
if TTA_STEPS > 0:
test_ds = get_dataset(files_path, tta=True)
for step in range(TTA_STEPS):
print(f'TTA step {step+1}/{TTA_STEPS}')
x_test = test_ds.map(lambda image, image_name: image)
            test_preds += model_2.predict(x_test) / (TTA_STEPS * len(model_path_list_2))
else:
test_ds = get_dataset(files_path, tta=False)
x_test = test_ds.map(lambda image, image_name: image)
        test_preds += model_2.predict(x_test) / len(model_path_list_2)
test_preds = np.argmax(test_preds, axis=-1)
image_names = [img_name.numpy().decode('utf-8') for img, img_name in iter(test_ds.unbatch())]
submission = pd.DataFrame({'image_id': image_names, 'label': test_preds})
submission.to_csv('submission.csv', index=False)
display(submission.head())
###Output
_____no_output_____ |
docs/jax-101/05.1-pytrees.ipynb | ###Markdown
Working with Pytrees[](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/05.1-pytrees.ipynb)*Author: Vladimir Mikulik*Often, we want to operate on objects that look like dicts of arrays, or lists of lists of dicts, or other nested structures. In JAX, we refer to these as *pytrees*, but you can sometimes see them called *nests*, or just *trees*.JAX has built-in support for such objects, both in its library functions as well as through the use of functions from [`jax.tree_utils`](https://jax.readthedocs.io/en/latest/jax.tree_util.html) (with the most common ones also available as `jax.tree_*`). This section will explain how to use them, give some useful snippets and point out common gotchas. What is a pytree?As defined in the [JAX pytree docs](https://jax.readthedocs.io/en/latest/pytrees.html):> a pytree is a container of leaf elements and/or more pytrees. Containers include lists, tuples, and dicts. A leaf element is anything that’s not a pytree, e.g. an array. In other words, a pytree is just a possibly-nested standard or user-registered Python container. If nested, note that the container types do not need to match. A single “leaf”, i.e. a non-container object, is also considered a pytree.Some example pytrees:
###Code
import jax
import jax.numpy as jnp
example_trees = [
[1, 'a', object()],
(1, (2, 3), ()),
[1, {'k1': 2, 'k2': (3, 4)}, 5],
{'a': 2, 'b': (2, 3)},
jnp.array([1, 2, 3]),
]
# Let's see how many leaves they have:
for pytree in example_trees:
leaves = jax.tree_leaves(pytree)
print(f"{repr(pytree):<45} has {len(leaves)} leaves: {leaves}")
###Output
[1, 'a', <object object at 0x7fded60bb8c0>] has 3 leaves: [1, 'a', <object object at 0x7fded60bb8c0>]
(1, (2, 3), ()) has 3 leaves: [1, 2, 3]
[1, {'k1': 2, 'k2': (3, 4)}, 5] has 5 leaves: [1, 2, 3, 4, 5]
{'a': 2, 'b': (2, 3)} has 3 leaves: [2, 2, 3]
DeviceArray([1, 2, 3], dtype=int32) has 1 leaves: [DeviceArray([1, 2, 3], dtype=int32)]
###Markdown
We've also introduced our first `jax.tree_*` function, which allowed us to extract the flattened leaves from the trees. Why pytrees?In machine learning, some places where you commonly find pytrees are:* Model parameters* Dataset entries* RL agent observationsThey also often arise naturally when working in bulk with datasets (e.g., lists of lists of dicts). Common pytree functionsThe most commonly used pytree functions are `jax.tree_map` and `jax.tree_multimap`. They work analogously to Python's native `map`, but on entire pytrees.For functions with one argument, use `jax.tree_map`:
###Code
list_of_lists = [
[1, 2, 3],
[1, 2],
[1, 2, 3, 4]
]
jax.tree_map(lambda x: x*2, list_of_lists)
###Output
_____no_output_____
###Markdown
To use functions with more than one argument, use `jax.tree_multimap`:
###Code
another_list_of_lists = list_of_lists
jax.tree_multimap(lambda x, y: x+y, list_of_lists, another_list_of_lists)
###Output
_____no_output_____
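###Markdown
If the two inputs do not share the same structure, the call fails rather than broadcasting; a minimal sketch (the lists here are arbitrary examples):
###Code
# A length-3 list and a length-2 list have different tree structures, so this raises an error
try:
  jax.tree_multimap(lambda x, y: x + y, [1, 2, 3], [1, 2])
except ValueError as e:
  print('Structure mismatch:', e)
###Output
_____no_output_____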
###Markdown
For `tree_multimap`, the structure of the inputs must exactly match. That is, lists must have the same number of elements, dicts must have the same keys, etc. Example: ML model parametersA simple example of training an MLP displays some ways in which pytree operations come in useful:
###Code
import numpy as np
def init_mlp_params(layer_widths):
params = []
for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]):
params.append(
dict(weights=np.random.normal(size=(n_in, n_out)) * np.sqrt(2/n_in),
biases=np.ones(shape=(n_out,))
)
)
return params
params = init_mlp_params([1, 128, 128, 1])
###Output
_____no_output_____
###Markdown
We can use `jax.tree_map` to check that the shapes of our parameters are what we expect:
###Code
jax.tree_map(lambda x: x.shape, params)
###Output
_____no_output_____
###Markdown
Now, let's train our MLP:
###Code
def forward(params, x):
*hidden, last = params
for layer in hidden:
x = jax.nn.relu(x @ layer['weights'] + layer['biases'])
return x @ last['weights'] + last['biases']
def loss_fn(params, x, y):
return jnp.mean((forward(params, x) - y) ** 2)
LEARNING_RATE = 0.0001
@jax.jit
def update(params, x, y):
grads = jax.grad(loss_fn)(params, x, y)
# Note that `grads` is a pytree with the same structure as `params`.
# `jax.grad` is one of the many JAX functions that has
# built-in support for pytrees.
# This is handy, because we can apply the SGD update using tree utils:
return jax.tree_multimap(
lambda p, g: p - LEARNING_RATE * g, params, grads
)
import matplotlib.pyplot as plt
xs = np.random.normal(size=(128, 1))
ys = xs ** 2
for _ in range(1000):
params = update(params, xs, ys)
plt.scatter(xs, ys)
plt.scatter(xs, forward(params, xs), label='Model prediction')
plt.legend();
###Output
_____no_output_____
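###Markdown
Tree utilities are also handy for quick inspections; for example, a one-line sketch that counts the total number of parameters in the trained model:
###Code
# Flatten the params pytree and sum the sizes of all leaf arrays
param_count = sum(x.size for x in jax.tree_leaves(params))
print(f'Total parameter count: {param_count}')
###Output
_____no_output_____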
###Markdown
Custom pytree nodesSo far, we've only been considering pytrees of lists, tuples, and dicts; everything else is considered a leaf. Therefore, if you define your own container class, it will be considered a leaf, even if it has trees inside it:
###Code
class MyContainer:
"""A named container."""
def __init__(self, name: str, a: int, b: int, c: int):
self.name = name
self.a = a
self.b = b
self.c = c
jax.tree_leaves([
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Accordingly, if we try to use a tree map expecting our leaves to be the elements inside the container, we will get an error:
###Code
jax.tree_map(lambda x: x + 1, [
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
To solve this, we need to register our container with JAX by telling it how to flatten and unflatten it:
###Code
from typing import Tuple, Iterable
def flatten_MyContainer(container) -> Tuple[Iterable[int], str]:
"""Returns an iterable over container contents, and aux data."""
flat_contents = [container.a, container.b, container.c]
# we don't want the name to appear as a child, so it is auxiliary data.
# auxiliary data is usually a description of the structure of a node,
# e.g., the keys of a dict -- anything that isn't a node's children.
aux_data = container.name
return flat_contents, aux_data
def unflatten_MyContainer(
aux_data: str, flat_contents: Iterable[int]) -> MyContainer:
"""Converts aux data and the flat contents into a MyContainer."""
return MyContainer(aux_data, *flat_contents)
jax.tree_util.register_pytree_node(
MyContainer, flatten_MyContainer, unflatten_MyContainer)
jax.tree_leaves([
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
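###Markdown
With the container registered, the `tree_map` call that failed earlier now works; a quick check (the result is a list of new `MyContainer` objects):
###Code
# tree_map now maps over a, b and c, and rebuilds each container via unflatten_MyContainer
jax.tree_map(lambda x: x + 1, [
  MyContainer('Alice', 1, 2, 3),
  MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____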
###Markdown
Modern Python comes equipped with helpful tools to make defining containers easier. Some of these will work with JAX out-of-the-box, but others require more care. For instance:
###Code
from typing import NamedTuple, Any
class MyOtherContainer(NamedTuple):
name: str
a: Any
b: Any
c: Any
# Since `tuple` is already registered with JAX, and NamedTuple is a subclass,
# this will work out-of-the-box:
jax.tree_leaves([
MyOtherContainer('Alice', 1, 2, 3),
MyOtherContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Notice that the `name` field now appears as a leaf, as all tuple elements are children. That's the price we pay for not having to register the class the hard way. Common pytree gotchas and patterns Gotchas Mistaking nodes for leavesA common problem to look out for is accidentally introducing tree nodes instead of leaves:
###Code
a_tree = [jnp.zeros((2, 3)), jnp.zeros((3, 4))]
# Try to make another tree with ones instead of zeros
shapes = jax.tree_map(lambda x: x.shape, a_tree)
jax.tree_map(jnp.ones, shapes)
###Output
_____no_output_____
###Markdown
What happened is that the `shape` of an array is a tuple, which is a pytree node, with its elements as leaves. Thus, in the map, instead of calling `jnp.ones` on e.g. `(2, 3)`, it's called on `2` and `3`.The solution will depend on the specifics, but there are two broadly applicable options:* rewrite the code to avoid the intermediate `tree_map`.* convert the tuple into an `np.array` or `jnp.array`, which makes the entiresequence a leaf. Handling of None`jax.tree_utils` treats `None` as a node without children, not as a leaf:
###Code
jax.tree_leaves([None, None, None])
###Output
_____no_output_____
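###Markdown
Returning to the shape gotcha above, a minimal sketch of the second option: wrapping each shape tuple in a `jnp.array` turns it into a single leaf, so the map sees whole shapes:
###Code
# Each shape is now one array leaf instead of a tuple node with integer leaves
shape_leaves = jax.tree_map(lambda x: jnp.array(x.shape), a_tree)
jax.tree_map(lambda s: jnp.ones(tuple(map(int, s))), shape_leaves)
###Output
_____no_output_____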
###Markdown
Patterns Transposing treesIf you would like to transpose a pytree, i.e. turn a list of trees into a tree of lists, you can do so using `jax.tree_multimap`:
###Code
def tree_transpose(list_of_trees):
"""Convert a list of trees of identical structure into a single tree of lists."""
return jax.tree_multimap(lambda *xs: list(xs), *list_of_trees)
# Convert a dataset from row-major to column-major:
episode_steps = [dict(t=1, obs=3), dict(t=2, obs=4)]
tree_transpose(episode_steps)
###Output
_____no_output_____
###Markdown
For more complicated transposes, JAX provides `jax.tree_transpose`, which is more verbose, but allows you to specify the structure of the inner and outer Pytree for more flexibility:
###Code
jax.tree_transpose(
outer_treedef = jax.tree_structure([0 for e in episode_steps]),
inner_treedef = jax.tree_structure(episode_steps[0]),
pytree_to_transpose = episode_steps
)
###Output
_____no_output_____
###Markdown
Working with pytrees[](https://colab.research.google.com/github/google/jax/blob/master/docs/jax-101/05.1-pytrees.ipynb)*Author: Vladimir Mikulik*Often, we want to operate on objects that look like dicts of arrays, or lists of lists of dicts, or other nested structures. In JAX, we refer to these as *pytrees*, but you can sometimes see them called *nests*, or just *trees*.JAX has built-in support for such objects, both in its library functions as well as through the use of functions from [`jax.tree_utils`](https://jax.readthedocs.io/en/latest/jax.tree_util.html) (with the most common ones also available as `jax.tree_*`). This section will explain how to use them, give some useful snippets and point out common gotchas. What is a pytree?As defined in the [JAX pytree docs](https://jax.readthedocs.io/en/latest/pytrees.html):> a pytree is a container of leaf elements and/or more pytrees. Containers include lists, tuples, and dicts. A leaf element is anything that’s not a pytree, e.g. an array. In other words, a pytree is just a possibly-nested standard or user-registered Python container. If nested, note that the container types do not need to match. A single “leaf”, i.e. a non-container object, is also considered a pytree.Some example pytrees:
###Code
import jax
import jax.numpy as jnp
example_trees = [
[1, 'a', object()],
(1, (2, 3), ()),
[1, {'k1': 2, 'k2': (3, 4)}, 5],
{'a': 2, 'b': (2, 3)},
jnp.array([1, 2, 3]),
]
# Let's see how many leaves they have:
for pytree in example_trees:
leaves = jax.tree_leaves(pytree)
print(f"{repr(pytree):<45} has {len(leaves)} leaves: {leaves}")
###Output
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
###Markdown
We've also introduced our first `jax.tree_*` function, which allowed us to extract the flattened leaves from the trees. Why pytrees?In machine learning, some places where you commonly find pytrees are:* Model parameters* Dataset entries* RL agent observationsThey also often arise naturally when working in bulk with datasets (e.g., lists of lists of dicts). Common pytree functionsThe most commonly used pytree functions are `jax.tree_map` and `jax.tree_multimap`. They work analogously to Python's native `map`, but on entire pytrees.For functions with one argument, use `jax.tree_map`:
###Code
list_of_lists = [
[1, 2, 3],
[1, 2],
[1, 2, 3, 4]
]
jax.tree_map(lambda x: x*2, list_of_lists)
###Output
_____no_output_____
###Markdown
To use functions with more than one argument, use `jax.tree_multimap`:
###Code
another_list_of_lists = list_of_lists
jax.tree_multimap(lambda x, y: x+y, list_of_lists, another_list_of_lists)
###Output
_____no_output_____
###Markdown
For `tree_multimap`, the structure of the inputs must exactly match. That is, lists must have the same number of elements, dicts must have the same keys, etc. Example: ML model parametersA simple example of training an MLP displays some ways in which pytree operations come in useful:
###Code
import numpy as np
def init_mlp_params(layer_widths):
params = []
for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]):
params.append(
dict(weights=np.random.normal(size=(n_in, n_out)) * np.sqrt(2/n_in),
biases=np.ones(shape=(n_out,))
)
)
return params
params = init_mlp_params([1, 128, 128, 1])
###Output
_____no_output_____
###Markdown
We can use `jax.tree_map` to check that the shapes of our parameters are what we expect:
###Code
jax.tree_map(lambda x: x.shape, params)
###Output
_____no_output_____
###Markdown
Now, let's train our MLP:
###Code
def forward(params, x):
*hidden, last = params
for layer in hidden:
x = jax.nn.relu(x @ layer['weights'] + layer['biases'])
return x @ last['weights'] + last['biases']
def loss_fn(params, x, y):
return jnp.mean((forward(params, x) - y) ** 2)
LEARNING_RATE = 0.0001
@jax.jit
def update(params, x, y):
grads = jax.grad(loss_fn)(params, x, y)
# Note that `grads` is a pytree with the same structure as `params`.
# `jax.grad` is one of the many JAX functions that has
# built-in support for pytrees.
# This is handy, because we can apply the SGD update using tree utils:
return jax.tree_multimap(
lambda p, g: p - LEARNING_RATE * g, params, grads
)
import matplotlib.pyplot as plt
xs = np.random.normal(size=(128, 1))
ys = xs ** 2
for _ in range(1000):
params = update(params, xs, ys)
plt.scatter(xs, ys)
plt.scatter(xs, forward(params, xs), label='Model prediction')
plt.legend()
###Output
_____no_output_____
###Markdown
Custom pytree nodesSo far, we've only been considering pytrees of lists, tuples, and dicts; everything else is considered a leaf. Therefore, if you define your own container class, it will be considered a leaf, even if it has trees inside it:
###Code
class MyContainer:
"""A named container."""
def __init__(self, name: str, a: int, b: int, c: int):
self.name = name
self.a = a
self.b = b
self.c = c
jax.tree_leaves([
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Accordingly, if we try to use a tree map expecting our leaves to be the elements inside the container, we will get an error:
###Code
jax.tree_map(lambda x: x + 1, [
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
To solve this, we need to register our container with JAX by telling it how to flatten and unflatten it:
###Code
from typing import Tuple, Iterable
def flatten_MyContainer(container) -> Tuple[Iterable[int], str]:
"""Returns an iterable over container contents, and aux data."""
flat_contents = [container.a, container.b, container.c]
# we don't want the name to appear as a child, so it is auxiliary data.
# auxiliary data is usually a description of the structure of a node,
# e.g., the keys of a dict -- anything that isn't a node's children.
aux_data = container.name
return flat_contents, aux_data
def unflatten_MyContainer(
aux_data: str, flat_contents: Iterable[int]) -> MyContainer:
"""Converts aux data and the flat contents into a MyContainer."""
return MyContainer(aux_data, *flat_contents)
jax.tree_util.register_pytree_node(
MyContainer, flatten_MyContainer, unflatten_MyContainer)
jax.tree_leaves([
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Modern Python comes equipped with helpful tools to make defining containers easier. Some of these will work with JAX out-of-the-box, but others require more care. For instance:
###Code
from typing import NamedTuple, Any
class MyOtherContainer(NamedTuple):
name: str
a: Any
b: Any
c: Any
# Since `tuple` is already registered with JAX, and NamedTuple is a subclass,
# this will work out-of-the-box:
jax.tree_leaves([
MyOtherContainer('Alice', 1, 2, 3),
MyOtherContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Notice that the `name` field now appears as a leaf, as all tuple elements are children. That's the price we pay for not having to register the class the hard way. Common pytree gotchas and patterns Gotchas Mistaking nodes for leavesA common problem to look out for is accidentally introducing tree nodes instead of leaves:
###Code
a_tree = [jnp.zeros((2, 3)), jnp.zeros((3, 4))]
# Try to make another tree with ones instead of zeros
shapes = jax.tree_map(lambda x: x.shape, a_tree)
jax.tree_map(jnp.ones, shapes)
###Output
_____no_output_____
###Markdown
What happened is that the `shape` of an array is a tuple, which is a pytree node, with its elements as leaves. Thus, in the map, instead of calling `jnp.ones` on e.g. `(2, 3)`, it's called on `2` and `3`.The solution will depend on the specifics, but there are two broadly applicable options:* rewrite the code to avoid the intermediate `tree_map`.* convert the tuple into an `np.array` or `jnp.array`, which makes the entiresequence a leaf. Handling of None`jax.tree_utils` treats `None` as a node without children, not as a leaf:
###Code
jax.tree_leaves([None, None, None])
###Output
_____no_output_____
###Markdown
Patterns Transposing treesIf you would like to transpose a pytree, i.e. turn a list of trees into a tree of lists, you can do so using `jax.tree_multimap`:
###Code
def tree_transpose(list_of_trees):
"""Convert a list of trees of identical structure into a single tree of lists."""
return jax.tree_multimap(lambda *xs: list(xs), *list_of_trees)
# Convert a dataset from row-major to column-major:
episode_steps = [dict(t=1, obs=3), dict(t=2, obs=4)]
tree_transpose(episode_steps)
###Output
_____no_output_____
###Markdown
For more complicated transposes, JAX provides `jax.tree_transpose`, which is more verbose, but allows you specify the structure of the inner and outer Pytree for more flexibility:
###Code
jax.tree_transpose(
outer_treedef = jax.tree_structure([0 for e in episode_steps]),
inner_treedef = jax.tree_structure(episode_steps[0]),
pytree_to_transpose = episode_steps
)
###Output
_____no_output_____
###Markdown
Working with Pytrees[](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/05.1-pytrees.ipynb)*Author: Vladimir Mikulik*Often, we want to operate on objects that look like dicts of arrays, or lists of lists of dicts, or other nested structures. In JAX, we refer to these as *pytrees*, but you can sometimes see them called *nests*, or just *trees*.JAX has built-in support for such objects, both in its library functions as well as through the use of functions from [`jax.tree_utils`](https://jax.readthedocs.io/en/latest/jax.tree_util.html) (with the most common ones also available as `jax.tree_*`). This section will explain how to use them, give some useful snippets and point out common gotchas. What is a pytree?As defined in the [JAX pytree docs](https://jax.readthedocs.io/en/latest/pytrees.html):> a pytree is a container of leaf elements and/or more pytrees. Containers include lists, tuples, and dicts. A leaf element is anything that’s not a pytree, e.g. an array. In other words, a pytree is just a possibly-nested standard or user-registered Python container. If nested, note that the container types do not need to match. A single “leaf”, i.e. a non-container object, is also considered a pytree.Some example pytrees:
###Code
import jax
import jax.numpy as jnp
example_trees = [
[1, 'a', object()],
(1, (2, 3), ()),
[1, {'k1': 2, 'k2': (3, 4)}, 5],
{'a': 2, 'b': (2, 3)},
jnp.array([1, 2, 3]),
]
# Let's see how many leaves they have:
for pytree in example_trees:
leaves = jax.tree_leaves(pytree)
print(f"{repr(pytree):<45} has {len(leaves)} leaves: {leaves}")
###Output
[1, 'a', <object object at 0x7fded60bb8c0>] has 3 leaves: [1, 'a', <object object at 0x7fded60bb8c0>]
(1, (2, 3), ()) has 3 leaves: [1, 2, 3]
[1, {'k1': 2, 'k2': (3, 4)}, 5] has 5 leaves: [1, 2, 3, 4, 5]
{'a': 2, 'b': (2, 3)} has 3 leaves: [2, 2, 3]
DeviceArray([1, 2, 3], dtype=int32) has 1 leaves: [DeviceArray([1, 2, 3], dtype=int32)]
###Markdown
We've also introduced our first `jax.tree_*` function, which allowed us to extract the flattened leaves from the trees. Why pytrees?In machine learning, some places where you commonly find pytrees are:* Model parameters* Dataset entries* RL agent observationsThey also often arise naturally when working in bulk with datasets (e.g., lists of lists of dicts). Common pytree functionsThe most commonly used pytree functions are `jax.tree_map` and `jax.tree_multimap`. They work analogously to Python's native `map`, but on entire pytrees.For functions with one argument, use `jax.tree_map`:
###Code
list_of_lists = [
[1, 2, 3],
[1, 2],
[1, 2, 3, 4]
]
jax.tree_map(lambda x: x*2, list_of_lists)
###Output
_____no_output_____
###Markdown
To use functions with more than one argument, use `jax.tree_multimap`:
###Code
another_list_of_lists = list_of_lists
jax.tree_multimap(lambda x, y: x+y, list_of_lists, another_list_of_lists)
###Output
_____no_output_____
###Markdown
For `tree_multimap`, the structure of the inputs must exactly match. That is, lists must have the same number of elements, dicts must have the same keys, etc. Example: ML model parametersA simple example of training an MLP displays some ways in which pytree operations come in useful:
###Code
import numpy as np
def init_mlp_params(layer_widths):
params = []
for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]):
params.append(
dict(weights=np.random.normal(size=(n_in, n_out)) * np.sqrt(2/n_in),
biases=np.ones(shape=(n_out,))
)
)
return params
params = init_mlp_params([1, 128, 128, 1])
###Output
_____no_output_____
###Markdown
We can use `jax.tree_map` to check that the shapes of our parameters are what we expect:
###Code
jax.tree_map(lambda x: x.shape, params)
###Output
_____no_output_____
###Markdown
Now, let's train our MLP:
###Code
def forward(params, x):
*hidden, last = params
for layer in hidden:
x = jax.nn.relu(x @ layer['weights'] + layer['biases'])
return x @ last['weights'] + last['biases']
def loss_fn(params, x, y):
return jnp.mean((forward(params, x) - y) ** 2)
LEARNING_RATE = 0.0001
@jax.jit
def update(params, x, y):
grads = jax.grad(loss_fn)(params, x, y)
# Note that `grads` is a pytree with the same structure as `params`.
# `jax.grad` is one of the many JAX functions that has
# built-in support for pytrees.
# This is handy, because we can apply the SGD update using tree utils:
return jax.tree_multimap(
lambda p, g: p - LEARNING_RATE * g, params, grads
)
import matplotlib.pyplot as plt
xs = np.random.normal(size=(128, 1))
ys = xs ** 2
for _ in range(1000):
params = update(params, xs, ys)
plt.scatter(xs, ys)
plt.scatter(xs, forward(params, xs), label='Model prediction')
plt.legend();
###Output
_____no_output_____
###Markdown
Custom pytree nodesSo far, we've only been considering pytrees of lists, tuples, and dicts; everything else is considered a leaf. Therefore, if you define your own container class, it will be considered a leaf, even if it has trees inside it:
###Code
class MyContainer:
"""A named container."""
def __init__(self, name: str, a: int, b: int, c: int):
self.name = name
self.a = a
self.b = b
self.c = c
jax.tree_leaves([
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Accordingly, if we try to use a tree map expecting our leaves to be the elements inside the container, we will get an error:
###Code
jax.tree_map(lambda x: x + 1, [
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
To solve this, we need to register our container with JAX by telling it how to flatten and unflatten it:
###Code
from typing import Tuple, Iterable
def flatten_MyContainer(container) -> Tuple[Iterable[int], str]:
"""Returns an iterable over container contents, and aux data."""
flat_contents = [container.a, container.b, container.c]
# we don't want the name to appear as a child, so it is auxiliary data.
# auxiliary data is usually a description of the structure of a node,
# e.g., the keys of a dict -- anything that isn't a node's children.
aux_data = container.name
return flat_contents, aux_data
def unflatten_MyContainer(
aux_data: str, flat_contents: Iterable[int]) -> MyContainer:
"""Converts aux data and the flat contents into a MyContainer."""
return MyContainer(aux_data, *flat_contents)
jax.tree_util.register_pytree_node(
MyContainer, flatten_MyContainer, unflatten_MyContainer)
jax.tree_leaves([
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Modern Python comes equipped with helpful tools to make defining containers easier. Some of these will work with JAX out-of-the-box, but others require more care. For instance:
###Code
from typing import NamedTuple, Any
class MyOtherContainer(NamedTuple):
name: str
a: Any
b: Any
c: Any
# Since `tuple` is already registered with JAX, and NamedTuple is a subclass,
# this will work out-of-the-box:
jax.tree_leaves([
MyOtherContainer('Alice', 1, 2, 3),
MyOtherContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Notice that the `name` field now appears as a leaf, as all tuple elements are children. That's the price we pay for not having to register the class the hard way. Common pytree gotchas and patterns Gotchas Mistaking nodes for leavesA common problem to look out for is accidentally introducing tree nodes instead of leaves:
###Code
a_tree = [jnp.zeros((2, 3)), jnp.zeros((3, 4))]
# Try to make another tree with ones instead of zeros
shapes = jax.tree_map(lambda x: x.shape, a_tree)
jax.tree_map(jnp.ones, shapes)
###Output
_____no_output_____
###Markdown
What happened is that the `shape` of an array is a tuple, which is a pytree node, with its elements as leaves. Thus, in the map, instead of calling `jnp.ones` on e.g. `(2, 3)`, it's called on `2` and `3`.The solution will depend on the specifics, but there are two broadly applicable options:* rewrite the code to avoid the intermediate `tree_map`.* convert the tuple into an `np.array` or `jnp.array`, which makes the entiresequence a leaf. Handling of None`jax.tree_utils` treats `None` as a node without children, not as a leaf:
###Code
jax.tree_leaves([None, None, None])
###Output
_____no_output_____
###Markdown
Patterns Transposing treesIf you would like to transpose a pytree, i.e. turn a list of trees into a tree of lists, you can do so using `jax.tree_multimap`:
###Code
def tree_transpose(list_of_trees):
"""Convert a list of trees of identical structure into a single tree of lists."""
return jax.tree_multimap(lambda *xs: list(xs), *list_of_trees)
# Convert a dataset from row-major to column-major:
episode_steps = [dict(t=1, obs=3), dict(t=2, obs=4)]
tree_transpose(episode_steps)
###Output
_____no_output_____
###Markdown
For more complicated transposes, JAX provides `jax.tree_transpose`, which is more verbose, but allows you specify the structure of the inner and outer Pytree for more flexibility:
###Code
jax.tree_transpose(
outer_treedef = jax.tree_structure([0 for e in episode_steps]),
inner_treedef = jax.tree_structure(episode_steps[0]),
pytree_to_transpose = episode_steps
)
###Output
_____no_output_____
###Markdown
Working with pytrees[](https://colab.research.google.com/github/google/jax/blob/master/docs/jax-101/05.1-pytrees.ipynb)*Author: Vladimir Mikulik*Often, we want to operate on objects that look like dicts of arrays, or lists of lists of dicts, or other nested structures. In JAX, we refer to these as *pytrees*, but you can sometimes see them called *nests*, or just *trees*.JAX has built-in support for such objects, both in its library functions as well as through the use of functions from [`jax.tree_utils`](https://jax.readthedocs.io/en/latest/jax.tree_util.html) (with the most common ones also available as `jax.tree_*`). This section will explain how to use them, give some useful snippets and point out common gotchas. What is a pytree?As defined in the [JAX pytree docs](https://jax.readthedocs.io/en/latest/pytrees.html):> a pytree is a container of leaf elements and/or more pytrees. Containers include lists, tuples, and dicts. A leaf element is anything that’s not a pytree, e.g. an array. In other words, a pytree is just a possibly-nested standard or user-registered Python container. If nested, note that the container types do not need to match. A single “leaf”, i.e. a non-container object, is also considered a pytree.Some example pytrees:
###Code
import jax
import jax.numpy as jnp
example_trees = [
[1, 'a', object()],
(1, (2, 3), ()),
[1, {'k1': 2, 'k2': (3, 4)}, 5],
{'a': 2, 'b': (2, 3)},
jnp.array([1, 2, 3]),
]
# Let's see how many leaves they have:
for pytree in example_trees:
leaves = jax.tree_leaves(pytree)
print(f"{repr(pytree):<45} has {len(leaves)} leaves: {leaves}")
###Output
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
###Markdown
We've also introduced our first `jax.tree_*` function, which allowed us to extract the flattened leaves from the trees. Why pytrees?In machine learning, some places where you commonly find pytrees are:* Model parameters* Dataset entries* RL agent observationsThey also often arise naturally when working in bulk with datasets (e.g., lists of lists of dicts). Common pytree functionsThe most commonly used pytree functions are `jax.tree_map` and `jax.tree_multimap`. They work analogously to Python's native `map`, but on entire pytrees.For functions with one argument, use `jax.tree_map`:
###Code
list_of_lists = [
[1, 2, 3],
[1, 2],
[1, 2, 3, 4]
]
jax.tree_map(lambda x: x*2, list_of_lists)
###Output
_____no_output_____
###Markdown
To use functions with more than one argument, use `jax.tree_multimap`:
###Code
another_list_of_lists = list_of_lists
jax.tree_multimap(lambda x, y: x+y, list_of_lists, another_list_of_lists)
###Output
_____no_output_____
###Markdown
For `tree_multimap`, the structure of the inputs must exactly match. That is, lists must have the same number of elements, dicts must have the same keys, etc. Example: ML model parametersA simple example of training an MLP displays some ways in which pytree operations come in useful:
###Code
import numpy as np
def init_mlp_params(layer_widths):
params = []
for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]):
params.append(
dict(weights=np.random.normal(size=(n_in, n_out)) * np.sqrt(2/n_in),
biases=np.ones(shape=(n_out,))
)
)
return params
params = init_mlp_params([1, 128, 128, 1])
###Output
_____no_output_____
###Markdown
We can use `jax.tree_map` to check that the shapes of our parameters are what we expect:
###Code
jax.tree_map(lambda x: x.shape, params)
###Output
_____no_output_____
###Markdown
Now, let's train our MLP:
###Code
def forward(params, x):
*hidden, last = params
for layer in hidden:
x = jax.nn.relu(x @ layer['weights'] + layer['biases'])
return x @ last['weights'] + last['biases']
def loss_fn(params, x, y):
return jnp.mean((forward(params, x) - y) ** 2)
LEARNING_RATE = 0.0001
@jax.jit
def update(params, x, y):
grads = jax.grad(loss_fn)(params, x, y)
# Note that `grads` is a pytree with the same structure as `params`.
# `jax.grad` is one of the many JAX functions that has
# built-in support for pytrees.
# This is handy, because we can apply the SGD update using tree utils:
return jax.tree_multimap(
lambda p, g: p - LEARNING_RATE * g, params, grads
)
import matplotlib.pyplot as plt
xs = np.random.normal(size=(128, 1))
ys = xs ** 2
for _ in range(1000):
params = update(params, xs, ys)
plt.scatter(xs, ys)
plt.scatter(xs, forward(params, xs), label='Model prediction')
plt.legend()
###Output
_____no_output_____
###Markdown
Custom pytree nodesSo far, we've only been considering pytrees of lists, tuples, and dicts; everything else is considered a leaf. Therefore, if you define your own container class, it will be considered a leaf, even if it has trees inside it:
###Code
class MyContainer:
"""A named container."""
def __init__(self, name: str, a: int, b: int, c: int):
self.name = name
self.a = a
self.b = b
self.c = c
jax.tree_leaves([
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Accordingly, if we try to use a tree map expecting our leaves to be the elements inside the container, we will get an error:
###Code
jax.tree_map(lambda x: x + 1, [
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
To solve this, we need to register our container with JAX by telling it how to flatten and unflatten it:
###Code
from typing import Tuple, Iterable
def flatten_MyContainer(container) -> Tuple[Iterable[int], str]:
"""Returns an iterable over container contents, and aux data."""
flat_contents = [container.a, container.b, container.c]
# we don't want the name to appear as a child, so it is auxiliary data.
# auxiliary data is usually a description of the structure of a node,
# e.g., the keys of a dict -- anything that isn't a node's children.
aux_data = container.name
return flat_contents, aux_data
def unflatten_MyContainer(
aux_data: str, flat_contents: Iterable[int]) -> MyContainer:
"""Converts aux data and the flat contents into a MyContainer."""
return MyContainer(aux_data, *flat_contents)
jax.tree_util.register_pytree_node(
MyContainer, flatten_MyContainer, unflatten_MyContainer)
jax.tree_leaves([
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Modern Python comes equipped with helpful tools to make defining containers easier. Some of these will work with JAX out-of-the-box, but others require more care. For instance:
###Code
from typing import NamedTuple, Any
class MyOtherContainer(NamedTuple):
name: str
a: Any
b: Any
c: Any
# Since `tuple` is already registered with JAX, and NamedTuple is a subclass,
# this will work out-of-the-box:
jax.tree_leaves([
MyOtherContainer('Alice', 1, 2, 3),
MyOtherContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Notice that the `name` field now appears as a leaf, as all tuple elements are children. That's the price we pay for not having to register the class the hard way. Common pytree gotchas and patterns Gotchas Mistaking nodes for leavesA common problem to look out for is accidentally introducing tree nodes instead of leaves:
###Code
a_tree = [jnp.zeros((2, 3)), jnp.zeros((3, 4))]
# Try to make another tree with ones instead of zeros
shapes = jax.tree_map(lambda x: x.shape, a_tree)
jax.tree_map(jnp.ones, shapes)
###Output
_____no_output_____
###Markdown
What happened is that the `shape` of an array is a tuple, which is a pytree node, with its elements as leaves. Thus, in the map, instead of calling `jnp.ones` on e.g. `(2, 3)`, it's called on `2` and `3`.The solution will depend on the specifics, but there are two broadly applicable options:* rewrite the code to avoid the intermediate `tree_map`.* convert the tuple into an `np.array` or `jnp.array`, which makes the entiresequence a leaf. Handling of None`jax.tree_utils` treats `None` as a node without children, not as a leaf:
###Code
jax.tree_leaves([None, None, None])
###Output
_____no_output_____
###Markdown
Patterns Transposing treesIf you would like to transpose a pytree, i.e. turn a list of trees into a tree of lists, you can do so using `jax.tree_multimap`:
###Code
def tree_transpose(list_of_trees):
"""Convert a list of trees of identical structure into a single tree of lists."""
return jax.tree_multimap(lambda *xs: list(xs), *list_of_trees)
# Convert a dataset from row-major to column-major:
episode_steps = [dict(t=1, obs=3), dict(t=2, obs=4)]
tree_transpose(episode_steps)
###Output
_____no_output_____
###Markdown
For more complicated transposes, JAX provides `jax.tree_transpose`, which is more verbose, but allows you specify the structure of the inner and outer Pytree for more flexibility:
###Code
jax.tree_transpose(
outer_treedef = jax.tree_structure([0 for e in episode_steps]),
inner_treedef = jax.tree_structure(episode_steps[0]),
pytree_to_transpose = episode_steps
)
###Output
_____no_output_____
###Markdown
Working with Pytrees*Author: Vladimir Mikulik*Often, we want to operate on objects that look like dicts of arrays, or lists of lists of dicts, or other nested structures. In JAX, we refer to these as *pytrees*, but you can sometimes see them called *nests*, or just *trees*.JAX has built-in support for such objects, both in its library functions as well as through the use of functions from [`jax.tree_utils`](https://jax.readthedocs.io/en/latest/jax.tree_util.html) (with the most common ones also available as `jax.tree_*`). This section will explain how to use them, give some useful snippets and point out common gotchas. What is a pytree?As defined in the [JAX pytree docs](https://jax.readthedocs.io/en/latest/pytrees.html):> a pytree is a container of leaf elements and/or more pytrees. Containers include lists, tuples, and dicts. A leaf element is anything that’s not a pytree, e.g. an array. In other words, a pytree is just a possibly-nested standard or user-registered Python container. If nested, note that the container types do not need to match. A single “leaf”, i.e. a non-container object, is also considered a pytree.Some example pytrees:
###Code
import jax
import jax.numpy as jnp
example_trees = [
[1, 'a', object()],
(1, (2, 3), ()),
[1, {'k1': 2, 'k2': (3, 4)}, 5],
{'a': 2, 'b': (2, 3)},
jnp.array([1, 2, 3]),
]
# Let's see how many leaves they have:
for pytree in example_trees:
leaves = jax.tree_leaves(pytree)
print(f"{repr(pytree):<45} has {len(leaves)} leaves: {leaves}")
###Output
[1, 'a', <object object at 0x7fded60bb8c0>] has 3 leaves: [1, 'a', <object object at 0x7fded60bb8c0>]
(1, (2, 3), ()) has 3 leaves: [1, 2, 3]
[1, {'k1': 2, 'k2': (3, 4)}, 5] has 5 leaves: [1, 2, 3, 4, 5]
{'a': 2, 'b': (2, 3)} has 3 leaves: [2, 2, 3]
DeviceArray([1, 2, 3], dtype=int32) has 1 leaves: [DeviceArray([1, 2, 3], dtype=int32)]
###Markdown
We've also introduced our first `jax.tree_*` function, which allowed us to extract the flattened leaves from the trees. Why pytrees?In machine learning, some places where you commonly find pytrees are:* Model parameters* Dataset entries* RL agent observationsThey also often arise naturally when working in bulk with datasets (e.g., lists of lists of dicts). Common pytree functionsThe most commonly used pytree functions are `jax.tree_map` and `jax.tree_multimap`. They work analogously to Python's native `map`, but on entire pytrees.For functions with one argument, use `jax.tree_map`:
###Code
list_of_lists = [
[1, 2, 3],
[1, 2],
[1, 2, 3, 4]
]
jax.tree_map(lambda x: x*2, list_of_lists)
###Output
_____no_output_____
###Markdown
To use functions with more than one argument, use `jax.tree_multimap`:
###Code
another_list_of_lists = list_of_lists
jax.tree_multimap(lambda x, y: x+y, list_of_lists, another_list_of_lists)
###Output
_____no_output_____
###Markdown
For `tree_multimap`, the structure of the inputs must exactly match. That is, lists must have the same number of elements, dicts must have the same keys, etc. Example: ML model parametersA simple example of training an MLP displays some ways in which pytree operations come in useful:
###Code
import numpy as np
def init_mlp_params(layer_widths):
params = []
for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]):
params.append(
dict(weights=np.random.normal(size=(n_in, n_out)) * np.sqrt(2/n_in),
biases=np.ones(shape=(n_out,))
)
)
return params
params = init_mlp_params([1, 128, 128, 1])
###Output
_____no_output_____
###Markdown
We can use `jax.tree_map` to check that the shapes of our parameters are what we expect:
###Code
jax.tree_map(lambda x: x.shape, params)
###Output
_____no_output_____
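###Markdown
A related sanity check, sketched below: flattening the parameter pytree with `jax.tree_leaves` and summing the leaf sizes gives the total parameter count, regardless of how the tree is nested.
###Code
import jax
# Every leaf is an array, so `size` is defined on each of them.
num_params = sum(leaf.size for leaf in jax.tree_leaves(params))
print(f'Total number of parameters: {num_params:,}')
###Output
_____no_output_____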
###Markdown
Now, let's train our MLP:
###Code
def forward(params, x):
*hidden, last = params
for layer in hidden:
x = jax.nn.relu(x @ layer['weights'] + layer['biases'])
return x @ last['weights'] + last['biases']
def loss_fn(params, x, y):
return jnp.mean((forward(params, x) - y) ** 2)
LEARNING_RATE = 0.0001
@jax.jit
def update(params, x, y):
grads = jax.grad(loss_fn)(params, x, y)
# Note that `grads` is a pytree with the same structure as `params`.
# `jax.grad` is one of the many JAX functions that has
# built-in support for pytrees.
# This is handy, because we can apply the SGD update using tree utils:
return jax.tree_multimap(
lambda p, g: p - LEARNING_RATE * g, params, grads
)
import matplotlib.pyplot as plt
xs = np.random.normal(size=(128, 1))
ys = xs ** 2
for _ in range(1000):
params = update(params, xs, ys)
plt.scatter(xs, ys)
plt.scatter(xs, forward(params, xs), label='Model prediction')
plt.legend();
###Output
_____no_output_____
###Markdown
Custom pytree nodesSo far, we've only been considering pytrees of lists, tuples, and dicts; everything else is considered a leaf. Therefore, if you define your own container class, it will be considered a leaf, even if it has trees inside it:
###Code
class MyContainer:
"""A named container."""
def __init__(self, name: str, a: int, b: int, c: int):
self.name = name
self.a = a
self.b = b
self.c = c
jax.tree_leaves([
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Accordingly, if we try to use a tree map expecting our leaves to be the elements inside the container, we will get an error:
###Code
jax.tree_map(lambda x: x + 1, [
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
To solve this, we need to register our container with JAX by telling it how to flatten and unflatten it:
###Code
from typing import Tuple, Iterable
def flatten_MyContainer(container) -> Tuple[Iterable[int], str]:
"""Returns an iterable over container contents, and aux data."""
flat_contents = [container.a, container.b, container.c]
# we don't want the name to appear as a child, so it is auxiliary data.
# auxiliary data is usually a description of the structure of a node,
# e.g., the keys of a dict -- anything that isn't a node's children.
aux_data = container.name
return flat_contents, aux_data
def unflatten_MyContainer(
aux_data: str, flat_contents: Iterable[int]) -> MyContainer:
"""Converts aux data and the flat contents into a MyContainer."""
return MyContainer(aux_data, *flat_contents)
jax.tree_util.register_pytree_node(
MyContainer, flatten_MyContainer, unflatten_MyContainer)
jax.tree_leaves([
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
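###Markdown
Registration also makes `MyContainer` transparent to the other tree utilities. A minimal sketch: mapping over a container transforms the numeric fields, while the name is carried along as auxiliary data and restored by the unflatten rule.
###Code
import jax
mapped = jax.tree_map(lambda x: x + 1, MyContainer('Alice', 1, 2, 3))
print(mapped.name, mapped.a, mapped.b, mapped.c)
###Output
_____no_output_____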
###Markdown
Modern Python comes equipped with helpful tools to make defining containers easier. Some of these will work with JAX out-of-the-box, but others require more care. For instance:
###Code
from typing import NamedTuple, Any
class MyOtherContainer(NamedTuple):
name: str
a: Any
b: Any
c: Any
# Since `tuple` is already registered with JAX, and NamedTuple is a subclass,
# this will work out-of-the-box:
jax.tree_leaves([
MyOtherContainer('Alice', 1, 2, 3),
MyOtherContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Notice that the `name` field now appears as a leaf, as all tuple elements are children. That's the price we pay for not having to register the class the hard way. Common pytree gotchas and patterns Gotchas Mistaking nodes for leavesA common problem to look out for is accidentally introducing tree nodes instead of leaves:
###Code
a_tree = [jnp.zeros((2, 3)), jnp.zeros((3, 4))]
# Try to make another tree with ones instead of zeros
shapes = jax.tree_map(lambda x: x.shape, a_tree)
jax.tree_map(jnp.ones, shapes)
###Output
_____no_output_____
###Markdown
What happened is that the `shape` of an array is a tuple, which is a pytree node, with its elements as leaves. Thus, in the map, instead of calling `jnp.ones` on e.g. `(2, 3)`, it's called on `2` and `3`. The solution will depend on the specifics, but there are two broadly applicable options:* rewrite the code to avoid the intermediate `tree_map`.* convert the tuple into an `np.array` or `jnp.array`, which makes the entire sequence a leaf. Handling of None`jax.tree_utils` treats `None` as a node without children, not as a leaf:
###Code
jax.tree_leaves([None, None, None])
###Output
_____no_output_____
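###Markdown
A practical consequence, sketched below: because `None` is a childless node rather than a leaf, `jax.tree_map` never calls the function on it and simply carries it through in place.
###Code
import jax
# The function is applied to 1 and 2 only; the None survives untouched.
jax.tree_map(lambda x: x + 1, [1, None, 2])
###Output
_____no_output_____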
###Markdown
Patterns Transposing treesIf you would like to transpose a pytree, i.e. turn a list of trees into a tree of lists, you can do so using `jax.tree_multimap`:
###Code
def tree_transpose(list_of_trees):
"""Convert a list of trees of identical structure into a single tree of lists."""
return jax.tree_multimap(lambda *xs: list(xs), *list_of_trees)
# Convert a dataset from row-major to column-major:
episode_steps = [dict(t=1, obs=3), dict(t=2, obs=4)]
tree_transpose(episode_steps)
###Output
_____no_output_____
###Markdown
For more complicated transposes, JAX provides `jax.tree_transpose`, which is more verbose, but allows you to specify the structure of the inner and outer pytree for more flexibility:
###Code
jax.tree_transpose(
outer_treedef = jax.tree_structure([0 for e in episode_steps]),
inner_treedef = jax.tree_structure(episode_steps[0]),
pytree_to_transpose = episode_steps
)
###Output
_____no_output_____
###Markdown
Working with Pytrees[](https://colab.research.google.com/github/google/jax/blob/master/docs/jax-101/05.1-pytrees.ipynb)*Author: Vladimir Mikulik*Often, we want to operate on objects that look like dicts of arrays, or lists of lists of dicts, or other nested structures. In JAX, we refer to these as *pytrees*, but you can sometimes see them called *nests*, or just *trees*.JAX has built-in support for such objects, both in its library functions as well as through the use of functions from [`jax.tree_utils`](https://jax.readthedocs.io/en/latest/jax.tree_util.html) (with the most common ones also available as `jax.tree_*`). This section will explain how to use them, give some useful snippets and point out common gotchas. What is a pytree?As defined in the [JAX pytree docs](https://jax.readthedocs.io/en/latest/pytrees.html):> a pytree is a container of leaf elements and/or more pytrees. Containers include lists, tuples, and dicts. A leaf element is anything that’s not a pytree, e.g. an array. In other words, a pytree is just a possibly-nested standard or user-registered Python container. If nested, note that the container types do not need to match. A single “leaf”, i.e. a non-container object, is also considered a pytree.Some example pytrees:
###Code
import jax
import jax.numpy as jnp
example_trees = [
[1, 'a', object()],
(1, (2, 3), ()),
[1, {'k1': 2, 'k2': (3, 4)}, 5],
{'a': 2, 'b': (2, 3)},
jnp.array([1, 2, 3]),
]
# Let's see how many leaves they have:
for pytree in example_trees:
leaves = jax.tree_leaves(pytree)
print(f"{repr(pytree):<45} has {len(leaves)} leaves: {leaves}")
###Output
[1, 'a', <object object at 0x7fded60bb8c0>] has 3 leaves: [1, 'a', <object object at 0x7fded60bb8c0>]
(1, (2, 3), ()) has 3 leaves: [1, 2, 3]
[1, {'k1': 2, 'k2': (3, 4)}, 5] has 5 leaves: [1, 2, 3, 4, 5]
{'a': 2, 'b': (2, 3)} has 3 leaves: [2, 2, 3]
DeviceArray([1, 2, 3], dtype=int32) has 1 leaves: [DeviceArray([1, 2, 3], dtype=int32)]
###Markdown
We've also introduced our first `jax.tree_*` function, which allowed us to extract the flattened leaves from the trees. Why pytrees?In machine learning, some places where you commonly find pytrees are:* Model parameters* Dataset entries* RL agent observationsThey also often arise naturally when working in bulk with datasets (e.g., lists of lists of dicts). Common pytree functionsThe most commonly used pytree functions are `jax.tree_map` and `jax.tree_multimap`. They work analogously to Python's native `map`, but on entire pytrees.For functions with one argument, use `jax.tree_map`:
###Code
list_of_lists = [
[1, 2, 3],
[1, 2],
[1, 2, 3, 4]
]
jax.tree_map(lambda x: x*2, list_of_lists)
###Output
_____no_output_____
###Markdown
To use functions with more than one argument, use `jax.tree_multimap`:
###Code
another_list_of_lists = list_of_lists
jax.tree_multimap(lambda x, y: x+y, list_of_lists, another_list_of_lists)
###Output
_____no_output_____
###Markdown
For `tree_multimap`, the structure of the inputs must exactly match. That is, lists must have the same number of elements, dicts must have the same keys, etc. Example: ML model parametersA simple example of training an MLP displays some ways in which pytree operations come in useful:
###Code
import numpy as np
def init_mlp_params(layer_widths):
params = []
for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]):
params.append(
dict(weights=np.random.normal(size=(n_in, n_out)) * np.sqrt(2/n_in),
biases=np.ones(shape=(n_out,))
)
)
return params
params = init_mlp_params([1, 128, 128, 1])
###Output
_____no_output_____
###Markdown
We can use `jax.tree_map` to check that the shapes of our parameters are what we expect:
###Code
jax.tree_map(lambda x: x.shape, params)
###Output
_____no_output_____
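###Markdown
As an aside, the flattening that the tree utilities perform internally can be done explicitly. A minimal sketch using `jax.tree_flatten` and `jax.tree_unflatten` (helpers not used elsewhere in this notebook): the leaves and the tree structure separate cleanly and recombine without loss.
###Code
import jax
# Split the parameters into a flat list of arrays plus a treedef, then rebuild them.
leaves, treedef = jax.tree_flatten(params)
print(f'{len(leaves)} leaves')
print(treedef)
rebuilt = jax.tree_unflatten(treedef, leaves)
###Output
_____no_output_____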
###Markdown
Now, let's train our MLP:
###Code
def forward(params, x):
*hidden, last = params
for layer in hidden:
x = jax.nn.relu(x @ layer['weights'] + layer['biases'])
return x @ last['weights'] + last['biases']
def loss_fn(params, x, y):
return jnp.mean((forward(params, x) - y) ** 2)
LEARNING_RATE = 0.0001
@jax.jit
def update(params, x, y):
grads = jax.grad(loss_fn)(params, x, y)
# Note that `grads` is a pytree with the same structure as `params`.
# `jax.grad` is one of the many JAX functions that has
# built-in support for pytrees.
# This is handy, because we can apply the SGD update using tree utils:
return jax.tree_multimap(
lambda p, g: p - LEARNING_RATE * g, params, grads
)
import matplotlib.pyplot as plt
xs = np.random.normal(size=(128, 1))
ys = xs ** 2
for _ in range(1000):
params = update(params, xs, ys)
plt.scatter(xs, ys)
plt.scatter(xs, forward(params, xs), label='Model prediction')
plt.legend();
###Output
_____no_output_____
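###Markdown
Another place the tree utilities help after training, sketched below: the global L2 norm of all parameters can be computed by flattening the pytree, without worrying about its nesting.
###Code
import jax
import jax.numpy as jnp
param_norm = jnp.sqrt(sum(jnp.sum(leaf ** 2) for leaf in jax.tree_leaves(params)))
print(param_norm)
###Output
_____no_output_____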
###Markdown
Custom pytree nodesSo far, we've only been considering pytrees of lists, tuples, and dicts; everything else is considered a leaf. Therefore, if you define your own container class, it will be considered a leaf, even if it has trees inside it:
###Code
class MyContainer:
"""A named container."""
def __init__(self, name: str, a: int, b: int, c: int):
self.name = name
self.a = a
self.b = b
self.c = c
jax.tree_leaves([
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Accordingly, if we try to use a tree map expecting our leaves to be the elements inside the container, we will get an error:
###Code
jax.tree_map(lambda x: x + 1, [
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
To solve this, we need to register our container with JAX by telling it how to flatten and unflatten it:
###Code
from typing import Tuple, Iterable
def flatten_MyContainer(container) -> Tuple[Iterable[int], str]:
"""Returns an iterable over container contents, and aux data."""
flat_contents = [container.a, container.b, container.c]
# we don't want the name to appear as a child, so it is auxiliary data.
# auxiliary data is usually a description of the structure of a node,
# e.g., the keys of a dict -- anything that isn't a node's children.
aux_data = container.name
return flat_contents, aux_data
def unflatten_MyContainer(
aux_data: str, flat_contents: Iterable[int]) -> MyContainer:
"""Converts aux data and the flat contents into a MyContainer."""
return MyContainer(aux_data, *flat_contents)
jax.tree_util.register_pytree_node(
MyContainer, flatten_MyContainer, unflatten_MyContainer)
jax.tree_leaves([
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Modern Python comes equipped with helpful tools to make defining containers easier. Some of these will work with JAX out-of-the-box, but others require more care. For instance:
###Code
from typing import NamedTuple, Any
class MyOtherContainer(NamedTuple):
name: str
a: Any
b: Any
c: Any
# Since `tuple` is already registered with JAX, and NamedTuple is a subclass,
# this will work out-of-the-box:
jax.tree_leaves([
MyOtherContainer('Alice', 1, 2, 3),
MyOtherContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
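###Markdown
A quick sketch that previews the point below: every tuple element is a child, so `jax.tree_map` transforms the `name` field too (string repetition, in this case) while still rebuilding a `MyOtherContainer`.
###Code
import jax
# Expected result: MyOtherContainer(name='AliceAlice', a=2, b=4, c=6)
jax.tree_map(lambda x: x * 2, MyOtherContainer('Alice', 1, 2, 3))
###Output
_____no_output_____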
###Markdown
Notice that the `name` field now appears as a leaf, as all tuple elements are children. That's the price we pay for not having to register the class the hard way. Common pytree gotchas and patterns Gotchas Mistaking nodes for leavesA common problem to look out for is accidentally introducing tree nodes instead of leaves:
###Code
a_tree = [jnp.zeros((2, 3)), jnp.zeros((3, 4))]
# Try to make another tree with ones instead of zeros
shapes = jax.tree_map(lambda x: x.shape, a_tree)
jax.tree_map(jnp.ones, shapes)
###Output
_____no_output_____
###Markdown
What happened is that the `shape` of an array is a tuple, which is a pytree node, with its elements as leaves. Thus, in the map, instead of calling `jnp.ones` on e.g. `(2, 3)`, it's called on `2` and `3`. The solution will depend on the specifics, but there are two broadly applicable options:* rewrite the code to avoid the intermediate `tree_map`.* convert the tuple into an `np.array` or `jnp.array`, which makes the entire sequence a leaf. Handling of None`jax.tree_utils` treats `None` as a node without children, not as a leaf:
###Code
jax.tree_leaves([None, None, None])
###Output
_____no_output_____
###Markdown
Patterns Transposing treesIf you would like to transpose a pytree, i.e. turn a list of trees into a tree of lists, you can do so using `jax.tree_multimap`:
###Code
def tree_transpose(list_of_trees):
"""Convert a list of trees of identical structure into a single tree of lists."""
return jax.tree_multimap(lambda *xs: list(xs), *list_of_trees)
# Convert a dataset from row-major to column-major:
episode_steps = [dict(t=1, obs=3), dict(t=2, obs=4)]
tree_transpose(episode_steps)
###Output
_____no_output_____
###Markdown
For more complicated transposes, JAX provides `jax.tree_transpose`, which is more verbose, but allows you to specify the structure of the inner and outer pytree for more flexibility:
###Code
jax.tree_transpose(
outer_treedef = jax.tree_structure([0 for e in episode_steps]),
inner_treedef = jax.tree_structure(episode_steps[0]),
pytree_to_transpose = episode_steps
)
###Output
_____no_output_____
###Markdown
Working with Pytrees[](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/05.1-pytrees.ipynb)*Author: Vladimir Mikulik*Often, we want to operate on objects that look like dicts of arrays, or lists of lists of dicts, or other nested structures. In JAX, we refer to these as *pytrees*, but you can sometimes see them called *nests*, or just *trees*.JAX has built-in support for such objects, both in its library functions as well as through the use of functions from [`jax.tree_utils`](https://jax.readthedocs.io/en/latest/jax.tree_util.html) (with the most common ones also available as `jax.tree_*`). This section will explain how to use them, give some useful snippets and point out common gotchas. What is a pytree?As defined in the [JAX pytree docs](https://jax.readthedocs.io/en/latest/pytrees.html):> a pytree is a container of leaf elements and/or more pytrees. Containers include lists, tuples, and dicts. A leaf element is anything that’s not a pytree, e.g. an array. In other words, a pytree is just a possibly-nested standard or user-registered Python container. If nested, note that the container types do not need to match. A single “leaf”, i.e. a non-container object, is also considered a pytree.Some example pytrees:
###Code
import jax
import jax.numpy as jnp
example_trees = [
[1, 'a', object()],
(1, (2, 3), ()),
[1, {'k1': 2, 'k2': (3, 4)}, 5],
{'a': 2, 'b': (2, 3)},
jnp.array([1, 2, 3]),
]
# Let's see how many leaves they have:
for pytree in example_trees:
leaves = jax.tree_leaves(pytree)
print(f"{repr(pytree):<45} has {len(leaves)} leaves: {leaves}")
###Output
[1, 'a', <object object at 0x7fded60bb8c0>] has 3 leaves: [1, 'a', <object object at 0x7fded60bb8c0>]
(1, (2, 3), ()) has 3 leaves: [1, 2, 3]
[1, {'k1': 2, 'k2': (3, 4)}, 5] has 5 leaves: [1, 2, 3, 4, 5]
{'a': 2, 'b': (2, 3)} has 3 leaves: [2, 2, 3]
DeviceArray([1, 2, 3], dtype=int32) has 1 leaves: [DeviceArray([1, 2, 3], dtype=int32)]
###Markdown
We've also introduced our first `jax.tree_*` function, which allowed us to extract the flattened leaves from the trees. Why pytrees?In machine learning, some places where you commonly find pytrees are:* Model parameters* Dataset entries* RL agent observationsThey also often arise naturally when working in bulk with datasets (e.g., lists of lists of dicts). Common pytree functionsPerhaps the most commonly used pytree function is `jax.tree_map`. It works analogously to Python's native `map`, but on entire pytrees:
###Code
list_of_lists = [
[1, 2, 3],
[1, 2],
[1, 2, 3, 4]
]
jax.tree_map(lambda x: x*2, list_of_lists)
###Output
_____no_output_____
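###Markdown
The same applies to any supported container, not just lists. A minimal sketch with a dict: the keys are preserved and only the leaf values are transformed.
###Code
import jax
# Expected result: {'a': 1, 'b': (4, 9)}
jax.tree_map(lambda x: x ** 2, {'a': 1, 'b': (2, 3)})
###Output
_____no_output_____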
###Markdown
`jax.tree_map` also works with multiple arguments:
###Code
another_list_of_lists = list_of_lists
jax.tree_map(lambda x, y: x+y, list_of_lists, another_list_of_lists)
###Output
_____no_output_____
###Markdown
When using multiple arguments with `jax.tree_map`, the structure of the inputs must exactly match. That is, lists must have the same number of elements, dicts must have the same keys, etc. Example: ML model parametersA simple example of training an MLP displays some ways in which pytree operations come in useful:
###Code
import numpy as np
def init_mlp_params(layer_widths):
params = []
for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]):
params.append(
dict(weights=np.random.normal(size=(n_in, n_out)) * np.sqrt(2/n_in),
biases=np.ones(shape=(n_out,))
)
)
return params
params = init_mlp_params([1, 128, 128, 1])
###Output
_____no_output_____
###Markdown
We can use `jax.tree_map` to check that the shapes of our parameters are what we expect:
###Code
jax.tree_map(lambda x: x.shape, params)
###Output
_____no_output_____
###Markdown
Now, let's train our MLP:
###Code
def forward(params, x):
*hidden, last = params
for layer in hidden:
x = jax.nn.relu(x @ layer['weights'] + layer['biases'])
return x @ last['weights'] + last['biases']
def loss_fn(params, x, y):
return jnp.mean((forward(params, x) - y) ** 2)
LEARNING_RATE = 0.0001
@jax.jit
def update(params, x, y):
grads = jax.grad(loss_fn)(params, x, y)
# Note that `grads` is a pytree with the same structure as `params`.
# `jax.grad` is one of the many JAX functions that has
# built-in support for pytrees.
# This is handy, because we can apply the SGD update using tree utils:
return jax.tree_map(
lambda p, g: p - LEARNING_RATE * g, params, grads
)
import matplotlib.pyplot as plt
xs = np.random.normal(size=(128, 1))
ys = xs ** 2
for _ in range(1000):
params = update(params, xs, ys)
plt.scatter(xs, ys)
plt.scatter(xs, forward(params, xs), label='Model prediction')
plt.legend();
###Output
_____no_output_____
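###Markdown
The comment inside `update` notes that the gradients mirror the parameter pytree; a quick sketch to confirm it by inspecting the gradient shapes, which come out identical to the earlier shape check on `params`.
###Code
import jax
grads = jax.grad(loss_fn)(params, xs, ys)
jax.tree_map(lambda g: g.shape, grads)
###Output
_____no_output_____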
###Markdown
Custom pytree nodesSo far, we've only been considering pytrees of lists, tuples, and dicts; everything else is considered a leaf. Therefore, if you define your own container class, it will be considered a leaf, even if it has trees inside it:
###Code
class MyContainer:
"""A named container."""
def __init__(self, name: str, a: int, b: int, c: int):
self.name = name
self.a = a
self.b = b
self.c = c
jax.tree_leaves([
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Accordingly, if we try to use a tree map expecting our leaves to be the elements inside the container, we will get an error:
###Code
jax.tree_map(lambda x: x + 1, [
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
To solve this, we need to register our container with JAX by telling it how to flatten and unflatten it:
###Code
from typing import Tuple, Iterable
def flatten_MyContainer(container) -> Tuple[Iterable[int], str]:
"""Returns an iterable over container contents, and aux data."""
flat_contents = [container.a, container.b, container.c]
# we don't want the name to appear as a child, so it is auxiliary data.
# auxiliary data is usually a description of the structure of a node,
# e.g., the keys of a dict -- anything that isn't a node's children.
aux_data = container.name
return flat_contents, aux_data
def unflatten_MyContainer(
aux_data: str, flat_contents: Iterable[int]) -> MyContainer:
"""Converts aux data and the flat contents into a MyContainer."""
return MyContainer(aux_data, *flat_contents)
jax.tree_util.register_pytree_node(
MyContainer, flatten_MyContainer, unflatten_MyContainer)
jax.tree_leaves([
MyContainer('Alice', 1, 2, 3),
MyContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Modern Python comes equipped with helpful tools to make defining containers easier. Some of these will work with JAX out-of-the-box, but others require more care. For instance:
###Code
from typing import NamedTuple, Any
class MyOtherContainer(NamedTuple):
name: str
a: Any
b: Any
c: Any
# Since `tuple` is already registered with JAX, and NamedTuple is a subclass,
# this will work out-of-the-box:
jax.tree_leaves([
MyOtherContainer('Alice', 1, 2, 3),
MyOtherContainer('Bob', 4, 5, 6)
])
###Output
_____no_output_____
###Markdown
Notice that the `name` field now appears as a leaf, as all tuple elements are children. That's the price we pay for not having to register the class the hard way. Common pytree gotchas and patterns Gotchas Mistaking nodes for leavesA common problem to look out for is accidentally introducing tree nodes instead of leaves:
###Code
a_tree = [jnp.zeros((2, 3)), jnp.zeros((3, 4))]
# Try to make another tree with ones instead of zeros
shapes = jax.tree_map(lambda x: x.shape, a_tree)
jax.tree_map(jnp.ones, shapes)
###Output
_____no_output_____
###Markdown
What happened is that the `shape` of an array is a tuple, which is a pytree node, with its elements as leaves. Thus, in the map, instead of calling `jnp.ones` on e.g. `(2, 3)`, it's called on `2` and `3`. The solution will depend on the specifics, but there are two broadly applicable options:* rewrite the code to avoid the intermediate `tree_map`.* convert the tuple into an `np.array` or `jnp.array`, which makes the entire sequence a leaf. Handling of None`jax.tree_utils` treats `None` as a node without children, not as a leaf:
###Code
jax.tree_leaves([None, None, None])
###Output
_____no_output_____
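###Markdown
If you do want `None` to be treated as a leaf, the flattening functions accept an `is_leaf` predicate. A minimal sketch, assuming the keyword is available in your JAX version:
###Code
import jax
leaves, _ = jax.tree_flatten([None, None, None], is_leaf=lambda x: x is None)
# Expected result: [None, None, None]
leaves
###Output
_____no_output_____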
###Markdown
Patterns Transposing treesIf you would like to transpose a pytree, i.e. turn a list of trees into a tree of lists, you can do so using `jax.tree_map`:
###Code
def tree_transpose(list_of_trees):
"""Convert a list of trees of identical structure into a single tree of lists."""
return jax.tree_map(lambda *xs: list(xs), *list_of_trees)
# Convert a dataset from row-major to column-major:
episode_steps = [dict(t=1, obs=3), dict(t=2, obs=4)]
tree_transpose(episode_steps)
###Output
_____no_output_____
###Markdown
For more complicated transposes, JAX provides `jax.tree_transpose`, which is more verbose, but allows you to specify the structure of the inner and outer pytree for more flexibility:
###Code
jax.tree_transpose(
outer_treedef = jax.tree_structure([0 for e in episode_steps]),
inner_treedef = jax.tree_structure(episode_steps[0]),
pytree_to_transpose = episode_steps
)
###Output
_____no_output_____ |
tests/python/mnist/MnistIirFilterTest.ipynb | ###Markdown
A test of an IIR-filter-like approach: an experiment in which the output is downscaled with MaxPooling and fed back in for the next frame.
###Code
import os
import shutil
import numpy as np
from tqdm import tqdm
from tqdm.notebook import tqdm
import torch
import torchvision
import torchvision.transforms as transforms
import binarybrain as bb
###Output
_____no_output_____
###Markdown
Dataset The dataset is prepared with torchvision.
###Code
# configuration
net_name = 'MnistIirFilterTest'
data_path = os.path.join('./data/', net_name)
bin_mode = False
frame_modulation_size = 7
epochs = 4
mini_batch_size = 64
# dataset
dataset_path = './data/'
dataset_train = torchvision.datasets.MNIST(root=dataset_path, train=True, transform=transforms.ToTensor(), download=True)
dataset_test = torchvision.datasets.MNIST(root=dataset_path, train=False, transform=transforms.ToTensor(), download=True)
loader_train = torch.utils.data.DataLoader(dataset=dataset_train, batch_size=mini_batch_size, shuffle=True, num_workers=2)
loader_test = torch.utils.data.DataLoader(dataset=dataset_test, batch_size=mini_batch_size, shuffle=False, num_workers=2)
###Output
_____no_output_____
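###Markdown
As a quick sanity check, sketched below under the standard torchvision MNIST layout, we can look at one training sample before building the network.
###Code
import matplotlib.pyplot as plt
# Each dataset item is an (image, label) pair; the image tensor has shape (1, 28, 28).
image, label = dataset_train[0]
plt.imshow(image[0], cmap='gray')
plt.title(f'label = {label}')
plt.show()
###Output
_____no_output_____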
###Markdown
Building the network
###Code
# In binary mode, memory usage can be reduced by using the BIT type
bin_dtype = bb.DType.BIT if bin_mode else bb.DType.FP32
def create_cnv(output_ch, filter_size=(3, 3), padding='same', fw_dtype=bin_dtype):
return bb.Convolution2d(
bb.Sequential([
bb.DenseAffine([output_ch, 1, 1]),
bb.ReLU(),
]),
filter_size=filter_size,
padding=padding,
fw_dtype=fw_dtype)
def create_fc(output_ch, fw_dtype=bin_dtype):
return bb.Convolution2d(
bb.Sequential([
bb.DenseAffine([output_ch, 1, 1]),
]),
filter_size=(1, 1),
fw_dtype=fw_dtype)
class MyNetwork(bb.Sequential):
def __init__(self):
self.N = 4
# self.r2b = bb.RealToBinary(frame_modulation_size=frame_modulation_size, bin_dtype=bin_dtype)
# self.b2r = bb.BinaryToReal(frame_integration_size=frame_modulation_size, bin_dtype=bin_dtype)
self.cnvs0 = bb.Sequential()
self.cnvs1 = bb.Sequential()
self.fcs = bb.Sequential()
self.upss = bb.Sequential()
self.pols = bb.Sequential()
for _ in range(self.N):
self.cnvs0.append(create_cnv(64))
self.cnvs1.append(create_cnv(64))
self.fcs.append(create_fc(10))
self.upss.append(bb.UpSampling((2, 2)))
self.pols.append(bb.MaxPooling((2, 2)))
super(MyNetwork, self).__init__([self.cnvs0, self.cnvs1, self.fcs, self.upss, self.pols])
def set_input_shape(self, shape):
for i in range(self.N):
shape1 = self.cnvs0[i].set_input_shape(shape)
for i in range(self.N):
shape2 = self.cnvs1[i].set_input_shape(shape1)
for i in range(self.N):
self.fcs[i].set_input_shape(shape2)
for i in range(self.N):
shape3 = self.pols[i].set_input_shape(shape2)
for i in range(self.N):
self.upss[i].set_input_shape(shape3)
def param_copy(self):
for i in range(1, self.N):
W = self.cnvs0[i][1][0].W()
b = self.cnvs0[i][1][0].b()
W *= 0; W += self.cnvs0[0][1][0].W()
b *= 0; b += self.cnvs0[0][1][0].b()
W = self.cnvs1[i][1][0].W()
b = self.cnvs1[i][1][0].b()
W *= 0; W += self.cnvs1[0][1][0].W()
b *= 0; b += self.cnvs1[0][1][0].b()
W = self.fcs[i][1][0].W()
b = self.fcs[i][1][0].b()
W *= 0; W += self.fcs[0][1][0].W()
b *= 0; b += self.fcs[0][1][0].b()
def grad_marge(self):
dW0 = self.cnvs0[0][1][0].dW()
db0 = self.cnvs0[0][1][0].db()
dW1 = self.cnvs1[0][1][0].dW()
db1 = self.cnvs1[0][1][0].db()
dW2 = self.fcs[0][1][0].dW()
db2 = self.fcs[0][1][0].db()
for i in range(1, self.N):
dW0 += self.cnvs0[i][1][0].dW()
db0 += self.cnvs0[i][1][0].db()
dW1 += self.cnvs1[i][1][0].dW()
db1 += self.cnvs1[i][1][0].db()
dW2 += self.fcs[i][1][0].dW()
db2 += self.fcs[i][1][0].db()
def forward(self, x_buf, train=True):
x = x_buf.numpy()
px_buf = bb.FrameBuffer.from_numpy(np.zeros((x_buf.get_frame_size(), 64, 14, 14), dtype=np.float32))
y_bufs = []
for i in range(self.N):
# Concatenate with the output of the previous frame
px_buf = self.upss[i].forward(px_buf, train=train)
px = px_buf.numpy()
x_buf = bb.FrameBuffer.from_numpy(np.concatenate((x, px), 1))
# forward
x_buf = self.cnvs0[i].forward(x_buf, train=train)
x_buf = self.cnvs1[i].forward(x_buf, train=train)
y_buf = self.fcs[i].forward(x_buf, train=train)
# Append as one of the outputs
y_bufs.append(y_buf)
px_buf = self.pols[i].forward(x_buf, train=train)
return y_bufs
def backward(self, dy_bufs):
pdy_buf = bb.FrameBuffer.from_numpy(np.zeros((dy_bufs[0].get_frame_size(), 64, 14, 14), dtype=np.float32))
for i in reversed(range(self.N)):
pdy_buf = self.pols[i].backward(pdy_buf)
dy_buf = self.fcs[i].backward(dy_bufs[i])
dx_buf = self.cnvs1[i].backward(dy_buf + pdy_buf)
dx_buf = self.cnvs0[i].backward(dx_buf)
dx = dx_buf.numpy()[:,1:]
pdy_buf = self.upss[i].backward(bb.FrameBuffer.from_numpy(dx))
net = MyNetwork()
net.set_input_shape([64+1, 28, 28])
net.param_copy()
net.grad_marge()
#bb.load_networks(data_path, net)
losses = []
for _ in range(net.N):
losses.append(bb.LossSoftmaxCrossEntropy())
optimizer = bb.OptimizerAdam()
metrics = bb.MetricsCategoricalAccuracy()
parameters = bb.Variables()
parameters.append(net.cnvs0[0].get_parameters())
parameters.append(net.cnvs1[0].get_parameters())
parameters.append(net.fcs[0].get_parameters())
gradients = bb.Variables()
gradients.append(net.cnvs0[0].get_gradients())
gradients.append(net.cnvs1[0].get_gradients())
gradients.append(net.fcs[0].get_gradients())
optimizer.set_variables(parameters, gradients)
epochs = 32
for epoch in range(epochs):
with tqdm(loader_train) as tq:
for images, labels in tq:
x_buf = bb.FrameBuffer.from_numpy(np.array(images).astype(np.float32))
t = np.zeros((len(labels), 10, 28, 28), dtype=np.float32)
for i in range(len(labels)):
t[i][labels[i]][13:15,13:15] += 1 # evaluate only on the pixels near the center
t_buf = bb.FrameBuffer.from_numpy(t)
net.param_copy()
y_bufs = net.forward(x_buf, train=True)
dy_bufs = []
for i in range(net.N):
dy_buf = losses[i].calculate(y_bufs[i], t_buf)
dy_bufs.append(dy_buf)
metrics.calculate(y_bufs[net.N-1], t_buf)
net.backward(dy_bufs)
net.grad_marge()
optimizer.update()
loss = 0
for i in range(net.N):
loss += losses[i].get()
tq.set_postfix(loss=loss, metrics=metrics.get())
bb.save_networks(data_path, net)
###Output
_____no_output_____ |
.ipynb_checkpoints/04-spatial-join-checkpoint.ipynb | ###Markdown
Recreating windfield risk analysis In this lesson, we are replicating the windfield risk analysis presented by Chad Council in his lecture. We will be using two main data sets:- The wind field for the [1938 New England hurricane](https://en.wikipedia.org/wiki/1938_New_England_hurricane)- The [CDC Social Vulnerability Index (SVI) for 2018](https://svi.cdc.gov/data-and-tools-download.html)This analysis identifies which areas would be at highest risk of wind damage if the 1938 hurricane hit today.
###Code
# imports used throughout this notebook (geopandas, numpy, and matplotlib)
import geopandas as gpd
import numpy as np
import matplotlib.pyplot as plt
# unzip windfield into a folder called windfield
!unzip WindField.zip -d windfield
# load windfield into gpd geodataframe, plot the gridcode (wind speed rating: higher is more dangerous)
windfield_df = gpd.read_file('windfield/WindField.shp')
windfield_df.plot('GRIDCODE', legend=True)
###Output
_____no_output_____
###Markdown
The social vulnerability index data is pre-loaded in the course shared folder at:`/home/jovyan/course/04/SVI2018/`The documentation for the dataset has been included in this repository in `SVI2018Documentation.pdf` for ease of reference. It documents each column, and the data sources that were used to compute the respective entry.
###Code
# load SVI2018 from course shared folder
svi_df = gpd.read_file('/home/jovyan/course/04/SVI2018/SVI2018_US_tract.shp')
svi_df
###Output
_____no_output_____
###Markdown
The overall vulnerability score is given in the column `RPL_THEMES`. Values of `-999` indicate missing data. Higher values (closer to 1) are higher vulnerability. Lower values (closer to 0) are less vulnerable.
###Code
svi_df['RPL_THEMES']
# replace missing data with None values
svi_df['RPL_THEMES'] = svi_df['RPL_THEMES'].replace({-999:np.nan})
# take a look at overall SVI (takes a minute to run because it's the whole country)
fig = plt.figure(figsize=(20,10))
ax = fig.add_subplot(1,1,1)
svi_df.dropna(subset=['RPL_THEMES']).plot('RPL_THEMES', legend=True, ax=ax)
ax.set_xlim([-180,-60])
###Output
_____no_output_____
###Markdown
To combine the two data sets, we will use a powerful function called a "spatial join". Recall the database joins from lesson 1: they merge dataframes based on shared values in a column.For geodataframes, we can do something similar using the geometry column: we can merge dataframe rows based on how their respective geometries interact with one another.The function we use to do this is [`gpd.sjoin()`](https://geopandas.org/reference/geopandas.sjoin.html). Let's take a look at the function
###Code
gpd.sjoin?
###Output
_____no_output_____
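###Markdown
One caveat worth checking first, sketched below: a spatial join is only meaningful when both layers share the same coordinate reference system (CRS), so we verify and, if necessary, reproject one layer onto the other's CRS.
###Code
# Compare the CRS of the two layers; reproject the windfield if they differ.
print(svi_df.crs)
print(windfield_df.crs)
if windfield_df.crs != svi_df.crs:
    windfield_df = windfield_df.to_crs(svi_df.crs)
###Output
_____no_output_____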
###Markdown
There are three important sets of arguments for the function:- left and right geodataframes- 'how': what kind of join: left, right, or inner- 'op': what kind of spatial interaction between the geometries
###Code
joined_df = gpd.sjoin(svi_df, windfield_df, how='inner', op='intersects')
joined_df
# we can see the shape of the joined dataframe: it only includes the intersection of the two datasets
# at a census-tract level
joined_df.plot()
###Output
_____no_output_____
###Markdown
To compute the wind-vulnerability score, we just multiply the respective columns:
###Code
joined_df['wind_vulnerability'] = joined_df['RPL_THEMES'] * joined_df['GRIDCODE']
joined_df.plot('wind_vulnerability', legend=True)
###Output
_____no_output_____ |
chapter_02/chapter_02.ipynb | ###Markdown
Chapter 2 - Exploring the Structure of a Dash App* Using Jupyter Notebooks to run Dash apps * Creating a standalone pure Python function * Understanding the id parameter of Dash components * Using Dash Inputs and Outputs * Incorporating the function into the app - creating your first reactive program * Running your first interactive app
###Code
from jupyter_dash import JupyterDash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Output, Input
app = JupyterDash(__name__)
app.layout = html.Div([
dcc.Dropdown(id='color_dropdown',
options=[{'label': color, 'value': color}
for color in ['blue', 'green', 'yellow']]),
html.Br(),
html.Div(id='color_output')
])
@app.callback(Output('color_output', 'children'),
Input('color_dropdown', 'value'))
def display_selected_color(color):
if color is None:
color = 'nothing'
return 'You selected ' + color
if __name__ == '__main__':
app.run_server(mode='inline')
import os
os.listdir('../data')
import pandas as pd
poverty_data = pd.read_csv('../data/PovStatsData.csv')
poverty_data.head(3)
from jupyter_dash import JupyterDash
import dash_html_components as html
import dash_core_components as dcc
import dash_bootstrap_components as dbc
from dash.dependencies import Output, Input
app = JupyterDash(__name__)
app.layout = html.Div([
dcc.Dropdown(id='country',
options=[{'label': country, 'value': country}
for country in poverty_data['Country Name'].unique()]),
html.Br(),
html.Div(id='report')
])
@app.callback(Output('report', 'children'),
Input('country', 'value'))
def display_country_report(country):
if country is None:
return ''
filtered_df = poverty_data[(poverty_data['Country Name']==country) &
(poverty_data['Indicator Name']=='Population, total')]
population = filtered_df.loc[:, '2010'].values[0]
return [html.H3(country),
f'The population of {country} in 2010 was {population:,.0f}.']
if __name__ == '__main__':
app.run_server(mode='external', height=200, width='30%', port=8051)
###Output
_____no_output_____ |