| path (string, 7 to 265 chars) | concatenated_notebook (string, 46 to 17M chars) |
|---|---|
01 Machine Learning/scikit_examples_jupyter/classification/plot_classification_probability.ipynb | ###Markdown
Plot classification probability

Plot the classification probability for different classifiers. We use a 3-class dataset, and we classify it with a Support Vector classifier, L1- and L2-penalized logistic regression with either a One-Vs-Rest or multinomial setting, and Gaussian process classification. Linear SVC is not a probabilistic classifier by default but it has a built-in calibration option enabled in this example (`probability=True`). The logistic regression with One-Vs-Rest is not a multiclass classifier out of the box. As a result it has more trouble in separating class 2 and 3 than the other estimators.
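As a hedged aside (not part of the original example): `LinearSVC` itself has no `probability` option, but its decision scores can be turned into probabilities by wrapping it in `CalibratedClassifierCV`. A minimal sketch, assuming only scikit-learn:

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import load_iris
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)

# Cross-validated calibration gives the linear SVM a predict_proba method
calibrated_svc = CalibratedClassifierCV(LinearSVC(C=10, max_iter=10000))
calibrated_svc.fit(X, y)
print(calibrated_svc.predict_proba(X[:3]))  # each row sums to 1
```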
###Code
print(__doc__)
# Author: Alexandre Gramfort <[email protected]>
# License: BSD 3 clause
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data[:, 0:2] # we only take the first two features for visualization
y = iris.target
n_features = X.shape[1]
C = 10
kernel = 1.0 * RBF([1.0, 1.0]) # for GPC
# Create different classifiers.
classifiers = {
'L1 logistic': LogisticRegression(C=C, penalty='l1',
solver='saga',
multi_class='multinomial',
max_iter=10000),
'L2 logistic (Multinomial)': LogisticRegression(C=C, penalty='l2',
solver='saga',
multi_class='multinomial',
max_iter=10000),
'L2 logistic (OvR)': LogisticRegression(C=C, penalty='l2',
solver='saga',
multi_class='ovr',
max_iter=10000),
'Linear SVC': SVC(kernel='linear', C=C, probability=True,
random_state=0),
'GPC': GaussianProcessClassifier(kernel)
}
n_classifiers = len(classifiers)
plt.figure(figsize=(3 * 2, n_classifiers * 2))
plt.subplots_adjust(bottom=.2, top=.95)
xx = np.linspace(3, 9, 100)
yy = np.linspace(1, 5, 100).T
xx, yy = np.meshgrid(xx, yy)
Xfull = np.c_[xx.ravel(), yy.ravel()]
for index, (name, classifier) in enumerate(classifiers.items()):
classifier.fit(X, y)
y_pred = classifier.predict(X)
accuracy = accuracy_score(y, y_pred)
print("Accuracy (train) for %s: %0.1f%% " % (name, accuracy * 100))
# View probabilities:
probas = classifier.predict_proba(Xfull)
n_classes = np.unique(y_pred).size
for k in range(n_classes):
plt.subplot(n_classifiers, n_classes, index * n_classes + k + 1)
plt.title("Class %d" % k)
if k == 0:
plt.ylabel(name)
imshow_handle = plt.imshow(probas[:, k].reshape((100, 100)),
extent=(3, 9, 1, 5), origin='lower')
plt.xticks(())
plt.yticks(())
idx = (y_pred == k)
if idx.any():
plt.scatter(X[idx, 0], X[idx, 1], marker='o', c='w', edgecolor='k')
ax = plt.axes([0.15, 0.04, 0.7, 0.05])
plt.title("Probability")
plt.colorbar(imshow_handle, cax=ax, orientation='horizontal')
plt.show()
###Output
_____no_output_____ |
jupyter/pandas-tut/06-Important Methods.ipynb | ###Markdown
Tutorial 6: Important Methods in Pandas
###Code
import pandas as pd
import numpy as np
s=pd.Series([1,2,3,4],
index=["a","b","c","d"])
s
s["a"]
s2=s.reindex(["b","d","a","c","e"])
s2
s3=pd.Series(["blue","yellow","purple"],
index=[0,2,4])
s3
s3.reindex(range(6), method="ffill")  # forward-fill values for the newly added index labels
df=pd.DataFrame(np.arange(9).reshape(3,3),
index=["a","c","d"],
columns=["Tim","Tom","Kate"])
df
df2=df.reindex(["d","c","b","a"])
df2
names=["Kate","Tim","Tom"]
df.reindex(columns=names)
df.loc[["c","d","a"]]
s=pd.Series(np.arange(5.),
index=["a","b","c","d","e"])
s
new_s=s.drop("b")
new_s
s.drop(["c","d"])
data=pd.DataFrame(np.arange(16).reshape(4,4),
index=["Kate","Tim",
"Tom","Alex"],
columns=list("ABCD"))
data
data.drop(["Kate","Tim"])
data.drop("A",axis=1)
data.drop("Kate",axis=0)
data
data.mean(axis="index")
data.mean(axis="columns")
###Output
_____no_output_____ |
Algorithms/landsat_radiance.ipynb | ###Markdown
View source on GitHub Notebook Viewer Run in Google Colab

Install Earth Engine API and geemap

Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.

**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend, enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map

The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Load a raw Landsat scene and display it.
raw = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318')
Map.centerObject(raw, 10)
Map.addLayer(raw, {'bands': ['B4', 'B3', 'B2'], 'min': 6000, 'max': 12000}, 'raw')
# Convert the raw data to radiance.
radiance = ee.Algorithms.Landsat.calibratedRadiance(raw)
Map.addLayer(radiance, {'bands': ['B4', 'B3', 'B2'], 'max': 90}, 'radiance')
# Convert the raw data to top-of-atmosphere reflectance.
toa = ee.Algorithms.Landsat.TOA(raw)
Map.addLayer(toa, {'bands': ['B4', 'B3', 'B2'], 'max': 0.2}, 'toa reflectance')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab

Install Earth Engine API

Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`. The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically installs its dependencies, including earthengine-api and folium.
###Code
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map

This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Load a raw Landsat scene and display it.
raw = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318')
Map.centerObject(raw, 10)
Map.addLayer(raw, {'bands': ['B4', 'B3', 'B2'], 'min': 6000, 'max': 12000}, 'raw')
# Convert the raw data to radiance.
radiance = ee.Algorithms.Landsat.calibratedRadiance(raw)
Map.addLayer(radiance, {'bands': ['B4', 'B3', 'B2'], 'max': 90}, 'radiance')
# Convert the raw data to top-of-atmosphere reflectance.
toa = ee.Algorithms.Landsat.TOA(raw)
Map.addLayer(toa, {'bands': ['B4', 'B3', 'B2'], 'max': 0.2}, 'toa reflectance')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab

Install Earth Engine API and geemap

Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
###Output
_____no_output_____
###Markdown
Create an interactive map

The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Load a raw Landsat scene and display it.
raw = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318')
Map.centerObject(raw, 10)
Map.addLayer(raw, {'bands': ['B4', 'B3', 'B2'], 'min': 6000, 'max': 12000}, 'raw')
# Convert the raw data to radiance.
radiance = ee.Algorithms.Landsat.calibratedRadiance(raw)
Map.addLayer(radiance, {'bands': ['B4', 'B3', 'B2'], 'max': 90}, 'radiance')
# Convert the raw data to top-of-atmosphere reflectance.
toa = ee.Algorithms.Landsat.TOA(raw)
Map.addLayer(toa, {'bands': ['B4', 'B3', 'B2'], 'max': 0.2}, 'toa reflectance')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
Pydeck Earth Engine Introduction

This is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run:

```
conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -y
source activate pydeck-ee
jupyter nbextension install --sys-prefix --symlink --overwrite --py pydeck
jupyter nbextension enable --sys-prefix --py pydeck
```

then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import required packages:
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import requests
import ee
###Output
_____no_output_____
###Markdown
Authentication

Using Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line. Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create Map

Next it's time to create a map. Here we create an `ee.Image` object.
###Code
# Initialize objects
ee_layers = []
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# %%
# Add Earth Engine dataset
# Load a raw Landsat scene and display it.
raw = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318')
ee_layers.append(EarthEngineLayer(ee_object=raw, vis_params={'bands':['B4','B3','B2'],'min':6000,'max':12000}))
# Convert the raw data to radiance.
radiance = ee.Algorithms.Landsat.calibratedRadiance(raw)
ee_layers.append(EarthEngineLayer(ee_object=radiance, vis_params={'bands':['B4','B3','B2'],'max':90}))
# Convert the raw data to top-of-atmosphere reflectance.
toa = ee.Algorithms.Landsat.TOA(raw)
ee_layers.append(EarthEngineLayer(ee_object=toa, vis_params={'bands':['B4','B3','B2'],'max':0.2}))
###Output
_____no_output_____
###Markdown
Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map:
###Code
r = pdk.Deck(layers=ee_layers, initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab

Install Earth Engine API

Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.

The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map

This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Load a raw Landsat scene and display it.
raw = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318')
Map.centerObject(raw, 10)
Map.addLayer(raw, {'bands': ['B4', 'B3', 'B2'], 'min': 6000, 'max': 12000}, 'raw')
# Convert the raw data to radiance.
radiance = ee.Algorithms.Landsat.calibratedRadiance(raw)
Map.addLayer(radiance, {'bands': ['B4', 'B3', 'B2'], 'max': 90}, 'radiance')
# Convert the raw data to top-of-atmosphere reflectance.
toa = ee.Algorithms.Landsat.TOA(raw)
Map.addLayer(toa, {'bands': ['B4', 'B3', 'B2'], 'max': 0.2}, 'toa reflectance')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab

Install Earth Engine API and geemap

Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.

**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend, enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map

The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Load a raw Landsat scene and display it.
raw = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318')
Map.centerObject(raw, 10)
Map.addLayer(raw, {'bands': ['B4', 'B3', 'B2'], 'min': 6000, 'max': 12000}, 'raw')
# Convert the raw data to radiance.
radiance = ee.Algorithms.Landsat.calibratedRadiance(raw)
Map.addLayer(radiance, {'bands': ['B4', 'B3', 'B2'], 'max': 90}, 'radiance')
# Convert the raw data to top-of-atmosphere reflectance.
toa = ee.Algorithms.Landsat.TOA(raw)
Map.addLayer(toa, {'bands': ['B4', 'B3', 'B2'], 'max': 0.2}, 'toa reflectance')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab

Install Earth Engine API and geemap

Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`. The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.

**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend, enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map

The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
# Load a raw Landsat scene and display it.
raw = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318')
Map.centerObject(raw, 10)
Map.addLayer(raw, {'bands': ['B4', 'B3', 'B2'], 'min': 6000, 'max': 12000}, 'raw')
# Convert the raw data to radiance.
radiance = ee.Algorithms.Landsat.calibratedRadiance(raw)
Map.addLayer(radiance, {'bands': ['B4', 'B3', 'B2'], 'max': 90}, 'radiance')
# Convert the raw data to top-of-atmosphere reflectance.
toa = ee.Algorithms.Landsat.TOA(raw)
Map.addLayer(toa, {'bands': ['B4', 'B3', 'B2'], 'max': 0.2}, 'toa reflectance')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab

Install Earth Engine API

Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.

The magic command `%%capture` can be used to hide output from a specific cell.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map

This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Load a raw Landsat scene and display it.
raw = ee.Image('LANDSAT/LC08/C01/T1/LC08_044034_20140318')
Map.centerObject(raw, 10)
Map.addLayer(raw, {'bands': ['B4', 'B3', 'B2'], 'min': 6000, 'max': 12000}, 'raw')
# Convert the raw data to radiance.
radiance = ee.Algorithms.Landsat.calibratedRadiance(raw)
Map.addLayer(radiance, {'bands': ['B4', 'B3', 'B2'], 'max': 90}, 'radiance')
# Convert the raw data to top-of-atmosphere reflectance.
toa = ee.Algorithms.Landsat.TOA(raw)
Map.addLayer(toa, {'bands': ['B4', 'B3', 'B2'], 'max': 0.2}, 'toa reflectance')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____ |
tests/fixtures/with_mknotebooks/docs/demo.ipynb | ###Markdown
A df
###Code
import numpy as np
import pandas as pd

df = pd.DataFrame({"time": np.arange(0, 10, 0.1)})
df["amplitude"] = np.sin(df.time)
df.head(5)
ax = df.plot()
###Output
_____no_output_____ |
analysis/Steven1/Milestone2.ipynb | ###Markdown
Imports
###Code
import pandas as pd
import seaborn as sns
import numpy as np
import os
import matplotlib.pyplot as plt
import pandas_profiling
###Output
_____no_output_____
###Markdown
1. Load Data
###Code
df = pd.read_csv("/Users/stevenzonneveld/Desktop/data301/project-group35-project/data/raw/MileStone1.csv")
df
###Output
_____no_output_____
###Markdown
2. Clean Data
###Code
# Checking for NaN Values
nan_in_df = df.isnull().values.any()
nan_in_df
# No NaN Values
# Dropping columns
df_cleaned = df.drop(columns = ['Unnamed: 0', 'price', 'title_status', 'color', 'state'], axis = 0).sort_values(by = ['brand'])
df_cleaned
# Dropping rows
df_cleaned.drop( df_cleaned [ df_cleaned ['brand'] == 'peterbilt'].index, inplace=True)
df_cleaned.drop( df_cleaned [ df_cleaned ['mileage'] == 0.0 ].index, inplace=True)
df_cleaned
# Don't want undriven cars / missing values.
# Peterbilts are semi trucks, we are looking at cars.
# Research Question: Which car brand is the longest lasting on average, based on the model year of the car and the mileage?
###Output
_____no_output_____
###Markdown
3. Process Data / 4. Wrangle Data --> Dropping unnecessary columns for each graph.
###Code
# Longest lasting by Mileage
# Data Wrangling - Dropping Columns
df_mileage = df_cleaned.drop(columns = ['year'], axis = 0)
# Data Wrangling - Renaming Columns
df_mileage = df_mileage.rename(columns = {"brand": "Manufacturer", "mileage": "Mileage"})
df_mileage
###Output
_____no_output_____
###Markdown
3. Process Data / 4. Wrangle Data --> Dropping unnecessary columns for each graph.
###Code
# Longest Lasting by Year
df_year = df_cleaned
# Data Wrangling - Dropping Columns
df_year = df_year.drop(columns = ['mileage'])
# Data Wrangling - Renaming Columns
df_year = df_year.rename(columns = {"brand": "Manufacturer", "year": "Model Year"})
df_year
###Output
_____no_output_____
###Markdown
Beginning of Method Chaining: Method Chaining by Year
###Code
def load_and_process_df_Method_Chain_by_Year(path_to_csv_file):
df_Method_Chain_by_Year1 = (
pd.read_csv("/Users/stevenzonneveld/Desktop/data301/project-group35-project/data/raw/MileStone1.csv")
.drop(columns = ['Unnamed: 0', 'price', 'title_status', 'color', 'state', 'mileage'], axis = 0)
.sort_values(by = ['brand'])
#.drop( df_cleaned [ df_cleaned ['brand'] == 'peterbilt'].index, inplace=True) #could not figure out how to use .drop in method chaining.
.rename(columns = {"brand": "Manufacturer", "year": "Model Year"})
)
df_Method_Chain_by_Year2 = df_Method_Chain_by_Year1.drop( df_Method_Chain_by_Year1 [ df_Method_Chain_by_Year1 ['Manufacturer'] == 'peterbilt'].index)
return df_Method_Chain_by_Year2
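# A hedged sketch (added; not from the original notebook): the row drop that the
# comment above says could not be chained can be expressed inside the chain with
# .loc and a lambda (or .query). Column names follow the cells above.
def load_and_process_by_year_chained(path_to_csv_file):
    return (
        pd.read_csv(path_to_csv_file)
        .drop(columns=['Unnamed: 0', 'price', 'title_status', 'color', 'state', 'mileage'])
        .sort_values(by=['brand'])
        .loc[lambda d: d['brand'] != 'peterbilt']  # drop the semi trucks inside the chain
        .rename(columns={"brand": "Manufacturer", "year": "Model Year"})
    )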
# practice code (Method chain minus the function)
df_Method_Chain_by_Year1 = (
pd.read_csv("/Users/stevenzonneveld/Desktop/data301/project-group35-project/data/raw/MileStone1.csv")
.drop(columns = ['Unnamed: 0', 'price', 'title_status', 'color', 'state', 'mileage'], axis = 0)
.sort_values(by = ['brand'])
#.drop( df_cleaned [ df_cleaned ['brand'] == 'peterbilt'].index, inplace=True) #could not figure out how to use .drop in method chaining.
.rename(columns = {"brand": "Manufacturer", "year": "Model Year"})
)
df_Method_Chain_by_Year2 = df_Method_Chain_by_Year1.drop( df_Method_Chain_by_Year1 [ df_Method_Chain_by_Year1 ['Manufacturer'] == 'peterbilt'].index)
df_Method_Chain_by_Year2
###Output
_____no_output_____
###Markdown
Method Chaining by Mileage
###Code
def load_and_process_df_Method_Chain_by_Mileage(path_to_csv_file):
df_Method_Chain_by_Mileage1 = (
pd.read_csv("/Users/stevenzonneveld/Desktop/data301/project-group35-project/data/raw/MileStone1.csv")
.drop(columns = ['Unnamed: 0', 'price', 'title_status', 'color', 'state', 'year'], axis = 0)
.sort_values(by = ['brand'])
#.drop( df_cleaned [ df_cleaned ['brand'] == 'peterbilt'].index, inplace=True) #could not figure out how to use .drop in method chaining.
#.drop( df_cleaned [ df_cleaned ['mileage'] == 0.0 ].index, inplace=True) #could not figure out how to use .drop in method chaining.
.rename(columns = {"brand": "Manufacturer", "mileage": "Mileage"})
)
df_Method_Chain_by_Mileage2 = df_Method_Chain_by_Mileage1.drop( df_Method_Chain_by_Mileage1 [ df_Method_Chain_by_Mileage1 ['Manufacturer'] == 'peterbilt'].index)
df_Method_Chain_by_Mileage3 = df_Method_Chain_by_Mileage2.drop( df_Method_Chain_by_Mileage2 [ df_Method_Chain_by_Mileage2 ['Mileage'] == 0.0 ].index)
return df_Method_Chain_by_Mileage3
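# A hedged sketch (added): both row drops (peterbilt rows and zero-mileage rows)
# expressed inside one chain with a boolean mask in .loc.
def load_and_process_by_mileage_chained(path_to_csv_file):
    return (
        pd.read_csv(path_to_csv_file)
        .drop(columns=['Unnamed: 0', 'price', 'title_status', 'color', 'state', 'year'])
        .sort_values(by=['brand'])
        .loc[lambda d: (d['brand'] != 'peterbilt') & (d['mileage'] != 0.0)]
        .rename(columns={"brand": "Manufacturer", "mileage": "Mileage"})
    )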
# practice code (Method chain minus the function)
df_Method_Chain_by_Mileage1 = (
pd.read_csv("/Users/stevenzonneveld/Desktop/data301/project-group35-project/data/raw/MileStone1.csv")
.drop(columns = ['Unnamed: 0', 'price', 'title_status', 'color', 'state', 'year'], axis = 0)
.sort_values(by = ['brand'])
#.drop( df_cleaned [ df_cleaned ['brand'] == 'peterbilt'].index, inplace=True) #could not figure out how to use .drop in method chaining.
#.drop( df_cleaned [ df_cleaned ['mileage'] == 0.0 ].index, inplace=True) #could not figure out how to use .drop in method chaining.
.rename(columns = {"brand": "Manufacturer", "mileage": "Mileage"})
)
df_Method_Chain_by_Mileage2 = df_Method_Chain_by_Mileage1.drop( df_Method_Chain_by_Mileage1 [ df_Method_Chain_by_Mileage1 ['Manufacturer'] == 'peterbilt'].index)
df_Method_Chain_by_Mileage3 = df_Method_Chain_by_Mileage2.drop( df_Method_Chain_by_Mileage2 [ df_Method_Chain_by_Mileage2 ['Mileage'] == 0.0 ].index)
df_Method_Chain_by_Mileage3
###Output
_____no_output_____ |
docs/tutorials/terraclimate.ipynb | ###Markdown
Complex NetCDF to Zarr Recipe: TerraClimate

About the Dataset

From http://www.climatologylab.org/terraclimate.html:

> TerraClimate is a dataset of monthly climate and climatic water balance for global terrestrial surfaces from 1958-2019. These data provide important inputs for ecological and hydrological studies at global scales that require high spatial resolution and time-varying data. All data have monthly temporal resolution and a ~4-km (1/24th degree) spatial resolution. The data cover the period from 1958-2019. We plan to update these data periodically (annually).

What makes it tricky

This is an advanced example that illustrates the following concepts:

- _MultiVariable recipe_: There is one file per year for a dozen different variables.
- _Complex Preprocessing_: We want to apply different preprocessing depending on the variable. This example shows how.
- _Inconsistent size of data in input files_: This means we have to scan each input file and cache its metadata before we can start writing the target.

This recipe requires a new storage target, a `metadata_cache`. In this example, this is just another directory. You could hypothetically use a database or other key/value store for this.
###Code
from pangeo_forge.recipe import NetCDFtoZarrMultiVarSequentialRecipe
from pangeo_forge.patterns import VariableSequencePattern
import xarray as xr
###Output
_____no_output_____
###Markdown
Define Filename Pattern

To keep this example smaller, we just use two years instead of the whole record.
###Code
target_chunks = {"lat": 1024, "lon": 1024, "time": 12}
# only do two years to keep the example small; it's still big!
years = list(range(1958, 1960))
variables = [
"aet",
"def",
"pet",
"ppt",
"q",
"soil",
"srad",
"swe",
"tmax",
"tmin",
"vap",
"ws",
"vpd",
"PDSI",
]
pattern = VariableSequencePattern(
fmt_string="https://climate.northwestknowledge.net/TERRACLIMATE-DATA/TerraClimate_{variable}_{year}.nc",
keys={'variable': variables, 'year': years}
)
pattern
###Output
_____no_output_____
###Markdown
Define Preprocessing Functions

These functions apply masks for each variable to remove invalid data.
###Code
rename_vars = {'PDSI': 'pdsi'}
mask_opts = {
"PDSI": ("lt", 10),
"aet": ("lt", 32767),
"def": ("lt", 32767),
"pet": ("lt", 32767),
"ppt": ("lt", 32767),
"ppt_station_influence": None,
"q": ("lt", 2147483647),
"soil": ("lt", 32767),
"srad": ("lt", 32767),
"swe": ("lt", 10000),
"tmax": ("lt", 200),
"tmax_station_influence": None,
"tmin": ("lt", 200),
"tmin_station_influence": None,
"vap": ("lt", 300),
"vap_station_influence": None,
"vpd": ("lt", 300),
"ws": ("lt", 200),
}
def apply_mask(key, da):
"""helper function to mask DataArrays based on a threshold value"""
if mask_opts.get(key, None):
op, val = mask_opts[key]
if op == "lt":
da = da.where(da < val)
elif op == "neq":
da = da.where(da != val)
return da
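# A hedged sanity check (added; not part of the original recipe): apply_mask on a
# tiny toy DataArray for "swe", whose threshold in mask_opts above is ("lt", 10000).
# Values below 10000 are kept; values at or above it become NaN.
_toy = xr.DataArray([5000.0, 20000.0], dims="x")
_toy_masked = apply_mask("swe", _toy)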
def preproc(ds):
"""custom preprocessing function for terraclimate data"""
rename = {}
station_influence = ds.get("station_influence", None)
if station_influence is not None:
ds = ds.drop_vars("station_influence")
var = list(ds.data_vars)[0]
if var in rename_vars:
rename[var] = rename_vars[var]
if "day" in ds.coords:
rename["day"] = "time"
if station_influence is not None:
ds[f"{var}_station_influence"] = station_influence
with xr.set_options(keep_attrs=True):
ds[var] = apply_mask(var, ds[var])
if rename:
ds = ds.rename(rename)
return ds
###Output
_____no_output_____
###Markdown
Define Recipe

We are now ready to define the recipe. We also specify the desired chunks of the target dataset. A key property of this recipe is `nitems_per_input=None`, which triggers caching of input metadata.
###Code
chunks = {"lat": 1024, "lon": 1024, "time": 12}
recipe = NetCDFtoZarrMultiVarSequentialRecipe(
input_pattern=pattern,
sequence_dim="time", # TODO: raise error if this is not specified
target_chunks=target_chunks,
nitems_per_input=None, # don't know how many timesteps in each file
process_chunk=preproc
)
recipe
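# Hedged aside (added; not part of the original tutorial): if every input file were
# known to contain exactly 12 monthly time steps, one could instead pass
# nitems_per_input=12 above and the metadata-caching step below would not be needed.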
###Output
_____no_output_____
###Markdown
Define Storage Targets

Since our recipe needs to cache input metadata, we need to supply a `metadata_cache` target.
###Code
import tempfile
from fsspec.implementations.local import LocalFileSystem
from pangeo_forge.storage import FSSpecTarget, CacheFSSpecTarget
fs_local = LocalFileSystem()
target_dir = tempfile.TemporaryDirectory()
target = FSSpecTarget(fs_local, target_dir.name)
cache_dir = tempfile.TemporaryDirectory()
cache_target = CacheFSSpecTarget(fs_local, cache_dir.name)
meta_dir = tempfile.TemporaryDirectory()
meta_store = FSSpecTarget(fs_local, meta_dir.name)
recipe.target = target
recipe.input_cache = cache_target
recipe.metadata_cache = meta_store
recipe
###Output
_____no_output_____
###Markdown
Execute with Prefect

This produces A LOT of output because we turn on logging.
###Code
# logging will display some interesting information about our recipe during execution
import logging
import sys
logging.basicConfig(
format='%(asctime)s [%(levelname)s] %(name)s - %(message)s',
level=logging.INFO,
datefmt='%Y-%m-%d %H:%M:%S',
stream=sys.stdout,
)
logger = logging.getLogger("pangeo_forge.recipe")
logger.setLevel(logging.INFO)
from pangeo_forge.executors import PrefectPipelineExecutor
pipelines = recipe.to_pipelines()
executor = PrefectPipelineExecutor()
plan = executor.pipelines_to_plan(pipelines)
executor.execute_plan(plan)
###Output
[2021-03-20 21:30:59-0400] INFO - prefect.FlowRunner | Beginning Flow run for 'Rechunker'
2021-03-20 21:30:59 [INFO] prefect.FlowRunner - Beginning Flow run for 'Rechunker'
[2021-03-20 21:30:59-0400] INFO - prefect.TaskRunner | Task 'MappedTaskWrapper': Starting task run...
2021-03-20 21:30:59 [INFO] prefect.TaskRunner - Task 'MappedTaskWrapper': Starting task run...
[2021-03-20 21:30:59-0400] INFO - prefect.TaskRunner | Task 'MappedTaskWrapper': Finished task run for task with final state: 'Mapped'
2021-03-20 21:30:59 [INFO] prefect.TaskRunner - Task 'MappedTaskWrapper': Finished task run for task with final state: 'Mapped'
[2021-03-20 21:30:59-0400] INFO - prefect.TaskRunner | Task 'MappedTaskWrapper[0]': Starting task run...
2021-03-20 21:30:59 [INFO] prefect.TaskRunner - Task 'MappedTaskWrapper[0]': Starting task run...
2021-03-20 21:30:59 [INFO] pangeo_forge.recipe - Caching input ('aet', 1958)
2021-03-20 21:30:59 [INFO] pangeo_forge.recipe - Opening input 'https://climate.northwestknowledge.net/TERRACLIMATE-DATA/TerraClimate_aet_1958.nc'
###Markdown
Check and Plot Target
###Code
ds_target = xr.open_zarr(target.get_mapper(), consolidated=True)
ds_target
###Output
_____no_output_____
###Markdown
As an example calculation, we compute and plot the seasonal climatology of soil moisture.
###Code
with xr.set_options(keep_attrs=True):
soil_clim = ds_target.soil.groupby('time.season').mean('time').coarsen(lon=12, lat=12).mean()
soil_clim
soil_clim.plot(col='season', col_wrap=2, robust=True, figsize=(18, 8))
###Output
/opt/miniconda3/envs/pangeo2020/lib/python3.8/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide
x = np.divide(x1, x2, out)
###Markdown
Complex NetCDF to Zarr Recipe: TerraClimate

About the Dataset

From http://www.climatologylab.org/terraclimate.html:

> TerraClimate is a dataset of monthly climate and climatic water balance for global terrestrial surfaces from 1958-2019. These data provide important inputs for ecological and hydrological studies at global scales that require high spatial resolution and time-varying data. All data have monthly temporal resolution and a ~4-km (1/24th degree) spatial resolution. The data cover the period from 1958-2019. We plan to update these data periodically (annually).

What makes it tricky

This is an advanced example that illustrates the following concepts:

- _Multiple variables in different files_: There is one file per year for a dozen different variables.
- _Complex Preprocessing_: We want to apply different preprocessing depending on the variable. This example shows how.
- _Inconsistent size of data in input files_: This means we have to scan each input file and cache its metadata before we can start writing the target.

This recipe requires a new storage target, a `metadata_cache`. In this example, this is just another directory. You could hypothetically use a database or other key/value store for this.
###Code
from pangeo_forge.recipes import XarrayZarrRecipe
from pangeo_forge.patterns import FilePattern, ConcatDim, MergeDim
import xarray as xr
###Output
_____no_output_____
###Markdown
Define Filename Pattern

To keep this example smaller, we just use two years instead of the whole record.
###Code
target_chunks = {"lat": 1024, "lon": 1024, "time": 12}
# only do two years to keep the example small; it's still big!
years = list(range(1958, 1960))
variables = [
"aet",
"def",
"pet",
"ppt",
"q",
"soil",
"srad",
"swe",
"tmax",
"tmin",
"vap",
"ws",
"vpd",
"PDSI",
]
def make_filename(variable, time):
return f"http://thredds.northwestknowledge.net:8080/thredds/fileServer/TERRACLIMATE_ALL/data/TerraClimate_{variable}_{time}.nc"
pattern = FilePattern(
make_filename,
ConcatDim(name="time", keys=years),
MergeDim(name="variable", keys=variables)
)
pattern
###Output
_____no_output_____
###Markdown
Check out the pattern:
###Code
for key, filename in pattern.items():
break
key, filename
###Output
_____no_output_____
###Markdown
Define Preprocessing Functions

These functions apply masks for each variable to remove invalid data.
###Code
rename_vars = {'PDSI': 'pdsi'}
mask_opts = {
"PDSI": ("lt", 10),
"aet": ("lt", 32767),
"def": ("lt", 32767),
"pet": ("lt", 32767),
"ppt": ("lt", 32767),
"ppt_station_influence": None,
"q": ("lt", 2147483647),
"soil": ("lt", 32767),
"srad": ("lt", 32767),
"swe": ("lt", 10000),
"tmax": ("lt", 200),
"tmax_station_influence": None,
"tmin": ("lt", 200),
"tmin_station_influence": None,
"vap": ("lt", 300),
"vap_station_influence": None,
"vpd": ("lt", 300),
"ws": ("lt", 200),
}
def apply_mask(key, da):
"""helper function to mask DataArrays based on a threshold value"""
if mask_opts.get(key, None):
op, val = mask_opts[key]
if op == "lt":
da = da.where(da < val)
elif op == "neq":
da = da.where(da != val)
return da
def preproc(ds):
"""custom preprocessing function for terraclimate data"""
rename = {}
station_influence = ds.get("station_influence", None)
if station_influence is not None:
ds = ds.drop_vars("station_influence")
var = list(ds.data_vars)[0]
if var in rename_vars:
rename[var] = rename_vars[var]
if "day" in ds.coords:
rename["day"] = "time"
if station_influence is not None:
ds[f"{var}_station_influence"] = station_influence
with xr.set_options(keep_attrs=True):
ds[var] = apply_mask(var, ds[var])
if rename:
ds = ds.rename(rename)
return ds
###Output
_____no_output_____
###Markdown
Define Recipe

We are now ready to define the recipe. We also specify the desired chunks of the target dataset. A key property of this recipe is `nitems_per_input=None`, which triggers caching of input metadata.
###Code
chunks = {"lat": 1024, "lon": 1024, "time": 12}
recipe = XarrayZarrRecipe(
file_pattern=pattern,
target_chunks=target_chunks,
process_chunk=preproc
)
recipe
###Output
_____no_output_____
###Markdown
Define Storage Targets

Since we did not specify `nitems_per_file` in our `ConcatDim`, the recipe needs to cache input metadata. So we need to supply a `metadata_cache` target.
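For contrast, a hedged one-liner (an assumption, not taken from this tutorial): if every input file were known to hold exactly 12 monthly time steps, the concat dimension could declare that up front and no metadata cache would be required:

```python
ConcatDim(name="time", keys=years, nitems_per_file=12)  # hypothetical: constant-length files only
```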
###Code
import tempfile
from fsspec.implementations.local import LocalFileSystem
from pangeo_forge.storage import FSSpecTarget, CacheFSSpecTarget
fs_local = LocalFileSystem()
target_dir = tempfile.TemporaryDirectory()
target = FSSpecTarget(fs_local, target_dir.name)
cache_dir = tempfile.TemporaryDirectory()
cache_target = CacheFSSpecTarget(fs_local, cache_dir.name)
meta_dir = tempfile.TemporaryDirectory()
meta_store = FSSpecTarget(fs_local, meta_dir.name)
recipe.target = target
recipe.input_cache = cache_target
recipe.metadata_cache = meta_store
recipe
###Output
_____no_output_____
###Markdown
Execute with Prefect

This produces A LOT of output because we turn on logging.
###Code
# logging will display some interesting information about our recipe during execution
import logging
import sys
logging.basicConfig(
format='%(asctime)s [%(levelname)s] %(name)s - %(message)s',
level=logging.INFO,
datefmt='%Y-%m-%d %H:%M:%S',
stream=sys.stdout,
)
logger = logging.getLogger("pangeo_forge.recipe")
logger.setLevel(logging.INFO)
from pangeo_forge.executors import PrefectPipelineExecutor
pipelines = recipe.to_pipelines()
executor = PrefectPipelineExecutor()
plan = executor.pipelines_to_plan(pipelines)
executor.execute_plan(plan)
###Output
[2021-04-19 10:41:07-0400] INFO - prefect.FlowRunner | Beginning Flow run for 'Rechunker'
2021-04-19 10:41:07 [INFO] prefect.FlowRunner - Beginning Flow run for 'Rechunker'
[2021-04-19 10:41:07-0400] INFO - prefect.TaskRunner | Task 'MappedTaskWrapper': Starting task run...
2021-04-19 10:41:07 [INFO] prefect.TaskRunner - Task 'MappedTaskWrapper': Starting task run...
[2021-04-19 10:41:07-0400] INFO - prefect.TaskRunner | Task 'MappedTaskWrapper': Finished task run for task with final state: 'Mapped'
2021-04-19 10:41:07 [INFO] prefect.TaskRunner - Task 'MappedTaskWrapper': Finished task run for task with final state: 'Mapped'
[2021-04-19 10:41:07-0400] INFO - prefect.TaskRunner | Task 'MappedTaskWrapper[0]': Starting task run...
2021-04-19 10:41:07 [INFO] prefect.TaskRunner - Task 'MappedTaskWrapper[0]': Starting task run...
2021-04-19 10:41:07 [INFO] pangeo_forge.recipe - Caching input (0, 0): http://thredds.northwestknowledge.net:8080/thredds/fileServer/TERRACLIMATE_ALL/data/TerraClimate_aet_1958.nc
###Markdown
Check and Plot Target
###Code
ds_target = xr.open_zarr(target.get_mapper(), consolidated=True)
ds_target
###Output
_____no_output_____
###Markdown
As an example calculation, we compute and plot the seasonal climatology of soil moisture.
###Code
with xr.set_options(keep_attrs=True):
soil_clim = ds_target.soil.groupby('time.season').mean('time').coarsen(lon=12, lat=12).mean()
soil_clim
soil_clim.plot(col='season', col_wrap=2, robust=True, figsize=(18, 8))
###Output
/opt/miniconda3/envs/pangeo-forge/lib/python3.8/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide
x = np.divide(x1, x2, out)
|
notebooks/07_scrape_google.ipynb | ###Markdown
Improve book popularity (include genre in the general search: added book_google2() in fictiondb.py)
###Code
import pandas as pd
# Since 'budget' is the bottle neck in dropna process, use that to narrow down the search list
all_df = pd.read_pickle('../dump/all_data').dropna(subset=['budget'])
all_df['date'] = all_df.release_date.dt.strftime('%Y-%m-%d')
all_df.info()
# Create list to search
title_list = list(all_df.movie_title)
author_list = list(all_df.author)
date_list = list(all_df.date)
genre_list = list(all_df.genre)
lng = len(title_list)
%run -i '../py/fictiondb.py'
book_history_2 = []
for i in range(lng):
book = title_list[i]
author = author_list[i]
date = date_list[i]
genre = genre_list[i]
print(i,book)
info = book_google2(book,author,date,genre)
book_history_2.append(info)
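# Hedged note (added): when hitting an external search service in a loop like this,
# a short pause between requests reduces the chance of being rate-limited, e.g.:
#
#     import time, random
#     time.sleep(1 + random.random())  # inside the loop, after each book_google2 call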
book_history_2
book_history_2_df = pd.DataFrame(book_history_2)
book_history_2_df.to_pickle('../data/book_history_2_data')
###Output
_____no_output_____ |
examples/arpeggio.ipynb | ###Markdown
ArpeggioThis song is built around an arpeggiated synthesizer (`wubwub.Arpeggiator`). Named chords are used to determine the notes played by said arpeggiator.
###Code
from pysndfx import AudioEffectsChain
import wubwub as wb
import wubwub.sounds as snd
# load sounds
SYNTH = snd.load('synth.elka')
BASS = snd.load('bass.synth')
DRUMS = snd.load('drums.house')
# init the sequencer
seq = wb.Sequencer(bpm=110, beats=32)
# create an arpeggiator, fill it with named chords
arp = seq.add_arpeggiator(sample=SYNTH['C3'], name='arp', basepitch='C3', freq=1/6, method='updown')
C = wb.chord_from_name(root='C2', lengths=8) + wb.chord_from_name(root='C3', add=12)
E = wb.chord_from_name(root='E2', lengths=8) + wb.chord_from_name(root='E3', add=12)
F = wb.chord_from_name(root='F2', lengths=8) + wb.chord_from_name(root='F3', add=12)
Fm = wb.chord_from_name(root='F2', kind='m', lengths=8) + wb.chord_from_name(root='F3', kind='m', add=12)
arp[1] = C
arp[9] = E
arp[17] = F
arp[25] = Fm
arp.effects = AudioEffectsChain().reverb(reverberance=10, wet_gain=1).lowpass(5000)
# create a lead synth line
# use a pattern to set the rhythm
# add lots of effects
lead1 = seq.add_sampler(sample=SYNTH['C3'], name='lead1', basepitch='C3')
melodypat = wb.Pattern([1,3,4,5,7,9], length=16)
lead1.make_notes(beats=melodypat, pitches=[4, 2, 0, 7, 4, 8], lengths=[2,2,2,2,2,8])
lead1.make_notes(beats=melodypat.onmeasure(2), pitches=[9, 7, 5, 4, 0, 2], lengths=[2,2,2,2,2,8])
lead1.effects = (AudioEffectsChain()
.reverb(room_scale=100, wet_gain=4, reverberance=40)
.delay(delays=[500,1000,1500,2000], decays=[.7])
.lowpass(6000))
lead1.volume += 5
# create a slightly detuned second synthesize to create a chorus effect
lead2 = seq.duplicate_track('lead1', newname='lead2')
lead2notes = {b:n.alter(pitch=n.pitch-.4) for b, n in lead1.notedict.items()}
lead2.add_fromdict(lead2notes)
# add a bass note
bass = seq.add_sampler(sample=BASS['fat8tone'], name='bass', basepitch='C3')
bass[[1,5]] = wb.Note(pitch='C3', length=4)
bass[[9,13]] = wb.Note(pitch='E3', length=4)
bass[[17, 21, 25, 29]] = wb.Note(pitch='F3', length=4)
bass.volume -= 12
bass.effects = AudioEffectsChain().overdrive(colour=50).reverb().lowpass(1000)
# add a lonely kick drum
kick = seq.add_sampler(sample=DRUMS['kick7'], name='kick')
kick.make_notes_every(4)
kick.make_notes_every(4, offset=-.5)
kick.clean()
kick.effects = AudioEffectsChain().reverb().lowpass(600)
kick.volume -= 4
# build the output
seq.build(overhang=4)
###Output
_____no_output_____ |
_notebooks/2020-12-20-digits.ipynb | ###Markdown
Using digits makes it easy to write bit brute-force search in Julia.

> We implement bit brute-force search in the Julia language.

- toc: true
- badges: true
- comments: true
- categories: [AtCoder]
- image: images/chart-preview.png

About digits

`digits(a, base=b, pad=c)` returns an array holding the base-10 integer a converted to a c-digit base-b number. For example,
###Code
n = 22
println(digits(n, base=2, pad=6))
###Output
[0, 1, 1, 0, 1, 0]
###Markdown
Since $22 = 10110_2$, this is correct! If there are not enough digits, the result is padded with zeros. In Julia, we can use this to implement bit brute-force search easily. (For bit brute-force search itself, https://algo-logic.info/rec-bit-search/ is an easy-to-follow explanation and recommended.) Concretely, the code looks like this.
###Code
N = 4
for i in 0:2^N - 1
pettern = digits(i, base=2, pad=N)
println(pettern)
end
###Output
[0, 0, 0, 0]
[1, 0, 0, 0]
[0, 1, 0, 0]
[1, 1, 0, 0]
[0, 0, 1, 0]
[1, 0, 1, 0]
[0, 1, 1, 0]
[1, 1, 1, 0]
[0, 0, 0, 1]
[1, 0, 0, 1]
[0, 1, 0, 1]
[1, 1, 0, 1]
[0, 0, 1, 1]
[1, 0, 1, 1]
[0, 1, 1, 1]
[1, 1, 1, 1]
###Markdown
All that remains is to process each of these patterns, treating `1 -> true` and `0 -> false`. Let's solve a concrete problem with this.

The subset sum problem

The subset sum problem asks: given an array A of n integers $a_1,...,a_n$ and an integer S, decide whether some subset can be chosen whose sum is exactly S. For example, when

```julia
A = [1, 2, 4, 5]
S = 8
```

we have $A_1 + A_2 + A_4 = 8$, so a suitable subset whose sum is S exists. When n is small, bit brute-force search solves this at a practical speed.
###Code
N = 4
for i in 0:2^N - 1
pettern = digits(i, base=2, pad=N)
println(pettern)
end
###Output
[0, 0, 0, 0]
[1, 0, 0, 0]
[0, 1, 0, 0]
[1, 1, 0, 0]
[0, 0, 1, 0]
[1, 0, 1, 0]
[0, 1, 1, 0]
[1, 1, 1, 0]
[0, 0, 0, 1]
[1, 0, 0, 1]
[0, 1, 0, 1]
[1, 1, 0, 1]
[0, 0, 1, 1]
[1, 0, 1, 1]
[0, 1, 1, 1]
[1, 1, 1, 1]
###Markdown
Here, if we regard 0 as `false` (do not pick that element) and 1 as `true` (pick it), and let $P$ be the array representing the pattern, then the sum of the selected elements is $$dot(A, P) = (A_1 * P_1 + A_2 * P_2 + ... + A_N * P_N)$$ so it can be written as $dot(A, P)$. Therefore, the subset sum problem can be solved with code like the following. Very simple!
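As a small aside (not in the original post), the same 0/1 pattern can also pull out the chosen elements directly via boolean indexing, which is handy when you want to report the subset itself and not just its sum:

```julia
A = [1, 2, 4, 5]
P = [1, 1, 0, 1]       # one pattern from digits(i, base=2, pad=N)
chosen = A[Bool.(P)]   # [1, 2, 5]
sum(chosen)            # 8, the same value as dot(A, P)
```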
###Code
using LinearAlgebra
function solve(N, A, S)
for i in 0:2^N - 1
P = digits(i, base=2, pad=N)
if dot(A, P) == S
return "OK, P = $P"
end
end
return "NO"
end
N = 4
A = rand(1:10, N)
S = rand(1:40)
@show N
@show A
@show S
solve(N, A, S)
###Output
N = 4
A = [4, 5, 8, 8]
S = 14
###Markdown
The time complexity is $O(N2^N)$, since the dot product costs $O(N)$ and there are $2^N$ patterns. As a result, the computation time blows up as N grows. Let's run an experiment.
###Code
using Plots; plotly()
using BenchmarkTools
# We want the worst-case cost, so we benchmark cases that are guaranteed to return "NO".
# A is all zeros and the target sum S is 1.
function benchmark(N)
times = zeros(N)
S = 1
for i in 1:N
A = zeros(Int, i)
benchmark = @benchmark solve($i, $A, $S)
times[i] = mean(benchmark.times)
end
return times
end
result = benchmark(16)
p = plot(result, yaxis=:log)
xlabel!("N")
ylabel!("Time [ns]")
###Output
_____no_output_____
###Markdown
A textbook-perfect exponential curve, thank you very much. Since the computation time grows explosively as N increases, be careful when using this approach.
###Code
clipboard(string(sprint(show, "text/html", p)))
###Output
_____no_output_____ |
.ipynb_checkpoints/1_linear_algebra-checkpoint.ipynb | ###Markdown
A less obvious tool is the dot product. The dot product of two vectors is the sum oftheir componentwise products:
###Code
from typing import List

Vector = List[float]  # a vector is represented as a list of floats

def dot(v: Vector, w: Vector) -> float:
    """Computes v_1 * w_1 + ... + v_n * w_n"""
    assert len(v) == len(w), "vectors must be same length"
    return sum(v_i * w_i for v_i, w_i in zip(v, w))

assert dot([1, 2, 3], [4, 5, 6]) == 32
def sum_of_squares(v: Vector) -> float:
"""Returns v_1 * v_1 + ... + v_n * v_n"""
return dot(v, v)
assert sum_of_squares([1, 2, 3]) == 14
import math
# Which we can use to compute its magnitude (or length):
def magnitude(v: Vector) -> float:
"""Returns the magnitude ( or length) of v"""
return math.sqrt(sum_of_squares(v))
assert magnitude([3, 4]) == 5
# We now have all the pieces we need to compute the distance between two vectors
def subtract(v: Vector, w: Vector) -> Vector:
    """Subtracts corresponding elements (defined here since it is not shown earlier in this excerpt)"""
    assert len(v) == len(w), "vectors must be the same length"
    return [v_i - w_i for v_i, w_i in zip(v, w)]

def squared_distance(v: Vector, w: Vector) -> float:
    """Computes (v_1 - w_1)**2 + ... + (v_n - w_n)**2"""
    return sum_of_squares(subtract(v, w))

def distance(v: Vector, w: Vector) -> float:
    """Computes the distance between v and w"""
    return math.sqrt(squared_distance(v, w))
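# Small usage checks (added): a 3-4-5 right triangle
assert distance([0, 0], [3, 4]) == 5
assert squared_distance([1, 1], [4, 5]) == 25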
Matrix = List[List[float]]  # a matrix is a list of rows, each a list of floats
from typing import Tuple
A = [[1, 2, 3],[4, 5, 6]]
def shape(A: Matrix) -> Tuple:
"""Returns (# of rows of A, # of columns of A)"""
num_rows = len(A)
num_cols = len(A[0]) if A else 0 # number of elements in first row, 0 is returns if empty lists of list are present
return num_rows, num_cols
assert shape(A) == (2, 3)
def get_row(A: Matrix, i: int) -> Vector:
"""Returns the i-th row of A (as a Vector)"""
return A[i]
def get_column(A: Matrix, j:int) -> Vector:
"""Returns the j-th columns of A (as a Vector)"""
return [A_i[j] # jth element of row vector A_i
for A_i in A] # for each row vector
assert get_row(A, 1) == [4, 5, 6]
assert get_column(A,1) == [2, 5]
from typing import Callable
def make_matrix(num_rows: int, num_cols: int, entry_fn: Callable[[int, int], float]) -> Matrix:
"""Returns a num_rows x num_cols matrix(i, j) whose (i,j)- th entry is entry_fn(i, j)"""
return [[entry_fn(i, j) # entry_fn on each i,j pair
for j in range(num_cols)] # entry_fn(i,0),...,entry_fn(i,j)
for i in range(num_rows)] # create one list for each i
def identity_matrix(n: int) -> Matrix:
"""Returns the n x n identity matrix"""
return make_matrix(n, n, lambda i,j: 1 if i==j else 0)
assert identity_matrix(5) == [[1, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 1, 0],
[0, 0, 0, 0, 1]]
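# Added usage sketch: a small multiplication table built with make_matrix
times_table = make_matrix(3, 3, lambda i, j: (i + 1) * (j + 1))
assert times_table == [[1, 2, 3], [2, 4, 6], [3, 6, 9]]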
###Output
_____no_output_____ |
03-hw-spooky-authors-exercise.ipynb | ###Markdown
Spooky Author Identification exercise

Spooky authors dataset. It can be downloaded from: https://www.kaggle.com/c/spooky-author-identification

Conclusions

After the analyses performed on the features during the course lecture, we can arrive at the following observations and conclusions:
- ...
- ...

We can also try some feature engineering, "cleaning", and processing of the data:
- ...
- ...

Strategy
- ...
- ...

Let's begin...
###Code
%config IPCompleter.greedy=True
!pip install numpy scipy matplotlib ipython scikit-learn pandas pillow mglearn
###Output
Requirement already satisfied: numpy in /opt/conda/lib/python3.6/site-packages
Requirement already satisfied: scipy in /opt/conda/lib/python3.6/site-packages
Requirement already satisfied: matplotlib in /opt/conda/lib/python3.6/site-packages
Requirement already satisfied: ipython in /opt/conda/lib/python3.6/site-packages
Requirement already satisfied: scikit-learn in /opt/conda/lib/python3.6/site-packages
Requirement already satisfied: pandas in /opt/conda/lib/python3.6/site-packages
Requirement already satisfied: pillow in /opt/conda/lib/python3.6/site-packages
Collecting mglearn
Downloading mglearn-0.1.6.tar.gz (541kB)
 100% |████████████████████████████████| 542kB 944kB/s eta 0:00:01
Requirement already satisfied: six>=1.10 in /opt/conda/lib/python3.6/site-packages (from matplotlib)
Requirement already satisfied: python-dateutil in /opt/conda/lib/python3.6/site-packages (from matplotlib)
Requirement already satisfied: pytz in /opt/conda/lib/python3.6/site-packages (from matplotlib)
Requirement already satisfied: cycler>=0.10 in /opt/conda/lib/python3.6/site-packages/cycler-0.10.0-py3.6.egg (from matplotlib)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=1.5.6 in /opt/conda/lib/python3.6/site-packages (from matplotlib)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /opt/conda/lib/python3.6/site-packages (from ipython)
Requirement already satisfied: decorator in /opt/conda/lib/python3.6/site-packages (from ipython)
Requirement already satisfied: simplegeneric>0.8 in /opt/conda/lib/python3.6/site-packages (from ipython)
Requirement already satisfied: pexpect; sys_platform != "win32" in /opt/conda/lib/python3.6/site-packages (from ipython)
Requirement already satisfied: setuptools>=18.5 in /opt/conda/lib/python3.6/site-packages (from ipython)
Requirement already satisfied: traitlets>=4.2 in /opt/conda/lib/python3.6/site-packages (from ipython)
Requirement already satisfied: pickleshare in /opt/conda/lib/python3.6/site-packages (from ipython)
Requirement already satisfied: pygments in /opt/conda/lib/python3.6/site-packages (from ipython)
Requirement already satisfied: jedi>=0.10 in /opt/conda/lib/python3.6/site-packages (from ipython)
Requirement already satisfied: olefile in /opt/conda/lib/python3.6/site-packages (from pillow)
Requirement already satisfied: wcwidth in /opt/conda/lib/python3.6/site-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython)
Requirement already satisfied: ipython-genutils in /opt/conda/lib/python3.6/site-packages (from traitlets>=4.2->ipython)
Building wheels for collected packages: mglearn
  Running setup.py bdist_wheel for mglearn ... done
  Stored in directory: /home/jovyan/.cache/pip/wheels/79/8b/2b/17dcfb9c9b044b216a58daea9787a0637cb1ffc5b4c2a78e50
Successfully built mglearn
Installing collected packages: mglearn
Successfully installed mglearn-0.1.6
###Markdown
...
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import mglearn
from IPython.display import display
%matplotlib inline
###Output
_____no_output_____
###Markdown
...
###Code
import pandas as pd
train = pd.read_csv("data/spooky-authors/train.zip", index_col=['id'])
test = pd.read_csv("data/spooky-authors/test.zip", index_col=['id'])
sample_submission = pd.read_csv("data/spooky-authors/sample_submission.zip", index_col=['id'])
print(train.shape, test.shape, sample_submission.shape)
print(set(train.columns) - set(test.columns))
train.head(5)
###Output
_____no_output_____
###Markdown
...
###Code
!pip install nltk
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stopwords = stopwords.words('english')  # list of English stop words, referenced in the parameter grid below
params = {"features__ngram_range": [(1,1), (1,2), (1,3)],
"features__analyzer": ['word'],
"features__max_df": [1.0, 0.9, 0.8, 0.7, 0.6, 0.5],
"features__min_df": [2, 3, 5, 10],
"features__lowercase": [False, True],
"features__stop_words": [None, stopwords],
"features__token_pattern": [r'\w+|\,', None],
"clf__alpha": [0.01, 0.1, 0.5, 1, 2]
}
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import log_loss
def report(results, n_top=5):
for i in range(1, n_top + 1):
candidates = np.flatnonzero(results['rank_test_score'] == i)
for candidate in candidates:
print("Model with rank: {0}".format(i))
print("Mean validation score: {0:.3f} (std: {1:.3f})".format(
results['mean_test_score'][candidate],
results['std_test_score'][candidate]))
print("Parameters: {0}".format(results['params'][candidate]))
print("")
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score
# from nltk.corpus import stopwords
# stopset = set(stopwords.words('english'))
# from sklearn.ensemble import VotingClassifier
pipeline = Pipeline([
('features', TfidfVectorizer(
# ngram_range=(1,2), min_df=2,
# max_df=0.8, lowercase=False,
# token_pattern=r'\w+|\,', stop_words=stopset
)),
('clf', MultinomialNB(
# alpha=0.01
)),
])
# print(pipeline.named_steps['features'])
# print(cross_val_score(pipeline, train.text, train.author, cv=3, n_jobs=3))
# print(cross_val_score(pipeline, train.text, train.author, cv=3, n_jobs=3,
# scoring='neg_log_loss'))
random_search = RandomizedSearchCV(pipeline, param_distributions=params,
scoring='neg_log_loss',
n_iter=20, cv=3, n_jobs=4)
random_search.fit(train.text, train.author)
report(random_search.cv_results_)
pipeline = pipeline.fit(train.text, train.author)
print(pipeline.predict_proba(test[:10].text))
test_predictions = pipeline.predict_proba(test.text)
print(pipeline.classes_)
submit_file = pd.DataFrame(test_predictions, columns=pipeline.classes_, index=test.index)  # column order must match pipeline.classes_
submit_file.head(10)
submit_file.to_csv("data/spooky-authors/predictions.csv")
###Output
_____no_output_____ |
week3/Linear Algebra, Distance and Similarity.ipynb | ###Markdown
Table of Contents1 Linear Algebra1.1 Dot Products1.1.1 What does a dot product conceptually mean?1.2 Exercises1.3 Using Scikit-Learn1.4 Bag of Words Models2 Distance Measures2.1 Euclidean Distance2.1.1 Scikit Learn3 Similarity Measures4 Linear Relationships4.1 Pearson Correlation Coefficient4.1.1 Intuition Behind Pearson Correlation Coefficient4.1.1.1 When $ρ_{Χ_Υ} = 1$ or $ρ_{Χ_Υ} = -1$4.2 Cosine Similarity4.2.1 Shift Invariance5 Exercise (20 minutes):5.0.0.1 3. Define your cosine similarity functions5.0.0.2 4. Get the two documents from the BoW feature space and calculate cosine similarity6 Challenge: Use the Example Below to Create Your Own Cosine Similarity Function6.0.1 Create a list of all the vocabulary $V$6.0.1.1 Native Implementation:6.0.2 Create your Bag of Words model Linear AlgebraIn the natural language processing, each document is a vector of numbers. Dot ProductsA dot product is defined as$ a \cdot b = \sum_{i}^{n} a_{i}b_{i} = a_{1}b_{1} + a_{2}b_{2} + a_{3}b_{3} + \dots + a_{n}b_{n}$The geometric definition of a dot product is $ a \cdot b = $\|\|b\|\|\|\|a\|\| What does a dot product conceptually mean?A dot product is a representation of the **similarity between two components**, because it is calculated based upon shared elements. It tells you how much one vector goes in the direction of another vector.The actual value of a dot product reflects the direction of change:* **Zero**: we don't have any growth in the original direction* **Positive** number: we have some growth in the original direction* **Negative** number: we have negative (reverse) growth in the original direction
###Code
A = [0,2]
B = [0,1]
def dot_product(x,y):
return sum(a*b for a,b in zip(x,y))
dot_product(A,B)
# What will the dot product of A and B be?
###Output
_____no_output_____
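###Markdown
A few more calls to the `dot_product` helper defined above illustrate the three cases listed in the text (the vectors are illustrative examples only): orthogonal vectors give zero, vectors pointing in a similar direction give a positive value, and vectors pointing in opposite directions give a negative value.
###Code
print(dot_product([1, 0], [0, 1]))    # orthogonal -> 0
print(dot_product([2, 1], [1, 3]))    # similar direction -> positive (5)
print(dot_product([1, 2], [-1, -2]))  # opposite direction -> negative (-5)
###Output
_____no_output_____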
###Markdown
 Exercises What will the dot product of `A` and `B` be?
###Code
A = [1,2]
B = [2,4]
dot_product(A,B)
###Output
_____no_output_____
###Markdown
What will the dot product of `document_1` and `document_2` be?
###Code
document_1 = [0, 0, 1]
document_2 = [1, 0, 2]
###Output
_____no_output_____
###Markdown
Using Scikit-Learn
###Code
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
data_corpus = ["John likes to watch movies. Mary likes movies too.",
"John also likes football. Mary doesn't like football."]
X = vectorizer.fit_transform(data_corpus)
vectorizer.get_feature_names()
###Output
_____no_output_____
###Markdown
Bag of Words Models
###Code
corpus = [
"Some analysts think demand could drop this year because a large number of homeowners take on remodeling projectsafter buying a new property. With fewer homes selling, home values easing, and mortgage rates rising, they predict home renovations could fall to their lowest levels in three years.",
"Most home improvement stocks are expected to report fourth-quarter earnings next month.",
"The conversation boils down to how much leverage management can get out of its wide-ranging efforts to re-energize operations, branding, digital capabilities, and the menu–and, for investors, how much to pay for that.",
"RMD’s software acquisitions, efficiency, and mix overcame pricing and its gross margin improved by 90 bps Y/Y while its operating margin (including amortization) improved by 80 bps Y/Y. Since RMD expects the slower international flow generator growth to continue for the next few quarters, we have lowered our organic growth estimates to the mid-single digits. "
]
X = vectorizer.fit_transform(corpus).toarray()
import numpy as np
from sys import getsizeof
zeroes = np.where(X.flatten() == 0)[0].size
percent_sparse = zeroes / X.size
print(f"The bag of words feature space is {round(percent_sparse * 100,2)}% sparse. \n\
That's approximately {round(getsizeof(X) * percent_sparse,2)} bytes of wasted memory. This is why sklearn uses CSR (compressed sparse rows) instead of normal matrices!")
###Output
_____no_output_____
###Markdown
Distance Measures Euclidean DistanceEuclidean distances can range from 0 (completely identical) to $\infty$ (extremely dissimilar). The distance between two points, $x$ and $y$, can be defined as $d(x,y)$:$$d(x,y) = \sqrt{\sum_{i=1}^{n}(x_{i}-y_{i})^2}$$Compared to the other dominant distance measure (cosine similarity), **magnitude** plays an extremely important role.
###Code
from math import sqrt
def euclidean_distance_1(x,y):
distance = sum((a-b)**2 for a, b in zip(x, y))
return sqrt(distance)
###Output
_____no_output_____
###Markdown
There's typically an easier way to write this function that takes advantage of Numpy's vectorization capabilities:
###Code
import numpy as np
def euclidean_distance_2(x,y):
x = np.array(x)
y = np.array(y)
return np.linalg.norm(x-y)
###Output
_____no_output_____
###Markdown
Scikit Learn
###Code
from sklearn.metrics.pairwise import euclidean_distances
X = [document_1, document_2]
euclidean_distances(X)
###Output
_____no_output_____
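###Markdown
A small illustration of the point above about **magnitude** (the vectors are illustrative examples only): two vectors that point in exactly the same direction but differ greatly in length are far apart in Euclidean distance, yet sklearn's cosine similarity between them is 1.
###Code
from sklearn.metrics.pairwise import euclidean_distances, cosine_similarity
v = [[1, 2, 3],
     [10, 20, 30]]  # same direction, 10x the magnitude
print(euclidean_distances(v))  # large off-diagonal distances
print(cosine_similarity(v))    # every entry is 1.0
###Output
_____no_output_____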
###Markdown
Similarity MeasuresSimilarity measures will always range between -1 and 1. A similarity of -1 means the two objects are complete opposites, while a similarity of 1 indicates the objects are identical. Linear Relationships Pearson Correlation Coefficient* We use **ρ** when the correlation is being measured from the population, and **r** when it is being generated from a sample.* An r value of 1 represents a **perfect linear** relationship, and a value of -1 represents a perfect inverse linear relationship.The equation for Pearson's correlation coefficient is $$ρ_{Χ_Υ} = \frac{cov(X,Y)}{σ_Xσ_Y}$$ Intuition Behind Pearson Correlation Coefficient When $ρ_{Χ_Υ} = 1$ or $ρ_{Χ_Υ} = -1$This requires **$cov(X,Y) = σ_Xσ_Y$** or **$-1 * cov(X,Y) = σ_Xσ_Y$** (in the case of $ρ = -1$) . This corresponds with all the data points lying perfectly on the same line. Cosine SimilarityThe cosine similarity of two vectors (each vector will usually represent one document) is a measure that calculates $ cos(\theta)$, where $\theta$ is the angle between the two vectors.Therefore, if the vectors are **orthogonal** to each other (90 degrees), $cos(90) = 0$. If the vectors are in exactly the same direction, $\theta = 0$ and $cos(0) = 1$.Cosine similiarity **does not care about the magnitude of the vector, only the direction** in which it points. This can help normalize when comparing across documents that are different in terms of word count. Shift Invariance* The Pearson correlation coefficient between X and Y does not change with you transform $X \rightarrow a + bX$ and $Y \rightarrow c + dY$, assuming $a$, $b$, $c$, and $d$ are constants and $b$ and $d$ are positive.* Cosine similarity does, however, change when transformed in this way.Exercise (20 minutes):>In Python, find the **cosine similarity** and the **Pearson correlation coefficient** of the two following sentences, assuming a **one-hot encoded binary bag of words** model. You may use a library to create the BoW feature space, but do not use libraries other than `numpy` or `scipy` to compute Pearson and cosine similarity:>`A = "John likes to watch movies. Mary likes movies too"`>`B = "John also likes to watch football games, but he likes to watch movies on occasion as well"` 3. Define your cosine similarity functions```pythonfrom scipy.spatial.distance import cosine we are importing this library to check that our own cosine similarity func worksfrom numpy import dot to calculate dot productfrom numpy.linalg import norm to calculate the normdef cosine_similarity(A, B): numerator = dot(A, B) denominator = norm(A) * norm(B) return numerator / denominatordef cosine_distance(A,B): return 1 - cosine_similarityA = [0,2,3,4,1,2]B = [1,3,4,0,0,2] check that your native implementation and 3rd party library function produce the same valuesassert round(cosine_similarity(A,B),4) == round(cosine(A,B),4)``` 4. Get the two documents from the BoW feature space and calculate cosine similarity```pythoncosine_similarity(X[0], X[1])```>0.5241424183609592
###Code
from scipy.spatial.distance import cosine
from numpy import dot
import numpy as np
from numpy.linalg import norm
def cosine_similarity(A, B):
numerator = dot(A, B)
denominator = norm(A) * norm(B)
    return numerator / denominator  # this is the similarity; the cosine distance is 1 minus this value
def cosine_distance(A,B):
    return 1 - cosine_similarity(A, B)
A = [0,2,3,4,1,2]
B = [1,3,4,0,0,2]
# check that your native implementation and 3rd party library function produce the same values
assert round(cosine_similarity(A,B),4) == round(1 - cosine(A,B),4)
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
# take two very similar sentences, should have high similarity
# edit these sentences to become less similar, and the similarity score should decrease
data_corpus = ["John likes to watch movies. Mary likes movies too.",
"John also likes to watch football games"]
X = vectorizer.fit_transform(data_corpus)
X = X.toarray()
print(vectorizer.get_feature_names())
cosine_similarity(X[0], X[1])
###Output
_____no_output_____
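###Markdown
Pearson correlation (sketch) The exercise above also asks for the Pearson correlation coefficient, which the cell above does not compute. Below is a minimal sketch using only `numpy` (the helper name `pearson_correlation` is ours, not a library function), checked against `scipy.stats.pearsonr`; it reuses `dot`, `norm`, `cosine_similarity` and `X` from the cell above. It also demonstrates the shift-invariance point from the text: adding a constant to one vector leaves Pearson's r unchanged, while cosine similarity changes.
###Code
import numpy as np
from scipy.stats import pearsonr  # only used to check the implementation below
def pearson_correlation(A, B):
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    # centred dot product divided by the product of the centred norms, i.e. cov(X,Y) / (std(X) * std(Y))
    A_c, B_c = A - A.mean(), B - B.mean()
    return dot(A_c, B_c) / (norm(A_c) * norm(B_c))
assert round(pearson_correlation(X[0], X[1]), 4) == round(pearsonr(X[0], X[1])[0], 4)
# shift invariance: Pearson is unchanged by adding a constant, cosine similarity is not
shifted = X[1] + 5
print(pearson_correlation(X[0], X[1]), pearson_correlation(X[0], shifted))
print(cosine_similarity(X[0], X[1]), cosine_similarity(X[0], shifted))
###Output
_____no_output_____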
###Markdown
Challenge: Use the Example Below to Create Your Own Cosine Similarity Function Create a list of all the **vocabulary $V$**Using **`sklearn`**'s **`CountVectorizer`**:```pythonfrom sklearn.feature_extraction.text import CountVectorizervectorizer = CountVectorizer()data_corpus = ["John likes to watch movies. Mary likes movies too", "John also likes to watch football games, but he likes to watch movies on occasion as well"]X = vectorizer.fit_transform(data_corpus) V = vectorizer.get_feature_names()``` Native Implementation:```pythondef get_vocabulary(sentences): vocabulary = {} create an empty set - question: Why not a list? for sentence in sentences: this is a very crude form of "tokenization", would not actually use in production for word in sentence.split(" "): if word not in vocabulary: vocabulary.add(word) return vocabulary``` Create your Bag of Words model```pythonX = X.toarray()print(X)```Your console output:```python[[0 0 0 1 2 1 2 1 1 1] [1 1 1 1 1 0 0 1 0 1]]```
###Code
vectors = [[0,0,0,1,2,1,2,1,1,1],
[1,1,1,1,1,0,0,1,0,1]]
import math
def find_norm(vector):
total = 0
for element in vector:
total += element ** 2
return math.sqrt(total)
norm(vectors[0]) # Numpy
find_norm(vectors[0]) # your own
dot_product(vectors[0], vectors[1]) / (find_norm(vectors[0]) * find_norm(vectors[1]))
from sklearn.metrics.pairwise import cosine_distances, cosine_similarity
cosine_similarity(vectors)
###Output
_____no_output_____ |
_build/jupyter_execute/contents/Tensorflow/Basics.ipynb | ###Markdown
Basics Let's Start - The main difference between tensors and NumPy arrays is that tensors can be used on GPUs (graphical processing units) and TPUs (tensor processing units).- The number of dimensions of a tensor is called its rank. A scalar has rank $0$, a vector has rank $1$, a matrix has rank $2$, and a general tensor has rank $n$.- There are $2$ ways of creating tensors, `tf.Variable()` and `tf.constant()`, the difference being that tensors created with `tf.constant()` are immutable while tensors created with `tf.Variable()` are mutable. `any_tensor[2].assign(7)` can be used to change the value of a specific element in the tensor; the same would fail for `tf.constant()`.- There are other ways of creating tensors, examples being `tf.zeros` or `tf.ones`. You can also convert NumPy arrays into tensors.- Tensors can also be indexed just like Python lists.- You can add an extra dimension by using `tf.newaxis` or `tf.expand_dims`.- `tf.reshape()` and `tf.transpose()` allow us to reshape a tensor.- The data type of a tensor can be changed with `tf.cast(t1, dtype=tf.float16)`.- You can squeeze a tensor to remove single-dimensions (dimensions with size 1) using `tf.squeeze()`.
###Code
# Some common commands are as follows
import tensorflow as tf
print("Check TF version: ",tf.__version__)
t1 = tf.constant([[10., 7.],
[3., 2.],
[8., 9.]], dtype=tf.float16) # by default TF creates tensors with either an int32 or float32 datatype.
print("Access a specific feature of the tensor, in this case shape of t1: ",t1.shape)
print("Size of t1: ", tf.size(t1))
print("Datatype of every element:", t1.dtype)
print("Number of dimensions (rank):", t1.ndim)
print("Shape of tensor:", t1.shape)
print("Elements along axis 0 of tensor:", t1.shape[0])
print("Elements along last axis of tensor:", t1.shape[-1])
print("Total number of elements:", tf.size(t1).numpy()) # .numpy() converts to NumPy array
print("Details of the tensor: ",t1)
print("Index tensors: ", t1[:1,:])
import tensorflow as tf
# Math operations
t1 = tf.constant([[10., 7.],
[3., 2.],
[8., 9.]], dtype=tf.float16) # by default TF creates tensors with either an int32 or float32 datatype.
print("Sum: ",t1+10)
print("Substraction: ",t1-10)
print("Multiplication: ",t1*10, tf.multiply(t1, 10))
print("Matrix Multiplication: ",t1 @ tf.transpose(t1)) # can also be done with tf.tensordot()
# Aggregation functions
print("Max: ", tf.reduce_max(t1)) # same or min, mean
print("Sum: ", tf.reduce_sum(t1))
print("Max Position: ", tf.argmax(t1)) # same or min
###Output
Sum: tf.Tensor(
[[20. 17.]
[13. 12.]
[18. 19.]], shape=(3, 2), dtype=float16)
Substraction: tf.Tensor(
[[ 0. -3.]
[-7. -8.]
[-2. -1.]], shape=(3, 2), dtype=float16)
Multiplication: tf.Tensor(
[[100. 70.]
[ 30. 20.]
[ 80. 90.]], shape=(3, 2), dtype=float16) tf.Tensor(
[[100. 70.]
[ 30. 20.]
[ 80. 90.]], shape=(3, 2), dtype=float16)
Matrix Multiplication: tf.Tensor(
[[149. 44. 143.]
[ 44. 13. 42.]
[143. 42. 145.]], shape=(3, 3), dtype=float16)
Max: tf.Tensor(10.0, shape=(), dtype=float16)
Sum: tf.Tensor(39.0, shape=(), dtype=float16)
Max Position: tf.Tensor([0 2], shape=(2,), dtype=int64)
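###Markdown
Reshaping, casting and squeezing The bullet list above also mentions `tf.reshape`, `tf.newaxis`/`tf.expand_dims`, `tf.cast` and `tf.squeeze`, which the cells above do not demonstrate. A minimal sketch of those operations is below (the tensor values are illustrative only).
###Code
import tensorflow as tf
t1 = tf.constant([[10., 7.],
                  [3., 2.],
                  [8., 9.]], dtype=tf.float16)
print("Reshaped (3, 2) -> (2, 3): ", tf.reshape(t1, shape=(2, 3)))
print("Transposed: ", tf.transpose(t1))
print("Extra trailing dimension: ", tf.expand_dims(t1, axis=-1).shape)  # t1[..., tf.newaxis] is equivalent
print("Cast to float32: ", tf.cast(t1, dtype=tf.float32).dtype)
print("Squeezed back to (3, 2): ", tf.squeeze(tf.expand_dims(t1, axis=-1)).shape)
###Output
_____no_output_____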
###Markdown
Random Randomness is often used in deep learning, be it initializing weights in a Neural Network or shuffling images while feeding data to the model.
###Code
random_1 = tf.random.Generator.from_seed(35) # setting seed ensures reproducibility
random_1 = random_1.normal(shape = (3,2))
print("Generating tensor from a normal distribution: ", random_1)
print("Shuffling the elements of the tesnor: ", tf.random.shuffle(random_1))
###Output
Generating tensor from a normal distribution: tf.Tensor(
[[ 0.495291 -0.648484 ]
[-1.8700892 2.7830641 ]
[-0.645002 0.18022095]], shape=(3, 2), dtype=float32)
Shuffling the elements of the tesnor: tf.Tensor(
[[-0.645002 0.18022095]
[ 0.495291 -0.648484 ]
[-1.8700892 2.7830641 ]], shape=(3, 2), dtype=float32)
|
how-to-use-azureml/explain-model/explain-tabular-data-local/explain-local-sklearn-multiclass-classification.ipynb | ###Markdown
Iris flower classification with scikit-learn (run model explainer locally)  Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Explain a model with the AML explain-model package1. Train a SVM classification model using Scikit-learn2. Run 'explain_model' with full data in local mode, which doesn't contact any Azure services3. Run 'explain_model' with summarized data in local mode, which doesn't contact any Azure services4. Visualize the global and local explanations with the visualization dashboard.
###Code
from sklearn.datasets import load_iris
from sklearn import svm
from azureml.explain.model.tabular_explainer import TabularExplainer
###Output
_____no_output_____
###Markdown
1. Run model explainer locally with full data Load the iris flower dataset
###Code
iris = load_iris()
X = iris['data']
y = iris['target']
classes = iris['target_names']
feature_names = iris['feature_names']
# Split data into train and test
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
Train a SVM classification model, which you want to explain
###Code
clf = svm.SVC(gamma=0.001, C=100., probability=True)
model = clf.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
Explain predictions on your local machine
###Code
tabular_explainer = TabularExplainer(model, x_train, features = feature_names, classes=classes)
###Output
_____no_output_____
###Markdown
Explain overall model predictions (global explanation)
###Code
global_explanation = tabular_explainer.explain_global(x_test)
# Sorted SHAP values
print('ranked global importance values: {}'.format(global_explanation.get_ranked_global_values()))
# Corresponding feature names
print('ranked global importance names: {}'.format(global_explanation.get_ranked_global_names()))
# feature ranks (based on original order of features)
print('global importance rank: {}'.format(global_explanation.global_importance_rank))
# per class feature names
print('ranked per class feature names: {}'.format(global_explanation.get_ranked_per_class_names()))
# per class feature importance values
print('ranked per class feature values: {}'.format(global_explanation.get_ranked_per_class_values()))
dict(zip(global_explanation.get_ranked_global_names(), global_explanation.get_ranked_global_values()))
###Output
_____no_output_____
###Markdown
Explain overall model predictions as a collection of local (instance-level) explanations
###Code
# feature shap values for all features and all data points in the training data
print('local importance values: {}'.format(global_explanation.local_importance_values))
###Output
_____no_output_____
###Markdown
Explain local data points (individual instances)
###Code
# explain the first member of the test set
instance_num = 0
local_explanation = tabular_explainer.explain_local(x_test[instance_num,:])
# get the prediction for the first member of the test set and explain why model made that prediction
prediction_value = clf.predict(x_test)[instance_num]
sorted_local_importance_values = local_explanation.get_ranked_local_values()[prediction_value]
sorted_local_importance_names = local_explanation.get_ranked_local_names()[prediction_value]
dict(zip(sorted_local_importance_names, sorted_local_importance_values))
###Output
_____no_output_____
###Markdown
Load visualization dashboard
###Code
# Note you will need to have extensions enabled prior to jupyter kernel starting
!jupyter nbextension install --py --sys-prefix azureml.contrib.explain.model.visualize
!jupyter nbextension enable --py --sys-prefix azureml.contrib.explain.model.visualize
# Or, in Jupyter Labs, uncomment below
# jupyter labextension install @jupyter-widgets/jupyterlab-manager
# jupyter labextension install microsoft-mli-widget
from azureml.contrib.explain.model.visualize import ExplanationDashboard
ExplanationDashboard(global_explanation, model, x_test)
###Output
_____no_output_____ |
Colab_notebooks/CARE_2D_ZeroCostDL4Mic.ipynb | ###Markdown
**CARE: Content-aware image restoration (2D)**---CARE is a neural network capable of image restoration from corrupted bio-images, first published in 2018 by [Weigert *et al.* in Nature Methods](https://www.nature.com/articles/s41592-018-0216-7). The CARE network uses a U-Net network architecture and allows image restoration and resolution improvement in 2D and 3D images, in a supervised manner, using noisy images as input and low-noise images as targets for training. The function of the network is essentially determined by the set of images provided in the training dataset. For instance, if noisy images are provided as input and high signal-to-noise ratio images are provided as targets, the network will perform denoising. **This particular notebook enables restoration of 2D datasets. If you are interested in restoring a 3D dataset, you should use the CARE 3D notebook instead.**---*Disclaimer*:This notebook is part of the *Zero-Cost Deep-Learning to Enhance Microscopy* project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki). Jointly developed by the Jacquemet (link to https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories.This notebook is based on the following paper: **Content-aware image restoration: pushing the limits of fluorescence microscopy**, by Weigert *et al.* published in Nature Methods in 2018 (https://www.nature.com/articles/s41592-018-0216-7)And source code found in: https://github.com/csbdeep/csbdeepFor a more in-depth description of the features of the network, please refer to [this guide](http://csbdeep.bioimagecomputing.com/doc/) provided by the original authors of the work.We provide a dataset for the training of this notebook as a way to test its functionalities but the training and test data of the restoration experiments is also available from the authors of the original paper [here](https://publications.mpi-cbg.de/publications-sites/7207/).**Please also cite this original paper when using or developing this notebook.** **How to use this notebook?**---Video describing how to use our notebooks are available on youtube: - [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run through of the workflow to obtain the notebooks and the provided test datasets as well as a common use of the notebook - [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook---**Structure of a notebook**The notebook contains two types of cell: **Text cells** provide information and can be modified by douple-clicking the cell. You are currently reading the text cell. You can create a new text by clicking `+ Text`.**Code cells** contain code and the code can be modfied by selecting the cell. To execute the cell, move your cursor on the `[ ]`-mark on the left side of the cell (play button appears). Click to execute the cell. After execution is done the animation of play button stops. You can create a new coding cell by clicking `+ Code`.---**Table of contents, Code snippets** and **Files**On the top left side of the notebook you find three tabs which contain from top to bottom:*Table of contents* = contains structure of the notebook. Click the content to move quickly between sections.*Code snippets* = contain examples how to code certain tasks. You can ignore this when using this notebook.*Files* = contain all available files. After mounting your google drive (see section 1.) you will find your files and folders here. 
**Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive-button; your Google Drive is connected in section 1.2.**Note:** The "sample data" in "Files" contains default files. Do not upload anything in here!---**Making changes to the notebook****You can make a copy** of the notebook and save it to your Google Drive. To do this click file -> save a copy in drive.To **edit a cell**, double click on the text. This will show you either the source code (in code cells) or the source text (in text cells).You can use the ``-mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment. **0. Before getting started**--- For CARE to train, **it needs to have access to a paired training dataset**. This means that the same image needs to be acquired in the two conditions (for instance, low signal-to-noise ratio and high signal-to-noise ratio) and provided with indication of correspondence. Therefore, the data structure is important. It is necessary that all the input data are in the same folder and that all the output data is in a separate folder. The provided training dataset is already split in two folders called "Training - Low SNR images" (Training_source) and "Training - high SNR images" (Training_target). Information on how to generate a training dataset is available in our Wiki page: https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki**We strongly recommend that you generate extra paired images. These images can be used to assess the quality of your trained model (Quality control dataset)**. The quality control assessment can be done directly in this notebook. **Additionally, the corresponding input and output files need to have the same name**. Please note that you currently can **only use .tif files!**Here's a common data structure that can work:* Experiment A - **Training dataset** - Low SNR images (Training_source) - img_1.tif, img_2.tif, ... - High SNR images (Training_target) - img_1.tif, img_2.tif, ... - **Quality control dataset** - Low SNR images - img_1.tif, img_2.tif - High SNR images - img_1.tif, img_2.tif - **Data to be predicted** - **Results**---**Important note**- If you wish to **Train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to assess the quality of your model and **section 6** to run predictions using the model that you trained.- If you wish to **Evaluate your model** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to assess the quality of your model.- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 6** to run the predictions on the desired model.--- 0.1 Download example data
###Code
data_import = "Download example data from Biostudies" #@param ["Download example data from Biostudies", "Use my own"]
if data_import == "Download example data from Biostudies":
!wget -r ftp://ftp.ebi.ac.uk/biostudies/nfs/S-BSST/666/S-BSST666/Files/ZeroCostDl4Mic/Stardist_v2 --show-progress -q --cut-dirs=7 -nH -np
###Output
.listing [ <=> ] 961 4.12KB/s in 0.2s
Stardist_v2/.listin [ <=> ] 251 --.-KB/s in 0s
Stardist_v2/Stardis [ <=> ] 480 --.-KB/s in 0s
Stardist_v2/Stardis [ <=> ] 367 --.-KB/s in 0s
Stardist_v2/Stardis 100%[===================>] 2.00M 4.77MB/s in 0.4s
Stardist_v2/Stardis 100%[===================>] 2.00M 3.35MB/s in 0.6s
Stardist_v2/Stardis [ <=> ] 367 --.-KB/s in 0s
Stardist_v2/Stardis 100%[===================>] 1.00M 2.06MB/s in 0.5s
Stardist_v2/Stardis 100%[===================>] 1.00M 2.76MB/s in 0.4s
Stardist_v2/Stardis [ <=> ] 587 --.-KB/s in 0s
Stardist_v2/Stardis 100%[===================>] 172.05M 6.07MB/s in 31s
Stardist_v2/Stardis 100%[===================>] 172.05M 6.09MB/s in 34s
Stardist_v2/Stardis 100%[===================>] 172.05M 5.12MB/s in 42s
Stardist_v2/Stardis 100%[===================>] 172.05M 3.90MB/s in 38s
Stardist_v2/Stardis [ <=> ] 5.42K --.-KB/s in 0s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.13MB/s in 0.9s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.50MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.49MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.72MB/s in 0.7s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.50MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.89MB/s in 0.7s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.27MB/s in 0.9s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.33MB/s in 0.9s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.49MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.89MB/s in 0.7s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.62MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.49MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 3.44MB/s in 0.6s
Stardist_v2/Stardis 100%[===================>] 2.00M 1.89MB/s in 1.1s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.44MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.49MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.42MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 1.88MB/s in 1.1s
Stardist_v2/Stardis 100%[===================>] 2.00M 1.66MB/s in 1.2s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.55MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.52MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.94MB/s in 0.7s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.61MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.30MB/s in 0.9s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.48MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.23MB/s in 0.9s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.66MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 3.16MB/s in 0.6s
Stardist_v2/Stardis 100%[===================>] 2.00M 3.08MB/s in 0.6s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.66MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.32MB/s in 0.9s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.81MB/s in 0.7s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.38MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.49MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.06MB/s in 1.0s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.53MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.35MB/s in 0.9s
Stardist_v2/Stardis 100%[===================>] 2.00M 2.42MB/s in 0.8s
Stardist_v2/Stardis 100%[===================>] 2.00M 3.04MB/s in 0.7s
Stardist_v2/Stardis 100%[===================>] 2.00M 1.68MB/s in 1.2s
Stardist_v2/Stardis 100%[===================>] 2.00M 1.57MB/s in 1.3s
Stardist_v2/Stardis 100%[===================>] 2.00M 1.27MB/s in 1.6s
Stardist_v2/Stardis 100%[===================>] 2.00M 1.17MB/s in 1.7s
Stardist_v2/Stardis 100%[===================>] 2.00M 1.54MB/s in 1.3s
Stardist_v2/Stardis 100%[===================>] 2.00M 1.67MB/s in 1.2s
Stardist_v2/Stardis [ <=> ] 5.42K --.-KB/s in 0s
Stardist_v2/Stardis 100%[===================>] 1.00M 2.06MB/s in 0.5s
Stardist_v2/Stardis 100%[===================>] 1.00M 1.50MB/s in 0.7s
Stardist_v2/Stardis 100%[===================>] 1.00M 1.50MB/s in 0.7s
Stardist_v2/Stardis 100%[===================>] 1.00M 1.44MB/s in 0.7s
Stardist_v2/Stardis 100%[===================>] 1.00M 1.48MB/s in 0.7s
Stardist_v2/Stardis 100%[===================>] 1.00M 1.54MB/s in 0.6s
Stardist_v2/Stardis 100%[===================>] 1.00M 1.75MB/s in 0.6s
Stardist_v2/Stardis 100%[===================>] 1.00M 1.54MB/s in 0.6s
Stardist_v2/Stardis 100%[===================>] 1.00M 1.52MB/s in 0.7s
Stardist_v2/Stardis 100%[===================>] 1.00M 1.35MB/s in 0.7s
Stardist_v2/Stardis 100%[===================>] 1.00M 1.45MB/s in 0.7s
Stardist_v2/Stardis 100%[===================>] 1.00M 1.61MB/s in 0.6s
Stardist_v2/Stardis 100%[===================>] 1.00M 1.42MB/s in 0.7s
###Markdown
**1. Initialise the Colab session**

---

**1.1. Check for GPU access**

---

By default, the session should be using Python 3 and GPU acceleration, but it is possible to ensure that these are set properly by doing the following:

Go to **Runtime -> Change the Runtime type**

**Runtime type: Python 3** *(Python 3 is the programming language in which this program is written)*

**Accelerator: GPU** *(Graphics processing unit)*
###Code
#@markdown ##Run this cell to check if you have GPU access
%tensorflow_version 1.x
import tensorflow as tf
if tf.test.gpu_device_name()=='':
print('You do not have GPU access.')
print('Did you change your runtime ?')
print('If the runtime setting is correct then Google did not allocate a GPU for your session')
print('Expect slow performance. To access GPU try reconnecting later')
else:
print('You have GPU access')
!nvidia-smi
###Output
_____no_output_____
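###Markdown
As an optional cross-check, the short cell below lists the devices visible to TensorFlow itself, as an alternative view to `nvidia-smi`. This is only an illustrative sketch and is not required for the rest of the notebook.

###Code
#@markdown ##(Optional) List the devices TensorFlow can see
# Illustrative sketch: uses TensorFlow 1.x's device_lib to enumerate local devices
from tensorflow.python.client import device_lib

for device in device_lib.list_local_devices():
    print(device.device_type, device.name)

###Output
_____no_output_____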
###Markdown
**1.2. Mount your Google Drive**

---

To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook. Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive and select 'Allow', copy the code, paste it into the cell and press enter. This will give Colab access to the data on the drive. Once this is done, your data are available in the **Files** tab on the top left of the notebook.
###Code
#@markdown ##Run this cell to connect your Google Drive to Colab
#@markdown * Click on the URL.
#@markdown * Sign in your Google Account.
#@markdown * Copy the authorization code.
#@markdown * Enter the authorization code.
#@markdown * Click on "Files" site on the right. Refresh the site. Your Google Drive folder should now be available here as "drive".
#mounts user's Google Drive to Google Colab.
from google.colab import drive
drive.mount('/content/gdrive')
###Output
_____no_output_____
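###Markdown
If you want to confirm that the Drive was mounted correctly, the small sketch below simply lists the contents of the mount point (the top-level folder is typically named 'My Drive' or 'MyDrive' depending on the Colab version). It is optional and only illustrative.

###Code
#@markdown ##(Optional) Check that your Google Drive is mounted
import os

mount_point = '/content/gdrive'
if os.path.isdir(mount_point):
    print('Drive mounted, top-level content:', os.listdir(mount_point))
else:
    print('Drive does not appear to be mounted yet - run the cell above first.')

###Output
_____no_output_____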
###Markdown
**2. Install CARE and dependencies**

---

**2.1. Install key dependencies**

---
###Code
#@markdown ##Install CARE and dependencies
#Here, we install libraries which are not already included in Colab.
!pip install tifffile # contains tools to operate tiff-files
!pip install csbdeep # contains tools for restoration of fluorescence microcopy images (Content-aware Image Restoration, CARE). It uses Keras and Tensorflow.
!pip install wget
!pip install memory_profiler
!pip install fpdf
#Force session restart
exit(0)
###Output
_____no_output_____
###Markdown
**2.2. Restart your runtime**

---

**Your Runtime has automatically restarted. This is normal.**

**2.3. Load key dependencies**

---
###Code
#@markdown ##Load key dependencies
Notebook_version = ['1.12']
from builtins import any as b_any
def get_requirements_path():
# Store requirements file in 'contents' directory
current_dir = os.getcwd()
dir_count = current_dir.count('/') - 1
path = '../' * (dir_count) + 'requirements.txt'
return path
def filter_files(file_list, filter_list):
filtered_list = []
for fname in file_list:
if b_any(fname.split('==')[0] in s for s in filter_list):
filtered_list.append(fname)
return filtered_list
def build_requirements_file(before, after):
path = get_requirements_path()
# Exporting requirements.txt for local run
!pip freeze > $path
# Get minimum requirements file
df = pd.read_csv(path, delimiter = "\n")
mod_list = [m.split('.')[0] for m in after if not m in before]
req_list_temp = df.values.tolist()
req_list = [x[0] for x in req_list_temp]
# Replace with package name and handle cases where import name is different to module name
mod_name_list = [['sklearn', 'scikit-learn'], ['skimage', 'scikit-image']]
mod_replace_list = [[x[1] for x in mod_name_list] if s in [x[0] for x in mod_name_list] else s for s in mod_list]
filtered_list = filter_files(req_list, mod_replace_list)
file=open(path,'w')
for item in filtered_list:
file.writelines(item + '\n')
file.close()
import sys
before = [str(m) for m in sys.modules]
%load_ext memory_profiler
#Here, we import and enable Tensorflow 1 instead of Tensorflow 2.
%tensorflow_version 1.x
import tensorflow
import tensorflow as tf
print(tensorflow.__version__)
print("Tensorflow enabled.")
# ------- Variable specific to CARE -------
from csbdeep.utils import download_and_extract_zip_file, plot_some, axes_dict, plot_history, Path, download_and_extract_zip_file
from csbdeep.data import RawData, create_patches
from csbdeep.io import load_training_data, save_tiff_imagej_compatible
from csbdeep.models import Config, CARE
from csbdeep import data
from __future__ import print_function, unicode_literals, absolute_import, division
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
# ------- Common variable to all ZeroCostDL4Mic notebooks -------
import numpy as np
from matplotlib import pyplot as plt
import urllib
import os, random
import shutil
import zipfile
from tifffile import imread, imsave
import time
import sys
import wget
from pathlib import Path
import pandas as pd
import csv
from glob import glob
from scipy import signal
from scipy import ndimage
from skimage import io
from sklearn.linear_model import LinearRegression
from skimage.util import img_as_uint
import matplotlib as mpl
from skimage.metrics import structural_similarity
from skimage.metrics import peak_signal_noise_ratio as psnr
from astropy.visualization import simple_norm
from skimage import img_as_float32
from skimage.util import img_as_ubyte
from tqdm import tqdm
from fpdf import FPDF, HTMLMixin
from datetime import datetime
import subprocess
from pip._internal.operations.freeze import freeze
# Colors for the warning messages
class bcolors:
WARNING = '\033[31m'
W = '\033[0m' # white (normal)
R = '\033[31m' # red
#Disable some of the tensorflow warnings
import warnings
warnings.filterwarnings("ignore")
print("Libraries installed")
# Check if this is the latest version of the notebook
Latest_notebook_version = pd.read_csv("https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_ZeroCostDL4Mic_Release.csv")
print('Notebook version: '+Notebook_version[0])
strlist = Notebook_version[0].split('.')
Notebook_version_main = strlist[0]+'.'+strlist[1]
if Notebook_version_main == Latest_notebook_version.columns:
print("This notebook is up-to-date.")
else:
print(bcolors.WARNING +"A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki")
!pip freeze > requirements.txt
#Create a pdf document with training summary
def pdf_export(trained = False, augmentation = False, pretrained_model = False):
# save FPDF() class into a
# variable pdf
#from datetime import datetime
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
Network = 'CARE 2D'
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Training report for '+Network+' model ('+model_name+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
# add another cell
if trained:
training_time = "Training time: "+str(hour)+ "hour(s) "+str(mins)+"min(s) "+str(round(sec))+"sec(s)"
pdf.cell(190, 5, txt = training_time, ln = 1, align='L')
pdf.ln(1)
Header_2 = 'Information for your materials and methods:'
pdf.cell(190, 5, txt=Header_2, ln=1, align='L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
#print(all_packages)
#Main Packages
main_packages = ''
version_numbers = []
for name in ['tensorflow','numpy','Keras','csbdeep']:
find_name=all_packages.find(name)
main_packages = main_packages+all_packages[find_name:all_packages.find(',',find_name)]+', '
#Version numbers only here:
version_numbers.append(all_packages[find_name+len(name)+2:all_packages.find(',',find_name)])
cuda_version = subprocess.run('nvcc --version',stdout=subprocess.PIPE, shell=True)
cuda_version = cuda_version.stdout.decode('utf-8')
cuda_version = cuda_version[cuda_version.find(', V')+3:-1]
gpu_name = subprocess.run('nvidia-smi',stdout=subprocess.PIPE, shell=True)
gpu_name = gpu_name.stdout.decode('utf-8')
gpu_name = gpu_name[gpu_name.find('Tesla'):gpu_name.find('Tesla')+10]
#print(cuda_version[cuda_version.find(', V')+3:-1])
#print(gpu_name)
shape = io.imread(Training_source+'/'+os.listdir(Training_source)[1]).shape
dataset_size = len(os.listdir(Training_source))
text = 'The '+Network+' model was trained from scratch for '+str(number_of_epochs)+' epochs on '+str(dataset_size*number_of_patches)+' paired image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_size)+','+str(patch_size)+')) with a batch size of '+str(batch_size)+' and a '+config.train_loss+' loss function, using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+'GPU.'
if pretrained_model:
text = 'The '+Network+' model was trained for '+str(number_of_epochs)+' epochs on '+str(dataset_size*number_of_patches)+' paired image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_size)+','+str(patch_size)+')) with a batch size of '+str(batch_size)+' and a '+config.train_loss+' loss function, using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). The model was re-trained from a pretrained model. Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+'GPU.'
pdf.set_font('')
pdf.set_font_size(10.)
pdf.multi_cell(190, 5, txt = text, align='L')
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.ln(1)
pdf.cell(28, 5, txt='Augmentation: ', ln=0)
pdf.set_font('')
if augmentation:
aug_text = 'The dataset was augmented by a factor of '+str(Multiply_dataset_by)+' by'
if rotate_270_degrees != 0 or rotate_90_degrees != 0:
aug_text = aug_text+'\n- rotation'
if flip_left_right != 0 or flip_top_bottom != 0:
aug_text = aug_text+'\n- flipping'
if random_zoom_magnification != 0:
aug_text = aug_text+'\n- random zoom magnification'
if random_distortion != 0:
aug_text = aug_text+'\n- random distortion'
if image_shear != 0:
aug_text = aug_text+'\n- image shearing'
if skew_image != 0:
aug_text = aug_text+'\n- image skewing'
else:
aug_text = 'No augmentation was used for training.'
pdf.multi_cell(190, 5, txt=aug_text, align='L')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Parameters', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
if Use_Default_Advanced_Parameters:
pdf.cell(200, 5, txt='Default Advanced Parameters were enabled')
pdf.cell(200, 5, txt='The following parameters were used for training:')
pdf.ln(1)
html = """
<table width=40% style="margin-left:0px;">
<tr>
<th width = 50% align="left">Parameter</th>
<th width = 50% align="left">Value</th>
</tr>
<tr>
<td width = 50%>number_of_epochs</td>
<td width = 50%>{0}</td>
</tr>
<tr>
<td width = 50%>patch_size</td>
<td width = 50%>{1}</td>
</tr>
<tr>
<td width = 50%>number_of_patches</td>
<td width = 50%>{2}</td>
</tr>
<tr>
<td width = 50%>batch_size</td>
<td width = 50%>{3}</td>
</tr>
<tr>
<td width = 50%>number_of_steps</td>
<td width = 50%>{4}</td>
</tr>
<tr>
<td width = 50%>percentage_validation</td>
<td width = 50%>{5}</td>
</tr>
<tr>
<td width = 50%>initial_learning_rate</td>
<td width = 50%>{6}</td>
</tr>
</table>
""".format(number_of_epochs,str(patch_size)+'x'+str(patch_size),number_of_patches,batch_size,number_of_steps,percentage_validation,initial_learning_rate)
pdf.write_html(html)
#pdf.multi_cell(190, 5, txt = text_2, align='L')
pdf.set_font("Arial", size = 11, style='B')
pdf.ln(1)
pdf.cell(190, 5, txt = 'Training Dataset', align='L', ln=1)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(29, 5, txt= 'Training_source:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = Training_source, align = 'L')
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(27, 5, txt= 'Training_target:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = Training_target, align = 'L')
#pdf.cell(190, 5, txt=aug_text, align='L', ln=1)
pdf.ln(1)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(22, 5, txt= 'Model Path:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = model_path+'/'+model_name, align = 'L')
pdf.ln(1)
pdf.cell(60, 5, txt = 'Example Training pair', ln=1)
pdf.ln(1)
exp_size = io.imread('/content/TrainingDataExample_CARE2D.png').shape
pdf.image('/content/TrainingDataExample_CARE2D.png', x = 11, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))
pdf.ln(1)
ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "Democratising deep learning for microscopy with ZeroCostDL4Mic." Nature Communications (2021).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- CARE: Weigert, Martin, et al. "Content-aware image restoration: pushing the limits of fluorescence microscopy." Nature methods 15.12 (2018): 1090-1097.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
if augmentation:
ref_3 = '- Augmentor: Bloice, Marcus D., Christof Stocker, and Andreas Holzinger. "Augmentor: an image augmentation library for machine learning." arXiv preprint arXiv:1708.04680 (2017).'
pdf.multi_cell(190, 5, txt = ref_3, align='L')
pdf.ln(3)
reminder = 'Important:\nRemember to perform the quality control step on all newly trained models\nPlease consider depositing your training dataset on Zenodo'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(model_path+'/'+model_name+'/'+model_name+"_training_report.pdf")
#Make a pdf summary of the QC results
def qc_pdf_export():
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
Network = 'CARE 2D'
#model_name = os.path.basename(full_QC_model_path)
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Quality Control report for '+Network+' model ('+QC_model_name+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(2)
pdf.cell(190, 5, txt = 'Development of Training Losses', ln=1, align='L')
pdf.ln(1)
exp_size = io.imread(full_QC_model_path+'Quality Control/QC_example_data.png').shape
if os.path.exists(full_QC_model_path+'Quality Control/lossCurvePlots.png'):
pdf.image(full_QC_model_path+'Quality Control/lossCurvePlots.png', x = 11, y = None, w = round(exp_size[1]/10), h = round(exp_size[0]/13))
else:
pdf.set_font('')
pdf.set_font('Arial', size=10)
pdf.multi_cell(190, 5, txt='If you would like to see the evolution of the loss function during training please play the first cell of the QC section in the notebook.', align='L')
pdf.ln(2)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.ln(3)
pdf.cell(80, 5, txt = 'Example Quality Control Visualisation', ln=1)
pdf.ln(1)
exp_size = io.imread(full_QC_model_path+'Quality Control/QC_example_data.png').shape
pdf.image(full_QC_model_path+'Quality Control/QC_example_data.png', x = 16, y = None, w = round(exp_size[1]/10), h = round(exp_size[0]/10))
pdf.ln(1)
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Quality Control Metrics', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
pdf.ln(1)
html = """
<body>
<font size="7" face="Courier New" >
<table width=94% style="margin-left:0px;">"""
with open(full_QC_model_path+'Quality Control/QC_metrics_'+QC_model_name+'.csv', 'r') as csvfile:
metrics = csv.reader(csvfile)
header = next(metrics)
image = header[0]
mSSIM_PvsGT = header[1]
mSSIM_SvsGT = header[2]
NRMSE_PvsGT = header[3]
NRMSE_SvsGT = header[4]
PSNR_PvsGT = header[5]
PSNR_SvsGT = header[6]
header = """
<tr>
<th width = 10% align="left">{0}</th>
<th width = 15% align="left">{1}</th>
<th width = 15% align="center">{2}</th>
<th width = 15% align="left">{3}</th>
<th width = 15% align="center">{4}</th>
<th width = 15% align="left">{5}</th>
<th width = 15% align="center">{6}</th>
</tr>""".format(image,mSSIM_PvsGT,mSSIM_SvsGT,NRMSE_PvsGT,NRMSE_SvsGT,PSNR_PvsGT,PSNR_SvsGT)
html = html+header
for row in metrics:
image = row[0]
mSSIM_PvsGT = row[1]
mSSIM_SvsGT = row[2]
NRMSE_PvsGT = row[3]
NRMSE_SvsGT = row[4]
PSNR_PvsGT = row[5]
PSNR_SvsGT = row[6]
cells = """
<tr>
<td width = 10% align="left">{0}</td>
<td width = 15% align="center">{1}</td>
<td width = 15% align="center">{2}</td>
<td width = 15% align="center">{3}</td>
<td width = 15% align="center">{4}</td>
<td width = 15% align="center">{5}</td>
<td width = 15% align="center">{6}</td>
</tr>""".format(image,str(round(float(mSSIM_PvsGT),3)),str(round(float(mSSIM_SvsGT),3)),str(round(float(NRMSE_PvsGT),3)),str(round(float(NRMSE_SvsGT),3)),str(round(float(PSNR_PvsGT),3)),str(round(float(PSNR_SvsGT),3)))
html = html+cells
html = html+"""</body></table>"""
pdf.write_html(html)
pdf.ln(1)
pdf.set_font('')
pdf.set_font_size(10.)
ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "Democratising deep learning for microscopy with ZeroCostDL4Mic." Nature Communications (2021).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- CARE: Weigert, Martin, et al. "Content-aware image restoration: pushing the limits of fluorescence microscopy." Nature methods 15.12 (2018): 1090-1097.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
pdf.ln(3)
reminder = 'To find the parameters and other information about how this model was trained, go to the training_report.pdf of this model which should be in the folder of the same name.'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(full_QC_model_path+'Quality Control/'+QC_model_name+'_QC_report.pdf')
# Build requirements file for local run
after = [str(m) for m in sys.modules]
build_requirements_file(before, after)
###Output
_____no_output_____
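###Markdown
The report functions defined above (`pdf_export` and `qc_pdf_export`) are built on the `fpdf` package. The cell below is a minimal, self-contained sketch of that pattern (page, font, cells, output file); it is only illustrative and writes a throwaway file under /content.

###Code
#@markdown ##(Optional) Minimal FPDF sketch, illustrating how the report functions work
from fpdf import FPDF

demo_pdf = FPDF()
demo_pdf.add_page()
demo_pdf.set_font('Arial', size=11, style='B')
demo_pdf.cell(190, 5, txt='Example report header', ln=1, align='L')
demo_pdf.set_font('Arial', size=10)
demo_pdf.multi_cell(190, 5, txt='Body text is written with cell/multi_cell, exactly as in pdf_export above.', align='L')
demo_pdf.output('/content/fpdf_example.pdf')   # throwaway file, safe to delete
print('Wrote /content/fpdf_example.pdf')

###Output
_____no_output_____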
###Markdown
**3. Select your parameters and paths**

---

**3.1. Setting main training parameters**

---

**Paths for training, predictions and results**

**`Training_source:`, `Training_target`:** These are the paths to your folders containing the Training_source (low SNR images) and Training_target (high SNR images or ground truth) training data respectively. To find the paths of the folders containing the respective datasets, go to your Files on the left of the notebook, navigate to the folder containing your files and copy the path by right-clicking on the folder, **Copy path**, and pasting it into the right box below.

**`model_name`:** Use only my_model-style names, not my-model (use "_", not "-"). Do not use spaces in the name. Avoid using the name of an existing model (saved in the same folder) as it will be overwritten.

**`model_path`**: Enter the path where your model will be saved once trained (for instance your result folder).

**Training Parameters**

**`number_of_epochs`:** Input how many epochs (rounds) the network will be trained for. Preliminary results can already be observed after a few (10-30) epochs, but a full training should run for 100-300 epochs. Evaluate the performance after training (see Section 5). **Default value: 50**

**`patch_size`:** CARE divides the image into patches for training. Input the size of the patches (length of a side). The value should be smaller than the dimensions of the image and divisible by 8. **Default value: 128**

**When choosing the patch_size, the value should be i) large enough that it will enclose many instances, ii) small enough that the resulting patches fit into the RAM.**

**`number_of_patches`:** Input the number of patches per image. Increasing the number of patches allows for larger training datasets. **Default value: 50**

**Decreasing the patch size or increasing the number of patches may improve the training but may also increase the training time.**

**Advanced Parameters - experienced users only**

**`batch_size:`** This parameter defines the number of patches seen in each training step. Reducing or increasing the **batch size** may slow or speed up your training, respectively, and can influence network performance. **Default value: 16**

**`number_of_steps`:** Define the number of training steps per epoch. By default, or if set to zero, this parameter is calculated so that each patch is seen at least once per epoch. **Default value: Number of patches / batch_size**

**`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during training. **Default value: 10**

**`initial_learning_rate`:** Input the initial value to be used as learning rate. **Default value: 0.0004**
###Code
#@markdown ###Path to training images:
Training_source = "" #@param {type:"string"}
InputFile = Training_source+"/*.tif"
Training_target = "" #@param {type:"string"}
OutputFile = Training_target+"/*.tif"
#Define where the patch file will be saved
base = "/content"
# model name and path
#@markdown ###Name of the model and path to model folder:
model_name = "" #@param {type:"string"}
model_path = "" #@param {type:"string"}
# other parameters for training.
#@markdown ###Training Parameters
#@markdown Number of epochs:
number_of_epochs = 50#@param {type:"number"}
#@markdown Patch size (pixels) and number
patch_size = 128#@param {type:"number"} # in pixels
number_of_patches = 50#@param {type:"number"}
#@markdown ###Advanced Parameters
Use_Default_Advanced_Parameters = True #@param {type:"boolean"}
#@markdown ###If not, please input:
batch_size = 16#@param {type:"number"}
number_of_steps = 0#@param {type:"number"}
percentage_validation = 10 #@param {type:"number"}
initial_learning_rate = 0.0004 #@param {type:"number"}
if (Use_Default_Advanced_Parameters):
print("Default advanced parameters enabled")
batch_size = 16
percentage_validation = 10
initial_learning_rate = 0.0004
#Here we define the percentage to use for validation
percentage = percentage_validation/100
#here we check that no model with the same name already exist, if so print a warning
if os.path.exists(model_path+'/'+model_name):
print(bcolors.WARNING +"!! WARNING: "+model_name+" already exists and will be deleted in the following cell !!")
print(bcolors.WARNING +"To continue training "+model_name+", choose a new model_name here, and load "+model_name+" in section 3.3"+W)
# Here we disable pre-trained model by default (in case the cell is not ran)
Use_pretrained_model = False
# Here we disable data augmentation by default (in case the cell is not ran)
Use_Data_augmentation = False
print("Parameters initiated.")
# This will display a randomly chosen dataset input and output
random_choice = random.choice(os.listdir(Training_source))
x = imread(Training_source+"/"+random_choice)
# Here we check that the input images contains the expected dimensions
if len(x.shape) == 2:
print("Image dimensions (y,x)",x.shape)
if not len(x.shape) == 2:
print(bcolors.WARNING +"Your images appear to have the wrong dimensions. Image dimension",x.shape)
#Find image XY dimension
Image_Y = x.shape[0]
Image_X = x.shape[1]
#Hyperparameters failsafes
# Here we check that patch_size is smaller than the smallest xy dimension of the image
if patch_size > min(Image_Y, Image_X):
patch_size = min(Image_Y, Image_X)
print (bcolors.WARNING + " Your chosen patch_size is bigger than the xy dimension of your image; therefore the patch_size chosen is now:",patch_size)
# Here we check that patch_size is divisible by 8
if not patch_size % 8 == 0:
patch_size = ((int(patch_size / 8)-1) * 8)
print (bcolors.WARNING + " Your chosen patch_size is not divisible by 8; therefore the patch_size chosen is now:",patch_size)
os.chdir(Training_target)
y = imread(Training_target+"/"+random_choice)
f=plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
plt.imshow(x, norm=simple_norm(x, percent = 99), interpolation='nearest')
plt.title('Training source')
plt.axis('off');
plt.subplot(1,2,2)
plt.imshow(y, norm=simple_norm(y, percent = 99), interpolation='nearest')
plt.title('Training target')
plt.axis('off');
plt.savefig('/content/TrainingDataExample_CARE2D.png',bbox_inches='tight',pad_inches=0)
###Output
_____no_output_____
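###Markdown
To make the parameter failsafes above more concrete, the cell below is a small, self-contained sketch (with made-up values, not your actual data) of how a requested `patch_size` gets constrained to the image size and to a multiple of 8, and how the default number of steps per epoch follows from the number of patches, the validation split and the batch size.

###Code
#@markdown ##(Optional) Illustration of the patch_size and number_of_steps defaults
# All values below are hypothetical and only serve to illustrate the arithmetic.
example_image_shape = (512, 697)      # (Y, X) of a hypothetical training image
requested_patch_size = 150
example_batch_size = 16
n_images, patches_per_image = 40, 50
validation_fraction = 0.10

# 1) The patch must fit inside the image
adjusted_patch_size = min(requested_patch_size, min(example_image_shape))
# 2) The U-Net used by CARE needs the patch size to be divisible by 8
#    (the notebook's own failsafe above is slightly more conservative)
adjusted_patch_size = (adjusted_patch_size // 8) * 8

# 3) Default steps per epoch: every training patch seen at least once
total_patches = n_images * patches_per_image
training_patches = int(total_patches * (1 - validation_fraction))
default_steps = training_patches // example_batch_size + 1

print('adjusted patch size:', adjusted_patch_size)    # 144
print('default number_of_steps:', default_steps)      # 113

###Output
_____no_output_____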
###Markdown
**3.2. Data augmentation**

---

Data augmentation can improve training progress by amplifying differences in the dataset. This can be useful if the available dataset is small since, in this case, it is possible that a network could quickly learn every example in the dataset (overfitting) without augmentation. Augmentation is not necessary for training, and if your training dataset is large you should disable it.

**However, data augmentation is not a magic solution and may also introduce issues. Therefore, we recommend that you train your network with and without augmentation, and use the QC section to validate that it improves overall performance.**

Data augmentation is performed here by [Augmentor](https://github.com/mdbloice/Augmentor).

[Augmentor](https://github.com/mdbloice/Augmentor) was described in the following article:

Marcus D Bloice, Peter M Roth, Andreas Holzinger, Biomedical image augmentation using Augmentor, Bioinformatics, https://doi.org/10.1093/bioinformatics/btz259

**Please also cite this original paper when publishing results obtained using this notebook with augmentation enabled.**
###Code
#Data augmentation
Use_Data_augmentation = False #@param {type:"boolean"}
if Use_Data_augmentation:
!pip install Augmentor
import Augmentor
#@markdown ####Choose a factor by which you want to multiply your original dataset
Multiply_dataset_by = 30 #@param {type:"slider", min:1, max:30, step:1}
Save_augmented_images = False #@param {type:"boolean"}
Saving_path = "" #@param {type:"string"}
Use_Default_Augmentation_Parameters = True #@param {type:"boolean"}
#@markdown ###If not, please choose the probability of the following image manipulations to be used to augment your dataset (1 = always used; 0 = disabled ):
#@markdown ####Mirror and rotate images
rotate_90_degrees = 0 #@param {type:"slider", min:0, max:1, step:0.1}
rotate_270_degrees = 0 #@param {type:"slider", min:0, max:1, step:0.1}
flip_left_right = 0 #@param {type:"slider", min:0, max:1, step:0.1}
flip_top_bottom = 0 #@param {type:"slider", min:0, max:1, step:0.1}
#@markdown ####Random image Zoom
random_zoom = 0 #@param {type:"slider", min:0, max:1, step:0.1}
random_zoom_magnification = 0 #@param {type:"slider", min:0, max:1, step:0.1}
#@markdown ####Random image distortion
random_distortion = 0 #@param {type:"slider", min:0, max:1, step:0.1}
#@markdown ####Image shearing and skewing
image_shear = 0 #@param {type:"slider", min:0, max:1, step:0.1}
max_image_shear = 1 #@param {type:"slider", min:1, max:25, step:1}
skew_image = 0 #@param {type:"slider", min:0, max:1, step:0.1}
skew_image_magnitude = 0 #@param {type:"slider", min:0, max:1, step:0.1}
if Use_Default_Augmentation_Parameters:
rotate_90_degrees = 0.5
rotate_270_degrees = 0.5
flip_left_right = 0.5
flip_top_bottom = 0.5
if not Multiply_dataset_by >5:
random_zoom = 0
random_zoom_magnification = 0.9
random_distortion = 0
image_shear = 0
max_image_shear = 10
skew_image = 0
skew_image_magnitude = 0
if Multiply_dataset_by >5:
random_zoom = 0.1
random_zoom_magnification = 0.9
random_distortion = 0.5
image_shear = 0.2
max_image_shear = 5
if Multiply_dataset_by >25:
random_zoom = 0.5
random_zoom_magnification = 0.8
random_distortion = 0.5
image_shear = 0.5
max_image_shear = 20
list_files = os.listdir(Training_source)
Nb_files = len(list_files)
Nb_augmented_files = (Nb_files * Multiply_dataset_by)
if Use_Data_augmentation:
print("Data augmentation enabled")
# Here we set the path for the various folder were the augmented images will be loaded
# All images are first saved into the augmented folder
#Augmented_folder = "/content/Augmented_Folder"
if not Save_augmented_images:
Saving_path= "/content"
Augmented_folder = Saving_path+"/Augmented_Folder"
if os.path.exists(Augmented_folder):
shutil.rmtree(Augmented_folder)
os.makedirs(Augmented_folder)
#Training_source_augmented = "/content/Training_source_augmented"
Training_source_augmented = Saving_path+"/Training_source_augmented"
if os.path.exists(Training_source_augmented):
shutil.rmtree(Training_source_augmented)
os.makedirs(Training_source_augmented)
#Training_target_augmented = "/content/Training_target_augmented"
Training_target_augmented = Saving_path+"/Training_target_augmented"
if os.path.exists(Training_target_augmented):
shutil.rmtree(Training_target_augmented)
os.makedirs(Training_target_augmented)
# Here we generate the augmented images
#Load the images
p = Augmentor.Pipeline(Training_source, Augmented_folder)
#Define the matching images
p.ground_truth(Training_target)
#Define the augmentation possibilities
if not rotate_90_degrees == 0:
p.rotate90(probability=rotate_90_degrees)
if not rotate_270_degrees == 0:
p.rotate270(probability=rotate_270_degrees)
if not flip_left_right == 0:
p.flip_left_right(probability=flip_left_right)
if not flip_top_bottom == 0:
p.flip_top_bottom(probability=flip_top_bottom)
if not random_zoom == 0:
p.zoom_random(probability=random_zoom, percentage_area=random_zoom_magnification)
if not random_distortion == 0:
p.random_distortion(probability=random_distortion, grid_width=4, grid_height=4, magnitude=8)
if not image_shear == 0:
p.shear(probability=image_shear,max_shear_left=20,max_shear_right=20)
p.sample(int(Nb_augmented_files))
print(int(Nb_augmented_files),"matching images generated")
# Here we sort through the images and move them back to augmented trainning source and targets folders
augmented_files = os.listdir(Augmented_folder)
for f in augmented_files:
if (f.startswith("_groundtruth_(1)_")):
shortname_noprefix = f[17:]
shutil.copyfile(Augmented_folder+"/"+f, Training_target_augmented+"/"+shortname_noprefix)
if not (f.startswith("_groundtruth_(1)_")):
shutil.copyfile(Augmented_folder+"/"+f, Training_source_augmented+"/"+f)
for filename in os.listdir(Training_source_augmented):
os.chdir(Training_source_augmented)
os.rename(filename, filename.replace('_original', ''))
#Here we clean up the extra files
shutil.rmtree(Augmented_folder)
if not Use_Data_augmentation:
print(bcolors.WARNING+"Data augmentation disabled")
###Output
_____no_output_____
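###Markdown
To illustrate what the augmentation above does conceptually, the cell below is a minimal, library-free sketch (using random arrays, not your data): the key point is that the *same* geometric transform must be applied to a source image and to its ground-truth target so the pair stays registered.

###Code
#@markdown ##(Optional) Minimal sketch of paired geometric augmentation
import numpy as np

rng = np.random.default_rng(0)
demo_source = rng.random((128, 128)).astype(np.float32)   # stands in for a low SNR image
demo_target = rng.random((128, 128)).astype(np.float32)   # stands in for its high SNR target

def augment_pair(src, tgt, rng):
    # Apply the same rotation/flip to both images of the pair
    k = int(rng.integers(0, 4))            # number of 90-degree rotations
    src, tgt = np.rot90(src, k), np.rot90(tgt, k)
    if rng.random() < 0.5:                 # horizontal flip with probability 0.5
        src, tgt = np.fliplr(src), np.fliplr(tgt)
    return src, tgt

aug_source, aug_target = augment_pair(demo_source, demo_target, rng)
print(aug_source.shape, aug_target.shape)  # shapes are preserved: (128, 128) (128, 128)

###Output
_____no_output_____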
###Markdown
**3.3. Using weights from a pre-trained model as initial weights**

---

Here, you can set the path to a pre-trained model from which the weights can be extracted and used as a starting point for this training session. **This pre-trained model needs to be a CARE 2D model**. This option allows you to perform training over multiple Colab runtimes or to do transfer learning using models trained outside of ZeroCostDL4Mic. **You do not need to run this section if you want to train a network from scratch**.

In order to continue training from the point where the pre-trained model left off, it is advisable to also **load the learning rate** that was used when the training ended. This is automatically saved for models trained with ZeroCostDL4Mic and will be loaded here. If no learning rate can be found in the model folder provided, the default learning rate will be used.
###Code
# @markdown ##Loading weights from a pre-trained network
Use_pretrained_model = False #@param {type:"boolean"}
pretrained_model_choice = "Model_from_file" #@param ["Model_from_file"]
Weights_choice = "best" #@param ["last", "best"]
#@markdown ###If you chose "Model_from_file", please provide the path to the model folder:
pretrained_model_path = "" #@param {type:"string"}
# --------------------- Check if we load a previously trained model ------------------------
if Use_pretrained_model:
# --------------------- Load the model from the choosen path ------------------------
if pretrained_model_choice == "Model_from_file":
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".h5")
# --------------------- Download the a model provided in the XXX ------------------------
if pretrained_model_choice == "Model_name":
pretrained_model_name = "Model_name"
pretrained_model_path = "/content/"+pretrained_model_name
print("Downloading the 2D_Demo_Model_from_Stardist_2D_paper")
if os.path.exists(pretrained_model_path):
shutil.rmtree(pretrained_model_path)
os.makedirs(pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".h5")
# --------------------- Add additional pre-trained models here ------------------------
# --------------------- Check the model exist ------------------------
# If the model path chosen does not contain a pretrain model then use_pretrained_model is disabled,
if not os.path.exists(h5_file_path):
print(bcolors.WARNING+'WARNING: weights_'+Weights_choice+'.h5 pretrained model does not exist')
Use_pretrained_model = False
# If the model path contains a pretrain model, we load the training rate,
if os.path.exists(h5_file_path):
#Here we check if the learning rate can be loaded from the quality control folder
if os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
with open(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv'),'r') as csvfile:
csvRead = pd.read_csv(csvfile, sep=',')
#print(csvRead)
if "learning rate" in csvRead.columns: #Here we check that the learning rate column exist (compatibility with model trained un ZeroCostDL4Mic bellow 1.4)
print("pretrained network learning rate found")
#find the last learning rate
lastLearningRate = csvRead["learning rate"].iloc[-1]
#Find the learning rate corresponding to the lowest validation loss
min_val_loss = csvRead[csvRead['val_loss'] == min(csvRead['val_loss'])]
#print(min_val_loss)
bestLearningRate = min_val_loss['learning rate'].iloc[-1]
if Weights_choice == "last":
print('Last learning rate: '+str(lastLearningRate))
if Weights_choice == "best":
print('Learning rate of best validation loss: '+str(bestLearningRate))
if not "learning rate" in csvRead.columns: #if the column does not exist, then initial learning rate is used instead
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(bestLearningRate)+' will be used instead')
#Compatibility with models trained outside ZeroCostDL4Mic but default learning rate will be used
if not os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(initial_learning_rate)+' will be used instead')
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
# Display info about the pretrained model to be loaded (or not)
if Use_pretrained_model:
print('Weights found in:')
print(h5_file_path)
print('will be loaded prior to training.')
else:
print(bcolors.WARNING+'No pretrained network will be used.')
###Output
_____no_output_____
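###Markdown
The logic above picks either the last learning rate or the one at the lowest validation loss from `training_evaluation.csv`. The cell below reproduces that selection on a tiny, made-up training log so you can see what "last" versus "best" means; it is illustrative only.

###Code
#@markdown ##(Optional) Illustration of the "last" vs "best" learning rate selection
import pandas as pd

# Toy log with the same columns that ZeroCostDL4Mic writes to training_evaluation.csv
toy_log = pd.DataFrame({
    'loss':          [0.30, 0.20, 0.15, 0.14],
    'val_loss':      [0.32, 0.22, 0.18, 0.19],
    'learning rate': [4e-4, 4e-4, 2e-4, 1e-4],
})

last_lr = toy_log['learning rate'].iloc[-1]                          # learning rate of the final epoch
best_lr = toy_log.loc[toy_log['val_loss'].idxmin(), 'learning rate'] # learning rate at the lowest val_loss
print('last:', last_lr, ' best:', best_lr)                           # last: 0.0001  best: 0.0002

###Output
_____no_output_____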
###Markdown
**4. Train the network**

---

**4.1. Prepare the training data and model for training**

---

Here, we use the information from Section 3 to build the model and convert the training data into a suitable format for training.
###Code
#@markdown ##Create the model and dataset objects
# --------------------- Here we delete the model folder if it already exist ------------------------
if os.path.exists(model_path+'/'+model_name):
print(bcolors.WARNING +"!! WARNING: Model folder already exists and has been removed !!"+W)
shutil.rmtree(model_path+'/'+model_name)
# --------------------- Here we load the augmented data or the raw data ------------------------
if Use_Data_augmentation:
Training_source_dir = Training_source_augmented
Training_target_dir = Training_target_augmented
if not Use_Data_augmentation:
Training_source_dir = Training_source
Training_target_dir = Training_target
# --------------------- ------------------------------------------------
# This object holds the image pairs (GT and low), ensuring that CARE compares corresponding images.
# This file is saved in .npz format and later called when loading the training data.
raw_data = data.RawData.from_folder(
basepath=base,
source_dirs=[Training_source_dir],
target_dir=Training_target_dir,
axes='CYX',
pattern='*.tif*')
X, Y, XY_axes = data.create_patches(
raw_data,
patch_filter=None,
patch_size=(patch_size,patch_size),
n_patches_per_image=number_of_patches)
print ('Creating 2D training dataset')
training_path = model_path+"/rawdata"
rawdata1 = training_path+".npz"
np.savez(training_path,X=X, Y=Y, axes=XY_axes)
# Load Training Data
(X,Y), (X_val,Y_val), axes = load_training_data(rawdata1, validation_split=percentage, verbose=True)
c = axes_dict(axes)['C']
n_channel_in, n_channel_out = X.shape[c], Y.shape[c]
%memit
#plot of training patches.
plt.figure(figsize=(12,5))
plot_some(X[:5],Y[:5])
plt.suptitle('5 example training patches (top row: source, bottom row: target)');
#plot of validation patches
plt.figure(figsize=(12,5))
plot_some(X_val[:5],Y_val[:5])
plt.suptitle('5 example validation patches (top row: source, bottom row: target)');
#Here we automatically define number_of_step in function of training data and batch size
#if (Use_Default_Advanced_Parameters):
if (Use_Default_Advanced_Parameters) or (number_of_steps == 0):
number_of_steps = int(X.shape[0]/batch_size)+1
# --------------------- Using pretrained model ------------------------
#Here we ensure that the learning rate set correctly when using pre-trained models
if Use_pretrained_model:
if Weights_choice == "last":
initial_learning_rate = lastLearningRate
if Weights_choice == "best":
initial_learning_rate = bestLearningRate
# --------------------- ---------------------- ------------------------
#Here we create the configuration file
config = Config(axes, n_channel_in, n_channel_out, probabilistic=True, train_steps_per_epoch=number_of_steps, train_epochs=number_of_epochs, unet_kern_size=5, unet_n_depth=3, train_batch_size=batch_size, train_learning_rate=initial_learning_rate)
print(config)
vars(config)
# Compile the CARE model for network training
model_training= CARE(config, model_name, basedir=model_path)
# --------------------- Using pretrained model ------------------------
# Load the pretrained weights
if Use_pretrained_model:
model_training.load_weights(h5_file_path)
# --------------------- ---------------------- ------------------------
pdf_export(augmentation = Use_Data_augmentation, pretrained_model = Use_pretrained_model)
###Output
_____no_output_____
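###Markdown
If you want to double-check what was written to disk by the cell above, the sketch below (optional, assumes the cell above has completed) reloads the saved `rawdata.npz` and prints the shapes and axes of the patch arrays.

###Code
#@markdown ##(Optional) Inspect the saved training patches
import numpy as np

patch_file = np.load(model_path + "/rawdata.npz")
print("X (source patches):", patch_file['X'].shape)   # e.g. (n_patches, 1, patch_size, patch_size)
print("Y (target patches):", patch_file['Y'].shape)
print("axes:", str(patch_file['axes']))               # e.g. SCYX

###Output
_____no_output_____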
###Markdown
**4.2. Start Training**

---

When playing the cell below you should see updates after each epoch (round). Network training can take some time.

* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for datamining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of epochs or the number of patches.

Once training is complete, the trained model is automatically saved on your Google Drive, in the **model_path** folder that was selected in Section 3. It is however wise to download the folder from Google Drive, as all data can be erased at the next training if the same folder is used.

**Of Note:** At the end of the training, your model will be automatically exported so it can be used in the CSBDeep Fiji plugin (Run your Network). You can find it in your model folder (TF_SavedModel.zip). In Fiji, make sure to choose the right version of TensorFlow. You can check at: Edit > Options > TensorFlow. Choose version 1.4 (CPU or GPU depending on your system).
###Code
#@markdown ##Start training
start = time.time()
# Start Training
history = model_training.train(X,Y, validation_data=(X_val,Y_val))
print("Training, done.")
# copy the .npz to the model's folder
shutil.copyfile(model_path+'/rawdata.npz',model_path+'/'+model_name+'/rawdata.npz')
# convert the history.history dict to a pandas DataFrame:
lossData = pd.DataFrame(history.history)
if os.path.exists(model_path+"/"+model_name+"/Quality Control"):
shutil.rmtree(model_path+"/"+model_name+"/Quality Control")
os.makedirs(model_path+"/"+model_name+"/Quality Control")
# The training evaluation.csv is saved (overwrites the Files if needed).
lossDataCSVpath = model_path+'/'+model_name+'/Quality Control/training_evaluation.csv'
with open(lossDataCSVpath, 'w') as f:
writer = csv.writer(f)
writer.writerow(['loss','val_loss', 'learning rate'])
for i in range(len(history.history['loss'])):
writer.writerow([history.history['loss'][i], history.history['val_loss'][i], history.history['lr'][i]])
# Displaying the time elapsed for training
dt = time.time() - start
mins, sec = divmod(dt, 60)
hour, mins = divmod(mins, 60)
print("Time elapsed:",hour, "hour(s)",mins,"min(s)",round(sec),"sec(s)")
model_training.export_TF()
print("Your model has been sucessfully exported and can now also be used in the CSBdeep Fiji plugin")
pdf_export(trained = True, augmentation = Use_Data_augmentation, pretrained_model = Use_pretrained_model)
###Output
_____no_output_____
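###Markdown
If you prefer to download the trained model directly from this Colab session rather than from Google Drive, the optional sketch below (assumes training above has finished) zips the model folder and triggers a browser download.

###Code
#@markdown ##(Optional) Zip and download the trained model folder
import shutil
from google.colab import files

# Create <model_name>.zip under /content and download it through the browser
model_archive = shutil.make_archive('/content/' + model_name, 'zip', model_path + '/' + model_name)
files.download(model_archive)

###Output
_____no_output_____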
###Markdown
**5. Evaluate your model**

---

This section allows you to perform important quality checks on the validity and generalisability of the trained model.

**We highly recommend performing quality control on all newly trained models.**
###Code
# model name and path
#@markdown ###Do you want to assess the model you just trained ?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
QC_model_folder = "" #@param {type:"string"}
#Here we define the loaded model name and path
QC_model_name = os.path.basename(QC_model_folder)
QC_model_path = os.path.dirname(QC_model_folder)
if (Use_the_current_trained_model):
QC_model_name = model_name
QC_model_path = model_path
full_QC_model_path = QC_model_path+'/'+QC_model_name+'/'
if os.path.exists(full_QC_model_path):
print("The "+QC_model_name+" network will be evaluated")
else:
W = '\033[0m' # white (normal)
R = '\033[31m' # red
print(R+'!! WARNING: The chosen model does not exist !!'+W)
print('Please make sure you provide a valid model path and model name before proceeding further.')
loss_displayed = False
###Output
_____no_output_____
###Markdown
**5.1. Inspection of the loss function**

---

First, it is good practice to evaluate the training progress by comparing the training loss with the validation loss. The latter is a metric which shows how well the network performs on a subset of unseen data which is set aside from the training dataset. For more information on this, see for example [this review](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6381354/) by Nichols *et al.*

**Training loss** describes an error value after each epoch for the difference between the model's prediction and its ground-truth target.

**Validation loss** describes the same error value between the model's prediction on a validation image and its target.

During training both values should decrease before reaching a minimal value which does not decrease further even after more training. Comparing the development of the validation loss with the training loss can give insights into the model's performance.

Decreasing **Training loss** and **Validation loss** indicates that training is still necessary and increasing the `number_of_epochs` is recommended. Note that the curves can look flat towards the right side, just because of the y-axis scaling. The network has reached convergence once the curves flatten out. After this point no further training is required. If the **Validation loss** suddenly increases again while the **Training loss** simultaneously goes towards zero, it means that the network is overfitting to the training data. In other words, the network is remembering the exact patterns from the training data and no longer generalizes well to unseen data. In this case the training dataset has to be increased.

**Note: Plots of the losses will be shown on a linear and on a log scale. This can help visualise changes in the losses at different magnitudes. However, note that if the losses are negative the plot on the log scale will be empty. This is not an error.**
###Code
#@markdown ##Play the cell to show a plot of training errors vs. epoch number
loss_displayed = True
lossDataFromCSV = []
vallossDataFromCSV = []
with open(QC_model_path+'/'+QC_model_name+'/Quality Control/training_evaluation.csv','r') as csvfile:
csvRead = csv.reader(csvfile, delimiter=',')
next(csvRead)
for row in csvRead:
lossDataFromCSV.append(float(row[0]))
vallossDataFromCSV.append(float(row[1]))
epochNumber = range(len(lossDataFromCSV))
plt.figure(figsize=(15,10))
plt.subplot(2,1,1)
plt.plot(epochNumber,lossDataFromCSV, label='Training loss')
plt.plot(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (linear scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.subplot(2,1,2)
plt.semilogy(epochNumber,lossDataFromCSV, label='Training loss')
plt.semilogy(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (log scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.savefig(QC_model_path+'/'+QC_model_name+'/Quality Control/lossCurvePlots.png',bbox_inches='tight',pad_inches=0)
plt.show()
###Output
_____no_output_____
###Markdown
**5.2. Error mapping and quality metrics estimation**

---

This section will display SSIM maps and RSE maps as well as calculate total SSIM, NRMSE and PSNR metrics for all the images provided in the "Source_QC_folder" and "Target_QC_folder".

**1. The SSIM (structural similarity) map**

The SSIM metric is used to evaluate whether two images contain the same structures. It is a normalized metric and an SSIM of 1 indicates a perfect similarity between two images. Therefore for SSIM, the closer to 1, the better. The SSIM maps are constructed by calculating the SSIM metric for each pixel by considering the surrounding structural similarity in the neighbourhood of that pixel (currently defined as a window of 11 pixels with a Gaussian weighting of 1.5 pixel standard deviation, see our Wiki for more info). **mSSIM** is the SSIM value calculated across the entire window of both images.

**The output below shows the SSIM maps with the mSSIM.**

**2. The RSE (Root Squared Error) map**

This is a display of the root of the squared difference between the normalized prediction and target, or the source and the target. In this case, a smaller RSE is better. A perfect agreement between target and prediction will lead to an RSE map showing zeros everywhere (dark).

**NRMSE (normalised root mean squared error)** gives the average difference between all pixels in the images compared to each other. Good agreement yields low NRMSE scores.

**PSNR (Peak signal-to-noise ratio)** is a metric that gives the difference between the ground truth and prediction (or source input) in decibels, using the peak pixel values of the prediction and the MSE between the images. The higher the score, the better the agreement.

**The output below shows the RSE maps with the NRMSE and PSNR values.**
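As a quick illustration of these metrics (separate from the QC cell below, using random arrays rather than your data), the following sketch computes mSSIM, PSNR and the notebook's NRMSE on a synthetic ground-truth/noisy pair.

###Code
#@markdown ##(Optional) Toy example of the SSIM, PSNR and NRMSE metrics
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
toy_gt = rng.random((64, 64)).astype(np.float32)                 # hypothetical ground truth in [0, 1]
toy_noisy = np.clip(toy_gt + 0.1 * rng.standard_normal((64, 64)).astype(np.float32), 0, 1)

toy_mssim = structural_similarity(toy_gt, toy_noisy, data_range=1.0)      # 1.0 would mean identical structures
toy_psnr = peak_signal_noise_ratio(toy_gt, toy_noisy, data_range=1.0)     # higher = closer to the ground truth
toy_rse_map = np.sqrt((toy_gt - toy_noisy) ** 2)                          # per-pixel root squared error map
toy_nrmse = np.sqrt(np.mean(toy_rse_map))                                 # computed as in the QC cell below
print(round(float(toy_mssim), 3), round(float(toy_psnr), 1), round(float(toy_nrmse), 3))

###Output
_____no_output_____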
###Code
#@markdown ##Choose the folders that contain your Quality Control dataset
Source_QC_folder = "" #@param{type:"string"}
Target_QC_folder = "" #@param{type:"string"}
# Create a quality control/Prediction Folder
if os.path.exists(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction"):
shutil.rmtree(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
os.makedirs(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
# Activate the pretrained model.
model_training = CARE(config=None, name=QC_model_name, basedir=QC_model_path)
# List Tif images in Source_QC_folder
Source_QC_folder_tif = Source_QC_folder+"/*.tif"
Z = sorted(glob(Source_QC_folder_tif))
Z = list(map(imread,Z))
print('Number of test images found in the folder: '+str(len(Z)))
# Perform prediction on all datasets in the Source_QC folder
for filename in os.listdir(Source_QC_folder):
img = imread(os.path.join(Source_QC_folder, filename))
predicted = model_training.predict(img, axes='YX')
os.chdir(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
imsave(filename, predicted)
def ssim(img1, img2):
return structural_similarity(img1,img2,data_range=1.,full=True, gaussian_weights=True, use_sample_covariance=False, sigma=1.5)
def normalize(x, pmin=3, pmax=99.8, axis=None, clip=False, eps=1e-20, dtype=np.float32):
"""This function is adapted from Martin Weigert"""
"""Percentile-based image normalization."""
mi = np.percentile(x,pmin,axis=axis,keepdims=True)
ma = np.percentile(x,pmax,axis=axis,keepdims=True)
return normalize_mi_ma(x, mi, ma, clip=clip, eps=eps, dtype=dtype)
def normalize_mi_ma(x, mi, ma, clip=False, eps=1e-20, dtype=np.float32):#dtype=np.float32
"""This function is adapted from Martin Weigert"""
if dtype is not None:
x = x.astype(dtype,copy=False)
mi = dtype(mi) if np.isscalar(mi) else mi.astype(dtype,copy=False)
ma = dtype(ma) if np.isscalar(ma) else ma.astype(dtype,copy=False)
eps = dtype(eps)
try:
import numexpr
x = numexpr.evaluate("(x - mi) / ( ma - mi + eps )")
except ImportError:
x = (x - mi) / ( ma - mi + eps )
if clip:
x = np.clip(x,0,1)
return x
def norm_minmse(gt, x, normalize_gt=True):
"""This function is adapted from Martin Weigert"""
"""
normalizes and affinely scales an image pair such that the MSE is minimized
Parameters
----------
gt: ndarray
the ground truth image
x: ndarray
the image that will be affinely scaled
normalize_gt: bool
set to True of gt image should be normalized (default)
Returns
-------
gt_scaled, x_scaled
"""
if normalize_gt:
gt = normalize(gt, 0.1, 99.9, clip=False).astype(np.float32, copy = False)
x = x.astype(np.float32, copy=False) - np.mean(x)
#x = x - np.mean(x)
gt = gt.astype(np.float32, copy=False) - np.mean(gt)
#gt = gt - np.mean(gt)
scale = np.cov(x.flatten(), gt.flatten())[0, 1] / np.var(x.flatten())
return gt, scale * x
# Open and create the csv file that will contain all the QC metrics
with open(QC_model_path+"/"+QC_model_name+"/Quality Control/QC_metrics_"+QC_model_name+".csv", "w", newline='') as file:
writer = csv.writer(file)
# Write the header in the csv file
writer.writerow(["image #","Prediction v. GT mSSIM","Input v. GT mSSIM", "Prediction v. GT NRMSE", "Input v. GT NRMSE", "Prediction v. GT PSNR", "Input v. GT PSNR"])
# Let's loop through the provided dataset in the QC folders
for i in os.listdir(Source_QC_folder):
if not os.path.isdir(os.path.join(Source_QC_folder,i)):
print('Running QC on: '+i)
# -------------------------------- Target test data (Ground truth) --------------------------------
test_GT = io.imread(os.path.join(Target_QC_folder, i))
# -------------------------------- Source test data --------------------------------
test_source = io.imread(os.path.join(Source_QC_folder,i))
# Normalize the images wrt each other by minimizing the MSE between GT and Source image
test_GT_norm,test_source_norm = norm_minmse(test_GT, test_source, normalize_gt=True)
# -------------------------------- Prediction --------------------------------
test_prediction = io.imread(os.path.join(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction",i))
# Normalize the images wrt each other by minimizing the MSE between GT and prediction
test_GT_norm,test_prediction_norm = norm_minmse(test_GT, test_prediction, normalize_gt=True)
# -------------------------------- Calculate the metric maps and save them --------------------------------
# Calculate the SSIM maps
index_SSIM_GTvsPrediction, img_SSIM_GTvsPrediction = ssim(test_GT_norm, test_prediction_norm)
index_SSIM_GTvsSource, img_SSIM_GTvsSource = ssim(test_GT_norm, test_source_norm)
#Save ssim_maps
img_SSIM_GTvsPrediction_32bit = np.float32(img_SSIM_GTvsPrediction)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/SSIM_GTvsPrediction_'+i,img_SSIM_GTvsPrediction_32bit)
img_SSIM_GTvsSource_32bit = np.float32(img_SSIM_GTvsSource)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/SSIM_GTvsSource_'+i,img_SSIM_GTvsSource_32bit)
# Calculate the Root Squared Error (RSE) maps
img_RSE_GTvsPrediction = np.sqrt(np.square(test_GT_norm - test_prediction_norm))
img_RSE_GTvsSource = np.sqrt(np.square(test_GT_norm - test_source_norm))
# Save SE maps
img_RSE_GTvsPrediction_32bit = np.float32(img_RSE_GTvsPrediction)
img_RSE_GTvsSource_32bit = np.float32(img_RSE_GTvsSource)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/RSE_GTvsPrediction_'+i,img_RSE_GTvsPrediction_32bit)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/RSE_GTvsSource_'+i,img_RSE_GTvsSource_32bit)
# -------------------------------- Calculate the RSE metrics and save them --------------------------------
# Normalised Root Mean Squared Error (here it's valid to take the mean of the image)
NRMSE_GTvsPrediction = np.sqrt(np.mean(img_RSE_GTvsPrediction))
NRMSE_GTvsSource = np.sqrt(np.mean(img_RSE_GTvsSource))
# We can also measure the peak signal to noise ratio between the images
PSNR_GTvsPrediction = psnr(test_GT_norm,test_prediction_norm,data_range=1.0)
PSNR_GTvsSource = psnr(test_GT_norm,test_source_norm,data_range=1.0)
writer.writerow([i,str(index_SSIM_GTvsPrediction),str(index_SSIM_GTvsSource),str(NRMSE_GTvsPrediction),str(NRMSE_GTvsSource),str(PSNR_GTvsPrediction),str(PSNR_GTvsSource)])
# All data is now processed and saved
Test_FileList = os.listdir(Source_QC_folder) # this assumes, as it should, that both source and target are named the same
plt.figure(figsize=(20,20))
# Currently only displays the last computed set, from memory
# Target (Ground-truth)
plt.subplot(3,3,1)
plt.axis('off')
img_GT = io.imread(os.path.join(Target_QC_folder, Test_FileList[-1]))
plt.imshow(img_GT, norm=simple_norm(img_GT, percent = 99))
plt.title('Target',fontsize=15)
# Source
plt.subplot(3,3,2)
plt.axis('off')
img_Source = io.imread(os.path.join(Source_QC_folder, Test_FileList[-1]))
plt.imshow(img_Source, norm=simple_norm(img_Source, percent = 99))
plt.title('Source',fontsize=15)
#Prediction
plt.subplot(3,3,3)
plt.axis('off')
img_Prediction = io.imread(os.path.join(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction/", Test_FileList[-1]))
plt.imshow(img_Prediction, norm=simple_norm(img_Prediction, percent = 99))
plt.title('Prediction',fontsize=15)
#Setting up colours
cmap = plt.cm.CMRmap
#SSIM between GT and Source
plt.subplot(3,3,5)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imSSIM_GTvsSource = plt.imshow(img_SSIM_GTvsSource, cmap = cmap, vmin=0, vmax=1)
plt.colorbar(imSSIM_GTvsSource,fraction=0.046, pad=0.04)
plt.title('Target vs. Source',fontsize=15)
plt.xlabel('mSSIM: '+str(round(index_SSIM_GTvsSource,3)),fontsize=14)
plt.ylabel('SSIM maps',fontsize=20, rotation=0, labelpad=75)
#SSIM between GT and Prediction
plt.subplot(3,3,6)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imSSIM_GTvsPrediction = plt.imshow(img_SSIM_GTvsPrediction, cmap = cmap, vmin=0,vmax=1)
plt.colorbar(imSSIM_GTvsPrediction,fraction=0.046, pad=0.04)
plt.title('Target vs. Prediction',fontsize=15)
plt.xlabel('mSSIM: '+str(round(index_SSIM_GTvsPrediction,3)),fontsize=14)
#Root Squared Error between GT and Source
plt.subplot(3,3,8)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imRSE_GTvsSource = plt.imshow(img_RSE_GTvsSource, cmap = cmap, vmin=0, vmax = 1)
plt.colorbar(imRSE_GTvsSource,fraction=0.046,pad=0.04)
plt.title('Target vs. Source',fontsize=15)
plt.xlabel('NRMSE: '+str(round(NRMSE_GTvsSource,3))+', PSNR: '+str(round(PSNR_GTvsSource,3)),fontsize=14)
#plt.title('Target vs. Source PSNR: '+str(round(PSNR_GTvsSource,3)))
plt.ylabel('RSE maps',fontsize=20, rotation=0, labelpad=75)
#Root Squared Error between GT and Prediction
plt.subplot(3,3,9)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imRSE_GTvsPrediction = plt.imshow(img_RSE_GTvsPrediction, cmap = cmap, vmin=0, vmax=1)
plt.colorbar(imRSE_GTvsPrediction,fraction=0.046,pad=0.04)
plt.title('Target vs. Prediction',fontsize=15)
plt.xlabel('NRMSE: '+str(round(NRMSE_GTvsPrediction,3))+', PSNR: '+str(round(PSNR_GTvsPrediction,3)),fontsize=14)
plt.savefig(full_QC_model_path+'Quality Control/QC_example_data.png',bbox_inches='tight',pad_inches=0)
qc_pdf_export()
###Output
_____no_output_____
###Markdown
**6. Using the trained model**---In this section the unseen data is processed using the trained model (from section 4). First, your unseen images are uploaded and prepared for prediction. After that, your trained model from section 4 is activated and used to process the images, and the results are saved into your Google Drive. **6.1. Generate prediction(s) from unseen dataset**---The current trained model (from section 4.2) can now be used to process images. If you want to use an older model, untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. Predicted output images are saved in your **Result_folder** as restored image stacks (ImageJ-compatible TIFF images).**`Data_folder`:** This folder should contain the images that you want to use your trained network on for processing.**`Result_folder`:** This folder will contain the predicted output images.
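As an illustrative variant of the prediction loop in the cell below (not the notebook's own code): if you want TIFFs that open in ImageJ with axis metadata, the restored images could be written with csbdeep's `save_tiff_imagej_compatible` helper, which is already imported in section 2, instead of `tifffile.imsave`. The function name is an assumption; the variables mirror the cell below.

```python
import os
from tifffile import imread
from csbdeep.io import save_tiff_imagej_compatible

def predict_folder_sketch(model_training, Data_folder, Result_folder):
    # model_training, Data_folder and Result_folder are defined as in the cell below
    for filename in os.listdir(Data_folder):
        if not filename.lower().endswith('.tif'):
            continue  # skip anything that is not a .tif image
        img = imread(os.path.join(Data_folder, filename))
        restored = model_training.predict(img, axes='YX')  # 2D images, YX axes
        save_tiff_imagej_compatible(os.path.join(Result_folder, filename),
                                    restored, axes='YX')
```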
###Code
#@markdown ### Provide the path to your dataset and to the folder where the predictions are saved, then play the cell to predict outputs from your unseen images.
Data_folder = "" #@param {type:"string"}
Result_folder = "" #@param {type:"string"}
# model name and path
#@markdown ###Do you want to use the current trained model?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
Prediction_model_folder = "" #@param {type:"string"}
#Here we find the loaded model name and parent path
Prediction_model_name = os.path.basename(Prediction_model_folder)
Prediction_model_path = os.path.dirname(Prediction_model_folder)
if (Use_the_current_trained_model):
print("Using current trained network")
Prediction_model_name = model_name
Prediction_model_path = model_path
full_Prediction_model_path = os.path.join(Prediction_model_path, Prediction_model_name)
if os.path.exists(full_Prediction_model_path):
print("The "+Prediction_model_name+" network will be used.")
else:
W = '\033[0m' # white (normal)
R = '\033[31m' # red
print(R+'!! WARNING: The chosen model does not exist !!'+W)
print('Please make sure you provide a valid model path and model name before proceeding further.')
#Activate the pretrained model.
model_training = CARE(config=None, name=Prediction_model_name, basedir=Prediction_model_path)
# creates a loop, creating filenames and saving them
for filename in os.listdir(Data_folder):
img = imread(os.path.join(Data_folder,filename))
restored = model_training.predict(img, axes='YX')
os.chdir(Result_folder)
imsave(filename,restored)
print("Images saved into folder:", Result_folder)
###Output
_____no_output_____
###Markdown
**6.2. Inspect the predicted output**---
###Code
# @markdown ##Run this cell to display a randomly chosen input and its corresponding predicted output.
# This will display a randomly chosen dataset input and predicted output
random_choice = random.choice(os.listdir(Data_folder))
x = imread(Data_folder+"/"+random_choice)
os.chdir(Result_folder)
y = imread(Result_folder+"/"+random_choice)
plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
plt.axis('off')
plt.imshow(x, norm=simple_norm(x, percent = 99), interpolation='nearest')
plt.title('Input')
plt.subplot(1,2,2)
plt.axis('off')
plt.imshow(y, norm=simple_norm(y, percent = 99), interpolation='nearest')
plt.title('Predicted output');
###Output
_____no_output_____
###Markdown
**CARE: Content-aware image restoration (2D)**---CARE is a neural network capable of image restoration from corrupted bio-images, first published in 2018 by [Weigert *et al.* in Nature Methods](https://www.nature.com/articles/s41592-018-0216-7). The CARE network uses a U-Net network architecture and allows image restoration and resolution improvement in 2D and 3D images, in a supervised manner, using noisy images as input and low-noise images as targets for training. The function of the network is essentially determined by the set of images provided in the training dataset. For instance, if noisy images are provided as input and high signal-to-noise ratio images are provided as targets, the network will perform denoising. **This particular notebook enables restoration of 2D datasets. If you are interested in restoring 3D datasets, you should use the CARE 3D notebook instead.**---*Disclaimer*:This notebook is part of the *Zero-Cost Deep-Learning to Enhance Microscopy* project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki). Jointly developed by the Jacquemet (link to https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories.This notebook is based on the following paper: **Content-aware image restoration: pushing the limits of fluorescence microscopy**, by Weigert *et al.* published in Nature Methods in 2018 (https://www.nature.com/articles/s41592-018-0216-7)And source code found in: https://github.com/csbdeep/csbdeepFor a more in-depth description of the features of the network, please refer to [this guide](http://csbdeep.bioimagecomputing.com/doc/) provided by the original authors of the work.We provide a dataset for the training of this notebook as a way to test its functionalities but the training and test data of the restoration experiments are also available from the authors of the original paper [here](https://publications.mpi-cbg.de/publications-sites/7207/).**Please also cite this original paper when using or developing this notebook.** **How to use this notebook?**---Videos describing how to use our notebooks are available on YouTube: - [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run-through of the workflow to obtain the notebooks and the provided test datasets as well as a common use of the notebook - [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook---**Structure of a notebook**The notebook contains two types of cell: **Text cells** provide information and can be modified by double-clicking the cell. You are currently reading a text cell. You can create a new text cell by clicking `+ Text`.**Code cells** contain code and the code can be modified by selecting the cell. To execute the cell, move your cursor over the `[ ]`-mark on the left side of the cell (a play button appears). Click to execute the cell. After execution is done, the animation of the play button stops. You can create a new code cell by clicking `+ Code`.---**Table of contents, Code snippets** and **Files**On the top left side of the notebook you find three tabs which contain, from top to bottom:*Table of contents* = contains the structure of the notebook. Click the content to move quickly between sections.*Code snippets* = contains examples of how to code certain tasks. You can ignore this when using this notebook.*Files* = contains all available files. After mounting your Google Drive (see section 1.) you will find your files and folders here. 
**Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive-button; your Google Drive is connected in section 1.2.**Note:** The "sample data" in "Files" contains default files. Do not upload anything in here!---**Making changes to the notebook****You can make a copy** of the notebook and save it to your Google Drive. To do this, click File -> Save a copy in Drive.To **edit a cell**, double-click on the text. This will show you either the source code (in code cells) or the source text (in text cells).You can use the `#`-mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment. **0. Before getting started**--- For CARE to train, **it needs to have access to a paired training dataset**. This means that the same image needs to be acquired in the two conditions (for instance, low signal-to-noise ratio and high signal-to-noise ratio) and provided with an indication of correspondence. Therefore, the data structure is important. It is necessary that all the input data are in the same folder and that all the output data is in a separate folder. The provided training dataset is already split into two folders called "Training - Low SNR images" (Training_source) and "Training - high SNR images" (Training_target). Information on how to generate a training dataset is available in our Wiki page: https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki**We strongly recommend that you generate extra paired images. These images can be used to assess the quality of your trained model (Quality control dataset)**. The quality control assessment can be done directly in this notebook. **Additionally, the corresponding input and output files need to have the same name**. Please note that you currently can **only use .tif files!**Here's a common data structure that can work:* Experiment A - **Training dataset** - Low SNR images (Training_source) - img_1.tif, img_2.tif, ... - High SNR images (Training_target) - img_1.tif, img_2.tif, ... - **Quality control dataset** - Low SNR images - img_1.tif, img_2.tif - High SNR images - img_1.tif, img_2.tif - **Data to be predicted** - **Results**---**Important note**- If you wish to **Train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to assess the quality of your model and **section 6** to run predictions using the model that you trained.- If you wish to **Evaluate your model** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to assess the quality of your model.- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 6** to run the predictions on the desired model.--- **1. Initialise the Colab session**--- **1.1. Check for GPU access**---By default, the session should be using Python 3 and GPU acceleration, but it is possible to ensure that these are set properly by doing the following:Go to **Runtime -> Change the Runtime type****Runtime type: Python 3** *(Python 3 is the programming language in which this program is written)***Accelerator: GPU** *(Graphics processing unit)*
###Code
#@markdown ##Run this cell to check if you have GPU access
%tensorflow_version 1.x
import tensorflow as tf
if tf.test.gpu_device_name()=='':
print('You do not have GPU access.')
print('Did you change your runtime ?')
print('If the runtime setting is correct then Google did not allocate a GPU for your session')
print('Expect slow performance. To access GPU try reconnecting later')
else:
print('You have GPU access')
!nvidia-smi
###Output
_____no_output_____
###Markdown
**1.2. Mount your Google Drive**--- To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook. Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive and select 'Allow', copy the code, paste into the cell and press enter. This will give Colab access to the data on the drive. Once this is done, your data are available in the **Files** tab on the top left of notebook.
###Code
#@markdown ##Run this cell to connect your Google Drive to Colab
#@markdown * Click on the URL.
#@markdown * Sign in your Google Account.
#@markdown * Copy the authorization code.
#@markdown * Enter the authorization code.
#@markdown * Click on "Files" site on the right. Refresh the site. Your Google Drive folder should now be available here as "drive".
#mounts user's Google Drive to Google Colab.
from google.colab import drive
drive.mount('/content/gdrive')
###Output
_____no_output_____
###Markdown
**2. Install CARE and dependencies**---
###Code
Notebook_version = ['1.11']
#@markdown ##Install CARE and dependencies
#Libraries contains information of certain topics.
#For example the tifffile library contains information on how to handle tif-files.
#Here, we install libraries which are not already included in Colab.
!pip install tifffile # contains tools to operate tiff-files
!pip install csbdeep # contains tools for restoration of fluorescence microscopy images (Content-aware Image Restoration, CARE). It uses Keras and Tensorflow.
!pip install wget
!pip install memory_profiler
!pip install fpdf
%load_ext memory_profiler
#Here, we import and enable Tensorflow 1 instead of Tensorflow 2.
%tensorflow_version 1.x
import tensorflow
import tensorflow as tf
print(tensorflow.__version__)
print("Tensorflow enabled.")
# ------- Variable specific to CARE -------
from csbdeep.utils import download_and_extract_zip_file, plot_some, axes_dict, plot_history, Path, download_and_extract_zip_file
from csbdeep.data import RawData, create_patches
from csbdeep.io import load_training_data, save_tiff_imagej_compatible
from csbdeep.models import Config, CARE
from csbdeep import data
from __future__ import print_function, unicode_literals, absolute_import, division
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
# ------- Common variable to all ZeroCostDL4Mic notebooks -------
import numpy as np
from matplotlib import pyplot as plt
import urllib
import os, random
import shutil
import zipfile
from tifffile import imread, imsave
import time
import sys
import wget
from pathlib import Path
import pandas as pd
import csv
from glob import glob
from scipy import signal
from scipy import ndimage
from skimage import io
from sklearn.linear_model import LinearRegression
from skimage.util import img_as_uint
import matplotlib as mpl
from skimage.metrics import structural_similarity
from skimage.metrics import peak_signal_noise_ratio as psnr
from astropy.visualization import simple_norm
from skimage import img_as_float32
from skimage.util import img_as_ubyte
from tqdm import tqdm
from fpdf import FPDF, HTMLMixin
from datetime import datetime
import subprocess
from pip._internal.operations.freeze import freeze
# Colors for the warning messages
class bcolors:
WARNING = '\033[31m'
W = '\033[0m' # white (normal)
R = '\033[31m' # red
#Disable some of the tensorflow warnings
import warnings
warnings.filterwarnings("ignore")
print("Libraries installed")
# Check if this is the latest version of the notebook
Latest_notebook_version = pd.read_csv("https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_ZeroCostDL4Mic_Release.csv")
if Notebook_version == list(Latest_notebook_version.columns):
print("This notebook is up-to-date.")
if not Notebook_version == list(Latest_notebook_version.columns):
print(bcolors.WARNING +"A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki")
!pip freeze > requirements.txt
###Output
_____no_output_____
###Markdown
**3. Select your parameters and paths**--- **3.1. Setting main training parameters**--- **Paths for training, predictions and results****`Training_source`, `Training_target`:** These are the paths to your folders containing the Training_source (Low SNR images) and Training_target (High SNR images or ground truth) training data respectively. To find the paths of the folders containing the respective datasets, go to your Files on the left of the notebook, navigate to the folder containing your files and copy the path by right-clicking on the folder, **Copy path** and pasting it into the right box below.**`model_name`:** Use only my_model-style names, not my-model (use "_" not "-"). Do not use spaces in the name. Avoid using the name of an existing model (saved in the same folder) as it will be overwritten.**`model_path`**: Enter the path where your model will be saved once trained (for instance your result folder).**Training Parameters****`number_of_epochs`:** Input how many epochs (rounds) the network will be trained. Preliminary results can already be observed after a few (10-30) epochs, but a full training should run for 100-300 epochs. Evaluate the performance after training (see 5). **Default value: 50****`patch_size`:** CARE divides the image into patches for training. Input the size of the patches (length of a side). The value should be smaller than the dimensions of the image and divisible by 8. **Default value: 80****When choosing the patch_size, the value should be i) large enough that it will enclose many instances, ii) small enough that the resulting patches fit into the RAM.** **`number_of_patches`:** Input the number of patches per image. Increasing the number of patches allows for larger training datasets. **Default value: 100** **Decreasing the patch size or increasing the number of patches may improve the training but may also increase the training time.****Advanced Parameters - experienced users only****`batch_size`:** This parameter defines the number of patches seen in each training step. Reducing or increasing the **batch size** may slow or speed up your training, respectively, and can influence network performance. **Default value: 16****`number_of_steps`:** Define the number of training steps per epoch. By default this parameter is calculated so that each patch is seen at least once per epoch. **Default value: Number of patches / batch_size****`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during training. **Default value: 10** **`initial_learning_rate`:** Input the initial value to be used as the learning rate. **Default value: 0.0004**
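Because training relies on matching filenames between `Training_source` and `Training_target`, a quick sanity check before running the cell below can save a failed run. The sketch below is not part of the original notebook; it only assumes the two folder paths are set as described above, and the function name is illustrative.

```python
import os

def check_paired_filenames(Training_source, Training_target):
    # Compare the .tif filenames in the two folders and report any mismatches
    source_files = {f for f in os.listdir(Training_source) if f.endswith('.tif')}
    target_files = {f for f in os.listdir(Training_target) if f.endswith('.tif')}
    missing_targets = sorted(source_files - target_files)
    missing_sources = sorted(target_files - source_files)
    if not missing_targets and not missing_sources:
        print('All', len(source_files), 'image pairs match.')
    else:
        print('Source images without a matching target:', missing_targets)
        print('Target images without a matching source:', missing_sources)
```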
###Code
#@markdown ###Path to training images:
Training_source = "" #@param {type:"string"}
InputFile = Training_source+"/*.tif"
Training_target = "" #@param {type:"string"}
OutputFile = Training_target+"/*.tif"
#Define where the patch file will be saved
base = "/content"
# model name and path
#@markdown ###Name of the model and path to model folder:
model_name = "" #@param {type:"string"}
model_path = "" #@param {type:"string"}
# other parameters for training.
#@markdown ###Training Parameters
#@markdown Number of epochs:
number_of_epochs = 50#@param {type:"number"}
#@markdown Patch size (pixels) and number
patch_size = 80#@param {type:"number"} # in pixels
number_of_patches = 100#@param {type:"number"}
#@markdown ###Advanced Parameters
Use_Default_Advanced_Parameters = True #@param {type:"boolean"}
#@markdown ###If not, please input:
batch_size = 16#@param {type:"number"}
number_of_steps = 400#@param {type:"number"}
percentage_validation = 10 #@param {type:"number"}
initial_learning_rate = 0.0004 #@param {type:"number"}
if (Use_Default_Advanced_Parameters):
print("Default advanced parameters enabled")
batch_size = 16
percentage_validation = 10
initial_learning_rate = 0.0004
#Here we define the percentage to use for validation
percentage = percentage_validation/100
#here we check that no model with the same name already exists; if so, print a warning
if os.path.exists(model_path+'/'+model_name):
print(bcolors.WARNING +"!! WARNING: "+model_name+" already exists and will be deleted in the following cell !!")
print(bcolors.WARNING +"To continue training "+model_name+", choose a new model_name here, and load "+model_name+" in section 3.3"+W)
# Here we disable pre-trained model by default (in case the cell is not ran)
Use_pretrained_model = False
# Here we disable data augmentation by default (in case the cell is not ran)
Use_Data_augmentation = False
# The shape of the images.
x = imread(InputFile)
y = imread(OutputFile)
print('Loaded Input images (number, width, length) =', x.shape)
print('Loaded Output images (number, width, length) =', y.shape)
print("Parameters initiated.")
# This will display a randomly chosen dataset input and output
random_choice = random.choice(os.listdir(Training_source))
x = imread(Training_source+"/"+random_choice)
# Here we check that the input images contains the expected dimensions
if len(x.shape) == 2:
print("Image dimensions (y,x)",x.shape)
if not len(x.shape) == 2:
print(bcolors.WARNING +"Your images appear to have the wrong dimensions. Image dimension",x.shape)
#Find image XY dimension
Image_Y = x.shape[0]
Image_X = x.shape[1]
#Hyperparameters failsafes
# Here we check that patch_size is smaller than the smallest xy dimension of the image
if patch_size > min(Image_Y, Image_X):
patch_size = min(Image_Y, Image_X)
print (bcolors.WARNING + " Your chosen patch_size is bigger than the xy dimension of your image; therefore the patch_size chosen is now:",patch_size)
# Here we check that patch_size is divisible by 8
if not patch_size % 8 == 0:
patch_size = ((int(patch_size / 8)-1) * 8)
print (bcolors.WARNING + " Your chosen patch_size is not divisible by 8; therefore the patch_size chosen is now:",patch_size)
os.chdir(Training_target)
y = imread(Training_target+"/"+random_choice)
f=plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
plt.imshow(x, norm=simple_norm(x, percent = 99), interpolation='nearest')
plt.title('Training source')
plt.axis('off');
plt.subplot(1,2,2)
plt.imshow(y, norm=simple_norm(y, percent = 99), interpolation='nearest')
plt.title('Training target')
plt.axis('off');
plt.savefig('/content/TrainingDataExample_CARE2D.png',bbox_inches='tight',pad_inches=0)
###Output
_____no_output_____
###Markdown
**3.2. Data augmentation**--- Data augmentation can improve training progress by amplifying differences in the dataset. This can be useful if the available dataset is small since, in this case, it is possible that a network could quickly learn every example in the dataset (overfitting), without augmentation. Augmentation is not necessary for training and if your training dataset is large you should disable it. **However, data augmentation is not a magic solution and may also introduce issues. Therefore, we recommend that you train your network with and without augmentation, and use the QC section to validate that it improves overall performances.** Data augmentation is performed here by [Augmentor.](https://github.com/mdbloice/Augmentor)[Augmentor](https://github.com/mdbloice/Augmentor) was described in the following article:Marcus D Bloice, Peter M Roth, Andreas Holzinger, Biomedical image augmentation using Augmentor, Bioinformatics, https://doi.org/10.1093/bioinformatics/btz259**Please also cite this original paper when publishing results obtained using this notebook with augmentation enabled.**
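As a minimal, illustrative sketch of how Augmentor is driven by the cell below (this is not the notebook's own code; it uses only the default rotations and flips, and the folder and function names are assumptions):

```python
import Augmentor  # installed by the cell below when augmentation is enabled

def minimal_augmentation_sketch(Training_source, Training_target, output_folder, n_samples):
    p = Augmentor.Pipeline(Training_source, output_folder)  # source images and output location
    p.ground_truth(Training_target)       # pair each image with its ground-truth counterpart
    p.rotate90(probability=0.5)           # the default manipulations used by this notebook
    p.rotate270(probability=0.5)
    p.flip_left_right(probability=0.5)
    p.flip_top_bottom(probability=0.5)
    p.sample(n_samples)                   # e.g. number of files * Multiply_dataset_by
```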
###Code
#Data augmentation
Use_Data_augmentation = False #@param {type:"boolean"}
if Use_Data_augmentation:
!pip install Augmentor
import Augmentor
#@markdown ####Choose a factor by which you want to multiply your original dataset
Multiply_dataset_by = 2 #@param {type:"slider", min:1, max:30, step:1}
Save_augmented_images = False #@param {type:"boolean"}
Saving_path = "" #@param {type:"string"}
Use_Default_Augmentation_Parameters = True #@param {type:"boolean"}
#@markdown ###If not, please choose the probability of the following image manipulations to be used to augment your dataset (1 = always used; 0 = disabled ):
#@markdown ####Mirror and rotate images
rotate_90_degrees = 0 #@param {type:"slider", min:0, max:1, step:0.1}
rotate_270_degrees = 0 #@param {type:"slider", min:0, max:1, step:0.1}
flip_left_right = 0 #@param {type:"slider", min:0, max:1, step:0.1}
flip_top_bottom = 0 #@param {type:"slider", min:0, max:1, step:0.1}
#@markdown ####Random image Zoom
random_zoom = 0 #@param {type:"slider", min:0, max:1, step:0.1}
random_zoom_magnification = 0 #@param {type:"slider", min:0, max:1, step:0.1}
#@markdown ####Random image distortion
random_distortion = 0 #@param {type:"slider", min:0, max:1, step:0.1}
#@markdown ####Image shearing and skewing
image_shear = 0 #@param {type:"slider", min:0, max:1, step:0.1}
max_image_shear = 1 #@param {type:"slider", min:1, max:25, step:1}
skew_image = 0 #@param {type:"slider", min:0, max:1, step:0.1}
skew_image_magnitude = 0 #@param {type:"slider", min:0, max:1, step:0.1}
if Use_Default_Augmentation_Parameters:
rotate_90_degrees = 0.5
rotate_270_degrees = 0.5
flip_left_right = 0.5
flip_top_bottom = 0.5
if not Multiply_dataset_by >5:
random_zoom = 0
random_zoom_magnification = 0.9
random_distortion = 0
image_shear = 0
max_image_shear = 10
skew_image = 0
skew_image_magnitude = 0
if Multiply_dataset_by >5:
random_zoom = 0.1
random_zoom_magnification = 0.9
random_distortion = 0.5
image_shear = 0.2
max_image_shear = 5
skew_image = 0.2
skew_image_magnitude = 0.4
if Multiply_dataset_by >25:
random_zoom = 0.5
random_zoom_magnification = 0.8
random_distortion = 0.5
image_shear = 0.5
max_image_shear = 20
skew_image = 0.5
skew_image_magnitude = 0.6
list_files = os.listdir(Training_source)
Nb_files = len(list_files)
Nb_augmented_files = (Nb_files * Multiply_dataset_by)
if Use_Data_augmentation:
print("Data augmentation enabled")
# Here we set the path for the various folder were the augmented images will be loaded
# All images are first saved into the augmented folder
#Augmented_folder = "/content/Augmented_Folder"
if not Save_augmented_images:
Saving_path= "/content"
Augmented_folder = Saving_path+"/Augmented_Folder"
if os.path.exists(Augmented_folder):
shutil.rmtree(Augmented_folder)
os.makedirs(Augmented_folder)
#Training_source_augmented = "/content/Training_source_augmented"
Training_source_augmented = Saving_path+"/Training_source_augmented"
if os.path.exists(Training_source_augmented):
shutil.rmtree(Training_source_augmented)
os.makedirs(Training_source_augmented)
#Training_target_augmented = "/content/Training_target_augmented"
Training_target_augmented = Saving_path+"/Training_target_augmented"
if os.path.exists(Training_target_augmented):
shutil.rmtree(Training_target_augmented)
os.makedirs(Training_target_augmented)
# Here we generate the augmented images
#Load the images
p = Augmentor.Pipeline(Training_source, Augmented_folder)
#Define the matching images
p.ground_truth(Training_target)
#Define the augmentation possibilities
if not rotate_90_degrees == 0:
p.rotate90(probability=rotate_90_degrees)
if not rotate_270_degrees == 0:
p.rotate270(probability=rotate_270_degrees)
if not flip_left_right == 0:
p.flip_left_right(probability=flip_left_right)
if not flip_top_bottom == 0:
p.flip_top_bottom(probability=flip_top_bottom)
if not random_zoom == 0:
p.zoom_random(probability=random_zoom, percentage_area=random_zoom_magnification)
if not random_distortion == 0:
p.random_distortion(probability=random_distortion, grid_width=4, grid_height=4, magnitude=8)
if not image_shear == 0:
p.shear(probability=image_shear,max_shear_left=20,max_shear_right=20)
if not skew_image == 0:
p.skew(probability=skew_image,magnitude=skew_image_magnitude)
p.sample(int(Nb_augmented_files))
print(int(Nb_augmented_files),"matching images generated")
# Here we sort through the images and move them back to augmented trainning source and targets folders
augmented_files = os.listdir(Augmented_folder)
for f in augmented_files:
if (f.startswith("_groundtruth_(1)_")):
shortname_noprefix = f[17:]
shutil.copyfile(Augmented_folder+"/"+f, Training_target_augmented+"/"+shortname_noprefix)
if not (f.startswith("_groundtruth_(1)_")):
shutil.copyfile(Augmented_folder+"/"+f, Training_source_augmented+"/"+f)
for filename in os.listdir(Training_source_augmented):
os.chdir(Training_source_augmented)
os.rename(filename, filename.replace('_original', ''))
#Here we clean up the extra files
shutil.rmtree(Augmented_folder)
if not Use_Data_augmentation:
print(bcolors.WARNING+"Data augmentation disabled")
###Output
_____no_output_____
###Markdown
**3.3. Using weights from a pre-trained model as initial weights**--- Here, you can set the path to a pre-trained model from which the weights can be extracted and used as a starting point for this training session. **This pre-trained model needs to be a CARE 2D model**. This option allows you to perform training over multiple Colab runtimes or to do transfer learning using models trained outside of ZeroCostDL4Mic. **You do not need to run this section if you want to train a network from scratch**. In order to continue training from the point where the pre-trained model left off, it is advisable to also **load the learning rate** that was used when the training ended. This is automatically saved for models trained with ZeroCostDL4Mic and will be loaded here. If no learning rate can be found in the model folder provided, the default learning rate will be used.
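For orientation, the sketch below (illustrative only, not part of the original notebook; the function name is an assumption) lists the files the cell that follows looks for inside a ZeroCostDL4Mic model folder when loading pre-trained weights:

```python
import os

def inspect_pretrained_folder(pretrained_model_path):
    # Weights saved by section 4.2: 'best' (lowest validation loss) and 'last' epoch
    for weights in ('weights_best.h5', 'weights_last.h5'):
        present = os.path.exists(os.path.join(pretrained_model_path, weights))
        print(weights, 'found' if present else 'missing')
    # Learning-rate history used to resume training at the right learning rate
    csv_path = os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')
    if os.path.exists(csv_path):
        print('training_evaluation.csv found')
    else:
        print('training_evaluation.csv missing (the default learning rate will be used)')
```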
###Code
# @markdown ##Loading weights from a pre-trained network
Use_pretrained_model = False #@param {type:"boolean"}
pretrained_model_choice = "Model_from_file" #@param ["Model_from_file"]
Weights_choice = "best" #@param ["last", "best"]
#@markdown ###If you chose "Model_from_file", please provide the path to the model folder:
pretrained_model_path = "" #@param {type:"string"}
# --------------------- Check if we load a previously trained model ------------------------
if Use_pretrained_model:
# --------------------- Load the model from the choosen path ------------------------
if pretrained_model_choice == "Model_from_file":
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".h5")
# --------------------- Download the a model provided in the XXX ------------------------
if pretrained_model_choice == "Model_name":
pretrained_model_name = "Model_name"
pretrained_model_path = "/content/"+pretrained_model_name
print("Downloading the 2D_Demo_Model_from_Stardist_2D_paper")
if os.path.exists(pretrained_model_path):
shutil.rmtree(pretrained_model_path)
os.makedirs(pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".h5")
# --------------------- Add additional pre-trained models here ------------------------
# --------------------- Check the model exist ------------------------
# If the model path chosen does not contain a pretrain model then use_pretrained_model is disabled,
if not os.path.exists(h5_file_path):
print(bcolors.WARNING+'WARNING: weights_'+Weights_choice+'.h5 pretrained model does not exist')
Use_pretrained_model = False
# If the model path contains a pretrain model, we load the training rate,
if os.path.exists(h5_file_path):
#Here we check if the learning rate can be loaded from the quality control folder
if os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
with open(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv'),'r') as csvfile:
csvRead = pd.read_csv(csvfile, sep=',')
#print(csvRead)
if "learning rate" in csvRead.columns: #Here we check that the learning rate column exist (compatibility with model trained un ZeroCostDL4Mic bellow 1.4)
print("pretrained network learning rate found")
#find the last learning rate
lastLearningRate = csvRead["learning rate"].iloc[-1]
#Find the learning rate corresponding to the lowest validation loss
min_val_loss = csvRead[csvRead['val_loss'] == min(csvRead['val_loss'])]
#print(min_val_loss)
bestLearningRate = min_val_loss['learning rate'].iloc[-1]
if Weights_choice == "last":
print('Last learning rate: '+str(lastLearningRate))
if Weights_choice == "best":
print('Learning rate of best validation loss: '+str(bestLearningRate))
if not "learning rate" in csvRead.columns: #if the column does not exist, then initial learning rate is used instead
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(bestLearningRate)+' will be used instead')
#Compatibility with models trained outside ZeroCostDL4Mic but default learning rate will be used
if not os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(initial_learning_rate)+' will be used instead')
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
# Display info about the pretrained model to be loaded (or not)
if Use_pretrained_model:
print('Weights found in:')
print(h5_file_path)
print('will be loaded prior to training.')
else:
print(bcolors.WARNING+'No pretrained network will be used.')
###Output
_____no_output_____
###Markdown
**4. Train the network**--- **4.1. Prepare the training data and model for training**---Here, we use the information from 3. to build the model and convert the training data into a suitable format for training.
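As a rough, back-of-the-envelope illustration of how the section 3.1 settings translate into patches and training steps in the cell below (the numbers are made up, and the exact validation-split rounding may differ slightly):

```python
# Hypothetical example numbers, for illustration only
n_images = 40                 # paired .tif files in Training_source
number_of_patches = 100       # patches extracted per image (section 3.1)
percentage_validation = 10    # % of patches set aside for validation
batch_size = 16

total_patches = n_images * number_of_patches                # 4000 patches saved to the .npz file
n_val = int(total_patches * percentage_validation / 100)    # ~400 validation patches
n_train = total_patches - n_val                             # ~3600 training patches
number_of_steps = int(n_train / batch_size) + 1             # ~226 steps per epoch (default rule used below)
print(total_patches, n_val, n_train, number_of_steps)
```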
###Code
#@markdown ##Create the model and dataset objects
# --------------------- Here we delete the model folder if it already exist ------------------------
if os.path.exists(model_path+'/'+model_name):
print(bcolors.WARNING +"!! WARNING: Model folder already exists and has been removed !!"+W)
shutil.rmtree(model_path+'/'+model_name)
# --------------------- Here we load the augmented data or the raw data ------------------------
if Use_Data_augmentation:
Training_source_dir = Training_source_augmented
Training_target_dir = Training_target_augmented
if not Use_Data_augmentation:
Training_source_dir = Training_source
Training_target_dir = Training_target
# --------------------- ------------------------------------------------
# This object holds the image pairs (GT and low), ensuring that CARE compares corresponding images.
# This file is saved in .npz format and later called when loading the training data.
raw_data = data.RawData.from_folder(
basepath=base,
source_dirs=[Training_source_dir],
target_dir=Training_target_dir,
axes='CYX',
pattern='*.tif*')
X, Y, XY_axes = data.create_patches(
raw_data,
patch_filter=None,
patch_size=(patch_size,patch_size),
n_patches_per_image=number_of_patches)
print ('Creating 2D training dataset')
training_path = model_path+"/rawdata"
rawdata1 = training_path+".npz"
np.savez(training_path,X=X, Y=Y, axes=XY_axes)
# Load Training Data
(X,Y), (X_val,Y_val), axes = load_training_data(rawdata1, validation_split=percentage, verbose=True)
c = axes_dict(axes)['C']
n_channel_in, n_channel_out = X.shape[c], Y.shape[c]
%memit
#plot of training patches.
plt.figure(figsize=(12,5))
plot_some(X[:5],Y[:5])
plt.suptitle('5 example training patches (top row: source, bottom row: target)');
#plot of validation patches
plt.figure(figsize=(12,5))
plot_some(X_val[:5],Y_val[:5])
plt.suptitle('5 example validation patches (top row: source, bottom row: target)');
#Here we automatically define number_of_step in function of training data and batch size
if (Use_Default_Advanced_Parameters):
number_of_steps= int(X.shape[0]/batch_size)+1
# --------------------- Using pretrained model ------------------------
#Here we ensure that the learning rate is set correctly when using pre-trained models
if Use_pretrained_model:
if Weights_choice == "last":
initial_learning_rate = lastLearningRate
if Weights_choice == "best":
initial_learning_rate = bestLearningRate
# --------------------- ---------------------- ------------------------
#Here we create the configuration file
config = Config(axes, n_channel_in, n_channel_out, probabilistic=True, train_steps_per_epoch=number_of_steps, train_epochs=number_of_epochs, unet_kern_size=5, unet_n_depth=3, train_batch_size=batch_size, train_learning_rate=initial_learning_rate)
print(config)
vars(config)
# Compile the CARE model for network training
model_training= CARE(config, model_name, basedir=model_path)
# --------------------- Using pretrained model ------------------------
# Load the pretrained weights
if Use_pretrained_model:
model_training.load_weights(h5_file_path)
# --------------------- ---------------------- ------------------------
###Output
_____no_output_____
###Markdown
**4.2. Start Training**---When playing the cell below you should see updates after each epoch (round). Network training can take some time.* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for datamining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of epochs or number of patches.**Of Note:** At the end of the training, your model will be automatically exported so it can be used in the CSBDeep Fiji plugin (Run your Network). You can find it in your model folder (TF_SavedModel.zip). In Fiji, Make sure to choose the right version of tensorflow. You can check at: Edit-- Options-- Tensorflow. Choose the version 1.4 (CPU or GPU depending on your system).
###Code
#@markdown ##Start training
start = time.time()
# Start Training
history = model_training.train(X,Y, validation_data=(X_val,Y_val))
print("Training, done.")
# convert the history.history dict to a pandas DataFrame:
lossData = pd.DataFrame(history.history)
if os.path.exists(model_path+"/"+model_name+"/Quality Control"):
shutil.rmtree(model_path+"/"+model_name+"/Quality Control")
os.makedirs(model_path+"/"+model_name+"/Quality Control")
# The training evaluation.csv is saved (overwrites the Files if needed).
lossDataCSVpath = model_path+'/'+model_name+'/Quality Control/training_evaluation.csv'
with open(lossDataCSVpath, 'w') as f:
writer = csv.writer(f)
writer.writerow(['loss','val_loss', 'learning rate'])
for i in range(len(history.history['loss'])):
writer.writerow([history.history['loss'][i], history.history['val_loss'][i], history.history['lr'][i]])
# Displaying the time elapsed for training
dt = time.time() - start
mins, sec = divmod(dt, 60)
hour, mins = divmod(mins, 60)
print("Time elapsed:",hour, "hour(s)",mins,"min(s)",round(sec),"sec(s)")
model_training.export_TF()
print("Your model has been sucessfully exported and can now also be used in the CSBdeep Fiji plugin")
#Create a pdf document with training summary
# save FPDF() class into a
# variable pdf
from datetime import datetime
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
Network = 'CARE 2D'
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Training report for '+Network+' model ('+model_name+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
# add another cell
training_time = "Training time: "+str(hour)+ "hour(s) "+str(mins)+"min(s) "+str(round(sec))+"sec(s)"
pdf.cell(190, 5, txt = training_time, ln = 1, align='L')
pdf.ln(1)
Header_2 = 'Information for your materials and methods:'
pdf.cell(190, 5, txt=Header_2, ln=1, align='L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
#print(all_packages)
#Main Packages
main_packages = ''
version_numbers = []
for name in ['tensorflow','numpy','Keras','csbdeep']:
find_name=all_packages.find(name)
main_packages = main_packages+all_packages[find_name:all_packages.find(',',find_name)]+', '
#Version numbers only here:
version_numbers.append(all_packages[find_name+len(name)+2:all_packages.find(',',find_name)])
cuda_version = subprocess.run('nvcc --version',stdout=subprocess.PIPE, shell=True)
cuda_version = cuda_version.stdout.decode('utf-8')
cuda_version = cuda_version[cuda_version.find(', V')+3:-1]
gpu_name = subprocess.run('nvidia-smi',stdout=subprocess.PIPE, shell=True)
gpu_name = gpu_name.stdout.decode('utf-8')
gpu_name = gpu_name[gpu_name.find('Tesla'):gpu_name.find('Tesla')+10]
#print(cuda_version[cuda_version.find(', V')+3:-1])
#print(gpu_name)
shape = io.imread(Training_source+'/'+os.listdir(Training_source)[1]).shape
dataset_size = len(os.listdir(Training_source))
text = 'The '+Network+' model was trained from scratch for '+str(number_of_epochs)+' epochs on '+str(dataset_size*number_of_patches)+' paired image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_size)+','+str(patch_size)+')) with a batch size of '+str(batch_size)+' and a '+config.train_loss+' loss function, using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+'GPU.'
if Use_pretrained_model:
text = 'The '+Network+' model was trained for '+str(number_of_epochs)+' epochs on '+str(dataset_size*number_of_patches)+' paired image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_size)+','+str(patch_size)+')) with a batch size of '+str(batch_size)+' and a '+config.train_loss+' loss function, using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). The model was re-trained from a pretrained model. Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+'GPU.'
pdf.set_font('')
pdf.set_font_size(10.)
pdf.multi_cell(190, 5, txt = text, align='L')
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.ln(1)
pdf.cell(28, 5, txt='Augmentation: ', ln=0)
pdf.set_font('')
if Use_Data_augmentation:
aug_text = 'The dataset was augmented by a factor of '+str(Multiply_dataset_by)+' by'
if rotate_270_degrees != 0 or rotate_90_degrees != 0:
aug_text = aug_text+'\n- rotation'
if flip_left_right != 0 or flip_top_bottom != 0:
aug_text = aug_text+'\n- flipping'
if random_zoom_magnification != 0:
aug_text = aug_text+'\n- random zoom magnification'
if random_distortion != 0:
aug_text = aug_text+'\n- random distortion'
if image_shear != 0:
aug_text = aug_text+'\n- image shearing'
if skew_image != 0:
aug_text = aug_text+'\n- image skewing'
else:
aug_text = 'No augmentation was used for training.'
pdf.multi_cell(190, 5, txt=aug_text, align='L')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Parameters', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
if Use_Default_Advanced_Parameters:
pdf.cell(200, 5, txt='Default Advanced Parameters were enabled')
pdf.cell(200, 5, txt='The following parameters were used for training:')
pdf.ln(1)
html = """
<table width=40% style="margin-left:0px;">
<tr>
<th width = 50% align="left">Parameter</th>
<th width = 50% align="left">Value</th>
</tr>
<tr>
<td width = 50%>number_of_epochs</td>
<td width = 50%>{0}</td>
</tr>
<tr>
<td width = 50%>patch_size</td>
<td width = 50%>{1}</td>
</tr>
<tr>
<td width = 50%>number_of_patches</td>
<td width = 50%>{2}</td>
</tr>
<tr>
<td width = 50%>batch_size</td>
<td width = 50%>{3}</td>
</tr>
<tr>
<td width = 50%>number_of_steps</td>
<td width = 50%>{4}</td>
</tr>
<tr>
<td width = 50%>percentage_validation</td>
<td width = 50%>{5}</td>
</tr>
<tr>
<td width = 50%>initial_learning_rate</td>
<td width = 50%>{6}</td>
</tr>
</table>
""".format(number_of_epochs,str(patch_size)+'x'+str(patch_size),number_of_patches,batch_size,number_of_steps,percentage_validation,initial_learning_rate)
pdf.write_html(html)
#pdf.multi_cell(190, 5, txt = text_2, align='L')
pdf.set_font("Arial", size = 11, style='B')
pdf.ln(1)
pdf.cell(190, 5, txt = 'Training Dataset', align='L', ln=1)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(29, 5, txt= 'Training_source:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = Training_source, align = 'L')
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(27, 5, txt= 'Training_target:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = Training_target, align = 'L')
#pdf.cell(190, 5, txt=aug_text, align='L', ln=1)
pdf.ln(1)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(22, 5, txt= 'Model Path:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = model_path+'/'+model_name, align = 'L')
pdf.ln(1)
pdf.cell(60, 5, txt = 'Example Training pair', ln=1)
pdf.ln(1)
exp_size = io.imread('/content/TrainingDataExample_CARE2D.png').shape
pdf.image('/content/TrainingDataExample_CARE2D.png', x = 11, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))
pdf.ln(1)
ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy." BioRxiv (2020).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- CARE: Weigert, Martin, et al. "Content-aware image restoration: pushing the limits of fluorescence microscopy." Nature methods 15.12 (2018): 1090-1097.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
if Use_Data_augmentation:
ref_3 = '- Augmentor: Bloice, Marcus D., Christof Stocker, and Andreas Holzinger. "Augmentor: an image augmentation library for machine learning." arXiv preprint arXiv:1708.04680 (2017).'
pdf.multi_cell(190, 5, txt = ref_3, align='L')
pdf.ln(3)
reminder = 'Important:\nRemember to perform the quality control step on all newly trained models\nPlease consider depositing your training dataset on Zenodo'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(model_path+'/'+model_name+'/'+model_name+"_training_report.pdf")
###Output
_____no_output_____
###Markdown
**4.3. Download your model(s) from Google Drive**---Once training is complete, the trained model is automatically saved on your Google Drive, in the **model_path** folder that was selected in Section 3. It is however wise to download the folder as all data can be erased at the next training if using the same folder. **5. Evaluate your model**---This section allows the user to perform important quality checks on the validity and generalisability of the trained model. **We highly recommend to perform quality control on all newly trained models.**
###Code
# model name and path
#@markdown ###Do you want to assess the model you just trained ?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
QC_model_folder = "" #@param {type:"string"}
#Here we define the loaded model name and path
QC_model_name = os.path.basename(QC_model_folder)
QC_model_path = os.path.dirname(QC_model_folder)
if (Use_the_current_trained_model):
QC_model_name = model_name
QC_model_path = model_path
full_QC_model_path = QC_model_path+'/'+QC_model_name+'/'
if os.path.exists(full_QC_model_path):
print("The "+QC_model_name+" network will be evaluated")
else:
W = '\033[0m' # white (normal)
R = '\033[31m' # red
print(R+'!! WARNING: The chosen model does not exist !!'+W)
print('Please make sure you provide a valid model path and model name before proceeding further.')
loss_displayed = False
###Output
_____no_output_____
###Markdown
**5.1. Inspection of the loss function**---First, it is good practice to evaluate the training progress by comparing the training loss with the validation loss. The latter is a metric which shows how well the network performs on a subset of unseen data which is set aside from the training dataset. For more information on this, see for example [this review](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6381354/) by Nichols *et al.***Training loss** describes an error value after each epoch for the difference between the model's prediction and its ground-truth target.**Validation loss** describes the same error value between the model's prediction on a validation image and its target.During training both values should decrease before reaching a minimal value which does not decrease further even after more training. Comparing the development of the validation loss with the training loss can give insights into the model's performance.Decreasing **Training loss** and **Validation loss** indicates that training is still necessary and increasing the `number_of_epochs` is recommended. Note that the curves can look flat towards the right side, just because of the y-axis scaling. The network has reached convergence once the curves flatten out. After this point no further training is required. If the **Validation loss** suddenly increases again and the **Training loss** simultaneously goes towards zero, it means that the network is overfitting to the training data. In other words, the network is remembering the exact patterns from the training data and no longer generalizes well to unseen data. In this case the training dataset has to be increased.**Note: Plots of the losses will be shown in a linear and in a log scale. This can help visualise changes in the losses at different magnitudes. However, note that if the losses are negative the plot on the log scale will be empty. This is not an error.**
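As a complement to the plot produced below, the following sketch (not part of the original notebook; the function name is illustrative) reads the same `training_evaluation.csv` written in section 4.2 and reports the epoch with the lowest validation loss, which should correspond to the `weights_best.h5` checkpoint saved during training:

```python
import pandas as pd

def best_epoch_sketch(full_QC_model_path):
    # full_QC_model_path is defined in section 5; the CSV is written by the training cell in 4.2
    df = pd.read_csv(full_QC_model_path + 'Quality Control/training_evaluation.csv')
    best = df['val_loss'].idxmin()  # row index of the lowest validation loss
    print('Lowest validation loss %.4f at epoch %d of %d'
          % (df['val_loss'].min(), best + 1, len(df)))
```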
###Code
#@markdown ##Play the cell to show a plot of training errors vs. epoch number
loss_displayed = True
lossDataFromCSV = []
vallossDataFromCSV = []
with open(QC_model_path+'/'+QC_model_name+'/Quality Control/training_evaluation.csv','r') as csvfile:
csvRead = csv.reader(csvfile, delimiter=',')
next(csvRead)
for row in csvRead:
lossDataFromCSV.append(float(row[0]))
vallossDataFromCSV.append(float(row[1]))
epochNumber = range(len(lossDataFromCSV))
plt.figure(figsize=(15,10))
plt.subplot(2,1,1)
plt.plot(epochNumber,lossDataFromCSV, label='Training loss')
plt.plot(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (linear scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.subplot(2,1,2)
plt.semilogy(epochNumber,lossDataFromCSV, label='Training loss')
plt.semilogy(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (log scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.savefig(QC_model_path+'/'+QC_model_name+'/Quality Control/lossCurvePlots.png',bbox_inches='tight',pad_inches=0)
plt.show()
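# Optional summary (not part of the original notebook): report the epoch with the
# lowest validation loss, as a quick numerical complement to the curves above.
best_epoch = int(np.argmin(vallossDataFromCSV))
print('Lowest validation loss: '+str(round(vallossDataFromCSV[best_epoch], 5))+' reached at epoch '+str(best_epoch+1))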
###Output
_____no_output_____
###Markdown
**5.2. Error mapping and quality metrics estimation**---This section will display SSIM maps and RSE maps as well as calculating total SSIM, NRMSE and PSNR metrics for all the images provided in the "Source_QC_folder" and "Target_QC_folder" !**1. The SSIM (structural similarity) map** The SSIM metric is used to evaluate whether two images contain the same structures. It is a normalized metric and an SSIM of 1 indicates a perfect similarity between two images. Therefore for SSIM, the closer to 1, the better. The SSIM maps are constructed by calculating the SSIM metric in each pixel by considering the surrounding structural similarity in the neighbourhood of that pixel (currently defined as window of 11 pixels and with Gaussian weighting of 1.5 pixel standard deviation, see our Wiki for more info). **mSSIM** is the SSIM value calculated across the entire window of both images.**The output below shows the SSIM maps with the mSSIM****2. The RSE (Root Squared Error) map** This is a display of the root of the squared difference between the normalized predicted and target or the source and the target. In this case, a smaller RSE is better. A perfect agreement between target and prediction will lead to an RSE map showing zeros everywhere (dark).**NRMSE (normalised root mean squared error)** gives the average difference between all pixels in the images compared to each other. Good agreement yields low NRMSE scores.**PSNR (Peak signal-to-noise ratio)** is a metric that gives the difference between the ground truth and prediction (or source input) in decibels, using the peak pixel values of the prediction and the MSE between the images. The higher the score the better the agreement.**The output below shows the RSE maps with the NRMSE and PSNR values.**
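As an optional, quick sanity check of how these metrics behave, the extra cell below (not part of the original quality control workflow) computes mSSIM, PSNR and a simple RMSE for a synthetic image pair already scaled to [0, 1]; all names in it (e.g. `demo_gt`, `demo_noisy`) are illustrative only and nothing is read from or written to your data.
###Code
#@markdown ##Optional: demonstrate the metrics on a synthetic image pair
# This cell is purely illustrative and is not part of the original workflow.
demo_gt = np.random.rand(128, 128).astype(np.float32)          # synthetic "ground truth"
demo_noisy = np.clip(demo_gt + np.random.normal(0, 0.1, demo_gt.shape), 0, 1).astype(np.float32)
demo_mssim = structural_similarity(demo_gt, demo_noisy, data_range=1.0)
demo_psnr = psnr(demo_gt, demo_noisy, data_range=1.0)
demo_rmse = np.sqrt(np.mean(np.square(demo_gt - demo_noisy)))
print('mSSIM: '+str(round(float(demo_mssim), 3)))
print('PSNR (dB): '+str(round(float(demo_psnr), 3)))
print('RMSE: '+str(round(float(demo_rmse), 3)))
###Output
_____no_output_____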
###Code
#@markdown ##Choose the folders that contain your Quality Control dataset
Source_QC_folder = "" #@param{type:"string"}
Target_QC_folder = "" #@param{type:"string"}
# Create a quality control/Prediction Folder
if os.path.exists(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction"):
shutil.rmtree(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
os.makedirs(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
# Activate the pretrained model.
model_training = CARE(config=None, name=QC_model_name, basedir=QC_model_path)
# List Tif images in Source_QC_folder
Source_QC_folder_tif = Source_QC_folder+"/*.tif"
Z = sorted(glob(Source_QC_folder_tif))
Z = list(map(imread,Z))
print('Number of test images found in the folder: '+str(len(Z)))
# Perform prediction on all datasets in the Source_QC folder
for filename in os.listdir(Source_QC_folder):
img = imread(os.path.join(Source_QC_folder, filename))
predicted = model_training.predict(img, axes='YX')
os.chdir(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
imsave(filename, predicted)
def ssim(img1, img2):
return structural_similarity(img1,img2,data_range=1.,full=True, gaussian_weights=True, use_sample_covariance=False, sigma=1.5)
def normalize(x, pmin=3, pmax=99.8, axis=None, clip=False, eps=1e-20, dtype=np.float32):
"""This function is adapted from Martin Weigert"""
"""Percentile-based image normalization."""
mi = np.percentile(x,pmin,axis=axis,keepdims=True)
ma = np.percentile(x,pmax,axis=axis,keepdims=True)
return normalize_mi_ma(x, mi, ma, clip=clip, eps=eps, dtype=dtype)
def normalize_mi_ma(x, mi, ma, clip=False, eps=1e-20, dtype=np.float32):#dtype=np.float32
"""This function is adapted from Martin Weigert"""
if dtype is not None:
x = x.astype(dtype,copy=False)
mi = dtype(mi) if np.isscalar(mi) else mi.astype(dtype,copy=False)
ma = dtype(ma) if np.isscalar(ma) else ma.astype(dtype,copy=False)
eps = dtype(eps)
try:
import numexpr
x = numexpr.evaluate("(x - mi) / ( ma - mi + eps )")
except ImportError:
x = (x - mi) / ( ma - mi + eps )
if clip:
x = np.clip(x,0,1)
return x
def norm_minmse(gt, x, normalize_gt=True):
"""This function is adapted from Martin Weigert"""
"""
normalizes and affinely scales an image pair such that the MSE is minimized
Parameters
----------
gt: ndarray
the ground truth image
x: ndarray
the image that will be affinely scaled
normalize_gt: bool
set to True of gt image should be normalized (default)
Returns
-------
gt_scaled, x_scaled
"""
if normalize_gt:
gt = normalize(gt, 0.1, 99.9, clip=False).astype(np.float32, copy = False)
x = x.astype(np.float32, copy=False) - np.mean(x)
#x = x - np.mean(x)
gt = gt.astype(np.float32, copy=False) - np.mean(gt)
#gt = gt - np.mean(gt)
scale = np.cov(x.flatten(), gt.flatten())[0, 1] / np.var(x.flatten())
return gt, scale * x
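# Optional illustration (not in the original notebook): norm_minmse rescales the
# second image so that its mean squared error to the (normalised) first image is
# minimised, which makes the SSIM/NRMSE/PSNR comparisons below less sensitive to
# arbitrary intensity offsets and scaling. The names used here are illustrative only.
_demo_gt = np.random.rand(64, 64).astype(np.float32)
_demo_scaled = 2.5 * _demo_gt + 0.3
_demo_gt_n, _demo_scaled_n = norm_minmse(_demo_gt, _demo_scaled, normalize_gt=True)
print('Demo MSE after norm_minmse: '+str(float(np.mean(np.square(_demo_gt_n - _demo_scaled_n)))))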
# Open and create the csv file that will contain all the QC metrics
with open(QC_model_path+"/"+QC_model_name+"/Quality Control/QC_metrics_"+QC_model_name+".csv", "w", newline='') as file:
writer = csv.writer(file)
# Write the header in the csv file
writer.writerow(["image #","Prediction v. GT mSSIM","Input v. GT mSSIM", "Prediction v. GT NRMSE", "Input v. GT NRMSE", "Prediction v. GT PSNR", "Input v. GT PSNR"])
# Let's loop through the provided dataset in the QC folders
for i in os.listdir(Source_QC_folder):
if not os.path.isdir(os.path.join(Source_QC_folder,i)):
print('Running QC on: '+i)
# -------------------------------- Target test data (Ground truth) --------------------------------
test_GT = io.imread(os.path.join(Target_QC_folder, i))
# -------------------------------- Source test data --------------------------------
test_source = io.imread(os.path.join(Source_QC_folder,i))
# Normalize the images wrt each other by minimizing the MSE between GT and Source image
test_GT_norm,test_source_norm = norm_minmse(test_GT, test_source, normalize_gt=True)
# -------------------------------- Prediction --------------------------------
test_prediction = io.imread(os.path.join(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction",i))
# Normalize the images wrt each other by minimizing the MSE between GT and prediction
test_GT_norm,test_prediction_norm = norm_minmse(test_GT, test_prediction, normalize_gt=True)
# -------------------------------- Calculate the metric maps and save them --------------------------------
# Calculate the SSIM maps
index_SSIM_GTvsPrediction, img_SSIM_GTvsPrediction = ssim(test_GT_norm, test_prediction_norm)
index_SSIM_GTvsSource, img_SSIM_GTvsSource = ssim(test_GT_norm, test_source_norm)
#Save ssim_maps
img_SSIM_GTvsPrediction_32bit = np.float32(img_SSIM_GTvsPrediction)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/SSIM_GTvsPrediction_'+i,img_SSIM_GTvsPrediction_32bit)
img_SSIM_GTvsSource_32bit = np.float32(img_SSIM_GTvsSource)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/SSIM_GTvsSource_'+i,img_SSIM_GTvsSource_32bit)
# Calculate the Root Squared Error (RSE) maps
img_RSE_GTvsPrediction = np.sqrt(np.square(test_GT_norm - test_prediction_norm))
img_RSE_GTvsSource = np.sqrt(np.square(test_GT_norm - test_source_norm))
# Save SE maps
img_RSE_GTvsPrediction_32bit = np.float32(img_RSE_GTvsPrediction)
img_RSE_GTvsSource_32bit = np.float32(img_RSE_GTvsSource)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/RSE_GTvsPrediction_'+i,img_RSE_GTvsPrediction_32bit)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/RSE_GTvsSource_'+i,img_RSE_GTvsSource_32bit)
# -------------------------------- Calculate the RSE metrics and save them --------------------------------
# Normalised Root Mean Squared Error (here it's valid to take the mean of the image)
NRMSE_GTvsPrediction = np.sqrt(np.mean(img_RSE_GTvsPrediction))
NRMSE_GTvsSource = np.sqrt(np.mean(img_RSE_GTvsSource))
# We can also measure the peak signal to noise ratio between the images
PSNR_GTvsPrediction = psnr(test_GT_norm,test_prediction_norm,data_range=1.0)
PSNR_GTvsSource = psnr(test_GT_norm,test_source_norm,data_range=1.0)
writer.writerow([i,str(index_SSIM_GTvsPrediction),str(index_SSIM_GTvsSource),str(NRMSE_GTvsPrediction),str(NRMSE_GTvsSource),str(PSNR_GTvsPrediction),str(PSNR_GTvsSource)])
# All data is now processed and saved
Test_FileList = os.listdir(Source_QC_folder) # this assumes, as it should, that both source and target are named the same
plt.figure(figsize=(20,20))
# Currently only displays the last computed set, from memory
# Target (Ground-truth)
plt.subplot(3,3,1)
plt.axis('off')
img_GT = io.imread(os.path.join(Target_QC_folder, Test_FileList[-1]))
plt.imshow(img_GT, norm=simple_norm(img_GT, percent = 99))
plt.title('Target',fontsize=15)
# Source
plt.subplot(3,3,2)
plt.axis('off')
img_Source = io.imread(os.path.join(Source_QC_folder, Test_FileList[-1]))
plt.imshow(img_Source, norm=simple_norm(img_Source, percent = 99))
plt.title('Source',fontsize=15)
#Prediction
plt.subplot(3,3,3)
plt.axis('off')
img_Prediction = io.imread(os.path.join(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction/", Test_FileList[-1]))
plt.imshow(img_Prediction, norm=simple_norm(img_Prediction, percent = 99))
plt.title('Prediction',fontsize=15)
#Setting up colours
cmap = plt.cm.CMRmap
#SSIM between GT and Source
plt.subplot(3,3,5)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imSSIM_GTvsSource = plt.imshow(img_SSIM_GTvsSource, cmap = cmap, vmin=0, vmax=1)
plt.colorbar(imSSIM_GTvsSource,fraction=0.046, pad=0.04)
plt.title('Target vs. Source',fontsize=15)
plt.xlabel('mSSIM: '+str(round(index_SSIM_GTvsSource,3)),fontsize=14)
plt.ylabel('SSIM maps',fontsize=20, rotation=0, labelpad=75)
#SSIM between GT and Prediction
plt.subplot(3,3,6)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imSSIM_GTvsPrediction = plt.imshow(img_SSIM_GTvsPrediction, cmap = cmap, vmin=0,vmax=1)
plt.colorbar(imSSIM_GTvsPrediction,fraction=0.046, pad=0.04)
plt.title('Target vs. Prediction',fontsize=15)
plt.xlabel('mSSIM: '+str(round(index_SSIM_GTvsPrediction,3)),fontsize=14)
#Root Squared Error between GT and Source
plt.subplot(3,3,8)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imRSE_GTvsSource = plt.imshow(img_RSE_GTvsSource, cmap = cmap, vmin=0, vmax = 1)
plt.colorbar(imRSE_GTvsSource,fraction=0.046,pad=0.04)
plt.title('Target vs. Source',fontsize=15)
plt.xlabel('NRMSE: '+str(round(NRMSE_GTvsSource,3))+', PSNR: '+str(round(PSNR_GTvsSource,3)),fontsize=14)
#plt.title('Target vs. Source PSNR: '+str(round(PSNR_GTvsSource,3)))
plt.ylabel('RSE maps',fontsize=20, rotation=0, labelpad=75)
#Root Squared Error between GT and Prediction
plt.subplot(3,3,9)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imRSE_GTvsPrediction = plt.imshow(img_RSE_GTvsPrediction, cmap = cmap, vmin=0, vmax=1)
plt.colorbar(imRSE_GTvsPrediction,fraction=0.046,pad=0.04)
plt.title('Target vs. Prediction',fontsize=15)
plt.xlabel('NRMSE: '+str(round(NRMSE_GTvsPrediction,3))+', PSNR: '+str(round(PSNR_GTvsPrediction,3)),fontsize=14)
plt.savefig(full_QC_model_path+'Quality Control/QC_example_data.png',bbox_inches='tight',pad_inches=0)
#Make a pdf summary of the QC results
from datetime import datetime
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
Network = 'CARE 2D'
#model_name = os.path.basename(full_QC_model_path)
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Quality Control report for '+Network+' model ('+QC_model_name+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(2)
pdf.cell(190, 5, txt = 'Development of Training Losses', ln=1, align='L')
pdf.ln(1)
exp_size = io.imread(full_QC_model_path+'Quality Control/QC_example_data.png').shape
if os.path.exists(full_QC_model_path+'Quality Control/lossCurvePlots.png'):
pdf.image(full_QC_model_path+'Quality Control/lossCurvePlots.png', x = 11, y = None, w = round(exp_size[1]/10), h = round(exp_size[0]/13))
else:
pdf.set_font('')
pdf.set_font('Arial', size=10)
pdf.cell(190, 5, txt='If you would like to see the evolution of the loss function during training please play the first cell of the QC section in the notebook.')
pdf.ln(2)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.ln(3)
pdf.cell(80, 5, txt = 'Example Quality Control Visualisation', ln=1)
pdf.ln(1)
exp_size = io.imread(full_QC_model_path+'Quality Control/QC_example_data.png').shape
pdf.image(full_QC_model_path+'Quality Control/QC_example_data.png', x = 16, y = None, w = round(exp_size[1]/10), h = round(exp_size[0]/10))
pdf.ln(1)
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Quality Control Metrics', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
pdf.ln(1)
html = """
<body>
<font size="7" face="Courier New" >
<table width=94% style="margin-left:0px;">"""
with open(full_QC_model_path+'Quality Control/QC_metrics_'+QC_model_name+'.csv', 'r') as csvfile:
metrics = csv.reader(csvfile)
header = next(metrics)
image = header[0]
mSSIM_PvsGT = header[1]
mSSIM_SvsGT = header[2]
NRMSE_PvsGT = header[3]
NRMSE_SvsGT = header[4]
PSNR_PvsGT = header[5]
PSNR_SvsGT = header[6]
header = """
<tr>
<th width = 10% align="left">{0}</th>
<th width = 15% align="left">{1}</th>
<th width = 15% align="center">{2}</th>
<th width = 15% align="left">{3}</th>
<th width = 15% align="center">{4}</th>
<th width = 15% align="left">{5}</th>
<th width = 15% align="center">{6}</th>
</tr>""".format(image,mSSIM_PvsGT,mSSIM_SvsGT,NRMSE_PvsGT,NRMSE_SvsGT,PSNR_PvsGT,PSNR_SvsGT)
html = html+header
for row in metrics:
image = row[0]
mSSIM_PvsGT = row[1]
mSSIM_SvsGT = row[2]
NRMSE_PvsGT = row[3]
NRMSE_SvsGT = row[4]
PSNR_PvsGT = row[5]
PSNR_SvsGT = row[6]
cells = """
<tr>
<td width = 10% align="left">{0}</td>
<td width = 15% align="center">{1}</td>
<td width = 15% align="center">{2}</td>
<td width = 15% align="center">{3}</td>
<td width = 15% align="center">{4}</td>
<td width = 15% align="center">{5}</td>
<td width = 15% align="center">{6}</td>
</tr>""".format(image,str(round(float(mSSIM_PvsGT),3)),str(round(float(mSSIM_SvsGT),3)),str(round(float(NRMSE_PvsGT),3)),str(round(float(NRMSE_SvsGT),3)),str(round(float(PSNR_PvsGT),3)),str(round(float(PSNR_SvsGT),3)))
html = html+cells
html = html+"""</body></table>"""
pdf.write_html(html)
pdf.ln(1)
pdf.set_font('')
pdf.set_font_size(10.)
ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy." BioRxiv (2020).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- CARE: Weigert, Martin, et al. "Content-aware image restoration: pushing the limits of fluorescence microscopy." Nature methods 15.12 (2018): 1090-1097.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
pdf.ln(3)
reminder = 'To find the parameters and other information about how this model was trained, go to the training_report.pdf of this model which should be in the folder of the same name.'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(full_QC_model_path+'Quality Control/'+QC_model_name+'_QC_report.pdf')
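# Optional confirmation (not part of the original notebook) of where the QC report was written.
print('QC report saved as: '+full_QC_model_path+'Quality Control/'+QC_model_name+'_QC_report.pdf')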
###Output
_____no_output_____
###Markdown
**6. Using the trained model**---In this section the unseen data is processed using the trained model (in section 4). First, your unseen images are uploaded and prepared for prediction. After that, your trained model from section 4 is loaded and the resulting predictions are saved into your Google Drive. **6.1. Generate prediction(s) from unseen dataset**---The current trained model (from section 4.2) can now be used to process images. If you want to use an older model, untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. Predicted output images are saved in your **Result_folder** folder as restored image stacks (ImageJ-compatible TIFF images).**`Data_folder`:** This folder should contain the images that you want to process with your trained network.**`Result_folder`:** This folder will contain the predicted output images.
###Code
#@markdown ### Provide the path to your dataset and to the folder where the predictions are saved, then play the cell to predict outputs from your unseen images.
Data_folder = "" #@param {type:"string"}
Result_folder = "" #@param {type:"string"}
# model name and path
#@markdown ###Do you want to use the current trained model?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
Prediction_model_folder = "" #@param {type:"string"}
#Here we find the loaded model name and parent path
Prediction_model_name = os.path.basename(Prediction_model_folder)
Prediction_model_path = os.path.dirname(Prediction_model_folder)
if (Use_the_current_trained_model):
print("Using current trained network")
Prediction_model_name = model_name
Prediction_model_path = model_path
full_Prediction_model_path = os.path.join(Prediction_model_path, Prediction_model_name)
if os.path.exists(full_Prediction_model_path):
print("The "+Prediction_model_name+" network will be used.")
else:
W = '\033[0m' # white (normal)
R = '\033[31m' # red
print(R+'!! WARNING: The chosen model does not exist !!'+W)
print('Please make sure you provide a valid model path and model name before proceeding further.')
#Activate the pretrained model.
model_training = CARE(config=None, name=Prediction_model_name, basedir=Prediction_model_path)
# creates a loop, creating filenames and saving them
for filename in os.listdir(Data_folder):
img = imread(os.path.join(Data_folder,filename))
restored = model_training.predict(img, axes='YX')
os.chdir(Result_folder)
imsave(filename,restored)
print("Images saved into folder:", Result_folder)
###Output
_____no_output_____
###Markdown
**6.2. Inspect the predicted output**---
###Code
# @markdown ##Run this cell to display a randomly chosen input and its corresponding predicted output.
# This will display a randomly chosen dataset input and predicted output
random_choice = random.choice(os.listdir(Data_folder))
x = imread(Data_folder+"/"+random_choice)
os.chdir(Result_folder)
y = imread(Result_folder+"/"+random_choice)
plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
plt.axis('off')
plt.imshow(x, norm=simple_norm(x, percent = 99), interpolation='nearest')
plt.title('Input')
plt.subplot(1,2,2)
plt.axis('off')
plt.imshow(y, norm=simple_norm(y, percent = 99), interpolation='nearest')
plt.title('Predicted output');
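# Optional extra view (not in the original notebook): the per-pixel absolute
# difference between input and prediction can highlight where the network
# changed the image the most. This is illustrative only and is not saved.
plt.figure(figsize=(8, 8))
plt.axis('off')
plt.imshow(np.abs(x.astype(np.float32) - y.astype(np.float32)), cmap='magma')
plt.title('Absolute difference (|input - prediction|)');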
###Output
_____no_output_____
###Markdown
**Content-aware image restoration (CARE) 2D**CARE is a neural network capable of image restoration from corrupted bio-images, first published in 2018 by [Weigert *et al.* in Nature Methods](https://www.nature.com/articles/s41592-018-0216-7). The network allows image denoising and resolution improvement in 2D and 3D images, in a supervised training manner. The function of the network is essentially determined by the set of images provided in the training dataset. For instance, if noisy images are provided as input and high signal-to-noise ratio images are provided as targets, the network will perform denoising.---*Disclaimer*:This notebook is part of the *Zero-Cost Deep-Learning to Enhance Microscopy* project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki). Jointly developed by the Jacquemet (link to https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories.This notebook is based on the following paper: **Content-aware image restoration: pushing the limits of fluorescence microscopy**, Nature Methods, Volume 15. pages 1090–1097(2018) by *Martin Weigert, Uwe Schmidt, Tobias Boothe, Andreas Müller, Alexandr Dibrov, Akanksha Jain, Benjamin Wilhelm, Deborah Schmidt, Coleman Broaddus, Siân Culley, Mauricio Rocha-Martins, Fabián Segovia-Miranda, Caren Norden, Ricardo Henriques, Marino Zerial, Michele Solimena, Jochen Rink, Pavel Tomancak, Loic Royer, Florian Jug & Eugene W. Myers* (https://www.nature.com/articles/s41592-018-0216-7)And source code found in: https://github.com/csbdeep/csbdeepFor a more in-depth description of the features of the network,please refer to [this guide](http://csbdeep.bioimagecomputing.com/doc/) provided by the original authors of the work.We provide a dataset for the training of this notebook as a way to test its functionalities but the training and test data of the restoration experiments is also available from the authors of the original paper [here](https://publications.mpi-cbg.de/publications-sites/7207/).**Please also cite this original paper when using or developing this notebook.** **How to use this notebook?**---Video describing how to use our notebooks are available on youtube: - [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run through of the workflow to obtain the notebooks and the provided test datasets as well as a common use of the notebook - [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook---**Structure of a notebook**The notebook contains two types of cell: **Text cells** provide information and can be modified by douple-clicking the cell. You are currently reading the text cell. You can create a new text by clicking `+ Text`.**Code cells** contain code and the code can be modfied by selecting the cell. To execute the cell, move your cursor on the `[ ]`-mark on the left side of the cell (play button appears). Click to execute the cell. After execution is done the animation of play button stops. You can create a new coding cell by clicking `+ Code`.---**Table of contents, Code snippets** and **Files**On the top left side of the notebook you find three tabs which contain from top to bottom:*Table of contents* = contains structure of the notebook. Click the content to move quickly between sections.*Code snippets* = contain examples how to code certain tasks. You can ignore this when using this notebook.*Files* = contain all available files. After mounting your google drive (see section 1.) 
you will find your files and folders here. **Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive-button; your Google Drive is connected in section 1.2.**Note:** The "sample data" in "Files" contains default files. Do not upload anything in here!---**Making changes to the notebook****You can make a copy** of the notebook and save it to your Google Drive. To do this click file -> save a copy in drive.To **edit a cell**, double click on the text. This will show you either the source code (in code cells) or the source text (in text cells).You can use the ``-mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment. **0. Before getting started**--- Before you run the notebook, please ensure that you are logged into your Google account and have the training and/or data to process in your Google Drive. For CARE to train, **it needs to have access to a paired training dataset**. This means that the same image needs to be acquired in the two conditions (for instance, low signal-to-noise ratio and high signal-to-noise ratio) and provided with indication of correspondence. Therefore, the data structure is important. It is necessary that all the input data are in the same folder and that all the output data is in a separate folder. The provided training dataset is already split in two folders called "Training - Low SNR images" (Training_source) and "Training - high SNR images" (Training_target). Information on how to generate a training dataset is available in our Wiki page: https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki **Additionally, the corresponding input and output files need to have the same name**. Please note that you currently can **only use .tif files!** You can also provide a folder that contains the data that you wish to analyse with the trained network once all training has been performed. This can include Test dataset for which you have the equivalent output and can compare to what the network provides. Here is a common data structure that can work:* Data - Training dataset - Training - Low SNR images (Training_source) - img_1.tif, img_2.tif, ... - Training - high SNR images (Training_target) - img_1.tif, img_2.tif, ... - Test dataset - Results The **Results** folder will contain the processed images, trained model and network parameters as csv file. Your original images remain unmodified.---**Important note**- If you wish to **Train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to run predictions on the model that was just trained.- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to run the predictions on the desired model.--- **1. Set the Runtime type and mount your Google Drive**--- **1.1. Change the Runtime type**---Go to **Runtime -> Change the Runtime type****Runtime type: Python 3** *(Python 3 is programming language in which this program is written)***Accelator: GPU** *(Graphics processing unit)*
###Code
#@markdown ##Run this cell to check if you have GPU access
%tensorflow_version 1.x
import tensorflow as tf
if tf.test.gpu_device_name()=='':
print('You do not have GPU access.')
print('Did you change your runtime ?')
print('If the runtime settings are correct then Google did not allocate GPU to your session')
print('Expect slow performance. To access GPU try reconnecting later')
else:
print('You have GPU access')
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
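# Optional (not part of the original notebook): if a GPU was allocated, the shell
# command below prints its name and available memory.
!nvidia-smi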
###Output
_____no_output_____
###Markdown
**1.2. Mount your Google Drive**--- To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook. Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive and select 'Allow', copy the code, paste into the cell and press enter. This will give Colab access to the data on the drive. Once this is done, your data are available in the **Files** tab on the top left of notebook.
###Code
#@markdown ##Run this cell to connect your Google Drive to Colab
#@markdown * Click on the URL.
#@markdown * Sign in your Google Account.
#@markdown * Copy the authorization code.
#@markdown * Enter the authorization code.
#@markdown * Click on "Files" site on the right. Refresh the site. Your Google Drive folder should now be available here as "drive".
#mounts user's Google Drive to Google Colab.
from google.colab import drive
drive.mount('/content/gdrive')
###Output
_____no_output_____
###Markdown
**2. Install CARE and Dependencies**---
###Code
#@markdown ##Install CARE and dependencies
#Libraries contains information of certain topics.
#For example the tifffile library contains information on how to handle tif-files.
#Here, we install libraries which are not already included in Colab.
!pip install tifffile # contains tools to operate tiff-files
!pip install csbdeep # contains tools for restoration of fluorescence microcopy images (Content-aware Image Restoration, CARE). It uses Keras and Tensorflow.
#Here, we import and enable Tensorflow 1 instead of Tensorflow 2.
%tensorflow_version 1.x
import tensorflow
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
print(tensorflow.__version__)
print("Tensorflow enabled.")
#Here, we import all libraries necessary for this notebook.
from __future__ import print_function, unicode_literals, absolute_import, division
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from tifffile import imread, imsave
from csbdeep.utils import download_and_extract_zip_file, plot_some, axes_dict, plot_history, Path, download_and_extract_zip_file
from csbdeep.data import RawData, create_patches
from csbdeep.io import load_training_data, save_tiff_imagej_compatible
from csbdeep.models import Config, CARE
from csbdeep import data
from pathlib import Path
import os, random
import shutil
import pandas as pd
import csv
!pip install memory_profiler
%load_ext memory_profiler
print("Depencies installed and imported.")
###Output
_____no_output_____
###Markdown
**3. Select your paths and parameters**---The code below allows the user to enter the paths to where the training data is and to define the training parameters. **Paths for training, predictions and results****`Training_source:`, `Training_target`:** These are the paths to your folders containing the Training_source (Low SNR images) and Training_target (High SNR images or ground truth) training data respecively. To find the paths of the folders containing the respective datasets, go to your Files on the left of the notebook, navigate to the folder containing your files and copy the path by right-clicking on the folder, **Copy path** and pasting it into the right box below.**`model_name`:** Use only my_model -style, not my-model (Use "_" not "-"). Do not use spaces in the name. Avoid using the name of an existing model (saved in the same folder) as it will be overwritten.**`model_path`**: Enter the path where your model will be saved once trained (for instance your result folder).**`visual_validation_after_training`**: If you select this option, a random image pair will be set aside from your training set and will be used to display a predicted image of the trained network next to the input and the ground-truth. This can aid in visually assessing the performance of your network after training. **Note: Your training set size will decrease by 1 if you select this option.****Training Parameters****`number_of_epochs`:**Input how many epochs (rounds) the network will be trained. Preliminary results can already be observed after a few (10-30) epochs, but a full training should run for 100-300 epochs. Evaluate the performance after training (see 4.3.). **Default value: 50****`patch_size`:** CARE divides the image into patches for training. Input the size of the patches (length of a side). The value should be smaller than the dimensions of the image and divisible by 8. **Default value: 80****`number_of_patches`:** Input the number of the patches per image. Increasing the number of patches allows for larger training datasets. **Default value: 100** **Decreasing the patch size or increasing the number of patches may improve the training but may also increase the training time.****Advanced Parameters - experienced users only****`number_of_steps`:** Define the number of training steps by epoch. By default this parameter is calculated so that each patch is seen at least once per epoch. **Default value: Number of patch / batch_size****`batch_size:`** This parameter defines the number of patches seen in each training step. Reducing or increasing the **batch size** may slow or speed up your training, respectively, and can influence network performance. **Default value: 64****`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during training. **Default value: 10**
###Code
#@markdown ###Path to training images:
# base folder of GT and low images
#base = "/content/gdrive/My Drive/Zero-Cost Deep-Learning to Enhance Microscopy/Notebooks to be tested/CARE 2D/Nucleus_datasets/train"
base = "/content/"
training_data = base+"/my_training_data.npz"
# low SNR images
# low = "/content/gdrive/My Drive/Work/manuscript/Ongoing Projects/Zero-Cost Deep-Learning to Enhance Microscopy/Notebooks to be tested/Training datasets/CARE (2D)/Training - Low SNR images" #@param {type:"string"}
Training_source = "" #@param {type:"string"}
#Input_data_folder = Training_source
InputFile = Training_source+"/*.tif"
# Ground truth images
# GT = "/content/gdrive/My Drive/Work/manuscript/Ongoing Projects/Zero-Cost Deep-Learning to Enhance Microscopy/Notebooks to be tested/Training datasets/CARE (2D)/Training - High SNR images" #@param {type:"string"}
Training_target = "" #@param {type:"string"}
#Output_data_folder = Training_target
OutputFile = Training_target+"/*.tif"
# model name and path
#@markdown ###Name of the model and path to model folder:
model_name = "" #@param {type:"string"}
model_path = "" #@param {type:"string"}
#@markdown ####Use one image of the training set for visual assessment of the training:
Visual_validation_after_training = True #@param {type:"boolean"}
# other parameters for training.
#@markdown ###Training Parameters
#@markdown Number of epochs:
number_of_epochs = 50#@param {type:"number"}
#@markdown Patch size (pixels) and number
patch_size = 80#@param {type:"number"} # in pixels
number_of_patches = 100#@param {type:"number"}
#@markdown ###Advanced Parameters
Use_Default_Advanced_Parameters = True #@param {type:"boolean"}
#@markdown ###If not, please input:
number_of_steps = 200 #@param {type:"number"}
batch_size = 64#@param {type:"number"}
percentage_validation = 10 #@param {type:"number"}
if (Use_Default_Advanced_Parameters):
print("Default advanced parameters enabled")
batch_size = 64
percentage_validation = 10
percentage = percentage_validation/100
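# Optional check (not in the original notebook): warn if the chosen patch size is
# not divisible by 8, as recommended in the parameter description above.
if patch_size % 8 != 0:
  print('WARNING: patch_size ('+str(patch_size)+') is not divisible by 8; consider using '+str(int(patch_size//8)*8)+' instead.')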
#here we check that no model with the same name already exist, if so delete
if os.path.exists(model_path+'/'+model_name):
shutil.rmtree(model_path+'/'+model_name)
# The shape of the images.
x = imread(InputFile)
y = imread(OutputFile)
print('Loaded Input images (number, width, length) =', x.shape)
print('Loaded Output images (number, width, length) =', y.shape)
print("Parameters initiated.")
# This will display a randomly chosen dataset input and output
random_choice = random.choice(os.listdir(Training_source))
x = imread(Training_source+"/"+random_choice)
os.chdir(Training_target)
y = imread(Training_target+"/"+random_choice)
f=plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
plt.imshow(x, interpolation='nearest')
plt.title('Training source')
plt.axis('off');
plt.subplot(1,2,2)
plt.imshow(y, interpolation='nearest')
plt.title('Training target')
plt.axis('off');
#protection for next cell
if (Visual_validation_after_training):
Cell_executed = 0
###Output
_____no_output_____
###Markdown
**4. Train the network**--- **4.1. Prepare the training data and model for training**---Here, we use the information from 3. to build the model and convert the training data into a suitable format for training.
###Code
#@markdown ##Create the model and dataset objects
# The code in this cell is inspired by that from the authors' repository (https://github.com/CSBDeep/CSBDeep).
if (Visual_validation_after_training):
if Cell_executed == 0 :
#Create a temporary file folder for immediate assessment of training results:
#If the folder still exists, delete it
if os.path.exists(Training_source+"/temp"):
shutil.rmtree(Training_source+"/temp")
if os.path.exists(Training_target+"/temp"):
shutil.rmtree(Training_target+"/temp")
if os.path.exists(model_path+"/temp"):
shutil.rmtree(model_path+"/temp")
#Create directories to move files temporarily into for assessment
os.makedirs(Training_source+"/temp")
os.makedirs(Training_target+"/temp")
os.makedirs(model_path+"/temp")
#list_source = os.listdir(os.path.join(Training_source))
#list_target = os.listdir(os.path.join(Training_target))
#Move files into the temporary source and target directories:
shutil.move(Training_source+"/"+random_choice, Training_source+'/temp/'+random_choice)
shutil.move(Training_target+"/"+random_choice, Training_target+'/temp/'+random_choice)
# RawData Object
# This object holds the image pairs (GT and low), ensuring that CARE compares corresponding images.
# This file is saved in .npz format and later called when loading the training data.
raw_data = data.RawData.from_folder(
basepath=base,
source_dirs=[Training_source],
target_dir=Training_target,
axes='CYX',
pattern='*.tif*')
X, Y, XY_axes = data.create_patches(
raw_data,
patch_filter=None,
patch_size=(patch_size,patch_size),
n_patches_per_image=number_of_patches)
print ('Creating 2D training dataset')
training_path = model_path+"/rawdata"
rawdata1 = training_path+".npz"
np.savez(training_path,X=X, Y=Y, axes=XY_axes)
# Load Training Data
(X,Y), (X_val,Y_val), axes = load_training_data(rawdata1, validation_split=percentage, verbose=True)
c = axes_dict(axes)['C']
n_channel_in, n_channel_out = X.shape[c], Y.shape[c]
%memit
#plot of training patches.
plt.figure(figsize=(12,5))
plot_some(X[:5],Y[:5])
plt.suptitle('5 example training patches (top row: source, bottom row: target)');
#plot of validation patches
plt.figure(figsize=(12,5))
plot_some(X_val[:5],Y_val[:5])
plt.suptitle('5 example validation patches (top row: source, bottom row: target)');
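# Optional summary (not part of the original notebook): report how many patches
# were generated for training and for validation after the split set in section 3.
print('Number of training patches: '+str(X.shape[0]))
print('Number of validation patches: '+str(X_val.shape[0]))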
#Here we automatically define number_of_step in function of training data and batch size
if (Use_Default_Advanced_Parameters):
number_of_steps= int(X.shape[0]/batch_size)+1
print(number_of_steps)
#Here we create the configuration file
config = Config(axes, n_channel_in, n_channel_out, probabilistic=True, train_steps_per_epoch=number_of_steps, train_epochs=number_of_epochs, unet_kern_size=5, unet_n_depth=3, train_batch_size=batch_size, train_learning_rate=0.0004)
print(config)
vars(config)
# Compile the CARE model for network training
model_training= CARE(config, model_name, basedir=model_path)
if (Visual_validation_after_training):
Cell_executed = 1
###Output
_____no_output_____
###Markdown
**4.2. Train the network**---When playing the cell below you should see updates after each epoch (round). Network training can take some time.* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for datamining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of epochs or number of patches.
###Code
import time
start = time.time()
#@markdown ##Start Training
# Start Training
history = model_training.train(X,Y, validation_data=(X_val,Y_val))
print("Training, done.")
if (Visual_validation_after_training):
if Cell_executed == 1:
#Here we predict one image
validation_image = imread(Training_source+"/temp/"+random_choice)
validation_test = model_training.predict(validation_image, axes='YX')
os.chdir(model_path+"/temp/")
imsave(random_choice+"_predicted.tif",validation_test)
#Source
I = imread(Training_source+"/temp/"+random_choice)
#Target
J = imread(Training_target+"/temp/"+random_choice)
#Prediction
K = imread(model_path+"/temp/"+random_choice+"_predicted.tif")
#Make a plot
f=plt.figure(figsize=(24,12))
plt.subplot(1,3,1)
plt.imshow(I, interpolation='nearest')
plt.title('Source')
plt.axis('off');
plt.subplot(1,3,2)
plt.imshow(J, interpolation='nearest')
plt.title('Target')
plt.axis('off');
plt.subplot(1,3,3)
plt.imshow(K, interpolation='nearest')
plt.title('Prediction')
plt.axis('off');
#Move the temporary files back to their original folders
shutil.move(Training_source+'/temp/'+random_choice, Training_source+"/"+random_choice)
shutil.move(Training_target+'/temp/'+random_choice, Training_target+"/"+random_choice)
#Delete the temporary folder
shutil.rmtree(Training_target+'/temp')
shutil.rmtree(Training_source+'/temp')
#protection against removing data
Cell_executed = 0
# Displaying the time elapsed for training
dt = time.time() - start
min, sec = divmod(dt, 60)
hour, min = divmod(min, 60)
print("Time elapsed:",hour, "hour(s)",min,"min(s)",round(sec),"sec(s)")
###Output
_____no_output_____
###Markdown
**4.3. Evaluate the training**---It is good practice to evaluate the training progress by comparing the training loss with the validation loss. The latter is a metric which shows how well the network performs on a subset of unseen data which is set aside from the training dataset. For more information on this, see for example [this review](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6381354/) by Nichols *et al.***Loss** (loss) describes an error value after each epoch for the difference between the model's prediction and its ground-truth ('GT') target.**Validation loss** (val_loss) describes the same error value between the model's prediction on a validation image (taken from 'low') compared to its target (from 'GT').During training both values should decrease before reaching a minimal value which does not decrease further even after more training. Comparing the development of the validation loss with the training loss can give insights into the model's performance.Decreasing **loss** and **validation loss** indicates that training is still necessary and increasing the `number_of_epochs` is recommended. Note that the curves can look flat towards the right side, just because of the y-axis scaling. The network has reached convergence once the curves flatten out. After this point no further training is required. If the **validation loss** suddenly increases again and the **loss** simultaneously goes towards zero, it means that the network is overfitting to the training data. In other words, the network is remembering the exact patterns from the training data and no longer generalizes well to unseen data. In this case the training dataset has to be increased.
###Code
#@markdown ##Play the cell to show a plot of training errors vs. epoch number
# Create figure framesize
errorfigure = plt.figure(figsize=(16,5))
# Choose the values you wish to compare.
# For example, If you wish to see another values, just replace 'loss' to 'dist_loss'
plot_history(history,['loss','val_loss']);
errorfigure.savefig(model_path+'/training evaluation.tif')
# convert the history.history dict to a pandas DataFrame:
hist_df = pd.DataFrame(history.history)
# The training history is saved into the model folder as 'training evaluation.csv' (refresh the Files tab if needed).
RESULTS = model_path+'/training evaluation.csv'
with open(RESULTS, 'w') as f:
for key in hist_df.keys():
f.write("%s,%s\n"%(key,hist_df[key]))
###Output
_____no_output_____
###Markdown
**4.4. Export model to be used with *CSBDeep Fiji plugins* and *KNIME* workflows (Experimental !!!)**---This allows you to save the trained model in a format where it can be used in the CSBDeep Fiji Plugin. See https://github.com/CSBDeep/CSBDeep_website/wiki/Your-Model-in-Fiji for details.After saving the model to your drive, download the .zip file from your google drive. Do this from your Google Drive and not in the colab interface as this takes very long.
###Code
#@markdown ##Play this cell to save a Fiji-compatible model to Google Drive.
# exports the trained model to Fiji.
# The code is from (https://github.com/CSBDeep/CSBDeep).
model_training.export_TF()
###Output
_____no_output_____
###Markdown
**4.5. Download your model(s) from Google Drive**---The model and its parameters have been saved to your **model_path** on your Google Drive. It is, however, wise to download the folder, as all data can be erased at the next training if the same folder is used. **5. Use the network**---In this section the unseen data is processed using the trained model (in section 4). First, your unseen images are uploaded and prepared for prediction. After that, your trained model from section 4 is loaded and the resulting predictions are saved into your Google Drive. **5.1. Generate prediction from test dataset**---The current trained model (from section 4.2) can now be used to process images. If you want to use an older model, untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. Predicted output images are saved in your **Result_folder** folder as restored image stacks (ImageJ-compatible TIFF images).**`Test_data_folder`:** This folder should contain the images that you want to process with your trained network.**`Result_folder`:** This folder will contain the predicted output images.
###Code
#Activate the pretrained model.
#model_training = CARE(config=None, name=model_name, basedir=model_path)
#@markdown ### Provide the path to your dataset and to the folder where the predictions are saved, then play the cell to predict outputs from your unseen images.
Test_data_folder = "" #@param {type:"string"}
Result_folder = "" #@param {type:"string"}
# model name and path
#@markdown ###Do you want to use the current trained model?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, provide the name of the model and path to model folder:
#@markdown #####During training, the model files are automatically saved inside a folder named after the parameter 'model_name' (see section 3). Provide the name of this folder as 'inference_model_name' and the path to its parent folder in 'inference_model_path'.
inference_model_name = "" #@param {type:"string"}
inference_model_path = "" #@param {type:"string"}
if (Use_the_current_trained_model):
print("Using current trained network")
inference_model_name = model_name
inference_model_path = model_path
#training_path = model_path+"/"
#Activate the pretrained model.
model_training = CARE(config=None, name=inference_model_name, basedir=inference_model_path)
# creates a loop, creating filenames and saving them
for filename in os.listdir(Test_data_folder):
img = imread(os.path.join(Test_data_folder,filename))
restored = model_training.predict(img, axes='YX')
os.chdir(Result_folder)
imsave(filename,restored)
print("Images saved into folder:", Result_folder)
###Output
_____no_output_____
###Markdown
**5.2. Assess predicted output**---
###Code
# @markdown ##Run this cell to display a randomly chosen input and its corresponding predicted output.
# This will display a randomly chosen dataset input and predicted output
random_choice = random.choice(os.listdir(Test_data_folder))
x = imread(Test_data_folder+"/"+random_choice)
os.chdir(Result_folder)
y = imread(Result_folder+"/"+random_choice)
plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
plt.axis('off')
plt.imshow(x, interpolation='nearest')
plt.title('Input')
plt.subplot(1,2,2)
plt.axis('off')
plt.imshow(y, interpolation='nearest')
plt.title('Predicted output');
###Output
_____no_output_____
###Markdown
**CARE: Content-aware image restoration (2D)**---CARE is a neural network capable of image restoration from corrupted bio-images, first published in 2018 by [Weigert *et al.* in Nature Methods](https://www.nature.com/articles/s41592-018-0216-7). The CARE network uses a U-Net network architecture and allows image restoration and resolution improvement in 2D and 3D images, in a supervised manner, using noisy images as input and low-noise images as targets for training. The function of the network is essentially determined by the set of images provided in the training dataset. For instance, if noisy images are provided as input and high signal-to-noise ratio images are provided as targets, the network will perform denoising. **This particular notebook enables restoration of 2D datasets. If you are interested in restoring a 3D dataset, you should use the CARE 3D notebook instead.**---*Disclaimer*:This notebook is part of the *Zero-Cost Deep-Learning to Enhance Microscopy* project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki). Jointly developed by the Jacquemet (link to https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories.This notebook is based on the following paper: **Content-aware image restoration: pushing the limits of fluorescence microscopy**, by Weigert *et al.* published in Nature Methods in 2018 (https://www.nature.com/articles/s41592-018-0216-7)And source code found in: https://github.com/csbdeep/csbdeepFor a more in-depth description of the features of the network, please refer to [this guide](http://csbdeep.bioimagecomputing.com/doc/) provided by the original authors of the work.We provide a dataset for the training of this notebook as a way to test its functionalities but the training and test data of the restoration experiments is also available from the authors of the original paper [here](https://publications.mpi-cbg.de/publications-sites/7207/).**Please also cite this original paper when using or developing this notebook.** **How to use this notebook?**---Video describing how to use our notebooks are available on youtube: - [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run through of the workflow to obtain the notebooks and the provided test datasets as well as a common use of the notebook - [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook---**Structure of a notebook**The notebook contains two types of cell: **Text cells** provide information and can be modified by douple-clicking the cell. You are currently reading the text cell. You can create a new text by clicking `+ Text`.**Code cells** contain code and the code can be modfied by selecting the cell. To execute the cell, move your cursor on the `[ ]`-mark on the left side of the cell (play button appears). Click to execute the cell. After execution is done the animation of play button stops. You can create a new coding cell by clicking `+ Code`.---**Table of contents, Code snippets** and **Files**On the top left side of the notebook you find three tabs which contain from top to bottom:*Table of contents* = contains structure of the notebook. Click the content to move quickly between sections.*Code snippets* = contain examples how to code certain tasks. You can ignore this when using this notebook.*Files* = contain all available files. After mounting your google drive (see section 1.) you will find your files and folders here. 
**Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive-button; your Google Drive is connected in section 1.2.**Note:** The "sample data" in "Files" contains default files. Do not upload anything in here!---**Making changes to the notebook****You can make a copy** of the notebook and save it to your Google Drive. To do this click file -> save a copy in drive.To **edit a cell**, double click on the text. This will show you either the source code (in code cells) or the source text (in text cells).You can use the ``-mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment. **0. Before getting started**--- For CARE to train, **it needs to have access to a paired training dataset**. This means that the same image needs to be acquired in the two conditions (for instance, low signal-to-noise ratio and high signal-to-noise ratio) and provided with indication of correspondence. Therefore, the data structure is important. It is necessary that all the input data are in the same folder and that all the output data is in a separate folder. The provided training dataset is already split in two folders called "Training - Low SNR images" (Training_source) and "Training - high SNR images" (Training_target). Information on how to generate a training dataset is available in our Wiki page: https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki**We strongly recommend that you generate extra paired images. These images can be used to assess the quality of your trained model (Quality control dataset)**. The quality control assessment can be done directly in this notebook. **Additionally, the corresponding input and output files need to have the same name**. Please note that you currently can **only use .tif files!**Here's a common data structure that can work:* Experiment A - **Training dataset** - Low SNR images (Training_source) - img_1.tif, img_2.tif, ... - High SNR images (Training_target) - img_1.tif, img_2.tif, ... - **Quality control dataset** - Low SNR images - img_1.tif, img_2.tif - High SNR images - img_1.tif, img_2.tif - **Data to be predicted** - **Results**---**Important note**- If you wish to **Train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to assess the quality of your model and **section 6** to run predictions using the model that you trained.- If you wish to **Evaluate your model** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to assess the quality of your model.- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 6** to run the predictions on the desired model.--- **1. Install CARE and dependencies**--- **1.1. Install key dependencies**---
###Code
#@markdown ##Install CARE and dependencies
#Here, we install libraries which are not already included in Colab.
!pip install tifffile # contains tools to operate tiff-files
!pip install csbdeep # contains tools for restoration of fluorescence microcopy images (Content-aware Image Restoration, CARE). It uses Keras and Tensorflow.
!pip install wget
!pip install memory_profiler
!pip install fpdf
#Force session restart
exit(0)
###Output
_____no_output_____
###Markdown
**1.2. Restart your runtime**---** Ignore the following error message. Your Runtime has automatically restarted. This is normal.** **1.3. Load key dependencies**---
###Code
#@markdown ##Load key dependencies
Notebook_version = '1.13'
Network = 'CARE (2D)'
from builtins import any as b_any
def get_requirements_path():
# Store requirements file in 'contents' directory
current_dir = os.getcwd()
dir_count = current_dir.count('/') - 1
path = '../' * (dir_count) + 'requirements.txt'
return path
def filter_files(file_list, filter_list):
filtered_list = []
for fname in file_list:
if b_any(fname.split('==')[0] in s for s in filter_list):
filtered_list.append(fname)
return filtered_list
def build_requirements_file(before, after):
path = get_requirements_path()
# Exporting requirements.txt for local run
!pip freeze > $path
# Get minimum requirements file
df = pd.read_csv(path, delimiter = "\n")
mod_list = [m.split('.')[0] for m in after if not m in before]
req_list_temp = df.values.tolist()
req_list = [x[0] for x in req_list_temp]
# Replace with package name and handle cases where import name is different to module name
mod_name_list = [['sklearn', 'scikit-learn'], ['skimage', 'scikit-image']]
mod_replace_list = [[x[1] for x in mod_name_list] if s in [x[0] for x in mod_name_list] else s for s in mod_list]
filtered_list = filter_files(req_list, mod_replace_list)
file=open(path,'w')
for item in filtered_list:
file.writelines(item + '\n')
file.close()
import sys
before = [str(m) for m in sys.modules]
%load_ext memory_profiler
#Here, we import and enable Tensorflow 1 instead of Tensorflow 2.
%tensorflow_version 1.x
import tensorflow
import tensorflow as tf
print(tensorflow.__version__)
print("Tensorflow enabled.")
# ------- Variable specific to CARE -------
from csbdeep.utils import download_and_extract_zip_file, plot_some, axes_dict, plot_history, Path, download_and_extract_zip_file
from csbdeep.data import RawData, create_patches
from csbdeep.io import load_training_data, save_tiff_imagej_compatible
from csbdeep.models import Config, CARE
from csbdeep import data
from __future__ import print_function, unicode_literals, absolute_import, division
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
# ------- Common variable to all ZeroCostDL4Mic notebooks -------
import numpy as np
from matplotlib import pyplot as plt
import urllib
import os, random
import shutil
import zipfile
from tifffile import imread, imsave
import time
import sys
import wget
from pathlib import Path
import pandas as pd
import csv
from glob import glob
from scipy import signal
from scipy import ndimage
from skimage import io
from sklearn.linear_model import LinearRegression
from skimage.util import img_as_uint
import matplotlib as mpl
from skimage.metrics import structural_similarity
from skimage.metrics import peak_signal_noise_ratio as psnr
from astropy.visualization import simple_norm
from skimage import img_as_float32
from skimage.util import img_as_ubyte
from tqdm import tqdm
from fpdf import FPDF, HTMLMixin
from datetime import datetime
import subprocess
from pip._internal.operations.freeze import freeze
# Colors for the warning messages
class bcolors:
WARNING = '\033[31m'
W = '\033[0m' # white (normal)
R = '\033[31m' # red
#Disable some of the tensorflow warnings
import warnings
warnings.filterwarnings("ignore")
print("Libraries installed")
# Check if this is the latest version of the notebook
All_notebook_versions = pd.read_csv("https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_Notebook_versions.csv", dtype=str)
print('Notebook version: '+Notebook_version)
Latest_Notebook_version = All_notebook_versions[All_notebook_versions["Notebook"] == Network]['Version'].iloc[0]
print('Latest notebook version: '+Latest_Notebook_version)
if Notebook_version == Latest_Notebook_version:
print("This notebook is up-to-date.")
else:
print(bcolors.WARNING +"A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki")
!pip freeze > requirements.txt
#Create a pdf document with training summary
def pdf_export(trained = False, augmentation = False, pretrained_model = False):
# save FPDF() class into a
# variable pdf
#from datetime import datetime
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
Network = 'CARE 2D'
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Training report for '+Network+' model ('+model_name+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
# add another cell
if trained:
training_time = "Training time: "+str(hour)+ "hour(s) "+str(mins)+"min(s) "+str(round(sec))+"sec(s)"
pdf.cell(190, 5, txt = training_time, ln = 1, align='L')
pdf.ln(1)
Header_2 = 'Information for your materials and methods:'
pdf.cell(190, 5, txt=Header_2, ln=1, align='L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
#print(all_packages)
#Main Packages
main_packages = ''
version_numbers = []
for name in ['tensorflow','numpy','Keras','csbdeep']:
find_name=all_packages.find(name)
main_packages = main_packages+all_packages[find_name:all_packages.find(',',find_name)]+', '
#Version numbers only here:
version_numbers.append(all_packages[find_name+len(name)+2:all_packages.find(',',find_name)])
cuda_version = subprocess.run('nvcc --version',stdout=subprocess.PIPE, shell=True)
cuda_version = cuda_version.stdout.decode('utf-8')
cuda_version = cuda_version[cuda_version.find(', V')+3:-1]
gpu_name = subprocess.run('nvidia-smi',stdout=subprocess.PIPE, shell=True)
gpu_name = gpu_name.stdout.decode('utf-8')
gpu_name = gpu_name[gpu_name.find('Tesla'):gpu_name.find('Tesla')+10]
#print(cuda_version[cuda_version.find(', V')+3:-1])
#print(gpu_name)
shape = io.imread(Training_source+'/'+os.listdir(Training_source)[1]).shape
dataset_size = len(os.listdir(Training_source))
text = 'The '+Network+' model was trained from scratch for '+str(number_of_epochs)+' epochs on '+str(dataset_size*number_of_patches)+' paired image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_size)+','+str(patch_size)+')) with a batch size of '+str(batch_size)+' and a '+config.train_loss+' loss function, using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+'GPU.'
if pretrained_model:
text = 'The '+Network+' model was trained for '+str(number_of_epochs)+' epochs on '+str(dataset_size*number_of_patches)+' paired image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_size)+','+str(patch_size)+')) with a batch size of '+str(batch_size)+' and a '+config.train_loss+' loss function, using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). The model was re-trained from a pretrained model. Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+'GPU.'
pdf.set_font('')
pdf.set_font_size(10.)
pdf.multi_cell(190, 5, txt = text, align='L')
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.ln(1)
pdf.cell(28, 5, txt='Augmentation: ', ln=0)
pdf.set_font('')
if augmentation:
aug_text = 'The dataset was augmented by a factor of '+str(Multiply_dataset_by)+' by'
if rotate_270_degrees != 0 or rotate_90_degrees != 0:
aug_text = aug_text+'\n- rotation'
if flip_left_right != 0 or flip_top_bottom != 0:
aug_text = aug_text+'\n- flipping'
if random_zoom_magnification != 0:
aug_text = aug_text+'\n- random zoom magnification'
if random_distortion != 0:
aug_text = aug_text+'\n- random distortion'
if image_shear != 0:
aug_text = aug_text+'\n- image shearing'
else:
aug_text = 'No augmentation was used for training.'
pdf.multi_cell(190, 5, txt=aug_text, align='L')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Parameters', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
if Use_Default_Advanced_Parameters:
pdf.cell(200, 5, txt='Default Advanced Parameters were enabled')
pdf.cell(200, 5, txt='The following parameters were used for training:')
pdf.ln(1)
html = """
<table width=40% style="margin-left:0px;">
<tr>
<th width = 50% align="left">Parameter</th>
<th width = 50% align="left">Value</th>
</tr>
<tr>
<td width = 50%>number_of_epochs</td>
<td width = 50%>{0}</td>
</tr>
<tr>
<td width = 50%>patch_size</td>
<td width = 50%>{1}</td>
</tr>
<tr>
<td width = 50%>number_of_patches</td>
<td width = 50%>{2}</td>
</tr>
<tr>
<td width = 50%>batch_size</td>
<td width = 50%>{3}</td>
</tr>
<tr>
<td width = 50%>number_of_steps</td>
<td width = 50%>{4}</td>
</tr>
<tr>
<td width = 50%>percentage_validation</td>
<td width = 50%>{5}</td>
</tr>
<tr>
<td width = 50%>initial_learning_rate</td>
<td width = 50%>{6}</td>
</tr>
</table>
""".format(number_of_epochs,str(patch_size)+'x'+str(patch_size),number_of_patches,batch_size,number_of_steps,percentage_validation,initial_learning_rate)
pdf.write_html(html)
#pdf.multi_cell(190, 5, txt = text_2, align='L')
pdf.set_font("Arial", size = 11, style='B')
pdf.ln(1)
pdf.cell(190, 5, txt = 'Training Dataset', align='L', ln=1)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(29, 5, txt= 'Training_source:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = Training_source, align = 'L')
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(27, 5, txt= 'Training_target:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = Training_target, align = 'L')
#pdf.cell(190, 5, txt=aug_text, align='L', ln=1)
pdf.ln(1)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(22, 5, txt= 'Model Path:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = model_path+'/'+model_name, align = 'L')
pdf.ln(1)
pdf.cell(60, 5, txt = 'Example Training pair', ln=1)
pdf.ln(1)
exp_size = io.imread('/content/TrainingDataExample_CARE2D.png').shape
pdf.image('/content/TrainingDataExample_CARE2D.png', x = 11, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))
pdf.ln(1)
ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "Democratising deep learning for microscopy with ZeroCostDL4Mic." Nature Communications (2021).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- CARE: Weigert, Martin, et al. "Content-aware image restoration: pushing the limits of fluorescence microscopy." Nature methods 15.12 (2018): 1090-1097.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
if augmentation:
ref_3 = '- Augmentor: Bloice, Marcus D., Christof Stocker, and Andreas Holzinger. "Augmentor: an image augmentation library for machine learning." arXiv preprint arXiv:1708.04680 (2017).'
pdf.multi_cell(190, 5, txt = ref_3, align='L')
pdf.ln(3)
reminder = 'Important:\nRemember to perform the quality control step on all newly trained models\nPlease consider depositing your training dataset on Zenodo'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(model_path+'/'+model_name+'/'+model_name+"_training_report.pdf")
#Make a pdf summary of the QC results
def qc_pdf_export():
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
Network = 'CARE 2D'
#model_name = os.path.basename(full_QC_model_path)
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Quality Control report for '+Network+' model ('+QC_model_name+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(2)
pdf.cell(190, 5, txt = 'Development of Training Losses', ln=1, align='L')
pdf.ln(1)
exp_size = io.imread(full_QC_model_path+'Quality Control/QC_example_data.png').shape
if os.path.exists(full_QC_model_path+'Quality Control/lossCurvePlots.png'):
pdf.image(full_QC_model_path+'Quality Control/lossCurvePlots.png', x = 11, y = None, w = round(exp_size[1]/10), h = round(exp_size[0]/13))
else:
pdf.set_font('')
pdf.set_font('Arial', size=10)
pdf.multi_cell(190, 5, txt='If you would like to see the evolution of the loss function during training please play the first cell of the QC section in the notebook.', align='L')
pdf.ln(2)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.ln(3)
pdf.cell(80, 5, txt = 'Example Quality Control Visualisation', ln=1)
pdf.ln(1)
exp_size = io.imread(full_QC_model_path+'Quality Control/QC_example_data.png').shape
pdf.image(full_QC_model_path+'Quality Control/QC_example_data.png', x = 16, y = None, w = round(exp_size[1]/10), h = round(exp_size[0]/10))
pdf.ln(1)
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Quality Control Metrics', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
pdf.ln(1)
html = """
<body>
<font size="7" face="Courier New" >
<table width=94% style="margin-left:0px;">"""
with open(full_QC_model_path+'Quality Control/QC_metrics_'+QC_model_name+'.csv', 'r') as csvfile:
metrics = csv.reader(csvfile)
header = next(metrics)
image = header[0]
mSSIM_PvsGT = header[1]
mSSIM_SvsGT = header[2]
NRMSE_PvsGT = header[3]
NRMSE_SvsGT = header[4]
PSNR_PvsGT = header[5]
PSNR_SvsGT = header[6]
header = """
<tr>
<th width = 10% align="left">{0}</th>
<th width = 15% align="left">{1}</th>
<th width = 15% align="center">{2}</th>
<th width = 15% align="left">{3}</th>
<th width = 15% align="center">{4}</th>
<th width = 15% align="left">{5}</th>
<th width = 15% align="center">{6}</th>
</tr>""".format(image,mSSIM_PvsGT,mSSIM_SvsGT,NRMSE_PvsGT,NRMSE_SvsGT,PSNR_PvsGT,PSNR_SvsGT)
html = html+header
for row in metrics:
image = row[0]
mSSIM_PvsGT = row[1]
mSSIM_SvsGT = row[2]
NRMSE_PvsGT = row[3]
NRMSE_SvsGT = row[4]
PSNR_PvsGT = row[5]
PSNR_SvsGT = row[6]
cells = """
<tr>
<td width = 10% align="left">{0}</td>
<td width = 15% align="center">{1}</td>
<td width = 15% align="center">{2}</td>
<td width = 15% align="center">{3}</td>
<td width = 15% align="center">{4}</td>
<td width = 15% align="center">{5}</td>
<td width = 15% align="center">{6}</td>
</tr>""".format(image,str(round(float(mSSIM_PvsGT),3)),str(round(float(mSSIM_SvsGT),3)),str(round(float(NRMSE_PvsGT),3)),str(round(float(NRMSE_SvsGT),3)),str(round(float(PSNR_PvsGT),3)),str(round(float(PSNR_SvsGT),3)))
html = html+cells
html = html+"""</body></table>"""
pdf.write_html(html)
pdf.ln(1)
pdf.set_font('')
pdf.set_font_size(10.)
ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "Democratising deep learning for microscopy with ZeroCostDL4Mic." Nature Communications (2021).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- CARE: Weigert, Martin, et al. "Content-aware image restoration: pushing the limits of fluorescence microscopy." Nature methods 15.12 (2018): 1090-1097.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
pdf.ln(3)
reminder = 'To find the parameters and other information about how this model was trained, go to the training_report.pdf of this model which should be in the folder of the same name.'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(full_QC_model_path+'Quality Control/'+QC_model_name+'_QC_report.pdf')
# Build requirements file for local run
after = [str(m) for m in sys.modules]
build_requirements_file(before, after)
###Output
_____no_output_____
###Markdown
**2. Initialise the Colab session**--- **2.1. Check for GPU access**---By default, the session should be using Python 3 and GPU acceleration, but it is possible to ensure that these are set properly by doing the following:Go to **Runtime -> Change the Runtime type****Runtime type: Python 3** *(Python 3 is the programming language in which this program is written)***Accelerator: GPU** *(Graphics processing unit)*
###Code
#@markdown ##Run this cell to check if you have GPU access
%tensorflow_version 1.x
import tensorflow as tf
if tf.test.gpu_device_name()=='':
print('You do not have GPU access.')
print('Did you change your runtime ?')
print('If the runtime setting is correct then Google did not allocate a GPU for your session')
print('Expect slow performance. To access GPU try reconnecting later')
else:
print('You have GPU access')
!nvidia-smi
###Output
_____no_output_____
###Markdown
**2.2. Mount your Google Drive**--- To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook. Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive and select 'Allow', copy the code, paste it into the cell and press Enter. This will give Colab access to the data on the drive. Once this is done, your data are available in the **Files** tab on the top left of the notebook.
###Code
#@markdown ##Run this cell to connect your Google Drive to Colab
#@markdown * Click on the URL.
#@markdown * Sign in to your Google Account.
#@markdown * Copy the authorization code.
#@markdown * Enter the authorization code.
#@markdown * Click on "Files" site on the right. Refresh the site. Your Google Drive folder should now be available here as "drive".
#mounts user's Google Drive to Google Colab.
from google.colab import drive
drive.mount('/content/gdrive')
###Output
_____no_output_____
###Markdown
** If you cannot see your files, reactivate your session by connecting to your hosted runtime.** **3. Select your parameters and paths**--- **3.1. Setting main training parameters**--- **Paths for training, predictions and results****`Training_source`, `Training_target`:** These are the paths to your folders containing the Training_source (Low SNR images) and Training_target (High SNR images or ground truth) training data respectively. To find the paths of the folders containing the respective datasets, go to your Files on the left of the notebook, navigate to the folder containing your files and copy the path by right-clicking on the folder, **Copy path** and pasting it into the right box below.**`model_name`:** Use a my_model-style name, not my-model (use "_", not "-"). Do not use spaces in the name. Avoid using the name of an existing model (saved in the same folder) as it will be overwritten.**`model_path`**: Enter the path where your model will be saved once trained (for instance your result folder).**Training Parameters****`number_of_epochs`:** Input how many epochs (rounds) the network will be trained for. Preliminary results can already be observed after a few (10-30) epochs, but a full training should run for 100-300 epochs. Evaluate the performance after training (see section 5). **Default value: 50****`patch_size`:** CARE divides the image into patches for training. Input the size of the patches (length of a side). The value should be smaller than the dimensions of the image and divisible by 8. **Default value: 128****When choosing the patch_size, the value should be i) large enough that it will enclose many instances, ii) small enough that the resulting patches fit into the RAM.** **`number_of_patches`:** Input the number of patches per image. Increasing the number of patches allows for larger training datasets. **Default value: 50** **Decreasing the patch size or increasing the number of patches may improve the training but may also increase the training time.****Advanced Parameters - experienced users only****`batch_size`:** This parameter defines the number of patches seen in each training step. Reducing or increasing the **batch size** may slow or speed up your training, respectively, and can influence network performance. **Default value: 16****`number_of_steps`:** Define the number of training steps per epoch. By default, or if set to zero, this parameter is calculated so that each patch is seen at least once per epoch. **Default value: Number of patches / batch_size****`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during training. **Default value: 10** **`initial_learning_rate`:** Input the initial value to be used as learning rate. **Default value: 0.0004**
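Because CARE pairs the images by filename, it can save time to confirm that every source image has an identically named target before launching the training. A minimal sketch (the folder paths below are placeholders; replace them with the Training_source and Training_target paths you enter in the cell below):

```python
import os

# Placeholder paths - replace with your own Training_source / Training_target folders
source_dir = "/content/gdrive/MyDrive/Training_source"
target_dir = "/content/gdrive/MyDrive/Training_target"

source_files = {f for f in os.listdir(source_dir) if f.lower().endswith(".tif")}
target_files = {f for f in os.listdir(target_dir) if f.lower().endswith(".tif")}

# Any filename printed here has no partner in the other folder and would break the pairing
print("Source images without a matching target:", sorted(source_files - target_files))
print("Target images without a matching source:", sorted(target_files - source_files))
```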
###Code
#@markdown ###Path to training images:
Training_source = "" #@param {type:"string"}
InputFile = Training_source+"/*.tif"
Training_target = "" #@param {type:"string"}
OutputFile = Training_target+"/*.tif"
#Define where the patch file will be saved
base = "/content"
# model name and path
#@markdown ###Name of the model and path to model folder:
model_name = "" #@param {type:"string"}
model_path = "" #@param {type:"string"}
# other parameters for training.
#@markdown ###Training Parameters
#@markdown Number of epochs:
number_of_epochs = 50#@param {type:"number"}
#@markdown Patch size (pixels) and number
patch_size = 128#@param {type:"number"} # in pixels
number_of_patches = 50#@param {type:"number"}
#@markdown ###Advanced Parameters
Use_Default_Advanced_Parameters = True #@param {type:"boolean"}
#@markdown ###If not, please input:
batch_size = 16#@param {type:"number"}
number_of_steps = 0#@param {type:"number"}
percentage_validation = 10 #@param {type:"number"}
initial_learning_rate = 0.0004 #@param {type:"number"}
if (Use_Default_Advanced_Parameters):
print("Default advanced parameters enabled")
batch_size = 16
percentage_validation = 10
initial_learning_rate = 0.0004
#Here we define the percentage to use for validation
percentage = percentage_validation/100
#here we check that no model with the same name already exist, if so print a warning
if os.path.exists(model_path+'/'+model_name):
print(bcolors.WARNING +"!! WARNING: "+model_name+" already exists and will be deleted in the following cell !!")
print(bcolors.WARNING +"To continue training "+model_name+", choose a new model_name here, and load "+model_name+" in section 3.3"+W)
# Here we disable pre-trained model by default (in case the cell is not ran)
Use_pretrained_model = False
# Here we disable data augmentation by default (in case the cell is not ran)
Use_Data_augmentation = False
print("Parameters initiated.")
# This will display a randomly chosen dataset input and output
random_choice = random.choice(os.listdir(Training_source))
x = imread(Training_source+"/"+random_choice)
# Here we check that the input images contains the expected dimensions
if len(x.shape) == 2:
print("Image dimensions (y,x)",x.shape)
if not len(x.shape) == 2:
print(bcolors.WARNING +"Your images appear to have the wrong dimensions. Image dimension",x.shape)
#Find image XY dimension
Image_Y = x.shape[0]
Image_X = x.shape[1]
#Hyperparameters failsafes
# Here we check that patch_size is smaller than the smallest xy dimension of the image
if patch_size > min(Image_Y, Image_X):
patch_size = min(Image_Y, Image_X)
print (bcolors.WARNING + " Your chosen patch_size is bigger than the xy dimension of your image; therefore the patch_size chosen is now:",patch_size)
# Here we check that patch_size is divisible by 8
if not patch_size % 8 == 0:
patch_size = ((int(patch_size / 8)-1) * 8)
print (bcolors.WARNING + " Your chosen patch_size is not divisible by 8; therefore the patch_size chosen is now:",patch_size)
os.chdir(Training_target)
y = imread(Training_target+"/"+random_choice)
f=plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
plt.imshow(x, norm=simple_norm(x, percent = 99), interpolation='nearest')
plt.title('Training source')
plt.axis('off');
plt.subplot(1,2,2)
plt.imshow(y, norm=simple_norm(y, percent = 99), interpolation='nearest')
plt.title('Training target')
plt.axis('off');
plt.savefig('/content/TrainingDataExample_CARE2D.png',bbox_inches='tight',pad_inches=0)
###Output
_____no_output_____
###Markdown
**3.2. Data augmentation**--- Data augmentation can improve training progress by amplifying differences in the dataset. This can be useful if the available dataset is small since, in this case, it is possible that a network could quickly learn every example in the dataset (overfitting), without augmentation. Augmentation is not necessary for training, and if your training dataset is large you should disable it. **However, data augmentation is not a magic solution and may also introduce issues. Therefore, we recommend that you train your network with and without augmentation, and use the QC section to validate that it improves overall performance.** Data augmentation is performed here by [Augmentor](https://github.com/mdbloice/Augmentor).[Augmentor](https://github.com/mdbloice/Augmentor) was described in the following article:Marcus D Bloice, Peter M Roth, Andreas Holzinger, Biomedical image augmentation using Augmentor, Bioinformatics, https://doi.org/10.1093/bioinformatics/btz259**Please also cite this original paper when publishing results obtained using this notebook with augmentation enabled.**
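The cell below builds the full augmentation pipeline from the options you choose; as a minimal illustration of how Augmentor keeps the source/target pairs synchronised (the folder names here are placeholders, not the variables used by the cell):

```python
import Augmentor

# Placeholder folders - the cell below uses the Training_source / Training_target paths from section 3.1
p = Augmentor.Pipeline("Training_source_folder", "Augmented_Folder")
p.ground_truth("Training_target_folder")  # the same random transform is applied to each source/target pair
p.rotate90(probability=0.5)               # 50% chance of a 90-degree rotation
p.flip_left_right(probability=0.5)        # 50% chance of a horizontal flip
p.sample(100)                             # generate 100 augmented image pairs
```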
###Code
#Data augmentation
Use_Data_augmentation = False #@param {type:"boolean"}
if Use_Data_augmentation:
!pip install Augmentor
import Augmentor
#@markdown ####Choose a factor by which you want to multiply your original dataset
Multiply_dataset_by = 5 #@param {type:"slider", min:1, max:30, step:1}
Save_augmented_images = False #@param {type:"boolean"}
Saving_path = "" #@param {type:"string"}
Use_Default_Augmentation_Parameters = True #@param {type:"boolean"}
#@markdown ###If not, please choose the probability of the following image manipulations to be used to augment your dataset (1 = always used; 0 = disabled ):
#@markdown ####Mirror and rotate images
rotate_90_degrees = 0 #@param {type:"slider", min:0, max:1, step:0.1}
rotate_270_degrees = 0 #@param {type:"slider", min:0, max:1, step:0.1}
flip_left_right = 0 #@param {type:"slider", min:0, max:1, step:0.1}
flip_top_bottom = 0 #@param {type:"slider", min:0, max:1, step:0.1}
#@markdown ####Random image Zoom
random_zoom = 0 #@param {type:"slider", min:0, max:1, step:0.1}
random_zoom_magnification = 0 #@param {type:"slider", min:0, max:1, step:0.1}
#@markdown ####Random image distortion
random_distortion = 0 #@param {type:"slider", min:0, max:1, step:0.1}
#@markdown ####Image shearing
image_shear = 0 #@param {type:"slider", min:0, max:1, step:0.1}
max_image_shear = 1 #@param {type:"slider", min:1, max:25, step:1}
if Use_Default_Augmentation_Parameters:
rotate_90_degrees = 0.5
rotate_270_degrees = 0.5
flip_left_right = 0.5
flip_top_bottom = 0.5
if not Multiply_dataset_by >5:
random_zoom = 0
random_zoom_magnification = 0.9
random_distortion = 0
image_shear = 0
max_image_shear = 10
if Multiply_dataset_by >5:
random_zoom = 0.1
random_zoom_magnification = 0.9
random_distortion = 0.5
image_shear = 0.2
max_image_shear = 5
if Multiply_dataset_by >25:
random_zoom = 0.5
random_zoom_magnification = 0.8
random_distortion = 0.5
image_shear = 0.5
max_image_shear = 20
list_files = os.listdir(Training_source)
Nb_files = len(list_files)
Nb_augmented_files = (Nb_files * Multiply_dataset_by)
if Use_Data_augmentation:
print("Data augmentation enabled")
# Here we set the path for the various folder were the augmented images will be loaded
# All images are first saved into the augmented folder
#Augmented_folder = "/content/Augmented_Folder"
if not Save_augmented_images:
Saving_path= "/content"
Augmented_folder = Saving_path+"/Augmented_Folder"
if os.path.exists(Augmented_folder):
shutil.rmtree(Augmented_folder)
os.makedirs(Augmented_folder)
#Training_source_augmented = "/content/Training_source_augmented"
Training_source_augmented = Saving_path+"/Training_source_augmented"
if os.path.exists(Training_source_augmented):
shutil.rmtree(Training_source_augmented)
os.makedirs(Training_source_augmented)
#Training_target_augmented = "/content/Training_target_augmented"
Training_target_augmented = Saving_path+"/Training_target_augmented"
if os.path.exists(Training_target_augmented):
shutil.rmtree(Training_target_augmented)
os.makedirs(Training_target_augmented)
# Here we generate the augmented images
#Load the images
p = Augmentor.Pipeline(Training_source, Augmented_folder)
#Define the matching images
p.ground_truth(Training_target)
#Define the augmentation possibilities
if not rotate_90_degrees == 0:
p.rotate90(probability=rotate_90_degrees)
if not rotate_270_degrees == 0:
p.rotate270(probability=rotate_270_degrees)
if not flip_left_right == 0:
p.flip_left_right(probability=flip_left_right)
if not flip_top_bottom == 0:
p.flip_top_bottom(probability=flip_top_bottom)
if not random_zoom == 0:
p.zoom_random(probability=random_zoom, percentage_area=random_zoom_magnification)
if not random_distortion == 0:
p.random_distortion(probability=random_distortion, grid_width=4, grid_height=4, magnitude=8)
if not image_shear == 0:
p.shear(probability=image_shear,max_shear_left=20,max_shear_right=20)
p.sample(int(Nb_augmented_files))
print(int(Nb_augmented_files),"matching images generated")
# Here we sort through the images and move them back to the augmented training source and target folders
augmented_files = os.listdir(Augmented_folder)
for f in augmented_files:
if (f.startswith("_groundtruth_(1)_")):
shortname_noprefix = f[17:]
shutil.copyfile(Augmented_folder+"/"+f, Training_target_augmented+"/"+shortname_noprefix)
if not (f.startswith("_groundtruth_(1)_")):
shutil.copyfile(Augmented_folder+"/"+f, Training_source_augmented+"/"+f)
for filename in os.listdir(Training_source_augmented):
os.chdir(Training_source_augmented)
os.rename(filename, filename.replace('_original', ''))
#Here we clean up the extra files
shutil.rmtree(Augmented_folder)
if not Use_Data_augmentation:
print(bcolors.WARNING+"Data augmentation disabled")
###Output
_____no_output_____
###Markdown
**3.3. Using weights from a pre-trained model as initial weights**--- Here, you can set the path to a pre-trained model from which the weights can be extracted and used as a starting point for this training session. **This pre-trained model needs to be a CARE 2D model**. This option allows you to perform training over multiple Colab runtimes or to do transfer learning using models trained outside of ZeroCostDL4Mic. **You do not need to run this section if you want to train a network from scratch**. In order to continue training from the point where the pre-trained model left off, it is advisable to also **load the learning rate** that was used when the training ended. This is automatically saved for models trained with ZeroCostDL4Mic and will be loaded here. If no learning rate can be found in the model folder provided, the default learning rate will be used.
###Code
# @markdown ##Loading weights from a pre-trained network
Use_pretrained_model = False #@param {type:"boolean"}
pretrained_model_choice = "Model_from_file" #@param ["Model_from_file"]
Weights_choice = "best" #@param ["last", "best"]
#@markdown ###If you chose "Model_from_file", please provide the path to the model folder:
pretrained_model_path = "" #@param {type:"string"}
# --------------------- Check if we load a previously trained model ------------------------
if Use_pretrained_model:
# --------------------- Load the model from the choosen path ------------------------
if pretrained_model_choice == "Model_from_file":
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".h5")
# --------------------- Download a model provided in the XXX ------------------------
if pretrained_model_choice == "Model_name":
pretrained_model_name = "Model_name"
pretrained_model_path = "/content/"+pretrained_model_name
print("Downloading the 2D_Demo_Model_from_Stardist_2D_paper")
if os.path.exists(pretrained_model_path):
shutil.rmtree(pretrained_model_path)
os.makedirs(pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".h5")
# --------------------- Add additional pre-trained models here ------------------------
# --------------------- Check the model exist ------------------------
# If the chosen model path does not contain a pretrained model, Use_pretrained_model is disabled,
if not os.path.exists(h5_file_path):
print(bcolors.WARNING+'WARNING: weights_'+Weights_choice+'.h5 pretrained model does not exist')
Use_pretrained_model = False
# If the model path contains a pretrained model, we load the learning rate,
if os.path.exists(h5_file_path):
#Here we check if the learning rate can be loaded from the quality control folder
if os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
with open(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv'),'r') as csvfile:
csvRead = pd.read_csv(csvfile, sep=',')
#print(csvRead)
if "learning rate" in csvRead.columns: #Here we check that the learning rate column exist (compatibility with model trained un ZeroCostDL4Mic bellow 1.4)
print("pretrained network learning rate found")
#find the last learning rate
lastLearningRate = csvRead["learning rate"].iloc[-1]
#Find the learning rate corresponding to the lowest validation loss
min_val_loss = csvRead[csvRead['val_loss'] == min(csvRead['val_loss'])]
#print(min_val_loss)
bestLearningRate = min_val_loss['learning rate'].iloc[-1]
if Weights_choice == "last":
print('Last learning rate: '+str(lastLearningRate))
if Weights_choice == "best":
print('Learning rate of best validation loss: '+str(bestLearningRate))
if not "learning rate" in csvRead.columns: #if the column does not exist, then initial learning rate is used instead
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(bestLearningRate)+' will be used instead')
#Compatibility with models trained outside ZeroCostDL4Mic but default learning rate will be used
if not os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(initial_learning_rate)+' will be used instead')
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
# Display info about the pretrained model to be loaded (or not)
if Use_pretrained_model:
print('Weights found in:')
print(h5_file_path)
print('will be loaded prior to training.')
else:
print(bcolors.WARNING+'No pretrained network will be used.')
###Output
_____no_output_____
###Markdown
**4. Train the network**--- **4.1. Prepare the training data and model for training**---Here, we use the information from section 3 to build the model and convert the training data into a suitable format for training.
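The cell below derives the default `number_of_steps` from the number of training patches that remain after the validation split. As a made-up numerical example of that rule:

```python
# Illustrative numbers only - replace with the size of your own dataset
n_images              = 40    # images in Training_source
number_of_patches     = 50    # patches extracted per image
batch_size            = 16
percentage_validation = 10

total_patches    = n_images * number_of_patches                             # 2000 patches
training_patches = int(total_patches * (1 - percentage_validation / 100))   # 1800 after the validation split
number_of_steps  = training_patches // batch_size + 1                       # 113 steps per epoch
print(training_patches, number_of_steps)
```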
###Code
#@markdown ##Create the model and dataset objects
# --------------------- Here we delete the model folder if it already exist ------------------------
if os.path.exists(model_path+'/'+model_name):
print(bcolors.WARNING +"!! WARNING: Model folder already exists and has been removed !!"+W)
shutil.rmtree(model_path+'/'+model_name)
# --------------------- Here we load the augmented data or the raw data ------------------------
if Use_Data_augmentation:
Training_source_dir = Training_source_augmented
Training_target_dir = Training_target_augmented
if not Use_Data_augmentation:
Training_source_dir = Training_source
Training_target_dir = Training_target
# --------------------- ------------------------------------------------
# This object holds the image pairs (GT and low), ensuring that CARE compares corresponding images.
# This file is saved in .npz format and later called when loading the training data.
raw_data = data.RawData.from_folder(
basepath=base,
source_dirs=[Training_source_dir],
target_dir=Training_target_dir,
axes='CYX',
pattern='*.tif*')
X, Y, XY_axes = data.create_patches(
raw_data,
patch_filter=None,
patch_size=(patch_size,patch_size),
n_patches_per_image=number_of_patches)
print ('Creating 2D training dataset')
training_path = model_path+"/rawdata"
rawdata1 = training_path+".npz"
np.savez(training_path,X=X, Y=Y, axes=XY_axes)
# Load Training Data
(X,Y), (X_val,Y_val), axes = load_training_data(rawdata1, validation_split=percentage, verbose=True)
c = axes_dict(axes)['C']
n_channel_in, n_channel_out = X.shape[c], Y.shape[c]
%memit
#plot of training patches.
plt.figure(figsize=(12,5))
plot_some(X[:5],Y[:5])
plt.suptitle('5 example training patches (top row: source, bottom row: target)');
#plot of validation patches
plt.figure(figsize=(12,5))
plot_some(X_val[:5],Y_val[:5])
plt.suptitle('5 example validation patches (top row: source, bottom row: target)');
#Here we automatically define number_of_step in function of training data and batch size
#if (Use_Default_Advanced_Parameters):
if (Use_Default_Advanced_Parameters) or (number_of_steps == 0):
number_of_steps = int(X.shape[0]/batch_size)+1
# --------------------- Using pretrained model ------------------------
#Here we ensure that the learning rate set correctly when using pre-trained models
if Use_pretrained_model:
if Weights_choice == "last":
initial_learning_rate = lastLearningRate
if Weights_choice == "best":
initial_learning_rate = bestLearningRate
# --------------------- ---------------------- ------------------------
#Here we create the configuration file
config = Config(axes, n_channel_in, n_channel_out, probabilistic=True, train_steps_per_epoch=number_of_steps, train_epochs=number_of_epochs, unet_kern_size=5, unet_n_depth=3, train_batch_size=batch_size, train_learning_rate=initial_learning_rate)
print(config)
vars(config)
# Compile the CARE model for network training
model_training= CARE(config, model_name, basedir=model_path)
# --------------------- Using pretrained model ------------------------
# Load the pretrained weights
if Use_pretrained_model:
model_training.load_weights(h5_file_path)
# --------------------- ---------------------- ------------------------
pdf_export(augmentation = Use_Data_augmentation, pretrained_model = Use_pretrained_model)
###Output
_____no_output_____
###Markdown
**4.2. Start Training**---When playing the cell below you should see updates after each epoch (round). Network training can take some time.* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for data mining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of epochs or the number of patches.Once training is complete, the trained model is automatically saved on your Google Drive, in the **model_path** folder that was selected in section 3. It is, however, wise to download the folder from Google Drive, as all data can be erased at the next training if the same folder is used.**Of note:** At the end of the training, your model will be automatically exported so it can be used in the CSBDeep Fiji plugin (Run your Network). You can find it in your model folder (TF_SavedModel.zip). In Fiji, make sure to choose the right version of TensorFlow. You can check this under Edit -> Options -> Tensorflow and choose version 1.4 (CPU or GPU, depending on your system).
###Code
#@markdown ##Start training
start = time.time()
# Start Training
history = model_training.train(X,Y, validation_data=(X_val,Y_val))
print("Training, done.")
# copy the .npz to the model's folder
shutil.copyfile(model_path+'/rawdata.npz',model_path+'/'+model_name+'/rawdata.npz')
# convert the history.history dict to a pandas DataFrame:
lossData = pd.DataFrame(history.history)
if os.path.exists(model_path+"/"+model_name+"/Quality Control"):
shutil.rmtree(model_path+"/"+model_name+"/Quality Control")
os.makedirs(model_path+"/"+model_name+"/Quality Control")
# The training_evaluation.csv is saved (overwriting the file if needed).
lossDataCSVpath = model_path+'/'+model_name+'/Quality Control/training_evaluation.csv'
with open(lossDataCSVpath, 'w') as f:
writer = csv.writer(f)
writer.writerow(['loss','val_loss', 'learning rate'])
for i in range(len(history.history['loss'])):
writer.writerow([history.history['loss'][i], history.history['val_loss'][i], history.history['lr'][i]])
# Displaying the time elapsed for training
dt = time.time() - start
mins, sec = divmod(dt, 60)
hour, mins = divmod(mins, 60)
print("Time elapsed:",hour, "hour(s)",mins,"min(s)",round(sec),"sec(s)")
model_training.export_TF()
print("Your model has been sucessfully exported and can now also be used in the CSBdeep Fiji plugin")
pdf_export(trained = True, augmentation = Use_Data_augmentation, pretrained_model = Use_pretrained_model)
###Output
_____no_output_____
###Markdown
**5. Evaluate your model**---This section allows you to perform important quality checks on the validity and generalisability of the trained model. **We highly recommend performing quality control on all newly trained models.**
###Code
# model name and path
#@markdown ###Do you want to assess the model you just trained ?
Use_the_current_trained_model = False #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
QC_model_folder = "" #@param {type:"string"}
#Here we define the loaded model name and path
QC_model_name = os.path.basename(QC_model_folder)
QC_model_path = os.path.dirname(QC_model_folder)
if (Use_the_current_trained_model):
QC_model_name = model_name
QC_model_path = model_path
full_QC_model_path = QC_model_path+'/'+QC_model_name+'/'
if os.path.exists(full_QC_model_path):
print("The "+QC_model_name+" network will be evaluated")
else:
W = '\033[0m' # white (normal)
R = '\033[31m' # red
print(R+'!! WARNING: The chosen model does not exist !!'+W)
print('Please make sure you provide a valid model path and model name before proceeding further.')
loss_displayed = False
###Output
_____no_output_____
###Markdown
**5.1. Inspection of the loss function**---First, it is good practice to evaluate the training progress by comparing the training loss with the validation loss. The latter is a metric that shows how well the network performs on a subset of unseen data set aside from the training dataset. For more information on this, see for example [this review](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6381354/) by Nichols *et al.***Training loss** describes an error value after each epoch for the difference between the model's prediction and its ground-truth target.**Validation loss** describes the same error value, computed between the model's prediction on a validation image and its target.During training both values should decrease before reaching a minimal value which does not decrease further even after more training. Comparing the development of the validation loss with the training loss can give insights into the model's performance.Decreasing **Training loss** and **Validation loss** indicates that training is still necessary and increasing the `number_of_epochs` is recommended. Note that the curves can look flat towards the right side, just because of the y-axis scaling. The network has reached convergence once the curves flatten out. After this point no further training is required. If the **Validation loss** suddenly increases again and the **Training loss** simultaneously goes towards zero, it means that the network is overfitting to the training data. In other words, the network is remembering the exact patterns from the training data and no longer generalizes well to unseen data. In this case the training dataset has to be increased.**Note: Plots of the losses will be shown in a linear and in a log scale. This can help visualise changes in the losses at different magnitudes. However, note that if the losses are negative the plot on the log scale will be empty. This is not an error.**
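To read the epoch with the lowest validation loss directly from the saved training log, a short sketch (assuming the `QC_model_path` and `QC_model_name` variables set in the first cell of section 5, and the `training_evaluation.csv` written during training):

```python
import pandas as pd

# Assumes QC_model_path / QC_model_name were defined in the first cell of section 5
log = pd.read_csv(QC_model_path + '/' + QC_model_name + '/Quality Control/training_evaluation.csv')
best_epoch = log['val_loss'].idxmin() + 1   # +1 because epochs are counted from 1 in the training printout
print('Lowest validation loss %.4f reached at epoch %d' % (log['val_loss'].min(), best_epoch))
```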
###Code
#@markdown ##Play the cell to show a plot of training errors vs. epoch number
loss_displayed = True
lossDataFromCSV = []
vallossDataFromCSV = []
with open(QC_model_path+'/'+QC_model_name+'/Quality Control/training_evaluation.csv','r') as csvfile:
csvRead = csv.reader(csvfile, delimiter=',')
next(csvRead)
for row in csvRead:
lossDataFromCSV.append(float(row[0]))
vallossDataFromCSV.append(float(row[1]))
epochNumber = range(len(lossDataFromCSV))
plt.figure(figsize=(15,10))
plt.subplot(2,1,1)
plt.plot(epochNumber,lossDataFromCSV, label='Training loss')
plt.plot(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (linear scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.subplot(2,1,2)
plt.semilogy(epochNumber,lossDataFromCSV, label='Training loss')
plt.semilogy(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (log scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.savefig(QC_model_path+'/'+QC_model_name+'/Quality Control/lossCurvePlots.png',bbox_inches='tight',pad_inches=0)
plt.show()
###Output
_____no_output_____
###Markdown
**5.2. Error mapping and quality metrics estimation**---This section will display SSIM maps and RSE maps and calculate total SSIM, NRMSE and PSNR metrics for all the images provided in the "Source_QC_folder" and "Target_QC_folder" !**1. The SSIM (structural similarity) map** The SSIM metric is used to evaluate whether two images contain the same structures. It is a normalized metric and an SSIM of 1 indicates a perfect similarity between two images. Therefore for SSIM, the closer to 1, the better. The SSIM maps are constructed by calculating the SSIM metric in each pixel by considering the surrounding structural similarity in the neighbourhood of that pixel (currently defined as a window of 11 pixels and with Gaussian weighting of 1.5 pixel standard deviation, see our Wiki for more info). **mSSIM** is the SSIM value averaged across the entire image.**The output below shows the SSIM maps with the mSSIM****2. The RSE (Root Squared Error) map** This is a display of the root of the squared difference between the normalized prediction and the target, or between the source and the target. In this case, a smaller RSE is better. A perfect agreement between target and prediction will lead to an RSE map showing zeros everywhere (dark).**NRMSE (normalised root mean squared error)** gives the average difference between all pixels in the images compared to each other. Good agreement yields low NRMSE scores.**PSNR (Peak signal-to-noise ratio)** is a metric that gives the difference between the ground truth and prediction (or source input) in decibels, using the peak pixel values of the prediction and the MSE between the images. The higher the score, the better the agreement.**The output below shows the RSE maps with the NRMSE and PSNR values.**
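For reference, the standard definitions behind the SSIM and PSNR values reported below are (the NRMSE shown here is computed directly from the RSE map in the next cell):

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu_x^2+\mu_y^2+c_1)(\sigma_x^2+\sigma_y^2+c_2)}\qquad\qquad\mathrm{PSNR}=10\,\log_{10}\!\left(\frac{L^2}{\mathrm{MSE}}\right)$$

where $\mu_x$, $\mu_y$, $\sigma_x^2$, $\sigma_y^2$ and $\sigma_{xy}$ are the local means, variances and covariance computed in the Gaussian-weighted window, $c_1$ and $c_2$ are small stabilising constants, $L$ is the data range (1 after the normalisation used here) and MSE is the mean squared error between the two images.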
###Code
#@markdown ##Choose the folders that contain your Quality Control dataset
Source_QC_folder = "" #@param{type:"string"}
Target_QC_folder = "" #@param{type:"string"}
# Create a quality control/Prediction Folder
if os.path.exists(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction"):
shutil.rmtree(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
os.makedirs(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
# Activate the pretrained model.
model_training = CARE(config=None, name=QC_model_name, basedir=QC_model_path)
# List Tif images in Source_QC_folder
Source_QC_folder_tif = Source_QC_folder+"/*.tif"
Z = sorted(glob(Source_QC_folder_tif))
Z = list(map(imread,Z))
print('Number of test dataset found in the folder: '+str(len(Z)))
# Perform prediction on all datasets in the Source_QC folder
for filename in os.listdir(Source_QC_folder):
img = imread(os.path.join(Source_QC_folder, filename))
predicted = model_training.predict(img, axes='YX')
os.chdir(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
imsave(filename, predicted)
def ssim(img1, img2):
return structural_similarity(img1,img2,data_range=1.,full=True, gaussian_weights=True, use_sample_covariance=False, sigma=1.5)
def normalize(x, pmin=3, pmax=99.8, axis=None, clip=False, eps=1e-20, dtype=np.float32):
"""This function is adapted from Martin Weigert"""
"""Percentile-based image normalization."""
mi = np.percentile(x,pmin,axis=axis,keepdims=True)
ma = np.percentile(x,pmax,axis=axis,keepdims=True)
return normalize_mi_ma(x, mi, ma, clip=clip, eps=eps, dtype=dtype)
def normalize_mi_ma(x, mi, ma, clip=False, eps=1e-20, dtype=np.float32):#dtype=np.float32
"""This function is adapted from Martin Weigert"""
if dtype is not None:
x = x.astype(dtype,copy=False)
mi = dtype(mi) if np.isscalar(mi) else mi.astype(dtype,copy=False)
ma = dtype(ma) if np.isscalar(ma) else ma.astype(dtype,copy=False)
eps = dtype(eps)
try:
import numexpr
x = numexpr.evaluate("(x - mi) / ( ma - mi + eps )")
except ImportError:
x = (x - mi) / ( ma - mi + eps )
if clip:
x = np.clip(x,0,1)
return x
def norm_minmse(gt, x, normalize_gt=True):
"""This function is adapted from Martin Weigert"""
"""
normalizes and affinely scales an image pair such that the MSE is minimized
Parameters
----------
gt: ndarray
the ground truth image
x: ndarray
the image that will be affinely scaled
normalize_gt: bool
set to True of gt image should be normalized (default)
Returns
-------
gt_scaled, x_scaled
"""
if normalize_gt:
gt = normalize(gt, 0.1, 99.9, clip=False).astype(np.float32, copy = False)
x = x.astype(np.float32, copy=False) - np.mean(x)
#x = x - np.mean(x)
gt = gt.astype(np.float32, copy=False) - np.mean(gt)
#gt = gt - np.mean(gt)
scale = np.cov(x.flatten(), gt.flatten())[0, 1] / np.var(x.flatten())
return gt, scale * x
# Open and create the csv file that will contain all the QC metrics
with open(QC_model_path+"/"+QC_model_name+"/Quality Control/QC_metrics_"+QC_model_name+".csv", "w", newline='') as file:
writer = csv.writer(file)
# Write the header in the csv file
writer.writerow(["image #","Prediction v. GT mSSIM","Input v. GT mSSIM", "Prediction v. GT NRMSE", "Input v. GT NRMSE", "Prediction v. GT PSNR", "Input v. GT PSNR"])
# Let's loop through the provided dataset in the QC folders
for i in os.listdir(Source_QC_folder):
if not os.path.isdir(os.path.join(Source_QC_folder,i)):
print('Running QC on: '+i)
# -------------------------------- Target test data (Ground truth) --------------------------------
test_GT = io.imread(os.path.join(Target_QC_folder, i))
# -------------------------------- Source test data --------------------------------
test_source = io.imread(os.path.join(Source_QC_folder,i))
# Normalize the images wrt each other by minimizing the MSE between GT and Source image
test_GT_norm,test_source_norm = norm_minmse(test_GT, test_source, normalize_gt=True)
# -------------------------------- Prediction --------------------------------
test_prediction = io.imread(os.path.join(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction",i))
# Normalize the images wrt each other by minimizing the MSE between GT and prediction
test_GT_norm,test_prediction_norm = norm_minmse(test_GT, test_prediction, normalize_gt=True)
# -------------------------------- Calculate the metric maps and save them --------------------------------
# Calculate the SSIM maps
index_SSIM_GTvsPrediction, img_SSIM_GTvsPrediction = ssim(test_GT_norm, test_prediction_norm)
index_SSIM_GTvsSource, img_SSIM_GTvsSource = ssim(test_GT_norm, test_source_norm)
#Save ssim_maps
img_SSIM_GTvsPrediction_32bit = np.float32(img_SSIM_GTvsPrediction)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/SSIM_GTvsPrediction_'+i,img_SSIM_GTvsPrediction_32bit)
img_SSIM_GTvsSource_32bit = np.float32(img_SSIM_GTvsSource)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/SSIM_GTvsSource_'+i,img_SSIM_GTvsSource_32bit)
# Calculate the Root Squared Error (RSE) maps
img_RSE_GTvsPrediction = np.sqrt(np.square(test_GT_norm - test_prediction_norm))
img_RSE_GTvsSource = np.sqrt(np.square(test_GT_norm - test_source_norm))
# Save SE maps
img_RSE_GTvsPrediction_32bit = np.float32(img_RSE_GTvsPrediction)
img_RSE_GTvsSource_32bit = np.float32(img_RSE_GTvsSource)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/RSE_GTvsPrediction_'+i,img_RSE_GTvsPrediction_32bit)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/RSE_GTvsSource_'+i,img_RSE_GTvsSource_32bit)
# -------------------------------- Calculate the RSE metrics and save them --------------------------------
# Normalised Root Mean Squared Error (here it's valid to take the mean of the image)
NRMSE_GTvsPrediction = np.sqrt(np.mean(img_RSE_GTvsPrediction))
NRMSE_GTvsSource = np.sqrt(np.mean(img_RSE_GTvsSource))
# We can also measure the peak signal to noise ratio between the images
PSNR_GTvsPrediction = psnr(test_GT_norm,test_prediction_norm,data_range=1.0)
PSNR_GTvsSource = psnr(test_GT_norm,test_source_norm,data_range=1.0)
writer.writerow([i,str(index_SSIM_GTvsPrediction),str(index_SSIM_GTvsSource),str(NRMSE_GTvsPrediction),str(NRMSE_GTvsSource),str(PSNR_GTvsPrediction),str(PSNR_GTvsSource)])
# All data is now processed and saved
Test_FileList = os.listdir(Source_QC_folder) # this assumes, as it should, that both source and target are named the same
plt.figure(figsize=(20,20))
# Currently only displays the last computed set, from memory
# Target (Ground-truth)
plt.subplot(3,3,1)
plt.axis('off')
img_GT = io.imread(os.path.join(Target_QC_folder, Test_FileList[-1]))
plt.imshow(img_GT, norm=simple_norm(img_GT, percent = 99))
plt.title('Target',fontsize=15)
# Source
plt.subplot(3,3,2)
plt.axis('off')
img_Source = io.imread(os.path.join(Source_QC_folder, Test_FileList[-1]))
plt.imshow(img_Source, norm=simple_norm(img_Source, percent = 99))
plt.title('Source',fontsize=15)
#Prediction
plt.subplot(3,3,3)
plt.axis('off')
img_Prediction = io.imread(os.path.join(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction/", Test_FileList[-1]))
plt.imshow(img_Prediction, norm=simple_norm(img_Prediction, percent = 99))
plt.title('Prediction',fontsize=15)
#Setting up colours
cmap = plt.cm.CMRmap
#SSIM between GT and Source
plt.subplot(3,3,5)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imSSIM_GTvsSource = plt.imshow(img_SSIM_GTvsSource, cmap = cmap, vmin=0, vmax=1)
plt.colorbar(imSSIM_GTvsSource,fraction=0.046, pad=0.04)
plt.title('Target vs. Source',fontsize=15)
plt.xlabel('mSSIM: '+str(round(index_SSIM_GTvsSource,3)),fontsize=14)
plt.ylabel('SSIM maps',fontsize=20, rotation=0, labelpad=75)
#SSIM between GT and Prediction
plt.subplot(3,3,6)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imSSIM_GTvsPrediction = plt.imshow(img_SSIM_GTvsPrediction, cmap = cmap, vmin=0,vmax=1)
plt.colorbar(imSSIM_GTvsPrediction,fraction=0.046, pad=0.04)
plt.title('Target vs. Prediction',fontsize=15)
plt.xlabel('mSSIM: '+str(round(index_SSIM_GTvsPrediction,3)),fontsize=14)
#Root Squared Error between GT and Source
plt.subplot(3,3,8)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imRSE_GTvsSource = plt.imshow(img_RSE_GTvsSource, cmap = cmap, vmin=0, vmax = 1)
plt.colorbar(imRSE_GTvsSource,fraction=0.046,pad=0.04)
plt.title('Target vs. Source',fontsize=15)
plt.xlabel('NRMSE: '+str(round(NRMSE_GTvsSource,3))+', PSNR: '+str(round(PSNR_GTvsSource,3)),fontsize=14)
#plt.title('Target vs. Source PSNR: '+str(round(PSNR_GTvsSource,3)))
plt.ylabel('RSE maps',fontsize=20, rotation=0, labelpad=75)
#Root Squared Error between GT and Prediction
plt.subplot(3,3,9)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imRSE_GTvsPrediction = plt.imshow(img_RSE_GTvsPrediction, cmap = cmap, vmin=0, vmax=1)
plt.colorbar(imRSE_GTvsPrediction,fraction=0.046,pad=0.04)
plt.title('Target vs. Prediction',fontsize=15)
plt.xlabel('NRMSE: '+str(round(NRMSE_GTvsPrediction,3))+', PSNR: '+str(round(PSNR_GTvsPrediction,3)),fontsize=14)
plt.savefig(full_QC_model_path+'Quality Control/QC_example_data.png',bbox_inches='tight',pad_inches=0)
qc_pdf_export()
###Output
_____no_output_____
###Markdown
**6. Using the trained model**---In this section the unseen data is processed using the model trained in section 4. First, your unseen images are uploaded and prepared for prediction. After that, your trained model from section 4 is activated, the images are processed and the results are saved to your Google Drive. **6.1. Generate prediction(s) from unseen dataset**---The current trained model (from section 4.2) can now be used to process images. If you want to use an older model, untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. Predicted output images are saved in your **Result_folder** folder as restored image stacks (ImageJ-compatible TIFF images).**`Data_folder`:** This folder should contain the images that you want to process with your trained network.**`Result_folder`:** This folder will contain the predicted output images.
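If your unseen images are much larger than the training images and the prediction cell below runs out of GPU memory, csbdeep's `predict` accepts an `n_tiles` argument that splits the prediction into tiles. A hedged sketch (the model and file names below are placeholders; check that your installed csbdeep version exposes `n_tiles`):

```python
# Illustrative only - the cell below loops over the whole Data_folder without tiling
from csbdeep.models import CARE
from tifffile import imread, imsave

model = CARE(config=None, name="my_model", basedir="/content/gdrive/MyDrive/models")  # placeholder model
img = imread("/content/gdrive/MyDrive/Data_folder/large_image.tif")                   # placeholder image
restored = model.predict(img, axes='YX', n_tiles=(2, 2))  # process the image as 2x2 tiles to reduce memory use
imsave("/content/gdrive/MyDrive/Result_folder/large_image.tif", restored)
```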
###Code
#@markdown ### Provide the path to your dataset and to the folder where the predictions are saved, then play the cell to predict outputs from your unseen images.
Data_folder = "" #@param {type:"string"}
Result_folder = "" #@param {type:"string"}
# model name and path
#@markdown ###Do you want to use the current trained model?
Use_the_current_trained_model = False #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
Prediction_model_folder = "" #@param {type:"string"}
#Here we find the loaded model name and parent path
Prediction_model_name = os.path.basename(Prediction_model_folder)
Prediction_model_path = os.path.dirname(Prediction_model_folder)
if (Use_the_current_trained_model):
print("Using current trained network")
Prediction_model_name = model_name
Prediction_model_path = model_path
full_Prediction_model_path = os.path.join(Prediction_model_path, Prediction_model_name)
if os.path.exists(full_Prediction_model_path):
print("The "+Prediction_model_name+" network will be used.")
else:
W = '\033[0m' # white (normal)
R = '\033[31m' # red
print(R+'!! WARNING: The chosen model does not exist !!'+W)
print('Please make sure you provide a valid model path and model name before proceeding further.')
#Activate the pretrained model.
model_training = CARE(config=None, name=Prediction_model_name, basedir=Prediction_model_path)
# creates a loop, creating filenames and saving them
for filename in os.listdir(Data_folder):
img = imread(os.path.join(Data_folder,filename))
restored = model_training.predict(img, axes='YX')
os.chdir(Result_folder)
imsave(filename,restored)
print("Images saved into folder:", Result_folder)
###Output
_____no_output_____
###Markdown
**6.2. Inspect the predicted output**---
###Code
# @markdown ##Run this cell to display a randomly chosen input and its corresponding predicted output.
# This will display a randomly chosen dataset input and predicted output
random_choice = random.choice(os.listdir(Data_folder))
x = imread(Data_folder+"/"+random_choice)
os.chdir(Result_folder)
y = imread(Result_folder+"/"+random_choice)
plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
plt.axis('off')
plt.imshow(x, norm=simple_norm(x, percent = 99), interpolation='nearest')
plt.title('Input')
plt.subplot(1,2,2)
plt.axis('off')
plt.imshow(y, norm=simple_norm(y, percent = 99), interpolation='nearest')
plt.title('Predicted output');
###Output
_____no_output_____
###Markdown
**CARE: Content-aware image restoration (2D)**---CARE is a neural network capable of image restoration from corrupted bio-images, first published in 2018 by [Weigert *et al.* in Nature Methods](https://www.nature.com/articles/s41592-018-0216-7). The CARE network uses a U-Net architecture and allows image restoration and resolution improvement in 2D and 3D images, in a supervised manner, using noisy images as input and low-noise images as targets for training. The function of the network is essentially determined by the set of images provided in the training dataset. For instance, if noisy images are provided as input and high signal-to-noise ratio images are provided as targets, the network will perform denoising. **This particular notebook enables restoration of 2D datasets. If you are interested in restoring 3D datasets, you should use the CARE 3D notebook instead.**---*Disclaimer*: This notebook is part of the *Zero-Cost Deep-Learning to Enhance Microscopy* project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki), jointly developed by the Jacquemet (https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories. This notebook is based on the following paper: **Content-aware image restoration: pushing the limits of fluorescence microscopy**, by Weigert *et al.*, published in Nature Methods in 2018 (https://www.nature.com/articles/s41592-018-0216-7), and the source code found at: https://github.com/csbdeep/csbdeep. For a more in-depth description of the features of the network, please refer to [this guide](http://csbdeep.bioimagecomputing.com/doc/) provided by the original authors of the work. We provide a dataset for the training of this notebook as a way to test its functionalities, but the training and test data of the restoration experiments are also available from the authors of the original paper [here](https://publications.mpi-cbg.de/publications-sites/7207/).**Please also cite this original paper when using or developing this notebook.** **How to use this notebook?**---Videos describing how to use our notebooks are available on YouTube: - [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run-through of the workflow to obtain the notebooks and the provided test datasets as well as a common use of the notebook - [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook---**Structure of a notebook**The notebook contains two types of cell: **Text cells** provide information and can be modified by double-clicking the cell. You are currently reading a text cell. You can create a new text cell by clicking `+ Text`.**Code cells** contain code that can be modified by selecting the cell. To execute the cell, move your cursor over the `[ ]` mark on the left side of the cell (a play button appears) and click to execute the cell. Once execution is done, the animation of the play button stops. You can create a new code cell by clicking `+ Code`.---**Table of contents, Code snippets** and **Files**On the top left side of the notebook you find three tabs which contain, from top to bottom:*Table of contents* = contains the structure of the notebook. Click the content to move quickly between sections.*Code snippets* = contains examples of how to code certain tasks. You can ignore this when using this notebook.*Files* = contains all available files. After mounting your Google Drive (see section 1) you will find your files and folders here. 
**Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive button; your Google Drive is connected in section 1.2.**Note:** The "sample data" in "Files" contains default files. Do not upload anything in here!---**Making changes to the notebook****You can make a copy** of the notebook and save it to your Google Drive. To do this, click File -> Save a copy in Drive.To **edit a cell**, double-click on the text. This will show you either the source code (in code cells) or the source text (in text cells).You can use the `#`-mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment. **0. Before getting started**--- For CARE to train, **it needs to have access to a paired training dataset**. This means that the same image needs to be acquired in the two conditions (for instance, low signal-to-noise ratio and high signal-to-noise ratio) and provided with an indication of correspondence. Therefore, the data structure is important: all the input data must be in one folder and all the output data in a separate folder. The provided training dataset is already split into two folders called "Training - Low SNR images" (Training_source) and "Training - high SNR images" (Training_target). Information on how to generate a training dataset is available on our Wiki page: https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki**We strongly recommend that you generate extra paired images. These images can be used to assess the quality of your trained model (Quality control dataset)**. The quality control assessment can be done directly in this notebook. **Additionally, the corresponding input and output files need to have the same name**. Please note that you currently can **only use .tif files!**Here's a common data structure that can work:* Experiment A - **Training dataset** - Low SNR images (Training_source) - img_1.tif, img_2.tif, ... - High SNR images (Training_target) - img_1.tif, img_2.tif, ... - **Quality control dataset** - Low SNR images - img_1.tif, img_2.tif - High SNR images - img_1.tif, img_2.tif - **Data to be predicted** - **Results**---**Important note**- If you wish to **Train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to assess the quality of your model and **section 6** to run predictions using the model that you trained.- If you wish to **Evaluate your model** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to assess the quality of your model.- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 6** to run the predictions on the desired model.--- **1. Initialise the Colab session**--- **1.1. Check for GPU access**---By default, the session should be using Python 3 and GPU acceleration, but it is possible to ensure that these are set properly by doing the following:Go to **Runtime -> Change the Runtime type****Runtime type: Python 3** *(Python 3 is the programming language in which this program is written)***Accelerator: GPU** *(Graphics processing unit)*
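Since CARE pairs images strictly by file name (see the data structure above), a quick sanity check of your folders before training can save a failed run. A minimal sketch, assuming your Drive is already mounted (section 1.2); the folder paths are placeholders to adapt to your own data:

```python
import os

# Hypothetical paths, adapt to your own Google Drive folders
source = '/content/gdrive/My Drive/Training - Low SNR images'
target = '/content/gdrive/My Drive/Training - high SNR images'

# Collect .tif file names in each folder and compare them
source_files = {f for f in os.listdir(source) if f.lower().endswith('.tif')}
target_files = {f for f in os.listdir(target) if f.lower().endswith('.tif')}

print(len(source_files & target_files), 'matching pairs found')
if source_files != target_files:
    print('Unpaired files:', sorted(source_files ^ target_files))
```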
###Code
#@markdown ##Run this cell to check if you have GPU access
%tensorflow_version 1.x
import tensorflow as tf
if tf.test.gpu_device_name()=='':
print('You do not have GPU access.')
print('Did you change your runtime ?')
print('If the runtime setting is correct then Google did not allocate a GPU for your session')
print('Expect slow performance. To access GPU try reconnecting later')
else:
print('You have GPU access')
!nvidia-smi
###Output
_____no_output_____
###Markdown
**1.2. Mount your Google Drive**--- To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook. Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive and select 'Allow', copy the code, paste into the cell and press enter. This will give Colab access to the data on the drive. Once this is done, your data are available in the **Files** tab on the top left of notebook.
###Code
#@markdown ##Run this cell to connect your Google Drive to Colab
#@markdown * Click on the URL.
#@markdown * Sign in your Google Account.
#@markdown * Copy the authorization code.
#@markdown * Enter the authorization code.
#@markdown * Click on "Files" site on the right. Refresh the site. Your Google Drive folder should now be available here as "drive".
#mounts user's Google Drive to Google Colab.
from google.colab import drive
drive.mount('/content/gdrive')
###Output
_____no_output_____
###Markdown
**2. Install CARE and dependencies**---
###Code
Notebook_version = ['1.12']
#@markdown ##Install CARE and dependencies
#Libraries contains information of certain topics.
#For example the tifffile library contains information on how to handle tif-files.
#Here, we install libraries which are not already included in Colab.
!pip install tifffile # contains tools to operate tiff-files
!pip install csbdeep # contains tools for restoration of fluorescence microscopy images (Content-aware Image Restoration, CARE). It uses Keras and Tensorflow.
!pip install wget
!pip install memory_profiler
!pip install fpdf
%load_ext memory_profiler
#Here, we import and enable Tensorflow 1 instead of Tensorflow 2.
%tensorflow_version 1.x
import sys
before = [str(m) for m in sys.modules]
import tensorflow
import tensorflow as tf
print(tensorflow.__version__)
print("Tensorflow enabled.")
# ------- Variable specific to CARE -------
from csbdeep.utils import download_and_extract_zip_file, plot_some, axes_dict, plot_history, Path, download_and_extract_zip_file
from csbdeep.data import RawData, create_patches
from csbdeep.io import load_training_data, save_tiff_imagej_compatible
from csbdeep.models import Config, CARE
from csbdeep import data
from __future__ import print_function, unicode_literals, absolute_import, division
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
# ------- Common variable to all ZeroCostDL4Mic notebooks -------
import numpy as np
from matplotlib import pyplot as plt
import urllib
import os, random
import shutil
import zipfile
from tifffile import imread, imsave
import time
import sys
import wget
from pathlib import Path
import pandas as pd
import csv
from glob import glob
from scipy import signal
from scipy import ndimage
from skimage import io
from sklearn.linear_model import LinearRegression
from skimage.util import img_as_uint
import matplotlib as mpl
from skimage.metrics import structural_similarity
from skimage.metrics import peak_signal_noise_ratio as psnr
from astropy.visualization import simple_norm
from skimage import img_as_float32
from skimage.util import img_as_ubyte
from tqdm import tqdm
from fpdf import FPDF, HTMLMixin
from datetime import datetime
import subprocess
from pip._internal.operations.freeze import freeze
# Colors for the warning messages
class bcolors:
WARNING = '\033[31m'
W = '\033[0m' # white (normal)
R = '\033[31m' # red
#Disable some of the tensorflow warnings
import warnings
warnings.filterwarnings("ignore")
print("Libraries installed")
# Check if this is the latest version of the notebook
Latest_notebook_version = pd.read_csv("https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_ZeroCostDL4Mic_Release.csv")
if Notebook_version == list(Latest_notebook_version.columns):
print("This notebook is up-to-date.")
if not Notebook_version == list(Latest_notebook_version.columns):
print(bcolors.WARNING +"A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki")
!pip freeze > requirements.txt
#Create a pdf document with training summary
def pdf_export(trained = False, augmentation = False, pretrained_model = False):
# save FPDF() class into a
# variable pdf
#from datetime import datetime
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
Network = 'CARE 2D'
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Training report for '+Network+' model ('+model_name+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
# add another cell
if trained:
training_time = "Training time: "+str(hour)+ "hour(s) "+str(mins)+"min(s) "+str(round(sec))+"sec(s)"
pdf.cell(190, 5, txt = training_time, ln = 1, align='L')
pdf.ln(1)
Header_2 = 'Information for your materials and methods:'
pdf.cell(190, 5, txt=Header_2, ln=1, align='L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
#print(all_packages)
#Main Packages
main_packages = ''
version_numbers = []
for name in ['tensorflow','numpy','Keras','csbdeep']:
find_name=all_packages.find(name)
main_packages = main_packages+all_packages[find_name:all_packages.find(',',find_name)]+', '
#Version numbers only here:
version_numbers.append(all_packages[find_name+len(name)+2:all_packages.find(',',find_name)])
cuda_version = subprocess.run('nvcc --version',stdout=subprocess.PIPE, shell=True)
cuda_version = cuda_version.stdout.decode('utf-8')
cuda_version = cuda_version[cuda_version.find(', V')+3:-1]
gpu_name = subprocess.run('nvidia-smi',stdout=subprocess.PIPE, shell=True)
gpu_name = gpu_name.stdout.decode('utf-8')
gpu_name = gpu_name[gpu_name.find('Tesla'):gpu_name.find('Tesla')+10]
#print(cuda_version[cuda_version.find(', V')+3:-1])
#print(gpu_name)
shape = io.imread(Training_source+'/'+os.listdir(Training_source)[1]).shape
dataset_size = len(os.listdir(Training_source))
text = 'The '+Network+' model was trained from scratch for '+str(number_of_epochs)+' epochs on '+str(dataset_size*number_of_patches)+' paired image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_size)+','+str(patch_size)+')) with a batch size of '+str(batch_size)+' and a '+config.train_loss+' loss function, using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+'GPU.'
if pretrained_model:
text = 'The '+Network+' model was trained for '+str(number_of_epochs)+' epochs on '+str(dataset_size*number_of_patches)+' paired image patches (image dimensions: '+str(shape)+', patch size: ('+str(patch_size)+','+str(patch_size)+')) with a batch size of '+str(batch_size)+' and a '+config.train_loss+' loss function, using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). The model was re-trained from a pretrained model. Key python packages used include tensorflow (v '+version_numbers[0]+'), Keras (v '+version_numbers[2]+'), csbdeep (v '+version_numbers[3]+'), numpy (v '+version_numbers[1]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+'GPU.'
pdf.set_font('')
pdf.set_font_size(10.)
pdf.multi_cell(190, 5, txt = text, align='L')
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.ln(1)
pdf.cell(28, 5, txt='Augmentation: ', ln=0)
pdf.set_font('')
if augmentation:
aug_text = 'The dataset was augmented by a factor of '+str(Multiply_dataset_by)+' by'
if rotate_270_degrees != 0 or rotate_90_degrees != 0:
aug_text = aug_text+'\n- rotation'
if flip_left_right != 0 or flip_top_bottom != 0:
aug_text = aug_text+'\n- flipping'
if random_zoom_magnification != 0:
aug_text = aug_text+'\n- random zoom magnification'
if random_distortion != 0:
aug_text = aug_text+'\n- random distortion'
if image_shear != 0:
aug_text = aug_text+'\n- image shearing'
if skew_image != 0:
aug_text = aug_text+'\n- image skewing'
else:
aug_text = 'No augmentation was used for training.'
pdf.multi_cell(190, 5, txt=aug_text, align='L')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Parameters', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
if Use_Default_Advanced_Parameters:
pdf.cell(200, 5, txt='Default Advanced Parameters were enabled')
pdf.cell(200, 5, txt='The following parameters were used for training:')
pdf.ln(1)
html = """
<table width=40% style="margin-left:0px;">
<tr>
<th width = 50% align="left">Parameter</th>
<th width = 50% align="left">Value</th>
</tr>
<tr>
<td width = 50%>number_of_epochs</td>
<td width = 50%>{0}</td>
</tr>
<tr>
<td width = 50%>patch_size</td>
<td width = 50%>{1}</td>
</tr>
<tr>
<td width = 50%>number_of_patches</td>
<td width = 50%>{2}</td>
</tr>
<tr>
<td width = 50%>batch_size</td>
<td width = 50%>{3}</td>
</tr>
<tr>
<td width = 50%>number_of_steps</td>
<td width = 50%>{4}</td>
</tr>
<tr>
<td width = 50%>percentage_validation</td>
<td width = 50%>{5}</td>
</tr>
<tr>
<td width = 50%>initial_learning_rate</td>
<td width = 50%>{6}</td>
</tr>
</table>
""".format(number_of_epochs,str(patch_size)+'x'+str(patch_size),number_of_patches,batch_size,number_of_steps,percentage_validation,initial_learning_rate)
pdf.write_html(html)
#pdf.multi_cell(190, 5, txt = text_2, align='L')
pdf.set_font("Arial", size = 11, style='B')
pdf.ln(1)
pdf.cell(190, 5, txt = 'Training Dataset', align='L', ln=1)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(29, 5, txt= 'Training_source:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = Training_source, align = 'L')
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(27, 5, txt= 'Training_target:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = Training_target, align = 'L')
#pdf.cell(190, 5, txt=aug_text, align='L', ln=1)
pdf.ln(1)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(22, 5, txt= 'Model Path:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = model_path+'/'+model_name, align = 'L')
pdf.ln(1)
pdf.cell(60, 5, txt = 'Example Training pair', ln=1)
pdf.ln(1)
exp_size = io.imread('/content/TrainingDataExample_CARE2D.png').shape
pdf.image('/content/TrainingDataExample_CARE2D.png', x = 11, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))
pdf.ln(1)
ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy." BioRxiv (2020).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- CARE: Weigert, Martin, et al. "Content-aware image restoration: pushing the limits of fluorescence microscopy." Nature methods 15.12 (2018): 1090-1097.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
if augmentation:
ref_3 = '- Augmentor: Bloice, Marcus D., Christof Stocker, and Andreas Holzinger. "Augmentor: an image augmentation library for machine learning." arXiv preprint arXiv:1708.04680 (2017).'
pdf.multi_cell(190, 5, txt = ref_3, align='L')
pdf.ln(3)
reminder = 'Important:\nRemember to perform the quality control step on all newly trained models\nPlease consider depositing your training dataset on Zenodo'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(model_path+'/'+model_name+'/'+model_name+"_training_report.pdf")
#Make a pdf summary of the QC results
def qc_pdf_export():
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
Network = 'CARE 2D'
#model_name = os.path.basename(full_QC_model_path)
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Quality Control report for '+Network+' model ('+QC_model_name+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(2)
pdf.cell(190, 5, txt = 'Development of Training Losses', ln=1, align='L')
pdf.ln(1)
exp_size = io.imread(full_QC_model_path+'Quality Control/QC_example_data.png').shape
if os.path.exists(full_QC_model_path+'Quality Control/lossCurvePlots.png'):
pdf.image(full_QC_model_path+'Quality Control/lossCurvePlots.png', x = 11, y = None, w = round(exp_size[1]/10), h = round(exp_size[0]/13))
else:
pdf.set_font('')
pdf.set_font('Arial', size=10)
pdf.multi_cell(190, 5, txt='If you would like to see the evolution of the loss function during training please play the first cell of the QC section in the notebook.', align='L')
pdf.ln(2)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.ln(3)
pdf.cell(80, 5, txt = 'Example Quality Control Visualisation', ln=1)
pdf.ln(1)
exp_size = io.imread(full_QC_model_path+'Quality Control/QC_example_data.png').shape
pdf.image(full_QC_model_path+'Quality Control/QC_example_data.png', x = 16, y = None, w = round(exp_size[1]/10), h = round(exp_size[0]/10))
pdf.ln(1)
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Quality Control Metrics', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
pdf.ln(1)
html = """
<body>
<font size="7" face="Courier New" >
<table width=94% style="margin-left:0px;">"""
with open(full_QC_model_path+'Quality Control/QC_metrics_'+QC_model_name+'.csv', 'r') as csvfile:
metrics = csv.reader(csvfile)
header = next(metrics)
image = header[0]
mSSIM_PvsGT = header[1]
mSSIM_SvsGT = header[2]
NRMSE_PvsGT = header[3]
NRMSE_SvsGT = header[4]
PSNR_PvsGT = header[5]
PSNR_SvsGT = header[6]
header = """
<tr>
<th width = 10% align="left">{0}</th>
<th width = 15% align="left">{1}</th>
<th width = 15% align="center">{2}</th>
<th width = 15% align="left">{3}</th>
<th width = 15% align="center">{4}</th>
<th width = 15% align="left">{5}</th>
<th width = 15% align="center">{6}</th>
</tr>""".format(image,mSSIM_PvsGT,mSSIM_SvsGT,NRMSE_PvsGT,NRMSE_SvsGT,PSNR_PvsGT,PSNR_SvsGT)
html = html+header
for row in metrics:
image = row[0]
mSSIM_PvsGT = row[1]
mSSIM_SvsGT = row[2]
NRMSE_PvsGT = row[3]
NRMSE_SvsGT = row[4]
PSNR_PvsGT = row[5]
PSNR_SvsGT = row[6]
cells = """
<tr>
<td width = 10% align="left">{0}</td>
<td width = 15% align="center">{1}</td>
<td width = 15% align="center">{2}</td>
<td width = 15% align="center">{3}</td>
<td width = 15% align="center">{4}</td>
<td width = 15% align="center">{5}</td>
<td width = 15% align="center">{6}</td>
</tr>""".format(image,str(round(float(mSSIM_PvsGT),3)),str(round(float(mSSIM_SvsGT),3)),str(round(float(NRMSE_PvsGT),3)),str(round(float(NRMSE_SvsGT),3)),str(round(float(PSNR_PvsGT),3)),str(round(float(PSNR_SvsGT),3)))
html = html+cells
html = html+"""</body></table>"""
pdf.write_html(html)
pdf.ln(1)
pdf.set_font('')
pdf.set_font_size(10.)
ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "ZeroCostDL4Mic: an open platform to simplify access and use of Deep-Learning in Microscopy." BioRxiv (2020).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- CARE: Weigert, Martin, et al. "Content-aware image restoration: pushing the limits of fluorescence microscopy." Nature methods 15.12 (2018): 1090-1097.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
pdf.ln(3)
reminder = 'To find the parameters and other information about how this model was trained, go to the training_report.pdf of this model which should be in the folder of the same name.'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(full_QC_model_path+'Quality Control/'+QC_model_name+'_QC_report.pdf')
# Exporting requirements.txt for local run
!pip freeze > requirements.txt
after = [str(m) for m in sys.modules]
# Get minimum requirements file
#Add the following lines before all imports:
# import sys
# before = [str(m) for m in sys.modules]
#Add the following line after the imports:
# after = [str(m) for m in sys.modules]
from builtins import any as b_any
def filter_files(file_list, filter_list):
filtered_list = []
for fname in file_list:
if b_any(fname.split('==')[0] in s for s in filter_list):
filtered_list.append(fname)
return filtered_list
df = pd.read_csv('requirements.txt', delimiter = "\n")
mod_list = [m.split('.')[0] for m in after if not m in before]
req_list_temp = df.values.tolist()
req_list = [x[0] for x in req_list_temp]
# Replace with package name
mod_name_list = [['sklearn', 'scikit-learn'], ['skimage', 'scikit-image']]
mod_replace_list = [[x[1] for x in mod_name_list] if s in [x[0] for x in mod_name_list] else s for s in mod_list]
filtered_list = filter_files(req_list, mod_replace_list)
file=open('CARE_2D_requirements_simple.txt','w')
for item in filtered_list:
file.writelines(item + '\n')
file.close()
###Output
_____no_output_____
###Markdown
**3. Select your parameters and paths**--- **3.1. Setting main training parameters**--- **Paths for training, predictions and results****`Training_source`, `Training_target`:** These are the paths to your folders containing the Training_source (Low SNR images) and Training_target (High SNR images or ground truth) training data respectively. To find the paths of the folders containing the respective datasets, go to your Files on the left of the notebook, navigate to the folder containing your files and copy the path by right-clicking on the folder, **Copy path** and pasting it into the right box below.**`model_name`:** Use only a my_model-style name, not my-model (use "_" not "-"). Do not use spaces in the name. Avoid using the name of an existing model (saved in the same folder) as it will be overwritten.**`model_path`**: Enter the path where your model will be saved once trained (for instance your result folder).**Training Parameters****`number_of_epochs`:** Input how many epochs (rounds) the network will be trained. Preliminary results can already be observed after a few (10-30) epochs, but a full training should run for 100-300 epochs. Evaluate the performance after training (see 5). **Default value: 50****`patch_size`:** CARE divides the image into patches for training. Input the size of the patches (length of a side). The value should be smaller than the dimensions of the image and divisible by 8. **Default value: 80****When choosing the patch_size, the value should be i) large enough that it will enclose many instances, ii) small enough that the resulting patches fit into the RAM.** **`number_of_patches`:** Input the number of patches per image. Increasing the number of patches allows for larger training datasets. **Default value: 100** **Decreasing the patch size or increasing the number of patches may improve the training but may also increase the training time.****Advanced Parameters - experienced users only****`batch_size:`** This parameter defines the number of patches seen in each training step. Reducing or increasing the **batch size** may slow or speed up your training, respectively, and can influence network performance. **Default value: 16****`number_of_steps`:** Define the number of training steps per epoch. By default this parameter is calculated so that each patch is seen at least once per epoch. **Default value: Number of patches / batch_size****`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during training. **Default value: 10** **`initial_learning_rate`:** Input the initial value to be used as learning rate. **Default value: 0.0004**
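To make the default `number_of_steps` more concrete, here is a rough back-of-the-envelope sketch; the numbers are hypothetical examples, and the notebook itself derives the value automatically from the patches that remain after the validation split:

```python
# Hypothetical numbers, for illustration only
dataset_size = 30            # paired training images (assumption)
number_of_patches = 100      # patches extracted per image
batch_size = 16
percentage_validation = 10

# Patches kept for training after the validation split
training_patches = int(dataset_size * number_of_patches * (1 - percentage_validation / 100))

# Default number_of_steps: each training patch is seen at least once per epoch
number_of_steps = training_patches // batch_size + 1
print(training_patches, number_of_steps)   # 2700 patches -> 169 steps per epoch
```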
###Code
#@markdown ###Path to training images:
Training_source = "" #@param {type:"string"}
InputFile = Training_source+"/*.tif"
Training_target = "" #@param {type:"string"}
OutputFile = Training_target+"/*.tif"
#Define where the patch file will be saved
base = "/content"
# model name and path
#@markdown ###Name of the model and path to model folder:
model_name = "" #@param {type:"string"}
model_path = "" #@param {type:"string"}
# other parameters for training.
#@markdown ###Training Parameters
#@markdown Number of epochs:
number_of_epochs = 80#@param {type:"number"}
#@markdown Patch size (pixels) and number
patch_size = 80#@param {type:"number"} # in pixels
number_of_patches = 100#@param {type:"number"}
#@markdown ###Advanced Parameters
Use_Default_Advanced_Parameters = True #@param {type:"boolean"}
#@markdown ###If not, please input:
batch_size = 16#@param {type:"number"}
number_of_steps = 400#@param {type:"number"}
percentage_validation = 10 #@param {type:"number"}
initial_learning_rate = 0.0004 #@param {type:"number"}
if (Use_Default_Advanced_Parameters):
print("Default advanced parameters enabled")
batch_size = 16
percentage_validation = 10
initial_learning_rate = 0.0004
#Here we define the percentage to use for validation
percentage = percentage_validation/100
#here we check that no model with the same name already exist, if so print a warning
if os.path.exists(model_path+'/'+model_name):
print(bcolors.WARNING +"!! WARNING: "+model_name+" already exists and will be deleted in the following cell !!")
print(bcolors.WARNING +"To continue training "+model_name+", choose a new model_name here, and load "+model_name+" in section 3.3"+W)
# Here we disable pre-trained model by default (in case the cell is not ran)
Use_pretrained_model = False
# Here we disable data augmentation by default (in case the cell is not ran)
Use_Data_augmentation = False
# The shape of the images.
x = imread(InputFile)
y = imread(OutputFile)
print('Loaded Input images (number, width, length) =', x.shape)
print('Loaded Output images (number, width, length) =', y.shape)
print("Parameters initiated.")
# This will display a randomly chosen dataset input and output
random_choice = random.choice(os.listdir(Training_source))
x = imread(Training_source+"/"+random_choice)
# Here we check that the input images contains the expected dimensions
if len(x.shape) == 2:
print("Image dimensions (y,x)",x.shape)
if not len(x.shape) == 2:
print(bcolors.WARNING +"Your images appear to have the wrong dimensions. Image dimension",x.shape)
#Find image XY dimension
Image_Y = x.shape[0]
Image_X = x.shape[1]
#Hyperparameters failsafes
# Here we check that patch_size is smaller than the smallest xy dimension of the image
if patch_size > min(Image_Y, Image_X):
patch_size = min(Image_Y, Image_X)
print (bcolors.WARNING + " Your chosen patch_size is bigger than the xy dimension of your image; therefore the patch_size chosen is now:",patch_size)
# Here we check that patch_size is divisible by 8
if not patch_size % 8 == 0:
patch_size = ((int(patch_size / 8)-1) * 8)
print (bcolors.WARNING + " Your chosen patch_size is not divisible by 8; therefore the patch_size chosen is now:",patch_size)
os.chdir(Training_target)
y = imread(Training_target+"/"+random_choice)
f=plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
plt.imshow(x, norm=simple_norm(x, percent = 99), interpolation='nearest')
plt.title('Training source')
plt.axis('off');
plt.subplot(1,2,2)
plt.imshow(y, norm=simple_norm(y, percent = 99), interpolation='nearest')
plt.title('Training target')
plt.axis('off');
plt.savefig('/content/TrainingDataExample_CARE2D.png',bbox_inches='tight',pad_inches=0)
###Output
_____no_output_____
###Markdown
**3.2. Data augmentation**--- Data augmentation can improve training progress by amplifying differences in the dataset. This can be useful if the available dataset is small since, in this case, it is possible that a network could quickly learn every example in the dataset (overfitting) without augmentation. Augmentation is not necessary for training, and if your training dataset is large you should disable it. **However, data augmentation is not a magic solution and may also introduce issues. Therefore, we recommend that you train your network with and without augmentation, and use the QC section to validate that it improves overall performance.** Data augmentation is performed here by [Augmentor](https://github.com/mdbloice/Augmentor). [Augmentor](https://github.com/mdbloice/Augmentor) was described in the following article: Marcus D Bloice, Peter M Roth, Andreas Holzinger, Biomedical image augmentation using Augmentor, Bioinformatics, https://doi.org/10.1093/bioinformatics/btz259**Please also cite this original paper when publishing results obtained using this notebook with augmentation enabled.**
###Code
#Data augmentation
Use_Data_augmentation = False #@param {type:"boolean"}
if Use_Data_augmentation:
!pip install Augmentor
import Augmentor
#@markdown ####Choose a factor by which you want to multiply your original dataset
Multiply_dataset_by = 2 #@param {type:"slider", min:1, max:30, step:1}
Save_augmented_images = False #@param {type:"boolean"}
Saving_path = "" #@param {type:"string"}
Use_Default_Augmentation_Parameters = True #@param {type:"boolean"}
#@markdown ###If not, please choose the probability of the following image manipulations to be used to augment your dataset (1 = always used; 0 = disabled ):
#@markdown ####Mirror and rotate images
rotate_90_degrees = 0 #@param {type:"slider", min:0, max:1, step:0.1}
rotate_270_degrees = 0 #@param {type:"slider", min:0, max:1, step:0.1}
flip_left_right = 0 #@param {type:"slider", min:0, max:1, step:0.1}
flip_top_bottom = 0 #@param {type:"slider", min:0, max:1, step:0.1}
#@markdown ####Random image Zoom
random_zoom = 0 #@param {type:"slider", min:0, max:1, step:0.1}
random_zoom_magnification = 0 #@param {type:"slider", min:0, max:1, step:0.1}
#@markdown ####Random image distortion
random_distortion = 0 #@param {type:"slider", min:0, max:1, step:0.1}
#@markdown ####Image shearing and skewing
image_shear = 0 #@param {type:"slider", min:0, max:1, step:0.1}
max_image_shear = 1 #@param {type:"slider", min:1, max:25, step:1}
skew_image = 0 #@param {type:"slider", min:0, max:1, step:0.1}
skew_image_magnitude = 0 #@param {type:"slider", min:0, max:1, step:0.1}
if Use_Default_Augmentation_Parameters:
rotate_90_degrees = 0.5
rotate_270_degrees = 0.5
flip_left_right = 0.5
flip_top_bottom = 0.5
if not Multiply_dataset_by >5:
random_zoom = 0
random_zoom_magnification = 0.9
random_distortion = 0
image_shear = 0
max_image_shear = 10
skew_image = 0
skew_image_magnitude = 0
if Multiply_dataset_by >5:
random_zoom = 0.1
random_zoom_magnification = 0.9
random_distortion = 0.5
image_shear = 0.2
max_image_shear = 5
skew_image = 0.2
skew_image_magnitude = 0.4
if Multiply_dataset_by >25:
random_zoom = 0.5
random_zoom_magnification = 0.8
random_distortion = 0.5
image_shear = 0.5
max_image_shear = 20
skew_image = 0.5
skew_image_magnitude = 0.6
list_files = os.listdir(Training_source)
Nb_files = len(list_files)
Nb_augmented_files = (Nb_files * Multiply_dataset_by)
if Use_Data_augmentation:
print("Data augmentation enabled")
# Here we set the path for the various folder were the augmented images will be loaded
# All images are first saved into the augmented folder
#Augmented_folder = "/content/Augmented_Folder"
if not Save_augmented_images:
Saving_path= "/content"
Augmented_folder = Saving_path+"/Augmented_Folder"
if os.path.exists(Augmented_folder):
shutil.rmtree(Augmented_folder)
os.makedirs(Augmented_folder)
#Training_source_augmented = "/content/Training_source_augmented"
Training_source_augmented = Saving_path+"/Training_source_augmented"
if os.path.exists(Training_source_augmented):
shutil.rmtree(Training_source_augmented)
os.makedirs(Training_source_augmented)
#Training_target_augmented = "/content/Training_target_augmented"
Training_target_augmented = Saving_path+"/Training_target_augmented"
if os.path.exists(Training_target_augmented):
shutil.rmtree(Training_target_augmented)
os.makedirs(Training_target_augmented)
# Here we generate the augmented images
#Load the images
p = Augmentor.Pipeline(Training_source, Augmented_folder)
#Define the matching images
p.ground_truth(Training_target)
#Define the augmentation possibilities
if not rotate_90_degrees == 0:
p.rotate90(probability=rotate_90_degrees)
if not rotate_270_degrees == 0:
p.rotate270(probability=rotate_270_degrees)
if not flip_left_right == 0:
p.flip_left_right(probability=flip_left_right)
if not flip_top_bottom == 0:
p.flip_top_bottom(probability=flip_top_bottom)
if not random_zoom == 0:
p.zoom_random(probability=random_zoom, percentage_area=random_zoom_magnification)
if not random_distortion == 0:
p.random_distortion(probability=random_distortion, grid_width=4, grid_height=4, magnitude=8)
if not image_shear == 0:
p.shear(probability=image_shear,max_shear_left=20,max_shear_right=20)
if not skew_image == 0:
p.skew(probability=skew_image,magnitude=skew_image_magnitude)
p.sample(int(Nb_augmented_files))
print(int(Nb_augmented_files),"matching images generated")
# Here we sort through the images and move them back to augmented trainning source and targets folders
augmented_files = os.listdir(Augmented_folder)
for f in augmented_files:
if (f.startswith("_groundtruth_(1)_")):
shortname_noprefix = f[17:]
shutil.copyfile(Augmented_folder+"/"+f, Training_target_augmented+"/"+shortname_noprefix)
if not (f.startswith("_groundtruth_(1)_")):
shutil.copyfile(Augmented_folder+"/"+f, Training_source_augmented+"/"+f)
for filename in os.listdir(Training_source_augmented):
os.chdir(Training_source_augmented)
os.rename(filename, filename.replace('_original', ''))
#Here we clean up the extra files
shutil.rmtree(Augmented_folder)
if not Use_Data_augmentation:
print(bcolors.WARNING+"Data augmentation disabled")
###Output
_____no_output_____
###Markdown
**3.3. Using weights from a pre-trained model as initial weights**--- Here, you can set the path to a pre-trained model from which the weights can be extracted and used as a starting point for this training session. **This pre-trained model needs to be a CARE 2D model**. This option allows you to perform training over multiple Colab runtimes or to do transfer learning using models trained outside of ZeroCostDL4Mic. **You do not need to run this section if you want to train a network from scratch**. In order to continue training from the point where the pre-trained model left off, it is advisable to also **load the learning rate** that was used when the training ended. This is automatically saved for models trained with ZeroCostDL4Mic and will be loaded here. If no learning rate can be found in the model folder provided, the default learning rate will be used.
###Code
# @markdown ##Loading weights from a pre-trained network
Use_pretrained_model = False #@param {type:"boolean"}
pretrained_model_choice = "Model_from_file" #@param ["Model_from_file"]
Weights_choice = "best" #@param ["last", "best"]
#@markdown ###If you chose "Model_from_file", please provide the path to the model folder:
pretrained_model_path = "" #@param {type:"string"}
# --------------------- Check if we load a previously trained model ------------------------
if Use_pretrained_model:
# --------------------- Load the model from the choosen path ------------------------
if pretrained_model_choice == "Model_from_file":
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".h5")
# --------------------- Download a model provided in the XXX ------------------------
if pretrained_model_choice == "Model_name":
pretrained_model_name = "Model_name"
pretrained_model_path = "/content/"+pretrained_model_name
print("Downloading the 2D_Demo_Model_from_Stardist_2D_paper")
if os.path.exists(pretrained_model_path):
shutil.rmtree(pretrained_model_path)
os.makedirs(pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".h5")
# --------------------- Add additional pre-trained models here ------------------------
# --------------------- Check the model exist ------------------------
# If the model path chosen does not contain a pretrain model then use_pretrained_model is disabled,
if not os.path.exists(h5_file_path):
print(bcolors.WARNING+'WARNING: weights_'+Weights_choice+'.h5 pretrained model does not exist')
Use_pretrained_model = False
# If the model path contains a pretrain model, we load the training rate,
if os.path.exists(h5_file_path):
#Here we check if the learning rate can be loaded from the quality control folder
if os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
with open(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv'),'r') as csvfile:
csvRead = pd.read_csv(csvfile, sep=',')
#print(csvRead)
if "learning rate" in csvRead.columns: #Here we check that the learning rate column exist (compatibility with model trained un ZeroCostDL4Mic bellow 1.4)
print("pretrained network learning rate found")
#find the last learning rate
lastLearningRate = csvRead["learning rate"].iloc[-1]
#Find the learning rate corresponding to the lowest validation loss
min_val_loss = csvRead[csvRead['val_loss'] == min(csvRead['val_loss'])]
#print(min_val_loss)
bestLearningRate = min_val_loss['learning rate'].iloc[-1]
if Weights_choice == "last":
print('Last learning rate: '+str(lastLearningRate))
if Weights_choice == "best":
print('Learning rate of best validation loss: '+str(bestLearningRate))
if not "learning rate" in csvRead.columns: #if the column does not exist, then initial learning rate is used instead
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(bestLearningRate)+' will be used instead')
#Compatibility with models trained outside ZeroCostDL4Mic but default learning rate will be used
if not os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(initial_learning_rate)+' will be used instead')
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
# Display info about the pretrained model to be loaded (or not)
if Use_pretrained_model:
print('Weights found in:')
print(h5_file_path)
print('will be loaded prior to training.')
else:
print(bcolors.WARNING+'No pretrained network will be used.')
###Output
_____no_output_____
###Markdown
**4. Train the network**--- **4.1. Prepare the training data and model for training**---Here, we use the information from section 3 to build the model and convert the training data into a suitable format for training.
###Code
#@markdown ##Create the model and dataset objects
# --------------------- Here we delete the model folder if it already exist ------------------------
if os.path.exists(model_path+'/'+model_name):
print(bcolors.WARNING +"!! WARNING: Model folder already exists and has been removed !!"+W)
shutil.rmtree(model_path+'/'+model_name)
# --------------------- Here we load the augmented data or the raw data ------------------------
if Use_Data_augmentation:
Training_source_dir = Training_source_augmented
Training_target_dir = Training_target_augmented
if not Use_Data_augmentation:
Training_source_dir = Training_source
Training_target_dir = Training_target
# --------------------- ------------------------------------------------
# This object holds the image pairs (GT and low), ensuring that CARE compares corresponding images.
# This file is saved in .npz format and later loaded when preparing the training data.
raw_data = data.RawData.from_folder(
basepath=base,
source_dirs=[Training_source_dir],
target_dir=Training_target_dir,
axes='CYX',
pattern='*.tif*')
X, Y, XY_axes = data.create_patches(
raw_data,
patch_filter=None,
patch_size=(patch_size,patch_size),
n_patches_per_image=number_of_patches)
print ('Creating 2D training dataset')
training_path = model_path+"/rawdata"
rawdata1 = training_path+".npz"
np.savez(training_path,X=X, Y=Y, axes=XY_axes)
# Load Training Data
(X,Y), (X_val,Y_val), axes = load_training_data(rawdata1, validation_split=percentage, verbose=True)
c = axes_dict(axes)['C']
n_channel_in, n_channel_out = X.shape[c], Y.shape[c]
%memit
#plot of training patches.
plt.figure(figsize=(12,5))
plot_some(X[:5],Y[:5])
plt.suptitle('5 example training patches (top row: source, bottom row: target)');
#plot of validation patches
plt.figure(figsize=(12,5))
plot_some(X_val[:5],Y_val[:5])
plt.suptitle('5 example validation patches (top row: source, bottom row: target)');
#Here we automatically define number_of_steps as a function of the training data and batch size
if (Use_Default_Advanced_Parameters):
number_of_steps= int(X.shape[0]/batch_size)+1
# --------------------- Using pretrained model ------------------------
#Here we ensure that the learning rate set correctly when using pre-trained models
if Use_pretrained_model:
if Weights_choice == "last":
initial_learning_rate = lastLearningRate
if Weights_choice == "best":
initial_learning_rate = bestLearningRate
# --------------------- ---------------------- ------------------------
#Here we create the configuration file
config = Config(axes, n_channel_in, n_channel_out, probabilistic=True, train_steps_per_epoch=number_of_steps, train_epochs=number_of_epochs, unet_kern_size=5, unet_n_depth=3, train_batch_size=batch_size, train_learning_rate=initial_learning_rate)
print(config)
vars(config)
# Compile the CARE model for network training
model_training= CARE(config, model_name, basedir=model_path)
# --------------------- Using pretrained model ------------------------
# Load the pretrained weights
if Use_pretrained_model:
model_training.load_weights(h5_file_path)
# --------------------- ---------------------- ------------------------
pdf_export(augmentation = Use_Data_augmentation, pretrained_model = Use_pretrained_model)
###Output
_____no_output_____
###Markdown
**4.2. Start Training**---When playing the cell below you should see updates after each epoch (round). Network training can take some time.* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for datamining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of epochs or the number of patches.Once training is complete, the trained model is automatically saved on your Google Drive, in the **model_path** folder that was selected in Section 3. It is however wise to download the folder from Google Drive, as all data can be erased during the next training if the same folder is used.**Of Note:** At the end of the training, your model will be automatically exported so it can be used in the CSBDeep Fiji plugin (Run your Network). You can find it in your model folder (TF_SavedModel.zip). In Fiji, make sure to choose the right version of TensorFlow. You can check it at Edit -> Options -> TensorFlow. Choose version 1.4 (CPU or GPU depending on your system).
###Code
#@markdown ##Start training
start = time.time()
# Start Training
history = model_training.train(X,Y, validation_data=(X_val,Y_val))
print("Training, done.")
# convert the history.history dict to a pandas DataFrame:
lossData = pd.DataFrame(history.history)
if os.path.exists(model_path+"/"+model_name+"/Quality Control"):
shutil.rmtree(model_path+"/"+model_name+"/Quality Control")
os.makedirs(model_path+"/"+model_name+"/Quality Control")
# The training evaluation.csv is saved (overwrites the Files if needed).
lossDataCSVpath = model_path+'/'+model_name+'/Quality Control/training_evaluation.csv'
with open(lossDataCSVpath, 'w') as f:
writer = csv.writer(f)
writer.writerow(['loss','val_loss', 'learning rate'])
for i in range(len(history.history['loss'])):
writer.writerow([history.history['loss'][i], history.history['val_loss'][i], history.history['lr'][i]])
# Displaying the time elapsed for training
dt = time.time() - start
mins, sec = divmod(dt, 60)
hour, mins = divmod(mins, 60)
print("Time elapsed:",hour, "hour(s)",mins,"min(s)",round(sec),"sec(s)")
model_training.export_TF()
print("Your model has been sucessfully exported and can now also be used in the CSBdeep Fiji plugin")
pdf_export(trained = True, augmentation = Use_Data_augmentation, pretrained_model = Use_pretrained_model)
###Output
_____no_output_____
###Markdown
**5. Evaluate your model**---This section allows the user to perform important quality checks on the validity and generalisability of the trained model. **We highly recommend performing quality control on all newly trained models.**
###Code
# model name and path
#@markdown ###Do you want to assess the model you just trained ?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
QC_model_folder = "" #@param {type:"string"}
#Here we define the loaded model name and path
QC_model_name = os.path.basename(QC_model_folder)
QC_model_path = os.path.dirname(QC_model_folder)
if (Use_the_current_trained_model):
QC_model_name = model_name
QC_model_path = model_path
full_QC_model_path = QC_model_path+'/'+QC_model_name+'/'
if os.path.exists(full_QC_model_path):
print("The "+QC_model_name+" network will be evaluated")
else:
W = '\033[0m' # white (normal)
R = '\033[31m' # red
print(R+'!! WARNING: The chosen model does not exist !!'+W)
print('Please make sure you provide a valid model path and model name before proceeding further.')
loss_displayed = False
###Output
_____no_output_____
###Markdown
**5.1. Inspection of the loss function**---First, it is good practice to evaluate the training progress by comparing the training loss with the validation loss. The latter is a metric which shows how well the network performs on a subset of unseen data which is set aside from the training dataset. For more information on this, see for example [this review](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6381354/) by Nichols *et al.***Training loss** describes an error value after each epoch for the difference between the model's prediction and its ground-truth target.**Validation loss** describes the same error value between the model's prediction on a validation image and its target.During training both values should decrease before reaching a minimal value which does not decrease further even after more training. Comparing the development of the validation loss with the training loss can give insights into the model's performance.Decreasing **Training loss** and **Validation loss** indicates that training is still necessary and increasing the `number_of_epochs` is recommended. Note that the curves can look flat towards the right side, just because of the y-axis scaling. The network has reached convergence once the curves flatten out. After this point no further training is required. If the **Validation loss** suddenly increases again and the **Training loss** simultaneously goes towards zero, it means that the network is overfitting to the training data. In other words the network is remembering the exact patterns from the training data and no longer generalizes well to unseen data. In this case the training dataset has to be increased.**Note: Plots of the losses will be shown in a linear and in a log scale. This can help visualise changes in the losses at different magnitudes. However, note that if the losses are negative the plot on the log scale will be empty. This is not an error.**
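If you prefer a numerical summary in addition to the plots produced below, a minimal sketch (assuming the `training_evaluation.csv` written in section 4.2 exists and using the `QC_model_path`/`QC_model_name` defined above) could look like this; the overfitting check is only a crude heuristic:

```python
import pandas as pd

# CSV written during training in section 4.2 (columns: loss, val_loss, learning rate)
loss_csv = QC_model_path + '/' + QC_model_name + '/Quality Control/training_evaluation.csv'
df = pd.read_csv(loss_csv)

# Epoch with the lowest validation loss
best_epoch = df['val_loss'].idxmin()
print('Lowest validation loss: %.4f at epoch %d of %d' % (df['val_loss'].min(), best_epoch + 1, len(df)))

# Crude overfitting hint: validation loss rising over the last epochs while training loss still falls
tail = df.tail(10)
if len(df) > 10 and tail['val_loss'].is_monotonic_increasing and tail['loss'].is_monotonic_decreasing:
    print('Validation loss is increasing while training loss decreases: possible overfitting.')
```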
###Code
#@markdown ##Play the cell to show a plot of training errors vs. epoch number
loss_displayed = True
lossDataFromCSV = []
vallossDataFromCSV = []
with open(QC_model_path+'/'+QC_model_name+'/Quality Control/training_evaluation.csv','r') as csvfile:
csvRead = csv.reader(csvfile, delimiter=',')
next(csvRead)
for row in csvRead:
lossDataFromCSV.append(float(row[0]))
vallossDataFromCSV.append(float(row[1]))
epochNumber = range(len(lossDataFromCSV))
plt.figure(figsize=(15,10))
plt.subplot(2,1,1)
plt.plot(epochNumber,lossDataFromCSV, label='Training loss')
plt.plot(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (linear scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.subplot(2,1,2)
plt.semilogy(epochNumber,lossDataFromCSV, label='Training loss')
plt.semilogy(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (log scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.savefig(QC_model_path+'/'+QC_model_name+'/Quality Control/lossCurvePlots.png',bbox_inches='tight',pad_inches=0)
plt.show()
###Output
_____no_output_____
###Markdown
**5.2. Error mapping and quality metrics estimation**---This section will display SSIM maps and RSE maps as well as calculate total SSIM, NRMSE and PSNR metrics for all the images provided in the "Source_QC_folder" and "Target_QC_folder"!**1. The SSIM (structural similarity) map** The SSIM metric is used to evaluate whether two images contain the same structures. It is a normalized metric and an SSIM of 1 indicates a perfect similarity between two images. Therefore for SSIM, the closer to 1, the better. The SSIM maps are constructed by calculating the SSIM metric in each pixel by considering the surrounding structural similarity in the neighbourhood of that pixel (currently defined as a window of 11 pixels and with Gaussian weighting of 1.5 pixel standard deviation, see our Wiki for more info). **mSSIM** is the SSIM value calculated across the entire window of both images.**The output below shows the SSIM maps with the mSSIM****2. The RSE (Root Squared Error) map** This is a display of the root of the squared difference between the normalized predicted image and the target, or between the source and the target. In this case, a smaller RSE is better. A perfect agreement between target and prediction will lead to an RSE map showing zeros everywhere (dark).**NRMSE (normalised root mean squared error)** gives the average difference between all pixels in the images compared to each other. Good agreement yields low NRMSE scores.**PSNR (Peak signal-to-noise ratio)** is a metric that gives the difference between the ground truth and prediction (or source input) in decibels, using the peak pixel values of the prediction and the MSE between the images. The higher the score the better the agreement.**The output below shows the RSE maps with the NRMSE and PSNR values.**
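For orientation, here is a minimal sketch of how these metrics can be computed for a single image pair with scikit-image. The file names are placeholders, and the simple min-max normalisation is only illustrative; the cell below uses a more careful, MSE-minimising normalisation:

```python
import numpy as np
from skimage import io
from skimage.metrics import structural_similarity, peak_signal_noise_ratio, normalized_root_mse

# Hypothetical file names; replace with one of your QC image pairs
gt   = io.imread('ground_truth.tif').astype(np.float32)
pred = io.imread('prediction.tif').astype(np.float32)

# Simple 0-1 normalisation, for illustration only
norm = lambda a: (a - a.min()) / (a.max() - a.min() + 1e-20)
gt_n, pred_n = norm(gt), norm(pred)

mssim, ssim_map = structural_similarity(gt_n, pred_n, data_range=1.0, full=True)
nrmse = normalized_root_mse(gt_n, pred_n)
psnr_value = peak_signal_noise_ratio(gt_n, pred_n, data_range=1.0)
print('mSSIM %.3f  NRMSE %.3f  PSNR %.2f dB' % (mssim, nrmse, psnr_value))
```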
###Code
#@markdown ##Choose the folders that contain your Quality Control dataset
Source_QC_folder = "" #@param{type:"string"}
Target_QC_folder = "" #@param{type:"string"}
# Create a quality control/Prediction Folder
if os.path.exists(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction"):
shutil.rmtree(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
os.makedirs(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
# Activate the pretrained model.
model_training = CARE(config=None, name=QC_model_name, basedir=QC_model_path)
# List Tif images in Source_QC_folder
Source_QC_folder_tif = Source_QC_folder+"/*.tif"
Z = sorted(glob(Source_QC_folder_tif))
Z = list(map(imread,Z))
print('Number of test dataset found in the folder: '+str(len(Z)))
# Perform prediction on all datasets in the Source_QC folder
for filename in os.listdir(Source_QC_folder):
img = imread(os.path.join(Source_QC_folder, filename))
predicted = model_training.predict(img, axes='YX')
os.chdir(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
imsave(filename, predicted)
def ssim(img1, img2):
return structural_similarity(img1,img2,data_range=1.,full=True, gaussian_weights=True, use_sample_covariance=False, sigma=1.5)
def normalize(x, pmin=3, pmax=99.8, axis=None, clip=False, eps=1e-20, dtype=np.float32):
"""This function is adapted from Martin Weigert"""
"""Percentile-based image normalization."""
mi = np.percentile(x,pmin,axis=axis,keepdims=True)
ma = np.percentile(x,pmax,axis=axis,keepdims=True)
return normalize_mi_ma(x, mi, ma, clip=clip, eps=eps, dtype=dtype)
def normalize_mi_ma(x, mi, ma, clip=False, eps=1e-20, dtype=np.float32):#dtype=np.float32
"""This function is adapted from Martin Weigert"""
if dtype is not None:
x = x.astype(dtype,copy=False)
mi = dtype(mi) if np.isscalar(mi) else mi.astype(dtype,copy=False)
ma = dtype(ma) if np.isscalar(ma) else ma.astype(dtype,copy=False)
eps = dtype(eps)
try:
import numexpr
x = numexpr.evaluate("(x - mi) / ( ma - mi + eps )")
except ImportError:
x = (x - mi) / ( ma - mi + eps )
if clip:
x = np.clip(x,0,1)
return x
def norm_minmse(gt, x, normalize_gt=True):
"""This function is adapted from Martin Weigert"""
"""
normalizes and affinely scales an image pair such that the MSE is minimized
Parameters
----------
gt: ndarray
the ground truth image
x: ndarray
the image that will be affinely scaled
normalize_gt: bool
set to True if the gt image should be normalized (default)
Returns
-------
gt_scaled, x_scaled
"""
if normalize_gt:
gt = normalize(gt, 0.1, 99.9, clip=False).astype(np.float32, copy = False)
x = x.astype(np.float32, copy=False) - np.mean(x)
#x = x - np.mean(x)
gt = gt.astype(np.float32, copy=False) - np.mean(gt)
#gt = gt - np.mean(gt)
scale = np.cov(x.flatten(), gt.flatten())[0, 1] / np.var(x.flatten())
return gt, scale * x
# Open and create the csv file that will contain all the QC metrics
with open(QC_model_path+"/"+QC_model_name+"/Quality Control/QC_metrics_"+QC_model_name+".csv", "w", newline='') as file:
writer = csv.writer(file)
# Write the header in the csv file
writer.writerow(["image #","Prediction v. GT mSSIM","Input v. GT mSSIM", "Prediction v. GT NRMSE", "Input v. GT NRMSE", "Prediction v. GT PSNR", "Input v. GT PSNR"])
# Let's loop through the provided dataset in the QC folders
for i in os.listdir(Source_QC_folder):
if not os.path.isdir(os.path.join(Source_QC_folder,i)):
print('Running QC on: '+i)
# -------------------------------- Target test data (Ground truth) --------------------------------
test_GT = io.imread(os.path.join(Target_QC_folder, i))
# -------------------------------- Source test data --------------------------------
test_source = io.imread(os.path.join(Source_QC_folder,i))
# Normalize the images wrt each other by minimizing the MSE between GT and Source image
test_GT_norm,test_source_norm = norm_minmse(test_GT, test_source, normalize_gt=True)
# -------------------------------- Prediction --------------------------------
test_prediction = io.imread(os.path.join(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction",i))
# Normalize the images wrt each other by minimizing the MSE between GT and prediction
test_GT_norm,test_prediction_norm = norm_minmse(test_GT, test_prediction, normalize_gt=True)
# -------------------------------- Calculate the metric maps and save them --------------------------------
# Calculate the SSIM maps
index_SSIM_GTvsPrediction, img_SSIM_GTvsPrediction = ssim(test_GT_norm, test_prediction_norm)
index_SSIM_GTvsSource, img_SSIM_GTvsSource = ssim(test_GT_norm, test_source_norm)
#Save ssim_maps
img_SSIM_GTvsPrediction_32bit = np.float32(img_SSIM_GTvsPrediction)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/SSIM_GTvsPrediction_'+i,img_SSIM_GTvsPrediction_32bit)
img_SSIM_GTvsSource_32bit = np.float32(img_SSIM_GTvsSource)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/SSIM_GTvsSource_'+i,img_SSIM_GTvsSource_32bit)
# Calculate the Root Squared Error (RSE) maps
img_RSE_GTvsPrediction = np.sqrt(np.square(test_GT_norm - test_prediction_norm))
img_RSE_GTvsSource = np.sqrt(np.square(test_GT_norm - test_source_norm))
# Save SE maps
img_RSE_GTvsPrediction_32bit = np.float32(img_RSE_GTvsPrediction)
img_RSE_GTvsSource_32bit = np.float32(img_RSE_GTvsSource)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/RSE_GTvsPrediction_'+i,img_RSE_GTvsPrediction_32bit)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/RSE_GTvsSource_'+i,img_RSE_GTvsSource_32bit)
# -------------------------------- Calculate the RSE metrics and save them --------------------------------
# Normalised Root Mean Squared Error (here it's valid to take the mean of the image)
NRMSE_GTvsPrediction = np.sqrt(np.mean(img_RSE_GTvsPrediction))
NRMSE_GTvsSource = np.sqrt(np.mean(img_RSE_GTvsSource))
# We can also measure the peak signal to noise ratio between the images
PSNR_GTvsPrediction = psnr(test_GT_norm,test_prediction_norm,data_range=1.0)
PSNR_GTvsSource = psnr(test_GT_norm,test_source_norm,data_range=1.0)
writer.writerow([i,str(index_SSIM_GTvsPrediction),str(index_SSIM_GTvsSource),str(NRMSE_GTvsPrediction),str(NRMSE_GTvsSource),str(PSNR_GTvsPrediction),str(PSNR_GTvsSource)])
# All data is now processed and saved
Test_FileList = os.listdir(Source_QC_folder) # this assumes, as it should, that both source and target are named the same
plt.figure(figsize=(20,20))
# Currently only displays the last computed set, from memory
# Target (Ground-truth)
plt.subplot(3,3,1)
plt.axis('off')
img_GT = io.imread(os.path.join(Target_QC_folder, Test_FileList[-1]))
plt.imshow(img_GT, norm=simple_norm(img_GT, percent = 99))
plt.title('Target',fontsize=15)
# Source
plt.subplot(3,3,2)
plt.axis('off')
img_Source = io.imread(os.path.join(Source_QC_folder, Test_FileList[-1]))
plt.imshow(img_Source, norm=simple_norm(img_Source, percent = 99))
plt.title('Source',fontsize=15)
#Prediction
plt.subplot(3,3,3)
plt.axis('off')
img_Prediction = io.imread(os.path.join(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction/", Test_FileList[-1]))
plt.imshow(img_Prediction, norm=simple_norm(img_Prediction, percent = 99))
plt.title('Prediction',fontsize=15)
#Setting up colours
cmap = plt.cm.CMRmap
#SSIM between GT and Source
plt.subplot(3,3,5)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imSSIM_GTvsSource = plt.imshow(img_SSIM_GTvsSource, cmap = cmap, vmin=0, vmax=1)
plt.colorbar(imSSIM_GTvsSource,fraction=0.046, pad=0.04)
plt.title('Target vs. Source',fontsize=15)
plt.xlabel('mSSIM: '+str(round(index_SSIM_GTvsSource,3)),fontsize=14)
plt.ylabel('SSIM maps',fontsize=20, rotation=0, labelpad=75)
#SSIM between GT and Prediction
plt.subplot(3,3,6)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imSSIM_GTvsPrediction = plt.imshow(img_SSIM_GTvsPrediction, cmap = cmap, vmin=0,vmax=1)
plt.colorbar(imSSIM_GTvsPrediction,fraction=0.046, pad=0.04)
plt.title('Target vs. Prediction',fontsize=15)
plt.xlabel('mSSIM: '+str(round(index_SSIM_GTvsPrediction,3)),fontsize=14)
#Root Squared Error between GT and Source
plt.subplot(3,3,8)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imRSE_GTvsSource = plt.imshow(img_RSE_GTvsSource, cmap = cmap, vmin=0, vmax = 1)
plt.colorbar(imRSE_GTvsSource,fraction=0.046,pad=0.04)
plt.title('Target vs. Source',fontsize=15)
plt.xlabel('NRMSE: '+str(round(NRMSE_GTvsSource,3))+', PSNR: '+str(round(PSNR_GTvsSource,3)),fontsize=14)
#plt.title('Target vs. Source PSNR: '+str(round(PSNR_GTvsSource,3)))
plt.ylabel('RSE maps',fontsize=20, rotation=0, labelpad=75)
#Root Squared Error between GT and Prediction
plt.subplot(3,3,9)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imRSE_GTvsPrediction = plt.imshow(img_RSE_GTvsPrediction, cmap = cmap, vmin=0, vmax=1)
plt.colorbar(imRSE_GTvsPrediction,fraction=0.046,pad=0.04)
plt.title('Target vs. Prediction',fontsize=15)
plt.xlabel('NRMSE: '+str(round(NRMSE_GTvsPrediction,3))+', PSNR: '+str(round(PSNR_GTvsPrediction,3)),fontsize=14)
plt.savefig(full_QC_model_path+'Quality Control/QC_example_data.png',bbox_inches='tight',pad_inches=0)
qc_pdf_export()
###Output
_____no_output_____
###Markdown
**6. Using the trained model**---In this section the unseen data is processed using the trained model (in section 4). First, your unseen images are uploaded and prepared for prediction. After that your trained model from section 4 is activated and finally saved into your Google Drive. **6.1. Generate prediction(s) from unseen dataset**---The current trained model (from section 4.2) can now be used to process images. If you want to use an older model, untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. Predicted output images are saved in your **Result_folder** folder as restored image stacks (ImageJ-compatible TIFF images).**`Data_folder`:** This folder should contain the images that you want to use your trained network on for processing.**`Result_folder`:** This folder will contain the predicted output images.
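The cell below loops over every image in `Data_folder`; as a minimal reference, the optional sketch here shows the same prediction applied to a single 2D image. The file and model paths are placeholders and should be replaced with your own.
###Code
# Minimal single-image sketch of the prediction step; all paths and names are placeholders.
from csbdeep.models import CARE
from tifffile import imread, imsave

model = CARE(config=None, name='my_CARE_model', basedir='/content/gdrive/My Drive/models')  # placeholder model
img = imread('/content/gdrive/My Drive/data/example.tif')                                   # placeholder 2D input image
restored = model.predict(img, axes='YX')                                                    # 'YX' matches single-channel 2D images
imsave('/content/gdrive/My Drive/results/example.tif', restored)
###Output
_____no_output_____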
###Code
#@markdown ### Provide the path to your dataset and to the folder where the predictions are saved, then play the cell to predict outputs from your unseen images.
Data_folder = "" #@param {type:"string"}
Result_folder = "" #@param {type:"string"}
# model name and path
#@markdown ###Do you want to use the current trained model?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
Prediction_model_folder = "" #@param {type:"string"}
#Here we find the loaded model name and parent path
Prediction_model_name = os.path.basename(Prediction_model_folder)
Prediction_model_path = os.path.dirname(Prediction_model_folder)
if (Use_the_current_trained_model):
print("Using current trained network")
Prediction_model_name = model_name
Prediction_model_path = model_path
full_Prediction_model_path = os.path.join(Prediction_model_path, Prediction_model_name)
if os.path.exists(full_Prediction_model_path):
print("The "+Prediction_model_name+" network will be used.")
else:
W = '\033[0m' # white (normal)
R = '\033[31m' # red
print(R+'!! WARNING: The chosen model does not exist !!'+W)
print('Please make sure you provide a valid model path and model name before proceeding further.')
#Activate the pretrained model.
model_training = CARE(config=None, name=Prediction_model_name, basedir=Prediction_model_path)
# creates a loop, creating filenames and saving them
for filename in os.listdir(Data_folder):
img = imread(os.path.join(Data_folder,filename))
restored = model_training.predict(img, axes='YX')
os.chdir(Result_folder)
imsave(filename,restored)
print("Images saved into folder:", Result_folder)
###Output
_____no_output_____
###Markdown
**6.2. Inspect the predicted output**---
###Code
# @markdown ##Run this cell to display a randomly chosen input and its corresponding predicted output.
# This will display a randomly chosen dataset input and predicted output
random_choice = random.choice(os.listdir(Data_folder))
x = imread(Data_folder+"/"+random_choice)
os.chdir(Result_folder)
y = imread(Result_folder+"/"+random_choice)
plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
plt.axis('off')
plt.imshow(x, norm=simple_norm(x, percent = 99), interpolation='nearest')
plt.title('Input')
plt.subplot(1,2,2)
plt.axis('off')
plt.imshow(y, norm=simple_norm(y, percent = 99), interpolation='nearest')
plt.title('Predicted output');
###Output
_____no_output_____
###Markdown
**CARE: Content-aware image restoration (2D)**---CARE is a neural network capable of image restoration from corrupted bio-images, first published in 2018 by [Weigert *et al.* in Nature Methods](https://www.nature.com/articles/s41592-018-0216-7). The CARE network uses a U-Net network architecture and allows image restoration and resolution improvement in 2D and 3D images, in a supervised manner, using noisy images as input and low-noise images as targets for training. The function of the network is essentially determined by the set of images provided in the training dataset. For instance, if noisy images are provided as input and high signal-to-noise ratio images are provided as targets, the network will perform denoising. **This particular notebook enables restoration of 2D dataset. If you are interested in restoring 3D dataset, you should use the CARE 3D notebook instead.**---*Disclaimer*:This notebook is part of the *Zero-Cost Deep-Learning to Enhance Microscopy* project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki). Jointly developed by the Jacquemet (link to https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories.This notebook is based on the following paper: **Content-aware image restoration: pushing the limits of fluorescence microscopy**, by Weigert *et al.* published in Nature Methods in 2018 (https://www.nature.com/articles/s41592-018-0216-7)And source code found in: https://github.com/csbdeep/csbdeepFor a more in-depth description of the features of the network,please refer to [this guide](http://csbdeep.bioimagecomputing.com/doc/) provided by the original authors of the work.We provide a dataset for the training of this notebook as a way to test its functionalities but the training and test data of the restoration experiments is also available from the authors of the original paper [here](https://publications.mpi-cbg.de/publications-sites/7207/).**Please also cite this original paper when using or developing this notebook.** **How to use this notebook?**---Video describing how to use our notebooks are available on youtube: - [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run through of the workflow to obtain the notebooks and the provided test datasets as well as a common use of the notebook - [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook---**Structure of a notebook**The notebook contains two types of cell: **Text cells** provide information and can be modified by douple-clicking the cell. You are currently reading the text cell. You can create a new text by clicking `+ Text`.**Code cells** contain code and the code can be modfied by selecting the cell. To execute the cell, move your cursor on the `[ ]`-mark on the left side of the cell (play button appears). Click to execute the cell. After execution is done the animation of play button stops. You can create a new coding cell by clicking `+ Code`.---**Table of contents, Code snippets** and **Files**On the top left side of the notebook you find three tabs which contain from top to bottom:*Table of contents* = contains structure of the notebook. Click the content to move quickly between sections.*Code snippets* = contain examples how to code certain tasks. You can ignore this when using this notebook.*Files* = contain all available files. After mounting your google drive (see section 1.) you will find your files and folders here. 
**Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive-button; your Google Drive is connected in section 1.2.**Note:** The "sample data" in "Files" contains default files. Do not upload anything in here!---**Making changes to the notebook****You can make a copy** of the notebook and save it to your Google Drive. To do this click file -> save a copy in drive.To **edit a cell**, double click on the text. This will show you either the source code (in code cells) or the source text (in text cells).You can use the ``-mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment. **0. Before getting started**--- For CARE to train, **it needs to have access to a paired training dataset**. This means that the same image needs to be acquired in the two conditions (for instance, low signal-to-noise ratio and high signal-to-noise ratio) and provided with indication of correspondence. Therefore, the data structure is important. It is necessary that all the input data are in the same folder and that all the output data is in a separate folder. The provided training dataset is already split in two folders called "Training - Low SNR images" (Training_source) and "Training - high SNR images" (Training_target). Information on how to generate a training dataset is available in our Wiki page: https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki**We strongly recommend that you generate extra paired images. These images can be used to assess the quality of your trained model (Quality control dataset)**. The quality control assessment can be done directly in this notebook. **Additionally, the corresponding input and output files need to have the same name**. Please note that you currently can **only use .tif files!**Here's a common data structure that can work:* Experiment A - **Training dataset** - Low SNR images (Training_source) - img_1.tif, img_2.tif, ... - High SNR images (Training_target) - img_1.tif, img_2.tif, ... - **Quality control dataset** - Low SNR images - img_1.tif, img_2.tif - High SNR images - img_1.tif, img_2.tif - **Data to be predicted** - **Results**---**Important note**- If you wish to **Train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to assess the quality of your model and **section 6** to run predictions using the model that you trained.- If you wish to **Evaluate your model** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to assess the quality of your model.- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 6** to run the predictions on the desired model.--- **1. Initialise the Colab session**--- **1.1. Check for GPU access**---By default, the session should be using Python 3 and GPU acceleration, but it is possible to ensure that these are set properly by doing the following:Go to **Runtime -> Change the Runtime type****Runtime type: Python 3** *(Python 3 is programming language in which this program is written)***Accelator: GPU** *(Graphics processing unit)*
###Code
#@markdown ##Run this cell to check if you have GPU access
%tensorflow_version 1.x
import tensorflow as tf
if tf.test.gpu_device_name()=='':
print('You do not have GPU access.')
print('Did you change your runtime ?')
print('If the runtime setting is correct then Google did not allocate a GPU for your session')
print('Expect slow performance. To access GPU try reconnecting later')
else:
print('You have GPU access')
!nvidia-smi
###Output
_____no_output_____
###Markdown
**1.2. Mount your Google Drive**--- To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook. Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive and select 'Allow', copy the code, paste into the cell and press enter. This will give Colab access to the data on the drive. Once this is done, your data are available in the **Files** tab on the top left of notebook.
###Code
#@markdown ##Run this cell to connect your Google Drive to Colab
#@markdown * Click on the URL.
#@markdown * Sign in your Google Account.
#@markdown * Copy the authorization code.
#@markdown * Enter the authorization code.
#@markdown * Click on "Files" site on the right. Refresh the site. Your Google Drive folder should now be available here as "drive".
#mounts user's Google Drive to Google Colab.
from google.colab import drive
drive.mount('/content/gdrive')
###Output
_____no_output_____
###Markdown
**2. Install CARE and dependencies**---
###Code
#@markdown ##Install CARE and dependencies
#Libraries contains information of certain topics.
#For example the tifffile library contains information on how to handle tif-files.
#Here, we install libraries which are not already included in Colab.
!pip install tifffile # contains tools to operate tiff-files
!pip install csbdeep # contains tools for restoration of fluorescence microcopy images (Content-aware Image Restoration, CARE). It uses Keras and Tensorflow.
!pip install wget
!pip install memory_profiler
%load_ext memory_profiler
#Here, we import and enable Tensorflow 1 instead of Tensorflow 2.
%tensorflow_version 1.x
import tensorflow
import tensorflow as tf
print(tensorflow.__version__)
print("Tensorflow enabled.")
# ------- Variable specific to CARE -------
from csbdeep.utils import download_and_extract_zip_file, plot_some, axes_dict, plot_history, Path, download_and_extract_zip_file
from csbdeep.data import RawData, create_patches
from csbdeep.io import load_training_data, save_tiff_imagej_compatible
from csbdeep.models import Config, CARE
from csbdeep import data
from __future__ import print_function, unicode_literals, absolute_import, division
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
# ------- Common variable to all ZeroCostDL4Mic notebooks -------
import numpy as np
from matplotlib import pyplot as plt
import urllib
import os, random
import shutil
import zipfile
from tifffile import imread, imsave
import time
import sys
import wget
from pathlib import Path
import pandas as pd
import csv
from glob import glob
from scipy import signal
from scipy import ndimage
from skimage import io
from sklearn.linear_model import LinearRegression
from skimage.util import img_as_uint
import matplotlib as mpl
from skimage.metrics import structural_similarity
from skimage.metrics import peak_signal_noise_ratio as psnr
from astropy.visualization import simple_norm
from skimage import img_as_float32
from skimage.util import img_as_ubyte
from tqdm import tqdm
# Colors for the warning messages
class bcolors:
WARNING = '\033[31m'
#Disable some of the tensorflow warnings
import warnings
warnings.filterwarnings("ignore")
print("Libraries installed")
###Output
_____no_output_____
###Markdown
**3. Select your parameters and paths**--- **3.1. Setting main training parameters**--- **Paths for training, predictions and results****`Training_source`, `Training_target`:** These are the paths to your folders containing the Training_source (Low SNR images) and Training_target (High SNR images or ground truth) training data respectively. To find the paths of the folders containing the respective datasets, go to your Files on the left of the notebook, navigate to the folder containing your files and copy the path by right-clicking on the folder, **Copy path** and pasting it into the right box below.**`model_name`:** Use only my_model-style names, not my-model (use "_" not "-"). Do not use spaces in the name. Avoid using the name of an existing model (saved in the same folder) as it will be overwritten.**`model_path`**: Enter the path where your model will be saved once trained (for instance your result folder).**Training Parameters****`number_of_epochs`:** Input how many epochs (rounds) the network will be trained. Preliminary results can already be observed after a few (10-30) epochs, but a full training should run for 100-300 epochs. Evaluate the performance after training (see 5). **Default value: 50****`patch_size`:** CARE divides the image into patches for training. Input the size of the patches (length of a side). The value should be smaller than the dimensions of the image and divisible by 8. **Default value: 80****When choosing the patch_size, the value should be i) large enough that it will enclose many instances, ii) small enough that the resulting patches fit into the RAM.** **`number_of_patches`:** Input the number of patches per image. Increasing the number of patches allows for larger training datasets. **Default value: 100** **Decreasing the patch size or increasing the number of patches may improve the training but may also increase the training time.****Advanced Parameters - experienced users only****`batch_size:`** This parameter defines the number of patches seen in each training step. Reducing or increasing the **batch size** may slow or speed up your training, respectively, and can influence network performance. **Default value: 16****`number_of_steps`:** Define the number of training steps per epoch. By default this parameter is calculated so that each patch is seen at least once per epoch. **Default value: number of patches / batch_size****`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during training. **Default value: 10** **`initial_learning_rate`:** Input the initial value to be used as the learning rate. **Default value: 0.0004**
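Since CARE needs corresponding source and target images to share the same file name (see section 0), a quick check of the two folders before training can save a failed run. The optional sketch below is illustrative only; the two folder paths are placeholders that you would replace with your own Training_source and Training_target paths.
###Code
# Illustrative sanity check: verify that source and target folders contain matching .tif file names.
import os

source_dir = '/content/gdrive/My Drive/Training - Low SNR images'    # placeholder path
target_dir = '/content/gdrive/My Drive/Training - high SNR images'   # placeholder path

source_files = {f for f in os.listdir(source_dir) if f.lower().endswith('.tif')}
target_files = {f for f in os.listdir(target_dir) if f.lower().endswith('.tif')}

print('Matching image pairs:', len(source_files & target_files))
print('Source images without a target:', sorted(source_files - target_files))
print('Target images without a source:', sorted(target_files - source_files))
###Output
_____no_output_____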
###Code
#@markdown ###Path to training images:
Training_source = "" #@param {type:"string"}
InputFile = Training_source+"/*.tif"
Training_target = "" #@param {type:"string"}
OutputFile = Training_target+"/*.tif"
#Define where the patch file will be saved
base = "/content"
# model name and path
#@markdown ###Name of the model and path to model folder:
model_name = "" #@param {type:"string"}
model_path = "" #@param {type:"string"}
# other parameters for training.
#@markdown ###Training Parameters
#@markdown Number of epochs:
number_of_epochs = 50#@param {type:"number"}
#@markdown Patch size (pixels) and number
patch_size = 80#@param {type:"number"} # in pixels
number_of_patches = 100#@param {type:"number"}
#@markdown ###Advanced Parameters
Use_Default_Advanced_Parameters = True #@param {type:"boolean"}
#@markdown ###If not, please input:
batch_size = 16#@param {type:"number"}
number_of_steps = 400#@param {type:"number"}
percentage_validation = 10 #@param {type:"number"}
initial_learning_rate = 0.0004 #@param {type:"number"}
if (Use_Default_Advanced_Parameters):
print("Default advanced parameters enabled")
batch_size = 16
percentage_validation = 10
initial_learning_rate = 0.0004
#Here we define the percentage to use for validation
percentage = percentage_validation/100
#here we check that no model with the same name already exists; if it does, it is deleted
if os.path.exists(model_path+'/'+model_name):
print(bcolors.WARNING +"!! WARNING: Folder already exists and has been removed !!")
shutil.rmtree(model_path+'/'+model_name)
# Here we disable the pre-trained model by default (in case the cell is not run)
Use_pretrained_model = False
# Here we disable data augmentation by default (in case the cell is not run)
Use_Data_augmentation = False
# The shape of the images.
x = imread(InputFile)
y = imread(OutputFile)
print('Loaded Input images (number, width, length) =', x.shape)
print('Loaded Output images (number, width, length) =', y.shape)
print("Parameters initiated.")
# This will display a randomly chosen dataset input and output
random_choice = random.choice(os.listdir(Training_source))
x = imread(Training_source+"/"+random_choice)
# Here we check that the input images contain the expected dimensions
if len(x.shape) == 2:
print("Image dimensions (y,x)",x.shape)
if not len(x.shape) == 2:
print(bcolors.WARNING +"Your images appear to have the wrong dimensions. Image dimension",x.shape)
#Find image XY dimension
Image_Y = x.shape[0]
Image_X = x.shape[1]
#Hyperparameters failsafes
# Here we check that patch_size is smaller than the smallest xy dimension of the image
if patch_size > min(Image_Y, Image_X):
patch_size = min(Image_Y, Image_X)
print (bcolors.WARNING + " Your chosen patch_size is bigger than the xy dimension of your image; therefore the patch_size chosen is now:",patch_size)
# Here we check that patch_size is divisible by 8
if not patch_size % 8 == 0:
patch_size = ((int(patch_size / 8)-1) * 8)
print (bcolors.WARNING + " Your chosen patch_size is not divisible by 8; therefore the patch_size chosen is now:",patch_size)
os.chdir(Training_target)
y = imread(Training_target+"/"+random_choice)
f=plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
plt.imshow(x, norm=simple_norm(x, percent = 99), interpolation='nearest')
plt.title('Training source')
plt.axis('off');
plt.subplot(1,2,2)
plt.imshow(y, norm=simple_norm(y, percent = 99), interpolation='nearest')
plt.title('Training target')
plt.axis('off');
###Output
_____no_output_____
###Markdown
**3.2. Data augmentation**--- Data augmentation can improve training progress by amplifying differences in the dataset. This can be useful if the available dataset is small since, in this case, it is possible that a network could quickly learn every example in the dataset (overfitting), without augmentation. Augmentation is not necessary for training and if your training dataset is large you should disable it. **However, data augmentation is not a magic solution and may also introduce issues. Therefore, we recommend that you train your network with and without augmentation, and use the QC section to validate that it improves overall performance.** Data augmentation is performed here by [Augmentor](https://github.com/mdbloice/Augmentor).[Augmentor](https://github.com/mdbloice/Augmentor) was described in the following article:Marcus D Bloice, Peter M Roth, Andreas Holzinger, Biomedical image augmentation using Augmentor, Bioinformatics, https://doi.org/10.1093/bioinformatics/btz259**Please also cite this original paper when publishing results obtained using this notebook with augmentation enabled.**
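The cell below wraps Augmentor in a form-driven pipeline and also renames the generated ground-truth files; as a minimal illustration of what it does, the optional sketch here builds the simplest possible Augmentor pipeline with only 90-degree rotations and horizontal flips. The folder paths and the number of samples are placeholders.
###Code
# Minimal Augmentor sketch (placeholder paths); the full cell below adds more operations and
# moves the generated "_groundtruth_" files back into separate source/target folders.
import Augmentor

source_dir = '/content/Training_source'    # placeholder folder with the input images
target_dir = '/content/Training_target'    # placeholder folder with the matching ground-truth images
output_dir = '/content/Augmented_Folder'   # placeholder folder where augmented pairs are written

p = Augmentor.Pipeline(source_dir, output_dir)   # augmented images are saved into output_dir
p.ground_truth(target_dir)                       # apply identical transforms to the ground truth
p.rotate90(probability=0.5)
p.flip_left_right(probability=0.5)
p.sample(100)                                    # generate 100 augmented image pairs
###Output
_____no_output_____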
###Code
#Data augmentation
Use_Data_augmentation = False #@param {type:"boolean"}
if Use_Data_augmentation:
!pip install Augmentor
import Augmentor
#@markdown ####Choose a factor by which you want to multiply your original dataset
Multiply_dataset_by = 1 #@param {type:"slider", min:1, max:30, step:1}
Save_augmented_images = False #@param {type:"boolean"}
Saving_path = "" #@param {type:"string"}
Use_Default_Augmentation_Parameters = True #@param {type:"boolean"}
#@markdown ###If not, please choose the probability of the following image manipulations to be used to augment your dataset (1 = always used; 0 = disabled ):
#@markdown ####Mirror and rotate images
rotate_90_degrees = 0 #@param {type:"slider", min:0, max:1, step:0.1}
rotate_270_degrees = 0 #@param {type:"slider", min:0, max:1, step:0.1}
flip_left_right = 0 #@param {type:"slider", min:0, max:1, step:0.1}
flip_top_bottom = 0 #@param {type:"slider", min:0, max:1, step:0.1}
#@markdown ####Random image Zoom
random_zoom = 0 #@param {type:"slider", min:0, max:1, step:0.1}
random_zoom_magnification = 0 #@param {type:"slider", min:0, max:1, step:0.1}
#@markdown ####Random image distortion
random_distortion = 0 #@param {type:"slider", min:0, max:1, step:0.1}
#@markdown ####Image shearing and skewing
image_shear = 0 #@param {type:"slider", min:0, max:1, step:0.1}
max_image_shear = 1 #@param {type:"slider", min:1, max:25, step:1}
skew_image = 0 #@param {type:"slider", min:0, max:1, step:0.1}
skew_image_magnitude = 0 #@param {type:"slider", min:0, max:1, step:0.1}
if Use_Default_Augmentation_Parameters:
rotate_90_degrees = 0.5
rotate_270_degrees = 0.5
flip_left_right = 0.5
flip_top_bottom = 0.5
if not Multiply_dataset_by >5:
random_zoom = 0
random_zoom_magnification = 0.9
random_distortion = 0
image_shear = 0
max_image_shear = 10
skew_image = 0
skew_image_magnitude = 0
if Multiply_dataset_by >5:
random_zoom = 0.1
random_zoom_magnification = 0.9
random_distortion = 0.5
image_shear = 0.2
max_image_shear = 5
skew_image = 0.2
skew_image_magnitude = 0.4
if Multiply_dataset_by >25:
random_zoom = 0.5
random_zoom_magnification = 0.8
random_distortion = 0.5
image_shear = 0.5
max_image_shear = 20
skew_image = 0.5
skew_image_magnitude = 0.6
list_files = os.listdir(Training_source)
Nb_files = len(list_files)
Nb_augmented_files = (Nb_files * Multiply_dataset_by)
if Use_Data_augmentation:
print("Data augmentation enabled")
# Here we set the path for the various folder were the augmented images will be loaded
# All images are first saved into the augmented folder
#Augmented_folder = "/content/Augmented_Folder"
if not Save_augmented_images:
Saving_path= "/content"
Augmented_folder = Saving_path+"/Augmented_Folder"
if os.path.exists(Augmented_folder):
shutil.rmtree(Augmented_folder)
os.makedirs(Augmented_folder)
#Training_source_augmented = "/content/Training_source_augmented"
Training_source_augmented = Saving_path+"/Training_source_augmented"
if os.path.exists(Training_source_augmented):
shutil.rmtree(Training_source_augmented)
os.makedirs(Training_source_augmented)
#Training_target_augmented = "/content/Training_target_augmented"
Training_target_augmented = Saving_path+"/Training_target_augmented"
if os.path.exists(Training_target_augmented):
shutil.rmtree(Training_target_augmented)
os.makedirs(Training_target_augmented)
# Here we generate the augmented images
#Load the images
p = Augmentor.Pipeline(Training_source, Augmented_folder)
#Define the matching images
p.ground_truth(Training_target)
#Define the augmentation possibilities
if not rotate_90_degrees == 0:
p.rotate90(probability=rotate_90_degrees)
if not rotate_270_degrees == 0:
p.rotate270(probability=rotate_270_degrees)
if not flip_left_right == 0:
p.flip_left_right(probability=flip_left_right)
if not flip_top_bottom == 0:
p.flip_top_bottom(probability=flip_top_bottom)
if not random_zoom == 0:
p.zoom_random(probability=random_zoom, percentage_area=random_zoom_magnification)
if not random_distortion == 0:
p.random_distortion(probability=random_distortion, grid_width=4, grid_height=4, magnitude=8)
if not image_shear == 0:
p.shear(probability=image_shear,max_shear_left=20,max_shear_right=20)
if not skew_image == 0:
p.skew(probability=skew_image,magnitude=skew_image_magnitude)
p.sample(int(Nb_augmented_files))
print(int(Nb_augmented_files),"matching images generated")
# Here we sort through the images and move them back to the augmented training source and target folders
augmented_files = os.listdir(Augmented_folder)
for f in augmented_files:
if (f.startswith("_groundtruth_(1)_")):
shortname_noprefix = f[17:]
shutil.copyfile(Augmented_folder+"/"+f, Training_target_augmented+"/"+shortname_noprefix)
if not (f.startswith("_groundtruth_(1)_")):
shutil.copyfile(Augmented_folder+"/"+f, Training_source_augmented+"/"+f)
for filename in os.listdir(Training_source_augmented):
os.chdir(Training_source_augmented)
os.rename(filename, filename.replace('_original', ''))
#Here we clean up the extra files
shutil.rmtree(Augmented_folder)
if not Use_Data_augmentation:
print(bcolors.WARNING+"Data augmentation disabled")
###Output
_____no_output_____
###Markdown
**3.3. Using weights from a pre-trained model as initial weights**--- Here, you can set the path to a pre-trained model from which the weights can be extracted and used as a starting point for this training session. **This pre-trained model needs to be a CARE 2D model**. This option allows you to perform training over multiple Colab runtimes or to do transfer learning using models trained outside of ZeroCostDL4Mic. **You do not need to run this section if you want to train a network from scratch**. In order to continue training from the point where the pre-trained model left off, it is advisable to also **load the learning rate** that was used when the training ended. This is automatically saved for models trained with ZeroCostDL4Mic and will be loaded here. If no learning rate can be found in the model folder provided, the default learning rate will be used.
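In practice this amounts to pointing the notebook at the weights_best.h5 (or weights_last.h5) file of an existing CARE model and loading it into the new model before training. The optional sketch below shows the underlying calls with placeholder paths and a placeholder 2D configuration; in this notebook the real config, model name and model path come from sections 3.1 and 4.1 instead.
###Code
# Minimal sketch of re-using weights from an existing CARE 2D model (placeholder paths and names).
import os
from csbdeep.models import Config, CARE

pretrained_model_folder = '/content/gdrive/My Drive/models/my_old_CARE_model'   # placeholder
h5_file_path = os.path.join(pretrained_model_folder, 'weights_best.h5')         # or 'weights_last.h5'

# Build a new (placeholder) 2D CARE model, then initialise it with the old weights before training.
config = Config('YX', n_channel_in=1, n_channel_out=1, probabilistic=True)
model = CARE(config, 'my_new_model', basedir='/content/gdrive/My Drive/models')
model.load_weights(h5_file_path)
###Output
_____no_output_____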
###Code
# @markdown ##Loading weights from a pre-trained network
Use_pretrained_model = False #@param {type:"boolean"}
pretrained_model_choice = "Model_from_file" #@param ["Model_from_file"]
Weights_choice = "best" #@param ["last", "best"]
#@markdown ###If you chose "Model_from_file", please provide the path to the model folder:
pretrained_model_path = "" #@param {type:"string"}
# --------------------- Check if we load a previously trained model ------------------------
if Use_pretrained_model:
# --------------------- Load the model from the choosen path ------------------------
if pretrained_model_choice == "Model_from_file":
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".h5")
# --------------------- Download a model provided in the XXX ------------------------
if pretrained_model_choice == "Model_name":
pretrained_model_name = "Model_name"
pretrained_model_path = "/content/"+pretrained_model_name
print("Downloading the 2D_Demo_Model_from_Stardist_2D_paper")
if os.path.exists(pretrained_model_path):
shutil.rmtree(pretrained_model_path)
os.makedirs(pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".h5")
# --------------------- Add additional pre-trained models here ------------------------
# --------------------- Check the model exist ------------------------
# If the model path chosen does not contain a pretrained model then use_pretrained_model is disabled,
if not os.path.exists(h5_file_path):
print(bcolors.WARNING+'WARNING: weights_'+Weights_choice+'.h5 pretrained model does not exist')
Use_pretrained_model = False
# If the model path contains a pretrained model, we load the learning rate,
if os.path.exists(h5_file_path):
#Here we check if the learning rate can be loaded from the quality control folder
if os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
with open(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv'),'r') as csvfile:
csvRead = pd.read_csv(csvfile, sep=',')
#print(csvRead)
if "learning rate" in csvRead.columns: #Here we check that the learning rate column exist (compatibility with model trained un ZeroCostDL4Mic bellow 1.4)
print("pretrained network learning rate found")
#find the last learning rate
lastLearningRate = csvRead["learning rate"].iloc[-1]
#Find the learning rate corresponding to the lowest validation loss
min_val_loss = csvRead[csvRead['val_loss'] == min(csvRead['val_loss'])]
#print(min_val_loss)
bestLearningRate = min_val_loss['learning rate'].iloc[-1]
if Weights_choice == "last":
print('Last learning rate: '+str(lastLearningRate))
if Weights_choice == "best":
print('Learning rate of best validation loss: '+str(bestLearningRate))
if not "learning rate" in csvRead.columns: #if the column does not exist, then initial learning rate is used instead
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(bestLearningRate)+' will be used instead')
#Compatibility with models trained outside ZeroCostDL4Mic but default learning rate will be used
if not os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(initial_learning_rate)+' will be used instead')
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
# Display info about the pretrained model to be loaded (or not)
if Use_pretrained_model:
print('Weights found in:')
print(h5_file_path)
print('will be loaded prior to training.')
else:
print(bcolors.WARNING+'No pretrained network will be used.')
###Output
_____no_output_____
###Markdown
**4. Train the network**--- **4.1. Prepare the training data and model for training**---Here, we use the information from 3. to build the model and convert the training data into a suitable format for training.
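For reference, the two csbdeep calls used in the cell below are RawData.from_folder (which pairs the source and target images) and create_patches (which extracts the training patches). The optional sketch here shows them in their simplest form, with placeholder paths and patch settings; the cell below uses the variables defined in section 3 instead.
###Code
# Minimal sketch of the csbdeep patch-creation step (placeholder paths and sizes).
from csbdeep.data import RawData, create_patches

raw_data = RawData.from_folder(
    basepath='/content',               # placeholder parent folder
    source_dirs=['Training_source'],   # placeholder sub-folder with low-SNR images
    target_dir='Training_target',      # placeholder sub-folder with matching high-SNR images
    axes='CYX',
    pattern='*.tif*')

X, Y, XY_axes = create_patches(
    raw_data,
    patch_size=(64, 64),               # should be divisible by 8 (see section 3.1)
    n_patches_per_image=50,
    patch_filter=None)
###Output
_____no_output_____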
###Code
#@markdown ##Create the model and dataset objects
# --------------------- Here we load the augmented data or the raw data ------------------------
if Use_Data_augmentation:
Training_source_dir = Training_source_augmented
Training_target_dir = Training_target_augmented
if not Use_Data_augmentation:
Training_source_dir = Training_source
Training_target_dir = Training_target
# --------------------- ------------------------------------------------
# This object holds the image pairs (GT and low), ensuring that CARE compares corresponding images.
# This file is saved in .npz format and later called when loading the training data.
raw_data = data.RawData.from_folder(
basepath=base,
source_dirs=[Training_source_dir],
target_dir=Training_target_dir,
axes='CYX',
pattern='*.tif*')
X, Y, XY_axes = data.create_patches(
raw_data,
patch_filter=None,
patch_size=(patch_size,patch_size),
n_patches_per_image=number_of_patches)
print ('Creating 2D training dataset')
training_path = model_path+"/rawdata"
rawdata1 = training_path+".npz"
np.savez(training_path,X=X, Y=Y, axes=XY_axes)
# Load Training Data
(X,Y), (X_val,Y_val), axes = load_training_data(rawdata1, validation_split=percentage, verbose=True)
c = axes_dict(axes)['C']
n_channel_in, n_channel_out = X.shape[c], Y.shape[c]
%memit
#plot of training patches.
plt.figure(figsize=(12,5))
plot_some(X[:5],Y[:5])
plt.suptitle('5 example training patches (top row: source, bottom row: target)');
#plot of validation patches
plt.figure(figsize=(12,5))
plot_some(X_val[:5],Y_val[:5])
plt.suptitle('5 example validation patches (top row: source, bottom row: target)');
#Here we automatically define number_of_steps as a function of the training data and batch size
if (Use_Default_Advanced_Parameters):
number_of_steps= int(X.shape[0]/batch_size)+1
# --------------------- Using pretrained model ------------------------
#Here we ensure that the learning rate is set correctly when using pre-trained models
if Use_pretrained_model:
if Weights_choice == "last":
initial_learning_rate = lastLearningRate
if Weights_choice == "best":
initial_learning_rate = bestLearningRate
# --------------------- ---------------------- ------------------------
#Here we create the configuration file
config = Config(axes, n_channel_in, n_channel_out, probabilistic=True, train_steps_per_epoch=number_of_steps, train_epochs=number_of_epochs, unet_kern_size=5, unet_n_depth=3, train_batch_size=batch_size, train_learning_rate=initial_learning_rate)
print(config)
vars(config)
# Compile the CARE model for network training
model_training= CARE(config, model_name, basedir=model_path)
# --------------------- Using pretrained model ------------------------
# Load the pretrained weights
if Use_pretrained_model:
model_training.load_weights(h5_file_path)
# --------------------- ---------------------- ------------------------
###Output
_____no_output_____
###Markdown
**4.2. Start Training**---When playing the cell below you should see updates after each epoch (round). Network training can take some time.* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for datamining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of epochs or number of patches.**Of Note:** At the end of the training, your model will be automatically exported so it can be used in the CSBDeep Fiji plugin (Run your Network). You can find it in your model folder (TF_SavedModel.zip). In Fiji, make sure to choose the right version of TensorFlow. You can check at: Edit-- Options-- Tensorflow. Choose the version 1.4 (CPU or GPU depending on your system).
###Code
#@markdown ##Start training
start = time.time()
# Start Training
history = model_training.train(X,Y, validation_data=(X_val,Y_val))
print("Training, done.")
# convert the history.history dict to a pandas DataFrame:
lossData = pd.DataFrame(history.history)
if os.path.exists(model_path+"/"+model_name+"/Quality Control"):
shutil.rmtree(model_path+"/"+model_name+"/Quality Control")
os.makedirs(model_path+"/"+model_name+"/Quality Control")
# The training evaluation.csv is saved (overwrites the Files if needed).
lossDataCSVpath = model_path+'/'+model_name+'/Quality Control/training_evaluation.csv'
with open(lossDataCSVpath, 'w') as f:
writer = csv.writer(f)
writer.writerow(['loss','val_loss', 'learning rate'])
for i in range(len(history.history['loss'])):
writer.writerow([history.history['loss'][i], history.history['val_loss'][i], history.history['lr'][i]])
# Displaying the time elapsed for training
dt = time.time() - start
mins, sec = divmod(dt, 60)
hour, mins = divmod(mins, 60)
print("Time elapsed:",hour, "hour(s)",mins,"min(s)",round(sec),"sec(s)")
model_training.export_TF()
print("Your model has been sucessfully exported and can now also be used in the CSBdeep Fiji plugin")
###Output
_____no_output_____
###Markdown
**4.3. Download your model(s) from Google Drive**---Once training is complete, the trained model is automatically saved on your Google Drive, in the **model_path** folder that was selected in Section 3. It is however wise to download the folder as all data can be erased at the next training if using the same folder. **5. Evaluate your model**---This section allows the user to perform important quality checks on the validity and generalisability of the trained model. **We highly recommend to perform quality control on all newly trained models.**
###Code
# model name and path
#@markdown ###Do you want to assess the model you just trained ?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
QC_model_folder = "" #@param {type:"string"}
#Here we define the loaded model name and path
QC_model_name = os.path.basename(QC_model_folder)
QC_model_path = os.path.dirname(QC_model_folder)
if (Use_the_current_trained_model):
QC_model_name = model_name
QC_model_path = model_path
full_QC_model_path = QC_model_path+'/'+QC_model_name+'/'
if os.path.exists(full_QC_model_path):
print("The "+QC_model_name+" network will be evaluated")
else:
W = '\033[0m' # white (normal)
R = '\033[31m' # red
print(R+'!! WARNING: The chosen model does not exist !!'+W)
print('Please make sure you provide a valid model path and model name before proceeding further.')
###Output
_____no_output_____
###Markdown
**5.1. Inspection of the loss function**---First, it is good practice to evaluate the training progress by comparing the training loss with the validation loss. The latter is a metric which shows how well the network performs on a subset of unseen data which is set aside from the training dataset. For more information on this, see for example [this review](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6381354/) by Nichols *et al.***Training loss** describes an error value after each epoch for the difference between the model's prediction and its ground-truth target.**Validation loss** describes the same error value between the model's prediction on a validation image and its target.During training both values should decrease before reaching a minimal value which does not decrease further even after more training. Comparing the development of the validation loss with the training loss can give insights into the model's performance.Decreasing **Training loss** and **Validation loss** indicates that training is still necessary and increasing the `number_of_epochs` is recommended. Note that the curves can look flat towards the right side, just because of the y-axis scaling. The network has reached convergence once the curves flatten out. After this point no further training is required. If the **Validation loss** suddenly increases again and the **Training loss** simultaneously goes towards zero, it means that the network is overfitting to the training data. In other words the network is remembering the exact patterns from the training data and no longer generalizes well to unseen data. In this case the training dataset has to be increased.**Note: Plots of the losses will be shown in a linear and in a log scale. This can help visualise changes in the losses at different magnitudes. However, note that if the losses are negative the plot on the log scale will be empty. This is not an error.**
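To relate these curves to the "best" weights mentioned in section 3.3, the optional sketch below reads the saved training_evaluation.csv and reports the epoch with the lowest validation loss. The CSV path is a placeholder; with the variables from the cell above you could instead build it from QC_model_path and QC_model_name.
###Code
# Illustrative sketch: find the epoch with the lowest validation loss in training_evaluation.csv.
import pandas as pd

csv_path = '/content/gdrive/My Drive/models/my_model/Quality Control/training_evaluation.csv'  # placeholder
history_df = pd.read_csv(csv_path)                   # columns: loss, val_loss, learning rate

best_epoch = int(history_df['val_loss'].idxmin())    # 0-based row index = epoch number - 1
print('Lowest validation loss:', history_df['val_loss'].min(), 'at epoch', best_epoch + 1)
print('Final training loss:', history_df['loss'].iloc[-1])
###Output
_____no_output_____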
###Code
#@markdown ##Play the cell to show a plot of training errors vs. epoch number
lossDataFromCSV = []
vallossDataFromCSV = []
with open(QC_model_path+'/'+QC_model_name+'/Quality Control/training_evaluation.csv','r') as csvfile:
csvRead = csv.reader(csvfile, delimiter=',')
next(csvRead)
for row in csvRead:
lossDataFromCSV.append(float(row[0]))
vallossDataFromCSV.append(float(row[1]))
epochNumber = range(len(lossDataFromCSV))
plt.figure(figsize=(15,10))
plt.subplot(2,1,1)
plt.plot(epochNumber,lossDataFromCSV, label='Training loss')
plt.plot(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (linear scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.subplot(2,1,2)
plt.semilogy(epochNumber,lossDataFromCSV, label='Training loss')
plt.semilogy(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (log scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.savefig(QC_model_path+'/'+QC_model_name+'/Quality Control/lossCurvePlots.png')
plt.show()
###Output
_____no_output_____
###Markdown
**5.2. Error mapping and quality metrics estimation**---This section will display SSIM maps and RSE maps as well as calculating total SSIM, NRMSE and PSNR metrics for all the images provided in the "Source_QC_folder" and "Target_QC_folder" !**1. The SSIM (structural similarity) map** The SSIM metric is used to evaluate whether two images contain the same structures. It is a normalized metric and an SSIM of 1 indicates a perfect similarity between two images. Therefore for SSIM, the closer to 1, the better. The SSIM maps are constructed by calculating the SSIM metric in each pixel by considering the surrounding structural similarity in the neighbourhood of that pixel (currently defined as window of 11 pixels and with Gaussian weighting of 1.5 pixel standard deviation, see our Wiki for more info). **mSSIM** is the SSIM value calculated across the entire window of both images.**The output below shows the SSIM maps with the mSSIM****2. The RSE (Root Squared Error) map** This is a display of the root of the squared difference between the normalized predicted and target or the source and the target. In this case, a smaller RSE is better. A perfect agreement between target and prediction will lead to an RSE map showing zeros everywhere (dark).**NRMSE (normalised root mean squared error)** gives the average difference between all pixels in the images compared to each other. Good agreement yields low NRMSE scores.**PSNR (Peak signal-to-noise ratio)** is a metric that gives the difference between the ground truth and prediction (or source input) in decibels, using the peak pixel values of the prediction and the MSE between the images. The higher the score the better the agreement.**The output below shows the RSE maps with the NRMSE and PSNR values.**
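Before the metrics are computed, the prediction and source images are rescaled with norm_minmse (defined in the cell below) so that intensity offsets and gains do not dominate the comparison. The optional toy sketch here illustrates the idea on small 1D arrays; the numbers are arbitrary and purely for demonstration.
###Code
# Toy illustration of the MSE-reducing affine rescaling used by norm_minmse below (arbitrary values).
import numpy as np

gt = np.array([0.0, 1.0, 2.0, 3.0], dtype=np.float32)            # pretend ground truth
x = 10.0 * gt + 5.0 + np.array([0.1, -0.1, 0.05, -0.05])         # same signal with a different gain and offset

x_centred = x - np.mean(x)
gt_centred = gt - np.mean(gt)
scale = np.cov(x_centred, gt_centred)[0, 1] / np.var(x_centred)  # same gain formula as norm_minmse below

mse_before = np.mean((gt_centred - x_centred) ** 2)
mse_after = np.mean((gt_centred - scale * x_centred) ** 2)
print('MSE before rescaling:', round(float(mse_before), 4))
print('MSE after rescaling:', round(float(mse_after), 4))
###Output
_____no_output_____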
###Code
#@markdown ##Choose the folders that contain your Quality Control dataset
Source_QC_folder = "" #@param{type:"string"}
Target_QC_folder = "" #@param{type:"string"}
# Create a quality control/Prediction Folder
if os.path.exists(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction"):
shutil.rmtree(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
os.makedirs(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
# Activate the pretrained model.
model_training = CARE(config=None, name=QC_model_name, basedir=QC_model_path)
# List Tif images in Source_QC_folder
Source_QC_folder_tif = Source_QC_folder+"/*.tif"
Z = sorted(glob(Source_QC_folder_tif))
Z = list(map(imread,Z))
print('Number of test dataset found in the folder: '+str(len(Z)))
# Perform prediction on all datasets in the Source_QC folder
for filename in os.listdir(Source_QC_folder):
img = imread(os.path.join(Source_QC_folder, filename))
predicted = model_training.predict(img, axes='YX')
os.chdir(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction")
imsave(filename, predicted)
def ssim(img1, img2):
return structural_similarity(img1,img2,data_range=1.,full=True, gaussian_weights=True, use_sample_covariance=False, sigma=1.5)
def normalize(x, pmin=3, pmax=99.8, axis=None, clip=False, eps=1e-20, dtype=np.float32):
"""This function is adapted from Martin Weigert"""
"""Percentile-based image normalization."""
mi = np.percentile(x,pmin,axis=axis,keepdims=True)
ma = np.percentile(x,pmax,axis=axis,keepdims=True)
return normalize_mi_ma(x, mi, ma, clip=clip, eps=eps, dtype=dtype)
def normalize_mi_ma(x, mi, ma, clip=False, eps=1e-20, dtype=np.float32):#dtype=np.float32
"""This function is adapted from Martin Weigert"""
if dtype is not None:
x = x.astype(dtype,copy=False)
mi = dtype(mi) if np.isscalar(mi) else mi.astype(dtype,copy=False)
ma = dtype(ma) if np.isscalar(ma) else ma.astype(dtype,copy=False)
eps = dtype(eps)
try:
import numexpr
x = numexpr.evaluate("(x - mi) / ( ma - mi + eps )")
except ImportError:
x = (x - mi) / ( ma - mi + eps )
if clip:
x = np.clip(x,0,1)
return x
def norm_minmse(gt, x, normalize_gt=True):
"""This function is adapted from Martin Weigert"""
"""
normalizes and affinely scales an image pair such that the MSE is minimized
Parameters
----------
gt: ndarray
the ground truth image
x: ndarray
the image that will be affinely scaled
normalize_gt: bool
set to True if the gt image should be normalized (default)
Returns
-------
gt_scaled, x_scaled
"""
if normalize_gt:
gt = normalize(gt, 0.1, 99.9, clip=False).astype(np.float32, copy = False)
x = x.astype(np.float32, copy=False) - np.mean(x)
#x = x - np.mean(x)
gt = gt.astype(np.float32, copy=False) - np.mean(gt)
#gt = gt - np.mean(gt)
scale = np.cov(x.flatten(), gt.flatten())[0, 1] / np.var(x.flatten())
return gt, scale * x
# Open and create the csv file that will contain all the QC metrics
with open(QC_model_path+"/"+QC_model_name+"/Quality Control/QC_metrics_"+QC_model_name+".csv", "w", newline='') as file:
writer = csv.writer(file)
# Write the header in the csv file
writer.writerow(["image #","Prediction v. GT mSSIM","Input v. GT mSSIM", "Prediction v. GT NRMSE", "Input v. GT NRMSE", "Prediction v. GT PSNR", "Input v. GT PSNR"])
# Let's loop through the provided dataset in the QC folders
for i in os.listdir(Source_QC_folder):
if not os.path.isdir(os.path.join(Source_QC_folder,i)):
print('Running QC on: '+i)
# -------------------------------- Target test data (Ground truth) --------------------------------
test_GT = io.imread(os.path.join(Target_QC_folder, i))
# -------------------------------- Source test data --------------------------------
test_source = io.imread(os.path.join(Source_QC_folder,i))
# Normalize the images wrt each other by minimizing the MSE between GT and Source image
test_GT_norm,test_source_norm = norm_minmse(test_GT, test_source, normalize_gt=True)
# -------------------------------- Prediction --------------------------------
test_prediction = io.imread(os.path.join(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction",i))
# Normalize the images wrt each other by minimizing the MSE between GT and prediction
test_GT_norm,test_prediction_norm = norm_minmse(test_GT, test_prediction, normalize_gt=True)
# -------------------------------- Calculate the metric maps and save them --------------------------------
# Calculate the SSIM maps
index_SSIM_GTvsPrediction, img_SSIM_GTvsPrediction = ssim(test_GT_norm, test_prediction_norm)
index_SSIM_GTvsSource, img_SSIM_GTvsSource = ssim(test_GT_norm, test_source_norm)
#Save ssim_maps
img_SSIM_GTvsPrediction_32bit = np.float32(img_SSIM_GTvsPrediction)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/SSIM_GTvsPrediction_'+i,img_SSIM_GTvsPrediction_32bit)
img_SSIM_GTvsSource_32bit = np.float32(img_SSIM_GTvsSource)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/SSIM_GTvsSource_'+i,img_SSIM_GTvsSource_32bit)
# Calculate the Root Squared Error (RSE) maps
img_RSE_GTvsPrediction = np.sqrt(np.square(test_GT_norm - test_prediction_norm))
img_RSE_GTvsSource = np.sqrt(np.square(test_GT_norm - test_source_norm))
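            # (element-wise sqrt(square(.)) is simply the per-pixel absolute difference)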
# Save SE maps
img_RSE_GTvsPrediction_32bit = np.float32(img_RSE_GTvsPrediction)
img_RSE_GTvsSource_32bit = np.float32(img_RSE_GTvsSource)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/RSE_GTvsPrediction_'+i,img_RSE_GTvsPrediction_32bit)
io.imsave(QC_model_path+'/'+QC_model_name+'/Quality Control/RSE_GTvsSource_'+i,img_RSE_GTvsSource_32bit)
# -------------------------------- Calculate the RSE metrics and save them --------------------------------
# Normalised Root Mean Squared Error (here it's valid to take the mean of the image)
NRMSE_GTvsPrediction = np.sqrt(np.mean(img_RSE_GTvsPrediction))
NRMSE_GTvsSource = np.sqrt(np.mean(img_RSE_GTvsSource))
# We can also measure the peak signal to noise ratio between the images
PSNR_GTvsPrediction = psnr(test_GT_norm,test_prediction_norm,data_range=1.0)
PSNR_GTvsSource = psnr(test_GT_norm,test_source_norm,data_range=1.0)
writer.writerow([i,str(index_SSIM_GTvsPrediction),str(index_SSIM_GTvsSource),str(NRMSE_GTvsPrediction),str(NRMSE_GTvsSource),str(PSNR_GTvsPrediction),str(PSNR_GTvsSource)])
# All data is now processed and saved
Test_FileList = os.listdir(Source_QC_folder) # this assumes, as it should, that both source and target are named the same
plt.figure(figsize=(20,20))
# Currently only displays the last computed set, from memory
# Target (Ground-truth)
plt.subplot(3,3,1)
plt.axis('off')
img_GT = io.imread(os.path.join(Target_QC_folder, Test_FileList[-1]))
plt.imshow(img_GT, norm=simple_norm(img_GT, percent = 99))
plt.title('Target',fontsize=15)
# Source
plt.subplot(3,3,2)
plt.axis('off')
img_Source = io.imread(os.path.join(Source_QC_folder, Test_FileList[-1]))
plt.imshow(img_Source, norm=simple_norm(img_Source, percent = 99))
plt.title('Source',fontsize=15)
#Prediction
plt.subplot(3,3,3)
plt.axis('off')
img_Prediction = io.imread(os.path.join(QC_model_path+"/"+QC_model_name+"/Quality Control/Prediction/", Test_FileList[-1]))
plt.imshow(img_Prediction, norm=simple_norm(img_Prediction, percent = 99))
plt.title('Prediction',fontsize=15)
#Setting up colours
cmap = plt.cm.CMRmap
#SSIM between GT and Source
plt.subplot(3,3,5)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imSSIM_GTvsSource = plt.imshow(img_SSIM_GTvsSource, cmap = cmap, vmin=0, vmax=1)
plt.colorbar(imSSIM_GTvsSource,fraction=0.046, pad=0.04)
plt.title('Target vs. Source',fontsize=15)
plt.xlabel('mSSIM: '+str(round(index_SSIM_GTvsSource,3)),fontsize=14)
plt.ylabel('SSIM maps',fontsize=20, rotation=0, labelpad=75)
#SSIM between GT and Prediction
plt.subplot(3,3,6)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imSSIM_GTvsPrediction = plt.imshow(img_SSIM_GTvsPrediction, cmap = cmap, vmin=0,vmax=1)
plt.colorbar(imSSIM_GTvsPrediction,fraction=0.046, pad=0.04)
plt.title('Target vs. Prediction',fontsize=15)
plt.xlabel('mSSIM: '+str(round(index_SSIM_GTvsPrediction,3)),fontsize=14)
#Root Squared Error between GT and Source
plt.subplot(3,3,8)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imRSE_GTvsSource = plt.imshow(img_RSE_GTvsSource, cmap = cmap, vmin=0, vmax = 1)
plt.colorbar(imRSE_GTvsSource,fraction=0.046,pad=0.04)
plt.title('Target vs. Source',fontsize=15)
plt.xlabel('NRMSE: '+str(round(NRMSE_GTvsSource,3))+', PSNR: '+str(round(PSNR_GTvsSource,3)),fontsize=14)
#plt.title('Target vs. Source PSNR: '+str(round(PSNR_GTvsSource,3)))
plt.ylabel('RSE maps',fontsize=20, rotation=0, labelpad=75)
#Root Squared Error between GT and Prediction
plt.subplot(3,3,9)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
imRSE_GTvsPrediction = plt.imshow(img_RSE_GTvsPrediction, cmap = cmap, vmin=0, vmax=1)
plt.colorbar(imRSE_GTvsPrediction,fraction=0.046,pad=0.04)
plt.title('Target vs. Prediction',fontsize=15)
plt.xlabel('NRMSE: '+str(round(NRMSE_GTvsPrediction,3))+', PSNR: '+str(round(PSNR_GTvsPrediction,3)),fontsize=14)
###Output
_____no_output_____
###Markdown
**6. Using the trained model** --- In this section, the unseen data is processed using the trained model (from section 4). First, your unseen images are uploaded and prepared for prediction. After that, your trained model from section 4 is activated and finally saved into your Google Drive. **6.1. Generate prediction(s) from unseen dataset** --- The current trained model (from section 4.2) can now be used to process images. If you want to use an older model, untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. Predicted output images are saved in your **Result_folder** folder as restored image stacks (ImageJ-compatible TIFF images). **`Data_folder`:** This folder should contain the images that you want to use your trained network on for processing. **`Result_folder`:** This folder will contain the predicted output images.
###Code
#@markdown ### Provide the path to your dataset and to the folder where the predictions are saved, then play the cell to predict outputs from your unseen images.
Data_folder = "" #@param {type:"string"}
Result_folder = "" #@param {type:"string"}
# model name and path
#@markdown ###Do you want to use the current trained model?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
Prediction_model_folder = "" #@param {type:"string"}
#Here we find the loaded model name and parent path
Prediction_model_name = os.path.basename(Prediction_model_folder)
Prediction_model_path = os.path.dirname(Prediction_model_folder)
if (Use_the_current_trained_model):
print("Using current trained network")
Prediction_model_name = model_name
Prediction_model_path = model_path
full_Prediction_model_path = os.path.join(Prediction_model_path, Prediction_model_name)
if os.path.exists(full_Prediction_model_path):
print("The "+Prediction_model_name+" network will be used.")
else:
W = '\033[0m' # white (normal)
R = '\033[31m' # red
print(R+'!! WARNING: The chosen model does not exist !!'+W)
print('Please make sure you provide a valid model path and model name before proceeding further.')
#Activate the pretrained model.
model_training = CARE(config=None, name=Prediction_model_name, basedir=Prediction_model_path)
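# config=None tells CARE to load the existing trained model from basedir/name instead of creating a new one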
# creates a loop, creating filenames and saving them
for filename in os.listdir(Data_folder):
img = imread(os.path.join(Data_folder,filename))
restored = model_training.predict(img, axes='YX')
os.chdir(Result_folder)
imsave(filename,restored)
print("Images saved into folder:", Result_folder)
###Output
_____no_output_____
###Markdown
**6.2. Inspect the predicted output**---
###Code
# @markdown ##Run this cell to display a randomly chosen input and its corresponding predicted output.
# This will display a randomly chosen dataset input and predicted output
random_choice = random.choice(os.listdir(Data_folder))
x = imread(Data_folder+"/"+random_choice)
os.chdir(Result_folder)
y = imread(Result_folder+"/"+random_choice)
plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
plt.axis('off')
plt.imshow(x, norm=simple_norm(x, percent = 99), interpolation='nearest')
plt.title('Input')
plt.subplot(1,2,2)
plt.axis('off')
plt.imshow(y, norm=simple_norm(y, percent = 99), interpolation='nearest')
plt.title('Predicted output');
###Output
_____no_output_____ |
site/en-snapshot/tensorboard/graphs.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Examining the TensorFlow Graph View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Overview: TensorBoard’s **Graphs dashboard** is a powerful tool for examining your TensorFlow model. You can quickly view a conceptual graph of your model’s structure and ensure it matches your intended design. You can also view an op-level graph to understand how TensorFlow understands your program. Examining the op-level graph can give you insight as to how to change your model. For example, you can redesign your model if training is progressing slower than expected. This tutorial presents a quick overview of how to generate graph diagnostic data and visualize it in TensorBoard’s Graphs dashboard. You’ll define and train a simple Keras Sequential model for the Fashion-MNIST dataset and learn how to log and examine your model graphs. You will also use a tracing API to generate graph data for functions created using the new `tf.function` annotation. Setup
###Code
# Load the TensorBoard notebook extension.
%load_ext tensorboard
from datetime import datetime
from packaging import version
import tensorflow as tf
from tensorflow import keras
print("TensorFlow version: ", tf.__version__)
assert version.parse(tf.__version__).release[0] >= 2, \
"This notebook requires TensorFlow 2.0 or above."
import tensorboard
tensorboard.__version__
# Clear any logs from previous runs
!rm -rf ./logs/
###Output
_____no_output_____
###Markdown
Define a Keras model. In this example, the classifier is a simple four-layer Sequential model.
###Code
# Define the model.
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(32, activation='relu'),
keras.layers.Dropout(0.2),
keras.layers.Dense(10, activation='softmax')
])
model.compile(
optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Download and prepare the training data.
###Code
(train_images, train_labels), _ = keras.datasets.fashion_mnist.load_data()
train_images = train_images / 255.0
###Output
_____no_output_____
###Markdown
Train the model and log data. Before training, define the [Keras TensorBoard callback](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/TensorBoard), specifying the log directory. By passing this callback to Model.fit(), you ensure that graph data is logged for visualization in TensorBoard.
###Code
# Define the Keras TensorBoard callback.
logdir="logs/fit/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
# Train the model.
model.fit(
train_images,
train_labels,
batch_size=64,
epochs=5,
callbacks=[tensorboard_callback])
###Output
Epoch 1/5
938/938 [==============================] - 2s 2ms/step - loss: 0.6955 - accuracy: 0.7618
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.4877 - accuracy: 0.8296
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.4458 - accuracy: 0.8414
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.4246 - accuracy: 0.8476
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.4117 - accuracy: 0.8508
###Markdown
Op-level graph. Start TensorBoard and wait a few seconds for the UI to load. Select the Graphs dashboard by tapping “Graphs” at the top.
###Code
%tensorboard --logdir logs
###Output
_____no_output_____
###Markdown
You can also optionally use TensorBoard.dev to create a hosted, shareable experiment.
###Code
!tensorboard dev upload \
--logdir logs \
--name "Sample op-level graph" \
--one_shot
###Output
_____no_output_____
###Markdown
By default, TensorBoard displays the **op-level graph**. (On the left, you can see the “Default” tag selected.) Note that the graph is inverted; data flows from bottom to top, so it’s upside down compared to the code. However, you can see that the graph closely matches the Keras model definition, with extra edges to other computation nodes.
Graphs are often very large, so you can manipulate the graph visualization:
* Scroll to **zoom** in and out
* Drag to **pan**
* Double clicking toggles **node expansion** (a node can be a container for other nodes)
You can also see metadata by clicking on a node. This allows you to see inputs, outputs, shapes and other details.
Conceptual graph: In addition to the execution graph, TensorBoard also displays a **conceptual graph**. This is a view of just the Keras model. This may be useful if you’re reusing a saved model and you want to examine or validate its structure. To see the conceptual graph, select the “keras” tag. For this example, you’ll see a collapsed **Sequential** node. Double-click the node to see the model’s structure.
Graphs of tf.functions: The examples so far have described graphs of Keras models, where the graphs have been created by defining Keras layers and calling Model.fit(). You may encounter a situation where you need to use the `tf.function` annotation to ["autograph"](https://www.tensorflow.org/guide/function), i.e., transform, a Python computation function into a high-performance TensorFlow graph. For these situations, you use the **TensorFlow Summary Trace API** to log autographed functions for visualization in TensorBoard. To use the Summary Trace API:
* Define and annotate a function with `tf.function`
* Use `tf.summary.trace_on()` immediately before your function call site.
* Add profile information (memory, CPU time) to the graph by passing `profiler=True`
* With a Summary file writer, call `tf.summary.trace_export()` to save the log data
You can then use TensorBoard to see how your function behaves.
###Code
# The function to be traced.
@tf.function
def my_func(x, y):
# A simple hand-rolled layer.
return tf.nn.relu(tf.matmul(x, y))
# Set up logging.
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
logdir = 'logs/func/%s' % stamp
writer = tf.summary.create_file_writer(logdir)
# Sample data for your function.
x = tf.random.uniform((3, 3))
y = tf.random.uniform((3, 3))
# Bracket the function call with
# tf.summary.trace_on() and tf.summary.trace_export().
tf.summary.trace_on(graph=True, profiler=True)
# Call only one tf.function when tracing.
z = my_func(x, y)
with writer.as_default():
tf.summary.trace_export(
name="my_func_trace",
step=0,
profiler_outdir=logdir)
%tensorboard --logdir logs/func
###Output
_____no_output_____ |
kulina.ipynb | ###Markdown
Overview: The key to delivery efficiency is that each customer must first be assigned to its nearest kitchen. We do this by sorting customers by their distance from the customer center point (computed from the summed lat/long), farthest first. That solution is not yet optimal, but it is close; the optimal solution would be to sort starting from the outermost customer. Each customer is then assigned to its nearest kitchen; if that kitchen is already full, the customer is assigned to the second-nearest kitchen, and so on. This yields, for each kitchen, a group of customers assigned to it. Drivers are then assigned per group based on degree (angle) and distance, not on distance alone, in order to keep the delivery time within the 1-hour window.
Grouping customers to the best kitchen
###Code
# Find the center point (centroid) of all customers
# long
long_centroid = sum(customer['long'])/len(customer)
# lat
lat_centroid = sum(customer['lat'])/len(customer)
import bokeh.plotting as bk
from bokeh.plotting import figure, show, output_file
bk.output_notebook()
def mscatter(p, x, y, marker,color):
p.scatter(x, y, marker=marker, size=10,
line_color="black", fill_color=color, alpha=0.5)
p = figure(title="Persebaran Customer dan Kitchen")
p.grid.grid_line_color = None
p.background_fill_color = "#eeeeee"
#p.axis.visible = False
mscatter(p, customer['long'], customer['lat'], "circle", "red")
mscatter(p, long_centroid, lat_centroid, "square", "blue")
show(p)
wgs84_geod = Geod(ellps='WGS84') # Distance will be measured on this ellipsoid - reportedly more accurate than a spherical method
#Get distance between pairs of lat-lon points
def Distance(lat1,lon1,lat2,lon2):
az12,az21,dist = wgs84_geod.inv(lon1,lat1,lon2,lat2)
return dist
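# Example (illustrative coordinates): geodesic distance in metres between two points in Jakarta
# Distance(-6.2, 106.8, -6.15, 106.85)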
#Add/update a column to the data frame with the distances (in metres)
customer1['dist0'] = Distance(customer1['lat'].tolist(),customer1['long'].tolist(),[kitchen['lat'].iloc[0]]*len(customer),[kitchen['long'].iloc[0]]*len(customer))
customer1['dist1'] = Distance(customer1['lat'].tolist(),customer1['long'].tolist(),[kitchen['lat'].iloc[1]]*len(customer),[kitchen['long'].iloc[1]]*len(customer))
customer1['dist2'] = Distance(customer1['lat'].tolist(),customer1['long'].tolist(),[kitchen['lat'].iloc[2]]*len(customer),[kitchen['long'].iloc[2]]*len(customer))
customer1['dist3'] = Distance(customer1['lat'].tolist(),customer1['long'].tolist(),[kitchen['lat'].iloc[3]]*len(customer),[kitchen['long'].iloc[3]]*len(customer))
customer1['dist4'] = Distance(customer1['lat'].tolist(),customer1['long'].tolist(),[kitchen['lat'].iloc[4]]*len(customer),[kitchen['long'].iloc[4]]*len(customer))
customer1['dist5'] = Distance(customer1['lat'].tolist(),customer1['long'].tolist(),[kitchen['lat'].iloc[5]]*len(customer),[kitchen['long'].iloc[5]]*len(customer))
customer1['dist6'] = Distance(customer1['lat'].tolist(),customer1['long'].tolist(),[kitchen['lat'].iloc[6]]*len(customer),[kitchen['long'].iloc[6]]*len(customer))
# Minimum distance
#customer1['Minimum'] = customer1.loc[:, ['dist0', 'dist1', 'dist2', 'dist3', 'dist4', 'dist5', 'dist6']].min(axis=1)
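# Keep, for each customer, the distances to its 3 nearest kitchens (columns: nearest, 2nearest, 3nearest)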
a = pd.DataFrame(np.sort(customer1[['dist0','dist1','dist2','dist3','dist4','dist5','dist6']].values)[:,:3], columns=['nearest','2nearest', '3nearest'])
customer1 = customer1.join(a)
customer1.head()
print(kitchen1)
# Find distance from customer point to central customer point
customer1['distSort'] = Distance(customer1['lat'].tolist(),customer1['long'].tolist(),[lat_centroid]*len(customer),[long_centroid]*len(customer))
#np.sqrt( (customer.long-long_centroid)**2 + (customer.lat-lat_centroid)**2)
# Sort by longest distance
customer1 = customer1.sort_values(['distSort'], ascending=False)
customer1.reset_index(drop=True, inplace=True)
customer1.head()
# Data already sorted from outermost customer
# For each row, assign the customer to the nearest kitchen;
# if that kitchen is already full, assign the customer to the second nearest kitchen, and so on.
# NOT FINISHED YET
clusters = []
# capacity counters are still tracked manually per kitchen
cap0 = 0
cap1 = 0
cap2 = 0
cap3 = 0
cap4 = 0
cap5 = 0
cap6 = 0
cluster=8 # placeholder value, initialization only
scndCluster=8 # placeholder value, initialization only
for i in customer1.index:
if customer1['nearest'].loc[i]==customer1['dist0'].loc[i]:
cluster=0
elif customer1['nearest'].loc[i]==customer1['dist1'].loc[i]:
cluster=1
elif customer1['nearest'].loc[i]==customer1['dist2'].loc[i]:
cluster=2
elif customer1['nearest'].loc[i]==customer1['dist3'].loc[i]:
cluster=3
elif customer1['nearest'].loc[i]==customer1['dist4'].loc[i]:
cluster=4
elif customer1['nearest'].loc[i]==customer1['dist5'].loc[i]:
cluster=5
# if customer1['nearest'].loc[i]==customer1['dist6'].loc[i]:
# cluster=6
if customer1['2nearest'].loc[i]==customer1['dist0'].loc[i]:
scndCluster=0
elif customer1['2nearest'].loc[i]==customer1['dist1'].loc[i]:
scndCluster=1
elif customer1['2nearest'].loc[i]==customer1['dist2'].loc[i]:
scndCluster=2
elif customer1['2nearest'].loc[i]==customer1['dist3'].loc[i]:
scndCluster=3
elif customer1['2nearest'].loc[i]==customer1['dist4'].loc[i]:
scndCluster=4
elif customer1['2nearest'].loc[i]==customer1['dist5'].loc[i]:
scndCluster=5
# if customer1['2nearest'].loc[i]==customer1['dist6'].loc[i]:
# scndCluster=6
if customer1['3nearest'].loc[i]==customer1['dist0'].loc[i]:
trdCluster=0
elif customer1['3nearest'].loc[i]==customer1['dist1'].loc[i]:
trdCluster=1
elif customer1['3nearest'].loc[i]==customer1['dist2'].loc[i]:
trdCluster=2
elif customer1['3nearest'].loc[i]==customer1['dist3'].loc[i]:
trdCluster=3
elif customer1['3nearest'].loc[i]==customer1['dist4'].loc[i]:
trdCluster=4
elif customer1['3nearest'].loc[i]==customer1['dist5'].loc[i]:
trdCluster=5
# if customer1['3nearest'].loc[i]==customer1['dist6'].loc[i]:
# trdCluster=6
# Assign to nearest kitchen if not yet full
if (cluster==0) and (cap0<100):
cap0=cap0+customer1['qtyOrdered'].loc[i]
elif (cluster==1) and (cap1<40):
cap1=cap1+customer1['qtyOrdered'].loc[i]
elif (cluster==2) and (cap2<60):
cap2=cap2+customer1['qtyOrdered'].loc[i]
elif (cluster==3) and (cap3<70):
cap3=cap3+customer1['qtyOrdered'].loc[i]
elif (cluster==4) and (cap4<80):
cap4=cap4+customer1['qtyOrdered'].loc[i]
elif (cluster==5) and (cap5<50):
cap5=cap5+customer1['qtyOrdered'].loc[i]
elif (cluster==6) and (cap6<50):
cap6=cap6+customer1['qtyOrdered'].loc[i]
# if full assign to 2nd nearest kitchen
if (cluster==0) and (cap0>100):
cluster=scndCluster
scndCluster=10
elif (cluster==1) and (cap1>40):
cluster=scndCluster
scndCluster=10
elif (cluster==2) and (cap2>60):
cluster=scndCluster
scndCluster=10
elif (cluster==3) and (cap3>70):
cluster=scndCluster
scndCluster=10
elif (cluster==4) and (cap4>80):
cluster=scndCluster
scndCluster=10
elif (cluster==5) and (cap5>50):
cluster=scndCluster
scndCluster=10
elif (cluster==6) and (cap6>50):
cluster=scndCluster
scndCluster=10
# if 2nd nearest also full assign to 3rd nearest
#
# if (cluster==0) and (cap0>100) and (scndCluster==10):
# cluster=trdCluster
# trdCluster=10
# if (cluster==1) and (cap1>40) and (scndCluster==10):
# cluster=trdCluster
# trdCluster=10
# if (cluster==2) and (cap2>60):
# cluster=trdCluster
# trdCluster=10
# if (cluster==3) and (cap3>70):
# cluster=trdCluster
# trdCluster=10
# if (cluster==4) and (cap4>80):
# cluster=trdCluster
# trdCluster=10
# if (cluster==5) and (cap5>50):
# cluster=trdCluster
# trdCluster=10
# if (cluster==6) and (cap6>50):
# cluster=trdCluster
# trdCluster=10
# count if 2nd nearest
if (cluster==0) and (scndCluster==10) and (trdCluster!=10):
cap0=cap0+customer1['qtyOrdered'].loc[i]
elif (cluster==1) and (scndCluster==10) and (trdCluster!=10):
cap1=cap1+customer1['qtyOrdered'].loc[i]
elif (cluster==2) and (scndCluster==10) and (trdCluster!=10):
cap2=cap2+customer1['qtyOrdered'].loc[i]
elif (cluster==3) and (scndCluster==10) and (trdCluster!=10):
cap3=cap3+customer1['qtyOrdered'].loc[i]
elif (cluster==4) and (scndCluster==10) and (trdCluster!=10):
cap4=cap4+customer1['qtyOrdered'].loc[i]
elif (cluster==5) and (scndCluster==10) and (trdCluster!=10):
cap5=cap5+customer1['qtyOrdered'].loc[i]
elif (cluster==6) and (scndCluster==10) and (trdCluster!=10):
cap6=cap6+customer1['qtyOrdered'].loc[i]
# count if 3rd nearest
#
# if (cluster==0) and (scndCluster==10) and (trdCluster==10):
# cap0=cap0+customer1['qtyOrdered'].loc[i]
# if (cluster==1) and (scndCluster==10) and (trdCluster==10):
# cap1=cap1+customer1['qtyOrdered'].loc[i]
# if (cluster==2) and (scndCluster==10) and (trdCluster==10):
# cap2=cap2+customer1['qtyOrdered'].loc[i]
# if (cluster==3) and (scndCluster==10) and (trdCluster==10):
# cap3=cap3+customer1['qtyOrdered'].loc[i]
# if (cluster==4) and (scndCluster==10) and (trdCluster==10):
# cap4=cap4+customer1['qtyOrdered'].loc[i]
# if (cluster==5) and (scndCluster==10) and (trdCluster==10):
# cap5=cap5+customer1['qtyOrdered'].loc[i]
# if (cluster==6) and (scndCluster==10) and (trdCluster==10):
# cap6=cap6+customer1['qtyOrdered'].loc[i]
clusters.append(cluster)
customer1['cluster'] = clusters
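# For reference, a compact sketch of the same greedy rule described in the Overview.
# Illustrative only and not used below; it assumes the dist0..dist6 columns computed above,
# capacities from kitchen1['maxCapacity'], and the numpy/pandas already imported in this notebook.
def assign_customers(customers, kitchens, cap_col='maxCapacity'):
    dist_cols = [c for c in customers.columns if c.startswith('dist') and c != 'distSort']
    remaining = kitchens[cap_col].astype(float)
    assigned = []
    for _, row in customers.iterrows():              # customers are already sorted outermost-first
        for k in np.argsort(row[dist_cols].values):  # kitchens from nearest to farthest
            if remaining.iloc[k] >= row['qtyOrdered']:
                remaining.iloc[k] -= row['qtyOrdered']
                assigned.append(k)
                break
        else:
            assigned.append(-1)                      # no kitchen has spare capacity left
    return assigned
# e.g. customer1['cluster_alt'] = assign_customers(customer1, kitchen1)  # (not executed here)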
print(cap0+cap1+cap2+cap3+cap4+cap5)
customer1['qtyOrdered'].sum()
customer1.head()
# Data visulization customer assigned to its kitchen
def visualize(data):
x = data['long']
y = data['lat']
Cluster = data['cluster']
fig = plt.figure()
ax = fig.add_subplot(111)
scatter = ax.scatter(x,y,c=Cluster, cmap=plt.cm.Paired, s=10, label='customer')
    ax.scatter(kitchen['long'], kitchen['lat'], s=10, c='r', marker="x", label='kitchen')
ax.set_xlabel('longitude')
ax.set_ylabel('latitude')
plt.colorbar(scatter)
fig.show()
# Visualization Example customer assigned to kitchen (without following constraint)
# THIS IS ONLY EXAMPLE
#y = kitchen['kitchenName']
#X = pd.DataFrame(kitchen.drop('kitchenName', axis=1))
#clf = NearestCentroid()
#clf.fit(X, y)
#pred = clf.predict(customer)
#customer1['cluster'] = pd.Series(pred, index=customer1.index)
#customer['cluster'] = pd.Series(pred, index=customer.index)
visualize(customer1)
# Count customer order assigned to Kitchen
dapurMiji = (customer1.where(customer1['cluster'] == 0))['qtyOrdered'].sum()
dapurNusantara = (customer1.where(customer1['cluster'] == 1))['qtyOrdered'].sum()
familiaCatering = (customer1.where(customer1['cluster'] == 2))['qtyOrdered'].sum()
pondokRawon = (customer1.where(customer1['cluster'] == 3))['qtyOrdered'].sum()
roseCatering = (customer1.where(customer1['cluster'] == 4))['qtyOrdered'].sum()
tigaKitchenCatering = (customer1.where(customer1['cluster'] == 5))['qtyOrdered'].sum()
ummuUwais = (customer1.where(customer1['cluster'] == 6))['qtyOrdered'].sum()
d = {'Dapur Miji': dapurMiji , 'Dapur Nusantara': dapurNusantara, 'Familia Catering': familiaCatering, 'Pondok Rawon': pondokRawon,'Rose Catering': roseCatering, 'Tiga Kitchen Catering': tigaKitchenCatering, 'Ummu Uwais': ummuUwais}
print(customer1.cluster.value_counts())
# Print sum of assigned
print(d)
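# Equivalent, more compact way to get the same per-kitchen totals:
# customer1.groupby('cluster')['qtyOrdered'].sum()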
print(kitchen1)
###Output
kitchenName long lat minCapacity maxCapacity \
0 Dapur Miji 106.814653 -6.150735 50 100
1 Dapur Nusantara 106.834772 -6.279489 30 40
2 Familia Catering 106.793820 -6.192896 50 60
3 Pondok Rawon 106.826822 -6.224094 50 70
4 Rose Catering 106.795993 -6.157473 70 80
5 Tiga Kitchen Catering 106.847226 -6.184124 30 50
6 Ummu Uwais 106.914214 -6.256911 20 50
tolerance
0 105
1 45
2 65
3 75
4 85
5 55
6 55
###Markdown
Assign driver in group based on degree and distance
###Code
# Get the degree (polar angle) of each customer in the cluster around its kitchen
def getDegree(data, center_latitude, center_longitude):
    # center_latitude / center_longitude = the kitchen location (start of routing), passed per kitchen
    data['degrees'] = np.rint(np.rad2deg(np.arctan2(data['lat'] - center_latitude,
                                                    data['long'] - center_longitude)))
    return data
# Assign drivers from each kitchen to its customers based on degree (angle) and distance
# The main priority is the degree, so no driver ends up serving only the nearby customers
# Optimizing for the 1-hour maximum delivery window is not handled yet, but at least driver distances are roughly balanced
# Special case: if a customer has a small degree but is very far away, it gets a new driver.
# NOT FINISHED YET (see the sketch below)
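# A minimal sketch of the driver assignment described above (illustrative only; it assumes one
# driver carries at most `driver_capacity` meals and serves one contiguous angular slice of a
# kitchen's customers; the 1-hour window and the "far but small degree" case are not modelled).
def assign_drivers(group, kitchen_lat, kitchen_long, driver_capacity=10):
    g = getDegree(group.copy(), kitchen_lat, kitchen_long).sort_values('degrees')
    drivers, load, current = [], 0, 0
    for qty in g['qtyOrdered']:
        if load + qty > driver_capacity:   # current driver is full -> start a new route
            current, load = current + 1, 0
        drivers.append(current)
        load += qty
    g['driver'] = drivers
    return g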
###Output
_____no_output_____ |
nbs/012_data.external.ipynb | ###Markdown
External data > Helper functions used to download and extract common time series datasets.
###Code
#export
from tqdm import tqdm
import zipfile
import tempfile
try: from urllib import urlretrieve
except ImportError: from urllib.request import urlretrieve
import shutil
import distutils
from tsai.imports import *
from tsai.utils import *
from tsai.data.validation import *
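# Illustrative helper (not the actual tsai API; the name and defaults are assumptions) showing
# how the imports above are typically combined to download a zip archive and extract it.
import os
from pathlib import Path
def _download_and_unzip(url, dest='./data'):
    dest = Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    fname = dest/Path(url).name                    # e.g. ./data/SomeDataset.zip
    urlretrieve(url, fname)                        # download the archive
    with zipfile.ZipFile(fname) as zf:
        zf.extractall(dest)                        # extract all members under `dest`
    os.remove(fname)                               # remove the downloaded archive
    return dest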
#export
# This code was adapted from https://github.com/ChangWeiTan/TSRegression.
# It's used to load time series examples to demonstrate tsai's functionality.
# Copyright for above source is below.
# GNU GENERAL PUBLIC LICENSE
# Version 3, 29 June 2007
# Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
# Everyone is permitted to copy and distribute verbatim copies
# of this license document, but changing it is not allowed.
# Preamble
# The GNU General Public License is a free, copyleft license for
# software and other kinds of works.
# The licenses for most software and other practical works are designed
# to take away your freedom to share and change the works. By contrast,
# the GNU General Public License is intended to guarantee your freedom to
# share and change all versions of a program--to make sure it remains free
# software for all its users. We, the Free Software Foundation, use the
# GNU General Public License for most of our software; it applies also to
# any other work released this way by its authors. You can apply it to
# your programs, too.
# When we speak of free software, we are referring to freedom, not
# price. Our General Public Licenses are designed to make sure that you
# have the freedom to distribute copies of free software (and charge for
# them if you wish), that you receive source code or can get it if you
# want it, that you can change the software or use pieces of it in new
# free programs, and that you know you can do these things.
# To protect your rights, we need to prevent others from denying you
# these rights or asking you to surrender the rights. Therefore, you have
# certain responsibilities if you distribute copies of the software, or if
# you modify it: responsibilities to respect the freedom of others.
# For example, if you distribute copies of such a program, whether
# gratis or for a fee, you must pass on to the recipients the same
# freedoms that you received. You must make sure that they, too, receive
# or can get the source code. And you must show them these terms so they
# know their rights.
# Developers that use the GNU GPL protect your rights with two steps:
# (1) assert copyright on the software, and (2) offer you this License
# giving you legal permission to copy, distribute and/or modify it.
# For the developers' and authors' protection, the GPL clearly explains
# that there is no warranty for this free software. For both users' and
# authors' sake, the GPL requires that modified versions be marked as
# changed, so that their problems will not be attributed erroneously to
# authors of previous versions.
# Some devices are designed to deny users access to install or run
# modified versions of the software inside them, although the manufacturer
# can do so. This is fundamentally incompatible with the aim of
# protecting users' freedom to change the software. The systematic
# pattern of such abuse occurs in the area of products for individuals to
# use, which is precisely where it is most unacceptable. Therefore, we
# have designed this version of the GPL to prohibit the practice for those
# products. If such problems arise substantially in other domains, we
# stand ready to extend this provision to those domains in future versions
# of the GPL, as needed to protect the freedom of users.
# Finally, every program is threatened constantly by software patents.
# States should not allow patents to restrict development and use of
# software on general-purpose computers, but in those that do, we wish to
# avoid the special danger that patents applied to a free program could
# make it effectively proprietary. To prevent this, the GPL assures that
# patents cannot be used to render the program non-free.
# The precise terms and conditions for copying, distribution and
# modification follow.
# TERMS AND CONDITIONS
# 0. Definitions.
# "This License" refers to version 3 of the GNU General Public License.
# "Copyright" also means copyright-like laws that apply to other kinds of
# works, such as semiconductor masks.
# "The Program" refers to any copyrightable work licensed under this
# License. Each licensee is addressed as "you". "Licensees" and
# "recipients" may be individuals or organizations.
# To "modify" a work means to copy from or adapt all or part of the work
# in a fashion requiring copyright permission, other than the making of an
# exact copy. The resulting work is called a "modified version" of the
# earlier work or a work "based on" the earlier work.
# A "covered work" means either the unmodified Program or a work based
# on the Program.
# To "propagate" a work means to do anything with it that, without
# permission, would make you directly or secondarily liable for
# infringement under applicable copyright law, except executing it on a
# computer or modifying a private copy. Propagation includes copying,
# distribution (with or without modification), making available to the
# public, and in some countries other activities as well.
# To "convey" a work means any kind of propagation that enables other
# parties to make or receive copies. Mere interaction with a user through
# a computer network, with no transfer of a copy, is not conveying.
# An interactive user interface displays "Appropriate Legal Notices"
# to the extent that it includes a convenient and prominently visible
# feature that (1) displays an appropriate copyright notice, and (2)
# tells the user that there is no warranty for the work (except to the
# extent that warranties are provided), that licensees may convey the
# work under this License, and how to view a copy of this License. If
# the interface presents a list of user commands or options, such as a
# menu, a prominent item in the list meets this criterion.
# 1. Source Code.
# The "source code" for a work means the preferred form of the work
# for making modifications to it. "Object code" means any non-source
# form of a work.
# A "Standard Interface" means an interface that either is an official
# standard defined by a recognized standards body, or, in the case of
# interfaces specified for a particular programming language, one that
# is widely used among developers working in that language.
# The "System Libraries" of an executable work include anything, other
# than the work as a whole, that (a) is included in the normal form of
# packaging a Major Component, but which is not part of that Major
# Component, and (b) serves only to enable use of the work with that
# Major Component, or to implement a Standard Interface for which an
# implementation is available to the public in source code form. A
# "Major Component", in this context, means a major essential component
# (kernel, window system, and so on) of the specific operating system
# (if any) on which the executable work runs, or a compiler used to
# produce the work, or an object code interpreter used to run it.
# The "Corresponding Source" for a work in object code form means all
# the source code needed to generate, install, and (for an executable
# work) run the object code and to modify the work, including scripts to
# control those activities. However, it does not include the work's
# System Libraries, or general-purpose tools or generally available free
# programs which are used unmodified in performing those activities but
# which are not part of the work. For example, Corresponding Source
# includes interface definition files associated with source files for
# the work, and the source code for shared libraries and dynamically
# linked subprograms that the work is specifically designed to require,
# such as by intimate data communication or control flow between those
# subprograms and other parts of the work.
# The Corresponding Source need not include anything that users
# can regenerate automatically from other parts of the Corresponding
# Source.
# The Corresponding Source for a work in source code form is that
# same work.
# 2. Basic Permissions.
# All rights granted under this License are granted for the term of
# copyright on the Program, and are irrevocable provided the stated
# conditions are met. This License explicitly affirms your unlimited
# permission to run the unmodified Program. The output from running a
# covered work is covered by this License only if the output, given its
# content, constitutes a covered work. This License acknowledges your
# rights of fair use or other equivalent, as provided by copyright law.
# You may make, run and propagate covered works that you do not
# convey, without conditions so long as your license otherwise remains
# in force. You may convey covered works to others for the sole purpose
# of having them make modifications exclusively for you, or provide you
# with facilities for running those works, provided that you comply with
# the terms of this License in conveying all material for which you do
# not control copyright. Those thus making or running the covered works
# for you must do so exclusively on your behalf, under your direction
# and control, on terms that prohibit them from making any copies of
# your copyrighted material outside their relationship with you.
# Conveying under any other circumstances is permitted solely under
# the conditions stated below. Sublicensing is not allowed; section 10
# makes it unnecessary.
# 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
# No covered work shall be deemed part of an effective technological
# measure under any applicable law fulfilling obligations under article
# 11 of the WIPO copyright treaty adopted on 20 December 1996, or
# similar laws prohibiting or restricting circumvention of such
# measures.
# When you convey a covered work, you waive any legal power to forbid
# circumvention of technological measures to the extent such circumvention
# is effected by exercising rights under this License with respect to
# the covered work, and you disclaim any intention to limit operation or
# modification of the work as a means of enforcing, against the work's
# users, your or third parties' legal rights to forbid circumvention of
# technological measures.
# 4. Conveying Verbatim Copies.
# You may convey verbatim copies of the Program's source code as you
# receive it, in any medium, provided that you conspicuously and
# appropriately publish on each copy an appropriate copyright notice;
# keep intact all notices stating that this License and any
# non-permissive terms added in accord with section 7 apply to the code;
# keep intact all notices of the absence of any warranty; and give all
# recipients a copy of this License along with the Program.
# You may charge any price or no price for each copy that you convey,
# and you may offer support or warranty protection for a fee.
# 5. Conveying Modified Source Versions.
# You may convey a work based on the Program, or the modifications to
# produce it from the Program, in the form of source code under the
# terms of section 4, provided that you also meet all of these conditions:
# a) The work must carry prominent notices stating that you modified
# it, and giving a relevant date.
# b) The work must carry prominent notices stating that it is
# released under this License and any conditions added under section
# 7. This requirement modifies the requirement in section 4 to
# "keep intact all notices".
# c) You must license the entire work, as a whole, under this
# License to anyone who comes into possession of a copy. This
# License will therefore apply, along with any applicable section 7
# additional terms, to the whole of the work, and all its parts,
# regardless of how they are packaged. This License gives no
# permission to license the work in any other way, but it does not
# invalidate such permission if you have separately received it.
# d) If the work has interactive user interfaces, each must display
# Appropriate Legal Notices; however, if the Program has interactive
# interfaces that do not display Appropriate Legal Notices, your
# work need not make them do so.
# A compilation of a covered work with other separate and independent
# works, which are not by their nature extensions of the covered work,
# and which are not combined with it such as to form a larger program,
# in or on a volume of a storage or distribution medium, is called an
# "aggregate" if the compilation and its resulting copyright are not
# used to limit the access or legal rights of the compilation's users
# beyond what the individual works permit. Inclusion of a covered work
# in an aggregate does not cause this License to apply to the other
# parts of the aggregate.
# 6. Conveying Non-Source Forms.
# You may convey a covered work in object code form under the terms
# of sections 4 and 5, provided that you also convey the
# machine-readable Corresponding Source under the terms of this License,
# in one of these ways:
# a) Convey the object code in, or embodied in, a physical product
# (including a physical distribution medium), accompanied by the
# Corresponding Source fixed on a durable physical medium
# customarily used for software interchange.
# b) Convey the object code in, or embodied in, a physical product
# (including a physical distribution medium), accompanied by a
# written offer, valid for at least three years and valid for as
# long as you offer spare parts or customer support for that product
# model, to give anyone who possesses the object code either (1) a
# copy of the Corresponding Source for all the software in the
# product that is covered by this License, on a durable physical
# medium customarily used for software interchange, for a price no
# more than your reasonable cost of physically performing this
# conveying of source, or (2) access to copy the
# Corresponding Source from a network server at no charge.
# c) Convey individual copies of the object code with a copy of the
# written offer to provide the Corresponding Source. This
# alternative is allowed only occasionally and noncommercially, and
# only if you received the object code with such an offer, in accord
# with subsection 6b.
# d) Convey the object code by offering access from a designated
# place (gratis or for a charge), and offer equivalent access to the
# Corresponding Source in the same way through the same place at no
# further charge. You need not require recipients to copy the
# Corresponding Source along with the object code. If the place to
# copy the object code is a network server, the Corresponding Source
# may be on a different server (operated by you or a third party)
# that supports equivalent copying facilities, provided you maintain
# clear directions next to the object code saying where to find the
# Corresponding Source. Regardless of what server hosts the
# Corresponding Source, you remain obligated to ensure that it is
# available for as long as needed to satisfy these requirements.
# e) Convey the object code using peer-to-peer transmission, provided
# you inform other peers where the object code and Corresponding
# Source of the work are being offered to the general public at no
# charge under subsection 6d.
# A separable portion of the object code, whose source code is excluded
# from the Corresponding Source as a System Library, need not be
# included in conveying the object code work.
# A "User Product" is either (1) a "consumer product", which means any
# tangible personal property which is normally used for personal, family,
# or household purposes, or (2) anything designed or sold for incorporation
# into a dwelling. In determining whether a product is a consumer product,
# doubtful cases shall be resolved in favor of coverage. For a particular
# product received by a particular user, "normally used" refers to a
# typical or common use of that class of product, regardless of the status
# of the particular user or of the way in which the particular user
# actually uses, or expects or is expected to use, the product. A product
# is a consumer product regardless of whether the product has substantial
# commercial, industrial or non-consumer uses, unless such uses represent
# the only significant mode of use of the product.
# "Installation Information" for a User Product means any methods,
# procedures, authorization keys, or other information required to install
# and execute modified versions of a covered work in that User Product from
# a modified version of its Corresponding Source. The information must
# suffice to ensure that the continued functioning of the modified object
# code is in no case prevented or interfered with solely because
# modification has been made.
# If you convey an object code work under this section in, or with, or
# specifically for use in, a User Product, and the conveying occurs as
# part of a transaction in which the right of possession and use of the
# User Product is transferred to the recipient in perpetuity or for a
# fixed term (regardless of how the transaction is characterized), the
# Corresponding Source conveyed under this section must be accompanied
# by the Installation Information. But this requirement does not apply
# if neither you nor any third party retains the ability to install
# modified object code on the User Product (for example, the work has
# been installed in ROM).
# The requirement to provide Installation Information does not include a
# requirement to continue to provide support service, warranty, or updates
# for a work that has been modified or installed by the recipient, or for
# the User Product in which it has been modified or installed. Access to a
# network may be denied when the modification itself materially and
# adversely affects the operation of the network or violates the rules and
# protocols for communication across the network.
# Corresponding Source conveyed, and Installation Information provided,
# in accord with this section must be in a format that is publicly
# documented (and with an implementation available to the public in
# source code form), and must require no special password or key for
# unpacking, reading or copying.
# 7. Additional Terms.
# "Additional permissions" are terms that supplement the terms of this
# License by making exceptions from one or more of its conditions.
# Additional permissions that are applicable to the entire Program shall
# be treated as though they were included in this License, to the extent
# that they are valid under applicable law. If additional permissions
# apply only to part of the Program, that part may be used separately
# under those permissions, but the entire Program remains governed by
# this License without regard to the additional permissions.
# When you convey a copy of a covered work, you may at your option
# remove any additional permissions from that copy, or from any part of
# it. (Additional permissions may be written to require their own
# removal in certain cases when you modify the work.) You may place
# additional permissions on material, added by you to a covered work,
# for which you have or can give appropriate copyright permission.
# Notwithstanding any other provision of this License, for material you
# add to a covered work, you may (if authorized by the copyright holders of
# that material) supplement the terms of this License with terms:
# a) Disclaiming warranty or limiting liability differently from the
# terms of sections 15 and 16 of this License; or
# b) Requiring preservation of specified reasonable legal notices or
# author attributions in that material or in the Appropriate Legal
# Notices displayed by works containing it; or
# c) Prohibiting misrepresentation of the origin of that material, or
# requiring that modified versions of such material be marked in
# reasonable ways as different from the original version; or
# d) Limiting the use for publicity purposes of names of licensors or
# authors of the material; or
# e) Declining to grant rights under trademark law for use of some
# trade names, trademarks, or service marks; or
# f) Requiring indemnification of licensors and authors of that
# material by anyone who conveys the material (or modified versions of
# it) with contractual assumptions of liability to the recipient, for
# any liability that these contractual assumptions directly impose on
# those licensors and authors.
# All other non-permissive additional terms are considered "further
# restrictions" within the meaning of section 10. If the Program as you
# received it, or any part of it, contains a notice stating that it is
# governed by this License along with a term that is a further
# restriction, you may remove that term. If a license document contains
# a further restriction but permits relicensing or conveying under this
# License, you may add to a covered work material governed by the terms
# of that license document, provided that the further restriction does
# not survive such relicensing or conveying.
# If you add terms to a covered work in accord with this section, you
# must place, in the relevant source files, a statement of the
# additional terms that apply to those files, or a notice indicating
# where to find the applicable terms.
# Additional terms, permissive or non-permissive, may be stated in the
# form of a separately written license, or stated as exceptions;
# the above requirements apply either way.
# 8. Termination.
# You may not propagate or modify a covered work except as expressly
# provided under this License. Any attempt otherwise to propagate or
# modify it is void, and will automatically terminate your rights under
# this License (including any patent licenses granted under the third
# paragraph of section 11).
# However, if you cease all violation of this License, then your
# license from a particular copyright holder is reinstated (a)
# provisionally, unless and until the copyright holder explicitly and
# finally terminates your license, and (b) permanently, if the copyright
# holder fails to notify you of the violation by some reasonable means
# prior to 60 days after the cessation.
# Moreover, your license from a particular copyright holder is
# reinstated permanently if the copyright holder notifies you of the
# violation by some reasonable means, this is the first time you have
# received notice of violation of this License (for any work) from that
# copyright holder, and you cure the violation prior to 30 days after
# your receipt of the notice.
# Termination of your rights under this section does not terminate the
# licenses of parties who have received copies or rights from you under
# this License. If your rights have been terminated and not permanently
# reinstated, you do not qualify to receive new licenses for the same
# material under section 10.
# 9. Acceptance Not Required for Having Copies.
# You are not required to accept this License in order to receive or
# run a copy of the Program. Ancillary propagation of a covered work
# occurring solely as a consequence of using peer-to-peer transmission
# to receive a copy likewise does not require acceptance. However,
# nothing other than this License grants you permission to propagate or
# modify any covered work. These actions infringe copyright if you do
# not accept this License. Therefore, by modifying or propagating a
# covered work, you indicate your acceptance of this License to do so.
# 10. Automatic Licensing of Downstream Recipients.
# Each time you convey a covered work, the recipient automatically
# receives a license from the original licensors, to run, modify and
# propagate that work, subject to this License. You are not responsible
# for enforcing compliance by third parties with this License.
# An "entity transaction" is a transaction transferring control of an
# organization, or substantially all assets of one, or subdividing an
# organization, or merging organizations. If propagation of a covered
# work results from an entity transaction, each party to that
# transaction who receives a copy of the work also receives whatever
# licenses to the work the party's predecessor in interest had or could
# give under the previous paragraph, plus a right to possession of the
# Corresponding Source of the work from the predecessor in interest, if
# the predecessor has it or can get it with reasonable efforts.
# You may not impose any further restrictions on the exercise of the
# rights granted or affirmed under this License. For example, you may
# not impose a license fee, royalty, or other charge for exercise of
# rights granted under this License, and you may not initiate litigation
# (including a cross-claim or counterclaim in a lawsuit) alleging that
# any patent claim is infringed by making, using, selling, offering for
# sale, or importing the Program or any portion of it.
# 11. Patents.
# A "contributor" is a copyright holder who authorizes use under this
# License of the Program or a work on which the Program is based. The
# work thus licensed is called the contributor's "contributor version".
# A contributor's "essential patent claims" are all patent claims
# owned or controlled by the contributor, whether already acquired or
# hereafter acquired, that would be infringed by some manner, permitted
# by this License, of making, using, or selling its contributor version,
# but do not include claims that would be infringed only as a
# consequence of further modification of the contributor version. For
# purposes of this definition, "control" includes the right to grant
# patent sublicenses in a manner consistent with the requirements of
# this License.
# Each contributor grants you a non-exclusive, worldwide, royalty-free
# patent license under the contributor's essential patent claims, to
# make, use, sell, offer for sale, import and otherwise run, modify and
# propagate the contents of its contributor version.
# In the following three paragraphs, a "patent license" is any express
# agreement or commitment, however denominated, not to enforce a patent
# (such as an express permission to practice a patent or covenant not to
# sue for patent infringement). To "grant" such a patent license to a
# party means to make such an agreement or commitment not to enforce a
# patent against the party.
# If you convey a covered work, knowingly relying on a patent license,
# and the Corresponding Source of the work is not available for anyone
# to copy, free of charge and under the terms of this License, through a
# publicly available network server or other readily accessible means,
# then you must either (1) cause the Corresponding Source to be so
# available, or (2) arrange to deprive yourself of the benefit of the
# patent license for this particular work, or (3) arrange, in a manner
# consistent with the requirements of this License, to extend the patent
# license to downstream recipients. "Knowingly relying" means you have
# actual knowledge that, but for the patent license, your conveying the
# covered work in a country, or your recipient's use of the covered work
# in a country, would infringe one or more identifiable patents in that
# country that you have reason to believe are valid.
# If, pursuant to or in connection with a single transaction or
# arrangement, you convey, or propagate by procuring conveyance of, a
# covered work, and grant a patent license to some of the parties
# receiving the covered work authorizing them to use, propagate, modify
# or convey a specific copy of the covered work, then the patent license
# you grant is automatically extended to all recipients of the covered
# work and works based on it.
# A patent license is "discriminatory" if it does not include within
# the scope of its coverage, prohibits the exercise of, or is
# conditioned on the non-exercise of one or more of the rights that are
# specifically granted under this License. You may not convey a covered
# work if you are a party to an arrangement with a third party that is
# in the business of distributing software, under which you make payment
# to the third party based on the extent of your activity of conveying
# the work, and under which the third party grants, to any of the
# parties who would receive the covered work from you, a discriminatory
# patent license (a) in connection with copies of the covered work
# conveyed by you (or copies made from those copies), or (b) primarily
# for and in connection with specific products or compilations that
# contain the covered work, unless you entered into that arrangement,
# or that patent license was granted, prior to 28 March 2007.
# Nothing in this License shall be construed as excluding or limiting
# any implied license or other defenses to infringement that may
# otherwise be available to you under applicable patent law.
# 12. No Surrender of Others' Freedom.
# If conditions are imposed on you (whether by court order, agreement or
# otherwise) that contradict the conditions of this License, they do not
# excuse you from the conditions of this License. If you cannot convey a
# covered work so as to satisfy simultaneously your obligations under this
# License and any other pertinent obligations, then as a consequence you may
# not convey it at all. For example, if you agree to terms that obligate you
# to collect a royalty for further conveying from those to whom you convey
# the Program, the only way you could satisfy both those terms and this
# License would be to refrain entirely from conveying the Program.
# 13. Use with the GNU Affero General Public License.
# Notwithstanding any other provision of this License, you have
# permission to link or combine any covered work with a work licensed
# under version 3 of the GNU Affero General Public License into a single
# combined work, and to convey the resulting work. The terms of this
# License will continue to apply to the part which is the covered work,
# but the special requirements of the GNU Affero General Public License,
# section 13, concerning interaction through a network will apply to the
# combination as such.
# 14. Revised Versions of this License.
# The Free Software Foundation may publish revised and/or new versions of
# the GNU General Public License from time to time. Such new versions will
# be similar in spirit to the present version, but may differ in detail to
# address new problems or concerns.
# Each version is given a distinguishing version number. If the
# Program specifies that a certain numbered version of the GNU General
# Public License "or any later version" applies to it, you have the
# option of following the terms and conditions either of that numbered
# version or of any later version published by the Free Software
# Foundation. If the Program does not specify a version number of the
# GNU General Public License, you may choose any version ever published
# by the Free Software Foundation.
# If the Program specifies that a proxy can decide which future
# versions of the GNU General Public License can be used, that proxy's
# public statement of acceptance of a version permanently authorizes you
# to choose that version for the Program.
# Later license versions may give you additional or different
# permissions. However, no additional obligations are imposed on any
# author or copyright holder as a result of your choosing to follow a
# later version.
# 15. Disclaimer of Warranty.
# THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
# APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
# HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
# OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
# IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
# ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
# 16. Limitation of Liability.
# IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
# WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
# THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
# GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
# USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
# DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
# PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
# EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
# SUCH DAMAGES.
# 17. Interpretation of Sections 15 and 16.
# If the disclaimer of warranty and limitation of liability provided
# above cannot be given local legal effect according to their terms,
# reviewing courts shall apply local law that most closely approximates
# an absolute waiver of all civil liability in connection with the
# Program, unless a warranty or assumption of liability accompanies a
# copy of the Program in return for a fee.
# END OF TERMS AND CONDITIONS
# How to Apply These Terms to Your New Programs
# If you develop a new program, and you want it to be of the greatest
# possible use to the public, the best way to achieve this is to make it
# free software which everyone can redistribute and change under these terms.
# To do so, attach the following notices to the program. It is safest
# to attach them to the start of each source file to most effectively
# state the exclusion of warranty; and each file should have at least
# the "copyright" line and a pointer to where the full""" notice is found.
# <one line to give the program's name and a brief idea of what it does.>
# Copyright (C) <year> <name of author>
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# Also add information on how to contact you by electronic and paper mail.
# If the program does terminal interaction, make it output a short
# notice like this when it starts in an interactive mode:
# <program> Copyright (C) <year> <name of author>
# This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
# This is free software, and you are welcome to redistribute it
# under certain conditions; type `show c' for details.
# The hypothetical commands `show w' and `show c' should show the appropriate
# parts of the General Public License. Of course, your program's commands
# might be different; for a GUI interface, you would use an "about box".
# You should also get your employer (if you work as a programmer) or school,
# if any, to sign a "copyright disclaimer" for the program, if necessary.
# For more information on this, and how to apply and follow the GNU GPL, see
# <https://www.gnu.org/licenses/>.
# The GNU General Public License does not permit incorporating your program
# into proprietary programs. If your program is a subroutine library, you
# may consider it more useful to permit linking proprietary applications with
# the library. If this is what you want to do, use the GNU Lesser General
# Public License instead of this License. But first, please read
# <https://www.gnu.org/licenses/why-not-lgpl.html>.
class _TsFileParseException(Exception):
"""
    Should be raised when parsing a .ts file and the format is incorrect.
"""
pass
def _ts2dfV2(full_file_path_and_name, return_separate_X_and_y=True, replace_missing_vals_with='NaN'):
"""Loads data from a .ts file into a Pandas DataFrame.
Parameters
----------
full_file_path_and_name: str
The full pathname of the .ts file to read.
return_separate_X_and_y: bool
true if X and Y values should be returned as separate Data Frames (X) and a numpy array (y), false otherwise.
        This is only relevant for data that has an associated target or class value.
replace_missing_vals_with: str
The value that missing values in the text file should be replaced with prior to parsing.
Returns
-------
DataFrame, ndarray
If return_separate_X_and_y then a tuple containing a DataFrame and a numpy array containing the relevant time-series and corresponding class values.
DataFrame
If not return_separate_X_and_y then a single DataFrame containing all time-series and (if relevant) a column "class_vals" the associated class values.
"""
# Initialize flags and variables used when parsing the file
metadata_started = False
data_started = False
has_problem_name_tag = False
has_timestamps_tag = False
has_univariate_tag = False
has_class_labels_tag = False
has_target_labels_tag = False
has_data_tag = False
previous_timestamp_was_float = None
previous_timestamp_was_int = None
previous_timestamp_was_timestamp = None
num_dimensions = None
is_first_case = True
instance_list = []
class_val_list = []
line_num = 0
# Parse the file
# print(full_file_path_and_name)
with open(full_file_path_and_name, 'r', encoding='utf-8') as file:
for line in tqdm(file):
# print(".", end='')
# Strip white space from start/end of line and change to lowercase for use below
line = line.strip().lower()
# Empty lines are valid at any point in a file
if line:
# Check if this line contains metadata
# Please note that even though metadata is stored in this function it is not currently published externally
if line.startswith("@problemname"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("problemname tag requires an associated value")
problem_name = line[len("@problemname") + 1:]
has_problem_name_tag = True
metadata_started = True
elif line.startswith("@timestamps"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException("timestamps tag requires an associated Boolean value")
elif tokens[1] == "true":
timestamps = True
elif tokens[1] == "false":
timestamps = False
else:
raise _TsFileParseException("invalid timestamps value")
has_timestamps_tag = True
metadata_started = True
elif line.startswith("@univariate"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException("univariate tag requires an associated Boolean value")
elif tokens[1] == "true":
univariate = True
elif tokens[1] == "false":
univariate = False
else:
raise _TsFileParseException("invalid univariate value")
has_univariate_tag = True
metadata_started = True
elif line.startswith("@classlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("classlabel tag requires an associated Boolean value")
if tokens[1] == "true":
class_labels = True
elif tokens[1] == "false":
class_labels = False
else:
raise _TsFileParseException("invalid classLabel value")
# Check if we have any associated class values
if token_len == 2 and class_labels:
raise _TsFileParseException("if the classlabel tag is true then class values must be supplied")
has_class_labels_tag = True
class_label_list = [token.strip() for token in tokens[2:]]
metadata_started = True
elif line.startswith("@targetlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("targetlabel tag requires an associated Boolean value")
if tokens[1] == "true":
target_labels = True
elif tokens[1] == "false":
target_labels = False
else:
raise _TsFileParseException("invalid targetLabel value")
has_target_labels_tag = True
class_val_list = []
metadata_started = True
# Check if this line contains the start of data
elif line.startswith("@data"):
if line != "@data":
raise _TsFileParseException("data tag should not have an associated value")
if data_started and not metadata_started:
raise _TsFileParseException("metadata must come before data")
else:
has_data_tag = True
data_started = True
                # If the '@data' tag has been found then metadata has been parsed and data can be loaded
elif data_started:
# Check that a full set of metadata has been provided
incomplete_regression_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_target_labels_tag or not has_data_tag
incomplete_classification_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_class_labels_tag or not has_data_tag
if incomplete_regression_meta_data and incomplete_classification_meta_data:
raise _TsFileParseException("a full set of metadata has not been provided before the data")
# Replace any missing values with the value specified
line = line.replace("?", replace_missing_vals_with)
                    # Check if we are dealing with data that has timestamps
if timestamps:
# We're dealing with timestamps so cannot just split line on ':' as timestamps may contain one
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
this_line_num_dimensions = 0
line_len = len(line)
char_num = 0
while char_num < line_len:
# Move through any spaces
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
                            # See if there is any more data to read in or if we should validate what has been read thus far
if char_num < line_len:
# See if we have an empty dimension (i.e. no values)
if line[char_num] == ":":
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series())
this_line_num_dimensions += 1
has_another_value = False
has_another_dimension = True
timestamps_for_dimension = []
values_for_dimension = []
char_num += 1
else:
# Check if we have reached a class label
if line[char_num] != "(" and target_labels:
class_val = line[char_num:].strip()
# if class_val not in class_val_list:
# raise _TsFileParseException(
# "the class value '" + class_val + "' on line " + str(
# line_num + 1) + " is not valid")
class_val_list.append(float(class_val))
char_num = line_len
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
else:
# Read in the data contained within the next tuple
if line[char_num] != "(" and not target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not start with a '('")
char_num += 1
tuple_data = ""
while char_num < line_len and line[char_num] != ")":
tuple_data += line[char_num]
char_num += 1
if char_num >= line_len or line[char_num] != ")":
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not end with a ')'")
# Read in any spaces immediately after the current tuple
char_num += 1
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
# Check if there is another value or dimension to process after this tuple
if char_num >= line_len:
has_another_value = False
has_another_dimension = False
elif line[char_num] == ",":
has_another_value = True
has_another_dimension = False
elif line[char_num] == ":":
has_another_value = False
has_another_dimension = True
char_num += 1
# Get the numeric value for the tuple by reading from the end of the tuple data backwards to the last comma
last_comma_index = tuple_data.rfind(',')
if last_comma_index == -1:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has no comma inside of it")
try:
value = tuple_data[last_comma_index + 1:]
value = float(value)
except ValueError:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that does not have a valid numeric value")
# Check the type of timestamp that we have
timestamp = tuple_data[0: last_comma_index]
try:
timestamp = int(timestamp)
timestamp_is_int = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_int = False
if not timestamp_is_int:
try:
timestamp = float(timestamp)
timestamp_is_float = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_float = False
if not timestamp_is_int and not timestamp_is_float:
try:
timestamp = timestamp.strip()
timestamp_is_timestamp = True
except ValueError:
timestamp_is_timestamp = False
# Make sure that the timestamps in the file (not just this dimension or case) are consistent
if not timestamp_is_timestamp and not timestamp_is_int and not timestamp_is_float:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has an invalid timestamp '" + timestamp + "'")
if previous_timestamp_was_float is not None and previous_timestamp_was_float and not timestamp_is_float:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_int is not None and previous_timestamp_was_int and not timestamp_is_int:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_timestamp is not None and previous_timestamp_was_timestamp and not timestamp_is_timestamp:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
# Store the values
timestamps_for_dimension += [timestamp]
values_for_dimension += [value]
# If this was our first tuple then we store the type of timestamp we had
if previous_timestamp_was_timestamp is None and timestamp_is_timestamp:
previous_timestamp_was_timestamp = True
previous_timestamp_was_int = False
previous_timestamp_was_float = False
if previous_timestamp_was_int is None and timestamp_is_int:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = True
previous_timestamp_was_float = False
if previous_timestamp_was_float is None and timestamp_is_float:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = False
previous_timestamp_was_float = True
# See if we should add the data for this dimension
if not has_another_value:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
if timestamp_is_timestamp:
timestamps_for_dimension = pd.DatetimeIndex(timestamps_for_dimension)
instance_list[this_line_num_dimensions].append(
pd.Series(index=timestamps_for_dimension, data=values_for_dimension))
this_line_num_dimensions += 1
timestamps_for_dimension = []
values_for_dimension = []
elif has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series(dtype=np.float32))
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and not has_another_dimension:
if num_dimensions is None:
num_dimensions = this_line_num_dimensions
if num_dimensions != this_line_num_dimensions:
raise _TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
                        # Check that we are not expecting any more data and, if not, store what was processed above
if has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series())
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and num_dimensions != this_line_num_dimensions:
raise _TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
# Check if we should have class values, and if so that they are contained in those listed in the metadata
if target_labels and len(class_val_list) == 0:
raise _TsFileParseException("the cases have no associated class values")
else:
dimensions = line.split(":")
# If first row then note the number of dimensions (that must be the same for all cases)
if is_first_case:
num_dimensions = len(dimensions)
if target_labels:
num_dimensions -= 1
for dim in range(0, num_dimensions):
instance_list.append([])
is_first_case = False
                        # See how many dimensions the case whose data is represented in this line has
this_line_num_dimensions = len(dimensions)
if target_labels:
this_line_num_dimensions -= 1
# All dimensions should be included for all series, even if they are empty
if this_line_num_dimensions != num_dimensions:
raise _TsFileParseException("inconsistent number of dimensions. Expecting " + str(
num_dimensions) + " but have read " + str(this_line_num_dimensions))
# Process the data for each dimension
for dim in range(0, num_dimensions):
dimension = dimensions[dim].strip()
if dimension:
data_series = dimension.split(",")
data_series = [float(i) for i in data_series]
instance_list[dim].append(pd.Series(data_series))
else:
instance_list[dim].append(pd.Series())
if target_labels:
class_val_list.append(float(dimensions[num_dimensions].strip()))
line_num += 1
# Check that the file was not empty
if line_num:
# Check that the file contained both metadata and data
complete_regression_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_target_labels_tag and has_data_tag
complete_classification_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_class_labels_tag and has_data_tag
if metadata_started and not complete_regression_meta_data and not complete_classification_meta_data:
raise _TsFileParseException("metadata incomplete")
elif metadata_started and not data_started:
raise _TsFileParseException("file contained metadata but no data")
elif metadata_started and data_started and len(instance_list) == 0:
raise _TsFileParseException("file contained metadata but no data")
# Create a DataFrame from the data parsed above
data = pd.DataFrame(dtype=np.float32)
for dim in range(0, num_dimensions):
data['dim_' + str(dim)] = instance_list[dim]
# Check if we should return any associated class labels separately
if target_labels:
if return_separate_X_and_y:
return data, np.asarray(class_val_list)
else:
data['class_vals'] = pd.Series(class_val_list)
return data
else:
return data
else:
raise _TsFileParseException("empty file")
#export
# This code was adapted from sktime.
# It's used to load time series examples to demonstrate tsai's functionality.
# Copyright for above source is below.
# Copyright (c) 2019 - 2020 The sktime developers.
# All rights reserved.
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * Neither the name of the copyright holder nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
def _check_X(
X:(pd.DataFrame, np.array), # Input data
):
"Validate input data"
# check np.array format
if isinstance(X, np.ndarray):
if X.ndim == 2:
X = X.reshape(X.shape[0], 1, X.shape[1])
# check pd.DataFrame
if isinstance(X, pd.DataFrame): X = _from_nested_to_3d_numpy(X)
return X
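# Hypothetical sketch (assumption, not part of the original notebook):
# a 2D numpy array of shape (samples, timesteps) is promoted to the
# 3D (samples, variables, timesteps) layout expected downstream.
_X2d = np.random.rand(8, 24)
assert _check_X(_X2d).shape == (8, 1, 24)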
def _from_nested_to_3d_numpy(
X:pd.DataFrame, # Nested pandas DataFrame
):
"Convert nested Panel to 3D numpy Panel"
# n_columns = X.shape[1]
nested_col_mask = [*_are_columns_nested(X)]
# If all the columns are nested in structure
if nested_col_mask.count(True) == len(nested_col_mask):
X_3d = np.stack(
X.applymap(_convert_series_cell_to_numpy)
.apply(lambda row: np.stack(row), axis=1)
.to_numpy()
)
# If some columns are primitive (non-nested) then first convert to
# multi-indexed DataFrame where the same value of these columns is
# repeated for each timepoint
# Then the multi-indexed DataFrame can be converted to 3d NumPy array
else:
X_mi = _from_nested_to_multi_index(X)
X_3d = _from_multi_index_to_3d_numpy(
X_mi, instance_index="instance", time_index="timepoints"
)
return X_3d
def _are_columns_nested(
X:pd.DataFrame # DataFrame to check for nested data structures.
):
"Check whether any cells have nested structure in each DataFrame column"
any_nested = _nested_cell_mask(X).any().values
return any_nested
def _nested_cell_mask(X):
return X.applymap(_cell_is_series_or_array)
def _cell_is_series_or_array(cell):
return isinstance(cell, (pd.Series, np.ndarray))
def _convert_series_cell_to_numpy(cell):
if isinstance(cell, pd.Series):
return cell.to_numpy()
else:
return cell
def _from_nested_to_multi_index(
X: pd.DataFrame, # The nested DataFrame to convert to a multi-indexed pandas DataFrame
)->pd.DataFrame:
"Convert nested pandas Panel to multi-index pandas Panel"
time_index_name = "timepoints"
# n_columns = X.shape[1]
nested_col_mask = [*_are_columns_nested(X)]
instance_idxs = X.index.get_level_values(-1).unique()
# n_instances = instance_idxs.shape[0]
instance_index_name = "instance"
instances = []
for instance_idx in instance_idxs:
iidx = instance_idx
instance = [
pd.DataFrame(i[1], columns=[i[0]])
for i in X.loc[iidx, :].iteritems() # noqa
]
instance = pd.concat(instance, axis=1)
# For primitive (non-nested column) assume the same
# primitive value applies to every timepoint of the instance
for col_idx, is_nested in enumerate(nested_col_mask):
if not is_nested:
instance.iloc[:, col_idx] = instance.iloc[:, col_idx].ffill()
# Correctly assign multi-index
multi_index = pd.MultiIndex.from_product(
[[instance_idx], instance.index],
names=[instance_index_name, time_index_name],
)
instance.index = multi_index
instances.append(instance)
X_mi = pd.concat(instances)
X_mi.columns = X.columns
return X_mi
def _from_multi_index_to_3d_numpy(
X:pd.DataFrame, # The multi-index pandas DataFrame
instance_index:str=None, # Name of the multi-index level corresponding to the DataFrame's instances
time_index:str=None # Name of multi-index level corresponding to DataFrame's timepoints
)->np.ndarray:
"Convert pandas multi-index Panel to numpy 3D Panel."
if X.index.nlevels != 2:
raise ValueError("Multi-index DataFrame should have 2 levels.")
if (instance_index is None) or (time_index is None):
msg = "Must supply parameters instance_index and time_index"
raise ValueError(msg)
n_instances = len(X.groupby(level=instance_index))
# Alternative approach is more verbose
# n_instances = (multi_ind_dataframe
# .index
# .get_level_values(instance_index)
# .unique()).shape[0]
n_timepoints = len(X.groupby(level=time_index))
# Alternative approach is more verbose
# n_instances = (multi_ind_dataframe
# .index
# .get_level_values(time_index)
# .unique()).shape[0]
n_columns = X.shape[1]
X_3d = X.values.reshape(n_instances, n_timepoints, n_columns).swapaxes(1, 2)
return X_3d
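# Hypothetical sketch (assumption, not part of the original notebook): build a
# tiny nested DataFrame (each cell holds a pd.Series) and convert it to a 3D
# numpy array of shape (n_instances, n_columns, n_timepoints) with the helpers above.
_nested = pd.DataFrame({
    'dim_0': [pd.Series([1., 2., 3.]), pd.Series([4., 5., 6.])],
    'dim_1': [pd.Series([7., 8., 9.]), pd.Series([0., 1., 2.])],
})
assert _from_nested_to_3d_numpy(_nested).shape == (2, 2, 3)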
#export
def _ts2df(
full_file_path_and_name:str, # The full pathname of the .ts file to read.
replace_missing_vals_with:str="NaN", # The value that missing values in the text file should be replaced with prior to parsing.
):
"Load data from a .ts file into a Pandas DataFrame"
# Initialize flags and variables used when parsing the file
metadata_started = False
data_started = False
has_problem_name_tag = False
has_timestamps_tag = False
has_univariate_tag = False
has_class_labels_tag = False
has_data_tag = False
previous_timestamp_was_int = None
prev_timestamp_was_timestamp = None
num_dimensions = None
is_first_case = True
instance_list = []
class_val_list = []
line_num = 0
# Parse the file
# print(full_file_path_and_name)
with open(full_file_path_and_name, "r", encoding="utf-8") as file:
for line in file:
# Strip white space from start/end of line and change to
# lowercase for use below
line = line.strip().lower()
# Empty lines are valid at any point in a file
if line:
# Check if this line contains metadata
# Please note that even though metadata is stored in this
# function it is not currently published externally
if line.startswith("@problemname"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(" ")
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException(
"problemname tag requires an associated value"
)
# problem_name = line[len("@problemname") + 1:]
has_problem_name_tag = True
metadata_started = True
elif line.startswith("@timestamps"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(" ")
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException(
"timestamps tag requires an associated Boolean " "value"
)
elif tokens[1] == "true":
timestamps = True
elif tokens[1] == "false":
timestamps = False
else:
raise _TsFileParseException("invalid timestamps value")
has_timestamps_tag = True
metadata_started = True
elif line.startswith("@univariate"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(" ")
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException(
"univariate tag requires an associated Boolean " "value"
)
elif tokens[1] == "true":
# univariate = True
pass
elif tokens[1] == "false":
# univariate = False
pass
else:
raise _TsFileParseException("invalid univariate value")
has_univariate_tag = True
metadata_started = True
elif line.startswith("@classlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(" ")
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException(
"classlabel tag requires an associated Boolean " "value"
)
if tokens[1] == "true":
class_labels = True
elif tokens[1] == "false":
class_labels = False
else:
raise _TsFileParseException("invalid classLabel value")
# Check if we have any associated class values
if token_len == 2 and class_labels:
raise _TsFileParseException(
"if the classlabel tag is true then class values "
"must be supplied"
)
has_class_labels_tag = True
class_label_list = [token.strip() for token in tokens[2:]]
metadata_started = True
# Check if this line contains the start of data
elif line.startswith("@data"):
if line != "@data":
raise _TsFileParseException(
"data tag should not have an associated value"
)
if data_started and not metadata_started:
raise _TsFileParseException("metadata must come before data")
else:
has_data_tag = True
data_started = True
                # If the @data tag has been found then metadata has been
                # parsed and data can be loaded
elif data_started:
# Check that a full set of metadata has been provided
if (
not has_problem_name_tag
or not has_timestamps_tag
or not has_univariate_tag
or not has_class_labels_tag
or not has_data_tag
):
raise _TsFileParseException(
"a full set of metadata has not been provided "
"before the data"
)
# Replace any missing values with the value specified
line = line.replace("?", replace_missing_vals_with)
                    # Check if we are dealing with data that has timestamps
if timestamps:
# We're dealing with timestamps so cannot just split
# line on ':' as timestamps may contain one
has_another_value = False
has_another_dimension = False
timestamp_for_dim = []
values_for_dimension = []
this_line_num_dim = 0
line_len = len(line)
char_num = 0
while char_num < line_len:
# Move through any spaces
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
                            # See if there is any more data to read in or if
                            # we should validate what has been read thus far
if char_num < line_len:
# See if we have an empty dimension (i.e. no
# values)
if line[char_num] == ":":
if len(instance_list) < (this_line_num_dim + 1):
instance_list.append([])
instance_list[this_line_num_dim].append(
pd.Series(dtype="object")
)
this_line_num_dim += 1
has_another_value = False
has_another_dimension = True
timestamp_for_dim = []
values_for_dimension = []
char_num += 1
else:
# Check if we have reached a class label
if line[char_num] != "(" and class_labels:
class_val = line[char_num:].strip()
if class_val not in class_label_list:
raise _TsFileParseException(
"the class value '"
+ class_val
+ "' on line "
+ str(line_num + 1)
+ " is not "
"valid"
)
class_val_list.append(class_val)
char_num = line_len
has_another_value = False
has_another_dimension = False
timestamp_for_dim = []
values_for_dimension = []
else:
# Read in the data contained within
# the next tuple
if line[char_num] != "(" and not class_labels:
raise _TsFileParseException(
"dimension "
+ str(this_line_num_dim + 1)
+ " on line "
+ str(line_num + 1)
+ " does "
"not "
"start "
"with a "
"'('"
)
char_num += 1
tuple_data = ""
while (
char_num < line_len
and line[char_num] != ")"
):
tuple_data += line[char_num]
char_num += 1
if (
char_num >= line_len
or line[char_num] != ")"
):
raise _TsFileParseException(
"dimension "
+ str(this_line_num_dim + 1)
+ " on line "
+ str(line_num + 1)
+ " does "
"not end"
" with a "
"')'"
)
# Read in any spaces immediately
# after the current tuple
char_num += 1
while char_num < line_len and str.isspace(
line[char_num]
):
char_num += 1
# Check if there is another value or
# dimension to process after this tuple
if char_num >= line_len:
has_another_value = False
has_another_dimension = False
elif line[char_num] == ",":
has_another_value = True
has_another_dimension = False
elif line[char_num] == ":":
has_another_value = False
has_another_dimension = True
char_num += 1
# Get the numeric value for the
# tuple by reading from the end of
# the tuple data backwards to the
# last comma
last_comma_index = tuple_data.rfind(",")
if last_comma_index == -1:
raise _TsFileParseException(
"dimension "
+ str(this_line_num_dim + 1)
+ " on line "
+ str(line_num + 1)
+ " contains a tuple that has "
"no comma inside of it"
)
try:
value = tuple_data[last_comma_index + 1 :]
value = float(value)
except ValueError:
raise _TsFileParseException(
"dimension "
+ str(this_line_num_dim + 1)
+ " on line "
+ str(line_num + 1)
+ " contains a tuple that does "
"not have a valid numeric "
"value"
)
# Check the type of timestamp that
# we have
timestamp = tuple_data[0:last_comma_index]
try:
timestamp = int(timestamp)
timestamp_is_int = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_int = False
if not timestamp_is_int:
try:
timestamp = timestamp.strip()
timestamp_is_timestamp = True
except ValueError:
timestamp_is_timestamp = False
# Make sure that the timestamps in
# the file (not just this dimension
# or case) are consistent
if (
not timestamp_is_timestamp
and not timestamp_is_int
):
raise _TsFileParseException(
"dimension "
+ str(this_line_num_dim + 1)
+ " on line "
+ str(line_num + 1)
+ " contains a tuple that "
"has an invalid timestamp '"
+ timestamp
+ "'"
)
if (
previous_timestamp_was_int is not None
and previous_timestamp_was_int
and not timestamp_is_int
):
raise _TsFileParseException(
"dimension "
+ str(this_line_num_dim + 1)
+ " on line "
+ str(line_num + 1)
+ " contains tuples where the "
"timestamp format is "
"inconsistent"
)
if (
prev_timestamp_was_timestamp is not None
and prev_timestamp_was_timestamp
and not timestamp_is_timestamp
):
raise _TsFileParseException(
"dimension "
+ str(this_line_num_dim + 1)
+ " on line "
+ str(line_num + 1)
+ " contains tuples where the "
"timestamp format is "
"inconsistent"
)
# Store the values
timestamp_for_dim += [timestamp]
values_for_dimension += [value]
# If this was our first tuple then
# we store the type of timestamp we
# had
if (
prev_timestamp_was_timestamp is None
and timestamp_is_timestamp
):
prev_timestamp_was_timestamp = True
previous_timestamp_was_int = False
if (
previous_timestamp_was_int is None
and timestamp_is_int
):
prev_timestamp_was_timestamp = False
previous_timestamp_was_int = True
# See if we should add the data for
# this dimension
if not has_another_value:
if len(instance_list) < (
this_line_num_dim + 1
):
instance_list.append([])
if timestamp_is_timestamp:
timestamp_for_dim = pd.DatetimeIndex(
timestamp_for_dim
)
instance_list[this_line_num_dim].append(
pd.Series(
index=timestamp_for_dim,
data=values_for_dimension,
)
)
this_line_num_dim += 1
timestamp_for_dim = []
values_for_dimension = []
elif has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dim + 1) + " on "
"line "
+ str(line_num + 1)
+ " ends with a ',' that "
"is not followed by "
"another tuple"
)
elif has_another_dimension and class_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dim + 1) + " on "
"line "
+ str(line_num + 1)
+ " ends with a ':' while "
"it should list a class "
"value"
)
elif has_another_dimension and not class_labels:
if len(instance_list) < (this_line_num_dim + 1):
instance_list.append([])
instance_list[this_line_num_dim].append(
pd.Series(dtype=np.float32)
)
this_line_num_dim += 1
num_dimensions = this_line_num_dim
# If this is the 1st line of data we have seen
# then note the dimensions
if not has_another_value and not has_another_dimension:
if num_dimensions is None:
num_dimensions = this_line_num_dim
if num_dimensions != this_line_num_dim:
raise _TsFileParseException(
"line "
+ str(line_num + 1)
+ " does not have the "
"same number of "
"dimensions as the "
"previous line of "
"data"
)
                        # Check that we are not expecting any more data,
                        # and if not, store what was processed above
if has_another_value:
raise _TsFileParseException(
"dimension "
+ str(this_line_num_dim + 1)
+ " on line "
+ str(line_num + 1)
+ " ends with a ',' that is "
"not followed by another "
"tuple"
)
elif has_another_dimension and class_labels:
raise _TsFileParseException(
"dimension "
+ str(this_line_num_dim + 1)
+ " on line "
+ str(line_num + 1)
+ " ends with a ':' while it "
"should list a class value"
)
elif has_another_dimension and not class_labels:
if len(instance_list) < (this_line_num_dim + 1):
instance_list.append([])
instance_list[this_line_num_dim].append(
pd.Series(dtype="object")
)
this_line_num_dim += 1
num_dimensions = this_line_num_dim
# If this is the 1st line of data we have seen then
# note the dimensions
if (
not has_another_value
and num_dimensions != this_line_num_dim
):
raise _TsFileParseException(
"line " + str(line_num + 1) + " does not have the same "
"number of dimensions as the "
"previous line of data"
)
# Check if we should have class values, and if so
# that they are contained in those listed in the
# metadata
if class_labels and len(class_val_list) == 0:
raise _TsFileParseException(
"the cases have no associated class values"
)
else:
dimensions = line.split(":")
# If first row then note the number of dimensions (
# that must be the same for all cases)
if is_first_case:
num_dimensions = len(dimensions)
if class_labels:
num_dimensions -= 1
for _dim in range(0, num_dimensions):
instance_list.append([])
is_first_case = False
                        # See how many dimensions the case whose data
                        # is represented in this line has
this_line_num_dim = len(dimensions)
if class_labels:
this_line_num_dim -= 1
# All dimensions should be included for all series,
# even if they are empty
if this_line_num_dim != num_dimensions:
raise _TsFileParseException(
"inconsistent number of dimensions. "
"Expecting "
+ str(num_dimensions)
+ " but have read "
+ str(this_line_num_dim)
)
# Process the data for each dimension
for dim in range(0, num_dimensions):
dimension = dimensions[dim].strip()
if dimension:
data_series = dimension.split(",")
data_series = [float(i) for i in data_series]
instance_list[dim].append(pd.Series(data_series))
else:
instance_list[dim].append(pd.Series(dtype="object"))
if class_labels:
class_val_list.append(dimensions[num_dimensions].strip())
line_num += 1
# Check that the file was not empty
if line_num:
# Check that the file contained both metadata and data
if metadata_started and not (
has_problem_name_tag
and has_timestamps_tag
and has_univariate_tag
and has_class_labels_tag
and has_data_tag
):
raise _TsFileParseException("metadata incomplete")
elif metadata_started and not data_started:
raise _TsFileParseException("file contained metadata but no data")
elif metadata_started and data_started and len(instance_list) == 0:
raise _TsFileParseException("file contained metadata but no data")
# Create a DataFrame from the data parsed above
data = pd.DataFrame(dtype=np.float32)
for dim in range(0, num_dimensions):
data["dim_" + str(dim)] = instance_list[dim]
# Check if we should return any associated class labels separately
if class_labels:
return data, np.asarray(class_val_list)
else:
return data
else:
raise _TsFileParseException("empty file")
#export
def decompress_from_url(url, target_dir=None, verbose=False):
# Download
try:
pv("downloading data...", verbose)
fname = os.path.basename(url)
tmpdir = tempfile.mkdtemp()
tmpfile = os.path.join(tmpdir, fname)
urlretrieve(url, tmpfile)
pv("...data downloaded", verbose)
# Decompress
try:
pv("decompressing data...", verbose)
if not os.path.exists(target_dir): os.makedirs(target_dir)
shutil.unpack_archive(tmpfile, target_dir)
shutil.rmtree(tmpdir)
pv("...data decompressed", verbose)
return target_dir
except:
shutil.rmtree(tmpdir)
if verbose: sys.stderr.write("Could not decompress file, aborting.\n")
except:
shutil.rmtree(tmpdir)
if verbose:
sys.stderr.write("Could not download url. Please, check url.\n")
#export
def download_data(url, fname=None, c_key='archive', force_download=False, timeout=4, verbose=False):
"Download `url` to `fname`."
from fastai.data.external import URLs
from fastdownload import download_url
fname = Path(fname or URLs.path(url, c_key=c_key))
fname.parent.mkdir(parents=True, exist_ok=True)
if not fname.exists() or force_download: download_url(url, dest=fname, timeout=timeout, show_progress=verbose)
return fname
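# --- Usage sketch (hypothetical URLs, shown commented out) ---
# download_data fetches a single file to a local path; decompress_from_url
# additionally unpacks an archive into a target directory. Both are exercised
# for real by get_UCR_data / get_Monash_* below.
# download_data('https://example.com/some_file.csv', fname='./data/some_file.csv', verbose=True)
# decompress_from_url('https://example.com/some_archive.zip', target_dir='./data/some_archive', verbose=True)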
# export
def get_UCR_univariate_list():
return [
'ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY',
'AllGestureWiimoteZ', 'ArrowHead', 'Beef', 'BeetleFly', 'BirdChicken',
'BME', 'Car', 'CBF', 'Chinatown', 'ChlorineConcentration',
'CinCECGTorso', 'Coffee', 'Computers', 'CricketX', 'CricketY',
'CricketZ', 'Crop', 'DiatomSizeReduction',
'DistalPhalanxOutlineAgeGroup', 'DistalPhalanxOutlineCorrect',
'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame',
'DodgerLoopWeekend', 'Earthquakes', 'ECG200', 'ECG5000', 'ECGFiveDays',
'ElectricDevices', 'EOGHorizontalSignal', 'EOGVerticalSignal',
'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords',
'Fish', 'FordA', 'FordB', 'FreezerRegularTrain', 'FreezerSmallTrain',
'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3',
'GesturePebbleZ1', 'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan',
'GunPointMaleVersusFemale', 'GunPointOldVersusYoung', 'Ham',
'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate',
'InsectEPGRegularTrain', 'InsectEPGSmallTrain', 'InsectWingbeatSound',
'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2',
'Lightning7', 'Mallat', 'Meat', 'MedicalImages', 'MelbournePedestrian',
'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect',
'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain',
'MoteStrain', 'NonInvasiveFetalECGThorax1',
'NonInvasiveFetalECGThorax2', 'OliveOil', 'OSULeaf',
'PhalangesOutlinesCorrect', 'Phoneme', 'PickupGestureWiimoteZ',
'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'PLAID', 'Plane',
'PowerCons', 'ProximalPhalanxOutlineAgeGroup',
'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW',
'RefrigerationDevices', 'Rock', 'ScreenType', 'SemgHandGenderCh2',
'SemgHandMovementCh2', 'SemgHandSubjectCh2', 'ShakeGestureWiimoteZ',
'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace',
'SonyAIBORobotSurface1', 'SonyAIBORobotSurface2', 'StarLightCurves',
'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl',
'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG',
'TwoPatterns', 'UMD', 'UWaveGestureLibraryAll', 'UWaveGestureLibraryX',
'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine',
'WordSynonyms', 'Worms', 'WormsTwoClass', 'Yoga'
]
test_eq(len(get_UCR_univariate_list()), 128)
UTSC_datasets = get_UCR_univariate_list()
UCR_univariate_list = get_UCR_univariate_list()
#export
def get_UCR_multivariate_list():
return [
'ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions',
'CharacterTrajectories', 'Cricket', 'DuckDuckGeese', 'EigenWorms',
'Epilepsy', 'ERing', 'EthanolConcentration', 'FaceDetection',
'FingerMovements', 'HandMovementDirection', 'Handwriting', 'Heartbeat',
'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery',
'NATOPS', 'PEMS-SF', 'PenDigits', 'PhonemeSpectra', 'RacketSports',
'SelfRegulationSCP1', 'SelfRegulationSCP2', 'SpokenArabicDigits',
'StandWalkJump', 'UWaveGestureLibrary'
]
test_eq(len(get_UCR_multivariate_list()), 30)
MTSC_datasets = get_UCR_multivariate_list()
UCR_multivariate_list = get_UCR_multivariate_list()
UCR_list = sorted(UCR_univariate_list + UCR_multivariate_list)
classification_list = UCR_list
TSC_datasets = classification_datasets = UCR_list
len(UCR_list)
#export
def get_UCR_data(dsid, path='.', parent_dir='data/UCR', on_disk=True, mode='c', Xdtype='float32', ydtype=None, return_split=True, split_data=True,
force_download=False, verbose=False):
dsid_list = [ds for ds in UCR_list if ds.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a UCR dataset'
dsid = dsid_list[0]
return_split = return_split and split_data # keep return_split for compatibility. It will be replaced by split_data
if dsid in ['InsectWingbeat']:
warnings.warn(f'Be aware that download of the {dsid} dataset is very slow!')
pv(f'Dataset: {dsid}', verbose)
full_parent_dir = Path(path)/parent_dir
full_tgt_dir = full_parent_dir/dsid
# if not os.path.exists(full_tgt_dir): os.makedirs(full_tgt_dir)
full_tgt_dir.parent.mkdir(parents=True, exist_ok=True)
if force_download or not all([os.path.isfile(f'{full_tgt_dir}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):
# Option A
src_website = 'http://www.timeseriesclassification.com/Downloads'
decompress_from_url(f'{src_website}/{dsid}.zip', target_dir=full_tgt_dir, verbose=verbose)
if dsid == 'DuckDuckGeese':
with zipfile.ZipFile(Path(f'{full_parent_dir}/DuckDuckGeese/DuckDuckGeese_ts.zip'), 'r') as zip_ref:
zip_ref.extractall(Path(parent_dir))
        if not os.path.exists(full_tgt_dir/f'{dsid}_TRAIN.ts') or not os.path.exists(full_tgt_dir/f'{dsid}_TEST.ts') or \
Path(full_tgt_dir/f'{dsid}_TRAIN.ts').stat().st_size == 0 or Path(full_tgt_dir/f'{dsid}_TEST.ts').stat().st_size == 0:
print('It has not been possible to download the required files')
if return_split:
return None, None, None, None
else:
return None, None, None
pv('loading ts files to dataframe...', verbose)
X_train_df, y_train = _ts2df(full_tgt_dir/f'{dsid}_TRAIN.ts')
X_valid_df, y_valid = _ts2df(full_tgt_dir/f'{dsid}_TEST.ts')
pv('...ts files loaded', verbose)
pv('preparing numpy arrays...', verbose)
X_train_ = []
X_valid_ = []
for i in progress_bar(range(X_train_df.shape[-1]), display=verbose, leave=False):
X_train_.append(stack_pad(X_train_df[f'dim_{i}'])) # stack arrays even if they have different lengths
X_valid_.append(stack_pad(X_valid_df[f'dim_{i}'])) # stack arrays even if they have different lengths
X_train = np.transpose(np.stack(X_train_, axis=-1), (0, 2, 1))
X_valid = np.transpose(np.stack(X_valid_, axis=-1), (0, 2, 1))
X_train, X_valid = match_seq_len(X_train, X_valid)
np.save(f'{full_tgt_dir}/X_train.npy', X_train)
np.save(f'{full_tgt_dir}/y_train.npy', y_train)
np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)
np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)
np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))
np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))
del X_train, X_valid, y_train, y_valid
delete_all_in_dir(full_tgt_dir, exception='.npy')
pv('...numpy arrays correctly saved', verbose)
mmap_mode = mode if on_disk else None
X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)
y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)
X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)
y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)
if return_split:
if Xdtype is not None:
X_train = X_train.astype(Xdtype)
X_valid = X_valid.astype(Xdtype)
if ydtype is not None:
y_train = y_train.astype(ydtype)
y_valid = y_valid.astype(ydtype)
if verbose:
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_valid:', X_valid.shape)
print('y_valid:', y_valid.shape, '\n')
return X_train, y_train, X_valid, y_valid
else:
X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)
y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)
splits = get_predefined_splits(X_train, X_valid)
if Xdtype is not None:
X = X.astype(Xdtype)
if verbose:
print('X :', X .shape)
print('y :', y .shape)
print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n')
return X, y, splits
get_classification_data = get_UCR_data
from fastai.data.transforms import get_files
PATH = Path('.')
dsids = ['ECGFiveDays', 'AtrialFibrillation'] # univariate and multivariate
for dsid in dsids:
print(dsid)
tgt_dir = PATH/f'data/UCR/{dsid}'
if os.path.isdir(tgt_dir): shutil.rmtree(tgt_dir)
test_eq(len(get_files(tgt_dir)), 0) # no file left
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)
test_eq(len(get_files(tgt_dir, '.npy')), 6)
test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test no left file/ dir
del X_train, y_train, X_valid, y_valid
start = time.time()
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)
elapsed = time.time() - start
test_eq(elapsed < 1, True)
test_eq(X_train.ndim, 3)
test_eq(y_train.ndim, 1)
test_eq(X_valid.ndim, 3)
test_eq(y_valid.ndim, 1)
test_eq(len(get_files(tgt_dir, '.npy')), 6)
test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test no left file/ dir
test_eq(X_train.ndim, 3)
test_eq(y_train.ndim, 1)
test_eq(X_valid.ndim, 3)
test_eq(y_valid.ndim, 1)
test_eq(X_train.dtype, np.float32)
test_eq(X_train.__class__.__name__, 'memmap')
del X_train, y_train, X_valid, y_valid
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, on_disk=False)
test_eq(X_train.__class__.__name__, 'ndarray')
del X_train, y_train, X_valid, y_valid
X_train, y_train, X_valid, y_valid = get_UCR_data('natops')
dsid = 'natops'
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, verbose=True)
X, y, splits = get_UCR_data(dsid, split_data=False)
test_eq(X[splits[0]], X_train)
test_eq(y[splits[1]], y_valid)
test_eq(X[splits[0]], X_train)
test_eq(y[splits[1]], y_valid)
test_type(X, X_train)
test_type(y, y_train)
#export
def check_data(X, y=None, splits=None, show_plot=True):
try: X_is_nan = np.isnan(X).sum()
except: X_is_nan = 'could not be checked'
if X.ndim == 3:
shape = f'[{X.shape[0]} samples x {X.shape[1]} features x {X.shape[-1]} timesteps]'
print(f'X - shape: {shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')
else:
print(f'X - shape: {X.shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')
if X_is_nan:
warnings.warn('X contains nan values')
if y is not None:
y_shape = y.shape
y = y.ravel()
if isinstance(y[0], str):
n_classes = f'{len(np.unique(y))} ({len(y)//len(np.unique(y))} samples per class) {L(np.unique(y).tolist())}'
y_is_nan = 'nan' in [c.lower() for c in np.unique(y)]
print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} n_classes: {n_classes} isnan: {y_is_nan}')
else:
y_is_nan = np.isnan(y).sum()
print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} isnan: {y_is_nan}')
if y_is_nan:
warnings.warn('y contains nan values')
if splits is not None:
_splits = get_splits_len(splits)
overlap = check_splits_overlap(splits)
print(f'splits - n_splits: {len(_splits)} shape: {_splits} overlap: {overlap}')
if show_plot: plot_splits(splits)
dsid = 'ECGFiveDays'
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
check_data(X, y, splits)
check_data(X[:, 0], y, splits)
y = y.astype(np.float32)
check_data(X, y, splits)
y[:10] = np.nan
check_data(X[:, 0], y, splits)
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
splits = get_splits(y, 3)
check_data(X, y, splits)
check_data(X[:, 0], y, splits)
y[:5]= np.nan
check_data(X[:, 0], y, splits)
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
#export
def get_Monash_regression_list():
return sorted([
"AustraliaRainfall", "HouseholdPowerConsumption1",
"HouseholdPowerConsumption2", "BeijingPM25Quality",
"BeijingPM10Quality", "Covid3Month", "LiveFuelMoistureContent",
"FloodModeling1", "FloodModeling2", "FloodModeling3",
"AppliancesEnergy", "BenzeneConcentration", "NewsHeadlineSentiment",
"NewsTitleSentiment", "IEEEPPG",
#"BIDMC32RR", "BIDMC32HR", "BIDMC32SpO2", "PPGDalia" # Cannot be downloaded
])
Monash_regression_list = get_Monash_regression_list()
regression_list = Monash_regression_list
TSR_datasets = regression_datasets = regression_list
len(Monash_regression_list)
#export
def get_Monash_regression_data(dsid, path='./data/Monash', on_disk=True, mode='c', Xdtype='float32', ydtype=None, split_data=True, force_download=False,
verbose=False, timeout=4):
dsid_list = [rd for rd in Monash_regression_list if rd.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a Monash dataset'
dsid = dsid_list[0]
full_tgt_dir = Path(path)/dsid
pv(f'Dataset: {dsid}', verbose)
if force_download or not all([os.path.isfile(f'{path}/{dsid}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):
if dsid == 'AppliancesEnergy': dset_id = 3902637
elif dsid == 'HouseholdPowerConsumption1': dset_id = 3902704
elif dsid == 'HouseholdPowerConsumption2': dset_id = 3902706
elif dsid == 'BenzeneConcentration': dset_id = 3902673
elif dsid == 'BeijingPM25Quality': dset_id = 3902671
elif dsid == 'BeijingPM10Quality': dset_id = 3902667
elif dsid == 'LiveFuelMoistureContent': dset_id = 3902716
elif dsid == 'FloodModeling1': dset_id = 3902694
elif dsid == 'FloodModeling2': dset_id = 3902696
elif dsid == 'FloodModeling3': dset_id = 3902698
elif dsid == 'AustraliaRainfall': dset_id = 3902654
elif dsid == 'PPGDalia': dset_id = 3902728
elif dsid == 'IEEEPPG': dset_id = 3902710
elif dsid == 'BIDMCRR' or dsid == 'BIDM32CRR': dset_id = 3902685
elif dsid == 'BIDMCHR' or dsid == 'BIDM32CHR': dset_id = 3902676
elif dsid == 'BIDMCSpO2' or dsid == 'BIDM32CSpO2': dset_id = 3902688
elif dsid == 'NewsHeadlineSentiment': dset_id = 3902718
elif dsid == 'NewsTitleSentiment': dset_id= 3902726
elif dsid == 'Covid3Month': dset_id = 3902690
for split in ['TRAIN', 'TEST']:
url = f"https://zenodo.org/record/{dset_id}/files/{dsid}_{split}.ts"
fname = Path(path)/f'{dsid}/{dsid}_{split}.ts'
pv('downloading data...', verbose)
try:
download_data(url, fname, c_key='archive', force_download=force_download, timeout=timeout)
except Exception as inst:
print(inst)
warnings.warn(f'Cannot download {dsid} dataset')
if split_data: return None, None, None, None
else: return None, None, None
pv('...download complete', verbose)
try:
if split == 'TRAIN':
X_train, y_train = _ts2dfV2(fname)
X_train = _check_X(X_train)
else:
X_valid, y_valid = _ts2dfV2(fname)
X_valid = _check_X(X_valid)
except Exception as inst:
print(inst)
warnings.warn(f'Cannot create numpy arrays for {dsid} dataset')
if split_data: return None, None, None, None
else: return None, None, None
np.save(f'{full_tgt_dir}/X_train.npy', X_train)
np.save(f'{full_tgt_dir}/y_train.npy', y_train)
np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)
np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)
np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))
np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))
del X_train, X_valid, y_train, y_valid
delete_all_in_dir(full_tgt_dir, exception='.npy')
pv('...numpy arrays correctly saved', verbose)
mmap_mode = mode if on_disk else None
X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)
y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)
X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)
y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)
if Xdtype is not None:
X_train = X_train.astype(Xdtype)
X_valid = X_valid.astype(Xdtype)
if ydtype is not None:
y_train = y_train.astype(ydtype)
y_valid = y_valid.astype(ydtype)
if split_data:
if verbose:
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_valid:', X_valid.shape)
print('y_valid:', y_valid.shape, '\n')
return X_train, y_train, X_valid, y_valid
else:
X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)
y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)
splits = get_predefined_splits(X_train, X_valid)
if verbose:
print('X :', X .shape)
print('y :', y .shape)
print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n')
return X, y, splits
get_regression_data = get_Monash_regression_data
dsid = "Covid3Month"
X_train, y_train, X_valid, y_valid = get_Monash_regression_data(dsid, on_disk=False, split_data=True, force_download=False)
X, y, splits = get_Monash_regression_data(dsid, on_disk=True, split_data=False, force_download=False, verbose=True)
if X_train is not None:
test_eq(X_train.shape, (140, 1, 84))
if X is not None:
test_eq(X.shape, (201, 1, 84))
#export
def get_forecasting_list():
return sorted([
"Sunspots", "Weather"
])
forecasting_time_series = get_forecasting_list()
#export
def get_forecasting_time_series(dsid, path='./data/forecasting/', force_download=False, verbose=True, **kwargs):
dsid_list = [fd for fd in forecasting_time_series if fd.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a forecasting dataset'
dsid = dsid_list[0]
if dsid == 'Weather': full_tgt_dir = Path(path)/f'{dsid}.csv.zip'
else: full_tgt_dir = Path(path)/f'{dsid}.csv'
pv(f'Dataset: {dsid}', verbose)
if dsid == 'Sunspots': url = "https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Sunspots.csv"
elif dsid == 'Weather': url = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip'
try:
pv("downloading data...", verbose)
if force_download:
try: os.remove(full_tgt_dir)
except OSError: pass
download_data(url, full_tgt_dir, force_download=force_download, **kwargs)
pv(f"...data downloaded. Path = {full_tgt_dir}", verbose)
if dsid == 'Sunspots':
df = pd.read_csv(full_tgt_dir, parse_dates=['Date'], index_col=['Date'])
return df['Monthly Mean Total Sunspot Number'].asfreq('1M').to_frame()
elif dsid == 'Weather':
# This code comes from a great Keras time-series tutorial notebook (https://www.tensorflow.org/tutorials/structured_data/time_series)
df = pd.read_csv(full_tgt_dir)
df = df[5::6] # slice [start:stop:step], starting from index 5 take every 6th record.
date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')
            # Replace erroneous wind values (coded as -9999.0) with 0.0
wv = df['wv (m/s)']
bad_wv = wv == -9999.0
wv[bad_wv] = 0.0
max_wv = df['max. wv (m/s)']
bad_max_wv = max_wv == -9999.0
max_wv[bad_max_wv] = 0.0
wv = df.pop('wv (m/s)')
max_wv = df.pop('max. wv (m/s)')
# Convert to radians.
wd_rad = df.pop('wd (deg)')*np.pi / 180
# Calculate the wind x and y components.
df['Wx'] = wv*np.cos(wd_rad)
df['Wy'] = wv*np.sin(wd_rad)
# Calculate the max wind x and y components.
df['max Wx'] = max_wv*np.cos(wd_rad)
df['max Wy'] = max_wv*np.sin(wd_rad)
timestamp_s = date_time.map(datetime.timestamp)
day = 24*60*60
year = (365.2425)*day
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
df.reset_index(drop=True, inplace=True)
return df
else:
return full_tgt_dir
except Exception as inst:
print(inst)
warnings.warn(f"Cannot download {dsid} dataset")
return
ts = get_forecasting_time_series("sunspots", force_download=False)
test_eq(len(ts), 3235)
ts
ts = get_forecasting_time_series("weather", force_download=False)
if ts is not None:
test_eq(len(ts), 70091)
print(ts)
# export
Monash_forecasting_list = ['m1_yearly_dataset',
'm1_quarterly_dataset',
'm1_monthly_dataset',
'm3_yearly_dataset',
'm3_quarterly_dataset',
'm3_monthly_dataset',
'm3_other_dataset',
'm4_yearly_dataset',
'm4_quarterly_dataset',
'm4_monthly_dataset',
'm4_weekly_dataset',
'm4_daily_dataset',
'm4_hourly_dataset',
'tourism_yearly_dataset',
'tourism_quarterly_dataset',
'tourism_monthly_dataset',
'nn5_daily_dataset_with_missing_values',
'nn5_daily_dataset_without_missing_values',
'nn5_weekly_dataset',
'cif_2016_dataset',
'kaggle_web_traffic_dataset_with_missing_values',
'kaggle_web_traffic_dataset_without_missing_values',
'kaggle_web_traffic_weekly_dataset',
'solar_10_minutes_dataset',
'solar_weekly_dataset',
'electricity_hourly_dataset',
'electricity_weekly_dataset',
'london_smart_meters_dataset_with_missing_values',
'london_smart_meters_dataset_without_missing_values',
'wind_farms_minutely_dataset_with_missing_values',
'wind_farms_minutely_dataset_without_missing_values',
'car_parts_dataset_with_missing_values',
'car_parts_dataset_without_missing_values',
'dominick_dataset',
'fred_md_dataset',
'traffic_hourly_dataset',
'traffic_weekly_dataset',
'pedestrian_counts_dataset',
'hospital_dataset',
'covid_deaths_dataset',
'kdd_cup_2018_dataset_with_missing_values',
'kdd_cup_2018_dataset_without_missing_values',
'weather_dataset',
'sunspot_dataset_with_missing_values',
'sunspot_dataset_without_missing_values',
'saugeenday_dataset',
'us_births_dataset',
'elecdemand_dataset',
'solar_4_seconds_dataset',
'wind_4_seconds_dataset',
'Sunspots', 'Weather']
forecasting_list = Monash_forecasting_list
# export
## Original code available at: https://github.com/rakshitha123/TSForecasting
# This repository contains the implementations of the experiments on a set of publicly available datasets
# used in the time series forecasting research space.
# The benchmark datasets are available at: https://zenodo.org/communities/forecasting. For more details, please refer to our website:
# https://forecastingdata.org/ and paper: https://arxiv.org/abs/2105.06643.
# Citation:
# @misc{godahewa2021monash,
# author="Godahewa, Rakshitha and Bergmeir, Christoph and Webb, Geoffrey I. and Hyndman, Rob J. and Montero-Manso, Pablo",
# title="Monash Time Series Forecasting Archive",
# howpublished ="\url{https://arxiv.org/abs/2105.06643}",
# year="2021"
# }
# Converts the contents in a .tsf file into a dataframe and returns it along with other meta-data of the dataset: frequency, horizon, whether the dataset contains missing values and whether the series have equal lengths
#
# Parameters
# full_file_path_and_name - complete .tsf file path
# replace_missing_vals_with - a term to indicate the missing values in series in the returning dataframe
# value_column_name - Any name that is preferred to have as the name of the column containing series values in the returning dataframe
def convert_tsf_to_dataframe(full_file_path_and_name, replace_missing_vals_with = 'NaN', value_column_name = "series_value"):
col_names = []
col_types = []
all_data = {}
line_count = 0
frequency = None
forecast_horizon = None
contain_missing_values = None
contain_equal_length = None
found_data_tag = False
found_data_section = False
started_reading_data_section = False
with open(full_file_path_and_name, 'r', encoding='cp1252') as file:
for line in file:
# Strip white space from start/end of line
line = line.strip()
if line:
if line.startswith("@"): # Read meta-data
if not line.startswith("@data"):
line_content = line.split(" ")
if line.startswith("@attribute"):
if (len(line_content) != 3): # Attributes have both name and type
raise _TsFileParseException("Invalid meta-data specification.")
col_names.append(line_content[1])
col_types.append(line_content[2])
else:
if len(line_content) != 2: # Other meta-data have only values
raise _TsFileParseException("Invalid meta-data specification.")
if line.startswith("@frequency"):
frequency = line_content[1]
elif line.startswith("@horizon"):
forecast_horizon = int(line_content[1])
elif line.startswith("@missing"):
contain_missing_values = bool(distutils.util.strtobool(line_content[1]))
elif line.startswith("@equallength"):
contain_equal_length = bool(distutils.util.strtobool(line_content[1]))
else:
if len(col_names) == 0:
raise _TsFileParseException("Missing attribute section. Attribute section must come before data.")
found_data_tag = True
elif not line.startswith("#"):
if len(col_names) == 0:
raise _TsFileParseException("Missing attribute section. Attribute section must come before data.")
elif not found_data_tag:
raise _TsFileParseException("Missing @data tag.")
else:
if not started_reading_data_section:
started_reading_data_section = True
found_data_section = True
all_series = []
for col in col_names:
all_data[col] = []
full_info = line.split(":")
if len(full_info) != (len(col_names) + 1):
raise _TsFileParseException("Missing attributes/values in series.")
series = full_info[len(full_info) - 1]
series = series.split(",")
                        if len(series) == 0:
                            raise _TsFileParseException("A given series should contain a set of comma-separated numeric values. At least one numeric value should be present in a series. Missing values should be indicated with the ? symbol.")
numeric_series = []
for val in series:
if val == "?":
numeric_series.append(replace_missing_vals_with)
else:
numeric_series.append(float(val))
                        if numeric_series.count(replace_missing_vals_with) == len(numeric_series):
                            raise _TsFileParseException("All series values are missing. A given series should contain a set of comma-separated numeric values. At least one numeric value should be present in a series.")
all_series.append(pd.Series(numeric_series).array)
for i in range(len(col_names)):
att_val = None
if col_types[i] == "numeric":
att_val = int(full_info[i])
elif col_types[i] == "string":
att_val = str(full_info[i])
elif col_types[i] == "date":
att_val = datetime.datetime.strptime(full_info[i], '%Y-%m-%d %H-%M-%S')
else:
raise _TsFileParseException("Invalid attribute type.") # Currently, the code supports only numeric, string and date types. Extend this as required.
                            if att_val is None:
raise _TsFileParseException("Invalid attribute value.")
else:
all_data[col_names[i]].append(att_val)
line_count = line_count + 1
if line_count == 0:
raise _TsFileParseException("Empty file.")
if len(col_names) == 0:
raise _TsFileParseException("Missing attribute section.")
if not found_data_section:
raise _TsFileParseException("Missing series information under data section.")
all_data[value_column_name] = all_series
loaded_data = pd.DataFrame(all_data)
return loaded_data, frequency, forecast_horizon, contain_missing_values, contain_equal_length
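# --- Usage sketch for convert_tsf_to_dataframe (hypothetical path) ---
# get_Monash_forecasting_data (defined below) downloads .tsf files like this one,
# so the call is guarded in case the file has not been downloaded yet.
_tsf_example = Path('./data/forecasting/m1_yearly_dataset.tsf')
if _tsf_example.is_file():
    _tsf_df, _freq, _horizon, _missing, _equal_len = convert_tsf_to_dataframe(str(_tsf_example))
    print(_tsf_df.shape, _freq, _horizon, _missing, _equal_len)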
# export
def get_Monash_forecasting_data(dsid, path='./data/forecasting/', force_download=False, remove_from_disk=False, verbose=True):
pv(f'Dataset: {dsid}', verbose)
dsid = dsid.lower()
assert dsid in Monash_forecasting_list, f'{dsid} not available in Monash_forecasting_list'
if dsid == 'm1_yearly_dataset': url = 'https://zenodo.org/record/4656193/files/m1_yearly_dataset.zip'
elif dsid == 'm1_quarterly_dataset': url = 'https://zenodo.org/record/4656154/files/m1_quarterly_dataset.zip'
elif dsid == 'm1_monthly_dataset': url = 'https://zenodo.org/record/4656159/files/m1_monthly_dataset.zip'
elif dsid == 'm3_yearly_dataset': url = 'https://zenodo.org/record/4656222/files/m3_yearly_dataset.zip'
elif dsid == 'm3_quarterly_dataset': url = 'https://zenodo.org/record/4656262/files/m3_quarterly_dataset.zip'
elif dsid == 'm3_monthly_dataset': url = 'https://zenodo.org/record/4656298/files/m3_monthly_dataset.zip'
elif dsid == 'm3_other_dataset': url = 'https://zenodo.org/record/4656335/files/m3_other_dataset.zip'
elif dsid == 'm4_yearly_dataset': url = 'https://zenodo.org/record/4656379/files/m4_yearly_dataset.zip'
elif dsid == 'm4_quarterly_dataset': url = 'https://zenodo.org/record/4656410/files/m4_quarterly_dataset.zip'
elif dsid == 'm4_monthly_dataset': url = 'https://zenodo.org/record/4656480/files/m4_monthly_dataset.zip'
elif dsid == 'm4_weekly_dataset': url = 'https://zenodo.org/record/4656522/files/m4_weekly_dataset.zip'
elif dsid == 'm4_daily_dataset': url = 'https://zenodo.org/record/4656548/files/m4_daily_dataset.zip'
elif dsid == 'm4_hourly_dataset': url = 'https://zenodo.org/record/4656589/files/m4_hourly_dataset.zip'
elif dsid == 'tourism_yearly_dataset': url = 'https://zenodo.org/record/4656103/files/tourism_yearly_dataset.zip'
elif dsid == 'tourism_quarterly_dataset': url = 'https://zenodo.org/record/4656093/files/tourism_quarterly_dataset.zip'
elif dsid == 'tourism_monthly_dataset': url = 'https://zenodo.org/record/4656096/files/tourism_monthly_dataset.zip'
elif dsid == 'nn5_daily_dataset_with_missing_values': url = 'https://zenodo.org/record/4656110/files/nn5_daily_dataset_with_missing_values.zip'
elif dsid == 'nn5_daily_dataset_without_missing_values': url = 'https://zenodo.org/record/4656117/files/nn5_daily_dataset_without_missing_values.zip'
elif dsid == 'nn5_weekly_dataset': url = 'https://zenodo.org/record/4656125/files/nn5_weekly_dataset.zip'
elif dsid == 'cif_2016_dataset': url = 'https://zenodo.org/record/4656042/files/cif_2016_dataset.zip'
elif dsid == 'kaggle_web_traffic_dataset_with_missing_values': url = 'https://zenodo.org/record/4656080/files/kaggle_web_traffic_dataset_with_missing_values.zip'
elif dsid == 'kaggle_web_traffic_dataset_without_missing_values': url = 'https://zenodo.org/record/4656075/files/kaggle_web_traffic_dataset_without_missing_values.zip'
    elif dsid == 'kaggle_web_traffic_weekly_dataset': url = 'https://zenodo.org/record/4656664/files/kaggle_web_traffic_weekly_dataset.zip'
elif dsid == 'solar_10_minutes_dataset': url = 'https://zenodo.org/record/4656144/files/solar_10_minutes_dataset.zip'
elif dsid == 'solar_weekly_dataset': url = 'https://zenodo.org/record/4656151/files/solar_weekly_dataset.zip'
elif dsid == 'electricity_hourly_dataset': url = 'https://zenodo.org/record/4656140/files/electricity_hourly_dataset.zip'
elif dsid == 'electricity_weekly_dataset': url = 'https://zenodo.org/record/4656141/files/electricity_weekly_dataset.zip'
elif dsid == 'london_smart_meters_dataset_with_missing_values': url = 'https://zenodo.org/record/4656072/files/london_smart_meters_dataset_with_missing_values.zip'
elif dsid == 'london_smart_meters_dataset_without_missing_values': url = 'https://zenodo.org/record/4656091/files/london_smart_meters_dataset_without_missing_values.zip'
elif dsid == 'wind_farms_minutely_dataset_with_missing_values': url = 'https://zenodo.org/record/4654909/files/wind_farms_minutely_dataset_with_missing_values.zip'
elif dsid == 'wind_farms_minutely_dataset_without_missing_values': url = 'https://zenodo.org/record/4654858/files/wind_farms_minutely_dataset_without_missing_values.zip'
elif dsid == 'car_parts_dataset_with_missing_values': url = 'https://zenodo.org/record/4656022/files/car_parts_dataset_with_missing_values.zip'
elif dsid == 'car_parts_dataset_without_missing_values': url = 'https://zenodo.org/record/4656021/files/car_parts_dataset_without_missing_values.zip'
elif dsid == 'dominick_dataset': url = 'https://zenodo.org/record/4654802/files/dominick_dataset.zip'
elif dsid == 'fred_md_dataset': url = 'https://zenodo.org/record/4654833/files/fred_md_dataset.zip'
elif dsid == 'traffic_hourly_dataset': url = 'https://zenodo.org/record/4656132/files/traffic_hourly_dataset.zip'
elif dsid == 'traffic_weekly_dataset': url = 'https://zenodo.org/record/4656135/files/traffic_weekly_dataset.zip'
elif dsid == 'pedestrian_counts_dataset': url = 'https://zenodo.org/record/4656626/files/pedestrian_counts_dataset.zip'
elif dsid == 'hospital_dataset': url = 'https://zenodo.org/record/4656014/files/hospital_dataset.zip'
elif dsid == 'covid_deaths_dataset': url = 'https://zenodo.org/record/4656009/files/covid_deaths_dataset.zip'
elif dsid == 'kdd_cup_2018_dataset_with_missing_values': url = 'https://zenodo.org/record/4656719/files/kdd_cup_2018_dataset_with_missing_values.zip'
elif dsid == 'kdd_cup_2018_dataset_without_missing_values': url = 'https://zenodo.org/record/4656756/files/kdd_cup_2018_dataset_without_missing_values.zip'
elif dsid == 'weather_dataset': url = 'https://zenodo.org/record/4654822/files/weather_dataset.zip'
elif dsid == 'sunspot_dataset_with_missing_values': url = 'https://zenodo.org/record/4654773/files/sunspot_dataset_with_missing_values.zip'
elif dsid == 'sunspot_dataset_without_missing_values': url = 'https://zenodo.org/record/4654722/files/sunspot_dataset_without_missing_values.zip'
elif dsid == 'saugeenday_dataset': url = 'https://zenodo.org/record/4656058/files/saugeenday_dataset.zip'
elif dsid == 'us_births_dataset': url = 'https://zenodo.org/record/4656049/files/us_births_dataset.zip'
elif dsid == 'elecdemand_dataset': url = 'https://zenodo.org/record/4656069/files/elecdemand_dataset.zip'
elif dsid == 'solar_4_seconds_dataset': url = 'https://zenodo.org/record/4656027/files/solar_4_seconds_dataset.zip'
elif dsid == 'wind_4_seconds_dataset': url = 'https://zenodo.org/record/4656032/files/wind_4_seconds_dataset.zip'
path = Path(path)
full_path = path/f'{dsid}.tsf'
if not full_path.exists() or force_download:
try:
decompress_from_url(url, target_dir=path, verbose=verbose)
except Exception as inst:
print(inst)
pv("converting dataframe to numpy array...", verbose)
data, frequency, forecast_horizon, contain_missing_values, contain_equal_length = convert_tsf_to_dataframe(full_path)
X = to3d(stack_pad(data['series_value']))
pv("...dataframe converted to numpy array", verbose)
pv(f'\nX.shape: {X.shape}', verbose)
pv(f'freq: {frequency}', verbose)
pv(f'forecast_horizon: {forecast_horizon}', verbose)
pv(f'contain_missing_values: {contain_missing_values}', verbose)
pv(f'contain_equal_length: {contain_equal_length}', verbose=verbose)
if remove_from_disk: os.remove(full_path)
return X
get_forecasting_data = get_Monash_forecasting_data
dsid = 'm1_yearly_dataset'
X = get_Monash_forecasting_data(dsid, force_download=False)
if X is not None:
test_eq(X.shape, (181, 1, 58))
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
nb_name = "012_data.external.ipynb"
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
External data
> Helper functions used to download and extract common time series datasets.
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.validation import *
#export
from sktime.utils.data_io import load_from_tsfile_to_dataframe as ts2df
from sktime.utils.validation.panel import check_X
from sktime.utils.data_io import TsFileParseException
#export
from fastai.data.external import *
from tqdm import tqdm
import zipfile
import tempfile
try: from urllib import urlretrieve
except ImportError: from urllib.request import urlretrieve
import shutil
import distutils
import distutils.util
#export
def decompress_from_url(url, target_dir=None, verbose=False):
# Download
try:
pv("downloading data...", verbose)
fname = os.path.basename(url)
tmpdir = tempfile.mkdtemp()
tmpfile = os.path.join(tmpdir, fname)
urlretrieve(url, tmpfile)
pv("...data downloaded", verbose)
# Decompress
try:
pv("decompressing data...", verbose)
if not os.path.exists(target_dir): os.makedirs(target_dir)
shutil.unpack_archive(tmpfile, target_dir)
shutil.rmtree(tmpdir)
pv("...data decompressed", verbose)
return target_dir
except:
shutil.rmtree(tmpdir)
if verbose: sys.stderr.write("Could not decompress file, aborting.\n")
except:
shutil.rmtree(tmpdir)
if verbose:
sys.stderr.write("Could not download url. Please, check url.\n")
#export
from fastdownload import download_url
def download_data(url, fname=None, c_key='archive', force_download=False, timeout=4, verbose=False):
"Download `url` to `fname`."
fname = Path(fname or URLs.path(url, c_key=c_key))
fname.parent.mkdir(parents=True, exist_ok=True)
if not fname.exists() or force_download: download_url(url, dest=fname, timeout=timeout, show_progress=verbose)
return fname
# export
def get_UCR_univariate_list():
return [
'ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY',
'AllGestureWiimoteZ', 'ArrowHead', 'Beef', 'BeetleFly', 'BirdChicken',
'BME', 'Car', 'CBF', 'Chinatown', 'ChlorineConcentration',
'CinCECGTorso', 'Coffee', 'Computers', 'CricketX', 'CricketY',
'CricketZ', 'Crop', 'DiatomSizeReduction',
'DistalPhalanxOutlineAgeGroup', 'DistalPhalanxOutlineCorrect',
'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame',
'DodgerLoopWeekend', 'Earthquakes', 'ECG200', 'ECG5000', 'ECGFiveDays',
'ElectricDevices', 'EOGHorizontalSignal', 'EOGVerticalSignal',
'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords',
'Fish', 'FordA', 'FordB', 'FreezerRegularTrain', 'FreezerSmallTrain',
'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3',
'GesturePebbleZ1', 'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan',
'GunPointMaleVersusFemale', 'GunPointOldVersusYoung', 'Ham',
'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate',
'InsectEPGRegularTrain', 'InsectEPGSmallTrain', 'InsectWingbeatSound',
'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2',
'Lightning7', 'Mallat', 'Meat', 'MedicalImages', 'MelbournePedestrian',
'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect',
'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain',
'MoteStrain', 'NonInvasiveFetalECGThorax1',
'NonInvasiveFetalECGThorax2', 'OliveOil', 'OSULeaf',
'PhalangesOutlinesCorrect', 'Phoneme', 'PickupGestureWiimoteZ',
'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'PLAID', 'Plane',
'PowerCons', 'ProximalPhalanxOutlineAgeGroup',
'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW',
'RefrigerationDevices', 'Rock', 'ScreenType', 'SemgHandGenderCh2',
'SemgHandMovementCh2', 'SemgHandSubjectCh2', 'ShakeGestureWiimoteZ',
'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace',
'SonyAIBORobotSurface1', 'SonyAIBORobotSurface2', 'StarLightCurves',
'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl',
'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG',
'TwoPatterns', 'UMD', 'UWaveGestureLibraryAll', 'UWaveGestureLibraryX',
'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine',
'WordSynonyms', 'Worms', 'WormsTwoClass', 'Yoga'
]
test_eq(len(get_UCR_univariate_list()), 128)
UTSC_datasets = get_UCR_univariate_list()
UCR_univariate_list = get_UCR_univariate_list()
#export
def get_UCR_multivariate_list():
return [
'ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions',
'CharacterTrajectories', 'Cricket', 'DuckDuckGeese', 'EigenWorms',
'Epilepsy', 'ERing', 'EthanolConcentration', 'FaceDetection',
'FingerMovements', 'HandMovementDirection', 'Handwriting', 'Heartbeat',
'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery',
'NATOPS', 'PEMS-SF', 'PenDigits', 'PhonemeSpectra', 'RacketSports',
'SelfRegulationSCP1', 'SelfRegulationSCP2', 'SpokenArabicDigits',
'StandWalkJump', 'UWaveGestureLibrary'
]
test_eq(len(get_UCR_multivariate_list()), 30)
MTSC_datasets = get_UCR_multivariate_list()
UCR_multivariate_list = get_UCR_multivariate_list()
UCR_list = sorted(UCR_univariate_list + UCR_multivariate_list)
classification_list = UCR_list
TSC_datasets = classification_datasets = UCR_list
len(UCR_list)
#export
def get_UCR_data(dsid, path='.', parent_dir='data/UCR', on_disk=True, mode='c', Xdtype='float32', ydtype=None, return_split=True, split_data=True,
force_download=False, verbose=False):
dsid_list = [ds for ds in UCR_list if ds.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a UCR dataset'
dsid = dsid_list[0]
return_split = return_split and split_data # keep return_split for compatibility. It will be replaced by split_data
if dsid in ['InsectWingbeat']:
warnings.warn(f'Be aware that download of the {dsid} dataset is very slow!')
pv(f'Dataset: {dsid}', verbose)
full_parent_dir = Path(path)/parent_dir
full_tgt_dir = full_parent_dir/dsid
# if not os.path.exists(full_tgt_dir): os.makedirs(full_tgt_dir)
full_tgt_dir.parent.mkdir(parents=True, exist_ok=True)
if force_download or not all([os.path.isfile(f'{full_tgt_dir}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):
# Option A
src_website = 'http://www.timeseriesclassification.com/Downloads'
decompress_from_url(f'{src_website}/{dsid}.zip', target_dir=full_tgt_dir, verbose=verbose)
if dsid == 'DuckDuckGeese':
with zipfile.ZipFile(Path(f'{full_parent_dir}/DuckDuckGeese/DuckDuckGeese_ts.zip'), 'r') as zip_ref:
zip_ref.extractall(Path(parent_dir))
        if not os.path.exists(full_tgt_dir/f'{dsid}_TRAIN.ts') or not os.path.exists(full_tgt_dir/f'{dsid}_TEST.ts') or \
Path(full_tgt_dir/f'{dsid}_TRAIN.ts').stat().st_size == 0 or Path(full_tgt_dir/f'{dsid}_TEST.ts').stat().st_size == 0:
print('It has not been possible to download the required files')
if return_split:
return None, None, None, None
else:
return None, None, None
pv('loading ts files to dataframe...', verbose)
X_train_df, y_train = ts2df(full_tgt_dir/f'{dsid}_TRAIN.ts')
X_valid_df, y_valid = ts2df(full_tgt_dir/f'{dsid}_TEST.ts')
pv('...ts files loaded', verbose)
pv('preparing numpy arrays...', verbose)
X_train_ = []
X_valid_ = []
for i in progress_bar(range(X_train_df.shape[-1]), display=verbose, leave=False):
X_train_.append(stack_pad(X_train_df[f'dim_{i}'])) # stack arrays even if they have different lengths
X_valid_.append(stack_pad(X_valid_df[f'dim_{i}'])) # stack arrays even if they have different lengths
X_train = np.transpose(np.stack(X_train_, axis=-1), (0, 2, 1))
X_valid = np.transpose(np.stack(X_valid_, axis=-1), (0, 2, 1))
X_train, X_valid = match_seq_len(X_train, X_valid)
np.save(f'{full_tgt_dir}/X_train.npy', X_train)
np.save(f'{full_tgt_dir}/y_train.npy', y_train)
np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)
np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)
np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))
np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))
del X_train, X_valid, y_train, y_valid
delete_all_in_dir(full_tgt_dir, exception='.npy')
pv('...numpy arrays correctly saved', verbose)
mmap_mode = mode if on_disk else None
X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)
y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)
X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)
y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)
if return_split:
if Xdtype is not None:
X_train = X_train.astype(Xdtype)
X_valid = X_valid.astype(Xdtype)
if ydtype is not None:
y_train = y_train.astype(ydtype)
y_valid = y_valid.astype(ydtype)
if verbose:
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_valid:', X_valid.shape)
print('y_valid:', y_valid.shape, '\n')
return X_train, y_train, X_valid, y_valid
else:
X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)
y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)
splits = get_predefined_splits(X_train, X_valid)
if Xdtype is not None:
X = X.astype(Xdtype)
if verbose:
print('X :', X .shape)
print('y :', y .shape)
print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n')
return X, y, splits
get_classification_data = get_UCR_data
#hide
PATH = Path('.')
dsids = ['ECGFiveDays', 'AtrialFibrillation'] # univariate and multivariate
for dsid in dsids:
print(dsid)
tgt_dir = PATH/f'data/UCR/{dsid}'
if os.path.isdir(tgt_dir): shutil.rmtree(tgt_dir)
test_eq(len(get_files(tgt_dir)), 0) # no file left
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)
test_eq(len(get_files(tgt_dir, '.npy')), 6)
test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test no left file/ dir
del X_train, y_train, X_valid, y_valid
start = time.time()
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)
elapsed = time.time() - start
test_eq(elapsed < 1, True)
test_eq(X_train.ndim, 3)
test_eq(y_train.ndim, 1)
test_eq(X_valid.ndim, 3)
test_eq(y_valid.ndim, 1)
test_eq(len(get_files(tgt_dir, '.npy')), 6)
test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test no left file/ dir
test_eq(X_train.ndim, 3)
test_eq(y_train.ndim, 1)
test_eq(X_valid.ndim, 3)
test_eq(y_valid.ndim, 1)
test_eq(X_train.dtype, np.float32)
test_eq(X_train.__class__.__name__, 'memmap')
del X_train, y_train, X_valid, y_valid
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, on_disk=False)
test_eq(X_train.__class__.__name__, 'ndarray')
del X_train, y_train, X_valid, y_valid
X_train, y_train, X_valid, y_valid = get_UCR_data('natops')
dsid = 'natops'
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, verbose=True)
X, y, splits = get_UCR_data(dsid, split_data=False)
test_eq(X[splits[0]], X_train)
test_eq(y[splits[1]], y_valid)
test_eq(X[splits[0]], X_train)
test_eq(y[splits[1]], y_valid)
test_type(X, X_train)
test_type(y, y_train)
#export
def check_data(X, y=None, splits=None, show_plot=True):
try: X_is_nan = np.isnan(X).sum()
    except: X_is_nan = 'could not be checked'
if X.ndim == 3:
shape = f'[{X.shape[0]} samples x {X.shape[1]} features x {X.shape[-1]} timesteps]'
print(f'X - shape: {shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')
else:
print(f'X - shape: {X.shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')
if not isinstance(X, np.ndarray): warnings.warn('X must be a np.ndarray')
if X_is_nan:
warnings.warn('X must not contain nan values')
if y is not None:
y_shape = y.shape
y = y.ravel()
if isinstance(y[0], str):
n_classes = f'{len(np.unique(y))} ({len(y)//len(np.unique(y))} samples per class) {L(np.unique(y).tolist())}'
y_is_nan = 'nan' in [c.lower() for c in np.unique(y)]
print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} n_classes: {n_classes} isnan: {y_is_nan}')
else:
y_is_nan = np.isnan(y).sum()
print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} isnan: {y_is_nan}')
if not isinstance(y, np.ndarray): warnings.warn('y must be a np.ndarray')
if y_is_nan:
warnings.warn('y must not contain nan values')
if splits is not None:
_splits = get_splits_len(splits)
overlap = check_splits_overlap(splits)
print(f'splits - n_splits: {len(_splits)} shape: {_splits} overlap: {overlap}')
if show_plot: plot_splits(splits)
dsid = 'ECGFiveDays'
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=True)
check_data(X, y, splits)
check_data(X[:, 0], y, splits)
y = y.astype(np.float32)
check_data(X, y, splits)
y[:10] = np.nan
check_data(X[:, 0], y, splits)
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=True)
splits = get_splits(y, 3)
check_data(X, y, splits)
check_data(X[:, 0], y, splits)
y[:5]= np.nan
check_data(X[:, 0], y, splits)
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=True)
#export
# This code comes from https://github.com/ChangWeiTan/TSRegression. As of Jan 16th, 2021 there's no pip install available.
# The following code is adapted from the python package sktime to read .ts file.
class _TsFileParseException(Exception):
"""
Should be raised when parsing a .ts file and the format is incorrect.
"""
pass
def _load_from_tsfile_to_dataframe2(full_file_path_and_name, return_separate_X_and_y=True, replace_missing_vals_with='NaN'):
"""Loads data from a .ts file into a Pandas DataFrame.
Parameters
----------
full_file_path_and_name: str
The full pathname of the .ts file to read.
return_separate_X_and_y: bool
true if X and Y values should be returned as separate Data Frames (X) and a numpy array (y), false otherwise.
        This is only relevant for data that has associated class or target values.
replace_missing_vals_with: str
The value that missing values in the text file should be replaced with prior to parsing.
Returns
-------
DataFrame, ndarray
If return_separate_X_and_y then a tuple containing a DataFrame and a numpy array containing the relevant time-series and corresponding class values.
DataFrame
If not return_separate_X_and_y then a single DataFrame containing all time-series and (if relevant) a column "class_vals" the associated class values.
"""
# Initialize flags and variables used when parsing the file
metadata_started = False
data_started = False
has_problem_name_tag = False
has_timestamps_tag = False
has_univariate_tag = False
has_class_labels_tag = False
has_target_labels_tag = False
has_data_tag = False
previous_timestamp_was_float = None
previous_timestamp_was_int = None
previous_timestamp_was_timestamp = None
num_dimensions = None
is_first_case = True
instance_list = []
class_val_list = []
line_num = 0
# Parse the file
# print(full_file_path_and_name)
with open(full_file_path_and_name, 'r', encoding='utf-8') as file:
for line in tqdm(file):
# print(".", end='')
# Strip white space from start/end of line and change to lowercase for use below
line = line.strip().lower()
# Empty lines are valid at any point in a file
if line:
# Check if this line contains metadata
# Please note that even though metadata is stored in this function it is not currently published externally
if line.startswith("@problemname"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("problemname tag requires an associated value")
problem_name = line[len("@problemname") + 1:]
has_problem_name_tag = True
metadata_started = True
elif line.startswith("@timestamps"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException("timestamps tag requires an associated Boolean value")
elif tokens[1] == "true":
timestamps = True
elif tokens[1] == "false":
timestamps = False
else:
raise _TsFileParseException("invalid timestamps value")
has_timestamps_tag = True
metadata_started = True
elif line.startswith("@univariate"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException("univariate tag requires an associated Boolean value")
elif tokens[1] == "true":
univariate = True
elif tokens[1] == "false":
univariate = False
else:
raise _TsFileParseException("invalid univariate value")
has_univariate_tag = True
metadata_started = True
elif line.startswith("@classlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("classlabel tag requires an associated Boolean value")
if tokens[1] == "true":
class_labels = True
elif tokens[1] == "false":
class_labels = False
else:
raise _TsFileParseException("invalid classLabel value")
# Check if we have any associated class values
if token_len == 2 and class_labels:
raise _TsFileParseException("if the classlabel tag is true then class values must be supplied")
has_class_labels_tag = True
class_label_list = [token.strip() for token in tokens[2:]]
metadata_started = True
elif line.startswith("@targetlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("targetlabel tag requires an associated Boolean value")
if tokens[1] == "true":
target_labels = True
elif tokens[1] == "false":
target_labels = False
else:
raise _TsFileParseException("invalid targetLabel value")
has_target_labels_tag = True
class_val_list = []
metadata_started = True
# Check if this line contains the start of data
elif line.startswith("@data"):
if line != "@data":
raise _TsFileParseException("data tag should not have an associated value")
if data_started and not metadata_started:
raise _TsFileParseException("metadata must come before data")
else:
has_data_tag = True
data_started = True
                # If the '@data' tag has been found then metadata has been parsed and data can be loaded
elif data_started:
# Check that a full set of metadata has been provided
incomplete_regression_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_target_labels_tag or not has_data_tag
incomplete_classification_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_class_labels_tag or not has_data_tag
if incomplete_regression_meta_data and incomplete_classification_meta_data:
raise _TsFileParseException("a full set of metadata has not been provided before the data")
# Replace any missing values with the value specified
line = line.replace("?", replace_missing_vals_with)
                    # Check if we are dealing with data that has timestamps
if timestamps:
# We're dealing with timestamps so cannot just split line on ':' as timestamps may contain one
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
this_line_num_dimensions = 0
line_len = len(line)
char_num = 0
while char_num < line_len:
# Move through any spaces
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
# See if there is any more data to read in or if we should validate that read thus far
if char_num < line_len:
# See if we have an empty dimension (i.e. no values)
if line[char_num] == ":":
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series())
this_line_num_dimensions += 1
has_another_value = False
has_another_dimension = True
timestamps_for_dimension = []
values_for_dimension = []
char_num += 1
else:
# Check if we have reached a class label
if line[char_num] != "(" and target_labels:
class_val = line[char_num:].strip()
# if class_val not in class_val_list:
# raise _TsFileParseException(
# "the class value '" + class_val + "' on line " + str(
# line_num + 1) + " is not valid")
class_val_list.append(float(class_val))
char_num = line_len
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
else:
# Read in the data contained within the next tuple
if line[char_num] != "(" and not target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not start with a '('")
char_num += 1
tuple_data = ""
while char_num < line_len and line[char_num] != ")":
tuple_data += line[char_num]
char_num += 1
if char_num >= line_len or line[char_num] != ")":
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not end with a ')'")
# Read in any spaces immediately after the current tuple
char_num += 1
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
# Check if there is another value or dimension to process after this tuple
if char_num >= line_len:
has_another_value = False
has_another_dimension = False
elif line[char_num] == ",":
has_another_value = True
has_another_dimension = False
elif line[char_num] == ":":
has_another_value = False
has_another_dimension = True
char_num += 1
# Get the numeric value for the tuple by reading from the end of the tuple data backwards to the last comma
last_comma_index = tuple_data.rfind(',')
if last_comma_index == -1:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has no comma inside of it")
try:
value = tuple_data[last_comma_index + 1:]
value = float(value)
except ValueError:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that does not have a valid numeric value")
# Check the type of timestamp that we have
timestamp = tuple_data[0: last_comma_index]
try:
timestamp = int(timestamp)
timestamp_is_int = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_int = False
if not timestamp_is_int:
try:
timestamp = float(timestamp)
timestamp_is_float = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_float = False
if not timestamp_is_int and not timestamp_is_float:
try:
timestamp = timestamp.strip()
timestamp_is_timestamp = True
except ValueError:
timestamp_is_timestamp = False
# Make sure that the timestamps in the file (not just this dimension or case) are consistent
if not timestamp_is_timestamp and not timestamp_is_int and not timestamp_is_float:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has an invalid timestamp '" + timestamp + "'")
if previous_timestamp_was_float is not None and previous_timestamp_was_float and not timestamp_is_float:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_int is not None and previous_timestamp_was_int and not timestamp_is_int:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_timestamp is not None and previous_timestamp_was_timestamp and not timestamp_is_timestamp:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
# Store the values
timestamps_for_dimension += [timestamp]
values_for_dimension += [value]
# If this was our first tuple then we store the type of timestamp we had
if previous_timestamp_was_timestamp is None and timestamp_is_timestamp:
previous_timestamp_was_timestamp = True
previous_timestamp_was_int = False
previous_timestamp_was_float = False
if previous_timestamp_was_int is None and timestamp_is_int:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = True
previous_timestamp_was_float = False
if previous_timestamp_was_float is None and timestamp_is_float:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = False
previous_timestamp_was_float = True
# See if we should add the data for this dimension
if not has_another_value:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
if timestamp_is_timestamp:
timestamps_for_dimension = pd.DatetimeIndex(timestamps_for_dimension)
instance_list[this_line_num_dimensions].append(
pd.Series(index=timestamps_for_dimension, data=values_for_dimension))
this_line_num_dimensions += 1
timestamps_for_dimension = []
values_for_dimension = []
elif has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series(dtype=np.float32))
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and not has_another_dimension:
if num_dimensions is None:
num_dimensions = this_line_num_dimensions
if num_dimensions != this_line_num_dimensions:
raise _TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
                    # Check that we are not expecting any more data; if not, store what was processed above
if has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series())
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and num_dimensions != this_line_num_dimensions:
raise _TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
# Check if we should have class values, and if so that they are contained in those listed in the metadata
if target_labels and len(class_val_list) == 0:
raise _TsFileParseException("the cases have no associated class values")
else:
dimensions = line.split(":")
# If first row then note the number of dimensions (that must be the same for all cases)
if is_first_case:
num_dimensions = len(dimensions)
if target_labels:
num_dimensions -= 1
for dim in range(0, num_dimensions):
instance_list.append([])
is_first_case = False
                        # See how many dimensions the case whose data is represented in this line has
this_line_num_dimensions = len(dimensions)
if target_labels:
this_line_num_dimensions -= 1
# All dimensions should be included for all series, even if they are empty
if this_line_num_dimensions != num_dimensions:
raise _TsFileParseException("inconsistent number of dimensions. Expecting " + str(
num_dimensions) + " but have read " + str(this_line_num_dimensions))
# Process the data for each dimension
for dim in range(0, num_dimensions):
dimension = dimensions[dim].strip()
if dimension:
data_series = dimension.split(",")
data_series = [float(i) for i in data_series]
instance_list[dim].append(pd.Series(data_series))
else:
instance_list[dim].append(pd.Series())
if target_labels:
class_val_list.append(float(dimensions[num_dimensions].strip()))
line_num += 1
# Check that the file was not empty
if line_num:
# Check that the file contained both metadata and data
complete_regression_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_target_labels_tag and has_data_tag
complete_classification_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_class_labels_tag and has_data_tag
if metadata_started and not complete_regression_meta_data and not complete_classification_meta_data:
raise _TsFileParseException("metadata incomplete")
elif metadata_started and not data_started:
raise _TsFileParseException("file contained metadata but no data")
elif metadata_started and data_started and len(instance_list) == 0:
raise _TsFileParseException("file contained metadata but no data")
# Create a DataFrame from the data parsed above
data = pd.DataFrame(dtype=np.float32)
for dim in range(0, num_dimensions):
data['dim_' + str(dim)] = instance_list[dim]
# Check if we should return any associated class labels separately
if target_labels:
if return_separate_X_and_y:
return data, np.asarray(class_val_list)
else:
data['class_vals'] = pd.Series(class_val_list)
return data
else:
return data
else:
raise _TsFileParseException("empty file")
#export
def get_Monash_regression_list():
return sorted([
"AustraliaRainfall", "HouseholdPowerConsumption1",
"HouseholdPowerConsumption2", "BeijingPM25Quality",
"BeijingPM10Quality", "Covid3Month", "LiveFuelMoistureContent",
"FloodModeling1", "FloodModeling2", "FloodModeling3",
"AppliancesEnergy", "BenzeneConcentration", "NewsHeadlineSentiment",
"NewsTitleSentiment", "IEEEPPG",
#"BIDMC32RR", "BIDMC32HR", "BIDMC32SpO2", "PPGDalia" # Cannot be downloaded
])
Monash_regression_list = get_Monash_regression_list()
regression_list = Monash_regression_list
TSR_datasets = regression_datasets = regression_list
len(Monash_regression_list)
#export
def get_Monash_regression_data(dsid, path='./data/Monash', on_disk=True, mode='c', Xdtype='float32', ydtype=None, split_data=True, force_download=False,
verbose=False):
dsid_list = [rd for rd in Monash_regression_list if rd.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a Monash dataset'
dsid = dsid_list[0]
full_tgt_dir = Path(path)/dsid
pv(f'Dataset: {dsid}', verbose)
if force_download or not all([os.path.isfile(f'{path}/{dsid}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):
if dsid == 'AppliancesEnergy': id = 3902637
elif dsid == 'HouseholdPowerConsumption1': id = 3902704
elif dsid == 'HouseholdPowerConsumption2': id = 3902706
elif dsid == 'BenzeneConcentration': id = 3902673
elif dsid == 'BeijingPM25Quality': id = 3902671
elif dsid == 'BeijingPM10Quality': id = 3902667
elif dsid == 'LiveFuelMoistureContent': id = 3902716
elif dsid == 'FloodModeling1': id = 3902694
elif dsid == 'FloodModeling2': id = 3902696
elif dsid == 'FloodModeling3': id = 3902698
elif dsid == 'AustraliaRainfall': id = 3902654
elif dsid == 'PPGDalia': id = 3902728
elif dsid == 'IEEEPPG': id = 3902710
        elif dsid == 'BIDMCRR' or dsid == 'BIDMC32RR': id = 3902685
        elif dsid == 'BIDMCHR' or dsid == 'BIDMC32HR': id = 3902676
        elif dsid == 'BIDMCSpO2' or dsid == 'BIDMC32SpO2': id = 3902688
elif dsid == 'NewsHeadlineSentiment': id = 3902718
elif dsid == 'NewsTitleSentiment': id = 3902726
elif dsid == 'Covid3Month': id = 3902690
for split in ['TRAIN', 'TEST']:
url = f"https://zenodo.org/record/{id}/files/{dsid}_{split}.ts"
fname = Path(path)/f'{dsid}/{dsid}_{split}.ts'
pv('downloading data...', verbose)
try:
download_data(url, fname, c_key='archive', force_download=force_download, timeout=4)
except:
warnings.warn(f'Cannot download {dsid} dataset')
if split_data: return None, None, None, None
else: return None, None, None
pv('...download complete', verbose)
if split == 'TRAIN':
X_train, y_train = _load_from_tsfile_to_dataframe2(fname)
X_train = check_X(X_train, coerce_to_numpy=True)
else:
X_valid, y_valid = _load_from_tsfile_to_dataframe2(fname)
X_valid = check_X(X_valid, coerce_to_numpy=True)
np.save(f'{full_tgt_dir}/X_train.npy', X_train)
np.save(f'{full_tgt_dir}/y_train.npy', y_train)
np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)
np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)
np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))
np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))
del X_train, X_valid, y_train, y_valid
delete_all_in_dir(full_tgt_dir, exception='.npy')
pv('...numpy arrays correctly saved', verbose)
mmap_mode = mode if on_disk else None
X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)
y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)
X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)
y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)
if Xdtype is not None:
X_train = X_train.astype(Xdtype)
X_valid = X_valid.astype(Xdtype)
if ydtype is not None:
y_train = y_train.astype(ydtype)
y_valid = y_valid.astype(ydtype)
if split_data:
if verbose:
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_valid:', X_valid.shape)
print('y_valid:', y_valid.shape, '\n')
return X_train, y_train, X_valid, y_valid
else:
X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)
y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)
splits = get_predefined_splits(X_train, X_valid)
if verbose:
print('X :', X .shape)
print('y :', y .shape)
print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n')
return X, y, splits
get_regression_data = get_Monash_regression_data
dsid = "Covid3Month"
X_train, y_train, X_valid, y_valid = get_Monash_regression_data(dsid, on_disk=False, split_data=True, force_download=True)
X, y, splits = get_Monash_regression_data(dsid, on_disk=True, split_data=False, force_download=True, verbose=True)
if X_train is not None:
test_eq(X_train.shape, (140, 1, 84))
if X is not None:
test_eq(X.shape, (201, 1, 84))
#export
def get_forecasting_list():
return sorted([
"Sunspots", "Weather"
])
forecasting_time_series = get_forecasting_list()
#export
def get_forecasting_time_series(dsid, path='./data/forecasting/', force_download=False, verbose=True, **kwargs):
dsid_list = [fd for fd in forecasting_time_series if fd.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a forecasting dataset'
dsid = dsid_list[0]
if dsid == 'Weather': full_tgt_dir = Path(path)/f'{dsid}.csv.zip'
else: full_tgt_dir = Path(path)/f'{dsid}.csv'
pv(f'Dataset: {dsid}', verbose)
if dsid == 'Sunspots': url = "https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Sunspots.csv"
elif dsid == 'Weather': url = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip'
try:
pv("downloading data...", verbose)
if force_download:
try: os.remove(full_tgt_dir)
except OSError: pass
download_data(url, full_tgt_dir, force_download=force_download, **kwargs)
pv(f"...data downloaded. Path = {full_tgt_dir}", verbose)
if dsid == 'Sunspots':
df = pd.read_csv(full_tgt_dir, parse_dates=['Date'], index_col=['Date'])
return df['Monthly Mean Total Sunspot Number'].asfreq('1M').to_frame()
elif dsid == 'Weather':
# This code comes from a great Keras time-series tutorial notebook (https://www.tensorflow.org/tutorials/structured_data/time_series)
df = pd.read_csv(full_tgt_dir)
df = df[5::6] # slice [start:stop:step], starting from index 5 take every 6th record.
date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')
# remove error (negative wind)
wv = df['wv (m/s)']
bad_wv = wv == -9999.0
wv[bad_wv] = 0.0
max_wv = df['max. wv (m/s)']
bad_max_wv = max_wv == -9999.0
max_wv[bad_max_wv] = 0.0
wv = df.pop('wv (m/s)')
max_wv = df.pop('max. wv (m/s)')
# Convert to radians.
wd_rad = df.pop('wd (deg)')*np.pi / 180
# Calculate the wind x and y components.
df['Wx'] = wv*np.cos(wd_rad)
df['Wy'] = wv*np.sin(wd_rad)
# Calculate the max wind x and y components.
df['max Wx'] = max_wv*np.cos(wd_rad)
df['max Wy'] = max_wv*np.sin(wd_rad)
timestamp_s = date_time.map(datetime.timestamp)
day = 24*60*60
year = (365.2425)*day
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
df.reset_index(drop=True, inplace=True)
return df
else:
return full_tgt_dir
except:
warnings.warn(f"Cannot download {dsid} dataset")
return
ts = get_forecasting_time_series("sunspots", force_download=True)
test_eq(len(ts), 3235)
ts
ts = get_forecasting_time_series("weather", force_download=True)
test_eq(len(ts), 70091)
ts
# export
Monash_forecasting_list = ['m1_yearly_dataset',
'm1_quarterly_dataset',
'm1_monthly_dataset',
'm3_yearly_dataset',
'm3_quarterly_dataset',
'm3_monthly_dataset',
'm3_other_dataset',
'm4_yearly_dataset',
'm4_quarterly_dataset',
'm4_monthly_dataset',
'm4_weekly_dataset',
'm4_daily_dataset',
'm4_hourly_dataset',
'tourism_yearly_dataset',
'tourism_quarterly_dataset',
'tourism_monthly_dataset',
'nn5_daily_dataset_with_missing_values',
'nn5_daily_dataset_without_missing_values',
'nn5_weekly_dataset',
'cif_2016_dataset',
'kaggle_web_traffic_dataset_with_missing_values',
'kaggle_web_traffic_dataset_without_missing_values',
'kaggle_web_traffic_weekly_dataset',
'solar_10_minutes_dataset',
'solar_weekly_dataset',
'electricity_hourly_dataset',
'electricity_weekly_dataset',
'london_smart_meters_dataset_with_missing_values',
'london_smart_meters_dataset_without_missing_values',
'wind_farms_minutely_dataset_with_missing_values',
'wind_farms_minutely_dataset_without_missing_values',
'car_parts_dataset_with_missing_values',
'car_parts_dataset_without_missing_values',
'dominick_dataset',
'fred_md_dataset',
'traffic_hourly_dataset',
'traffic_weekly_dataset',
'pedestrian_counts_dataset',
'hospital_dataset',
'covid_deaths_dataset',
'kdd_cup_2018_dataset_with_missing_values',
'kdd_cup_2018_dataset_without_missing_values',
'weather_dataset',
'sunspot_dataset_with_missing_values',
'sunspot_dataset_without_missing_values',
'saugeenday_dataset',
'us_births_dataset',
'elecdemand_dataset',
'solar_4_seconds_dataset',
'wind_4_seconds_dataset',
'Sunspots', 'Weather']
forecasting_list = Monash_forecasting_list
# export
## Original code available at: https://github.com/rakshitha123/TSForecasting
# This repository contains the implementations related to the experiments of a set of publicly available datasets that are used in
# the time series forecasting research space.
# The benchmark datasets are available at: https://zenodo.org/communities/forecasting. For more details, please refer to our website:
# https://forecastingdata.org/ and paper: https://arxiv.org/abs/2105.06643.
# Citation:
# @misc{godahewa2021monash,
# author="Godahewa, Rakshitha and Bergmeir, Christoph and Webb, Geoffrey I. and Hyndman, Rob J. and Montero-Manso, Pablo",
# title="Monash Time Series Forecasting Archive",
# howpublished ="\url{https://arxiv.org/abs/2105.06643}",
# year="2021"
# }
# Converts the contents in a .tsf file into a dataframe and returns it along with other meta-data of the dataset: frequency, horizon, whether the dataset contains missing values and whether the series have equal lengths
#
# Parameters
# full_file_path_and_name - complete .tsf file path
# replace_missing_vals_with - a term to indicate the missing values in series in the returning dataframe
# value_column_name - Any name that is preferred to have as the name of the column containing series values in the returning dataframe
def convert_tsf_to_dataframe(full_file_path_and_name, replace_missing_vals_with = 'NaN', value_column_name = "series_value"):
col_names = []
col_types = []
all_data = {}
line_count = 0
frequency = None
forecast_horizon = None
contain_missing_values = None
contain_equal_length = None
found_data_tag = False
found_data_section = False
started_reading_data_section = False
with open(full_file_path_and_name, 'r', encoding='cp1252') as file:
for line in file:
# Strip white space from start/end of line
line = line.strip()
if line:
if line.startswith("@"): # Read meta-data
if not line.startswith("@data"):
line_content = line.split(" ")
if line.startswith("@attribute"):
if (len(line_content) != 3): # Attributes have both name and type
raise TsFileParseException("Invalid meta-data specification.")
col_names.append(line_content[1])
col_types.append(line_content[2])
else:
if len(line_content) != 2: # Other meta-data have only values
raise TsFileParseException("Invalid meta-data specification.")
if line.startswith("@frequency"):
frequency = line_content[1]
elif line.startswith("@horizon"):
forecast_horizon = int(line_content[1])
elif line.startswith("@missing"):
contain_missing_values = bool(distutils.util.strtobool(line_content[1]))
elif line.startswith("@equallength"):
contain_equal_length = bool(distutils.util.strtobool(line_content[1]))
else:
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section. Attribute section must come before data.")
found_data_tag = True
elif not line.startswith("#"):
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section. Attribute section must come before data.")
elif not found_data_tag:
raise TsFileParseException("Missing @data tag.")
else:
if not started_reading_data_section:
started_reading_data_section = True
found_data_section = True
all_series = []
for col in col_names:
all_data[col] = []
full_info = line.split(":")
if len(full_info) != (len(col_names) + 1):
raise TsFileParseException("Missing attributes/values in series.")
series = full_info[len(full_info) - 1]
series = series.split(",")
                        if len(series) == 0:
                            raise TsFileParseException("A given series should contain a set of comma separated numeric values. At least one numeric value should be there in a series. Missing values should be indicated with the ? symbol.")
numeric_series = []
for val in series:
if val == "?":
numeric_series.append(replace_missing_vals_with)
else:
numeric_series.append(float(val))
                        if numeric_series.count(replace_missing_vals_with) == len(numeric_series):
                            raise TsFileParseException("All series values are missing. A given series should contain a set of comma separated numeric values. At least one numeric value should be there in a series.")
all_series.append(pd.Series(numeric_series).array)
for i in range(len(col_names)):
att_val = None
if col_types[i] == "numeric":
att_val = int(full_info[i])
elif col_types[i] == "string":
att_val = str(full_info[i])
elif col_types[i] == "date":
att_val = datetime.strptime(full_info[i], '%Y-%m-%d %H-%M-%S')
else:
raise TsFileParseException("Invalid attribute type.") # Currently, the code supports only numeric, string and date types. Extend this as required.
                            if att_val is None:
raise TsFileParseException("Invalid attribute value.")
else:
all_data[col_names[i]].append(att_val)
line_count = line_count + 1
if line_count == 0:
raise TsFileParseException("Empty file.")
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section.")
if not found_data_section:
raise TsFileParseException("Missing series information under data section.")
all_data[value_column_name] = all_series
loaded_data = pd.DataFrame(all_data)
return loaded_data, frequency, forecast_horizon, contain_missing_values, contain_equal_length
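# A small usage sketch of convert_tsf_to_dataframe, assuming a .tsf file has already been extracted
# (this is what get_Monash_forecasting_data below does via decompress_from_url before calling it):
_tsf_path = Path('./data/forecasting/m1_yearly_dataset.tsf')
if _tsf_path.exists():
    _df, _freq, _horizon, _missing, _equal_len = convert_tsf_to_dataframe(_tsf_path)
    print(_df.shape, _freq, _horizon)  # one row per series; the 'series_value' column holds each raw series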
# export
def get_Monash_forecasting_data(dsid, path='./data/forecasting/', force_download=False, remove_from_disk=False, verbose=True):
pv(f'Dataset: {dsid}', verbose)
dsid = dsid.lower()
assert dsid in Monash_forecasting_list, f'{dsid} not available in Monash_forecasting_list'
if dsid == 'm1_yearly_dataset': url = 'https://zenodo.org/record/4656193/files/m1_yearly_dataset.zip'
elif dsid == 'm1_quarterly_dataset': url = 'https://zenodo.org/record/4656154/files/m1_quarterly_dataset.zip'
elif dsid == 'm1_monthly_dataset': url = 'https://zenodo.org/record/4656159/files/m1_monthly_dataset.zip'
elif dsid == 'm3_yearly_dataset': url = 'https://zenodo.org/record/4656222/files/m3_yearly_dataset.zip'
elif dsid == 'm3_quarterly_dataset': url = 'https://zenodo.org/record/4656262/files/m3_quarterly_dataset.zip'
elif dsid == 'm3_monthly_dataset': url = 'https://zenodo.org/record/4656298/files/m3_monthly_dataset.zip'
elif dsid == 'm3_other_dataset': url = 'https://zenodo.org/record/4656335/files/m3_other_dataset.zip'
elif dsid == 'm4_yearly_dataset': url = 'https://zenodo.org/record/4656379/files/m4_yearly_dataset.zip'
elif dsid == 'm4_quarterly_dataset': url = 'https://zenodo.org/record/4656410/files/m4_quarterly_dataset.zip'
elif dsid == 'm4_monthly_dataset': url = 'https://zenodo.org/record/4656480/files/m4_monthly_dataset.zip'
elif dsid == 'm4_weekly_dataset': url = 'https://zenodo.org/record/4656522/files/m4_weekly_dataset.zip'
elif dsid == 'm4_daily_dataset': url = 'https://zenodo.org/record/4656548/files/m4_daily_dataset.zip'
elif dsid == 'm4_hourly_dataset': url = 'https://zenodo.org/record/4656589/files/m4_hourly_dataset.zip'
elif dsid == 'tourism_yearly_dataset': url = 'https://zenodo.org/record/4656103/files/tourism_yearly_dataset.zip'
elif dsid == 'tourism_quarterly_dataset': url = 'https://zenodo.org/record/4656093/files/tourism_quarterly_dataset.zip'
elif dsid == 'tourism_monthly_dataset': url = 'https://zenodo.org/record/4656096/files/tourism_monthly_dataset.zip'
elif dsid == 'nn5_daily_dataset_with_missing_values': url = 'https://zenodo.org/record/4656110/files/nn5_daily_dataset_with_missing_values.zip'
elif dsid == 'nn5_daily_dataset_without_missing_values': url = 'https://zenodo.org/record/4656117/files/nn5_daily_dataset_without_missing_values.zip'
elif dsid == 'nn5_weekly_dataset': url = 'https://zenodo.org/record/4656125/files/nn5_weekly_dataset.zip'
elif dsid == 'cif_2016_dataset': url = 'https://zenodo.org/record/4656042/files/cif_2016_dataset.zip'
elif dsid == 'kaggle_web_traffic_dataset_with_missing_values': url = 'https://zenodo.org/record/4656080/files/kaggle_web_traffic_dataset_with_missing_values.zip'
elif dsid == 'kaggle_web_traffic_dataset_without_missing_values': url = 'https://zenodo.org/record/4656075/files/kaggle_web_traffic_dataset_without_missing_values.zip'
    elif dsid == 'kaggle_web_traffic_weekly_dataset': url = 'https://zenodo.org/record/4656664/files/kaggle_web_traffic_weekly_dataset.zip'
elif dsid == 'solar_10_minutes_dataset': url = 'https://zenodo.org/record/4656144/files/solar_10_minutes_dataset.zip'
elif dsid == 'solar_weekly_dataset': url = 'https://zenodo.org/record/4656151/files/solar_weekly_dataset.zip'
elif dsid == 'electricity_hourly_dataset': url = 'https://zenodo.org/record/4656140/files/electricity_hourly_dataset.zip'
elif dsid == 'electricity_weekly_dataset': url = 'https://zenodo.org/record/4656141/files/electricity_weekly_dataset.zip'
elif dsid == 'london_smart_meters_dataset_with_missing_values': url = 'https://zenodo.org/record/4656072/files/london_smart_meters_dataset_with_missing_values.zip'
elif dsid == 'london_smart_meters_dataset_without_missing_values': url = 'https://zenodo.org/record/4656091/files/london_smart_meters_dataset_without_missing_values.zip'
elif dsid == 'wind_farms_minutely_dataset_with_missing_values': url = 'https://zenodo.org/record/4654909/files/wind_farms_minutely_dataset_with_missing_values.zip'
elif dsid == 'wind_farms_minutely_dataset_without_missing_values': url = 'https://zenodo.org/record/4654858/files/wind_farms_minutely_dataset_without_missing_values.zip'
elif dsid == 'car_parts_dataset_with_missing_values': url = 'https://zenodo.org/record/4656022/files/car_parts_dataset_with_missing_values.zip'
elif dsid == 'car_parts_dataset_without_missing_values': url = 'https://zenodo.org/record/4656021/files/car_parts_dataset_without_missing_values.zip'
elif dsid == 'dominick_dataset': url = 'https://zenodo.org/record/4654802/files/dominick_dataset.zip'
elif dsid == 'fred_md_dataset': url = 'https://zenodo.org/record/4654833/files/fred_md_dataset.zip'
elif dsid == 'traffic_hourly_dataset': url = 'https://zenodo.org/record/4656132/files/traffic_hourly_dataset.zip'
elif dsid == 'traffic_weekly_dataset': url = 'https://zenodo.org/record/4656135/files/traffic_weekly_dataset.zip'
elif dsid == 'pedestrian_counts_dataset': url = 'https://zenodo.org/record/4656626/files/pedestrian_counts_dataset.zip'
elif dsid == 'hospital_dataset': url = 'https://zenodo.org/record/4656014/files/hospital_dataset.zip'
elif dsid == 'covid_deaths_dataset': url = 'https://zenodo.org/record/4656009/files/covid_deaths_dataset.zip'
elif dsid == 'kdd_cup_2018_dataset_with_missing_values': url = 'https://zenodo.org/record/4656719/files/kdd_cup_2018_dataset_with_missing_values.zip'
elif dsid == 'kdd_cup_2018_dataset_without_missing_values': url = 'https://zenodo.org/record/4656756/files/kdd_cup_2018_dataset_without_missing_values.zip'
elif dsid == 'weather_dataset': url = 'https://zenodo.org/record/4654822/files/weather_dataset.zip'
elif dsid == 'sunspot_dataset_with_missing_values': url = 'https://zenodo.org/record/4654773/files/sunspot_dataset_with_missing_values.zip'
elif dsid == 'sunspot_dataset_without_missing_values': url = 'https://zenodo.org/record/4654722/files/sunspot_dataset_without_missing_values.zip'
elif dsid == 'saugeenday_dataset': url = 'https://zenodo.org/record/4656058/files/saugeenday_dataset.zip'
elif dsid == 'us_births_dataset': url = 'https://zenodo.org/record/4656049/files/us_births_dataset.zip'
elif dsid == 'elecdemand_dataset': url = 'https://zenodo.org/record/4656069/files/elecdemand_dataset.zip'
elif dsid == 'solar_4_seconds_dataset': url = 'https://zenodo.org/record/4656027/files/solar_4_seconds_dataset.zip'
elif dsid == 'wind_4_seconds_dataset': url = 'https://zenodo.org/record/4656032/files/wind_4_seconds_dataset.zip'
path = Path(path)
full_path = path/f'{dsid}.tsf'
if not full_path.exists() or force_download:
decompress_from_url(url, target_dir=path, verbose=verbose)
pv("converting dataframe to numpy array...", verbose)
data, frequency, forecast_horizon, contain_missing_values, contain_equal_length = convert_tsf_to_dataframe(full_path)
X = to3d(stack_pad(data['series_value']))
pv("...dataframe converted to numpy array", verbose)
pv(f'\nX.shape: {X.shape}', verbose)
pv(f'freq: {frequency}', verbose)
pv(f'forecast_horizon: {forecast_horizon}', verbose)
pv(f'contain_missing_values: {contain_missing_values}', verbose)
pv(f'contain_equal_length: {contain_equal_length}', verbose=verbose)
if remove_from_disk: os.remove(full_path)
return X
get_forecasting_data = get_Monash_forecasting_data
dsid = 'm1_yearly_dataset'
X = get_Monash_forecasting_data(dsid, force_download=True, remove_from_disk=True)
test_eq(X.shape, (181, 1, 58))
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
External data> Helper functions used to download and extract common time series datasets.
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.validation import *
#export
from sktime.utils.data_io import load_from_tsfile_to_dataframe as ts2df
from sktime.utils.validation.panel import check_X
from sktime.utils.data_io import TsFileParseException
#export
from fastai.data.external import *
from tqdm import tqdm
import zipfile
import tempfile
try: from urllib import urlretrieve
except ImportError: from urllib.request import urlretrieve
import shutil
from numpy import distutils
import distutils
#export
def decompress_from_url(url, target_dir=None, verbose=False):
# Download
try:
pv("downloading data...", verbose)
fname = os.path.basename(url)
tmpdir = tempfile.mkdtemp()
tmpfile = os.path.join(tmpdir, fname)
urlretrieve(url, tmpfile)
pv("...data downloaded", verbose)
# Decompress
try:
pv("decompressing data...", verbose)
if not os.path.exists(target_dir): os.makedirs(target_dir)
shutil.unpack_archive(tmpfile, target_dir)
shutil.rmtree(tmpdir)
pv("...data decompressed", verbose)
return target_dir
except:
shutil.rmtree(tmpdir)
if verbose: sys.stderr.write("Could not decompress file, aborting.\n")
except:
shutil.rmtree(tmpdir)
if verbose:
sys.stderr.write("Could not download url. Please, check url.\n")
#export
from fastdownload import download_url
def download_data(url, fname=None, c_key='archive', force_download=False, timeout=4, verbose=False):
"Download `url` to `fname`."
fname = Path(fname or URLs.path(url, c_key=c_key))
fname.parent.mkdir(parents=True, exist_ok=True)
if not fname.exists() or force_download: download_url(url, dest=fname, timeout=timeout, show_progress=verbose)
return fname
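# A minimal sketch of download_data, assuming the Zenodo record for the Covid3Month regression dataset
# (record 3902690, the one used later by get_Monash_regression_data) is reachable:
_fname = download_data('https://zenodo.org/record/3902690/files/Covid3Month_TRAIN.ts',
                       fname='./data/Monash/Covid3Month/Covid3Month_TRAIN.ts', verbose=True)
_fname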
# export
def get_UCR_univariate_list():
return [
'ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY',
'AllGestureWiimoteZ', 'ArrowHead', 'Beef', 'BeetleFly', 'BirdChicken',
'BME', 'Car', 'CBF', 'Chinatown', 'ChlorineConcentration',
'CinCECGTorso', 'Coffee', 'Computers', 'CricketX', 'CricketY',
'CricketZ', 'Crop', 'DiatomSizeReduction',
'DistalPhalanxOutlineAgeGroup', 'DistalPhalanxOutlineCorrect',
'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame',
'DodgerLoopWeekend', 'Earthquakes', 'ECG200', 'ECG5000', 'ECGFiveDays',
'ElectricDevices', 'EOGHorizontalSignal', 'EOGVerticalSignal',
'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords',
'Fish', 'FordA', 'FordB', 'FreezerRegularTrain', 'FreezerSmallTrain',
'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3',
'GesturePebbleZ1', 'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan',
'GunPointMaleVersusFemale', 'GunPointOldVersusYoung', 'Ham',
'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate',
'InsectEPGRegularTrain', 'InsectEPGSmallTrain', 'InsectWingbeatSound',
'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2',
'Lightning7', 'Mallat', 'Meat', 'MedicalImages', 'MelbournePedestrian',
'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect',
'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain',
'MoteStrain', 'NonInvasiveFetalECGThorax1',
'NonInvasiveFetalECGThorax2', 'OliveOil', 'OSULeaf',
'PhalangesOutlinesCorrect', 'Phoneme', 'PickupGestureWiimoteZ',
'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'PLAID', 'Plane',
'PowerCons', 'ProximalPhalanxOutlineAgeGroup',
'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW',
'RefrigerationDevices', 'Rock', 'ScreenType', 'SemgHandGenderCh2',
'SemgHandMovementCh2', 'SemgHandSubjectCh2', 'ShakeGestureWiimoteZ',
'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace',
'SonyAIBORobotSurface1', 'SonyAIBORobotSurface2', 'StarLightCurves',
'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl',
'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG',
'TwoPatterns', 'UMD', 'UWaveGestureLibraryAll', 'UWaveGestureLibraryX',
'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine',
'WordSynonyms', 'Worms', 'WormsTwoClass', 'Yoga'
]
test_eq(len(get_UCR_univariate_list()), 128)
UTSC_datasets = get_UCR_univariate_list()
UCR_univariate_list = get_UCR_univariate_list()
#export
def get_UCR_multivariate_list():
return [
'ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions',
'CharacterTrajectories', 'Cricket', 'DuckDuckGeese', 'EigenWorms',
'Epilepsy', 'ERing', 'EthanolConcentration', 'FaceDetection',
'FingerMovements', 'HandMovementDirection', 'Handwriting', 'Heartbeat',
'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery',
'NATOPS', 'PEMS-SF', 'PenDigits', 'PhonemeSpectra', 'RacketSports',
'SelfRegulationSCP1', 'SelfRegulationSCP2', 'SpokenArabicDigits',
'StandWalkJump', 'UWaveGestureLibrary'
]
test_eq(len(get_UCR_multivariate_list()), 30)
MTSC_datasets = get_UCR_multivariate_list()
UCR_multivariate_list = get_UCR_multivariate_list()
UCR_list = sorted(UCR_univariate_list + UCR_multivariate_list)
classification_list = UCR_list
TSC_datasets = classification_datasets = UCR_list
len(UCR_list)
#export
def get_UCR_data(dsid, path='.', parent_dir='data/UCR', on_disk=True, mode='c', Xdtype='float32', ydtype=None, return_split=True, split_data=True,
force_download=False, verbose=False):
dsid_list = [ds for ds in UCR_list if ds.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a UCR dataset'
dsid = dsid_list[0]
return_split = return_split and split_data # keep return_split for compatibility. It will be replaced by split_data
if dsid in ['InsectWingbeat']:
warnings.warn(f'Be aware that download of the {dsid} dataset is very slow!')
pv(f'Dataset: {dsid}', verbose)
full_parent_dir = Path(path)/parent_dir
full_tgt_dir = full_parent_dir/dsid
# if not os.path.exists(full_tgt_dir): os.makedirs(full_tgt_dir)
full_tgt_dir.parent.mkdir(parents=True, exist_ok=True)
if force_download or not all([os.path.isfile(f'{full_tgt_dir}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):
# Option A
src_website = 'http://www.timeseriesclassification.com/Downloads'
decompress_from_url(f'{src_website}/{dsid}.zip', target_dir=full_tgt_dir, verbose=verbose)
if dsid == 'DuckDuckGeese':
with zipfile.ZipFile(Path(f'{full_parent_dir}/DuckDuckGeese/DuckDuckGeese_ts.zip'), 'r') as zip_ref:
zip_ref.extractall(Path(parent_dir))
        if not os.path.exists(full_tgt_dir/f'{dsid}_TRAIN.ts') or not os.path.exists(full_tgt_dir/f'{dsid}_TEST.ts') or \
Path(full_tgt_dir/f'{dsid}_TRAIN.ts').stat().st_size == 0 or Path(full_tgt_dir/f'{dsid}_TEST.ts').stat().st_size == 0:
print('It has not been possible to download the required files')
if return_split:
return None, None, None, None
else:
return None, None, None
pv('loading ts files to dataframe...', verbose)
X_train_df, y_train = ts2df(full_tgt_dir/f'{dsid}_TRAIN.ts')
X_valid_df, y_valid = ts2df(full_tgt_dir/f'{dsid}_TEST.ts')
pv('...ts files loaded', verbose)
pv('preparing numpy arrays...', verbose)
X_train_ = []
X_valid_ = []
for i in progress_bar(range(X_train_df.shape[-1]), display=verbose, leave=False):
X_train_.append(stack_pad(X_train_df[f'dim_{i}'])) # stack arrays even if they have different lengths
X_valid_.append(stack_pad(X_valid_df[f'dim_{i}'])) # stack arrays even if they have different lengths
X_train = np.transpose(np.stack(X_train_, axis=-1), (0, 2, 1))
X_valid = np.transpose(np.stack(X_valid_, axis=-1), (0, 2, 1))
X_train, X_valid = match_seq_len(X_train, X_valid)
np.save(f'{full_tgt_dir}/X_train.npy', X_train)
np.save(f'{full_tgt_dir}/y_train.npy', y_train)
np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)
np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)
np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))
np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))
del X_train, X_valid, y_train, y_valid
delete_all_in_dir(full_tgt_dir, exception='.npy')
pv('...numpy arrays correctly saved', verbose)
mmap_mode = mode if on_disk else None
X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)
y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)
X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)
y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)
if return_split:
if Xdtype is not None:
X_train = X_train.astype(Xdtype)
X_valid = X_valid.astype(Xdtype)
if ydtype is not None:
y_train = y_train.astype(ydtype)
y_valid = y_valid.astype(ydtype)
if verbose:
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_valid:', X_valid.shape)
print('y_valid:', y_valid.shape, '\n')
return X_train, y_train, X_valid, y_valid
else:
X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)
y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)
splits = get_predefined_splits(X_train, X_valid)
if Xdtype is not None:
X = X.astype(Xdtype)
if verbose:
print('X :', X .shape)
print('y :', y .shape)
print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n')
return X, y, splits
get_classification_data = get_UCR_data
#hide
PATH = Path('.')
dsids = ['ECGFiveDays', 'AtrialFibrillation'] # univariate and multivariate
for dsid in dsids:
print(dsid)
tgt_dir = PATH/f'data/UCR/{dsid}'
if os.path.isdir(tgt_dir): shutil.rmtree(tgt_dir)
test_eq(len(get_files(tgt_dir)), 0) # no file left
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)
test_eq(len(get_files(tgt_dir, '.npy')), 6)
test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test no left file/ dir
del X_train, y_train, X_valid, y_valid
start = time.time()
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)
elapsed = time.time() - start
test_eq(elapsed < 1, True)
test_eq(X_train.ndim, 3)
test_eq(y_train.ndim, 1)
test_eq(X_valid.ndim, 3)
test_eq(y_valid.ndim, 1)
test_eq(len(get_files(tgt_dir, '.npy')), 6)
test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test no left file/ dir
test_eq(X_train.ndim, 3)
test_eq(y_train.ndim, 1)
test_eq(X_valid.ndim, 3)
test_eq(y_valid.ndim, 1)
test_eq(X_train.dtype, np.float32)
test_eq(X_train.__class__.__name__, 'memmap')
del X_train, y_train, X_valid, y_valid
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, on_disk=False)
test_eq(X_train.__class__.__name__, 'ndarray')
del X_train, y_train, X_valid, y_valid
X_train, y_train, X_valid, y_valid = get_UCR_data('natops')
dsid = 'natops'
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, verbose=True)
X, y, splits = get_UCR_data(dsid, split_data=False)
test_eq(X[splits[0]], X_train)
test_eq(y[splits[1]], y_valid)
test_eq(X[splits[0]], X_train)
test_eq(y[splits[1]], y_valid)
test_type(X, X_train)
test_type(y, y_train)
#export
def check_data(X, y=None, splits=None, show_plot=True):
try: X_is_nan = np.isnan(X).sum()
except: X_is_nan = 'could not be checked'
if X.ndim == 3:
shape = f'[{X.shape[0]} samples x {X.shape[1]} features x {X.shape[-1]} timesteps]'
print(f'X - shape: {shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')
else:
print(f'X - shape: {X.shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')
if X_is_nan:
warnings.warn('X contains nan values')
if y is not None:
y_shape = y.shape
y = y.ravel()
if isinstance(y[0], str):
n_classes = f'{len(np.unique(y))} ({len(y)//len(np.unique(y))} samples per class) {L(np.unique(y).tolist())}'
y_is_nan = 'nan' in [c.lower() for c in np.unique(y)]
print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} n_classes: {n_classes} isnan: {y_is_nan}')
else:
y_is_nan = np.isnan(y).sum()
print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} isnan: {y_is_nan}')
if y_is_nan:
warnings.warn('y contains nan values')
if splits is not None:
_splits = get_splits_len(splits)
overlap = check_splits_overlap(splits)
print(f'splits - n_splits: {len(_splits)} shape: {_splits} overlap: {overlap}')
if show_plot: plot_splits(splits)
dsid = 'ECGFiveDays'
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
check_data(X, y, splits)
check_data(X[:, 0], y, splits)
y = y.astype(np.float32)
check_data(X, y, splits)
y[:10] = np.nan
check_data(X[:, 0], y, splits)
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
splits = get_splits(y, 3)
check_data(X, y, splits)
check_data(X[:, 0], y, splits)
y[:5]= np.nan
check_data(X[:, 0], y, splits)
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
#export
# This code comes from https://github.com/ChangWeiTan/TSRegression. As of Jan 16th, 2021 there's no pip install available.
# The following code is adapted from the python package sktime to read .ts file.
class _TsFileParseException(Exception):
"""
Should be raised when parsing a .ts file and the format is incorrect.
"""
pass
def _load_from_tsfile_to_dataframe2(full_file_path_and_name, return_separate_X_and_y=True, replace_missing_vals_with='NaN'):
"""Loads data from a .ts file into a Pandas DataFrame.
Parameters
----------
full_file_path_and_name: str
The full pathname of the .ts file to read.
return_separate_X_and_y: bool
true if X and Y values should be returned as separate Data Frames (X) and a numpy array (y), false otherwise.
        This is only relevant for data that has associated class or target values.
replace_missing_vals_with: str
The value that missing values in the text file should be replaced with prior to parsing.
Returns
-------
DataFrame, ndarray
If return_separate_X_and_y then a tuple containing a DataFrame and a numpy array containing the relevant time-series and corresponding class values.
DataFrame
If not return_separate_X_and_y then a single DataFrame containing all time-series and (if relevant) a column "class_vals" the associated class values.
"""
# Initialize flags and variables used when parsing the file
metadata_started = False
data_started = False
has_problem_name_tag = False
has_timestamps_tag = False
has_univariate_tag = False
has_class_labels_tag = False
has_target_labels_tag = False
has_data_tag = False
previous_timestamp_was_float = None
previous_timestamp_was_int = None
previous_timestamp_was_timestamp = None
num_dimensions = None
is_first_case = True
instance_list = []
class_val_list = []
line_num = 0
# Parse the file
# print(full_file_path_and_name)
with open(full_file_path_and_name, 'r', encoding='utf-8') as file:
for line in tqdm(file):
# print(".", end='')
# Strip white space from start/end of line and change to lowercase for use below
line = line.strip().lower()
# Empty lines are valid at any point in a file
if line:
# Check if this line contains metadata
# Please note that even though metadata is stored in this function it is not currently published externally
if line.startswith("@problemname"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("problemname tag requires an associated value")
problem_name = line[len("@problemname") + 1:]
has_problem_name_tag = True
metadata_started = True
elif line.startswith("@timestamps"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException("timestamps tag requires an associated Boolean value")
elif tokens[1] == "true":
timestamps = True
elif tokens[1] == "false":
timestamps = False
else:
raise _TsFileParseException("invalid timestamps value")
has_timestamps_tag = True
metadata_started = True
elif line.startswith("@univariate"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException("univariate tag requires an associated Boolean value")
elif tokens[1] == "true":
univariate = True
elif tokens[1] == "false":
univariate = False
else:
raise _TsFileParseException("invalid univariate value")
has_univariate_tag = True
metadata_started = True
elif line.startswith("@classlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("classlabel tag requires an associated Boolean value")
if tokens[1] == "true":
class_labels = True
elif tokens[1] == "false":
class_labels = False
else:
raise _TsFileParseException("invalid classLabel value")
# Check if we have any associated class values
if token_len == 2 and class_labels:
raise _TsFileParseException("if the classlabel tag is true then class values must be supplied")
has_class_labels_tag = True
class_label_list = [token.strip() for token in tokens[2:]]
metadata_started = True
elif line.startswith("@targetlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("targetlabel tag requires an associated Boolean value")
if tokens[1] == "true":
target_labels = True
elif tokens[1] == "false":
target_labels = False
else:
raise _TsFileParseException("invalid targetLabel value")
has_target_labels_tag = True
class_val_list = []
metadata_started = True
# Check if this line contains the start of data
elif line.startswith("@data"):
if line != "@data":
raise _TsFileParseException("data tag should not have an associated value")
if data_started and not metadata_started:
raise _TsFileParseException("metadata must come before data")
else:
has_data_tag = True
data_started = True
                # If the @data tag has been found then metadata has been parsed and data can be loaded
elif data_started:
# Check that a full set of metadata has been provided
incomplete_regression_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_target_labels_tag or not has_data_tag
incomplete_classification_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_class_labels_tag or not has_data_tag
if incomplete_regression_meta_data and incomplete_classification_meta_data:
raise _TsFileParseException("a full set of metadata has not been provided before the data")
# Replace any missing values with the value specified
line = line.replace("?", replace_missing_vals_with)
                    # Check if we are dealing with data that has timestamps
if timestamps:
# We're dealing with timestamps so cannot just split line on ':' as timestamps may contain one
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
this_line_num_dimensions = 0
line_len = len(line)
char_num = 0
while char_num < line_len:
# Move through any spaces
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
                            # See if there is any more data to read in or if we should validate what has been read thus far
if char_num < line_len:
# See if we have an empty dimension (i.e. no values)
if line[char_num] == ":":
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series())
this_line_num_dimensions += 1
has_another_value = False
has_another_dimension = True
timestamps_for_dimension = []
values_for_dimension = []
char_num += 1
else:
# Check if we have reached a class label
if line[char_num] != "(" and target_labels:
class_val = line[char_num:].strip()
# if class_val not in class_val_list:
# raise _TsFileParseException(
# "the class value '" + class_val + "' on line " + str(
# line_num + 1) + " is not valid")
class_val_list.append(float(class_val))
char_num = line_len
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
else:
# Read in the data contained within the next tuple
if line[char_num] != "(" and not target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not start with a '('")
char_num += 1
tuple_data = ""
while char_num < line_len and line[char_num] != ")":
tuple_data += line[char_num]
char_num += 1
if char_num >= line_len or line[char_num] != ")":
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not end with a ')'")
# Read in any spaces immediately after the current tuple
char_num += 1
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
# Check if there is another value or dimension to process after this tuple
if char_num >= line_len:
has_another_value = False
has_another_dimension = False
elif line[char_num] == ",":
has_another_value = True
has_another_dimension = False
elif line[char_num] == ":":
has_another_value = False
has_another_dimension = True
char_num += 1
# Get the numeric value for the tuple by reading from the end of the tuple data backwards to the last comma
last_comma_index = tuple_data.rfind(',')
if last_comma_index == -1:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has no comma inside of it")
try:
value = tuple_data[last_comma_index + 1:]
value = float(value)
except ValueError:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that does not have a valid numeric value")
# Check the type of timestamp that we have
timestamp = tuple_data[0: last_comma_index]
try:
timestamp = int(timestamp)
timestamp_is_int = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_int = False
if not timestamp_is_int:
try:
timestamp = float(timestamp)
timestamp_is_float = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_float = False
if not timestamp_is_int and not timestamp_is_float:
try:
timestamp = timestamp.strip()
timestamp_is_timestamp = True
except ValueError:
timestamp_is_timestamp = False
# Make sure that the timestamps in the file (not just this dimension or case) are consistent
if not timestamp_is_timestamp and not timestamp_is_int and not timestamp_is_float:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has an invalid timestamp '" + timestamp + "'")
if previous_timestamp_was_float is not None and previous_timestamp_was_float and not timestamp_is_float:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_int is not None and previous_timestamp_was_int and not timestamp_is_int:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_timestamp is not None and previous_timestamp_was_timestamp and not timestamp_is_timestamp:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
# Store the values
timestamps_for_dimension += [timestamp]
values_for_dimension += [value]
# If this was our first tuple then we store the type of timestamp we had
if previous_timestamp_was_timestamp is None and timestamp_is_timestamp:
previous_timestamp_was_timestamp = True
previous_timestamp_was_int = False
previous_timestamp_was_float = False
if previous_timestamp_was_int is None and timestamp_is_int:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = True
previous_timestamp_was_float = False
if previous_timestamp_was_float is None and timestamp_is_float:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = False
previous_timestamp_was_float = True
# See if we should add the data for this dimension
if not has_another_value:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
if timestamp_is_timestamp:
timestamps_for_dimension = pd.DatetimeIndex(timestamps_for_dimension)
instance_list[this_line_num_dimensions].append(
pd.Series(index=timestamps_for_dimension, data=values_for_dimension))
this_line_num_dimensions += 1
timestamps_for_dimension = []
values_for_dimension = []
elif has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series(dtype=np.float32))
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and not has_another_dimension:
if num_dimensions is None:
num_dimensions = this_line_num_dimensions
if num_dimensions != this_line_num_dimensions:
raise _TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
                        # Check that we are not expecting any more data, and if not, store what was processed above
if has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series())
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and num_dimensions != this_line_num_dimensions:
raise _TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
# Check if we should have class values, and if so that they are contained in those listed in the metadata
if target_labels and len(class_val_list) == 0:
raise _TsFileParseException("the cases have no associated class values")
else:
dimensions = line.split(":")
# If first row then note the number of dimensions (that must be the same for all cases)
if is_first_case:
num_dimensions = len(dimensions)
if target_labels:
num_dimensions -= 1
for dim in range(0, num_dimensions):
instance_list.append([])
is_first_case = False
                        # See how many dimensions the case whose data is represented in this line has
this_line_num_dimensions = len(dimensions)
if target_labels:
this_line_num_dimensions -= 1
# All dimensions should be included for all series, even if they are empty
if this_line_num_dimensions != num_dimensions:
raise _TsFileParseException("inconsistent number of dimensions. Expecting " + str(
num_dimensions) + " but have read " + str(this_line_num_dimensions))
# Process the data for each dimension
for dim in range(0, num_dimensions):
dimension = dimensions[dim].strip()
if dimension:
data_series = dimension.split(",")
data_series = [float(i) for i in data_series]
instance_list[dim].append(pd.Series(data_series))
else:
instance_list[dim].append(pd.Series())
if target_labels:
class_val_list.append(float(dimensions[num_dimensions].strip()))
line_num += 1
# Check that the file was not empty
if line_num:
# Check that the file contained both metadata and data
complete_regression_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_target_labels_tag and has_data_tag
complete_classification_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_class_labels_tag and has_data_tag
if metadata_started and not complete_regression_meta_data and not complete_classification_meta_data:
raise _TsFileParseException("metadata incomplete")
elif metadata_started and not data_started:
raise _TsFileParseException("file contained metadata but no data")
elif metadata_started and data_started and len(instance_list) == 0:
raise _TsFileParseException("file contained metadata but no data")
# Create a DataFrame from the data parsed above
data = pd.DataFrame(dtype=np.float32)
for dim in range(0, num_dimensions):
data['dim_' + str(dim)] = instance_list[dim]
# Check if we should return any associated class labels separately
if target_labels:
if return_separate_X_and_y:
return data, np.asarray(class_val_list)
else:
data['class_vals'] = pd.Series(class_val_list)
return data
else:
return data
else:
raise _TsFileParseException("empty file")
#export
def get_Monash_regression_list():
return sorted([
"AustraliaRainfall", "HouseholdPowerConsumption1",
"HouseholdPowerConsumption2", "BeijingPM25Quality",
"BeijingPM10Quality", "Covid3Month", "LiveFuelMoistureContent",
"FloodModeling1", "FloodModeling2", "FloodModeling3",
"AppliancesEnergy", "BenzeneConcentration", "NewsHeadlineSentiment",
"NewsTitleSentiment", "IEEEPPG",
#"BIDMC32RR", "BIDMC32HR", "BIDMC32SpO2", "PPGDalia" # Cannot be downloaded
])
Monash_regression_list = get_Monash_regression_list()
regression_list = Monash_regression_list
TSR_datasets = regression_datasets = regression_list
len(Monash_regression_list)
#export
def get_Monash_regression_data(dsid, path='./data/Monash', on_disk=True, mode='c', Xdtype='float32', ydtype=None, split_data=True, force_download=False,
verbose=False, timeout=4):
dsid_list = [rd for rd in Monash_regression_list if rd.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a Monash dataset'
dsid = dsid_list[0]
full_tgt_dir = Path(path)/dsid
pv(f'Dataset: {dsid}', verbose)
if force_download or not all([os.path.isfile(f'{path}/{dsid}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):
if dsid == 'AppliancesEnergy': dset_id = 3902637
elif dsid == 'HouseholdPowerConsumption1': dset_id = 3902704
elif dsid == 'HouseholdPowerConsumption2': dset_id = 3902706
elif dsid == 'BenzeneConcentration': dset_id = 3902673
elif dsid == 'BeijingPM25Quality': dset_id = 3902671
elif dsid == 'BeijingPM10Quality': dset_id = 3902667
elif dsid == 'LiveFuelMoistureContent': dset_id = 3902716
elif dsid == 'FloodModeling1': dset_id = 3902694
elif dsid == 'FloodModeling2': dset_id = 3902696
elif dsid == 'FloodModeling3': dset_id = 3902698
elif dsid == 'AustraliaRainfall': dset_id = 3902654
elif dsid == 'PPGDalia': dset_id = 3902728
elif dsid == 'IEEEPPG': dset_id = 3902710
elif dsid == 'BIDMCRR' or dsid == 'BIDM32CRR': dset_id = 3902685
elif dsid == 'BIDMCHR' or dsid == 'BIDM32CHR': dset_id = 3902676
elif dsid == 'BIDMCSpO2' or dsid == 'BIDM32CSpO2': dset_id = 3902688
elif dsid == 'NewsHeadlineSentiment': dset_id = 3902718
elif dsid == 'NewsTitleSentiment': dset_id= 3902726
elif dsid == 'Covid3Month': dset_id = 3902690
for split in ['TRAIN', 'TEST']:
url = f"https://zenodo.org/record/{dset_id}/files/{dsid}_{split}.ts"
fname = Path(path)/f'{dsid}/{dsid}_{split}.ts'
pv('downloading data...', verbose)
try:
download_data(url, fname, c_key='archive', force_download=force_download, timeout=timeout)
except Exception as inst:
print(inst)
warnings.warn(f'Cannot download {dsid} dataset')
if split_data: return None, None, None, None
else: return None, None, None
pv('...download complete', verbose)
try:
if split == 'TRAIN':
X_train, y_train = _load_from_tsfile_to_dataframe2(fname)
X_train = check_X(X_train, coerce_to_numpy=True)
else:
X_valid, y_valid = _load_from_tsfile_to_dataframe2(fname)
X_valid = check_X(X_valid, coerce_to_numpy=True)
except Exception as inst:
print(inst)
warnings.warn(f'Cannot create numpy arrays for {dsid} dataset')
if split_data: return None, None, None, None
else: return None, None, None
np.save(f'{full_tgt_dir}/X_train.npy', X_train)
np.save(f'{full_tgt_dir}/y_train.npy', y_train)
np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)
np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)
np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))
np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))
del X_train, X_valid, y_train, y_valid
delete_all_in_dir(full_tgt_dir, exception='.npy')
pv('...numpy arrays correctly saved', verbose)
mmap_mode = mode if on_disk else None
X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)
y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)
X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)
y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)
if Xdtype is not None:
X_train = X_train.astype(Xdtype)
X_valid = X_valid.astype(Xdtype)
if ydtype is not None:
y_train = y_train.astype(ydtype)
y_valid = y_valid.astype(ydtype)
if split_data:
if verbose:
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_valid:', X_valid.shape)
print('y_valid:', y_valid.shape, '\n')
return X_train, y_train, X_valid, y_valid
else:
X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)
y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)
splits = get_predefined_splits(X_train, X_valid)
if verbose:
print('X :', X .shape)
print('y :', y .shape)
print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n')
return X, y, splits
get_regression_data = get_Monash_regression_data
dsid = "Covid3Month"
X_train, y_train, X_valid, y_valid = get_Monash_regression_data(dsid, on_disk=False, split_data=True, force_download=False)
X, y, splits = get_Monash_regression_data(dsid, on_disk=True, split_data=False, force_download=False, verbose=True)
if X_train is not None:
test_eq(X_train.shape, (140, 1, 84))
if X is not None:
test_eq(X.shape, (201, 1, 84))
#export
def get_forecasting_list():
return sorted([
"Sunspots", "Weather"
])
forecasting_time_series = get_forecasting_list()
#export
def get_forecasting_time_series(dsid, path='./data/forecasting/', force_download=False, verbose=True, **kwargs):
dsid_list = [fd for fd in forecasting_time_series if fd.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a forecasting dataset'
dsid = dsid_list[0]
if dsid == 'Weather': full_tgt_dir = Path(path)/f'{dsid}.csv.zip'
else: full_tgt_dir = Path(path)/f'{dsid}.csv'
pv(f'Dataset: {dsid}', verbose)
if dsid == 'Sunspots': url = "https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Sunspots.csv"
elif dsid == 'Weather': url = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip'
try:
pv("downloading data...", verbose)
if force_download:
try: os.remove(full_tgt_dir)
except OSError: pass
download_data(url, full_tgt_dir, force_download=force_download, **kwargs)
pv(f"...data downloaded. Path = {full_tgt_dir}", verbose)
if dsid == 'Sunspots':
df = pd.read_csv(full_tgt_dir, parse_dates=['Date'], index_col=['Date'])
return df['Monthly Mean Total Sunspot Number'].asfreq('1M').to_frame()
elif dsid == 'Weather':
# This code comes from a great Keras time-series tutorial notebook (https://www.tensorflow.org/tutorials/structured_data/time_series)
df = pd.read_csv(full_tgt_dir)
df = df[5::6] # slice [start:stop:step], starting from index 5 take every 6th record.
date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')
# remove error (negative wind)
wv = df['wv (m/s)']
bad_wv = wv == -9999.0
wv[bad_wv] = 0.0
max_wv = df['max. wv (m/s)']
bad_max_wv = max_wv == -9999.0
max_wv[bad_max_wv] = 0.0
wv = df.pop('wv (m/s)')
max_wv = df.pop('max. wv (m/s)')
# Convert to radians.
wd_rad = df.pop('wd (deg)')*np.pi / 180
# Calculate the wind x and y components.
df['Wx'] = wv*np.cos(wd_rad)
df['Wy'] = wv*np.sin(wd_rad)
# Calculate the max wind x and y components.
df['max Wx'] = max_wv*np.cos(wd_rad)
df['max Wy'] = max_wv*np.sin(wd_rad)
timestamp_s = date_time.map(datetime.timestamp)
day = 24*60*60
year = (365.2425)*day
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
df.reset_index(drop=True, inplace=True)
return df
else:
return full_tgt_dir
except Exception as inst:
print(inst)
warnings.warn(f"Cannot download {dsid} dataset")
return
ts = get_forecasting_time_series("sunspots", force_download=False)
test_eq(len(ts), 3235)
ts
ts = get_forecasting_time_series("weather", force_download=False)
if ts is not None:
test_eq(len(ts), 70091)
print(ts)
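# The Weather branch above encodes time of day and time of year as sin/cos pairs so that values
# just before and just after midnight (or New Year) end up close together. A minimal sketch of the
# same idea with plain numpy (toy seconds-since-midnight values, not the actual dataset):
_day = 24 * 60 * 60
_secs = np.array([0., 6 * 3600., 12 * 3600., 18 * 3600.])   # 00:00, 06:00, 12:00, 18:00
_day_sin = np.sin(_secs * (2 * np.pi / _day))                # ~[0, 1, 0, -1]
_day_cos = np.cos(_secs * (2 * np.pi / _day))                # ~[1, 0, -1, 0]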
# export
Monash_forecasting_list = ['m1_yearly_dataset',
'm1_quarterly_dataset',
'm1_monthly_dataset',
'm3_yearly_dataset',
'm3_quarterly_dataset',
'm3_monthly_dataset',
'm3_other_dataset',
'm4_yearly_dataset',
'm4_quarterly_dataset',
'm4_monthly_dataset',
'm4_weekly_dataset',
'm4_daily_dataset',
'm4_hourly_dataset',
'tourism_yearly_dataset',
'tourism_quarterly_dataset',
'tourism_monthly_dataset',
'nn5_daily_dataset_with_missing_values',
'nn5_daily_dataset_without_missing_values',
'nn5_weekly_dataset',
'cif_2016_dataset',
'kaggle_web_traffic_dataset_with_missing_values',
'kaggle_web_traffic_dataset_without_missing_values',
'kaggle_web_traffic_weekly_dataset',
'solar_10_minutes_dataset',
'solar_weekly_dataset',
'electricity_hourly_dataset',
'electricity_weekly_dataset',
'london_smart_meters_dataset_with_missing_values',
'london_smart_meters_dataset_without_missing_values',
'wind_farms_minutely_dataset_with_missing_values',
'wind_farms_minutely_dataset_without_missing_values',
'car_parts_dataset_with_missing_values',
'car_parts_dataset_without_missing_values',
'dominick_dataset',
'fred_md_dataset',
'traffic_hourly_dataset',
'traffic_weekly_dataset',
'pedestrian_counts_dataset',
'hospital_dataset',
'covid_deaths_dataset',
'kdd_cup_2018_dataset_with_missing_values',
'kdd_cup_2018_dataset_without_missing_values',
'weather_dataset',
'sunspot_dataset_with_missing_values',
'sunspot_dataset_without_missing_values',
'saugeenday_dataset',
'us_births_dataset',
'elecdemand_dataset',
'solar_4_seconds_dataset',
'wind_4_seconds_dataset',
'Sunspots', 'Weather']
forecasting_list = Monash_forecasting_list
# export
## Original code available at: https://github.com/rakshitha123/TSForecasting
# This repository contains the implementations related to experiments on a set of publicly available datasets that are used in
# time series forecasting research.
# The benchmark datasets are available at: https://zenodo.org/communities/forecasting. For more details, please refer to our website:
# https://forecastingdata.org/ and paper: https://arxiv.org/abs/2105.06643.
# Citation:
# @misc{godahewa2021monash,
# author="Godahewa, Rakshitha and Bergmeir, Christoph and Webb, Geoffrey I. and Hyndman, Rob J. and Montero-Manso, Pablo",
# title="Monash Time Series Forecasting Archive",
# howpublished ="\url{https://arxiv.org/abs/2105.06643}",
# year="2021"
# }
# Converts the contents in a .tsf file into a dataframe and returns it along with other meta-data of the dataset: frequency, horizon, whether the dataset contains missing values and whether the series have equal lengths
#
# Parameters
# full_file_path_and_name - complete .tsf file path
# replace_missing_vals_with - a term to indicate the missing values in series in the returning dataframe
# value_column_name - Any name that is preferred to have as the name of the column containing series values in the returning dataframe
def convert_tsf_to_dataframe(full_file_path_and_name, replace_missing_vals_with = 'NaN', value_column_name = "series_value"):
col_names = []
col_types = []
all_data = {}
line_count = 0
frequency = None
forecast_horizon = None
contain_missing_values = None
contain_equal_length = None
found_data_tag = False
found_data_section = False
started_reading_data_section = False
with open(full_file_path_and_name, 'r', encoding='cp1252') as file:
for line in file:
# Strip white space from start/end of line
line = line.strip()
if line:
if line.startswith("@"): # Read meta-data
if not line.startswith("@data"):
line_content = line.split(" ")
if line.startswith("@attribute"):
if (len(line_content) != 3): # Attributes have both name and type
raise TsFileParseException("Invalid meta-data specification.")
col_names.append(line_content[1])
col_types.append(line_content[2])
else:
if len(line_content) != 2: # Other meta-data have only values
raise TsFileParseException("Invalid meta-data specification.")
if line.startswith("@frequency"):
frequency = line_content[1]
elif line.startswith("@horizon"):
forecast_horizon = int(line_content[1])
elif line.startswith("@missing"):
contain_missing_values = bool(distutils.util.strtobool(line_content[1]))
elif line.startswith("@equallength"):
contain_equal_length = bool(distutils.util.strtobool(line_content[1]))
else:
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section. Attribute section must come before data.")
found_data_tag = True
elif not line.startswith("#"):
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section. Attribute section must come before data.")
elif not found_data_tag:
raise TsFileParseException("Missing @data tag.")
else:
if not started_reading_data_section:
started_reading_data_section = True
found_data_section = True
all_series = []
for col in col_names:
all_data[col] = []
full_info = line.split(":")
if len(full_info) != (len(col_names) + 1):
raise TsFileParseException("Missing attributes/values in series.")
series = full_info[len(full_info) - 1]
series = series.split(",")
if(len(series) == 0):
raise TsFileParseException("A given series should contains a set of comma separated numeric values. At least one numeric value should be there in a series. Missing values should be indicated with ? symbol")
numeric_series = []
for val in series:
if val == "?":
numeric_series.append(replace_missing_vals_with)
else:
numeric_series.append(float(val))
if (numeric_series.count(replace_missing_vals_with) == len(numeric_series)):
raise TsFileParseException("All series values are missing. A given series should contains a set of comma separated numeric values. At least one numeric value should be there in a series.")
all_series.append(pd.Series(numeric_series).array)
for i in range(len(col_names)):
att_val = None
if col_types[i] == "numeric":
att_val = int(full_info[i])
elif col_types[i] == "string":
att_val = str(full_info[i])
elif col_types[i] == "date":
att_val = datetime.strptime(full_info[i], '%Y-%m-%d %H-%M-%S')
else:
raise TsFileParseException("Invalid attribute type.") # Currently, the code supports only numeric, string and date types. Extend this as required.
if(att_val == None):
raise TsFileParseException("Invalid attribute value.")
else:
all_data[col_names[i]].append(att_val)
line_count = line_count + 1
if line_count == 0:
raise TsFileParseException("Empty file.")
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section.")
if not found_data_section:
raise TsFileParseException("Missing series information under data section.")
all_data[value_column_name] = all_series
loaded_data = pd.DataFrame(all_data)
return loaded_data, frequency, forecast_horizon, contain_missing_values, contain_equal_length
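# A usage sketch for the parser above (kept as comments because it needs a .tsf file on disk; the
# path below assumes the m1_yearly_dataset archive has already been extracted, e.g. by
# get_Monash_forecasting_data defined next): it returns the series plus the metadata declared in
# the file header.
# data, freq, horizon, has_missing, equal_len = convert_tsf_to_dataframe('./data/forecasting/m1_yearly_dataset.tsf')
# data['series_value'].head(), freq, horizon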
# export
def get_Monash_forecasting_data(dsid, path='./data/forecasting/', force_download=False, remove_from_disk=False, verbose=True):
pv(f'Dataset: {dsid}', verbose)
dsid = dsid.lower()
assert dsid in Monash_forecasting_list, f'{dsid} not available in Monash_forecasting_list'
if dsid == 'm1_yearly_dataset': url = 'https://zenodo.org/record/4656193/files/m1_yearly_dataset.zip'
elif dsid == 'm1_quarterly_dataset': url = 'https://zenodo.org/record/4656154/files/m1_quarterly_dataset.zip'
elif dsid == 'm1_monthly_dataset': url = 'https://zenodo.org/record/4656159/files/m1_monthly_dataset.zip'
elif dsid == 'm3_yearly_dataset': url = 'https://zenodo.org/record/4656222/files/m3_yearly_dataset.zip'
elif dsid == 'm3_quarterly_dataset': url = 'https://zenodo.org/record/4656262/files/m3_quarterly_dataset.zip'
elif dsid == 'm3_monthly_dataset': url = 'https://zenodo.org/record/4656298/files/m3_monthly_dataset.zip'
elif dsid == 'm3_other_dataset': url = 'https://zenodo.org/record/4656335/files/m3_other_dataset.zip'
elif dsid == 'm4_yearly_dataset': url = 'https://zenodo.org/record/4656379/files/m4_yearly_dataset.zip'
elif dsid == 'm4_quarterly_dataset': url = 'https://zenodo.org/record/4656410/files/m4_quarterly_dataset.zip'
elif dsid == 'm4_monthly_dataset': url = 'https://zenodo.org/record/4656480/files/m4_monthly_dataset.zip'
elif dsid == 'm4_weekly_dataset': url = 'https://zenodo.org/record/4656522/files/m4_weekly_dataset.zip'
elif dsid == 'm4_daily_dataset': url = 'https://zenodo.org/record/4656548/files/m4_daily_dataset.zip'
elif dsid == 'm4_hourly_dataset': url = 'https://zenodo.org/record/4656589/files/m4_hourly_dataset.zip'
elif dsid == 'tourism_yearly_dataset': url = 'https://zenodo.org/record/4656103/files/tourism_yearly_dataset.zip'
elif dsid == 'tourism_quarterly_dataset': url = 'https://zenodo.org/record/4656093/files/tourism_quarterly_dataset.zip'
elif dsid == 'tourism_monthly_dataset': url = 'https://zenodo.org/record/4656096/files/tourism_monthly_dataset.zip'
elif dsid == 'nn5_daily_dataset_with_missing_values': url = 'https://zenodo.org/record/4656110/files/nn5_daily_dataset_with_missing_values.zip'
elif dsid == 'nn5_daily_dataset_without_missing_values': url = 'https://zenodo.org/record/4656117/files/nn5_daily_dataset_without_missing_values.zip'
elif dsid == 'nn5_weekly_dataset': url = 'https://zenodo.org/record/4656125/files/nn5_weekly_dataset.zip'
elif dsid == 'cif_2016_dataset': url = 'https://zenodo.org/record/4656042/files/cif_2016_dataset.zip'
elif dsid == 'kaggle_web_traffic_dataset_with_missing_values': url = 'https://zenodo.org/record/4656080/files/kaggle_web_traffic_dataset_with_missing_values.zip'
elif dsid == 'kaggle_web_traffic_dataset_without_missing_values': url = 'https://zenodo.org/record/4656075/files/kaggle_web_traffic_dataset_without_missing_values.zip'
    elif dsid == 'kaggle_web_traffic_weekly_dataset': url = 'https://zenodo.org/record/4656664/files/kaggle_web_traffic_weekly_dataset.zip'
elif dsid == 'solar_10_minutes_dataset': url = 'https://zenodo.org/record/4656144/files/solar_10_minutes_dataset.zip'
elif dsid == 'solar_weekly_dataset': url = 'https://zenodo.org/record/4656151/files/solar_weekly_dataset.zip'
elif dsid == 'electricity_hourly_dataset': url = 'https://zenodo.org/record/4656140/files/electricity_hourly_dataset.zip'
elif dsid == 'electricity_weekly_dataset': url = 'https://zenodo.org/record/4656141/files/electricity_weekly_dataset.zip'
elif dsid == 'london_smart_meters_dataset_with_missing_values': url = 'https://zenodo.org/record/4656072/files/london_smart_meters_dataset_with_missing_values.zip'
elif dsid == 'london_smart_meters_dataset_without_missing_values': url = 'https://zenodo.org/record/4656091/files/london_smart_meters_dataset_without_missing_values.zip'
elif dsid == 'wind_farms_minutely_dataset_with_missing_values': url = 'https://zenodo.org/record/4654909/files/wind_farms_minutely_dataset_with_missing_values.zip'
elif dsid == 'wind_farms_minutely_dataset_without_missing_values': url = 'https://zenodo.org/record/4654858/files/wind_farms_minutely_dataset_without_missing_values.zip'
elif dsid == 'car_parts_dataset_with_missing_values': url = 'https://zenodo.org/record/4656022/files/car_parts_dataset_with_missing_values.zip'
elif dsid == 'car_parts_dataset_without_missing_values': url = 'https://zenodo.org/record/4656021/files/car_parts_dataset_without_missing_values.zip'
elif dsid == 'dominick_dataset': url = 'https://zenodo.org/record/4654802/files/dominick_dataset.zip'
elif dsid == 'fred_md_dataset': url = 'https://zenodo.org/record/4654833/files/fred_md_dataset.zip'
elif dsid == 'traffic_hourly_dataset': url = 'https://zenodo.org/record/4656132/files/traffic_hourly_dataset.zip'
elif dsid == 'traffic_weekly_dataset': url = 'https://zenodo.org/record/4656135/files/traffic_weekly_dataset.zip'
elif dsid == 'pedestrian_counts_dataset': url = 'https://zenodo.org/record/4656626/files/pedestrian_counts_dataset.zip'
elif dsid == 'hospital_dataset': url = 'https://zenodo.org/record/4656014/files/hospital_dataset.zip'
elif dsid == 'covid_deaths_dataset': url = 'https://zenodo.org/record/4656009/files/covid_deaths_dataset.zip'
elif dsid == 'kdd_cup_2018_dataset_with_missing_values': url = 'https://zenodo.org/record/4656719/files/kdd_cup_2018_dataset_with_missing_values.zip'
elif dsid == 'kdd_cup_2018_dataset_without_missing_values': url = 'https://zenodo.org/record/4656756/files/kdd_cup_2018_dataset_without_missing_values.zip'
elif dsid == 'weather_dataset': url = 'https://zenodo.org/record/4654822/files/weather_dataset.zip'
elif dsid == 'sunspot_dataset_with_missing_values': url = 'https://zenodo.org/record/4654773/files/sunspot_dataset_with_missing_values.zip'
elif dsid == 'sunspot_dataset_without_missing_values': url = 'https://zenodo.org/record/4654722/files/sunspot_dataset_without_missing_values.zip'
elif dsid == 'saugeenday_dataset': url = 'https://zenodo.org/record/4656058/files/saugeenday_dataset.zip'
elif dsid == 'us_births_dataset': url = 'https://zenodo.org/record/4656049/files/us_births_dataset.zip'
elif dsid == 'elecdemand_dataset': url = 'https://zenodo.org/record/4656069/files/elecdemand_dataset.zip'
elif dsid == 'solar_4_seconds_dataset': url = 'https://zenodo.org/record/4656027/files/solar_4_seconds_dataset.zip'
elif dsid == 'wind_4_seconds_dataset': url = 'https://zenodo.org/record/4656032/files/wind_4_seconds_dataset.zip'
path = Path(path)
full_path = path/f'{dsid}.tsf'
if not full_path.exists() or force_download:
try:
decompress_from_url(url, target_dir=path, verbose=verbose)
except Exception as inst:
print(inst)
pv("converting dataframe to numpy array...", verbose)
data, frequency, forecast_horizon, contain_missing_values, contain_equal_length = convert_tsf_to_dataframe(full_path)
X = to3d(stack_pad(data['series_value']))
pv("...dataframe converted to numpy array", verbose)
pv(f'\nX.shape: {X.shape}', verbose)
pv(f'freq: {frequency}', verbose)
pv(f'forecast_horizon: {forecast_horizon}', verbose)
pv(f'contain_missing_values: {contain_missing_values}', verbose)
pv(f'contain_equal_length: {contain_equal_length}', verbose=verbose)
if remove_from_disk: os.remove(full_path)
return X
get_forecasting_data = get_Monash_forecasting_data
dsid = 'm1_yearly_dataset'
X = get_Monash_forecasting_data(dsid, force_download=False)
if X is not None:
test_eq(X.shape, (181, 1, 58))
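# `to3d(stack_pad(data['series_value']))` above turns a collection of (possibly ragged) 1d series
# into a single [samples x 1 x max_len] array. A rough numpy-only sketch of the padding step,
# assuming (as tsai's stack_pad appears to do) that shorter series are right-padded with nan:
_ragged = [np.array([1., 2., 3.]), np.array([4., 5.])]
_max_len = max(len(o) for o in _ragged)
_padded = np.full((len(_ragged), 1, _max_len), np.nan)
for _i, _o in enumerate(_ragged): _padded[_i, 0, :len(_o)] = _o
test_eq(_padded.shape, (2, 1, 3))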
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
External data> Helper functions used to download and extract common time series datasets.
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.validation import *
#export
# Fix to fastai issue #3485 (not deployed to pip yet)
if not hasattr(pd.DataFrame,'_old_init'): pd.DataFrame._old_init = pd.DataFrame.__init__
@patch
def __init__(self:pd.DataFrame, data=None, index=None, columns=None, dtype=None, copy=None):
if data is not None and isinstance(data, Tensor): data = to_np(data)
self._old_init(data, index=index, columns=columns, dtype=dtype, copy=copy)
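# A quick sketch of the patched behaviour (commented out since it needs torch in scope): with the
# patch, a tensor passed to pd.DataFrame is first converted to a numpy array via to_np.
# import torch
# pd.DataFrame(torch.arange(6).reshape(2, 3))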
#export
from sktime.utils.data_io import load_from_tsfile_to_dataframe as ts2df
from sktime.utils.validation.panel import check_X
from sktime.utils.data_io import TsFileParseException
#export
from fastai.data.external import *
from tqdm import tqdm
import zipfile
import tempfile
try: from urllib import urlretrieve
except ImportError: from urllib.request import urlretrieve
import shutil
from numpy import distutils
import distutils.util # make distutils.util.strtobool available for convert_tsf_to_dataframe
#export
def decompress_from_url(url, target_dir=None, verbose=False):
# Download
try:
pv("downloading data...", verbose)
fname = os.path.basename(url)
tmpdir = tempfile.mkdtemp()
tmpfile = os.path.join(tmpdir, fname)
urlretrieve(url, tmpfile)
pv("...data downloaded", verbose)
# Decompress
try:
pv("decompressing data...", verbose)
if not os.path.exists(target_dir): os.makedirs(target_dir)
shutil.unpack_archive(tmpfile, target_dir)
shutil.rmtree(tmpdir)
pv("...data decompressed", verbose)
return target_dir
except:
shutil.rmtree(tmpdir)
if verbose: sys.stderr.write("Could not decompress file, aborting.\n")
except:
shutil.rmtree(tmpdir)
if verbose:
sys.stderr.write("Could not download url. Please, check url.\n")
#export
from fastdownload import download_url
def download_data(url, fname=None, c_key='archive', force_download=False, timeout=4, verbose=False):
"Download `url` to `fname`."
fname = Path(fname or URLs.path(url, c_key=c_key))
fname.parent.mkdir(parents=True, exist_ok=True)
if not fname.exists() or force_download: download_url(url, dest=fname, timeout=timeout, show_progress=verbose)
return fname
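# Usage sketch for the two helpers above (kept as comments to avoid network access at import time):
# decompress_from_url fetches and unpacks an archive into target_dir, while download_data fetches a
# single file and returns its local Path. The URLs match the sources used by the dataset loaders in
# this notebook.
# decompress_from_url('http://www.timeseriesclassification.com/Downloads/NATOPS.zip',
#                     target_dir='./data/UCR/NATOPS', verbose=True)
# fname = download_data('https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Sunspots.csv',
#                       fname='./data/forecasting/Sunspots.csv')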
# export
def get_UCR_univariate_list():
return [
'ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY',
'AllGestureWiimoteZ', 'ArrowHead', 'Beef', 'BeetleFly', 'BirdChicken',
'BME', 'Car', 'CBF', 'Chinatown', 'ChlorineConcentration',
'CinCECGTorso', 'Coffee', 'Computers', 'CricketX', 'CricketY',
'CricketZ', 'Crop', 'DiatomSizeReduction',
'DistalPhalanxOutlineAgeGroup', 'DistalPhalanxOutlineCorrect',
'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame',
'DodgerLoopWeekend', 'Earthquakes', 'ECG200', 'ECG5000', 'ECGFiveDays',
'ElectricDevices', 'EOGHorizontalSignal', 'EOGVerticalSignal',
'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords',
'Fish', 'FordA', 'FordB', 'FreezerRegularTrain', 'FreezerSmallTrain',
'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3',
'GesturePebbleZ1', 'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan',
'GunPointMaleVersusFemale', 'GunPointOldVersusYoung', 'Ham',
'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate',
'InsectEPGRegularTrain', 'InsectEPGSmallTrain', 'InsectWingbeatSound',
'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2',
'Lightning7', 'Mallat', 'Meat', 'MedicalImages', 'MelbournePedestrian',
'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect',
'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain',
'MoteStrain', 'NonInvasiveFetalECGThorax1',
'NonInvasiveFetalECGThorax2', 'OliveOil', 'OSULeaf',
'PhalangesOutlinesCorrect', 'Phoneme', 'PickupGestureWiimoteZ',
'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'PLAID', 'Plane',
'PowerCons', 'ProximalPhalanxOutlineAgeGroup',
'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW',
'RefrigerationDevices', 'Rock', 'ScreenType', 'SemgHandGenderCh2',
'SemgHandMovementCh2', 'SemgHandSubjectCh2', 'ShakeGestureWiimoteZ',
'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace',
'SonyAIBORobotSurface1', 'SonyAIBORobotSurface2', 'StarLightCurves',
'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl',
'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG',
'TwoPatterns', 'UMD', 'UWaveGestureLibraryAll', 'UWaveGestureLibraryX',
'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine',
'WordSynonyms', 'Worms', 'WormsTwoClass', 'Yoga'
]
test_eq(len(get_UCR_univariate_list()), 128)
UTSC_datasets = get_UCR_univariate_list()
UCR_univariate_list = get_UCR_univariate_list()
#export
def get_UCR_multivariate_list():
return [
'ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions',
'CharacterTrajectories', 'Cricket', 'DuckDuckGeese', 'EigenWorms',
'Epilepsy', 'ERing', 'EthanolConcentration', 'FaceDetection',
'FingerMovements', 'HandMovementDirection', 'Handwriting', 'Heartbeat',
'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery',
'NATOPS', 'PEMS-SF', 'PenDigits', 'PhonemeSpectra', 'RacketSports',
'SelfRegulationSCP1', 'SelfRegulationSCP2', 'SpokenArabicDigits',
'StandWalkJump', 'UWaveGestureLibrary'
]
test_eq(len(get_UCR_multivariate_list()), 30)
MTSC_datasets = get_UCR_multivariate_list()
UCR_multivariate_list = get_UCR_multivariate_list()
UCR_list = sorted(UCR_univariate_list + UCR_multivariate_list)
classification_list = UCR_list
TSC_datasets = classification_datasets = UCR_list
len(UCR_list)
#export
def get_UCR_data(dsid, path='.', parent_dir='data/UCR', on_disk=True, mode='c', Xdtype='float32', ydtype=None, return_split=True, split_data=True,
force_download=False, verbose=False):
dsid_list = [ds for ds in UCR_list if ds.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a UCR dataset'
dsid = dsid_list[0]
return_split = return_split and split_data # keep return_split for compatibility. It will be replaced by split_data
if dsid in ['InsectWingbeat']:
warnings.warn(f'Be aware that download of the {dsid} dataset is very slow!')
pv(f'Dataset: {dsid}', verbose)
full_parent_dir = Path(path)/parent_dir
full_tgt_dir = full_parent_dir/dsid
# if not os.path.exists(full_tgt_dir): os.makedirs(full_tgt_dir)
full_tgt_dir.parent.mkdir(parents=True, exist_ok=True)
if force_download or not all([os.path.isfile(f'{full_tgt_dir}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):
# Option A
src_website = 'http://www.timeseriesclassification.com/Downloads'
decompress_from_url(f'{src_website}/{dsid}.zip', target_dir=full_tgt_dir, verbose=verbose)
if dsid == 'DuckDuckGeese':
with zipfile.ZipFile(Path(f'{full_parent_dir}/DuckDuckGeese/DuckDuckGeese_ts.zip'), 'r') as zip_ref:
zip_ref.extractall(Path(parent_dir))
        if not os.path.exists(full_tgt_dir/f'{dsid}_TRAIN.ts') or not os.path.exists(full_tgt_dir/f'{dsid}_TEST.ts') or \
Path(full_tgt_dir/f'{dsid}_TRAIN.ts').stat().st_size == 0 or Path(full_tgt_dir/f'{dsid}_TEST.ts').stat().st_size == 0:
print('It has not been possible to download the required files')
if return_split:
return None, None, None, None
else:
return None, None, None
pv('loading ts files to dataframe...', verbose)
X_train_df, y_train = ts2df(full_tgt_dir/f'{dsid}_TRAIN.ts')
X_valid_df, y_valid = ts2df(full_tgt_dir/f'{dsid}_TEST.ts')
pv('...ts files loaded', verbose)
pv('preparing numpy arrays...', verbose)
X_train_ = []
X_valid_ = []
for i in progress_bar(range(X_train_df.shape[-1]), display=verbose, leave=False):
X_train_.append(stack_pad(X_train_df[f'dim_{i}'])) # stack arrays even if they have different lengths
X_valid_.append(stack_pad(X_valid_df[f'dim_{i}'])) # stack arrays even if they have different lengths
X_train = np.transpose(np.stack(X_train_, axis=-1), (0, 2, 1))
X_valid = np.transpose(np.stack(X_valid_, axis=-1), (0, 2, 1))
X_train, X_valid = match_seq_len(X_train, X_valid)
np.save(f'{full_tgt_dir}/X_train.npy', X_train)
np.save(f'{full_tgt_dir}/y_train.npy', y_train)
np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)
np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)
np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))
np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))
del X_train, X_valid, y_train, y_valid
delete_all_in_dir(full_tgt_dir, exception='.npy')
pv('...numpy arrays correctly saved', verbose)
mmap_mode = mode if on_disk else None
X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)
y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)
X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)
y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)
if return_split:
if Xdtype is not None:
X_train = X_train.astype(Xdtype)
X_valid = X_valid.astype(Xdtype)
if ydtype is not None:
y_train = y_train.astype(ydtype)
y_valid = y_valid.astype(ydtype)
if verbose:
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_valid:', X_valid.shape)
print('y_valid:', y_valid.shape, '\n')
return X_train, y_train, X_valid, y_valid
else:
X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)
y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)
splits = get_predefined_splits(X_train, X_valid)
if Xdtype is not None:
X = X.astype(Xdtype)
if verbose:
print('X :', X .shape)
print('y :', y .shape)
print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n')
return X, y, splits
get_classification_data = get_UCR_data
#hide
PATH = Path('.')
dsids = ['ECGFiveDays', 'AtrialFibrillation'] # univariate and multivariate
for dsid in dsids:
print(dsid)
tgt_dir = PATH/f'data/UCR/{dsid}'
if os.path.isdir(tgt_dir): shutil.rmtree(tgt_dir)
test_eq(len(get_files(tgt_dir)), 0) # no file left
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)
test_eq(len(get_files(tgt_dir, '.npy')), 6)
test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test no left file/ dir
del X_train, y_train, X_valid, y_valid
start = time.time()
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)
elapsed = time.time() - start
test_eq(elapsed < 1, True)
test_eq(X_train.ndim, 3)
test_eq(y_train.ndim, 1)
test_eq(X_valid.ndim, 3)
test_eq(y_valid.ndim, 1)
test_eq(len(get_files(tgt_dir, '.npy')), 6)
test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test no left file/ dir
test_eq(X_train.ndim, 3)
test_eq(y_train.ndim, 1)
test_eq(X_valid.ndim, 3)
test_eq(y_valid.ndim, 1)
test_eq(X_train.dtype, np.float32)
test_eq(X_train.__class__.__name__, 'memmap')
del X_train, y_train, X_valid, y_valid
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, on_disk=False)
test_eq(X_train.__class__.__name__, 'ndarray')
del X_train, y_train, X_valid, y_valid
X_train, y_train, X_valid, y_valid = get_UCR_data('natops')
dsid = 'natops'
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, verbose=True)
X, y, splits = get_UCR_data(dsid, split_data=False)
test_eq(X[splits[0]], X_train)
test_eq(y[splits[1]], y_valid)
test_eq(X[splits[0]], X_train)
test_eq(y[splits[1]], y_valid)
test_type(X, X_train)
test_type(y, y_train)
#export
def check_data(X, y=None, splits=None, show_plot=True):
try: X_is_nan = np.isnan(X).sum()
    except: X_is_nan = 'could not be checked'
if X.ndim == 3:
shape = f'[{X.shape[0]} samples x {X.shape[1]} features x {X.shape[-1]} timesteps]'
print(f'X - shape: {shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')
else:
print(f'X - shape: {X.shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')
if not isinstance(X, np.ndarray): warnings.warn('X must be a np.ndarray')
if X_is_nan:
warnings.warn('X must not contain nan values')
if y is not None:
y_shape = y.shape
y = y.ravel()
if isinstance(y[0], str):
n_classes = f'{len(np.unique(y))} ({len(y)//len(np.unique(y))} samples per class) {L(np.unique(y).tolist())}'
y_is_nan = 'nan' in [c.lower() for c in np.unique(y)]
print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} n_classes: {n_classes} isnan: {y_is_nan}')
else:
y_is_nan = np.isnan(y).sum()
print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} isnan: {y_is_nan}')
if not isinstance(y, np.ndarray): warnings.warn('y must be a np.ndarray')
if y_is_nan:
warnings.warn('y must not contain nan values')
if splits is not None:
_splits = get_splits_len(splits)
overlap = check_splits_overlap(splits)
print(f'splits - n_splits: {len(_splits)} shape: {_splits} overlap: {overlap}')
if show_plot: plot_splits(splits)
dsid = 'ECGFiveDays'
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
check_data(X, y, splits)
check_data(X[:, 0], y, splits)
y = y.astype(np.float32)
check_data(X, y, splits)
y[:10] = np.nan
check_data(X[:, 0], y, splits)
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
splits = get_splits(y, 3)
check_data(X, y, splits)
check_data(X[:, 0], y, splits)
y[:5]= np.nan
check_data(X[:, 0], y, splits)
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
#export
# This code comes from https://github.com/ChangWeiTan/TSRegression. As of Jan 16th, 2021 there's no pip install available.
# The following code is adapted from the python package sktime to read .ts file.
class _TsFileParseException(Exception):
"""
Should be raised when parsing a .ts file and the format is incorrect.
"""
pass
def _load_from_tsfile_to_dataframe2(full_file_path_and_name, return_separate_X_and_y=True, replace_missing_vals_with='NaN'):
"""Loads data from a .ts file into a Pandas DataFrame.
Parameters
----------
full_file_path_and_name: str
The full pathname of the .ts file to read.
return_separate_X_and_y: bool
true if X and Y values should be returned as separate Data Frames (X) and a numpy array (y), false otherwise.
        This is only relevant for data that has associated class or target values.
replace_missing_vals_with: str
The value that missing values in the text file should be replaced with prior to parsing.
Returns
-------
DataFrame, ndarray
If return_separate_X_and_y then a tuple containing a DataFrame and a numpy array containing the relevant time-series and corresponding class values.
DataFrame
If not return_separate_X_and_y then a single DataFrame containing all time-series and (if relevant) a column "class_vals" the associated class values.
"""
# Initialize flags and variables used when parsing the file
metadata_started = False
data_started = False
has_problem_name_tag = False
has_timestamps_tag = False
has_univariate_tag = False
has_class_labels_tag = False
has_target_labels_tag = False
has_data_tag = False
previous_timestamp_was_float = None
previous_timestamp_was_int = None
previous_timestamp_was_timestamp = None
num_dimensions = None
is_first_case = True
instance_list = []
class_val_list = []
line_num = 0
# Parse the file
# print(full_file_path_and_name)
with open(full_file_path_and_name, 'r', encoding='utf-8') as file:
for line in tqdm(file):
# print(".", end='')
# Strip white space from start/end of line and change to lowercase for use below
line = line.strip().lower()
# Empty lines are valid at any point in a file
if line:
# Check if this line contains metadata
# Please note that even though metadata is stored in this function it is not currently published externally
if line.startswith("@problemname"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("problemname tag requires an associated value")
problem_name = line[len("@problemname") + 1:]
has_problem_name_tag = True
metadata_started = True
elif line.startswith("@timestamps"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException("timestamps tag requires an associated Boolean value")
elif tokens[1] == "true":
timestamps = True
elif tokens[1] == "false":
timestamps = False
else:
raise _TsFileParseException("invalid timestamps value")
has_timestamps_tag = True
metadata_started = True
elif line.startswith("@univariate"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException("univariate tag requires an associated Boolean value")
elif tokens[1] == "true":
univariate = True
elif tokens[1] == "false":
univariate = False
else:
raise _TsFileParseException("invalid univariate value")
has_univariate_tag = True
metadata_started = True
elif line.startswith("@classlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("classlabel tag requires an associated Boolean value")
if tokens[1] == "true":
class_labels = True
elif tokens[1] == "false":
class_labels = False
else:
raise _TsFileParseException("invalid classLabel value")
# Check if we have any associated class values
if token_len == 2 and class_labels:
raise _TsFileParseException("if the classlabel tag is true then class values must be supplied")
has_class_labels_tag = True
class_label_list = [token.strip() for token in tokens[2:]]
metadata_started = True
elif line.startswith("@targetlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("targetlabel tag requires an associated Boolean value")
if tokens[1] == "true":
target_labels = True
elif tokens[1] == "false":
target_labels = False
else:
raise _TsFileParseException("invalid targetLabel value")
has_target_labels_tag = True
class_val_list = []
metadata_started = True
# Check if this line contains the start of data
elif line.startswith("@data"):
if line != "@data":
raise _TsFileParseException("data tag should not have an associated value")
if data_started and not metadata_started:
raise _TsFileParseException("metadata must come before data")
else:
has_data_tag = True
data_started = True
                # If the @data tag has been found then metadata has been parsed and data can be loaded
elif data_started:
# Check that a full set of metadata has been provided
incomplete_regression_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_target_labels_tag or not has_data_tag
incomplete_classification_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_class_labels_tag or not has_data_tag
if incomplete_regression_meta_data and incomplete_classification_meta_data:
raise _TsFileParseException("a full set of metadata has not been provided before the data")
# Replace any missing values with the value specified
line = line.replace("?", replace_missing_vals_with)
                    # Check if we are dealing with data that has timestamps
if timestamps:
# We're dealing with timestamps so cannot just split line on ':' as timestamps may contain one
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
this_line_num_dimensions = 0
line_len = len(line)
char_num = 0
while char_num < line_len:
# Move through any spaces
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
                            # See if there is any more data to read in or if we should validate what has been read thus far
if char_num < line_len:
# See if we have an empty dimension (i.e. no values)
if line[char_num] == ":":
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series())
this_line_num_dimensions += 1
has_another_value = False
has_another_dimension = True
timestamps_for_dimension = []
values_for_dimension = []
char_num += 1
else:
# Check if we have reached a class label
if line[char_num] != "(" and target_labels:
class_val = line[char_num:].strip()
# if class_val not in class_val_list:
# raise _TsFileParseException(
# "the class value '" + class_val + "' on line " + str(
# line_num + 1) + " is not valid")
class_val_list.append(float(class_val))
char_num = line_len
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
else:
# Read in the data contained within the next tuple
if line[char_num] != "(" and not target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not start with a '('")
char_num += 1
tuple_data = ""
while char_num < line_len and line[char_num] != ")":
tuple_data += line[char_num]
char_num += 1
if char_num >= line_len or line[char_num] != ")":
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not end with a ')'")
# Read in any spaces immediately after the current tuple
char_num += 1
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
# Check if there is another value or dimension to process after this tuple
if char_num >= line_len:
has_another_value = False
has_another_dimension = False
elif line[char_num] == ",":
has_another_value = True
has_another_dimension = False
elif line[char_num] == ":":
has_another_value = False
has_another_dimension = True
char_num += 1
# Get the numeric value for the tuple by reading from the end of the tuple data backwards to the last comma
last_comma_index = tuple_data.rfind(',')
if last_comma_index == -1:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has no comma inside of it")
try:
value = tuple_data[last_comma_index + 1:]
value = float(value)
except ValueError:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that does not have a valid numeric value")
# Check the type of timestamp that we have
timestamp = tuple_data[0: last_comma_index]
try:
timestamp = int(timestamp)
timestamp_is_int = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_int = False
if not timestamp_is_int:
try:
timestamp = float(timestamp)
timestamp_is_float = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_float = False
if not timestamp_is_int and not timestamp_is_float:
try:
timestamp = timestamp.strip()
timestamp_is_timestamp = True
except ValueError:
timestamp_is_timestamp = False
# Make sure that the timestamps in the file (not just this dimension or case) are consistent
if not timestamp_is_timestamp and not timestamp_is_int and not timestamp_is_float:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has an invalid timestamp '" + timestamp + "'")
if previous_timestamp_was_float is not None and previous_timestamp_was_float and not timestamp_is_float:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_int is not None and previous_timestamp_was_int and not timestamp_is_int:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_timestamp is not None and previous_timestamp_was_timestamp and not timestamp_is_timestamp:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
# Store the values
timestamps_for_dimension += [timestamp]
values_for_dimension += [value]
# If this was our first tuple then we store the type of timestamp we had
if previous_timestamp_was_timestamp is None and timestamp_is_timestamp:
previous_timestamp_was_timestamp = True
previous_timestamp_was_int = False
previous_timestamp_was_float = False
if previous_timestamp_was_int is None and timestamp_is_int:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = True
previous_timestamp_was_float = False
if previous_timestamp_was_float is None and timestamp_is_float:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = False
previous_timestamp_was_float = True
# See if we should add the data for this dimension
if not has_another_value:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
if timestamp_is_timestamp:
timestamps_for_dimension = pd.DatetimeIndex(timestamps_for_dimension)
instance_list[this_line_num_dimensions].append(
pd.Series(index=timestamps_for_dimension, data=values_for_dimension))
this_line_num_dimensions += 1
timestamps_for_dimension = []
values_for_dimension = []
elif has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series(dtype=np.float32))
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and not has_another_dimension:
if num_dimensions is None:
num_dimensions = this_line_num_dimensions
if num_dimensions != this_line_num_dimensions:
raise _TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
                        # Check that we are not expecting any more data and, if not, store what has been processed above
if has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
                            instance_list[this_line_num_dimensions].append(pd.Series(dtype=np.float32))
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and num_dimensions != this_line_num_dimensions:
raise _TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
# Check if we should have class values, and if so that they are contained in those listed in the metadata
if target_labels and len(class_val_list) == 0:
raise _TsFileParseException("the cases have no associated class values")
else:
dimensions = line.split(":")
# If first row then note the number of dimensions (that must be the same for all cases)
if is_first_case:
num_dimensions = len(dimensions)
if target_labels:
num_dimensions -= 1
for dim in range(0, num_dimensions):
instance_list.append([])
is_first_case = False
                        # See how many dimensions the case represented in this line has
this_line_num_dimensions = len(dimensions)
if target_labels:
this_line_num_dimensions -= 1
# All dimensions should be included for all series, even if they are empty
if this_line_num_dimensions != num_dimensions:
raise _TsFileParseException("inconsistent number of dimensions. Expecting " + str(
num_dimensions) + " but have read " + str(this_line_num_dimensions))
# Process the data for each dimension
for dim in range(0, num_dimensions):
dimension = dimensions[dim].strip()
if dimension:
data_series = dimension.split(",")
data_series = [float(i) for i in data_series]
instance_list[dim].append(pd.Series(data_series))
else:
                                instance_list[dim].append(pd.Series(dtype=np.float32))
if target_labels:
class_val_list.append(float(dimensions[num_dimensions].strip()))
line_num += 1
# Check that the file was not empty
if line_num:
# Check that the file contained both metadata and data
complete_regression_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_target_labels_tag and has_data_tag
complete_classification_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_class_labels_tag and has_data_tag
if metadata_started and not complete_regression_meta_data and not complete_classification_meta_data:
raise _TsFileParseException("metadata incomplete")
elif metadata_started and not data_started:
raise _TsFileParseException("file contained metadata but no data")
elif metadata_started and data_started and len(instance_list) == 0:
raise _TsFileParseException("file contained metadata but no data")
# Create a DataFrame from the data parsed above
data = pd.DataFrame(dtype=np.float32)
for dim in range(0, num_dimensions):
data['dim_' + str(dim)] = instance_list[dim]
# Check if we should return any associated class labels separately
if target_labels:
if return_separate_X_and_y:
return data, np.asarray(class_val_list)
else:
data['class_vals'] = pd.Series(class_val_list)
return data
else:
return data
else:
raise _TsFileParseException("empty file")
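# Hedged usage sketch: _load_from_tsfile_to_dataframe2 can also be called directly on a
# local .ts file (the path below is only an assumption; get_Monash_regression_data below
# downloads the files and calls this function for you):
_example_ts_file = Path('./data/Monash/Covid3Month/Covid3Month_TRAIN.ts')
if _example_ts_file.is_file():
    _X_df, _y = _load_from_tsfile_to_dataframe2(_example_ts_file)
    print(_X_df.shape, _y.shape)  # one 'dim_i' column per dimension, one row per case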
#export
def get_Monash_regression_list():
return sorted([
"AustraliaRainfall", "HouseholdPowerConsumption1",
"HouseholdPowerConsumption2", "BeijingPM25Quality",
"BeijingPM10Quality", "Covid3Month", "LiveFuelMoistureContent",
"FloodModeling1", "FloodModeling2", "FloodModeling3",
"AppliancesEnergy", "BenzeneConcentration", "NewsHeadlineSentiment",
"NewsTitleSentiment", "IEEEPPG",
#"BIDMC32RR", "BIDMC32HR", "BIDMC32SpO2", "PPGDalia" # Cannot be downloaded
])
Monash_regression_list = get_Monash_regression_list()
regression_list = Monash_regression_list
TSR_datasets = regression_datasets = regression_list
len(Monash_regression_list)
#export
def get_Monash_regression_data(dsid, path='./data/Monash', on_disk=True, mode='c', Xdtype='float32', ydtype=None, split_data=True, force_download=False,
verbose=False, timeout=4):
dsid_list = [rd for rd in Monash_regression_list if rd.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a Monash dataset'
dsid = dsid_list[0]
full_tgt_dir = Path(path)/dsid
pv(f'Dataset: {dsid}', verbose)
if force_download or not all([os.path.isfile(f'{path}/{dsid}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):
if dsid == 'AppliancesEnergy': dset_id = 3902637
elif dsid == 'HouseholdPowerConsumption1': dset_id = 3902704
elif dsid == 'HouseholdPowerConsumption2': dset_id = 3902706
elif dsid == 'BenzeneConcentration': dset_id = 3902673
elif dsid == 'BeijingPM25Quality': dset_id = 3902671
elif dsid == 'BeijingPM10Quality': dset_id = 3902667
elif dsid == 'LiveFuelMoistureContent': dset_id = 3902716
elif dsid == 'FloodModeling1': dset_id = 3902694
elif dsid == 'FloodModeling2': dset_id = 3902696
elif dsid == 'FloodModeling3': dset_id = 3902698
elif dsid == 'AustraliaRainfall': dset_id = 3902654
elif dsid == 'PPGDalia': dset_id = 3902728
elif dsid == 'IEEEPPG': dset_id = 3902710
elif dsid == 'BIDMCRR' or dsid == 'BIDM32CRR': dset_id = 3902685
elif dsid == 'BIDMCHR' or dsid == 'BIDM32CHR': dset_id = 3902676
elif dsid == 'BIDMCSpO2' or dsid == 'BIDM32CSpO2': dset_id = 3902688
elif dsid == 'NewsHeadlineSentiment': dset_id = 3902718
elif dsid == 'NewsTitleSentiment': dset_id= 3902726
elif dsid == 'Covid3Month': dset_id = 3902690
for split in ['TRAIN', 'TEST']:
url = f"https://zenodo.org/record/{dset_id}/files/{dsid}_{split}.ts"
fname = Path(path)/f'{dsid}/{dsid}_{split}.ts'
pv('downloading data...', verbose)
try:
download_data(url, fname, c_key='archive', force_download=force_download, timeout=timeout)
except Exception as inst:
print(inst)
warnings.warn(f'Cannot download {dsid} dataset')
if split_data: return None, None, None, None
else: return None, None, None
pv('...download complete', verbose)
try:
if split == 'TRAIN':
X_train, y_train = _load_from_tsfile_to_dataframe2(fname)
X_train = check_X(X_train, coerce_to_numpy=True)
else:
X_valid, y_valid = _load_from_tsfile_to_dataframe2(fname)
X_valid = check_X(X_valid, coerce_to_numpy=True)
except Exception as inst:
print(inst)
warnings.warn(f'Cannot create numpy arrays for {dsid} dataset')
if split_data: return None, None, None, None
else: return None, None, None
np.save(f'{full_tgt_dir}/X_train.npy', X_train)
np.save(f'{full_tgt_dir}/y_train.npy', y_train)
np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)
np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)
np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))
np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))
del X_train, X_valid, y_train, y_valid
delete_all_in_dir(full_tgt_dir, exception='.npy')
pv('...numpy arrays correctly saved', verbose)
mmap_mode = mode if on_disk else None
X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)
y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)
X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)
y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)
if Xdtype is not None:
X_train = X_train.astype(Xdtype)
X_valid = X_valid.astype(Xdtype)
if ydtype is not None:
y_train = y_train.astype(ydtype)
y_valid = y_valid.astype(ydtype)
if split_data:
if verbose:
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_valid:', X_valid.shape)
print('y_valid:', y_valid.shape, '\n')
return X_train, y_train, X_valid, y_valid
else:
X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)
y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)
splits = get_predefined_splits(X_train, X_valid)
if verbose:
print('X :', X .shape)
print('y :', y .shape)
print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n')
return X, y, splits
get_regression_data = get_Monash_regression_data
dsid = "Covid3Month"
X_train, y_train, X_valid, y_valid = get_Monash_regression_data(dsid, on_disk=False, split_data=True, force_download=False)
X, y, splits = get_Monash_regression_data(dsid, on_disk=True, split_data=False, force_download=False, verbose=True)
if X_train is not None:
test_eq(X_train.shape, (140, 1, 84))
if X is not None:
test_eq(X.shape, (201, 1, 84))
#export
def get_forecasting_list():
return sorted([
"Sunspots", "Weather"
])
forecasting_time_series = get_forecasting_list()
#export
def get_forecasting_time_series(dsid, path='./data/forecasting/', force_download=False, verbose=True, **kwargs):
dsid_list = [fd for fd in forecasting_time_series if fd.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a forecasting dataset'
dsid = dsid_list[0]
if dsid == 'Weather': full_tgt_dir = Path(path)/f'{dsid}.csv.zip'
else: full_tgt_dir = Path(path)/f'{dsid}.csv'
pv(f'Dataset: {dsid}', verbose)
if dsid == 'Sunspots': url = "https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Sunspots.csv"
elif dsid == 'Weather': url = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip'
try:
pv("downloading data...", verbose)
if force_download:
try: os.remove(full_tgt_dir)
except OSError: pass
download_data(url, full_tgt_dir, force_download=force_download, **kwargs)
pv(f"...data downloaded. Path = {full_tgt_dir}", verbose)
if dsid == 'Sunspots':
df = pd.read_csv(full_tgt_dir, parse_dates=['Date'], index_col=['Date'])
return df['Monthly Mean Total Sunspot Number'].asfreq('1M').to_frame()
elif dsid == 'Weather':
# This code comes from a great Keras time-series tutorial notebook (https://www.tensorflow.org/tutorials/structured_data/time_series)
df = pd.read_csv(full_tgt_dir)
df = df[5::6] # slice [start:stop:step], starting from index 5 take every 6th record.
date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')
            # The raw data uses -9999.0 as a sentinel for bad wind readings; replace it with 0.0
wv = df['wv (m/s)']
bad_wv = wv == -9999.0
wv[bad_wv] = 0.0
max_wv = df['max. wv (m/s)']
bad_max_wv = max_wv == -9999.0
max_wv[bad_max_wv] = 0.0
wv = df.pop('wv (m/s)')
max_wv = df.pop('max. wv (m/s)')
# Convert to radians.
wd_rad = df.pop('wd (deg)')*np.pi / 180
# Calculate the wind x and y components.
df['Wx'] = wv*np.cos(wd_rad)
df['Wy'] = wv*np.sin(wd_rad)
# Calculate the max wind x and y components.
df['max Wx'] = max_wv*np.cos(wd_rad)
df['max Wy'] = max_wv*np.sin(wd_rad)
timestamp_s = date_time.map(datetime.timestamp)
day = 24*60*60
year = (365.2425)*day
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
df.reset_index(drop=True, inplace=True)
return df
else:
return full_tgt_dir
except Exception as inst:
print(inst)
warnings.warn(f"Cannot download {dsid} dataset")
return
ts = get_forecasting_time_series("sunspots", force_download=False)
test_eq(len(ts), 3235)
ts
ts = get_forecasting_time_series("weather", force_download=False)
if ts is not None:
test_eq(len(ts), 70091)
print(ts)
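# A minimal, illustrative sketch (pure numpy; the helper name is my own, not part of tsai)
# of turning a forecasting dataframe like the Weather one above into fixed-size
# lookback/horizon windows:
import numpy as np
def _make_windows(arr, lookback=24, horizon=1):
    "Return (n_windows, lookback, n_vars) inputs and (n_windows, horizon, n_vars) targets."
    n = len(arr) - lookback - horizon + 1
    Xw = np.stack([arr[i:i + lookback] for i in range(n)])
    yw = np.stack([arr[i + lookback:i + lookback + horizon] for i in range(n)])
    return Xw, yw
if ts is not None:
    _Xw, _yw = _make_windows(ts.values[:1000], lookback=24, horizon=1)
    print(_Xw.shape, _yw.shape)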
# export
Monash_forecasting_list = ['m1_yearly_dataset',
'm1_quarterly_dataset',
'm1_monthly_dataset',
'm3_yearly_dataset',
'm3_quarterly_dataset',
'm3_monthly_dataset',
'm3_other_dataset',
'm4_yearly_dataset',
'm4_quarterly_dataset',
'm4_monthly_dataset',
'm4_weekly_dataset',
'm4_daily_dataset',
'm4_hourly_dataset',
'tourism_yearly_dataset',
'tourism_quarterly_dataset',
'tourism_monthly_dataset',
'nn5_daily_dataset_with_missing_values',
'nn5_daily_dataset_without_missing_values',
'nn5_weekly_dataset',
'cif_2016_dataset',
'kaggle_web_traffic_dataset_with_missing_values',
'kaggle_web_traffic_dataset_without_missing_values',
'kaggle_web_traffic_weekly_dataset',
'solar_10_minutes_dataset',
'solar_weekly_dataset',
'electricity_hourly_dataset',
'electricity_weekly_dataset',
'london_smart_meters_dataset_with_missing_values',
'london_smart_meters_dataset_without_missing_values',
'wind_farms_minutely_dataset_with_missing_values',
'wind_farms_minutely_dataset_without_missing_values',
'car_parts_dataset_with_missing_values',
'car_parts_dataset_without_missing_values',
'dominick_dataset',
'fred_md_dataset',
'traffic_hourly_dataset',
'traffic_weekly_dataset',
'pedestrian_counts_dataset',
'hospital_dataset',
'covid_deaths_dataset',
'kdd_cup_2018_dataset_with_missing_values',
'kdd_cup_2018_dataset_without_missing_values',
'weather_dataset',
'sunspot_dataset_with_missing_values',
'sunspot_dataset_without_missing_values',
'saugeenday_dataset',
'us_births_dataset',
'elecdemand_dataset',
'solar_4_seconds_dataset',
'wind_4_seconds_dataset',
'Sunspots', 'Weather']
forecasting_list = Monash_forecasting_list
# export
## Original code available at: https://github.com/rakshitha123/TSForecasting
# This repository contains the implementations related to the experiments of a set of publicly available datasets that are used in
# the time series forecasting research space.
# The benchmark datasets are available at: https://zenodo.org/communities/forecasting. For more details, please refer to our website:
# https://forecastingdata.org/ and paper: https://arxiv.org/abs/2105.06643.
# Citation:
# @misc{godahewa2021monash,
# author="Godahewa, Rakshitha and Bergmeir, Christoph and Webb, Geoffrey I. and Hyndman, Rob J. and Montero-Manso, Pablo",
# title="Monash Time Series Forecasting Archive",
# howpublished ="\url{https://arxiv.org/abs/2105.06643}",
# year="2021"
# }
# Converts the contents in a .tsf file into a dataframe and returns it along with other meta-data of the dataset: frequency, horizon, whether the dataset contains missing values and whether the series have equal lengths
#
# Parameters
# full_file_path_and_name - complete .tsf file path
# replace_missing_vals_with - the value used to mark missing observations in the returned dataframe
# value_column_name - the preferred name for the column that holds the series values in the returned dataframe
def convert_tsf_to_dataframe(full_file_path_and_name, replace_missing_vals_with = 'NaN', value_column_name = "series_value"):
col_names = []
col_types = []
all_data = {}
line_count = 0
frequency = None
forecast_horizon = None
contain_missing_values = None
contain_equal_length = None
found_data_tag = False
found_data_section = False
started_reading_data_section = False
with open(full_file_path_and_name, 'r', encoding='cp1252') as file:
for line in file:
# Strip white space from start/end of line
line = line.strip()
if line:
if line.startswith("@"): # Read meta-data
if not line.startswith("@data"):
line_content = line.split(" ")
if line.startswith("@attribute"):
if (len(line_content) != 3): # Attributes have both name and type
raise TsFileParseException("Invalid meta-data specification.")
col_names.append(line_content[1])
col_types.append(line_content[2])
else:
if len(line_content) != 2: # Other meta-data have only values
raise TsFileParseException("Invalid meta-data specification.")
if line.startswith("@frequency"):
frequency = line_content[1]
elif line.startswith("@horizon"):
forecast_horizon = int(line_content[1])
elif line.startswith("@missing"):
contain_missing_values = bool(distutils.util.strtobool(line_content[1]))
elif line.startswith("@equallength"):
contain_equal_length = bool(distutils.util.strtobool(line_content[1]))
else:
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section. Attribute section must come before data.")
found_data_tag = True
elif not line.startswith("#"):
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section. Attribute section must come before data.")
elif not found_data_tag:
raise TsFileParseException("Missing @data tag.")
else:
if not started_reading_data_section:
started_reading_data_section = True
found_data_section = True
all_series = []
for col in col_names:
all_data[col] = []
full_info = line.split(":")
if len(full_info) != (len(col_names) + 1):
raise TsFileParseException("Missing attributes/values in series.")
series = full_info[len(full_info) - 1]
series = series.split(",")
                        if len(series) == 0:
                            raise TsFileParseException("A given series should contain a set of comma-separated numeric values. At least one numeric value should be present in a series. Missing values should be indicated with the ? symbol.")
numeric_series = []
for val in series:
if val == "?":
numeric_series.append(replace_missing_vals_with)
else:
numeric_series.append(float(val))
if (numeric_series.count(replace_missing_vals_with) == len(numeric_series)):
                            raise TsFileParseException("All series values are missing. A given series should contain a set of comma-separated numeric values. At least one numeric value should be present in a series.")
all_series.append(pd.Series(numeric_series).array)
for i in range(len(col_names)):
att_val = None
if col_types[i] == "numeric":
att_val = int(full_info[i])
elif col_types[i] == "string":
att_val = str(full_info[i])
elif col_types[i] == "date":
att_val = datetime.strptime(full_info[i], '%Y-%m-%d %H-%M-%S')
else:
raise TsFileParseException("Invalid attribute type.") # Currently, the code supports only numeric, string and date types. Extend this as required.
                            if att_val is None:
raise TsFileParseException("Invalid attribute value.")
else:
all_data[col_names[i]].append(att_val)
line_count = line_count + 1
if line_count == 0:
raise TsFileParseException("Empty file.")
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section.")
if not found_data_section:
raise TsFileParseException("Missing series information under data section.")
all_data[value_column_name] = all_series
loaded_data = pd.DataFrame(all_data)
return loaded_data, frequency, forecast_horizon, contain_missing_values, contain_equal_length
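# Hedged example of calling convert_tsf_to_dataframe directly on an already-downloaded
# .tsf file (the path is an assumption; get_Monash_forecasting_data below takes care of
# downloading and converting for you):
_tsf_file = Path('./data/forecasting/m1_yearly_dataset.tsf')
if _tsf_file.is_file():
    _df, _freq, _horizon, _has_missing, _equal_len = convert_tsf_to_dataframe(_tsf_file)
    print(_df.shape, _freq, _horizon, _has_missing, _equal_len)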
# export
def get_Monash_forecasting_data(dsid, path='./data/forecasting/', force_download=False, remove_from_disk=False, verbose=True):
pv(f'Dataset: {dsid}', verbose)
dsid = dsid.lower()
assert dsid in Monash_forecasting_list, f'{dsid} not available in Monash_forecasting_list'
if dsid == 'm1_yearly_dataset': url = 'https://zenodo.org/record/4656193/files/m1_yearly_dataset.zip'
elif dsid == 'm1_quarterly_dataset': url = 'https://zenodo.org/record/4656154/files/m1_quarterly_dataset.zip'
elif dsid == 'm1_monthly_dataset': url = 'https://zenodo.org/record/4656159/files/m1_monthly_dataset.zip'
elif dsid == 'm3_yearly_dataset': url = 'https://zenodo.org/record/4656222/files/m3_yearly_dataset.zip'
elif dsid == 'm3_quarterly_dataset': url = 'https://zenodo.org/record/4656262/files/m3_quarterly_dataset.zip'
elif dsid == 'm3_monthly_dataset': url = 'https://zenodo.org/record/4656298/files/m3_monthly_dataset.zip'
elif dsid == 'm3_other_dataset': url = 'https://zenodo.org/record/4656335/files/m3_other_dataset.zip'
elif dsid == 'm4_yearly_dataset': url = 'https://zenodo.org/record/4656379/files/m4_yearly_dataset.zip'
elif dsid == 'm4_quarterly_dataset': url = 'https://zenodo.org/record/4656410/files/m4_quarterly_dataset.zip'
elif dsid == 'm4_monthly_dataset': url = 'https://zenodo.org/record/4656480/files/m4_monthly_dataset.zip'
elif dsid == 'm4_weekly_dataset': url = 'https://zenodo.org/record/4656522/files/m4_weekly_dataset.zip'
elif dsid == 'm4_daily_dataset': url = 'https://zenodo.org/record/4656548/files/m4_daily_dataset.zip'
elif dsid == 'm4_hourly_dataset': url = 'https://zenodo.org/record/4656589/files/m4_hourly_dataset.zip'
elif dsid == 'tourism_yearly_dataset': url = 'https://zenodo.org/record/4656103/files/tourism_yearly_dataset.zip'
elif dsid == 'tourism_quarterly_dataset': url = 'https://zenodo.org/record/4656093/files/tourism_quarterly_dataset.zip'
elif dsid == 'tourism_monthly_dataset': url = 'https://zenodo.org/record/4656096/files/tourism_monthly_dataset.zip'
elif dsid == 'nn5_daily_dataset_with_missing_values': url = 'https://zenodo.org/record/4656110/files/nn5_daily_dataset_with_missing_values.zip'
elif dsid == 'nn5_daily_dataset_without_missing_values': url = 'https://zenodo.org/record/4656117/files/nn5_daily_dataset_without_missing_values.zip'
elif dsid == 'nn5_weekly_dataset': url = 'https://zenodo.org/record/4656125/files/nn5_weekly_dataset.zip'
elif dsid == 'cif_2016_dataset': url = 'https://zenodo.org/record/4656042/files/cif_2016_dataset.zip'
elif dsid == 'kaggle_web_traffic_dataset_with_missing_values': url = 'https://zenodo.org/record/4656080/files/kaggle_web_traffic_dataset_with_missing_values.zip'
elif dsid == 'kaggle_web_traffic_dataset_without_missing_values': url = 'https://zenodo.org/record/4656075/files/kaggle_web_traffic_dataset_without_missing_values.zip'
elif dsid == 'kaggle_web_traffic_weekly': url = 'https://zenodo.org/record/4656664/files/kaggle_web_traffic_weekly_dataset.zip'
elif dsid == 'solar_10_minutes_dataset': url = 'https://zenodo.org/record/4656144/files/solar_10_minutes_dataset.zip'
elif dsid == 'solar_weekly_dataset': url = 'https://zenodo.org/record/4656151/files/solar_weekly_dataset.zip'
elif dsid == 'electricity_hourly_dataset': url = 'https://zenodo.org/record/4656140/files/electricity_hourly_dataset.zip'
elif dsid == 'electricity_weekly_dataset': url = 'https://zenodo.org/record/4656141/files/electricity_weekly_dataset.zip'
elif dsid == 'london_smart_meters_dataset_with_missing_values': url = 'https://zenodo.org/record/4656072/files/london_smart_meters_dataset_with_missing_values.zip'
elif dsid == 'london_smart_meters_dataset_without_missing_values': url = 'https://zenodo.org/record/4656091/files/london_smart_meters_dataset_without_missing_values.zip'
elif dsid == 'wind_farms_minutely_dataset_with_missing_values': url = 'https://zenodo.org/record/4654909/files/wind_farms_minutely_dataset_with_missing_values.zip'
elif dsid == 'wind_farms_minutely_dataset_without_missing_values': url = 'https://zenodo.org/record/4654858/files/wind_farms_minutely_dataset_without_missing_values.zip'
elif dsid == 'car_parts_dataset_with_missing_values': url = 'https://zenodo.org/record/4656022/files/car_parts_dataset_with_missing_values.zip'
elif dsid == 'car_parts_dataset_without_missing_values': url = 'https://zenodo.org/record/4656021/files/car_parts_dataset_without_missing_values.zip'
elif dsid == 'dominick_dataset': url = 'https://zenodo.org/record/4654802/files/dominick_dataset.zip'
elif dsid == 'fred_md_dataset': url = 'https://zenodo.org/record/4654833/files/fred_md_dataset.zip'
elif dsid == 'traffic_hourly_dataset': url = 'https://zenodo.org/record/4656132/files/traffic_hourly_dataset.zip'
elif dsid == 'traffic_weekly_dataset': url = 'https://zenodo.org/record/4656135/files/traffic_weekly_dataset.zip'
elif dsid == 'pedestrian_counts_dataset': url = 'https://zenodo.org/record/4656626/files/pedestrian_counts_dataset.zip'
elif dsid == 'hospital_dataset': url = 'https://zenodo.org/record/4656014/files/hospital_dataset.zip'
elif dsid == 'covid_deaths_dataset': url = 'https://zenodo.org/record/4656009/files/covid_deaths_dataset.zip'
elif dsid == 'kdd_cup_2018_dataset_with_missing_values': url = 'https://zenodo.org/record/4656719/files/kdd_cup_2018_dataset_with_missing_values.zip'
elif dsid == 'kdd_cup_2018_dataset_without_missing_values': url = 'https://zenodo.org/record/4656756/files/kdd_cup_2018_dataset_without_missing_values.zip'
elif dsid == 'weather_dataset': url = 'https://zenodo.org/record/4654822/files/weather_dataset.zip'
elif dsid == 'sunspot_dataset_with_missing_values': url = 'https://zenodo.org/record/4654773/files/sunspot_dataset_with_missing_values.zip'
elif dsid == 'sunspot_dataset_without_missing_values': url = 'https://zenodo.org/record/4654722/files/sunspot_dataset_without_missing_values.zip'
elif dsid == 'saugeenday_dataset': url = 'https://zenodo.org/record/4656058/files/saugeenday_dataset.zip'
elif dsid == 'us_births_dataset': url = 'https://zenodo.org/record/4656049/files/us_births_dataset.zip'
elif dsid == 'elecdemand_dataset': url = 'https://zenodo.org/record/4656069/files/elecdemand_dataset.zip'
elif dsid == 'solar_4_seconds_dataset': url = 'https://zenodo.org/record/4656027/files/solar_4_seconds_dataset.zip'
elif dsid == 'wind_4_seconds_dataset': url = 'https://zenodo.org/record/4656032/files/wind_4_seconds_dataset.zip'
path = Path(path)
full_path = path/f'{dsid}.tsf'
if not full_path.exists() or force_download:
try:
decompress_from_url(url, target_dir=path, verbose=verbose)
except Exception as inst:
print(inst)
pv("converting dataframe to numpy array...", verbose)
data, frequency, forecast_horizon, contain_missing_values, contain_equal_length = convert_tsf_to_dataframe(full_path)
X = to3d(stack_pad(data['series_value']))
pv("...dataframe converted to numpy array", verbose)
pv(f'\nX.shape: {X.shape}', verbose)
pv(f'freq: {frequency}', verbose)
pv(f'forecast_horizon: {forecast_horizon}', verbose)
pv(f'contain_missing_values: {contain_missing_values}', verbose)
pv(f'contain_equal_length: {contain_equal_length}', verbose=verbose)
if remove_from_disk: os.remove(full_path)
return X
get_forecasting_data = get_Monash_forecasting_data
dsid = 'm1_yearly_dataset'
X = get_Monash_forecasting_data(dsid, force_download=False)
if X is not None:
test_eq(X.shape, (181, 1, 58))
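# Series in a .tsf archive can have unequal lengths; stack_pad pads the shorter ones
# (with NaN, as assumed here), so the true length of each series can be recovered like this:
if X is not None:
    import numpy as np
    _true_lens = np.isfinite(np.asarray(X[:, 0], dtype=float)).sum(-1)
    print('min/max series length:', _true_lens.min(), _true_lens.max())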
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
External data
> Helper functions used to download and extract common time series datasets.
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.validation import *
#export
from sktime.utils.data_io import load_from_tsfile_to_dataframe as ts2df
from sktime.utils.validation.panel import check_X
from sktime.utils.data_io import TsFileParseException
#export
from fastai.data.external import *
from tqdm import tqdm
import zipfile
import tempfile
try: from urllib import urlretrieve
except ImportError: from urllib.request import urlretrieve
import shutil
import distutils
import distutils.util  # distutils.util.strtobool is used below when parsing .tsf metadata
#export
def decompress_from_url(url, target_dir=None, verbose=False):
# Download
try:
pv("downloading data...", verbose)
fname = os.path.basename(url)
tmpdir = tempfile.mkdtemp()
tmpfile = os.path.join(tmpdir, fname)
urlretrieve(url, tmpfile)
pv("...data downloaded", verbose)
# Decompress
try:
pv("decompressing data...", verbose)
if not os.path.exists(target_dir): os.makedirs(target_dir)
shutil.unpack_archive(tmpfile, target_dir)
shutil.rmtree(tmpdir)
pv("...data decompressed", verbose)
return target_dir
except:
shutil.rmtree(tmpdir)
if verbose: sys.stderr.write("Could not decompress file, aborting.\n")
except:
shutil.rmtree(tmpdir)
if verbose:
sys.stderr.write("Could not download url. Please, check url.\n")
#export
from fastdownload import download_url
def download_data(url, fname=None, c_key='archive', force_download=False, timeout=4, verbose=False):
"Download `url` to `fname`."
fname = Path(fname or URLs.path(url, c_key=c_key))
fname.parent.mkdir(parents=True, exist_ok=True)
if not fname.exists() or force_download: download_url(url, dest=fname, timeout=timeout, show_progress=verbose)
return fname
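# Hedged, illustrative combination of the two helpers above (the URL and target paths are
# placeholders, not real datasets; the get_*_data functions below wrap this pattern for you):
_run_download_example = False  # flip to True and use a real URL to try it out
if _run_download_example:
    _zip_url = 'https://example.com/some_dataset.zip'  # placeholder URL
    download_data(_zip_url, fname='./data/tmp/some_dataset.zip', force_download=False, verbose=True)
    decompress_from_url(_zip_url, target_dir='./data/tmp', verbose=True)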
# export
def get_UCR_univariate_list():
return [
'ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY',
'AllGestureWiimoteZ', 'ArrowHead', 'Beef', 'BeetleFly', 'BirdChicken',
'BME', 'Car', 'CBF', 'Chinatown', 'ChlorineConcentration',
'CinCECGTorso', 'Coffee', 'Computers', 'CricketX', 'CricketY',
'CricketZ', 'Crop', 'DiatomSizeReduction',
'DistalPhalanxOutlineAgeGroup', 'DistalPhalanxOutlineCorrect',
'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame',
'DodgerLoopWeekend', 'Earthquakes', 'ECG200', 'ECG5000', 'ECGFiveDays',
'ElectricDevices', 'EOGHorizontalSignal', 'EOGVerticalSignal',
'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords',
'Fish', 'FordA', 'FordB', 'FreezerRegularTrain', 'FreezerSmallTrain',
'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3',
'GesturePebbleZ1', 'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan',
'GunPointMaleVersusFemale', 'GunPointOldVersusYoung', 'Ham',
'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate',
'InsectEPGRegularTrain', 'InsectEPGSmallTrain', 'InsectWingbeatSound',
'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2',
'Lightning7', 'Mallat', 'Meat', 'MedicalImages', 'MelbournePedestrian',
'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect',
'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain',
'MoteStrain', 'NonInvasiveFetalECGThorax1',
'NonInvasiveFetalECGThorax2', 'OliveOil', 'OSULeaf',
'PhalangesOutlinesCorrect', 'Phoneme', 'PickupGestureWiimoteZ',
'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'PLAID', 'Plane',
'PowerCons', 'ProximalPhalanxOutlineAgeGroup',
'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW',
'RefrigerationDevices', 'Rock', 'ScreenType', 'SemgHandGenderCh2',
'SemgHandMovementCh2', 'SemgHandSubjectCh2', 'ShakeGestureWiimoteZ',
'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace',
'SonyAIBORobotSurface1', 'SonyAIBORobotSurface2', 'StarLightCurves',
'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl',
'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG',
'TwoPatterns', 'UMD', 'UWaveGestureLibraryAll', 'UWaveGestureLibraryX',
'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine',
'WordSynonyms', 'Worms', 'WormsTwoClass', 'Yoga'
]
test_eq(len(get_UCR_univariate_list()), 128)
UTSC_datasets = get_UCR_univariate_list()
UCR_univariate_list = get_UCR_univariate_list()
#export
def get_UCR_multivariate_list():
return [
'ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions',
'CharacterTrajectories', 'Cricket', 'DuckDuckGeese', 'EigenWorms',
'Epilepsy', 'ERing', 'EthanolConcentration', 'FaceDetection',
'FingerMovements', 'HandMovementDirection', 'Handwriting', 'Heartbeat',
'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery',
'NATOPS', 'PEMS-SF', 'PenDigits', 'PhonemeSpectra', 'RacketSports',
'SelfRegulationSCP1', 'SelfRegulationSCP2', 'SpokenArabicDigits',
'StandWalkJump', 'UWaveGestureLibrary'
]
test_eq(len(get_UCR_multivariate_list()), 30)
MTSC_datasets = get_UCR_multivariate_list()
UCR_multivariate_list = get_UCR_multivariate_list()
UCR_list = sorted(UCR_univariate_list + UCR_multivariate_list)
classification_list = UCR_list
TSC_datasets = classification_datasets = UCR_list
len(UCR_list)
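# Small illustrative helper (not part of tsai) to check whether a UCR dataset id is
# univariate or multivariate, based on the two lists defined above:
def _is_univariate(dsid):
    return dsid in UCR_univariate_list
print(_is_univariate('ECGFiveDays'), _is_univariate('NATOPS'))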
#export
def get_UCR_data(dsid, path='.', parent_dir='data/UCR', on_disk=True, mode='c', Xdtype='float32', ydtype=None, return_split=True, split_data=True,
force_download=False, verbose=False):
dsid_list = [ds for ds in UCR_list if ds.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a UCR dataset'
dsid = dsid_list[0]
return_split = return_split and split_data # keep return_split for compatibility. It will be replaced by split_data
if dsid in ['InsectWingbeat']:
warnings.warn(f'Be aware that download of the {dsid} dataset is very slow!')
pv(f'Dataset: {dsid}', verbose)
full_parent_dir = Path(path)/parent_dir
full_tgt_dir = full_parent_dir/dsid
# if not os.path.exists(full_tgt_dir): os.makedirs(full_tgt_dir)
full_tgt_dir.parent.mkdir(parents=True, exist_ok=True)
if force_download or not all([os.path.isfile(f'{full_tgt_dir}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):
# Option A
src_website = 'http://www.timeseriesclassification.com/Downloads'
decompress_from_url(f'{src_website}/{dsid}.zip', target_dir=full_tgt_dir, verbose=verbose)
if dsid == 'DuckDuckGeese':
with zipfile.ZipFile(Path(f'{full_parent_dir}/DuckDuckGeese/DuckDuckGeese_ts.zip'), 'r') as zip_ref:
zip_ref.extractall(Path(parent_dir))
        if not os.path.exists(full_tgt_dir/f'{dsid}_TRAIN.ts') or not os.path.exists(full_tgt_dir/f'{dsid}_TEST.ts') or \
Path(full_tgt_dir/f'{dsid}_TRAIN.ts').stat().st_size == 0 or Path(full_tgt_dir/f'{dsid}_TEST.ts').stat().st_size == 0:
print('It has not been possible to download the required files')
if return_split:
return None, None, None, None
else:
return None, None, None
pv('loading ts files to dataframe...', verbose)
X_train_df, y_train = ts2df(full_tgt_dir/f'{dsid}_TRAIN.ts')
X_valid_df, y_valid = ts2df(full_tgt_dir/f'{dsid}_TEST.ts')
pv('...ts files loaded', verbose)
pv('preparing numpy arrays...', verbose)
X_train_ = []
X_valid_ = []
for i in progress_bar(range(X_train_df.shape[-1]), display=verbose, leave=False):
X_train_.append(stack_pad(X_train_df[f'dim_{i}'])) # stack arrays even if they have different lengths
X_valid_.append(stack_pad(X_valid_df[f'dim_{i}'])) # stack arrays even if they have different lengths
X_train = np.transpose(np.stack(X_train_, axis=-1), (0, 2, 1))
X_valid = np.transpose(np.stack(X_valid_, axis=-1), (0, 2, 1))
X_train, X_valid = match_seq_len(X_train, X_valid)
np.save(f'{full_tgt_dir}/X_train.npy', X_train)
np.save(f'{full_tgt_dir}/y_train.npy', y_train)
np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)
np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)
np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))
np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))
del X_train, X_valid, y_train, y_valid
delete_all_in_dir(full_tgt_dir, exception='.npy')
pv('...numpy arrays correctly saved', verbose)
mmap_mode = mode if on_disk else None
X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)
y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)
X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)
y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)
if return_split:
if Xdtype is not None:
X_train = X_train.astype(Xdtype)
X_valid = X_valid.astype(Xdtype)
if ydtype is not None:
y_train = y_train.astype(ydtype)
y_valid = y_valid.astype(ydtype)
if verbose:
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_valid:', X_valid.shape)
print('y_valid:', y_valid.shape, '\n')
return X_train, y_train, X_valid, y_valid
else:
X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)
y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)
splits = get_predefined_splits(X_train, X_valid)
if Xdtype is not None:
X = X.astype(Xdtype)
if verbose:
print('X :', X .shape)
print('y :', y .shape)
print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n')
return X, y, splits
get_classification_data = get_UCR_data
#hide
PATH = Path('.')
dsids = ['ECGFiveDays', 'AtrialFibrillation'] # univariate and multivariate
for dsid in dsids:
print(dsid)
tgt_dir = PATH/f'data/UCR/{dsid}'
if os.path.isdir(tgt_dir): shutil.rmtree(tgt_dir)
test_eq(len(get_files(tgt_dir)), 0) # no file left
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)
test_eq(len(get_files(tgt_dir, '.npy')), 6)
    test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test that no other files/dirs are left behind
del X_train, y_train, X_valid, y_valid
start = time.time()
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)
elapsed = time.time() - start
test_eq(elapsed < 1, True)
test_eq(X_train.ndim, 3)
test_eq(y_train.ndim, 1)
test_eq(X_valid.ndim, 3)
test_eq(y_valid.ndim, 1)
test_eq(len(get_files(tgt_dir, '.npy')), 6)
    test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test that no other files/dirs are left behind
test_eq(X_train.ndim, 3)
test_eq(y_train.ndim, 1)
test_eq(X_valid.ndim, 3)
test_eq(y_valid.ndim, 1)
test_eq(X_train.dtype, np.float32)
test_eq(X_train.__class__.__name__, 'memmap')
del X_train, y_train, X_valid, y_valid
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, on_disk=False)
test_eq(X_train.__class__.__name__, 'ndarray')
del X_train, y_train, X_valid, y_valid
X_train, y_train, X_valid, y_valid = get_UCR_data('natops')
dsid = 'natops'
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, verbose=True)
X, y, splits = get_UCR_data(dsid, split_data=False)
test_eq(X[splits[0]], X_train)
test_eq(y[splits[1]], y_valid)
test_type(X, X_train)
test_type(y, y_train)
#export
def check_data(X, y=None, splits=None, show_plot=True):
try: X_is_nan = np.isnan(X).sum()
    except: X_is_nan = 'could not be checked'
if X.ndim == 3:
shape = f'[{X.shape[0]} samples x {X.shape[1]} features x {X.shape[-1]} timesteps]'
print(f'X - shape: {shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')
else:
print(f'X - shape: {X.shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')
if not isinstance(X, np.ndarray): warnings.warn('X must be a np.ndarray')
if X_is_nan:
warnings.warn('X must not contain nan values')
if y is not None:
y_shape = y.shape
y = y.ravel()
if isinstance(y[0], str):
n_classes = f'{len(np.unique(y))} ({len(y)//len(np.unique(y))} samples per class) {L(np.unique(y).tolist())}'
y_is_nan = 'nan' in [c.lower() for c in np.unique(y)]
print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} n_classes: {n_classes} isnan: {y_is_nan}')
else:
y_is_nan = np.isnan(y).sum()
print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} isnan: {y_is_nan}')
if not isinstance(y, np.ndarray): warnings.warn('y must be a np.ndarray')
if y_is_nan:
warnings.warn('y must not contain nan values')
if splits is not None:
_splits = get_splits_len(splits)
overlap = check_splits_overlap(splits)
print(f'splits - n_splits: {len(_splits)} shape: {_splits} overlap: {overlap}')
if show_plot: plot_splits(splits)
dsid = 'ECGFiveDays'
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
check_data(X, y, splits)
check_data(X[:, 0], y, splits)
y = y.astype(np.float32)
check_data(X, y, splits)
y[:10] = np.nan
check_data(X[:, 0], y, splits)
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
splits = get_splits(y, 3)
check_data(X, y, splits)
check_data(X[:, 0], y, splits)
y[:5]= np.nan
check_data(X[:, 0], y, splits)
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
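# If check_data reports NaNs in X, one simple (illustrative, by no means the only) option
# is to replace them before modeling, e.g. with np.nan_to_num:
import numpy as np
_X_no_nan = np.nan_to_num(np.asarray(X), nan=0.)
print('remaining NaNs:', np.isnan(_X_no_nan).sum())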
#export
# This code comes from https://github.com/ChangWeiTan/TSRegression. As of Jan 16th, 2021 there's no pip install available.
# The following code is adapted from the python package sktime to read .ts file.
class _TsFileParseException(Exception):
"""
Should be raised when parsing a .ts file and the format is incorrect.
"""
pass
def _load_from_tsfile_to_dataframe2(full_file_path_and_name, return_separate_X_and_y=True, replace_missing_vals_with='NaN'):
"""Loads data from a .ts file into a Pandas DataFrame.
Parameters
----------
full_file_path_and_name: str
The full pathname of the .ts file to read.
return_separate_X_and_y: bool
true if X and Y values should be returned as separate Data Frames (X) and a numpy array (y), false otherwise.
        This is only relevant for data that has an associated class or target column.
replace_missing_vals_with: str
The value that missing values in the text file should be replaced with prior to parsing.
Returns
-------
DataFrame, ndarray
If return_separate_X_and_y then a tuple containing a DataFrame and a numpy array containing the relevant time-series and corresponding class values.
DataFrame
If not return_separate_X_and_y then a single DataFrame containing all time-series and (if relevant) a column "class_vals" the associated class values.
"""
# Initialize flags and variables used when parsing the file
metadata_started = False
data_started = False
has_problem_name_tag = False
has_timestamps_tag = False
has_univariate_tag = False
has_class_labels_tag = False
has_target_labels_tag = False
has_data_tag = False
previous_timestamp_was_float = None
previous_timestamp_was_int = None
previous_timestamp_was_timestamp = None
num_dimensions = None
is_first_case = True
instance_list = []
class_val_list = []
line_num = 0
# Parse the file
# print(full_file_path_and_name)
with open(full_file_path_and_name, 'r', encoding='utf-8') as file:
for line in tqdm(file):
# print(".", end='')
# Strip white space from start/end of line and change to lowercase for use below
line = line.strip().lower()
# Empty lines are valid at any point in a file
if line:
# Check if this line contains metadata
                # Please note that even though metadata is parsed in this function it is not currently returned to the caller
if line.startswith("@problemname"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("problemname tag requires an associated value")
problem_name = line[len("@problemname") + 1:]
has_problem_name_tag = True
metadata_started = True
elif line.startswith("@timestamps"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException("timestamps tag requires an associated Boolean value")
elif tokens[1] == "true":
timestamps = True
elif tokens[1] == "false":
timestamps = False
else:
raise _TsFileParseException("invalid timestamps value")
has_timestamps_tag = True
metadata_started = True
elif line.startswith("@univariate"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException("univariate tag requires an associated Boolean value")
elif tokens[1] == "true":
univariate = True
elif tokens[1] == "false":
univariate = False
else:
raise _TsFileParseException("invalid univariate value")
has_univariate_tag = True
metadata_started = True
elif line.startswith("@classlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("classlabel tag requires an associated Boolean value")
if tokens[1] == "true":
class_labels = True
elif tokens[1] == "false":
class_labels = False
else:
raise _TsFileParseException("invalid classLabel value")
# Check if we have any associated class values
if token_len == 2 and class_labels:
raise _TsFileParseException("if the classlabel tag is true then class values must be supplied")
has_class_labels_tag = True
class_label_list = [token.strip() for token in tokens[2:]]
metadata_started = True
elif line.startswith("@targetlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("targetlabel tag requires an associated Boolean value")
if tokens[1] == "true":
target_labels = True
elif tokens[1] == "false":
target_labels = False
else:
raise _TsFileParseException("invalid targetLabel value")
has_target_labels_tag = True
class_val_list = []
metadata_started = True
# Check if this line contains the start of data
elif line.startswith("@data"):
if line != "@data":
raise _TsFileParseException("data tag should not have an associated value")
if data_started and not metadata_started:
raise _TsFileParseException("metadata must come before data")
else:
has_data_tag = True
data_started = True
                # If the @data tag has been found then metadata has been parsed and data can be loaded
elif data_started:
# Check that a full set of metadata has been provided
incomplete_regression_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_target_labels_tag or not has_data_tag
incomplete_classification_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_class_labels_tag or not has_data_tag
if incomplete_regression_meta_data and incomplete_classification_meta_data:
raise _TsFileParseException("a full set of metadata has not been provided before the data")
# Replace any missing values with the value specified
line = line.replace("?", replace_missing_vals_with)
                    # Check if we are dealing with data that has timestamps
if timestamps:
# We're dealing with timestamps so cannot just split line on ':' as timestamps may contain one
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
this_line_num_dimensions = 0
line_len = len(line)
char_num = 0
while char_num < line_len:
# Move through any spaces
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
                            # See if there is any more data to read in or if we should validate what has been read so far
if char_num < line_len:
# See if we have an empty dimension (i.e. no values)
if line[char_num] == ":":
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
                                    instance_list[this_line_num_dimensions].append(pd.Series(dtype=np.float32))
this_line_num_dimensions += 1
has_another_value = False
has_another_dimension = True
timestamps_for_dimension = []
values_for_dimension = []
char_num += 1
else:
# Check if we have reached a class label
if line[char_num] != "(" and target_labels:
class_val = line[char_num:].strip()
# if class_val not in class_val_list:
# raise _TsFileParseException(
# "the class value '" + class_val + "' on line " + str(
# line_num + 1) + " is not valid")
class_val_list.append(float(class_val))
char_num = line_len
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
else:
# Read in the data contained within the next tuple
if line[char_num] != "(" and not target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not start with a '('")
char_num += 1
tuple_data = ""
while char_num < line_len and line[char_num] != ")":
tuple_data += line[char_num]
char_num += 1
if char_num >= line_len or line[char_num] != ")":
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not end with a ')'")
# Read in any spaces immediately after the current tuple
char_num += 1
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
# Check if there is another value or dimension to process after this tuple
if char_num >= line_len:
has_another_value = False
has_another_dimension = False
elif line[char_num] == ",":
has_another_value = True
has_another_dimension = False
elif line[char_num] == ":":
has_another_value = False
has_another_dimension = True
char_num += 1
# Get the numeric value for the tuple by reading from the end of the tuple data backwards to the last comma
last_comma_index = tuple_data.rfind(',')
if last_comma_index == -1:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has no comma inside of it")
try:
value = tuple_data[last_comma_index + 1:]
value = float(value)
except ValueError:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that does not have a valid numeric value")
# Check the type of timestamp that we have
timestamp = tuple_data[0: last_comma_index]
try:
timestamp = int(timestamp)
timestamp_is_int = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_int = False
if not timestamp_is_int:
try:
timestamp = float(timestamp)
timestamp_is_float = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_float = False
if not timestamp_is_int and not timestamp_is_float:
try:
timestamp = timestamp.strip()
timestamp_is_timestamp = True
except ValueError:
timestamp_is_timestamp = False
# Make sure that the timestamps in the file (not just this dimension or case) are consistent
if not timestamp_is_timestamp and not timestamp_is_int and not timestamp_is_float:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has an invalid timestamp '" + timestamp + "'")
if previous_timestamp_was_float is not None and previous_timestamp_was_float and not timestamp_is_float:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_int is not None and previous_timestamp_was_int and not timestamp_is_int:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_timestamp is not None and previous_timestamp_was_timestamp and not timestamp_is_timestamp:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
# Store the values
timestamps_for_dimension += [timestamp]
values_for_dimension += [value]
# If this was our first tuple then we store the type of timestamp we had
if previous_timestamp_was_timestamp is None and timestamp_is_timestamp:
previous_timestamp_was_timestamp = True
previous_timestamp_was_int = False
previous_timestamp_was_float = False
if previous_timestamp_was_int is None and timestamp_is_int:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = True
previous_timestamp_was_float = False
if previous_timestamp_was_float is None and timestamp_is_float:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = False
previous_timestamp_was_float = True
# See if we should add the data for this dimension
if not has_another_value:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
if timestamp_is_timestamp:
timestamps_for_dimension = pd.DatetimeIndex(timestamps_for_dimension)
instance_list[this_line_num_dimensions].append(
pd.Series(index=timestamps_for_dimension, data=values_for_dimension))
this_line_num_dimensions += 1
timestamps_for_dimension = []
values_for_dimension = []
elif has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series(dtype=np.float32))
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and not has_another_dimension:
if num_dimensions is None:
num_dimensions = this_line_num_dimensions
if num_dimensions != this_line_num_dimensions:
raise _TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
                        # Check that we are not expecting any more data and, if not, store what has been processed above
if has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
                            instance_list[this_line_num_dimensions].append(pd.Series(dtype=np.float32))
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and num_dimensions != this_line_num_dimensions:
raise _TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
# Check if we should have class values, and if so that they are contained in those listed in the metadata
if target_labels and len(class_val_list) == 0:
raise _TsFileParseException("the cases have no associated class values")
else:
dimensions = line.split(":")
# If first row then note the number of dimensions (that must be the same for all cases)
if is_first_case:
num_dimensions = len(dimensions)
if target_labels:
num_dimensions -= 1
for dim in range(0, num_dimensions):
instance_list.append([])
is_first_case = False
# See how many dimensions the case whose data is represented in this line has
this_line_num_dimensions = len(dimensions)
if target_labels:
this_line_num_dimensions -= 1
# All dimensions should be included for all series, even if they are empty
if this_line_num_dimensions != num_dimensions:
raise _TsFileParseException("inconsistent number of dimensions. Expecting " + str(
num_dimensions) + " but have read " + str(this_line_num_dimensions))
# Process the data for each dimension
for dim in range(0, num_dimensions):
dimension = dimensions[dim].strip()
if dimension:
data_series = dimension.split(",")
data_series = [float(i) for i in data_series]
instance_list[dim].append(pd.Series(data_series))
else:
instance_list[dim].append(pd.Series())
if target_labels:
class_val_list.append(float(dimensions[num_dimensions].strip()))
line_num += 1
# Check that the file was not empty
if line_num:
# Check that the file contained both metadata and data
complete_regression_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_target_labels_tag and has_data_tag
complete_classification_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_class_labels_tag and has_data_tag
if metadata_started and not complete_regression_meta_data and not complete_classification_meta_data:
raise _TsFileParseException("metadata incomplete")
elif metadata_started and not data_started:
raise _TsFileParseException("file contained metadata but no data")
elif metadata_started and data_started and len(instance_list) == 0:
raise _TsFileParseException("file contained metadata but no data")
# Create a DataFrame from the data parsed above
data = pd.DataFrame(dtype=np.float32)
for dim in range(0, num_dimensions):
data['dim_' + str(dim)] = instance_list[dim]
# Check if we should return any associated class labels separately
if target_labels:
if return_separate_X_and_y:
return data, np.asarray(class_val_list)
else:
data['class_vals'] = pd.Series(class_val_list)
return data
else:
return data
else:
raise _TsFileParseException("empty file")
#export
def get_Monash_regression_list():
return sorted([
"AustraliaRainfall", "HouseholdPowerConsumption1",
"HouseholdPowerConsumption2", "BeijingPM25Quality",
"BeijingPM10Quality", "Covid3Month", "LiveFuelMoistureContent",
"FloodModeling1", "FloodModeling2", "FloodModeling3",
"AppliancesEnergy", "BenzeneConcentration", "NewsHeadlineSentiment",
"NewsTitleSentiment", "IEEEPPG",
#"BIDMC32RR", "BIDMC32HR", "BIDMC32SpO2", "PPGDalia" # Cannot be downloaded
])
Monash_regression_list = get_Monash_regression_list()
regression_list = Monash_regression_list
TSR_datasets = regression_datasets = regression_list
len(Monash_regression_list)
#export
def get_Monash_regression_data(dsid, path='./data/Monash', on_disk=True, mode='c', Xdtype='float32', ydtype=None, split_data=True, force_download=False,
verbose=False, timeout=4):
dsid_list = [rd for rd in Monash_regression_list if rd.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a Monash dataset'
dsid = dsid_list[0]
full_tgt_dir = Path(path)/dsid
pv(f'Dataset: {dsid}', verbose)
if force_download or not all([os.path.isfile(f'{path}/{dsid}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):
if dsid == 'AppliancesEnergy': dset_id = 3902637
elif dsid == 'HouseholdPowerConsumption1': dset_id = 3902704
elif dsid == 'HouseholdPowerConsumption2': dset_id = 3902706
elif dsid == 'BenzeneConcentration': dset_id = 3902673
elif dsid == 'BeijingPM25Quality': dset_id = 3902671
elif dsid == 'BeijingPM10Quality': dset_id = 3902667
elif dsid == 'LiveFuelMoistureContent': dset_id = 3902716
elif dsid == 'FloodModeling1': dset_id = 3902694
elif dsid == 'FloodModeling2': dset_id = 3902696
elif dsid == 'FloodModeling3': dset_id = 3902698
elif dsid == 'AustraliaRainfall': dset_id = 3902654
elif dsid == 'PPGDalia': dset_id = 3902728
elif dsid == 'IEEEPPG': dset_id = 3902710
elif dsid == 'BIDMCRR' or dsid == 'BIDMC32RR': dset_id = 3902685
elif dsid == 'BIDMCHR' or dsid == 'BIDMC32HR': dset_id = 3902676
elif dsid == 'BIDMCSpO2' or dsid == 'BIDMC32SpO2': dset_id = 3902688
elif dsid == 'NewsHeadlineSentiment': dset_id = 3902718
elif dsid == 'NewsTitleSentiment': dset_id= 3902726
elif dsid == 'Covid3Month': dset_id = 3902690
for split in ['TRAIN', 'TEST']:
url = f"https://zenodo.org/record/{dset_id}/files/{dsid}_{split}.ts"
fname = Path(path)/f'{dsid}/{dsid}_{split}.ts'
pv('downloading data...', verbose)
try:
download_data(url, fname, c_key='archive', force_download=force_download, timeout=timeout)
except Exception as inst:
print(inst)
warnings.warn(f'Cannot download {dsid} dataset')
if split_data: return None, None, None, None
else: return None, None, None
pv('...download complete', verbose)
try:
if split == 'TRAIN':
X_train, y_train = _load_from_tsfile_to_dataframe2(fname)
X_train = check_X(X_train, coerce_to_numpy=True)
else:
X_valid, y_valid = _load_from_tsfile_to_dataframe2(fname)
X_valid = check_X(X_valid, coerce_to_numpy=True)
except Exception as inst:
print(inst)
warnings.warn(f'Cannot create numpy arrays for {dsid} dataset')
if split_data: return None, None, None, None
else: return None, None, None
np.save(f'{full_tgt_dir}/X_train.npy', X_train)
np.save(f'{full_tgt_dir}/y_train.npy', y_train)
np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)
np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)
np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))
np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))
del X_train, X_valid, y_train, y_valid
delete_all_in_dir(full_tgt_dir, exception='.npy')
pv('...numpy arrays correctly saved', verbose)
mmap_mode = mode if on_disk else None
X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)
y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)
X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)
y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)
if Xdtype is not None:
X_train = X_train.astype(Xdtype)
X_valid = X_valid.astype(Xdtype)
if ydtype is not None:
y_train = y_train.astype(ydtype)
y_valid = y_valid.astype(ydtype)
if split_data:
if verbose:
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_valid:', X_valid.shape)
print('y_valid:', y_valid.shape, '\n')
return X_train, y_train, X_valid, y_valid
else:
X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)
y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)
splits = get_predefined_splits(X_train, X_valid)
if verbose:
print('X :', X .shape)
print('y :', y .shape)
print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n')
return X, y, splits
get_regression_data = get_Monash_regression_data
dsid = "Covid3Month"
X_train, y_train, X_valid, y_valid = get_Monash_regression_data(dsid, on_disk=False, split_data=True, force_download=False)
X, y, splits = get_Monash_regression_data(dsid, on_disk=True, split_data=False, force_download=False, verbose=True)
if X_train is not None:
test_eq(X_train.shape, (140, 1, 84))
if X is not None:
test_eq(X.shape, (201, 1, 84))
#export
def get_forecasting_list():
return sorted([
"Sunspots", "Weather"
])
forecasting_time_series = get_forecasting_list()
#export
def get_forecasting_time_series(dsid, path='./data/forecasting/', force_download=False, verbose=True, **kwargs):
dsid_list = [fd for fd in forecasting_time_series if fd.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a forecasting dataset'
dsid = dsid_list[0]
if dsid == 'Weather': full_tgt_dir = Path(path)/f'{dsid}.csv.zip'
else: full_tgt_dir = Path(path)/f'{dsid}.csv'
pv(f'Dataset: {dsid}', verbose)
if dsid == 'Sunspots': url = "https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Sunspots.csv"
elif dsid == 'Weather': url = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip'
try:
pv("downloading data...", verbose)
if force_download:
try: os.remove(full_tgt_dir)
except OSError: pass
download_data(url, full_tgt_dir, force_download=force_download, **kwargs)
pv(f"...data downloaded. Path = {full_tgt_dir}", verbose)
if dsid == 'Sunspots':
df = pd.read_csv(full_tgt_dir, parse_dates=['Date'], index_col=['Date'])
return df['Monthly Mean Total Sunspot Number'].asfreq('1M').to_frame()
elif dsid == 'Weather':
# This code comes from a great Keras time-series tutorial notebook (https://www.tensorflow.org/tutorials/structured_data/time_series)
df = pd.read_csv(full_tgt_dir)
df = df[5::6] # slice [start:stop:step], starting from index 5 take every 6th record.
date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')
# replace erroneous wind-speed readings (the -9999.0 sentinel) with 0.0
wv = df['wv (m/s)']
bad_wv = wv == -9999.0
wv[bad_wv] = 0.0
max_wv = df['max. wv (m/s)']
bad_max_wv = max_wv == -9999.0
max_wv[bad_max_wv] = 0.0
wv = df.pop('wv (m/s)')
max_wv = df.pop('max. wv (m/s)')
# Convert to radians.
wd_rad = df.pop('wd (deg)')*np.pi / 180
# Calculate the wind x and y components.
df['Wx'] = wv*np.cos(wd_rad)
df['Wy'] = wv*np.sin(wd_rad)
# Calculate the max wind x and y components.
df['max Wx'] = max_wv*np.cos(wd_rad)
df['max Wy'] = max_wv*np.sin(wd_rad)
timestamp_s = date_time.map(datetime.timestamp)
day = 24*60*60
year = (365.2425)*day
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
df.reset_index(drop=True, inplace=True)
return df
else:
return full_tgt_dir
except Exception as inst:
print(inst)
warnings.warn(f"Cannot download {dsid} dataset")
return
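# Hedged sketch (not part of the original notebook): the Weather branch above turns time of day
# and time of year into sin/cos pairs so that cyclically close moments (e.g. 23:59 and 00:01)
# get similar feature values. The same idea in isolation, on a few assumed timestamps in seconds:
_day = 24 * 60 * 60
_ts_s = np.array([0., _day / 4, _day / 2]) # midnight, 6 am, noon
_day_sin = np.sin(_ts_s * (2 * np.pi / _day)) # approx. [0, 1, 0]
_day_cos = np.cos(_ts_s * (2 * np.pi / _day)) # approx. [1, 0, -1]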
ts = get_forecasting_time_series("sunspots", force_download=False)
test_eq(len(ts), 3235)
ts
ts = get_forecasting_time_series("weather", force_download=False)
if ts is not None:
test_eq(len(ts), 70091)
print(ts)
# export
Monash_forecasting_list = ['m1_yearly_dataset',
'm1_quarterly_dataset',
'm1_monthly_dataset',
'm3_yearly_dataset',
'm3_quarterly_dataset',
'm3_monthly_dataset',
'm3_other_dataset',
'm4_yearly_dataset',
'm4_quarterly_dataset',
'm4_monthly_dataset',
'm4_weekly_dataset',
'm4_daily_dataset',
'm4_hourly_dataset',
'tourism_yearly_dataset',
'tourism_quarterly_dataset',
'tourism_monthly_dataset',
'nn5_daily_dataset_with_missing_values',
'nn5_daily_dataset_without_missing_values',
'nn5_weekly_dataset',
'cif_2016_dataset',
'kaggle_web_traffic_dataset_with_missing_values',
'kaggle_web_traffic_dataset_without_missing_values',
'kaggle_web_traffic_weekly_dataset',
'solar_10_minutes_dataset',
'solar_weekly_dataset',
'electricity_hourly_dataset',
'electricity_weekly_dataset',
'london_smart_meters_dataset_with_missing_values',
'london_smart_meters_dataset_without_missing_values',
'wind_farms_minutely_dataset_with_missing_values',
'wind_farms_minutely_dataset_without_missing_values',
'car_parts_dataset_with_missing_values',
'car_parts_dataset_without_missing_values',
'dominick_dataset',
'fred_md_dataset',
'traffic_hourly_dataset',
'traffic_weekly_dataset',
'pedestrian_counts_dataset',
'hospital_dataset',
'covid_deaths_dataset',
'kdd_cup_2018_dataset_with_missing_values',
'kdd_cup_2018_dataset_without_missing_values',
'weather_dataset',
'sunspot_dataset_with_missing_values',
'sunspot_dataset_without_missing_values',
'saugeenday_dataset',
'us_births_dataset',
'elecdemand_dataset',
'solar_4_seconds_dataset',
'wind_4_seconds_dataset',
'Sunspots', 'Weather']
forecasting_list = Monash_forecasting_list
# export
## Original code available at: https://github.com/rakshitha123/TSForecasting
# This repository contains the implementations related to the experiments of a set of publicly available datasets that are used in
# the time series forecasting research space.
# The benchmark datasets are available at: https://zenodo.org/communities/forecasting. For more details, please refer to our website:
# https://forecastingdata.org/ and paper: https://arxiv.org/abs/2105.06643.
# Citation:
# @misc{godahewa2021monash,
# author="Godahewa, Rakshitha and Bergmeir, Christoph and Webb, Geoffrey I. and Hyndman, Rob J. and Montero-Manso, Pablo",
# title="Monash Time Series Forecasting Archive",
# howpublished ="\url{https://arxiv.org/abs/2105.06643}",
# year="2021"
# }
# Converts the contents in a .tsf file into a dataframe and returns it along with other meta-data of the dataset: frequency, horizon, whether the dataset contains missing values and whether the series have equal lengths
#
# Parameters
# full_file_path_and_name - complete .tsf file path
# replace_missing_vals_with - a term to indicate the missing values in series in the returning dataframe
# value_column_name - Any name that is preferred to have as the name of the column containing series values in the returning dataframe
def convert_tsf_to_dataframe(full_file_path_and_name, replace_missing_vals_with = 'NaN', value_column_name = "series_value"):
col_names = []
col_types = []
all_data = {}
line_count = 0
frequency = None
forecast_horizon = None
contain_missing_values = None
contain_equal_length = None
found_data_tag = False
found_data_section = False
started_reading_data_section = False
with open(full_file_path_and_name, 'r', encoding='cp1252') as file:
for line in file:
# Strip white space from start/end of line
line = line.strip()
if line:
if line.startswith("@"): # Read meta-data
if not line.startswith("@data"):
line_content = line.split(" ")
if line.startswith("@attribute"):
if (len(line_content) != 3): # Attributes have both name and type
raise TsFileParseException("Invalid meta-data specification.")
col_names.append(line_content[1])
col_types.append(line_content[2])
else:
if len(line_content) != 2: # Other meta-data have only values
raise TsFileParseException("Invalid meta-data specification.")
if line.startswith("@frequency"):
frequency = line_content[1]
elif line.startswith("@horizon"):
forecast_horizon = int(line_content[1])
elif line.startswith("@missing"):
contain_missing_values = bool(distutils.util.strtobool(line_content[1]))
elif line.startswith("@equallength"):
contain_equal_length = bool(distutils.util.strtobool(line_content[1]))
else:
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section. Attribute section must come before data.")
found_data_tag = True
elif not line.startswith("#"):
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section. Attribute section must come before data.")
elif not found_data_tag:
raise TsFileParseException("Missing @data tag.")
else:
if not started_reading_data_section:
started_reading_data_section = True
found_data_section = True
all_series = []
for col in col_names:
all_data[col] = []
full_info = line.split(":")
if len(full_info) != (len(col_names) + 1):
raise TsFileParseException("Missing attributes/values in series.")
series = full_info[len(full_info) - 1]
series = series.split(",")
if(len(series) == 0):
raise TsFileParseException("A given series should contains a set of comma separated numeric values. At least one numeric value should be there in a series. Missing values should be indicated with ? symbol")
numeric_series = []
for val in series:
if val == "?":
numeric_series.append(replace_missing_vals_with)
else:
numeric_series.append(float(val))
if (numeric_series.count(replace_missing_vals_with) == len(numeric_series)):
raise TsFileParseException("All series values are missing. A given series should contains a set of comma separated numeric values. At least one numeric value should be there in a series.")
all_series.append(pd.Series(numeric_series).array)
for i in range(len(col_names)):
att_val = None
if col_types[i] == "numeric":
att_val = int(full_info[i])
elif col_types[i] == "string":
att_val = str(full_info[i])
elif col_types[i] == "date":
att_val = datetime.strptime(full_info[i], '%Y-%m-%d %H-%M-%S')
else:
raise TsFileParseException("Invalid attribute type.") # Currently, the code supports only numeric, string and date types. Extend this as required.
if att_val is None:
raise TsFileParseException("Invalid attribute value.")
else:
all_data[col_names[i]].append(att_val)
line_count = line_count + 1
if line_count == 0:
raise TsFileParseException("Empty file.")
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section.")
if not found_data_section:
raise TsFileParseException("Missing series information under data section.")
all_data[value_column_name] = all_series
loaded_data = pd.DataFrame(all_data)
return loaded_data, frequency, forecast_horizon, contain_missing_values, contain_equal_length
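# Hedged usage sketch (not part of the original code); it assumes a .tsf file has already been
# downloaded locally, e.g. by get_Monash_forecasting_data defined below, so it is left commented out:
# data, freq, horizon, has_missing, equal_length = convert_tsf_to_dataframe(
#     './data/forecasting/m1_yearly_dataset.tsf')
# # 'data' holds one row per series, with the raw values stored as an array in the 'series_value' column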
# export
def get_Monash_forecasting_data(dsid, path='./data/forecasting/', force_download=False, remove_from_disk=False, verbose=True):
pv(f'Dataset: {dsid}', verbose)
dsid = dsid.lower()
assert dsid in Monash_forecasting_list, f'{dsid} not available in Monash_forecasting_list'
if dsid == 'm1_yearly_dataset': url = 'https://zenodo.org/record/4656193/files/m1_yearly_dataset.zip'
elif dsid == 'm1_quarterly_dataset': url = 'https://zenodo.org/record/4656154/files/m1_quarterly_dataset.zip'
elif dsid == 'm1_monthly_dataset': url = 'https://zenodo.org/record/4656159/files/m1_monthly_dataset.zip'
elif dsid == 'm3_yearly_dataset': url = 'https://zenodo.org/record/4656222/files/m3_yearly_dataset.zip'
elif dsid == 'm3_quarterly_dataset': url = 'https://zenodo.org/record/4656262/files/m3_quarterly_dataset.zip'
elif dsid == 'm3_monthly_dataset': url = 'https://zenodo.org/record/4656298/files/m3_monthly_dataset.zip'
elif dsid == 'm3_other_dataset': url = 'https://zenodo.org/record/4656335/files/m3_other_dataset.zip'
elif dsid == 'm4_yearly_dataset': url = 'https://zenodo.org/record/4656379/files/m4_yearly_dataset.zip'
elif dsid == 'm4_quarterly_dataset': url = 'https://zenodo.org/record/4656410/files/m4_quarterly_dataset.zip'
elif dsid == 'm4_monthly_dataset': url = 'https://zenodo.org/record/4656480/files/m4_monthly_dataset.zip'
elif dsid == 'm4_weekly_dataset': url = 'https://zenodo.org/record/4656522/files/m4_weekly_dataset.zip'
elif dsid == 'm4_daily_dataset': url = 'https://zenodo.org/record/4656548/files/m4_daily_dataset.zip'
elif dsid == 'm4_hourly_dataset': url = 'https://zenodo.org/record/4656589/files/m4_hourly_dataset.zip'
elif dsid == 'tourism_yearly_dataset': url = 'https://zenodo.org/record/4656103/files/tourism_yearly_dataset.zip'
elif dsid == 'tourism_quarterly_dataset': url = 'https://zenodo.org/record/4656093/files/tourism_quarterly_dataset.zip'
elif dsid == 'tourism_monthly_dataset': url = 'https://zenodo.org/record/4656096/files/tourism_monthly_dataset.zip'
elif dsid == 'nn5_daily_dataset_with_missing_values': url = 'https://zenodo.org/record/4656110/files/nn5_daily_dataset_with_missing_values.zip'
elif dsid == 'nn5_daily_dataset_without_missing_values': url = 'https://zenodo.org/record/4656117/files/nn5_daily_dataset_without_missing_values.zip'
elif dsid == 'nn5_weekly_dataset': url = 'https://zenodo.org/record/4656125/files/nn5_weekly_dataset.zip'
elif dsid == 'cif_2016_dataset': url = 'https://zenodo.org/record/4656042/files/cif_2016_dataset.zip'
elif dsid == 'kaggle_web_traffic_dataset_with_missing_values': url = 'https://zenodo.org/record/4656080/files/kaggle_web_traffic_dataset_with_missing_values.zip'
elif dsid == 'kaggle_web_traffic_dataset_without_missing_values': url = 'https://zenodo.org/record/4656075/files/kaggle_web_traffic_dataset_without_missing_values.zip'
elif dsid == 'kaggle_web_traffic_weekly_dataset': url = 'https://zenodo.org/record/4656664/files/kaggle_web_traffic_weekly_dataset.zip'
elif dsid == 'solar_10_minutes_dataset': url = 'https://zenodo.org/record/4656144/files/solar_10_minutes_dataset.zip'
elif dsid == 'solar_weekly_dataset': url = 'https://zenodo.org/record/4656151/files/solar_weekly_dataset.zip'
elif dsid == 'electricity_hourly_dataset': url = 'https://zenodo.org/record/4656140/files/electricity_hourly_dataset.zip'
elif dsid == 'electricity_weekly_dataset': url = 'https://zenodo.org/record/4656141/files/electricity_weekly_dataset.zip'
elif dsid == 'london_smart_meters_dataset_with_missing_values': url = 'https://zenodo.org/record/4656072/files/london_smart_meters_dataset_with_missing_values.zip'
elif dsid == 'london_smart_meters_dataset_without_missing_values': url = 'https://zenodo.org/record/4656091/files/london_smart_meters_dataset_without_missing_values.zip'
elif dsid == 'wind_farms_minutely_dataset_with_missing_values': url = 'https://zenodo.org/record/4654909/files/wind_farms_minutely_dataset_with_missing_values.zip'
elif dsid == 'wind_farms_minutely_dataset_without_missing_values': url = 'https://zenodo.org/record/4654858/files/wind_farms_minutely_dataset_without_missing_values.zip'
elif dsid == 'car_parts_dataset_with_missing_values': url = 'https://zenodo.org/record/4656022/files/car_parts_dataset_with_missing_values.zip'
elif dsid == 'car_parts_dataset_without_missing_values': url = 'https://zenodo.org/record/4656021/files/car_parts_dataset_without_missing_values.zip'
elif dsid == 'dominick_dataset': url = 'https://zenodo.org/record/4654802/files/dominick_dataset.zip'
elif dsid == 'fred_md_dataset': url = 'https://zenodo.org/record/4654833/files/fred_md_dataset.zip'
elif dsid == 'traffic_hourly_dataset': url = 'https://zenodo.org/record/4656132/files/traffic_hourly_dataset.zip'
elif dsid == 'traffic_weekly_dataset': url = 'https://zenodo.org/record/4656135/files/traffic_weekly_dataset.zip'
elif dsid == 'pedestrian_counts_dataset': url = 'https://zenodo.org/record/4656626/files/pedestrian_counts_dataset.zip'
elif dsid == 'hospital_dataset': url = 'https://zenodo.org/record/4656014/files/hospital_dataset.zip'
elif dsid == 'covid_deaths_dataset': url = 'https://zenodo.org/record/4656009/files/covid_deaths_dataset.zip'
elif dsid == 'kdd_cup_2018_dataset_with_missing_values': url = 'https://zenodo.org/record/4656719/files/kdd_cup_2018_dataset_with_missing_values.zip'
elif dsid == 'kdd_cup_2018_dataset_without_missing_values': url = 'https://zenodo.org/record/4656756/files/kdd_cup_2018_dataset_without_missing_values.zip'
elif dsid == 'weather_dataset': url = 'https://zenodo.org/record/4654822/files/weather_dataset.zip'
elif dsid == 'sunspot_dataset_with_missing_values': url = 'https://zenodo.org/record/4654773/files/sunspot_dataset_with_missing_values.zip'
elif dsid == 'sunspot_dataset_without_missing_values': url = 'https://zenodo.org/record/4654722/files/sunspot_dataset_without_missing_values.zip'
elif dsid == 'saugeenday_dataset': url = 'https://zenodo.org/record/4656058/files/saugeenday_dataset.zip'
elif dsid == 'us_births_dataset': url = 'https://zenodo.org/record/4656049/files/us_births_dataset.zip'
elif dsid == 'elecdemand_dataset': url = 'https://zenodo.org/record/4656069/files/elecdemand_dataset.zip'
elif dsid == 'solar_4_seconds_dataset': url = 'https://zenodo.org/record/4656027/files/solar_4_seconds_dataset.zip'
elif dsid == 'wind_4_seconds_dataset': url = 'https://zenodo.org/record/4656032/files/wind_4_seconds_dataset.zip'
path = Path(path)
full_path = path/f'{dsid}.tsf'
if not full_path.exists() or force_download:
try:
decompress_from_url(url, target_dir=path, verbose=verbose)
except Exception as inst:
print(inst)
pv("converting dataframe to numpy array...", verbose)
data, frequency, forecast_horizon, contain_missing_values, contain_equal_length = convert_tsf_to_dataframe(full_path)
X = to3d(stack_pad(data['series_value']))
pv("...dataframe converted to numpy array", verbose)
pv(f'\nX.shape: {X.shape}', verbose)
pv(f'freq: {frequency}', verbose)
pv(f'forecast_horizon: {forecast_horizon}', verbose)
pv(f'contain_missing_values: {contain_missing_values}', verbose)
pv(f'contain_equal_length: {contain_equal_length}', verbose=verbose)
if remove_from_disk: os.remove(full_path)
return X
get_forecasting_data = get_Monash_forecasting_data
dsid = 'm1_yearly_dataset'
X = get_Monash_forecasting_data(dsid, force_download=False)
if X is not None:
test_eq(X.shape, (181, 1, 58))
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
External data
> Helper functions used to download and extract common time series datasets.
###Code
#export
from tqdm import tqdm
import zipfile
import tempfile
try: from urllib import urlretrieve
except ImportError: from urllib.request import urlretrieve
import shutil
import distutils
from sktime.utils.data_io import load_from_tsfile_to_dataframe as ts2df
from sktime.utils.validation.panel import check_X
from sktime.utils.data_io import TsFileParseException
from tsai.imports import *
from tsai.utils import *
from tsai.data.validation import *
#export
def decompress_from_url(url, target_dir=None, verbose=False):
# Download
try:
pv("downloading data...", verbose)
fname = os.path.basename(url)
tmpdir = tempfile.mkdtemp()
tmpfile = os.path.join(tmpdir, fname)
urlretrieve(url, tmpfile)
pv("...data downloaded", verbose)
# Decompress
try:
pv("decompressing data...", verbose)
if not os.path.exists(target_dir): os.makedirs(target_dir)
shutil.unpack_archive(tmpfile, target_dir)
shutil.rmtree(tmpdir)
pv("...data decompressed", verbose)
return target_dir
except:
shutil.rmtree(tmpdir)
if verbose: sys.stderr.write("Could not decompress file, aborting.\n")
except:
shutil.rmtree(tmpdir)
if verbose:
sys.stderr.write("Could not download url. Please, check url.\n")
#export
def download_data(url, fname=None, c_key='archive', force_download=False, timeout=4, verbose=False):
"Download `url` to `fname`."
from fastai.data.external import URLs
from fastdownload import download_url
fname = Path(fname or URLs.path(url, c_key=c_key))
fname.parent.mkdir(parents=True, exist_ok=True)
if not fname.exists() or force_download: download_url(url, dest=fname, timeout=timeout, show_progress=verbose)
return fname
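# Hedged example (hypothetical URL and destination): if the destination file already exists and
# force_download is False, download_data returns its path without downloading anything again.
# download_data('https://example.com/some_dataset.zip', fname='./data/some_dataset.zip', verbose=True)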
# export
def get_UCR_univariate_list():
return [
'ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY',
'AllGestureWiimoteZ', 'ArrowHead', 'Beef', 'BeetleFly', 'BirdChicken',
'BME', 'Car', 'CBF', 'Chinatown', 'ChlorineConcentration',
'CinCECGTorso', 'Coffee', 'Computers', 'CricketX', 'CricketY',
'CricketZ', 'Crop', 'DiatomSizeReduction',
'DistalPhalanxOutlineAgeGroup', 'DistalPhalanxOutlineCorrect',
'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame',
'DodgerLoopWeekend', 'Earthquakes', 'ECG200', 'ECG5000', 'ECGFiveDays',
'ElectricDevices', 'EOGHorizontalSignal', 'EOGVerticalSignal',
'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords',
'Fish', 'FordA', 'FordB', 'FreezerRegularTrain', 'FreezerSmallTrain',
'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3',
'GesturePebbleZ1', 'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan',
'GunPointMaleVersusFemale', 'GunPointOldVersusYoung', 'Ham',
'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate',
'InsectEPGRegularTrain', 'InsectEPGSmallTrain', 'InsectWingbeatSound',
'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2',
'Lightning7', 'Mallat', 'Meat', 'MedicalImages', 'MelbournePedestrian',
'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect',
'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain',
'MoteStrain', 'NonInvasiveFetalECGThorax1',
'NonInvasiveFetalECGThorax2', 'OliveOil', 'OSULeaf',
'PhalangesOutlinesCorrect', 'Phoneme', 'PickupGestureWiimoteZ',
'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'PLAID', 'Plane',
'PowerCons', 'ProximalPhalanxOutlineAgeGroup',
'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW',
'RefrigerationDevices', 'Rock', 'ScreenType', 'SemgHandGenderCh2',
'SemgHandMovementCh2', 'SemgHandSubjectCh2', 'ShakeGestureWiimoteZ',
'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace',
'SonyAIBORobotSurface1', 'SonyAIBORobotSurface2', 'StarLightCurves',
'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl',
'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG',
'TwoPatterns', 'UMD', 'UWaveGestureLibraryAll', 'UWaveGestureLibraryX',
'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine',
'WordSynonyms', 'Worms', 'WormsTwoClass', 'Yoga'
]
test_eq(len(get_UCR_univariate_list()), 128)
UTSC_datasets = get_UCR_univariate_list()
UCR_univariate_list = get_UCR_univariate_list()
#export
def get_UCR_multivariate_list():
return [
'ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions',
'CharacterTrajectories', 'Cricket', 'DuckDuckGeese', 'EigenWorms',
'Epilepsy', 'ERing', 'EthanolConcentration', 'FaceDetection',
'FingerMovements', 'HandMovementDirection', 'Handwriting', 'Heartbeat',
'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery',
'NATOPS', 'PEMS-SF', 'PenDigits', 'PhonemeSpectra', 'RacketSports',
'SelfRegulationSCP1', 'SelfRegulationSCP2', 'SpokenArabicDigits',
'StandWalkJump', 'UWaveGestureLibrary'
]
test_eq(len(get_UCR_multivariate_list()), 30)
MTSC_datasets = get_UCR_multivariate_list()
UCR_multivariate_list = get_UCR_multivariate_list()
UCR_list = sorted(UCR_univariate_list + UCR_multivariate_list)
classification_list = UCR_list
TSC_datasets = classification_datasets = UCR_list
len(UCR_list)
#export
def get_UCR_data(dsid, path='.', parent_dir='data/UCR', on_disk=True, mode='c', Xdtype='float32', ydtype=None, return_split=True, split_data=True,
force_download=False, verbose=False):
dsid_list = [ds for ds in UCR_list if ds.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a UCR dataset'
dsid = dsid_list[0]
return_split = return_split and split_data # keep return_split for compatibility. It will be replaced by split_data
if dsid in ['InsectWingbeat']:
warnings.warn(f'Be aware that download of the {dsid} dataset is very slow!')
pv(f'Dataset: {dsid}', verbose)
full_parent_dir = Path(path)/parent_dir
full_tgt_dir = full_parent_dir/dsid
# if not os.path.exists(full_tgt_dir): os.makedirs(full_tgt_dir)
full_tgt_dir.parent.mkdir(parents=True, exist_ok=True)
if force_download or not all([os.path.isfile(f'{full_tgt_dir}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):
# Option A
src_website = 'http://www.timeseriesclassification.com/Downloads'
decompress_from_url(f'{src_website}/{dsid}.zip', target_dir=full_tgt_dir, verbose=verbose)
if dsid == 'DuckDuckGeese':
with zipfile.ZipFile(Path(f'{full_parent_dir}/DuckDuckGeese/DuckDuckGeese_ts.zip'), 'r') as zip_ref:
zip_ref.extractall(Path(parent_dir))
if not os.path.exists(full_tgt_dir/f'{dsid}_TRAIN.ts') or not os.path.exists(full_tgt_dir/f'{dsid}_TEST.ts') or \
Path(full_tgt_dir/f'{dsid}_TRAIN.ts').stat().st_size == 0 or Path(full_tgt_dir/f'{dsid}_TEST.ts').stat().st_size == 0:
print('It has not been possible to download the required files')
if return_split:
return None, None, None, None
else:
return None, None, None
pv('loading ts files to dataframe...', verbose)
X_train_df, y_train = ts2df(full_tgt_dir/f'{dsid}_TRAIN.ts')
X_valid_df, y_valid = ts2df(full_tgt_dir/f'{dsid}_TEST.ts')
pv('...ts files loaded', verbose)
pv('preparing numpy arrays...', verbose)
X_train_ = []
X_valid_ = []
for i in progress_bar(range(X_train_df.shape[-1]), display=verbose, leave=False):
X_train_.append(stack_pad(X_train_df[f'dim_{i}'])) # stack arrays even if they have different lengths
X_valid_.append(stack_pad(X_valid_df[f'dim_{i}'])) # stack arrays even if they have different lengths
X_train = np.transpose(np.stack(X_train_, axis=-1), (0, 2, 1))
X_valid = np.transpose(np.stack(X_valid_, axis=-1), (0, 2, 1))
X_train, X_valid = match_seq_len(X_train, X_valid)
np.save(f'{full_tgt_dir}/X_train.npy', X_train)
np.save(f'{full_tgt_dir}/y_train.npy', y_train)
np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)
np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)
np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))
np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))
del X_train, X_valid, y_train, y_valid
delete_all_in_dir(full_tgt_dir, exception='.npy')
pv('...numpy arrays correctly saved', verbose)
mmap_mode = mode if on_disk else None
X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)
y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)
X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)
y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)
if return_split:
if Xdtype is not None:
X_train = X_train.astype(Xdtype)
X_valid = X_valid.astype(Xdtype)
if ydtype is not None:
y_train = y_train.astype(ydtype)
y_valid = y_valid.astype(ydtype)
if verbose:
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_valid:', X_valid.shape)
print('y_valid:', y_valid.shape, '\n')
return X_train, y_train, X_valid, y_valid
else:
X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)
y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)
splits = get_predefined_splits(X_train, X_valid)
if Xdtype is not None:
X = X.astype(Xdtype)
if verbose:
print('X :', X .shape)
print('y :', y .shape)
print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n')
return X, y, splits
get_classification_data = get_UCR_data
from fastai.data.transforms import get_files
PATH = Path('.')
dsids = ['ECGFiveDays', 'AtrialFibrillation'] # univariate and multivariate
for dsid in dsids:
print(dsid)
tgt_dir = PATH/f'data/UCR/{dsid}'
if os.path.isdir(tgt_dir): shutil.rmtree(tgt_dir)
test_eq(len(get_files(tgt_dir)), 0) # no file left
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)
test_eq(len(get_files(tgt_dir, '.npy')), 6)
test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test that no other files/dirs are left behind
del X_train, y_train, X_valid, y_valid
start = time.time()
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)
elapsed = time.time() - start
test_eq(elapsed < 1, True)
test_eq(X_train.ndim, 3)
test_eq(y_train.ndim, 1)
test_eq(X_valid.ndim, 3)
test_eq(y_valid.ndim, 1)
test_eq(len(get_files(tgt_dir, '.npy')), 6)
test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test that no other files/dirs are left behind
test_eq(X_train.ndim, 3)
test_eq(y_train.ndim, 1)
test_eq(X_valid.ndim, 3)
test_eq(y_valid.ndim, 1)
test_eq(X_train.dtype, np.float32)
test_eq(X_train.__class__.__name__, 'memmap')
del X_train, y_train, X_valid, y_valid
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, on_disk=False)
test_eq(X_train.__class__.__name__, 'ndarray')
del X_train, y_train, X_valid, y_valid
X_train, y_train, X_valid, y_valid = get_UCR_data('natops')
dsid = 'natops'
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, verbose=True)
X, y, splits = get_UCR_data(dsid, split_data=False)
test_eq(X[splits[0]], X_train)
test_eq(y[splits[1]], y_valid)
test_eq(X[splits[0]], X_train)
test_eq(y[splits[1]], y_valid)
test_type(X, X_train)
test_type(y, y_train)
#export
def check_data(X, y=None, splits=None, show_plot=True):
try: X_is_nan = np.isnan(X).sum()
except: X_is_nan = 'could not be checked'
if X.ndim == 3:
shape = f'[{X.shape[0]} samples x {X.shape[1]} features x {X.shape[-1]} timesteps]'
print(f'X - shape: {shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')
else:
print(f'X - shape: {X.shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')
if X_is_nan:
warnings.warn('X contains nan values')
if y is not None:
y_shape = y.shape
y = y.ravel()
if isinstance(y[0], str):
n_classes = f'{len(np.unique(y))} ({len(y)//len(np.unique(y))} samples per class) {L(np.unique(y).tolist())}'
y_is_nan = 'nan' in [c.lower() for c in np.unique(y)]
print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} n_classes: {n_classes} isnan: {y_is_nan}')
else:
y_is_nan = np.isnan(y).sum()
print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} isnan: {y_is_nan}')
if y_is_nan:
warnings.warn('y contains nan values')
if splits is not None:
_splits = get_splits_len(splits)
overlap = check_splits_overlap(splits)
print(f'splits - n_splits: {len(_splits)} shape: {_splits} overlap: {overlap}')
if show_plot: plot_splits(splits)
dsid = 'ECGFiveDays'
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
check_data(X, y, splits)
check_data(X[:, 0], y, splits)
y = y.astype(np.float32)
check_data(X, y, splits)
y[:10] = np.nan
check_data(X[:, 0], y, splits)
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
splits = get_splits(y, 3)
check_data(X, y, splits)
check_data(X[:, 0], y, splits)
y[:5]= np.nan
check_data(X[:, 0], y, splits)
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
#export
# This code comes from https://github.com/ChangWeiTan/TSRegression. As of Jan 16th, 2021 there's no pip install available.
# The following code is adapted from the python package sktime to read .ts file.
class _TsFileParseException(Exception):
"""
Should be raised when parsing a .ts file and the format is incorrect.
"""
pass
def _load_from_tsfile_to_dataframe2(full_file_path_and_name, return_separate_X_and_y=True, replace_missing_vals_with='NaN'):
"""Loads data from a .ts file into a Pandas DataFrame.
Parameters
----------
full_file_path_and_name: str
The full pathname of the .ts file to read.
return_separate_X_and_y: bool
True if X and y values should be returned as a separate DataFrame (X) and a numpy array (y), False otherwise.
This is only relevant for data that has associated class or target values.
replace_missing_vals_with: str
The value that missing values in the text file should be replaced with prior to parsing.
Returns
-------
DataFrame, ndarray
If return_separate_X_and_y, a tuple containing a DataFrame with the relevant time-series and a numpy array with the corresponding class values.
DataFrame
If not return_separate_X_and_y, a single DataFrame containing all time-series and (if relevant) a column "class_vals" with the associated class values.
"""
# Initialize flags and variables used when parsing the file
metadata_started = False
data_started = False
has_problem_name_tag = False
has_timestamps_tag = False
has_univariate_tag = False
has_class_labels_tag = False
has_target_labels_tag = False
has_data_tag = False
previous_timestamp_was_float = None
previous_timestamp_was_int = None
previous_timestamp_was_timestamp = None
num_dimensions = None
is_first_case = True
instance_list = []
class_val_list = []
line_num = 0
# Parse the file
# print(full_file_path_and_name)
with open(full_file_path_and_name, 'r', encoding='utf-8') as file:
for line in tqdm(file):
# print(".", end='')
# Strip white space from start/end of line and change to lowercase for use below
line = line.strip().lower()
# Empty lines are valid at any point in a file
if line:
# Check if this line contains metadata
# Please note that even though metadata is stored in this function it is not currently published externally
if line.startswith("@problemname"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("problemname tag requires an associated value")
problem_name = line[len("@problemname") + 1:]
has_problem_name_tag = True
metadata_started = True
elif line.startswith("@timestamps"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException("timestamps tag requires an associated Boolean value")
elif tokens[1] == "true":
timestamps = True
elif tokens[1] == "false":
timestamps = False
else:
raise _TsFileParseException("invalid timestamps value")
has_timestamps_tag = True
metadata_started = True
elif line.startswith("@univariate"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException("univariate tag requires an associated Boolean value")
elif tokens[1] == "true":
univariate = True
elif tokens[1] == "false":
univariate = False
else:
raise _TsFileParseException("invalid univariate value")
has_univariate_tag = True
metadata_started = True
elif line.startswith("@classlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("classlabel tag requires an associated Boolean value")
if tokens[1] == "true":
class_labels = True
elif tokens[1] == "false":
class_labels = False
else:
raise _TsFileParseException("invalid classLabel value")
# Check if we have any associated class values
if token_len == 2 and class_labels:
raise _TsFileParseException("if the classlabel tag is true then class values must be supplied")
has_class_labels_tag = True
class_label_list = [token.strip() for token in tokens[2:]]
metadata_started = True
elif line.startswith("@targetlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("targetlabel tag requires an associated Boolean value")
if tokens[1] == "true":
target_labels = True
elif tokens[1] == "false":
target_labels = False
else:
raise _TsFileParseException("invalid targetLabel value")
has_target_labels_tag = True
class_val_list = []
metadata_started = True
# Check if this line contains the start of data
elif line.startswith("@data"):
if line != "@data":
raise _TsFileParseException("data tag should not have an associated value")
if data_started and not metadata_started:
raise _TsFileParseException("metadata must come before data")
else:
has_data_tag = True
data_started = True
# If the '@data' tag has been found then the metadata has been parsed and data can be loaded
elif data_started:
# Check that a full set of metadata has been provided
incomplete_regression_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_target_labels_tag or not has_data_tag
incomplete_classification_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_class_labels_tag or not has_data_tag
if incomplete_regression_meta_data and incomplete_classification_meta_data:
raise _TsFileParseException("a full set of metadata has not been provided before the data")
# Replace any missing values with the value specified
line = line.replace("?", replace_missing_vals_with)
# Check if we are dealing with data that has timestamps
if timestamps:
# We're dealing with timestamps so cannot just split line on ':' as timestamps may contain one
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
this_line_num_dimensions = 0
line_len = len(line)
char_num = 0
while char_num < line_len:
# Move through any spaces
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
# See if there is any more data to read in or if we should validate what has been read thus far
if char_num < line_len:
# See if we have an empty dimension (i.e. no values)
if line[char_num] == ":":
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series())
this_line_num_dimensions += 1
has_another_value = False
has_another_dimension = True
timestamps_for_dimension = []
values_for_dimension = []
char_num += 1
else:
# Check if we have reached a class label
if line[char_num] != "(" and target_labels:
class_val = line[char_num:].strip()
# if class_val not in class_val_list:
# raise _TsFileParseException(
# "the class value '" + class_val + "' on line " + str(
# line_num + 1) + " is not valid")
class_val_list.append(float(class_val))
char_num = line_len
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
else:
# Read in the data contained within the next tuple
if line[char_num] != "(" and not target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not start with a '('")
char_num += 1
tuple_data = ""
while char_num < line_len and line[char_num] != ")":
tuple_data += line[char_num]
char_num += 1
if char_num >= line_len or line[char_num] != ")":
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not end with a ')'")
# Read in any spaces immediately after the current tuple
char_num += 1
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
# Check if there is another value or dimension to process after this tuple
if char_num >= line_len:
has_another_value = False
has_another_dimension = False
elif line[char_num] == ",":
has_another_value = True
has_another_dimension = False
elif line[char_num] == ":":
has_another_value = False
has_another_dimension = True
char_num += 1
# Get the numeric value for the tuple by reading from the end of the tuple data backwards to the last comma
last_comma_index = tuple_data.rfind(',')
if last_comma_index == -1:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has no comma inside of it")
try:
value = tuple_data[last_comma_index + 1:]
value = float(value)
except ValueError:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that does not have a valid numeric value")
# Check the type of timestamp that we have
timestamp = tuple_data[0: last_comma_index]
try:
timestamp = int(timestamp)
timestamp_is_int = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_int = False
if not timestamp_is_int:
try:
timestamp = float(timestamp)
timestamp_is_float = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_float = False
if not timestamp_is_int and not timestamp_is_float:
try:
timestamp = timestamp.strip()
timestamp_is_timestamp = True
except ValueError:
timestamp_is_timestamp = False
# Make sure that the timestamps in the file (not just this dimension or case) are consistent
if not timestamp_is_timestamp and not timestamp_is_int and not timestamp_is_float:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has an invalid timestamp '" + timestamp + "'")
if previous_timestamp_was_float is not None and previous_timestamp_was_float and not timestamp_is_float:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_int is not None and previous_timestamp_was_int and not timestamp_is_int:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_timestamp is not None and previous_timestamp_was_timestamp and not timestamp_is_timestamp:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
# Store the values
timestamps_for_dimension += [timestamp]
values_for_dimension += [value]
# If this was our first tuple then we store the type of timestamp we had
if previous_timestamp_was_timestamp is None and timestamp_is_timestamp:
previous_timestamp_was_timestamp = True
previous_timestamp_was_int = False
previous_timestamp_was_float = False
if previous_timestamp_was_int is None and timestamp_is_int:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = True
previous_timestamp_was_float = False
if previous_timestamp_was_float is None and timestamp_is_float:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = False
previous_timestamp_was_float = True
# See if we should add the data for this dimension
if not has_another_value:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
if timestamp_is_timestamp:
timestamps_for_dimension = pd.DatetimeIndex(timestamps_for_dimension)
instance_list[this_line_num_dimensions].append(
pd.Series(index=timestamps_for_dimension, data=values_for_dimension))
this_line_num_dimensions += 1
timestamps_for_dimension = []
values_for_dimension = []
elif has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series(dtype=np.float32))
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and not has_another_dimension:
if num_dimensions is None:
num_dimensions = this_line_num_dimensions
if num_dimensions != this_line_num_dimensions:
raise _TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
# Check that we are not expecting any more data and, if not, store the data processed above
if has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series())
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and num_dimensions != this_line_num_dimensions:
raise _TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
# Check if we should have class values, and if so that they are contained in those listed in the metadata
if target_labels and len(class_val_list) == 0:
raise _TsFileParseException("the cases have no associated class values")
else:
dimensions = line.split(":")
# If first row then note the number of dimensions (that must be the same for all cases)
if is_first_case:
num_dimensions = len(dimensions)
if target_labels:
num_dimensions -= 1
for dim in range(0, num_dimensions):
instance_list.append([])
is_first_case = False
# See how many dimensions the case whose data is represented in this line has
this_line_num_dimensions = len(dimensions)
if target_labels:
this_line_num_dimensions -= 1
# All dimensions should be included for all series, even if they are empty
if this_line_num_dimensions != num_dimensions:
raise _TsFileParseException("inconsistent number of dimensions. Expecting " + str(
num_dimensions) + " but have read " + str(this_line_num_dimensions))
# Process the data for each dimension
for dim in range(0, num_dimensions):
dimension = dimensions[dim].strip()
if dimension:
data_series = dimension.split(",")
data_series = [float(i) for i in data_series]
instance_list[dim].append(pd.Series(data_series))
else:
instance_list[dim].append(pd.Series())
if target_labels:
class_val_list.append(float(dimensions[num_dimensions].strip()))
line_num += 1
# Check that the file was not empty
if line_num:
# Check that the file contained both metadata and data
complete_regression_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_target_labels_tag and has_data_tag
complete_classification_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_class_labels_tag and has_data_tag
if metadata_started and not complete_regression_meta_data and not complete_classification_meta_data:
raise _TsFileParseException("metadata incomplete")
elif metadata_started and not data_started:
raise _TsFileParseException("file contained metadata but no data")
elif metadata_started and data_started and len(instance_list) == 0:
raise _TsFileParseException("file contained metadata but no data")
# Create a DataFrame from the data parsed above
data = pd.DataFrame(dtype=np.float32)
for dim in range(0, num_dimensions):
data['dim_' + str(dim)] = instance_list[dim]
# Check if we should return any associated class labels separately
if target_labels:
if return_separate_X_and_y:
return data, np.asarray(class_val_list)
else:
data['class_vals'] = pd.Series(class_val_list)
return data
else:
return data
else:
raise _TsFileParseException("empty file")
#export
def get_Monash_regression_list():
return sorted([
"AustraliaRainfall", "HouseholdPowerConsumption1",
"HouseholdPowerConsumption2", "BeijingPM25Quality",
"BeijingPM10Quality", "Covid3Month", "LiveFuelMoistureContent",
"FloodModeling1", "FloodModeling2", "FloodModeling3",
"AppliancesEnergy", "BenzeneConcentration", "NewsHeadlineSentiment",
"NewsTitleSentiment", "IEEEPPG",
#"BIDMC32RR", "BIDMC32HR", "BIDMC32SpO2", "PPGDalia" # Cannot be downloaded
])
Monash_regression_list = get_Monash_regression_list()
regression_list = Monash_regression_list
TSR_datasets = regression_datasets = regression_list
len(Monash_regression_list)
#export
def get_Monash_regression_data(dsid, path='./data/Monash', on_disk=True, mode='c', Xdtype='float32', ydtype=None, split_data=True, force_download=False,
verbose=False, timeout=4):
dsid_list = [rd for rd in Monash_regression_list if rd.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a Monash dataset'
dsid = dsid_list[0]
full_tgt_dir = Path(path)/dsid
pv(f'Dataset: {dsid}', verbose)
if force_download or not all([os.path.isfile(f'{path}/{dsid}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):
if dsid == 'AppliancesEnergy': dset_id = 3902637
elif dsid == 'HouseholdPowerConsumption1': dset_id = 3902704
elif dsid == 'HouseholdPowerConsumption2': dset_id = 3902706
elif dsid == 'BenzeneConcentration': dset_id = 3902673
elif dsid == 'BeijingPM25Quality': dset_id = 3902671
elif dsid == 'BeijingPM10Quality': dset_id = 3902667
elif dsid == 'LiveFuelMoistureContent': dset_id = 3902716
elif dsid == 'FloodModeling1': dset_id = 3902694
elif dsid == 'FloodModeling2': dset_id = 3902696
elif dsid == 'FloodModeling3': dset_id = 3902698
elif dsid == 'AustraliaRainfall': dset_id = 3902654
elif dsid == 'PPGDalia': dset_id = 3902728
elif dsid == 'IEEEPPG': dset_id = 3902710
elif dsid == 'BIDMCRR' or dsid == 'BIDMC32RR': dset_id = 3902685
elif dsid == 'BIDMCHR' or dsid == 'BIDMC32HR': dset_id = 3902676
elif dsid == 'BIDMCSpO2' or dsid == 'BIDMC32SpO2': dset_id = 3902688
elif dsid == 'NewsHeadlineSentiment': dset_id = 3902718
elif dsid == 'NewsTitleSentiment': dset_id= 3902726
elif dsid == 'Covid3Month': dset_id = 3902690
for split in ['TRAIN', 'TEST']:
url = f"https://zenodo.org/record/{dset_id}/files/{dsid}_{split}.ts"
fname = Path(path)/f'{dsid}/{dsid}_{split}.ts'
pv('downloading data...', verbose)
try:
download_data(url, fname, c_key='archive', force_download=force_download, timeout=timeout)
except Exception as inst:
print(inst)
warnings.warn(f'Cannot download {dsid} dataset')
if split_data: return None, None, None, None
else: return None, None, None
pv('...download complete', verbose)
try:
if split == 'TRAIN':
X_train, y_train = _load_from_tsfile_to_dataframe2(fname)
X_train = check_X(X_train, coerce_to_numpy=True)
else:
X_valid, y_valid = _load_from_tsfile_to_dataframe2(fname)
X_valid = check_X(X_valid, coerce_to_numpy=True)
except Exception as inst:
print(inst)
warnings.warn(f'Cannot create numpy arrays for {dsid} dataset')
if split_data: return None, None, None, None
else: return None, None, None
np.save(f'{full_tgt_dir}/X_train.npy', X_train)
np.save(f'{full_tgt_dir}/y_train.npy', y_train)
np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)
np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)
np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))
np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))
del X_train, X_valid, y_train, y_valid
delete_all_in_dir(full_tgt_dir, exception='.npy')
pv('...numpy arrays correctly saved', verbose)
mmap_mode = mode if on_disk else None
X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)
y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)
X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)
y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)
if Xdtype is not None:
X_train = X_train.astype(Xdtype)
X_valid = X_valid.astype(Xdtype)
if ydtype is not None:
y_train = y_train.astype(ydtype)
y_valid = y_valid.astype(ydtype)
if split_data:
if verbose:
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_valid:', X_valid.shape)
print('y_valid:', y_valid.shape, '\n')
return X_train, y_train, X_valid, y_valid
else:
X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)
y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)
splits = get_predefined_splits(X_train, X_valid)
if verbose:
print('X :', X .shape)
print('y :', y .shape)
print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n')
return X, y, splits
get_regression_data = get_Monash_regression_data
dsid = "Covid3Month"
X_train, y_train, X_valid, y_valid = get_Monash_regression_data(dsid, on_disk=False, split_data=True, force_download=False)
X, y, splits = get_Monash_regression_data(dsid, on_disk=True, split_data=False, force_download=False, verbose=True)
if X_train is not None:
test_eq(X_train.shape, (140, 1, 84))
if X is not None:
test_eq(X.shape, (201, 1, 84))
#export
def get_forecasting_list():
return sorted([
"Sunspots", "Weather"
])
forecasting_time_series = get_forecasting_list()
#export
def get_forecasting_time_series(dsid, path='./data/forecasting/', force_download=False, verbose=True, **kwargs):
dsid_list = [fd for fd in forecasting_time_series if fd.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a forecasting dataset'
dsid = dsid_list[0]
if dsid == 'Weather': full_tgt_dir = Path(path)/f'{dsid}.csv.zip'
else: full_tgt_dir = Path(path)/f'{dsid}.csv'
pv(f'Dataset: {dsid}', verbose)
if dsid == 'Sunspots': url = "https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Sunspots.csv"
elif dsid == 'Weather': url = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip'
try:
pv("downloading data...", verbose)
if force_download:
try: os.remove(full_tgt_dir)
except OSError: pass
download_data(url, full_tgt_dir, force_download=force_download, **kwargs)
pv(f"...data downloaded. Path = {full_tgt_dir}", verbose)
if dsid == 'Sunspots':
df = pd.read_csv(full_tgt_dir, parse_dates=['Date'], index_col=['Date'])
return df['Monthly Mean Total Sunspot Number'].asfreq('1M').to_frame()
elif dsid == 'Weather':
# This code comes from a great Keras time-series tutorial notebook (https://www.tensorflow.org/tutorials/structured_data/time_series)
df = pd.read_csv(full_tgt_dir)
df = df[5::6] # slice [start:stop:step], starting from index 5 take every 6th record.
date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')
# remove error (negative wind)
wv = df['wv (m/s)']
bad_wv = wv == -9999.0
wv[bad_wv] = 0.0
max_wv = df['max. wv (m/s)']
bad_max_wv = max_wv == -9999.0
max_wv[bad_max_wv] = 0.0
wv = df.pop('wv (m/s)')
max_wv = df.pop('max. wv (m/s)')
# Convert to radians.
wd_rad = df.pop('wd (deg)')*np.pi / 180
# Calculate the wind x and y components.
df['Wx'] = wv*np.cos(wd_rad)
df['Wy'] = wv*np.sin(wd_rad)
# Calculate the max wind x and y components.
df['max Wx'] = max_wv*np.cos(wd_rad)
df['max Wy'] = max_wv*np.sin(wd_rad)
timestamp_s = date_time.map(datetime.timestamp)
day = 24*60*60
year = (365.2425)*day
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
df.reset_index(drop=True, inplace=True)
return df
else:
return full_tgt_dir
except Exception as inst:
print(inst)
warnings.warn(f"Cannot download {dsid} dataset")
return
ts = get_forecasting_time_series("sunspots", force_download=False)
test_eq(len(ts), 3235)
ts
ts = get_forecasting_time_series("weather", force_download=False)
if ts is not None:
test_eq(len(ts), 70091)
print(ts)
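# A quick sanity check on the features engineered above for the Weather dataset (a
# sketch that only runs if the download succeeded): the cyclical Day/Year encodings
# should stay within [-1, 1], and the minimum of the wind x/y components gives a quick
# view of whether the -9999.0 sentinel for bad readings was removed as intended.
if ts is not None:
    for _col in ['Day sin', 'Day cos', 'Year sin', 'Year cos']:
        assert ts[_col].abs().max() <= 1 + 1e-9, f'{_col} out of expected range'
    print('min wind components:', ts[['Wx', 'Wy', 'max Wx', 'max Wy']].min().min())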
# export
Monash_forecasting_list = ['m1_yearly_dataset',
'm1_quarterly_dataset',
'm1_monthly_dataset',
'm3_yearly_dataset',
'm3_quarterly_dataset',
'm3_monthly_dataset',
'm3_other_dataset',
'm4_yearly_dataset',
'm4_quarterly_dataset',
'm4_monthly_dataset',
'm4_weekly_dataset',
'm4_daily_dataset',
'm4_hourly_dataset',
'tourism_yearly_dataset',
'tourism_quarterly_dataset',
'tourism_monthly_dataset',
'nn5_daily_dataset_with_missing_values',
'nn5_daily_dataset_without_missing_values',
'nn5_weekly_dataset',
'cif_2016_dataset',
'kaggle_web_traffic_dataset_with_missing_values',
'kaggle_web_traffic_dataset_without_missing_values',
'kaggle_web_traffic_weekly_dataset',
'solar_10_minutes_dataset',
'solar_weekly_dataset',
'electricity_hourly_dataset',
'electricity_weekly_dataset',
'london_smart_meters_dataset_with_missing_values',
'london_smart_meters_dataset_without_missing_values',
'wind_farms_minutely_dataset_with_missing_values',
'wind_farms_minutely_dataset_without_missing_values',
'car_parts_dataset_with_missing_values',
'car_parts_dataset_without_missing_values',
'dominick_dataset',
'fred_md_dataset',
'traffic_hourly_dataset',
'traffic_weekly_dataset',
'pedestrian_counts_dataset',
'hospital_dataset',
'covid_deaths_dataset',
'kdd_cup_2018_dataset_with_missing_values',
'kdd_cup_2018_dataset_without_missing_values',
'weather_dataset',
'sunspot_dataset_with_missing_values',
'sunspot_dataset_without_missing_values',
'saugeenday_dataset',
'us_births_dataset',
'elecdemand_dataset',
'solar_4_seconds_dataset',
'wind_4_seconds_dataset',
'Sunspots', 'Weather']
forecasting_list = Monash_forecasting_list
# export
## Original code available at: https://github.com/rakshitha123/TSForecasting
# This repository contains the implementations related to the experiments of a set of publicly available datasets that are used in
# the time series forecasting research space.
# The benchmark datasets are available at: https://zenodo.org/communities/forecasting. For more details, please refer to our website:
# https://forecastingdata.org/ and paper: https://arxiv.org/abs/2105.06643.
# Citation:
# @misc{godahewa2021monash,
# author="Godahewa, Rakshitha and Bergmeir, Christoph and Webb, Geoffrey I. and Hyndman, Rob J. and Montero-Manso, Pablo",
# title="Monash Time Series Forecasting Archive",
# howpublished ="\url{https://arxiv.org/abs/2105.06643}",
# year="2021"
# }
# Converts the contents in a .tsf file into a dataframe and returns it along with other meta-data of the dataset: frequency, horizon, whether the dataset contains missing values and whether the series have equal lengths
#
# Parameters
# full_file_path_and_name - complete .tsf file path
# replace_missing_vals_with - the value used to mark missing entries of a series in the returned dataframe
# value_column_name - the preferred name for the column that will hold the series values in the returned dataframe
def convert_tsf_to_dataframe(full_file_path_and_name, replace_missing_vals_with = 'NaN', value_column_name = "series_value"):
col_names = []
col_types = []
all_data = {}
line_count = 0
frequency = None
forecast_horizon = None
contain_missing_values = None
contain_equal_length = None
found_data_tag = False
found_data_section = False
started_reading_data_section = False
with open(full_file_path_and_name, 'r', encoding='cp1252') as file:
for line in file:
# Strip white space from start/end of line
line = line.strip()
if line:
if line.startswith("@"): # Read meta-data
if not line.startswith("@data"):
line_content = line.split(" ")
if line.startswith("@attribute"):
if (len(line_content) != 3): # Attributes have both name and type
raise TsFileParseException("Invalid meta-data specification.")
col_names.append(line_content[1])
col_types.append(line_content[2])
else:
if len(line_content) != 2: # Other meta-data have only values
raise TsFileParseException("Invalid meta-data specification.")
if line.startswith("@frequency"):
frequency = line_content[1]
elif line.startswith("@horizon"):
forecast_horizon = int(line_content[1])
elif line.startswith("@missing"):
contain_missing_values = bool(distutils.util.strtobool(line_content[1]))
elif line.startswith("@equallength"):
contain_equal_length = bool(distutils.util.strtobool(line_content[1]))
else:
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section. Attribute section must come before data.")
found_data_tag = True
elif not line.startswith("#"):
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section. Attribute section must come before data.")
elif not found_data_tag:
raise TsFileParseException("Missing @data tag.")
else:
if not started_reading_data_section:
started_reading_data_section = True
found_data_section = True
all_series = []
for col in col_names:
all_data[col] = []
full_info = line.split(":")
if len(full_info) != (len(col_names) + 1):
raise TsFileParseException("Missing attributes/values in series.")
series = full_info[len(full_info) - 1]
series = series.split(",")
if(len(series) == 0):
raise TsFileParseException("A given series should contains a set of comma separated numeric values. At least one numeric value should be there in a series. Missing values should be indicated with ? symbol")
numeric_series = []
for val in series:
if val == "?":
numeric_series.append(replace_missing_vals_with)
else:
numeric_series.append(float(val))
if (numeric_series.count(replace_missing_vals_with) == len(numeric_series)):
raise TsFileParseException("All series values are missing. A given series should contains a set of comma separated numeric values. At least one numeric value should be there in a series.")
all_series.append(pd.Series(numeric_series).array)
for i in range(len(col_names)):
att_val = None
if col_types[i] == "numeric":
att_val = int(full_info[i])
elif col_types[i] == "string":
att_val = str(full_info[i])
elif col_types[i] == "date":
att_val = datetime.datetime.strptime(full_info[i], '%Y-%m-%d %H-%M-%S')
else:
raise TsFileParseException("Invalid attribute type.") # Currently, the code supports only numeric, string and date types. Extend this as required.
if(att_val == None):
raise TsFileParseException("Invalid attribute value.")
else:
all_data[col_names[i]].append(att_val)
line_count = line_count + 1
if line_count == 0:
raise TsFileParseException("Empty file.")
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section.")
if not found_data_section:
raise TsFileParseException("Missing series information under data section.")
all_data[value_column_name] = all_series
loaded_data = pd.DataFrame(all_data)
return loaded_data, frequency, forecast_horizon, contain_missing_values, contain_equal_length
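# A minimal usage sketch for convert_tsf_to_dataframe (the path is an assumption: in
# practice get_Monash_forecasting_data below downloads the .tsf file and calls this
# function for you). Each row of the returned dataframe holds one full series in the
# `series_value` column, next to the attribute columns declared in the file header.
_example_tsf = Path('./data/forecasting/m1_yearly_dataset.tsf')  # hypothetical location
if _example_tsf.exists():
    _df, _freq, _horizon, _has_missing, _equal_length = convert_tsf_to_dataframe(_example_tsf)
    print(_df.shape, _freq, _horizon, _has_missing, _equal_length)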
# export
def get_Monash_forecasting_data(dsid, path='./data/forecasting/', force_download=False, remove_from_disk=False, verbose=True):
pv(f'Dataset: {dsid}', verbose)
dsid = dsid.lower()
assert dsid in Monash_forecasting_list, f'{dsid} not available in Monash_forecasting_list'
if dsid == 'm1_yearly_dataset': url = 'https://zenodo.org/record/4656193/files/m1_yearly_dataset.zip'
elif dsid == 'm1_quarterly_dataset': url = 'https://zenodo.org/record/4656154/files/m1_quarterly_dataset.zip'
elif dsid == 'm1_monthly_dataset': url = 'https://zenodo.org/record/4656159/files/m1_monthly_dataset.zip'
elif dsid == 'm3_yearly_dataset': url = 'https://zenodo.org/record/4656222/files/m3_yearly_dataset.zip'
elif dsid == 'm3_quarterly_dataset': url = 'https://zenodo.org/record/4656262/files/m3_quarterly_dataset.zip'
elif dsid == 'm3_monthly_dataset': url = 'https://zenodo.org/record/4656298/files/m3_monthly_dataset.zip'
elif dsid == 'm3_other_dataset': url = 'https://zenodo.org/record/4656335/files/m3_other_dataset.zip'
elif dsid == 'm4_yearly_dataset': url = 'https://zenodo.org/record/4656379/files/m4_yearly_dataset.zip'
elif dsid == 'm4_quarterly_dataset': url = 'https://zenodo.org/record/4656410/files/m4_quarterly_dataset.zip'
elif dsid == 'm4_monthly_dataset': url = 'https://zenodo.org/record/4656480/files/m4_monthly_dataset.zip'
elif dsid == 'm4_weekly_dataset': url = 'https://zenodo.org/record/4656522/files/m4_weekly_dataset.zip'
elif dsid == 'm4_daily_dataset': url = 'https://zenodo.org/record/4656548/files/m4_daily_dataset.zip'
elif dsid == 'm4_hourly_dataset': url = 'https://zenodo.org/record/4656589/files/m4_hourly_dataset.zip'
elif dsid == 'tourism_yearly_dataset': url = 'https://zenodo.org/record/4656103/files/tourism_yearly_dataset.zip'
elif dsid == 'tourism_quarterly_dataset': url = 'https://zenodo.org/record/4656093/files/tourism_quarterly_dataset.zip'
elif dsid == 'tourism_monthly_dataset': url = 'https://zenodo.org/record/4656096/files/tourism_monthly_dataset.zip'
elif dsid == 'nn5_daily_dataset_with_missing_values': url = 'https://zenodo.org/record/4656110/files/nn5_daily_dataset_with_missing_values.zip'
elif dsid == 'nn5_daily_dataset_without_missing_values': url = 'https://zenodo.org/record/4656117/files/nn5_daily_dataset_without_missing_values.zip'
elif dsid == 'nn5_weekly_dataset': url = 'https://zenodo.org/record/4656125/files/nn5_weekly_dataset.zip'
elif dsid == 'cif_2016_dataset': url = 'https://zenodo.org/record/4656042/files/cif_2016_dataset.zip'
elif dsid == 'kaggle_web_traffic_dataset_with_missing_values': url = 'https://zenodo.org/record/4656080/files/kaggle_web_traffic_dataset_with_missing_values.zip'
elif dsid == 'kaggle_web_traffic_dataset_without_missing_values': url = 'https://zenodo.org/record/4656075/files/kaggle_web_traffic_dataset_without_missing_values.zip'
    elif dsid == 'kaggle_web_traffic_weekly_dataset': url = 'https://zenodo.org/record/4656664/files/kaggle_web_traffic_weekly_dataset.zip'
elif dsid == 'solar_10_minutes_dataset': url = 'https://zenodo.org/record/4656144/files/solar_10_minutes_dataset.zip'
elif dsid == 'solar_weekly_dataset': url = 'https://zenodo.org/record/4656151/files/solar_weekly_dataset.zip'
elif dsid == 'electricity_hourly_dataset': url = 'https://zenodo.org/record/4656140/files/electricity_hourly_dataset.zip'
elif dsid == 'electricity_weekly_dataset': url = 'https://zenodo.org/record/4656141/files/electricity_weekly_dataset.zip'
elif dsid == 'london_smart_meters_dataset_with_missing_values': url = 'https://zenodo.org/record/4656072/files/london_smart_meters_dataset_with_missing_values.zip'
elif dsid == 'london_smart_meters_dataset_without_missing_values': url = 'https://zenodo.org/record/4656091/files/london_smart_meters_dataset_without_missing_values.zip'
elif dsid == 'wind_farms_minutely_dataset_with_missing_values': url = 'https://zenodo.org/record/4654909/files/wind_farms_minutely_dataset_with_missing_values.zip'
elif dsid == 'wind_farms_minutely_dataset_without_missing_values': url = 'https://zenodo.org/record/4654858/files/wind_farms_minutely_dataset_without_missing_values.zip'
elif dsid == 'car_parts_dataset_with_missing_values': url = 'https://zenodo.org/record/4656022/files/car_parts_dataset_with_missing_values.zip'
elif dsid == 'car_parts_dataset_without_missing_values': url = 'https://zenodo.org/record/4656021/files/car_parts_dataset_without_missing_values.zip'
elif dsid == 'dominick_dataset': url = 'https://zenodo.org/record/4654802/files/dominick_dataset.zip'
elif dsid == 'fred_md_dataset': url = 'https://zenodo.org/record/4654833/files/fred_md_dataset.zip'
elif dsid == 'traffic_hourly_dataset': url = 'https://zenodo.org/record/4656132/files/traffic_hourly_dataset.zip'
elif dsid == 'traffic_weekly_dataset': url = 'https://zenodo.org/record/4656135/files/traffic_weekly_dataset.zip'
elif dsid == 'pedestrian_counts_dataset': url = 'https://zenodo.org/record/4656626/files/pedestrian_counts_dataset.zip'
elif dsid == 'hospital_dataset': url = 'https://zenodo.org/record/4656014/files/hospital_dataset.zip'
elif dsid == 'covid_deaths_dataset': url = 'https://zenodo.org/record/4656009/files/covid_deaths_dataset.zip'
elif dsid == 'kdd_cup_2018_dataset_with_missing_values': url = 'https://zenodo.org/record/4656719/files/kdd_cup_2018_dataset_with_missing_values.zip'
elif dsid == 'kdd_cup_2018_dataset_without_missing_values': url = 'https://zenodo.org/record/4656756/files/kdd_cup_2018_dataset_without_missing_values.zip'
elif dsid == 'weather_dataset': url = 'https://zenodo.org/record/4654822/files/weather_dataset.zip'
elif dsid == 'sunspot_dataset_with_missing_values': url = 'https://zenodo.org/record/4654773/files/sunspot_dataset_with_missing_values.zip'
elif dsid == 'sunspot_dataset_without_missing_values': url = 'https://zenodo.org/record/4654722/files/sunspot_dataset_without_missing_values.zip'
elif dsid == 'saugeenday_dataset': url = 'https://zenodo.org/record/4656058/files/saugeenday_dataset.zip'
elif dsid == 'us_births_dataset': url = 'https://zenodo.org/record/4656049/files/us_births_dataset.zip'
elif dsid == 'elecdemand_dataset': url = 'https://zenodo.org/record/4656069/files/elecdemand_dataset.zip'
elif dsid == 'solar_4_seconds_dataset': url = 'https://zenodo.org/record/4656027/files/solar_4_seconds_dataset.zip'
elif dsid == 'wind_4_seconds_dataset': url = 'https://zenodo.org/record/4656032/files/wind_4_seconds_dataset.zip'
path = Path(path)
full_path = path/f'{dsid}.tsf'
if not full_path.exists() or force_download:
try:
decompress_from_url(url, target_dir=path, verbose=verbose)
except Exception as inst:
print(inst)
pv("converting dataframe to numpy array...", verbose)
data, frequency, forecast_horizon, contain_missing_values, contain_equal_length = convert_tsf_to_dataframe(full_path)
X = to3d(stack_pad(data['series_value']))
pv("...dataframe converted to numpy array", verbose)
pv(f'\nX.shape: {X.shape}', verbose)
pv(f'freq: {frequency}', verbose)
pv(f'forecast_horizon: {forecast_horizon}', verbose)
pv(f'contain_missing_values: {contain_missing_values}', verbose)
pv(f'contain_equal_length: {contain_equal_length}', verbose=verbose)
if remove_from_disk: os.remove(full_path)
return X
get_forecasting_data = get_Monash_forecasting_data
dsid = 'm1_yearly_dataset'
X = get_Monash_forecasting_data(dsid, force_download=False)
if X is not None:
test_eq(X.shape, (181, 1, 58))
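# Series in a Monash forecasting archive may have unequal lengths; stack_pad (used inside
# get_Monash_forecasting_data) pads the shorter ones so they can be stacked (NaN padding
# is assumed here). A small sketch of how to recover the effective length of each series:
if X is not None:
    _lens = (~np.isnan(np.asarray(X[:, 0], dtype=float))).sum(-1)
    print('series length - min:', _lens.min(), 'max:', _lens.max())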
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
nb_name = "012_data.external.ipynb"
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
External data> Helper functions used to download and extract common time series datasets.
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.validation import *
#export
from sktime.utils.data_io import load_from_tsfile_to_dataframe as ts2df
from sktime.utils.validation.panel import check_X
from sktime.utils.data_io import TsFileParseException
#export
from fastai.data.external import *
from tqdm import tqdm
import zipfile
import tempfile
try: from urllib import urlretrieve
except ImportError: from urllib.request import urlretrieve
import shutil
import distutils
import distutils.util  # explicit submodule import so distutils.util.strtobool is available in convert_tsf_to_dataframe
#export
def decompress_from_url(url, target_dir=None, verbose=False):
# Download
try:
pv("downloading data...", verbose)
fname = os.path.basename(url)
tmpdir = tempfile.mkdtemp()
tmpfile = os.path.join(tmpdir, fname)
urlretrieve(url, tmpfile)
pv("...data downloaded", verbose)
# Decompress
try:
pv("decompressing data...", verbose)
if not os.path.exists(target_dir): os.makedirs(target_dir)
shutil.unpack_archive(tmpfile, target_dir)
shutil.rmtree(tmpdir)
pv("...data decompressed", verbose)
return target_dir
except:
shutil.rmtree(tmpdir)
if verbose: sys.stderr.write("Could not decompress file, aborting.\n")
except:
shutil.rmtree(tmpdir)
if verbose:
sys.stderr.write("Could not download url. Please, check url.\n")
#export
from fastdownload import download_url
def download_data(url, fname=None, c_key='archive', force_download=False, timeout=4, verbose=False):
"Download `url` to `fname`."
fname = Path(fname or URLs.path(url, c_key=c_key))
fname.parent.mkdir(parents=True, exist_ok=True)
if not fname.exists() or force_download: download_url(url, dest=fname, timeout=timeout, show_progress=verbose)
return fname
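# A small usage sketch for the two helpers above: download_data fetches a single file and
# returns its local Path, while decompress_from_url downloads an archive and unpacks it
# into target_dir (not called here to avoid pulling a large archive). The URL is the same
# Sunspots CSV used by get_forecasting_time_series further down; the local target path is
# an arbitrary choice.
_url = "https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Sunspots.csv"
_local = download_data(_url, fname='./data/forecasting/Sunspots.csv')
print(_local)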
# export
def get_UCR_univariate_list():
return [
'ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY',
'AllGestureWiimoteZ', 'ArrowHead', 'Beef', 'BeetleFly', 'BirdChicken',
'BME', 'Car', 'CBF', 'Chinatown', 'ChlorineConcentration',
'CinCECGTorso', 'Coffee', 'Computers', 'CricketX', 'CricketY',
'CricketZ', 'Crop', 'DiatomSizeReduction',
'DistalPhalanxOutlineAgeGroup', 'DistalPhalanxOutlineCorrect',
'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame',
'DodgerLoopWeekend', 'Earthquakes', 'ECG200', 'ECG5000', 'ECGFiveDays',
'ElectricDevices', 'EOGHorizontalSignal', 'EOGVerticalSignal',
'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords',
'Fish', 'FordA', 'FordB', 'FreezerRegularTrain', 'FreezerSmallTrain',
'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3',
'GesturePebbleZ1', 'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan',
'GunPointMaleVersusFemale', 'GunPointOldVersusYoung', 'Ham',
'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate',
'InsectEPGRegularTrain', 'InsectEPGSmallTrain', 'InsectWingbeatSound',
'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2',
'Lightning7', 'Mallat', 'Meat', 'MedicalImages', 'MelbournePedestrian',
'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect',
'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain',
'MoteStrain', 'NonInvasiveFetalECGThorax1',
'NonInvasiveFetalECGThorax2', 'OliveOil', 'OSULeaf',
'PhalangesOutlinesCorrect', 'Phoneme', 'PickupGestureWiimoteZ',
'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'PLAID', 'Plane',
'PowerCons', 'ProximalPhalanxOutlineAgeGroup',
'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW',
'RefrigerationDevices', 'Rock', 'ScreenType', 'SemgHandGenderCh2',
'SemgHandMovementCh2', 'SemgHandSubjectCh2', 'ShakeGestureWiimoteZ',
'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace',
'SonyAIBORobotSurface1', 'SonyAIBORobotSurface2', 'StarLightCurves',
'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl',
'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG',
'TwoPatterns', 'UMD', 'UWaveGestureLibraryAll', 'UWaveGestureLibraryX',
'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine',
'WordSynonyms', 'Worms', 'WormsTwoClass', 'Yoga'
]
test_eq(len(get_UCR_univariate_list()), 128)
UTSC_datasets = get_UCR_univariate_list()
UCR_univariate_list = get_UCR_univariate_list()
#export
def get_UCR_multivariate_list():
return [
'ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions',
'CharacterTrajectories', 'Cricket', 'DuckDuckGeese', 'EigenWorms',
'Epilepsy', 'ERing', 'EthanolConcentration', 'FaceDetection',
'FingerMovements', 'HandMovementDirection', 'Handwriting', 'Heartbeat',
'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery',
'NATOPS', 'PEMS-SF', 'PenDigits', 'PhonemeSpectra', 'RacketSports',
'SelfRegulationSCP1', 'SelfRegulationSCP2', 'SpokenArabicDigits',
'StandWalkJump', 'UWaveGestureLibrary'
]
test_eq(len(get_UCR_multivariate_list()), 30)
MTSC_datasets = get_UCR_multivariate_list()
UCR_multivariate_list = get_UCR_multivariate_list()
UCR_list = sorted(UCR_univariate_list + UCR_multivariate_list)
classification_list = UCR_list
TSC_datasets = classification_datasets = UCR_list
len(UCR_list)
#export
def get_UCR_data(dsid, path='.', parent_dir='data/UCR', on_disk=True, mode='c', Xdtype='float32', ydtype=None, return_split=True, split_data=True,
force_download=False, verbose=False):
dsid_list = [ds for ds in UCR_list if ds.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a UCR dataset'
dsid = dsid_list[0]
return_split = return_split and split_data # keep return_split for compatibility. It will be replaced by split_data
if dsid in ['InsectWingbeat']:
warnings.warn(f'Be aware that download of the {dsid} dataset is very slow!')
pv(f'Dataset: {dsid}', verbose)
full_parent_dir = Path(path)/parent_dir
full_tgt_dir = full_parent_dir/dsid
# if not os.path.exists(full_tgt_dir): os.makedirs(full_tgt_dir)
full_tgt_dir.parent.mkdir(parents=True, exist_ok=True)
if force_download or not all([os.path.isfile(f'{full_tgt_dir}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):
# Option A
src_website = 'http://www.timeseriesclassification.com/Downloads'
decompress_from_url(f'{src_website}/{dsid}.zip', target_dir=full_tgt_dir, verbose=verbose)
if dsid == 'DuckDuckGeese':
with zipfile.ZipFile(Path(f'{full_parent_dir}/DuckDuckGeese/DuckDuckGeese_ts.zip'), 'r') as zip_ref:
zip_ref.extractall(Path(parent_dir))
        if not os.path.exists(full_tgt_dir/f'{dsid}_TRAIN.ts') or not os.path.exists(full_tgt_dir/f'{dsid}_TEST.ts') or \
            Path(full_tgt_dir/f'{dsid}_TRAIN.ts').stat().st_size == 0 or Path(full_tgt_dir/f'{dsid}_TEST.ts').stat().st_size == 0:
print('It has not been possible to download the required files')
if return_split:
return None, None, None, None
else:
return None, None, None
pv('loading ts files to dataframe...', verbose)
X_train_df, y_train = ts2df(full_tgt_dir/f'{dsid}_TRAIN.ts')
X_valid_df, y_valid = ts2df(full_tgt_dir/f'{dsid}_TEST.ts')
pv('...ts files loaded', verbose)
pv('preparing numpy arrays...', verbose)
X_train_ = []
X_valid_ = []
for i in progress_bar(range(X_train_df.shape[-1]), display=verbose, leave=False):
X_train_.append(stack_pad(X_train_df[f'dim_{i}'])) # stack arrays even if they have different lengths
X_valid_.append(stack_pad(X_valid_df[f'dim_{i}'])) # stack arrays even if they have different lengths
X_train = np.transpose(np.stack(X_train_, axis=-1), (0, 2, 1))
X_valid = np.transpose(np.stack(X_valid_, axis=-1), (0, 2, 1))
X_train, X_valid = match_seq_len(X_train, X_valid)
np.save(f'{full_tgt_dir}/X_train.npy', X_train)
np.save(f'{full_tgt_dir}/y_train.npy', y_train)
np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)
np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)
np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))
np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))
del X_train, X_valid, y_train, y_valid
delete_all_in_dir(full_tgt_dir, exception='.npy')
pv('...numpy arrays correctly saved', verbose)
mmap_mode = mode if on_disk else None
X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)
y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)
X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)
y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)
if return_split:
if Xdtype is not None:
X_train = X_train.astype(Xdtype)
X_valid = X_valid.astype(Xdtype)
if ydtype is not None:
y_train = y_train.astype(ydtype)
y_valid = y_valid.astype(ydtype)
if verbose:
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_valid:', X_valid.shape)
print('y_valid:', y_valid.shape, '\n')
return X_train, y_train, X_valid, y_valid
else:
X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)
y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)
splits = get_predefined_splits(X_train, X_valid)
if Xdtype is not None:
X = X.astype(Xdtype)
if verbose:
print('X :', X .shape)
print('y :', y .shape)
print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n')
return X, y, splits
get_classification_data = get_UCR_data
#hide
PATH = Path('.')
dsids = ['ECGFiveDays', 'AtrialFibrillation'] # univariate and multivariate
for dsid in dsids:
print(dsid)
tgt_dir = PATH/f'data/UCR/{dsid}'
if os.path.isdir(tgt_dir): shutil.rmtree(tgt_dir)
test_eq(len(get_files(tgt_dir)), 0) # no file left
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)
test_eq(len(get_files(tgt_dir, '.npy')), 6)
test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test no left file/ dir
del X_train, y_train, X_valid, y_valid
start = time.time()
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)
elapsed = time.time() - start
test_eq(elapsed < 1, True)
test_eq(X_train.ndim, 3)
test_eq(y_train.ndim, 1)
test_eq(X_valid.ndim, 3)
test_eq(y_valid.ndim, 1)
test_eq(len(get_files(tgt_dir, '.npy')), 6)
test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test no left file/ dir
test_eq(X_train.ndim, 3)
test_eq(y_train.ndim, 1)
test_eq(X_valid.ndim, 3)
test_eq(y_valid.ndim, 1)
test_eq(X_train.dtype, np.float32)
test_eq(X_train.__class__.__name__, 'memmap')
del X_train, y_train, X_valid, y_valid
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, on_disk=False)
test_eq(X_train.__class__.__name__, 'ndarray')
del X_train, y_train, X_valid, y_valid
X_train, y_train, X_valid, y_valid = get_UCR_data('natops')
dsid = 'natops'
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, verbose=True)
X, y, splits = get_UCR_data(dsid, split_data=False)
test_eq(X[splits[0]], X_train)
test_eq(y[splits[1]], y_valid)
test_eq(X[splits[0]], X_train)
test_eq(y[splits[1]], y_valid)
test_type(X, X_train)
test_type(y, y_train)
#export
def check_data(X, y=None, splits=None, show_plot=True):
try: X_is_nan = np.isnan(X).sum()
except: X_is_nan = 'could not be checked'
if X.ndim == 3:
shape = f'[{X.shape[0]} samples x {X.shape[1]} features x {X.shape[-1]} timesteps]'
print(f'X - shape: {shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')
else:
print(f'X - shape: {X.shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')
if X_is_nan:
warnings.warn('X contains nan values')
if y is not None:
y_shape = y.shape
y = y.ravel()
if isinstance(y[0], str):
n_classes = f'{len(np.unique(y))} ({len(y)//len(np.unique(y))} samples per class) {L(np.unique(y).tolist())}'
y_is_nan = 'nan' in [c.lower() for c in np.unique(y)]
print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} n_classes: {n_classes} isnan: {y_is_nan}')
else:
y_is_nan = np.isnan(y).sum()
print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} isnan: {y_is_nan}')
if y_is_nan:
warnings.warn('y contains nan values')
if splits is not None:
_splits = get_splits_len(splits)
overlap = check_splits_overlap(splits)
print(f'splits - n_splits: {len(_splits)} shape: {_splits} overlap: {overlap}')
if show_plot: plot_splits(splits)
dsid = 'ECGFiveDays'
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
check_data(X, y, splits)
check_data(X[:, 0], y, splits)
y = y.astype(np.float32)
check_data(X, y, splits)
y[:10] = np.nan
check_data(X[:, 0], y, splits)
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
splits = get_splits(y, 3)
check_data(X, y, splits)
check_data(X[:, 0], y, splits)
y[:5]= np.nan
check_data(X[:, 0], y, splits)
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
#export
# This code comes from https://github.com/ChangWeiTan/TSRegression. As of Jan 16th, 2021 there's no pip install available.
# The following code is adapted from the python package sktime to read .ts file.
class _TsFileParseException(Exception):
"""
Should be raised when parsing a .ts file and the format is incorrect.
"""
pass
def _load_from_tsfile_to_dataframe2(full_file_path_and_name, return_separate_X_and_y=True, replace_missing_vals_with='NaN'):
"""Loads data from a .ts file into a Pandas DataFrame.
Parameters
----------
full_file_path_and_name: str
The full pathname of the .ts file to read.
return_separate_X_and_y: bool
        true if X and y values should be returned as a separate DataFrame (X) and a numpy array (y), false otherwise.
        This is only relevant for data that has associated class or target values.
replace_missing_vals_with: str
The value that missing values in the text file should be replaced with prior to parsing.
Returns
-------
DataFrame, ndarray
If return_separate_X_and_y then a tuple containing a DataFrame and a numpy array containing the relevant time-series and corresponding class values.
DataFrame
        If not return_separate_X_and_y then a single DataFrame containing all time-series and (if relevant) a column "class_vals" with the associated class values.
"""
# Initialize flags and variables used when parsing the file
metadata_started = False
data_started = False
has_problem_name_tag = False
has_timestamps_tag = False
has_univariate_tag = False
has_class_labels_tag = False
has_target_labels_tag = False
has_data_tag = False
previous_timestamp_was_float = None
previous_timestamp_was_int = None
previous_timestamp_was_timestamp = None
num_dimensions = None
is_first_case = True
instance_list = []
class_val_list = []
line_num = 0
# Parse the file
# print(full_file_path_and_name)
with open(full_file_path_and_name, 'r', encoding='utf-8') as file:
for line in tqdm(file):
# print(".", end='')
# Strip white space from start/end of line and change to lowercase for use below
line = line.strip().lower()
# Empty lines are valid at any point in a file
if line:
# Check if this line contains metadata
# Please note that even though metadata is stored in this function it is not currently published externally
if line.startswith("@problemname"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("problemname tag requires an associated value")
problem_name = line[len("@problemname") + 1:]
has_problem_name_tag = True
metadata_started = True
elif line.startswith("@timestamps"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException("timestamps tag requires an associated Boolean value")
elif tokens[1] == "true":
timestamps = True
elif tokens[1] == "false":
timestamps = False
else:
raise _TsFileParseException("invalid timestamps value")
has_timestamps_tag = True
metadata_started = True
elif line.startswith("@univariate"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException("univariate tag requires an associated Boolean value")
elif tokens[1] == "true":
univariate = True
elif tokens[1] == "false":
univariate = False
else:
raise _TsFileParseException("invalid univariate value")
has_univariate_tag = True
metadata_started = True
elif line.startswith("@classlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("classlabel tag requires an associated Boolean value")
if tokens[1] == "true":
class_labels = True
elif tokens[1] == "false":
class_labels = False
else:
raise _TsFileParseException("invalid classLabel value")
# Check if we have any associated class values
if token_len == 2 and class_labels:
raise _TsFileParseException("if the classlabel tag is true then class values must be supplied")
has_class_labels_tag = True
class_label_list = [token.strip() for token in tokens[2:]]
metadata_started = True
elif line.startswith("@targetlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("targetlabel tag requires an associated Boolean value")
if tokens[1] == "true":
target_labels = True
elif tokens[1] == "false":
target_labels = False
else:
raise _TsFileParseException("invalid targetLabel value")
has_target_labels_tag = True
class_val_list = []
metadata_started = True
# Check if this line contains the start of data
elif line.startswith("@data"):
if line != "@data":
raise _TsFileParseException("data tag should not have an associated value")
if data_started and not metadata_started:
raise _TsFileParseException("metadata must come before data")
else:
has_data_tag = True
data_started = True
                # If the '@data' tag has been found then metadata has been parsed and data can be loaded
elif data_started:
# Check that a full set of metadata has been provided
incomplete_regression_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_target_labels_tag or not has_data_tag
incomplete_classification_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_class_labels_tag or not has_data_tag
if incomplete_regression_meta_data and incomplete_classification_meta_data:
raise _TsFileParseException("a full set of metadata has not been provided before the data")
# Replace any missing values with the value specified
line = line.replace("?", replace_missing_vals_with)
                    # Check if we are dealing with data that has timestamps
if timestamps:
# We're dealing with timestamps so cannot just split line on ':' as timestamps may contain one
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
this_line_num_dimensions = 0
line_len = len(line)
char_num = 0
while char_num < line_len:
# Move through any spaces
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
                            # See if there is any more data to read in or if we should validate what has been read thus far
if char_num < line_len:
# See if we have an empty dimension (i.e. no values)
if line[char_num] == ":":
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series())
this_line_num_dimensions += 1
has_another_value = False
has_another_dimension = True
timestamps_for_dimension = []
values_for_dimension = []
char_num += 1
else:
# Check if we have reached a class label
if line[char_num] != "(" and target_labels:
class_val = line[char_num:].strip()
# if class_val not in class_val_list:
# raise _TsFileParseException(
# "the class value '" + class_val + "' on line " + str(
# line_num + 1) + " is not valid")
class_val_list.append(float(class_val))
char_num = line_len
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
else:
# Read in the data contained within the next tuple
if line[char_num] != "(" and not target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not start with a '('")
char_num += 1
tuple_data = ""
while char_num < line_len and line[char_num] != ")":
tuple_data += line[char_num]
char_num += 1
if char_num >= line_len or line[char_num] != ")":
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not end with a ')'")
# Read in any spaces immediately after the current tuple
char_num += 1
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
# Check if there is another value or dimension to process after this tuple
if char_num >= line_len:
has_another_value = False
has_another_dimension = False
elif line[char_num] == ",":
has_another_value = True
has_another_dimension = False
elif line[char_num] == ":":
has_another_value = False
has_another_dimension = True
char_num += 1
# Get the numeric value for the tuple by reading from the end of the tuple data backwards to the last comma
last_comma_index = tuple_data.rfind(',')
if last_comma_index == -1:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has no comma inside of it")
try:
value = tuple_data[last_comma_index + 1:]
value = float(value)
except ValueError:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that does not have a valid numeric value")
# Check the type of timestamp that we have
timestamp = tuple_data[0: last_comma_index]
try:
timestamp = int(timestamp)
timestamp_is_int = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_int = False
if not timestamp_is_int:
try:
timestamp = float(timestamp)
timestamp_is_float = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_float = False
if not timestamp_is_int and not timestamp_is_float:
try:
timestamp = timestamp.strip()
timestamp_is_timestamp = True
except ValueError:
timestamp_is_timestamp = False
# Make sure that the timestamps in the file (not just this dimension or case) are consistent
if not timestamp_is_timestamp and not timestamp_is_int and not timestamp_is_float:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has an invalid timestamp '" + timestamp + "'")
if previous_timestamp_was_float is not None and previous_timestamp_was_float and not timestamp_is_float:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_int is not None and previous_timestamp_was_int and not timestamp_is_int:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_timestamp is not None and previous_timestamp_was_timestamp and not timestamp_is_timestamp:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
# Store the values
timestamps_for_dimension += [timestamp]
values_for_dimension += [value]
# If this was our first tuple then we store the type of timestamp we had
if previous_timestamp_was_timestamp is None and timestamp_is_timestamp:
previous_timestamp_was_timestamp = True
previous_timestamp_was_int = False
previous_timestamp_was_float = False
if previous_timestamp_was_int is None and timestamp_is_int:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = True
previous_timestamp_was_float = False
if previous_timestamp_was_float is None and timestamp_is_float:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = False
previous_timestamp_was_float = True
# See if we should add the data for this dimension
if not has_another_value:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
if timestamp_is_timestamp:
timestamps_for_dimension = pd.DatetimeIndex(timestamps_for_dimension)
instance_list[this_line_num_dimensions].append(
pd.Series(index=timestamps_for_dimension, data=values_for_dimension))
this_line_num_dimensions += 1
timestamps_for_dimension = []
values_for_dimension = []
elif has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series(dtype=np.float32))
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and not has_another_dimension:
if num_dimensions is None:
num_dimensions = this_line_num_dimensions
if num_dimensions != this_line_num_dimensions:
raise _TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
                        # Check that we are not expecting any more data and, if not, store what has been processed above
if has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series())
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and num_dimensions != this_line_num_dimensions:
raise _TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
# Check if we should have class values, and if so that they are contained in those listed in the metadata
if target_labels and len(class_val_list) == 0:
raise _TsFileParseException("the cases have no associated class values")
else:
dimensions = line.split(":")
# If first row then note the number of dimensions (that must be the same for all cases)
if is_first_case:
num_dimensions = len(dimensions)
if target_labels:
num_dimensions -= 1
for dim in range(0, num_dimensions):
instance_list.append([])
is_first_case = False
                        # See how many dimensions the case represented on this line has
this_line_num_dimensions = len(dimensions)
if target_labels:
this_line_num_dimensions -= 1
# All dimensions should be included for all series, even if they are empty
if this_line_num_dimensions != num_dimensions:
raise _TsFileParseException("inconsistent number of dimensions. Expecting " + str(
num_dimensions) + " but have read " + str(this_line_num_dimensions))
# Process the data for each dimension
for dim in range(0, num_dimensions):
dimension = dimensions[dim].strip()
if dimension:
data_series = dimension.split(",")
data_series = [float(i) for i in data_series]
instance_list[dim].append(pd.Series(data_series))
else:
instance_list[dim].append(pd.Series())
if target_labels:
class_val_list.append(float(dimensions[num_dimensions].strip()))
line_num += 1
# Check that the file was not empty
if line_num:
# Check that the file contained both metadata and data
complete_regression_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_target_labels_tag and has_data_tag
complete_classification_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_class_labels_tag and has_data_tag
if metadata_started and not complete_regression_meta_data and not complete_classification_meta_data:
raise _TsFileParseException("metadata incomplete")
elif metadata_started and not data_started:
raise _TsFileParseException("file contained metadata but no data")
elif metadata_started and data_started and len(instance_list) == 0:
raise _TsFileParseException("file contained metadata but no data")
# Create a DataFrame from the data parsed above
data = pd.DataFrame(dtype=np.float32)
for dim in range(0, num_dimensions):
data['dim_' + str(dim)] = instance_list[dim]
# Check if we should return any associated class labels separately
if target_labels:
if return_separate_X_and_y:
return data, np.asarray(class_val_list)
else:
data['class_vals'] = pd.Series(class_val_list)
return data
else:
return data
else:
raise _TsFileParseException("empty file")
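# The dataframe returned above is in sktime's "nested" panel format (one pd.Series per
# cell). A short sketch, assuming a previously downloaded .ts file, of converting it to
# the 3d numpy array layout used throughout tsai, mirroring what get_Monash_regression_data
# does below with check_X:
_example_ts = Path('./data/Monash/Covid3Month/Covid3Month_TRAIN.ts')  # hypothetical location
if _example_ts.exists():
    _X_df, _y = _load_from_tsfile_to_dataframe2(_example_ts)
    _X_arr = check_X(_X_df, coerce_to_numpy=True)  # -> (samples, variables, timesteps)
    print(_X_arr.shape, _y.shape)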
#export
def get_Monash_regression_list():
return sorted([
"AustraliaRainfall", "HouseholdPowerConsumption1",
"HouseholdPowerConsumption2", "BeijingPM25Quality",
"BeijingPM10Quality", "Covid3Month", "LiveFuelMoistureContent",
"FloodModeling1", "FloodModeling2", "FloodModeling3",
"AppliancesEnergy", "BenzeneConcentration", "NewsHeadlineSentiment",
"NewsTitleSentiment", "IEEEPPG",
#"BIDMC32RR", "BIDMC32HR", "BIDMC32SpO2", "PPGDalia" # Cannot be downloaded
])
Monash_regression_list = get_Monash_regression_list()
regression_list = Monash_regression_list
TSR_datasets = regression_datasets = regression_list
len(Monash_regression_list)
#export
def get_Monash_regression_data(dsid, path='./data/Monash', on_disk=True, mode='c', Xdtype='float32', ydtype=None, split_data=True, force_download=False,
verbose=False, timeout=4):
dsid_list = [rd for rd in Monash_regression_list if rd.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a Monash dataset'
dsid = dsid_list[0]
full_tgt_dir = Path(path)/dsid
pv(f'Dataset: {dsid}', verbose)
if force_download or not all([os.path.isfile(f'{path}/{dsid}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):
if dsid == 'AppliancesEnergy': dset_id = 3902637
elif dsid == 'HouseholdPowerConsumption1': dset_id = 3902704
elif dsid == 'HouseholdPowerConsumption2': dset_id = 3902706
elif dsid == 'BenzeneConcentration': dset_id = 3902673
elif dsid == 'BeijingPM25Quality': dset_id = 3902671
elif dsid == 'BeijingPM10Quality': dset_id = 3902667
elif dsid == 'LiveFuelMoistureContent': dset_id = 3902716
elif dsid == 'FloodModeling1': dset_id = 3902694
elif dsid == 'FloodModeling2': dset_id = 3902696
elif dsid == 'FloodModeling3': dset_id = 3902698
elif dsid == 'AustraliaRainfall': dset_id = 3902654
elif dsid == 'PPGDalia': dset_id = 3902728
elif dsid == 'IEEEPPG': dset_id = 3902710
elif dsid == 'BIDMCRR' or dsid == 'BIDM32CRR': dset_id = 3902685
elif dsid == 'BIDMCHR' or dsid == 'BIDM32CHR': dset_id = 3902676
elif dsid == 'BIDMCSpO2' or dsid == 'BIDM32CSpO2': dset_id = 3902688
elif dsid == 'NewsHeadlineSentiment': dset_id = 3902718
elif dsid == 'NewsTitleSentiment': dset_id= 3902726
elif dsid == 'Covid3Month': dset_id = 3902690
for split in ['TRAIN', 'TEST']:
url = f"https://zenodo.org/record/{dset_id}/files/{dsid}_{split}.ts"
fname = Path(path)/f'{dsid}/{dsid}_{split}.ts'
pv('downloading data...', verbose)
try:
download_data(url, fname, c_key='archive', force_download=force_download, timeout=timeout)
except Exception as inst:
print(inst)
warnings.warn(f'Cannot download {dsid} dataset')
if split_data: return None, None, None, None
else: return None, None, None
pv('...download complete', verbose)
try:
if split == 'TRAIN':
X_train, y_train = _load_from_tsfile_to_dataframe2(fname)
X_train = check_X(X_train, coerce_to_numpy=True)
else:
X_valid, y_valid = _load_from_tsfile_to_dataframe2(fname)
X_valid = check_X(X_valid, coerce_to_numpy=True)
except Exception as inst:
print(inst)
warnings.warn(f'Cannot create numpy arrays for {dsid} dataset')
if split_data: return None, None, None, None
else: return None, None, None
np.save(f'{full_tgt_dir}/X_train.npy', X_train)
np.save(f'{full_tgt_dir}/y_train.npy', y_train)
np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)
np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)
np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))
np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))
del X_train, X_valid, y_train, y_valid
delete_all_in_dir(full_tgt_dir, exception='.npy')
pv('...numpy arrays correctly saved', verbose)
mmap_mode = mode if on_disk else None
X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)
y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)
X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)
y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)
if Xdtype is not None:
X_train = X_train.astype(Xdtype)
X_valid = X_valid.astype(Xdtype)
if ydtype is not None:
y_train = y_train.astype(ydtype)
y_valid = y_valid.astype(ydtype)
if split_data:
if verbose:
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_valid:', X_valid.shape)
print('y_valid:', y_valid.shape, '\n')
return X_train, y_train, X_valid, y_valid
else:
X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)
y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)
splits = get_predefined_splits(X_train, X_valid)
if verbose:
print('X :', X .shape)
print('y :', y .shape)
print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n')
return X, y, splits
get_regression_data = get_Monash_regression_data
dsid = "Covid3Month"
X_train, y_train, X_valid, y_valid = get_Monash_regression_data(dsid, on_disk=False, split_data=True, force_download=False)
X, y, splits = get_Monash_regression_data(dsid, on_disk=True, split_data=False, force_download=False, verbose=True)
if X_train is not None:
test_eq(X_train.shape, (140, 1, 84))
if X is not None:
test_eq(X.shape, (201, 1, 84))
#export
def get_forecasting_list():
return sorted([
"Sunspots", "Weather"
])
forecasting_time_series = get_forecasting_list()
#export
def get_forecasting_time_series(dsid, path='./data/forecasting/', force_download=False, verbose=True, **kwargs):
dsid_list = [fd for fd in forecasting_time_series if fd.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a forecasting dataset'
dsid = dsid_list[0]
if dsid == 'Weather': full_tgt_dir = Path(path)/f'{dsid}.csv.zip'
else: full_tgt_dir = Path(path)/f'{dsid}.csv'
pv(f'Dataset: {dsid}', verbose)
if dsid == 'Sunspots': url = "https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Sunspots.csv"
elif dsid == 'Weather': url = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip'
try:
pv("downloading data...", verbose)
if force_download:
try: os.remove(full_tgt_dir)
except OSError: pass
download_data(url, full_tgt_dir, force_download=force_download, **kwargs)
pv(f"...data downloaded. Path = {full_tgt_dir}", verbose)
if dsid == 'Sunspots':
df = pd.read_csv(full_tgt_dir, parse_dates=['Date'], index_col=['Date'])
return df['Monthly Mean Total Sunspot Number'].asfreq('1M').to_frame()
elif dsid == 'Weather':
# This code comes from a great Keras time-series tutorial notebook (https://www.tensorflow.org/tutorials/structured_data/time_series)
df = pd.read_csv(full_tgt_dir)
df = df[5::6] # slice [start:stop:step], starting from index 5 take every 6th record.
date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')
# remove error (negative wind)
wv = df['wv (m/s)']
bad_wv = wv == -9999.0
wv[bad_wv] = 0.0
max_wv = df['max. wv (m/s)']
bad_max_wv = max_wv == -9999.0
max_wv[bad_max_wv] = 0.0
wv = df.pop('wv (m/s)')
max_wv = df.pop('max. wv (m/s)')
# Convert to radians.
wd_rad = df.pop('wd (deg)')*np.pi / 180
# Calculate the wind x and y components.
df['Wx'] = wv*np.cos(wd_rad)
df['Wy'] = wv*np.sin(wd_rad)
# Calculate the max wind x and y components.
df['max Wx'] = max_wv*np.cos(wd_rad)
df['max Wy'] = max_wv*np.sin(wd_rad)
timestamp_s = date_time.map(datetime.timestamp)
day = 24*60*60
year = (365.2425)*day
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
df.reset_index(drop=True, inplace=True)
return df
else:
return full_tgt_dir
except Exception as inst:
print(inst)
warnings.warn(f"Cannot download {dsid} dataset")
return
ts = get_forecasting_time_series("sunspots", force_download=False)
test_eq(len(ts), 3235)
ts
ts = get_forecasting_time_series("weather", force_download=False)
if ts is not None:
test_eq(len(ts), 70091)
print(ts)
# export
Monash_forecasting_list = ['m1_yearly_dataset',
'm1_quarterly_dataset',
'm1_monthly_dataset',
'm3_yearly_dataset',
'm3_quarterly_dataset',
'm3_monthly_dataset',
'm3_other_dataset',
'm4_yearly_dataset',
'm4_quarterly_dataset',
'm4_monthly_dataset',
'm4_weekly_dataset',
'm4_daily_dataset',
'm4_hourly_dataset',
'tourism_yearly_dataset',
'tourism_quarterly_dataset',
'tourism_monthly_dataset',
'nn5_daily_dataset_with_missing_values',
'nn5_daily_dataset_without_missing_values',
'nn5_weekly_dataset',
'cif_2016_dataset',
'kaggle_web_traffic_dataset_with_missing_values',
'kaggle_web_traffic_dataset_without_missing_values',
'kaggle_web_traffic_weekly_dataset',
'solar_10_minutes_dataset',
'solar_weekly_dataset',
'electricity_hourly_dataset',
'electricity_weekly_dataset',
'london_smart_meters_dataset_with_missing_values',
'london_smart_meters_dataset_without_missing_values',
'wind_farms_minutely_dataset_with_missing_values',
'wind_farms_minutely_dataset_without_missing_values',
'car_parts_dataset_with_missing_values',
'car_parts_dataset_without_missing_values',
'dominick_dataset',
'fred_md_dataset',
'traffic_hourly_dataset',
'traffic_weekly_dataset',
'pedestrian_counts_dataset',
'hospital_dataset',
'covid_deaths_dataset',
'kdd_cup_2018_dataset_with_missing_values',
'kdd_cup_2018_dataset_without_missing_values',
'weather_dataset',
'sunspot_dataset_with_missing_values',
'sunspot_dataset_without_missing_values',
'saugeenday_dataset',
'us_births_dataset',
'elecdemand_dataset',
'solar_4_seconds_dataset',
'wind_4_seconds_dataset',
'Sunspots', 'Weather']
forecasting_list = Monash_forecasting_list
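# Hypothetical helper (not part of the library): resolve a user-supplied id against
# Monash_forecasting_list ignoring case, mirroring the case-insensitive lookups that
# get_UCR_data and get_Monash_regression_data perform on their own dataset lists.
def _resolve_forecasting_dsid(name):
    matches = [ds for ds in Monash_forecasting_list if ds.lower() == name.lower()]
    assert len(matches) > 0, f'{name} is not a Monash forecasting dataset'
    return matches[0]
test_eq(_resolve_forecasting_dsid('M4_Daily_Dataset'), 'm4_daily_dataset')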
# export
## Original code available at: https://github.com/rakshitha123/TSForecasting
# This repository contains the implementations related to the experiments of a set of publicly available datasets that are used in
# the time series forecasting research space.
# The benchmark datasets are available at: https://zenodo.org/communities/forecasting. For more details, please refer to our website:
# https://forecastingdata.org/ and paper: https://arxiv.org/abs/2105.06643.
# Citation:
# @misc{godahewa2021monash,
# author="Godahewa, Rakshitha and Bergmeir, Christoph and Webb, Geoffrey I. and Hyndman, Rob J. and Montero-Manso, Pablo",
# title="Monash Time Series Forecasting Archive",
# howpublished ="\url{https://arxiv.org/abs/2105.06643}",
# year="2021"
# }
# Converts the contents in a .tsf file into a dataframe and returns it along with other meta-data of the dataset: frequency, horizon, whether the dataset contains missing values and whether the series have equal lengths
#
# Parameters
# full_file_path_and_name - complete .tsf file path
# replace_missing_vals_with - a term to indicate the missing values in series in the returning dataframe
# value_column_name - Any name that is preferred to have as the name of the column containing series values in the returning dataframe
def convert_tsf_to_dataframe(full_file_path_and_name, replace_missing_vals_with = 'NaN', value_column_name = "series_value"):
col_names = []
col_types = []
all_data = {}
line_count = 0
frequency = None
forecast_horizon = None
contain_missing_values = None
contain_equal_length = None
found_data_tag = False
found_data_section = False
started_reading_data_section = False
with open(full_file_path_and_name, 'r', encoding='cp1252') as file:
for line in file:
# Strip white space from start/end of line
line = line.strip()
if line:
if line.startswith("@"): # Read meta-data
if not line.startswith("@data"):
line_content = line.split(" ")
if line.startswith("@attribute"):
if (len(line_content) != 3): # Attributes have both name and type
raise TsFileParseException("Invalid meta-data specification.")
col_names.append(line_content[1])
col_types.append(line_content[2])
else:
if len(line_content) != 2: # Other meta-data have only values
raise TsFileParseException("Invalid meta-data specification.")
if line.startswith("@frequency"):
frequency = line_content[1]
elif line.startswith("@horizon"):
forecast_horizon = int(line_content[1])
elif line.startswith("@missing"):
contain_missing_values = bool(distutils.util.strtobool(line_content[1]))
elif line.startswith("@equallength"):
contain_equal_length = bool(distutils.util.strtobool(line_content[1]))
else:
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section. Attribute section must come before data.")
found_data_tag = True
elif not line.startswith("#"):
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section. Attribute section must come before data.")
elif not found_data_tag:
raise TsFileParseException("Missing @data tag.")
else:
if not started_reading_data_section:
started_reading_data_section = True
found_data_section = True
all_series = []
for col in col_names:
all_data[col] = []
full_info = line.split(":")
if len(full_info) != (len(col_names) + 1):
raise TsFileParseException("Missing attributes/values in series.")
series = full_info[len(full_info) - 1]
series = series.split(",")
if(len(series) == 0):
                            raise TsFileParseException("A given series should contain a set of comma-separated numeric values. At least one numeric value should be present in a series. Missing values should be indicated with the ? symbol.")
numeric_series = []
for val in series:
if val == "?":
numeric_series.append(replace_missing_vals_with)
else:
numeric_series.append(float(val))
if (numeric_series.count(replace_missing_vals_with) == len(numeric_series)):
                            raise TsFileParseException("All series values are missing. A given series should contain a set of comma-separated numeric values. At least one numeric value should be present in a series.")
all_series.append(pd.Series(numeric_series).array)
for i in range(len(col_names)):
att_val = None
if col_types[i] == "numeric":
att_val = int(full_info[i])
elif col_types[i] == "string":
att_val = str(full_info[i])
elif col_types[i] == "date":
att_val = datetime.strptime(full_info[i], '%Y-%m-%d %H-%M-%S')
else:
raise TsFileParseException("Invalid attribute type.") # Currently, the code supports only numeric, string and date types. Extend this as required.
if(att_val == None):
raise TsFileParseException("Invalid attribute value.")
else:
all_data[col_names[i]].append(att_val)
line_count = line_count + 1
if line_count == 0:
raise TsFileParseException("Empty file.")
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section.")
if not found_data_section:
raise TsFileParseException("Missing series information under data section.")
all_data[value_column_name] = all_series
loaded_data = pd.DataFrame(all_data)
return loaded_data, frequency, forecast_horizon, contain_missing_values, contain_equal_length
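# Usage sketch for convert_tsf_to_dataframe. The path below is only an example and the
# block is skipped unless the m1_yearly_dataset .tsf file is already on disk
# (get_Monash_forecasting_data below is what downloads and extracts it).
_tsf_path = Path('./data/forecasting/m1_yearly_dataset.tsf')
if _tsf_path.exists():
    _data, _freq, _horizon, _missing, _equal_len = convert_tsf_to_dataframe(str(_tsf_path))
    print(_data.shape, _freq, _horizon, _missing, _equal_len)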
# export
def get_Monash_forecasting_data(dsid, path='./data/forecasting/', force_download=False, remove_from_disk=False, verbose=True):
pv(f'Dataset: {dsid}', verbose)
dsid = dsid.lower()
assert dsid in Monash_forecasting_list, f'{dsid} not available in Monash_forecasting_list'
if dsid == 'm1_yearly_dataset': url = 'https://zenodo.org/record/4656193/files/m1_yearly_dataset.zip'
elif dsid == 'm1_quarterly_dataset': url = 'https://zenodo.org/record/4656154/files/m1_quarterly_dataset.zip'
elif dsid == 'm1_monthly_dataset': url = 'https://zenodo.org/record/4656159/files/m1_monthly_dataset.zip'
elif dsid == 'm3_yearly_dataset': url = 'https://zenodo.org/record/4656222/files/m3_yearly_dataset.zip'
elif dsid == 'm3_quarterly_dataset': url = 'https://zenodo.org/record/4656262/files/m3_quarterly_dataset.zip'
elif dsid == 'm3_monthly_dataset': url = 'https://zenodo.org/record/4656298/files/m3_monthly_dataset.zip'
elif dsid == 'm3_other_dataset': url = 'https://zenodo.org/record/4656335/files/m3_other_dataset.zip'
elif dsid == 'm4_yearly_dataset': url = 'https://zenodo.org/record/4656379/files/m4_yearly_dataset.zip'
elif dsid == 'm4_quarterly_dataset': url = 'https://zenodo.org/record/4656410/files/m4_quarterly_dataset.zip'
elif dsid == 'm4_monthly_dataset': url = 'https://zenodo.org/record/4656480/files/m4_monthly_dataset.zip'
elif dsid == 'm4_weekly_dataset': url = 'https://zenodo.org/record/4656522/files/m4_weekly_dataset.zip'
elif dsid == 'm4_daily_dataset': url = 'https://zenodo.org/record/4656548/files/m4_daily_dataset.zip'
elif dsid == 'm4_hourly_dataset': url = 'https://zenodo.org/record/4656589/files/m4_hourly_dataset.zip'
elif dsid == 'tourism_yearly_dataset': url = 'https://zenodo.org/record/4656103/files/tourism_yearly_dataset.zip'
elif dsid == 'tourism_quarterly_dataset': url = 'https://zenodo.org/record/4656093/files/tourism_quarterly_dataset.zip'
elif dsid == 'tourism_monthly_dataset': url = 'https://zenodo.org/record/4656096/files/tourism_monthly_dataset.zip'
elif dsid == 'nn5_daily_dataset_with_missing_values': url = 'https://zenodo.org/record/4656110/files/nn5_daily_dataset_with_missing_values.zip'
elif dsid == 'nn5_daily_dataset_without_missing_values': url = 'https://zenodo.org/record/4656117/files/nn5_daily_dataset_without_missing_values.zip'
elif dsid == 'nn5_weekly_dataset': url = 'https://zenodo.org/record/4656125/files/nn5_weekly_dataset.zip'
elif dsid == 'cif_2016_dataset': url = 'https://zenodo.org/record/4656042/files/cif_2016_dataset.zip'
elif dsid == 'kaggle_web_traffic_dataset_with_missing_values': url = 'https://zenodo.org/record/4656080/files/kaggle_web_traffic_dataset_with_missing_values.zip'
elif dsid == 'kaggle_web_traffic_dataset_without_missing_values': url = 'https://zenodo.org/record/4656075/files/kaggle_web_traffic_dataset_without_missing_values.zip'
    elif dsid == 'kaggle_web_traffic_weekly_dataset': url = 'https://zenodo.org/record/4656664/files/kaggle_web_traffic_weekly_dataset.zip'
elif dsid == 'solar_10_minutes_dataset': url = 'https://zenodo.org/record/4656144/files/solar_10_minutes_dataset.zip'
elif dsid == 'solar_weekly_dataset': url = 'https://zenodo.org/record/4656151/files/solar_weekly_dataset.zip'
elif dsid == 'electricity_hourly_dataset': url = 'https://zenodo.org/record/4656140/files/electricity_hourly_dataset.zip'
elif dsid == 'electricity_weekly_dataset': url = 'https://zenodo.org/record/4656141/files/electricity_weekly_dataset.zip'
elif dsid == 'london_smart_meters_dataset_with_missing_values': url = 'https://zenodo.org/record/4656072/files/london_smart_meters_dataset_with_missing_values.zip'
elif dsid == 'london_smart_meters_dataset_without_missing_values': url = 'https://zenodo.org/record/4656091/files/london_smart_meters_dataset_without_missing_values.zip'
elif dsid == 'wind_farms_minutely_dataset_with_missing_values': url = 'https://zenodo.org/record/4654909/files/wind_farms_minutely_dataset_with_missing_values.zip'
elif dsid == 'wind_farms_minutely_dataset_without_missing_values': url = 'https://zenodo.org/record/4654858/files/wind_farms_minutely_dataset_without_missing_values.zip'
elif dsid == 'car_parts_dataset_with_missing_values': url = 'https://zenodo.org/record/4656022/files/car_parts_dataset_with_missing_values.zip'
elif dsid == 'car_parts_dataset_without_missing_values': url = 'https://zenodo.org/record/4656021/files/car_parts_dataset_without_missing_values.zip'
elif dsid == 'dominick_dataset': url = 'https://zenodo.org/record/4654802/files/dominick_dataset.zip'
elif dsid == 'fred_md_dataset': url = 'https://zenodo.org/record/4654833/files/fred_md_dataset.zip'
elif dsid == 'traffic_hourly_dataset': url = 'https://zenodo.org/record/4656132/files/traffic_hourly_dataset.zip'
elif dsid == 'traffic_weekly_dataset': url = 'https://zenodo.org/record/4656135/files/traffic_weekly_dataset.zip'
elif dsid == 'pedestrian_counts_dataset': url = 'https://zenodo.org/record/4656626/files/pedestrian_counts_dataset.zip'
elif dsid == 'hospital_dataset': url = 'https://zenodo.org/record/4656014/files/hospital_dataset.zip'
elif dsid == 'covid_deaths_dataset': url = 'https://zenodo.org/record/4656009/files/covid_deaths_dataset.zip'
elif dsid == 'kdd_cup_2018_dataset_with_missing_values': url = 'https://zenodo.org/record/4656719/files/kdd_cup_2018_dataset_with_missing_values.zip'
elif dsid == 'kdd_cup_2018_dataset_without_missing_values': url = 'https://zenodo.org/record/4656756/files/kdd_cup_2018_dataset_without_missing_values.zip'
elif dsid == 'weather_dataset': url = 'https://zenodo.org/record/4654822/files/weather_dataset.zip'
elif dsid == 'sunspot_dataset_with_missing_values': url = 'https://zenodo.org/record/4654773/files/sunspot_dataset_with_missing_values.zip'
elif dsid == 'sunspot_dataset_without_missing_values': url = 'https://zenodo.org/record/4654722/files/sunspot_dataset_without_missing_values.zip'
elif dsid == 'saugeenday_dataset': url = 'https://zenodo.org/record/4656058/files/saugeenday_dataset.zip'
elif dsid == 'us_births_dataset': url = 'https://zenodo.org/record/4656049/files/us_births_dataset.zip'
elif dsid == 'elecdemand_dataset': url = 'https://zenodo.org/record/4656069/files/elecdemand_dataset.zip'
elif dsid == 'solar_4_seconds_dataset': url = 'https://zenodo.org/record/4656027/files/solar_4_seconds_dataset.zip'
elif dsid == 'wind_4_seconds_dataset': url = 'https://zenodo.org/record/4656032/files/wind_4_seconds_dataset.zip'
path = Path(path)
full_path = path/f'{dsid}.tsf'
if not full_path.exists() or force_download:
try:
decompress_from_url(url, target_dir=path, verbose=verbose)
except Exception as inst:
print(inst)
pv("converting dataframe to numpy array...", verbose)
data, frequency, forecast_horizon, contain_missing_values, contain_equal_length = convert_tsf_to_dataframe(full_path)
X = to3d(stack_pad(data['series_value']))
pv("...dataframe converted to numpy array", verbose)
pv(f'\nX.shape: {X.shape}', verbose)
pv(f'freq: {frequency}', verbose)
pv(f'forecast_horizon: {forecast_horizon}', verbose)
pv(f'contain_missing_values: {contain_missing_values}', verbose)
pv(f'contain_equal_length: {contain_equal_length}', verbose=verbose)
if remove_from_disk: os.remove(full_path)
return X
get_forecasting_data = get_Monash_forecasting_data
dsid = 'm1_yearly_dataset'
X = get_Monash_forecasting_data(dsid, force_download=False)
if X is not None:
test_eq(X.shape, (181, 1, 58))
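# The 3d array above is built with stack_pad, which pads shorter series (with np.nan by
# default) up to the longest series length. A quick sketch to recover each series' true
# length from that padding:
if X is not None:
    _lengths = (~np.isnan(X[:, 0])).sum(-1)
    print(_lengths.min(), _lengths.max(), X.shape[-1])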
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
External data> Helper functions used to download and extract common time series datasets.
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.validation import *
#export
from sktime.utils.data_io import load_from_tsfile_to_dataframe as ts2df
from sktime.utils.validation.panel import check_X
from sktime.utils.data_io import TsFileParseException
#export
from fastai.data.external import *
from tqdm import tqdm
import zipfile
import tempfile
try: from urllib import urlretrieve
except ImportError: from urllib.request import urlretrieve
import shutil
from numpy import distutils
import distutils
import distutils.util  # distutils.util.strtobool is used below when parsing .tsf metadata
#export
def decompress_from_url(url, target_dir=None, verbose=False):
# Download
try:
pv("downloading data...", verbose)
fname = os.path.basename(url)
tmpdir = tempfile.mkdtemp()
tmpfile = os.path.join(tmpdir, fname)
urlretrieve(url, tmpfile)
pv("...data downloaded", verbose)
# Decompress
try:
pv("decompressing data...", verbose)
if not os.path.exists(target_dir): os.makedirs(target_dir)
shutil.unpack_archive(tmpfile, target_dir)
shutil.rmtree(tmpdir)
pv("...data decompressed", verbose)
return target_dir
except:
shutil.rmtree(tmpdir)
if verbose: sys.stderr.write("Could not decompress file, aborting.\n")
except:
shutil.rmtree(tmpdir)
if verbose:
sys.stderr.write("Could not download url. Please, check url.\n")
#export
from fastdownload import download_url
def download_data(url, fname=None, c_key='archive', force_download=False, timeout=4, verbose=False):
"Download `url` to `fname`."
fname = Path(fname or URLs.path(url, c_key=c_key))
fname.parent.mkdir(parents=True, exist_ok=True)
if not fname.exists() or force_download: download_url(url, dest=fname, timeout=timeout, show_progress=verbose)
return fname
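# A quick look (a sketch, no download is triggered) at where download_data would place a
# file when fname is None: it falls back to fastai's URLs.path, shown here for the
# Sunspots csv url used later in this notebook.
print(URLs.path("https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Sunspots.csv", c_key='archive'))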
# export
def get_UCR_univariate_list():
return [
'ACSF1', 'Adiac', 'AllGestureWiimoteX', 'AllGestureWiimoteY',
'AllGestureWiimoteZ', 'ArrowHead', 'Beef', 'BeetleFly', 'BirdChicken',
'BME', 'Car', 'CBF', 'Chinatown', 'ChlorineConcentration',
'CinCECGTorso', 'Coffee', 'Computers', 'CricketX', 'CricketY',
'CricketZ', 'Crop', 'DiatomSizeReduction',
'DistalPhalanxOutlineAgeGroup', 'DistalPhalanxOutlineCorrect',
'DistalPhalanxTW', 'DodgerLoopDay', 'DodgerLoopGame',
'DodgerLoopWeekend', 'Earthquakes', 'ECG200', 'ECG5000', 'ECGFiveDays',
'ElectricDevices', 'EOGHorizontalSignal', 'EOGVerticalSignal',
'EthanolLevel', 'FaceAll', 'FaceFour', 'FacesUCR', 'FiftyWords',
'Fish', 'FordA', 'FordB', 'FreezerRegularTrain', 'FreezerSmallTrain',
'Fungi', 'GestureMidAirD1', 'GestureMidAirD2', 'GestureMidAirD3',
'GesturePebbleZ1', 'GesturePebbleZ2', 'GunPoint', 'GunPointAgeSpan',
'GunPointMaleVersusFemale', 'GunPointOldVersusYoung', 'Ham',
'HandOutlines', 'Haptics', 'Herring', 'HouseTwenty', 'InlineSkate',
'InsectEPGRegularTrain', 'InsectEPGSmallTrain', 'InsectWingbeatSound',
'ItalyPowerDemand', 'LargeKitchenAppliances', 'Lightning2',
'Lightning7', 'Mallat', 'Meat', 'MedicalImages', 'MelbournePedestrian',
'MiddlePhalanxOutlineAgeGroup', 'MiddlePhalanxOutlineCorrect',
'MiddlePhalanxTW', 'MixedShapesRegularTrain', 'MixedShapesSmallTrain',
'MoteStrain', 'NonInvasiveFetalECGThorax1',
'NonInvasiveFetalECGThorax2', 'OliveOil', 'OSULeaf',
'PhalangesOutlinesCorrect', 'Phoneme', 'PickupGestureWiimoteZ',
'PigAirwayPressure', 'PigArtPressure', 'PigCVP', 'PLAID', 'Plane',
'PowerCons', 'ProximalPhalanxOutlineAgeGroup',
'ProximalPhalanxOutlineCorrect', 'ProximalPhalanxTW',
'RefrigerationDevices', 'Rock', 'ScreenType', 'SemgHandGenderCh2',
'SemgHandMovementCh2', 'SemgHandSubjectCh2', 'ShakeGestureWiimoteZ',
'ShapeletSim', 'ShapesAll', 'SmallKitchenAppliances', 'SmoothSubspace',
'SonyAIBORobotSurface1', 'SonyAIBORobotSurface2', 'StarLightCurves',
'Strawberry', 'SwedishLeaf', 'Symbols', 'SyntheticControl',
'ToeSegmentation1', 'ToeSegmentation2', 'Trace', 'TwoLeadECG',
'TwoPatterns', 'UMD', 'UWaveGestureLibraryAll', 'UWaveGestureLibraryX',
'UWaveGestureLibraryY', 'UWaveGestureLibraryZ', 'Wafer', 'Wine',
'WordSynonyms', 'Worms', 'WormsTwoClass', 'Yoga'
]
test_eq(len(get_UCR_univariate_list()), 128)
UTSC_datasets = get_UCR_univariate_list()
UCR_univariate_list = get_UCR_univariate_list()
#export
def get_UCR_multivariate_list():
return [
'ArticularyWordRecognition', 'AtrialFibrillation', 'BasicMotions',
'CharacterTrajectories', 'Cricket', 'DuckDuckGeese', 'EigenWorms',
'Epilepsy', 'ERing', 'EthanolConcentration', 'FaceDetection',
'FingerMovements', 'HandMovementDirection', 'Handwriting', 'Heartbeat',
'InsectWingbeat', 'JapaneseVowels', 'Libras', 'LSST', 'MotorImagery',
'NATOPS', 'PEMS-SF', 'PenDigits', 'PhonemeSpectra', 'RacketSports',
'SelfRegulationSCP1', 'SelfRegulationSCP2', 'SpokenArabicDigits',
'StandWalkJump', 'UWaveGestureLibrary'
]
test_eq(len(get_UCR_multivariate_list()), 30)
MTSC_datasets = get_UCR_multivariate_list()
UCR_multivariate_list = get_UCR_multivariate_list()
UCR_list = sorted(UCR_univariate_list + UCR_multivariate_list)
classification_list = UCR_list
TSC_datasets = classification_datasets = UCR_list
len(UCR_list)
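# Small helper sketch (not part of the library): report whether a UCR dataset id is
# univariate or multivariate, matching names case-insensitively like get_UCR_data does.
def _ucr_type(name):
    if any(ds.lower() == name.lower() for ds in UCR_univariate_list): return 'univariate'
    if any(ds.lower() == name.lower() for ds in UCR_multivariate_list): return 'multivariate'
    return 'unknown'
test_eq(_ucr_type('natops'), 'multivariate')
test_eq(_ucr_type('ECGFiveDays'), 'univariate')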
#export
def get_UCR_data(dsid, path='.', parent_dir='data/UCR', on_disk=True, mode='c', Xdtype='float32', ydtype=None, return_split=True, split_data=True,
force_download=False, verbose=False):
dsid_list = [ds for ds in UCR_list if ds.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a UCR dataset'
dsid = dsid_list[0]
return_split = return_split and split_data # keep return_split for compatibility. It will be replaced by split_data
if dsid in ['InsectWingbeat']:
warnings.warn(f'Be aware that download of the {dsid} dataset is very slow!')
pv(f'Dataset: {dsid}', verbose)
full_parent_dir = Path(path)/parent_dir
full_tgt_dir = full_parent_dir/dsid
# if not os.path.exists(full_tgt_dir): os.makedirs(full_tgt_dir)
full_tgt_dir.parent.mkdir(parents=True, exist_ok=True)
if force_download or not all([os.path.isfile(f'{full_tgt_dir}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):
# Option A
src_website = 'http://www.timeseriesclassification.com/Downloads'
decompress_from_url(f'{src_website}/{dsid}.zip', target_dir=full_tgt_dir, verbose=verbose)
if dsid == 'DuckDuckGeese':
with zipfile.ZipFile(Path(f'{full_parent_dir}/DuckDuckGeese/DuckDuckGeese_ts.zip'), 'r') as zip_ref:
zip_ref.extractall(Path(parent_dir))
        if not os.path.exists(full_tgt_dir/f'{dsid}_TRAIN.ts') or not os.path.exists(full_tgt_dir/f'{dsid}_TEST.ts') or \
Path(full_tgt_dir/f'{dsid}_TRAIN.ts').stat().st_size == 0 or Path(full_tgt_dir/f'{dsid}_TEST.ts').stat().st_size == 0:
print('It has not been possible to download the required files')
if return_split:
return None, None, None, None
else:
return None, None, None
pv('loading ts files to dataframe...', verbose)
X_train_df, y_train = ts2df(full_tgt_dir/f'{dsid}_TRAIN.ts')
X_valid_df, y_valid = ts2df(full_tgt_dir/f'{dsid}_TEST.ts')
pv('...ts files loaded', verbose)
pv('preparing numpy arrays...', verbose)
X_train_ = []
X_valid_ = []
for i in progress_bar(range(X_train_df.shape[-1]), display=verbose, leave=False):
X_train_.append(stack_pad(X_train_df[f'dim_{i}'])) # stack arrays even if they have different lengths
X_valid_.append(stack_pad(X_valid_df[f'dim_{i}'])) # stack arrays even if they have different lengths
X_train = np.transpose(np.stack(X_train_, axis=-1), (0, 2, 1))
X_valid = np.transpose(np.stack(X_valid_, axis=-1), (0, 2, 1))
X_train, X_valid = match_seq_len(X_train, X_valid)
np.save(f'{full_tgt_dir}/X_train.npy', X_train)
np.save(f'{full_tgt_dir}/y_train.npy', y_train)
np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)
np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)
np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))
np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))
del X_train, X_valid, y_train, y_valid
delete_all_in_dir(full_tgt_dir, exception='.npy')
pv('...numpy arrays correctly saved', verbose)
mmap_mode = mode if on_disk else None
X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)
y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)
X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)
y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)
if return_split:
if Xdtype is not None:
X_train = X_train.astype(Xdtype)
X_valid = X_valid.astype(Xdtype)
if ydtype is not None:
y_train = y_train.astype(ydtype)
y_valid = y_valid.astype(ydtype)
if verbose:
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_valid:', X_valid.shape)
print('y_valid:', y_valid.shape, '\n')
return X_train, y_train, X_valid, y_valid
else:
X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)
y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)
splits = get_predefined_splits(X_train, X_valid)
if Xdtype is not None:
X = X.astype(Xdtype)
if verbose:
print('X :', X .shape)
print('y :', y .shape)
print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n')
return X, y, splits
get_classification_data = get_UCR_data
#hide
PATH = Path('.')
dsids = ['ECGFiveDays', 'AtrialFibrillation'] # univariate and multivariate
for dsid in dsids:
print(dsid)
tgt_dir = PATH/f'data/UCR/{dsid}'
if os.path.isdir(tgt_dir): shutil.rmtree(tgt_dir)
test_eq(len(get_files(tgt_dir)), 0) # no file left
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)
test_eq(len(get_files(tgt_dir, '.npy')), 6)
test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test no left file/ dir
del X_train, y_train, X_valid, y_valid
start = time.time()
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid)
elapsed = time.time() - start
test_eq(elapsed < 1, True)
test_eq(X_train.ndim, 3)
test_eq(y_train.ndim, 1)
test_eq(X_valid.ndim, 3)
test_eq(y_valid.ndim, 1)
test_eq(len(get_files(tgt_dir, '.npy')), 6)
test_eq(len(get_files(tgt_dir, '.npy')), len(get_files(tgt_dir))) # test no left file/ dir
test_eq(X_train.ndim, 3)
test_eq(y_train.ndim, 1)
test_eq(X_valid.ndim, 3)
test_eq(y_valid.ndim, 1)
test_eq(X_train.dtype, np.float32)
test_eq(X_train.__class__.__name__, 'memmap')
del X_train, y_train, X_valid, y_valid
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, on_disk=False)
test_eq(X_train.__class__.__name__, 'ndarray')
del X_train, y_train, X_valid, y_valid
X_train, y_train, X_valid, y_valid = get_UCR_data('natops')
dsid = 'natops'
X_train, y_train, X_valid, y_valid = get_UCR_data(dsid, verbose=True)
X, y, splits = get_UCR_data(dsid, split_data=False)
test_eq(X[splits[0]], X_train)
test_eq(y[splits[1]], y_valid)
test_eq(X[splits[0]], X_train)
test_eq(y[splits[1]], y_valid)
test_type(X, X_train)
test_type(y, y_train)
#export
def check_data(X, y=None, splits=None, show_plot=True):
try: X_is_nan = np.isnan(X).sum()
except: X_is_nan = 'could not be checked'
if X.ndim == 3:
shape = f'[{X.shape[0]} samples x {X.shape[1]} features x {X.shape[-1]} timesteps]'
print(f'X - shape: {shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')
else:
print(f'X - shape: {X.shape} type: {cls_name(X)} dtype:{X.dtype} isnan: {X_is_nan}')
if X_is_nan:
warnings.warn('X contains nan values')
if y is not None:
y_shape = y.shape
y = y.ravel()
if isinstance(y[0], str):
n_classes = f'{len(np.unique(y))} ({len(y)//len(np.unique(y))} samples per class) {L(np.unique(y).tolist())}'
y_is_nan = 'nan' in [c.lower() for c in np.unique(y)]
print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} n_classes: {n_classes} isnan: {y_is_nan}')
else:
y_is_nan = np.isnan(y).sum()
print(f'y - shape: {y_shape} type: {cls_name(y)} dtype:{y.dtype} isnan: {y_is_nan}')
if y_is_nan:
warnings.warn('y contains nan values')
if splits is not None:
_splits = get_splits_len(splits)
overlap = check_splits_overlap(splits)
print(f'splits - n_splits: {len(_splits)} shape: {_splits} overlap: {overlap}')
if show_plot: plot_splits(splits)
dsid = 'ECGFiveDays'
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
check_data(X, y, splits)
check_data(X[:, 0], y, splits)
y = y.astype(np.float32)
check_data(X, y, splits)
y[:10] = np.nan
check_data(X[:, 0], y, splits)
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
splits = get_splits(y, 3)
check_data(X, y, splits)
check_data(X[:, 0], y, splits)
y[:5]= np.nan
check_data(X[:, 0], y, splits)
X, y, splits = get_UCR_data(dsid, split_data=False, on_disk=False, force_download=False)
#export
# This code comes from https://github.com/ChangWeiTan/TSRegression. As of Jan 16th, 2021 there's no pip install available.
# The following code is adapted from the python package sktime to read .ts file.
class _TsFileParseException(Exception):
"""
Should be raised when parsing a .ts file and the format is incorrect.
"""
pass
def _load_from_tsfile_to_dataframe2(full_file_path_and_name, return_separate_X_and_y=True, replace_missing_vals_with='NaN'):
"""Loads data from a .ts file into a Pandas DataFrame.
Parameters
----------
full_file_path_and_name: str
The full pathname of the .ts file to read.
return_separate_X_and_y: bool
true if X and Y values should be returned as separate Data Frames (X) and a numpy array (y), false otherwise.
        This is only relevant for data that has an associated class or target value for each series.
replace_missing_vals_with: str
The value that missing values in the text file should be replaced with prior to parsing.
Returns
-------
DataFrame, ndarray
If return_separate_X_and_y then a tuple containing a DataFrame and a numpy array containing the relevant time-series and corresponding class values.
DataFrame
If not return_separate_X_and_y then a single DataFrame containing all time-series and (if relevant) a column "class_vals" the associated class values.
"""
# Initialize flags and variables used when parsing the file
metadata_started = False
data_started = False
has_problem_name_tag = False
has_timestamps_tag = False
has_univariate_tag = False
has_class_labels_tag = False
has_target_labels_tag = False
has_data_tag = False
previous_timestamp_was_float = None
previous_timestamp_was_int = None
previous_timestamp_was_timestamp = None
num_dimensions = None
is_first_case = True
instance_list = []
class_val_list = []
line_num = 0
# Parse the file
# print(full_file_path_and_name)
with open(full_file_path_and_name, 'r', encoding='utf-8') as file:
for line in tqdm(file):
# print(".", end='')
# Strip white space from start/end of line and change to lowercase for use below
line = line.strip().lower()
# Empty lines are valid at any point in a file
if line:
# Check if this line contains metadata
# Please note that even though metadata is stored in this function it is not currently published externally
if line.startswith("@problemname"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("problemname tag requires an associated value")
problem_name = line[len("@problemname") + 1:]
has_problem_name_tag = True
metadata_started = True
elif line.startswith("@timestamps"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException("timestamps tag requires an associated Boolean value")
elif tokens[1] == "true":
timestamps = True
elif tokens[1] == "false":
timestamps = False
else:
raise _TsFileParseException("invalid timestamps value")
has_timestamps_tag = True
metadata_started = True
elif line.startswith("@univariate"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len != 2:
raise _TsFileParseException("univariate tag requires an associated Boolean value")
elif tokens[1] == "true":
univariate = True
elif tokens[1] == "false":
univariate = False
else:
raise _TsFileParseException("invalid univariate value")
has_univariate_tag = True
metadata_started = True
elif line.startswith("@classlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("classlabel tag requires an associated Boolean value")
if tokens[1] == "true":
class_labels = True
elif tokens[1] == "false":
class_labels = False
else:
raise _TsFileParseException("invalid classLabel value")
# Check if we have any associated class values
if token_len == 2 and class_labels:
raise _TsFileParseException("if the classlabel tag is true then class values must be supplied")
has_class_labels_tag = True
class_label_list = [token.strip() for token in tokens[2:]]
metadata_started = True
elif line.startswith("@targetlabel"):
# Check that the data has not started
if data_started:
raise _TsFileParseException("metadata must come before data")
# Check that the associated value is valid
tokens = line.split(' ')
token_len = len(tokens)
if token_len == 1:
raise _TsFileParseException("targetlabel tag requires an associated Boolean value")
if tokens[1] == "true":
target_labels = True
elif tokens[1] == "false":
target_labels = False
else:
raise _TsFileParseException("invalid targetLabel value")
has_target_labels_tag = True
class_val_list = []
metadata_started = True
# Check if this line contains the start of data
elif line.startswith("@data"):
if line != "@data":
raise _TsFileParseException("data tag should not have an associated value")
if data_started and not metadata_started:
raise _TsFileParseException("metadata must come before data")
else:
has_data_tag = True
data_started = True
                # If the @data tag has been found then metadata has been parsed and data can be loaded
elif data_started:
# Check that a full set of metadata has been provided
incomplete_regression_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_target_labels_tag or not has_data_tag
incomplete_classification_meta_data = not has_problem_name_tag or not has_timestamps_tag or not has_univariate_tag or not has_class_labels_tag or not has_data_tag
if incomplete_regression_meta_data and incomplete_classification_meta_data:
raise _TsFileParseException("a full set of metadata has not been provided before the data")
# Replace any missing values with the value specified
line = line.replace("?", replace_missing_vals_with)
                    # Check if we are dealing with data that has timestamps
if timestamps:
# We're dealing with timestamps so cannot just split line on ':' as timestamps may contain one
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
this_line_num_dimensions = 0
line_len = len(line)
char_num = 0
while char_num < line_len:
# Move through any spaces
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
                            # See if there is any more data to read in or if we should validate what has been read thus far
if char_num < line_len:
# See if we have an empty dimension (i.e. no values)
if line[char_num] == ":":
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series())
this_line_num_dimensions += 1
has_another_value = False
has_another_dimension = True
timestamps_for_dimension = []
values_for_dimension = []
char_num += 1
else:
# Check if we have reached a class label
if line[char_num] != "(" and target_labels:
class_val = line[char_num:].strip()
# if class_val not in class_val_list:
# raise _TsFileParseException(
# "the class value '" + class_val + "' on line " + str(
# line_num + 1) + " is not valid")
class_val_list.append(float(class_val))
char_num = line_len
has_another_value = False
has_another_dimension = False
timestamps_for_dimension = []
values_for_dimension = []
else:
# Read in the data contained within the next tuple
if line[char_num] != "(" and not target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not start with a '('")
char_num += 1
tuple_data = ""
while char_num < line_len and line[char_num] != ")":
tuple_data += line[char_num]
char_num += 1
if char_num >= line_len or line[char_num] != ")":
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " does not end with a ')'")
# Read in any spaces immediately after the current tuple
char_num += 1
while char_num < line_len and str.isspace(line[char_num]):
char_num += 1
# Check if there is another value or dimension to process after this tuple
if char_num >= line_len:
has_another_value = False
has_another_dimension = False
elif line[char_num] == ",":
has_another_value = True
has_another_dimension = False
elif line[char_num] == ":":
has_another_value = False
has_another_dimension = True
char_num += 1
# Get the numeric value for the tuple by reading from the end of the tuple data backwards to the last comma
last_comma_index = tuple_data.rfind(',')
if last_comma_index == -1:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has no comma inside of it")
try:
value = tuple_data[last_comma_index + 1:]
value = float(value)
except ValueError:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that does not have a valid numeric value")
# Check the type of timestamp that we have
timestamp = tuple_data[0: last_comma_index]
try:
timestamp = int(timestamp)
timestamp_is_int = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_int = False
if not timestamp_is_int:
try:
timestamp = float(timestamp)
timestamp_is_float = True
timestamp_is_timestamp = False
except ValueError:
timestamp_is_float = False
if not timestamp_is_int and not timestamp_is_float:
try:
timestamp = timestamp.strip()
timestamp_is_timestamp = True
except ValueError:
timestamp_is_timestamp = False
# Make sure that the timestamps in the file (not just this dimension or case) are consistent
if not timestamp_is_timestamp and not timestamp_is_int and not timestamp_is_float:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains a tuple that has an invalid timestamp '" + timestamp + "'")
if previous_timestamp_was_float is not None and previous_timestamp_was_float and not timestamp_is_float:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_int is not None and previous_timestamp_was_int and not timestamp_is_int:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
if previous_timestamp_was_timestamp is not None and previous_timestamp_was_timestamp and not timestamp_is_timestamp:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " contains tuples where the timestamp format is inconsistent")
# Store the values
timestamps_for_dimension += [timestamp]
values_for_dimension += [value]
# If this was our first tuple then we store the type of timestamp we had
if previous_timestamp_was_timestamp is None and timestamp_is_timestamp:
previous_timestamp_was_timestamp = True
previous_timestamp_was_int = False
previous_timestamp_was_float = False
if previous_timestamp_was_int is None and timestamp_is_int:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = True
previous_timestamp_was_float = False
if previous_timestamp_was_float is None and timestamp_is_float:
previous_timestamp_was_timestamp = False
previous_timestamp_was_int = False
previous_timestamp_was_float = True
# See if we should add the data for this dimension
if not has_another_value:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
if timestamp_is_timestamp:
timestamps_for_dimension = pd.DatetimeIndex(timestamps_for_dimension)
instance_list[this_line_num_dimensions].append(
pd.Series(index=timestamps_for_dimension, data=values_for_dimension))
this_line_num_dimensions += 1
timestamps_for_dimension = []
values_for_dimension = []
elif has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series(dtype=np.float32))
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and not has_another_dimension:
if num_dimensions is None:
num_dimensions = this_line_num_dimensions
if num_dimensions != this_line_num_dimensions:
raise _TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
                        # Check that we are not expecting more data and, if not, store what was processed above
if has_another_value:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ',' that is not followed by another tuple")
elif has_another_dimension and target_labels:
raise _TsFileParseException(
"dimension " + str(this_line_num_dimensions + 1) + " on line " + str(
line_num + 1) + " ends with a ':' while it should list a class value")
elif has_another_dimension and not target_labels:
if len(instance_list) < (this_line_num_dimensions + 1):
instance_list.append([])
instance_list[this_line_num_dimensions].append(pd.Series())
this_line_num_dimensions += 1
num_dimensions = this_line_num_dimensions
# If this is the 1st line of data we have seen then note the dimensions
if not has_another_value and num_dimensions != this_line_num_dimensions:
raise _TsFileParseException("line " + str(
line_num + 1) + " does not have the same number of dimensions as the previous line of data")
# Check if we should have class values, and if so that they are contained in those listed in the metadata
if target_labels and len(class_val_list) == 0:
raise _TsFileParseException("the cases have no associated class values")
else:
dimensions = line.split(":")
# If first row then note the number of dimensions (that must be the same for all cases)
if is_first_case:
num_dimensions = len(dimensions)
if target_labels:
num_dimensions -= 1
for dim in range(0, num_dimensions):
instance_list.append([])
is_first_case = False
                        # See how many dimensions the case represented in this line has
this_line_num_dimensions = len(dimensions)
if target_labels:
this_line_num_dimensions -= 1
# All dimensions should be included for all series, even if they are empty
if this_line_num_dimensions != num_dimensions:
raise _TsFileParseException("inconsistent number of dimensions. Expecting " + str(
num_dimensions) + " but have read " + str(this_line_num_dimensions))
# Process the data for each dimension
for dim in range(0, num_dimensions):
dimension = dimensions[dim].strip()
if dimension:
data_series = dimension.split(",")
data_series = [float(i) for i in data_series]
instance_list[dim].append(pd.Series(data_series))
else:
instance_list[dim].append(pd.Series())
if target_labels:
class_val_list.append(float(dimensions[num_dimensions].strip()))
line_num += 1
# Check that the file was not empty
if line_num:
# Check that the file contained both metadata and data
complete_regression_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_target_labels_tag and has_data_tag
complete_classification_meta_data = has_problem_name_tag and has_timestamps_tag and has_univariate_tag and has_class_labels_tag and has_data_tag
if metadata_started and not complete_regression_meta_data and not complete_classification_meta_data:
raise _TsFileParseException("metadata incomplete")
elif metadata_started and not data_started:
raise _TsFileParseException("file contained metadata but no data")
elif metadata_started and data_started and len(instance_list) == 0:
raise _TsFileParseException("file contained metadata but no data")
# Create a DataFrame from the data parsed above
data = pd.DataFrame(dtype=np.float32)
for dim in range(0, num_dimensions):
data['dim_' + str(dim)] = instance_list[dim]
# Check if we should return any associated class labels separately
if target_labels:
if return_separate_X_and_y:
return data, np.asarray(class_val_list)
else:
data['class_vals'] = pd.Series(class_val_list)
return data
else:
return data
else:
raise _TsFileParseException("empty file")
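# Sketch of how _load_from_tsfile_to_dataframe2 is consumed by get_Monash_regression_data
# below: the nested DataFrame it returns is coerced to a 3d numpy array with sktime's
# check_X. The path is only an example and the block is skipped when the file is absent
# (the .ts files are deleted once the .npy arrays have been saved).
_ts_file = Path('./data/Monash/Covid3Month/Covid3Month_TRAIN.ts')
if _ts_file.exists():
    _X_df, _y = _load_from_tsfile_to_dataframe2(_ts_file)
    print(check_X(_X_df, coerce_to_numpy=True).shape, _y.shape)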
#export
def get_Monash_regression_list():
return sorted([
"AustraliaRainfall", "HouseholdPowerConsumption1",
"HouseholdPowerConsumption2", "BeijingPM25Quality",
"BeijingPM10Quality", "Covid3Month", "LiveFuelMoistureContent",
"FloodModeling1", "FloodModeling2", "FloodModeling3",
"AppliancesEnergy", "BenzeneConcentration", "NewsHeadlineSentiment",
"NewsTitleSentiment", "IEEEPPG",
#"BIDMC32RR", "BIDMC32HR", "BIDMC32SpO2", "PPGDalia" # Cannot be downloaded
])
Monash_regression_list = get_Monash_regression_list()
regression_list = Monash_regression_list
TSR_datasets = regression_datasets = regression_list
len(Monash_regression_list)
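# Quick sanity check (a sketch): the regression ids are unique and already sorted, so the
# case-insensitive lookup in get_Monash_regression_data below resolves each name to at
# most one dataset.
test_eq(len(set(Monash_regression_list)), len(Monash_regression_list))
test_eq(Monash_regression_list, sorted(Monash_regression_list))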
#export
def get_Monash_regression_data(dsid, path='./data/Monash', on_disk=True, mode='c', Xdtype='float32', ydtype=None, split_data=True, force_download=False,
verbose=False, timeout=4):
dsid_list = [rd for rd in Monash_regression_list if rd.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a Monash dataset'
dsid = dsid_list[0]
full_tgt_dir = Path(path)/dsid
pv(f'Dataset: {dsid}', verbose)
if force_download or not all([os.path.isfile(f'{path}/{dsid}/{fn}.npy') for fn in ['X_train', 'X_valid', 'y_train', 'y_valid', 'X', 'y']]):
if dsid == 'AppliancesEnergy': dset_id = 3902637
elif dsid == 'HouseholdPowerConsumption1': dset_id = 3902704
elif dsid == 'HouseholdPowerConsumption2': dset_id = 3902706
elif dsid == 'BenzeneConcentration': dset_id = 3902673
elif dsid == 'BeijingPM25Quality': dset_id = 3902671
elif dsid == 'BeijingPM10Quality': dset_id = 3902667
elif dsid == 'LiveFuelMoistureContent': dset_id = 3902716
elif dsid == 'FloodModeling1': dset_id = 3902694
elif dsid == 'FloodModeling2': dset_id = 3902696
elif dsid == 'FloodModeling3': dset_id = 3902698
elif dsid == 'AustraliaRainfall': dset_id = 3902654
elif dsid == 'PPGDalia': dset_id = 3902728
elif dsid == 'IEEEPPG': dset_id = 3902710
        elif dsid == 'BIDMCRR' or dsid == 'BIDMC32RR': dset_id = 3902685
        elif dsid == 'BIDMCHR' or dsid == 'BIDMC32HR': dset_id = 3902676
        elif dsid == 'BIDMCSpO2' or dsid == 'BIDMC32SpO2': dset_id = 3902688
elif dsid == 'NewsHeadlineSentiment': dset_id = 3902718
elif dsid == 'NewsTitleSentiment': dset_id= 3902726
elif dsid == 'Covid3Month': dset_id = 3902690
for split in ['TRAIN', 'TEST']:
url = f"https://zenodo.org/record/{dset_id}/files/{dsid}_{split}.ts"
fname = Path(path)/f'{dsid}/{dsid}_{split}.ts'
pv('downloading data...', verbose)
try:
download_data(url, fname, c_key='archive', force_download=force_download, timeout=timeout)
except Exception as inst:
print(inst)
warnings.warn(f'Cannot download {dsid} dataset')
if split_data: return None, None, None, None
else: return None, None, None
pv('...download complete', verbose)
try:
if split == 'TRAIN':
X_train, y_train = _load_from_tsfile_to_dataframe2(fname)
X_train = check_X(X_train, coerce_to_numpy=True)
else:
X_valid, y_valid = _load_from_tsfile_to_dataframe2(fname)
X_valid = check_X(X_valid, coerce_to_numpy=True)
except Exception as inst:
print(inst)
warnings.warn(f'Cannot create numpy arrays for {dsid} dataset')
if split_data: return None, None, None, None
else: return None, None, None
np.save(f'{full_tgt_dir}/X_train.npy', X_train)
np.save(f'{full_tgt_dir}/y_train.npy', y_train)
np.save(f'{full_tgt_dir}/X_valid.npy', X_valid)
np.save(f'{full_tgt_dir}/y_valid.npy', y_valid)
np.save(f'{full_tgt_dir}/X.npy', concat(X_train, X_valid))
np.save(f'{full_tgt_dir}/y.npy', concat(y_train, y_valid))
del X_train, X_valid, y_train, y_valid
delete_all_in_dir(full_tgt_dir, exception='.npy')
pv('...numpy arrays correctly saved', verbose)
mmap_mode = mode if on_disk else None
X_train = np.load(f'{full_tgt_dir}/X_train.npy', mmap_mode=mmap_mode)
y_train = np.load(f'{full_tgt_dir}/y_train.npy', mmap_mode=mmap_mode)
X_valid = np.load(f'{full_tgt_dir}/X_valid.npy', mmap_mode=mmap_mode)
y_valid = np.load(f'{full_tgt_dir}/y_valid.npy', mmap_mode=mmap_mode)
if Xdtype is not None:
X_train = X_train.astype(Xdtype)
X_valid = X_valid.astype(Xdtype)
if ydtype is not None:
y_train = y_train.astype(ydtype)
y_valid = y_valid.astype(ydtype)
if split_data:
if verbose:
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('X_valid:', X_valid.shape)
print('y_valid:', y_valid.shape, '\n')
return X_train, y_train, X_valid, y_valid
else:
X = np.load(f'{full_tgt_dir}/X.npy', mmap_mode=mmap_mode)
y = np.load(f'{full_tgt_dir}/y.npy', mmap_mode=mmap_mode)
splits = get_predefined_splits(X_train, X_valid)
if verbose:
print('X :', X .shape)
print('y :', y .shape)
print('splits :', coll_repr(splits[0]), coll_repr(splits[1]), '\n')
return X, y, splits
get_regression_data = get_Monash_regression_data
dsid = "Covid3Month"
X_train, y_train, X_valid, y_valid = get_Monash_regression_data(dsid, on_disk=False, split_data=True, force_download=False)
X, y, splits = get_Monash_regression_data(dsid, on_disk=True, split_data=False, force_download=False, verbose=True)
if X_train is not None:
test_eq(X_train.shape, (140, 1, 84))
if X is not None:
test_eq(X.shape, (201, 1, 84))
#export
def get_forecasting_list():
return sorted([
"Sunspots", "Weather"
])
forecasting_time_series = get_forecasting_list()
#export
def get_forecasting_time_series(dsid, path='./data/forecasting/', force_download=False, verbose=True, **kwargs):
dsid_list = [fd for fd in forecasting_time_series if fd.lower() == dsid.lower()]
assert len(dsid_list) > 0, f'{dsid} is not a forecasting dataset'
dsid = dsid_list[0]
if dsid == 'Weather': full_tgt_dir = Path(path)/f'{dsid}.csv.zip'
else: full_tgt_dir = Path(path)/f'{dsid}.csv'
pv(f'Dataset: {dsid}', verbose)
if dsid == 'Sunspots': url = "https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Sunspots.csv"
elif dsid == 'Weather': url = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip'
try:
pv("downloading data...", verbose)
if force_download:
try: os.remove(full_tgt_dir)
except OSError: pass
download_data(url, full_tgt_dir, force_download=force_download, **kwargs)
pv(f"...data downloaded. Path = {full_tgt_dir}", verbose)
if dsid == 'Sunspots':
df = pd.read_csv(full_tgt_dir, parse_dates=['Date'], index_col=['Date'])
return df['Monthly Mean Total Sunspot Number'].asfreq('1M').to_frame()
elif dsid == 'Weather':
# This code comes from a great Keras time-series tutorial notebook (https://www.tensorflow.org/tutorials/structured_data/time_series)
df = pd.read_csv(full_tgt_dir)
df = df[5::6] # slice [start:stop:step], starting from index 5 take every 6th record.
date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')
            # replace the -9999.0 error flag used for bad wind velocity readings with 0.0
wv = df['wv (m/s)']
bad_wv = wv == -9999.0
wv[bad_wv] = 0.0
max_wv = df['max. wv (m/s)']
bad_max_wv = max_wv == -9999.0
max_wv[bad_max_wv] = 0.0
wv = df.pop('wv (m/s)')
max_wv = df.pop('max. wv (m/s)')
# Convert to radians.
wd_rad = df.pop('wd (deg)')*np.pi / 180
# Calculate the wind x and y components.
df['Wx'] = wv*np.cos(wd_rad)
df['Wy'] = wv*np.sin(wd_rad)
# Calculate the max wind x and y components.
df['max Wx'] = max_wv*np.cos(wd_rad)
df['max Wy'] = max_wv*np.sin(wd_rad)
timestamp_s = date_time.map(datetime.timestamp)
day = 24*60*60
year = (365.2425)*day
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
df.reset_index(drop=True, inplace=True)
return df
else:
return full_tgt_dir
except Exception as inst:
print(inst)
warnings.warn(f"Cannot download {dsid} dataset")
return
ts = get_forecasting_time_series("sunspots", force_download=False)
test_eq(len(ts), 3235)
ts
ts = get_forecasting_time_series("weather", force_download=False)
if ts is not None:
test_eq(len(ts), 70091)
print(ts)
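# Toy check (a sketch) of the cyclical time encoding used in the Weather preprocessing
# above: two timestamps exactly 24 hours apart map to the same (Day sin, Day cos) pair,
# which is the point of encoding time of day with sine/cosine features.
_day = 24 * 60 * 60
_t0 = datetime(2016, 1, 1, 12).timestamp()
_t1 = _t0 + _day
test_eq(bool(np.isclose(np.sin(_t0 * (2 * np.pi / _day)), np.sin(_t1 * (2 * np.pi / _day)))), True)
test_eq(bool(np.isclose(np.cos(_t0 * (2 * np.pi / _day)), np.cos(_t1 * (2 * np.pi / _day)))), True)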
# export
Monash_forecasting_list = ['m1_yearly_dataset',
'm1_quarterly_dataset',
'm1_monthly_dataset',
'm3_yearly_dataset',
'm3_quarterly_dataset',
'm3_monthly_dataset',
'm3_other_dataset',
'm4_yearly_dataset',
'm4_quarterly_dataset',
'm4_monthly_dataset',
'm4_weekly_dataset',
'm4_daily_dataset',
'm4_hourly_dataset',
'tourism_yearly_dataset',
'tourism_quarterly_dataset',
'tourism_monthly_dataset',
'nn5_daily_dataset_with_missing_values',
'nn5_daily_dataset_without_missing_values',
'nn5_weekly_dataset',
'cif_2016_dataset',
'kaggle_web_traffic_dataset_with_missing_values',
'kaggle_web_traffic_dataset_without_missing_values',
'kaggle_web_traffic_weekly_dataset',
'solar_10_minutes_dataset',
'solar_weekly_dataset',
'electricity_hourly_dataset',
'electricity_weekly_dataset',
'london_smart_meters_dataset_with_missing_values',
'london_smart_meters_dataset_without_missing_values',
'wind_farms_minutely_dataset_with_missing_values',
'wind_farms_minutely_dataset_without_missing_values',
'car_parts_dataset_with_missing_values',
'car_parts_dataset_without_missing_values',
'dominick_dataset',
'fred_md_dataset',
'traffic_hourly_dataset',
'traffic_weekly_dataset',
'pedestrian_counts_dataset',
'hospital_dataset',
'covid_deaths_dataset',
'kdd_cup_2018_dataset_with_missing_values',
'kdd_cup_2018_dataset_without_missing_values',
'weather_dataset',
'sunspot_dataset_with_missing_values',
'sunspot_dataset_without_missing_values',
'saugeenday_dataset',
'us_births_dataset',
'elecdemand_dataset',
'solar_4_seconds_dataset',
'wind_4_seconds_dataset',
'Sunspots', 'Weather']
forecasting_list = Monash_forecasting_list
# export
## Original code available at: https://github.com/rakshitha123/TSForecasting
# This repository contains the implementations related to the experiments of a set of publicly available datasets that are used in
# the time series forecasting research space.
# The benchmark datasets are available at: https://zenodo.org/communities/forecasting. For more details, please refer to our website:
# https://forecastingdata.org/ and paper: https://arxiv.org/abs/2105.06643.
# Citation:
# @misc{godahewa2021monash,
# author="Godahewa, Rakshitha and Bergmeir, Christoph and Webb, Geoffrey I. and Hyndman, Rob J. and Montero-Manso, Pablo",
# title="Monash Time Series Forecasting Archive",
# howpublished ="\url{https://arxiv.org/abs/2105.06643}",
# year="2021"
# }
# Converts the contents in a .tsf file into a dataframe and returns it along with other meta-data of the dataset: frequency, horizon, whether the dataset contains missing values and whether the series have equal lengths
#
# Parameters
# full_file_path_and_name - complete .tsf file path
# replace_missing_vals_with - a term to indicate the missing values in series in the returning dataframe
# value_column_name - Any name that is preferred to have as the name of the column containing series values in the returning dataframe
def convert_tsf_to_dataframe(full_file_path_and_name, replace_missing_vals_with = 'NaN', value_column_name = "series_value"):
col_names = []
col_types = []
all_data = {}
line_count = 0
frequency = None
forecast_horizon = None
contain_missing_values = None
contain_equal_length = None
found_data_tag = False
found_data_section = False
started_reading_data_section = False
with open(full_file_path_and_name, 'r', encoding='cp1252') as file:
for line in file:
# Strip white space from start/end of line
line = line.strip()
if line:
if line.startswith("@"): # Read meta-data
if not line.startswith("@data"):
line_content = line.split(" ")
if line.startswith("@attribute"):
if (len(line_content) != 3): # Attributes have both name and type
raise TsFileParseException("Invalid meta-data specification.")
col_names.append(line_content[1])
col_types.append(line_content[2])
else:
if len(line_content) != 2: # Other meta-data have only values
raise TsFileParseException("Invalid meta-data specification.")
if line.startswith("@frequency"):
frequency = line_content[1]
elif line.startswith("@horizon"):
forecast_horizon = int(line_content[1])
elif line.startswith("@missing"):
contain_missing_values = bool(distutils.util.strtobool(line_content[1]))
elif line.startswith("@equallength"):
contain_equal_length = bool(distutils.util.strtobool(line_content[1]))
else:
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section. Attribute section must come before data.")
found_data_tag = True
elif not line.startswith("#"):
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section. Attribute section must come before data.")
elif not found_data_tag:
raise TsFileParseException("Missing @data tag.")
else:
if not started_reading_data_section:
started_reading_data_section = True
found_data_section = True
all_series = []
for col in col_names:
all_data[col] = []
full_info = line.split(":")
if len(full_info) != (len(col_names) + 1):
raise TsFileParseException("Missing attributes/values in series.")
series = full_info[len(full_info) - 1]
series = series.split(",")
if(len(series) == 0):
                            raise TsFileParseException("A given series should contain a set of comma-separated numeric values. At least one numeric value should be present in a series. Missing values should be indicated with the ? symbol.")
numeric_series = []
for val in series:
if val == "?":
numeric_series.append(replace_missing_vals_with)
else:
numeric_series.append(float(val))
if (numeric_series.count(replace_missing_vals_with) == len(numeric_series)):
                            raise TsFileParseException("All series values are missing. A given series should contain a set of comma-separated numeric values. At least one numeric value should be present in a series.")
all_series.append(pd.Series(numeric_series).array)
for i in range(len(col_names)):
att_val = None
if col_types[i] == "numeric":
att_val = int(full_info[i])
elif col_types[i] == "string":
att_val = str(full_info[i])
elif col_types[i] == "date":
att_val = datetime.strptime(full_info[i], '%Y-%m-%d %H-%M-%S')
else:
raise TsFileParseException("Invalid attribute type.") # Currently, the code supports only numeric, string and date types. Extend this as required.
if(att_val == None):
raise TsFileParseException("Invalid attribute value.")
else:
all_data[col_names[i]].append(att_val)
line_count = line_count + 1
if line_count == 0:
raise TsFileParseException("Empty file.")
if len(col_names) == 0:
raise TsFileParseException("Missing attribute section.")
if not found_data_section:
raise TsFileParseException("Missing series information under data section.")
all_data[value_column_name] = all_series
loaded_data = pd.DataFrame(all_data)
return loaded_data, frequency, forecast_horizon, contain_missing_values, contain_equal_length
# export
def get_Monash_forecasting_data(dsid, path='./data/forecasting/', force_download=False, remove_from_disk=False, verbose=True):
pv(f'Dataset: {dsid}', verbose)
dsid = dsid.lower()
assert dsid in Monash_forecasting_list, f'{dsid} not available in Monash_forecasting_list'
if dsid == 'm1_yearly_dataset': url = 'https://zenodo.org/record/4656193/files/m1_yearly_dataset.zip'
elif dsid == 'm1_quarterly_dataset': url = 'https://zenodo.org/record/4656154/files/m1_quarterly_dataset.zip'
elif dsid == 'm1_monthly_dataset': url = 'https://zenodo.org/record/4656159/files/m1_monthly_dataset.zip'
elif dsid == 'm3_yearly_dataset': url = 'https://zenodo.org/record/4656222/files/m3_yearly_dataset.zip'
elif dsid == 'm3_quarterly_dataset': url = 'https://zenodo.org/record/4656262/files/m3_quarterly_dataset.zip'
elif dsid == 'm3_monthly_dataset': url = 'https://zenodo.org/record/4656298/files/m3_monthly_dataset.zip'
elif dsid == 'm3_other_dataset': url = 'https://zenodo.org/record/4656335/files/m3_other_dataset.zip'
elif dsid == 'm4_yearly_dataset': url = 'https://zenodo.org/record/4656379/files/m4_yearly_dataset.zip'
elif dsid == 'm4_quarterly_dataset': url = 'https://zenodo.org/record/4656410/files/m4_quarterly_dataset.zip'
elif dsid == 'm4_monthly_dataset': url = 'https://zenodo.org/record/4656480/files/m4_monthly_dataset.zip'
elif dsid == 'm4_weekly_dataset': url = 'https://zenodo.org/record/4656522/files/m4_weekly_dataset.zip'
elif dsid == 'm4_daily_dataset': url = 'https://zenodo.org/record/4656548/files/m4_daily_dataset.zip'
elif dsid == 'm4_hourly_dataset': url = 'https://zenodo.org/record/4656589/files/m4_hourly_dataset.zip'
elif dsid == 'tourism_yearly_dataset': url = 'https://zenodo.org/record/4656103/files/tourism_yearly_dataset.zip'
elif dsid == 'tourism_quarterly_dataset': url = 'https://zenodo.org/record/4656093/files/tourism_quarterly_dataset.zip'
elif dsid == 'tourism_monthly_dataset': url = 'https://zenodo.org/record/4656096/files/tourism_monthly_dataset.zip'
elif dsid == 'nn5_daily_dataset_with_missing_values': url = 'https://zenodo.org/record/4656110/files/nn5_daily_dataset_with_missing_values.zip'
elif dsid == 'nn5_daily_dataset_without_missing_values': url = 'https://zenodo.org/record/4656117/files/nn5_daily_dataset_without_missing_values.zip'
elif dsid == 'nn5_weekly_dataset': url = 'https://zenodo.org/record/4656125/files/nn5_weekly_dataset.zip'
elif dsid == 'cif_2016_dataset': url = 'https://zenodo.org/record/4656042/files/cif_2016_dataset.zip'
elif dsid == 'kaggle_web_traffic_dataset_with_missing_values': url = 'https://zenodo.org/record/4656080/files/kaggle_web_traffic_dataset_with_missing_values.zip'
elif dsid == 'kaggle_web_traffic_dataset_without_missing_values': url = 'https://zenodo.org/record/4656075/files/kaggle_web_traffic_dataset_without_missing_values.zip'
elif dsid == 'kaggle_web_traffic_weekly': url = 'https://zenodo.org/record/4656664/files/kaggle_web_traffic_weekly_dataset.zip'
elif dsid == 'solar_10_minutes_dataset': url = 'https://zenodo.org/record/4656144/files/solar_10_minutes_dataset.zip'
elif dsid == 'solar_weekly_dataset': url = 'https://zenodo.org/record/4656151/files/solar_weekly_dataset.zip'
elif dsid == 'electricity_hourly_dataset': url = 'https://zenodo.org/record/4656140/files/electricity_hourly_dataset.zip'
elif dsid == 'electricity_weekly_dataset': url = 'https://zenodo.org/record/4656141/files/electricity_weekly_dataset.zip'
elif dsid == 'london_smart_meters_dataset_with_missing_values': url = 'https://zenodo.org/record/4656072/files/london_smart_meters_dataset_with_missing_values.zip'
elif dsid == 'london_smart_meters_dataset_without_missing_values': url = 'https://zenodo.org/record/4656091/files/london_smart_meters_dataset_without_missing_values.zip'
elif dsid == 'wind_farms_minutely_dataset_with_missing_values': url = 'https://zenodo.org/record/4654909/files/wind_farms_minutely_dataset_with_missing_values.zip'
elif dsid == 'wind_farms_minutely_dataset_without_missing_values': url = 'https://zenodo.org/record/4654858/files/wind_farms_minutely_dataset_without_missing_values.zip'
elif dsid == 'car_parts_dataset_with_missing_values': url = 'https://zenodo.org/record/4656022/files/car_parts_dataset_with_missing_values.zip'
elif dsid == 'car_parts_dataset_without_missing_values': url = 'https://zenodo.org/record/4656021/files/car_parts_dataset_without_missing_values.zip'
elif dsid == 'dominick_dataset': url = 'https://zenodo.org/record/4654802/files/dominick_dataset.zip'
elif dsid == 'fred_md_dataset': url = 'https://zenodo.org/record/4654833/files/fred_md_dataset.zip'
elif dsid == 'traffic_hourly_dataset': url = 'https://zenodo.org/record/4656132/files/traffic_hourly_dataset.zip'
elif dsid == 'traffic_weekly_dataset': url = 'https://zenodo.org/record/4656135/files/traffic_weekly_dataset.zip'
elif dsid == 'pedestrian_counts_dataset': url = 'https://zenodo.org/record/4656626/files/pedestrian_counts_dataset.zip'
elif dsid == 'hospital_dataset': url = 'https://zenodo.org/record/4656014/files/hospital_dataset.zip'
elif dsid == 'covid_deaths_dataset': url = 'https://zenodo.org/record/4656009/files/covid_deaths_dataset.zip'
elif dsid == 'kdd_cup_2018_dataset_with_missing_values': url = 'https://zenodo.org/record/4656719/files/kdd_cup_2018_dataset_with_missing_values.zip'
elif dsid == 'kdd_cup_2018_dataset_without_missing_values': url = 'https://zenodo.org/record/4656756/files/kdd_cup_2018_dataset_without_missing_values.zip'
elif dsid == 'weather_dataset': url = 'https://zenodo.org/record/4654822/files/weather_dataset.zip'
elif dsid == 'sunspot_dataset_with_missing_values': url = 'https://zenodo.org/record/4654773/files/sunspot_dataset_with_missing_values.zip'
elif dsid == 'sunspot_dataset_without_missing_values': url = 'https://zenodo.org/record/4654722/files/sunspot_dataset_without_missing_values.zip'
elif dsid == 'saugeenday_dataset': url = 'https://zenodo.org/record/4656058/files/saugeenday_dataset.zip'
elif dsid == 'us_births_dataset': url = 'https://zenodo.org/record/4656049/files/us_births_dataset.zip'
elif dsid == 'elecdemand_dataset': url = 'https://zenodo.org/record/4656069/files/elecdemand_dataset.zip'
elif dsid == 'solar_4_seconds_dataset': url = 'https://zenodo.org/record/4656027/files/solar_4_seconds_dataset.zip'
elif dsid == 'wind_4_seconds_dataset': url = 'https://zenodo.org/record/4656032/files/wind_4_seconds_dataset.zip'
path = Path(path)
full_path = path/f'{dsid}.tsf'
if not full_path.exists() or force_download:
try:
decompress_from_url(url, target_dir=path, verbose=verbose)
except Exception as inst:
print(inst)
pv("converting dataframe to numpy array...", verbose)
data, frequency, forecast_horizon, contain_missing_values, contain_equal_length = convert_tsf_to_dataframe(full_path)
X = to3d(stack_pad(data['series_value']))
pv("...dataframe converted to numpy array", verbose)
pv(f'\nX.shape: {X.shape}', verbose)
pv(f'freq: {frequency}', verbose)
pv(f'forecast_horizon: {forecast_horizon}', verbose)
pv(f'contain_missing_values: {contain_missing_values}', verbose)
pv(f'contain_equal_length: {contain_equal_length}', verbose=verbose)
if remove_from_disk: os.remove(full_path)
return X
get_forecasting_data = get_Monash_forecasting_data
dsid = 'm1_yearly_dataset'
X = get_Monash_forecasting_data(dsid, force_download=False)
if X is not None:
test_eq(X.shape, (181, 1, 58))
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____ |
Lab_4.ipynb | ###Markdown
Generalized Linear Regression Import and Prepare the Data Pandas provides an excellent data reading and querying module, the DataFrame, which allows you to import structured data and perform SQL-like queries. Here we import some house price records from Trulia. For more about extracting data from Trulia, please check my previous tutorial. We use the house prices as the dependent variable and the lot size, house area, number of bedrooms and number of bathrooms as the independent variables.
###Code
import sklearn
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
%matplotlib inline
import pandas
import numpy as np
df = pandas.read_excel('house_price.xlsx')
# combine multiple columns into a 2D array
X = np.column_stack((df.lot_size,df.area,df.bedroom,df.bathroom))
y = df.price
print (X[:10])
# split data for training and testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=3)
###Output
[[ 10018 1541 3 2]
[ 8712 1810 4 2]
[ 13504 1456 3 2]
[ 10130 2903 5 4]
[ 18295 2616 3 2]
[204732 3850 3 2]
[ 9147 1000 3 1]
[ 2300 920 2 2]
[ 13939 2705 3 3]
[ 2291 1440 4 3]]
###Markdown
Ordinary Least Squares We use a multiple linear regression model to fit the data and evaluate the R-squared scores on the training data and the test data.
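As a reminder (standard definition, not specific to this lab), the R-squared value returned by `score` is $R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$, so values close to 1 mean the model explains most of the variance in the target.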
###Code
from sklearn.linear_model import LinearRegression
lr = LinearRegression().fit(X_train, y_train)
print("lr.coef_: {}".format(lr.coef_))
print("lr.intercept_: {}".format(lr.intercept_))
print("Training R Square: {:.2f}".format(lr.score(X_train, y_train)))
print("Test R Square: {:.2f}".format(lr.score(X_test, y_test)))
###Output
lr.coef_: [2.12812203e-01 4.09817896e+01 1.34769478e+04 4.47428552e+04]
lr.intercept_: 16022.12534700043
Training R Square: 0.96
Test R Square: 0.71
###Markdown
The model reports the coefficients of all features, and the R-squared on the training data is high. However, the R-squared on the test data is much lower, indicating overfitting. Ridge Regression Ridge Regression reduces the complexity of linear models by imposing a penalty on the size of the coefficients, pushing all coefficients toward 0.
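For reference (standard scikit-learn formulation, added here as a brief aside), Ridge fits the coefficients by minimizing the penalized least-squares objective $\min_w \|y - Xw\|_2^2 + \alpha \|w\|_2^2$, so a larger `alpha` shrinks all coefficients more strongly toward 0.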
###Code
from sklearn.linear_model import Ridge
ridge = Ridge().fit(X_train, y_train)
print("Training R Square: {:.2f}".format(ridge.score(X_train, y_train)))
print("Test R Square: {:.2f}".format(ridge.score(X_test, y_test)))
###Output
Training R Square: 0.96
Test R Square: 0.72
###Markdown
We can increase the alpha to push all coefficients closer to 0.
###Code
ridge10 = Ridge(alpha=10).fit(X_train, y_train) # set a different alpha
print("Training R Square: {:.2f}".format(ridge10.score(X_train, y_train)))
print("Test R Square: {:.2f}".format(ridge10.score(X_test, y_test)))
ridge1000 = Ridge(alpha=1000).fit(X_train, y_train)
print("Training R Square: {:.2f}".format(ridge1000.score(X_train, y_train)))
print("Test R Square: {:.2f}".format(ridge1000.score(X_test, y_test)))
###Output
Training R Square: 0.93
Test R Square: 0.79
###Markdown
We can visually compare the coefficients from the Ridge Regression models with different alpha.
###Code
plt.plot(ridge.coef_, 's', label="Ridge alpha=1")
plt.plot(ridge10.coef_, '^', label="Ridge alpha=10")
plt.plot(ridge1000.coef_, 'v', label="Ridge alpha=1000")
plt.plot(lr.coef_, 'o', label="LinearRegression")
plt.xlabel("Coefficient index")
plt.ylabel("Coefficient magnitude")
plt.hlines(0, 0, len(lr.coef_))
plt.legend()
###Output
_____no_output_____
###Markdown
Lasso Regression Lasso is another model that estimates sparse coefficients by pushing some coefficients exactly to 0, i.e., reducing the number of non-zero coefficients (features used).
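For reference (standard scikit-learn formulation, added here as a brief aside), Lasso minimizes $\min_w \frac{1}{2 n_{\text{samples}}} \|y - Xw\|_2^2 + \alpha \|w\|_1$; it is the L1 penalty that drives individual coefficients exactly to 0.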
###Code
from sklearn.linear_model import Lasso
lasso = Lasso().fit(X_train, y_train)
print("Training set score: {:.2f}".format(lasso.score(X_train, y_train)))
print("Test set score: {:.2f}".format(lasso.score(X_test, y_test)))
# here we also report the number of non-zero coefficients (features used)
print("Number of features used: {}".format(np.sum(lasso.coef_ != 0)))
###Output
Training set score: 0.96
Test set score: 0.71
Number of features used: 4
###Markdown
We can also increase alpha to force more coefficients to exactly 0.
###Code
lasso10 = Lasso(alpha=10).fit(X_train, y_train) # set a different alpha
print("Training R Square: {:.2f}".format(lasso10.score(X_train, y_train)))
print("Test R Square: {:.2f}".format(lasso10.score(X_test, y_test)))
print("Number of features used: {}".format(np.sum(lasso10.coef_ != 0)))
lasso10000 = Lasso(alpha=10000).fit(X_train, y_train)
print("Training R Square: {:.2f}".format(lasso10000.score(X_train, y_train)))
print("Test R Square: {:.2f}".format(lasso10000.score(X_test, y_test)))
print("Number of features used: {}".format(np.sum(lasso10000.coef_ != 0)))
###Output
Training R Square: 0.95
Test R Square: 0.73
Number of features used: 3
###Markdown
We can also check the coefficients from the Lasso Regression models with different alpha.
###Code
plt.plot(lasso.coef_, 's', label="Lasso alpha=1")
plt.plot(lasso10.coef_, '^', label="Lasso alpha=10")
plt.plot(lasso10000.coef_, 'v', label="Lasso alpha=10000")
plt.plot(lr.coef_, 'o', label="LinearRegression")
plt.legend(ncol=2, loc=(0, 1.05))
plt.xlabel("Coefficient index")
plt.ylabel("Coefficient magnitude")
###Output
_____no_output_____ |
tutorials/algorithms/07_grover.ipynb | ###Markdown
Grover's Algorithm and Amplitude AmplificationGrover's algorithm is one of the most famous quantum algorithms introduced by Lov Grover in 1996 \[1\]. It has initially been proposed for unstructured search problems, i.e. for finding a marked element in an unstructured database. However, Grover's algorithm is now a subroutine to several other algorithms, such as Grover Adaptive Search \[2\]. For the details of Grover's algorithm, please see [Grover's Algorithm](https://qiskit.org/textbook/ch-algorithms/grover.html) in the Qiskit textbook.Qiskit implements Grover's algorithm in the `Grover` class. This class also includes the generalized version, Amplitude Amplification \[3\], and allows setting individual iterations and other meta-settings to Grover's algorithm.**References:**\[1\]: L. K. Grover, A fast quantum mechanical algorithm for database search. Proceedings 28th Annual Symposium on the Theory of Computing (STOC) 1996, pp. 212-219. https://arxiv.org/abs/quant-ph/9605043\[2\]: A. Gilliam, S. Woerner, C. Gonciulea, Grover Adaptive Search for Constrained Polynomial Binary Optimization.https://arxiv.org/abs/1912.04088\[3\]: Brassard, G., Hoyer, P., Mosca, M., & Tapp, A. (2000). Quantum Amplitude Amplification and Estimation. http://arxiv.org/abs/quant-ph/0005055 Grover's algorithmGrover's algorithm uses the Grover operator $\mathcal{Q}$ to amplify the amplitudes of the good states:$$ \mathcal{Q} = \mathcal{A}\mathcal{S_0}\mathcal{A}^\dagger \mathcal{S_f}$$Here, * $\mathcal{A}$ is the initial search state for the algorithm, which is just Hadamards, $H^{\otimes n}$ for the textbook Grover search, but can be more elaborate for Amplitude Amplification* $\mathcal{S_0}$ is the reflection about the all 0 state$$ |x\rangle \mapsto \begin{cases} -|x\rangle, &x \neq 0 \\ |x\rangle, &x = 0\end{cases}$$* $\mathcal{S_f}$ is the oracle that applies $$ |x\rangle \mapsto (-1)^{f(x)}|x\rangle$$ where $f(x)$ is 1 if $x$ is a good state and otherwise 0.In a nutshell, Grover's algorithm applies different powers of $\mathcal{Q}$ and after each execution checks whether a good solution has been found. Running Grover's algorithm To run Grover's algorithm with the `Grover` class, firstly, we need to specify an oracle for the circuit of Grover's algorithm. In the following example, we use `QuantumCircuit` as the oracle of Grover's algorithm. However, there are several other classes that we can use as the oracle of Grover's algorithm. We talk about them later in this tutorial.Note that the oracle for `Grover` must be a _phase-flip_ oracle. That is, it multiplies the amplitudes of the "good states" by a factor of $-1$. We explain later how to convert a _bit-flip_ oracle to a phase-flip oracle.
###Code
from qiskit import QuantumCircuit
from qiskit.aqua.algorithms import Grover
# the state we desire to find is '11'
good_state = ['11']
# specify the oracle that marks the state '11' as a good solution
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
# define Grover's algorithm
grover = Grover(oracle=oracle, good_state=good_state)
# now we can have a look at the Grover operator that is used in running the algorithm
grover.grover_operator.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Then, we specify a backend and call the `run` method of `Grover` with a backend to execute the circuits. The returned result type is a `GroverResult`. If the search was successful, the `oracle_evaluation` attribute of the result will be `True`. In this case, the most sampled measurement, `top_measurement`, is one of the "good states". Otherwise, `oracle_evaluation` will be False.
###Code
from qiskit import Aer
from qiskit.aqua import QuantumInstance
qasm_simulator = Aer.get_backend('qasm_simulator')
result = grover.run(quantum_instance=qasm_simulator)
print('Result type:', type(result))
print()
print('Success!' if result.oracle_evaluation else 'Failure!')
print('Top measurement:', result.top_measurement)
###Output
Result type: <class 'qiskit.aqua.algorithms.amplitude_amplifiers.grover.GroverResult'>
Success!
Top measurement: 11
###Markdown
In the example, the result of `top_measurement` is `11`, which is one of the "good states". Thus, we succeeded in finding the answer by using `Grover`. Using the different types of classes as the oracle of `Grover`In the above example, we used `QuantumCircuit` as the oracle of `Grover`. However, we can also use `qiskit.aqua.components.oracles.Oracle` and `qiskit.quantum_info.Statevector` as oracles.All the following examples assume that $|11\rangle$ is the "good state"
###Code
from qiskit.quantum_info import Statevector
oracle = Statevector.from_label('11')
grover = Grover(oracle=oracle, good_state=['11'])
result = grover.run(quantum_instance=qasm_simulator)
print('Result type:', type(result))
print()
print('Success!' if result.oracle_evaluation else 'Failure!')
print('Top measurement:', result.top_measurement)
###Output
Result type: <class 'qiskit.aqua.algorithms.amplitude_amplifiers.grover.GroverResult'>
Success!
Top measurement: 11
###Markdown
Internally, the statevector is mapped to a quantum circuit:
###Code
grover.grover_operator.oracle.draw(output='mpl')
###Output
_____no_output_____
###Markdown
The `Oracle` components in Qiskit Aqua allow for an easy construction of more complex oracles.The `Oracle` type has the interesting subclasses:* `LogicalExpressionOracle`: for parsing logical expressions such as `'~a | b'`. This is especially useful for solving 3-SAT problems and is shown in the accompanying [Grover Examples](08_grover_examples.ipynb) tutorial.* `TruthTableOracle`: for converting binary truth tables to circuits Here we'll use the `LogicalExpressionOracle` for the simple example of finding the state $|11\rangle$, which corresponds to `'a & b'`.
###Code
from qiskit.aqua.components.oracles import LogicalExpressionOracle
# `Oracle` (`LogicalExpressionOracle`) as the `oracle` argument
expression = '(a & b)'
oracle = LogicalExpressionOracle(expression)
grover = Grover(oracle=oracle)
grover.grover_operator.oracle.draw(output='mpl')
###Output
_____no_output_____
###Markdown
You can observe that this oracle is actually implemented with three qubits instead of two!That is because the `LogicalExpressionOracle` is not a phase-flip oracle (which flips the phase of the good state) but a bit-flip oracle. This means it flips the state of an auxiliary qubit if the other qubits satisfy the condition.For Grover's algorithm, however, we require a phase-flip oracle. To convert the bit-flip oracle to a phase-flip oracle we sandwich the controlled-NOT by $X$ and $H$ gates, as you can see in the circuit above.**Note:** This transformation from a bit-flip to a phase-flip oracle holds generally and you can use this to convert your oracle to the right representation. Amplitude amplificationGrover's algorithm uses Hadamard gates to create the uniform superposition of all the states at the beginning of the Grover operator $\mathcal{Q}$. If some information on the good states is available, it might be useful to not start in a uniform superposition but only initialize specific states. This generalized version of Grover's algorithm is referred to as _Amplitude Amplification_.In Qiskit, the initial superposition state can easily be adjusted by setting the `state_preparation` argument. State preparationA `state_preparation` argument is used to specify a quantum circuit that prepares a quantum state for the start point of the amplitude amplification.By default, a circuit with $H^{\otimes n}$ is used to prepare the uniform superposition (so it will be Grover's search). The diffusion circuit of the amplitude amplification reflects `state_preparation` automatically.
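As a brief aside before the state-preparation example below: the bit-flip-to-phase-flip wrapping described above can also be written out by hand. The following sketch (not part of the original tutorial) uses a Toffoli as a hypothetical bit-flip oracle marking $|11\rangle$ and sandwiches it between $X$ and $H$ gates on the auxiliary qubit.
###Code
# Illustrative sketch only: manually wrap a bit-flip oracle into a phase-flip oracle
from qiskit import QuantumCircuit
manual_phase_flip = QuantumCircuit(3)
manual_phase_flip.x(2)          # put the auxiliary qubit in |1>
manual_phase_flip.h(2)          # ... and then in |->, so a bit flip on it becomes a phase flip
manual_phase_flip.ccx(0, 1, 2)  # bit-flip oracle: flips the auxiliary qubit iff qubits 0 and 1 are |11>
manual_phase_flip.h(2)          # uncompute the auxiliary qubit
manual_phase_flip.x(2)
###Output
_____no_output_____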
###Code
import numpy as np
# Specifying `state_preparation`
# to prepare a superposition of |01>, |10>, and |11>
oracle = QuantumCircuit(3)
oracle.h(2)
oracle.ccx(0,1,2)
oracle.h(2)
theta = 2 * np.arccos(1 / np.sqrt(3))
state_preparation = QuantumCircuit(3)
state_preparation.ry(theta, 0)
state_preparation.ch(0,1)
state_preparation.x(1)
state_preparation.h(2)
# we only care about the first two bits being in state 1, thus add both possibilities for the last qubit
grover = Grover(oracle=oracle, state_preparation=state_preparation, good_state=['110', '111'])
# state_preparation
print('state preparation circuit:')
grover.grover_operator.state_preparation.draw(output='mpl')
result = grover.run(quantum_instance=qasm_simulator)
print('Success!' if result.oracle_evaluation else 'Failure!')
print('Top measurement:', result.top_measurement)
###Output
Success!
Top measurement: 111
###Markdown
Full flexibilityFor more advanced use, it is also possible to specify the entire Grover operator by setting the `grover_operator` argument. This might be useful if you know more efficient implementation for $\mathcal{Q}$ than the default construction via zero reflection, oracle and state preparation.The `qiskit.circuit.library.GroverOperator` can be a good starting point and offers more options for an automated construction of the Grover operator. You can for instance * set the `mcx_mode` * ignore qubits in the zero reflection by setting `reflection_qubits`* explicitly exchange the $\mathcal{S_f}, \mathcal{S_0}$ and $\mathcal{A}$ operations using the `oracle`, `zero_reflection` and `state_preparation` arguments For instance, imagine the good state is a three qubit state $|111\rangle$ but we used 2 additional qubits as auxiliary qubits.
###Code
from qiskit.circuit.library import GroverOperator, ZGate
oracle = QuantumCircuit(5)
oracle.append(ZGate().control(2), [0, 1, 2])
oracle.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Then, per default, the Grover operator implements the zero reflection on all five qubits.
###Code
grover_op = GroverOperator(oracle, insert_barriers=True)
grover_op.draw(output='mpl')
###Output
_____no_output_____
###Markdown
But we know that we only need to consider the first three:
###Code
grover_op = GroverOperator(oracle, reflection_qubits=[0, 1, 2], insert_barriers=True)
grover_op.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Dive into other arguments of `Grover``Grover` has arguments other than `oracle` and `state_preparation`. We will explain them in this section. Specifying `good_state``good_state` is used internally to check whether the measurement result is correct or not. It can be a list of binary strings, a list of integers, a `Statevector`, or a Callable. If the input is a list of bitstrings, each bitstring in the list represents a good state. If the input is a list of integers, each integer represents the index of a bit that must be $|1\rangle$ in a good state. If it is a `Statevector`, it represents a superposition of all good states.
###Code
# a list of binary strings good state
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
good_state = ['11', '00']
grover = Grover(oracle=oracle, good_state=good_state)
print(grover.is_good_state('11'))
# a list of integer good state
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
good_state = [0, 1]
grover = Grover(oracle=oracle, good_state=good_state)
print(grover.is_good_state('11'))
from qiskit.quantum_info import Statevector
# `Statevector` good state
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
good_state = Statevector.from_label('11')
grover = Grover(oracle=oracle, good_state=good_state)
print(grover.is_good_state('11'))
# Callable good state
def callable_good_state(bitstr):
if bitstr == "11":
return True
return False
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
grover = Grover(oracle=oracle, good_state=callable_good_state)
print(grover.is_good_state('11'))
###Output
True
###Markdown
The number of `iterations`The number of repetitions of applying the Grover operator is important to obtain the correct result with Grover's algorithm. The number of iterations can be set by the `iterations` argument of `Grover`. The following inputs are supported:* an integer to specify a single power of the Grover operator that's applied* or a list of integers, in which all these different powers of the Grover operator are run consecutively and after each time we check if a correct solution has been foundAdditionally there is the `sample_from_iterations` argument. When it is `True`, instead of the specific power in `iterations`, a random integer between 0 and the value in `iterations` is used as the power of the Grover operator. This approach is useful when we don't even know the number of solutions.For more details of the algorithm using `sample_from_iterations`, see [4].**References:**[4]: Boyer et al., Tight bounds on quantum searching [arxiv:quant-ph/9605034](https://arxiv.org/abs/quant-ph/9605034)
###Code
# integer iteration
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
grover = Grover(oracle=oracle, good_state=['11'], iterations=1)
# list iteration
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
grover = Grover(oracle=oracle, good_state=['11'], iterations=[1, 2, 3])
# using sample_from_iterations
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
grover = Grover(oracle=oracle, good_state=['11'], iterations=[1, 2, 3], sample_from_iterations=True)
###Output
_____no_output_____
###Markdown
When the number of solutions is known, we can also use a static method `optimal_num_iterations` to find the optimal number of iterations. Note that the output iterations is an approximate value. When the number of qubits is small, the output iterations may not be optimal.
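As a rough rule of thumb (standard Grover analysis, added here for reference), for $N = 2^{n}$ items and $M$ solutions the number of iterations that maximizes the success probability is close to $\frac{\pi}{4}\sqrt{N/M}$, which is essentially what this helper approximates.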
###Code
iterations = Grover.optimal_num_iterations(num_solutions=1, num_qubits=8)
iterations
###Output
_____no_output_____
###Markdown
Applying `post_processing`We can apply an optional post processing to the top measurement for ease of readability. It can be used e.g. to convert from the bit-representation of the measurement `[1, 0, 1]` to a DIMACS CNF format `[1, -2, 3]`.
###Code
def to_DIMACS_CNF_format(bit_rep):
return [index+1 if val==1 else -1 * (index + 1) for index, val in enumerate(bit_rep)]
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
grover = Grover(oracle=oracle, good_state=['11'], post_processing=to_DIMACS_CNF_format)
grover.post_processing([1, 0, 1])
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
###Markdown
Grover's Algorithm and Amplitude AmplificationGrover's algorithm is one of the most famous quantum algorithms introduced by Lov Grover in 1996 \[1\]. It has initially been proposed for unstructured search problems, i.e. for finding a marked element in a unstructured database. However, Grover's algorithm is now a subroutine to several other algorithms, such as Grover Adaptive Search \[2\]. For the details of Grover's algorithm, please see [Grover's Algorithm](https://qiskit.org/textbook/ch-algorithms/grover.html) in the Qiskit textbook.Qiskit implements Grover's algorithm in the `Grover` class. This class also includes the generalized version, Amplitude Amplification \[3\], and allows setting individual iterations and other meta-settings to Grover's algorithm.**References:**\[1\]: L. K. Grover, A fast quantum mechanical algorithm for database search. Proceedings 28th Annual Symposium onthe Theory of Computing (STOC) 1996, pp. 212-219.\[2\]: A. Gilliam, S. Woerner, C. Gonciulea, Grover Adaptive Search for Constrained Polynomial Binary Optimization.https://arxiv.org/abs/1912.04088\[3\]: Brassard, G., Hoyer, P., Mosca, M., & Tapp, A. (2000). Quantum Amplitude Amplification and Estimation. http://arxiv.org/abs/quant-ph/0005055 Grover's algorithmGrover's algorithm uses the Grover operator $\mathcal{Q}$ to amplify the amplitudes of the good states:$$ \mathcal{Q} = \mathcal{A}\mathcal{S_0}\mathcal{A}^\dagger \mathcal{S_f}$$Here, * $\mathcal{A}$ is the initial search state for the algorithm, which is just Hadamards, $H^{\otimes n}$ for the textbook Grover search, but can be more elaborate for Amplitude Amplification* $\mathcal{S_0}$ is the reflection about the all 0 state$$ |x\rangle \mapsto \begin{cases} -|x\rangle, &x \neq 0 \\ |x\rangle, &x = 0\end{cases}$$* $\mathcal{S_f}$ is the oracle that applies $$ |x\rangle \mapsto (-1)^{f(x)}|x\rangle$$ where $f(x)$ is 1 if $x$ is a good state and otherwise 0.In a nutshell, Grover's algorithm applies different powers of $\mathcal{Q}$ and after each execution checks whether a good solution has been found. Running Grover's algorithm To run Grover's algorithm with the `Grover` class, firstly, we need to specify an oracle for the circuit of Grover's algorithm. In the following example, we use `QuantumCircuit` as the oracle of Grover's algorithm. However, there are several other class that we can use as the oracle of Grover's algorithm. We talk about them later in this tutorial.Note that the oracle for `Grover` must be a _phase-flip_ oracle. That is, it multiplies the amplitudes of the of "good states" by a factor of $-1$. We explain later how to convert a _bit-flip_ oracle to a phase-flip oracle.
###Code
from qiskit import QuantumCircuit
from qiskit.aqua.algorithms import Grover
# the state we desire to find is '11'
good_state = ['11']
# specify the oracle that marks the state '11' as a good solution
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
# define Grover's algorithm
grover = Grover(oracle=oracle, good_state=good_state)
# now we can have a look at the Grover operator that is used in running the algorithm
grover.grover_operator.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Then, we specify a backend and call the `run` method of `Grover` with it to execute the circuits. The returned result type is a `GroverResult`. If the search was successful, the `oracle_evaluation` attribute of the result will be `True`. In this case, the most sampled measurement, `top_measurement`, is one of the "good states". Otherwise, `oracle_evaluation` will be `False`.
###Code
from qiskit import Aer
from qiskit.aqua import QuantumInstance
qasm_simulator = Aer.get_backend('qasm_simulator')
result = grover.run(quantum_instance=qasm_simulator)
print('Result type:', type(result))
print()
print('Success!' if result.oracle_evaluation else 'Failure!')
print('Top measurement:', result.top_measurement)
###Output
Result type: <class 'qiskit.aqua.algorithms.amplitude_amplifiers.grover.GroverResult'>
Success!
Top measurement: 11
###Markdown
In the example, the result of `top_measurement` is `11`, which is one of the "good states". Thus, we succeeded in finding the answer using `Grover`.

Using the different types of classes as the oracle of `Grover`

In the above example, we used a `QuantumCircuit` as the oracle of `Grover`. However, we can also use `qiskit.aqua.components.oracles.Oracle` and `qiskit.quantum_info.Statevector` as oracles. In all the following examples, $|11\rangle$ is the "good state".
###Code
from qiskit.quantum_info import Statevector
oracle = Statevector.from_label('11')
grover = Grover(oracle=oracle, good_state=['11'])
result = grover.run(quantum_instance=qasm_simulator)
print('Result type:', type(result))
print()
print('Success!' if result.oracle_evaluation else 'Failure!')
print('Top measurement:', result.top_measurement)
###Output
Result type: <class 'qiskit.aqua.algorithms.amplitude_amplifiers.grover.GroverResult'>
Success!
Top measurement: 11
###Markdown
Internally, the statevector is mapped to a quantum circuit:
###Code
grover.grover_operator.oracle.draw(output='mpl')
###Output
_____no_output_____
###Markdown
The `Oracle` components in Qiskit Aqua allow for an easy construction of more complex oracles. The `Oracle` type has the following interesting subclasses:
* `LogicalExpressionOracle`: for parsing logical expressions such as `'~a | b'`. This is especially useful for solving 3-SAT problems and is shown in the accompanying [Grover Examples](08_grover_examples.ipynb) tutorial.
* `TruthTableOracle`: for converting binary truth tables to circuits

Here we'll use the `LogicalExpressionOracle` for the simple example of finding the state $|11\rangle$, which corresponds to `'a & b'`.
###Code
from qiskit.aqua.components.oracles import LogicalExpressionOracle
# `Oracle` (`LogicalExpressionOracle`) as the `oracle` argument
expression = '(a & b)'
oracle = LogicalExpressionOracle(expression)
grover = Grover(oracle=oracle)
grover.grover_operator.oracle.draw(output='mpl')
###Output
_____no_output_____
###Markdown
You can observe that this oracle is actually implemented with three qubits instead of two!

That is because the `LogicalExpressionOracle` is not a phase-flip oracle (which flips the phase of the good state) but a bit-flip oracle. This means it flips the state of an auxiliary qubit if the other qubits satisfy the condition. For Grover's algorithm, however, we require a phase-flip oracle. To convert the bit-flip oracle to a phase-flip oracle we sandwich the controlled-NOT by $X$ and $H$ gates, as you can see in the circuit above.

**Note:** This transformation from a bit-flip to a phase-flip oracle holds generally and you can use it to convert your oracle to the right representation.

Amplitude amplification

Grover's algorithm uses Hadamard gates to create the uniform superposition of all the states at the beginning of the Grover operator $\mathcal{Q}$. If some information on the good states is available, it might be useful not to start in a uniform superposition but to initialize only specific states. This generalized version of Grover's algorithm is referred to as _Amplitude Amplification_. In Qiskit, the initial superposition state can easily be adjusted by setting the `state_preparation` argument.

State preparation

The `state_preparation` argument is used to specify a quantum circuit that prepares the quantum state used as the starting point of the amplitude amplification. By default, a circuit with $H^{\otimes n}$ is used to prepare the uniform superposition (so it will be Grover's search). The diffusion circuit of the amplitude amplification reflects `state_preparation` automatically.
###Code
import numpy as np
# Specifying `state_preparation`
# to prepare a superposition of |01>, |10>, and |11>
oracle = QuantumCircuit(3)
oracle.h(2)
oracle.ccx(0,1,2)
oracle.h(2)
theta = 2 * np.arccos(1 / np.sqrt(3))
state_preparation = QuantumCircuit(3)
state_preparation.ry(theta, 0)
state_preparation.ch(0,1)
state_preparation.x(1)
state_preparation.h(2)
# we only care about the first two bits being in state 1, thus add both possibilities for the last qubit
grover = Grover(oracle=oracle, state_preparation=state_preparation, good_state=['110', '111'])
# state_preparation
print('state preparation circuit:')
grover.grover_operator.state_preparation.draw(output='mpl')
result = grover.run(quantum_instance=qasm_simulator)
print('Success!' if result.oracle_evaluation else 'Failure!')
print('Top measurement:', result.top_measurement)
###Output
Success!
Top measurement: 111
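###Markdown
As a quick check (an added sketch, not part of the original notebook), we can inspect the state produced by `state_preparation` with `qiskit.quantum_info.Statevector`: qubits 0 and 1 are in an equal superposition of the three basis states where at least one of them is 1, and qubit 2 is in $|+\rangle$, so six basis states should each appear with probability 1/6.
###Code
from qiskit.quantum_info import Statevector

# probabilities of the prepared state; expected: six outcomes with probability 1/6 each
Statevector.from_instruction(state_preparation).probabilities_dict()
###Output
_____no_output_____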
###Markdown
Full flexibility

For more advanced use, it is also possible to specify the entire Grover operator by setting the `grover_operator` argument. This might be useful if you know a more efficient implementation of $\mathcal{Q}$ than the default construction via zero reflection, oracle and state preparation.

The `qiskit.circuit.library.GroverOperator` can be a good starting point and offers more options for an automated construction of the Grover operator. You can, for instance,
* set the `mcx_mode`
* ignore qubits in the zero reflection by setting `reflection_qubits`
* explicitly exchange the $\mathcal{S_f}$, $\mathcal{S_0}$ and $\mathcal{A}$ operations using the `oracle`, `zero_reflection` and `state_preparation` arguments

For instance, imagine the good state is the three-qubit state $|111\rangle$ but we used 2 additional qubits as auxiliary qubits.
###Code
from qiskit.circuit.library import GroverOperator, ZGate
oracle = QuantumCircuit(5)
oracle.append(ZGate().control(2), [0, 1, 2])
oracle.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Then, by default, the Grover operator implements the zero reflection on all five qubits.
###Code
grover_op = GroverOperator(oracle, insert_barriers=True)
grover_op.draw(output='mpl')
###Output
_____no_output_____
###Markdown
But we know that we only need to consider the first three:
###Code
grover_op = GroverOperator(oracle, reflection_qubits=[0, 1, 2], insert_barriers=True)
grover_op.draw(output='mpl')
###Output
_____no_output_____
###Markdown
Dive into other arguments of `Grover`

`Grover` has arguments other than `oracle` and `state_preparation`. We explain them in this section.

Specifying `good_state`

`good_state` is used internally to check whether a measurement result is correct or not. It can be a list of binary strings, a list of integers, a `Statevector`, or a callable. If the input is a list of bitstrings, each bitstring in the list represents a good state. If the input is a list of integers, each integer represents the index of a bit that should be $|1\rangle$ in a good state. If it is a `Statevector`, it represents a superposition of all good states.
###Code
# a list of binary strings good state
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
good_state = ['11', '00']
grover = Grover(oracle=oracle, good_state=good_state)
print(grover.is_good_state('11'))
# a list of integer good state
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
good_state = [0, 1]
grover = Grover(oracle=oracle, good_state=good_state)
print(grover.is_good_state('11'))
from qiskit.quantum_info import Statevector
# `Statevector` good state
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
good_state = Statevector.from_label('11')
grover = Grover(oracle=oracle, good_state=good_state)
print(grover.is_good_state('11'))
# Callable good state
def callable_good_state(bitstr):
if bitstr == "11":
return True
return False
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
grover = Grover(oracle=oracle, good_state=callable_good_state)
print(grover.is_good_state('11'))
###Output
True
###Markdown
The number of `iterations`

The number of repetitions of the Grover operator is important for obtaining the correct result with Grover's algorithm. It can be set via the `iterations` argument of `Grover`. The following inputs are supported:
* an integer, to specify a single power of the Grover operator that's applied
* a list of integers, in which case all these different powers of the Grover operator are run consecutively and after each run we check if a correct solution has been found

Additionally, there is the `sample_from_iterations` argument. When it is `True`, instead of the specific power in `iterations`, a random integer between 0 and the value in `iterations` is used as the power of the Grover operator. This approach is useful when we don't even know the number of solutions. For more details of the algorithm using `sample_from_iterations`, see [4].

**References:**

[4]: Boyer et al., Tight bounds on quantum searching. [arxiv:quant-ph/9605034](https://arxiv.org/abs/quant-ph/9605034)
###Code
# integer iteration
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
grover = Grover(oracle=oracle, good_state=['11'], iterations=1)
# list iteration
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
grover = Grover(oracle=oracle, good_state=['11'], iterations=[1, 2, 3])
# using sample_from_iterations
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
grover = Grover(oracle=oracle, good_state=['11'], iterations=[1, 2, 3], sample_from_iterations=True)
###Output
_____no_output_____
###Markdown
When the number of solutions is known, we can also use the static method `optimal_num_iterations` to find the optimal number of iterations. Note that the returned value is an approximation; when the number of qubits is small, it may not be exactly optimal.
###Code
iterations = Grover.optimal_num_iterations(num_solutions=1, num_qubits=8)
iterations
###Output
_____no_output_____
###Markdown
Applying `post_processing`

We can apply an optional post-processing step to the top measurement for readability. It can be used e.g. to convert the bit-representation of the measurement `[1, 0, 1]` to the DIMACS CNF format `[1, -2, 3]`.
###Code
# convert a bit representation, e.g. [1, 0, 1], to DIMACS CNF literals, e.g. [1, -2, 3]
def to_DIMACS_CNF_format(bit_rep):
    return [index + 1 if val == 1 else -1 * (index + 1) for index, val in enumerate(bit_rep)]
oracle = QuantumCircuit(2)
oracle.cz(0, 1)
grover = Grover(oracle=oracle, good_state=['11'], post_processing=to_DIMACS_CNF_format)
grover.post_processing([1, 0, 1])
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____ |
Reverse Solutions.ipynb | ###Markdown
兩種Reverse的解法下面為使用python兩種reverse的解法,其中一種使用工具reverse,後者以數學方法為主,透過2^31作為(32 bit的整數計算)
###Code
def reverse(x):
    # Reverse the decimal digits with list.reverse(); handle the sign separately
    # so that negative inputs such as -123 parse correctly with int().
    sign = -1 if x < 0 else 1
    y = list(str(abs(x)))
    y.reverse()
    return sign * int("".join(y))
class Solution:
def reverse(self, x: int) -> int:
# Turn the attribute to the positive
sign_multiplier = 1
if x < 0:
sign_multiplier = -1
x = x * sign_multiplier
# assign the result parameter
result = 0
# signed 32-bit integer
min_int_32 = 2 ** 31
while x > 0:
# Add the current digit into result
# x % 10 is equal to the last digit
result = result * 10 + x % 10
            # Return 0 if the result falls outside the signed 32-bit range [-2**31, 2**31 - 1]
            if result * sign_multiplier < -min_int_32 or result * sign_multiplier > min_int_32 - 1:
                return 0
x = x // 10
# Restore to original sign of number (+ or -)
return sign_multiplier * result
###Output
_____no_output_____ |
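###Markdown
A small sanity check (added here, not part of the original notebook) exercising both implementations on a few sample inputs, including one whose reversal overflows 32 bits.
###Code
# quick checks of both approaches
print(reverse(12345))             # expected: 54321
sol = Solution()
print(sol.reverse(-123))          # expected: -321
print(sol.reverse(1534236469))    # reversed value exceeds 2**31 - 1 -> expected: 0
###Output
_____no_output_____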
python-client/examples/ApiManagerClient.ipynb | ###Markdown
Api Manager Example
###Code
from onesaitplatform.apimanager import ApiManagerClient
###Output
_____no_output_____
###Markdown
Create ApiManager
###Code
HOST = "development.onesaitplatform.com"
PORT = 443
TOKEN = "b32522cd73e84ddda519f1dff9627f40"
#client = ApiManagerClient(host=HOST, port=PORT)
client = ApiManagerClient(host=HOST)
###Output
_____no_output_____
###Markdown
Set token
###Code
client.setToken(TOKEN)
###Output
_____no_output_____
###Markdown
Find APIs
###Code
ok_find, res_find = client.find("RestaurantsAPI", "Created", "analytics")
print("API finded: {}".format(ok_find))
print("Api info:")
print(res_find)
###Output
API finded: True
Api info:
[{'identification': 'RestaurantsAPI', 'version': 1, 'type': 'PRIVATE', 'isPublic': None, 'category': 'EDUCATION', 'externalApi': False, 'ontologyId': 'MASTER-Ontology-Restaurant-1', 'endpoint': 'https://development.onesaitplatform.com/api-manager/server/api/v1/RestaurantsAPI', 'endpointExt': None, 'description': '', 'metainf': '', 'imageType': None, 'status': 'CREATED', 'creationDate': '04/16/2019 13:01:03', 'userId': 'analytics', 'operations': [{'identification': 'RestaurantsAPI_GET', 'description': 'id', 'operation': 'GET', 'endpoint': None, 'path': '/{id}', 'headers': [], 'queryParams': [{'name': 'id', 'dataType': 'STRING', 'description': '', 'value': None, 'headerType': 'PATH', 'condition': None}], 'postProcess': None}, {'identification': 'RestaurantsAPI_GETAll', 'description': 'all', 'operation': 'GET', 'endpoint': None, 'path': '', 'headers': [], 'queryParams': [], 'postProcess': None}], 'authentication': None}]
###Markdown
List APIs
###Code
ok_list, res_list = client.list("analytics")
print("APIs listed {}".format(ok_list))
print("Apis info:")
for api in res_list:
print(api)
print("*")
###Output
APIs listed True
Apis info:
{'identification': 'RestaurantsAPI', 'version': 1, 'type': 'PRIVATE', 'isPublic': None, 'category': 'EDUCATION', 'externalApi': False, 'ontologyId': 'MASTER-Ontology-Restaurant-1', 'endpoint': 'https://development.onesaitplatform.com/api-manager/server/api/v1/RestaurantsAPI', 'endpointExt': None, 'description': '', 'metainf': '', 'imageType': None, 'status': 'CREATED', 'creationDate': '04/16/2019 13:01:03', 'userId': 'analytics', 'operations': [{'identification': 'RestaurantsAPI_GET', 'description': 'id', 'operation': 'GET', 'endpoint': None, 'path': '/{id}', 'headers': [], 'queryParams': [{'name': 'id', 'dataType': 'STRING', 'description': '', 'value': None, 'headerType': 'PATH', 'condition': None}], 'postProcess': None}, {'identification': 'RestaurantsAPI_GETAll', 'description': 'all', 'operation': 'GET', 'endpoint': None, 'path': '', 'headers': [], 'queryParams': [], 'postProcess': None}], 'authentication': None}
*
{'identification': 'RestaurantTestApi', 'version': 1, 'type': 'PRIVATE', 'isPublic': None, 'category': 'OTHER', 'externalApi': False, 'ontologyId': 'b5982b8d-e4c0-4a84-ab65-9e7d2f30e638', 'endpoint': 'https://development.onesaitplatform.com/api-manager/server/api/v1/RestaurantTestApi', 'endpointExt': None, 'description': '', 'metainf': '', 'imageType': None, 'status': 'CREATED', 'creationDate': '04/12/2019 08:19:16', 'userId': 'analytics', 'operations': [{'identification': 'RestaurantTestApi_PUT', 'description': 'update', 'operation': 'PUT', 'endpoint': None, 'path': '/{id}', 'headers': [], 'queryParams': [{'name': 'body', 'dataType': 'STRING', 'description': '', 'value': '', 'headerType': 'BODY', 'condition': None}, {'name': 'id', 'dataType': 'STRING', 'description': '', 'value': None, 'headerType': 'PATH', 'condition': None}], 'postProcess': None}, {'identification': 'RestaurantTestApi_POST', 'description': 'insert', 'operation': 'POST', 'endpoint': None, 'path': '/', 'headers': [], 'queryParams': [{'name': 'body', 'dataType': 'STRING', 'description': '', 'value': '', 'headerType': 'BODY', 'condition': None}], 'postProcess': None}, {'identification': 'RestaurantTestApi_GETAll', 'description': 'query', 'operation': 'GET', 'endpoint': None, 'path': '', 'headers': [], 'queryParams': [], 'postProcess': None}, {'identification': 'RestaurantTestApi_GET', 'description': 'queryby', 'operation': 'GET', 'endpoint': None, 'path': '/{id}', 'headers': [], 'queryParams': [{'name': 'id', 'dataType': 'STRING', 'description': '', 'value': None, 'headerType': 'PATH', 'condition': None}], 'postProcess': None}, {'identification': 'RestaurantTestApi_DELETEID', 'description': 'delete', 'operation': 'DELETE', 'endpoint': None, 'path': '/{id}', 'headers': [], 'queryParams': [{'name': 'id', 'dataType': 'STRING', 'description': '', 'value': None, 'headerType': 'PATH', 'condition': None}], 'postProcess': None}], 'authentication': None}
*
###Markdown
Make API request
###Code
ok_request, res_request = client.request(method="GET", name="RestaurantsAPI/", version=1, body=None)
print("API request: {}".format(ok_request))
print("Api request:")
print(res_request)
###Output
API request: True
Api request:
[{'_id': '5cb032da23d35d000111809d', 'Restaurant': {'address': {'building': '351', 'coord': [-73.98513559999999, 40.7676919], 'street': 'West 57 Street', 'zipcode': '10019'}, 'borough': 'Manhattan', 'cuisine': 'Irish', 'grades': [{'date': '2014-09-06T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-07-22T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-07-31T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-12-29T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Dj Reynolds Pub And Restaurant', 'restaurant_id': '30191841'}}, {'_id': '5cb032da23d35d000111809e', 'Restaurant': {'address': {'building': '2780', 'coord': [-73.98241999999999, 40.579505], 'street': 'Stillwell Avenue', 'zipcode': '11224'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-06-10T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2013-06-05T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-04-13T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-10-12T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Riviera Caterer', 'restaurant_id': '40356018'}}, {'_id': '5cb032da23d35d000111809f', 'Restaurant': {'address': {'building': '97-22', 'coord': [-73.8601152, 40.7311739], 'street': '63 Road', 'zipcode': '11374'}, 'borough': 'Queens', 'cuisine': 'Jewish/Kosher', 'grades': [{'date': '2014-11-24T00:00:00Z', 'grade': 'Z', 'score': 20}, {'date': '2013-01-17T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-08-02T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-12-15T00:00:00Z', 'grade': 'B', 'score': 25}], 'name': 'Tov Kosher Kitchen', 'restaurant_id': '40356068'}}, {'_id': '5cb032da23d35d00011180a0', 'Restaurant': {'address': {'building': '469', 'coord': [-73.961704, 40.662942], 'street': 'Flatbush Avenue', 'zipcode': '11225'}, 'borough': 'Brooklyn', 'cuisine': 'Hamburgers', 'grades': [{'date': '2014-12-30T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2014-07-01T00:00:00Z', 'grade': 'B', 'score': 23}, {'date': '2013-04-30T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-05-08T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': "Wendy'S", 'restaurant_id': '30112340'}}, {'_id': '5cb032da23d35d00011180a1', 'Restaurant': {'address': {'building': '1007', 'coord': [-73.856077, 40.848447], 'street': 'Morris Park Ave', 'zipcode': '10462'}, 'borough': 'Bronx', 'cuisine': 'Bakery', 'grades': [{'date': '2014-03-03T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-09-11T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2013-01-24T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-11-23T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-03-10T00:00:00Z', 'grade': 'B', 'score': 14}], 'name': 'Morris Park Bake Shop', 'restaurant_id': '30075445'}}, {'_id': '5cb032da23d35d00011180a2', 'Restaurant': {'address': {'building': '8825', 'coord': [-73.8803827, 40.7643124], 'street': 'Astoria Boulevard', 'zipcode': '11369'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-11-15T00:00:00Z', 'grade': 'Z', 'score': 38}, {'date': '2014-05-02T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-03-02T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-02-10T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Brunos On The Boulevard', 'restaurant_id': '40356151'}}, {'_id': '5cb032da23d35d00011180a3', 'Restaurant': {'address': {'building': '2206', 'coord': [-74.1377286, 40.6119572], 'street': 'Victory Boulevard', 'zipcode': '10314'}, 'borough': 'Staten Island', 'cuisine': 'Jewish/Kosher', 'grades': [{'date': '2014-10-06T00:00:00Z', 'grade': 'A', 'score': 9}, 
{'date': '2014-05-20T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-04-04T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-01-24T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': 'Kosher Island', 'restaurant_id': '40356442'}}, {'_id': '5cb032da23d35d00011180a4', 'Restaurant': {'address': {'building': '7114', 'coord': [-73.9068506, 40.6199034], 'street': 'Avenue U', 'zipcode': '11234'}, 'borough': 'Brooklyn', 'cuisine': 'Delicatessen', 'grades': [{'date': '2014-05-29T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2014-01-14T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-08-03T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-07-18T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-03-09T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-10-14T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': "Wilken'S Fine Food", 'restaurant_id': '40356483'}}, {'_id': '5cb032da23d35d00011180a5', 'Restaurant': {'address': {'building': '6409', 'coord': [-74.00528899999999, 40.628886], 'street': '11 Avenue', 'zipcode': '11219'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-07-18T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-07-30T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-02-13T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-08-16T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2011-08-17T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Regina Caterers', 'restaurant_id': '40356649'}}, {'_id': '5cb032da23d35d00011180a6', 'Restaurant': {'address': {'building': '1839', 'coord': [-73.9482609, 40.6408271], 'street': 'Nostrand Avenue', 'zipcode': '11226'}, 'borough': 'Brooklyn', 'cuisine': 'Ice Cream, Gelato, Yogurt, Ices', 'grades': [{'date': '2014-07-14T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-07-10T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-07-11T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2012-02-23T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': 'Taste The Tropics Ice Cream', 'restaurant_id': '40356731'}}, {'_id': '5cb032da23d35d00011180a7', 'Restaurant': {'address': {'building': '2300', 'coord': [-73.8786113, 40.8502883], 'street': 'Southern Boulevard', 'zipcode': '10460'}, 'borough': 'Bronx', 'cuisine': 'American', 'grades': [{'date': '2014-05-28T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-06-19T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2012-06-15T00:00:00Z', 'grade': 'A', 'score': 3}], 'name': 'Wild Asia', 'restaurant_id': '40357217'}}, {'_id': '5cb032da23d35d00011180a8', 'Restaurant': {'address': {'building': '7715', 'coord': [-73.9973325, 40.61174889999999], 'street': '18 Avenue', 'zipcode': '11214'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-04-16T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2013-04-23T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-04-24T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2011-12-16T00:00:00Z', 'grade': 'A', 'score': 2}], 'name': 'C & C Catering Service', 'restaurant_id': '40357437'}}, {'_id': '5cb032da23d35d00011180a9', 'Restaurant': {'address': {'building': '1269', 'coord': [-73.871194, 40.6730975], 'street': 'Sutter Avenue', 'zipcode': '11208'}, 'borough': 'Brooklyn', 'cuisine': 'Chinese', 'grades': [{'date': '2014-09-16T00:00:00Z', 'grade': 'B', 'score': 21}, {'date': '2013-08-28T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2013-04-02T00:00:00Z', 'grade': 'C', 'score': 56}, {'date': '2012-08-15T00:00:00Z', 'grade': 'B', 'score': 27}, {'date': '2012-03-28T00:00:00Z', 'grade': 'B', 'score': 
27}], 'name': 'May May Kitchen', 'restaurant_id': '40358429'}}, {'_id': '5cb032da23d35d00011180aa', 'Restaurant': {'address': {'building': '1', 'coord': [-73.96926909999999, 40.7685235], 'street': 'East 66 Street', 'zipcode': '10065'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-05-07T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2013-05-03T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2012-04-30T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2011-12-27T00:00:00Z', 'grade': 'A', 'score': 0}], 'name': '1 East 66Th Street Kitchen', 'restaurant_id': '40359480'}}, {'_id': '5cb032da23d35d00011180ab', 'Restaurant': {'address': {'building': '705', 'coord': [-73.9653967, 40.6064339], 'street': 'Kings Highway', 'zipcode': '11223'}, 'borough': 'Brooklyn', 'cuisine': 'Jewish/Kosher', 'grades': [{'date': '2014-11-10T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-10-10T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-10-04T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-05-21T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-12-30T00:00:00Z', 'grade': 'B', 'score': 19}], 'name': 'Seuda Foods', 'restaurant_id': '40360045'}}, {'_id': '5cb032da23d35d00011180ac', 'Restaurant': {'address': {'building': '203', 'coord': [-73.97822040000001, 40.6435254], 'street': 'Church Avenue', 'zipcode': '11218'}, 'borough': 'Brooklyn', 'cuisine': 'Ice Cream, Gelato, Yogurt, Ices', 'grades': [{'date': '2014-02-10T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-01-02T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-01-09T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2011-11-07T00:00:00Z', 'grade': 'P', 'score': 12}, {'date': '2011-07-21T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Carvel Ice Cream', 'restaurant_id': '40360076'}}, {'_id': '5cb032da23d35d00011180ad', 'Restaurant': {'address': {'building': '265-15', 'coord': [-73.7032601, 40.7386417], 'street': 'Hillside Avenue', 'zipcode': '11004'}, 'borough': 'Queens', 'cuisine': 'Ice Cream, Gelato, Yogurt, Ices', 'grades': [{'date': '2014-10-28T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-09-18T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-09-20T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Carvel Ice Cream', 'restaurant_id': '40361322'}}, {'_id': '5cb032da23d35d00011180ae', 'Restaurant': {'address': {'building': '6909', 'coord': [-74.0259567, 40.6353674], 'street': '3 Avenue', 'zipcode': '11209'}, 'borough': 'Brooklyn', 'cuisine': 'Delicatessen', 'grades': [{'date': '2014-08-21T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2014-03-05T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2013-01-10T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': 'Nordic Delicacies', 'restaurant_id': '40361390'}}, {'_id': '5cb032da23d35d00011180af', 'Restaurant': {'address': {'building': '522', 'coord': [-73.95171, 40.767461], 'street': 'East 74 Street', 'zipcode': '10021'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-09-02T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-12-19T00:00:00Z', 'grade': 'B', 'score': 16}, {'date': '2013-05-28T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-12-07T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-03-29T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Glorious Food', 'restaurant_id': '40361521'}}, {'_id': '5cb032da23d35d00011180b0', 'Restaurant': {'address': {'building': '284', 'coord': [-73.9829239, 40.6580753], 'street': 'Prospect Park West', 'zipcode': '11215'}, 'borough': 'Brooklyn', 'cuisine': 
'American', 'grades': [{'date': '2014-11-19T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-11-14T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-12-05T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-05-17T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'The Movable Feast', 'restaurant_id': '40361606'}}, {'_id': '5cb032da23d35d00011180b1', 'Restaurant': {'address': {'building': '129-08', 'coord': [-73.839297, 40.78147], 'street': '20 Avenue', 'zipcode': '11356'}, 'borough': 'Queens', 'cuisine': 'Delicatessen', 'grades': [{'date': '2014-08-16T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-08-27T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-09-20T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2011-09-29T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': "Sal'S Deli", 'restaurant_id': '40361618'}}, {'_id': '5cb032da23d35d00011180b2', 'Restaurant': {'address': {'building': '759', 'coord': [-73.9925306, 40.7309346], 'street': 'Broadway', 'zipcode': '10003'}, 'borough': 'Manhattan', 'cuisine': 'Delicatessen', 'grades': [{'date': '2014-01-21T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-01-04T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-06-07T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2012-01-17T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': "Bully'S Deli", 'restaurant_id': '40361708'}}, {'_id': '5cb032da23d35d00011180b3', 'Restaurant': {'address': {'building': '3406', 'coord': [-73.94024739999999, 40.7623288], 'street': '10 Street', 'zipcode': '11106'}, 'borough': 'Queens', 'cuisine': 'Delicatessen', 'grades': [{'date': '2014-03-19T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2013-03-13T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-03-27T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2011-04-05T00:00:00Z', 'grade': 'A', 'score': 7}], 'name': "Steve Chu'S Deli & Grocery", 'restaurant_id': '40361998'}}, {'_id': '5cb032da23d35d00011180b4', 'Restaurant': {'address': {'building': '502', 'coord': [-73.976112, 40.786714], 'street': 'Amsterdam Avenue', 'zipcode': '10024'}, 'borough': 'Manhattan', 'cuisine': 'Chicken', 'grades': [{'date': '2014-09-15T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2014-03-04T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-07-18T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-01-09T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-04-10T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-11-15T00:00:00Z', 'grade': 'A', 'score': 7}], 'name': "Harriet'S Kitchen", 'restaurant_id': '40362098'}}, {'_id': '5cb032da23d35d00011180b5', 'Restaurant': {'address': {'building': '730', 'coord': [-73.96805719999999, 40.7925587], 'street': 'Columbus Avenue', 'zipcode': '10025'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-09-12T00:00:00Z', 'grade': 'B', 'score': 26}, {'date': '2013-08-28T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-03-25T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2012-02-14T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'P & S Deli Grocery', 'restaurant_id': '40362264'}}, {'_id': '5cb032da23d35d00011180b6', 'Restaurant': {'address': {'building': '18', 'coord': [-73.996984, 40.72589], 'street': 'West Houston Street', 'zipcode': '10012'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-04-03T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-04-05T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2012-03-21T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-04-27T00:00:00Z', 'grade': 'A', 
'score': 5}], 'name': 'Angelika Film Center', 'restaurant_id': '40362274'}}, {'_id': '5cb032da23d35d00011180b7', 'Restaurant': {'address': {'building': '531', 'coord': [-73.9634876, 40.6940001], 'street': 'Myrtle Avenue', 'zipcode': '11205'}, 'borough': 'Brooklyn', 'cuisine': 'Hamburgers', 'grades': [{'date': '2014-03-18T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2013-03-18T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-10-10T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2011-09-22T00:00:00Z', 'grade': 'A', 'score': 2}], 'name': 'White Castle', 'restaurant_id': '40362344'}}, {'_id': '5cb032da23d35d00011180b8', 'Restaurant': {'address': {'building': '103-05', 'coord': [-73.8642349, 40.75356], 'street': '37 Avenue', 'zipcode': '11368'}, 'borough': 'Queens', 'cuisine': 'Chinese', 'grades': [{'date': '2014-04-21T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-11-12T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2013-06-04T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-11-14T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-10-11T00:00:00Z', 'grade': 'P', 'score': 0}, {'date': '2012-05-24T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-12-08T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-07-20T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Ho Mei Restaurant', 'restaurant_id': '40362432'}}, {'_id': '5cb032da23d35d00011180b9', 'Restaurant': {'address': {'building': '60', 'coord': [-74.0085357, 40.70620539999999], 'street': 'Wall Street', 'zipcode': '10005'}, 'borough': 'Manhattan', 'cuisine': 'Turkish', 'grades': [{'date': '2014-09-26T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-09-18T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-09-21T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-05-09T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'The Country Cafe', 'restaurant_id': '40362715'}}, {'_id': '5cb032da23d35d00011180ba', 'Restaurant': {'address': {'building': '195', 'coord': [-73.9246028, 40.6522396], 'street': 'East 56 Street', 'zipcode': '11203'}, 'borough': 'Brooklyn', 'cuisine': 'Caribbean', 'grades': [{'date': '2014-05-13T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-05-08T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-09-22T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-06-06T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': "Shashemene Int'L Restaura", 'restaurant_id': '40362869'}}, {'_id': '5cb032da23d35d00011180bb', 'Restaurant': {'address': {'building': '107', 'coord': [-74.00920839999999, 40.7132925], 'street': 'Church Street', 'zipcode': '10007'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-07-18T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-02-26T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-08-26T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-02-01T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-01-17T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-10-18T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Downtown Deli', 'restaurant_id': '40363021'}}, {'_id': '5cb032da23d35d00011180bc', 'Restaurant': {'address': {'building': '1006', 'coord': [-73.84856870000002, 40.8903781], 'street': 'East 233 Street', 'zipcode': '10466'}, 'borough': 'Bronx', 'cuisine': 'Ice Cream, Gelato, Yogurt, Ices', 'grades': [{'date': '2014-04-24T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-09-05T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-02-21T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-07-03T00:00:00Z', 
'grade': 'A', 'score': 11}, {'date': '2011-07-11T00:00:00Z', 'grade': 'A', 'score': 5}], 'name': 'Carvel Ice Cream', 'restaurant_id': '40363093'}}, {'_id': '5cb032da23d35d00011180bd', 'Restaurant': {'address': {'building': '56', 'coord': [-73.991495, 40.692273], 'street': 'Court Street', 'zipcode': '11201'}, 'borough': 'Brooklyn', 'cuisine': 'Donuts', 'grades': [{'date': '2014-12-30T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2014-01-15T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-01-08T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-01-19T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': "Dunkin' Donuts", 'restaurant_id': '40363098'}}, {'_id': '5cb032da23d35d00011180be', 'Restaurant': {'address': {'building': '7615', 'coord': [-74.0228449, 40.6281815], 'street': '5 Avenue', 'zipcode': '11209'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-12-04T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-10-24T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-04-18T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2012-04-05T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Mejlander & Mulgannon', 'restaurant_id': '40363117'}}, {'_id': '5cb032da23d35d00011180bf', 'Restaurant': {'address': {'building': '120', 'coord': [-73.9998042, 40.7251256], 'street': 'Prince Street', 'zipcode': '10012'}, 'borough': 'Manhattan', 'cuisine': 'Bakery', 'grades': [{'date': '2014-10-17T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-09-18T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-04-30T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-04-20T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2011-12-19T00:00:00Z', 'grade': 'A', 'score': 3}], 'name': "Olive'S", 'restaurant_id': '40363151'}}, {'_id': '5cb032da23d35d00011180c0', 'Restaurant': {'address': {'building': '1236', 'coord': [-73.8893654, 40.81376179999999], 'street': '238 Spofford Ave', 'zipcode': '10474'}, 'borough': 'Bronx', 'cuisine': 'Chinese', 'grades': [{'date': '2013-12-30T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2013-01-08T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-06-12T00:00:00Z', 'grade': 'B', 'score': 15}], 'name': 'Happy Garden', 'restaurant_id': '40363289'}}, {'_id': '5cb032da23d35d00011180c1', 'Restaurant': {'address': {'building': '625', 'coord': [-73.990494, 40.7569545], 'street': '8 Avenue', 'zipcode': '10018'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-06-09T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-01-10T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-12-07T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2011-12-13T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-09-09T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Cafe Metro', 'restaurant_id': '40363298'}}, {'_id': '5cb032da23d35d00011180c2', 'Restaurant': {'address': {'building': '1069', 'coord': [-73.902463, 40.694924], 'street': 'Wyckoff Avenue', 'zipcode': '11385'}, 'borough': 'Queens', 'cuisine': 'Delicatessen', 'grades': [{'date': '2014-05-08T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-12-12T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2013-06-21T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-12-24T00:00:00Z', 'grade': 'B', 'score': 25}, {'date': '2011-10-19T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-06-15T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': "Tony'S Deli", 'restaurant_id': '40363333'}}, {'_id': '5cb032da23d35d00011180c3', 'Restaurant': {'address': {'building': '405', 'coord': 
[-73.97534999999999, 40.7516269], 'street': 'Lexington Avenue', 'zipcode': '10174'}, 'borough': 'Manhattan', 'cuisine': 'Sandwiches/Salads/Mixed Buffet', 'grades': [{'date': '2014-02-21T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2013-09-13T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2012-08-28T00:00:00Z', 'grade': 'A', 'score': 0}, {'date': '2011-09-13T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-05-03T00:00:00Z', 'grade': 'A', 'score': 5}], 'name': 'Lexler Deli', 'restaurant_id': '40363426'}}, {'_id': '5cb032da23d35d00011180c4', 'Restaurant': {'address': {'building': '2491', 'coord': [-74.1459332, 40.6103714], 'street': 'Victory Boulevard', 'zipcode': '10314'}, 'borough': 'Staten Island', 'cuisine': 'Delicatessen', 'grades': [{'date': '2015-01-09T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2013-12-05T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-06-19T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-01-08T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Bagels N Buns', 'restaurant_id': '40363427'}}, {'_id': '5cb032da23d35d00011180c5', 'Restaurant': {'address': {'building': '7905', 'coord': [-73.8740217, 40.7135015], 'street': 'Metropolitan Avenue', 'zipcode': '11379'}, 'borough': 'Queens', 'cuisine': 'Bagels/Pretzels', 'grades': [{'date': '2014-09-17T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2014-01-16T00:00:00Z', 'grade': 'B', 'score': 23}, {'date': '2013-08-07T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-02-21T00:00:00Z', 'grade': 'B', 'score': 27}, {'date': '2012-06-20T00:00:00Z', 'grade': 'B', 'score': 27}, {'date': '2012-01-31T00:00:00Z', 'grade': 'B', 'score': 18}], 'name': 'Hot Bagels', 'restaurant_id': '40363565'}}, {'_id': '5cb032da23d35d00011180c6', 'Restaurant': {'address': {'building': '87-69', 'coord': [-73.8309503, 40.7001121], 'street': 'Lefferts Boulevard', 'zipcode': '11418'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-02-25T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2013-08-14T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-08-07T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-03-26T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-11-04T00:00:00Z', 'grade': 'A', 'score': 0}, {'date': '2011-06-29T00:00:00Z', 'grade': 'A', 'score': 4}], 'name': 'Snack Time Grill', 'restaurant_id': '40363590'}}, {'_id': '5cb032da23d35d00011180c7', 'Restaurant': {'address': {'building': '1418', 'coord': [-73.95685019999999, 40.7753401], 'street': 'Third Avenue', 'zipcode': '10028'}, 'borough': 'Manhattan', 'cuisine': 'Continental', 'grades': [{'date': '2014-06-02T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-12-27T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2013-03-18T00:00:00Z', 'grade': 'B', 'score': 26}, {'date': '2012-02-01T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2011-07-06T00:00:00Z', 'grade': 'B', 'score': 25}], 'name': "Lorenzo & Maria'S", 'restaurant_id': '40363630'}}, {'_id': '5cb032da23d35d00011180c8', 'Restaurant': {'address': {'building': '464', 'coord': [-73.9791458, 40.744328], 'street': '3 Avenue', 'zipcode': '10016'}, 'borough': 'Manhattan', 'cuisine': 'Pizza', 'grades': [{'date': '2014-08-05T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2014-03-06T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-07-09T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-01-30T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2012-01-05T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2011-09-26T00:00:00Z', 'grade': 'A', 'score': 0}], 'name': 
"Domino'S Pizza", 'restaurant_id': '40363644'}}, {'_id': '5cb032da23d35d00011180c9', 'Restaurant': {'address': {'building': '437', 'coord': [-73.975393, 40.757365], 'street': 'Madison Avenue', 'zipcode': '10022'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-06-03T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-06-07T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2012-06-29T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-02-06T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-06-23T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Berkely', 'restaurant_id': '40363685'}}, {'_id': '5cb032da23d35d00011180ca', 'Restaurant': {'address': {'building': '1031', 'coord': [-73.9075537, 40.6438684], 'street': 'East 92 Street', 'zipcode': '11236'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-02-05T00:00:00Z', 'grade': 'A', 'score': 0}, {'date': '2013-01-29T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2011-12-08T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': "Sonny'S Heros", 'restaurant_id': '40363744'}}, {'_id': '5cb032da23d35d00011180cb', 'Restaurant': {'address': {'building': '1111', 'coord': [-74.0796436, 40.59878339999999], 'street': 'Hylan Boulevard', 'zipcode': '10305'}, 'borough': 'Staten Island', 'cuisine': 'Ice Cream, Gelato, Yogurt, Ices', 'grades': [{'date': '2014-04-24T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-02-26T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2012-02-02T00:00:00Z', 'grade': 'A', 'score': 2}], 'name': 'Carvel Ice Cream', 'restaurant_id': '40363834'}}, {'_id': '5cb032da23d35d00011180cc', 'Restaurant': {'address': {'building': '976', 'coord': [-73.92701509999999, 40.6620192], 'street': 'Rutland Road', 'zipcode': '11212'}, 'borough': 'Brooklyn', 'cuisine': 'Chinese', 'grades': [{'date': '2014-04-23T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-03-26T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-03-13T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2011-11-16T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Golden Pavillion', 'restaurant_id': '40363920'}}, {'_id': '5cb032da23d35d00011180cd', 'Restaurant': {'address': {'building': '148', 'coord': [-73.9806854, 40.7778589], 'street': 'West 72 Street', 'zipcode': '10023'}, 'borough': 'Manhattan', 'cuisine': 'Pizza', 'grades': [{'date': '2014-12-08T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2014-05-05T00:00:00Z', 'grade': 'B', 'score': 18}, {'date': '2013-04-05T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-03-30T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': "Domino'S Pizza", 'restaurant_id': '40363945'}}, {'_id': '5cb032da23d35d00011180ce', 'Restaurant': {'address': {'building': '364', 'coord': [-73.96084119999999, 40.8014307], 'street': 'West 110 Street', 'zipcode': '10025'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-09-04T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2014-02-26T00:00:00Z', 'grade': 'B', 'score': 23}, {'date': '2013-03-25T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-02-21T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': 'Spoon Bread Catering', 'restaurant_id': '40364179'}}, {'_id': '5cb032da23d35d00011180cf', 'Restaurant': {'address': {'building': '1423', 'coord': [-73.9615132, 40.6253268], 'street': 'Avenue J', 'zipcode': '11230'}, 'borough': 'Brooklyn', 'cuisine': 'Jewish/Kosher', 'grades': [{'date': '2014-12-19T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-12-05T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': 
'2012-12-06T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': 'Kosher Bagel Hole', 'restaurant_id': '40364220'}}, {'_id': '5cb032da23d35d00011180d0', 'Restaurant': {'address': {'building': '0', 'coord': [-84.2040813, 9.9986585], 'street': 'Guardia Airport Parking', 'zipcode': '11371'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-05-16T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-05-10T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-05-15T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-11-02T00:00:00Z', 'grade': 'C', 'score': 32}], 'name': 'Terminal Cafe/Yankee Clipper', 'restaurant_id': '40364262'}}, {'_id': '5cb032da23d35d00011180d1', 'Restaurant': {'address': {'building': '73', 'coord': [-74.1178949, 40.5734906], 'street': 'New Dorp Plaza', 'zipcode': '10306'}, 'borough': 'Staten Island', 'cuisine': 'Delicatessen', 'grades': [{'date': '2014-11-18T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-11-07T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-04-24T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-03-20T00:00:00Z', 'grade': 'A', 'score': 5}], 'name': 'Plaza Bagels & Deli', 'restaurant_id': '40364286'}}, {'_id': '5cb032da23d35d00011180d2', 'Restaurant': {'address': {'building': '277', 'coord': [-73.8941893, 40.8634684], 'street': 'East Kingsbridge Road', 'zipcode': '10458'}, 'borough': 'Bronx', 'cuisine': 'Chinese', 'grades': [{'date': '2014-03-03T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-09-26T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-03-19T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-08-29T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-08-17T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Happy Garden', 'restaurant_id': '40364296'}}, {'_id': '5cb032da23d35d00011180d3', 'Restaurant': {'address': {'building': '203', 'coord': [-74.15235919999999, 40.5563756], 'street': 'Giffords Lane', 'zipcode': '10308'}, 'borough': 'Staten Island', 'cuisine': 'Delicatessen', 'grades': [{'date': '2015-01-05T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2014-09-11T00:00:00Z', 'grade': 'C', 'score': 39}, {'date': '2014-03-20T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-01-24T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-05-23T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': 'B & M Hot Bagel & Grocery', 'restaurant_id': '40364299'}}, {'_id': '5cb032da23d35d00011180d4', 'Restaurant': {'address': {'building': '94', 'coord': [-74.0061936, 40.7092038], 'street': 'Fulton Street', 'zipcode': '10038'}, 'borough': 'Manhattan', 'cuisine': 'Chicken', 'grades': [{'date': '2015-01-06T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-07-15T00:00:00Z', 'grade': 'C', 'score': 48}, {'date': '2013-05-02T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-09-24T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-04-19T00:00:00Z', 'grade': 'A', 'score': 7}], 'name': 'Texas Rotisserie', 'restaurant_id': '40364304'}}, {'_id': '5cb032da23d35d00011180d5', 'Restaurant': {'address': {'building': '10004', 'coord': [-74.03400479999999, 40.6127077], 'street': '4 Avenue', 'zipcode': '11209'}, 'borough': 'Brooklyn', 'cuisine': 'Italian', 'grades': [{'date': '2014-02-25T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-06-27T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-12-03T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-11-09T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Philadelhia Grille Express', 'restaurant_id': '40364305'}}, {'_id': '5cb032da23d35d00011180d6', 
'Restaurant': {'address': {'building': '178', 'coord': [-73.96252129999999, 40.7098035], 'street': 'Broadway', 'zipcode': '11211'}, 'borough': 'Brooklyn', 'cuisine': 'Steak', 'grades': [{'date': '2014-03-08T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-09-28T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-03-26T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2012-09-10T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-08-15T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Peter Luger Steakhouse', 'restaurant_id': '40364335'}}, {'_id': '5cb032da23d35d00011180d7', 'Restaurant': {'address': {'building': '1', 'coord': [-73.97166039999999, 40.764832], 'street': 'East 60 Street', 'zipcode': '10022'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-10-16T00:00:00Z', 'grade': 'B', 'score': 24}, {'date': '2014-05-02T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2013-04-02T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-10-19T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-04-27T00:00:00Z', 'grade': 'B', 'score': 17}, {'date': '2011-11-29T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Metropolitan Club', 'restaurant_id': '40364347'}}, {'_id': '5cb032da23d35d00011180d8', 'Restaurant': {'address': {'building': '837', 'coord': [-73.9712, 40.751703], 'street': '2 Avenue', 'zipcode': '10017'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-07-22T00:00:00Z', 'grade': 'B', 'score': 19}, {'date': '2013-09-26T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-02-26T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-04-30T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2011-10-05T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Palm Restaurant', 'restaurant_id': '40364355'}}, {'_id': '5cb032da23d35d00011180d9', 'Restaurant': {'address': {'building': '21', 'coord': [-73.9774394, 40.7604522], 'street': 'West 52 Street', 'zipcode': '10019'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-05-14T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-08-13T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-04-04T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': '21 Club', 'restaurant_id': '40364362'}}, {'_id': '5cb032da23d35d00011180da', 'Restaurant': {'address': {'building': '658', 'coord': [-73.81363999999999, 40.82941100000001], 'street': 'Clarence Ave', 'zipcode': '10465'}, 'borough': 'Bronx', 'cuisine': 'American', 'grades': [{'date': '2014-06-21T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2012-07-11T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': 'Manhem Club', 'restaurant_id': '40364363'}}, {'_id': '5cb032da23d35d00011180db', 'Restaurant': {'address': {'building': '1028', 'coord': [-73.966032, 40.762832], 'street': '3 Avenue', 'zipcode': '10065'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-09-16T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2014-02-24T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-05-03T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-08-20T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-02-13T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': 'Isle Of Capri Resturant', 'restaurant_id': '40364373'}}, {'_id': '5cb032da23d35d00011180dc', 'Restaurant': {'address': {'building': '45', 'coord': [-73.9891878, 40.7375638], 'street': 'East 18 Street', 'zipcode': '10003'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-10-08T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': 
'2013-10-10T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-04-24T00:00:00Z', 'grade': 'C', 'score': 36}, {'date': '2012-01-09T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': 'Old Town Bar & Restaurant', 'restaurant_id': '40364389'}}, {'_id': '5cb032da23d35d00011180dd', 'Restaurant': {'address': {'building': '261', 'coord': [-73.94839189999999, 40.7224876], 'street': 'Driggs Avenue', 'zipcode': '11222'}, 'borough': 'Brooklyn', 'cuisine': 'Polish', 'grades': [{'date': '2014-05-31T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-05-10T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2012-02-17T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2011-10-14T00:00:00Z', 'grade': 'C', 'score': 54}], 'name': 'Polish National Home', 'restaurant_id': '40364404'}}, {'_id': '5cb032da23d35d00011180de', 'Restaurant': {'address': {'building': '62', 'coord': [-74.00310999999999, 40.7348888], 'street': 'Charles Street', 'zipcode': '10014'}, 'borough': 'Manhattan', 'cuisine': 'Latin (Cuban, Dominican, Puerto Rican, South & Central American)', 'grades': [{'date': '2014-05-02T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-05-20T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-05-24T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-01-18T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-10-03T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': 'Seville Restaurant', 'restaurant_id': '40364439'}}, {'_id': '5cb032da23d35d00011180df', 'Restaurant': {'address': {'building': '100', 'coord': [-74.0010484, 40.71599000000001], 'street': 'Centre Street', 'zipcode': '10013'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-03-03T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2013-03-08T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-03-09T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-03-31T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Criminal Court Bldg Cafeteria', 'restaurant_id': '40364443'}}, {'_id': '5cb032da23d35d00011180e0', 'Restaurant': {'address': {'building': '657', 'coord': [-73.9056678, 40.7066898], 'street': 'Fairview Avenue', 'zipcode': '11385'}, 'borough': 'Queens', 'cuisine': 'German', 'grades': [{'date': '2014-03-15T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-03-12T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-07-21T00:00:00Z', 'grade': 'B', 'score': 27}, {'date': '2011-11-25T00:00:00Z', 'grade': 'B', 'score': 24}, {'date': '2011-06-22T00:00:00Z', 'grade': 'B', 'score': 20}], 'name': 'Gottscheer Hall', 'restaurant_id': '40364449'}}, {'_id': '5cb032da23d35d00011180e1', 'Restaurant': {'address': {'building': '180', 'coord': [-73.9788694, 40.7665961], 'street': 'Central Park South', 'zipcode': '10019'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-12-15T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-08-07T00:00:00Z', 'grade': 'C', 'score': 40}, {'date': '2013-07-29T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-12-13T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-07-30T00:00:00Z', 'grade': 'C', 'score': 4}, {'date': '2012-02-16T00:00:00Z', 'grade': 'A', 'score': 2}], 'name': 'Nyac Main Dining Room', 'restaurant_id': '40364467'}}, {'_id': '5cb032da23d35d00011180e2', 'Restaurant': {'address': {'building': '108', 'coord': [-73.98146, 40.7250067], 'street': 'Avenue B', 'zipcode': '10009'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-07-14T00:00:00Z', 'grade': 'B', 'score': 17}, {'date': '2013-12-31T00:00:00Z', 'grade': 'A', 'score': 12}, 
{'date': '2012-10-22T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-05-07T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-10-14T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': '7B Bar', 'restaurant_id': '40364518'}}, {'_id': '5cb032da23d35d00011180e3', 'Restaurant': {'address': {'building': '96-40', 'coord': [-73.86137149999999, 40.7293762], 'street': 'Queens Boulevard', 'zipcode': '11374'}, 'borough': 'Queens', 'cuisine': 'Jewish/Kosher', 'grades': [{'date': '2014-03-13T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-09-30T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-04-26T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-09-11T00:00:00Z', 'grade': 'B', 'score': 24}, {'date': '2011-09-19T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-03-17T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Ben-Best Deli & Restaurant', 'restaurant_id': '40364529'}}, {'_id': '5cb032da23d35d00011180e4', 'Restaurant': {'address': {'building': '215', 'coord': [-73.9805679, 40.7659436], 'street': 'West 57 Street', 'zipcode': '10019'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-09-25T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-02-14T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-08-09T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-02-22T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2012-02-16T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': 'Cafe Atelier (Art Students League)', 'restaurant_id': '40364531'}}, {'_id': '5cb032da23d35d00011180e5', 'Restaurant': {'address': {'building': '845', 'coord': [-73.965531, 40.765431], 'street': 'Lexington Avenue', 'zipcode': '10065'}, 'borough': 'Manhattan', 'cuisine': 'Steak', 'grades': [{'date': '2014-03-26T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-03-21T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-10-18T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-05-07T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2011-05-17T00:00:00Z', 'grade': 'A', 'score': 5}], 'name': "Donohue'S Steak House", 'restaurant_id': '40364572'}}, {'_id': '5cb032da23d35d00011180e6', 'Restaurant': {'address': {'building': '311', 'coord': [-73.98621899999999, 40.763406], 'street': 'West 51 Street', 'zipcode': '10019'}, 'borough': 'Manhattan', 'cuisine': 'French', 'grades': [{'date': '2014-11-10T00:00:00Z', 'grade': 'B', 'score': 15}, {'date': '2014-04-03T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-07-17T00:00:00Z', 'grade': 'C', 'score': 36}, {'date': '2013-02-06T00:00:00Z', 'grade': 'B', 'score': 22}, {'date': '2012-07-16T00:00:00Z', 'grade': 'C', 'score': 36}, {'date': '2012-03-08T00:00:00Z', 'grade': 'C', 'score': 7}], 'name': 'Tout Va Bien', 'restaurant_id': '40364576'}}, {'_id': '5cb032da23d35d00011180e7', 'Restaurant': {'address': {'building': '386', 'coord': [-73.9818918, 40.6901211], 'street': 'Flatbush Avenue Extension', 'zipcode': '11201'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-11-14T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2014-03-10T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2013-01-10T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2012-09-04T00:00:00Z', 'grade': 'A', 'score': 7}], 'name': "Junior'S", 'restaurant_id': '40364581'}}, {'_id': '5cb032da23d35d00011180e8', 'Restaurant': {'address': {'building': '37', 'coord': [-74.138263, 40.546681], 'street': 'Mansion Ave', 'zipcode': '10308'}, 'borough': 'Staten Island', 'cuisine': 'American', 'grades': [{'date': '2014-04-22T00:00:00Z', 'grade': 'A', 
'score': 10}, {'date': '2013-09-25T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2012-06-09T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': 'Great Kills Yacht Club', 'restaurant_id': '40364610'}}, {'_id': '5cb032da23d35d00011180e9', 'Restaurant': {'address': {'building': '251', 'coord': [-73.9775552, 40.7432016], 'street': 'East 31 Street', 'zipcode': '10016'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-04-22T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-06-19T00:00:00Z', 'grade': 'C', 'score': 32}, {'date': '2012-05-22T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Marchis Restaurant', 'restaurant_id': '40364668'}}, {'_id': '5cb032da23d35d00011180ea', 'Restaurant': {'address': {'building': '2602', 'coord': [-73.95443709999999, 40.5877993], 'street': 'East 15 Street', 'zipcode': '11235'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-05-14T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-04-27T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-11-23T00:00:00Z', 'grade': 'B', 'score': 27}, {'date': '2012-03-14T00:00:00Z', 'grade': 'B', 'score': 17}, {'date': '2011-07-14T00:00:00Z', 'grade': 'B', 'score': 21}], 'name': 'Towne Cafe', 'restaurant_id': '40364681'}}, {'_id': '5cb032da23d35d00011180eb', 'Restaurant': {'address': {'building': '.1-A', 'coord': [-48.9424, -16.3550032], 'street': 'East 77 St', 'zipcode': '10021'}, 'borough': 'Manhattan', 'cuisine': 'Continental', 'grades': [{'date': '2014-11-24T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-10-10T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-04-03T00:00:00Z', 'grade': 'B', 'score': 18}, {'date': '2012-10-02T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-04-25T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': 'Dining Room', 'restaurant_id': '40364691'}}, {'_id': '5cb032da23d35d00011180ec', 'Restaurant': {'address': {'building': '56', 'coord': [-74.004758, 40.741207], 'street': '9 Avenue', 'zipcode': '10011'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-06-10T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-06-10T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-09-26T00:00:00Z', 'grade': 'B', 'score': 24}, {'date': '2012-05-24T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-11-14T00:00:00Z', 'grade': 'A', 'score': 2}], 'name': 'Old Homestead', 'restaurant_id': '40364715'}}, {'_id': '5cb032da23d35d00011180ed', 'Restaurant': {'address': {'building': '156-71', 'coord': [-73.840437, 40.6627235], 'street': 'Crossbay Boulevard', 'zipcode': '11414'}, 'borough': 'Queens', 'cuisine': 'Pizza/Italian', 'grades': [{'date': '2014-10-29T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-10-30T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-06-12T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-03-27T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': 'New Park Pizzeria & Restaurant', 'restaurant_id': '40364744'}}, {'_id': '5cb032da23d35d00011180ee', 'Restaurant': {'address': {'building': '600', 'coord': [-73.7522366, 40.7766941], 'street': 'West Drive', 'zipcode': '11363'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2013-12-04T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-06-13T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-12-06T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-04-12T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2011-07-30T00:00:00Z', 'grade': 'B', 'score': 23}], 'name': 'Douglaston Club', 'restaurant_id': '40364858'}}, {'_id': 
'5cb032da23d35d00011180ef', 'Restaurant': {'address': {'building': '225', 'coord': [-73.96485799999999, 40.761899], 'street': 'East 60 Street', 'zipcode': '10022'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-08-11T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2014-03-14T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2013-01-16T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-07-12T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': 'Serendipity 3', 'restaurant_id': '40364863'}}, {'_id': '5cb032da23d35d00011180f0', 'Restaurant': {'address': {'building': '461', 'coord': [-74.002944, 40.652779], 'street': '37 Street', 'zipcode': '11232'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-11-28T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-12-04T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-12-06T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-12-06T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Melody Lanes', 'restaurant_id': '40364889'}}, {'_id': '5cb032da23d35d00011180f1', 'Restaurant': {'address': {'building': '30-13', 'coord': [-73.9151096, 40.763377], 'street': 'Steinway Street', 'zipcode': '11103'}, 'borough': 'Queens', 'cuisine': 'Pizza', 'grades': [{'date': '2014-10-06T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-10-10T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-10-24T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-06-13T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-01-17T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': "Rizzo'S Fine Pizza", 'restaurant_id': '40364920'}}, {'_id': '5cb032da23d35d00011180f2', 'Restaurant': {'address': {'building': '2222', 'coord': [-73.84971759999999, 40.8304811], 'street': 'Haviland Avenue', 'zipcode': '10462'}, 'borough': 'Bronx', 'cuisine': 'American', 'grades': [{'date': '2014-12-18T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2014-05-01T00:00:00Z', 'grade': 'B', 'score': 17}, {'date': '2013-03-14T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-09-20T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-02-08T00:00:00Z', 'grade': 'B', 'score': 19}], 'name': 'The New Starling Athletic Club Of The Bronx', 'restaurant_id': '40364956'}}, {'_id': '5cb032da23d35d00011180f3', 'Restaurant': {'address': {'building': '567', 'coord': [-74.00619499999999, 40.735663], 'street': 'Hudson Street', 'zipcode': '10014'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-07-28T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-07-25T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2013-02-05T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-05-29T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2011-12-23T00:00:00Z', 'grade': 'A', 'score': 5}], 'name': 'White Horse Tavern', 'restaurant_id': '40364958'}}, {'_id': '5cb032da23d35d00011180f4', 'Restaurant': {'address': {'building': '67', 'coord': [-74.0707363, 40.59321569999999], 'street': 'Olympia Boulevard', 'zipcode': '10305'}, 'borough': 'Staten Island', 'cuisine': 'Italian', 'grades': [{'date': '2014-04-24T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-04-04T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-02-02T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2011-07-23T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Crystal Room', 'restaurant_id': '40365013'}}, {'_id': '5cb032da23d35d00011180f5', 'Restaurant': {'address': {'building': '390', 'coord': [-74.07444319999999, 40.6096914], 'street': 'Hylan Boulevard', 'zipcode': '10305'}, 
'borough': 'Staten Island', 'cuisine': 'American', 'grades': [{'date': '2014-06-21T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-06-15T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-08-30T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-10-07T00:00:00Z', 'grade': 'B', 'score': 20}], 'name': "Labetti'S Post # 2159", 'restaurant_id': '40365022'}}, {'_id': '5cb032da23d35d00011180f6', 'Restaurant': {'address': {'building': '1', 'coord': [-73.9727638, 40.588853], 'street': 'Bouck Court', 'zipcode': '11223'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-12-10T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2014-06-25T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-10-23T00:00:00Z', 'grade': 'C', 'score': 28}, {'date': '2013-03-21T00:00:00Z', 'grade': 'C', 'score': 29}, {'date': '2012-06-15T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': 'Shell Lanes', 'restaurant_id': '40365043'}}, {'_id': '5cb032da23d35d00011180f7', 'Restaurant': {'address': {'building': '15', 'coord': [-73.9896713, 40.7287978], 'street': 'East 7 Street', 'zipcode': '10003'}, 'borough': 'Manhattan', 'cuisine': 'Irish', 'grades': [{'date': '2014-06-07T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2014-01-09T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-06-12T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-05-21T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-01-11T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-08-11T00:00:00Z', 'grade': 'B', 'score': 16}], 'name': "Mcsorley'S Old Ale House", 'restaurant_id': '40365075'}}, {'_id': '5cb032da23d35d00011180f8', 'Restaurant': {'address': {'building': '93', 'coord': [-73.99950489999999, 40.7169224], 'street': 'Baxter Street', 'zipcode': '10013'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-12-15T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-12-06T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-10-23T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-06-04T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-01-12T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Forlinis Restaurant', 'restaurant_id': '40365098'}}, {'_id': '5cb032da23d35d00011180f9', 'Restaurant': {'address': {'building': '6736', 'coord': [-74.2274942, 40.5071996], 'street': 'Hylan Boulevard', 'zipcode': '10309'}, 'borough': 'Staten Island', 'cuisine': 'American', 'grades': [{'date': '2014-08-13T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2013-08-20T00:00:00Z', 'grade': 'B', 'score': 19}, {'date': '2012-06-18T00:00:00Z', 'grade': 'A', 'score': 6}], 'name': 'South Shore Swimming Club', 'restaurant_id': '40365120'}}, {'_id': '5cb032da23d35d00011180fa', 'Restaurant': {'address': {'building': '331', 'coord': [-74.0037823, 40.7380122], 'street': 'West 4 Street', 'zipcode': '10014'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-11-17T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-06-27T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2013-05-15T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2012-05-09T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Corner Bistro', 'restaurant_id': '40365166'}}, {'_id': '5cb032da23d35d00011180fb', 'Restaurant': {'address': {'building': '1449', 'coord': [-73.94933739999999, 40.6509823], 'street': 'Nostrand Avenue', 'zipcode': '11226'}, 'borough': 'Brooklyn', 'cuisine': 'Donuts', 'grades': [{'date': '2014-10-21T00:00:00Z', 'grade': 'B', 'score': 16}, {'date': '2014-05-21T00:00:00Z', 'grade': 'A', 'score': 5}, 
{'date': '2013-05-02T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-12-03T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-05-09T00:00:00Z', 'grade': 'B', 'score': 16}, {'date': '2011-12-14T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Nostrand Donut Shop', 'restaurant_id': '40365226'}}, {'_id': '5cb032da23d35d00011180fc', 'Restaurant': {'address': {'building': '1616', 'coord': [-73.952449, 40.776325], 'street': '2 Avenue', 'zipcode': '10028'}, 'borough': 'Manhattan', 'cuisine': 'Irish', 'grades': [{'date': '2014-02-28T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-08-30T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-08-27T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2011-09-14T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': "Dorrian'S Red Hand Restaurant", 'restaurant_id': '40365239'}}, {'_id': '5cb032da23d35d00011180fd', 'Restaurant': {'address': {'building': '3', 'coord': [-73.97557069999999, 40.7596796], 'street': 'East 52 Street', 'zipcode': '10022'}, 'borough': 'Manhattan', 'cuisine': 'French', 'grades': [{'date': '2014-04-09T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-03-05T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-02-02T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'La Grenouille', 'restaurant_id': '40365264'}}, {'_id': '5cb032da23d35d00011180fe', 'Restaurant': {'address': {'building': '4035', 'coord': [-73.9395182, 40.8422945], 'street': 'Broadway', 'zipcode': '10032'}, 'borough': 'Manhattan', 'cuisine': 'Pizza', 'grades': [{'date': '2014-02-10T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-02-04T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-01-04T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2011-09-15T00:00:00Z', 'grade': 'C', 'score': 60}], 'name': 'Como Pizza', 'restaurant_id': '40365280'}}, {'_id': '5cb032da23d35d00011180ff', 'Restaurant': {'address': {'building': '842', 'coord': [-73.97063700000001, 40.751495], 'street': '2 Avenue', 'zipcode': '10017'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-07-22T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2013-05-28T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-05-29T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-01-05T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-08-10T00:00:00Z', 'grade': 'B', 'score': 24}], 'name': 'Keats Restaurant', 'restaurant_id': '40365288'}}, {'_id': '5cb032da23d35d0001118100', 'Restaurant': {'address': {'building': '146', 'coord': [-73.9973041, 40.7188698], 'street': 'Mulberry Street', 'zipcode': '10013'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-05-02T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-03-14T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-09-26T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-02-15T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-09-15T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Angelo Of Mulberry St.', 'restaurant_id': '40365293'}}, {'_id': '5cb032da23d35d0001118101', 'Restaurant': {'address': {'building': '103', 'coord': [-74.001043, 40.729795], 'street': 'Macdougal Street', 'zipcode': '10012'}, 'borough': 'Manhattan', 'cuisine': 'Mexican', 'grades': [{'date': '2014-05-22T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-10-10T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-03-20T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-05-17T00:00:00Z', 'grade': 'B', 'score': 20}], 'name': "Panchito'S", 'restaurant_id': '40365348'}}, {'_id': '5cb032da23d35d0001118102', 
'Restaurant': {'address': {'building': '7201', 'coord': [-74.0166091, 40.6284767], 'street': '8 Avenue', 'zipcode': '11228'}, 'borough': 'Brooklyn', 'cuisine': 'Italian', 'grades': [{'date': '2014-12-04T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2014-02-19T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-07-09T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-06-06T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-12-19T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'New Corner', 'restaurant_id': '40365355'}}, {'_id': '5cb032da23d35d0001118103', 'Restaurant': {'address': {'building': '15', 'coord': [-73.98126069999999, 40.7547107], 'street': 'West 43 Street', 'zipcode': '10036'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2015-01-15T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2014-07-07T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2014-01-14T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-07-19T00:00:00Z', 'grade': 'C', 'score': 29}, {'date': '2013-02-05T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'The Princeton Club', 'restaurant_id': '40365361'}}, {'_id': '5cb032da23d35d0001118104', 'Restaurant': {'address': {'building': '106', 'coord': [-74.0003315, 40.7274874], 'street': 'West Houston Street', 'zipcode': '10012'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-03-31T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-10-08T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-03-29T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-09-05T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-03-05T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': "Arturo'S", 'restaurant_id': '40365387'}}, {'_id': '5cb032da23d35d0001118105', 'Restaurant': {'address': {'building': '405', 'coord': [-73.9646207, 40.7550069], 'street': 'East 52 Street', 'zipcode': '10022'}, 'borough': 'Manhattan', 'cuisine': 'French', 'grades': [{'date': '2014-07-14T00:00:00Z', 'grade': 'B', 'score': 14}, {'date': '2013-12-02T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-04-08T00:00:00Z', 'grade': 'B', 'score': 22}, {'date': '2012-09-17T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-04-03T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Le Perigord', 'restaurant_id': '40365414'}}, {'_id': '5cb032da23d35d0001118106', 'Restaurant': {'address': {'building': '4241', 'coord': [-73.9365108, 40.8497077], 'street': 'Broadway', 'zipcode': '10033'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-11-15T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2014-04-25T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2014-03-28T00:00:00Z', 'grade': 'P', 'score': 15}, {'date': '2013-08-19T00:00:00Z', 'grade': 'B', 'score': 23}, {'date': '2013-02-20T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2012-08-22T00:00:00Z', 'grade': 'B', 'score': 23}, {'date': '2012-01-30T00:00:00Z', 'grade': 'C', 'score': 48}], 'name': "Reynold'S Bar", 'restaurant_id': '40365423'}}, {'_id': '5cb032da23d35d0001118107', 'Restaurant': {'address': {'building': '1758', 'coord': [-74.1220973, 40.6129407], 'street': 'Victory Boulevard', 'zipcode': '10314'}, 'borough': 'Staten Island', 'cuisine': 'Pizza/Italian', 'grades': [{'date': '2014-11-20T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2014-01-13T00:00:00Z', 'grade': 'B', 'score': 14}, {'date': '2013-04-25T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-10-09T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2012-05-02T00:00:00Z', 'grade': 'B', 'score': 22}], 
'name': "Joe & Pat'S Pizzeria", 'restaurant_id': '40365454'}}, {'_id': '5cb032da23d35d0001118108', 'Restaurant': {'address': {'building': '113', 'coord': [-73.9979214, 40.7371344], 'street': 'West 13 Street', 'zipcode': '10011'}, 'borough': 'Manhattan', 'cuisine': 'Spanish', 'grades': [{'date': '2014-07-25T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-03-27T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-01-14T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-12-29T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-08-03T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Spain Restaurant & Bar', 'restaurant_id': '40365472'}}, {'_id': '5cb032da23d35d0001118109', 'Restaurant': {'address': {'building': '206', 'coord': [-73.9446421, 40.7253944], 'street': 'Nassau Avenue', 'zipcode': '11222'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-11-19T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2014-05-09T00:00:00Z', 'grade': 'B', 'score': 19}, {'date': '2013-06-13T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-10-17T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Palace Cafe', 'restaurant_id': '40365473'}}, {'_id': '5cb032da23d35d000111810a', 'Restaurant': {'address': {'building': '72', 'coord': [-73.92506, 40.8275556], 'street': 'East 161 Street', 'zipcode': '10451'}, 'borough': 'Bronx', 'cuisine': 'American', 'grades': [{'date': '2014-04-15T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-11-14T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2013-07-29T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-12-31T00:00:00Z', 'grade': 'B', 'score': 15}, {'date': '2012-05-30T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-01-09T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-08-15T00:00:00Z', 'grade': 'C', 'score': 37}], 'name': 'Yankee Tavern', 'restaurant_id': '40365499'}}, {'_id': '5cb032da23d35d000111810b', 'Restaurant': {'address': {'building': '203', 'coord': [-73.99987229999999, 40.7386361], 'street': 'West 14 Street', 'zipcode': '10011'}, 'borough': 'Manhattan', 'cuisine': 'Donuts', 'grades': [{'date': '2014-02-11T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-02-08T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-07-05T00:00:00Z', 'grade': 'B', 'score': 18}, {'date': '2012-02-22T00:00:00Z', 'grade': 'B', 'score': 16}, {'date': '2011-08-08T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-03-16T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Donut Pub', 'restaurant_id': '40365525'}}, {'_id': '5cb032da23d35d000111810c', 'Restaurant': {'address': {'building': '146', 'coord': [-74.0056649, 40.7452371], 'street': '10 Avenue', 'zipcode': '10011'}, 'borough': 'Manhattan', 'cuisine': 'Irish', 'grades': [{'date': '2014-10-02T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-09-10T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-02-05T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-11-23T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': "Moran'S Chelsea", 'restaurant_id': '40365526'}}, {'_id': '5cb032da23d35d000111810d', 'Restaurant': {'address': {'building': '229', 'coord': [-73.9590059, 40.7090147], 'street': 'Havemeyer Street', 'zipcode': '11211'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-08-18T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2014-01-08T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-05-20T00:00:00Z', 'grade': 'B', 'score': 21}, {'date': '2012-09-20T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-09-22T00:00:00Z', 
'grade': 'A', 'score': 11}], 'name': 'Reben Luncheonette', 'restaurant_id': '40365546'}}, {'_id': '5cb032da23d35d000111810e', 'Restaurant': {'address': {'building': '1024', 'coord': [-73.96392089999999, 40.8033908], 'street': 'Amsterdam Avenue', 'zipcode': '10025'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-06-12T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2014-01-09T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-06-25T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-06-01T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-12-15T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': 'V & T Restaurant', 'restaurant_id': '40365577'}}, {'_id': '5cb032da23d35d000111810f', 'Restaurant': {'address': {'building': '181-08', 'coord': [-73.7867565, 40.7271312], 'street': 'Union Turnpike', 'zipcode': '11366'}, 'borough': 'Queens', 'cuisine': 'Chinese', 'grades': [{'date': '2014-10-22T00:00:00Z', 'grade': 'B', 'score': 14}, {'date': '2014-04-09T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-07-13T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-01-02T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-12-13T00:00:00Z', 'grade': 'P', 'score': 2}, {'date': '2012-06-19T00:00:00Z', 'grade': 'C', 'score': 36}], 'name': 'King Yum Restaurant', 'restaurant_id': '40365592'}}, {'_id': '5cb032da23d35d0001118110', 'Restaurant': {'address': {'building': '8104', 'coord': [-73.8850023, 40.7494272], 'street': '37 Avenue', 'zipcode': '11372'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-07-07T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2014-02-06T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-08-14T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-03-20T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-02-28T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-10-25T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': "Jahn'S Restaurant", 'restaurant_id': '40365627'}}, {'_id': '5cb032da23d35d0001118111', 'Restaurant': {'address': {'building': '6322', 'coord': [-73.9896898, 40.6199526], 'street': '18 Avenue', 'zipcode': '11204'}, 'borough': 'Brooklyn', 'cuisine': 'Pizza', 'grades': [{'date': '2014-12-30T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-05-15T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-10-29T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-10-06T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-03-29T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'J&V Famous Pizza', 'restaurant_id': '40365632'}}, {'_id': '5cb032da23d35d0001118112', 'Restaurant': {'address': {'building': '910', 'coord': [-73.9799932, 40.7660886], 'street': 'Seventh Avenue', 'zipcode': '10019'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2015-01-08T00:00:00Z', 'grade': 'Z', 'score': 35}, {'date': '2014-06-02T00:00:00Z', 'grade': 'B', 'score': 19}, {'date': '2013-11-25T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-06-24T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-12-04T00:00:00Z', 'grade': 'B', 'score': 24}, {'date': '2012-06-14T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-02-24T00:00:00Z', 'grade': 'B', 'score': 21}], 'name': 'La Parisienne Diner', 'restaurant_id': '40365633'}}, {'_id': '5cb032da23d35d0001118113', 'Restaurant': {'address': {'building': '326', 'coord': [-73.989131, 40.760039], 'street': 'West 46 Street', 'zipcode': '10036'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-09-10T00:00:00Z', 'grade': 'A', 
'score': 7}, {'date': '2013-09-25T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2012-09-11T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2012-04-19T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2011-10-26T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Joe Allen Restaurant', 'restaurant_id': '40365644'}}, {'_id': '5cb032da23d35d0001118114', 'Restaurant': {'address': {'building': '3823', 'coord': [-74.16536339999999, 40.5450793], 'street': 'Richmond Avenue', 'zipcode': '10312'}, 'borough': 'Staten Island', 'cuisine': 'American', 'grades': [{'date': '2014-07-15T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2013-04-01T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-03-12T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': "Joyce'S Tavern", 'restaurant_id': '40365692'}}, {'_id': '5cb032da23d35d0001118115', 'Restaurant': {'address': {'building': '351', 'coord': [-73.96117869999999, 40.7619226], 'street': 'East 62 Street', 'zipcode': '10065'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-11-13T00:00:00Z', 'grade': 'B', 'score': 24}, {'date': '2014-02-28T00:00:00Z', 'grade': 'B', 'score': 19}, {'date': '2013-06-10T00:00:00Z', 'grade': 'B', 'score': 27}, {'date': '2012-05-09T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Il Vagabondo Restaurant', 'restaurant_id': '40365709'}}, {'_id': '5cb032da23d35d0001118116', 'Restaurant': {'address': {'building': '319321', 'coord': [-73.988948, 40.760337], 'street': '323 W. 46Th St.', 'zipcode': '10036'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-05-13T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-11-12T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-04-27T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-12-07T00:00:00Z', 'grade': 'A', 'score': 7}], 'name': 'Barbetta Restaurant', 'restaurant_id': '40365726'}}, {'_id': '5cb032da23d35d0001118117', 'Restaurant': {'address': {'building': '2911', 'coord': [-73.982241, 40.576366], 'street': 'West 15 Street', 'zipcode': '11224'}, 'borough': 'Brooklyn', 'cuisine': 'Italian', 'grades': [{'date': '2014-12-18T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2014-05-15T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-06-12T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-02-06T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': "Gargiulo'S Restaurant", 'restaurant_id': '40365784'}}, {'_id': '5cb032da23d35d0001118118', 'Restaurant': {'address': {'building': '236', 'coord': [-73.9827418, 40.7655827], 'street': 'West 56 Street', 'zipcode': '10019'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-05-05T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-08-12T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2012-08-13T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-02-28T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': "Patsy'S Italian Restaurant", 'restaurant_id': '40365789'}}, {'_id': '5cb032da23d35d0001118119', 'Restaurant': {'address': {'building': '10701', 'coord': [-73.856132, 40.743841], 'street': 'Corona Avenue', 'zipcode': '11368'}, 'borough': 'Queens', 'cuisine': 'Italian', 'grades': [{'date': '2014-07-17T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2014-02-25T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-03-27T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-02-07T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-12-28T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Parkside Restaurant', 'restaurant_id': '40365841'}}, {'_id': 
'5cb032da23d35d000111811a', 'Restaurant': {'address': {'building': '45-15', 'coord': [-73.91427200000001, 40.7569379], 'street': 'Broadway', 'zipcode': '11103'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2013-12-04T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-05-02T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-03-15T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-07-12T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-02-16T00:00:00Z', 'grade': 'A', 'score': 7}], 'name': "Lavelle'S Admiral'S Club", 'restaurant_id': '40365844'}}, {'_id': '5cb032da23d35d000111811b', 'Restaurant': {'address': {'building': '358', 'coord': [-73.963506, 40.758273], 'street': 'East 57 Street', 'zipcode': '10022'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-08-11T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-07-22T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-03-14T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-07-02T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-02-02T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-08-24T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': "Neary'S Pub", 'restaurant_id': '40365871'}}, {'_id': '5cb032da23d35d000111811c', 'Restaurant': {'address': {'building': '413', 'coord': [-73.99532099999999, 40.750205], 'street': '8 Avenue', 'zipcode': '10001'}, 'borough': 'Manhattan', 'cuisine': 'Pizza', 'grades': [{'date': '2014-05-12T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-12-04T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2012-11-15T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-06-25T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-01-23T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2011-09-07T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'New York Pizza Suprema', 'restaurant_id': '40365882'}}, {'_id': '5cb032da23d35d000111811d', 'Restaurant': {'address': {'building': '331', 'coord': [-73.87786539999999, 40.8724377], 'street': 'East 204 Street', 'zipcode': '10467'}, 'borough': 'Bronx', 'cuisine': 'Irish', 'grades': [{'date': '2014-08-26T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2014-03-26T00:00:00Z', 'grade': 'B', 'score': 23}, {'date': '2013-09-11T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-12-18T00:00:00Z', 'grade': 'B', 'score': 27}, {'date': '2011-10-20T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Mcdwyers Pub', 'restaurant_id': '40365893'}}, {'_id': '5cb032da23d35d000111811e', 'Restaurant': {'address': {'building': '26', 'coord': [-73.9983, 40.715051], 'street': 'Pell Street', 'zipcode': '10013'}, 'borough': 'Manhattan', 'cuisine': 'Café/Coffee/Tea', 'grades': [{'date': '2014-07-10T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-07-12T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-02-11T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-01-10T00:00:00Z', 'grade': 'P', 'score': 4}, {'date': '2012-07-27T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-02-27T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-08-12T00:00:00Z', 'grade': 'B', 'score': 24}], 'name': 'Mee Sum Coffee Shop', 'restaurant_id': '40365904'}}, {'_id': '5cb032da23d35d000111811f', 'Restaurant': {'address': {'building': '25541', 'coord': [-73.70902579999999, 40.7276012], 'street': 'Jamaica Avenue', 'zipcode': '11001'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-01-16T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2013-06-20T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': 
'2012-10-04T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-01-10T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2011-06-23T00:00:00Z', 'grade': 'B', 'score': 21}], 'name': "Nancy'S Fire Side", 'restaurant_id': '40365938'}}, {'_id': '5cb032da23d35d0001118120', 'Restaurant': {'address': {'building': '21', 'coord': [-73.9990337, 40.7143954], 'street': 'Mott Street', 'zipcode': '10013'}, 'borough': 'Manhattan', 'cuisine': 'Chinese', 'grades': [{'date': '2014-07-28T00:00:00Z', 'grade': 'B', 'score': 27}, {'date': '2013-11-19T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2013-04-30T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-10-16T00:00:00Z', 'grade': 'B', 'score': 24}, {'date': '2012-05-07T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Hop Kee Restaurant', 'restaurant_id': '40365942'}}, {'_id': '5cb032da23d35d0001118121', 'Restaurant': {'address': {'building': '1', 'coord': [-74.0049219, 40.720699], 'street': 'Lispenard Street', 'zipcode': '10013'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-10-07T00:00:00Z', 'grade': 'B', 'score': 18}, {'date': '2014-05-02T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2013-10-03T00:00:00Z', 'grade': 'B', 'score': 23}, {'date': '2012-09-17T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-05-08T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-12-13T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Nancy Whiskey Pub', 'restaurant_id': '40365968'}}, {'_id': '5cb032da23d35d0001118122', 'Restaurant': {'address': {'building': '146-09', 'coord': [-73.808593, 40.702028], 'street': 'Jamaica Avenue', 'zipcode': '11435'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-07-14T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-08-05T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-03-18T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-01-11T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-09-20T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': 'Blarney Bar', 'restaurant_id': '40365972'}}, {'_id': '5cb032da23d35d0001118123', 'Restaurant': {'address': {'building': '16304', 'coord': [-73.78999089999999, 40.7118632], 'street': 'Jamaica Avenue', 'zipcode': '11432'}, 'borough': 'Queens', 'cuisine': 'Pizza', 'grades': [{'date': '2014-07-25T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2014-02-10T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-01-03T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-01-13T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-09-28T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Margherita Pizza', 'restaurant_id': '40366002'}}, {'_id': '5cb032da23d35d0001118124', 'Restaurant': {'address': {'building': '10807', 'coord': [-73.8299395, 40.5812137], 'street': 'Rockaway Beach Boulevard', 'zipcode': '11694'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-01-29T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-06-26T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-06-27T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-01-31T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-02-08T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': "Healy'S Pub", 'restaurant_id': '40366054'}}, {'_id': '5cb032da23d35d0001118125', 'Restaurant': {'address': {'building': '416', 'coord': [-73.98586209999999, 40.67017250000001], 'street': '5 Avenue', 'zipcode': '11215'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-12-04T00:00:00Z', 'grade': 'B', 'score': 22}, {'date': 
'2014-04-19T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2013-02-14T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-01-12T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': 'Fifth Avenue Bingo', 'restaurant_id': '40366109'}}, {'_id': '5cb032da23d35d0001118126', 'Restaurant': {'address': {'building': '524', 'coord': [-74.1402105, 40.6301893], 'street': 'Port Richmond Avenue', 'zipcode': '10302'}, 'borough': 'Staten Island', 'cuisine': 'Pizza/Italian', 'grades': [{'date': '2014-12-18T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-12-03T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-03-28T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2012-02-22T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': "Denino'S Pizzeria Tavern", 'restaurant_id': '40366132'}}, {'_id': '5cb032da23d35d0001118127', 'Restaurant': {'address': {'building': '2929', 'coord': [-73.942849, 40.6076256], 'street': 'Avenue R', 'zipcode': '11229'}, 'borough': 'Brooklyn', 'cuisine': 'Italian', 'grades': [{'date': '2014-03-13T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-10-02T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-01-22T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-06-12T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2011-12-01T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2011-05-25T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': "Michael'S Restaurant", 'restaurant_id': '40366154'}}, {'_id': '5cb032da23d35d0001118128', 'Restaurant': {'address': {'building': '146', 'coord': [-73.9736776, 40.7535755], 'street': 'East 46 Street', 'zipcode': '10017'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-03-11T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-07-31T00:00:00Z', 'grade': 'C', 'score': 53}, {'date': '2012-12-19T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-06-04T00:00:00Z', 'grade': 'C', 'score': 45}, {'date': '2012-01-18T00:00:00Z', 'grade': 'C', 'score': 34}, {'date': '2011-09-28T00:00:00Z', 'grade': 'B', 'score': 18}, {'date': '2011-05-24T00:00:00Z', 'grade': 'C', 'score': 52}], 'name': 'Nanni Restaurant', 'restaurant_id': '40366157'}}, {'_id': '5cb032da23d35d0001118129', 'Restaurant': {'address': {'building': '119-09', 'coord': [-73.82770529999999, 40.6944628], 'street': 'Atlantic Avenue', 'zipcode': '11418'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-11-20T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2014-02-15T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-07-01T00:00:00Z', 'grade': 'B', 'score': 16}, {'date': '2012-11-28T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2012-02-16T00:00:00Z', 'grade': 'B', 'score': 19}], 'name': "Lenihan'S Saloon", 'restaurant_id': '40366162'}}, {'_id': '5cb032da23d35d000111812a', 'Restaurant': {'address': {'building': '4218', 'coord': [-73.8682701, 40.745683], 'street': 'Junction Boulevard', 'zipcode': '11368'}, 'borough': 'Queens', 'cuisine': 'Latin (Cuban, Dominican, Puerto Rican, South & Central American)', 'grades': [{'date': '2014-11-21T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-09-06T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-04-04T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-07-27T00:00:00Z', 'grade': 'C', 'score': 30}], 'name': 'Emilio Iii Bar', 'restaurant_id': '40366214'}}, {'_id': '5cb032da23d35d000111812b', 'Restaurant': {'address': {'building': '80', 'coord': [-74.0086833, 40.7052024], 'street': 'Beaver Street', 'zipcode': '10005'}, 'borough': 'Manhattan', 'cuisine': 'Irish', 'grades': [{'date': 
'2014-07-29T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-08-05T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2013-03-14T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-07-24T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Killarney Rose', 'restaurant_id': '40366222'}}, {'_id': '5cb032da23d35d000111812c', 'Restaurant': {'address': {'building': '13558', 'coord': [-73.8216767, 40.6689548], 'street': 'Lefferts Boulevard', 'zipcode': '11420'}, 'borough': 'Queens', 'cuisine': 'Italian', 'grades': [{'date': '2014-06-20T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-06-07T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-06-28T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-01-13T00:00:00Z', 'grade': 'C', 'score': 28}], 'name': 'Don Peppe', 'restaurant_id': '40366230'}}, {'_id': '5cb032da23d35d000111812d', 'Restaurant': {'address': {'building': '202-24', 'coord': [-73.9250442, 40.5595462], 'street': 'Rockaway Point Boulevard', 'zipcode': '11697'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-12-02T00:00:00Z', 'grade': 'Z', 'score': 18}, {'date': '2014-02-12T00:00:00Z', 'grade': 'B', 'score': 21}, {'date': '2013-04-13T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-06-26T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-12-17T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': 'Blarney Castle', 'restaurant_id': '40366356'}}, {'_id': '5cb032da23d35d000111812e', 'Restaurant': {'address': {'building': '1611', 'coord': [-73.955074, 40.599217], 'street': 'Avenue U', 'zipcode': '11229'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-07-02T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-07-10T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-02-13T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-08-16T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-03-29T00:00:00Z', 'grade': 'B', 'score': 24}], 'name': 'Three Star Restaurant', 'restaurant_id': '40366361'}}, {'_id': '5cb032da23d35d000111812f', 'Restaurant': {'address': {'building': '137', 'coord': [-73.98926, 40.7509054], 'street': 'West 33 Street', 'zipcode': '10001'}, 'borough': 'Manhattan', 'cuisine': 'Irish', 'grades': [{'date': '2014-08-15T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-01-21T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2013-07-24T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-05-31T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-01-26T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-10-11T00:00:00Z', 'grade': 'A', 'score': 5}], 'name': 'Blarney Rock', 'restaurant_id': '40366379'}}, {'_id': '5cb032da23d35d0001118130', 'Restaurant': {'address': {'building': '1118', 'coord': [-73.960573, 40.760982], 'street': '1 Avenue', 'zipcode': '10065'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-09-24T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2013-09-06T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2013-03-20T00:00:00Z', 'grade': 'C', 'score': 36}, {'date': '2012-04-13T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': "Dangerfield'S Night Club", 'restaurant_id': '40366381'}}, {'_id': '5cb032da23d35d0001118131', 'Restaurant': {'address': {'building': '433', 'coord': [-73.98306099999999, 40.7441419], 'street': 'Park Avenue South', 'zipcode': '10016'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-12-29T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-07-03T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2014-01-13T00:00:00Z', 
'grade': 'A', 'score': 11}, {'date': '2013-05-02T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': "Desmond'S Tavern", 'restaurant_id': '40366396'}}, {'_id': '5cb032da23d35d0001118132', 'Restaurant': {'address': {'building': '6828', 'coord': [-73.8204154, 40.7242443], 'street': 'Main Street', 'zipcode': '11367'}, 'borough': 'Queens', 'cuisine': 'Jewish/Kosher', 'grades': [{'date': '2014-09-24T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2014-03-10T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-05-31T00:00:00Z', 'grade': 'B', 'score': 26}, {'date': '2012-05-10T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-11-03T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Naomi Kosher Pizza', 'restaurant_id': '40366425'}}]
###Markdown
API Manager Example
###Code
from onesaitplatform.apimanager import ApiManagerClient
###Output
_____no_output_____
###Markdown
Create ApiManagerClient
###Code
HOST = "development.onesaitplatform.com"
PORT = 443
TOKEN = "b32522cd73e84ddda519f1dff9627f40"
#client = ApiManagerClient(host=HOST, port=PORT)
client = ApiManagerClient(host=HOST)
###Output
_____no_output_____
###Markdown
Set token
###Code
client.setToken(TOKEN)
###Output
_____no_output_____
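###Markdown
For convenience, the two setup steps above can be wrapped in a small helper. This is only a sketch reusing the `ApiManagerClient` constructor and `setToken` call already shown; the `make_client` name is ours, not part of the SDK.
###Code
def make_client(host, token, port=None):
    # Build an ApiManagerClient and attach the API token in one step.
    # Uses only the constructor arguments and the setToken() method shown above.
    if port is not None:
        client = ApiManagerClient(host=host, port=port)
    else:
        client = ApiManagerClient(host=host)
    client.setToken(token)
    return client

# client = make_client(HOST, TOKEN)
###Output
_____no_output_____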
###Markdown
Find APIs
###Code
ok_find, res_find = client.find("RestaurantsAPI", "Created", "analytics")
print("API finded: {}".format(ok_find))
print("Api info:")
print(res_find)
###Output
API found: True
API info:
[{'identification': 'RestaurantsAPI', 'version': 1, 'type': 'PRIVATE', 'isPublic': None, 'category': 'EDUCATION', 'externalApi': False, 'ontologyId': 'MASTER-Ontology-Restaurant-1', 'endpoint': 'https://development.onesaitplatform.com/api-manager/server/api/v1/RestaurantsAPI', 'endpointExt': None, 'description': '', 'metainf': '', 'imageType': None, 'status': 'CREATED', 'creationDate': '04/16/2019 13:01:03', 'userId': 'analytics', 'operations': [{'identification': 'RestaurantsAPI_GET', 'description': 'id', 'operation': 'GET', 'endpoint': None, 'path': '/{id}', 'headers': [], 'queryParams': [{'name': 'id', 'dataType': 'STRING', 'description': '', 'value': None, 'headerType': 'PATH', 'condition': None}], 'postProcess': None}, {'identification': 'RestaurantsAPI_GETAll', 'description': 'all', 'operation': 'GET', 'endpoint': None, 'path': '', 'headers': [], 'queryParams': [], 'postProcess': None}], 'authentication': None}]
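###Markdown
The structure returned by `find` (printed above) can be inspected directly with plain Python. The snippet below is a minimal sketch that assumes `res_find` keeps the list-of-dicts shape shown in the output, with `identification`, `version`, `endpoint` and `operations` keys; it only walks that structure and calls no additional SDK methods.
###Code
# Summarise the APIs returned by find(): name, version, endpoint and exposed operations.
for api in res_find:
    print("API: {} v{}".format(api["identification"], api["version"]))
    print("  endpoint: {}".format(api["endpoint"]))
    for op in api["operations"]:
        # An empty path means the operation is exposed at the API root.
        print("  {} {}".format(op["operation"], op["path"] or "/"))
###Output
_____no_output_____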
###Markdown
List APIs
###Code
ok_list, res_list = client.list("analytics")
print("APIs listed {}".format(ok_list))
print("Apis info:")
for api in res_list:
print(api)
print("*")
###Output
APIs listed: True
APIs info:
{'identification': 'RestaurantsAPI', 'version': 1, 'type': 'PRIVATE', 'isPublic': None, 'category': 'EDUCATION', 'externalApi': False, 'ontologyId': 'MASTER-Ontology-Restaurant-1', 'endpoint': 'https://development.onesaitplatform.com/api-manager/server/api/v1/RestaurantsAPI', 'endpointExt': None, 'description': '', 'metainf': '', 'imageType': None, 'status': 'CREATED', 'creationDate': '04/16/2019 13:01:03', 'userId': 'analytics', 'operations': [{'identification': 'RestaurantsAPI_GET', 'description': 'id', 'operation': 'GET', 'endpoint': None, 'path': '/{id}', 'headers': [], 'queryParams': [{'name': 'id', 'dataType': 'STRING', 'description': '', 'value': None, 'headerType': 'PATH', 'condition': None}], 'postProcess': None}, {'identification': 'RestaurantsAPI_GETAll', 'description': 'all', 'operation': 'GET', 'endpoint': None, 'path': '', 'headers': [], 'queryParams': [], 'postProcess': None}], 'authentication': None}
*
{'identification': 'RestaurantTestApi', 'version': 1, 'type': 'PRIVATE', 'isPublic': None, 'category': 'OTHER', 'externalApi': False, 'ontologyId': 'b5982b8d-e4c0-4a84-ab65-9e7d2f30e638', 'endpoint': 'https://development.onesaitplatform.com/api-manager/server/api/v1/RestaurantTestApi', 'endpointExt': None, 'description': '', 'metainf': '', 'imageType': None, 'status': 'CREATED', 'creationDate': '04/12/2019 08:19:16', 'userId': 'analytics', 'operations': [{'identification': 'RestaurantTestApi_PUT', 'description': 'update', 'operation': 'PUT', 'endpoint': None, 'path': '/{id}', 'headers': [], 'queryParams': [{'name': 'body', 'dataType': 'STRING', 'description': '', 'value': '', 'headerType': 'BODY', 'condition': None}, {'name': 'id', 'dataType': 'STRING', 'description': '', 'value': None, 'headerType': 'PATH', 'condition': None}], 'postProcess': None}, {'identification': 'RestaurantTestApi_POST', 'description': 'insert', 'operation': 'POST', 'endpoint': None, 'path': '/', 'headers': [], 'queryParams': [{'name': 'body', 'dataType': 'STRING', 'description': '', 'value': '', 'headerType': 'BODY', 'condition': None}], 'postProcess': None}, {'identification': 'RestaurantTestApi_GETAll', 'description': 'query', 'operation': 'GET', 'endpoint': None, 'path': '', 'headers': [], 'queryParams': [], 'postProcess': None}, {'identification': 'RestaurantTestApi_GET', 'description': 'queryby', 'operation': 'GET', 'endpoint': None, 'path': '/{id}', 'headers': [], 'queryParams': [{'name': 'id', 'dataType': 'STRING', 'description': '', 'value': None, 'headerType': 'PATH', 'condition': None}], 'postProcess': None}, {'identification': 'RestaurantTestApi_DELETEID', 'description': 'delete', 'operation': 'DELETE', 'endpoint': None, 'path': '/{id}', 'headers': [], 'queryParams': [{'name': 'id', 'dataType': 'STRING', 'description': '', 'value': None, 'headerType': 'PATH', 'condition': None}], 'postProcess': None}], 'authentication': None}
*
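###Markdown
Likewise, `res_list` is a plain list of dictionaries, so it can be filtered or summarised with standard Python. The sketch below assumes the keys shown in the output above; the `status == "CREATED"` filter is just an example value taken from that output.
###Code
# Build a compact summary of the listed APIs: identification, status and operation count.
summary = [
    {
        "identification": api["identification"],
        "status": api["status"],
        "operations": len(api["operations"]),
    }
    for api in res_list
    if api.get("status") == "CREATED"
]
for row in summary:
    print(row)
###Output
_____no_output_____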
###Markdown
Make API request
###Code
ok_request, res_request = client.request(method="GET", name="RestaurantsAPI/", version=1, body=None)
print("API request: {}".format(ok_request))
print("Api request:")
print(res_request)
###Output
API request: True
API response:
[{'_id': '5cb032da23d35d000111809d', 'Restaurant': {'address': {'building': '351', 'coord': [-73.98513559999999, 40.7676919], 'street': 'West 57 Street', 'zipcode': '10019'}, 'borough': 'Manhattan', 'cuisine': 'Irish', 'grades': [{'date': '2014-09-06T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-07-22T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-07-31T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-12-29T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Dj Reynolds Pub And Restaurant', 'restaurant_id': '30191841'}}, {'_id': '5cb032da23d35d000111809e', 'Restaurant': {'address': {'building': '2780', 'coord': [-73.98241999999999, 40.579505], 'street': 'Stillwell Avenue', 'zipcode': '11224'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-06-10T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2013-06-05T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-04-13T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-10-12T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Riviera Caterer', 'restaurant_id': '40356018'}}, {'_id': '5cb032da23d35d000111809f', 'Restaurant': {'address': {'building': '97-22', 'coord': [-73.8601152, 40.7311739], 'street': '63 Road', 'zipcode': '11374'}, 'borough': 'Queens', 'cuisine': 'Jewish/Kosher', 'grades': [{'date': '2014-11-24T00:00:00Z', 'grade': 'Z', 'score': 20}, {'date': '2013-01-17T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-08-02T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-12-15T00:00:00Z', 'grade': 'B', 'score': 25}], 'name': 'Tov Kosher Kitchen', 'restaurant_id': '40356068'}}, {'_id': '5cb032da23d35d00011180a0', 'Restaurant': {'address': {'building': '469', 'coord': [-73.961704, 40.662942], 'street': 'Flatbush Avenue', 'zipcode': '11225'}, 'borough': 'Brooklyn', 'cuisine': 'Hamburgers', 'grades': [{'date': '2014-12-30T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2014-07-01T00:00:00Z', 'grade': 'B', 'score': 23}, {'date': '2013-04-30T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-05-08T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': "Wendy'S", 'restaurant_id': '30112340'}}, {'_id': '5cb032da23d35d00011180a1', 'Restaurant': {'address': {'building': '1007', 'coord': [-73.856077, 40.848447], 'street': 'Morris Park Ave', 'zipcode': '10462'}, 'borough': 'Bronx', 'cuisine': 'Bakery', 'grades': [{'date': '2014-03-03T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-09-11T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2013-01-24T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-11-23T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-03-10T00:00:00Z', 'grade': 'B', 'score': 14}], 'name': 'Morris Park Bake Shop', 'restaurant_id': '30075445'}}, {'_id': '5cb032da23d35d00011180a2', 'Restaurant': {'address': {'building': '8825', 'coord': [-73.8803827, 40.7643124], 'street': 'Astoria Boulevard', 'zipcode': '11369'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-11-15T00:00:00Z', 'grade': 'Z', 'score': 38}, {'date': '2014-05-02T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-03-02T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-02-10T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Brunos On The Boulevard', 'restaurant_id': '40356151'}}, {'_id': '5cb032da23d35d00011180a3', 'Restaurant': {'address': {'building': '2206', 'coord': [-74.1377286, 40.6119572], 'street': 'Victory Boulevard', 'zipcode': '10314'}, 'borough': 'Staten Island', 'cuisine': 'Jewish/Kosher', 'grades': [{'date': '2014-10-06T00:00:00Z', 'grade': 'A', 'score': 9}, 
{'date': '2014-05-20T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-04-04T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-01-24T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': 'Kosher Island', 'restaurant_id': '40356442'}}, {'_id': '5cb032da23d35d00011180a4', 'Restaurant': {'address': {'building': '7114', 'coord': [-73.9068506, 40.6199034], 'street': 'Avenue U', 'zipcode': '11234'}, 'borough': 'Brooklyn', 'cuisine': 'Delicatessen', 'grades': [{'date': '2014-05-29T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2014-01-14T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-08-03T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-07-18T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-03-09T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-10-14T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': "Wilken'S Fine Food", 'restaurant_id': '40356483'}}, {'_id': '5cb032da23d35d00011180a5', 'Restaurant': {'address': {'building': '6409', 'coord': [-74.00528899999999, 40.628886], 'street': '11 Avenue', 'zipcode': '11219'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-07-18T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-07-30T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-02-13T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-08-16T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2011-08-17T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Regina Caterers', 'restaurant_id': '40356649'}}, {'_id': '5cb032da23d35d00011180a6', 'Restaurant': {'address': {'building': '1839', 'coord': [-73.9482609, 40.6408271], 'street': 'Nostrand Avenue', 'zipcode': '11226'}, 'borough': 'Brooklyn', 'cuisine': 'Ice Cream, Gelato, Yogurt, Ices', 'grades': [{'date': '2014-07-14T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-07-10T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-07-11T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2012-02-23T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': 'Taste The Tropics Ice Cream', 'restaurant_id': '40356731'}}, {'_id': '5cb032da23d35d00011180a7', 'Restaurant': {'address': {'building': '2300', 'coord': [-73.8786113, 40.8502883], 'street': 'Southern Boulevard', 'zipcode': '10460'}, 'borough': 'Bronx', 'cuisine': 'American', 'grades': [{'date': '2014-05-28T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-06-19T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2012-06-15T00:00:00Z', 'grade': 'A', 'score': 3}], 'name': 'Wild Asia', 'restaurant_id': '40357217'}}, {'_id': '5cb032da23d35d00011180a8', 'Restaurant': {'address': {'building': '7715', 'coord': [-73.9973325, 40.61174889999999], 'street': '18 Avenue', 'zipcode': '11214'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-04-16T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2013-04-23T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-04-24T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2011-12-16T00:00:00Z', 'grade': 'A', 'score': 2}], 'name': 'C & C Catering Service', 'restaurant_id': '40357437'}}, {'_id': '5cb032da23d35d00011180a9', 'Restaurant': {'address': {'building': '1269', 'coord': [-73.871194, 40.6730975], 'street': 'Sutter Avenue', 'zipcode': '11208'}, 'borough': 'Brooklyn', 'cuisine': 'Chinese', 'grades': [{'date': '2014-09-16T00:00:00Z', 'grade': 'B', 'score': 21}, {'date': '2013-08-28T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2013-04-02T00:00:00Z', 'grade': 'C', 'score': 56}, {'date': '2012-08-15T00:00:00Z', 'grade': 'B', 'score': 27}, {'date': '2012-03-28T00:00:00Z', 'grade': 'B', 'score': 
27}], 'name': 'May May Kitchen', 'restaurant_id': '40358429'}}, {'_id': '5cb032da23d35d00011180aa', 'Restaurant': {'address': {'building': '1', 'coord': [-73.96926909999999, 40.7685235], 'street': 'East 66 Street', 'zipcode': '10065'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-05-07T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2013-05-03T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2012-04-30T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2011-12-27T00:00:00Z', 'grade': 'A', 'score': 0}], 'name': '1 East 66Th Street Kitchen', 'restaurant_id': '40359480'}}, {'_id': '5cb032da23d35d00011180ab', 'Restaurant': {'address': {'building': '705', 'coord': [-73.9653967, 40.6064339], 'street': 'Kings Highway', 'zipcode': '11223'}, 'borough': 'Brooklyn', 'cuisine': 'Jewish/Kosher', 'grades': [{'date': '2014-11-10T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-10-10T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-10-04T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-05-21T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-12-30T00:00:00Z', 'grade': 'B', 'score': 19}], 'name': 'Seuda Foods', 'restaurant_id': '40360045'}}, {'_id': '5cb032da23d35d00011180ac', 'Restaurant': {'address': {'building': '203', 'coord': [-73.97822040000001, 40.6435254], 'street': 'Church Avenue', 'zipcode': '11218'}, 'borough': 'Brooklyn', 'cuisine': 'Ice Cream, Gelato, Yogurt, Ices', 'grades': [{'date': '2014-02-10T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-01-02T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-01-09T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2011-11-07T00:00:00Z', 'grade': 'P', 'score': 12}, {'date': '2011-07-21T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Carvel Ice Cream', 'restaurant_id': '40360076'}}, {'_id': '5cb032da23d35d00011180ad', 'Restaurant': {'address': {'building': '265-15', 'coord': [-73.7032601, 40.7386417], 'street': 'Hillside Avenue', 'zipcode': '11004'}, 'borough': 'Queens', 'cuisine': 'Ice Cream, Gelato, Yogurt, Ices', 'grades': [{'date': '2014-10-28T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-09-18T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-09-20T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Carvel Ice Cream', 'restaurant_id': '40361322'}}, {'_id': '5cb032da23d35d00011180ae', 'Restaurant': {'address': {'building': '6909', 'coord': [-74.0259567, 40.6353674], 'street': '3 Avenue', 'zipcode': '11209'}, 'borough': 'Brooklyn', 'cuisine': 'Delicatessen', 'grades': [{'date': '2014-08-21T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2014-03-05T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2013-01-10T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': 'Nordic Delicacies', 'restaurant_id': '40361390'}}, {'_id': '5cb032da23d35d00011180af', 'Restaurant': {'address': {'building': '522', 'coord': [-73.95171, 40.767461], 'street': 'East 74 Street', 'zipcode': '10021'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-09-02T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-12-19T00:00:00Z', 'grade': 'B', 'score': 16}, {'date': '2013-05-28T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-12-07T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-03-29T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Glorious Food', 'restaurant_id': '40361521'}}, {'_id': '5cb032da23d35d00011180b0', 'Restaurant': {'address': {'building': '284', 'coord': [-73.9829239, 40.6580753], 'street': 'Prospect Park West', 'zipcode': '11215'}, 'borough': 'Brooklyn', 'cuisine': 
'American', 'grades': [{'date': '2014-11-19T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-11-14T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-12-05T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-05-17T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'The Movable Feast', 'restaurant_id': '40361606'}}, {'_id': '5cb032da23d35d00011180b1', 'Restaurant': {'address': {'building': '129-08', 'coord': [-73.839297, 40.78147], 'street': '20 Avenue', 'zipcode': '11356'}, 'borough': 'Queens', 'cuisine': 'Delicatessen', 'grades': [{'date': '2014-08-16T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-08-27T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-09-20T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2011-09-29T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': "Sal'S Deli", 'restaurant_id': '40361618'}}, {'_id': '5cb032da23d35d00011180b2', 'Restaurant': {'address': {'building': '759', 'coord': [-73.9925306, 40.7309346], 'street': 'Broadway', 'zipcode': '10003'}, 'borough': 'Manhattan', 'cuisine': 'Delicatessen', 'grades': [{'date': '2014-01-21T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-01-04T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-06-07T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2012-01-17T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': "Bully'S Deli", 'restaurant_id': '40361708'}}, {'_id': '5cb032da23d35d00011180b3', 'Restaurant': {'address': {'building': '3406', 'coord': [-73.94024739999999, 40.7623288], 'street': '10 Street', 'zipcode': '11106'}, 'borough': 'Queens', 'cuisine': 'Delicatessen', 'grades': [{'date': '2014-03-19T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2013-03-13T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-03-27T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2011-04-05T00:00:00Z', 'grade': 'A', 'score': 7}], 'name': "Steve Chu'S Deli & Grocery", 'restaurant_id': '40361998'}}, {'_id': '5cb032da23d35d00011180b4', 'Restaurant': {'address': {'building': '502', 'coord': [-73.976112, 40.786714], 'street': 'Amsterdam Avenue', 'zipcode': '10024'}, 'borough': 'Manhattan', 'cuisine': 'Chicken', 'grades': [{'date': '2014-09-15T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2014-03-04T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-07-18T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-01-09T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-04-10T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-11-15T00:00:00Z', 'grade': 'A', 'score': 7}], 'name': "Harriet'S Kitchen", 'restaurant_id': '40362098'}}, {'_id': '5cb032da23d35d00011180b5', 'Restaurant': {'address': {'building': '730', 'coord': [-73.96805719999999, 40.7925587], 'street': 'Columbus Avenue', 'zipcode': '10025'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-09-12T00:00:00Z', 'grade': 'B', 'score': 26}, {'date': '2013-08-28T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-03-25T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2012-02-14T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'P & S Deli Grocery', 'restaurant_id': '40362264'}}, {'_id': '5cb032da23d35d00011180b6', 'Restaurant': {'address': {'building': '18', 'coord': [-73.996984, 40.72589], 'street': 'West Houston Street', 'zipcode': '10012'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-04-03T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-04-05T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2012-03-21T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-04-27T00:00:00Z', 'grade': 'A', 
'score': 5}], 'name': 'Angelika Film Center', 'restaurant_id': '40362274'}}, {'_id': '5cb032da23d35d00011180b7', 'Restaurant': {'address': {'building': '531', 'coord': [-73.9634876, 40.6940001], 'street': 'Myrtle Avenue', 'zipcode': '11205'}, 'borough': 'Brooklyn', 'cuisine': 'Hamburgers', 'grades': [{'date': '2014-03-18T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2013-03-18T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-10-10T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2011-09-22T00:00:00Z', 'grade': 'A', 'score': 2}], 'name': 'White Castle', 'restaurant_id': '40362344'}}, {'_id': '5cb032da23d35d00011180b8', 'Restaurant': {'address': {'building': '103-05', 'coord': [-73.8642349, 40.75356], 'street': '37 Avenue', 'zipcode': '11368'}, 'borough': 'Queens', 'cuisine': 'Chinese', 'grades': [{'date': '2014-04-21T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-11-12T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2013-06-04T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-11-14T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-10-11T00:00:00Z', 'grade': 'P', 'score': 0}, {'date': '2012-05-24T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-12-08T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-07-20T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Ho Mei Restaurant', 'restaurant_id': '40362432'}}, {'_id': '5cb032da23d35d00011180b9', 'Restaurant': {'address': {'building': '60', 'coord': [-74.0085357, 40.70620539999999], 'street': 'Wall Street', 'zipcode': '10005'}, 'borough': 'Manhattan', 'cuisine': 'Turkish', 'grades': [{'date': '2014-09-26T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-09-18T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-09-21T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-05-09T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'The Country Cafe', 'restaurant_id': '40362715'}}, {'_id': '5cb032da23d35d00011180ba', 'Restaurant': {'address': {'building': '195', 'coord': [-73.9246028, 40.6522396], 'street': 'East 56 Street', 'zipcode': '11203'}, 'borough': 'Brooklyn', 'cuisine': 'Caribbean', 'grades': [{'date': '2014-05-13T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-05-08T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-09-22T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-06-06T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': "Shashemene Int'L Restaura", 'restaurant_id': '40362869'}}, {'_id': '5cb032da23d35d00011180bb', 'Restaurant': {'address': {'building': '107', 'coord': [-74.00920839999999, 40.7132925], 'street': 'Church Street', 'zipcode': '10007'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-07-18T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-02-26T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-08-26T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-02-01T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-01-17T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-10-18T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Downtown Deli', 'restaurant_id': '40363021'}}, {'_id': '5cb032da23d35d00011180bc', 'Restaurant': {'address': {'building': '1006', 'coord': [-73.84856870000002, 40.8903781], 'street': 'East 233 Street', 'zipcode': '10466'}, 'borough': 'Bronx', 'cuisine': 'Ice Cream, Gelato, Yogurt, Ices', 'grades': [{'date': '2014-04-24T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-09-05T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-02-21T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-07-03T00:00:00Z', 
'grade': 'A', 'score': 11}, {'date': '2011-07-11T00:00:00Z', 'grade': 'A', 'score': 5}], 'name': 'Carvel Ice Cream', 'restaurant_id': '40363093'}}, {'_id': '5cb032da23d35d00011180bd', 'Restaurant': {'address': {'building': '56', 'coord': [-73.991495, 40.692273], 'street': 'Court Street', 'zipcode': '11201'}, 'borough': 'Brooklyn', 'cuisine': 'Donuts', 'grades': [{'date': '2014-12-30T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2014-01-15T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-01-08T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-01-19T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': "Dunkin' Donuts", 'restaurant_id': '40363098'}}, {'_id': '5cb032da23d35d00011180be', 'Restaurant': {'address': {'building': '7615', 'coord': [-74.0228449, 40.6281815], 'street': '5 Avenue', 'zipcode': '11209'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-12-04T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-10-24T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-04-18T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2012-04-05T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Mejlander & Mulgannon', 'restaurant_id': '40363117'}}, {'_id': '5cb032da23d35d00011180bf', 'Restaurant': {'address': {'building': '120', 'coord': [-73.9998042, 40.7251256], 'street': 'Prince Street', 'zipcode': '10012'}, 'borough': 'Manhattan', 'cuisine': 'Bakery', 'grades': [{'date': '2014-10-17T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-09-18T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-04-30T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-04-20T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2011-12-19T00:00:00Z', 'grade': 'A', 'score': 3}], 'name': "Olive'S", 'restaurant_id': '40363151'}}, {'_id': '5cb032da23d35d00011180c0', 'Restaurant': {'address': {'building': '1236', 'coord': [-73.8893654, 40.81376179999999], 'street': '238 Spofford Ave', 'zipcode': '10474'}, 'borough': 'Bronx', 'cuisine': 'Chinese', 'grades': [{'date': '2013-12-30T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2013-01-08T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-06-12T00:00:00Z', 'grade': 'B', 'score': 15}], 'name': 'Happy Garden', 'restaurant_id': '40363289'}}, {'_id': '5cb032da23d35d00011180c1', 'Restaurant': {'address': {'building': '625', 'coord': [-73.990494, 40.7569545], 'street': '8 Avenue', 'zipcode': '10018'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-06-09T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-01-10T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-12-07T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2011-12-13T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-09-09T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Cafe Metro', 'restaurant_id': '40363298'}}, {'_id': '5cb032da23d35d00011180c2', 'Restaurant': {'address': {'building': '1069', 'coord': [-73.902463, 40.694924], 'street': 'Wyckoff Avenue', 'zipcode': '11385'}, 'borough': 'Queens', 'cuisine': 'Delicatessen', 'grades': [{'date': '2014-05-08T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-12-12T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2013-06-21T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-12-24T00:00:00Z', 'grade': 'B', 'score': 25}, {'date': '2011-10-19T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-06-15T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': "Tony'S Deli", 'restaurant_id': '40363333'}}, {'_id': '5cb032da23d35d00011180c3', 'Restaurant': {'address': {'building': '405', 'coord': 
[-73.97534999999999, 40.7516269], 'street': 'Lexington Avenue', 'zipcode': '10174'}, 'borough': 'Manhattan', 'cuisine': 'Sandwiches/Salads/Mixed Buffet', 'grades': [{'date': '2014-02-21T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2013-09-13T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2012-08-28T00:00:00Z', 'grade': 'A', 'score': 0}, {'date': '2011-09-13T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-05-03T00:00:00Z', 'grade': 'A', 'score': 5}], 'name': 'Lexler Deli', 'restaurant_id': '40363426'}}, {'_id': '5cb032da23d35d00011180c4', 'Restaurant': {'address': {'building': '2491', 'coord': [-74.1459332, 40.6103714], 'street': 'Victory Boulevard', 'zipcode': '10314'}, 'borough': 'Staten Island', 'cuisine': 'Delicatessen', 'grades': [{'date': '2015-01-09T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2013-12-05T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-06-19T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-01-08T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Bagels N Buns', 'restaurant_id': '40363427'}}, {'_id': '5cb032da23d35d00011180c5', 'Restaurant': {'address': {'building': '7905', 'coord': [-73.8740217, 40.7135015], 'street': 'Metropolitan Avenue', 'zipcode': '11379'}, 'borough': 'Queens', 'cuisine': 'Bagels/Pretzels', 'grades': [{'date': '2014-09-17T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2014-01-16T00:00:00Z', 'grade': 'B', 'score': 23}, {'date': '2013-08-07T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-02-21T00:00:00Z', 'grade': 'B', 'score': 27}, {'date': '2012-06-20T00:00:00Z', 'grade': 'B', 'score': 27}, {'date': '2012-01-31T00:00:00Z', 'grade': 'B', 'score': 18}], 'name': 'Hot Bagels', 'restaurant_id': '40363565'}}, {'_id': '5cb032da23d35d00011180c6', 'Restaurant': {'address': {'building': '87-69', 'coord': [-73.8309503, 40.7001121], 'street': 'Lefferts Boulevard', 'zipcode': '11418'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-02-25T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2013-08-14T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-08-07T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-03-26T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-11-04T00:00:00Z', 'grade': 'A', 'score': 0}, {'date': '2011-06-29T00:00:00Z', 'grade': 'A', 'score': 4}], 'name': 'Snack Time Grill', 'restaurant_id': '40363590'}}, {'_id': '5cb032da23d35d00011180c7', 'Restaurant': {'address': {'building': '1418', 'coord': [-73.95685019999999, 40.7753401], 'street': 'Third Avenue', 'zipcode': '10028'}, 'borough': 'Manhattan', 'cuisine': 'Continental', 'grades': [{'date': '2014-06-02T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-12-27T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2013-03-18T00:00:00Z', 'grade': 'B', 'score': 26}, {'date': '2012-02-01T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2011-07-06T00:00:00Z', 'grade': 'B', 'score': 25}], 'name': "Lorenzo & Maria'S", 'restaurant_id': '40363630'}}, {'_id': '5cb032da23d35d00011180c8', 'Restaurant': {'address': {'building': '464', 'coord': [-73.9791458, 40.744328], 'street': '3 Avenue', 'zipcode': '10016'}, 'borough': 'Manhattan', 'cuisine': 'Pizza', 'grades': [{'date': '2014-08-05T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2014-03-06T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-07-09T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-01-30T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2012-01-05T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2011-09-26T00:00:00Z', 'grade': 'A', 'score': 0}], 'name': 
"Domino'S Pizza", 'restaurant_id': '40363644'}}, {'_id': '5cb032da23d35d00011180c9', 'Restaurant': {'address': {'building': '437', 'coord': [-73.975393, 40.757365], 'street': 'Madison Avenue', 'zipcode': '10022'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-06-03T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-06-07T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2012-06-29T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-02-06T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-06-23T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Berkely', 'restaurant_id': '40363685'}}, {'_id': '5cb032da23d35d00011180ca', 'Restaurant': {'address': {'building': '1031', 'coord': [-73.9075537, 40.6438684], 'street': 'East 92 Street', 'zipcode': '11236'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-02-05T00:00:00Z', 'grade': 'A', 'score': 0}, {'date': '2013-01-29T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2011-12-08T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': "Sonny'S Heros", 'restaurant_id': '40363744'}}, {'_id': '5cb032da23d35d00011180cb', 'Restaurant': {'address': {'building': '1111', 'coord': [-74.0796436, 40.59878339999999], 'street': 'Hylan Boulevard', 'zipcode': '10305'}, 'borough': 'Staten Island', 'cuisine': 'Ice Cream, Gelato, Yogurt, Ices', 'grades': [{'date': '2014-04-24T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-02-26T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2012-02-02T00:00:00Z', 'grade': 'A', 'score': 2}], 'name': 'Carvel Ice Cream', 'restaurant_id': '40363834'}}, {'_id': '5cb032da23d35d00011180cc', 'Restaurant': {'address': {'building': '976', 'coord': [-73.92701509999999, 40.6620192], 'street': 'Rutland Road', 'zipcode': '11212'}, 'borough': 'Brooklyn', 'cuisine': 'Chinese', 'grades': [{'date': '2014-04-23T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-03-26T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-03-13T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2011-11-16T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Golden Pavillion', 'restaurant_id': '40363920'}}, {'_id': '5cb032da23d35d00011180cd', 'Restaurant': {'address': {'building': '148', 'coord': [-73.9806854, 40.7778589], 'street': 'West 72 Street', 'zipcode': '10023'}, 'borough': 'Manhattan', 'cuisine': 'Pizza', 'grades': [{'date': '2014-12-08T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2014-05-05T00:00:00Z', 'grade': 'B', 'score': 18}, {'date': '2013-04-05T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-03-30T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': "Domino'S Pizza", 'restaurant_id': '40363945'}}, {'_id': '5cb032da23d35d00011180ce', 'Restaurant': {'address': {'building': '364', 'coord': [-73.96084119999999, 40.8014307], 'street': 'West 110 Street', 'zipcode': '10025'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-09-04T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2014-02-26T00:00:00Z', 'grade': 'B', 'score': 23}, {'date': '2013-03-25T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-02-21T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': 'Spoon Bread Catering', 'restaurant_id': '40364179'}}, {'_id': '5cb032da23d35d00011180cf', 'Restaurant': {'address': {'building': '1423', 'coord': [-73.9615132, 40.6253268], 'street': 'Avenue J', 'zipcode': '11230'}, 'borough': 'Brooklyn', 'cuisine': 'Jewish/Kosher', 'grades': [{'date': '2014-12-19T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-12-05T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': 
'2012-12-06T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': 'Kosher Bagel Hole', 'restaurant_id': '40364220'}}, {'_id': '5cb032da23d35d00011180d0', 'Restaurant': {'address': {'building': '0', 'coord': [-84.2040813, 9.9986585], 'street': 'Guardia Airport Parking', 'zipcode': '11371'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-05-16T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-05-10T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-05-15T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-11-02T00:00:00Z', 'grade': 'C', 'score': 32}], 'name': 'Terminal Cafe/Yankee Clipper', 'restaurant_id': '40364262'}}, {'_id': '5cb032da23d35d00011180d1', 'Restaurant': {'address': {'building': '73', 'coord': [-74.1178949, 40.5734906], 'street': 'New Dorp Plaza', 'zipcode': '10306'}, 'borough': 'Staten Island', 'cuisine': 'Delicatessen', 'grades': [{'date': '2014-11-18T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-11-07T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-04-24T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-03-20T00:00:00Z', 'grade': 'A', 'score': 5}], 'name': 'Plaza Bagels & Deli', 'restaurant_id': '40364286'}}, {'_id': '5cb032da23d35d00011180d2', 'Restaurant': {'address': {'building': '277', 'coord': [-73.8941893, 40.8634684], 'street': 'East Kingsbridge Road', 'zipcode': '10458'}, 'borough': 'Bronx', 'cuisine': 'Chinese', 'grades': [{'date': '2014-03-03T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-09-26T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-03-19T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-08-29T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-08-17T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Happy Garden', 'restaurant_id': '40364296'}}, {'_id': '5cb032da23d35d00011180d3', 'Restaurant': {'address': {'building': '203', 'coord': [-74.15235919999999, 40.5563756], 'street': 'Giffords Lane', 'zipcode': '10308'}, 'borough': 'Staten Island', 'cuisine': 'Delicatessen', 'grades': [{'date': '2015-01-05T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2014-09-11T00:00:00Z', 'grade': 'C', 'score': 39}, {'date': '2014-03-20T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-01-24T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-05-23T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': 'B & M Hot Bagel & Grocery', 'restaurant_id': '40364299'}}, {'_id': '5cb032da23d35d00011180d4', 'Restaurant': {'address': {'building': '94', 'coord': [-74.0061936, 40.7092038], 'street': 'Fulton Street', 'zipcode': '10038'}, 'borough': 'Manhattan', 'cuisine': 'Chicken', 'grades': [{'date': '2015-01-06T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-07-15T00:00:00Z', 'grade': 'C', 'score': 48}, {'date': '2013-05-02T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-09-24T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-04-19T00:00:00Z', 'grade': 'A', 'score': 7}], 'name': 'Texas Rotisserie', 'restaurant_id': '40364304'}}, {'_id': '5cb032da23d35d00011180d5', 'Restaurant': {'address': {'building': '10004', 'coord': [-74.03400479999999, 40.6127077], 'street': '4 Avenue', 'zipcode': '11209'}, 'borough': 'Brooklyn', 'cuisine': 'Italian', 'grades': [{'date': '2014-02-25T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-06-27T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-12-03T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-11-09T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Philadelhia Grille Express', 'restaurant_id': '40364305'}}, {'_id': '5cb032da23d35d00011180d6', 
'Restaurant': {'address': {'building': '178', 'coord': [-73.96252129999999, 40.7098035], 'street': 'Broadway', 'zipcode': '11211'}, 'borough': 'Brooklyn', 'cuisine': 'Steak', 'grades': [{'date': '2014-03-08T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-09-28T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-03-26T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2012-09-10T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-08-15T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Peter Luger Steakhouse', 'restaurant_id': '40364335'}}, {'_id': '5cb032da23d35d00011180d7', 'Restaurant': {'address': {'building': '1', 'coord': [-73.97166039999999, 40.764832], 'street': 'East 60 Street', 'zipcode': '10022'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-10-16T00:00:00Z', 'grade': 'B', 'score': 24}, {'date': '2014-05-02T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2013-04-02T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-10-19T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-04-27T00:00:00Z', 'grade': 'B', 'score': 17}, {'date': '2011-11-29T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Metropolitan Club', 'restaurant_id': '40364347'}}, {'_id': '5cb032da23d35d00011180d8', 'Restaurant': {'address': {'building': '837', 'coord': [-73.9712, 40.751703], 'street': '2 Avenue', 'zipcode': '10017'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-07-22T00:00:00Z', 'grade': 'B', 'score': 19}, {'date': '2013-09-26T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-02-26T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-04-30T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2011-10-05T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Palm Restaurant', 'restaurant_id': '40364355'}}, {'_id': '5cb032da23d35d00011180d9', 'Restaurant': {'address': {'building': '21', 'coord': [-73.9774394, 40.7604522], 'street': 'West 52 Street', 'zipcode': '10019'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-05-14T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-08-13T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-04-04T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': '21 Club', 'restaurant_id': '40364362'}}, {'_id': '5cb032da23d35d00011180da', 'Restaurant': {'address': {'building': '658', 'coord': [-73.81363999999999, 40.82941100000001], 'street': 'Clarence Ave', 'zipcode': '10465'}, 'borough': 'Bronx', 'cuisine': 'American', 'grades': [{'date': '2014-06-21T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2012-07-11T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': 'Manhem Club', 'restaurant_id': '40364363'}}, {'_id': '5cb032da23d35d00011180db', 'Restaurant': {'address': {'building': '1028', 'coord': [-73.966032, 40.762832], 'street': '3 Avenue', 'zipcode': '10065'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-09-16T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2014-02-24T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-05-03T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-08-20T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-02-13T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': 'Isle Of Capri Resturant', 'restaurant_id': '40364373'}}, {'_id': '5cb032da23d35d00011180dc', 'Restaurant': {'address': {'building': '45', 'coord': [-73.9891878, 40.7375638], 'street': 'East 18 Street', 'zipcode': '10003'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-10-08T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': 
'2013-10-10T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-04-24T00:00:00Z', 'grade': 'C', 'score': 36}, {'date': '2012-01-09T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': 'Old Town Bar & Restaurant', 'restaurant_id': '40364389'}}, {'_id': '5cb032da23d35d00011180dd', 'Restaurant': {'address': {'building': '261', 'coord': [-73.94839189999999, 40.7224876], 'street': 'Driggs Avenue', 'zipcode': '11222'}, 'borough': 'Brooklyn', 'cuisine': 'Polish', 'grades': [{'date': '2014-05-31T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-05-10T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2012-02-17T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2011-10-14T00:00:00Z', 'grade': 'C', 'score': 54}], 'name': 'Polish National Home', 'restaurant_id': '40364404'}}, {'_id': '5cb032da23d35d00011180de', 'Restaurant': {'address': {'building': '62', 'coord': [-74.00310999999999, 40.7348888], 'street': 'Charles Street', 'zipcode': '10014'}, 'borough': 'Manhattan', 'cuisine': 'Latin (Cuban, Dominican, Puerto Rican, South & Central American)', 'grades': [{'date': '2014-05-02T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-05-20T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-05-24T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-01-18T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-10-03T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': 'Seville Restaurant', 'restaurant_id': '40364439'}}, {'_id': '5cb032da23d35d00011180df', 'Restaurant': {'address': {'building': '100', 'coord': [-74.0010484, 40.71599000000001], 'street': 'Centre Street', 'zipcode': '10013'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-03-03T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2013-03-08T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-03-09T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-03-31T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Criminal Court Bldg Cafeteria', 'restaurant_id': '40364443'}}, {'_id': '5cb032da23d35d00011180e0', 'Restaurant': {'address': {'building': '657', 'coord': [-73.9056678, 40.7066898], 'street': 'Fairview Avenue', 'zipcode': '11385'}, 'borough': 'Queens', 'cuisine': 'German', 'grades': [{'date': '2014-03-15T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-03-12T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-07-21T00:00:00Z', 'grade': 'B', 'score': 27}, {'date': '2011-11-25T00:00:00Z', 'grade': 'B', 'score': 24}, {'date': '2011-06-22T00:00:00Z', 'grade': 'B', 'score': 20}], 'name': 'Gottscheer Hall', 'restaurant_id': '40364449'}}, {'_id': '5cb032da23d35d00011180e1', 'Restaurant': {'address': {'building': '180', 'coord': [-73.9788694, 40.7665961], 'street': 'Central Park South', 'zipcode': '10019'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-12-15T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-08-07T00:00:00Z', 'grade': 'C', 'score': 40}, {'date': '2013-07-29T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-12-13T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-07-30T00:00:00Z', 'grade': 'C', 'score': 4}, {'date': '2012-02-16T00:00:00Z', 'grade': 'A', 'score': 2}], 'name': 'Nyac Main Dining Room', 'restaurant_id': '40364467'}}, {'_id': '5cb032da23d35d00011180e2', 'Restaurant': {'address': {'building': '108', 'coord': [-73.98146, 40.7250067], 'street': 'Avenue B', 'zipcode': '10009'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-07-14T00:00:00Z', 'grade': 'B', 'score': 17}, {'date': '2013-12-31T00:00:00Z', 'grade': 'A', 'score': 12}, 
{'date': '2012-10-22T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-05-07T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-10-14T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': '7B Bar', 'restaurant_id': '40364518'}}, {'_id': '5cb032da23d35d00011180e3', 'Restaurant': {'address': {'building': '96-40', 'coord': [-73.86137149999999, 40.7293762], 'street': 'Queens Boulevard', 'zipcode': '11374'}, 'borough': 'Queens', 'cuisine': 'Jewish/Kosher', 'grades': [{'date': '2014-03-13T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-09-30T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-04-26T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-09-11T00:00:00Z', 'grade': 'B', 'score': 24}, {'date': '2011-09-19T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-03-17T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Ben-Best Deli & Restaurant', 'restaurant_id': '40364529'}}, {'_id': '5cb032da23d35d00011180e4', 'Restaurant': {'address': {'building': '215', 'coord': [-73.9805679, 40.7659436], 'street': 'West 57 Street', 'zipcode': '10019'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-09-25T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-02-14T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-08-09T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-02-22T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2012-02-16T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': 'Cafe Atelier (Art Students League)', 'restaurant_id': '40364531'}}, {'_id': '5cb032da23d35d00011180e5', 'Restaurant': {'address': {'building': '845', 'coord': [-73.965531, 40.765431], 'street': 'Lexington Avenue', 'zipcode': '10065'}, 'borough': 'Manhattan', 'cuisine': 'Steak', 'grades': [{'date': '2014-03-26T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-03-21T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-10-18T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-05-07T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2011-05-17T00:00:00Z', 'grade': 'A', 'score': 5}], 'name': "Donohue'S Steak House", 'restaurant_id': '40364572'}}, {'_id': '5cb032da23d35d00011180e6', 'Restaurant': {'address': {'building': '311', 'coord': [-73.98621899999999, 40.763406], 'street': 'West 51 Street', 'zipcode': '10019'}, 'borough': 'Manhattan', 'cuisine': 'French', 'grades': [{'date': '2014-11-10T00:00:00Z', 'grade': 'B', 'score': 15}, {'date': '2014-04-03T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-07-17T00:00:00Z', 'grade': 'C', 'score': 36}, {'date': '2013-02-06T00:00:00Z', 'grade': 'B', 'score': 22}, {'date': '2012-07-16T00:00:00Z', 'grade': 'C', 'score': 36}, {'date': '2012-03-08T00:00:00Z', 'grade': 'C', 'score': 7}], 'name': 'Tout Va Bien', 'restaurant_id': '40364576'}}, {'_id': '5cb032da23d35d00011180e7', 'Restaurant': {'address': {'building': '386', 'coord': [-73.9818918, 40.6901211], 'street': 'Flatbush Avenue Extension', 'zipcode': '11201'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-11-14T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2014-03-10T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2013-01-10T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2012-09-04T00:00:00Z', 'grade': 'A', 'score': 7}], 'name': "Junior'S", 'restaurant_id': '40364581'}}, {'_id': '5cb032da23d35d00011180e8', 'Restaurant': {'address': {'building': '37', 'coord': [-74.138263, 40.546681], 'street': 'Mansion Ave', 'zipcode': '10308'}, 'borough': 'Staten Island', 'cuisine': 'American', 'grades': [{'date': '2014-04-22T00:00:00Z', 'grade': 'A', 
'score': 10}, {'date': '2013-09-25T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2012-06-09T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': 'Great Kills Yacht Club', 'restaurant_id': '40364610'}}, {'_id': '5cb032da23d35d00011180e9', 'Restaurant': {'address': {'building': '251', 'coord': [-73.9775552, 40.7432016], 'street': 'East 31 Street', 'zipcode': '10016'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-04-22T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-06-19T00:00:00Z', 'grade': 'C', 'score': 32}, {'date': '2012-05-22T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Marchis Restaurant', 'restaurant_id': '40364668'}}, {'_id': '5cb032da23d35d00011180ea', 'Restaurant': {'address': {'building': '2602', 'coord': [-73.95443709999999, 40.5877993], 'street': 'East 15 Street', 'zipcode': '11235'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-05-14T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-04-27T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-11-23T00:00:00Z', 'grade': 'B', 'score': 27}, {'date': '2012-03-14T00:00:00Z', 'grade': 'B', 'score': 17}, {'date': '2011-07-14T00:00:00Z', 'grade': 'B', 'score': 21}], 'name': 'Towne Cafe', 'restaurant_id': '40364681'}}, {'_id': '5cb032da23d35d00011180eb', 'Restaurant': {'address': {'building': '.1-A', 'coord': [-48.9424, -16.3550032], 'street': 'East 77 St', 'zipcode': '10021'}, 'borough': 'Manhattan', 'cuisine': 'Continental', 'grades': [{'date': '2014-11-24T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-10-10T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-04-03T00:00:00Z', 'grade': 'B', 'score': 18}, {'date': '2012-10-02T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-04-25T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': 'Dining Room', 'restaurant_id': '40364691'}}, {'_id': '5cb032da23d35d00011180ec', 'Restaurant': {'address': {'building': '56', 'coord': [-74.004758, 40.741207], 'street': '9 Avenue', 'zipcode': '10011'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-06-10T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-06-10T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-09-26T00:00:00Z', 'grade': 'B', 'score': 24}, {'date': '2012-05-24T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-11-14T00:00:00Z', 'grade': 'A', 'score': 2}], 'name': 'Old Homestead', 'restaurant_id': '40364715'}}, {'_id': '5cb032da23d35d00011180ed', 'Restaurant': {'address': {'building': '156-71', 'coord': [-73.840437, 40.6627235], 'street': 'Crossbay Boulevard', 'zipcode': '11414'}, 'borough': 'Queens', 'cuisine': 'Pizza/Italian', 'grades': [{'date': '2014-10-29T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-10-30T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-06-12T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-03-27T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': 'New Park Pizzeria & Restaurant', 'restaurant_id': '40364744'}}, {'_id': '5cb032da23d35d00011180ee', 'Restaurant': {'address': {'building': '600', 'coord': [-73.7522366, 40.7766941], 'street': 'West Drive', 'zipcode': '11363'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2013-12-04T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-06-13T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-12-06T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-04-12T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2011-07-30T00:00:00Z', 'grade': 'B', 'score': 23}], 'name': 'Douglaston Club', 'restaurant_id': '40364858'}}, {'_id': 
'5cb032da23d35d00011180ef', 'Restaurant': {'address': {'building': '225', 'coord': [-73.96485799999999, 40.761899], 'street': 'East 60 Street', 'zipcode': '10022'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-08-11T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2014-03-14T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2013-01-16T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-07-12T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': 'Serendipity 3', 'restaurant_id': '40364863'}}, {'_id': '5cb032da23d35d00011180f0', 'Restaurant': {'address': {'building': '461', 'coord': [-74.002944, 40.652779], 'street': '37 Street', 'zipcode': '11232'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-11-28T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-12-04T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-12-06T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-12-06T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Melody Lanes', 'restaurant_id': '40364889'}}, {'_id': '5cb032da23d35d00011180f1', 'Restaurant': {'address': {'building': '30-13', 'coord': [-73.9151096, 40.763377], 'street': 'Steinway Street', 'zipcode': '11103'}, 'borough': 'Queens', 'cuisine': 'Pizza', 'grades': [{'date': '2014-10-06T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-10-10T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-10-24T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-06-13T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-01-17T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': "Rizzo'S Fine Pizza", 'restaurant_id': '40364920'}}, {'_id': '5cb032da23d35d00011180f2', 'Restaurant': {'address': {'building': '2222', 'coord': [-73.84971759999999, 40.8304811], 'street': 'Haviland Avenue', 'zipcode': '10462'}, 'borough': 'Bronx', 'cuisine': 'American', 'grades': [{'date': '2014-12-18T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2014-05-01T00:00:00Z', 'grade': 'B', 'score': 17}, {'date': '2013-03-14T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-09-20T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-02-08T00:00:00Z', 'grade': 'B', 'score': 19}], 'name': 'The New Starling Athletic Club Of The Bronx', 'restaurant_id': '40364956'}}, {'_id': '5cb032da23d35d00011180f3', 'Restaurant': {'address': {'building': '567', 'coord': [-74.00619499999999, 40.735663], 'street': 'Hudson Street', 'zipcode': '10014'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-07-28T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-07-25T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2013-02-05T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-05-29T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2011-12-23T00:00:00Z', 'grade': 'A', 'score': 5}], 'name': 'White Horse Tavern', 'restaurant_id': '40364958'}}, {'_id': '5cb032da23d35d00011180f4', 'Restaurant': {'address': {'building': '67', 'coord': [-74.0707363, 40.59321569999999], 'street': 'Olympia Boulevard', 'zipcode': '10305'}, 'borough': 'Staten Island', 'cuisine': 'Italian', 'grades': [{'date': '2014-04-24T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-04-04T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-02-02T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2011-07-23T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Crystal Room', 'restaurant_id': '40365013'}}, {'_id': '5cb032da23d35d00011180f5', 'Restaurant': {'address': {'building': '390', 'coord': [-74.07444319999999, 40.6096914], 'street': 'Hylan Boulevard', 'zipcode': '10305'}, 
'borough': 'Staten Island', 'cuisine': 'American', 'grades': [{'date': '2014-06-21T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-06-15T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-08-30T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-10-07T00:00:00Z', 'grade': 'B', 'score': 20}], 'name': "Labetti'S Post # 2159", 'restaurant_id': '40365022'}}, {'_id': '5cb032da23d35d00011180f6', 'Restaurant': {'address': {'building': '1', 'coord': [-73.9727638, 40.588853], 'street': 'Bouck Court', 'zipcode': '11223'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-12-10T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2014-06-25T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-10-23T00:00:00Z', 'grade': 'C', 'score': 28}, {'date': '2013-03-21T00:00:00Z', 'grade': 'C', 'score': 29}, {'date': '2012-06-15T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': 'Shell Lanes', 'restaurant_id': '40365043'}}, {'_id': '5cb032da23d35d00011180f7', 'Restaurant': {'address': {'building': '15', 'coord': [-73.9896713, 40.7287978], 'street': 'East 7 Street', 'zipcode': '10003'}, 'borough': 'Manhattan', 'cuisine': 'Irish', 'grades': [{'date': '2014-06-07T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2014-01-09T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-06-12T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-05-21T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-01-11T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-08-11T00:00:00Z', 'grade': 'B', 'score': 16}], 'name': "Mcsorley'S Old Ale House", 'restaurant_id': '40365075'}}, {'_id': '5cb032da23d35d00011180f8', 'Restaurant': {'address': {'building': '93', 'coord': [-73.99950489999999, 40.7169224], 'street': 'Baxter Street', 'zipcode': '10013'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-12-15T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-12-06T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-10-23T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-06-04T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-01-12T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Forlinis Restaurant', 'restaurant_id': '40365098'}}, {'_id': '5cb032da23d35d00011180f9', 'Restaurant': {'address': {'building': '6736', 'coord': [-74.2274942, 40.5071996], 'street': 'Hylan Boulevard', 'zipcode': '10309'}, 'borough': 'Staten Island', 'cuisine': 'American', 'grades': [{'date': '2014-08-13T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2013-08-20T00:00:00Z', 'grade': 'B', 'score': 19}, {'date': '2012-06-18T00:00:00Z', 'grade': 'A', 'score': 6}], 'name': 'South Shore Swimming Club', 'restaurant_id': '40365120'}}, {'_id': '5cb032da23d35d00011180fa', 'Restaurant': {'address': {'building': '331', 'coord': [-74.0037823, 40.7380122], 'street': 'West 4 Street', 'zipcode': '10014'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-11-17T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-06-27T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2013-05-15T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2012-05-09T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Corner Bistro', 'restaurant_id': '40365166'}}, {'_id': '5cb032da23d35d00011180fb', 'Restaurant': {'address': {'building': '1449', 'coord': [-73.94933739999999, 40.6509823], 'street': 'Nostrand Avenue', 'zipcode': '11226'}, 'borough': 'Brooklyn', 'cuisine': 'Donuts', 'grades': [{'date': '2014-10-21T00:00:00Z', 'grade': 'B', 'score': 16}, {'date': '2014-05-21T00:00:00Z', 'grade': 'A', 'score': 5}, 
{'date': '2013-05-02T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-12-03T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-05-09T00:00:00Z', 'grade': 'B', 'score': 16}, {'date': '2011-12-14T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Nostrand Donut Shop', 'restaurant_id': '40365226'}}, {'_id': '5cb032da23d35d00011180fc', 'Restaurant': {'address': {'building': '1616', 'coord': [-73.952449, 40.776325], 'street': '2 Avenue', 'zipcode': '10028'}, 'borough': 'Manhattan', 'cuisine': 'Irish', 'grades': [{'date': '2014-02-28T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2013-08-30T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-08-27T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2011-09-14T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': "Dorrian'S Red Hand Restaurant", 'restaurant_id': '40365239'}}, {'_id': '5cb032da23d35d00011180fd', 'Restaurant': {'address': {'building': '3', 'coord': [-73.97557069999999, 40.7596796], 'street': 'East 52 Street', 'zipcode': '10022'}, 'borough': 'Manhattan', 'cuisine': 'French', 'grades': [{'date': '2014-04-09T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-03-05T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-02-02T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'La Grenouille', 'restaurant_id': '40365264'}}, {'_id': '5cb032da23d35d00011180fe', 'Restaurant': {'address': {'building': '4035', 'coord': [-73.9395182, 40.8422945], 'street': 'Broadway', 'zipcode': '10032'}, 'borough': 'Manhattan', 'cuisine': 'Pizza', 'grades': [{'date': '2014-02-10T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-02-04T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-01-04T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2011-09-15T00:00:00Z', 'grade': 'C', 'score': 60}], 'name': 'Como Pizza', 'restaurant_id': '40365280'}}, {'_id': '5cb032da23d35d00011180ff', 'Restaurant': {'address': {'building': '842', 'coord': [-73.97063700000001, 40.751495], 'street': '2 Avenue', 'zipcode': '10017'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-07-22T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2013-05-28T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-05-29T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-01-05T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-08-10T00:00:00Z', 'grade': 'B', 'score': 24}], 'name': 'Keats Restaurant', 'restaurant_id': '40365288'}}, {'_id': '5cb032da23d35d0001118100', 'Restaurant': {'address': {'building': '146', 'coord': [-73.9973041, 40.7188698], 'street': 'Mulberry Street', 'zipcode': '10013'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-05-02T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-03-14T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-09-26T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-02-15T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-09-15T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Angelo Of Mulberry St.', 'restaurant_id': '40365293'}}, {'_id': '5cb032da23d35d0001118101', 'Restaurant': {'address': {'building': '103', 'coord': [-74.001043, 40.729795], 'street': 'Macdougal Street', 'zipcode': '10012'}, 'borough': 'Manhattan', 'cuisine': 'Mexican', 'grades': [{'date': '2014-05-22T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-10-10T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-03-20T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-05-17T00:00:00Z', 'grade': 'B', 'score': 20}], 'name': "Panchito'S", 'restaurant_id': '40365348'}}, {'_id': '5cb032da23d35d0001118102', 
'Restaurant': {'address': {'building': '7201', 'coord': [-74.0166091, 40.6284767], 'street': '8 Avenue', 'zipcode': '11228'}, 'borough': 'Brooklyn', 'cuisine': 'Italian', 'grades': [{'date': '2014-12-04T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2014-02-19T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-07-09T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-06-06T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-12-19T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'New Corner', 'restaurant_id': '40365355'}}, {'_id': '5cb032da23d35d0001118103', 'Restaurant': {'address': {'building': '15', 'coord': [-73.98126069999999, 40.7547107], 'street': 'West 43 Street', 'zipcode': '10036'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2015-01-15T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2014-07-07T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2014-01-14T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-07-19T00:00:00Z', 'grade': 'C', 'score': 29}, {'date': '2013-02-05T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'The Princeton Club', 'restaurant_id': '40365361'}}, {'_id': '5cb032da23d35d0001118104', 'Restaurant': {'address': {'building': '106', 'coord': [-74.0003315, 40.7274874], 'street': 'West Houston Street', 'zipcode': '10012'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-03-31T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-10-08T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-03-29T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-09-05T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-03-05T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': "Arturo'S", 'restaurant_id': '40365387'}}, {'_id': '5cb032da23d35d0001118105', 'Restaurant': {'address': {'building': '405', 'coord': [-73.9646207, 40.7550069], 'street': 'East 52 Street', 'zipcode': '10022'}, 'borough': 'Manhattan', 'cuisine': 'French', 'grades': [{'date': '2014-07-14T00:00:00Z', 'grade': 'B', 'score': 14}, {'date': '2013-12-02T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-04-08T00:00:00Z', 'grade': 'B', 'score': 22}, {'date': '2012-09-17T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-04-03T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Le Perigord', 'restaurant_id': '40365414'}}, {'_id': '5cb032da23d35d0001118106', 'Restaurant': {'address': {'building': '4241', 'coord': [-73.9365108, 40.8497077], 'street': 'Broadway', 'zipcode': '10033'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-11-15T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2014-04-25T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2014-03-28T00:00:00Z', 'grade': 'P', 'score': 15}, {'date': '2013-08-19T00:00:00Z', 'grade': 'B', 'score': 23}, {'date': '2013-02-20T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2012-08-22T00:00:00Z', 'grade': 'B', 'score': 23}, {'date': '2012-01-30T00:00:00Z', 'grade': 'C', 'score': 48}], 'name': "Reynold'S Bar", 'restaurant_id': '40365423'}}, {'_id': '5cb032da23d35d0001118107', 'Restaurant': {'address': {'building': '1758', 'coord': [-74.1220973, 40.6129407], 'street': 'Victory Boulevard', 'zipcode': '10314'}, 'borough': 'Staten Island', 'cuisine': 'Pizza/Italian', 'grades': [{'date': '2014-11-20T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2014-01-13T00:00:00Z', 'grade': 'B', 'score': 14}, {'date': '2013-04-25T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-10-09T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2012-05-02T00:00:00Z', 'grade': 'B', 'score': 22}], 
'name': "Joe & Pat'S Pizzeria", 'restaurant_id': '40365454'}}, {'_id': '5cb032da23d35d0001118108', 'Restaurant': {'address': {'building': '113', 'coord': [-73.9979214, 40.7371344], 'street': 'West 13 Street', 'zipcode': '10011'}, 'borough': 'Manhattan', 'cuisine': 'Spanish', 'grades': [{'date': '2014-07-25T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-03-27T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-01-14T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-12-29T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-08-03T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Spain Restaurant & Bar', 'restaurant_id': '40365472'}}, {'_id': '5cb032da23d35d0001118109', 'Restaurant': {'address': {'building': '206', 'coord': [-73.9446421, 40.7253944], 'street': 'Nassau Avenue', 'zipcode': '11222'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-11-19T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2014-05-09T00:00:00Z', 'grade': 'B', 'score': 19}, {'date': '2013-06-13T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-10-17T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Palace Cafe', 'restaurant_id': '40365473'}}, {'_id': '5cb032da23d35d000111810a', 'Restaurant': {'address': {'building': '72', 'coord': [-73.92506, 40.8275556], 'street': 'East 161 Street', 'zipcode': '10451'}, 'borough': 'Bronx', 'cuisine': 'American', 'grades': [{'date': '2014-04-15T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-11-14T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2013-07-29T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-12-31T00:00:00Z', 'grade': 'B', 'score': 15}, {'date': '2012-05-30T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-01-09T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-08-15T00:00:00Z', 'grade': 'C', 'score': 37}], 'name': 'Yankee Tavern', 'restaurant_id': '40365499'}}, {'_id': '5cb032da23d35d000111810b', 'Restaurant': {'address': {'building': '203', 'coord': [-73.99987229999999, 40.7386361], 'street': 'West 14 Street', 'zipcode': '10011'}, 'borough': 'Manhattan', 'cuisine': 'Donuts', 'grades': [{'date': '2014-02-11T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-02-08T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-07-05T00:00:00Z', 'grade': 'B', 'score': 18}, {'date': '2012-02-22T00:00:00Z', 'grade': 'B', 'score': 16}, {'date': '2011-08-08T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-03-16T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Donut Pub', 'restaurant_id': '40365525'}}, {'_id': '5cb032da23d35d000111810c', 'Restaurant': {'address': {'building': '146', 'coord': [-74.0056649, 40.7452371], 'street': '10 Avenue', 'zipcode': '10011'}, 'borough': 'Manhattan', 'cuisine': 'Irish', 'grades': [{'date': '2014-10-02T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-09-10T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-02-05T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-11-23T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': "Moran'S Chelsea", 'restaurant_id': '40365526'}}, {'_id': '5cb032da23d35d000111810d', 'Restaurant': {'address': {'building': '229', 'coord': [-73.9590059, 40.7090147], 'street': 'Havemeyer Street', 'zipcode': '11211'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-08-18T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2014-01-08T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-05-20T00:00:00Z', 'grade': 'B', 'score': 21}, {'date': '2012-09-20T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-09-22T00:00:00Z', 
'grade': 'A', 'score': 11}], 'name': 'Reben Luncheonette', 'restaurant_id': '40365546'}}, {'_id': '5cb032da23d35d000111810e', 'Restaurant': {'address': {'building': '1024', 'coord': [-73.96392089999999, 40.8033908], 'street': 'Amsterdam Avenue', 'zipcode': '10025'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-06-12T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2014-01-09T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-06-25T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-06-01T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-12-15T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': 'V & T Restaurant', 'restaurant_id': '40365577'}}, {'_id': '5cb032da23d35d000111810f', 'Restaurant': {'address': {'building': '181-08', 'coord': [-73.7867565, 40.7271312], 'street': 'Union Turnpike', 'zipcode': '11366'}, 'borough': 'Queens', 'cuisine': 'Chinese', 'grades': [{'date': '2014-10-22T00:00:00Z', 'grade': 'B', 'score': 14}, {'date': '2014-04-09T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-07-13T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-01-02T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-12-13T00:00:00Z', 'grade': 'P', 'score': 2}, {'date': '2012-06-19T00:00:00Z', 'grade': 'C', 'score': 36}], 'name': 'King Yum Restaurant', 'restaurant_id': '40365592'}}, {'_id': '5cb032da23d35d0001118110', 'Restaurant': {'address': {'building': '8104', 'coord': [-73.8850023, 40.7494272], 'street': '37 Avenue', 'zipcode': '11372'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-07-07T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2014-02-06T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-08-14T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-03-20T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2012-02-28T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-10-25T00:00:00Z', 'grade': 'A', 'score': 10}], 'name': "Jahn'S Restaurant", 'restaurant_id': '40365627'}}, {'_id': '5cb032da23d35d0001118111', 'Restaurant': {'address': {'building': '6322', 'coord': [-73.9896898, 40.6199526], 'street': '18 Avenue', 'zipcode': '11204'}, 'borough': 'Brooklyn', 'cuisine': 'Pizza', 'grades': [{'date': '2014-12-30T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-05-15T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-10-29T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-10-06T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-03-29T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'J&V Famous Pizza', 'restaurant_id': '40365632'}}, {'_id': '5cb032da23d35d0001118112', 'Restaurant': {'address': {'building': '910', 'coord': [-73.9799932, 40.7660886], 'street': 'Seventh Avenue', 'zipcode': '10019'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2015-01-08T00:00:00Z', 'grade': 'Z', 'score': 35}, {'date': '2014-06-02T00:00:00Z', 'grade': 'B', 'score': 19}, {'date': '2013-11-25T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-06-24T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-12-04T00:00:00Z', 'grade': 'B', 'score': 24}, {'date': '2012-06-14T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-02-24T00:00:00Z', 'grade': 'B', 'score': 21}], 'name': 'La Parisienne Diner', 'restaurant_id': '40365633'}}, {'_id': '5cb032da23d35d0001118113', 'Restaurant': {'address': {'building': '326', 'coord': [-73.989131, 40.760039], 'street': 'West 46 Street', 'zipcode': '10036'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-09-10T00:00:00Z', 'grade': 'A', 
'score': 7}, {'date': '2013-09-25T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2012-09-11T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2012-04-19T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2011-10-26T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Joe Allen Restaurant', 'restaurant_id': '40365644'}}, {'_id': '5cb032da23d35d0001118114', 'Restaurant': {'address': {'building': '3823', 'coord': [-74.16536339999999, 40.5450793], 'street': 'Richmond Avenue', 'zipcode': '10312'}, 'borough': 'Staten Island', 'cuisine': 'American', 'grades': [{'date': '2014-07-15T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2013-04-01T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-03-12T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': "Joyce'S Tavern", 'restaurant_id': '40365692'}}, {'_id': '5cb032da23d35d0001118115', 'Restaurant': {'address': {'building': '351', 'coord': [-73.96117869999999, 40.7619226], 'street': 'East 62 Street', 'zipcode': '10065'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-11-13T00:00:00Z', 'grade': 'B', 'score': 24}, {'date': '2014-02-28T00:00:00Z', 'grade': 'B', 'score': 19}, {'date': '2013-06-10T00:00:00Z', 'grade': 'B', 'score': 27}, {'date': '2012-05-09T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Il Vagabondo Restaurant', 'restaurant_id': '40365709'}}, {'_id': '5cb032da23d35d0001118116', 'Restaurant': {'address': {'building': '319321', 'coord': [-73.988948, 40.760337], 'street': '323 W. 46Th St.', 'zipcode': '10036'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-05-13T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-11-12T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-04-27T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-12-07T00:00:00Z', 'grade': 'A', 'score': 7}], 'name': 'Barbetta Restaurant', 'restaurant_id': '40365726'}}, {'_id': '5cb032da23d35d0001118117', 'Restaurant': {'address': {'building': '2911', 'coord': [-73.982241, 40.576366], 'street': 'West 15 Street', 'zipcode': '11224'}, 'borough': 'Brooklyn', 'cuisine': 'Italian', 'grades': [{'date': '2014-12-18T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2014-05-15T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-06-12T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-02-06T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': "Gargiulo'S Restaurant", 'restaurant_id': '40365784'}}, {'_id': '5cb032da23d35d0001118118', 'Restaurant': {'address': {'building': '236', 'coord': [-73.9827418, 40.7655827], 'street': 'West 56 Street', 'zipcode': '10019'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-05-05T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-08-12T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2012-08-13T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-02-28T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': "Patsy'S Italian Restaurant", 'restaurant_id': '40365789'}}, {'_id': '5cb032da23d35d0001118119', 'Restaurant': {'address': {'building': '10701', 'coord': [-73.856132, 40.743841], 'street': 'Corona Avenue', 'zipcode': '11368'}, 'borough': 'Queens', 'cuisine': 'Italian', 'grades': [{'date': '2014-07-17T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2014-02-25T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-03-27T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-02-07T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-12-28T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Parkside Restaurant', 'restaurant_id': '40365841'}}, {'_id': 
'5cb032da23d35d000111811a', 'Restaurant': {'address': {'building': '45-15', 'coord': [-73.91427200000001, 40.7569379], 'street': 'Broadway', 'zipcode': '11103'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2013-12-04T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-05-02T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-03-15T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-07-12T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-02-16T00:00:00Z', 'grade': 'A', 'score': 7}], 'name': "Lavelle'S Admiral'S Club", 'restaurant_id': '40365844'}}, {'_id': '5cb032da23d35d000111811b', 'Restaurant': {'address': {'building': '358', 'coord': [-73.963506, 40.758273], 'street': 'East 57 Street', 'zipcode': '10022'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-08-11T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-07-22T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-03-14T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-07-02T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-02-02T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-08-24T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': "Neary'S Pub", 'restaurant_id': '40365871'}}, {'_id': '5cb032da23d35d000111811c', 'Restaurant': {'address': {'building': '413', 'coord': [-73.99532099999999, 40.750205], 'street': '8 Avenue', 'zipcode': '10001'}, 'borough': 'Manhattan', 'cuisine': 'Pizza', 'grades': [{'date': '2014-05-12T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-12-04T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2012-11-15T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-06-25T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-01-23T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2011-09-07T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'New York Pizza Suprema', 'restaurant_id': '40365882'}}, {'_id': '5cb032da23d35d000111811d', 'Restaurant': {'address': {'building': '331', 'coord': [-73.87786539999999, 40.8724377], 'street': 'East 204 Street', 'zipcode': '10467'}, 'borough': 'Bronx', 'cuisine': 'Irish', 'grades': [{'date': '2014-08-26T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2014-03-26T00:00:00Z', 'grade': 'B', 'score': 23}, {'date': '2013-09-11T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-12-18T00:00:00Z', 'grade': 'B', 'score': 27}, {'date': '2011-10-20T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Mcdwyers Pub', 'restaurant_id': '40365893'}}, {'_id': '5cb032da23d35d000111811e', 'Restaurant': {'address': {'building': '26', 'coord': [-73.9983, 40.715051], 'street': 'Pell Street', 'zipcode': '10013'}, 'borough': 'Manhattan', 'cuisine': 'Café/Coffee/Tea', 'grades': [{'date': '2014-07-10T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-07-12T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-02-11T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-01-10T00:00:00Z', 'grade': 'P', 'score': 4}, {'date': '2012-07-27T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-02-27T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-08-12T00:00:00Z', 'grade': 'B', 'score': 24}], 'name': 'Mee Sum Coffee Shop', 'restaurant_id': '40365904'}}, {'_id': '5cb032da23d35d000111811f', 'Restaurant': {'address': {'building': '25541', 'coord': [-73.70902579999999, 40.7276012], 'street': 'Jamaica Avenue', 'zipcode': '11001'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-01-16T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2013-06-20T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': 
'2012-10-04T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-01-10T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2011-06-23T00:00:00Z', 'grade': 'B', 'score': 21}], 'name': "Nancy'S Fire Side", 'restaurant_id': '40365938'}}, {'_id': '5cb032da23d35d0001118120', 'Restaurant': {'address': {'building': '21', 'coord': [-73.9990337, 40.7143954], 'street': 'Mott Street', 'zipcode': '10013'}, 'borough': 'Manhattan', 'cuisine': 'Chinese', 'grades': [{'date': '2014-07-28T00:00:00Z', 'grade': 'B', 'score': 27}, {'date': '2013-11-19T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2013-04-30T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-10-16T00:00:00Z', 'grade': 'B', 'score': 24}, {'date': '2012-05-07T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Hop Kee Restaurant', 'restaurant_id': '40365942'}}, {'_id': '5cb032da23d35d0001118121', 'Restaurant': {'address': {'building': '1', 'coord': [-74.0049219, 40.720699], 'street': 'Lispenard Street', 'zipcode': '10013'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-10-07T00:00:00Z', 'grade': 'B', 'score': 18}, {'date': '2014-05-02T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2013-10-03T00:00:00Z', 'grade': 'B', 'score': 23}, {'date': '2012-09-17T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-05-08T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2011-12-13T00:00:00Z', 'grade': 'A', 'score': 13}], 'name': 'Nancy Whiskey Pub', 'restaurant_id': '40365968'}}, {'_id': '5cb032da23d35d0001118122', 'Restaurant': {'address': {'building': '146-09', 'coord': [-73.808593, 40.702028], 'street': 'Jamaica Avenue', 'zipcode': '11435'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-07-14T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-08-05T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-03-18T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-01-11T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2011-09-20T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': 'Blarney Bar', 'restaurant_id': '40365972'}}, {'_id': '5cb032da23d35d0001118123', 'Restaurant': {'address': {'building': '16304', 'coord': [-73.78999089999999, 40.7118632], 'street': 'Jamaica Avenue', 'zipcode': '11432'}, 'borough': 'Queens', 'cuisine': 'Pizza', 'grades': [{'date': '2014-07-25T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2014-02-10T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-01-03T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-01-13T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-09-28T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Margherita Pizza', 'restaurant_id': '40366002'}}, {'_id': '5cb032da23d35d0001118124', 'Restaurant': {'address': {'building': '10807', 'coord': [-73.8299395, 40.5812137], 'street': 'Rockaway Beach Boulevard', 'zipcode': '11694'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-01-29T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-06-26T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-06-27T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-01-31T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-02-08T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': "Healy'S Pub", 'restaurant_id': '40366054'}}, {'_id': '5cb032da23d35d0001118125', 'Restaurant': {'address': {'building': '416', 'coord': [-73.98586209999999, 40.67017250000001], 'street': '5 Avenue', 'zipcode': '11215'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-12-04T00:00:00Z', 'grade': 'B', 'score': 22}, {'date': 
'2014-04-19T00:00:00Z', 'grade': 'A', 'score': 3}, {'date': '2013-02-14T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-01-12T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': 'Fifth Avenue Bingo', 'restaurant_id': '40366109'}}, {'_id': '5cb032da23d35d0001118126', 'Restaurant': {'address': {'building': '524', 'coord': [-74.1402105, 40.6301893], 'street': 'Port Richmond Avenue', 'zipcode': '10302'}, 'borough': 'Staten Island', 'cuisine': 'Pizza/Italian', 'grades': [{'date': '2014-12-18T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-12-03T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-03-28T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2012-02-22T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': "Denino'S Pizzeria Tavern", 'restaurant_id': '40366132'}}, {'_id': '5cb032da23d35d0001118127', 'Restaurant': {'address': {'building': '2929', 'coord': [-73.942849, 40.6076256], 'street': 'Avenue R', 'zipcode': '11229'}, 'borough': 'Brooklyn', 'cuisine': 'Italian', 'grades': [{'date': '2014-03-13T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-10-02T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-01-22T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-06-12T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2011-12-01T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2011-05-25T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': "Michael'S Restaurant", 'restaurant_id': '40366154'}}, {'_id': '5cb032da23d35d0001118128', 'Restaurant': {'address': {'building': '146', 'coord': [-73.9736776, 40.7535755], 'street': 'East 46 Street', 'zipcode': '10017'}, 'borough': 'Manhattan', 'cuisine': 'Italian', 'grades': [{'date': '2014-03-11T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-07-31T00:00:00Z', 'grade': 'C', 'score': 53}, {'date': '2012-12-19T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-06-04T00:00:00Z', 'grade': 'C', 'score': 45}, {'date': '2012-01-18T00:00:00Z', 'grade': 'C', 'score': 34}, {'date': '2011-09-28T00:00:00Z', 'grade': 'B', 'score': 18}, {'date': '2011-05-24T00:00:00Z', 'grade': 'C', 'score': 52}], 'name': 'Nanni Restaurant', 'restaurant_id': '40366157'}}, {'_id': '5cb032da23d35d0001118129', 'Restaurant': {'address': {'building': '119-09', 'coord': [-73.82770529999999, 40.6944628], 'street': 'Atlantic Avenue', 'zipcode': '11418'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-11-20T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2014-02-15T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2013-07-01T00:00:00Z', 'grade': 'B', 'score': 16}, {'date': '2012-11-28T00:00:00Z', 'grade': 'B', 'score': 20}, {'date': '2012-02-16T00:00:00Z', 'grade': 'B', 'score': 19}], 'name': "Lenihan'S Saloon", 'restaurant_id': '40366162'}}, {'_id': '5cb032da23d35d000111812a', 'Restaurant': {'address': {'building': '4218', 'coord': [-73.8682701, 40.745683], 'street': 'Junction Boulevard', 'zipcode': '11368'}, 'borough': 'Queens', 'cuisine': 'Latin (Cuban, Dominican, Puerto Rican, South & Central American)', 'grades': [{'date': '2014-11-21T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-09-06T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-04-04T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2012-07-27T00:00:00Z', 'grade': 'C', 'score': 30}], 'name': 'Emilio Iii Bar', 'restaurant_id': '40366214'}}, {'_id': '5cb032da23d35d000111812b', 'Restaurant': {'address': {'building': '80', 'coord': [-74.0086833, 40.7052024], 'street': 'Beaver Street', 'zipcode': '10005'}, 'borough': 'Manhattan', 'cuisine': 'Irish', 'grades': [{'date': 
'2014-07-29T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-08-05T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2013-03-14T00:00:00Z', 'grade': 'A', 'score': 2}, {'date': '2012-07-24T00:00:00Z', 'grade': 'A', 'score': 12}], 'name': 'Killarney Rose', 'restaurant_id': '40366222'}}, {'_id': '5cb032da23d35d000111812c', 'Restaurant': {'address': {'building': '13558', 'coord': [-73.8216767, 40.6689548], 'street': 'Lefferts Boulevard', 'zipcode': '11420'}, 'borough': 'Queens', 'cuisine': 'Italian', 'grades': [{'date': '2014-06-20T00:00:00Z', 'grade': 'A', 'score': 10}, {'date': '2013-06-07T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-06-28T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2012-01-13T00:00:00Z', 'grade': 'C', 'score': 28}], 'name': 'Don Peppe', 'restaurant_id': '40366230'}}, {'_id': '5cb032da23d35d000111812d', 'Restaurant': {'address': {'building': '202-24', 'coord': [-73.9250442, 40.5595462], 'street': 'Rockaway Point Boulevard', 'zipcode': '11697'}, 'borough': 'Queens', 'cuisine': 'American', 'grades': [{'date': '2014-12-02T00:00:00Z', 'grade': 'Z', 'score': 18}, {'date': '2014-02-12T00:00:00Z', 'grade': 'B', 'score': 21}, {'date': '2013-04-13T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-06-26T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2011-12-17T00:00:00Z', 'grade': 'A', 'score': 8}], 'name': 'Blarney Castle', 'restaurant_id': '40366356'}}, {'_id': '5cb032da23d35d000111812e', 'Restaurant': {'address': {'building': '1611', 'coord': [-73.955074, 40.599217], 'street': 'Avenue U', 'zipcode': '11229'}, 'borough': 'Brooklyn', 'cuisine': 'American', 'grades': [{'date': '2014-07-02T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2013-07-10T00:00:00Z', 'grade': 'A', 'score': 11}, {'date': '2013-02-13T00:00:00Z', 'grade': 'A', 'score': 7}, {'date': '2012-08-16T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2012-03-29T00:00:00Z', 'grade': 'B', 'score': 24}], 'name': 'Three Star Restaurant', 'restaurant_id': '40366361'}}, {'_id': '5cb032da23d35d000111812f', 'Restaurant': {'address': {'building': '137', 'coord': [-73.98926, 40.7509054], 'street': 'West 33 Street', 'zipcode': '10001'}, 'borough': 'Manhattan', 'cuisine': 'Irish', 'grades': [{'date': '2014-08-15T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-01-21T00:00:00Z', 'grade': 'A', 'score': 5}, {'date': '2013-07-24T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2012-05-31T00:00:00Z', 'grade': 'A', 'score': 8}, {'date': '2012-01-26T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2011-10-11T00:00:00Z', 'grade': 'A', 'score': 5}], 'name': 'Blarney Rock', 'restaurant_id': '40366379'}}, {'_id': '5cb032da23d35d0001118130', 'Restaurant': {'address': {'building': '1118', 'coord': [-73.960573, 40.760982], 'street': '1 Avenue', 'zipcode': '10065'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-09-24T00:00:00Z', 'grade': 'A', 'score': 4}, {'date': '2013-09-06T00:00:00Z', 'grade': 'A', 'score': 6}, {'date': '2013-03-20T00:00:00Z', 'grade': 'C', 'score': 36}, {'date': '2012-04-13T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': "Dangerfield'S Night Club", 'restaurant_id': '40366381'}}, {'_id': '5cb032da23d35d0001118131', 'Restaurant': {'address': {'building': '433', 'coord': [-73.98306099999999, 40.7441419], 'street': 'Park Avenue South', 'zipcode': '10016'}, 'borough': 'Manhattan', 'cuisine': 'American', 'grades': [{'date': '2014-12-29T00:00:00Z', 'grade': 'A', 'score': 12}, {'date': '2014-07-03T00:00:00Z', 'grade': 'A', 'score': 13}, {'date': '2014-01-13T00:00:00Z', 
'grade': 'A', 'score': 11}, {'date': '2013-05-02T00:00:00Z', 'grade': 'A', 'score': 9}], 'name': "Desmond'S Tavern", 'restaurant_id': '40366396'}}, {'_id': '5cb032da23d35d0001118132', 'Restaurant': {'address': {'building': '6828', 'coord': [-73.8204154, 40.7242443], 'street': 'Main Street', 'zipcode': '11367'}, 'borough': 'Queens', 'cuisine': 'Jewish/Kosher', 'grades': [{'date': '2014-09-24T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2014-03-10T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2013-05-31T00:00:00Z', 'grade': 'B', 'score': 26}, {'date': '2012-05-10T00:00:00Z', 'grade': 'A', 'score': 9}, {'date': '2011-11-03T00:00:00Z', 'grade': 'A', 'score': 11}], 'name': 'Naomi Kosher Pizza', 'restaurant_id': '40366425'}}]
|
Model backlog/Train/70-melanoma-5fold-efficientnetb6-384x384.ipynb | ###Markdown
Dependencies
###Code
!pip install --quiet efficientnet
# !pip install --quiet image-classifiers
import warnings, json, re, glob, math
from scripts_step_lr_schedulers import *
from melanoma_utility_scripts import *
from kaggle_datasets import KaggleDatasets
from sklearn.model_selection import KFold
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras import optimizers, layers, metrics, losses, Model
import efficientnet.tfkeras as efn
# from classification_models.tfkeras import Classifiers
import tensorflow_addons as tfa
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
TPU configuration
###Code
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
###Output
Running on TPU grpc://10.0.0.2:8470
REPLICAS: 8
###Markdown
Model parameters
###Code
config = {
"HEIGHT": 384,
"WIDTH": 384,
"CHANNELS": 3,
"BATCH_SIZE": 128,
"EPOCHS": 15,
"LEARNING_RATE": 3e-4,
"ES_PATIENCE": 5,
"N_FOLDS": 5,
"N_USED_FOLDS": 1,
"TTA_STEPS": 15,
"BASE_MODEL": 'EfficientNetB6',
"BASE_MODEL_WEIGHTS": 'noisy-student',
"DATASET_PATH": 'melanoma-384x384'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
config
###Output
_____no_output_____
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/siim-isic-melanoma-classification/'
k_fold = pd.read_csv(database_base_path + 'train.csv')
test = pd.read_csv(database_base_path + 'test.csv')
print('Train samples: %d' % len(k_fold))
display(k_fold.head())
print(f'Test samples: {len(test)}')
display(test.head())
GCS_PATH = KaggleDatasets().get_gcs_path(config['DATASET_PATH'])
TRAINING_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/train*.tfrec')
TEST_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/test*.tfrec')
###Output
Train samples: 33126
###Markdown
Augmentations
###Code
def data_augment(image, label):
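    # each transform family below is gated by its own independent probability draw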
p_spatial = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_spatial2 = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_rotation = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_crop = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_pixel = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_cutout = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
if p_spatial >= .2:
if p_spatial >= .6: # Flips
image['input_image'] = data_augment_spatial(image['input_image'])
else: # Rotate
image['input_image'] = data_augment_rotate(image['input_image'])
if p_crop >= .6: # Crops
image['input_image'] = data_augment_crop(image['input_image'])
if p_spatial2 >= .5:
if p_spatial2 >= .75: # Shift
image['input_image'] = data_augment_shift(image['input_image'])
else: # Shear
image['input_image'] = data_augment_shear(image['input_image'])
if p_pixel >= .6: # Pixel-level transforms
if p_pixel >= .9:
image['input_image'] = data_augment_hue(image['input_image'])
elif p_pixel >= .8:
image['input_image'] = data_augment_saturation(image['input_image'])
elif p_pixel >= .7:
image['input_image'] = data_augment_contrast(image['input_image'])
else:
image['input_image'] = data_augment_brightness(image['input_image'])
if p_rotation >= .5: # Rotation
image['input_image'] = data_augment_rotation(image['input_image'])
if p_cutout >= .5: # Cutout
image['input_image'] = data_augment_cutout(image['input_image'])
return image, label
def data_augment_tta(image, label):
p_spatial = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_spatial2 = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_rotation = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
p_crop = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
if p_spatial >= .2:
if p_spatial >= .6: # Flips
image['input_image'] = data_augment_spatial(image['input_image'])
else: # Rotate
image['input_image'] = data_augment_rotate(image['input_image'])
if p_crop >= .6: # Crops
image['input_image'] = data_augment_crop(image['input_image'])
if p_spatial2 >= .5:
if p_spatial2 >= .75: # Shift
image['input_image'] = data_augment_shift(image['input_image'])
else: # Shear
image['input_image'] = data_augment_shear(image['input_image'])
if p_rotation >= .5: # Rotation
image['input_image'] = data_augment_rotation(image['input_image'])
return image, label
def data_augment_rotation(image, max_angle=45.):
image = transform_rotation(image, config['HEIGHT'], max_angle)
return image
def data_augment_shift(image, h_shift=50., w_shift=50.):
image = transform_shift(image, config['HEIGHT'], h_shift, w_shift)
return image
def data_augment_shear(image, shear=25.):
image = transform_shear(image, config['HEIGHT'], shear)
return image
def data_augment_hue(image, max_delta=.02):
image = tf.image.random_hue(image, max_delta)
return image
def data_augment_saturation(image, lower=.8, upper=1.2):
image = tf.image.random_saturation(image, lower, upper)
return image
def data_augment_contrast(image, lower=.8, upper=1.2):
image = tf.image.random_contrast(image, lower, upper)
return image
def data_augment_brightness(image, max_delta=.1):
image = tf.image.random_brightness(image, max_delta)
return image
def data_augment_spatial(image):
p_spatial = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
image = tf.image.random_flip_left_right(image)
image = tf.image.random_flip_up_down(image)
if p_spatial > .75:
image = tf.image.transpose(image)
return image
def data_augment_rotate(image):
p_rotate = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
if p_rotate > .66:
image = tf.image.rot90(image, k=3) # rotate 270º
elif p_rotate > .33:
image = tf.image.rot90(image, k=2) # rotate 180º
else:
image = tf.image.rot90(image, k=1) # rotate 90º
return image
def data_augment_crop(image):
p_crop = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
if p_crop > .8:
image = tf.image.random_crop(image, size=[int(config['HEIGHT']*.7), int(config['WIDTH']*.7), config['CHANNELS']])
elif p_crop > .6:
image = tf.image.random_crop(image, size=[int(config['HEIGHT']*.8), int(config['WIDTH']*.8), config['CHANNELS']])
elif p_crop > .4:
image = tf.image.random_crop(image, size=[int(config['HEIGHT']*.9), int(config['WIDTH']*.9), config['CHANNELS']])
elif p_crop > .2:
image = tf.image.central_crop(image, central_fraction=.8)
else:
image = tf.image.central_crop(image, central_fraction=.7)
image = tf.image.resize(image, size=[config['HEIGHT'], config['WIDTH']])
return image
def data_augment_cutout(image, min_mask_size=(int(config['HEIGHT'] * .05), int(config['HEIGHT'] * .05)),
max_mask_size=(int(config['HEIGHT'] * .25), int(config['HEIGHT'] * .25))):
p_cutout = tf.random.uniform([1], minval=0, maxval=1, dtype='float32')
if p_cutout > .9: # 3 cut outs
image = random_cutout(image, config['HEIGHT'], config['WIDTH'],
min_mask_size=min_mask_size, max_mask_size=max_mask_size, k=3)
elif p_cutout > .75: # 2 cut outs
image = random_cutout(image, config['HEIGHT'], config['WIDTH'],
min_mask_size=min_mask_size, max_mask_size=max_mask_size, k=2)
else: # 1 cut out
image = random_cutout(image, config['HEIGHT'], config['WIDTH'],
min_mask_size=min_mask_size, max_mask_size=max_mask_size, k=1)
return image
###Output
_____no_output_____
###Markdown
Auxiliary functions
###Code
# Datasets utility functions
def read_labeled_tfrecord(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, LABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
label = tf.cast(example['target'], tf.float32)
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
return {'input_image': image, 'input_meta': data}, label # returns a dataset of (image, data, label)
def read_labeled_tfrecord_eval(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, LABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
label = tf.cast(example['target'], tf.float32)
image_name = example['image_name']
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
return {'input_image': image, 'input_meta': data}, label, image_name # returns a dataset of (image, data, label, image_name)
def load_dataset(filenames, ordered=False, buffer_size=-1):
ignore_order = tf.data.Options()
if not ordered:
ignore_order.experimental_deterministic = False # disable order, increase speed
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.with_options(ignore_order) # uses data as soon as it streams in, rather than in its original order
dataset = dataset.map(read_labeled_tfrecord, num_parallel_calls=buffer_size)
return dataset # returns a dataset of (image, data, label)
def load_dataset_eval(filenames, buffer_size=-1):
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.map(read_labeled_tfrecord_eval, num_parallel_calls=buffer_size)
return dataset # returns a dataset of (image, data, label, image_name)
def get_training_dataset(filenames, batch_size, buffer_size=-1):
dataset = load_dataset(filenames, ordered=False, buffer_size=buffer_size)
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.repeat() # the training dataset must repeat for several epochs
dataset = dataset.shuffle(2048)
    dataset = dataset.batch(batch_size, drop_remainder=True) # slightly faster with fixed tensor sizes
dataset = dataset.prefetch(buffer_size) # prefetch next batch while training (autotune prefetch buffer size)
return dataset
def get_validation_dataset(filenames, ordered=True, repeated=False, batch_size=32, buffer_size=-1):
dataset = load_dataset(filenames, ordered=ordered, buffer_size=buffer_size)
if repeated:
dataset = dataset.repeat()
dataset = dataset.shuffle(2048)
dataset = dataset.batch(batch_size, drop_remainder=repeated)
dataset = dataset.prefetch(buffer_size)
return dataset
def get_eval_dataset(filenames, batch_size=32, buffer_size=-1):
dataset = load_dataset_eval(filenames, buffer_size=buffer_size)
dataset = dataset.batch(batch_size, drop_remainder=False)
dataset = dataset.prefetch(buffer_size)
return dataset
# Test function
def read_unlabeled_tfrecord(example, height=config['HEIGHT'], width=config['WIDTH'], channels=config['CHANNELS']):
example = tf.io.parse_single_example(example, UNLABELED_TFREC_FORMAT)
image = decode_image(example['image'], height, width, channels)
image_name = example['image_name']
# meta features
data = {}
data['patient_id'] = tf.cast(example['patient_id'], tf.int32)
data['sex'] = tf.cast(example['sex'], tf.int32)
data['age_approx'] = tf.cast(example['age_approx'], tf.int32)
data['anatom_site_general_challenge'] = tf.cast(tf.one_hot(example['anatom_site_general_challenge'], 7), tf.int32)
return {'input_image': image, 'input_tabular': data}, image_name # returns a dataset of (image, data, image_name)
def load_dataset_test(filenames, buffer_size=-1):
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=buffer_size) # automatically interleaves reads from multiple files
dataset = dataset.map(read_unlabeled_tfrecord, num_parallel_calls=buffer_size)
    # returns a dataset of ({image, meta data}, image_name) pairs for the unlabeled test records
return dataset
def get_test_dataset(filenames, batch_size=32, buffer_size=-1, tta=False):
dataset = load_dataset_test(filenames, buffer_size=buffer_size)
if tta:
dataset = dataset.map(data_augment_tta, num_parallel_calls=AUTO)
dataset = dataset.batch(batch_size, drop_remainder=False)
dataset = dataset.prefetch(buffer_size)
return dataset
###Output
_____no_output_____
###Markdown
Learning rate scheduler
###Code
lr_min = 1e-6
# lr_start = 5e-6
lr_max = config['LEARNING_RATE']
steps_per_epoch = 24519 // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * steps_per_epoch
warmup_steps = steps_per_epoch * 5
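# these step counts are consumed by RectifiedAdam below, which handles the warmup and decay to min_lr internally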
# hold_max_steps = 0
# step_decay = .8
# step_size = steps_per_epoch * 1
# rng = [i for i in range(0, total_steps, 32)]
# y = [step_schedule_with_warmup(tf.cast(x, tf.float32), step_size=step_size,
# warmup_steps=warmup_steps, hold_max_steps=hold_max_steps,
# lr_start=lr_start, lr_max=lr_max, step_decay=step_decay) for x in rng]
# sns.set(style="whitegrid")
# fig, ax = plt.subplots(figsize=(20, 6))
# plt.plot(rng, y)
# print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
###Output
_____no_output_____
###Markdown
Model
###Code
# Initial bias
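# setting the output layer bias to log(pos/neg) makes the initial sigmoid predictions match the class prior, which helps on heavily imbalanced data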
pos = len(k_fold[k_fold['target'] == 1])
neg = len(k_fold[k_fold['target'] == 0])
initial_bias = np.log([pos/neg])
print('Bias')
print(pos)
print(neg)
print(initial_bias)
# class weights
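# weights scale each class by total/(2*count) so that, if passed to fit, both classes would contribute equally to the loss (left commented out in model.fit below)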
total = len(k_fold)
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Class weight')
print(class_weight)
def model_fn(input_shape):
input_image = L.Input(shape=input_shape, name='input_image')
base_model = efn.EfficientNetB6(weights=config['BASE_MODEL_WEIGHTS'],
include_top=False)
x = base_model(input_image)
x = L.GlobalAveragePooling2D()(x)
output = L.Dense(1, activation='sigmoid', name='output',
bias_initializer=tf.keras.initializers.Constant(initial_bias))(x)
model = Model(inputs=input_image, outputs=output)
return model
###Output
_____no_output_____
###Markdown
Training
###Code
# Evaluation
eval_dataset = get_eval_dataset(TRAINING_FILENAMES, batch_size=config['BATCH_SIZE'], buffer_size=AUTO)
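# gather every training image name in one pass so predictions can be joined back to the dataframe by image_name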
image_names = next(iter(eval_dataset.unbatch().map(lambda data, label, image_name: image_name).batch(count_data_items(TRAINING_FILENAMES)))).numpy().astype('U')
image_data = eval_dataset.map(lambda data, label, image_name: data)
# Resample dataframe
k_fold = k_fold[k_fold['image_name'].isin(image_names)]
# Test
NUM_TEST_IMAGES = len(test)
test_preds = np.zeros((NUM_TEST_IMAGES, 1))
test_preds_last = np.zeros((NUM_TEST_IMAGES, 1))
test_dataset = get_test_dataset(TEST_FILENAMES, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, tta=True)
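# tta=True applies the lighter augmentation pipeline, so each pass over this dataset yields a different random augmentation of the test images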
image_names_test = next(iter(test_dataset.unbatch().map(lambda data, image_name: image_name).batch(NUM_TEST_IMAGES))).numpy().astype('U')
test_image_data = test_dataset.map(lambda data, image_name: data)
history_list = []
k_fold_best = k_fold.copy()
kfold = KFold(config['N_FOLDS'], shuffle=True, random_state=SEED)
for n_fold, (trn_idx, val_idx) in enumerate(kfold.split(TRAINING_FILENAMES)):
if n_fold < config['N_USED_FOLDS']:
        n_fold += 1
print('\nFOLD: %d' % (n_fold))
tf.tpu.experimental.initialize_tpu_system(tpu)
K.clear_session()
### Data
train_filenames = np.array(TRAINING_FILENAMES)[trn_idx]
valid_filenames = np.array(TRAINING_FILENAMES)[val_idx]
steps_per_epoch = count_data_items(train_filenames) // config['BATCH_SIZE']
# Train model
model_path = f'model_fold_{n_fold}.h5'
es = EarlyStopping(monitor='val_auc', mode='max', patience=config['ES_PATIENCE'],
restore_best_weights=False, verbose=1)
checkpoint = ModelCheckpoint(model_path, monitor='val_auc', mode='max',
save_best_only=True, save_weights_only=True)
with strategy.scope():
model = model_fn((config['HEIGHT'], config['WIDTH'], config['CHANNELS']))
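            # RectifiedAdam warms the learning rate up over warmup_steps, then decays it toward min_lr over the remaining steps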
optimizer = tfa.optimizers.RectifiedAdam(lr=lr_max,
total_steps=total_steps,
warmup_proportion=(warmup_steps / total_steps),
min_lr=lr_min)
model.compile(optimizer, loss=losses.BinaryCrossentropy(label_smoothing=0.05),
metrics=[metrics.AUC()])
history = model.fit(get_training_dataset(train_filenames, batch_size=config['BATCH_SIZE'], buffer_size=AUTO),
validation_data=get_validation_dataset(valid_filenames, ordered=True, repeated=False,
batch_size=config['BATCH_SIZE'], buffer_size=AUTO),
epochs=config['EPOCHS'],
steps_per_epoch=steps_per_epoch,
callbacks=[checkpoint, es],
# class_weight=class_weight,
verbose=2).history
# save last epoch weights
model.save_weights('last_' + model_path)
history_list.append(history)
# Get validation IDs
valid_dataset = get_eval_dataset(valid_filenames, batch_size=config['BATCH_SIZE'], buffer_size=AUTO)
valid_image_names = next(iter(valid_dataset.unbatch().map(lambda data, label, image_name: image_name).batch(count_data_items(valid_filenames)))).numpy().astype('U')
k_fold[f'fold_{n_fold}'] = k_fold.apply(lambda x: 'validation' if x['image_name'] in valid_image_names else 'train', axis=1)
k_fold_best[f'fold_{n_fold}'] = k_fold_best.apply(lambda x: 'validation' if x['image_name'] in valid_image_names else 'train', axis=1)
##### Last model #####
print('Last model evaluation...')
preds = model.predict(image_data)
name_preds_eval = dict(zip(image_names, preds.reshape(len(preds))))
k_fold[f'pred_fold_{n_fold}'] = k_fold.apply(lambda x: name_preds_eval[x['image_name']], axis=1)
print(f'Last model inference (TTA {config["TTA_STEPS"]} steps)...')
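        # test-time augmentation: accumulate predictions over several randomly augmented passes; normalized after the fold loop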
for step in range(config['TTA_STEPS']):
test_preds_last += model.predict(test_image_data)
##### Best model #####
print('Best model evaluation...')
model.load_weights(model_path)
preds = model.predict(image_data)
name_preds_eval = dict(zip(image_names, preds.reshape(len(preds))))
k_fold_best[f'pred_fold_{n_fold}'] = k_fold_best.apply(lambda x: name_preds_eval[x['image_name']], axis=1)
print(f'Best model inference (TTA {config["TTA_STEPS"]} steps)...')
for step in range(config['TTA_STEPS']):
test_preds += model.predict(test_image_data)
# normalize preds
test_preds /= (config['N_USED_FOLDS'] * config['TTA_STEPS'])
test_preds_last /= (config['N_USED_FOLDS'] * config['TTA_STEPS'])
name_preds = dict(zip(image_names_test, test_preds.reshape(NUM_TEST_IMAGES)))
name_preds_last = dict(zip(image_names_test, test_preds_last.reshape(NUM_TEST_IMAGES)))
test['target'] = test.apply(lambda x: name_preds[x['image_name']], axis=1)
test['target_last'] = test.apply(lambda x: name_preds_last[x['image_name']], axis=1)
###Output
FOLD: 1
Downloading data from https://github.com/qubvel/efficientnet/releases/download/v0.0.1/efficientnet-b6_noisy-student_notop.h5
165232640/165226952 [==============================] - 7s 0us/step
Epoch 1/15
204/204 - 124s - auc: 0.5289 - loss: 0.1801 - val_auc: 0.6563 - val_loss: 0.1781
Epoch 2/15
204/204 - 94s - auc: 0.7509 - loss: 0.1716 - val_auc: 0.8024 - val_loss: 0.1723
Epoch 3/15
204/204 - 96s - auc: 0.7800 - loss: 0.1708 - val_auc: 0.8341 - val_loss: 0.1699
Epoch 4/15
204/204 - 91s - auc: 0.8125 - loss: 0.1691 - val_auc: 0.8079 - val_loss: 0.1680
Epoch 5/15
204/204 - 91s - auc: 0.8208 - loss: 0.1688 - val_auc: 0.8308 - val_loss: 0.1677
Epoch 6/15
204/204 - 95s - auc: 0.8421 - loss: 0.1663 - val_auc: 0.8871 - val_loss: 0.1643
Epoch 7/15
204/204 - 90s - auc: 0.8442 - loss: 0.1657 - val_auc: 0.8670 - val_loss: 0.1630
Epoch 8/15
204/204 - 92s - auc: 0.8796 - loss: 0.1642 - val_auc: 0.8485 - val_loss: 0.1654
Epoch 9/15
204/204 - 91s - auc: 0.8864 - loss: 0.1614 - val_auc: 0.8637 - val_loss: 0.1638
Epoch 10/15
204/204 - 93s - auc: 0.8958 - loss: 0.1607 - val_auc: 0.8841 - val_loss: 0.1636
Epoch 11/15
204/204 - 91s - auc: 0.9111 - loss: 0.1590 - val_auc: 0.8787 - val_loss: 0.1616
Epoch 00011: early stopping
Last model evaluation...
Last model inference (TTA 15 steps)...
Best model evaluation...
Best model inference (TTA 15 steps)...
###Markdown
Model loss graph
###Code
for n_fold in range(config['N_USED_FOLDS']):
print(f'Fold: {n_fold + 1}')
plot_metrics(history_list[n_fold])
###Output
Fold: 1
###Markdown
Model loss graph aggregated
###Code
plot_metrics_agg(history_list, config['N_USED_FOLDS'])
###Output
_____no_output_____
###Markdown
Model evaluation (best)
###Code
display(evaluate_model(k_fold_best, config['N_USED_FOLDS']).style.applymap(color_map))
display(evaluate_model_Subset(k_fold_best, config['N_USED_FOLDS']).style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Model evaluation (last)
###Code
display(evaluate_model(k_fold, config['N_USED_FOLDS']).style.applymap(color_map))
display(evaluate_model_Subset(k_fold, config['N_USED_FOLDS']).style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Confusion matrix
###Code
for n_fold in range(config['N_USED_FOLDS']):
n_fold += 1
pred_col = f'pred_fold_{n_fold}'
train_set = k_fold_best[k_fold_best[f'fold_{n_fold}'] == 'train']
valid_set = k_fold_best[k_fold_best[f'fold_{n_fold}'] == 'validation']
print(f'Fold: {n_fold}')
plot_confusion_matrix(train_set['target'], np.round(train_set[pred_col]),
valid_set['target'], np.round(valid_set[pred_col]))
###Output
Fold: 1
###Markdown
Visualize predictions
###Code
k_fold['pred'] = 0
for n_fold in range(config['N_USED_FOLDS']):
k_fold['pred'] += k_fold[f'pred_fold_{n_fold+1}'] / config['N_FOLDS']
print('Label/prediction distribution')
print(f"Train positive labels: {len(k_fold[k_fold['target'] > .5])}")
print(f"Train positive predictions: {len(k_fold[k_fold['pred'] > .5])}")
print(f"Train positive correct predictions: {len(k_fold[(k_fold['target'] > .5) & (k_fold['pred'] > .5)])}")
print('Top 10 samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].head(10))
print('Top 10 positive samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].query('target == 1').head(10))
print('Top 10 predicted positive samples')
display(k_fold[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'diagnosis',
'target', 'pred'] + [c for c in k_fold.columns if (c.startswith('pred_fold'))]].query('pred > .5').head(10))
###Output
Label/prediction distribution
Train positive labels: 581
Train positive predictions: 0
Train positive correct predictions: 0
Top 10 samples
###Markdown
Visualize test predictions
###Code
print(f"Test predictions {len(test[test['target'] > .5])}|{len(test[test['target'] <= .5])}")
print(f"Test predictions (last) {len(test[test['target_last'] > .5])}|{len(test[test['target_last'] <= .5])}")
print('Top 10 samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target', 'target_last'] +
[c for c in test.columns if (c.startswith('pred_fold'))]].head(10))
print('Top 10 positive samples')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target', 'target_last'] +
[c for c in test.columns if (c.startswith('pred_fold'))]].query('target > .5').head(10))
print('Top 10 positive samples (last)')
display(test[['image_name', 'sex', 'age_approx','anatom_site_general_challenge', 'target', 'target_last'] +
[c for c in test.columns if (c.startswith('pred_fold'))]].query('target_last > .5').head(10))
###Output
Test predictions 0|10982
Test predictions (last) 60|10922
Top 10 samples
###Markdown
Test set predictions
###Code
submission = pd.read_csv(database_base_path + 'sample_submission.csv')
submission['target'] = test['target']
submission['target_last'] = test['target_last']
submission['target_blend'] = (test['target'] * .5) + (test['target_last'] * .5)
display(submission.head(10))
display(submission.describe())
### BEST ###
submission[['image_name', 'target']].to_csv('submission.csv', index=False)
### LAST ###
submission_last = submission[['image_name', 'target_last']]
submission_last.columns = ['image_name', 'target']
submission_last.to_csv('submission_last.csv', index=False)
### BLEND ###
submission_blend = submission[['image_name', 'target_blend']]
submission_blend.columns = ['image_name', 'target']
submission_blend.to_csv('submission_blend.csv', index=False)
###Output
_____no_output_____ |
C2 Statistics and Model Creation/SOLUTIONS/SOLUTION_Tech_Fun_C2_S3_Inferential_Statistics.ipynb | ###Markdown
Technology Fundamentals Course 2, Session 3: Inferential Statistics

**Instructor**: Wesley Beckner

**Contact**: [email protected]

**Teaching Assistants**: Varsha Bang, Harsha Vardhan

**Contact**: [email protected], [email protected]

In this session we will look at the utility of EDA combined with inferential statistics.

---

6.0 Preparing Environment and Importing Data

[back to top](top)

6.0.1 Import Packages

[back to top](top)
###Code
# The modules we've seen before
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import plotly.express as px
import seaborn as sns
# our stats modules
import random
import scipy.stats as stats
import statsmodels.api as sm
from statsmodels.formula.api import ols
import scipy
###Output
/usr/local/lib/python3.7/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning:
pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
###Markdown
6.0.2 Load Dataset

[back to top](top)

For this session, we will use the truffle margin customer dataset loaded below.
###Code
df = pd.read_csv('https://raw.githubusercontent.com/wesleybeckner/'\
'ds_for_engineers/main/data/truffle_margin/truffle_margin_customer.csv')
df
descriptors = df.columns[:-2]
for col in descriptors:
print(col)
print(df[col].unique())
print()
###Output
Base Cake
['Butter' 'Cheese' 'Chiffon' 'Pound' 'Sponge' 'Tiramisu']
Truffle Type
['Candy Outer' 'Chocolate Outer' 'Jelly Filled']
Primary Flavor
['Butter Pecan' 'Ginger Lime' 'Margarita' 'Pear' 'Pink Lemonade'
'Raspberry Ginger Ale' 'Sassafras' 'Spice' 'Wild Cherry Cream'
'Cream Soda' 'Horchata' 'Kettle Corn' 'Lemon Bar' 'Orange Pineapple\tP'
'Plum' 'Orange' 'Butter Toffee' 'Lemon' 'Acai Berry' 'Apricot'
'Birch Beer' 'Cherry Cream Spice' 'Creme de Menthe' 'Fruit Punch'
'Ginger Ale' 'Grand Mariner' 'Orange Brandy' 'Pecan' 'Toasted Coconut'
'Watermelon' 'Wintergreen' 'Vanilla' 'Bavarian Cream' 'Black Licorice'
'Caramel Cream' 'Cheesecake' 'Cherry Cola' 'Coffee' 'Irish Cream'
'Lemon Custard' 'Mango' 'Sour' 'Amaretto' 'Blueberry' 'Butter Milk'
'Chocolate Mint' 'Coconut' 'Dill Pickle' 'Gingersnap' 'Chocolate'
'Doughnut']
Secondary Flavor
['Toffee' 'Banana' 'Rum' 'Tutti Frutti' 'Vanilla' 'Mixed Berry'
'Whipped Cream' 'Apricot' 'Passion Fruit' 'Peppermint' 'Dill Pickle'
'Black Cherry' 'Wild Cherry Cream' 'Papaya' 'Mango' 'Cucumber' 'Egg Nog'
'Pear' 'Rock and Rye' 'Tangerine' 'Apple' 'Black Currant' 'Kiwi' 'Lemon'
'Hazelnut' 'Butter Rum' 'Fuzzy Navel' 'Mojito' 'Ginger Beer']
Color Group
['Taupe' 'Amethyst' 'Burgundy' 'White' 'Black' 'Opal' 'Citrine' 'Rose'
'Slate' 'Teal' 'Tiffany' 'Olive']
Customer
['Slugworth' 'Perk-a-Cola' 'Fickelgruber' 'Zebrabar' "Dandy's Candies"]
Date
['1/2020' '2/2020' '3/2020' '4/2020' '5/2020' '6/2020' '7/2020' '8/2020'
'9/2020' '10/2020' '11/2020' '12/2020']
###Markdown
6.1 Many Flavors of Statistical Tests

https://luminousmen.com/post/descriptive-and-inferential-statistics

> Descriptive statistics describes data (for example, a chart or graph) and inferential statistics allows you to make predictions (“inferences”) from that data. With inferential statistics, you take data from samples and make generalizations about a population - [statshowto](https://www.statisticshowto.com/probability-and-statistics/statistics-definitions/inferential-statistics/:~:text=Descriptive%20statistics%20describes%20data%20(for,make%20generalizations%20about%20a%20population.)

* **Mood's Median Test**
* [Kruskal-Wallis Test](https://sixsigmastudyguide.com/kruskal-wallis-non-parametric-hypothesis-test/) (another comparison-of-medians test)
* T-Test
* Analysis of Variance (ANOVA)
  * One Way ANOVA
  * Two Way ANOVA
  * MANOVA
  * Factorial ANOVA

When do I use each of these? We will talk about this as we proceed through the examples. [This page](https://support.minitab.com/en-us/minitab/20/help-and-how-to/statistics/nonparametrics/supporting-topics/which-test-should-i-use/) from minitab has good rules of thumb on the subject.

6.1.1 What is Mood's Median?

> You can use Chi-Square to test for a goodness of fit (whether a sample of data represents a distribution) or whether two variables are related (using a contingency table, which we will create below!)

**A special case of Pearson's Chi-Squared Test:** We create a table that counts the observations above and below the global median for two different groups. We then perform a *chi-squared test of significance* on this *contingency table*.

Null hypothesis: the medians are all equal.

The chi-square test statistic:

$\chi^2 = \sum{\frac{(O-E)^2}{E}}$

where $O$ is the observed frequency and $E$ is the expected frequency (for each cell of the contingency table, the row total times the column total divided by the grand total).

**Let's take an example**, say we have two shifts with the following production rates:
###Code
np.random.seed(42)
shift_one = [round(i) for i in np.random.normal(16, 3, 10)]
shift_two = [round(i) for i in np.random.normal(24, 3, 10)]
print(shift_one)
print(shift_two)
stat, p, m, table = scipy.stats.median_test(shift_one, shift_two, correction=False)
###Output
_____no_output_____
###Markdown
What is `median_test` returning?
###Code
print("The perasons chi-square test statistic: {:.2f}".format(stat))
print("p-value of the test: {:.3f}".format(p))
print("the grand median: {}".format(m))
###Output
The Pearson chi-square test statistic: 7.20
p-value of the test: 0.007
the grand median: 19.5
###Markdown
Let's evaluate that test statistic ourselves by taking a look at the contingency table:
###Code
table
###Output
_____no_output_____
###Markdown
This is easier to make sense of if we order the shift times
###Code
shift_one.sort()
shift_one
###Output
_____no_output_____
###Markdown
When we look at shift one, we see that 8 values are at or below the grand median.
###Code
shift_two.sort()
shift_two
###Output
_____no_output_____
###Markdown
For shift two, only two are at or below the grand median. Since the sample sizes are the same, the expected value for both groups is the same: 5 above and 5 below the grand median. The chi-square is then:

$\chi^2 = \frac{(2-5)^2}{5} + \frac{(8-5)^2}{5} + \frac{(8-5)^2}{5} + \frac{(2-5)^2}{5}$
###Code
(2-5)**2/5 + (8-5)**2/5 + (8-5)**2/5 + (2-5)**2/5
###Output
_____no_output_____
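###Markdown
As a quick cross-check (a sketch using the 2x2 `table` returned by `median_test` above), the same statistic and p-value can be recovered with a chi-square test of independence on the contingency table:
###Code
# Pearson chi-square test of independence on the contingency table;
# correction=False matches the setting we passed to median_test
chi2, p_val, dof, expected = scipy.stats.chi2_contingency(table, correction=False)
print("chi-square: {:.2f}, p-value: {:.3f}".format(chi2, p_val))
print("expected frequencies:\n", expected)
###Output
_____no_output_____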
###Markdown
Our p-value, the probability of observing a result at least this extreme if the null hypothesis were true, is under 0.05. We can conclude that these shift performances were drawn from separate distributions. For comparison, let's do this analysis again with shifts of equal performance.
###Code
np.random.seed(3)
shift_three = [round(i) for i in np.random.normal(16, 3, 10)]
shift_four = [round(i) for i in np.random.normal(16, 3, 10)]
stat, p, m, table = scipy.stats.median_test(shift_three, shift_four,
correction=False)
print("The pearsons chi-square test statistic: {:.2f}".format(stat))
print("p-value of the test: {:.3f}".format(p))
print("the grand median: {}".format(m))
###Output
The Pearson chi-square test statistic: 0.00
p-value of the test: 1.000
the grand median: 15.5
###Markdown
and the raw shift values:
###Code
shift_three.sort()
shift_four.sort()
print(shift_three)
print(shift_four)
table
###Output
_____no_output_____
###Markdown
6.1.2 When to Use Mood's?

**Mood's Median Test is highly flexible** but has the following assumptions:

* Considers only one categorical factor
* Response variable is continuous (our shift rates)
* Data does not need to be normally distributed
  * But the distributions are similarly shaped
* Sample sizes can be unequal and small (less than 20 observations)

Other considerations:

* Not as powerful as the Kruskal-Wallis Test, but still useful for small sample sizes or when there are outliers

6.1.2.1 Exercise: Use Mood's Median Test

**Part A** Perform Mood's median test on Base Cake in the truffle data

We're also going to get some practice with pandas groupby.
###Code
df[['Base Cake', 'EBITDA/KG']].head()
# what is returned by this groupby?
gp = df.groupby('Base Cake')
###Output
_____no_output_____
###Markdown
How do we find out? We could iterate through it:
###Code
# seems to be a tuple of some sort
for i in gp:
print(i)
break
# the first object appears to be the group
print(i[0])
# the second object appears to be the df belonging to that group
print(i[1])
###Output
Butter
Base Cake Truffle Type ... KG EBITDA/KG
0 Butter Candy Outer ... 53770.342593 0.500424
1 Butter Candy Outer ... 466477.578125 0.220395
2 Butter Candy Outer ... 80801.728070 0.171014
3 Butter Candy Outer ... 18046.111111 0.233025
4 Butter Candy Outer ... 19147.454268 0.480689
... ... ... ... ... ...
1562 Butter Chocolate Outer ... 9772.200521 0.158279
1563 Butter Chocolate Outer ... 10861.245675 -0.159275
1564 Butter Chocolate Outer ... 3578.592163 0.431328
1565 Butter Jelly Filled ... 21438.187500 0.105097
1566 Butter Jelly Filled ... 15617.489115 0.185070
[456 rows x 9 columns]
###Markdown
Going back to our diagram from our earlier pandas session: it looks like whenever we split in the groupby method, we create separate dataframes as well as their group label.

Ok, so we know `gp` contains separate dataframes. How do we turn them into arrays to then pass to `median_test`?
###Code
# complete this for loop
for i, j in gp:
    # i is the group label; j is that group's dataframe
    # grab the EBITDA/KG column, turn it into an array with .values,
    # and print it to the screen
    print(i, j['EBITDA/KG'].values)
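# with these per-group arrays in hand, the exercise solution can unpack them into
# scipy.stats.median_test, e.g. (sketch):
# stat, p, m, table = scipy.stats.median_test(*[j['EBITDA/KG'].values for i, j in gp])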
###Output
Butter [ 5.00423594e-01 2.20395451e-01 1.71013869e-01 2.33024872e-01
4.80689371e-01 1.64934546e-01 2.03213256e-01 1.78681400e-01
1.25050726e-01 2.17021951e-01 7.95955185e-02 3.25042287e-01
2.17551215e-01 2.48152299e-01 -1.20503094e-02 1.47190567e-01
3.84488948e-01 2.05438764e-01 1.32190256e-01 3.23019144e-01
-9.73361477e-03 1.98397692e-01 1.67067902e-01 -2.60063690e-02
1.30365325e-01 2.36337749e-01 -9.70556780e-02 1.59051819e-01
-8.76572259e-02 -3.32199843e-02 -5.05704451e-02 -5.56458806e-02
-8.86273564e-02 4.32267857e-02 -1.88615579e-01 4.24939227e-01
9.35136847e-02 -3.43605950e-02 1.63823520e-01 2.78522916e-01
1.29207730e-01 1.79194495e-01 1.37419569e-01 1.31372653e-01
2.53275225e-01 2.26761431e-01 1.10173466e-01 1.99338787e-01
-2.01250197e-01 1.16567591e-01 1.32324984e-01 4.02912418e-01
9.35051765e-02 1.65865814e-01 2.12269112e-01 2.53461571e-01
1.89055713e-01 1.20416365e-01 3.95276612e-02 2.93121770e-01
1.40947082e-01 -1.21555832e-01 1.56455622e-01 -1.29776953e-02
-6.17934014e-02 -8.19904808e-02 -3.14711557e-02 -8.03820228e-02
1.63839981e-01 8.34406336e-02 1.49369698e-01 1.05990633e-01
1.27399979e-01 2.26634255e-01 -2.20801929e-03 -6.92044284e-02
1.74048414e-01 1.30933438e-01 1.27620323e-01 2.78652749e-01
2.14772018e-01 1.40864278e-01 1.23745138e-01 1.66586809e-01
2.91940995e-01 2.49925584e-01 8.65447719e-02 3.80907774e-01
2.70851719e-01 3.32946265e-01 9.00795862e-03 2.00960974e-01
2.72623570e-01 3.35902190e-01 1.27337723e-01 2.36618545e-01
-6.82774785e-02 3.13166906e-01 2.15752651e-01 9.29694447e-02
3.60809152e-02 2.32488112e-01 3.38200308e-02 1.70916188e-01
2.81620452e-01 -1.61981289e-01 -4.14570666e-02 1.13465970e-02
2.28733252e-01 9.87516565e-02 3.52732668e-02 6.32598661e-02
2.10300526e-01 1.98761726e-01 1.38832882e-01 2.95465366e-01
2.68022024e-01 3.22389724e-01 4.04867623e-01 2.38086167e-01
1.12586985e-01 1.94010438e-01 1.96757297e-01 1.65215620e-01
1.22730941e-02 1.14415249e-01 3.26252563e-01 1.89080695e-01
-5.11830382e-02 2.41661008e-01 2.00063672e-01 3.07633312e-01
4.20740234e-01 1.34764192e-01 -4.75993730e-02 1.52973888e-02
1.87709908e-01 7.20193743e-02 3.48745346e-02 2.77659158e-01
2.73466257e-01 1.32419725e-01 2.85933859e-02 3.99622870e-02
-7.46829380e-02 9.03915641e-02 -9.61708181e-02 7.16896946e-02
1.08714611e-01 1.18536709e-01 8.52229628e-02 4.13523715e-01
7.71194281e-01 1.73738798e-01 3.05406909e-01 1.53831064e-01
2.06911408e-01 1.13075512e-01 1.29416734e-01 1.60275533e-01
2.29962628e-01 2.50895646e-01 1.73060658e-01 2.01020670e-01
3.16227457e-01 1.57652647e-01 5.47188384e-02 2.61436808e-01
1.46570523e-01 1.58977569e-01 2.11215119e-01 1.40679855e-01
-8.00696326e-02 1.59842103e-01 2.00211820e-01 9.92221921e-02
-1.91516176e-02 -5.02510162e-02 -9.15402427e-02 4.28019215e-02
1.06537078e-01 -3.24195486e-01 1.79861627e-02 -1.29900711e-01
-1.18627679e-01 -1.26903307e-01 -1.12941251e-01 2.81344485e-01
-5.75519167e-02 1.62155727e-02 2.14084866e-01 2.05315240e-01
1.27598359e-01 1.89025252e-01 3.96820478e-01 1.20290515e-01
3.32130996e-01 1.37858897e-01 9.78393589e-02 3.51731323e-01
1.10782088e-01 2.27390210e-01 3.89559348e-01 1.74184808e-01
3.08568571e-01 1.71747215e-01 2.33275587e-01 2.56728635e-01
3.02423314e-01 2.74374851e-01 3.27629705e-02 5.61005655e-02
1.68330538e-01 1.12578506e-01 1.08314409e-02 1.33944964e-01
-2.12285231e-01 -1.21224032e-01 1.07819533e-01 3.17613330e-02
2.84300351e-01 -1.58586907e-01 1.36753020e-01 1.26197635e-01
7.40448636e-02 2.35065994e-01 -6.15319415e-02 -7.51966701e-02
4.13427726e-01 1.60539980e-01 1.09901498e-01 1.74329568e-01
1.48135527e-01 1.85728609e-01 2.85476612e-01 2.24898461e-01
1.33343564e-01 1.80618963e-01 2.03080820e-02 2.16728570e-01
1.86566493e-01 1.25929822e-01 1.79317565e-01 3.88162321e-01
2.03009067e-01 2.64872648e-01 4.95978731e-01 1.52347749e-01
-7.23596372e-02 1.29552280e-01 6.16496157e-02 1.05956924e-01
-2.71699836e-01 -5.64473565e-03 -2.50275527e-02 1.29269950e-01
-1.87247727e-01 -3.49347255e-01 -1.93280406e-01 7.87217542e-02
2.21951811e-01 7.10999656e-02 3.49382049e-02 1.48398799e-01
5.65517753e-02 1.05690961e-01 2.55476023e-01 1.28401889e-01
1.33289903e-01 1.14201836e-01 1.43169893e-01 5.69591438e-01
1.54755202e-01 1.55028578e-01 1.64827975e-01 4.67083700e-01
3.31029661e-02 1.62382617e-01 1.54156022e-01 6.55873722e-01
-5.31208735e-02 2.37122763e-01 2.71368392e-01 4.69144223e-01
1.62923984e-01 1.22718216e-01 1.68055251e-01 1.35999904e-01
2.04736813e-01 1.27146904e-01 -1.12549423e-01 3.24840692e-03
7.10375441e-02 7.90146006e-03 5.79775663e-02 -1.57867224e-01
1.33194074e-01 1.11364361e-01 1.95665062e-01 5.57144416e-02
-6.22623725e-02 2.59366443e-01 1.96512306e-02 -2.47699823e-02
3.37429602e-01 1.84628626e-01 2.42417229e-01 1.88852778e-01
2.10930109e-01 2.10416004e-01 2.81527817e-01 5.45666352e-01
1.85856370e-01 4.88939364e-01 1.29308220e-01 1.30534366e-01
4.31600221e-01 1.42478827e-01 1.11633119e-01 1.45026679e-01
2.79724659e-01 3.33422150e-01 4.92846588e-01 1.88026032e-01
4.35734950e-01 1.29765005e-01 1.36498013e-01 1.27056277e-01
2.39063615e-01 -1.49002763e-01 2.00230923e-02 1.23378339e-01
6.12350194e-02 -1.57952580e-01 5.93742728e-02 -6.88460761e-03
7.48854198e-02 6.45607765e-02 8.47908994e-03 2.15403273e-01
6.38359483e-02 -6.30232436e-04 4.09513551e-01 3.59478228e-01
1.15102395e-01 1.56907967e-01 1.60361237e-01 3.16259692e-01
4.37763243e-01 1.82457530e-01 3.12791208e-01 1.59771151e-01
-6.63636501e-02 3.37363422e-01 2.58858115e-01 1.81217734e-01
3.73234115e-02 1.44936318e-01 3.16879135e-01 4.73967251e-01
2.43696316e-01 2.73749525e-01 2.46270449e-02 2.27465471e-01
1.71915626e-01 6.96528119e-02 1.51926333e-01 1.91790172e-01
-1.70457889e-01 1.94258861e-02 1.05929285e-01 2.46869777e-01
-6.42981449e-03 1.22480623e-01 1.27650832e-01 1.23734951e-01
2.01582021e-01 7.66321281e-02 1.25943788e-01 -5.22321249e-02
2.95908687e-01 3.44925520e-01 1.07812252e-01 1.15365733e-01
2.13185926e-01 1.29626595e-01 4.15526961e-01 1.23294607e-01
1.45059294e-01 1.81411556e-01 1.06561684e-01 1.20626826e-01
2.19538968e-01 3.16034720e-01 9.72365601e-02 1.83261409e-01
1.47228661e-01 1.57946602e-01 3.83712037e-01 1.36031656e-01
3.75214905e-02 1.97768668e-02 3.06073435e-02 -1.01445936e-01
1.41457346e-01 4.89799924e-02 1.35908206e-01 2.95765484e-02
1.34596792e-01 -2.45031560e-01 9.09800159e-02 -2.80465423e-02
4.60956009e-03 4.76391647e-02 9.71343281e-02 6.71838252e-02
-1.45994631e-02 -5.39188915e-02 2.79919933e-01 2.31919186e-01
1.12801182e-01 1.13704532e-01 4.26356671e-01 1.90428244e-01
1.10496872e-01 3.31699294e-01 1.36443699e-01 1.97119264e-01
-5.57694684e-03 1.11270325e-01 4.61516648e-01 2.68630982e-01
1.00774945e-01 1.41438672e-01 3.97197924e-01 1.92009640e-01
1.34873803e-01 2.20134800e-01 1.11572142e-01 2.04669213e-02
2.21970350e-01 -1.13088611e-01 2.39645009e-01 2.70424952e-01
2.65250470e-01 7.79145265e-02 4.09394578e-03 -2.78502700e-01
-1.88647588e-02 -8.11508107e-02 2.05797599e-01 1.58278762e-01
-1.59274599e-01 4.31328198e-01 1.05097241e-01 1.85069899e-01]
Cheese [0.31024611 0.71174685 0.50751806 0.46142939 0.44756216 0.34030411
0.10382401 0.33386367 0.46296368 0.64676254 0.51310951 0.41967325
0.29605687 0.41850126 0.36864453 0.5582262 0.42049747 0.39449372
0.41897671 0.30695448 0.33910074 0.31757224 0.39323566 0.4420211
0.46471088 0.70530657 0.28512715 0.58736445 0.36488057 0.82106239
0.50970246 0.90376594 0.38217046 0.45333367 0.45167657 0.35115948
0.55637042 0.52114454 0.45354709 0.79224572 0.32544919 0.06849032
0.30827338 0.39214898 0.45700653 0.39646832 0.36237725 0.47371197
0.39708762 0.31607959 0.4316696 0.78152586 0.60156046 0.79837066
0.05176626 0.40828883 0.69434929 0.39885525 0.45567959 0.44382732
0.55357404 0.33127973 0.29538841 0.50827145 0.76002093 0.8387116
0.46159887 0.4182431 0.21700543 0.15736328 0.31618931 0.47388105
0.44323638 0.4053324 0.63362771 0.42012583 0.49582874 0.49920671
0.41526695 0.45875867 0.43960576 0.80129282 0.27568184 0.15913551]
Chiffon [ 1.21858343e-01 -2.31276552e-02 9.18464443e-02 2.50289033e-01
-5.91501237e-02 1.25642083e-01 1.46252162e-01 2.72813694e-01
1.38008152e-01 1.61345943e-01 1.13661894e-01 4.13249336e-01
1.39500741e-01 1.20583334e-01 1.65888636e-01 1.46040047e-01
-1.14107528e-01 2.14782308e-01 2.67154086e-02 -2.34184446e-01
4.15517880e-02 9.74031938e-02 1.60625832e-01 3.28729869e-01
-5.51110582e-02 -5.14479289e-02 1.29843330e-01 8.80520764e-02
-4.79386189e-02 3.63798458e-01 1.52387552e-01 1.35037619e-01
3.34227431e-01 2.33799637e-01 7.56293647e-02 1.96522640e-01
1.65482936e-01 3.51614087e-01 3.37735808e-01 3.33538427e-01
-4.49725725e-02 -4.25619026e-02 1.83394229e-01 -1.75028044e-03
1.20080698e-01 1.04391810e-01 5.40776461e-02 1.12891535e-01
-1.52575715e-01 5.84966270e-02 1.77161930e-01 1.77017297e-01
8.59815568e-02 6.03403396e-02 -1.53695767e-02 1.76768673e-01
2.95748619e-01 2.39682232e-01 1.22997307e-01 6.50780252e-02
2.00487390e-01 1.74528563e-01 3.21886617e-02 2.13705279e-01
6.86376721e-02 8.33547341e-02 2.74796796e-01 -5.07158091e-02
-4.84429928e-04 8.62049281e-02 7.02475345e-02 6.36241200e-02
7.94096568e-02 -1.16555238e-01 1.19969317e-01 7.19759215e-02
1.75612439e-01 3.41857571e-01 9.17704162e-02 2.65574883e-01
1.08739401e-01 1.24534261e-01 1.80974414e-01 1.10842025e-01
2.49251781e-01 6.13871762e-01 8.22906745e-02 1.08623612e-01
4.94895083e-02 -3.36986360e-02 1.76124342e-01 1.05787397e-01
-6.46956842e-02 -1.31216346e-01 -1.23043971e-01 5.94911621e-02
3.04844201e-02 4.04344988e-02 2.39073372e-01 1.06499044e-01
-8.23206990e-02 1.30534591e-01 2.14943183e-01 3.90611238e-01
3.04834209e-01 2.19561702e-01 2.74978888e-01 2.96165576e-01
2.68207392e-01 4.29832512e-02 6.38934896e-02 3.52278721e-01
2.68179206e-01 2.32450968e-01 2.10270759e-01 5.32072024e-02
4.22360609e-02 1.43364872e-01 -6.03838982e-02 2.63235851e-01
2.11645689e-01 1.59601094e-01 6.07989371e-02 1.10329657e-01
6.45146959e-02 1.69995343e-01 7.26000108e-02 1.59994162e-01
2.57372980e-01 2.02785304e-01 1.47469657e-01 1.02091933e-01
1.59127758e-01 5.04847567e-02 8.79725623e-02 8.83044372e-02
3.42132099e-02 5.96479065e-02 -2.44781745e-02 1.66214781e-01
-6.40488465e-02 -2.97878433e-02 -1.85761630e-01 1.74600131e-01
1.13405471e-01 2.25835043e-01 1.98365573e-01 1.21426113e-01
7.90042565e-03 1.41035546e-01 8.88806936e-02 1.48734867e-01
3.31603369e-01 1.73695423e-01 1.92571237e-01 1.65574916e-01
1.10631537e-01 6.40089705e-02 1.52677792e-01 2.02591244e-01
8.76656892e-02 1.80554583e-01 -1.81296026e-01 1.71943820e-01
9.75547799e-02 1.72937506e-01 1.83826001e-01 1.33485741e-01
1.35154552e-01 -9.66031422e-02 1.73405645e-01 1.00492254e-01
9.52718797e-03 1.09604252e-01 -1.37954309e-01 4.60501177e-01
1.28683778e-01 1.88093396e-01 2.03093522e-01 1.92527678e-01
3.19116764e-01 1.14801976e-01 1.37311036e-01 2.19699839e-01
8.75410371e-02 4.32131208e-02 1.09240071e-01 3.16627525e-01
-9.14249180e-02 2.00289059e-01 -7.33450479e-02 1.78344909e-01
7.82707627e-02 3.44465621e-02 4.28351783e-01 2.05981435e-01
1.44395401e-01 9.40545510e-02 6.72957569e-02 1.46842980e-01
9.90234025e-02 1.97945349e-01 8.52796848e-02 2.94398027e-01
1.52089322e-01 1.46053365e-01 6.77642392e-02 2.27347429e-01
1.08704027e-01 -1.22820453e-01 1.33917127e-01 3.04743829e-01
-1.36908488e-01 2.35855878e-01 -7.29328812e-02 1.34602838e-01
-1.07531362e-01 6.77260583e-02 7.43162379e-02 2.29050288e-01
9.87466631e-03 1.83650982e-01 -4.79982170e-02 4.33551139e-01
2.26663847e-01 7.19528059e-02 2.96940036e-01 6.09360315e-01
3.21286830e-01 1.30938550e-01 8.01044974e-01 2.87534189e-01
9.82785044e-02 -2.60778114e-01 -3.28347689e-02 1.90839908e-01
-2.76583098e-02 -2.25278602e-01 6.55441616e-02 2.10229762e-01
3.23412688e-02 -7.95661741e-02 3.96588312e-01 1.85660156e-01
-9.61954556e-02 4.51382027e-01 1.59659245e-01 1.53143391e-01
1.36332049e-01 7.46792191e-02 3.51699289e-01 4.02943998e-01
2.11485426e-01 3.10978806e-01 1.25908816e-01 1.28446699e-01
2.21492620e-01 3.38132074e-01 -1.30528198e-01 -1.34926091e-01
-7.03314041e-03 1.44183726e-01 -3.15638790e-02 1.84780081e-01
9.37495403e-02 7.57805086e-02 1.28215402e-01 1.17819064e-01
1.22257780e-02 7.52901489e-02 1.54757281e-01 1.19257089e-01
2.72782431e-01 1.52796984e-01 3.04144687e-01 7.29370147e-02
2.38916270e-01 1.60206737e-01 3.40495788e-02 2.30405489e-01
-5.52342680e-02 3.62790016e-02 4.89325632e-02 1.52314497e-02
7.58613984e-02 2.32380375e-01 3.91738218e-02 1.68845662e-01]
Pound [ 0.32321607 0.38508878 0.42297863 0.34451719 0.27738627 0.24482153
0.14172564 -0.0173595 -0.01870757 0.32355307 0.28607933 0.27570462
0.29002439 0.33265454 0.20198275 0.36564746 0.47768429 0.04244718
0.26028803 0.21499723 0.41401122 0.49965754 0.3083381 0.31291979
0.10171442 0.31546809 0.20888824 0.0928985 0.15241702 0.15078056
0.53822529 0.141656 0.19875801 0.31095959 0.26115004 0.52238763
0.17481194 0.37581023 0.40559732 0.1799333 0.24178222 0.26948023
0.24040905 0.15765103 0.19455205 0.1459339 0.45075494 0.35282801
0.36375433 0.33498334 0.23111502 0.37758293 0.26414853 0.02031629
-0.08532176 0.22579867 0.15366796 -0.00912829 0.27145471 0.42720462
0.25976266 0.47013859 0.2297805 0.23338651 0.16754926 0.2412075
0.46944699 0.40067312 0.20247278 0.19655044 0.30404021 0.12816256
0.16748396 0.25186199 0.29512684 0.21036365 0.05771213 0.04225223
-0.07432231 0.20418141 0.08937287 0.38401244 0.19439379 0.15578787
0.3348507 0.33558447 0.16276681 0.26034261 0.29275909 0.25818752
0.21052719 0.21792357 0.21006737 0.27540074 0.15393335 0.1545965
0.24711151 0.31465184 0.26112959 0.33186199 0.3426386 0.27499092
0.12380227 0.01685322 0.39336257 0.42175486 0.3488756 0.29484295
0.27530805 0.3646811 0.27356843 0.02792581 0.11111569 0.19718529
0.10748888 0.00553779 0.31200243 0.1470243 0.19179442 0.12623421
0.21634481 0.20222126 0.19343959 0.26994521 -0.0495724 0.04008776
0.39031927 0.17468296 0.38091786 0.23852503 0.28186562 0.18962601
0.13859143 0.41357774 0.51768241 0.61397133 0.21448552 0.1607562
0.26383846 0.34511841 0.44946125 0.24111933 0.38131255 0.51755165
0.46657461 0.58984442 0.34826857 0.23292436 0.33127621 0.12314482
0.12743829 0.36100748 0.08223741 0.07593321 0.28849014 0.29266131
0.57693589 0.35963854 0.25674005 0.42419876 0.17426009 0.24947853
0.41144646 0.3505417 0.1453595 0.25444058 0.24083198 0.2802688
0.33430517 0.57175158 0.17699496 0.24857315 0.12806569 0.46726304
0.02195756 -0.03315828 -0.10698311 0.40384354 0.32085676 0.13137146
0.12127028 0.22501458 0.23994971 0.16164933 0.28723918 0.19613472
0.25424671 0.53352985 0.15617013 0.13363199 0.17379802 0.23837871
0.32504894 0.75742513 0.37674573 0.36790457 0.30206674 0.22176289
0.15113828 0.44639703 0.2246356 0.24925448 0.25350291 0.31175686
0.33724911 0.35643596 0.12550246 -0.18240312 0.35181627 0.41884665
-0.01710304 0.22532558 0.17841254 0.49844941 0.27227185 0.29980404
0.15476942 0.22406937 0.29049974 0.10426626 0.16885104 0.00273775
0.38720682 0.29167847 -0.11466641 0.16710221 0.13239378 0.18218465
0.1619281 0.36896349 0.22416167 0.14321535 0.20342944 0.19816102
0.16923039 0.23980651 0.14950695 0.18763079 0.2314456 0.18065398
0.72513341 0.4381927 0.3063706 0.30198679 0.68424321 0.19179253
-0.05441273 0.20607683 0.04697187 0.42209185 0.89688176 0.26588886
0.41342926 0.31032533 0.35323879 -0.16248052 0.20888861 0.32034052
0.03220128 0.30544068 0.29683875 0.1644044 0.2918579 0.19255763
0.32082358 0.30836791 0.20868915 0.28632935 0.16421954 0.11807203
0.25984075 -0.08741326 0.0195581 0.13962133 0.8191702 0.39712282
0.37241833 0.22326601 0.27309132 0.30478413 0.49010052 0.95564857
0.25887253 0.31403387 0.30973205 0.39785483 0.14237171 0.14648819
0.46166122 0.26203058 0.3038787 0.30641543 0.27497289 0.10168111
0.10909005 0.24881722 0.56269999 0.2808664 0.21827234 0.39231092
0.25439673 0.44043259 0.05585949 0.06772251 0.48326974 0.18332497
0.00709466 0.09138764 0.22765224 0.18391179 0.3151882 0.20135587
0.1970134 0.1795083 0.24481941 0.11279568 0.12359239 0.12035994
0.23725643 0.20626565 -0.13238069 0.14592134 0.22745041 0.32902301
0.20790333 0.22956102 0.55181976 0.33856839 0.16175831 0.22851344
0.23527664 0.18968501 0.25135573 0.16758549 0.32436807 0.23463334
0.52275751 0.30083867 0.31533008 0.3318652 0.30354029 0.29656873
0.25330369 0.08267882 0.31857957 0.29218788 0.27019334 0.20973553
0.30581798 0.3037418 0.07019526 0.21748189 0.22824375 0.27342359
0.09743493 0.21568052 0.14214957 0.15158143 0.190655 0.19012006
0.09454959 0.52599239 0.17690979 0.20588511 0.29518866 0.11206204
0.05541632 -0.1306972 0.06977679 0.270653 0.2803273 0.25227206
0.34930219 0.17338134 0.34734598 0.17803046 0.55178673 0.4727106
0.179678 0.18500216 0.52021697 0.4489349 0.39781507 0.14562059
0.43307102 0.31240784 0.27413817 0.2190362 0.38735826 0.20171435
-0.07810885 0.24205143 0.24026078 0.18752377 0.46486901 0.23386662
0.42296434 0.32916347 0.24195824 0.20586755 0.05482604 0.06491987
-0.09533184 0.12370986 0.12462263 0.32617149 0.3113235 0.13012888
0.30347691 0.52019857 0.20143771 0.05920849 0.04591077 -0.10847787
0.20921121 0.2027113 0.08656521 0.27785597 0.93151874 0.2603655
0.38619877 0.16019194 0.37561695 0.18520309 0.27655601 0.37138838
0.21842314 0.403135 0.24465811 0.23882034 0.14160437 0.14237542
0.34945666 0.4215184 0.24579291 0.37785612 0.35377472 0.10804025
0.35275633 0.28602091 0.38664527 0.24164303 0.46958793 0.24420113
0.47097589 0.51666776 -0.032854 0.09367494 0.23370998 0.11874117
0.25973467 0.06197103 0.26714186 0.21786156 0.27347199 0.2621909
0.18586173 0.41871719 0.19310781 0.04965107 0.03563542 -0.31577181
0.23618772 0.14820913 -0.14772189 0.12055125 0.19762764 0.5608269
0.41520575 0.25977854 0.23727075 0.17736649 0.38157878 0.16800624
0.29267396 0.1544561 0.30116502 0.14817098 0.30230718 0.12513873
0.26513452 0.24217922 0.43441077 0.4051062 0.30980097 0.09374276
-0.12477535 0.46341042 0.48595824 0.29977695 0.2157381 0.22888101
0.43379298 0.33577663 0.24533144 0.33091922 0.3585703 0.11791354
0.395184 0.3508324 0.36330316 0.19847734 0.27272467 0.1823314
0.28103009 0.18886829 0.47335529 0.42204217 0.07795239 0.1644861
0.166938 0.08497851 0.14992156 0.13242363 0.13302711 0.13621586
0.16369434 0.34999184 0.37153275 0.19894069 0.20734444 0.43345348
0.22785146 0.18064148 0.15266387 0.71659587 0.45050932 0.12070704
0.31243788 0.58460164 0.35063063 0.29290808 0.33224335 -0.08017125
0.1213992 0.25940566 0.32480667 0.45404359 0.45056863 0.24049159
0.27553437 0.28569063 0.19665626 0.18861634 0.30719515 0.02043033
0.10484632 0.2026839 0.18108911 0.3072568 0.15559446 0.40917622
0.2910387 0.21252036 0.18745794 0.08094336 0.22369671 -0.20667537
-0.07775362 0.23309588 -0.02396401 0.2426802 0.24550625 0.26726533
0.61864891 0.20158177 0.24459158 0.49158842 0.24269758 0.26872075
0.2632711 0.56159067 0.23156456 0.13964574 0.13375843 0.20490908]
Sponge [0.73580257 0.73183187 0.68004424 0.72870923 0.76041478 0.71522379
0.60990968 0.83932268 0.57932365 0.54579051 0.79465664 1.50826802
0.7242606 0.44620431 0.59595697 0.7396216 0.76041558 0.59082638
0.57392993 0.59021514 0.71908453 0.72947292 0.71175378 0.67672714
0.53366892 0.72946871 0.4948195 0.61905171 0.78784887 0.66610438
0.66960562 0.78850913 0.71465054 0.6690954 0.51805207 0.72005788
0.70414236 0.6700258 0.64207815 0.8725881 0.98870797 0.65662429
0.71353141 0.70047491 0.49071003 0.75413095 0.73636842 0.76312067
0.67139827 0.80401958 0.65729027 0.69701251 0.78185858 0.88520571
0.63026251 0.79766947 0.6283201 0.58098635 0.63744721 0.51293953
0.88223351 0.66580087 0.75410702 0.93970336 0.83000237 0.68498655
0.66321435 0.64834447 0.59470426 0.63241816 0.69790741 0.73720243
0.75137394 0.59674542 0.833015 0.7144524 0.71150449 0.72541211
0.78719123 0.46557876 0.79521048 0.75380001 0.68783355 0.65243237
0.75254094 0.69527937 0.52906568 0.4393708 0.61566195 0.6027417
0.79388791 0.72645901 0.82822171 0.83614689 0.78281066 0.86053271
0.61157488 0.68184308 0.60023454 0.8649757 0.92128418 0.71457679
0.70654795 0.51146619 0.64679203 0.7012766 0.63793851 0.37486146
0.67076655 0.63159197 0.73020035 0.67160317 0.79609011 0.74849716
0.85231975 0.6806251 0.6950639 0.69823519 0.56649505 0.52117887]
Tiramisu [ 0.39410473 0.51952846 0.68412982 0.20415572 0.93950367 0.46495972
0.53329274 0.38295519 0.34666474 0.32200474 0.82340501 0.18547367
-0.0338556 0.08897884 -0.17901569 1.09510291 0.37278453 0.32250226
0.25886483 0.24887151 0.44646163 0.39559448 0.16000816 0.16013681
0.17757029 0.11894748 0.15518036 0.27747575 0.39782558 0.31292385
0.79088771 0.384457 0.28597306 0.37108544 0.47871742 0.36331051
0.01629586 0.15881475 0.06454623 0.34600752 0.47649894 0.31917813
0.29944674 1.11686399 0.78062804 0.38007041 0.38885918 0.36919903
0.34348412 0.37454545 0.18412813 0.24472462 0.36089282 0.51516818
0.33285772 0.34919443 0.55326467 0.45767164 0.64846502 0.20028945
0.36675685 0.24660515 0.13730941 0.29538812 0.35009826 0.37817969
0.26484654 0.72136273 0.60404694 0.54291064 0.393219 0.37865084
0.17327122 0.31126174 0.14882122 0.2059877 0.47155327 0.60802022
0.45547079 0.35291207 0.47571532 0.46927999 0.32885089 0.44623411
0.44535224 0.20323303 0.27533903 0.29764739 0.42394079 0.28397372
0.28263452 0.28012608 0.28200062 0.32995584 0.17524124 0.36522752
0.48055322 0.09818075 0.25380091 0.94935928 0.38610721 0.23115292
0.39581731 0.25153396 0.25240126 0.63060265 0.23773134 0.21161604
0.30557517 0.03604113 0.58466124 0.20293707 0.23417331 0.4189347
0.46649045 0.65240572 0.71026355 0.67737871 1.06433212 0.59239449
0.50347069 0.19563407 0.17312944 0.3529793 0.35522799 0.74959928
0.38638603 0.67915167 0.55596793 0.78047273 0.39242498 0.27114476
0.09664842 0.37718942 -0.05965329 0.72164314 0.45291755 1.29376527
0.31212929 0.42011077 0.24865926 0.37850131 0.21314941 0.43181279]
###Markdown
After you've completed the previous step, turn this into a list comprehension and pass the result to a variable called `margins`
###Code
# complete the code below
margins = [j['EBITDA/KG'].values for i,j in gp]
###Output
_____no_output_____
###Markdown
Remember the list unpacking we did for the tic tac toe project? We're going to do the same thing here. Unpack the margins list for `median_test` and run the cell below!
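If the star syntax is hazy, here is a tiny refresher sketch (toy lists, not the margin data, and not part of the original notebook) of how `*` spreads a list into separate positional arguments — which is exactly what `scipy.stats.median_test` expects:

```python
# Illustrative only: *samples spreads the outer list into individual arguments.
samples = [[1, 2, 3], [2, 3, 4], [5, 6, 7]]

def count_args(*args):
    # args is a tuple of whatever positional arguments were passed in
    return len(args)

print(count_args(samples))   # 1 -- the whole list arrives as a single argument
print(count_args(*samples))  # 3 -- each inner list becomes its own argument
```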
###Code
# complete the following line
stat, p, m, table = scipy.stats.median_test(*margins, correction=False)
print("The pearsons chi-square test statistic: {:.2f}".format(stat))
print("p-value of the test: {:.2e}".format(p))
print("the grand median: {:.2e}".format(m))
###Output
The pearsons chi-square test statistic: 448.81
p-value of the test: 8.85e-95
the grand median: 2.16e-01
###Markdown
**Part B** View the distributions of the data using matplotlib and seaborn

What a fantastic statistical result we found! Can we affirm our result with some visualizations? I hope so! Create a boxplot below using pandas. In your call to `df.boxplot()` the `by` parameter should be set to `Base Cake` and the `column` parameter should be set to `EBITDA/KG`.
###Code
# YOUR BOXPLOT HERE
df.boxplot(by='Base Cake', column='EBITDA/KG')
###Output
/usr/local/lib/python3.7/dist-packages/numpy/core/_asarray.py:83: VisibleDeprecationWarning:
Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
###Markdown
For comparison, I've shown the boxplot below using seaborn!
###Code
fig, ax = plt.subplots(figsize=(10,7))
ax = sns.boxplot(x='Base Cake', y='EBITDA/KG', data=df, color='#A0cbe8')
###Output
_____no_output_____
###Markdown
**Part C** Perform Mood's Median on all the other groups
###Code
ls = []
for i in range(10): # for loop initiation line
if i % 2 == 0:
ls.append(i**2) # actual task upon each loop
ls
ls = [i**2 for i in range(10) if i % 2 == 0]
ls
# Recall the other descriptors we have
descriptors
for desc in descriptors:
# YOUR CODE FORM MARGINS BELOW
margins = [j['EBITDA/KG'].values for i,j in df.groupby(desc)]
# UNPACK MARGINS INTO MEDIAN_TEST
stat, p, m, table = scipy.stats.median_test(*margins, correction=False)
print(desc)
print("The pearsons chi-square test statistic: {:.2f}".format(stat))
print("p-value of the test: {:e}".format(p))
print("the grand median: {}".format(m), end='\n\n')
###Output
Base Cake
The pearsons chi-square test statistic: 448.81
p-value of the test: 8.851450e-95
the grand median: 0.2160487288076019
Truffle Type
The pearsons chi-square test statistic: 22.86
p-value of the test: 1.088396e-05
the grand median: 0.2160487288076019
Primary Flavor
The pearsons chi-square test statistic: 638.99
p-value of the test: 3.918933e-103
the grand median: 0.2160487288076019
Secondary Flavor
The pearsons chi-square test statistic: 323.13
p-value of the test: 6.083210e-52
the grand median: 0.2160487288076019
Color Group
The pearsons chi-square test statistic: 175.18
p-value of the test: 1.011412e-31
the grand median: 0.2160487288076019
Customer
The pearsons chi-square test statistic: 5.66
p-value of the test: 2.257760e-01
the grand median: 0.2160487288076019
Date
The pearsons chi-square test statistic: 5.27
p-value of the test: 9.175929e-01
the grand median: 0.2160487288076019
###Markdown
**Part D** Many boxplots

And finally, we will confirm these visually. Complete the boxplot for each group:
###Code
for desc in descriptors:
fig, ax = plt.subplots(figsize=(10,5))
sns.boxplot(x=desc, y='EBITDA/KG', data=df, color='#A0cbe8', ax=ax)
###Output
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:214: RuntimeWarning:
Glyph 9 missing from current font.
/usr/local/lib/python3.7/dist-packages/matplotlib/backends/backend_agg.py:183: RuntimeWarning:
Glyph 9 missing from current font.
###Markdown
6.1.3 **Enrichment**: What is a T-test?

There are 1-sample and 2-sample T-tests _(note: we would use a 1-sample T-test just to determine if the sample mean is equal to a hypothesized population mean)_.

Within 2-sample T-tests we have **_independent_** and **_dependent_** T-tests (uncorrelated or correlated samples).

For independent, two-sample T-tests:

* **_Equal variance_** (or pooled) T-test
  * `scipy.stats.ttest_ind(equal_var=True)`
* **_Unequal variance_** T-test
  * `scipy.stats.ttest_ind(equal_var=False)`
  * also called ***Welch's T-test***

For dependent T-tests:

* Paired (or correlated) T-test
  * `scipy.stats.ttest_rel`

A full discussion on T-tests is outside the scope of this session (a quick reference sketch of these calls appears just below), but we can refer to Wikipedia for more information, including formulas on how each statistic is computed:

* [Student's T-test](https://en.wikipedia.org/wiki/Student%27s_t-test#Dependent_t-test_for_paired_samples)

6.1.4 **Enrichment**: Demonstration of T-tests

[back to top](#top)

We'll assume our shifts are of **_equal variance_** and proceed with the appropriate **_independent two-sample_** T-test...
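For reference, here is a minimal sketch (toy lists, not the shift or truffle data, and not part of the original notebook) showing how each `scipy.stats` call listed in 6.1.3 is invoked:

```python
import scipy.stats

a = [15, 15, 16, 17, 18, 18, 21]
b = [16, 17, 18, 19, 20, 22, 22]

print(scipy.stats.ttest_1samp(a, popmean=17))        # 1-sample T-test against a hypothesized mean
print(scipy.stats.ttest_ind(a, b, equal_var=True))   # independent, equal-variance (pooled) T-test
print(scipy.stats.ttest_ind(a, b, equal_var=False))  # independent, unequal-variance (Welch's) T-test
print(scipy.stats.ttest_rel(a, b))                   # dependent (paired) T-test -- needs equal-length samples
```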
###Code
print(shift_one)
print(shift_two)
###Output
[15, 15, 15, 16, 17, 18, 18, 18, 21, 21]
[15, 16, 17, 18, 18, 19, 20, 20, 22, 22]
###Markdown
To calculate the T-statistic, we follow a slightly different formula:

$T=\frac{\mu_1 - \mu_2}{s\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}$

where $\mu$ are the means of the two groups, $n$ are the sample sizes, and $s$ is the pooled standard deviation, also known as the cumulative variance (depending on whether you square it or not):

$s= \sqrt{\frac{(n_1-1)\sigma_1^2 + (n_2-1)\sigma_2^2}{n_1 + n_2 - 2}}$

where $\sigma$ are the standard deviations. Notice that we are combining the two variances; we can only do this if we assume the variances are roughly equal. This is known as the *equal variances* T-test.
###Code
mean_shift_one = np.mean(shift_one)
mean_shift_two = np.mean(shift_two)
print(mean_shift_one, mean_shift_two)
com_var = ((np.sum([(i - mean_shift_one)**2 for i in shift_one]) +
np.sum([(i - mean_shift_two)**2 for i in shift_two])) /
(len(shift_one) + len(shift_two)-2))
print(com_var)
T = (np.abs(mean_shift_one - mean_shift_two) / (
np.sqrt(com_var/len(shift_one) +
com_var/len(shift_two))))
T
###Output
_____no_output_____
###Markdown
We see that this hand-computed result matches that of the `scipy` module:
###Code
scipy.stats.ttest_ind(shift_two, shift_one, equal_var=True)
###Output
_____no_output_____
###Markdown
6.1.5 **Enrichment**: What are F-statistics and the F-test?

The F-statistic is simply a ratio of two variances, or the ratio of _mean squares_. _Mean squares_ is the estimate of population variance that accounts for the degrees of freedom used to compute that estimate. We will explore this in the context of ANOVA.

6.1.6 **Enrichment**: What is Analysis of Variance?

ANOVA uses the F-test to determine whether the variability between group means is larger than the variability within the groups. If that statistic is large enough, you can conclude that the means of the groups are not equal.

**The caveat is that ANOVA tells us whether there is a difference in means, but it does not tell us where the difference is.** To find where the difference is between the groups, we have to conduct post-hoc tests.

There are two main types:

* One-way (one factor) and
* Two-way (two factor), where a factor is an independent variable

| Ind A | Ind B | Dep |
|-------|-------|-----|
| X     | H     | 10  |
| X     | I     | 12  |
| Y     | I     | 11  |
| Y     | H     | 20  |

ANOVA Hypotheses

* _Null hypothesis_: group means are equal
* _Alternative hypothesis_: at least one group mean is different from the other groups

ANOVA Assumptions

* Residuals (experimental error) are normally distributed (test with Shapiro-Wilk)
* Homogeneity of variances (variances are equal between groups) (test with Bartlett's)
* Observations are sampled independently from each other
* _Note: ANOVA assumptions can be checked using test statistics (e.g. Shapiro-Wilk, Bartlett's, Levene's test) and visual approaches such as residual plots (e.g. QQ-plots) and histograms._

Steps for ANOVA

* Check sample sizes: an equal number of observations must be in each group
* Calculate the Sum of Squares between groups and within groups ($SS_B, SS_E$)
* Calculate the Mean Square between groups and within groups ($MS_B, MS_E$)
* Calculate the F value ($MS_B/MS_E$)

This might be easier to see in a table:

| Source of Variation | degrees of freedom (Df) | Sum of squares (SS) | Mean square (MS)   | F value     |
|---------------------|-------------------------|---------------------|--------------------|-------------|
| Between Groups      | Df_B = P-1              | SS_B                | MS_B = SS_B / Df_B | MS_B / MS_E |
| Within Groups       | Df_E = P(N-1)           | SS_E                | MS_E = SS_E / Df_E |             |
| Total               | Df_T = PN-1             | SS_T                |                    |             |

Where:

$$ SS_B = N\sum_{i}^{P}{(\bar{y}_i-\bar{y})^2} $$

$$ SS_E = \sum_{ik}^{PN}{(y_{ik}-\bar{y}_i)^2} $$

$$ SS_T = SS_B + SS_E $$

The sketch below works through these formulas on toy numbers; after that, let's go back to our shift data to take an example:
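Here is a minimal numeric sketch (toy numbers, not the shift data, and not from the original notebook) that builds $SS_B$, $SS_E$, the mean squares and the F value by hand, then checks the result against `scipy.stats.f_oneway`:

```python
import numpy as np
import scipy.stats

# Three toy groups (P = 3) with an equal number of observations each (N = 4).
groups = [np.array([15., 15., 16., 17.]),
          np.array([18., 18., 19., 20.]),
          np.array([20., 21., 22., 22.])]
P, N = len(groups), len(groups[0])
grand_mean = np.concatenate(groups).mean()

ss_b = N * sum((g.mean() - grand_mean) ** 2 for g in groups)  # between-group sum of squares
ss_e = sum(((g - g.mean()) ** 2).sum() for g in groups)       # within-group sum of squares
ms_b = ss_b / (P - 1)                                         # mean square between, Df_B = P-1
ms_e = ss_e / (P * (N - 1))                                   # mean square within, Df_E = P(N-1)

print("F by hand:", ms_b / ms_e)
print("F from scipy:", scipy.stats.f_oneway(*groups).statistic)  # should match
```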
###Code
shifts = pd.DataFrame([shift_one, shift_two, shift_three, shift_four]).T
shifts.columns = ['A', 'B', 'C', 'D']
shifts.boxplot()
###Output
_____no_output_____
###Markdown
6.1.6.0 **Enrichment**: SNS Boxplot

This is another great way to view boxplot data. Notice how sns also shows us the raw data alongside the box and whiskers using a _swarmplot_.
###Code
shift_melt = pd.melt(shifts.reset_index(), id_vars=['index'],
value_vars=['A', 'B', 'C', 'D'])
shift_melt.columns = ['index', 'shift', 'rate']
ax = sns.boxplot(x='shift', y='rate', data=shift_melt, color='#A0cbe8')
ax = sns.swarmplot(x="shift", y="rate", data=shift_melt, color='#79706e')
###Output
_____no_output_____
###Markdown
Anyway back to ANOVA...
###Code
fvalue, pvalue = stats.f_oneway(shifts['A'],
shifts['B'],
shifts['C'],
shifts['D'])
print(fvalue, pvalue)
###Output
10.736198592071137 3.4885909965240144e-05
###Markdown
We can get this in the format of the table we saw above:
###Code
# get ANOVA table
import statsmodels.api as sm
from statsmodels.formula.api import ols
# Ordinary Least Squares (OLS) model
model = ols('rate ~ C(shift)', data=shift_melt).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
anova_table
# output (ANOVA F and p value)
###Output
_____no_output_____
###Markdown
The **_Shapiro-Wilk_** test can be used to check the _normal distribution of residuals_. Null hypothesis: the data is drawn from a normal distribution.
###Code
w, pvalue = stats.shapiro(model.resid)
print(w, pvalue)
###Output
0.9800916314125061 0.6929556727409363
###Markdown
We can use **_Bartlett’s_** test to check the _Homogeneity of variances_. Null hypothesis: samples from populations have equal variances.
###Code
w, pvalue = stats.bartlett(shifts['A'],
shifts['B'],
shifts['C'],
shifts['D'])
print(w, pvalue)
###Output
1.9492677462621584 0.5830028540285896
###Markdown
6.1.6.1 ANOVA Interpretation

The _p_ value from the ANOVA analysis is significant (_p_ < 0.05), so we can conclude there is a significant difference between the shifts. But we do not know which shift(s) are different. For this we need to perform a post-hoc test. There are a multitude of these that are beyond the scope of this discussion ([Tukey-Kramer](https://www.real-statistics.com/one-way-analysis-of-variance-anova/unplanned-comparisons/tukey-kramer-test/) is one such test).

6.1.7 Putting it all together

In summary, there are many statistical tests at our disposal when performing inferential statistical analysis. In times like these, a simple decision tree can be extraordinarily useful!

source: [scribbr](https://www.scribbr.com/statistics/statistical-tests/)

6.2 Evaluate statistical significance of product margin: a snake in the garden

6.2.1 Mood's Median on product descriptors

The first issue we run into with Mood's median is... what? We can only perform Mood's median test on two groups at a time. How can we get around this?

Let's take a look at the category with the fewest descriptors. If we remember, this was the Truffle Types.
###Code
df.columns
df['Truffle Type'].unique()
col = 'Truffle Type'
moodsdf = pd.DataFrame()
for truff in df[col].unique():
# for each
group = df.loc[df[col] == truff]['EBITDA/KG']
pop = df.loc[~(df[col] == truff)]['EBITDA/KG']
stat, p, m, table = scipy.stats.median_test(group, pop)
median = np.median(group)
mean = np.mean(group)
size = len(group)
print("{}: N={}".format(truff, size))
print("Welch's T-Test for Unequal Variances")
print(scipy.stats.ttest_ind(group, pop, equal_var=False))
welchp = scipy.stats.ttest_ind(group, pop, equal_var=False).pvalue
print()
moodsdf = pd.concat([moodsdf,
pd.DataFrame([truff,
stat, p, m, mean, median, size,
welchp, table]).T])
moodsdf.columns = [col, 'pearsons_chi_square', 'p_value',
'grand_median', 'group_mean', 'group_median', 'size', 'welch p',
'table']
###Output
Candy Outer: N=288
Welch's T-Test for Unequal Variances
Ttest_indResult(statistic=-2.7615297773427527, pvalue=0.005911048922657976)
Chocolate Outer: N=1356
Welch's T-Test for Unequal Variances
Ttest_indResult(statistic=4.409449025092911, pvalue=1.1932685612874952e-05)
Jelly Filled: N=24
Welch's T-Test for Unequal Variances
Ttest_indResult(statistic=-8.4142523067935, pvalue=7.929912531660173e-09)
###Markdown
Question 1: Mood's Results on Truffle Type

> What do we notice about the resultant table?

* **_p-values_**: most are quite small (a really low probability of achieving these table results under a single distribution)
* group sizes: our Jelly Filled group is relatively small
###Code
sns.boxplot(x='Base Cake', y='EBITDA/KG', data=df)
moodsdf.sort_values('p_value')
###Output
_____no_output_____
###Markdown
We can go ahead and repeat this analysis for all of our product categories:
###Code
df.columns[:5]
moodsdf = pd.DataFrame()
for col in df.columns[:5]:
for truff in df[col].unique():
group = df.loc[df[col] == truff]['EBITDA/KG']
pop = df.loc[~(df[col] == truff)]['EBITDA/KG']
stat, p, m, table = scipy.stats.median_test(group, pop)
median = np.median(group)
mean = np.mean(group)
size = len(group)
welchp = scipy.stats.ttest_ind(group, pop, equal_var=False).pvalue
moodsdf = pd.concat([moodsdf,
pd.DataFrame([col, truff,
stat, p, m, mean, median, size,
welchp, table]).T])
moodsdf.columns = ['descriptor', 'group', 'pearsons_chi_square', 'p_value',
'grand_median', 'group_mean', 'group_median', 'size', 'welch p',
'table']
print(moodsdf.shape)
moodsdf = moodsdf.loc[(moodsdf['welch p'] < 0.05) &
(moodsdf['p_value'] < 0.05)].sort_values('group_median')
moodsdf = moodsdf.sort_values('group_median').reset_index(drop=True)
print(moodsdf.shape)
moodsdf[-10:]
###Output
_____no_output_____
###Markdown
6.2.2 **Enrichment**: Broad Analysis of Categories: ANOVA

Recall our "melted" shift data. It will be useful to think of getting our Truffle data in this format:
###Code
shift_melt.head()
df.columns = df.columns.str.replace(' ', '_')
df.columns = df.columns.str.replace('/', '_')
# get ANOVA table
# Ordinary Least Squares (OLS) model
model = ols('EBITDA_KG ~ C(Truffle_Type)', data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
anova_table
# output (ANOVA F and p value)
###Output
_____no_output_____
###Markdown
Recall that the **_Shapiro-Wilk_** test can be used to check the _normal distribution of residuals_. Null hypothesis: the data is drawn from a normal distribution.
###Code
w, pvalue = stats.shapiro(model.resid)
print(w, pvalue)
###Output
0.9576056599617004 1.2598073820281984e-21
###Markdown
And **_Bartlett's_** test can be used to check the _homogeneity of variances_. Null hypothesis: samples from populations have equal variances.
###Code
gb = df.groupby('Truffle_Type')['EBITDA_KG']
gb
w, pvalue = stats.bartlett(*[gb.get_group(x) for x in gb.groups])
print(w, pvalue)
###Output
109.93252546442552 1.344173733366234e-24
###Markdown
Wow, it looks like our data is not drawn from a normal distribution! Let's check this for other categories... We can wrap these in a for loop:
###Code
for col in df.columns[:5]:
print(col)
model = ols('EBITDA_KG ~ C({})'.format(col), data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
display(anova_table)
w, pvalue = stats.shapiro(model.resid)
print("Shapiro: ", w, pvalue)
gb = df.groupby(col)['EBITDA_KG']
w, pvalue = stats.bartlett(*[gb.get_group(x) for x in gb.groups])
print("Bartlett: ", w, pvalue)
print()
###Output
Base_Cake
###Markdown
6.2.3 **Enrichment**: Visual Analysis of Residuals: QQ-Plots

This can be distressing and is often why we want visual methods to see what is going on with our data!
###Code
model = ols('EBITDA_KG ~ C(Truffle_Type)', data=df).fit()
#create instance of influence
influence = model.get_influence()
#obtain standardized residuals
standardized_residuals = influence.resid_studentized_internal
# res.anova_std_residuals are standardized residuals obtained from ANOVA (check above)
sm.qqplot(standardized_residuals, line='45')
plt.xlabel("Theoretical Quantiles")
plt.ylabel("Standardized Residuals")
plt.show()
# histogram
plt.hist(model.resid, bins='auto', histtype='bar', ec='k')
plt.xlabel("Residuals")
plt.ylabel('Frequency')
plt.show()
###Output
_____no_output_____ |
demo/MMPose_Tutorial.ipynb | ###Markdown
MMPose Tutorial

Welcome to the MMPose Colab tutorial! In this tutorial, we will show you how to

- perform inference with an MMPose model
- train a new MMPose model with your own datasets

Let's start!

Install MMPose

We recommend using a conda environment to install MMPose and its dependencies. The compilers `nvcc` and `gcc` are required.
###Code
# check NVCC version
!nvcc -V
# check GCC version
!gcc --version
# check python in conda environment
!which python
# install dependencies: (use cu111 because colab has CUDA 11.1)
%pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
# install mmcv-full thus we could use CUDA operators
%pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.10.0/index.html
# install mmdet for inference demo
%pip install mmdet
# clone mmpose repo
%rm -rf mmpose
!git clone https://github.com/open-mmlab/mmpose.git
%cd mmpose
# install mmpose dependencies
%pip install -r requirements.txt
# install mmpose in develop mode
%pip install -e .
# Check Pytorch installation
import torch, torchvision
print('torch version:', torch.__version__, torch.cuda.is_available())
print('torchvision version:', torchvision.__version__)
# Check MMPose installation
import mmpose
print('mmpose version:', mmpose.__version__)
# Check mmcv installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print('cuda version:', get_compiling_cuda_version())
print('compiler information:', get_compiler_version())
###Output
torch version: 1.9.0+cu111 True
torchvision version: 0.10.0+cu111
mmpose version: 0.18.0
cuda version: 11.1
compiler information: GCC 9.3
###Markdown
Inference with an MMPose model

MMPose provides high-level APIs for model inference and training.
###Code
import cv2
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_config = 'configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w48_coco_256x192.py'
pose_checkpoint = 'https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(pose_config, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset=pose_model.cfg.data.test.type)
# show pose estimation results
vis_result = vis_pose_result(pose_model,
img,
pose_results,
dataset=pose_model.cfg.data.test.type,
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_http loader
###Markdown
Train a pose estimation model on a customized dataset

To train a model on a customized dataset with MMPose, there are usually three steps:

1. Support the dataset in MMPose
2. Create a config
3. Perform training and evaluation

Add a new dataset

There are two methods to support a customized dataset in MMPose. The first one is to convert the data to a supported format (e.g. COCO) and use the corresponding dataset class (e.g. TopdownCOCODataset), as described in the [document](https://mmpose.readthedocs.io/en/latest/tutorials/2_new_dataset.html#reorganize-dataset-to-existing-format). The second one is to add a new dataset class. In this tutorial, we give an example of the second method.

We first download the demo dataset, which contains 100 samples (75 for training and 25 for validation) selected from the COCO train2017 dataset. The annotations are stored in a different format from the original COCO format (a small, simplified sketch of COCO-style keypoint annotations is shown below for contrast).
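For contrast, here is a hypothetical, simplified sketch of what a single person annotation looks like in the standard COCO keypoint format (all file names and values below are made up for illustration, and the real COCO `categories` entry also carries keypoint names and a skeleton definition):

```python
# Hypothetical, simplified COCO-style keypoint annotation structure.
coco_style = {
    "images": [
        {"id": 1, "file_name": "000000000001.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,                             # person
            "keypoints": [325, 160, 2] + [0, 0, 0] * 16,  # 17 keypoints as (x, y, visibility) triplets
            "num_keypoints": 1,
            "bbox": [267.0, 104.0, 229.0, 320.0],         # x, y, w, h
            "area": 229.0 * 320.0,
            "iscrowd": 0,
        },
    ],
    "categories": [
        {"id": 1, "name": "person"},
    ],
}
```

The tiny demo dataset we download next instead stores a flat list of per-sample dicts, which is why we will write our own dataset class.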
###Code
# download dataset
%mkdir data
%cd data
!wget https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmpose/datasets/coco_tiny.tar
!tar -xf coco_tiny.tar
%cd ..
# check the directory structure
!apt-get -q install tree
!tree data/coco_tiny
# check the annotation format
import json
import pprint
anns = json.load(open('data/coco_tiny/train.json'))
print(type(anns), len(anns))
pprint.pprint(anns[0], compact=True)
###Output
<class 'list'> 75
{'bbox': [267.03, 104.32, 229.19, 320],
'image_file': '000000537548.jpg',
'image_size': [640, 480],
'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 325, 160, 2, 398,
177, 2, 0, 0, 0, 437, 238, 2, 0, 0, 0, 477, 270, 2, 287, 255, 1,
339, 267, 2, 0, 0, 0, 423, 314, 2, 0, 0, 0, 355, 367, 2]}
###Markdown
After downloading the data, we implement a new dataset class to load data samples for model training and validation. Assume that we are going to train a top-down pose estimation model (refer to [Top-down Pose Estimation](https://github.com/open-mmlab/mmpose/tree/master/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap#readme) for a brief introduction); the new dataset class therefore inherits the top-down base class `Kpt2dSviewRgbImgTopDownDataset`.
###Code
import json
import os
import os.path as osp
from collections import OrderedDict
import tempfile
import numpy as np
from mmpose.core.evaluation.top_down_eval import (keypoint_nme,
keypoint_pck_accuracy)
from mmpose.datasets.builder import DATASETS
from mmpose.datasets.datasets.base import Kpt2dSviewRgbImgTopDownDataset
@DATASETS.register_module()
class TopDownCOCOTinyDataset(Kpt2dSviewRgbImgTopDownDataset):
def __init__(self,
ann_file,
img_prefix,
data_cfg,
pipeline,
dataset_info=None,
test_mode=False):
super().__init__(
ann_file, img_prefix, data_cfg, pipeline, dataset_info, coco_style=False, test_mode=test_mode)
# flip_pairs, upper_body_ids and lower_body_ids will be used
# in some data augmentations like random flip
self.ann_info['flip_pairs'] = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10],
[11, 12], [13, 14], [15, 16]]
self.ann_info['upper_body_ids'] = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
self.ann_info['lower_body_ids'] = (11, 12, 13, 14, 15, 16)
self.ann_info['joint_weights'] = None
self.ann_info['use_different_joint_weights'] = False
self.dataset_name = 'coco_tiny'
self.db = self._get_db()
def _get_db(self):
with open(self.ann_file) as f:
anns = json.load(f)
db = []
for idx, ann in enumerate(anns):
# get image path
image_file = osp.join(self.img_prefix, ann['image_file'])
# get bbox
bbox = ann['bbox']
center, scale = self._xywh2cs(*bbox)
# get keypoints
keypoints = np.array(
ann['keypoints'], dtype=np.float32).reshape(-1, 3)
num_joints = keypoints.shape[0]
joints_3d = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d[:, :2] = keypoints[:, :2]
joints_3d_visible = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d_visible[:, :2] = np.minimum(1, keypoints[:, 2:3])
sample = {
'image_file': image_file,
'center': center,
'scale': scale,
'bbox': bbox,
'rotation': 0,
'joints_3d': joints_3d,
'joints_3d_visible': joints_3d_visible,
'bbox_score': 1,
'bbox_id': idx,
}
db.append(sample)
return db
def _xywh2cs(self, x, y, w, h):
"""This encodes bbox(x, y, w, h) into (center, scale)
Args:
x, y, w, h
Returns:
tuple: A tuple containing center and scale.
- center (np.ndarray[float32](2,)): center of the bbox (x, y).
- scale (np.ndarray[float32](2,)): scale of the bbox w & h.
"""
aspect_ratio = self.ann_info['image_size'][0] / self.ann_info[
'image_size'][1]
center = np.array([x + w * 0.5, y + h * 0.5], dtype=np.float32)
if w > aspect_ratio * h:
h = w * 1.0 / aspect_ratio
elif w < aspect_ratio * h:
w = h * aspect_ratio
# pixel std is 200.0
scale = np.array([w / 200.0, h / 200.0], dtype=np.float32)
# padding to include proper amount of context
scale = scale * 1.25
return center, scale
def evaluate(self, results, res_folder=None, metric='PCK', **kwargs):
"""Evaluate keypoint detection results. The pose prediction results will
be saved in `${res_folder}/result_keypoints.json`.
Note:
batch_size: N
num_keypoints: K
heatmap height: H
heatmap width: W
Args:
results (list(preds, boxes, image_path, output_heatmap))
:preds (np.ndarray[N,K,3]): The first two dimensions are
coordinates, score is the third dimension of the array.
:boxes (np.ndarray[N,6]): [center[0], center[1], scale[0]
, scale[1],area, score]
:image_paths (list[str]): For example, ['Test/source/0.jpg']
:output_heatmap (np.ndarray[N, K, H, W]): model outputs.
res_folder (str, optional): The folder to save the testing
results. If not specified, a temp folder will be created.
Default: None.
metric (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
Returns:
dict: Evaluation results for evaluation metric.
"""
metrics = metric if isinstance(metric, list) else [metric]
allowed_metrics = ['PCK', 'NME']
for metric in metrics:
if metric not in allowed_metrics:
raise KeyError(f'metric {metric} is not supported')
if res_folder is not None:
tmp_folder = None
res_file = osp.join(res_folder, 'result_keypoints.json')
else:
tmp_folder = tempfile.TemporaryDirectory()
res_file = osp.join(tmp_folder.name, 'result_keypoints.json')
kpts = []
for result in results:
preds = result['preds']
boxes = result['boxes']
image_paths = result['image_paths']
bbox_ids = result['bbox_ids']
batch_size = len(image_paths)
for i in range(batch_size):
kpts.append({
'keypoints': preds[i].tolist(),
'center': boxes[i][0:2].tolist(),
'scale': boxes[i][2:4].tolist(),
'area': float(boxes[i][4]),
'score': float(boxes[i][5]),
'bbox_id': bbox_ids[i]
})
kpts = self._sort_and_unique_bboxes(kpts)
self._write_keypoint_results(kpts, res_file)
info_str = self._report_metric(res_file, metrics)
name_value = OrderedDict(info_str)
if tmp_folder is not None:
tmp_folder.cleanup()
return name_value
def _report_metric(self, res_file, metrics, pck_thr=0.3):
"""Keypoint evaluation.
Args:
res_file (str): Json file stored prediction results.
metrics (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
pck_thr (float): PCK threshold, default: 0.3.
Returns:
dict: Evaluation results for evaluation metric.
"""
info_str = []
with open(res_file, 'r') as fin:
preds = json.load(fin)
assert len(preds) == len(self.db)
outputs = []
gts = []
masks = []
for pred, item in zip(preds, self.db):
outputs.append(np.array(pred['keypoints'])[:, :-1])
gts.append(np.array(item['joints_3d'])[:, :-1])
masks.append((np.array(item['joints_3d_visible'])[:, 0]) > 0)
outputs = np.array(outputs)
gts = np.array(gts)
masks = np.array(masks)
normalize_factor = self._get_normalize_factor(gts)
if 'PCK' in metrics:
_, pck, _ = keypoint_pck_accuracy(outputs, gts, masks, pck_thr,
normalize_factor)
info_str.append(('PCK', pck))
if 'NME' in metrics:
info_str.append(
('NME', keypoint_nme(outputs, gts, masks, normalize_factor)))
return info_str
@staticmethod
def _write_keypoint_results(keypoints, res_file):
"""Write results into a json file."""
with open(res_file, 'w') as f:
json.dump(keypoints, f, sort_keys=True, indent=4)
@staticmethod
def _sort_and_unique_bboxes(kpts, key='bbox_id'):
"""sort kpts and remove the repeated ones."""
kpts = sorted(kpts, key=lambda x: x[key])
num = len(kpts)
for i in range(num - 1, 0, -1):
if kpts[i][key] == kpts[i - 1][key]:
del kpts[i]
return kpts
@staticmethod
def _get_normalize_factor(gts):
"""Get inter-ocular distance as the normalize factor, measured as the
Euclidean distance between the outer corners of the eyes.
Args:
gts (np.ndarray[N, K, 2]): Groundtruth keypoint location.
Return:
np.ndarray[N, 2]: normalized factor
"""
interocular = np.linalg.norm(
gts[:, 0, :] - gts[:, 1, :], axis=1, keepdims=True)
return np.tile(interocular, [1, 2])
###Output
_____no_output_____
###Markdown
Create a config file

In the next step, we create a config file which configures the model, dataset and runtime settings. More information can be found at [Learn about Configs](https://mmpose.readthedocs.io/en/latest/tutorials/0_config.html). A common practice when creating a config file is to derive it from an existing one. In this tutorial, we load a config file that trains an HRNet on the COCO dataset and modify it to adapt to the COCOTiny dataset.
###Code
from mmcv import Config
cfg = Config.fromfile(
'./configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py'
)
# set basic configs
cfg.data_root = 'data/coco_tiny'
cfg.work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
cfg.gpu_ids = range(1)
cfg.seed = 0
# set log interval
cfg.log_config.interval = 1
# set evaluation configs
cfg.evaluation.interval = 10
cfg.evaluation.metric = 'PCK'
cfg.evaluation.save_best = 'PCK'
# set learning rate policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=10,
warmup_ratio=0.001,
step=[17, 35])
cfg.total_epochs = 40
# set batch size
cfg.data.samples_per_gpu = 16
cfg.data.val_dataloader = dict(samples_per_gpu=16)
cfg.data.test_dataloader = dict(samples_per_gpu=16)
# set dataset configs
cfg.data.train.type = 'TopDownCOCOTinyDataset'
cfg.data.train.ann_file = f'{cfg.data_root}/train.json'
cfg.data.train.img_prefix = f'{cfg.data_root}/images/'
cfg.data.val.type = 'TopDownCOCOTinyDataset'
cfg.data.val.ann_file = f'{cfg.data_root}/val.json'
cfg.data.val.img_prefix = f'{cfg.data_root}/images/'
cfg.data.test.type = 'TopDownCOCOTinyDataset'
cfg.data.test.ann_file = f'{cfg.data_root}/val.json'
cfg.data.test.img_prefix = f'{cfg.data_root}/images/'
print(cfg.pretty_text)
###Output
dataset_info = dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(name='nose', id=0, color=[51, 153, 255], type='upper', swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(link=('right_ankle', 'right_knee'), id=2, color=[255, 128, 0]),
3:
dict(link=('right_knee', 'right_hip'), id=3, color=[255, 128, 0]),
4:
dict(link=('left_hip', 'right_hip'), id=4, color=[51, 153, 255]),
5:
dict(link=('left_shoulder', 'left_hip'), id=5, color=[51, 153, 255]),
6:
dict(link=('right_shoulder', 'right_hip'), id=6, color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(link=('left_shoulder', 'left_elbow'), id=8, color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'), id=9, color=[255, 128, 0]),
10:
dict(link=('left_elbow', 'left_wrist'), id=10, color=[0, 255, 0]),
11:
dict(link=('right_elbow', 'right_wrist'), id=11, color=[255, 128, 0]),
12:
dict(link=('left_eye', 'right_eye'), id=12, color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(link=('left_eye', 'left_ear'), id=15, color=[51, 153, 255]),
16:
dict(link=('right_eye', 'right_ear'), id=16, color=[51, 153, 255]),
17:
dict(link=('left_ear', 'left_shoulder'), id=17, color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'), id=18, color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0, 1.0, 1.2,
1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072, 0.062,
0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])
log_level = 'INFO'
load_from = None
resume_from = None
dist_params = dict(backend='nccl')
workflow = [('train', 1)]
checkpoint_config = dict(interval=10)
evaluation = dict(interval=10, metric='PCK', save_best='PCK')
optimizer = dict(type='Adam', lr=0.0005)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=0.001,
step=[170, 200])
total_epochs = 40
log_config = dict(interval=1, hooks=[dict(type='TextLoggerHook')])
channel_cfg = dict(
num_output_channels=17,
dataset_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
])
model = dict(
type='TopDown',
pretrained=
'https://download.openmmlab.com/mmpose/pretrain_models/hrnet_w32-36af842e.pth',
backbone=dict(
type='HRNet',
in_channels=3,
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
num_channels=(64, )),
stage2=dict(
num_modules=1,
num_branches=2,
block='BASIC',
num_blocks=(4, 4),
num_channels=(32, 64)),
stage3=dict(
num_modules=4,
num_branches=3,
block='BASIC',
num_blocks=(4, 4, 4),
num_channels=(32, 64, 128)),
stage4=dict(
num_modules=3,
num_branches=4,
block='BASIC',
num_blocks=(4, 4, 4, 4),
num_channels=(32, 64, 128, 256)))),
keypoint_head=dict(
type='TopdownHeatmapSimpleHead',
in_channels=32,
out_channels=17,
num_deconv_layers=0,
extra=dict(final_conv_kernel=1),
loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True)),
train_cfg=dict(),
test_cfg=dict(
flip_test=True,
post_process='default',
shift_heatmap=True,
modulate_kernel=11))
data_cfg = dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation', rot_factor=40, scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center', 'scale',
'rotation', 'bbox_score', 'flip_pairs'
])
]
val_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
data_root = 'data/coco_tiny'
data = dict(
samples_per_gpu=16,
workers_per_gpu=2,
val_dataloader=dict(samples_per_gpu=16),
test_dataloader=dict(samples_per_gpu=16),
train=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/train.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation',
rot_factor=40,
scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center',
'scale', 'rotation', 'bbox_score', 'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])),
val=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])),
test=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])))
work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
gpu_ids = range(0, 1)
seed = 0
###Markdown
Train and Evaluation
###Code
from mmpose.datasets import build_dataset
from mmpose.models import build_posenet
from mmpose.apis import train_model
import mmcv
# build dataset
datasets = [build_dataset(cfg.data.train)]
# build model
model = build_posenet(cfg.model)
# create work_dir
mmcv.mkdir_or_exist(cfg.work_dir)
# train model
train_model(
model, datasets, cfg, distributed=False, validate=True, meta=dict())
###Output
Use load_from_http loader
###Markdown
Test the trained model. Since the model is trained on the toy dataset coco-tiny, its performance will not be as good as the models in our model zoo. Here we mainly show how to run inference with, and visualize, a local model checkpoint.
###Code
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_checkpoint = 'work_dirs/hrnet_w32_coco_tiny_256x192/latest.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(cfg, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset='TopDownCocoDataset')
# show pose estimation results
vis_result = vis_pose_result(pose_model,
img,
pose_results,
kpt_score_thr=0.,
dataset='TopDownCocoDataset',
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
import cv2
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_local loader
###Markdown
MMPose Tutorial. Welcome to the MMPose colab tutorial! In this tutorial, we will show you how to (1) perform inference with an MMPose model and (2) train a new MMPose model with your own datasets. Let's start! Install MMPose. We recommend using a conda environment to install MMPose and its dependencies. The compilers `nvcc` and `gcc` are required.
###Code
# check NVCC version
!nvcc -V
# check GCC version
!gcc --version
# check python in conda environment
!which python
# install pytorch
!pip install torch
# install mmcv-full
!pip install mmcv-full
# install mmdet for inference demo
!pip install mmdet
# clone mmpose repo
!rm -rf mmpose
!git clone https://github.com/open-mmlab/mmpose.git
%cd mmpose
# install mmpose dependencies
!pip install -r requirements.txt
# install mmpose in develop mode
!pip install -e .
# Check Pytorch installation
import torch, torchvision
print('torch version:', torch.__version__, torch.cuda.is_available())
print('torchvision version:', torchvision.__version__)
# Check MMPose installation
import mmpose
print('mmpose version:', mmpose.__version__)
# Check mmcv installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print('cuda version:', get_compiling_cuda_version())
print('compiler information:', get_compiler_version())
###Output
torch version: 1.9.0+cu111 True
torchvision version: 0.10.0+cu111
mmpose version: 0.18.0
cuda version: 11.1
compiler information: GCC 9.3
###Markdown
Inference with an MMPose model. MMPose provides high-level APIs for model inference and training.
###Code
import cv2
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_config = 'configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w48_coco_256x192.py'
pose_checkpoint = 'https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(pose_config, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset=pose_model.cfg.data.test.type)
# show pose estimation results
vis_result = vis_pose_result(pose_model,
img,
pose_results,
dataset=pose_model.cfg.data.test.type,
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_http loader
###Markdown
Train a pose estimation model on a customized dataset. To train a model on a customized dataset with MMPose, there are usually three steps: (1) support the dataset in MMPose, (2) create a config, and (3) perform training and evaluation. Add a new dataset. There are two methods to support a customized dataset in MMPose. The first is to convert the data to a supported format (e.g. COCO) and use the corresponding dataset class (e.g. `TopDownCocoDataset`), as described in the [document](https://mmpose.readthedocs.io/en/latest/tutorials/2_new_dataset.html#reorganize-dataset-to-existing-format); a rough sketch of such a conversion is shown below for reference, after the annotation format is inspected. The second is to add a new dataset class, and this tutorial gives an example of the second method. We first download the demo dataset, which contains 100 samples (75 for training and 25 for validation) selected from the COCO train2017 dataset; its annotations are stored in a different format from the original COCO format.
###Code
# download dataset
%mkdir data
%cd data
!wget https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmpose/datasets/coco_tiny.tar
!tar -xf coco_tiny.tar
%cd ..
# check the directory structure
!apt-get -q install tree
!tree data/coco_tiny
# check the annotation format
import json
import pprint
anns = json.load(open('data/coco_tiny/train.json'))
print(type(anns), len(anns))
pprint.pprint(anns[0], compact=True)
###Output
<class 'list'> 75
{'bbox': [267.03, 104.32, 229.19, 320],
'image_file': '000000537548.jpg',
'image_size': [640, 480],
'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 325, 160, 2, 398,
177, 2, 0, 0, 0, 437, 238, 2, 0, 0, 0, 477, 270, 2, 287, 255, 1,
339, 267, 2, 0, 0, 0, 423, 314, 2, 0, 0, 0, 355, 367, 2]}
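###Markdown
For reference, the first method mentioned above (converting the annotations to the standard COCO keypoint format and reusing `TopDownCocoDataset`) could look roughly like the sketch below. This is only an illustration based on the coco_tiny format shown above and is not part of the original tutorial; the destination file name is hypothetical. The rest of this tutorial follows the second method instead.
###Code
import json

def coco_tiny_to_coco(src_json, dst_json):
    """Rough sketch: convert coco_tiny-style annotations to the COCO keypoint format."""
    with open(src_json) as f:
        anns = json.load(f)
    images, annotations = [], []
    for idx, ann in enumerate(anns):
        width, height = ann['image_size']
        images.append(
            dict(id=idx, file_name=ann['image_file'], width=width, height=height))
        x, y, w, h = ann['bbox']
        keypoints = ann['keypoints']
        annotations.append(
            dict(
                id=idx,
                image_id=idx,
                category_id=1,
                iscrowd=0,
                bbox=[x, y, w, h],
                area=w * h,
                keypoints=keypoints,
                # count keypoints whose visibility flag (every 3rd value) is > 0
                num_keypoints=int(sum(v > 0 for v in keypoints[2::3]))))
    coco = dict(
        images=images,
        annotations=annotations,
        categories=[dict(id=1, name='person')])
    with open(dst_json, 'w') as f:
        json.dump(coco, f)

# e.g. coco_tiny_to_coco('data/coco_tiny/train.json', 'data/coco_tiny/train_coco.json')
###Output
_____no_output_____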
###Markdown
After downloading the data, we implement a new dataset class to load data samples for model training and validation. Assuming that we are going to train a top-down pose estimation model (refer to [Top-down Pose Estimation](https://github.com/open-mmlab/mmpose/tree/master/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap#readme) for a brief introduction), the new dataset class inherits `Kpt2dSviewRgbImgTopDownDataset`.
###Code
import json
import os
import os.path as osp
from collections import OrderedDict
import numpy as np
from mmpose.core.evaluation.top_down_eval import (keypoint_nme,
keypoint_pck_accuracy)
from mmpose.datasets.builder import DATASETS
from mmpose.datasets.datasets.base import Kpt2dSviewRgbImgTopDownDataset
@DATASETS.register_module()
class TopDownCOCOTinyDataset(Kpt2dSviewRgbImgTopDownDataset):
def __init__(self,
ann_file,
img_prefix,
data_cfg,
pipeline,
dataset_info=None,
test_mode=False):
super().__init__(
ann_file, img_prefix, data_cfg, pipeline, dataset_info, coco_style=False, test_mode=test_mode)
# flip_pairs, upper_body_ids and lower_body_ids will be used
# in some data augmentations like random flip
self.ann_info['flip_pairs'] = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10],
[11, 12], [13, 14], [15, 16]]
self.ann_info['upper_body_ids'] = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
self.ann_info['lower_body_ids'] = (11, 12, 13, 14, 15, 16)
self.ann_info['joint_weights'] = None
self.ann_info['use_different_joint_weights'] = False
self.dataset_name = 'coco_tiny'
self.db = self._get_db()
def _get_db(self):
with open(self.ann_file) as f:
anns = json.load(f)
db = []
for idx, ann in enumerate(anns):
# get image path
image_file = osp.join(self.img_prefix, ann['image_file'])
# get bbox
bbox = ann['bbox']
center, scale = self._xywh2cs(*bbox)
# get keypoints
keypoints = np.array(
ann['keypoints'], dtype=np.float32).reshape(-1, 3)
num_joints = keypoints.shape[0]
joints_3d = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d[:, :2] = keypoints[:, :2]
joints_3d_visible = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d_visible[:, :2] = np.minimum(1, keypoints[:, 2:3])
sample = {
'image_file': image_file,
'center': center,
'scale': scale,
'bbox': bbox,
'rotation': 0,
'joints_3d': joints_3d,
'joints_3d_visible': joints_3d_visible,
'bbox_score': 1,
'bbox_id': idx,
}
db.append(sample)
return db
def _xywh2cs(self, x, y, w, h):
"""This encodes bbox(x, y, w, h) into (center, scale)
Args:
x, y, w, h
Returns:
tuple: A tuple containing center and scale.
- center (np.ndarray[float32](2,)): center of the bbox (x, y).
- scale (np.ndarray[float32](2,)): scale of the bbox w & h.
"""
aspect_ratio = self.ann_info['image_size'][0] / self.ann_info[
'image_size'][1]
center = np.array([x + w * 0.5, y + h * 0.5], dtype=np.float32)
if w > aspect_ratio * h:
h = w * 1.0 / aspect_ratio
elif w < aspect_ratio * h:
w = h * aspect_ratio
# pixel std is 200.0
scale = np.array([w / 200.0, h / 200.0], dtype=np.float32)
# padding to include proper amount of context
scale = scale * 1.25
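        # illustrative numbers (not from the original tutorial): with image_size
        # [192, 256] (aspect ratio 0.75), a 100x300 bbox is widened to 225x300,
        # so scale ~= [1.125, 1.5] * 1.25 = [1.41, 1.88]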
return center, scale
def evaluate(self, outputs, res_folder, metric='PCK', **kwargs):
"""Evaluate keypoint detection results. The pose prediction results will
be saved in `${res_folder}/result_keypoints.json`.
Note:
batch_size: N
num_keypoints: K
heatmap height: H
heatmap width: W
Args:
outputs (list(preds, boxes, image_path, output_heatmap))
:preds (np.ndarray[N,K,3]): The first two dimensions are
coordinates, score is the third dimension of the array.
:boxes (np.ndarray[N,6]): [center[0], center[1], scale[0]
, scale[1],area, score]
:image_paths (list[str]): For example, ['Test/source/0.jpg']
                :output_heatmap (np.ndarray[N, K, H, W]): model outputs.
res_folder (str): Path of directory to save the results.
metric (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
Returns:
dict: Evaluation results for evaluation metric.
"""
metrics = metric if isinstance(metric, list) else [metric]
allowed_metrics = ['PCK', 'NME']
for metric in metrics:
if metric not in allowed_metrics:
raise KeyError(f'metric {metric} is not supported')
res_file = os.path.join(res_folder, 'result_keypoints.json')
kpts = []
for output in outputs:
preds = output['preds']
boxes = output['boxes']
image_paths = output['image_paths']
bbox_ids = output['bbox_ids']
batch_size = len(image_paths)
for i in range(batch_size):
kpts.append({
'keypoints': preds[i].tolist(),
'center': boxes[i][0:2].tolist(),
'scale': boxes[i][2:4].tolist(),
'area': float(boxes[i][4]),
'score': float(boxes[i][5]),
'bbox_id': bbox_ids[i]
})
kpts = self._sort_and_unique_bboxes(kpts)
self._write_keypoint_results(kpts, res_file)
info_str = self._report_metric(res_file, metrics)
name_value = OrderedDict(info_str)
return name_value
def _report_metric(self, res_file, metrics, pck_thr=0.3):
"""Keypoint evaluation.
Args:
res_file (str): Json file stored prediction results.
metrics (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
pck_thr (float): PCK threshold, default: 0.3.
Returns:
dict: Evaluation results for evaluation metric.
"""
info_str = []
with open(res_file, 'r') as fin:
preds = json.load(fin)
assert len(preds) == len(self.db)
outputs = []
gts = []
masks = []
for pred, item in zip(preds, self.db):
outputs.append(np.array(pred['keypoints'])[:, :-1])
gts.append(np.array(item['joints_3d'])[:, :-1])
masks.append((np.array(item['joints_3d_visible'])[:, 0]) > 0)
outputs = np.array(outputs)
gts = np.array(gts)
masks = np.array(masks)
normalize_factor = self._get_normalize_factor(gts)
if 'PCK' in metrics:
_, pck, _ = keypoint_pck_accuracy(outputs, gts, masks, pck_thr,
normalize_factor)
info_str.append(('PCK', pck))
if 'NME' in metrics:
info_str.append(
('NME', keypoint_nme(outputs, gts, masks, normalize_factor)))
return info_str
@staticmethod
def _write_keypoint_results(keypoints, res_file):
"""Write results into a json file."""
with open(res_file, 'w') as f:
json.dump(keypoints, f, sort_keys=True, indent=4)
@staticmethod
def _sort_and_unique_bboxes(kpts, key='bbox_id'):
"""sort kpts and remove the repeated ones."""
kpts = sorted(kpts, key=lambda x: x[key])
num = len(kpts)
for i in range(num - 1, 0, -1):
if kpts[i][key] == kpts[i - 1][key]:
del kpts[i]
return kpts
@staticmethod
def _get_normalize_factor(gts):
"""Get inter-ocular distance as the normalize factor, measured as the
Euclidean distance between the outer corners of the eyes.
Args:
gts (np.ndarray[N, K, 2]): Groundtruth keypoint location.
Return:
np.ndarray[N, 2]: normalized factor
"""
interocular = np.linalg.norm(
gts[:, 0, :] - gts[:, 1, :], axis=1, keepdims=True)
return np.tile(interocular, [1, 2])
###Output
_____no_output_____
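###Markdown
To make the PCK metric used in `_report_metric` concrete, the cell below is a tiny synthetic check (purely illustrative, not part of the original tutorial): with a normalization factor of 10 pixels and the 0.3 threshold used above, a prediction 1 pixel off counts as correct while one 10 pixels off does not, giving an average PCK of 0.5.
###Code
import numpy as np
from mmpose.core.evaluation.top_down_eval import keypoint_pck_accuracy

gt = np.array([[[0., 0.], [10., 10.]]])    # 1 sample, 2 keypoints
pred = np.array([[[1., 0.], [20., 10.]]])  # first is 1 px off, second is 10 px off
mask = np.ones((1, 2), dtype=bool)         # both keypoints are visible
normalize = np.full((1, 2), 10.)           # per-sample (x, y) normalization factor
_, avg_pck, _ = keypoint_pck_accuracy(pred, gt, mask, 0.3, normalize)
print(avg_pck)  # expected: 0.5
###Output
_____no_output_____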
###Markdown
Create a config file. In the next step, we create a config file which configures the model, dataset and runtime settings. More information can be found at [Learn about Configs](https://mmpose.readthedocs.io/en/latest/tutorials/0_config.html). A common practice when creating a config file is to derive it from an existing one. In this tutorial, we load a config file that trains an HRNet on the COCO dataset and modify it to adapt to the COCOTiny dataset.
###Code
from mmcv import Config
cfg = Config.fromfile(
'./configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py'
)
# set basic configs
cfg.data_root = 'data/coco_tiny'
cfg.work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
cfg.gpu_ids = range(1)
cfg.seed = 0
# set log interval
cfg.log_config.interval = 1
# set evaluation configs
cfg.evaluation.interval = 10
cfg.evaluation.metric = 'PCK'
cfg.evaluation.save_best = 'PCK'
# set learning rate policy
cfg.lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=10,
warmup_ratio=0.001,
step=[17, 35])
cfg.total_epochs = 40
# set batch size
cfg.data.samples_per_gpu = 16
cfg.data.val_dataloader = dict(samples_per_gpu=16)
cfg.data.test_dataloader = dict(samples_per_gpu=16)
# set dataset configs
cfg.data.train.type = 'TopDownCOCOTinyDataset'
cfg.data.train.ann_file = f'{cfg.data_root}/train.json'
cfg.data.train.img_prefix = f'{cfg.data_root}/images/'
cfg.data.val.type = 'TopDownCOCOTinyDataset'
cfg.data.val.ann_file = f'{cfg.data_root}/val.json'
cfg.data.val.img_prefix = f'{cfg.data_root}/images/'
cfg.data.test.type = 'TopDownCOCOTinyDataset'
cfg.data.test.ann_file = f'{cfg.data_root}/val.json'
cfg.data.test.img_prefix = f'{cfg.data_root}/images/'
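# (optional, not part of the original tutorial) the assembled config could be
# saved alongside the checkpoints for reproducibility, e.g. using mmcv's Config.dump:
#   import mmcv
#   mmcv.mkdir_or_exist(cfg.work_dir)
#   cfg.dump(f'{cfg.work_dir}/hrnet_w32_coco_tiny_256x192.py')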
print(cfg.pretty_text)
###Output
dataset_info = dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(name='nose', id=0, color=[51, 153, 255], type='upper', swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(link=('right_ankle', 'right_knee'), id=2, color=[255, 128, 0]),
3:
dict(link=('right_knee', 'right_hip'), id=3, color=[255, 128, 0]),
4:
dict(link=('left_hip', 'right_hip'), id=4, color=[51, 153, 255]),
5:
dict(link=('left_shoulder', 'left_hip'), id=5, color=[51, 153, 255]),
6:
dict(link=('right_shoulder', 'right_hip'), id=6, color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(link=('left_shoulder', 'left_elbow'), id=8, color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'), id=9, color=[255, 128, 0]),
10:
dict(link=('left_elbow', 'left_wrist'), id=10, color=[0, 255, 0]),
11:
dict(link=('right_elbow', 'right_wrist'), id=11, color=[255, 128, 0]),
12:
dict(link=('left_eye', 'right_eye'), id=12, color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(link=('left_eye', 'left_ear'), id=15, color=[51, 153, 255]),
16:
dict(link=('right_eye', 'right_ear'), id=16, color=[51, 153, 255]),
17:
dict(link=('left_ear', 'left_shoulder'), id=17, color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'), id=18, color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0, 1.0, 1.2,
1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072, 0.062,
0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])
log_level = 'INFO'
load_from = None
resume_from = None
dist_params = dict(backend='nccl')
workflow = [('train', 1)]
checkpoint_config = dict(interval=10)
evaluation = dict(interval=10, metric='PCK', save_best='PCK')
optimizer = dict(type='Adam', lr=0.0005)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=0.001,
step=[170, 200])
total_epochs = 40
log_config = dict(interval=1, hooks=[dict(type='TextLoggerHook')])
channel_cfg = dict(
num_output_channels=17,
dataset_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
])
model = dict(
type='TopDown',
pretrained=
'https://download.openmmlab.com/mmpose/pretrain_models/hrnet_w32-36af842e.pth',
backbone=dict(
type='HRNet',
in_channels=3,
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
num_channels=(64, )),
stage2=dict(
num_modules=1,
num_branches=2,
block='BASIC',
num_blocks=(4, 4),
num_channels=(32, 64)),
stage3=dict(
num_modules=4,
num_branches=3,
block='BASIC',
num_blocks=(4, 4, 4),
num_channels=(32, 64, 128)),
stage4=dict(
num_modules=3,
num_branches=4,
block='BASIC',
num_blocks=(4, 4, 4, 4),
num_channels=(32, 64, 128, 256)))),
keypoint_head=dict(
type='TopdownHeatmapSimpleHead',
in_channels=32,
out_channels=17,
num_deconv_layers=0,
extra=dict(final_conv_kernel=1),
loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True)),
train_cfg=dict(),
test_cfg=dict(
flip_test=True,
post_process='default',
shift_heatmap=True,
modulate_kernel=11))
data_cfg = dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation', rot_factor=40, scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center', 'scale',
'rotation', 'bbox_score', 'flip_pairs'
])
]
val_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
data_root = 'data/coco_tiny'
data = dict(
samples_per_gpu=16,
workers_per_gpu=2,
val_dataloader=dict(samples_per_gpu=16),
test_dataloader=dict(samples_per_gpu=16),
train=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/train.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation',
rot_factor=40,
scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center',
'scale', 'rotation', 'bbox_score', 'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])),
val=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])),
test=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])))
work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
gpu_ids = range(0, 1)
seed = 0
###Markdown
Train and Evaluation
###Code
from mmpose.datasets import build_dataset
from mmpose.models import build_posenet
from mmpose.apis import train_model
import mmcv
# build dataset
datasets = [build_dataset(cfg.data.train)]
# build model
model = build_posenet(cfg.model)
# create work_dir
mmcv.mkdir_or_exist(cfg.work_dir)
# train model
train_model(
model, datasets, cfg, distributed=False, validate=True, meta=dict())
###Output
Use load_from_http loader
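###Markdown
If training is interrupted, it can in principle be resumed from the latest checkpoint written to the work directory before calling `train_model` again. The snippet below is only a sketch and is not part of the original tutorial; the checkpoint path assumes the `work_dir` configured earlier.
###Code
# sketch: resume an interrupted run from the latest checkpoint in work_dir
cfg.resume_from = 'work_dirs/hrnet_w32_coco_tiny_256x192/latest.pth'
# train_model(model, datasets, cfg, distributed=False, validate=True, meta=dict())
###Output
_____no_output_____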
###Markdown
Test the trained model. Since the model is trained on the toy dataset coco-tiny, its performance will not be as good as the models in our model zoo. Here we mainly show how to run inference with, and visualize, a local model checkpoint.
###Code
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_checkpoint = 'work_dirs/hrnet_w32_coco_tiny_256x192/latest.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(cfg, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset='TopDownCocoDataset')
# show pose estimation results
vis_result = vis_pose_result(pose_model,
img,
pose_results,
kpt_score_thr=0.,
dataset='TopDownCocoDataset',
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
import cv2
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_local loader
###Markdown
MMPose Tutorial. Welcome to the MMPose colab tutorial! In this tutorial, we will show you how to (1) perform inference with an MMPose model and (2) train a new MMPose model with your own datasets. Let's start! Install MMPose. We recommend using a conda environment to install MMPose and its dependencies. The compilers `nvcc` and `gcc` are required.
###Code
# check NVCC version
!nvcc -V
# check GCC version
!gcc --version
# check python in conda environment
!which python
# install pytorch
!pip install torch
# install mmcv-full
!pip install mmcv-full
# install mmdet for inference demo
!pip install mmdet
# clone mmpose repo
!rm -rf mmpose
!git clone https://github.com/open-mmlab/mmpose.git
%cd mmpose
# install mmpose dependencies
!pip install -r requirements.txt
# install mmpose in develop mode
!pip install -e .
# Check Pytorch installation
import torch, torchvision
print('torch version:', torch.__version__, torch.cuda.is_available())
print('torchvision version:', torchvision.__version__)
# Check MMPose installation
import mmpose
print('mmpose version:', mmpose.__version__)
# Check mmcv installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print('cuda version:', get_compiling_cuda_version())
print('compiler information:', get_compiler_version())
###Output
torch version: 1.9.0+cu111 True
torchvision version: 0.10.0+cu111
mmpose version: 0.18.0
cuda version: 11.1
compiler information: GCC 9.3
###Markdown
Inference with an MMPose model. MMPose provides high-level APIs for model inference and training.
###Code
import cv2
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_config = 'configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w48_coco_256x192.py'
pose_checkpoint = 'https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(pose_config, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset=pose_model.cfg.data.test.type)
# show pose estimation results
vis_result = vis_pose_result(pose_model,
img,
pose_results,
dataset=pose_model.cfg.data.test.type,
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_http loader
###Markdown
Train a pose estimation model on a customized dataset. To train a model on a customized dataset with MMPose, there are usually three steps: (1) support the dataset in MMPose, (2) create a config, and (3) perform training and evaluation. Add a new dataset. There are two methods to support a customized dataset in MMPose. The first is to convert the data to a supported format (e.g. COCO) and use the corresponding dataset class (e.g. `TopDownCocoDataset`), as described in the [document](https://mmpose.readthedocs.io/en/latest/tutorials/2_new_dataset.html#reorganize-dataset-to-existing-format). The second is to add a new dataset class, and this tutorial gives an example of the second method. We first download the demo dataset, which contains 100 samples (75 for training and 25 for validation) selected from the COCO train2017 dataset; its annotations are stored in a different format from the original COCO format.
###Code
# download dataset
%mkdir data
%cd data
!wget https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmpose/datasets/coco_tiny.tar
!tar -xf coco_tiny.tar
%cd ..
# check the directory structure
!apt-get -q install tree
!tree data/coco_tiny
# check the annotation format
import json
import pprint
anns = json.load(open('data/coco_tiny/train.json'))
print(type(anns), len(anns))
pprint.pprint(anns[0], compact=True)
###Output
<class 'list'> 75
{'bbox': [267.03, 104.32, 229.19, 320],
'image_file': '000000537548.jpg',
'image_size': [640, 480],
'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 325, 160, 2, 398,
177, 2, 0, 0, 0, 437, 238, 2, 0, 0, 0, 477, 270, 2, 287, 255, 1,
339, 267, 2, 0, 0, 0, 423, 314, 2, 0, 0, 0, 355, 367, 2]}
###Markdown
After downloading the data, we implement a new dataset class to load data samples for model training and validation. Assuming that we are going to train a top-down pose estimation model (refer to [Top-down Pose Estimation](https://github.com/open-mmlab/mmpose/tree/master/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap#readme) for a brief introduction), the new dataset class inherits `Kpt2dSviewRgbImgTopDownDataset`.
###Code
import json
import os
import os.path as osp
from collections import OrderedDict
import tempfile
import numpy as np
from mmpose.core.evaluation.top_down_eval import (keypoint_nme,
keypoint_pck_accuracy)
from mmpose.datasets.builder import DATASETS
from mmpose.datasets.datasets.base import Kpt2dSviewRgbImgTopDownDataset
@DATASETS.register_module()
class TopDownCOCOTinyDataset(Kpt2dSviewRgbImgTopDownDataset):
def __init__(self,
ann_file,
img_prefix,
data_cfg,
pipeline,
dataset_info=None,
test_mode=False):
super().__init__(
ann_file, img_prefix, data_cfg, pipeline, dataset_info, coco_style=False, test_mode=test_mode)
# flip_pairs, upper_body_ids and lower_body_ids will be used
# in some data augmentations like random flip
self.ann_info['flip_pairs'] = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10],
[11, 12], [13, 14], [15, 16]]
self.ann_info['upper_body_ids'] = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
self.ann_info['lower_body_ids'] = (11, 12, 13, 14, 15, 16)
self.ann_info['joint_weights'] = None
self.ann_info['use_different_joint_weights'] = False
self.dataset_name = 'coco_tiny'
self.db = self._get_db()
def _get_db(self):
with open(self.ann_file) as f:
anns = json.load(f)
db = []
for idx, ann in enumerate(anns):
# get image path
image_file = osp.join(self.img_prefix, ann['image_file'])
# get bbox
bbox = ann['bbox']
center, scale = self._xywh2cs(*bbox)
# get keypoints
keypoints = np.array(
ann['keypoints'], dtype=np.float32).reshape(-1, 3)
num_joints = keypoints.shape[0]
joints_3d = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d[:, :2] = keypoints[:, :2]
joints_3d_visible = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d_visible[:, :2] = np.minimum(1, keypoints[:, 2:3])
sample = {
'image_file': image_file,
'center': center,
'scale': scale,
'bbox': bbox,
'rotation': 0,
'joints_3d': joints_3d,
'joints_3d_visible': joints_3d_visible,
'bbox_score': 1,
'bbox_id': idx,
}
db.append(sample)
return db
def _xywh2cs(self, x, y, w, h):
"""This encodes bbox(x, y, w, h) into (center, scale)
Args:
x, y, w, h
Returns:
tuple: A tuple containing center and scale.
- center (np.ndarray[float32](2,)): center of the bbox (x, y).
- scale (np.ndarray[float32](2,)): scale of the bbox w & h.
"""
aspect_ratio = self.ann_info['image_size'][0] / self.ann_info[
'image_size'][1]
center = np.array([x + w * 0.5, y + h * 0.5], dtype=np.float32)
if w > aspect_ratio * h:
h = w * 1.0 / aspect_ratio
elif w < aspect_ratio * h:
w = h * aspect_ratio
# pixel std is 200.0
scale = np.array([w / 200.0, h / 200.0], dtype=np.float32)
# padding to include proper amount of context
scale = scale * 1.25
return center, scale
def evaluate(self, results, res_folder=None, metric='PCK', **kwargs):
"""Evaluate keypoint detection results. The pose prediction results will
be saved in `${res_folder}/result_keypoints.json`.
Note:
batch_size: N
num_keypoints: K
heatmap height: H
heatmap width: W
Args:
results (list(preds, boxes, image_path, output_heatmap))
:preds (np.ndarray[N,K,3]): The first two dimensions are
coordinates, score is the third dimension of the array.
:boxes (np.ndarray[N,6]): [center[0], center[1], scale[0]
, scale[1],area, score]
:image_paths (list[str]): For example, ['Test/source/0.jpg']
:output_heatmap (np.ndarray[N, K, H, W]): model outputs.
res_folder (str, optional): The folder to save the testing
results. If not specified, a temp folder will be created.
Default: None.
metric (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
Returns:
dict: Evaluation results for evaluation metric.
"""
metrics = metric if isinstance(metric, list) else [metric]
allowed_metrics = ['PCK', 'NME']
for metric in metrics:
if metric not in allowed_metrics:
raise KeyError(f'metric {metric} is not supported')
if res_folder is not None:
tmp_folder = None
res_file = osp.join(res_folder, 'result_keypoints.json')
else:
tmp_folder = tempfile.TemporaryDirectory()
res_file = osp.join(tmp_folder.name, 'result_keypoints.json')
kpts = []
for result in results:
preds = result['preds']
boxes = result['boxes']
image_paths = result['image_paths']
bbox_ids = result['bbox_ids']
batch_size = len(image_paths)
for i in range(batch_size):
kpts.append({
'keypoints': preds[i].tolist(),
'center': boxes[i][0:2].tolist(),
'scale': boxes[i][2:4].tolist(),
'area': float(boxes[i][4]),
'score': float(boxes[i][5]),
'bbox_id': bbox_ids[i]
})
kpts = self._sort_and_unique_bboxes(kpts)
self._write_keypoint_results(kpts, res_file)
info_str = self._report_metric(res_file, metrics)
name_value = OrderedDict(info_str)
if tmp_folder is not None:
tmp_folder.cleanup()
return name_value
def _report_metric(self, res_file, metrics, pck_thr=0.3):
"""Keypoint evaluation.
Args:
res_file (str): Json file stored prediction results.
metrics (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
pck_thr (float): PCK threshold, default: 0.3.
Returns:
dict: Evaluation results for evaluation metric.
"""
info_str = []
with open(res_file, 'r') as fin:
preds = json.load(fin)
assert len(preds) == len(self.db)
outputs = []
gts = []
masks = []
for pred, item in zip(preds, self.db):
outputs.append(np.array(pred['keypoints'])[:, :-1])
gts.append(np.array(item['joints_3d'])[:, :-1])
masks.append((np.array(item['joints_3d_visible'])[:, 0]) > 0)
outputs = np.array(outputs)
gts = np.array(gts)
masks = np.array(masks)
normalize_factor = self._get_normalize_factor(gts)
if 'PCK' in metrics:
_, pck, _ = keypoint_pck_accuracy(outputs, gts, masks, pck_thr,
normalize_factor)
info_str.append(('PCK', pck))
if 'NME' in metrics:
info_str.append(
('NME', keypoint_nme(outputs, gts, masks, normalize_factor)))
return info_str
@staticmethod
def _write_keypoint_results(keypoints, res_file):
"""Write results into a json file."""
with open(res_file, 'w') as f:
json.dump(keypoints, f, sort_keys=True, indent=4)
@staticmethod
def _sort_and_unique_bboxes(kpts, key='bbox_id'):
"""sort kpts and remove the repeated ones."""
kpts = sorted(kpts, key=lambda x: x[key])
num = len(kpts)
for i in range(num - 1, 0, -1):
if kpts[i][key] == kpts[i - 1][key]:
del kpts[i]
return kpts
@staticmethod
def _get_normalize_factor(gts):
"""Get inter-ocular distance as the normalize factor, measured as the
Euclidean distance between the outer corners of the eyes.
Args:
gts (np.ndarray[N, K, 2]): Groundtruth keypoint location.
Return:
np.ndarray[N, 2]: normalized factor
"""
interocular = np.linalg.norm(
gts[:, 0, :] - gts[:, 1, :], axis=1, keepdims=True)
return np.tile(interocular, [1, 2])
###Output
_____no_output_____
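###Markdown
A quick sanity check of the `_xywh2cs` encoding above (a minimal, hypothetical snippet that is not part of the original tutorial): it reproduces the same center/scale arithmetic for one bounding box, assuming the 192x256 input size used in the config below.
###Code
import numpy as np
# hypothetical standalone re-implementation of the (center, scale) encoding above
def xywh2cs(x, y, w, h, aspect_ratio=192 / 256):
    center = np.array([x + w * 0.5, y + h * 0.5], dtype=np.float32)
    # pad the shorter side so the box matches the model input aspect ratio
    if w > aspect_ratio * h:
        h = w * 1.0 / aspect_ratio
    elif w < aspect_ratio * h:
        w = h * aspect_ratio
    # pixel std is 200.0; the box is enlarged by 1.25 to include context
    scale = np.array([w / 200.0, h / 200.0], dtype=np.float32) * 1.25
    return center, scale
print(xywh2cs(267.03, 104.32, 229.19, 320))
###Output
_____no_output_____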
###Markdown
Create a config file. In the next step, we create a config file that configures the model, dataset and runtime settings. More information can be found at [Learn about Configs](https://mmpose.readthedocs.io/en/latest/tutorials/0_config.html). A common practice is to derive a new config file from an existing one. In this tutorial, we load a config file that trains an HRNet on the COCO dataset, and modify it to adapt to the COCOTiny dataset.
###Code
from mmcv import Config
cfg = Config.fromfile(
'./configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py'
)
# set basic configs
cfg.data_root = 'data/coco_tiny'
cfg.work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
cfg.gpu_ids = range(1)
cfg.seed = 0
# set log interval
cfg.log_config.interval = 1
# set evaluation configs
cfg.evaluation.interval = 10
cfg.evaluation.metric = 'PCK'
cfg.evaluation.save_best = 'PCK'
# set learning rate policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=10,
warmup_ratio=0.001,
step=[17, 35])
cfg.total_epochs = 40
# set batch size
cfg.data.samples_per_gpu = 16
cfg.data.val_dataloader = dict(samples_per_gpu=16)
cfg.data.test_dataloader = dict(samples_per_gpu=16)
# set dataset configs
cfg.data.train.type = 'TopDownCOCOTinyDataset'
cfg.data.train.ann_file = f'{cfg.data_root}/train.json'
cfg.data.train.img_prefix = f'{cfg.data_root}/images/'
cfg.data.val.type = 'TopDownCOCOTinyDataset'
cfg.data.val.ann_file = f'{cfg.data_root}/val.json'
cfg.data.val.img_prefix = f'{cfg.data_root}/images/'
cfg.data.test.type = 'TopDownCOCOTinyDataset'
cfg.data.test.ann_file = f'{cfg.data_root}/val.json'
cfg.data.test.img_prefix = f'{cfg.data_root}/images/'
print(cfg.pretty_text)
###Output
dataset_info = dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(name='nose', id=0, color=[51, 153, 255], type='upper', swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(link=('right_ankle', 'right_knee'), id=2, color=[255, 128, 0]),
3:
dict(link=('right_knee', 'right_hip'), id=3, color=[255, 128, 0]),
4:
dict(link=('left_hip', 'right_hip'), id=4, color=[51, 153, 255]),
5:
dict(link=('left_shoulder', 'left_hip'), id=5, color=[51, 153, 255]),
6:
dict(link=('right_shoulder', 'right_hip'), id=6, color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(link=('left_shoulder', 'left_elbow'), id=8, color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'), id=9, color=[255, 128, 0]),
10:
dict(link=('left_elbow', 'left_wrist'), id=10, color=[0, 255, 0]),
11:
dict(link=('right_elbow', 'right_wrist'), id=11, color=[255, 128, 0]),
12:
dict(link=('left_eye', 'right_eye'), id=12, color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(link=('left_eye', 'left_ear'), id=15, color=[51, 153, 255]),
16:
dict(link=('right_eye', 'right_ear'), id=16, color=[51, 153, 255]),
17:
dict(link=('left_ear', 'left_shoulder'), id=17, color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'), id=18, color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0, 1.0, 1.2,
1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072, 0.062,
0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])
log_level = 'INFO'
load_from = None
resume_from = None
dist_params = dict(backend='nccl')
workflow = [('train', 1)]
checkpoint_config = dict(interval=10)
evaluation = dict(interval=10, metric='PCK', save_best='PCK')
optimizer = dict(type='Adam', lr=0.0005)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=0.001,
step=[170, 200])
total_epochs = 40
log_config = dict(interval=1, hooks=[dict(type='TextLoggerHook')])
channel_cfg = dict(
num_output_channels=17,
dataset_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
])
model = dict(
type='TopDown',
pretrained=
'https://download.openmmlab.com/mmpose/pretrain_models/hrnet_w32-36af842e.pth',
backbone=dict(
type='HRNet',
in_channels=3,
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
num_channels=(64, )),
stage2=dict(
num_modules=1,
num_branches=2,
block='BASIC',
num_blocks=(4, 4),
num_channels=(32, 64)),
stage3=dict(
num_modules=4,
num_branches=3,
block='BASIC',
num_blocks=(4, 4, 4),
num_channels=(32, 64, 128)),
stage4=dict(
num_modules=3,
num_branches=4,
block='BASIC',
num_blocks=(4, 4, 4, 4),
num_channels=(32, 64, 128, 256)))),
keypoint_head=dict(
type='TopdownHeatmapSimpleHead',
in_channels=32,
out_channels=17,
num_deconv_layers=0,
extra=dict(final_conv_kernel=1),
loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True)),
train_cfg=dict(),
test_cfg=dict(
flip_test=True,
post_process='default',
shift_heatmap=True,
modulate_kernel=11))
data_cfg = dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation', rot_factor=40, scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center', 'scale',
'rotation', 'bbox_score', 'flip_pairs'
])
]
val_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
data_root = 'data/coco_tiny'
data = dict(
samples_per_gpu=16,
workers_per_gpu=2,
val_dataloader=dict(samples_per_gpu=16),
test_dataloader=dict(samples_per_gpu=16),
train=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/train.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation',
rot_factor=40,
scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center',
'scale', 'rotation', 'bbox_score', 'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])),
val=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])),
test=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])))
work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
gpu_ids = range(0, 1)
seed = 0
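###Markdown
Optionally, the resolved config can be written to disk so the exact training settings are stored alongside the checkpoints. This is a small optional step that is not part of the original tutorial; it assumes `cfg.dump()` is available, which holds for recent mmcv versions.
###Code
import os.path as osp
import mmcv
# persist the resolved config next to the checkpoints for reproducibility
mmcv.mkdir_or_exist(cfg.work_dir)
cfg.dump(osp.join(cfg.work_dir, 'hrnet_w32_coco_tiny_256x192.py'))
###Output
_____no_output_____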
###Markdown
Train and Evaluation
###Code
from mmpose.datasets import build_dataset
from mmpose.models import build_posenet
from mmpose.apis import train_model
import mmcv
# build dataset
datasets = [build_dataset(cfg.data.train)]
# build model
model = build_posenet(cfg.model)
# create work_dir
mmcv.mkdir_or_exist(cfg.work_dir)
# train model
train_model(
model, datasets, cfg, distributed=False, validate=True, meta=dict())
###Output
Use load_from_http loader
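###Markdown
After training, the work directory contains the training log, the periodic checkpoints, and `latest.pth`, which is loaded in the next cell. Listing its contents is an optional check that is not part of the original tutorial.
###Code
import glob
# show what was written to the work directory during training
print(sorted(glob.glob(f'{cfg.work_dir}/*')))
###Output
_____no_output_____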
###Markdown
Test the trained model. Since the model is trained on the toy dataset coco-tiny, its performance will not be as good as the models in our model zoo. Here we mainly show how to run inference and visualization with a local model checkpoint.
###Code
import cv2  # used below for resizing; imported here so the cell is self-contained
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
                         vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_checkpoint = 'work_dirs/hrnet_w32_coco_tiny_256x192/latest.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(cfg, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset='TopDownCocoDataset')
# show pose estimation results
vis_result = vis_pose_result(pose_model,
img,
pose_results,
kpt_score_thr=0.,
dataset='TopDownCocoDataset',
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
import cv2
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_local loader
###Markdown
MMPose Tutorial. Welcome to the MMPose Colab tutorial! In this tutorial, we will show you how to (1) perform inference with an MMPose model and (2) train a new MMPose model with your own datasets. Let's start! Install MMPose. We recommend using a conda environment to install mmpose and its dependencies; the compilers `nvcc` and `gcc` are required.
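###Markdown
For a local (non-Colab) setup, the environment can be prepared roughly as sketched below; these are hypothetical commands that are not part of the original tutorial, and on Colab this step can be skipped since the preinstalled Python environment is used directly.
###Code
# sketch of a local conda setup (skip on Colab); run these in a shell,
# since `conda activate` does not take effect inside a notebook subshell
# conda create -n openmmlab python=3.8 -y
# conda activate openmmlab
###Output
_____no_output_____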
###Code
# check NVCC version
!nvcc -V
# check GCC version
!gcc --version
# check python in conda environment
!which python
# install dependencies: (use cu111 because colab has CUDA 11.1)
%pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
# install mmcv-full thus we could use CUDA operators
%pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.10.0/index.html
# install mmdet for inference demo
%pip install mmdet
# clone mmpose repo
%rm -rf mmpose
!git clone https://github.com/open-mmlab/mmpose.git
%cd mmpose
# install mmpose dependencies
%pip install -r requirements.txt
# install mmpose in develop mode
%pip install -e .
# Check Pytorch installation
import torch, torchvision
print('torch version:', torch.__version__, torch.cuda.is_available())
print('torchvision version:', torchvision.__version__)
# Check MMPose installation
import mmpose
print('mmpose version:', mmpose.__version__)
# Check mmcv installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print('cuda version:', get_compiling_cuda_version())
print('compiler information:', get_compiler_version())
###Output
torch version: 1.9.0+cu111 True
torchvision version: 0.10.0+cu111
mmpose version: 0.18.0
cuda version: 11.1
compiler information: GCC 9.3
###Markdown
Inference with an MMPose model. MMPose provides high-level APIs for model inference and training.
###Code
import cv2
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_config = 'configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w48_coco_256x192.py'
pose_checkpoint = 'https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(pose_config, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(
pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset=pose_model.cfg.data.test.type)
# show pose estimation results
vis_result = vis_pose_result(
pose_model,
img,
pose_results,
dataset=pose_model.cfg.data.test.type,
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_http loader
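###Markdown
Each entry in `pose_results` corresponds to one detected person and holds the bounding box together with its predicted keypoints. The short inspection below is an optional check that is not part of the original tutorial; it assumes the usual mmpose 0.x top-down result format, where every entry is a dict with 'bbox' and 'keypoints' arrays.
###Code
# inspect the structure of the pose estimation results (optional)
print(len(pose_results), 'person instances')
first = pose_results[0]
print(type(first), list(first.keys()) if isinstance(first, dict) else first)
###Output
_____no_output_____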
###Markdown
Train a pose estimation model on a customized dataset. To train a model on a customized dataset with MMPose, there are usually three steps: (1) support the dataset in MMPose, (2) create a config, and (3) perform training and evaluation. Add a new dataset. There are two methods to support a customized dataset in MMPose. The first is to convert the data to a supported format (e.g. COCO) and use the corresponding dataset class (e.g. TopDownCocoDataset), as described in the [documentation](https://mmpose.readthedocs.io/en/latest/tutorials/2_new_dataset.html#reorganize-dataset-to-existing-format). The second is to add a new dataset class. In this tutorial, we give an example of the second method. We first download the demo dataset, which contains 100 samples (75 for training and 25 for validation) selected from the COCO train2017 dataset. The annotations are stored in a different format from the original COCO format.
###Code
# download dataset
%mkdir data
%cd data
!wget https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmpose/datasets/coco_tiny.tar
!tar -xf coco_tiny.tar
%cd ..
# check the directory structure
!apt-get -q install tree
!tree data/coco_tiny
# check the annotation format
import json
import pprint
anns = json.load(open('data/coco_tiny/train.json'))
print(type(anns), len(anns))
pprint.pprint(anns[0], compact=True)
###Output
<class 'list'> 75
{'bbox': [267.03, 104.32, 229.19, 320],
'image_file': '000000537548.jpg',
'image_size': [640, 480],
'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 325, 160, 2, 398,
177, 2, 0, 0, 0, 437, 238, 2, 0, 0, 0, 477, 270, 2, 287, 255, 1,
339, 267, 2, 0, 0, 0, 423, 314, 2, 0, 0, 0, 355, 367, 2]}
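###Markdown
Each annotation stores the 17 COCO keypoints as a flat list of (x, y, visibility) triplets, which the dataset class below reshapes into an array of shape (17, 3). The snippet here is an optional illustration and not part of the original tutorial.
###Code
import numpy as np
# reshape the flat keypoint list of the first sample into (num_joints, 3)
kpts = np.array(anns[0]['keypoints'], dtype=np.float32).reshape(-1, 3)
print(kpts.shape)  # (17, 3): x, y, visibility
print(int((kpts[:, 2] > 0).sum()), 'labeled joints')
###Output
_____no_output_____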
###Markdown
After downloading the data, we implement a new dataset class to load data samples for model training and validation. Assuming that we are going to train a top-down pose estimation model (refer to [Top-down Pose Estimation](https://github.com/open-mmlab/mmpose/tree/master/configs/body/2d_kpt_sview_rgb_img/topdown_heatmap#readme) for a brief introduction), the new dataset class inherits `Kpt2dSviewRgbImgTopDownDataset`, as shown in the code below.
###Code
import json
import os.path as osp
from collections import OrderedDict
import tempfile
import numpy as np
from mmpose.core.evaluation.top_down_eval import (keypoint_nme,
keypoint_pck_accuracy)
from mmpose.datasets.builder import DATASETS
from mmpose.datasets.datasets.base import Kpt2dSviewRgbImgTopDownDataset
@DATASETS.register_module()
class TopDownCOCOTinyDataset(Kpt2dSviewRgbImgTopDownDataset):
def __init__(self,
ann_file,
img_prefix,
data_cfg,
pipeline,
dataset_info=None,
test_mode=False):
super().__init__(
ann_file,
img_prefix,
data_cfg,
pipeline,
dataset_info,
coco_style=False,
test_mode=test_mode)
# flip_pairs, upper_body_ids and lower_body_ids will be used
# in some data augmentations like random flip
self.ann_info['flip_pairs'] = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10],
[11, 12], [13, 14], [15, 16]]
self.ann_info['upper_body_ids'] = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
self.ann_info['lower_body_ids'] = (11, 12, 13, 14, 15, 16)
self.ann_info['joint_weights'] = None
self.ann_info['use_different_joint_weights'] = False
self.dataset_name = 'coco_tiny'
self.db = self._get_db()
def _get_db(self):
with open(self.ann_file) as f:
anns = json.load(f)
db = []
for idx, ann in enumerate(anns):
# get image path
image_file = osp.join(self.img_prefix, ann['image_file'])
# get bbox
bbox = ann['bbox']
# get keypoints
keypoints = np.array(
ann['keypoints'], dtype=np.float32).reshape(-1, 3)
num_joints = keypoints.shape[0]
joints_3d = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d[:, :2] = keypoints[:, :2]
joints_3d_visible = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d_visible[:, :2] = np.minimum(1, keypoints[:, 2:3])
sample = {
'image_file': image_file,
'bbox': bbox,
'rotation': 0,
'joints_3d': joints_3d,
'joints_3d_visible': joints_3d_visible,
'bbox_score': 1,
'bbox_id': idx,
}
db.append(sample)
return db
def evaluate(self, results, res_folder=None, metric='PCK', **kwargs):
"""Evaluate keypoint detection results. The pose prediction results will
be saved in `${res_folder}/result_keypoints.json`.
Note:
batch_size: N
num_keypoints: K
heatmap height: H
heatmap width: W
Args:
results (list(preds, boxes, image_path, output_heatmap))
:preds (np.ndarray[N,K,3]): The first two dimensions are
coordinates, score is the third dimension of the array.
:boxes (np.ndarray[N,6]): [center[0], center[1], scale[0]
, scale[1],area, score]
:image_paths (list[str]): For example, ['Test/source/0.jpg']
:output_heatmap (np.ndarray[N, K, H, W]): model outputs.
res_folder (str, optional): The folder to save the testing
results. If not specified, a temp folder will be created.
Default: None.
metric (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
Returns:
dict: Evaluation results for evaluation metric.
"""
metrics = metric if isinstance(metric, list) else [metric]
allowed_metrics = ['PCK', 'NME']
for metric in metrics:
if metric not in allowed_metrics:
raise KeyError(f'metric {metric} is not supported')
if res_folder is not None:
tmp_folder = None
res_file = osp.join(res_folder, 'result_keypoints.json')
else:
tmp_folder = tempfile.TemporaryDirectory()
res_file = osp.join(tmp_folder.name, 'result_keypoints.json')
kpts = []
for result in results:
preds = result['preds']
boxes = result['boxes']
image_paths = result['image_paths']
bbox_ids = result['bbox_ids']
batch_size = len(image_paths)
for i in range(batch_size):
kpts.append({
'keypoints': preds[i].tolist(),
'center': boxes[i][0:2].tolist(),
'scale': boxes[i][2:4].tolist(),
'area': float(boxes[i][4]),
'score': float(boxes[i][5]),
'bbox_id': bbox_ids[i]
})
kpts = self._sort_and_unique_bboxes(kpts)
self._write_keypoint_results(kpts, res_file)
info_str = self._report_metric(res_file, metrics)
name_value = OrderedDict(info_str)
if tmp_folder is not None:
tmp_folder.cleanup()
return name_value
def _report_metric(self, res_file, metrics, pck_thr=0.3):
"""Keypoint evaluation.
Args:
res_file (str): Json file stored prediction results.
metrics (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
pck_thr (float): PCK threshold, default: 0.3.
Returns:
dict: Evaluation results for evaluation metric.
"""
info_str = []
with open(res_file, 'r') as fin:
preds = json.load(fin)
assert len(preds) == len(self.db)
outputs = []
gts = []
masks = []
for pred, item in zip(preds, self.db):
outputs.append(np.array(pred['keypoints'])[:, :-1])
gts.append(np.array(item['joints_3d'])[:, :-1])
masks.append((np.array(item['joints_3d_visible'])[:, 0]) > 0)
outputs = np.array(outputs)
gts = np.array(gts)
masks = np.array(masks)
normalize_factor = self._get_normalize_factor(gts)
if 'PCK' in metrics:
_, pck, _ = keypoint_pck_accuracy(outputs, gts, masks, pck_thr,
normalize_factor)
info_str.append(('PCK', pck))
if 'NME' in metrics:
info_str.append(
('NME', keypoint_nme(outputs, gts, masks, normalize_factor)))
return info_str
@staticmethod
def _write_keypoint_results(keypoints, res_file):
"""Write results into a json file."""
with open(res_file, 'w') as f:
json.dump(keypoints, f, sort_keys=True, indent=4)
@staticmethod
def _sort_and_unique_bboxes(kpts, key='bbox_id'):
"""sort kpts and remove the repeated ones."""
kpts = sorted(kpts, key=lambda x: x[key])
num = len(kpts)
for i in range(num - 1, 0, -1):
if kpts[i][key] == kpts[i - 1][key]:
del kpts[i]
return kpts
@staticmethod
def _get_normalize_factor(gts):
"""Get inter-ocular distance as the normalize factor, measured as the
Euclidean distance between the outer corners of the eyes.
Args:
gts (np.ndarray[N, K, 2]): Groundtruth keypoint location.
Return:
np.ndarray[N, 2]: normalized factor
"""
interocular = np.linalg.norm(
gts[:, 0, :] - gts[:, 1, :], axis=1, keepdims=True)
return np.tile(interocular, [1, 2])
###Output
_____no_output_____
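###Markdown
Because the class is decorated with `@DATASETS.register_module()`, it is now registered under its class name and can be referenced by the string 'TopDownCOCOTinyDataset' in the config below. The quick lookup here is optional and not part of the original tutorial.
###Code
# the registered class can be retrieved by name from the DATASETS registry
print(DATASETS.get('TopDownCOCOTinyDataset'))
###Output
_____no_output_____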
###Markdown
Create a config file. In the next step, we create a config file that configures the model, dataset and runtime settings. More information can be found at [Learn about Configs](https://mmpose.readthedocs.io/en/latest/tutorials/0_config.html). A common practice is to derive a new config file from an existing one. In this tutorial, we load a config file that trains an HRNet on the COCO dataset, and modify it to adapt to the COCOTiny dataset.
###Code
from mmcv import Config
cfg = Config.fromfile(
'./configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py'
)
# set basic configs
cfg.data_root = 'data/coco_tiny'
cfg.work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
cfg.gpu_ids = range(1)
cfg.seed = 0
# set log interval
cfg.log_config.interval = 1
# set evaluation configs
cfg.evaluation.interval = 10
cfg.evaluation.metric = 'PCK'
cfg.evaluation.save_best = 'PCK'
# set learning rate policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=10,
warmup_ratio=0.001,
step=[17, 35])
cfg.total_epochs = 40
# set batch size
cfg.data.samples_per_gpu = 16
cfg.data.val_dataloader = dict(samples_per_gpu=16)
cfg.data.test_dataloader = dict(samples_per_gpu=16)
# set dataset configs
cfg.data.train.type = 'TopDownCOCOTinyDataset'
cfg.data.train.ann_file = f'{cfg.data_root}/train.json'
cfg.data.train.img_prefix = f'{cfg.data_root}/images/'
cfg.data.val.type = 'TopDownCOCOTinyDataset'
cfg.data.val.ann_file = f'{cfg.data_root}/val.json'
cfg.data.val.img_prefix = f'{cfg.data_root}/images/'
cfg.data.test.type = 'TopDownCOCOTinyDataset'
cfg.data.test.ann_file = f'{cfg.data_root}/val.json'
cfg.data.test.img_prefix = f'{cfg.data_root}/images/'
print(cfg.pretty_text)
###Output
dataset_info = dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(name='nose', id=0, color=[51, 153, 255], type='upper', swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(link=('right_ankle', 'right_knee'), id=2, color=[255, 128, 0]),
3:
dict(link=('right_knee', 'right_hip'), id=3, color=[255, 128, 0]),
4:
dict(link=('left_hip', 'right_hip'), id=4, color=[51, 153, 255]),
5:
dict(link=('left_shoulder', 'left_hip'), id=5, color=[51, 153, 255]),
6:
dict(link=('right_shoulder', 'right_hip'), id=6, color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(link=('left_shoulder', 'left_elbow'), id=8, color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'), id=9, color=[255, 128, 0]),
10:
dict(link=('left_elbow', 'left_wrist'), id=10, color=[0, 255, 0]),
11:
dict(link=('right_elbow', 'right_wrist'), id=11, color=[255, 128, 0]),
12:
dict(link=('left_eye', 'right_eye'), id=12, color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(link=('left_eye', 'left_ear'), id=15, color=[51, 153, 255]),
16:
dict(link=('right_eye', 'right_ear'), id=16, color=[51, 153, 255]),
17:
dict(link=('left_ear', 'left_shoulder'), id=17, color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'), id=18, color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0, 1.0, 1.2,
1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072, 0.062,
0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])
log_level = 'INFO'
load_from = None
resume_from = None
dist_params = dict(backend='nccl')
workflow = [('train', 1)]
checkpoint_config = dict(interval=10)
evaluation = dict(interval=10, metric='PCK', save_best='PCK')
optimizer = dict(type='Adam', lr=0.0005)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=0.001,
step=[170, 200])
total_epochs = 40
log_config = dict(interval=1, hooks=[dict(type='TextLoggerHook')])
channel_cfg = dict(
num_output_channels=17,
dataset_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
])
model = dict(
type='TopDown',
pretrained=
'https://download.openmmlab.com/mmpose/pretrain_models/hrnet_w32-36af842e.pth',
backbone=dict(
type='HRNet',
in_channels=3,
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
num_channels=(64, )),
stage2=dict(
num_modules=1,
num_branches=2,
block='BASIC',
num_blocks=(4, 4),
num_channels=(32, 64)),
stage3=dict(
num_modules=4,
num_branches=3,
block='BASIC',
num_blocks=(4, 4, 4),
num_channels=(32, 64, 128)),
stage4=dict(
num_modules=3,
num_branches=4,
block='BASIC',
num_blocks=(4, 4, 4, 4),
num_channels=(32, 64, 128, 256)))),
keypoint_head=dict(
type='TopdownHeatmapSimpleHead',
in_channels=32,
out_channels=17,
num_deconv_layers=0,
extra=dict(final_conv_kernel=1),
loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True)),
train_cfg=dict(),
test_cfg=dict(
flip_test=True,
post_process='default',
shift_heatmap=True,
modulate_kernel=11))
data_cfg = dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation', rot_factor=40, scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center', 'scale',
'rotation', 'bbox_score', 'flip_pairs'
])
]
val_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
data_root = 'data/coco_tiny'
data = dict(
samples_per_gpu=16,
workers_per_gpu=2,
val_dataloader=dict(samples_per_gpu=16),
test_dataloader=dict(samples_per_gpu=16),
train=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/train.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation',
rot_factor=40,
scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center',
'scale', 'rotation', 'bbox_score', 'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])),
val=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])),
test=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])))
work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
gpu_ids = range(0, 1)
seed = 0
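###Markdown
Before training, it can be handy to keep a copy of the fully resolved config next to the checkpoints. This is an optional, minimal sketch (not part of the original tutorial) that assumes the `cfg` object from the previous cell is still in scope; the output file name is arbitrary.
###Code
import os.path as osp
import mmcv

# make sure the work directory exists, then dump the resolved config there
mmcv.mkdir_or_exist(cfg.work_dir)
cfg.dump(osp.join(cfg.work_dir, 'resolved_config.py'))
print('config saved to', osp.join(cfg.work_dir, 'resolved_config.py'))
###Output
_____no_output_____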
###Markdown
Train and Evaluation
###Code
from mmpose.datasets import build_dataset
from mmpose.models import build_posenet
from mmpose.apis import train_model
import mmcv
# build dataset
datasets = [build_dataset(cfg.data.train)]
# build model
model = build_posenet(cfg.model)
# create work_dir
mmcv.mkdir_or_exist(cfg.work_dir)
# train model
train_model(
model, datasets, cfg, distributed=False, validate=True, meta=dict())
###Output
Use load_from_http loader
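###Markdown
After training finishes, the work directory should contain the periodic checkpoints, a `latest.pth` link and (because `save_best='PCK'` is set) a best-PCK checkpoint. A minimal sketch to list them, assuming the default checkpoint naming used by mmcv:
###Code
import glob
import os.path as osp

# list checkpoint files written during training (names assumed from mmcv defaults)
work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
for path in sorted(glob.glob(osp.join(work_dir, '*.pth'))):
    print(osp.basename(path))
###Output
_____no_output_____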
###Markdown
Test the trained model. Since the model is trained on the toy dataset coco-tiny, its performance will not be as good as the models in our model zoo. Here we mainly show how to run inference with a local model checkpoint and visualize the results.
###Code
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
import cv2
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_checkpoint = 'work_dirs/hrnet_w32_coco_tiny_256x192/latest.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(cfg, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(
pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset='TopDownCocoDataset')
# show pose estimation results
vis_result = vis_pose_result(
pose_model,
img,
pose_results,
kpt_score_thr=0.,
dataset='TopDownCocoDataset',
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
import cv2
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_local loader
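###Markdown
The returned `pose_results` is a list with one entry per detected person; in this MMPose version each entry is a dict holding the person bounding box and a `K x 3` keypoint array (x, y, score). A small sketch that summarizes the predictions, assuming that structure:
###Code
import numpy as np

# print a short per-person summary of the pose predictions
for i, person in enumerate(pose_results):
    keypoints = np.asarray(person['keypoints'])  # shape (17, 3): x, y, score
    print(f'person {i}: bbox={np.round(person["bbox"], 1)}, '
          f'mean keypoint score={keypoints[:, 2].mean():.3f}')
###Output
_____no_output_____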
###Markdown
MMPose Tutorial. Welcome to the MMPose Colab tutorial! In this tutorial, we will show you how to (1) perform inference with an MMPose model and (2) train a new MMPose model on your own dataset. Let's start! Install MMPose. We recommend using a conda environment to install MMPose and its dependencies, and the compilers `nvcc` and `gcc` are required.
###Code
# check NVCC version
!nvcc -V
# check GCC version
!gcc --version
# check python in conda environment
!which python
# install dependencies: (use cu111 because colab has CUDA 11.1)
%pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
# install mmcv-full thus we could use CUDA operators
%pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.10.0/index.html
# install mmdet for inference demo
%pip install mmdet
# clone mmpose repo
%rm -rf mmpose
!git clone https://github.com/open-mmlab/mmpose.git
%cd mmpose
# install mmpose dependencies
%pip install -r requirements.txt
# install mmpose in develop mode
%pip install -e .
# Check Pytorch installation
import torch, torchvision
print('torch version:', torch.__version__, torch.cuda.is_available())
print('torchvision version:', torchvision.__version__)
# Check MMPose installation
import mmpose
print('mmpose version:', mmpose.__version__)
# Check mmcv installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print('cuda version:', get_compiling_cuda_version())
print('compiler information:', get_compiler_version())
###Output
torch version: 1.9.0+cu111 True
torchvision version: 0.10.0+cu111
mmpose version: 0.18.0
cuda version: 11.1
compiler information: GCC 9.3
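###Markdown
It can also be worth confirming that the detector dependency is importable before running the demo. An optional, minimal sanity check:
###Code
# verify that mmcv and mmdet (used for the person detector) are available
import mmcv
import mmdet

print('mmcv version:', mmcv.__version__)
print('mmdet version:', mmdet.__version__)
###Output
_____no_output_____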
###Markdown
Inference with an MMPose model. MMPose provides high-level APIs for model inference and training.
###Code
import cv2
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_config = 'configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w48_coco_256x192.py'
pose_checkpoint = 'https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(pose_config, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset=pose_model.cfg.data.test.type)
# show pose estimation results
vis_result = vis_pose_result(pose_model,
img,
pose_results,
dataset=pose_model.cfg.data.test.type,
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_http loader
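###Markdown
If you want to keep the visualization instead of only displaying it, the rendered image returned by `vis_pose_result` is a plain BGR array and can be written to disk. A minimal sketch; the output path is arbitrary:
###Code
import cv2

# save the rendered pose visualization to disk
out_file = 'vis_pose_demo.jpg'
cv2.imwrite(out_file, vis_result)
print('saved visualization to', out_file)
###Output
_____no_output_____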
###Markdown
Train a pose estimation model on a customized dataset. To train a model on a customized dataset with MMPose, there are usually three steps: 1. support the dataset in MMPose; 2. create a config; 3. perform training and evaluation. Add a new dataset. There are two methods to support a customized dataset in MMPose. The first one is to convert the data to a supported format (e.g. COCO) and use the corresponding dataset class (e.g. TopDownCocoDataset), as described in the [document](https://mmpose.readthedocs.io/en/latest/tutorials/2_new_dataset.htmlreorganize-dataset-to-existing-format). The second one is to add a new dataset class. In this tutorial, we give an example of the second method; a brief sketch of the first method is shown after the annotation check below. We first download the demo dataset, which contains 100 samples (75 for training and 25 for validation) selected from the COCO train2017 dataset. The annotations are stored in a different format from the original COCO format.
###Code
# download dataset
%mkdir data
%cd data
!wget https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmpose/datasets/coco_tiny.tar
!tar -xf coco_tiny.tar
%cd ..
# check the directory structure
!apt-get -q install tree
!tree data/coco_tiny
# check the annotation format
import json
import pprint
anns = json.load(open('data/coco_tiny/train.json'))
print(type(anns), len(anns))
pprint.pprint(anns[0], compact=True)
###Output
<class 'list'> 75
{'bbox': [267.03, 104.32, 229.19, 320],
'image_file': '000000537548.jpg',
'image_size': [640, 480],
'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 325, 160, 2, 398,
177, 2, 0, 0, 0, 437, 238, 2, 0, 0, 0, 477, 270, 2, 287, 255, 1,
339, 267, 2, 0, 0, 0, 423, 314, 2, 0, 0, 0, 355, 367, 2]}
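###Markdown
For reference, here is a rough sketch of the first method mentioned above: converting these tiny annotations into COCO-style `images`/`annotations`/`categories` dicts so that an existing COCO dataset class could be reused. It is only illustrative (ids and fields such as `num_keypoints` are filled in naively) and is not needed for the rest of this tutorial.
###Code
import json

# naive conversion of the coco_tiny annotation list into a COCO-style dict
anns = json.load(open('data/coco_tiny/train.json'))
coco = {'images': [], 'annotations': [], 'categories': [{'id': 1, 'name': 'person'}]}
for i, ann in enumerate(anns):
    coco['images'].append({
        'id': i,
        'file_name': ann['image_file'],
        'width': ann['image_size'][0],
        'height': ann['image_size'][1],
    })
    kpts = ann['keypoints']
    coco['annotations'].append({
        'id': i,
        'image_id': i,
        'category_id': 1,
        'bbox': ann['bbox'],
        'keypoints': kpts,
        'num_keypoints': sum(1 for v in kpts[2::3] if v > 0),
        'iscrowd': 0,
        'area': ann['bbox'][2] * ann['bbox'][3],
    })
with open('data/coco_tiny/train_coco_style.json', 'w') as f:
    json.dump(coco, f)
print(len(coco['images']), 'images,', len(coco['annotations']), 'annotations')
###Output
_____no_output_____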
###Markdown
After downloading the data, we implement a new dataset class to load data samples for model training and validation. Assuming that we are going to train a top-down pose estimation model (refer to [Top-down Pose Estimation](https://github.com/open-mmlab/mmpose/tree/master/configs/body/2d_kpt_sview_rgb_img/topdown_heatmapreadme) for a brief introduction), the new dataset class inherits `Kpt2dSviewRgbImgTopDownDataset`.
###Code
import json
import os
import os.path as osp
from collections import OrderedDict
import tempfile
import numpy as np
from mmpose.core.evaluation.top_down_eval import (keypoint_nme,
keypoint_pck_accuracy)
from mmpose.datasets.builder import DATASETS
from mmpose.datasets.datasets.base import Kpt2dSviewRgbImgTopDownDataset
@DATASETS.register_module()
class TopDownCOCOTinyDataset(Kpt2dSviewRgbImgTopDownDataset):
def __init__(self,
ann_file,
img_prefix,
data_cfg,
pipeline,
dataset_info=None,
test_mode=False):
super().__init__(
ann_file, img_prefix, data_cfg, pipeline, dataset_info, coco_style=False, test_mode=test_mode)
# flip_pairs, upper_body_ids and lower_body_ids will be used
# in some data augmentations like random flip
self.ann_info['flip_pairs'] = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10],
[11, 12], [13, 14], [15, 16]]
self.ann_info['upper_body_ids'] = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
self.ann_info['lower_body_ids'] = (11, 12, 13, 14, 15, 16)
self.ann_info['joint_weights'] = None
self.ann_info['use_different_joint_weights'] = False
self.dataset_name = 'coco_tiny'
self.db = self._get_db()
def _get_db(self):
with open(self.ann_file) as f:
anns = json.load(f)
db = []
for idx, ann in enumerate(anns):
# get image path
image_file = osp.join(self.img_prefix, ann['image_file'])
# get bbox
bbox = ann['bbox']
center, scale = self._xywh2cs(*bbox)
# get keypoints
keypoints = np.array(
ann['keypoints'], dtype=np.float32).reshape(-1, 3)
num_joints = keypoints.shape[0]
joints_3d = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d[:, :2] = keypoints[:, :2]
joints_3d_visible = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d_visible[:, :2] = np.minimum(1, keypoints[:, 2:3])
sample = {
'image_file': image_file,
'center': center,
'scale': scale,
'bbox': bbox,
'rotation': 0,
'joints_3d': joints_3d,
'joints_3d_visible': joints_3d_visible,
'bbox_score': 1,
'bbox_id': idx,
}
db.append(sample)
return db
def _xywh2cs(self, x, y, w, h):
"""This encodes bbox(x, y, w, h) into (center, scale)
Args:
x, y, w, h
Returns:
tuple: A tuple containing center and scale.
- center (np.ndarray[float32](2,)): center of the bbox (x, y).
- scale (np.ndarray[float32](2,)): scale of the bbox w & h.
"""
aspect_ratio = self.ann_info['image_size'][0] / self.ann_info[
'image_size'][1]
center = np.array([x + w * 0.5, y + h * 0.5], dtype=np.float32)
if w > aspect_ratio * h:
h = w * 1.0 / aspect_ratio
elif w < aspect_ratio * h:
w = h * aspect_ratio
# pixel std is 200.0
scale = np.array([w / 200.0, h / 200.0], dtype=np.float32)
# padding to include proper amount of context
scale = scale * 1.25
return center, scale
def evaluate(self, results, res_folder=None, metric='PCK', **kwargs):
"""Evaluate keypoint detection results. The pose prediction results will
be saved in `${res_folder}/result_keypoints.json`.
Note:
batch_size: N
num_keypoints: K
heatmap height: H
heatmap width: W
Args:
results (list(preds, boxes, image_path, output_heatmap))
:preds (np.ndarray[N,K,3]): The first two dimensions are
coordinates, score is the third dimension of the array.
:boxes (np.ndarray[N,6]): [center[0], center[1], scale[0]
, scale[1],area, score]
:image_paths (list[str]): For example, ['Test/source/0.jpg']
:output_heatmap (np.ndarray[N, K, H, W]): model outputs.
res_folder (str, optional): The folder to save the testing
results. If not specified, a temp folder will be created.
Default: None.
metric (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
Returns:
dict: Evaluation results for evaluation metric.
"""
metrics = metric if isinstance(metric, list) else [metric]
allowed_metrics = ['PCK', 'NME']
for metric in metrics:
if metric not in allowed_metrics:
raise KeyError(f'metric {metric} is not supported')
if res_folder is not None:
tmp_folder = None
res_file = osp.join(res_folder, 'result_keypoints.json')
else:
tmp_folder = tempfile.TemporaryDirectory()
res_file = osp.join(tmp_folder.name, 'result_keypoints.json')
kpts = []
for result in results:
preds = result['preds']
boxes = result['boxes']
image_paths = result['image_paths']
bbox_ids = result['bbox_ids']
batch_size = len(image_paths)
for i in range(batch_size):
kpts.append({
'keypoints': preds[i].tolist(),
'center': boxes[i][0:2].tolist(),
'scale': boxes[i][2:4].tolist(),
'area': float(boxes[i][4]),
'score': float(boxes[i][5]),
'bbox_id': bbox_ids[i]
})
kpts = self._sort_and_unique_bboxes(kpts)
self._write_keypoint_results(kpts, res_file)
info_str = self._report_metric(res_file, metrics)
name_value = OrderedDict(info_str)
if tmp_folder is not None:
tmp_folder.cleanup()
return name_value
def _report_metric(self, res_file, metrics, pck_thr=0.3):
"""Keypoint evaluation.
Args:
res_file (str): Json file stored prediction results.
metrics (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
pck_thr (float): PCK threshold, default: 0.3.
Returns:
dict: Evaluation results for evaluation metric.
"""
info_str = []
with open(res_file, 'r') as fin:
preds = json.load(fin)
assert len(preds) == len(self.db)
outputs = []
gts = []
masks = []
for pred, item in zip(preds, self.db):
outputs.append(np.array(pred['keypoints'])[:, :-1])
gts.append(np.array(item['joints_3d'])[:, :-1])
masks.append((np.array(item['joints_3d_visible'])[:, 0]) > 0)
outputs = np.array(outputs)
gts = np.array(gts)
masks = np.array(masks)
normalize_factor = self._get_normalize_factor(gts)
if 'PCK' in metrics:
_, pck, _ = keypoint_pck_accuracy(outputs, gts, masks, pck_thr,
normalize_factor)
info_str.append(('PCK', pck))
if 'NME' in metrics:
info_str.append(
('NME', keypoint_nme(outputs, gts, masks, normalize_factor)))
return info_str
@staticmethod
def _write_keypoint_results(keypoints, res_file):
"""Write results into a json file."""
with open(res_file, 'w') as f:
json.dump(keypoints, f, sort_keys=True, indent=4)
@staticmethod
def _sort_and_unique_bboxes(kpts, key='bbox_id'):
"""sort kpts and remove the repeated ones."""
kpts = sorted(kpts, key=lambda x: x[key])
num = len(kpts)
for i in range(num - 1, 0, -1):
if kpts[i][key] == kpts[i - 1][key]:
del kpts[i]
return kpts
@staticmethod
def _get_normalize_factor(gts):
"""Get inter-ocular distance as the normalize factor, measured as the
Euclidean distance between the outer corners of the eyes.
Args:
gts (np.ndarray[N, K, 2]): Groundtruth keypoint location.
Return:
np.ndarray[N, 2]: normalized factor
"""
interocular = np.linalg.norm(
gts[:, 0, :] - gts[:, 1, :], axis=1, keepdims=True)
return np.tile(interocular, [1, 2])
###Output
_____no_output_____
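###Markdown
To get a feeling for what `_xywh2cs` does, the conversion can be replayed standalone: the bbox is re-centered, padded to the target aspect ratio, scaled by the conventional pixel-std of 200 and enlarged by 25%. A self-contained sketch using a toy bbox:
###Code
import numpy as np

def xywh2cs(x, y, w, h, image_size=(192, 256)):
    """Toy re-implementation of the bbox -> (center, scale) encoding above."""
    aspect_ratio = image_size[0] / image_size[1]
    center = np.array([x + w * 0.5, y + h * 0.5], dtype=np.float32)
    if w > aspect_ratio * h:
        h = w / aspect_ratio
    elif w < aspect_ratio * h:
        w = h * aspect_ratio
    scale = np.array([w / 200.0, h / 200.0], dtype=np.float32) * 1.25
    return center, scale

print(xywh2cs(267.03, 104.32, 229.19, 320))
###Output
_____no_output_____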
###Markdown
Create a config file. In the next step, we create a config file which configures the model, dataset and runtime settings. More information can be found at [Learn about Configs](https://mmpose.readthedocs.io/en/latest/tutorials/0_config.html). A common practice to create a config file is to derive it from an existing one. In this tutorial, we load a config file that trains an HRNet on the COCO dataset, and modify it to adapt to the COCOTiny dataset.
###Code
from mmcv import Config
cfg = Config.fromfile(
'./configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py'
)
# set basic configs
cfg.data_root = 'data/coco_tiny'
cfg.work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
cfg.gpu_ids = range(1)
cfg.seed = 0
# set log interval
cfg.log_config.interval = 1
# set evaluation configs
cfg.evaluation.interval = 10
cfg.evaluation.metric = 'PCK'
cfg.evaluation.save_best = 'PCK'
# set learning rate policy
cfg.lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=10,
    warmup_ratio=0.001,
    step=[17, 35])
cfg.total_epochs = 40
# set batch size
cfg.data.samples_per_gpu = 16
cfg.data.val_dataloader = dict(samples_per_gpu=16)
cfg.data.test_dataloader = dict(samples_per_gpu=16)
# set dataset configs
cfg.data.train.type = 'TopDownCOCOTinyDataset'
cfg.data.train.ann_file = f'{cfg.data_root}/train.json'
cfg.data.train.img_prefix = f'{cfg.data_root}/images/'
cfg.data.val.type = 'TopDownCOCOTinyDataset'
cfg.data.val.ann_file = f'{cfg.data_root}/val.json'
cfg.data.val.img_prefix = f'{cfg.data_root}/images/'
cfg.data.test.type = 'TopDownCOCOTinyDataset'
cfg.data.test.ann_file = f'{cfg.data_root}/val.json'
cfg.data.test.img_prefix = f'{cfg.data_root}/images/'
print(cfg.pretty_text)
###Output
dataset_info = dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(name='nose', id=0, color=[51, 153, 255], type='upper', swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(link=('right_ankle', 'right_knee'), id=2, color=[255, 128, 0]),
3:
dict(link=('right_knee', 'right_hip'), id=3, color=[255, 128, 0]),
4:
dict(link=('left_hip', 'right_hip'), id=4, color=[51, 153, 255]),
5:
dict(link=('left_shoulder', 'left_hip'), id=5, color=[51, 153, 255]),
6:
dict(link=('right_shoulder', 'right_hip'), id=6, color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(link=('left_shoulder', 'left_elbow'), id=8, color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'), id=9, color=[255, 128, 0]),
10:
dict(link=('left_elbow', 'left_wrist'), id=10, color=[0, 255, 0]),
11:
dict(link=('right_elbow', 'right_wrist'), id=11, color=[255, 128, 0]),
12:
dict(link=('left_eye', 'right_eye'), id=12, color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(link=('left_eye', 'left_ear'), id=15, color=[51, 153, 255]),
16:
dict(link=('right_eye', 'right_ear'), id=16, color=[51, 153, 255]),
17:
dict(link=('left_ear', 'left_shoulder'), id=17, color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'), id=18, color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0, 1.0, 1.2,
1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072, 0.062,
0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])
log_level = 'INFO'
load_from = None
resume_from = None
dist_params = dict(backend='nccl')
workflow = [('train', 1)]
checkpoint_config = dict(interval=10)
evaluation = dict(interval=10, metric='PCK', save_best='PCK')
optimizer = dict(type='Adam', lr=0.0005)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=0.001,
step=[170, 200])
total_epochs = 40
log_config = dict(interval=1, hooks=[dict(type='TextLoggerHook')])
channel_cfg = dict(
num_output_channels=17,
dataset_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
])
model = dict(
type='TopDown',
pretrained=
'https://download.openmmlab.com/mmpose/pretrain_models/hrnet_w32-36af842e.pth',
backbone=dict(
type='HRNet',
in_channels=3,
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
num_channels=(64, )),
stage2=dict(
num_modules=1,
num_branches=2,
block='BASIC',
num_blocks=(4, 4),
num_channels=(32, 64)),
stage3=dict(
num_modules=4,
num_branches=3,
block='BASIC',
num_blocks=(4, 4, 4),
num_channels=(32, 64, 128)),
stage4=dict(
num_modules=3,
num_branches=4,
block='BASIC',
num_blocks=(4, 4, 4, 4),
num_channels=(32, 64, 128, 256)))),
keypoint_head=dict(
type='TopdownHeatmapSimpleHead',
in_channels=32,
out_channels=17,
num_deconv_layers=0,
extra=dict(final_conv_kernel=1),
loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True)),
train_cfg=dict(),
test_cfg=dict(
flip_test=True,
post_process='default',
shift_heatmap=True,
modulate_kernel=11))
data_cfg = dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation', rot_factor=40, scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center', 'scale',
'rotation', 'bbox_score', 'flip_pairs'
])
]
val_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
data_root = 'data/coco_tiny'
data = dict(
samples_per_gpu=16,
workers_per_gpu=2,
val_dataloader=dict(samples_per_gpu=16),
test_dataloader=dict(samples_per_gpu=16),
train=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/train.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation',
rot_factor=40,
scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center',
'scale', 'rotation', 'bbox_score', 'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])),
val=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])),
test=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])))
work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
gpu_ids = range(0, 1)
seed = 0
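###Markdown
Before launching training it is cheap to assert that the modifications above actually landed in the config. A minimal sanity check on a few of the overridden fields:
###Code
# quick sanity check of the customized config fields
assert cfg.data.train.type == 'TopDownCOCOTinyDataset'
assert cfg.data.train.ann_file == 'data/coco_tiny/train.json'
assert cfg.data.samples_per_gpu == 16
assert cfg.total_epochs == 40
print('evaluation:', cfg.evaluation)
print('work_dir:', cfg.work_dir)
###Output
_____no_output_____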
###Markdown
Train and Evaluation
###Code
from mmpose.datasets import build_dataset
from mmpose.models import build_posenet
from mmpose.apis import train_model
import mmcv
# build dataset
datasets = [build_dataset(cfg.data.train)]
# build model
model = build_posenet(cfg.model)
# create work_dir
mmcv.mkdir_or_exist(cfg.work_dir)
# train model
train_model(
model, datasets, cfg, distributed=False, validate=True, meta=dict())
###Output
Use load_from_http loader
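###Markdown
The checkpoints written by the training run can be inspected directly with `torch.load`; mmcv checkpoints typically carry a `meta` section (including the epoch) in addition to the `state_dict`. A small sketch, assuming the default file layout in the work directory:
###Code
import glob
import os.path as osp
import torch

# peek into each saved checkpoint and report its metadata
work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
for path in sorted(glob.glob(osp.join(work_dir, '*.pth'))):
    ckpt = torch.load(path, map_location='cpu')
    meta = ckpt.get('meta', {})
    print(osp.basename(path), '-> epoch:', meta.get('epoch'), '| keys:', list(ckpt.keys()))
###Output
_____no_output_____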
###Markdown
Test the trained model. Since the model is trained on the toy dataset coco-tiny, its performance will not be as good as the models in our model zoo. Here we mainly show how to run inference with a local model checkpoint and visualize the results.
###Code
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
import cv2
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_checkpoint = 'work_dirs/hrnet_w32_coco_tiny_256x192/latest.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(cfg, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset='TopDownCocoDataset')
# show pose estimation results
vis_result = vis_pose_result(pose_model,
img,
pose_results,
kpt_score_thr=0.,
dataset='TopDownCocoDataset',
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
import cv2
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_local loader
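###Markdown
If the predictions need to be consumed elsewhere, they can be flattened into a simple, COCO-like list of keypoint records. This is only a sketch and assumes the per-person dict structure (`bbox`, `keypoints`) returned above; the output file name is arbitrary.
###Code
import json
import numpy as np

# dump the per-person predictions to a small JSON file
records = []
for person in pose_results:
    keypoints = np.asarray(person['keypoints'])  # (17, 3): x, y, score
    records.append({
        'image_file': img,
        'bbox': np.asarray(person['bbox']).tolist(),
        'keypoints': keypoints.reshape(-1).tolist(),
        'score': float(keypoints[:, 2].mean()),
    })
with open('pose_predictions.json', 'w') as f:
    json.dump(records, f, indent=2)
print('wrote', len(records), 'person records')
###Output
_____no_output_____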
###Markdown
MMPose Tutorial. Welcome to the MMPose Colab tutorial! In this tutorial, we will show you how to (1) perform inference with an MMPose model and (2) train a new MMPose model on your own dataset. Let's start! Install MMPose. We recommend using a conda environment to install MMPose and its dependencies, and the compilers `nvcc` and `gcc` are required.
###Code
# check NVCC version
!nvcc -V
# check GCC version
!gcc --version
# check python in conda environment
!which python
# install pytorch
!pip install torch
# install mmcv-full
!pip install mmcv-full
# install mmdet for inference demo
!pip install mmdet
# clone mmpose repo
!rm -rf mmpose
!git clone https://github.com/open-mmlab/mmpose.git
%cd mmpose
# install mmpose dependencies
!pip install -r requirements.txt
# install mmpose in develop mode
!pip install -e .
# Check Pytorch installation
import torch, torchvision
print('torch version:', torch.__version__, torch.cuda.is_available())
print('torchvision version:', torchvision.__version__)
# Check MMPose installation
import mmpose
print('mmpose version:', mmpose.__version__)
# Check mmcv installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print('cuda version:', get_compiling_cuda_version())
print('compiler information:', get_compiler_version())
###Output
torch version: 1.9.0+cu111 True
torchvision version: 0.10.0+cu111
mmpose version: 0.18.0
cuda version: 11.1
compiler information: GCC 9.3
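###Markdown
A GPU is strongly recommended for training; an optional check of the visible CUDA device:
###Code
import torch

# report the GPU that will be used for training, if any
if torch.cuda.is_available():
    print('GPU:', torch.cuda.get_device_name(0))
else:
    print('No CUDA device found; training will fall back to CPU and be very slow.')
###Output
_____no_output_____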
###Markdown
Inference with an MMPose model. MMPose provides high-level APIs for model inference and training.
###Code
import cv2
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_config = 'configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w48_coco_256x192.py'
pose_checkpoint = 'https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(pose_config, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset=pose_model.cfg.data.test.type)
# show pose estimation results
vis_result = vis_pose_result(pose_model,
img,
pose_results,
dataset=pose_model.cfg.data.test.type,
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_http loader
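###Markdown
The detector occasionally returns low-confidence person boxes; besides passing `bbox_thr` to the pose API, they can also be filtered explicitly beforehand. A minimal sketch, assuming each entry of `person_results` is a dict whose `bbox` is `[x1, y1, x2, y2, score]`:
###Code
# keep only confidently detected persons before running pose estimation
min_score = 0.5
filtered_person_results = [
    person for person in person_results if person['bbox'][4] >= min_score
]
print(f'{len(filtered_person_results)} of {len(person_results)} detections kept')
###Output
_____no_output_____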
###Markdown
Train a pose estimation model on a customized dataset. To train a model on a customized dataset with MMPose, there are usually three steps: 1. support the dataset in MMPose; 2. create a config; 3. perform training and evaluation. Add a new dataset. There are two methods to support a customized dataset in MMPose. The first one is to convert the data to a supported format (e.g. COCO) and use the corresponding dataset class (e.g. TopDownCocoDataset), as described in the [document](https://mmpose.readthedocs.io/en/latest/tutorials/2_new_dataset.htmlreorganize-dataset-to-existing-format). The second one is to add a new dataset class. In this tutorial, we give an example of the second method. We first download the demo dataset, which contains 100 samples (75 for training and 25 for validation) selected from the COCO train2017 dataset. The annotations are stored in a different format from the original COCO format.
###Code
# download dataset
%mkdir data
%cd data
!wget https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmpose/datasets/coco_tiny.tar
!tar -xf coco_tiny.tar
%cd ..
# check the directory structure
!apt-get -q install tree
!tree data/coco_tiny
# check the annotation format
import json
import pprint
anns = json.load(open('data/coco_tiny/train.json'))
print(type(anns), len(anns))
pprint.pprint(anns[0], compact=True)
###Output
<class 'list'> 75
{'bbox': [267.03, 104.32, 229.19, 320],
'image_file': '000000537548.jpg',
'image_size': [640, 480],
'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 325, 160, 2, 398,
177, 2, 0, 0, 0, 437, 238, 2, 0, 0, 0, 477, 270, 2, 287, 255, 1,
339, 267, 2, 0, 0, 0, 423, 314, 2, 0, 0, 0, 355, 367, 2]}
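###Markdown
A quick, optional sanity pass over the annotation list can catch malformed samples before writing a dataset class; each record is expected to carry a 4-element bbox, an image size and 17 keypoint triplets, as shown in the output above.
###Code
import json

# verify the expected fields of every annotation record
anns = json.load(open('data/coco_tiny/train.json'))
for ann in anns:
    assert len(ann['bbox']) == 4
    assert len(ann['image_size']) == 2
    assert len(ann['keypoints']) == 17 * 3
visible = [sum(1 for v in ann['keypoints'][2::3] if v > 0) for ann in anns]
print('samples:', len(anns),
      f'| visible keypoints per sample: min={min(visible)}, max={max(visible)}')
###Output
_____no_output_____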
###Markdown
After downloading the data, we implement a new dataset class to load data samples for model training and validation. Assuming that we are going to train a top-down pose estimation model (refer to [Top-down Pose Estimation](https://github.com/open-mmlab/mmpose/tree/master/configs/body/2d_kpt_sview_rgb_img/topdown_heatmapreadme) for a brief introduction), the new dataset class inherits `Kpt2dSviewRgbImgTopDownDataset`.
###Code
import json
import os
import os.path as osp
from collections import OrderedDict
import numpy as np
from mmpose.core.evaluation.top_down_eval import (keypoint_nme,
keypoint_pck_accuracy)
from mmpose.datasets.builder import DATASETS
from mmpose.datasets.datasets.base import Kpt2dSviewRgbImgTopDownDataset
@DATASETS.register_module()
class TopDownCOCOTinyDataset(Kpt2dSviewRgbImgTopDownDataset):
def __init__(self,
ann_file,
img_prefix,
data_cfg,
pipeline,
dataset_info=None,
test_mode=False):
super().__init__(
ann_file, img_prefix, data_cfg, pipeline, dataset_info, coco_style=False, test_mode=test_mode)
# flip_pairs, upper_body_ids and lower_body_ids will be used
# in some data augmentations like random flip
self.ann_info['flip_pairs'] = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10],
[11, 12], [13, 14], [15, 16]]
self.ann_info['upper_body_ids'] = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
self.ann_info['lower_body_ids'] = (11, 12, 13, 14, 15, 16)
self.ann_info['joint_weights'] = None
self.ann_info['use_different_joint_weights'] = False
self.dataset_name = 'coco_tiny'
self.db = self._get_db()
def _get_db(self):
with open(self.ann_file) as f:
anns = json.load(f)
db = []
for idx, ann in enumerate(anns):
# get image path
image_file = osp.join(self.img_prefix, ann['image_file'])
# get bbox
bbox = ann['bbox']
center, scale = self._xywh2cs(*bbox)
# get keypoints
keypoints = np.array(
ann['keypoints'], dtype=np.float32).reshape(-1, 3)
num_joints = keypoints.shape[0]
joints_3d = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d[:, :2] = keypoints[:, :2]
joints_3d_visible = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d_visible[:, :2] = np.minimum(1, keypoints[:, 2:3])
sample = {
'image_file': image_file,
'center': center,
'scale': scale,
'bbox': bbox,
'rotation': 0,
'joints_3d': joints_3d,
'joints_3d_visible': joints_3d_visible,
'bbox_score': 1,
'bbox_id': idx,
}
db.append(sample)
return db
def _xywh2cs(self, x, y, w, h):
"""This encodes bbox(x, y, w, h) into (center, scale)
Args:
x, y, w, h
Returns:
tuple: A tuple containing center and scale.
- center (np.ndarray[float32](2,)): center of the bbox (x, y).
- scale (np.ndarray[float32](2,)): scale of the bbox w & h.
"""
aspect_ratio = self.ann_info['image_size'][0] / self.ann_info[
'image_size'][1]
center = np.array([x + w * 0.5, y + h * 0.5], dtype=np.float32)
if w > aspect_ratio * h:
h = w * 1.0 / aspect_ratio
elif w < aspect_ratio * h:
w = h * aspect_ratio
# pixel std is 200.0
scale = np.array([w / 200.0, h / 200.0], dtype=np.float32)
# padding to include proper amount of context
scale = scale * 1.25
return center, scale
def evaluate(self, outputs, res_folder, metric='PCK', **kwargs):
"""Evaluate keypoint detection results. The pose prediction results will
be saved in `${res_folder}/result_keypoints.json`.
Note:
batch_size: N
num_keypoints: K
heatmap height: H
heatmap width: W
Args:
outputs (list(preds, boxes, image_path, output_heatmap))
:preds (np.ndarray[N,K,3]): The first two dimensions are
coordinates, score is the third dimension of the array.
:boxes (np.ndarray[N,6]): [center[0], center[1], scale[0]
, scale[1],area, score]
:image_paths (list[str]): For example, ['Test/source/0.jpg']
:output_heatmap (np.ndarray[N, K, H, W]): model outputs.
res_folder (str): Path of directory to save the results.
metric (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
Returns:
dict: Evaluation results for evaluation metric.
"""
metrics = metric if isinstance(metric, list) else [metric]
allowed_metrics = ['PCK', 'NME']
for metric in metrics:
if metric not in allowed_metrics:
raise KeyError(f'metric {metric} is not supported')
res_file = os.path.join(res_folder, 'result_keypoints.json')
kpts = []
for output in outputs:
preds = output['preds']
boxes = output['boxes']
image_paths = output['image_paths']
bbox_ids = output['bbox_ids']
batch_size = len(image_paths)
for i in range(batch_size):
kpts.append({
'keypoints': preds[i].tolist(),
'center': boxes[i][0:2].tolist(),
'scale': boxes[i][2:4].tolist(),
'area': float(boxes[i][4]),
'score': float(boxes[i][5]),
'bbox_id': bbox_ids[i]
})
kpts = self._sort_and_unique_bboxes(kpts)
self._write_keypoint_results(kpts, res_file)
info_str = self._report_metric(res_file, metrics)
name_value = OrderedDict(info_str)
return name_value
def _report_metric(self, res_file, metrics, pck_thr=0.3):
"""Keypoint evaluation.
Args:
res_file (str): Json file stored prediction results.
metrics (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
pck_thr (float): PCK threshold, default: 0.3.
Returns:
dict: Evaluation results for evaluation metric.
"""
info_str = []
with open(res_file, 'r') as fin:
preds = json.load(fin)
assert len(preds) == len(self.db)
outputs = []
gts = []
masks = []
for pred, item in zip(preds, self.db):
outputs.append(np.array(pred['keypoints'])[:, :-1])
gts.append(np.array(item['joints_3d'])[:, :-1])
masks.append((np.array(item['joints_3d_visible'])[:, 0]) > 0)
outputs = np.array(outputs)
gts = np.array(gts)
masks = np.array(masks)
normalize_factor = self._get_normalize_factor(gts)
if 'PCK' in metrics:
_, pck, _ = keypoint_pck_accuracy(outputs, gts, masks, pck_thr,
normalize_factor)
info_str.append(('PCK', pck))
if 'NME' in metrics:
info_str.append(
('NME', keypoint_nme(outputs, gts, masks, normalize_factor)))
return info_str
@staticmethod
def _write_keypoint_results(keypoints, res_file):
"""Write results into a json file."""
with open(res_file, 'w') as f:
json.dump(keypoints, f, sort_keys=True, indent=4)
@staticmethod
def _sort_and_unique_bboxes(kpts, key='bbox_id'):
"""sort kpts and remove the repeated ones."""
kpts = sorted(kpts, key=lambda x: x[key])
num = len(kpts)
for i in range(num - 1, 0, -1):
if kpts[i][key] == kpts[i - 1][key]:
del kpts[i]
return kpts
@staticmethod
def _get_normalize_factor(gts):
"""Get inter-ocular distance as the normalize factor, measured as the
Euclidean distance between the outer corners of the eyes.
Args:
gts (np.ndarray[N, K, 2]): Groundtruth keypoint location.
Return:
np.ndarray[N, 2]: normalized factor
"""
interocular = np.linalg.norm(
gts[:, 0, :] - gts[:, 1, :], axis=1, keepdims=True)
return np.tile(interocular, [1, 2])
###Output
_____no_output_____
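###Markdown
The PCK metric used in `evaluate` can be exercised on toy data to see the expected shapes: predictions and ground truths are `(N, K, 2)`, the mask is `(N, K)` and the normalization factor is `(N, 2)`, mirroring the call in `_report_metric`. A hedged sketch with random inputs:
###Code
import numpy as np
from mmpose.core.evaluation.top_down_eval import keypoint_pck_accuracy

# build toy predictions as small perturbations of random ground-truth keypoints
N, K = 4, 17
rng = np.random.default_rng(0)
gts = rng.uniform(0, 100, size=(N, K, 2))
preds = gts + rng.normal(0, 2, size=(N, K, 2))
masks = np.ones((N, K), dtype=bool)
normalize = np.full((N, 2), 10.0)  # stand-in for the inter-ocular distance

_, pck, _ = keypoint_pck_accuracy(preds, gts, masks, 0.3, normalize)
print('toy PCK:', pck)
###Output
_____no_output_____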
###Markdown
Create a config file. In the next step, we create a config file which configures the model, dataset and runtime settings. More information can be found at [Learn about Configs](https://mmpose.readthedocs.io/en/latest/tutorials/0_config.html). A common practice to create a config file is to derive it from an existing one. In this tutorial, we load a config file that trains an HRNet on the COCO dataset, and modify it to adapt to the COCOTiny dataset.
###Code
from mmcv import Config
cfg = Config.fromfile(
'./configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py'
)
# set basic configs
cfg.data_root = 'data/coco_tiny'
cfg.work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
cfg.gpu_ids = range(1)
cfg.seed = 0
# set log interval
cfg.log_config.interval = 1
# set evaluation configs
cfg.evaluation.interval = 10
cfg.evaluation.metric = 'PCK'
cfg.evaluation.save_best = 'PCK'
# set learning rate policy
cfg.lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=10,
    warmup_ratio=0.001,
    step=[17, 35])
cfg.total_epochs = 40
# set batch size
cfg.data.samples_per_gpu = 16
cfg.data.val_dataloader = dict(samples_per_gpu=16)
cfg.data.test_dataloader = dict(samples_per_gpu=16)
# set dataset configs
cfg.data.train.type = 'TopDownCOCOTinyDataset'
cfg.data.train.ann_file = f'{cfg.data_root}/train.json'
cfg.data.train.img_prefix = f'{cfg.data_root}/images/'
cfg.data.val.type = 'TopDownCOCOTinyDataset'
cfg.data.val.ann_file = f'{cfg.data_root}/val.json'
cfg.data.val.img_prefix = f'{cfg.data_root}/images/'
cfg.data.test.type = 'TopDownCOCOTinyDataset'
cfg.data.test.ann_file = f'{cfg.data_root}/val.json'
cfg.data.test.img_prefix = f'{cfg.data_root}/images/'
print(cfg.pretty_text)
###Output
dataset_info = dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(name='nose', id=0, color=[51, 153, 255], type='upper', swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(link=('right_ankle', 'right_knee'), id=2, color=[255, 128, 0]),
3:
dict(link=('right_knee', 'right_hip'), id=3, color=[255, 128, 0]),
4:
dict(link=('left_hip', 'right_hip'), id=4, color=[51, 153, 255]),
5:
dict(link=('left_shoulder', 'left_hip'), id=5, color=[51, 153, 255]),
6:
dict(link=('right_shoulder', 'right_hip'), id=6, color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(link=('left_shoulder', 'left_elbow'), id=8, color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'), id=9, color=[255, 128, 0]),
10:
dict(link=('left_elbow', 'left_wrist'), id=10, color=[0, 255, 0]),
11:
dict(link=('right_elbow', 'right_wrist'), id=11, color=[255, 128, 0]),
12:
dict(link=('left_eye', 'right_eye'), id=12, color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(link=('left_eye', 'left_ear'), id=15, color=[51, 153, 255]),
16:
dict(link=('right_eye', 'right_ear'), id=16, color=[51, 153, 255]),
17:
dict(link=('left_ear', 'left_shoulder'), id=17, color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'), id=18, color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0, 1.0, 1.2,
1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072, 0.062,
0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])
log_level = 'INFO'
load_from = None
resume_from = None
dist_params = dict(backend='nccl')
workflow = [('train', 1)]
checkpoint_config = dict(interval=10)
evaluation = dict(interval=10, metric='PCK', save_best='PCK')
optimizer = dict(type='Adam', lr=0.0005)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=0.001,
step=[170, 200])
total_epochs = 40
log_config = dict(interval=1, hooks=[dict(type='TextLoggerHook')])
channel_cfg = dict(
num_output_channels=17,
dataset_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
])
model = dict(
type='TopDown',
pretrained=
'https://download.openmmlab.com/mmpose/pretrain_models/hrnet_w32-36af842e.pth',
backbone=dict(
type='HRNet',
in_channels=3,
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
num_channels=(64, )),
stage2=dict(
num_modules=1,
num_branches=2,
block='BASIC',
num_blocks=(4, 4),
num_channels=(32, 64)),
stage3=dict(
num_modules=4,
num_branches=3,
block='BASIC',
num_blocks=(4, 4, 4),
num_channels=(32, 64, 128)),
stage4=dict(
num_modules=3,
num_branches=4,
block='BASIC',
num_blocks=(4, 4, 4, 4),
num_channels=(32, 64, 128, 256)))),
keypoint_head=dict(
type='TopdownHeatmapSimpleHead',
in_channels=32,
out_channels=17,
num_deconv_layers=0,
extra=dict(final_conv_kernel=1),
loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True)),
train_cfg=dict(),
test_cfg=dict(
flip_test=True,
post_process='default',
shift_heatmap=True,
modulate_kernel=11))
data_cfg = dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation', rot_factor=40, scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center', 'scale',
'rotation', 'bbox_score', 'flip_pairs'
])
]
val_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
data_root = 'data/coco_tiny'
data = dict(
samples_per_gpu=16,
workers_per_gpu=2,
val_dataloader=dict(samples_per_gpu=16),
test_dataloader=dict(samples_per_gpu=16),
train=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/train.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation',
rot_factor=40,
scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center',
'scale', 'rotation', 'bbox_score', 'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])),
val=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])),
test=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])))
work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
gpu_ids = range(0, 1)
seed = 0
###Markdown
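As an aside, the same overrides applied above could also be set in one call. The short sketch below is a hedged example assuming mmcv's `Config.merge_from_dict`, which accepts dot-separated keys and is the mechanism typically used by the `--cfg-options` flag of the OpenMMLab command-line tools; the keys shown simply mirror a few of the assignments made earlier.
###Code
# A sketch (assuming mmcv's Config.merge_from_dict) of applying several of the
# overrides above at once with dot-separated keys.
options = {
    'data.samples_per_gpu': 16,
    'evaluation.interval': 10,
    'evaluation.metric': 'PCK',
    'evaluation.save_best': 'PCK',
    'total_epochs': 40,
}
cfg.merge_from_dict(options)
###Output
_____no_output_____
###Markdown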
Train and Evaluation
###Code
from mmpose.datasets import build_dataset
from mmpose.models import build_posenet
from mmpose.apis import train_model
import mmcv
# build dataset
datasets = [build_dataset(cfg.data.train)]
# build model
model = build_posenet(cfg.model)
# create work_dir
mmcv.mkdir_or_exist(cfg.work_dir)
# train model
train_model(
model, datasets, cfg, distributed=False, validate=True, meta=dict())
###Output
Use load_from_http loader
###Markdown
Test the trained model. Since the model is trained on the toy dataset coco-tiny, its performance would not be as good as the models in our model zoo. Here we mainly show how to run inference with a local model checkpoint and visualize the results.
###Code
import cv2
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_checkpoint = 'work_dirs/hrnet_w32_coco_tiny_256x192/latest.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(cfg, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset='TopDownCocoDataset')
# show pose estimation results
vis_result = vis_pose_result(pose_model,
img,
pose_results,
kpt_score_thr=0.,
dataset='TopDownCocoDataset',
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
import cv2
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_local loader
###Markdown
MMPose TutorialWelcome to MMPose colab tutorial! In this tutorial, we will show you how to- perform inference with an MMPose model- train a new mmpose model with your own datasetsLet's start! Install MMPoseWe recommend using a conda environment to install mmpose and its dependencies. The compilers `nvcc` and `gcc` are required.
###Code
# check NVCC version
!nvcc -V
# check GCC version
!gcc --version
# check python in conda environment
!which python
# install pytorch
!pip install torch
# install mmcv-full
!pip install mmcv-full
# install mmdet for inference demo
!pip install mmdet
# clone mmpose repo
!rm -rf mmpose
!git clone https://github.com/open-mmlab/mmpose.git
%cd mmpose
# install mmpose dependencies
!pip install -r requirements.txt
# install mmpose in develop mode
!pip install -e .
# Check Pytorch installation
import torch, torchvision
print('torch version:', torch.__version__, torch.cuda.is_available())
print('torchvision version:', torchvision.__version__)
# Check MMPose installation
import mmpose
print('mmpose version:', mmpose.__version__)
# Check mmcv installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print('cuda version:', get_compiling_cuda_version())
print('compiler information:', get_compiler_version())
###Output
torch version: 1.9.0+cu102 True
torchvision version: 0.10.0+cu102
mmpose version: 0.16.0
cuda version: 10.2
compiler information: GCC 5.4
###Markdown
Inference with an MMPose modelMMPose provides high level APIs for model inference and training.
###Code
import cv2
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_config = 'configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w48_coco_256x192.py'
pose_checkpoint = 'https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(pose_config, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset=pose_model.cfg.data.test.type)
# show pose estimation results
vis_result = vis_pose_result(pose_model,
img,
pose_results,
dataset=pose_model.cfg.data.test.type,
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_http loader
###Markdown
Train a pose estimation model on a customized datasetTo train a model on a customized dataset with MMPose, there are usually three steps:1. Support the dataset in MMPose1. Create a config1. Perform training and evaluation Add a new datasetThere are two methods to support a customized dataset in MMPose. The first one is to convert the data to a supported format (e.g. COCO) and use the corresponding dataset class (e.g. TopdownCOCODataset), as described in the [document](https://mmpose.readthedocs.io/en/latest/tutorials/2_new_dataset.htmlreorganize-dataset-to-existing-format); a rough sketch of such a conversion is shown below, after we inspect the annotation format. The second one is to add a new dataset class. In this tutorial, we give an example of the second method. We first download the demo dataset, which contains 100 samples (75 for training and 25 for validation) selected from the COCO train2017 dataset. The annotations are stored in a different format from the original COCO format.
###Code
# download dataset
%mkdir data
%cd data
!wget https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmpose/datasets/coco_tiny.tar
!tar -xf coco_tiny.tar
%cd ..
# check the directory structure
!apt-get -q install tree
!tree data/coco_tiny
# check the annotation format
import json
import pprint
anns = json.load(open('data/coco_tiny/train.json'))
print(type(anns), len(anns))
pprint.pprint(anns[0], compact=True)
###Output
<class 'list'> 75
{'bbox': [267.03, 104.32, 229.19, 320],
'image_file': '000000537548.jpg',
'image_size': [640, 480],
'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 325, 160, 2, 398,
177, 2, 0, 0, 0, 437, 238, 2, 0, 0, 0, 477, 270, 2, 287, 255, 1,
339, 267, 2, 0, 0, 0, 423, 314, 2, 0, 0, 0, 355, 367, 2]}
###Markdown
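As a reference for the first method mentioned above (converting the data to a supported format), a conversion of these annotations into the standard COCO keypoint layout would look roughly like the sketch below. This is a minimal, hypothetical sketch based only on the fields shown in the annotation above: the `convert_tiny_to_coco` helper is not part of MMPose, it does not de-duplicate images that appear in several annotations, and the keypoint names and skeleton of the `person` category are left empty for brevity.
###Code
# Hypothetical sketch: convert the COCO-Tiny annotation list into the standard
# COCO keypoint format, so a built-in dataset class such as TopDownCocoDataset
# could be reused instead of writing a new one.
import json

def convert_tiny_to_coco(tiny_anns):
    images, annotations = [], []
    for idx, ann in enumerate(tiny_anns):
        width, height = ann['image_size']
        # one image entry per annotation (a real converter would de-duplicate)
        images.append(dict(
            id=idx, file_name=ann['image_file'], width=width, height=height))
        keypoints = ann['keypoints']
        # every third value in the flat keypoint list is a visibility flag
        num_keypoints = sum(1 for v in keypoints[2::3] if v > 0)
        _, _, w, h = ann['bbox']
        annotations.append(dict(
            id=idx, image_id=idx, category_id=1, bbox=ann['bbox'],
            area=w * h, iscrowd=0, keypoints=keypoints,
            num_keypoints=num_keypoints))
    # keypoint names and skeleton omitted here for brevity
    categories = [dict(id=1, name='person', keypoints=[], skeleton=[])]
    return dict(images=images, annotations=annotations, categories=categories)

# Example usage (paths follow the directory structure above):
# tiny_anns = json.load(open('data/coco_tiny/train.json'))
# json.dump(convert_tiny_to_coco(tiny_anns),
#           open('data/coco_tiny/train_coco.json', 'w'))
###Output
_____no_output_____
###Markdown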
After downloading the data, we implement a new dataset class to load data samples for model training and validation. Assuming that we are going to train a top-down pose estimation model (refer to [Top-down Pose Estimation](https://github.com/open-mmlab/mmpose/tree/master/configs/body/2d_kpt_sview_rgb_img/topdown_heatmapreadme) for a brief introduction), the new dataset class inherits `TopDownBaseDataset`.
###Code
import json
import os
import os.path as osp
from collections import OrderedDict
import numpy as np
from mmpose.core.evaluation.top_down_eval import (keypoint_nme,
keypoint_pck_accuracy)
from mmpose.datasets.builder import DATASETS
from mmpose.datasets.datasets.top_down.topdown_base_dataset import \
TopDownBaseDataset
@DATASETS.register_module()
class TopDownCOCOTinyDataset(TopDownBaseDataset):
def __init__(self,
ann_file,
img_prefix,
data_cfg,
pipeline,
test_mode=False):
super().__init__(
ann_file, img_prefix, data_cfg, pipeline, test_mode=test_mode)
# flip_pairs, upper_body_ids and lower_body_ids will be used
# in some data augmentations like random flip
self.ann_info['flip_pairs'] = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10],
[11, 12], [13, 14], [15, 16]]
self.ann_info['upper_body_ids'] = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
self.ann_info['lower_body_ids'] = (11, 12, 13, 14, 15, 16)
self.ann_info['joint_weights'] = None
self.ann_info['use_different_joint_weights'] = False
self.dataset_name = 'coco_tiny'
self.db = self._get_db()
def _get_db(self):
with open(self.annotations_path) as f:
anns = json.load(f)
db = []
for idx, ann in enumerate(anns):
# get image path
image_file = osp.join(self.img_prefix, ann['image_file'])
# get bbox
bbox = ann['bbox']
center, scale = self._xywh2cs(*bbox)
# get keypoints
keypoints = np.array(
ann['keypoints'], dtype=np.float32).reshape(-1, 3)
num_joints = keypoints.shape[0]
joints_3d = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d[:, :2] = keypoints[:, :2]
joints_3d_visible = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d_visible[:, :2] = np.minimum(1, keypoints[:, 2:3])
sample = {
'image_file': image_file,
'center': center,
'scale': scale,
'bbox': bbox,
'rotation': 0,
'joints_3d': joints_3d,
'joints_3d_visible': joints_3d_visible,
'bbox_score': 1,
'bbox_id': idx,
}
db.append(sample)
return db
def _xywh2cs(self, x, y, w, h):
"""This encodes bbox(x, y, w, h) into (center, scale)
Args:
x, y, w, h
Returns:
tuple: A tuple containing center and scale.
- center (np.ndarray[float32](2,)): center of the bbox (x, y).
- scale (np.ndarray[float32](2,)): scale of the bbox w & h.
"""
aspect_ratio = self.ann_info['image_size'][0] / self.ann_info[
'image_size'][1]
center = np.array([x + w * 0.5, y + h * 0.5], dtype=np.float32)
if w > aspect_ratio * h:
h = w * 1.0 / aspect_ratio
elif w < aspect_ratio * h:
w = h * aspect_ratio
# pixel std is 200.0
scale = np.array([w / 200.0, h / 200.0], dtype=np.float32)
# padding to include proper amount of context
scale = scale * 1.25
return center, scale
def evaluate(self, outputs, res_folder, metric='PCK', **kwargs):
"""Evaluate keypoint detection results. The pose prediction results will
be saved in `${res_folder}/result_keypoints.json`.
Note:
batch_size: N
num_keypoints: K
heatmap height: H
heatmap width: W
Args:
outputs (list(preds, boxes, image_path, output_heatmap))
:preds (np.ndarray[N,K,3]): The first two dimensions are
coordinates, score is the third dimension of the array.
:boxes (np.ndarray[N,6]): [center[0], center[1], scale[0]
, scale[1],area, score]
:image_paths (list[str]): For example, ['Test/source/0.jpg']
                :output_heatmap (np.ndarray[N, K, H, W]): model outputs.
res_folder (str): Path of directory to save the results.
metric (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
Returns:
dict: Evaluation results for evaluation metric.
"""
metrics = metric if isinstance(metric, list) else [metric]
allowed_metrics = ['PCK', 'NME']
for metric in metrics:
if metric not in allowed_metrics:
raise KeyError(f'metric {metric} is not supported')
res_file = os.path.join(res_folder, 'result_keypoints.json')
kpts = []
for output in outputs:
preds = output['preds']
boxes = output['boxes']
image_paths = output['image_paths']
bbox_ids = output['bbox_ids']
batch_size = len(image_paths)
for i in range(batch_size):
kpts.append({
'keypoints': preds[i].tolist(),
'center': boxes[i][0:2].tolist(),
'scale': boxes[i][2:4].tolist(),
'area': float(boxes[i][4]),
'score': float(boxes[i][5]),
'bbox_id': bbox_ids[i]
})
kpts = self._sort_and_unique_bboxes(kpts)
self._write_keypoint_results(kpts, res_file)
info_str = self._report_metric(res_file, metrics)
name_value = OrderedDict(info_str)
return name_value
def _report_metric(self, res_file, metrics, pck_thr=0.3):
"""Keypoint evaluation.
Args:
res_file (str): Json file stored prediction results.
metrics (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
pck_thr (float): PCK threshold, default: 0.3.
Returns:
dict: Evaluation results for evaluation metric.
"""
info_str = []
with open(res_file, 'r') as fin:
preds = json.load(fin)
assert len(preds) == len(self.db)
outputs = []
gts = []
masks = []
for pred, item in zip(preds, self.db):
outputs.append(np.array(pred['keypoints'])[:, :-1])
gts.append(np.array(item['joints_3d'])[:, :-1])
masks.append((np.array(item['joints_3d_visible'])[:, 0]) > 0)
outputs = np.array(outputs)
gts = np.array(gts)
masks = np.array(masks)
normalize_factor = self._get_normalize_factor(gts)
if 'PCK' in metrics:
_, pck, _ = keypoint_pck_accuracy(outputs, gts, masks, pck_thr,
normalize_factor)
info_str.append(('PCK', pck))
if 'NME' in metrics:
info_str.append(
('NME', keypoint_nme(outputs, gts, masks, normalize_factor)))
return info_str
@staticmethod
def _write_keypoint_results(keypoints, res_file):
"""Write results into a json file."""
with open(res_file, 'w') as f:
json.dump(keypoints, f, sort_keys=True, indent=4)
@staticmethod
def _sort_and_unique_bboxes(kpts, key='bbox_id'):
"""sort kpts and remove the repeated ones."""
kpts = sorted(kpts, key=lambda x: x[key])
num = len(kpts)
for i in range(num - 1, 0, -1):
if kpts[i][key] == kpts[i - 1][key]:
del kpts[i]
return kpts
@staticmethod
def _get_normalize_factor(gts):
"""Get inter-ocular distance as the normalize factor, measured as the
Euclidean distance between the outer corners of the eyes.
Args:
gts (np.ndarray[N, K, 2]): Groundtruth keypoint location.
Return:
np.ndarray[N, 2]: normalized factor
"""
interocular = np.linalg.norm(
gts[:, 0, :] - gts[:, 1, :], axis=1, keepdims=True)
return np.tile(interocular, [1, 2])
###Output
_____no_output_____
###Markdown
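As a quick sanity check of the `_xywh2cs` helper defined above, the first training bbox shown earlier, `[267.03, 104.32, 229.19, 320]`, can be converted by hand. The cell below is a hedged, standalone recomputation (not output from the tutorial run) that assumes the 192x256 model input size used later in the config, so `aspect_ratio = 192 / 256 = 0.75`.
###Code
# Standalone recomputation of center/scale for bbox [267.03, 104.32, 229.19, 320]
# following the same steps as _xywh2cs (pixel std 200, 1.25 padding).
import numpy as np

x, y, w, h = 267.03, 104.32, 229.19, 320
aspect_ratio = 192 / 256                            # model input w / h = 0.75
center = np.array([x + w * 0.5, y + h * 0.5])       # -> [381.625, 264.32]
if w > aspect_ratio * h:
    h = w / aspect_ratio
elif w < aspect_ratio * h:
    w = h * aspect_ratio                            # 229.19 < 240, so w -> 240
scale = np.array([w / 200.0, h / 200.0]) * 1.25     # -> [1.5, 2.0]
###Output
_____no_output_____
###Markdown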
Create a config fileIn the next step, we create a config file which configures the model, dataset and runtime settings. More information can be found at [Learn about Configs](https://mmpose.readthedocs.io/en/latest/tutorials/0_config.html). A common practice to create a config file is deriving from an existing one. In this tutorial, we load a config file that trains an HRNet on the COCO dataset, and modify it to adapt to the COCOTiny dataset.
###Code
from mmcv import Config
cfg = Config.fromfile(
'./configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py'
)
# set basic configs
cfg.data_root = 'data/coco_tiny'
cfg.work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
cfg.gpu_ids = range(1)
cfg.seed = 0
# set log interval
cfg.log_config.interval = 1
# set evaluation configs
cfg.evaluation.interval = 10
cfg.evaluation.metric = 'PCK'
cfg.evaluation.save_best = 'PCK'
# set learning rate policy
cfg.lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=10,
warmup_ratio=0.001,
step=[17, 35])
cfg.total_epochs = 40
# set batch size
cfg.data.samples_per_gpu = 16
cfg.data.val_dataloader = dict(samples_per_gpu=16)
cfg.data.test_dataloader = dict(samples_per_gpu=16)
# set dataset configs
cfg.data.train.type = 'TopDownCOCOTinyDataset'
cfg.data.train.ann_file = f'{cfg.data_root}/train.json'
cfg.data.train.img_prefix = f'{cfg.data_root}/images/'
cfg.data.val.type = 'TopDownCOCOTinyDataset'
cfg.data.val.ann_file = f'{cfg.data_root}/val.json'
cfg.data.val.img_prefix = f'{cfg.data_root}/images/'
cfg.data.test.type = 'TopDownCOCOTinyDataset'
cfg.data.test.ann_file = f'{cfg.data_root}/val.json'
cfg.data.test.img_prefix = f'{cfg.data_root}/images/'
print(cfg.pretty_text)
###Output
log_level = 'INFO'
load_from = None
resume_from = None
dist_params = dict(backend='nccl')
workflow = [('train', 1)]
checkpoint_config = dict(interval=10)
evaluation = dict(interval=10, metric='PCK', save_best='PCK')
optimizer = dict(type='Adam', lr=0.0005)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=0.001,
step=[170, 200])
total_epochs = 40
log_config = dict(interval=1, hooks=[dict(type='TextLoggerHook')])
channel_cfg = dict(
num_output_channels=17,
dataset_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
])
model = dict(
type='TopDown',
pretrained=
'https://download.openmmlab.com/mmpose/pretrain_models/hrnet_w32-36af842e.pth',
backbone=dict(
type='HRNet',
in_channels=3,
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
num_channels=(64, )),
stage2=dict(
num_modules=1,
num_branches=2,
block='BASIC',
num_blocks=(4, 4),
num_channels=(32, 64)),
stage3=dict(
num_modules=4,
num_branches=3,
block='BASIC',
num_blocks=(4, 4, 4),
num_channels=(32, 64, 128)),
stage4=dict(
num_modules=3,
num_branches=4,
block='BASIC',
num_blocks=(4, 4, 4, 4),
num_channels=(32, 64, 128, 256)))),
keypoint_head=dict(
type='TopdownHeatmapSimpleHead',
in_channels=32,
out_channels=17,
num_deconv_layers=0,
extra=dict(final_conv_kernel=1),
loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True)),
train_cfg=dict(),
test_cfg=dict(
flip_test=True,
post_process='default',
shift_heatmap=True,
modulate_kernel=11))
data_cfg = dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation', rot_factor=40, scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center', 'scale',
'rotation', 'bbox_score', 'flip_pairs'
])
]
val_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
data_root = 'data/coco_tiny'
data = dict(
samples_per_gpu=16,
workers_per_gpu=2,
val_dataloader=dict(samples_per_gpu=16),
test_dataloader=dict(samples_per_gpu=16),
train=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/train.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation',
rot_factor=40,
scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center',
'scale', 'rotation', 'bbox_score', 'flip_pairs'
])
]),
val=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]),
test=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]))
work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
gpu_ids = range(0, 1)
seed = 0
###Markdown
Train and Evaluation
###Code
from mmpose.datasets import build_dataset
from mmpose.models import build_posenet
from mmpose.apis import train_model
import mmcv
# build dataset
datasets = [build_dataset(cfg.data.train)]
# build model
model = build_posenet(cfg.model)
# create work_dir
mmcv.mkdir_or_exist(cfg.work_dir)
# train model
train_model(
model, datasets, cfg, distributed=False, validate=True, meta=dict())
###Output
Use load_from_http loader
###Markdown
Test the trained model. Since the model is trained on the toy dataset coco-tiny, its performance would not be as good as the models in our model zoo. Here we mainly show how to run inference with a local model checkpoint and visualize the results.
###Code
import cv2
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_checkpoint = 'work_dirs/hrnet_w32_coco_tiny_256x192/latest.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(cfg, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset='TopDownCocoDataset')
# show pose estimation results
vis_result = vis_pose_result(pose_model,
img,
pose_results,
kpt_score_thr=0.,
dataset='TopDownCocoDataset',
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
import cv2
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_local loader
###Markdown
MMPose TutorialWelcome to MMPose colab tutorial! In this tutorial, we will show you how to- perform inference with an MMPose model- train a new mmpose model with your own datasetsLet's start! Install MMPoseWe recommend using a conda environment to install mmpose and its dependencies. The compilers `nvcc` and `gcc` are required.
###Code
# check NVCC version
!nvcc -V
# check GCC version
!gcc --version
# check python in conda environment
!which python
# install dependencies: (use cu111 because colab has CUDA 11.1)
%pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
# install mmcv-full thus we could use CUDA operators
%pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.10.0/index.html
# install mmdet for inference demo
%pip install mmdet
# clone mmpose repo
%rm -rf mmpose
!git clone https://github.com/open-mmlab/mmpose.git
%cd mmpose
# install mmpose dependencies
%pip install -r requirements.txt
# install mmpose in develop mode
%pip install -e .
# Check Pytorch installation
import torch, torchvision
print('torch version:', torch.__version__, torch.cuda.is_available())
print('torchvision version:', torchvision.__version__)
# Check MMPose installation
import mmpose
print('mmpose version:', mmpose.__version__)
# Check mmcv installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print('cuda version:', get_compiling_cuda_version())
print('compiler information:', get_compiler_version())
###Output
torch version: 1.9.0+cu111 True
torchvision version: 0.10.0+cu111
mmpose version: 0.18.0
cuda version: 11.1
compiler information: GCC 9.3
###Markdown
Inference with an MMPose modelMMPose provides high level APIs for model inference and training.
###Code
import cv2
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_config = 'configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w48_coco_256x192.py'
pose_checkpoint = 'https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(pose_config, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset=pose_model.cfg.data.test.type)
# show pose estimation results
vis_result = vis_pose_result(pose_model,
img,
pose_results,
dataset=pose_model.cfg.data.test.type,
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_http loader
###Markdown
Train a pose estimation model on a customized datasetTo train a model on a customized dataset with MMPose, there are usually three steps:1. Support the dataset in MMPose1. Create a config1. Perform training and evaluation Add a new datasetThere are two methods to support a customized dataset in MMPose. The first one is to convert the data to a supported format (e.g. COCO) and use the corresponding dataset class (e.g. TopdownCOCODataset), as described in the [document](https://mmpose.readthedocs.io/en/latest/tutorials/2_new_dataset.htmlreorganize-dataset-to-existing-format). The second one is to add a new dataset class. In this tutorial, we give an example of the second method. We first download the demo dataset, which contains 100 samples (75 for training and 25 for validation) selected from the COCO train2017 dataset. The annotations are stored in a different format from the original COCO format.
###Code
# download dataset
%mkdir data
%cd data
!wget https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmpose/datasets/coco_tiny.tar
!tar -xf coco_tiny.tar
%cd ..
# check the directory structure
!apt-get -q install tree
!tree data/coco_tiny
# check the annotation format
import json
import pprint
anns = json.load(open('data/coco_tiny/train.json'))
print(type(anns), len(anns))
pprint.pprint(anns[0], compact=True)
###Output
<class 'list'> 75
{'bbox': [267.03, 104.32, 229.19, 320],
'image_file': '000000537548.jpg',
'image_size': [640, 480],
'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 325, 160, 2, 398,
177, 2, 0, 0, 0, 437, 238, 2, 0, 0, 0, 477, 270, 2, 287, 255, 1,
339, 267, 2, 0, 0, 0, 423, 314, 2, 0, 0, 0, 355, 367, 2]}
###Markdown
After downloading the data, we implement a new dataset class to load data samples for model training and validation. Assuming that we are going to train a top-down pose estimation model (refer to [Top-down Pose Estimation](https://github.com/open-mmlab/mmpose/tree/master/configs/body/2d_kpt_sview_rgb_img/topdown_heatmapreadme) for a brief introduction), the new dataset class inherits `Kpt2dSviewRgbImgTopDownDataset`.
###Code
import json
import os.path as osp
from collections import OrderedDict
import tempfile
import numpy as np
from mmpose.core.evaluation.top_down_eval import (keypoint_nme,
keypoint_pck_accuracy)
from mmpose.datasets.builder import DATASETS
from mmpose.datasets.datasets.base import Kpt2dSviewRgbImgTopDownDataset
@DATASETS.register_module()
class TopDownCOCOTinyDataset(Kpt2dSviewRgbImgTopDownDataset):
def __init__(self,
ann_file,
img_prefix,
data_cfg,
pipeline,
dataset_info=None,
test_mode=False):
super().__init__(
ann_file, img_prefix, data_cfg, pipeline, dataset_info, coco_style=False, test_mode=test_mode)
# flip_pairs, upper_body_ids and lower_body_ids will be used
# in some data augmentations like random flip
self.ann_info['flip_pairs'] = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10],
[11, 12], [13, 14], [15, 16]]
self.ann_info['upper_body_ids'] = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
self.ann_info['lower_body_ids'] = (11, 12, 13, 14, 15, 16)
self.ann_info['joint_weights'] = None
self.ann_info['use_different_joint_weights'] = False
self.dataset_name = 'coco_tiny'
self.db = self._get_db()
def _get_db(self):
with open(self.ann_file) as f:
anns = json.load(f)
db = []
for idx, ann in enumerate(anns):
# get image path
image_file = osp.join(self.img_prefix, ann['image_file'])
# get bbox
bbox = ann['bbox']
# get keypoints
keypoints = np.array(
ann['keypoints'], dtype=np.float32).reshape(-1, 3)
num_joints = keypoints.shape[0]
joints_3d = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d[:, :2] = keypoints[:, :2]
joints_3d_visible = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d_visible[:, :2] = np.minimum(1, keypoints[:, 2:3])
sample = {
'image_file': image_file,
'bbox': bbox,
'rotation': 0,
'joints_3d': joints_3d,
'joints_3d_visible': joints_3d_visible,
'bbox_score': 1,
'bbox_id': idx,
}
db.append(sample)
return db
def evaluate(self, results, res_folder=None, metric='PCK', **kwargs):
"""Evaluate keypoint detection results. The pose prediction results will
be saved in `${res_folder}/result_keypoints.json`.
Note:
batch_size: N
num_keypoints: K
heatmap height: H
heatmap width: W
Args:
results (list(preds, boxes, image_path, output_heatmap))
:preds (np.ndarray[N,K,3]): The first two dimensions are
coordinates, score is the third dimension of the array.
:boxes (np.ndarray[N,6]): [center[0], center[1], scale[0]
, scale[1],area, score]
:image_paths (list[str]): For example, ['Test/source/0.jpg']
:output_heatmap (np.ndarray[N, K, H, W]): model outputs.
res_folder (str, optional): The folder to save the testing
results. If not specified, a temp folder will be created.
Default: None.
metric (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
Returns:
dict: Evaluation results for evaluation metric.
"""
metrics = metric if isinstance(metric, list) else [metric]
allowed_metrics = ['PCK', 'NME']
for metric in metrics:
if metric not in allowed_metrics:
raise KeyError(f'metric {metric} is not supported')
if res_folder is not None:
tmp_folder = None
res_file = osp.join(res_folder, 'result_keypoints.json')
else:
tmp_folder = tempfile.TemporaryDirectory()
res_file = osp.join(tmp_folder.name, 'result_keypoints.json')
kpts = []
for result in results:
preds = result['preds']
boxes = result['boxes']
image_paths = result['image_paths']
bbox_ids = result['bbox_ids']
batch_size = len(image_paths)
for i in range(batch_size):
kpts.append({
'keypoints': preds[i].tolist(),
'center': boxes[i][0:2].tolist(),
'scale': boxes[i][2:4].tolist(),
'area': float(boxes[i][4]),
'score': float(boxes[i][5]),
'bbox_id': bbox_ids[i]
})
kpts = self._sort_and_unique_bboxes(kpts)
self._write_keypoint_results(kpts, res_file)
info_str = self._report_metric(res_file, metrics)
name_value = OrderedDict(info_str)
if tmp_folder is not None:
tmp_folder.cleanup()
return name_value
def _report_metric(self, res_file, metrics, pck_thr=0.3):
"""Keypoint evaluation.
Args:
res_file (str): Json file stored prediction results.
metrics (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
pck_thr (float): PCK threshold, default: 0.3.
Returns:
dict: Evaluation results for evaluation metric.
"""
info_str = []
with open(res_file, 'r') as fin:
preds = json.load(fin)
assert len(preds) == len(self.db)
outputs = []
gts = []
masks = []
for pred, item in zip(preds, self.db):
outputs.append(np.array(pred['keypoints'])[:, :-1])
gts.append(np.array(item['joints_3d'])[:, :-1])
masks.append((np.array(item['joints_3d_visible'])[:, 0]) > 0)
outputs = np.array(outputs)
gts = np.array(gts)
masks = np.array(masks)
normalize_factor = self._get_normalize_factor(gts)
if 'PCK' in metrics:
_, pck, _ = keypoint_pck_accuracy(outputs, gts, masks, pck_thr,
normalize_factor)
info_str.append(('PCK', pck))
if 'NME' in metrics:
info_str.append(
('NME', keypoint_nme(outputs, gts, masks, normalize_factor)))
return info_str
@staticmethod
def _write_keypoint_results(keypoints, res_file):
"""Write results into a json file."""
with open(res_file, 'w') as f:
json.dump(keypoints, f, sort_keys=True, indent=4)
@staticmethod
def _sort_and_unique_bboxes(kpts, key='bbox_id'):
"""sort kpts and remove the repeated ones."""
kpts = sorted(kpts, key=lambda x: x[key])
num = len(kpts)
for i in range(num - 1, 0, -1):
if kpts[i][key] == kpts[i - 1][key]:
del kpts[i]
return kpts
@staticmethod
def _get_normalize_factor(gts):
"""Get inter-ocular distance as the normalize factor, measured as the
Euclidean distance between the outer corners of the eyes.
Args:
gts (np.ndarray[N, K, 2]): Groundtruth keypoint location.
Return:
np.ndarray[N, 2]: normalized factor
"""
interocular = np.linalg.norm(
gts[:, 0, :] - gts[:, 1, :], axis=1, keepdims=True)
return np.tile(interocular, [1, 2])
###Output
_____no_output_____
###Markdown
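Before building the config, here is a small illustration of what the PCK metric computed by `_report_metric` measures: a keypoint counts as correct when the prediction falls within `pck_thr` times a normalization length, which for this dataset class is the inter-ocular distance returned by `_get_normalize_factor`. The cell below is a hedged, standalone toy example with made-up coordinates, not MMPose code.
###Code
# Toy PCK illustration with assumed values (not taken from the tutorial run).
import numpy as np

pred = np.array([[10.0, 10.0], [50.0, 52.0]])  # two predicted keypoints
gt = np.array([[12.0, 11.0], [40.0, 40.0]])    # ground-truth locations
norm = 20.0                                    # e.g. an inter-ocular distance
thr = 0.3                                      # plays the role of pck_thr

# normalized distances are ~0.11 and ~0.78, so only the first keypoint counts
dist = np.linalg.norm(pred - gt, axis=-1) / norm
pck = (dist <= thr).mean()                     # -> 0.5 for this toy example
###Output
_____no_output_____
###Markdown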
Create a config fileIn the next step, we create a config file which configures the model, dataset and runtime settings. More information can be found at [Learn about Configs](https://mmpose.readthedocs.io/en/latest/tutorials/0_config.html). A common practice to create a config file is deriving from an existing one. In this tutorial, we load a config file that trains an HRNet on the COCO dataset, and modify it to adapt to the COCOTiny dataset.
###Code
from mmcv import Config
cfg = Config.fromfile(
'./configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py'
)
# set basic configs
cfg.data_root = 'data/coco_tiny'
cfg.work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
cfg.gpu_ids = range(1)
cfg.seed = 0
# set log interval
cfg.log_config.interval = 1
# set evaluation configs
cfg.evaluation.interval = 10
cfg.evaluation.metric = 'PCK'
cfg.evaluation.save_best = 'PCK'
# set learning rate policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=10,
warmup_ratio=0.001,
step=[17, 35])
cfg.total_epochs = 40
# set batch size
cfg.data.samples_per_gpu = 16
cfg.data.val_dataloader = dict(samples_per_gpu=16)
cfg.data.test_dataloader = dict(samples_per_gpu=16)
# set dataset configs
cfg.data.train.type = 'TopDownCOCOTinyDataset'
cfg.data.train.ann_file = f'{cfg.data_root}/train.json'
cfg.data.train.img_prefix = f'{cfg.data_root}/images/'
cfg.data.val.type = 'TopDownCOCOTinyDataset'
cfg.data.val.ann_file = f'{cfg.data_root}/val.json'
cfg.data.val.img_prefix = f'{cfg.data_root}/images/'
cfg.data.test.type = 'TopDownCOCOTinyDataset'
cfg.data.test.ann_file = f'{cfg.data_root}/val.json'
cfg.data.test.img_prefix = f'{cfg.data_root}/images/'
print(cfg.pretty_text)
###Output
dataset_info = dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(name='nose', id=0, color=[51, 153, 255], type='upper', swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(link=('right_ankle', 'right_knee'), id=2, color=[255, 128, 0]),
3:
dict(link=('right_knee', 'right_hip'), id=3, color=[255, 128, 0]),
4:
dict(link=('left_hip', 'right_hip'), id=4, color=[51, 153, 255]),
5:
dict(link=('left_shoulder', 'left_hip'), id=5, color=[51, 153, 255]),
6:
dict(link=('right_shoulder', 'right_hip'), id=6, color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(link=('left_shoulder', 'left_elbow'), id=8, color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'), id=9, color=[255, 128, 0]),
10:
dict(link=('left_elbow', 'left_wrist'), id=10, color=[0, 255, 0]),
11:
dict(link=('right_elbow', 'right_wrist'), id=11, color=[255, 128, 0]),
12:
dict(link=('left_eye', 'right_eye'), id=12, color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(link=('left_eye', 'left_ear'), id=15, color=[51, 153, 255]),
16:
dict(link=('right_eye', 'right_ear'), id=16, color=[51, 153, 255]),
17:
dict(link=('left_ear', 'left_shoulder'), id=17, color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'), id=18, color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0, 1.0, 1.2,
1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072, 0.062,
0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])
log_level = 'INFO'
load_from = None
resume_from = None
dist_params = dict(backend='nccl')
workflow = [('train', 1)]
checkpoint_config = dict(interval=10)
evaluation = dict(interval=10, metric='PCK', save_best='PCK')
optimizer = dict(type='Adam', lr=0.0005)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=0.001,
step=[170, 200])
total_epochs = 40
log_config = dict(interval=1, hooks=[dict(type='TextLoggerHook')])
channel_cfg = dict(
num_output_channels=17,
dataset_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
])
model = dict(
type='TopDown',
pretrained=
'https://download.openmmlab.com/mmpose/pretrain_models/hrnet_w32-36af842e.pth',
backbone=dict(
type='HRNet',
in_channels=3,
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
num_channels=(64, )),
stage2=dict(
num_modules=1,
num_branches=2,
block='BASIC',
num_blocks=(4, 4),
num_channels=(32, 64)),
stage3=dict(
num_modules=4,
num_branches=3,
block='BASIC',
num_blocks=(4, 4, 4),
num_channels=(32, 64, 128)),
stage4=dict(
num_modules=3,
num_branches=4,
block='BASIC',
num_blocks=(4, 4, 4, 4),
num_channels=(32, 64, 128, 256)))),
keypoint_head=dict(
type='TopdownHeatmapSimpleHead',
in_channels=32,
out_channels=17,
num_deconv_layers=0,
extra=dict(final_conv_kernel=1),
loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True)),
train_cfg=dict(),
test_cfg=dict(
flip_test=True,
post_process='default',
shift_heatmap=True,
modulate_kernel=11))
data_cfg = dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation', rot_factor=40, scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center', 'scale',
'rotation', 'bbox_score', 'flip_pairs'
])
]
val_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
data_root = 'data/coco_tiny'
data = dict(
samples_per_gpu=16,
workers_per_gpu=2,
val_dataloader=dict(samples_per_gpu=16),
test_dataloader=dict(samples_per_gpu=16),
train=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/train.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation',
rot_factor=40,
scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center',
'scale', 'rotation', 'bbox_score', 'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])),
val=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])),
test=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
],
dataset_info=dict(
dataset_name='coco',
paper_info=dict(
author=
'Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence',
title='Microsoft coco: Common objects in context',
container='European conference on computer vision',
year='2014',
homepage='http://cocodataset.org/'),
keypoint_info=dict({
0:
dict(
name='nose',
id=0,
color=[51, 153, 255],
type='upper',
swap=''),
1:
dict(
name='left_eye',
id=1,
color=[51, 153, 255],
type='upper',
swap='right_eye'),
2:
dict(
name='right_eye',
id=2,
color=[51, 153, 255],
type='upper',
swap='left_eye'),
3:
dict(
name='left_ear',
id=3,
color=[51, 153, 255],
type='upper',
swap='right_ear'),
4:
dict(
name='right_ear',
id=4,
color=[51, 153, 255],
type='upper',
swap='left_ear'),
5:
dict(
name='left_shoulder',
id=5,
color=[0, 255, 0],
type='upper',
swap='right_shoulder'),
6:
dict(
name='right_shoulder',
id=6,
color=[255, 128, 0],
type='upper',
swap='left_shoulder'),
7:
dict(
name='left_elbow',
id=7,
color=[0, 255, 0],
type='upper',
swap='right_elbow'),
8:
dict(
name='right_elbow',
id=8,
color=[255, 128, 0],
type='upper',
swap='left_elbow'),
9:
dict(
name='left_wrist',
id=9,
color=[0, 255, 0],
type='upper',
swap='right_wrist'),
10:
dict(
name='right_wrist',
id=10,
color=[255, 128, 0],
type='upper',
swap='left_wrist'),
11:
dict(
name='left_hip',
id=11,
color=[0, 255, 0],
type='lower',
swap='right_hip'),
12:
dict(
name='right_hip',
id=12,
color=[255, 128, 0],
type='lower',
swap='left_hip'),
13:
dict(
name='left_knee',
id=13,
color=[0, 255, 0],
type='lower',
swap='right_knee'),
14:
dict(
name='right_knee',
id=14,
color=[255, 128, 0],
type='lower',
swap='left_knee'),
15:
dict(
name='left_ankle',
id=15,
color=[0, 255, 0],
type='lower',
swap='right_ankle'),
16:
dict(
name='right_ankle',
id=16,
color=[255, 128, 0],
type='lower',
swap='left_ankle')
}),
skeleton_info=dict({
0:
dict(
link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
1:
dict(link=('left_knee', 'left_hip'), id=1, color=[0, 255, 0]),
2:
dict(
link=('right_ankle', 'right_knee'),
id=2,
color=[255, 128, 0]),
3:
dict(
link=('right_knee', 'right_hip'),
id=3,
color=[255, 128, 0]),
4:
dict(
link=('left_hip', 'right_hip'), id=4, color=[51, 153,
255]),
5:
dict(
link=('left_shoulder', 'left_hip'),
id=5,
color=[51, 153, 255]),
6:
dict(
link=('right_shoulder', 'right_hip'),
id=6,
color=[51, 153, 255]),
7:
dict(
link=('left_shoulder', 'right_shoulder'),
id=7,
color=[51, 153, 255]),
8:
dict(
link=('left_shoulder', 'left_elbow'),
id=8,
color=[0, 255, 0]),
9:
dict(
link=('right_shoulder', 'right_elbow'),
id=9,
color=[255, 128, 0]),
10:
dict(
link=('left_elbow', 'left_wrist'),
id=10,
color=[0, 255, 0]),
11:
dict(
link=('right_elbow', 'right_wrist'),
id=11,
color=[255, 128, 0]),
12:
dict(
link=('left_eye', 'right_eye'),
id=12,
color=[51, 153, 255]),
13:
dict(link=('nose', 'left_eye'), id=13, color=[51, 153, 255]),
14:
dict(link=('nose', 'right_eye'), id=14, color=[51, 153, 255]),
15:
dict(
link=('left_eye', 'left_ear'), id=15, color=[51, 153,
255]),
16:
dict(
link=('right_eye', 'right_ear'),
id=16,
color=[51, 153, 255]),
17:
dict(
link=('left_ear', 'left_shoulder'),
id=17,
color=[51, 153, 255]),
18:
dict(
link=('right_ear', 'right_shoulder'),
id=18,
color=[51, 153, 255])
}),
joint_weights=[
1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.2, 1.2, 1.5, 1.5, 1.0,
1.0, 1.2, 1.2, 1.5, 1.5
],
sigmas=[
0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
])))
work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
gpu_ids = range(0, 1)
seed = 0
###Markdown
Train and Evaluation
###Code
from mmpose.datasets import build_dataset
from mmpose.models import build_posenet
from mmpose.apis import train_model
import mmcv
# build dataset
datasets = [build_dataset(cfg.data.train)]
# build model
model = build_posenet(cfg.model)
# create work_dir
mmcv.mkdir_or_exist(cfg.work_dir)
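# Added step (illustrative sketch, not part of the original tutorial): dump the
# modified config next to the checkpoints so the run is easy to reproduce;
# `Config.dump` is assumed to be available in the installed mmcv version, and
# the file name below is only an example.
cfg.dump(cfg.work_dir + '/hrnet_w32_coco_tiny_256x192_tiny.py')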
# train model
train_model(
model, datasets, cfg, distributed=False, validate=True, meta=dict())
###Output
Use load_from_http loader
###Markdown
Test the trained model. Since the model is trained on a toy dataset, coco-tiny, its performance would not be as good as the ones in our model zoo. Here we mainly show how to run inference with a local model checkpoint and visualize the results.
###Code
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
import cv2  # needed below for cv2.resize before the runtime-specific branch
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_checkpoint = 'work_dirs/hrnet_w32_coco_tiny_256x192/latest.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(cfg, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset='TopDownCocoDataset')
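# (Added note) pose_results is a list with one dict per detected person; each
# dict carries the person's 'bbox' and a (num_keypoints, 3) 'keypoints' array
# of (x, y, score), which vis_pose_result consumes below.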
# show pose estimation results
vis_result = vis_pose_result(pose_model,
img,
pose_results,
kpt_score_thr=0.,
dataset='TopDownCocoDataset',
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
import cv2
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_local loader
###Markdown
MMPose TutorialWelcome to the MMPose Colab tutorial! In this tutorial, we will show you how to- perform inference with an MMPose model- train a new mmpose model with your own datasetsLet's start! Install MMPoseWe recommend using a conda environment to install mmpose and its dependencies. The compilers `nvcc` and `gcc` are also required.
###Code
# check NVCC version
!nvcc -V
# check GCC version
!gcc --version
# check python in conda environtment
!which python
# install pytorch
!pip install torch
# install mmcv-full
!pip install mmcv-full
# install mmdet for inference demo
!pip install mmdet
# clone mmpose repo
!rm -rf mmpose
!git clone https://github.com/open-mmlab/mmpose.git
%cd mmpose
# install mmpose dependencies
!pip install -r requirements.txt
# install mmpose in develop mode
!pip install -e .
# Check Pytorch installation
import torch, torchvision
print('torch version:', torch.__version__, torch.cuda.is_available())
print('torchvision version:', torchvision.__version__)
# Check MMPose installation
import mmpose
print('mmpose version:', mmpose.__version__)
# Check mmcv installation
from mmcv.ops import get_compiling_cuda_version, get_compiler_version
print('cuda version:', get_compiling_cuda_version())
print('compiler information:', get_compiler_version())
###Output
torch version: 1.9.0+cu102 True
torchvision version: 0.10.0+cu102
mmpose version: 0.16.0
cuda version: 10.2
compiler information: GCC 5.4
###Markdown
Inference with an MMPose modelMMPose provides high level APIs for model inference and training.
###Code
import cv2
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_config = 'configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w48_coco_256x192.py'
pose_checkpoint = 'https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(pose_config, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset=pose_model.cfg.data.test.type)
# show pose estimation results
vis_result = vis_pose_result(pose_model,
img,
pose_results,
dataset=pose_model.cfg.data.test.type,
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_http loader
###Markdown
Train a pose estimation model on a customized datasetTo train a model on a customized dataset with MMPose, there are usually three steps:1. Support the dataset in MMPose1. Create a config1. Perform training and evaluation Add a new datasetThere are two methods to support a customized dataset in MMPose. The first one is to convert the data to a supported format (e.g. COCO) and use the corresponding dataset class (e.g. TopdownCOCODataset), as described in the [document](https://mmpose.readthedocs.io/en/latest/tutorials/2_new_dataset.htmlreorganize-dataset-to-existing-format); a minimal sketch of this conversion is included at the end of the code cell below. The second one is to add a new dataset class. In this tutorial, we give an example of the second method.We first download the demo dataset, which contains 100 samples (75 for training and 25 for validation) selected from the COCO train2017 dataset. The annotations are stored in a different format from the original COCO format.
###Code
# download dataset
%mkdir data
%cd data
!wget https://openmmlab.oss-cn-hangzhou.aliyuncs.com/mmpose/datasets/coco_tiny.tar
!tar -xf coco_tiny.tar
%cd ..
# check the directory structure
!apt-get -q install tree
!tree data/coco_tiny
# check the annotation format
import json
import pprint
anns = json.load(open('data/coco_tiny/train.json'))
print(type(anns), len(anns))
pprint.pprint(anns[0], compact=True)
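# ---------------------------------------------------------------------------
# Added sketch (not part of the original tutorial): the *first* method from the
# text above would convert these annotations into standard COCO keypoint format
# so an existing dataset class could be reused. Field names follow the COCO
# spec; the category block is trimmed to a minimum for brevity, and this helper
# is an assumption for illustration only.
def to_coco_format(tiny_anns):
    images, annotations = [], []
    for idx, ann in enumerate(tiny_anns):
        width, height = ann['image_size']
        images.append(dict(id=idx, file_name=ann['image_file'],
                           width=width, height=height))
        annotations.append(dict(
            id=idx, image_id=idx, category_id=1, iscrowd=0,
            keypoints=ann['keypoints'],
            num_keypoints=sum(1 for v in ann['keypoints'][2::3] if v > 0),
            bbox=ann['bbox'], area=ann['bbox'][2] * ann['bbox'][3]))
    return dict(images=images, annotations=annotations,
                categories=[dict(id=1, name='person',
                                 keypoints=[], skeleton=[])])

coco_style = to_coco_format(anns)
print(len(coco_style['images']), len(coco_style['annotations']))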
###Output
<class 'list'> 75
{'bbox': [267.03, 104.32, 229.19, 320],
'image_file': '000000537548.jpg',
'image_size': [640, 480],
'keypoints': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 325, 160, 2, 398,
177, 2, 0, 0, 0, 437, 238, 2, 0, 0, 0, 477, 270, 2, 287, 255, 1,
339, 267, 2, 0, 0, 0, 423, 314, 2, 0, 0, 0, 355, 367, 2]}
###Markdown
After downloading the data, we implement a new dataset class to load data samples for model training and validation. Assuming that we are going to train a top-down pose estimation model (refer to [Top-down Pose Estimation](https://github.com/open-mmlab/mmpose/tree/master/configs/body/2d_kpt_sview_rgb_img/topdown_heatmapreadme) for a brief introduction), the new dataset class inherits from `TopDownBaseDataset`.
###Code
import json
import os
import os.path as osp
from collections import OrderedDict
import numpy as np
from mmpose.core.evaluation.top_down_eval import (keypoint_nme,
keypoint_pck_accuracy)
from mmpose.datasets.builder import DATASETS
from mmpose.datasets.datasets.top_down.topdown_base_dataset import \
TopDownBaseDataset
@DATASETS.register_module()
class TopDownCOCOTinyDataset(TopDownBaseDataset):
def __init__(self,
ann_file,
img_prefix,
data_cfg,
pipeline,
test_mode=False):
super().__init__(
ann_file, img_prefix, data_cfg, pipeline, test_mode=test_mode)
# flip_pairs, upper_body_ids and lower_body_ids will be used
# in some data augmentations like random flip
self.ann_info['flip_pairs'] = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10],
[11, 12], [13, 14], [15, 16]]
self.ann_info['upper_body_ids'] = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
self.ann_info['lower_body_ids'] = (11, 12, 13, 14, 15, 16)
self.ann_info['joint_weights'] = None
self.ann_info['use_different_joint_weights'] = False
self.dataset_name = 'coco_tiny'
self.db = self._get_db()
def _get_db(self):
with open(self.annotations_path) as f:
anns = json.load(f)
db = []
for idx, ann in enumerate(anns):
# get image path
image_file = osp.join(self.img_prefix, ann['image_file'])
# get bbox
bbox = ann['bbox']
center, scale = self._xywh2cs(*bbox)
# get keypoints
keypoints = np.array(
ann['keypoints'], dtype=np.float32).reshape(-1, 3)
num_joints = keypoints.shape[0]
joints_3d = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d[:, :2] = keypoints[:, :2]
joints_3d_visible = np.zeros((num_joints, 3), dtype=np.float32)
joints_3d_visible[:, :2] = np.minimum(1, keypoints[:, 2:3])
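            # (Added note) The clamp above maps COCO-style visibility flags
            # (0 = not labeled, 1 = labeled but occluded, 2 = labeled and
            # visible) to a binary "labeled or not" mask used as target weight.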
sample = {
'image_file': image_file,
'center': center,
'scale': scale,
'bbox': bbox,
'rotation': 0,
'joints_3d': joints_3d,
'joints_3d_visible': joints_3d_visible,
'bbox_score': 1,
'bbox_id': idx,
}
db.append(sample)
return db
def _xywh2cs(self, x, y, w, h):
"""This encodes bbox(x, y, w, h) into (center, scale)
Args:
x, y, w, h
Returns:
tuple: A tuple containing center and scale.
- center (np.ndarray[float32](2,)): center of the bbox (x, y).
- scale (np.ndarray[float32](2,)): scale of the bbox w & h.
"""
aspect_ratio = self.ann_info['image_size'][0] / self.ann_info[
'image_size'][1]
center = np.array([x + w * 0.5, y + h * 0.5], dtype=np.float32)
if w > aspect_ratio * h:
h = w * 1.0 / aspect_ratio
elif w < aspect_ratio * h:
w = h * aspect_ratio
# pixel std is 200.0
scale = np.array([w / 200.0, h / 200.0], dtype=np.float32)
# padding to include proper amount of context
scale = scale * 1.25
return center, scale
def evaluate(self, outputs, res_folder, metric='PCK', **kwargs):
"""Evaluate keypoint detection results. The pose prediction results will
be saved in `${res_folder}/result_keypoints.json`.
Note:
batch_size: N
num_keypoints: K
heatmap height: H
heatmap width: W
Args:
outputs (list(preds, boxes, image_path, output_heatmap))
:preds (np.ndarray[N,K,3]): The first two dimensions are
coordinates, score is the third dimension of the array.
:boxes (np.ndarray[N,6]): [center[0], center[1], scale[0]
, scale[1],area, score]
:image_paths (list[str]): For example, ['Test/source/0.jpg']
:output_heatmap (np.ndarray[N, K, H, W]): model outputs.
res_folder (str): Path of directory to save the results.
metric (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
Returns:
dict: Evaluation results for evaluation metric.
"""
metrics = metric if isinstance(metric, list) else [metric]
allowed_metrics = ['PCK', 'NME']
for metric in metrics:
if metric not in allowed_metrics:
raise KeyError(f'metric {metric} is not supported')
res_file = os.path.join(res_folder, 'result_keypoints.json')
kpts = []
for output in outputs:
preds = output['preds']
boxes = output['boxes']
image_paths = output['image_paths']
bbox_ids = output['bbox_ids']
batch_size = len(image_paths)
for i in range(batch_size):
kpts.append({
'keypoints': preds[i].tolist(),
'center': boxes[i][0:2].tolist(),
'scale': boxes[i][2:4].tolist(),
'area': float(boxes[i][4]),
'score': float(boxes[i][5]),
'bbox_id': bbox_ids[i]
})
kpts = self._sort_and_unique_bboxes(kpts)
self._write_keypoint_results(kpts, res_file)
info_str = self._report_metric(res_file, metrics)
name_value = OrderedDict(info_str)
return name_value
def _report_metric(self, res_file, metrics, pck_thr=0.3):
"""Keypoint evaluation.
Args:
res_file (str): Json file stored prediction results.
metrics (str | list[str]): Metric to be performed.
Options: 'PCK', 'NME'.
pck_thr (float): PCK threshold, default: 0.3.
Returns:
dict: Evaluation results for evaluation metric.
"""
info_str = []
with open(res_file, 'r') as fin:
preds = json.load(fin)
assert len(preds) == len(self.db)
outputs = []
gts = []
masks = []
for pred, item in zip(preds, self.db):
outputs.append(np.array(pred['keypoints'])[:, :-1])
gts.append(np.array(item['joints_3d'])[:, :-1])
masks.append((np.array(item['joints_3d_visible'])[:, 0]) > 0)
outputs = np.array(outputs)
gts = np.array(gts)
masks = np.array(masks)
normalize_factor = self._get_normalize_factor(gts)
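        # (Added note) With COCO keypoint ordering, indices 0 and 1 are the
        # nose and left eye, so this "inter-ocular" factor is really the
        # nose-to-left-eye distance; it still gives a per-sample scale for
        # the PCK threshold and the NME normalization below.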
if 'PCK' in metrics:
_, pck, _ = keypoint_pck_accuracy(outputs, gts, masks, pck_thr,
normalize_factor)
info_str.append(('PCK', pck))
if 'NME' in metrics:
info_str.append(
('NME', keypoint_nme(outputs, gts, masks, normalize_factor)))
return info_str
@staticmethod
def _write_keypoint_results(keypoints, res_file):
"""Write results into a json file."""
with open(res_file, 'w') as f:
json.dump(keypoints, f, sort_keys=True, indent=4)
@staticmethod
def _sort_and_unique_bboxes(kpts, key='bbox_id'):
"""sort kpts and remove the repeated ones."""
kpts = sorted(kpts, key=lambda x: x[key])
num = len(kpts)
for i in range(num - 1, 0, -1):
if kpts[i][key] == kpts[i - 1][key]:
del kpts[i]
return kpts
@staticmethod
def _get_normalize_factor(gts):
"""Get inter-ocular distance as the normalize factor, measured as the
Euclidean distance between the outer corners of the eyes.
Args:
gts (np.ndarray[N, K, 2]): Groundtruth keypoint location.
Return:
np.ndarray[N, 2]: normalized factor
"""
interocular = np.linalg.norm(
gts[:, 0, :] - gts[:, 1, :], axis=1, keepdims=True)
return np.tile(interocular, [1, 2])
###Output
_____no_output_____
###Markdown
Create a config fileIn the next step, we create a config file which configures the model, dataset and runtime settings. More information can be found at [Learn about Configs](https://mmpose.readthedocs.io/en/latest/tutorials/0_config.html). A common practice to create a config file is to derive it from an existing one. In this tutorial, we load a config file that trains an HRNet on the COCO dataset, and modify it to adapt to the COCOTiny dataset.
###Code
from mmcv import Config
cfg = Config.fromfile(
'./configs/body/2d_kpt_sview_rgb_img/topdown_heatmap/coco/hrnet_w32_coco_256x192.py'
)
# set basic configs
cfg.data_root = 'data/coco_tiny'
cfg.work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
cfg.gpu_ids = range(1)
cfg.seed = 0
# set log interval
cfg.log_config.interval = 1
# set evaluation configs
cfg.evaluation.interval = 10
cfg.evaluation.metric = 'PCK'
cfg.evaluation.save_best = 'PCK'
# set learning rate policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=10,
warmup_ratio=0.001,
step=[17, 35])
cfg.total_epochs = 40
# set batch size
cfg.data.samples_per_gpu = 16
cfg.data.val_dataloader = dict(samples_per_gpu=16)
cfg.data.test_dataloader = dict(samples_per_gpu=16)
# set dataset configs
cfg.data.train.type = 'TopDownCOCOTinyDataset'
cfg.data.train.ann_file = f'{cfg.data_root}/train.json'
cfg.data.train.img_prefix = f'{cfg.data_root}/images/'
cfg.data.val.type = 'TopDownCOCOTinyDataset'
cfg.data.val.ann_file = f'{cfg.data_root}/val.json'
cfg.data.val.img_prefix = f'{cfg.data_root}/images/'
cfg.data.test.type = 'TopDownCOCOTinyDataset'
cfg.data.test.ann_file = f'{cfg.data_root}/val.json'
cfg.data.test.img_prefix = f'{cfg.data_root}/images/'
print(cfg.pretty_text)
###Output
log_level = 'INFO'
load_from = None
resume_from = None
dist_params = dict(backend='nccl')
workflow = [('train', 1)]
checkpoint_config = dict(interval=10)
evaluation = dict(interval=10, metric='PCK', save_best='PCK')
optimizer = dict(type='Adam', lr=0.0005)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=0.001,
step=[170, 200])
total_epochs = 40
log_config = dict(interval=1, hooks=[dict(type='TextLoggerHook')])
channel_cfg = dict(
num_output_channels=17,
dataset_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
])
model = dict(
type='TopDown',
pretrained=
'https://download.openmmlab.com/mmpose/pretrain_models/hrnet_w32-36af842e.pth',
backbone=dict(
type='HRNet',
in_channels=3,
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
num_channels=(64, )),
stage2=dict(
num_modules=1,
num_branches=2,
block='BASIC',
num_blocks=(4, 4),
num_channels=(32, 64)),
stage3=dict(
num_modules=4,
num_branches=3,
block='BASIC',
num_blocks=(4, 4, 4),
num_channels=(32, 64, 128)),
stage4=dict(
num_modules=3,
num_branches=4,
block='BASIC',
num_blocks=(4, 4, 4, 4),
num_channels=(32, 64, 128, 256)))),
keypoint_head=dict(
type='TopdownHeatmapSimpleHead',
in_channels=32,
out_channels=17,
num_deconv_layers=0,
extra=dict(final_conv_kernel=1),
loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True)),
train_cfg=dict(),
test_cfg=dict(
flip_test=True,
post_process='default',
shift_heatmap=True,
modulate_kernel=11))
data_cfg = dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation', rot_factor=40, scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center', 'scale',
'rotation', 'bbox_score', 'flip_pairs'
])
]
val_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]
data_root = 'data/coco_tiny'
data = dict(
samples_per_gpu=16,
workers_per_gpu=2,
val_dataloader=dict(samples_per_gpu=16),
test_dataloader=dict(samples_per_gpu=16),
train=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/train.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownRandomFlip', flip_prob=0.5),
dict(
type='TopDownHalfBodyTransform',
num_joints_half_body=8,
prob_half_body=0.3),
dict(
type='TopDownGetRandomScaleRotation',
rot_factor=40,
scale_factor=0.5),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(type='TopDownGenerateTarget', sigma=2),
dict(
type='Collect',
keys=['img', 'target', 'target_weight'],
meta_keys=[
'image_file', 'joints_3d', 'joints_3d_visible', 'center',
'scale', 'rotation', 'bbox_score', 'flip_pairs'
])
]),
val=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]),
test=dict(
type='TopDownCOCOTinyDataset',
ann_file='data/coco_tiny/val.json',
img_prefix='data/coco_tiny/images/',
data_cfg=dict(
image_size=[192, 256],
heatmap_size=[48, 64],
num_output_channels=17,
num_joints=17,
dataset_channel=[[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
]],
inference_channel=[
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
],
soft_nms=False,
nms_thr=1.0,
oks_thr=0.9,
vis_thr=0.2,
use_gt_bbox=False,
det_bbox_thr=0.0,
bbox_file=
'data/coco/person_detection_results/COCO_val2017_detections_AP_H_56_person.json'
),
pipeline=[
dict(type='LoadImageFromFile'),
dict(type='TopDownAffine'),
dict(type='ToTensor'),
dict(
type='NormalizeTensor',
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
dict(
type='Collect',
keys=['img'],
meta_keys=[
'image_file', 'center', 'scale', 'rotation', 'bbox_score',
'flip_pairs'
])
]))
work_dir = 'work_dirs/hrnet_w32_coco_tiny_256x192'
gpu_ids = range(0, 1)
seed = 0
###Markdown
Train and Evaluation
###Code
from mmpose.datasets import build_dataset
from mmpose.models import build_posenet
from mmpose.apis import train_model
import mmcv
# build dataset
datasets = [build_dataset(cfg.data.train)]
# build model
model = build_posenet(cfg.model)
# create work_dir
mmcv.mkdir_or_exist(cfg.work_dir)
# train model
train_model(
model, datasets, cfg, distributed=False, validate=True, meta=dict())
###Output
Use load_from_http loader
###Markdown
Test the trained model. Since the model is trained on a toy dataset, coco-tiny, its performance would not be as good as the ones in our model zoo. Here we mainly show how to run inference with a local model checkpoint and visualize the results.
###Code
from mmpose.apis import (inference_top_down_pose_model, init_pose_model,
vis_pose_result, process_mmdet_results)
from mmdet.apis import inference_detector, init_detector
import cv2  # needed below for cv2.resize before the runtime-specific branch
local_runtime = False
try:
from google.colab.patches import cv2_imshow # for image visualization in colab
except:
local_runtime = True
pose_checkpoint = 'work_dirs/hrnet_w32_coco_tiny_256x192/latest.pth'
det_config = 'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py'
det_checkpoint = 'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
# initialize pose model
pose_model = init_pose_model(cfg, pose_checkpoint)
# initialize detector
det_model = init_detector(det_config, det_checkpoint)
img = 'tests/data/coco/000000196141.jpg'
# inference detection
mmdet_results = inference_detector(det_model, img)
# extract person (COCO_ID=1) bounding boxes from the detection results
person_results = process_mmdet_results(mmdet_results, cat_id=1)
# inference pose
pose_results, returned_outputs = inference_top_down_pose_model(pose_model,
img,
person_results,
bbox_thr=0.3,
format='xyxy',
dataset='TopDownCocoDataset')
# show pose estimation results
vis_result = vis_pose_result(pose_model,
img,
pose_results,
kpt_score_thr=0.,
dataset='TopDownCocoDataset',
show=False)
# reduce image size
vis_result = cv2.resize(vis_result, dsize=None, fx=0.5, fy=0.5)
if local_runtime:
from IPython.display import Image, display
import tempfile
import os.path as osp
import cv2
with tempfile.TemporaryDirectory() as tmpdir:
file_name = osp.join(tmpdir, 'pose_results.png')
cv2.imwrite(file_name, vis_result)
display(Image(file_name))
else:
cv2_imshow(vis_result)
###Output
Use load_from_local loader
|
data-processing-notebooks/.ipynb_checkpoints/mpd-slice-combine-checkpoint.ipynb | ###Markdown
Read playlists
###Code
import json
import os
import random
from time import time

import numpy as np
import pandas as pd

t0 = time()
keys = [
'pid',
'name',
'description',
'num_artists',
'num_albums',
'num_tracks',
'num_followers',
'duration_ms',
'collaborative',
'tracks'
]
samp = 700
arr = np.empty(shape=(samp*1000,len(keys)), dtype = object)
path='spotify_million_playlist_dataset/data/'
filenames = os.listdir(path)
for i, filename in enumerate(random.sample(sorted(filenames), samp)):
if filename.startswith("mpd.slice.") and filename.endswith(".json"):
fullpath = os.sep.join((path, filename))
f = open(fullpath)
js = f.read()
f.close()
mpd_slice = json.loads(js)
D = pd.DataFrame(mpd_slice['playlists'])[keys].to_numpy()
arr[i*1000:(i+1)*1000,:]= D
print(filename,i)
# Time diff
print(f"Time taken: {(time()-t0)/60}")
arr[:10]
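# Added illustration (assumption, not in the original notebook): wrapping the
# raw object array in a DataFrame with the same column names makes it easier
# to inspect and to persist later on.
playlists_df = pd.DataFrame(arr, columns=keys)
playlists_df.head()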
###Output
_____no_output_____
###Markdown
Reading track_uri
###Code
lst = []
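# NOTE (added): `data` here is assumed to be a single MPD slice loaded in an
# earlier session (a dict with a 'playlists' key); this cell only previews the
# first two playlists before the full pass over all files in the next cell.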
for playlist in data['playlists'][:2]:
for track in playlist['tracks']:
lst.append([track['track_uri'],playlist['pid']])
t0 = time()
samp = 100
lst = []
path='spotify_million_playlist_dataset/data/'
filenames = os.listdir(path)
for i, filename in enumerate(sorted(filenames)):
if filename.startswith("mpd.slice.") and filename.endswith(".json"):
fullpath = os.sep.join((path, filename))
f = open(fullpath)
js = f.read()
f.close()
mpd_slice = json.loads(js)
for playlist in mpd_slice['playlists']:
for track in playlist['tracks']:
lst.append([track['track_uri'],playlist['pid']])
print(filename, i)
# Time diff
print(f"Time taken: {(time()-t0)/60}")
len(lst)
lst_uri = [el[0] for el in lst if el[0]]
set_uri = set(lst_uri)
len(set_uri)
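# Added illustration (not in the original notebook): counting how many playlist
# entries each track URI has gives a quick popularity signal.
from collections import Counter
track_counts = Counter(lst_uri)
track_counts.most_common(10)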
###Output
_____no_output_____ |
courses/machine_learning/deepdive/06_structured/4_preproc_tft.ipynb | ###Markdown
Preprocessing using tf.transform and Dataflow This notebook illustrates: Creating datasets for Machine Learning using tf.transform and DataflowWhile Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming. Apache Beam only works in Python 2 at the moment, so we're going to switch to the Python 2 kernel. In the above menu, click the dropdown arrow and select `python2`.  Then activate a Python 2 environment and install Apache Beam. Only specific combinations of TensorFlow/Beam are supported by tf.transform. So make sure to get a combo that is.* TFT 0.8.0* TF 1.8 or higher* Apache Beam [GCP] 2.5.0 or higher
###Code
%%bash
source activate py2env
pip uninstall -y google-cloud-dataflow
conda install -y pytz==2018.4
pip install apache-beam[gcp] tensorflow_transform==0.8.0
%%bash
pip freeze | grep -e 'flow\|beam'
###Output
apache-airflow==1.9.0
google-cloud-dataflow==2.0.0
tensorflow==1.8.0
###Markdown
You need to restart your kernel to register the new installs running the below cells
###Code
import tensorflow as tf
import apache_beam as beam
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
!gcloud config set project $PROJECT
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
###Output
_____no_output_____
###Markdown
Save the query from earlier The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
###Code
query="""
SELECT
weight_pounds,
is_male,
mother_age,
mother_race,
plurality,
gestation_weeks,
mother_married,
ever_born,
cigarette_use,
alcohol_use,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
import google.datalab.bigquery as bq
df = bq.Query(query + " LIMIT 100").execute().result().to_dataframe()
df.head()
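# Added illustration (assumption, not in the original notebook): the hashmonth
# column gives a deterministic train/eval split, roughly 75%/25% with modulo 4,
# which is the same idea the Dataflow pipeline below uses in its queries.
train_df = df[df.hashmonth.abs() % 4 < 3]
eval_df = df[df.hashmonth.abs() % 4 == 3]
print(len(train_df), len(eval_df))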
###Output
_____no_output_____
###Markdown
Create ML dataset using tf.transform and Dataflow Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.Note that after you launch this, the notebook won't show you progress. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 30 minutes for me. If you wish to continue without doing this step, you can copy my preprocessed output:gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc_tft gs://your-bucket/
###Code
%%writefile requirements.txt
tensorflow-transform==0.8.0
import datetime
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def preprocess_tft(inputs):
import copy
import numpy as np
def center(x):
return x - tft.mean(x)
result = copy.copy(inputs) # shallow copy
result['mother_age_tft'] = center(inputs['mother_age'])
result['gestation_weeks_centered'] = tft.scale_to_0_1(inputs['gestation_weeks'])
result['mother_race_tft'] = tft.string_to_int(inputs['mother_race'])
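  # (Added note) tft.mean, tft.scale_to_0_1 and tft.string_to_int are
  # analyzers: AnalyzeAndTransformDataset below makes a full pass over the
  # training data to compute the mean, min/max and vocabulary they need, and
  # bakes those constants into the saved transform function.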
return result
#return inputs
def cleanup(rowdict):
import copy, hashlib
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,mother_race,plurality,gestation_weeks,mother_married,cigarette_use,alcohol_use'.split(',')
STR_COLUMNS = 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
FLT_COLUMNS = 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
# add any missing columns, and correct the types
def tofloat(value, ifnot):
try:
return float(value)
except (ValueError, TypeError):
return ifnot
result = {
k : str(rowdict[k]) if k in rowdict else 'None' for k in STR_COLUMNS
}
result.update({
k : tofloat(rowdict[k], -99) if k in rowdict else -99 for k in FLT_COLUMNS
})
# modify opaque numeric race code into human-readable data
races = dict(zip([1,2,3,4,5,6,7,18,28,39,48],
['White', 'Black', 'American Indian', 'Chinese',
'Japanese', 'Hawaiian', 'Filipino',
'Asian Indian', 'Korean', 'Samaon', 'Vietnamese']))
if 'mother_race' in rowdict and rowdict['mother_race'] in races:
result['mother_race'] = races[rowdict['mother_race']]
else:
result['mother_race'] = 'Unknown'
# cleanup: write out only the data we that we want to train on
if result['weight_pounds'] > 0 and result['mother_age'] > 0 and result['gestation_weeks'] > 0 and result['plurality'] > 0:
data = ','.join([str(result[k]) for k in CSV_COLUMNS])
result['key'] = hashlib.sha224(data).hexdigest()
yield result
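# ---------------------------------------------------------------------------
# Added illustration (not part of the original notebook): a quick local check
# of cleanup() on a hand-written row. The field values are assumptions chosen
# only to show the type coercion, race decoding and key hashing.
_demo_row = {'weight_pounds': 7.5, 'is_male': 'true', 'mother_age': 28,
             'mother_race': 1, 'plurality': 1, 'gestation_weeks': 39,
             'mother_married': 'true', 'cigarette_use': None,
             'alcohol_use': None, 'ever_born': 1, 'hashmonth': 123}
print(list(cleanup(_demo_row)))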
def preprocess(query, in_test_mode):
import os
import os.path
import tempfile
import tensorflow as tf
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'max_num_workers': 24,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'requirements_file': 'requirements.txt'
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# set up metadata
raw_data_schema = {
colname : dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
}
raw_data_schema.update({
colname : dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))
def read_rawdata(p, step, test_mode):
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) = 3'.format(query)
if in_test_mode:
selquery = selquery + ' LIMIT 100'
#print('Processing {} data from {}'.format(step, selquery))
return (p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True))
| '{}_cleanup'.format(step) >> beam.FlatMap(cleanup)
)
# run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# analyze and transform training
raw_data = read_rawdata(p, 'train', in_test_mode)
raw_dataset = (raw_data, raw_data_metadata)
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
_ = transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# transform eval data
raw_test_data = read_rawdata(p, 'eval', in_test_mode)
raw_test_dataset = (raw_test_data, raw_data_metadata)
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset())
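            # (Added note) The eval split reuses the transform_fn fitted on the
            # training data, so no statistics are computed from the eval set
            # and the exact same transform can later be attached for serving.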
transformed_test_data, _ = transformed_test_dataset
_ = transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
_ = (transform_fn
| 'WriteTransformFn' >>
transform_fn_io.WriteTransformFn(os.path.join(OUTPUT_DIR, 'metadata')))
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(query, in_test_mode=False)
%bash
gsutil ls gs://${BUCKET}/babyweight/preproc_tft/*-00000*
###Output
_____no_output_____
###Markdown
Preprocessing using tf.transform and Dataflow This notebook illustrates: Creating datasets for Machine Learning using tf.transform and DataflowWhile Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming. Apache Beam only works in Python 2 at the moment, so we're going to switch to the Python 2 kernel. In the above menu, click the dropdown arrow and select `python2`.  Then activate a Python 2 environment and install Apache Beam. Only specific combinations of TensorFlow/Beam are supported by tf.transform. So make sure to get a combo that is.* TFT 0.8.0* TF 1.8 or higher* Apache Beam [GCP] 2.5.0 or higher
###Code
%%bash
source activate py2env
conda install -y pytz
pip uninstall -y google-cloud-dataflow
pip install --upgrade --force tensorflow_transform==0.8.0 apache-beam[gcp]
%%bash
pip freeze | grep -e 'flow\|beam'
###Output
_____no_output_____
###Markdown
You need to restart your kernel to register the new installs running the below cells
###Code
import tensorflow as tf
import apache_beam as beam
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
!gcloud config set project $PROJECT
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
###Output
_____no_output_____
###Markdown
Save the query from earlier The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
###Code
query="""
SELECT
weight_pounds,
is_male,
mother_age,
mother_race,
plurality,
gestation_weeks,
mother_married,
ever_born,
cigarette_use,
alcohol_use,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
import google.datalab.bigquery as bq
df = bq.Query(query + " LIMIT 100").execute().result().to_dataframe()
df.head()
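# Illustration only (uses the 100-row sample in `df`, so the fraction is approximate):
# the pipeline below keeps rows with MOD(ABS(hashmonth), 4) < 3 for training, i.e.
# roughly three quarters of the year-month buckets, and routes the rest to eval.
train_frac = (df['hashmonth'].abs() % 4 < 3).mean()
print('fraction of sampled rows that would land in training: {:.2f}'.format(train_frac))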
###Output
_____no_output_____
###Markdown
Create ML dataset using tf.transform and Dataflow Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.Note that after you launch this, the notebook won't show you progress. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 30 minutes for me. If you wish to continue without doing this step, you can copy my preprocessed output:gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc_tft gs://your-bucket/
###Code
%writefile requirements.txt
tensorflow-transform==0.8.0
import datetime
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def preprocess_tft(inputs):
import copy
import numpy as np
def center(x):
return x - tft.mean(x)
result = copy.copy(inputs) # shallow copy
result['mother_age_tft'] = center(inputs['mother_age'])
result['gestation_weeks_centered'] = tft.scale_to_0_1(inputs['gestation_weeks'])
result['mother_race_tft'] = tft.string_to_int(inputs['mother_race'])
return result
#return inputs
def cleanup(rowdict):
import copy, hashlib
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,mother_race,plurality,gestation_weeks,mother_married,cigarette_use,alcohol_use'.split(',')
STR_COLUMNS = 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
FLT_COLUMNS = 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
# add any missing columns, and correct the types
def tofloat(value, ifnot):
try:
return float(value)
except (ValueError, TypeError):
return ifnot
result = {
k : str(rowdict[k]) if k in rowdict else 'None' for k in STR_COLUMNS
}
result.update({
k : tofloat(rowdict[k], -99) if k in rowdict else -99 for k in FLT_COLUMNS
})
# modify opaque numeric race code into human-readable data
races = dict(zip([1,2,3,4,5,6,7,18,28,39,48],
['White', 'Black', 'American Indian', 'Chinese',
'Japanese', 'Hawaiian', 'Filipino',
                    'Asian Indian', 'Korean', 'Samoan', 'Vietnamese']))
if 'mother_race' in rowdict and rowdict['mother_race'] in races:
result['mother_race'] = races[rowdict['mother_race']]
else:
result['mother_race'] = 'Unknown'
  # cleanup: write out only the data that we want to train on
if result['weight_pounds'] > 0 and result['mother_age'] > 0 and result['gestation_weeks'] > 0 and result['plurality'] > 0:
data = ','.join([str(result[k]) for k in CSV_COLUMNS])
result['key'] = hashlib.sha224(data).hexdigest()
yield result
def preprocess(query, in_test_mode):
import os
import os.path
import tempfile
import tensorflow as tf
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'max_num_workers': 24,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'requirements_file': 'requirements.txt'
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# set up metadata
raw_data_schema = {
colname : dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
}
raw_data_schema.update({
colname : dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))
def read_rawdata(p, step, test_mode):
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) = 3'.format(query)
if in_test_mode:
selquery = selquery + ' LIMIT 100'
#print('Processing {} data from {}'.format(step, selquery))
return (p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True))
| '{}_cleanup'.format(step) >> beam.FlatMap(cleanup)
)
# run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# analyze and transform training
raw_data = read_rawdata(p, 'train', in_test_mode)
raw_dataset = (raw_data, raw_data_metadata)
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
_ = transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# transform eval data
raw_test_data = read_rawdata(p, 'eval', in_test_mode)
raw_test_dataset = (raw_test_data, raw_data_metadata)
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset())
transformed_test_data, _ = transformed_test_dataset
_ = transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
_ = (transform_fn
| 'WriteTransformFn' >>
transform_fn_io.WriteTransformFn(os.path.join(OUTPUT_DIR, 'metadata')))
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(query, in_test_mode=False)
%bash
gsutil ls gs://${BUCKET}/babyweight/preproc_tft/*-00000*
###Output
_____no_output_____
###Markdown
Preprocessing using tf.transform and Dataflow This notebook illustrates: Creating datasets for Machine Learning using tf.transform and DataflowWhile Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming. Apache Beam only works in Python 2 at the moment, so we're going to switch to the Python 2 kernel. In the above menu, click the dropdown arrow and select `python2`.  Then activate a Python 2 environment and install Apache Beam. Only specific combinations of TensorFlow/Beam are supported by tf.transform. So make sure to get a combo that is.* TFT 0.8.0* TF 1.8 or higher* Apache Beam [GCP] 2.5.0 or higher
###Code
%%bash
conda update -y -n base -c defaults conda
source activate py2env
pip uninstall -y google-cloud-dataflow
conda install -y pytz
pip install apache-beam[gcp]==2.9.0
pip install apache-beam[gcp] tensorflow_transform==0.8.0
%%bash
pip freeze | grep -e 'flow\|beam'
###Output
apache-airflow==1.9.0
google-cloud-dataflow==2.0.0
tensorflow==1.8.0
###Markdown
You need to restart your kernel to register the new installs running the below cells
###Code
import tensorflow as tf
import apache_beam as beam
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'    # REPLACE WITH YOUR BUCKET NAME
PROJECT = 'cloud-training-demos'    # REPLACE WITH YOUR PROJECT ID
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
!gcloud config set project $PROJECT
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
###Output
_____no_output_____
###Markdown
Save the query from earlier The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
###Code
query="""
SELECT
weight_pounds,
is_male,
mother_age,
mother_race,
plurality,
gestation_weeks,
mother_married,
ever_born,
cigarette_use,
alcohol_use,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
import google.datalab.bigquery as bq
df = bq.Query(query + " LIMIT 100").execute().result().to_dataframe()
df.head()
###Output
_____no_output_____
###Markdown
Create ML dataset using tf.transform and Dataflow Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.Note that after you launch this, the notebook won't show you progress. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 30 minutes for me. If you wish to continue without doing this step, you can copy my preprocessed output:gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc_tft gs://your-bucket/
###Code
%writefile requirements.txt
tensorflow-transform==0.8.0
import datetime
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def preprocess_tft(inputs):
import copy
import numpy as np
def center(x):
return x - tft.mean(x)
result = copy.copy(inputs) # shallow copy
result['mother_age_tft'] = center(inputs['mother_age'])
result['gestation_weeks_centered'] = tft.scale_to_0_1(inputs['gestation_weeks'])
result['mother_race_tft'] = tft.string_to_int(inputs['mother_race'])
return result
#return inputs
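# For intuition only, not part of the pipeline: tft.scale_to_0_1 rescales a column to
# [0, 1] using the min/max computed over the whole training dataset during the analyze
# phase, and center() subtracts the full-dataset mean. A NumPy analogue on made-up
# gestation_weeks values:
import numpy as np
_demo = np.array([35., 36., 39., 41.], dtype=np.float32)
print((_demo - _demo.min()) / (_demo.max() - _demo.min()))   # what scale_to_0_1 does
print(_demo - _demo.mean())                                  # what center() does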
def cleanup(rowdict):
import copy, hashlib
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,mother_race,plurality,gestation_weeks,mother_married,cigarette_use,alcohol_use'.split(',')
STR_COLUMNS = 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
FLT_COLUMNS = 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
# add any missing columns, and correct the types
def tofloat(value, ifnot):
try:
return float(value)
except (ValueError, TypeError):
return ifnot
result = {
k : str(rowdict[k]) if k in rowdict else 'None' for k in STR_COLUMNS
}
result.update({
k : tofloat(rowdict[k], -99) if k in rowdict else -99 for k in FLT_COLUMNS
})
# modify opaque numeric race code into human-readable data
races = dict(zip([1,2,3,4,5,6,7,18,28,39,48],
['White', 'Black', 'American Indian', 'Chinese',
'Japanese', 'Hawaiian', 'Filipino',
                    'Asian Indian', 'Korean', 'Samoan', 'Vietnamese']))
if 'mother_race' in rowdict and rowdict['mother_race'] in races:
result['mother_race'] = races[rowdict['mother_race']]
else:
result['mother_race'] = 'Unknown'
  # cleanup: write out only the data that we want to train on
if result['weight_pounds'] > 0 and result['mother_age'] > 0 and result['gestation_weeks'] > 0 and result['plurality'] > 0:
data = ','.join([str(result[k]) for k in CSV_COLUMNS])
result['key'] = hashlib.sha224(data).hexdigest()
yield result
def preprocess(query, in_test_mode):
import os
import os.path
import tempfile
import tensorflow as tf
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'num_workers': 4,
'max_num_workers': 5,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'requirements_file': 'requirements.txt'
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# set up metadata
raw_data_schema = {
colname : dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
}
raw_data_schema.update({
colname : dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))
def read_rawdata(p, step, test_mode):
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
if in_test_mode:
selquery = selquery + ' LIMIT 100'
#print('Processing {} data from {}'.format(step, selquery))
return (p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True))
| '{}_cleanup'.format(step) >> beam.FlatMap(cleanup)
)
# run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# analyze and transform training
raw_data = read_rawdata(p, 'train', in_test_mode)
raw_dataset = (raw_data, raw_data_metadata)
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
_ = transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# transform eval data
raw_test_data = read_rawdata(p, 'eval', in_test_mode)
raw_test_dataset = (raw_test_data, raw_data_metadata)
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset())
transformed_test_data, _ = transformed_test_dataset
_ = transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
_ = (transform_fn
| 'WriteTransformFn' >>
transform_fn_io.WriteTransformFn(os.path.join(OUTPUT_DIR, 'metadata')))
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(query, in_test_mode=False)
%bash
gsutil ls gs://${BUCKET}/babyweight/preproc_tft/*-00000*
###Output
_____no_output_____
###Markdown
Preprocessing using tf.transform and Dataflow This notebook illustrates: Creating datasets for Machine Learning using tf.transform and DataflowWhile Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming. Apache Beam only works in Python 2 at the moment, so we're going to switch to the Python 2 kernel. In the above menu, click the dropdown arrow and select `python2`.  Then activate a Python 2 environment and install Apache Beam. Only specific combinations of TensorFlow/Beam are supported by tf.transform. So make sure to get a combo that is.* TFT 0.8.0* TF 1.8 or higher* Apache Beam [GCP] 2.5.0 or higher
###Code
%%bash
conda update -y -n base -c defaults conda
source activate py2env
pip uninstall -y google-cloud-dataflow
conda install -y pytz
pip install apache-beam[gcp]==2.9.0
pip install apache-beam[gcp] tensorflow_transform==0.8.0
%%bash
pip freeze | grep -e 'flow\|beam'
###Output
apache-airflow==1.9.0
google-cloud-dataflow==2.0.0
tensorflow==1.8.0
###Markdown
You need to restart your kernel to register the new installs running the below cells
###Code
import tensorflow as tf
import apache_beam as beam
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'    # REPLACE WITH YOUR BUCKET NAME
PROJECT = 'cloud-training-demos'    # REPLACE WITH YOUR PROJECT ID
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
!gcloud config set project $PROJECT
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
###Output
_____no_output_____
###Markdown
Save the query from earlier The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
###Code
query="""
SELECT
weight_pounds,
is_male,
mother_age,
mother_race,
plurality,
gestation_weeks,
mother_married,
ever_born,
cigarette_use,
alcohol_use,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
import google.datalab.bigquery as bq
df = bq.Query(query + " LIMIT 100").execute().result().to_dataframe()
df.head()
###Output
_____no_output_____
###Markdown
Create ML dataset using tf.transform and Dataflow Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.Note that after you launch this, the notebook won't show you progress. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 30 minutes for me. If you wish to continue without doing this step, you can copy my preprocessed output:gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc_tft gs://your-bucket/
###Code
%writefile requirements.txt
tensorflow-transform==0.8.0
import datetime
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def preprocess_tft(inputs):
import copy
import numpy as np
def center(x):
return x - tft.mean(x)
result = copy.copy(inputs) # shallow copy
result['mother_age_tft'] = center(inputs['mother_age'])
result['gestation_weeks_centered'] = tft.scale_to_0_1(inputs['gestation_weeks'])
result['mother_race_tft'] = tft.string_to_int(inputs['mother_race'])
return result
#return inputs
def cleanup(rowdict):
import copy, hashlib
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,mother_race,plurality,gestation_weeks,mother_married,cigarette_use,alcohol_use'.split(',')
STR_COLUMNS = 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
FLT_COLUMNS = 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
# add any missing columns, and correct the types
def tofloat(value, ifnot):
try:
return float(value)
except (ValueError, TypeError):
return ifnot
result = {
k : str(rowdict[k]) if k in rowdict else 'None' for k in STR_COLUMNS
}
result.update({
k : tofloat(rowdict[k], -99) if k in rowdict else -99 for k in FLT_COLUMNS
})
# modify opaque numeric race code into human-readable data
races = dict(zip([1,2,3,4,5,6,7,18,28,39,48],
['White', 'Black', 'American Indian', 'Chinese',
'Japanese', 'Hawaiian', 'Filipino',
                    'Asian Indian', 'Korean', 'Samoan', 'Vietnamese']))
if 'mother_race' in rowdict and rowdict['mother_race'] in races:
result['mother_race'] = races[rowdict['mother_race']]
else:
result['mother_race'] = 'Unknown'
  # cleanup: write out only the data that we want to train on
if result['weight_pounds'] > 0 and result['mother_age'] > 0 and result['gestation_weeks'] > 0 and result['plurality'] > 0:
data = ','.join([str(result[k]) for k in CSV_COLUMNS])
result['key'] = hashlib.sha224(data).hexdigest()
yield result
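# Quick illustration with a made-up CSV row (not data from the pipeline): the example
# key is simply the SHA-224 digest of the comma-joined training columns, which gives
# each row a deterministic identifier. .encode() keeps the snippet valid on Python 2
# and 3; the Python 2 pipeline code above hashes the str directly.
import hashlib
_sample_row = '7.5,True,29.0,White,1.0,38.0,True,None,None'
print(hashlib.sha224(_sample_row.encode('utf-8')).hexdigest())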
def preprocess(query, in_test_mode):
import os
import os.path
import tempfile
import tensorflow as tf
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'max_num_workers': 6,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'requirements_file': 'requirements.txt'
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# set up metadata
raw_data_schema = {
colname : dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
}
raw_data_schema.update({
colname : dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))
def read_rawdata(p, step, test_mode):
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) = 3'.format(query)
if in_test_mode:
selquery = selquery + ' LIMIT 100'
#print('Processing {} data from {}'.format(step, selquery))
return (p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True))
| '{}_cleanup'.format(step) >> beam.FlatMap(cleanup)
)
# run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# analyze and transform training
raw_data = read_rawdata(p, 'train', in_test_mode)
raw_dataset = (raw_data, raw_data_metadata)
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
_ = transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# transform eval data
raw_test_data = read_rawdata(p, 'eval', in_test_mode)
raw_test_dataset = (raw_test_data, raw_data_metadata)
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset())
transformed_test_data, _ = transformed_test_dataset
_ = transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
_ = (transform_fn
| 'WriteTransformFn' >>
transform_fn_io.WriteTransformFn(os.path.join(OUTPUT_DIR, 'metadata')))
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(query, in_test_mode=False)
%bash
gsutil ls gs://${BUCKET}/babyweight/preproc_tft/*-00000*
###Output
_____no_output_____
###Markdown
Preprocessing using tf.transform and Dataflow This notebook illustrates: Creating datasets for Machine Learning using tf.transform and DataflowWhile Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming. Apache Beam only works in Python 2 at the moment, so we're going to switch to the Python 2 kernel. In the above menu, click the dropdown arrow and select `python2`.  Then activate a Python 2 environment and install Apache Beam. Only specific combinations of TensorFlow/Beam are supported by tf.transform. So make sure to get a combo that is.* TFT 0.8.0* TF 1.8 or higher* Apache Beam [GCP] 2.5.0 or higher
###Code
%%bash
conda update -y -n base -c defaults conda
source activate py2env
pip uninstall -y google-cloud-dataflow
conda install -y pytz
pip install apache-beam[gcp]==2.9.0
pip install apache-beam[gcp] tensorflow_transform==0.8.0
%%bash
pip freeze | grep -e 'flow\|beam'
###Output
apache-airflow==1.9.0
google-cloud-dataflow==2.0.0
tensorflow==1.8.0
###Markdown
You need to restart your kernel to register the new installs running the below cells
###Code
import tensorflow as tf
import apache_beam as beam
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'    # REPLACE WITH YOUR BUCKET NAME
PROJECT = 'cloud-training-demos'    # REPLACE WITH YOUR PROJECT ID
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
!gcloud config set project $PROJECT
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
###Output
_____no_output_____
###Markdown
Save the query from earlier The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
###Code
query="""
SELECT
weight_pounds,
is_male,
mother_age,
mother_race,
plurality,
gestation_weeks,
mother_married,
ever_born,
cigarette_use,
alcohol_use,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
import google.datalab.bigquery as bq
df = bq.Query(query + " LIMIT 100").execute().result().to_dataframe()
df.head()
###Output
_____no_output_____
###Markdown
Create ML dataset using tf.transform and Dataflow Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.Note that after you launch this, the notebook won't show you progress. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 30 minutes for me. If you wish to continue without doing this step, you can copy my preprocessed output:gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc_tft gs://your-bucket/
###Code
%writefile requirements.txt
tensorflow-transform==0.8.0
import datetime
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def preprocess_tft(inputs):
import copy
import numpy as np
def center(x):
return x - tft.mean(x)
result = copy.copy(inputs) # shallow copy
result['mother_age_tft'] = center(inputs['mother_age'])
result['gestation_weeks_centered'] = tft.scale_to_0_1(inputs['gestation_weeks'])
result['mother_race_tft'] = tft.string_to_int(inputs['mother_race'])
return result
#return inputs
def cleanup(rowdict):
import copy, hashlib
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,mother_race,plurality,gestation_weeks,mother_married,cigarette_use,alcohol_use'.split(',')
STR_COLUMNS = 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
FLT_COLUMNS = 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
# add any missing columns, and correct the types
def tofloat(value, ifnot):
try:
return float(value)
except (ValueError, TypeError):
return ifnot
result = {
k : str(rowdict[k]) if k in rowdict else 'None' for k in STR_COLUMNS
}
result.update({
k : tofloat(rowdict[k], -99) if k in rowdict else -99 for k in FLT_COLUMNS
})
# modify opaque numeric race code into human-readable data
races = dict(zip([1,2,3,4,5,6,7,18,28,39,48],
['White', 'Black', 'American Indian', 'Chinese',
'Japanese', 'Hawaiian', 'Filipino',
                    'Asian Indian', 'Korean', 'Samoan', 'Vietnamese']))
if 'mother_race' in rowdict and rowdict['mother_race'] in races:
result['mother_race'] = races[rowdict['mother_race']]
else:
result['mother_race'] = 'Unknown'
  # cleanup: write out only the data that we want to train on
if result['weight_pounds'] > 0 and result['mother_age'] > 0 and result['gestation_weeks'] > 0 and result['plurality'] > 0:
data = ','.join([str(result[k]) for k in CSV_COLUMNS])
result['key'] = hashlib.sha224(data).hexdigest()
yield result
def preprocess(query, in_test_mode):
import os
import os.path
import tempfile
import tensorflow as tf
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
    'max_num_workers': 6,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'requirements_file': 'requirements.txt'
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# set up metadata
raw_data_schema = {
colname : dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
}
raw_data_schema.update({
colname : dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))
def read_rawdata(p, step, test_mode):
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) = 3'.format(query)
if in_test_mode:
selquery = selquery + ' LIMIT 100'
#print('Processing {} data from {}'.format(step, selquery))
return (p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True))
| '{}_cleanup'.format(step) >> beam.FlatMap(cleanup)
)
# run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# analyze and transform training
raw_data = read_rawdata(p, 'train', in_test_mode)
raw_dataset = (raw_data, raw_data_metadata)
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
_ = transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# transform eval data
raw_test_data = read_rawdata(p, 'eval', in_test_mode)
raw_test_dataset = (raw_test_data, raw_data_metadata)
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset())
transformed_test_data, _ = transformed_test_dataset
_ = transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
_ = (transform_fn
| 'WriteTransformFn' >>
transform_fn_io.WriteTransformFn(os.path.join(OUTPUT_DIR, 'metadata')))
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(query, in_test_mode=False)
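# Optional sanity check, a sketch that assumes the Dataflow job above has finished and
# that this notebook can read the bucket: pull one transformed training example and
# confirm that the tf.transform features (e.g. gestation_weeks_centered) were written.
import tensorflow as tf
_files = tf.gfile.Glob('gs://{}/babyweight/preproc_tft/train-00000*'.format(BUCKET))
if _files:
  _record = next(tf.python_io.tf_record_iterator(_files[0]))
  _example = tf.train.Example.FromString(_record)
  print(sorted(_example.features.feature.keys()))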
%bash
gsutil ls gs://${BUCKET}/babyweight/preproc_tft/*-00000*
###Output
_____no_output_____
###Markdown
Preprocessing using tf.transform and Dataflow This notebook illustrates: Creating datasets for Machine Learning using tf.transform and DataflowWhile Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.Only specific combinations of TensorFlow/Beam are supported by tf.transform. So make sure to get a combo that is.* TFT 0.8.0* TF 1.8 or higher* Apache Beam [GCP] 2.5.0 or higher
###Code
%bash
pip uninstall -y google-cloud-dataflow
pip install --upgrade --force tensorflow_transform==0.8.0 apache-beam[gcp]
%bash
pip freeze | grep -e 'flow\|beam'
###Output
_____no_output_____
###Markdown
You need to restart your kernel to register the new installs running the below cells
###Code
import tensorflow as tf
import apache_beam as beam
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
!gcloud config set project $PROJECT
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
###Output
_____no_output_____
###Markdown
Save the query from earlier The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
###Code
query="""
SELECT
weight_pounds,
is_male,
mother_age,
mother_race,
plurality,
gestation_weeks,
mother_married,
ever_born,
cigarette_use,
alcohol_use,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
import google.datalab.bigquery as bq
df = bq.Query(query + " LIMIT 100").execute().result().to_dataframe()
df.head()
###Output
_____no_output_____
###Markdown
Create ML dataset using tf.transform and Dataflow Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.Note that after you launch this, the notebook won't show you progress. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 30 minutes for me. If you wish to continue without doing this step, you can copy my preprocessed output:gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc_tft gs://your-bucket/
###Code
%writefile requirements.txt
tensorflow-transform==0.8.0
import datetime
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def preprocess_tft(inputs):
import copy
import numpy as np
def center(x):
return x - tft.mean(x)
result = copy.copy(inputs) # shallow copy
result['mother_age_tft'] = center(inputs['mother_age'])
result['gestation_weeks_centered'] = tft.scale_to_0_1(inputs['gestation_weeks'])
result['mother_race_tft'] = tft.string_to_int(inputs['mother_race'])
return result
#return inputs
def cleanup(rowdict):
import copy, hashlib
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,mother_race,plurality,gestation_weeks,mother_married,cigarette_use,alcohol_use'.split(',')
STR_COLUMNS = 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
FLT_COLUMNS = 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
# add any missing columns, and correct the types
def tofloat(value, ifnot):
try:
return float(value)
except (ValueError, TypeError):
return ifnot
result = {
k : str(rowdict[k]) if k in rowdict else 'None' for k in STR_COLUMNS
}
result.update({
k : tofloat(rowdict[k], -99) if k in rowdict else -99 for k in FLT_COLUMNS
})
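  # Numeric fields that are missing or unparseable fall back to the -99 sentinel here;
  # the > 0 filter at the end of this function then drops those rows from the output.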
# modify opaque numeric race code into human-readable data
races = dict(zip([1,2,3,4,5,6,7,18,28,39,48],
['White', 'Black', 'American Indian', 'Chinese',
'Japanese', 'Hawaiian', 'Filipino',
                    'Asian Indian', 'Korean', 'Samoan', 'Vietnamese']))
if 'mother_race' in rowdict and rowdict['mother_race'] in races:
result['mother_race'] = races[rowdict['mother_race']]
else:
result['mother_race'] = 'Unknown'
  # cleanup: write out only the data that we want to train on
if result['weight_pounds'] > 0 and result['mother_age'] > 0 and result['gestation_weeks'] > 0 and result['plurality'] > 0:
data = ','.join([str(result[k]) for k in CSV_COLUMNS])
result['key'] = hashlib.sha224(data).hexdigest()
yield result
def preprocess(query, in_test_mode):
import os
import os.path
import tempfile
import tensorflow as tf
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print 'Launching local job ... hang on'
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
else:
print 'Launching Dataflow job {} ... hang on'.format(job_name)
OUTPUT_DIR = 'gs://{0}/babyweight/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'max_num_workers': 24,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'requirements_file': 'requirements.txt'
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# set up metadata
raw_data_schema = {
colname : dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
}
raw_data_schema.update({
colname : dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))
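  # This schema has to agree with what cleanup() emits: the six string columns
  # (including the generated key) as tf.string and the four numeric columns as
  # tf.float32. tf.transform uses it to interpret the raw PCollection during the
  # analyze-and-transform step below.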
def read_rawdata(p, step, test_mode):
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) = 3'.format(query)
if in_test_mode:
selquery = selquery + ' LIMIT 100'
#print 'Processing {} data from {}'.format(step, selquery)
return (p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True))
| '{}_cleanup'.format(step) >> beam.FlatMap(cleanup)
)
# run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# analyze and transform training
raw_data = read_rawdata(p, 'train', in_test_mode)
raw_dataset = (raw_data, raw_data_metadata)
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
_ = transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# transform eval data
raw_test_data = read_rawdata(p, 'eval', in_test_mode)
raw_test_dataset = (raw_test_data, raw_data_metadata)
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset())
transformed_test_data, _ = transformed_test_dataset
_ = transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
_ = (transform_fn
| 'WriteTransformFn' >>
transform_fn_io.WriteTransformFn(os.path.join(OUTPUT_DIR, 'metadata')))
job = p.run()
if in_test_mode:
job.wait_until_finish()
print "Done!"
preprocess(query, in_test_mode=False)
%bash
gsutil ls gs://${BUCKET}/babyweight/preproc_tft/*-00000*
###Output
_____no_output_____
###Markdown
Preprocessing using tf.transform and Dataflow This notebook illustrates: Creating datasets for Machine Learning using tf.transform and DataflowWhile Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming. Apache Beam only works in Python 2 at the moment, so we're going to switch to the Python 2 kernel. In the above menu, click the dropdown arrow and select `python2`.  Then activate a Python 2 environment and install Apache Beam. Only specific combinations of TensorFlow/Beam are supported by tf.transform. So make sure to get a combo that is.* TFT 0.8.0* TF 1.8 or higher* Apache Beam [GCP] 2.5.0 or higher
###Code
%%bash
conda update -y -n base -c defaults conda
source activate py2env
pip uninstall -y google-cloud-dataflow
conda install -y pytz
pip install apache-beam[gcp]==2.9.0
pip install apache-beam[gcp] tensorflow_transform==0.8.0
%%bash
pip freeze | grep -e 'flow\|beam'
###Output
apache-airflow==1.9.0
google-cloud-dataflow==2.0.0
tensorflow==1.8.0
###Markdown
You need to restart your kernel to register the new installs running the below cells
###Code
import tensorflow as tf
import apache_beam as beam
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'    # REPLACE WITH YOUR BUCKET NAME
PROJECT = 'cloud-training-demos'    # REPLACE WITH YOUR PROJECT ID
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
!gcloud config set project $PROJECT
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
###Output
_____no_output_____
###Markdown
Save the query from earlier The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
###Code
query="""
SELECT
weight_pounds,
is_male,
mother_age,
mother_race,
plurality,
gestation_weeks,
mother_married,
ever_born,
cigarette_use,
alcohol_use,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
import google.datalab.bigquery as bq
df = bq.Query(query + " LIMIT 100").execute().result().to_dataframe()
df.head()
###Output
_____no_output_____
###Markdown
Create ML dataset using tf.transform and Dataflow Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.Note that after you launch this, the notebook won't show you progress. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 30 minutes for me. If you wish to continue without doing this step, you can copy my preprocessed output:gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc_tft gs://your-bucket/
###Code
%writefile requirements.txt
tensorflow-transform==0.8.0
import datetime
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def preprocess_tft(inputs):
import copy
import numpy as np
def center(x):
return x - tft.mean(x)
result = copy.copy(inputs) # shallow copy
result['mother_age_tft'] = center(inputs['mother_age'])
result['gestation_weeks_centered'] = tft.scale_to_0_1(inputs['gestation_weeks'])
result['mother_race_tft'] = tft.string_to_int(inputs['mother_race'])
return result
#return inputs
def cleanup(rowdict):
import copy, hashlib
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,mother_race,plurality,gestation_weeks,mother_married,cigarette_use,alcohol_use'.split(',')
STR_COLUMNS = 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
FLT_COLUMNS = 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
# add any missing columns, and correct the types
def tofloat(value, ifnot):
try:
return float(value)
except (ValueError, TypeError):
return ifnot
result = {
k : str(rowdict[k]) if k in rowdict else 'None' for k in STR_COLUMNS
}
result.update({
k : tofloat(rowdict[k], -99) if k in rowdict else -99 for k in FLT_COLUMNS
})
# modify opaque numeric race code into human-readable data
races = dict(zip([1,2,3,4,5,6,7,18,28,39,48],
['White', 'Black', 'American Indian', 'Chinese',
'Japanese', 'Hawaiian', 'Filipino',
                    'Asian Indian', 'Korean', 'Samoan', 'Vietnamese']))
if 'mother_race' in rowdict and rowdict['mother_race'] in races:
result['mother_race'] = races[rowdict['mother_race']]
else:
result['mother_race'] = 'Unknown'
  # cleanup: write out only the data that we want to train on
if result['weight_pounds'] > 0 and result['mother_age'] > 0 and result['gestation_weeks'] > 0 and result['plurality'] > 0:
data = ','.join([str(result[k]) for k in CSV_COLUMNS])
result['key'] = hashlib.sha224(data).hexdigest()
yield result
def preprocess(query, in_test_mode):
import os
import os.path
import tempfile
import tensorflow as tf
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'num_workers': 4,
'max_num_workers': 5,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'requirements_file': 'requirements.txt'
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# set up metadata
raw_data_schema = {
colname : dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
}
raw_data_schema.update({
colname : dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))
def read_rawdata(p, step, test_mode):
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
if in_test_mode:
selquery = selquery + ' LIMIT 100'
#print('Processing {} data from {}'.format(step, selquery))
return (p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True))
| '{}_cleanup'.format(step) >> beam.FlatMap(cleanup)
)
# run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# analyze and transform training
raw_data = read_rawdata(p, 'train', in_test_mode)
raw_dataset = (raw_data, raw_data_metadata)
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
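      # AnalyzeAndTransformDataset makes a full pass over the training data to compute
      # the analyzers (mean, min/max, vocabulary) and then applies them; the eval data
      # below only goes through TransformDataset with the same transform_fn, so it is
      # never re-analyzed.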
_ = transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# transform eval data
raw_test_data = read_rawdata(p, 'eval', in_test_mode)
raw_test_dataset = (raw_test_data, raw_data_metadata)
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset())
transformed_test_data, _ = transformed_test_dataset
_ = transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
_ = (transform_fn
| 'WriteTransformFn' >>
transform_fn_io.WriteTransformFn(os.path.join(OUTPUT_DIR, 'metadata')))
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(query, in_test_mode=False)
%bash
gsutil ls gs://${BUCKET}/babyweight/preproc_tft/*-00000*
###Output
_____no_output_____
###Markdown
Preprocessing using tf.transform and Dataflow This notebook illustrates: Creating datasets for Machine Learning using tf.transform and DataflowWhile Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.Only specific combinations of TensorFlow/Beam are supported by tf.transform. So make sure to get a combo that is.* TFT 0.4.0* TF 1.4 or higher* Apache Beam [GCP] 2.2.0 or higher
###Code
%bash
pip uninstall -y google-cloud-dataflow
pip install --upgrade --force tensorflow_transform==0.4.0 apache-beam[gcp]
%bash
pip freeze | grep -e 'flow\|beam'
###Output
_____no_output_____
###Markdown
You need to restart your kernel to register the new installs running the below cells
###Code
import tensorflow as tf
import apache_beam as beam
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
!gcloud config set project $PROJECT
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
###Output
_____no_output_____
###Markdown
Save the query from earlier The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
###Code
query="""
SELECT
weight_pounds,
is_male,
mother_age,
mother_race,
plurality,
gestation_weeks,
mother_married,
ever_born,
cigarette_use,
alcohol_use,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
import google.datalab.bigquery as bq
df = bq.Query(query + " LIMIT 100").execute().result().to_dataframe()
df.head()
###Output
_____no_output_____
###Markdown
Create ML dataset using tf.transform and Dataflow Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.Note that after you launch this, the notebook won't show you progress. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 30 minutes for me. If you wish to continue without doing this step, you can copy my preprocessed output:gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc_tft gs://your-bucket/
###Code
%writefile requirements.txt
tensorflow-transform==0.4.0
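# Dataflow workers do not have tensorflow-transform preinstalled; the
# 'requirements_file' pipeline option below points at this file so that each worker
# pip-installs the same version before running the job.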
import datetime
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def preprocess_tft(inputs):
import copy
import numpy as np
def center(x):
return x - tft.mean(x)
result = copy.copy(inputs) # shallow copy
result['mother_age_tft'] = center(inputs['mother_age'])
result['gestation_weeks_centered'] = tft.scale_to_0_1(inputs['gestation_weeks'])
result['mother_race_tft'] = tft.string_to_int(inputs['mother_race'])
return result
#return inputs
def cleanup(rowdict):
import copy, hashlib
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,mother_race,plurality,gestation_weeks,mother_married,cigarette_use,alcohol_use'.split(',')
STR_COLUMNS = 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
FLT_COLUMNS = 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
# add any missing columns, and correct the types
def tofloat(value, ifnot):
try:
return float(value)
except (ValueError, TypeError):
return ifnot
result = {
k : str(rowdict[k]) if k in rowdict else 'None' for k in STR_COLUMNS
}
result.update({
k : tofloat(rowdict[k], -99) if k in rowdict else -99 for k in FLT_COLUMNS
})
# modify opaque numeric race code into human-readable data
races = dict(zip([1,2,3,4,5,6,7,18,28,39,48],
['White', 'Black', 'American Indian', 'Chinese',
'Japanese', 'Hawaiian', 'Filipino',
'Asian Indian', 'Korean', 'Samoan', 'Vietnamese']))
if 'mother_race' in rowdict and rowdict['mother_race'] in races:
result['mother_race'] = races[rowdict['mother_race']]
else:
result['mother_race'] = 'Unknown'
# cleanup: write out only the data that we want to train on
if result['weight_pounds'] > 0 and result['mother_age'] > 0 and result['gestation_weeks'] > 0 and result['plurality'] > 0:
data = ','.join([str(result[k]) for k in CSV_COLUMNS])
result['key'] = hashlib.sha224(data).hexdigest()
yield result
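# --- Illustrative sanity check (my own addition, not part of the pipeline) ----
# Running cleanup() on a single hypothetical row should yield one cleaned dict
# with the race code mapped to a readable label and a sha224 'key' added.
_example_row = {'weight_pounds': 7.5, 'is_male': 'True', 'mother_age': 28,
                'mother_race': 1, 'plurality': 1, 'gestation_weeks': 39,
                'mother_married': 'True', 'cigarette_use': 'False', 'alcohol_use': 'False'}
print(list(cleanup(_example_row)))
# ------------------------------------------------------------------------------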
def preprocess(query, in_test_mode):
import os
import os.path
import tempfile
import tensorflow as tf
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print 'Launching local job ... hang on'
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
else:
print 'Launching Dataflow job {} ... hang on'.format(job_name)
OUTPUT_DIR = 'gs://{0}/babyweight/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'max_num_workers': 24,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'requirements_file': 'requirements.txt'
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# set up metadata
raw_data_schema = {
colname : dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
}
raw_data_schema.update({
colname : dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))
def read_rawdata(p, step, test_mode):
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) = 3'.format(query)
if in_test_mode:
selquery = selquery + ' LIMIT 100'
#print 'Processing {} data from {}'.format(step, selquery)
return (p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True))
| '{}_cleanup'.format(step) >> beam.FlatMap(cleanup)
)
# run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# analyze and transform training
raw_data = read_rawdata(p, 'train', in_test_mode)
raw_dataset = (raw_data, raw_data_metadata)
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
_ = transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# transform eval data
raw_test_data = read_rawdata(p, 'eval', in_test_mode)
raw_test_dataset = (raw_test_data, raw_data_metadata)
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset())
transformed_test_data, _ = transformed_test_dataset
_ = transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
_ = (transform_fn
| 'WriteTransformFn' >>
transform_fn_io.WriteTransformFn(os.path.join(OUTPUT_DIR, 'metadata')))
job = p.run()
if in_test_mode:
job.wait_until_finish()
print "Done!"
preprocess(query, in_test_mode=False)
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc_tft/*-00000*
###Output
_____no_output_____
###Markdown
Preprocessing using tf.transform and Dataflow
This notebook illustrates:
* Creating datasets for Machine Learning using tf.transform and Dataflow
While Pandas is fine for experimenting, for operationalization of your workflow it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.
Apache Beam only works in Python 2 at the moment, so we're going to switch to the Python 2 kernel. In the menu above, click the dropdown arrow and select `python2`.
Then activate a Python 2 environment and install Apache Beam. Only specific combinations of TensorFlow/Beam are supported by tf.transform, so make sure to install a supported combination:
* TFT 0.8.0
* TF 1.8 or higher
* Apache Beam [GCP] 2.5.0 or higher
###Code
%%bash
conda update -y -n base -c defaults conda
source activate py2env
pip uninstall -y google-cloud-dataflow
conda install -y pytz
pip install apache-beam[gcp]==2.9.0
pip install apache-beam[gcp] tensorflow_transform==0.8.0
%%bash
pip freeze | grep -e 'flow\|beam'
###Output
apache-airflow==1.9.0
google-cloud-dataflow==2.0.0
tensorflow==1.8.0
###Markdown
You need to restart your kernel to register the new installs before running the cells below
###Code
import tensorflow as tf
import apache_beam as beam
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
!gcloud config set project $PROJECT
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
###Output
_____no_output_____
###Markdown
Save the query from earlier The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
###Code
query="""
SELECT
weight_pounds,
is_male,
mother_age,
mother_race,
plurality,
gestation_weeks,
mother_married,
ever_born,
cigarette_use,
alcohol_use,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
import google.datalab.bigquery as bq
df = bq.Query(query + " LIMIT 100").execute().result().to_dataframe()
df.head()
###Output
_____no_output_____
###Markdown
Create ML dataset using tf.transform and Dataflow Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.Note that after you launch this, the notebook won't show you progress. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 30 minutes for me. If you wish to continue without doing this step, you can copy my preprocessed output:gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc_tft gs://your-bucket/
###Code
%%writefile requirements.txt
tensorflow-transform==0.8.0
import datetime
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def preprocess_tft(inputs):
import copy
import numpy as np
def center(x):
return x - tft.mean(x)
result = copy.copy(inputs) # shallow copy
result['mother_age_tft'] = center(inputs['mother_age'])
result['gestation_weeks_centered'] = tft.scale_to_0_1(inputs['gestation_weeks'])
result['mother_race_tft'] = tft.string_to_int(inputs['mother_race'])
return result
#return inputs
def cleanup(rowdict):
import copy, hashlib
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,mother_race,plurality,gestation_weeks,mother_married,cigarette_use,alcohol_use'.split(',')
STR_COLUMNS = 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
FLT_COLUMNS = 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
# add any missing columns, and correct the types
def tofloat(value, ifnot):
try:
return float(value)
except (ValueError, TypeError):
return ifnot
result = {
k : str(rowdict[k]) if k in rowdict else 'None' for k in STR_COLUMNS
}
result.update({
k : tofloat(rowdict[k], -99) if k in rowdict else -99 for k in FLT_COLUMNS
})
# modify opaque numeric race code into human-readable data
races = dict(zip([1,2,3,4,5,6,7,18,28,39,48],
['White', 'Black', 'American Indian', 'Chinese',
'Japanese', 'Hawaiian', 'Filipino',
'Asian Indian', 'Korean', 'Samoan', 'Vietnamese']))
if 'mother_race' in rowdict and rowdict['mother_race'] in races:
result['mother_race'] = races[rowdict['mother_race']]
else:
result['mother_race'] = 'Unknown'
# cleanup: write out only the data that we want to train on
if result['weight_pounds'] > 0 and result['mother_age'] > 0 and result['gestation_weeks'] > 0 and result['plurality'] > 0:
data = ','.join([str(result[k]) for k in CSV_COLUMNS])
result['key'] = hashlib.sha224(data).hexdigest()
yield result
def preprocess(query, in_test_mode):
import os
import os.path
import tempfile
import tensorflow as tf
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'max_num_workers': 6,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'requirements_file': 'requirements.txt'
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# set up metadata
raw_data_schema = {
colname : dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
}
raw_data_schema.update({
colname : dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))
def read_rawdata(p, step, test_mode):
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
if in_test_mode:
selquery = selquery + ' LIMIT 100'
#print('Processing {} data from {}'.format(step, selquery))
return (p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True))
| '{}_cleanup'.format(step) >> beam.FlatMap(cleanup)
)
# run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# analyze and transform training
raw_data = read_rawdata(p, 'train', in_test_mode)
raw_dataset = (raw_data, raw_data_metadata)
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
_ = transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# transform eval data
raw_test_data = read_rawdata(p, 'eval', in_test_mode)
raw_test_dataset = (raw_test_data, raw_data_metadata)
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset())
transformed_test_data, _ = transformed_test_dataset
_ = transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
_ = (transform_fn
| 'WriteTransformFn' >>
transform_fn_io.WriteTransformFn(os.path.join(OUTPUT_DIR, 'metadata')))
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(query, in_test_mode=False)
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc_tft/*-00000*
###Output
_____no_output_____
###Markdown
Preprocessing using tf.transform and Dataflow
This notebook illustrates:
* Creating datasets for Machine Learning using tf.transform and Dataflow
While Pandas is fine for experimenting, for operationalization of your workflow it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.
Apache Beam only works in Python 2 at the moment, so we're going to switch to the Python 2 kernel. In the menu above, click the dropdown arrow and select `python2`.
Then activate a Python 2 environment and install Apache Beam. Only specific combinations of TensorFlow/Beam are supported by tf.transform, so make sure to install a supported combination:
* TFT 0.8.0
* TF 1.8 or higher
* Apache Beam [GCP] 2.5.0 or higher
###Code
%%bash
source activate py2env
pip uninstall -y google-cloud-dataflow
conda install -y pytz==2018.4
pip install apache-beam[gcp] tensorflow_transform==0.8.0
%%bash
pip freeze | grep -e 'flow\|beam'
###Output
apache-airflow==1.9.0
google-cloud-dataflow==2.0.0
tensorflow==1.8.0
###Markdown
You need to restart your kernel to register the new installs before running the cells below
###Code
import tensorflow as tf
import apache_beam as beam
print(tf.__version__)
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
!gcloud config set project $PROJECT
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
###Output
_____no_output_____
###Markdown
Save the query from earlier The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
###Code
query="""
SELECT
weight_pounds,
is_male,
mother_age,
mother_race,
plurality,
gestation_weeks,
mother_married,
ever_born,
cigarette_use,
alcohol_use,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
"""
import google.datalab.bigquery as bq
df = bq.Query(query + " LIMIT 100").execute().result().to_dataframe()
df.head()
###Output
_____no_output_____
###Markdown
Create ML dataset using tf.transform and Dataflow Let's use Cloud Dataflow to read in the BigQuery data and write it out as CSV files. Along the way, let's use tf.transform to do scaling and transforming. Using tf.transform allows us to save the metadata to ensure that the appropriate transformations get carried out during prediction as well.Note that after you launch this, the notebook won't show you progress. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 30 minutes for me. If you wish to continue without doing this step, you can copy my preprocessed output:gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc_tft gs://your-bucket/
###Code
%%writefile requirements.txt
tensorflow-transform==0.8.0
import datetime
import apache_beam as beam
import tensorflow_transform as tft
from tensorflow_transform.beam import impl as beam_impl
def preprocess_tft(inputs):
import copy
import numpy as np
def center(x):
return x - tft.mean(x)
result = copy.copy(inputs) # shallow copy
result['mother_age_tft'] = center(inputs['mother_age'])
result['gestation_weeks_centered'] = tft.scale_to_0_1(inputs['gestation_weeks'])
result['mother_race_tft'] = tft.string_to_int(inputs['mother_race'])
return result
#return inputs
def cleanup(rowdict):
import copy, hashlib
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,mother_race,plurality,gestation_weeks,mother_married,cigarette_use,alcohol_use'.split(',')
STR_COLUMNS = 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
FLT_COLUMNS = 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
# add any missing columns, and correct the types
def tofloat(value, ifnot):
try:
return float(value)
except (ValueError, TypeError):
return ifnot
result = {
k : str(rowdict[k]) if k in rowdict else 'None' for k in STR_COLUMNS
}
result.update({
k : tofloat(rowdict[k], -99) if k in rowdict else -99 for k in FLT_COLUMNS
})
# modify opaque numeric race code into human-readable data
races = dict(zip([1,2,3,4,5,6,7,18,28,39,48],
['White', 'Black', 'American Indian', 'Chinese',
'Japanese', 'Hawaiian', 'Filipino',
'Asian Indian', 'Korean', 'Samoan', 'Vietnamese']))
if 'mother_race' in rowdict and rowdict['mother_race'] in races:
result['mother_race'] = races[rowdict['mother_race']]
else:
result['mother_race'] = 'Unknown'
# cleanup: write out only the data that we want to train on
if result['weight_pounds'] > 0 and result['mother_age'] > 0 and result['gestation_weeks'] > 0 and result['plurality'] > 0:
data = ','.join([str(result[k]) for k in CSV_COLUMNS])
result['key'] = hashlib.sha224(data).hexdigest()
yield result
def preprocess(query, in_test_mode):
import os
import os.path
import tempfile
import tensorflow as tf
from apache_beam.io import tfrecordio
from tensorflow_transform.coders import example_proto_coder
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
import shutil
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc_tft'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc_tft/'.format(BUCKET)
import subprocess
subprocess.call('gsutil rm -r {}'.format(OUTPUT_DIR).split())
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'max_num_workers': 16,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'requirements_file': 'requirements.txt'
}
opts = beam.pipeline.PipelineOptions(flags=[], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
# set up metadata
raw_data_schema = {
colname : dataset_schema.ColumnSchema(tf.string, [], dataset_schema.FixedColumnRepresentation())
for colname in 'key,is_male,mother_race,mother_married,cigarette_use,alcohol_use'.split(',')
}
raw_data_schema.update({
colname : dataset_schema.ColumnSchema(tf.float32, [], dataset_schema.FixedColumnRepresentation())
for colname in 'weight_pounds,mother_age,plurality,gestation_weeks'.split(',')
})
raw_data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.Schema(raw_data_schema))
def read_rawdata(p, step, test_mode):
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE MOD(ABS(hashmonth),4) = 3'.format(query)
if in_test_mode:
selquery = selquery + ' LIMIT 100'
#print('Processing {} data from {}'.format(step, selquery))
return (p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True))
| '{}_cleanup'.format(step) >> beam.FlatMap(cleanup)
)
# run Beam
with beam.Pipeline(RUNNER, options=opts) as p:
with beam_impl.Context(temp_dir=os.path.join(OUTPUT_DIR, 'tmp')):
# analyze and transform training
raw_data = read_rawdata(p, 'train', in_test_mode)
raw_dataset = (raw_data, raw_data_metadata)
transformed_dataset, transform_fn = (
raw_dataset | beam_impl.AnalyzeAndTransformDataset(preprocess_tft))
transformed_data, transformed_metadata = transformed_dataset
_ = transformed_data | 'WriteTrainData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'train'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
# transform eval data
raw_test_data = read_rawdata(p, 'eval', in_test_mode)
raw_test_dataset = (raw_test_data, raw_data_metadata)
transformed_test_dataset = (
(raw_test_dataset, transform_fn) | beam_impl.TransformDataset())
transformed_test_data, _ = transformed_test_dataset
_ = transformed_test_data | 'WriteTestData' >> tfrecordio.WriteToTFRecord(
os.path.join(OUTPUT_DIR, 'eval'),
coder=example_proto_coder.ExampleProtoCoder(
transformed_metadata.schema))
_ = (transform_fn
| 'WriteTransformFn' >>
transform_fn_io.WriteTransformFn(os.path.join(OUTPUT_DIR, 'metadata')))
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(query, in_test_mode=False)
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc_tft/*-00000*
###Output
_____no_output_____ |
code/run_daily.ipynb | ###Markdown
Run Daily Space-Time LISA
* Software dependencies
###Code
import tools
import pandas as pd
import numpy as np
import pysal as ps
import multiprocessing as mp
from sqlalchemy import create_engine
import geopandas as gpd
###Output
/Users/dani/anaconda/envs/pydata/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
###Markdown
* Data dependencies
###Code
db_link ='/Users/dani/AAA/LargeData/adam_cell_phone/a10.db'
shp_link = '../data/a10/a10.shp'
# To be created in the process:
ashp_link = '/Users/dani/Desktop/a10_agd_maxp.shp'
engine = create_engine('sqlite:///'+db_link)
###Output
_____no_output_____
###Markdown
* Read data in
###Code
%%time
a10 = pd.read_sql_query('SELECT gridcode, date_time, trafficerlang '
'FROM data ',
engine, parse_dates=['date_time'])
months = a10['date_time'].apply(lambda x: str(x.year) + '-' + str(x.month))
hours = a10['date_time'].apply(lambda x: str(x.hour))
order = ps.open(shp_link.replace('.shp', '.dbf')).by_col('GRIDCODE')
areas = pd.Series([poly.area for poly in ps.open(shp_link)], \
index=order)
areas = areas * 1e-6 # Sq. Km
###Output
CPU times: user 41.8 s, sys: 3.44 s, total: 45.2 s
Wall time: 52.7 s
###Markdown
* MaxP
This step removes an area with no data and joins very small polygons to adjacent ones with density as similar as possible. This is performed through an aggregation using the Max-P algorithm.
###Code
shp = gpd.read_file(shp_link).set_index('GRIDCODE')
overall = a10.groupby('gridcode').mean()
overall['area (Km2)'] = areas
overall['erldens'] = overall['trafficerlang'] / overall['area (Km2)']
overall = gpd.GeoDataFrame(overall, geometry=shp['geometry'], crs=shp.crs)\
.dropna()
# W
wmxp = ps.queen_from_shapefile(shp_link, idVariable='GRIDCODE')
wmxp.transform = 'R'
wmxp.transform = 'O'
# Polygon `49116` does not have data. Remove.
wmxp = ps.w_subset(wmxp, [i for i in wmxp.id_order if i!=49116])
# Information matrix with hourly average day
x = a10.assign(hour=hours).groupby(['gridcode', 'hour'])\
.mean()['trafficerlang']\
.unstack()\
.reindex(wmxp.id_order)
# Areas for the MaxP
mxp_a = overall.loc[wmxp.id_order, 'area (Km2)'].values
%%time
np.random.seed(1234)
mxp = ps.Maxp(wmxp, x.values, 0.05, mxp_a, initial=1000)
labels = pd.Series(mxp.area2region).apply(lambda x: 'a'+str(x))
###Output
CPU times: user 8.31 s, sys: 20.7 ms, total: 8.33 s
Wall time: 8.38 s
###Markdown
* Aggregate polygons
###Code
aggd = overall.groupby(labels).sum()
aggd['erldens'] = aggd['trafficerlang'] / aggd['area (Km2)']
ag_geo = overall.groupby(labels)['geometry'].apply(lambda x: x.unary_union)
aggd_shp = gpd.GeoDataFrame(aggd, geometry=ag_geo, crs=overall.crs)
aggd_shp.reset_index().to_file(ashp_link)
ag_a10 = a10.assign(hour=hours, month=months)\
.set_index('gridcode')\
.assign(labels=labels)\
.groupby(['month', 'hour', 'labels', 'date_time'])[['trafficerlang']].sum()\
.reset_index()
###Output
_____no_output_____
###Markdown
* $ST-W$
###Code
# W
aw = ps.queen_from_shapefile(ashp_link, idVariable='index')
aw.transform = 'R'
aw.transform = 'O'
# Space-Time W
ats = ag_a10['hour'].unique().shape[0]
%time astw = tools.w_stitch_single(aw, ats)
astw.transform = 'R'
###Output
CPU times: user 45 ms, sys: 3.51 ms, total: 48.5 ms
Wall time: 48.5 ms
###Markdown
* Expand areas
###Code
aareas = aggd_shp.reset_index().set_index('index')
astw_index = pd.Series(astw.id_order, \
index=[i.split('-')[1] for i in astw.id_order], \
name='astw_index')
astareas = aareas.reset_index()\
.join(astw_index, on='index')\
.drop('index', axis=1)\
.set_index('astw_index')\
[['area (Km2)']]
###Output
_____no_output_____
###Markdown
* Reshape for daily runs
###Code
daily = ag_a10.drop('month', axis=1)\
.assign(h_gc=ag_a10['hour']+'-'+ag_a10['labels'])\
.join(astareas, on='h_gc')\
.assign(date=ag_a10['date_time'].apply(lambda x: str(x.date())))\
.set_index(['date', 'hour', 'labels'])
daily['erldens'] = daily['trafficerlang'] / daily['area (Km2)']
###Output
_____no_output_____
###Markdown
* Run in parallel
###Code
permutations = 1
g = daily.groupby(level='date')
tasks = [(i, astw, astareas, permutations, id) for id, i in g]
#pool = mp.Pool(mp.cpu_count())
%time tasks = map(tools.child_lisa, tasks)
lisa_clusters = pd.concat(tasks, axis=1)
#lisa_clusters.to_csv('../data/lisa_clusters_%ip.csv' % permutations)
###Output
CPU times: user 1min 16s, sys: 227 ms, total: 1min 16s
Wall time: 1min 16s
|
notebooks/Merge results.ipynb | ###Markdown
Allow relative imports
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import project_path
import glob
import os
import random
from pathlib import Path
from itertools import product
import pandas as pd
def merge_csv(path):
path_temp = Path(f'/tmp/{random.randint(10, 10000)}.csv').resolve()
is_first_file = True
with open(path_temp,"wb") as output_file:
for subdir, dirs, files in os.walk(path):
for file in files:
input_path = f'{subdir}/{file}'
if is_first_file:
is_first_file = False
with open(input_path, "rb") as input_file:
output_file.write(input_file.read())
else:
with open(input_path, "rb") as input_file:
next(input_file)
output_file.write(input_file.read())
return path_temp
path_in = Path(f'../raw_results/benchmarks/').resolve()
path_out = Path(f'../results/benchmarks/').resolve()
path_out.mkdir(parents=True, exist_ok=True)
path_out = path_out / 'other_algorithms.csv'
path_temp = merge_csv(path_in)
experiment_df = pd.read_csv(path_temp).reset_index(drop=True)
experiment_df
experiment_df.to_csv(path_out, index=None)
experiments_names = dict(pd.read_csv('../experiments.csv').astype(str).values)
experiments_names
path_raw_results = Path('../raw_results/')
path_results = Path('../results/')
for exp in os.listdir(path_raw_results):
name = experiments_names.get(exp, None)
print(name)
if name is not None:
if name not in ['SBM', 'Mindsets']:
path_out = path_results /'benchmarks' / f'{name}.csv'
else:
path_out = path_results / f'{name}.csv'
path_temp = merge_csv(path_raw_results / f'{exp}')
print(path_temp)
experiment_df = pd.read_csv(path_temp).reset_index(drop=True).to_csv(path_out, index=None)
###Output
cancer_20_bins
/tmp/532.csv
cancer_10_bins
/tmp/5436.csv
Mindsets
/tmp/3053.csv
2_gaussian_blobs_sigma_1_binning
/tmp/456.csv
Moons_noise0.2
/tmp/374.csv
SBM
/tmp/2910.csv
2_gaussian_blobs_sigma_1.5_binning
/tmp/3336.csv
4_gaussian_blobs_sigma_1_binning
/tmp/5154.csv
4_gaussian_blobs_sigma_1.5_binning
/tmp/2953.csv
None
|
tutorials/asr/Intro_to_Transducers.ipynb | ###Markdown
Intro to TransducersBy following the earlier tutorials for Automatic Speech Recognition in NeMo, one would have probably noticed that we always end up using [Connectionist Temporal Classification (CTC) loss](https://distill.pub/2017/ctc/) in order to train the model. Speech Recognition can be formulated in many different ways, and CTC is a more popular approach because it is a monotonic loss - an acoustic feature at timestep $t_1$ and $t_2$ will correspond to a target token at timestep $u_1$ and only then $u_2$. This monotonic property significantly simplifies the training of ASR models and speeds up convergence. However, it has certain drawbacks that we will discuss below.In general, ASR can be described as a sequence-to-sequence prediction task - the original sequence is an audio sequence (often transformed into mel spectrograms). The target sequence is a sequence of characters (or subword tokens). Attention models are capable of the same sequence-to-sequence prediction tasks. They can even perform better than CTC due to their autoregressive decoding. However, they lack certain inductive biases that can be leveraged to stabilize and speed up training (such as the monotonicity exhibited by the CTC loss). Furthermore, by design, attention models require the entire sequence to be available to align the sequence to the output, thereby preventing their use for streaming inference.Then comes the [Transducer Loss](https://arxiv.org/abs/1211.3711). Proposed by Alex Graves, it aimed to resolve the issues in CTC loss while resolving the transcription accuracy issues by performing autoregressive decoding. Drawbacks of Connectionist Temporal Classification (CTC)CTC is an excellent loss to train ASR models in a stable manner but comes with certain limitations on model design. If we presume speech recognition to be a sequence-to-sequence problem, let $T$ be the sequence length of the acoustic model's output, and let $U$ be the sequence length of the target text transcript (post tokenization, either as characters or subwords). -------1) CTC imposes the limitation : $T \ge U$. Normally, this assumption is naturally valid because $T$ is generally a lot longer than the final text transcription. However, there are many cases where this assumption fails.- Acoustic model performs downsampling to such a degree that $T < U$. Why would we want to perform so much downsampling? For convolutions, longer sequences take more stride steps and more memory. For Attention-based models (say Conformer), there's a quadratic memory cost of computing the attention step in proportion to $T$. So more downsampling significantly helps relieve the memory requirements. There are ways to bypass this limitation, as discussed in the `ASR_with_Subword_Tokenization` notebook, but even that has limits.- The target sequence is generally very long. Think of languages such as German, which have very long translations for short English words. In the task of ASR, if there is more than 2x downsampling and character tokenization is used, the model will often fail to learn due to this CTC limitation.2) Tokens predicted by models which are trained with just CTC loss are assumed to be *conditionally independent*. This means that, unlike language models where *h*-*e*-*l*-*l* as input would probably predict *o* to complete *hello*, for CTC trained models - any character from the English alphabet has equal likelihood for prediction. So CTC trained models often have misspellings or missing tokens when transcribing the audio segment to text. 
- Since we often use the Word Error Rate (WER) metric when evaluating models, even a single misspelling contributes significantly to the "word" being incorrect. - To alleviate this issue, we have to resort to Beam Search via an external language model. While this often works and significantly improves transcription accuracy, it is a slow process and involves large N-gram or Neural language models. --------Let's see CTC loss's limitation (1) in action:
###Code
import torch
import torch.nn as nn
T = 10 # acoustic sequence length
U = 16 # target sequence length
V = 28 # vocabulary size
def get_sample(T, U, V, require_grad=True):
torch.manual_seed(0)
acoustic_seq = torch.randn(1, T, V + 1, requires_grad=require_grad)
acoustic_seq_len = torch.tensor([T], dtype=torch.int32) # actual seq length in padded tensor (here no padding is done)
target_seq = torch.randint(low=0, high=V, size=(1, U))
target_seq_len = torch.tensor([U], dtype=torch.int32)
return acoustic_seq, acoustic_seq_len, target_seq, target_seq_len
# First, we use CTC loss in the general sense.
loss = torch.nn.CTCLoss(blank=V, zero_infinity=False)
acoustic_seq, acoustic_seq_len, target_seq, target_seq_len = get_sample(T, U, V)
# CTC loss expects acoustic sequence to be in shape (T, B, V)
val = loss(acoustic_seq.transpose(1, 0), target_seq, acoustic_seq_len, target_seq_len)
print("CTC Loss :", val)
val.backward()
print("Grad of Acoustic model (over V):", acoustic_seq.grad[0, 0, :])
# Next, we use CTC loss with `zero_infinity` flag set.
loss = torch.nn.CTCLoss(blank=V, zero_infinity=True)
acoustic_seq, acoustic_seq_len, target_seq, target_seq_len = get_sample(T, U, V)
# CTC loss expects acoustic sequence to be in shape (T, B, V)
val = loss(acoustic_seq.transpose(1, 0), target_seq, acoustic_seq_len, target_seq_len)
print("CTC Loss :", val)
val.backward()
print("Grad of Acoustic model (over V):", acoustic_seq.grad[0, 0, :])
###Output
_____no_output_____
###Markdown
-------As we saw, CTC loss in general case will not be able to compute the loss or the gradient when $T < U$. In the PyTorch specific implementation of CTC Loss, we can specify a flag `zero_infinity`, which explicitly checks for such cases, zeroes out the loss and the gradient if such a case occurs. The flag allows us to train a batch of samples where some samples may accidentally violate this limitation, but training will not halt, and gradients will not become NAN. What is the Transducer Loss ?  A model that seeks to use the Transducer loss is composed of three models that interact with each other. They are:-------1) **Acoustic model** : This is nearly the same acoustic model used for CTC models. The output shape of these models is generally $(Batch, \, T, \, AM-Hidden)$. You will note that unlike for CTC, the output of the acoustic model is no longer passed through a decoder layer which would have the shape $(Batch, \, T, \, Vocabulary + 1)$.2) **Prediction / Decoder model** : The prediction model accepts a sequence of target tokens (in the case of ASR, text tokens) and is usually a causal auto-regressive model that is tasked with prediction some hidden feature dimension of shape $(Batch, \, U, \, Pred-Hidden)$.3) **Joint model** : This model accepts the outputs of the Acoustic model and the Prediction model and joins them to compute a joint probability distribution over the vocabulary space to compute the alignments from Acoustic sequence to Target sequence. The output of this model is of the shape $(Batch, \, T, \, U, \, Vocabulary + 1)$.--------During training, the transducer loss is computed on the output of the joint model, which computes the joint probability distribution of a target vocabulary token $v_{t, u}$ (for all $v \in V$) being predicted given the acoustic feature at timestep $t \le T$ and the prediction network features at timestep $u \le U$.--------During inference, we perform a single forward pass over the Acoustic Network to obtain the features of shape $(Batch, \, T, \, AM-Hidden)$, and autoregressively perform the forward passes of the Prediction Network and the Joint Network to decode several $u \le U$ target tokens per acoustic timestep $t \le T$. We will discuss decoding in the following sections. ---------**Note**: For an excellent in-depth explanation of how Transducer loss works, how it computes the alignment, and how the gradient of this alignment is calculated, we highly encourage you to read this post about [Sequence-to-sequence learning with Transducers by Loren Lugosch](https://lorenlugosch.github.io/posts/2020/11/transducer/).--------- Benefits of Transducer LossNow that we understand what a Transducer model is comprised of and how it is trained, the next question that comes to mind is - What is the benefit of the Transducer loss?------1) It is a monotonic loss (similar to CTC). Monotonicity speeds up convergence and does not require auxiliary losses to stabilize training (which is required when using only attention-based loss for sequence-to-sequence training).2) Autoregressive decoding enables the model to implicitly have a dependency between predicted tokens (the conditional independence assumption of CTC trained models is corrected). As such, missing characters or incorrect spellings are less frequent (but still exist since no model is perfect).3) It no longer has the $T \ge U$ limitation that CTC imposed. 
This is because the total joint probability distribution is calculated now - mapping every acoustic timestep $t \le T$ to one or more target timestep $u \le U$. This means that for each timestep $t$, the model has at most $U$ tokens that it can predict, and therefore in the extreme case, it can predict a total of $T \times U$ tokens! Drawbacks of Transducer LossAll of these benefits come with certain costs. As is (almost) always the case in machine learning, there is no free lunch. -------1) During training, the Joint model is required to compute a joint matrix of shape $(Batch, \, T, \, U, \, Vocabulary + 1)$. If you consider the value of these constants for a general dataset like Librispeech, $T \sim 1600$, $U \sim 450$ (with character encoding) and vocabulary $V \sim 28+1$. Considering a batch size of 32, that total memory cost comes out to roughly **2.7 GB** at float precision. The model would also need another **2.7 GB** for the gradients. Of course, the model needs more memory still for the actual Acoustic model + Prediction model + their gradients. Note, however - this issue can be *partially* resolved with some simple tricks, which are discussed in the next tutorial. Also, this memory cost is no longer an issue during inference!2) Autoregressive decoding is slow. Much slower than CTC models, which require just a simple argmax of the output tensor. So while we do get superior transcription quality, we sacrifice decoding speed. --------Let's check that RNNT loss no longer shows the limitations of CTC loss -
###Code
T = 10 # acoustic sequence length
U = 16 # target sequence length
V = 28 # vocabulary size
def get_rnnt_sample(T, U, V, require_grad=True):
torch.manual_seed(0)
joint_tensor = torch.randn(1, T, U + 1, V + 1, requires_grad=require_grad)
acoustic_seq_len = torch.tensor([T], dtype=torch.int32) # actual seq length in padded tensor (here no padding is done)
target_seq = torch.randint(low=0, high=V, size=(1, U))
target_seq_len = torch.tensor([U], dtype=torch.int32)
return joint_tensor, acoustic_seq_len, target_seq, target_seq_len
import nemo.collections.asr as nemo_asr
joint_tensor, acoustic_seq_len, target_seq, target_seq_len = get_rnnt_sample(T, U, V)
# RNNT loss expects joint tensor to be in shape (B, T, U, V)
loss = nemo_asr.losses.rnnt.RNNTLoss(num_classes=V)
# Uncomment to check out the keyword arguments required to call the RNNT loss
print("Transducer loss input types :", loss.input_types)
print()
val = loss(log_probs=joint_tensor, targets=target_seq, input_lengths=acoustic_seq_len, target_lengths=target_seq_len)
print("Transducer Loss :", val)
val.backward()
print("Grad of Acoustic model (over V):", joint_tensor.grad[0, 0, 0, :])
###Output
_____no_output_____
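To make the shapes above concrete, here is a minimal, self-contained sketch of how a Joint network typically combines the Acoustic and Prediction network outputs into the $(Batch, \, T, \, U+1, \, Vocabulary+1)$ tensor. The hidden sizes and the additive combination are illustrative assumptions for this sketch, not the exact NeMo implementation:
```python
import torch
import torch.nn as nn

B, T, U, V = 1, 10, 16, 28                           # same sizes as the example above
enc_hidden, pred_hidden, joint_hidden = 32, 24, 40   # hypothetical hidden sizes

enc_out = torch.randn(B, T, enc_hidden)              # Acoustic (encoder) features
pred_out = torch.randn(B, U + 1, pred_hidden)        # Prediction net features (U + 1 for the start state)

enc_proj = nn.Linear(enc_hidden, joint_hidden)
pred_proj = nn.Linear(pred_hidden, joint_hidden)
out_proj = nn.Linear(joint_hidden, V + 1)            # vocabulary + blank

# Broadcast-add the projected features over (T, U+1), then project to the vocabulary.
joint = out_proj(torch.relu(enc_proj(enc_out).unsqueeze(2) + pred_proj(pred_out).unsqueeze(1)))
print(joint.shape)  # torch.Size([1, 10, 17, 29]) -> (B, T, U+1, V+1)
```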
###Markdown
Configure a Transducer ModelWe now understand a bit more about the transducer loss. Next, we will take a deep dive into how to set up the config for a transducer model.Transducer configs contain a fair bit more detail as compared to CTC configs. However, the vast majority of the defaults can be copied and pasted into your configs to have a perfectly functioning transducer model!------Let us download one of the transducer configs already available in NeMo to analyze the components.
###Code
import os
if not os.path.exists("contextnet_rnnt.yaml"):
!wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/examples/asr/conf/contextnet_rnnt/contextnet_rnnt.yaml
from omegaconf import OmegaConf, open_dict
cfg = OmegaConf.load('contextnet_rnnt.yaml')
###Output
_____no_output_____
###Markdown
Model Defaults
Since the transducer model is comprised of three separate models working in unison, it is practical to have some shared section of the config. That shared section is called `model.model_defaults`.
###Code
print(OmegaConf.to_yaml(cfg.model.model_defaults))
###Output
_____no_output_____
###Markdown
-------
Of the many components shared here, the last three values are the primary components that a transducer model **must** possess. They are :
1) `enc_hidden`: The hidden dimension of the final layer of the Encoder network.
2) `pred_hidden`: The hidden dimension of the final layer of the Prediction network.
3) `joint_hidden`: The hidden dimension of the intermediate layer of the Joint network.
--------
One can access these values inside the config by using OmegaConf interpolation as follows :
```yaml
model:
  ...
  decoder:
    ...
    prednet:
      pred_hidden: ${model.model_defaults.pred_hidden}
```
Acoustic Model
As we discussed before, the transducer model is comprised of three models combined. One of these models is the Acoustic (encoder) model. We should be able to drop in any CTC Acoustic model config into this section of the transducer config.
The only condition that needs to be met is that **the final layer of the acoustic model must have the dimension defined in `model_defaults.enc_hidden`**.
Decoder / Prediction Model
The Prediction model is generally an autoregressive, causal model that consumes text tokens and returns embeddings that will be used by the Joint model. **This config can be dropped into any custom transducer model with no modification.**
###Code
print(OmegaConf.to_yaml(cfg.model.decoder))
###Output
_____no_output_____
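As a small illustration of the interpolation described above (the override value below is hypothetical, chosen only for this example), changing a shared dimension under `model.model_defaults` is automatically reflected in the sub-configs that reference it:
```python
print(cfg.model.model_defaults.enc_hidden,
      cfg.model.model_defaults.pred_hidden,
      cfg.model.model_defaults.joint_hidden)

with open_dict(cfg):
    cfg.model.model_defaults.pred_hidden = 320  # hypothetical value, for illustration only

# The decoder's prednet.pred_hidden is declared as ${model.model_defaults.pred_hidden},
# so it now resolves to the new value as well.
print(cfg.model.decoder.prednet.pred_hidden)
```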
###Markdown
------
This config will build an LSTM based Transducer Decoder model. Let us discuss some of the important arguments:
1) `blank_as_pad`: In ordinary transducer models, the embedding matrix does not acknowledge the `Transducer Blank` token (similar to CTC Blank). However, this causes the autoregressive loop to be more complicated and less efficient. Instead, this flag which is set by default, will add the `Transducer Blank` token to the embedding matrix - and use it as a pad value (zeros tensor). This enables more efficient inference without harming training.
2) `prednet.pred_hidden`: The hidden dimension of the LSTM and the output dimension of the Prediction network.
Joint Model
The Joint model is a simple feed-forward Multi-Layer Perceptron network. This MLP accepts the output of the Acoustic and Prediction models and computes a joint probability distribution over the entire vocabulary space.
**This config can be dropped into any custom transducer model with no modification.**
###Code
print(OmegaConf.to_yaml(cfg.model.joint))
###Output
_____no_output_____
###Markdown
------
The Joint model config has several essential components which we discuss below :
1) `log_softmax`: Due to the cost of computing softmax on such large tensors, the Numba CUDA implementation of RNNT loss will implicitly compute the log softmax when called (so its inputs should be logits). The CPU version of the loss doesn't face such memory issues so it requires log-probabilities instead. Since the behaviour is different for CPU-GPU, the `None` value will automatically switch behaviour dependent on whether the input tensor is on a CPU or GPU device.
2) `preserve_memory`: This flag will call `torch.cuda.empty_cache()` at certain critical sections when computing the Joint tensor. While this operation might allow us to preserve some memory, the empty_cache() operation is tremendously slow and will slow down training by an order of magnitude or more. It is available to use but not recommended.
3) `experimental_fuse_loss_wer`: This flag performs "batch splitting" and then "fused loss + metric" calculation. It will be discussed in detail in the next tutorial that will train a Transducer model.
4) `fused_batch_size`: When the above flag is set to True, the model will have two distinct "batch sizes". The batch size provided in the three data loader configs (`model.*_ds.batch_size`) will now be the `Acoustic model` batch size, whereas the `fused_batch_size` will be the batch size of the `Prediction model`, the `Joint model`, the `transducer loss` module and the `decoding` module.
5) `jointnet.joint_hidden`: The hidden intermediate dimension of the joint network.
Transducer Decoding
Models which have been trained with CTC can transcribe text simply by performing a regular argmax over the output of their decoder.
For transducer-based models, the three networks must operate in a synchronized manner in order to transcribe the acoustic features.
The following section of the config describes how to change the decoding logic of the transducer model.
**This config can be dropped into any custom transducer model with no modification.**
###Code
print(OmegaConf.to_yaml(cfg.model.decoding))
###Output
_____no_output_____
###Markdown
-------
The most important component at the top level is the `strategy`. It can take one of many values:
1) `greedy`: This is sample-level greedy decoding. It is generally exceptionally slow as each sample in the batch will be decoded independently. For publications, this should be used alongside batch size of 1 for exact results.
2) `greedy_batch`: This is the general default and should nearly match the `greedy` decoding scores (if the acoustic features are not affected by feature mixing in batch mode). Even for small batch sizes, this strategy is significantly faster than `greedy`.
3) `beam`: Runs beam search with the implicit language model of the Prediction model. It will generally be quite slow, and might need some tuning of the beam size to get better transcriptions.
4) `tsd`: Time synchronous decoding. Please refer to the paper: [Alignment-Length Synchronous Decoding for RNN Transducer](https://ieeexplore.ieee.org/document/9053040) for details on the algorithm implemented. Time synchronous decoding (TSD) execution time grows by the factor T * max_symmetric_expansions. For longer sequences, T is greater and can therefore take a long time for beams to obtain good results. TSD also requires more memory to execute.
5) `alsd`: Alignment-length synchronous decoding. Please refer to the paper: [Alignment-Length Synchronous Decoding for RNN Transducer](https://ieeexplore.ieee.org/document/9053040) for details on the algorithm implemented. Alignment-length synchronous decoding (ALSD) execution time is faster than TSD, with a growth factor of T + U_max, where U_max is the maximum target length expected during execution. Generally, T + U_max < T * max_symmetric_expansions. However, ALSD beams are non-unique. Therefore it is required to use larger beam sizes to achieve the same (or close to the same) decoding accuracy as TSD. For a given decoding accuracy, it is possible to attain faster decoding via ALSD than TSD.
-------
Below, we discuss the various decoding strategies.
Greedy Decoding
When `strategy` is one of `greedy` or `greedy_batch`, an additional subconfig of `decoding.greedy` can be used to set an important decoding value.
###Code
print(OmegaConf.to_yaml(cfg.model.decoding.greedy))
###Output
_____no_output_____
###Markdown
-------
This argument `max_symbols` is the maximum number of `target token` decoding steps $u \le U$ per acoustic timestep $t \le T$. Note that during training, this was implicitly constrained by the shape of the joint matrix (max_symbols = $U$). However, there is no such $U$ upper bound during inference (we don't have the ground truth $U$).
So we explicitly set a heuristic upper bound on how many decoding steps can be performed per acoustic timestep. Generally a value of 5 and above is sufficient.
Beam Decoding
Next, we discuss the subconfig when `strategy` is one of `beam`, `tsd` or `alsd`.
###Code
print(OmegaConf.to_yaml(cfg.model.decoding.beam))
###Output
_____no_output_____
###Markdown
------
There are several important arguments in this section :
1) `beam_size`: This determines the beam size for all types of beam decoding strategy. Since this is implemented in PyTorch, large beam sizes will take exorbitant amounts of time.
2) `score_norm`: Whether to normalize scores prior to pruning the beam.
3) `return_best_hypothesis`: If beam search is being performed, we can choose to return just the best hypothesis or all the hypotheses.
4) `tsd_max_sym_exp`: The maximum symmetric expansions allowed per timestep during beam search. Larger values should be used to attempt decoding of longer sequences, but this in turn increases execution time and memory usage.
5) `alsd_max_target_len`: The maximum expected target sequence length during beam search. Larger values allow decoding of longer sequences at the expense of execution time and memory.
Transducer Loss
Finally, we reach the Transducer loss config itself. This section configures the type of Transducer loss itself, along with possible sub-sections.
**This config can be dropped into any custom transducer model with no modification.**
###Code
print(OmegaConf.to_yaml(cfg.model.loss))
###Output
_____no_output_____
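Putting the decoding options discussed above together, here is a minimal sketch of overriding them in the loaded config; the particular values are illustrative, not recommendations:
```python
from omegaconf import OmegaConf, open_dict

with open_dict(cfg.model.decoding):
    cfg.model.decoding.strategy = "greedy_batch"  # batched greedy decoding (the usual default)
    cfg.model.decoding.greedy.max_symbols = 10    # cap on symbols emitted per acoustic timestep
    cfg.model.decoding.beam.beam_size = 4         # only used by the beam-based strategies

print(OmegaConf.to_yaml(cfg.model.decoding))
```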
###Markdown
Intro to TransducersBy following the earlier tutorials for Automatic Speech Recognition in NeMo, one would have probably noticed that we always end up using [Connectionist Temporal Classification (CTC) loss](https://distill.pub/2017/ctc/) in order to train the model. Speech Recognition can be formulated in many different ways, and CTC is a more popular approach because it is a monotonic loss - an acoustic feature at timestep $t_1$ and $t_2$ will correspond to a target token at timestep $u_1$ and only then $u_2$. This monotonic property significantly simplifies the training of ASR models and speeds up convergence. However, it has certain drawbacks that we will discuss below.In general, ASR can be described as a sequence-to-sequence prediction task - the original sequence is an audio sequence (often transformed into mel spectrograms). The target sequence is a sequence of characters (or subword tokens). Attention models are capable of the same sequence-to-sequence prediction tasks. They can even perform better than CTC due to their autoregressive decoding. However, they lack certain inductive biases that can be leveraged to stabilize and speed up training (such as the monotonicity exhibited by the CTC loss). Furthermore, by design, attention models require the entire sequence to be available to align the sequence to the output, thereby preventing their use for streaming inference.Then comes the [Transducer Loss](https://arxiv.org/abs/1211.3711). Proposed by Alex Graves, it aimed to resolve the issues in CTC loss while resolving the transcription accuracy issues by performing autoregressive decoding. Drawbacks of Connectionist Temporal Classification (CTC)CTC is an excellent loss to train ASR models in a stable manner but comes with certain limitations on model design. If we presume speech recognition to be a sequence-to-sequence problem, let $T$ be the sequence length of the acoustic model's output, and let $U$ be the sequence length of the target text transcript (post tokenization, either as characters or subwords). -------1) CTC imposes the limitation : $T \ge U$. Normally, this assumption is naturally valid because $T$ is generally a lot longer than the final text transcription. However, there are many cases where this assumption fails.- Acoustic model performs downsampling to such a degree that $T < U$. Why would we want to perform so much downsampling? For convolutions, longer sequences take more stride steps and more memory. For Attention-based models (say Conformer), there's a quadratic memory cost of computing the attention step in proportion to $T$. So more downsampling significantly helps relieve the memory requirements. There are ways to bypass this limitation, as discussed in the `ASR_with_Subword_Tokenization` notebook, but even that has limits.- The target sequence is generally very long. Think of languages such as German, which have very long translations for short English words. In the task of ASR, if there is more than 2x downsampling and character tokenization is used, the model will often fail to learn due to this CTC limitation.2) Tokens predicted by models which are trained with just CTC loss are assumed to be *conditionally independent*. This means that, unlike language models where *h*-*e*-*l*-*l* as input would probably predict *o* to complete *hello*, for CTC trained models - any character from the English alphabet has equal likelihood for prediction. So CTC trained models often have misspellings or missing tokens when transcribing the audio segment to text. 
- Since we often use the Word Error Rate (WER) metric when evaluating models, even a single misspelling contributes significantly to the "word" being incorrect. - To alleviate this issue, we have to resort to Beam Search via an external language model. While this often works and significantly improves transcription accuracy, it is a slow process and involves large N-gram or Neural language models. --------Let's see CTC loss's limitation (1) in action:
###Code
import torch
import torch.nn as nn
T = 10 # acoustic sequence length
U = 16 # target sequence length
V = 28 # vocabulary size
def get_sample(T, U, V, require_grad=True):
torch.manual_seed(0)
acoustic_seq = torch.randn(1, T, V + 1, requires_grad=require_grad)
acoustic_seq_len = torch.tensor([T], dtype=torch.int32) # actual seq length in padded tensor (here no padding is done)
target_seq = torch.randint(low=0, high=V, size=(1, U))
target_seq_len = torch.tensor([U], dtype=torch.int32)
return acoustic_seq, acoustic_seq_len, target_seq, target_seq_len
# First, we use CTC loss in the general sense.
loss = torch.nn.CTCLoss(blank=V, zero_infinity=False)
acoustic_seq, acoustic_seq_len, target_seq, target_seq_len = get_sample(T, U, V)
# CTC loss expects acoustic sequence to be in shape (T, B, V)
val = loss(acoustic_seq.transpose(1, 0), target_seq, acoustic_seq_len, target_seq_len)
print("CTC Loss :", val)
val.backward()
print("Grad of Acoustic model (over V):", acoustic_seq.grad[0, 0, :])
# Next, we use CTC loss with `zero_infinity` flag set.
loss = torch.nn.CTCLoss(blank=V, zero_infinity=True)
acoustic_seq, acoustic_seq_len, target_seq, target_seq_len = get_sample(T, U, V)
# CTC loss expects acoustic sequence to be in shape (T, B, V)
val = loss(acoustic_seq.transpose(1, 0), target_seq, acoustic_seq_len, target_seq_len)
print("CTC Loss :", val)
val.backward()
print("Grad of Acoustic model (over V):", acoustic_seq.grad[0, 0, :])
###Output
_____no_output_____
###Markdown
-------As we saw, CTC loss in general case will not be able to compute the loss or the gradient when $T < U$. In the PyTorch specific implementation of CTC Loss, we can specify a flag `zero_infinity`, which explicitly checks for such cases, zeroes out the loss and the gradient if such a case occurs. The flag allows us to train a batch of samples where some samples may accidentally violate this limitation, but training will not halt, and gradients will not become NAN. What is the Transducer Loss ?  A model that seeks to use the Transducer loss is composed of three models that interact with each other. They are:-------1) **Acoustic model** : This is nearly the same acoustic model used for CTC models. The output shape of these models is generally $(Batch, \, T, \, AM-Hidden)$. You will note that unlike for CTC, the output of the acoustic model is no longer passed through a decoder layer which would have the shape $(Batch, \, T, \, Vocabulary + 1)$.2) **Prediction / Decoder model** : The prediction model accepts a sequence of target tokens (in the case of ASR, text tokens) and is usually a causal auto-regressive model that is tasked with prediction some hidden feature dimension of shape $(Batch, \, U, \, Pred-Hidden)$.3) **Joint model** : This model accepts the outputs of the Acoustic model and the Prediction model and joins them to compute a joint probability distribution over the vocabulary space to compute the alignments from Acoustic sequence to Target sequence. The output of this model is of the shape $(Batch, \, T, \, U, \, Vocabulary + 1)$.--------During training, the transducer loss is computed on the output of the joint model, which computes the joint probability distribution of a target vocabulary token $v_{t, u}$ (for all $v \in V$) being predicted given the acoustic feature at timestep $t \le T$ and the prediction network features at timestep $u \le U$.--------During inference, we perform a single forward pass over the Acoustic Network to obtain the features of shape $(Batch, \, T, \, AM-Hidden)$, and autoregressively perform the forward passes of the Prediction Network and the Joint Network to decode several $u \le U$ target tokens per acoustic timestep $t \le T$. We will discuss decoding in the following sections. ---------**Note**: For an excellent in-depth explanation of how Transducer loss works, how it computes the alignment, and how the gradient of this alignment is calculated, we highly encourage you to read this post about [Sequence-to-sequence learning with Transducers by Loren Lugosch](https://lorenlugosch.github.io/posts/2020/11/transducer/).--------- Benefits of Transducer LossNow that we understand what a Transducer model is comprised of and how it is trained, the next question that comes to mind is - What is the benefit of the Transducer loss?------1) It is a monotonic loss (similar to CTC). Monotonicity speeds up convergence and does not require auxiliary losses to stabilize training (which is required when using only attention-based loss for sequence-to-sequence training).2) Autoregressive decoding enables the model to implicitly have a dependency between predicted tokens (the conditional independence assumption of CTC trained models is corrected). As such, missing characters or incorrect spellings are less frequent (but still exist since no model is perfect).3) It no longer has the $T \ge U$ limitation that CTC imposed. 
This is because the total joint probability distribution is calculated now - mapping every acoustic timestep $t \le T$ to one or more target timestep $u \le U$. This means that for each timestep $t$, the model has at most $U$ tokens that it can predict, and therefore in the extreme case, it can predict a total of $T \times U$ tokens! Drawbacks of Transducer LossAll of these benefits come with certain costs. As is (almost) always the case in machine learning, there is no free lunch. -------1) During training, the Joint model is required to compute a joint matrix of shape $(Batch, \, T, \, U, \, Vocabulary + 1)$. If you consider the value of these constants for a general dataset like Librispeech, $T \sim 1600$, $U \sim 450$ (with character encoding) and vocabulary $V \sim 28+1$. Considering a batch size of 32, that total memory cost comes out to roughly **2.7 GB** at float precision. The model would also need another **2.7 GB** for the gradients. Of course, the model needs more memory still for the actual Acoustic model + Prediction model + their gradients. Note, however - this issue can be *partially* resolved with some simple tricks, which are discussed in the next tutorial. Also, this memory cost is no longer an issue during inference!2) Autoregressive decoding is slow. Much slower than CTC models, which require just a simple argmax of the output tensor. So while we do get superior transcription quality, we sacrifice decoding speed. --------Let's check that RNNT loss no longer shows the limitations of CTC loss -
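--------Before running that check, here is a quick back-of-the-envelope confirmation of the **2.7 GB** figure quoted above, using the assumed Librispeech-like values from the text (batch 32, $T \sim 1600$, $U \sim 450$, $V + 1 = 29$, 4 bytes per float32). This is pure arithmetic, not a measurement.

```python
# Rough memory estimate for the joint tensor of shape (B, T, U, V + 1), float32.
# The values are the assumed Librispeech-like numbers quoted in the text above.
B, T, U, V = 32, 1600, 450, 28
bytes_per_float32 = 4

elements = B * T * U * (V + 1)
gigabytes = elements * bytes_per_float32 / 1e9
print(f"Joint tensor: {elements:,} elements -> {gigabytes:.2f} GB at float32")
# ~2.67 GB, and roughly the same again for its gradient during training.
```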
###Code
T = 10 # acoustic sequence length
U = 16 # target sequence length
V = 28 # vocabulary size
def get_rnnt_sample(T, U, V, require_grad=True):
torch.manual_seed(0)
joint_tensor = torch.randn(1, T, U + 1, V + 1, requires_grad=require_grad)
acoustic_seq_len = torch.tensor([T], dtype=torch.int32) # actual seq length in padded tensor (here no padding is done)
target_seq = torch.randint(low=0, high=V, size=(1, U))
target_seq_len = torch.tensor([U], dtype=torch.int32)
return joint_tensor, acoustic_seq_len, target_seq, target_seq_len
import nemo.collections.asr as nemo_asr
joint_tensor, acoustic_seq_len, target_seq, target_seq_len = get_rnnt_sample(T, U, V)
# RNNT loss expects joint tensor to be in shape (B, T, U, V)
loss = nemo_asr.losses.rnnt.RNNTLoss(num_classes=V)
# Uncomment to check out the keyword arguments required to call the RNNT loss
print("Transducer loss input types :", loss.input_types)
print()
val = loss(log_probs=joint_tensor, targets=target_seq, input_lengths=acoustic_seq_len, target_lengths=target_seq_len)
print("Transducer Loss :", val)
val.backward()
print("Grad of Acoustic model (over V):", joint_tensor.grad[0, 0, 0, :])
###Output
_____no_output_____
###Markdown
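The check above fed a randomly generated joint tensor straight into the loss. As a rough sketch of where that tensor comes from in a real model (assumed hidden sizes; a minimal stand-in, not NeMo's actual Joint module), the encoder output $(B, \, T, \, Enc-Hidden)$ and the prediction output $(B, \, U + 1, \, Pred-Hidden)$ are projected to a common space, broadcast-added over the $(T, \, U + 1)$ grid, and projected to $V + 1$ logits:

```python
import torch
import torch.nn as nn

B, T, U, V = 1, 10, 16, 28
enc_hidden = pred_hidden = joint_hidden = 640   # assumed sizes for illustration

# Minimal stand-in for a transducer joint network (not NeMo's implementation):
enc_proj = nn.Linear(enc_hidden, joint_hidden)
pred_proj = nn.Linear(pred_hidden, joint_hidden)
out_proj = nn.Linear(joint_hidden, V + 1)       # +1 for the Transducer Blank token

f = torch.randn(B, T, enc_hidden)               # acoustic (encoder) features
g = torch.randn(B, U + 1, pred_hidden)          # prediction features (U + 1 due to the start token)

joint = enc_proj(f).unsqueeze(2) + pred_proj(g).unsqueeze(1)   # (B, T, U + 1, joint_hidden)
logits = out_proj(torch.relu(joint))                           # (B, T, U + 1, V + 1)
print(logits.shape)                                            # torch.Size([1, 10, 17, 29])
```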
Configure a Transducer ModelWe now understand a bit more about the transducer loss. Next, we will take a deep dive into how to set up the config for a transducer model.Transducer configs contain a fair bit more detail as compared to CTC configs. However, the vast majority of the defaults can be copied and pasted into your configs to have a perfectly functioning transducer model!------Let us download one of the transducer configs already available in NeMo to analyze the components.
###Code
import os
if not os.path.exists("contextnet_rnnt.yaml"):
!wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/examples/asr/conf/contextnet_rnnt/contextnet_rnnt.yaml
from omegaconf import OmegaConf, open_dict
cfg = OmegaConf.load('contextnet_rnnt.yaml')
###Output
_____no_output_____
###Markdown
Model DefaultsSince the transducer model is comprised of three separate models working in unison, it is practical to have some shared section of the config. That shared section is called `model.model_defaults`.
###Code
print(OmegaConf.to_yaml(cfg.model.model_defaults))
###Output
_____no_output_____
###Markdown
-------Of the many components shared here, the last three values are the primary components that a transducer model **must** possess. They are :1) `enc_hidden`: The hidden dimension of the final layer of the Encoder network.2) `pred_hidden`: The hidden dimension of the final layer of the Prediction network.3) `joint_hidden`: The hidden dimension of the intermediate layer of the Joint network.--------One can access these values inside the config by using OmegaConf interpolation as follows :```yamlmodel: ... decoder: ... prednet: pred_hidden: ${model.model_defaults.pred_hidden}``` Acoustic ModelAs we discussed before, the transducer model is comprised of three models combined. One of these models is the Acoustic (encoder) model. We should be able to drop in any CTC Acoustic model config into this section of the transducer config.The only condition that needs to be met is that **the final layer of the acoustic model must have the dimension defined in `model_defaults.enc_hidden`**. Decoder / Prediction ModelThe Prediction model is generally an autoregressive, causal model that consumes text tokens and returns embeddings that will be used by the Joint model. **This config can be dropped into any custom transducer model with no modification.**
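--------Returning to the interpolation pattern shown above for a moment, here is a tiny, self-contained sketch of how OmegaConf resolves it (toy values, not the actual NeMo config):

```python
from omegaconf import OmegaConf

# Toy config mirroring the interpolation pattern above; the sizes are made up.
toy = OmegaConf.create({
    "model": {
        "model_defaults": {"enc_hidden": 640, "pred_hidden": 640, "joint_hidden": 640},
        "decoder": {"prednet": {"pred_hidden": "${model.model_defaults.pred_hidden}"}},
    }
})

# Interpolations are resolved on access, so changing model_defaults propagates everywhere.
print(toy.model.decoder.prednet.pred_hidden)  # 640
```

The Prediction (decoder) config of the downloaded model itself is printed in the next cell.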
###Code
print(OmegaConf.to_yaml(cfg.model.decoder))
###Output
_____no_output_____
###Markdown
------This config will build an LSTM-based Transducer Decoder model. Let us discuss some of the important arguments:1) `blank_as_pad`: In ordinary transducer models, the embedding matrix does not acknowledge the `Transducer Blank` token (similar to CTC Blank). However, this causes the autoregressive loop to be more complicated and less efficient. Instead, this flag, which is set by default, will add the `Transducer Blank` token to the embedding matrix - and use it as a pad value (zeros tensor). This enables more efficient inference without harming training.2) `prednet.pred_hidden`: The hidden dimension of the LSTM and the output dimension of the Prediction network. Joint ModelThe Joint model is a simple feed-forward Multi-Layer Perceptron network. This MLP accepts the output of the Acoustic and Prediction models and computes a joint probability distribution over the entire vocabulary space.**This config can be dropped into any custom transducer model with no modification.**
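--------To make the Prediction network concrete before moving on to the Joint config below, here is a minimal stand-in (assumed sizes; a sketch, not NeMo's actual decoder class). Note how treating the blank as a padding index gives it an all-zeros embedding row:

```python
import torch
import torch.nn as nn

V = 28                                   # vocabulary size; blank id = V
pred_hidden = 640                        # assumed size for illustration

# Minimal stand-in for the Prediction network: embedding + LSTM.
# With the blank used as padding_idx, its embedding row is a zeros tensor.
embed = nn.Embedding(V + 1, pred_hidden, padding_idx=V)
lstm = nn.LSTM(pred_hidden, pred_hidden, batch_first=True)

targets = torch.randint(low=0, high=V, size=(1, 16))   # (B, U) target token ids
g, _ = lstm(embed(targets))                            # (B, U, pred_hidden)
print(g.shape)                                         # torch.Size([1, 16, 640])
print(float(embed.weight[V].abs().sum()))              # 0.0 - the blank/pad embedding row is zeros
```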
###Code
print(OmegaConf.to_yaml(cfg.model.joint))
###Output
_____no_output_____
###Markdown
------The Joint model config has several essential components which we discuss below :1) `log_softmax`: Due to the cost of computing softmax on such large tensors, the Numba CUDA implementation of RNNT loss will implicitly compute the log softmax when called (so its inputs should be logits). The CPU version of the loss doesn't face such memory issues so it requires log-probabilities instead. Since the behaviour is different for CPU-GPU, the `None` value will automatically switch behaviour dependent on whether the input tensor is on a CPU or GPU device.2) `preserve_memory`: This flag will call `torch.cuda.empty_cache()` at certain critical sections when computing the Joint tensor. While this operation might allow us to preserve some memory, the empty_cache() operation is tremendously slow and will slow down training by an order of magnitude or more. It is available to use but not recommended.3) `experimental_fuse_loss_wer`: This flag performs "batch splitting" and then "fused loss + metric" calculation. It will be discussed in detail in the next tutorial that will train a Transducer model.4) `fused_batch_size`: When the above flag is set to True, the model will have two distinct "batch sizes". The batch size provided in the three data loader configs (`model.*_ds.batch_size`) will now be the `Acoustic model` batch size, whereas the `fused_batch_size` will be the batch size of the `Prediction model`, the `Joint model`, the `transducer loss` module and the `decoding` module.5) `jointnet.joint_hidden`: The hidden intermediate dimension of the joint network. Transducer DecodingModels which have been trained with CTC can transcribe text simply by performing a regular argmax over the output of their decoder.For transducer-based models, the three networks must operate in a synchronized manner in order to transcribe the acoustic features.The following section of the config describes how to change the decoding logic of the transducer model.**This config can be dropped into any custom transducer model with no modification.**
###Code
print(OmegaConf.to_yaml(cfg.model.decoding))
###Output
_____no_output_____
###Markdown
-------The most important component at the top level is the `strategy`. It can take one of many values:1) `greedy`: This is sample-level greedy decoding. It is generally exceptionally slow as each sample in the batch will be decoded independently. For publications, this should be used alongside batch size of 1 for exact results.2) `greedy_batch`: This is the general default and should nearly match the `greedy` decoding scores (if the acoustic features are not affected by feature mixing in batch mode). Even for small batch sizes, this strategy is significantly faster than `greedy`.3) `beam`: Runs beam search with the implicit language model of the Prediction model. It will generally be quite slow, and might need some tuning of the beam size to get better transcriptions.4) `tsd`: Time synchronous decoding. Please refer to the paper: [Alignment-Length Synchronous Decoding for RNN Transducer](https://ieeexplore.ieee.org/document/9053040) for details on the algorithm implemented. Time synchronous decoding (TSD) execution time grows by the factor T * max_symmetric_expansions. For longer sequences, T is greater and can therefore take a long time for beams to obtain good results. TSD also requires more memory to execute.5) `alsd`: Alignment-length synchronous decoding. Please refer to the paper: [Alignment-Length Synchronous Decoding for RNN Transducer](https://ieeexplore.ieee.org/document/9053040) for details on the algorithm implemented. Alignment-length synchronous decoding (ALSD) execution time is faster than TSD, with a growth factor of T + U_max, where U_max is the maximum target length expected during execution. Generally, T + U_max < T * max_symmetric_expansions. However, ALSD beams are non-unique. Therefore it is required to use larger beam sizes to achieve the same (or close to the same) decoding accuracy as TSD. For a given decoding accuracy, it is possible to attain faster decoding via ALSD than TSD.-------Below, we discuss the various decoding strategies. Greedy DecodingWhen `strategy` is one of `greedy` or `greedy_batch`, an additional subconfig of `decoding.greedy` can be used to set an important decoding value.
###Code
print(OmegaConf.to_yaml(cfg.model.decoding.greedy))
###Output
_____no_output_____
###Markdown
-------This argument `max_symbols` is the maximum number of `target token` decoding steps $u \le U$ per acoustic timestep $t \le T$. Note that during training, this was implicitly constrained by the shape of the joint matrix (max_symbols = $U$). However, there is no such $U$ upper bound during inference (we don't have the ground truth $U$).So we explicitly set a heuristic upper bound on how many decoding steps can be performed per acoustic timestep. Generally a value of 5 and above is sufficient. Beam DecodingNext, we discuss the subconfig when `strategy` is one of `beam`, `tsd` or `alsd`.
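--------To make the role of `max_symbols` concrete, here is a hypothetical pseudocode sketch of the greedy decoding loop (simplified, and not NeMo's implementation). `decoder_step` and `joint` stand in for the Prediction and Joint networks; notice how `max_symbols` bounds the inner loop at each acoustic timestep. The beam-search subconfig is printed in the next cell.

```python
# Hypothetical sketch of greedy transducer decoding (not NeMo's implementation).
# `decoder_step(token, state)` and `joint(f_t, g_u)` are assumed stand-ins for the
# Prediction and Joint networks, returning features and per-token logits respectively.
def greedy_decode(encoder_out, decoder_step, joint, blank_id, max_symbols=5):
    hypothesis, state, last_token = [], None, None     # None token => start of sequence
    for t in range(encoder_out.shape[1]):              # loop over acoustic timesteps t <= T
        f_t = encoder_out[:, t:t + 1, :]
        for _ in range(max_symbols):                   # emit at most `max_symbols` tokens per t
            g_u, new_state = decoder_step(last_token, state)
            token = int(joint(f_t, g_u).argmax(dim=-1))
            if token == blank_id:                      # blank => advance to the next timestep
                break
            hypothesis.append(token)                   # non-blank => emit and stay on this t
            last_token, state = token, new_state
    return hypothesis
```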
###Code
print(OmegaConf.to_yaml(cfg.model.decoding.beam))
###Output
_____no_output_____
###Markdown
------There are several important arguments in this section :1) `beam_size`: This determines the beam size for all types of beam decoding strategy. Since this is implemented in PyTorch, large beam sizes will take exorbitant amounts of time.2) `score_norm`: Whether to normalize scores prior to pruning the beam.3) `return_best_hypothesis`: If beam search is being performed, we can choose to return just the best hypothesis or all the hypotheses.4) `tsd_max_sym_exp`: The maximum symmetric expansions allowed per timestep during beam search. Larger values should be used to attempt decoding of longer sequences, but this in turn increases execution time and memory usage.5) `alsd_max_target_len`: The maximum expected target sequence length during beam search. Larger values allow decoding of longer sequences at the expense of execution time and memory. Transducer LossFinally, we reach the Transducer loss config itself. This section configures the type of Transducer loss itself, along with possible sub-sections.**This config can be dropped into any custom transducer model with no modification.**
###Code
print(OmegaConf.to_yaml(cfg.model.loss))
###Output
_____no_output_____
###Markdown
Intro to TransducersBy following the earlier tutorials for Automatic Speech Recognition in NeMo, one would have probably noticed that we always end up using [Connectionist Temporal Classification (CTC) loss](https://distill.pub/2017/ctc/) in order to train the model. Speech Recognition can be formulated in many different ways, and CTC is a more popular approach because it is a monotonic loss - an acoustic feature at timestep $t_1$ and $t_2$ will correspond to a target token at timestep $u_1$ and only then $u_2$. This monotonic property significantly simplifies the training of ASR models and speeds up convergence. However, it has certain drawbacks that we will discuss below.In general, ASR can be described as a sequence-to-sequence prediction task - the original sequence is an audio sequence (often transformed into mel spectrograms). The target sequence is a sequence of characters (or subword tokens). Attention models are capable of the same sequence-to-sequence prediction tasks. They can even perform better than CTC due to their autoregressive decoding. However, they lack certain inductive biases that can be leveraged to stabilize and speed up training (such as the monotonicity exhibited by the CTC loss). Furthermore, by design, attention models require the entire sequence to be available to align the sequence to the output, thereby preventing their use for streaming inference.Then comes the [Transducer Loss](https://arxiv.org/abs/1211.3711). Proposed by Alex Graves, it aimed to resolve the issues in CTC loss while also improving transcription accuracy by performing autoregressive decoding. Drawbacks of Connectionist Temporal Classification (CTC)CTC is an excellent loss to train ASR models in a stable manner but comes with certain limitations on model design. If we presume speech recognition to be a sequence-to-sequence problem, let $T$ be the sequence length of the acoustic model's output, and let $U$ be the sequence length of the target text transcript (post tokenization, either as characters or subwords). -------1) CTC imposes the limitation : $T \ge U$. Normally, this assumption is naturally valid because $T$ is generally a lot longer than the final text transcription. However, there are many cases where this assumption fails.- Acoustic model performs downsampling to such a degree that $T < U$. Why would we want to perform so much downsampling? For convolutions, longer sequences take more stride steps and more memory. For Attention-based models (say Conformer), there's a quadratic memory cost of computing the attention step in proportion to $T$. So more downsampling significantly helps relieve the memory requirements. There are ways to bypass this limitation, as discussed in the `ASR_with_Subword_Tokenization` notebook, but even that has limits.- The target sequence is generally very long. Think of languages such as German, which have very long translations for short English words. In the task of ASR, if there is more than 2x downsampling and character tokenization is used, the model will often fail to learn due to this CTC limitation.2) Tokens predicted by models which are trained with just CTC loss are assumed to be *conditionally independent*. This means that, unlike language models where *h*-*e*-*l*-*l* as input would probably predict *o* to complete *hello*, for CTC trained models - any character from the English alphabet has equal likelihood for prediction. So CTC trained models often have misspellings or missing tokens when transcribing the audio segment to text. 
- Since we often use the Word Error Rate (WER) metric when evaluating models, even a single misspelling contributes significantly to the "word" being incorrect. - To alleviate this issue, we have to resort to Beam Search via an external language model. While this often works and significantly improves transcription accuracy, it is a slow process and involves large N-gram or Neural language models. --------Let's see CTC loss's limitation (1) in action:
###Code
import torch
import torch.nn as nn
T = 10 # acoustic sequence length
U = 16 # target sequence length
V = 28 # vocabulary size
def get_sample(T, U, V, require_grad=True):
torch.manual_seed(0)
acoustic_seq = torch.randn(1, T, V + 1, requires_grad=require_grad)
acoustic_seq_len = torch.tensor([T], dtype=torch.int32) # actual seq length in padded tensor (here no padding is done)
target_seq = torch.randint(low=0, high=V, size=(1, U))
target_seq_len = torch.tensor([U], dtype=torch.int32)
return acoustic_seq, acoustic_seq_len, target_seq, target_seq_len
# First, we use CTC loss in the general sense.
loss = torch.nn.CTCLoss(blank=V, zero_infinity=False)
acoustic_seq, acoustic_seq_len, target_seq, target_seq_len = get_sample(T, U, V)
# CTC loss expects acoustic sequence to be in shape (T, B, V)
val = loss(acoustic_seq.transpose(1, 0), target_seq, acoustic_seq_len, target_seq_len)
print("CTC Loss :", val)
val.backward()
print("Grad of Acoustic model (over V):", acoustic_seq.grad[0, 0, :])
# Next, we use CTC loss with `zero_infinity` flag set.
loss = torch.nn.CTCLoss(blank=V, zero_infinity=True)
acoustic_seq, acoustic_seq_len, target_seq, target_seq_len = get_sample(T, U, V)
# CTC loss expects acoustic sequence to be in shape (T, B, V)
val = loss(acoustic_seq.transpose(1, 0), target_seq, acoustic_seq_len, target_seq_len)
print("CTC Loss :", val)
val.backward()
print("Grad of Acoustic model (over V):", acoustic_seq.grad[0, 0, :])
###Output
_____no_output_____
###Markdown
-------As we saw, CTC loss in the general case will not be able to compute the loss or the gradient when $T < U$. In the PyTorch specific implementation of CTC Loss, we can specify a flag `zero_infinity`, which explicitly checks for such cases, zeroes out the loss and the gradient if such a case occurs. The flag allows us to train a batch of samples where some samples may accidentally violate this limitation, but training will not halt, and gradients will not become NAN. What is the Transducer Loss ?  A model that seeks to use the Transducer loss is composed of three models that interact with each other. They are:-------1) **Acoustic model** : This is nearly the same acoustic model used for CTC models. The output shape of these models is generally $(Batch, \, T, \, AM-Hidden)$. You will note that unlike for CTC, the output of the acoustic model is no longer passed through a decoder layer which would have the shape $(Batch, \, T, \, Vocabulary + 1)$.2) **Prediction / Decoder model** : The prediction model accepts a sequence of target tokens (in the case of ASR, text tokens) and is usually a causal auto-regressive model that is tasked with predicting some hidden feature dimension of shape $(Batch, \, U, \, Pred-Hidden)$.3) **Joint model** : This model accepts the outputs of the Acoustic model and the Prediction model and joins them to compute a joint probability distribution over the vocabulary space to compute the alignments from Acoustic sequence to Target sequence. The output of this model is of the shape $(Batch, \, T, \, U, \, Vocabulary + 1)$.--------During training, the transducer loss is computed on the output of the joint model, which computes the joint probability distribution of a target vocabulary token $v_{t, u}$ (for all $v \in V$) being predicted given the acoustic feature at timestep $t \le T$ and the prediction network features at timestep $u \le U$.--------During inference, we perform a single forward pass over the Acoustic Network to obtain the features of shape $(Batch, \, T, \, AM-Hidden)$, and autoregressively perform the forward passes of the Prediction Network and the Joint Network to decode several $u \le U$ target tokens per acoustic timestep $t \le T$. We will discuss decoding in the following sections. ---------**Note**: For an excellent in-depth explanation of how Transducer loss works, how it computes the alignment, and how the gradient of this alignment is calculated, we highly encourage you to read this post about [Sequence-to-sequence learning with Transducers by Loren Lugosch](https://lorenlugosch.github.io/posts/2020/11/transducer/).--------- Benefits of Transducer LossNow that we understand what a Transducer model is comprised of and how it is trained, the next question that comes to mind is - What is the benefit of the Transducer loss?------1) It is a monotonic loss (similar to CTC). Monotonicity speeds up convergence and does not require auxiliary losses to stabilize training (which is required when using only attention-based loss for sequence-to-sequence training).2) Autoregressive decoding enables the model to implicitly have a dependency between predicted tokens (the conditional independence assumption of CTC trained models is corrected). As such, missing characters or incorrect spellings are less frequent (but still exist since no model is perfect).3) It no longer has the $T \ge U$ limitation that CTC imposed. 
This is because the total joint probability distribution is calculated now - mapping every acoustic timestep $t \le T$ to one or more target timestep $u \le U$. This means that for each timestep $t$, the model has at most $U$ tokens that it can predict, and therefore in the extreme case, it can predict a total of $T \times U$ tokens! Drawbacks of Transducer LossAll of these benefits come with certain costs. As is (almost) always the case in machine learning, there is no free lunch. -------1) During training, the Joint model is required to compute a joint matrix of shape $(Batch, \, T, \, U, \, Vocabulary + 1)$. If you consider the value of these constants for a general dataset like Librispeech, $T \sim 1600$, $U \sim 450$ (with character encoding) and vocabulary $V \sim 28+1$. Considering a batch size of 32, that total memory cost comes out to roughly **2.7 GB** at float precision. The model would also need another **2.7 GB** for the gradients. Of course, the model needs more memory still for the actual Acoustic model + Prediction model + their gradients. Note, however - this issue can be *partially* resolved with some simple tricks, which are discussed in the next tutorial. Also, this memory cost is no longer an issue during inference!2) Autoregressive decoding is slow. Much slower than CTC models, which require just a simple argmax of the output tensor. So while we do get superior transcription quality, we sacrifice decoding speed. --------Let's check that RNNT loss no longer shows the limitations of CTC loss -
###Code
T = 10 # acoustic sequence length
U = 16 # target sequence length
V = 28 # vocabulary size
def get_rnnt_sample(T, U, V, require_grad=True):
torch.manual_seed(0)
joint_tensor = torch.randn(1, T, U + 1, V + 1, requires_grad=require_grad)
acoustic_seq_len = torch.tensor([T], dtype=torch.int32) # actual seq length in padded tensor (here no padding is done)
target_seq = torch.randint(low=0, high=V, size=(1, U))
target_seq_len = torch.tensor([U], dtype=torch.int32)
return joint_tensor, acoustic_seq_len, target_seq, target_seq_len
import nemo.collections.asr as nemo_asr
joint_tensor, acoustic_seq_len, target_seq, target_seq_len = get_rnnt_sample(T, U, V)
# RNNT loss expects joint tensor to be in shape (B, T, U, V)
loss = nemo_asr.losses.rnnt.RNNTLoss(num_classes=V)
# Uncomment to check out the keyword arguments required to call the RNNT loss
print("Transducer loss input types :", loss.input_types)
print()
val = loss(log_probs=joint_tensor, targets=target_seq, input_lengths=acoustic_seq_len, target_lengths=target_seq_len)
print("Transducer Loss :", val)
val.backward()
print("Grad of Acoustic model (over V):", joint_tensor.grad[0, 0, 0, :])
###Output
_____no_output_____
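###Markdown
As a quick sanity check of the memory figure quoted in the drawbacks section above, the cell below simply multiplies out the joint tensor dimensions for the illustrative Librispeech-like values (batch size 32, $T \sim 1600$, $U \sim 450$, $V \sim 28+1$) at float32 precision. These are the same back-of-the-envelope numbers as above, not measurements.
###Code
# Back-of-the-envelope estimate of the joint tensor memory cost during training.
batch_size = 32            # illustrative batch size
t_steps = 1600             # acoustic timesteps (T)
u_steps = 450              # target tokens (U), character encoding
vocab_plus_blank = 28 + 1  # vocabulary + Transducer blank
bytes_per_float = 4        # float32

joint_elements = batch_size * t_steps * u_steps * vocab_plus_blank
joint_gb = joint_elements * bytes_per_float / 1e9

print(f"Joint tensor elements        : {joint_elements:,}")
print(f"Joint tensor (activations)   : {joint_gb:.2f} GB")
print(f"With gradients of same shape : {2 * joint_gb:.2f} GB")
###Output
_____no_output_____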
###Markdown
Configure a Transducer ModelWe now understand a bit more about the transducer loss. Next, we will take a deep dive into how to set up the config for a transducer model.Transducer configs contain a fair bit more detail compared to CTC configs. However, the vast majority of the defaults can be copied and pasted into your configs to have a perfectly functioning transducer model!------Let us download one of the transducer configs already available in NeMo to analyze the components.
###Code
import os
if not os.path.exists("contextnet_rnnt.yaml"):
!wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/examples/asr/conf/contextnet_rnnt/contextnet_rnnt.yaml
from omegaconf import OmegaConf, open_dict
cfg = OmegaConf.load('contextnet_rnnt.yaml')
###Output
_____no_output_____
###Markdown
Model DefaultsSince the transducer model is comprised of three separate models working in unison, it is practical to have some shared section of the config. That shared section is called `model.model_defaults`.
###Code
print(OmegaConf.to_yaml(cfg.model.model_defaults))
###Output
_____no_output_____
###Markdown
-------Of the many components shared here, the last three values are the primary components that a transducer model **must** possess. They are :1) `enc_hidden`: The hidden dimension of the final layer of the Encoder network.2) `pred_hidden`: The hidden dimension of the final layer of the Prediction network.3) `joint_hidden`: The hidden dimension of the intermediate layer of the Joint network.--------One can access these values inside the config by using OmegaConf interpolation as follows :```yamlmodel: ... decoder: ... prednet: pred_hidden: ${model.model_defaults.pred_hidden}``` Acoustic ModelAs we discussed before, the transducer model is comprised of three models combined. One of these models is the Acoustic (encoder) model. We should be able to drop in any CTC Acoustic model config into this section of the transducer config.The only condition that needs to be met is that **the final layer of the acoustic model must have the dimension defined in `model_defaults.enc_hidden`**. Decoder / Prediction ModelThe Prediction model is generally an autoregressive, causal model that consumes text tokens and returns embeddings that will be used by the Joint model. **This config can be dropped into any custom transducer model with no modification.**
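###Markdown
Before printing the decoder config below, here is a tiny, self-contained OmegaConf example of the interpolation pattern shown above - a value defined once under `model_defaults` is referenced from another sub-config and resolved on access. This is a toy config built just for illustration, not the ContextNet config itself.
###Code
from omegaconf import OmegaConf

# Toy config that mirrors the interpolation pattern used by the transducer configs.
toy_cfg = OmegaConf.create(
    """
    model:
      model_defaults:
        pred_hidden: 640
      decoder:
        prednet:
          pred_hidden: ${model.model_defaults.pred_hidden}
    """
)

# Interpolations are resolved lazily when the value is accessed ...
print("Resolved pred_hidden :", toy_cfg.model.decoder.prednet.pred_hidden)

# ... and can be materialized explicitly when dumping the config.
print(OmegaConf.to_yaml(toy_cfg, resolve=True))
###Output
_____no_output_____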
###Code
print(OmegaConf.to_yaml(cfg.model.decoder))
###Output
_____no_output_____
###Markdown
------This config will build an LSTM based Transducer Decoder model. Let us discuss some of the important arguments:1) `blank_as_pad`: In ordinary transducer models, the embedding matrix does not acknowledge the `Transducer Blank` token (similar to CTC Blank). However, this causes the autoregressive loop to be more complicated and less efficient. Instead, this flag which is set by default, will add the `Transducer Blank` token to the embedding matrix - and use it as a pad value (zeros tensor). This enables more efficient inference without harming training.2) `prednet.pred_hidden`: The hidden dimension of the LSTM and the output dimension of the Prediction network. Joint ModelThe Joint model is a simple feed-forward Multi-Layer Perceptron network. This MLP accepts the output of the Acoustic and Prediction models and computes a joint probability distribution over the entire vocabulary space.**This config can be dropped into any custom transducer model with no modification.**
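###Markdown
Before examining the joint config below, here is a minimal PyTorch sketch of what such a joint network computes: the encoder output of shape $(Batch, \, T, \, Enc-Hidden)$ and the prediction network output of shape $(Batch, \, U, \, Pred-Hidden)$ are projected to a shared `joint_hidden` size, combined by broadcasting over the $T$ and $U$ axes, and projected onto the vocabulary (plus blank). This is an illustrative toy module with made-up sizes, not NeMo's actual Joint implementation.
###Code
import torch
import torch.nn as nn

class ToyJoint(nn.Module):
    """Illustrative joint network: combine encoder and prediction features."""

    def __init__(self, enc_hidden, pred_hidden, joint_hidden, vocab_size):
        super().__init__()
        self.enc_proj = nn.Linear(enc_hidden, joint_hidden)
        self.pred_proj = nn.Linear(pred_hidden, joint_hidden)
        self.out = nn.Linear(joint_hidden, vocab_size + 1)  # + Transducer blank

    def forward(self, enc_out, pred_out):
        f = self.enc_proj(enc_out).unsqueeze(2)    # (B, T, 1, joint_hidden)
        g = self.pred_proj(pred_out).unsqueeze(1)  # (B, 1, U, joint_hidden)
        joint = torch.relu(f + g)                  # broadcast to (B, T, U, joint_hidden)
        return self.out(joint)                     # (B, T, U, vocab_size + 1)

toy_joint = ToyJoint(enc_hidden=640, pred_hidden=640, joint_hidden=640, vocab_size=28)
enc_out = torch.randn(1, 50, 640)   # (B, T, enc_hidden)
pred_out = torch.randn(1, 20, 640)  # (B, U, pred_hidden)
print("Joint tensor shape :", toy_joint(enc_out, pred_out).shape)
###Output
_____no_output_____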
###Code
print(OmegaConf.to_yaml(cfg.model.joint))
###Output
_____no_output_____
###Markdown
------The Joint model config has several essential components which we discuss below :1) `log_softmax`: Due to the cost of computing softmax on such large tensors, the Numba CUDA implementation of RNNT loss will implicitly compute the log softmax when called (so its inputs should be logits). The CPU version of the loss doesn't face such memory issues so it requires log-probabilities instead. Since the behaviour is different for CPU-GPU, the `None` value will automatically switch behaviour dependent on whether the input tensor is on a CPU or GPU device.2) `preserve_memory`: This flag will call `torch.cuda.empty_cache()` at certain critical sections when computing the Joint tensor. While this operation might allow us to preserve some memory, the empty_cache() operation is tremendously slow and will slow down training by an order of magnitude or more. It is available to use but not recommended.3) `fuse_loss_wer`: This flag performs "batch splitting" and then "fused loss + metric" calculation. It will be discussed in detail in the next tutorial that will train a Transducer model.4) `fused_batch_size`: When the above flag is set to True, the model will have two distinct "batch sizes". The batch size provided in the three data loader configs (`model.*_ds.batch_size`) will now be the `Acoustic model` batch size, whereas the `fused_batch_size` will be the batch size of the `Prediction model`, the `Joint model`, the `transducer loss` module and the `decoding` module.5) `jointnet.joint_hidden`: The hidden intermediate dimension of the joint network. Transducer DecodingModels which have been trained with CTC can transcribe text simply by performing a regular argmax over the output of their decoder.For transducer-based models, the three networks must operate in a synchronized manner in order to transcribe the acoustic features.The following section of the config describes how to change the decoding logic of the transducer model.**This config can be dropped into any custom transducer model with no modification.**
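###Markdown
Before moving on to decoding, the snippet below shows one way to adjust the fused-batch behaviour discussed above on the already loaded config, assuming (as in this config) that the flags live under `cfg.model.joint`. The `fused_batch_size` of 16 is just an illustrative value.
###Code
from omegaconf import OmegaConf, open_dict

# Enable fused loss + WER computation and set the sub-batch size used by the
# Prediction model, Joint model, transducer loss and decoding modules.
with open_dict(cfg.model.joint):
    cfg.model.joint.fuse_loss_wer = True
    cfg.model.joint.fused_batch_size = 16

print(OmegaConf.to_yaml(cfg.model.joint))
###Output
_____no_output_____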
###Code
print(OmegaConf.to_yaml(cfg.model.decoding))
###Output
_____no_output_____
###Markdown
-------The most important component at the top level is the `strategy`. It can take one of many values:1) `greedy`: This is sample-level greedy decoding. It is generally exceptionally slow as each sample in the batch will be decoded independently. For publications, this should be used alongside batch size of 1 for exact results.2) `greedy_batch`: This is the general default and should nearly match the `greedy` decoding scores (if the acoustic features are not affected by feature mixing in batch mode). Even for small batch sizes, this strategy is significantly faster than `greedy`.3) `beam`: Runs beam search with the implicit language model of the Prediction model. It will generally be quite slow, and might need some tuning of the beam size to get better transcriptions.4) `tsd`: Time synchronous decoding. Please refer to the paper: [Alignment-Length Synchronous Decoding for RNN Transducer](https://ieeexplore.ieee.org/document/9053040) for details on the algorithm implemented. Time synchronous decoding (TSD) execution time grows by the factor T * max_symmetric_expansions. For longer sequences, T is greater and can therefore take a long time for beams to obtain good results. TSD also requires more memory to execute.5) `alsd`: Alignment-length synchronous decoding. Please refer to the paper: [Alignment-Length Synchronous Decoding for RNN Transducer](https://ieeexplore.ieee.org/document/9053040) for details on the algorithm implemented. Alignment-length synchronous decoding (ALSD) execution time is faster than TSD, with a growth factor of T + U_max, where U_max is the maximum target length expected during execution. Generally, T + U_max < T * max_symmetric_expansions. However, ALSD beams are non-unique. Therefore it is required to use larger beam sizes to achieve the same (or close to the same) decoding accuracy as TSD. For a given decoding accuracy, it is possible to attain faster decoding via ALSD than TSD.-------Below, we discuss the various decoding strategies. Greedy DecodingWhen `strategy` is one of `greedy` or `greedy_batch`, an additional subconfig of `decoding.greedy` can be used to set an important decoding value.
###Code
print(OmegaConf.to_yaml(cfg.model.decoding.greedy))
###Output
_____no_output_____
###Markdown
-------This argument `max_symbols` is the maximum number of `target token` decoding steps $u \le U$ per acoustic timestep $t \le T$. Note that during training, this was implicitly constrained by the shape of the joint matrix (max_symbols = $U$). However, there is no such $U$ upper bound during inference (we don't have the ground truth $U$).So we explicitly set a heuristic upper bound on how many decoding steps can be performed per acoustic timestep. Generally a value of 5 and above is sufficient. Beam DecodingNext, we discuss the subconfig when `strategy` is one of `beam`, `tsd` or `alsd`.
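###Markdown
Before turning to beam decoding, here is a small example of overriding the decoding strategy and the greedy `max_symbols` bound on the loaded config; the value of 10 is only an illustrative choice.
###Code
from omegaconf import OmegaConf, open_dict

# Use batched greedy decoding and cap the tokens emitted per acoustic timestep.
with open_dict(cfg.model.decoding):
    cfg.model.decoding.strategy = "greedy_batch"
    cfg.model.decoding.greedy.max_symbols = 10

print(OmegaConf.to_yaml(cfg.model.decoding))
###Output
_____no_output_____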
###Code
print(OmegaConf.to_yaml(cfg.model.decoding.beam))
###Output
_____no_output_____
###Markdown
------There are several important arguments in this section :1) `beam_size`: This determines the beam size for all types of beam decoding strategy. Since this is implemented in PyTorch, large beam sizes will take exorbitant amounts of time.2) `score_norm`: Whether to normalize scores prior to pruning the beam.3) `return_best_hypothesis`: If beam search is being performed, we can choose to return just the best hypothesis or all the hypotheses.4) `tsd_max_sym_exp`: The maximum symmetric expansions allowed per timestep during beam search. Larger values should be used to attempt decoding of longer sequences, but this in turn increases execution time and memory usage.5) `alsd_max_target_len`: The maximum expected target sequence length during beam search. Larger values allow decoding of longer sequences at the expense of execution time and memory. Transducer LossFinally, we reach the Transducer loss config itself. This section configures the type of Transducer loss itself, along with possible sub-sections.**This config can be dropped into any custom transducer model with no modification.**
###Code
print(OmegaConf.to_yaml(cfg.model.loss))
###Output
_____no_output_____
###Markdown
Intro to TransducersBy following the earlier tutorials for Automatic Speech Recognition in NeMo, one would have probably noticed that we always end up using [Connectionist Temporal Classification (CTC) loss](https://distill.pub/2017/ctc/) in order to train the model. Speech Recognition can be formulated in many different ways, and CTC is a more popular approach because it is a monotonic loss - an acoustic feature at timestep $t_1$ and $t_2$ will correspond to a target token at timestep $u_1$ and only then $u_2$. This monotonic property significantly simplifies the training of ASR models and speeds up convergence. However, it has certain drawbacks that we will discuss below.In general, ASR can be described as a sequence-to-sequence prediction task - the original sequence is an audio sequence (often transformed into mel spectrograms). The target sequence is a sequence of characters (or subword tokens). Attention models are capable of the same sequence-to-sequence prediction tasks. They can even perform better than CTC due to their autoregressive decoding. However, they lack certain inductive biases that can be leveraged to stabilize and speed up training (such as the monotonicity exhibited by the CTC loss). Furthermore, by design, attention models require the entire sequence to be available to align the sequence to the output, thereby preventing their use for streaming inference.Then comes the [Transducer Loss](https://arxiv.org/abs/1211.3711). Proposed by Alex Graves, it aimed to resolve the issues in CTC loss while resolving the transcription accuracy issues by performing autoregressive decoding. Drawbacks of Connectionist Temporal Classification (CTC)CTC is an excellent loss to train ASR models in a stable manner but comes with certain limitations on model design. If we presume speech recognition to be a sequence-to-sequence problem, let $T$ be the sequence length of the acoustic model's output, and let $U$ be the sequence length of the target text transcript (post tokenization, either as characters or subwords). -------1) CTC imposes the limitation : $T \ge U$. Normally, this assumption is naturally valid because $T$ is generally a lot longer than the final text transcription. However, there are many cases where this assumption fails.- Acoustic model performs downsampling to such a degree that $T < U$. Why would we want to perform so much downsampling? For convolutions, longer sequences take more stride steps and more memory. For Attention-based models (say Conformer), there's a quadratic memory cost of computing the attention step in proportion to $T$. So more downsampling significantly helps relieve the memory requirements. There are ways to bypass this limitation, as discussed in the `ASR_with_Subword_Tokenization` notebook, but even that has limits.- The target sequence is generally very long. Think of languages such as German, which have very long translations for short English words. In the task of ASR, if there is more than 2x downsampling and character tokenization is used, the model will often fail to learn due to this CTC limitation.2) Tokens predicted by models which are trained with just CTC loss are assumed to be *conditionally independent*. This means that, unlike language models where *h*-*e*-*l*-*l* as input would probably predict *o* to complete *hello*, for CTC trained models - any character from the English alphabet has equal likelihood for prediction. So CTC trained models often have misspellings or missing tokens when transcribing the audio segment to text. 
- Since we often use the Word Error Rate (WER) metric when evaluating models, even a single misspelling contributes significantly to the "word" being incorrect. - To alleviate this issue, we have to resort to Beam Search via an external language model. While this often works and significantly improves transcription accuracy, it is a slow process and involves large N-gram or Neural language models. --------Let's see CTC loss's limitation (1) in action:
###Code
import torch
import torch.nn as nn
T = 10 # acoustic sequence length
U = 16 # target sequence length
V = 28 # vocabulary size
def get_sample(T, U, V, require_grad=True):
torch.manual_seed(0)
acoustic_seq = torch.randn(1, T, V + 1, requires_grad=require_grad)
acoustic_seq_len = torch.tensor([T], dtype=torch.int32) # actual seq length in padded tensor (here no padding is done)
target_seq = torch.randint(low=0, high=V, size=(1, U))
target_seq_len = torch.tensor([U], dtype=torch.int32)
return acoustic_seq, acoustic_seq_len, target_seq, target_seq_len
# First, we use CTC loss in the general sense.
loss = torch.nn.CTCLoss(blank=V, zero_infinity=False)
acoustic_seq, acoustic_seq_len, target_seq, target_seq_len = get_sample(T, U, V)
# CTC loss expects acoustic sequence to be in shape (T, B, V)
val = loss(acoustic_seq.transpose(1, 0), target_seq, acoustic_seq_len, target_seq_len)
print("CTC Loss :", val)
val.backward()
print("Grad of Acoustic model (over V):", acoustic_seq.grad[0, 0, :])
# Next, we use CTC loss with `zero_infinity` flag set.
loss = torch.nn.CTCLoss(blank=V, zero_infinity=True)
acoustic_seq, acoustic_seq_len, target_seq, target_seq_len = get_sample(T, U, V)
# CTC loss expects acoustic sequence to be in shape (T, B, V)
val = loss(acoustic_seq.transpose(1, 0), target_seq, acoustic_seq_len, target_seq_len)
print("CTC Loss :", val)
val.backward()
print("Grad of Acoustic model (over V):", acoustic_seq.grad[0, 0, :])
###Output
_____no_output_____
###Markdown
-------As we saw, CTC loss in the general case will not be able to compute the loss or the gradient when $T < U$. In the PyTorch specific implementation of CTC Loss, we can specify a flag `zero_infinity`, which explicitly checks for such cases, zeroes out the loss and the gradient if such a case occurs. The flag allows us to train a batch of samples where some samples may accidentally violate this limitation, but training will not halt, and gradients will not become NAN. What is the Transducer Loss ?  A model that seeks to use the Transducer loss is composed of three models that interact with each other. They are:-------1) **Acoustic model** : This is nearly the same acoustic model used for CTC models. The output shape of these models is generally $(Batch, \, T, \, AM-Hidden)$. You will note that unlike for CTC, the output of the acoustic model is no longer passed through a decoder layer which would have the shape $(Batch, \, T, \, Vocabulary + 1)$.2) **Prediction / Decoder model** : The prediction model accepts a sequence of target tokens (in the case of ASR, text tokens) and is usually a causal auto-regressive model that is tasked with predicting some hidden feature dimension of shape $(Batch, \, U, \, Pred-Hidden)$.3) **Joint model** : This model accepts the outputs of the Acoustic model and the Prediction model and joins them to compute a joint probability distribution over the vocabulary space to compute the alignments from Acoustic sequence to Target sequence. The output of this model is of the shape $(Batch, \, T, \, U, \, Vocabulary + 1)$.--------During training, the transducer loss is computed on the output of the joint model, which computes the joint probability distribution of a target vocabulary token $v_{t, u}$ (for all $v \in V$) being predicted given the acoustic feature at timestep $t \le T$ and the prediction network features at timestep $u \le U$.--------During inference, we perform a single forward pass over the Acoustic Network to obtain the features of shape $(Batch, \, T, \, AM-Hidden)$, and autoregressively perform the forward passes of the Prediction Network and the Joint Network to decode several $u \le U$ target tokens per acoustic timestep $t \le T$. We will discuss decoding in the following sections. ---------**Note**: For an excellent in-depth explanation of how Transducer loss works, how it computes the alignment, and how the gradient of this alignment is calculated, we highly encourage you to read this post about [Sequence-to-sequence learning with Transducers by Loren Lugosch](https://lorenlugosch.github.io/posts/2020/11/transducer/).--------- Benefits of Transducer LossNow that we understand what a Transducer model is comprised of and how it is trained, the next question that comes to mind is - What is the benefit of the Transducer loss?------1) It is a monotonic loss (similar to CTC). Monotonicity speeds up convergence and does not require auxiliary losses to stabilize training (which is required when using only attention-based loss for sequence-to-sequence training).2) Autoregressive decoding enables the model to implicitly have a dependency between predicted tokens (the conditional independence assumption of CTC trained models is corrected). As such, missing characters or incorrect spellings are less frequent (but still exist since no model is perfect).3) It no longer has the $T \ge U$ limitation that CTC imposed. 
This is because the total joint probability distribution is calculated now - mapping every acoustic timestep $t \le T$ to one or more target timestep $u \le U$. This means that for each timestep $t$, the model has at most $U$ tokens that it can predict, and therefore in the extreme case, it can predict a total of $T \times U$ tokens! Drawbacks of Transducer LossAll of these benefits come with certain costs. As is (almost) always the case in machine learning, there is no free lunch. -------1) During training, the Joint model is required to compute a joint matrix of shape $(Batch, \, T, \, U, \, Vocabulary + 1)$. If you consider the value of these constants for a general dataset like Librispeech, $T \sim 1600$, $U \sim 450$ (with character encoding) and vocabulary $V \sim 28+1$. Considering a batch size of 32, that total memory cost comes out to roughly **2.7 GB** at float precision. The model would also need another **2.7 GB** for the gradients. Of course, the model needs more memory still for the actual Acoustic model + Prediction model + their gradients. Note, however - this issue can be *partially* resolved with some simple tricks, which are discussed in the next tutorial. Also, this memory cost is no longer an issue during inference!2) Autoregressive decoding is slow. Much slower than CTC models, which require just a simple argmax of the output tensor. So while we do get superior transcription quality, we sacrifice decoding speed. --------Let's check that RNNT loss no longer shows the limitations of CTC loss -
###Code
T = 10 # acoustic sequence length
U = 16 # target sequence length
V = 28 # vocabulary size
def get_rnnt_sample(T, U, V, require_grad=True):
torch.manual_seed(0)
joint_tensor = torch.randn(1, T, U + 1, V + 1, requires_grad=require_grad)
acoustic_seq_len = torch.tensor([T], dtype=torch.int32) # actual seq length in padded tensor (here no padding is done)
target_seq = torch.randint(low=0, high=V, size=(1, U))
target_seq_len = torch.tensor([U], dtype=torch.int32)
return joint_tensor, acoustic_seq_len, target_seq, target_seq_len
import nemo.collections.asr as nemo_asr
joint_tensor, acoustic_seq_len, target_seq, target_seq_len = get_rnnt_sample(T, U, V)
# RNNT loss expects joint tensor to be in shape (B, T, U, V)
loss = nemo_asr.losses.rnnt.RNNTLoss(num_classes=V)
# Uncomment to check out the keyword arguments required to call the RNNT loss
print("Transducer loss input types :", loss.input_types)
print()
val = loss(log_probs=joint_tensor, targets=target_seq, input_lengths=acoustic_seq_len, target_lengths=target_seq_len)
print("Transducer Loss :", val)
val.backward()
print("Grad of Acoustic model (over V):", joint_tensor.grad[0, 0, 0, :])
###Output
_____no_output_____
###Markdown
Configure a Transducer ModelWe now understand a bit more about the transducer loss. Next, we will take a deep dive into how to set up the config for a transducer model.Transducer configs contain a fair bit more detail as compared to CTC configs. However, the vast majority of the defaults can be copied and pasted into your configs to have a perfectly functioning transducer model!------Let us download one of the transducer configs already available in NeMo to analyze the components.
###Code
import os
if not os.path.exists("contextnet_rnnt.yaml"):
!wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/examples/asr/conf/contextnet_rnnt/contextnet_rnnt.yaml
from omegaconf import OmegaConf, open_dict
cfg = OmegaConf.load('contextnet_rnnt.yaml')
###Output
_____no_output_____
###Markdown
Model DefaultsSince the transducer model is comprised of three separate models working in unison, it is practical to have some shared section of the config. That shared section is called `model.model_defaults`.
###Code
print(OmegaConf.to_yaml(cfg.model.model_defaults))
###Output
_____no_output_____
###Markdown
-------Of the many components shared here, the last three values are the primary components that a transducer model **must** possess. They are :1) `enc_hidden`: The hidden dimension of the final layer of the Encoder network.2) `pred_hidden`: The hidden dimension of the final layer of the Prediction network.3) `joint_hidden`: The hidden dimension of the intermediate layer of the Joint network.--------One can access these values inside the config by using OmegaConf interpolation as follows :```yamlmodel: ... decoder: ... prednet: pred_hidden: ${model.model_defaults.pred_hidden}``` Acoustic ModelAs we discussed before, the transducer model is comprised of three models combined. One of these models is the Acoustic (encoder) model. We should be able to drop in any CTC Acoustic model config into this section of the transducer config.The only condition that needs to be met is that **the final layer of the acoustic model must have the dimension defined in `model_defaults.enc_hidden`**. Decoder / Prediction ModelThe Prediction model is generally an autoregressive, causal model that consumes text tokens and returns embeddings that will be used by the Joint model. **This config can be dropped into any custom transducer model with no modification.**
###Code
print(OmegaConf.to_yaml(cfg.model.decoder))
###Output
_____no_output_____
###Markdown
------This config will build an LSTM based Transducer Decoder model. Let us discuss some of the important arguments:1) `blank_as_pad`: In ordinary transducer models, the embedding matrix does not acknowledge the `Transducer Blank` token (similar to CTC Blank). However, this causes the autoregressive loop to be more complicated and less efficient. Instead, this flag which is set by default, will add the `Transducer Blank` token to the embedding matrix - and use it as a pad value (zeros tensor). This enables more efficient inference without harming training.2) `prednet.pred_hidden`: The hidden dimension of the LSTM and the output dimension of the Prediction network. Joint ModelThe Joint model is a simple feed-forward Multi-Layer Perceptron network. This MLP accepts the output of the Acoustic and Prediction models and computes a joint probability distribution over the entire vocabulary space.**This config can be dropped into any custom transducer model with no modification.**
###Code
print(OmegaConf.to_yaml(cfg.model.joint))
###Output
_____no_output_____
###Markdown
------The Joint model config has several essential components which we discuss below :1) `log_softmax`: Due to the cost of computing softmax on such large tensors, the Numba CUDA implementation of RNNT loss will implicitly compute the log softmax when called (so its inputs should be logits). The CPU version of the loss doesn't face such memory issues so it requires log-probabilities instead. Since the behaviour is different for CPU-GPU, the `None` value will automatically switch behaviour dependent on whether the input tensor is on a CPU or GPU device.2) `preserve_memory`: This flag will call `torch.cuda.empty_cache()` at certain critical sections when computing the Joint tensor. While this operation might allow us to preserve some memory, the empty_cache() operation is tremendously slow and will slow down training by an order of magnitude or more. It is available to use but not recommended.3) `experimental_fuse_loss_wer`: This flag performs "batch splitting" and then "fused loss + metric" calculation. It will be discussed in detail in the next tutorial that will train a Transducer model.4) `fused_batch_size`: When the above flag is set to True, the model will have two distinct "batch sizes". The batch size provided in the three data loader configs (`model.*_ds.batch_size`) will now be the `Acoustic model` batch size, whereas the `fused_batch_size` will be the batch size of the `Prediction model`, the `Joint model`, the `transducer loss` module and the `decoding` module.5) `jointnet.joint_hidden`: The hidden intermediate dimension of the joint network. Transducer DecodingModels which have been trained with CTC can transcribe text simply by performing a regular argmax over the output of their decoder.For transducer-based models, the three networks must operate in a synchronized manner in order to transcribe the acoustic features.The following section of the config describes how to change the decoding logic of the transducer model.**This config can be dropped into any custom transducer model with no modification.**
###Code
print(OmegaConf.to_yaml(cfg.model.decoding))
###Output
_____no_output_____
###Markdown
-------The most important component at the top level is the `strategy`. It can take one of many values:1) `greedy`: This is sample-level greedy decoding. It is generally exceptionally slow as each sample in the batch will be decoded independently. For publications, this should be used alongside batch size of 1 for exact results.2) `greedy_batch`: This is the general default and should nearly match the `greedy` decoding scores (if the acoustic features are not affected by feature mixing in batch mode). Even for small batch sizes, this strategy is significantly faster than `greedy`.3) `beam`: Runs beam search with the implicit language model of the Prediction model. It will generally be quite slow, and might need some tuning of the beam size to get better transcriptions.4) `tsd`: Time synchronous decoding. Please refer to the paper: [Alignment-Length Synchronous Decoding for RNN Transducer](https://ieeexplore.ieee.org/document/9053040) for details on the algorithm implemented. Time synchronous decoding (TSD) execution time grows by the factor T * max_symmetric_expansions. For longer sequences, T is greater and can therefore take a long time for beams to obtain good results. TSD also requires more memory to execute.5) `alsd`: Alignment-length synchronous decoding. Please refer to the paper: [Alignment-Length Synchronous Decoding for RNN Transducer](https://ieeexplore.ieee.org/document/9053040) for details on the algorithm implemented. Alignment-length synchronous decoding (ALSD) execution time is faster than TSD, with a growth factor of T + U_max, where U_max is the maximum target length expected during execution. Generally, T + U_max < T * max_symmetric_expansions. However, ALSD beams are non-unique. Therefore it is required to use larger beam sizes to achieve the same (or close to the same) decoding accuracy as TSD. For a given decoding accuracy, it is possible to attain faster decoding via ALSD than TSD.-------Below, we discuss the various decoding strategies. Greedy DecodingWhen `strategy` is one of `greedy` or `greedy_batch`, an additional subconfig of `decoding.greedy` can be used to set an important decoding value.
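###Markdown
Before looking at the greedy subconfig below, the small cell that follows simply multiplies out the growth factors quoted above so the TSD vs. ALSD comparison is concrete. The values of T, U_max and max_symmetric_expansions are hypothetical, chosen only for illustration.
###Code
# Compare the quoted search-effort growth factors for TSD and ALSD decoding.
t_steps = 1600                  # acoustic timesteps (illustrative)
u_max = 450                     # maximum expected target length (illustrative)
max_symmetric_expansions = 50   # illustrative TSD setting

tsd_growth = t_steps * max_symmetric_expansions
alsd_growth = t_steps + u_max

print(f"TSD growth factor  (T * max_symmetric_expansions) : {tsd_growth:,}")
print(f"ALSD growth factor (T + U_max)                     : {alsd_growth:,}")
###Output
_____no_output_____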
###Code
print(OmegaConf.to_yaml(cfg.model.decoding.greedy))
###Output
_____no_output_____
###Markdown
-------This argument `max_symbols` is the maximum number of `target token` decoding steps $u \le U$ per acoustic timestep $t \le T$. Note that during training, this was implicitly constrained by the shape of the joint matrix (max_symbols = $U$). However, there is no such $U$ upper bound during inference (we don't have the ground truth $U$).So we explicitly set a heuristic upper bound on how many decoding steps can be performed per acoustic timestep. Generally a value of 5 and above is sufficient. Beam DecodingNext, we discuss the subconfig when `strategy` is one of `beam`, `tsd` or `alsd`.
###Code
print(OmegaConf.to_yaml(cfg.model.decoding.beam))
###Output
_____no_output_____
###Markdown
------There are several important arguments in this section :1) `beam_size`: This determines the beam size for all types of beam decoding strategy. Since this is implemented in PyTorch, large beam sizes will take exorbitant amounts of time.2) `score_norm`: Whether to normalize scores prior to pruning the beam.3) `return_best_hypothesis`: If beam search is being performed, we can choose to return just the best hypothesis or all the hypotheses.4) `tsd_max_sym_exp`: The maximum symmetric expansions allowed per timestep during beam search. Larger values should be used to attempt decoding of longer sequences, but this in turn increases execution time and memory usage.5) `alsd_max_target_len`: The maximum expected target sequence length during beam search. Larger values allow decoding of longer sequences at the expense of execution time and memory. Transducer LossFinally, we reach the Transducer loss config itself. This section configures the type of Transducer loss itself, along with possible sub-sections.**This config can be dropped into any custom transducer model with no modification.**
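###Markdown
Before reaching the loss config below, here is a short example of overriding the beam search settings discussed above on the loaded config; the beam size of 4 and the choice to return the full set of hypotheses are only illustrative.
###Code
from omegaconf import OmegaConf, open_dict

# Switch to beam search decoding with a small beam and keep all hypotheses.
with open_dict(cfg.model.decoding):
    cfg.model.decoding.strategy = "beam"
    cfg.model.decoding.beam.beam_size = 4
    cfg.model.decoding.beam.return_best_hypothesis = False

print(OmegaConf.to_yaml(cfg.model.decoding))
###Output
_____no_output_____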
###Code
print(OmegaConf.to_yaml(cfg.model.loss))
###Output
_____no_output_____
###Markdown
Intro to TransducersBy following the earlier tutorials for Automatic Speech Recognition in NeMo, one would have probably noticed that we always end up using [Connectionist Temporal Classification (CTC) loss](https://distill.pub/2017/ctc/) in order to train the model. Speech Recognition can be formulated in many different ways, and CTC is a more popular approach because it is a monotonic loss - an acoustic feature at timestep $t_1$ and $t_2$ will correspond to a target token at timestep $u_1$ and only then $u_2$. This monotonic property significantly simplifies the training of ASR models and speeds up convergence. However, it has certain drawbacks that we will discuss below.In general, ASR can be described as a sequence-to-sequence prediction task - the original sequence is an audio sequence (often transformed into mel spectrograms). The target sequence is a sequence of characters (or subword tokens). Attention models are capable of the same sequence-to-sequence prediction tasks. They can even perform better than CTC due to their autoregressive decoding. However, they lack certain inductive biases that can be leveraged to stabilize and speed up training (such as the monotonicity exhibited by the CTC loss). Furthermore, by design, attention models require the entire sequence to be available to align the sequence to the output, thereby preventing their use for streaming inference.Then comes the [Transducer Loss](https://arxiv.org/abs/1211.3711). Proposed by Alex Graves, it aimed to resolve the issues in CTC loss while resolving the transcription accuracy issues by performing autoregressive decoding. Drawbacks of Connectionist Temporal Classification (CTC)CTC is an excellent loss to train ASR models in a stable manner but comes with certain limitations on model design. If we presume speech recognition to be a sequence-to-sequence problem, let $T$ be the sequence length of the acoustic model's output, and let $U$ be the sequence length of the target text transcript (post tokenization, either as characters or subwords). -------1) CTC imposes the limitation : $T \ge U$. Normally, this assumption is naturally valid because $T$ is generally a lot longer than the final text transcription. However, there are many cases where this assumption fails.- Acoustic model performs downsampling to such a degree that $T < U$. Why would we want to perform so much downsampling? For convolutions, longer sequences take more stride steps and more memory. For Attention-based models (say Conformer), there's a quadratic memory cost of computing the attention step in proportion to $T$. So more downsampling significantly helps relieve the memory requirements. There are ways to bypass this limitation, as discussed in the `ASR_with_Subword_Tokenization` notebook, but even that has limits.- The target sequence is generally very long. Think of languages such as German, which have very long translations for short English words. In the task of ASR, if there is more than 2x downsampling and character tokenization is used, the model will often fail to learn due to this CTC limitation.2) Tokens predicted by models which are trained with just CTC loss are assumed to be *conditionally independent*. This means that, unlike language models where *h*-*e*-*l*-*l* as input would probably predict *o* to complete *hello*, for CTC trained models - any character from the English alphabet has equal likelihood for prediction. So CTC trained models often have misspellings or missing tokens when transcribing the audio segment to text. 
- Since we often use the Word Error Rate (WER) metric when evaluating models, even a single misspelling contributes significantly to the "word" being incorrect. - To alleviate this issue, we have to resort to Beam Search via an external language model. While this often works and significantly improves transcription accuracy, it is a slow process and involves large N-gram or Neural language models. --------Let's see CTC loss's limitation (1) in action:
###Code
import torch
import torch.nn as nn
T = 10 # acoustic sequence length
U = 16 # target sequence length
V = 28 # vocabulary size
def get_sample(T, U, V, require_grad=True):
torch.manual_seed(0)
acoustic_seq = torch.randn(1, T, V + 1, requires_grad=require_grad)
acoustic_seq_len = torch.tensor([T], dtype=torch.int32) # actual seq length in padded tensor (here no padding is done)
target_seq = torch.randint(low=0, high=V, size=(1, U))
target_seq_len = torch.tensor([U], dtype=torch.int32)
return acoustic_seq, acoustic_seq_len, target_seq, target_seq_len
# First, we use CTC loss in the general sense.
loss = torch.nn.CTCLoss(blank=V, zero_infinity=False)
acoustic_seq, acoustic_seq_len, target_seq, target_seq_len = get_sample(T, U, V)
# CTC loss expects acoustic sequence to be in shape (T, B, V)
val = loss(acoustic_seq.transpose(1, 0), target_seq, acoustic_seq_len, target_seq_len)
print("CTC Loss :", val)
val.backward()
print("Grad of Acoustic model (over V):", acoustic_seq.grad[0, 0, :])
# Next, we use CTC loss with `zero_infinity` flag set.
loss = torch.nn.CTCLoss(blank=V, zero_infinity=True)
acoustic_seq, acoustic_seq_len, target_seq, target_seq_len = get_sample(T, U, V)
# CTC loss expects acoustic sequence to be in shape (T, B, V)
val = loss(acoustic_seq.transpose(1, 0), target_seq, acoustic_seq_len, target_seq_len)
print("CTC Loss :", val)
val.backward()
print("Grad of Acoustic model (over V):", acoustic_seq.grad[0, 0, :])
###Output
_____no_output_____
###Markdown
-------As we saw, CTC loss in the general case will not be able to compute the loss or the gradient when $T < U$. In the PyTorch specific implementation of CTC Loss, we can specify a flag `zero_infinity`, which explicitly checks for such cases, zeroes out the loss and the gradient if such a case occurs. The flag allows us to train a batch of samples where some samples may accidentally violate this limitation, but training will not halt, and gradients will not become NAN. What is the Transducer Loss ?  A model that seeks to use the Transducer loss is composed of three models that interact with each other. They are:-------1) **Acoustic model** : This is nearly the same acoustic model used for CTC models. The output shape of these models is generally $(Batch, \, T, \, AM-Hidden)$. You will note that unlike for CTC, the output of the acoustic model is no longer passed through a decoder layer which would have the shape $(Batch, \, T, \, Vocabulary + 1)$.2) **Prediction / Decoder model** : The prediction model accepts a sequence of target tokens (in the case of ASR, text tokens) and is usually a causal auto-regressive model that is tasked with predicting some hidden feature dimension of shape $(Batch, \, U, \, Pred-Hidden)$.3) **Joint model** : This model accepts the outputs of the Acoustic model and the Prediction model and joins them to compute a joint probability distribution over the vocabulary space to compute the alignments from Acoustic sequence to Target sequence. The output of this model is of the shape $(Batch, \, T, \, U, \, Vocabulary + 1)$.--------During training, the transducer loss is computed on the output of the joint model, which computes the joint probability distribution of a target vocabulary token $v_{t, u}$ (for all $v \in V$) being predicted given the acoustic feature at timestep $t \le T$ and the prediction network features at timestep $u \le U$.--------During inference, we perform a single forward pass over the Acoustic Network to obtain the features of shape $(Batch, \, T, \, AM-Hidden)$, and autoregressively perform the forward passes of the Prediction Network and the Joint Network to decode several $u \le U$ target tokens per acoustic timestep $t \le T$. We will discuss decoding in the following sections. ---------**Note**: For an excellent in-depth explanation of how Transducer loss works, how it computes the alignment, and how the gradient of this alignment is calculated, we highly encourage you to read this post about [Sequence-to-sequence learning with Transducers by Loren Lugosch](https://lorenlugosch.github.io/posts/2020/11/transducer/).--------- Benefits of Transducer LossNow that we understand what a Transducer model is comprised of and how it is trained, the next question that comes to mind is - What is the benefit of the Transducer loss?------1) It is a monotonic loss (similar to CTC). Monotonicity speeds up convergence and does not require auxiliary losses to stabilize training (which is required when using only attention-based loss for sequence-to-sequence training).2) Autoregressive decoding enables the model to implicitly have a dependency between predicted tokens (the conditional independence assumption of CTC trained models is corrected). As such, missing characters or incorrect spellings are less frequent (but still exist since no model is perfect).3) It no longer has the $T \ge U$ limitation that CTC imposed. 
This is because the total joint probability distribution is calculated now - mapping every acoustic timestep $t \le T$ to one or more target timestep $u \le U$. This means that for each timestep $t$, the model has at most $U$ tokens that it can predict, and therefore in the extreme case, it can predict a total of $T \times U$ tokens! Drawbacks of Transducer LossAll of these benefits come with certain costs. As is (almost) always the case in machine learning, there is no free lunch. -------1) During training, the Joint model is required to compute a joint matrix of shape $(Batch, \, T, \, U, \, Vocabulary + 1)$. If you consider the value of these constants for a general dataset like Librispeech, $T \sim 1600$, $U \sim 450$ (with character encoding) and vocabulary $V \sim 28+1$. Considering a batch size of 32, that total memory cost comes out to roughly **2.7 GB** at float precision. The model would also need another **2.7 GB** for the gradients. Of course, the model needs more memory still for the actual Acoustic model + Prediction model + their gradients. Note, however - this issue can be *partially* resolved with some simple tricks, which are discussed in the next tutorial. Also, this memory cost is no longer an issue during inference!2) Autoregressive decoding is slow. Much slower than CTC models, which require just a simple argmax of the output tensor. So while we do get superior transcription quality, we sacrifice decoding speed. --------Let's check that RNNT loss no longer shows the limitations of CTC loss -
###Code
T = 10 # acoustic sequence length
U = 16 # target sequence length
V = 28 # vocabulary size
def get_rnnt_sample(T, U, V, require_grad=True):
torch.manual_seed(0)
joint_tensor = torch.randn(1, T, U + 1, V + 1, requires_grad=require_grad)
acoustic_seq_len = torch.tensor([T], dtype=torch.int32) # actual seq length in padded tensor (here no padding is done)
target_seq = torch.randint(low=0, high=V, size=(1, U))
target_seq_len = torch.tensor([U], dtype=torch.int32)
return joint_tensor, acoustic_seq_len, target_seq, target_seq_len
import nemo.collections.asr as nemo_asr
joint_tensor, acoustic_seq_len, target_seq, target_seq_len = get_rnnt_sample(T, U, V)
# RNNT loss expects joint tensor to be in shape (B, T, U, V)
loss = nemo_asr.losses.rnnt.RNNTLoss(num_classes=V)
# Uncomment to check out the keyword arguments required to call the RNNT loss
print("Transducer loss input types :", loss.input_types)
print()
val = loss(log_probs=joint_tensor, targets=target_seq, input_lengths=acoustic_seq_len, target_lengths=target_seq_len)
print("Transducer Loss :", val)
val.backward()
print("Grad of Acoustic model (over V):", joint_tensor.grad[0, 0, 0, :])
###Output
_____no_output_____
###Markdown
Configure a Transducer ModelWe now understand a bit more about the transducer loss. Next, we will take a deep dive into how to set up the config for a transducer model.Transducer configs contain a fair bit more detail compared to CTC configs. However, the vast majority of the defaults can be copied and pasted into your configs to have a perfectly functioning transducer model!------Let us download one of the transducer configs already available in NeMo to analyze the components.
###Code
import os
if not os.path.exists("contextnet_rnnt.yaml"):
!wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/examples/asr/conf/contextnet_rnnt/contextnet_rnnt.yaml
from omegaconf import OmegaConf, open_dict
cfg = OmegaConf.load('contextnet_rnnt.yaml')
###Output
_____no_output_____
###Markdown
Model DefaultsSince the transducer model is comprised of three separate models working in unison, it is practical to have some shared section of the config. That shared section is called `model.model_defaults`.
###Code
print(OmegaConf.to_yaml(cfg.model.model_defaults))
###Output
_____no_output_____
###Markdown
-------Of the many components shared here, the last three values are the primary components that a transducer model **must** possess. They are :1) `enc_hidden`: The hidden dimension of the final layer of the Encoder network.2) `pred_hidden`: The hidden dimension of the final layer of the Prediction network.3) `joint_hidden`: The hidden dimension of the intermediate layer of the Joint network.--------One can access these values inside the config by using OmegaConf interpolation as follows :```yamlmodel: ... decoder: ... prednet: pred_hidden: ${model.model_defaults.pred_hidden}``` Acoustic ModelAs we discussed before, the transducer model is comprised of three models combined. One of these models is the Acoustic (encoder) model. We should be able to drop in any CTC Acoustic model config into this section of the transducer config.The only condition that needs to be met is that **the final layer of the acoustic model must have the dimension defined in `model_defaults.enc_hidden`**. Decoder / Prediction ModelThe Prediction model is generally an autoregressive, causal model that consumes text tokens and returns embeddings that will be used by the Joint model. **This config can be dropped into any custom transducer model with no modification.**
###Code
print(OmegaConf.to_yaml(cfg.model.decoder))
###Output
_____no_output_____
###Markdown
------This config will build an LSTM based Transducer Decoder model. Let us discuss some of the important arguments:1) `blank_as_pad`: In ordinary transducer models, the embedding matrix does not acknowledge the `Transducer Blank` token (similar to CTC Blank). However, this causes the autoregressive loop to be more complicated and less efficient. Instead, this flag which is set by default, will add the `Transducer Blank` token to the embedding matrix - and use it as a pad value (zeros tensor). This enables more efficient inference without harming training.2) `prednet.pred_hidden`: The hidden dimension of the LSTM and the output dimension of the Prediction network. Joint ModelThe Joint model is a simple feed-forward Multi-Layer Perceptron network. This MLP accepts the output of the Acoustic and Prediction models and computes a joint probability distribution over the entire vocabulary space.**This config can be dropped into any custom transducer model with no modification.**
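###Markdown
Before inspecting the joint config below, here is a minimal PyTorch sketch of the idea behind `blank_as_pad`: the blank token is given its own row in the embedding matrix and that row is used as the padding index, so it embeds to an all-zeros vector. This is an illustrative toy prediction network with made-up sizes, not NeMo's decoder implementation.
###Code
import torch
import torch.nn as nn

V = 28             # vocabulary size
blank_idx = V      # blank appended after the real tokens
pred_hidden = 320  # illustrative hidden size

# blank_as_pad: the blank token's embedding row is fixed to zeros (padding_idx).
embedding = nn.Embedding(V + 1, pred_hidden, padding_idx=blank_idx)
lstm = nn.LSTM(pred_hidden, pred_hidden, batch_first=True)

tokens = torch.tensor([[blank_idx, 3, 7, 12]])  # start-of-sequence blank, then targets
emb = embedding(tokens)                         # (B, U, pred_hidden)
pred_out, _ = lstm(emb)                         # (B, U, pred_hidden)

print("Blank embedding is all zeros :", bool((embedding.weight[blank_idx] == 0).all()))
print("Prediction network output    :", pred_out.shape)
###Output
_____no_output_____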
###Code
print(OmegaConf.to_yaml(cfg.model.joint))
###Output
_____no_output_____
###Markdown
------The Joint model config has several essential components which we discuss below :1) `log_softmax`: Due to the cost of computing softmax on such large tensors, the Numba CUDA implementation of RNNT loss will implicitly compute the log softmax when called (so its inputs should be logits). The CPU version of the loss doesn't face such memory issues so it requires log-probabilities instead. Since the behaviour is different for CPU-GPU, the `None` value will automatically switch behaviour dependent on whether the input tensor is on a CPU or GPU device.2) `preserve_memory`: This flag will call `torch.cuda.empty_cache()` at certain critical sections when computing the Joint tensor. While this operation might allow us to preserve some memory, the empty_cache() operation is tremendously slow and will slow down training by an order of magnitude or more. It is available to use but not recommended.3) `fuse_loss_wer`: This flag performs "batch splitting" and then "fused loss + metric" calculation. It will be discussed in detail in the next tutorial that will train a Transducer model.4) `fused_batch_size`: When the above flag is set to True, the model will have two distinct "batch sizes". The batch size provided in the three data loader configs (`model.*_ds.batch_size`) will now be the `Acoustic model` batch size, whereas the `fused_batch_size` will be the batch size of the `Prediction model`, the `Joint model`, the `transducer loss` module and the `decoding` module.5) `jointnet.joint_hidden`: The hidden intermediate dimension of the joint network. Transducer DecodingModels which have been trained with CTC can transcribe text simply by performing a regular argmax over the output of their decoder.For transducer-based models, the three networks must operate in a synchronized manner in order to transcribe the acoustic features.The following section of the config describes how to change the decoding logic of the transducer model.**This config can be dropped into any custom transducer model with no modification.**
###Code
print(OmegaConf.to_yaml(cfg.model.decoding))
###Output
_____no_output_____
###Markdown
-------The most important component at the top level is the `strategy`. It can take one of many values:1) `greedy`: This is sample-level greedy decoding. It is generally exceptionally slow as each sample in the batch will be decoded independently. For publications, this should be used alongside batch size of 1 for exact results.2) `greedy_batch`: This is the general default and should nearly match the `greedy` decoding scores (if the acoustic features are not affected by feature mixing in batch mode). Even for small batch sizes, this strategy is significantly faster than `greedy`.3) `beam`: Runs beam search with the implicit language model of the Prediction model. It will generally be quite slow, and might need some tuning of the beam size to get better transcriptions.4) `tsd`: Time synchronous decoding. Please refer to the paper: [Alignment-Length Synchronous Decoding for RNN Transducer](https://ieeexplore.ieee.org/document/9053040) for details on the algorithm implemented. Time synchronous decoding (TSD) execution time grows by the factor T * max_symmetric_expansions. For longer sequences, T is greater and can therefore take a long time for beams to obtain good results. TSD also requires more memory to execute.5) `alsd`: Alignment-length synchronous decoding. Please refer to the paper: [Alignment-Length Synchronous Decoding for RNN Transducer](https://ieeexplore.ieee.org/document/9053040) for details on the algorithm implemented. Alignment-length synchronous decoding (ALSD) execution time is faster than TSD, with a growth factor of T + U_max, where U_max is the maximum target length expected during execution. Generally, T + U_max < T * max_symmetric_expansions. However, ALSD beams are non-unique. Therefore it is required to use larger beam sizes to achieve the same (or close to the same) decoding accuracy as TSD. For a given decoding accuracy, it is possible to attain faster decoding via ALSD than TSD.-------Below, we discuss the various decoding strategies. Greedy DecodingWhen `strategy` is one of `greedy` or `greedy_batch`, an additional subconfig of `decoding.greedy` can be used to set an important decoding value.
###Code
print(OmegaConf.to_yaml(cfg.model.decoding.greedy))
###Output
_____no_output_____
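###Markdown
As an illustrative sketch (assuming the `cfg` object loaded earlier), the decoding strategy is selected by overriding this single field; `greedy_batch` is shown only because it is the general default discussed above:
###Code
# Hypothetical override: select the batched greedy decoding strategy
cfg.model.decoding.strategy = "greedy_batch"
print("Decoding strategy :", cfg.model.decoding.strategy)
###Output
_____no_output_____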
###Markdown
-------This argument `max_symbols` is the maximum number of `target token` decoding steps $u \le U$ per acoustic timestep $t \le T$. Note that during training, this was implicitly constrained by the shape of the joint matrix (max_symbols = $U$). However, there is no such $U$ upper bound during inference (we don't have the ground truth $U$).So we explicitly set a heuristic upper bound on how many decoding steps can be performed per acoustic timestep. Generally a value of 5 and above is sufficient. Beam DecodingNext, we discuss the subconfig when `strategy` is one of `beam`, `tsd` or `alsd`.
###Code
print(OmegaConf.to_yaml(cfg.model.decoding.beam))
###Output
_____no_output_____
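###Markdown
Similarly, the greedy-decoding bound discussed above can be adjusted through one field of the subconfig (a sketch on the same `cfg` object; the value 10 is an arbitrary illustration):
###Code
# Hypothetical override: cap the number of target tokens decoded per acoustic timestep
cfg.model.decoding.greedy.max_symbols = 10
print(OmegaConf.to_yaml(cfg.model.decoding.greedy))
###Output
_____no_output_____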
###Markdown
------There are several important arguments in this section :1) `beam_size`: This determines the beam size for all types of beam decoding strategy. Since this is implemented in PyTorch, large beam sizes will take exorbitant amounts of time.2) `score_norm`: Whether to normalize scores prior to pruning the beam.3) `return_best_hypothesis`: If beam search is being performed, we can choose to return just the best hypothesis or all the hypotheses.4) `tsd_max_sym_exp`: The maximum symmetric expansions allowed per timestep during beam search. Larger values should be used to attempt decoding of longer sequences, but this in turn increases execution time and memory usage.5) `alsd_max_target_len`: The maximum expected target sequence length during beam search. Larger values allow decoding of longer sequences at the expense of execution time and memory. Transducer LossFinally, we reach the Transducer loss config itself. This section configures the type of Transducer loss itself, along with possible sub-sections.**This config can be dropped into any custom transducer model with no modification.**
###Code
print(OmegaConf.to_yaml(cfg.model.loss))
###Output
_____no_output_____
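###Markdown
For completeness, here is a sketch of overriding the beam-search arguments described above on the same `cfg` object. The values are assumptions for illustration, and the strategy would also need to be set to one of the beam variants for them to take effect:
###Code
# Hypothetical overrides of the beam-search subconfig
cfg.model.decoding.strategy = "beam"
cfg.model.decoding.beam.beam_size = 4                  # larger beams are slower
cfg.model.decoding.beam.return_best_hypothesis = True  # return only the top hypothesis
print(OmegaConf.to_yaml(cfg.model.decoding.beam))
###Output
_____no_output_____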
site/ru/r1/tutorials/keras/save_and_restore_models.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Saving and loading models Run in Google Colab View source on GitHub Note: This section was translated by the Russian-speaking TensorFlow community on a volunteer basis. Since this translation is not official, we cannot guarantee that it is 100% accurate and consistent with the [official documentation in English](https://www.tensorflow.org/?hl=en). If you have a suggestion for improving this translation, we would be glad to see a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. If you want to help make the TensorFlow documentation better (by translating or reviewing translations), write to us at the [[email protected] list](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ru). Model training progress can be saved during and after training: training can be resumed from where it left off, which helps avoid long uninterrupted training sessions. Saving a model also lets you share it with others so that they can reproduce its results. In addition to the model itself and the techniques used, most machine learning practitioners also publish:* The code used to train the model* The trained weights, or parameters, of the modelPublishing this data helps others understand how the model works and lets them check how it behaves on new data.Caution! Be careful with code you do not trust. Be sure to read [Using TensorFlow Securely](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md) OptionsThere are different ways to save TensorFlow models - it depends on the APIs you used in your model. This tutorial uses [tf.keras](https://www.tensorflow.org/r1/guide/keras), a high-level API for building and training models in TensorFlow. For other approaches, see the TensorFlow guide [Save and restore models](https://www.tensorflow.org/r1/guide/saved_model) or [Saving in Eager](https://www.tensorflow.org/r1/guide/eagerobject-based_saving). Setup Install and import dependencies Install and import TensorFlow and its dependencies:
###Code
!pip install h5py pyyaml
###Output
_____no_output_____
###Markdown
Load the datasetWe will use the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) to train our model and demonstrate how weights are saved. To speed things up, we only use the first 1000 examples:
###Code
import os
import tensorflow.compat.v1 as tf
from tensorflow import keras
tf.__version__
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
###Output
_____no_output_____
###Markdown
Build the model Let's build a simple model that we will use to demonstrate saving and loading weights:
###Code
# Returns a short sequential model
def create_model():
model = tf.keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(784,)),
keras.layers.Dropout(0.2),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer=tf.train.AdamOptimizer(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
return model
# Create the model
model = create_model()
model.summary()
###Output
_____no_output_____
###Markdown
Save checkpoints during training The main use case is to automatically save the model both *during* and *at the end of* training. That way you can reuse a model without retraining it, or pick up training where it was interrupted.This is the job of the `tf.keras.callbacks.ModelCheckpoint` callback, which can also be configured with several arguments. Using the callbackLet's train our model and pass it the `ModelCheckpoint` callback:
###Code
checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create a checkpoint callback
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=True,
verbose=1)
model = create_model()
model.fit(train_images, train_labels, epochs = 10,
validation_data = (test_images,test_labels),
          callbacks = [cp_callback])  # pass the callback to training
###Output
_____no_output_____
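###Markdown
The callback accepts more options than the ones used above. As a purely illustrative sketch (these are standard `tf.keras.callbacks.ModelCheckpoint` arguments, but this cell is not needed by the rest of the tutorial), you could keep only the weights that achieve the best validation loss:
###Code
# Hypothetical variant: overwrite a single checkpoint only when the validation loss improves
best_callback = tf.keras.callbacks.ModelCheckpoint("training_1/best.ckpt",
                                                   monitor='val_loss',
                                                   save_best_only=True,
                                                   save_weights_only=True,
                                                   verbose=1)
# Pass it to `model.fit(..., callbacks=[best_callback])` exactly like cp_callback above.
###Output
_____no_output_____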
###Markdown
This creates a single collection of TensorFlow checkpoint files that are updated at the end of each epoch:
###Code
!ls {checkpoint_dir}
###Output
_____no_output_____
###Markdown
Now create a new, untrained model. When restoring a model from weights only, the new model must have exactly the same architecture as the original one. Because the architecture is identical, we can share weights between different *instances* of the model.We will also evaluate the new model on the test set. An untrained model will only guess the correct digit class by chance (accuracy around 10%):
###Code
model = create_model()
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Untrained model, accuracy: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
Now load the weights from the checkpoint and evaluate again:
###Code
model.load_weights(checkpoint_path)
loss,acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
Checkpoint callback optionsThe callback provides several options that give the checkpoints unique names and adjust how often checkpoints are saved.Let's train a new model and save checkpoints every 5 epochs:
###Code
# Include the epoch in the file name (converted to a string with `str.format`)
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(
checkpoint_path, verbose=1, save_weights_only=True,
    # Save weights every 5 epochs
period=5)
model = create_model()
model.fit(train_images, train_labels,
epochs = 50, callbacks = [cp_callback],
validation_data = (test_images,test_labels),
verbose=0)
###Output
_____no_output_____
###Markdown
Now look at the resulting checkpoints and pick the latest one:
###Code
! ls {checkpoint_dir}
latest = tf.train.latest_checkpoint(checkpoint_dir)
latest
###Output
_____no_output_____
###Markdown
Remember: by default, TensorFlow keeps only the 5 most recent checkpoints.To test restoring, reset the model and load the latest checkpoint:
###Code
model = create_model()
model.load_weights(latest)
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
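###Markdown
You are not limited to the latest file: any checkpoint still on disk can be loaded by its path prefix. A small sketch (assuming `cp-0030.ckpt` survived, since only the five most recent checkpoints are kept):
###Code
# Restore weights from a specific checkpoint instead of the latest one
model = create_model()
model.load_weights("training_2/cp-0030.ckpt")
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Model restored from epoch 30, accuracy: {:5.2f}%".format(100*acc))
###Output
_____no_output_____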
###Markdown
What are these files? The code above stores the weights in a collection of [checkpoint](https://www.tensorflow.org/r1/guide/saved_modelsave_and_restore_variables)-formatted files that contain only the trained weights in a binary format. Checkpoints contain:* One or more shards that hold your model's weights* An index file that indicates which weights are stored in which shardIf you train a model on a single machine, you will have a single shard with the suffix `.data-00000-of-00001` Manually save weightsAbove you saw how to load weights into a model.Manually saving weights is just as simple - use the `Model.save_weights` method:
###Code
# Save the weights
model.save_weights('./checkpoints/my_checkpoint')
# Restore the weights
model = create_model()
model.load_weights('./checkpoints/my_checkpoint')
loss,acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
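###Markdown
If you are curious what a checkpoint actually stores, you can list the variable names and shapes it contains. This is an optional sketch; it assumes the `./checkpoints/my_checkpoint` files written above are present:
###Code
# List the variable names and shapes stored in the manually saved checkpoint
for name, shape in tf.train.list_variables('./checkpoints/my_checkpoint'):
    print(name, shape)
###Output
_____no_output_____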
###Markdown
Save the entire modelYou can also save the whole model to a single file that contains the weight values, the model's configuration, and even the optimizer's configuration (depending on the options you set). This lets you restore a model and resume training later, exactly where you left off, without touching the original code.Saving a fully functional model is very useful. For example, you can later load it in TensorFlow.js ([HDF5](https://js.tensorflow.org/r1/tutorials/import-keras.html), [Saved Model](https://js.tensorflow.org/r1/tutorials/import-saved-model.html)) and then train and run it in web browsers, or convert it to run on mobile devices using TensorFlow Lite ([HDF5](https://www.tensorflow.org/lite/convert/python_apiexporting_a_tfkeras_file_), [Saved Model](https://www.tensorflow.org/lite/convert/python_apiexporting_a_savedmodel_)) Save as an HDF5 fileKeras provides a basic save format based on the [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format) standard. For our purposes, the saved model can be treated as a single binary *blob*.
###Code
model = create_model()
# Use a keras optimizer so that the optimizer can be restored from the HDF5 file
model.compile(optimizer=keras.optimizers.Adam(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5)
# Save the entire model to a single HDF5 file
model.save('my_model.h5')
###Output
_____no_output_____
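###Markdown
As mentioned above, a whole-model HDF5 file can also be converted for mobile deployment with TensorFlow Lite. The following is an assumption-laden sketch using the TF 1.x converter API and is not required by the rest of the tutorial:
###Code
# Hypothetical conversion of the saved HDF5 model to a TensorFlow Lite flatbuffer
converter = tf.lite.TFLiteConverter.from_keras_model_file('my_model.h5')
tflite_model = converter.convert()
with open('my_model.tflite', 'wb') as f:
    f.write(tflite_model)
###Output
_____no_output_____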
###Markdown
Now recreate the model from that file:
###Code
# Recreate the exact same model, including weights and optimizer:
new_model = keras.models.load_model('my_model.h5')
new_model.summary()
###Output
_____no_output_____
###Markdown
Check its accuracy:
###Code
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
This technique saves everything:* The weight values* The model's configuration (its architecture)* The optimizer configurationKeras saves models by inspecting their architecture. Currently it is not able to save TensorFlow optimizers from `tf.train`. When using those, you will need to compile the model again after loading, and the state of the optimizer is lost. Save as a `saved_model` Note: this method of saving `tf.keras` models is experimental and may change in future versions. Build a new model:
###Code
model = create_model()
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
Create a `saved_model`:
###Code
saved_model_path = tf.contrib.saved_model.save_keras_model(model, "./saved_models")
###Output
_____no_output_____
###Markdown
Saved models are placed in a directory whose name includes the current date and time:
###Code
!ls saved_models/
###Output
_____no_output_____
###Markdown
Load a fresh Keras model from the saved one:
###Code
new_model = tf.contrib.saved_model.load_keras_model(saved_model_path)
new_model
###Output
_____no_output_____
###Markdown
Run the loaded model:
###Code
# The optimizer state was not restored, so compile the model with a fresh one
new_model.compile(optimizer=tf.train.AdamOptimizer(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print("Loaded model, accuracy: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Save and restore models Run in Google Colab View source on GitHub Note: this section was translated by the Russian-speaking TensorFlow community on a volunteer basis. Since the translation is not official, we cannot guarantee that it is 100% accurate and matches the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions for improving the translation, we would be glad to see a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. If you want to help make the TensorFlow documentation better (by translating it or reviewing someone else's translation), write to us at the [[email protected] list](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ru). Model progress can be saved during and after training: training can be resumed from where it left off, which helps avoid long, uninterrupted training sessions. Saving a model also means you can share it with others so they can reproduce its results. Besides the model itself and the techniques used, most machine learning practitioners also publish: * the code that was used to train the model * the trained weights, or parameters, of the model Sharing this data helps others understand how the model works and lets them check how it behaves on new data. Caution: be careful with code you do not trust. Be sure to read [Using TensorFlow securely](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md) Options There are different ways to save TensorFlow models, depending on the API you used in your model. This guide uses [tf.keras](https://www.tensorflow.org/r1/guide/keras), a high-level API for building and training models in TensorFlow. For other approaches, see the TensorFlow guide on [saving and restoring models](https://www.tensorflow.org/r1/guide/saved_model) or [saving in eager execution](https://www.tensorflow.org/r1/guide/eagerobject-based_saving). Setup Install and import dependencies Install and import TensorFlow and its dependencies:
###Code
!pip install h5py pyyaml
###Output
_____no_output_____
###Markdown
Get the dataset We will use the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) to train the model and demonstrate saving weights. To speed things up, only the first 1000 examples are used:
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
from tensorflow import keras
tf.__version__
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
###Output
_____no_output_____
###Markdown
Define a model Let's build a simple model that we will use to demonstrate saving and loading weights:
###Code
# Returns a short sequential model
def create_model():
model = tf.keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(784,)),
keras.layers.Dropout(0.2),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer=tf.train.AdamOptimizer(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
return model
# Create a basic model instance
model = create_model()
model.summary()
###Output
_____no_output_____
###Markdown
Save checkpoints during training The main use case is to save checkpoints automatically both *during* and at *the end of* training. That way you can reuse the model without retraining it, or pick up training where it was interrupted. This is handled by the `tf.keras.callbacks.ModelCheckpoint` callback, which takes several arguments to configure checkpointing. Checkpoint callback usage Train the model and pass it the `ModelCheckpoint` callback:
###Code
checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create a checkpoint callback
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=True,
verbose=1)
model = create_model()
model.fit(train_images, train_labels, epochs = 10,
validation_data = (test_images,test_labels),
          callbacks = [cp_callback])  # pass the callback to training
###Output
_____no_output_____
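###Markdown
The same callback accepts further arguments. For example, it can be told to overwrite the checkpoint only when the monitored validation metric improves, so the file always holds the best weights seen so far. A small sketch (the `training_best/` path is only an illustration):
###Code
best_path = "training_best/cp-best.ckpt"
# Overwrite the checkpoint only when validation loss improves
best_callback = tf.keras.callbacks.ModelCheckpoint(best_path,
                                                   monitor='val_loss',
                                                   save_best_only=True,
                                                   save_weights_only=True,
                                                   verbose=1)
model = create_model()
model.fit(train_images, train_labels, epochs=10,
          validation_data=(test_images, test_labels),
          callbacks=[best_callback])
###Output
_____no_output_____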
###Markdown
This creates a single collection of TensorFlow checkpoint files that are updated at the end of each epoch:
###Code
!ls {checkpoint_dir}
###Output
_____no_output_____
###Markdown
Now create a new, untrained model. When restoring a model from weights only, the new model must have the same architecture as the original. Since the architecture is identical, we can share weights between different *instances* of the model. We also evaluate the untrained model on the test set; since it was never trained, it only performs at chance level (accuracy around 10%):
###Code
model = create_model()
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Untrained model, accuracy: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
Then load the weights from the checkpoint and evaluate again:
###Code
model.load_weights(checkpoint_path)
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
Checkpoint callback options The callback provides several options to give the resulting checkpoints unique names and to adjust the checkpointing frequency. Train a new model and save uniquely named checkpoints once every 5 epochs:
###Code
# Include the epoch number in the file name (formatted with `str.format`)
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(
checkpoint_path, verbose=1, save_weights_only=True,
    # Save weights every 5 epochs
period=5)
model = create_model()
model.fit(train_images, train_labels,
epochs = 50, callbacks = [cp_callback],
validation_data = (test_images,test_labels),
verbose=0)
###Output
_____no_output_____
###Markdown
Now look at the resulting checkpoints and pick the latest one:
###Code
! ls {checkpoint_dir}
latest = tf.train.latest_checkpoint(checkpoint_dir)
latest
###Output
_____no_output_____
###Markdown
Remember: by default TensorFlow keeps only the 5 most recent checkpoints. To test, reset the model and load the latest checkpoint:
###Code
model = create_model()
model.load_weights(latest)
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
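###Markdown
If more than the 5 most recent checkpoints should be kept, object-based saving gives explicit control over retention. A sketch assuming `tf.train.Checkpoint` and `tf.train.CheckpointManager` are available in your TensorFlow version (the `training_3` directory name is only an illustration):
###Code
# Wrap the model in an object-based checkpoint and keep up to 10 saves
ckpt = tf.train.Checkpoint(model=model)
manager = tf.train.CheckpointManager(ckpt, directory="training_3", max_to_keep=10)
save_path = manager.save()
print("Saved object-based checkpoint to:", save_path)
###Output
_____no_output_____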
###Markdown
What are these files? The code above stores the weights in a collection of [checkpoint](https://www.tensorflow.org/r1/guide/saved_modelsave_and_restore_variables)-formatted files that contain only the trained weights in a binary format. Checkpoints contain: * one or more shards that hold your model's weights * an index file that records which weights are stored in which shard If you are training a model on a single machine, you will have a single shard with the suffix `.data-00000-of-00001` Manually save weights Above we saw how to load weights into a model. Saving them manually is just as simple: use the `Model.save_weights` method:
###Code
# Save the weights
model.save_weights('./checkpoints/my_checkpoint')
# Restore the weights
model = create_model()
model.load_weights('./checkpoints/my_checkpoint')
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
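###Markdown
To see which weights a checkpoint actually contains, the index can be inspected with `tf.train.list_variables`; the exact variable names depend on how the layers were constructed:
###Code
# List the (variable name, shape) pairs recorded in the checkpoint index
for name, shape in tf.train.list_variables('./checkpoints/my_checkpoint'):
    print(name, shape)
###Output
_____no_output_____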
###Markdown
Save the entire model You can also save the whole model to a single file that contains the weight values, the model configuration and even the optimizer configuration (depending on the options you choose). This lets you restore a model and resume training later, from exactly the point where you stopped, without touching the original code. Saving a fully functional model is very useful. For example, you can load it in TensorFlow.js ([HDF5](https://js.tensorflow.org/r1/tutorials/import-keras.html), [Saved Model](https://js.tensorflow.org/r1/tutorials/import-saved-model.html)) and then train and run it in web browsers, or convert it to run on mobile devices using TensorFlow Lite ([HDF5](https://www.tensorflow.org/lite/convert/python_apiexporting_a_tfkeras_file_), [Saved Model](https://www.tensorflow.org/lite/convert/python_apiexporting_a_savedmodel_)) Save as an HDF5 file Keras provides a basic save format using the [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format) standard. For our purposes, the saved model can be treated as a single binary *blob*.
###Code
model = create_model()
# Use a Keras optimizer so the optimizer state can be restored from the HDF5 file
model.compile(optimizer=keras.optimizers.Adam(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5)
# Save the entire model to a single HDF5 file
model.save('my_model.h5')
###Output
_____no_output_____
###Markdown
Now recreate the model from that file:
###Code
# Recreate the exact same model, including its weights and the optimizer:
new_model = keras.models.load_model('my_model.h5')
new_model.summary()
###Output
_____no_output_____
###Markdown
Check its accuracy:
###Code
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
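###Markdown
HDF5 bundles everything into one file, but the pieces can also be exported separately. For instance, the architecture alone can be serialized to JSON and the weights attached afterwards; a minimal sketch (note that no optimizer state is preserved this way):
###Code
# Serialize only the architecture, without weights or optimizer
json_config = new_model.to_json()
# Rebuild an uncompiled model from the JSON description and copy the weights into it
rebuilt_model = keras.models.model_from_json(json_config)
rebuilt_model.set_weights(new_model.get_weights())
rebuilt_model.summary()
###Output
_____no_output_____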
###Markdown
This technique saves everything: * the weight values * the model's configuration (its architecture) * the optimizer configuration Keras saves models by inspecting their architecture. Currently it is not able to save TensorFlow optimizers from `tf.train`; when using them you need to re-compile the model after loading so the optimizer is set up again. Save as a `saved_model` Note: this method of saving `tf.keras` models is experimental and may change in future versions. Build a new model:
###Code
model = create_model()
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
Create a `saved_model`:
###Code
saved_model_path = tf.contrib.saved_model.save_keras_model(model, "./saved_models")
###Output
_____no_output_____
###Markdown
Saved models are placed in a directory whose name includes the current date and time:
###Code
!ls saved_models/
###Output
_____no_output_____
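###Markdown
Because each export gets its own time-stamped sub-directory, the newest one can also be located programmatically instead of relying on the path returned above; a small sketch using the standard library, assuming the sub-directory names sort by creation time:
###Code
import glob
# The time-stamped directory names sort lexicographically, so the last one is the newest
export_dirs = sorted(glob.glob("./saved_models/*"))
print(export_dirs[-1])
###Output
_____no_output_____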
###Markdown
Load a fresh Keras model from the saved one:
###Code
new_model = tf.contrib.saved_model.load_keras_model(saved_model_path)
new_model
###Output
_____no_output_____
###Markdown
Run the loaded model:
###Code
# The optimizer state was not restored, so compile the model with a fresh one
new_model.compile(optimizer=tf.train.AdamOptimizer(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print("Loaded model, accuracy: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
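###Markdown
Beyond reporting accuracy, the restored model can of course be used for inference directly; a small sketch that classifies a few test digits:
###Code
import numpy as np
# Class probabilities for the first three test images
predictions = new_model.predict(test_images[:3])
print("Predicted digits:", np.argmax(predictions, axis=1))
print("True digits:     ", test_labels[:3])
###Output
_____no_output_____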
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Save and load models Run in Google Colab View source on GitHub Note: this section was translated by the volunteer Russian-speaking TensorFlow community. Because the translation is not official, we cannot guarantee that it is 100% accurate and matches the [official documentation in English](https://www.tensorflow.org/?hl=en). If you have a suggestion for improving this translation, we would be happy to see a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. If you would like to help improve the TensorFlow documentation (by translating or reviewing someone else's translation), write to us at the [[email protected] list](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ru). Model progress can be saved during and after training: training can be resumed from where it left off, which helps avoid long uninterrupted training sessions. Saving a model also lets you share it so others can reproduce your results. When publishing research models and techniques, most machine learning practitioners also share: * the code used to create the model * the trained weights, or parameters, of the model. Sharing this data helps others understand how the model works and try it on new data. Caution: be careful with code you do not trust; be sure to read [Using TensorFlow Securely](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md). Options There are different ways to save TensorFlow models, depending on the API you used. This guide uses [tf.keras](https://www.tensorflow.org/r1/guide/keras), a high-level API for building and training models in TensorFlow. For other approaches, see the TensorFlow guide [Save and Restore](https://www.tensorflow.org/r1/guide/saved_model) or [Saving in eager](https://www.tensorflow.org/r1/guide/eagerobject-based_saving). Setup Installs and imports Install and import TensorFlow and its dependencies:
###Code
!pip install h5py pyyaml
###Output
_____no_output_____
###Markdown
Get an example dataset We will use the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) to train our model and demonstrate how to save weights. To speed things up, only the first 1000 examples are used:
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
from tensorflow import keras
tf.__version__
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
###Output
_____no_output_____
###Markdown
Define a model Let's build a simple model that we will use to demonstrate saving and loading weights:
###Code
# Returns a short sequential model
def create_model():
model = tf.keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(784,)),
keras.layers.Dropout(0.2),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer=tf.train.AdamOptimizer(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
return model
# Create the model
model = create_model()
model.summary()
###Output
_____no_output_____
###Markdown
Save checkpoints during training The main use case is to automatically save the model both *during* and *at the end of* training, so you can reuse it without retraining, or pick up where training was interrupted. This is done with the `tf.keras.callbacks.ModelCheckpoint` callback, which accepts several configuration arguments. Checkpoint callback usage Train the model and pass it the `ModelCheckpoint` callback:
###Code
checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create a checkpoint callback
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=True,
verbose=1)
model = create_model()
model.fit(train_images, train_labels, epochs = 10,
validation_data = (test_images,test_labels),
          callbacks = [cp_callback])  # pass the callback to training
###Output
_____no_output_____
###Markdown
This creates a single collection of TensorFlow checkpoint files that are updated at the end of each epoch:
###Code
!ls {checkpoint_dir}
###Output
_____no_output_____
###Markdown
Now create a new, untrained model. When restoring a model from weights only, the new model must have exactly the same architecture as the original; since it does, we can share weights across different *instances* of the model. We also evaluate the new model on the test set: an untrained model performs at chance level (accuracy around 10%):
###Code
model = create_model()
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Необученная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
Then load the weights from the checkpoint and evaluate again:
###Code
model.load_weights(checkpoint_path)
loss,acc = model.evaluate(test_images, test_labels, verbose=2)
print("Восстановленная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
Checkpoint callback options The callback provides options to give checkpoints unique names and to adjust how often they are written. Train a new model and save uniquely named checkpoints every 5 epochs:
###Code
# Include the epoch number in the file name (formatted via `str.format`)
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(
checkpoint_path, verbose=1, save_weights_only=True,
    # Save weights every 5 epochs
period=5)
model = create_model()
model.fit(train_images, train_labels,
epochs = 50, callbacks = [cp_callback],
validation_data = (test_images,test_labels),
verbose=0)
###Output
_____no_output_____
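###Markdown
Instead of saving on a fixed schedule, the same callback can keep only the best-performing weights. A minimal sketch using the `save_best_only` and `monitor` arguments of `tf.keras.callbacks.ModelCheckpoint`; the `training_3/` path is just an example name:
###Code
# Keep a single checkpoint file, overwritten only when validation loss improves.
best_path = "training_3/best.ckpt"
best_callback = tf.keras.callbacks.ModelCheckpoint(
    best_path,
    monitor='val_loss',        # metric to watch
    save_best_only=True,       # overwrite only on improvement
    save_weights_only=True,
    verbose=1)

model = create_model()
model.fit(train_images, train_labels,
          epochs=10,
          validation_data=(test_images, test_labels),
          callbacks=[best_callback],
          verbose=0)
###Output
_____no_output_____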
###Markdown
Now look at the resulting checkpoints and pick the latest one:
###Code
! ls {checkpoint_dir}
latest = tf.train.latest_checkpoint(checkpoint_dir)
latest
###Output
_____no_output_____
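###Markdown
If you are curious what a checkpoint actually stores, TensorFlow can list the variable names and shapes it contains. A short sketch applying `tf.train.list_variables` to the `latest` path from the previous cell (shown here purely for inspection):
###Code
# Print every variable stored in the latest checkpoint along with its shape.
for name, shape in tf.train.list_variables(latest):
    print(name, shape)
###Output
_____no_output_____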
###Markdown
Note: by default TensorFlow keeps only the 5 most recent checkpoints. To test, rebuild a fresh model and load the latest checkpoint:
###Code
model = create_model()
model.load_weights(latest)
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Восстановленная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
What are these files? The code above stores the weights in a collection of [checkpoint](https://www.tensorflow.org/r1/guide/saved_modelsave_and_restore_variables)-formatted files that contain only the trained weights in binary form. A checkpoint consists of: * one or more shards that hold the model's weights * an index file that records which weights live in which shard. If you train on a single machine you will have one shard, ending in `.data-00000-of-00001`. Manually save weights Above you saw how to load weights into a model; saving them manually is just as simple with the `Model.save_weights` method:
###Code
# Save the weights
model.save_weights('./checkpoints/my_checkpoint')
# Restore the weights
model = create_model()
model.load_weights('./checkpoints/my_checkpoint')
loss,acc = model.evaluate(test_images, test_labels, verbose=2)
print("Восстановленная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
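###Markdown
Weights and architecture can also be handled separately: the architecture alone can be exported as JSON and a fresh model rebuilt from it (the rebuilt model comes back uninitialized, so weights still need to be loaded). A minimal sketch using the standard Keras `to_json` / `model_from_json` pair:
###Code
# Serialize only the architecture (no weights, no optimizer) ...
json_config = model.to_json()

# ... then rebuild an uninitialized model of the same structure
# and load the manually saved weights into it.
rebuilt = tf.keras.models.model_from_json(json_config)
rebuilt.load_weights('./checkpoints/my_checkpoint')
###Output
_____no_output_____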
###Markdown
Save the entire model The whole model can be saved to a single file containing the weights, the model configuration, and even the optimizer configuration (depending on the setup). This lets you checkpoint a model and resume training later, exactly where you left off, without touching the original code. Saving a fully functional model is useful in itself: you can load it in TensorFlow.js ([HDF5](https://js.tensorflow.org/r1/tutorials/import-keras.html), [Saved Model](https://js.tensorflow.org/r1/tutorials/import-saved-model.html)) and train and run it in a web browser, or convert it for mobile devices with TensorFlow Lite ([HDF5](https://www.tensorflow.org/lite/convert/python_apiexporting_a_tfkeras_file_), [Saved Model](https://www.tensorflow.org/lite/convert/python_apiexporting_a_savedmodel_)) Save as an HDF5 file Keras provides a basic save format based on the [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format) standard; for our purposes the saved model is a single binary *blob*.
###Code
model = create_model()
# Use a keras.optimizers optimizer so the optimizer state can be restored from the HDF5 file
model.compile(optimizer=keras.optimizers.Adam(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5)
# Save the entire model to a single HDF5 file
model.save('my_model.h5')
###Output
_____no_output_____
###Markdown
Now recreate the model from that file:
###Code
# Recreate the exact same model, including weights and optimizer:
new_model = keras.models.load_model('my_model.h5')
new_model.summary()
###Output
_____no_output_____
###Markdown
Check its accuracy:
###Code
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print("Восстановленная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
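###Markdown
The cell above on saving the entire model mentions TensorFlow Lite as one consumer of an HDF5 file. A hedged sketch of that conversion path using the TF 1.x converter entry point `tf.lite.TFLiteConverter.from_keras_model_file`; treat it as illustrative, since the exact converter API depends on your TensorFlow version, and `my_model.tflite` is just an example output name:
###Code
# Convert the saved HDF5 model to a TensorFlow Lite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model_file('my_model.h5')
tflite_model = converter.convert()

with open('my_model.tflite', 'wb') as f:
    f.write(tflite_model)
###Output
_____no_output_____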
###Markdown
This technique saves everything: * the weight values * the model's configuration (its architecture) * the optimizer configuration. Keras saves models by inspecting their architecture; it currently cannot save TensorFlow optimizers from `tf.train`. If you use one of those, re-compile the model after loading to get a working optimizer back. Save as a `saved_model` Caution: this method of saving `tf.keras` models is experimental and may change in future versions. Build a new model:
###Code
model = create_model()
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
Create a `saved_model`:
###Code
saved_model_path = tf.contrib.saved_model.save_keras_model(model, "./saved_models")
###Output
_____no_output_____
###Markdown
The exported models are placed in a directory whose name is stamped with the current date and time:
###Code
!ls saved_models/
###Output
_____no_output_____
###Markdown
Load a fresh Keras model from the saved model:
###Code
new_model = tf.contrib.saved_model.load_keras_model(saved_model_path)
new_model
###Output
_____no_output_____
###Markdown
Run the loaded model:
###Code
# The optimizer was not restored, so we compile with a new one
new_model.compile(optimizer=tf.train.AdamOptimizer(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print("Загруженная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Save and load models Run in Google Colab View source on GitHub Note: this section was translated by the volunteer Russian-speaking TensorFlow community. Because the translation is not official, we cannot guarantee that it is 100% accurate and matches the [official documentation in English](https://www.tensorflow.org/?hl=en). If you have a suggestion for improving this translation, we would be happy to see a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. If you would like to help improve the TensorFlow documentation (by translating or reviewing someone else's translation), write to us at the [[email protected] list](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ru). Model progress can be saved during and after training: training can be resumed from where it left off, which helps avoid long uninterrupted training sessions. Saving a model also lets you share it so others can reproduce your results. When publishing research models and techniques, most machine learning practitioners also share: * the code used to create the model * the trained weights, or parameters, of the model. Sharing this data helps others understand how the model works and try it on new data. Caution: be careful with code you do not trust; be sure to read [Using TensorFlow Securely](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md). Options There are different ways to save TensorFlow models, depending on the API you used. This guide uses [tf.keras](https://www.tensorflow.org/r1/guide/keras), a high-level API for building and training models in TensorFlow. For other approaches, see the TensorFlow guide [Save and Restore](https://www.tensorflow.org/r1/guide/saved_model) or [Saving in eager](https://www.tensorflow.org/r1/guide/eagerobject-based_saving). Setup Installs and imports Install and import TensorFlow and its dependencies:
###Code
!pip install h5py pyyaml
###Output
_____no_output_____
###Markdown
Get an example dataset We will use the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) to train our model and demonstrate how to save weights. To speed things up, only the first 1000 examples are used:
###Code
import os
import tensorflow.compat.v1 as tf
from tensorflow import keras
tf.__version__
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
###Output
_____no_output_____
###Markdown
Define a model Let's build a simple model that we will use to demonstrate saving and loading weights:
###Code
# Returns a short sequential model
def create_model():
model = tf.keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(784,)),
keras.layers.Dropout(0.2),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer=tf.train.AdamOptimizer(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
return model
# Create the model
model = create_model()
model.summary()
###Output
_____no_output_____
###Markdown
Save checkpoints during training The main use case is to automatically save the model both *during* and *at the end of* training, so you can reuse it without retraining, or pick up where training was interrupted. This is done with the `tf.keras.callbacks.ModelCheckpoint` callback, which accepts several configuration arguments. Checkpoint callback usage Train the model and pass it the `ModelCheckpoint` callback:
###Code
checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create a checkpoint callback
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=True,
verbose=1)
model = create_model()
model.fit(train_images, train_labels, epochs = 10,
validation_data = (test_images,test_labels),
          callbacks = [cp_callback])  # pass the callback to training
###Output
_____no_output_____
###Markdown
This creates a single collection of TensorFlow checkpoint files that are updated at the end of each epoch:
###Code
!ls {checkpoint_dir}
###Output
_____no_output_____
###Markdown
Now create a new, untrained model. When restoring a model from weights only, the new model must have exactly the same architecture as the original; since it does, we can share weights across different *instances* of the model. We also evaluate the new model on the test set: an untrained model performs at chance level (accuracy around 10%):
###Code
model = create_model()
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Необученная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
Then load the weights from the checkpoint and evaluate again:
###Code
model.load_weights(checkpoint_path)
loss,acc = model.evaluate(test_images, test_labels, verbose=2)
print("Восстановленная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
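###Markdown
Restored weights can also be used to continue training rather than just to evaluate. A minimal sketch that resumes for five more epochs using the `initial_epoch` argument of `fit`, so epoch numbering carries on from where the first run stopped; only the weights are restored here, not the optimizer state:
###Code
# Continue training the restored model for 5 more epochs (epochs 11-15).
model.fit(train_images, train_labels,
          epochs=15,
          initial_epoch=10,
          validation_data=(test_images, test_labels),
          callbacks=[cp_callback])
###Output
_____no_output_____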
###Markdown
Checkpoint callback options The callback provides options to give checkpoints unique names and to adjust how often they are written. Train a new model and save uniquely named checkpoints every 5 epochs:
###Code
# Include the epoch number in the file name (formatted via `str.format`)
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(
checkpoint_path, verbose=1, save_weights_only=True,
    # Save weights every 5 epochs
period=5)
model = create_model()
model.fit(train_images, train_labels,
epochs = 50, callbacks = [cp_callback],
validation_data = (test_images,test_labels),
verbose=0)
###Output
_____no_output_____
###Markdown
Now look at the resulting checkpoints and pick the latest one:
###Code
! ls {checkpoint_dir}
latest = tf.train.latest_checkpoint(checkpoint_dir)
latest
###Output
_____no_output_____
###Markdown
Note: by default TensorFlow keeps only the 5 most recent checkpoints. To test, rebuild a fresh model and load the latest checkpoint:
###Code
model = create_model()
model.load_weights(latest)
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Восстановленная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
What are these files? The code above stores the weights in a collection of [checkpoint](https://www.tensorflow.org/r1/guide/saved_modelsave_and_restore_variables)-formatted files that contain only the trained weights in binary form. A checkpoint consists of: * one or more shards that hold the model's weights * an index file that records which weights live in which shard. If you train on a single machine you will have one shard, ending in `.data-00000-of-00001`. Manually save weights Above you saw how to load weights into a model; saving them manually is just as simple with the `Model.save_weights` method:
###Code
# Save the weights
model.save_weights('./checkpoints/my_checkpoint')
# Restore the weights
model = create_model()
model.load_weights('./checkpoints/my_checkpoint')
loss,acc = model.evaluate(test_images, test_labels, verbose=2)
print("Восстановленная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
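###Markdown
Because both models come from `create_model()`, their layer shapes match, so weights can also be copied directly in memory with `get_weights` / `set_weights`, with no files involved. A small sketch:
###Code
# Copy weights from the restored model into another instance in memory.
weights = model.get_weights()          # list of NumPy arrays

twin = create_model()
twin.set_weights(weights)

loss, acc = twin.evaluate(test_images, test_labels, verbose=2)
print("In-memory copy, accuracy: {:5.2f}%".format(100*acc))
###Output
_____no_output_____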
###Markdown
Save the entire model The whole model can be saved to a single file containing the weights, the model configuration, and even the optimizer configuration (depending on the setup). This lets you checkpoint a model and resume training later, exactly where you left off, without touching the original code. Saving a fully functional model is useful in itself: you can load it in TensorFlow.js ([HDF5](https://js.tensorflow.org/r1/tutorials/import-keras.html), [Saved Model](https://js.tensorflow.org/r1/tutorials/import-saved-model.html)) and train and run it in a web browser, or convert it for mobile devices with TensorFlow Lite ([HDF5](https://www.tensorflow.org/lite/convert/python_apiexporting_a_tfkeras_file_), [Saved Model](https://www.tensorflow.org/lite/convert/python_apiexporting_a_savedmodel_)) Save as an HDF5 file Keras provides a basic save format based on the [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format) standard; for our purposes the saved model is a single binary *blob*.
###Code
model = create_model()
# Use a keras.optimizers optimizer so the optimizer state can be restored from the HDF5 file
model.compile(optimizer=keras.optimizers.Adam(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5)
# Save the entire model to a single HDF5 file
model.save('my_model.h5')
###Output
_____no_output_____
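###Markdown
Since `h5py` was installed at the top of the notebook, you can peek inside the HDF5 file to see how Keras lays it out. A minimal sketch that only lists the top-level groups and attributes of the file just written:
###Code
import h5py

# Open the saved model read-only and list its top-level structure.
with h5py.File('my_model.h5', 'r') as f:
    print("groups:", list(f.keys()))
    print("attrs: ", list(f.attrs.keys()))
###Output
_____no_output_____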
###Markdown
Now recreate the model from that file:
###Code
# Recreate the exact same model, including weights and optimizer:
new_model = keras.models.load_model('my_model.h5')
new_model.summary()
###Output
_____no_output_____
###Markdown
Check its accuracy:
###Code
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print("Восстановленная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
This technique saves everything: * the weight values * the model's configuration (its architecture) * the optimizer configuration. Keras saves models by inspecting their architecture; it currently cannot save TensorFlow optimizers from `tf.train`. If you use one of those, re-compile the model after loading to get a working optimizer back. Save as a `saved_model` Caution: this method of saving `tf.keras` models is experimental and may change in future versions. Build a new model:
###Code
model = create_model()
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
Create a `saved_model`:
###Code
saved_model_path = tf.contrib.saved_model.save_keras_model(model, "./saved_models")
###Output
_____no_output_____
###Markdown
The exported models are placed in a directory whose name is stamped with the current date and time:
###Code
!ls saved_models/
###Output
_____no_output_____
###Markdown
Load a fresh Keras model from the saved model:
###Code
new_model = tf.contrib.saved_model.load_keras_model(saved_model_path)
new_model
###Output
_____no_output_____
###Markdown
Run the loaded model:
###Code
# The optimizer was not restored, so we compile with a new one
new_model.compile(optimizer=tf.train.AdamOptimizer(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print("Загруженная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Save and load models Run in Google Colab View source on GitHub Note: this section was translated by the volunteer Russian-speaking TensorFlow community. Because the translation is not official, we cannot guarantee that it is 100% accurate and matches the [official documentation in English](https://www.tensorflow.org/?hl=en). If you have a suggestion for improving this translation, we would be happy to see a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. If you would like to help improve the TensorFlow documentation (by translating or reviewing someone else's translation), write to us at the [[email protected] list](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ru). Model progress can be saved during and after training: training can be resumed from where it left off, which helps avoid long uninterrupted training sessions. Saving a model also lets you share it so others can reproduce your results. When publishing research models and techniques, most machine learning practitioners also share: * the code used to create the model * the trained weights, or parameters, of the model. Sharing this data helps others understand how the model works and try it on new data. Caution: be careful with code you do not trust; be sure to read [Using TensorFlow Securely](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md). Options There are different ways to save TensorFlow models, depending on the API you used. This guide uses [tf.keras](https://www.tensorflow.org/r1/guide/keras), a high-level API for building and training models in TensorFlow. For other approaches, see the TensorFlow guide [Save and Restore](https://www.tensorflow.org/r1/guide/saved_model) or [Saving in eager](https://www.tensorflow.org/r1/guide/eagerobject-based_saving). Setup Installs and imports Install and import TensorFlow and its dependencies:
###Code
!pip install h5py pyyaml
###Output
_____no_output_____
###Markdown
Get an example dataset We will use the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) to train our model and demonstrate how to save weights. To speed things up, only the first 1000 examples are used:
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
import tensorflow as tf
from tensorflow import keras
tf.__version__
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
###Output
_____no_output_____
###Markdown
Build the model. Let's build a simple model on which we will demonstrate how to save and load model weights:
###Code
# Returns a short sequential model
def create_model():
model = tf.keras.models.Sequential([
keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(784,)),
keras.layers.Dropout(0.2),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer=tf.train.AdamOptimizer(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
return model
# Create the model
model = create_model()
model.summary()
###Output
_____no_output_____
###Markdown
Saving checkpoints. The main goal is to save the model automatically both *during* and *at the end of* training. This way you can reuse the model without having to retrain it, or simply pick up training where it was paused. This is what the `tf.keras.callbacks.ModelCheckpoint` callback does; it can also be configured with several arguments. Using the callback: let's train our model and pass it the `ModelCheckpoint` callback:
###Code
checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create checkpoints with a callback function
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=True,
verbose=1)
model = create_model()
model.fit(train_images, train_labels, epochs = 10,
validation_data = (test_images,test_labels),
callbacks = [cp_callback]) # передаем callback обучению
###Output
_____no_output_____
###Markdown
This creates a single collection of TensorFlow checkpoint files that are updated at the end of each epoch:
###Code
!ls {checkpoint_dir}
###Output
_____no_output_____
###Markdown
Now create a new, untrained model. When restoring a model from weights only, the new model must have exactly the same architecture as the original. Since the architecture is identical, we can share weights across different *instances* of the model. We will also evaluate the new model's accuracy on the test data. An untrained model will only occasionally guess the correct class (accuracy around 10%):
###Code
model = create_model()
loss, acc = model.evaluate(test_images, test_labels)
print("Необученная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
Now load the weights from the checkpoint and evaluate again:
###Code
model.load_weights(checkpoint_path)
loss,acc = model.evaluate(test_images, test_labels)
print("Восстановленная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
Checkpoint callback options. The callback has several arguments that give checkpoints unique names and adjust the saving frequency. Let's train a new model and set an option to save checkpoints every 5 epochs:
###Code
# Include the epoch number in the file name (converted to a string via `str.format`)
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(
checkpoint_path, verbose=1, save_weights_only=True,
    # Save the weights every 5 epochs
period=5)
model = create_model()
model.fit(train_images, train_labels,
epochs = 50, callbacks = [cp_callback],
validation_data = (test_images,test_labels),
verbose=0)
###Output
_____no_output_____
###Markdown
Now look at the resulting checkpoints and pick the latest one:
###Code
! ls {checkpoint_dir}
latest = tf.train.latest_checkpoint(checkpoint_dir)
latest
###Output
_____no_output_____
###Markdown
Remember: by default TensorFlow keeps only the 5 most recent checkpoints. To verify, let's reset the model to its defaults and load the latest checkpoint:
###Code
model = create_model()
model.load_weights(latest)
loss, acc = model.evaluate(test_images, test_labels)
print("Восстановленная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
What do these files look like? The code above saves the model weights as a collection of [checkpoint](https://www.tensorflow.org/r1/guide/saved_modelsave_and_restore_variables)-formatted files that contain only the trained weights in binary form. They include: * one or more shards that store your model's weights * an index file that records which weights live in which shard. If you train a model on a single machine you will have just one shard, ending in `.data-00000-of-00001`. Saving weights manually. Above we saw how to load weights into a model. Saving weights manually is just as easy - simply use the `Model.save_weights` method:
###Code
# Save the weights
model.save_weights('./checkpoints/my_checkpoint')
# Restore the weights
model = create_model()
model.load_weights('./checkpoints/my_checkpoint')
loss,acc = model.evaluate(test_images, test_labels)
print("Восстановленная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
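As a small aside (not part of the original tutorial), you can peek inside a checkpoint to see which variables it stores; the sketch below assumes the `./checkpoints/my_checkpoint` prefix written by the cell above.

```python
# List (variable name, shape) pairs stored in the checkpoint shard files
for name, shape in tf.train.list_variables('./checkpoints/my_checkpoint'):
    print(name, shape)
```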
###Markdown
Saving the entire model. You can also save the whole model to a single file that contains all the weights, the model configuration and even the optimizer configuration (depending on the chosen options). This lets you restore the model and resume training later, exactly from where you stopped, without touching the original code. Saving a fully working model is very useful. For example, you can later restore it in TensorFlow.js ([HDF5](https://js.tensorflow.org/r1/tutorials/import-keras.html), [Saved Models](https://js.tensorflow.org/r1/tutorials/import-saved-model.html)) and then train and run it in web browsers, or convert it to a format for mobile devices using TensorFlow Lite ([HDF5](https://www.tensorflow.org/lite/convert/python_apiexporting_a_tfkeras_file_), [Saved Models](https://www.tensorflow.org/lite/convert/python_apiexporting_a_savedmodel_)). Saving in HDF5 format. Keras has a built-in format for saving models based on the [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format) standard. For our purposes, the saved model is treated as a single binary *blob*.
###Code
model = create_model()
# Use a keras.optimizers optimizer so that it can be restored from the HDF5 file
model.compile(optimizer=keras.optimizers.Adam(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5)
# Save the entire model to a single HDF5 file
model.save('my_model.h5')
###Output
_____no_output_____
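As another small addition (not in the original), you can confirm that the whole model now lives in a single file on disk:

```python
import os
# Size of the saved HDF5 file in megabytes
print(os.path.getsize('my_model.h5') / 1e6, 'MB')
```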
###Markdown
Now recreate the model from this file:
###Code
# Recreate the exact same model, including weights and optimizer:
new_model = keras.models.load_model('my_model.h5')
new_model.summary()
###Output
_____no_output_____
###Markdown
Check its accuracy:
###Code
loss, acc = new_model.evaluate(test_images, test_labels)
print("Восстановленная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____
###Markdown
This technique saves everything: * the model weights * the configuration (its architecture) * the optimizer parameters. Keras saves a model by inspecting its architecture. Currently it cannot save TensorFlow optimizers from `tf.train`; if you use them, you need to compile the model again after loading - that is how you get the optimizer parameters back. Saving as a `saved_model`. Note: this way of saving `tf.keras` models is experimental and may change in future versions. Let's build a new model:
###Code
model = create_model()
model.fit(train_images, train_labels, epochs=5)
###Output
_____no_output_____
###Markdown
Create a `saved_model`:
###Code
saved_model_path = tf.contrib.saved_model.save_keras_model(model, "./saved_models")
###Output
_____no_output_____
###Markdown
Saved models are placed into a folder whose name is stamped with the current date and time:
###Code
!ls saved_models/
###Output
_____no_output_____
###Markdown
Load a new Keras model from the saved one:
###Code
new_model = tf.contrib.saved_model.load_keras_model(saved_model_path)
new_model
###Output
_____no_output_____
###Markdown
Run the loaded model:
###Code
# The optimizer was not restored, so we specify a new one
new_model.compile(optimizer=tf.train.AdamOptimizer(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy'])
loss, acc = new_model.evaluate(test_images, test_labels)
print("Загруженная модель, точность: {:5.2f}%".format(100*acc))
###Output
_____no_output_____ |
wordle-analysis-p1.ipynb | ###Markdown
First things first. Let's import all the fun stuff that lets us do the really fun stuff.
###Code
import os
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
import matplotlib.pyplot as plt
from pylab import rcParams
INDIR = "/kaggle/input"
OUTDIR = "/kaggle/working"
DBNAME = "words.db"
rcParams['figure.figsize'] = 16,9
pd.options.plotting.backend = "matplotlib"
sns.set(style="darkgrid")
figs = {}
###Output
_____no_output_____
###Markdown
Prepping the data. We're going to load our words from the word lists that we copied from the source code. There are two lists: one appears to be the list of already-played words, the other the words that have yet to be released.
###Code
playable_words = pd.read_json("../input/wordle-word-list/played_words.json")
other_words = pd.read_json("../input/wordle-word-list/unplayed_words.json")
playable_words[1] = True
other_words[1] = False
words = pd.concat([playable_words, other_words])
words.columns = ["name", "playable"]
words.reset_index()
words.describe()
###Output
_____no_output_____
###Markdown
First Heuristic: Letter Rank. It's pretty simple: we're going to find out how often each letter is used in the entire wordle word list and rank them using a barplot. Once that is done, we'll check to see if there are any words spelled using the top 5 letters in the word list.
###Code
from itertools import chain
game1 = {}
letters = pd.Series(chain.from_iterable(words["name"]))
letter_counts = letters.value_counts()
ax = sns.barplot(
x=letter_counts.index,
y=letter_counts,
color='#69b3a2'
)
ax.set_xlabel("Letters")
ax.set_ylabel("Frequency")
ax.set_title("Letter Ranking")
figs["letter_ranking.png"] = ax.figure
###Output
_____no_output_____
###Markdown
From the above, we can see that the top 5 letters are: s, e, a, o and r. Given that information, let's try and find out if there are any words spelled using all five letters.
###Code
def contains_all(letters):
return lambda word: set(letters) <= set(word)
all_top5_letters = words["name"].apply(contains_all("seaor"))
words[all_top5_letters]
###Output
_____no_output_____
###Markdown
The result is the following three words: 1. arose 2. aeros 3. soare. First turn.
###Code
def contains_any(letters):
return lambda word: len(set(letters) & set(word)) > 0
any_top5_letter = words["name"].apply(contains_any("seaor"))
words_turn1 = words[any_top5_letter]
words_turn1.describe()
###Output
_____no_output_____
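A quick sanity check of the numbers discussed in the next paragraph (an added sketch reusing the frames defined above):

```python
total = len(words)
matched = len(words_turn1)  # words ruled out if the first guess comes back all gray
print(total, matched, "{:.1%} of the word list".format(matched / total))
```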
###Markdown
So what does that mean for us? Well, let's look at the worst possible scenario: that we get no yellow or green tiles. In this case, we've still managed to do away with a great deal of the problem space: out of a total of 12,972 words we've eliminated 12,395. Using only 1/6th of the turns given to us, we've eliminated 95% of the problem space. Let's filter for any possible words using the next 5 most common letters. Second Turn. The next five most frequent letters according to our chart are i, l, t, n and u. Same as above, we'll check the remaining words in the list and see if all five letters can be used to play a word.
###Code
remaining_words = words[~any_top5_letter]
all_next5_letters = remaining_words["name"].apply(contains_all("intlu"))
remaining_words[all_next5_letters]
###Output
_____no_output_____
###Markdown
So we have two words that use the letters ranking 6 through 10. Again, our worst case scenario is that none of these letters are in the secret word. How many possibilities have we eliminated?
###Code
any_next5_letters = remaining_words["name"].apply(contains_any("until"))
words_turn2 = remaining_words[any_next5_letters]
words_turn2.describe()
###Output
_____no_output_____
###Markdown
Two turns done, and we have eliminated 12,969 (12,395 + 574) words. By turn 3, this means that we have the following possibilities left. Even if we ignore the clues dropped by the game, in three more turns, we'll have the answer (assuming we actually know these words).
###Code
remaining_words = remaining_words[~any_next5_letters]
remaining_words
first_game = pd.DataFrame.from_dict({
"Turn 1": words_turn1["name"].count(),
"Turn 2": words_turn2["name"].count(),
"Remaining": remaining_words["name"].count()
}, orient="index", columns=["Words"])
ax = sns.barplot(
x=first_game.index,
y=first_game.Words,
color='#69b3a2'
)
#ax.set_xlabel("Letters")
ax.set_ylabel("Words")
ax.set_title("Game 1 results")
figs["first_game_result.png"] = ax.figure
first_game
###Output
_____no_output_____
###Markdown
Second Heuristic: Et tu Brute-force-us. Our first heuristic used the most common letters to find words that could match against. The thing is, a (good) heuristic gives us an answer that is good enough. So did our heuristic actually give us the word that eliminates the most words?
###Code
def count_matches(words):
return lambda word: words.apply(
contains_any(word)
    ).value_counts()[True]  # count of words sharing at least one letter with `word` (the word itself included)
words["starter_score"] = words["name"].apply(count_matches(words["name"]))
words[words["starter_score"] == words["starter_score"].max()]
###Output
_____no_output_____
###Markdown
As it turns out, there are two words in the word list that just barely outperform our starter words built from the top 5 letters (and I have no idea what they mean): 1. stoae 2. toeas. Turn 1. Since we already know how many words we rule out by playing either of the words above, we can go directly to figuring out what our options are for turn 2 in case we don't match any letters at all.
###Code
any_top5_letter = words["name"].apply(contains_any("stoae"))
remaining_words = words[~any_top5_letter].copy()
remaining_words["starter_score"] = remaining_words["name"].apply(count_matches(remaining_words["name"]))
remaining_words[remaining_words["starter_score"] == remaining_words["starter_score"].max()]
###Output
_____no_output_____
###Markdown
We get three words that match the most remaining words. In fact, they match all the remaining words except one. Turn 2. We can play one of the three words above to find out what the last word is: grrrl. I was not expecting that.
###Code
any_next5_letters = remaining_words["name"].apply(contains_any("unify"))
remaining_words = remaining_words[~any_next5_letters]
remaining_words
###Output
_____no_output_____
###Markdown
It would seem that the second method outperforms the first. We're guaranteed to figure the wordle out even if the first two turns don't reveal any green or yellow tiles.
###Code
second_game = pd.DataFrame.from_dict({
"Turn 1": 12417,
"Turn 2": 554,
"Remaining": 1
}, orient="index", columns=["Words"])
ax = sns.barplot(
x=second_game.index,
y=second_game.Words,
color='#69b3a2'
)
#ax.set_xlabel("Letters")
ax.set_ylabel("Words")
ax.set_title("Game 2 results")
figs["second_game_result.png"] = ax.figure
second_game
###Output
_____no_output_____
###Markdown
When you assume, you make ... Having done all the analysis above, can we say that the second heuristic is better than the first? 1. Is the worst case scenario really the worst case scenario? How can we find out? 2. How does discovering a yellow or green tile change the likelihood of other letters appearing in the wordle? 3. How do you even compare two heuristics or techniques? (a small sketch after the next cell takes a first stab at this one) 4. Is there a great starter word? 5. We're assuming that the player knows ALL the words in the wordle word list. Final Thoughts (Work in progress)
###Code
matches = words[words['name'].apply(contains_all('sta'))]
matches = matches[~matches['name'].apply(contains_any('oe'))]
###Output
_____no_output_____
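Picking up question 3 from the list above - one crude way to compare starter words under the model used in this notebook is to score each candidate by its worst-case number of surviving words. This is an added sketch (not part of the original analysis) reusing `contains_any` and the `words` frame defined earlier:

```python
def worst_case_remaining(starter, names):
    # Words sharing no letter with `starter` survive a fully gray first guess
    return int((~names.apply(contains_any(starter))).sum())

for starter in ["stoae", "arose", "soare"]:
    print(starter, worst_case_remaining(starter, words["name"]))
```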
###Markdown
Don't mind me. Just saving the plots.
###Code
for name, figure in figs.items():
figure.savefig(name)
###Output
_____no_output_____ |
coding_act_3.ipynb | ###Markdown
Coding act 3
###Code
# Python program to invert and transpose a 3x3 matrix
import numpy as np
A = np.array([[1,-2,3],[4,25,17],[8,9,-10]])
print(A,"\n")
#transposing the matrix
B = np.transpose(A)
print(B)
# inverting the matrix
B = np.linalg.inv(A)
print(B)
# Python program to invert and transpose a 4x4 matrix
C = np.array([[1,2,3,-4],[4,5,-6,7],[8,9,-10,11],[12,13,14,-15]])
print(C,"\n")
# transposing the matrix
D = np.transpose(C)
print(D,"\n")
#inversing the matrix
invC = np.linalg.inv(C)
print(invC)
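# Sanity check (an addition, not part of the original exercise): a matrix multiplied
# by its inverse should give the identity matrix, up to floating-point error.
print(np.allclose(C @ invC, np.eye(4)))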
###Output
[[ 1 2 3 -4]
[ 4 5 -6 7]
[ 8 9 -10 11]
[ 12 13 14 -15]]
[[ 1 4 8 12]
[ 2 5 9 13]
[ 3 -6 -10 14]
[ -4 7 11 -15]]
[[-0.59090909 -1.125 0.625 0.09090909]
[ 0.54545455 1. -0.5 -0.04545455]
[-0.68181818 1.375 -0.875 0.18181818]
[-0.63636364 1.25 -0.75 0.13636364]]
|
docs/jupyter/canvas-test.ipynb | ###Markdown
Canvas Drawing Test
###Code
from ipycanvas import Canvas
canvas = Canvas(width=200, height=200)
canvas.fill_rect(25, 25, 100, 100)
canvas.clear_rect(45, 45, 60, 60)
canvas.stroke_rect(50, 50, 50, 50)
canvas
###Output
_____no_output_____ |
exp2021-02/exp2021-02.ipynb | ###Markdown
Image processing Load an image to numpy
###Code
from typing import List
import PIL
from PIL import Image, ImagePalette
import numpy as np
print('Pillow Version:', PIL.__version__)
import pandas as pd
###Output
Pillow Version: 8.1.0
###Markdown
Let's create a helper to manipulate the image with a palette
###Code
class ImageReaderHelper:
def __init__(self, name: str):
self.name = name
def load(self):
self.img = Image.open(f'{self.name}.png').convert('P')
self.palette = self.img.getpalette()
self.img_arr = np.asarray(self.img)
self.palette_arr = np.asarray(self.palette).reshape(256, 3)
print(self.palette_arr)
print(f"{self.name} shape: {self.img_arr.shape} dtype: {self.img_arr.dtype}")
print(f"{self.name} palette: {self.palette_arr.shape} dtype: {self.palette_arr.dtype}")
print('Unique colors used from palette in the image')
self.colors_used = np.unique(self.img_arr)
print(self.colors_used)
print('Unique RGB colors used')
self.min_color_idx = np.amin(self.colors_used)
self.max_color_idx = np.amax(self.colors_used)
print(f'Color index range: {self.min_color_idx} to {self.max_color_idx}')
def print_palette_info(self):
rgb_colors_used = self.palette_arr[self.min_color_idx:self.max_color_idx+1]
print(rgb_colors_used)
color_id, color_frequency = np.unique(self.img_arr, return_counts = True)
print('Color frequency')
print(color_frequency)
def get_pixels(self):
return self.img_arr
def get_colors(self):
return self.palette_arr
def get_shuffle_colors(self):
new_palette_arr = self.palette_arr[self.min_color_idx:self.max_color_idx+1].copy()
np.random.shuffle(new_palette_arr)
return new_palette_arr
veg2020 = ImageReaderHelper('vegetation2020')
veg2020.load()
###Output
[[254 254 254]
[ 2 2 2]
[232 232 232]
[215 215 215]
[ 23 23 23]
[ 55 71 97]
[199 199 200]
[183 183 183]
[ 39 39 39]
[167 167 168]
[ 55 55 55]
[135 135 136]
[151 151 151]
[ 71 71 71]
[119 119 119]
[ 87 87 87]
[103 103 103]
[ 72 87 110]
[142 151 165]
[107 119 138]
[ 87 101 122]
[180 186 195]
[122 132 150]
[188 194 202]
[155 163 176]
[217 220 225]
[ 97 109 130]
[222 225 229]
[ 95 107 128]
[ 64 79 104]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]
[ 0 0 0]]
vegetation2020 shape: (1080, 1920) dtype: uint8
vegetation2020 palette: (256, 3) dtype: int64
Unique colors used from palette in the image
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
24 25 26 27 28 29]
Unique RGB colors used
Color index range: 0 to 29
###Markdown
Understand the palette. We can check how many colors from the palette are being used in the entire image. For vegetation2020, it is a palette of 30 colors.
###Code
veg2020.print_palette_info()
###Output
[[254 254 254]
[ 2 2 2]
[232 232 232]
[215 215 215]
[ 23 23 23]
[ 55 71 97]
[199 199 200]
[183 183 183]
[ 39 39 39]
[167 167 168]
[ 55 55 55]
[135 135 136]
[151 151 151]
[ 71 71 71]
[119 119 119]
[ 87 87 87]
[103 103 103]
[ 72 87 110]
[142 151 165]
[107 119 138]
[ 87 101 122]
[180 186 195]
[122 132 150]
[188 194 202]
[155 163 176]
[217 220 225]
[ 97 109 130]
[222 225 229]
[ 95 107 128]
[ 64 79 104]]
Color frequency
[1882788 52211 17195 12337 10959 10353 10175 8440 8384
7944 7288 7085 6974 6744 6633 6514 6427 846
789 674 526 507 474 363 352 305 118
94 62 39]
###Markdown
Change the palette
###Code
class ImageWriteHelper:
def __init__(self, name: str):
self.name = name
def set_colors(self,colors: List):
# Palette would be provided as [(R, G, B), ..]
self.colors = colors
def set_pixels(self,pixels: List):
self.pixels = np.asarray(pixels).astype(np.uint8)
def _create_image_palette(self):
# The list must be aligned by channel (All R values must be contiguous in the list before G and B values.)
colors = np.asarray(self.colors).astype(np.uint8).flatten()
r_colors = colors[0::3]
g_colors = colors[1::3]
b_colors = colors[2::3]
cols_palette = np.concatenate((r_colors, g_colors, b_colors), axis=None).tolist()
img_palette = ImagePalette.ImagePalette(mode='RGB', palette=cols_palette, size=len(cols_palette))
return img_palette
def _create_image(self):
new_img = Image.fromarray(self.pixels, 'P')
new_img.putpalette(self._create_image_palette())
return new_img
def save(self):
self._create_image().save(f"{self.name}.png")
img_rand_palette = ImageWriteHelper('img-exp-random-palette')
img_rand_palette.set_pixels(veg2020.get_pixels())
img_rand_palette.set_colors(veg2020.get_shuffle_colors())
img_rand_palette.save()
###Output
_____no_output_____
###Markdown
Create a tri-colors grid image
###Code
col_white = (255, 255, 255)
col_black = (0, 0, 0)
col_grey = (85, 86, 87)
col_navy = (0,0,128)
img_grid = ImageWriteHelper('img-exp-grid')
img_grid.set_colors([col_white, col_black, col_grey])
pix_grid = np.ones((1080, 1920))
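# Within each 30x30 tile: color 1 (black) in the top-left 10x10 block, color 2 (grey) in the bottom-right 9x9 block, color 0 (white) elsewhere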
select_color = lambda i, j : 1 if i % 30 <10 and j % 30 < 10 else (2 if i % 30 > 20 and j % 30 > 20 else 0)
for i in range(1080):
for j in range(1920):
pix_grid[i, j]= select_color(i, j)
print(pix_grid)
img_grid.set_pixels(pix_grid)
img_grid.save()
###Output
[[1. 1. 1. ... 0. 0. 0.]
[1. 1. 1. ... 0. 0. 0.]
[1. 1. 1. ... 0. 0. 0.]
...
[0. 0. 0. ... 2. 2. 2.]
[0. 0. 0. ... 2. 2. 2.]
[0. 0. 0. ... 2. 2. 2.]]
###Markdown
Convert an image with palette to monochrome
###Code
img_converted_mono = ImageWriteHelper('img-exp-converted-monochrome')
img_converted_mono.set_colors([col_white, col_black, col_grey])
img_converted_mono_shape = veg2020.get_pixels().shape
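# Map every non-background palette index (> 0) to 1 (black in the new palette); background index 0 stays 0 (white)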
it_pix_converted_mono = np.nditer(veg2020.get_pixels(), flags=['multi_index'])
pix_converted_mono = np.array([1 if v >0 else 0 for v in it_pix_converted_mono]).reshape(img_converted_mono_shape)
img_converted_mono.set_pixels(pix_converted_mono)
img_converted_mono.save()
###Output
_____no_output_____
###Markdown
Add two images
###Code
img_exp_add = ImageWriteHelper('img-exp-add')
img_exp_add.set_colors([col_white, col_black, col_grey, col_navy])
pixel_exp_add = pix_converted_mono + pix_grid
img_exp_add.set_pixels(pixel_exp_add)
img_exp_add.save()
###Output
_____no_output_____ |
laboratorio/lezione2-01ott21/lezione2-python-sequenze.ipynb | ###Markdown
Python - Sequences (String, List and Tuple) > Definition of sequence > Construction of a *sequence* > Size of a *sequence* > Accessing the elements of a *sequence* > Concatenation of *sequences* > Repetition of a *sequence* > Scanning the elements of a *sequence* > Updating a list > Multiple assignment > List comprehension. Definition of *sequence*: strings, lists and tuples are ***sequences***, i.e. objects that can be iterated over and whose elements are indexed by position. **String**: sequence of characters - object of type `str` - ___immutable___ object. **List**: sequence of values (objects), possibly of different types - object of type `list` - ***mutable*** object. **Tuple**: sequence of values (objects), possibly of different types - object of type `tuple` - ***immutable*** object. (A short added sketch contrasting mutability and immutability follows the first example below.) Construction of a *sequence*. Constructing a string - via a literal (a sequence of characters enclosed in single quotes `'` or double quotes `"`) - via the `str()` function. Literal with double quotes:
###Code
print("ciao")
###Output
ciao
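As promised above, here is a short added sketch (not part of the original lesson) contrasting the mutability of lists with the immutability of strings:

```python
s = "ciao"
l = [1, 2, 3]
l[0] = 99          # lists are mutable: in-place modification is allowed
print(l)           # [99, 2, 3]
try:
    s[0] = "C"     # strings are immutable: item assignment raises TypeError
except TypeError as err:
    print("TypeError:", err)
```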
###Markdown
Literal with single quotes:
###Code
print('ciao')
###Output
ciao
###Markdown
Single quotes inside the string value:
###Code
print('\'ciao\'')
print("'ciao'")
###Output
'ciao'
###Markdown
Double quotes inside the string value:
###Code
print("\"ciao\"")
print('"ciao"')
###Output
"ciao"
###Markdown
*Newline* `\n` inside the string value:
###Code
print('ciao\nciao')
###Output
ciao
ciao
###Markdown
Constructing the empty string:
###Code
""
''
###Output
_____no_output_____
###Markdown
Examples of construction via the `str()` function. Empty string:
###Code
str()
###Output
_____no_output_____
###Markdown
Construction from a string literal:
###Code
str("ciao")
###Output
_____no_output_____
###Markdown
Construction from an arithmetic expression:
###Code
str(3+4)
###Output
_____no_output_____
###Markdown
Construction from a comparison expression:
###Code
str(34 > 0)
###Output
_____no_output_____
###Markdown
**NOTE**: the value returned by the call `str(34 > 0)` is the string made of the characters `T`, `r`, `u` and `e`, not the value `True` of type `bool`. Constructing a list - via the literal `[value1, value2, ..., valueN]` - via the `list()` function. Examples of construction via literal. Empty list:
###Code
[]
###Output
_____no_output_____
###Markdown
List with a single value:
###Code
[4]
###Output
_____no_output_____
###Markdown
List of three values of the same type:
###Code
[1,2,3]
###Output
_____no_output_____
###Markdown
List of three values of different types:
###Code
[10, 7.8, "ab"]
###Output
_____no_output_____
###Markdown
List of three values of different types, the third of which is of type `list` (nested list):
###Code
[10, 7.8, [True, "ab"]]
###Output
_____no_output_____
###Markdown
Examples of construction via the `list()` function. Empty list:
###Code
list()
###Output
_____no_output_____
###Markdown
Construction from a string:
###Code
list("abcde")
###Output
_____no_output_____
###Markdown
Construction from a list:
###Code
list([1,2,3,4])
###Output
_____no_output_____
###Markdown
Constructing a tuple - via the literal `(value1, value2, ..., valueN)` - via the `tuple()` function. Examples of construction via literal. Empty tuple:
###Code
()
###Output
_____no_output_____
###Markdown
Tuple with a single value:
###Code
(4,)
###Output
_____no_output_____
###Markdown
Tuple of three values of the same type:
###Code
(1,2,3)
###Output
_____no_output_____
###Markdown
Tuple of three values of different types:
###Code
(10, 7.8, "ab")
###Output
_____no_output_____
###Markdown
Tuple of four values of different types, the third of type `list` (nested list) and the fourth of type `tuple` (nested tuple):
###Code
(10, 7.8, [1,2], (3,4))
###Output
_____no_output_____
###Markdown
**NOTE**: the parentheses in a tuple literal are optional (they cannot, however, be omitted in the literal of an empty tuple).
###Code
10, 7.8, [1,2], (3,4)
###Output
_____no_output_____
###Markdown
Examples of construction via the `tuple()` function. Empty tuple:
###Code
tuple()
###Output
_____no_output_____
###Markdown
Construction from a string:
###Code
tuple("abcde")
###Output
_____no_output_____
###Markdown
Construction from a list:
###Code
tuple([1,2,3])
###Output
_____no_output_____
###Markdown
Construction from a tuple:
###Code
tuple((1,2,3))
###Output
_____no_output_____
###Markdown
Size of a *sequence*. The `len()` function returns the size of the *sequence* passed as its argument. Size of a string:
###Code
len("abcde")
###Output
_____no_output_____
###Markdown
Size of a list:
###Code
len([1,2,3,4])
###Output
_____no_output_____
###Markdown
Size of a tuple:
###Code
len((1,2,3,4,5))
###Output
_____no_output_____
###Markdown
Accessing the elements of a *sequence*. The positions of the elements inside a *sequence* are indicated by: - positive indices - 0: position of the first element - 1: position of the second element - ... - the sequence length minus 1: position of the last element - negative indices - -1: position of the last element - -2: position of the second-to-last element - ... - the arithmetic negation of the sequence length: position of the first element. Accessing a single element of a *sequence*. The expression: my_sequence[some_index] returns the element of the *sequence* `my_sequence` at position `some_index`. **Examples of accessing a character of a string.** Access to the fifth character via a positive index:
###Code
stringa = 'Hello world!'
stringa[4]
###Output
_____no_output_____
###Markdown
**NOTE**: `stringa[4]` returns an object of type `str`. Access to the fifth character via a negative index:
###Code
stringa = 'Hello world!'
stringa[-8]
###Output
_____no_output_____
###Markdown
**Examples of accessing an element of a list.** Access to the fourth element via a positive index:
###Code
lista = [1, 2, 3, [4, 5]]
lista[3]
###Output
_____no_output_____
###Markdown
Access to the first element of the nested list:
###Code
lista = [1, 2, 3, [4, 5]]
lista[3][0]
###Output
_____no_output_____
###Markdown
Access to the first element via a negative index:
###Code
lista = [1, 2, 3, [4, 5]]
lista[-4]
###Output
_____no_output_____
###Markdown
**Examples of accessing an element of a tuple.** Access to the third element via a positive index:
###Code
tupla = (1, 2, 3, 4)
tupla[2]
###Output
_____no_output_____
###Markdown
Access to the last element via a negative index:
###Code
tupla = (1, 2, 3, 4)
tupla[-1]
###Output
_____no_output_____
###Markdown
Accessing consecutive elements of a *sequence* (*slicing, part 1*). The expression: my_sequence[start_index:end_index] returns the elements of the *sequence* `my_sequence` from the one at position `start_index` up to the one at position `end_index-1`. **Examples of accessing consecutive characters of a string (substring).** Access to the substring from the third to the seventh character via positive indices:
###Code
stringa = 'Hello world!'
stringa[2:7]
###Output
_____no_output_____
###Markdown
Access to the substring from the third to the seventh character via negative indices:
###Code
stringa = 'Hello world!'
stringa[-10:-5]
###Output
_____no_output_____
###Markdown
Access to the prefix of the first three characters via positive indices:
###Code
stringa = 'Hello world!'
stringa[0:3]
stringa = 'Hello world!'
stringa[:3]
###Output
_____no_output_____
###Markdown
Access to the suffix starting from the ninth character via positive indices:
###Code
stringa = 'Hello world!'
stringa[8:len(stringa)]
stringa = 'Hello world!'
stringa[8:]
###Output
_____no_output_____
###Markdown
Getting a copy of the string:
###Code
stringa = 'Hello world!'
stringa[0:len(stringa)]
stringa = 'Hello world!'
stringa[:]
###Output
_____no_output_____
###Markdown
**Examples of accessing consecutive elements of a list/tuple (sublist/subtuple).** Access to the sublist from the third to the fourth element via positive indices:
###Code
lista = [1, 2, 3, 4, 5, 6]
lista[2:4]
###Output
_____no_output_____
###Markdown
Access to the list suffix starting from the third element via positive indices:
###Code
lista = [1, 2, 3, 4, 5, 6]
lista[2:]
###Output
_____no_output_____
###Markdown
Access to the tuple prefix of the first four elements via positive indices:
###Code
tupla = (1, 2, 3, 4, 5, 6)
tupla[:4]
###Output
_____no_output_____
###Markdown
Accessing non-consecutive elements of a *sequence* (*slicing, part 2*). The expression: my_sequence[start_index:end_index:step] is equivalent to - taking the subsequence of consecutive elements `my_sequence[start_index:end_index]` and - returning from this subsequence the elements starting from the first one, skipping `step-1` elements each time. **Examples of accessing non-consecutive characters of a string.** Access to the characters starting from the second one, skipping two characters each time and stopping no later than the seventh:
###Code
stringa = 'xAxxBxxCxx'
stringa[1:8:3]
###Output
_____no_output_____
###Markdown
*Slicing* with a step can be used to reverse a sequence:
###Code
stringa = 'Hello world!'
stringa[::-1]
lista = [1, 2, 3, 4, 5, 6, 7, 8, 9]
lista[::-1]
###Output
_____no_output_____
###Markdown
Concatenation of *sequences*. The expression: my_sequence1 + my_sequence2 + ... + my_sequenceN returns the concatenation of N sequences of the same type.
###Code
"ciao " + "mondo"
[1,2] + [3,4,5]
(1,2) + (3,4,5)
###Output
_____no_output_____
###Markdown
Repetition of a *sequence*. The expression: my_sequence * times returns the repetition of `my_sequence` repeated `times` times.
###Code
"ciao " * 4
[1,2] * 4
(1,2) * 4
###Output
_____no_output_____
###Markdown
Scanning the elements of a *sequence*. The `in` operator. The expression: my_element in my_sequence returns `True` if `my_element` is present in `my_sequence`, otherwise it returns `False`. Checking for the presence of the character `H`:
###Code
stringa = 'Hello world!'
"H" in stringa
###Output
_____no_output_____
###Markdown
Checking for the presence of the substring `llo` in the string:
###Code
stringa = 'Hello world!'
"llo" in stringa
###Output
_____no_output_____
###Markdown
Checking for the presence of the string `ca` in the list:
###Code
lista = [1, 2, 'acaa', 10.5]
"ca" in lista
###Output
_____no_output_____
###Markdown
Checking for the presence of the list `[1,2]` in the list:
###Code
lista = [1, 2, 'acaa', 10.5]
[1,2] in lista
###Output
_____no_output_____
###Markdown
Scanning with the `in` operator. **Syntax for scanning a *sequence***: for element in my_sequence: do_something - `my_sequence`: the *sequence* to scan from left to right - `element`: the scanning variable - `do_something`: the block of statements to execute for each element considered. **Rule**: the statements in `do_something` must be indented by 4 spaces with respect to the header line. Scanning and printing the characters of the string:
###Code
stringa = 'world'
for c in stringa:
print(c)
###Output
w
o
r
l
d
###Markdown
Scanning and printing the elements of the list:
###Code
lista = [3, 45.6, 'ciao']
for element in lista:
print(element)
###Output
3
45.6
ciao
###Markdown
Scanning and printing the elements of the tuple:
###Code
tupla = 3, 45.6, 'ciao'
for element in tupla:
print(element)
###Output
3
45.6
ciao
###Markdown
Updating a list. Updating a single element of a list. The statement: my_list[some_index] = new_value replaces the element of the list `my_list` at position `some_index` with the new value `new_value`. Replacing the fourth element of the list with the value 0:
###Code
lista = [1, 2, 3, 4, 5, 6, 7]
lista[3] = 0
lista
###Output
_____no_output_____
###Markdown
Updating several elements of a list. The statements: my_list[start_index:end_index] = new_list my_list[start_index:end_index:step] = new_list replace the sublist of `my_list` obtained via *slicing* with the list `new_list`. Replacing the three consecutive elements from the fourth to the sixth of the list:
###Code
lista = [1, 2, 3, 4, 5, 6, 7]
lista[3:6] = ['*', '*', '*']
lista
###Output
_____no_output_____
###Markdown
Deleting the three consecutive elements from the fourth to the sixth of the list:
###Code
lista = [1, 2, 3, 4, 5, 6, 7]
lista[3:6] = []
lista
###Output
_____no_output_____
###Markdown
Inserting three asterisk elements before the second element:
###Code
lista = [1, 2, 3, 4, 5, 6, 7]
lista[1:1] = ['*', '*', '*']
lista
###Output
_____no_output_____
###Markdown
Appending three asterisk elements at the end:
###Code
lista = [1, 2, 3, 4, 5, 6, 7]
lista[len(lista):len(lista)] = ['*', '*', '*']
lista
###Output
_____no_output_____
###Markdown
Prepending three asterisk elements at the beginning:
###Code
lista = [1, 2, 3, 4, 5, 6, 7]
lista[:0] = ['*', '*', '*']
lista
###Output
_____no_output_____
###Markdown
Updating the elements at even index positions to the value 0:
###Code
lista = [1, 2, 3, 4, 5, 6, 7]
lista[::2] = [0, 0, 0, 0]
lista
###Output
_____no_output_____
###Markdown
Deleting a single element of a list with the `del` operator. The statement: del my_list[some_index] removes from the list `my_list` the element at index position `some_index`. Deleting the third element of the list:
###Code
lista = [1, 2, 3, 4]
del lista[2]
lista
###Output
_____no_output_____
###Markdown
Deleting several elements of a list with the `del` operator. The statements: del my_list[start_index:end_index] del my_list[start_index:end_index:step] remove from the list `my_list` the elements produced by the *slicing* operation. Deleting the elements from the third to the fifth:
###Code
lista = [1, 2, 3, 4, 5, 6, 7, 8]
del lista[2:5]
lista
###Output
_____no_output_____
###Markdown
Deleting the elements at odd positions:
###Code
lista = [0, 1, 2, 3, 4, 5, 6, 7]
del lista[1::2]
lista
###Output
_____no_output_____
###Markdown
Multiple assignment. The assignment statement: (my_var1, my_var2, ..., my_varN) = my_sequence assigns to the N variables specified in the tuple on the left the values of the *sequence* `my_sequence` specified on the right. The parentheses are optional: my_var1, my_var2, ..., my_varN = my_sequence. **NOTE**: `my_sequence` must have size equal to N. Assigning the four characters of the string "1234" to four different variables:
###Code
v1, v2, v3, v4 = "1234"
v4
###Output
_____no_output_____
###Markdown
Assigning the four elements of the list `[1,2,3,4]` to four different variables:
###Code
v1, v2, v3, v4 = [1,2,3,4]
v4
###Output
_____no_output_____
###Markdown
Assigning the four elements of the tuple `(1,2,3,4)` to four different variables:
###Code
v1, v2, v3, v4 = 1,2,3,4
###Output
_____no_output_____
###Markdown
Swapping the values of two variables:
###Code
v1,v2 = v2,v1
v2
###Output
_____no_output_____
###Markdown
Scanning a list of two-element tuples, printing the two elements separately each time:
###Code
lista = [(1, 2), (3, 4), (5, 6)]
for (a,b) in lista:
print(str(a)+' '+str(b))
###Output
1 2
3 4
5 6
###Markdown
List comprehension. The statement: [some_expression for element in my_sequence if condition] is equivalent to writing: for element in my_sequence: if condition: some_expression with the difference that the values produced by `some_expression` are all returned in an object of type `list`. **NB**: the if clause is optional. Given a list, write the list comprehension that produces the list of its even-valued elements incremented by 1:
###Code
lista = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
[element+1 for element in lista if element % 2 == 0]
###Output
_____no_output_____
###Markdown
Given a string, write the list comprehension that produces the list of its characters each concatenated with the digit 1:
###Code
stringa = 'ciao'
[e+"1" for e in stringa]
###Output
_____no_output_____
###Markdown
The list comprehension in its most general syntax: [some_expression for e1 in my_seq1 for e2 in my_seq2 ... for eN in my_seqN if condition] is equivalent to writing: for e1 in my_seq1: for e2 in my_seq2: ... for eN in my_seqN: if condition: some_expression. Write the list comprehension that produces the list of the strings obtained by combining, in all possible ways, the characters of the string, the elements (integer values) of the list and the elements (characters) of the tuple:
###Code
stringa = 'ciao'
lista = [1, 2, 3, 4]
tupla = ('a', 'b', 'c')
[x+str(y)+z for x in stringa for y in lista for z in tupla]
###Output
_____no_output_____ |
align_your_own.ipynb | ###Markdown
So, show me how to align two vector spaces for myself! No problem. We're going to run through the example given in the README again, and show you how to learn your own transformation to align the French vector space to the Russian vector space.First, let's define a few simple functions...
###Code
import numpy as np
from fasttext import FastVector
# from https://stackoverflow.com/questions/21030391/how-to-normalize-array-numpy
def normalized(a, axis=-1, order=2):
"""Utility function to normalize the rows of a numpy array."""
l2 = np.atleast_1d(np.linalg.norm(a, order, axis))
l2[l2==0] = 1
return a / np.expand_dims(l2, axis)
def make_training_matrices(source_dictionary, target_dictionary, bilingual_dictionary):
"""
Source and target dictionaries are the FastVector objects of
source/target languages. bilingual_dictionary is a list of
translation pair tuples [(source_word, target_word), ...].
"""
source_matrix = []
target_matrix = []
for (source, target) in bilingual_dictionary:
if source in source_dictionary and target in target_dictionary:
source_matrix.append(source_dictionary[source])
target_matrix.append(target_dictionary[target])
# return training matrices
return np.array(source_matrix), np.array(target_matrix)
def learn_transformation(source_matrix, target_matrix, normalize_vectors=True):
"""
Source and target matrices are numpy arrays, shape
(dictionary_length, embedding_dimension). These contain paired
word vectors from the bilingual dictionary.
"""
# optionally normalize the training vectors
if normalize_vectors:
source_matrix = normalized(source_matrix)
target_matrix = normalized(target_matrix)
# perform the SVD
product = np.matmul(source_matrix.transpose(), target_matrix)
U, s, V = np.linalg.svd(product)
# return orthogonal transformation which aligns source language to the target
return np.matmul(U, V)
###Output
_____no_output_____
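For reference (an aside, not in the original notebook): `learn_transformation` solves the orthogonal Procrustes problem. With source matrix $X$ and target matrix $Y$, it computes the SVD $X^\top Y = U \Sigma V^\top$ and returns the orthogonal matrix $W = U V^\top$, which minimises $\lVert X W - Y \rVert_F$; note that `np.linalg.svd` already returns $V^\top$ (bound to the name `V` here), which is why the code multiplies `U` by `V` directly.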
###Markdown
Now we load two sets of word vectors and evaluate the similarity of a couple of word pairs (note that this first run-through has been adapted to load the Indonesian `wiki.id.vec` and Malay `wiki.ms.vec` vectors; the original French/Russian version follows later in the notebook):
###Code
# seem to only work for Python 2
fr_dictionary = FastVector(vector_file='wiki.id.vec')
ru_dictionary = FastVector(vector_file='wiki.ms.vec')
fr_vector = fr_dictionary["keretaapi"]
ru_vector = ru_dictionary["melatih"]
print(FastVector.cosine_similarity(fr_vector, ru_vector))
fr_vector = fr_dictionary["china"]
ru_vector = ru_dictionary["china"]
print(FastVector.cosine_similarity(fr_vector, ru_vector))
###Output
0.06021251543834871
###Markdown
"chat" and "кот" both mean "cat", so they should be highly similar; clearly the two word vector spaces are not yet aligned. To align them, we need a bilingual dictionary of French and Russian translation pairs. As it happens, this is a great opportunity to show you something truly amazing...Many words appear in the vocabularies of more than one language; words like "alberto", "london" and "presse". These words usually mean similar things in each language. Therefore we can form a bilingual dictionary, by simply extracting every word that appears in both the French and Russian vocabularies.
###Code
ru_words = set(ru_dictionary.word2id.keys())
fr_words = set(fr_dictionary.word2id.keys())
overlap = list(ru_words & fr_words)
bilingual_dictionary = [(entry, entry) for entry in overlap]
###Output
_____no_output_____
###Markdown
Let's align the French vectors to the Russian vectors, using only this "free" dictionary that we acquired without any bilingual expert knowledge.
###Code
# form the training matrices
source_matrix, target_matrix = make_training_matrices(
fr_dictionary, ru_dictionary, bilingual_dictionary)
# learn and apply the transformation
transform = learn_transformation(source_matrix, target_matrix)
fr_dictionary.apply_transform(transform)
###Output
_____no_output_____
###Markdown
Finally, we re-evaluate the similarity of "chat" and "кот":
###Code
fr_vector = fr_dictionary["keretaapi"]
ru_vector = ru_dictionary["melatih"]
print(FastVector.cosine_similarity(fr_vector, ru_vector))
fr_vector = fr_dictionary["china"]
ru_vector = ru_dictionary["china"]
print(FastVector.cosine_similarity(fr_vector, ru_vector))
###Output
0.7430311431667744
###Markdown
So, show me how to align two vector spaces for myself! No problem. We're going to run through the example given in the README again, and show you how to learn your own transformation to align the French vector space to the Russian vector space.First, let's define a few simple functions...
###Code
import numpy as np
from fasttext import FastVector
# from https://stackoverflow.com/questions/21030391/how-to-normalize-array-numpy
def normalized(a, axis=-1, order=2):
"""Utility function to normalize the rows of a numpy array."""
l2 = np.atleast_1d(np.linalg.norm(a, order, axis))
l2[l2==0] = 1
return a / np.expand_dims(l2, axis)
def make_training_matrices(source_dictionary, target_dictionary, bilingual_dictionary):
"""
Source and target dictionaries are the FastVector objects of
source/target languages. bilingual_dictionary is a list of
translation pair tuples [(source_word, target_word), ...].
"""
source_matrix = []
target_matrix = []
for (source, target) in bilingual_dictionary:
if source in source_dictionary and target in target_dictionary:
source_matrix.append(source_dictionary[source])
target_matrix.append(target_dictionary[target])
# return training matrices
return np.array(source_matrix), np.array(target_matrix)
def learn_transformation(source_matrix, target_matrix, normalize_vectors=True):
"""
Source and target matrices are numpy arrays, shape
(dictionary_length, embedding_dimension). These contain paired
word vectors from the bilingual dictionary.
"""
# optionally normalize the training vectors
if normalize_vectors:
source_matrix = normalized(source_matrix)
target_matrix = normalized(target_matrix)
# perform the SVD
product = np.matmul(source_matrix.transpose(), target_matrix)
U, s, V = np.linalg.svd(product)
# return orthogonal transformation which aligns source language to the target
return np.matmul(U, V)
###Output
_____no_output_____
###Markdown
Now we load the French and Russian word vectors, and evaluate the similarity of "chat" and "кот":
###Code
fr_dictionary = FastVector(vector_file='wiki.fr.vec')
ru_dictionary = FastVector(vector_file='wiki.ru.vec')
fr_vector = fr_dictionary["chat"]
ru_vector = ru_dictionary["кот"]
print(FastVector.cosine_similarity(fr_vector, ru_vector))
###Output
reading word vectors from wiki.fr.vec
reading word vectors from wiki.ru.vec
0.0238620125217
###Markdown
"chat" and "кот" both mean "cat", so they should be highly similar; clearly the two word vector spaces are not yet aligned. To align them, we need a bilingual dictionary of French and Russian translation pairs. As it happens, this is a great opportunity to show you something truly amazing...Many words appear in the vocabularies of more than one language; words like "alberto", "london" and "presse". These words usually mean similar things in each language. Therefore we can form a bilingual dictionary, by simply extracting every word that appears in both the French and Russian vocabularies.
###Code
ru_words = set(ru_dictionary.word2id.keys())
fr_words = set(fr_dictionary.word2id.keys())
overlap = list(ru_words & fr_words)
bilingual_dictionary = [(entry, entry) for entry in overlap]
###Output
_____no_output_____
###Markdown
Let's align the French vectors to the Russian vectors, using only this "free" dictionary that we acquired without any bilingual expert knowledge.
###Code
# form the training matrices
source_matrix, target_matrix = make_training_matrices(
fr_dictionary, ru_dictionary, bilingual_dictionary)
# learn and apply the transformation
transform = learn_transformation(source_matrix, target_matrix)
fr_dictionary.apply_transform(transform)
###Output
_____no_output_____
###Markdown
Finally, we re-evaluate the similarity of "chat" and "кот":
###Code
fr_vector = fr_dictionary["chat"]
ru_vector = ru_dictionary["кот"]
print(FastVector.cosine_similarity(fr_vector, ru_vector))
###Output
0.377368048895
|
lab/00_Warmup/01_Basic model with a Builtin Algorithm.ipynb | ###Markdown
Training/Optimizing a basic model with a Built-in Algorithm Summary: This exercise is about executing all the steps of the Machine Learning development pipeline, using some features SageMaker offers. We'll use here a public dataset called iris. The dataset and the model aren't the focus of this exercise. The idea here is to see how SageMaker can accelerate your work and avoid wasting your time with tasks that aren't related to your business. So, we'll do the following: Table of contents: 1. [Train/deploy/test a multiclass model using XGBoost](#part1) 2. [Optimize the model](#part2) 3. [Run batch predictions](#part3) 4. [Check the monitoring results, created in **Part 1**](#part4) 1. Train, deploy and test [(back to top)](#contents) Let's start by importing the dataset and visualizing it.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import datasets
sns.set(color_codes=True)
iris = datasets.load_iris()
X=iris.data
y=iris.target
dataset = np.insert(iris.data, 0, iris.target,axis=1)
df = pd.DataFrame(data=dataset, columns=['iris_id'] + iris.feature_names)
## We'll also save the dataset, with header, since we'll need to create a baseline for the monitoring
df.to_csv('full_dataset.csv', sep=',', index=None)
df['species'] = df['iris_id'].map(lambda x: 'setosa' if x == 0 else 'versicolor' if x == 1 else 'virginica')
df.head()
df.describe()
###Output
_____no_output_____
###Markdown
Checking the class distribution
###Code
ax = df.groupby(df['species'])['species'].count().plot(kind='bar')
x_offset = -0.05
y_offset = 0
for p in ax.patches:
b = p.get_bbox()
val = "{}".format(int(b.y1 + b.y0))
ax.annotate(val, ((b.x0 + b.x1)/2 + x_offset, b.y1 + y_offset))
###Output
_____no_output_____
###Markdown
Correlation Matrix
###Code
corr = df.corr()
f, ax = plt.subplots(figsize=(15, 8))
sns.heatmap(corr, annot=True, fmt="f",
xticklabels=corr.columns.values,
yticklabels=corr.columns.values,
ax=ax);
###Output
_____no_output_____
###Markdown
Pairplots & histograms
###Code
sns.pairplot(df.drop(['iris_id'], axis=1), hue='species', size=2.5,diag_kind="kde");
###Output
/usr/local/lib/python3.6/site-packages/seaborn/axisgrid.py:2071: UserWarning: The `size` parameter has been renamed to `height`; please update your code.
warnings.warn(msg, UserWarning)
###Markdown
Now with linear regression
###Code
sns.pairplot(df.drop(['iris_id'], axis=1), kind="reg", hue='species', size=2.5,diag_kind="kde");
###Output
/usr/local/lib/python3.6/site-packages/seaborn/axisgrid.py:2071: UserWarning: The `size` parameter has been renamed to `height`; please update your code.
warnings.warn(msg, UserWarning)
###Markdown
Fit and plot a kernel density estimate. We can see in this dimension an overlapping region between **versicolor** and **virginica**. This is a better representation of what we identified above.
###Code
tmp_df = df[(df.iris_id==0.0)]
sns.kdeplot(tmp_df['petal width (cm)'], tmp_df['petal length (cm)'], bw='silverman', cmap="Blues", shade=False, shade_lowest=False)
tmp_df = df[(df.iris_id==1.0)]
sns.kdeplot(tmp_df['petal width (cm)'], tmp_df['petal length (cm)'], bw='silverman', cmap="Greens", shade=False, shade_lowest=False)
tmp_df = df[(df.iris_id==2.0)]
sns.kdeplot(tmp_df['petal width (cm)'], tmp_df['petal length (cm)'], bw='silverman', cmap="Reds", shade=False, shade_lowest=False)
plt.xlabel('species')
###Output
_____no_output_____
###Markdown
Ok. Petal length and petal width have the highest linear correlation with our label. Also, sepal width seems to be useless, considering the linear correlation with our label. Since versicolor and virginica cannot be split linearly, we need a more versatile algorithm to create a better classifier. In this case, we'll use XGBoost, a tree ensemble that can give us a good model for predicting the flower. Ok, now let's split the dataset into training and test
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42, stratify=y)
###Output
_____no_output_____
###Markdown
First we build the datasets with *X* and *y* for train and test:
###Code
# Note: the SageMaker built-in XGBoost algorithm expects the label in the first column of the CSV
iris_train = pd.concat([pd.Series(y_train), pd.DataFrame(X_train)], axis = 1, ignore_index = True)
iris_test = pd.concat([pd.Series(y_test), pd.DataFrame(X_test)], axis = 1, ignore_index = True)
###Output
_____no_output_____
###Markdown
Then we save the datasets:
###Code
# CSV input for the built-in XGBoost algorithm must not contain a header row
iris_train.to_csv('iris_train.csv', index = False, header = False)
iris_test.to_csv('iris_test.csv', index = False, header = False)
###Output
_____no_output_____
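As a quick added sanity check (not part of the original lab), you can read the file back to confirm the layout the built-in algorithm expects - label in the first column, no header row:

```python
# Peek at the first rows of the training CSV
print(pd.read_csv('iris_train.csv', header=None).head())
```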
###Markdown
Now it's time to train our model with the builtin algorithm XGBoost
###Code
import sagemaker
import boto3
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
from sklearn.model_selection import train_test_split
role = get_execution_role()
prefix='mlops/iris'
# Retrieve the default bucket
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
###Output
_____no_output_____
###Markdown
We will launch an async job to create the baseline for the monitoring process. A baseline is what the monitoring will consider **normal**. The training dataset with which you trained the model is usually a good baseline dataset. Note that the training dataset data schema and the inference dataset schema should exactly match (i.e. the number and order of the features). From the training dataset you can ask Amazon SageMaker to suggest a set of baseline constraints and generate descriptive statistics to explore the data. For this example, upload the training dataset that was used to train the pre-trained model included in this example. If you already have it in Amazon S3, you can directly point to it.
###Code
from sagemaker.model_monitor import DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat
endpoint_monitor = DefaultModelMonitor(
role=role,
instance_count=1,
instance_type='ml.m5.xlarge',
volume_size_in_gb=20,
max_runtime_in_seconds=3600,
)
endpoint_monitor.suggest_baseline(
baseline_dataset='full_dataset.csv',
dataset_format=DatasetFormat.csv(header=True),
output_s3_uri='s3://{}/{}/monitoring/baseline'.format(bucket, prefix),
wait=False,
logs=False
)
###Output
_____no_output_____
###Markdown
Ok. Let's continue, upload the dataset and train the model
###Code
# Upload the dataset to an S3 bucket
input_train = sagemaker_session.upload_data(path='iris_train.csv', key_prefix='%s/data' % prefix)
input_test = sagemaker_session.upload_data(path='iris_test.csv', key_prefix='%s/data' % prefix)
train_data = sagemaker.session.s3_input(s3_data=input_train,content_type="csv")
test_data = sagemaker.session.s3_input(s3_data=input_test,content_type="csv")
###Output
_____no_output_____
###Markdown
**Note:** If there are other packages you want to use with your script, you can include a requirements.txt file in the same directory as your training script to install other dependencies at runtime. Both requirements.txt and your training script should be put in the same folder. You must specify this folder in the `source_dir` argument when creating the estimator.
###Code
# get the URI for new container
container_uri = get_image_uri(boto3.Session().region_name, 'xgboost', repo_version='0.90-2');
# Create the estimator
xgb = sagemaker.estimator.Estimator(container_uri,
role,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
output_path='s3://{}/{}/output'.format(bucket, prefix),
sagemaker_session=sagemaker_session)
# Set the hyperparameters
xgb.set_hyperparameters(eta=0.1,
max_depth=10,
gamma=4,
num_class=len(np.unique(y)),
alpha=10,
min_child_weight=6,
silent=0,
objective='multi:softmax',
num_round=30)
###Output
_____no_output_____
###Markdown
Train the model
###Code
%%time
# takes around 3min 11s
xgb.fit({'train': train_data, 'validation': test_data, })
###Output
_____no_output_____
###Markdown
Deploy the model and create an endpoint for itThe following action will: * get the assets from the job we just ran and then create an input in the Models Catalog * create an endpoint configuration (the metadata for our final endpoint) * create an endpoint, which is our model wrapped in the form of a WebService After that we'll be able to call our deployed endpoint for making predictions
###Code
%%time
# Enable log capturing in the endpoint
data_capture_configuration = sagemaker.model_monitor.data_capture_config.DataCaptureConfig(
enable_capture=True,
sampling_percentage=100,
destination_s3_uri='s3://{}/{}/monitoring'.format(bucket, prefix),
sagemaker_session=sagemaker_session
)
xgb_predictor = xgb.deploy(
initial_instance_count=1,
instance_type='ml.m4.xlarge',
data_capture_config=data_capture_configuration
)
###Output
_____no_output_____
###Markdown
Alright, now that we have deployed the endpoint, with data capturing enabled, it's time to setup the monitorLet's start by configuring our predictor
###Code
from sagemaker.predictor import csv_serializer
from sklearn.metrics import f1_score
endpoint_name = xgb_predictor.endpoint
model_name = boto3.client('sagemaker').describe_endpoint_config(
EndpointConfigName=endpoint_name
)['ProductionVariants'][0]['ModelName']
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
xgb_predictor.deserializer = None
###Output
_____no_output_____
###Markdown
*Monitoring*:- We will generate a **baseline** for the monitoring. Yes, we can monitor a deployed model by collecting logs from the payload and the model output. SageMaker can suggest some statistics and constraints that can be used to compare with the collected data. Then we can see some **metrics** related to the **model performance**.- We'll also create a monitoring scheduler. With this scheduler, SageMaker will parse the logs from time to time to compute the metrics we need. Given it takes some time to get the results, we'll check these metrics at the end of the exercise, in **Part 4**. And then, we need to create a **Monitoring Schedule** for our endpoint. The command below will create a cron scheduler that will process the logs each hour, so we can see how well our model is doing.
###Code
from sagemaker.model_monitor import CronExpressionGenerator
from time import gmtime, strftime
endpoint_monitor.create_monitoring_schedule(
endpoint_input=endpoint_name,
output_s3_uri='s3://{}/{}/monitoring/reports'.format(bucket, prefix),
statistics=endpoint_monitor.baseline_statistics(),
constraints=endpoint_monitor.suggested_constraints(),
schedule_cron_expression=CronExpressionGenerator.hourly(),
enable_cloudwatch_metrics=True,
)
###Output
_____no_output_____
###Markdown
Just take a look at the baseline created/suggested by SageMaker for your datasetThis set of statistics and constraints will be used by the Monitoring Scheduler to compare the incoming data with what is considered **normal**. Each invalid payload sent to the endpoint will be considered a violation.
###Code
baseline_job = endpoint_monitor.latest_baselining_job
schema_df = pd.io.json.json_normalize(baseline_job.baseline_statistics().body_dict["features"])
constraints_df = pd.io.json.json_normalize(baseline_job.suggested_constraints().body_dict["features"])
report_df = schema_df.merge(constraints_df)
report_df.drop([
'numerical_statistics.distribution.kll.buckets',
'numerical_statistics.distribution.kll.sketch.data',
'numerical_statistics.distribution.kll.sketch.parameters.c'
], axis=1).head(10)
###Output
_____no_output_____
###Markdown
Start generating some artificial trafficThe cell below starts a thread to send some traffic to the endpoint. Note that you need to stop the kernel to terminate this thread. If there is no traffic, the monitoring jobs are marked as `Failed` since there is no data to process.
###Code
import random
import time
from threading import Thread
traffic_generator_running=True
def invoke_endpoint_forever():
print('Invoking endpoint forever!')
while traffic_generator_running:
## This will create an invalid set of features
        ## The idea is to violate two monitoring constraints: not_null and data_drift
null_idx = random.randint(0,3)
sample = [random.randint(500,2000) / 100.0 for i in range(4)]
sample[null_idx] = None
xgb_predictor.predict(sample)
time.sleep(0.5)
print('Endpoint invoker has stopped')
Thread(target = invoke_endpoint_forever).start()
###Output
_____no_output_____
###Markdown
Now, let's do a basic test with the deployed endpointIn this test, we'll use a helper object called predictor. This object is always returned from a **Deploy** call. The predictor is just for testing purposes and we'll not use it inside our real application.
###Code
predictions_test = [ float(xgb_predictor.predict(x).decode('utf-8')) for x in X_test]
score = f1_score(y_test,predictions_test,labels=[0.0,1.0,2.0],average='micro')
print('F1 Score(micro): %.1f' % (score * 100.0))
###Output
_____no_output_____
###Markdown
Then, let's test the API for our trained modelThis is how your application will call the endpoint. We use boto3 to get a SageMaker runtime client and then call invoke_endpoint
###Code
from sagemaker.predictor import csv_serializer
sm = boto3.client('sagemaker-runtime')
resp = sm.invoke_endpoint(
EndpointName=endpoint_name,
ContentType='text/csv',
Body=csv_serializer(X_test[0])
)
prediction = float(resp['Body'].read().decode('utf-8'))
print('Predicted class: %.1f for [%s]' % (prediction, csv_serializer(X_test[0])) )
###Output
_____no_output_____
###Markdown
2. Model optimization with Hyperparameter Tuning[(back to top)](contents) Hyperparameter Tuning Jobs A.K.A. Hyperparameter Optimization Let's tune our model before using it for our batch predictionWe know that the iris dataset is an easy challenge. We can achieve a better score with XGBoost. However, we don't want to waste time testing all the possible variations of the hyperparameters in order to optimize the training process. Instead, we'll use SageMaker's tuning feature. For that, we'll use the same estimator, but let's create a Tuner and ask it to optimize the model for us.
###Code
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner
hyperparameter_ranges = {'eta': ContinuousParameter(0, 1),
'min_child_weight': ContinuousParameter(1, 10),
'alpha': ContinuousParameter(0, 2),
'gamma': ContinuousParameter(0, 10),
'max_depth': IntegerParameter(1, 10)}
objective_metric_name = 'validation:merror'
tuner = HyperparameterTuner(xgb,
objective_metric_name,
hyperparameter_ranges,
max_jobs=20,
max_parallel_jobs=4,
objective_type='Minimize')
tuner.fit({'train': train_data, 'validation': test_data, })
tuner.wait()
job_name = tuner.latest_tuning_job.name
attached_tuner = HyperparameterTuner.attach(job_name)
xgb_predictor2 = attached_tuner.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
first_endpoint_name = endpoint_name
endpoint_name = xgb_predictor2.endpoint
model_name = boto3.client('sagemaker').describe_endpoint_config(
EndpointConfigName=endpoint_name
)['ProductionVariants'][0]['ModelName']
###Output
_____no_output_____
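###Markdown
Before testing the tuned endpoint, it's worth taking a quick look at how the individual training jobs performed. The cell below is a minimal sketch that uses the tuner's analytics helper to rank jobs by the objective metric; the `FinalObjectiveValue` column name is the one the SDK's analytics DataFrame usually exposes, so treat it as an assumption if your SDK version differs.
###Code
# Minimal sketch: rank the tuning jobs by the objective metric (validation:merror, lower is better)
tuning_results = tuner.analytics().dataframe()
tuning_results.sort_values('FinalObjectiveValue', ascending=True).head(5)
###Output
_____no_output_____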
###Markdown
A simple test before we move on
###Code
from sagemaker.predictor import csv_serializer
from sklearn.metrics import f1_score
xgb_predictor2.content_type = 'text/csv'
xgb_predictor2.serializer = csv_serializer
xgb_predictor2.deserializer = None
predictions_test = [ float(xgb_predictor2.predict(x).decode('utf-8')) for x in X_test]
score = f1_score(y_test,predictions_test,labels=[0.0,1.0,2.0],average='micro')
print('F1 Score(micro): %.1f' % (score * 100.0))
###Output
_____no_output_____
###Markdown
3. Batch prediction[(back to top)](contents) Batch transform jobIf you have a file with the samples you want to predict, just upload that file to an S3 bucket and start a Batch Transform job. For this task, you don't need to deploy an endpoint. Sagemaker will create all the resources needed to do this batch prediction, save the results into an S3 bucket and then it will destroy the resources automatically for you
###Code
batch_dataset_filename='batch_dataset.csv'
with open(batch_dataset_filename, 'w') as csv:
for x_ in X:
line = ",".join( list(map(str, x_)) )
csv.write( line + "\n" )
csv.flush()
csv.close()
input_batch = sagemaker_session.upload_data(path=batch_dataset_filename, key_prefix='%s/data' % prefix)
import sagemaker
# Initialize the transformer object
transformer=sagemaker.transformer.Transformer(
base_transform_job_name='mlops-iris',
model_name=model_name,
instance_count=1,
instance_type='ml.c4.xlarge',
output_path='s3://{}/{}/batch_output'.format(bucket, prefix),
)
# To start a transform job:
transformer.transform(input_batch, content_type='text/csv', split_type='Line')
# Then wait until transform job is completed
transformer.wait()
import boto3
predictions_filename='iris_predictions.csv'
s3 = boto3.client('s3')
s3.download_file(bucket, '{}/batch_output/{}.out'.format(prefix, batch_dataset_filename), predictions_filename)
df2 = pd.read_csv(predictions_filename, sep=',', encoding='utf-8',header=None, names=[ 'predicted_iris_id'])
df3 = df.copy()
df3['predicted_iris_id'] = df2['predicted_iris_id']
df3.head()
from sklearn.metrics import f1_score
score = f1_score(df3['iris_id'], df3['predicted_iris_id'],labels=[0.0,1.0,2.0],average='micro')
print('F1 Score(micro): %.1f' % (score * 100.0))
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
cnf_matrix = confusion_matrix(df3['iris_id'], df3['predicted_iris_id'])
f, ax = plt.subplots(figsize=(15, 8))
sns.heatmap(cnf_matrix, annot=True, fmt="f", mask=np.zeros_like(cnf_matrix, dtype=np.bool),
cmap=sns.diverging_palette(220, 10, as_cmap=True),
square=True, ax=ax)
###Output
_____no_output_____
###Markdown
4. Check the monitoring results[(back to top)](contents)The HPO took something like 20 minutes to run. The batch prediction, 3-5 more. It is probably enough time to have at least one execution of the monitor schedule. Since we created a thread for generating **invalid** features, we must have some data drift detected in our monitoring. Let's check
###Code
mon_executions = endpoint_monitor.list_executions()
print("We created an hourly schedule above and it will kick off executions ON the hour (plus a 0-20 min buffer).\nWe will have to wait till we hit the hour...")
while len(mon_executions) == 0:
print("Waiting for the 1st execution to happen...")
time.sleep(60)
mon_executions = endpoint_monitor.list_executions()
print('OK. we have %d execution(s) now' % len(mon_executions))
import time
import pandas as pd
from IPython.display import display, HTML
def print_constraint_violations():
violations = endpoint_monitor.latest_monitoring_constraint_violations()
pd.set_option('display.max_colwidth', -1)
constraints_df = pd.io.json.json_normalize(violations.body_dict["violations"])
display(HTML(constraints_df.head(10).to_html()))
while True:
resp = mon_executions[-1].describe()
status = resp['ProcessingJobStatus']
msg = resp['ExitMessage']
if status == 'InProgress':
time.sleep(30)
elif status == 'Completed':
print("Finished: %s" % msg)
print_constraint_violations()
break
else:
print("Error: %s" % msg)
break
###Output
_____no_output_____
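###Markdown
If you prefer to inspect the monitoring metrics programmatically instead of (or before) opening the console, here is a minimal sketch with boto3. The namespace is the one referenced in the console path below; the exact metric names depend on what your schedule emits, so treat them as something to explore rather than a fixed list.
###Code
# Minimal sketch: list the CloudWatch metrics emitted by the model monitor
cw = boto3.client('cloudwatch')
resp = cw.list_metrics(Namespace='aws/sagemaker/Endpoints/data-metrics')
for metric in resp['Metrics'][:20]:
    print(metric['MetricName'], metric['Dimensions'])
###Output
_____no_output_____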
###Markdown
You can also check these metrics on CloudWatch. Just open the CloudWatch console, click on **Metrics**, then select: All -> aws/sagemaker/Endpoints/data-metrics -> Endpoint, MonitoringScheduleUse the *endpoint_monitor* name to filter the metrics Cleaning up
###Code
traffic_generator_running=False
time.sleep(3)
endpoint_monitor.delete_monitoring_schedule()
time.sleep(10) # wait for 10 seconds before trying to delete the endpoint
xgb_predictor.delete_endpoint()
xgb_predictor2.delete_endpoint()
###Output
_____no_output_____
###Markdown
Training/Optimizing a basic model with a Built AlgorithmThis exercise is about manually executing all the steps of the Machine Learning development pipeline. We'll use a public dataset called iris here. The dataset and the model aren't the focus of this exercise. The idea here is to see how SageMaker can accelerate your work and avoid wasting your time with tasks that aren't related to your business. So, we'll do the following: - Train/deploy/test a multiclass model using XGBoost - Optimize the model - Run batch predictions PART 1 - Train, deploy and test Let's start by importing the dataset and visualizing it
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import datasets
sns.set(color_codes=True)
iris = datasets.load_iris()
X=iris.data
y=iris.target
dataset = np.insert(iris.data, 0, iris.target,axis=1)
df = pd.DataFrame(data=dataset, columns=['iris_id'] + iris.feature_names)
df['species'] = df['iris_id'].map(lambda x: 'setosa' if x == 0 else 'versicolor' if x == 1 else 'virginica')
df.head()
df.describe()
###Output
_____no_output_____
###Markdown
Checking the class distribution
###Code
ax = df.groupby(df['species'])['species'].count().plot(kind='bar')
x_offset = -0.05
y_offset = 0
for p in ax.patches:
b = p.get_bbox()
val = "{}".format(int(b.y1 + b.y0))
ax.annotate(val, ((b.x0 + b.x1)/2 + x_offset, b.y1 + y_offset))
###Output
_____no_output_____
###Markdown
Correlation Matrix
###Code
corr = df.corr()
f, ax = plt.subplots(figsize=(15, 8))
sns.heatmap(corr, annot=True, fmt="f",
xticklabels=corr.columns.values,
yticklabels=corr.columns.values,
ax=ax)
###Output
_____no_output_____
###Markdown
Pairplots & histograms
###Code
sns.pairplot(df.drop(['iris_id'], axis=1), hue='species', size=2.5,diag_kind="kde")
###Output
_____no_output_____
###Markdown
Now with linear regression
###Code
sns.pairplot(df.drop(['iris_id'], axis=1), kind="reg", hue='species', size=2.5,diag_kind="kde")
###Output
_____no_output_____
###Markdown
Fit and plot a univariate or bivariate kernel density estimate.
###Code
tmp_df = df[(df.iris_id==0.0)]
sns.kdeplot(tmp_df['petal width (cm)'], tmp_df['petal length (cm)'], bw='silverman', cmap="Reds", shade=False, shade_lowest=False)
tmp_df = df[(df.iris_id==2.0)]
sns.kdeplot(tmp_df['petal width (cm)'], tmp_df['petal length (cm)'], bw='silverman', cmap="Blues", shade=False, shade_lowest=False)
tmp_df = df[(df.iris_id==1.0)]
sns.kdeplot(tmp_df['petal width (cm)'], tmp_df['petal length (cm)'], bw='silverman', cmap="Greens", shade=False, shade_lowest=False)
plt.xlabel('petal width (cm)')
###Output
_____no_output_____
###Markdown
Ok. Petal length and petal width have the highest linear correlation with our label. Also, sepal width seems to be useless, considering the linear correlation with our label. Since versicolor and virginica cannot be split linearly, we need a more versatile algorithm to create a better classifier. In this case, we'll use XGBoost, a tree ensemble that can give us a good model for predicting the flower. Ok, now let's split the dataset into training and test
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42, stratify=y)
with open('iris_train.csv', 'w') as csv:
for x_,y_ in zip(X_train, y_train):
line = "%s,%s" % (y_, ",".join( list(map(str, x_)) ) )
csv.write( line + "\n" )
csv.flush()
csv.close()
with open('iris_test.csv', 'w') as csv:
for x_,y_ in zip(X_test, y_test):
line = "%s,%s" % (y_, ",".join( list(map(str, x_)) ) )
csv.write( line + "\n" )
csv.flush()
csv.close()
###Output
_____no_output_____
###Markdown
Now it's time to train our model with the builtin algorithm XGBoost
###Code
import sagemaker
import boto3
from sagemaker import get_execution_role
from sklearn.model_selection import train_test_split
role = get_execution_role()
prefix='mlops/iris'
# Retrieve the default bucket
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
# Upload the dataset to an S3 bucket
input_train = sagemaker_session.upload_data(path='iris_train.csv', key_prefix='%s/data' % prefix)
input_test = sagemaker_session.upload_data(path='iris_test.csv', key_prefix='%s/data' % prefix)
train_data = sagemaker.session.s3_input(s3_data=input_train,content_type="csv")
test_data = sagemaker.session.s3_input(s3_data=input_test,content_type="csv")
containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/xgboost:latest',
'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest',
'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/xgboost:latest',
'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/xgboost:latest'}
# Create the estimator
xgb = sagemaker.estimator.Estimator(containers[boto3.Session().region_name],
role,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
output_path='s3://{}/{}/output'.format(bucket, prefix),
sagemaker_session=sagemaker_session)
# Set the hyperparameters
xgb.set_hyperparameters(eta=0.1,
max_depth=10,
gamma=4,
reg_lambda=10,
num_class=len(np.unique(y)),
alpha=10,
min_child_weight=6,
silent=0,
objective='multi:softmax',
num_round=30)
###Output
_____no_output_____
###Markdown
Train the model
###Code
%%time
# takes around 3min 11s
xgb.fit({'train': train_data, 'validation': test_data, })
###Output
_____no_output_____
###Markdown
Deploy the model and create an endpoint for itThe following action will: * get the assets from the job we just ran and then create an input in the Models Catalog * create an endpoint configuration (the metadata for our final endpoint) * create an endpoint, which is our model wrapped in the form of a WebService After that we'll be able to call our deployed endpoint for making predictions
###Code
%%time
xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
endpoint_name = xgb_predictor.endpoint
model_name = boto3.client('sagemaker').describe_endpoint_config(
EndpointConfigName=endpoint_name
)['ProductionVariants'][0]['ModelName']
###Output
_____no_output_____
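###Markdown
If you want to double-check that the endpoint is really up before sending traffic, the minimal sketch below asks the SageMaker API for its status. `deploy()` already waits for the endpoint, so this is just a sanity check.
###Code
# Minimal sketch: check the endpoint status via the SageMaker API
status = boto3.client('sagemaker').describe_endpoint(EndpointName=endpoint_name)['EndpointStatus']
print('Endpoint %s is %s' % (endpoint_name, status))
###Output
_____no_output_____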
###Markdown
Now, let's do a basic test with the deployed endpointIn this test, we'll use a helper object called predictor. This object is always returned from a **Deploy** call. The predictor is just for testing purposes and we'll not use it inside our real application.
###Code
from sagemaker.predictor import csv_serializer
from sklearn.metrics import f1_score
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
xgb_predictor.deserializer = None
predictions_test = [ float(xgb_predictor.predict(x).decode('utf-8')) for x in X_test]
score = f1_score(y_test,predictions_test,labels=[0.0,1.0,2.0],average='micro')
print('F1 Score(micro): %.1f' % (score * 100.0))
###Output
_____no_output_____
###Markdown
Then, let's test the API for our trained modelThis is how your application will call the endpoint. We use boto3 to get a SageMaker runtime client and then call invoke_endpoint
###Code
sm = boto3.client('sagemaker-runtime')
from sagemaker.predictor import csv_serializer
resp = sm.invoke_endpoint(
EndpointName=endpoint_name,
ContentType='text/csv',
Body=csv_serializer(X_test[0])
)
prediction = float(resp['Body'].read().decode('utf-8'))
print('Predicted class: %.1f for [%s]' % (prediction, csv_serializer(X_test[0])) )
###Output
_____no_output_____
###Markdown
PART 2 - Model optimization with Hyperparameter Tuning Hyperparameter Tuning Jobs A.K.A. Hyperparameter Optimization Let's tune our model before using it for our batch predictionWe know that the iris dataset is an easy challenge. We can achieve a better score with XGBoost. However, we don't want to waste time testing all the possible variations of the hyperparameters in order to optimize the training process. Instead, we'll use SageMaker's tuning feature. For that, we'll use the same estimator, but let's create a Tuner and ask it to optimize the model for us.
###Code
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner
hyperparameter_ranges = {'eta': ContinuousParameter(0, 1),
'min_child_weight': ContinuousParameter(1, 10),
'alpha': ContinuousParameter(0, 2),
'gamma': ContinuousParameter(0, 10),
'max_depth': IntegerParameter(1, 10)}
objective_metric_name = 'validation:merror'
tuner = HyperparameterTuner(xgb,
objective_metric_name,
hyperparameter_ranges,
max_jobs=20,
max_parallel_jobs=4,
objective_type='Minimize')
tuner.fit({'train': train_data, 'validation': test_data, })
tuner.wait()
job_name = tuner.latest_tuning_job.name
attached_tuner = HyperparameterTuner.attach(job_name)
xgb_predictor2 = attached_tuner.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
first_endpoint_name = endpoint_name
endpoint_name = xgb_predictor2.endpoint
model_name = boto3.client('sagemaker').describe_endpoint_config(
EndpointConfigName=endpoint_name
)['ProductionVariants'][0]['ModelName']
###Output
_____no_output_____
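###Markdown
Out of curiosity, you can also ask the SageMaker API which training job the tuner picked as the best one. This is a minimal sketch using boto3; it only reads metadata about the tuning job we just ran.
###Code
# Minimal sketch: show the best training job found by the tuning job
resp = boto3.client('sagemaker').describe_hyper_parameter_tuning_job(
    HyperParameterTuningJobName=job_name
)
best = resp['BestTrainingJob']
print(best['TrainingJobName'])
print(best['FinalHyperParameterTuningJobObjectiveMetric'])
###Output
_____no_output_____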
###Markdown
A simple test before we move on
###Code
from sagemaker.predictor import csv_serializer
from sklearn.metrics import f1_score
xgb_predictor2.content_type = 'text/csv'
xgb_predictor2.serializer = csv_serializer
xgb_predictor2.deserializer = None
predictions_test = [ float(xgb_predictor2.predict(x).decode('utf-8')) for x in X_test]
score = f1_score(y_test,predictions_test,labels=[0.0,1.0,2.0],average='micro')
print('F1 Score(micro): %.1f' % (score * 100.0))
###Output
_____no_output_____
###Markdown
PART 3 - Batch Prediction Batch transform jobIf you have a file with the samples you want to predict, just upload that file to an S3 bucket and start a Batch Transform job. For this task, you don't need to deploy an endpoint. Sagemaker will create all the resources needed to do this batch prediction, save the results into an S3 bucket and then it will destroy the resources automatically for you
###Code
batch_dataset_filename='batch_dataset.csv'
with open(batch_dataset_filename, 'w') as csv:
for x_ in X:
line = ",".join( list(map(str, x_)) )
csv.write( line + "\n" )
csv.flush()
csv.close()
input_batch = sagemaker_session.upload_data(path=batch_dataset_filename, key_prefix='%s/data' % prefix)
import sagemaker
# Initialize the transformer object
transformer=sagemaker.transformer.Transformer(
base_transform_job_name='mlops-iris',
model_name=model_name,
instance_count=1,
instance_type='ml.c4.xlarge',
output_path='s3://{}/{}/batch_output'.format(bucket, prefix),
)
# To start a transform job:
transformer.transform(input_batch, content_type='text/csv', split_type='Line')
# Then wait until transform job is completed
transformer.wait()
import boto3
predictions_filename='iris_predictions.csv'
s3 = boto3.client('s3')
s3.download_file(bucket, '{}/batch_output/{}.out'.format(prefix, batch_dataset_filename), predictions_filename)
df2 = pd.read_csv(predictions_filename, sep=',', encoding='utf-8',header=None, names=[ 'predicted_iris_id'])
df3 = df.copy()
df3['predicted_iris_id'] = df2['predicted_iris_id']
df3.head()
from sklearn.metrics import f1_score
score = f1_score(df3['iris_id'], df3['predicted_iris_id'],labels=[0.0,1.0,2.0],average='micro')
print('F1 Score(micro): %.1f' % (score * 100.0))
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
cnf_matrix = confusion_matrix(df3['iris_id'], df3['predicted_iris_id'])
f, ax = plt.subplots(figsize=(15, 8))
sns.heatmap(cnf_matrix, annot=True, fmt="f", mask=np.zeros_like(cnf_matrix, dtype=np.bool),
cmap=sns.diverging_palette(220, 10, as_cmap=True),
square=True, ax=ax)
###Output
_____no_output_____
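###Markdown
A per-class view complements the confusion matrix. The sketch below runs scikit-learn's classification_report on the same batch predictions stored in df3; the class names are just the iris species in label order.
###Code
# Minimal sketch: per-class precision/recall/F1 for the batch predictions
from sklearn.metrics import classification_report
print(classification_report(df3['iris_id'], df3['predicted_iris_id'],
                            target_names=['setosa', 'versicolor', 'virginica']))
###Output
_____no_output_____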
###Markdown
Cleaning up
###Code
# The sagemaker module itself has no delete_endpoint helper; use the session instead
sagemaker_session.delete_endpoint(endpoint_name)
sagemaker_session.delete_endpoint(first_endpoint_name)
###Output
_____no_output_____
###Markdown
Training/Optimizing a basic model with a Built AlgorithmThis exercise is about manually executing all the steps of the Machine Learning development pipeline. We'll use a public dataset called iris here. The dataset and the model aren't the focus of this exercise. The idea here is to see how SageMaker can accelerate your work and avoid wasting your time with tasks that aren't related to your business. So, we'll do the following: - Train/deploy/test a multiclass model using XGBoost - Optimize the model - Run batch predictions PART 1 - Train, deploy and test Let's start by importing the dataset and visualizing it
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import datasets
sns.set(color_codes=True)
iris = datasets.load_iris()
X=iris.data
y=iris.target
dataset = np.insert(iris.data, 0, iris.target,axis=1)
df = pd.DataFrame(data=dataset, columns=['iris_id'] + iris.feature_names)
df['species'] = df['iris_id'].map(lambda x: 'setosa' if x == 0 else 'versicolor' if x == 1 else 'virginica')
df.head()
df.describe()
###Output
_____no_output_____
###Markdown
Checking the class distribution
###Code
ax = df.groupby(df['species'])['species'].count().plot(kind='bar')
x_offset = -0.05
y_offset = 0
for p in ax.patches:
b = p.get_bbox()
val = "{}".format(int(b.y1 + b.y0))
ax.annotate(val, ((b.x0 + b.x1)/2 + x_offset, b.y1 + y_offset))
###Output
_____no_output_____
###Markdown
Correlation Matrix
###Code
corr = df.corr()
f, ax = plt.subplots(figsize=(15, 8))
sns.heatmap(corr, annot=True, fmt="f",
xticklabels=corr.columns.values,
yticklabels=corr.columns.values,
ax=ax)
###Output
_____no_output_____
###Markdown
Pairplots & histograms
###Code
sns.pairplot(df.drop(['iris_id'], axis=1), hue='species', size=2.5,diag_kind="kde")
###Output
_____no_output_____
###Markdown
Now with linear regression
###Code
sns.pairplot(df.drop(['iris_id'], axis=1), kind="reg", hue='species', size=2.5,diag_kind="kde")
###Output
_____no_output_____
###Markdown
Fit and plot a univariate or bivariate kernel density estimate.
###Code
tmp_df = df[(df.iris_id==0.0)]
sns.kdeplot(tmp_df['petal width (cm)'], tmp_df['petal length (cm)'], bw='silverman', cmap="Reds", shade=False, shade_lowest=False)
tmp_df = df[(df.iris_id==2.0)]
sns.kdeplot(tmp_df['petal width (cm)'], tmp_df['petal length (cm)'], bw='silverman', cmap="Blues", shade=False, shade_lowest=False)
tmp_df = df[(df.iris_id==1.0)]
sns.kdeplot(tmp_df['petal width (cm)'], tmp_df['petal length (cm)'], bw='silverman', cmap="Greens", shade=False, shade_lowest=False)
plt.xlabel('petal width (cm)')
###Output
_____no_output_____
###Markdown
Ok. Petal length and petal width have the highest linear correlation with our label. Also, sepal width seems to be useless, considering the linear correlation with our label. Since versicolor and virginica cannot be split linearly, we need a more versatile algorithm to create a better classifier. In this case, we'll use XGBoost, a tree ensemble that can give us a good model for predicting the flower. Ok, now let's split the dataset into training and test
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42, stratify=y)
with open('iris_train.csv', 'w') as csv:
for x_,y_ in zip(X_train, y_train):
line = "%s,%s" % (y_, ",".join( list(map(str, x_)) ) )
csv.write( line + "\n" )
csv.flush()
csv.close()
with open('iris_test.csv', 'w') as csv:
for x_,y_ in zip(X_test, y_test):
line = "%s,%s" % (y_, ",".join( list(map(str, x_)) ) )
csv.write( line + "\n" )
csv.flush()
csv.close()
###Output
_____no_output_____
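###Markdown
A quick sanity check of the files we just wrote: the builtin XGBoost expects the label in the first column and no header row, so reading the training CSV back should show iris_id followed by the four features.
###Code
# Minimal sketch: peek at the training CSV we just generated
pd.read_csv('iris_train.csv', header=None,
            names=['iris_id'] + iris.feature_names).head()
###Output
_____no_output_____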
###Markdown
Now it's time to train our model with the builtin algorithm XGBoost
###Code
import sagemaker
import boto3
from sagemaker import get_execution_role
from sklearn.model_selection import train_test_split
role = get_execution_role()
prefix='mlops/iris'
# Retrieve the default bucket
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
# Upload the dataset to an S3 bucket
input_train = sagemaker_session.upload_data(path='iris_train.csv', key_prefix='%s/data' % prefix)
input_test = sagemaker_session.upload_data(path='iris_test.csv', key_prefix='%s/data' % prefix)
train_data = sagemaker.session.s3_input(s3_data=input_train,content_type="csv")
test_data = sagemaker.session.s3_input(s3_data=input_test,content_type="csv")
containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/xgboost:latest',
'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest',
'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/xgboost:latest',
'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/xgboost:latest'}
# Create the estimator
xgb = sagemaker.estimator.Estimator(containers[boto3.Session().region_name],
role,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
output_path='s3://{}/{}/output'.format(bucket, prefix),
sagemaker_session=sagemaker_session)
# Set the hyperparameters
xgb.set_hyperparameters(eta=0.1,
max_depth=10,
gamma=4,
reg_lambda=10,
num_class=len(np.unique(y)),
alpha=10,
min_child_weight=6,
silent=0,
objective='multi:softmax',
num_round=30)
###Output
_____no_output_____
###Markdown
Train the model
###Code
%%time
# takes around 3min 11s
xgb.fit({'train': train_data, 'validation': test_data, })
###Output
_____no_output_____
###Markdown
Deploy the model and create an endpoint for itThe following action will: * get the assets from the job we just ran and then create an input in the Models Catalog * create an endpoint configuration (the metadata for our final endpoint) * create an endpoint, which is our model wrapped in the form of a WebService After that we'll be able to call our deployed endpoint for making predictions
###Code
%%time
xgb_predictor = xgb.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
endpoint_name = xgb_predictor.endpoint
model_name = boto3.client('sagemaker').describe_endpoint_config(
EndpointConfigName=endpoint_name
)['ProductionVariants'][0]['ModelName']
###Output
_____no_output_____
###Markdown
Now, let's do a basic test with the deployed endpointIn this test, we'll use a helper object called predictor. This object is always returned from a **Deploy** call. The predictor is just for testing purposes and we'll not use it inside our real application.
###Code
from sagemaker.predictor import csv_serializer
from sklearn.metrics import f1_score
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
xgb_predictor.deserializer = None
predictions_test = [ float(xgb_predictor.predict(x).decode('utf-8')) for x in X_test]
score = f1_score(y_test,predictions_test,labels=[0.0,1.0,2.0],average='micro')
print('F1 Score(micro): %.1f' % (score * 100.0))
###Output
_____no_output_____
###Markdown
Then, let's test the API for our trained modelThis is how your application will call the endpoint. We use boto3 to get a SageMaker runtime client and then call invoke_endpoint
###Code
sm = boto3.client('sagemaker-runtime')
from sagemaker.predictor import csv_serializer
resp = sm.invoke_endpoint(
EndpointName=endpoint_name,
ContentType='text/csv',
Body=csv_serializer(X_test[0])
)
prediction = float(resp['Body'].read().decode('utf-8'))
print('Predicted class: %.1f for [%s]' % (prediction, csv_serializer(X_test[0])) )
###Output
_____no_output_____
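###Markdown
Before moving on, a very rough latency check can be useful. The minimal sketch below times a handful of single-record invoke_endpoint calls from the notebook; network overhead from the notebook instance is included, so treat the numbers as indicative only.
###Code
# Minimal sketch: rough client-side latency for a few single-record requests
import time
latencies = []
for x in X_test[:20]:
    start = time.time()
    sm.invoke_endpoint(EndpointName=endpoint_name, ContentType='text/csv', Body=csv_serializer(x))
    latencies.append((time.time() - start) * 1000.0)
print('avg: %.1f ms, max: %.1f ms' % (np.mean(latencies), np.max(latencies)))
###Output
_____no_output_____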
###Markdown
PART 2 - Model optimization with Hyperparameter Tuning Hyperparameter Tuning Jobs A.K.A. Hyperparameter Optimization Let's tune our model before using it for our batch predictionWe know that the iris dataset is an easy challenge. We can achieve a better score with XGBoost. However, we don't want to waste time testing all the possible variations of the hyperparameters in order to optimize the training process. Instead, we'll use SageMaker's tuning feature. For that, we'll use the same estimator, but let's create a Tuner and ask it to optimize the model for us.
###Code
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner
hyperparameter_ranges = {'eta': ContinuousParameter(0, 1),
'min_child_weight': ContinuousParameter(1, 10),
'alpha': ContinuousParameter(0, 2),
'gamma': ContinuousParameter(0, 10),
'max_depth': IntegerParameter(1, 10)}
objective_metric_name = 'validation:merror'
tuner = HyperparameterTuner(xgb,
objective_metric_name,
hyperparameter_ranges,
max_jobs=20,
max_parallel_jobs=4,
objective_type='Minimize')
tuner.fit({'train': train_data, 'validation': test_data, })
tuner.wait()
job_name = tuner.latest_tuning_job.name
attached_tuner = HyperparameterTuner.attach(job_name)
xgb_predictor2 = attached_tuner.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
first_endpoint_name = endpoint_name
endpoint_name = xgb_predictor2.endpoint
model_name = boto3.client('sagemaker').describe_endpoint_config(
EndpointConfigName=endpoint_name
)['ProductionVariants'][0]['ModelName']
###Output
_____no_output_____
###Markdown
A simple test before we move on
###Code
from sagemaker.predictor import csv_serializer
from sklearn.metrics import f1_score
xgb_predictor2.content_type = 'text/csv'
xgb_predictor2.serializer = csv_serializer
xgb_predictor2.deserializer = None
predictions_test = [ float(xgb_predictor2.predict(x).decode('utf-8')) for x in X_test]
score = f1_score(y_test,predictions_test,labels=[0.0,1.0,2.0],average='micro')
print('F1 Score(micro): %.1f' % (score * 100.0))
###Output
_____no_output_____
###Markdown
PART 3 - Batch Prediction Batch transform jobIf you have a file with the samples you want to predict, just upload that file to an S3 bucket and start a Batch Transform job. For this task, you don't need to deploy an endpoint. Sagemaker will create all the resources needed to do this batch prediction, save the results into an S3 bucket and then it will destroy the resources automatically for you
###Code
batch_dataset_filename='batch_dataset.csv'
with open(batch_dataset_filename, 'w') as csv:
for x_ in X:
line = ",".join( list(map(str, x_)) )
csv.write( line + "\n" )
csv.flush()
csv.close()
input_batch = sagemaker_session.upload_data(path=batch_dataset_filename, key_prefix='%s/data' % prefix)
import sagemaker
# Initialize the transformer object
transformer=sagemaker.transformer.Transformer(
base_transform_job_name='mlops-iris',
model_name=model_name,
instance_count=1,
instance_type='ml.c4.xlarge',
output_path='s3://{}/{}/batch_output'.format(bucket, prefix),
)
# To start a transform job:
transformer.transform(input_batch, content_type='text/csv', split_type='Line')
# Then wait until transform job is completed
transformer.wait()
import boto3
predictions_filename='iris_predictions.csv'
s3 = boto3.client('s3')
s3.download_file(bucket, '{}/batch_output/{}.out'.format(prefix, batch_dataset_filename), predictions_filename)
df2 = pd.read_csv(predictions_filename, sep=',', encoding='utf-8',header=None, names=[ 'predicted_iris_id'])
df3 = df.copy()
df3['predicted_iris_id'] = df2['predicted_iris_id']
df3.head()
from sklearn.metrics import f1_score
score = f1_score(df3['iris_id'], df3['predicted_iris_id'],labels=[0.0,1.0,2.0],average='micro')
print('F1 Score(micro): %.1f' % (score * 100.0))
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
cnf_matrix = confusion_matrix(df3['iris_id'], df3['predicted_iris_id'])
f, ax = plt.subplots(figsize=(15, 8))
sns.heatmap(cnf_matrix, annot=True, fmt="f", mask=np.zeros_like(cnf_matrix, dtype=np.bool),
cmap=sns.diverging_palette(220, 10, as_cmap=True),
square=True, ax=ax)
###Output
_____no_output_____
###Markdown
Cleaning up
###Code
xgb_predictor.delete_endpoint()
xgb_predictor2.delete_endpoint()
###Output
_____no_output_____
###Markdown
Training/Optimizing a basic model with a Built AlgorithmThis exercise is about executing all the steps of the Machine Learning development pipeline, using some features SageMaker offers. We'll use a public dataset called iris here. The dataset and the model aren't the focus of this exercise. The idea here is to see how SageMaker can accelerate your work and avoid wasting your time with tasks that aren't related to your business. So, we'll do the following: - **PART 1** - Train/deploy/test a multiclass model using XGBoost - *Monitoring*: - We will generate a **baseline** for the monitoring. Yes, we can monitor a deployed model by collecting logs from the payload and the model output. SageMaker can suggest some statistics and constraints that can be used to compare with the collected data. Then we can see some **metrics** related to the **model performance**. - We'll also create a monitoring scheduler. With this scheduler, SageMaker will parse the logs from time to time to compute the metrics we need. Given it takes some time to get the results, we'll check these metrics at the end of the exercise, in **Part 4**. - **PART 2** - Optimize the model - **PART 3** - Run batch predictions - **PART 4** - Check the monitoring results, created in **Part 1** PART 1 - Train, deploy and test Let's start by importing the dataset and visualizing it
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import datasets
sns.set(color_codes=True)
iris = datasets.load_iris()
X=iris.data
y=iris.target
dataset = np.insert(iris.data, 0, iris.target,axis=1)
df = pd.DataFrame(data=dataset, columns=['iris_id'] + iris.feature_names)
## We'll also save the dataset, with header, give we'll need to create a baseline for the monitoring
df.to_csv('full_dataset.csv', sep=',', index=None)
df['species'] = df['iris_id'].map(lambda x: 'setosa' if x == 0 else 'versicolor' if x == 1 else 'virginica')
df.head()
df.describe()
###Output
_____no_output_____
###Markdown
Checking the class distribution
###Code
ax = df.groupby(df['species'])['species'].count().plot(kind='bar')
x_offset = -0.05
y_offset = 0
for p in ax.patches:
b = p.get_bbox()
val = "{}".format(int(b.y1 + b.y0))
ax.annotate(val, ((b.x0 + b.x1)/2 + x_offset, b.y1 + y_offset))
###Output
_____no_output_____
###Markdown
Correlation Matrix
###Code
corr = df.corr()
f, ax = plt.subplots(figsize=(15, 8))
sns.heatmap(corr, annot=True, fmt="f",
xticklabels=corr.columns.values,
yticklabels=corr.columns.values,
ax=ax)
###Output
_____no_output_____
###Markdown
Pairplots & histograms
###Code
sns.pairplot(df.drop(['iris_id'], axis=1), hue='species', size=2.5,diag_kind="kde")
###Output
_____no_output_____
###Markdown
Now with linear regression
###Code
sns.pairplot(df.drop(['iris_id'], axis=1), kind="reg", hue='species', size=2.5,diag_kind="kde")
###Output
_____no_output_____
###Markdown
Fit and plot a kernel density estimate.We can see in this dimension an overlap between **versicolor** and **virginica**. This is a better representation of what we identified above.
###Code
tmp_df = df[(df.iris_id==0.0)]
sns.kdeplot(tmp_df['petal width (cm)'], tmp_df['petal length (cm)'], bw='silverman', cmap="Blues", shade=False, shade_lowest=False)
tmp_df = df[(df.iris_id==1.0)]
sns.kdeplot(tmp_df['petal width (cm)'], tmp_df['petal length (cm)'], bw='silverman', cmap="Greens", shade=False, shade_lowest=False)
tmp_df = df[(df.iris_id==2.0)]
sns.kdeplot(tmp_df['petal width (cm)'], tmp_df['petal length (cm)'], bw='silverman', cmap="Reds", shade=False, shade_lowest=False)
plt.xlabel('petal width (cm)')
###Output
_____no_output_____
###Markdown
Ok. Petal length and petal width have the highest linear correlation with our label. Also, sepal width seems to be useless, considering the linear correlation with our label. Since versicolor and virginica cannot be split linearly, we need a more versatile algorithm to create a better classifier. In this case, we'll use XGBoost, a tree ensemble that can give us a good model for predicting the flower. Ok, now let's split the dataset into training and test
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42, stratify=y)
with open('iris_train.csv', 'w') as csv:
for x_,y_ in zip(X_train, y_train):
line = "%s,%s" % (y_, ",".join( list(map(str, x_)) ) )
csv.write( line + "\n" )
csv.flush()
csv.close()
with open('iris_test.csv', 'w') as csv:
for x_,y_ in zip(X_test, y_test):
line = "%s,%s" % (y_, ",".join( list(map(str, x_)) ) )
csv.write( line + "\n" )
csv.flush()
csv.close()
###Output
_____no_output_____
###Markdown
Now it's time to train our model with the builtin algorithm XGBoost
###Code
import sagemaker
import boto3
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import get_image_uri
from sklearn.model_selection import train_test_split
role = get_execution_role()
prefix='mlops/iris'
# Retrieve the default bucket
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
###Output
_____no_output_____
###Markdown
We will launch an async job to create the baseline for the monitoring processA baseline is what the monitoring will consider **normal**. The training dataset with which you trained the model is usually a good baseline dataset. Note that the training dataset data schema and the inference dataset schema should exactly match (i.e. the number and order of the features). From the training dataset you can ask Amazon SageMaker to suggest a set of baseline constraints and generate descriptive statistics to explore the data. For this example, upload the training dataset that was used to train the pre-trained model included in this example. If you already have it in Amazon S3, you can directly point to it.
###Code
from sagemaker.model_monitor import DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat
endpoint_monitor = DefaultModelMonitor(
role=role,
instance_count=1,
instance_type='ml.m5.xlarge',
volume_size_in_gb=20,
max_runtime_in_seconds=3600,
)
endpoint_monitor.suggest_baseline(
baseline_dataset='full_dataset.csv',
dataset_format=DatasetFormat.csv(header=True),
output_s3_uri='s3://{}/{}/monitoring/baseline'.format(bucket, prefix),
wait=False,
logs=False
)
###Output
_____no_output_____
###Markdown
Ok. Let's continue, upload the dataset and train the model
###Code
# Upload the dataset to an S3 bucket
input_train = sagemaker_session.upload_data(path='iris_train.csv', key_prefix='%s/data' % prefix)
input_test = sagemaker_session.upload_data(path='iris_test.csv', key_prefix='%s/data' % prefix)
train_data = sagemaker.session.s3_input(s3_data=input_train,content_type="csv")
test_data = sagemaker.session.s3_input(s3_data=input_test,content_type="csv")
# get the URI for new container
container_uri = get_image_uri(boto3.Session().region_name, 'xgboost', repo_version='0.90-1');
# Create the estimator
xgb = sagemaker.estimator.Estimator(container_uri,
role,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
output_path='s3://{}/{}/output'.format(bucket, prefix),
sagemaker_session=sagemaker_session)
# Set the hyperparameters
xgb.set_hyperparameters(eta=0.1,
max_depth=10,
gamma=4,
num_class=len(np.unique(y)),
alpha=10,
min_child_weight=6,
silent=0,
objective='multi:softmax',
num_round=30)
###Output
_____no_output_____
###Markdown
Train the model
###Code
%%time
# takes around 3min 11s
xgb.fit({'train': train_data, 'validation': test_data, })
###Output
_____no_output_____
###Markdown
Deploy the model and create an endpoint for itThe following action will: * get the assets from the job we just ran and then create an input in the Models Catalog * create an endpoint configuration (the metadata for our final endpoint) * create an endpoint, which is our model wrapped in the form of a WebService After that we'll be able to call our deployed endpoint for making predictions
###Code
%%time
# Enable log capturing in the endpoint
data_capture_configuration = sagemaker.model_monitor.data_capture_config.DataCaptureConfig(
enable_capture=True,
sampling_percentage=100,
destination_s3_uri='s3://{}/{}/monitoring'.format(bucket, prefix),
sagemaker_session=sagemaker_session
)
xgb_predictor = xgb.deploy(
initial_instance_count=1,
instance_type='ml.m4.xlarge',
data_capture_config=data_capture_configuration
)
###Output
_____no_output_____
###Markdown
Alright, now that we have deployed the endpoint, with data capturing enabled, it's time to setup the monitorLet's start by configuring our predictor
###Code
from sagemaker.predictor import csv_serializer
from sklearn.metrics import f1_score
endpoint_name = xgb_predictor.endpoint
model_name = boto3.client('sagemaker').describe_endpoint_config(
EndpointConfigName=endpoint_name
)['ProductionVariants'][0]['ModelName']
xgb_predictor.content_type = 'text/csv'
xgb_predictor.serializer = csv_serializer
xgb_predictor.deserializer = None
###Output
_____no_output_____
###Markdown
And then, we need to create a **Monitoring Schedule** for our endpoint. The command below will create a cron scheduler that will process the logs each hour, so we can see how well our model is doing.
###Code
from sagemaker.model_monitor import CronExpressionGenerator
from time import gmtime, strftime
endpoint_monitor.create_monitoring_schedule(
endpoint_input=endpoint_name,
output_s3_uri='s3://{}/{}/monitoring/reports'.format(bucket, prefix),
statistics=endpoint_monitor.baseline_statistics(),
constraints=endpoint_monitor.suggested_constraints(),
schedule_cron_expression=CronExpressionGenerator.hourly(),
enable_cloudwatch_metrics=True,
)
###Output
_____no_output_____
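###Markdown
You can confirm the schedule was created (and check its status later) with a quick boto3 call. This is a minimal sketch that only lists the monitoring schedules attached to our endpoint.
###Code
# Minimal sketch: list the monitoring schedules attached to this endpoint
schedules = boto3.client('sagemaker').list_monitoring_schedules(
    EndpointName=endpoint_name
)['MonitoringScheduleSummaries']
for s in schedules:
    print(s['MonitoringScheduleName'], s['MonitoringScheduleStatus'])
###Output
_____no_output_____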
###Markdown
Just take a look at the baseline created/suggested by SageMaker for your datasetThis set of statistics and constraints will be used by the Monitoring Scheduler to compare the incoming data with what is considered **normal**. Each invalid payload sent to the endpoint will be considered a violation.
###Code
baseline_job = endpoint_monitor.latest_baselining_job
schema_df = pd.io.json.json_normalize(baseline_job.baseline_statistics().body_dict["features"])
constraints_df = pd.io.json.json_normalize(baseline_job.suggested_constraints().body_dict["features"])
report_df = schema_df.merge(constraints_df)
report_df.drop([
'numerical_statistics.distribution.kll.buckets',
'numerical_statistics.distribution.kll.sketch.data',
'numerical_statistics.distribution.kll.sketch.parameters.c'
], axis=1).head(10)
###Output
_____no_output_____
###Markdown
Start generating some artificial trafficThe cell below starts a thread to send some traffic to the endpoint. Note that you need to stop the kernel to terminate this thread. If there is no traffic, the monitoring jobs are marked as `Failed` since there is no data to process.
###Code
import random
import time
from threading import Thread
traffic_generator_running=True
def invoke_endpoint_forever():
print('Invoking endpoint forever!')
while traffic_generator_running:
## This will create an invalid set of features
        ## The idea is to violate two monitoring constraints: not_null and data_drift
null_idx = random.randint(0,3)
sample = [random.randint(500,2000) / 100.0 for i in range(4)]
sample[null_idx] = None
xgb_predictor.predict(sample)
time.sleep(0.5)
print('Endpoint invoker has stopped')
Thread(target = invoke_endpoint_forever).start()
###Output
_____no_output_____
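###Markdown
Once the endpoint has served a few requests, the captured payloads start to land in S3. The minimal sketch below lists objects under the monitoring prefix we configured (capture files, plus the baseline and report folders live there too); if it prints nothing yet, wait a minute and run it again, since captures are written in batches.
###Code
# Minimal sketch: list the objects written under the monitoring prefix (data captures, baseline, reports)
s3_client = boto3.client('s3')
resp = s3_client.list_objects_v2(Bucket=bucket, Prefix='{}/monitoring'.format(prefix))
for obj in resp.get('Contents', [])[:10]:
    print(obj['Key'])
###Output
_____no_output_____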
###Markdown
Now, let's do a basic test with the deployed endpointIn this test, we'll use a helper object called predictor. This object is always returned from a **Deploy** call. The predictor is just for testing purposes and we'll not use it inside our real application.
###Code
predictions_test = [ float(xgb_predictor.predict(x).decode('utf-8')) for x in X_test]
score = f1_score(y_test,predictions_test,labels=[0.0,1.0,2.0],average='micro')
print('F1 Score(micro): %.1f' % (score * 100.0))
###Output
_____no_output_____
###Markdown
Then, let's test the API for our trained modelThis is how your application will call the endpoint. We use boto3 to get a SageMaker runtime client and then call invoke_endpoint
###Code
from sagemaker.predictor import csv_serializer
sm = boto3.client('sagemaker-runtime')
resp = sm.invoke_endpoint(
EndpointName=endpoint_name,
ContentType='text/csv',
Body=csv_serializer(X_test[0])
)
prediction = float(resp['Body'].read().decode('utf-8'))
print('Predicted class: %.1f for [%s]' % (prediction, csv_serializer(X_test[0])) )
###Output
_____no_output_____
###Markdown
PART 2 - Model optimization with Hyperparameter Tuning Hyperparameter Tuning Jobs A.K.A. Hyperparameter Optimization Let's tune our model before using it for our batch predictionWe know that the iris dataset is an easy challenge. We can achieve a better score with XGBoost. However, we don't want to waste time testing all the possible variations of the hyperparameters in order to optimize the training process. Instead, we'll use SageMaker's tuning feature. For that, we'll use the same estimator, but let's create a Tuner and ask it to optimize the model for us.
###Code
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner
hyperparameter_ranges = {'eta': ContinuousParameter(0, 1),
'min_child_weight': ContinuousParameter(1, 10),
'alpha': ContinuousParameter(0, 2),
'gamma': ContinuousParameter(0, 10),
'max_depth': IntegerParameter(1, 10)}
objective_metric_name = 'validation:merror'
tuner = HyperparameterTuner(xgb,
objective_metric_name,
hyperparameter_ranges,
max_jobs=20,
max_parallel_jobs=4,
objective_type='Minimize')
tuner.fit({'train': train_data, 'validation': test_data, })
tuner.wait()
job_name = tuner.latest_tuning_job.name
attached_tuner = HyperparameterTuner.attach(job_name)
xgb_predictor2 = attached_tuner.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
first_endpoint_name = endpoint_name
endpoint_name = xgb_predictor2.endpoint
model_name = boto3.client('sagemaker').describe_endpoint_config(
EndpointConfigName=endpoint_name
)['ProductionVariants'][0]['ModelName']
###Output
_____no_output_____
###Markdown
A simple test before we move on
###Code
from sagemaker.predictor import csv_serializer
from sklearn.metrics import f1_score
xgb_predictor2.content_type = 'text/csv'
xgb_predictor2.serializer = csv_serializer
xgb_predictor2.deserializer = None
predictions_test = [ float(xgb_predictor2.predict(x).decode('utf-8')) for x in X_test]
score = f1_score(y_test,predictions_test,labels=[0.0,1.0,2.0],average='micro')
print('F1 Score(micro): %.1f' % (score * 100.0))
###Output
_____no_output_____
###Markdown
PART 3 - Batch Prediction Batch transform jobIf you have a file with the samples you want to predict, just upload that file to an S3 bucket and start a Batch Transform job. For this task, you don't need to deploy an endpoint. Sagemaker will create all the resources needed to do this batch prediction, save the results into an S3 bucket and then it will destroy the resources automatically for you
###Code
batch_dataset_filename='batch_dataset.csv'
with open(batch_dataset_filename, 'w') as csv:
for x_ in X:
line = ",".join( list(map(str, x_)) )
csv.write( line + "\n" )
csv.flush()
csv.close()
input_batch = sagemaker_session.upload_data(path=batch_dataset_filename, key_prefix='%s/data' % prefix)
import sagemaker
# Initialize the transformer object
transformer=sagemaker.transformer.Transformer(
base_transform_job_name='mlops-iris',
model_name=model_name,
instance_count=1,
instance_type='ml.c4.xlarge',
output_path='s3://{}/{}/batch_output'.format(bucket, prefix),
)
# To start a transform job:
transformer.transform(input_batch, content_type='text/csv', split_type='Line')
# Then wait until transform job is completed
transformer.wait()
import boto3
predictions_filename='iris_predictions.csv'
s3 = boto3.client('s3')
s3.download_file(bucket, '{}/batch_output/{}.out'.format(prefix, batch_dataset_filename), predictions_filename)
df2 = pd.read_csv(predictions_filename, sep=',', encoding='utf-8',header=None, names=[ 'predicted_iris_id'])
df3 = df.copy()
df3['predicted_iris_id'] = df2['predicted_iris_id']
df3.head()
from sklearn.metrics import f1_score
score = f1_score(df3['iris_id'], df3['predicted_iris_id'],labels=[0.0,1.0,2.0],average='micro')
print('F1 Score(micro): %.1f' % (score * 100.0))
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
cnf_matrix = confusion_matrix(df3['iris_id'], df3['predicted_iris_id'])
f, ax = plt.subplots(figsize=(15, 8))
sns.heatmap(cnf_matrix, annot=True, fmt="f", mask=np.zeros_like(cnf_matrix, dtype=np.bool),
cmap=sns.diverging_palette(220, 10, as_cmap=True),
square=True, ax=ax)
###Output
_____no_output_____
###Markdown
PART 4 - Checking the monitoring resultsThe HPO took something like 20 minutes to run. The batch prediction, 3-5 more. It is probably enough time to have at least one execution of the monitor schedule. Since we created a thread for generating **invalid** features, we must have some data drift detected in our monitoring. Let's check
###Code
mon_executions = endpoint_monitor.list_executions()
print("We created an hourly schedule above and it will kick off executions ON the hour (plus a 0-20 min buffer).\nWe will have to wait till we hit the hour...")
while len(mon_executions) == 0:
print("Waiting for the 1st execution to happen...")
time.sleep(60)
mon_executions = endpoint_monitor.list_executions()
print('OK. we have %d execution(s) now' % len(mon_executions))
import time
tries=5
constraints_df = None
while tries > 0:
tries -= 1
try:
violations = endpoint_monitor.latest_monitoring_constraint_violations()
pd.set_option('display.max_colwidth', -1)
constraints_df = pd.io.json.json_normalize(violations.body_dict["violations"])
tries=0
except Exception as e:
time.sleep(2)
constraints_df.head(10)
###Output
_____no_output_____
###Markdown
You can also check these metrics on CloudWatch. Just open the CloudWatch console, click on **Metrics**, then select: All -> aws/sagemaker/Endpoints/data-metrics -> Endpoint, MonitoringScheduleUse the *endpoint_monitor* name to filter the metrics Cleaning up
###Code
traffic_generator_running=False
time.sleep(3)
endpoint_monitor.delete_monitoring_schedule()
time.sleep(10) # wait for 10 seconds before trying to delete the endpoint
xgb_predictor.delete_endpoint()
xgb_predictor2.delete_endpoint()
###Output
_____no_output_____ |
pandas/03.03-Operations-in-Pandas.ipynb | ###Markdown
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).**The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!**No changes were made to the contents of this notebook from the original.* Operating on Data in Pandas One of the essential pieces of NumPy is the ability to perform quick element-wise operations, both with basic arithmetic (addition, subtraction, multiplication, etc.) and with more sophisticated operations (trigonometric functions, exponential and logarithmic functions, etc.).Pandas inherits much of this functionality from NumPy, and the ufuncs that we introduced in [Computation on NumPy Arrays: Universal Functions](02.03-Computation-on-arrays-ufuncs.ipynb) are key to this.Pandas includes a couple useful twists, however: for unary operations like negation and trigonometric functions, these ufuncs will *preserve index and column labels* in the output, and for binary operations such as addition and multiplication, Pandas will automatically *align indices* when passing the objects to the ufunc.This means that keeping the context of data and combining data from different sources–both potentially error-prone tasks with raw NumPy arrays–become essentially foolproof ones with Pandas.We will additionally see that there are well-defined operations between one-dimensional ``Series`` structures and two-dimensional ``DataFrame`` structures. Ufuncs: Index PreservationBecause Pandas is designed to work with NumPy, any NumPy ufunc will work on Pandas ``Series`` and ``DataFrame`` objects.Let's start by defining a simple ``Series`` and ``DataFrame`` on which to demonstrate this:
###Code
import pandas as pd
import numpy as np
rng = np.random.RandomState(42)
ser = pd.Series(rng.randint(0, 10, 4))
ser
df = pd.DataFrame(rng.randint(0, 10, (3, 4)),
columns=['A', 'B', 'C', 'D'])
df
###Output
_____no_output_____
###Markdown
If we apply a NumPy ufunc on either of these objects, the result will be another Pandas object *with the indices preserved:*
###Code
np.exp(ser)
###Output
_____no_output_____
###Markdown
Or, for a slightly more complex calculation:
###Code
np.sin(df * np.pi / 4)
###Output
_____no_output_____
###Markdown
Any of the ufuncs discussed in [Computation on NumPy Arrays: Universal Functions](02.03-Computation-on-arrays-ufuncs.ipynb) can be used in a similar manner. UFuncs: Index AlignmentFor binary operations on two ``Series`` or ``DataFrame`` objects, Pandas will align indices in the process of performing the operation.This is very convenient when working with incomplete data, as we'll see in some of the examples that follow. Index alignment in SeriesAs an example, suppose we are combining two different data sources, and find only the top three US states by *area* and the top three US states by *population*:
###Code
area = pd.Series({'Alaska': 1723337, 'Texas': 695662,
'California': 423967}, name='area')
population = pd.Series({'California': 38332521, 'Texas': 26448193,
'New York': 19651127}, name='population')
###Output
_____no_output_____
###Markdown
Let's see what happens when we divide these to compute the population density:
###Code
population / area
###Output
_____no_output_____
###Markdown
The resulting array contains the *union* of indices of the two input arrays, which could be determined using standard Python set arithmetic on these indices:
###Code
area.index | population.index
###Output
_____no_output_____
###Markdown
Any item for which one or the other does not have an entry is marked with ``NaN``, or "Not a Number," which is how Pandas marks missing data (see further discussion of missing data in [Handling Missing Data](03.04-Missing-Values.ipynb)).This index matching is implemented this way for any of Python's built-in arithmetic expressions; any missing values are filled in with NaN by default:
###Code
A = pd.Series([2, 4, 6], index=[0, 1, 2])
B = pd.Series([1, 3, 5], index=[1, 2, 3])
A + B
###Output
_____no_output_____
###Markdown
If using NaN values is not the desired behavior, the fill value can be modified using appropriate object methods in place of the operators.For example, calling ``A.add(B)`` is equivalent to calling ``A + B``, but allows optional explicit specification of the fill value for any elements in ``A`` or ``B`` that might be missing:
###Code
A.add(B, fill_value=0)
###Output
_____no_output_____
###Markdown
Index alignment in DataFrameA similar type of alignment takes place for *both* columns and indices when performing operations on ``DataFrame``s:
###Code
A = pd.DataFrame(rng.randint(0, 20, (2, 2)),
columns=list('AB'))
A
B = pd.DataFrame(rng.randint(0, 10, (3, 3)),
columns=list('BAC'))
B
A + B
###Output
_____no_output_____
###Markdown
Notice that indices are aligned correctly irrespective of their order in the two objects, and indices in the result are sorted.As was the case with ``Series``, we can use the associated object's arithmetic method and pass any desired ``fill_value`` to be used in place of missing entries.Here we'll fill with the mean of all values in ``A`` (computed by first stacking the rows of ``A``):
###Code
fill = A.stack().mean()
A.add(B, fill_value=fill)
###Output
_____no_output_____
###Markdown
The following table lists Python operators and their equivalent Pandas object methods:| Python Operator | Pandas Method(s) ||-----------------|---------------------------------------|| ``+`` | ``add()`` || ``-`` | ``sub()``, ``subtract()`` || ``*`` | ``mul()``, ``multiply()`` || ``/`` | ``truediv()``, ``div()``, ``divide()``|| ``//`` | ``floordiv()`` || ``%`` | ``mod()`` || ``**`` | ``pow()`` | Ufuncs: Operations Between DataFrame and SeriesWhen performing operations between a ``DataFrame`` and a ``Series``, the index and column alignment is similarly maintained.Operations between a ``DataFrame`` and a ``Series`` are similar to operations between a two-dimensional and one-dimensional NumPy array.Consider one common operation, where we find the difference of a two-dimensional array and one of its rows:
###Code
A = rng.randint(10, size=(3, 4))
A
A - A[0]
###Output
_____no_output_____
###Markdown
According to NumPy's broadcasting rules (see [Computation on Arrays: Broadcasting](02.05-Computation-on-arrays-broadcasting.ipynb)), subtraction between a two-dimensional array and one of its rows is applied row-wise.In Pandas, the convention similarly operates row-wise by default:
###Code
df = pd.DataFrame(A, columns=list('QRST'))
df - df.iloc[0]
###Output
_____no_output_____
###Markdown
If you would instead like to operate column-wise, you can use the object methods mentioned earlier, while specifying the ``axis`` keyword:
###Code
df.subtract(df['R'], axis=0)
###Output
_____no_output_____
###Markdown
Note that these ``DataFrame``/``Series`` operations, like the operations discussed above, will automatically align indices between the two elements:
###Code
halfrow = df.iloc[0, ::2]
halfrow
df - halfrow
###Output
_____no_output_____ |
database/tasks/How to plot interaction effects of treatments/Python, using Matplotlib and Seaborn.ipynb | ###Markdown
---author: Krtin Juneja ([email protected])--- The solution below uses an example dataset about the teeth of 10 guinea pigs at three Vitamin C dosage levels (in mg) with two delivery methods (orange juice vs. ascorbic acid). (See how to quickly load some sample data.)
###Code
from rdatasets import data
df = data('ToothGrowth')
###Output
_____no_output_____
###Markdown
To plot the interaction effects among tooth length, supplement, and dosage, we can use the `pointplot` function in the Seaborn package.
###Code
import seaborn as sns
import matplotlib.pyplot as plt
sns.pointplot(x='dose',y='len',hue='supp',data=df)
plt.legend(loc='lower right') # Default is upper right, which overlaps the data here.
plt.show()
###Output
_____no_output_____ |
notebooks/4_PandasDataWrangling.ipynb | ###Markdown
Data Wrangling Contents:* String formatting (f-strings)* Regular expressions (regex)* Pandas (Reading/writing CSV) String formatting Instead of writing a series of `print()` statements with multiple arguments, or concatenating (by `+`) strings, you can also use a Python string formatting method, called `f-strings`. More information can be read in PEP 498: https://www.python.org/dev/peps/pep-0498/ You can define a string as a template by inserting `{ }` characters with a variable name or expression in between. For this to work, you have to type an `f` in front of the `'`, `"` or `"""` start of the string definition. When defined, the string will read with the string value of the variable or the expression filled in. ```pythonname = "Joe"text = f"My name is {name}."```Again, if you need a `'` or `"` in your expression, use the other variant in the Python source code to declare the string. Writing:```pythonf'This is my {example}.'```is equivalent to:```pythonf"This is my {example}."```
###Code
name = "Joe"
text = f"My name is {name}."
print(text)
day = "Monday"
weather = "Sunny"
n_messages = 8
test_dict = {'test': 'test_value'}
text = f"""
Today is {day}.
The weather is {weather.lower()} and you have {n_messages} unread messages.
The first three letters of the weekday: {day[:3]}
An example expression is: {15 ** 2 = }
"""
text = f'Test by selecting key: {test_dict["test"]}'
print(text)
###Output
_____no_output_____
###Markdown
--- Regular expressions Using regular expressions can be very useful when working with texts. It is a powerful search mechanism by which you can search on patterns, instead of 'exact matches'. But, they can be difficult to grasp, at first sight.A **regular expression**, for instance, allows you to substitute all digits in a text, following another text sequence, or to find all urls, phone numbers, or email addresses. Or any text, that meets a particular condition.See the Python manual for the `re` module for more info: https://docs.python.org/3/library/re.htmlYou can/should use a cheatsheet when writing a regular expression. A nice website to write and test them is: https://regex101.com/. Some examples of commonly used expressions:* `\d` for all digits 0-9* `\w` for any word character* `[abc]` for a set of characters (here: a, b, c)* `.` any character* `?` the preceding character/pattern 0 or 1 times* `*` the preceding character/pattern 0 or multiple times* `+` the preceding character/pattern 1 or multiple times* `{1,2}` 1 or 2 times* `^` the start of the string* `$` the end of the string* `|` or* `()` capture group (only return this part)In many text editors (e.g. VSCode) there is also an option to search (and replace) with the help of regular expressions. Python has a regex module built in. When working with a regular expression, you have to import it first:
###Code
import re
###Output
_____no_output_____
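A quick, minimal sketch of a few tokens from the cheatsheet above (the example strings are invented for illustration):

```python
import re

re.findall(r'\d+', 'Order 66, shipped in 3 days')   # ['66', '3']
re.findall(r'colou?r', 'color or colour')           # ['color', 'colour']
re.findall(r'^\w+', 'Hello regex world')            # ['Hello']
re.sub(r'\d', '#', 'PIN: 1234')                     # 'PIN: ####'
```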
###Markdown
You can use a regular expression for **finding** occurrences in a text. Let's say we want to filter out all web URLs in a text:
###Code
text = """
There are various search engines on the web.
There is https://www.google.com/, but also https://www.bing.com/.
A more privacy friendly alternative is https://duckduckgo.com/.
And who remembers http://www.altavista.com/?
"""
re.findall(r'https?://.+?/', text)
# Copied from https://www.imdb.com/search/title/?groups=top_250&sort=user_rating
text = """
1. The Shawshank Redemption (1994)
12 | 142 min | Drama
9,3 Rate this 80 Metascore
Two imprisoned men bond over a number of years, finding solace and eventual redemption through acts of common decency.
Director: Frank Darabont | Stars: Tim Robbins, Morgan Freeman, Bob Gunton, William Sadler
Votes: 2.355.643 | Gross: $28.34M
2. The Godfather (1972)
16 | 175 min | Crime, Drama
9,2 Rate this 100 Metascore
An organized crime dynasty's aging patriarch transfers control of his clandestine empire to his reluctant son.
Director: Francis Ford Coppola | Stars: Marlon Brando, Al Pacino, James Caan, Diane Keaton
Votes: 1.630.157 | Gross: $134.97M
3. The Dark Knight (2008)
16 | 152 min | Action, Crime, Drama
9,0 Rate this 84 Metascore
When the menace known as the Joker wreaks havoc and chaos on the people of Gotham, Batman must accept one of the greatest psychological and physical tests of his ability to fight injustice.
Director: Christopher Nolan | Stars: Christian Bale, Heath Ledger, Aaron Eckhart, Michael Caine
Votes: 2.315.134 | Gross: $534.86M
"""
titles = re.findall(r'\d{1,2}\. (.+)', text)
titles
###Output
_____no_output_____
###Markdown
QuizTry to get a list of all directors. And the gross income.
###Code
# All directors
# Gross income
###Output
_____no_output_____
###Markdown
Or, you can use a regular expression to **replace** a character sequence. This is equivalent to the `.replace()` string method, but allows more variation in the string matching.
###Code
text = """
Tim Robbins
Morgan Freeman
Bob Gunton
William Sadler
Marlon Brando
Al Pacino
James Caan
Diane Keaton
Christian Bale
Heath Ledger
Aaron Eckhart
Michael Caine
"""
# Hint: test this with https://regex101.com/
new_text = re.sub(r"(?:(\w)\w+) (\w+)", r"\1. \2", text)
print(new_text)
###Output
_____no_output_____
###Markdown
--- Data wrangling with Pandas CSV (in Pandas)The other often used file type is CSV (Comma Separated Values), or variants, such as TSV (Tab Separated Values). Python includes another built-in module to deal with these files: the `csv` module. But, we will be using the `Pandas` module, the go-to package for data analysis, that you already imported and updated in Notebook 0. A CSV file is similar to an Excel or Google Docs spreadsheet, but more limited in markup and functionality (e.g. you cannot store Excel functions). It is just a text file in which individual entries correspond to lines, and columns are separated by a comma. You can always open a CSV file with a text editor, and this also makes it so easy to store and share data with.For the rest of the notebook we will see how to work with the two main data types in `pandas`: the `DataFrame` and a `Series`.Information on functions and modules of Pandas cannot be found in the Python manual online, as it is an external package. Instead, you can refer to https://pandas.pydata.org/pandas-docs/stable/index.html . `DataFrame`What is a `pandas.DataFrame`? A `DataFrame` is a collection of `Series` having the same length and whose indexes are in sync. A *collection* means that each column of a dataframe is a series. You can also see it as a spreadheet in memory, that also allows for inclusion of Python objects. We first have to import the package. It's a convention to do this like so with Pandas, which makes the elements from this package (classes, functions, methods) available under its abbreviation `pd`:
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Next is loading the data. The following data comes from Wikipedia and was [automatically](https://query.wikidata.org/%0ASELECT%20DISTINCT%20%3FmovieLabel%20%3Fimdb%20%28MIN%28%3FpublicationYear%29%20as%20%3Fyear%29%20%28year%28%3Fdate%29%20as%20%3Faward_year%29%20%28group_concat%28DISTINCT%20%3FdirectorLabel%3Bseparator%3D%22%2C%20%22%29%20as%20%3Fdirectors%20%29%20%28group_concat%28DISTINCT%20%3FcompanyLabel%3Bseparator%3D%22%2C%20%22%29%20as%20%3Fcompanies%29%20%3Fmale_cast%20%3Ffemale_cast%20WHERE%20%7B%0A%20%20%0A%20%20%7B%0A%20%20%3Fmovie%20p%3AP166%20%3Fawardstatement%20%3B%0A%20%20%20%20%20%20%20%20%20wdt%3AP345%20%3Fimdb%20%3B%0A%20%20%20%20%20%20%20%20%20wdt%3AP577%20%3Fpublication%20%3B%0A%20%20%20%20%20%20%20%20%20wdt%3AP57%20%3Fdirector%20%3B%0A%20%20%20%20%20%20%20%20%20wdt%3AP272%20%3Fcompany%20%3B%0A%20%20%20%20%20%20%20%20%20wdt%3AP31%20wd%3AQ11424%20.%0A%20%20%0A%20%20%3Fawardstatement%20ps%3AP166%20wd%3AQ102427%20%3B%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20pq%3AP585%20%3Fdate%20.%0A%20%20%7D%0A%20%20%0A%20%20BIND%28year%28%3Fpublication%29%20as%20%3FpublicationYear%29%0A%20%20%0A%20%20%7B%0A%20%20%20%20%20SELECT%20%3Fmovie%20%28COUNT%28%3Fcast_member%29%20AS%20%3Fmale_cast%29%20WHERE%20%7B%0A%20%20%20%20%20%20%3Fmovie%20wdt%3AP161%20%3Fcast_member%20.%0A%20%20%20%20%20%20%3Fcast_member%20wdt%3AP21%20wd%3AQ6581097%20.%0A%20%20%20%20%7D%20GROUP%20BY%20%3Fmovie%0A%7D%20%7B%0A%20%20%20%20SELECT%20%3Fmovie%20%28COUNT%28%3Fcast_member%29%20AS%20%3Ffemale_cast%29%20WHERE%20%7B%0A%20%20%20%20%20%20%3Fmovie%20wdt%3AP161%20%3Fcast_member%20.%0A%20%20%20%20%20%20%3Fcast_member%20wdt%3AP21%20wd%3AQ6581072%20.%0A%20%20%20%20%7D%20GROUP%20BY%20%3Fmovie%0A%20%20%7D%0A%20%20%0A%20%20SERVICE%20wikibase%3Alabel%20%7B%20%0A%20%20%20%20bd%3AserviceParam%20wikibase%3Alanguage%20%22en%22%20.%0A%20%20%20%20%3Fmovie%20rdfs%3Alabel%20%3FmovieLabel%20.%0A%20%20%20%20%3Fdirector%20rdfs%3Alabel%20%3FdirectorLabel%20.%0A%20%20%20%20%3Fcompany%20rdfs%3Alabel%20%3FcompanyLabel%20.%20%0A%20%20%7D%0A%7D%20%0A%0AGROUP%20BY%20%3FmovieLabel%20%3Fimdb%20%3Fdate%20%3Fmale_cast%20%3Ffemale_cast%0AORDER%20BY%20%3Fyear%20) retreived. It is an overview of all movies that have won an Academy Award for Best Picture, including some extra data for the movie: a link to the IMDB, the publication and award year, the director(s), production company and the number of male and female actors in the cast. It can be that this data is incorrect, because this information is not entered in Wikipedia. You can find this file in `data/academyawards.csv`. Download it from the repository and save it in the data folder if you don't have it. Reading in a csv with pandas is easy. We call the `pd.read_csv()` function with the file path as argument. Pandas takes care of opening and closing the file, so a `with` statement is not needed. The contents of the csv file are then read in a Pandas DataFrame object. We can store this in the variable `df`. Calling this variable in a Jypyter Notebook gives back a nicely formatted table with the first and last 5 rows of the file.
###Code
df = pd.read_csv('data/academyawards.csv', encoding='utf-8')
df
###Output
_____no_output_____
###Markdown
Think of a `DataFrame` as an in-memory spreadsheet that you can analyse and manipulate programmatically. Or, think of it as a table in which every line is a data entry, and every column holds specific information on this data.These columns can also be seen as lists of values. They are ordered and the index of an element corresponds with the index of the data entry. The collection of all such columns is what makes the DataFrame. One column in a table is represented by a Pandas `Series`, which collects observations about a given variable. Multiple columns are a `DataFrame`. A DataFrame therefore is a collection of lists (=columns), or `Series`.If you look for other methods you can call on `pd`, you'll also see that there is a `pd.read_excel()` option to read spreadsheets in `.xls` or `.xlsx`. You can also use this, if you have these kinds of files. StatisticsNow that we have loaded our DataFrame, we can make pandas print some statistics on the file.
###Code
df.head(1) # First row (df.head() without arguments returns the first 5 rows)
df.tail() # Last 5 rows
df.describe() # Descriptive statistics
###Output
_____no_output_____
###Markdown
As you can see by what they return, these methods return another DataFrame with some descriptive statistics on the file, such as the number of entries (count), the mean of the numerical values, the standard deviation, minimum and maximum values, and the 25th, 50th, and 75th percentiles. The `.info()` method can also be informative. It gives you information about a dataframe:- how much space does it take in memory?- what is the datatype of each column?- how many records are there?- how many `null` values does each column contain (!)?
###Code
df.info()
###Output
_____no_output_____
###Markdown
Pandas automatically interprets which datatypes are used in the file, but this is not always correct. In particular, if a column contains empty fields, its integers get interpreted as floats. Every column has one datatype. You can check them separately by requesting the `.dtypes` attribute on the `df`. The 'object' type is a string in this file, 'int64' is an integer.
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
We expect different datatypes for the description-dataframe:
###Code
description_df = df.describe()
description_df.dtypes
###Output
_____no_output_____
###Markdown
Slicing and selecting `df['column1']`You can select a single column by calling this column name as if the DataFrame was a dictionary. A single column from a DataFrame returns a `Series` object.
###Code
df
print(type(df['movie']))
df['movie']
###Output
_____no_output_____
###Markdown
The `Series` object is very similar to a `list`:
###Code
movies = df['movie']
print("Length:", len(movies))
print()
for n, movie in enumerate(movies[:10], 1):
print(n, movie, sep='\t')
###Output
_____no_output_____
###Markdown
`df[['column1', 'column2']]`We can also slice a DataFrame by calling multiple column names as one list:
###Code
df[['movie', 'imdb']]
###Output
_____no_output_____
###Markdown
Looping over DataFrames You might expect that if you loop through a DataFrame, you get all the rows. Sadly, it is not that simple, because we now have data in two dimensions. We instead get the column names (the header row of the dataframe):
###Code
for r in df:
print(r)
###Output
_____no_output_____
###Markdown
`zip(df['column1'], df['column2'])`Going over these items in a `for` loop needs a different approach. The built-in `zip()` function ([manual](https://docs.python.org/3/library/functions.html#zip)) takes two iterables of equal length and creates a new iterable of tuples (with iterables of unequal length, it stops at the shortest one). The number of arguments/iterables that you give to `zip()` determines the length of the tuples.
###Code
list1 = ['a', 'b', 'c']
list2 = [1, 2, 3]
list(zip(list1, list2))
n = 0
for movie, imdb in zip(df['movie'], df['imdb']):
if n > 9:
break # stop flooding the Notebook
print(movie, "http://www.imdb.com/title/" + imdb, sep='\t')
n += 1
###Output
_____no_output_____
###Markdown
`.to_dict(orient='records')`Or, accessing all entries in a convenient way, as a Python dictionary for instance, can be done with the `.to_dict(orient='records')` method:
###Code
df.head(1)
for r in df.to_dict(orient='records'):
print(r)
print()
name = r['movie']
year = r['year']
won = r['award_year']
print("The movie " + name + " was produced in " + str(year) + " and won in " + str(won) + ".")
print()
break # To not flood the notebook, only print the first
###Output
_____no_output_____
###Markdown
`.iterrows()`Or you can use the `.iterrows()` method, which gives you tuples of the index of the row, and the row itself as `Series` object:
###Code
for n, r in df.iterrows():
name = r.movie # You can use a dot notation here
year = r.year
won = r.award_year
print(f"The movie {name} was produced in {year} and won in {won}.")
print()
break # To not flood the notebook, only print the first
###Output
_____no_output_____
###Markdown
--- Analysis
###Code
df
df.mean(numeric_only=True)  # restrict the mean to numeric columns
###Output
_____no_output_____
###Markdown
You already saw above that you could get statistics by calling `.describe()` on a DataFrame. You can also get these metrics for individual columns. Let's ask the maximum number of male and female actors in the cast of a movie:
###Code
df['female_cast'].max()
df['male_cast'].max()
###Output
_____no_output_____
###Markdown
You can also apply these operations to multiple columns at once. You get a `Series` object back.
###Code
df.max()
df[['male_cast', 'female_cast']]
slice_df = df[['male_cast', 'female_cast']]
slice_df.max()
###Output
_____no_output_____
###Markdown
To find the corresponding movie title, we can ask Pandas to give us the record in which these maxima occur. This is done through `df.loc`. This works by asking: "Give me all the locations (=rows) for which a value in a specified column is equal to this value".
###Code
df
df[['male_cast', 'female_cast']].max()
df[df['female_cast'] > 10]
for column_name, value in df[['male_cast', 'female_cast']].max().items():
print("Movie with maximum for", column_name, value)
row = df.loc[df[column_name] == value]
print(row.movie)
print()
###Output
_____no_output_____
###Markdown
Other functions that can be used are for instance `.mean()`, `.median()`, `.std()` and `.sum()`.
###Code
df['female_cast'].mean()
df['male_cast'].mean()
df['female_cast'].sum()
df['male_cast'].sum()
df
###Output
_____no_output_____
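The other aggregation methods mentioned above work the same way; a minimal sketch on the same columns:

```python
df['female_cast'].median()  # middle value of the female cast sizes
df['male_cast'].std()       # standard deviation of the male cast sizes
```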
###Markdown
Pandas also understands dates, but you have to tell it to interpret a column as such. We can change the `year` column in place so that it is not interpreted as an integer, but as a date object. In this case, since we only have the year available, and not a full date such as `2021-02-22` (YYYY-mm-dd), we have to specify the format. The format code `%Y` stands for a four-digit year (`YYYY`). The result is a full date, so the month and day are set to January 1st.
###Code
df['year'] = pd.to_datetime(df['year'], format='%Y')
df['award_year'] = pd.to_datetime(df['award_year'], format='%Y')
df['year']
df
###Output
_____no_output_____
###Markdown
PlottingLet's try to make some graphs from our data, for instance the number of male/female actors over time. We now have a year column that is interpreted as time by Pandas. These values can figure as values on a x-axis in a graph. The y-axis would then give info on the number of male and female actors in the movie. First, we set an **index** for the DataFrame. This determines how the data can be accessed. Normally, this is a range of 0 untill the number of rows. But, you can change this, so that we can analyse the dataframe on a time index.
###Code
# Select only what we need
df_actors = df[['award_year', 'male_cast', 'female_cast']]
df_actors
df_actors = df_actors.set_index('award_year')
df_actors
###Output
_____no_output_____
###Markdown
Then simply call `.plot()` on your newly created DataFrame!
###Code
df_actors.plot(figsize=(15,10))
###Output
_____no_output_____
###Markdown
There are tons of parameters, functions, methods, transformations you can use on DataFrames and also on this plotting function. Luckily, plenty of guides and examples can be found on the internet. Grouping
###Code
df
###Output
_____no_output_____
###Markdown
Some directors have won multiple Oscars. To find out which, we have to count the number of rows in the DataFrame that include the same director. There is a Pandas function for this: `.count()`. Calling this on the DataFrame itself would give us the total number of rows only, per column. Therefore, we have to tell Pandas that we want to group by a particular column, say 'directors'.
###Code
df.groupby('directors')
###Output
_____no_output_____
###Markdown
It does not give back something nicely formatted or interpretable. It's just another Python object. The object returned by `groupby` is a `DataFrameGroupBy`, **not** a normal `DataFrame`. However, some methods of the latter also work on the former, e.g. `.head()` and `.tail()`. Let's call `.count()` on this object:
###Code
df.groupby('directors').count()
###Output
_____no_output_____
###Markdown
Remember that this counts the numer of rows. As we know that each row is one movie, we can trim this down to:
###Code
director_counts = df.groupby('directors').count()['movie']
director_counts
###Output
_____no_output_____
###Markdown
Now, get all directors that have won an Oscar more than once by specifying a conditional operator:
###Code
director_counts[director_counts > 1]
list(director_counts.items())
for i, value in director_counts.items():
print(i, value)
###Output
_____no_output_____
###Markdown
Adding a column If we want to get the total number of actors per movie, we have to sum the values from the `male_cast` and `female_cast` columns. You can do this in a for loop, by going over every row (like we saw above), but you can also sum the individual columns. Pandas will then add up the values with the same index and will return a new Series of the same length with the values summed.
###Code
df
df['male_cast'] + df['female_cast']
total_cast = df['male_cast'] + df['female_cast']
total_cast
###Output
_____no_output_____
###Markdown
Then, we add it as a column in our original dataframe. The only requirement for adding a column to a DataFrame is that the length of the Series or list is the same as that of the DataFrame.
###Code
df['total_cast'] = total_cast
df
###Output
_____no_output_____
###Markdown
Optionally, we can sort the DataFrame by column. For instance, from high to low (`ascending=False`) for the newly created `total_cast` column.
###Code
df_sorted = df.sort_values('total_cast', ascending=False)
df_sorted
###Output
_____no_output_____
###Markdown
Saving back the file Use one of the `.to_csv()` or `.to_excel` functions to save the DataFrame. Again, no `with` statement needed, just a file path (and an encoding).
###Code
df_sorted.to_csv('stuff/academyawards_sum.csv', encoding='utf-8')
df_sorted.to_excel('stuff/academyawards_sum.xlsx')
###Output
_____no_output_____
###Markdown
You need to specify `index=False` if you want to prevent a standard index (0,1,2,3...) to be saved in the file as well.
###Code
df_sorted.to_csv('stuff/academyawards_sum.csv', encoding='utf-8', index=False)
###Output
_____no_output_____
###Markdown
Open the contents in Excel, LibreOffice Calc, or another program to read spreadsheets! --- Data wrangling (example)We can take a look at another example. We consider a dataset of tweets from Elon Musk, SpaceX and Tesla founder, and ask the following questions:* When is Elon most actively tweeting?While this question is a bit trivial, it will allow us to learn how to wrangle data.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Load dataset Let's read in a CSV file containing an export of [Elon Musk's tweets](https://twitter.com/elonmusk), exported from Twitter's API.
###Code
dataset_path = 'data/elonmusk_tweets.csv'
df = pd.read_csv(dataset_path, encoding='utf-8')
df
df.info()
###Output
_____no_output_____
###Markdown
Let's give this dataset a bit more structure:- The `id` column can be transformed into the dataframe's index, thus enabling us e.g. to select a tweet by id;- The column `created_at` contains a timestamp, thus it can easily be converted into a `datetime` value
###Code
df.set_index('id', drop=True, inplace=True)
df
df.created_at = pd.to_datetime(df.created_at)
df.info()
df
###Output
_____no_output_____
###Markdown
--- Selection Renaming columns An operation on dataframes that you'll find yourself doing very often is to rename the columns. The first way of renaming columns is by manipulating directly the dataframe's index via the `columns` property.
###Code
df.columns
###Output
_____no_output_____
###Markdown
We can change the column names by assigning to `columns` a list having as values the new column names.**NB**: the size of the list and the number of columns must match!
###Code
# here we renamed the column `text` => `tweet`
df.columns = ['created_at', 'tweet']
# let's check that the change did take place
df.head()
###Output
_____no_output_____
###Markdown
The second way of renaming columns is to use the method `rename()` of a dataframe. The `columns` parameter takes a dictionary of mappings between old and new column names.```pythonmapping_dict = { "old_column_name": "new_column_name"}```
###Code
# let's change column `tweet` => `text`
df = df.rename(columns={"tweet": "text"})
df.head()
###Output
_____no_output_____
###Markdown
**Question**: in which cases is it more convenient to use the second method over the first? Selecting columns
###Code
# this selects one single column and returns as a Series
df["created_at"].head()
type(df["created_at"])
# whereas this syntax selects one single column
# but returns a Dataframe
df[["created_at"]].head()
type(df[["created_at"]])
###Output
_____no_output_____
###Markdown
Selecting rowsFiltering rows in `pandas` is done by means of `[ ]`, which can contain the row number as well as a condition for the selection.
###Code
df[0:2]
###Output
_____no_output_____
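Besides positional slicing, a boolean condition can be placed between the brackets to filter rows. A minimal sketch using the columns defined above (`created_at` was already converted to a datetime):

```python
# Keep only the tweets posted in 2017
df[df.created_at.dt.year == 2017].head()
```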
###Markdown
TransformationThe two main functions used to manipulate and transform values in a dataframe are:- `.map()` (on Series only!)- `.apply()`In this section we'll be using both to enrich our datasets with useful information (useful for exploration, for later visualizations, etc.). Add link to original tweet The `map()` method can be called on a column, as well as on the dataframe's index.When passed as a parameter to `map`, an 'anonymous' lambda function `lambda` can be used to transform any value from that column into another one.
###Code
df['tweet_link'] = df.index.map(lambda x: f'https://twitter.com/i/web/status/{x}')
###Output
_____no_output_____
###Markdown
Or, maybe it is easier with a list comprehension:
###Code
df['tweet_link'] = [f'https://twitter.com/i/web/status/{x}' for x in df.index]
df
###Output
_____no_output_____
###Markdown
Add columns with mentions
###Code
import re
def find_mentions(tweet_text):
"""
Find all @ mentions in a tweet and
return them as a list.
"""
regex = r'@[a-zA-Z0-9_]{1,15}'
mentions = re.findall(regex, tweet_text)
return mentions
df['tweet_mentions'] = df.text.apply(find_mentions)
df['n_mentions'] = df.tweet_mentions.apply(len)
df.head()
###Output
_____no_output_____
###Markdown
Add column with week day and hour
###Code
def day_of_week(t):
"""
Get the week day name from a week day integer.
"""
if t == 0:
return "Monday"
elif t == 1:
return "Tuesday"
elif t == 2:
return "Wednesday"
elif t == 3:
return "Thursday"
elif t == 4:
return "Friday"
elif t == 5:
return "Saturday"
elif t == 6:
return "Sunday"
df["week_day"] = df.created_at.dt.weekday
df["week_day_name"] = df["week_day"].apply(day_of_week)
###Output
_____no_output_____
###Markdown
Or, there is a built-in function in Pandas that gives back the day name:
###Code
df["week_day_name"] = df.created_at.dt.day_name()
df.head(3)
###Output
_____no_output_____
###Markdown
Add column with day hour
###Code
df.created_at.dt?
df.created_at.dt.hour.head()
df["day_hour"] = df.created_at.dt.hour
display_cols = ['created_at', 'week_day', 'day_hour']
df[display_cols].head(4)
###Output
_____no_output_____
###Markdown
Multiple conditions
###Code
# AND condition with `&`
df[
(df.week_day_name == 'Saturday') & (df.n_mentions == 0)
].shape
# Equivalent expression with `query()`
df.query("week_day_name == 'Saturday' and n_mentions == 0").shape
# OR condition with `|`
df[
(df.week_day_name == 'Saturday') | (df.n_mentions == 0)
].shape
###Output
_____no_output_____
###Markdown
Aggregation
###Code
df.agg({'n_mentions': ['min', 'max', 'sum']})
###Output
_____no_output_____
###Markdown
Grouping
###Code
group_by_day = df.groupby('week_day')
# The head of a DataFrameGroupBy consists of the first
# n records for each group (see `help(grp_by_day.head)`)
group_by_day.head(1)
###Output
_____no_output_____
###Markdown
`agg` is used to pass an aggregation function to be applied to each group resulting from `groupby`.Here we are interested in how many tweets there are for each group, so we pass `len()` to an 'aggregate'. This is similar to the `.count()` method.
###Code
group_by_day.agg(len)
###Output
_____no_output_____
###Markdown
However, we are not interested in having the count for all columns. Rather we want to create a new dataframe with renamed column names.
###Code
group_by_day.agg({'text': len}).rename({'text': 'tweet_count'}, axis='columns')
###Output
_____no_output_____
###Markdown
By label (column) Previously we've added a column indicating on which day of the week a given tweet appeared.
###Code
groupby_result_as_series = df.groupby('day_hour')['text'].count()
groupby_result_as_series
groupby_result_as_df = df.groupby('day_hour')[['text']]\
.count()\
.rename({'text': 'count'}, axis='columns')
groupby_result_as_df.head()
###Output
_____no_output_____
###Markdown
By series or dict
###Code
df.groupby?
# here we pass the groups as a series
df.groupby(df.created_at.dt.day).agg({'text':len}).head()
# here we pass the groups as a series
df.groupby(df.created_at.dt.day)[['text']].count().head()
df.groupby(df.created_at.dt.hour)[['text']].count().head()
###Output
_____no_output_____
###Markdown
By multiple labels (columns)
###Code
# Here we group based on the values of two columns
# instead of one
x = df.groupby(['week_day', 'day_hour'])[['text']].count()
x.head()
###Output
_____no_output_____
###Markdown
Aggregation methods**Summary**:- `count`: Number of non-NA values- `sum`: Sum of non-NA values- `mean`: Mean of non-NA values- `median`: Arithmetic median of non-NA values- `std`, `var`: Standard deviation and variance- `min`, `max`: Minimum and maximum of non-NA values You can also use these as aggregation functions within a groupby:
###Code
df.groupby('week_day').agg(
{
# each key in this dict specifies
# a given column
'n_mentions':[
# the list contains aggregation functions
# to be applied to this column
'count',
'mean',
'min',
'max',
'std',
'var'
]
}
)
###Output
_____no_output_____
###Markdown
Sorting To sort the values of a dataframe we use its `sort_values` method:- `by`: specifies the name of the column to be used for sorting- `ascending` (default = `True`): specifies whether the sorting should be *ascending* (A-Z, 0-9) or `descending` (Z-A, 9-0)
###Code
df.sort_values(by='created_at', ascending=True).head()
df.sort_values(by='n_mentions', ascending=False).head()
###Output
_____no_output_____
###Markdown
SaveBefore continuing with the plotting, let's save our enhanced dataframe, so that we can come back to it without having to redo the same manipulations on it.`pandas` provides a number of handy functions to export dataframes in a variety of formats. Here we use `.to_pickle()` to serialize the dataframe into a binary format, by using behind the scenes Python's `pickle` library.
###Code
df.to_pickle("stuff/musk_tweets_enhanced.pickle")
###Output
_____no_output_____
###Markdown
Part 2
###Code
df = pd.read_pickle("stuff/musk_tweets_enhanced.pickle")
###Output
_____no_output_____
###Markdown
`describe()` The default behavior is to include only columns with numerical values
###Code
df.describe()
###Output
_____no_output_____
###Markdown
A trick to include more values is to exclude the datatype on which it breaks, which in our case is `list`.
###Code
df.describe(exclude=[list])
df.created_at.describe(datetime_is_numeric=True)
df['week_day_name'] = df['week_day_name'].astype('category')
df.describe(exclude=['object'])
###Output
_____no_output_____
###Markdown
Plotting
###Code
# Not needed in newest Pandas version
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
HistogramsThey are useful to see the distribution of a certain variable in your dataset.
###Code
df.groupby(['n_mentions'])[['text']].count()
plt.figure(figsize=(10, 6))
plt.hist(df.n_mentions, bins='auto', rwidth=1.0)
plt.title('Distribution of the number of mentions per tweet')
plt.ylabel("Tweets")
plt.xlabel("Mentions (per tweet)")
plt.show()
plt.figure(figsize=(10, 6))
plt.hist(df.day_hour, bins='auto', rwidth=0.6)
plt.title('Distribution of the number of mentions per tweet')
plt.ylabel("Tweets")
plt.xlabel("Hour of the day")
plt.show()
df_2017 = df[df.created_at.dt.year == 2017]
plt.figure(figsize=(10, 6))
plt.hist(df_2017.day_hour, bins='auto', rwidth=0.6)
plt.title('Year 2017')
plt.ylabel("Tweets")
plt.xlabel("Hour of the day")
plt.show()
###Output
_____no_output_____
###Markdown
So far we have used `matplotlib` directly to generate our plots.`pandas` dataframes provide some methods that call `matplotlib`'s API behind the scenes:- `hist()` for histograms- `boxplot()` for boxplots- `plot()` for other types of plots (specified with e.g. `kind='scatter'`; a small sketch follows below) By passing the `by` parameter to e.g. `hist()` it is possible to produce one histogram plot of a given variable for each value in another column. Let's see how we can plot the distribution of the tweet hour by year:
###Code
df['year'] = df.created_at.dt.year
axes = df.hist(column='day_hour', by='year', figsize=(10,10))
###Output
_____no_output_____
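A minimal sketch of the generic `plot()` method with an explicit `kind`, using two of the numeric columns added earlier:

```python
df.plot(kind='scatter', x='day_hour', y='n_mentions', figsize=(8, 6))
plt.title("Mentions per tweet by hour of the day")
plt.show()
```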
###Markdown
Bar chartsThey are useful to plot categorical data.
###Code
plt.bar?
tweets_by_weekday = df.groupby(df.created_at.dt.weekday)[['text']].count()
week_days = [
"Mon",
"Tue",
"Wed",
"Thur",
"Fri",
"Sat",
"Sun"
]
plt.figure(figsize=(8, 6))
# specify the type of plot and the labels
# for the y axis (the bars)
plt.bar(
tweets_by_weekday.index,
tweets_by_weekday.text,
tick_label=week_days,
width=0.5
)
# give a title to the plot
plt.title('Elon Musk\'s week on Twitter')
# give a label to the axes
plt.ylabel("Number of tweets")
plt.xlabel("Week day")
plt.show()
###Output
_____no_output_____
###Markdown
Box plots Outliers, missing valuesAn *outlier* is an observation far from the center of mass of the distribution. It might be an error or a genuine observation: this distinction requires domain knowledge. Outliers influence the outcomes of several statistics and machine learning methods: it is important to decide how to deal with them.A *missing value* is an observation without a value. There can be many reasons for a missing value: the value might not exist (hence its absence is informative and it should be left empty) or might not be known (hence the value exists but is missing in the dataset and should be marked as NA).*One way to think about the difference is with this Zen-like koan: An explicit missing value is the presence of an absence; an implicit missing value is the absence of a presence.*
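A minimal sketch of how both can be inspected in pandas (the 1.5 * IQR rule used here is just one common heuristic, not something prescribed by this dataset):

```python
# Count missing values per column
df.isna().sum()

# Flag potential outliers in n_mentions with the 1.5 * IQR rule
q1, q3 = df.n_mentions.quantile(0.25), df.n_mentions.quantile(0.75)
iqr = q3 - q1
outliers = df[(df.n_mentions < q1 - 1.5 * iqr) | (df.n_mentions > q3 + 1.5 * iqr)]
len(outliers)
```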
###Code
tweets_by_weekday
tweets_by_weekday.describe()
tweets_by_weekday.boxplot()
plt.bar?
df.head(3)
df[['day_hour']].describe()
df[['day_hour']].quantile(.25)
df.boxplot?
df[['day_hour', 'week_day_name']].boxplot(
by='week_day_name',
grid=False,
figsize=(8,6),
fontsize=10
)
# give a title to the plot
plt.title('')
# give a label to the axes
plt.xlabel("Day of the week")
plt.show()
df[['day_hour', 'week_day']].boxplot(
by='week_day',
grid=True, # just to show the difference with/without
figsize=(8,6),
fontsize=10
)
# give a title to the plot
plt.title('')
# give a label to the axes
plt.xlabel("Day of the week")
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 1.* Create a function that calculates the frequency of hashtags in tweets.* Test it on toy examples, to make sure it works.* Apply it to Elon Musk's tweets.* List the top 10 hashtags in the dataset.
###Code
# Your code here.
###Output
_____no_output_____
###Markdown
Exercise 2.Read the file `data/adams-hhgttg.txt` and:- Count the number of occurrences per distinct word in the text.- Create a data frame with two columns: word and counts.- Plot the histogram of the word frequencies and think about what is happening.
###Code
# Your code here.
###Output
_____no_output_____
###Markdown
Data Wrangling Contents:* String formatting (f-strings)* Regular expressions (regex)* Pandas (Reading/writing CSV) String formatting Instead of writing a series of `print()` statements with multiple arguments, or concatenating (by `+`) strings, you can also use a Python string formatting method, called `f-strings`. More information can be read in PEP 498: https://www.python.org/dev/peps/pep-0498/ You can define a string as a template by inserting `{ }` characters with a variable name or expression in between. For this to work, you have to type an `f` in front of the `'`, `"` or `"""` start of the string definition. When defined, the string will read with the string value of the variable or the expression filled in. ```pythonname = "Joe"text = f"My name is {name}."```Again, if you need a `'` or `"` in your expression, use the other variant in the Python source code to declare the string. Writing:```pythonf'This is my {example}.'```is equivalent to:```pythonf"This is my {example}."```
###Code
name = "Joe"
text = f"My name is {name}."
print(text)
day = "Monday"
weather = "Sunny"
n_messages = 8
test_dict = {'test': 'test_value'}
text = f"""
Today is {day}.
The weather is {weather.lower()} and you have {n_messages} unread messages.
The first three letters of the weekday: {day[:3]}
An example expression is: {15 ** 2 = }
"""
text = f'Test by selecting key: {test_dict["test"]}'
print(text)
###Output
_____no_output_____
###Markdown
--- Regular expressions Using regular expressions can be very useful when working with texts. It is a powerful search mechanism by which you can search on patterns, instead of 'exact matches'. But, they can be difficult to grasp, at first sight.A **regular expression**, for instance, allows you to substitute all digits in a text, following another text sequence, or to find all urls, phone numbers, or email addresses. Or any text, that meets a particular condition.See the Python manual for the `re` module for more info: https://docs.python.org/3/library/re.htmlYou can/should use a cheatsheet when writing a regular expression. A nice website to write and test them is: https://regex101.com/. Some examples of commonly used expressions:* `\d` for all digits 0-9* `\w` for any word character* `[abc]` for a set of characters (here: a, b, c)* `.` any character* `?` the preceding character/pattern 0 or 1 times* `*` the preceding character/pattern 0 or multiple times* `+` the preceding character/pattern 1 or multiple times* `{1,2}` 1 or 2 times* `^` the start of the string* `$` the end of the string* `|` or* `()` capture group (only return this part)In many text editors (e.g. VSCode) there is also an option to search (and replace) with the help of regular expressions. Python has a regex module built in. When working with a regular expression, you have to import it first:
###Code
import re
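# A quick illustration of a few of the tokens listed above (a small sketch with a made-up string):
example = "Order #123 shipped on 2021-02-22"
print(re.findall(r'\d+', example))          # ['123', '2021', '02', '22'] -- \d+ = one or more digits
print(re.findall(r'#(\d+)', example))       # ['123'] -- the capture group returns only the digits after '#'
print(bool(re.search(r'^Order', example)))  # True -- ^ anchors the match to the start of the string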
###Output
_____no_output_____
###Markdown
You can use a regular expression for **finding** occurences in a text. Let's say we want to filter out all web urls in a text:
###Code
text = """
There are various search engines on the web.
There is https://www.google.com/, but also https://www.bing.com/.
A more privacy friendly alternative is https://duckduckgo.com/.
And who remembers http://www.altavista.com/?
"""
re.findall(r'https?://.+?/', text)
# Copied from https://www.imdb.com/search/title/?groups=top_250&sort=user_rating
text = """
1. The Shawshank Redemption (1994)
12 | 142 min | Drama
9,3 Rate this 80 Metascore
Two imprisoned men bond over a number of years, finding solace and eventual redemption through acts of common decency.
Director: Frank Darabont | Stars: Tim Robbins, Morgan Freeman, Bob Gunton, William Sadler
Votes: 2.355.643 | Gross: $28.34M
2. The Godfather (1972)
16 | 175 min | Crime, Drama
9,2 Rate this 100 Metascore
An organized crime dynasty's aging patriarch transfers control of his clandestine empire to his reluctant son.
Director: Francis Ford Coppola | Stars: Marlon Brando, Al Pacino, James Caan, Diane Keaton
Votes: 1.630.157 | Gross: $134.97M
3. The Dark Knight (2008)
16 | 152 min | Action, Crime, Drama
9,0 Rate this 84 Metascore
When the menace known as the Joker wreaks havoc and chaos on the people of Gotham, Batman must accept one of the greatest psychological and physical tests of his ability to fight injustice.
Director: Christopher Nolan | Stars: Christian Bale, Heath Ledger, Aaron Eckhart, Michael Caine
Votes: 2.315.134 | Gross: $534.86M
"""
titles = re.findall(r'\d{1,2}\. (.+)', text)
titles
###Output
_____no_output_____
###Markdown
QuizTry to get a list of all directors. And the gross income.
###Code
# All directors
# Gross income
###Output
_____no_output_____
###Markdown
Or, you can use a regular expression to **replace** a character sequence. This is equivalent to the `.replace()` method, but allows much more flexible matching.
###Code
text = """
Tim Robbins
Morgan Freeman
Bob Gunton
William Sadler
Marlon Brando
Al Pacino
James Caan
Diane Keaton
Christian Bale
Heath Ledger
Aaron Eckhart
Michael Caine
"""
# Hint: test this with https://regex101.com/
new_text = re.sub(r"(?:(\w)\w+) (\w+)", r"\1. \2", text)
print(new_text)
###Output
_____no_output_____
###Markdown
--- Data wrangling with Pandas CSV (in Pandas)The other often used file type is CSV (Comma Separated Values), or variants, such as TSV (Tab Separated Values). Python includes another built-in module to deal with these files: the `csv` module. But, we will be using the `Pandas` module, the go-to package for data analysis, that you already imported and updated in Notebook 0. A CSV file is similar to an Excel or Google Docs spreadsheet, but more limited in markup and functionality (e.g. you cannot store Excel functions). It is just a text file in which individual entries correspond to lines, and columns are separated by a comma. You can always open a CSV file with a text editor, and this also makes it easy to store and share data.For the rest of the notebook we will see how to work with the two main data types in `pandas`: the `DataFrame` and the `Series`.Information on functions and modules of Pandas cannot be found in the Python manual online, as it is an external package. Instead, you can refer to https://pandas.pydata.org/pandas-docs/stable/index.html . `DataFrame`What is a `pandas.DataFrame`? A `DataFrame` is a collection of `Series` having the same length and whose indexes are in sync. A *collection* means that each column of a dataframe is a series. You can also see it as a spreadsheet in memory that also allows for inclusion of Python objects. We first have to import the package. It's a convention to do this like so with Pandas, which makes the elements from this package (classes, functions, methods) available under its abbreviation `pd`:
###Code
import pandas as pd
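# A tiny hand-made example (illustrative values only, not from the course data) of the
# "collection of Series" idea described above: each dict key becomes a column, i.e. a Series.
small_df = pd.DataFrame({'movie': ['Parasite', 'Nomadland'], 'year': [2019, 2020]})
print(type(small_df))           # <class 'pandas.core.frame.DataFrame'>
print(type(small_df['movie']))  # <class 'pandas.core.series.Series'>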
###Output
_____no_output_____
###Markdown
Next is loading the data. The following data comes from Wikipedia and was [automatically](https://query.wikidata.org/%0ASELECT%20DISTINCT%20%3FmovieLabel%20%3Fimdb%20%28MIN%28%3FpublicationYear%29%20as%20%3Fyear%29%20%28year%28%3Fdate%29%20as%20%3Faward_year%29%20%28group_concat%28DISTINCT%20%3FdirectorLabel%3Bseparator%3D%22%2C%20%22%29%20as%20%3Fdirectors%20%29%20%28group_concat%28DISTINCT%20%3FcompanyLabel%3Bseparator%3D%22%2C%20%22%29%20as%20%3Fcompanies%29%20%3Fmale_cast%20%3Ffemale_cast%20WHERE%20%7B%0A%20%20%0A%20%20%7B%0A%20%20%3Fmovie%20p%3AP166%20%3Fawardstatement%20%3B%0A%20%20%20%20%20%20%20%20%20wdt%3AP345%20%3Fimdb%20%3B%0A%20%20%20%20%20%20%20%20%20wdt%3AP577%20%3Fpublication%20%3B%0A%20%20%20%20%20%20%20%20%20wdt%3AP57%20%3Fdirector%20%3B%0A%20%20%20%20%20%20%20%20%20wdt%3AP272%20%3Fcompany%20%3B%0A%20%20%20%20%20%20%20%20%20wdt%3AP31%20wd%3AQ11424%20.%0A%20%20%0A%20%20%3Fawardstatement%20ps%3AP166%20wd%3AQ102427%20%3B%20%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20pq%3AP585%20%3Fdate%20.%0A%20%20%7D%0A%20%20%0A%20%20BIND%28year%28%3Fpublication%29%20as%20%3FpublicationYear%29%0A%20%20%0A%20%20%7B%0A%20%20%20%20%20SELECT%20%3Fmovie%20%28COUNT%28%3Fcast_member%29%20AS%20%3Fmale_cast%29%20WHERE%20%7B%0A%20%20%20%20%20%20%3Fmovie%20wdt%3AP161%20%3Fcast_member%20.%0A%20%20%20%20%20%20%3Fcast_member%20wdt%3AP21%20wd%3AQ6581097%20.%0A%20%20%20%20%7D%20GROUP%20BY%20%3Fmovie%0A%7D%20%7B%0A%20%20%20%20SELECT%20%3Fmovie%20%28COUNT%28%3Fcast_member%29%20AS%20%3Ffemale_cast%29%20WHERE%20%7B%0A%20%20%20%20%20%20%3Fmovie%20wdt%3AP161%20%3Fcast_member%20.%0A%20%20%20%20%20%20%3Fcast_member%20wdt%3AP21%20wd%3AQ6581072%20.%0A%20%20%20%20%7D%20GROUP%20BY%20%3Fmovie%0A%20%20%7D%0A%20%20%0A%20%20SERVICE%20wikibase%3Alabel%20%7B%20%0A%20%20%20%20bd%3AserviceParam%20wikibase%3Alanguage%20%22en%22%20.%0A%20%20%20%20%3Fmovie%20rdfs%3Alabel%20%3FmovieLabel%20.%0A%20%20%20%20%3Fdirector%20rdfs%3Alabel%20%3FdirectorLabel%20.%0A%20%20%20%20%3Fcompany%20rdfs%3Alabel%20%3FcompanyLabel%20.%20%0A%20%20%7D%0A%7D%20%0A%0AGROUP%20BY%20%3FmovieLabel%20%3Fimdb%20%3Fdate%20%3Fmale_cast%20%3Ffemale_cast%0AORDER%20BY%20%3Fyear%20) retreived. It is an overview of all movies that have won an Academy Award for Best Picture, including some extra data for the movie: a link to the IMDB, the publication and award year, the director(s), production company and the number of male and female actors in the cast. It can be that this data is incorrect, because this information is not entered in Wikipedia. You can find this file in `data/academyawards.csv`. Download it from the repository and save it in the data folder if you don't have it. Reading in a csv with pandas is easy. We call the `pd.read_csv()` function with the file path as argument. Pandas takes care of opening and closing the file, so a `with` statement is not needed. The contents of the csv file are then read in a Pandas DataFrame object. We can store this in the variable `df`. Calling this variable in a Jypyter Notebook gives back a nicely formatted table with the first and last 5 rows of the file.
###Code
df = pd.read_csv('data/academyawards.csv', encoding='utf-8')
df
###Output
_____no_output_____
###Markdown
Think of a `DataFrame` as an in-memory spreadsheet that you can analyse and manipulate programmatically. Or, think of it as a table in which every line is a data entry, and every column holds specific information on this data.These columns can also be seen as lists of values. They are ordered and the index of an element corresponds with the index of the data entry. The collection of all such columns is what makes the DataFrame. One column in a table is represented by a Pandas `Series`, which collects observations about a given variable. Multiple columns are a `DataFrame`. A DataFrame therefore is a collection of lists (=columns), or `Series`.If you look for other methods on `pd` you can call, you'll also see that there is an `pd.read_excel()` option to read spreadsheets in `.xls` or `.xlsx`. You can also use this, if you have these kind of files. StatisticsNow that we loaded our DataFrame, we can make pandas print some statistics on the file.
###Code
df.head(1) # First row (head() without an argument shows the first 5)
df.tail() # Last 5 rows
df.describe() # Descriptive statistics
###Output
_____no_output_____
###Markdown
As you can see by what they return, these methods return another DataFrame with some descriptive statistics on the file, such as the number of entries (count), the mean of the numerical values, the standard deviation, minimum and maximum values, and the 25th, 50th, and 75th percentiles. The `.info()` method can also be informative. It gives you information about a dataframe:- how much space does it take in memory?- what is the datatype of each column?- how many records are there?- how many `null` values does each column contain (!)?
###Code
df.info()
###Output
_____no_output_____
###Markdown
Pandas automatically interprets which datatypes are used in the file, but this is not always correct. In particular, if a column contains empty fields, its integer values are interpreted as floats. Every column has one datatype. You can check them separately by requesting the `.dtypes` attribute on the `df`. The 'object' type is a string in this file, 'int64' is an integer.
###Code
df.dtypes
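# A small aside (hypothetical values, not from the Academy Awards file) showing why a missing
# value turns an integer column into float, and the nullable 'Int64' dtype available in recent
# pandas versions as an alternative:
print(pd.Series([1, 2, None]).dtype)                 # float64: the missing value forces floats
print(pd.Series([1, 2, None], dtype='Int64').dtype)  # Int64: pandas' nullable integer type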
###Output
_____no_output_____
###Markdown
We expect different datatypes for the description-dataframe:
###Code
description_df = df.describe()
description_df.dtypes
###Output
_____no_output_____
###Markdown
Slicing and selecting `df['column1']`You can select a single column by calling this column name as if the DataFrame was a dictionary. A single column from a DataFrame returns a `Series` object.
###Code
df
print(type(df['movie']))
df['movie']
###Output
_____no_output_____
###Markdown
The `Series` object is very similar to a `list`:
###Code
movies = df['movie']
print("Length:", len(movies))
print()
for n, movie in enumerate(movies[:10], 1):
print(n, movie, sep='\t')
###Output
_____no_output_____
###Markdown
`df[['column1', 'column2']]`We can also slice a DataFrame by calling multiple column names as one list:
###Code
df[['movie', 'imdb']]
###Output
_____no_output_____
###Markdown
Looping over DataFrames `zip(df['column1'], df['column2'])`Going over these items in a `for` loop needs a different approach. The built-in `zip()` function ([manual](https://docs.python.org/3/library/functions.html#zip)) takes two iterables of equal length and creates a new iterable of tuples. The number of arguments/iterables that you give to `zip()` determines the length of the tuples.
###Code
list1 = ['a', 'b', 'c']
list2 = [1, 2, 3]
list(zip(list1, list2))
n = 0
for movie, imdb in zip(df['movie'], df['imdb']):
if n > 9:
break # stop flooding the Notebook
print(movie, "http://www.imdb.com/title/" + imdb, sep='\t')
n += 1
###Output
_____no_output_____
###Markdown
`.to_dict(orient='records')`Or, accessing all entries in a convenient way, for instance as a list of Python dictionaries (one per row), can be done with the `.to_dict(orient='records')` method:
###Code
df.head(1)
for r in df.to_dict(orient='records'):
print(r)
print()
name = r['movie']
year = r['year']
won = r['award_year']
print("The movie " + name + " was produced in " + str(year) + " and won in " + str(won) + ".")
print()
break # To not flood the notebook, only print the first
###Output
_____no_output_____
###Markdown
`.iterrows()`Or you can use the `.iterrows()` method, which gives you tuples of the index of the row, and the row itself as `Series` object:
###Code
for n, r in df.iterrows():
name = r.movie # You can use a dot notation here
year = r.year
won = r.award_year
print(f"The movie {name} was produced in {year} and won in {won}.")
print()
break # To not flood the notebook, only print the first
###Output
_____no_output_____
###Markdown
--- Analysis
###Code
df
df.mean()
###Output
_____no_output_____
###Markdown
You already saw above that you could get statistics by calling `.describe()` on a DataFrame. You can also get these metrics for individual columns. Let's ask the maximum number of male and female actors in the cast of a movie:
###Code
df['female_cast'].max()
df['male_cast'].max()
###Output
_____no_output_____
###Markdown
You can also apply these operations to multiple columns at once. You get a `Series` object back.
###Code
df.max()
df[['male_cast', 'female_cast']]
slice_df = df[['male_cast', 'female_cast']]
slice_df.max()
###Output
_____no_output_____
###Markdown
To find the corresponding movie title, we can ask Pandas to give us the record in which these maxima occur. This is done through `df.loc`. This works by asking: "Give me all the locations (=rows) for which a value in a specified column is equal to this value".
###Code
df
df[['male_cast', 'female_cast']].max()
df[df['female_cast'] > 10]
for column_name, value in df[['male_cast', 'female_cast']].max().items():
print("Movie with maximum for", column_name, value)
row = df.loc[df[column_name] == value]
print(row.movie)
print()
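# An equivalent shortcut (a sketch, not part of the original loop): idxmax() returns the index
# label of the row holding the maximum, which can then be passed straight to .loc
print(df.loc[df['female_cast'].idxmax()].movie)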
###Output
_____no_output_____
###Markdown
Other functions that can be used are for instance `.mean()`, `.median()`, `.std()` and `.sum()`.
###Code
df['female_cast'].mean()
df['male_cast'].mean()
df['female_cast'].sum()
df['male_cast'].sum()
df
###Output
_____no_output_____
###Markdown
Pandas also understands dates, but you have to tell it to interpret a column as such. We can change the `year` column in-place so that it is not interpreted as integer, but as a date object. In this case, since we only have the year available, and not a full date such as `2021-02-22` (YYYY-mm-dd), we have to specify the format. Typing `%Y` as string is shorthand for `YYYY`. It returns a full date, so every month and day are set to January first.
###Code
df['year'] = pd.to_datetime(df['year'], format='%Y')
df['award_year'] = pd.to_datetime(df['award_year'], format='%Y')
df['year']
df
###Output
_____no_output_____
###Markdown
PlottingLet's try to make some graphs from our data, for instance the number of male/female actors over time. We now have a year column that is interpreted as time by Pandas. These values can figure as values on a x-axis in a graph. The y-axis would then give info on the number of male and female actors in the movie. First, we set an **index** for the DataFrame. This determines how the data can be accessed. Normally, this is a range of 0 untill the number of rows. But, you can change this, so that we can analyse the dataframe on a time index.
###Code
# Select only what we need
df_actors = df[['award_year', 'male_cast', 'female_cast']]
df_actors
df_actors = df_actors.set_index('award_year')
df_actors
###Output
_____no_output_____
###Markdown
Then simply call `.plot()` on your newly created DataFrame!
###Code
df_actors.plot(figsize=(15,10))
###Output
_____no_output_____
###Markdown
There are tons of parameters, functions, methods, transformations you can use on DataFrames and also on this plotting function. Luckily, plenty of guides and examples can be found on the internet. Grouping
###Code
df
###Output
_____no_output_____
###Markdown
Some directors have won multiple Oscars. To find out which, we have to count the number of rows in the DataFrame that include the same director. There is a Pandas function for this: `.count()`. Calling this on the DataFrame itself would give us the total number of rows only, per column. Therefore, we have to tell Pandas that we want to group by a particular column, say 'directors'.
###Code
df.groupby('directors')
###Output
_____no_output_____
###Markdown
It does not give back something nicely formatted or interpretable. It's just another Python object. The object returned by `groupby` is a `DataFrameGroupBy` **not** a normal `DataFrame`.However, some methods of the latter work also on the former, e.g. `.head()` and `.tail()`. Let's call the `.count()` on this object:
###Code
df.groupby('directors').count()
###Output
_____no_output_____
###Markdown
Remember that this counts the number of rows. As we know that each row is one movie, we can trim this down to:
###Code
director_counts = df.groupby('directors').count()['movie']
director_counts
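# For a single column, value_counts() gives the same per-director counts in one call
# (a handy shortcut, already sorted from most to fewest wins):
df['directors'].value_counts()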
###Output
_____no_output_____
###Markdown
Now, get all directors that have won an Oscar more than once by specifying a conditional operator:
###Code
director_counts[director_counts > 1]
list(director_counts.items())
for i, value in director_counts.items():
print(i, value)
###Output
_____no_output_____
###Markdown
Adding a column If we want to get the total number of actors per movie, we have to sum the values from the `male_cast` and `female_cast` columns. You can do this in a for loop, by going over every row (like we saw above), but you can also sum the individual columns. Pandas will then add up the values with the same index and will return a new Series of the same length with the values summed.
###Code
df
df['male_cast'] + df['female_cast']
total_cast = df['male_cast'] + df['female_cast']
total_cast
###Output
_____no_output_____
###Markdown
Then, we add it as a column in our original dataframe. The only requirement for adding a column to a DataFrame is that the length of the Series or list is the same as that of the DataFrame.
###Code
df['total_cast'] = total_cast
df
###Output
_____no_output_____
###Markdown
Optionally, we can sort the DataFrame by column. For instance, from high to low (`ascending=False`) for the newly created `total_cast` column.
###Code
df_sorted = df.sort_values('total_cast', ascending=False)
df_sorted
###Output
_____no_output_____
###Markdown
Saving back the file Use one of the `.to_csv()` or `.to_excel` functions to save the DataFrame. Again, no `with` statement needed, just a file path (and an encoding).
###Code
df_sorted.to_csv('stuff/academyawards_sum.csv', encoding='utf-8')
df_sorted.to_excel('stuff/academyawards_sum.xlsx')
###Output
_____no_output_____
###Markdown
You need to specify `index=False` if you want to prevent a standard index (0,1,2,3...) from being saved in the file as well.
###Code
df_sorted.to_csv('stuff/academyawards_sum.csv', encoding='utf-8', index=False)
###Output
_____no_output_____
###Markdown
Open the contents in Excel, LibreOffice Calc, or another program to read spreadsheets! --- Data wrangling (example)We can take a look at another example. We consider a dataset of tweets from Elon Musk, SpaceX and Tesla founder, and ask the following questions:* When is Elon most actively tweeting?While this question is a bit trivial, it will allow us to learn how to wrangle data.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Load dataset Let's read in a CSV file containing an export of [Elon Musk's tweets](https://twitter.com/elonmusk), exported from Twitter's API.
###Code
dataset_path = 'data/elonmusk_tweets.csv'
df = pd.read_csv(dataset_path, encoding='utf-8')
df
df.info()
###Output
_____no_output_____
###Markdown
Let's give this dataset a bit more structure:- The `id` column can be transformed into the dataframe's index, thus enabling us e.g. to select a tweet by id;- The column `created_at` contains a timestamp, thus it can easily be converted into a `datetime` value
###Code
df.set_index('id', drop=True, inplace=True)
df
df.created_at = pd.to_datetime(df.created_at)
df.info()
df
###Output
_____no_output_____
###Markdown
--- Selection Renaming columns An operation on dataframes that you'll find yourself doing very often is to rename the columns. The first way of renaming columns is by manipulating directly the dataframe's index via the `columns` property.
###Code
df.columns
###Output
_____no_output_____
###Markdown
We can change the column names by assigning to `columns` a list having as values the new column names.**NB**: the size of the list and new number of columns must match!
###Code
# here we renamed the column `text` => `tweet`
df.columns = ['created_at', 'tweet']
# let's check that the change did take place
df.head()
###Output
_____no_output_____
###Markdown
The second way of renaming columns is to use the method `rename()` of a dataframe. The `columns` parameter takes a dictionary of mappings between old and new column names.
```python
mapping_dict = {
    "old_column_name": "new_column_name"
}
```
###Code
# let's change column `tweet` => `text`
df = df.rename(columns={"tweet": "text"})
df.head()
###Output
_____no_output_____
###Markdown
**Question**: in which cases is it more convenient to use the second method over the first? Selecting columns
###Code
# this selects one single column and returns as a Series
df["created_at"].head()
type(df["created_at"])
# whereas this syntax selects one single column
# but returns a Dataframe
df[["created_at"]].head()
type(df[["created_at"]])
###Output
_____no_output_____
###Markdown
Selecting rowsFiltering rows in `pandas` is done by means of `[ ]`, which can contain the row number as well as a condition for the selection.
###Code
df[0:2]
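# And a condition-based selection (a sketch using a column of this dataframe): the comparison
# produces a boolean Series, and df[...] keeps only the rows where it is True
df[df.created_at.dt.year == 2017].head()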
###Output
_____no_output_____
###Markdown
TransformationThe two main functions used to manipulate and transform values in a dataframe are:- `.map()` (on Series only!)- `.apply()`In this section we'll be using both to enrich our datasets with useful information (useful for exploration, for later visualizations, etc.). Add link to original tweet The `map()` method can be called on a column, as well as on the dataframe's index.An anonymous (`lambda`) function can be passed as a parameter to `map` to transform each value from that column into another one.
###Code
df['tweet_link'] = df.index.map(lambda x: f'https://twitter.com/i/web/status/{x}')
###Output
_____no_output_____
###Markdown
Or, maybe it is easier with a list comprehension:
###Code
df['tweet_link'] = [f'https://twitter.com/i/web/status/{x}' for x in df.index]
df
###Output
_____no_output_____
###Markdown
Add colums with mentions
###Code
import re
def find_mentions(tweet_text):
"""
Find all @ mentions in a tweet and
return them as a list.
"""
regex = r'@[a-zA-Z0-9_]{1,15}'
mentions = re.findall(regex, tweet_text)
return mentions
df['tweet_mentions'] = df.text.apply(find_mentions)
df['n_mentions'] = df.tweet_mentions.apply(len)
df.head()
###Output
_____no_output_____
###Markdown
Add column with week day and hour
###Code
def day_of_week(t):
"""
Get the week day name from a week day integer.
"""
if t == 0:
return "Monday"
elif t == 1:
return "Tuesday"
elif t == 2:
return "Wednesday"
elif t == 3:
return "Thursday"
elif t == 4:
return "Friday"
elif t == 5:
return "Saturday"
elif t == 6:
return "Sunday"
df["week_day"] = df.created_at.dt.weekday
df["week_day_name"] = df["week_day"].apply(day_of_week)
###Output
_____no_output_____
###Markdown
Or, there is a built-in function in Pandas that gives back the day name:
###Code
df["week_day_name"] = df.created_at.dt.day_name()
df.head(3)
###Output
_____no_output_____
###Markdown
Add column with day hour
###Code
df.created_at.dt?
df.created_at.dt.hour.head()
df["day_hour"] = df.created_at.dt.hour
display_cols = ['created_at', 'week_day', 'day_hour']
df[display_cols].head(4)
###Output
_____no_output_____
###Markdown
Multiple conditions
###Code
# AND condition with `&`
df[
(df.week_day_name == 'Saturday') & (df.n_mentions == 0)
].shape
# Equivalent expression with `query()`
df.query("week_day_name == 'Saturday' and n_mentions == 0").shape
# OR condition with `|`
df[
(df.week_day_name == 'Saturday') | (df.n_mentions == 0)
].shape
###Output
_____no_output_____
###Markdown
Aggregation
###Code
df.agg({'n_mentions': ['min', 'max', 'sum']})
###Output
_____no_output_____
###Markdown
Grouping
###Code
group_by_day = df.groupby('week_day')
# The head of a DataFrameGroupBy consists of the first
# n records for each group (see `help(group_by_day.head)`)
group_by_day.head(1)
###Output
_____no_output_____
###Markdown
`agg` is used to pass an aggregation function to be applied to each group resulting from `groupby`.Here we are interested in how many tweets there are for each group, so we pass `len` as the aggregation function. This is similar to the `.count()` method.
###Code
group_by_day.agg(len)
###Output
_____no_output_____
###Markdown
However, we are not interested in having the count for all columns. Rather we want to create a new dataframe with renamed column names.
###Code
group_by_day.agg({'text': len}).rename({'text': 'tweet_count'}, axis='columns')
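# In pandas >= 0.25 the same result can be written in one step with "named aggregation"
# (shown here as an alternative sketch, not part of the original notebook):
group_by_day.agg(tweet_count=('text', len))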
###Output
_____no_output_____
###Markdown
By label (column) Previously we've added a column indicating on which day of the week a given tweet appeared.
###Code
groupby_result_as_series = df.groupby('day_hour')['text'].count()
groupby_result_as_series
groupby_result_as_df = df.groupby('day_hour')[['text']]\
.count()\
.rename({'text': 'count'}, axis='columns')
groupby_result_as_df.head()
###Output
_____no_output_____
###Markdown
By series or dict
###Code
df.groupby?
# here we pass the groups as a series
df.groupby(df.created_at.dt.day).agg({'text':len}).head()
# here we pass the groups as a series
df.groupby(df.created_at.dt.day)[['text']].count().head()
df.groupby(df.created_at.dt.hour)[['text']].count().head()
###Output
_____no_output_____
###Markdown
By multiple labels (columns)
###Code
# Here we group based on the values of two columns
# instead of one
x = df.groupby(['week_day', 'day_hour'])[['text']].count()
x.head()
###Output
_____no_output_____
###Markdown
Aggregation methods**Summary**:- `count`: Number of non-NA values- `sum`: Sum of non-NA values- `mean`: Mean of non-NA values- `median`: Arithmetic median of non-NA values- `std`, `var`: standard deviation and variance- `min`, `max`: Minimum and maximum of non-NA values You can also use these in an aggregation functions within a groupby:
###Code
df.groupby('week_day').agg(
{
# each key in this dict specifies
# a given column
'n_mentions':[
# the list contains aggregation functions
# to be applied to this column
'count',
'mean',
'min',
'max',
'std',
'var'
]
}
)
###Output
_____no_output_____
###Markdown
Sorting To sort the values of a dataframe we use its `sort_values` method:- `by`: specifies the name of the column to be used for sorting- `ascending` (default = `True`): specifies whether the sorting should be *ascending* (A-Z, 0-9) or `descending` (Z-A, 9-0)
###Code
df.sort_values(by='created_at', ascending=True).head()
df.sort_values(by='n_mentions', ascending=False).head()
###Output
_____no_output_____
###Markdown
SaveBefore continuing with the plotting, let's save our enhanced dataframe, so that we can come back to it without having to redo the same manipulations on it.`pandas` provides a number of handy functions to export dataframes in a variety of formats. Here we use `.to_pickle()` to serialize the dataframe into a binary format, by using behind the scenes Python's `pickle` library.
###Code
df.to_pickle("stuff/musk_tweets_enhanced.pickle")
###Output
_____no_output_____
###Markdown
Part 2
###Code
df = pd.read_pickle("stuff/musk_tweets_enhanced.pickle")
###Output
_____no_output_____
###Markdown
`describe()` The default behavior is to include only columns with numerical values
###Code
df.describe()
###Output
_____no_output_____
###Markdown
A trick to include more values is to exclude the datatype on which it breaks, which in our case is `list`.
###Code
df.describe(exclude=[list])
df.created_at.describe(datetime_is_numeric=True)
df['week_day_name'] = df['week_day_name'].astype('category')
df.describe(exclude=['object'])
###Output
_____no_output_____
###Markdown
Plotting
###Code
# Not needed in newest Pandas version
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
HistogramsThey are useful to see the distribution of a certain variable in your dataset.
###Code
df.groupby(['n_mentions'])[['text']].count()
plt.figure(figsize=(10, 6))
plt.hist(df.n_mentions, bins='auto', rwidth=1.0)
plt.title('Distribution of the number of mentions per tweet')
plt.ylabel("Tweets")
plt.xlabel("Mentions (per tweet)")
plt.show()
plt.figure(figsize=(10, 6))
plt.hist(df.day_hour, bins='auto', rwidth=0.6)
plt.title('Distribution of tweets over the hours of the day')
plt.ylabel("Tweets")
plt.xlabel("Hour of the day")
plt.show()
df_2017 = df[df.created_at.dt.year == 2017]
plt.figure(figsize=(10, 6))
plt.hist(df_2017.day_hour, bins='auto', rwidth=0.6)
plt.title('Year 2017')
plt.ylabel("Tweets")
plt.xlabel("Hour of the day")
plt.show()
###Output
_____no_output_____
###Markdown
So far we have used `matplotlib` directly to generate our plots.`pandas`'s dataframes provide some methods that directly call `matplotlib`'s API behind the scenes:- `hist()` for histograms- `boxplot()` for boxplots- `plot()` for other types of plots (specified with e.g. `kind='scatter'`) By passing the `by` parameter to e.g. `hist()` it is possible to produce one histogram plot of a given variable for each value in another column. Let's see how we can plot the distribution of the posting hour by year:
###Code
df['year'] = df.created_at.dt.year
axes = df.hist(column='day_hour', by='year', figsize=(10,10))
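# The text above also mentions plot(); as an extra sketch (not in the original notebook), a
# scatter plot of two numeric columns can be drawn directly from the dataframe with kind='scatter':
df.plot(kind='scatter', x='day_hour', y='n_mentions', figsize=(8, 5), alpha=0.3)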
###Output
_____no_output_____
###Markdown
Bar chartsThey are useful to plot categorical data.
###Code
plt.bar?
tweets_by_weekday = df.groupby(df.created_at.dt.weekday)[['text']].count()
week_days = [
"Mon",
"Tue",
"Wed",
"Thur",
"Fri",
"Sat",
"Sun"
]
plt.figure(figsize=(8, 6))
# specify the type of plot and the labels
# for the y axis (the bars)
plt.bar(
tweets_by_weekday.index,
tweets_by_weekday.text,
tick_label=week_days,
width=0.5
)
# give a title to the plot
plt.title('Elon Musk\'s week on Twitter')
# give a label to the axes
plt.ylabel("Number of tweets")
plt.xlabel("Week day")
plt.show()
###Output
_____no_output_____
###Markdown
Box plots Outliers, missing valuesAn *outlier* is an observation far from the center of mass of the distribution. It might be an error or a genuine observation: this distinction requires domain knowledge. Outliers influence the outcomes of several statistics and machine learning methods: it is important to decide how to deal with them.A *missing value* is an observation without a value. There can be many reasons for a missing value: the value might not exist (hence its absence is informative and it should be left empty) or might not be known (hence the value exists but is missing in the dataset and it should be marked as NA).*One way to think about the difference is with this Zen-like koan: An explicit missing value is the presence of an absence; an implicit missing value is the absence of a presence.*
###Code
tweets_by_weekday
tweets_by_weekday.describe()
tweets_by_weekday.boxplot()
plt.bar?
df.head(3)
df[['day_hour']].describe()
df[['day_hour']].quantile(.25)
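# Building on the quantiles (a sketch, using the common 1.5 * IQR rule as one possible outlier
# criterion) and counting missing values per column, as discussed in the text above:
q1, q3 = df['day_hour'].quantile([0.25, 0.75])
iqr = q3 - q1
print(len(df[(df['day_hour'] < q1 - 1.5 * iqr) | (df['day_hour'] > q3 + 1.5 * iqr)]))
print(df.isna().sum())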
df.boxplot?
df[['day_hour', 'week_day_name']].boxplot(
by='week_day_name',
grid=False,
figsize=(8,6),
fontsize=10
)
# give a title to the plot
plt.title('')
# give a label to the axes
plt.xlabel("Day of the week")
plt.show()
df[['day_hour', 'week_day']].boxplot(
by='week_day',
grid=True, # just to show the difference with/without
figsize=(8,6),
fontsize=10
)
# give a title to the plot
plt.title('')
# give a label to the axes
plt.xlabel("Day of the week")
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 1.* Create a function that calculates the frequency of hashtags in tweets.* Test it on toy examples, to make sure it works.* Apply it to Elon Musk's tweets.* List the top 10 hashtags in the dataset.
###Code
# Your code here.
###Output
_____no_output_____
###Markdown
Exercise 2.Read the file `data/adams-hhgttg.txt` and:- Count the number of occurrences per distinct word in the text.- Create a data frame with two columns: word and counts.- Plot the histogram of the word frequencies and think about what is happening.
###Code
# Your code here.
###Output
_____no_output_____ |
g3-p5-Ajuste-MrvsZ.ipynb | ###Markdown
Problem 5 In this exercise we want to see how the absolute magnitude in the r band behaves as a function of redshift for the galaxies in the sample. The absolute magnitude of each galaxy can be computed with the approximation:$$M = m - 25 - 5\log_{10}\!\left(\frac{c\,z}{H}\right)$$ Eq. (1), where m is the apparent magnitude, c=300000 km/s is the speed of light and H=75 (km/s)/Mpc is the Hubble constant. In this exercise we are asked to consider only values with $m_r < 17.5$; therefore m=r and M=$M_r$. We start by importing the required data from the data table: the apparent magnitude r and the redshift of each galaxy.
###Code
from math import *
import numpy as np
import matplotlib.pyplot as plt
#import random
import seaborn as sns
sns.set()
# define petroMag_r (column 5)
r = np.genfromtxt('Tabla2_g3.csv', delimiter=',', usecols=5)
# define the redshift (column 6)
z = np.genfromtxt('Tabla2_g3.csv', delimiter=',', usecols=6)
# compute the absolute magnitude for each galaxy
c=300000 #km/s
H=75 #km/s /Mpc
z2=[]
Mr=[]
for i in range(len(z)):
if (-9999 != r[i]) and (r[i] < 17.5): #hay valores de Mr que dan -9999 que no los considero y pido mr<17.5
M = r[i] - 25 - 5 * log10((c * z[i]) / H)
z2.append(z[i])
Mr.append(M)
else:
None
# plot to see the shape
plt.figure(figsize=(11,6))
plt.scatter(z2, Mr, s=20, color='orchid')
plt.ylim(-24,-16)
plt.title('Magnitud absoluta vs. Redshift')
plt.xlabel('z')
plt.ylabel('$M_r$')
plt.show()
###Output
_____no_output_____
###Markdown
We want to fit the edge points, those that envelop the rest of the points, so we write a small program to identify them.We start by dividing the redshift into n bins and, for each of them, computing the maximum value of the absolute magnitude in that interval. Then that maximum $M_r$ of the interval is plotted against the left edge of the bin.
###Code
n=50
bin_z=np.linspace(min(z2), max(z2), n)
Mr_max=[] # list that will store the maximum Mr values
z3=[]
for i in range(len(bin_z)-1): # loop over the bins; the -1 is there because otherwise it breaks when using i+1
lista_i=[] #lista para cada i=bin
for j in range(len(z2)): #recorro el z2
if (bin_z[i] <= z2[j]) and (z2[j] < bin_z[i+1]): #condicion para que esté adentro del bin
lista_i.append(Mr[j]) #lo agrego a una lista
#print(len(lista_i))
if (len(lista_i) != 0): #le indico que no considere estos valores sino no le gusta
x=max(lista_i) #busco el máximo para cada bin
Mr_max.append(x)
z3.append(bin_z[i])
# plot the values to be fitted
plt.figure(figsize=(11,6))
plt.scatter(z2, Mr, s=20, color='orchid', label='Muestra')
plt.scatter(z3, Mr_max, marker='x', color='black', label='Puntos a ajustar')
plt.ylim(-24,-16)
plt.xlabel('z')
plt.ylabel('$M_r$')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
We can see that the points to fit depend on the number of bins 'n' chosen for the redshift: with too few points there may not be enough of them to perform the fit, while with too many a lot of noise is introduced. We take n=50. To fit the envelope of points we propose Eq. (1), where m (the apparent magnitude) is the parameter to fit.
###Code
# define the fitting function
def func(m):
lista=[]
for i in range(len(z3)):
lista.append(m-25- 5 * log10((c * z3[i]) / H))
return lista
###Output
_____no_output_____
###Markdown
We try different values of m to see which curve fits the points best:
###Code
plt.figure(figsize=(11,6))
plt.scatter(z2, Mr, s=20, color='orchid', label='Muestra')
plt.scatter(z3, Mr_max, marker='x', color='black', label='Puntos a ajustar')
plt.plot(z3, func(m=17.2), label='m=17.2')
plt.plot(z3, func(m=17.3), label='m=17.3')
plt.plot(z3, func(m=17.4), label='m=17.4')
plt.plot(z3, func(m=17.5), label='m=17.5')
plt.plot(z3, func(m=17.6), label='m=17.6')
plt.ylim(-24,-16)
plt.xlabel('z')
plt.ylabel('$M_r$')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
As m varies, the fitting function moves up and down along the $M_r$ axis. To see which one best fits the points, for each m we compute the $\chi^2$ value between func(i,m) and Mr_max(i) for every i in z3.
###Code
# chi-squared function
def chi2(x):
chi=0
for i in range(len(z3)):
chi=chi+((x[i] - Mr_max[i])**2/Mr_max[i])
return chi
# check the values for the plotted functions
print('chi2 para m=17.2:', chi2(func(m=17.2)))
print('chi2 para m=17.3:', chi2(func(m=17.3)))
print('chi2 para m=17.4:', chi2(func(m=17.4)))
print('chi2 para m=17.5:', chi2(func(m=17.5)))
print('chi2 para m=17.6:', chi2(func(m=17.6)))
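# A possible refinement (a sketch, assuming scipy is available; not part of the original
# analysis): find the m that minimises the sum of squared residuals directly.
from scipy.optimize import minimize_scalar
best = minimize_scalar(lambda m: sum((fm - mr) ** 2 for fm, mr in zip(func(m), Mr_max)),
                       bounds=(17.0, 18.0), method='bounded')
print('best-fit m:', best.x)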
###Output
chi2 para m=17.2: -0.19989430498627536
chi2 para m=17.3: -0.19072394317666652
chi2 para m=17.4: -0.22232237135734098
chi2 para m=17.5: -0.29468958952830093
chi2 para m=17.6: -0.40782559768954596
|
Supriya Kumari1.ipynb | ###Markdown
Writing Your First Python CodeEstimated time needed: **25** minutes ObjectivesAfter completing this lab you will be able to:* Write basic code in Python* Work with various types of data in Python* Convert the data from one type to another* Use expressions and variables to perform operations Table of Contents Say "Hello" to the world in Python What version of Python are we using? Writing comments in Python Errors in Python Does Python know about your error before it runs your code? Exercise: Your First Program Types of objects in Python Integers Floats Converting from one object type to a different object type Boolean data type Exercise: Types Expressions and Variables Expressions Exercise: Expressions Variables Exercise: Expression and Variables in Python Estimated time needed: 25 min Say "Hello" to the world in Python When learning a new programming language, it is customary to start with an "hello world" example. As simple as it is, this one line of code will ensure that we know how to print a string in output and how to execute code within cells in a notebook. [Tip]: To execute the Python code in the code cell below, click on the cell to select it and press Shift + Enter.
###Code
# Try your first Python output
print('Hello, Python!')
###Output
Hello, Python!
###Markdown
After executing the cell above, you should see that Python prints Hello, Python!. Congratulations on running your first Python code! [Tip:] print() is a function. You passed the string 'Hello, Python!' as an argument to instruct Python on what to print. What version of Python are we using? There are two popular versions of the Python programming language in use today: Python 2 and Python 3. The Python community has decided to move on from Python 2 to Python 3, and many popular libraries have announced that they will no longer support Python 2. Since Python 3 is the future, in this course we will be using it exclusively. How do we know that our notebook is executed by a Python 3 runtime? We can look in the top-right hand corner of this notebook and see "Python 3". We can also ask Python directly and obtain a detailed answer. Try executing the following code:
###Code
# Check the Python Version
import sys
print(sys.version)
###Output
3.6.13 | packaged by conda-forge | (default, Feb 19 2021, 05:36:01)
[GCC 9.3.0]
###Markdown
[Tip:] sys is a built-in module that contains many system-specific parameters and functions, including the Python version in use. Before using it, we must explicitly import it. Writing comments in Python In addition to writing code, note that it's always a good idea to add comments to your code. It will help others understand what you were trying to accomplish (the reason why you wrote a given snippet of code). Not only does this help other people understand your code, it can also serve as a reminder to you when you come back to it weeks or months later. To write comments in Python, use the number symbol `#` before writing your comment. When you run your code, Python will ignore everything past the `#` on a given line.
###Code
# Practice on writing comments
print('Hello, Python!') # This line prints a string
print('Hi')
###Output
Hello, Python!
Hi
###Markdown
After executing the cell above, you should notice that This line prints a string did not appear in the output, because it was a comment (and thus ignored by Python). The second line was also not executed because print('Hi') was preceded by the number sign (`#`) as well! Since this isn't an explanatory comment from the programmer, but an actual line of code, we might say that the programmer commented out that second line of code.
###Code
# Print string as error message
frint("Hello, Python!")
###Output
_____no_output_____
###Markdown
The error message tells you: where the error occurred (more useful in large notebook cells or scripts), and what kind of error it was (NameError) Here, Python attempted to run the function frint, but could not determine what frint is since it's not a built-in function and it has not been previously defined by us either. You'll notice that if we make a different type of mistake, by forgetting to close the string, we'll obtain a different error (i.e., a SyntaxError). Try it below:
###Code
# Try to see built-in error message
print("Hello, Python!)
###Output
_____no_output_____
###Markdown
Does Python know about your error before it runs your code? Python is what is called an interpreted language. Compiled languages examine your entire program at compile time, and are able to warn you about a whole class of errors prior to execution. In contrast, Python interprets your script line by line as it executes it. Python will stop executing the entire program when it encounters an error (unless the error is expected and handled by the programmer, a more advanced subject that we'll cover later on in this course). Try to run the code in the cell below and see what happens:
###Code
# Print string and error to see the running order
print("This will be printed")
frint("This will cause an error")
print("This will NOT be printed")
###Output
This will be printed
###Markdown
Exercise: Your First Program Generations of programmers have started their coding careers by simply printing "Hello, world!". You will be following in their footsteps.In the code cell below, use the print() function to print out the phrase: Hello, world!
###Code
# Write your code below. Don't forget to press Shift+Enter to execute the cell
print("Hello, world")
###Output
Hello, world
###Markdown
Click here for the solution```pythonprint("Hello, world!")``` Now, let's enhance your code with a comment. In the code cell below, print out the phrase: Hello, world! and comment it with the phrase Print the traditional hello world all in one line of code.
###Code
# Write your code below. Don't forget to press Shift+Enter to execute the cell
print('Hello, world!') #Print the traditional hello world
###Output
Hello, world!
###Markdown
Click here for the solution```pythonprint("Hello, world!") Print the traditional hello world``` Types of objects in Python Python is an object-oriented language. There are many different types of objects in Python. Let's start with the most common object types: strings, integers and floats. Anytime you write words (text) in Python, you're using character strings (strings for short). The most common numbers, on the other hand, are integers (e.g. -1, 0, 100) and floats, which represent real numbers (e.g. 3.14, -42.0). The following code cells contain some examples.
###Code
# Integer
11
# Float
2.14
# String
"Hello, Python 101!"
###Output
_____no_output_____
###Markdown
You can get Python to tell you the type of an expression by using the built-in type() function. You'll notice that Python refers to integers as int, floats as float, and character strings as str.
###Code
# Type of 12
type(12)
# Type of 2.14
type(2.14)
# Type of "Hello, Python 101!"
type("Hello, Python 101!")
###Output
_____no_output_____
###Markdown
In the code cell below, use the type() function to check the object type of 12.0.
###Code
# Write your code below. Don't forget to press Shift+Enter to execute the cell
type(12.0)
###Output
_____no_output_____
###Markdown
Click here for the solution```pythontype(12.0)``` Integers Here are some examples of integers. Integers can be negative or positive numbers: We can verify this is the case by using, you guessed it, the type() function:
###Code
# Print the type of -1
type(-1)
# Print the type of 4
type(4)
# Print the type of 0
type(0)
###Output
_____no_output_____
###Markdown
Floats Floats represent real numbers; they are a superset of integer numbers but also include "numbers with decimals". There are some limitations when it comes to machines representing real numbers, but floating point numbers are a good representation in most cases. You can learn more about the specifics of floats for your runtime environment, by checking the value of sys.float_info. This will also tell you what's the largest and smallest number that can be represented with them.Once again, can test some examples with the type() function:
###Code
# Print the type of 1.0
type(1.0) # Notice that 1 is an int, and 1.0 is a float
# Print the type of 0.5
type(0.5)
# Print the type of 0.56
type(0.56)
# System settings about float type
sys.float_info
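# A classic illustration of those limits (an aside, not part of the original lab):
# 0.1 + 0.2 is stored as 0.30000000000000004, so comparing it to 0.3 gives False
0.1 + 0.2 == 0.3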
###Output
_____no_output_____
###Markdown
Converting from one object type to a different object type You can change the type of the object in Python; this is called typecasting. For example, you can convert an integer into a float (e.g. 2 to 2.0).Let's try it:
###Code
# Verify that this is an integer
type(2)
###Output
_____no_output_____
###Markdown
Converting integers to floatsLet's cast integer 2 to float:
###Code
# Convert 2 to a float
float(2)
# Convert integer 2 to a float and check its type
type(float(2))
###Output
_____no_output_____
###Markdown
When we convert an integer into a float, we don't really change the value (i.e., the significand) of the number. However, if we cast a float into an integer, we could potentially lose some information. For example, if we cast the float 1.1 to integer we will get 1 and lose the decimal information (i.e., 0.1):
###Code
# Casting 1.1 to integer will result in loss of information
int(1.1)
###Output
_____no_output_____
###Markdown
Converting from strings to integers or floats Sometimes, we can have a string that contains a number within it. If this is the case, we can cast that string that represents a number into an integer using int():
###Code
# Convert a string into an integer
int('1')
###Output
_____no_output_____
###Markdown
But if you try to do so with a string that is not a perfect match for a number, you'll get an error. Try the following:
###Code
# Convert a string into an integer with error
int('1.0')
###Output
_____no_output_____
###Markdown
You can also convert strings containing floating point numbers into float objects:
###Code
# Convert the string "1.2" into a float
float('1.2 people')
###Output
_____no_output_____
###Markdown
[Tip:] Note that strings can be represented with single quotes ('1.2') or double quotes ("1.2"), but you can't mix both (e.g., "1.2'). Converting numbers to strings If we can convert strings to numbers, it is only natural to assume that we can convert numbers to strings, right?
###Code
# Convert an integer to a string
str(1)
###Output
_____no_output_____
###Markdown
And there is no reason why we shouldn't be able to make floats into strings as well:
###Code
# Convert a float to a string
str(1.2)
###Output
_____no_output_____
###Markdown
Boolean data type Boolean is another important type in Python. An object of type Boolean can take on one of two values: True or False:
###Code
# Value true
True
###Output
_____no_output_____
###Markdown
Notice that the value True has an uppercase "T". The same is true for False (i.e. you must use the uppercase "F").
###Code
# Value false
False
###Output
_____no_output_____
###Markdown
When you ask Python to display the type of a boolean object it will show bool which stands for boolean:
###Code
# Type of True
type(True)
# Type of False
type(False)
###Output
_____no_output_____
###Markdown
We can cast boolean objects to other data types. If we cast a boolean with a value of True to an integer or float we will get a one. If we cast a boolean with a value of False to an integer or float we will get a zero. Similarly, if we cast a 1 to a Boolean, you get a True. And if we cast a 0 to a Boolean we will get a False. Let's give it a try:
###Code
# Convert True to int
int(True)
# Convert 1 to boolean
bool(1)
# Convert 0 to boolean
bool(0)
# Convert True to float
float(True)
###Output
_____no_output_____
###Markdown
Exercise: Types What is the data type of the result of: 6 / 2?
###Code
# Write your code below. Don't forget to press Shift+Enter to execute the cell
type(6/2)
###Output
_____no_output_____
###Markdown
Click here for the solution```pythontype(6/2) float``` What is the type of the result of: 6 // 2? (Note the double slash //.)
###Code
# Write your code below. Don't forget to press Shift+Enter to execute the cell
type(6//2)
###Output
_____no_output_____
###Markdown
Click here for the solution```pythontype(6//2) int, as the double slashes stand for integer division ``` Expression and Variables Expressions Expressions in Python can include operations among compatible types (e.g., integers and floats). For example, basic arithmetic operations like adding multiple numbers:
###Code
# Addition operation expression
43 + 60 + 16 + 41
###Output
_____no_output_____
###Markdown
We can perform subtraction operations using the minus operator. In this case the result is a negative number:
###Code
# Subtraction operation expression
50 - 60
###Output
_____no_output_____
###Markdown
We can do multiplication using an asterisk:
###Code
# Multiplication operation expression
5 * 5
###Output
_____no_output_____
###Markdown
We can also perform division with the forward slash:
###Code
# Division operation expression
25 / 5
# Division operation expression
25 / 6
###Output
_____no_output_____
###Markdown
As seen in the quiz above, we can use the double slash for integer division, where the result is rounded down to the nearest integer:
###Code
# Integer division operation expression
25 // 5
# Integer division operation expression
25 // 6
###Output
_____no_output_____
###Markdown
Exercise: Expression Let's write an expression that calculates how many hours there are in 160 minutes:
###Code
# Write your code below. Don't forget to press Shift+Enter to execute the cell
160/60
160//60
###Output
_____no_output_____
###Markdown
Click here for the solution```python160/60 Or 160//60``` Python follows well accepted mathematical conventions when evaluating mathematical expressions. In the following example, Python adds 30 to the result of the multiplication (i.e., 120).
###Code
# Mathematical expression
30 + 2 * 60
###Output
_____no_output_____
###Markdown
And just like mathematics, expressions enclosed in parentheses have priority. So the following multiplies 32 by 60.
###Code
# Mathematical expression
(30 + 2) * 60
###Output
_____no_output_____
###Markdown
Variables Just like with most programming languages, we can store values in variables, so we can use them later on. For example:
###Code
# Store value into variable
x = 43 + 60 + 16 + 41
x
###Output
_____no_output_____
###Markdown
To see the value of x in a Notebook, we can simply place it on the last line of a cell:
###Code
# Print out the value in variable
x
###Output
_____no_output_____
###Markdown
We can also perform operations on x and save the result to a new variable:
###Code
# Use another variable to store the result of the operation between variable and value
y = x / 60
y
###Output
_____no_output_____
###Markdown
If we save a value to an existing variable, the new value will overwrite the previous value:
###Code
# Overwrite variable with new value
x = x / 60
x
###Output
_____no_output_____
###Markdown
It's a good practice to use meaningful variable names, so you and others can read the code and understand it more easily:
###Code
# Name the variables meaningfully
total_min = 43 + 42 + 57 # Total length of albums in minutes
total_min
# Name the variables meaningfully
total_hours = total_min / 60 # Total length of albums in hours
total_hours
###Output
_____no_output_____
###Markdown
In the cells above we added the length of three albums in minutes and stored it in total_min. We then divided it by 60 to calculate total length total_hours in hours. You can also do it all at once in a single expression, as long as you use parenthesis to add the albums length before you divide, as shown below.
###Code
# Complicate expression
total_hours = (43 + 42 + 57) / 60 # Total hours in a single expression
total_hours
###Output
_____no_output_____
###Markdown
If you'd rather have total hours as an integer, you can of course replace the floating point division with integer division (i.e., //). Exercise: Expression and Variables in Python What is the value of x where x = 3 + 2 * 2
###Code
# Write your code below. Don't forget to press Shift+Enter to execute the cell
x= 3+2*2
x
###Output
_____no_output_____
###Markdown
Click here for the solution```python7``` What is the value of y where y = (3 + 2) * 2?
###Code
# Write your code below. Don't forget to press Shift+Enter to execute the cell
y= (3+2)*2
y
###Output
_____no_output_____
###Markdown
Click here for the solution```python10``` What is the value of z where z = x + y?
###Code
# Write your code below. Don't forget to press Shift+Enter to execute the cell
z=x+y
z
###Output
_____no_output_____ |
dmu1/dmu1_ml_Herschel-Stripe-82/1.5_PanSTARRS-3SS.ipynb | ###Markdown
Herschel Stripe 82 master catalogue Preparation of Pan-STARRS1 - 3pi Steradian Survey (3SS) dataThis catalogue comes from `dmu0_PanSTARRS1-3SS`.In the catalogue, we keep:- The `uniquePspsSTid` as unique object identifier;- The r-band position which is given for all the sources;- The grizy `FApMag` aperture magnitude (see below);- The grizy `FKronMag` as total magnitude.The Pan-STARRS1-3SS catalogue provides for each band an aperture magnitude defined as “In PS1, an 'optimal' aperture radius is determined based on the local PSF. The wings of the same analytic PSF are then used to extrapolate the flux measured inside this aperture to a 'total' flux.”The observations used for the catalogue were done between 2010 and 2015 ([ref](https://confluence.stsci.edu/display/PANSTARRS/PS1+Image+data+products)).**TODO**: Check if the detection flag can be used to know in which bands an object was detected to construct the coverage maps.**TODO**: Check for stellarity.
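Further down, the grizy magnitudes are converted to fluxes in µJy with `mag_to_flux`. For reference, a minimal sketch of the standard AB-magnitude relation that such a conversion is expected to follow (the helper used below may differ in detail; the function name here is just illustrative):
```python
import numpy as np

def ab_mag_to_flux_ujy(mag, mag_err):
    """AB magnitude and error to flux density in microJy: f[Jy] = 10**((8.9 - m) / 2.5)."""
    flux = 10 ** ((8.9 - mag) / 2.5) * 1.e6
    flux_err = flux * np.log(10) / 2.5 * mag_err  # first-order error propagation
    return flux, flux_err
```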
###Code
from herschelhelp_internal import git_version
print("This notebook was run with herschelhelp_internal version: \n{}".format(git_version()))
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
from collections import OrderedDict
import os
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.table import Column, Table
import numpy as np
from herschelhelp_internal.flagging import gaia_flag_column
from herschelhelp_internal.masterlist import nb_astcor_diag_plot, remove_duplicates
from herschelhelp_internal.utils import astrometric_correction, mag_to_flux
OUT_DIR = os.environ.get('TMP_DIR', "./data_tmp")
try:
os.makedirs(OUT_DIR)
except FileExistsError:
pass
RA_COL = "ps1_ra"
DEC_COL = "ps1_dec"
###Output
_____no_output_____
###Markdown
I - Column selection
###Code
imported_columns = OrderedDict({
"objID": "ps1_id",
"raMean": "ps1_ra",
"decMean": "ps1_dec",
"gFApMag": "m_ap_gpc1_g",
"gFApMagErr": "merr_ap_gpc1_g",
"gFKronMag": "m_gpc1_g",
"gFKronMagErr": "merr_gpc1_g",
"rFApMag": "m_ap_gpc1_r",
"rFApMagErr": "merr_ap_gpc1_r",
"rFKronMag": "m_gpc1_r",
"rFKronMagErr": "merr_gpc1_r",
"iFApMag": "m_ap_gpc1_i",
"iFApMagErr": "merr_ap_gpc1_i",
"iFKronMag": "m_gpc1_i",
"iFKronMagErr": "merr_gpc1_i",
"zFApMag": "m_ap_gpc1_z",
"zFApMagErr": "merr_ap_gpc1_z",
"zFKronMag": "m_gpc1_z",
"zFKronMagErr": "merr_gpc1_z",
"yFApMag": "m_ap_gpc1_y",
"yFApMagErr": "merr_ap_gpc1_y",
"yFKronMag": "m_gpc1_y",
"yFKronMagErr": "merr_gpc1_y"
})
catalogue = Table.read("../../dmu0/dmu0_PanSTARRS1-3SS/data/PanSTARRS1-3SS_Herschel-Stripe-82_v2.fits")[list(imported_columns)]
for column in imported_columns:
catalogue[column].name = imported_columns[column]
epoch = 2012
# Clean table metadata
catalogue.meta = None
# Adding flux and band-flag columns
for col in catalogue.colnames:
if col.startswith('m_'):
errcol = "merr{}".format(col[1:])
# -999 is used for missing values
catalogue[col][catalogue[col] < -900] = np.nan
catalogue[errcol][catalogue[errcol] < -900] = np.nan
flux, error = mag_to_flux(np.array(catalogue[col]), np.array(catalogue[errcol]))
# Fluxes are added in µJy
catalogue.add_column(Column(flux * 1.e6, name="f{}".format(col[1:])))
catalogue.add_column(Column(error * 1.e6, name="f{}".format(errcol[1:])))
# Band-flag column
if "ap" not in col:
catalogue.add_column(Column(np.zeros(len(catalogue), dtype=bool), name="flag{}".format(col[1:])))
# TODO: Set to True the flag columns for fluxes that should not be used for SED fitting.
catalogue[:10].show_in_notebook()
###Output
_____no_output_____
###Markdown
II - Removal of duplicated sources We remove duplicated objects from the input catalogues.
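The removal itself is done by `remove_duplicates` from `herschelhelp_internal.masterlist`. As a rough conceptual sketch (an assumption about its behaviour, not the actual implementation), duplicates can be thought of as sources lying within a small radius of each other, keeping the entry with the smallest error in the sort columns:

```python
# Conceptual sketch of positional de-duplication (assumed behaviour only).
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

def dedup_sketch(table, ra_col, dec_col, err_col, radius=0.4 * u.arcsec):
    coords = SkyCoord(np.array(table[ra_col]) * u.deg, np.array(table[dec_col]) * u.deg)
    # Nearest *other* source for every row (nthneighbor=2 skips the self-match)
    idx, sep2d, _ = coords.match_to_catalog_sky(coords, nthneighbor=2)
    keep = np.ones(len(table), dtype=bool)
    for i in np.where(sep2d < radius)[0]:
        j = idx[i]
        # Of each close pair, drop the member with the larger error
        if table[err_col][i] > table[err_col][j]:
            keep[i] = False
    return table[keep]
```

Groups of three or more nearby sources would need extra care; the sketch only handles simple pairs.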
###Code
SORT_COLS = ['merr_ap_gpc1_r', 'merr_ap_gpc1_g', 'merr_ap_gpc1_i', 'merr_ap_gpc1_z', 'merr_ap_gpc1_y']
FLAG_NAME = 'ps1_flag_cleaned'
nb_orig_sources = len(catalogue)
catalogue = remove_duplicates(catalogue, RA_COL, DEC_COL, sort_col=SORT_COLS, flag_name=FLAG_NAME)
nb_sources = len(catalogue)
print("The initial catalogue had {} sources.".format(nb_orig_sources))
print("The cleaned catalogue has {} sources ({} removed).".format(nb_sources, nb_orig_sources - nb_sources))
print("The cleaned catalogue has {} sources flagged as having been cleaned".format(np.sum(catalogue[FLAG_NAME])))
###Output
/opt/anaconda3/envs/herschelhelp_internal/lib/python3.6/site-packages/astropy/table/column.py:1096: MaskedArrayFutureWarning: setting an item on a masked array which has a shared mask will not copy the mask and also change the original mask array in the future.
Check the NumPy 1.11 release notes for more information.
ma.MaskedArray.__setitem__(self, index, value)
###Markdown
III - Astrometry correctionWe match the astrometry to the Gaia one. We limit the Gaia catalogue to sources with a g-band flux between the 30th and the 70th percentile. Some quick tests show that this gives the lowest dispersion in the results.
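A sketch of how such a flux cut could be expressed once the Gaia table is loaded in the next cell is shown below; the `phot_g_mean_flux` column name and the exact selection applied inside `astrometric_correction` are assumptions.

```python
import numpy as np

# Keep Gaia sources with a g-band flux between the 30th and 70th percentiles (sketch).
g_flux = np.array(gaia["phot_g_mean_flux"], dtype=float)
lo, hi = np.nanpercentile(g_flux, [30, 70])
in_range = (g_flux >= lo) & (g_flux <= hi)
gaia_subset = gaia[in_range]
print("Keeping {} of {} Gaia sources".format(in_range.sum(), len(gaia)))
```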
###Code
gaia = Table.read("../../dmu0/dmu0_GAIA/data/GAIA_Herschel-Stripe-82.fits")
gaia_coords = SkyCoord(gaia['ra'], gaia['dec'])
nb_astcor_diag_plot(catalogue[RA_COL], catalogue[DEC_COL],
gaia_coords.ra, gaia_coords.dec, near_ra0=True)
delta_ra, delta_dec = astrometric_correction(
SkyCoord(catalogue[RA_COL], catalogue[DEC_COL]),
gaia_coords, near_ra0=True
)
print("RA correction: {}".format(delta_ra))
print("Dec correction: {}".format(delta_dec))
catalogue[RA_COL] += delta_ra.to(u.deg)
catalogue[DEC_COL] += delta_dec.to(u.deg)
nb_astcor_diag_plot(catalogue[RA_COL], catalogue[DEC_COL],
gaia_coords.ra, gaia_coords.dec, near_ra0=True)
###Output
_____no_output_____
###Markdown
IV - Flagging Gaia objects
###Code
catalogue.add_column(
gaia_flag_column(SkyCoord(catalogue[RA_COL], catalogue[DEC_COL]), epoch, gaia)
)
GAIA_FLAG_NAME = "ps1_flag_gaia"
catalogue['flag_gaia'].name = GAIA_FLAG_NAME
print("{} sources flagged.".format(np.sum(catalogue[GAIA_FLAG_NAME] > 0)))
###Output
881634 sources flagged.
###Markdown
V - Flagging objects near bright stars VI - Saving to disk
###Code
catalogue.write("{}/PS1.fits".format(OUT_DIR), overwrite=True)
###Output
_____no_output_____ |
Cap01/JupyterNotebook-ManualUsuario.ipynb | ###Markdown
Manual Jupyter Notebook:https://athena.brynmawr.edu/jupyter/hub/dblank/public/Jupyter%20Notebook%20Users%20Manual.ipynb Jupyter Notebook Users ManualThis page describes the functionality of the [Jupyter](http://jupyter.org) electronic document system. Jupyter documents are called "notebooks" and can be seen as many things at once. For example, notebooks allow:* creation in a **standard web browser*** direct **sharing*** using **text with styles** (such as italics and titles) to be explicitly marked using a [wikitext language](http://en.wikipedia.org/wiki/Wiki_markup)* easy creation and display of beautiful **equations*** creation and execution of interactive embedded **computer programs*** easy creation and display of **interactive visualizations**Jupyter notebooks (previously called "IPython notebooks") are thus interesting and useful to different groups of people:* readers who want to view and execute computer programs* authors who want to create executable documents or documents with visualizations Table of Contents* [1. Getting to Know your Jupyter Notebook's Toolbar](1.-Getting-to-Know-your-Jupyter-Notebook's-Toolbar)* [2. Different Kinds of Cells](2.-Different-Kinds-of-Cells) * [2.1 Code Cells](2.1-Code-Cells) * [2.1.1 Code Cell Layout](2.1.1-Code-Cell-Layout) * [2.1.1.1 Row Configuration (Default Setting)](2.1.1.1-Row-Configuration-%28Default-Setting%29) * [2.1.1.2 Cell Tabbing](2.1.1.2-Cell-Tabbing) * [2.1.1.3 Column Configuration](2.1.1.3-Column-Configuration) * [2.2 Markdown Cells](2.2-Markdown-Cells) * [2.3 Raw Cells](2.3-Raw-Cells) * [2.4 Header Cells](2.4-Header-Cells) * [2.4.1 Linking](2.4.1-Linking) * [2.4.2 Automatic Section Numbering and Table of Contents Support](2.4.2-Automatic-Section-Numbering-and-Table-of-Contents-Support) * [2.4.2.1 Automatic Section Numbering](2.4.2.1-Automatic-Section-Numbering) * [2.4.2.2 Table of Contents Support](2.4.2.2-Table-of-Contents-Support) * [2.4.2.3 Using Both Automatic Section Numbering and Table of Contents Support](2.4.2.3-Using-Both-Automatic-Section-Numbering-and-Table-of-Contents-Support)* [3. Keyboard Shortcuts](3.-Keyboard-Shortcuts)* [4. 
Using Markdown Cells for Writing](4.-Using-Markdown-Cells-for-Writing) * [4.1 Block Elements](4.1-Block-Elements) * [4.1.1 Paragraph Breaks](4.1.1-Paragraph-Breaks) * [4.1.2 Line Breaks](4.1.2-Line-Breaks) * [4.1.2.1 Hard-Wrapping and Soft-Wrapping](4.1.2.1-Hard-Wrapping-and-Soft-Wrapping) * [4.1.2.2 Soft-Wrapping](4.1.2.2-Soft-Wrapping) * [4.1.2.3 Hard-Wrapping](4.1.2.3-Hard-Wrapping) * [4.1.3 Headers](4.1.3-Headers) * [4.1.4 Block Quotes](4.1.4-Block-Quotes) * [4.1.4.1 Standard Block Quoting](4.1.4.1-Standard-Block-Quoting) * [4.1.4.2 Nested Block Quoting](4.1.4.2-Nested-Block-Quoting) * [4.1.5 Lists](4.1.5-Lists) * [4.1.5.1 Ordered Lists](4.1.5.1-Ordered-Lists) * [4.1.5.2 Bulleted Lists](4.1.5.2-Bulleted-Lists) * [4.1.6 Section Breaks](4.1.6-Section-Breaks) * [4.2 Backslash Escape](4.2-Backslash-Escape) * [4.3 Hyperlinks](4.3-Hyperlinks) * [4.3.1 Automatic Links](4.3.1-Automatic-Links) * [4.3.2 Standard Links](4.3.2-Standard-Links) * [4.3.3 Standard Links With Mouse-Over Titles](4.3.3-Standard-Links-With-Mouse-Over-Titles) * [4.3.4 Reference Links](4.3.4-Reference-Links) * [4.3.5 Notebook-Internal Links](4.3.5-Notebook-Internal-Links) * [4.3.5.1 Standard Notebook-Internal Links Without Mouse-Over Titles](4.3.5.1-Standard-Notebook-Internal-Links-Without-Mouse-Over-Titles) * [4.3.5.2 Standard Notebook-Internal Links With Mouse-Over Titles](4.3.5.2-Standard-Notebook-Internal-Links-With-Mouse-Over-Titles) * [4.3.5.3 Reference-Style Notebook-Internal Links](4.3.5.3-Reference-Style-Notebook-Internal-Links) * [4.4 Tables](4.4-Tables) * [4.4.1 Cell Justification](4.4.1-Cell-Justification) * [4.5 Style and Emphasis](4.5-Style-and-Emphasis) * [4.6 Other Characters](4.6-Other-Characters) * [4.7 Including Code Examples](4.7-Including-Code-Examples) * [4.8 Images](4.8-Images) * [4.8.1 Images from the Internet](4.8.1-Images-from-the-Internet) * [4.8.1.1 Reference-Style Images from the Internet](4.8.1.1-Reference-Style-Images-from-the-Internet) * [4.9 LaTeX Math](4.9-LaTeX-Math)* [5. Bibliographic Support](5.-Bibliographic-Support) * [5.1 Creating a Bibtex Database](5.1-Creating-a-Bibtex-Database) * [5.1.1 External Bibliographic Databases](5.1.1-External-Bibliographic-Databases) * [5.1.2 Internal Bibliographic Databases](5.1.2-Internal-Bibliographic-Databases) * [5.1.2.1 Hiding Your Internal Database](5.1.2.1-Hiding-Your-Internal-Database) * [5.1.3 Formatting Bibtex Entries](5.1.3-Formatting-Bibtex-Entries) * [5.2 Cite Commands and Citation IDs](5.2-Cite-Commands-and-Citation-IDs)* [6. Turning Your Jupyter Notebook into a Slideshow](6.-Turning-Your-Jupyter-Notebook-into-a-Slideshow) 1. Getting to Know your Jupyter Notebook's Toolbar At the top of your Jupyter Notebook window there is a toolbar. It looks like this:  Below is a table which helpfully pairs a picture of each of the items in your toolbar with a corresponding explanation of its function. Button|Function-|-|This is your save button. You can click this button to save your notebook at any time, though keep in mind that Jupyter Notebooks automatically save your progress very frequently. |This is the new cell button. You can click this button any time you want a new cell in your Jupyter Notebook. |This is the cut cell button. If you click this button, the cell you currently have selected will be deleted from your Notebook. |This is the copy cell button. If you click this button, the currently selected cell will be duplicated and stored in your clipboard. |This is the past button. 
It allows you to paste the duplicated cell from your clipboard into your notebook. |These buttons allow you to move the location of a selected cell within a Notebook. Simply select the cell you wish to move and click either the up or down button until the cell is in the location you want it to be.|This button will "run" your cell, meaning that it will interpret your input and render the output in a way that depends on [what kind of cell] [cell kind] you're using. |This is the stop button. Clicking this button will stop your cell from continuing to run. This tool can be useful if you are trying to execute more complicated code, which can sometimes take a while, and you want to edit the cell before waiting for it to finish rendering. |This is the restart kernel button. See your kernel documentation for more information.|This is a drop down menu which allows you to tell your Notebook how you want it to interpret any given cell. You can read more about the [different kinds of cells] [cell kind] in the following section. |Individual cells can have their own toolbars. This is a drop down menu from which you can select the type of toolbar that you'd like to use with the cells in your Notebook. Some of the options in the cell toolbar menu will only work in [certain kinds of cells][cell kind]. "None," which is how you specify that you do not want any cell toolbars, is the default setting. If you select "Edit Metadata," a toolbar that allows you to edit data about [Code Cells][code cells] directly will appear in the corner of all the Code cells in your notebook. If you select "Raw Cell Format," a tool bar that gives you several formatting options will appear in the corner of all your [Raw Cells][raw cells]. If you want to view and present your notebook as a slideshow, you can select "Slideshow" and a toolbar that enables you to organize your cells in to slides, sub-slides, and slide fragments will appear in the corner of every cell. Go to [this section][slideshow] for more information on how to create a slideshow out of your Jupyter Notebook. |These buttons allow you to move the location of an entire section within a Notebook. Simply select the Header Cell for the section or subsection you wish to move and click either the up or down button until the section is in the location you want it to be. If your have used [Automatic Section Numbering][section numbering] or [Table of Contents Support][table of contents] remember to rerun those tools so that your section numbers or table of contents reflects your Notebook's new organization. |Clicking this button will automatically number your Notebook's sections. For more information, check out the Reference Guide's [section on Automatic Section Numbering][section numbering].|Clicking this button will generate a table of contents using the titles you've given your Notebook's sections. For more information, check out the Reference Guide's [section on Table of Contents Support][table of contents].|Clicking this button will search your document for [cite commands][] and automatically generate intext citations as well as a references cell at the end of your Notebook. 
For more information, you can read the Reference Guide's [section on Bibliographic Support][bib support].|Clicking this button will toggle [cell tabbing][], which you can learn more about in the Reference Guides' [section on the layout options for Code Cells][cell layout].|Clicking this button will toggle the [collumn configuration][] for Code Cells, which you can learn more about in the Reference Guides' [section on the layout options for Code Cells][cell layout].|Clicking this button will toggle spell checking. Spell checking only works in unrendered [Markdown Cells][] and [Header Cells][]. When spell checking is on all incorrectly spelled words will be underlined with a red squiggle. Keep in mind that the dictionary cannot tell what are [Markdown][md writing] commands and what aren't, so it will occasionally underline a correctly spelled word surrounded by asterisks, brackets, or other symbols that have specific meaning in Markdown. [cell kind]: 2.-Different-Kinds-of-Cells "Different Kinds of Cells"[code cells]: 2.1-Code-Cells "Code Cells"[raw cells]: 2.3-Raw-Cells "Raw Cells"[slideshow]: 6.-Turning-Your-Jupyter-Notebook-into-a-Slideshow "Turning Your Jupyter Notebook Into a Slideshow"[section numbering]: 2.4.2.1-Automatic-Section-Numbering[table of contents]: 2.4.2.2-Table-of-Contents-Support[cell tabbing]: 2.1.1.2-Cell-Tabbing[cell layout]: 2.1.1-Code-Cell-Layout[bib support]: 5.-Bibliographic-Support[cite commands]: 5.2-Cite-Commands-and-Citation-IDs[md writing]: 4.-Using-Markdown-Cells-for-Writing[collumn configuration]: 2.1.1.3-Column-Configuration[Markdown Cells]: 2.2-Markdown-Cells[Header Cells]: 2.4-Header-Cells 2. Different Kinds of Cells There are essentially four kinds of cells in your Jupyter notebook: Code Cells, Markdown Cells, Raw Cells, and Header Cells, though there are six levels of Header Cells. 2.1 Code Cells By default, Jupyter Notebooks' Code Cells will execute Python. Jupyter Notebooks generally also support JavaScript, Python, HTML, and Bash commands. For a more comprehensive list, see your Kernel's documentation. 2.1.1 Code Cell Layout Code cells have both an input and an output component. You can view these components in three different ways. 2.1.1.1 Row Configuration (Default Setting) Unless you specific otherwise, your Code Cells will always be configured this way, with both the input and output components appearing as horizontal rows and with the input above the output. Below is an example of a Code Cell in this default setting:
###Code
2 + 3
###Output
_____no_output_____
###Markdown
2.1.1.2 Cell Tabbing Cell tabbing allows you to look at the input and output components of a cell separately. It also allows you to hide either component behind the other, which can be useful when creating visualizations of data. Below is an example of a tabbed Code Cell:
###Code
2 + 3
###Output
_____no_output_____
###Markdown
2.1.1.3 Column Configuration Like the row configuration, the column layout option allows you to look at both the input and the output components at once. In the column layout, however, the two components appear beside one another, with the input on the left and the output on the right. Below is an example of a Code Cell in the column configuration:
###Code
2 + 3
###Output
_____no_output_____ |
notebooks/statistics.ipynb | ###Markdown
Finchat: StatisticsSee the readme file for detailed information and for the labels of the data and columns.
###Code
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
import seaborn as sns
from collections import Counter
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
# Loading the corpus
chat_data = pd.read_csv('../finchat-corpus/finchat_chat_conversations.csv')
meta_data = pd.read_csv('../finchat-corpus/finchat_meta_data.csv')
chat_data.dtypes
meta_data.dtypes
###Output
_____no_output_____
###Markdown
The number of conversations and participants.
###Code
print('The number of the conversations:')
print(chat_data['CHAT_ID'].unique().size)
print(meta_data['CHAT_ID'].unique().size)
print('The number of the users:')
print(chat_data['SPEAKER_ID'].unique().size)
###Output
The number of the conversations:
86
85
The number of the users:
64
###Markdown
Topic and group statistics
###Code
# Changing types
meta_data["GROUP"] = meta_data["GROUP"].astype('category')
meta_data["TOPIC"] = meta_data["TOPIC"].astype('category')
meta_data["OFFTOPIC"] = meta_data["OFFTOPIC"].astype('category')
meta_data[['Q1', 'Q2', 'Q3', 'Q4', 'Q5']] = meta_data[['Q1', 'Q2', 'Q3', 'Q4', 'Q5']].astype('category')
print('groups')
print(meta_data["GROUP"].value_counts())
print()
print('topics')
print(meta_data["TOPIC"].value_counts())
print()
print('offtopics')
print(meta_data["OFFTOPIC"].value_counts())
print()
###Output
groups
1 82
3 61
2 20
Name: GROUP, dtype: int64
topics
sports 44
literature 31
tv 24
traveling 24
food 18
movies 14
music 8
Name: TOPIC, dtype: int64
offtopics
0 137
school 12
jokes 4
miscellaneous 3
career 3
literature 2
technology 1
sports 1
Name: OFFTOPIC, dtype: int64
###Markdown
Filter into smaller setsChoose only a specific user group or topic to analyze.
###Code
# Filter by topic
#meta_subset = meta_data.loc[meta_data['TOPIC'] == 'tv']
#chat_id_list = meta_subset['CHAT_ID'].tolist()
#chat_subset = chat_data.loc[chat_data['CHAT_ID'].isin(chat_id_list)]
# Filter by group
# University staff = 1, university student = 2, high schooler = 3
#meta_subset = meta_data.loc[meta_data['GROUP'] == 3]
#chat_id_list = meta_subset['CHAT_ID'].tolist()
#chat_subset = chat_data.loc[chat_data['CHAT_ID'].isin(chat_id_list)]
# Everything: No subset
chat_id_list = meta_data['CHAT_ID'].tolist()
meta_subset = meta_data
chat_subset = chat_data
###Output
_____no_output_____
###Markdown
Clean text
###Code
# Clean text from commas etc to compute word statistics
sentences = chat_subset['TEXT'].tolist()
sentences_clean = []
word_list_text = ''
for sentence in sentences:
for ch in ['.','!','?',')','(',':',',']:
if ch in sentence:
sentence = sentence.replace(ch,'')
sentences_clean.append(sentence.lower())
word_list_text = word_list_text+" "+sentence.lower()
word_list = word_list_text.split()
###Output
_____no_output_____
###Markdown
Words Common words
###Code
word_count = Counter(word_list).most_common()
#print(word_count)
# Generate a word cloud image
stopwords=[]
wordcloud = WordCloud(background_color="white", width=1600, height=1000).generate(word_list_text)
plt.figure( figsize=(20,10) )
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.figure(figsize=[10,10])
plt.show()
###Output
_____no_output_____
###Markdown
Word length
###Code
# Word lengths
word_lengths = [] #np.zeros(30)
for word in word_list:
#ord_lengths[len(word)] += 1
word_lengths.append(len(word))
if len(word) > 25: print(word)
#plt.hist(word_lengths, bins=range(20))
#plt.xticks(range(20))
#plt.xlabel('word length')
#plt.ylabel('word count')
#plt.title('word lengths in corpus')
print('')
print('average word length:', np.mean(word_lengths))
# Word length distribution
sns.distplot(word_lengths, kde=False, norm_hist=True);
plt.xlim(0, 20)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Statistics Turns per conversation
###Code
# for each conversation
# see how many times the speaker id changes
turn_counter = np.zeros(len(chat_id_list))
for i in range(len(chat_id_list)):
chat = chat_data.loc[chat_data['CHAT_ID'] == chat_id_list[i]]
speaker_id_list = chat['SPEAKER_ID'].tolist()
prev_id = 0
for j in speaker_id_list:
if prev_id != j:
turn_counter[i] += 1
prev_id = j
turn_counter[i] = np.floor(turn_counter[i]/2)
print(np.mean(turn_counter))
###Output
14.05521472392638
###Markdown
Words in corpus
###Code
# Words in corpus
len(word_list)
###Output
_____no_output_____
###Markdown
Characters in corpus
###Code
total_char = 0
for word in word_list:
total_char += len(word)
print(total_char)
###Output
125121
###Markdown
Messages in corpus
###Code
print(len(sentences))
###Output
3628
###Markdown
Conversations
###Code
print('The number of the conversations:')
print(chat_subset['CHAT_ID'].unique().size)
###Output
The number of the conversations:
85
###Markdown
Questionnaire scoresAnalyze questionnaire scores. See the readme documentation of the corpus for the actual questions.
###Code
# Answers for each question
print(meta_subset["Q1"].value_counts())
print(meta_subset["Q2"].value_counts())
print(meta_subset["Q3"].value_counts())
print(meta_subset["Q4"].value_counts())
print(meta_subset["Q5"].value_counts())
###Output
1 120
2 43
Name: Q1, dtype: int64
1 157
2 6
Name: Q2, dtype: int64
1 128
2 35
Name: Q3, dtype: int64
3 92
1 33
2 22
0 16
Name: Q4, dtype: int64
3 97
1 35
0 16
2 15
Name: Q5, dtype: int64
###Markdown
The rate of interesting conversations
###Code
print(meta_subset["Q1"].value_counts()/sum(meta_subset["Q1"].value_counts()))
# All Questions
print(meta_subset["Q1"].value_counts()/sum(meta_subset["Q1"].value_counts()))
print(meta_subset["Q2"].value_counts()/sum(meta_subset["Q2"].value_counts()))
print(meta_subset["Q3"].value_counts()/sum(meta_data["Q3"].value_counts()))
print(meta_subset["Q4"].value_counts(sort=False)/sum(meta_subset["Q4"].value_counts()))
print(meta_subset["Q5"].value_counts(sort=False)/sum(meta_subset["Q5"].value_counts()))
###Output
1 0.736196
2 0.263804
Name: Q1, dtype: float64
1 0.96319
2 0.03681
Name: Q2, dtype: float64
1 0.785276
2 0.214724
Name: Q3, dtype: float64
0 0.098160
1 0.202454
2 0.134969
3 0.564417
Name: Q4, dtype: float64
0 0.098160
1 0.214724
2 0.092025
3 0.595092
Name: Q5, dtype: float64
###Markdown
Pie plots
###Code
# Pie charts for question Q1-Q3.
# For whole data set. Change meta_data -> meta_subset if want to plot for filtered subset.
######## Q1
# Pie chart
labels = ['Yes', 'No']
sizes = np.array(meta_data["Q1"].value_counts())
#colors
colors = ['#ff9999','#66b3ff']
colors = ['tomato','lightskyblue']
#explosion
explode = (0.05,0.05)
fig1, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=(9,3.5))
ax1.pie(sizes, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90)
ax1.set_title('Conversation was interesting.')
# Equal aspect ratio ensures that pie is drawn as a circle
ax1.axis('equal')
plt.tight_layout()
#plt.show()
######## Q2
sizes2 = np.array(meta_data["Q2"].value_counts())
ax2.pie(sizes2, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90)
ax2.set_title('I was listened to.')
# Equal aspect ratio ensures that pie is drawn as a circle
ax2.axis('equal')
plt.tight_layout()
#plt.show()
######## Q3
sizes3 = np.array(meta_data["Q3"].value_counts())
ax3.pie(sizes3, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90)
ax3.set_title('Stayed on topic.')
# Equal aspect ratio ensures that pie is drawn as a circle
ax3.axis('equal')
plt.tight_layout()
# Save plot
#plt.savefig('figs/q1_q2_q3_pie.png', bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
More details
###Code
# Adding disagreement
# Q1
diff_list = []
pos_sum = 0 # both thought interesting
neg_sum = 0 # both thought not interesting
diff_sum = 0 # disagreement
no_fb = 0
for chatid in chat_id_list:
meta_conv = meta_data.loc[meta_data['CHAT_ID'] == chatid]
#meta_conv = meta_data.loc[meta_data['CHAT_ID'].isin(chatid)]
# Q1
scores = meta_conv['Q1'].tolist()
if len(scores) == 2:
diff_list.append(abs(scores[0]-scores[1]))
if scores[0] + scores[1] == 2:
pos_sum += 1
elif scores[0] + scores[1] == 4:
neg_sum += 1
else:
diff_sum += 1
else:
#print(scores)
no_fb += 1
# Participants who both agreed that the conversation was interesting or not interesting.
#print(1-sum(diff_list)/len(diff_list))
#print('interesting')
#print(pos_sum/(pos_sum+neg_sum+diff_sum))
#print('not interesting')
#print(neg_sum/(pos_sum+neg_sum+diff_sum))
#print('disagreement')
#print(diff_sum/(pos_sum+neg_sum+diff_sum))
labels = ['Interesting', 'Not interesting', 'Disagreed']
sizes = np.array([pos_sum, neg_sum, diff_sum])
colors = ['tomato', 'lightskyblue','lightgray']
fig1, ax1 = plt.subplots(1,1, figsize=(7,3))
ax1.pie(sizes, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90)
ax1.set_title('Was conversation interesting?')
# Equal aspect ratio ensures that pie is drawn as a circle
ax1.axis('equal')
plt.tight_layout()
#plt.savefig('figs/q1_pie_wdis_staff.png', bbox_inches='tight', dpi=300)
# Similarities between answers for Q4 and Q5
q4_q5_same_sum = 0
q4_q5_notsame_sum = 0
q4_agreed_i = 0 # individual
q4_agreed_b = 0 # both
q4_disa_i = 0
q4_disa_b = 0
q4_disa = 0
q5_agreed_i = 0 # individual
q5_agreed_b = 0 # both
q5_disa = 0
pos_sun = 0
diff_sum = 0
count=0
for chatid in chat_id_list:
# Q4 and Q5 have same answer for same person
meta_conv = meta_data.loc[meta_data['CHAT_ID'] == chatid]
scores_q4 = meta_conv['Q4'].tolist()
scores_q5 = meta_conv['Q5'].tolist()
if len(scores_q4) == 2:
if scores_q4[0] == scores_q5[0]:
q4_q5_same_sum += 1
else: q4_q5_notsame_sum += 1
if scores_q4[1] == scores_q5[1]:
q4_q5_same_sum += 1
else: q4_q5_notsame_sum += 1
# For same conversation both agreed with the leader:
# answers 1, 2 OR 3, 3. Q4
if scores_q4[0] != scores_q4[1] and scores_q4[0] != 3 and scores_q4[1] != 3:
q4_agreed_i += 1
elif scores_q4[0] == scores_q4[1] and scores_q4[0] == 3:
q4_agreed_b += 1
elif scores_q4[0] != scores_q4[1] and (scores_q4[0] == 3 or scores_q4[1] == 3):
q4_disa_b += 1
else: q4_disa += 1
# For same conversation both agreed with the leader:
# answers 1, 2 OR 3, 3. Q5
if scores_q5[0] != scores_q5[1] and scores_q5[0] != 3 and scores_q5[1] != 3:
q5_agreed_i += 1
elif scores_q5[0] == scores_q5[1] and scores_q5[0] == 3:
q5_agreed_b += 1
else: q5_disa += 1
else: count+=1
print('Asking more questions and leading the conversation (own opinion)')
print(q4_q5_same_sum/(q4_q5_same_sum+q4_q5_notsame_sum))
print('Agreeing who is asking the questions (Q4)')
print((q4_agreed_b+q4_agreed_i)/(q4_agreed_i+q4_agreed_b+q4_disa))
print('Agreeing who is the leader (Q5)')
print((q5_agreed_i+q5_agreed_b)/(q5_agreed_i+q5_agreed_b+q5_disa))
# Plots with disagreement and no answers.
# Combine
font = {'family' : 'sans-serif',
'weight' : 'normal',
'size' : 11.5}
plt.rc('font', **font)
######## Q1
#explosion
explode = (0.05,0.05)
fig1, [ax1, ax2, ax3] = plt.subplots(1,3, figsize=(10,3.5))
labels = ['Yes', 'No', 'Disagreed']
sizes = np.array([pos_sum, neg_sum, diff_sum])
colors = ['tomato', 'lightskyblue','lightgray']
ax1.pie(sizes, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90, textprops={'fontsize': 13})
ax1.set_title('1. Conversation was interesting.')
# Equal aspect ratio ensures that pie is drawn as a circle
ax1.axis('equal')
plt.tight_layout()
# Q4 & Q5
# Pie chart
labels = ['No answer', 'Me', 'Partner', 'Both']
sizes = np.array(meta_data["Q4"].value_counts(sort=False))
#colors
#colors = ['#ff9999','#66b3ff']
colors = ['lightgray', 'tomato','lightskyblue','lightgreen']
ax2.set_title('2. Asked more questions.')
ax2.pie(sizes, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90, textprops={'fontsize': 13})
# Equal aspect ratio ensures that pie is drawn as a circle
ax2.axis('equal')
plt.tight_layout()
#plt.show()
##### Q5
sizes2 = np.array(meta_data["Q5"].value_counts(sort=False))
ax3.pie(sizes2, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90, textprops={'fontsize': 13})
ax3.set_title('3. Leader of the conversation.')
# Equal aspect ratio ensures that pie is drawn as a circle
ax3.axis('equal')
plt.tight_layout()
#axs[1, 2].axis('off')
plt.savefig('questionnairre_pies_smaller_fixed.png', bbox_inches='tight', dpi=300)
# Pie plots for Q4 & Q5
# For whole set. Change meta_data -> meta_subset if want to plot for subset.
# Pie chart
labels = ['No answer', 'Me', 'Partner', 'Both']
sizes = np.array(meta_data["Q4"].value_counts(sort=False))
#colors
#colors = ['#ff9999','#66b3ff']
colors = ['lightgray', 'tomato','lightskyblue','lightgreen']
fig1, (ax1, ax2) = plt.subplots(1,2, figsize=(7,3))
ax1.pie(sizes, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90)
ax1.set_title('Who asked more question?')
# Equal aspect ratio ensures that pie is drawn as a circle
ax1.axis('equal')
plt.tight_layout()
#plt.show()
##### Q5
sizes2 = np.array(meta_data["Q5"].value_counts(sort=False))
ax2.pie(sizes2, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90)
ax2.set_title('Who was leading the conversation?')
# Equal aspect ratio ensures that pie is drawn as a circle
ax2.axis('equal')
plt.tight_layout()
# Save plot
#plt.savefig('figs/q4_q5_pie.png', bbox_inches='tight', dpi=300)
# Combined plots.
font = {'family' : 'sans-serif',
'weight' : 'normal',
'size' : 11.5}
plt.rc('font', **font)
######## Q1
#explosion
explode = (0.05,0.05)
fig1, axs = plt.subplots(2,3, figsize=(9,7))
labels = ['Yes', 'No', 'Disagreed']
sizes = np.array([pos_sum, neg_sum, diff_sum])
colors = ['tomato', 'lightskyblue','lightgray']
axs[0, 0].pie(sizes, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90, textprops={'fontsize': 14})
axs[0, 0].set_title('1. Was conversation interesting?')
# Equal aspect ratio ensures that pie is drawn as a circle
axs[0, 0].axis('equal')
plt.tight_layout()
######## Q2
# Pie chart
labels = ['Yes', 'No']
#colors
colors = ['#ff9999','#66b3ff']
colors = ['tomato','lightskyblue']
sizes2 = np.array(meta_data["Q2"].value_counts())
axs[0, 1].pie(sizes2, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90)
axs[0, 1].set_title('2. I was listened to.')
# Equal aspect ratio ensures that pie is drawn as a circle
axs[0, 1].axis('equal')
plt.tight_layout()
#plt.show()
######## Q3
sizes3 = np.array(meta_data["Q3"].value_counts())
axs[0, 2].pie(sizes3, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90)
axs[0, 2].set_title('3. Stayed on topic.')
# Equal aspect ratio ensures that pie is drawn as a circle
axs[0, 2].axis('equal')
plt.tight_layout()
# Q4 & Q5
# Pie chart
labels = ['No answer', 'Me', 'Partner', 'Both']
sizes = np.array(meta_data["Q4"].value_counts(sort=False))
#colors
#colors = ['#ff9999','#66b3ff']
colors = ['lightgray', 'tomato','lightskyblue','lightgreen']
axs[1, 0].set_title('4. Asked more questions.')
axs[1, 0].pie(sizes, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90)
# Equal aspect ratio ensures that pie is drawn as a circle
axs[1, 0].axis('equal')
plt.tight_layout()
#plt.show()
##### Q5
sizes2 = np.array(meta_data["Q5"].value_counts(sort=False))
axs[1, 1].pie(sizes2, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90)
axs[1, 1].set_title('5. Leader of the conversation.')
# Equal aspect ratio ensures that pie is drawn as a circle
axs[1, 1].axis('equal')
plt.tight_layout()
axs[1, 2].axis('off')
#plt.savefig('figs/questionnairre_pies.png', bbox_inches='tight', dpi=300)
# Combine
font = {'family' : 'sans-serif',
'weight' : 'normal',
'size' : 11.5}
plt.rc('font', **font)
######## Q1
#explosion
explode = (0.05,0.05)
fig1, axs = plt.subplots(2,3, figsize=(9,7))
labels = ['Yes', 'No', 'Disagreed']
sizes = np.array([pos_sum, neg_sum, diff_sum])
colors = ['tomato', 'lightskyblue','lightgray']
axs[0, 0].pie(sizes, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90, textprops={'fontsize': 14})
axs[0, 0].set_title('1. Was conversation interesting?')
# Equal aspect ratio ensures that pie is drawn as a circle
axs[0, 0].axis('equal')
plt.tight_layout()
######## Q2
# Pie chart
labels = ['Yes', 'No']
#colors
colors = ['#ff9999','#66b3ff']
colors = ['tomato','lightskyblue']
sizes2 = np.array(meta_data["Q2"].value_counts())
axs[0, 1].pie(sizes2, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90)
axs[0, 1].set_title('2. I was listened to.')
# Equal aspect ratio ensures that pie is drawn as a circle
axs[0, 1].axis('equal')
plt.tight_layout()
#plt.show()
######## Q3
sizes3 = np.array(meta_data["Q3"].value_counts())
axs[0, 2].pie(sizes3, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90)
axs[0, 2].set_title('3. Stayed on topic.')
# Equal aspect ratio ensures that pie is drawn as a circle
axs[0, 2].axis('equal')
plt.tight_layout()
# Q4 & Q5
# Pie chart
labels = ['No answer', 'Me', 'Partner', 'Both']
sizes = np.array(meta_data["Q4"].value_counts(sort=False))
#colors
#colors = ['#ff9999','#66b3ff']
colors = ['lightgray', 'tomato','lightskyblue','lightgreen']
axs[1, 0].set_title('4. Asked more questions.')
axs[1, 0].pie(sizes, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90)
# Equal aspect ratio ensures that pie is drawn as a circle
axs[1, 0].axis('equal')
plt.tight_layout()
#plt.show()
##### Q5
sizes2 = np.array(meta_data["Q5"].value_counts(sort=False))
axs[1, 1].pie(sizes2, colors = colors, labels=labels, autopct='%1.1f%%', startangle=90)
axs[1, 1].set_title('5. Leader of the conversation.')
# Equal aspect ratio ensures that pie is drawn as a circle
axs[1, 1].axis('equal')
plt.tight_layout()
axs[1, 2].axis('off')
#plt.savefig('figs/questionnairre_pies_.png', bbox_inches='tight', dpi=300)
# Select group
#
#meta_subset = meta_data.loc[meta_data['GROUP'] == 1]
#chat_id_list = meta_subset['CHAT_ID'].tolist()
#chat_subset = chat_data.loc[chat_data['CHAT_ID'].isin(chat_id_list)]
#chat_id_list = chat_subset['CHAT_ID'].unique().tolist()
# Use all
#chat_id_list = chat_data['CHAT_ID'].unique().tolist()
# Commoness between question answers
scores_q4 = np.array(meta_data['Q4'].tolist()) # 3 = both
scores_q1 = np.array(meta_data['Q1'].tolist()) # 1 = yes
print(np.unique(scores_q1,return_counts=True))
count = 0
for i in range(len(scores_q1)):
if scores_q1[i] == 1 and scores_q4[i] == 2:
count += 1
print(count/len(scores_q1))
print(count/120)
# both ask questions and is interesting = 44% / 60%
# i asked more and is interesting = 14% / 19 %
# partner asked more and is interesting = 10 / 14%
###Output
(array([1, 2]), array([120, 43]))
0.10429447852760736
0.14166666666666666
###Markdown
Alternative to pies
###Code
category_names = ['Yes', 'Disagree', 'No']
sizes = [26,4,11]
sizes2 = [157,0,6]
sizes3 = [128,0,35]
results = {
'Question 1': sizes,
'Question 2': sizes2,
'Question 3': sizes3
}
def survey(results, category_names):
"""
Parameters
----------
results : dict
A mapping from question labels to a list of answers per category.
It is assumed all lists contain the same number of entries and that
it matches the length of *category_names*.
category_names : list of str
The category labels.
"""
labels = list(results.keys())
data = np.array(list(results.values()))
data_cum = data.cumsum(axis=1)
category_colors = plt.get_cmap('RdYlGn')(
np.linspace(0.15, 0.85, data.shape[1]))
fig, ax = plt.subplots(figsize=(9.2, 5))
ax.invert_yaxis()
ax.xaxis.set_visible(False)
ax.set_xlim(0, np.sum(data, axis=1).max())
for i, (colname, color) in enumerate(zip(category_names, category_colors)):
widths = data[:, i]
starts = data_cum[:, i] - widths
ax.barh(labels, widths, left=starts, height=0.5,
label=colname, color=color)
xcenters = starts + widths / 2
r, g, b, _ = color
text_color = 'white' if r * g * b < 0.5 else 'darkgrey'
for y, (x, c) in enumerate(zip(xcenters, widths)):
ax.text(x, y, str(int(c)), ha='center', va='center',
color=text_color)
ax.legend(ncol=len(category_names), bbox_to_anchor=(0, 1),
loc='lower left', fontsize='small')
return fig, ax
survey(results, category_names)
plt.show()
print(sizes3)
###Output
[128 35]
###Markdown
Get Statistics---using the custom API for ONOS `http://:8181/onos/ddos-detection/statistic/chi/values`this entry returns an array of 10 statistic valuesformat example```json{ "statistic_values": { "0": { "Timestamp": "21:37:43", "Cantidad de trafico observado - bytes": "32663", "Chi value - bytes": "0.0", "Cantidad de trafico observado - paquetes": "361", "Chi value - paquetes": "0.0" }, ... }```
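Before setting up the animation, the endpoint can be queried once to check the documented format; a minimal sketch is shown below, with the host, port and karaf/karaf credentials taken from the setup cell that follows (adjust them to your controller).

```python
# One-off request against the custom ONOS endpoint (sketch; assumes the controller is reachable).
import requests

resp = requests.get(
    "http://192.168.1.13:8181/onos/ddos-detection/statistic/chi/values",
    auth=("karaf", "karaf"),
)
resp.raise_for_status()
values = resp.json()["statistic_values"]
for key, entry in values.items():
    print(entry["Timestamp"], entry["Cantidad de trafico observado - bytes"])
```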
###Code
import time
import datetime
import requests
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation, rc
from IPython.display import HTML
#plt.rcParams['animation.html'] = 'jshtml'
user ='karaf'
passw = 'karaf'
ip_onos = '192.168.1.13'
path = f'http://{ip_onos}:8181/onos/ddos-detection/statistic/chi/values'
step = 'Timestamp'
byte = 'Cantidad de trafico observado - bytes'
pkts = 'Cantidad de trafico observado - paquetes'
###Output
_____no_output_____
###Markdown
First set up the figure, the axis, and the plot element we want to animate
###Code
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(1,1,1)
ax.set_xlabel('bytes')
ax.set_ylabel('time stamp')
interval = 40000  # 40 000 ms (unused; FuncAnimation below uses interval=600 and animate() sleeps 10 s)
def animate(i):
time.sleep(10)
r = requests.get(path, auth=(user, passw))
if r.status_code == 200:
x = []
y = []
statss = r.json()['statistic_values']
for key , value in statss.items():
y.append(int(value[byte]))
x.append(datetime.datetime.strptime(value[step], '%H:%M:%S'))
ax.clear()
ax.plot(x, y)
anim = animation.FuncAnimation(
fig, func=animate, frames=6, interval=600)
rc('animation', html='html5')
anim
###Output
_____no_output_____
###Markdown
Number of translations per languageWe use the DBpedia set because it contains all of the questions (558)
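The helpers `read_json` and `show_stats` used below are presumably defined or imported earlier in the notebook and are not shown here. A minimal sketch of equivalent logic is given below; it assumes the standard QALD-9-plus layout, i.e. a top-level `questions` list whose items carry a `question` list of per-language entries.

```python
import json
from collections import Counter

def read_json_sketch(path):
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def show_stats_sketch(dataset):
    # Count how many questions have a translation in each language (sketch).
    counts = Counter(
        entry["language"]
        for question in dataset["questions"]
        for entry in question.get("question", [])
    )
    for lang, n in counts.items():
        print(lang, n)
```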
###Code
test = read_json("../data/qald_9_plus_test_dbpedia.json")
train = read_json("../data/qald_9_plus_train_dbpedia.json")
show_stats(train)
show_stats(test)
###Output
en 150
de 176
ru 348
uk 176
lt 186
be 155
ba 117
hy 20
fr 26
###Markdown
Number of Wikipedia queries
###Code
test = read_json("../data/qald_9_plus_test_wikidata.json")
train = read_json("../data/qald_9_plus_train_wikidata.json")
len(train['questions'])
len(test['questions'])
###Output
_____no_output_____ |
notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-locally-trained-models.ipynb | ###Markdown
Run in Colab View on GitHub Open in Vertex AI Workbench Vertex AI: Track parameters and metrics for locally trained models OverviewThis notebook demonstrates how to track metrics and parameters for ML training jobs and analyze this metadata using Vertex SDK for Python. DatasetIn this notebook, we will train a simple distributed neural network (DNN) model to predict automobile's miles per gallon (MPG) based on automobile information in the [auto-mpg dataset](https://www.kaggle.com/devanshbesain/exploration-and-analysis-auto-mpg). ObjectiveIn this notebook, you will learn how to use Vertex SDK for Python to: * Track parameters and metrics for a locally trained model. * Extract and perform analysis for all parameters and metrics within an Experiment. Costs This tutorial uses billable components of Google Cloud:* Vertex AI* Cloud StorageLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. Set up your local development environment**If you are using Colab or Vertex AI Workbench notebooks**, your environment already meetsall the requirements to run this notebook. You can skip this step. **Otherwise**, make sure your environment meets this notebook's requirements.You need the following:* The Google Cloud SDK* Git* Python 3* virtualenv* Jupyter notebook running in a virtual environment with Python 3The Google Cloud guide to [Setting up a Python developmentenvironment](https://cloud.google.com/python/setup) and the [Jupyterinstallation guide](https://jupyter.org/install) provide detailed instructionsfor meeting these requirements. The following steps provide a condensed set ofinstructions:1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)1. [Install Python 3.](https://cloud.google.com/python/setupinstalling_python)1. [Install virtualenv](https://cloud.google.com/python/setupinstalling_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.1. To install Jupyter, run `pip install jupyter` on thecommand-line in a terminal shell.1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.1. Open this notebook in the Jupyter Notebook Dashboard. Install additional packagesRun the following commands to install the Vertex SDK for Python.
###Code
import sys
if "google.colab" in sys.modules:
USER_FLAG = ""
else:
USER_FLAG = "--user"
! pip3 install -U tensorflow==2.8 $USER_FLAG
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin Select a GPU runtime**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
###Output
_____no_output_____
###Markdown
Otherwise, set your project ID here.
###Code
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Vertex AI Workbench notebooks**, your environment is alreadyauthenticated. Skip this step. **If you are using Colab**, run the cell below and follow the instructionswhen prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:1. In the Cloud Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).2. Click **Create service account**.3. In the **Service account name** field, enter a name, and click **Create**.4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex AI"into the filter box, and select **Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.5. Click *Create*. A JSON file that contains your key downloads to yourlocal environment.6. Enter the path to your service account key as the`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
###Code
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebooks, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Import libraries and define constants Import required libraries.
###Code
import matplotlib.pyplot as plt
import pandas as pd
from google.cloud import aiplatform
from tensorflow.python.keras import Sequential, layers
from tensorflow.python.keras.utils import data_utils
###Output
_____no_output_____
###Markdown
Define some constants
###Code
EXPERIMENT_NAME = "" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if REGION == "[your-region]":
REGION = "us-central1"
###Output
_____no_output_____
###Markdown
If EXPERIMENT_NAME is not set, set a default one below:
###Code
if EXPERIMENT_NAME == "" or EXPERIMENT_NAME is None:
EXPERIMENT_NAME = "my-experiment-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
ConceptsTo better understand how parameters and metrics are stored and organized, we'd like to introduce the following concepts: ExperimentExperiments describe a context that groups your runs and the artifacts you create into a logical session. For example, in this notebook you create an Experiment and log data to that experiment. RunA run represents a single path/avenue that you executed while performing an experiment. A run includes artifacts that you used as inputs or outputs, and parameters that you used in this execution. An Experiment can contain multiple runs. Getting started tracking parameters and metricsYou can use the Vertex SDK for Python to track metrics and parameters for models trained locally. In the following example, you train a simple distributed neural network (DNN) model to predict an automobile's miles per gallon (MPG) based on automobile information in the [auto-mpg dataset](https://www.kaggle.com/devanshbesain/exploration-and-analysis-auto-mpg). Load and process the training dataset Download and process the dataset.
###Code
def read_data(uri):
dataset_path = data_utils.get_file("auto-mpg.data", uri)
column_names = [
"MPG",
"Cylinders",
"Displacement",
"Horsepower",
"Weight",
"Acceleration",
"Model Year",
"Origin",
]
raw_dataset = pd.read_csv(
dataset_path,
names=column_names,
na_values="?",
comment="\t",
sep=" ",
skipinitialspace=True,
)
dataset = raw_dataset.dropna()
dataset["Origin"] = dataset["Origin"].map(
lambda x: {1: "USA", 2: "Europe", 3: "Japan"}.get(x)
)
dataset = pd.get_dummies(dataset, prefix="", prefix_sep="")
return dataset
dataset = read_data(
"http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data"
)
###Output
_____no_output_____
###Markdown
Split dataset for training and testing.
###Code
def train_test_split(dataset, split_frac=0.8, random_state=0):
train_dataset = dataset.sample(frac=split_frac, random_state=random_state)
test_dataset = dataset.drop(train_dataset.index)
train_labels = train_dataset.pop("MPG")
test_labels = test_dataset.pop("MPG")
return train_dataset, test_dataset, train_labels, test_labels
train_dataset, test_dataset, train_labels, test_labels = train_test_split(dataset)
###Output
_____no_output_____
###Markdown
Normalize the features in the dataset for better model performance.
###Code
def normalize_dataset(train_dataset, test_dataset):
train_stats = train_dataset.describe()
train_stats = train_stats.transpose()
def norm(x):
return (x - train_stats["mean"]) / train_stats["std"]
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
return normed_train_data, normed_test_data
normed_train_data, normed_test_data = normalize_dataset(train_dataset, test_dataset)
###Output
_____no_output_____
###Markdown
Define ML model and training function
###Code
def train(
train_data,
train_labels,
num_units=64,
activation="relu",
dropout_rate=0.0,
validation_split=0.2,
epochs=1000,
):
model = Sequential(
[
layers.Dense(
num_units,
activation=activation,
input_shape=[len(train_dataset.keys())],
),
layers.Dropout(rate=dropout_rate),
layers.Dense(num_units, activation=activation),
layers.Dense(1),
]
)
model.compile(loss="mse", optimizer="adam", metrics=["mae", "mse"])
print(model.summary())
history = model.fit(
train_data, train_labels, epochs=epochs, validation_split=validation_split
)
return model, history
###Output
_____no_output_____
###Markdown
Initialize the Vertex AI SDK for Python and create an ExperimentInitialize the *client* for Vertex AI and create an experiment.
###Code
aiplatform.init(project=PROJECT_ID, location=REGION, experiment=EXPERIMENT_NAME)
###Output
_____no_output_____
###Markdown
Start several model training runsTraining parameters and metrics are logged for each run.
###Code
parameters = [
{"num_units": 16, "epochs": 3, "dropout_rate": 0.1},
{"num_units": 16, "epochs": 10, "dropout_rate": 0.1},
{"num_units": 16, "epochs": 10, "dropout_rate": 0.2},
{"num_units": 32, "epochs": 10, "dropout_rate": 0.1},
{"num_units": 32, "epochs": 10, "dropout_rate": 0.2},
]
for i, params in enumerate(parameters):
aiplatform.start_run(run=f"auto-mpg-local-run-{i}")
aiplatform.log_params(params)
model, history = train(
normed_train_data,
train_labels,
num_units=params["num_units"],
activation="relu",
epochs=params["epochs"],
dropout_rate=params["dropout_rate"],
)
aiplatform.log_metrics(
{metric: values[-1] for metric, values in history.history.items()}
)
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)
aiplatform.log_metrics({"eval_loss": loss, "eval_mae": mae, "eval_mse": mse})
###Output
_____no_output_____
###Markdown
Extract parameters and metrics into a dataframe for analysis We can also extract all parameters and metrics associated with any Experiment into a dataframe for further analysis.
###Code
experiment_df = aiplatform.get_experiment_df()
experiment_df
###Output
_____no_output_____
###Markdown
Visualizing an experiment's parameters and metrics
###Code
plt.rcParams["figure.figsize"] = [15, 5]
ax = pd.plotting.parallel_coordinates(
experiment_df.reset_index(level=0),
"run_name",
cols=[
"param.num_units",
"param.dropout_rate",
"param.epochs",
"metric.loss",
"metric.val_loss",
"metric.eval_loss",
],
color=["blue", "green", "pink", "red"],
)
ax.set_yscale("symlog")
ax.legend(bbox_to_anchor=(1.0, 0.5))
###Output
_____no_output_____
###Markdown
Visualizing experiments in Cloud Console Run the following to get the URL of Vertex AI Experiments for your project.
###Code
print("Vertex AI Experiments:")
print(
f"https://console.cloud.google.com/ai/platform/experiments/experiments?folder=&organizationId=&project={PROJECT_ID}"
)
###Output
_____no_output_____
###Markdown
Run in Colab View on GitHub Open in Vertex AI Workbench Vertex AI: Track parameters and metrics for locally trained models OverviewThis notebook demonstrates how to track metrics and parameters for ML training jobs and analyze this metadata using Vertex SDK for Python. DatasetIn this notebook, we will train a simple distributed neural network (DNN) model to predict automobile's miles per gallon (MPG) based on automobile information in the [auto-mpg dataset](https://www.kaggle.com/devanshbesain/exploration-and-analysis-auto-mpg). ObjectiveIn this notebook, you learn how to use `Vertex ML Metadata` to track training parameters and evaluation metrics.This tutorial uses the following Google Cloud ML services:- `Vertex ML Metadata`- `Vertex AI Experiments`The steps performed include:- Track parameters and metrics for a locally trained model.- Extract and perform analysis for all parameters and metrics within an Experiment. Costs This tutorial uses billable components of Google Cloud:* Vertex AI* Cloud StorageLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. Set up your local development environment**If you are using Colab or Vertex AI Workbench notebooks**, your environment already meetsall the requirements to run this notebook. You can skip this step. **Otherwise**, make sure your environment meets this notebook's requirements.You need the following:* The Google Cloud SDK* Git* Python 3* virtualenv* Jupyter notebook running in a virtual environment with Python 3The Google Cloud guide to [Setting up a Python developmentenvironment](https://cloud.google.com/python/setup) and the [Jupyterinstallation guide](https://jupyter.org/install) provide detailed instructionsfor meeting these requirements. The following steps provide a condensed set ofinstructions:1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)1. [Install Python 3.](https://cloud.google.com/python/setupinstalling_python)1. [Install virtualenv](https://cloud.google.com/python/setupinstalling_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.1. To install Jupyter, run `pip install jupyter` on thecommand-line in a terminal shell.1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.1. Open this notebook in the Jupyter Notebook Dashboard. InstallationInstall the packages required for executing this notebook.
###Code
import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME") and not os.getenv("VIRTUAL_ENV")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install --upgrade google-cloud-aiplatform {USER_FLAG} -q
! pip3 install -U tensorflow==2.8 {USER_FLAG} -q
###Output
_____no_output_____
###Markdown
Restart the kernelAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin Select a GPU runtime**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Vertex AI Workbench notebooks**, your environment is alreadyauthenticated. Skip this step. **If you are using Colab**, run the cell below and follow the instructionswhen prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:1. In the Cloud Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).2. Click **Create service account**.3. In the **Service account name** field, enter a name, and click **Create**.4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex AI"into the filter box, and select **Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.5. Click *Create*. A JSON file that contains your key downloads to yourlocal environment.6. Enter the path to your service account key as the`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
###Code
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebooks, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Import libraries and define constants Import required libraries.
###Code
import matplotlib.pyplot as plt
import pandas as pd
from google.cloud import aiplatform
from tensorflow.python.keras import Sequential, layers
from tensorflow.python.keras.utils import data_utils
###Output
_____no_output_____
###Markdown
Define some constants
###Code
EXPERIMENT_NAME = "" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
If EXPERIMENT_NAME is not set, set a default one below:
###Code
if EXPERIMENT_NAME == "" or EXPERIMENT_NAME is None:
EXPERIMENT_NAME = "my-experiment-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
ConceptsTo better understand how parameters and metrics are stored and organized, we'd like to introduce the following concepts: ExperimentExperiments describe a context that groups your runs and the artifacts you create into a logical session. For example, in this notebook you create an Experiment and log data to that experiment. RunA run represents a single path/avenue that you executed while performing an experiment. A run includes artifacts that you used as inputs or outputs, and parameters that you used in this execution. An Experiment can contain multiple runs. Getting started tracking parameters and metricsYou can use the Vertex SDK for Python to track metrics and parameters for models trained locally. In the following example, you train a simple deep neural network (DNN) model to predict an automobile's miles per gallon (MPG) based on automobile information in the [auto-mpg dataset](https://www.kaggle.com/devanshbesain/exploration-and-analysis-auto-mpg). Load and process the training dataset Download and process the dataset.
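As a minimal sketch of how these concepts map onto the SDK calls used later in this notebook (the project, region, experiment, and run names below are placeholders):

```python
from google.cloud import aiplatform

# An Experiment groups related runs; it is named when the SDK is initialized.
aiplatform.init(project="my-project", location="us-central1", experiment="my-experiment")

# A run is one execution inside that Experiment; parameters and metrics are attached to it.
aiplatform.start_run(run="my-run-0")
aiplatform.log_params({"num_units": 16, "epochs": 10})
aiplatform.log_metrics({"eval_loss": 1.23})
```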
###Code
def read_data(uri):
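    # Download the raw auto-mpg file, name its columns, drop rows with missing values, map the Origin codes to country names, and one-hot encode them.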
dataset_path = data_utils.get_file("auto-mpg.data", uri)
column_names = [
"MPG",
"Cylinders",
"Displacement",
"Horsepower",
"Weight",
"Acceleration",
"Model Year",
"Origin",
]
raw_dataset = pd.read_csv(
dataset_path,
names=column_names,
na_values="?",
comment="\t",
sep=" ",
skipinitialspace=True,
)
dataset = raw_dataset.dropna()
dataset["Origin"] = dataset["Origin"].map(
lambda x: {1: "USA", 2: "Europe", 3: "Japan"}.get(x)
)
dataset = pd.get_dummies(dataset, prefix="", prefix_sep="")
return dataset
dataset = read_data(
"http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data"
)
###Output
_____no_output_____
###Markdown
Split dataset for training and testing.
###Code
def train_test_split(dataset, split_frac=0.8, random_state=0):
train_dataset = dataset.sample(frac=split_frac, random_state=random_state)
test_dataset = dataset.drop(train_dataset.index)
train_labels = train_dataset.pop("MPG")
test_labels = test_dataset.pop("MPG")
return train_dataset, test_dataset, train_labels, test_labels
train_dataset, test_dataset, train_labels, test_labels = train_test_split(dataset)
###Output
_____no_output_____
###Markdown
Normalize the features in the dataset for better model performance.
###Code
def normalize_dataset(train_dataset, test_dataset):
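    # Z-score every feature using the mean and standard deviation computed from the training split only.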
train_stats = train_dataset.describe()
train_stats = train_stats.transpose()
def norm(x):
return (x - train_stats["mean"]) / train_stats["std"]
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
return normed_train_data, normed_test_data
normed_train_data, normed_test_data = normalize_dataset(train_dataset, test_dataset)
###Output
_____no_output_____
###Markdown
Define ML model and training function
###Code
def train(
train_data,
train_labels,
num_units=64,
activation="relu",
dropout_rate=0.0,
validation_split=0.2,
epochs=1000,
):
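    # Two hidden Dense layers with a dropout layer in between, and a single linear output unit for the MPG regression target.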
model = Sequential(
[
layers.Dense(
num_units,
activation=activation,
input_shape=[len(train_dataset.keys())],
),
layers.Dropout(rate=dropout_rate),
layers.Dense(num_units, activation=activation),
layers.Dense(1),
]
)
model.compile(loss="mse", optimizer="adam", metrics=["mae", "mse"])
print(model.summary())
history = model.fit(
train_data, train_labels, epochs=epochs, validation_split=validation_split
)
return model, history
###Output
_____no_output_____
###Markdown
Initialize the Vertex AI SDK for Python and create an ExperimentInitialize the *client* for Vertex AI and create an experiment.
###Code
aiplatform.init(project=PROJECT_ID, location=REGION, experiment=EXPERIMENT_NAME)
###Output
_____no_output_____
###Markdown
Start several model training runsTraining parameters and metrics are logged for each run.
###Code
parameters = [
{"num_units": 16, "epochs": 3, "dropout_rate": 0.1},
{"num_units": 16, "epochs": 10, "dropout_rate": 0.1},
{"num_units": 16, "epochs": 10, "dropout_rate": 0.2},
{"num_units": 32, "epochs": 10, "dropout_rate": 0.1},
{"num_units": 32, "epochs": 10, "dropout_rate": 0.2},
]
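# Each parameter configuration becomes one run in the experiment: log its parameters, train the model, then log the final training-history values and the held-out evaluation metrics.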
for i, params in enumerate(parameters):
aiplatform.start_run(run=f"auto-mpg-local-run-{i}")
aiplatform.log_params(params)
model, history = train(
normed_train_data,
train_labels,
num_units=params["num_units"],
activation="relu",
epochs=params["epochs"],
dropout_rate=params["dropout_rate"],
)
aiplatform.log_metrics(
{metric: values[-1] for metric, values in history.history.items()}
)
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)
aiplatform.log_metrics({"eval_loss": loss, "eval_mae": mae, "eval_mse": mse})
###Output
_____no_output_____
###Markdown
Extract parameters and metrics into a dataframe for analysis We can also extract all parameters and metrics associated with any Experiment into a dataframe for further analysis.
###Code
experiment_df = aiplatform.get_experiment_df()
experiment_df
###Output
_____no_output_____
###Markdown
Visualizing an experiment's parameters and metrics
###Code
plt.rcParams["figure.figsize"] = [15, 5]
ax = pd.plotting.parallel_coordinates(
experiment_df.reset_index(level=0),
"run_name",
cols=[
"param.num_units",
"param.dropout_rate",
"param.epochs",
"metric.loss",
"metric.val_loss",
"metric.eval_loss",
],
color=["blue", "green", "pink", "red"],
)
ax.set_yscale("symlog")
ax.legend(bbox_to_anchor=(1.0, 0.5))
###Output
_____no_output_____
###Markdown
Visualizing experiments in Cloud Console Run the following to get the URL of Vertex AI Experiments for your project.
###Code
print("Vertex AI Experiments:")
print(
f"https://console.cloud.google.com/ai/platform/experiments/experiments?folder=&organizationId=&project={PROJECT_ID}"
)
###Output
_____no_output_____
###Markdown
Run in Colab View on GitHub Vertex AI: Track parameters and metrics for locally trained models OverviewThis notebook demonstrates how to track metrics and parameters for ML training jobs and analyze this metadata using Vertex AI SDK. DatasetIn this notebook, we will train a simple distributed neural network (DNN) model to predict automobile's miles per gallon (MPG) based on automobile information in the [auto-mpg dataset](https://www.kaggle.com/devanshbesain/exploration-and-analysis-auto-mpg). ObjectiveIn this notebook, you will learn how to use Vertex AI SDK to: * Track parameters and metrics for a locally trainined model. * Extract and perform analysis for all parameters and metrics within an Experiment. Costs This tutorial uses billable components of Google Cloud:* Vertex AI* Cloud StorageLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. Set up your local development environment**If you are using Colab or Vertex AI Workbench notebooks**, your environment already meetsall the requirements to run this notebook. You can skip this step. **Otherwise**, make sure your environment meets this notebook's requirements.You need the following:* The Google Cloud SDK* Git* Python 3* virtualenv* Jupyter notebook running in a virtual environment with Python 3The Google Cloud guide to [Setting up a Python developmentenvironment](https://cloud.google.com/python/setup) and the [Jupyterinstallation guide](https://jupyter.org/install) provide detailed instructionsfor meeting these requirements. The following steps provide a condensed set ofinstructions:1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)1. [Install Python 3.](https://cloud.google.com/python/setupinstalling_python)1. [Install virtualenv](https://cloud.google.com/python/setupinstalling_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.1. To install Jupyter, run `pip install jupyter` on thecommand-line in a terminal shell.1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.1. Open this notebook in the Jupyter Notebook Dashboard. Install additional packagesRun the following commands to install the Vertex AI SDK and packages used in this notebook.
###Code
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
###Output
_____no_output_____
###Markdown
Install the Vertex AI SDK and TensorFlow for training and evaluating the model.
###Code
! pip install {USER_FLAG} --upgrade tensorflow
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
###Output
_____no_output_____
###Markdown
Restart the kernelAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin Select a GPU runtime**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
###Output
_____no_output_____
###Markdown
Otherwise, set your project ID here.
###Code
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Vertex AI Workbench notebooks**, your environment is alreadyauthenticated. Skip this step. **If you are using Colab**, run the cell below and follow the instructionswhen prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:1. In the Cloud Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).2. Click **Create service account**.3. In the **Service account name** field, enter a name, and click **Create**.4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "AI Platform"into the filter box, and select **AI Platform Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.5. Click *Create*. A JSON file that contains your key downloads to yourlocal environment.6. Enter the path to your service account key as the`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
###Code
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebooks, then don't execute this code
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Import libraries and define constants Import required libraries.
###Code
import matplotlib.pyplot as plt
import pandas as pd
from google.cloud import aiplatform
from tensorflow.python.keras import Sequential, layers
from tensorflow.python.lib.io import file_io
###Output
_____no_output_____
###Markdown
Define some constants
###Code
EXPERIMENT_NAME = "" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
If EXPERIMENT_NAME is not set, set a default one below:
###Code
if EXPERIMENT_NAME == "" or EXPERIMENT_NAME is None:
EXPERIMENT_NAME = "my-experiment-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
ConceptsTo better understand how parameters and metrics are stored and organized, we'd like to introduce the following concepts: ExperimentExperiments describe a context that groups your runs and the artifacts you create into a logical session. For example, in this notebook you create an Experiment and log data to that experiment. RunA run represents a single path/avenue that you executed while performing an experiment. A run includes artifacts that you used as inputs or outputs, and parameters that you used in this execution. An Experiment can contain multiple runs. Getting started tracking parameters and metricsYou can use the Vertex AI SDK to track metrics and parameters for models trained locally. In the following example, you train a simple deep neural network (DNN) model to predict an automobile's miles per gallon (MPG) based on automobile information in the [auto-mpg dataset](https://www.kaggle.com/devanshbesain/exploration-and-analysis-auto-mpg). Load and process the training dataset Download and process the dataset.
###Code
def read_data(file_path):
column_names = [
"MPG",
"Cylinders",
"Displacement",
"Horsepower",
"Weight",
"Acceleration",
"Model Year",
"Origin",
]
with file_io.FileIO(file_path, "r") as f:
raw_dataset = pd.read_csv(
f,
names=column_names,
na_values="?",
comment="\t",
sep=" ",
skipinitialspace=True,
)
dataset = raw_dataset.dropna()
dataset["Origin"] = dataset["Origin"].map(
lambda x: {1: "USA", 2: "Europe", 3: "Japan"}.get(x)
)
dataset = pd.get_dummies(dataset, prefix="", prefix_sep="")
return dataset
dataset = read_data("gs://cloud-samples-data/ai-platform/auto_mpg/auto-mpg.data")
###Output
_____no_output_____
###Markdown
Split dataset for training and testing.
###Code
def train_test_split(dataset, split_frac=0.8, random_state=0):
train_dataset = dataset.sample(frac=split_frac, random_state=random_state)
test_dataset = dataset.drop(train_dataset.index)
train_labels = train_dataset.pop("MPG")
test_labels = test_dataset.pop("MPG")
return train_dataset, test_dataset, train_labels, test_labels
train_dataset, test_dataset, train_labels, test_labels = train_test_split(dataset)
###Output
_____no_output_____
###Markdown
Normalize the features in the dataset for better model performance.
###Code
def normalize_dataset(train_dataset, test_dataset):
train_stats = train_dataset.describe()
train_stats = train_stats.transpose()
def norm(x):
return (x - train_stats["mean"]) / train_stats["std"]
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
return normed_train_data, normed_test_data
normed_train_data, normed_test_data = normalize_dataset(train_dataset, test_dataset)
###Output
_____no_output_____
###Markdown
Define ML model and training function
###Code
def train(
train_data,
train_labels,
num_units=64,
activation="relu",
dropout_rate=0.0,
validation_split=0.2,
epochs=1000,
):
model = Sequential(
[
layers.Dense(
num_units,
activation=activation,
input_shape=[len(train_dataset.keys())],
),
layers.Dropout(rate=dropout_rate),
layers.Dense(num_units, activation=activation),
layers.Dense(1),
]
)
model.compile(loss="mse", optimizer="adam", metrics=["mae", "mse"])
print(model.summary())
history = model.fit(
train_data, train_labels, epochs=epochs, validation_split=validation_split
)
return model, history
###Output
_____no_output_____
###Markdown
Initialize the Vertex AI SDK for Python and create an ExperimentInitialize the *client* for Vertex AI and create an experiment.
###Code
aiplatform.init(project=PROJECT_ID, location=REGION, experiment=EXPERIMENT_NAME)
###Output
_____no_output_____
###Markdown
Start several model training runsTraining parameters and metrics are logged for each run.
###Code
parameters = [
{"num_units": 16, "epochs": 3, "dropout_rate": 0.1},
{"num_units": 16, "epochs": 10, "dropout_rate": 0.1},
{"num_units": 16, "epochs": 10, "dropout_rate": 0.2},
{"num_units": 32, "epochs": 10, "dropout_rate": 0.1},
{"num_units": 32, "epochs": 10, "dropout_rate": 0.2},
]
for i, params in enumerate(parameters):
aiplatform.start_run(run=f"auto-mpg-local-run-{i}")
aiplatform.log_params(params)
model, history = train(
normed_train_data,
train_labels,
num_units=params["num_units"],
activation="relu",
epochs=params["epochs"],
dropout_rate=params["dropout_rate"],
)
aiplatform.log_metrics(
{metric: values[-1] for metric, values in history.history.items()}
)
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)
aiplatform.log_metrics({"eval_loss": loss, "eval_mae": mae, "eval_mse": mse})
###Output
_____no_output_____
###Markdown
Extract parameters and metrics into a dataframe for analysis We can also extract all parameters and metrics associated with any Experiment into a dataframe for further analysis.
###Code
experiment_df = aiplatform.get_experiment_df()
experiment_df
###Output
_____no_output_____
###Markdown
Visualizing an experiment's parameters and metrics
###Code
plt.rcParams["figure.figsize"] = [15, 5]
ax = pd.plotting.parallel_coordinates(
experiment_df.reset_index(level=0),
"run_name",
cols=[
"param.num_units",
"param.dropout_rate",
"param.epochs",
"metric.loss",
"metric.val_loss",
"metric.eval_loss",
],
color=["blue", "green", "pink", "red"],
)
ax.set_yscale("symlog")
ax.legend(bbox_to_anchor=(1.0, 0.5))
###Output
_____no_output_____
###Markdown
Visualizing experiments in Cloud Console Run the following to get the URL of Vertex AI Experiments for your project.
###Code
print("Vertex AI Experiments:")
print(
f"https://console.cloud.google.com/ai/platform/experiments/experiments?folder=&organizationId=&project={PROJECT_ID}"
)
###Output
_____no_output_____
###Markdown
Run in Colab View on GitHub Vertex AI: Track parameters and metrics for locally trained models OverviewThis notebook demonstrates how to track metrics and parameters for ML training jobs and analyze this metadata using Vertex AI SDK. DatasetIn this notebook, we will train a simple distributed neural network (DNN) model to predict automobile's miles per gallon (MPG) based on automobile information in the [auto-mpg dataset](https://www.kaggle.com/devanshbesain/exploration-and-analysis-auto-mpg). ObjectiveIn this notebook, you will learn how to use Vertex AI SDK to: * Track parameters and metrics for a locally trainined model. * Extract and perform analysis for all parameters and metrics within an Experiment. Costs This tutorial uses billable components of Google Cloud:* Vertex AI* Cloud StorageLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. Set up your local development environment**If you are using Colab or AI Platform Notebooks**, your environment already meetsall the requirements to run this notebook. You can skip this step. **Otherwise**, make sure your environment meets this notebook's requirements.You need the following:* The Google Cloud SDK* Git* Python 3* virtualenv* Jupyter notebook running in a virtual environment with Python 3The Google Cloud guide to [Setting up a Python developmentenvironment](https://cloud.google.com/python/setup) and the [Jupyterinstallation guide](https://jupyter.org/install) provide detailed instructionsfor meeting these requirements. The following steps provide a condensed set ofinstructions:1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)1. [Install Python 3.](https://cloud.google.com/python/setupinstalling_python)1. [Install virtualenv](https://cloud.google.com/python/setupinstalling_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.1. To install Jupyter, run `pip install jupyter` on thecommand-line in a terminal shell.1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.1. Open this notebook in the Jupyter Notebook Dashboard. Install additional packagesRun the following commands to install the Vertex AI SDK and packages used in this notebook.
###Code
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
###Output
_____no_output_____
###Markdown
Install the Vertex AI SDK and TensorFlow for training and evaluating the model.
###Code
! pip install {USER_FLAG} --upgrade tensorflow
! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
###Output
_____no_output_____
###Markdown
Restart the kernelAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages.
###Code
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin Select a GPU runtime**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
###Output
_____no_output_____
###Markdown
Otherwise, set your project ID here.
###Code
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using AI Platform Notebooks**, your environment is alreadyauthenticated. Skip this step. **If you are using Colab**, run the cell below and follow the instructionswhen prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:1. In the Cloud Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).2. Click **Create service account**.3. In the **Service account name** field, enter a name, and click **Create**.4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "AI Platform"into the filter box, and select **AI Platform Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.5. Click *Create*. A JSON file that contains your key downloads to yourlocal environment.6. Enter the path to your service account key as the`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
###Code
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebooks, then don't execute this code
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Import libraries and define constants Import required libraries.
###Code
import matplotlib.pyplot as plt
import pandas as pd
from google.cloud import aiplatform
from tensorflow.python.keras import Sequential, layers
from tensorflow.python.lib.io import file_io
###Output
_____no_output_____
###Markdown
Define some constants
###Code
EXPERIMENT_NAME = "" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
If EXPERIMENT_NAME is not set, set a default one below:
###Code
if EXPERIMENT_NAME == "" or EXPERIMENT_NAME is None:
EXPERIMENT_NAME = "my-experiment-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
ConceptsTo better understand how parameters and metrics are stored and organized, we'd like to introduce the following concepts: ExperimentExperiments describe a context that groups your runs and the artifacts you create into a logical session. For example, in this notebook you create an Experiment and log data to that experiment. RunA run represents a single path/avenue that you executed while performing an experiment. A run includes artifacts that you used as inputs or outputs, and parameters that you used in this execution. An Experiment can contain multiple runs. Getting started tracking parameters and metricsYou can use the Vertex AI SDK to track metrics and parameters for models trained locally. In the following example, you train a simple deep neural network (DNN) model to predict an automobile's miles per gallon (MPG) based on automobile information in the [auto-mpg dataset](https://www.kaggle.com/devanshbesain/exploration-and-analysis-auto-mpg). Load and process the training dataset Download and process the dataset.
###Code
def read_data(file_path):
column_names = [
"MPG",
"Cylinders",
"Displacement",
"Horsepower",
"Weight",
"Acceleration",
"Model Year",
"Origin",
]
with file_io.FileIO(file_path, "r") as f:
raw_dataset = pd.read_csv(
f,
names=column_names,
na_values="?",
comment="\t",
sep=" ",
skipinitialspace=True,
)
dataset = raw_dataset.dropna()
dataset["Origin"] = dataset["Origin"].map(
lambda x: {1: "USA", 2: "Europe", 3: "Japan"}.get(x)
)
dataset = pd.get_dummies(dataset, prefix="", prefix_sep="")
return dataset
dataset = read_data("gs://cloud-samples-data/ai-platform/auto_mpg/auto-mpg.data")
###Output
_____no_output_____
###Markdown
Split dataset for training and testing.
###Code
def train_test_split(dataset, split_frac=0.8, random_state=0):
train_dataset = dataset.sample(frac=split_frac, random_state=random_state)
test_dataset = dataset.drop(train_dataset.index)
train_labels = train_dataset.pop("MPG")
test_labels = test_dataset.pop("MPG")
return train_dataset, test_dataset, train_labels, test_labels
train_dataset, test_dataset, train_labels, test_labels = train_test_split(dataset)
###Output
_____no_output_____
###Markdown
Normalize the features in the dataset for better model performance.
###Code
def normalize_dataset(train_dataset, test_dataset):
train_stats = train_dataset.describe()
train_stats = train_stats.transpose()
def norm(x):
return (x - train_stats["mean"]) / train_stats["std"]
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
return normed_train_data, normed_test_data
normed_train_data, normed_test_data = normalize_dataset(train_dataset, test_dataset)
###Output
_____no_output_____
###Markdown
Define ML model and training function
###Code
def train(
train_data,
train_labels,
num_units=64,
activation="relu",
dropout_rate=0.0,
validation_split=0.2,
epochs=1000,
):
model = Sequential(
[
layers.Dense(
num_units,
activation=activation,
input_shape=[len(train_dataset.keys())],
),
layers.Dropout(rate=dropout_rate),
layers.Dense(num_units, activation=activation),
layers.Dense(1),
]
)
model.compile(loss="mse", optimizer="adam", metrics=["mae", "mse"])
print(model.summary())
history = model.fit(
train_data, train_labels, epochs=epochs, validation_split=validation_split
)
return model, history
###Output
_____no_output_____
###Markdown
Initialize the Model Builder SDK and create an ExperimentInitialize the *client* for Vertex AI and create an experiment.
###Code
aiplatform.init(project=PROJECT_ID, location=REGION, experiment=EXPERIMENT_NAME)
###Output
_____no_output_____
###Markdown
Start several model training runsTraining parameters and metrics are logged for each run.
###Code
parameters = [
{"num_units": 16, "epochs": 3, "dropout_rate": 0.1},
{"num_units": 16, "epochs": 10, "dropout_rate": 0.1},
{"num_units": 16, "epochs": 10, "dropout_rate": 0.2},
{"num_units": 32, "epochs": 10, "dropout_rate": 0.1},
{"num_units": 32, "epochs": 10, "dropout_rate": 0.2},
]
for i, params in enumerate(parameters):
aiplatform.start_run(run=f"auto-mpg-local-run-{i}")
aiplatform.log_params(params)
model, history = train(
normed_train_data,
train_labels,
num_units=params["num_units"],
activation="relu",
epochs=params["epochs"],
dropout_rate=params["dropout_rate"],
)
aiplatform.log_metrics(
{metric: values[-1] for metric, values in history.history.items()}
)
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)
aiplatform.log_metrics({"eval_loss": loss, "eval_mae": mae, "eval_mse": mse})
###Output
_____no_output_____
###Markdown
Extract parameters and metrics into a dataframe for analysis We can also extract all parameters and metrics associated with any Experiment into a dataframe for further analysis.
###Code
experiment_df = aiplatform.get_experiment_df()
experiment_df
###Output
_____no_output_____
###Markdown
Visualizing an experiment's parameters and metrics
###Code
plt.rcParams["figure.figsize"] = [15, 5]
ax = pd.plotting.parallel_coordinates(
experiment_df.reset_index(level=0),
"run_name",
cols=[
"param.num_units",
"param.dropout_rate",
"param.epochs",
"metric.loss",
"metric.val_loss",
"metric.eval_loss",
],
color=["blue", "green", "pink", "red"],
)
ax.set_yscale("symlog")
ax.legend(bbox_to_anchor=(1.0, 0.5))
###Output
_____no_output_____
###Markdown
Visualizing experiments in Cloud Console Run the following to get the URL of Vertex AI Experiments for your project.
###Code
print("Vertex AI Experiments:")
print(
f"https://console.cloud.google.com/ai/platform/experiments/experiments?folder=&organizationId=&project={PROJECT_ID}"
)
###Output
_____no_output_____ |
use-cases/create_wikidata/Wikidata-Subsets.ipynb | ###Markdown
Generating Subsets of Wikidata Batch InvocationExample batch command. The second argument is a notebook where the output will be stored. You can load it to see progress.UPDATE EXAMPLE INVOCATION```papermill Wikidata\ Useful\ Files.ipynb useful-files.out.ipynb \-p wiki_file /Volumes/GoogleDrive/Shared\ drives/KGTK-public-graphs/wikidata-20200803-v3/all.tsv.gz \-p label_file /Volumes/GoogleDrive/Shared\ drives/KGTK-public-graphs/wikidata-20200803-v3/part.label.en.tsv.gz \-p item_file /Volumes/GoogleDrive/Shared\ drives/KGTK-public-graphs/wikidata-20200803-v3/part.wikibase-item.tsv.gz \-p property_item_file /Volumes/GoogleDrive/Shared\ drives/KGTK-public-graphs/wikidata-20200803-v3/part.property.wikibase-item.tsv.gz \-p qual_file /Volumes/GoogleDrive/Shared\ drives/KGTK-public-graphs/wikidata-20200803-v3/qual.tsv.gz \-p output_path \-p output_folder useful_files_v4 \-p temp_folder temp.useful_files_v4 \-p delete_database no \-p compute_pagerank no \-p languages es,ru,zh-cn ```
###Code
import io
import os
import subprocess
import sys
import numpy as np
import pandas as pd
import papermill as pm
from kgtk.configure_kgtk_notebooks import ConfigureKGTK
from kgtk.functions import kgtk, kypher
input_path = "/data/amandeep/wikidata-20220505/import-wikidata/data"
output_path = "/data/amandeep"
kgtk_path = "/data/amandeep/Github/kgtk"
graph_cache_path = None
project_name = "wikidata-20220505-dwd-v4"
files = 'isa,p279star'
# Classes to remove
remove_classes = "Q7318358,Q13442814"
useful_files_notebook = "Wikidata-Useful-Files.ipynb"
notebooks_folder = f"{kgtk_path}/use-cases"
languages = "en,ru,es,zh-cn,de,it,nl,pl,fr,pt,sv"
debug = False
files = files.split(',')
languages = languages.split(',')
ck = ConfigureKGTK(files, kgtk_path=kgtk_path)
ck.configure_kgtk(input_graph_path=input_path,
output_path=output_path,
project_name=project_name,
graph_cache_path=graph_cache_path)
ck.print_env_variables()
ck.load_files_into_cache()
###Output
kgtk query --graph-cache /data/amandeep/wikidata-20220505-dwd-v4/temp.wikidata-20220505-dwd-v4/wikidata.sqlite3.db -i "/data/amandeep/wikidata-20220505/import-wikidata/data/claims.tsv.gz" --as claims -i "/data/amandeep/wikidata-20220505/import-wikidata/data/labels.tsv.gz" --as label_all -i "/data/amandeep/wikidata-20220505/import-wikidata/data/aliases.tsv.gz" --as alias_all -i "/data/amandeep/wikidata-20220505/import-wikidata/data/descriptions.tsv.gz" --as description_all -i "/data/amandeep/wikidata-20220505/import-wikidata/data/claims.wikibase-item.tsv.gz" --as item -i "/data/amandeep/wikidata-20220505/import-wikidata/data/qualifiers.tsv.gz" --as qualifiers -i "/data/amandeep/wikidata-20220505/import-wikidata/data/metadata.property.datatypes.tsv.gz" --as datatypes -i "/data/amandeep/wikidata-20220505/import-wikidata/data/metadata.types.tsv.gz" --as types -i "/data/amandeep/wikidata-20220505/import-wikidata/data/derived.isa.tsv.gz" --as isa -i "/data/amandeep/wikidata-20220505/import-wikidata/data/derived.P279star.tsv.gz" --as p279star --limit 3
id node1 label node2 rank node2;wikidatatype
P10-P1628-32b85d-7927ece6-0 P10 P1628 "http://www.w3.org/2006/vcard/ns#Video" normal url
P10-P1628-acf60d-b8950832-0 P10 P1628 "https://schema.org/video" normal url
P10-P1629-Q34508-bcc39400-0 P10 P1629 Q34508 normal wikibase-item
###Markdown
Preview the input files It is always a good practice to peek at the files to make sure the column headings are what we expect
###Code
!zcat $claims | head
###Output
id node1 label node2 rank node2;wikidatatype
P10-P1628-32b85d-7927ece6-0 P10 P1628 "http://www.w3.org/2006/vcard/ns#Video" normal url
P10-P1628-acf60d-b8950832-0 P10 P1628 "https://schema.org/video" normal url
P10-P1629-Q34508-bcc39400-0 P10 P1629 Q34508 normal wikibase-item
P10-P1630-53947a-fbe9093e-0 P10 P1630 "https://commons.wikimedia.org/wiki/File:$1" normal string
P10-P1659-P1651-c4068028-0 P10 P1659 P1651 normal wikibase-property
P10-P1659-P18-5e4b9c4f-0 P10 P1659 P18 normal wikibase-property
P10-P1659-P4238-d21d1ac0-0 P10 P1659 P4238 normal wikibase-property
P10-P1659-P51-86aca4c5-0 P10 P1659 P51 normal wikibase-property
P10-P1855-Q15075950-7eff6d65-0 P10 P1855 Q15075950 normal wikibase-item
gzip: stdout: Broken pipe
###Markdown
Creating a list of all the items we want to remove Compute the items to be removed Compose the kypher command to remove the classes
###Code
!zcat $isa | head | col
###Output
node1 label node2
P10 isa Q18610173
P10 isa Q19847637
P1000 isa Q18608871
P10000 isa Q19833377
P10000 isa Q89560413
P10001 isa Q107738007
gzip: P10001 isa Q64221137
P10002 isa Q93433126
stdout: Broken pipe
P10003 isa Q108914651
###Markdown
Run the command; the items to remove will be written to the file `{temp}/items.remove.tsv.gz`
###Code
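# Build a quoted, comma-separated list of the classes to remove (e.g. '"Q7318358", "Q13442814"') so it can be spliced into the kypher --where clause below.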
classes = ", ".join(list(map(lambda x: '"{}"'.format(x), remove_classes.replace(" ", "").split(","))))
classes
kypher(f""" -i isa -i p279star -o "$TEMP"/items.remove.tsv.gz
--match 'isa: (n1)-[:isa]->(c), p279star: (c)-[]->(class)'
--where 'class in [{classes}]'
--return 'distinct n1, "p31_p279star" as label, class as node2'
--order-by 'n1'
""")
###Output
_____no_output_____
###Markdown
Preview the file
###Code
!zcat < "$TEMP"/items.remove.tsv.gz | head | col
!zcat < "$TEMP"/items.remove.tsv.gz | wc
###Output
39873936 119621808 1314915334
###Markdown
Collect all the classes of items we will remove, just as a sanity check
###Code
!$kypher -i "$TEMP"/items.remove.tsv.gz \
--match '()-[]->(n2)' \
--return 'distinct n2' \
--limit 10
###Output
node2
Q13442814
Q7318358
###Markdown
Create the reduced edges file Remove the items from the all.tsv and the label, alias and description filesWe will be left with `reduced` files where the edges do not have the unwanted items. We have to remove them from the node1 and node2 positions, so we need to run the ifnotexists commands twice.Before we start, preview the files to see the column headings and check whether they look sorted.
###Code
!zcat "$TEMP"/items.remove.tsv.gz | head | col
###Output
node1 label node2
gzip: Q100000005 p31_p279star Q13442814
Q100000009 p31_p279star Q13442814
stdout: Broken pipe
Q100000015 p31_p279star Q13442814
Q100000022 p31_p279star Q13442814
Q100000031 p31_p279star Q13442814
Q100000044 p31_p279star Q13442814
Q100000056 p31_p279star Q13442814
Q100000066 p31_p279star Q13442814
Q100000074 p31_p279star Q13442814
###Markdown
Remove from the full set of edges those edges that have a `node1` present in `items.remove.tsv`
###Code
kgtk("""ifnotexists
-i $claims
-o "$TEMP"/item.edges.reduced.tsv.gz
--filter-on "$TEMP"/items.remove.tsv.gz
--input-keys node1
--filter-keys node1
--presorted
""")
###Output
_____no_output_____
###Markdown
From the remaining edges, remove those that have a `node2` present in `items.remove.tsv`
###Code
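# Re-sort the reduced edges with node2 as the leading key so the node2-based ifnotexists filter below can run on --presorted input.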
kgtk(f"""sort
-i "$TEMP"/item.edges.reduced.tsv.gz
-o "$TEMP"/item.edges.reduced.sorted.tsv.gz
--extra '--parallel 24 --buffer-size 30% --temporary-directory {os.environ['TEMP']}'
--columns node2 label node1 id""")
kgtk("""ifnotexists
-i $TEMP/item.edges.reduced.sorted.tsv.gz
-o $TEMP/item.edges.reduced.2.tsv.gz
--filter-on $TEMP/items.remove.tsv.gz
--input-keys node2
--filter-keys node1
--presorted""")
###Output
_____no_output_____
###Markdown
Create a file with the labels, for all the languages specified, **FIX THIS**
###Code
kgtk("""ifnotexists -i $label_all
-o "$TEMP"/label.all.edges.reduced.tsv.gz
--filter-on "$TEMP"/items.remove.tsv.gz
--input-keys node1
--filter-keys node1
--presorted""")
kgtk(f"""sort
-i $TEMP/label.all.edges.reduced.tsv.gz
--extra '--parallel 24 --buffer-size 30% --temporary-directory {os.environ['TEMP']}'
-o $OUT/labels.tsv.gz""")
###Output
_____no_output_____
###Markdown
Create a file with the aliases, for all the languages specified
###Code
kgtk("""ifnotexists -i $alias_all
-o $TEMP/alias.all.edges.reduced.tsv.gz
--filter-on $TEMP/items.remove.tsv.gz
--input-keys node1
--filter-keys node1
--presorted""")
kgtk(f"""sort
-i $TEMP/alias.all.edges.reduced.tsv.gz
--extra '--parallel 24 --buffer-size 30% --temporary-directory {os.environ['TEMP']}'
-o $OUT/aliases.tsv.gz""")
###Output
_____no_output_____
###Markdown
Create a file with the descriptions, for all the languages specified
###Code
kgtk("""ifnotexists
-i $description_all
-o $TEMP/description.all.edges.reduced.tsv.gz
--filter-on $TEMP/items.remove.tsv.gz
--input-keys node1
--filter-keys node1
--presorted""")
kgtk(f"""sort
-i $TEMP/description.all.edges.reduced.tsv.gz
--extra '--parallel 24 --buffer-size 30% --temporary-directory {os.environ['TEMP']}'
-o $OUT/descriptions.tsv.gz""")
###Output
_____no_output_____
###Markdown
Produce the output files for claims, labels, aliases and descriptions
###Code
kgtk(f"""sort
-i $TEMP/item.edges.reduced.2.tsv.gz
--extra '--parallel 24 --buffer-size 30% --temporary-directory {os.environ['TEMP']}'
-o $OUT/claims.tsv.gz""")
###Output
_____no_output_____
###Markdown
Create the reduced qualifiers fileWe do this by finding all the ids of the reduced edges file, and then selecting out from `qual.tsv`We need to join by id, so we need to sort both files by id, node1, label, node2:- `$qualifiers` - `$OUT/claims.tsv.gz`
###Code
if debug:
!zcat < "$qualifiers" | head | column -t -s $'\t'
###Output
gzip: id node1 label node2 node2;wikidatatype
P10-P1630-53947a-fbe9093e-0-P407-Q20923490-0 P10-P1630-53947a-fbe9093e-0 P407 Q20923490 wikibase-item
stdout: Broken pipe
P10-P1855-Q15075950-7eff6d65-0-P10-54b214-0 P10-P1855-Q15075950-7eff6d65-0 P10 "Smoorverliefd 12 september.webm" commonsMedia
P10-P1855-Q15075950-7eff6d65-0-P3831-Q622550-0 P10-P1855-Q15075950-7eff6d65-0 P3831 Q622550 wikibase-item
P10-P1855-Q4504-a69d2c73-0-P10-bef003-0 P10-P1855-Q4504-a69d2c73-0 P10 "Komodo dragons video.ogv" commonsMedia
P10-P1855-Q69063653-c8cdb04c-0-P10-6fb08f-0 P10-P1855-Q69063653-c8cdb04c-0 P10 "Couch Commander.webm" commonsMedia
P10-P1855-Q825197-555592a4-0-P10-8a982d-0 P10-P1855-Q825197-555592a4-0 P10 "Elephants Dream (2006).webm" commonsMedia
P10-P2302-Q21502404-d012aef4-0-P1793-1f3adb-0 P10-P2302-Q21502404-d012aef4-0 P1793 "(?i).+\\.(webm\|ogv\|ogg\|gif\|svg)" string
P10-P2302-Q21502404-d012aef4-0-P2316-Q21502408-0 P10-P2302-Q21502404-d012aef4-0 P2316 Q21502408 wikibase-item
P10-P2302-Q21502404-d012aef4-0-P2916-cb0917-0 P10-P2302-Q21502404-d012aef4-0 P2916 'filename with extension: webm, ogg, ogv, or gif (case insensitive)'@en monolingualtext
###Markdown
Run `ifexists` to select out the qualifiers for the edges kept in `$OUT/claims.tsv.gz`, writing them to `$OUT/qualifiers.tsv.gz`. Note that we use `node1` in the qualifier file, matching to `id` in the claims file.
###Code
kgtk("""ifexists
-i $qualifiers
-o $OUT/qualifiers.tsv.gz
--filter-on $OUT/claims.tsv.gz
--input-keys node1
--filter-keys id
--presorted""")
###Output
_____no_output_____
###Markdown
Look at the final output for qualifiers
###Code
if debug:
!zcat $OUT/qualifiers.tsv.gz | head | col
!ls -l "$OUT"
###Output
total 34220224
-rw-r--r-- 1 amandeep isdstaff 2214529468 May 14 20:50 aliases.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 11594856613 May 15 04:31 claims.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 12667243225 May 15 03:52 descriptions.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 6007956701 May 14 20:09 labels.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 2556913530 May 15 05:28 qualifiers.tsv.gz
drwxr-xr-x 2 amandeep isdstaff 288 May 15 04:31 temp.wikidata-20220505-dwd-v4
###Markdown
Copy the property datatypes file over
###Code
!cp $datatypes $OUT/metadata.property.datatypes.tsv.gz
###Output
_____no_output_____
###Markdown
Filter out edges from the metadata types file
###Code
kgtk("""ifexists
-i "$types" -o $OUT/metadata.types.tsv.gz
--filter-on $OUT/claims.tsv.gz
--input-keys node1
--filter-keys node1
--presorted""")
###Output
_____no_output_____
###Markdown
Get the sitelinks as well, since the sitelinks are not in claims.tsv.gz
###Code
kgtk("""ifexists
-i "$GRAPH/sitelinks.tsv.gz"
-o "$OUT/sitelinks.tsv.gz"
--filter-on "$OUT/claims.tsv.gz"
--input-keys node1
--filter-keys node1
--presorted""")
###Output
_____no_output_____
###Markdown
Construct the cat command to generate `all.tsv.gz`
###Code
kgtk("""cat -i "$OUT/labels.tsv.gz"
-i "$OUT/aliases.tsv.gz"
-i "$OUT/descriptions.tsv.gz"
-i "$OUT/claims.tsv.gz"
-i "$OUT/qualifiers.tsv.gz"
-i "$OUT/metadata.property.datatypes.tsv.gz"
-i "$OUT/metadata.types.tsv.gz"
-i "$OUT/sitelinks.tsv.gz"
-o "$OUT/all.tsv.gz"
""")
###Output
_____no_output_____
###Markdown
Run the Partitions Notebook
###Code
pm.execute_notebook(
"partition-wikidata.ipynb",
os.environ["TEMP"] + "/partition-wikidata.out.ipynb",
parameters=dict(
wikidata_input_path = os.environ["OUT"] + "/all.tsv.gz",
wikidata_parts_path = os.environ["OUT"] + "/parts",
temp_folder_path = os.environ["OUT"] + "/parts/temp",
sort_extras = "--buffer-size 30% --temporary-directory $OUT/parts/temp",
verbose = False,
gzip_command = 'gzip'
)
)
###Output
_____no_output_____
###Markdown
Copy the `claims.wikibase-item.tsv.gz` file from the `parts` folder
###Code
!cp $OUT/parts/claims.wikibase-item.tsv.gz $OUT
###Output
_____no_output_____
###Markdown
Run the Useful Files notebook
###Code
pm.execute_notebook(
f'{useful_files_notebook}',
os.environ["TEMP"] + "/Wikidata-Useful-Files-Out.ipynb",
parameters=dict(
output_path = os.environ["OUT"],
input_path = os.environ["OUT"],
kgtk_path = kgtk_path,
compute_pagerank=True,
compute_degrees=True,
compute_isa_star=True,
compute_p31p279_star=True,
debug=False
)
)
###Output
_____no_output_____
###Markdown
Sanity checks
###Code
if debug:
!$kypher -i $OUT/claims.tsv.gz \
--match '(n1:Q368441)-[l]->(n2)' \
--limit 10 \
| col
if debug:
!$kypher -i $OUT/claims.tsv.gz \
--match '(n1:P131)-[l]->(n2)' \
--limit 10 \
| col
###Output
_____no_output_____
###Markdown
Summary of results
###Code
!ls -lh $OUT/*.tsv.gz
###Output
-rw-r--r-- 1 amandeep isdstaff 175M May 16 04:59 /data/amandeep/wikidata-20220505-dwd-v4/aliases.en.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 2.0G May 16 01:22 /data/amandeep/wikidata-20220505-dwd-v4/aliases.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 39G May 15 22:08 /data/amandeep/wikidata-20220505-dwd-v4/all.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 184M May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.commonsMedia.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 2.5G May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.external-id.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 779K May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.geo-shape.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 227M May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.globe-coordinate.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 689K May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.math.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 295M May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.monolingualtext.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 28K May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.musical-notation.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 88 May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.other.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 2.0G May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.quantity.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 1.1G May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.string.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 421K May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.tabular-data.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 301M May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.time.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 11G May 16 04:42 /data/amandeep/wikidata-20220505-dwd-v4/claims.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 123M May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.url.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 115K May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.wikibase-form.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 3.6G May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.wikibase-item.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 75K May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.wikibase-lexeme.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 643K May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.wikibase-property.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 965 May 16 07:06 /data/amandeep/wikidata-20220505-dwd-v4/claims.wikibase-sense.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 12G May 17 03:41 /data/amandeep/wikidata-20220505-dwd-v4/derived.isastar.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 189M May 16 13:27 /data/amandeep/wikidata-20220505-dwd-v4/derived.isa.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 699M May 16 13:05 /data/amandeep/wikidata-20220505-dwd-v4/derived.P279star.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 42M May 16 11:23 /data/amandeep/wikidata-20220505-dwd-v4/derived.P279.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 12G May 17 17:49 /data/amandeep/wikidata-20220505-dwd-v4/derived.P31P279star.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 717M May 16 11:22 /data/amandeep/wikidata-20220505-dwd-v4/derived.P31.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 395M May 16 06:01 /data/amandeep/wikidata-20220505-dwd-v4/descriptions.en.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 12G May 16 01:22 /data/amandeep/wikidata-20220505-dwd-v4/descriptions.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 640M May 16 06:30 /data/amandeep/wikidata-20220505-dwd-v4/labels.en.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 5.6G May 16 01:22 /data/amandeep/wikidata-20220505-dwd-v4/labels.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 79M May 17 21:15 /data/amandeep/wikidata-20220505-dwd-v4/metadata.in_degree.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 357M May 17 20:44 /data/amandeep/wikidata-20220505-dwd-v4/metadata.out_degree.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 559M May 17 18:52 /data/amandeep/wikidata-20220505-dwd-v4/metadata.pagerank.directed.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 770M May 17 19:59 /data/amandeep/wikidata-20220505-dwd-v4/metadata.pagerank.undirected.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 53K May 16 01:21 /data/amandeep/wikidata-20220505-dwd-v4/metadata.property.datatypes.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 271M May 16 01:22 /data/amandeep/wikidata-20220505-dwd-v4/metadata.types.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 16M May 16 07:12 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.commonsMedia.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 151M May 16 07:22 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.external-id.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 29K May 16 07:27 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.geo-shape.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 2.9M May 16 07:32 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.globe-coordinate.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 87K May 16 07:38 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.math.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 6.8M May 16 07:43 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.monolingualtext.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 1.8K May 16 07:48 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.musical-notation.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 900M May 16 07:58 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.quantity.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 530M May 16 08:07 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.string.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 201K May 16 08:12 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.tabular-data.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 16M May 16 08:18 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.time.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 2.5G May 16 04:52 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 35M May 16 08:23 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.url.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 1.1K May 16 08:28 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.wikibase-form.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 695M May 16 08:44 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.wikibase-item.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 9.3K May 16 08:49 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.wikibase-lexeme.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 21K May 16 08:54 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.wikibase-property.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 1.6K May 16 08:58 /data/amandeep/wikidata-20220505-dwd-v4/qualifiers.wikibase-sense.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 88 May 16 06:33 /data/amandeep/wikidata-20220505-dwd-v4/sitelinks.en.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 99 May 16 06:33 /data/amandeep/wikidata-20220505-dwd-v4/sitelinks.qualifiers.en.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 96 May 16 01:22 /data/amandeep/wikidata-20220505-dwd-v4/sitelinks.qualifiers.tsv.gz
-rw-r--r-- 1 amandeep isdstaff 1.8G May 16 01:22 /data/amandeep/wikidata-20220505-dwd-v4/sitelinks.tsv.gz
|
notebooks/thresholds-add-cids.ipynb | ###Markdown
Add CIDS to parsed_threshold_data_in_air.csv
###Code
import pandas as pd
import pyrfume
from pyrfume.odorants import get_cid, get_cids
from rickpy import ProgressBar
df = pyrfume.load_data('thresholds/parsed_threshold_data_in_air.csv')
df = df.set_index('canonical SMILES')
smiles_cids = get_cids(df.index, kind='SMILES')
df = df.join(pd.Series(smiles_cids, name='CID'))
df.head()
from rdkit.Chem import MolFromSmiles, MolToSmiles
df['SMILES'] = df.index
p = ProgressBar(len(smiles_cids))
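# For SMILES that did not resolve to a CID (cid == 0), re-canonicalize the
# SMILES with RDKit and retry the PubChem lookup with the canonical form.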
for i, (old, cid) in enumerate(smiles_cids.items()):
p.animate(i, status=old)
if cid == 0:
mol = MolFromSmiles(old)
if mol is None:
new = ''
else:
new = MolToSmiles(mol, isomericSmiles=True)
if old != new:
cid = get_cid(new, kind='SMILES')
df.loc[old, ['SMILES', 'CID']] = [new, cid]
p.animate(i+1, status='Done')
df[df['SMILES']=='']
ozone_smiles = '[O-][O+]=O'
ozone_cid = get_cid(ozone_smiles, kind='SMILES')
df.loc['O=[O]=O', ['SMILES', 'CID']] = [ozone_smiles, ozone_cid]
df = df.set_index('CID').drop(['ez_smiles'], axis=1)
df = df.rename(columns={'author': 'year', 'year': 'author'})
df.head()
pyrfume.save_data(df, 'thresholds/parsed_threshold_data_in_air_fixed.csv')
###Output
_____no_output_____ |
Credit Card Fraud Detection assignment.ipynb | ###Markdown
Credit Card Fraud Detection: Download the dataset from this link: https://www.kaggle.com/mlg-ulb/creditcardfraud

Description of the dataset: The dataset contains transactions made by credit cards in September 2013 by European cardholders. It covers transactions that occurred over two days, with 492 frauds out of 284,807 transactions. The dataset is highly unbalanced: the positive class (frauds) accounts for 0.172% of all transactions. It contains only numerical input variables, which are the result of a PCA transformation. Unfortunately, due to confidentiality issues, the original features and more background information about the data cannot be provided. Features V1, V2, ..., V28 are the principal components obtained with PCA; the only features which have not been transformed with PCA are 'Time' and 'Amount'. Feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset. Feature 'Amount' is the transaction amount; it can be used for example-dependent cost-sensitive learning. Feature 'Class' is the response variable and takes value 1 in case of fraud and 0 otherwise.

WORKFLOW:
1. Load the data.
2. Check for missing values (if any exist, fill each record with the mean of its feature).
3. Standardize the input variables.
4. Split into 50% training (samples, labels), 30% test (samples, labels) and 20% validation data (samples, labels).
5. Model: input layer (number of features), 3 hidden layers with 10, 8 and 6 units, and an output layer; try relu/tanh activations and check by experiment.
6. Compilation step (note: it is a binary problem, so select the loss and metrics accordingly).
7. Train the model for 100 epochs.
8. If the model overfits, tune it by changing the units, the number of layers or the epochs, or by adding a dropout layer or a regularizer as needed.
9. Prediction accuracy should be > 92%.
10. Evaluation step.
11. Prediction.

Task: Identify fraudulent credit card transactions.
###Code
import pandas as pd
# loadind data
file = 'creditcard.csv'
cc_data = pd.read_csv(file)
cc_data = pd.DataFrame(cc_data)
cc_data
cc_data.head()
cc_data.describe()
cc_data.info() # to confirm about null values
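# Hedged sketch of workflow steps 2-3 (mean-impute any missing values, then
# standardize the inputs); kept in separate variables so the assignment's own
# code below still runs on the raw cc_data columns.
from sklearn.preprocessing import StandardScaler
filled = cc_data.fillna(cc_data.mean())          # step 2: fill gaps with column means
feature_cols = filled.columns.drop('Class')
X_scaled = pd.DataFrame(StandardScaler().fit_transform(filled[feature_cols]),
                        columns=feature_cols)    # step 3: zero mean, unit variance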
# data distribution into X & y
X = cc_data.loc[:, cc_data.columns != 'Class']
y = cc_data.Class
# Split into 50% Training(Samples,Labels) , 30% Test(Samples,Labels) and 20% Validation Data(Samples,Labels).
from sklearn.model_selection import train_test_split
# ratios for the whole dataset.
train_ratio = 0.5
test_ratio = 0.3
validation_ratio = 0.2
# using train test split method
x_remaining, x_test, y_remaining, y_test = train_test_split(X, y, test_size=test_ratio)
# validation ratio from remaining dataset.
remaining = 1 - test_ratio
validation_adjusted = validation_ratio / remaining
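# e.g. with test_ratio = 0.3 and validation_ratio = 0.2 this gives 0.2 / 0.7 ≈ 0.286,
# i.e. 28.6% of the remaining 70%, which is the intended 20% of the full dataset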
# train and validation splits
x_train, x_validation, y_train, y_validation = train_test_split(x_remaining, y_remaining, test_size=validation_adjusted)
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape, x_validation.shape, y_validation.shape)
# model creation and training
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import *
model = Sequential()
tf.keras.backend.set_floatx('float64')
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=100, validation_data=(x_validation, y_validation))
###Output
Epoch 1/100
4451/4451 [==============================] - 8s 2ms/step - loss: 7.6426 - accuracy: 0.9963 - val_loss: 2.9872 - val_accuracy: 0.9983
Epoch 2/100
4451/4451 [==============================] - 7s 2ms/step - loss: 3.5244 - accuracy: 0.9964 - val_loss: 0.0974 - val_accuracy: 0.9939
Epoch 3/100
4451/4451 [==============================] - 8s 2ms/step - loss: 3.6054 - accuracy: 0.9961 - val_loss: 1.1088 - val_accuracy: 0.9982
Epoch 4/100
4451/4451 [==============================] - 8s 2ms/step - loss: 3.1665 - accuracy: 0.9962 - val_loss: 0.1564 - val_accuracy: 0.9968
Epoch 5/100
4451/4451 [==============================] - 8s 2ms/step - loss: 2.7538 - accuracy: 0.9964 - val_loss: 0.2018 - val_accuracy: 0.9883
Epoch 6/100
4451/4451 [==============================] - 9s 2ms/step - loss: 2.9331 - accuracy: 0.9966 - val_loss: 0.9958 - val_accuracy: 0.9983
Epoch 7/100
4451/4451 [==============================] - 9s 2ms/step - loss: 2.2481 - accuracy: 0.9965 - val_loss: 0.1648 - val_accuracy: 0.9962
Epoch 8/100
4451/4451 [==============================] - 9s 2ms/step - loss: 2.3553 - accuracy: 0.9967 - val_loss: 1.8224 - val_accuracy: 0.9985
Epoch 9/100
4451/4451 [==============================] - 7s 2ms/step - loss: 1.6336 - accuracy: 0.9971 - val_loss: 1.1124 - val_accuracy: 0.9985
Epoch 10/100
4451/4451 [==============================] - 7s 2ms/step - loss: 1.7988 - accuracy: 0.9973 - val_loss: 0.4405 - val_accuracy: 0.9637
Epoch 11/100
4451/4451 [==============================] - 8s 2ms/step - loss: 1.9826 - accuracy: 0.9970 - val_loss: 0.2357 - val_accuracy: 0.9985
Epoch 12/100
4451/4451 [==============================] - 7s 2ms/step - loss: 1.4688 - accuracy: 0.9972 - val_loss: 4.7987 - val_accuracy: 0.9984
Epoch 13/100
4451/4451 [==============================] - 7s 2ms/step - loss: 1.5460 - accuracy: 0.9973 - val_loss: 1.7267 - val_accuracy: 0.9986
Epoch 14/100
4451/4451 [==============================] - 8s 2ms/step - loss: 1.3524 - accuracy: 0.9972 - val_loss: 0.2254 - val_accuracy: 0.9984
Epoch 15/100
4451/4451 [==============================] - 7s 2ms/step - loss: 1.3034 - accuracy: 0.9973 - val_loss: 0.4511 - val_accuracy: 0.9988
Epoch 16/100
4451/4451 [==============================] - 7s 2ms/step - loss: 1.4981 - accuracy: 0.9976 - val_loss: 0.1794 - val_accuracy: 0.9984
Epoch 17/100
4451/4451 [==============================] - 8s 2ms/step - loss: 1.1188 - accuracy: 0.9975 - val_loss: 1.8823 - val_accuracy: 0.9986
Epoch 18/100
4451/4451 [==============================] - 8s 2ms/step - loss: 1.5484 - accuracy: 0.9974 - val_loss: 3.6194 - val_accuracy: 0.9984
Epoch 19/100
4451/4451 [==============================] - 8s 2ms/step - loss: 1.1093 - accuracy: 0.9979 - val_loss: 0.1721 - val_accuracy: 0.9986
Epoch 20/100
4451/4451 [==============================] - 8s 2ms/step - loss: 1.0012 - accuracy: 0.9974 - val_loss: 1.6799 - val_accuracy: 0.9986
Epoch 21/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.7255 - accuracy: 0.9975 - val_loss: 0.1488 - val_accuracy: 0.9984
Epoch 22/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.7440 - accuracy: 0.9976 - val_loss: 1.1986 - val_accuracy: 0.9985
Epoch 23/100
4451/4451 [==============================] - 7s 2ms/step - loss: 0.8144 - accuracy: 0.9977 - val_loss: 0.1188 - val_accuracy: 0.9990
Epoch 24/100
4451/4451 [==============================] - 7s 2ms/step - loss: 0.8950 - accuracy: 0.9975 - val_loss: 2.5671 - val_accuracy: 0.9984
Epoch 25/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.7938 - accuracy: 0.9979 - val_loss: 0.1546 - val_accuracy: 0.9987
Epoch 26/100
4451/4451 [==============================] - 7s 2ms/step - loss: 0.5884 - accuracy: 0.9977 - val_loss: 0.6092 - val_accuracy: 0.9987
Epoch 27/100
4451/4451 [==============================] - 7s 2ms/step - loss: 0.4694 - accuracy: 0.9979 - val_loss: 0.0996 - val_accuracy: 0.9988
Epoch 28/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.6198 - accuracy: 0.9980 - val_loss: 1.0428 - val_accuracy: 0.9985
Epoch 29/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.4245 - accuracy: 0.9981 - val_loss: 0.1854 - val_accuracy: 0.9990
Epoch 30/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.3659 - accuracy: 0.9982 - val_loss: 0.2747 - val_accuracy: 0.9990
Epoch 31/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.3990 - accuracy: 0.9980 - val_loss: 1.0235 - val_accuracy: 0.9984
Epoch 32/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.3540 - accuracy: 0.9979 - val_loss: 0.0256 - val_accuracy: 0.9992
Epoch 33/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.2975 - accuracy: 0.9979 - val_loss: 0.3110 - val_accuracy: 0.9984
Epoch 34/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.1517 - accuracy: 0.9983 - val_loss: 0.0093 - val_accuracy: 0.9992
Epoch 35/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0247 - accuracy: 0.9989 - val_loss: 0.0066 - val_accuracy: 0.9989
Epoch 36/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0108 - accuracy: 0.9986 - val_loss: 0.0117 - val_accuracy: 0.9984
Epoch 37/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0138 - accuracy: 0.9984 - val_loss: 0.0131 - val_accuracy: 0.9984
Epoch 38/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0120 - accuracy: 0.9984 - val_loss: 0.0121 - val_accuracy: 0.9984
Epoch 39/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0132 - accuracy: 0.9984 - val_loss: 0.0122 - val_accuracy: 0.9983
Epoch 40/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0144 - accuracy: 0.9984 - val_loss: 0.2542 - val_accuracy: 0.9984
Epoch 41/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0146 - accuracy: 0.9985 - val_loss: 0.0130 - val_accuracy: 0.9983
Epoch 42/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0128 - accuracy: 0.9984 - val_loss: 0.0137 - val_accuracy: 0.9990
Epoch 43/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0134 - accuracy: 0.9984 - val_loss: 0.0120 - val_accuracy: 0.9984
Epoch 44/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0115 - accuracy: 0.9984 - val_loss: 0.0117 - val_accuracy: 0.9983
Epoch 45/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0118 - accuracy: 0.9984 - val_loss: 0.0130 - val_accuracy: 0.9984
Epoch 46/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0173 - accuracy: 0.9984 - val_loss: 0.0125 - val_accuracy: 0.9984
Epoch 47/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0141 - accuracy: 0.9984 - val_loss: 0.0127 - val_accuracy: 0.9984
Epoch 48/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0127 - accuracy: 0.9984 - val_loss: 0.0126 - val_accuracy: 0.9984
Epoch 49/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0161 - accuracy: 0.9984 - val_loss: 0.0118 - val_accuracy: 0.9984
Epoch 50/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0143 - accuracy: 0.9984 - val_loss: 0.0126 - val_accuracy: 0.9984
Epoch 51/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0127 - accuracy: 0.9984 - val_loss: 0.0127 - val_accuracy: 0.9984
Epoch 52/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0178 - accuracy: 0.9985 - val_loss: 0.0123 - val_accuracy: 0.9984
Epoch 53/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0121 - accuracy: 0.9984 - val_loss: 0.0121 - val_accuracy: 0.9984
Epoch 54/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0130 - accuracy: 0.9984 - val_loss: 0.0126 - val_accuracy: 0.9984
Epoch 55/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0127 - accuracy: 0.9983 - val_loss: 0.0121 - val_accuracy: 0.9984
Epoch 56/100
4451/4451 [==============================] - 8s 2ms/step - loss: 0.0114 - accuracy: 0.9984 - val_loss: 0.0124 - val_accuracy: 0.9984
Epoch 57/100
###Markdown
Evaluation
###Code
results = model.evaluate(x_test, y_test)
###Output
2671/2671 [==============================] - 3s 1ms/step - loss: 0.0279 - accuracy: 0.9982
###Markdown
Predictions
###Code
pred = model.predict(x_validation)
pred # predicted fraud probabilities for the validation set; evaluation accuracy above is ~99.8%
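# Hedged follow-up (not part of the original assignment code): because the classes
# are heavily imbalanced, plain accuracy says little, so threshold the predicted
# probabilities and look at per-class precision/recall. The 0.5 threshold is an assumption.
from sklearn.metrics import classification_report, confusion_matrix
pred_labels = (pred.ravel() > 0.5).astype(int)
print(confusion_matrix(y_validation, pred_labels))
print(classification_report(y_validation, pred_labels, digits=4))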
###Output
_____no_output_____ |
ipeds-completions-reader.ipynb | ###Markdown
Setup: Load libraries, set matplotlib to inline plotting, and create constants for the analysis
###Code
%matplotlib inline
from io import BytesIO
from zipfile import ZipFile
from urllib.request import urlopen
import pyodbc as db
import numpy as np
import pandas as pd
from pandas import DataFrame, Series
import matplotlib.pyplot as plt
import seaborn as sns
start_year = 2000
end_year = 2005
years = [i for i in range(start_year, end_year + 1)]
# key columns
indices = ["unitid", "date_key", "year", "cipcode", "awlevel", "majornum"]
# value columns
cols = ["cnralm", "cnralw", "cunknm", "cunknw", "chispm", "chispw",
"caianm", "caianw", "casiam", "casiaw", "cbkaam", "cbkaaw",
"cnhpim", "cnhpiw", "cwhitm", "cwhitw", "c2morm", "c2morw"]
def fix_cols(dat, year):
dat.columns = [colname.lower() for colname in list(dat.columns.values)]
if year < 2001:
dat["majornum"] = 1
dat["cnralm"] = pd.to_numeric(dat["crace01"], errors = "coerce", downcast = "integer")
dat["cnralw"] = pd.to_numeric(dat["crace02"], errors = "coerce", downcast = "integer")
dat["cunknm"] = pd.to_numeric(dat["crace13"], errors = "coerce", downcast = "integer")
dat["cunknw"] = pd.to_numeric(dat["crace14"], errors = "coerce", downcast = "integer")
dat["chispm"] = pd.to_numeric(dat["crace09"], errors = "coerce", downcast = "integer")
dat["chispw"] = pd.to_numeric(dat["crace10"], errors = "coerce", downcast = "integer")
dat["caianm"] = pd.to_numeric(dat["crace05"], errors = "coerce", downcast = "integer")
dat["caianw"] = pd.to_numeric(dat["crace06"], errors = "coerce", downcast = "integer")
dat["casiam"] = pd.to_numeric(dat["crace07"], errors = "coerce", downcast = "integer")
dat["casiaw"] = pd.to_numeric(dat["crace08"], errors = "coerce", downcast = "integer")
dat["cbkaam"] = pd.to_numeric(dat["crace03"], errors = "coerce", downcast = "integer")
dat["cbkaaw"] = pd.to_numeric(dat["crace04"], errors = "coerce", downcast = "integer")
dat["cnhpim"] = 0
dat["cnhpiw"] = 0
dat["cwhitm"] = pd.to_numeric(dat["crace11"], errors = "coerce", downcast = "integer")
dat["cwhitw"] = pd.to_numeric(dat["crace12"], errors = "coerce", downcast = "integer")
dat["c2morm"] = 0
dat["c2morw"] = 0
elif year in range(2001, 2008):
dat["cnralm"] = pd.to_numeric(dat["crace01"], errors = "coerce", downcast = "integer")
dat["cnralw"] = pd.to_numeric(dat["crace02"], errors = "coerce", downcast = "integer")
dat["cunknm"] = pd.to_numeric(dat["crace13"], errors = "coerce", downcast = "integer")
dat["cunknw"] = pd.to_numeric(dat["crace14"], errors = "coerce", downcast = "integer")
dat["chispm"] = pd.to_numeric(dat["crace09"], errors = "coerce", downcast = "integer")
dat["chispw"] = pd.to_numeric(dat["crace10"], errors = "coerce", downcast = "integer")
dat["caianm"] = pd.to_numeric(dat["crace05"], errors = "coerce", downcast = "integer")
dat["caianw"] = pd.to_numeric(dat["crace06"], errors = "coerce", downcast = "integer")
dat["casiam"] = pd.to_numeric(dat["crace07"], errors = "coerce", downcast = "integer")
dat["casiaw"] = pd.to_numeric(dat["crace08"], errors = "coerce", downcast = "integer")
dat["cbkaam"] = pd.to_numeric(dat["crace03"], errors = "coerce", downcast = "integer")
dat["cbkaaw"] = pd.to_numeric(dat["crace04"], errors = "coerce", downcast = "integer")
dat["cnhpim"] = 0
dat["cnhpiw"] = 0
dat["cwhitm"] = pd.to_numeric(dat["crace11"], errors = "coerce", downcast = "integer")
dat["cwhitw"] = pd.to_numeric(dat["crace12"], errors = "coerce", downcast = "integer")
dat["c2morm"] = 0
dat["c2morw"] = 0
elif year in range(2008, 2011):
dat["cnralm"] = pd.to_numeric(dat["cnralm"], errors = "coerce", downcast = "integer")
dat["cnralw"] = pd.to_numeric(dat["cnralw"], errors = "coerce", downcast = "integer")
dat["cunknm"] = pd.to_numeric(dat["cunknm"], errors = "coerce", downcast = "integer")
dat["cunknw"] = pd.to_numeric(dat["cunknw"], errors = "coerce", downcast = "integer")
dat["chispm"] = pd.to_numeric(dat["dvchsm"], errors = "coerce", downcast = "integer")
dat["chispw"] = pd.to_numeric(dat["dvchsw"], errors = "coerce", downcast = "integer")
dat["caianm"] = pd.to_numeric(dat["dvcaim"], errors = "coerce", downcast = "integer")
dat["caianw"] = pd.to_numeric(dat["dvcaiw"], errors = "coerce", downcast = "integer")
dat["casiam"] = pd.to_numeric(dat["dvcapm"], errors = "coerce", downcast = "integer")
dat["casiaw"] = pd.to_numeric(dat["dvcapw"], errors = "coerce", downcast = "integer")
dat["cbkaam"] = pd.to_numeric(dat["dvcbkm"], errors = "coerce", downcast = "integer")
dat["cbkaaw"] = pd.to_numeric(dat["dvcbkw"], errors = "coerce", downcast = "integer")
dat["cnhpim"] = 0
dat["cnhpiw"] = 0
dat["cwhitm"] = pd.to_numeric(dat["dvcwhm"], errors = "coerce", downcast = "integer")
dat["cwhitw"] = pd.to_numeric(dat["dvcwhw"], errors = "coerce", downcast = "integer")
dat["c2morm"] = pd.to_numeric(dat["c2morm"], errors = "coerce", downcast = "integer")
dat["c2morw"] = pd.to_numeric(dat["c2morw"], errors = "coerce", downcast = "integer")
elif year > 2010:
dat["cnralm"] = pd.to_numeric(dat["cnralm"], errors = "coerce", downcast = "integer")
dat["cnralw"] = pd.to_numeric(dat["cnralw"], errors = "coerce", downcast = "integer")
dat["cunknm"] = pd.to_numeric(dat["cunknm"], errors = "coerce", downcast = "integer")
dat["cunknw"] = pd.to_numeric(dat["cunknw"], errors = "coerce", downcast = "integer")
dat["chispm"] = pd.to_numeric(dat["chispm"], errors = "coerce", downcast = "integer")
dat["chispw"] = pd.to_numeric(dat["chispw"], errors = "coerce", downcast = "integer")
dat["caianm"] = pd.to_numeric(dat["caianm"], errors = "coerce", downcast = "integer")
dat["caianw"] = pd.to_numeric(dat["caianw"], errors = "coerce", downcast = "integer")
dat["casiam"] = pd.to_numeric(dat["casiam"], errors = "coerce", downcast = "integer")
dat["casiaw"] = pd.to_numeric(dat["casiaw"], errors = "coerce", downcast = "integer")
dat["cbkaam"] = pd.to_numeric(dat["cbkaam"], errors = "coerce", downcast = "integer")
dat["cbkaaw"] = pd.to_numeric(dat["cbkaaw"], errors = "coerce", downcast = "integer")
dat["cnhpim"] = pd.to_numeric(dat["cnhpim"], errors = "coerce", downcast = "integer")
dat["cnhpiw"] = pd.to_numeric(dat["cnhpiw"], errors = "coerce", downcast = "integer")
dat["cwhitm"] = pd.to_numeric(dat["cwhitm"], errors = "coerce", downcast = "integer")
dat["cwhitw"] = pd.to_numeric(dat["cwhitw"], errors = "coerce", downcast = "integer")
dat["c2morm"] = pd.to_numeric(dat["c2morm"], errors = "coerce", downcast = "integer")
dat["c2morw"] = pd.to_numeric(dat["c2morw"], errors = "coerce", downcast = "integer")
years
###Output
_____no_output_____
###Markdown
Read Data: Read the data file from the NCES website for each selected year, set column names to lower case for sanity, reduce to the needed columns, and fill NaN with zero. Show the data frame shape to confirm the combined data loaded as expected.
###Code
df = DataFrame()
for year in years:
# IPEDS Completions "A" files: awards/degrees conferred by program (CIP), award level, and demographics
url = "https://nces.ed.gov/ipeds/datacenter/data/c" + str(year) + "_a.zip"
file_name = "c" + str(year) + "_a.csv"
resp = urlopen(url)
zipfile = ZipFile(BytesIO(resp.read()))
myfile = zipfile.open(file_name)
temp = pd.read_csv(myfile,
low_memory = True,
encoding = "iso-8859-1")
fix_cols(temp, year)
temp["year"] = year
temp["date_key"] = (year * 10000) + 1015
temp = pd.concat((temp[indices], temp[cols]), axis = 1)
df = pd.concat([df, temp],
sort = True)
temp = None
# replace NaN with zero
df = df.fillna(0)
df.shape
###Output
_____no_output_____
###Markdown
Look At Summaries
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 173137 entries, 0 to 173136
Data columns (total 24 columns):
awlevel 173137 non-null int64
c2morm 173137 non-null int64
c2morw 173137 non-null int64
caianm 173137 non-null int16
caianw 173137 non-null int16
casiam 173137 non-null int16
casiaw 173137 non-null int16
cbkaam 173137 non-null int16
cbkaaw 173137 non-null int16
chispm 173137 non-null int16
chispw 173137 non-null int16
cipcode 173137 non-null float64
cnhpim 173137 non-null int64
cnhpiw 173137 non-null int64
cnralm 173137 non-null int16
cnralw 173137 non-null int16
cunknm 173137 non-null int16
cunknw 173137 non-null int16
cwhitm 173137 non-null int16
cwhitw 173137 non-null int16
date_key 173137 non-null int64
majornum 173137 non-null int64
unitid 173137 non-null int64
year 173137 non-null int64
dtypes: float64(1), int16(14), int64(9)
memory usage: 17.8 MB
###Markdown
Look at first and last cases
###Code
df.iloc[[0, 1, 2, 3, 4, -5, -4, -3, -2, -1],:]
###Output
_____no_output_____
###Markdown
Look for potential structural issues. Zero applications, non-zero admissions
###Code
# zero apps, non-zero admits
df.loc[(df["applcn"] == 0) & (df["admssn"] > 0), :]
###Output
_____no_output_____
###Markdown
Zero admissions, non-zero enrollment
###Code
# zero admits, non-zero enrollment
df.loc[(df["admssn"] == 0) & (df["enrlt"] > 0), :]
###Output
_____no_output_____
###Markdown
Sum of men and women applications is greater than total applications
###Code
# total applications less than sum of parts
df.loc[df["applcn"] < df["applcnm"] + df["applcnw"],:]
###Output
_____no_output_____
###Markdown
Calculate unknown sex categories: Calculate an unknown variable to accommodate those institutions where the total applications, admissions, and enrollment are greater than the sum of their headcounts of men and women. This ensures that the details roll up to the total in the final long-format data frame.
###Code
# calculate unknowns
df["applcnu"] = df["applcn"] - (df["applcnm"] + df["applcnw"])
df["admssnu"] = df["admssn"] - (df["admssnm"] + df["admssnw"])
df["enrlu"] = df["enrlt"] - (df["enrlm"] + df["enrlw"])
df.loc[df["applcnu"] > 0, ["applcn", "applcnm", "applcnw", "applcnu"]].head()
###Output
_____no_output_____
###Markdown
Convert From Wide to Long: Melt the DataFrame, pivoting value columns into a single column. Create a field column to identify the type of value. Create a sex column to identify values by sex.
###Code
# reshape from wide to long format
adm_long = pd.melt(df, id_vars = ["unitid", "date_key"],
value_vars = ["applcnm", "applcnw", "applcnu",
"admssnm", "admssnw", "admssnu",
"enrlm", "enrlw", "enrlu"],
value_name = "count")
# field indicator
adm_long["field"] = np.where(adm_long["variable"].str.slice(0, 3) == "app", "applications", "unknown")
adm_long["field"] = np.where(adm_long["variable"].str.slice(0, 3) == "adm", "admissions", adm_long["field"])
adm_long["field"] = np.where(adm_long["variable"].str.slice(0, 3) == "enr", "enrollment", adm_long["field"])
# sex indicator
adm_long["sex"] = np.where(adm_long["variable"].str.slice(-1) == "w", "women", "unknown")
adm_long["sex"] = np.where(adm_long["variable"].str.slice(-1) == "m", "men", adm_long["sex"])
adm_long.iloc[[0, 1, 2, 3, 4, -5, -4, -3, -2, -1],:]
###Output
_____no_output_____
###Markdown
Inspect Field Values: Check for unknown. If there is an unknown value here, something has changed in the naming conventions.
###Code
adm_long["field"].value_counts()
###Output
_____no_output_____
###Markdown
Add Demographic Key: This adds a demographic key for warehousing. The first 5 characters are all set to "unkn" because IPEDS-ADM does not collect race/ethnicity.
###Code
adm_long["demographic_key"] = "unknu"
adm_long["demographic_key"] = np.where(adm_long["sex"] == "men", "unknm", adm_long["demographic_key"])
adm_long["demographic_key"] = np.where(adm_long["sex"] == "women", "unknw", adm_long["demographic_key"])
adm_long["demographic_key"].value_counts()
###Output
_____no_output_____
###Markdown
Pivot Long Data to Final Format: Pivot and aggregate (sum) the count column, converting the field variable back into three measures: applications, admissions, and enrollment. For warehousing, we will eventually drop the sex field, but it is kept here for data-checking purposes.
###Code
adm = adm_long.pivot_table(index=["unitid", "date_key", "demographic_key", "sex"],
columns='field',
values='count',
aggfunc = np.sum,
fill_value = 0).reset_index()
# remove institutions with no applications
adm = adm.loc[adm["applications"] > 0]
adm.iloc[[0, 1, 2, 3, 4, -5, -4, -3, -2, -1],:]
###Output
_____no_output_____
###Markdown
Write Data to Warehouse (later)
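A minimal sketch of the intended load is shown below, assuming SQLAlchemy is available; the connection URL and the admissions_fact table name are placeholders rather than the real warehouse.
###Code
# Hedged sketch of the deferred warehouse load (not run here). The connection
# URL and the target table name below are placeholders; point them at the
# actual warehouse before use.
from sqlalchemy import create_engine

engine = create_engine("mssql+pyodbc://user:password@warehouse_dsn")  # placeholder URL/DSN
adm.drop(columns=["sex"]).to_sql("admissions_fact", engine,
                                 if_exists="append", index=False)
###Output
_____no_output_____
###Markdown
Create Some Variables and Do Basic Exploration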
###Code
adm["acceptance_rate"] = adm["admissions"] / adm["applications"]
adm["yield_rate"] = adm["enrollment"] / adm["admissions"]
adm["isUNL"] = np.where(adm["unitid"] == 181464, "UNL", "Others")
# & (adm["acceptance_rate"] > 0) & (adm["acceptance_rate"] <= 1.0) & (adm["yield_rate"] > 0) & (adm["yield_rate"] <= 1.0)
cases = (adm["date_key"] == 20171015) & (adm["acceptance_rate"] < 1.0) & (adm["yield_rate"] < 1.0)
viz_set = adm[cases]
viz_set.shape
# Set theme
sns.set_style('darkgrid')
sns.jointplot(x = "acceptance_rate",
y = "yield_rate",
data = viz_set,
kind="hex",
color="#4CB391")
###Output
_____no_output_____ |
numbers_recognition_1.ipynb | ###Markdown
evaluate and save the model
###Code
joblib.dump(model, "hand_written_digits_recognition.joblib")
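# To reuse the saved model later (illustrative), load it back with:
# model = joblib.load("hand_written_digits_recognition.joblib")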
###Output
_____no_output_____ |
data/Seq2Seq_Simple.ipynb | ###Markdown
Simple Seq2Seq machine translation using GRU based encoder-decoder architecture
###Code
import torch
from torch import nn
import numpy as np
import time
###Output
_____no_output_____
###Markdown
Preparing the dataset for Machine Translation
###Code
from ProcessData import *
###Output
Go. Va !
Hi. Salut !
go . va !
hi . salut !
run ! cours !
run ! courez !
who ? qui ?
wow ! ça alors !
['stop', '!'] ['stop', '!']
['i', 'try', '.'] ["j'essaye", '.']
Batch of 2 sentences:
X: tensor([[ 6, 124, 4, 3, 1, 1, 1, 1],
[ 6, 18, 101, 4, 3, 1, 1, 1]], dtype=torch.int32)
valid lengths for X: tensor([4, 5])
Y: tensor([[ 6, 27, 7, 0, 4, 3, 1, 1],
[ 6, 7, 158, 4, 3, 1, 1, 1]], dtype=torch.int32)
valid lengths for Y: tensor([6, 5])
{'<unk>': 0, '<pad>': 1, '<bos>': 2, '<eos>': 3, '.': 4, '!': 5, 'i': 6, "i'm": 7, 'it': 8, 'go': 9, 'tom': 10, '?': 11, 'me': 12, 'get': 13, 'be': 14, 'up': 15, 'come': 16, 'we': 17, 'am': 18, 'this': 19, 'lost': 20, 'on': 21, 'won': 22, 'us': 23, "it's": 24, 'down': 25, 'no': 26, 'nice': 27, 'away': 28, 'you': 29, 'back': 30, 'try': 31, 'way': 32, 'fair': 33, 'out': 34, 'lazy': 35, 'help': 36, 'hold': 37, 'off': 38, 'grab': 39, 'how': 40, 'who': 41, 'got': 42, 'calm': 43, 'call': 44, 'he': 45, 'a': 46, 'good': 47, 'job': 48, 'did': 49, 'use': 50, 'over': 51, "don't": 52, 'forget': 53, 'run': 54, 'in': 55, 'home': 56, 'fun': 57, "he's": 58, 'sure': 59, 'here': 60, 'stop': 61, 'cool': 62, 'drive': 63, 'fat': 64, 'shut': 65, 'wake': 66, 'leave': 67, 'sit': 68, 'can': 69, 'fire': 70, 'cheers': 71, 'now': 72, 'left': 73, 'ok': 74, 'ask': 75, 'drop': 76, 'hang': 77, "i'll": 78, 'keep': 79, 'tell': 80, 'him': 81, 'ahead': 82, 'hurry': 83, 'fine': 84, 'died': 85, 'taste': 86, 'they': 87, 'watch': 88, 'what': 89, 'feel': 90, 'that': 91, 'beg': 92, 'hug': 93, 'fell': 94, 'really': 95, 'quit': 96, 'tried': 97, 'wet': 98, 'kiss': 99, 'still': 100, 'busy': 101, 'free': 102, 'late': 103, 'okay': 104, 'may': 105, 'she': 106, 'came': 107, 'terrific': 108, 'catch': 109, 'win': 110, 'follow': 111, 'cringed': 112, 'hi': 113, 'wait': 114, 'hello': 115, 'see': 116, 'attack': 117, 'hop': 118, 'know': 119, 'paid': 120, 'slow': 121, 'runs': 122, 'agree': 123, 'dozed': 124, 'stood': 125, 'swore': 126, 'hit': 127, 'ill': 128, 'sad': 129, 'join': 130, ',': 131, 'too': 132, 'open': 133, 'show': 134, 'take': 135, 'wash': 136, 'them': 137, 'man': 138, 'beats': 139, 'find': 140, 'fix': 141, 'have': 142, 'phoned': 143, 'refuse': 144, 'rested': 145, 'saw': 146, 'stayed': 147, 'cold': 148, 'deaf': 149, 'full': 150, 'game': 151, 'rich': 152, 'sick': 153, 'tidy': 154, 'ugly': 155, 'weak': 156, 'well': 157, "i've": 158, 'works': 159, 'his': 160, 'new': 161, "let's": 162, 'look': 163, 'marry': 164, 'save': 165, 'speak': 166, 'trust': 167, 'some': 168, 'warn': 169, 'for': 170, 'write': 171, 'seated': 172, 'soon': 173, 'dogs': 174, 'bark': 175, 'die': 176, 'excuse': 177, 'ready': 178, 'to': 179, 'bed': 180, 'luck': 181, 'is': 182, "how's": 183}
{'<unk>': 0, '<pad>': 1, '<bos>': 2, '<eos>': 3, '.': 4, '!': 5, 'je': 6, 'suis': 7, 'tom': 8, '?': 9, "j'ai": 10, 'nous': 11, 'ça': 12, "c'est": 13, 'est': 14, 'à': 15, 'va': 16, 'bien': 17, 'il': 18, 'en': 19, 'soyez': 20, 'j’ai': 21, 'pas': 22, 'un': 23, 'qui': 24, 'gagné': 25, 'sois': 26, 'me': 27, 'tomber': 28, 'la': 29, 'ne': 30, 'ceci': 31, 'de': 32, 'vais': 33, 'bon': 34, 'venez': 35, 'le': 36, 'chez': 37, "j'en": 38, 'avons': 39, 'calme': 40, 'viens': 41, 'vous': 42, 'a': 43, 'moi': 44, 'au': 45, "l'ai": 46, 'emporté': 47, 'perdu': 48, 'allez': 49, 'plus': 50, 'fait': 51, 'comme': 52, 'ici': 53, 'feu': 54, 'maintenant': 55, 'compris': 56, 'sais': 57, 'gentil': 58, 'dégage': 59, 'malade': 60, 'fûmes': 61, 'été': 62, 'elle': 63, 'assieds-toi': 64, 'salut': 65, 'cours': 66, 'vas-y': 67, 'question': 68, 'juste': 69, 'entrez': 70, 'laisse': 71, 'chercher': 72, 'pars': 73, 'maison': 74, 'tiens': 75, 'tenez': 76, 'fais': 77, 'réveille-toi': 78, 'suis-je': 79, 'trouve': 80, 'trouvez': 81, 'boulot': 82, 'les': 83, "m'en": 84, 'paresseux': 85, 'certain': 86, 'puis-je': 87, 'aller': 88, 'asseyez-vous': 89, 'pouvons-nous': 90, 'attrape': 91, 'attrapez': 92, 'courez': 93, 'attends': 94, 'attendez': 95, 'poursuis': 96, 'continuez': 97, 'santé': 98, 'merci': 99, ',': 100, 'pigé': 101, 'capté': 102, 'dans': 103, 'tes': 104, 'bras': 105, 'tombé': 106, 'parti': 107, 'partie': 108, 'payé': 109, 'hors': 110, 'aucune': 111, 'essaye': 112, 'demande': 113, 'fantastique': 114, 'calmes': 115, 'détendu': 116, 'équitable': 117, 'gentille': 118, 'entre': 119, 'laissez': 120, 'sortez': 121, 'sors': 122, 'te': 123, 'faire': 124, 'foutre': 125, 'rentrez': 126, 'rentre': 127, 'doucement': 128, 'peu': 129, 'court': 130, 'aide-moi': 131, 'du': 132, 'debout': 133, 'signe': 134, 'gras': 135, 'gros': 136, 'triste': 137, 'mouillé': 138, 'joignez-vous': 139, 'ferme-la': 140, 'tard': 141, 'réveillez-vous': 142, 'battus': 143, 'battues': 144, 'défaits': 145, 'défaites': 146, 'tu': 147, 'recule\u2009': 148, 'reculez': 149, 'homme': 150, 'appelle': 151, 'rouler': 152, 'lâche-toi': 153, 'aide': 154, 'fais-moi': 155, 'refuse': 156, 'vu': 157, 'occupé': 158, 'froid': 159, 'ai': 160, 'libre': 161, 'retard': 162, 'fainéant': 163, 'paresseuse': 164, 'fainéante': 165, 'porte': 166, 'riche': 167, 'sûr': 168, 'faible': 169, 'bizarre': 170, 'allons-y': 171, 'partir': 172, 'y': 173, 'fort': 174, 'ils': 175, 'gagnèrent': 176, 'elles': 177, 'ont': 178, 'venu': 179, 'mort': 180, 'confiance': 181, 'quoi': 182, "qu'est-ce": 183, "qu'on": 184, "s'est": 185, 'calmez-vous': 186, 'bientôt': 187, 'chiens': 188, 'aboient': 189, 'touche': 190, 'oublie': 191, 'oublie-le': 192, 'emploi': 193, 'lit': 194, 'bonne': 195, 'chance': 196, 'comment': 197, 'prie': 198, 'mouvement': 199, 'recul': 200}
###Markdown
Encoder
###Code
class Seq2SeqEncoder(nn.Module):
"""The RNN encoder for sequence to sequence learning."""
def __init__(self, vocab_size, embed_size, num_hiddens, num_layers, dropout=0):
super(Seq2SeqEncoder, self).__init__()
# Embedding layer
self.embedding = nn.Embedding(vocab_size, embed_size)
self.rnn = nn.GRU(embed_size, num_hiddens, num_layers, dropout=dropout, batch_first=True)
def forward(self, X):
# Input X: (`batch_size`, `num_steps`, `input_size`)
X = self.embedding(X)
# After embedding X: (`batch_size`, `num_steps`, `embed_size`)
# When batch_first is True:
# in RNN models, the first axis corresponds to batch_size
# the second axis corresponds to num_steps
# the first axis corresponds to embed_dim
# When state is not mentioned, it defaults to zeros
output, state = self.rnn(X)
# `output` shape: (`batch_size`, `num_steps`, `num_hiddens`)
# `state` shape: (`num_layers`, `batch_size`, `num_hiddens`)
return output, state
encoder = Seq2SeqEncoder(vocab_size=10, embed_size=8, num_hiddens=16, num_layers=1)
encoder.eval()
X = torch.zeros((5, 4), dtype=torch.long)
enc_output, enc_state = encoder(X)
print(enc_output.shape)
print(enc_state.shape)
###Output
torch.Size([5, 4, 16])
torch.Size([1, 5, 16])
###Markdown
Decoder
###Code
class Seq2SeqDecoder(nn.Module):
"""The RNN decoder for sequence to sequence learning."""
def __init__(self, vocab_size, embed_size, num_hiddens, num_layers, dropout=0):
super(Seq2SeqDecoder, self).__init__()
self.embedding = nn.Embedding(vocab_size, embed_size)
self.rnn = nn.GRU(embed_size + num_hiddens, num_hiddens, num_layers, dropout=dropout,batch_first=True)
self.dense = nn.Linear(num_hiddens, vocab_size)
def forward(self, X, state):
# Inputs:
# X : (`batch_size`, `num_steps`, `input_size`)
# initial hidden state : (`num_layers`, `batch_size`, `num_hiddens`) ,
# This comes from hidden state output from encoder.
X = self.embedding(X)
# After embedding X: (`batch_size`, `num_steps`, `embed_size`)
# Context is last layer hidden state from last timestep of encoder
# last layer hidden state from last time step of encoder
last_layer_state = state[-1] # shape (`batch_size`,`num_hiddens`)
# context is last timestep hidden state of encoder.
# Broadcast `context` so it has the same `num_steps` as `X`
context = last_layer_state.repeat(X.shape[1], 1, 1).permute(1,0,2)
# context has now shape (`batch_size`,`num_steps`,`num_hiddens`)
# concat(X,context) = X_and_context of shape (`batch_size`,`num_steps`,`emb_dim + num_hiddens`)
X_and_context = torch.cat((X, context), 2)
output, state = self.rnn(X_and_context, state)
# output : (`batch_size`,`num_steps`,`num_hiddens`)
# state : (`num_layers`,`batch_size`,`num_hiddens`), this is final timestep hidden state of decoder
output = self.dense(output)
# final output of decoder :
# `output` shape: (`batch_size`, `num_steps`, `vocab_size`)
# `state` shape: (`num_layers`, `batch_size`, `num_hiddens`)
return output, state
decoder = Seq2SeqDecoder(vocab_size=10, embed_size=8, num_hiddens=16, num_layers=1)
decoder.eval()
output, state = decoder(X, enc_state)
output.shape, state.shape
# You can feed input consisting of a single timestep as well.
dec_X = torch.from_numpy(np.zeros((5,1))).long()
print(dec_X.shape) # batch_size=5, num_steps=1
output, state = decoder(dec_X, enc_state)
output.shape, state.shape
###Output
torch.Size([5, 1])
###Markdown
Putting encoder and decoder together
###Code
class EncoderDecoder(nn.Module):
"""The base class for the encoder-decoder architecture."""
def __init__(self, encoder, decoder):
super(EncoderDecoder, self).__init__()
self.encoder = encoder
self.decoder = decoder
def forward(self, enc_X, dec_X):
enc_output, enc_state = self.encoder(enc_X)
return self.decoder(dec_X, enc_state)
encoder_decoder = EncoderDecoder(encoder,decoder)
encoder_decoder.eval()
###Output
_____no_output_____
###Markdown
Allow parts of sequence to be masked as we have variable length sequences
###Code
def sequence_mask(X, valid_len, value=0):
"""Mask irrelevant entries in sequences."""
maxlen = X.size(1)
mask = torch.arange((maxlen),device=X.device)[None, :] < valid_len[:, None]
X[~mask] = value
return X
X = torch.tensor([[1, 2, 3], [4, 5, 6]])
valid_lens = torch.tensor([1, 2])
print('Input has 2 sequences :\n',X)
print('Assume that first sequence has 1 valid elements, second sequence has 2 valid elements', valid_lens)
print('After masking:\n',sequence_mask(X, valid_lens))
###Output
Input has 2 sequences :
tensor([[1, 2, 3],
[4, 5, 6]])
Assume that first sequence has 1 valid elements, second sequence has 2 valid elements tensor([1, 2])
After masking:
tensor([[1, 0, 0],
[4, 5, 0]])
###Markdown
Build cross entropy loss using masked sequences
###Code
class MaskedSoftmaxCELoss(nn.CrossEntropyLoss):
"""The softmax cross-entropy loss with masks."""
# `pred` shape: (`batch_size`, `num_steps`, `vocab_size`)
# `label` shape: (`batch_size`, `num_steps`)
# `valid_len` shape: (`batch_size`,)
def forward(self, pred, label, valid_len):
weights = torch.ones_like(label)
weights = sequence_mask(weights, valid_len).float()
self.reduction='none'
unweighted_loss = super(MaskedSoftmaxCELoss, self).forward(pred.permute(0, 2, 1), label)
weighted_loss = (unweighted_loss * weights).mean(dim=1)
return weighted_loss
loss = MaskedSoftmaxCELoss()
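# Illustrative sanity check (assumed inputs): with a dummy batch the masked loss
# should ignore positions beyond each sequence's valid length, so the sequence
# with valid length 0 gets a loss of 0.
dummy_pred = torch.ones(3, 4, 10)                    # (batch_size, num_steps, vocab_size)
dummy_label = torch.ones((3, 4), dtype=torch.long)
print(loss(dummy_pred, dummy_label, torch.tensor([4, 2, 0])))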
###Output
_____no_output_____
###Markdown
Prepare for training
###Code
embed_size = 32
num_hiddens = 32
num_layers = 2
dropout = 0.1
batch_size = 64
num_steps = 10
lr = 0.005
num_epochs = 300
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print('device=',device)
data_path = '../data/fra-eng/fra.txt'
train_iter, src_vocab, tgt_vocab = load_data_nmt(data_path, batch_size, num_steps)
encoder = Seq2SeqEncoder( len(src_vocab), embed_size, num_hiddens, num_layers, dropout )
decoder = Seq2SeqDecoder( len(tgt_vocab), embed_size, num_hiddens, num_layers, dropout )
net = EncoderDecoder(encoder, decoder)
net.eval()
###Output
device= cuda:0
###Markdown
Initialize weights in GRU layers of encoder and decoder
###Code
def xavier_init_weights(m):
if type(m) == nn.Linear:
nn.init.xavier_uniform_(m.weight)
if type(m) == nn.GRU:
#initialize biases and weights
for name, param in m.named_parameters():
if 'bias' in name:
nn.init.constant(param, 0.0)
elif 'weight' in name:
nn.init.xavier_uniform_(m._parameters[name])
net.apply(xavier_init_weights)
net.to(device)
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
loss = MaskedSoftmaxCELoss()
net.train()
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:8: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
###Markdown
Training
###Code
class Accumulator:
"""For accumulating sums over `n` variables."""
def __init__(self, n):
self.data = [0.0] * n
def add(self, *args):
self.data = [a + float(b) for a, b in zip(self.data, args)]
def reset(self):
self.data = [0.0] * len(self.data)
def __getitem__(self, idx):
return self.data[idx]
def grad_clipping(net, theta):
"""Clip the gradient."""
if isinstance(net, nn.Module):
params = [p for p in net.parameters() if p.requires_grad]
else:
params = net.params
norm = torch.sqrt(sum(torch.sum((p.grad ** 2)) for p in params))
if norm > theta:
for param in params:
param.grad[:] *= theta / norm
cum_losses = []
for epoch in range(num_epochs):
start_time = time.time()
metric = Accumulator(2) # Sum of training loss, no. of tokens
for batch in train_iter:
X, X_valid_len, Y, Y_valid_len = [x.to(device) for x in batch]
bos = torch.tensor([tgt_vocab['<bos>']] * Y.shape[0], device=device).reshape(-1, 1)
dec_input = torch.cat([bos, Y[:, :-1]], 1) # Teacher forcing
Y_hat, _ = net(X, dec_input)
l = loss(Y_hat, Y, Y_valid_len)
l.sum().backward() # Make the loss scalar for `backward`
grad_clipping(net, 1)
num_tokens = Y_valid_len.sum()
optimizer.step()
with torch.no_grad():
metric.add(l.sum(), num_tokens)
if (epoch + 1) % 10 == 0:
print(epoch+1,metric[0] / metric[1])
cum_losses.append(metric[0] / metric[1])
elapsed_time = time.time() - start_time
print(f'loss {metric[0] / metric[1]:.3f}, {metric[1] / elapsed_time:.1f} 'f'tokens/sec on {str(device)}')
###Output
10 0.2090059375725408
20 0.1508961954908735
30 0.1133520789898929
40 0.08690908657946206
50 0.07063316426282791
60 0.060021722659216487
70 0.05148320665325719
80 0.044760870867852666
90 0.039765634578027766
100 0.0361578657795059
110 0.0323923955232034
120 0.030629563359752598
130 0.02831386459572772
140 0.027698246806871697
150 0.02605452747712779
160 0.025122531799292386
170 0.02461659434460036
180 0.023439871518924356
190 0.022659589061114038
200 0.022360768012294897
210 0.021564902962778836
220 0.021263796998871824
230 0.021205942258268114
240 0.02060909357562585
250 0.020626334025605556
260 0.020763019181198605
270 0.020957735334499197
280 0.020493817395086787
290 0.020140862192801725
300 0.02038786731241069
loss 0.020, 29684.8 tokens/sec on cuda:0
###Markdown
Plot the loss over epochs
###Code
import matplotlib.pyplot as plt
X = range(len(cum_losses))
plt.plot(X, cum_losses)
plt.show()
###Output
_____no_output_____
###Markdown
Prediction
###Code
def translate_a_sentence(src_sentence, src_vocab, bos_token, num_steps):
# First process the src_sentence, tokenize and truncate/pad it.
#src_sentence : a sentence to translate
#Tokenize the sentence
src_sentence_words = src_sentence.lower().split(' ')
print('src sentence words = ',src_sentence_words)
src_tokens = src_vocab[src_sentence.lower().split(' ')]
src_tokens = src_tokens + [src_vocab['<eos>']]
print('src_tokens = ',src_tokens)
enc_valid_len = torch.tensor([len(src_tokens)], device=device)
#Truncate the sentence to num_steps if the sentence is longer. If shorter, pad the sentence.
print('Truncating/padding to length',num_steps)
padding_token = src_vocab['<pad>']
if len(src_tokens) > num_steps:
src_tokens = src_tokens[:num_steps] # Truncate
#Pad
src_tokens = src_tokens + [padding_token] * (num_steps - len(src_tokens))
print('After truncating/padding',src_tokens,'\n')
# Next convert the src_tokens to a tensor to be fed to the decoder one word at a timestep
# Covert src_tokens to a tesnor, add the batch axis
enc_X = torch.unsqueeze(torch.tensor(src_tokens, dtype=torch.long, device=device), dim=0)
# Now shape of enc_X : (`batch_size` , `num_steps`) = (1,10)
# Pass it through the encoder
enc_output, enc_state = net.encoder(enc_X)
# feed the decoder one word token at a time
# prepare the first token for decoder : beginning of sentence
dec_X = torch.unsqueeze(torch.tensor([tgt_vocab['<bos>']], dtype=torch.long, device=device), dim=0)
#Initialize input state for the decoder to be the final timestep state of the encoder
dec_input_state = enc_state
output_token_seq = []
for _ in range(num_steps):
curr_output, curr_dec_state = decoder(dec_X, dec_input_state)
dec_input_state = curr_dec_state
# curr_output is of shape (`batch_size`, `num_steps`, `len(tgt_vocab)`) = (1,10,201)
# Use the token with the highest prediction likelihood as the input of the decoder for the next time step
dec_X = curr_output.argmax(dim=2) #next timestep input for decoder
#remove batch_size dimension as we are working with single sentences
pred = dec_X.squeeze(dim=0).type(torch.int32).item()
#eos predicted, stop
if pred == tgt_vocab['<eos>']:
break
output_token_seq.append(pred)
return output_token_seq
###Output
_____no_output_____
###Markdown
Let's look at some translations
###Code
english_batch = ['go .', "i lost .", 'he\'s calm .', 'i\'m home .']
french_batch = ['va !', 'j\'ai perdu .', 'il est calme .', 'je suis chez moi .']
bos_token = tgt_vocab['<bos>']
for eng_sent, fr_sent in zip(english_batch,french_batch):
fr_sent_predicted_tokens = translate_a_sentence(eng_sent, src_vocab, bos_token, num_steps)
fr_sent_predicted = ' '.join(tgt_vocab.to_tokens(fr_sent_predicted_tokens))
print(f'Actual translation: english:{eng_sent} => french:{fr_sent}')
print(f'Predicted translation: english:{eng_sent} => french:{fr_sent_predicted}')
print('-------------------------------------------------------------------')
###Output
src sentence words = ['go', '.']
src_tokens = [9, 4, 3]
Truncating/padding to length 10
After truncating/padding [9, 4, 3, 1, 1, 1, 1, 1, 1, 1]
Actual translation: english:go . => french:va !
Predicted translation: english:go . => french:va au gagné ?
-------------------------------------------------------------------
src sentence words = ['i', 'lost', '.']
src_tokens = [6, 20, 4, 3]
Truncating/padding to length 10
After truncating/padding [6, 20, 4, 3, 1, 1, 1, 1, 1, 1]
Actual translation: english:i lost . => french:j'ai perdu .
Predicted translation: english:i lost . => french:j'ai perdu .
-------------------------------------------------------------------
src sentence words = ["he's", 'calm', '.']
src_tokens = [58, 43, 4, 3]
Truncating/padding to length 10
After truncating/padding [58, 43, 4, 3, 1, 1, 1, 1, 1, 1]
Actual translation: english:he's calm . => french:il est calme .
Predicted translation: english:he's calm . => french:il est mouillé !
-------------------------------------------------------------------
src sentence words = ["i'm", 'home', '.']
src_tokens = [7, 56, 4, 3]
Truncating/padding to length 10
After truncating/padding [7, 56, 4, 3, 1, 1, 1, 1, 1, 1]
Actual translation: english:i'm home . => french:je suis chez moi .
Predicted translation: english:i'm home . => french:je suis chez moi <unk> .
-------------------------------------------------------------------
|
data_for_writeup.ipynb | ###Markdown
This notebook has all of the numbers and plots I used in the writeup. I tried to keep it reasonably organized but I was also a bit lazy, so some code might be sloppy and hard to follow... sorry. load libraries and data
###Code
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from googleapiclient import discovery
import utils
import os
import json
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
import pickle
guest_df = utils.load_guest_list_file(apply_filters=True)
# vader sentiment analyzer
sia = SentimentIntensityAnalyzer()
# perspective API
api_key = open(utils.perspective_api_key_file).read().split()[0]
api = discovery.build('commentanalyzer', 'v1alpha1', developerKey=api_key)
###Output
_____no_output_____
###Markdown
I'm going to use R for some visualizations because ggplot kicks matplotlib's butt.
###Code
%load_ext rpy2.ipython
# install R packages with conda: conda install -c r r-ggplot2
%%capture
%%R
library(ggplot2)
library(dplyr)
library(readr)
library(tidyr)
df <- read_csv("./guest_list.csv") %>%
# these same filters are applied to the pandas dataframe
filter(!is.na(video_id),
guest != "holiday special",
guest != "Stephen Colbert")
df$season <- factor(df$season)
df$female_flag <- factor(df$female_flag)
###Output
_____no_output_____
###Markdown
Chrissy Teigen examples
###Code
# load comments file
video_id = guest_df.loc[guest_df['guest'] == 'Chrissy Teigen', 'video_id'].values[0]
comment_file = os.path.join(utils.comment_dir, f'comments-{video_id}.json')
comments = [c['commentText'] for c in json.load(open(comment_file, 'r')) if 'commentText' in c]
def get_scores(text):
sent = sia.polarity_scores(text)['compound']
analyze_request = {
'comment': {'text': text},
'requestedAttributes': {'TOXICITY': {}, 'SEVERE_TOXICITY': {}},
'languages': ['en']
}
response = api.comments().analyze(body=analyze_request).execute()
tox_score = response['attributeScores']['TOXICITY']['summaryScore']['value']
sev_tox_score = response['attributeScores']['SEVERE_TOXICITY']['summaryScore']['value']
out = f'\nsentiment score: {sent}'
out += f'\ntoxicity: {tox_score}'
out += f'\nsevere toxicity: {sev_tox_score}'
return out
c1 = comments[2288]
print(c1)
print(get_scores(c1))
c2 = comments[232]
print(c2)
print(get_scores(c2))
c3 = comments[6042]
print(c3)
print(get_scores(c3))
###Output
Hell nooo eat the DAM wing. You knew you were going to Hot Ones. What. waste of wing. SHAME Delete a the episode. Retake of season 7 episode 1 SMH.
sentiment score: -0.9356
toxicity: 0.76644164
severe toxicity: 0.5029349
###Markdown
sentiment analysis: I'm using a metric that I'm calling positive ratio, defined as $$ positive\_ratio = \frac{\#\text{ of positive comments}}{\#\text{ of negative comments}} $$ where positive comments have sentiment scores greater than 0 and negative comments have scores less than 0. As an example, a video with twice as many positive comments as negative would have a positive ratio of 2. The benefit to using positive ratio is that it removes the effect of neutral comments. Many comments in the dataset have sentiment scores of exactly 0, and all of those 0 values can dilute a metric like average sentiment score. The downside to using positive ratio is that it ignores the magnitude of sentiment scores. For example, a comment with sentiment score 0.1 has the same effect on positive ratio as a comment with sentiment score 0.9. Due to the large number of neutral comments, I decided that positive ratio was the most appropriate metric for this dataset. In any case, the results are pretty similar with any metric.
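A minimal sketch of this calculation in Python (assuming `comments` is a list of comment strings and `sia` is the VADER analyzer created above):
###Code
# Illustrative only: compute the positive ratio for one video's comments.
scores = [sia.polarity_scores(c)['compound'] for c in comments]
n_pos = sum(s > 0 for s in scores)
n_neg = sum(s < 0 for s in scores)
positive_ratio = n_pos / n_neg if n_neg else float('nan')
print(n_pos, n_neg, positive_ratio)
###Output
_____no_output_____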
###Code
%%R -w 700 -h 350
p <- df %>%
mutate(female_flag = if_else(female_flag == 0, ' male', ' female')) %>%
filter(season != 1) %>%
ggplot() +
geom_density(aes(x=positive_ratio, color=female_flag, fill=female_flag), alpha=0.2) +
labs(title="Positive Ratio for Female vs. Male Guests", x="positive ratio") +
expand_limits(x=0) +
theme_light(base_size=14) +
theme(plot.title=element_text(hjust = 0.5), legend.title=element_blank())
ggsave(filename='./visualizations/positive_ratio_by_male_female.png', plot=p)
p
df = guest_df[guest_df['season'] != 1]
f_vals = df[df['female_flag'] == 1]['positive_ratio']
m_vals = df[df['female_flag'] == 0]['positive_ratio']
print(f'female guest positive ratio: {round(f_vals.mean(), 3)}')
print(f'male guest positive ratio : {round(m_vals.mean(), 3)}')
ttest_ind(m_vals, f_vals)
###Output
_____no_output_____
###Markdown
toxicity scores
###Code
%%R -w 700 -h 350
p <- df %>%
mutate(female_flag = if_else(female_flag == 0, ' male', ' female')) %>%
filter(season != 1) %>%
rename(toxicity = mean_toxicity,
`severe toxicity` = mean_severe_toxicity) %>%
gather("metric", "value", c("toxicity", "severe toxicity")) %>%
mutate(metric = factor(metric, levels=c("toxicity", "severe toxicity"))) %>%
ggplot() +
geom_density(aes(x=value, color=female_flag, fill=female_flag), alpha=0.2) +
labs(title="Perspective Toxicity Scores for Female vs. Male Guests", x="score") +
expand_limits(x=0) +
expand_limits(x=0.5) +
facet_grid(. ~ metric) +
theme_light(base_size=14) +
theme(plot.title=element_text(hjust = 0.5), legend.title=element_blank())
ggsave(filename='./visualizations/toxicity_scores_by_male_female.png', plot=p)
p
df = guest_df[guest_df['season'] != 1]
f_tox = df[df['female_flag'] == 1]['mean_toxicity']
m_tox = df[df['female_flag'] == 0]['mean_toxicity']
f_sev_tox = df[df['female_flag'] == 1]['mean_severe_toxicity']
m_sev_tox = df[df['female_flag'] == 0]['mean_severe_toxicity']
print(f'female guest average toxicity: {round(f_tox.mean(), 3)}')
print(f'male guest average toxicity : {round(m_tox.mean(), 3)}')
print(f'female guest average severe toxicity: {round(f_sev_tox.mean(), 3)}')
print(f'male guest average severe toxicity : {round(m_sev_tox.mean(), 3)}')
ttest_ind(m_tox, f_tox)
ttest_ind(m_sev_tox, f_sev_tox)
###Output
_____no_output_____
###Markdown
why exclude season 1 from sentiment analysis? Far lower average sentiment score than later seasons. Show was still finding its stride and had some structural and aesthetic differences from later seasons. Some major outliers (especially the infamous DJ Khaled episode).
###Code
%%R -w 700 -h 350
p <- df %>%
ggplot() +
geom_density(aes(x=positive_ratio, color=season, fill=season), alpha=0.1) +
labs(title="Positive Ratio by Season", x="positive ratio") +
expand_limits(x=0) +
theme_light(base_size=14) +
theme(plot.title=element_text(hjust = 0.5))
ggsave(filename='./visualizations/positive_ratio_by_season.png', plot=p)
p
###Output
_____no_output_____
###Markdown
word usage
###Code
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_colwidth', 50)
word_df = pickle.load(open('./data/gender_analysis_bigram.pickle', 'rb'))
f_top_100 = word_df[['token', 'z_score']].rename(columns={'token': 'female'}).head(100)
m_top_100 = word_df[['token', 'z_score']].rename(columns={'token': 'male'}).sort_index(ascending=False).reset_index(drop=True).head(100)
f_top_100.join(m_top_100, lsuffix='_f', rsuffix='_m')
###Output
_____no_output_____ |
notebooks/introduction_jupyter-notebook.ipynb | ###Markdown
Jupyter Notebook: This notebook was adapted from https://github.com/oesteban/biss2016 and is originally based on https://github.com/jvns/pandas-cookbook. [Jupyter Notebook](http://jupyter.org/) started as a web application, based on [IPython](https://ipython.org/), that can run Python code directly in the web browser. Now, Jupyter Notebook can handle over 40 programming languages and is *the* interactive, open source web application to run any scientific code. You might also want to try a new Jupyter environment, [JupyterLab](https://github.com/jupyterlab/jupyterlab). How to run a cell: First, we need to explain how to run cells. Try to run the cell below!
###Code
import pandas as pd
print("Hi! This is a cell. Click on it and press the ▶ button above to run it")
###Output
_____no_output_____
###Markdown
You can also run a cell with `Ctrl+Enter` or `Shift+Enter`. Experiment a bit with that. Tab Completion One of the most useful things about Jupyter Notebook is its tab completion. Try this: click just after `read_csv(` in the cell below and press `Shift+Tab` 4 times, slowly. Note that if you're using JupyterLab you don't have an additional help box option.
###Code
pd.read_csv(
###Output
_____no_output_____
###Markdown
After the first time, you should see this: (screenshot). After the second time: (screenshot). After the fourth time, a big help box should pop up at the bottom of the screen, with the full documentation for the `read_csv` function: (screenshot). I find this amazingly useful. I think of this as "the more confused I am, the more times I should press `Shift+Tab`". Okay, let's try tab completion for function names!
###Code
pd.r
###Output
_____no_output_____
###Markdown
You should see this: (screenshot). Get Help: There's an additional way to reach the help box shown above besides pressing `Shift+Tab` four times: you can also use `obj?` or `obj??` to get help or more help for an object.
###Code
pd.read_csv?
###Output
_____no_output_____
###Markdown
Writing code: Writing code in the notebook is pretty normal.
###Code
def print_10_nums():
for i in range(10):
print(i)
print_10_nums()
###Output
_____no_output_____
###Markdown
If you messed something up and want to revert to an older version of the code in a cell, use `Ctrl+Z` to undo, or `Ctrl+Y` to redo. For a full list of all keyboard shortcuts, click on the small keyboard icon in the notebook header or click on `Help > Keyboard Shortcuts`.

Saving a Notebook: Jupyter Notebooks autosave, so you don't have to worry about losing code too much. At the top of the page you can usually see the current save status:

- Last Checkpoint: 2 minutes ago (unsaved changes)
- Last Checkpoint: a few seconds ago (autosaved)

If you want to save a notebook on purpose, either click on `File > Save and Checkpoint` or press `Ctrl+S`.

Magic functions: IPython has all kinds of magic functions. Magic functions are prefixed by % or %%, and typically take their arguments without parentheses, quotes or even commas for convenience. Line magics take a single % and cell magics are prefixed with two %%. Some useful magic functions are:

Magic Name | Effect
---------- | -------------------------------------------------------------
%env | Get, set, or list environment variables
%pdb | Control the automatic calling of the pdb interactive debugger
%pylab | Load numpy and matplotlib to work interactively
%%debug | Activates debugging mode in cell
%%html | Render the cell as a block of HTML
%%latex | Render the cell as a block of latex
%%sh | %%sh script magic
%%time | Time execution of a Python statement or expression

You can run `%magic` to get a list of magic functions or `%quickref` for a reference sheet.

Example 1: Let's see how long a specific command takes with `%time` or `%%time`:
###Code
%time result = sum([x for x in range(10**6)])
###Output
_____no_output_____
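###Markdown
For comparison, the cell magic `%%time` from the table above times an entire cell rather than a single statement. The cell below is only a minimal sketch, and the loop is just a placeholder workload:
###Code
%%time
# Time the whole cell, not just one statement
total = 0
for i in range(10**6):
    total += i
print(total)
###Output
_____no_output_____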
###Markdown
Example 2: Let's use `%%latex` to render a block of latex
###Code
%%latex
$$F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} \mathrm{d} x$$
###Output
_____no_output_____
###Markdown
Jupyter NotebookThis notebook was adapted from https://github.com/oesteban/biss2016 and is originally based on https://github.com/jvns/pandas-cookbook.[Jupyter Notebook](http://jupyter.org/) started as a web application, based on [IPython](https://ipython.org/) that can run Python code directly in the webbrowser. Now, Jupyter Notebook can handle over 40 programming languages and is *the* interactive, open source web application to run any scientific code.You might also want to try a new Jupyter environment [JupyterLab](https://github.com/jupyterlab/jupyterlab). How to run a cellFirst, we need to explain how to run cells. Try to run the cell below!
###Code
import pandas as pd
print("Hi! This is a cell. Click on it and press the ▶ button above to run it")
###Output
_____no_output_____
###Markdown
You can also run a cell with `Ctrl+Enter` or `Shift+Enter`. Experiment a bit with that. Tab Completion One of the most useful things about Jupyter Notebook is its tab completion. Try this: click just after `read_csv(` in the cell below and press `Shift+Tab` 4 times, slowly. Note that if you're using JupyterLab you don't have an additional help box option.
###Code
# NBVAL_SKIP
# Use TAB completion for function info
pd.read_csv(
###Output
_____no_output_____
###Markdown
After the first time, you should see this:After the second time:After the fourth time, a big help box should pop up at the bottom of the screen, with the full documentation for the `read_csv` function:I find this amazingly useful. I think of this as "the more confused I am, the more times I should press `Shift+Tab`".Okay, let's try tab completion for function names!
###Code
# NBVAL_SKIP
# Use TAB completion to see possible function names
pd.r
###Output
_____no_output_____
###Markdown
You should see this: Get HelpThere's another way to reach the help box shown above after the fourth `Shift+Tab` press: you can also use `obj?` or `obj??` to get help or more help for an object.
###Code
pd.read_csv?
###Output
_____no_output_____
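###Markdown
As noted above, a doubled question mark asks IPython for even more detail; for functions written in Python it typically also shows the source code. A quick sketch using the same function:
###Code
pd.read_csv??
###Output
_____no_output_____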
###Markdown
Writing codeWriting code in the notebook is pretty normal.
###Code
def print_10_nums():
for i in range(10):
print(i)
print_10_nums()
###Output
_____no_output_____
###Markdown
If you messed something up and want to revert to an older version of the code in a cell, use `Ctrl+Z` to undo or `Ctrl+Y` to redo.For a full list of all keyboard shortcuts, click on the small keyboard icon in the notebook header or click on `Help > Keyboard Shortcuts`. Saving a NotebookJupyter Notebooks autosave, so you don't have to worry about losing code too much. At the top of the page you can usually see the current save status:- Last Checkpoint: 2 minutes ago (unsaved changes)- Last Checkpoint: a few seconds ago (autosaved)If you want to save a notebook on purpose, either click on `File > Save and Checkpoint` or press `Ctrl+S`. Magic functions IPython has all kinds of magic functions. Magic functions are prefixed by % or %%, and typically take their arguments without parentheses, quotes or even commas for convenience. Line magics take a single % and cell magics are prefixed with two %%.Some useful magic functions are:Magic Name | Effect---------- | -------------------------------------------------------------%env | Get, set, or list environment variables%pdb | Control the automatic calling of the pdb interactive debugger%pylab | Load numpy and matplotlib to work interactively%%debug | Activates debugging mode in cell%%html | Render the cell as a block of HTML%%latex | Render the cell as a block of latex%%sh | %%sh script magic%%time | Time execution of a Python statement or expressionYou can run `%magic` to get a list of magic functions or `%quickref` for a reference sheet. Example 1Let's see how long a specific command takes with `%time` or `%%time`:
###Code
%time result = sum([x for x in range(10**6)])
###Output
_____no_output_____
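###Markdown
Before the next example, here is one more quick sketch: the `%%sh` magic from the table above runs the cell contents as a shell script. This assumes a Unix-like shell is available on your machine:
###Code
%%sh
# Everything in this cell is executed by the shell, not by Python
echo "Hello from a shell cell"
###Output
_____no_output_____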
###Markdown
Example 2Let's use `%%latex` to render a block of latex
###Code
%%latex
$$F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} \mathrm{d} x$$
###Output
_____no_output_____
###Markdown
Jupyter NotebookThis notebook was adapted from https://github.com/oesteban/biss2016 and is originally based on https://github.com/jvns/pandas-cookbook.[Jupyter Notebook](http://jupyter.org/) started as a web application, based on [IPython](https://ipython.org/) that can run Python code directly in the web browser. Now, Jupyter Notebook can handle over 40 programming languages and is *the* interactive, open source web application to run any scientific code.You might also want to try a new Jupyter environment [JupyterLab](https://github.com/jupyterlab/jupyterlab). How to run a cellFirst, we need to explain how to run cells. Try to run the cell below!
###Code
import pandas as pd
print("Hi! This is a cell. Click on it and press the ▶ button above to run it")
###Output
Hi! This is a cell. Click on it and press the ▶ button above to run it
###Markdown
You can also run a cell with `Ctrl+Enter` or `Shift+Enter`. Experiment a bit with that. Tab Completion One of the most useful things about Jupyter Notebook is its tab completion. Try this: click just after `read_csv(` in the cell below and press `Shift+Tab` 4 times, slowly. Note that if you're using JupyterLab you don't have an additional help box option.
###Code
# NBVAL_SKIP
# Use TAB completion for function info
pd.read_csv(
###Output
_____no_output_____
###Markdown
After the first time, you should see this:After the second time:After the fourth time, a big help box should pop up at the bottom of the screen, with the full documentation for the `read_csv` function:I find this amazingly useful. I think of this as "the more confused I am, the more times I should press `Shift+Tab`".Okay, let's try tab completion for function names!
###Code
# NBVAL_SKIP
# Use TAB completion to see possible function names
pd.r
###Output
_____no_output_____
###Markdown
You should see this: Get HelpThere's another way to reach the help box shown above after the fourth `Shift+Tab` press: you can also use `obj?` or `obj??` to get help or more help for an object.
###Code
pd.read_csv?
###Output
_____no_output_____
###Markdown
Writing codeWriting code in the notebook is pretty normal.
###Code
def print_10_nums():
for i in range(10):
print(i)
print_10_nums()
###Output
_____no_output_____
###Markdown
If you messed something up and want to revert to an older version of the code in a cell, use `Ctrl+Z` to undo or `Ctrl+Y` to redo.For a full list of all keyboard shortcuts, click on the small keyboard icon in the notebook header or click on `Help > Keyboard Shortcuts`. Saving a NotebookJupyter Notebooks autosave, so you don't have to worry about losing code too much. At the top of the page you can usually see the current save status:- Last Checkpoint: 2 minutes ago (unsaved changes)- Last Checkpoint: a few seconds ago (autosaved)If you want to save a notebook on purpose, either click on `File > Save and Checkpoint` or press `Ctrl+S`. Magic functions IPython has all kinds of magic functions. Magic functions are prefixed by % or %%, and typically take their arguments without parentheses, quotes or even commas for convenience. Line magics take a single % and cell magics are prefixed with two %%.Some useful magic functions are:Magic Name | Effect---------- | -------------------------------------------------------------%env | Get, set, or list environment variables%pdb | Control the automatic calling of the pdb interactive debugger%pylab | Load numpy and matplotlib to work interactively%%debug | Activates debugging mode in cell%%html | Render the cell as a block of HTML%%latex | Render the cell as a block of latex%%sh | %%sh script magic%%time | Time execution of a Python statement or expressionYou can run `%magic` to get a list of magic functions or `%quickref` for a reference sheet. Example 1Let's see how long a specific command takes with `%time` or `%%time`:
###Code
%time result = sum([x for x in range(10**6)])
###Output
_____no_output_____
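###Markdown
Likewise, the `%%html` magic from the table above renders the cell body as HTML. A small sketch:
###Code
%%html
<b>This sentence is rendered as bold HTML.</b>
###Output
_____no_output_____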
###Markdown
Example 2Let's use `%%latex` to render a block of latex
###Code
%%latex
$$F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} \mathrm{d} x$$
###Output
_____no_output_____
###Markdown
Jupyter NotebookThis notebook was adapted from https://github.com/oesteban/biss2016 and is originally based on https://github.com/jvns/pandas-cookbook.[Jupyter Notebook](http://jupyter.org/) started as a web application, based on [IPython](https://ipython.org/) that can run Python code directly in the webbrowser. Now, Jupyter Notebook can handle over 40 programming languages and is *the* interactive, open source web application to run any scientific code.You might also want to try a new Jupyter environment [JupyterLab](https://github.com/jupyterlab/jupyterlab). How to run a cellFirst, we need to explain how to run cells. Try to run the cell below!
###Code
import pandas as pd
print("Hi! This is a cell. Click on it and press the ▶ button above to run it")
###Output
Hi! This is a cell. Click on it and press the ▶ button above to run it
###Markdown
You can also run a cell with `Ctrl+Enter` or `Shift+Enter`. Experiment a bit with that. Tab Completion One of the most useful things about Jupyter Notebook is its tab completion. Try this: click just after `read_csv(` in the cell below and press `Shift+Tab` 4 times, slowly. Note that if you're using JupyterLab you don't have an additional help box option.
###Code
# NBVAL_SKIP
# Use TAB completion for function info
pd.read_csv(
###Output
_____no_output_____
###Markdown
After the first time, you should see this:After the second time:After the fourth time, a big help box should pop up at the bottom of the screen, with the full documentation for the `read_csv` function:I find this amazingly useful. I think of this as "the more confused I am, the more times I should press `Shift+Tab`".Okay, let's try tab completion for function names!
###Code
# NBVAL_SKIP
# Use TAB completion to see possible function names
pd.r
###Output
_____no_output_____
###Markdown
You should see this: Get HelpThere's another way to reach the help box shown above after the fourth `Shift+Tab` press: you can also use `obj?` or `obj??` to get help or more help for an object.
###Code
pd.read_csv?
###Output
_____no_output_____
###Markdown
Writing codeWriting code in the notebook is pretty normal.
###Code
def print_10_nums():
for i in range(10):
print(i)
print_10_nums()
###Output
_____no_output_____
###Markdown
If you messed something up and want to revert to an older version of the code in a cell, use `Ctrl+Z` to undo or `Ctrl+Y` to redo.For a full list of all keyboard shortcuts, click on the small keyboard icon in the notebook header or click on `Help > Keyboard Shortcuts`. Saving a NotebookJupyter Notebooks autosave, so you don't have to worry about losing code too much. At the top of the page you can usually see the current save status:- Last Checkpoint: 2 minutes ago (unsaved changes)- Last Checkpoint: a few seconds ago (autosaved)If you want to save a notebook on purpose, either click on `File > Save and Checkpoint` or press `Ctrl+S`. Magic functions IPython has all kinds of magic functions. Magic functions are prefixed by % or %%, and typically take their arguments without parentheses, quotes or even commas for convenience. Line magics take a single % and cell magics are prefixed with two %%.Some useful magic functions are:Magic Name | Effect---------- | -------------------------------------------------------------%env | Get, set, or list environment variables%pdb | Control the automatic calling of the pdb interactive debugger%pylab | Load numpy and matplotlib to work interactively%%debug | Activates debugging mode in cell%%html | Render the cell as a block of HTML%%latex | Render the cell as a block of latex%%sh | %%sh script magic%%time | Time execution of a Python statement or expressionYou can run `%magic` to get a list of magic functions or `%quickref` for a reference sheet. Example 1Let's see how long a specific command takes with `%time` or `%%time`:
###Code
%time result = sum([x for x in range(10**6)])
###Output
_____no_output_____
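###Markdown
Another line magic from the table above is `%env`, which lists (or sets) environment variables. Called with no arguments, as in this minimal sketch, it returns all variables visible to the kernel:
###Code
# List all environment variables visible to this kernel
%env
###Output
_____no_output_____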
###Markdown
Example 2Let's use `%%latex` to render a block of latex
###Code
%%latex
$$F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} \mathrm{d} x$$
###Output
_____no_output_____ |
tutorials/tutorial04_visualize.ipynb | ###Markdown
Tutorial 04: Visualizing Experiment Results This tutorial describes the process of visualizing the results of Flow experiments, and of replaying them. **Note:** This tutorial is only relevant if you use SUMO as a simulator. We currently do not support policy replay nor data collection when using Aimsun. The only exception is for reward plotting, which is independent on whether you have used SUMO or Aimsun during training. 1. Visualization components The visualization of simulation results breaks down into three main components:- **reward plotting**: Visualization of the reward function is an essential step in evaluating the effectiveness and training progress of RL agents.- **policy replay**: Flow includes tools for visualizing trained policies using SUMO's GUI. This enables more granular analysis of policies beyond their accrued reward, which in turn allows users to tweak actions, observations and rewards in order to produce some desired behavior. The visualizers also generate plots of observations and a plot of the reward function over the course of the rollout.- **data collection and analysis**: Any Flow experiment can output its simulation data to a CSV file, `emission.csv`, containing the contents of SUMO's built-in `emission.xml` files. This file contains various data such as the speed, position, time, fuel consumption and many other metrics for every vehicle in the network and at each time step of the simulation. Once you have generated the `emission.csv` file, you can open it and read the data it contains using Python's [csv library](https://docs.python.org/3/library/csv.html) (or using Excel). Visualization is different depending on which reinforcement learning library you are using, if any. Accordingly, the rest of this tutorial explains how to plot rewards, replay policies and collect data when using either no RL library, RLlib or rllab. **Contents:**[How to visualize using SUMO without training](2.1---Using-SUMO-without-training)[How to visualize using SUMO with RLlib](2.2---Using-SUMO-with-RLlib)[How to visualize using SUMO with rllab](2.3---Using-SUMO-with-rllab)[**_Example: visualize data on a ring trained using RLlib_**](2.4---Example:-Visualize-data-on-a-ring-trained-using-RLlib) 2. How to visualize 2.1 - Using SUMO without training_In this case, since there is no training, there is no reward to plot and no policy to replay._ Data collection and analysisSUMO-only experiments can generate emission CSV files seamlessly:First, you have to tell SUMO to generate the `emission.xml` files. You can do that by specifying `emission_path` in the simulation parameters (class `SumoParams`), which is the path where the emission files will be generated. For instance:
###Code
from flow.core.params import SumoParams
sumo_params = SumoParams(sim_step=0.1, render=True, emission_path='data')
###Output
_____no_output_____
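###Markdown
Once an emission file has been generated and converted to CSV (as described just below), it can be read with Python's built-in csv library, as mentioned earlier. The following is only a sketch: the file path under the `data` directory and the exact column names depend on your simulation and SUMO version.
###Code
import csv
# Hypothetical path: adjust to the emission file produced by your run
emission_file = 'data/emission.csv'
with open(emission_file) as f:
    reader = csv.DictReader(f)
    rows = list(reader)
print('Number of rows:', len(rows))
print('Available columns:', reader.fieldnames)
###Output
_____no_output_____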
###Markdown
Then, you have to tell Flow to convert these XML emission files into CSV files. To do that, pass in `convert_to_csv=True` to the `run` method of your experiment object. For instance:```pythonexp.run(1, 1500, convert_to_csv=True)``` When running experiments, Flow will now automatically create CSV files next to the SUMO-generated XML files. 2.2 - Using SUMO with RLlib Reward plottingRLlib supports reward visualization over the period of the training using the `tensorboard` command. It takes one command-line parameter, `--logdir`, which is an RLlib result directory. By default, it would be located within an experiment directory inside your `~/ray_results` directory. An example call would look like:`tensorboard --logdir ~/ray_results/experiment_dir/result/directory`You can also run `tensorboard --logdir ~/ray_results` if you want to select more than just one experiment.If you do not wish to use `tensorboard`, an other way is to use our `flow/visualize/plot_ray_results.py` tool. It takes as arguments:- the path to the `progress.csv` file located inside your experiment results directory (`~/ray_results/...`),- the name(s) of the column(s) you wish to plot (reward or other things).An example call would look like:`flow/visualize/plot_ray_results.py ~/ray_results/experiment_dir/result/progress.csv training/return-average training/return-min`If you do not know what the names of the columns are, run the command without specifying any column:`flow/visualize/plot_ray_results.py ~/ray_results/experiment_dir/result/progress.csv`and the list of all available columns will be displayed to you. Policy replayThe tool to replay a policy trained using RLlib is located at `flow/visualize/visualizer_rllib.py`. It takes as argument, first the path to the experiment results (by default located within `~/ray_results`), and secondly the number of the checkpoint you wish to visualize (which correspond to the folder `checkpoint_` inside the experiment results directory).An example call would look like this:`python flow/visualize/visualizer_rllib.py ~/ray_results/experiment_dir/result/directory 1`There are other optional parameters which you can learn about by running `visualizer_rllib.py --help`. Data collection and analysisSimulation data can be generated the same way as it is done [without training](2.1---Using-SUMO-without-training).If you need to generate simulation data after the training, you can run a policy replay as mentioned above, and add the `--gen-emission` parameter.An example call would look like:`python flow/visualize/visualizer_rllib.py ~/ray_results/experiment_dir/result/directory 1 --gen_emission` 2.4 - Example: Visualize data on a ring trained using RLlib
###Code
!pwd # make sure you are in the flow/tutorials folder
###Output
_____no_output_____
###Markdown
The folder `flow/tutorials/data/trained_ring` contains the data generated in `ray_results` after training an agent on a ring scenario for 200 iterations using RLlib (the experiment can be found in `flow/examples/rllib/stabilizing_the_ring.py`).Let's first have a look at what's available in the `progress.csv` file:
###Code
!python ../flow/visualize/plot_ray_results.py data/trained_ring/progress.csv
###Output
_____no_output_____
###Markdown
This gives us a list of everything that we can plot. Let's plot the reward and its boundaries:
###Code
%matplotlib notebook
# if this doesn't display anything, try with "%matplotlib inline" instead
%run ../flow/visualize/plot_ray_results.py data/trained_ring/progress.csv \
episode_reward_mean episode_reward_min episode_reward_max
###Output
_____no_output_____
###Markdown
We can see that the policy had already converged by iteration 50.Now let's see what this policy looks like. Run the following script, then click on the green arrow to run the simulation (you may have to click several times).
###Code
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 200 --horizon 2000
###Output
_____no_output_____
###Markdown
The RL agent is properly stabilizing the ring! Indeed, without an RL agent, the vehicles start forming stop-and-go waves which significantly slow down the traffic, as you can see in this simulation:
###Code
!python ../examples/sumo/sugiyama.py
###Output
_____no_output_____
###Markdown
In the trained ring folder, there is a checkpoint generated every 20 iterations. Try running the replay command from two cells above again, but replace 200 with 20. On the reward plot, you can see that the reward is already quite high at iteration 20, but hasn't converged yet, so the agent will perform a little less well than at iteration 200. That's it for this example! Feel free to play around with the other scripts in `flow/visualize`. Run them with the `--help` parameter to see how to use them. Also, if you need the emission file for the trained ring, you can obtain it by running the following command:
###Code
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 200 --horizon 2000 --gen_emission
###Output
_____no_output_____
###Markdown
Tutorial 04: Visualizing Experiment Results This tutorial describes the process of visualizing the results of Flow experiments, and of replaying them. **Note:** This tutorial is only relevant if you use SUMO as a simulator. We currently do not support policy replay nor data collection when using Aimsun. The only exception is for reward plotting, which is independent on whether you have used SUMO or Aimsun during training. 1. Visualization components The visualization of simulation results breaks down into three main components:- **reward plotting**: Visualization of the reward function is an essential step in evaluating the effectiveness and training progress of RL agents.- **policy replay**: Flow includes tools for visualizing trained policies using SUMO's GUI. This enables more granular analysis of policies beyond their accrued reward, which in turn allows users to tweak actions, observations and rewards in order to produce some desired behavior. The visualizers also generate plots of observations and a plot of the reward function over the course of the rollout.- **data collection and analysis**: Any Flow experiment can output its simulation data to a CSV file, `emission.csv`, containing the contents of SUMO's built-in `emission.xml` files. This file contains various data such as the speed, position, time, fuel consumption and many other metrics for every vehicle in the network and at each time step of the simulation. Once you have generated the `emission.csv` file, you can open it and read the data it contains using Python's [csv library](https://docs.python.org/3/library/csv.html) (or using Excel). Visualization is different depending on which reinforcement learning library you are using, if any. Accordingly, the rest of this tutorial explains how to plot rewards, replay policies and collect data when using either no RL library, RLlib, or stable-baselines. **Contents:**[How to visualize using SUMO without training](2.1---Using-SUMO-without-training)[How to visualize using SUMO with RLlib](2.2---Using-SUMO-with-RLlib)[**_Example: visualize data on a ring trained using RLlib_**](2.3---Example:-Visualize-data-on-a-ring-trained-using-RLlib) 2. How to visualize 2.1 - Using SUMO without training_In this case, since there is no training, there is no reward to plot and no policy to replay._ Data collection and analysisSUMO-only experiments can generate emission CSV files seamlessly:First, you have to tell SUMO to generate the `emission.xml` files. You can do that by specifying `emission_path` in the simulation parameters (class `SumoParams`), which is the path where the emission files will be generated. For instance:
###Code
from flow.core.params import SumoParams
sim_params = SumoParams(sim_step=0.1, render=True, emission_path='data')
###Output
_____no_output_____
###Markdown
Then, you have to tell Flow to convert these XML emission files into CSV files. To do that, pass in `convert_to_csv=True` to the `run` method of your experiment object. For instance:```pythonexp.run(1, convert_to_csv=True)``` When running experiments, Flow will now automatically create CSV files next to the SUMO-generated XML files. 2.2 - Using SUMO with RLlib Reward plottingRLlib supports reward visualization over the period of the training using the `tensorboard` command. It takes one command-line parameter, `--logdir`, which is an RLlib result directory. By default, it would be located within an experiment directory inside your `~/ray_results` directory. An example call would look like:`tensorboard --logdir ~/ray_results/experiment_dir/result/directory`You can also run `tensorboard --logdir ~/ray_results` if you want to select more than just one experiment.If you do not wish to use `tensorboard`, an other way is to use our `flow/visualize/plot_ray_results.py` tool. It takes as arguments:- the path to the `progress.csv` file located inside your experiment results directory (`~/ray_results/...`),- the name(s) of the column(s) you wish to plot (reward or other things).An example call would look like:`flow/visualize/plot_ray_results.py ~/ray_results/experiment_dir/result/progress.csv training/return-average training/return-min`If you do not know what the names of the columns are, run the command without specifying any column:`flow/visualize/plot_ray_results.py ~/ray_results/experiment_dir/result/progress.csv`and the list of all available columns will be displayed to you. Policy replayThe tool to replay a policy trained using RLlib is located at `flow/visualize/visualizer_rllib.py`. It takes as argument, first the path to the experiment results (by default located within `~/ray_results`), and secondly the number of the checkpoint you wish to visualize (which correspond to the folder `checkpoint_` inside the experiment results directory).An example call would look like this:`python flow/visualize/visualizer_rllib.py ~/ray_results/experiment_dir/result/directory 1`There are other optional parameters which you can learn about by running `visualizer_rllib.py --help`. Data collection and analysisSimulation data can be generated the same way as it is done [without training](2.1---Using-SUMO-without-training).If you need to generate simulation data after the training, you can run a policy replay as mentioned above, and add the `--gen-emission` parameter.An example call would look like:`python flow/visualize/visualizer_rllib.py ~/ray_results/experiment_dir/result/directory 1 --gen_emission` 2.3 - Example: Visualize data on a ring trained using RLlib
###Code
!pwd # make sure you are in the flow/tutorials folder
###Output
_____no_output_____
###Markdown
The folder `flow/tutorials/data/trained_ring` contains the data generated in `ray_results` after training an agent on a ring scenario for 200 iterations using RLlib (the experiment can be found in `flow/examples/rllib/stabilizing_the_ring.py`).Let's first have a look at what's available in the `progress.csv` file:
###Code
!python ../flow/visualize/plot_ray_results.py data/trained_ring/progress.csv
###Output
_____no_output_____
###Markdown
This gives us a list of everything that we can plot. Let's plot the reward and its boundaries:
###Code
%matplotlib notebook
# if this doesn't display anything, try with "%matplotlib inline" instead
%run ../flow/visualize/plot_ray_results.py data/trained_ring/progress.csv \
episode_reward_mean episode_reward_min episode_reward_max
###Output
_____no_output_____
###Markdown
We can see that the policy had already converged by iteration 50.Now let's see what this policy looks like. Run the following script, then click on the green arrow to run the simulation (you may have to click several times).
###Code
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 200 --horizon 2000
###Output
_____no_output_____
###Markdown
The RL agent is properly stabilizing the ring! Indeed, without an RL agent, the vehicles start forming stop-and-go waves which significantly slow down the traffic, as you can see in this simulation:
###Code
!python ../examples/simulate.py ring
###Output
_____no_output_____
###Markdown
In the trained ring folder, there is a checkpoint generated every 20 iterations. Try running the replay command from two cells above again, but replace 200 with 20. On the reward plot, you can see that the reward is already quite high at iteration 20, but hasn't converged yet, so the agent will perform a little less well than at iteration 200. That's it for this example! Feel free to play around with the other scripts in `flow/visualize`. Run them with the `--help` parameter to see how to use them. Also, if you need the emission file for the trained ring, you can obtain it by running the following command:
###Code
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 200 --horizon 2000 --gen_emission
###Output
_____no_output_____
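###Markdown
After that command completes, the emission file can be loaded for a quick sanity check. A hedged sketch with pandas: the path below is an assumption, and so are the `id` and `speed` column names, which may differ between Flow/SUMO versions.
###Code
import pandas as pd
# Hypothetical path: point this at the emission CSV the visualizer just wrote
emission = pd.read_csv('data/trained_ring/emission.csv')
# Average speed of each vehicle over the rollout (assumes 'id' and 'speed' columns)
mean_speed = emission.groupby('id')['speed'].mean()
print(mean_speed.describe())
###Output
_____no_output_____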
###Markdown
Tutorial 04: Visualizing Experiment Results This tutorial describes the process of visualizing and replaying the results of Flow experiments run using RL. The process of visualizing results breaks down into two main components:- reward plotting- policy replayNote that this tutorial only talks about visualization using sumo, and not other simulators like Aimsun. Visualization with RLlib Plotting RewardSimilarly to how rllab handles reward plotting, RLlib supports reward visualization over the period of training using `tensorboard`. `tensorboard` takes one command-line input, `--logdir`, which is an rllib result directory (usually located within an experiment directory inside your `ray_results` directory). An example function call is below.
###Code
! tensorboard --logdir /ray_results/experiment_dir/result/directory
###Output
_____no_output_____
###Markdown
If you do not wish to use `tensorboard`, you can also use the `flow/visualize/plot_ray_results.py` file. It takes as arguments the path to the `progress.csv` file located inside your experiment results directory, and the name(s) of the column(s) to plot. If you do not know what the name of the columns are, simply do not put any and a list of all available columns will be displayed to you. Example usage:
###Code
! plot_ray_results.py /ray_results/experiment_dir/progress.csv training/return-average training/return-min
###Output
_____no_output_____
###Markdown
Replaying a Trained Policy The tool to replay a policy trained using RLlib is located in `flow/visualize/visualizer_rllib.py`. It takes as arguments, first, the path to the experiment results and, second, the number of the checkpoint you wish to visualize. There are other optional parameters which you can learn about by running `visualizer_rllib.py --help`.
###Code
! python ../../flow/visualize/visualizer_rllib.py /ray_results/experiment_dir/result/directory 1
###Output
_____no_output_____
###Markdown
Data Collection and AnalysisAny Flow experiment can output its results to a CSV file containing the contents of SUMO's built-in `emission.xml` files, specifying speed, position, time, fuel consumption, and many other metrics for all vehicles in a network over time. This section describes how to generate those `emission.csv` files when replaying and analyzing a trained policy. RLlib
###Code
# --gen_emission replays the policy and generates the emission CSV file
! python ../../flow/visualize/visualizer_rllib.py results/sample_checkpoint 1 --gen_emission
###Output
_____no_output_____
###Markdown
Exercise 04: Visualizing Experiment Results This tutorial describes the process of visualizing and replaying the results of Flow experiments run using RL. The process of visualizing results breaks down into two main components:- reward plotting- policy replayFurthermore, visualization is different depending on whether your experiments were run using rllab or RLlib. Accordingly, this tutorial is divided in two parts (one for rllab and one for RLlib). rllab Plotting Reward An essential step in evaluating the effectiveness and training progress of RL agents is visualization of reward. rllab includes a tool to plot the _average cumulative reward per rollout_ against _iteration number_ to show training progress. This "reward plot" can be generated for just one experiment or many. The tool to be called is rllab's `frontend.py`, which is inside the directory `rllab/viskit/` (assuming a user is already inside the directory `rllab-multiagent`). `frontend.py` requires only one command-line input: the path to the result directory that a user wants to visualize. The directory should contain a `progress.csv` and `params.json` file—pickle files containing per-iteration results are not necessary. An example call to `frontend.py` is below. Click on the link to http://localhost:5000 to view reward over time.
###Code
! python ../../../rllab/viskit/frontend.py /path/to/result/directory
###Output
_____no_output_____
###Markdown
Replaying a Trained Policy Flow includes a tool for visualizing a trained policy in its environment using SUMO's GUI. This enables more granular analysis of policies beyond their accrued reward, which in turn allows users to tweak actions, observations, and rewards in order to produce desired behavior. The visualizer also generates plots of observations and a plot of reward over the course of the rollout. The tool to be called is `visualizer_rllab.py` within `flow/visualize` (assuming a user is already inside the parent directory `flow`). `visualizer_rllab.py` requires one command-line input and has three additional optional arguments. The required input is the path to the pickle file to be visualized (this is usually within an rllab result directory). The optional inputs are: - `--num_rollouts`, the number of rollouts to be visualized. The default value is 100. This argument takes integer input.- `--plotname`, the name of the plot generated by the visualizer. The default value is `traffic_plot`. This argument takes string input.- `--emission_to_csv`, whether to convert SUMO's emission file into a CSV file. Emission files will be discussed later in this tutorial. By default, emission CSV files are not stored. `--emission_to_csv` is a flag and takes no input.An example call to `visualizer_rllab.py` is below.
###Code
! python ../../flow/visualize/visualizer_rllab.py /path/to/result.pkl --num_rollouts 1 --plotname plot_test --emission_to_csv
###Output
_____no_output_____
###Markdown
RLlib Plotting RewardSimilarly to how rllab handles reward plotting, RLlib supports reward visualization over the period of training using `tensorboard`. `tensorboard` takes one command-line input, `--logdir`, which is an rllib result directory (usually located within an experiment directory inside your `ray_results` directory). An example function call is below.
###Code
! tensorboard --logdir /ray_results/experiment_dir/result/directory
###Output
_____no_output_____
###Markdown
Replaying a Trained Policy
###Code
! python ../../flow/visualize/visualizer_rllib.py /ray_results/experiment_dir/result/directory 1
###Output
_____no_output_____
###Markdown
Data Collection and AnalysisAny Flow experiment can output its results to a CSV file containing the contents of SUMO's built-in `emission.xml` files, specifying speed, position, time, fuel consumption, and many other metrics for all vehicles in a network over time. This section describes how to generate those `emission.csv` files when replaying and analyzing a trained policy. rllab
###Code
# Calling the visualizer with the flag --emission_to_csv replays the policy and creates an emission file
! python ../../flow/visualize/visualizer_rllab.py path/to/result.pkl --emission_to_csv
###Output
_____no_output_____
###Markdown
The generated `emission.csv` is placed in the directory `test_time_rollout/` inside the directory from which you've just run the visualizer. That emission file can be opened in Excel, loaded in Python and plotted, and more. RLlib
###Code
# --emission_to_csv does the same as above
! python ../../flow/visualize/visualizer_rllib.py results/sample_checkpoint 1 --emission_to_csv
###Output
_____no_output_____
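###Markdown
As mentioned above, the emission file can be loaded in Python and plotted. Below is a minimal sketch with pandas and matplotlib; the file path inside `test_time_rollout/` and the `time`, `id` and `speed` column names are assumptions that may differ in your setup.
###Code
import pandas as pd
import matplotlib.pyplot as plt
# Hypothetical path inside the test_time_rollout/ directory mentioned above
emission = pd.read_csv('test_time_rollout/emission.csv')
# Plot the speed profile of every vehicle over the rollout
for veh_id, group in emission.groupby('id'):
    plt.plot(group['time'], group['speed'])
plt.xlabel('time (s)')
plt.ylabel('speed (m/s)')
plt.title('Vehicle speeds over the rollout')
plt.show()
###Output
_____no_output_____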
###Markdown
Tutorial 04: Visualizing Experiment Results This tutorial describes the process of visualizing and replaying the results of Flow experiments run using RL. The process of visualizing results breaks down into two main components:- reward plotting- policy replayNote that this tutorial only talks about visualization using sumo, and not other simulators like Aimsun. Visualization with RLlib Plotting RewardSimilarly to how rllab handles reward plotting, RLlib supports reward visualization over the period of training using `tensorboard`. `tensorboard` takes one command-line input, `--logdir`, which is an rllib result directory (usually located within an experiment directory inside your `ray_results` directory). An example function call is below.
###Code
! tensorboard --logdir /ray_results/dirthatcontainsallcheckpoints/
###Output
_____no_output_____
###Markdown
If you do not wish to use `tensorboard`, you can also use the `flow/visualize/plot_ray_results.py` file. It takes as arguments the path to the `progress.csv` file located inside your experiment results directory, and the name(s) of the column(s) to plot. If you do not know what the name of the columns are, simply do not put any and a list of all available columns will be displayed to you. Example usage:
###Code
! plot_ray_results.py /ray_results/experiment_dir/progress.csv training/return-average training/return-min
###Output
_____no_output_____
###Markdown
Replaying a Trained Policy The tool to replay a policy trained using RLlib is located in `flow/visualize/visualizer_rllib.py`. It takes as arguments, first, the path to the experiment results and, second, the number of the checkpoint you wish to visualize. There are other optional parameters which you can learn about by running `visualizer_rllib.py --help`.
###Code
! python ../../flow/visualize/visualizer_rllib.py /ray_results/dirthatcontainsallcheckpoints/ 1
###Output
_____no_output_____
###Markdown
Data Collection and AnalysisAny Flow experiment can output its results to a CSV file containing the contents of SUMO's built-in `emission.xml` files, specifying speed, position, time, fuel consumption, and many other metrics for all vehicles in a network over time. This section describes how to generate those `emission.csv` files when replaying and analyzing a trained policy. RLlib
###Code
# --gen_emission replays the policy and generates the emission CSV file
! python ../../flow/visualize/visualizer_rllib.py results/sample_checkpoint 1 --gen_emission
###Output
_____no_output_____
###Markdown
Tutorial 04: Visualizing Experiment Results This tutorial describes the process of visualizing the results of Flow experiments, and of replaying them. **Note:** This tutorial is only relevant if you use SUMO as a simulator. We currently do not support policy replay nor data collection when using Aimsun. The only exception is for reward plotting, which is independent on whether you have used SUMO or Aimsun during training. 1. Visualization components The visualization of simulation results breaks down into three main components:- **reward plotting**: Visualization of the reward function is an essential step in evaluating the effectiveness and training progress of RL agents.- **policy replay**: Flow includes tools for visualizing trained policies using SUMO's GUI. This enables more granular analysis of policies beyond their accrued reward, which in turn allows users to tweak actions, observations and rewards in order to produce some desired behavior. The visualizers also generate plots of observations and a plot of the reward function over the course of the rollout.- **data collection and analysis**: Any Flow experiment can output its simulation data to a CSV file, `emission.csv`, containing the contents of SUMO's built-in `emission.xml` files. This file contains various data such as the speed, position, time, fuel consumption and many other metrics for every vehicle in the network and at each time step of the simulation. Once you have generated the `emission.csv` file, you can open it and read the data it contains using Python's [csv library](https://docs.python.org/3/library/csv.html) (or using Excel). Visualization is different depending on which reinforcement learning library you are using, if any. Accordingly, the rest of this tutorial explains how to plot rewards, replay policies and collect data when using either no RL library, RLlib, or stable-baselines. **Contents:**[How to visualize using SUMO without training](2.1---Using-SUMO-without-training)[How to visualize using SUMO with RLlib](2.2---Using-SUMO-with-RLlib)[**_Example: visualize data on a ring trained using RLlib_**](2.3---Example:-Visualize-data-on-a-ring-trained-using-RLlib) 2. How to visualize 2.1 - Using SUMO without training_In this case, since there is no training, there is no reward to plot and no policy to replay._ Data collection and analysisSUMO-only experiments can generate emission CSV files seamlessly:First, you have to tell SUMO to generate the `emission.xml` files. You can do that by specifying `emission_path` in the simulation parameters (class `SumoParams`), which is the path where the emission files will be generated. For instance:
###Code
from flow.core.params import SumoParams
sim_params = SumoParams(sim_step=0.1, render=True, emission_path='data')
###Output
_____no_output_____
###Markdown
Then, you have to tell Flow to convert these XML emission files into CSV files. To do that, pass in `convert_to_csv=True` to the `run` method of your experiment object. For instance:```pythonexp.run(1, convert_to_csv=True)``` When running experiments, Flow will now automatically create CSV files next to the SUMO-generated XML files. 2.2 - Using SUMO with RLlib Reward plottingRLlib supports reward visualization over the period of the training using the `tensorboard` command. It takes one command-line parameter, `--logdir`, which is an RLlib result directory. By default, it would be located within an experiment directory inside your `~/ray_results` directory. An example call would look like:`tensorboard --logdir ~/ray_results/experiment_dir/result/directory`You can also run `tensorboard --logdir ~/ray_results` if you want to select more than just one experiment.If you do not wish to use `tensorboard`, an other way is to use our `flow/visualize/plot_ray_results.py` tool. It takes as arguments:- the path to the `progress.csv` file located inside your experiment results directory (`~/ray_results/...`),- the name(s) of the column(s) you wish to plot (reward or other things).An example call would look like:`flow/visualize/plot_ray_results.py ~/ray_results/experiment_dir/result/progress.csv training/return-average training/return-min`If you do not know what the names of the columns are, run the command without specifying any column:`flow/visualize/plot_ray_results.py ~/ray_results/experiment_dir/result/progress.csv`and the list of all available columns will be displayed to you. Policy replayThe tool to replay a policy trained using RLlib is located at `flow/visualize/visualizer_rllib.py`. It takes as argument, first the path to the experiment results (by default located within `~/ray_results`), and secondly the number of the checkpoint you wish to visualize (which correspond to the folder `checkpoint_` inside the experiment results directory).An example call would look like this:`python flow/visualize/visualizer_rllib.py ~/ray_results/experiment_dir/result/directory 1`There are other optional parameters which you can learn about by running `visualizer_rllib.py --help`. Data collection and analysisSimulation data can be generated the same way as it is done [without training](2.1---Using-SUMO-without-training).If you need to generate simulation data after the training, you can run a policy replay as mentioned above, and add the `--gen-emission` parameter.An example call would look like:`python flow/visualize/visualizer_rllib.py ~/ray_results/experiment_dir/result/directory 1 --gen_emission` 2.3 - Example: Visualize data on a ring trained using RLlib
###Code
!pwd # make sure you are in the flow/tutorials folder
###Output
_____no_output_____
###Markdown
The folder `flow/tutorials/data/trained_ring` contains the data generated in `ray_results` after training an agent on a ring scenario for 200 iterations using RLlib (the experiment can be found in `flow/examples/rllib/stabilizing_the_ring.py`).Let's first have a look at what's available in the `progress.csv` file:
###Code
!python ../flow/visualize/plot_ray_results.py data/trained_ring/progress.csv
###Output
_____no_output_____
###Markdown
This gives us a list of everything that we can plot. Let's plot the reward and its boundaries:
###Code
%matplotlib notebook
# if this doesn't display anything, try with "%matplotlib inline" instead
%run ../flow/visualize/plot_ray_results.py data/trained_ring/progress.csv \
episode_reward_mean episode_reward_min episode_reward_max
###Output
_____no_output_____
###Markdown
We can see that the policy had already converged by iteration 50.Now let's see what this policy looks like. Run the following script, then click on the green arrow to run the simulation (you may have to click several times).
###Code
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 200 --horizon 2000
###Output
_____no_output_____
###Markdown
The RL agent is properly stabilizing the ring! Indeed, without an RL agent, the vehicles start forming stop-and-go waves which significantly slow down the traffic, as you can see in this simulation:
###Code
!python ../examples/sumo/sugiyama.py
###Output
_____no_output_____
###Markdown
In the trained ring folder, there is a checkpoint generated every 20 iterations. Try running the replay command from two cells above again, but replace 200 with 20. On the reward plot, you can see that the reward is already quite high at iteration 20, but hasn't converged yet, so the agent will perform a little less well than at iteration 200. That's it for this example! Feel free to play around with the other scripts in `flow/visualize`. Run them with the `--help` parameter to see how to use them. Also, if you need the emission file for the trained ring, you can obtain it by running the following command:
###Code
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 200 --horizon 2000 --gen_emission
###Output
_____no_output_____
###Markdown
Tutorial 04: Visualizing Experiment Results This tutorial describes the process of visualizing and replaying the results of Flow experiments run using RL. The process of visualizing results breaks down into two main components:- reward plotting- policy replayFurthermore, visualization is different depending on whether your experiments were run using rllab or RLlib. Accordingly, this tutorial is divided in two parts (one for rllab and one for RLlib). rllab Plotting Reward An essential step in evaluating the effectiveness and training progress of RL agents is visualization of reward. rllab includes a tool to plot the _average cumulative reward per rollout_ against _iteration number_ to show training progress. This "reward plot" can be generated for just one experiment or many. The tool to be called is rllab's `frontend.py`, which is inside the directory `rllab/viskit/` (assuming a user is already inside the directory `rllab-multiagent`). `frontend.py` requires only one command-line input: the path to the result directory that a user wants to visualize. The directory should contain a `progress.csv` and `params.json` file—pickle files containing per-iteration results are not necessary. An example call to `frontend.py` is below. Click on the link to http://localhost:5000 to view reward over time.
###Code
! python ../../../rllab/viskit/frontend.py /path/to/result/directory
###Output
_____no_output_____
###Markdown
Replaying a Trained Policy Flow includes a tool for visualizing a trained policy in its environment using SUMO's GUI. This enables more granular analysis of policies beyond their accrued reward, which in turn allows users to tweak actions, observations, and rewards in order to produce desired behavior. The visualizer also generates plots of observations and a plot of reward over the course of the rollout. The tool to be called is `visualizer_rllab.py` within `flow/visualize` (assuming a user is already inside the parent directory `flow`). `visualizer_rllab.py` requires one command-line input and has three additional optional arguments. The required input is the path to the pickle file to be visualized (this is usually within an rllab result directory). The optional inputs are: - `--num_rollouts`, the number of rollouts to be visualized. The default value is 100. This argument takes integer input.- `--plotname`, the name of the plot generated by the visualizer. The default value is `traffic_plot`. This argument takes string input.- `--emission_to_csv`, whether to convert SUMO's emission file into a CSV file. Emission files will be discussed later in this tutorial. By default, emission CSV files are not stored. `--emission_to_csv` is a flag and takes no input.An example call to `visualizer_rllab.py` is below.
###Code
! python ../../flow/visualize/visualizer_rllab.py /path/to/result.pkl --num_rollouts 1 --plotname plot_test --emission_to_csv
###Output
_____no_output_____
###Markdown
RLlib Plotting RewardSimilarly to how rllab handles reward plotting, RLlib supports reward visualization over the period of training using `tensorboard`. `tensorboard` takes one command-line input, `--logdir`, which is an rllib result directory (usually located within an experiment directory inside your `ray_results` directory). An example function call is below.
###Code
! tensorboard --logdir /ray_results/experiment_dir/result/directory
###Output
_____no_output_____
###Markdown
Replaying a Trained Policy
###Code
! python ../../flow/visualize/visualizer_rllib.py /ray_results/experiment_dir/result/directory 1
###Output
_____no_output_____
###Markdown
Data Collection and AnalysisAny Flow experiment can output its results to a CSV file containing the contents of SUMO's built-in `emission.xml` files, specifying speed, position, time, fuel consumption, and many other metrics for all vehicles in a network over time. This section describes how to generate those `emission.csv` files when replaying and analyzing a trained policy. rllab
###Code
# Calling the visualizer with the flag --emission_to_csv replays the policy and creates an emission file
! python ../../flow/visualize/visualizer_rllab.py path/to/result.pkl --emission_to_csv
###Output
_____no_output_____
###Markdown
The generated `emission.csv` is placed in the directory `test_time_rollout/` inside the directory from which you've just run the visualizer. That emission file can be opened in Excel, loaded in Python and plotted, and more. RLlib
###Code
# --emission_to_csv does the same as above
! python ../../flow/visualize/visualizer_rllib.py results/sample_checkpoint 1 --emission_to_csv
###Output
_____no_output_____
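###Markdown
Because the emission file lands inside the `test_time_rollout/` directory mentioned above, a small helper like the sketch below can locate any emission CSVs there and peek at the header row (the glob pattern and location are assumptions):
###Code
import csv
import glob
# Look for any emission CSV produced by the visualizer (pattern is an assumption)
candidates = glob.glob('test_time_rollout/*emission*.csv')
print('Found emission files:', candidates)
if candidates:
    with open(candidates[0]) as f:
        header = next(csv.reader(f))
    print('Columns:', header)
###Output
_____no_output_____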
###Markdown
Tutorial 04: Visualizing Experiment Results This tutorial describes the process of visualizing the results of Flow experiments, and of replaying them. **Note:** This tutorial is only relevant if you use SUMO as a simulator. We currently do not support policy replay nor data collection when using Aimsun. The only exception is for reward plotting, which is independent on whether you have used SUMO or Aimsun during training. 1. Visualization components The visualization of simulation results breaks down into three main components:- **reward plotting**: Visualization of the reward function is an essential step in evaluating the effectiveness and training progress of RL agents.- **policy replay**: Flow includes tools for visualizing trained policies using SUMO's GUI. This enables more granular analysis of policies beyond their accrued reward, which in turn allows users to tweak actions, observations and rewards in order to produce some desired behavior. The visualizers also generate plots of observations and a plot of the reward function over the course of the rollout.- **data collection and analysis**: Any Flow experiment can output its simulation data to a CSV file, `emission.csv`, containing the contents of SUMO's built-in `emission.xml` files. This file contains various data such as the speed, position, time, fuel consumption and many other metrics for every vehicle in the network and at each time step of the simulation. Once you have generated the `emission.csv` file, you can open it and read the data it contains using Python's [csv library](https://docs.python.org/3/library/csv.html) (or using Excel). Visualization is different depending on which reinforcement learning library you are using, if any. Accordingly, the rest of this tutorial explains how to plot rewards, replay policies and collect data when using either no RL library, RLlib, or stable-baselines. **Contents:**[How to visualize using SUMO without training](2.1---Using-SUMO-without-training)[How to visualize using SUMO with RLlib](2.2---Using-SUMO-with-RLlib)[**_Example: visualize data on a ring trained using RLlib_**](2.3---Example:-Visualize-data-on-a-ring-trained-using-RLlib) 2. How to visualize 2.1 - Using SUMO without training_In this case, since there is no training, there is no reward to plot and no policy to replay._ Data collection and analysisSUMO-only experiments can generate emission CSV files seamlessly:First, you have to tell SUMO to generate the `emission.xml` files. You can do that by specifying `emission_path` in the simulation parameters (class `SumoParams`), which is the path where the emission files will be generated. For instance:
###Code
from flow.core.params import SumoParams
sim_params = SumoParams(sim_step=0.1, render=True, emission_path='data')
###Output
_____no_output_____
###Markdown
Then, you have to tell Flow to convert these XML emission files into CSV files. To do that, pass in `convert_to_csv=True` to the `run` method of your experiment object. For instance: ```python exp.run(1, convert_to_csv=True)``` When running experiments, Flow will now automatically create CSV files next to the SUMO-generated XML files. 2.2 - Using SUMO with RLlib Reward plotting RLlib supports reward visualization over the period of the training using the `tensorboard` command. It takes one command-line parameter, `--logdir`, which is an RLlib result directory. By default, it would be located within an experiment directory inside your `~/ray_results` directory. An example call would look like: `tensorboard --logdir ~/ray_results/experiment_dir/result/directory` You can also run `tensorboard --logdir ~/ray_results` if you want to select more than just one experiment. If you do not wish to use `tensorboard`, another way is to use our `flow/visualize/plot_ray_results.py` tool. It takes as arguments: - the path to the `progress.csv` file located inside your experiment results directory (`~/ray_results/...`), - the name(s) of the column(s) you wish to plot (reward or other things). An example call would look like: `flow/visualize/plot_ray_results.py ~/ray_results/experiment_dir/result/progress.csv training/return-average training/return-min` If you do not know what the names of the columns are, run the command without specifying any column: `flow/visualize/plot_ray_results.py ~/ray_results/experiment_dir/result/progress.csv` and the list of all available columns will be displayed to you. Policy replay The tool to replay a policy trained using RLlib is located at `flow/visualize/visualizer_rllib.py`. It takes as arguments: first, the path to the experiment results (by default located within `~/ray_results`), and second, the number of the checkpoint you wish to visualize (which corresponds to the folder `checkpoint_` inside the experiment results directory). An example call would look like this: `python flow/visualize/visualizer_rllib.py ~/ray_results/experiment_dir/result/directory 1` There are other optional parameters which you can learn about by running `visualizer_rllib.py --help`. Data collection and analysis Simulation data can be generated the same way as it is done [without training](2.1---Using-SUMO-without-training). If you need to generate simulation data after the training, you can run a policy replay as mentioned above, and add the `--gen_emission` parameter. An example call would look like: `python flow/visualize/visualizer_rllib.py ~/ray_results/experiment_dir/result/directory 1 --gen_emission` 2.3 - Example: Visualize data on a ring trained using RLlib
###Code
!pwd # make sure you are in the flow/tutorials folder
###Output
/home/bill/flow/tutorials
###Markdown
The folder `flow/tutorials/data/trained_ring` contains the data generated in `ray_results` after training an agent on a ring scenario for 200 iterations using RLlib (the experiment can be found in `flow/examples/rllib/stabilizing_the_ring.py`).Let's first have a look at what's available in the `progress.csv` file:
###Code
!python ../flow/visualize/plot_ray_results.py data/trained_ring/progress.csv
###Output
Columns are: episode_reward_max, episode_reward_min, episode_reward_mean, episode_len_mean, episodes_this_iter, timesteps_this_iter, done, timesteps_total, episodes_total, training_iteration, experiment_id, date, timestamp, time_this_iter_s, time_total_s, pid, hostname, node_ip, time_since_restore, timesteps_since_restore, iterations_since_restore, num_healthy_workers, trial_id, sampler_perf/mean_env_wait_ms, sampler_perf/mean_processing_ms, sampler_perf/mean_inference_ms, info/num_steps_trained, info/num_steps_sampled, info/sample_time_ms, info/load_time_ms, info/grad_time_ms, info/update_time_ms, perf/cpu_util_percent, perf/ram_util_percent, info/learner/default_policy/cur_kl_coeff, info/learner/default_policy/cur_lr, info/learner/default_policy/total_loss, info/learner/default_policy/policy_loss, info/learner/default_policy/vf_loss, info/learner/default_policy/vf_explained_var, info/learner/default_policy/kl, info/learner/default_policy/entropy, info/learner/default_policy/entropy_coeff
###Markdown
This gives us a list of everything that we can plot. Let's plot the reward and its boundaries:
###Code
%matplotlib notebook
# if this doesn't display anything, try with "%matplotlib inline" instead
%run ../flow/visualize/plot_ray_results.py data/trained_ring/progress.csv \
episode_reward_mean episode_reward_min episode_reward_max
###Output
_____no_output_____
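If you prefer to work with these columns directly rather than going through `plot_ray_results.py`, the same curves can be reproduced with a few lines of pandas; this is only an illustrative sketch using the column names listed above, not part of the Flow tooling.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Training progress written by RLlib during the experiment.
progress = pd.read_csv('data/trained_ring/progress.csv')

# Mean episode reward with its min/max envelope over training iterations.
plt.plot(progress['training_iteration'], progress['episode_reward_mean'], label='mean')
plt.fill_between(progress['training_iteration'],
                 progress['episode_reward_min'],
                 progress['episode_reward_max'],
                 alpha=0.3, label='min/max')
plt.xlabel('training iteration')
plt.ylabel('episode reward')
plt.legend()
plt.show()
```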
###Markdown
We can see that the policy had already converged by iteration 50. Now let's see what this policy looks like. Run the following script, then click on the green arrow to run the simulation (you may have to click several times).
###Code
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 200 --horizon 2000
###Output
2021-10-18 10:17:55,055 WARNING services.py:597 -- setpgrp failed, processes may not be cleaned up properly: [Errno 1] Operation not permitted.
2021-10-18 10:17:55,056 INFO resource_spec.py:216 -- Starting Ray with 5.22 GiB memory available for workers and up to 2.63 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2021-10-18 10:17:55,454 INFO trainer.py:371 -- Tip: set 'eager': true or the --eager flag to enable TensorFlow eager execution
2021-10-18 10:17:57,277 INFO rollout_worker.py:770 -- Built policy map: {'default_policy': <ray.rllib.policy.tf_policy_template.PPOTFPolicy object at 0x7fbad5cb37b8>}
2021-10-18 10:17:57,277 INFO rollout_worker.py:771 -- Built preprocessor map: {'default_policy': <ray.rllib.models.preprocessors.NoPreprocessor object at 0x7fbad5cb3550>}
2021-10-18 10:17:57,277 INFO rollout_worker.py:372 -- Built filter map: {'default_policy': <ray.rllib.utils.filter.NoFilter object at 0x7fbad5d029e8>}
2021-10-18 10:17:57,279 INFO multi_gpu_optimizer.py:93 -- LocalMultiGPUOptimizer devices ['/cpu:0']
2021-10-18 10:17:58,648 WARNING util.py:45 -- Install gputil for GPU system monitoring.
Traceback (most recent call last):
File "../flow/visualize/visualizer_rllib.py", line 386, in <module>
visualizer_rllib(args)
File "../flow/visualize/visualizer_rllib.py", line 155, in visualizer_rllib
agent.restore(checkpoint)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/ray/tune/trainable.py", line 341, in restore
self._restore(checkpoint_path)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 559, in _restore
self.__setstate__(extra_data)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/agents/trainer_template.py", line 161, in __setstate__
Trainer.__setstate__(self, state)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 855, in __setstate__
self.workers.local_worker().restore(state["worker"])
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 712, in restore
self.policy_map[pid].set_state(state)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/policy/policy.py", line 250, in set_state
self.set_weights(state)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/policy/tf_policy.py", line 269, in set_weights
return self._variables.set_weights(weights)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/ray/experimental/tf_utils.py", line 186, in set_weights
self.assignment_nodes[name] for name in new_weights.keys()
AttributeError: 'numpy.ndarray' object has no attribute 'keys'
###Markdown
The RL agent is properly stabilizing the ring! Indeed, without an RL agent, the vehicles start forming stop-and-go waves which significantly slow down the traffic, as you can see in this simulation:
###Code
!python ../examples/simulate.py ring
###Output
Error during start: Traceback (most recent call last):
File "/home/bill/flow/flow/core/kernel/simulation/traci.py", line 251, in start_simulation
traci_connection.setOrder(0)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/traci/connection.py", line 348, in setOrder
self._sendExact()
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/traci/connection.py", line 99, in _sendExact
raise FatalTraCIError("connection closed by SUMO")
traci.exceptions.FatalTraCIError: connection closed by SUMO
Error during teardown: [Errno 3] No such process
Error during start: Traceback (most recent call last):
File "/home/bill/flow/flow/core/kernel/simulation/traci.py", line 251, in start_simulation
traci_connection.setOrder(0)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/traci/connection.py", line 348, in setOrder
self._sendExact()
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/traci/connection.py", line 99, in _sendExact
raise FatalTraCIError("connection closed by SUMO")
traci.exceptions.FatalTraCIError: connection closed by SUMO
Error during teardown: [Errno 3] No such process
Error during start: Traceback (most recent call last):
File "/home/bill/flow/flow/core/kernel/simulation/traci.py", line 251, in start_simulation
traci_connection.setOrder(0)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/traci/connection.py", line 348, in setOrder
self._sendExact()
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/traci/connection.py", line 99, in _sendExact
raise FatalTraCIError("connection closed by SUMO")
traci.exceptions.FatalTraCIError: connection closed by SUMO
Error during teardown: [Errno 3] No such process
FATAL: exception not rethrown
Retrying in 1 seconds
Could not connect to TraCI server at localhost:57889 [Errno 111] Connection refused
Retrying in 2 seconds
Could not connect to TraCI server at localhost:57889 [Errno 111] Connection refused
Retrying in 3 seconds
Could not connect to TraCI server at localhost:57889 [Errno 111] Connection refused
Retrying in 4 seconds
Could not connect to TraCI server at localhost:57889 [Errno 111] Connection refused
Retrying in 5 seconds
^C
Traceback (most recent call last):
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/traci/__init__.py", line 68, in connect
return Connection(host, port, proc)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/traci/connection.py", line 54, in __init__
self._socket.connect((host, port))
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "../examples/simulate.py", line 93, in <module>
exp.run(flags.num_runs, convert_to_csv=flags.gen_emission)
File "/home/bill/flow/flow/core/experiment.py", line 142, in run
state = self.env.reset()
File "/home/bill/flow/flow/envs/ring/accel.py", line 177, in reset
obs = super().reset()
File "/home/bill/flow/flow/envs/base.py", line 439, in reset
self.restart_simulation(self.sim_params)
File "/home/bill/flow/flow/envs/base.py", line 264, in restart_simulation
network=self.k.network, sim_params=self.sim_params)
File "/home/bill/flow/flow/core/kernel/simulation/traci.py", line 250, in start_simulation
traci_connection = traci.connect(port, numRetries=100)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/traci/__init__.py", line 74, in connect
time.sleep(wait)
KeyboardInterrupt
###Markdown
In the trained ring folder, there is a checkpoint generated every 20 iterations. Try running the earlier visualizer command again, but replace 200 with 20. On the reward plot, you can see that the reward is already quite high at iteration 20, but it hasn't converged yet, so the agent will perform a little less well than at iteration 200. That's it for this example! Feel free to play around with the other scripts in `flow/visualize`. Run them with the `--help` parameter and they will tell you how to use them. Also, if you need the emission file for the trained ring, you can obtain it by running the following command:
###Code
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 200 --horizon 2000 --gen_emission
###Output
2021-10-18 10:16:44,145 WARNING services.py:597 -- setpgrp failed, processes may not be cleaned up properly: [Errno 1] Operation not permitted.
2021-10-18 10:16:44,146 INFO resource_spec.py:216 -- Starting Ray with 5.18 GiB memory available for workers and up to 2.59 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2021-10-18 10:16:44,481 INFO trainer.py:371 -- Tip: set 'eager': true or the --eager flag to enable TensorFlow eager execution
2021-10-18 10:16:46,306 INFO rollout_worker.py:770 -- Built policy map: {'default_policy': <ray.rllib.policy.tf_policy_template.PPOTFPolicy object at 0x7feab4437710>}
2021-10-18 10:16:46,306 INFO rollout_worker.py:771 -- Built preprocessor map: {'default_policy': <ray.rllib.models.preprocessors.NoPreprocessor object at 0x7feab44374a8>}
2021-10-18 10:16:46,306 INFO rollout_worker.py:372 -- Built filter map: {'default_policy': <ray.rllib.utils.filter.NoFilter object at 0x7feab4437320>}
2021-10-18 10:16:46,308 INFO multi_gpu_optimizer.py:93 -- LocalMultiGPUOptimizer devices ['/cpu:0']
2021-10-18 10:16:47,678 WARNING util.py:45 -- Install gputil for GPU system monitoring.
Traceback (most recent call last):
File "../flow/visualize/visualizer_rllib.py", line 386, in <module>
visualizer_rllib(args)
File "../flow/visualize/visualizer_rllib.py", line 155, in visualizer_rllib
agent.restore(checkpoint)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/ray/tune/trainable.py", line 341, in restore
self._restore(checkpoint_path)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 559, in _restore
self.__setstate__(extra_data)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/agents/trainer_template.py", line 161, in __setstate__
Trainer.__setstate__(self, state)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 855, in __setstate__
self.workers.local_worker().restore(state["worker"])
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 712, in restore
self.policy_map[pid].set_state(state)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/policy/policy.py", line 250, in set_state
self.set_weights(state)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/ray/rllib/policy/tf_policy.py", line 269, in set_weights
return self._variables.set_weights(weights)
File "/home/bill/anaconda3/envs/flow/lib/python3.7/site-packages/ray/experimental/tf_utils.py", line 186, in set_weights
self.assignment_nodes[name] for name in new_weights.keys()
AttributeError: 'numpy.ndarray' object has no attribute 'keys'
###Markdown
Tutorial 04: Visualizing Experiment Results This tutorial describes the process of visualizing the results of Flow experiments, and of replaying them. **Note:** This tutorial is only relevant if you use SUMO as a simulator. We currently do not support policy replay or data collection when using Aimsun. The only exception is reward plotting, which is independent of whether you have used SUMO or Aimsun during training. 1. Visualization components The visualization of simulation results breaks down into three main components: - **reward plotting**: Visualization of the reward function is an essential step in evaluating the effectiveness and training progress of RL agents. - **policy replay**: Flow includes tools for visualizing trained policies using SUMO's GUI. This enables more granular analysis of policies beyond their accrued reward, which in turn allows users to tweak actions, observations and rewards in order to produce some desired behavior. The visualizers also generate plots of observations and a plot of the reward function over the course of the rollout. - **data collection and analysis**: Any Flow experiment can output its simulation data to a CSV file, `emission.csv`, containing the contents of SUMO's built-in `emission.xml` files. This file contains various data such as the speed, position, time, fuel consumption and many other metrics for every vehicle in the network and at each time step of the simulation. Once you have generated the `emission.csv` file, you can open it and read the data it contains using Python's [csv library](https://docs.python.org/3/library/csv.html) (or using Excel). Visualization is different depending on which reinforcement learning library you are using, if any. Accordingly, the rest of this tutorial explains how to plot rewards, replay policies and collect data when using either no RL library, RLlib, or stable-baselines. **Contents:** [How to visualize using SUMO without training](2.1---Using-SUMO-without-training) [How to visualize using SUMO with RLlib](2.2---Using-SUMO-with-RLlib) [**_Example: visualize data on a ring trained using RLlib_**](2.3---Example:-Visualize-data-on-a-ring-trained-using-RLlib) 2. How to visualize 2.1 - Using SUMO without training _In this case, since there is no training, there is no reward to plot and no policy to replay._ Data collection and analysis SUMO-only experiments can generate emission CSV files seamlessly: First, you have to tell SUMO to generate the `emission.xml` files. You can do that by specifying `emission_path` in the simulation parameters (class `SumoParams`), which is the path where the emission files will be generated. For instance:
###Code
from flow.core.params import SumoParams
sim_params = SumoParams(sim_step=0.1, render=True, emission_path='data')
###Output
_____no_output_____
###Markdown
Then, you have to tell Flow to convert these XML emission files into CSV files. To do that, pass in `convert_to_csv=True` to the `run` method of your experiment object. For instance: ```python exp.run(1, convert_to_csv=True)``` When running experiments, Flow will now automatically create CSV files next to the SUMO-generated XML files. 2.2 - Using SUMO with RLlib Reward plotting RLlib supports reward visualization over the period of the training using the `tensorboard` command. It takes one command-line parameter, `--logdir`, which is an RLlib result directory. By default, it would be located within an experiment directory inside your `~/ray_results` directory. An example call would look like: `tensorboard --logdir ~/ray_results/experiment_dir/result/directory` You can also run `tensorboard --logdir ~/ray_results` if you want to select more than just one experiment. If you do not wish to use `tensorboard`, another way is to use our `flow/visualize/plot_ray_results.py` tool. It takes as arguments: - the path to the `progress.csv` file located inside your experiment results directory (`~/ray_results/...`), - the name(s) of the column(s) you wish to plot (reward or other things). An example call would look like: `flow/visualize/plot_ray_results.py ~/ray_results/experiment_dir/result/progress.csv training/return-average training/return-min` If you do not know what the names of the columns are, run the command without specifying any column: `flow/visualize/plot_ray_results.py ~/ray_results/experiment_dir/result/progress.csv` and the list of all available columns will be displayed to you. Policy replay The tool to replay a policy trained using RLlib is located at `flow/visualize/visualizer_rllib.py`. It takes as arguments: first, the path to the experiment results (by default located within `~/ray_results`), and second, the number of the checkpoint you wish to visualize (which corresponds to the folder `checkpoint_` inside the experiment results directory). An example call would look like this: `python flow/visualize/visualizer_rllib.py ~/ray_results/experiment_dir/result/directory 1` There are other optional parameters which you can learn about by running `visualizer_rllib.py --help`. Data collection and analysis Simulation data can be generated the same way as it is done [without training](2.1---Using-SUMO-without-training). If you need to generate simulation data after the training, you can run a policy replay as mentioned above, and add the `--gen_emission` parameter. An example call would look like: `python flow/visualize/visualizer_rllib.py ~/ray_results/experiment_dir/result/directory 1 --gen_emission` 2.3 - Example: Visualize data on a ring trained using RLlib
###Code
!pwd # make sure you are in the flow/tutorials folder
###Output
/home/ryc/flow/tutorials
###Markdown
The folder `flow/tutorials/data/trained_ring` contains the data generated in `ray_results` after training an agent on a ring scenario for 200 iterations using RLlib (the experiment can be found in `flow/examples/rllib/stabilizing_the_ring.py`). Let's first have a look at what's available in the `progress.csv` file:
###Code
!python ../flow/visualize/plot_ray_results.py data/trained_ring/progress.csv
###Output
Columns are: episode_reward_max, episode_reward_min, episode_reward_mean, episode_len_mean, episodes_this_iter, timesteps_this_iter, done, timesteps_total, episodes_total, training_iteration, experiment_id, date, timestamp, time_this_iter_s, time_total_s, pid, hostname, node_ip, time_since_restore, timesteps_since_restore, iterations_since_restore, num_healthy_workers, trial_id, sampler_perf/mean_env_wait_ms, sampler_perf/mean_processing_ms, sampler_perf/mean_inference_ms, info/num_steps_trained, info/num_steps_sampled, info/sample_time_ms, info/load_time_ms, info/grad_time_ms, info/update_time_ms, perf/cpu_util_percent, perf/ram_util_percent, info/learner/default_policy/cur_kl_coeff, info/learner/default_policy/cur_lr, info/learner/default_policy/total_loss, info/learner/default_policy/policy_loss, info/learner/default_policy/vf_loss, info/learner/default_policy/vf_explained_var, info/learner/default_policy/kl, info/learner/default_policy/entropy, info/learner/default_policy/entropy_coeff
###Markdown
This gives us a list of everything that we can plot. Let's plot the reward and its boundaries:
###Code
%matplotlib notebook
# if this doesn't display anything, try with "%matplotlib inline" instead
%run ../flow/visualize/plot_ray_results.py data/trained_ring/progress.csv \
episode_reward_mean episode_reward_min episode_reward_max
###Output
_____no_output_____
###Markdown
We can see that the policy had already converged by iteration 50. Now let's see what this policy looks like. Run the following script, then click on the green arrow to run the simulation (you may have to click several times).
###Code
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 200 --horizon 2000
###Output
_____no_output_____
###Markdown
The RL agent is properly stabilizing the ring! Indeed, without an RL agent, the vehicles start forming stop-and-go waves which significantly slow down the traffic, as you can see in this simulation:
###Code
!python ../examples/simulate.py ring
###Output
_____no_output_____
###Markdown
In the trained ring folder, there is a checkpoint generated every 20 iterations. Try running the earlier visualizer command again, but replace 200 with 20. On the reward plot, you can see that the reward is already quite high at iteration 20, but it hasn't converged yet, so the agent will perform a little less well than at iteration 200. That's it for this example! Feel free to play around with the other scripts in `flow/visualize`. Run them with the `--help` parameter and they will tell you how to use them. Also, if you need the emission file for the trained ring, you can obtain it by running the following command:
###Code
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 200 --horizon 2000 --gen_emission
###Output
_____no_output_____
###Markdown
Tutorial 04: Visualizing Experiment Results This tutorial describes the process of visualizing the results of Flow experiments, and of replaying them. **Note:** This tutorial is only relevant if you use SUMO as a simulator. We currently do not support policy replay or data collection when using Aimsun. The only exception is reward plotting, which is independent of whether you have used SUMO or Aimsun during training. 1. Visualization components The visualization of simulation results breaks down into three main components: - **reward plotting**: Visualization of the reward function is an essential step in evaluating the effectiveness and training progress of RL agents. - **policy replay**: Flow includes tools for visualizing trained policies using SUMO's GUI. This enables more granular analysis of policies beyond their accrued reward, which in turn allows users to tweak actions, observations and rewards in order to produce some desired behavior. The visualizers also generate plots of observations and a plot of the reward function over the course of the rollout. - **data collection and analysis**: Any Flow experiment can output its simulation data to a CSV file, `emission.csv`, containing the contents of SUMO's built-in `emission.xml` files. This file contains various data such as the speed, position, time, fuel consumption and many other metrics for every vehicle in the network and at each time step of the simulation. Once you have generated the `emission.csv` file, you can open it and read the data it contains using Python's [csv library](https://docs.python.org/3/library/csv.html) (or using Excel). Visualization is different depending on which reinforcement learning library you are using, if any. Accordingly, the rest of this tutorial explains how to plot rewards, replay policies and collect data when using either no RL library, RLlib, or stable-baselines. **Contents:** [How to visualize using SUMO without training](2.1---Using-SUMO-without-training) [How to visualize using SUMO with RLlib](2.2---Using-SUMO-with-RLlib) [**_Example: visualize data on a ring trained using RLlib_**](2.3---Example:-Visualize-data-on-a-ring-trained-using-RLlib) 2. How to visualize 2.1 - Using SUMO without training _In this case, since there is no training, there is no reward to plot and no policy to replay._ Data collection and analysis SUMO-only experiments can generate emission CSV files seamlessly: First, you have to tell SUMO to generate the `emission.xml` files. You can do that by specifying `emission_path` in the simulation parameters (class `SumoParams`), which is the path where the emission files will be generated. For instance:
###Code
from flow.core.params import SumoParams
sim_params = SumoParams(sim_step=0.1, render=True, emission_path='data')
###Output
_____no_output_____
###Markdown
Then, you have to tell Flow to convert these XML emission files into CSV files. To do that, pass in `convert_to_csv=True` to the `run` method of your experiment object. For instance: ```python exp.run(1, convert_to_csv=True)``` When running experiments, Flow will now automatically create CSV files next to the SUMO-generated XML files. 2.2 - Using SUMO with RLlib Reward plotting RLlib supports reward visualization over the period of the training using the `tensorboard` command. It takes one command-line parameter, `--logdir`, which is an RLlib result directory. By default, it would be located within an experiment directory inside your `~/ray_results` directory. An example call would look like: `tensorboard --logdir ~/ray_results/experiment_dir/result/directory` You can also run `tensorboard --logdir ~/ray_results` if you want to select more than just one experiment. If you do not wish to use `tensorboard`, another way is to use our `flow/visualize/plot_ray_results.py` tool. It takes as arguments: - the path to the `progress.csv` file located inside your experiment results directory (`~/ray_results/...`), - the name(s) of the column(s) you wish to plot (reward or other things). An example call would look like: `flow/visualize/plot_ray_results.py ~/ray_results/experiment_dir/result/progress.csv training/return-average training/return-min` If you do not know what the names of the columns are, run the command without specifying any column: `flow/visualize/plot_ray_results.py ~/ray_results/experiment_dir/result/progress.csv` and the list of all available columns will be displayed to you. Policy replay The tool to replay a policy trained using RLlib is located at `flow/visualize/visualizer_rllib.py`. It takes as arguments: first, the path to the experiment results (by default located within `~/ray_results`), and second, the number of the checkpoint you wish to visualize (which corresponds to the folder `checkpoint_` inside the experiment results directory). An example call would look like this: `python flow/visualize/visualizer_rllib.py ~/ray_results/experiment_dir/result/directory 1` There are other optional parameters which you can learn about by running `visualizer_rllib.py --help`. Data collection and analysis Simulation data can be generated the same way as it is done [without training](2.1---Using-SUMO-without-training). If you need to generate simulation data after the training, you can run a policy replay as mentioned above, and add the `--gen_emission` parameter. An example call would look like: `python flow/visualize/visualizer_rllib.py ~/ray_results/experiment_dir/result/directory 1 --gen_emission` 2.3 - Example: Visualize data on a ring trained using RLlib
###Code
!pwd # make sure you are in the flow/tutorials folder
###Output
_____no_output_____
###Markdown
The folder `flow/tutorials/data/trained_ring` contains the data generated in `ray_results` after training an agent on a ring scenario for 200 iterations using RLlib (the experiment can be found in `flow/examples/rllib/stabilizing_the_ring.py`).Let's first have a look at what's available in the `progress.csv` file:
###Code
!python ../flow/visualize/plot_ray_results.py data/trained_ring/progress.csv
###Output
_____no_output_____
###Markdown
This gives us a list of everything that we can plot. Let's plot the reward and its boundaries:
###Code
%matplotlib notebook
# if this doesn't display anything, try with "%matplotlib inline" instead
%run ../flow/visualize/plot_ray_results.py data/trained_ring/progress.csv \
episode_reward_mean episode_reward_min episode_reward_max
###Output
_____no_output_____
###Markdown
We can see that the policy had already converged by iteration 50. Now let's see what this policy looks like. Run the following script, then click on the green arrow to run the simulation (you may have to click several times).
###Code
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 200 --horizon 2000
###Output
_____no_output_____
###Markdown
The RL agent is properly stabilizing the ring! Indeed, without an RL agent, the vehicles start forming stop-and-go waves which significantly slow down the traffic, as you can see in this simulation:
###Code
!python ../examples/simulate.py ring
###Output
_____no_output_____
###Markdown
In the trained ring folder, there is a checkpoint generated every 20 iterations. Try running the earlier visualizer command again, but replace 200 with 20. On the reward plot, you can see that the reward is already quite high at iteration 20, but it hasn't converged yet, so the agent will perform a little less well than at iteration 200. That's it for this example! Feel free to play around with the other scripts in `flow/visualize`. Run them with the `--help` parameter and they will tell you how to use them. Also, if you need the emission file for the trained ring, you can obtain it by running the following command:
###Code
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 200 --horizon 2000 --gen_emission
###Output
_____no_output_____
###Markdown
Tutorial 04: Visualizing Experiment Results This tutorial describes the process of visualizing the results of Flow experiments, and of replaying them. **Note:** This tutorial is only relevant if you use SUMO as a simulator. We currently do not support policy replay or data collection when using Aimsun. The only exception is reward plotting, which is independent of whether you have used SUMO or Aimsun during training. 1. Visualization components The visualization of simulation results breaks down into three main components: - **reward plotting**: Visualization of the reward function is an essential step in evaluating the effectiveness and training progress of RL agents. - **policy replay**: Flow includes tools for visualizing trained policies using SUMO's GUI. This enables more granular analysis of policies beyond their accrued reward, which in turn allows users to tweak actions, observations and rewards in order to produce some desired behavior. The visualizers also generate plots of observations and a plot of the reward function over the course of the rollout. - **data collection and analysis**: Any Flow experiment can output its simulation data to a CSV file, `emission.csv`, containing the contents of SUMO's built-in `emission.xml` files. This file contains various data such as the speed, position, time, fuel consumption and many other metrics for every vehicle in the network and at each time step of the simulation. Once you have generated the `emission.csv` file, you can open it and read the data it contains using Python's [csv library](https://docs.python.org/3/library/csv.html) (or using Excel). Visualization is different depending on which reinforcement learning library you are using, if any. Accordingly, the rest of this tutorial explains how to plot rewards, replay policies and collect data when using either no RL library, RLlib, or stable-baselines. **Contents:** [How to visualize using SUMO without training](2.1---Using-SUMO-without-training) [How to visualize using SUMO with RLlib](2.2---Using-SUMO-with-RLlib) [**_Example: visualize data on a ring trained using RLlib_**](2.3---Example:-Visualize-data-on-a-ring-trained-using-RLlib) 2. How to visualize 2.1 - Using SUMO without training _In this case, since there is no training, there is no reward to plot and no policy to replay._ Data collection and analysis SUMO-only experiments can generate emission CSV files seamlessly: First, you have to tell SUMO to generate the `emission.xml` files. You can do that by specifying `emission_path` in the simulation parameters (class `SumoParams`), which is the path where the emission files will be generated. For instance:
###Code
from flow.core.params import SumoParams
sim_params = SumoParams(sim_step=1, render=True, emission_path='data')
###Output
_____no_output_____
###Markdown
Then, you have to tell Flow to convert these XML emission files into CSV files. To do that, pass in `convert_to_csv=True` to the `run` method of your experiment object. For instance: ```python exp.run(1, convert_to_csv=True)``` When running experiments, Flow will now automatically create CSV files next to the SUMO-generated XML files. 2.2 - Using SUMO with RLlib Reward plotting RLlib supports reward visualization over the period of the training using the `tensorboard` command. It takes one command-line parameter, `--logdir`, which is an RLlib result directory. By default, it would be located within an experiment directory inside your `~/ray_results` directory. An example call would look like: `tensorboard --logdir ~/ray_results/experiment_dir/result/directory` You can also run `tensorboard --logdir ~/ray_results` if you want to select more than just one experiment. If you do not wish to use `tensorboard`, another way is to use our `flow/visualize/plot_ray_results.py` tool. It takes as arguments: - the path to the `progress.csv` file located inside your experiment results directory (`~/ray_results/...`), - the name(s) of the column(s) you wish to plot (reward or other things). An example call would look like: `flow/visualize/plot_ray_results.py ~/ray_results/experiment_dir/result/progress.csv training/return-average training/return-min` If you do not know what the names of the columns are, run the command without specifying any column: `flow/visualize/plot_ray_results.py ~/ray_results/experiment_dir/result/progress.csv` and the list of all available columns will be displayed to you. Policy replay The tool to replay a policy trained using RLlib is located at `flow/visualize/visualizer_rllib.py`. It takes as arguments: first, the path to the experiment results (by default located within `~/ray_results`), and second, the number of the checkpoint you wish to visualize (which corresponds to the folder `checkpoint_` inside the experiment results directory). An example call would look like this: `python flow/visualize/visualizer_rllib.py ~/ray_results/experiment_dir/result/directory 1` There are other optional parameters which you can learn about by running `visualizer_rllib.py --help`. Data collection and analysis Simulation data can be generated the same way as it is done [without training](2.1---Using-SUMO-without-training). If you need to generate simulation data after the training, you can run a policy replay as mentioned above, and add the `--gen_emission` parameter. An example call would look like: `python flow/visualize/visualizer_rllib.py ~/ray_results/experiment_dir/result/directory 1 --gen_emission` 2.3 - Example: Visualize data on a ring trained using RLlib
###Code
!pwd # make sure you are in the flow/tutorials folder
###Output
/home/roberto/flow/tutorials
###Markdown
The folder `flow/tutorials/data/trained_ring` contains the data generated in `ray_results` after training an agent on a ring scenario for 200 iterations using RLlib (the experiment can be found in `flow/examples/rllib/stabilizing_the_ring.py`).Let's first have a look at what's available in the `progress.csv` file:
###Code
!python ../flow/visualize/plot_ray_results.py data/trained_ring/progress.csv
###Output
Columns are: episode_reward_max, episode_reward_min, episode_reward_mean, episode_len_mean, episodes_this_iter, timesteps_this_iter, done, timesteps_total, episodes_total, training_iteration, experiment_id, date, timestamp, time_this_iter_s, time_total_s, pid, hostname, node_ip, time_since_restore, timesteps_since_restore, iterations_since_restore, num_healthy_workers, trial_id, sampler_perf/mean_env_wait_ms, sampler_perf/mean_processing_ms, sampler_perf/mean_inference_ms, info/num_steps_trained, info/num_steps_sampled, info/sample_time_ms, info/load_time_ms, info/grad_time_ms, info/update_time_ms, perf/cpu_util_percent, perf/ram_util_percent, info/learner/default_policy/cur_kl_coeff, info/learner/default_policy/cur_lr, info/learner/default_policy/total_loss, info/learner/default_policy/policy_loss, info/learner/default_policy/vf_loss, info/learner/default_policy/vf_explained_var, info/learner/default_policy/kl, info/learner/default_policy/entropy, info/learner/default_policy/entropy_coeff
###Markdown
This gives us a list of everything that we can plot. Let's plot the reward and its boundaries:
###Code
%matplotlib inline
# if this doesn't display anything, try with "%matplotlib notebook" instead
%run ../flow/visualize/plot_ray_results.py data/trained_ring/progress.csv \
episode_reward_mean episode_reward_min episode_reward_max
###Output
_____no_output_____
###Markdown
We can see that the policy had already converged by iteration 50. Now let's see what this policy looks like. Run the following script, then click on the green arrow to run the simulation (you may have to click several times).
###Code
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 200 --horizon 200000
###Output
/home/roberto/anaconda3/envs/flow/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/roberto/anaconda3/envs/flow/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/roberto/anaconda3/envs/flow/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/roberto/anaconda3/envs/flow/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/roberto/anaconda3/envs/flow/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/roberto/anaconda3/envs/flow/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
2020-05-25 19:07:51,527 INFO node.py:498 -- Process STDOUT and STDERR is being redirected to /tmp/ray/session_2020-05-25_19-07-51_527459_28558/logs.
2020-05-25 19:07:51,633 INFO services.py:409 -- Waiting for redis server at 127.0.0.1:32073 to respond...
2020-05-25 19:07:51,757 INFO services.py:409 -- Waiting for redis server at 127.0.0.1:27886 to respond...
2020-05-25 19:07:51,761 INFO services.py:809 -- Starting Redis shard with 3.33 GB max memory.
2020-05-25 19:07:51,800 INFO node.py:512 -- Process STDOUT and STDERR is being redirected to /tmp/ray/session_2020-05-25_19-07-51_527459_28558/logs.
2020-05-25 19:07:51,802 INFO services.py:1475 -- Starting the Plasma object store with 5.0 GB memory using /dev/shm.
2020-05-25 19:07:51,864 ERROR log_sync.py:34 -- Log sync requires cluster to be setup with `ray up`.
2020-05-25 19:07:51,879 WARNING ppo.py:143 -- FYI: By default, the value function will not share layers with the policy model ('vf_share_layers': False).
2020-05-25 19:07:53,017 INFO rollout_worker.py:319 -- Creating policy evaluation worker 0 on CPU (please ignore any CUDA init errors)
2020-05-25 19:07:53.018642: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-05-25 19:07:53.077303: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:897] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-25 19:07:53.077517: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1392] Found device 0 with properties:
name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
totalMemory: 3.95GiB freeMemory: 3.48GiB
2020-05-25 19:07:53.077533: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1471] Adding visible gpu devices: 0
2020-05-25 19:07:53.405984: I tensorflow/core/common_runtime/gpu/gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-05-25 19:07:53.406012: I tensorflow/core/common_runtime/gpu/gpu_device.cc:958] 0
2020-05-25 19:07:53.406021: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: N
2020-05-25 19:07:53.406091: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3202 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-05-25 19:07:53,667 INFO dynamic_tf_policy.py:324 -- Initializing loss function with dummy input:
{ 'action_prob': <tf.Tensor 'default_policy/action_prob:0' shape=(?,) dtype=float32>,
'actions': <tf.Tensor 'default_policy/actions:0' shape=(?, 1) dtype=float32>,
'advantages': <tf.Tensor 'default_policy/advantages:0' shape=(?,) dtype=float32>,
'behaviour_logits': <tf.Tensor 'default_policy/behaviour_logits:0' shape=(?, 2) dtype=float32>,
'dones': <tf.Tensor 'default_policy/dones:0' shape=(?,) dtype=bool>,
'new_obs': <tf.Tensor 'default_policy/new_obs:0' shape=(?, 3) dtype=float32>,
'obs': <tf.Tensor 'default_policy/observation:0' shape=(?, 3) dtype=float32>,
'prev_actions': <tf.Tensor 'default_policy/action:0' shape=(?, 1) dtype=float32>,
'prev_rewards': <tf.Tensor 'default_policy/prev_reward:0' shape=(?,) dtype=float32>,
'rewards': <tf.Tensor 'default_policy/rewards:0' shape=(?,) dtype=float32>,
'value_targets': <tf.Tensor 'default_policy/value_targets:0' shape=(?,) dtype=float32>,
'vf_preds': <tf.Tensor 'default_policy/vf_preds:0' shape=(?,) dtype=float32>}
/home/roberto/anaconda3/envs/flow/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:100: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
2020-05-25 19:07:54,416 INFO rollout_worker.py:742 -- Built policy map: {'default_policy': <ray.rllib.policy.tf_policy_template.PPOTFPolicy object at 0x7f4db52fcf60>}
2020-05-25 19:07:54,416 INFO rollout_worker.py:743 -- Built preprocessor map: {'default_policy': <ray.rllib.models.preprocessors.NoPreprocessor object at 0x7f4db52fcc18>}
2020-05-25 19:07:54,416 INFO rollout_worker.py:356 -- Built filter map: {'default_policy': <ray.rllib.utils.filter.NoFilter object at 0x7f4db52fcac8>}
2020-05-25 19:07:54,420 INFO multi_gpu_optimizer.py:93 -- LocalMultiGPUOptimizer devices ['/cpu:0']
2020-05-25 19:07:56,332 WARNING util.py:47 -- Install gputil for GPU system monitoring.
-----------------------
ring length: 268
v_max: 5.5194978538368025
-----------------------
Traceback (most recent call last):
File "../flow/visualize/visualizer_rllib.py", line 386, in <module>
visualizer_rllib(args)
File "../flow/visualize/visualizer_rllib.py", line 204, in visualizer_rllib
state = env.reset()
File "/home/roberto/flow/flow/envs/ring/wave_attenuation.py", line 210, in reset
return super().reset()
File "/home/roberto/flow/flow/envs/base.py", line 528, in reset
self.k.vehicle.update_vehicle_colors()
File "/home/roberto/flow/flow/core/kernel/vehicle/traci.py", line 1060, in update_vehicle_colors
if self._color_by_speed:
AttributeError: 'TraCIVehicle' object has no attribute '_color_by_speed'
###Markdown
The RL agent is properly stabilizing the ring! Indeed, without an RL agent, the vehicles start forming stop-and-go waves which significantly slow down the traffic, as you can see in this simulation:
###Code
!python ../examples/simulate.py ring
###Output
Round 0, return: 424.12213238365126
Average, std returns: 424.12213238365126, 0.0
Average, std velocities: 2.883939027587335, 0.0
Average, std outflows: 0.0, 0.0
Total time: 8.959779739379883
steps/second: 287.013183011324
###Markdown
In the trained ring folder, there is a checkpoint generated every 20 iterations. Try running the earlier visualizer command again, but replace 200 with 20. On the reward plot, you can see that the reward is already quite high at iteration 20, but it hasn't converged yet, so the agent will perform a little less well than at iteration 200. That's it for this example! Feel free to play around with the other scripts in `flow/visualize`. Run them with the `--help` parameter and they will tell you how to use them. Also, if you need the emission file for the trained ring, you can obtain it by running the following command:
###Code
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 200 --horizon 20000 --gen_emission
###Output
/home/roberto/anaconda3/envs/flow/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/roberto/anaconda3/envs/flow/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/roberto/anaconda3/envs/flow/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/roberto/anaconda3/envs/flow/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/roberto/anaconda3/envs/flow/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/roberto/anaconda3/envs/flow/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
2020-05-25 19:15:39,796 INFO node.py:498 -- Process STDOUT and STDERR is being redirected to /tmp/ray/session_2020-05-25_19-15-39_795951_29026/logs.
2020-05-25 19:15:39,902 INFO services.py:409 -- Waiting for redis server at 127.0.0.1:36303 to respond...
2020-05-25 19:15:40,026 INFO services.py:409 -- Waiting for redis server at 127.0.0.1:62275 to respond...
2020-05-25 19:15:40,029 INFO services.py:809 -- Starting Redis shard with 3.33 GB max memory.
2020-05-25 19:15:40,083 INFO node.py:512 -- Process STDOUT and STDERR is being redirected to /tmp/ray/session_2020-05-25_19-15-39_795951_29026/logs.
2020-05-25 19:15:40,085 INFO services.py:1475 -- Starting the Plasma object store with 5.0 GB memory using /dev/shm.
2020-05-25 19:15:40,144 ERROR log_sync.py:34 -- Log sync requires cluster to be setup with `ray up`.
2020-05-25 19:15:40,160 WARNING ppo.py:143 -- FYI: By default, the value function will not share layers with the policy model ('vf_share_layers': False).
2020-05-25 19:15:41,277 INFO rollout_worker.py:319 -- Creating policy evaluation worker 0 on CPU (please ignore any CUDA init errors)
2020-05-25 19:15:41.278186: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-05-25 19:15:41.335262: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:897] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-25 19:15:41.335489: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1392] Found device 0 with properties:
name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.62
pciBusID: 0000:01:00.0
totalMemory: 3.95GiB freeMemory: 3.49GiB
2020-05-25 19:15:41.335506: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1471] Adding visible gpu devices: 0
2020-05-25 19:15:41.657799: I tensorflow/core/common_runtime/gpu/gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-05-25 19:15:41.657826: I tensorflow/core/common_runtime/gpu/gpu_device.cc:958] 0
2020-05-25 19:15:41.657832: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: N
2020-05-25 19:15:41.657902: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3215 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-05-25 19:15:41,917 INFO dynamic_tf_policy.py:324 -- Initializing loss function with dummy input:
{ 'action_prob': <tf.Tensor 'default_policy/action_prob:0' shape=(?,) dtype=float32>,
'actions': <tf.Tensor 'default_policy/actions:0' shape=(?, 1) dtype=float32>,
'advantages': <tf.Tensor 'default_policy/advantages:0' shape=(?,) dtype=float32>,
'behaviour_logits': <tf.Tensor 'default_policy/behaviour_logits:0' shape=(?, 2) dtype=float32>,
'dones': <tf.Tensor 'default_policy/dones:0' shape=(?,) dtype=bool>,
'new_obs': <tf.Tensor 'default_policy/new_obs:0' shape=(?, 3) dtype=float32>,
'obs': <tf.Tensor 'default_policy/observation:0' shape=(?, 3) dtype=float32>,
'prev_actions': <tf.Tensor 'default_policy/action:0' shape=(?, 1) dtype=float32>,
'prev_rewards': <tf.Tensor 'default_policy/prev_reward:0' shape=(?,) dtype=float32>,
'rewards': <tf.Tensor 'default_policy/rewards:0' shape=(?,) dtype=float32>,
'value_targets': <tf.Tensor 'default_policy/value_targets:0' shape=(?,) dtype=float32>,
'vf_preds': <tf.Tensor 'default_policy/vf_preds:0' shape=(?,) dtype=float32>}
/home/roberto/anaconda3/envs/flow/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:100: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
2020-05-25 19:15:42,637 INFO rollout_worker.py:742 -- Built policy map: {'default_policy': <ray.rllib.policy.tf_policy_template.PPOTFPolicy object at 0x7fd5882b7fd0>}
2020-05-25 19:15:42,637 INFO rollout_worker.py:743 -- Built preprocessor map: {'default_policy': <ray.rllib.models.preprocessors.NoPreprocessor object at 0x7fd5882b7c88>}
2020-05-25 19:15:42,637 INFO rollout_worker.py:356 -- Built filter map: {'default_policy': <ray.rllib.utils.filter.NoFilter object at 0x7fd5882b7b38>}
2020-05-25 19:15:42,640 INFO multi_gpu_optimizer.py:93 -- LocalMultiGPUOptimizer devices ['/cpu:0']
2020-05-25 19:15:44,493 WARNING util.py:47 -- Install gputil for GPU system monitoring.
-----------------------
ring length: 266
v_max: 5.42459972166245
-----------------------
Traceback (most recent call last):
File "../flow/visualize/visualizer_rllib.py", line 386, in <module>
visualizer_rllib(args)
File "../flow/visualize/visualizer_rllib.py", line 204, in visualizer_rllib
state = env.reset()
File "/home/roberto/flow/flow/envs/ring/wave_attenuation.py", line 210, in reset
return super().reset()
File "/home/roberto/flow/flow/envs/base.py", line 528, in reset
self.k.vehicle.update_vehicle_colors()
File "/home/roberto/flow/flow/core/kernel/vehicle/traci.py", line 1060, in update_vehicle_colors
if self._color_by_speed:
AttributeError: 'TraCIVehicle' object has no attribute '_color_by_speed'
Python_Stock/Technical_Indicators/Tenkan_Sen.ipynb | ###Markdown
Tenkan-Sen (Conversion Line) https://www.investopedia.com/terms/t/tenkansen.asp
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# fix_yahoo_finance is used to fetch data
import fix_yahoo_finance as yf
yf.pdr_override()
# input
symbol = 'AAPL'
start = '2018-01-01'
end = '2019-01-01'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
# Tenkan-sen (Conversion Line): (9-Period High + 9-Period Low)/2
Nine_Period_High = df['High'].rolling(window=9).max()
Nine_Period_Low = df['Low'].rolling(window=9).min()
df['Tenkan_Sen'] = (Nine_Period_High + Nine_Period_Low)/2
plt.figure(figsize=(14,8))
plt.plot(df['Adj Close'], label='Adj Close')
plt.plot(df['Tenkan_Sen'], color='red', label='Tenkan-Sen')
plt.title('Tenkan-Sen (Conversion Line) of Stock')
plt.legend()
plt.xlabel('Date')
plt.ylabel('Price')
plt.show()
###Output
_____no_output_____
###Markdown
Candlestick with Tenkan-Sen
###Code
from matplotlib import dates as mdates
import datetime as dt
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = pd.to_datetime(dfc['Date'])
dfc['Date'] = dfc['Date'].apply(mdates.date2num)
dfc.head()
from mpl_finance import candlestick_ohlc
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot(2, 1, 1)
candlestick_ohlc(ax1,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax1.plot(df['Tenkan_Sen'], color='orange', label='Tenkan-Sen')
ax1.xaxis_date()
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax1.grid(True, which='both')
ax1.minorticks_on()
ax1v = ax1.twinx()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
ax1v.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
ax1v.axes.yaxis.set_ticklabels([])
ax1v.set_ylim(0, 3*df.Volume.max())
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
ax1.legend()
ax2 = plt.subplot(2, 1, 2)
df['VolumePositive'] = df['Open'] < df['Adj Close']
ax2.bar(df.index, df['Volume'], color=df.VolumePositive.map({True: 'g', False: 'r'}), label='Volume')
ax2.grid()
ax2.set_ylabel('Volume')
ax2.set_xlabel('Date')
###Output
_____no_output_____
###Markdown
Tenkan-Sen (Conversion Line) https://www.investopedia.com/terms/t/tenkansen.asp
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# yfinance is used to fetch data
import yfinance as yf
yf.pdr_override()
# input
symbol = 'AAPL'
start = '2018-01-01'
end = '2019-01-01'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
# Tenkan-sen (Conversion Line): (9-Period High + 9-Period Low)/2
Nine_Period_High = df['High'].rolling(window=9).max()
Nine_Period_Low = df['Low'].rolling(window=9).min()
df['Tenkan_Sen'] = (Nine_Period_High + Nine_Period_Low)/2
plt.figure(figsize=(14,8))
plt.plot(df['Adj Close'], label='Adj Close')
plt.plot(df['Tenkan_Sen'], color='red', label='Tenkan-Sen')
plt.title('Tenkan-Sen (Conversion Line) of Stock')
plt.legend()
plt.xlabel('Date')
plt.ylabel('Price')
plt.show()
###Output
_____no_output_____
###Markdown
Candlestick with Tenkan-Sen
###Code
from matplotlib import dates as mdates
import datetime as dt
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = pd.to_datetime(dfc['Date'])
dfc['Date'] = dfc['Date'].apply(mdates.date2num)
dfc.head()
from mpl_finance import candlestick_ohlc
fig = plt.figure(figsize=(14,10))
ax1 = plt.subplot(2, 1, 1)
candlestick_ohlc(ax1,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax1.plot(df['Tenkan_Sen'], color='orange', label='Tenkan-Sen')
ax1.xaxis_date()
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax1.grid(True, which='both')
ax1.minorticks_on()
ax1v = ax1.twinx()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
ax1v.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
ax1v.axes.yaxis.set_ticklabels([])
ax1v.set_ylim(0, 3*df.Volume.max())
ax1.set_title('Stock '+ symbol +' Closing Price')
ax1.set_ylabel('Price')
ax1.legend()
ax2 = plt.subplot(2, 1, 2)
df['VolumePositive'] = df['Open'] < df['Adj Close']
ax2.bar(df.index, df['Volume'], color=df.VolumePositive.map({True: 'g', False: 'r'}), label='Volume')
ax2.grid()
ax2.set_ylabel('Volume')
ax2.set_xlabel('Date')
###Output
_____no_output_____
parkinsons disease.ipynb | ###Markdown
Libraries
###Code
!pip install xgboost
import pandas as pd
import numpy as np
import os, sys
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from xgboost import XGBClassifier
# Read The Data
data = pd.read_csv("parkinsons.data")
data.shape
data.head()
data.isnull().sum()
data.nunique()
data.dtypes
data.describe()
###Output
_____no_output_____
###Markdown
Here we can see that there are large differences between the value ranges of the different columns, so we have to rescale the features before training.
###Code
X = data.drop(['name', 'status'], axis=1)
y = data['status']
# Scaler init
scaler = MinMaxScaler((-1, 1))
X = scaler.fit_transform(X)
# split the data bw training and testing data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 22)
# initialize classifier
clf = XGBClassifier()
%time clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
metrics.accuracy_score(y_pred, y_test)*100
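# --- Illustrative addition (not part of the original notebook) ---
# Accuracy alone can be misleading on an imbalanced dataset like this one,
# so it can help to also inspect the confusion matrix and per-class scores.
print(metrics.confusion_matrix(y_test, y_pred))
print(metrics.classification_report(y_test, y_pred))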
###Output
_____no_output_____
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/CLARUSWAY/STATISTICS/Statistics_Session_2.ipynb | ###Markdown
Population and Sample
###Code
import numpy as np
np.random.seed(42)
population=np.random.randint(0,50,10000)
population
len(population)
np.random.seed(42)
sample=np.random.choice(population, 100)
np.random.seed(42)
sample_1000=np.random.choice(population, 1000)
len(sample)
len(sample_1000)
sample
sample.mean()
sample_1000.mean()
population.mean()
np.random.seed(42)
for i in range(20):
sample=np.random.choice(population, 100)
print(sample.mean())
np.random.seed(42)
sample_means=[]
for i in range(20):
sample=np.random.choice(population, 10000)
sample_means.append(sample.mean())
sample_means
np.mean(sample_means)
population.mean()
sum(sample_means)/len(sample_means)
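# --- Illustrative addition (not in the original session) ---
# A histogram of the sample means shows how they cluster around the
# population mean (an informal look at the sampling distribution).
import matplotlib.pyplot as plt
plt.hist(sample_means, bins=10)
plt.axvline(population.mean(), color='red')
plt.title('Distribution of sample means')
plt.show()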
###Output
_____no_output_____
###Markdown
Skewness and Kurtosis
###Code
import numpy as np
from scipy.stats import kurtosis, skew
from scipy import stats
import matplotlib.pyplot as plt
x=np.random.normal(0,2,1000)
# print('excess kurtosis of normal distribution (should be 0): {}'.format(kurtosis(x)))
# print('skewness of normal distribution (should be 0): {}'.format(skew(x)))
# In finance, high excess kurtosis is an indication of high risk.
plt.hist(x,bins=100);
x=np.random.normal(0,2,1000000)
# print('excess kurtosis of normal distribution (should be 0): {}'.format(kurtosis(x)))
# print('skewness of normal distribution (should be 0): {}'.format(skew(x)))
# In finance, high excess kurtosis is an indication of high risk.
plt.hist(x,bins=100);
kurtosis(x)
skew(x)
shape, scale = 2, 2
s=np.random.gamma(shape,scale, 1000)
plt.hist(s, bins=100);
shape, scale = 2, 2
s=np.random.gamma(shape,scale, 100000)
plt.hist(s, bins=100);
kurtosis(s)
skew(s)
shape, scale = 6, 2
s=np.random.gamma(shape,scale, 100000)
plt.hist(s, bins=100);
kurtosis(s)
skew(s)
###Output
_____no_output_____
10_Applied_Data_Science_Capstone/04_EDA_with_SQL.ipynb | ###Markdown
Assignment: SQL Notebook for Peer AssignmentEstimated time needed: **60** minutes. IntroductionUsing this Python notebook you will:1. Understand the Spacex DataSet2. Load the dataset into the corresponding table in a Db2 database3. Execute SQL queries to answer assignment questions Overview of the DataSetSpaceX has gained worldwide attention for a series of historic milestones.It is the only private company ever to return a spacecraft from low-earth orbit, which it first accomplished in December 2010.SpaceX advertises Falcon 9 rocket launches on its website with a cost of 62 million dollars wheras other providers cost upward of 165 million dollars each, much of the savings is because Space X can reuse the first stage.Therefore if we can determine if the first stage will land, we can determine the cost of a launch.This information can be used if an alternate company wants to bid against SpaceX for a rocket launch.This dataset includes a record for each payload carried during a SpaceX mission into outer space. Download the datasetsThis assignment requires you to load the spacex dataset.In many cases the dataset to be analyzed is available as a .CSV (comma separated values) file, perhaps on the internet. Click on the link below to download and save the dataset (.CSV file):Spacex DataSet Store the dataset in database table**it is highly recommended to manually load the table using the database console LOAD tool in DB2**.Now open the Db2 console, open the LOAD tool, Select / Drag the .CSV file for the dataset, Next create a New Table, and then follow the steps on-screen instructions to load the data. Name the new table as follows:**SPACEXDATASET****Follow these steps while using old DB2 UI which is having Open Console Screen****Note:While loading Spacex dataset, ensure that detect datatypes is disabled. Later click on the pencil icon(edit option).**1. Change the Date Format by manually typing DD-MM-YYYY and timestamp format as DD-MM-YYYY HH\:MM:SS. Here you should place the cursor at Date field and manually type as DD-MM-YYYY.2. Change the PAYLOAD_MASS\_\_KG\_ datatype to INTEGER. **Changes to be considered when having DB2 instance with the new UI having Go to UI screen*** Refer to this insruction in this link for viewing the new Go to UI screen.* Later click on **Data link(below SQL)** in the Go to UI screen and click on **Load Data** tab.* Later browse for the downloaded spacex file.* Once done select the schema andload the file.
###Code
%%capture --no-display
!pip install sqlalchemy==1.3.9
!pip install ibm_db_sa
!pip install ipython-sql
###Output
_____no_output_____
###Markdown
Connect to the databaseLet us first load the SQL extension and establish a connection with the database
###Code
%load_ext sql
%config SqlMagic.displaycon = False
###Output
_____no_output_____
###Markdown
**DB2 magic in case of old UI service credentials.**In the next cell enter your db2 connection string. Recall you created Service Credentials for your Db2 instance before. From the **uri** field of your Db2 service credentials copy everything after db2:// (except the double quote at the end) and paste it in the cell below after ibm_db_sa://in the following format**%sql ibm_db_sa://my-username:my-password\@my-hostname:my-port/my-db-name****DB2 magic in case of new UI service credentials.** * Use the following format.* Add security=SSL at the end**%sql ibm_db_sa://my-username:my-password\@my-hostname:my-port/my-db-name?security=SSL**
###Code
import os
from dotenv import load_dotenv
load_dotenv('../.env')
my_username=os.environ.get('my_username')
my_password=os.environ.get('my_password')
my_hostname=os.environ.get('my_hostname')
my_port=os.environ.get('my_port')
my_db_name=os.environ.get('my_db_name')
sql_connection= "ibm_db_sa://{0}:{1}@{2}:{3}/{4}?security=SSL".format(my_username,my_password,my_hostname,my_port,my_db_name)
%sql $sql_connection
###Output
_____no_output_____
###Markdown
TasksNow write and execute SQL queries to solve the assignment tasks. Task 1 Display the names of the unique launch sites in the space mission
###Code
%sql select distinct(launch_site) from spacex
###Output
Done.
###Markdown
Task 2 Display 5 records where launch sites begin with the string 'CCA'
###Code
%sql select * from spacex where launch_site like 'CCA%' limit 5
###Output
Done.
###Markdown
Task 3 Display the total payload mass carried by boosters launched by NASA (CRS)
###Code
%sql select sum(payload_mass__kg_) as "Total mass carried by 'NASA (CRS)'" from spacex where customer like 'NASA (CRS)'
###Output
Done.
###Markdown
Task 4 Display average payload mass carried by booster version F9 v1.1
###Code
%sql select avg(payload_mass__kg_) as "Average payload mass carried by booster 'F9 v1.1'" from spacex where booster_version like 'F9 v1.1'
###Output
Done.
###Markdown
Task 5 List the date when the first successful landing outcome in ground pad was acheived.*Hint:Use min function*
###Code
%sql select min(date) as "First time success landing in ground pad" from spacex where landing__outcome like 'Success (ground pad)'
###Output
Done.
###Markdown
Task 6 List the names of the boosters which have success in drone ship and have payload mass greater than 4000 but less than 6000
###Code
%sql select booster_version as "Booster version" from spacex where landing__outcome like 'Success (drone ship)' and payload_mass__kg_ between 4000 and 6000
###Output
Done.
###Markdown
Task 7 List the total number of successful and failure mission outcomes
###Code
%sql select mission_outcome as "Mission Outcome", count(mission_outcome) as "Nº of times" from spacex group by mission_outcome
###Output
Done.
###Markdown
Task 8 List the names of the booster_versions which have carried the maximum payload mass. Use a subquery
###Code
%sql select booster_version from spacex where payload_mass__kg_=(select max(payload_mass__kg_) from spacex)
###Output
Done.
###Markdown
Task 9 List the failed landing_outcomes in drone ship, their booster versions, and launch site names for in year 2015
###Code
%sql select booster_version, launch_site from spacex where year(date)=2015 and landing__outcome like 'Failure (drone ship)'
###Output
Done.
###Markdown
Task 10 Rank the count of landing outcomes (such as Failure (drone ship) or Success (ground pad)) between the date 2010-06-04 and 2017-03-20, in descending order
###Code
%sql select landing__outcome as "Landing Outcome", count(landing__outcome) as "Nº of times" from spacex where date between '2010-06-04' and '2017-03-20' group by landing__outcome order by 2 desc
###Output
Done.
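###Markdown
(Illustrative extra, not part of the original assignment) The query above orders the landing-outcome counts; if an explicit rank column is wanted, a window function can be used. This sketch assumes the Db2 instance supports the standard RANK() OVER clause and reuses the same table and columns as above.
###Code
%sql select landing__outcome as "Landing Outcome", count(landing__outcome) as "Nº of times", rank() over (order by count(landing__outcome) desc) as "Rank" from spacex where date between '2010-06-04' and '2017-03-20' group by landing__outcome order by 3
###Output
_____no_output_____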
module3-make-explanatory-visualizations/LS_DS_124_Sequence_your_narrative_Assignment.ipynb | ###Markdown
_Lambda School Data Science_ Sequence Your Narrative - AssignmentToday we will create a sequence of visualizations inspired by [Hans Rosling's 200 Countries, 200 Years, 4 Minutes](https://www.youtube.com/watch?v=jbkSRLYSojo).Using this [data from Gapminder](https://github.com/open-numbers/ddf--gapminder--systema_globalis/):- [Income Per Person (GDP Per Capital, Inflation Adjusted) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv)- [Life Expectancy (in Years) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv)- [Population Totals, by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv)- [Entities](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv)- [Concepts](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv) Objectives- sequence multiple visualizations- combine qualitative anecdotes with quantitative aggregatesLinks- [Hans Rosling’s TED talks](https://www.ted.com/speakers/hans_rosling)- [Spiralling global temperatures from 1850-2016](https://twitter.com/ed_hawkins/status/729753441459945474)- "[The Pudding](https://pudding.cool/) explains ideas debated in culture with visual essays."- [A Data Point Walks Into a Bar](https://lisacharlotterost.github.io/2016/12/27/datapoint-in-bar/): a thoughtful blog post about emotion and empathy in data storytelling ASSIGNMENT 1. Replicate the Lesson Code2. Take it further by using the same gapminder dataset to create a sequence of visualizations that combined tell a story of your choosing.Get creative! Use text annotations to call out specific countries, maybe: change how the points are colored, change the opacity of the points, change their sized, pick a specific time window. Maybe only work with a subset of countries, change fonts, change background colors, etc. make it your own!
###Code
# TODO
#!pip install --upgrade seaborn
import seaborn as sns
sns.__version__
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
income = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv')
lifespan = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv')
population = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv')
entities = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv')
concepts = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv')
income.shape, lifespan.shape, population.shape, entities.shape, concepts.shape
income.head()
income.sample(10)
lifespan.head()
lifespan.sample(10)
population.head()
population.sample(10)
entities.head()
concepts.head()
df_1 = pd.DataFrame({'Student': ['Peter', 'Zach', 'Jane'], 'Math Test Score': [75, 89, 82]})
df_1
df_2 = pd.DataFrame({'Student': ['Alice', 'Peter', 'Jane'], 'Biology Test Score': [78, 87, 90]})
df_2
pd.merge(df_1, df_2, on='Student', how='right')
df_3 = pd.DataFrame({'Name': ['Alice', 'Andrew', 'Jane'],
'Chemistry Test Scores': ['A', 'B', 'C']})
df_3
df_4 = pd.merge(df_2, df_3, left_on='Student', right_on='Name').drop(columns=['Student'])
df_4
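# (Illustrative addition) how='outer' with indicator=True keeps rows from both
# sides and shows which frame each row came from -- useful when checking why a
# merge drops rows.
pd.merge(df_1, df_2, on='Student', how='outer', indicator=True)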
income.shape
lifespan.shape
df = pd.merge(income, lifespan)
df.shape
df = pd.merge(income, lifespan, on=['geo', 'time'], how='inner')
df.shape
df.head()
df = pd.merge(df, population)
df.head()
entities.head()
subset_cols = ['country', 'name', 'world_4region', 'world_6region']
merged = pd.merge(df, entities[subset_cols], left_on='geo', right_on='country')
merged = merged.drop(columns=['geo'])
merged.head()
mapping_1 = {
'time': 'year',
'country': 'country_code',
'income_per_person_gdppercapita_ppp_inflation_adjusted': 'income',
'population_total': 'population',
'world_4region': '4region',
'world_6region': '6region',
'life_expectancy_years': 'lifespan'
}
mapping_2 = {'name': 'country'}
merged = merged.rename(columns=mapping_1)
merged = merged.rename(columns=mapping_2)
merged.head()
merged.dtypes
merged.describe()
merged.describe(exclude='number')
merged['country'].value_counts()
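# --- Illustrative sketch for Part 2 (not the official solution) ---
# A single-year, Rosling-style scatter: income vs. life expectancy,
# with point size proportional to population.
year_to_plot = 2018  # assumes this year is present in the merged data
subset = merged[merged['year'] == year_to_plot]
plt.figure(figsize=(10, 6))
plt.scatter(subset['income'], subset['lifespan'],
            s=subset['population'] / 1e6, alpha=0.5)
plt.xscale('log')
plt.xlabel('Income per person (GDP per capita, inflation adjusted)')
plt.ylabel('Life expectancy (years)')
plt.title('Income vs. life expectancy, {}'.format(year_to_plot))
plt.show()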
###Output
_____no_output_____
###Markdown
STRETCH OPTIONS 1. Animate!- [How to Create Animated Graphs in Python](https://towardsdatascience.com/how-to-create-animated-graphs-in-python-bb619cc2dec1)- Try using [Plotly](https://plot.ly/python/animations/)!- [The Ultimate Day of Chicago Bikeshare](https://chrisluedtke.github.io/divvy-data.html) (Lambda School Data Science student)- [Using Phoebe for animations in Google Colab](https://colab.research.google.com/github/phoebe-project/phoebe2-docs/blob/2.1/tutorials/animations.ipynb) 2. Study for the Sprint Challenge- Concatenate DataFrames- Merge DataFrames- Reshape data with `pivot_table()` and `.melt()`- Be able to reproduce a FiveThirtyEight graph using Matplotlib or Seaborn. 3. Work on anything related to your portfolio site / Data Storytelling Project
###Code
# TODO
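# --- Illustrative sketch for the reshape stretch goal (not the official solution) ---
# Wide table: one row per country, one column per year, values = life expectancy.
wide = merged.pivot_table(index='country', columns='year', values='lifespan')
# ...and back to long/tidy form with melt.
long_form = wide.reset_index().melt(id_vars='country', var_name='year', value_name='lifespan')
wide.head(), long_form.head()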
###Output
_____no_output_____
D6_EDA_欄位的資料類型介紹及處理/Day_006_HW.ipynb | ###Markdown
[Assignment goals]- Following the example, apply One Hot Encoding to the specified data- Read [One Hot Encoder vs Label Encoder](https://medium.com/@contactsunny/label-encoder-vs-one-hot-encoder-in-machine-learning-3fc273365621) [Assignment key points]- Apply One Hot Encoding to sub_train (In[4], Out[4])
###Code
import os
import numpy as np
import pandas as pd
# 設定 data_path, 並讀取 app_train
dir_data = '../data/'
f_app_train = os.path.join(dir_data, 'application_train.csv')
app_train = pd.read_csv(f_app_train)
###Output
_____no_output_____
###Markdown
Assignment: apply One Hot Encoding to the data subset sub_train below, and observe how the number of columns (using shape) and the column names (using head) change before and after the transformation.
###Code
sub_train = pd.DataFrame(app_train['WEEKDAY_APPR_PROCESS_START'])
print(sub_train.shape)
sub_train.head()
"""
Your Code Here
"""
sub_train = pd.get_dummies(sub_train)
sub_train.head(10)
sub_train.shape
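# --- Illustrative comparison (not part of the original homework) ---
# The linked reading contrasts One Hot Encoding with Label Encoding: a label
# encoding maps each weekday string to a single integer column instead of
# seven indicator columns.
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
labels = le.fit_transform(app_train['WEEKDAY_APPR_PROCESS_START'])
print(le.classes_)
print(labels[:10])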
###Output
_____no_output_____
Data wrangling - Duplicate transactions.ipynb | ###Markdown
Data Pre-processing
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from datetime import timedelta
df = pd.read_json('transactions.json',lines=True,orient='records')
# Replacing null values with NaN
df = df.replace(r'^\s*$', np.nan, regex=True)
# Dropping with features with zero entries
df.drop(['echoBuffer','merchantCity','merchantState','merchantZip','posOnPremises','recurringAuthInd'],
axis=1,inplace=True)
# Creating Date and Time Column
df['Date']=df['transactionDateTime'].apply(lambda x: x.split('T')[0])
df['Time']=df['transactionDateTime'].apply(lambda x: x.split('T')[1])
def concat(x):
y=str('T') + str(x)
return y
df.reset_index(drop=False,inplace=True)
df['transactionKey']=df['index'].apply(lambda x: concat(x))
df.drop(['index'],axis=1,inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Identifying Reversed transactions
###Code
df['transactionType'].isnull().sum()
#Replacing missing values in transactionType as 'PURCHASE'
df['transactionType']=df['transactionType'].fillna(value='PURCHASE')
# Creating Timestamp column
df.Date=pd.to_datetime(df.Date)
df['Timestamp']=pd.to_datetime(df.Date.astype(str)+' '+df.Time.astype(str))
# Sorting data in ascending order with respect to customerId, merchantName, Timestamp
data=df[['customerId','merchantName','transactionAmount','transactionType','transactionKey','Timestamp']]
data=data.sort_values(by=['customerId','merchantName','Timestamp'],ascending=True)
data.head()
# Removing transactions with ADDRESS_VERIFICATION type since, there transaction amount is $0
data=data.reset_index(drop=True)
data=data[~(data.transactionType=='ADDRESS_VERIFICATION')]
data=data.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Considering the purchase and reversal transaction are consecutive
###Code
reverse = data.loc[data['transactionType']=='REVERSAL']
reverse.head()
purchase_index = reverse.index.values.astype(int)-1
purchase = data.iloc[purchase_index]
purchase.head()
purchase_index=pd.DataFrame(purchase_index)
purchase_index.columns=['Purchase_Index']
reverse_index = reverse.index.values.astype(int)
reverse_index=pd.DataFrame(reverse_index)
reverse_index.columns=['Reverse_Index']
purchase_amount=pd.DataFrame(purchase['transactionAmount'].values)
purchase_amount.columns=['Purchase_Amount']
reverse_amount=pd.DataFrame(reverse['transactionAmount'].values)
reverse_amount.columns=['Reversal_Amount']
# Creating DataFrame to check reversal transactions
chk=pd.concat([purchase_index,reverse_index,purchase_amount,reverse_amount],axis=1)
chk.head()
chk['new'] = np.where((chk['Purchase_Amount'] == chk['Reversal_Amount']), True, False)
chk.head()
chk.new.value_counts()
###Output
_____no_output_____
###Markdown
By checking the consecutive transactions, I was able to track 12665 reversal transactions.
###Code
tracked = chk.loc[chk['new']==True]
###Output
_____no_output_____
###Markdown
Creating DataFrame for all tracked reversal transactions
###Code
track_index=np.concatenate((tracked['Purchase_Index'], tracked['Reverse_Index']))
track_reverse=data.iloc[track_index]
track_reverse=track_reverse.sort_values(by=['customerId','merchantName','Timestamp'],ascending=True)
track_reverse.head()
track_reverse.transactionType.value_counts()
###Output
_____no_output_____
###Markdown
I was able to track 12665 reverse transactions out of the 20303 transactions
###Code
track_reverse.groupby(['transactionType']).sum()['transactionAmount']
reverse.groupby(['transactionType']).sum()['transactionAmount']
###Output
_____no_output_____
###Markdown
The transaction amount of the tracked reversal transactions is $1,907,433.62. The overall transaction amount of reverse transactions is $2,821,792.50. 7638 transactions were not tracked, with a transaction amount of $914,358.88. Identifying Multi-swipe transactions To identify multi-swipes, I initially sort the DataFrame in ascending order with respect to customerId, merchantName and Timestamp (the sorted data is available from the previous section). Then, to identify multi-swipe transactions, I subset the data to all duplicate transactions and classify as multi-swipe those with a time difference of less than 180 seconds.
###Code
# Removing Reverse transactions
mst=data.reset_index(drop=True)
mst=mst[~(mst.transactionType=='REVERSAL')]
mst=mst.reset_index(drop=True)
mst.head()
k=mst.loc[:,['customerId','merchantName','transactionAmount','transactionType']]
k.head()
# Finding duplicate transactions
duplicate = k.duplicated(keep=False)
Duplicate=pd.DataFrame(duplicate)
Duplicate.columns=['Duplicate']
a=pd.concat([mst,Duplicate],axis=1)
a.head()
# Subsetting the data with duplicate transactions only
b = a.loc[a['Duplicate']==True].copy()
b.head()
# Calculating time difference between the transactions
b['difference']=b.Timestamp.diff()
b.head()
z = timedelta(0,0)
z
td = timedelta(0,180)
td
# Checking for timedifference less than 180 seconds
b['multi_swipe']=(b['difference']<td) & (b['difference']>z)
#Sub-setting data with multi-swipe transactions only
multi_swipe = b.loc[b['multi_swipe']==True]
multi_swipe.head()
multi_swipe.transactionType.value_counts()
# Calculating total mutli-swipe transaction amount leaving the first transaction
multi_swipe.sum()['transactionAmount']
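# --- Illustrative cross-check (not part of the original analysis) ---
# Computing the time difference within each (customerId, merchantName,
# transactionAmount) group avoids chance matches across group boundaries.
grouped_diff = b.groupby(['customerId', 'merchantName', 'transactionAmount'])['Timestamp'].diff()
grouped_multi_swipe = b[(grouped_diff > z) & (grouped_diff < td)]
grouped_multi_swipe['transactionAmount'].sum()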
###Output
_____no_output_____
docs/source/tutorials/2-creating-light-curves/2-3-how-to-use-cbvcorrector.ipynb | ###Markdown
Removing noise from Kepler, K2, and TESS light curves using Cotrending Basis Vectors (`CBVCorrector`) Cotrending Basis Vectors (CBVs) are generated in the PDC component of the Kepler/K2/TESS pipeline and are used to remove systematic trends in light curves. They are built from the most common systematic trends observed in each PDC Unit of Work (Quarter for Kepler, Campaign for K2 and Sector for TESS). Each Kepler and K2 module output and each TESS CCD has its own set of CBVs. You can read an introduction to the CBVs in [Demystifying Kepler Data](https://arxiv.org/pdf/1207.3093.pdf) or to greater detail in the [Kepler Data Processing Handbook](https://archive.stsci.edu/kepler/manuals/KSCI-19081-003-KDPH.pdf). The same basic method to generate CBVs is used for all three missions.This tutorial provides examples of how to utilize the various CBVs to clean lightcurves of common trends experienced by all targets. The technique exploits two goodness metrics that characterize the performance of the fit. [CBVCorrector](https://docs.lightkurve.org/reference/api/lightkurve.correctors.CBVCorrector.html) inherits the [RegressionCorrector](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.html?highlight=regressioncorrector) class in LightKurve. It is recommend to first read the tutorial on [obtaining the CBVs](https://docs.lightkurve.org/tutorials/2-creating-light-curves/2-2-how-to-use-cbvs.html) before reading this tutorial. Cotrending Basis Vector Types There are three basic types of CBVs: - **Single-Scale** contains all systematic trends combined in a single set of basis vectors. - **Multi-Scale** contains systematic trends in specific wavelet-based band passes. There are usually three sets of multi-scale basis vectors in three bands.- **Spike** contains only short impulsive spike systematics.There are two different correction methods in PDC: Single-Scale and Multi-Scale. Single-Scale performs the correction in a single bandpass. Multi-Scale performs the correction in three separate wavelet-based bandpasses. Both corrections are performed in PDC but we can only export a single PDC light curve for each target. So, PDC must choose which of the two to export on a per-target basis. Generally speaking, single-scale performs better at preserving longer period signals. But at periods close to transiting planet durations multi-scale performs better at preserving signals. PDC therefore mostly chooses multi-scale for use within the planet finding pipeline and for the archive. You can find in the light curve FITS header which PDC method was chosen (keyword “PDCMETHD”). Additionally, a seperate correction is alway performed to remove short impulsive systematic spikes.For an individual's research needs, the mission supplied PDC lightcurves might not be ideal and so the CBVs are provided to the user to perform their own correction. All three CBV types are provided at MAST for TESS, however only Single-Scale is provided at MAST for Kepler and K2. Also for Kepler and K2, Cotrending Basis Vectors are supplied for only the 30-minute target cadence. Obtaining the CBVs One can directly obtain the CBVs with `download_tess_cbvs` and `downlaod_kepler_cbvs`. However when generating a [CBVCorrector](https://docs.lightkurve.org/reference/api/lightkurve.correctors.CBVCorrector.html?highlight=cbvcorrector) object the appropriate CBVs are automatically downloaded from MAST and aligned to the lightcurve. Let's generate this object for a particularily interesting TESS variable target. 
We first download the SAP lightcurve.
###Code
from lightkurve import search_lightcurve
import numpy as np
import matplotlib.pyplot as plt
lc = search_lightcurve('TIC 99180739', author='SPOC', sector=10).download(flux_column='sap_flux')
###Output
_____no_output_____
###Markdown
Next, we create a `CBVCorrector` object. This will download the CBVs appropriate for this target and store them in the `CBVCorrector` object. In the case of TESS, this means the CBVs associated with the CCD this target is on and for Sector 10.
###Code
from lightkurve.correctors import CBVCorrector
cbvCorrector = CBVCorrector(lc)
###Output
_____no_output_____
###Markdown
Let's look at the CBVs downloaded.
###Code
cbvCorrector.cbvs
###Output
_____no_output_____
###Markdown
We see that there are a total of 5 sets of CBVs, all associated with TESS Sector 10, Camera 1 and CCD 1. The number of CBVs per type is also given. Let's plot the Single-Scale CBVs, which contain all systematics combined.
###Code
cbvCorrector.cbvs[0].plot();
###Output
_____no_output_____
###Markdown
The first several CBVs contain most of the systematics. The latter CBVs pose a greater risk of injecting more noise than helping. The default behavior in CBVCorrector is to use the first 8 CBVs. Assessing Over- and Under-Fitting with the Goodness Metrics Two very common issues when fitting a model to a data set it over-fitting and under-fitting. Over-fitting occurs when the model has too many degrees of freedom and fits the data _at all costs_, instead of just modelling the physical process it is attempting to model. This can exhibit itself in different ways, depending on the system and application. In the case of fitting systematic trend basis vectors to a time series, over-fitting can result in the basis vectors removing intrinsic signals in the times series instead of just the systematics. It can also result in introduced broad-band noise. This can be particularily prominant in an unconstrained least-squares fit. A least-squares fit only cares about minimizing it's loss function, which is the Root Mean Square error. In the course of minimizing the RMS, narrow-band power representing the systematic trends are exchanged for broad-band noise intrinsic in the basis vectors. This results in the overall RMS decreasing but the noise in the time series increasing, resulting in the obscuration of the signals under interest. A very common method to inhibit over-fitting is to introduce a regularization term in the loss function. This constrains the fit and effectively reduces the degrees of freedom.Under-fitting occurs when the model has too few degrees of freedom and fails to adequately model the physical process it is attempting to model. In the case of fitting systematic trend basis vectors to a time series, under-fitting can result in residual systematics. Under-fitting can either be the result of the basis vectors not adequately representing the systematics or, placing too great of a restriction on the model during fitting. The regularization technique used to inhibit over-fitting can therefore result in under-fitting. The ideal fit will balance the counter-acting phenomena of over- and under-fitting. To this end, a method can be developed to measure the degree to which these two phenomena occur.PDC has two **Goodness Metrics** to assess over- and under-fitting:- **Over-fitting metric**: Measures the introduced noise in the light curve after the correction. It does so by measuring the broad-band power spectrum via a Lomb-Scargle Periodogram both before and after the correction. If power has increased after the correction then this is an indication the CBV fit has over-fitted and introduced noise. The metric treats all frequencies equally when measuring power increase; from one frequency separation to the Nyquist frequency. This metric is callibrated such that a metric value of 0.5 means the introduced noise due to over-fitting is at the same power level as the uncertainties in the light curve.- **Under-fitting metric**: Measures the mean residual target to target Pearson correlation between the target under study and a selection of neighboring targets. This metric will find and download a selection of neighboring SPOC SAP targets in RA and Decl. until a minimum number is found. 
The metric is callibrated such that a value of 0.95 means the residual correlations in the target is equivalent to chance correlations of White Gaussian Noise._The Goodness Metrics are not perfect!_ They are an estimate of over- and under-fitting and are to be used as a guideline along other other metrics to assess the quality of your light curve. The Goodness Metrics are part of the `lightkurve.correctors.metrics` module and can be computed directly with calls to `overfit_metric_lombscargle` and `underfit_metric_neighbors`. The 'CBVCorrector' has convenience wrappers for the two metrics and so they do not need to be called directly, as we will show below. Example Correction with CBVCorrector to Inhibit Over-Fitting There are four correction methods within `CBVCorrector`:- **correct**: Performs a numerical correction using the LightKurve [RegressionCorrector.correct](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.correct.html?highlight=regressioncorrector%20correctlightkurve.correctors.RegressionCorrector.correct) method while optimizing the L2-Norm regularization penalty term using the goodness metrics as the Loss Function.- **correct_gaussian_prior**: Performs an analytical correction using the LightKurve [RegressionCorrector.correct](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.correct.html?highlight=regressioncorrector%20correctlightkurve.correctors.RegressionCorrector.correct) method while setting the L2-Norm (Ridge Regression) regularization penalty term as the Gaussian prior.- **correct_elasticnet**: Performs the correction using Scikit-Learn's [ElasticNet](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html), which combines both an L1- and L2-Norm regularization.- **correct_regressioncorrector**: Performs the standard [RegressionCorrector.correct](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.correct.html?highlight=regressioncorrector%20correctlightkurve.correctors.RegressionCorrector.correct) correction. RegressionCorrector is the superclass to CBVCorrector and this is a "passthrough" method to access the superclass `correct` method.If you are unfamilar with L1-Norm (LASSO) and L2-Norm (Ridge Regression) regularization then there are [severel](https://www.statlearning.com/), [excellent](https://press.princeton.edu/books/hardcover/9780691198309/statistics-data-mining-and-machine-learning-in-astronomy), [introductions](https://en.wikipedia.org/wiki/Regularization_(mathematics)). You can read how a L2-norm relates to a Gaussian prior in a linear design matrix in [this reference](https://katbailey.github.io/post/from-both-sides-now-the-math-of-linear-regression/).The default method is `correct` and generally speaking, one can use just this method to obtain a good fit. The other methods are for advanced usage.We'll start with `correct_gaussian_prior` in order to introduce the concepts. Doing so will allow us to force a very weak regularization term (alpha=1e-4) as an illustration.
###Code
# Select which CBVs to use in the correction
cbv_type = ['SingleScale', 'Spike']
# Select which CBV indices to use
# Use the first 8 SingleScale and all Spike CBVS
cbv_indices = [np.arange(1,9), 'ALL']
# Perform the correction
cbvCorrector.correct_gaussian_prior(cbv_type=cbv_type, cbv_indices=cbv_indices, alpha=1e-4)
cbvCorrector.diagnose();
###Output
_____no_output_____
###Markdown
First note that CBVCorrector always fits a constant term in the model, but the constant is never subtracted in the resultant corrected flux. The median flux value of the light curve is always preserved.At first sight, this looks like a good correction. Both the Single-Scale and Spike basis vectors are being utilized to fit out as much of the signal as possible. The corrected light curve is indeed flatter. But this was essentially an unrestricted least-squares correction and we may have _over-fitted_. The very strong lips right at the beginning of each orbit is probably a chance correlation between the star's inherent stellar variability and the thermal settling systematic that is common in Kepler and TESS lightcurves. Let's look at the CBVCorrector goodness metrics to determine if this is the case.
###Code
# Note: this cell will be slow to run
print('Over fitting Metric: {}'.format(cbvCorrector.over_fitting_metric()))
print('Under fitting Metric: {}'.format(cbvCorrector.under_fitting_metric()))
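# The same numbers can also be computed without the convenience wrappers.
# This is a sketch: the function names below come from the metrics module noted
# above, but the exact arguments are assumptions -- check their docstrings.
from lightkurve.correctors.metrics import overfit_metric_lombscargle, underfit_metric_neighbors
print(overfit_metric_lombscargle(cbvCorrector.lc, cbvCorrector.corrected_lc))
print(underfit_metric_neighbors(cbvCorrector.corrected_lc))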
###Output
_____no_output_____
###Markdown
The first time you run the under-fitting goodness metric it will download the SPOC SAP light curves of targets in the neighborhood around the target under study in order to estimate the residual systematics (they are stored in the CBVCorrector object for subsequent computations). A goodness metric of 0.8 or above is generally considered good. In this case, it looks like we over-fitted (over-fitting metric = 0.71). Even though the corrected light curve looks better, our metric is telling us we probably injected signals into our lightcurve and we should not trust this really nice looking curve. Perhaps we can do better if we _regularize_ the fit. Using the Goodness Metrics to Optimize the Fit We will start by performing a scan of the over- and under-fit goodness metrics as a function of the L2-Norm regularization term, alpha.
###Code
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
###Output
_____no_output_____
###Markdown
This scan also plots the last used alpha parameter as a vertical black line (alpha=1e-4 in our case). We are clearly not optimizing this fit for both over- and under-fitting. Let's use the `correct` numerical optimizer to try to optimize the fit.
###Code
cbvCorrector.correct(cbv_type=cbv_type, cbv_indices=cbv_indices);
cbvCorrector.diagnose();
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
###Output
_____no_output_____
###Markdown
Much better! We see the thermal settling systematic is still being removed, but the stellar variability is better preserved. Note that the optimizer did not set the alpha parameter at exactly the red and blue curve intersection point. The default target goodness score is 0.8 or above, which is fulfilled at alpha=1.45e-1. If we want to optimize the fit even more, by perhaps ensuring we are not over-fitting at all, then we can adjust the target over and under scores to emphasize which metric we are more interested in. Below we more greatly emphasize improving the over-fitting metric by setting its target to 0.9.
###Code
cbvCorrector.correct(cbv_type=cbv_type,
cbv_indices=cbv_indices,
target_over_score=0.9,
target_under_score=0.5)
cbvCorrector.diagnose();
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
###Output
_____no_output_____
###Markdown
We are now perhaps biasing too far towards under-fitting, but depending on your research interests, this might be best. No Single Best Answer Example Let's now look at another example, this time where there is no clear single best answer. Again, we will use the Single-Scale and Spike basis vectors for the correction and begin with low regularization.
###Code
lc = search_lightcurve('TIC 38574307', author='SPOC', sector=2).download(flux_column='sap_flux')
cbvCorrector = CBVCorrector(lc)
cbv_type = ['SingleScale', 'Spike']
cbv_indices = [np.arange(1,9), 'ALL']
cbvCorrector.correct_gaussian_prior(cbv_type=cbv_type, cbv_indices=cbv_indices, alpha=1e-4)
cbvCorrector.diagnose();
###Output
_____no_output_____
###Markdown
At first sight, this looks good. The long term trends have been removed and the periodic noisy bits have been removed with the spike basis vectors. But did we really do a good job?
###Code
print('Over fitting Metric: {}'.format(cbvCorrector.over_fitting_metric()))
print('Under fitting Metric: {}'.format(cbvCorrector.under_fitting_metric()))
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
###Output
_____no_output_____
###Markdown
Hmm... The over-fitting goodness metric says we are severely over-fitting. Not only that, there appears to not be an Alpha parameter that brings both goodness metrics above 0.8. And yet, the fit looks really good. What's going on here? Let's zoom in on the correction.
###Code
pltAxis = cbvCorrector.diagnose()
pltAxis[0].set_xlim(1360.5, 1361.1)
pltAxis[0].set_ylim(1.4314e7, 1.4326e7);
pltAxis[1].set_xlim(1360.5, 1361.1)
pltAxis[1].set_ylim(1.431e7, 1.4326e7);
###Output
_____no_output_____
###Markdown
We see in the top plot that the _SingleScale_ correction has comparable noise to the _original_ light curve. This means the correction is injecting high frequency noise at a comparable amplitude to the original signal. We have indeed over-fitted! The goodness metrics perform a _broad-band_ analysis of over- and under-fitting. Even though our eyes did not see the high frequency noise injection, the goodness metrics did. So, what should be done? It depends on what you are trying to investigate. If you are only looking at the low frequency signals in the data then perhaps you don't care about the high frequency noise injection. If you really do care about the high frequency signals then you should increase the Alpha parameter, or set the target goodness scores as we do below (target_over_score=0.8, target_under_score=0.5).
###Code
# Optimize the fit but overemphasize the importance of not over-fitting.
cbvCorrector.correct(cbv_type=cbv_type,
cbv_indices=cbv_indices,
target_over_score=0.8,
target_under_score=0.5)
cbvCorrector.diagnose()
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
# Again, zoom in to see the detail
pltAxis = cbvCorrector.diagnose()
pltAxis[0].set_xlim(1360.5, 1361.1)
pltAxis[0].set_ylim(1.4314e7, 1.4326e7);
pltAxis[1].set_xlim(1360.5, 1361.1)
pltAxis[1].set_ylim(1.431e7, 1.4326e7);
###Output
_____no_output_____
###Markdown
We see now that the high frequency noise injection is small compared to the original amplitudes in the lightkurve. We barely removed any of the systematics and are now under-fitting, but that might be the best we can do if we want to ensure low noise injection. ...Or Can We Still Do Better? Perhaps we are using the incorrect CBVs for this target. Below is a tuned multi-step fit where we first fit the multi-scale Band 2 CBVs then the Spike CBVs. The multi-scale band 2 CBVs contain intermediate frequency systematic signals. They should not inject high frequency noise. We also utilize the `correct_elasticnet` corrector, which allows us to add in a L1-Norm term (Lasso Regularization). L1-Norm helps snap some basis vector fit coefficients to zero and can result in a more stable, less noisy fit. The result is a much better compromise between over- and under-fitting. The spikes are not well removed but increasing the weight on the Spike CBV removal results in over-fitting. We can also try the multi-scale band 3 CBVs, which contain high frequency systematics, but the over-fitting metric indicates using them results in even greater over-fitting. The resultant is now much better than what we achieved above but more tuning and optimization could possibly get us even closer to an ideal fit.
###Code
# Fit to the Multi-Scale Band 2 CBVs with ElasticNet to add in a L1-Norm (Lasso) term
cbvCorrector.correct_elasticnet(cbv_type=['MultiScale.2'], cbv_indices=[np.arange(1,9)], alpha=1.0e-7, l1_ratio=0.5)
ax = cbvCorrector.diagnose()
ax[0].set_title('Result of First Correction to MultiScale.2 CBVs');
# Set the corrected LC as the initial LC in a new CBVCorrector object before moving to the next correction.
# You could instead just reassign to the first cbvCorrector object, if you do not wish to save the original.
cbvCorrectorIter2 = cbvCorrector.copy()
cbvCorrectorIter2.lc = cbvCorrectorIter2.corrected_lc.copy()
# Fit to the Spike Basis Vectors, using an L1-Norm term.
cbvCorrectorIter2.correct_elasticnet(cbv_type=['Spike'], cbv_indices=['ALL'], alpha=2.0e-5, l1_ratio=0.7)
ax = cbvCorrectorIter2.diagnose()
ax[0].set_title('Result of Second Correction to Spike CBVs');
# Compute the final goodness metrics compared to the original lightcurve.
# This requires us to copy the original light curve into cbvCorrectorIter2.lc so that the goodness metrics compare the corrected_lc to the proper initial light curve.
cbvCorrectorIter2.lc = cbvCorrector.lc.copy()
print('Over-fitting Metric: {}'.format(cbvCorrectorIter2.over_fitting_metric()))
print('Under-fitting Metric: {}'.format(cbvCorrectorIter2.under_fitting_metric()))
# Plot the final correction
_, ax = plt.subplots(1, figsize=(10, 6))
cbvCorrectorIter2.lc.plot(ax=ax, normalize=False, alpha=0.2, label='Original')
cbvCorrectorIter2.corrected_lc[~cbvCorrectorIter2.cadence_mask].scatter(
normalize=False, c='r', marker='x',
s=10, label='Outliers', ax=ax)
cbvCorrectorIter2.corrected_lc.plot(normalize=False, label='Corrected', ax=ax, c='k')
ax.set_title('Comparison between original and final corrected lightcurve');
###Output
_____no_output_____
###Markdown
So, which CBVs are best to use? There is no one single answer, but generally speaking, the Multi-Scale Basis vectors are more versatile. The trade-off is there are also more of them, which means more degrees of freedom in your fit. More degrees of freedom can result in more over-fitting without proper regularization. It is recommended that the user try different combinations of CBVs and use objective metrics to decide which fit is best for their particular needs. Using the Goodness Metrics and CBVCorrector with other Design Matrices The Goodness Metrics and CBVCorrector can also be used in conjunction with other external design matrices. Let's work on a famous planet example to show how the CBVCorrector can be utilized to improve the generated light curve. We will begin by using [search_tesscut](https://docs.lightkurve.org/reference/api/lightkurve.search_tesscut.html?highlight=search_tesscut) to extract an FFI light curve for HAT-P 11 and then create a DesignMatrix using the background pixels.
###Code
# HAT-P 11b
from lightkurve import search_tesscut
from lightkurve.correctors import DesignMatrix
search_result = search_tesscut('HAT-P-11', sector=14)
tpf = search_result.download(cutout_size=20)
# Create a simple thresholded aperture mask
aper = tpf.create_threshold_mask(threshold=15, reference_pixel='center')
# Generate a simple aperture photometry light curve
raw_lc = tpf.to_lightcurve(aperture_mask=aper)
# Create a design matrix using PCA components from the cutout background
dm = DesignMatrix(tpf.flux[:, ~aper], name='pixel regressors').pca(5).append_constant()
###Output
_____no_output_____
###Markdown
The [DesignMatrix](https://docs.lightkurve.org/reference/api/lightkurve.correctors.DesignMatrix.html?highlight=designmatrixlightkurve.correctors.DesignMatrix) `dm` now contains the common trends in the background pixels in the data. We will first try to fit the pixel-based design matrix using an unrestricted least-squares fit (I.e. a very weak regularization by setting alpha to a small number). We tell CBVCorrector to only use the external design matrix with `ext_dm=`. When we generate the CBVCorrector object the CBVs will be downloaded, but the CBVs are for 2-minute cadence and not the 30-minute FFIs. We therefore use the `interpolate_cbvs=True` option to tell the CBVCorrector to interpolate the CBVs to the light curve cadence.
###Code
# Generate the CBVCorrector object and interpolate the downloaded CBVs to the light curve cadence
cbvcorrector = CBVCorrector(raw_lc, interpolate_cbvs=True)
# Perform an unrestricted least-squares fit using only the pixel-derived design matrix.
cbvcorrector.correct_gaussian_prior(cbv_type=None, cbv_indices=None, ext_dm=dm, alpha=1e-4)
cbvcorrector.diagnose()
print('Over-fitting metric: {}'.format(cbvcorrector.over_fitting_metric()))
print('CDPP: {}'.format(cbvcorrector.corrected_lc.estimate_cdpp()))
corrected_lc_just_pixel_dm = cbvcorrector.corrected_lc
###Output
_____no_output_____
###Markdown
The least-squares fit did remove the background flux trend and at first sight the resultant might look good, but the over-fitting goodness metric is `0.08`. That's not very good! It looks like we are dramatically over-fitting. We can see this in the bottom plot where the corrected curve has more high-frequency noise than the original. Let's now add in the multi-scale basis vectors and see if we can do better. Note that we are joint fitting the CBVs and the external pixel-derived design matrix.
###Code
cbv_type = ['MultiScale.1', 'MultiScale.2', 'MultiScale.3','Spike']
cbv_indices = [np.arange(1,9), np.arange(1,9), np.arange(1,9), 'ALL']
cbvcorrector.correct_gaussian_prior(cbv_type=cbv_type, cbv_indices=cbv_indices, ext_dm=dm, alpha=1e-4)
cbvcorrector.diagnose()
print('Over-fitting metric: {}'.format(cbvcorrector.over_fitting_metric()))
print('CDPP: {}'.format(cbvcorrector.corrected_lc.estimate_cdpp()))
corrected_lc_joint_fit = cbvcorrector.corrected_lc
###Output
_____no_output_____
###Markdown
That looks a lot better! Could we do a bit better by adding in a regularization term? Let's do a goodness metric scan.
###Code
cbvcorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices, ext_dm=dm);
###Output
_____no_output_____
###Markdown
There are a couple of observations to make here. First, the under-fitting metric has a very good score throughout the regularization scan. This is because the under-fitting metric compares the corrected light curve to neighboring targets in RA and Decl. that are archived as 2-minute SAP-flux targets. _The SAP flux already has the background removed_ so the neighboring targets do not contain the very large background trends. The under-fitting metric is therefore not very helpful. In the next run we will disable the under-fitting metric in the optimization (by setting target_under_score=-1). We see the over-fitting metric is not a simple function of the regularization factor _alpha_. This can happen due to the interaction of the various basis vectors during fitting when regularization is applied. We see a minimum (most over-fitting) at about alpha=1e-1. Once alpha moves above this value we begin to over-constrain the fit, which results in gradually less removal of systematics. The under-fitting metric should be an indicator that we are going too far in constraining the fit, and indeed, we do see the under-fitting metric degrade slightly beginning at alpha=1e-1. We will now try to optimize the fit and account for these two issues by 1) setting bounds on the alpha parameter (`alpha_bounds=[1e-6, 1e-2]`) and 2) disregarding the under-fitting metric (`target_under_score=-1`).
###Code
# Optimize the fit but ignore the under-fitting metric and set bounds on the alpha parameter.
cbvcorrector.correct(cbv_type=cbv_type, cbv_indices=cbv_indices, ext_dm=dm, alpha_bounds=[1e-6, 1e-2], target_over_score=0.8, target_under_score=-1)
cbvcorrector.diagnose();
print('CDPP: {}'.format(cbvcorrector.corrected_lc.estimate_cdpp()))
###Output
_____no_output_____
###Markdown
This is looking like a pretty good light curve. However, the CDPP increased a little as we optimized the over-fitting metric. Which correction to use may depend on your application. Since we are interested in the transiting planet, we will choose the corrected light curve with the lowest CDPP. Below we compare the light curves from using just the pixel-derived design matrix versus also adding in the CBVs as a joint, regularized fit.
###Code
_, ax = plt.subplots(3, figsize=(10, 6))
cbvcorrector.lc.plot(ax=ax[0], normalize=False, label='Uncorrected LC', c='k')
corrected_lc_just_pixel_dm.plot(normalize=False, label='Pixel-Level Corrected; CDPP={0:.1f}'.format(corrected_lc_just_pixel_dm.estimate_cdpp()), ax=ax[1], c='m')
corrected_lc_joint_fit.plot(normalize=False, label='Joint Fit Corrected; CDPP={0:.1f}'.format(corrected_lc_joint_fit.estimate_cdpp()), ax=ax[2], c='b')
ax[0].set_title('Comparison Between original and final corrected lightcurve');
###Output
_____no_output_____
###Markdown
The superiority of the bottom curve is blatantly obvious. We can clearly see HAT-P 11b on it's 4.9 day period orbit. Our over-fitting metric settled at about 0.35 indicating we might still be over-fitting and should keep that in mind. However the low CDPP indicates the over-fitting is probably not over transit time-scales. Some final comments on CBVCorrector Application to Kepler vs K2 vs TESS CBVCorrector works equally across Kepler, K2 and TESS. However the Multi-Scale and Spike basis vectors are only available for TESS[1](fn1). For K2, the [PLDCorrector](https://docs.lightkurve.org/tutorials/2-creating-light-curves/2-3-k2-pldcorrector.html) and [SFFCorrector](https://docs.lightkurve.org/tutorials/2-creating-light-curves/2-3-k2-sffcorrector.html) classes might work better than `CBVCorrector`.If you want to just get the CBVs but not generate a CBVCorrector object then use the functions _download_kepler_cbvs_ and _download_tess_cbvs_ within the cbvcorrector module as explained [here](https://docs.lightkurve.org/tutorials/2-creating-light-curves/2-2-how-to-use-cbvs.html). 1 Unfortunately, the Multi-Scale and Spike CBVs are not archived at MAST for Kepler/K2. Applicability of the Over- and Under-fitting Goodness Metrics The under-fitting metric computes the correlation between the corrected light curve and a selection of neighboring SPOC SAP light curves. If the light curve you are trying to correct was not generated by the SPOC pipeline (I.e. not a SAP light curve), then the neighboring SAP light curves might not contain the same instrumental systematics and the under-fitting metric might not properly measure when under-fitting is occuring. The over-fitting metric examines the periodogram of the light curve before and after the correction and is therefore indifferent to how the light curve was generated. It simply looks to see if noise was injected into the light curve. The over-fitting metric is therefore much more generally applicable.The Goodness Metrics are part of the `lightkurve.correctors.metrics` module and can be computed directly with calls to `overfit_metric_lombscargle` and `underfit_metric_neighbors`. A savvy expert user can use these and other quality metrics to generate their own Loss Function for optimizing a fit. Aligning versus Interpolating CBVs By default, all loaded CBVS in `CBVCorrector` are "aligned" to the light curve cadence numbers (`CBVCorrector.lc.cadenceno`). This means only cadence numbers that exist in both the CBVs and the light curve will have values in the returned CBVs. All cadence numbers that exist in the light curve but not in the CBVs will have NaNs returned for the CBVs on those cadences and the Gap Indicator set to True. Any cadences in the CBVs not in the light curve will be removed from the CBVs.If the light curve cadences do not overlap well with the CBVs then you can set `interpolate_cbvs=True` when generating the `CBVCorrector` object. Doing so will generate interpolated CBV values for all cadences in the light curve. If the light curve has cadences past either end of the cadences in the CBVs then one must extrapolate. A second argument, `extrapolate_cbvs`, can be used to also extrapolate the CBV values to the light curve cadences. If `extrapolate_cbvs=False` then the exterior values are set to NaNs, which will probably result is a very poor fit.**Warning**: *The safest method is to align*. This will not generate any new values for the CBVs. Interpolation can be potentially dangerous. 
Interpolation uses Piecewise Cubic Hermite Interpolating Polynomial (PCHIP), which can be more stable than a simple spline, but no interpolation method works in all situations. Extrapolation is even more dangerous, which is why an extra parameter must be set if one desires to extrapolate. *Be sure to manually examine the extrapolated CBVs before use!* Joint Fitting By including the `ext_dm=` parameter in the `correct_*` methods we allow for joint fitting between the CBVs and other design matrices. Generally speaking, if fitting a collection of different models to a system, joint fitting is ideal. For example, if performing transit analysis one could add in a transit model to the joint fit to get the best transit recovery. The fit coefficient to the transit model is stored in the `CBVCorrector` object after fitting and can be recovered. Hyperparameter Optimization Any model fitting should include a hyperparameter optimization step. The optimizer used by `correct` is essentially a 1-dimensional optimizer and is very fast. More advanced hyperparameter optimization can be performed by tuning the `alpha` and `l1_ratio` parameters in `correct_elasticnet` plus the number and type of CBVs, along with an external design matrix. The optimization Loss Function can use a combination of the `under_fitting_metric`, `over_fitting_metric` and `lc.estimate_cdpp` methods. Writing such an optimizer is left as an exercise for the reader, to be tuned to the reader's particular application; a minimal sketch of one approach is given at the end of this tutorial. More Generalized Design Matrix Priors The main [CBVCorrector.correct*](https://docs.lightkurve.org/reference/api/lightkurve.correctors.CBVCorrector.html?highlight=cbvcorrector%20correclightkurve.correctors.CBVCorrector) methods utilize a similar prior for all design matrix vectors as is typically used in L1-Norm and L2-Norm regularization. However, you can fine-tune the correction using more sophisticated priors. After performing a fit with one of the `CBVCorrector.correct*` methods, `CBVCorrector.design_matrix_collection` will have the priors set. One can then manually adjust the priors and use `CBVCorrector.correct_regressioncorrector` to perform the standard [RegressionCorrector.correct](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.correct.html?highlight=regressioncorrector%20correctlightkurve.correctors.RegressionCorrector.correct) correction. An illustration is below:
###Code
# 1) Perform an initial optimization with a L2-Norm regularization
# cbvCorrector.correct(cbv_type=cbv_type, cbv_indices=cbv_indices);
# 2) Examine the quality of the resultant lightcurve in cbvcorrector.corrected_lc
# Determine how to adjust the priors and make changes to the design matrix
# cbvCorrector.design_matrix_collection[i].prior_sigma[j] = # ... adjust the priors
# 3) Call the superclass correct method with the adjusted design_matrix_collection
# cbvCorrector.correct_regressioncorrector(cbvCorrector.design_matrix_collection, **kwargs)
###Output
_____no_output_____
###Markdown
The `cbvCorrector.corrected_lc` will now be the result of the fit using whatever `cbvCorrector.design_matrix_collection` you had just provided. NaNs, Units and Normalization NaNs are removed from the light curve when it is used to generate the `CBVCorrector` object, and the cleaned light curve is stored in `CBVCorrector.lc`. The CBVCorrector performs its corrections in absolute flux units (typically electrons per second). The returned corrected light curve `corrected_lc` is also in absolute units and the median flux of the light curve is preserved.
###Code
print('LC unit: {}'.format(cbvCorrector.lc.flux.unit))
print('Corrected LC unit: {}'.format(cbvCorrector.corrected_lc.flux.unit))
###Output
_____no_output_____
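###Markdown
We can also quickly verify the claim that the median flux is preserved. A minimal sketch:
###Code
import numpy as np
# The median of the corrected flux should be (very nearly) equal to the median of the input flux.
print('Median input flux:     {}'.format(np.nanmedian(cbvCorrector.lc.flux)))
print('Median corrected flux: {}'.format(np.nanmedian(cbvCorrector.corrected_lc.flux)))
###Output
_____no_output_____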
###Markdown
The goodness metrics are computed using median normalized units in order to properly calibrate the metric to be between 0.0 and 1.0 for all light curves. The normalization is as follows:
###Code
normalized_lc = cbvCorrector.lc.normalize()
normalized_lc -= 1.0
print('Normalized light curve units: {} (i.e. astropy.units.dimensionless_unscaled)'.format(normalized_lc.flux.unit))
###Output
_____no_output_____
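###Markdown
To close, here is the minimal hyperparameter-scan sketch promised earlier. It assumes the `cbvCorrector` object and CBV selections used earlier in this tutorial, and the way the three terms are weighted in the score is an arbitrary illustrative choice, not a recommendation.
###Code
import numpy as np

# A minimal sketch of a custom hyperparameter scan that uses the goodness metrics and CDPP
# as the loss function. Tune the weighting to your own application.
scan_cbv_type = ['SingleScale', 'Spike']
scan_cbv_indices = [np.arange(1, 9), 'ALL']
best_alpha, best_score = None, -np.inf
for alpha in np.logspace(-4, 4, 9):
    cbvCorrector.correct_gaussian_prior(cbv_type=scan_cbv_type,
                                        cbv_indices=scan_cbv_indices,
                                        alpha=alpha)
    over = cbvCorrector.over_fitting_metric()
    under = cbvCorrector.under_fitting_metric()  # neighboring targets are cached after the first call
    cdpp = cbvCorrector.corrected_lc.estimate_cdpp()
    cdpp = getattr(cdpp, 'value', cdpp)  # handle Quantity or plain float return values
    score = over + under - 0.001 * cdpp  # arbitrary weighting for illustration only
    if score > best_score:
        best_alpha, best_score = alpha, score
print('Best alpha from this coarse scan: {}'.format(best_alpha))
###Output
_____no_output_____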
###Markdown
Removing noise from Kepler, K2, and TESS light curves using Cotrending Basis Vectors (`CBVCorrector`) Cotrending Basis Vectors (CBVs) are generated in the PDC component of the Kepler/K2/TESS pipeline and are used to remove systematic trends in light curves. They are built from the most common systematic trends observed in each PDC Unit of Work (Quarter for Kepler, Campaign for K2 and Sector for TESS). Each Kepler and K2 module output and each TESS CCD has its own set of CBVs. You can read an introduction to the CBVs in [Demystifying Kepler Data](https://arxiv.org/pdf/1207.3093.pdf) or in greater detail in the [Kepler Data Processing Handbook](https://archive.stsci.edu/kepler/manuals/KSCI-19081-003-KDPH.pdf). The same basic method to generate CBVs is used for all three missions. This tutorial provides examples of how to utilize the various CBVs to clean lightcurves of common trends experienced by all targets. The technique exploits two goodness metrics that characterize the performance of the fit. [CBVCorrector](https://docs.lightkurve.org/reference/api/lightkurve.correctors.CBVCorrector.html?highlight=cbvcorrector) inherits the [RegressionCorrector](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.html?highlight=regressioncorrector) class in LightKurve. It is recommended to first read the tutorial on [obtaining the CBVs](https://docs.lightkurve.org/tutorials/2-creating-light-curves/2-2-how-to-use-cbvs.html) before reading this tutorial. Cotrending Basis Vector Types There are three basic types of CBVs: - **Single-Scale** contains all systematic trends combined in a single set of basis vectors. - **Multi-Scale** contains systematic trends in specific wavelet-based band passes. There are usually three sets of multi-scale basis vectors in three bands. - **Spike** contains only short impulsive spike systematics. There are two different correction methods in PDC: Single-Scale and Multi-Scale. Single-Scale performs the correction in a single bandpass. Multi-Scale performs the correction in three separate wavelet-based bandpasses. Both corrections are performed in PDC but we can only export a single PDC light curve for each target. So, PDC must choose which of the two to export on a per-target basis. Generally speaking, single-scale performs better at preserving longer period signals. But at periods close to transiting planet durations multi-scale performs better at preserving signals. PDC therefore mostly chooses multi-scale for use within the planet finding pipeline and for the archive. You can find in the light curve FITS header which PDC method was chosen (keyword “PDCMETHD”). Additionally, a separate correction is always performed to remove short impulsive systematic spikes. For an individual's research needs, the mission-supplied PDC lightcurves might not be ideal and so the CBVs are provided to the user to perform their own correction. All three CBV types are provided at MAST for TESS; however, only Single-Scale is provided at MAST for Kepler and K2. Also for Kepler and K2, Cotrending Basis Vectors are supplied for only the 30-minute target cadence. Obtaining the CBVs One can directly obtain the CBVs with `download_tess_cbvs` and `download_kepler_cbvs`. However, when generating a [CBVCorrector](https://docs.lightkurve.org/reference/api/lightkurve.correctors.CBVCorrector.html?highlight=cbvcorrector) object the appropriate CBVs are automatically downloaded from MAST and aligned to the lightcurve. 
Let's generate this object for a particularly interesting TESS variable target. We first download the SAP lightcurve.
###Code
from lightkurve import search_lightcurve
import numpy as np
import matplotlib.pyplot as plt
lc = search_lightcurve('TIC 99180739', author='SPOC', sector=10).download(flux_column='sap_flux')
###Output
_____no_output_____
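###Markdown
As noted above, the CBVs can also be downloaded directly, without constructing a `CBVCorrector`. The sketch below is illustrative only: the keyword names and the Camera/CCD values are assumptions (this target turns out to fall on Camera 1, CCD 1, as we will see shortly), so consult the CBV how-to tutorial linked above for the exact call in your lightkurve version.
###Code
from lightkurve.correctors import download_tess_cbvs
# Directly fetch one set of CBVs from MAST (argument names assumed; adjust to your target).
cbvs_direct = download_tess_cbvs(sector=10, camera=1, ccd=1, cbv_type='SingleScale')
cbvs_direct
###Output
_____no_output_____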
###Markdown
Next, we create a `CBVCorrector` object. This will download the CBVs appropriate for this target and store them in the `CBVCorrector` object. In the case of TESS, this means the CBVs associated with the CCD this target is on and for Sector 10.
###Code
from lightkurve.correctors import CBVCorrector
cbvCorrector = CBVCorrector(lc)
###Output
_____no_output_____
###Markdown
Let's look at the CBVs downloaded.
###Code
cbvCorrector.cbvs
###Output
_____no_output_____
###Markdown
We see that there are a total of 5 sets of CBVs, all associated with TESS Sector 10, Camera 1 and CCD 1. The number of CBVs per type is also given. Let's plot the Single-Scale CBVs, which contain all systematics combined.
###Code
cbvCorrector.cbvs[0].plot();
###Output
_____no_output_____
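###Markdown
Each entry of `cbvCorrector.cbvs` can be plotted in the same way. As a quick sketch, the cell below plots the last set in the list, which for this TESS target should be the Spike CBVs (check the listing above to confirm the ordering for your own target).
###Code
# Plot another of the downloaded CBV sets for comparison.
cbvCorrector.cbvs[-1].plot();
###Output
_____no_output_____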
###Markdown
The first several CBVs contain most of the systematics. The latter CBVs pose a greater risk of injecting more noise than helping. The default behavior in CBVCorrector is to use the first 8 CBVs. Assessing Over- and Under-Fitting with the Goodness Metrics Two very common issues when fitting a model to a data set are over-fitting and under-fitting. Over-fitting occurs when the model has too many degrees of freedom and fits the data _at all costs_, instead of just modelling the physical process it is attempting to model. This can exhibit itself in different ways, depending on the system and application. In the case of fitting systematic trend basis vectors to a time series, over-fitting can result in the basis vectors removing intrinsic signals in the time series instead of just the systematics. It can also result in introduced broad-band noise. This can be particularly prominent in an unconstrained least-squares fit. A least-squares fit only cares about minimizing its loss function, which is the Root Mean Square error. In the course of minimizing the RMS, narrow-band power representing the systematic trends is exchanged for broad-band noise intrinsic in the basis vectors. This results in the overall RMS decreasing but the noise in the time series increasing, resulting in the obscuration of the signals of interest. A very common method to inhibit over-fitting is to introduce a regularization term in the loss function. This constrains the fit and effectively reduces the degrees of freedom. Under-fitting occurs when the model has too few degrees of freedom and fails to adequately model the physical process it is attempting to model. In the case of fitting systematic trend basis vectors to a time series, under-fitting can result in residual systematics. Under-fitting can either be the result of the basis vectors not adequately representing the systematics or of placing too great a restriction on the model during fitting. The regularization technique used to inhibit over-fitting can therefore result in under-fitting. The ideal fit will balance the counter-acting phenomena of over- and under-fitting. To this end, a method can be developed to measure the degree to which these two phenomena occur. PDC has two **Goodness Metrics** to assess over- and under-fitting: - **Over-fitting metric**: Measures the introduced noise in the light curve after the correction. It does so by measuring the broad-band power spectrum via a Lomb-Scargle Periodogram both before and after the correction. If power has increased after the correction then this is an indication the CBV fit has over-fitted and introduced noise. The metric treats all frequencies equally when measuring the power increase, from one frequency separation to the Nyquist frequency. This metric is calibrated such that a metric value of 0.5 means the introduced noise due to over-fitting is at the same power level as the uncertainties in the light curve. - **Under-fitting metric**: Measures the mean residual target-to-target Pearson correlation between the target under study and a selection of neighboring targets. This metric will find and download a selection of neighboring SPOC SAP targets in RA and Decl. until a minimum number is found. 
The metric is calibrated such that a value of 0.95 means the residual correlations in the target are equivalent to chance correlations of White Gaussian Noise. _The Goodness Metrics are not perfect!_ They are an estimate of over- and under-fitting and are to be used as a guideline along with other metrics to assess the quality of your light curve. The Goodness Metrics are part of the `lightkurve.correctors.metrics` module and can be computed directly with calls to `overfit_metric_lombscargle` and `underfit_metric_neighbors`. `CBVCorrector` has convenience wrappers for the two metrics and so they do not need to be called directly, as we will show below. Example Correction with CBVCorrector to Inhibit Over-Fitting There are four correction methods within `CBVCorrector`: - **correct**: Performs a numerical correction by optimizing the L2-Norm regularization penalty term using the goodness metrics as the Loss Function. - **correct_gaussian_prior**: Performs an analytical correction using the LightKurve [RegressionCorrector.correct](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.correct.html?highlight=regressioncorrector%20correctlightkurve.correctors.RegressionCorrector.correct) method while setting the L2-Norm (Ridge Regression) regularization penalty term as the Gaussian prior. - **correct_elasticnet**: Performs the correction using Scikit-Learn's [ElasticNet](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html), which combines both an L1- and L2-Norm regularization. - **correct_regressioncorrector**: Performs the standard `RegressionCorrector.correct` correction. RegressionCorrector is the superclass to CBVCorrector. If you are unfamiliar with L1-Norm (LASSO) and L2-Norm (Ridge Regression) regularization then there are several [excellent](https://press.princeton.edu/books/hardcover/9780691198309/statistics-data-mining-and-machine-learning-in-astronomy) [introductions](https://en.wikipedia.org/wiki/Regularization_(mathematics)). You can read how an L2-norm relates to a Gaussian prior in a linear design matrix in [this reference](https://katbailey.github.io/post/from-both-sides-now-the-math-of-linear-regression/). The default method is `correct` and, generally speaking, one can use just this method to obtain a good fit. The other methods are for advanced usage. We'll start with `correct_gaussian_prior` in order to introduce the concepts. Doing so will allow us to force a very weak regularization term (alpha=1e-4) as an illustration.
###Code
# Select which CBVs to use in the correction
cbv_type = ['SingleScale', 'Spike']
# Select which CBV indices to use
# Use the first 8 SingleScale and all Spike CBVS
cbv_indices = [np.arange(1,9), 'ALL']
# Perform the correction
cbvCorrector.correct_gaussian_prior(cbv_type=cbv_type, cbv_indices=cbv_indices, alpha=1e-4)
cbvCorrector.diagnose();
###Output
_____no_output_____
###Markdown
First note that CBVCorrector always fits a constant term in the model, but the constant is never subtracted in the resultant corrected flux. The median flux value of the light curve is always preserved. At first sight, this looks like a good correction. Both the Single-Scale and Spike basis vectors are being utilized to fit out as much of the signal as possible. The corrected light curve is indeed flatter. But this was essentially an unrestricted least-squares correction and we may have _over-fitted_. The very strong lips right at the beginning of each orbit are probably a chance correlation between the star's inherent stellar variability and the thermal settling systematic that is common in Kepler and TESS lightcurves. Let's look at the CBVCorrector goodness metrics to determine if this is the case.
###Code
# Note: this cell will be slow to run
print('Over fitting Metric: {}'.format(cbvCorrector.over_fitting_metric()))
print('Under fitting Metric: {}'.format(cbvCorrector.under_fitting_metric()))
###Output
_____no_output_____
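###Markdown
For reference, the same two numbers can be computed without the `CBVCorrector` convenience wrappers by calling the functions in `lightkurve.correctors.metrics` directly. The minimal calls below are a sketch; additional keyword arguments exist and may differ between lightkurve versions.
###Code
from lightkurve.correctors.metrics import overfit_metric_lombscargle, underfit_metric_neighbors
# Over-fitting: compare the light curve before and after the correction.
print(overfit_metric_lombscargle(cbvCorrector.lc, cbvCorrector.corrected_lc))
# Under-fitting: correlate the corrected light curve against neighboring SPOC SAP targets.
print(underfit_metric_neighbors(cbvCorrector.corrected_lc))
###Output
_____no_output_____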
###Markdown
The first time you run the under-fitting goodness metric it will download the SPOC SAP light curves of targets in the neighborhood around the target under study in order to estimate the residual systematics (they are stored in the CBVCorrector object for subsequent computations). A goodness metric of 0.8 or above is generally considered good. In this case, it looks like we over-fitted (over-fitting metric = 0.71). Even though the corrected light curve looks better, our metric is telling us we probably injected signals into our lightcurve and we should not trust this really nice looking curve. Perhaps we can do better if we _regularize_ the fit. Using the Goodness Metrics to Optimize the Fit We will start by performing a scan of the over- and under-fit goodness metrics as a function of the L2-Norm regularization term, alpha.
###Code
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
###Output
_____no_output_____
###Markdown
This scan also plots the last used alpha parameter as a vertical black line (alpha=1e-4 in our case). We are clearly not optimizing this fit for both over- and under-fitting. Let's use the `correct` numerical optimizer to try to optimize the fit.
###Code
cbvCorrector.correct(cbv_type=cbv_type, cbv_indices=cbv_indices);
cbvCorrector.diagnose();
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
###Output
_____no_output_____
###Markdown
Much better! We see the thermal settling systematic is still being removed, but the stellar variability is better preserved. Note that the optimizer did not set the alpha parameter at exactly the red and blue curve intersection point. The default target goodness score is 0.8 or above, which is fulfilled at alpha=1.45e-1. If we want to optimize the fit even more, by perhaps ensuring we are not over-fitting at all, then we can adjust the target over and under scores to emphasize which metric we are more interested in. Below we place greater emphasis on improving the over-fitting metric by setting the target to 0.9.
###Code
cbvCorrector.correct(cbv_type=cbv_type,
cbv_indices=cbv_indices,
target_over_score=0.9,
target_under_score=0.5)
cbvCorrector.diagnose();
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
###Output
_____no_output_____
###Markdown
We are now perhaps biasing too far towards under-fitting, but depending on your research interests, this might be best. No Single Best Answer Example Let's now look at another example, this time where there is no clear single best answer. Again, we will use the Single-Scale and Spike basis vectors for the correction and begin with low regularization.
###Code
lc = search_lightcurve('TIC 38574307', author='SPOC', sector=2).download(flux_column='sap_flux')
cbvCorrector = CBVCorrector(lc)
cbv_type = ['SingleScale', 'Spike']
cbv_indices = [np.arange(1,9), 'ALL']
cbvCorrector.correct_gaussian_prior(cbv_type=cbv_type, cbv_indices=cbv_indices, alpha=1e-4)
cbvCorrector.diagnose();
###Output
_____no_output_____
###Markdown
At first sight, this looks good. The long term trends have been removed and the periodic noisy bits have been removed with the spike basis vectors. But did we really do a good job?
###Code
print('Over fitting Metric: {}'.format(cbvCorrector.over_fitting_metric()))
print('Under fitting Metric: {}'.format(cbvCorrector.under_fitting_metric()))
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
###Output
_____no_output_____
###Markdown
Hmm... The over-fitting goodness metric says we are severely over-fitting. Not only that, there appears to be no Alpha parameter that brings both goodness metrics above 0.8. And yet, the fit looks really good. What's going on here? Let's zoom in on the correction.
###Code
pltAxis = cbvCorrector.diagnose()
pltAxis[0].set_xlim(1360.5, 1361.1)
pltAxis[0].set_ylim(1.4314e7, 1.4326e7);
pltAxis[1].set_xlim(1360.5, 1361.1)
pltAxis[1].set_ylim(1.431e7, 1.4326e7);
###Output
_____no_output_____
###Markdown
We see in the top plot that the _SingleScale_ correction has comparable noise to the _original_ light curve. This means the correction is injecting high frequency noise at comparable amplitude to the original signal. We have indeed over-fitted! The goodness metrics perform a _broad-band_ analysis of over- and under-fitting. Even though our eyes did not see the high frequency noise injection, the goodness metrics did. So, what should be done? It depends on what you are trying to investigate. If you are only looking at the low frequency signals in the data then perhaps you don't care about the high frequency noise injection. If you really do care about the high frequency signals then you should increase the Alpha parameter, or set the target goodness scores as we do below (target_over_score=0.8, target_under_score=0.5).
###Code
# Optimize the fit but overemphasize the importance of not over-fitting.
cbvCorrector.correct(cbv_type=cbv_type,
cbv_indices=cbv_indices,
target_over_score=0.8,
target_under_score=0.5)
cbvCorrector.diagnose()
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
# Again, zoom in to see the detail
pltAxis = cbvCorrector.diagnose()
pltAxis[0].set_xlim(1360.5, 1361.1)
pltAxis[0].set_ylim(1.4314e7, 1.4326e7);
pltAxis[1].set_xlim(1360.5, 1361.1)
pltAxis[1].set_ylim(1.431e7, 1.4326e7);
###Output
_____no_output_____
###Markdown
We see now that the high frequency noise injection is small compared to the original amplitudes in the light curve. We barely removed any of the systematics and are now under-fitting, but that might be the best we can do if we want to ensure low noise injection. ...Or Can We Still Do Better? Perhaps we are using the incorrect CBVs for this target. Below is a tuned multi-step fit where we first fit the multi-scale Band 2 CBVs then the Spike CBVs. The multi-scale band 2 CBVs contain intermediate frequency systematic signals. They should not inject high frequency noise. We also utilize the `correct_elasticnet` corrector, which allows us to add in an L1-Norm term (Lasso Regularization). L1-Norm helps snap some basis vector fit coefficients to zero and can result in a more stable, less noisy fit. The result is a much better compromise between over- and under-fitting. The spikes are not well removed but increasing the weight on the Spike CBV removal results in over-fitting. We can also try the multi-scale band 3 CBVs, which contain high frequency systematics, but the over-fitting metric indicates using them results in even greater over-fitting. The result is now much better than what we achieved above but more tuning and optimization could possibly get us even closer to an ideal fit.
###Code
# Fit to the Multi-Scale Band 2 CBVs with ElasticNet to add in a L1-Norm (Lasso) term
cbvCorrector.correct_elasticnet(cbv_type=['MultiScale.2'], cbv_indices=[np.arange(1,9)], alpha=1.0e-7, l1_ratio=0.5)
ax = cbvCorrector.diagnose()
ax[0].set_title('Result of First Correction to MultiScale.2 CBVs');
# Set the corrected LC as the initial LC in a new CBVCorrector object before moving to the next correction.
# You could instead just reassign to the first cbvCorrector object, if you do not wish to save the original.
cbvCorrectorIter2 = cbvCorrector.copy()
cbvCorrectorIter2.lc = cbvCorrectorIter2.corrected_lc.copy()
# Fit to the Spike Basis Vectors, using an L1-Norm term.
cbvCorrectorIter2.correct_elasticnet(cbv_type=['Spike'], cbv_indices=['ALL'], alpha=2.0e-5, l1_ratio=0.7)
ax = cbvCorrectorIter2.diagnose()
ax[0].set_title('Result of Second Correction to Spike CBVs');
# Compute the final goodness metrics compared to the original lightcurve.
# This requires us to copy the original light curve into cbvCorrectorIter2.lc so that the goodness metrics compares the corrected_lc to the proper initial light curve.
cbvCorrectorIter2.lc = cbvCorrector.lc.copy()
print('Over-fitting Metric: {}'.format(cbvCorrectorIter2.over_fitting_metric()))
print('Under-fitting Metric: {}'.format(cbvCorrectorIter2.under_fitting_metric()))
# Plot the final correction
_, ax = plt.subplots(1, figsize=(10, 6))
cbvCorrectorIter2.lc.plot(ax=ax, normalize=False, alpha=0.2, label='Original')
cbvCorrectorIter2.corrected_lc[~cbvCorrectorIter2.cadence_mask].scatter(
normalize=False, c='r', marker='x',
s=10, label='Outliers', ax=ax)
cbvCorrectorIter2.corrected_lc.plot(normalize=False, label='Corrected', ax=ax, c='k')
ax.set_title('Comparison between original and final corrected lightcurve');
###Output
_____no_output_____
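###Markdown
As one more objective check on the two-step correction, we can compare a simple broad-band noise measure before and after. A quick sketch using CDPP:
###Code
# Compare the CDPP noise metric of the original and the two-step corrected light curves.
print('CDPP original:  {}'.format(cbvCorrector.lc.estimate_cdpp()))
print('CDPP corrected: {}'.format(cbvCorrectorIter2.corrected_lc.estimate_cdpp()))
###Output
_____no_output_____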
###Markdown
So, which CBVs are best to use? There is no one single answer, but generally speaking, the Multi-Scale Basis vectors are more versatile. The trade-off is there are also more of them, which means more degrees of freedom in your fit. More degrees of freedom can result in more over-fitting without proper regularization. It is recommended the user try different combinations of CBVs and use objective metrics to decide which fit is the best for their particular needs. Using the Goodness Metrics and CBVCorrector with other Design Matrices The Goodness Metrics and CBVCorrector can also be used in conjunction with other external design matrices. Let's work on a famous planet example to show how the CBVCorrector can be utilized to improve the generated light curve. We will begin by using [search_tesscut](https://docs.lightkurve.org/reference/api/lightkurve.search_tesscut.html?highlight=search_tesscut) to extract an FFI light curve for HAT-P 11 and then create a DesignMatrix using the background pixels.
###Code
# HAT-P 11b
from lightkurve import search_tesscut
from lightkurve.correctors import DesignMatrix
search_result = search_tesscut('HAT-P-11', sector=14)
tpf = search_result.download(cutout_size=20)
# Create a simple thresholded aperture mask
aper = tpf.create_threshold_mask(threshold=15, reference_pixel='center')
# Generate a simple aperture photometry light curve
raw_lc = tpf.to_lightcurve(aperture_mask=aper)
# Create a design matrix using PCA components from the cutout background
dm = DesignMatrix(tpf.flux[:, ~aper], name='pixel regressors').pca(5).append_constant()
###Output
_____no_output_____
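###Markdown
Before fitting, it can be helpful to visually inspect the regressors we just built from the background pixels. A quick sketch:
###Code
# Visualize the pixel-derived design matrix (5 PCA components plus a constant column).
dm.plot();
###Output
_____no_output_____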
###Markdown
The [DesignMatrix](https://docs.lightkurve.org/reference/api/lightkurve.correctors.DesignMatrix.html?highlight=designmatrixlightkurve.correctors.DesignMatrix) `dm` now contains the common trends in the background pixels in the data. We will first try to fit the pixel-based design matrix using an unrestricted least-squares fit (I.e. a very weak regularization by setting alpha to a small number). We tell CBVCorrector to only use the external design matrix with `ext_dm=`. When we generate the CBVCorrector object the CBVs will be downloaded, but the CBVs are for 2-minute cadence and not the 30-minute FFIs. We therefore use the `interpolate_cbvs=True` option to tell the CBVCorrector to interpolate the CBVs to the light curve cadence.
###Code
# Generate the CBVCorrector object and interpolate the downloaded CBVs to the light curve cadence
cbvcorrector = CBVCorrector(raw_lc, interpolate_cbvs=True)
# Perform an unrestricted least-squares fit using only the pixel-derived design matrix.
cbvcorrector.correct_gaussian_prior(cbv_type=None, cbv_indices=None, ext_dm=dm, alpha=1e-4)
cbvcorrector.diagnose()
print('Over-fitting metric: {}'.format(cbvcorrector.over_fitting_metric()))
print('CDPP: {}'.format(cbvcorrector.corrected_lc.estimate_cdpp()))
corrected_lc_just_pixel_dm = cbvcorrector.corrected_lc
###Output
_____no_output_____
###Markdown
The least-squares fit did remove the background flux trend and at first sight the resultant might look good, but the over-fitting goodness metric is `0.08`. That's not very good! It looks like we are dramatically over-fitting. We can see this in the bottom plot where the corrected curve has more high-frequency noise than the original. Let's now add in the multi-scale basis vectors and see if we can do better. Note that we are joint fitting the CBVs and the external pixel-derived design matrix.
###Code
cbv_type = ['MultiScale.1', 'MultiScale.2', 'MultiScale.3','Spike']
cbv_indices = [np.arange(1,9), np.arange(1,9), np.arange(1,9), 'ALL']
cbvcorrector.correct_gaussian_prior(cbv_type=cbv_type, cbv_indices=cbv_indices, ext_dm=dm, alpha=1e-4)
cbvcorrector.diagnose()
print('Over-fitting metric: {}'.format(cbvcorrector.over_fitting_metric()))
print('CDPP: {}'.format(cbvcorrector.corrected_lc.estimate_cdpp()))
corrected_lc_joint_fit = cbvcorrector.corrected_lc
###Output
_____no_output_____
###Markdown
That looks a lot better! Could we do a bit better by adding in a regularization term? Let's do a goodness metric scan.
###Code
cbvcorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices, ext_dm=dm);
###Output
_____no_output_____
###Markdown
There are a couple of observations to make here. First, the under-fitting metric has a very good score throughout the regularization scan. This is because the under-fitting metric compares the corrected light curve to neighboring targets in RA and Decl. that are archived as 2-minute SAP-flux targets. _The SAP flux already has the background removed_ so the neighboring targets do not contain the very large background trends. The under-fitting metric is therefore not very helpful. In the next run we will disable the under-fitting metric in the optimization (by setting target_under_score=-1). We see the over-fitting metric is not a simple function of the regularization factor _alpha_. This can happen due to the interaction of the various basis vectors during fitting when regularization is applied. We see a minimum (most over-fitting) at about alpha=1e-1. Once alpha moves above this value we begin to over-constrain the fit, which results in gradually less removal of systematics. The under-fitting metric should be an indicator that we are going too far in constraining the fit, and indeed, we do see the under-fitting metric degrades slightly beginning at alpha=1e-1. We will now try to optimize the fit and account for these two issues by 1) setting the bounds on the alpha parameter (`alpha_bounds=[1e-6, 1e-2]`) and 2) disregarding the under-fitting metric (`target_under_score=-1`).
###Code
# Optimize the fit but ignore the under-fitting metric and set bounds on the alpha parameter.
cbvcorrector.correct(cbv_type=cbv_type, cbv_indices=cbv_indices, ext_dm=dm, alpha_bounds=[1e-6, 1e-2], target_over_score=0.8, target_under_score=-1)
cbvcorrector.diagnose();
print('CDPP: {}'.format(cbvcorrector.corrected_lc.estimate_cdpp()))
###Output
_____no_output_____
###Markdown
This is looking like a pretty good light curve. However, the CDPP increased a little as we optimized the over-fitting metric. Which correction to use may depend on your application. Since we are interested in the transiting planet, we will choose the corrected light curve with the lowest CDPP. Below we compare the light curve obtained using just the pixel-derived design matrix with the one obtained by also adding in the CBVs as a joint, regularized fit.
###Code
_, ax = plt.subplots(3, figsize=(10, 6))
cbvcorrector.lc.plot(ax=ax[0], normalize=False, label='Uncorrected LC', c='k')
corrected_lc_just_pixel_dm.plot(normalize=False, label='Pixel-Level Corrected; CDPP={0:.1f}'.format(corrected_lc_just_pixel_dm.estimate_cdpp()), ax=ax[1], c='m')
corrected_lc_joint_fit.plot(normalize=False, label='Joint Fit Corrected; CDPP={0:.1f}'.format(corrected_lc_joint_fit.estimate_cdpp()), ax=ax[2], c='b')
ax[0].set_title('Comparison Between original and final corrected lightcurve');
###Output
_____no_output_____
###Markdown
The superiority of the bottom curve is blatantly obvious. We can clearly see HAT-P 11b on its 4.9 day period orbit. Our over-fitting metric settled at about 0.35, indicating we might still be over-fitting and should keep that in mind. However, the low CDPP indicates the over-fitting is probably not over transit time-scales. Some final comments on CBVCorrector Application to Kepler vs K2 vs TESS CBVCorrector works equally across Kepler, K2 and TESS. However, the Multi-Scale and Spike basis vectors are only available for TESS[1](fn1). If you want to just get the CBVs but not generate a CBVCorrector object then use the functions _download_kepler_cbvs_ and _download_tess_cbvs_ within the cbvcorrector module as explained [here](https://docs.lightkurve.org/tutorials/2-creating-light-curves/2-2-how-to-use-cbvs.html). 1 Unfortunately, the Multi-Scale and Spike CBVs are not archived at MAST for Kepler/K2. Applicability of the Over- and Under-fitting Goodness Metrics The under-fitting metric computes the correlation between the corrected light curve and a selection of neighboring SPOC SAP light curves. If the light curve you are trying to correct was not generated by the SPOC pipeline (i.e., not a SAP light curve), then the neighboring SAP light curves might not contain the same instrumental systematics and the under-fitting metric might not properly measure when under-fitting is occurring. The over-fitting metric examines the periodogram of the light curve before and after the correction and is therefore indifferent to how the light curve was generated. It simply looks to see if noise was injected into the light curve. The over-fitting metric is therefore much more generally applicable. The Goodness Metrics are part of the `lightkurve.correctors.metrics` module and can be computed directly with calls to `overfit_metric_lombscargle` and `underfit_metric_neighbors`. A savvy expert user can use these and other quality metrics to generate their own Loss Function for optimizing a fit. Aligning versus Interpolating CBVs By default, all loaded CBVs in `CBVCorrector` are "aligned" to the light curve cadence numbers (`CBVCorrector.lc.cadenceno`). This means only cadence numbers that exist in both the CBVs and the light curve will have values in the returned CBVs. All cadence numbers that exist in the light curve but not in the CBVs will have NaNs returned for the CBVs on those cadences and the Gap Indicator set to True. Any cadences in the CBVs not in the light curve will be removed from the CBVs. If the light curve cadences do not overlap well with the CBVs then you can set `interpolate_cbvs=True` when generating the `CBVCorrector` object. Doing so will generate interpolated CBV values for all cadences in the light curve. If the light curve has cadences past either end of the cadences in the CBVs then one must extrapolate. A second argument, `extrapolate_cbvs`, can be used to also extrapolate the CBV values to the light curve cadences. If `extrapolate_cbvs=False` then the exterior values are set to NaNs, which will probably result in a very poor fit.**Warning**: *The safest method is to align*. This will not generate any new values for the CBVs. Interpolation can be potentially dangerous. Interpolation uses Piecewise Cubic Hermite Interpolating Polynomial (PCHIP), which can be more stable than a simple spline, but no interpolation method works in all situations. Extrapolation is even more dangerous, which is why an extra parameter must be set if one desires to extrapolate. 
*Be sure to manually examine the extrapolated CBVs before use!* Joint Fitting By including the `ext_dm=` parameter in the `correct_*` methods we allow for joint fitting between the CBVs and other design matrices. Generally speaking, if fitting a collection of different models to a system, joint fitting is ideal. For example, if performing transit analysis one could add in a transit model to the joint fit to get the best transit recovery. The fit coefficient to the transit model is stored in the `CBVCorrector` object after fitting and can be recovered. Hyperparameter Optimization Any model fitting should include a hyperparameter optimization step. The optimizer used by `correct` is essentially a 1-dimensional optimizer and is very fast. More advanced hyperparameter optimization can be performed by tuning the `alpha` and `l1_ratio` parameters in `correct_elasticnet` plus the number and type of CBVs, along with an external design matrix. The optimization Loss Function can use a combination of the `under_fitting_metric`, `over_fitting_metric` and `lc.estimate_cdpp` methods. Writing such an optimizer is left as an exercise for the reader, to be tuned to the reader's particular application. More Generalized Design Matrix Priors The main [CBVCorrector.correct*](https://docs.lightkurve.org/reference/api/lightkurve.correctors.CBVCorrector.html?highlight=cbvcorrector%20correclightkurve.correctors.CBVCorrector) methods utilize a similar prior for all design matrix vectors as is typically used in L1-Norm and L2-Norm regularization. However, you can fine-tune the correction using more sophisticated priors. After performing a fit with one of the `CBVCorrector.correct*` methods, `CBVCorrector.design_matrix_collection` will have the priors set. One can then manually adjust the priors and use `CBVCorrector.correct_regressioncorrector` to perform the standard [RegressionCorrector.correct](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.correct.html?highlight=regressioncorrector%20correctlightkurve.correctors.RegressionCorrector.correct) correction. An illustration is below:
###Code
# 1) Perform an initial optimization with an L2-Norm regularization
cbvCorrector.correct(cbv_type=cbv_type, cbv_indices=cbv_indices);
# 2) Examine the quality of the resultant lightcurve in cbvCorrector.corrected_lc,
#    then decide how to adjust the priors and make changes to the design matrix, e.g.:
# cbvCorrector.design_matrix_collection[i].prior_sigma[j] = ...  # adjust the priors (i, j are placeholders)
# 3) Call the superclass correct method with the adjusted design_matrix_collection:
# cbvCorrector.correct_regressioncorrector(cbvCorrector.design_matrix_collection, **kwargs)
###Output
_____no_output_____
###Markdown
The `cbvCorrector.corrected_lc` will now be the result of the fit using whatever `cbvCorrector.design_matrix_collection` you had just provided. NaNs, Units and Normalization NaNs are removed from the light curve when it is used to generate the `CBVCorrector` object, and the cleaned light curve is stored in `CBVCorrector.lc`. The CBVCorrector performs its corrections in absolute flux units (typically electrons per second). The returned corrected light curve `corrected_lc` is also in absolute units and the median flux of the light curve is preserved.
###Code
print('LC unit: {}'.format(cbvCorrector.lc.flux.unit))
print('Corrected LC unit: {}'.format(cbvCorrector.corrected_lc.flux.unit))
###Output
_____no_output_____
###Markdown
The goodness metrics are computed using median normalized units in order to properly calibrate the metric to be between 0.0 and 1.0 for all light curves. The normalization is as follows:
###Code
normalized_lc = cbvCorrector.lc.normalize()
normalized_lc -= 1.0
print('Normalized light curve units: {} (i.e. astropy.units.dimensionless_unscaled)'.format(normalized_lc.flux.unit))
###Output
_____no_output_____
###Markdown
Removing noise from Kepler, K2, and TESS light curves using Cotrending Basis Vectors (`CBVCorrector`) Cotrending Basis Vectors (CBVs) are generated in the PDC component of the Kepler/K2/TESS pipeline and are used to remove systematic trends in light curves. They are built from the most common systematic trends observed in each PDC Unit of Work (Quarter for Kepler, Campaign for K2 and Sector for TESS). Each Kepler and K2 module output and each TESS CCD has its own set of CBVs. You can read an introduction to the CBVs in [Demystifying Kepler Data](https://arxiv.org/pdf/1207.3093.pdf) or in greater detail in the [Kepler Data Processing Handbook](https://archive.stsci.edu/kepler/manuals/KSCI-19081-003-KDPH.pdf). The same basic method to generate CBVs is used for all three missions. This tutorial provides examples of how to utilize the various CBVs to clean lightcurves of common trends experienced by all targets. The technique exploits two goodness metrics that characterize the performance of the fit. `CBVCorrector` inherits the `RegressionCorrector` class in LightKurve; you can find details of that class [here](../api/lightkurve.correctors.RegressionCorrector.html). It is recommended to first read the tutorial on [obtaining the CBVs](https://docs.lightkurve.org/tutorials/04-how-to-use-cbvs.html) before reading this tutorial. Cotrending Basis Vector Types There are three basic types of CBVs: - **Single-Scale** contains all systematic trends combined in a single set of basis vectors. - **Multi-Scale** contains systematic trends in specific wavelet-based band passes. There are usually three sets of multi-scale basis vectors in three bands. - **Spike** contains only short impulsive spike systematics. There are two different correction methods in PDC: Single-Scale and Multi-Scale. Single-Scale performs the correction in a single bandpass. Multi-Scale performs the correction in three separate wavelet-based bandpasses. Both corrections are performed in PDC but we can only export a single PDC light curve for each target. So, PDC must choose which of the two to export on a per-target basis. Generally speaking, single-scale performs better at preserving longer period signals. But at periods close to transiting planet durations multi-scale performs better at preserving signals. PDC therefore mostly chooses multi-scale for use within the planet finding pipeline and for the archive. You can find in the light curve FITS header which PDC method was chosen (keyword “PDCMETHD”). Additionally, a separate correction is always performed to remove short impulsive systematic spikes. For an individual's research needs, the mission-supplied PDC lightcurves might not be ideal and so the CBVs are provided to the user to perform their own correction. All three CBV types are provided at MAST for TESS; however, only Single-Scale is provided at MAST for Kepler and K2. Also for Kepler and K2, Cotrending Basis Vectors are supplied for only the 30-minute target cadence. Obtaining the CBVs One can directly obtain the CBVs with `download_tess_cbvs` and `download_kepler_cbvs`. However, when generating a `CBVCorrector` object the appropriate CBVs are automatically downloaded from MAST and aligned to the lightcurve. Let's generate this object for a particularly interesting TESS variable target. We first download the SAP lightcurve.
###Code
from lightkurve import search_lightcurve
import numpy as np
import matplotlib.pyplot as plt
lc = search_lightcurve('TIC 99180739', author='SPOC', sector=10).download(flux_column='sap_flux')
###Output
_____no_output_____
###Markdown
Next, we create a `CBVCorrector` object. This will download the CBVs appropriate for this target and store them in the `CBVCorrector` object. In the case of TESS, this means the CBVs associated with the CCD this target is on and for Sector 10.
###Code
from lightkurve.correctors import CBVCorrector
cbvCorrector = CBVCorrector(lc)
###Output
_____no_output_____
###Markdown
Let's look at the CBVs downloaded.
###Code
cbvCorrector.cbvs
###Output
_____no_output_____
###Markdown
We see that there are a total of 5 sets of CBVs, all associated with TESS Sector 10, Camera 1 and CCD 1. The number of CBVs per type is also given. Let's plot the Single-Scale CBVs, which contain all systematics combined.
###Code
cbvCorrector.cbvs[0].plot();
###Output
_____no_output_____
###Markdown
The first several CBVs contain most of the systematics. The latter CBVs pose a greater risk of injecting more noise than helping. The default behavior in CBVCorrector is to use the first 8 CBVs. Assessing Over- and Under-Fitting with the Goodness Metrics Two very common issues when fitting a model to a data set are over-fitting and under-fitting. Over-fitting occurs when the model has too many degrees of freedom and fits the data _at all costs_, instead of just modelling the physical process it is attempting to model. This can exhibit itself in different ways, depending on the system and application. In the case of fitting systematic trend basis vectors to a time series, over-fitting can result in the basis vectors removing intrinsic signals in the time series instead of just the systematics. It can also result in introduced broad-band noise. This can be particularly prominent in an unconstrained least-squares fit. A least-squares fit only cares about minimizing its loss function, which is the Root Mean Square error. In the course of minimizing the RMS, narrow-band power representing the systematic trends is exchanged for broad-band noise intrinsic in the basis vectors. This results in the overall RMS decreasing but the noise in the time series increasing, resulting in the obscuration of the signals of interest. A very common method to inhibit over-fitting is to introduce a regularization term in the loss function. This constrains the fit and effectively reduces the degrees of freedom. Under-fitting occurs when the model has too few degrees of freedom and fails to adequately model the physical process it is attempting to model. In the case of fitting systematic trend basis vectors to a time series, under-fitting can result in residual systematics. Under-fitting can either be the result of the basis vectors not adequately representing the systematics or of placing too great a restriction on the model during fitting. The regularization technique used to inhibit over-fitting can therefore result in under-fitting. The ideal fit will balance the counter-acting phenomena of over- and under-fitting. To this end, a method can be developed to measure the degree to which these two phenomena occur. PDC has two **Goodness Metrics** to assess over- and under-fitting: - **Over-fitting metric**: Measures the introduced noise in the light curve after the correction. It does so by measuring the broad-band power spectrum via a Lomb-Scargle Periodogram both before and after the correction. If power has increased after the correction then this is an indication the CBV fit has over-fitted and introduced noise. The metric treats all frequencies equally when measuring the power increase, from one frequency separation to the Nyquist frequency. This metric is calibrated such that a metric value of 0.5 means the introduced noise due to over-fitting is at the same power level as the uncertainties in the light curve. - **Under-fitting metric**: Measures the mean residual target-to-target Pearson correlation between the target under study and a selection of neighboring targets. This metric will find and download a selection of neighboring SPOC SAP targets in RA and Decl. until a minimum number is found. 
The metric is calibrated such that a value of 0.95 means the residual correlations in the target are equivalent to chance correlations of White Gaussian Noise. _The Goodness Metrics are not perfect!_ They are an estimate of over- and under-fitting and are to be used as a guideline along with other metrics to assess the quality of your light curve. The Goodness Metrics are part of the `lightkurve.correctors.metrics` module and can be computed directly with calls to `overfit_metric_lombscargle` and `underfit_metric_neighbors`. `CBVCorrector` has convenience wrappers for the two metrics and so they do not need to be called directly, as we will show below. Example Correction with CBVCorrector to Inhibit Over-Fitting There are four correction methods within `CBVCorrector`: - **correct**: Performs a numerical correction by optimizing the L2-Norm regularization penalty term using the goodness metrics as the Loss Function. - **correct_gaussian_prior**: Performs an analytical correction using the LightKurve `RegressionCorrector.correct` method while setting the L2-Norm (Ridge Regression) regularization penalty term as the Gaussian prior. - **correct_elasticnet**: Performs the correction using Scikit-Learn's [ElasticNet](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html), which combines both an L1- and L2-Norm regularization. - **correct_regressioncorrector**: Performs the standard `RegressionCorrector.correct` correction. RegressionCorrector is the superclass to CBVCorrector. If you are unfamiliar with L1-Norm (LASSO) and L2-Norm (Ridge Regression) regularization then there are [several](https://statlearning.com/ISLR%20Seventh%20Printing.pdf), [excellent](https://press.princeton.edu/books/hardcover/9780691198309/statistics-data-mining-and-machine-learning-in-astronomy), [introductions](https://en.wikipedia.org/wiki/Regularization_(mathematics)). You can read how an L2-norm relates to a Gaussian prior in a linear design matrix in [this reference](https://katbailey.github.io/post/from-both-sides-now-the-math-of-linear-regression/). The default method is `correct` and, generally speaking, one can use just this method to obtain a good fit. The other methods are for advanced usage. We'll start with `correct_gaussian_prior` in order to introduce the concepts. Doing so will allow us to force a very weak regularization term (alpha=1e-4) as an illustration.
###Code
# Select which CBVs to use in the correction
cbv_type = ['SingleScale', 'Spike']
# Select which CBV indices to use
# Use the first 8 SingleScale and all Spike CBVS
cbv_indices = [np.arange(1,9), 'ALL']
# Perform the correction
cbvCorrector.correct_gaussian_prior(cbv_type=cbv_type, cbv_indices=cbv_indices, alpha=1e-4)
cbvCorrector.diagnose();
###Output
_____no_output_____
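###Markdown
As an aside, after any of the `correct_*` calls the fitted model coefficients are stored on the corrector object (behaviour inherited from `RegressionCorrector`). Printing them is a quick way to see how strongly each basis vector, and the constant term, is being used; the attribute name below is assumed from the `RegressionCorrector` superclass.
###Code
# Inspect the fitted coefficients for each column of the design matrix
# (the CBVs selected above plus the constant term).
print(cbvCorrector.coefficients)
###Output
_____no_output_____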
###Markdown
First note that CBVCorrector always fits a constant term in the model, but the constant is never subtracted in the resultant corrected flux. The median flux value of the light curve is always preserved. At first sight, this looks like a good correction. Both the Single-Scale and Spike basis vectors are being utilized to fit out as much of the signal as possible. The corrected light curve is indeed flatter. But this was essentially an unrestricted least-squares correction and we may have _over-fitted_. The very strong lips right at the beginning of each orbit are probably a chance correlation between the star's inherent stellar variability and the thermal settling systematic that is common in Kepler and TESS lightcurves. Let's look at the CBVCorrector goodness metrics to determine if this is the case.
###Code
# Note: this cell will be slow to run
print('Over fitting Metric: {}'.format(cbvCorrector.over_fitting_metric()))
print('Under fitting Metric: {}'.format(cbvCorrector.under_fitting_metric()))
###Output
_____no_output_____
###Markdown
The first time you run the under-fitting goodness metric it will download the SPOC SAP light curves of targets in the neighborhood around the target under study in order to estimate the residual systematics (they are stored in the CBVCorrector object for subsequent computations). A goodness metric of 0.8 or above is generally considered good. In this case, it looks like we over-fitted (over-fitting metric = 0.71). Even though the corrected light curve looks better, our metric is telling us we probably injected signals into our lightcurve and we should not trust this really nice looking curve. Perhaps we can do better if we _regularize_ the fit. Using the Goodness Metrics to Optimize the Fit We will start by performing a scan of the over- and under-fit goodness metrics as a function of the L2-Norm regularization term, alpha.
###Code
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
###Output
_____no_output_____
###Markdown
This scan also plots the last used alpha parameter as a vertical black line (alpha=1e-4 in our case). We are clearly not optimizing this fit for both over- and under-fitting. Let's use the `correct` numerical optimizer to try to optimize the fit.
###Code
cbvCorrector.correct(cbv_type=cbv_type, cbv_indices=cbv_indices);
cbvCorrector.diagnose();
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
###Output
_____no_output_____
###Markdown
Much better! We see the thermal settling systematic is still being removed, but the stellar variability is better preserved. Note that the optimizer did not set the alpha parameter at exactly the red and blue curve intersection point. The default target goodness score is 0.8 or above, which is fulfilled at alpha=1.45e-1. If we want to optimize the fit even more, by perhaps ensuring we are not over-fitting at all, then we can adjust the target over and under scores to emphasize which metric we are more interested in. Below we place greater emphasis on improving the over-fitting metric by setting the target to 0.9.
###Code
cbvCorrector.correct(cbv_type=cbv_type,
cbv_indices=cbv_indices,
target_over_score=0.9,
target_under_score=0.5)
cbvCorrector.diagnose();
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
###Output
_____no_output_____
###Markdown
We are now perhaps biasing too far towards under-fitting, but depending on your research interests, this might be best. No Single Best Answer Example Let's now look at another example, this time where there is no clear single best answer. Again, we will use the Single-Scale and Spike basis vectors for the correction and begin with low regularization.
###Code
lc = search_lightcurve('TIC 38574307', author='SPOC', sector=2).download(flux_column='sap_flux')
cbvCorrector = CBVCorrector(lc)
cbv_type = ['SingleScale', 'Spike']
cbv_indices = [np.arange(1,9), 'ALL']
cbvCorrector.correct_gaussian_prior(cbv_type=cbv_type, cbv_indices=cbv_indices, alpha=1e-4)
cbvCorrector.diagnose();
###Output
_____no_output_____
###Markdown
At first sight, this looks good. The long term trends have been removed and the periodic noisy bits have been removed with the spike basis vectors. But did we really do a good job?
###Code
print('Over fitting Metric: {}'.format(cbvCorrector.over_fitting_metric()))
print('Under fitting Metric: {}'.format(cbvCorrector.under_fitting_metric()))
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
###Output
_____no_output_____
###Markdown
Hmm... The over-fitting goodness metric says we are severely over-fitting. Not only that, there appears to be no Alpha parameter that brings both goodness metrics above 0.8. And yet, the fit looks really good. What's going on here? Let's zoom in on the correction.
###Code
pltAxis = cbvCorrector.diagnose()
pltAxis[0].set_xlim(1360.5, 1361.1)
pltAxis[0].set_ylim(1.4314e7, 1.4326e7);
pltAxis[1].set_xlim(1360.5, 1361.1)
pltAxis[1].set_ylim(1.431e7, 1.4326e7);
###Output
_____no_output_____
###Markdown
We see in the top plot that the _SingleScale_ correction has comparable noise to the _original_ light curve. This means the correction is injecting high frequency noise at comparable amplitude to the original signal. We have indeed over-fitted! The goodness metrics perform a _broad-band_ analysis of over- and under-fitting. Even though our eyes did not see the high frequency noise injection, the goodness metrics did. So, what should be done? It depends on what you are trying to investigate. If you are only looking at the low frequency signals in the data then perhaps you don't care about the high frequency noise injection. If you really do care about the high frequency signals then you should increase the Alpha parameter, or set the target goodness scores as we do below (target_over_score=0.8, target_under_score=0.5).
###Code
# Optimize the fit but overemphasize the importance of not over-fitting.
cbvCorrector.correct(cbv_type=cbv_type,
cbv_indices=cbv_indices,
target_over_score=0.8,
target_under_score=0.5)
cbvCorrector.diagnose()
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
# Again, zoom in to see the detail
pltAxis = cbvCorrector.diagnose()
pltAxis[0].set_xlim(1360.5, 1361.1)
pltAxis[0].set_ylim(1.4314e7, 1.4326e7);
pltAxis[1].set_xlim(1360.5, 1361.1)
pltAxis[1].set_ylim(1.431e7, 1.4326e7);
###Output
_____no_output_____
###Markdown
We see now that the high frequency noise injection is small compared to the original amplitudes in the light curve. We barely removed any of the systematics and are now under-fitting, but that might be the best we can do if we want to ensure low noise injection. ...Or Can We Still Do Better? Perhaps we are using the incorrect CBVs for this target. Below is a tuned multi-step fit where we first fit the multi-scale Band 2 CBVs then the Spike CBVs. The multi-scale band 2 CBVs contain intermediate frequency systematic signals. They should not inject high frequency noise. We also utilize the `correct_elasticnet` corrector, which allows us to add in an L1-Norm term (Lasso Regularization). L1-Norm helps snap some basis vector fit coefficients to zero and can result in a more stable, less noisy fit. The result is a much better compromise between over- and under-fitting. The spikes are not well removed but increasing the weight on the Spike CBV removal results in over-fitting. We can also try the multi-scale band 3 CBVs, which contain high frequency systematics, but the over-fitting metric indicates using them results in even greater over-fitting. The result is now much better than what we achieved above but more tuning and optimization could possibly get us even closer to an ideal fit.
###Code
# Fit to the Multi-Scale Band 2 CBVs with ElasticNet to add in a L1-Norm (Lasso) term
cbvCorrector.correct_elasticnet(cbv_type=['MultiScale.2'], cbv_indices=[np.arange(1,9)], alpha=1.0e-7, l1_ratio=0.5)
ax = cbvCorrector.diagnose()
ax[0].set_title('Result of First Correction to MultiScale.2 CBVs');
# Set the corrected LC as the initial LC in a new CBVCorrector object before moving to the next correction.
# You could instead just reassign to the first cbvCorrector object, if you do not wish to save the original.
cbvCorrectorIter2 = cbvCorrector.copy()
cbvCorrectorIter2.lc = cbvCorrectorIter2.corrected_lc.copy()
# Fit to the Spike Basis Vectors, using an L1-Norm term.
cbvCorrectorIter2.correct_elasticnet(cbv_type=['Spike'], cbv_indices=['ALL'], alpha=2.0e-5, l1_ratio=0.7)
ax = cbvCorrectorIter2.diagnose()
ax[0].set_title('Result of Second Correction to Spike CBVs');
# Compute the final goodness metrics compared to the original lightcurve.
# This requires us to copy the original light curve into cbvCorrectorIter2.lc so that the goodness metrics compares the corrected_lc to the proper initial light curve.
cbvCorrectorIter2.lc = cbvCorrector.lc.copy()
print('Over-fitting Metric: {}'.format(cbvCorrectorIter2.over_fitting_metric()))
print('Under-fitting Metric: {}'.format(cbvCorrectorIter2.under_fitting_metric()))
# Plot the final correction
_, ax = plt.subplots(1, figsize=(10, 6))
cbvCorrectorIter2.lc.plot(ax=ax, normalize=False, alpha=0.2, label='Original')
cbvCorrectorIter2.corrected_lc[~cbvCorrectorIter2.cadence_mask].scatter(
normalize=False, c='r', marker='x',
s=10, label='Outliers', ax=ax)
cbvCorrectorIter2.corrected_lc.plot(normalize=False, label='Corrected', ax=ax, c='k')
ax.set_title('Comparison between original and final corrected lightcurve');
###Output
_____no_output_____
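###Markdown
As an optional extra check (not part of the original tutorial flow), we can also compare a simple broad-band noise statistic before and after the two-step correction. The sketch below only assumes the standard `LightCurve.estimate_cdpp` method.
###Code
# Optional sanity check: compare the CDPP noise metric of the original and the
# two-step corrected light curve (lower means less broad-band noise).
print('CDPP before: {:.1f}'.format(cbvCorrectorIter2.lc.estimate_cdpp()))
print('CDPP after: {:.1f}'.format(cbvCorrectorIter2.corrected_lc.estimate_cdpp()))
###Output
_____no_output_____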
###Markdown
So, which CBVs are best to use? There is no one single answer, but generally speaking, the Multi-Scale Basis vectors are more versatile. The trade-off is there are also more of them, which means more degrees of freedom in your fit. More degrees of freedom can result in more over-fitting without proper regularization. It is recommended that the user try different combinations of CBVs and use objective metrics to decide which fit is best for their particular needs. Using the Goodness Metrics and CBVCorrector with other Design Matrices The Goodness Metrics and CBVCorrector can also be used in conjunction with other external design matrices. Let's work on a famous planet example to show how the CBVCorrector can be utilized to improve the generated light curve. We will begin by using `search_tesscut` to extract an FFI light curve for HAT-P 11 and then create a DesignMatrix using the background pixels.
###Code
# HAT-P 11b
from lightkurve import search_tesscut
from lightkurve.correctors import DesignMatrix
search_result = search_tesscut('HAT-P-11', sector=14)
tpf = search_result.download(cutout_size=20)
# Create a simple thresholded aperture mask
aper = tpf.create_threshold_mask(threshold=15, reference_pixel='center')
# Generate a simple aperture photometry light curve
raw_lc = tpf.to_lightcurve(aperture_mask=aper)
# Create a design matrix using PCA components from the cutout background
dm = DesignMatrix(tpf.flux[:, ~aper], name='pixel regressors').pca(5).append_constant()
###Output
_____no_output_____
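###Markdown
Before fitting, it can be helpful to visually inspect the pixel-derived regressors we just built. The optional cell below simply plots the design matrix columns (the five PCA components plus the constant column); it assumes the `DesignMatrix.plot` method.
###Code
# Optional: visually inspect the background-pixel regressors before fitting.
dm.plot();
###Output
_____no_output_____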
###Markdown
The DesignMatrix `dm` now contains the common trends in the background pixels in the data. We will first try to fit the pixel-based design matrix using an unrestricted least-squares fit (I.e. a very weak regularization by setting alpha to a small number). We tell CBVCorrector to only use the external design matrix with `ext_dm=`. When we generate the CBVCorrector object the CBVs will be downloaded, but the CBVs are for 2-minute cadence and not the 30-minute FFIs. We therefore use the `interpolate_cbvs=True` option to tell the CBVCorrector to interpolate the CBVs to the light curve cadence.
###Code
# Generate the CBVCorrector object and interpolate the downloaded CBVs to the light curve cadence
cbvcorrector = CBVCorrector(raw_lc, interpolate_cbvs=True)
# Perform an unrestricted least-squares fit using only the pixel-derived design matrix.
cbvcorrector.correct_gaussian_prior(cbv_type=None, cbv_indices=None, ext_dm=dm, alpha=1e-4)
cbvcorrector.diagnose()
print('Over-fitting metric: {}'.format(cbvcorrector.over_fitting_metric()))
print('CDPP: {}'.format(cbvcorrector.corrected_lc.estimate_cdpp()))
corrected_lc_just_pixel_dm = cbvcorrector.corrected_lc
###Output
_____no_output_____
###Markdown
The least-squares fit did remove the background flux trend and at first sight the resultant might look good, but the over-fitting goodness metric is `0.08`. That's not very good! It looks like we are dramatically over-fitting. We can see this in the bottom plot where the corrected curve has more high-frequency noise than the original. Let's now add in the multi-scale basis vectors and see if we can do better. Note that we are joint fitting the CBVs and the external pixel-derived design matrix.
###Code
cbv_type = ['MultiScale.1', 'MultiScale.2', 'MultiScale.3','Spike']
cbv_indices = [np.arange(1,9), np.arange(1,9), np.arange(1,9), 'ALL']
cbvcorrector.correct_gaussian_prior(cbv_type=cbv_type, cbv_indices=cbv_indices, ext_dm=dm, alpha=1e-4)
cbvcorrector.diagnose()
print('Over-fitting metric: {}'.format(cbvcorrector.over_fitting_metric()))
print('CDPP: {}'.format(cbvcorrector.corrected_lc.estimate_cdpp()))
corrected_lc_joint_fit = cbvcorrector.corrected_lc
###Output
_____no_output_____
###Markdown
That looks a lot better! Could we do a bit better by adding in a regularization term? Let's do a goodness metric scan.
###Code
cbvcorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices, ext_dm=dm);
###Output
_____no_output_____
###Markdown
There are a couple of observations to make here. First, the under-fitting metric has a very good score throughout the regularization scan. This is because the under-fitting metric compares the corrected light curve to neighboring targets in RA and Decl. that are archived as 2-minute SAP-flux targets. _The SAP flux already has the background removed_ so the neighboring targets do not contain the very large background trends. The under-fitting metric is therefore not very helpful. In the next run we will disable the under-fitting metric in the optimization (by setting target_under_score=-1). We see the over-fitting metric is not a simple function of the regularization factor _alpha_. This can happen due to the interaction of the various basis vectors during fitting when regularization is applied. We see a minimum (most over-fitting) at about alpha=1e-1. Once alpha moves above this value we begin to over-constrain the fit, which results in gradually less removal of systematics. The under-fitting metric should be an indicator that we are going too far in constraining the fit, and indeed, we do see the under-fitting metric degrades slightly beginning at alpha=1e-1. We will now try to optimize the fit and account for these two issues by 1) setting the bounds on the alpha parameter (`alpha_bounds=[1e-6, 1e-2]`) and 2) disregarding the under-fitting metric (`target_under_score=-1`).
###Code
# Optimize the fit but ignore the under-fitting metric and set bounds on the alpha parameter.
cbvcorrector.correct(cbv_type=cbv_type, cbv_indices=cbv_indices, ext_dm=dm, alpha_bounds=[1e-6, 1e-2], target_over_score=0.8, target_under_score=-1)
cbvcorrector.diagnose();
print('CDPP: {}'.format(cbvcorrector.corrected_lc.estimate_cdpp()))
###Output
_____no_output_____
###Markdown
This is looking like a pretty good light curve. However, the CDPP increased a little as we optimized the over-fitting metric. Which correction to use may depend on your application. Since we are interested in the transiting planet, we will choose the corrected light curve with the lowest CDPP. Below we compare the light curve between just using the pixel-derived design matrix to also adding in the CBVs as a joint, regularized fit.
###Code
_, ax = plt.subplots(3, figsize=(10, 6))
cbvcorrector.lc.plot(ax=ax[0], normalize=False, label='Uncorrected LC', c='k')
corrected_lc_just_pixel_dm.plot(normalize=False, label='Pixel-Level Corrected; CDPP={0:.1f}'.format(corrected_lc_just_pixel_dm.estimate_cdpp()), ax=ax[1], c='m')
corrected_lc_joint_fit.plot(normalize=False, label='Joint Fit Corrected; CDPP={0:.1f}'.format(corrected_lc_joint_fit.estimate_cdpp()), ax=ax[2], c='b')
ax[0].set_title('Comparison Between original and final corrected lightcurve');
###Output
_____no_output_____
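###Markdown
Since the transit signal is what we care about here, one optional extra step (not in the original tutorial) is to fold the joint-fit corrected light curve on the planet's orbital period. The period used below (~4.888 days) is an approximate literature value, included purely for illustration.
###Code
# Fold the corrected light curve on an approximate orbital period for HAT-P 11b
# (~4.888 days; illustrative value only) to reveal the transit shape.
corrected_lc_joint_fit.fold(period=4.888).scatter();
###Output
_____no_output_____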
###Markdown
The superiority of the bottom curve is blatantly obvious. We can clearly see HAT-P 11b on its 4.9 day period orbit. Our over-fitting metric settled at about 0.35, indicating we might still be over-fitting and should keep that in mind. However, the low CDPP indicates the over-fitting is probably not over transit time-scales. Some final comments on CBVCorrector Application to Kepler vs K2 vs TESS CBVCorrector works equally across Kepler, K2 and TESS. However, the Multi-Scale and Spike basis vectors are only available for TESS[1](fn1). If you want to just get the CBVs but not generate a CBVCorrector object then use the functions _download_kepler_cbvs_ and _download_tess_cbvs_ within the cbvcorrector module as explained [here](https://docs.lightkurve.org/tutorials/04-how-to-use-cbvs.html). 1 Unfortunately, the Multi-Scale and Spike CBVs are not archived at MAST for Kepler/K2. Applicability of the Over- and Under-fitting Goodness Metrics The under-fitting metric computes the correlation between the corrected light curve and a selection of neighboring SPOC SAP light curves. If the light curve you are trying to correct was not generated by the SPOC pipeline (I.e. not a SAP light curve), then the neighboring SAP light curves might not contain the same instrumental systematics and the under-fitting metric might not properly measure when under-fitting is occurring. The over-fitting metric examines the periodogram of the light curve before and after the correction and is therefore indifferent to how the light curve was generated. It simply looks to see if noise was injected into the light curve. The over-fitting metric is therefore much more generally applicable. The Goodness Metrics are part of the `lightkurve.correctors.metrics` module and can be computed directly with calls to `overfit_metric_lombscargle` and `underfit_metric_neighbors`. A savvy expert user can use these and other quality metrics to generate their own Loss Function for optimizing a fit. Joint Fitting By including the `ext_dm=` parameter in the `correct_*` methods we allow for joint fitting between the CBVs and other design matrices. Generally speaking, if fitting a collection of different models to a system, joint fitting is ideal. For example, if performing transit analysis one could add in a transit model to the joint fit to get the best transit recovery. The fit coefficient to the transit model is stored in the `CBVCorrector` object after fitting and can be recovered. Hyperparameter Optimization Any model fitting should include a hyperparameter optimization step. The `correct_optimizer` is essentially a 1-dimensional optimizer and is very fast. More advanced hyperparameter optimization can be performed by tuning the `alpha` and `l1_ratio` parameters in `correct_elasticnet` plus the number and type of CBVs, along with an external design matrix. The optimization Loss Function can use a combination of the `under_fitting_metric`, `over_fitting_metric` and `lc.estimate_cdpp` methods. Writing such an optimizer is left as an exercise to the reader and to be tuned to the reader's particular application. More Generalized Design Matrix Priors The main `CBVCorrector.correct*` methods utilize a similar prior for all design matrix vectors as is typically used in L1-Norm and L2-Norm regularization. However, you can fine-tune the correction using more sophisticated priors. After performing a fit with one of the `CBVCorrector.correct*` methods, `CBVCorrector.design_matrix_collection` will have the priors set. 
One can then manually adjust the priors and use `CBVCorrector.correct_regressioncorrector` to perform the standard `RegressionCorrector.correct` correction. An illustration is below:
###Code
# Perform an initial optimization with an L2-Norm regularization
cbvCorrector.correct(cbv_type=cbv_type, cbv_indices=cbv_indices);
# Examine the quality of the resultant light curve in cbvCorrector.corrected_lc,
# then decide how to adjust the priors and make changes to the design matrix.
# The placeholder line below is illustrative only (i and j are example indices), so it is left commented out:
# cbvCorrector.design_matrix_collection[i].prior_sigma[j] = ...  # adjust the priors
# Finally, call the superclass correct method with the adjusted design_matrix_collection:
# cbvCorrector.correct_regressioncorrector(cbvCorrector.design_matrix_collection, **kwargs)
###Output
_____no_output_____
###Markdown
The `cbvCorrector.corrected_lc` will now be the result of the fit using whatever `cbvCorrector.design_matrix_collection` you had just provided. Units, NaN and Normalization NaNs are removed from the light curve when it is used to generate the `CBVCorrector` object, and the cleaned light curve is stored in `CBVCorrector.lc`. All loaded CBVs are also aligned to the light curve cadence numbers (`CBVCorrector.lc.cadenceno`). If the light curve cadences do not overlap well with the CBVs then you can set `interpolate_cbvs=True` when generating the `CBVCorrector` object. Doing so will generate interpolated CBV values for all cadences in the light curve. The CBVCorrector performs its corrections in absolute flux units (typically electrons per second). The returned corrected light curve `corrected_lc` is also in absolute units and the median flux of the light curve is preserved.
###Code
print('LC unit: {}'.format(cbvCorrector.lc.flux.unit))
print('Corrected LC unit: {}'.format(cbvCorrector.corrected_lc.flux.unit))
###Output
_____no_output_____
###Markdown
The goodness metrics are computed using median normalized units in order to properly calibrate the metric to be between 0.0 and 1.0 for all light curves. The normalization is as follows:
###Code
normalized_lc = cbvCorrector.lc.normalize()
normalized_lc -= 1.0
print('Normalized Light curve units: {} (i.e astropy.units.dimensionless_unscaled)'.format(normalized_lc.flux.unit))
###Output
_____no_output_____
###Markdown
Removing noise from Kepler, K2, and TESS light curves using Cotrending Basis Vectors (`CBVCorrector`) Cotrending Basis Vectors (CBVs) are generated in the PDC component of the Kepler/K2/TESS pipeline and are used to remove systematic trends in light curves. They are built from the most common systematic trends observed in each PDC Unit of Work (Quarter for Kepler, Campaign for K2 and Sector for TESS). Each Kepler and K2 module output and each TESS CCD has its own set of CBVs. You can read an introduction to the CBVs in [Demystifying Kepler Data](https://arxiv.org/pdf/1207.3093.pdf) or in greater detail in the [Kepler Data Processing Handbook](https://archive.stsci.edu/kepler/manuals/KSCI-19081-003-KDPH.pdf). The same basic method to generate CBVs is used for all three missions. This tutorial provides examples of how to utilize the various CBVs to clean lightcurves of common trends experienced by all targets. The technique exploits two goodness metrics that characterize the performance of the fit. [CBVCorrector](https://docs.lightkurve.org/reference/api/lightkurve.correctors.CBVCorrector.html) inherits the [RegressionCorrector](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.html?highlight=regressioncorrector) class in LightKurve. It is recommended to first read the tutorial on [obtaining the CBVs](https://docs.lightkurve.org/tutorials/2-creating-light-curves/2-2-how-to-use-cbvs.html) before reading this tutorial. Cotrending Basis Vector Types There are three basic types of CBVs: - **Single-Scale** contains all systematic trends combined in a single set of basis vectors. - **Multi-Scale** contains systematic trends in specific wavelet-based band passes. There are usually three sets of multi-scale basis vectors in three bands. - **Spike** contains only short impulsive spike systematics. There are two different correction methods in PDC: Single-Scale and Multi-Scale. Single-Scale performs the correction in a single bandpass. Multi-Scale performs the correction in three separate wavelet-based bandpasses. Both corrections are performed in PDC but we can only export a single PDC light curve for each target. So, PDC must choose which of the two to export on a per-target basis. Generally speaking, single-scale performs better at preserving longer period signals. But at periods close to transiting planet durations multi-scale performs better at preserving signals. PDC therefore mostly chooses multi-scale for use within the planet finding pipeline and for the archive. You can find in the light curve FITS header which PDC method was chosen (keyword “PDCMETHD”). Additionally, a separate correction is always performed to remove short impulsive systematic spikes. For an individual's research needs, the mission-supplied PDC lightcurves might not be ideal and so the CBVs are provided to the user to perform their own correction. All three CBV types are provided at MAST for TESS; however, only Single-Scale is provided at MAST for Kepler and K2. Also for Kepler and K2, Cotrending Basis Vectors are supplied for only the 30-minute target cadence. Obtaining the CBVs One can directly obtain the CBVs with `load_tess_cbvs` and `load_kepler_cbvs`, either from MAST by default or from a local directory `cbv_dir`. However, when generating a [CBVCorrector](https://docs.lightkurve.org/reference/api/lightkurve.correctors.CBVCorrector.html?highlight=cbvcorrector) object the appropriate CBVs are automatically downloaded from MAST and aligned to the lightcurve. 
Let's generate this object for a particularly interesting TESS variable target. We first download the SAP lightcurve.
###Code
from lightkurve import search_lightcurve
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
lc = search_lightcurve('TIC 99180739', author='SPOC', sector=10).download(flux_column='sap_flux')
###Output
_____no_output_____
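###Markdown
Before building the corrector, it can be instructive to take a quick look at the raw SAP light curve we just downloaded; the systematic trends that the CBVs are meant to remove are clearly visible. This quick-look cell is an optional addition to the original flow.
###Code
# Quick look at the raw SAP flux before any correction is applied.
lc.plot();
###Output
_____no_output_____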
###Markdown
Next, we create a `CBVCorrector` object. This will download the CBVs appropriate for this target and store them in the `CBVCorrector` object. In the case of TESS, this means the CBVs associated with the CCD this target is on and for Sector 10.
###Code
from lightkurve.correctors import CBVCorrector
cbvCorrector = CBVCorrector(lc)
###Output
_____no_output_____
###Markdown
Let's look at the CBVs downloaded.
###Code
cbvCorrector.cbvs
###Output
_____no_output_____
###Markdown
We see that there are a total of 5 sets of CBVs, all associated with TESS Sector 10, Camera 1 and CCD 1. The number of CBVs per type is also given. Let's plot the Single-Scale CBVs, which contain all systematics combined.
###Code
cbvCorrector.cbvs[0].plot();
###Output
_____no_output_____
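###Markdown
The other CBV sets listed above can be plotted in the same way. As an optional example, the cell below plots the Spike basis vectors, assuming they are the fifth (last) set in the list printed above.
###Code
# Plot the Spike basis vectors for comparison
# (index 4 assumes the Spike CBVs are the last of the five sets listed above).
cbvCorrector.cbvs[4].plot();
###Output
_____no_output_____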
###Markdown
The first several CBVs contain most of the systematics. The latter CBVs pose a greater risk of injecting more noise than helping. The default behavior in CBVCorrector is to use the first 8 CBVs. Assessing Over- and Under-Fitting with the Goodness Metrics Two very common issues when fitting a model to a data set are over-fitting and under-fitting. Over-fitting occurs when the model has too many degrees of freedom and fits the data _at all costs_, instead of just modelling the physical process it is attempting to model. This can exhibit itself in different ways, depending on the system and application. In the case of fitting systematic trend basis vectors to a time series, over-fitting can result in the basis vectors removing intrinsic signals in the time series instead of just the systematics. It can also result in introduced broad-band noise. This can be particularly prominent in an unconstrained least-squares fit. A least-squares fit only cares about minimizing its loss function, which is the Root Mean Square error. In the course of minimizing the RMS, narrow-band power representing the systematic trends is exchanged for broad-band noise intrinsic in the basis vectors. This results in the overall RMS decreasing but the noise in the time series increasing, resulting in the obscuration of the signals of interest. A very common method to inhibit over-fitting is to introduce a regularization term in the loss function. This constrains the fit and effectively reduces the degrees of freedom. Under-fitting occurs when the model has too few degrees of freedom and fails to adequately model the physical process it is attempting to model. In the case of fitting systematic trend basis vectors to a time series, under-fitting can result in residual systematics. Under-fitting can either be the result of the basis vectors not adequately representing the systematics or of placing too great a restriction on the model during fitting. The regularization technique used to inhibit over-fitting can therefore result in under-fitting. The ideal fit will balance the counter-acting phenomena of over- and under-fitting. To this end, a method can be developed to measure the degree to which these two phenomena occur. PDC has two **Goodness Metrics** to assess over- and under-fitting: - **Over-fitting metric**: Measures the introduced noise in the light curve after the correction. It does so by measuring the broad-band power spectrum via a Lomb-Scargle Periodogram both before and after the correction. If power has increased after the correction then this is an indication the CBV fit has over-fitted and introduced noise. The metric treats all frequencies equally when measuring power increase, from one frequency separation to the Nyquist frequency. This metric is calibrated such that a metric value of 0.5 means the introduced noise due to over-fitting is at the same power level as the uncertainties in the light curve. - **Under-fitting metric**: Measures the mean residual target-to-target Pearson correlation between the target under study and a selection of neighboring targets. This metric will find and download a selection of neighboring SPOC SAP targets in RA and Decl. until a minimum number is found. 
The metric is calibrated such that a value of 0.95 means the residual correlations in the target are equivalent to chance correlations of White Gaussian Noise. _The Goodness Metrics are not perfect!_ They are an estimate of over- and under-fitting and are to be used as a guideline along with other metrics to assess the quality of your light curve. The Goodness Metrics are part of the `lightkurve.correctors.metrics` module and can be computed directly with calls to `overfit_metric_lombscargle` and `underfit_metric_neighbors`. The `CBVCorrector` has convenience wrappers for the two metrics and so they do not need to be called directly, as we will show below. Example Correction with CBVCorrector to Inhibit Over-Fitting There are four correction methods within `CBVCorrector`: - **correct**: Performs a numerical correction using the LightKurve [RegressionCorrector.correct](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.correct.html?highlight=regressioncorrector%20correctlightkurve.correctors.RegressionCorrector.correct) method while optimizing the L2-Norm regularization penalty term using the goodness metrics as the Loss Function. - **correct_gaussian_prior**: Performs an analytical correction using the LightKurve [RegressionCorrector.correct](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.correct.html?highlight=regressioncorrector%20correctlightkurve.correctors.RegressionCorrector.correct) method while setting the L2-Norm (Ridge Regression) regularization penalty term as the Gaussian prior. - **correct_elasticnet**: Performs the correction using Scikit-Learn's [ElasticNet](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html), which combines both an L1- and L2-Norm regularization. - **correct_regressioncorrector**: Performs the standard [RegressionCorrector.correct](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.correct.html?highlight=regressioncorrector%20correctlightkurve.correctors.RegressionCorrector.correct) correction. RegressionCorrector is the superclass to CBVCorrector and this is a "passthrough" method to access the superclass `correct` method. If you are unfamiliar with L1-Norm (LASSO) and L2-Norm (Ridge Regression) regularization then there are [several](https://www.statlearning.com/), [excellent](https://press.princeton.edu/books/hardcover/9780691198309/statistics-data-mining-and-machine-learning-in-astronomy), [introductions](https://en.wikipedia.org/wiki/Regularization_(mathematics)). You can read how an L2-norm relates to a Gaussian prior in a linear design matrix in [this reference](https://katbailey.github.io/post/from-both-sides-now-the-math-of-linear-regression/). The default method is `correct` and generally speaking, one can use just this method to obtain a good fit. The other methods are for advanced usage. We'll start with `correct_gaussian_prior` in order to introduce the concepts. Doing so will allow us to force a very weak regularization term (alpha=1e-4) as an illustration.
###Code
# Select which CBVs to use in the correction
cbv_type = ['SingleScale', 'Spike']
# Select which CBV indices to use
# Use the first 8 SingleScale and all Spike CBVS
cbv_indices = [np.arange(1,9), 'ALL']
# Perform the correction
cbvCorrector.correct_gaussian_prior(cbv_type=cbv_type, cbv_indices=cbv_indices, alpha=1e-4)
cbvCorrector.diagnose();
###Output
_____no_output_____
###Markdown
First note that CBVCorrector always fits a constant term in the model, but the constant is never subtracted in the resultant corrected flux. The median flux value of the light curve is always preserved. At first sight, this looks like a good correction. Both the Single-Scale and Spike basis vectors are being utilized to fit out as much of the signal as possible. The corrected light curve is indeed flatter. But this was essentially an unrestricted least-squares correction and we may have _over-fitted_. The very strong lips right at the beginning of each orbit is probably a chance correlation between the star's inherent stellar variability and the thermal settling systematic that is common in Kepler and TESS lightcurves. Let's look at the CBVCorrector goodness metrics to determine if this is the case.
###Code
# Note: this cell will be slow to run
print('Over fitting Metric: {}'.format(cbvCorrector.over_fitting_metric()))
print('Under fitting Metric: {}'.format(cbvCorrector.under_fitting_metric()))
###Output
_____no_output_____
###Markdown
The first time you run the under-fitting goodness metric it will download the SPOC SAP light curves of targets in the neighborhood around the target under study in order to estimate the residual systematics (they are stored in the CBVCorrector object for subsequent computations). A goodness metric of 0.8 or above is generally considered good. In this case, it looks like we over-fitted (over-fitting metric = 0.71). Even though the corrected light curve looks better, our metric is telling us we probably injected signals into our lightcurve and we should not trust this really nice looking curve. Perhaps we can do better if we _regularize_ the fit. Using the Goodness Metrics to Optimize the Fit We will start by performing a scan of the over- and under-fit goodness metrics as a function of the L2-Norm regularization term, alpha.
###Code
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
###Output
_____no_output_____
###Markdown
This scan also plots the last used alpha parameter as a vertical black line (alpha=1e-4 in our case). We are clearly not optimizing this fit for both over- and under-fitting. Let's use the `correct` numerical optimizer to try to optimize the fit.
###Code
cbvCorrector.correct(cbv_type=cbv_type, cbv_indices=cbv_indices);
cbvCorrector.diagnose();
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
###Output
_____no_output_____
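###Markdown
If you are curious which regularization strength the optimizer settled on, it can be read back from the corrector object. The attribute name `alpha` below is an assumption about where `CBVCorrector` stores the last-used value, so this optional sketch uses `getattr` with a fallback.
###Code
# Report the regularization strength chosen by the optimizer.
# 'alpha' is assumed to be the attribute holding the last-used value.
print('Optimized alpha: {}'.format(getattr(cbvCorrector, 'alpha', 'attribute not found')))
###Output
_____no_output_____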
###Markdown
Much better! We see the thermal settling systematic is still being removed, but the stellar variability is better preserved. Note that the optimizer did not set the alpha parameter at exactly the red and blue curve intersection point. The default target goodness score is 0.8 or above, which is fulfilled at alpha=1.45e-1. If we want to optimize the fit even more, by perhaps ensuring we are not over-fitting at all, then we can adjust the target over and under scores to emphasize which metric we are more interested in. Below we more greatly emphasize improving the over-fitting metric by setting the target to 0.9.
###Code
cbvCorrector.correct(cbv_type=cbv_type,
cbv_indices=cbv_indices,
target_over_score=0.9,
target_under_score=0.5)
cbvCorrector.diagnose();
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
###Output
_____no_output_____
###Markdown
We are now perhaps biasing too far towards under-fitting, but depending on your research interests, this might be best. No Single Best Answer Example Let's now look at another example, this time where there is no clear single best answer. Again, we will use the Single-Scale and Spike basis vectors for the correction and begin with low regularization.
###Code
lc = search_lightcurve('TIC 38574307', author='SPOC', sector=2).download(flux_column='sap_flux')
cbvCorrector = CBVCorrector(lc)
cbv_type = ['SingleScale', 'Spike']
cbv_indices = [np.arange(1,9), 'ALL']
cbvCorrector.correct_gaussian_prior(cbv_type=cbv_type, cbv_indices=cbv_indices, alpha=1e-4)
cbvCorrector.diagnose();
###Output
_____no_output_____
###Markdown
At first sight, this looks good. The long-term trends have been removed, and the periodic noisy bits have been taken out by the spike basis vectors. But did we really do a good job?
###Code
print('Over fitting Metric: {}'.format(cbvCorrector.over_fitting_metric()))
print('Under fitting Metric: {}'.format(cbvCorrector.under_fitting_metric()))
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
###Output
_____no_output_____
###Markdown
Hmm... The over-fitting goodness metric says we are severely over-fitting. Not only that, there appears to be no Alpha parameter that brings both goodness metrics above 0.8. And yet, the fit looks really good. What's going on here? Let's zoom in on the correction.
###Code
pltAxis = cbvCorrector.diagnose()
pltAxis[0].set_xlim(1360.5, 1361.1)
pltAxis[0].set_ylim(1.4755e7, 1.477e7);
pltAxis[1].set_xlim(1360.5, 1361.1)
pltAxis[1].set_ylim(1.475e7, 1.477e7);
###Output
_____no_output_____
###Markdown
We see in the top plot that the _SingleScale_ correction has comparable noise to the _original_ light curve. This means the correction is injecting high frequency noise at comparable amplitude to the original signal. We have indeed over-fitted! The goodness metrics perform a _broad-band_ analysis of over- and under-fitting. Even though our eyes did not see the high frequency noise injection, the goodness metrics did. So, what should be done? It depends on what you are trying to investigate. If you are only looking at the low frequency signals in the data then perhaps you don't care about the high frequency noise injection. If you really do care about the high frequency signals then you should increase the Alpha parameter, or set the target goodness scores as we do below (target_over_score=0.8, target_under_score=0.5).
###Code
# Optimize the fit but overemphasize the importance of not over-fitting.
cbvCorrector.correct(cbv_type=cbv_type,
cbv_indices=cbv_indices,
target_over_score=0.8,
target_under_score=0.5)
cbvCorrector.diagnose()
cbvCorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices);
# Again, zoom in to see the detail
pltAxis = cbvCorrector.diagnose()
pltAxis[0].set_xlim(1360.5, 1361.1)
pltAxis[0].set_ylim(1.4755e7, 1.477e7);
pltAxis[1].set_xlim(1360.5, 1361.1)
pltAxis[1].set_ylim(1.475e7, 1.477e7);
###Output
_____no_output_____
###Markdown
We see now that the high frequency noise injection is small compared to the original amplitudes in the light curve. We barely removed any of the systematics and are now under-fitting, but that might be the best we can do if we want to ensure low noise injection. ...Or Can We Still Do Better? Perhaps we are using the incorrect CBVs for this target. Below is a tuned multi-step fit where we first fit the multi-scale Band 2 CBVs and then the Spike CBVs. The multi-scale band 2 CBVs contain intermediate frequency systematic signals. They should not inject high frequency noise. We also utilize the `correct_elasticnet` corrector, which allows us to add in an L1-Norm term (Lasso Regularization). L1-Norm helps snap some basis vector fit coefficients to zero and can result in a more stable, less noisy fit. The result is a much better compromise between over- and under-fitting. The spikes are not well removed but increasing the weight on the Spike CBV removal results in over-fitting. We can also try the multi-scale band 3 CBVs, which contain high frequency systematics, but the over-fitting metric indicates using them results in even greater over-fitting. The resultant is now much better than what we achieved above but more tuning and optimization could possibly get us even closer to an ideal fit.
###Code
# Fit to the Multi-Scale Band 2 CBVs with ElasticNet to add in a L1-Norm (Lasso) term
cbvCorrector.correct_elasticnet(cbv_type=['MultiScale.2'], cbv_indices=[np.arange(1,9)], alpha=1.0e-7, l1_ratio=0.5)
ax = cbvCorrector.diagnose()
ax[0].set_title('Result of First Correction to MultiScale.2 CBVs');
# Set the corrected LC as the initial LC in a new CBVCorrector object before moving to the next correction.
# You could instead just reassign to the first cbvCorrector object, if you do not wish to save the original.
cbvCorrectorIter2 = cbvCorrector.copy()
cbvCorrectorIter2.lc = cbvCorrectorIter2.corrected_lc.copy()
# Fit to the Spike Basis Vectors, using an L1-Norm term.
cbvCorrectorIter2.correct_elasticnet(cbv_type=['Spike'], cbv_indices=['ALL'], alpha=2.0e-5, l1_ratio=0.7)
ax = cbvCorrectorIter2.diagnose()
ax[0].set_title('Result of Second Correction to Spike CBVs');
# Compute the final goodness metrics compared to the original lightcurve.
# This requires us to copy the original light curve into cbvCorrectorIter2.lc so that the goodness metrics compares the corrected_lc to the proper initial light curve.
cbvCorrectorIter2.lc = cbvCorrector.lc.copy()
print('Over-fitting Metric: {}'.format(cbvCorrectorIter2.over_fitting_metric()))
print('Under-fitting Metric: {}'.format(cbvCorrectorIter2.under_fitting_metric()))
# Plot the final correction
_, ax = plt.subplots(1, figsize=(10, 6))
cbvCorrectorIter2.lc.plot(ax=ax, normalize=False, alpha=0.2, label='Original')
cbvCorrectorIter2.corrected_lc[~cbvCorrectorIter2.cadence_mask].scatter(
normalize=False, c='r', marker='x',
s=10, label='Outliers', ax=ax)
cbvCorrectorIter2.corrected_lc.plot(normalize=False, label='Corrected', ax=ax, c='k')
ax.set_title('Comparison between original and final corrected lightcurve');
###Output
_____no_output_____
###Markdown
So, which CBVs are best to use? There is no one single answer, but generally speaking, the Multi-Scale Basis vectors are more versatile. The trade-off is there are also more of them, which means more degrees of freedom in your fit. More degrees of freedom can result in more over-fitting without proper regularization. It is recommended that the user try different combinations of CBVs and use objective metrics to decide which fit is best for their particular needs. Using the Goodness Metrics and CBVCorrector with other Design Matrices The Goodness Metrics and CBVCorrector can also be used in conjunction with other external design matrices. Let's work on a famous planet example to show how the CBVCorrector can be utilized to improve the generated light curve. We will begin by using [search_tesscut](https://docs.lightkurve.org/reference/api/lightkurve.search_tesscut.html?highlight=search_tesscut) to extract an FFI light curve for HAT-P 11 and then create a DesignMatrix using the background pixels.
###Code
# HAT-P 11b
from lightkurve import search_tesscut
from lightkurve.correctors import DesignMatrix
search_result = search_tesscut('HAT-P-11', sector=14)
tpf = search_result.download(cutout_size=20)
# Create a simple thresholded aperture mask
aper = tpf.create_threshold_mask(threshold=15, reference_pixel='center')
# Generate a simple aperture photometry light curve
raw_lc = tpf.to_lightcurve(aperture_mask=aper)
# Create a design matrix using PCA components from the cutout background
dm = DesignMatrix(tpf.flux[:, ~aper], name='pixel regressors').pca(5).append_constant()
###Output
_____no_output_____
###Markdown
The [DesignMatrix](https://docs.lightkurve.org/reference/api/lightkurve.correctors.DesignMatrix.html?highlight=designmatrixlightkurve.correctors.DesignMatrix) `dm` now contains the common trends in the background pixels in the data. We will first try to fit the pixel-based design matrix using an unrestricted least-squares fit (I.e. a very weak regularization by setting alpha to a small number). We tell CBVCorrector to only use the external design matrix with `ext_dm=`. When we generate the CBVCorrector object the CBVs will be downloaded, but the CBVs are for 2-minute cadence and not the 30-minute FFIs. We therefore use the `interpolate_cbvs=True` option to tell the CBVCorrector to interpolate the CBVs to the light curve cadence.
###Code
# Generate the CBVCorrector object and interpolate the downloaded CBVs to the light curve cadence
cbvcorrector = CBVCorrector(raw_lc, interpolate_cbvs=True)
# Perform an unrestricted least-squares fit using only the pixel-derived design matrix.
cbvcorrector.correct_gaussian_prior(cbv_type=None, cbv_indices=None, ext_dm=dm, alpha=1e-4)
cbvcorrector.diagnose()
print('Over-fitting metric: {}'.format(cbvcorrector.over_fitting_metric()))
print('CDPP: {}'.format(cbvcorrector.corrected_lc.estimate_cdpp()))
corrected_lc_just_pixel_dm = cbvcorrector.corrected_lc
###Output
_____no_output_____
###Markdown
The least-squares fit did remove the background flux trend and at first sight the resultant might look good, but the over-fitting goodness metric is `0.08`. That's not very good! It looks like we are dramatically over-fitting. We can see this in the bottom plot where the corrected curve has more high-frequency noise than the original. Let's now add in the multi-scale basis vectors and see if we can do better. Note that we are joint fitting the CBVs and the external pixel-derived design matrix.
###Code
cbv_type = ['MultiScale.1', 'MultiScale.2', 'MultiScale.3','Spike']
cbv_indices = [np.arange(1,9), np.arange(1,9), np.arange(1,9), 'ALL']
cbvcorrector.correct_gaussian_prior(cbv_type=cbv_type, cbv_indices=cbv_indices, ext_dm=dm, alpha=1e-4)
cbvcorrector.diagnose()
print('Over-fitting metric: {}'.format(cbvcorrector.over_fitting_metric()))
print('CDPP: {}'.format(cbvcorrector.corrected_lc.estimate_cdpp()))
corrected_lc_joint_fit = cbvcorrector.corrected_lc
###Output
_____no_output_____
###Markdown
That looks a lot better! Could we do a bit better by adding in a regularization term? Let's do a goodness metric scan.
###Code
cbvcorrector.goodness_metric_scan_plot(cbv_type=cbv_type, cbv_indices=cbv_indices, ext_dm=dm);
###Output
_____no_output_____
###Markdown
There are a couple of observations to make here. First, the under-fitting metric has a very good score throughout the regularization scan. This is because the under-fitting metric compares the corrected light curve to neighboring targets in RA and Decl. that are archived as 2-minute SAP-flux targets. _The SAP flux already has the background removed_ so the neighboring targets do not contain the very large background trends. The under-fitting metric is therefore not very helpful. In the next run we will disable the under-fitting metric in the optimization (by setting target_under_score=-1). We see the over-fitting metric is not a simple function of the regularization factor _alpha_. This can happen due to the interaction of the various basis vectors during fitting when regularization is applied. We see a minimum (most over-fitting) at about alpha=1e-1. Once alpha moves above this value we begin to over-constrain the fit, which results in gradually less removal of systematics. The under-fitting metric should be an indicator that we are going too far in constraining the fit, and indeed, we do see the under-fitting metric degrades slightly beginning at alpha=1e-1. We will now try to optimize the fit and account for these two issues by 1) setting the bounds on the alpha parameter (`alpha_bounds=[1e-6, 1e-2]`) and 2) disregarding the under-fitting metric (`target_under_score=-1`).
###Code
# Optimize the fit but ignore the under-fitting metric and set bounds on the alpha parameter.
cbvcorrector.correct(cbv_type=cbv_type, cbv_indices=cbv_indices, ext_dm=dm, alpha_bounds=[1e-6, 1e-2], target_over_score=0.8, target_under_score=-1)
cbvcorrector.diagnose();
print('CDPP: {}'.format(cbvcorrector.corrected_lc.estimate_cdpp()))
###Output
_____no_output_____
###Markdown
This is looking like a pretty good light curve. However, the CDPP increased a little as we optimized the over-fitting metric. Which correction to use may depend on your application. Since we are interested in the transiting planet, we will choose the corrected light curve with the lowest CDPP. Below we compare the light curve between just using the pixel-derived design matrix to also adding in the CBVs as a joint, regularized fit.
###Code
_, ax = plt.subplots(3, figsize=(10, 6))
cbvcorrector.lc.plot(ax=ax[0], normalize=False, label='Uncorrected LC', c='k')
corrected_lc_just_pixel_dm.plot(normalize=False, label='Pixel-Level Corrected; CDPP={0:.1f}'.format(corrected_lc_just_pixel_dm.estimate_cdpp()), ax=ax[1], c='m')
corrected_lc_joint_fit.plot(normalize=False, label='Joint Fit Corrected; CDPP={0:.1f}'.format(corrected_lc_joint_fit.estimate_cdpp()), ax=ax[2], c='b')
ax[0].set_title('Comparison Between original and final corrected lightcurve');
###Output
_____no_output_____
###Markdown
The superiority of the bottom curve is blatantly obvious. We can clearly see HAT-P 11b on its 4.9 day period orbit. Our over-fitting metric settled at about 0.35, indicating we might still be over-fitting and should keep that in mind. However, the low CDPP indicates the over-fitting is probably not over transit time-scales. Some final comments on CBVCorrector Application to Kepler vs K2 vs TESS CBVCorrector works equally across Kepler, K2 and TESS. However, the Multi-Scale and Spike basis vectors are only available for TESS[1](fn1). For K2, the [PLDCorrector](https://docs.lightkurve.org/tutorials/2-creating-light-curves/2-3-k2-pldcorrector.html) and [SFFCorrector](https://docs.lightkurve.org/tutorials/2-creating-light-curves/2-3-k2-sffcorrector.html) classes might work better than `CBVCorrector`. If you want to just get the CBVs but not generate a CBVCorrector object then use the functions _load_kepler_cbvs_ and _load_tess_cbvs_ within the cbvcorrector module as explained [here](https://docs.lightkurve.org/tutorials/2-creating-light-curves/2-2-how-to-use-cbvs.html). 1 Unfortunately, the Multi-Scale and Spike CBVs are not archived at MAST for Kepler/K2. Applicability of the Over- and Under-fitting Goodness Metrics The under-fitting metric computes the correlation between the corrected light curve and a selection of neighboring SPOC SAP light curves. If the light curve you are trying to correct was not generated by the SPOC pipeline (I.e. not a SAP light curve), then the neighboring SAP light curves might not contain the same instrumental systematics and the under-fitting metric might not properly measure when under-fitting is occurring. The over-fitting metric examines the periodogram of the light curve before and after the correction and is therefore indifferent to how the light curve was generated. It simply looks to see if noise was injected into the light curve. The over-fitting metric is therefore much more generally applicable. The Goodness Metrics are part of the `lightkurve.correctors.metrics` module and can be computed directly with calls to `overfit_metric_lombscargle` and `underfit_metric_neighbors`. A savvy expert user can use these and other quality metrics to generate their own Loss Function for optimizing a fit. Aligning versus Interpolating CBVs By default, all loaded CBVs in `CBVCorrector` are "aligned" to the light curve cadence numbers (`CBVCorrector.lc.cadenceno`). This means only cadence numbers that exist in both the CBVs and the light curve will have values in the returned CBVs. All cadence numbers that exist in the light curve but not in the CBVs will have NaNs returned for the CBVs on those cadences and the Gap Indicator set to True. Any cadences in the CBVs not in the light curve will be removed from the CBVs. If the light curve cadences do not overlap well with the CBVs then you can set `interpolate_cbvs=True` when generating the `CBVCorrector` object. Doing so will generate interpolated CBV values for all cadences in the light curve. If the light curve has cadences past either end of the cadences in the CBVs then one must extrapolate. A second argument, `extrapolate_cbvs`, can be used to also extrapolate the CBV values to the light curve cadences. If `extrapolate_cbvs=False` then the exterior values are set to NaNs, which will probably result in a very poor fit. **Warning**: *The safest method is to align*. This will not generate any new values for the CBVs. Interpolation can be potentially dangerous. 
Interpolation uses Piecewise Cubic Hermite Interpolating Polynomial (PCHIP), which can be more stable than a simple spline, but no interpolation method works in all situations. Extrapolation is even more dangerous, which is why an extra parameter must be set if one desires to extrapolate. *Be sure to manually examine the extrapolated CBVs before use!* Joint Fitting By including the `ext_dm=` parameter in the `correct_*` methods we allow for joint fitting between the CBVs and other design matrices. Generally speaking, if fitting a collection of different models to a system, joint fitting is ideal. For example, if performing transit analysis one could add in a transit model to the joint fit to get the best transit recovery. The fit coefficient to the transit model is stored in the `CBVCorrector` object after fitting and can be recovered. Hyperparameter Optimization Any model fitting should include a hyperparameter optimization step. The `correct_optimizer` is essentially a 1-dimensional optimizer and is very fast. More advanced hyperparameter optimization can be performed by tuning the `alpha` and `l1_ratio` parameters in `correct_elasticnet` plus the number and type of CBVs, along with an external design matrix. The optimization Loss Function can use a combination of the `under_fitting_metric`, `over_fitting_metric` and `lc.estimate_cdpp` methods. Writing such an optimizer is left as an exercise to the reader and to be tuned to the reader's particular application. More Generalized Design Matrix Priors The main [CBVCorrector.correct*](https://docs.lightkurve.org/reference/api/lightkurve.correctors.CBVCorrector.html?highlight=cbvcorrector%20correclightkurve.correctors.CBVCorrector) methods utilize a similar prior for all design matrix vectors as is typically used in L1-Norm and L2-Norm regularization. However, you can fine-tune the correction using more sophisticated priors. After performing a fit with one of the `CBVCorrector.correct*` methods, `CBVCorrector.design_matrix_collection` will have the priors set. One can then manually adjust the priors and use `CBVCorrector.correct_regressioncorrector` to perform the standard [RegressionCorrector.correct](https://docs.lightkurve.org/reference/api/lightkurve.correctors.RegressionCorrector.correct.html?highlight=regressioncorrector%20correctlightkurve.correctors.RegressionCorrector.correct) correction. An illustration is below:
###Code
# 1) Perform an initial optimization with a L2-Norm regularization
# cbvCorrector.correct(cbv_type=cbv_type, cbv_indices=cbv_indices);
# 2) Examine the quality of the resultant lightcurve in cbvcorrector.corrected_lc
# Determine how to adjust the priors and make changes to the design matrix
# cbvCorrector.design_matrix_collection[i].prior_sigma[j] = # ... adjust the priors
# 3) Call the superclass correct method with the adjusted design_matrix_collection
# cbvCorrector.correct_regressioncorrector(cbvCorrector.design_matrix_collection, **kwargs)
###Output
_____no_output_____
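###Markdown
As noted in the Hyperparameter Optimization comments above, a simple scan over the ElasticNet parameters can also be rolled by hand. The cell below is a minimal sketch of such a scan using only methods already demonstrated in this tutorial; the alpha and l1_ratio values are illustrative only, and the loop re-runs the fit each time, so it can be slow.
###Code
# Minimal sketch of a manual hyperparameter scan over correct_elasticnet parameters.
# The alpha / l1_ratio values below are illustrative only.
scan_results = []
for alpha in [1e-8, 1e-7, 1e-6]:
    for l1_ratio in [0.1, 0.5, 0.9]:
        cbvCorrector.correct_elasticnet(cbv_type=['MultiScale.2'], cbv_indices=[np.arange(1, 9)],
                                        alpha=alpha, l1_ratio=l1_ratio)
        scan_results.append((alpha, l1_ratio,
                             cbvCorrector.over_fitting_metric(),
                             cbvCorrector.corrected_lc.estimate_cdpp()))
for alpha, l1_ratio, over_metric, cdpp in scan_results:
    print('alpha={:.0e}, l1_ratio={:.1f}: over-fitting metric={:.2f}, CDPP={}'.format(
        alpha, l1_ratio, over_metric, cdpp))
###Output
_____no_output_____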
###Markdown
The `cbvCorrector.corrected_lc` will now be the result of the fit using whatever `cbvCorrector.design_matrix_collection` you had just provided. NaNs, Units and Normalization NaNs are removed from the light curve when it is used to generate the `CBVCorrector` object, and the cleaned light curve is stored in `CBVCorrector.lc`. The CBVCorrector performs its corrections in absolute flux units (typically electrons per second). The returned corrected light curve `corrected_lc` is also in absolute units and the median flux of the light curve is preserved.
###Code
print('LC unit: {}'.format(cbvCorrector.lc.flux.unit))
print('Corrected LC unit: {}'.format(cbvCorrector.corrected_lc.flux.unit))
###Output
_____no_output_____
###Markdown
The goodness metrics are computed using median normalized units in order to properly calibrate the metric to be between 0.0 and 1.0 for all light curves. The normalization is as follows:
###Code
normalized_lc = cbvCorrector.lc.normalize()
normalized_lc -= 1.0
print('Normalized Light curve units: {} (i.e astropy.units.dimensionless_unscaled)'.format(normalized_lc.flux.unit))
###Output
_____no_output_____ |
The_Battle_of_Neighborhoods.ipynb | ###Markdown
###Code
import pandas as pd
!pip install beautifulsoup4
!pip install lxml
!pip install html5lib
!pip install requests
!pip install geocoder
!pip install geopy
!pip install Nominatim
!pip install folium
!pip install requests
!pip install sklearn
###Output
Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.6/dist-packages (4.6.3)
Requirement already satisfied: lxml in /usr/local/lib/python3.6/dist-packages (4.2.6)
Requirement already satisfied: html5lib in /usr/local/lib/python3.6/dist-packages (1.0.1)
Requirement already satisfied: webencodings in /usr/local/lib/python3.6/dist-packages (from html5lib) (0.5.1)
Requirement already satisfied: six>=1.9 in /usr/local/lib/python3.6/dist-packages (from html5lib) (1.12.0)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (2.21.0)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests) (2019.9.11)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests) (2.8)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests) (3.0.4)
Requirement already satisfied: geocoder in /usr/local/lib/python3.6/dist-packages (1.38.1)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from geocoder) (0.16.0)
Requirement already satisfied: ratelim in /usr/local/lib/python3.6/dist-packages (from geocoder) (0.1.6)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from geocoder) (1.12.0)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from geocoder) (2.21.0)
Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from geocoder) (7.0)
Requirement already satisfied: decorator in /usr/local/lib/python3.6/dist-packages (from ratelim->geocoder) (4.4.0)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->geocoder) (2019.9.11)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->geocoder) (1.24.3)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->geocoder) (2.8)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->geocoder) (3.0.4)
Requirement already satisfied: geopy in /usr/local/lib/python3.6/dist-packages (1.17.0)
Requirement already satisfied: geographiclib<2,>=1.49 in /usr/local/lib/python3.6/dist-packages (from geopy) (1.50)
Requirement already satisfied: Nominatim in /usr/local/lib/python3.6/dist-packages (0.1)
Requirement already satisfied: folium in /usr/local/lib/python3.6/dist-packages (0.8.3)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from folium) (2.21.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from folium) (1.12.0)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.6/dist-packages (from folium) (2.10.3)
Requirement already satisfied: branca>=0.3.0 in /usr/local/lib/python3.6/dist-packages (from folium) (0.3.1)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from folium) (1.16.5)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->folium) (3.0.4)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->folium) (2.8)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->folium) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->folium) (2019.9.11)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2->folium) (1.1.1)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (2.21.0)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests) (1.24.3)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests) (2019.9.11)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests) (2.8)
Requirement already satisfied: sklearn in /usr/local/lib/python3.6/dist-packages (0.0)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from sklearn) (0.21.3)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->sklearn) (0.14.0)
Requirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->sklearn) (1.3.1)
Requirement already satisfied: numpy>=1.11.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->sklearn) (1.16.5)
###Markdown
**Introduction/Business Problem** A contractor wants to start a new fast food business in London. Unfortunately, he has no idea about the right area for this project. So he decided to rely on data analysis to find the appropriate area for this new project, looking in particular at the population density in the various neighborhoods of London as well as the distribution of different venues and facilities across the city. **1. Data acquisition: Data about London Boroughs and populations** In this project we will use data from "https://en.wikipedia.org/wiki/List_of_London_boroughs", but we first need to scrape and clean the data.
###Code
#importing libraries
from bs4 import BeautifulSoup
import requests
import pandas as pd
import numpy as np
#Source of Data
source = requests.get("https://en.wikipedia.org/wiki/List_of_London_boroughs").text
soup=BeautifulSoup(source,'lxml')
#add an empty dataframe
df_london = pd.DataFrame(columns=['Borough','Population','Coord'])
#finding our data and scraping it using beautifulSoup library
table=soup.find('table')
body=table.find('tbody')
for row in body.find_all('tr'):
List=[]
for x in row.find_all('td'):
List.append(x.text)
list_df = pd.DataFrame(List)
#for each row we check if the row has values and borough different to not assigned then we add row to dataframe
if len(List)>0 :
df2 = pd.DataFrame({'Borough':[List[0]], 'Population':[List[7]],'Coord':[List[8]]})
if List[1] != "Not assigned" :
df_london = df_london.append(df2,ignore_index=True)
#Data wrangling and data cleaning :
df_london['Borough']=df_london['Borough'].astype(str).str.replace('\n','')
df_london['Population']=df_london['Population'].astype(str).str.replace('\n','')
df_london[['Latitude','Longitude']] = df_london.Coord.str.split("″N ",expand=True)
df_london['Longitude'] = df_london.Longitude.str.split("/" ,expand=True)
df_london['Borough'] = df_london.Borough.str.split("[" ,expand=True)
df_london['Latitude']=df_london['Latitude'].astype(str).str.replace('°',',')
df_london['Latitude']=df_london['Latitude'].astype(str).str.replace('′','')
df_london['Longitude']=df_london['Longitude'].astype(str).str.replace('°',',')
df_london['Longitude']=df_london['Longitude'].astype(str).str.replace('′','')
df_london['Longitude']=df_london['Longitude'].astype(str).str.replace('″E','')
df_london.loc[df_london.Longitude.astype(str).str.contains('″W'), 'Longitude']='-'+df_london['Longitude'].astype(str)
df_london['Longitude']=df_london['Longitude'].astype(str).str.replace('″W','')
df_london['Longitude']=df_london['Longitude'].astype(str).str.replace('\ufeff','')
df_london['Latitude']=df_london['Latitude'].astype(str).str.replace('\ufeff','')
df_london['Population']=df_london['Population'].astype(str).str.replace(',','')
#replace , by . in order to make transformation object -> String : possible
df_london['Latitude']=df_london['Latitude'].astype(str).str.replace(',','.')
df_london['Longitude']=df_london['Longitude'].astype(str).str.replace(',','.')
#we don't need this column anymore
del df_london['Coord']
#Transformation to numeric forme
df_london['Latitude'] = pd.to_numeric(df_london['Latitude'])
df_london['Longitude'] = pd.to_numeric(df_london['Longitude'])
df_london['Population'] = pd.to_numeric(df_london['Population'])
# convert the coordinates from packed degrees/minutes/seconds (e.g. 51.3355 -> 51° 33' 55") to decimal degrees
my_list1=[]
my_list2=[]
for i, row in df_london.iterrows():
    x=row['Latitude']            # packed value: degrees.minutesseconds
    x2=row['Longitude']
    y=100*(x-int(x))             # minutes
    z=100*(y-int(y))             # seconds
    y2=100*(x2-int(x2))
    z2=100*(y2-int(y2))
    my_list1.append(int(x) + y/60 + z/3600)      # decimal degrees = deg + min/60 + sec/3600
    my_list2.append(int(x2) + y2/60 + z2/3600)
df_london['Latitude'] = my_list1
df_london['Longitude'] = my_list2
#Finally our DATA
df_london.head()
###Output
_____no_output_____
###Markdown
**2. Data analysis : population and clustering**In this section we will use our data and the Foursquare API to gather information about London boroughs that can help us find a solution to the business problem
###Code
import matplotlib.pyplot as plt  # plt is used just below, before the later import cell
df_sorted=df_london[['Borough','Population']].sort_values(by='Population',ascending=True)
df_sorted.set_index('Borough', inplace=True)
df_sorted.tail(5).plot(kind='barh',stacked=True ,figsize=(10, 6))
plt.title('Top 5 Boroughs by Population on 2013')
import folium # map rendering library
import matplotlib.cm as cm
import matplotlib.colors as colors
from geopy.geocoders import Nominatim # convert an address into latitude and longitude values
import matplotlib.pyplot as plt
address = 'London'
geolocator = Nominatim(user_agent="tr_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geograpical coordinate of London City are {}, {}.'.format(latitude, longitude))
# create map of London using latitude and longitude values
map_London = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map''
for lat, lng, borough in zip(df_london['Latitude'], df_london['Longitude'], df_london['Borough']):
label = '{},{}'.format(borough,borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_London)
map_London
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
CLIENT_ID='G5TUS4T0FKE1X5DVH22U4I1SUFQD5FPUYVQVFBS5JVEBTGGU'
CLIENT_SECRET='UZYX5HLCR3G3ZFBIANE5WLF4EY3FSRV5YOYWOMDK2NEPZXYF'
VERSION = '20180604'
LIMIT = 30
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
London_venues = getNearbyVenues(names=df_london['Borough'],
latitudes=df_london['Latitude'],
longitudes=df_london['Longitude']
)
# one hot encoding
London_onehot = pd.get_dummies(London_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
London_onehot['Neighborhood'] = London_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [London_onehot.columns[-1]] + list(London_onehot.columns[:-1])
London_onehot = London_onehot[fixed_columns]
London_onehot.head(20)
London_grouped = London_onehot.groupby('Neighborhood').mean().reset_index()
London_grouped
num_top_venues = 5
for hood in London_grouped['Neighborhood']:
print("----"+hood+"----")
temp = London_grouped[London_grouped['Neighborhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print('\n')
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
import numpy as np
num_top_venues = 5
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = London_grouped['Neighborhood']
for ind in np.arange(London_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(London_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
# import k-means from clustering stage
from sklearn.cluster import KMeans
# set number of clusters
kclusters = 5
London_grouped_clustering = London_grouped.drop('Neighborhood', 1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(London_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
# add clustering labels (needed below when plotting the clusters by 'Cluster Labels')
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
London_merged = df_london
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
London_merged = London_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Borough')
London_merged.head(20) # check the last columns!
import folium
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
#removing nan values
London_merged.dropna(inplace=True)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(London_merged['Latitude'], London_merged['Longitude'], London_merged['Borough'], London_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[int(cluster-1)],
fill=True,
fill_color=rainbow[int(cluster-1)],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____ |
Statistical Mechanics 6 MSD to Diffusion Coefficient.ipynb | ###Markdown
Brownian LimitIn the Brownian limit, the ratio of the mass $m$ of the background particles to that of the selected heavy B particle $M_B$, $\lambda = \frac{m}{M_B}$, becomes small; it is then convenient to divide the particles up into two subgroups because of the enormous difference in time scales of motion of the B and bath particles.In the Brownian limit $\lambda = \sqrt{\frac{m}{M_B}} \rightarrow 0$, the memory function for the heavy particle is given by a delta function in time,$$K_v(t) = \lambda_1 \delta(t)$$or$$\tilde{K_v}(s) = \lambda_1 = \dfrac{\zeta}{M_B} = \gamma$$where $\gamma$ is the friction coefficient and $\zeta$ the friction factor $\zeta = M_B \gamma$. Stokes EinsteinIf Stokes-Einstein holds, then the friction coefficient $\gamma$ is$$\gamma = \dfrac{6 \pi \eta a_i}{m_i}$$$$\gamma = \dfrac{k_B T}{m_i D_s}$$Now writing the chosen particle's velocity $v_i$ as $V_B$ and mass as $M_B$ gives$$M_B \dfrac{d}{dt} V_B(t) = - \zeta V_B(t) + F_{B}^{R}(t)$$and$$\langle F_B^R(0) \rangle = 0 \\\langle F_B^R(0) \cdot F_B^R(t) \rangle = 3 \gamma M_B k_B T \delta(t)$$or$$\langle v_i \cdot v_i \rangle = \dfrac{3k_B T}{m_i}$$
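As a quick numerical illustration of the Stokes-Einstein relations above (a minimal sketch with assumed example values for a micron-sized sphere in water; none of these numbers are taken from the cells below):
###Code
from math import pi

kB  = 1.38e-23   # Boltzmann constant (J/K)
T   = 293        # temperature (K), assumed
eta = 8.9e-4     # dynamic viscosity of water (Pa s), assumed
a   = 0.5e-6     # particle radius (m), assumed
M_B = 5.5e-16    # particle mass (kg), assumed for a ~1 micron polystyrene sphere

zeta  = 6*pi*eta*a    # friction factor (kg/s)
gamma = zeta/M_B      # friction coefficient (1/s)
D_s   = kB*T/zeta     # Stokes-Einstein self-diffusion coefficient (m^2/s)

# consistency check: the two expressions for gamma agree
print(gamma, kB*T/(M_B*D_s))
print("D_s =", D_s, "m^2/s")
###Output
_____no_output_____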
###Code
import numpy as np
import matplotlib.pyplot as plt

Ndim = 2             # number of spatial dimensions
N = 10000            # number of steps
dp = 1e-6            # particle diameter (m)
nu = 8.9e-4          # dynamic viscosity of water (Pa s)
T = 293              # temperature (K)
kB = 1.38e-23        # Boltzmann constant (J/K)
pi = np.pi
T_total = 10000.0    # total simulated time (s); kept separate so it does not overwrite the temperature T
dt = T_total/N
def get_Dtheor(T, Ndim, dp, nu):
Dtheor = (kB*T)/(3*Ndim*pi*dp*nu)
return Dtheor
Dtheor = get_Dtheor(T,Ndim,dp,nu)
print(Dtheor)
# Variance of step size distribution
# (units of m)
var = 2*Dtheor*dt
stdev = np.sqrt(var)
print(stdev)
###Output
4.05610301218e-06
###Markdown
Verification of the Diffusion CoefficientWe are simulating random walks (integrating a single random realization of a random diffusion walk) using some parameter to control the distribution of step size. This distribution results in a diffusion coefficient.We can verify that the diffusion coefficient we back out from the realizations of random walks matches the theoretical diffusion coefficient.To back out the diffusion coefficient from MSD:* Compute MSD versus lag time* Plot MSD versus lag time* Fit data to line - displacement vs. time* This velocity is proportional to $v \sim \dfrac{2D}{\delta t}$[This page](https://tinevez.github.io/msdanalyzer/tutorial/MSDTuto_brownian.html) mentions a reference for the 2D/t relation, which is also derived in the stat mech textbook mentioned in notebook 4, and is also derived (third method) in the brownian motion notes Z sent me.
###Code
# Single random diffusion walk
# mean 0, std dev computed above
dx = stdev*np.random.randn(N,)
dy = stdev*np.random.randn(N,)
x = np.cumsum(dx)
y = np.cumsum(dy)
plt.plot(x, y, '-')
plt.xlabel('x'); plt.ylabel('y');
plt.title("Brownian Motion 2D Walk")
plt.show()
# Compute MSD versus lag time
# 0 to sqrt(N) avoids bias of longer lag times
upper = int(round(np.sqrt(N)))
msd = np.zeros(upper,)
lag = np.zeros(upper,)
for i, p in enumerate(range(1,upper+1)):
lagtime = dt*p
delx = ( x[p:] - x[:-p] )
dely = ( y[p:] - y[:-p] )
msd[i] = np.mean(delx*delx + dely*dely)
lag[i] = lagtime
m, b = np.polyfit(lag, msd, 1)
plt.loglog(lag, msd, 'o')
plt.loglog(lag, m*lag+b, '--k')
plt.xlabel('Lag time (s)')
plt.ylabel('MSD (m)')
plt.title('Linear Fit: MSD vs. Lag Time')
plt.show()
print("linear fit:")
print("Slope = %0.2g"%(m))
print("Intercept = %0.2g"%(b))
###Output
_____no_output_____
###Markdown
**NOTE:** If the total time being simulated *decreases* such that timesteps are on the order of $10^{-1}$ or $10^{-2}$, the scale of the MSD becomes $10^{-14}$ and numerical error becomes significant.
###Code
# Slope is:
# v = dx / dt
# v = 2 D / dt
# Rearrange:
# D = v * dt / 2
v = m
Dempir = (v*dt)/2
err = (np.abs(Dtheor-Dempir)/Dtheor)*100
print("Theoretical D:\t%0.4g"%(Dtheor))
print("Empirical D:\t%0.4g"%(Dempir))
print("Percent Error:\t%0.4g"%(err))
print("\nNote: this result is from a single realization. Taking an ensemble yields a more accurate predicted D.")
def msd_ensemble(T, Ndim, dp, nu, N, Nwalks):
Dtheor = get_Dtheor(T, Ndim, dp, nu)
ms = []
msds = []
msdxs = []
msdys = []
lags = []
for w in range(Nwalks):
# Single random diffusion walk
# mean 0, std dev computed above
dx = stdev*np.random.randn(N,)
dy = stdev*np.random.randn(N,)
# accumulate
x = np.cumsum(dx)
y = np.cumsum(dy)
# Compute MSD versus lag time
# 0 to sqrt(N) avoids bias of longer lag times
upper = int(round(np.sqrt(N)))
msd = np.zeros(upper,)
msdx = np.zeros(upper,)
msdy = np.zeros(upper,)
lag = np.zeros(upper,)
for i, p in enumerate(range(1,upper+1)):
lagtime = dt*p
delx = ( x[p:] - x[:-p] )
dely = ( y[p:] - y[:-p] )
msd[i] = np.mean((delx*delx + dely*dely)/2)
msdx[i] = np.mean(delx*delx)
msdy[i] = np.mean(dely*dely)
lag[i] = lagtime
slope, _ = np.polyfit(lag, msd, 1)
ms.append( slope )
msds.append( msd )
msdxs.append(msdx)
msdys.append(msdy)
lags.append( lag )
return (ms, msds, msdxs, msdys, lags)
Ndim = 2
N = 10000
dp = 1e-6
nu = 8.9e-4
T = 293
kB = 1.38e-23
pi = np.pi
T_total = 10000.0    # total simulated time (s); kept separate from the temperature T
dt = T_total/N
Nwalks = 1000
slopes, msds, msdxs, msdys, lags = msd_ensemble(T, Ndim, dp, nu, N, Nwalks)
Dempir = np.mean((np.array(slopes)*dt)/2)
err = (np.abs(Dtheor-Dempir)/Dtheor)*100
print("Theoretical D:\t%0.4g"%(Dtheor))
print("Empirical D:\t%0.4g"%(Dempir))
print("Percent Error:\t%0.4g%%"%(err))
print("\nUsing an ensemble of %d particles greatly improves accuracy of predicted D."%(N))
for i, (msd, lag) in enumerate(zip(msdxs,lags)):
if(i>200):
break
plt.loglog(lag,msd,'b',alpha=0.1)
for i, (msd, lag) in enumerate(zip(msdys,lags)):
if(i>200):
break
plt.loglog(lag,msd,'r',alpha=0.1)
for i, (msd, lag) in enumerate(zip(msds,lags)):
if(i>200):
break
plt.loglog(lag,msd,'k',alpha=0.1)
plt.xlabel('Lag Time (m)')
plt.ylabel('MSD (s)')
plt.title('MSD vs Lag Time: \nMSD X (blue), MSD Y (red), MSD MAG (black)')
plt.show()
###Output
_____no_output_____ |
bronze/.ipynb_checkpoints/B30_Visulization_of_a_Qubit-checkpoint.ipynb | ###Markdown
Abuzer Yakaryilmaz | June 16, 2019 (updated) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle #1|} $$ \newcommand{\ket}[1]{|#1\rangle} $$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $$ \newcommand{\dot}[2]{ #1 \cdot #2} $$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $$ \newcommand{\mypar}[1]{\left( #1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ Visualization of a (Real-Valued) Qubit We use certain tools from the python library "matplotlib.pyplot" for drawing. Check the notebook "Python: Drawing" for the list of these tools. Suppose that we have a single qubit. Each possible (real-valued) quantum state of this qubit is a point in 2-dimensional space. It can also be represented as a vector from the origin to that point. We start with the visual representation of the following quantum states: $$ \ket{0} = \myvector{1\\0}, ~~ \ket{1} = \myvector{0\\1} , ~~ -\ket{0} = \myrvector{-1\\0}, ~~\mbox{and}~~ -\ket{1} = \myrvector{0\\-1}. $$ We first draw these quantum states as points.
###Code
# import the drawing methods
from matplotlib.pyplot import plot, figure
# draw a figure
figure(figsize=(6,6), dpi=60)
# draw the origin
plot(0,0,'ro') # a point in red color
# draw the quantum states as points (in blue color)
plot(1,0,'bo')
plot(0,1,'bo')
plot(-1,0,'bo')
plot(0,-1,'bo')
###Output
_____no_output_____
###Markdown
Then, we draw the points and the axes together. We have a predefined function for drawing axes: "draw_axes()". We include our predefined functions with the following line of code: %run qlatvia.py
###Code
# import the drawing methods
from matplotlib.pyplot import plot, figure
# draw a figure
figure(figsize=(6,6), dpi=80)
# include our predefined functions
%run qlatvia.py
# draw the axes
draw_axes()
# draw the origin
plot(0,0,'ro') # a point in red color
# draw these quantum states as points (in blue color)
plot(1,0,'bo')
plot(0,1,'bo')
plot(-1,0,'bo')
plot(0,-1,'bo')
###Output
_____no_output_____
###Markdown
Now, we draw the quantum states as vectors by also showing axes:
###Code
# import the drawing methods
from matplotlib.pyplot import figure, arrow
# draw a figure
figure(figsize=(6,6), dpi=80)
# include our predefined functions
%run qlatvia.py
# draw the axes
draw_axes()
# draw the quantum states as vectors (in blue color)
arrow(0,0,0.92,0,head_width=0.04, head_length=0.08, color="blue")
arrow(0,0,0,0.92,head_width=0.04, head_length=0.08, color="blue")
arrow(0,0,-0.92,0,head_width=0.04, head_length=0.08, color="blue")
arrow(0,0,0,-0.92,head_width=0.04, head_length=0.08, color="blue")
###Output
_____no_output_____
###Markdown
Task 1 Write a function that returns a randomly created 2-dimensional (real-valued) quantum state. You may use your code written for a task given in notebook "Quantum States". Create 100 random quantum states by using your function, and then draw all of them as points.
###Code
# import the drawing methods
from matplotlib.pyplot import plot, figure
# draw a figure
figure(figsize=(6,6), dpi=60)
# draw the origin
plot(0,0,'ro')
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 2 Repeat the previous task by drawing the quantum states as vectors (arrows) instead of points.Please keep the codes below for drawing axes for getting a better visual focus.
###Code
# import the drawing methods
from matplotlib.pyplot import plot, figure, arrow
# draw a figure
figure(figsize=(6,6), dpi=60)
# include our predefined functions
%run qlatvia.py
# draw the axes
draw_axes()
# draw the origin
plot(0,0,'ro')
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Unit circle All quantum states of a qubit form the unit circle. The length of each quantum state is 1. All points that are 1 unit away from the origin form the circle with radius 1 unit. We can draw the unit circle with python. We have a predefined function for drawing the unit circle: "draw_unit_circle()".
###Code
# import the drawing methods
from matplotlib.pyplot import figure
figure(figsize=(6,6), dpi=80) # size of the figure
# include our predefined functions
%run qlatvia.py
# draw axes
draw_axes()
# draw the unit circle
draw_unit_circle()
###Output
_____no_output_____
###Markdown
Quantum state of a qubit Suppose that we have a single qubit. Each possible (real-valued) quantum state of this qubit is a point in 2-dimensional space. It can also be represented as a vector from the origin to that point. We draw the quantum state $ \myvector{3/5 \\ 4/5} $ and its elements. Our predefined function "draw_qubit()" draws a figure, the origin, the axes, the unit circle, and base quantum states. Our predefined function "draw_quantum_state(x,y,name)" draws an arrow from (0,0) to (x,y) and associates it with name. We include our predefined functions with the following line of code: %run qlatvia.py
###Code
%run qlatvia.py
draw_qubit()
draw_quantum_state(3/5,4/5,"|v>")
###Output
_____no_output_____
###Markdown
Now, we draw its angle with $ \ket{0} $-axis and its projections on both axes. For drawing the angle, we use the method "Arc" from library "matplotlib.patches".
###Code
%run qlatvia.py
draw_qubit()
draw_quantum_state(3/5,4/5,"|v>")
from matplotlib.pyplot import arrow, text, gca
# the projection on |0>-axis
arrow(0,0,3/5,0,color="blue",linewidth=1.5)
arrow(0,4/5,3/5,0,color="blue",linestyle='dotted')
text(0.1,-0.1,"cos(a)=3/5")
# the projection on |1>-axis
arrow(0,0,0,4/5,color="blue",linewidth=1.5)
arrow(3/5,0,0,4/5,color="blue",linestyle='dotted')
text(-0.1,0.55,"sin(a)=4/5",rotation="90")
# drawing the angle with |0>-axis
from matplotlib.patches import Arc
gca().add_patch( Arc((0,0),0.4,0.4,angle=0,theta1=0,theta2=53) )
text(0.08,0.05,'.',fontsize=30)
text(0.21,0.09,'a')
###Output
_____no_output_____
###Markdown
Observations: The angle of the quantum state with the state $ \ket{0} $ is $a$. The amplitude of state $ \ket{0} $ is $ \cos(a) = \frac{3}{5} $. The probability of observing state $ \ket{0} $ is $ \cos^2(a) = \frac{9}{25} $. The amplitude of state $ \ket{1} $ is $ \sin(a) = \frac{4}{5} $. The probability of observing state $ \ket{1} $ is $ \sin^2(a) = \frac{16}{25} $. Now we consider the following two quantum states: $ \ket{v_1} = \myvector{3/5 \\ 4/5} \mbox{ and } \ket{v_2} = \myrvector{3/5 \\ -4/5}. $ We draw them with their elements:
###Code
%run qlatvia.py
draw_qubit()
draw_quantum_state(3/5,4/5,"|v1>")
draw_quantum_state(3/5,-4/5,"|v2>")
from matplotlib.pyplot import arrow, text, gca
# the projection on |0>-axis
arrow(0,0,3/5,0,color="blue",linewidth=1.5)
arrow(0,0,3/5,0,color="blue",linewidth=1.5)
arrow(0,4/5,3/5,0,color="blue",linestyle='dotted')
arrow(0,-4/5,3/5,0,color="blue",linestyle='dotted')
text(0.36,0.05,"cos(a)=3/5")
text(0.36,-0.09,"cos($-$a)=3/5")
# the projection on |1>-axis
arrow(0,0,0,4/5,color="blue",linewidth=1.5)
arrow(0,0,0,-4/5,color="blue",linewidth=1.5)
arrow(3/5,0,0,4/5,color="blue",linestyle='dotted')
arrow(3/5,0,0,-4/5,color="blue",linestyle='dotted')
text(-0.1,0.55,"sin(a)=4/5",rotation="90")
text(-0.14,-0.2,"sin($-$a)=$-$4/5",rotation="270")
# drawing the angle with |0>-axis
from matplotlib.patches import Arc
gca().add_patch( Arc((0,0),0.4,0.4,angle=0,theta1=0,theta2=53) )
text(0.08,0.05,'.',fontsize=30)
text(0.21,0.09,'a')
gca().add_patch( Arc((0,0),0.4,0.4,angle=0,theta1=-53,theta2=0) )
text(0.08,-0.07,'.',fontsize=30)
text(0.19,-0.12,'$-$a')
###Output
_____no_output_____
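###Markdown
Before reading the observations below, here is a quick numeric sanity check (a small sketch, not part of the original exercises) that the squared amplitudes of $ \ket{v_1} $ and $ \ket{v_2} $ give probabilities that sum to 1:
###Code
# the drawn states are (3/5, 4/5) and (3/5, -4/5)
states = {"|v1>": (3/5, 4/5), "|v2>": (3/5, -4/5)}
for name, (a0, a1) in states.items():
    print(name, " prob(0) =", a0**2, " prob(1) =", a1**2, " total =", a0**2 + a1**2)
###Output
_____no_output_____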
###Markdown
Observations: The angle between $ \ket{v_2} $ and $ \ket{0} $ is still $a$, but the angle of $ \ket{v_2} $ is $ -a $. As $ \cos(x) = \cos(-x) $, we have $ \cos(a) = \cos(-a) = \frac{3}{5} $. As $ \sin(x) = -\sin(-x) $, we have $ \sin(a) = \frac{4}{5} $ and $ \sin(-a) = - \frac{4}{5} $. Remark: For any angle $ x $ (in radians), we have$$ \cdots = \sin(x+4\pi) = \sin(x+2\pi) = \sin(x) = \sin(x-2\pi) = \sin(x-4\pi) = \cdots $$$$ \cdots = \cos(x+4\pi) = \cos(x+2\pi) = \cos(x) = \cos(x-2\pi) = \cos(x-4\pi) = \cdots . $$Thus, the angle of $ \ket{v_2} $ can also be referred to as $2\pi-a$. The angle of a quantum state The angle of a vector (in radians) on the unit circle is the length of the arc, measured in the counter-clockwise direction, that starts from $ (1,0) $ and ends at the point representing the vector. We execute the following code a couple of times to see different examples, where the angle is picked randomly in each run. You can also set the value of "myangle" manually for seeing a specific angle.
###Code
# set the angle
from random import randrange
myangle = randrange(361)
################################################
from matplotlib.pyplot import figure,gca
from matplotlib.patches import Arc
from math import sin,cos,pi
# draw a figure
figure(figsize=(6,6), dpi=60)
%run qlatvia.py
draw_axes()
print("the selected angle is",myangle,"degrees")
ratio_of_arc = ((1000*myangle/360)//1)/1000
print("it is",ratio_of_arc,"of a full circle")
print("its length is",ratio_of_arc,"x 2\u03C0","=",ratio_of_arc*2*pi)
myangle_in_radian = 2*pi*(myangle/360)
print("its radian value is",myangle_in_radian)
gca().add_patch( Arc((0,0),0.2,0.2,angle=0,theta1=0,theta2=myangle,color="red") )
gca().add_patch( Arc((0,0),2,2,angle=0,theta1=0,theta2=myangle,color="blue") )
x = cos(myangle_in_radian)
y = sin(myangle_in_radian)
draw_quantum_state(x,y,"|v>")
###Output
the selected angle is 302 degrees
it is 0.838 of a full circle
its length is 0.838 x 2π = 5.265309287416493
its radian value is 5.270894341022875
###Markdown
Random quantum states Any quantum state of a (real-valued) qubit is a point on the unit circle. We use this fact to create random quantum states by picking a random point on the unit circle. For this purpose, we randomly pick an angle between zero and 360 degrees and then find the amplitudes of the quantum state by using the basic trigonometric functions. Task 3 Define a function randomly creating a quantum state based on this idea. Randomly create a quantum state by using this function. Draw the quantum state on the unit circle. Repeat the task a few times. Randomly create 100 quantum states and draw all of them without labeling. You can save your function for later use: uncomment the first line (%%writefile), give an appropriate file name, and then run the cell.
###Code
# %%writefile FILENAME.py
# your function is here
from math import cos, sin, pi
from random import randrange
def random_quantum_state2():
#
# your codes are here
#
###Output
_____no_output_____
###Markdown
Our predefined function "draw_qubit()" draws a figure, the origin, the axes, the unit circle, and base quantum states.Our predefined function "draw_quantum_state(x,y,name)" draws an arrow from (0,0) to (x,y) and associates it with name.We include our predefined functions with the following line of code: %run qlatvia.py
###Code
# visually test your function
%run qlatvia.py
draw_qubit()
#
# your solution is here
#
# draw_quantum_state(x,y,"")
###Output
_____no_output_____ |
Session_3/4_fsm_generator.ipynb | ###Markdown
Finite State Machine GeneratorThis notebook will show how to use the Finite State Machine (FSM) Generator to generate a state machine. The FSM we will build is a Gray code counter. The counter has three state bits and can count up or down through eight states. The counter outputs are Gray coded, meaning that there is only a single-bit transition between the output vector of any state and its next states. Step 1: Download the `logictools` overlay
###Code
from pynq.overlays.logictools import LogicToolsOverlay
from pynq.lib.logictools import FSMGenerator
logictools_olay = LogicToolsOverlay('logictools.bit')
###Output
_____no_output_____
###Markdown
Step 2: Specify the FSM
###Code
fsm_spec = {'inputs': [('reset','D0'), ('direction','D1')],
'outputs': [('bit2','D3'), ('bit1','D4'), ('bit0','D5')],
'states': ['S0', 'S1', 'S2', 'S3', 'S4', 'S5', 'S6', 'S7'],
'transitions': [['01', 'S0', 'S1', '000'],
['00', 'S0', 'S7', '000'],
['01', 'S1', 'S2', '001'],
['00', 'S1', 'S0', '001'],
['01', 'S2', 'S3', '011'],
['00', 'S2', 'S1', '011'],
['01', 'S3', 'S4', '010'],
['00', 'S3', 'S2', '010'],
['01', 'S4', 'S5', '110'],
['00', 'S4', 'S3', '110'],
['01', 'S5', 'S6', '111'],
['00', 'S5', 'S4', '111'],
['01', 'S6', 'S7', '101'],
['00', 'S6', 'S5', '101'],
['01', 'S7', 'S0', '100'],
['00', 'S7', 'S6', '100'],
['1-', '*', 'S0', '']]}
###Output
_____no_output_____
###Markdown
__Notes on the FSM specification format__ Each transition in `fsm_spec` is given as [input bits, current state, next state, output bits]: the input bits follow the order of the declared inputs (reset, direction), '-' marks a don't-care input bit, '*' matches any current state, and the output bits are, in this example, the Gray-coded value of the current state. Step 3: Instantiate the FSM generator object
###Code
fsm_generator = logictools_olay.fsm_generator
###Output
_____no_output_____
###Markdown
__Setup to use trace analyzer__ In this notebook, the trace analyzer is used to check the inputs and outputs of the FSM. Users can choose whether to use the trace analyzer by calling the `trace()` method.
###Code
fsm_generator.trace()
###Output
_____no_output_____
###Markdown
Step 5: Setup the FSM generatorThe FSM generator will work at the default frequency of 10MHz. This can be modified using a `frequency` argument in the `setup()` method.
###Code
fsm_generator.setup(fsm_spec)
###Output
_____no_output_____
###Markdown
__Display the FSM state diagram__ This method should only be called after the generator has been properly set up.
###Code
fsm_generator.show_state_diagram()
###Output
_____no_output_____
###Markdown
__Set up the FSM inputs on the PYNQ board__* Check that the reset and direction inputs are correctly wired on the PYNQ board, as shown below: * Connect D0 to GND * Connect D1 to 3.3V __Notes:__ * The 3-bit Gray code counter is an up-down, wrap-around counter that will count from states 000 to 100 in either ascending or descending order * The reset input is connected to pin D0 of the Arduino connector * Connect the reset input to GND for normal operation * When the reset input is set to logic 1 (3.3V), the counter resets to state 000 * The direction input is connected to pin D1 of the Arduino connector * When the direction is set to logic 0, the counter counts down * Conversely, when the direction input is set to logic 1, the counter counts up Step 6: Run and display waveformThe ` run()` method will execute all the samples, `show_waveform()` method is used to display the waveforms
###Code
fsm_generator.run()
fsm_generator.show_waveform()
###Output
_____no_output_____
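###Markdown
As a cross-check (a small sketch, independent of the overlay API), the expected 3-bit Gray code sequence listed in the table below can be generated with the standard formula `gray(i) = i ^ (i >> 1)`:
###Code
# print the eight states of a 3-bit Gray code counter
for i in range(8):
    gray = i ^ (i >> 1)
    print('S{}: {:03b}'.format(i, gray))
###Output
_____no_output_____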
###Markdown
Verify the trace output against the expected Gray code count sequence| State | FSM output bits: bit2, bit1, bit0 ||:-----:|:----------------------------------------:|| s0 | 000 || s1 | 001 || s2 | 011 || s3 | 010 || s4 | 110 || s5 | 111 || s6 | 101 || s7 | 100 | Step 7: Stop the FSM generatorCalling `stop()` will clear the logic values on output pins; however, the waveform will be recorded locally in the FSM instance.
###Code
fsm_generator.stop()
###Output
_____no_output_____
###Markdown
Finite State Machine GeneratorThis notebook will show how to use the Finite State Machine (FSM) Generator to generate a state machine. The FSM we will build is a Gray code counter. The counter has three state bits and can count up or down through eight states. The counter outputs are Gray coded, meaning that there is only a single-bit transition between the output vector of any state and its next states. Step 1: Download the `logictools` overlay
###Code
from pynq.overlays.logictools import LogicToolsOverlay
from pynq.lib.logictools import FSMGenerator
logictools_olay = LogicToolsOverlay('logictools.bit')
###Output
_____no_output_____
###Markdown
Step 2: Specify the FSM
###Code
fsm_spec = {'inputs': [('reset','D0'), ('direction','D1')],
'outputs': [('bit2','D3'), ('bit1','D4'), ('bit0','D5')],
'states': ['S0', 'S1', 'S2', 'S3', 'S4', 'S5', 'S6', 'S7'],
'transitions': [['01', 'S0', 'S1', '000'],
['00', 'S0', 'S7', '000'],
['01', 'S1', 'S2', '001'],
['00', 'S1', 'S0', '001'],
['01', 'S2', 'S3', '011'],
['00', 'S2', 'S1', '011'],
['01', 'S3', 'S4', '010'],
['00', 'S3', 'S2', '010'],
['01', 'S4', 'S5', '110'],
['00', 'S4', 'S3', '110'],
['01', 'S5', 'S6', '111'],
['00', 'S5', 'S4', '111'],
['01', 'S6', 'S7', '101'],
['00', 'S6', 'S5', '101'],
['01', 'S7', 'S0', '100'],
['00', 'S7', 'S6', '100'],
['1-', '*', 'S0', '']]}
###Output
_____no_output_____
###Markdown
__Notes on the FSM specification format__ Step 3: Instantiate the FSM generator object
###Code
fsm_generator = logictools_olay.fsm_generator
###Output
_____no_output_____
###Markdown
__Setup to use trace analyzer__ In this notebook, the trace analyzer is used to check the inputs and outputs of the FSM. Users can choose whether to use the trace analyzer by calling the `trace()` method.
###Code
fsm_generator.trace()
###Output
_____no_output_____
###Markdown
Step 5: Setup the FSM generatorThe FSM generator will work at the default frequency of 10MHz. This can be modified using a `frequency` argument in the `setup()` method.
###Code
fsm_generator.setup(fsm_spec)
###Output
_____no_output_____
###Markdown
__Display the FSM state diagram__ This method should only be called after the generator has been properly set up.
###Code
fsm_generator.show_state_diagram()
###Output
_____no_output_____
###Markdown
__Set up the FSM inputs on the PYNQ board__* Check that the reset and direction inputs are correctly wired on the PYNQ board, as shown below:| | PYNQ-Z1 | PYNQ-Z2 ||--|---------|---------|| GND | D0 | AR0 || 3.3V | D1 | AR1 | __Notes:__ * The 3-bit Gray code counter is an up-down, wrap-around counter that will count from states 000 to 100 in either ascending or descending order * The reset input is connected to pin D0/AR0 of the Arduino connector * Connect the reset input to GND for normal operation * When the reset input is set to logic 1 (3.3V), the counter resets to state 000 * The direction input is connected to pin D1/AR1 of the Arduino connector * When the direction is set to logic 0, the counter counts down * Conversely, when the direction input is set to logic 1, the counter counts up Step 6: Run and display waveformThe ` run()` method will execute all the samples, `show_waveform()` method is used to display the waveforms
###Code
fsm_generator.run()
fsm_generator.show_waveform()
###Output
_____no_output_____
###Markdown
Verify the trace output against the expected Gray code count sequence| State | FSM output bits: bit2, bit1, bit0 ||:-----:|:----------------------------------------:|| s0 | 000 || s1 | 001 || s2 | 011 || s3 | 010 || s4 | 110 || s5 | 111 || s6 | 101 || s7 | 100 | Step 7: Stop the FSM generatorCalling `stop()` will clear the logic values on output pins; however, the waveform will be recorded locally in the FSM instance.
###Code
fsm_generator.stop()
###Output
_____no_output_____ |
notebooks/dev/old_notebooks/test_preprocessing.ipynb | ###Markdown
Check! Computed BPs correctly
###Code
unnorm_features[0, 0, 0, -1] # SOLIN
aqua['SOLIN'][1, 0, 0]
unnorm_features[0, 0, 0, -2] # PS
aqua['PS'][0, 0, 0]
###Output
_____no_output_____
###Markdown
Check! Took the correct time steps for PS
###Code
norm['target_names'][:]
###Output
_____no_output_____
###Markdown
Now check the targets.
###Code
targets = nc.Dataset(target_fn); targets
targets['target_names'][:]
plt.plot(np.mean(targets['targets'][:], axis=0))
###Output
_____no_output_____
###Markdown
Now check the pure crm inputs file
###Code
feature_fn2 = '/beegfs/DATA/pritchard/srasp/preprcessed_data/full_physics_essentials_train_test2_features.nc'
target_fn2 = '/beegfs/DATA/pritchard/srasp/preprcessed_data/full_physics_essentials_train_test2_targets.nc'
norm_fn2 = '/beegfs/DATA/pritchard/srasp/preprcessed_data/full_physics_essentials_train_test2_norm.nc'
features2 = nc.Dataset(feature_fn2)
norm2 = nc.Dataset(norm_fn2)
features2['feature_names'][:]
unnorm_features2 = features2['features'][:] * norm2['feature_stds'][:] + norm2['feature_means'][:]
unnorm_features2.shape
# sample, lev --> [time, lat, lon]
unnorm_features2 = unnorm_features2.reshape(-1, 64, 128, 152)
# T_C = TAP[t-1] - DTV[t-1] * dt
ctrl_T_C = aqua['TAP'][:-1] - aqua['DTV'][:-1] * 1800.
unnorm_features2[0, 0, 0, :5]
ctrl_T_C[0, :5, 0, 0]
# adiab = (TBP - TC)/dt
ctrl_T_C.shape, ctrl_TBP.shape
ctrl_adiab = (ctrl_TBP - ctrl_T_C) / 1800.
features2['feature_names'][90]
unnorm_features2[0, 0, 0, 90:95]
ctrl_adiab[0, :4, 0, 0]
###Output
_____no_output_____ |
mercari/mercari-interactive-eda-topic-modelling.ipynb | ###Markdown
**Introduction**This is an initial Exploratory Data Analysis for the [Mercari Price Suggestion Challenge](https://www.kaggle.com/c/mercari-price-suggestion-challengedescription) with matplotlib, [bokeh](https://bokeh.pydata.org/en/latest/) and [Plot.ly](https://plot.ly/feed/) - visualization tools that create beautiful interactive plots and dashboards. The competition is hosted by Mercari, the biggest Japanese community-powered shopping app, with the main objective to predict an accurate price that Mercari should suggest to its sellers, given the item's information. ***Update***: The abundant amount of food from my family's Thanksgiving dinner has really energized me to continue working on this model. I decided to dive deeper into the NLP analysis and found an amazing tutorial by Ahmed BESBES. The framework below is based on his [source code](https://ahmedbesbes.com/how-to-mine-newsfeed-data-and-extract-interactive-insights-in-python.html). It provides guidance on pre-processing documents and on machine learning techniques (K-means and LDA) for clustering topics. This kernel will therefore be divided into 2 parts: 1. Exploratory Data Analysis 2. Text Processing 2.1. Tokenizing and tf-idf algorithm 2.2. K-means Clustering 2.3. Latent Dirichlet Allocation (LDA) / Topic Modelling
###Code
import nltk
import string
import re
import numpy as np
import pandas as pd
import pickle
#import lda
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="white")
from nltk.stem.porter import *
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.corpus import stopwords
from sklearn.feature_extraction import stop_words
from collections import Counter
from wordcloud import WordCloud
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.tools as tls
%matplotlib inline
import bokeh.plotting as bp
from bokeh.models import HoverTool, BoxSelectTool
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure, show, output_notebook
#from bokeh.transform import factor_cmap
import warnings
warnings.filterwarnings('ignore')
import logging
logging.getLogger("lda").setLevel(logging.WARNING)
###Output
_____no_output_____
###Markdown
**Exploratory Data Analysis**On the first look at the data, besides the unique identifier (item_id), there are 7 variables in this model. This notebook will sequentially go through each of them with a brief statistical summary. 1. **Numerical/Continuous Features**: 1. price: the item's final bidding price. This will be our response / dependent variable that we need to predict in the test set 2. shipping cost 2. **Categorical Features**: 1. shipping: a binary indicator, 1 if the shipping fee is paid by the seller and 0 if it's paid by the buyer 2. item_condition_id: the condition of the item provided by the seller 3. name: the item's name 4. brand_name: the item's producer brand name 5. category_name: the item's single or multiple categories, separated by "/" 6. item_description: a short description of the item that may include removed words, flagged by [rm]
###Code
PATH = "../input/"
train = pd.read_csv(f'{PATH}train.tsv', sep='\t')
test = pd.read_csv(f'{PATH}test.tsv', sep='\t')
# size of training and dataset
print(train.shape)
print(test.shape)
# different data types in the dataset: categorical (strings) and numeric
train.dtypes
train.head()
###Output
_____no_output_____
###Markdown
Target Variable: **Price** The next standard check is on our response or target variable, which in this case is the `price` we are suggesting to Mercari's marketplace sellers. The median price of all the items in the training set is only about \$17, but given the existence of some extreme values of over \$100 and the maximum at \$2,009, the distribution of the variable is heavily right-skewed (most items are cheap, with a long tail of expensive ones). So let's apply a log-transformation to the price (we added +1 to the value before the transformation to avoid zero and negative values).
###Code
train.price.describe()
plt.subplot(1, 2, 1)
(train['price']).plot.hist(bins=50, figsize=(20,10), edgecolor='white',range=[0,250])
plt.xlabel('price+', fontsize=17)
plt.ylabel('frequency', fontsize=17)
plt.tick_params(labelsize=15)
plt.title('Price Distribution - Training Set', fontsize=17)
plt.subplot(1, 2, 2)
np.log(train['price']+1).plot.hist(bins=50, figsize=(20,10), edgecolor='white')
plt.xlabel('log(price+1)', fontsize=17)
plt.ylabel('frequency', fontsize=17)
plt.tick_params(labelsize=15)
plt.title('Log(Price) Distribution - Training Set', fontsize=17)
plt.show()
###Output
_____no_output_____
###Markdown
**Shipping**The shipping cost burden is fairly evenly split between sellers and buyers, with more than half of the items' shipping fees paid by the sellers (55%). In addition, the average price paid by users who have to pay for shipping is lower than for those items that don't require an additional shipping cost. This matches our perception that sellers need a lower price to compensate for the additional shipping.
###Code
train.shipping.value_counts()/len(train)
prc_shipBySeller = train.loc[train.shipping==1, 'price']
prc_shipByBuyer = train.loc[train.shipping==0, 'price']
fig, ax = plt.subplots(figsize=(20,10))
ax.hist(np.log(prc_shipBySeller+1), color='#8CB4E1', alpha=1.0, bins=50,
label='Price when Seller pays Shipping')
ax.hist(np.log(prc_shipByBuyer+1), color='#007D00', alpha=0.7, bins=50,
label='Price when Buyer pays Shipping')
ax.set(title='Histogram Comparison', ylabel='% of Dataset in Bin')
plt.xlabel('log(price+1)', fontsize=17)
plt.ylabel('frequency', fontsize=17)
plt.title('Price Distribution by Shipping Type', fontsize=17)
plt.tick_params(labelsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
**Item Category**There are about **1,287** unique categories, but within each of them we will always see a main/general category first, followed by two more specific subcategories (e.g. Beauty/Makeup/Face or Lips). In addition, there are about 6,327 items that do not have a category label. Let's split the categories into three different columns. We will see later that this information is actually quite important from the seller's point of view, and how we handle the missing information in the `brand_name` column will impact the model's prediction.
###Code
print("There are %d unique values in the category column." % train['category_name'].nunique())
# TOP 5 RAW CATEGORIES
train['category_name'].value_counts()[:5]
# missing categories
print("There are %d items that do not have a label." % train['category_name'].isnull().sum())
# reference: BuryBuryZymon at https://www.kaggle.com/maheshdadhich/i-will-sell-everything-for-free-0-55
def split_cat(text):
try: return text.split("/")
except: return ("No Label", "No Label", "No Label")
train['general_cat'], train['subcat_1'], train['subcat_2'] = \
zip(*train['category_name'].apply(lambda x: split_cat(x)))
train.head()
# repeat the same step for the test set
test['general_cat'], test['subcat_1'], test['subcat_2'] = \
zip(*test['category_name'].apply(lambda x: split_cat(x)))
print("There are %d unique first sub-categories." % train['subcat_1'].nunique())
print("There are %d unique second sub-categories." % train['subcat_2'].nunique())
###Output
_____no_output_____
###Markdown
Overall, we have **7 main categories** (114 unique first sub-categories and 871 unique second sub-categories), with women's and beauty items as the two most popular categories (more than 50% of the observations), followed by kids and electronics.
###Code
x = train['general_cat'].value_counts().index.values.astype('str')
y = train['general_cat'].value_counts().values
pct = [("%.2f"%(v*100))+"%"for v in (y/len(train))]
trace1 = go.Bar(x=x, y=y, text=pct)
layout = dict(title= 'Number of Items by Main Category',
yaxis = dict(title='Count'),
xaxis = dict(title='Category'))
fig=dict(data=[trace1], layout=layout)
py.iplot(fig)
x = train['subcat_1'].value_counts().index.values.astype('str')[:15]
y = train['subcat_1'].value_counts().values[:15]
pct = [("%.2f"%(v*100))+"%"for v in (y/len(train))][:15]
trace1 = go.Bar(x=x, y=y, text=pct,
marker=dict(
color = y,colorscale='Portland',showscale=True,
reversescale = False
))
layout = dict(title= 'Number of Items by Sub Category (Top 15)',
yaxis = dict(title='Count'),
xaxis = dict(title='SubCategory'))
fig=dict(data=[trace1], layout=layout)
py.iplot(fig)
###Output
_____no_output_____
###Markdown
From the pricing (log of price) point of view, all the categories are distributed fairly similarly, with no category showing an extraordinary price point.
###Code
general_cats = train['general_cat'].unique()
x = [train.loc[train['general_cat']==cat, 'price'] for cat in general_cats]
data = [go.Box(x=np.log(x[i]+1), name=general_cats[i]) for i in range(len(general_cats))]
layout = dict(title="Price Distribution by General Category",
yaxis = dict(title='Frequency'),
xaxis = dict(title='Category'))
fig = dict(data=data, layout=layout)
py.iplot(fig)
###Output
_____no_output_____
###Markdown
**Brand Name**
###Code
print("There are %d unique brand names in the training dataset." % train['brand_name'].nunique())
x = train['brand_name'].value_counts().index.values.astype('str')[:10]
y = train['brand_name'].value_counts().values[:10]
# trace1 = go.Bar(x=x, y=y,
# marker=dict(
# color = y,colorscale='Portland',showscale=True,
# reversescale = False
# ))
# layout = dict(title= 'Top 10 Brand by Number of Items',
# yaxis = dict(title='Brand Name'),
# xaxis = dict(title='Count'))
# fig=dict(data=[trace1], layout=layout)
# py.iplot(fig)
###Output
_____no_output_____
###Markdown
**Item Description** It will be more challenging to parse through this particular field since it's unstructured data. Does a more detailed and lengthy description result in a higher bidding price? We will strip out all punctuation, remove some English stop words (i.e. redundant words such as "a", "the", etc.) and any other words with a length of less than 3 characters:
###Code
def wordCount(text):
# convert to lower case and strip regex
try:
# convert to lower case and strip regex
text = text.lower()
regex = re.compile('[' +re.escape(string.punctuation) + '0-9\\r\\t\\n]')
txt = regex.sub(" ", text)
# tokenize
# words = nltk.word_tokenize(clean_txt)
# remove words in stop words
words = [w for w in txt.split(" ") \
if not w in stop_words.ENGLISH_STOP_WORDS and len(w)>3]
return len(words)
except:
return 0
# add a column of word counts to both the training and test set
train['desc_len'] = train['item_description'].apply(lambda x: wordCount(x))
test['desc_len'] = test['item_description'].apply(lambda x: wordCount(x))
train.head()
df = train.groupby('desc_len')['price'].mean().reset_index()
trace1 = go.Scatter(
x = df['desc_len'],
y = np.log(df['price']+1),
mode = 'lines+markers',
name = 'lines+markers'
)
layout = dict(title= 'Average Log(Price) by Description Length',
yaxis = dict(title='Average Log(Price)'),
xaxis = dict(title='Description Length'))
fig=dict(data=[trace1], layout=layout)
py.iplot(fig)
###Output
_____no_output_____
###Markdown
We also need to check if there are any missing values in the item description (4 observations don't have a description) and remove those observations from our training set.
###Code
train.item_description.isnull().sum()
# remove missing values in item description
train = train[pd.notnull(train['item_description'])]
# create a dictionary of words for each category
# note: this uses tokenize(), which is defined in the Text Processing section below; run that cell first
cat_desc = dict()
for cat in general_cats:
text = " ".join(train.loc[train['general_cat']==cat, 'item_description'].values)
cat_desc[cat] = tokenize(text)
# flat list of all words combined
flat_lst = [item for sublist in list(cat_desc.values()) for item in sublist]
allWordsCount = Counter(flat_lst)
all_top10 = allWordsCount.most_common(20)
x = [w[0] for w in all_top10]
y = [w[1] for w in all_top10]
trace1 = go.Bar(x=x, y=y, text=pct)
layout = dict(title= 'Word Frequency',
yaxis = dict(title='Count'),
xaxis = dict(title='Word'))
fig=dict(data=[trace1], layout=layout)
py.iplot(fig)
###Output
_____no_output_____
###Markdown
If we look at the most common words by category, we can also see that ***size***, ***free*** and ***shipping*** are very commonly used by the sellers, probably with the intention of attracting customers. This is somewhat contradictory to what we have shown previously, namely that there is little correlation between the two variables `price` and `shipping` (or that shipping fees do not account for a differentiation in prices). ***Brand names*** also play quite an important role: they are among the most popular words in all four categories. **Text Processing - Item Description***The following section is based on the tutorial at https://ahmedbesbes.com/how-to-mine-newsfeed-data-and-extract-interactive-insights-in-python.html* **Pre-processing: tokenization**Most of the time, the first step of an NLP project is to **"tokenize"** your documents, whose main purpose is to normalize our texts. The three fundamental stages will usually include: * break the descriptions into sentences and then break the sentences into tokens* remove punctuation and stop words* lowercase the tokens* herein, I will also only consider words that have a length equal to or greater than 3 characters
###Code
stop = set(stopwords.words('english'))
def tokenize(text):
"""
sent_tokenize(): segment text into sentences
word_tokenize(): break sentences into words
"""
try:
regex = re.compile('[' +re.escape(string.punctuation) + '0-9\\r\\t\\n]')
text = regex.sub(" ", text) # remove punctuation
tokens_ = [word_tokenize(s) for s in sent_tokenize(text)]
tokens = []
for token_by_sent in tokens_:
tokens += token_by_sent
tokens = list(filter(lambda t: t.lower() not in stop, tokens))
filtered_tokens = [w for w in tokens if re.search('[a-zA-Z]', w)]
filtered_tokens = [w.lower() for w in filtered_tokens if len(w)>=3]
return filtered_tokens
except TypeError as e: print(text,e)
# apply the tokenizer into the item descriptipn column
train['tokens'] = train['item_description'].map(tokenize)
test['tokens'] = test['item_description'].map(tokenize)
train.reset_index(drop=True, inplace=True)
test.reset_index(drop=True, inplace=True)
###Output
_____no_output_____
###Markdown
Let's look at the examples of if the tokenizer did a good job in cleaning up our descriptions
###Code
for description, tokens in zip(train['item_description'].head(),
train['tokens'].head()):
print('description:', description)
print('tokens:', tokens)
print()
###Output
_____no_output_____
###Markdown
We could aso use the package `WordCloud` to easily visualize which words has the highest frequencies within each category:
###Code
# build dictionary with key=category and values as all the descriptions related.
cat_desc = dict()
for cat in general_cats:
text = " ".join(train.loc[train['general_cat']==cat, 'item_description'].values)
cat_desc[cat] = tokenize(text)
# find the most common words for the top 4 categories
women100 = Counter(cat_desc['Women']).most_common(100)
beauty100 = Counter(cat_desc['Beauty']).most_common(100)
kids100 = Counter(cat_desc['Kids']).most_common(100)
electronics100 = Counter(cat_desc['Electronics']).most_common(100)
def generate_wordcloud(tup):
wordcloud = WordCloud(background_color='white',
max_words=50, max_font_size=40,
random_state=42
).generate(str(tup))
return wordcloud
fig,axes = plt.subplots(2, 2, figsize=(30, 15))
ax = axes[0, 0]
ax.imshow(generate_wordcloud(women100), interpolation="bilinear")
ax.axis('off')
ax.set_title("Women Top 100", fontsize=30)
ax = axes[0, 1]
ax.imshow(generate_wordcloud(beauty100))
ax.axis('off')
ax.set_title("Beauty Top 100", fontsize=30)
ax = axes[1, 0]
ax.imshow(generate_wordcloud(kids100))
ax.axis('off')
ax.set_title("Kids Top 100", fontsize=30)
ax = axes[1, 1]
ax.imshow(generate_wordcloud(electronics100))
ax.axis('off')
ax.set_title("Electronic Top 100", fontsize=30)
###Output
_____no_output_____
###Markdown
**Pre-processing: tf-idf** tf-idf is the acronym for **Term Frequency–Inverse Document Frequency**. It quantifies the importance of a particular word relative to the vocabulary of a collection of documents or corpus. The metric depends on two factors: - **Term Frequency**: the occurrences of a word in a given document (i.e. bag of words)- **Inverse Document Frequency**: the reciprocal of the number of documents in the corpus in which the word occurs (usually log-scaled). Think about it this way: if a word is used extensively in all documents, its existence within a specific document will not be able to provide us much specific information about the document itself. So the second term can be seen as a penalty term that penalizes common words such as "a", "the", "and", etc. tf-idf can therefore be seen as a weighting scheme for word relevancy in a specific document.
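To make the weighting concrete, here is a tiny toy example (a sketch on three made-up documents, separate from the Mercari data) showing that a word appearing in every document receives the lowest idf weight:
###Code
from sklearn.feature_extraction.text import TfidfVectorizer

toy_docs = ["new with tags", "new nike shoes", "brand new lipstick"]  # made-up examples
toy_vec = TfidfVectorizer()
toy_vec.fit(toy_docs)

# "new" occurs in all three documents, so it gets the smallest idf
for token, idf in sorted(zip(toy_vec.get_feature_names(), toy_vec.idf_), key=lambda p: p[1]):
    print("%-10s idf = %.3f" % (token, idf))
###Output
_____no_output_____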
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(min_df=10,
max_features=180000,
tokenizer=tokenize,
ngram_range=(1, 2))
all_desc = np.append(train['item_description'].values, test['item_description'].values)
vz = vectorizer.fit_transform(list(all_desc))
###Output
_____no_output_____
###Markdown
vz is a tfidf matrix where:* the number of rows is the total number of descriptions* the number of columns is the total number of unique tokens across the descriptions
###Code
# create a dictionary mapping the tokens to their tfidf values
tfidf = dict(zip(vectorizer.get_feature_names(), vectorizer.idf_))
tfidf = pd.DataFrame(columns=['tfidf']).from_dict(
dict(tfidf), orient='index')
tfidf.columns = ['tfidf']
###Output
_____no_output_____
###Markdown
Below are the 10 tokens with the lowest tf-idf scores. Unsurprisingly, these are very generic words that we could not use to distinguish one description from another.
###Code
tfidf.sort_values(by=['tfidf'], ascending=True).head(10)
###Output
_____no_output_____
###Markdown
Below are the 10 tokens with the highest tf-idf scores. These words are much more specific; just by looking at them, we can often guess the categories they belong to:
###Code
tfidf.sort_values(by=['tfidf'], ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Given the high dimensionality of our tfidf matrix, we first need to reduce it using the Singular Value Decomposition (SVD) technique. Then, to visualize our vocabulary, we can use t-SNE to reduce the dimensionality further down to 2, since t-SNE is best suited for reductions to 2 or 3 dimensions.

**t-Distributed Stochastic Neighbor Embedding (t-SNE)**

t-SNE is a technique for dimensionality reduction that is particularly well suited for the visualization of high-dimensional datasets. The goal is to take a set of points in a high-dimensional space and find a representation of those points in a lower-dimensional space, typically the 2D plane. It is based on probability distributions with random walks on neighborhood graphs to find the structure within the data. But since the complexity of t-SNE is significantly high, we usually apply another dimensionality-reduction technique before t-SNE.

First, let's take a sample from both the training and testing item descriptions, since t-SNE can take a very long time to execute. We can then reduce the dimension of each vector to `n_comp` components (30 in the code below) using SVD.
###Code
trn = train.copy()
tst = test.copy()
trn['is_train'] = 1
tst['is_train'] = 0
sample_sz = 15000
combined_df = pd.concat([trn, tst])
combined_sample = combined_df.sample(n=sample_sz)
vz_sample = vectorizer.fit_transform(list(combined_sample['item_description']))
from sklearn.decomposition import TruncatedSVD
n_comp=30
svd = TruncatedSVD(n_components=n_comp, random_state=42)
svd_tfidf = svd.fit_transform(vz_sample)
###Output
_____no_output_____
###Markdown
Now we can reduce the dimensionality from 30 down to 2 using t-SNE!
###Code
from sklearn.manifold import TSNE
tsne_model = TSNE(n_components=2, verbose=1, random_state=42, n_iter=500)
tsne_tfidf = tsne_model.fit_transform(svd_tfidf)
###Output
_____no_output_____
###Markdown
It's now possible to visualize our data points. Note that in a t-SNE plot, the spread and the size of the clusters carry little information.
###Code
output_notebook()
plot_tfidf = bp.figure(plot_width=700, plot_height=600,
title="tf-idf clustering of the item description",
tools="pan,wheel_zoom,box_zoom,reset,hover,previewsave",
x_axis_type=None, y_axis_type=None, min_border=1)
combined_sample.reset_index(inplace=True, drop=True)
tfidf_df = pd.DataFrame(tsne_tfidf, columns=['x', 'y'])
tfidf_df['description'] = combined_sample['item_description']
tfidf_df['tokens'] = combined_sample['tokens']
tfidf_df['category'] = combined_sample['general_cat']
plot_tfidf.scatter(x='x', y='y', source=tfidf_df, alpha=0.7)
hover = plot_tfidf.select(dict(type=HoverTool))
hover.tooltips={"description": "@description", "tokens": "@tokens", "category":"@category"}
show(plot_tfidf)
###Output
_____no_output_____
###Markdown
**K-Means Clustering**

The K-means clustering objective is to minimize the average squared Euclidean distance of each document / description from its cluster centroid.
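Written out, with $k$ clusters $C_1, \dots, C_k$ and centroids $\mu_1, \dots, \mu_k$, the quantity being minimized is (up to a constant factor, the average distance mentioned above):

$$J = \sum_{i=1}^{k} \sum_{x \in C_i} \lVert x - \mu_i \rVert^2$$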
###Code
from sklearn.cluster import MiniBatchKMeans
num_clusters = 30 # need to be selected wisely
kmeans_model = MiniBatchKMeans(n_clusters=num_clusters,
init='k-means++',
n_init=1,
init_size=1000, batch_size=1000, verbose=0, max_iter=1000)
kmeans = kmeans_model.fit(vz)
kmeans_clusters = kmeans.predict(vz)
kmeans_distances = kmeans.transform(vz)
sorted_centroids = kmeans.cluster_centers_.argsort()[:, ::-1]
terms = vectorizer.get_feature_names()
for i in range(num_clusters):
print("Cluster %d:" % i)
aux = ''
for j in sorted_centroids[i, :10]:
aux += terms[j] + ' | '
print(aux)
print()
###Output
_____no_output_____
###Markdown
In order to plot these clusters, first we will need to reduce the dimension of the distances to 2 using t-SNE:
###Code
# repeat the same steps for the sample
kmeans = kmeans_model.fit(vz_sample)
kmeans_clusters = kmeans.predict(vz_sample)
kmeans_distances = kmeans.transform(vz_sample)
# reduce dimension to 2 using tsne
tsne_kmeans = tsne_model.fit_transform(kmeans_distances)
colormap = np.array(["#6d8dca", "#69de53", "#723bca", "#c3e14c", "#c84dc9", "#68af4e", "#6e6cd5",
"#e3be38", "#4e2d7c", "#5fdfa8", "#d34690", "#3f6d31", "#d44427", "#7fcdd8", "#cb4053", "#5e9981",
"#803a62", "#9b9e39", "#c88cca", "#e1c37b", "#34223b", "#bdd8a3", "#6e3326", "#cfbdce", "#d07d3c",
"#52697d", "#194196", "#d27c88", "#36422b", "#b68f79"])
#combined_sample.reset_index(drop=True, inplace=True)
kmeans_df = pd.DataFrame(tsne_kmeans, columns=['x', 'y'])
kmeans_df['cluster'] = kmeans_clusters
kmeans_df['description'] = combined_sample['item_description']
kmeans_df['category'] = combined_sample['general_cat']
#kmeans_df['cluster']=kmeans_df.cluster.astype(str).astype('category')
plot_kmeans = bp.figure(plot_width=700, plot_height=600,
title="KMeans clustering of the description",
tools="pan,wheel_zoom,box_zoom,reset,hover,previewsave",
x_axis_type=None, y_axis_type=None, min_border=1)
source = ColumnDataSource(data=dict(x=kmeans_df['x'], y=kmeans_df['y'],
color=colormap[kmeans_clusters],
description=kmeans_df['description'],
category=kmeans_df['category'],
cluster=kmeans_df['cluster']))
plot_kmeans.scatter(x='x', y='y', color='color', source=source)
hover = plot_kmeans.select(dict(type=HoverTool))
hover.tooltips={"description": "@description", "category": "@category", "cluster":"@cluster" }
show(plot_kmeans)
###Output
_____no_output_____
###Markdown
**Latent Dirichlet Allocation**

Latent Dirichlet Allocation (LDA) is an algorithm used to discover the topics that are present in a corpus.

> LDA starts from a fixed number of topics. Each topic is represented as a distribution over words, and each document is then represented as a distribution over topics. Although the tokens themselves are meaningless, the probability distributions over words provided by the topics provide a sense of the different ideas contained in the documents.
>
> Reference: https://medium.com/intuitionmachine/the-two-paths-from-natural-language-processing-to-artificial-intelligence-d5384ddbfc18

Its input is a **bag of words**, i.e. each document is represented as a row, with each column containing the count of a word from the corpus vocabulary. We are also going to use a powerful tool called pyLDAvis that gives us an interactive visualization for LDA.
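To make the bag-of-words input concrete, here is a minimal sketch on a made-up toy corpus (the sentences and variable names below are purely illustrative and are not part of the analysis):

```python
from sklearn.feature_extraction.text import CountVectorizer

toy_corpus = ["new leather wallet brand new",
              "leather boots size 8",
              "iphone case brand new"]
cv = CountVectorizer()
bow = cv.fit_transform(toy_corpus)   # one row per document, one column per word
print(cv.get_feature_names())        # the vocabulary (column order)
print(bow.toarray())                 # raw word counts per document
```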
###Code
cvectorizer = CountVectorizer(min_df=4,
max_features=180000,
tokenizer=tokenize,
ngram_range=(1,2))
cvz = cvectorizer.fit_transform(combined_sample['item_description'])
lda_model = LatentDirichletAllocation(n_components=20,
learning_method='online',
max_iter=20,
random_state=42)
X_topics = lda_model.fit_transform(cvz)
n_top_words = 10
topic_summaries = []
topic_word = lda_model.components_ # get the topic words
vocab = cvectorizer.get_feature_names()
for i, topic_dist in enumerate(topic_word):
topic_words = np.array(vocab)[np.argsort(topic_dist)][:-(n_top_words+1):-1]
topic_summaries.append(' '.join(topic_words))
print('Topic {}: {}'.format(i, ' | '.join(topic_words)))
# reduce dimension to 2 using tsne
tsne_lda = tsne_model.fit_transform(X_topics)
unnormalized = np.matrix(X_topics)
doc_topic = unnormalized/unnormalized.sum(axis=1)
lda_keys = []
for i, tweet in enumerate(combined_sample['item_description']):
lda_keys += [doc_topic[i].argmax()]
lda_df = pd.DataFrame(tsne_lda, columns=['x','y'])
lda_df['description'] = combined_sample['item_description']
lda_df['category'] = combined_sample['general_cat']
lda_df['topic'] = lda_keys
lda_df['topic'] = lda_df['topic'].map(int)
plot_lda = bp.figure(plot_width=700,
plot_height=600,
title="LDA topic visualization",
tools="pan,wheel_zoom,box_zoom,reset,hover,previewsave",
x_axis_type=None, y_axis_type=None, min_border=1)
source = ColumnDataSource(data=dict(x=lda_df['x'], y=lda_df['y'],
color=colormap[lda_keys],
description=lda_df['description'],
topic=lda_df['topic'],
category=lda_df['category']))
plot_lda.scatter(source=source, x='x', y='y', color='color')
hover = plot_lda.select(dict(type=HoverTool))
hover.tooltips={"description":"@description",
"topic":"@topic", "category":"@category"}
show(plot_lda)
def prepareLDAData():
data = {
'vocab': vocab,
'doc_topic_dists': doc_topic,
'doc_lengths': list(lda_df['len_docs']),
'term_frequency':cvectorizer.vocabulary_,
'topic_term_dists': lda_model.components_
}
return data
###Output
_____no_output_____
###Markdown
*Note: embedding the pyLDAvis HTML directly here would distort the layout of the kernel, so it is not included. If you run the code below, though, you can generate an HTML file with a very interesting interactive bubble chart that visualizes the space of your topic clusters and the term components within each topic.*
###Code
import pyLDAvis
lda_df['len_docs'] = combined_sample['tokens'].map(len)
ldadata = prepareLDAData()
pyLDAvis.enable_notebook()
prepared_data = pyLDAvis.prepare(**ldadata)
###Output
_____no_output_____
###Markdown
###Code
import IPython.display
from IPython.core.display import display, HTML, Javascript
#h = IPython.display.display(HTML(html_string))
#IPython.display.display_HTML(h)
###Output
_____no_output_____ |
HW2/HW2.ipynb | ###Markdown
HW2: Data exploration, preprocessing, simple analysis

**Since this is a Pandas exercise, you are required not to use loops**

(Last updated: 25/07/2021)

Full name: Trần Đại Chí
MSSV: 18127070

--- How to do and submit this assignment (read carefully)

⚡ Note that I will use a grading-support program, so you must follow the rules I set exactly; if something is unclear, ask, rather than doing things your own way.

**How to do the assignment**

You will work directly in this notebook file. First, fill in your full name and student ID (MSSV) at the top of the file. In the file, you write your answers in the places marked:
```python
# YOUR CODE HERE
raise NotImplementedError()
```
or, for optional code parts:
```python
# YOUR CODE HERE (OPTION)
```
or, for markdown cells:
```markdown
YOUR ANSWER HERE
```
Of course, when you do the assignment you delete the `raise NotImplementedError()` line.

For the parts that require code, there is usually one (or several) cells right below containing tests that help you know whether your code is correct; if running such a cell raises no error, it means you passed the tests. In some cases the tests may not be exhaustive; that is, if you fail a test your code is wrong, but passing the tests does not necessarily mean your code is correct.

While working, you may print to the screen and create extra cells for testing. But before submitting, delete the cells you created yourself and delete or comment out the print statements. Note that you must not delete cells or modify the instructor's code (except where you are allowed to, as described above). While working, press `Ctrl + S` regularly to save your work and avoid losing anything.

*Remember that the main goal here is to learn, and to learn honestly. You may discuss ideas with other students and consult online sources, but in the end the code and the work must be your own, based on your own genuine understanding. When you consult online sources, you must cite them clearly in your work. You must not look at the work of students from previous years (if you do, when will you ever learn to think through problems yourself); after the course ends, you also must not give your work to students of later cohorts or publish it on Github (because that would affect their learning). If you can follow what I have said, your score may not be high, but you will make real progress. If you violate the points above, you will receive 0 for the entire course.*

**How to submit**

When grading, I will first choose `Kernel` - `Restart & Run All` to restart and run all cells in your notebook; therefore, before submitting, you should try `Kernel` - `Restart & Run All` yourself to make sure everything runs as expected.

Then create a submission folder with the following structure:
- Folder `MSSV` (e.g., if your student ID is 1234567, name the folder `1234567`)
  - File `HW2.ipynb` (no other files need to be submitted)

Finally, zip the `MSSV` folder and submit it at the link on Moodle. The archive extension must be .zip (not .rar or anything else). Make sure you follow the submission rules above exactly.

--- Code environment

We agree that in this course we use the package versions listed in the file "min_ds-env.yml" (this file has been updated a few times; the latest version has the line "Last update: 24/06/2021" at the top). How to create the code environment from "min_ds-env.yml" was described in the file "02_BeforeClass-Notebook_Python.pdf".

Check the code environment:
###Code
import sys
sys.executable
###Output
_____no_output_____
###Markdown
If there is no problem, the Python executable should be the one from the "min_ds-env" code environment.

--- Import
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# YOUR CODE HERE (OPTION)
# If you need other libraries, you can import them here
###Output
c:\program files\python36\lib\site-packages\numpy\_distributor_init.py:32: UserWarning: loaded more than 1 DLL from .libs:
c:\program files\python36\lib\site-packages\numpy\.libs\libopenblas.TXA6YQSD3GCQQC22GEQ54J2UDCXDXHWN.gfortran-win_amd64.dll
c:\program files\python36\lib\site-packages\numpy\.libs\libopenblas.WCDJNK7YVMPZQ2ME2ZZHJJRJ3JIKNDB7.gfortran-win_amd64.dll
stacklevel=1)
###Markdown
--- Data collection

The data used in this assignment is the developer survey data from StackOverflow. I downloaded the data [here](https://drive.google.com/file/d/1dfGerWeWkcyQ9GX9x20rdSGj7WtEpzBB/view) and removed some columns to simplify it. According to the description in StackOverflow's "README_2020.txt" file:

>The enclosed data set is the full, cleaned results of the 2020 Stack Overflow Developer Survey. Free response submissions and personally identifying information have been removed from the results to protect the privacy of respondents. There are three files besides this README:
>
>1. survey_results_public.csv - CSV file with main survey results, one respondent per row and one column per answer
>2. survey_results_schema.csv - CSV file with survey schema, i.e., the questions that correspond to each column name
>3. so_survey_2020.pdf - PDF file of survey instrument
>
>The survey was fielded from February 5 to February 28, 2020. The median time spent on the survey for qualified responses was 16.6 minutes.
>
>Respondents were recruited primarily through channels owned by Stack Overflow. The top 5 sources of respondents were onsite messaging, blog posts, email lists, Meta posts, banner ads, and social media posts. Since respondents were recruited in this way, highly engaged users on Stack Overflow were more likely to notice the links for the survey and click to begin it.

The attached file "survey_results_public-short.csv" is a simplified version of "survey_results_public.csv" (I reduced it from 61 columns to 29). This is the main data file you will work with in this assignment. In addition, I attach 2 auxiliary files: (1) "survey_results_schema-short.csv", which gives the meaning of each column, and (2) "so_survey_2020.pdf", StackOverflow's original survey form.

Note:
- This data does not represent the worldwide programmer community; it is limited to the set of programmers who took StackOverflow's survey. The answers obtained from this dataset are also limited to that scope.
- Is the data correct? Basically, we cannot know. Here, the main purpose is to learn the Data Science process and Pandas commands, so we will **assume** most of the data is correct and carry on.

Also according to "README_2020.txt", this data was made public by StackOverflow under the following license:

>This database - The Public 2020 Stack Overflow Developer Survey Results - is made available under the Open Database License (ODbL): http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/
>
>TLDR: You are free to share, adapt, and create derivative works from The Public 2020 Stack Overflow Developer Survey Results as long as you attribute Stack Overflow, keep the database open (if you redistribute it), and continue to share-alike any adapted database under the ODbl.

--- Data exploration

Reading the data from the file (0.25 pts)

First, write code to read the data from the file "survey_results_public-short.csv" and store the result in the DataFrame `survey_df`; we agree to keep this data file at the same level as the notebook file and to pass only the file name when reading it. You also need to make the `Respondent` column (the id of the survey respondent) the index column of `survey_df`.
###Code
# YOUR CODE HERE
#raise NotImplementedError()
survey_df = pd.read_csv('survey_results_public-short.csv', index_col='Respondent')
# TEST
survey_df.head()
###Output
_____no_output_____
###Markdown
How many rows and columns does the data have? (0.25 pts) Next, compute the number of rows and the number of columns of the DataFrame `survey_df` and store them in the variables `num_rows` and `num_cols`, respectively.
###Code
# YOUR CODE HERE
#raise NotImplementedError()
num_rows = survey_df.shape[0]
num_cols = survey_df.shape[1]
print(str(num_rows) + ', ' + str(num_cols))
# TEST
assert num_rows == 64461
assert num_cols == 28
###Output
_____no_output_____
###Markdown
What does each row mean? Is there an issue of rows having different meanings? According to the "README_2020.txt" file as well as a quick look at the data, each row in the DataFrame `survey_df` gives the survey answers of one person. There does not seem to be an issue of rows having different meanings.

Does the data have duplicated rows? (0.25 pts) Next, compute the number of rows whose index (the respondent's id) is duplicated and store it in the variable `num_duplicated_rows`. Within a group of rows with the same index, the first row does not count as duplicated.
###Code
# YOUR CODE HERE
#raise NotImplementedError()
num_duplicated_rows = survey_df.index.duplicated().sum()
print(num_duplicated_rows)
# TEST
assert num_duplicated_rows == 0
###Output
_____no_output_____
###Markdown
What does each column mean? (0.25 pts) To see the meaning of each column:
- First, read the file "survey_results_schema-short.csv" into the DataFrame `col_meaning_df`; you also need to make the "Column" column the index.
- Then, you only need to display the DataFrame `col_meaning_df` (this part is hard, so I have done it for you below in the cell with the "# TEST" line 😉). However, you will see that in the "QuestionText" column the description strings are cut off because they are too long. Therefore, before displaying the DataFrame `col_meaning_df`, you also need to adjust things so the description strings are not cut off (search Google yourself for this; hint: you will use the `pd.set_option` command).
###Code
# YOUR CODE HERE
#raise NotImplementedError()
pd.set_option('display.max_colwidth', None) #show full text not cut
col_meaning_df = pd.read_csv('survey_results_schema-short.csv', index_col='Column')
# TEST
col_meaning_df
###Output
_____no_output_____
###Markdown
Before moving on, you should read the displayed result above and make sure you understand the meaning of each column. To understand a column's meaning, you may also need to look at its values in the DataFrame `survey_df`.

What data type does each column currently have? Are there columns whose data type is not yet suitable for further processing? (0.25 pts) Next, compute the data type (dtype) of each column in the DataFrame `survey_df` and store the result in the Series `dtypes` (this Series has the column names as its index).
###Code
# YOUR CODE HERE
#raise NotImplementedError()
dtypes = survey_df.dtypes
dtypes
# TEST
float_cols = set(dtypes[(dtypes==np.float32) | (dtypes==np.float64)].index)
assert float_cols == {'Age', 'ConvertedComp', 'WorkWeekHrs'}
object_cols = set(dtypes[dtypes == object].index)
assert len(object_cols) == 25
###Output
_____no_output_____
###Markdown
As you can see, the columns "YearsCode" and "YearsCodePro" should have a numeric data type, but they currently have the object data type. Let's take a closer look at the values of these two columns.
###Code
survey_df['YearsCode'].unique()
survey_df['YearsCodePro'].unique()
###Output
_____no_output_____
###Markdown
We should convert these two columns to numeric so that we can continue exploring (computing min, median, max, ...).

--- Preprocessing (0.5 pts)

You will preprocess the data to convert the two columns "YearsCode" and "YearsCodePro" to numeric (float). In particular: "Less than 1 year" → 0, "More than 50 years" → 51. After the conversion, `survey_df.dtypes` will change.
###Code
# YOUR CODE HERE
#raise NotImplementedError()
list_convert = {"Less than 1 year":"0", "More than 50 years":"51"}
#survey_df['YearsCode'] = survey_df['YearsCode'].map(list_convert).astype(float)
#survey_df['YearsCodePro'] = survey_df['YearsCodePro'].map(list_convert).astype(float)
#use lambda to avoid replacing other value not in list_convert
survey_df['YearsCode'] = survey_df['YearsCode'].apply(lambda ele: list_convert[ele] if ele in list_convert else ele).astype(float)
survey_df['YearsCodePro'] = survey_df['YearsCodePro'].apply(lambda ele: list_convert[ele] if ele in list_convert else ele).astype(float)
survey_df.dtypes
# TEST
assert survey_df['YearsCode'].dtype in [np.float32, np.float64]
assert survey_df['YearsCodePro'].dtype in [np.float32, np.float64]
###Output
_____no_output_____
###Markdown
--- Back to data exploration

For each numeric column, how are the values distributed? (1 pt) (Of which: computing the descriptive statistics of each column is worth 0.5 pts, and computing the number of invalid values of each column is worth 0.5 pts)

For the numeric columns, you will compute:
- The percentage (from 0 to 100) of missing values
- The min value
- The lower quartile (25th percentile)
- The median (50th percentile)
- The upper quartile (75th percentile)
- The max value

You will store the results in the DataFrame `nume_col_info_df`, where:
- The column names are the names of the numeric columns in `survey_df`
- The row names are: "missing_percentage", "min", "lower_quartile", "median", "upper_quartile", "max"

For readability, round all values to 1 decimal place using the `.round(1)` method.
###Code
# YOUR CODE HERE
#raise NotImplementedError()
col_only_numeric = survey_df.select_dtypes(include=['float32', 'float64']) #select just column float
nume_col_info_df = col_only_numeric.describe().transpose().drop(columns=['mean', 'std']) #remove 2 rows mean and std
nume_col_info_df['count'] = survey_df.isnull().sum() * 100.0 / len(survey_df) #calc % missing
my_dict_info = {'count': 'missing_percentage', '25%': 'lower_quartile', '50%': 'median', '75%': 'upper_quartile'}
nume_col_info_df = nume_col_info_df.rename(columns=my_dict_info).round(1).transpose() #rename and round
nume_col_info_df
# TEST
assert nume_col_info_df.shape == (6, 5)
data = nume_col_info_df.loc[['missing_percentage', 'min', 'lower_quartile', 'median', 'upper_quartile', 'max'],
['Age', 'ConvertedComp', 'WorkWeekHrs', 'YearsCode', 'YearsCodePro']].values
correct_data = np.array([[ 29.5, 46.1, 36.2, 10.5, 28.1],
[ 1. , 0. , 1. , 0. , 0. ],
[ 24. , 24648. , 40. , 6. , 3. ],
[ 29. , 54049. , 40. , 10. , 6. ],
[ 35. , 95000. , 44. , 17. , 12. ],
[ 279. , 2000000. , 475. , 51. , 51. ]])
assert np.array_equal(data, correct_data)
###Output
_____no_output_____
###Markdown
**Are there invalid values in each column? (missing values are not considered)**
- Column "Age": compute the number of invalid values of the "Age" column (< the corresponding value in the "YearsCode" column OR < the corresponding value in the "YearsCodePro" column) and store the result in the variable `num_invalid_Age_vals`.
- Column "WorkWeekHrs" (average number of working hours per week): we see that the max is 475 hours! Meanwhile, 7 days * 24 hours = 168 hours! Compute the number of invalid values of the "WorkWeekHrs" column (> 24 * 7) and store the result in the variable `num_invalid_WorkWeekHrs_vals`.
- Column "YearsCode": compute the number of invalid values of the "YearsCode" column (< the corresponding value in the "YearsCodePro" column OR > the corresponding value in the "Age" column) and store the result in the variable `num_invalid_YearsCode_vals`.
- Column "YearsCodePro": compute the number of invalid values of the "YearsCodePro" column (> the corresponding value in the "YearsCode" column OR > the corresponding value in the "Age" column) and store the result in the variable `num_invalid_YearsCodePro_vals`.
###Code
# YOUR CODE HERE
#raise NotImplementedError()
#use logic OR to return all rows where A = ... or B = ...
not_valid_age = (survey_df['Age'] < survey_df['YearsCode']) | (survey_df['Age'] < survey_df['YearsCodePro'])
num_invalid_Age_vals = len(survey_df[not_valid_age])
not_valid_workweekhrs = survey_df['WorkWeekHrs'] > 24*7
num_invalid_WorkWeekHrs_vals = len(survey_df[not_valid_workweekhrs])
not_valid_yearscode = (survey_df['YearsCode'] < survey_df['YearsCodePro']) | (survey_df['YearsCode'] > survey_df['Age'])
num_invalid_YearsCode_vals = len(survey_df[not_valid_yearscode])
not_valid_yearscodepro = (survey_df['YearsCodePro'] > survey_df['YearsCode']) | (survey_df['YearsCodePro'] > survey_df['Age'])
num_invalid_YearsCodePro_vals = len(survey_df[not_valid_yearscodepro])
print(num_invalid_WorkWeekHrs_vals)
print(num_invalid_Age_vals)
print(num_invalid_YearsCode_vals)
print(num_invalid_YearsCodePro_vals)
# TEST
assert num_invalid_WorkWeekHrs_vals == 62
assert num_invalid_Age_vals == 16
assert num_invalid_YearsCode_vals == 499
assert num_invalid_YearsCodePro_vals == 486
###Output
_____no_output_____
###Markdown
Since the number of invalid values is quite small, we can preprocess the data by removing the rows that contain invalid values.

--- Preprocessing (0.5 pts)

You will preprocess the data to remove from the DataFrame `survey_df` the rows that contain at least one invalid value. After preprocessing, `survey_df` will change.
###Code
# YOUR CODE HERE
#raise NotImplementedError()
#use logic not to inverse
#survey_df = survey_df[~not_valid_age]
#survey_df = survey_df[~not_valid_workweekhrs]
#survey_df = survey_df[~not_valid_yearscode]
#survey_df = survey_df[~not_valid_yearscodepro]
#convert all above to boolean to avoid warning boolean series key will be reindexed to match dataFrame index
survey_df = survey_df[~(not_valid_age | not_valid_workweekhrs | not_valid_yearscode | not_valid_yearscodepro)]
print(len(survey_df))
# TEST
assert len(survey_df) == 63900
###Output
_____no_output_____
###Markdown
--- Back to data exploration

For each non-numeric column, how are the values distributed? (1 pt)

For the non-numeric columns, you will compute:
- The percentage (from 0 to 100) of missing values
- The number of values (values here means distinct values, and missing values are not considered): for columns corresponding to multichoice questions (for example, the "DevType" column), each value can contain several choices (separated by semicolons), and counting the values directly is not very meaningful because the number of choice combinations is quite large; a better approach, and the one you will use, is to count the number of choices
- The percentage (from 0 to 100) of each value, sorted in decreasing order (missing values are not considered, and the percentage is relative to the number of non-missing values): use a dictionary to store this, where the key is the value and the dictionary value is the percentage; for columns corresponding to multichoice questions, proceed in the same way as above

You will store the results in the DataFrame `cate_col_info_df`, where:
- The column names are the names of the non-numeric columns in `survey_df`
- The row names are: "missing_percentage", "num_values", "value_percentages"

For readability, round all values to 1 decimal place using the `.round(1)` method.

Hint: you may want to use the [`explode` method](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.explode.html), as sketched below.
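To make the hint concrete, here is a minimal sketch of the split-and-explode counting pattern on a made-up toy Series (not the survey data):

```python
import pandas as pd

s = pd.Series(['Linux;Docker', 'Windows', 'Linux;AWS;Docker'])
choices = s.str.split(';').explode()          # one row per individual choice
print(choices.value_counts())                 # counts per choice
print((choices.value_counts(normalize=True) * 100).round(1))  # percentages
```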
###Code
# Columns corresponding to multichoice survey questions
multichoice_cols = ['DevType', 'Gender', 'JobFactors',
'LanguageWorkedWith', 'LanguageDesireNextYear',
'MiscTechWorkedWith', 'MiscTechDesireNextYear',
'NEWCollabToolsWorkedWith', 'NEWCollabToolsDesireNextYear',
'PlatformWorkedWith', 'PlatformDesireNextYear',
'NEWStuck']
pd.set_option('display.max_colwidth', 100) # for readability
pd.set_option('display.max_columns', None) # for readability
# YOUR CODE HERE
#raise NotImplementedError()
exclude_numeric = survey_df.select_dtypes(exclude=['float32', 'float64']) #exclude numeric first
missing_percentage = exclude_numeric.isnull().sum() * 100.0 / len(survey_df) #calc % missing
missing_percentage = pd.DataFrame(missing_percentage.round(1))
cate_col_info_df = missing_percentage.rename(columns={0:"missing_percentage"}) #rename column
cate_col_info_df
get_name_cate_col_info_df = cate_col_info_df.transpose().columns
get_name_cate_col_info_df
#create 2 dict for num_values and value_percentages
num_values = {}
value_percentages = {}
for elem in get_name_cate_col_info_df:
#first process column with multi-choice
if elem in multichoice_cols:
exclude_numeric[elem] = exclude_numeric[elem].str.split(';') #split by delimeter for columns not numeric
col_explode = exclude_numeric[elem].explode() #explode after spliting
count_value_after_explode = col_explode.value_counts() #count after exploding
num_values[elem] = len(count_value_after_explode.index.values) #calculate num_values
#calculate value_percentages
value_percentages[elem] = {} #init empty dict for col with multi chocie
for i in count_value_after_explode.index.values:
#ex: value_percentages[devType][Developer, back-end] = count_value_after_explode[Developer, backend]*100.0/sum col after exploding not null...
value_percentages[elem][i] = (count_value_after_explode[i]*100.0/(~col_explode.isnull()).sum()).round(1)
#second process column without multi-choice
else:
# calculate num_values for others
count_value_after_explode1 = exclude_numeric[elem].value_counts() #count without explode with no multi chocie
num_values[elem] = len(count_value_after_explode1.index.values)
# calculate value_ratios for others
value_percentages[elem] = {} #init empty dict for col without multi chocie
for i in count_value_after_explode1.index.values:
#change to sum col without exploding not null
value_percentages[elem][i] = (count_value_after_explode1[i]*100.0/(~exclude_numeric[elem].isnull()).sum()).round(1)
#num_values
#value_percentages
#add values of num_values to list l1
l1 = []
for key, value in num_values.items():
l1.append(value)
print(l1)
#add values of num_values to list l2
l2 = []
for key1, value1 in value_percentages.items():
l2.append(tuple((key1, value1)))
l2
#merge missing_percentage with num_values
df1 = cate_col_info_df.transpose()
df_tmp1 = pd.DataFrame([l1], columns=get_name_cate_col_info_df, index=['num_values']) #add list l1 to dataframe
df1 = df1.append(df_tmp1)
df1
#add l2 to dataframe and change 0 to value_percentages
df_tmp2 = pd.DataFrame(l2, columns=['0', 'value_percentages'], index=value_percentages.keys())
df_tmp2 = df_tmp2.drop(columns=['0']) #drop redundant column 0
df_tmp2 = df_tmp2.transpose()
df1 = df1.append(df_tmp2)
cate_col_info_df = df1
cate_col_info_df
# TEST
c = cate_col_info_df['MainBranch']
assert c.loc['missing_percentage'] == 0.5
assert c.loc['num_values'] == 5
assert c.loc['value_percentages']['I am a developer by profession'] == 73.5
c = cate_col_info_df['Hobbyist']
assert c.loc['missing_percentage'] == 0.1
assert c.loc['num_values'] == 2
assert c.loc['value_percentages']['Yes'] == 78.2
c = cate_col_info_df['DevType']
assert c.loc['missing_percentage'] == 23.6
assert c.loc['num_values'] == 23
assert c.loc['value_percentages']['Academic researcher'] == 2.2
c = cate_col_info_df['PlatformWorkedWith']
assert c.loc['missing_percentage'] == 16.5
assert c.loc['num_values'] == 16
assert c.loc['value_percentages']['Docker'] == 10.6
###Output
_____no_output_____
###Markdown
--- Posing a question

After exploring the data, we understand it better. Now, let's see what questions could be answered with this data.

**One possible question is:** Which platform (Windows, Linux, Docker, AWS, ...) is the most loved, which is the second most loved, which is the third, ...?

A platform is considered loved if a person has used it (column "PlatformWorkedWith") and wants to keep using it next year (column "PlatformDesireNextYear").

**Answering this question will** partly help us decide which platform to focus on learning in order to prepare for the future (I say "partly" because the data here is limited to the people who took StackOverflow's survey).

--- Preprocessing

If you feel you need additional preprocessing steps to prepare the data for the analysis step, do them here. This step is optional.
###Code
# YOUR CODE HERE (OPTION)
###Output
_____no_output_____
###Markdown
--- Data analysis (2.25 pts)

Now you will analyze the data to answer the question above. The specific steps are:
- Step 1: compute the Series `most_loved_platforms`, where:
  - The index is the platform name (in the exploration step, you saw there are 16 platforms in total)
  - The data is the loved percentage (from 0 to 100, rounded to one decimal place with the `round(1)` method), sorted in decreasing order
- Step 2: from the Series `most_loved_platforms`, draw a bar chart:
  - Make the bars horizontal (for readability)
  - Name the horizontal axis "Tỉ lệ %"

Code for step 1.
###Code
# YOUR CODE HERE
#raise NotImplementedError()
is_platform_favourite_df = survey_df[['PlatformWorkedWith', 'PlatformDesireNextYear']]
is_platform_favourite_df.loc[:, 'PlatformWorkedWith'] = is_platform_favourite_df['PlatformWorkedWith'].str.split(';') #first split by delimeter
is_platform_favourite_df = is_platform_favourite_df.explode('PlatformWorkedWith').dropna() #explode and drop column NaN
is_platform_favourite_df.loc[:, 'PlatformDesireNextYear'] = is_platform_favourite_df['PlatformDesireNextYear'].str.split(';') #first split by delimeter
is_platform_favourite_df = is_platform_favourite_df.explode('PlatformDesireNextYear').dropna() #explode and drop column NaN
#set rows PlatformDesireNextYear equal PlatformDesireNextYear to count
duplicate_platform_df = is_platform_favourite_df[is_platform_favourite_df['PlatformDesireNextYear'] == is_platform_favourite_df['PlatformWorkedWith']]
most_loved_platforms = duplicate_platform_df.groupby(['PlatformWorkedWith']).count()
most_loved_platforms = most_loved_platforms.sort_values(by="PlatformDesireNextYear", ascending=False)
most_loved_platforms['PlatformDesireNextYear'] = ((most_loved_platforms['PlatformDesireNextYear']*100.0)/len(duplicate_platform_df)).round(1)
#convert to series for running test
most_loved_platforms = most_loved_platforms.iloc[:, 0]
most_loved_platforms
# TEST
assert len(most_loved_platforms) == 16
assert most_loved_platforms.loc['Linux'] == 20.2
assert most_loved_platforms.loc['Windows'] == 14.6
assert most_loved_platforms.loc['Docker'] == 12.3
###Output
_____no_output_____
###Markdown
Code for step 2.
###Code
# YOUR CODE HERE
#raise NotImplementedError()
fig = plt.figure(figsize = (10, 7))
plt.tick_params(labelsize=12)
plt.xlabel("Tỉ lệ %", fontsize=15)
plt.barh(most_loved_platforms.index, most_loved_platforms.values);
###Output
_____no_output_____
###Markdown
Now do you see why I advised you to gradually get familiar with Linux commands 😉

--- Pose your own question (1.5 pts)

Now it is your turn to think and come up with a question that can be answered with the data. Besides posing the question, you must also explain to the reader what the benefit would be if the question were answered. You should be a bit creative and not pose a question of the same type as my question above.

One possible question is: How does the average salary differ across programming languages, across countries, and across the person's education level...?

Answering this question will partly help us decide which language to focus on learning, whether to pursue further education, or which country we should go to for work if we have the opportunity... in order to earn a better salary in the future.

--- Preprocessing to prepare the data for the analysis step that answers your question

This part is optional.
###Code
# YOUR CODE HERE (OPTION)
#take all necessary columns relate to salary
all_cols = ['Country', 'DevType', 'EdLevel', 'Employment', 'LanguageWorkedWith', 'MiscTechWorkedWith',
'NEWCollabToolsWorkedWith', 'NEWLearn', 'NEWOvertime', 'OpSys', 'PlatformWorkedWith',
'Age', 'ConvertedComp', 'WorkWeekHrs', 'YearsCode', 'YearsCodePro']
###Output
_____no_output_____
###Markdown
First, we drop the null values of the salary column ConvertedComp; then, for the remaining numeric columns we use fillna(0), and similarly fillna('unk') for the non-numeric columns.
###Code
salary_df = survey_df[all_cols]
salary_df = salary_df.dropna(subset=['ConvertedComp'])
salary_df[['Age', 'WorkWeekHrs', 'YearsCode', 'YearsCodePro']] = salary_df[['Age', 'WorkWeekHrs', 'YearsCode', 'YearsCodePro']].fillna(0)
salary_df.fillna('unk')
###Output
_____no_output_____
###Markdown
Looking at the countries with the highest average salary, the US still stands out as the most prominent country in the world.
###Code
salary_by_country = salary_df.groupby(['Country'], as_index=False).median()[['Country', 'ConvertedComp']]
salary_by_country = salary_by_country.sort_values('ConvertedComp', ascending=False)
salary_by_country
###Output
_____no_output_____
###Markdown
In terms of education level, people with a master's degree or a doctorate and above have a noticeably higher average salary.
###Code
salary_by_edlevel = salary_df.groupby(['EdLevel'], as_index=False).median()[['EdLevel', 'ConvertedComp']]
salary_by_edlevel = salary_by_edlevel.sort_values('ConvertedComp', ascending=False)
salary_by_edlevel
###Output
_____no_output_____
###Markdown
As for the programming languages with the highest average salary, the top 5 are Perl, Scala, Go, Rust and Ruby.
###Code
salary_by_languagework = salary_df[['ConvertedComp', 'LanguageWorkedWith']]
salary_by_languagework.iloc[:, 1] = salary_by_languagework['LanguageWorkedWith'].str.split(';')
salary_by_languagework = salary_by_languagework.explode('LanguageWorkedWith')
salary_by_languagework1 = salary_by_languagework.groupby(['LanguageWorkedWith'], as_index=False).median()[['LanguageWorkedWith', 'ConvertedComp']]
salary_by_languagework1 = salary_by_languagework1.sort_values('ConvertedComp', ascending=False)
salary_by_languagework1
###Output
c:\program files\python36\lib\site-packages\pandas\core\indexing.py:1743: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
isetter(ilocs[0], value)
###Markdown
--- Data analysis to answer your question (2 pts)
###Code
# YOUR CODE HERE
#raise NotImplementedError()
salary_by_country = salary_by_country.set_index('Country')
salary_by_country = salary_by_country.iloc[:, 0]
plt.figure(figsize = (14, 32))
plt.tick_params(labelsize=14)
plt.barh(salary_by_country.index, salary_by_country.values)
plt.xlabel('Mức lương trung bình theo đất nước', fontsize=14);
salary_by_edlevel = salary_by_edlevel.set_index('EdLevel')
salary_by_edlevel = salary_by_edlevel.iloc[:, 0]
plt.barh(salary_by_edlevel.index, salary_by_edlevel.values)
plt.xlabel('Mức lương trung bình theo trình độ học vấn', fontsize=14);
salary_by_languagework1 = salary_by_languagework1.set_index('LanguageWorkedWith')
salary_by_languagework1 = salary_by_languagework1.iloc[:, 0]
plt.figure(figsize = (12, 8))
plt.barh(salary_by_languagework1.index, salary_by_languagework1.values)
plt.xlabel('Mức lương trung bình theo ngôn ngữ lập trình', fontsize=14);
###Output
_____no_output_____
###Markdown
Exercise 2: Neural Networks

In the previous exercise you implemented a binary classifier with one linear layer on a small portion of CIFAR-10. In this exercise, you will first implement a multi-class logistic regression model followed by a three-layer neural network.

Submission guidelines: **Zip** all the files in the exercise directory excluding the data. Name the file `ex2_ID.zip`. Read the following instructions carefully:
1. This jupyter notebook contains all the step by step instructions needed for this exercise.
2. Write **efficient vectorized** code whenever possible.
3. You are responsible for the correctness of your code and should add as many tests as you see fit. Tests will not be graded nor checked.
4. Do not change the functions we provided you.
5. Write your functions in the instructed python modules only. All the logic you write is imported and used in this jupyter notebook. You are allowed to add functions as long as they are located in the python modules and are imported properly.
6. You are allowed to use functions and methods from the [Python Standard Library](https://docs.python.org/3/library/) and [numpy](https://www.numpy.org/devdocs/reference/) only. Any other imports are forbidden.
7. Your code must run without errors. Use `python 3` and `numpy 1.15.4`.
8. **Before submitting the exercise, restart the kernel and run the notebook from start to finish to make sure everything works. Code that cannot run will not be tested.**
9. Write your own code. Cheating will not be tolerated.
10. Answers to qualitative questions should be written in **markdown** cells (with $\LaTeX$ support).
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (12.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
import platform
print("Python version: ", platform.python_version())
print("Numpy version: ", np.__version__)
###Output
Python version: 3.7.1
Numpy version: 1.15.4
###Markdown
Logistic Regression

During this exercise, you are allowed (and encouraged) to use your code from HW1.

Load Data - CIFAR-10

The next few cells will download and extract CIFAR-10 into `datasets/cifar10/` - notice you can copy and paste this dataset from the previous exercise or just download it again. The CIFAR-10 dataset consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images. The dataset is divided into five training batches and one test batch, each with 10,000 images. The test batch contains exactly 1,000 randomly-selected images from each class.
###Code
from datasets import load_cifar10
URL = "http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz"
PATH = 'datasets/cifar10/' # the script will create required directories
load_cifar10.maybe_download_and_extract(URL, PATH)
CIFAR10_PATH = os.path.join(PATH, 'cifar-10-batches-py')
X_train, y_train, X_test, y_test = load_cifar10.load(CIFAR10_PATH) # load the entire data
# define a splitting for the data
num_training = 49000
num_validation = 1000
num_testing = 1000
# add a validation dataset for hyperparameter optimization
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_validation)
X_val = X_test[mask]
y_val = y_test[mask]
mask = range(num_validation, num_validation+num_testing)
X_test = X_test[mask]
y_test = y_test[mask]
# float64
X_train = X_train.astype(np.float64)
X_val = X_val.astype(np.float64)
X_test = X_test.astype(np.float64)
# subtract the mean from all the images in the batch
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# flatten all the images in the batch (make sure you understand why this is needed)
X_train = np.reshape(X_train, newshape=(X_train.shape[0], -1))
X_val = np.reshape(X_val, newshape=(X_val.shape[0], -1))
X_test = np.reshape(X_test, newshape=(X_test.shape[0], -1))
# add a bias term to all images in the batch
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
print(X_train.shape)
print(X_val.shape)
print(X_test.shape)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
def get_batch(X, y, n):
    # sample n random examples (with replacement) and return them with their labels
    rand_items = np.random.randint(0, X.shape[0], size=n)
    images = X[rand_items]
    labels = y[rand_items]
    return images, labels
def make_random_grid(x, y, n=4):
rand_items = np.random.randint(0, x.shape[0], size=n)
images = x[rand_items]
labels = y[rand_items]
    grid = np.hstack([np.asarray((vec_2_img(i) + mean_image), dtype=np.int) for i in images])  # pass a list so np.hstack receives a sequence
print(' '.join('%13s' % classes[labels[j]] for j in range(4)))
return grid
def vec_2_img(x):
x = np.reshape(x[:-1], (32, 32, 3))
return x
X_batch, y_batch = get_batch(X_test, y_test, 4)
plt.imshow(make_random_grid(X_batch, y_batch));
###Output
ship plane dog truck
###Markdown
Open the file `functions/classifier.py`. The constructor of the `LogisticRegression` class takes as input the dataset and labels in order to create appropriate parameters. Notice we are using the bias trick and only use the matrix `w` for convenience. Since we already have a (random) model, we can start predicting classes on images. Complete the method `predict` in the `LogisticRegression` class. **5 points**

You can use your code from HW1. If you wrote well-vectorized code, you should need to make very few changes.
###Code
from functions.classifier import LogisticRegression
classifier = LogisticRegression(X_train, y_train)
y_pred = classifier.predict(X_test)
X_batch, y_batch = get_batch(X_train, y_train, 4)
plt.imshow(make_random_grid(X_batch, y_batch));
print(' '.join('%13s' % classes[y_pred[j]] for j in range(4)))
print("model accuracy: ", classifier.calc_accuracy(X_train, y_train))
###Output
model accuracy: 10.481632653061226
###Markdown
Cross-entropy

Open the file `functions/losses.py`. Complete the function `softmax_loss_vectorized` using vectorized code. This function takes as input the weights `W`, data `X`, labels `y` and a regularization term `reg`, and outputs the calculated loss as a single number and the gradients with respect to `W`. Don't forget the regularization. **5 points**
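For reference, the average cross-entropy loss over $N$ examples with class scores $s_i = x_i W$ is usually written as below; whether a factor of $0.5$ is used on the regularization term, and whether the bias column is regularized, depends on the course convention, so treat this as a sketch rather than the exact required formula:

$$L(W) = \frac{1}{N}\sum_{i=1}^{N} -\log\!\left(\frac{e^{s_{i,y_i}}}{\sum_{j} e^{s_{i,j}}}\right) + \mathrm{reg}\cdot\sum_{k,l} W_{k,l}^{2}$$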
###Code
from functions.losses import softmax_loss_vectorized
W = np.random.randn(3073, 10) * 0.0001
loss_naive, grad_naive = softmax_loss_vectorized(W, X_val, y_val, 0.00000)
print ('loss: %f' % (loss_naive, ))
print ('sanity check: %f' % (-np.log(0.1))) # should be close but not the same
###Output
loss: 2.394880
sanity check: 2.302585
###Markdown
Inline Question 1: Why do we expect our loss to be close to -log(0.1)? **Explain briefly.**

**Your answer:** As in HW1, `W` is initialized with small random numbers, so the predicted probabilities are roughly uniform over the classes, i.e. about 1/10 for each of the 10 classes, which gives a loss of about -log(0.1).

Use the following cell to test your implementation of the gradients.
###Code
from functions.losses import grad_check
loss, grad = softmax_loss_vectorized(W, X_val, y_val, 1)
f = lambda w: softmax_loss_vectorized(W, X_val, y_val, 1)[0]
grad_numerical = grad_check(f, W, grad, num_checks=10)
from functions.classifier import LogisticRegression
logistic = LogisticRegression(X_train, y_train)
loss_history = logistic.train(X_train, y_train,
learning_rate=1e-7,
reg=5e4,
num_iters=1500,
verbose=True)
plt.plot(loss_history)
plt.xlabel('Iteration number')
plt.ylabel('Loss value')
plt.show()
print("Training accuracy: ", logistic.calc_accuracy(X_train, y_train))
print("Testing accuracy: ", logistic.calc_accuracy(X_test, y_test))
###Output
Training accuracy: 38.16734693877551
Testing accuracy: 39.900000000000006
###Markdown
Use the validation set to tune hyperparameters by training different models (using the training dataset) and evaluating the performance using the validation dataset. Save the results in a dictionary mapping tuples of the form `(learning_rate, regularization_strength)` to tuples of the form `(training_accuracy, validation_accuracy)`. Finally, you should evaluate the best model on the testing dataset.
###Code
# You are encouraged to experiment with additional values
learning_rates = [1e-7, 5e-6]
regularization_strengths = [5e4, 1e5, 5e3, 1e2]
results = {}
best_val = -1 # The highest validation accuracy that we have seen so far.
best_logistic = None # The LogisticRegression object that achieved the highest validation score.
################################################################################
# START OF YOUR CODE #
################################################################################
iters = 2000
for learning_rate in learning_rates:
for reg_strength in regularization_strengths:
log_reg = LogisticRegression(X_train, y_train)
log_reg.train(X_train, y_train, learning_rate=learning_rate, reg=reg_strength, num_iters=iters)
y_train_pred = log_reg.predict(X_train)
acc_train = np.mean(y_train == y_train_pred)
y_val_pred = log_reg.predict(X_val)
acc_val = np.mean(y_val == y_val_pred)
results[(learning_rate, reg_strength)] = (acc_train, acc_val)
if best_val < acc_val:
best_val = acc_val
best_logistic = log_reg
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print ('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print ('best validation accuracy achieved during cross-validation: %f' % best_val)
test_accuracy = best_logistic.calc_accuracy(X_test, y_test)  # evaluate the best model found during tuning
print ('Logistic regression on raw pixels final test set accuracy: %f' % test_accuracy)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
w = best_logistic.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
for i in range(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
###Output
_____no_output_____
###Markdown
Neural Network

The implementation of logistic regression was (hopefully) simple yet not very modular, since the layer, loss and gradient were calculated as a single monolithic function. This would become impractical as we move towards bigger models. As a warmup towards `PyTorch`, we want to build networks using a more modular design so that we can implement different layer types in isolation and easily integrate them together into models with different architectures.

This logic of isolation & integration is at the heart of all popular deep learning frameworks, and is based on two methods each layer holds - a forward and a backward pass. The forward function will receive inputs, weights and other parameters and will return both an output and a cache object storing data needed for the backward pass. The backward pass will receive upstream derivatives and the cache, and will return gradients with respect to the inputs and weights. By implementing several types of layers this way, we will be able to combine them to build classifiers with different architectures with relative ease.

We will implement a neural network to obtain better results on CIFAR-10. If you were careful, you should have got a classification accuracy of over 38% on the test set using a simple single-layer network. However, using multiple layers we could reach around 50% accuracy. Our neural network will be implemented in the file `functions/neural_net.py`. We will train this network using softmax loss, L2 regularization, and a ReLU non-linearity after the first two fully connected layers.

Fully Connected Layer: Forward Pass

Open the file `functions/layers.py` and implement the function `fc_forward`. **7.5 points**
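For intuition, here is a minimal sketch of what such a forward pass typically looks like; the exact signature, docstring and cache contents you must implement are defined in `functions/layers.py`, so this is only an illustration:

```python
def fc_forward_sketch(x, w, b):
    # x has shape (N, d_1, ..., d_k); flatten each example into a row vector
    N = x.shape[0]
    out = x.reshape(N, -1).dot(w) + b   # result has shape (N, M)
    cache = (x, w, b)                   # keep the inputs for the backward pass
    return out, cache
```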
###Code
np.random.seed(42)
from functions.layers import *
num_instances = 5
input_shape = (11, 7, 3)
output_shape = 4
X = np.random.randn(num_instances * np.prod(input_shape)).reshape(num_instances, *input_shape)
W = np.random.randn(np.prod(input_shape) * output_shape).reshape(np.prod(input_shape), output_shape)
b = np.random.randn(output_shape)
out, _ = fc_forward(X, W, b)
correct_out = np.array([[16.77132953, 1.43667172, -15.60205534, 7.15789287],
[ -8.5994206, 7.59104298, 10.92160126, 17.19394331],
[ 4.77874003, 2.25606192, -6.10944859, 14.76954561],
[21.21222953, 17.82329258, 4.53431782, -9.88327913],
[18.83041801, -2.55273817, 14.08484003, -3.99196171]])
print(np.isclose(out, correct_out, rtol=1e-8).all()) # simple test
###Output
True
###Markdown
Fully Connected Layer: Backward Pass

Open the file `functions/layers.py` and implement the function `fc_backward`. **7.5 points**
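Again for intuition only (the required interface is defined in `functions/layers.py`), a sketch of the standard gradients, assuming the cache stores the forward inputs:

```python
def fc_backward_sketch(dout, cache):
    # dout is the upstream gradient of shape (N, M)
    x, w, b = cache
    N = x.shape[0]
    dx = dout.dot(w.T).reshape(x.shape)   # gradient w.r.t. the input, in its original shape
    dw = x.reshape(N, -1).T.dot(dout)     # gradient w.r.t. the weights
    db = dout.sum(axis=0)                 # gradient w.r.t. the bias
    return dx, dw, db
```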
###Code
np.random.seed(42)
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: fc_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: fc_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: fc_forward(x, w, b)[0], b, dout)
out, cache = fc_forward(x,w,b)
dx, dw, db = fc_backward(dout, cache)
np.isclose(dw, dw_num, rtol=1e-8).all() # simple test
np.isclose(dx, dx_num, rtol=1e-8).all() # simple test
np.isclose(db, db_num, rtol=1e-8).all() # simple test
###Output
_____no_output_____
###Markdown
ReLU: Forward Pass

Open the file `functions/layers.py` and implement the function `relu_forward`. **7.5 points**
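A sketch of the idea (the required signature lives in `functions/layers.py`):

```python
def relu_forward_sketch(x):
    out = np.maximum(0, x)   # element-wise max(0, x)
    cache = x                # the backward pass only needs to know where x > 0
    return out, cache
```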
###Code
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
print(np.isclose(out, correct_out, rtol=1e-8).all()) # simple test
###Output
True
###Markdown
ReLU: Backward Pass

Open the file `functions/layers.py` and implement the function `relu_backward`. **7.5 points**
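And the corresponding backward sketch, assuming the cache holds the forward input:

```python
def relu_backward_sketch(dout, cache):
    x = cache
    dx = dout * (x > 0)   # pass the gradient only where the input was positive
    return dx
```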
###Code
np.random.seed(42)
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
xx, cache = relu_forward(x)
dx = relu_backward(dout, cache)
np.isclose(dx, dx_num, rtol=1e-8).all() # simple test
###Output
_____no_output_____
###Markdown
**Optional**: you are given two helper functions in `functions/layers.py` - `fc_relu_forward` and `fc_relu_backward`. You might find it beneficial to use dedicated functions to calculate the forward and backward outputs of a fully connected layer immediately followed by a ReLU.

Building the Network

First, notice that we are leaving behind the bias trick and removing the bias from each image.
###Code
X_train = np.array([x[:-1] for x in X_train])
X_val = np.array([x[:-1] for x in X_val])
X_test = np.array([x[:-1] for x in X_test])
print(X_train.shape)
print(X_val.shape)
print(X_test.shape)
###Output
(49000, 3072)
(1000, 3072)
(1000, 3072)
###Markdown
Open the file `functions/neural_net.py` and complete the class `ThreeLayerNet`. All the implementation details are available in the file itself. Read the documentation carefully since the class of this network is slightly different from the network in the previous section of this exercise. **50 points**
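As a rough orientation only — the real class must also compute the loss, the gradients and the L2 regularization, and the parameter names below (`'W1'`, `'b1'`, ...) are an assumption, not necessarily what `functions/neural_net.py` uses — the forward scoring pass could be composed from the layer helpers like this:

```python
def three_layer_scores_sketch(X, params):
    # fc -> ReLU -> fc -> ReLU -> fc, using the helpers from functions/layers.py
    h1, cache1 = fc_relu_forward(X, params['W1'], params['b1'])
    h2, cache2 = fc_relu_forward(h1, params['W2'], params['b2'])
    scores, cache3 = fc_forward(h2, params['W3'], params['b3'])
    return scores, (cache1, cache2, cache3)
```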
###Code
from functions.neural_net import ThreeLayerNet
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
model = ThreeLayerNet(input_size, hidden_size, num_classes)
stats = model.train(X_train, y_train, X_val, y_val,
num_iters=1500, batch_size=200,
learning_rate=1e-3, reg=0, verbose=True)
val_acc = (model.predict(X_val) == y_val).mean()
print ('Validation accuracy: ', val_acc)
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Clasification accuracy')
plt.show()
###Output
_____no_output_____
###Markdown
Use the validation set to tune hyperparameters by training different models (using the training dataset) and evaluating the performance using the validation dataset. Save the results in a dictionary mapping tuples of the form `(learning_rate, hidden_size, regularization)` to tuples of the form `(training_accuracy, validation_accuracy)`. You should evaluate the best model on the testing dataset and print out the training, validation and testing accuracies for each of the models and provide a clear visualization. Highlight the best model w.r.t the testing accuracy. **10 points**
###Code
# You are encouraged to experiment with additional values
learning_rates = [1e-3, 1e-6, 5e-3, 5e-6]  # this time it's up to you
hidden_sizes = [50, 100, 200, 500, 1000]  # this time it's up to you
regularizations = [0, 1e-3, 1e-6, 5e-6, 5e-9, 1]  # this time it's up to you
results = {}
best_val = -1
best_net = None
################################################################################
# START OF YOUR CODE #
################################################################################
best_values = None
for hs in hidden_sizes:
for lr in learning_rates:
for r in regularizations:
model = ThreeLayerNet(input_size, hs, num_classes)
stats = model.train(X_train, y_train, X_val, y_val,
num_iters=1500, batch_size=200,
learning_rate=lr, reg=r, verbose=False)
val_accuracy = (model.predict(X_val) == y_val).mean()
train_accuracy = (model.predict(X_train) == y_train).mean()
print('(learning_rate, hidden_size, regularization) = {}'.format((lr, hs, r)))
print('(train_accuracy,val_accuracy) = {}'.format((train_accuracy,val_accuracy)))
results[(lr, hs, r)] = (train_accuracy,val_accuracy)
if val_accuracy > best_val:
best_val = val_accuracy
best_values = [(lr, hs, r), (train_accuracy, val_accuracy), ]
best_net = model
print('best training values = {} gave the best accuracy: train - {}, val - {}'
      .format(best_values[0], best_values[1][0], best_values[1][1]))
################################################################################
# END OF YOUR CODE #
################################################################################
###Output
(learning_rate, hidden_size, regularization) = (0.001, 50, 0)
(train_accuracy,val_accuracy) = (0.4101224489795918, 0.398)
(learning_rate, hidden_size, regularization) = (0.001, 50, 0.001)
(train_accuracy,val_accuracy) = (0.4099795918367347, 0.395)
(learning_rate, hidden_size, regularization) = (0.001, 50, 1e-06)
(train_accuracy,val_accuracy) = (0.41608163265306125, 0.397)
(learning_rate, hidden_size, regularization) = (0.001, 50, 5e-06)
(train_accuracy,val_accuracy) = (0.4155714285714286, 0.414)
(learning_rate, hidden_size, regularization) = (0.001, 50, 5e-09)
(train_accuracy,val_accuracy) = (0.4164081632653061, 0.399)
(learning_rate, hidden_size, regularization) = (0.001, 50, 1)
(train_accuracy,val_accuracy) = (0.272734693877551, 0.287)
(learning_rate, hidden_size, regularization) = (1e-06, 50, 0)
(train_accuracy,val_accuracy) = (0.11881632653061225, 0.135)
(learning_rate, hidden_size, regularization) = (1e-06, 50, 0.001)
(train_accuracy,val_accuracy) = (0.10673469387755102, 0.108)
(learning_rate, hidden_size, regularization) = (1e-06, 50, 1e-06)
(train_accuracy,val_accuracy) = (0.1103469387755102, 0.113)
(learning_rate, hidden_size, regularization) = (1e-06, 50, 5e-06)
(train_accuracy,val_accuracy) = (0.09648979591836734, 0.102)
(learning_rate, hidden_size, regularization) = (1e-06, 50, 5e-09)
(train_accuracy,val_accuracy) = (0.10220408163265306, 0.096)
(learning_rate, hidden_size, regularization) = (1e-06, 50, 1)
(train_accuracy,val_accuracy) = (0.08842857142857143, 0.104)
(learning_rate, hidden_size, regularization) = (0.005, 50, 0)
(train_accuracy,val_accuracy) = (0.5171428571428571, 0.506)
(learning_rate, hidden_size, regularization) = (0.005, 50, 0.001)
(train_accuracy,val_accuracy) = (0.508, 0.496)
(learning_rate, hidden_size, regularization) = (0.005, 50, 1e-06)
(train_accuracy,val_accuracy) = (0.5159795918367347, 0.475)
(learning_rate, hidden_size, regularization) = (0.005, 50, 5e-06)
(train_accuracy,val_accuracy) = (0.5118571428571429, 0.479)
(learning_rate, hidden_size, regularization) = (0.005, 50, 5e-09)
(train_accuracy,val_accuracy) = (0.5148367346938776, 0.489)
(learning_rate, hidden_size, regularization) = (0.005, 50, 1)
(train_accuracy,val_accuracy) = (0.2983469387755102, 0.322)
(learning_rate, hidden_size, regularization) = (5e-06, 50, 0)
(train_accuracy,val_accuracy) = (0.08685714285714285, 0.091)
(learning_rate, hidden_size, regularization) = (5e-06, 50, 0.001)
(train_accuracy,val_accuracy) = (0.11326530612244898, 0.114)
(learning_rate, hidden_size, regularization) = (5e-06, 50, 1e-06)
(train_accuracy,val_accuracy) = (0.10273469387755102, 0.103)
(learning_rate, hidden_size, regularization) = (5e-06, 50, 5e-06)
(train_accuracy,val_accuracy) = (0.14134693877551022, 0.146)
(learning_rate, hidden_size, regularization) = (5e-06, 50, 5e-09)
(train_accuracy,val_accuracy) = (0.1223265306122449, 0.124)
(learning_rate, hidden_size, regularization) = (5e-06, 50, 1)
(train_accuracy,val_accuracy) = (0.11075510204081633, 0.099)
(learning_rate, hidden_size, regularization) = (0.001, 100, 0)
(train_accuracy,val_accuracy) = (0.4493265306122449, 0.436)
(learning_rate, hidden_size, regularization) = (0.001, 100, 0.001)
(train_accuracy,val_accuracy) = (0.44726530612244897, 0.433)
(learning_rate, hidden_size, regularization) = (0.001, 100, 1e-06)
(train_accuracy,val_accuracy) = (0.447530612244898, 0.435)
(learning_rate, hidden_size, regularization) = (0.001, 100, 5e-06)
(train_accuracy,val_accuracy) = (0.44877551020408163, 0.446)
(learning_rate, hidden_size, regularization) = (0.001, 100, 5e-09)
(train_accuracy,val_accuracy) = (0.4440612244897959, 0.438)
(learning_rate, hidden_size, regularization) = (0.001, 100, 1)
(train_accuracy,val_accuracy) = (0.311530612244898, 0.322)
(learning_rate, hidden_size, regularization) = (1e-06, 100, 0)
(train_accuracy,val_accuracy) = (0.07185714285714286, 0.059)
(learning_rate, hidden_size, regularization) = (1e-06, 100, 0.001)
(train_accuracy,val_accuracy) = (0.10687755102040816, 0.116)
(learning_rate, hidden_size, regularization) = (1e-06, 100, 1e-06)
(train_accuracy,val_accuracy) = (0.10628571428571429, 0.116)
(learning_rate, hidden_size, regularization) = (1e-06, 100, 5e-06)
(train_accuracy,val_accuracy) = (0.1106734693877551, 0.122)
(learning_rate, hidden_size, regularization) = (1e-06, 100, 5e-09)
(train_accuracy,val_accuracy) = (0.0906530612244898, 0.092)
(learning_rate, hidden_size, regularization) = (1e-06, 100, 1)
(train_accuracy,val_accuracy) = (0.10828571428571429, 0.103)
(learning_rate, hidden_size, regularization) = (0.005, 100, 0)
(train_accuracy,val_accuracy) = (0.5486734693877551, 0.499)
(learning_rate, hidden_size, regularization) = (0.005, 100, 0.001)
(train_accuracy,val_accuracy) = (0.5506326530612244, 0.5)
(learning_rate, hidden_size, regularization) = (0.005, 100, 1e-06)
(train_accuracy,val_accuracy) = (0.5470408163265306, 0.488)
(learning_rate, hidden_size, regularization) = (0.005, 100, 5e-06)
(train_accuracy,val_accuracy) = (0.5500408163265306, 0.508)
(learning_rate, hidden_size, regularization) = (0.005, 100, 5e-09)
(train_accuracy,val_accuracy) = (0.5296122448979592, 0.496)
(learning_rate, hidden_size, regularization) = (0.005, 100, 1)
(train_accuracy,val_accuracy) = (0.31981632653061226, 0.333)
(learning_rate, hidden_size, regularization) = (5e-06, 100, 0)
(train_accuracy,val_accuracy) = (0.13979591836734695, 0.153)
(learning_rate, hidden_size, regularization) = (5e-06, 100, 0.001)
(train_accuracy,val_accuracy) = (0.13185714285714287, 0.163)
(learning_rate, hidden_size, regularization) = (5e-06, 100, 1e-06)
(train_accuracy,val_accuracy) = (0.11724489795918368, 0.126)
(learning_rate, hidden_size, regularization) = (5e-06, 100, 5e-06)
(train_accuracy,val_accuracy) = (0.1260408163265306, 0.12)
(learning_rate, hidden_size, regularization) = (5e-06, 100, 5e-09)
(train_accuracy,val_accuracy) = (0.12081632653061225, 0.11)
(learning_rate, hidden_size, regularization) = (5e-06, 100, 1)
(train_accuracy,val_accuracy) = (0.15855102040816327, 0.156)
(learning_rate, hidden_size, regularization) = (0.001, 200, 0)
(train_accuracy,val_accuracy) = (0.4793877551020408, 0.441)
(learning_rate, hidden_size, regularization) = (0.001, 200, 0.001)
(train_accuracy,val_accuracy) = (0.4841020408163265, 0.477)
(learning_rate, hidden_size, regularization) = (0.001, 200, 1e-06)
(train_accuracy,val_accuracy) = (0.48242857142857143, 0.458)
(learning_rate, hidden_size, regularization) = (0.001, 200, 5e-06)
(train_accuracy,val_accuracy) = (0.4837142857142857, 0.452)
(learning_rate, hidden_size, regularization) = (0.001, 200, 5e-09)
(train_accuracy,val_accuracy) = (0.4811020408163265, 0.465)
(learning_rate, hidden_size, regularization) = (0.001, 200, 1)
(train_accuracy,val_accuracy) = (0.33110204081632655, 0.338)
(learning_rate, hidden_size, regularization) = (1e-06, 200, 0)
(train_accuracy,val_accuracy) = (0.08795918367346939, 0.096)
(learning_rate, hidden_size, regularization) = (1e-06, 200, 0.001)
(train_accuracy,val_accuracy) = (0.08728571428571429, 0.076)
(learning_rate, hidden_size, regularization) = (1e-06, 200, 1e-06)
(train_accuracy,val_accuracy) = (0.0893265306122449, 0.09)
(learning_rate, hidden_size, regularization) = (1e-06, 200, 5e-06)
(train_accuracy,val_accuracy) = (0.10946938775510204, 0.115)
(learning_rate, hidden_size, regularization) = (1e-06, 200, 5e-09)
(train_accuracy,val_accuracy) = (0.1069795918367347, 0.087)
(learning_rate, hidden_size, regularization) = (1e-06, 200, 1)
(train_accuracy,val_accuracy) = (0.1273877551020408, 0.126)
(learning_rate, hidden_size, regularization) = (0.005, 200, 0)
(train_accuracy,val_accuracy) = (0.5937142857142857, 0.5)
(learning_rate, hidden_size, regularization) = (0.005, 200, 0.001)
(train_accuracy,val_accuracy) = (0.5848163265306122, 0.51)
(learning_rate, hidden_size, regularization) = (0.005, 200, 1e-06)
(train_accuracy,val_accuracy) = (0.5892244897959183, 0.498)
(learning_rate, hidden_size, regularization) = (0.005, 200, 5e-06)
(train_accuracy,val_accuracy) = (0.5926734693877551, 0.503)
(learning_rate, hidden_size, regularization) = (0.005, 200, 5e-09)
(train_accuracy,val_accuracy) = (0.5729387755102041, 0.528)
(learning_rate, hidden_size, regularization) = (0.005, 200, 1)
(train_accuracy,val_accuracy) = (0.33316326530612245, 0.325)
###Markdown
Inline Question 2: What can you say about the training? Why does it take much longer to train? How could you speed up computation? What would happen to the network accuracy and training time when adding an additional layer? What about additional hidden neurons?**Your answer:** *The training process takes much longer. We think this is because we added more layers, which means more neurons, so the forward and backward passes require much more computation (a gradient and loss contribution for each). We could speed up the computation by using a GPU, a stronger CPU, or more efficient (e.g. better-vectorized) code; a C++ implementation could also reduce Python overhead. Adding another layer would increase the training time further, while the accuracy might or might not improve; the same goes for additional hidden neurons.* Bonus Train a 5 hidden layer network with varying hidden layer size and plot the loss function and train / validation accuracies. **5 points**
###Code
## Your code here ##
###Output
_____no_output_____
###Markdown
Spatial Data Analysis. Homework assignment #2
Soft deadline: __November 4, 2020, 23:59__
Hard deadline (with a penalty of _50%_ of the points you earn for this homework): __November 5, 2020, 08:59__
A visualization of "anything" __without__ the main task completed is graded __0 points__
Name: `Маргасов Арсений Олегович` Group: `MADE-DS-11`
Task #1. Hot spot (algorithm - 10 points, visualization - 10 points). Generate random points on planet Earth until you land inside the territory of ``Afghanistan``
1. You may use the point-in-polygon and point-to-polygon distance (in meters) functions
2. Propose a non-naive search algorithm (generating a point __directly__ from the polygon of Afghanistan's borders is __forbidden__)
###Code
import numpy as np
import geopandas as gpd
import pandas as pd
from itertools import count
from shapely.geometry import Point, Polygon
import random
import folium
import warnings
warnings.filterwarnings('ignore')
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
AfgPolygon = world.loc[world['name'] == 'Afghanistan']['geometry'][103]
AfgPolygon = Polygon(list(map(lambda x: (x[1], x[0]), AfgPolygon.exterior.coords[:])))
bbox = AfgPolygon.bounds
# afg_centroid = Point((bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2)
afg_centroid = AfgPolygon.centroid
random.seed(19)
def gen_point(prev_point):
r = afg_centroid.distance(prev_point) / 4
x0 = 0.25 * prev_point.x + 0.75 * afg_centroid.x
y0 = 0.25 * prev_point.y + 0.75 * afg_centroid.y
u = random.random()
x = u * (prev_point.x + afg_centroid.x) / 2 + (1 - u) * afg_centroid.x
y = y0 + np.sqrt(r * r - (x - x0) * (x - x0)) + np.random.normal(0, r)
return Point(x, y)
points = []
p = Point(random.randint(-90, 90), random.randint(-180, 180))
for i in count(start = 1):
print(p)
points.append(p)
if p.within(AfgPolygon):
break
p = gen_point(p)
m = folium.Map(afg_centroid.coords[:][0], zoom_start=6)
folium.GeoJson(world.loc[world['name'] == 'Afghanistan']['geometry'][103]).add_to(m)
folium.Marker(
location=points[0].coords[:][0],
popup= 'Point #1' ,
icon=folium.Icon(color='green', icon='info-sign')).add_to(m)
for i in range(1, len(points)):
folium.Marker(
location=points[i].coords[:][0],
popup= 'Point #' + str(i + 1),
icon=folium.Icon(color='green', icon='info-sign')).add_to(m)
folium.PolyLine([points[i - 1].coords[:][0],
points[i].coords[:][0]], color='purple').add_to(m)
folium.LatLngPopup().add_to(m)
m
###Output
_____no_output_____
###Markdown
Visualize the proposed algorithm step by step using ``Folium``. Task #2. Quality of life (20 points). To measure the quality-of-life indicator at the point found in the previous task, you need to compute the following sum of distances (in meters):
1. The distance from the point to the 5 nearest __*__ ATMs located in the country with the largest number of residential real-estate objects
2. The distance from the point to the 5 nearest schools located in the country with the largest number of pharmacies in its capital
3. The distance from the point to the 5 nearest cinemas located in the country with the largest ratio of railway stations to bus stops in its southern part __**__
__*__ When searching for the _N_ nearest objects you must use an ``R-tree``
__**__ The southern part of a country is the territory lying south of the set of points equidistant from the country's northernmost and southernmost points
###Code
import requests
import json
from OSMPythonTools.nominatim import Nominatim
nominatim = Nominatim()
from OSMPythonTools.overpass import overpassQueryBuilder, Overpass
overpass = Overpass()
###Output
_____no_output_____
###Markdown
Part 1
###Code
countries = overpass.query('relation["admin_level"="2"][boundary=administrative];out;', timeout=100)
countries_id_name = {}
for elem in countries.elements():
if str(elem.id())[-1] == '0':
countries_id_name[elem.id() + 3600000000] = elem.tag('name:en')
countries_id_realty_cnt = {}
for country_id in tqdm(countries_id_name.keys()):
query = overpassQueryBuilder(area=country_id, elementType=['node'], selector='"building"="yes"', out='count')
result = overpass.query(query, timeout=180)
countries_id_realty_cnt[country_id] = result.countNodes()
import operator
max_realty_id = max(countries_id_realty_cnt.items(), key=operator.itemgetter(1))[0]
countries_id_name[max_realty_id]
query = overpassQueryBuilder(area=max_realty_id, elementType=['node'], selector='"amenity"="atm"')
result = overpass.query(query, timeout=180)
afg_last = points[-1].coords[:][0]
from rtree import index
index = index.Index()
atm_id_coords = {}
for atm in result.elements():
yx = tuple(reversed(atm.geometry()['coordinates']))
atm_id_coords[atm.id()] = yx
index.insert(atm.id(), yx + yx)
from geographiclib.geodesic import Geodesic
distances = []
for idx in index.nearest(afg_last + afg_last, 5):
g = Geodesic.WGS84.Inverse(*(afg_last + atm_id_coords[idx]))
distances.append(g['s12'])
# sum of distances to 5 nearest atms
sum(distances)
###Output
_____no_output_____
###Markdown
Task #3. A drive through New York (route - 20 points, visualization - 10 points). Getting __by car__ from the entrance of ``Central Park`` in __New York__ (on the ``5th Avenue`` side) to the intersection of ``Water Street`` and ``Washington Street`` in Brooklyn (from where you get the best photos of the Manhattan Bridge) is quite hard - because of the eternal traffic jams, of course. However, it is even harder to do while driving past schools, where children keep crossing the road in the wrong places. You need to build the route described above while avoiding schools along the way. Visualize this route (also adding the schools and the road segments unavailable for driving) using ``Folium``. Data on the locations of New York schools can be found [here](https://catalog.data.gov/dataset/2019-2020-school-point-locations)
###Code
import pyproj
from shapely import geometry
from shapely.geometry import LineString, MultiPolygon
from openrouteservice import client
from OSMPythonTools.api import Api
from tqdm import tqdm
intersect_WaterWashingtonSt = Point(40.70320,-73.98958)
centralpark_avenue5th = Point(40.7649, -73.97256)
api_key = '5b3ce3597851110001cf6248f45bca95f4ec48a8839956dda7f75bd1'
clnt = client.Client(key=api_key)
request_params_1 = {'coordinates': [list(reversed(centralpark_avenue5th.coords[:][0])),
list(reversed(intersect_WaterWashingtonSt.coords[:][0]))],
'format_out': 'geojson',
'profile': 'driving-car',
'preference': 'shortest',
'instructions': 'false'}
route_normal = clnt.directions(**request_params_1)
route_buffer = LineString(route_normal['features'][0]['geometry']['coordinates']).buffer(0.005)
loc_point = Point([(intersect_WaterWashingtonSt.coords[:][0][i] + centralpark_avenue5th.coords[:][0][i]) / 2 for i in range(2)])
schools = pd.read_csv('./2019_-_2020_School_Point_Locations.csv')
# schools['the_geom'] = schools['the_geom'].apply(lambda x: Point(float(x.split(' ')[2][:-1]), float(x.split(' ')[1][1:])))
schools['the_geom'] = schools['the_geom'].apply(lambda x: Point(float(x.split(' ')[1][1:]), float(x.split(' ')[2][:-1])))
def CreateBufferPolygon(point_in, resolution=2, radius=150):
sr_wgs = pyproj.Proj(init='epsg:4326')
sr_utm = pyproj.Proj(init='epsg:32632') # WGS84 UTM32N
point_in_proj = pyproj.transform(sr_wgs, sr_utm, *point_in) # unpack list to arguments
point_buffer_proj = Point(point_in_proj).buffer(radius, resolution=resolution)
poly_wgs = []
for point in point_buffer_proj.exterior.coords:
poly_wgs.append(pyproj.transform(sr_utm, sr_wgs, *point)) # Transform back to WGS84
return poly_wgs
sites_poly = []
for _, school in tqdm(schools.iterrows()):
    site_coords = school['the_geom'].coords[:][0]
    site_poly_coords = CreateBufferPolygon(site_coords,
                                           resolution=2,
                                           radius=150)
    sites_poly.append(site_poly_coords)
sites_buffer_poly = []
sites_buffer_poly_idx = []
for idx, site_poly in enumerate(sites_poly):
poly = Polygon(site_poly)
if route_buffer.intersects(poly):
sites_buffer_poly.append(poly)
sites_buffer_poly_idx.append(idx)
request_params_2 = request_params_1.copy()
request_params_2['options'] = {'avoid_polygons': geometry.mapping(MultiPolygon(sites_buffer_poly))}
route_detour = clnt.directions(**request_params_2)
# GeoJSON style function
def style_function(color):
return lambda feature: dict(color=color,
weight=3,
opacity=0.8)
map_params = {'tiles':'OpenStreetMap',
'location':loc_point.coords[:][0],
'zoom_start': 13}
map2 = folium.Map(**map_params)
for idx in sites_buffer_poly_idx:
school = schools.iloc[idx, :]
folium.features.Marker(list(reversed(school['the_geom'].coords[:][0])),
popup=school['Loc_Name'],
icon=folium.Icon(color='orange', icon='info-sign')).add_to(map2)
site_poly_coords = [(y,x) for x,y in sites_poly[idx]]
folium.vector_layers.Polygon(locations=site_poly_coords,
color='purple',
fill_color='purple',
fill_opacity=1,
weight=3).add_to(map2)
folium.features.GeoJson(data=route_normal,
                        name='Direct route (ignoring schools)',
style_function=style_function('blue'),
overlay=True).add_to(map2)
folium.features.GeoJson(data=route_detour,
                        name='Detour around schools',
style_function=style_function('#00FF00'),
overlay=True).add_to(map2)
map2.add_child(folium.map.LayerControl())
map2
###Output
_____no_output_____
###Markdown
PHYS 434 HW 2 Section AC Haowen Guan
###Code
%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import scipy
import random
from scipy import stats
plt.rcParams["figure.figsize"] = (15,10)
###Output
_____no_output_____
###Markdown
**1) A little introductory brain teaser. Which is more probable when rolling 2 six-sided dice: rolling snake eyes (two ones) or rolling sevens (dice sum to seven)? What is the ratio of the probabilities?** Theoretically, there is only _one case_ that gives two ones, but for rolling sevens there are _6 cases_ in total. So rolling a _seven is more probable_, and the ratio of the probabilities is $1 : 6$. A simulation also confirms this:
###Code
countTwo = 0;
countSeven = 0;
for i in range(100000):
a = random.randint(1,6)
b = random.randint(1,6)
if (a + b == 2):
countTwo += 1
if (a + b == 7):
countSeven += 1
print("Probability of rolling snake eyes is ", countTwo / 1000, "%")
print("Probability of rolling sevens is ", countSeven / 1000, "%")
print("The ratio of rolling snake eyes to rolling sevens is 1 : ", countSeven / countTwo)
###Output
Probability of rolling snake eyes is 2.77 %
Probability of rolling sevens is 16.702 %
The ratio of rolling snake eyes to rolling sevens is 1 : 6.029602888086643
###Markdown
2) **Following what we did in class show how to use the convolution operator to determine the probability of the sum of 2 six sided dice. Do both analytically (math & counting) and numerically (computer program).** Analytically, there are 11 possible outcomes (sums from 2 to 12) and in total $6\times 6=36$ equally likely combinations of the two dice. The number of combinations for each possible sum $x$ is given by $$ y = 6-|7-x| \quad\textrm{where}\quad 2 \le x \le 12, $$ so the probability of each sum is $y/36$. Numerically, we can check the plot generated below:
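Since the assignment explicitly asks for the convolution operator, here is a small sketch (separate from the simulation below) that obtains the same distribution by convolving the two single-die probability mass functions with `np.convolve`:

```python
import numpy as np

die = np.ones(6) / 6             # PMF of a fair die, faces 1..6
sum_pmf = np.convolve(die, die)  # PMF of the sum, length 11, sums 2..12
for s, p in zip(range(2, 13), sum_pmf):
    print(s, int(round(p * 36)), "/ 36")   # counts 1,2,3,4,5,6,5,4,3,2,1
```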
###Code
results = [];
for i in range(100000):
a = random.randint(1,6)
b = random.randint(1,6)
results.append(a + b)
fig, ax = plt.subplots(1,2)
fig.set_size_inches(20,7)
ax[0].hist(results, 100)
ax[0].tick_params(labelsize = 24)
ax[0].set_xlabel("sum of 2 six sided dice", fontsize=24)
ax[0].set_ylabel("Count", fontsize=24)
ax[1].hist(results, 100)
ax[1].tick_params(labelsize = 24)
ax[1].set_yscale('log')
ax[1].set_xlabel("sum of 2 six sided dice", fontsize=24)
ax[1].set_ylabel("Log(Count)", fontsize=24)
###Output
_____no_output_____
###Markdown
**3) Calculate the mean and the variance of the distribution in problem 2. Hint: this is surprisingly tricky, make sure your result makes sense.**
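For a quick analytic sanity check of the values computed below (assuming fair six-sided dice): a single die has mean $3.5$ and variance $\frac{91}{6}-3.5^2=\frac{35}{12}$, so for the sum of two independent dice $$\mathbb{E}[X_1+X_2]=7,\qquad \mathrm{Var}(X_1+X_2)=2\cdot\frac{35}{12}=\frac{35}{6}\approx 5.83,$$ which is what the empirical mean and variance printed by the code should approach.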
###Code
print("Mean of distribution is", np.mean(results))
print("Variance of distribution is", np.var(results))
###Output
Mean of distribution is 6.99417
Variance of distribution is 5.8279560110999995
###Markdown
**4) Repeat 2, and graph the average of 10 dice. Is this a Gaussian distribution? Explain in depth.**
###Code
results = [];
for i in range(100000):
results.append(0)
for i in range(10):
for i in range(100000):
a = random.randint(1,6)
results[i] += a;
results = np.divide(results, 10)
fig, ax = plt.subplots(1,2)
fig.set_size_inches(20,7)
ax[0].hist(results, 100)
ax[0].tick_params(labelsize = 24)
ax[0].set_xlabel("avg of 10 dice", fontsize=24)
ax[0].set_ylabel("Count", fontsize=24)
ax[1].hist(results, 100)
ax[1].tick_params(labelsize = 24)
ax[1].set_yscale('log')
ax[1].set_xlabel("avg of 10 dice", fontsize=24)
ax[1].set_ylabel("Log(Count)", fontsize=24)
###Output
_____no_output_____
###Markdown
It does look like a Gaussian distribution, with a bell shape. The likely reason is that, since we took the average of 10 dice for each measurement, each measurement tends to land close to the mean value, while an average far from the mean becomes increasingly rare (the central limit theorem at work). **5) Show that the sum and average of an initially Gaussian distribution is also a Gaussian (can be analytic or numerical). How does the standard deviation of the resulting sum or average Gaussian change? This is a hugely important result. Explore what this means for integrating a signal over time.**
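One standard fact worth making explicit here (it is assumed rather than derived in the code below): a sum of independent Gaussians is again Gaussian, and the average of $N$ i.i.d. samples from $\mathcal{N}(\mu,\sigma^2)$ is distributed as $\mathcal{N}(\mu,\sigma^2/N)$, so $$\sigma_{\bar{X}}=\frac{\sigma}{\sqrt{N}}.$$ For example, with $\sigma=1$ and $N=20$ this gives $1/\sqrt{20}\approx 0.224$, consistent with the standard deviation printed below; this shrinking of the noise is why integrating (averaging) a signal over time improves the signal-to-noise ratio.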
###Code
results = [];
for i in range(100000):
results.append(0)
for i in range(10):
d = stats.norm.rvs(loc = 5., scale = 1, size = 100000)
results += d;
results = np.divide(results, 10)
print("Standard diviation of distribution is", np.std(results))
fig, ax = plt.subplots(1,2)
fig.set_size_inches(10,3)
ax[0].hist(results, 100)
ax[0].tick_params(labelsize = 12)
ax[0].set_xlabel("x", fontsize=12)
ax[0].set_ylabel("Count", fontsize=12)
ax[1].hist(results, 100)
ax[1].tick_params(labelsize = 12)
ax[1].set_yscale('log')
ax[1].set_xlabel("x", fontsize=12)
ax[1].set_ylabel("Log(Count)", fontsize=12)
results = [];
for i in range(100000):
results.append(0)
for i in range(20):
d = stats.norm.rvs(loc = 5., scale = 1, size = 100000)
results += d;
results = np.divide(results, 20)
print("Standard diviation of distribution is", np.std(results))
fig, ax = plt.subplots(1,2)
fig.set_size_inches(10,3)
ax[0].hist(results, 100)
ax[0].tick_params(labelsize = 12)
ax[0].set_xlabel("x", fontsize=12)
ax[0].set_ylabel("Count", fontsize=12)
ax[1].hist(results, 100)
ax[1].tick_params(labelsize = 12)
ax[1].set_yscale('log')
ax[1].set_xlabel("x", fontsize=12)
ax[1].set_ylabel("Log(Count)", fontsize=12)
###Output
Standard deviation of distribution is 0.2234944258926579
###Markdown
Homework assignment
1. Implement the neural network class and fill in the required operations; the architecture is described below
1. Write the training loop (generalize what was done above)
1. Add logging
    1. Save the loss at every training iteration __0.25 points__ ✓
    1. Save the train and test loss every epoch __0.25 points__ ✓
    1. Compute metrics every epoch __0.25 points__ ✓
    1. Add a progress bar showing the averaged loss over the last 500 iterations __0.25 points__ ✓
1. Add early stopping __0.5 points__
1. Plot the loss curves, the metrics, and the confusion matrix __0.5 points__ ✓

Architecture (things you can try)
1. Pretrained embeddings. Read [here](https://pytorch.org/docs/stable/nn.htmlembedding) (from_pretrained) how to plug in your own embeddings; we loaded the embedding matrix above. __0 points__ ✓
1. Fine-tune the embeddings separately from the network. __2 points__
1. Fine-tune the embeddings together with the network and with a different learning rate (set in the optimizer). __2 points__ ✓
1. Bidirectional LSTM. __1 point__ ✓
1. Several parallel CNNs with different window sizes, mean/max-over-time pooling on top of them, and subsequent concatenation. __2 points__ ✓
1. Several sequential CNNs. __1 point__ ✓
1. Different windows and a residual connection on top of the previous item. __2 points__ ✓
1. The previous item done without mistakes (padding positions masked in the convolutions). __2 points__
1. Write a correct mean/max pooling that does not take paddings into account, i.e. masks them. __2 points__
1. Add [torch.nn.utils.rnn.pack_padded_sequence()](https://pytorch.org/docs/stable/nn.htmltorch.nn.utils.rnn.pack_padded_sequence) and [torch.nn.utils.rnn.pack_sequence()](https://pytorch.org/docs/stable/nn.htmltorch.nn.utils.rnn.pack_sequence) for the LSTM. Info [here](Еще-важный-момент-про-LSTM) __2 points__ ✓
1. Add spatial dropout on the LSTM input (not just the standard flag in the LSTM constructor) __1 point__
1. Add BatchNorm/LayerNorm/Dropout/Residual/etc __1 point__ ✓
1. Add a scheduler __1 point__ ✓
1. Train on a GPU __2 points__
1. Do transfer learning from your own language model trained on any data, e.g. unlabeled. __7 points__
1. Your madness: 10 points maximum
###Code
import math
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from tqdm import tqdm
import torch
from torch.utils.data import DataLoader
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
from torch import nn
import torch.nn.functional as F
import zipfile
import seaborn as sns
import matplotlib.pyplot as plt
# moved to separate files to keep the notebook uncluttered
from src.data import Parser
from src.utils import load_embeddings, TextClassificationDataset
from src.train import train_model
from typing import Iterable, List, Tuple
from nltk.tokenize import wordpunct_tokenize
###Output
_____no_output_____
###Markdown
Reading and preprocessing the data
###Code
data_path = '/mnt/f/data/dl'
parser = Parser(data_path=data_path)
unlabeled, train, valid = parser.run()
unique_categories = set(train.category.unique().tolist() + valid.category.unique().tolist())
category2index = {category: index for index, category in enumerate(unique_categories)}
train['target'] = train.category.map(category2index)
valid['target'] = valid.category.map(category2index)
vocab, embeddings = load_embeddings('/mnt/f/data/models/wiki-news-300d-1M.vec.zip', 'wiki-news-300d-1M.vec', max_words=100_000)
train_x, train_y = train.question.tolist(), train.target.tolist()
valid_x, valid_y = valid.question.tolist(), valid.target.tolist()
train_ds = TextClassificationDataset(texts=train_x, targets=train_y, vocab=vocab)
valid_ds = TextClassificationDataset(texts=valid_x, targets=valid_y, vocab=vocab)
train_loader = DataLoader(train_ds, batch_size=512)
valid_loader = DataLoader(valid_ds, batch_size=512)
###Output
_____no_output_____
###Markdown
Network architecture
###Code
class MyNet(nn.Module):
def __init__(self,
embeddings: np.ndarray,
n_filters: int,
kernel_sizes: List[int],
n_classes: int,
dropout: float,
lstm_hidden_size: int):
super().__init__()
self.lstm_hidden_size = lstm_hidden_size
self.embedding_layer = nn.Embedding.from_pretrained(torch.tensor(embeddings).float(),
padding_idx=0,
freeze=False)
self.embedding_dim = embeddings.shape[-1]
self.convs = nn.ModuleList([nn.Conv1d(in_channels=self.lstm_hidden_size * 2,
out_channels=n_filters,
kernel_size=ks)
for ks in kernel_sizes])
self.linear_final = nn.Linear(len(kernel_sizes) * n_filters, n_classes)
self.dropout = nn.Dropout(dropout)
self.batch_norm = nn.BatchNorm1d(num_features=len(kernel_sizes) * n_filters)
self.residual_conv = nn.Conv1d(in_channels=self.lstm_hidden_size * 2,
out_channels=len(kernel_sizes) * n_filters,
kernel_size=1)
self.conv_2 = nn.Conv1d(in_channels=len(kernel_sizes) * n_filters,
out_channels=len(kernel_sizes) * n_filters,
kernel_size=2)
self.conv_3 = nn.Conv1d(in_channels=len(kernel_sizes) * n_filters,
out_channels=len(kernel_sizes) * n_filters,
kernel_size=3)
self.avg_pool = nn.AvgPool1d(kernel_size=3,
stride=1,
padding=1,
count_include_pad=False,
ceil_mode=False)
self.lstm = torch.nn.LSTM(self.embedding_dim,
lstm_hidden_size,
batch_first=True,
bidirectional=True)
@staticmethod
def pad_convolution(x, kernel_size):
x = F.pad(x.transpose(1, 2), (kernel_size - 1, 0))
return x.transpose(1, 2)
@staticmethod
def count_pads(x, axis=1):
        return torch.Tensor(np.count_nonzero(x, axis=axis))  # the torch equivalent could not be found for some reason
def forward(self, x):
lengths = self.count_pads(x)
x = self.embedding_layer(x)
x = pack_padded_sequence(x,
lengths,
batch_first=True,
enforce_sorted=False)
x, memory = self.lstm(x)
x = pad_packed_sequence(x, batch_first=True)[0]
residual = self.residual_conv(x.transpose(1, 2)).transpose(1, 2)
convs = [F.relu(conv(self.pad_convolution(x, conv.kernel_size[0]).transpose(1, 2)).transpose(1, 2)) for conv in self.convs]
convs = [self.avg_pool(conv.transpose(1, 2)).transpose(1, 2) # FIX
for conv in convs]
x = torch.cat(convs, 2)
x = x + residual
x = self.dropout(x)
residual = x
x = F.relu(self.conv_2(self.pad_convolution(x, self.conv_2.kernel_size[0]).transpose(1, 2)).transpose(1, 2))
x = self.avg_pool(x.transpose(1, 2)).transpose(1, 2)
x = x + residual
residual = x
x = F.relu(self.conv_3(self.pad_convolution(x, self.conv_3.kernel_size[0]).transpose(1, 2)).transpose(1, 2))
x = x + residual
x = x.mean(dim=1) # FIX
x = self.batch_norm(x)
x = self.dropout(x)
return self.linear_final(x)
model = MyNet(embeddings=embeddings,
n_filters=128,
kernel_sizes=[2, 3, 4],
n_classes=len(category2index),
dropout=0.15,
lstm_hidden_size=128)
layer_list = ['embedding_layer.weight']
params = list(map(lambda x: x[1], list(filter(lambda kv: kv[0] in layer_list, model.named_parameters()))))
base_params = list(map(lambda x: x[1], list(filter(lambda kv: kv[0] not in layer_list, model.named_parameters()))))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam([{'params': base_params}, {'params': params, 'lr': 1e-5}], lr=1e-2)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
###Output
_____no_output_____
###Markdown
Training
###Code
model, losses, metrics, valid_preds, valid_targets = train_model(model,
train_loader,
valid_loader,
optimizer,
criterion,
scheduler,
epochs=6)
###Output
Epoch 1 of 6: 100%|██████████| 250000/250000 [17:02<00:00, 244.50it/s, train_loss=1.02]
Epoch 2 of 6: 0%| | 0/250000 [00:00<?, ?it/s]
###Markdown
Results
###Code
print(classification_report(valid_targets, valid_preds))
###Output
precision recall f1-score support
0 0.77 0.66 0.71 2131
1 0.59 0.62 0.61 2978
2 0.84 0.70 0.76 10187
3 0.61 0.82 0.70 12699
4 0.78 0.62 0.69 4998
5 0.84 0.83 0.84 8833
6 0.74 0.65 0.69 4117
7 0.72 0.57 0.64 4057
accuracy 0.72 50000
macro avg 0.74 0.68 0.71 50000
weighted avg 0.74 0.72 0.73 50000
###Markdown
Overall the results are quite decent. Some categories are recognized better than others.
###Code
(train.target.value_counts() / train.shape[0]).sort_index()
category2index
plt.figure(figsize=(20, 15))
cf = confusion_matrix(valid_targets, valid_preds, normalize='pred')
g = sns.heatmap(cf, annot=True)
###Output
_____no_output_____
###Markdown
We can see that the model predicts all categories comparatively well, except for `baby` and `sports and outdoors`, which make up 5 and 25 percent of the dataset respectively. The `sports and outdoors` category is most often mispredicted as `baby`, and `baby` as `sports and outdoors`, i.e. the model confuses these two categories with each other. The `cell phones and accessories` category is recognized better than all the others. I assume this is because descriptions from that category overlap the least with all the other categories.
###Code
train.loc[train.target == 3].tail(3).values
train.loc[train.target == 1].tail(3).values
###Output
_____no_output_____
###Markdown
Presumably this is because both categories contain specifications of clothing and various accessories, so it can be hard for the model to tell one from the other.
###Code
metrics_df = pd.DataFrame.from_dict(metrics)
g = metrics_df.plot(figsize=(14, 12))
###Output
_____no_output_____
###Markdown
We can see that the model starts overfitting almost immediately. Perhaps the reason is that we do not have enough data for such a complex architecture.
###Code
plt.figure(figsize=(14, 12))
plt.plot(losses)
plt.grid()
plt.title('Training process')
plt.xlabel('Iterations')
plt.ylabel('Loss function');
###Output
_____no_output_____
###Markdown
CS559: Homework 2 Due: 10/8/2021 Friday 11:59 PM
- Change the file name as YourName_F21_CS559_HW2
- Submit the assignment in `ipynb` and `html` formats.
- You can export the notebook in HTML.
- Do not compress your files. Please submit files individually.
- All work must be your own and must not be shared with other classmates.
- Collaboration with classmates or getting help from any people is not acceptable.
- For implementation problems, do not copy algorithms from the internet.

Problem 1 - Linear Regression [35 pts]
1-a. Consider a data set in which each data point $t_n$ is associated with a weighting factor $r_n>0$, so that the sum of squares error function becomes $${\large E_D(\vec{w})=\frac{1}{2}\sum_{n=1}^N r_n\big(t_n-\vec{w}^T\vec{x}_n\big)^2}$$ Find an expression for the solution $w^*$ that minimizes this error function. [5 pts]

Solution: We have
$$E_D(\vec{w})=\frac{1}{2}\sum_{n=1}^N r_n\big(t_n-\vec{w}^T\vec{x}_n\big)^2 \qquad (i)$$
Writing this with matrix products for ease of computation:
$$E_D(W)=\frac{1}{2}\big(XW-t\big)^T R\,\big(XW-t\big) \qquad (ii)$$
where
- $X$ is a matrix of size $m\times(N+1)$ with $m$ observations and $N$ features, i.e. $(N+1)$ parameters
- $W$ is a matrix of size $(N+1)\times 1$, $W=[w_0,w_1,\dots,w_N]^T$
- $t$ is the target matrix of size $m\times 1$
- $R$ is a diagonal matrix of size $m\times m$, $R=\mathrm{diag}(r_1,r_2,\dots,r_m)$

Simplifying $(ii)$, and using $(AB)^T=B^TA^T$:
$$E_D(W)=\frac{1}{2}\big(W^TX^T-t^T\big)\big(RXW-Rt\big)=\frac{1}{2}\big(W^TX^TRXW-W^TX^TRt-t^TRXW+t^TRt\big)$$
Since every product in the expression above is a $1\times 1$ scalar, $S^T=S$ for a scalar, and $R^T=R$, we have
$$(W^TX^TRt)^T=t^TR^TXW=t^TRXW$$
therefore
$$E_D(W)=\frac{1}{2}\big(W^TX^TRXW-2\,W^TX^TRt+t^TRt\big)$$
To minimize $E_D(\vec{w})$ we take its gradient and set it to zero:
$$\frac{\partial E_D(W)}{\partial W}=X^TRXW-X^TRt=0 \quad\Longrightarrow\quad W=(X^TRX)^{-1}X^TRt$$
Therefore the minimizing solution is
$$w^*=(X^TRX)^{-1}X^TRt$$

1-b. Implement a function called `my_error(data,r_n)` that estimates the error using the error function $E_D(\vec{w})$ from 1-a. The task is to implement a function that estimates the optimized $\vec{w}$ and demonstrates the behavior of the error at different weighting factor values $r_n$. The function returns a list of $\vec{w}$, $r_n$, and the error. Do not use any other modules except `numpy`. [10 pts]
###Code
import numpy as np
from numpy.random import RandomState
import pandas as pd
# the function inv() below inverts a square (possibly singular) matrix; we define it because np.linalg.inv() raises an error for singular matrices
# it uses least squares to compute a (pseudo-)inverse
def inv(square_matrix):
a = square_matrix.shape[0]
diag = np.eye(a, a)
return np.linalg.lstsq(square_matrix,diag,rcond=None)[0]
### my_error starts here
def my_error(data,r_n):
features = data.iloc[:, :(len(data.columns) - 1)].to_numpy()
n = len(data) #number of observations
t = data.iloc[: , -1].to_numpy() #target variable (nX1) matrix
    X = np.column_stack((np.ones(n),features)) # explanatory variable matrix of size n x (m+1), where m is the number of features; the extra column is for x0 = 1
    XT = X.transpose() # transpose of X, an (m+1) x n matrix
R = np.diag(r_n) #diagonal weighting factor matrix for r_n values, size nXn
XT_R = np.dot(XT,R) #product of XT and R, XT.R
XT_R_X = np.dot(XT_R,X)#product of XT.R and X, XT.R.X
XT_R_X_INV = inv(XT_R_X) #inverse of XT.R.X
XT_R_t = np.dot(XT_R,t) #product of XT.R and t, XT.R.t
#equating the gradient of sum of error function to '0' in question 1(a), we derived the optimised value for w in matrix form as (XT.R.X)^-1.(XT.R.t)
w = np.dot(XT_R_X_INV,XT_R_t) #(XT.R.X)^-1.(XT.R.t)
XW = np.dot(X,w) #product of X and w, X.w
nres = XW - t #negative residual XW - t
nresT = nres.transpose() #transpose of nres
nresT_R = np.dot(nresT,R) #product of (XW-t) and R, (XW-t.R)
#similarly we also converted the error function in matrix form in 1(a) as:
error = 0.5*(np.dot(nresT_R,nres))
return w, error, r_n
#### testing function my_error(); not relevant and can be ignored
#let's test this function on a linear model y = w_0 + w_1.x_1
f = lambda x: 1.*x + 1.
#generate X positions of data
X = np.linspace(0.,20.,21)
#generate Y positions of data, follow function f with random error
ran = RandomState()
Y = f(X) + ran.randn(len(X))
#create dataset with columns X and Y
data = pd.DataFrame(np.column_stack((X,Y)))
r_n = np.ones(len(data)) #initiaize r_n
r_n2 = [2]*len(data) #trying diff r_n
r_n3 = [3]*len(data) #trying diff r_n
e = my_error(data,r_n)
print('optimized W for f(X):',e[0])
print('Error:',e[1])
print('R',e[2])
print('------------------')
e2 = my_error(data,r_n2)
e3 = my_error(data,r_n3)
print('error value for r_n=2',e2[1])
print('error value for r_n=3',e3[1])
print('------------------')
#we can see that error is directly proportional to r_n when r_n is same for each point.
#we can also try different values of r_n for each point
r_n = np.linspace(0.1,2,len(data))
r_n2 = np.linspace(3,5,len(data))
e2 = my_error(data,r_n[::-1])
e3 = my_error(data,r_n2[::-1])
print('error value for r_n series 0.1 to 2',e2[1])
print('error value for r_n series 3 to 5',e3[1])
print('------------------')
#we get the same result, if r_n value increases, error value increases
###Output
optimized W for f(X): [1.42537942 0.96448621]
Error: 8.08156429089746
R [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
------------------
error value for r_n=2 16.16312858179492
error value for r_n=3 24.24469287269238
------------------
error value for r_n series 0.1 to 2 10.356931395659151
error value for r_n series 3 to 5 34.39213116096811
------------------
###Markdown
1-c. Load the dataset, make a model using Linear Regression from sklearn.linear_model to predict the target `y` from a given dataset `HW2_LR.csv`. Students must do EDA and pre-processing the dataset before training the model. All pre-processing and EDA work must be explained and the weights and mean squared error value must be reported. Treat the whole dataset as a train set. [15 pts]
###Code
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
import pandas as pd
from pandas.plotting import scatter_matrix
data = pd.read_csv('./HW2_LR.csv')
### EDA starts here
# data.info()
# # no null values
data.describe()
#ranges differ a lot, need to scale data appropriately
from scipy import stats
# data.hist(bins=50, figsize=(20,15))
corr = data.corr()
corr.style.background_gradient(cmap='coolwarm')
#here we can see that the explanatory variables have a very weak correlation with each other; that is good, as it means there is no multicollinearity between features
#but we can also see that only 'b' has a strong correlation with the target 'y',
#the rest of the variables have a significantly weaker correlation, with 'c' being slightly less weak
#we could think of dropping a, d and k, but we don't know with certainty whether it would improve the model, so we keep them, considering we have few features anyway.
#the most promising feature to predict target is 'b'
scatter_matrix(data, figsize=(12,8))
plt.show()
#the data is not normally distributed as we can see
#we can see above that the data is not scaled properly,let's scale it
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import FeatureUnion
scaler = StandardScaler()
scaled_data = scaler.fit_transform(data)
new_data = pd.DataFrame(scaled_data,columns = ['a','b','c','d','k','y'])
new_data.describe()
stats.probplot(new_data['y'],plot=plt)
plt.show()
#above we see from the scatter and probability plots that the target data is slightly negatively skewed
#we can fix this by reflecting y to a positive skew and transforming using sqrt as the skewness is moderate
max = np.max(new_data['y'])
new_data['refy'] = max +1
new_data['refy'] = new_data['refy'] - new_data['y']
new_data['refy'].hist()
plt.show()
positive_values = [1 if i>=0 else 0 for i in new_data['refy']]
negative_values = len(new_data)-np.sum(positive_values)
print('negative values=',negative_values)
#sqrt transformation on positively skewed data as there are no negative values
new_data['refy'] = np.sqrt(new_data['refy'])
new_data['refy'].hist()
plt.show()
stats.probplot(new_data['refy'],plot=plt)
plt.show()
#now target is roughly normal
new_data = new_data.drop(columns = ['y'])
new_data
### Linear Regression Modeling starts here
# from sklearn.model_selection import train_test_split
# train_set, test_set = train_test_split(new_data, test_size=0.1, random_state=42)
LR = LinearRegression()
features = new_data.drop(columns = ['refy'])
target = new_data[['refy']]
LR.fit(features,target)
#prediction is done by a test set ideally but we used up the entire set in training as asked in the question.
#therefore we use the same dataset for prediction as well
y_pred = LR.predict(features)
# y_pred = LR.predict(test_set)
model_error = LR.score(features,target)
mse = mean_squared_error(target,y_pred)
print('MSE :',mse)
print('Regression Coefficients :',LR.coef_)
#note: we get a very small value for MSE as we have not split the data, so the model is overfitted towards the train set
###Output
MSE : 0.030931581678760416
Regression Coefficients : [[ 9.12833215e-05 -2.12948421e-01 -5.72370133e-02 3.16450655e-03
6.17847116e-05]]
###Markdown
1-d. Use the function `my_error()` from 1-b to estimate $\vec{w}$ and make a visualization to show the behavior of error in terms of $r_n$. Add a point to indicate the final training model error obtained from 1-c. [5 pts]
###Code
### Visualization starts from here.
#for now lets assume that r_n is a constant and has same value for each data point, so that we can track r_n vs error more easily
def r_n_vs_error(data):
num_of_tests = 20
rn_values = []
errorvalues = []
for i in range(num_of_tests):
r_n = [i/10]*len(data)
e = my_error(data,r_n)
errorvalues.append(e[1])
rn_values.append(r_n[0])
return errorvalues,rn_values
er,rn = r_n_vs_error(new_data)
#visualization of r_n vs error
plt.scatter(rn, er)
plt.plot(1,model_error,'ro')
plt.show()
#we can see that r_n and error have a strong positive linear correlation, i.e. as r_n increases the error also increases linearly
#this plot verifies the equation E(W) = 1/2*SUM{rn*(tn - WT.xn)^2}
#red point is MSE from 1(c)
###Output
_____no_output_____
###Markdown
Problem 2 - Linear Classification 1 [65 pts]
In this assignment, you are going to implement three classifiers - **LDA, Perceptron, and Logistic Regression** - to predict the risk of heart attack using the provided dataset, `heart.csv`. Here are the data attributes:
- Age : Age of the patient
- Sex : Sex of the patient
- exang: exercise induced angina (1 = yes; 0 = no)
- ca: number of major vessels (0-3)
- cp : Chest Pain type
  - Value 1: typical angina
  - Value 2: atypical angina
  - Value 3: non-anginal pain
  - Value 4: asymptomatic
- trtbps : resting blood pressure (in mm Hg)
- chol : cholesterol in mg/dl fetched via BMI sensor
- fbs : fasting blood sugar > 120 mg/dl (1 = true; 0 = false)
- restecg : resting electrocardiographic results
  - Value 0: normal
  - Value 1: having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV)
  - Value 2: showing probable or definite left ventricular hypertrophy by Estes' criteria
- thalach : maximum heart rate achieved
- output : 0 = less chance of heart attack, 1 = more chance of heart attack
###Code
from sklearn.metrics import accuracy_score
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
2-a. Implement `my_LDA` that classifies the target. Use `accuracy_score` from `sklearn.metrics` to calculate the accuracy. [10 pts]
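For reference, the two-class Fisher discriminant used by the implementation below projects each sample onto a direction $$\vec{w}\propto S_W^{-1}(\vec{m}_1-\vec{m}_2),\qquad S_W=S_1+S_2=\sum_{n\in C_1}(\vec{x}_n-\vec{m}_1)(\vec{x}_n-\vec{m}_1)^T+\sum_{n\in C_2}(\vec{x}_n-\vec{m}_2)(\vec{x}_n-\vec{m}_2)^T,$$ and classifies a point by comparing $\vec{w}^T\vec{x}$ with the threshold $c=\tfrac{1}{2}(\mu_1+\mu_2)$, where $\mu_k$ is the mean projection of class $k$.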
###Code
### my_LDA starts here
#the target class is binary, therefore implementing fisher Discriminant algorithm
def my_LDA(X_train,X_test):
#Fisher Discriminant Algorithm for binary classes
#separate the data by class
grouped = X_train.groupby(['output'])
class1_data = grouped.get_group(0)
class2_data = grouped.get_group(1)
class1_data=class1_data.drop(['output'],axis=1)
class2_data=class2_data.drop(['output'],axis=1)
#calculate the means for each class
mean1 = class1_data.mean(axis = 0)
mean2 = class2_data.mean(axis = 0)
#vectorize the means
mean1, mean2 = mean1.T, mean2.T
#calculate (x - m_1) for scatter calc
m,n = class1_data.shape
diff1 = class1_data - np.array(list(mean1)*m).reshape(m,n)
#calculate (x - m_2) for scatter calc
m,n = class2_data.shape
diff2 = class2_data - np.array(list(mean2)*m).reshape(m,n)
#creating a matrix to diff1 and diff2
diff = np.concatenate([diff1, diff2])
m, n = diff.shape
withinClass = np.zeros((n,n))
diff = np.matrix(diff)
#S_1 = s1^2 = Summation(x - m_1)(x - m_1)^T
#S_2 = s1^2 = Summation(x - m_2)(x - m_2)^T
#S_w = S_1 + S_2 = s1^2 + s2^2
for i in range(m):
#calculate within class scatter matrix S_W using the above formula for fisher discriminant
withinClass += np.dot(diff[i,:].T, diff[i,:])
#find optimum direction vector for separation argmax(W) ~ S_W^(-1).(m_1-m_2)
opt_dir_vector = np.dot(inv(withinClass), (mean1 - mean2))
#calculate threshold value for classification C = 1/2(mu1+mu2)
mu1=np.dot(opt_dir_vector,class1_data.T).mean(axis = 0)
mu2=np.dot(opt_dir_vector,class2_data.T).mean(axis = 0)
threshold=(mu1+mu2)/2
#fisher criterion for classification is that if W^T.X > threshold assign to Class1 else Class2
target_pred=np.dot(X_test,np.matrix(opt_dir_vector).T)
target_pred = np.where(target_pred > threshold, 0, 1)
return target_pred
###Output
_____no_output_____
###Markdown
2-b. Implement `my_Perceptron` that classifies the target. Use `accuracy_score` from `sklearn.metrics` to calculate the accuracy. [10 pts]
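The implementation below uses the classic perceptron rule: for a training example $(\vec{x}_n,t_n)$ with prediction $\hat{y}_n=\mathbb{1}[\vec{w}^T\vec{x}_n-\theta\ge 0]$, $$\vec{w}\leftarrow\vec{w}+\alpha\,(t_n-\hat{y}_n)\,\vec{x}_n,\qquad \theta\leftarrow\theta-\alpha\,(t_n-\hat{y}_n),$$ so the parameters change only when the example is misclassified.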
###Code
import random
### my_Perceptron starts here
def my_Perceptron(features_train,target_train,features_test):
target = target_train.to_numpy() # vector having observed values of target
    n_samples, n_features = np.shape(features_train) # number of observations and number of features
    weights = np.array([random.random()*0.1 for _ in range(n_features)]) # vector of initial random weights, one per feature
    theta = 0 # threshold of output function
    alpha = 0.01 # rate at which model learns, a small value is preferable
    total_iterations = 1000
    # perceptron algorithm
    for itr in range(total_iterations):
        for sample_index in range(n_samples):
            f = np.dot(features_train[sample_index], weights) - theta # transfer function f(X): WT.X - threshold
            target_pred = np.where(f >= 0, 1, 0) # predict class 1 if transfer function >= 0, else predict class 0
            # perceptron update rule: the update is 0 if the correct value is predicted, otherwise proportional to the learning rate
            update = alpha * (target[sample_index] - target_pred)
            # update the weights and the threshold; the goal is to run all iterations and fix the final weights and threshold
            weights += update * features_train[sample_index]
            theta -= update
#model training finished
#prediction
f = np.dot(features_test, weights) - theta
target_pred = np.where(f >= 0, 1, 0)
return target_pred
###Output
_____no_output_____
###Markdown
2-c. Implement `my_LogisticRegression` that classifies the target. Use `accuracy_score` from `sklearn.metrics` to calculate the accuracy. [10 pts]
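The implementation below fits $P(t=1\mid\vec{x})=\sigma(\vec{w}^T\vec{x}+b)$ with $\sigma(a)=1/(1+e^{-a})$ by batch gradient descent, using the cross-entropy gradients $$\nabla_{\vec{w}}=\frac{1}{n}X^T\big(\sigma(X\vec{w}+b)-\vec{t}\big),\qquad \frac{\partial L}{\partial b}=\frac{1}{n}\sum_i\big(\sigma(\vec{w}^T\vec{x}_i+b)-t_i\big).$$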
###Code
### my_LogisticRegression starts here
#the target class is binary, therefore implementing as such
def my_LogisticRegression(features_train,target_train,features_test):
n_samples,n_features = features_train.shape # number of observations,features
weights = np.zeros(n_features) # vector having weights for each feature
beta = 0 # zero^th parameter/intercept
alpha = 0.01 # rate at which model learns, a small value is preferable
total_iterations = 1000
# gradient descent algorithm
for itr in range(total_iterations):
# approximate target with linear combination of weights and feature set, add a bias b. f(X)=X.W + b
f = np.dot(features_train, weights) + beta
# predict target values by using logistic sigmoid function f(a)=1/(1+e^-a) where a = X.W + b
target_pred = 1 / (1 + np.exp(-f))
# formulate gradients for weights 1/n*summation[XT(y-t)] and b 1/n*summation[(y-t)]
dw = (1 / n_samples) * np.dot(np.transpose(features_train), (target_pred - target_train))
db = (1 / n_samples) * np.sum(target_pred - target_train)
# update weights and b
weights -= alpha * dw
beta -= alpha * db
#model training finished
#prediction
f = np.dot(features_test, weights) + beta
target_pred = 1 / (1 + np.exp(-f))
target_pred_binary = [1 if i > 0.5 else 0 for i in target_pred]
return np.array(target_pred_binary)
###Output
_____no_output_____
###Markdown
2-d. The EDA and pre-processing are not limitted however, you must1. check if the data is **balanced** or not. 2. check if features are **skewed** or not.3. check outliers. For any finds from 1 to 3, please handle the data carefully. Exaplin your workflow and perform accordingly. If any interesting facts are learned, please state them. [15 pts]
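Before the full EDA below, two quick checks that directly address points 1 (class balance) and 2 (feature skewness); this is only a sketch and assumes `heart.csv` sits in the working directory:

```python
import pandas as pd

df = pd.read_csv('heart.csv')
# 1) class balance of the target
print(df['output'].value_counts(normalize=True))
# 2) skewness of the numerical features (values far from 0 suggest skewed distributions)
print(df[['age', 'trtbps', 'chol', 'thalachh', 'oldpeak']].skew())
```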
###Code
### EDA starts here
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import seaborn as sns
dataset = pd.read_csv('heart.csv')
print(dataset.duplicated().sum())
#one duplicate row, need to drop it
print(dataset.isnull().sum())
#no null/missing values
dataset.drop_duplicates(inplace=True)
# dataset.info()
dataset.describe()
#we can see that the ranges are diverse; we can scale them after encoding the categorical columns
#The average blood pressure of an individual is 131.6, whereas the maximum value caps at 200.
#The average heart rate of the group is 149.5, whereas overall it ranges from 133 to 202
#Age of the group varies from 29 to 77 and the mean age is 54.4
plt.figure(figsize=(12,6))
ax=plt.axes()
ax.set_facecolor("green")
p = sns.countplot(data=dataset, x="sex", palette='pastel')
plt.show()
#the data set has a lot of bias towards one class of sex as there are more than double observations for class 1
corr = dataset.corr()
corr.style.background_gradient(cmap='coolwarm')
#we can see that there is low multi colinearity between most of the variables except slope and thalachh
#thalach and cp would be the most important factors for classification as they have high corr with target
plt.figure(figsize = (10,10))
sns.violinplot(x='caa',y='age',data=dataset)
sns.swarmplot(x=dataset['caa'],y=dataset['age'],hue=dataset['output'], palette='pastel')
plt.show()
# This swarmplot gives us a lot of information.
# According to the figure, people belonging to caa category '0', irrespective of their age, are highly prone to getting a heart attack.
# While there are very few people belonging to caa category '4' , but it seems that around 75% of those get heart attacks.
# People belonging to category '1' , '2' and '3' are more or less at similar risk.
category_columns = ['sex', 'cp', 'fbs', 'restecg', 'exng', 'slp', 'caa', 'thall']
numerical_columns = ['age','trtbps','chol','thalachh','oldpeak']
### Pre-processing starts here
#get encoded valuse for all categorical columns
cp = pd.get_dummies(dataset['cp'],prefix='cp')
thal = pd.get_dummies(dataset['thall'],prefix='thall')
slope = pd.get_dummies(dataset['slp'],prefix='slp')
sex = pd.get_dummies(dataset['sex'],prefix='sex')
fbs = pd.get_dummies(dataset['fbs'],prefix='fbs')
restecg = pd.get_dummies(dataset['restecg'],prefix='restecg')
exng = pd.get_dummies(dataset['exng'],prefix='exng')
caa = pd.get_dummies(dataset['caa'],prefix='caa')
#add the encoded columns to the dataset
lst = [dataset,cp,thal,slope,sex,fbs,restecg,exng,caa]
dataset = pd.concat(lst,axis=1)
dataset.head()
dataset.drop(columns=['cp','thall','slp','sex','fbs','restecg','exng','caa'],axis=1,inplace=True)
######################
X_features = dataset.drop(['output'],axis=1)
y_target = dataset['output']
#scale columns using standard scaler
scalerX = StandardScaler()
X_features = scalerX.fit_transform(X_features)
#split dataset into test and train with a 20:80 ratio
X_train , X_test , y_train , y_test = train_test_split(X_features,y_target,test_size=0.2,random_state=42)
###Output
_____no_output_____
###Markdown
2-e. Use ML LDA, Perceptron, and LogisticRegression from sklearn to classify the trained data and report the accuracy. [10 pts]
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import Perceptron
### LDA starts here
clf1= LinearDiscriminantAnalysis()
clf1.fit(X_train,y_train)
y_pred = clf1.predict(X_test)
clf1_accuracy = accuracy_score(y_test,y_pred)
### Perceptron starts here
clf2 = Perceptron()
clf2.fit(X_train,y_train)
y_pred = clf2.predict(X_test)
clf2_accuracy = accuracy_score(y_test,y_pred)
### Logistic Regression starts here
clf3 = LogisticRegression()
clf3.fit(X_train,y_train)
y_pred = clf3.predict(X_test)
clf3_accuracy = accuracy_score(y_test,y_pred)
###Output
_____no_output_____
###Markdown
2-f. Use the implemented classifiers from 2-a to 2-c and classify the output. [10 pts]
###Code
clf4= my_LDA(dataset,X_test)
clf4_accuracy = accuracy_score(y_test,clf4)
clf5= my_Perceptron(X_train,y_train,X_test)
clf5_accuracy = accuracy_score(y_test,clf5)
clf6= my_LogisticRegression(X_train,y_train,X_test)
clf6_accuracy = accuracy_score(y_test,clf6)
result_table = pd.DataFrame([['clf1',clf1_accuracy],['clf2',clf2_accuracy],['clf3',clf3_accuracy],['clf4',clf4_accuracy]
,['clf5',clf5_accuracy],['clf6',clf6_accuracy]]
,columns = ['Model','Accuracy'])
result_table
###Output
_____no_output_____
###Markdown
Question 4
###Code
df = pd.read_csv('data-hw2.csv')
df
plt.figure(figsize=(8,8))
plt.scatter(df['LUNG'], df['CIG'])
plt.xlabel("LUNG DEATHS")
plt.ylabel("CIG SALES")
plt.title("Scatter plot of Lung Cancer Deaths vs. Cigarette Sales")
for i in range(len(df)):
plt.annotate(df.iloc[i]['STATE'], xy=(df.iloc[i]['LUNG'], df.iloc[i]['CIG']))
df.corr()
df_clean = df
df_clean = df_clean.drop([6, 24], axis=0)
df_clean
plt.figure(figsize=(8,8))
plt.scatter(df_clean['LUNG'], df_clean['CIG'])
plt.xlabel("LUNG DEATHS")
plt.ylabel("CIG SALES")
plt.title("Scatter plot of Lung Cancer Deaths vs. Cigarette Sales")
for i in range(len(df_clean)):
plt.annotate(df_clean.iloc[i]['STATE'], xy=(df_clean.iloc[i]['LUNG'], df_clean.iloc[i]['CIG']))
df_clean.corr()
###Output
_____no_output_____
###Markdown
Question 5
###Code
df_ko = pd.read_csv('KO.csv')
df_pep = pd.read_csv('PEP.csv')
del df_ko['Open'], df_ko['High'], df_ko['Low'], df_ko['Close'], df_ko['Volume']
del df_pep['Open'], df_pep['High'], df_pep['Low'], df_pep['Close'], df_pep['Volume']
df_comb = pd.DataFrame(columns=["Date", "KO Adj Close", "PEP Adj Close"])
df_comb["Date"] = df_ko["Date"]
df_comb["KO Adj Close"] = df_ko["Adj Close"]
df_comb["PEP Adj Close"] = df_pep["Adj Close"]
df_comb.corr()
x_vals = np.array([np.min(df_comb["KO Adj Close"]), np.max(df_comb["KO Adj Close"])])  # endpoints taken from the KO (x-axis) series
x_vals_standardized = (x_vals-df_comb["KO Adj Close"].mean())/df_comb["KO Adj Close"].std(ddof=0)
y_predictions_standardized = df_comb.corr()["KO Adj Close"]["PEP Adj Close"]*x_vals_standardized
y_predictions = y_predictions_standardized*df_comb["PEP Adj Close"].std(ddof=0)+df_comb["PEP Adj Close"].mean()
plt.figure(figsize=(8,8))
plt.scatter(df_comb['KO Adj Close'], df_comb['PEP Adj Close'])
plt.xlabel("KO Daily Adj Close Price")
plt.ylabel("PEP Daily Adj Close Price")
plt.title("Scatter plot of KO Daily Adj Close Price vs. PEP Daily Adj Close Price with prediction line")
plt.plot(x_vals, y_predictions, 'r', linewidth=2)
plt.xlim(35, 60)
plt.ylim(100, 145)
###Output
_____no_output_____
###Markdown
Object recognition and computer vision 2019/2020Jean Ponce, Ivan Laptev, Cordelia Schmid and Josef Sivic Assignment 2: Neural networksAdapted from practicals from Nicolas le Roux, Andrea Vedaldi and Andrew Zisserman and Rob Fergus by Gul Varol and Ignacio RoccoFigure 1 ```` This is formatted as code````**STUDENT**: DHAOU Amin**EMAIL**: [email protected] GuidelinesThe purpose of this assignment is that you get hands-on experience with the topics covered in class, which will help you understand these topics better. Therefore, ** it is imperative that you do this assignment yourself. No code sharing will be tolerated. **Once you have completed the assignment, you will submit the `ipynb` file containing **both** code and results. For this, make sure to **run your notebook completely before submitting**.The `ipynb` must be named using the following format: **A2_LASTNAME_Firstname.ipynb**, and submitted in the **class Moodle page**. Goal The goal of this assignment is to get basic knowledge and hands-on experience with training and using neural networks. In Part 1 of the assignment you will implement and experiment with the training and testing of a simple two layer fully-connected neural network, similar to the one depicted in Figure 1 above. In Part 2 you will learn about convolutional neural networks, their motivation, building blocks, and how they are trained. Finally, in part 3 you will train a CNN for classification using the CIFAR-10 dataset. Part 1 - Training a fully connected neural network Getting started You will be working with a two layer neural network of the following form \begin{equation}H=\text{ReLU}(W_i X+B_i)\\\bar{Y}=W_oH+B_o\tag{1}\end{equation}where $X$ is the input, $\bar{Y}$ is the output, $H$ is the hidden layer, and $W_i$, $W_o$, $B_i$ and $B_o$ are the network parameters that need to be trained. Here the subscripts $i$ and $o$ stand for the *input* and *output* layer, respectively. This network was also discussed in the class and is illustrated in the above figure where the input units are shown in green, the hidden units in blue and the output in yellow. This network is implemented in the function `nnet_forward_logloss`.You will train the parameters of the network from labelled training data $\{X^n,Y^n\}$ where $X^n$ are points in $\mathbb{R}^2$ and $Y^n\in\{-1,1\}$ are labels for each point. You will use the stochastic gradient descent algorithm discussed in the class to minimize the loss of the network on the training data given by \begin{equation}L=\sum_n s(Y^n,\bar{Y}(X^n))\tag{2}\end{equation}where $Y^n$ is the target label for the n-th example and $\bar{Y}(X^n)$ is the network’s output for the n-th example $X^n$. The skeleton of the training procedure is provided in the `train_loop` function. We will use the logistic loss, which has the following form:\begin{equation}s(Y, \bar{Y}(X))=\log(1+\exp(-Y. \bar{Y}(X))\tag{3}\end{equation}where $Y$ is the target label and $\bar{Y}(X)$ is the output of the network for input example $X$. With the logistic loss, the output of the network can be interpreted as a probability $P(\text{class}=1|X) =\sigma(X)$ , where $\sigma(X) =1/(1+\exp(-X))$ is the sigmoid function. Note also that $P(\text{class}=-1|X)=1-P(\text{class}=1|X)$.
###Code
from IPython import display
import matplotlib.pyplot as plt
import numpy as np
import scipy.io as sio
def decision_boundary_nnet(X, Y, Wi, bi, Wo, bo):
x_min, x_max = -2, 4
y_min, y_max = -5, 3
xx, yy = np.meshgrid(np.arange(x_min, x_max, .05),
np.arange(y_min, y_max, .05))
XX = np.vstack((xx.ravel(), yy.ravel())).T
input_hidden = np.dot(XX, Wi) + bi
hidden = np.maximum(input_hidden, 0)
Z = np.dot(hidden, Wo) + bo
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z > 0, cmap=plt.cm.Paired)
plt.axis('off')
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap='winter')
plt.axis([-2, 4, -5, 3])
plt.draw()
def sigm(x):
# Returns the sigmoid of x.
small_x = np.where(x < -20) # Avoid overflows.
sigm_x = 1/(1 + np.exp(-x))
if type(sigm_x) is np.ndarray:
sigm_x[small_x] = 0.0
return sigm_x
def nnet_forward_logloss(X, Y, Wi, bi, Wo, bo):
'''
Compute the output Po, Yo and the loss of the network for the input X
This is a 2 layer (1 hidden layer network)
Input:
X ... (in R^2) set of input points, one per column
Y ... {-1,1} the target values for the set of points X
Wi, bi, Wo, bo ... parameters of the network
Output:
Po ... probabilisitc output of the network P(class=1 | x)
Po is in <0 1>.
Note: P(class=-1 | x ) = 1 - Po
Yo ... output of the network Yo is in <-inf +inf>
loss ... logistic loss of the network on examples X with ground target
values Y in {-1,1}
'''
# Hidden layer
hidden = np.maximum(np.dot(X, Wi) + bi, 0)
# Output of the network
Yo = np.dot(hidden, Wo) + bo
# Probabilistic output
Po = sigm(Yo)
# Logistic loss
loss = np.log(1 + np.exp( -Y * Yo))
return Po, Yo, loss
# Load the training data
!wget -q http://www.di.ens.fr/willow/teaching/recvis18/assignment2/double_moon_train1000.mat
train_data = sio.loadmat('./double_moon_train1000.mat', squeeze_me=True)
Xtr = train_data['X']
Ytr = train_data['Y']
# Load the validation data
!wget -q http://www.di.ens.fr/willow/teaching/recvis18/assignment2/double_moon_val1000.mat
val_data = sio.loadmat('./double_moon_val1000.mat', squeeze_me=True)
Xval = val_data['X']
Yval = val_data['Y']
###Output
_____no_output_____
###Markdown
Computing gradients of the loss with respect to network parameters :: TASK 1.1 ::Derive the form of the gradient of the logistic loss (3) with respect to the parameters of the network $W_i$, $W_o$, $B_i$ and $B_o$. *Hint:* Use the chain rule as discussed in the class. Let $X\in\mathbb{R}^d$, $Y\in\{-1,1\}$, $W_i\in\mathbb{R}^{d\times h}$, $B_i\in\mathbb{R}^h$, $W_o\in\mathbb{R}^h$ and $B_o\in\mathbb{R}$Let :\begin{equation}Z=W_iX+B_i\in\mathbb{R}^h\end{equation}\begin{equation}H=\text{ReLU}(W_i X+B_i)=\text{ReLU}(Z)\in\mathbb{R}^h\end{equation}\begin{equation}\overline{Y}=W_o^TH+B_o=W_o^T\text{ReLU}(Z)+B_o\in\mathbb{R}\end{equation}First, let's do the computations of $\frac{\partial s}{\partial\overline{Y}}$: \begin{equation}\frac{\partial s}{\partial\overline{Y}}=\frac{-Y\exp(-Y.\overline{Y}(X))}{1+\exp(-Y.\overline{Y}(X))}=-Y\sigma(-Y.\overline{Y})\in\mathbb{R}\end{equation}Then, using the chain rule :\begin{equation}\frac{\partial s}{\partial W_o}=\frac{\partial s}{\partial\overline{Y}}\frac{\partial\overline{Y}}{\partial W_o}=-Y\sigma(-Y.\overline{Y})H\in\mathbb{R^h}\end{equation}\begin{equation}\frac{\partial s}{\partial B_o}=\frac{\partial s}{\partial\overline{Y}}\frac{\partial\overline{Y}}{\partial B_o}=-Y\sigma(-Y.\overline{Y})\in\mathbb{R}\end{equation}\begin{equation}\frac{\partial s}{\partial W_i}=\frac{\partial s}{\partial\overline{Y}}\frac{\partial\overline{Y}}{\partial W_i}=-Y\sigma(-Y.\overline{Y})\frac{\partial\overline{Y}}{\partial Z}X^T\end{equation}But : $\overline{Y}=W_o^T\text{ReLU}(Z)+B_o$, then :\begin{equation}\frac{\partial\overline{Y}}{\partial Z}=diag(\mathbb{1}_{Z\succ0})W_o\in\mathbb{R}^h\end{equation}With $diag(\mathbb{1}_{Z\succ0})\in\mathbb{R}^{h\times h}$ the matrix with $\mathbb{1}_{Z\succ0}\in\mathbb{R}^h$ in the diagonal.Thus :\begin{equation}\frac{\partial s}{\partial W_i}=-Y\sigma(-Y.\overline{Y})diag(\mathbb{1}_{Z\succ0})W_oX^T\in\mathbb{R}^{h\times d}\end{equation}Similarly :\begin{equation}\frac{\partial s}{\partial B_i}=-Y\sigma(-Y.\overline{Y})diag(\mathbb{1}_{Z\succ0})W_o\in\mathbb{R}^{h}\end{equation}Other ressources:http://cs231n.stanford.edu/slides/2018/cs231n_2018_ds02.pdf :: TASK 1.2 ::Following your derivation, implement the gradient computation in the function `gradient_nn`. See the code for the description of the required inputs / outputs of this function.
###Code
def gradient_nn(X, Y, Wi, bi, Wo, bo):
'''
Compute gradient of the logistic loss of the neural network on example X with
target label Y, with respect to the parameters Wi,bi,Wo,bo.
Input:
X ... 2d vector of the input example
Y ... the target label in {-1,1}
Wi,bi,Wo,bo ... parameters of the network
Wi ... [dxh]
bi ... [h]
Wo ... [h]
bo ... 1
where h... is the number of hidden units
d... is the number of input dimensions (d=2)
Output:
grad_s_Wi [dxh] ... gradient of loss s(Y,Y(X)) w.r.t Wi
grad_s_bi [h] ... gradient of loss s(Y,Y(X)) w.r.t. bi
grad_s_Wo [h] ... gradient of loss s(Y,Y(X)) w.r.t. Wo
grad_s_bo 1 ... gradient of loss s(Y,Y(X)) w.r.t. bo
'''
Z = np.dot(X,Wi) + bi
H = np.maximum(Z, 0)
Yo = np.dot(H,Wo) + bo
Xt = np.reshape(X,(len(X),1))
Wo2 = np.reshape(Wo,(1,len(Wo)))
ind_Z = np.maximum(np.sign(Z),0)
grad_s_Wi = -Y*sigm(-Y*Yo)*Xt@[email protected](ind_Z)
grad_s_bi = -Y*sigm(-Y*Yo)*np.diag(ind_Z)@Wo
grad_s_Wo = -Y*sigm(-Y*Yo)*H
grad_s_bo = -Y*sigm(-Y*Yo)
return grad_s_Wi, grad_s_bi, grad_s_Wo, grad_s_bo
###Output
_____no_output_____
###Markdown
Numerically verify the gradientsHere you will numerically verify that your analytically computed gradients in function `gradient_nn` are correct. :: TASK 1.3 ::Write down the general formula for numerically computing the approximate derivative of the loss $s(\theta)$, with respect to the parameter $\theta_i$ using finite differencing. *Hint: use the first order Taylor expansion of loss $s(\theta+\Delta \theta)$ around point $\theta$. * Taking the first order Taylor expansions of $s(\theta+\Delta \theta\, e_i)$ and $s(\theta - \Delta \theta\, e_i)$ around $\theta$, where $e_i$ is the unit vector along parameter $\theta_i$, and subtracting them gives the central-difference approximation $$\frac{\partial s}{\partial \theta_i} \approx \frac{ s(\theta+\Delta \theta\, e_i) - s(\theta - \Delta \theta\, e_i)}{ 2\Delta \theta}$$ The `gradient_nn_numerical` function below uses this formula to numerically compute the derivatives of the loss function with respect to all the parameters of the network $W_i$, $W_o$, $B_i$ and $B_o$:
###Code
def gradient_nn_numerical(X, Y, Wi, bi, Wo, bo):
'''
Compute numerical gradient of the logistic loss of the neural network on
example X with target label Y, with respect to the parameters Wi,bi,Wo,bo.
Input:
X ... 2d vector of the input example
Y ... the target label in {-1,1}
Wi, bi, Wo, bo ... parameters of the network
Wi ... [dxh]
bi ... [h]
Wo ... [h]
bo ... 1
where h... is the number of hidden units
d... is the number of input dimensions (d=2)
Output:
grad_s_Wi_numerical [dxh] ... gradient of loss s(Y,Y(X)) w.r.t Wi
grad_s_bi_numerical [h] ... gradient of loss s(Y,Y(X)) w.r.t. bi
grad_s_Wo_numerical [h] ... gradient of loss s(Y,Y(X)) w.r.t. Wo
grad_s_bo_numerical 1 ... gradient of loss s(Y,Y(X)) w.r.t. bo
'''
eps = 1e-8
grad_s_Wi_numerical = np.zeros(Wi.shape)
grad_s_bi_numerical = np.zeros(bi.shape)
grad_s_Wo_numerical = np.zeros(Wo.shape)
for i in range(Wi.shape[0]):
for j in range(Wi.shape[1]):
dummy, dummy, pos_loss = nnet_forward_logloss(X, Y, sumelement_matrix(Wi, i, j, +eps), bi, Wo, bo)
dummy, dummy, neg_loss = nnet_forward_logloss(X, Y, sumelement_matrix(Wi, i, j, -eps), bi, Wo, bo)
grad_s_Wi_numerical[i, j] = (pos_loss - neg_loss)/(2*eps)
for i in range(bi.shape[0]):
dummy, dummy, pos_loss = nnet_forward_logloss(X, Y, Wi, sumelement_vector(bi, i, +eps), Wo, bo)
dummy, dummy, neg_loss = nnet_forward_logloss(X, Y, Wi, sumelement_vector(bi, i, -eps), Wo, bo)
grad_s_bi_numerical[i] = (pos_loss - neg_loss)/(2*eps)
for i in range(Wo.shape[0]):
dummy, dummy, pos_loss = nnet_forward_logloss(X, Y, Wi, bi, sumelement_vector(Wo, i, +eps), bo)
dummy, dummy, neg_loss = nnet_forward_logloss(X, Y, Wi, bi, sumelement_vector(Wo, i, -eps), bo)
grad_s_Wo_numerical[i] = (pos_loss - neg_loss)/(2*eps)
dummy, dummy, pos_loss = nnet_forward_logloss(X, Y, Wi, bi, Wo, bo+eps)
dummy, dummy, neg_loss = nnet_forward_logloss(X, Y, Wi, bi, Wo, bo-eps)
grad_s_bo_numerical = (pos_loss - neg_loss)/(2*eps)
return grad_s_Wi_numerical, grad_s_bi_numerical, grad_s_Wo_numerical, grad_s_bo_numerical
def sumelement_matrix(X, i, j, element):
Y = np.copy(X)
Y[i, j] = X[i, j] + element
return Y
def sumelement_vector(X, i, element):
Y = np.copy(X)
Y[i] = X[i] + element
return Y
###Output
_____no_output_____
###Markdown
:: TASK 1.4 ::Run the following code snippet and understand what it is doing. `gradcheck` function checks that the analytically computed derivative using function `gradient_nn` (e.g. `grad_s_bo`) at the same training example $\{X,Y\}$ is the same (up to small errors) as your numerically computed value of the derivative using function `gradient_nn_numerical` (e.g. `grad_s_bo_numerical`). Make sure the output is `SUCCESS` to move on to the next task.
###Code
def gradcheck():
'''
Check that the numerical and analytical gradients are the same up to eps
'''
h = 3 # number of hidden units
eps = 1e-6
for i in range(1000):
# Generate random input/output/weight/bias
X = np.random.randn(2)
Y = 2* np.random.randint(2) - 1 # {-1, 1}
Wi = np.random.randn(X.shape[0], h)
bi = np.random.randn(h)
Wo = np.random.randn(h)
bo = np.random.randn(1)
# Compute analytical gradients
grad_s_Wi, grad_s_bi, grad_s_Wo, grad_s_bo = gradient_nn(X, Y, Wi, bi, Wo, bo)
# Compute numerical gradients
grad_s_Wi_numerical, grad_s_bi_numerical, grad_s_Wo_numerical, grad_s_bo_numerical = gradient_nn_numerical(X, Y, Wi, bi, Wo, bo)
# Compute the difference between analytical and numerical gradients
delta_Wi = np.mean(np.abs(grad_s_Wi - grad_s_Wi_numerical))
delta_bi = np.mean(np.abs(grad_s_bi - grad_s_bi_numerical))
delta_Wo = np.mean(np.abs(grad_s_Wo - grad_s_Wo_numerical))
delta_bo = np.abs(grad_s_bo - grad_s_bo_numerical)
# Difference larger than a threshold
if ( delta_Wi > eps or delta_bi > eps or delta_Wo > eps or delta_bo > eps):
return False
return True
# Check gradients
if gradcheck():
print('SUCCESS: Passed gradcheck.')
else:
print('FAILURE: Fix gradient_nn and/or gradient_nn_aprox implementation.')
###Output
SUCCESS: Passed gradcheck.
###Markdown
Training the network using backpropagation and experimenting with different parameters Use the provided code below that calls the `train_loop` function. Set the number of hidden units to 7 by setting $h=7$ in the code and set the learning rate to 0.02 by setting `lrate = 0.02`. Run the training code. Visualize the trained hyperplane using the provided function `plot_decision_boundary(Xtr,Ytr,Wi,bi,Wo,bo)`. Show also the evolution of the training and validation errors. Include the decision hyper-plane visualization and the training and validation error plots.
###Code
def train_loop(Xtr, Ytr, Xval, Yval, h = 7, lrate = 0.02, vis='all', nEpochs=100):
'''
Check that the numerical and analytical gradients are the same up to eps
Input:
Xtr ... Nx2 matrix of training samples
Ytr ... N dimensional vector of training labels
Xval ... Nx2 matrix of validation samples
Yval ... N dimensional vector validation labels
h ... number of hidden units
lrate ... learning rate
vis ... visulaization option ('all' | 'last' | 'never')
nEpochs ... number of training epochs
Output:
tr_error ... nEpochs*nSamples dimensional vector of training error
val_error ... nEpochs*nSamples dimensional vector of validation error
'''
nSamples = Xtr.shape[0]
tr_error = np.zeros(nEpochs*nSamples)
val_error = np.zeros(nEpochs*nSamples)
# Randomly initialize parameters of the model
Wi = np.random.randn(Xtr.shape[1], h)
Wo = np.zeros(h)
bi = np.zeros(h)
bo = 0.
if(vis == 'all' or vis == 'last'):
plt.figure()
for i in range(nEpochs*nSamples):
# Draw an example at random
n = np.random.randint(nSamples)
X = Xtr[n]
Y = Ytr[n]
# Compute gradient
grad_s_Wi, grad_s_bi, grad_s_Wo, grad_s_bo = gradient_nn(X, Y, Wi, bi, Wo, bo)
# Gradient update
Wi -= lrate*grad_s_Wi
Wo -= lrate*grad_s_Wo
bi -= lrate*grad_s_bi
bo -= lrate*grad_s_bo
# Compute training error
Po, Yo, loss = nnet_forward_logloss(Xtr, Ytr, Wi, bi, Wo, bo)
Yo_class = np.sign(Yo)
tr_error[i] = 100*np.mean(Yo_class != Ytr)
# Compute validation error
Pov, Yov, lossv = nnet_forward_logloss(Xval, Yval, Wi, bi, Wo, bo)
Yov_class = np.sign(Yov)
val_error[i] = 100*np.mean(Yov_class != Yval)
# Plot (at every epoch if visualization is 'all', only at the end if 'last')
if(vis == 'all' and i%nSamples == 0) or (vis == 'last' and i == nEpochs*nSamples - 1):
# Draw the decision boundary.
plt.clf()
plt.title('p = %d, Iteration = %.d, Error = %.3f' % (h, i/nSamples, tr_error[i]))
decision_boundary_nnet(Xtr, Ytr, Wi, bi, Wo, bo)
display.display(plt.gcf(), display_id=True)
display.clear_output(wait=True)
if(vis == 'all'):
# Plot the evolution of the training and test errors
plt.figure()
plt.plot(tr_error, label='training')
plt.plot(val_error, label='validation')
plt.legend()
plt.title('Training/validation errors: %.2f%% / %.2f%%' % (tr_error[-1], val_error[-1]))
return tr_error, val_error
# Run training
h = 7
lrate = .02
tr_error, val_error = train_loop(Xtr, Ytr, Xval, Yval, h, lrate)
###Output
_____no_output_____
###Markdown
:: TASK 1.6 ::**Random initializations.** Repeat this procedure 5 times from 5 different random initializations. Record for each run the final training and validation errors. Did the network always converge to zero training error? Summarize your final training and validation errors into a table for the 5 runs. You do not need to include the decision hyper-plane visualizations. Note: to speed-up the training you can plot the visualization figures less often (or never) and hence speed-up the training.
###Code
list_h = [10, 15, 20, 30, 50]
ltr_error, lval_error = [] , []
for k in range(5):
tr_error, val_error = train_loop(Xtr, Ytr, Xval, Yval, list_h[k], vis = 'never')
ltr_error.append(tr_error[-1])
lval_error.append(val_error[-1])
import pandas
pandas.DataFrame(data = {'Training Error': ltr_error, 'Validation Error': lval_error}, index = np.arange(1, 6))
###Output
_____no_output_____
###Markdown
In these cases, the network always converged to zero training error; here is the table: [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]] :: SAMPLE TASK ::For this task, the answer is given. Run the given code and answer Task 1.8 similarly.**Learning rate:**Keep $h=7$ and change the learning rate to values $\text{lrate} = \{2, 0.2, 0.02, 0.002\}$. For each of these values run the training procedure 5 times and observe the training behaviour. You do not need to include the decision hyper-plane visualizations in your answer.**- Make one figure** where *final* error for (i) training and (ii) validation sets are superimposed. $x$-axis should be the different values of the learning rate, $y$-axis the error *mean* across 5 runs. Show the standard deviation with error bars and make sure to label each plot with a legend.**- Make another figure** where *training error evolution* for each learning rate is superimposed. $x$-axis should be the iteration number, $y$-axis the training error *mean* across 5 runs for a given learning rate. Show the standard deviation with error bars and make sure to label each curve with a legend.
###Code
nEpochs = 40
trials = 5
lrates = [2, 0.2, 0.02, 0.002]
plot_data_lr = np.zeros((2, trials, len(lrates), nEpochs*1000))
h = 7
for j, lrate in enumerate(lrates):
print('LR = %f' % lrate)
for i in range(trials):
tr_error, val_error = train_loop(Xtr, Ytr, Xval, Yval, h, lrate, vis='never', nEpochs=nEpochs)
plot_data_lr[0, i, j, :] = tr_error
plot_data_lr[1, i, j, :] = val_error
plt.errorbar(np.arange(len(lrates)), plot_data_lr[0, :, :, -1].mean(axis=0), yerr=plot_data_lr[0, :, :, -1].std(axis=0), label='Training')
plt.errorbar(np.arange(len(lrates)), plot_data_lr[1, :, :, -1].mean(axis=0), yerr=plot_data_lr[0, :, :, -1].std(axis=0), label='Validation')
plt.xticks(np.arange(len(lrates)), lrates)
plt.xlabel('learning rate')
plt.ylabel('error')
plt.legend()
# Plot the evolution of the training loss for each learning rate
plt.figure()
for j, lrate in enumerate(lrates):
x = np.arange(plot_data_lr.shape[3])
# Mean training loss over trials
y = plot_data_lr[0, :, j, :].mean(axis=0)
# Standard deviation over trials
ebar = plot_data_lr[0, :, j, :].std(axis=0)
# Plot
markers, caps, bars = plt.errorbar(x, y, yerr=ebar, label='LR = ' + str(lrate))
# Make the error bars transparent
[bar.set_alpha(0.01) for bar in bars]
plt.legend()
plt.xlabel('iterations')
plt.ylabel('training error')
###Output
_____no_output_____
###Markdown
:: TASK 1.7 ::**- Briefly discuss** the different behaviour of the training for different learning rates. How many iterations does it take to converge or does it converge at all? Which learning rate is better and why? We see from the plot that, depending on the learning rate, the model does not always converge. The model converges with the three other learning rates: it takes approximately 2 000 iterations to converge for LR = 0.2, 10 000 for LR = 0.02 and 30 000 for LR = 0.002. When the learning rate is too high (the case for LR = 2), the weights are updated too much at each step and the loss diverges. When the learning rate is too low, the weights are updated little by little and convergence takes a long time. The best learning rate is the one of the green curve: although the yellow curve converges faster, its variance is larger. Indeed, the model with LR = 0.02 keeps a good balance between speed and accuracy. :: TASK 1.8 ::**The number of hidden units:**Set the learning rate to 0.02 and change the number of hidden units $h = \{1, 2, 5, 7, 10, 100\}$. For each of these values run the training procedure 5 times and observe the training behaviour**-Visualize** one decision hyper-plane per number of hidden units.**-Make one figure** where *final* error for (i) training and (ii) validation sets are superimposed. $x$-axis should be the different values of the number of hidden units, $y$-axis the error *mean* across 5 runs. Show the standard deviation with error bars and make sure to label each plot with a legend.**-Make another figure** where *training error evolution* for each number of hidden units is superimposed. $x$-axis should be the iteration number, $y$-axis the training error *mean* across 5 runs for a given learning rate. Show the standard deviation with error bars and make sure to label each curve with a legend.**-Briefly discuss** the different behaviours for the different numbers of hidden units.
###Code
nEpochs = 40
trials = 5
rate = 0.2
hid = [1, 2, 5, 7, 10, 100]
plot_data_h = np.zeros((2, trials, len(hid), nEpochs*1000))
h = 7
for j, h in enumerate(hid):
print('h = %d' % h)
for i in range(trials):
tr_error, val_error = train_loop(Xtr, Ytr, Xval, Yval, h, rate, vis='never', nEpochs=nEpochs)
plot_data_h[0, i, j, :] = tr_error
plot_data_h[1, i, j, :] = val_error
plt.errorbar(np.arange(len(hid)), plot_data_h[0, :, :, -1].mean(axis=0), yerr=plot_data_h[0, :, :, -1].std(axis=0), label='Training')
plt.errorbar(np.arange(len(hid)), plot_data_h[1, :, :, -1].mean(axis=0), yerr=plot_data_h[0, :, :, -1].std(axis=0), label='Validation')
plt.xticks(np.arange(len(hid)), hid)
plt.xlabel('Number of hidden units')
plt.ylabel('error')
plt.legend()
# Plot the evolution of the training loss for each h
plt.figure()
for j, hid in enumerate(hid):
x = np.arange(plot_data_h.shape[3])
# Mean training loss over trials
y = plot_data_h[0, :, j, :].mean(axis=0)
# Standard deviation over trials
ebar = plot_data_h[0, :, j, :].std(axis=0)
# Plot
markers, caps, bars = plt.errorbar(x, y, yerr=ebar, label='H = ' + str(hid))
# Make the error bars transparent
[bar.set_alpha(0.01) for bar in bars]
plt.legend()
plt.xlabel('iterations')
plt.ylabel('training error')
###Output
_____no_output_____
###Markdown
We observe that we cannot fit the data when the model is too simple: this is the case for h=1 and h=2 hidden units, where the model cannot achieve zero training error. Increasing the number of hidden units, and thus the number of parameters, allows the model to fit more complex data and converge. But increasing the number of hidden units also increases the time needed to train the model, so a good tradeoff here is to choose h = 10. Part 2 - Building blocks of a CNNThis part introduces typical CNN building blocks, such as ReLU units and linear filters. For a motivation for using CNNs over fully-connected neural networks, see [[Le Cun, et al, 1998]](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf). Install PyTorch
###Code
!pip install torch torchvision
import torch
print(torch.__version__)
print(torch.cuda.is_available())
###Output
Requirement already satisfied: torch in /usr/local/lib/python3.6/dist-packages (1.3.1+cu100)
Requirement already satisfied: torchvision in /usr/local/lib/python3.6/dist-packages (0.4.2+cu100)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch) (1.17.3)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision) (4.3.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from torchvision) (1.12.0)
Requirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from pillow>=4.1.1->torchvision) (0.46)
1.3.1+cu100
True
###Markdown
ConvolutionA feed-forward neural network can be thought of as the composition of number of functions $$f(\mathbf{x}) = f_L(\dots f_2(f_1(\mathbf{x};\mathbf{w}_1);\mathbf{w}_2)\dots),\mathbf{w}_{L}).$$Each function $f_l$ takes as input a datum $\mathbf{x}_l$ and a parameter vector $\mathbf{w}_l$ and produces as output a datum $\mathbf{x}_{l+1}$. While the type and sequence of functions is usually handcrafted, the parameters $\mathbf{w}=(\mathbf{w}_1,\dots,\mathbf{w}_L)$ are *learned from data* in order to solve a target problem, for example classifying images or sounds.In a *convolutional neural network* data and functions have additional structure. The data $\mathbf{x}_1,\dots,\mathbf{x}_n$ are images, sounds, or more in general maps from a lattice$^1$ to one or more real numbers. In particular, since the rest of the practical will focus on computer vision applications, data will be 2D arrays of pixels. Formally, each $\mathbf{x}_i$ will be a $M \times N \times K$ real array of $M \times N$ pixels and $K$ channels per pixel. Hence the first two dimensions of the array span space, while the last one spans channels. Note that only the input $\mathbf{x}=\mathbf{x}_1$ of the network is an actual image, while the remaining data are intermediate *feature maps*.The second property of a CNN is that the functions $f_l$ have a *convolutional structure*. This means that $f_l$ applies to the input map $\mathbf{x}_l$ an operator that is *local and translation invariant*. Examples of convolutional operators are applying a bank of linear filters to $\mathbf{x}_l$. In this part we will familiarise ourselves with a number of such convolutional and non-linear operators. The first one is the regular *linear convolution* by a filter bank. We will start by focusing our attention on a single function relation as follows:$$ f: \mathbb{R}^{M\times N\times K} \rightarrow \mathbb{R}^{M' \times N' \times K'}, \qquad \mathbf{x} \mapsto \mathbf{y}.$$$^1$A two-dimensional *lattice* is a discrete grid embedded in $R^2$, similar for example to a checkerboard.
###Code
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
import torch
import torchvision
# Download an example image
!wget -q http://www.di.ens.fr/willow/teaching/recvis/assignment3/images/peppers.png
# Read the image
x = np.asarray(Image.open('peppers.png'))/255.0
# Print the size of x. Third dimension (=3) corresponds to the R, G, B channels
print(x.shape)
# Visualize the input x
plt.imshow(x)
# Convert to torch tensor
x = torch.from_numpy(x).permute(2, 0, 1).float()
# Prepare it as a batch
x = x.unsqueeze(0)
###Output
(384, 512, 3)
###Markdown
This should display an image of bell peppers.Next, we create a convolutional layer with a bank of 10 filters of dimension $5 \times 5 \times 3$ whose coefficients are initialized randomly. This uses the [`torch.nn.Conv2d`](https://pytorch.org/docs/stable/nn.htmltorch.nn.Conv2d) module from PyTorch:
###Code
# Create a convolutional layer and a random bank of linear filters
conv = torch.nn.Conv2d(3, 10, kernel_size=5, stride=1, padding=0, bias=False)
print(conv.weight.size())
###Output
torch.Size([10, 3, 5, 5])
###Markdown
**Remark:** You might have noticed that the `bias` argument to the `torch.nn.Conv2d` function is the empty matrix `false`. It can be otherwise used to pass a vector of bias terms to add to the output of each filter.Note that `conv.weight` has four dimensions, packing 10 filters. Note also that each filter is not flat, but rather a volume containing three slices. The next step is applying the filter to the image.
###Code
# Apply the convolution operator
y = conv(x)
# Observe the input/output sizes
print(x.size())
print(y.size())
###Output
torch.Size([1, 3, 384, 512])
torch.Size([1, 10, 380, 508])
###Markdown
The variable `y` contains the output of the convolution. Note that the filters are three-dimensional. This is because they operate on a tensor $\mathbf{x}$ with $K$ channels. Furthermore, there are $K'$ such filters, generating a $K'$ dimensional map $\mathbf{y}$.We can now visualise the output `y` of the convolution. In order to do this, use the `torchvision.utils.make_grid` function to display an image for each feature channel in `y`:
###Code
# Visualize the output y
def vis_features(y):
# Organize it into 10 grayscale images
out = y.permute(1, 0, 2, 3)
# Scale between [0, 1]
out = (out - out.min().expand(out.size())) / (out.max() - out.min()).expand(out.size())
# Create a grid of images
out = torchvision.utils.make_grid(out, nrow=5)
# Convert to numpy image
out = np.transpose(out.detach().numpy(), (1, 2, 0))
# Show
plt.imshow(out)
# Remove grid
plt.gca().grid(False)
vis_features(y)
###Output
_____no_output_____
###Markdown
So far filters preserve the resolution of the input feature map. However, it is often useful to *downsample the output*. This can be obtained by using the `stride` option in `torch.nn.Conv2d`:
###Code
# Try again, downsampling the output
conv_ds = torch.nn.Conv2d(3, 10, kernel_size=5, stride=16, padding=0, bias=False)
y_ds = conv_ds(x)
print(x.size())
print(y_ds.size())
vis_features(y_ds)
###Output
torch.Size([1, 3, 384, 512])
torch.Size([1, 10, 24, 32])
###Markdown
Applying a filter to an image or feature map interacts with the boundaries, making the output map smaller by an amount proportional to the size of the filters. If this is undesirable, then the input array can be padded with zeros by using the `pad` option:
###Code
# Try padding
conv_pad = torch.nn.Conv2d(3, 10, kernel_size=5, stride=1, padding=2, bias=False)
y_pad = conv_pad(x)
print(x.size())
print(y_pad.size())
vis_features(y_pad)
###Output
torch.Size([1, 3, 384, 512])
torch.Size([1, 10, 384, 512])
###Markdown
In order to consolidate what has been learned so far, we will now design a filter by hand:
###Code
w = torch.FloatTensor([[0, 1, 0 ],
[1, -4, 1 ],
[0, 1, 0 ]])
w = w.repeat(3, 1).reshape(1, 3, 3, 3)
conv_lap = torch.nn.Conv2d(3, 1, kernel_size=3, stride=1, padding=1, bias=False)  # one output channel, matching the 1x3x3x3 handcrafted weight
conv_lap.weight = torch.nn.Parameter(w)
y_lap = conv_lap(x)
print(x.size())
print(y_lap.size())
plt.figure()
vis_features(y_lap)
plt.title('filter output')
plt.figure()
vis_features(-torch.abs(y_lap))
plt.title('- abs(filter output)') ;
print(w)
###Output
tensor([[[[ 0., 1., 0.],
[ 1., -4., 1.],
[ 0., 1., 0.]],
[[ 0., 1., 0.],
[ 1., -4., 1.],
[ 0., 1., 0.]],
[[ 0., 1., 0.],
[ 1., -4., 1.],
[ 0., 1., 0.]]]])
###Markdown
:: TASK 2.1 ::* i. What filter have we implemented?* ii. How are the RGB colour channels processed by this filter?* iii. What image structures are detected? i. A Laplacian filter is implemented here.ii. The filter is applied to each of the RGB channels and the three responses are summed at the corresponding pixel to give the final feature map.iii. It detects rapid changes of intensity in the image; in other words, the edges in the image are detected. Non-linear activation functionsThe simplest non-linearity is obtained by following a linear filter by a *non-linear activation function*, applied identically to each component (i.e. point-wise) of a feature map. The simplest such function is the *Rectified Linear Unit (ReLU)*$$ y_{ijk} = \max\{0, x_{ijk}\}.$$This function is implemented by [`torch.nn.ReLU()`](https://pytorch.org/docs/stable/nn.htmltorch.nn.ReLU). Run the code below and understand what the filter $\mathbf{w}$ is doing.
###Code
w = torch.FloatTensor([[1], [0], [-1]]).repeat(1, 3, 1, 1)
w = torch.cat((w, -w), 0)
conv = torch.nn.Conv2d(3, 2, kernel_size=(3, 1), stride=1, padding=0, bias=False)
conv.weight = torch.nn.Parameter(w)
relu = torch.nn.ReLU()
y = conv(x)
z = relu(y)
plt.figure()
vis_features(y)
plt.figure()
vis_features(z)
###Output
_____no_output_____
###Markdown
PoolingThere are several other important operators in a CNN. One of them is *pooling*. A pooling operator operates on individual feature channels, coalescing nearby feature values into one by the application of a suitable operator. Common choices include max-pooling (using the max operator) or sum-pooling (using summation). For example, *max-pooling* is defined as:$$ y_{ijk} = \max \{ y_{i'j'k} : i \leq i' < i+p, j \leq j' < j + p \}$$Max-pooling is implemented by [`torch.nn.MaxPool2d()`](https://pytorch.org/docs/stable/nn.htmltorch.nn.MaxPool2d). :: TASK 2.2 ::Run the code below to try max-pooling. Look at the resulting image. Can you interpret the result?
###Code
mp = torch.nn.MaxPool2d(15, stride=1)
y = mp(x)
plt.imshow(y.squeeze().permute(1, 2, 0).numpy())
plt.gca().grid(False)
###Output
_____no_output_____
###Markdown
Max-pooling with a 15×15 window and stride 1 replaces each pixel by the maximum over its local neighbourhood, so the spatial size barely changes here. Bright regions are dilated and fine detail is smoothed away, while the image as a whole keeps its characteristics. Part 3 - Training a CNNThis part is an introduction to using PyTorch for training simple neural net models. The CIFAR-10 dataset will be used. Imports
###Code
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable
USE_GPU = True # To train on GPU (not that important for such a small network and dataset)
if USE_GPU and torch.cuda.is_available():
device = torch.device('cuda')
else:
device = torch.device('cpu')
print("Using : %s" %str(device))
###Output
Using : cuda
###Markdown
Parameters The default values for the learning rate, batch size and number of epochs are given in the "options" cell of this notebook. Unless otherwise specified, use the default values throughout this assignment.
###Code
batch_size = 64 # input batch size for training
epochs = 10 # number of epochs to train
lr = 0.01 # learning rate
###Output
_____no_output_____
###Markdown
Warmup It is always good practice to visually inspect your data before trying to train a model, since it lets you check for problems and get a feel for the task at hand.CIFAR-10 is a dataset of 60,000 color images (32 by 32 resolution) across 10 classes(airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck). The train/test split is 50k/10k.
###Code
# Data Loading
# Warning: this cell might take some time when you run it for the first time,
# because it will download the dataset from the internet
dataset = 'cifar10'
data_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
trainset = datasets.CIFAR10(root='.', train=True, download=True, transform=data_transform)
testset = datasets.CIFAR10(root='.', train=False, download=True, transform=data_transform)
train_loader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=0)
test_loader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=0)
###Output
0it [00:00, ?it/s]
###Markdown
:: TASK 3.1 ::Use `matplotlib` and ipython notebook's visualization capabilities to display some of these images. Display 5 images from the dataset together with their category label. [See this PyTorch tutorial page](http://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.htmlsphx-glr-beginner-blitz-cifar10-tutorial-py) for hints on how to achieve this.
###Code
import matplotlib.pyplot as plt
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
trainloader = torch.utils.data.DataLoader(trainset, batch_size=5, shuffle=True, num_workers=0)
# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(5)))
###Output
_____no_output_____
###Markdown
Training a Convolutional Network on CIFAR-10 Start by running the provided training code below. By default it will train on CIFAR-10 for 10 epochs (passes through the training data) with a single layer network. The loss function [cross_entropy](http://pytorch.org/docs/master/nn.html?highlight=cross_entropytorch.nn.functional.cross_entropy) computes a Logarithm of the Softmax on the output of the neural network, and then computes the negative log-likelihood w.r.t. the given `target`. Note the decrease in training loss and corresponding decrease in validation errors.
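Before the training cell below, here is a small added check (not part of the original assignment; the tensor names are illustrative) that `F.cross_entropy` really is `log_softmax` followed by the negative log-likelihood:
###Code
# Added sanity check: cross_entropy == log_softmax + negative log-likelihood
logits = torch.randn(4, 10)            # a batch of 4 samples, 10 class scores each
targets = torch.randint(0, 10, (4,))   # ground-truth class indices
a = F.cross_entropy(logits, targets)
b = F.nll_loss(F.log_softmax(logits, dim=1), targets)
print(torch.allclose(a, b))            # expected: True
###Output
_____no_output_____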
###Code
def train(epoch, network):
network.train()
for batch_idx, (data, target) in enumerate(train_loader):
optimizer.zero_grad()
data = data.to(device)
target = target.to(device)
output = network(data)
loss = F.cross_entropy(output, target)
loss.backward()
optimizer.step()
if batch_idx % 100 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
def test(network):
network.eval()
test_loss = 0
correct = 0
for data, target in test_loader:
data = data.to(device)
target = target.to(device)
output = network(data)
test_loss += F.cross_entropy(output, target, size_average=False).item() # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).cpu().sum()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
# Single layer network architecture
class Net(nn.Module):
def __init__(self, num_inputs, num_outputs):
super(Net, self).__init__()
self.linear = nn.Linear(num_inputs, num_outputs)
self.num_inputs = num_inputs
def forward(self, input):
input = input.view(-1, self.num_inputs) # reshape input to batch x num_inputs
output = self.linear(input)
return output
# Train
network = Net(3072, 10).to(device)
optimizer = optim.SGD(network.parameters(), lr=lr)
for epoch in range(1, 11):
train(epoch, network)
test(network)
###Output
Train Epoch: 1 [0/50000 (0%)] Loss: 2.251657
Train Epoch: 1 [6400/50000 (13%)] Loss: 2.000074
Train Epoch: 1 [12800/50000 (26%)] Loss: 1.983296
Train Epoch: 1 [19200/50000 (38%)] Loss: 1.921615
Train Epoch: 1 [25600/50000 (51%)] Loss: 1.727296
Train Epoch: 1 [32000/50000 (64%)] Loss: 1.843463
Train Epoch: 1 [38400/50000 (77%)] Loss: 1.662544
Train Epoch: 1 [44800/50000 (90%)] Loss: 1.798393
###Markdown
:: TASK 3.2 ::Add code to create a convolutional network architecture as below. - Convolution with 5 by 5 filters, 16 feature maps + Tanh nonlinearity. - 2 by 2 max pooling. - Convolution with 5 by 5 filters, 128 feature maps + Tanh nonlinearity. - 2 by 2 max pooling. - Flatten to vector. - Linear layer with 64 hidden units + Tanh nonlinearity. - Linear layer to 10 output units.
###Code
class ConvNet(nn.Module):
def __init__(self):
super(ConvNet, self).__init__()
self.conv1 = nn.Conv2d(3, 16, kernel_size = 5, bias=False)
self.t1 = nn.Tanh()
self.pool1 = nn.MaxPool2d(kernel_size = 2)
self.conv2 = nn.Conv2d(16, 128, kernel_size = 5, bias=False)
self.t2 = nn.Tanh()
self.pool2 = nn.MaxPool2d(kernel_size = 2)
self.flat = nn.Flatten()
self.linear1 = nn.Linear(128 * 5 * 5, 64, bias=False)
self.t3 = nn.Tanh()
self.linear2 = nn.Linear(64, 10, bias=False)
def forward(self, input):
x = self.conv1(input)
x = self.t1(x)
x = self.pool1(x)
x = self.conv2(x)
x = self.t2(x)
x = self.pool2(x)
x = self.flat(x)
x = self.linear1(x)
x = self.t3(x)
output = self.linear2(x)
return output
###Output
_____no_output_____
###Markdown
:: TASK 3.3 ::Some of the functions in a CNN must be non-linear. Why? If we only used linear functions in a CNN, the whole network would be linear and therefore equivalent to a single-layer network (a composition of linear maps is still a linear map). Most real-world problems are non-linear and have to be approximated by non-linear functions, so introducing non-linear activations increases the expressive power of the network and lets it approximate more complicated functions. :: TASK 3.4 ::Train the CNN for 20 epochs on the CIFAR-10 training set.
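To make the Task 3.3 argument concrete, here is a minimal added sketch (not from the assignment; the layer sizes are arbitrary) showing that two stacked linear layers without an activation collapse into a single linear layer:
###Code
# Added sketch: composing two bias-free linear layers is itself a linear map,
# whose weight is simply the product of the two weight matrices.
lin1 = nn.Linear(8, 16, bias=False)
lin2 = nn.Linear(16, 4, bias=False)
x_demo = torch.randn(5, 8)
single = nn.Linear(8, 4, bias=False)
single.weight = nn.Parameter(lin2.weight @ lin1.weight)
print(torch.allclose(lin2(lin1(x_demo)), single(x_demo), atol=1e-6))  # expected: True
###Output
_____no_output_____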
###Code
# Train
network = ConvNet().to(device)
optimizer = optim.SGD(network.parameters(), lr = lr)
for epoch in range(1, 21):
train(epoch, network)
test(network)
###Output
Train Epoch: 1 [0/50000 (0%)] Loss: 2.298941
Train Epoch: 1 [6400/50000 (13%)] Loss: 2.202316
Train Epoch: 1 [12800/50000 (26%)] Loss: 2.010598
Train Epoch: 1 [19200/50000 (38%)] Loss: 2.035936
Train Epoch: 1 [25600/50000 (51%)] Loss: 1.921243
Train Epoch: 1 [32000/50000 (64%)] Loss: 1.888596
Train Epoch: 1 [38400/50000 (77%)] Loss: 1.891539
Train Epoch: 1 [44800/50000 (90%)] Loss: 1.800411
###Markdown
:: TASK 3.5 ::Plot the first convolutional layer weights as images after the last epoch. (Hint threads: [1](https://discuss.pytorch.org/t/understanding-deep-network-visualize-weights/2060/2?u=smth) [2](https://github.com/pytorch/visionutils) )
###Code
def vis_filter(y):
# Scale between [0, 1]
out = (y - y.min().expand(y.size())) / (y.max() - y.min()).expand(y.size())
# Create a grid of images
out = torchvision.utils.make_grid(out, nrow=8, padding=1)
# Convert to numpy image
out = np.transpose(out.detach().numpy(), (1, 2, 0))
# Show
plt.figure(figsize= (20, 2))
plt.imshow(out)
# Remove grid
plt.gca().grid(False)
network = network.to("cpu")
print(network.conv1.weight.size())
vis_filter(network.conv1.weight)
###Output
torch.Size([16, 3, 5, 5])
###Markdown
Task 1. (5 points) The notebook implements a bigram language model (generation takes into account only the 1 preceding word). Implement a trigram model and generate several texts. Compare them with the texts generated by the bigram model. You can use the same texts as in the seminar, or take some other text (in English or Russian). This task will be easier to do after reading the first 7 pages of this chapter by Jurafsky - https://web.stanford.edu/~jurafsky/slp3/3.pdf
###Code
import nltk
import re
from collections import defaultdict
import numpy as np
import copy
from gensim.models.phrases import Phrases
with open('stranger.txt', encoding='utf-8') as file: #Robert Heinlein's Stranger in a Strange Land with preface removed
stranger = file.read()
stranger = stranger.replace('“Stranger In A Strange Land” by Robert Heinlein', '')
sents = nltk.tokenize.sent_tokenize(stranger)
###Output
_____no_output_____
###Markdown
*I do not include hyphenated words, because 1) this text does not use dashes and 2) there are no spaces around the hyphens.*
###Code
pattern = re.compile(r'([A-Za-z]+[\']?[A-Za-z]*)')
sents = [re.findall(pattern, sent) for sent in sents]
tri_model = defaultdict(lambda: defaultdict(lambda: 0))
for sentence in sents:
for w1, w2, w3 in nltk.trigrams(sentence, pad_right=True, pad_left=True, left_pad_symbol='<s>', right_pad_symbol='</s>'):
tri_model[(w1, w2)][w3] += 1
###Output
_____no_output_____
###Markdown
I do not use logarithms, because the probabilities are only needed to do weighted sampling.
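As a small added illustration (not part of the original solution; the tokens and probabilities are made up), normalized counts can be passed directly to `np.random.choice` through its `p` argument:
###Code
# Added example: weighted sampling with normalized counts via the `p` argument
demo_words = ['the', 'cat', 'sat']   # illustrative tokens
demo_probs = [0.6, 0.3, 0.1]         # already-normalized counts
print(np.random.choice(demo_words, p=demo_probs))
###Output
_____no_output_____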
###Code
for bigram in tri_model:
total_count = sum(tri_model[bigram].values())
for target in tri_model[bigram]:
tri_model[bigram][target] /= total_count
def tri_generate(model, start=('<s>', '<s>')):
text = list(start)
while text[-1] != '</s>':
index = tuple(text[-2:])
keys = list(model[index].keys())
values = list(model[index].values())
        key = np.random.choice(keys, 1, p=values)[0]  # pass probabilities via `p`, not as the `replace` argument
text.append(key)
    return ' '.join(text[2:-1])  # drop the two start symbols and the trailing '</s>' token
def text_generator(sent_generator, model, number_of_sents=1, count_words=False):
result = []
for _ in range(number_of_sents):
result.append(sent_generator(model))
if count_words == True:
count = count_words_avg(result)
return count
else:
result = '. '.join(result) + '.'
return result
def count_words_avg(sents):
total = 0
for sent in sents:
total += len(sent.split(' '))
return total/len(sents)
###Output
_____no_output_____
###Markdown
*It should be noted here that this text is the output of an automatic OCR of a printed book, so errors occur in it fairly often. They could be corrected, but that is beyond the scope of this task.*
###Code
tri_example = text_generator(tri_generate, tri_model, 6)
print(tri_example)
bi_model = defaultdict(lambda: defaultdict(lambda: 0))
for sentence in sents:
for w1, w2 in nltk.bigrams(sentence, pad_right=True, pad_left=True, left_pad_symbol='<s>', right_pad_symbol='</s>'):
bi_model[w1][w2] += 1
for unigram in bi_model:
total_count = sum(bi_model[unigram].values())
for target in bi_model[unigram]:
bi_model[unigram][target] /= total_count
def bi_generate(model, start=['<s>']):
text = copy.copy(start)
while text[-1] != '</s>':
index = text[-1]
keys = list(model[index].keys())
values = list(model[index].values())
        key = np.random.choice(keys, 1, p=values)[0]  # pass probabilities via `p`, not as the `replace` argument
text.append(key)
    return ' '.join(text[1:-1])  # drop the start symbol and the trailing '</s>' token
bi_example = text_generator(bi_generate, bi_model, 6)
bi_example
###Output
_____no_output_____
###Markdown
The trigram model's output, for comparison:
###Code
tri_example
###Output
_____no_output_____
###Markdown
Two things can be noted: 1) Sentences produced by the trigram model definitely look more like text written by a human; they are more coherent (grammatically only, of course). 2) Bigram sentences are longer than trigram sentences, given that we stop only at the end-of-sentence symbol. This is confirmed by the experiment below.
###Code
n = 100
m = 10
bi_test = 0
tri_test = 0
for _ in range(n):
bi_test += text_generator(bi_generate, bi_model, m, count_words=True)
tri_test += text_generator(tri_generate, tri_model, m, count_words=True)
print(f'Average bigram model sentence length: {bi_test/n:.2f} words\n' +
f'Average trigram model sentence length: {tri_test/n:.2f} words\n')
###Output
Average bigram model sentence length: 19.90 words
Average trigram model sentence length: 12.46 words
###Markdown
Task 2. (5 points) Using gensim.models.Phrases, implement byte-pair encoding, which was discussed in the first seminar (https://github.com/mannefedov/compling_nlp_hse_course/blob/master/notebooks/Preprocessing.ipynb). Namely: 1) take any text; split it into sentences, and split each sentence into individual characters (do not lose the spaces); 2) train gensim.models.Phrases on the resulting character sentences; 3) apply the resulting n-grammer to these character sentences; 4) repeat steps 2 and 3 N times, until whole words start to appear. The parameters of gensim.models.Phrases affect how many n-grams are produced after each pass, so do not forget to tune them.
###Code
symbol_sents = [' '.join(sent) for sent in sents]
symbol_sents = [[ch for ch in sent if ch not in ',.;!?\n'] for sent in symbol_sents]
def symbol_grams(sents, iterations):
transformed = []
for _ in range(iterations):
if not transformed:
transformed = sents
phrases = Phrases(transformed, scoring='npmi', threshold=0, min_count=2)
transformed = phrases[transformed]
return transformed
result = symbol_grams(symbol_sents, 3)
###Output
_____no_output_____
###Markdown
As we can see, with this method it is indeed impossible to obtain n-grams that correspond exactly to individual words without jumping across spaces.
###Code
list(result)[1]
def split_spaces(sents):
res = []
for sent in sents:
sub = []
for word in sent:
sub.extend(word.split(' '))
sub = [word.strip('_') if word else ' ' for word in sub]
res.append(sub)
return res
def symbol_grams_spaces(sents, iterations):
transformed = []
for _ in range(iterations):
if not transformed:
transformed = sents
phrases = Phrases(transformed, scoring='npmi', threshold=0, min_count=3)
transformed = phrases[transformed]
transformed = split_spaces(transformed)
return transformed
result_spaces = symbol_grams_spaces(symbol_sents, 3)
###Output
_____no_output_____
###Markdown
When the spaces are handled explicitly, the result is closer to what we want, but extra spaces appear. Presumably this is because the article gets glued to a space at every iteration, which is understandable, since for an article that is its most frequent pair.
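One quick way to check this claim (an added sketch, not part of the original solution) is to count adjacent character pairs in `symbol_sents` and look at the most frequent ones:
###Code
# Added check: most frequent adjacent symbol pairs in the character sentences
from collections import Counter
pair_counts = Counter()
for sent in symbol_sents:
    pair_counts.update(zip(sent, sent[1:]))
print(pair_counts.most_common(10))
###Output
_____no_output_____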
###Code
'|'.join(list(result_spaces)[1])
###Output
_____no_output_____
###Markdown
Using a variety of descriptive statistics (covered in the lectures), compare the products and the restaurants.
###Code
import os
os.chdir("/Users/egorgusev/Анализ данных/Задание 2")
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
ProductList = pd.read_csv('PRODUCT_LIST.CSV', sep=';', encoding='Windows-1251')
ProductList.head()
ProductList.dtypes
ProductList.describe(include = 'all')
SaleList = pd.read_csv('SALE_LIST.csv', sep=';', encoding='Windows-1251')
SaleList
SaleList.info()
SaleList.describe(include = 'all')
len(ProductList['Product_name'].unique())
ProductList['Product_name'].unique()
ProductList['Unnamed: 3'].unique()
mergeList[mergeList['Product_name'] == 'Чай красный фрукт 270мл']
top5 = list(mergeList.groupby('Product_name')['product_count'].sum().sort_values(ascending = False).head(5).index.values)
mergeList[mergeList['Product_name'].isin(top5)].groupby(['rest_code', 'Product_name'])['product_count'].hist(bins=50)
ax1 = mergeList[mergeList['Product_name'].isin(top5)].boxplot(figsize=(30,10), column='product_count', by = ['rest_code', 'Product_name'])
ax1.get_figure().suptitle('')
plt.figure()
top200M = mergeList[mergeList['rest_code'] == 'Мечта'].sort_values('product_count', ascending = False).head(200)
top400O = mergeList[mergeList['rest_code'] == 'Озерный'].sort_values('product_count', ascending = False).head(400)
top400O
top400O.hist('product_count')
top200M
top200M.hist('product_count')
top200M.boxplot(column='product_count')
top400O.boxplot(column='product_count')
np.log(top200['product_count']).hist()
mergeList.groupby('Product_name')['product_count'].sum().sort_values(ascending = False)
#top200.groupby('date').hist(column='product_count')
df = pd.DataFrame(mergeList.groupby(['Product_name', 'rest_code'])['product_count'].sum())
df.boxplot(column='product_count', by='rest_code')
top200.boxplot(column='product_count', by='rest_code')
df.hist(column='product_count', by='rest_code')
mergeList.groupby(['Product_name', 'rest_code'])['product_count'].sum().sort_values(ascending = False)
mergeList = pd.merge(left=SaleList, right=ProductList, left_on='product_code', right_on='Product_code')
mergeList
mergeList.groupby('rest_code')['product_count'].plot.hist(bins=40, alpha=0.4)
plt.legend(loc='upper right')
mergeList['product_count'].hist(bins=40, alpha=0.4)
mergeList.boxplot(column='product_count')
#mergeList.groupby('rest_code')["Product_code"].plot.hist(alpha=0.2)
mergeList.boxplot(column='product_count', by='rest_code')
ax = top200.boxplot(column='product_count', by='Product_name')
ax.get_figure().suptitle('')
ax = top400O.boxplot(column='product_count', by='Product_name')
ax.get_figure().suptitle('')
plt.title(u' Топ продажи в кафе Озерный')
mergeList[mergeList['Product_name'] == 'Кофе КАПУЧИНО 150мл'].groupby('rest_code')['product_count'].hist(bins = 30)
mergeList[mergeList['Product_name'] == 'Кофе КАПУЧИНО 150мл'].boxplot(by = 'rest_code', column = 'product_count')
plt.title(u'Продажи Кофе КАПУЧИНО 150мл')
###Output
_____no_output_____
###Markdown
ASTR531 HW2 Contents* [9.1](9.1)* [11.2](11.2)* [12.2](12.2)* [13.2](13.2)
###Code
%matplotlib inline
%config InlineBackend.figure_format='retina'
from astropy.coordinates import Distance
from scipy.optimize import leastsq
import astropy.units as u
import astropy.constants as const
import numpy as np
import pandas as pd
import seaborn as sns
sns.set(font_scale=1.2)
###Output
_____no_output_____
###Markdown
9.1
###Code
def tdyn(M, R):
rho = 3*M/(4*np.pi*R**3)
return ((const.G*rho)**-0.5).to(u.hour)
def tkh(M, R, L):
return (const.G*M**2/(R*L)).to(u.year)
def tnuc(M, L):
return ((M/u.solMass)/(L/u.solLum)).decompose() * 1e10 * u.yr
def calc_timescales(M, R, L):
print('For a star with mass {:.1f}, radius {:.2f}, and luminosity {:.3e}:'.format(M,R,L))
print( 'Dynamical timescale : {:.3f}'.format(tdyn(M,R)) )
print( 'Thermal timescale : {:.3e}'.format(tkh(M,R,L)) )
print( 'Nuclear timescale : {:.3e}'.format(tnuc(M,L)) )
print( 'Dynamical : KH : {:.3e}'.format( (tdyn(M,R)/tkh(M,R,L)).decompose() ) )
print( 'Dynamical : nuclear : {:.3e}'.format( (tkh(M,R,L)/tnuc(M,L)).decompose() ) )
print('-----')
R_wd = 0.012 * 0.6**(-1/3)
params = [(1*u.solMass, 1*u.solRad, 1*u.solLum),
(60*u.solMass, 15*u.solRad, 10**5.9*u.solLum),
(15*u.solMass, 3300*u.solRad, 10**5.65*u.solLum),
(0.6*u.solMass, R_wd*u.solRad, 1e-2*u.solLum)]
for (M, R, L) in params:
calc_timescales(M,R,L)
###Output
For a star with mass 1.0 solMass, radius 1.00 solRad, and luminosity 1.000e+00 solLum:
Dynamical timescale : 0.906 h
Thermal timescale : 3.140e+07 yr
Nuclear timescale : 1.000e+10 yr
Dynamical : KH : 3.290e-12
Dynamical : nuclear : 3.140e-03
-----
For a star with mass 60.0 solMass, radius 15.00 solRad, and luminosity 7.943e+05 solLum:
Dynamical timescale : 6.792 h
Thermal timescale : 9.487e+03 yr
Nuclear timescale : 7.554e+05 yr
Dynamical : KH : 8.166e-08
Dynamical : nuclear : 1.256e-02
-----
For a star with mass 15.0 solMass, radius 3300.00 solRad, and luminosity 4.467e+05 solLum:
Dynamical timescale : 44324.544 h
Thermal timescale : 4.793e+00 yr
Nuclear timescale : 3.358e+05 yr
Dynamical : KH : 1.055e+00
Dynamical : nuclear : 1.427e-05
-----
For a star with mass 0.6 solMass, radius 0.01 solRad, and luminosity 1.000e-02 solLum:
Dynamical timescale : 0.002 h
Thermal timescale : 7.945e+10 yr
Nuclear timescale : 6.000e+11 yr
Dynamical : KH : 2.849e-18
Dynamical : nuclear : 1.324e-01
-----
###Markdown
11.2
###Code
def find_beta(B, M0=1):
M = 18.2 * (1-B)**0.5 / (B*0.6)**2
return M0 - M
B_m1 = leastsq(find_beta, 0.9, args=1)[0][0]
B_m60 = leastsq(find_beta, 0.9, args=60)[0][0]
B_m1, B_m60
Prad_Pgas_m1 = (1-B_m1)/B_m1
Prad_Pgas_m60 = (1-B_m60)/B_m60
Prad_Pgas_m1, Prad_Pgas_m60
def find_lum(M, B):
Ledd = 3.8e4 * M * u.solLum
return (1-B)*Ledd
L_m1 = find_lum(1, B_m1)
L_m1, np.log10(L_m1.value)
L_m60 = find_lum(60, B_m60)
L_m60, np.log10(L_m60.value)
###Output
_____no_output_____
###Markdown
12.2
###Code
def R_start(M):
return 100.*M * u.solRad
def R_end(M):
return R_start(M) / 50.
def R_ms(M):
return M**0.7 * u.solRad
def calc_radii(M):
print('For a star with mass {:.1f} solMass:'.format(M))
print( 'R at start of Hayashi track : {:.1f}'.format(R_start(M)) )
print( 'R at end of Hayashi track : {:.1f}'.format(R_end(M)) )
print( 'R at end of PMS contraction : {:.1f}'.format(R_ms(M)) )
print('-----')
for m in [0.1, 1, 10, 100]:
calc_radii(m)
def t_hayashi(M):
return M**-1 * 1e6 * u.yr
def t_pms(M):
return 6e7 * M**-2.5 * u.yr
def calc_pms_timescales(M):
print('For a star with mass {:.1f} solMass:'.format(M))
print( 'Hayashi timescale : {:.3e}'.format(t_hayashi(M)) )
print( 'PMS timescale : {:.3e}'.format(t_pms(M)) )
print('-----')
for m in [0.1, 1, 10, 100]:
calc_pms_timescales(m)
###Output
For a star with mass 0.1 solMass:
Hayashi timescale : 1.000e+07 yr
PMS timescale : 1.897e+10 yr
-----
For a star with mass 1.0 solMass:
Hayashi timescale : 1.000e+06 yr
PMS timescale : 6.000e+07 yr
-----
For a star with mass 10.0 solMass:
Hayashi timescale : 1.000e+05 yr
PMS timescale : 1.897e+05 yr
-----
For a star with mass 100.0 solMass:
Hayashi timescale : 1.000e+04 yr
PMS timescale : 6.000e+02 yr
-----
###Markdown
13.2
###Code
X0, Y0 = 0.70, 0.28
X1, Y1 = 0.49, 0.49
mu0 = (2-1.25*Y0)**-1
mu1 = (2-1.25*Y1)**-1
dL = ( ((2-Y1)**-1) * mu1**4 ) / ( ((2-Y0)**-1) * mu0**4 )
dR = ( mu1**(2/3) * (1+X1)**0.05 ) / ( mu0**(2/3) * (1+X0)**0.05 )
dT = ( mu1**0.83 * (1+X1)**-0.5 ) / ( mu0**0.83 * (1+X0)**-0.5 )
dL, dR, dT
###Output
_____no_output_____
###Markdown
1. Snake eyes: $$\frac{1}{6}\cdot\frac{1}{6} = \frac{1}{36}$$Sevens: $$P_{A+B}(7) = \sum_{z} P_A(z)P_B(7-z) = 6\cdot\frac{1}{6}\cdot\frac{1}{6} = \frac{6}{36} = \frac{1}{6}$$Ratio of snake eyes to sevens: $$\frac{\frac{1}{36}}{\frac{1}{6}} = \frac{1}{6}$$ 2.

| | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|----|----|----|
| 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| 5 | 6 | 7 | 8 | 9 | 10 | 11 |
| 6 | 7 | 8 | 9 | 10 | 11 | 12 |

The left column contains the number rolled by one die, and the top row contains the number rolled by the other die. The middle of the table has the sum of the two dice. $$P_{A+B}(x) = \sum_{z}P_A(z)P_B(x-z)\\P_{4} = P_1 P_3 + P_2 P_2 + P_3 P_1 = \frac{1}{36} + \frac{1}{36} + \frac{1}{36} = \frac{1}{12}$$
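A quick brute-force check of these three values, enumerating all 36 equally likely rolls:

```python
from itertools import product
from fractions import Fraction

outcomes = list(product(range(1, 7), repeat=2))   # all 36 equally likely two-dice rolls
p_snake_eyes = Fraction(sum(1 for a, b in outcomes if a == b == 1), len(outcomes))
p_seven = Fraction(sum(1 for a, b in outcomes if a + b == 7), len(outcomes))
p_four = Fraction(sum(1 for a, b in outcomes if a + b == 4), len(outcomes))
print(p_snake_eyes, p_seven, p_four)   # 1/36, 1/6, 1/12
```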
###Code
n = 2
die_pdf = np.ones(6) * 1/6
sum_prob = np.convolve(die_pdf, die_pdf)
sum_val = np.arange(n,6*n+1)
plt.bar(sum_val, sum_prob)
plt.xlabel('Sum of Dice Roll')
plt.ylabel('Probability')
plt.show()
###Output
_____no_output_____
###Markdown
3.
###Code
mean = sum(sum_val*sum_prob)
variance = sum((sum_val-mean)**2 * sum_prob)
print(mean, variance)
###Output
7.0 5.833333333333334
###Markdown
4.
###Code
n = 10
sum_prob = die_pdf
for i in range(n-1):
sum_prob = np.convolve(die_pdf, sum_prob)
sum_prob
sum_val = np.arange(n,6*n+1)
plt.step(sum_val/10, sum_prob)
plt.xlabel('Sum of Dice Roll')
plt.ylabel('Probability')
plt.show()
sum_val = np.arange(n,6*n+1)/10
plt.step(sum_val, sum_prob)
plt.semilogy()
plt.xlabel('Sum of Dice Roll')
plt.ylabel('Probability')
plt.show()
sum_val = np.arange(n,6*n+1)/10
plt.bar(sum_val, sum_prob)
plt.semilogy()
plt.xlabel('Sum of Dice Roll')
plt.ylabel('Probability')
plt.show()
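# Optional check of the Gaussian claim discussed below: overlay a normal distribution
# with the same mean and variance on the 10-dice pmf (uses `stats` and `plt` from the
# imports above, plus `n` and `sum_prob` defined earlier in this cell).
vals = np.arange(n, 6 * n + 1)
mu = np.sum(vals * sum_prob)
var = np.sum((vals - mu) ** 2 * sum_prob)
plt.bar(vals, sum_prob, alpha=0.4, label='sum of 10 dice')
plt.plot(vals, stats.norm.pdf(vals, mu, np.sqrt(var)), 'k-', label='Gaussian, same mean/variance')
plt.semilogy()
plt.xlabel('Sum of Dice Roll')
plt.ylabel('Probability')
plt.legend()
plt.show()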
###Output
_____no_output_____
###Markdown
Yes, it is approximately Gaussian: when plotted with a log y-axis it has the shape of an upside-down parabola, which is the signature of a Gaussian. On the step plot it looks asymmetric, but the bar plot shows that the tails are actually symmetric. 5.
###Code
gaussian_pdf = []
x = np.linspace(-4, 4, num=50)
for i in range(50):
gaussian_pdf.append(stats.norm.pdf(x[i]))
gaus_conv = np.convolve(gaussian_pdf, gaussian_pdf)
x_1 = np.linspace(-8, 8, num=len(gaus_conv))
plt.step(x_1, gaus_conv)
plt.semilogy()
plt.show()
x_2 = x_1/2
plt.step(x_2, gaus_conv)
plt.semilogy()
plt.show()
mean = sum(x*gaussian_pdf)
variance = sum((x-mean)**2 * gaussian_pdf)
mean_1 = sum(x_1*gaus_conv)
variance_1 = sum((x_1-mean_1)**2 * gaus_conv)
mean_2 = sum(x_2*gaus_conv)
variance_2 = sum((x_2-mean_2)**2 * gaus_conv)
print(mean, mean_1, mean_2)
print(variance, variance_1, variance_2)
###Output
8.012254054667878e-17 -4.443528779716531e-15 -2.2217643898582654e-15
6.119992498830809 74.96662014018258 18.741655035045646
###Markdown
As a homework assignment, we suggest you solve a binary classification problem on the large IMDB corpus of movie reviews. The corpus can be downloaded from http://ai.stanford.edu/~amaas/data/sentiment/ Your task is to build and train classifiers in sklearn using three different algorithms and to compute quality metrics for each of them. Plot the ROC curve and compute the ROC AUC value. Choose the best classifier. Using the predicted class probabilities, find the 15 most negative and the 15 most positive reviews according to the model. - 7 points Write your own functions that count tp, fp, tn, fn and return precision, recall, and F-measure, and apply them to the results obtained by your classifiers (if everything is done correctly, the results should match the metrics obtained with sklearn). - 3 points
###Code
import os
import sys
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.metrics import roc_auc_score, roc_curve, f1_score, precision_score, recall_score
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
import warnings
warnings.filterwarnings('ignore')
def load_files(directory):
sentences = []
for file in os.listdir(directory):
path = directory + '\\' + file
with open(path, 'r', encoding='utf-8') as file:
sentences.append(file.readlines()[0])
return sentences
def read_texts(directory):
directory = os.getcwd() + directory
neg_directory = directory + '\\neg'
pos_directory = directory + '\\pos'
neg_sentences = load_files(neg_directory)
pos_sentences = load_files(pos_directory)
neg_index = np.arange(len(neg_sentences))
pos_index = np.arange(len(neg_sentences), len(neg_sentences) + len(pos_sentences))
neg_sentences = pd.Series(neg_sentences, index=neg_index)
pos_sentences = pd.Series(pos_sentences, index=pos_index)
sentences = pd.concat([neg_sentences, pos_sentences])
sentiment_values = pd.concat([pd.Series(0, index=neg_index),
pd.Series(1, index=pos_index)])
df = pd.concat([sentences, sentiment_values], axis=1)
df.columns = ['text', 'sentiment']
return df
train_data = read_texts('\\aclImdb\\train')
#test_data = read_texts('\\aclImdb\\test')
train, test = train_test_split(train_data, test_size=0.2, random_state=0)
train.reset_index(inplace=True)
test.reset_index(inplace=True)
vec = TfidfVectorizer()
vec.fit(train.text.values)
X_train = vec.transform(train.text)
X_test = vec.transform(test.text)
y_train = train.sentiment
y_test = test.sentiment
log_reg = LogisticRegression(solver='liblinear')
knn = KNeighborsClassifier()
dt = DecisionTreeClassifier()
def eval_model(X_train, X_test, y_train, y_test, model):
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
y_pred_proba = model.predict_proba(X_test)
f1 = f1_score(y_test, y_pred)
roc = roc_auc_score(y_test, y_pred)
pr = precision_score(y_test, y_pred)
rec = recall_score(y_test, y_pred)
fpr, tpr, _ = roc_curve(y_test, y_pred)
plt.plot(fpr, tpr, marker='.', label='Test')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
print('\n')
print(f'f-score of the model is {f1}')
print(f'ROC AUC score of the model is {roc}')
print(f'Precision is {pr}')
print(f'Recall is {rec}')
print('\n\n\n')
plt.show()
return y_pred, y_pred_proba, f1, pr, rec
log_pred, log_pred_proba, log_f1, log_pr, log_rec = eval_model(X_train, X_test, y_train, y_test, log_reg)
knn_pred, knn_pred_proba, knn_f1, knn_pr, knn_rec = eval_model(X_train, X_test, y_train, y_test, knn)
dt_pred, dt_pred_proba, dt_f1, dt_pr, dt_rec = eval_model(X_train, X_test, y_train, y_test, dt)
###Output
f-score of the model is 0.7147644391878573
ROC AUC score of the model is 0.7105784590808553
Precision is 0.7057220708446866
Recall is 0.7240415335463258
###Markdown
Model evaluation The best classifier turned out to be *logistic regression*.
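As an optional cross-check, the ROC AUC can also be computed from the predicted probabilities rather than the hard labels; this sketch assumes the fitted `log_reg`, `knn`, and `dt` models and the `X_test`/`y_test` split from the cells above.

```python
# Probability-based ROC AUC for each fitted model
for name, model in [('Logistic Regression', log_reg), ('KNN', knn), ('Decision Tree', dt)]:
    proba = model.predict_proba(X_test)[:, 1]
    print(f'{name}: ROC AUC = {roc_auc_score(y_test, proba):.3f}')
```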
###Code
test['proba_0'] = log_pred_proba[:, 0]
test['proba_1'] = log_pred_proba[:, 1]
###Output
_____no_output_____
###Markdown
The most negative reviews according to the model.
###Code
most_negative = test.sort_values(by='proba_0', ascending=False).head(15)
most_negative
###Output
_____no_output_____
###Markdown
Let's look at some of them:
###Code
most_negative.sample(3).text.values
###Output
_____no_output_____
###Markdown
The most positive reviews.
###Code
most_positive = test.sort_values(by='proba_1', ascending=False).head(15)
most_positive
most_positive.sample(3).text.values
###Output
_____no_output_____
###Markdown
Functions for evaluating prediction quality
###Code
def tp(true, pred):
if true and pred:
return True
else:
return False
def fp(true, pred):
if not true and pred:
return True
else:
return False
def tn(true, pred):
if not true and not pred:
return True
else:
return False
def fn(true, pred):
if true and not pred:
return True
else:
return False
def precision(tp, fp):
return (tp)/(tp + fp)
def recall(tp, fn):
return (tp)/(tp + fn)
def f1_custom(pr, rec):
return (2 * pr * rec)/(pr + rec)
def model_metrics(y_test, y_pred):
assert len(y_test) == len(y_pred)
TP = 0
FP = 0
TN = 0
FN = 0
for i in range(len(y_test)):
TP += tp(y_test[i], y_pred[i])
FP += fp(y_test[i], y_pred[i])
TN += tn(y_test[i], y_pred[i])
FN += fn(y_test[i], y_pred[i])
pr = precision(TP, FP)
rec = recall(TP, FN)
f1 = f1_custom(pr, rec)
return pr, rec, f1
def print_metrics(y_test, y_pred, true_pr, true_rec, true_f1):
pr, rec, f1 = model_metrics(y_test.tolist(), y_pred)
print('++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++')
print(f'Custom function precision is {pr}, sklearn precision is {true_pr}.')
print(f'These values are equal: {pr == true_pr}')
print('++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++')
print(f'Custom function recall is {rec}, sklearn recall is {true_rec}.')
print(f'These values are equal: {rec == true_rec}')
print('++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++')
print(f'Custom function f-score is {f1}, sklearn f-score is {true_f1}.')
print(f'These values are equal: {f1 == true_f1}')
print('++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++')
###Output
_____no_output_____
###Markdown
Comparison with the sklearn library *Logistic Regression*
###Code
print_metrics(y_test, log_pred, log_pr, log_rec, log_f1)
###Output
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Custom function precision is 0.876905041031653, sklearn precision is 0.876905041031653.
These values are equal: True
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Custom function recall is 0.8961661341853036, sklearn recall is 0.8961661341853036.
These values are equal: True
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Custom function f-score is 0.8864309697807624, sklearn f-score is 0.8864309697807624.
These values are equal: True
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
###Markdown
*K-Nearest Neighbors*
###Code
print_metrics(y_test, knn_pred, knn_pr, knn_rec, knn_f1)
###Output
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Custom function precision is 0.7728520988622989, sklearn precision is 0.7728520988622989.
These values are equal: True
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Custom function recall is 0.786741214057508, sklearn recall is 0.786741214057508.
These values are equal: True
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Custom function f-score is 0.7797348110033643, sklearn f-score is 0.7797348110033643.
These values are equal: True
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
###Markdown
*Decision Tree*
###Code
print_metrics(y_test, dt_pred, dt_pr, dt_rec, dt_f1)
###Output
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Custom function precision is 0.7057220708446866, sklearn precision is 0.7057220708446866.
These values are equal: True
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Custom function recall is 0.7240415335463258, sklearn recall is 0.7240415335463258.
These values are equal: True
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Custom function f-score is 0.7147644391878573, sklearn f-score is 0.7147644391878573.
These values are equal: True
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
###Markdown
###Code
from collections import OrderedDict
import numpy as np
import pandas as pd
import re
###Output
_____no_output_____
###Markdown
This assignment deals with loading a simple text file into a Python structure, lists, arrays, and dataframes.a. Locate a movie script, play script, poem, or book of your choice in .txt format*. Project Gutenberg is a great resource for this if you're not sure where to start.b. Load the words of this file, one-by-one, into a one-dimensional, sequential Python list (i.e. the first word should be the first element in the list, while the last word should be the last element). It's up to you how to deal with special characters -- you can remove them manually, ignore them during the loading process, or even count them as words, for example.
###Code
def load_data(data_file):
"""
Reads txt file and returns a list of words
"""
# Compile regex pattern
regex_pattern = re.compile('[^A-Za-z0-9.:/-]+')
# Read txt file
with open(data_file) as f:
# Find all special characters in the word and replace it with empty string
# Remove leading and trailing special characters
return [re.sub(regex_pattern, '', word).lower().strip('.:/') for line in f for word in line.split()]
words_list = load_data('the_martian_circe.txt')
print(words_list[:100])
###Output
['the', 'project', 'gutenberg', 'ebook', 'of', 'the', 'martian', 'circe', 'by', 'raymond', 'f', 'jones', 'this', 'ebook', 'is', 'for', 'the', 'use', 'of', 'anyone', 'anywhere', 'in', 'the', 'united', 'states', 'and', 'most', 'other', 'parts', 'of', 'the', 'world', 'at', 'no', 'cost', 'and', 'with', 'almost', 'no', 'restrictions', 'whatsoever', 'you', 'may', 'copy', 'it', 'give', 'it', 'away', 'or', 're-use', 'it', 'under', 'the', 'terms', 'of', 'the', 'project', 'gutenberg', 'license', 'included', 'with', 'this', 'ebook', 'or', 'online', 'at', 'www.gutenberg.org', 'if', 'you', 'are', 'not', 'located', 'in', 'the', 'united', 'states', 'you', 'will', 'have', 'to', 'check', 'the', 'laws', 'of', 'the', 'country', 'where', 'you', 'are', 'located', 'before', 'using', 'this', 'ebook', 'title', 'the', 'martian', 'circe', 'author', 'raymond']
###Markdown
c. Use your list to create and print a two-column pandas data-frame with the following properties: i. Each index should mark the first occurrence of a unique word (independent of case) in the text. ii. The first column for each index should represent the word in question at that index iii. The second column should represent the number of times that particular word appears in the text.
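An alternative, more pandas-idiomatic construction (just a sketch, reusing `words_list` from the previous cell) gives the same two columns in first-occurrence order; the explicit loop below makes the same bookkeeping more visible.

```python
s = pd.Series(words_list)
counts = s.value_counts()                # occurrences of each unique word
order = s.drop_duplicates().tolist()     # unique words in order of first occurrence
df_alt = pd.DataFrame({'Word': order, 'Count': counts[order].values})
df_alt.head()
```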
###Code
def count_elements(list_of_elements):
"""
    Count word occurrences while preserving first-occurrence order
"""
data = OrderedDict()
for element in list_of_elements:
if element not in data:
data[element] = 1
else:
data[element] += 1
return data
counted_occurence = count_elements(words_list)
df = pd.DataFrame(counted_occurence.items(), columns=['Word', 'Count'])
df
###Output
_____no_output_____
###Markdown
d. The co-occurrence of two events represents the likelihood of the two occurring together. A simple example of co-occurrence in texts is a predecessor-successor relationship -- that is, the frequency with which one word immediately follows another. The word "cellar," for example, is commonly followed by "door." For this task, you are to construct a 2-dimensional predecessor-successor co-occurrence array as follows**: i. The row index corresponds to the word from the same index in part c.'s data-frame. ii. The column index likewise corresponds to the word in the same index in the data-frame. iii. The value in each array location represents the count of the number of times the word corresponding to the row index immediately precedes the word correponding to the column index in the text.
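For larger texts, the quadratic scans in the implementation below can optionally be replaced by mapping each word to its index once and accumulating with `np.add.at`; this is only a sketch of the idea, assuming the same `words_list` and unique-word ordering:

```python
import numpy as np

def co_occurrence_fast(words_list, columns):
    index = {w: i for i, w in enumerate(columns)}   # word -> row/column position
    ids = np.array([index[w] for w in words_list])
    matrix = np.zeros((len(columns), len(columns)), dtype=int)
    # row = predecessor, column = successor; repeated pairs accumulate correctly
    np.add.at(matrix, (ids[:-1], ids[1:]), 1)
    return matrix
```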
###Code
def generate_co_occurance_matrix(words_list, columns):
"""
    Generates the co-occurrence matrix:
    words_list: a list of words in the text
    columns: unique list of words
    """
    # Convert words list to numpy array
    words_array = np.array(words_list)
    # Convert columns list to numpy array
    columns_array = np.array(columns)
    # Generate a zero-valued integer matrix (np.int is deprecated; use the builtin int)
    matrix = np.zeros((len(columns), len(columns)), dtype=int)
# Iterate over unique words
for word in columns_array:
# find row position in matrix for the word
row_position = np.where(columns_array == word)[0][0]
# Find all occurances of this word in the list and iterate over it
for word_position in np.where(words_array == word)[0]:
if word_position < len(words_array) - 1:
# Find position of the successor word
col_position = np.where(columns_array == str(words_array[word_position + 1]))[0][0]
# Incremenet predecessor-successor co-occurrence by one
matrix[row_position, col_position] += 1
return matrix
matrix = generate_co_occurance_matrix(words_list, list(counted_occurence.keys()))
matrix
###Output
_____no_output_____
###Markdown
e. Based on the data-frame derived in part c. and array derived in part d., determine and print the following information:i. The first occurring word in the text.
###Code
df['Word'].iloc[0]
###Output
_____no_output_____
###Markdown
ii. The unique word that first occurs last within the text.
###Code
df['Word'].iloc[-1]
###Output
_____no_output_____
###Markdown
iii. The most common word
###Code
df[df['Count'] == df['Count'].max()]['Word'].iloc[0]
###Output
_____no_output_____
###Markdown
v. Words A and B such that B follows A more than any other combination of words.
###Code
# Find max value in matrix
max_occurence = np.amax(matrix)
# Find positions of max value
position = np.where(matrix == max_occurence)
print(df['Word'].iloc[position[0][0]], df['Word'].iloc[position[1][0]])
###Output
of the
###Markdown
vi. The word that most commonly follows the least common word
###Code
def find_most_common_word_follows_least(df, matrix):
# Find least common words
least_common_words_index = df.index[df['Count'] == df['Count'].min()].tolist()
# Create empty PandaFrame
result = pd.DataFrame(data = [], columns=['Predecessor', 'Successor',
'Occurence_predecessor','Occurence_successor' ])
# Iterate over least common words
for column_pos in least_common_words_index:
# Get one dimension matrix for that word
tmp_array = matrix[:,column_pos]
# Find all occurence with greater than 0
positive_occurence_list = tmp_array[tmp_array>0]
# Iterate over all occurence
for occurence in positive_occurence_list:
# Find index of the element
row_pos = np.where(tmp_array == occurence)[0][0]
# Generate new row
new_row = [df.iloc[row_pos]['Word'], df.iloc[column_pos]['Word'],
df.iloc[row_pos]['Count'], df.iloc[column_pos]['Count']]
# Append it to result
result.loc[len(result)] = new_row
return result
df_task4 = find_most_common_word_follows_least(df, matrix)
df_task4
df_task4[df_task4['Occurence_predecessor'] == df_task4['Occurence_predecessor'].max()]
###Output
_____no_output_____
###Markdown
###Code
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates
data = sns.load_dataset("iris")
###Output
_____no_output_____
###Markdown
**Task 1. Brief description of its value and possible applications.**The IRIS dataset has the following characteristics:

* 150 examples of Iris flowers
* The first four fields are features that characterize the flower examples. All of these fields hold float numbers representing flower measurements.
* The last column is the label, which represents the Iris species.
* Balanced class distribution, meaning that each category has an equal number of instances
* No missing values

One example of a possible application is for botanists to find an automated way to categorize each Iris flower they find: for instance, to classify based on photographs, or in our case based on the length and width measurements of their sepals and petals.
###Code
data
print(f'CLASS DISTRIBUTION:\n{data.groupby("species").size()}')
print(f'\nSHAPE: {data.shape}')
print(f'\nTOTAL MISSING VALUES:\n{data.isnull().sum()}\n')
###Output
CLASS DISTRIBUTION:
species
setosa 50
versicolor 50
virginica 50
dtype: int64
SHAPE: (150, 5)
TOTAL MISSING VALUES:
sepal_length 0
sepal_width 0
petal_length 0
petal_width 0
species 0
dtype: int64
###Markdown
_________ **Task 2. Summarize and visually report on the size of this data set, including labeled or non-labeled status** For all three species, the mean and median values of each feature are found to be quite close. This indicates that the data is nearly symmetrically distributed, with very few outliers.
###Code
data.groupby('species').agg(['mean', 'median'])
###Output
_____no_output_____
###Markdown
Standard deviation (or variance) is an indication of how widely the data is spread about the mean.
###Code
data.groupby('species').std()
###Output
_____no_output_____
###Markdown
The isolated points for each feature that can be seen in the box-plots below are the outliers in the data. Since these are very few in number, they won't have any significant impact on our analysis.
###Code
sns.set(style="ticks")
plt.figure(figsize=(12,10))
plt.subplot(2,2,1)
sns.boxplot(x='species',y='sepal_length',data=data)
plt.subplot(2,2,2)
sns.boxplot(x='species',y='sepal_width',data=data)
plt.subplot(2,2,3)
sns.boxplot(x='species',y='petal_length',data=data)
plt.subplot(2,2,4)
sns.boxplot(x='species',y='petal_width',data=data)
plt.show()
###Output
_____no_output_____
###Markdown
A scatter plot helps to analyze the relationship between two features plotted on the x and y axes.
###Code
sns.pairplot(data)
###Output
_____no_output_____
###Markdown
Next, we can make a correlation matrix to see how these features are correlated to each other, using a heatmap from the seaborn library. It can be observed that the petal measurements are highly correlated, while the sepal ones are uncorrelated. We can also see that petal length is highly correlated with sepal length, but not with sepal width.
###Code
plt.figure(figsize=(10,11))
sns.heatmap(data.corr(),annot=True, square = True)
plt.plot()
###Output
_____no_output_____
###Markdown
Another way to visualize the data is a parallel coordinates plot, which represents each row as a line. As can be seen below, the petal measurements separate the species better than the sepal ones.
###Code
parallel_coordinates(data, "species", color = ['blue', 'red', 'green']);
###Output
_____no_output_____
###Markdown
Now, we can plot a scatter plot of sepal length against sepal width to visualise the iris dataset. We can observe that the blue dots (setosa) are quite clearly separated from the red (versicolor) and green dots (virginica), while separating the red dots from the green dots might be a very difficult task given the two features available.
###Code
labels_names = { 'setosa': 'blue',
'versicolor': 'red',
'virginica': 'green'}
for species, color in labels_names.items():
x = data.loc[data['species'] == species]['sepal_length']
y = data.loc[data['species'] == species]['sepal_width']
plt.scatter(x, y, c=color)
plt.legend(labels_names.keys())
plt.xlabel('sepal_length')
plt.ylabel('sepal_width')
plt.show()
###Output
_____no_output_____
###Markdown
We can also visualise the data on different features such as petal width and petal length. In this case, the decision boundary between blue, green and red dots can be easily determined, which indicates that using all features for training is a good choice.
###Code
labels_names = { 'setosa': 'blue',
'versicolor': 'red',
'virginica': 'green'}
for species, color in labels_names.items():
x = data.loc[data['species'] == species]['petal_length']
y = data.loc[data['species'] == species]['petal_width']
plt.scatter(x, y, c=color)
plt.legend(labels_names.keys())
plt.xlabel('petal_length')
plt.ylabel('petal_width')
plt.show()
###Output
_____no_output_____
###Markdown
___________ **3. Propose and perform Deep Learning using this data set.**Report on your implementation as follows:* Justify your selection of techniques and platform* Explain your results and their applicability In our project we are using the Python language. There are two well-known libraries for deep learning: PyTorch and TensorFlow. Each library has its own higher-level APIs; for example, Keras is a high-level API for TensorFlow, while fastai is an API for PyTorch. The Iris classification problem is an example of supervised machine learning: the model is trained from examples that contain labels, and for our model we plan to use the Keras wrapper for TensorFlow. The deep learning will be performed in the following steps: * Data preprocessing * Model Building * Model Selection In **Data preprocessing**, we need to create data frames for features and labels, normalize the feature data by converting all values to a range between 0 and 1, and convert the species labels to a numerical representation and then to a binary (one-hot) encoding. Then, the data needs to be split into train and test data sets. **Phase 1: Data Preprocessing** Step 1: Create Dataframes for features and labels
###Code
import pandas as pd
from sklearn.preprocessing import LabelBinarizer, LabelEncoder
encoder = LabelBinarizer()
le=LabelEncoder()
seed = 42
data = sns.load_dataset("iris")
# Create X variable with four features
X = data.drop(['species'],axis=1)
# Convert species to int
Y_int = le.fit_transform(data['species'])
# Convert species int to binary representation
Y_binary = encoder.fit_transform(Y_int)
target_names = data['species'].unique()
Y = pd.DataFrame(data=Y_binary, columns=target_names)
print(f'\nNormalized X_test values:\n{X[:5]}')
print(f'\nEncoded Y_test:\n{Y[:5]}')
###Output
Normalized X_test values:
sepal_length sepal_width petal_length petal_width
0 5.1 3.5 1.4 0.2
1 4.9 3.0 1.4 0.2
2 4.7 3.2 1.3 0.2
3 4.6 3.1 1.5 0.2
4 5.0 3.6 1.4 0.2
Encoded Y_test:
setosa versicolor virginica
0 1 0 0
1 1 0 0
2 1 0 0
3 1 0 0
4 1 0 0
###Markdown
Step 2: Create training and testing datasets
###Code
from sklearn.model_selection import train_test_split
# Split data in train and test with percentage proportion 70%/30%
X_train,X_test,y_train,y_test = train_test_split(X, Y, test_size=0.30,random_state=seed)
print(f'X_train: {X_train.shape}, y_train: {y_train.shape}')
print(f'X_test : {X_test.shape}, y_test : {y_test.shape}')
###Output
X_train: (105, 4), y_train: (105, 3)
X_test : (45, 4), y_test : (45, 3)
###Markdown
Step 3: Normalize the feature data so that all values lie in a range from 0 to 1 (here each sample row is rescaled to unit norm with `sklearn.preprocessing.normalize`)
###Code
import pandas as pd
from sklearn import preprocessing
# Normalize X features, make all values between 0 and 1
X_train = pd.DataFrame(preprocessing.normalize(X_train),
columns=X_train.columns,
index=X_train.index)
X_test = pd.DataFrame(preprocessing.normalize(X_test),
columns=X_test.columns,
index=X_test.index)
print(f'Train sample:\n{X_train.head(4)},\nShape: {X_train.shape}')
print(f'\nTest sample:\n{X_test.head(4)},\nShape: {X_test.shape}')
###Output
Train sample:
sepal_length sepal_width petal_length petal_width
81 0.772429 0.337060 0.519634 0.140442
133 0.723660 0.321627 0.585820 0.172300
137 0.698048 0.338117 0.599885 0.196326
75 0.767857 0.349026 0.511905 0.162879,
Shape: (105, 4)
Test sample:
sepal_length sepal_width petal_length petal_width
73 0.736599 0.338111 0.567543 0.144905
18 0.806828 0.537885 0.240633 0.042465
118 0.706006 0.238392 0.632655 0.210885
78 0.733509 0.354530 0.550132 0.183377,
Shape: (45, 4)
###Markdown
**Phase 2: Model Building** Step 1: Build model The IRIS is a classification problem, we need to classify if an Iris flower is setosa, versicolor or virginia. Softmax activation function is commonly used in multi classification problems in the output layer, that would return the label with the highest probability. The **tf.keras.Sequential** model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, one tf.keras.layers.Dense layer with 8 nodes, two layers with 10 nodes, and an output layer with 3 nodes representing our label predictions. The first layer’s input_shape parameter corresponds to the number of features from the dataset which is equal 4. The **activation** function determines the output shape of each node in the layer. These non linearities are important, without them the model would be equivalent to a single layer. There are many tf.keras.activations such as tanh, like sigmoid or relu. In our two models we have decided to use "tahn" and "relu" and compare the performance. The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively. For our illustration we have used two models with 3 and 4 layers. Our expectation that the model with many layers should give a better result.
###Code
from keras.models import Sequential
from keras.layers import Dense
def model_with_3_layers():
model = Sequential()
model.add(Dense(27, input_dim=4, activation='relu', name='input_layer'))
model.add(Dense(9, activation='relu', name='layer_1'))
model.add(Dense(3, activation='softmax', name='output_layer'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
return model
def model_with_4_layers():
"""build the Keras model callback"""
model = Sequential()
model.add(Dense(8, input_dim=4, activation='tanh', name='layer_1'))
model.add(Dense(10, activation='tanh', name='layer_2'))
model.add(Dense(10, activation='tanh', name='layer_3'))
model.add(Dense(3, activation='softmax', name='output_layer'))
model.compile(loss="categorical_crossentropy",
optimizer="adam",
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Step 2: Create estimator We can also pass arguments in the construction of the KerasClassifier class that will be passed on to the fit() function internally used to train the neural network. Here, we pass the number of epochs as 200 and batch size as 20 to use when training the model.
###Code
from keras.wrappers.scikit_learn import KerasClassifier
estimator = KerasClassifier(
build_fn=model_with_4_layers,
epochs=200, batch_size=20,
verbose=0)
###Output
_____no_output_____
###Markdown
Step 3: Evaluate the model with k-fold cross-validation Now the neural network model can be evaluated on the training dataset. scikit-learn has excellent capabilities for evaluating models using a suite of techniques. The gold standard for evaluating machine learning models is k-fold cross-validation. Since the dataset is quite small, we pass 5 folds for cross-validation.
###Code
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
import tensorflow as tf
# Suppress Tensorflow warning
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
estimator = KerasClassifier(
build_fn=model_with_3_layers,
epochs=200, batch_size=20,
verbose=0)
kfold = KFold(n_splits=5, shuffle=True, random_state=seed)
results = cross_val_score(estimator, X_train, y_train, cv=kfold)
print(f'Model Performance:\nmean: {results.mean()*100:.2f}\
\nstd: {results.std()*100:.2f}')
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
estimator = KerasClassifier(
build_fn=model_with_4_layers,
epochs=200, batch_size=20,
verbose=0)
kfold = KFold(n_splits=5, shuffle=True, random_state=seed)
results = cross_val_score(estimator, X_train, y_train, cv=kfold)
print(f'Model Performance:\nmean: {results.mean()*100:.2f}\
\nstd: {results.std()*100:.2f}')
###Output
Model Performance:
mean: 98.10
std: 2.33
###Markdown
Phase 3: Model Selection For our illustration, two models have been used: one with 3 layers and another with 4 layers. We can observe that the accuracies are almost the same, but the loss value is much lower for the model with 4 layers. It can be concluded that adding more layers improves the accuracy and loss, while at the same time requiring more computational power.
###Code
md1 = model_with_3_layers()
md1.fit(X_train,
y_train,
epochs=200,
shuffle=True, # shuffle data randomly.
verbose=0 # this will tell keras to print more detailed info
)
# Validate the model with test dateset
test_error_rate = md1.evaluate(X_test, y_test, verbose=0)
print(f'{md1.metrics_names[1]}: {test_error_rate[1]*100:.2f}')
print(f'{md1.metrics_names[0]}: {test_error_rate[0]*100:.2f}')
md2 = model_with_4_layers()
md2.fit(X_train,
y_train,
epochs=200,
shuffle=True, # shuffle data randomly.
verbose=0 # this will tell keras to print more detailed info
)
# Validate the model with test dateset
test_error_rate = md2.evaluate(X_test, y_test, verbose=0)
print(f'{md2.metrics_names[1]}: {test_error_rate[1]*100:.2f}')
print(f'{md2.metrics_names[0]}: {test_error_rate[0]*100:.2f}')
###Output
accuracy: 95.56
loss: 11.00
###Markdown
STEP 4: Evaluate model performance on the test data
###Code
from sklearn.metrics import confusion_matrix
def evaluate_performance(actual, expected):
    """
    Accepts two lists with the actual (predicted) and expected (true) labels
"""
flowers = {0:'setosa',
1:'versicolor',
2:'virginica'}
print(f'Flowers in test set: \nSetosa={y_test["setosa"].sum()}\
\nVersicolor={y_test["versicolor"].sum()}\
\nVirginica={y_test["virginica"].sum()}')
for act,exp in zip(actual, expected):
if act != exp:
print(f'ERROR: {flowers[exp]} predicted as {flowers[act]}')
for i,model in enumerate((md1, md2), 1):
print(f'\nEVALUATION OF MODEL {i}')
predicted_targets = model.predict_classes(X_test)
true_targets = encoder.inverse_transform(y_test.values)
    evaluate_performance(predicted_targets, true_targets)
# Calculate the confusion matrix using sklearn.metrics
fig, ax =plt.subplots(1,1)
conf_matrix = confusion_matrix(true_targets, predicted_targets)
sns.heatmap(conf_matrix, annot=True, cmap='Blues', xticklabels=target_names,yticklabels=target_names)
print('\n')
###Output
EVALUATION OF MODEL 1
Flowers in test set:
Setosa=19
Versicolor=13
Virginica=13
ERROR: versicolor predicted as virginica
ERROR: virginica predicted as versicolor
EVALUATION OF MODEL 2
Flowers in test set:
Setosa=19
Versicolor=13
Virginica=13
ERROR: versicolor predicted as virginica
ERROR: virginica predicted as versicolor
###Markdown
From the confusion matrices above we can see that the two models make the same two mistakes on the test set: one versicolor flower is predicted as virginica and one virginica as versicolor, so their test accuracy is essentially identical. ___ **4. Find a publication or report that uses this same data set and compare its methodology and results to what you did** For the last task, we analyze the approach suggested by TensorFlow in the "Custom Training: walkthrough" report [1]. The same deep learning framework has been used. Features and labels are stored in tf.Tensor structures, whereas in our model all data was stored in a pandas.DataFrame. Label data is converted to a numerical representation, whereas on our side we decided to use a binary (one-hot) representation. The author decided not to normalize the feature data to the range from 0 to 1; normalization is generally preferable because it allows the model to learn faster. The suggested model uses a Sequential model, which is a linear stack of layers. The stack is built with 4 layers (input and output layers plus two Dense layers with 10 nodes each), which can be represented simply as 4/10/10/3. One of our models, the one that showed better accuracy and loss, contains 5 layers and can be represented as 4/8/10/10/3. For the inner layers the tutorial chose the relu activation function, which outputs 0 if the input is negative or zero and returns the value itself when it is positive (our better-performing model uses tanh). The tutorial model uses the **SparseCategoricalCrossentropy** loss, which takes the model's class probability predictions and the desired labels and returns the average loss across all examples; our model uses the equivalent categorical_crossentropy with one-hot labels. To minimize the loss, the tutorial uses the stochastic gradient descent algorithm with a learning rate of 0.01; in contrast, our model is built with Adam, which is an extension of stochastic gradient descent. Both models are run with almost the same number of epochs. It can be observed that both models return almost the same accuracy and loss. To summarize, both models performed similarly; in our approach the same result can be achieved by adding an extra inner layer, which helps improve the model but can be more resource-intensive.**References:**[1] - "Custom training: walkthrough", https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/customization/custom_training_walkthrough.ipynb#scrollTo=rwxGnsA92emp, Accessed 2018, The TensorFlow Authors
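As an appendix to this comparison, a rough sketch of the 4/10/10/3 architecture in the same Keras style used earlier in this notebook (our own paraphrase for comparison, not the code from [1]; the tutorial itself builds the model with tf.keras and SparseCategoricalCrossentropy on integer labels) might look like:

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

def tutorial_style_model():
    # 4 input features -> 10 -> 10 -> 3 output classes, relu inner layers, SGD with lr=0.01.
    # categorical_crossentropy is kept here so the model can be fit with our one-hot y_train;
    # newer Keras versions spell the learning-rate argument `learning_rate` instead of `lr`.
    model = Sequential()
    model.add(Dense(10, input_dim=4, activation='relu', name='layer_1'))
    model.add(Dense(10, activation='relu', name='layer_2'))
    model.add(Dense(3, activation='softmax', name='output_layer'))
    model.compile(loss='categorical_crossentropy',
                  optimizer=SGD(lr=0.01),
                  metrics=['accuracy'])
    return model
```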
###Code
# To convert colab notebook to pdf
!apt-get install texlive texlive-xetex texlive-latex-extra pandoc >/dev/null
!pip install pypandoc >/dev/null
from google.colab import drive
drive.mount('/content/drive')
!cp drive/My\ Drive/Colab\ Notebooks/HW2.ipynb ./
!jupyter nbconvert --to PDF "HW2.ipynb" 2>/dev/null
!cp ./HW2.pdf drive/My\ Drive/Colab\ Notebooks/
###Output
_____no_output_____
###Markdown
Homework 2: Multiwavelength Galactic Structure Part I Q1 and Q3: These are found in calculus.py and matrix.py. In addition, you will find unit tests for both in the tests/ directory.
###Code
%matplotlib inline
import numpy as np
from matplotlib.pyplot import *
from matplotlib import rc
import astropy.units as un
from astropy import constants as const
import sys
sys.path.append("../HW1/")
from interpolation import linear_interpolator
from calculus import derivative, integrate
from matrix import Matrix, make_matrix
# Make more readable plots
rc('text',usetex=True)
rc('font',**{'size':18})
rc('xtick',**{'labelsize':18})
rc('ytick',**{'labelsize':18})
rc('axes',**{'labelsize':24,'titlesize':24})
###Output
_____no_output_____
###Markdown
Q2: Using your chosen parameters of $c$, $v_{200}$, and $r_{200}$, numerically determine the mass enclosed $M_{\rm enc}(r)$ (and plot), the total mass of the dark matter halo $M$, and then also $M(r)$, the amount of mass in a little shell around $r \pm \Delta r$ (i.e., what does the mass profile look like?), and $dM(r)/dr$. Compare this by repeating, holding $c$ fixed and changing $v_{200}$. Then compare to the first by repeating, holding $v_{200}$ fixed and changing $c$. Feel free to use your previous tools from last time if you find that you need to (though I don't think you should); in the future you’ll be able to use built-in functions for those. For simplicity, I will only solve for the form given in the problem, with $c$ = 15, $v_{200}$ = 160 km/s, and $r_{200}$ = 230 kpc. These were originally calculated the other way around from $M_{200}$ and 200 times the critical density of the Universe, $\rho_{\rm crit}$.
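For reference, the relation used below to turn the rotation curve into a mass profile is the circular-velocity relation for a spherically symmetric mass distribution, $$M_{\rm enc}(r) = \frac{r\,v_c^2(r)}{G}, \qquad \frac{dM_{\rm enc}}{dr} = 4\pi r^2 \rho(r).$$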
###Code
H0 = 70.0*un.km/un.s/un.Mpc
rho_crit = (3*H0**2/(8*np.pi*const.G)).to(un.g/un.cm**3) #matches above
rho_0 = 200*rho_crit
M_200 = 1.4e12*un.Msun #https://arxiv.org/pdf/1501.01788.pdf
r_200 = (M_200/((4*np.pi/3) * 200*rho_crit.to(un.Msun/un.kpc**3)))**(1.0/3)
v_200 = np.sqrt(const.G*M_200/r_200).to(un.km/un.s)
print(M_200, r_200, v_200)
c = 15.0
###Output
1400000000000.0 solMass 230.76202860993436 kpc 161.53342002695413 km / s
###Markdown
Define $v_c$ function from$$\left[\frac{v_c(r)}{v_{200}}\right]^2 = \frac{1}{x} \frac{\ln(1+cx) - \frac{cx}{1+cx}}{\ln(1+c)-\frac{c}{1+c}}$$where $c$ is a dimensionless "concentration" factor, $v_{200}$ is the velocity at a radius of $r_{200}$, where the density reaches 200 times the critical density of the Universe, and $x \equiv r/r_{200}$.
###Code
def v_c(r, c, r_200, v_200):
"""
Returns the circular velocity given the
concentration factor c and the velocity v_200
at a radius r_200
Parameters
==========
r : float
radius
c : float
concentration factor
r_200 : float
"Virial radius" where \rho = 200\rho_crit
v_200 : float
Velocity at r_200
Returns
=======
v_c : float
The circular velocity
"""
x = r/r_200
cx = c*x
numer = np.log(1+cx) - cx/(1+cx)
denom = x*(np.log(1+c) - c/(1+c))
return v_200* np.sqrt(numer/denom)
###Output
_____no_output_____
###Markdown
Make plot to show $v_c$ vs $r$ (in the assignment, `rs` only goes to 300)
###Code
rs = np.arange(0.1, 500, 0.1)
vs = np.vectorize(v_c)(rs, c, r_200.value, v_200.value)
plot(rs, vs)
ylim(0, 250)
xlabel(r'$r~\mathrm{(kpc)}$')
ylabel(r'$v_c~(\mathrm{km/s})$')
show()
###Output
_____no_output_____
###Markdown
Determine mass enclosed and total mass of the halo
###Code
def M_enc(r, c, r_200, v_200):
""" Mass enclosed given the same parameters as v_c() """
return ((r*un.kpc)*(v_c(r, c, r_200, v_200)*un.km/un.s)**2 / const.G).to(un.solMass).value
M_encs = np.vectorize(M_enc)(rs, c, r_200.value, v_200.value)
print("M_total = %0.2f x 10^12 Solar Masses"%(M_encs[-1]/1e12))
plot(rs, M_encs/1e12)
xlabel(r'$r~\mathrm{(kpc)}$')
ylabel(r'$M_{\rm enc}~(10^{12}~M_\odot)$')
show()
###Output
M_total = 1.94 x 10^12 Solar Masses
###Markdown
Plot $M(r)$ in little shells (but change the shell size to 0.5 kpc)
###Code
rs = np.arange(0.5, 500, 0.5)
M_encs = np.vectorize(M_enc)(rs, c, r_200.value, v_200.value)
dM_encs = np.diff(M_encs)
# One could also use the full integrator here
# Below is essentially just the midpoint rule
plot((rs[1:]+rs[:-1])/2, dM_encs/1e9)
xlabel(r'$r~\mathrm{(kpc)}$')
ylabel(r'$M~(10^{9}~M_\odot)$')
show()
###Output
_____no_output_____
###Markdown
Plot $dM(r)/dr$. I am using the `linear_interpolator()` function from HW1.
###Code
x = (rs[1:]+rs[:-1])/2
y = dM_encs/1e9
func = linear_interpolator(x, y)
N = len(x)
deriv = [0 for i in range(N)]
for i in range(1, N-1):
deriv[i] = derivative(func, x[i], 0.00001)
newx = x[1:-1]
deriv = deriv[1:-1]
plot(newx, deriv)
xlabel(r'$r~\mathrm{(kpc)}$')
ylabel(r'$dM/dr~(10^{9}~M_\odot / {\rm kpc})$')
show()
###Output
_____no_output_____ |
MachineLearning/07.IntroductionToNeuralNetworks/Intro to Neural Networks Demos.ipynb | ###Markdown
Introduction to Neural Networks Live Demos
###Code
from sklearn.datasets import load_digits
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
import pandas as pd
import matplotlib.pyplot as plt

digits_data, digits_labels = load_digits().data, load_digits().target
digits_data.shape
digits_labels.shape
pd.Series(digits_labels).groupby(digits_labels).size()
for i in range(10):
plt.imshow(digits_data[i].reshape(8, 8), cmap = "gray")
plt.title("Label: {}".format(digits_labels[i]))
plt.show()
digits_data = MinMaxScaler().fit_transform(digits_data)
digits_data_train, digits_data_test, digits_labels_train, digits_labels_test = train_test_split(
digits_data,
digits_labels,
train_size = 0.8,
stratify = digits_labels)
nn = MLPClassifier(hidden_layer_sizes = (10,))
nn.fit(digits_data_train, digits_labels_train)
def get_scores(estimator):
print("Train: ", estimator.score(digits_data_train, digits_labels_train))
print("Test: ", estimator.score(digits_data_test, digits_labels_test))
get_scores(nn)
nn.coefs_[0].shape, nn.coefs_[1].shape,
len(nn.intercepts_)
deep_nn = MLPClassifier(hidden_layer_sizes = (3, 5, 3))
deep_nn.fit(digits_data_train, digits_labels_train)
get_scores(deep_nn)
###Output
Train: 0.6325678496868476
Test: 0.5972222222222222
|
Phone Contacts Analysis.ipynb | ###Markdown
IntroductionI was curious which network provider's numbers appear most often in my phone contacts log, so I ran this analysis to find out and decide which phone call bundle I should pick!
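Once the CSV is read below into `contacts`, a more compact pandas version of the same grouping (just a sketch of an alternative, relying on the same third-digit convention) would be:

```python
# Classify each number by its third digit and count how many belong to each provider
digit = contacts['Phone Number'].str[3]
provider = digit.map({'0': 'Vodafone', '1': 'Etisalat', '2': 'Mobinil'}).fillna('Other')
print(provider.value_counts())
```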
###Code
# import pandas for dataframe operations
import pandas as pd
#import matplotlib for visualizing results
import matplotlib.pyplot as plt
# reading phone contacts as csv file
contacts = pd.read_csv('Phone Contacts.csv')
# displaying the dataframe
contacts.head()
# dividing the contacts numbers to the known providers in my country
# recognize each provider by the third digit in the phone number
voda = []
eti = []
mob = []
other = []
for i in range(len(contacts)):
if contacts['Phone Number'][i][3] == '0':
voda.append(contacts['Phone Number'][i])
elif contacts['Phone Number'][i][3] == '1':
eti.append(contacts['Phone Number'][i])
elif contacts['Phone Number'][i][3] == '2':
mob.append(contacts['Phone Number'][i])
else:
other.append(contacts['Phone Number'][i])
# Plotting the results to see which service provider exists the most!
%matplotlib inline
labels = 'Vodafone', 'Etisalat', 'Mobinil', 'Other'
sizes = [len(voda), len(eti), len(mob), len(other)]
colors = ['red', 'yellowgreen', 'orange', 'purple']
plt.pie(sizes, labels=labels, colors=colors,
autopct='%1.1f%%', shadow=True, startangle=140)
plt.title("Phone Contact Numbers Analysis")
plt.show()
###Output
_____no_output_____ |
notebooks/opencv-basics.ipynb | ###Markdown
Data prep for image recognitionThe purpose of this short notebook is to introduce the most basic features of the OpenCV library, focusing on features that will make it possible to use intelligent APIs on image data. We'll then see how to use a pretrained object detection model to find real-world objects in images.
###Code
import cv2
import numpy as np
###Output
_____no_output_____
###Markdown
The first thing we'll try is reading an image from a file. OpenCV makes it easy to decode popular image formats, and this notebook has access to an image file we can read.
###Code
def loadImage(f):
""" Load an image and convert it from BGR color space
(which OpenCV uses) to RGB color space (which pyplot expects) """
return cv2.cvtColor(cv2.imread(f, cv2.IMREAD_COLOR), cv2.COLOR_BGR2RGB)
img = loadImage("otto.jpg")
###Output
_____no_output_____
###Markdown
Working with images as arraysThis will get us a `numpy` array containing the pixels from a picture of a confused schnauzer who did not expect to wind up unable to get out of the clothes basket. We can look at the size of the array:
###Code
img.shape
###Output
_____no_output_____
###Markdown
We can examine the image itself by plotting it.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(img)
###Output
_____no_output_____
###Markdown
While our focus is on using pretrained models, if we were training a model, it may be useful to transform, blur, or resize images in order to generate more training data from a few images. Since our images are `numpy` arrays, this is relatively straightforward in general, but OpenCV provides functions to make these tasks even easier. We'll see how to- blur an input image with a 15x15 box blur,- resize an image and interpolate between pixels in the source data, and- rotate an image without calculating a transformation matrixFirst, let's look at box blur:
###Code
plt.imshow(cv2.blur(img, (15,15)))
###Output
_____no_output_____
###Markdown
We can also scale the image by a factor of 3 on both axes (notice the difference in the axes on the plotted image, even though the size doesn't change).
###Code
plt.imshow(cv2.resize(img, None, fx=3, fy=3, interpolation=cv2.INTER_CUBIC))
###Output
_____no_output_____
###Markdown
It's also possible to stretch the image by scaling along axes differently:
###Code
plt.imshow(cv2.resize(img, None, fx=2.5, fy=3, interpolation=cv2.INTER_CUBIC))
###Output
_____no_output_____
###Markdown
We can also rotate the image. Recall that rotation is an affine tranformation on image matrices. OpenCV provides a function to calculate the transformation matrix, given a point to rotate around, an angle of rotation, and a scaling factor. Here we'll rotate the image around its center by 15 degrees while scaling by 1.3x.
###Code
rows, cols, _ = img.shape
center = (cols / 2, rows / 2)
angle = 15 # degrees
scale = 1.3
rotationMatrix = cv2.getRotationMatrix2D(center, angle, scale)
plt.imshow(cv2.warpAffine(img, rotationMatrix, (cols, rows)))
###Output
_____no_output_____
###Markdown
Working with image data in byte arraysIn many non-batch applications, we won't be actually processing _files_; instead, we'll be dealing with binary data, whether passed as a base64-encoded string to a HTTP request or stored in a blob as part of structured data on a stream. OpenCV is able to decode this raw binary data just as it is able to decode files; this last part of the notebook will show you how to do it.We'll start by getting a Python `bytearray` with the contents of a file. Notice that, while we have a JPEG file, we aren't storing the file type anywhere.
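For the base64 case mentioned above, the only extra step is unwrapping the string before handing the bytes to OpenCV; a minimal sketch, where `b64_string` is a stand-in for the encoded payload:

```python
import base64
import numpy as np
import cv2

raw = base64.b64decode(b64_string)                 # b64_string: hypothetical base64-encoded JPEG/PNG
arr = np.frombuffer(raw, dtype=np.uint8)           # flat array of bytes
img_from_request = cv2.cvtColor(cv2.imdecode(arr, cv2.IMREAD_COLOR), cv2.COLOR_BGR2RGB)
```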
###Code
with open("otto.jpg", "rb") as f:
img_bytes = bytearray(f.read())
###Output
_____no_output_____
###Markdown
Now that we have a `bytearray` of the file's contents, we'll convert that into a flat NumPy array:
###Code
imgarr = np.asarray(img_bytes, dtype=np.uint8)
imgarr
###Output
_____no_output_____
###Markdown
The OpenCV `imdecode` function will inspect this flat array and parse it as an image, inferring the right type and dimensions and returning a multidimensional array with an appropriate shape.
###Code
# decode byte array as image
img2 = cv2.imdecode(imgarr, cv2.IMREAD_COLOR)
# convert BGR to RGB
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB)
###Output
_____no_output_____
###Markdown
We then have a multidimensional array that we can use just as we did the image we read from a file.
###Code
plt.imshow(img2)
###Output
_____no_output_____
###Markdown
Image intensitiesWe can also plot histograms for each channel of the image. (This example code is taken from the [OpenCV documentation](https://docs.opencv.org/3.1.0/d1/db7/tutorial_py_histogram_begins.html).) You can see that the image of the dog is underexposed.
###Code
for i, color in enumerate(["r", "g", "b"]):
histogram = cv2.calcHist([img], [i], None, [256], [0, 256])
plt.plot(histogram, color=color)
plt.xlim([0, 256])
plt.show()
###Output
_____no_output_____
###Markdown
Object detection with pretrained modelsNow that we've seen how to use some of the basic capabilities of OpenCV to parse image data into a matrix of pixels -- and then to perform useful image transformations and analyses on this matrix -- we're ready to see how to use a pretrained model to identify objects in real images.We'll use a pretrained [YOLO](https://pjreddie.com/darknet/yolo/) ("you only look once") model and we'll load and score that model with the [darkflow](https://github.com/thtrieu/darkflow/) library, which is built on TensorFlow.One of the key themes of this workshop is that you don't need a deep understanding of the techniques behind off-the-shelf models for language processing or image recognition in order to make use of them in your applications, but YOLO is a cool technique, so if you want to learn more about it, here's where to get started:- [this paper](https://pjreddie.com/media/files/papers/yolo_1.pdf) explains the first version of YOLO and the basic technique,- [this presentation](https://www.youtube.com/watch?v=NM6lrxy0bxs) presents the basics of the paper in a thirteen-minute video, and- [this paper](http://homepages.inf.ed.ac.uk/ckiw/postscript/ijcv_voc09.pdf) provides a deeper dive into object detection (including some details on the mAP metric for evaluating classifier quality)YOLO is so-called because previous object-detection techniques repeatedly ran image classifiers on multiple overlapping windows of an image; by contrast, YOLO "only looks once," identifying image regions that might contain an interesting object and then identifying which objects those regions might contain in a single pass. It can be much faster than classic approaches; indeed, it can run in real time or faster with GPU acceleration. Loading our modelWe'll start by loading a pretrained model architecture and model weights from files:
###Code
from darkflow.net.build import TFNet
options = {"model": "cfg/yolo.cfg", "load": "/data/yolo.weights", "threshold" : 0.1}
yolo = TFNet(options)
###Output
_____no_output_____
###Markdown
Our next step is to use the model to identify some objects in an image. We'll start with the dog image. The `return_predict` method will return a list of predictions, each with a visual object class, a confidence score, and a bounding box.
###Code
predictions = yolo.return_predict(img)
predictions
###Output
_____no_output_____
###Markdown
To be fair, most dogs spend a lot of time on sofas.It is often useful to visualize what parts of the image were identified as objects. We can use OpenCV to annotate the bounding boxes of each identified object in the image with the `cv2.rectangle` function. Since this is destructive, we'll work on a copy of the image.
###Code
def annotate(img, predictions, thickness=None):
""" Copies the supplied image and annotates it with the bounding
boxes of each identified object """
annotated_img = np.copy(img)
if thickness is None:
thickness = int(max(img.shape[0], img.shape[1]) / 100)
for prediction in predictions:
tl = prediction["topleft"]
topleft = (tl["x"], tl["y"])
br = prediction["bottomright"]
bottomright = (br["x"], br["y"])
# draw a white rectangle around the identified object
white = (255,255,255)
cv2.rectangle(annotated_img, topleft, bottomright, color=white, thickness=thickness)
return annotated_img
plt.imshow(annotate(img, predictions))
###Output
_____no_output_____
###Markdown
Trying it out with other imagesWe can try this technique out with other images as well. The test images we have are from the [Open Images Dataset](https://storage.googleapis.com/openimages/web/index.html) and are licensed under CC-BY-SA. Some of these results are impressive and some are unintentionally hilarious! Try it out and see if you can figure out why certain false positives show up.
###Code
from ipywidgets import interact
from os import listdir
def predict(imageFile):
image = loadImage("/data/images/" + imageFile)
predictions = yolo.return_predict(image)
plt.imshow(annotate(image, predictions, thickness=5))
return predictions
interact(predict, imageFile = listdir("/data/images/"))
###Output
_____no_output_____ |
docs/examples/2_Constraints.ipynb | ###Markdown
The importance of constraintsConstraints determine which potential adversarial examples are valid inputs to the model. When determining the efficacy of an attack, constraints are everything. After all, an attack that looks very powerful may just be generating nonsense. Or, perhaps more nefariously, an attack may generate a real-looking example that changes the original label of the input. That's why you should always clearly define the *constraints* your adversarial examples must meet. Classes of constraintsTextAttack evaluates constraints using methods from three groups:- **Overlap constraints** determine if a perturbation is valid based on character-level analysis. For example, some attacks are constrained by edit distance: a perturbation is only valid if it perturbs some small number of characters (or fewer).- **Grammaticality constraints** filter inputs based on syntactical information. For example, an attack may require that adversarial perturbations do not introduce grammatical errors.- **Semantic constraints** try to ensure that the perturbation is semantically similar to the original input. For example, we may design a constraint that uses a sentence encoder to encode the original and perturbed inputs, and enforce that the sentence encodings be within some fixed distance of one another. (This is what happens in subclasses of `textattack.constraints.semantics.sentence_encoders`.) A new constraintTo add our own constraint, we need to create a subclass of `textattack.constraints.Constraint`. We can implement one of two functions, either `_check_constraint` or `_check_constraint_many`:- `_check_constraint` determines whether candidate `TokenizedText` `transformed_text`, transformed from `current_text`, fulfills a desired constraint. It returns either `True` or `False`.- `_check_constraint_many` determines whether each of a list of candidates `transformed_texts` fulfill the constraint relative to `current_text`. This is here in case your constraint can be vectorized. If not, just implement `_check_constraint`, and `_check_constraint` will be executed for each `(transformed_text, current_text)` pair. A custom constraintFor fun, we're going to see what happens when we constrain an attack to only allow perturbations that substitute out a named entity for another. In linguistics, a **named entity** is a proper noun, the name of a person, organization, location, product, etc. Named Entity Recognition is a popular NLP task (and one that state-of-the-art models can perform quite well). NLTK and Named Entity Recognition**NLTK**, the Natural Language Toolkit, is a Python package that helps developers write programs that process natural language. NLTK comes with predefined algorithms for lots of linguistic tasks– including Named Entity Recognition.First, we're going to write a constraint class. In the `_check_constraint` method, we're going to use NLTK to find the named entities in both `current_text` and `transformed_text`. We will only return `True` (that is, our constraint is met) if `transformed_text` has substituted one named entity in `current_text` for another.Let's import NLTK and download the required modules:
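As a minimal sketch of the shape such a subclass takes (based only on the description above; exact base-class details such as constructor arguments, and the `.words` attribute used here, are assumptions that may differ between TextAttack versions):

```python
from textattack.constraints import Constraint

class SameNumberOfWords(Constraint):
    """Toy example: only accept perturbations that keep the word count unchanged."""
    def _check_constraint(self, transformed_text, current_text):
        # Return True if `transformed_text` is an acceptable perturbation of `current_text`.
        return len(transformed_text.words) == len(current_text.words)
```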
###Code
import nltk
nltk.download('punkt') # The NLTK tokenizer
nltk.download('maxent_ne_chunker') # NLTK named-entity chunker
nltk.download('words') # NLTK list of words
###Output
[nltk_data] Downloading package punkt to /u/edl9cy/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package maxent_ne_chunker to
[nltk_data] /u/edl9cy/nltk_data...
[nltk_data] Package maxent_ne_chunker is already up-to-date!
[nltk_data] Downloading package words to /u/edl9cy/nltk_data...
[nltk_data] Package words is already up-to-date!
###Markdown
NLTK NER ExampleHere's an example of using NLTK to find the named entities in a sentence:
###Code
sentence = ('In 2017, star quarterback Tom Brady led the Patriots to the Super Bowl, '
'but lost to the Philadelphia Eagles.')
# 1. Tokenize using the NLTK tokenizer.
tokens = nltk.word_tokenize(sentence)
# 2. Tag parts of speech using the NLTK part-of-speech tagger.
tagged = nltk.pos_tag(tokens)
# 3. Extract entities from tagged sentence.
entities = nltk.chunk.ne_chunk(tagged)
print(entities)
###Output
(S
In/IN
2017/CD
,/,
star/NN
quarterback/NN
(PERSON Tom/NNP Brady/NNP)
led/VBD
the/DT
(ORGANIZATION Patriots/NNP)
to/TO
the/DT
(ORGANIZATION Super/NNP Bowl/NNP)
,/,
but/CC
lost/VBD
to/TO
the/DT
(ORGANIZATION Philadelphia/NNP Eagles/NNP)
./.)
###Markdown
It looks like `nltk.chunk.ne_chunk` gives us an `nltk.tree.Tree` object where named entities are also `nltk.tree.Tree` objects within that tree. We can take this a step further and grab the named entities from the tree of entities:
###Code
# 4. Filter entities to just named entities.
named_entities = [entity for entity in entities if isinstance(entity, nltk.tree.Tree)]
print(named_entities)
###Output
[Tree('PERSON', [('Tom', 'NNP'), ('Brady', 'NNP')]), Tree('ORGANIZATION', [('Patriots', 'NNP')]), Tree('ORGANIZATION', [('Super', 'NNP'), ('Bowl', 'NNP')]), Tree('ORGANIZATION', [('Philadelphia', 'NNP'), ('Eagles', 'NNP')])]
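###Markdown
As a quick aside, each entity subtree still carries its `(word, POS)` leaves, so joining them back into readable strings is a one-liner (the variable name below is just illustrative):
###Code
# Join the leaves of each named-entity subtree into a single string, e.g. 'Tom Brady'.
entity_strings = [" ".join(word for word, tag in entity.leaves()) for entity in named_entities]
print(entity_strings)
###Output
_____no_output_____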
###Markdown
Caching with `@functools.lru_cache`A little-known feature of Python 3 is `functools.lru_cache`, a decorator that allows users to easily cache the results of a function in an LRU cache. We're going to be using the NLTK library quite a bit to tokenize, parse, and detect named entities in sentences. These sentences might repeat themselves. As such, we'll use this decorator to cache named entities so that we don't have to perform this expensive computation multiple times. Putting it all together: getting a list of Named Entity Labels from a sentenceNow that we know how to tokenize, parse, and detect named entities using NLTK, let's put it all together into a single helper function. Later, when we implement our constraint, we can query this function to easily get the entity labels from a sentence. We can even use `@functools.lru_cache` to try and speed this process up.
###Code
import functools
@functools.lru_cache(maxsize=2**14)
def get_entities(sentence):
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)
# Setting `binary=True` makes NLTK return all of the named
# entities tagged as NNP instead of detailed tags like
#'Organization', 'Geo-Political Entity', etc.
entities = nltk.chunk.ne_chunk(tagged, binary=True)
return entities.leaves()
###Output
_____no_output_____
###Markdown
And let's test our function to make sure it works:
###Code
sentence = 'Jack Black starred in the 2003 film classic "School of Rock".'
get_entities(sentence)
###Output
_____no_output_____
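###Markdown
Since `get_entities` is wrapped in `functools.lru_cache`, repeated calls on the same sentence should be served straight from the cache. A quick way to confirm this is `cache_info()`:
###Code
# Call get_entities again on the same sentence; this lookup should be a cache hit.
get_entities(sentence)
print(get_entities.cache_info())  # e.g. CacheInfo(hits=..., misses=..., maxsize=16384, currsize=...)
###Output
_____no_output_____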
###Markdown
We flattened the tree of entities, so the return format is a list of `(word, entity type)` tuples. For non-entities, the `entity_type` is just the part of speech of the word. `'NNP'` is the indicator of a named entity (a proper noun, according to NLTK). Looks like we identified three named entities here: 'Jack' and 'Black', 'School', and 'Rock', with 'Rock' tagged as a 'GPE'. (Seems that the labeler thinks Rock is the name of a place, a city or something.) Whatever technique NLTK uses for named entity recognition may be a bit rough, but it did a pretty decent job here! Creating our NamedEntityConstraintNow that we know how to detect named entities using NLTK, let's create our custom constraint.
###Code
from textattack.constraints import Constraint
class NamedEntityConstraint(Constraint):
""" A constraint that ensures `transformed_text` only substitutes named entities from `current_text` with other named entities.
"""
def _check_constraint(self, transformed_text, current_text):
transformed_entities = get_entities(transformed_text.text)
current_entities = get_entities(current_text.text)
# If there aren't named entities, let's return False (the attack
# will eventually fail).
if len(current_entities) == 0:
return False
if len(current_entities) != len(transformed_entities):
# If the two sentences have a different number of entities, then
# they definitely don't have the same labels. In this case, the
# constraint is violated, and we return False.
return False
else:
# Here we compare all of the words, in order, to make sure that they match.
# If we find two words that don't match, this means a word was swapped
# between `current_text` and `transformed_text`. That word must be a named entity to fulfill our
# constraint.
current_word_label = None
transformed_word_label = None
for (word_1, label_1), (word_2, label_2) in zip(current_entities, transformed_entities):
if word_1 != word_2:
# Finally, make sure that words swapped between `x` and `x_adv` are named entities. If
# they're not, then we also return False.
if (label_1 not in ['NNP', 'NE']) or (label_2 not in ['NNP', 'NE']):
return False
# If we get here, all of the labels match up. Return True!
return True
###Output
_____no_output_____
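###Markdown
Before wiring the constraint into a full attack, it's worth sanity-checking it by hand. This is just a minimal sketch: it assumes your TextAttack version exposes `textattack.shared.AttackedText` (older releases used `TokenizedText` for the same role), and the sentences are made up for illustration:
###Code
from textattack.shared import AttackedText

constraint = NamedEntityConstraint(False)  # compare against the previous text, as in the attack below
current = AttackedText("Tom Brady led the Patriots to the Super Bowl.")
entity_swap = AttackedText("Tom Brady led the Eagles to the Super Bowl.")  # named entity -> named entity
noun_swap = AttackedText("Tom Brady led the team to the Super Bowl.")      # named entity -> common noun
# Calling the underscore method directly is fine for a quick check.
print(constraint._check_constraint(entity_swap, current))  # expected: True ('Eagles' is tagged NNP)
print(constraint._check_constraint(noun_swap, current))    # expected: False ('team' is tagged NN)
###Output
_____no_output_____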
###Markdown
Testing our constraintWe need to create an attack and a dataset to test our constraint on. We went over all of this in the transformations tutorial, so let's gloss over this part for now.
###Code
# Import the model
import transformers
from textattack.models.tokenizers import AutoTokenizer
model = transformers.AutoModelForSequenceClassification.from_pretrained("textattack/albert-base-v2-yelp-polarity")
model.tokenizer = AutoTokenizer("textattack/albert-base-v2-yelp-polarity")
# Create the goal function using the model
from textattack.goal_functions import UntargetedClassification
goal_function = UntargetedClassification(model)
# Import the dataset
from textattack.datasets import HuggingFaceNlpDataset
dataset = HuggingFaceNlpDataset("yelp_polarity", None, "test")
from textattack.transformations import WordSwapEmbedding
from textattack.search_methods import GreedySearch
from textattack.shared import Attack
from textattack.constraints.pre_transformation import RepeatModification, StopwordModification
# We're going to use the `WordSwapEmbedding` transformation. Using the default settings, this
# will try substituting words with their neighbors in the counter-fitted embedding space.
transformation = WordSwapEmbedding(max_candidates=15)
# We'll use the greedy search method again
search_method = GreedySearch()
# Our constraints will be the same as Tutorial 1, plus the named entity constraint
constraints = [RepeatModification(),
StopwordModification(),
NamedEntityConstraint(False)]
# Now, let's make the attack using these parameters.
attack = Attack(goal_function, constraints, transformation, search_method)
print(attack)
###Output
Attack(
(search_method): GreedySearch
(goal_function): UntargetedClassification
(transformation): WordSwapEmbedding(
(max_candidates): 15
(embedding_type): paragramcf
)
(constraints):
(0): NamedEntityConstraint(
(compare_against_original): False
)
(1): RepeatModification
(2): StopwordModification
(is_black_box): True
)
###Markdown
Now, let's use our attack. We're going to attack samples until we achieve 5 successes. (There's a lot to check here, and since we're using a greedy search over all potential word swap positions, each sample will take a few minutes. This will take a few hours to run on a single core.)
###Code
from textattack.loggers import CSVLogger # tracks a dataframe for us.
from textattack.attack_results import SuccessfulAttackResult
results_iterable = attack.attack_dataset(dataset)
logger = CSVLogger(color_method='html')
num_successes = 0
while num_successes < 5:
result = next(results_iterable)
if isinstance(result, SuccessfulAttackResult):
logger.log_attack_result(result)
num_successes += 1
print(f'{num_successes} of 5 successes complete.')
###Output
1 of 5 successes complete.
2 of 5 successes complete.
3 of 5 successes complete.
4 of 5 successes complete.
5 of 5 successes complete.
###Markdown
Now let's visualize our 5 successes in color:
###Code
import pandas as pd
pd.options.display.max_colwidth = 480 # increase column width so we can actually read the examples
from IPython.core.display import display, HTML
display(HTML(logger.df[['original_text', 'perturbed_text']].to_html(escape=False)))
###Output
_____no_output_____
###Markdown
The importance of constraintsConstraints determine which potential adversarial examples are valid inputs to the model. When determining the efficacy of an attack, constraints are everything. After all, an attack that looks very powerful may just be generating nonsense. Or, perhaps more nefariously, an attack may generate a real-looking example that changes the original label of the input. That's why you should always clearly define the *constraints* your adversarial examples must meet. Classes of constraintsTextAttack evaluates constraints using methods from three groups:- **Overlap constraints** determine if a perturbation is valid based on character-level analysis. For example, some attacks are constrained by edit distance: a perturbation is only valid if it perturbs some small number of characters (or fewer).- **Grammaticality constraints** filter inputs based on syntactical information. For example, an attack may require that adversarial perturbations do not introduce grammatical errors.- **Semantic constraints** try to ensure that the perturbation is semantically similar to the original input. For example, we may design a constraint that uses a sentence encoder to encode the original and perturbed inputs, and enforce that the sentence encodings be within some fixed distance of one another. (This is what happens in subclasses of `textattack.constraints.semantics.sentence_encoders`.) A new constraintTo add our own constraint, we need to create a subclass of `textattack.constraints.Constraint`. We can implement one of two functions, either `_check_constraint` or `_check_constraint_many`:- `_check_constraint` determines whether candidate `TokenizedText` `transformed_text`, transformed from `current_text`, fulfills a desired constraint. It returns either `True` or `False`.- `_check_constraint_many` determines whether each of a list of candidates `transformed_texts` fulfill the constraint relative to `current_text`. This is here in case your constraint can be vectorized. If not, just implement `_check_constraint`, and `_check_constraint` will be executed for each `(transformed_text, current_text)` pair. A custom constraintFor fun, we're going to see what happens when we constrain an attack to only allow perturbations that substitute out a named entity for another. In linguistics, a **named entity** is a proper noun, the name of a person, organization, location, product, etc. Named Entity Recognition is a popular NLP task (and one that state-of-the-art models can perform quite well). NLTK and Named Entity Recognition**NLTK**, the Natural Language Toolkit, is a Python package that helps developers write programs that process natural language. NLTK comes with predefined algorithms for lots of linguistic tasks– including Named Entity Recognition.First, we're going to write a constraint class. In the `_check_constraints` method, we're going to use NLTK to find the named entities in both `current_text` and `transformed_text`. We will only return `True` (that is, our constraint is met) if `transformed_text` has substituted one named entity in `current_text` for another.Let's import NLTK and download the required modules:
###Code
import nltk
nltk.download('punkt') # The NLTK tokenizer
nltk.download('maxent_ne_chunker') # NLTK named-entity chunker
nltk.download('words') # NLTK list of words
###Output
[nltk_data] Downloading package punkt to /u/edl9cy/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package maxent_ne_chunker to
[nltk_data] /u/edl9cy/nltk_data...
[nltk_data] Package maxent_ne_chunker is already up-to-date!
[nltk_data] Downloading package words to /u/edl9cy/nltk_data...
[nltk_data] Package words is already up-to-date!
###Markdown
NLTK NER ExampleHere's an example of using NLTK to find the named entities in a sentence:
###Code
sentence = ('In 2017, star quarterback Tom Brady led the Patriots to the Super Bowl, '
'but lost to the Philadelphia Eagles.')
# 1. Tokenize using the NLTK tokenizer.
tokens = nltk.word_tokenize(sentence)
# 2. Tag parts of speech using the NLTK part-of-speech tagger.
tagged = nltk.pos_tag(tokens)
# 3. Extract entities from tagged sentence.
entities = nltk.chunk.ne_chunk(tagged)
print(entities)
###Output
(S
In/IN
2017/CD
,/,
star/NN
quarterback/NN
(PERSON Tom/NNP Brady/NNP)
led/VBD
the/DT
(ORGANIZATION Patriots/NNP)
to/TO
the/DT
(ORGANIZATION Super/NNP Bowl/NNP)
,/,
but/CC
lost/VBD
to/TO
the/DT
(ORGANIZATION Philadelphia/NNP Eagles/NNP)
./.)
###Markdown
It looks like `nltk.chunk.ne_chunk` gives us an `nltk.tree.Tree` object where named entities are also `nltk.tree.Tree` objects within that tree. We can take this a step further and grab the named entities from the tree of entities:
###Code
# 4. Filter entities to just named entities.
named_entities = [entity for entity in entities if isinstance(entity, nltk.tree.Tree)]
print(named_entities)
###Output
[Tree('PERSON', [('Tom', 'NNP'), ('Brady', 'NNP')]), Tree('ORGANIZATION', [('Patriots', 'NNP')]), Tree('ORGANIZATION', [('Super', 'NNP'), ('Bowl', 'NNP')]), Tree('ORGANIZATION', [('Philadelphia', 'NNP'), ('Eagles', 'NNP')])]
###Markdown
Caching with `@functools.lru_cache`A little-known feature of Python 3 is `functools.lru_cache`, a decorator that allows users to easily cache the results of a function in an LRU cache. We're going to be using the NLTK library quite a bit to tokenize, parse, and detect named entities in sentences. These sentences might repeat themselves. As such, we'll use this decorator to cache named entities so that we don't have to perform this expensive computation multiple times. Putting it all together: getting a list of Named Entity Labels from a sentenceNow that we know how to tokenize, parse, and detect named entities using NLTK, let's put it all together into a single helper function. Later, when we implement our constraint, we can query this function to easily get the entity labels from a sentence. We can even use `@functools.lru_cache` to try and speed this process up.
###Code
import functools
@functools.lru_cache(maxsize=2**14)
def get_entities(sentence):
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)
# Setting `binary=True` makes NLTK return all of the named
# entities tagged as NNP instead of detailed tags like
#'Organization', 'Geo-Political Entity', etc.
entities = nltk.chunk.ne_chunk(tagged, binary=True)
return entities.leaves()
###Output
_____no_output_____
###Markdown
And let's test our function to make sure it works:
###Code
sentence = 'Jack Black starred in the 2003 film classic "School of Rock".'
get_entities(sentence)
###Output
_____no_output_____
###Markdown
We flattened the tree of entities, so the return format is a list of `(word, entity type)` tuples. For non-entities, the `entity_type` is just the part of speech of the word. `'NNP'` is the indicator of a named entity (a proper noun, according to NLTK). Looks like we identified three named entities here: 'Jack' and 'Black', 'School', and 'Rock', with 'Rock' tagged as a 'GPE'. (Seems that the labeler thinks Rock is the name of a place, a city or something.) Whatever technique NLTK uses for named entity recognition may be a bit rough, but it did a pretty decent job here! Creating our NamedEntityConstraintNow that we know how to detect named entities using NLTK, let's create our custom constraint.
###Code
from textattack.constraints import Constraint
class NamedEntityConstraint(Constraint):
""" A constraint that ensures `transformed_text` only substitutes named entities from `current_text` with other named entities.
"""
def _check_constraint(self, transformed_text, current_text):
transformed_entities = get_entities(transformed_text.text)
current_entities = get_entities(current_text.text)
# If there aren't named entities, let's return False (the attack
# will eventually fail).
if len(current_entities) == 0:
return False
if len(current_entities) != len(transformed_entities):
# If the two sentences have a different number of entities, then
# they definitely don't have the same labels. In this case, the
# constraint is violated, and we return False.
return False
else:
# Here we compare all of the words, in order, to make sure that they match.
# If we find two words that don't match, this means a word was swapped
# between `current_text` and `transformed_text`. That word must be a named entity to fulfill our
# constraint.
current_word_label = None
transformed_word_label = None
for (word_1, label_1), (word_2, label_2) in zip(current_entities, transformed_entities):
if word_1 != word_2:
# Finally, make sure that words swapped between `x` and `x_adv` are named entities. If
# they're not, then we also return False.
if (label_1 not in ['NNP', 'NE']) or (label_2 not in ['NNP', 'NE']):
return False
# If we get here, all of the labels match up. Return True!
return True
###Output
_____no_output_____
###Markdown
Testing our constraintWe need to create an attack and a dataset to test our constraint on. We went over all of this in the transformations tutorial, so let's gloss over this part for now.
###Code
# Import the model
import transformers
from textattack.models.tokenizers import AutoTokenizer
from textattack.models.wrappers import HuggingFaceModelWrapper
model = transformers.AutoModelForSequenceClassification.from_pretrained("textattack/albert-base-v2-yelp-polarity")
tokenizer = AutoTokenizer("textattack/albert-base-v2-yelp-polarity")
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)
# Create the goal function using the model
from textattack.goal_functions import UntargetedClassification
goal_function = UntargetedClassification(model_wrapper)
# Import the dataset
from textattack.datasets import HuggingFaceNlpDataset
dataset = HuggingFaceNlpDataset("yelp_polarity", None, "test")
from textattack.transformations import WordSwapEmbedding
from textattack.search_methods import GreedySearch
from textattack.shared import Attack
from textattack.constraints.pre_transformation import RepeatModification, StopwordModification
# We're going to use the `WordSwapEmbedding` transformation. Using the default settings, this
# will try substituting words with their neighbors in the counter-fitted embedding space.
transformation = WordSwapEmbedding(max_candidates=15)
# We'll use the greedy search method again
search_method = GreedySearch()
# Our constraints will be the same as Tutorial 1, plus the named entity constraint
constraints = [RepeatModification(),
StopwordModification(),
NamedEntityConstraint(False)]
# Now, let's make the attack using these parameters.
attack = Attack(goal_function, constraints, transformation, search_method)
print(attack)
###Output
Attack(
(search_method): GreedySearch
(goal_function): UntargetedClassification
(transformation): WordSwapEmbedding(
(max_candidates): 15
(embedding_type): paragramcf
)
(constraints):
(0): NamedEntityConstraint(
(compare_against_original): False
)
(1): RepeatModification
(2): StopwordModification
(is_black_box): True
)
###Markdown
Now, let's use our attack. We're going to attack samples until we achieve 5 successes. (There's a lot to check here, and since we're using a greedy search over all potential word swap positions, each sample will take a few minutes. This will take a few hours to run on a single core.)
###Code
from textattack.loggers import CSVLogger # tracks a dataframe for us.
from textattack.attack_results import SuccessfulAttackResult
results_iterable = attack.attack_dataset(dataset)
logger = CSVLogger(color_method='html')
num_successes = 0
while num_successes < 5:
result = next(results_iterable)
if isinstance(result, SuccessfulAttackResult):
logger.log_attack_result(result)
num_successes += 1
print(f'{num_successes} of 5 successes complete.')
###Output
1 of 5 successes complete.
2 of 5 successes complete.
3 of 5 successes complete.
4 of 5 successes complete.
5 of 5 successes complete.
###Markdown
Now let's visualize our 5 successes in color:
###Code
import pandas as pd
pd.options.display.max_colwidth = 480 # increase column width so we can actually read the examples
from IPython.core.display import display, HTML
display(HTML(logger.df[['original_text', 'perturbed_text']].to_html(escape=False)))
###Output
_____no_output_____
###Markdown
The importance of constraintsConstraints determine which potential adversarial examples are valid inputs to the model. When determining the efficacy of an attack, constraints are everything. After all, an attack that looks very powerful may just be generating nonsense. Or, perhaps more nefariously, an attack may generate a real-looking example that changes the original label of the input. That's why you should always clearly define the *constraints* your adversarial examples must meet. [](https://colab.research.google.com/drive/1cBRUj2l0m8o81vJGGFgO-o_zDLj24M5Y?usp=sharing)[](https://github.com/QData/TextAttack/blob/master/docs/examples/1_Introduction_and_Transformations.ipynb) Classes of constraintsTextAttack evaluates constraints using methods from three groups:- **Overlap constraints** determine if a perturbation is valid based on character-level analysis. For example, some attacks are constrained by edit distance: a perturbation is only valid if it perturbs some small number of characters (or fewer).- **Grammaticality constraints** filter inputs based on syntactical information. For example, an attack may require that adversarial perturbations do not introduce grammatical errors.- **Semantic constraints** try to ensure that the perturbation is semantically similar to the original input. For example, we may design a constraint that uses a sentence encoder to encode the original and perturbed inputs, and enforce that the sentence encodings be within some fixed distance of one another. (This is what happens in subclasses of `textattack.constraints.semantics.sentence_encoders`.) A new constraintTo add our own constraint, we need to create a subclass of `textattack.constraints.Constraint`. We can implement one of two functions, either `_check_constraint` or `_check_constraint_many`:- `_check_constraint` determines whether candidate `TokenizedText` `transformed_text`, transformed from `current_text`, fulfills a desired constraint. It returns either `True` or `False`.- `_check_constraint_many` determines whether each of a list of candidates `transformed_texts` fulfill the constraint relative to `current_text`. This is here in case your constraint can be vectorized. If not, just implement `_check_constraint`, and `_check_constraint` will be executed for each `(transformed_text, current_text)` pair. A custom constraintFor fun, we're going to see what happens when we constrain an attack to only allow perturbations that substitute out a named entity for another. In linguistics, a **named entity** is a proper noun, the name of a person, organization, location, product, etc. Named Entity Recognition is a popular NLP task (and one that state-of-the-art models can perform quite well). NLTK and Named Entity Recognition**NLTK**, the Natural Language Toolkit, is a Python package that helps developers write programs that process natural language. NLTK comes with predefined algorithms for lots of linguistic tasks– including Named Entity Recognition.First, we're going to write a constraint class. In the `_check_constraints` method, we're going to use NLTK to find the named entities in both `current_text` and `transformed_text`. We will only return `True` (that is, our constraint is met) if `transformed_text` has substituted one named entity in `current_text` for another.Let's import NLTK and download the required modules:
###Code
import nltk
nltk.download('punkt') # The NLTK tokenizer
nltk.download('maxent_ne_chunker') # NLTK named-entity chunker
nltk.download('words') # NLTK list of words
###Output
[nltk_data] Downloading package punkt to /u/edl9cy/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package maxent_ne_chunker to
[nltk_data] /u/edl9cy/nltk_data...
[nltk_data] Package maxent_ne_chunker is already up-to-date!
[nltk_data] Downloading package words to /u/edl9cy/nltk_data...
[nltk_data] Package words is already up-to-date!
###Markdown
NLTK NER ExampleHere's an example of using NLTK to find the named entities in a sentence:
###Code
sentence = ('In 2017, star quarterback Tom Brady led the Patriots to the Super Bowl, '
'but lost to the Philadelphia Eagles.')
# 1. Tokenize using the NLTK tokenizer.
tokens = nltk.word_tokenize(sentence)
# 2. Tag parts of speech using the NLTK part-of-speech tagger.
tagged = nltk.pos_tag(tokens)
# 3. Extract entities from tagged sentence.
entities = nltk.chunk.ne_chunk(tagged)
print(entities)
###Output
(S
In/IN
2017/CD
,/,
star/NN
quarterback/NN
(PERSON Tom/NNP Brady/NNP)
led/VBD
the/DT
(ORGANIZATION Patriots/NNP)
to/TO
the/DT
(ORGANIZATION Super/NNP Bowl/NNP)
,/,
but/CC
lost/VBD
to/TO
the/DT
(ORGANIZATION Philadelphia/NNP Eagles/NNP)
./.)
###Markdown
It looks like `nltk.chunk.ne_chunk` gives us an `nltk.tree.Tree` object where named entities are also `nltk.tree.Tree` objects within that tree. We can take this a step further and grab the named entities from the tree of entities:
###Code
# 4. Filter entities to just named entities.
named_entities = [entity for entity in entities if isinstance(entity, nltk.tree.Tree)]
print(named_entities)
###Output
[Tree('PERSON', [('Tom', 'NNP'), ('Brady', 'NNP')]), Tree('ORGANIZATION', [('Patriots', 'NNP')]), Tree('ORGANIZATION', [('Super', 'NNP'), ('Bowl', 'NNP')]), Tree('ORGANIZATION', [('Philadelphia', 'NNP'), ('Eagles', 'NNP')])]
###Markdown
Caching with `@functools.lru_cache`A little-known feature of Python 3 is `functools.lru_cache`, a decorator that allows users to easily cache the results of a function in an LRU cache. We're going to be using the NLTK library quite a bit to tokenize, parse, and detect named entities in sentences. These sentences might repeat themselves. As such, we'll use this decorator to cache named entities so that we don't have to perform this expensive computation multiple times. Putting it all together: getting a list of Named Entity Labels from a sentenceNow that we know how to tokenize, parse, and detect named entities using NLTK, let's put it all together into a single helper function. Later, when we implement our constraint, we can query this function to easily get the entity labels from a sentence. We can even use `@functools.lru_cache` to try and speed this process up.
###Code
import functools
@functools.lru_cache(maxsize=2**14)
def get_entities(sentence):
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)
# Setting `binary=True` makes NLTK return all of the named
# entities tagged as NNP instead of detailed tags like
#'Organization', 'Geo-Political Entity', etc.
entities = nltk.chunk.ne_chunk(tagged, binary=True)
return entities.leaves()
###Output
_____no_output_____
###Markdown
And let's test our function to make sure it works:
###Code
sentence = 'Jack Black starred in the 2003 film classic "School of Rock".'
get_entities(sentence)
###Output
_____no_output_____
###Markdown
We flattened the tree of entities, so the return format is a list of `(word, entity type)` tuples. For non-entities, the `entity_type` is just the part of speech of the word. `'NNP'` is the indicator of a named entity (a proper noun, according to NLTK). Looks like we identified three named entities here: 'Jack' and 'Black', 'School', and 'Rock', with 'Rock' tagged as a 'GPE'. (Seems that the labeler thinks Rock is the name of a place, a city or something.) Whatever technique NLTK uses for named entity recognition may be a bit rough, but it did a pretty decent job here! Creating our NamedEntityConstraintNow that we know how to detect named entities using NLTK, let's create our custom constraint.
###Code
from textattack.constraints import Constraint
class NamedEntityConstraint(Constraint):
""" A constraint that ensures `transformed_text` only substitutes named entities from `current_text` with other named entities.
"""
def _check_constraint(self, transformed_text, current_text):
transformed_entities = get_entities(transformed_text.text)
current_entities = get_entities(current_text.text)
# If there aren't named entities, let's return False (the attack
# will eventually fail).
if len(current_entities) == 0:
return False
if len(current_entities) != len(transformed_entities):
# If the two sentences have a different number of entities, then
# they definitely don't have the same labels. In this case, the
# constraint is violated, and we return False.
return False
else:
# Here we compare all of the words, in order, to make sure that they match.
# If we find two words that don't match, this means a word was swapped
# between `current_text` and `transformed_text`. That word must be a named entity to fulfill our
# constraint.
current_word_label = None
transformed_word_label = None
for (word_1, label_1), (word_2, label_2) in zip(current_entities, transformed_entities):
if word_1 != word_2:
# Finally, make sure that words swapped between `x` and `x_adv` are named entities. If
# they're not, then we also return False.
if (label_1 not in ['NNP', 'NE']) or (label_2 not in ['NNP', 'NE']):
return False
# If we get here, all of the labels match up. Return True!
return True
###Output
_____no_output_____
###Markdown
Testing our constraintWe need to create an attack and a dataset to test our constraint on. We went over all of this in the transformations tutorial, so let's gloss over this part for now.
###Code
# Import the model
import transformers
from textattack.models.tokenizers import AutoTokenizer
from textattack.models.wrappers import HuggingFaceModelWrapper
model = transformers.AutoModelForSequenceClassification.from_pretrained("textattack/albert-base-v2-yelp-polarity")
tokenizer = AutoTokenizer("textattack/albert-base-v2-yelp-polarity")
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)
# Create the goal function using the model
from textattack.goal_functions import UntargetedClassification
goal_function = UntargetedClassification(model_wrapper)
# Import the dataset
from textattack.datasets import HuggingFaceNlpDataset
dataset = HuggingFaceNlpDataset("yelp_polarity", None, "test")
from textattack.transformations import WordSwapEmbedding
from textattack.search_methods import GreedySearch
from textattack.shared import Attack
from textattack.constraints.pre_transformation import RepeatModification, StopwordModification
# We're going to use the `WordSwapEmbedding` transformation. Using the default settings, this
# will try substituting words with their neighbors in the counter-fitted embedding space.
transformation = WordSwapEmbedding(max_candidates=15)
# We'll use the greedy search method again
search_method = GreedySearch()
# Our constraints will be the same as Tutorial 1, plus the named entity constraint
constraints = [RepeatModification(),
StopwordModification(),
NamedEntityConstraint(False)]
# Now, let's make the attack using these parameters.
attack = Attack(goal_function, constraints, transformation, search_method)
print(attack)
###Output
Attack(
(search_method): GreedySearch
(goal_function): UntargetedClassification
(transformation): WordSwapEmbedding(
(max_candidates): 15
(embedding_type): paragramcf
)
(constraints):
(0): NamedEntityConstraint(
(compare_against_original): False
)
(1): RepeatModification
(2): StopwordModification
(is_black_box): True
)
###Markdown
Now, let's use our attack. We're going to attack samples until we achieve 5 successes. (There's a lot to check here, and since we're using a greedy search over all potential word swap positions, each sample will take a few minutes. This will take a few hours to run on a single core.)
###Code
from textattack.loggers import CSVLogger # tracks a dataframe for us.
from textattack.attack_results import SuccessfulAttackResult
results_iterable = attack.attack_dataset(dataset)
logger = CSVLogger(color_method='html')
num_successes = 0
while num_successes < 5:
result = next(results_iterable)
if isinstance(result, SuccessfulAttackResult):
logger.log_attack_result(result)
num_successes += 1
print(f'{num_successes} of 5 successes complete.')
###Output
1 of 5 successes complete.
2 of 5 successes complete.
3 of 5 successes complete.
4 of 5 successes complete.
5 of 5 successes complete.
###Markdown
Now let's visualize our 5 successes in color:
###Code
import pandas as pd
pd.options.display.max_colwidth = 480 # increase column width so we can actually read the examples
from IPython.core.display import display, HTML
display(HTML(logger.df[['original_text', 'perturbed_text']].to_html(escape=False)))
###Output
_____no_output_____
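###Markdown
Since `CSVLogger` keeps its results in a pandas dataframe, we can also persist them for later inspection (the filename below is just an example):
###Code
# Save the logged attack results to disk (the HTML color markup is kept as-is).
logger.df.to_csv("named_entity_constraint_results.csv", index=False)
###Output
_____no_output_____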
###Markdown
The importance of constraintsConstraints determine which potential adversarial examples are valid inputs to the model. When determining the efficacy of an attack, constraints are everything. After all, an attack that looks very powerful may just be generating nonsense. Or, perhaps more nefariously, an attack may generate a real-looking example that changes the original label of the input. That's why you should always clearly define the *constraints* your adversarial examples must meet. Classes of constraintsTextAttack evaluates constraints using methods from three groups:- **Overlap constraints** determine if a perturbation is valid based on character-level analysis. For example, some attacks are constrained by edit distance: a perturbation is only valid if it perturbs some small number of characters (or fewer).- **Grammaticality constraints** filter inputs based on syntactical information. For example, an attack may require that adversarial perturbations do not introduce grammatical errors.- **Semantic constraints** try to ensure that the perturbation is semantically similar to the original input. For example, we may design a constraint that uses a sentence encoder to encode the original and perturbed inputs, and enforce that the sentence encodings be within some fixed distance of one another. (This is what happens in subclasses of `textattack.constraints.semantics.sentence_encoders`.) A new constraintTo add our own constraint, we need to create a subclass of `textattack.constraints.Constraint`. We can implement one of two functions, either `_check_constraint` or `_check_constraint_many`:- `_check_constraint` determines whether candidate `TokenizedText` `transformed_text`, transformed from `current_text`, fulfills a desired constraint. It returns either `True` or `False`.- `_check_constraint_many` determines whether each of a list of candidates `transformed_texts` fulfills the constraint relative to `current_text`. This is here in case your constraint can be vectorized. If not, just implement `_check_constraint`, and `_check_constraint` will be executed for each `(transformed_text, current_text)` pair. A custom constraintFor fun, we're going to see what happens when we constrain an attack to only allow perturbations that substitute out a named entity for another. In linguistics, a **named entity** is a proper noun, the name of a person, organization, location, product, etc. Named Entity Recognition is a popular NLP task (and one that state-of-the-art models can perform quite well). NLTK and Named Entity Recognition**NLTK**, the Natural Language Toolkit, is a Python package that helps developers write programs that process natural language. NLTK comes with predefined algorithms for lots of linguistic tasks, including Named Entity Recognition.First, we're going to write a constraint class. In the `_check_constraint` method, we're going to use NLTK to find the named entities in both `current_text` and `transformed_text`. We will only return `True` (that is, our constraint is met) if `transformed_text` has substituted one named entity in `current_text` for another.Let's import NLTK and download the required modules:
###Code
import nltk
nltk.download('punkt') # The NLTK tokenizer
nltk.download('maxent_ne_chunker') # NLTK named-entity chunker
nltk.download('words') # NLTK list of words
###Output
[nltk_data] Downloading package punkt to /u/edl9cy/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package maxent_ne_chunker to
[nltk_data] /u/edl9cy/nltk_data...
[nltk_data] Package maxent_ne_chunker is already up-to-date!
[nltk_data] Downloading package words to /u/edl9cy/nltk_data...
[nltk_data] Package words is already up-to-date!
###Markdown
NLTK NER ExampleHere's an example of using NLTK to find the named entities in a sentence:
###Code
sentence = ('In 2017, star quarterback Tom Brady led the Patriots to the Super Bowl, '
'but lost to the Philadelphia Eagles.')
# 1. Tokenize using the NLTK tokenizer.
tokens = nltk.word_tokenize(sentence)
# 2. Tag parts of speech using the NLTK part-of-speech tagger.
tagged = nltk.pos_tag(tokens)
# 3. Extract entities from tagged sentence.
entities = nltk.chunk.ne_chunk(tagged)
print(entities)
###Output
(S
In/IN
2017/CD
,/,
star/NN
quarterback/NN
(PERSON Tom/NNP Brady/NNP)
led/VBD
the/DT
(ORGANIZATION Patriots/NNP)
to/TO
the/DT
(ORGANIZATION Super/NNP Bowl/NNP)
,/,
but/CC
lost/VBD
to/TO
the/DT
(ORGANIZATION Philadelphia/NNP Eagles/NNP)
./.)
###Markdown
It looks like `nltk.chunk.ne_chunk` gives us an `nltk.tree.Tree` object where named entities are also `nltk.tree.Tree` objects within that tree. We can take this a step further and grab the named entities from the tree of entities:
###Code
# 4. Filter entities to just named entities.
named_entities = [entity for entity in entities if isinstance(entity, nltk.tree.Tree)]
print(named_entities)
###Output
[Tree('PERSON', [('Tom', 'NNP'), ('Brady', 'NNP')]), Tree('ORGANIZATION', [('Patriots', 'NNP')]), Tree('ORGANIZATION', [('Super', 'NNP'), ('Bowl', 'NNP')]), Tree('ORGANIZATION', [('Philadelphia', 'NNP'), ('Eagles', 'NNP')])]
###Markdown
Caching with `@functools.lru_cache`A little-known feature of Python 3 is `functools.lru_cache`, a decorator that allows users to easily cache the results of a function in an LRU cache. We're going to be using the NLTK library quite a bit to tokenize, parse, and detect named entities in sentences. These sentences might repeat themselves. As such, we'll use this decorator to cache named entities so that we don't have to perform this expensive computation multiple times. Putting it all together: getting a list of Named Entity Labels from a sentenceNow that we know how to tokenize, parse, and detect named entities using NLTK, let's put it all together into a single helper function. Later, when we implement our constraint, we can query this function to easily get the entity labels from a sentence. We can even use `@functools.lru_cache` to try and speed this process up.
###Code
import functools
@functools.lru_cache(maxsize=2**14)
def get_entities(sentence):
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)
# Setting `binary=True` makes NLTK return all of the named
# entities tagged as NNP instead of detailed tags like
#'Organization', 'Geo-Political Entity', etc.
entities = nltk.chunk.ne_chunk(tagged, binary=True)
return entities.leaves()
###Output
_____no_output_____
###Markdown
And let's test our function to make sure it works:
###Code
sentence = 'Jack Black starred in the 2003 film classic "School of Rock".'
get_entities(sentence)
###Output
_____no_output_____
###Markdown
We flattened the tree of entities, so the return format is a list of `(word, entity type)` tuples. For non-entities, the `entity_type` is just the part of speech of the word. `'NNP'` is the indicator of a named entity (a proper noun, according to NLTK). Looks like we identified three named entities here: 'Jack' and 'Black', 'School', and 'Rock', with 'Rock' tagged as a 'GPE'. (Seems that the labeler thinks Rock is the name of a place, a city or something.) Whatever technique NLTK uses for named entity recognition may be a bit rough, but it did a pretty decent job here! Creating our NamedEntityConstraintNow that we know how to detect named entities using NLTK, let's create our custom constraint.
###Code
from textattack.constraints import Constraint
class NamedEntityConstraint(Constraint):
""" A constraint that ensures `transformed_text` only substitutes named entities from `current_text` with other named entities.
"""
    def _check_constraint(self, transformed_text, current_text, original_text=None):
transformed_entities = get_entities(transformed_text.text)
current_entities = get_entities(current_text.text)
# If there aren't named entities, let's return False (the attack
# will eventually fail).
if len(current_entities) == 0:
return False
if len(current_entities) != len(transformed_entities):
# If the two sentences have a different number of entities, then
# they definitely don't have the same labels. In this case, the
            # constraint is violated, and we return False.
return False
else:
# Here we compare all of the words, in order, to make sure that they match.
# If we find two words that don't match, this means a word was swapped
# between `current_text` and `transformed_text`. That word must be a named entity to fulfill our
# constraint.
current_word_label = None
transformed_word_label = None
for (word_1, label_1), (word_2, label_2) in zip(current_entities, transformed_entities):
if word_1 != word_2:
# Finally, make sure that words swapped between `x` and `x_adv` are named entities. If
# they're not, then we also return False.
if (label_1 not in ['NNP', 'NE']) or (label_2 not in ['NNP', 'NE']):
return False
# If we get here, all of the labels match up. Return True!
return True
###Output
_____no_output_____
###Markdown
Testing our constraintWe need to create an attack and a dataset to test our constraint on. We went over all of this in the first tutorial, so let's gloss over this part for now.
###Code
# Import the dataset.
from textattack.datasets.classification import YelpSentiment
# Create the model.
from textattack.models.classification.lstm import LSTMForYelpSentimentClassification
model = LSTMForYelpSentimentClassification()
# Create the goal function using the model.
from textattack.goal_functions import UntargetedClassification
goal_function = UntargetedClassification(model)
from textattack.transformations import WordSwapEmbedding
from textattack.search_methods import GreedySearch
from textattack.shared import Attack
from textattack.constraints.pre_transformation import RepeatModification, StopwordModification
# We're going to use the `WordSwapEmbedding` transformation. Using the default settings, this
# will try substituting words with their neighbors in the counter-fitted embedding space.
transformation = WordSwapEmbedding(max_candidates=15)
# We'll use the greedy search method again
search_method = GreedySearch()
# Our constraints will be the same as Tutorial 1, plus the named entity constraint
constraints = [RepeatModification(),
StopwordModification(),
NamedEntityConstraint()]
# Now, let's make the attack using these parameters.
attack = Attack(goal_function, constraints, transformation, search_method)
print(attack)
import torch
torch.cuda.is_available()
###Output
_____no_output_____
###Markdown
Now, let's use our attack. We're going to iterate through the `YelpSentiment` dataset and attack samples until we achieve 10 successes. (There's a lot to check here, and since we're using a greedy search over all potential word swap positions, each sample will take a few minutes. This will take a few hours to run on a single core.)
###Code
from textattack.loggers import CSVLogger # tracks a dataframe for us.
from textattack.attack_results import SuccessfulAttackResult
results_iterable = attack.attack_dataset(YelpSentiment(), attack_n=True)
logger = CSVLogger(color_method='html')
num_successes = 0
while num_successes < 10:
result = next(results_iterable)
if isinstance(result, SuccessfulAttackResult):
logger.log_attack_result(result)
num_successes += 1
###Output
_____no_output_____
###Markdown
Now let's visualize our 10 successes in color:
###Code
import pandas as pd
pd.options.display.max_colwidth = 480 # increase column width so we can actually read the examples
from IPython.core.display import display, HTML
display(HTML(logger.df[['passage_1', 'passage_2']].to_html(escape=False)))
###Output
_____no_output_____
###Markdown
The importance of constraintsConstraints determine which potential adversarial examples are valid inputs to the model. When determining the efficacy of an attack, constraints are everything. After all, an attack that looks very powerful may just be generating nonsense. Or, perhaps more nefariously, an attack may generate a real-looking example that changes the original label of the input. That's why you should always clearly define the *constraints* your adversarial examples must meet. [](https://colab.research.google.com/drive/1cBRUj2l0m8o81vJGGFgO-o_zDLj24M5Y?usp=sharing)[](https://github.com/QData/TextAttack/blob/master/docs/examples/1_Introduction_and_Transformations.ipynb) Classes of constraintsTextAttack evaluates constraints using methods from three groups:- **Overlap constraints** determine if a perturbation is valid based on character-level analysis. For example, some attacks are constrained by edit distance: a perturbation is only valid if it perturbs some small number of characters (or fewer).- **Grammaticality constraints** filter inputs based on syntactical information. For example, an attack may require that adversarial perturbations do not introduce grammatical errors.- **Semantic constraints** try to ensure that the perturbation is semantically similar to the original input. For example, we may design a constraint that uses a sentence encoder to encode the original and perturbed inputs, and enforce that the sentence encodings be within some fixed distance of one another. (This is what happens in subclasses of `textattack.constraints.semantics.sentence_encoders`.) A new constraintTo add our own constraint, we need to create a subclass of `textattack.constraints.Constraint`. We can implement one of two functions, either `_check_constraint` or `_check_constraint_many`:- `_check_constraint` determines whether candidate `TokenizedText` `transformed_text`, transformed from `current_text`, fulfills a desired constraint. It returns either `True` or `False`.- `_check_constraint_many` determines whether each of a list of candidates `transformed_texts` fulfill the constraint relative to `current_text`. This is here in case your constraint can be vectorized. If not, just implement `_check_constraint`, and `_check_constraint` will be executed for each `(transformed_text, current_text)` pair. A custom constraintFor fun, we're going to see what happens when we constrain an attack to only allow perturbations that substitute out a named entity for another. In linguistics, a **named entity** is a proper noun, the name of a person, organization, location, product, etc. Named Entity Recognition is a popular NLP task (and one that state-of-the-art models can perform quite well). NLTK and Named Entity Recognition**NLTK**, the Natural Language Toolkit, is a Python package that helps developers write programs that process natural language. NLTK comes with predefined algorithms for lots of linguistic tasks– including Named Entity Recognition.First, we're going to write a constraint class. In the `_check_constraints` method, we're going to use NLTK to find the named entities in both `current_text` and `transformed_text`. We will only return `True` (that is, our constraint is met) if `transformed_text` has substituted one named entity in `current_text` for another.Let's import NLTK and download the required modules:
###Code
import nltk
nltk.download('punkt') # The NLTK tokenizer
nltk.download('maxent_ne_chunker') # NLTK named-entity chunker
nltk.download('words') # NLTK list of words
###Output
[nltk_data] Downloading package punkt to /u/edl9cy/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package maxent_ne_chunker to
[nltk_data] /u/edl9cy/nltk_data...
[nltk_data] Package maxent_ne_chunker is already up-to-date!
[nltk_data] Downloading package words to /u/edl9cy/nltk_data...
[nltk_data] Package words is already up-to-date!
###Markdown
NLTK NER ExampleHere's an example of using NLTK to find the named entities in a sentence:
###Code
sentence = ('In 2017, star quarterback Tom Brady led the Patriots to the Super Bowl, '
'but lost to the Philadelphia Eagles.')
# 1. Tokenize using the NLTK tokenizer.
tokens = nltk.word_tokenize(sentence)
# 2. Tag parts of speech using the NLTK part-of-speech tagger.
tagged = nltk.pos_tag(tokens)
# 3. Extract entities from tagged sentence.
entities = nltk.chunk.ne_chunk(tagged)
print(entities)
###Output
(S
In/IN
2017/CD
,/,
star/NN
quarterback/NN
(PERSON Tom/NNP Brady/NNP)
led/VBD
the/DT
(ORGANIZATION Patriots/NNP)
to/TO
the/DT
(ORGANIZATION Super/NNP Bowl/NNP)
,/,
but/CC
lost/VBD
to/TO
the/DT
(ORGANIZATION Philadelphia/NNP Eagles/NNP)
./.)
###Markdown
It looks like `nltk.chunk.ne_chunk` gives us an `nltk.tree.Tree` object where named entities are also `nltk.tree.Tree` objects within that tree. We can take this a step further and grab the named entities from the tree of entities:
###Code
# 4. Filter entities to just named entities.
named_entities = [entity for entity in entities if isinstance(entity, nltk.tree.Tree)]
print(named_entities)
###Output
[Tree('PERSON', [('Tom', 'NNP'), ('Brady', 'NNP')]), Tree('ORGANIZATION', [('Patriots', 'NNP')]), Tree('ORGANIZATION', [('Super', 'NNP'), ('Bowl', 'NNP')]), Tree('ORGANIZATION', [('Philadelphia', 'NNP'), ('Eagles', 'NNP')])]
###Markdown
Caching with `@functools.lru_cache`A little-known feature of Python 3 is `functools.lru_cache`, a decorator that allows users to easily cache the results of a function in an LRU cache. We're going to be using the NLTK library quite a bit to tokenize, parse, and detect named entities in sentences. These sentences might repeat themselves. As such, we'll use this decorator to cache named entities so that we don't have to perform this expensive computation multiple times. Putting it all together: getting a list of Named Entity Labels from a sentenceNow that we know how to tokenize, parse, and detect named entities using NLTK, let's put it all together into a single helper function. Later, when we implement our constraint, we can query this function to easily get the entity labels from a sentence. We can even use `@functools.lru_cache` to try and speed this process up.
###Code
import functools
@functools.lru_cache(maxsize=2**14)
def get_entities(sentence):
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)
# Setting `binary=True` makes NLTK return all of the named
# entities tagged as NNP instead of detailed tags like
#'Organization', 'Geo-Political Entity', etc.
entities = nltk.chunk.ne_chunk(tagged, binary=True)
return entities.leaves()
###Output
_____no_output_____
###Markdown
And let's test our function to make sure it works:
###Code
sentence = 'Jack Black starred in the 2003 film classic "School of Rock".'
get_entities(sentence)
###Output
_____no_output_____
###Markdown
We flattened the tree of entities, so the return format is a list of `(word, entity type)` tuples. For non-entities, the `entity_type` is just the part of speech of the word. `'NNP'` is the indicator of a named entity (a proper noun, according to NLTK). Looks like we identified three named entities here: 'Jack' and 'Black', 'School', and 'Rock', with 'Rock' tagged as a 'GPE'. (Seems that the labeler thinks Rock is the name of a place, a city or something.) Whatever technique NLTK uses for named entity recognition may be a bit rough, but it did a pretty decent job here! Creating our NamedEntityConstraintNow that we know how to detect named entities using NLTK, let's create our custom constraint.
###Code
from textattack.constraints import Constraint
class NamedEntityConstraint(Constraint):
""" A constraint that ensures `transformed_text` only substitutes named entities from `current_text` with other named entities.
"""
def _check_constraint(self, transformed_text, current_text):
transformed_entities = get_entities(transformed_text.text)
current_entities = get_entities(current_text.text)
# If there aren't named entities, let's return False (the attack
# will eventually fail).
if len(current_entities) == 0:
return False
if len(current_entities) != len(transformed_entities):
# If the two sentences have a different number of entities, then
# they definitely don't have the same labels. In this case, the
# constraint is violated, and we return False.
return False
else:
# Here we compare all of the words, in order, to make sure that they match.
# If we find two words that don't match, this means a word was swapped
# between `current_text` and `transformed_text`. That word must be a named entity to fulfill our
# constraint.
current_word_label = None
transformed_word_label = None
for (word_1, label_1), (word_2, label_2) in zip(current_entities, transformed_entities):
if word_1 != word_2:
# Finally, make sure that words swapped between `x` and `x_adv` are named entities. If
# they're not, then we also return False.
if (label_1 not in ['NNP', 'NE']) or (label_2 not in ['NNP', 'NE']):
return False
# If we get here, all of the labels match up. Return True!
return True
###Output
_____no_output_____
###Markdown
Testing our constraintWe need to create an attack and a dataset to test our constraint on. We went over all of this in the transformations tutorial, so let's gloss over this part for now.
###Code
# Import the model
import transformers
from textattack.models.tokenizers import AutoTokenizer
from textattack.models.wrappers import HuggingFaceModelWrapper
model = transformers.AutoModelForSequenceClassification.from_pretrained("textattack/albert-base-v2-yelp-polarity")
tokenizer = AutoTokenizer("textattack/albert-base-v2-yelp-polarity")
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)
# Create the goal function using the model
from textattack.goal_functions import UntargetedClassification
goal_function = UntargetedClassification(model_wrapper)
# Import the dataset
from textattack.datasets import HuggingFaceDataset
dataset = HuggingFaceDataset("yelp_polarity", None, "test")
from textattack.transformations import WordSwapEmbedding
from textattack.search_methods import GreedySearch
from textattack.shared import Attack
from textattack.constraints.pre_transformation import RepeatModification, StopwordModification
# We're going to use the `WordSwapEmbedding` transformation. Using the default settings, this
# will try substituting words with their neighbors in the counter-fitted embedding space.
transformation = WordSwapEmbedding(max_candidates=15)
# We'll use the greedy search method again
search_method = GreedySearch()
# Our constraints will be the same as Tutorial 1, plus the named entity constraint
constraints = [RepeatModification(),
StopwordModification(),
NamedEntityConstraint(False)]
# Now, let's make the attack using these parameters.
attack = Attack(goal_function, constraints, transformation, search_method)
print(attack)
###Output
Attack(
(search_method): GreedySearch
(goal_function): UntargetedClassification
(transformation): WordSwapEmbedding(
(max_candidates): 15
(embedding_type): paragramcf
)
(constraints):
(0): NamedEntityConstraint(
(compare_against_original): False
)
(1): RepeatModification
(2): StopwordModification
(is_black_box): True
)
###Markdown
Now, let's use our attack. We're going to attack samples until we achieve 5 successes. (There's a lot to check here, and since we're using a greedy search over all potential word swap positions, each sample will take a few minutes. This will take a few hours to run on a single core.)
###Code
from textattack.loggers import CSVLogger # tracks a dataframe for us.
from textattack.attack_results import SuccessfulAttackResult
results_iterable = attack.attack_dataset(dataset)
logger = CSVLogger(color_method='html')
num_successes = 0
while num_successes < 5:
result = next(results_iterable)
if isinstance(result, SuccessfulAttackResult):
logger.log_attack_result(result)
num_successes += 1
print(f'{num_successes} of 5 successes complete.')
###Output
1 of 5 successes complete.
2 of 5 successes complete.
3 of 5 successes complete.
4 of 5 successes complete.
5 of 5 successes complete.
###Markdown
Now let's visualize our 5 successes in color:
###Code
import pandas as pd
pd.options.display.max_colwidth = 480 # increase column width so we can actually read the examples
from IPython.core.display import display, HTML
display(HTML(logger.df[['original_text', 'perturbed_text']].to_html(escape=False)))
###Output
_____no_output_____
###Markdown
The importance of constraintsConstraints determine which potential adversarial examples are valid inputs to the model. When determining the efficacy of an attack, constraints are everything. After all, an attack that looks very powerful may just be generating nonsense. Or, perhaps more nefariously, an attack may generate a real-looking example that changes the original label of the input. That's why you should always clearly define the *constraints* your adversarial examples must meet. Classes of constraintsTextAttack evaluates constraints using methods from three groups:- **Overlap constraints** determine if a perturbation is valid based on character-level analysis. For example, some attacks are constrained by edit distance: a perturbation is only valid if it changes at most some small number of characters.- **Grammaticality constraints** filter inputs based on syntactical information. For example, an attack may require that adversarial perturbations do not introduce grammatical errors.- **Semantic constraints** try to ensure that the perturbation is semantically similar to the original input. For example, we may design a constraint that uses a sentence encoder to encode the original and perturbed inputs, and enforce that the sentence encodings be within some fixed distance of one another. (This is what happens in subclasses of `textattack.constraints.semantics.sentence_encoders`.) A new constraintTo add our own constraint, we need to create a subclass of `textattack.constraints.Constraint`. We can implement one of two methods, either `_check_constraint` or `_check_constraint_many`:- `_check_constraint` determines whether candidate `TokenizedText` `transformed_text`, transformed from `current_text`, fulfills a desired constraint. It returns either `True` or `False`.- `_check_constraint_many` determines whether each of a list of candidates `transformed_texts` fulfills the constraint relative to `current_text`. This is here in case your constraint can be vectorized. If not, just implement `_check_constraint`, and `_check_constraint` will be executed for each `(transformed_text, current_text)` pair. A custom constraintFor fun, we're going to see what happens when we constrain an attack to only allow perturbations that substitute one named entity for another. In linguistics, a **named entity** is a proper noun, the name of a person, organization, location, product, etc. Named Entity Recognition is a popular NLP task (and one that state-of-the-art models can perform quite well). NLTK and Named Entity Recognition**NLTK**, the Natural Language Toolkit, is a Python package that helps developers write programs that process natural language. NLTK comes with predefined algorithms for lots of linguistic tasks, including Named Entity Recognition.First, we're going to write a constraint class. In the `_check_constraint` method, we're going to use NLTK to find the named entities in both `current_text` and `transformed_text`. We will only return `True` (that is, our constraint is met) if `transformed_text` has substituted one named entity in `current_text` for another.Let's import NLTK and download the required modules:
###Code
import nltk
nltk.download('punkt') # The NLTK tokenizer
nltk.download('maxent_ne_chunker') # NLTK named-entity chunker
nltk.download('words') # NLTK list of words
###Output
[nltk_data] Downloading package punkt to /u/edl9cy/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package maxent_ne_chunker to
[nltk_data] /u/edl9cy/nltk_data...
[nltk_data] Package maxent_ne_chunker is already up-to-date!
[nltk_data] Downloading package words to /u/edl9cy/nltk_data...
[nltk_data] Package words is already up-to-date!
###Markdown
NLTK NER ExampleHere's an example of using NLTK to find the named entities in a sentence:
###Code
sentence = ('In 2017, star quarterback Tom Brady led the Patriots to the Super Bowl, '
'but lost to the Philadelphia Eagles.')
# 1. Tokenize using the NLTK tokenizer.
tokens = nltk.word_tokenize(sentence)
# 2. Tag parts of speech using the NLTK part-of-speech tagger.
tagged = nltk.pos_tag(tokens)
# 3. Extract entities from tagged sentence.
entities = nltk.chunk.ne_chunk(tagged)
print(entities)
###Output
(S
In/IN
2017/CD
,/,
star/NN
quarterback/NN
(PERSON Tom/NNP Brady/NNP)
led/VBD
the/DT
(ORGANIZATION Patriots/NNP)
to/TO
the/DT
(ORGANIZATION Super/NNP Bowl/NNP)
,/,
but/CC
lost/VBD
to/TO
the/DT
(ORGANIZATION Philadelphia/NNP Eagles/NNP)
./.)
###Markdown
It looks like `nltk.chunk.ne_chunk` gives us an `nltk.tree.Tree` object where named entities are also `nltk.tree.Tree` objects within that tree. We can take this a step further and grab the named entities from the tree of entities:
###Code
# 4. Filter entities to just named entities.
named_entities = [entity for entity in entities if isinstance(entity, nltk.tree.Tree)]
print(named_entities)
###Output
[Tree('PERSON', [('Tom', 'NNP'), ('Brady', 'NNP')]), Tree('ORGANIZATION', [('Patriots', 'NNP')]), Tree('ORGANIZATION', [('Super', 'NNP'), ('Bowl', 'NNP')]), Tree('ORGANIZATION', [('Philadelphia', 'NNP'), ('Eagles', 'NNP')])]
###Markdown
Caching with `@functools.lru_cache`A little-known feature of Python 3 is `functools.lru_cache`, a decorator that allows users to easily cache the results of a function in an LRU cache. We're going to be using the NLTK library quite a bit to tokenize, parse, and detect named entities in sentences. These sentences might repeat themselves. As such, we'll use this decorator to cache named entities so that we don't have to perform this expensive computation multiple times. Putting it all together: getting a list of Named Entity Labels from a sentenceNow that we know how to tokenize, parse, and detect named entities using NLTK, let's put it all together into a single helper function. Later, when we implement our constraint, we can query this function to easily get the entity labels from a sentence. We can even use `@functools.lru_cache` to try and speed this process up.
###Code
import functools
@functools.lru_cache(maxsize=2**14)
def get_entities(sentence):
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)
# Setting `binary=True` makes NLTK return all of the named
# entities tagged as NNP instead of detailed tags like
#'Organization', 'Geo-Political Entity', etc.
entities = nltk.chunk.ne_chunk(tagged, binary=True)
return entities.leaves()
###Output
_____no_output_____
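###Markdown
As a quick aside (an editor's sketch, not part of the original tutorial), we can confirm the cache is working by calling `get_entities` twice on the same sentence and inspecting `cache_info()`; the second call should be recorded as a cache hit.
###Code
# Hypothetical cache check: repeated identical calls are served from the LRU cache.
example_sentence = 'The Eiffel Tower is in Paris.'
get_entities(example_sentence)
get_entities(example_sentence)
print(get_entities.cache_info())  # expect at least one hit from the repeated call
###Output
_____no_output_____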
###Markdown
And let's test our function to make sure it works:
###Code
sentence = 'Jack Black starred in the 2003 film classic "School of Rock".'
get_entities(sentence)
###Output
_____no_output_____
###Markdown
We flattened the tree of entities, so the return format is a list of `(word, entity type)` tuples. For non-entities, the `entity_type` is just the part of speech of the word. `'NNP'` is the indicator of a named entity (a proper noun, according to NLTK). Looks like we identified a few named entities here: 'Jack' and 'Black', plus 'School' and 'Rock', which got tagged as a 'GPE'. (Seems that the labeler thinks Rock is the name of a place, a city or something.) Whatever technique NLTK uses for named entity recognition may be a bit rough, but it did a pretty decent job here! Creating our NamedEntityConstraintNow that we know how to detect named entities using NLTK, let's create our custom constraint.
###Code
from textattack.constraints import Constraint
class NamedEntityConstraint(Constraint):
""" A constraint that ensures `transformed_text` only substitutes named entities from `current_text` with other named entities.
"""
    def _check_constraint(self, transformed_text, current_text, original_text=None):
transformed_entities = get_entities(transformed_text.text)
current_entities = get_entities(current_text.text)
# If there aren't named entities, let's return False (the attack
# will eventually fail).
if len(current_entities) == 0:
return False
if len(current_entities) != len(transformed_entities):
# If the two sentences have a different number of entities, then
# they definitely don't have the same labels. In this case, the
# constraint is violated, and we return False.
return False
else:
# Here we compare all of the words, in order, to make sure that they match.
# If we find two words that don't match, this means a word was swapped
# between `current_text` and `transformed_text`. That word must be a named entity to fulfill our
# constraint.
current_word_label = None
transformed_word_label = None
for (word_1, label_1), (word_2, label_2) in zip(current_entities, transformed_entities):
if word_1 != word_2:
# Finally, make sure that words swapped between `x` and `x_adv` are named entities. If
# they're not, then we also return False.
if (label_1 not in ['NNP', 'NE']) or (label_2 not in ['NNP', 'NE']):
return False
# If we get here, all of the labels match up. Return True!
return True
###Output
_____no_output_____
###Markdown
Testing our constraintWe need to create an attack and a dataset to test our constraint on. We went over all of this in the first tutorial, so let's gloss over this part for now.
###Code
# Import the dataset.
from textattack.datasets.classification import YelpSentiment
# Create the model.
from textattack.models.classification.lstm import LSTMForYelpSentimentClassification
model = LSTMForYelpSentimentClassification()
# Create the goal function using the model.
from textattack.goal_functions import UntargetedClassification
goal_function = UntargetedClassification(model)
from textattack.transformations import WordSwapEmbedding
from textattack.search_methods import GreedySearch
from textattack.shared import Attack
from textattack.constraints.pre_transformation import RepeatModification, StopwordModification
# We're going to use the `WordSwapEmbedding` transformation. Using the default settings, this
# will try substituting words with their neighbors in the counter-fitted embedding space.
transformation = WordSwapEmbedding(max_candidates=15)
# We'll use the greedy search method again
search_method = GreedySearch()
# Our constraints will be the same as Tutorial 1, plus the named entity constraint
constraints = [RepeatModification(),
StopwordModification(),
NamedEntityConstraint()]
# Now, let's make the attack using these parameters.
attack = Attack(goal_function, constraints, transformation, search_method)
print(attack)
import torch
torch.cuda.is_available()
###Output
_____no_output_____
###Markdown
Now, let's use our attack. We're going to iterate through the `YelpSentiment` dataset and attack samples until we achieve 10 successes. (There's a lot to check here, and since we're using a greedy search over all potential word swap positions, each sample will take a few minutes. This will take a few hours to run on a single core.)
###Code
from textattack.loggers import CSVLogger # tracks a dataframe for us.
from textattack.attack_results import SuccessfulAttackResult
results_iterable = attack.attack_dataset(YelpSentiment(), attack_n=True)
logger = CSVLogger(color_method='html')
num_successes = 0
while num_successes < 10:
result = next(results_iterable)
if isinstance(result, SuccessfulAttackResult):
logger.log_attack_result(result)
num_successes += 1
###Output
_____no_output_____
###Markdown
Now let's visualize our 10 successes in color:
###Code
import pandas as pd
pd.options.display.max_colwidth = 480 # increase column width so we can actually read the examples
from IPython.core.display import display, HTML
display(HTML(logger.df[['passage_1', 'passage_2']].to_html(escape=False)))
###Output
_____no_output_____ |
robert_haase/cupy_cucim/cupy_cucim.ipynb | ###Markdown
GPU-accelerated image processing using CUPY and CUCIMProcessing large images with Python can take time. In order to accelerate processing, graphics processing units (GPUs) can be exploited, for example using [NVidia CUDA](https://en.wikipedia.org/wiki/CUDA). For processing images with CUDA, there are a couple of libraries available. We will take a closer look at [cupy](https://cupy.dev/), which brings more general computing capabilities for CUDA-compatible GPUs, and [cucim](https://github.com/rapidsai/cucim), a library of image-processing-specific operations using CUDA. Both together can serve as a GPU surrogate for [scikit-image](https://scikit-image.org/).See also* [StackOverflow: Is it possible to install cupy on google colab?](https://stackoverflow.com/questions/49135065/is-it-possible-to-install-cupy-on-google-colab)* [Cucim example notebooks](https://github.com/rapidsai/cucim/blob/branch-0.20/notebooks/Welcome.ipynb)Before we start, we need to install CUDA and cuCIM properly. The following commands make this notebook run in Google Colab.
###Code
!curl https://colab.chainer.org/install | sh -
!pip install cucim
!pip install scipy scikit-image cupy-cuda100
import numpy as np
import cupy as cp
import cucim
from skimage.io import imread, imshow
import pandas as pd
###Output
_____no_output_____
###Markdown
In the following, we are using image data from Paci et al., shared under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license. See also: https://doi.org/10.17867/10000140
###Code
image = imread('https://idr.openmicroscopy.org/webclient/render_image_download/9844418/?format=tif')
imshow(image)
###Output
_____no_output_____
###Markdown
In order to process an image using CUDA on the GPU, we need to convert it. Under the hood of this conversion, the image data is sent from the computer's random access memory (RAM) to the GPU's memory.
###Code
image_gpu = cp.asarray(image)
image_gpu.shape
###Output
_____no_output_____
###Markdown
Extracting a single channel out of the three-channel image works just as it would with numpy. Showing the image does not work directly, because the CUDA image is not available in host memory. In order to get it back from GPU memory, we need to convert it to a numpy array.
###Code
single_channel_gpu = image_gpu[:,:,1]
# the following line would fail
# imshow(single_channel_gpu)
# get single channel image back from GPU memory and show it
single_channel = cp.asnumpy(single_channel_gpu)
imshow(single_channel)
# we can also do this with a convenience function
def gpu_imshow(image_gpu):
image = np.asarray(image_gpu)
imshow(image)
###Output
_____no_output_____
###Markdown
Image filtering and segmentationThe cucim developers have re-implemented many functions from scikit-image, e.g. the [Gaussian blur filter](https://docs.rapids.ai/api/cucim/stable/api.htmlcucim.skimage.filters.gaussian), [Otsu Thresholding](https://docs.rapids.ai/api/cucim/stable/api.htmlcucim.skimage.filters.threshold_otsu) after [Otsu et al. 1979](https://ieeexplore.ieee.org/document/4310076), [binary erosion](https://docs.rapids.ai/api/cucim/stable/api.htmlcucim.skimage.morphology.binary_erosion) and [connected component labeling](https://docs.rapids.ai/api/cucim/stable/api.htmlcucim.skimage.measure.label).
###Code
from cucim.skimage.filters import gaussian
blurred_gpu = gaussian(single_channel_gpu, sigma=5)
gpu_imshow(blurred_gpu)
from cucim.skimage.filters import threshold_otsu
# determine threshold
threshold = threshold_otsu(blurred_gpu)
# binarize image by apply the threshold
binary_gpu = blurred_gpu > threshold
gpu_imshow(binary_gpu)
from cucim.skimage.morphology import binary_erosion, disk
eroded_gpu = binary_erosion(binary_gpu, selem=disk(2))
gpu_imshow(eroded_gpu)
from cucim.skimage.measure import label
labels_gpu = label(eroded_gpu)
gpu_imshow(labels_gpu)
###Output
/usr/local/lib/python3.7/dist-packages/skimage/io/_plugins/matplotlib_plugin.py:150: UserWarning: Low image data range; displaying image with stretched contrast.
lo, hi, cmap = _get_display_range(image)
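###Markdown
To get a feeling for the actual speedup (a rough sketch added here, not part of the original notebook; numbers depend heavily on hardware), we can time the Gaussian filter on the CPU with scikit-image against the GPU with cucim. Note that we synchronize the device before stopping the GPU timer, because CUDA kernels are launched asynchronously.
###Code
import time
from skimage.filters import gaussian as gaussian_cpu

# CPU timing with scikit-image
start = time.perf_counter()
_ = gaussian_cpu(single_channel, sigma=5)
cpu_time = time.perf_counter() - start

# GPU timing with cucim; synchronize so the kernel has actually finished
start = time.perf_counter()
_ = gaussian(single_channel_gpu, sigma=5)
cp.cuda.Device().synchronize()
gpu_time = time.perf_counter() - start

print("CPU: %.4f s, GPU: %.4f s" % (cpu_time, gpu_time))
###Output
_____no_output_____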
###Markdown
For visualization purposes, it is recommended to turn the label image into an RGB image, especially if you want to save it to disk.
###Code
from cucim.skimage.color import label2rgb
labels_rgb_gpu = label2rgb(labels_gpu)
gpu_imshow(labels_rgb_gpu)
###Output
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:3: FutureWarning: The new recommended value for bg_label is 0. Until version 0.19, the default bg_label value is -1. From version 0.19, the bg_label default value will be 0. To avoid this warning, please explicitly set bg_label value.
This is separate from the ipykernel package so we can avoid doing imports until
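###Markdown
Since saving to disk happens on the CPU side, we first pull the RGB label image back from GPU memory and convert it to an 8-bit array before writing it out (a small sketch; the filename here is just an example).
###Code
from skimage.io import imsave

labels_rgb = cp.asnumpy(labels_rgb_gpu)  # back to host memory
imsave('labels_rgb.png', (labels_rgb * 255).astype(np.uint8))
###Output
_____no_output_____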
###Markdown
Quantitative measurementsQuantitative measurements using [regionprops_table](https://docs.rapids.ai/api/cucim/stable/api.htmlcucim.skimage.measure.regionprops_table) have also been implemented in cucim. A major difference is that you need to convert the result back to numpy if you want to continue processing on the CPU, e.g. using [pandas](https://pandas.pydata.org/).
###Code
from cucim.skimage.measure import regionprops_table
table_gpu = regionprops_table(labels_gpu, intensity_image=single_channel_gpu, properties=('mean_intensity', 'area', 'solidity'))
table_gpu
# The following line would fail.
# pd.DataFrame(table_gpu)
# We need to convert that table to numpy before we can pass it to pandas.
table = {item[0] : cp.asnumpy(item[1]) for item in table_gpu.items()}
pd.DataFrame(table)
###Output
_____no_output_____ |
02 Algorithms/Day 1.ipynb | ###Markdown
Asymptotics of algorithms and why we need itWe need some way to compare algorithms with each other; in particular, we want to understand which algorithm runs faster (regardless of the hardware) and which one uses more memory.For this, O-notation was invented.There is a mathematical definition: for two functions f(x) and g(x), we say that f is O(g) if there exists a constant C > 0 such that, as x tends to infinity (or to some point $x_0$), the inequality $|f(x)| < C * |g(x)|$ holds.In plain language, saying that an algorithm runs in O(f(n)), where n is the size of the algorithm's input, means that as n grows, the running time of the algorithm grows no faster than f(n) times some constant.You can compute the asymptotics simply by dropping all constants when counting the number of operations the algorithm performs, but some cases are trickier than others.You can read more here: https://bit.ly/3khIrsy An example of an algorithm with O($n^2$) (time) complexity
###Code
def n_square_algo(array):
    # Here, n is len(array)
    # This loop runs in O(n^2) operations
for elem in array:
for elem2 in array:
print(elem * elem2)
    # Does not affect the asymptotic complexity, since we drop all constants (anything that does not depend on n)
for i in range(100000000000000000):
print(i)
    # Take it on faith that sorting a list runs in O(n log n), which is less than O(n^2)
array.sort()
###Output
_____no_output_____
###Markdown
Task 1 (in which we learn the prefix-sums trick and that time can be traded for memory)You are given a list of numbers (both negative and positive); the task is to find the subarray with the maximum sum of elements and, among all such subarrays, the longest one.Example:array = [1, 2, -4, 5, 2, -1, 3, -10, 7, 1, -1, 2]ANSWER: [5, 2, -1, 3] with sum 9 ([7, 1, -1, 2] has the same sum and length) A naive solution with O($n^3$) time and O($1$) memory complexity:
###Code
def max_sum_subarray(array):
n = len(array)
ans = -1
max_sum = float("-inf")
for start in range(n):
for finish in range(start + 1, n):
current_sum = 0
for elem in array[start:finish + 1]:
current_sum += elem
if current_sum > max_sum:
max_sum, ans = current_sum, finish - start + 1
return ans, max_sum
import numpy as np
%%timeit
max_sum_subarray(np.random.rand(100)-0.5)
###Output
55.9 ms ± 2.41 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
A solution somewhat closer to optimal, with O($n^2$) time and O($n$) memory complexity:
###Code
def max_sum_subarray(array):
    # The trick is to first compute the sums of all prefixes of the array, and then compute the sum of any subarray in O(1)
n = len(array)
prefix_sum = [0]
for i in range(n):
prefix_sum.append(prefix_sum[-1] + array[i])
ans = -1
max_sum = -float("inf")
for start in range(n):
for finish in range(start + 1, n):
            # This is how we compute the sum over a subarray using prefix sums
current_sum = prefix_sum[finish + 1] - prefix_sum[start]
if current_sum > max_sum:
max_sum, ans = current_sum, finish - start + 1
return ans, max_sum
%%timeit
max_sum_subarray(np.random.rand(1000)-0.5)
49.6/1.32
###Output
_____no_output_____
###Markdown
38 times faster. Task 2 (in which we learn the two-pointer trick and that time complexity can be reduced by iterating in a special order):You are given 2 arrays of numbers sorted in ascending order; the task is to find one element in each array ($x_1$ and $x_2$ respectively) such that $|x_1 - x_2|$ is minimal.Example:array1 = [-10, -3, 0, 5, 13, 58, 91, 200, 356, 1000, 25000]array2 = [-9034, -574, -300, -29, 27, 100, 250, 340, 900, 60000]ANSWER: 91 and 100 A naive solution with O($n * m$) time and O(1) memory complexity, where n and m are the sizes of the arrays
###Code
def min_difference_pair(arr1, arr2):
n, m = len(arr1) - 1, len(arr2) - 1
min_diff, ans = abs(arr1[0] - arr2[0]), (0, 0)
for i in range(n):
for j in range(m):
diff = abs(arr1[i] - arr2[j])
if diff < min_diff:
min_diff, ans = diff, (i, j)
return min_diff, ans
###Output
_____no_output_____
###Markdown
The optimal solution, with O($n + m$) time and O(1) memory complexity
###Code
def min_difference_pair(arr1, arr2):
n, m = len(arr1) - 1, len(arr2) - 1
pointer1, pointer2 = 0, 0
min_diff, ans = abs(arr1[0] - arr2[0]), (0, 0)
while pointer1 + pointer2 != n + m:
if pointer2 == m or (pointer1 < n and arr1[pointer1] <= arr2[pointer2]):
pointer1 += 1
elif pointer1 == n or (pointer2 < m and arr1[pointer1] >= arr2[pointer2]):
pointer2 += 1
current_diff = abs(arr1[pointer1] - arr2[pointer2])
if current_diff < min_diff:
min_diff, ans = current_diff, (pointer1, pointer2)
return min_diff, ans
###Output
_____no_output_____
###Markdown
Homework1) Practice recursion, for example here: https://informatics.mccme.ru/mod/statements/view.php?id=25431 (the problems are in the menu on the right; you need to register in order to solve them)2) Problems on the two-pointer method: Easy:https://leetcode.com/problems/longest-substring-without-repeating-characters/https://leetcode.com/problems/remove-duplicates-from-sorted-array/https://leetcode.com/problems/merge-sorted-array/ Harder: https://leetcode.com/problems/long-pressed-name/https://leetcode.com/problems/trapping-rain-water/3) Come up with a solution to the first task with O(n) time complexity (for example, using the two-pointer method); a reference sketch is shown below
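For reference, the textbook O(n)-time, O(1)-memory answer to task 1 is Kadane's algorithm, which tracks the best subarray ending at the current position. The cell below is a sketch added by the editor, not part of the original homework.
###Code
def max_sum_subarray_kadane(array):
    """Returns (length, sum) of the maximum-sum subarray, preferring the longest one on ties.
    Assumes a non-empty list."""
    best_sum, best_len = array[0], 1
    cur_sum, cur_len = array[0], 1
    for x in array[1:]:
        # either extend the current subarray or start a new one at x
        if cur_sum + x >= x:
            cur_sum, cur_len = cur_sum + x, cur_len + 1
        else:
            cur_sum, cur_len = x, 1
        # keep the best sum seen so far; on equal sums prefer the longer subarray
        if cur_sum > best_sum or (cur_sum == best_sum and cur_len > best_len):
            best_sum, best_len = cur_sum, cur_len
    return best_len, best_sum
###Output
_____no_output_____
###Markdown
The notebook author's own attempt at an O(n) solution follows below: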
###Code
def max_sum_subarray(array):
"""выдает длину и сумму подстроки, которая дает максимальную сумму
на массиве np.random.rand(1000)-0.5) работает в 37 раз быстрее,
чем алогиртм с лекции """
n = len(array)
# all prefix sum compute
prefix_sum = [0]
for i in range(n):
prefix_sum.append(prefix_sum[-1] + array[i])
start, finish, start_last, finish_last = 0, 0, -1, -1
start_old = -1
finish_old = -1
ans = -1
max_sum = -float("inf")
while start != n - 1:
        # stopping criterion: the start pointer has reached the second-to-last position
if finish == n or ((finish == finish_old) and (start == start_old)):
            # if the finish pointer has reached the end, or we are looking at the same spot twice, move the start pointer
start += 1
current_sum = prefix_sum[finish + 1] - prefix_sum[start]
if current_sum > max_sum:
max_sum, ans = current_sum, finish - start + 1
try:
            # if we reach the end there is no next move, so the next sum would raise an IndexError;
            # in that case, switch the move-forward-towards-the-larger-sum indicator to False
sum_check = (prefix_sum[finish] - prefix_sum[start + 1]) < (prefix_sum[finish + 2] - prefix_sum[start])
except IndexError:
sum_check = False
if ((finish - start) < 1 or sum_check) and (finish + 2 < len(prefix_sum)):
finish += 1
current_sum = prefix_sum[finish + 1] - prefix_sum[start]
if current_sum > max_sum:
max_sum, ans = current_sum, finish - start + 1
start_old = start_last
start_last = start
finish_old = finish_last
finish_last = finish
return ans, max_sum
not(True)
%%timeit
X = [1, 2, -4, 5, 2, -1, 3, -10, 7, 1, -1, 2]
max_sum_subarray(X)
%%timeit
max_sum_subarray(np.random.rand(1000)-0.5)
###Output
3.64 ms ± 653 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
we want it faster than 1 ms
###Code
X = np.random.rand(1000)-0.5
133/3.54
max_sum_subarray(X)
max_sum_subarray(X)
###Output
_____no_output_____ |
0.14/_downloads/plot_evoked_topomap.ipynb | ###Markdown
Plotting topographic maps of evoked data. Load evoked data and plot topomaps for selected time points.
###Code
# Authors: Christian Brodbeck <[email protected]>
# Tal Linzen <[email protected]>
# Denis A. Engeman <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mne.datasets import sample
from mne import read_evokeds
print(__doc__)
path = sample.data_path()
fname = path + '/MEG/sample/sample_audvis-ave.fif'
# load evoked and subtract baseline
condition = 'Left Auditory'
evoked = read_evokeds(fname, condition=condition, baseline=(None, 0))
# set time instants in seconds (from 50 to 150ms in a step of 10ms)
times = np.arange(0.05, 0.15, 0.01)
# If times is set to None only 10 regularly spaced topographies will be shown
# plot magnetometer data as topomaps
evoked.plot_topomap(times, ch_type='mag')
# compute a 50 ms bin to stabilize topographies
evoked.plot_topomap(times, ch_type='mag', average=0.05)
# plot gradiometer data (plots the RMS for each pair of gradiometers)
evoked.plot_topomap(times, ch_type='grad')
# plot magnetometer data as an animation
evoked.animate_topomap(ch_type='mag', times=times, frame_rate=10)
# plot magnetometer data as topomap at 1 time point : 100 ms
# and add channel labels and title
evoked.plot_topomap(0.1, ch_type='mag', show_names=True, colorbar=False,
size=6, res=128, title='Auditory response')
plt.subplots_adjust(left=0.01, right=0.99, bottom=0.01, top=0.88)
###Output
_____no_output_____ |
codes/DEMO3_Inverse_problem/step1_a_FormatGraphs.ipynb | ###Markdown
DEMO3 Inverse problem solving Most of the code is the same as in DEMO2, but slightly different.- This script will format graph databases
###Code
import sys
sys.path.append("../MIGraph/GraphConv/")
from ValueTransformer import ValueTransformer
from ConvGraphScript import drawGraph,checkGraphList
from AutoParameterScaling import AutoParameterScaling
from ConvGraphmlToGraph import loadGraphCSV
from PrepGraphScript import PrepGraphScript
import glob
import os
import joblib
from tqdm import tqdm
import numpy as np
import random
os.chdir("praparingGraphs")
#load PEDOT-PSS files
folderList=glob.glob("input/PEDOTPSS/*")
CSVPathList=[]
graphPathList=[]
for folder in folderList:
CSVPath=folder+"/"+os.path.basename(folder)+".csv"
graphPath=folder+"/graph/"
CSVPathList.append(CSVPath)
graphPathList.append(graphPath)
###Output
_____no_output_____
###Markdown
convert graph-type PEDOT-PSS file
###Code
VT=ValueTransformer()
for CSVPath,graphPath in zip(CSVPathList,graphPathList):
print(CSVPath)
gList=loadGraphCSV(CSVPath,graphPath)
#convert unit etc
gList=VT.convertGraphList(gList)
checkGraphList(gList)
filename=os.path.basename(CSVPath)
outname="temporary/"+filename+".graphbin"
print("saving...", outname)
joblib.dump(gList,outname,compress=3)
#convert wikipedia file
#you can add other compound csv files in additional_simple_comps
csvList=glob.glob("input/additional_simple_comps/*.csv")
print(len(csvList))
sorted(csvList)
def conv(filename):
pgs=PrepGraphScript(filename)
pgs.doFragment=False
pgs.prapareGraphList(numOfMaxFragments=2000)
for num,filename in tqdm(enumerate(csvList)):
print(num, "file: ",filename)
conv(filename)
###Output
0it [00:00, ?it/s]
0%| | 0/1370 [00:00<?, ?it/s][A
1%| | 10/1370 [00:00<00:14, 95.00it/s][A
###Markdown
combine compound databases
###Code
import pandas as pd
# in the case of this PEDOT-PSS_txt project, only one compound file is available, but normally there are many
allCompundsPath="output/allcompounds.csv.gz"
csvList=glob.glob("../convCSVtoGraph/temp/output/*.csv")
csvList2=glob.glob("input/*.csv")
csvgzList=glob.glob("input/*.csv.gz")
compPathList=sorted(list(set(csvList)|set(csvgzList)|set(csvList2)))
print(compPathList)
CompColumns=["ID","SMILES"]
for num,filePath in enumerate(compPathList):
print(filePath)
if num==0:
df=pd.read_csv(filePath)[CompColumns]
else:
df2=pd.read_csv(filePath)[CompColumns]
df=pd.concat([df,df2],axis=0)
df=df.drop_duplicates("ID")
df=df[CompColumns].reset_index()
df.to_csv(allCompundsPath,index=False)
df
###Output
input/20190520wikipedia.csv.gz
input/20190521wikidata.csv.gz
input/20200220PEDOTProcess_comp.csv
###Markdown
delete broken compounds and their graphs
###Code
from rdkit import Chem
from rdkit.Chem import AllChem
compIDtoSMILES=dict(zip(df["ID"],df["SMILES"]))
graphbinList1=glob.glob("temporary/*.graphbin")
graphbinList2=glob.glob("../convCSVtoGraph/temp/output/*.graphbin")
graphbinList=sorted(list(set(graphbinList1)|set(graphbinList2)))
for graphbin in tqdm(graphbinList):
gList=joblib.load(graphbin)
ngList=[]
for g in (gList):
#extract comps
compIDList=[g.nodes[node]["label"] for node in g.nodes if str(g.nodes[node]["label"])[:2]=="C_"]
if np.nan in compIDList:
compIDList=["none"]
print("nan")
if "C_nan" in compIDList:
compIDList=["none"]
#check if mol objects can be made from smiles
try:
SMILESList = [compIDtoSMILES[i[2:]] for i in compIDList]
molList =[Chem.MolFromSmiles(smiles) for smiles in SMILESList]
for mol in molList:
morgan_fps =AllChem.GetMorganFingerprintAsBitVect(mol, 2, 20)
bit=morgan_fps.ToBitString()
ngList.append(g)
except:
print("error",SMILESList)
joblib.dump(ngList,graphbin)
#standardize values (this is not necessary for the PEDOT-PSS project) and finalize the graphs
#** standardizing was done at step 1, because graphs made from automatic text parsing have slightly different forms
# and cannot be standardized by this code (i.e., it was developed for "normal" graphs)
graphbinList1=glob.glob("temporary/*.graphbin")
graphbinList2=glob.glob("../convCSVtoGraph/temp/output/*.graphbin")
graphbinList=sorted(list(set(graphbinList1)|set(graphbinList2)))
print(graphbinList)
AutoSC=AutoParameterScaling()
AutoSC.initialize(graphbinList)
joblib.dump(AutoSC,"output/AutoSC.scaler",compress=3)
AutoSC.autoTransform(graphbinList)
###Output
0%| | 0/2 [00:00<?, ?it/s]
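###Markdown
As a quick illustration of why the `try/except` block in the filtering cell above weeds out broken compounds (an editor's sketch, not part of the original pipeline): `Chem.MolFromSmiles` returns `None` for an invalid SMILES string, and fingerprinting `None` raises an exception.
###Code
from rdkit import Chem
from rdkit.Chem import AllChem

good = Chem.MolFromSmiles("c1ccccc1")     # benzene parses fine
bad = Chem.MolFromSmiles("not_a_smiles")  # invalid SMILES -> None

print(good is None, bad is None)

try:
    AllChem.GetMorganFingerprintAsBitVect(bad, 2, 20)
except Exception as e:
    print("fingerprinting failed:", type(e).__name__)
###Output
_____no_output_____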
###Markdown
check graphs
###Code
graphbinList=glob.glob("output/*.graphbin")
gList=[]
for file in tqdm(graphbinList):
print(file)
temp=joblib.load(file)
gList.extend(temp)
print(len(gList), " plots")
number=0
#draw
drawGraph(gList[number])
g=gList[number]
nodeVals=[g.nodes[node]["label"] for node in g.nodes]
nodeVals
###Output
_____no_output_____ |
docs/source/notebooks/Recipe_Get_Schedule_Year.ipynb | ###Markdown
Recipe: Get schedules for a whole year. Problem: Get schedules for a whole year. Solution
###Code
import datetime as dt
import pandas as pd
from cro.schedule.sdk import Client, Schedule
YEAR = 2022
month_dates = [dt.date(YEAR, month, 1) for month in range(1, 3)]  # NOTE: only January and February are fetched here; use range(1, 13) for the full year
data: dict[Schedule, pd.DataFrame] = {}
client = Client() # Set the station id (sid) later within the for loop.
###Output
_____no_output_____
###Markdown
Fetch the schedules for the stations Plus and Radiožurnál from the beginning of the year.
###Code
from tqdm import tqdm
for sid in ("plus", "radiozurnal"):
client.station = sid
for date in tqdm(month_dates):
schedules = client.get_month_schedule(date)
for schedule in schedules:
data[schedule] = schedule.to_table()
###Output
100%|██████████| 2/2 [00:06<00:00, 3.23s/it]
100%|██████████| 2/2 [00:06<00:00, 3.24s/it]
###Markdown
Write single dataset to Excel
###Code
for schedule, table in tqdm(data.items()):
week_number = f"{schedule.date.isocalendar()[1]:02d}"
week_start = schedule.date - dt.timedelta(days=schedule.date.weekday()) # Monday
week_end = week_start + dt.timedelta(days=6) # Sunday
with pd.ExcelWriter(
f"../../../data/sheet/{YEAR}/Schedule_{schedule.station.name}_{YEAR}W{week_number}_{week_start}_{week_end}.xlsx"
) as writer:
table.to_excel(writer, index=False)
###Output
100%|██████████| 118/118 [00:14<00:00, 7.89it/s]
###Markdown
Write concatenated datasets to Excel
###Code
with pd.ExcelWriter(f"../../../data/sheet/Schedule_Y{YEAR}.xlsx") as writer:
pd.concat(data.values()).to_excel(writer)
###Output
_____no_output_____ |
extractive_summarization/english.ipynb | ###Markdown
*Extractive summarization* in EnglishThe goal of this project is to build a model capable of producing summaries of the CNN and Daily Mail news dataset in **English**. The summaries are obtained using the extraction methodology (*extractive summarization*), i.e., the generated summary is built from the most relevant sentences of the original text. The project consists of several sections:- Environment setup- Data analysis- Data preprocessing - Analysis of text length- Model construction- Generating new summaries Environment setup
###Code
# Required libraries
import tensorflow as tf
import tensorflow_datasets as tfds
import pandas as pd
import math as m
import re
from itertools import chain, groupby
from bs4 import BeautifulSoup
from collections import Counter
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
nltk.download('punkt')
import heapq
from google.colab import drive
drive.mount('/content/drive')
###Output
Mounted at /content/drive
###Markdown
Data analysis
###Code
data = pd.read_csv('/content/drive/MyDrive/TFM/data/en_train.csv')
def dataframe_ok(df):
    df.drop(['Unnamed: 0'], axis = 1, inplace = True) # Drop the index column
    df.columns = ['Text', 'Summary'] # Assign a name to each column
dataframe_ok(data)
data.shape # Data dimensions: 1000 rows (news articles) with two columns: text and summary
data.head() # Look at the first five rows of the data
# Inspect the full text of the first three rows
for i in range(3):
print("Noticia #",i+1)
print(data.Summary[i])
print(data.Text[i])
print()
# Check the number of null values
data.isnull().sum()
###Output
_____no_output_____
###Markdown
Data preprocessingData preprocessing is one of the most important parts of a natural language processing project. Extractive text summarization starts from the hypothesis that the main topic of a text is given by the words that appear most frequently. Consequently, the summary is generated from the sentences that contain the largest number of those words. For this reason, this type of automatic summarization does not require heavy modification of the original texts to make them more natural. Depending on the language used to train the model, the data cleaning tasks may vary. Remember that in this *notebook* we work with texts in English. **Data preprocessing:**- **Remove uppercase letters**: Python distinguishes between uppercase and lowercase characters, so the words *News* and *news* would be interpreted as different. However, to understand the text correctly this should not be the case, which is why all text is converted to lowercase. - **Remove leftovers from the data import**: The dataset was downloaded with the TensorFlow library. The data comes wrapped inside `tf.tensor\(b" [...] ", shape=\(\), dtype=string\)`, characters that must be removed because they are not part of the original text to be analyzed. In addition, character sequences such as `xe2`, `xc2`... remain; they carry no meaning and appear very frequently in the texts. - **Remove line breaks ./n**- **Remove text between parentheses**: relevant information is usually not placed between parentheses, so it can be dropped to reduce the amount of information the model has to analyze.- **Remove special characters**- **Remove 's**- **Replace contractions with their original form**: [dictionary to expand the contractions](https://www.analyticsvidhya.com/blog/2019/06comprehensive-guide-text-summarization-using-deep-learning-python/)
###Code
# Dictionary to expand contractions
contraction_mapping_upper = {"ain't": "is not","can't": "cannot", "'cause": "because", "could've": "could have",
"he'd": "he would","he'll": "he will", "he's": "he is", "how'd": "how did", "how'd'y": "how do you", "how'll": "how will", "how's": "how is",
"I'd": "I would", "I'd've": "I would have", "I'll": "I will", "I'll've": "I will have","I'm": "I am", "I've": "I have", "i'd": "i would",
"i'd've": "i would have", "i'll": "i will", "i'll've": "i will have","i'm": "i am", "i've": "i have", "it'd": "it would",
"it'd've": "it would have", "it'll": "it will", "it'll've": "it will have", "let's": "let us", "ma'am": "madam",
"mayn't": "may not", "might've": "might have", "mightn't've": "might not have", "must've": "must have",
"mustn't've": "must not have", "needn't've": "need not have","o'clock": "of the clock",
"oughtn't": "ought not", "oughtn't've": "ought not have", "sha'n't": "shall not", "shan't've": "shall not have",
"she'd": "she would", "she'd've": "she would have", "she'll": "she will", "she'll've": "she will have",
"shouldn't've": "should not have", "so've": "so have","so's": "so as",
"this's": "this is","that'd": "that would", "that'd've": "that would have", "that's": "that is", "there'd": "there would",
"there'd've": "there would have", "there's": "there is", "here's": "here is","they'd": "they would", "they'd've": "they would have",
"they'll": "they will", "they'll've": "they will have", "they're": "they are", "they've": "they have", "to've": "to have",
"we'd": "we would", "we'd've": "we would have", "we'll": "we will", "we'll've": "we will have", "we're": "we are",
"we've": "we have", "what'll": "what will", "what'll've": "what will have", "what're": "what are",
"what's": "what is", "what've": "what have", "when's": "when is", "when've": "when have", "where'd": "where did", "where's": "where is",
"where've": "where have", "who'll": "who will", "who'll've": "who will have", "who's": "who is", "who've": "who have",
"why's": "why is", "why've": "why have", "will've": "will have", "won't've": "will not have",
"would've": "would have", "wouldn't've": "would not have", "y'all": "you all",
"y'all'd": "you all would","y'all'd've": "you all would have","y'all're": "you all are","y'all've": "you all have",
"you'd've": "you would have", "you'll've": "you will have"}
contraction_mapping = dict((k.lower(), v) for k, v in contraction_mapping_upper .items()) # Convert all key-value pairs of the dictionary to lowercase
# Stop words: words that have no meaning on their own (articles, pronouns, prepositions)
stop_words = set(stopwords.words('english'))
def clean_text(text):
    clean = text.lower() # Convert everything to lowercase
    """ Clean the data """
    # Remove the tensorflow leftovers, both for the single-quoted and the double-quoted cases
clean = re.sub('tf.tensor\(b"', "", clean)
clean = re.sub('", shape=\(\), dtype=string\)',"",clean)
clean = re.sub("tf.tensor\(b'", "", clean)
clean = re.sub("', shape=\(\), dtype=string\)","",clean)
    # Remove line breaks
clean = clean.replace('.\\n','')
    # Remove text inside parentheses
clean = re.sub(r'\([^)]*\)', '', clean)
    # Remove 's
clean = re.sub(r"'s", "", clean)
    # Expand the contractions
    clean = ' '.join([contraction_mapping[t] if t in contraction_mapping else t for t in clean.split(" ")]) # Replace contractions
    # Remove special characters
clean = re.sub("[^a-zA-Z, ., ,, ?, %, 0-9]", " ", clean)
    # Add a space around punctuation marks and symbols
clean = clean.replace(".", " . ")
clean = clean.replace(",", " , ")
clean = clean.replace("?", " ? ")
    # Remove strange characters
clean = re.sub(r'xe2', " ", clean)
clean = re.sub(r'xc2', " ", clean)
clean = re.sub(r'x99s', " ", clean)
clean = re.sub(r'x99t', " ", clean)
clean = re.sub(r'x99ve', " ", clean)
clean = re.sub(r'x99', " ", clean)
clean = re.sub(r'x98i', " ", clean)
clean = re.sub(r'x93', " ", clean)
clean = re.sub(r'xa0', " ", clean)
clean = re.sub(r'x80', " ", clean)
clean = re.sub(r'x94', " ", clean)
clean = re.sub(r'x98', " ", clean)
clean = re.sub(r'x0', " ", clean)
clean = re.sub(r'x81', " ", clean)
clean = re.sub(r'x82', " ", clean)
clean = re.sub(r'x83', " ", clean)
clean = re.sub(r'x84', " ", clean)
clean = re.sub(r'x89', " ", clean)
clean = re.sub(r'x9', " ", clean)
clean = re.sub(r'x8', " ", clean)
clean = re.sub(r'x97', " ", clean)
clean = re.sub(r'x9c', " ", clean)
    tokens = [w for w in clean.split()] # Join the words back together
return (" ".join(tokens).strip())
# Clean the summaries and the texts
clean_summaries = []
for summary in data.Summary:
    clean_summaries.append(clean_text(summary)) # Remove_stopwords = False: keep the summaries more natural
print("Sumarios completados.")
clean_texts = []
for text in data.Text:
    clean_texts.append(clean_text(text)) # Remove_stopwords = True: stop words add no information, so they are irrelevant for training the model
print("Textos completados.")
# Inspect the cleaned summaries and texts to check that the cleaning was done correctly
for i in range(3):
print("Noticia #",i+1)
print('Sumario: ', clean_summaries[i])
print('Texto: ',clean_texts[i])
print()
###Output
Noticia # 1
Sumario: chilton is hopeful of racing at marussia for a third straight season briton form has dipped in recent races after finishing 13th in bahrain but he is confident of staying at marussia beyond this season .
Texto: by . ian parkes , press association . max chilton has every confidence he will be retained by marussia for a third consecutive season . chilton started the campaign relatively strongly , claiming the best results of his formula one career by finishing 13th in the season opening race in australia and again in bahrain . in monaco , however , team mate jules bianchi stole chilton thunder as the frenchman scored marussia first points from their four and a half years in f1 with ninth place in monaco . centre of attention max chilton remains hopeful of being retained by marussia for the 2015 season . since then chilton has struggled for form and results , but the 23 year old from reigate in surrey sees no reason why marussia would not retain him for 2015 . i naturally want to stay with the team , said chilton . like a lot of these things they filter down from the top , and there are a lot of rumours with regard to the top of the grid , with people moving around and you don t really know where you stand until then . i won t focus on that until later on in the year , but i am confident i will be here next year . i have had good races this year . i started off fairly strong , and okay the last few have not been particularly great , but i feel we have got to the bottom of that . overall i have been consistent and had good results . chilton may yet be thanking bianchi for that result in monaco as the young briton would like to believe it could play a key role in his own future . on track the british driver joined the team in 2013 and finished every race of his debut season . those two points mean marussia lie ninth in the constructors title race ahead of both sauber and caterham . if marussia can hold on to that position the financial rewards would be considerable , which in turn may mean chilton not having to find the cash to fund his seat . marussia have a good future , especially if we hold off sauber for ninth . that would really build up momentum , added chilton . that would be a big help to the team financially if we could do that as it would help us develop the car for next year . if the team gets this ninth then we might not need to worry about that . that just me really thinking of the bigger picture . i ve not properly looked into it , but i would like to think i could continue to help the team develop over the next couple of years .
Noticia # 2
Sumario: president obama will head back to the white house on sunday night as tensions rise in missouri and iraq the decision appears aimed at countering criticism that the president was spending two weeks on a resort island in the midst of so many crises .
Texto: president barack obama is getting off the island . in a rare move for him , the president planned a break in the middle of his martha vineyard vacation to return to washington on sunday night for meetings with vice president joe biden and other advisers on the u . s . military campaign in iraq and tensions between police and protesters in ferguson , missouri . the white house has been cagey about why the president needs to be back in washington for those discussions . he received multiple briefings on both issues while on vacation . the white house had also already announced obama plans to return to washington before the u . s . airstrikes in iraq began and before the shooting of a teen in ferguson that sparked protests . protective gesture obama walks with daughter malia obama to board air force one at cape cod coast guard air station in massachusetts on sunday . back home obama and malia are seen at joint base andrews in washington early monday . mysterious the white house has been cagey about why the president needs to be back in washington . he is seen here on the south lawn of the white house with daughter malia . in good spirits despite the early return , the president and first daughter seemed to be enjoying a joke . part of the decision to head back to washington appears aimed at countering criticism that obama is spending two weeks on a resort island in the midst of so many foreign and domestic crises . yet those crises turned the first week of obama vacation into a working holiday . he made on camera statements iraq and the clashes in ferguson , a st . louis suburb . he also called foreign leaders to discuss the tensions between ukraine and russia , as well as between israel and hamas . i think it fair to say there are , of course , ongoing complicated situations in the world , and that why you ve seen the president stay engaged , white house spokesman eric schultz said . obama returned from his break along with his 16 year old daughter mailia , but is scheduled to return to martha vineyard on tuesday and stay through next weekend . in a first for obama family summer vacations , neither teenager is spending the entire holiday with her father . obama left washington aug . 9 with his wife , michelle , daughter malia , and the family two portuguese water dogs . the white house said 13 year old sasha would join her parents at a later date for part of their stay on this quaint island of shingled homes . but malia will not be around when her younger sister arrives . the daughters essentially are trading places , and the vacation is boiling down to obama getting about a week with each one . malia returned to washington with her father and is not expected to go back to martha vineyard . the white house said sasha will join her parents this week , without saying when she will arrive or what kept her away last week , or why malia left the island . president barack obama bike rides with daughter malia obama while on vacation with his family on the island of martha vineyard . obama often draws chuckles from sympathetic parents who understand his complaints about his girls lack of interest in spending time with him . what i m discovering is that each year , i get more excited about spending time with them . they get a little less excited , obama told cnn last year . even though work has occupied much of obama first week on vacation , he still found plenty of time to golf , go to the beach with his family and go out to dinner on the island . 
he hit the golf course one more time sunday ahead of his departure , joining two aides and former nba player alonzo mourning for an afternoon round . he then joined wife michelle for an evening jazz performance featuring singer rachelle ferrell . obama vacation has also been infused with a dose of politics . he headlined a fundraiser on the island for democratic senate candidates and attended a birthday party for democratic adviser vernon jordan wife , where he spent time with former president bill clinton and hillary rodham clinton . that get together between the former rivals turned partners added another complicated dynamic to obama vacation . just as obama was arriving on martha vineyard , an interview with the former secretary of state was published in which she levied some of her sharpest criticism of obama foreign policy . clinton later promised she and obama would hug it out when they saw each other at jordan party . no reporters were allowed in , so it not clear whether there was any hugging , but the white house said the president danced to nearly every song .
Noticia # 3
Sumario: texas governor rick perry announced this afternoon that he will dispatch up to 1 , 000 texas national guard troops to the border the deployment will cost texas taxpayers 12 million a month , according to a leaked memo perry has asked obama multiple times to send the national guard to the border but obama keeps refusing a texas lawmaker said the republican governor is not sincerely concerned about the border , he just wants to play politics .
Texto: texas governor rick perry announced today his intentions to deploy up to 1 , 000 texas national guard troops to his state southern border , which is also the u . s . border with mexico . there . can be no national security without border security , and texans have paid too . high a price for the federal government failure to secure our border , the republican governor said at a news conference this afternoon . the action i am ordering today will tackle this crisis head on by . multiplying our efforts to combat the cartel activity , human traffickers and . individual criminals who threaten the safety of people across texas and . america . according to a memo leaked late last night to the monitor , the executive action will cost texas taxpayers 12 million a month . scroll down for video . done waiting around texas governor rick perry said this afternoon that he is deploying up to 1 , 000 texas national guard troops to the state southern border , which is also the u . s . border with mexico . already , perry has instructed the texas department of public safety to increase personnel in the rio grande river valley area at a weekly cost of 1 . 3 million . added together , the two measures will cost 5 million a week , the memo reportedly states , and it is not clear where the money will come . from in the budget other than non critical areas like health care or transportation . the rise in border protection measures follows a surge of central american children streaming into the u . s . from mexico . more than 57 , 000 immigrant children , many of whom are unaccompanied , have illegally entered the country since last year , and the government estimates that approximately 90 , 000 will arrive by the close of this year . u . s . border patrol has been overloaded by the deluge , and the federal government is quickly running out of money to care for the children . congress is in the process of reviewing a 3 . 7 billion emergency funding request from president barack obama that would appropriate additional money to the agencies involved , but house republicans remain skeptical of the president plan . roughly half of the money obama asking for would go toward providing humanitarian aid to the children while relatively little would go toward returning the them to their home countries . furthermore , republicans would like to see changes to a 2008 trafficking law that requires the government to give children from non contiguous countries who show up at the border health screenings and due process before they can be sent home . the judicial process often takes months , and even years , clogging up courts and slowing down the repatriation process . the president had initially planned to include a revised version of the 2008 legislation in his request to congress that would have allowed the department of homeland security to exercise the discretion to bypass the current process by giving children the option to voluntarily return home . obama backed down at the last minute after receiving negative feedback from democratic lawmakers . perry held a news conference with attorney general greg abbott , right , this afternoon in austin , texas , to formally announce the deployment . perry said the national guard troops were needed to combat criminals that are exploiting a surge of children and families entering the u . s . illegally . 
also not included in obama request to congress was funding for a national guard deployment to the border something house speaker john boehner and the texas governor had both called on the president to do . republicans say a national guard presence is needed at areas of high crime to help border patrol agents crack down on smugglers and drug cartels . in a face to face meeting with obama when the president came to texas two weeks ago perry again asked the president to deploy the national guard through a federally funded statue but obama resisted . perry is now taking matters into his own hands , sending his own set of troops down to the rio grande valley to aid law enforcement officials . state senator juan chuy hinojosa , a democrat who represents border town mcallen , criticized the republican governor deployment as unnecessary . the cartels are taking advantage of the situation , he told the monitor . but our local law enforcement from the sheriff offices of the different counties to the different police departments are taking care of the situation . this is a civil matter , not a military matter . what we need is more resources to hire more deputies , hire more border patrol , hinojosa said . these are young people , just families coming across . they are not armed . they are not carrying weapons . the leaked memo on the national guard deployment specifically denies that it is a militarization of the border , however . and perry office reiterated today that troops would work seamlessly and side by side with law enforcement officials . hinojosa also accused perry , who recently toured the border with sean hannity as part of a special for fox news , of being insincere in his concern about the situation at the border . all . these politicians coming down to border , they don t care about solving . the problem , they just want to make a political point , he said .
###Markdown
Analysis of text length
###Code
text_lengths =[]
for i in (range(0,len(clean_texts))):
text_lengths.append(len(clean_texts[i].split()))
import matplotlib.pyplot as plt
plt.title('Número de palabras de los textos')
plt.hist(text_lengths, bins = 30)
text_sentences =[]
for i in (range(0,len(clean_texts))):
text_sentences.append(len(clean_texts[i].split(".")))
import matplotlib.pyplot as plt
plt.title('Número de frases de los textos')
plt.hist(text_sentences, bins = 30)
summaries_lengths =[]
for i in (range(0,len(clean_summaries))):
summaries_lengths.append(len(clean_summaries[i].split()))
import matplotlib.pyplot as plt
plt.title('Número de palabras de los sumarios')
plt.hist(summaries_lengths, bins = 30)
summaries_sentences =[]
for i in (range(0,len(clean_summaries))):
summaries_sentences.append(len(clean_summaries[i].split(".")))
import matplotlib.pyplot as plt
plt.title('Número de frases de los sumarios')
plt.hist(summaries_sentences, bins = 30)
# Returns the frequency with which each word appears in the text
def count_words(count_dict, text):
    for sentence in text: # Iterate over the texts in the given collection; each sentence is one of these texts.
        for word in sentence.split(): # Split the texts into words
if word not in count_dict:
count_dict[word] = 1
else:
count_dict[word] += 1
word_frequency = {}
count_words(word_frequency, clean_summaries)
count_words(word_frequency, clean_texts)
print("Vocabulario total:", len(word_frequency))
# Look for leftovers from the text conversion ('x99', 'x99s', 'x98', etc.) to include them in the clean_text function
import operator
sorted(word_frequency.items(), key=operator.itemgetter(1), reverse=True )
###Output
_____no_output_____
###Markdown
Building the modelTo generate extractive text summaries, we need to know which sentences of the original text carry the most relevant information. To that end, the following steps are applied to each news article in the dataset:- Compute the frequency of occurrence of each word.- Compute the weighted frequency of each word, defined as its frequency divided by the frequency of the most frequent word in the text.- Compute a score for each sentence of the text, defined as the sum of the weighted frequencies of the words that make up the sentence.- Select the N highest-scoring sentences and build the summary from them.
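Before applying this to the full dataset, a toy example may help to illustrate the scoring (a minimal sketch with made-up sentences, ignoring stop words for brevity):
###Code
toy = "the cat sat on the mat. the dog barked. cats and dogs play."
# word frequencies
freqs = {}
for w in toy.replace(".", "").split():
    freqs[w] = freqs.get(w, 0) + 1
# weighted frequency = frequency / frequency of the most common word
max_freq = max(freqs.values())
weighted = {w: f / max_freq for w, f in freqs.items()}
# sentence score = sum of the weighted frequencies of its words
scores = {s: sum(weighted.get(w, 0) for w in s.replace(".", "").split())
          for s in toy.split(". ")}
print(weighted)
print(scores)
###Output
_____no_output_____
###Markdown
The same steps are now implemented for the news articles: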
###Code
def word_frequency (word_frequencies, text):
""" Calcula la frecuencia de las palabras en cada uno de los textos y añadirlo como pares clave-valor a un diccionario
Las palabras añadidas no deben ser ni stop words ni signos de puntuación"""
punctuations = {".",":",",","[","]", "“", "|", "”", "?"}
for word in nltk.word_tokenize(text):
if word not in stop_words:
if word not in punctuations:
if word not in word_frequencies.keys():
word_frequencies[word] = 1
else:
word_frequencies[word] += 1
word_freq_per_text = [] # Lista recogiendo los diccionarios de las frecuencias de aparición de las palabras de cada texto
for text in clean_texts:
word_frequencies = {}
word_frequency(word_frequencies, text) # Devuelve el diccionario de frecuencias de las palabras
word_freq_per_text.append(word_frequencies)
def word_score(index):
""" Calcula la puntuación ponderada de cada una de las palabras del texto mediante la fórmula: frecuencia_palabra / frecuencia_máxima
siendo la frecuencia_palabra el número de veces que aparece en el texto la palabra en cuestión y la frecuencia_máxima
el número de veces que aparece en el texto la palabra más repetida"""
sentence_list = nltk.sent_tokenize(clean_texts[index])
word_frequency = word_freq_per_text[index]
maximum_frequency = max(word_freq_per_text[index].values()) #Frecuencia de la palabra que más veces aparece
for word in word_freq_per_text[index].keys():
word_freq_per_text[index][word] = (word_freq_per_text[index][word]/maximum_frequency) # Cálculo de la puntuación de cada una de las palabras del texto: word_freq/max_freq
for i in range(0, len(clean_texts)):
word_score(i)
def sentence_score(sentence_scores, index):
""" Calcula la puntuación de cada una de las frases del texto siendo esta la suma de las frecuencias
ponderadas de todas las palabras que conforman el texto"""
sentence_list = nltk.sent_tokenize(clean_texts[index]) # Tokenización de las frases del texto
for sent in sentence_list:
for word in nltk.word_tokenize(sent.lower()):
if word in word_freq_per_text[index].keys():
if len(sent.split(' ')) < 20:
if sent not in sentence_scores.keys():
sentence_scores[sent] = word_freq_per_text[index][word]
else:
sentence_scores[sent] += word_freq_per_text[index][word]
sent_sc_per_text = [] # Lista recogiendo los diccionarios de las frases y sus puntuaciones
for i in range(0, len(clean_texts)):
sentence_scores = {}
sentence_score(sentence_scores, i) # Devuelve el diccionario de la puntuación de la frase
sent_sc_per_text.append(sentence_scores)
###Output
_____no_output_____
###Markdown
Generating new summariesIn the previous section, *Analysis of the length of the texts*, the number of words and sentences of the news articles and their reference summaries was examined. The plots showed that the length of the texts varies widely, between 5 and 134 sentences, while the summaries have between 1 and 19 sentences. The number of sentences used to build the extractive summary has to be specified in advance. Since the lengths of the texts vary so much, it did not seem appropriate to fix a single number of sentences for every text in the dataset; instead, the number of sentences to select is set to 25% of the total number of sentences of the original text.
###Code
def generate_summary(index):
""" Genera el resumen del texto en función de las n_sentences con mayor puntuación"""
n_sentences = m.ceil(len(nltk.sent_tokenize(clean_texts[index]))*25/100)
summary_sentences = heapq.nlargest(n_sentences, sent_sc_per_text[index], key=sent_sc_per_text[index].get)
summary = ' '.join(summary_sentences)
#Eliminar un espacio antes de los signos de puntuación y los símbolos
summary = summary.replace(" .", ".")
summary = summary.replace(" ,", ",")
summary = summary.replace(" ?", "?")
return summary
generated_summaries = []
for i in range(0, len(clean_texts)):
new_summary = generate_summary(i) # Devuelve el resumen generado
generated_summaries.append(new_summary)
# Inspeccionar el texto completo de las tres primeras filas y los resúmenes que se han generado
for i in range(3):
print("\nNoticia #",i+1)
print('\nTexto original: ', clean_texts[i])
print('\nResumen original: ', clean_summaries[i])
print('\nResumen generado: ', generated_summaries[i])
print()
###Output
_____no_output_____ |
Exercises-with-open-data/Advanced/Normfit-transversemomentum+pseudorapidity.ipynb | ###Markdown
Creating a normfit and transverse momentum+pseudorapidity The point of this exercise is to learn to create a normal distribution fit for the data, and to learn what transverse momentum and pseudorapidity are (and how they are linked together). The data used is open data released by the [CMS](https://home.cern/about/experiments/cms) experiment. First the fit Let's begin by loading the needed modules and data, and creating a histogram of the data to see the more interesting points (the area for which we want to create the fit).
###Code
# This is needed to create the fit
from scipy.stats import norm
import pandas as pd
import numpy as np
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
# Let's choose Dimuon_DoubleMu.csv
data = pd.read_csv('http://opendata.cern.ch/record/545/files/Dimuon_DoubleMu.csv')
# And save the invariant masses to iMass
iMass = data['M']
# Plus draw the histogram
n, bins, patches = plt.hist(iMass, 300, facecolor='g')
plt.xlabel('Invariant Mass (GeV)')
plt.ylabel('Amount')
plt.title('Histogram of the invariant masses')
plt.show()
###Output
_____no_output_____
###Markdown
Let's take a closer look at the bump around 90 GeV.
###Code
min = 85
max = 97
# Let's crop the area. croMass now includes all the masses between the values of min and max
croMass = iMass[(min < iMass) & (iMass < max)]
# Calculate the mean (µ) and standard deviation (sigma) of normal distribution using norm.fit-function from scipy
(mu, sigma) = norm.fit(croMass)
# Histogram of the cropped data. Note that the data is normalized (density = 1)
n, bins, patches = plt.hist(croMass, 300, density = 1, facecolor='g')
# norm.pdf calculates the normal distribution's y-value with the given µ and sigma
# let's also draw the distribution to the same image with histogram
y = norm.pdf(bins, mu, sigma)
l = plt.plot(bins, y, 'r-.', linewidth=3)
plt.xlabel('Invariant Mass (GeV)')
plt.ylabel('Probability')
plt.title(r'$\mathrm{Histogram \ and\ fit,\ where:}\ \mu=%.3f,\ \sigma=%.3f$' %(mu, sigma))
plt.show()
###Output
_____no_output_____
###Markdown
Does the invariant mass distribution follow a normal distribution?How does cropping the data affect the distribution? (Try to crop the data with different values of min and max)Why do we need to normalize the data? (Check how the image changes if you remove the normalisation [density]) And then about transverse momenta and pseudorapidity Transverse momentum $p_t$ means the momentum that is perpendicular to the beam. It can be calculated from the momenta in the x and y directions using vector analysis, but (in most datasets from CMS at least) it can also be found directly in the loaded data.Pseudorapidity describes the angle between the particle and the beam, although not using any 'classical' angle values. You can see the connection between degrees (°) and pseudorapidity in an image a bit later. Pseudorapidity is the column Eta $(\eta)$ in the loaded data.
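As a quick sanity check, both quantities can also be recomputed from the Cartesian momentum components (a minimal sketch; it assumes the file also provides the columns px1, py1 and pz1, as most CMS dimuon CSVs do):
###Code
# transverse momentum is the component perpendicular to the beam (z) axis
pt_check = np.sqrt(data.px1**2 + data.py1**2)
# pseudorapidity eta = arctanh(pz/|p|), equivalent to -ln(tan(theta/2))
p_tot = np.sqrt(data.px1**2 + data.py1**2 + data.pz1**2)
eta_check = np.arctanh(data.pz1 / p_tot)
# both should agree with the stored pt1 and eta1 columns
print((pt_check - data.pt1).abs().max())
print((eta_check - data.eta1).abs().max())
###Output
_____no_output_____
###Markdown
Let's check out what the distribution of transverse momenta looks like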
###Code
# allPt now includes all the transverse momenta
allPt = pd.concat([data.pt1, data.pt2])
# concat-command from the pandas module combines (concatenates) the information to a single column
# (it returns here a DataFrame -type variable, but it only has a singe unnamed column, so later
# we don't have to choose the wanted column from the allPt variable)
# And the histogram
plt.hist(allPt, bins=400, range = (0,50))
plt.xlabel('$p_t$ (GeV)', fontsize = 12)
plt.ylabel('Amount', fontsize = 12)
plt.title('Histogram of transverse momenta', fontsize = 15)
plt.show()
###Output
_____no_output_____
###Markdown
Looks like most of the momenta are between 0 and 10. Let's use this to limit the data we're about to draw
###Code
# using the below cond, we only choose the events below that amount (pt < cond)
cond = 10
smallPt = data[(data.pt1 < cond) & (data.pt2 < cond)]
# Let's save all the etas and pts to variables
allpPt = pd.concat([smallPt.pt1, smallPt.pt2])
allEta = pd.concat([smallPt.eta1, smallPt.eta2])
# and draw a scatterplot
plt.scatter(allEta, allpPt, s=1)
plt.ylabel('$p_t$ (GeV)', fontsize=13)
plt.xlabel('Pseudorapidity ($\eta$)', fontsize=13)
plt.title('Transverse momenta vs. pseudorapidity', fontsize=15)
plt.show()
###Output
_____no_output_____ |
reinforcement learning/03-TD_0.ipynb | ###Markdown
TD(0) learningTD(0) is an alternative algorithm that can estimate an environment's value function. The main difference is that TD(0) doesn't need to wait until the agent has reached the goal to update a state's estimated value.Here is the example pseudo-code we will be implementing:From: Sutton and Barto, 2018. Ch. 6.
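In equation form, the tabular TD(0) update applied after every single transition $S_t \rightarrow S_{t+1}$ with reward $R_{t+1}$ is$$V(S_t) \leftarrow V(S_t) + \alpha\,\big[R_{t+1} + \gamma\, V(S_{t+1}) - V(S_t)\big],$$where $\alpha$ is the learning rate and $\gamma$ the discounting factor defined below.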
###Code
# first, import necessary modules
import sys
import gym
import random
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# add your own path to the RL repo here
sys.path.append('/Users/wingillis/dev/reinforcement-learning')
from collections import defaultdict
from lib.envs.gridworld import GridworldEnv
from lib.plotting import plot_gridworld_value_function, plot_value_updates
sns.set_style('white')
# define some hyperparameters
gamma = 0.75 # discounting factor
alpha = 0.1 # learning rate - a low value is more stable
n_episodes = 5000
# initialize the environment
shape = (5, 5) # size of the gridworld
env = GridworldEnv(shape, n_goals=2)
env.seed(23)
random.seed(23)
# define a policy function
def policy_fun():
return random.randint(0, 3)
deltas = defaultdict(list)
# initialize the value function (one entry per grid cell)
V = np.zeros(np.prod(shape))
# one possible completion, assuming the classic gym API: env.reset() -> state, env.step(a) -> (state, reward, done, info)
for i in range(n_episodes):
    # reset the env and get current state
    state = env.reset()
    while True:
        # select the next action
        action = policy_fun()
        # conduct the selected action, store the results
        next_state, reward, done, _ = env.step(action)
        # update the value function (TD(0) rule) and store the change from the old value to the new one
        old_value = V[state]
        V[state] = old_value + alpha * (reward + gamma * V[next_state] - old_value)
        deltas[state].append(abs(V[state] - old_value))
        # stop iterating if you've reached the end
        if done:
            break
        # update the current state for the next loop
        state = next_state
###Output
_____no_output_____
###Markdown
Reference learning and value functionFor:- gamma: 0.75- alpha: 0.1- n_episodes: 5000- shape: (5, 5)
###Code
V.reshape(shape)
fig = plot_value_updates(deltas[12][:100])
plt.imshow(V.reshape(shape), cmap='mako')
plt.colorbar()
fig = plot_gridworld_value_function(V.reshape(shape))
fig.tight_layout()
###Output
_____no_output_____ |
examples/howto/Linked panning.ipynb | ###Markdown
Linked Panning"Linked Panning" is the capability of updating multiple plot ranges simultaneously as a result of panning one plot. In Bokeh, linked panning is achieved by shared `x_range` and/or `y_range` values between plots. Execute the cells below and pan the resulting plots.
###Code
from bokeh.io import output_notebook, show
from bokeh.layouts import gridplot
from bokeh.plotting import figure
output_notebook()
x = list(range(11))
y0 = x
y1 = [10-xx for xx in x]
y2 = [abs(xx-5) for xx in x]
# create a new plot
s1 = figure(width=250, height=250, title=None)
s1.circle(x, y0, size=10, color="navy", alpha=0.5)
# create a new plot and share both ranges
s2 = figure(width=250, height=250, x_range=s1.x_range, y_range=s1.y_range, title=None)
s2.triangle(x, y1, size=10, color="firebrick", alpha=0.5)
# create a new plot and share only one range
s3 = figure(width=250, height=250, x_range=s1.x_range, title=None)
s3.square(x, y2, size=10, color="olive", alpha=0.5)
p = gridplot([[s1, s2, s3]], toolbar_location=None)
show(p)
###Output
_____no_output_____
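###Markdown
The same idea works for any combination of ranges; for example, a plot that shares only the `y_range` with `s1` pans vertically together with it while keeping its own x-range (a minimal sketch reusing the variables defined above):
###Code
# create a new plot and share only the y-range
s4 = figure(width=250, height=250, y_range=s1.y_range, title=None)
s4.circle(x, y2, size=10, color="purple", alpha=0.5)

show(gridplot([[s1, s4]], toolbar_location=None))
###Output
_____no_output_____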
###Markdown
Linked Panning"Linked Panning" is the capability of updating multiple plot ranges simultaneously as a result of panning one plot. In Bokeh, linked panning is achieved by shared `x_range` and/or `y_range` values between plots. Execute the cells below and pan the resulting plots.
###Code
from bokeh.io import output_notebook, show
from bokeh.layouts import gridplot
from bokeh.plotting import figure
output_notebook()
x = list(range(11))
y0 = x
y1 = [10-xx for xx in x]
y2 = [abs(xx-5) for xx in x]
# create a new plot
s1 = figure(width=250, height=250, title=None)
s1.circle(x, y0, size=10, color="navy", alpha=0.5)
# create a new plot and share both ranges
s2 = figure(width=250, height=250, x_range=s1.x_range, y_range=s1.y_range, title=None)
s2.triangle(x, y1, size=10, color="firebrick", alpha=0.5)
# create a new plot and share only one range
s3 = figure(width=250, height=250, x_range=s1.x_range, title=None)
s3.square(x, y2, size=10, color="olive", alpha=0.5)
p = gridplot([[s1, s2, s3]], toolbar_location=None)
show(p)
###Output
_____no_output_____
###Markdown
Linked Panning"Linked Panning" is the capability of updating multiple plot ranges simultaneously as a result of panning one plot. In Bokeh, linked panning is achieved by shared `x_range` and/or `y_range` values between plots. Execute the cells below and pan the resulting plots.
###Code
from bokeh.io import output_notebook, show
from bokeh.layouts import gridplot
from bokeh.plotting import figure
output_notebook()
x = list(range(11))
y0 = x
y1 = [10-xx for xx in x]
y2 = [abs(xx-5) for xx in x]
# create a new plot
s1 = figure(width=250, height=250, title=None)
s1.circle(x, y0, size=10, color="navy", alpha=0.5)
# create a new plot and share both ranges
s2 = figure(width=250, height=250, x_range=s1.x_range, y_range=s1.y_range, title=None)
s2.triangle(x, y1, size=10, color="firebrick", alpha=0.5)
# create a new plot and share only one range
s3 = figure(width=250, height=250, x_range=s1.x_range, title=None)
s3.square(x, y2, size=10, color="olive", alpha=0.5)
p = gridplot([[s1, s2, s3]], toolbar_location=None)
show(p)
###Output
_____no_output_____ |
src/attempts/regularization_attempt.ipynb | ###Markdown
ATTEMPTING TO USE REGULARIZATION
###Code
import os
os.chdir(os.path.pardir)
print(os.getcwd())
from utilities import plot_fd_and_original, plot_fd_and_speeds, get_all_result_data, plot_results
import matplotlib.pyplot as plt
### CORRIDOR_85
# corridor scenario
corridor_85_results = {'(1,)--1': {'tr': (0.045912862271070484, 0.0031267345031365146), 'val': (0.04742169372737408, 0.0031119121269140935), 'test': (0.0476672403847664, 0.0039469590234120335)},
'(2,)--1': {'tr': (0.03664128817617894, 0.003182476102173639), 'val': (0.03853945817798376, 0.0033564231210634296), 'test': (0.04050630591336996, 0.002731126083062373)},
'(3,)--1': {'tr': (0.033373928852379324, 0.0018829191626169833), 'val': (0.035154239647090434, 0.0018664107327763268), 'test': (0.03871563824589665, 0.0016801948788747623)},
'(4, 2)-0.4': {'tr': (0.030664595104753972, 0.001851953384442897), 'val': (0.03249462600797414, 0.0019940795626529786), 'test': (0.03825132212658357, 0.0019405094251864585)},
'(5, 2)-0.4': {'tr': (0.029973307326436043, 0.0019320427350314425), 'val': (0.03196305438876153, 0.002029259209802658), 'test': (0.03746844874792181, 0.0010017009011062954)},
'(5, 3)-0.4': {'tr': (0.03051294256001711, 0.002172826645295372), 'val': (0.03253234028816223, 0.0023648521244316076), 'test': (0.03710711689572175, 0.0018621574798779472)},
'(6, 3)-0.4': {'tr': (0.03004354938864708, 0.0021208966134423995), 'val': (0.03249930314719677, 0.0027163875850789647), 'test': (0.03759107491804222, 0.0019266819358078092)},
'(10, 4)-0.4': {'tr': (0.02741089586168528, 0.001986981721097928), 'val': (0.02964006066322326, 0.0021461797559127775), 'test': (0.03635013228379024, 0.0010135238565525883)}}
tr_mean, tr_std, val_mean, val_std, test_mean, test_std = get_all_result_data(corridor_85_results)
plot_results(corridor_85_results, tr_mean, tr_std, val_mean, val_std, test_mean, test_std, title="corridor_85_dropout")
corridor_85_results = {'(1,)': {'tr': (0.045258586704730985, 0.004853406766732973), 'val': (0.04714435026049614, 0.004653697406037759), 'test': (0.04686351530840396, 0.0012961715061691429)},
'(2,)': {'tr': (0.03719027779996395, 0.0020733957764011005), 'val': (0.041013209708034994, 0.0023950303734451657), 'test': (0.041472892633610745, 0.0024322010449302914)},
'(3,)': {'tr': (0.034038560874760156, 0.0013176987634252566), 'val': (0.03938843499869109, 0.0014865264094550156), 'test': (0.03914595563880523, 0.0010876265935449074)},
'(4, 2)': {'tr': (0.03526925183832645, 0.0029871246019426367), 'val': (0.04105438970029355, 0.0023958553187249138), 'test': (0.04126933727288883, 0.002845698267115585)},
'(5, 2)': {'tr': (0.035543992817401886, 0.0037010336212553643), 'val': (0.04197082627564668, 0.0025311766537130707), 'test': (0.04235170498124496, 0.00310823593982254)},
'(5, 3)': {'tr': (0.0334813929721713, 0.0037666412204319824), 'val': (0.041099048890173434, 0.003189935635059386), 'test': (0.04175731243589502, 0.003266454835106925)},
'(6, 3)': {'tr': (0.03116175480186939, 0.0022358561594462813), 'val': (0.03927014149725437, 0.002838615402109067), 'test': (0.039632171653358556, 0.001673111008447824)},
'(10, 4)': {'tr': (0.03036436103284359, 0.0047435348802268695), 'val': (0.0395459621399641, 0.003676904672583768), 'test': (0.03933931185179666, 0.002598055735067817)}}
tr_mean, tr_std, val_mean, val_std, test_mean, test_std = get_all_result_data(corridor_85_results)
plot_results(corridor_85_results, tr_mean, tr_std, val_mean, val_std, test_mean, test_std, title="corridor_85")
###Output
_____no_output_____
###Markdown
BOTTLENECK_070
###Code
# bottleneck scenario
bottleneck_070_results = {'(1,)--1': {'tr': (0.04420872837305069, 0.0031599580431971148), 'val': (0.04631242074072361, 0.00335683444621767), 'test': (0.04404655892877306, 0.0006008492096808868)}, '(2,)--1': {'tr': (0.04261961035430431, 0.002050199416835133), 'val': (0.045190324261784556, 0.002178216300224125), 'test': (0.04425408023114118, 0.0008173671821354187)}, '(3,)-0.2': {'tr': (0.04100157126784325, 0.0010626492394933162), 'val': (0.04339961282908916, 0.001260815392322356), 'test': (0.04458922187974386, 0.0010175661421570764)}, '(4, 2)-0.2': {'tr': (0.03906548090279102, 0.004921965053963709), 'val': (0.04078196577727795, 0.0036245149614968913), 'test': (0.046912882178146036, 0.0033055514368516012)}, '(5, 2)-0.2': {'tr': (0.03745610669255257, 0.0019795109400485966), 'val': (0.04004896491765976, 0.0020840289722979183), 'test': (0.04501635135997649, 0.002588274218708516)}, '(5, 3)-0.2': {'tr': (0.03834013599902392, 0.006017051040785215), 'val': (0.040765413381159306, 0.005984761763053988), 'test': (0.045824969932470705, 0.003220847503347146)}, '(6, 3)-0.2': {'tr': (0.0381568444147706, 0.004517518886731544), 'val': (0.040278446376323704, 0.004180930432849594), 'test': (0.04526766299581149, 0.0028434087673124566)}, '(10, 4)-0.2': {'tr': (0.0350448851287365, 0.004679395392748039), 'val': (0.03768944654613733, 0.004560997047374159), 'test': (0.04326768843836832, 0.003076716056470942)}}
tr_mean, tr_std, val_mean, val_std, test_mean, test_std = get_all_result_data(bottleneck_070_results)
plot_results(bottleneck_070_results, tr_mean, tr_std, val_mean, val_std, test_mean, test_std, title="bottleneck_070_dropout")
bottleneck_070_results = {'(1,)': {'tr': (0.04955769553780556, 0.0057570421608399685), 'val': (0.05123388793319463, 0.005096320416198529), 'test': (0.051791704269027836, 0.005582774144924822)},
'(2,)': {'tr': (0.04184448003768921, 0.004760314561714955), 'val': (0.045550852119922644, 0.0046212267515334735), 'test': (0.04491133585904681, 0.0022928527407057274)},
'(3,)': {'tr': (0.04102290675044059, 0.003348045017959455), 'val': (0.04824198216199875, 0.004444882233662878), 'test': (0.04584651970424176, 0.002920595092481756)},
'(4, 2)': {'tr': (0.04031206876039505, 0.0051648797471331685), 'val': (0.048427933976054195, 0.0037716492903911523), 'test': (0.04719023609986878, 0.004526066784584657)},
'(5, 2)': {'tr': (0.03668047532439232, 0.003667154431939146), 'val': (0.04531076699495316, 0.0025808096847436896), 'test': (0.0461659374304043, 0.0027352878110425798)},
'(5, 3)': {'tr': (0.04162880904972553, 0.007151998469464794), 'val': (0.049127314463257785, 0.005513828520265613), 'test': (0.049083456967275084, 0.005650612198688121)},
'(6, 3)': {'tr': (0.037646884098649025, 0.005999453764399783), 'val': (0.04634510710835457, 0.004201980591786483), 'test': (0.04620908804838612, 0.0035137353907239953)},
'(10, 4)': {'tr': (0.036670266091823576, 0.00472639424911708), 'val': (0.04604802552610636, 0.003957607023497358), 'test': (0.045713807815980285, 0.003483159691551754)}}
tr_mean, tr_std, val_mean, val_std, test_mean, test_std = get_all_result_data(bottleneck_070_results)
plot_results(bottleneck_070_results, tr_mean, tr_std, val_mean, val_std, test_mean, test_std, title="bottleneck_070")
###Output
_____no_output_____ |
graph/python_src/3d.ipynb | ###Markdown
3D graphs
###Code
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
First, prepare the data
###Code
z = np.arange(-2*np.pi, 2*np.pi, 0.01)
x = np.cos(z)
y = np.sin(z)
###Output
_____no_output_____
###Markdown
To make a 3D graph, pass `projection='3d'` to `add_subplot`:```fig.add_subplot(projection='3d')```
###Code
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(x, y, z, label="hoge")
plt.show()
###Output
_____no_output_____
###Markdown
Everything else works the same as for 2D plots
###Code
x2 = np.cos(z+np.pi); y2 = np.sin(z+np.pi)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(x, y, z, label="1", linestyle='solid', color="blue")
ax.plot(x2, y2, z, label="2", linestyle='dashed', color="green")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
ax.legend()
ax.grid()
plt.show()
###Output
_____no_output_____ |
Lecture8 Tensor Manipulation.ipynb | ###Markdown
Lecture 8 Tensor Manipulation
###Code
import tensorflow as tf
t = tf.constant([1,2,3,4])
s = tf.Session()
tf.shape(t).eval(session = s)
matrix1 = tf.constant([[1,2],[3,4]])
matrix2 = tf.constant([[1],[2]])
print(matrix1.shape)
print(matrix2.shape)
print('\n',tf.matmul(matrix1, matrix2).eval(session=s),'\n')
print((matrix1 * matrix2).eval(session=s))
###Output
(2, 2)
(2, 1)
[[ 5]
[11]]
[[1 2]
[6 8]]
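###Markdown
Note that `matrix1 * matrix2` above is an element-wise product in which `matrix2` (shape `(2, 1)`) is broadcast across the columns of `matrix1` (shape `(2, 2)`), which is why the result differs from `tf.matmul`. A small sketch of the same broadcasting rule (TensorFlow 1.x API, as used above):
###Code
row = tf.constant([[10, 20]])       # shape (1, 2)
col = tf.constant([[1], [2], [3]])  # shape (3, 1)
# both operands are broadcast to shape (3, 2) before the addition
print((row + col).eval(session=s))
###Output
_____no_output_____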
|
CMR/python/demos/tokens.ipynb | ###Markdown
Python Wrapper for CMR`A python library to interface with CMR - Token Demo`This demo will show how to request an EDL token from CMR while inside a notebook. Demo SafeJust for this demo, I'm going to create a function that hids some of an EDL token, I don't want anyone to actually see my tokens
###Code
def safe_token_print(actual_token):
if actual_token is None:
print ("no token")
return
scrub = 5
strike = "*" * scrub
safe = actual_token[scrub:(len(actual_token)-scrub)]
print (strike + safe + strike)
print ("example:")
safe_token_print("012345678909876543210")
###Output
_____no_output_____
###Markdown
Loading the libraryThis will not be needed once we have the library working through PIP, but today I'm going to use my local checkout.
###Code
import sys
sys.path.append('/Users/tacherr1/src/project/eo-metadata-tools/CMR/python/')
###Output
_____no_output_____
###Markdown
Import the libraryThis should be all you need after we get PIP support
###Code
import cmr.auth.token as t
###Output
_____no_output_____
###Markdown
Using a token fileIn this example we are going to store our token in a file. Listed below is how you can specify the file; this setting is actually the file assumed when no setting is given.
###Code
options = {"cmr.token.file": "~/.cmr_token"} #this is the default actually
safe_token_print(t.token(t.token_file, options))
options = {"cmr.token.file": "~/.cmr_token_fake_file"} # a file that does not exist, to show the failure case
safe_token_print(t.token(t.token_file, options))
###Output
_____no_output_____
###Markdown
Using Keychain on Mac OS XIn this example I am using an already existing password saved securely in Keychain. Keychain may require a human to type in the password; I have clicked "Always allow" so we may not see it.
###Code
options = {'token.manager.service': 'cmr lib token'} #this is not the default
safe_token_print(t.token(t.token_manager, options))
###Output
_____no_output_____
###Markdown
Search both at once
###Code
options = {"cmr.token.file": "~/.cmr_token_fake_file", 'token.manager.service': 'cmr lib token'}
safe_token_print(t.token([t.token_manager, t.token_file], options))
###Output
_____no_output_____
###Markdown
Built-in helpI can't remember anything, so here is some built-in help which pulls from the Python docstring for each function of interest
###Code
print(t.print_help('token_'))
###Output
_____no_output_____ |
modulo - 1 Fundamentos/trabalho_pratico1.ipynb | ###Markdown
Trabalho Prático 1Bootcamp Analista de Machine Learning @ IGTI Attribute Information:Both hour.csv and day.csv have the following fields, except hr which is not available in day.csv- instant: record index- dteday : date- season : season (1:winter, 2:spring, 3:summer, 4:fall)- yr : year (0: 2011, 1:2012)- mnth : month ( 1 to 12)- hr : hour (0 to 23)- holiday : weather day is holiday or not (extracted from [Web Link])- weekday : day of the week- workingday : if day is neither weekend nor holiday is 1, otherwise is 0.+ weathersit :- 1: Clear, Few clouds, Partly cloudy, Partly cloudy- 2: Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist- 3: Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds- 4: Heavy Rain + Ice Pallets + Thunderstorm + Mist, Snow + Fog- temp : Normalized temperature in Celsius. The values are derived via (t-t_min)/(t_max-t_min), t_min=-8, t_max=+39 (only in hourly scale)- atemp: Normalized feeling temperature in Celsius. The values are derived via (t-t_min)/(t_max-t_min), t_min=-16, t_max=+50 (only in hourly scale)- hum: Normalized humidity. The values are divided to 100 (max)- windspeed: Normalized wind speed. The values are divided to 67 (max)- casual: count of casual users- registered: count of registered users- cnt: count of total rental bikes including both casual and registered
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import numpy as np
bikes = pd.read_csv('/content/drive/MyDrive/Data Science/Bootcamp Analista de ML/Módulo 1 - Introdução ao Aprendizado de Maquina/Desafio/comp_bikes_mod.csv')
bikes.head()
bikes.describe()
#No dataset utilizado para o desafio, quantas instâncias e atributos existem, respectivamente?
bikes.info()
15641 + 1738
#Contando valores nulos
bikes['temp'].isnull().sum()
#Fazendo % de valores nulos
percentual = bikes['temp'].isnull().sum()/len(bikes)
percentual
bikes.dropna(subset=['dteday'], inplace=True)  # drop rows with missing dates (Series.dropna(inplace=True) would not change the DataFrame)
bikes.info()
bikes['temp'].describe()
bikes['season'].unique()
pd.to_datetime(bikes['dteday'])
bikes['dteday'].head(-1)
import seaborn as sns
#sns.boxplot(bikes['windspeed'], bikes['dteday'])
windspeed = sns.boxplot(x=bikes['windspeed'])
#df_compart_bikes.boxplot(['windspeed']) #boxplot para a velocidade do vento (['windspeed'])
bikescorr = bikes[['season', 'temp', 'atemp', 'hum', 'windspeed', 'cnt']]
bikescorr
sns.heatmap(bikescorr.corr())
# Fill the null values of the "hum", "cnt" and "casual" columns with the mean values.
# Use the "hum" and "casual" variables as independent and "cnt" as dependent.
# Apply a linear regression. What is the R2 value? Use the inputs as the test set.
bikes.fillna(bikes.mean(), inplace=True)
#utiliza as funções do sklearn para construir a regressão linear
from sklearn.linear_model import LinearRegression
bikes[["hum","cnt", "casual"]].isnull().sum()
xbikes = np.array(bikes[['hum', 'casual']])
ybikes = np.array(bikes[['cnt']])
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
regressao = LinearRegression()
regressao.fit(xbikes, ybikes)
from sklearn.model_selection import train_test_split
x_treinamento, x_teste, y_treinamento, y_teste = train_test_split(xbikes, ybikes, test_size = 0.3, random_state = 0)
ln = LinearRegression()
ln.fit(x_treinamento, y_treinamento)
predict1 = ln.predict(x_teste)
plt.scatter(y_teste, predict1)
sns.distplot(y_teste-predict1)
teste1score = ln.score(x_teste, y_teste)
teste1score
#adotando decision tree regressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score
dtr = DecisionTreeRegressor(random_state=0)
cross_val_score(dtr, xbikes, ybikes, cv=10)
dtr.fit(xbikes, ybikes, sample_weight=50)
x_treinamento, x_teste, y_treinamento, y_teste = train_test_split(xbikes, ybikes, test_size = 0.3, random_state = 0)
predict2 = dtr.predict(x_teste)
plt.scatter(y_teste, predict2)
teste2score = dtr.score(x_teste, y_teste)
teste2score
###Output
_____no_output_____ |
cinematica-directa-2D/robot_2dof_sm.ipynb | ###Markdown
Forward kinematics of 2dof planar robots Case 1) Two revolute joints Case 2) Revolute joint followed by prismatic joint
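For reference, the closed-form expressions that the implementations below should reproduce are, for Case 1 (link lengths $l_1, l_2$, joint angles $\theta_1, \theta_2$),$$x = l_1\cos\theta_1 + l_2\cos(\theta_1+\theta_2),\qquad y = l_1\sin\theta_1 + l_2\sin(\theta_1+\theta_2),\qquad \theta = \theta_1+\theta_2,$$and, for Case 2 with prismatic displacement $d_2$ along the link,$$x = (l_1+d_2)\cos\theta_1,\qquad y = (l_1+d_2)\sin\theta_1,\qquad \theta = \theta_1.$$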
###Code
import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
import doctest
import spatialmath as sm
import sympy as sy
import sys
def fwd_kinematics_2rev(th1, th2, l1=2, l2=1):
'''
Implements the forward kinematics of a robot with two revolute joints.
Arguments
---------
th1, th2 : float
Angle in radians of the two degree of freedoms, respectively.
l1, l2 : float
Length of the two links, respectively.
Returns
-------
x : float
The position in the global x-direction of the end-effector (tool point)
y : float
The position in the global y-direction of the end-effector (tool point)
theta : float
The orientation of the end-effector with respect to the positive global x-axis.
The angle returned is in the range [-np.pi, np.pi]
j : tuple with 2 elements
The position of the joint between the two links
Tests
------
1) End-effector pose at default position
>>> x, y, th, j = fwd_kinematics_2rev(0, 0)
>>> "(%0.2f, %0.2f, %0.2f)" %(x, y, th)
'(3.00, 0.00, 0.00)'
2) End-effector pose at 90 degrees in both joints
>>> x, y, th, j = fwd_kinematics_2rev(np.pi/2, np.pi/2)
>>> "(%0.2f, %0.2f, %0.2f)" %(x, y, th)
'(-1.00, 2.00, 3.14)'
3) End-effector pose at 0 degress in first joint and 90 degress in second
>>> x, y, th, j = fwd_kinematics_2rev(0, np.pi/2)
>>> "(%0.2f, %0.2f, %0.2f)" %(x, y, th)
'(2.00, 1.00, 1.57)'
4) End-effector position is always inside a circle of a certain radius
>>> poses = [fwd_kinematics_2rev(th1_, th2_, 3, 2)
... for th1_ in np.arange(0, 2*np.pi, 0.2)
... for th2_ in np.arange(0, 2*np.pi, 0.2)]
>>> distances = np.array([np.sqrt(x_**2 + y_**2) for x_, y_, th_, j_ in poses])
>>> max_radius = 5 + 1e-12 # Add a small tolerance
>>> np.any(distances > max_radius)
False
5) Joint is always at constant distance from the origin
>>> poses = [fwd_kinematics_2rev(th1_, 0, 3, 2)
... for th1_ in np.arange(0, 2*np.pi, 0.2) ]
>>> distances = np.array([np.sqrt(j_[0]**2 + j_[1]**2) for x_, y_, th_, j_ in poses])
>>> np.any(np.abs(distances - 3) > 1e-12)
False
'''
# Static transformation between frame 2 and frame E
g_2e = sm.SE3()
g_2e.A[0,3] = l1+l2
# Transformation betwen frame 1 and frame 2
g_I = sm.SE3() # Identity transformation
g_12 = sm.SE3.Rz(th2) # Rotation about z-axis
q = [l1,0,0]
d_12 = g_I*q - g_12*q
g_12.A[:3, 3] = d_12.ravel()
# Transformation between frame S and frame 1
g_s1 = sm.SE3.Rz(th1)
# Chain of transformations
g_se = g_s1 * g_12 * g_2e
x = g_se.A[0,3]
y = g_se.A[1, 3]
theta = th1+th2
#print(np.arccos(g_se[0,0]), theta)
#assert(np.abs(theta-np.arccos(g_se[0,0])) < 1e-8)
j_s = g_s1 * [l1,0,0]
j = tuple(j_s[:2])
return (x, y, theta, j)
# Case 1)
doctest.run_docstring_examples(fwd_kinematics_2rev, globals(), verbose=True)
def fwd_kinematics_2rev_symbolic(th1, th2,
l1=sy.symbols('l1'), l2=sy.symbols('l2')):
'''
Implements the forward kinematics of a robot with two revolute joints.
Arguments
---------
th1, th2 : sympy symbols
Symbol representing the angle in radians of the two degree of freedoms, respectively.
l1, l2 : sympy symbols
Symbol representing the length of the two links, respectively.
Returns
-------
x : sympy expression
The position in the global x-direction of the end-effector (tool point)
y : sympy expression
The position in the global y-direction of the end-effector (tool point)
theta : sympy expression
The orientation of the end-effector with respect to the positive global x-axis.
j : tuple with 2 elements, each is a sympy expression
The position of the joint between link1 and link2
Tests
------
1) End-effector pose at default position
>>> th1, th2, l1, l2 = sy.symbols('th1, th2, l1, l2')
>>> x, y, th, j = fwd_kinematics_2rev_symbolic(th1, th2, l1, l2)
>>> subsdict = {th1: 0, th2: 0, l1: 2, l2: 1}
>>> xn = x.evalf(subs=subsdict)
>>> yn = y.evalf(subs=subsdict)
>>> thn = th.evalf(subs=subsdict)
>>> "(%0.2f, %0.2f, %0.2f)" %(xn, yn, thn)
'(3.00, 0.00, 0.00)'
2) End-effector pose at 90 degrees in both joints
>>> th1, th2, l1, l2 = sy.symbols('th1, th2, l1, l2')
>>> x, y, th, j = fwd_kinematics_2rev_symbolic(th1, th2, l1, l2)
>>> subsdict = {th1: np.pi/2, th2: np.pi/2, l1: 2, l2: 1}
>>> xn = x.evalf(subs=subsdict)
>>> yn = y.evalf(subs=subsdict)
>>> thn = th.evalf(subs=subsdict)
>>> "(%0.2f, %0.2f, %0.2f)" %(xn, yn, thn)
'(-1.00, 2.00, 3.14)'
'''
# Static transformation between frame 2 and frame E
g = sy.eye(4)
g[0, 3] = l1+l2
g_2e = sm.SE3(np.array(g), check=False)
# Transformation betwen frame 1 and frame 2
g_I = sm.SE3(np.array(sy.eye(4)), check=False) # Identity transformation
g_12 = sm.SE3.Rz(th2) # Rotation about z-axis
q = [l1,0,0]
d_12 = g_I*q - g_12*q
g_12.A[:3, 3] = d_12.ravel()
# Transformation between frame S and frame 1
g_s1 = sm.SE3.Rz(th1)
# Chain of transformations
g_se = g_s1 * g_12 * g_2e
x = g_se.A[0,3]
y = g_se.A[1, 3]
theta = th1+th2
#print(np.arccos(g_se[0,0]), theta)
#assert(np.abs(theta-np.arccos(g_se[0,0])) < 1e-8)
j_s = g_s1 * [l1,0,0]
j = tuple(j_s[:2])
return (x, y, theta, j)
doctest.run_docstring_examples(fwd_kinematics_2rev_symbolic, globals())
def fwd_kinematics_rev_prism(th1, th2, l1=2):
'''
Implements the forward kinematics of a robot with one revolute joint and one prismatic.
Arguments
---------
th1 : float
Angle in radians of the first degree of freedom.
th2 : float
Displacement in meter of the second degree of freedom.
l1 : float
Length of the first link.
Returns
-------
x : float
The position in the global x-direction of the end-effector (tool point)
y : float
The position in the global y-direction of the end-effector (tool point)
theta : float
The orientation of the end-effector with respect to the positive global x-axis
Tests
------
1) End-effector pose at default position
>>> "(%0.2f, %0.2f, %0.2f)" %fwd_kinematics_rev_prism(0, 0)
'(2.00, 0.00, 0.00)'
2) End-effector pose at 90 degrees in first joint and 0.6m in second
>>> "(%0.2f, %0.2f, %0.2f)" %fwd_kinematics_rev_prism(np.pi/2, 0.6)
'(0.00, 2.60, 1.57)'
4) End-effector orientation is always the same as the angle of the first dof
>>> angles = np.array( [th1_ for th1_ in np.arange(0, 2*np.pi, 0.2)
... for th2_ in np.arange(-1, 1, 0.2)])
>>> poses = [fwd_kinematics_rev_prism(th1_, th2_)
... for th1_ in np.arange(0, 2*np.pi, 0.2)
... for th2_ in np.arange(-1, 1, 0.2)]
>>> orientations = np.array([th_ for x_, y_, th_ in poses])
>>> np.any(np.abs(angles-orientations) > 1e-12)
False
'''
    # the prismatic joint extends the first link by th2 along its own direction,
    # so the end-effector lies at distance (l1 + th2) from the origin at angle th1
    x = (l1 + th2) * np.cos(th1)
    y = (l1 + th2) * np.sin(th1)
    theta = th1
return (x, y, theta)
###Output
_____no_output_____
###Markdown
Run doctestsIf tests pass, no output is generated.
###Code
# Case 2)
doctest.run_docstring_examples(fwd_kinematics_rev_prism, globals())
###Output
_____no_output_____
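###Markdown
With Case 2 implemented, a quick spot check of a few poses (a minimal usage example):
###Code
for th1_, d2_ in [(0, 0), (np.pi/4, 0.5), (np.pi/2, 0.6)]:
    x_, y_, th_ = fwd_kinematics_rev_prism(th1_, d2_)
    print("th1=%.2f, d2=%.2f -> (x, y, theta) = (%.2f, %.2f, %.2f)" % (th1_, d2_, x_, y_, th_))
###Output
_____no_output_____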
###Markdown
Visualize the work space of the robot
###Code
th1 = np.arange(0, 2*np.pi, 0.1)
th2 = np.arange(-np.pi, np.pi, 0.1)
xythetaj =[ fwd_kinematics_2rev(th1_, th2_) for th1_ in th1 for th2_ in th2]
xytheta = np.array([ (x_, y_, th_) for x_, y_, th_, j_ in xythetaj])
df = pd.DataFrame(data=np.reshape(xytheta, (-1,3)), columns=['x', 'y', 'theta'])
fig = px.scatter_3d(df, x='x', y='y', z='theta')
camera = dict(
up=dict(x=0, y=1, z=0),
center=dict(x=0, y=0, z=0),
eye=dict(x=0, y=0, z=4)
)
fig.update_scenes(camera_projection_type="orthographic")
fig.update_layout(scene_camera=camera)
fig.show()
###Output
_____no_output_____
###Markdown
Visualize movement of the manipulator
###Code
poses = [ fwd_kinematics_2rev(th1_, th2_) for th1_, th2_ in zip(th1, th2)]
endeff_trajectory = np.array([ [x_, y_] for x_, y_, th_, j_ in poses])
joint_trajectory = np.array([ j_ for x_, y_, th_, j_ in poses])
fig = go.Figure(
data=[go.Scatter(x=[0, joint_trajectory[0,0]], y=[0, joint_trajectory[0,1]],
name="First link", mode="lines",
line=dict(width=6, color="blue")),
go.Scatter(x=[joint_trajectory[0,0], endeff_trajectory[0,0]],
y=[joint_trajectory[0,1], endeff_trajectory[0,1]],
name="Second link", mode="lines",
line=dict(width=5, color="red")),
go.Scatter(x=joint_trajectory[:,0], y=joint_trajectory[:,1],
name="Joint trajectory", mode="lines",
line=dict(width=1, color="lightblue")),
go.Scatter(x=endeff_trajectory[:,0], y=endeff_trajectory[:,1],
name="End-point trajectory", mode="lines",
line=dict(width=1, color="red"))],
layout=go.Layout( width=700, height=600,
xaxis=dict(range=[-4, 4], autorange=False),
yaxis=dict(range=[-4, 4], autorange=False),
title="End-effector trajectory",
updatemenus=[dict(
type="buttons",
buttons=[dict(label="Play",
method="animate",
args=[None])])]
),
frames=[go.Frame(data=[go.Scatter(x=[0, xj_], y=[0, yj_]),
go.Scatter(x=[xj_, xe_], y=[yj_, ye_])])
for xj_, yj_, xe_, ye_ in np.hstack((joint_trajectory, endeff_trajectory))]
)
fig.show()
?px.scatter_3d
###Output
_____no_output_____ |
desafio.ipynb | ###Markdown
Python and E-mail Challenge DescriptionSay you work at a manufacturing company and are responsible for the business intelligence area.Every day, you, your team or even a program generates a different report for each area of the company:- Finance (Financeiro)- Logistics (Logística)- Maintenance (Manutenção)- Marketing- Operations (Operações)- Production (Produção)- Sales (Vendas)Each of these reports must be sent by e-mail to the manager of the corresponding area.Create a program that does this automatically. The list of managers (with their respective e-mails) and areas is in the file 'Enviar E-mails.xlsx'.Tip: use pandas read_excel to read the e-mails file; it will make things easier.
###Code
import pandas as pd
import win32com.client as win32
outlook = win32.Dispatch('outlook.application')
gerentes_df = pd.read_excel('Enviar E-mails.xlsx')
#gerentes_df.info()
for i, email in enumerate(gerentes_df['E-mail']):
gerente = gerentes_df.loc[i, 'Gerente']
area = gerentes_df.loc[i, 'Relatório']
mail = outlook.CreateItem(0)
mail.To = '[email protected]'
mail.Subject = 'Relatório de {}'.format(area)
mail.Body = '''
Prezado {},
Segue em anexo o Relatório de {}, conforme solicitado.
Qualquer dúvida estou à disposição.
Att.,
'''.format(gerente, area)
attachment = r'C:\Users\Maki\Downloads\e-mail\{}.xlsx'.format(area)
mail.Attachments.Add(attachment)
mail.Send()
###Output
_____no_output_____
###Markdown
Dataset descriptionThe [bank.zip](https://archive.ics.uci.edu/ml/machine-learning-databases/00222/bank.zip) dataset has the following characteristics:* Area: Business;* Number of attributes: 17;* Number of samples: 45211;* Variable types: categorical, binary and integer.The dataset is related to a phone-based marketing campaign of a Portuguese bank. Its attributes include personal data of the bank's clients, such as:* Age - *integer*;* Job - *categorical*;* Marital status - *categorical*;* Education - *categorical*;* Credit in default - *categorical*;* Housing loan - *categorical*;* Personal loan - *categorical*.Besides these, there are also data and results of the current marketing campaign, such as:* Contact type - *categorical*;* Month of last contact - *categorical*;* Day of the week of the contact - *categorical*;* Call duration - *integer*;* Number of contacts - *integer*;* Interval between campaign contacts - *integer*;* Campaign outcome - *binary*.Finally, there are two pieces of information about the previous campaign:* Campaign outcome - *categorical*;* Number of contacts - *integer*. QuestionsThe proposed challenge consists of 6 questions. The code used to obtain the results of each question is presented together with the question. Obtaining and organizing the datasetBefore working on the questions, it is necessary to import the relevant libraries and to download, organize and preprocess the data used in the analyses; these steps are carried out below.
###Code
import os
import numpy as np
import pandas as pd
import scipy.stats as st
import urllib.request as ur
import matplotlib.pyplot as plt
import sklearn.feature_selection as fs
from zipfile import ZipFile
# Especificações do banco de dados.
url = \
'https://archive.ics.uci.edu/ml/machine-learning-databases/00222/bank.zip'
dataset = 'bank-full.csv'
# Armazenamento do banco de dados
path_ext = 'data'
file = 'data.zip'
path_data = os.path.relpath(os.getcwd())
path_data = os.path.join(path_data, path_ext)
path_file = os.path.join(path_data, file)
if not os.path.exists(path_data):
os.mkdir(path_data)
ur.urlretrieve(url, path_file)
with ZipFile(path_file) as zfile:
zfile.extractall(path_data)
# Importar o banco de dados como um Dataframe Pandas
df = pd.read_csv(os.path.join(path_data, dataset), ';')
if df.isnull().values.any():
print('Removendo linhas com NaN.')
df = df.dropna()
# Converte as colunas do tipo 'object' para 'categorical'
df_obj = df.select_dtypes(include=['object'])
for col in df_obj.columns:
df[col] = df[col].astype('category')
###Output
_____no_output_____
###Markdown
Question 1Question: *Which profession is most likely to take out a loan? Of which type?*In this question both the housing loan and the personal loan were considered as loans. First, the percentage of people holding any type of loan was computed per profession. The result is shown in the plot below.
###Code
# Colunas para análise
cols = ['housing', 'loan']
# Obtêm-se a ocorrências de empréstimo por profissão
msk = (df[cols] == 'yes').sum(axis=1) > 0
loan_y = df['job'][msk]
loan_n = df['job'][~msk]
jobs = df['job'].value_counts()
loan_y = loan_y.value_counts()
loan_n = loan_n.value_counts()
# Normaliza-se os dados
idx = jobs.index
loan_yn = loan_y[idx] / jobs
loan_nn = loan_n[idx] / jobs
# Organiza-se os dados
loan_yn = loan_yn.sort_values(ascending=False)*100
idx = loan_yn.index
loan_nn = loan_nn[idx]*100
loan_y = loan_y[idx]
loan_n = loan_n[idx]
# Gera-se o gráfico
plt.bar(loan_yn.index, loan_yn)
plt.bar(loan_nn.index, loan_nn, bottom=loan_yn)
plt.grid(True, alpha=0.5)
plt.legend(['Possui', 'Não possui'])
plt.xticks(rotation=45, ha='right')
plt.xlabel('Profissão')
plt.ylabel('Percentual (%)')
plt.title('Empréstimos por profissão')
plt.show()
###Output
_____no_output_____
###Markdown
As can be seen, the profession with the highest tendency to take out loans is blue-collar workers: about 78% of these professionals hold some type of loan. Finally, the number of loans of each type for this profession is obtained.
###Code
# Obtêm-se o número de cada tipo de empréstimo por profissão
loan_h = df['job'][df['housing'] == 'yes'].value_counts()
loan_l = df['job'][df['loan'] == 'yes'].value_counts()
print('Número de empréstimos:')
print( 'Imobiliário: {}'.format(loan_h[idx[0]]))
print( 'Empréstimo: {}'.format(loan_l[idx[0]]))
###Output
Número de empréstimos:
Imobiliário: 7048
Empréstimo: 1684
###Markdown
Therefore, this profession tends mainly to take out housing loans. Question 2Question: *Relating the number of contacts to the success of the campaign, what are the relevant points to be observed?*This question considers the number of contacts and the success of the current campaign, where success means the client signs up for the offer. To check whether there is a relationship between the number of contacts and the success of the campaign, a bar chart was generated showing the percentage of success and failure for each number of calls. The chart is shown below.
###Code
# Obtêm-se o sucesso e o insucesso da campanha por número
# de ligações
success = df[df['y'] == 'yes']['campaign']
fail = df[df['y'] == 'no']['campaign']
n = df['campaign'].value_counts()
success = success.value_counts()
fail = fail.value_counts()
# Normaliza-se os dados
idx = n.index.sort_values()
n = n[idx]
success_n = success.reindex(idx, fill_value=0) / n
fail_n = fail.reindex(idx, fill_value=0) / n
success_n *= 100
fail_n *= 100
# Gera-se o gráfico
plt.bar(success_n.index, success_n)
plt.bar(fail_n.index, fail_n, bottom=success_n)
plt.grid(True, alpha=0.5)
plt.legend(['Sucesso', 'Insucesso'])
plt.xlabel('Número de ligações (-)')
plt.ylabel('Percentual (%)')
plt.title('Sucesso na campanha por número de ligações')
plt.show()
###Output
_____no_output_____
###Markdown
As can be seen, in general the success percentage decreases as the number of calls increases. There also appears to be an increase in success when the number of contacts goes above 20 calls; however, each of those cases contains only a single successful sample, so, given the sample size, it is not possible to state with confidence that this trend would hold with more samples.In addition, the failure percentages show that, in general, there was no success in cases where the number of contacts exceeded 18 calls. It would therefore not be justified to keep contacting a client beyond this number of calls. Question 3Question: *Based on the subscription results of this campaign, what average and maximum number of calls would you recommend in order to optimize subscriptions?*As an initial analysis, the cumulative histogram between the number of contacts and the success of the campaign, shown below, was built. The average number of calls is also shown.
###Code
# Obtêm-se o sucesso campanha por número de ligações
contact = df[df['y'] == 'yes']['campaign']
contact_counts = contact.value_counts()
print('Número médio de ligações: {:.2f}'.format(contact.mean()))
# Gera-se o gráfico
plt.hist(contact, bins=contact_counts.shape[0],
cumulative=True, density=1)
plt.grid(True, alpha=0.5)
plt.xlabel('Número de contatos (-)')
plt.ylabel('Probabilidade de ocorrência (-)')
plt.title('Histograma cumulativo')
plt.show()
###Output
Número médio de ligações: 2.14
###Markdown
The cumulative histogram shows that most successful cases required fewer than 11 calls, which corresponds to 99.11% of them; hence a maximum of 10 calls would be recommended. The average number of calls recommended would be 5, which covers 95.21% of the successful cases.However, to obtain an optimal number of calls, one would ideally also know at least the cost of each call and whether the campaign has a fixed duration. That would allow a more precise estimate of the optimal number of calls, since both the cost and the expected return of the potential client would be taken into account. Also, if the campaign has a limited duration, the time spent making multiple calls to the same client can limit the reach of the campaign, since other clients could be contacted instead and their subscriptions obtained. Question 4Question: *Does the outcome of the previous campaign have any relevance to the current one?*To analyse whether the outcome of the previous campaign has any relevance to the current one, the cases in which the previous campaign was successful were selected and contrasted with the cases in which the current campaign was successful. The result is shown in the plot below.
###Code
# Obtêm-se os casos que obtiveram sucesso na campanha anterior
success_y = df[df['poutcome'] == 'success']['y']
success_y = success_y.value_counts()
# Normaliza-se os dados
success_yn = success_y / success_y.sum()
success_yn *= 100
# Gera-se o gráfico
bar = plt.bar(success_yn.index, success_yn)
bar[1].set_color('orange')
plt.grid(True, alpha=0.5)
plt.xlabel('Percentual (%)')
plt.ylabel('Sucesso na campanha atual (-)')
plt.title('Relação entre a campanha atual e anteior')
plt.show()
###Output
_____no_output_____
###Markdown
The chart above shows that approximately 65% of the cases that were successful in the previous campaign were also successful in the current one. This indicates a tendency for clients who accepted an offer in the past to accept a new one in the future. This result can therefore be used to optimize the calls in future campaigns, prioritizing clients who have already accepted the service before. Question 5Question: *What is the determining factor for the bank to require credit insurance?*To find the factor most related to the client being in default, and therefore to requiring credit insurance, only the clients' personal data were selected. This way, a characteristic can be obtained even when there is no client data related to the current or previous campaigns.In total there are 7 personal attributes, so, to avoid analysing each one separately, a *wrapper* that selects the *k* highest-scoring features was used, with the *ANOVA F-value* and *Mutual information* scoring functions. In this case only the single best feature was selected.
###Code
# Seleciona-se os dados dos clientes
client_data = [
'age',
'job',
'marital',
'education',
'balance',
'housing',
'loan',
]
# Seleciona-se o dado desejado
target_col = ['default']
# Transforma as variáveis do tipo 'string' para 'inteiro'
X = df[client_data].apply(lambda x: (x.cat.codes if x.dtype.name
                                     == 'category' else x))
Y = df[target_col].apply(lambda x: (x.cat.codes if x.dtype.name
                                    == 'category' else x))
# Obtêm-se as duas melhores características de cada função de avaliação
X_f_class = fs.SelectKBest(fs.f_classif, k=1).fit(X, Y[target_col[0]])
X_mutual = fs.SelectKBest(fs.mutual_info_classif, k=1).fit(X, Y[target_col[0]])
f_class = X.columns.values[X_f_class.get_support()][0]
mutual = X.columns.values[X_mutual.get_support()][0]
print('ANOVA F-value: {}'.format(f_class))
print('Mutual information: {}'.format(mutual))
###Output
ANOVA F-value: loan
Mutual information: balance
###Markdown
Although the two scoring functions returned different features, the second-best feature for the *ANOVA F-value* function was also the client's balance. Both cases will therefore be analysed separately.First, to check whether there is in fact a relationship, a chi-square test was performed to assess the independence between being in default and having a personal loan.
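For reference, the statistic computed by `st.chisquare` is$$\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i},$$where $O_i$ are the observed counts and $E_i$ the counts expected under the null hypothesis; a p-value close to zero means the observed distribution is very unlikely under that hypothesis.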
###Code
# Seleciona-se dados referente ao empréstimo
col = 'loan'
x = df[col].value_counts()
y = df[col][df['default'] == 'yes'].value_counts()
z = df[col][df['default'] == 'no'].value_counts()
# Calcula-se o chi-quadrado
chi, p, = st.chisquare(y, y.sum() * x[y.index] / x.sum())
print('Chi-quadrado: {:.2f}'.format(chi))
print('P-valor: {:.4f}'.format(p))
###Output
Chi-quadrado: 264.83
P-valor: 0.0000
###Markdown
Since the p-value obtained is approximately 0, the hypothesis that the two variables are independent can be rejected; in other words, default and the personal loan are associated. We can then examine the relationship between clients who are in default and also have a loan.
###Code
percent = (y / y.sum())*100
print('Possui empréstimo: {:.2f}%'.format(percent['yes']))
print('Não possui empréstimo: {:.2f}%'.format(percent['no']))
z = df['default'][df[col] == 'yes'].value_counts()
percent_d = (z / z.sum())*100
print('Possui empréstimo e tem dívida: {:.2f}%'.format(percent_d['yes']))
###Output
Possui empréstimo: 36.93%
Não possui empréstimo: 63.07%
Possui empréstimo e tem dívida: 4.16%
###Markdown
About 37% of the clients in default also have a personal loan; however, only about 4% of the clients with a loan are in default. The loan is therefore not a determining factor. To analyse the client's balance, one histogram was built for the balance of clients in default and another for those who are not. The histograms are shown below.
###Code
# Seleciona-se dados referente a profissão
col = 'balance'
yes = df[col][df['default'] == 'yes']
no = df[col][df['default'] == 'no']
# Gera-se o gráfico
plt.hist(yes, bins=100, density=True)
plt.hist(no, bins=100, density=True, alpha=0.5)
plt.ylim([0, 6e-4])
plt.xlim([-4057, 20000])
plt.grid(True, alpha=0.5)
plt.legend(['Possui', 'Não possui'])
plt.xlabel('Saldo (€)')
plt.ylabel('Probabilidade de ocorrência (-)')
plt.title('Histogramas dos saldos')
plt.show()
###Output
_____no_output_____
###Markdown
The histograms above show that the balance distributions for clients with and without default are different: for clients in default the distribution is shifted more to the left (negative balance), while for the others it is shifted more to the right (positive balance). Consequently, the medians of the two distributions are noticeably different, and in general the balance of clients without default is higher than that of clients in default.Since the medians of the two cases differ appreciably, this can be used as a criterion for deciding whether or not to require credit insurance. Below, we evaluate what would happen if this criterion were used.
###Code
print('Mediana do saldo dos que possuem dívida: €{}'.format(yes.median()))
print('Mediana do saldo dos que não possuem dívida: €{}'.format(no.median()))
lim = no.median()
percent_y = (np.sum(yes > lim) / yes.shape[0]) * 100
percent_n = (np.sum(no < lim) / no.shape[0]) * 100
text_y = 'Percentual dos que possuem dívida e saldo maior que'
text_n = 'Percentual dos que não possuem dívida e saldo menor que'
print(text_y + ' €{}: {:.2f}%'.format(lim, percent_y))
print(text_n + ' €{}: {:.2f}%'.format(lim, percent_n))
###Output
Mediana do saldo dos que possuem dívida: €-7.0
Mediana do saldo dos que não possuem dívida: €468.0
Percentual dos que possuem dívida e saldo maior que €468.0: 5.52%
Percentual dos que não possuem dívida e saldo menor que €468.0: 49.98%
###Markdown
Thus, the client's balance is a determining factor for requiring credit insurance. Question 6Question: *What are the most prominent characteristics of a client who has a housing loan?*The methodology used to obtain these characteristics is similar to the one described and used in Question 5: to avoid analysing each attribute separately, the same *wrapper* of Question 5 was used, with the same scoring functions. In addition, the chi-square test was again used to assess the independence of the cases studied.As in Question 5, only the clients' personal data were selected, so that the resulting characteristic does not depend on the current or previous campaign. Two characteristics are then obtained with the *wrapper* and evaluated first.
###Code
# Seleciona-se os dados dos clientes
client_data = [
'age',
'job',
'marital',
'education',
'default',
'balance',
'loan',
]
# Seleciona-se o dado desejado
target_col = ['housing']
# Transforma as variáveis do tipo 'string' para 'inteiro'
X = df[client_data].apply(lambda x: (x.cat.codes if x.dtype.name
                                     == 'category' else x))
Y = df[target_col].apply(lambda x: (x.cat.codes if x.dtype.name
                                    == 'category' else x))
# Obtêm-se as duas melhores características de cada função de avaliação
X_f_class = fs.SelectKBest(fs.f_classif, k=1).fit(X, Y[target_col[0]])
X_mutual = fs.SelectKBest(fs.mutual_info_classif, k=1).fit(X, Y[target_col[0]])
f_class = X.columns.values[X_f_class.get_support()]
mutual = X.columns.values[X_mutual.get_support()]
print('ANOVA F-value: {}'.format(f_class[0]))
print('Mutual information: {}'.format(mutual[0]))
###Output
ANOVA F-value: age
Mutual information: job
###Markdown
Each scoring function returned a distinct characteristic, and both will be analyzed. A chi-square test of independence was then performed for profession. Next, a bar chart shows, for each profession, the percentage of clients who do and do not have a housing loan.
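For reference (this formula is implied by the computation in the next cell rather than stated in the original text): `st.chisquare` compares the observed counts $O_i$ of each profession among housing-loan holders with the expected counts under independence, $E_i = n_{\text{loan}} \cdot n_i / n$, using $\chi^2 = \sum_i (O_i - E_i)^2 / E_i$; a p-value near 0 means the observed profession distribution is very unlikely if loan status and profession were independent.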
###Code
# Seleciona-se dados referente a profissão
col = 'job'
x = df[col].value_counts()
y = df[col][df['housing'] == 'yes'].value_counts()
z = df[col][df['housing'] == 'no'].value_counts()
# Calcula-se o chi-quadrado
chi, p, = st.chisquare(y, y.sum() * x[y.index] / x.sum())
print('Chi-quadrado: {:.2f}'.format(chi))
print('P-valor: {:.4f}'.format(p))
# Normaliza-se os dados
y_norm = (y / x[y.index]).sort_values(ascending=False)
z_norm = (z / x[z.index])[y_norm.index]
y_norm *= 100
z_norm *= 100
# Gera-se o gráfico
plt.bar(y_norm.index, y_norm)
plt.bar(z_norm.index, z_norm, bottom=y_norm)
plt.grid(True, alpha=0.5)
plt.legend(['Possui', 'Não possui'])
plt.xticks(rotation=45, ha='right')
plt.xlabel('Profissão')
plt.ylabel('Percentual (%)')
plt.title('Empréstimos por profissão')
plt.show()
###Output
Chi-quadrado: 1593.98
P-valor: 0.0000
###Markdown
The p-value is essentially 0, so the hypothesis of independence is rejected: having a housing loan is associated with profession. As the chart shows, the profession that takes out housing loans the most is blue-collar, followed by services and administration, while retirees, students and housemaids have the lowest percentages of housing loans. For age, a cumulative histogram was built to evaluate which ages take out housing loans the most, followed by the mean age of clients with and without a housing loan.
###Code
# Seleciona-se dados referente a idade
col = 'age'
x = df[col]
yes = df[col][df['housing'] == 'yes']
no = df[col][df['housing'] == 'no']
# Gera-se o gráfico
plt.hist(yes, bins=20, density=True, cumulative=True)
plt.hist(no, bins=20, density=True, cumulative=True)
plt.grid(True, alpha=0.5)
plt.legend(['Possui', 'Não possui'])
plt.xlabel('Idade (anos)')
plt.ylabel('Probabilidade de ocorrência (-)')
plt.title('Histograma cumulativo')
plt.show()
# Obtêm-se a idade média
print('Idade média:')
print('* Possui empréstimo: {:.2f} anos'.format(yes.mean()))
print('* Não possui empréstimo: {:.2f} anos'.format(no.mean()))
###Output
_____no_output_____
###Markdown
The cumulative histogram above shows that younger people tend to take out more housing loans than older people, as confirmed by the mean age of the two groups; about 80% of the people with a housing loan are under 45 years old and about 50% are under 34. Finally, a third characteristic, education level, was also evaluated, showing a slight difference between clients with and without a housing loan. The same procedure used for profession was applied, and the results are presented below.
###Code
# Seleciona-se dados referente a escolaridade
col = 'education'
x = df[col].value_counts()
y = df[col][df['housing'] == 'yes'].value_counts()
z = df[col][df['housing'] == 'no'].value_counts()
# Calcula-se o chi-quadrado
chi, p, = st.chisquare(y, y.sum() * x[y.index] / x.sum())
print('Chi-quadrado: {:.2f}'.format(chi))
print('P-valor: {:.4f}'.format(p))
# Normaliza-se os dados
y_norm = (y / x[y.index]).sort_values(ascending=False)
z_norm = (z / x[z.index])[y_norm.index]
# Gera-se o gráfico
plt.bar(y_norm.index, y_norm)
plt.bar(z_norm.index, z_norm, bottom=y_norm)
plt.grid(True, alpha=0.5)
plt.legend(['Possui', 'Não possui'])
plt.xticks(rotation=45, ha='right')
plt.xlabel('Nível de escolaridade')
plt.ylabel('Percentual (%)')
plt.title('Empréstimos por nível de escolaridade')
plt.show()
###Output
Chi-quadrado: 285.99
P-valor: 0.0000
###Markdown
ENEM 2016 - predicting the math exam score. The challenge below aims to predict the math exam scores of selected students who took the 2016 ENEM, based on a free choice of attributes from two pre-existing datasets. The information provided for the challenge is split into two groups: * **Train** - 13,730 instances and 167 attributes * **Test** - 4,576 instances and 47 attributes. Notes for didactic purposes: * **Libraries are imported as they become necessary** * **Tuning of the Machine Learning model was not taken into account**. Importing the Pandas library to turn the datasets into dataframes and manipulate them
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Reading the train and test data and instantiating an empty dataframe for the challenge answer
###Code
df_train = pd.read_csv('/home/igorvroberto/GitHub/Data-Science-Projects/Data-Science-Projects/ENEM-2016/data/train.csv')
df_test = pd.read_csv('/home/igorvroberto/GitHub/Data-Science-Projects/Data-Science-Projects/ENEM-2016/data/test.csv')
df_answer = pd.DataFrame()
###Output
_____no_output_____
###Markdown
Checking the shape of the train and test datasets
###Code
df_train.shape, df_test.shape
###Output
_____no_output_____
###Markdown
Inserting into the answer dataset the registration-number column taken from the test dataset
###Code
df_answer['NU_INSCRICAO'] = df_test['NU_INSCRICAO']
###Output
_____no_output_____
###Markdown
Checking whether the test dataset's columns are a subset of the train dataset's columns
###Code
print(set(df_test.columns).issubset(set(df_train.columns)))
###Output
True
###Markdown
Feature Engineering. Since the goal of the challenge is to predict a continuous variable, a regression model must be adopted. Based on this premise, and after reviewing the data dictionary (the document that explains what each column means), we try to select the attributes (features) best suited for the model's prediction. Selecting only the numeric attributes of the test dataset
###Code
df_test = df_test.select_dtypes(include=['int64','float64'])
df_test.columns.unique()
###Output
_____no_output_____
###Markdown
Analyzing the correlation between the columns that PROBABLY make the most sense (a subjective choice)
###Code
escolha_features = ['NU_IDADE',
'NU_NOTA_CN',
'NU_NOTA_CH',
'NU_NOTA_LC',
'NU_NOTA_REDACAO'
]
df_test[escolha_features].corr()
###Output
_____no_output_____
###Markdown
Since the "NU_IDADE" column has a low negative correlation with the other attributes, it will be discarded
###Code
features = [
'NU_NOTA_CN',
'NU_NOTA_CH',
'NU_NOTA_LC',
'NU_NOTA_REDACAO'
]
###Output
_____no_output_____
###Markdown
Checking for null values in the train and test data
###Code
df_train[features].isnull().sum(), df_test[features].isnull().sum()
###Output
_____no_output_____
###Markdown
Note: some care is needed when handling the null values, as explained below: 1) The datasets are chained as train > test > answer, so the **dropna** function should not be used to remove rows with null values; otherwise the answer dataset cannot be filled later (the number of rows in the test and answer datasets would no longer match). 2) There are two options: replace the null values with the mean score of each exam, or set them to 0 (null values include **NaN**). Between the two, the model performed better with the second.
###Code
for c in features:
df_train[c].fillna(0, inplace=True)
df_test[c].fillna(0, inplace=True)
df_train['NU_NOTA_MT'].fillna(0, inplace=True)
###Output
_____no_output_____
###Markdown
Instantiating the train and test data
###Code
X_train = df_train[features]
y_train = df_train['NU_NOTA_MT']
X_test = df_test[features]
###Output
_____no_output_____
###Markdown
Importing the regression model and training it
###Code
from sklearn.ensemble import RandomForestRegressor
mdl=RandomForestRegressor()
mdl.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predicting the math score values
###Code
y_pred = mdl.predict(X_test)
y_pred
###Output
_____no_output_____
###Markdown
Checking the R2, MAE, MSE and RMSE metrics (computed here on the training data)
###Code
from sklearn import metrics
import numpy as np
pred_train = mdl.predict(X_train)
print('R2:', np.around((metrics.r2_score(y_train, pred_train)),3))
print('MAE:', np.around((metrics.mean_absolute_error(y_train, pred_train)),3))
print('MSE:', np.around((metrics.mean_squared_error(y_train, pred_train)),3))
print('RMSE:', np.around(np.sqrt(metrics.mean_squared_error(y_train, pred_train)),3))
###Output
R2: 0.988
MAE: 16.708
MSE: 618.729
RMSE: 24.874
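###Markdown
Note that the metrics above are computed on the same data the model was trained on, so they are optimistic. A quick, illustrative way to estimate out-of-sample error (a sketch added for clarity, not part of the original solution; the scoring name assumes scikit-learn >= 0.22) is k-fold cross-validation:
###Code
# Illustrative sketch only: 5-fold cross-validated RMSE on the training data.
# Reuses X_train / y_train and RandomForestRegressor imported above.
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(RandomForestRegressor(), X_train, y_train,
                            scoring='neg_root_mean_squared_error', cv=5)
print('CV RMSE: {:.2f} (+/- {:.2f})'.format(-cv_scores.mean(), cv_scores.std()))
###Output
_____no_output_____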
###Markdown
Inserting the column with the predicted math scores into the answer dataset
###Code
df_answer['NU_NOTA_MT'] = np.around(y_pred,2)
df_answer.head()
###Output
_____no_output_____
###Markdown
Saving the .csv file
###Code
df_answer.to_csv('answer.csv', index=False, header=True)
###Output
_____no_output_____ |
data structures/enum.ipynb | ###Markdown
enum enum is used to create symbols for values instead of using strings and integers.
###Code
# Creating enum class
import enum
class PlaneStatus(enum.Enum):
standing = 0
enroute_runway = 1
takeoff = 2
in_air = 3
landing = 4
print('\nMember name: {}'.format(PlaneStatus.enroute_runway.name))
print('Member value: {}'.format(PlaneStatus.enroute_runway.value))
#Iterating over enums
for member in PlaneStatus:
print('{} = {}'.format(member.name, member.value))
#Comparing enums - compare using identity and equality
actual_state = PlaneStatus.enroute_runway
desired_state = PlaneStatus.in_air
#comparison through equality
print('Equality: ', actual_state == desired_state, actual_state == PlaneStatus.enroute_runway)
#comparison through identity
print('Identity: ', actual_state is desired_state, actual_state is PlaneStatus.enroute_runway)
'''NO SUPPORT FOR ORDERED SORTING AND COMPARISON'''
print('Ordered by value:')
try:
print('\n'.join('' + s.name for s in sorted(PlaneStatus)))
except TypeError as err:
print('Cannot sort:{}'.format(err))
###Output
Ordered by value:
Cannot sort:'<' not supported between instances of 'PlaneStatus' and 'PlaneStatus'
###Markdown
Use IntEnum for order support
###Code
# Ordered by value
class NewPlaneStatus(enum.IntEnum):
standing = 0
enroute_runway = 1
takeoff = 2
in_air = 3
landing = 4
print('\n'.join(' ' + s.name for s in sorted(NewPlaneStatus)))
###Output
standing
enroute_runway
takeoff
in_air
landing
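###Markdown
Because IntEnum members behave like integers, they also support ordered comparisons directly — a small illustrative example (not from the original notebook):
###Code
# IntEnum members compare by their underlying integer values
print(NewPlaneStatus.takeoff < NewPlaneStatus.in_air)  # True, since 2 < 3
print(max(NewPlaneStatus))                             # NewPlaneStatus.landing
###Output
_____no_output_____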
###Markdown
Unique Enumeration values
###Code
#Aliases for other members, do not appear separately in the output when iterating over the Enum.
#The canonical name for a member is the first name attached to the value.
class SamePlaneStatus(enum.Enum):
standing = 0
enroute_runway = 1
takeoff = 2
in_air = 3
landing = 4
maintainance = 0
fueling = 3
for status in SamePlaneStatus:
print('{} = {}'.format(status.name, status.value))
print('\nSame: standing is maintainance: ', SamePlaneStatus.standing is SamePlaneStatus.maintainance)
print('Same: in_air is fueling: ', SamePlaneStatus.in_air is SamePlaneStatus.fueling)
# Add @unique decorator to the Enum
@enum.unique
class UniPlaneStatus(enum.Enum):
standing = 0
enroute_runway = 1
takeoff = 2
in_air = 3
landing = 4
#error triggered here
maintainance = 0
fueling = 3
for status in SamePlaneStatus:
print('{} = {}'.format(status.name, status.value))
###Output
_____no_output_____
###Markdown
Creating Enums programmatically
###Code
PlaneStatus = enum.Enum(
value = 'PlaneStatus',
names = ('standing', 'enroute_runway', 'takeoff', 'in_air', 'landing')
)
print('Member:{}'.format(PlaneStatus.in_air))
print('\nAll Members:')
for status in PlaneStatus:
print('{} = {}'.format(status.name, status.value))
PlaneStatus = enum.Enum(
value = 'PlaneStatus',
names = [
('standing', 1),
('enroute_runway', 2),
('takeoff', 3),
('in_air', 4),
('landing', 5)
]
)
print('\nAll Members:')
for status in PlaneStatus:
print('{} = {}'.format(status.name, status.value))
###Output
All Members:
standing = 1
enroute_runway = 2
takeoff = 3
in_air = 4
landing = 5
|
Regression/Support Vector Machine/NuSVR_StandardScaler_PowerTransformer.ipynb | ###Markdown
Nu-Support Vector Regression with StandardScaler & Power Transformer This Code template is for regression analysis using a Nu-Support Vector Regressor(NuSVR) based on the Support Vector Machine algorithm with PowerTransformer as Feature Transformation Technique and StandardScaler for Feature Scaling in a pipeline. Required Packages
###Code
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PowerTransformer
from sklearn.model_selection import train_test_split
from sklearn.svm import NuSVR
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
InitializationFilepath of CSV file
###Code
file_path= ""
###Output
_____no_output_____
###Markdown
List of features which are required for model training .
###Code
features =[]
###Output
_____no_output_____
###Markdown
Target feature for prediction.
###Code
target=''
###Output
_____no_output_____
###Markdown
Data Fetching. Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and the head function to display the initial rows.
###Code
df=pd.read_csv(file_path)
df.head()
###Output
_____no_output_____
###Markdown
Feature SelectionsIt is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.We will assign all the required input features to X and target/outcome to Y.
###Code
X=df[features]
Y=df[target]
###Output
_____no_output_____
###Markdown
Data Preprocessing. Since the majority of the machine learning models in the Sklearn library don't handle string categorical data and null values, we have to explicitly remove or replace them. The snippet below defines functions that remove null values, if any exist, and convert the string classes in the datasets by encoding them as integer classes.
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
###Output
_____no_output_____
###Markdown
Calling preprocessing functions on the feature and target set.
###Code
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
Y=NullClearner(Y)
X=EncodeX(X)
X.head()
###Output
_____no_output_____
###Markdown
Correlation MapIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
###Code
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
###Output
_____no_output_____
###Markdown
Data SplittingThe train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
###Code
X_train,X_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
###Output
_____no_output_____
###Markdown
ModelSupport vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outliers detection.A Support Vector Machine is a discriminative classifier formally defined by a separating hyperplane. In other terms, for a given known/labelled data points, the SVM outputs an appropriate hyperplane that classifies the inputted new cases based on the hyperplane. In 2-Dimensional space, this hyperplane is a line separating a plane into two segments where each class or group occupied on either side.Here we will use NuSVR, the NuSVR implementation is based on libsvm. Similar to NuSVC, for regression, uses a parameter nu to control the number of support vectors. However, unlike NuSVC, where nu replaces C, here nu replaces the parameter epsilon of epsilon-SVR. Model Tuning Parameters 1. nu : float, default=0.5> An upper bound on the fraction of training errors and a lower bound of the fraction of support vectors. Should be in the interval (0, 1]. By default 0.5 will be taken. 2. C : float, default=1.0> Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive. The penalty is a squared l2 penalty. 3. kernel : {‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’}, default=’rbf’> Specifies the kernel type to be used in the algorithm. It must be one of ‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’ or a callable. If none is given, ‘rbf’ will be used. If a callable is given it is used to pre-compute the kernel matrix from data matrices; that matrix should be an array of shape (n_samples, n_samples). 4. gamma : {‘scale’, ‘auto’} or float, default=’scale’> Gamma is a hyperparameter that we have to set before the training model. Gamma decides how much curvature we want in a decision boundary. 5. degree : int, default=3> Degree of the polynomial kernel function (‘poly’). Ignored by all other kernels.Using degree 1 is similar to using a linear kernel. Also, increasing degree parameter leads to higher training times. Rescaling techniqueStandardize features by removing the mean and scaling to unit varianceThe standard score of a sample x is calculated as: z = (x - u) / swhere u is the mean of the training samples or zero if with_mean=False, and s is the standard deviation of the training samples or one if with_std=False. Feature TransformationPower transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.
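To make the tuning parameters above concrete, the next cell is an illustrative sketch (added for clarity, not part of the original template) that spells them out explicitly at their scikit-learn default values; the template itself simply relies on the defaults, as shown in the cell that follows it.
###Code
# Illustrative only: the same pipeline with the documented NuSVR tuning parameters
# written out at their default values (nu, C, kernel, gamma, degree).
example_model = make_pipeline(StandardScaler(),
                              PowerTransformer(),
                              NuSVR(nu=0.5, C=1.0, kernel='rbf', gamma='scale', degree=3))
###Output
_____no_output_____
###Markdown
The cell below builds the pipeline actually used in this template.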
###Code
model=make_pipeline(StandardScaler(),PowerTransformer(),NuSVR())
model.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
Model AccuracyWe will use the trained model to make a prediction on the test set.Then use the predicted value for measuring the accuracy of our model.> **score**: The **score** function returns the coefficient of determination R2 of the prediction.
###Code
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))
###Output
Accuracy score 93.60 %
###Markdown
> **r2_score**: The **r2_score** function computes the percentage of variability in the target that is explained by our model. > **mae**: The **mean absolute error** function calculates the total amount of error (the average absolute distance between the real data and the predicted data) made by our model. > **mse**: The **mean squared error** function squares the errors (penalizing the model for large errors).
###Code
y_pred=model.predict(X_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
###Output
R2 Score: 93.60 %
Mean Absolute Error 3.29
Mean Squared Error 18.53
###Markdown
Prediction Plot. To compare predictions with reality, we plot the first 20 actual target values from the test set together with the corresponding model predictions.
###Code
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(X_test[0:20]), color = "red")
plt.legend(["Actual", "prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
###Output
_____no_output_____ |
storytelling.ipynb | ###Markdown
Average price per neighbourhood
###Code
helpers.PDF('price_per_neighbourhood.pdf',size=(900,450))
###Output
_____no_output_____
###Markdown
Listings per neighbourhood
###Code
helpers.PDF('listings_per_neighbourhood.pdf',size=(900,450))
###Output
_____no_output_____
###Markdown
Price mean compared against number of listings per neighbourhood
###Code
helpers.PDF('listings_against_price.pdf',size=(1200,650))
###Output
_____no_output_____ |
0311_CUDA_in-class-assignment.ipynb | ###Markdown
In order to successfully complete this assignment you need to participate both individually and in groups during class on **Monday March 11th**. In-Class Assignment: CUDA Memory and TilingImage from: https://www.appianimosaic.com/ Agenda for today's class (70 minutes)1. (20 minutes) HW4 Review2. (10 minutes) Pre-class Review 1. (10 minutes) Jupyterhub test3. (30 minutes) Tile Example4. (0 minutes) 2D wave Cuda Code Optimization ---- 1. HW4 Review[0301-HW4-Image_processing](0301-HW4-Image_processing.ipynb) --- 2. Pre-class Review [0310--CUDA_Memory-pre-class-assignment](0310--CUDA_Memory-pre-class-assignment.ipynb) --- 3. Jupyterhub TestAs a class lets try to access the GPU jupyterhub server:https://jupyterhub-gpu.egr.msu.eduUpload this file to your server account and lets run class from there. Note any odd behaviors to the instructor. ---- 3. Tile example
###Code
%%writefile tiled_transpose.cu
#include <iostream>
#include <cuda.h>
#include <chrono>
#define CUDA_CALL(x) {cudaError_t cuda_error__ = (x); if (cuda_error__) { fprintf(stderr, "CUDA error: " #x " returned \"%s\"\n", cudaGetErrorString(cuda_error__)); fflush(stderr); exit(cuda_error__); } }
using namespace std;
const int BLOCKDIM = 32;
__global__ void transpose(const double *in_d, double * out_d, int row, int col)
{
int x = blockIdx.x * blockDim.x + threadIdx.x;
int y = blockIdx.y * blockDim.y + threadIdx.y;
if (x < col && y < row)
out_d[y+col*x] = in_d[x+row*y];
}
__global__ void tiled_transpose(const double *in_d, double * out_d, int row, int col)
{
    // Global coordinates of the element this thread reads.
    int x = blockIdx.x * BLOCKDIM + threadIdx.x;
    int y = blockIdx.y * BLOCKDIM + threadIdx.y;
    // Global coordinates of the element this thread writes: same tile, but with the
    // block indices swapped, so each whole tile lands in its transposed position.
    int x2 = blockIdx.y * BLOCKDIM + threadIdx.x;
    int y2 = blockIdx.x * BLOCKDIM + threadIdx.y;
    __shared__ double in_local[BLOCKDIM][BLOCKDIM];
    __shared__ double out_local[BLOCKDIM][BLOCKDIM];
    if (x < col && y < row) {
        // Stage a BLOCKDIM x BLOCKDIM tile in shared memory (coalesced read:
        // consecutive threadIdx.x values touch consecutive global addresses).
        in_local[threadIdx.x][threadIdx.y] = in_d[x+row*y];
        __syncthreads();
        // Transpose the tile inside shared memory.
        out_local[threadIdx.y][threadIdx.x] = in_local[threadIdx.x][threadIdx.y];
        __syncthreads();
        // Write the transposed tile back; this write is also coalesced because
        // consecutive threadIdx.x values again hit consecutive addresses in out_d.
        // Note: __syncthreads() inside this branch is safe here only because the
        // matrix dimensions are multiples of BLOCKDIM, so every thread takes it.
        out_d[x2+col*y2] = out_local[threadIdx.x][threadIdx.y];
    }
}
__global__ void transpose_symmetric(double *in_d, double * out_d, int row, int col)
{
int x = blockIdx.x * blockDim.x + threadIdx.x;
int y = blockIdx.y * blockDim.y + threadIdx.y;
if (x < col && y < row) {
if (x < y) {
double temp = in_d[y+col*x];
in_d[y+col*x] = in_d[x+row*y];
in_d[x+row*y] = temp;
}
}
}
int main(int argc,char **argv)
{
std::cout << "Begin\n";
int sz_x=BLOCKDIM*300;
int sz_y=BLOCKDIM*300;
int nBytes = sz_x*sz_y*sizeof(double);
int block_size = BLOCKDIM;
double *m_h = (double *)malloc(nBytes);
double * in_d;
double * out_d;
int count = 0;
for (int i=0; i < sz_x*sz_y; i++){
m_h[i] = count;
count++;
}
std::cout << "Allocating device memory on host..\n";
CUDA_CALL(cudaMalloc((void **)&in_d,nBytes));
CUDA_CALL(cudaMalloc((void **)&out_d,nBytes));
//Set up blocks
dim3 dimBlock(block_size,block_size,1);
dim3 dimGrid(sz_x/block_size,sz_y/block_size,1);
std::cout << "Doing GPU Transpose\n";
CUDA_CALL(cudaMemcpy(in_d,m_h,nBytes,cudaMemcpyHostToDevice));
auto start_d = std::chrono::high_resolution_clock::now();
/**********************/
transpose<<<dimGrid,dimBlock>>>(in_d,out_d,sz_y,sz_x);
//tiled_transpose<<<dimGrid,dimBlock>>>(in_d,out_d,sz_y,sz_x);
cudaError_t err = cudaGetLastError();
if (err != cudaSuccess) {
fprintf(stderr, "\n\nError: %s\n\n", cudaGetErrorString(err)); fflush(stderr); exit(err);
}
CUDA_CALL(cudaMemcpy(m_h,out_d,nBytes,cudaMemcpyDeviceToHost));
/************************/
/**********************
transpose_symmetric<<<dimGrid,dimBlock>>>(in_d,out_d,sz_y,sz_x);
cudaError_t err = cudaGetLastError();
if (err != cudaSuccess) {
fprintf(stderr, "\n\nError: %s\n\n", cudaGetErrorString(err)); fflush(stderr); exit(err);
}
CUDA_CALL(cudaMemcpy(m_h,in_d,nBytes,cudaMemcpyDeviceToHost));
************************/
auto end_d = std::chrono::high_resolution_clock::now();
std::cout << "Doing CPU Transpose\n";
auto start_h = std::chrono::high_resolution_clock::now();
for (int y=0; y < sz_y; y++){
for (int x=y; x < sz_x; x++){
double temp = m_h[x+sz_x*y];
//std::cout << temp << " ";
m_h[x+sz_x*y] = m_h[y+sz_y*x];
m_h[y+sz_y*x] = temp;
}
//std::cout << "\n";
}
auto end_h = std::chrono::high_resolution_clock::now();
//Checking errors (should be same values as start)
count = 0;
int errors = 0;
for (int i=0; i < sz_x*sz_y; i++){
if (m_h[i] != count)
errors++;
count++;
}
std::cout << errors << " Errors found in transpose\n";
//Print Timing
std::chrono::duration<double> time_d = end_d - start_d;
std::cout << "Device time: " << time_d.count() << " s\n";
std::chrono::duration<double> time_h = end_h - start_h;
std::cout << "Host time: " << time_h.count() << " s\n";
cudaFree(in_d);
cudaFree(out_d);
return 0;
}
#Compile Cuda
!nvcc -std=c++11 -o tiled_transpose tiled_transpose.cu
#Run Example
!./tiled_transpose
###Output
_____no_output_____
###Markdown
---- 4. 1D wave CUDA Code Optimization As a group, let's see if we can optimize the code from last time.
###Code
%%writefile wave_cuda.cu
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <cuda.h>
#define CUDA_CALL(x) {cudaError_t cuda_error__ = (x); if (cuda_error__) printf("CUDA error: " #x " returned \"%s\"\n", cudaGetErrorString(cuda_error__));}
__global__ void accel_update(double* d_dvdt, double* d_y, int nx, double dx2inv)
{
int i = blockDim.x * blockIdx.x + threadIdx.x;
if (i > 0 && i < nx-1)
d_dvdt[i]=(d_y[i+1]+d_y[i-1]-2.0*d_y[i])*(dx2inv);
    else if (i < nx)   // guard: with a rounded-up grid, threads past the end of the array must not write
        d_dvdt[i] = 0;
}
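/*
 * Illustrative sketch (added for discussion; it is NOT called from main below):
 * a shared-memory "tiled" variant of accel_update, in the spirit of the tiled
 * transpose example above. Each block stages its y values plus one halo cell on
 * each side in shared memory, so each y value is loaded from global memory once
 * per block instead of up to three times. Assumes blockDim.x == WAVE_TILE.
 */
#define WAVE_TILE 1024
__global__ void accel_update_tiled(double* d_dvdt, const double* d_y, int nx, double dx2inv)
{
    __shared__ double y_s[WAVE_TILE + 2];            // tile plus left/right halo cells
    int i  = blockDim.x * blockIdx.x + threadIdx.x;  // global index
    int li = threadIdx.x + 1;                        // local index (offset by the left halo)
    if (i < nx)
        y_s[li] = d_y[i];
    if (threadIdx.x == 0 && i > 0)
        y_s[0] = d_y[i - 1];                         // left halo
    if (threadIdx.x == blockDim.x - 1 && i < nx - 1)
        y_s[li + 1] = d_y[i + 1];                    // right halo
    __syncthreads();
    if (i > 0 && i < nx - 1)
        d_dvdt[i] = (y_s[li + 1] + y_s[li - 1] - 2.0 * y_s[li]) * dx2inv;
    else if (i < nx)
        d_dvdt[i] = 0;
}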
__global__ void pos_update(double * d_dvdt, double * d_y, double * d_v, int nx, double dt)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < nx) {   // guard against threads beyond the end of the domain
        d_v[i] = d_v[i] + dt*d_dvdt[i];
        d_y[i] = d_y[i] + dt*d_v[i];
    }
}
int main(int argc, char ** argv) {
int nx = 5000;
int nt = 1000000;
int i,it;
double x[nx];
double y[nx];
double v[nx];
double dvdt[nx];
double dt;
double dx;
double max,min;
double dx2inv;
double tmax;
double *d_x, *d_y, *d_v, *d_dvdt;
CUDA_CALL(cudaMalloc((void **)&d_x,nx*sizeof(double)));
CUDA_CALL(cudaMalloc((void **)&d_y,nx*sizeof(double)));
CUDA_CALL(cudaMalloc((void **)&d_v,nx*sizeof(double)));
CUDA_CALL(cudaMalloc((void **)&d_dvdt,nx*sizeof(double)));
max=10.0;
min=0.0;
dx = (max-min)/(double)(nx-1);
x[0] = min;
for(i=1;i<nx-1;i++) {
x[i] = min+(double)i*dx;
}
x[nx-1] = max;
tmax=10.0;
dt= (tmax-0.0)/(double)(nt-1);
for (i=0;i<nx;i++) {
y[i] = exp(-(x[i]-5.0)*(x[i]-5.0));
v[i] = 0.0;
dvdt[i] = 0.0;
}
CUDA_CALL(cudaMemcpy(d_x,x,nx*sizeof(double),cudaMemcpyHostToDevice));
CUDA_CALL(cudaMemcpy(d_y,y,nx*sizeof(double),cudaMemcpyHostToDevice));
CUDA_CALL(cudaMemcpy(d_v,v,nx*sizeof(double),cudaMemcpyHostToDevice));
CUDA_CALL(cudaMemcpy(d_dvdt,dvdt,nx*sizeof(double),cudaMemcpyHostToDevice));
dx2inv=1.0/(dx*dx);
int block_size=1024;
    int block_no = (nx + block_size - 1)/block_size;   // round up so the grid covers all nx points (nx is not a multiple of block_size)
dim3 dimBlock(block_size,1,1);
dim3 dimGrid(block_no,1,1);
for(it=0;it<nt-1;it++) {
accel_update<<<dimGrid, dimBlock>>>(d_dvdt, d_y, nx, dx2inv);
        pos_update<<<dimGrid, dimBlock>>>(d_dvdt, d_y, d_v, nx, dt);
}
CUDA_CALL(cudaMemcpy(x,d_x,nx*sizeof(double),cudaMemcpyDeviceToHost));
CUDA_CALL(cudaMemcpy(y,d_y,nx*sizeof(double),cudaMemcpyDeviceToHost));
for(i=nx/2-10; i<nx/2+10; i++) {
printf("%g %g\n",x[i],y[i]);
}
return 0;
}
!nvcc -std=c++11 -o wave_cuda wave_cuda.cu
%%time
!./wave_cuda
###Output
_____no_output_____ |
BlogPosts/CORD19_topics/cord19-2020-04-10-v7/notebooks/2020-04-10-covid19-topics-gensim-mallet-scispacy.ipynb | ###Markdown
Topics Modeling using Mallet (through gensim wrapper) Initialization Preliminaries & Configurations
###Code
import os
import sys
import string
import numpy as np
import datetime
import pandas as pd
import json
import re
import nltk
import gensim
import seaborn as sns
import matplotlib.pyplot as plt
#!pip install pyLDAvis
#!pip install panel
#!pip install pycld2
import pyLDAvis
import pyLDAvis.gensim
import panel as pn
import pycld2 as cld2
# !pip install openpyxl
pn.extension() # This can cause Save to error "Requested Entity to large"; Clear this cell's output after running
None
MALLET_ROOT = '/home/jovyan'
mallet_home = os.path.join(MALLET_ROOT, 'mark/Systems/mallet-2.0.8')
mallet_path = os.path.join(mallet_home, 'bin', 'mallet')
mallet_stoplist_path = os.path.join(mallet_home, 'stoplists', 'en.txt')
ROOT = '..'
# Configurations
datafile_date = '2020-04-10-v7'
basedir = ROOT + f'/data/interim/{datafile_date}/'
# parser = 'moana'
parser = 'scispacy'
parser_model = 'spacy-en_core_sci_lg'
# Inputs
datafile = f'{basedir}{datafile_date}-covid19-combined-abstracts-tokens-{parser_model}.jsonl'
text_column_name = 'abstract_clean'
tokens_column_name = f'abstract_tokens_{parser}'
ent_column_name = f'abstract_ent_{parser}'
json_args = {'orient': 'records', 'lines': True}
# Other configurations
MODIFIED_LDAVIS_URL = 'https://cdn.jsdelivr.net/gh/roamanalytics/roamresearch@master/BlogPosts/CORD19_topics/ldavis.v1.0.0-roam.js'
random_seed = 42
model_build_workers = 4
# Outputs
outdir = ROOT + f'/results/{datafile_date}/'
model_out_dir = ROOT + f'/models/topics-abstracts-{datafile_date}-{parser}/'
model_path = model_out_dir + 'mallet_models/'
gs_model_path = model_path + 'gs_models/'
gs_model_path_prefix = gs_model_path + f'{datafile_date}-covid19-combined-abstracts-'
out_json_args = {'date_format': 'iso', **json_args}
web_out_dir = outdir + f'topics-abstracts-{datafile_date}-{parser}-html/'
if not os.path.exists(datafile):
print(datafile + ' does not exist')
sys.exit()
out_path_mode = 0o777
os.makedirs(model_out_dir, mode = out_path_mode, exist_ok = True)
os.makedirs(model_path, mode = out_path_mode, exist_ok = True)
os.makedirs(gs_model_path, mode = out_path_mode, exist_ok = True)
os.makedirs(outdir, mode = out_path_mode, exist_ok = True)
os.makedirs(web_out_dir, mode = out_path_mode, exist_ok = True)
import logging
# logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.WARNING)
logging.getLogger("gensim").setLevel(logging.WARNING)
with open(mallet_stoplist_path, 'r') as fp:
stopwords = set(fp.read().split())
len(stopwords)
stopwords.update([
'doi', 'preprint', 'copyright', 'peer', 'reviewed', 'org', 'https', 'et', 'al', 'author', 'figure',
'rights', 'reserved', 'permission', 'used', 'using', 'biorxiv', 'fig', 'fig.', 'al.',
'di', 'la', 'il', 'del', 'le', 'della', 'dei', 'delle', 'una', 'da', 'dell', 'non', 'si'
]) # from https://www.kaggle.com/danielwolffram/topic-modeling-finding-related-articles
len(stopwords)
###Output
_____no_output_____
###Markdown
Read in text and create corpus
###Code
original_df = pd.read_json(datafile, **json_args)
documents = original_df[text_column_name]
orig_tokens = original_df[tokens_column_name]
if 'keyterms' in original_df.columns:
# keyterms = original_df['keyterms'].apply(lambda x: [k.lower() for k in x])
keyterms = original_df['keyterms'].apply(lambda lst: ['_'.join(k.lower().split()) for k in lst])
else:
keyterms = None
if ent_column_name in original_df.columns:
    ents = original_df[ent_column_name].apply(lambda lst: ['_'.join(k.lower().split()) for k in lst if len(k.split()) > 1])
else:
ents = None
len(documents)
punctuation = string.punctuation + "”“–" # remove both slanted double-quotes
# leave '#$%*+-/<=>'
nonnumeric_punctuation = r'!"&()\,.:;?@[]^_`{|}~' + "'" + "'""”“–’" + ' '
def normalize_token(token):
if token in nonnumeric_punctuation:
return None
if token in stopwords:
return None
if token == token.upper():
return token
return token.lower()
def normalize_token_list(tokens):
result = []
for tok in tokens:
ntok = normalize_token(tok)
if ntok:
result.append(ntok)
return result
nonnumeric_punctuation
texts = orig_tokens.apply(normalize_token_list)
dictionary = gensim.corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
sorted(dictionary.values())[:5]
###Output
_____no_output_____
###Markdown
Topic model collections -- vary corpus and n Prepare corpus collections (various options)
###Code
corpora = {}
# corpora['text'] = corpus
###Output
_____no_output_____
###Markdown
Filter by language
###Code
def predict_language(text):
try:
isReliable, _, details = cld2.detect(text, isPlainText=True)
except cld2.error:
return ('ERROR', 0, '')
if isReliable:
lang_prob = details[0][2]
if lang_prob > 70:
return (details[0][1], lang_prob, text)
elif lang_prob == 0:
return ('', 0, '')
# abstract likely in two languages
_, _, details, vectors = cld2.detect(text, isPlainText=True,
returnVectors=True, hintLanguage='en')
en_text = ''
for vec in vectors:
if vec[3] == 'en':
en_text += text[vec[0] : vec[0]+vec[1]]
return ('en-extract', lang_prob, en_text)
else:
return ('', 0, '')
predicted_lang = pd.DataFrame.from_records(documents.apply(predict_language), columns=('lang', 'lang_prob', 'text'), index=documents.index)
predicted_lang_en_mask = predicted_lang['lang'].isin(['en', 'en-extract'])
(~ predicted_lang_en_mask).sum()
texts_en = texts.where(predicted_lang_en_mask, None)
texts_en = texts_en.apply(lambda x: x if x is not None else [])
###Output
_____no_output_____
###Markdown
Filter scispacy ents
###Code
from collections import Counter
if ents is not None:
ents_counter = Counter()
for x in ents.iteritems():
for w in x[1]:
ents_counter[w] += 1
ents_common = [k for k, c in ents_counter.items() if c >= 5]
len(ents_common)
###Output
_____no_output_____
###Markdown
Extended token sets
###Code
dictionary = gensim.corpora.Dictionary(texts)
if ents is not None:
dictionary.add_documents([ents_common])
# Several combinations attempted, but 'text-ents' was most useful
if ents is not None:
# corpora['text-ents'] = (texts + ents).apply(dictionary.doc2bow)
corpora['text-ents-en'] = (texts_en + ents).apply(dictionary.doc2bow)
corpora.keys()
###Output
_____no_output_____
###Markdown
HTML Templates
###Code
html_template = '''
<!DOCTYPE html>
<html>
<meta charset="UTF-8">
<head>
<title>{0}</title>
{1}
</head>
<body>
<h2>{0}</h2>
{2}
</body>
</html>
'''
html_style = '''
<style>
table {
font-family: "Trebuchet MS", Arial, Helvetica, sans-serif;
border-collapse: collapse;
width: 100%;
}
td, th {
border: 1px solid #ddd;
padding: 8px;
}
tr:nth-child(even){background-color: #f2f2f2;}
tr:hover {background-color: #ddd;}
th {
padding-top: 12px;
padding-bottom: 12px;
text-align: left;
background-color: #0099FF;
color: white;
}
</style>
'''
html_style_cols = '''
<style>
table {
font-family: "Trebuchet MS", Arial, Helvetica, sans-serif;
border-collapse: collapse;
width: 100%;
}
td, th {
border: 1px solid #ddd;
padding: 8px;
}
td:nth-child(even){background-color: #f2f2f2;}
td:hover {background-color: #ddd;}
th {
padding-top: 12px;
padding-bottom: 12px;
text-align: left;
background-color: #0099FF;
color: white;
}
</style>
'''
###Output
_____no_output_____
###Markdown
Build models
###Code
num_topics = [80] # number of topics
cmallet = {}
for c in corpora.keys():
cmallet[c] = {}
for i in num_topics:
print('Building model for %s (%s topic)' % (c,i))
prefix = os.path.join(model_path, c, str(i), '')
os.makedirs(prefix, mode = out_path_mode, exist_ok = True)
cmallet[c][i] = gensim.models.wrappers.ldamallet.LdaMallet(mallet_path, corpora[c], id2word=dictionary, optimize_interval=10,
prefix=prefix, workers=model_build_workers,
num_topics=i, iterations=2500, random_seed=random_seed)
###Output
Building model for text-ents-en (80 topic)
###Markdown
Save cmallet
###Code
for c in cmallet.keys():
for i in cmallet[c].keys():
cmallet[c][i].save(f'{gs_model_path_prefix}gensim-mallet-model_{c}_{i}.pkl4',
separately=[], sep_limit=134217728, pickle_protocol=4)
print(f'{gs_model_path_prefix}gensim-mallet-model_{c}_{i}.pkl4')
###Output
../models/topics-abstracts-2020-04-10-v7-scispacy/mallet_models/gs_models/2020-04-10-v7-covid19-combined-abstracts-gensim-mallet-model_text-ents-en_80.pkl4
###Markdown
Plot
###Code
vis_data = {}
gensim_lda_model = {}
for c in cmallet.keys():
vis_data[c] = {}
gensim_lda_model[c] = {}
for i in cmallet[c].keys():
gensim_lda_model[c][i] = gensim.models.wrappers.ldamallet.malletmodel2ldamodel(cmallet[c][i])
vis_data[c][i] = pyLDAvis.gensim.prepare(gensim_lda_model[c][i], corpora[c],
dictionary=cmallet[c][i].id2word, mds='tsne')
pyLDAvis.save_json(vis_data[c][i], outdir + f'pyldavis_{c}_{i}.json')
print(outdir + f'pyldavis_{c}_{i}.json')
ofdir = web_out_dir + f'{c}-{i}/'
os.makedirs(ofdir, mode = out_path_mode, exist_ok = True)
pyLDAvis.save_html(vis_data[c][i], ofdir + f'pyldavis_{c}_{i}.html',
ldavis_url=MODIFIED_LDAVIS_URL)
print(web_out_dir + f'{c}-{i}/pyldavis_{c}_{i}.html')
###Output
/opt/conda/lib/python3.6/site-packages/pyLDAvis/_prepare.py:223: RuntimeWarning: divide by zero encountered in log
kernel = (topic_given_term * np.log((topic_given_term.T / topic_proportion).T))
/opt/conda/lib/python3.6/site-packages/pyLDAvis/_prepare.py:240: RuntimeWarning: divide by zero encountered in log
log_lift = np.log(topic_term_dists / term_proportion)
/opt/conda/lib/python3.6/site-packages/pyLDAvis/_prepare.py:241: RuntimeWarning: divide by zero encountered in log
log_ttd = np.log(topic_term_dists)
###Markdown
Save Gensim Mallet Models
###Code
for c in gensim_lda_model.keys():
for i in gensim_lda_model[c].keys():
gensim_lda_model[c][i].save(f'{gs_model_path_prefix}gensim-lda-model_{c}_{i}.pkl4',
separately=[], sep_limit=134217728, pickle_protocol=4)
print(f'{gs_model_path_prefix}gensim-lda-model_{c}_{i}.pkl4')
###Output
../models/topics-abstracts-2020-04-10-v7-scispacy/mallet_models/gs_models/2020-04-10-v7-covid19-combined-abstracts-gensim-lda-model_text-ents-en_80.pkl4
###Markdown
Save _Relevant_ terms for topics (from pyLDAviz)
###Code
num_terms = 50
def sorted_terms(data, topic=1, rlambda=1, num_terms=30):
"""Returns a dataframe using lambda to calculate term relevance of a given topic."""
tdf = pd.DataFrame(data.topic_info[data.topic_info.Category == 'Topic' + str(topic)])
if rlambda < 0 or rlambda > 1:
rlambda = 1
stdf = tdf.assign(relevance=rlambda * tdf['logprob'] + (1 - rlambda) * tdf['loglift'])
rdf = stdf[['Term', 'relevance']]
if num_terms:
return rdf.sort_values('relevance', ascending=False).head(num_terms).set_index(['Term'])
else:
return rdf.sort_values('relevance', ascending=False).set_index(['Term'])
topic_lists = {}
for corp, cdict in vis_data.items():
for numtops in cdict.keys():
model_topic_lists_dict = {}
for topnum in range(numtops):
s = sorted_terms(vis_data[corp][numtops], topnum + 1, rlambda=.5, num_terms=num_terms)
terms = s.index
model_topic_lists_dict['Topic ' + str(topnum + 1)] = np.pad(terms, (0, num_terms - len(terms)),
'constant', constant_values='')
topic_lists[corp + '-' + str(numtops)] = pd.DataFrame(model_topic_lists_dict)
topic_lists.keys()
# !pip install openpyxl
# Save relevant topics - write to xlsx (one corp-numtopics per sheet)
with pd.ExcelWriter(outdir + f'topics-relevant-words-abstracts-{datafile_date}-{num_terms}terms.xlsx') as writer:
for sheetname, dataframe in topic_lists.items():
dataframe.to_excel(writer, sheet_name=sheetname)
print(outdir + f'topics-relevant-words-abstracts-{datafile_date}-{num_terms}terms.xlsx')
###Output
../results/2020-04-10-v7/topics-relevant-words-abstracts-2020-04-10-v7-50terms.xlsx
###Markdown
Save Relevant Topics as html
###Code
# Save relevant topics - write to html
out_topics_html_dir = web_out_dir
for corp_numtopics, dataframe in topic_lists.items():
os.makedirs(out_topics_html_dir + corp_numtopics, mode = out_path_mode, exist_ok = True)
ofname = out_topics_html_dir + corp_numtopics + '/' + 'relevant_terms.html'
with open(ofname, 'w') as ofp:
column_tags = [f'<a href="Topic_{i+1:02d}.html" target="_blank">{name}</a>'
for i, name in enumerate(dataframe.columns)]
temp_df = dataframe.copy()
temp_df.columns = column_tags
temp_df = temp_df.applymap(lambda x: ' '.join(x.split('_')))
temp_df = temp_df.set_index(np.arange(1, len(temp_df) + 1))
html_table = temp_df.to_html(escape=False)
html_str = html_template.format('Most Relevant Terms per Topic', html_style_cols, html_table)
ofp.write(html_str)
print(ofname)
# topic_lists['text-ents-80']
###Output
_____no_output_____
###Markdown
Create dataframes of topic model collections
###Code
ctopicwords_df = {}
for c in cmallet.keys():
ctopicwords_df[c] = {}
for i in cmallet[c].keys():
ctopicwords_df[c][i] = pd.read_table(cmallet[c][i].ftopickeys(), header=None, names=['id', 'weight', 'wordlist'])
REMOVED = []
def normalize_topic_words(words):
results = []
for w in words:
if w in nonnumeric_punctuation:
pass
elif w[-1] == 's' and w[:-1] in words:
# remove plural
REMOVED.append(w)
elif w != w.lower() and w.lower() in words:
# remove capitalized
REMOVED.append(w)
else:
results.append(w)
return results
# Clean words
for c in ctopicwords_df.keys():
for i in ctopicwords_df[c].keys():
ctopicwords_df[c][i]['wordlist'] = ctopicwords_df[c][i]['wordlist'].apply(lambda x: ' '.join(normalize_topic_words(x.split())))
# set(REMOVED)
for c in ctopicwords_df.keys():
for i in ctopicwords_df[c].keys():
ctopicwords_df[c][i].drop(['id'], axis=1, inplace=True)
ctopicwords_df[c][i]['topwords'] = ctopicwords_df[c][i].wordlist.apply(lambda x: ' '.join(x.split()[:3]))
ctopicwords_df[c][i]['topten'] = ctopicwords_df[c][i].wordlist.apply(lambda x: ' '.join(x.split()[:10]))
if True: # use pyLDAvis order
rank_order_new_old = vis_data[c][i].to_dict()['topic.order']
rank_order_old_new = [None] * len(rank_order_new_old)
for new, old in enumerate(rank_order_new_old):
rank_order_old_new[old - 1] = new
ctopicwords_df[c][i]['rank'] = np.array(rank_order_old_new) + 1
else:
ctopicwords_df[c][i]['rank'] = ctopicwords_df[c][i].weight.rank(ascending=False)
ctopicwords_df[c][i]['topicnum'] = ctopicwords_df[c][i].apply(lambda row: ('t%02d' % row['rank']), axis=1)
ctopicwords_df[c][i]['label'] = ctopicwords_df[c][i].apply(lambda row: row['topicnum'] + ' ' + row['topwords'], axis=1)
# doctopics
cdoctopics_df = {}
for c in cmallet.keys():
cdoctopics_df[c] = {}
for n in cmallet[c].keys():
cdoctopics_df[c][n] = pd.read_table(cmallet[c][n].fdoctopics(), header=None, names=['id']+[i for i in range(n)])
cdoctopics_df[c][n].drop(['id'], axis=1, inplace=True)
cdoctopics_df[c][n].head()
# Reorder topics
for c in cdoctopics_df.keys():
for n in cdoctopics_df[c].keys():
# (include top 3 topics in name) cdoctopics_df[c][n] = cdoctopics_df[c][n].T.join(ctopicwords_df[c][n][['rank', 'label']]).set_index('label').sort_values('rank').drop(['rank'], axis=1).T
cdoctopics_df[c][n] = cdoctopics_df[c][n].T.join(ctopicwords_df[c][n][['rank', 'topicnum']]).set_index('topicnum').sort_values('rank').drop(['rank'], axis=1).T
cdoctopics_df[c][n].T.index.rename('topic', inplace=True)
# cdoctopics_df[c][n].head()
###Output
_____no_output_____
###Markdown
Save documents
###Code
# Save topicwords
for c in ctopicwords_df.keys():
for i in ctopicwords_df[c].keys():
ctopicwords_df[c][i].sort_values('rank').to_csv(outdir + 'topickeys_sorted_%s_%d.txt' % (c, i), index_label='original_order')
print(outdir + 'topickeys_sorted_%s_%d.txt' % (c, i))
# ctopicwords_df[c][i].sort_values('rank').to_excel('out/topickeys_sorted_%s_%d.xlsx' % (c, i), index_label='original_order')
# Save doctopics
for c in cdoctopics_df.keys():
for n in cdoctopics_df[c].keys():
cdoctopics_df[c][n].to_csv(outdir + 'doctopic_%s_%d.csv' % (c, n), index_label='original_order')
print(outdir + 'doctopic_%s_%d.csv' % (c, n))
sims_names = ['scispacy', 'specter']
sims_columns = [f'sims_{x}_cord_uid' for x in sims_names]
assert all(x in original_df.columns for x in sims_columns)
assert 'cord_uid' in original_df.columns
def helper_get_sims_html_ids(sim_uids, cord_uid_topic_num, cord_uid_cite_ad):
result = []
for uid in sim_uids:
topic_num = cord_uid_topic_num.get(uid)
cite_ad = cord_uid_cite_ad.get(uid)
if cite_ad and topic_num:
result.append(f'<a href="Topic_{topic_num}.html#{uid}">{cite_ad}</a>')
return ', '.join(result)
original_df['abstract_mentions_covid'].sum()
# Prepare to save docs by topics
predominant_doc_dfd = {}
predominant_doc_df = original_df[['cite_ad', 'title', 'authors', 'publish_year', 'publish_time',
'dataset', 'abstract_mentions_covid',
'pmcid', 'pubmed_id', 'doi', 'cord_uid', 'sha', 'abstract_clean']
+ sims_columns
].copy()
sims_mapping_cord_uid_sd = {}
predominant_doc_df['publish_time'] = predominant_doc_df['publish_time'].dt.strftime('%Y-%m-%d')
for c in cdoctopics_df.keys():
predominant_doc_dfd[c] = {}
sims_mapping_cord_uid_sd[c] = {}
for n in cdoctopics_df[c].keys():
predominant_doc_dfd[c][n] = {}
sims_mapping_cord_uid_sd[c][n] = {}
predominant_doc_df['predominant_topic'] = cdoctopics_df[c][n].idxmax(axis=1)
predominant_doc_df['predominant_topic_num'] = predominant_doc_df['predominant_topic'].str.split().apply(lambda x: x[0][1:])
predominant_doc_df['major_topics'] = cdoctopics_df[c][n].apply(lambda r: {f't{i + 1:02d}': val for i, val in enumerate(r) if val >= 0.3}, axis=1)
for sim_col in sims_columns:
sims_mapping_cord_uid_sd[c][n][sim_col] = {}
sims_mapping_cord_uid_sd[c][n][sim_col]['topic_num'] = predominant_doc_df[['cord_uid', 'predominant_topic_num']].set_index('cord_uid')['predominant_topic_num']
sims_mapping_cord_uid_sd[c][n][sim_col]['cite_ad'] = predominant_doc_df[['cord_uid', 'cite_ad']].set_index('cord_uid')['cite_ad']
for i, topic_name in enumerate(cdoctopics_df[c][n].columns):
temp_df = predominant_doc_df[(predominant_doc_df['major_topics'].apply(lambda x: topic_name in x))].copy()
temp_df['topic_weight'] = temp_df.major_topics.apply(lambda x: x.get(topic_name))
temp_df = temp_df.sort_values(['topic_weight'], axis=0, ascending=False)
predominant_doc_dfd[c][n][i] = temp_df
# Save docs by topics - write to json and tsv
for c in predominant_doc_dfd.keys():
for n in predominant_doc_dfd[c].keys():
outfile_central_docs_base = outdir + f'topics-central-docs-abstracts-{datafile_date}-{c}-{n}'
temp_dfs = []
for i, dataframe in predominant_doc_dfd[c][n].items():
temp_df = dataframe[['title', 'authors', 'publish_year', 'publish_time', 'cord_uid', 'dataset', 'sha', 'abstract_clean']].reset_index()
temp_df['Topic'] = i + 1
temp_dfs.append(temp_df)
result_df = pd.concat(temp_dfs)
print(outfile_central_docs_base + '.{jsonl, txt}')
result_df.to_json(outfile_central_docs_base + '.jsonl', **out_json_args)
result_df.to_csv(outfile_central_docs_base + '.txt', sep='\t')
# Save docs by topics - write to excel
for c in predominant_doc_dfd.keys():
for n in predominant_doc_dfd[c].keys():
print(outdir + f'topics-central-docs-abstracts-{datafile_date}-{c}-{n}.xlsx')
with pd.ExcelWriter(outdir + f'topics-central-docs-abstracts-{datafile_date}-{c}-{n}.xlsx') as writer:
for i in predominant_doc_dfd[c][n].keys():
sheetname = f'Topic {i+1}'
predominant_doc_dfd[c][n][i].drop(columns=['abstract_clean', 'cite_ad', 'major_topics',
'predominant_topic', 'predominant_topic_num']
).to_excel(writer, sheet_name=sheetname)
# prep similarity columns for html
for c in predominant_doc_dfd.keys():
for n in predominant_doc_dfd[c].keys():
for sim_name, sims_col in zip(sims_names, sims_columns):
cord_uid_topic_num = sims_mapping_cord_uid_sd[c][n][sim_col]['topic_num'].to_dict()
cord_uid_cite_ad = sims_mapping_cord_uid_sd[c][n][sim_col]['cite_ad'].to_dict()
for i in predominant_doc_dfd[c][n].keys():
predominant_doc_dfd[c][n][i][f'Similarity {sim_name}'] = (predominant_doc_dfd[c][n][i][sims_col]
.apply(lambda x: helper_get_sims_html_ids(x, cord_uid_topic_num, cord_uid_cite_ad)))
# Modify dataframe for html
for c in predominant_doc_dfd.keys():
for n in predominant_doc_dfd[c].keys():
for i in predominant_doc_dfd[c][n].keys():
predominant_doc_dfd[c][n][i]['pmcid'] = predominant_doc_dfd[c][n][i]['pmcid'].apply(lambda xid: f'<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/{xid}" target="_blank">{xid}</a>' if not pd.isnull(xid) else '')
predominant_doc_dfd[c][n][i]['pubmed_id'] = predominant_doc_dfd[c][n][i]['pubmed_id'].apply(lambda xid: f'<a href="https://www.ncbi.nlm.nih.gov/pubmed/{xid}" target="_blank">{xid}</a>' if not pd.isnull(xid) else '')
predominant_doc_dfd[c][n][i]['doi'] = predominant_doc_dfd[c][n][i]['doi'].apply(lambda xid: f'<a href="https://doi.org/{xid}" target="_blank">{xid}</a>' if not pd.isnull(xid) else '')
predominant_doc_dfd[c][n][i]['abstract_mentions_covid'] = predominant_doc_dfd[c][n][i]['abstract_mentions_covid'].apply(lambda x: 'Y' if x else 'N')
predominant_doc_dfd[c][n][i].columns = [' '.join(c.split('_')) for c in predominant_doc_dfd[c][n][i].columns]
from pandas.io.formats import format as fmt
from pandas.io.formats.html import HTMLFormatter
from typing import Any, Optional
class MyHTMLFormatter(HTMLFormatter):
"Add html id to th for rows"
def __init__(self, html_id_col_name, *args, **kwargs):
super(MyHTMLFormatter, self).__init__(*args, **kwargs)
self.html_id_col_name = html_id_col_name
def write_th(
self, s: Any, header: bool = False, indent: int = 0, tags: Optional[str] = None
) -> None:
if not header and self.html_id_col_name and self.html_id_col_name in self.frame.columns:
try:
key = int(s.strip())
except ValueError:
key = None
if key and key in self.frame.index:
html_id = self.frame.loc[key, self.html_id_col_name]
if html_id:
if tags:
tags += f'id="{html_id}";'
else:
tags = f'id="{html_id}";'
super(MyHTMLFormatter, self).write_th(s, header, indent, tags)
# Save doc by topics - write to html
# out_topics_html_dir = outdir + f'topics-central-docs-abstracts-{datafile_date}-html/'
out_topics_html_dir = web_out_dir
os.makedirs(out_topics_html_dir, mode = out_path_mode, exist_ok = True)
for c in predominant_doc_dfd.keys():
for n in predominant_doc_dfd[c].keys():
ofdir = out_topics_html_dir + f'{c}-{n}/'
os.makedirs(ofdir, mode = out_path_mode, exist_ok = True)
print(ofdir)
for i in predominant_doc_dfd[c][n].keys():
ofname = ofdir + f'Topic_{i+1:02d}.html'
with open(ofname, 'w') as ofp:
html_df = (predominant_doc_dfd[c][n][i]
.drop(columns=['sha', 'major topics', 'abstract clean',
'predominant topic', 'predominant topic num']
+ [' '.join(c.split('_')) for c in sims_columns])
.copy()
.set_index(np.arange(1, len(predominant_doc_dfd[c][n][i])+1)))
# html_table = html_df.to_html(escape=False)
df_formatter = fmt.DataFrameFormatter(escape=False, frame=html_df, index=True, bold_rows=True)
html_formatter = MyHTMLFormatter('cord uid', formatter=df_formatter)
# html_formatter = HTMLFormatter(formatter=df_formatter)
html_table = html_formatter.get_result()
html_str = html_template.format(f'Topic {i+1:02d}', html_style, html_table)
ofp.write(html_str)
###Output
../results/2020-04-10-v7/topics-abstracts-2020-04-10-v7-scispacy-html/text-ents-en-80/
|
Examples/DNC_Simple_Siraj.ipynb | ###Markdown
The Differentiable Neural Computer The Problem - how do we create more general-purpose learning machines? Neural networks excel at pattern recognition and quick, reactive decision-making, but we are only just beginning to build neural networks that can think slowly, that is, deliberate or reason using knowledge. For example, how could a neural network store memories for facts, like the connections in a transport network, and then logically reason about its pieces of knowledge to answer questions? The DNC consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read–write memory. [Video](http://www.youtube.com/watch?v=B9U8sI7TcMYE) Modern computers separate computation and memory. Computation is performed by a processor, which can use an addressable memory to bring operands in and out of play. In contrast to computers, the computational and memory resources of artificial neural networks are mixed together in the network weights and neuron activity. This is a major liability: as the memory demands of a task increase, these networks cannot allocate new storage dynamically, nor easily learn algorithms that act independently of the values realized by the task variables. The whole system is differentiable, and can therefore be trained end-to-end with gradient descent, allowing the network to learn how to operate and organize the memory in a goal-directed manner. If the memory can be thought of as the DNC's RAM, then the network, referred to as the 'controller', is a differentiable CPU whose operations are learned with gradient descent. How is it different from its predecessor, the Neural Turing Machine? Basically, it has more memory access methods than the NTM. The DNC extends the NTM by addressing the following limitations: (1) ensuring that blocks of allocated memory do not overlap and interfere; (2) freeing memory that has already been written to; (3) handling non-contiguous memory through temporal links. The system required hand-crafted input to accomplish its learning and inference; this is not an NLP system where unstructured text is applied at input. The heads use three forms of attention: (1) content lookup; (2) a record of transitions between consecutively written locations in an N × N temporal link matrix L, which gives a DNC the native ability to recover sequences in the order in which it wrote them, even when consecutive writes did not occur in adjacent time-steps; and (3) an allocation mechanism that selects memory for writing. Content lookup enables the formation of associative data structures; temporal links enable sequential retrieval of input sequences; and allocation provides the write head with unused locations. DNC memory modification is fast and can be one-shot, resembling the associative long-term potentiation of hippocampal CA3 and CA1 synapses. Human 'free recall' experiments demonstrate the increased probability of item recall in the same order as first presented (temporal links). DeepMind hopes that DNCs provide both a new tool for computer science and a new metaphor for cognitive science and neuroscience: here is a learning machine that, without prior programming, can organise information into connected facts and use those facts to solve problems.
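As a rough illustration of the content-lookup attention described above, here is a minimal NumPy sketch (not taken from the notebook; the notebook's own TensorFlow version is the `content_lookup` method below):

```python
import numpy as np

def content_weights(memory, key, beta):
    """Softmax over the cosine similarity between a key and each memory row."""
    mem_norm = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    key_norm = key / (np.linalg.norm(key) + 1e-8)
    scores = beta * (mem_norm @ key_norm)   # one score per memory slot, shape (N,)
    scores = scores - scores.max()          # subtract max for numerical stability
    w = np.exp(scores)
    return w / w.sum()                      # normalized weighting over the N slots
```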
###Code
import numpy as np
import tensorflow as tf
import os
class DNC:
def __init__(self, input_size, output_size, seq_len, num_words = 256, word_size = 64, num_heads = 4):
'''
Initialize the DNC:
In this tutorial we are basically using the DNC to understand the mapping between the input
and output data.
input data: [[0,0], [0,1], [1,0], [1,1]]
output data: [[1,0], [0,0], [0,0], [0,1]]
'''
# define input and output sizes
self.input_size = input_size
self.output_size = output_size
# define read and write vectors
self.num_words = num_words # N
self.word_size = word_size # W
# define number of read and write heads
self.num_heads = num_heads # R
# size of output vector from controller
# the magic numbers are just a type of hyper-parameters
# set them according to your own use-case, they come from the way we divide our
# interface vector into read, write, gate and read mode variables
self.interface_size = num_heads*word_size + 3*word_size + 5*num_heads + 3
# define input size
# this comes after the flatten the input and concatenate it with the
# previously read vectors from the memory
self.nn_input_size = num_heads*word_size + input_size
# define output size
self.nn_output_size = output_size + self.interface_size
# initialize both outputs from a truncated normal distribution (small random starting values)
self.nn_out = tf.truncated_normal([1, self.output_size], stddev = 0.1)
self.interface_vec = tf.truncated_normal([1, self.interface_size], stddev = 0.1)
# define memory matrix
self.mem_mat = tf.zeros([num_words, word_size]) # N*W
# define usage vector
# it tells which parts of the memory have been used so far
self.usage_vec = tf.fill([num_words, 1], 1e-6) # W*1
# define temporal link matrix
# it tells in which order the locations were written
self.link_mat = tf.zeros([num_words, num_words]) # N*N
# define precedence weight
# it tells the degree to which each location was the last one written to
self.precedence_weight = tf.zeros([num_words, 1]) # N*1
# define read and write weight variables
self.read_weights = tf.fill([num_words, num_heads], 1e-6) # N*R
self.write_weights = tf.fill([num_words, 1], 1e-6) # N*1
self.read_vec = tf.fill([num_heads, word_size], 1e-6) # N*W
#######################
## Network Variables ##
#######################
# parameters
hidden_layer_size = 32
# define placeholders
self.i_data = tf.placeholder(tf.float32, [seq_len*2, self.input_size], name = 'input_placeholder')
self.o_data = tf.placeholder(tf.float32, [seq_len*2, self.output_size], name = 'output_placeholder')
# define feedforward network weights
self.W1 = tf.Variable(tf.truncated_normal([self.nn_input_size, hidden_layer_size], stddev = 0.1),
name = 'layer1_weights', dtype = tf.float32)
self.b1 = tf.Variable(tf.truncated_normal([hidden_layer_size], stddev = 0.1),
name = 'layer1_bias', dtype = tf.float32)
self.W2 = tf.Variable(tf.truncated_normal([hidden_layer_size, self.nn_output_size], stddev = 0.1),
name = 'layer2_weights', dtype = tf.float32)
self.b2 = tf.Variable(tf.truncated_normal([self.nn_output_size], stddev = 0.1),
name = 'layer2_bias', dtype = tf.float32)
# define DNC output weights
# self.nn_out_weights to convert the output of neural network into proper output
self.nn_out_weights = tf.Variable(
tf.truncated_normal([self.nn_output_size, self.output_size],
stddev = 0.1),
name = 'nn_output_weights', dtype = tf.float32)
# self.interface_weights to convert the output of neural network to proper interface vector
self.interface_weights = tf.Variable(
tf.truncated_normal([self.nn_output_size, self.interface_size],
stddev = 0.1),
name = 'interface_weights', dtype = tf.float32)
#
self.read_vec_out_weights = tf.Variable(
tf.truncated_normal([self.num_heads*self.word_size, self.output_size],
stddev = 0.1),
name = 'read_vector_output_weights', dtype = tf.float32)
##########################
## Attention Mechanisms ##
##########################
'''
In DNC we have three different attention mechanisms:
1. Content Lookup (Content-Addressing in paper):
{From NTM paper} For content-addressing, each head (whether employed for reading or
writing) first produces a key-vector k, that is then compared to each vector in memory by
a similarity measure. The content-based system produces a normalized weighting based
on similarity [and a positive key-strength (beta), which can amplify or attenuate the
precision of the focus.]
2. Allocation weighting:
{From DNC paper} To allow controller to free and allocate memory as needed, we developed
a differentiable analogue of the 'free list' memory scheme, whereby a list of available
memory locations is maintained by adding to and removing from a linked list.
{From tutorial} The ‘usage’ of each location is represented as a number between 0 and 1,
and a weighting that picks out unused locations is delivered to the write head. This is
independent of the size and contents of the memory, meaning that DNCs can be trained to
solve a task using one size of memory and later upgraded to a larger memory without
retraining
3. Temporal Linking:
{From DNC paper} The memory locations defined [till now] store no information about the
order in which memory locations are written to. However, there are many situations where
retaining this information is useful: for example, when a sequence of instructions must be
recorded and retrieved in order. We therefore use a temporal link matrix to keep track
of consecutively modified memory locations.
'''
# define content lookup
def content_lookup(self, key, str):
# str is 1*1 or 1*R
# l2 normalization of a vector is the square root of sum of absolute values squared
norm_mem = tf.nn.l2_normalize(self.mem_mat, 1) # N*W
norm_key = tf.nn.l2_normalize(key, 0) # 1*W for write, R*W for read
sim = tf.matmul(norm_mem, norm_key, transpose_b = True) # N*1 for write, N*R for read
return tf.nn.softmax(sim*str, 0) # N*1 or N*R
# define allocation weighting
def allocation_weighting(self):
# tf.nn.top_k() returns
# 1.The k largest elements along each last dimensional slice and
# 2.The indices of values within the last dimension of input
sorted_usage_vec, free_list = tf.nn.top_k(-1*self.usage_vec, k = self.num_words)
sorted_usage_vec *= -1
# tf.cumprod() calculates cumulative product
# tf.cumprod([a, b, c]) --> [a, a*b, a*b*c]
# tf.cumprod([a, b, c], exclusive=True) --> [1, a, a * b]
cumprod = tf.cumprod(sorted_usage_vec, axis = 0, exclusive = True)
unorder = (1-sorted_usage_vec)*cumprod
# allocation weight
alloc_weights = tf.zeros([self.num_words])
I = tf.constant(np.identity(self.num_words, dtype = np.float32))
# for each usage vector
for pos, idx in enumerate(tf.unstack(free_list[0])):
m = tf.squeeze(tf.slice(I, [idx, 0], [1, -1]))
alloc_weights += m*unorder[0, pos]
# allocation weighting for each row in memory
return tf.reshape(alloc_weights, [self.num_words, 1])
###################
## Step Function ##
###################
# define the step function
'''
This is the function that we call while we are running our session at each iteration the
controller recieves two inputs that are concatenated, the input vector and the read vector
from previous time step it also gives two outputs, the output vector and the interface
vector that defines it's interaction with the memory at the current time step.
'''
def step_m(self, input_seq):
'''print('input_seq:',input_seq)
print('self.read_vec:', self.read_vec)
print('reshape',tf.reshape(self.read_vec, [1, self.num_words*self.word_size]))
print(tf.concat([input_seq, tf.reshape(self.read_vec, [1, self.num_heads*self.word_size])], 1))'''
# reshape the input
input_vec_nn = tf.concat([input_seq, tf.reshape(self.read_vec, [1, self.num_heads*self.word_size])], 1)
# forward propogation
l1_out = tf.matmul(input_vec_nn, self.W1) + self.b1
l1_act = tf.nn.tanh(l1_out)
l2_out = tf.matmul(l1_act, self.W2) + self.b2  # feed the activated layer-1 output into layer 2
l2_act = tf.nn.tanh(l2_out)
# output vector, the output of the DNC
self.nn_out = tf.matmul(l2_act, self.nn_out_weights)
# interface vector, how to interact with the memory
self.interface_vec = tf.matmul(l2_act, self.interface_weights)
# define partition vector
'''
We need to get lot of information from the interface vector, which will help us get various
vectors such as read vectors, write vectors, degree to which locations will be freed
'''
p_array = [0]*(self.num_heads * self.word_size) # read keys
p_array += [1]*(self.num_heads) # read strengths
p_array += [2]*(self.word_size) # write key
p_array += [3] # write strength
p_array += [4]*(self.word_size) # erase vector
p_array += [5]*(self.word_size) # write vector
p_array += [6]*(self.num_heads) # free gates
p_array += [7] # allocation gates
p_array += [8] # write gates
p_array += [9]*(self.num_heads*3) # read mode
partition = tf.constant([p_array])
# convert interface vector to set of read write vectors
(read_keys, read_str, write_key, write_str, erase_vec,
write_vec, free_gates, alloc_gate, write_gate, read_modes) = \
tf.dynamic_partition(self.interface_vec, partition, 10)
# read vectors
read_keys = tf.reshape(read_keys, [self.num_heads, self.word_size]) # R*W
read_str = 1 + tf.nn.softplus(tf.expand_dims(read_str, 0)) # 1*R
# write vectors
write_key = tf.expand_dims(write_key, 0) # 1*W
write_str = 1 + tf.nn.softplus(tf.expand_dims(write_str, 0)) # 1*1
erase_vec = tf.nn.sigmoid(tf.expand_dims(erase_vec, 0)) # 1*W
write_vec = tf.expand_dims(write_vec, 0) # 1*w
# gates
# free gates, the degree to which the locations at read head will be freed
free_gates = tf.nn.sigmoid(tf.expand_dims(free_gates, 0)) # 1*R
# the fraction of writing that is being allocated in a new location
alloc_gate = tf.nn.sigmoid(alloc_gate) # 1
# the amount of information to be written to memory
write_gate = tf.nn.sigmoid(write_gate) # 1
# read modes
# we do a softmax distribution between 3 read modes (backward, forward, lookup)
# The read heads can use gates called read modes to switch between content lookup
# using a read key and reading out locations either forwards or backwards
# in the order they were written.
read_modes = tf.nn.softmax(tf.reshape(read_modes, [3, self.num_heads])) # 3*R
## WRITING
# the memory retention vector tells by how much each location will not be freed
# by the free gates, helps in determining usage vector
retention_vec = tf.reduce_prod(1 - free_gates*self.read_weights, reduction_indices = 1)
self.usage_vec = (self.usage_vec + self.write_weights - \
self.usage_vec*self.write_weights) * retention_vec
# allocation weighting is used to provide new locations for writing
alloc_weights = self.allocation_weighting()
write_lookup_weights = self.content_lookup(write_key, write_str)
# define write weights
self.write_weights = write_gate*(alloc_gate*alloc_weights + \
(1-alloc_gate)*write_lookup_weights)
# write -> erase -> write to memory
self.mem_mat = self.mem_mat*(1 - tf.matmul(self.write_weights, erase_vec)) + \
tf.matmul(self.write_weights, write_vec)
# temporal link matrix
nnweight_vec = tf.matmul(self.write_weights, tf.ones([1, self.num_words])) # N*N
self.link_mat = self.link_mat*(1 - nnweight_vec - tf.transpose(nnweight_vec)) + \
tf.matmul(self.write_weights, self.precedence_weight, transpose_b = True)
self.link_mat *= tf.ones([self.num_words, self.num_words]) - \
tf.constant(np.identity(self.num_words, dtype = np.float32))
# update precedence weight
self.precedence_weight = (1 - tf.reduce_sum(self.write_weights, reduction_indices = 0)) * \
self.precedence_weight + self.write_weights
# 3 read modes
forw_w = read_modes[2] * tf.matmul(self.link_mat, self.read_weights)
look_w = read_modes[1] * self.content_lookup(read_keys, read_str)
back_w = read_modes[0] * tf.matmul(self.link_mat, self.read_weights, transpose_a = True)
# initialize read weights
self.read_weights = forw_w + look_w + back_w
# read vector
self.read_vec = tf.transpose(tf.matmul(self.mem_mat, self.read_weights, transpose_a = True))
# get final read output
read_vec_mut = tf.matmul(tf.reshape(self.read_vec, [1, self.num_heads*self.word_size]),
self.read_vec_out_weights)
# return the final output
return self.nn_out + read_vec_mut
# output the list of numbers (one hot encoded) by running step function
def run(self):
big_out = []
for t, seq in enumerate(tf.unstack(self.i_data, axis = 0)):
seq = tf.expand_dims(seq, 0)
y = self.step_m(seq)
big_out.append(y)
return tf.stack(big_out, axis = 0)
# generate randomly generated input, output sequences
num_seq = 10
seq_len = 6
seq_width = 4
num_epochs = 1000
con = np.random.randint(0, seq_width,size=seq_len)
seq = np.zeros((seq_len, seq_width))
seq[np.arange(seq_len), con] = 1
end = np.asarray([[-1]*seq_width])
zer = np.zeros((seq_len, seq_width))
# final i/o data
final_i_data = np.concatenate((seq, zer), axis = 0)
final_o_data = np.concatenate((zer, seq), axis = 0)
# define compute graph
graph = tf.Graph()
# running the graph
with graph.as_default():
with tf.Session() as sess:
# define the DNC
dnc = DNC(input_size = seq_width,
output_size = seq_width,
seq_len = seq_len,
num_words = 10,
word_size = 4,
num_heads = 1)
#calculate the predicted output
output = tf.squeeze(dnc.run())
#compare prediction to reality, get loss via sigmoid cross entropy
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=output, labels=dnc.o_data))
#use regularizers for each layer of the controller
regularizers = (tf.nn.l2_loss(dnc.W1) + tf.nn.l2_loss(dnc.W2) +
tf.nn.l2_loss(dnc.b1) + tf.nn.l2_loss(dnc.b2))
#to help the loss converge faster
loss += 5e-4 * regularizers
#optimize the entire thing (memory + controller) using gradient descent. dope
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
#initialize input output pairs
sess.run(tf.global_variables_initializer())
#for each iteration
for i in range(0, num_epochs+1):
#feed in each input output pair
feed_dict = {dnc.i_data: final_i_data, dnc.o_data: final_o_data}
#make predictions
l, _, predictions = sess.run([loss, optimizer, output], feed_dict=feed_dict)
if i%100==0:
print(i,l)
# print predictions
print(np.argmax(final_i_data, 1))
print(np.argmax(final_o_data, 1))
print(np.argmax(predictions, 1))
###Output
0 0.716662
100 0.291819
200 0.139632
300 0.102837
400 0.0829762
500 0.0597146
600 0.034657
700 0.0176897
800 0.01247
900 0.01024
1000 0.00889811
[3 1 1 3 3 0 0 0 0 0 0 0]
[0 0 0 0 0 0 3 1 1 3 3 0]
[2 3 3 3 3 3 3 1 1 3 3 0]
|
voila/callback_text.ipynb | ###Markdown
Text output from callbacks
###Code
import ipywidgets
###Output
_____no_output_____
###Markdown
Plain `print` (doesn't work: output printed from a widget callback is not captured by any cell, so nothing shows up in the rendered page)
###Code
def callback1(w):
print('callback1')
button1 = ipywidgets.Button(description='Run')
button1.on_click(callback1)
button1
###Output
_____no_output_____
###Markdown
Output widget
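The next cell routes the callback's print into an `Output` widget using the `with output2:` context manager. An equivalent pattern (assuming a reasonably recent ipywidgets) is the `Output.capture()` decorator; the names below are illustrative, not from this notebook:

```python
import ipywidgets

output2b = ipywidgets.Output()
button2b = ipywidgets.Button(description='Run')

@output2b.capture()  # redirect anything printed inside the callback to the widget
def callback2b(w):
    print('callback2b')

button2b.on_click(callback2b)
ipywidgets.VBox(children=[button2b, output2b])
```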
###Code
def callback2(w):
with output2:
print('callback2')
output2 = ipywidgets.Output()
button2 = ipywidgets.Button(description='Run')
button2.on_click(callback2)
ipywidgets.VBox(children=[button2, output2])
###Output
_____no_output_____
###Markdown
HTML widget
###Code
def callback3(w):
html3.value = 'callback3'
html3 = ipywidgets.HTML()
button3 = ipywidgets.Button(description='Run')
button3.on_click(callback3)
ipywidgets.VBox(children=[button3, html3])
###Output
_____no_output_____ |
old_files_unsorted_archive/EDA_Springleaf_screencast.ipynb | ###Markdown
This is the notebook used in the screencast video. Note that the data files are not present here in JupyterHub, so you will not be able to run it. But you can always download the notebook and the competition data to your local machine and make it interactive. Competition data can be found here: https://www.kaggle.com/c/springleaf-marketing-response/data
###Code
import os
import numpy as np
import pandas as pd
from tqdm import tqdm_notebook
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import seaborn
def autolabel(arrayA):
''' label each colored square with the corresponding data value.
If value > 20, the text is in black, else in white.
'''
arrayA = np.array(arrayA)
for i in range(arrayA.shape[0]):
for j in range(arrayA.shape[1]):
plt.text(j,i, "%.2f"%arrayA[i,j], ha='center', va='bottom',color='w')
def hist_it(feat):
plt.figure(figsize=(16,4))
feat[Y==0].hist(bins=range(int(feat.min()),int(feat.max()+2)),normed=True,alpha=0.8)
feat[Y==1].hist(bins=range(int(feat.min()),int(feat.max()+2)),normed=True,alpha=0.5)
plt.ylim((0,1))
def gt_matrix(feats,sz=16):
a = []
for i,c1 in enumerate(feats):
b = []
for j,c2 in enumerate(feats):
mask = (~train[c1].isnull()) & (~train[c2].isnull())
if i>=j:
b.append((train.loc[mask,c1].values>=train.loc[mask,c2].values).mean())
else:
b.append((train.loc[mask,c1].values>train.loc[mask,c2].values).mean())
a.append(b)
plt.figure(figsize = (sz,sz))
plt.imshow(a, interpolation = 'None')
_ = plt.xticks(range(len(feats)),feats,rotation = 90)
_ = plt.yticks(range(len(feats)),feats,rotation = 0)
autolabel(a)
def hist_it1(feat):
plt.figure(figsize=(16,4))
feat[Y==0].hist(bins=100,range=(feat.min(),feat.max()),normed=True,alpha=0.5)
feat[Y==1].hist(bins=100,range=(feat.min(),feat.max()),normed=True,alpha=0.5)
plt.ylim((0,1))
###Output
_____no_output_____
###Markdown
Read the data
###Code
train = pd.read_csv('train.csv.zip')
Y = train.target
test = pd.read_csv('test.csv.zip')
test_ID = test.ID
###Output
_____no_output_____
###Markdown
Data overview Probably the first thing to do is check the shapes of the train and test matrices and look inside them.
###Code
print('Train shape', train.shape)
print('Test shape', test.shape)
train.head()
test.head()
###Output
_____no_output_____
###Markdown
There are almost 2000 anonymized variables! It's clear that some of them are categorical and some look numeric. Some numeric features are integer-typed, so probably they are event counters or dates. Others are of float type, but from the first few rows they look integer-typed too, since the fractional part is zero; pandas treats them as `float` because there are NaN values in those features. At first glance we see train has one more column, `target`, which we should not forget to drop before fitting a classifier. We also see the `ID` column is shared between train and test, which sometimes can be successfully used to improve the score. It is also useful to know if there are any NaNs in the data. You should pay attention to columns with NaNs, and the number of NaNs for each row can serve as a nice feature later.
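As a small illustration (not part of the original notebook), such a row-wise NaN count could be added as a feature like this:

```python
# hypothetical feature: number of missing values per row, often informative for tree models
train['num_nan_row'] = train.isnull().sum(axis=1)
test['num_nan_row'] = test.isnull().sum(axis=1)
```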
###Code
# Number of NaNs for each object
train.isnull().sum(axis=1).head(15)
# Number of NaNs for each column
train.isnull().sum(axis=0).head(15)
###Output
_____no_output_____
###Markdown
Just by reviewing the head of the lists we immediately see the patterns: exactly 56 NaNs for a set of variables, and 24 NaNs for some objects. Dataset cleaning Remove constant features All 1932 columns are anonymized, which forces us to deduce the meaning of the features ourselves. We will now try to clean the dataset. It is usually convenient to concatenate train and test into one dataframe and do all feature engineering using it.
###Code
traintest = pd.concat([train, test], axis = 0)
###Output
_____no_output_____
###Markdown
First we should look for constant features; such features do not provide any information and only make our dataset larger.
###Code
# `dropna = False` makes nunique treat NaNs as a distinct value
feats_counts = train.nunique(dropna = False)
feats_counts.sort_values()[:10]
###Output
_____no_output_____
###Markdown
We found 5 constant features. Let's remove them.
###Code
constant_features = feats_counts.loc[feats_counts==1].index.tolist()
print (constant_features)
traintest.drop(constant_features,axis = 1,inplace=True)
###Output
['VAR_0207', 'VAR_0213', 'VAR_0840', 'VAR_0847', 'VAR_1428']
###Markdown
Remove duplicated features Fill NaNs with something we can find later if needed.
###Code
traintest.fillna('NaN', inplace=True)
###Output
_____no_output_____
###Markdown
Now let's encode each feature, as we discussed.
###Code
train_enc = pd.DataFrame(index = train.index)
for col in tqdm_notebook(traintest.columns):
train_enc[col] = train[col].factorize()[0]
###Output
_____no_output_____
###Markdown
We could also do something like this:
###Code
# train_enc[col] = train[col].map(train[col].value_counts())
###Output
_____no_output_____
###Markdown
The resulting data frame is very very large, so we cannot just transpose it and use .duplicated. That is why we will use a simple loop.
###Code
dup_cols = {}
for i, c1 in enumerate(tqdm_notebook(train_enc.columns)):
for c2 in train_enc.columns[i + 1:]:
if c2 not in dup_cols and np.all(train_enc[c1] == train_enc[c2]):
dup_cols[c2] = c1
dup_cols
###Output
_____no_output_____
###Markdown
Don't forget to save them, as it takes a long time to find these.
###Code
import pickle
pickle.dump(dup_cols, open('dup_cols.p', 'wb'), protocol=pickle.HIGHEST_PROTOCOL)
###Output
_____no_output_____
###Markdown
Drop from traintest.
###Code
traintest.drop(dup_cols.keys(), axis = 1,inplace=True)
###Output
_____no_output_____
###Markdown
Determine types Let's examine the number of unique values.
###Code
nunique = train.nunique(dropna=False)
nunique
###Output
_____no_output_____
###Markdown
and build a histogram of those values
###Code
plt.figure(figsize=(14,6))
_ = plt.hist(nunique.astype(float)/train.shape[0], bins=100)
###Output
_____no_output_____
###Markdown
Let's take a look at the features with a huge number of unique values:
###Code
mask = (nunique.astype(float)/train.shape[0] > 0.8)
train.loc[:, mask]
###Output
_____no_output_____
###Markdown
The values are not float, they are integer, so these features are likely to be event counts. Let's look at another pack of features.
###Code
mask = (nunique.astype(float)/train.shape[0] < 0.8) & (nunique.astype(float)/train.shape[0] > 0.4)
train.loc[:25, mask]
###Output
_____no_output_____
###Markdown
These look like counts too. The first thing to notice is the 23rd line: 99999.., -99999 values look like NaNs, so we should probably build a related feature. Second: the columns are sometimes placed next to each other, so the columns are probably grouped together and we can disentangle that. Our conclusion: there are no floating-point variables; there are some count variables, which we will treat as numeric. And finally, let's pick one variable (in this case 'VAR_0015') from the third group of features.
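Returning to the -99999 observation above, a hedged sketch of such a 'NaN-code' indicator feature (the sentinel list and the generated column names are illustrative, not from the notebook):

```python
# hypothetical sketch: flag sentinel codes such as -99999 / 99999 that behave like missing values
sentinels = [-99999, 99999]
for col in train.select_dtypes(exclude=['object']).columns:
    hits = train[col].isin(sentinels)
    if hits.any():
        train[col + '_is_sentinel'] = hits.astype(int)
```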
###Code
train['VAR_0015'].value_counts()
cat_cols = list(train.select_dtypes(include=['object']).columns)
num_cols = list(train.select_dtypes(exclude=['object']).columns)
###Output
_____no_output_____
###Markdown
Go through Let's replace NaNs with something first.
###Code
train.replace('NaN', -999, inplace=True)
###Output
_____no_output_____
###Markdown
Let's calculate how often one feature is greater than another and create a cross table out of it.
###Code
# select first 42 numeric features
feats = num_cols[:42]
# build 'mean(feat1 > feat2)' plot
gt_matrix(feats,16)
###Output
_____no_output_____
###Markdown
Indeed, we see interesting patterns here. There are blocks of features where one is strictly greater than the other. So we can hypothesize that each column corresponds to cumulative counts, e.g. feature number one is the count in the first month, the second is the total count in the first two months, and so on. So we immediately understand what features we should generate to make tree-based models more efficient: the differences between consecutive values (see the sketch below). VAR_0002, VAR_0003
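A minimal sketch of the consecutive-difference features mentioned above (assuming `feats` holds one block of cumulative-count columns in order, as in the plot cell above):

```python
# hypothetical sketch: differences between consecutive cumulative-count columns
for prev, curr in zip(feats[:-1], feats[1:]):
    train['diff_' + curr + '_' + prev] = train[curr] - train[prev]
```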
###Code
hist_it(train['VAR_0002'])
plt.ylim((0,0.05))
plt.xlim((-10,1010))
hist_it(train['VAR_0003'])
plt.ylim((0,0.03))
plt.xlim((-10,1010))
train['VAR_0002'].value_counts()
train['VAR_0003'].value_counts()
###Output
_____no_output_____
###Markdown
We see there is something special about 12, 24 and so on, so we can create another feature x mod 12. VAR_0004
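A sketch of the mod-12 feature mentioned above, mirroring the mod-50 feature built for `VAR_0004` in the next cell:

```python
# hypothetical sketch: periodicity features for the month-like counters
train['VAR_0002_mod12'] = train['VAR_0002'] % 12
train['VAR_0003_mod12'] = train['VAR_0003'] % 12
```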
###Code
train['VAR_0004_mod50'] = train['VAR_0004'] % 50
hist_it(train['VAR_0004_mod50'])
plt.ylim((0,0.6))
###Output
_____no_output_____
###Markdown
Categorical features Let's take a look at categorical features we have.
###Code
train.loc[:,cat_cols].head().T
###Output
_____no_output_____
###Markdown
`VAR_0200`, `VAR_0237`, `VAR_0274` look like some geographical data, so one could generate geography-related features; we will talk about that later in the course. There are some features that are hard to identify, but look: there are date columns `VAR_0073` -- `VAR_0179`, `VAR_0204`, `VAR_0217`. It is useful to plot one date against another to find relationships.
###Code
date_cols = [u'VAR_0073','VAR_0075',
u'VAR_0156',u'VAR_0157',u'VAR_0158','VAR_0159',
u'VAR_0166', u'VAR_0167',u'VAR_0168',u'VAR_0169',
u'VAR_0176',u'VAR_0177',u'VAR_0178',u'VAR_0179',
u'VAR_0204',
u'VAR_0217']
for c in date_cols:
train[c] = pd.to_datetime(train[c],format = '%d%b%y:%H:%M:%S')
test[c] = pd.to_datetime(test[c], format = '%d%b%y:%H:%M:%S')
c1 = 'VAR_0217'
c2 = 'VAR_0073'
# mask = (~test[c1].isnull()) & (~test[c2].isnull())
# sc2(test.ix[mask,c1].values,test.ix[mask,c2].values,alpha=0.7,c = 'black')
mask = (~train[c1].isnull()) & (~train[c2].isnull())
sc2(train.loc[mask,c1].values,train.loc[mask,c2].values,c=train.loc[mask,'target'].values)
###Output
_____no_output_____ |
meteology/ulmo_pyconjp2016.ipynb | ###Markdown
Case 1: Playing with weather data using ulmo 1-1 Plotting Tokyo's maximum temperature - Load ulmo and the plotting libraries
###Code
import ulmo
import pandas
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
- Search the weather data provided by NOAA for daily weather data for Japan - Look for the Tokyo station among them
###Code
st = ulmo.ncdc.ghcn_daily.get_stations(country='JA', as_dataframe=True)
st[st.name.str.contains('TOKYO')]
###Output
_____no_output_____
###Markdown
- Tokyo weather data is available, so load it into a pandas DataFrame
###Code
data = ulmo.ncdc.ghcn_daily.get_data('JA000047662', as_dataframe=True)
###Output
_____no_output_____
###Markdown
- Extract just the maximum temperature, apply the proper scaling, and plot it
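(Not part of the original notebook.) GHCN-Daily stores temperatures in tenths of a degree Celsius, which is presumably why the values are divided by 10 in the next cell; a monthly mean of the same series can be computed with plain pandas, e.g.:

```python
# hypothetical follow-up: monthly mean of the daily maximum temperature, in °C
tmax_c = data['TMAX']['value'] / 10.0
monthly_tmax = tmax_c.groupby([tmax_c.index.year, tmax_c.index.month]).mean()
monthly_tmax.plot()
```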
###Code
tm = data['TMAX'].copy()
tm.value = tm.value/10.0
tm['value'].plot()
###Output
_____no_output_____
###Markdown
1-2 Daymet weather data - ORNL Daymet: https://daymet.ornl.gov/ - Daily weather data for North America at 1 km x 1 km grid resolution - Weather from 2012 to 2013 at latitude 35.9313167, longitude -84.3104124
###Code
from ulmo.nasa import daymet
ornl_lat, ornl_long = 35.9313167, -84.3104124
df = daymet.get_daymet_singlepixel(longitude=ornl_long, latitude=ornl_lat,
years=[2012,2013])
df.head()
###Output
_____no_output_____
###Markdown
- Graph the temperature changes - Plot the 15-day rolling-mean maximum and minimum temperatures together
###Code
fig, (ax1, ax2) = plt.subplots(2, figsize=(18, 10), sharex=True)
rolling15day = df.rolling(center=False,window=15).mean()
ax1.fill_between(rolling15day.index, rolling15day.tmin, rolling15day.tmax, alpha=0.5, lw=0)
ax1.plot(df.index, df[['tmax', 'tmin']].mean(axis=1), lw=2, alpha=0.5)
ax1.set_title('Daymet temp at ORNL', fontsize=20)
ax1.set_ylabel(u'Temp. (°C)', fontsize=20)
monthlysum = df.resample("M").sum()
ax2.bar(monthlysum.index, monthlysum.prcp, width=20,)
ax2.set_title('Daymet precip at ORNL', fontsize=20)
ax2.set_ylabel(u'Precip. (mm)', fontsize=20)
fig.tight_layout()
fig
###Output
_____no_output_____
###Markdown
- Compare Denver and Miami temperatures across the full year
###Code
denver_loc = (-104.9903, 39.7392)
miami_loc = (-80.2089, 25.7753)
denver = daymet.get_daymet_singlepixel(longitude=denver_loc[0], latitude=denver_loc[1],
years=[2012, 2013, 2014])
miami = daymet.get_daymet_singlepixel(longitude=miami_loc[0], latitude=miami_loc[1],
years=[2012, 2013, 2014])
sns.set_context("talk")
fig, ax1 = plt.subplots(1, figsize=(18, 10))
den_15day = denver.rolling(center=False,window=15).mean()
ax1.fill_between(den_15day.index, den_15day.tmin, den_15day.tmax,
alpha=0.4, lw=0, label='Denver', color=sns.xkcd_palette(['faded green'])[0])
ax1.set_title('Denver vs Miami temps (15 day rolling mean)', fontsize=20)
miami_15day = miami.rolling(center=False,window=15).mean()
ax1.fill_between(miami_15day.index, miami_15day.tmin, miami_15day.tmax,
alpha=0.4, lw=0, label='Miami', color=sns.xkcd_palette(['dusty purple'])[0])
ax1.set_ylabel(u'Temp. (°C)', fontsize=20)
fig.tight_layout()
plt.legend(fontsize=20)
###Output
_____no_output_____
###Markdown
- Florida is warm all year, but in summer Denver's maximum temperatures are higher - Denver also has a wider range, both in daily temperature swings and across the year
###Code
fig
###Output
_____no_output_____ |