<a href="https://colab.research.google.com/github/ndoshi83/DS-Unit-1-Sprint-1-Dealing-With-Data/blob/master/NDoshi_DS4_114_Making_Data_backed_Assertions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Lambda School Data Science - Making Data-backed Assertions
This is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it.
## Lecture - generating a confounding variable
The prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.
Let's use Python to generate data that actually behaves in this fashion!
```
# y = "health outcome" - predicted variable - dependent variable
# x = "drug usage" - explanatory variable - independent variable
import random
dir(random) # Reminding ourselves what we can do here
random.seed(10) # Random Seed for reproducibility
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
# Desktop users
time_on_site = random.uniform(10, 600)
purchased = random.random() < 0.1 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, False))
for _ in range(750):
# Mobile users
time_on_site = random.uniform(5, 300)
purchased = random.random() < 0.3 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, True))
random.shuffle(users)
print(users[:10])
# !pip freeze
!pip install pandas==0.23.4
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# Let's use crosstabulation to try to see what's going on
# pd.crosstab(user_data['purchased'], user_data['time_on_site'], margins=True)
# Trying to show the margins on our Crosstab. Think this might be another
# versioning issue.
# pd.crosstab(user_data['purchased'], time_bins, margins=True)
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site'], 5) # 5 equal-width bins
pd.crosstab(user_data['purchased'], time_bins)
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(user_data['purchased'], time_bins, normalize='columns')
# That seems counter to our hypothesis
# More time on the site can actually correspond to fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
ct = pd.crosstab(user_data['mobile'], [user_data['purchased'], time_bins],
rownames=['device'],
colnames=["purchased", "time on site"],
normalize='index')
ct
# help(user_data.plot)
import seaborn as sns
sns.heatmap(pd.crosstab(user_data['mobile'], [user_data['purchased'], time_bins] ),
cmap="YlGnBu", annot=True, cbar=False)
# user_data.hist()
pd.pivot_table(user_data, values='purchased',
index=time_bins).plot.bar()
pd.pivot_table(
user_data, values='mobile', index=time_bins).plot.bar();
user_data['time_on_site'].plot.density();
ct = pd.crosstab(time_bins, [user_data['purchased'], user_data['mobile']],
normalize='columns')
ct
ct.plot();
ct.plot(kind='bar')
ct.plot(kind='bar', stacked=True)
time_bins = pd.cut(user_data['time_on_site'], 6) # 6 equal-width bins
ct = pd.crosstab(time_bins, [user_data['purchased'], user_data['mobile']],
normalize='columns')
ct
ct.plot(kind='bar', stacked=True)
```
## Assignment - what's going on here?
Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.
Try to figure out which variables are possibly related to each other, and which may be confounding relationships.
```
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
# Import pandas library
import pandas as pd
# Load data into pandas dataframe
df = pd.read_csv('https://raw.githubusercontent.com/ndoshi83/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module4-databackedassertions/persons.csv')
# Show example of df
df.head(10)
df.dtypes
# Start with a pairplot to compare all the variables
import seaborn as sns
sns.pairplot(df)
# Create a distplots for all three variables
sns.distplot(df['age']);
sns.distplot(df['weight']);
sns.distplot(df['exercise_time']);
sns.jointplot('exercise_time', 'weight', df, kind = 'kde')
```
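A quick numeric check can back up the plots above. Here is a minimal sketch that reuses the same `df` and mirrors the lecture's binning-and-crosstab approach:
```
# Pairwise correlations give a first numeric summary of the relationships
print(df[['age', 'weight', 'exercise_time']].corr())
# Mirror the lecture: bin both continuous variables, then cross-tabulate
# them to see how weight is distributed within each exercise-time bin
exercise_bins = pd.cut(df['exercise_time'], 5)
weight_bins = pd.cut(df['weight'], 5)
pd.crosstab(weight_bins, exercise_bins, normalize='columns')
```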
### Assignment questions
After you've worked on some code, answer the following questions in this text block:
1. What are the variable types in the data?
The variables are all integer types.
2. What are the relationships between the variables?
There is no direct relationship between age and weight, but there appear to be relationships between age and exercise time and between exercise time and weight.
3. Which relationships are "real", and which spurious?
The real relationship is between exercise time and weight, whereas the relationship between age and exercise time appears to be spurious.
## Stretch goals and resources
Following are *optional* things for you to take a look at. Focus on the above assignment first, and make sure to commit and push your changes to GitHub.
- [Spurious Correlations](http://tylervigen.com/spurious-correlations)
- [NIH on controlling for confounding variables](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4017459/)
Stretch goals:
- Produce your own plot inspired by the Spurious Correlation visualizations (and consider writing a blog post about it - both the content and how you made it)
- Pick one of the techniques that NIH highlights for confounding variables - we'll be going into many of them later, but see if you can find which Python modules may help (hint - check scikit-learn)
- Use a groupby object to create some useful visualizations
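For the groupby stretch goal, a minimal sketch (assuming the `persons.csv` DataFrame `df` and imports from the assignment above) could look like:
```
# Group people into age bins, then plot the average weight and exercise time per bin
age_bins = pd.cut(df['age'], 8)
grouped = df.groupby(age_bins)[['weight', 'exercise_time']].mean()
grouped.plot.bar(subplots=True, figsize=(10, 6));
```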
---
# Visual Designer (Data Prep)
In this exercise we will be building a pipeline in Azure Machine Learning using the [Visual Designer](https://docs.microsoft.com/azure/machine-learning/concept-designer). Traditionally the Visual Designer is used for training and deploying models. Here we will build a data prep pipeline that gets a dataset ready for downstream model scoring. Below you can see a final picture of the data prep pipeline that will be built as part of this exercise.
The pipeline will join two datasets that together make up the diabetes dataset. We will perform binning on the Age column. After joining the datasets, we will use the [SQL Transformation](https://docs.microsoft.com/azure/machine-learning/component-reference/apply-sql-transformation) component to demonstrate the flexibility of the Visual Designer by creating an aggregate dataset. The resulting datasets will be landed in the /1-bronze folder of the data lake. Later we will build a scoring pipeline that uses the resulting dataset.

## Step 1: Stage data
Let's first upload our source files to the /0-raw layer of the data lake. We will use this as the source for the pipeline.
```
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
#TODO: Supply userid value for naming artifacts.
userid = ''
tabular_dataset_name = 'diabetes-data-bronze-' + userid
print(
tabular_dataset_name
)
from azureml.core import Datastore, Dataset
# Set datastore name where raw diabetes data is stored.
datastore_name = ''
datastore = Datastore.get(ws, datastore_name)
print("Found Datastore with name: %s" % datastore_name)
from azureml.data.datapath import DataPath
# Upload local csv files to ADLS using AML Datastore.
ds = Dataset.File.upload_directory(src_dir='../data/stage',
target=DataPath(datastore, '0-raw/diabetes/' + userid + '/stage/'),
show_progress=True)
type(ds)
```
## Step 2: Create target datasets
Register datasets to use as targets for writing data from pipeline.
```
diabetes_ds = Dataset.Tabular.from_delimited_files(path=(datastore,'1-bronze/diabetes/' + userid + '/diabetes.csv'),validate=False,infer_column_types=False)
diabetes_ds.register(ws,name=tabular_dataset_name,create_new_version=True)
diabetes_ds = Dataset.Tabular.from_delimited_files(path=(datastore,'1-bronze/diabetes/' + userid + '/diabetes_sql_example.csv'),validate=False,infer_column_types=False)
diabetes_ds.register(ws,name=tabular_dataset_name + '_sql_example',create_new_version=True)
```
## Step 3: Create new pipeline
In the Azure ML studio, navigate to <b>Designer</b> and press the <b>+</b> button under <b>New pipeline</b>

1. In <b>Settings</b> change the compute type to <b>Compute cluster</b> and select the appropriate compute cluster.
1. Name the pipeline in the <b>Draft name</b> field using the convention "pipeline-data-prep-diabetes-<'userid'>-prod"

1. Open <b>Data Input and Output</b> from the components menu.
2. Drag <b>Import Data</b> onto the canvas.
3. Change the <b>Data source</b> to <b>URL via HTTP</b>
4. Enter the storage url to the <b>patient-age.csv</b> file in the <b>/0-raw</b> folder of the data lake.
5. Validate by pressing <b>Preview schema</b>

1. Open <b>Data Input and Output</b> from the components menu.
2. Drag <b>Import Data</b> onto the canvas.
3. Change the <b>Data source</b> to <b>URL via HTTP</b>
4. Enter the storage url to the <b>patient-levels.csv</b> file in the <b>/0-raw</b> folder of the data lake.
5. Validate by pressing <b>Preview schema</b>

1. Open <b>Data Transformation</b> from the components menu.
2. Drag <b>Group Data into Bins</b> onto the canvas.
3. Connect <b>Import Data</b> for patient-age.csv.
4. Change <b>Binning mode</b> to <b>Custom Edges</b>.
5. Paste the following value in the <b>Comma-separated list of bin edges</b> field. "1,11,21,31,41,51,61,71,81,91"
6. Select the <b>Age</b> column for <b>Columns to bin</b>.

1. Open <b>Data Transformation</b> from the components menu.
2. Drag <b>Join Data</b> onto the canvas.
3. Connect <b>Group Data into Bins</b> component using the <b>Quantized dataset: DataFrameDirectory</b> output to <b>Join Data</b> component <b>Left dataset: DataFrameDirectory</b> input.
4. Connect <b>Import Data</b> for patient-levels.csv to <b>Join Data</b> component <b>Right dataset: DataFrameDirectory</b> input.
5. Set the right and left join key columns to <b>Id</b>
6. Leave defaults as shown in screenshot.

1. Open <b>Data Transformation</b> from the components menu.
2. Drag <b>Select Columns in Dataset</b> onto the canvas.
3. Connect <b>Join Data</b> to <b>Select Columns in Dataset</b>.
4. Add the following columns to <b>Select columns</b>. "Id,PatientID,Age,Age_quantized,Pregnancies,PlasmaGlucose,DiastolicBloodPressure,TricepsThickness,SerumInsulin,BMI,DiabetesPedigree"

1. Open <b>Data Input and Output</b> from the components menu.
2. Drag <b>Export Data</b> onto the canvas.
3. Connect <b>Select Columns in Dataset</b> to <b>Export Data</b>.
4. Choose <b>Azure Data Lake Storage Gen2</b> from the <b>Datastore type</b> dropdown.
5. Select the workshop datastore from the <b>Datastore</b> dropdown.
6. Enter the path to the <b>/1-bronze</b> diabetes folder with filename <b>diabetes.csv</b>
7. Choose <b>csv</b> for the <b>File format</b>.

1. Open <b>Data Transformation</b> from the components menu.
2. Drag <b>Apply SQL Transformation</b> onto the canvas.
3. Connect <b>Join Data</b> to the <b>Apply SQL Transformation</b> input <b>t1: DataFrameDirectory</b>.
4. Enter the following SQL statement in the <b>SQL query script</b> field.
```sql
SELECT
PatientID
,MAX(BMI)
FROM t1
GROUP BY PatientID
```

1. Open <b>Data Input and Output</b> from the components menu.
2. Drag <b>Export Data</b> onto the canvas.
3. Connect <b>Apply SQL Transformation</b> to <b>Export Data</b>.
4. Choose <b>Azure Data Lake Storage Gen2</b> from the <b>Datastore type</b> dropdown.
5. Select the workshop datastore from the <b>Datastore</b> dropdown.
6. Enter the path to the <b>/1-bronze</b> diabetes folder with filename <b>diabetes_sql_example.csv</b>
7. Choose <b>csv</b> for the <b>File format</b>.

## Step 4: Submit and Publish pipeline
First, submit the pipeline and ensure it runs as expected. Then publish the pipeline endpoint.
1. Press <b>Submit</b>
2. Choose <b>Create New</b> for Experiment.
3. Name the new experiment using this convention. "pipeline-data-prep-diabetes-\<userid\>-prod"
4. Press the <b>Submit</b> button.
5. Monitor the run for completion.

1. Verify <b>diabetes.csv</b> and <b>diabetes_sql_example.csv</b> are created after the pipeline run in <b>/1-bronze</b> folder.

1. Verify that the registered datasets recognize the new files


1. Open the pipeline and press the <b>Publish</b> button.
2. Choose <b>Create new</b> and name the pipeline endpoint the same as the pipeline draft.
3. Press the <b>Publish</b> button.

## The End
This data prep pipeline will be orchestrated using Azure Data Factory with scoring and training pipelines that are published in Module 3.
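Although orchestration will be handled by Azure Data Factory, the published endpoint can also be smoke-tested from the Azure ML SDK. The sketch below assumes the endpoint and experiment follow the naming convention used above:
```
from azureml.pipeline.core import PipelineEndpoint
# Assumed names based on the convention "pipeline-data-prep-diabetes-<userid>-prod"
endpoint_name = 'pipeline-data-prep-diabetes-' + userid + '-prod'
pipeline_endpoint = PipelineEndpoint.get(workspace=ws, name=endpoint_name)
run = pipeline_endpoint.submit(experiment_name=endpoint_name)
run.wait_for_completion(show_output=True)
```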
---
# Overview
### `clean_us_data.ipynb`: Fix data inconsistencies in the raw time series data from [`etl_us_data.ipynb`](./etl_us_data.ipynb).
Inputs:
* `outputs/us_counties.csv`: Raw county-level time series data for the United States, produced by running [etl_us_data.ipynb](./etl_us_data.ipynb)
* `outputs/us_counties_meta.json`: Column type metadata for reading `data/us_counties.csv` with `pd.read_csv()`
Outputs:
* `outputs/us_counties_clean.csv`: The contents of `outputs/us_counties.csv` after data cleaning
* `outputs/us_counties_clean_meta.json`: Column type metadata for reading `data/us_counties_clean.csv` with `pd.read_csv()`
* `outputs/us_counties_clean.feather`: Binary version of `us_counties_clean.csv`, in [Feather](https://arrow.apache.org/docs/python/feather.html) format.
* `outputs/dates.feather`: Dates associated with points in time series, in [Feather](https://arrow.apache.org/docs/python/feather.html) format.
**Note:** You can redirect these input and output files by setting the environment variables `COVID_INPUTS_DIR` and `COVID_OUTPUTS_DIR` to replacement values for the prefixes `inputs` and `outputs`, respectively, in the above paths.
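For example, a minimal way to apply such an override from inside Python, before the initialization cell below runs (the path shown is only a placeholder):
```
import os
# Placeholder path: redirect this notebook's outputs to a different directory
os.environ["COVID_OUTPUTS_DIR"] = "/tmp/covid_outputs"
```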
# Read and reformat the raw data
```
# Initialization boilerplate
import os
import json
import pandas as pd
import numpy as np
import scipy.optimize
import sklearn.metrics
import matplotlib.pyplot as plt
from typing import *
import text_extensions_for_pandas as tp
# Local file of utility functions
import util
# Allow environment variables to override data file locations.
_INPUTS_DIR = os.getenv("COVID_INPUTS_DIR", "inputs")
_OUTPUTS_DIR = os.getenv("COVID_OUTPUTS_DIR", "outputs")
util.ensure_dir_exists(_OUTPUTS_DIR) # create if necessary
```
## Read the CSV file from `etl_us_data.ipynb` and apply the saved type information
```
csv_file = os.path.join(_OUTPUTS_DIR, "us_counties.csv")
meta_file = os.path.join(_OUTPUTS_DIR, "us_counties_meta.json")
# Read column type metadata
with open(meta_file) as f:
cases_meta = json.load(f)
# Pandas does not support datetime64 dtypes in read_csv's dtype mapping.
# As a workaround, mark the "Date" column as object here and let the
# parse_dates argument below convert it.
cases_meta["Date"] = "object"
cases_raw = pd.read_csv(csv_file, dtype=cases_meta, parse_dates=["Date"])
# Restore the Pandas index
cases_vertical = cases_raw.set_index(["FIPS", "Date"], verify_integrity=True)
cases_vertical
```
## Replace missing values in the secondary datasets with zeros
```
for colname in ("Confirmed_NYT", "Deaths_NYT", "Confirmed_USAFacts", "Deaths_USAFacts"):
cases_vertical[colname].fillna(0, inplace=True)
cases_vertical[colname] = cases_vertical[colname].astype("int64")
cases_vertical
```
## Collapse each time series down to a single cell
This kind of time series data is easier to manipulate at the macroscopic level if each time series occupies a
single cell of the DataFrame. We use the [TensorArray](https://text-extensions-for-pandas.readthedocs.io/en/latest/#text_extensions_for_pandas.TensorArray) Pandas extension type from [Text Extensions for Pandas](https://github.com/CODAIT/text-extensions-for-pandas).
```
cases, dates = util.collapse_time_series(cases_vertical, ["Confirmed", "Deaths", "Recovered",
"Confirmed_NYT", "Deaths_NYT",
"Confirmed_USAFacts", "Deaths_USAFacts"])
cases
# Note that the previous cell also saved the values from the "Date"
# column of `cases_vertical` into the Python variable `dates`:
dates[:10], dates.shape
# Print out the time series for the Bronx as a sanity check
bronx_fips = 36005
cases.loc[bronx_fips]["Confirmed"]
```
# Correct for missing data for today in USAFacts data
The USAFacts database only receives the previous day's updates late in the day,
so it's often missing the last value. Substitute the previous day's value if
that is the case.
```
# Last 10 days of the time series for the Bronx before this change
cases.loc[bronx_fips]["Deaths_USAFacts"].to_numpy()[-10:]
# last element <-- max(last element, second to last)
new_confirmed = cases["Confirmed_USAFacts"].to_numpy().copy()
new_confirmed[:, -1] = np.maximum(new_confirmed[:, -1], new_confirmed[:, -2])
cases["Confirmed_USAFacts"] = tp.TensorArray(new_confirmed)
new_deaths = cases["Deaths_USAFacts"].to_numpy().copy()
new_deaths[:, -1] = np.maximum(new_deaths[:, -1], new_deaths[:, -2])
cases["Deaths_USAFacts"] = tp.TensorArray(new_deaths)
# Last 10 days of the time series for the Bronx after this change
cases.loc[bronx_fips]["Deaths_USAFacts"].to_numpy()[-10:]
```
# Validate the New York City confirmed cases data
Older versions of the Johns Hopkins data coded all of New York city as being
in New York County. Each borough is actually in a different county
with a different FIPS code.
Verify that this problem hasn't recurred.
```
max_bronx_confirmed = np.max(cases.loc[36005]["Confirmed"])
if max_bronx_confirmed == 0:
raise ValueError(f"Time series for the Bronx is all zeros again:\n{cases.loc[36005]['Confirmed']}")
max_bronx_confirmed
```
Also plot the New York City confirmed cases time series to allow for manual validation.
```
new_york_county_fips = 36061
nyc_fips = [
36005, # Bronx County
36047, # Kings County
new_york_county_fips, # New York County
36081, # Queens County
36085, # Richmond County
]
util.graph_examples(cases.loc[nyc_fips], "Confirmed", {}, num_to_pick=5)
```
## Adjust New York City deaths data
Plot deaths for New York City in the Johns Hopkins data set. The jump in June is due to a change in reporting.
```
util.graph_examples(cases.loc[nyc_fips], "Deaths", {}, num_to_pick=5)
```
New York Times version of the time series for deaths in New York city:
```
util.graph_examples(cases.loc[nyc_fips], "Deaths_NYT", {}, num_to_pick=5)
```
USAFacts version of the time series for deaths in New York city:
```
util.graph_examples(cases.loc[nyc_fips], "Deaths_USAFacts", {}, num_to_pick=5)
```
Currently the USAFacts version is cleanest, so we use that one.
```
new_deaths = cases["Deaths"].copy(deep=True)
for fips in nyc_fips:
new_deaths.loc[fips] = cases["Deaths_USAFacts"].loc[fips]
cases["Deaths"] = new_deaths
print("After:")
util.graph_examples(cases.loc[nyc_fips], "Deaths", {}, num_to_pick=5)
```
# Clean up the Rhode Island data
The Johns Hopkins data reports zero deaths in most of Rhode Island. Use
the secondary data set from the New York Times for Rhode Island.
```
print("Before:")
util.graph_examples(cases, "Deaths", {}, num_to_pick=8,
mask=(cases["State"] == "Rhode Island"))
# Use our secondary data set for all Rhode Island data.
ri_fips = cases[cases["State"] == "Rhode Island"].index.values.tolist()
for colname in ["Confirmed", "Deaths"]:
new_series = cases[colname].copy(deep=True)
for fips in ri_fips:
new_series.loc[fips] = cases[colname + "_NYT"].loc[fips]
cases[colname] = new_series
# Note that the secondary data set has no "Recovered" time series, so
# we leave those numbers alone for now.
print("After:")
util.graph_examples(cases, "Deaths", {}, num_to_pick=8,
mask=(cases["State"] == "Rhode Island"))
```
# Clean up the Utah data
The Johns Hopkins data for Utah is missing quite a few data points.
Use the New York Times data for Utah.
```
print("Before:")
util.graph_examples(cases, "Confirmed", {}, num_to_pick=8,
mask=(cases["State"] == "Utah"))
# The Utah time series from the New York Times' data set are more
# complete, so we use those numbers.
ut_fips = cases[cases["State"] == "Utah"].index.values
for colname in ["Confirmed", "Deaths"]:
new_series = cases[colname].copy(deep=True)
for fips in ut_fips:
new_series.loc[fips] = cases[colname + "_NYT"].loc[fips]
cases[colname] = new_series
# Note that the secondary data set has no "Recovered" time series, so
# we leave those numbers alone for now.
print("After:")
util.graph_examples(cases, "Confirmed", {}, num_to_pick=8,
mask=(cases["State"] == "Utah"))
```
# Flag additional problematic and missing data points
Use heuristics to identify and flag problematic data points across all
the time series. Generate Boolean masks that show the locations of these
outliers.
```
# Now we're done with the secondary data set, so drop its columns.
cases = cases.drop(columns=["Confirmed_NYT", "Deaths_NYT", "Confirmed_USAFacts", "Deaths_USAFacts"])
cases
# Now we need to find and flag obvious data-entry errors.
# We'll start by creating columns of "is outlier" masks.
# We use integers instead of Boolean values as a workaround for
# https://github.com/pandas-dev/pandas/issues/33770
# Start out with everything initialized to "not an outlier"
cases["Confirmed_Outlier"] = tp.TensorArray(np.zeros_like(cases["Confirmed"].values))
cases["Deaths_Outlier"] = tp.TensorArray(np.zeros_like(cases["Deaths"].values))
cases["Recovered_Outlier"] = tp.TensorArray(np.zeros_like(cases["Recovered"].values))
cases
```
## Flag time series that go from zero to nonzero and back again
One type of anomaly that occurs fairly often involves a time series
jumping from zero to a nonzero value, then back to zero again.
Locate all instances of that pattern and mark the nonzero values
as outliers.
```
def nonzero_then_zero(series: np.array):
empty_mask = np.zeros_like(series, dtype=np.int8)
if series[0] > 0:
# Special case: first value is nonzero
return empty_mask
first_nonzero_offset = 0
while first_nonzero_offset < len(series):
if series[first_nonzero_offset] > 0:
# Found the first nonzero.
# Find the distance to the next zero value.
next_zero_offset = first_nonzero_offset + 1
while (next_zero_offset < len(series)
and series[next_zero_offset] > 0):
next_zero_offset += 1
# Check the length of the run of zeros after
# dropping back to zero.
second_nonzero_offset = next_zero_offset + 1
while (second_nonzero_offset < len(series)
and series[second_nonzero_offset] == 0):
second_nonzero_offset += 1
nonzero_run_len = next_zero_offset - first_nonzero_offset
second_zero_run_len = second_nonzero_offset - next_zero_offset
# print(f"{first_nonzero_offset} -> {next_zero_offset} -> {second_nonzero_offset}; series len {len(series)}")
if next_zero_offset >= len(series):
# Everything after the first nonzero was a nonzero
return empty_mask
elif second_zero_run_len <= nonzero_run_len:
# Series dropped back to zero, but the second zero
# part was shorter than the nonzero section.
# In this case, it's more likely that the second run
# of zero values are actually missing values.
return empty_mask
else:
# Series went zero -> nonzero -> zero -> nonzero
# or zero -> nonzero -> zero -> [end]
nonzero_run_mask = empty_mask.copy()
nonzero_run_mask[first_nonzero_offset:next_zero_offset] = 1
return nonzero_run_mask
first_nonzero_offset += 1
# If we get here, the series was all zeros
return empty_mask
for colname in ["Confirmed", "Deaths", "Recovered"]:
addl_outliers = np.stack([nonzero_then_zero(s.to_numpy()) for s in cases[colname]])
outliers_colname = colname + "_Outlier"
new_outliers = cases[outliers_colname].values.astype(np.bool) | addl_outliers
cases[outliers_colname] = tp.TensorArray(new_outliers.astype(np.int8))
# fips = 13297
# print(cases.loc[fips]["Confirmed"])
# print(nonzero_then_zero(cases.loc[fips]["Confirmed"]))
# Let's have a look at which time series acquired the most outliers as
# a result of the code in the previous cell.
df = cases[["State", "County"]].copy()
df["Confirmed_Num_Outliers"] = np.count_nonzero(cases["Confirmed_Outlier"], axis=1)
counties_with_outliers = df.sort_values("Confirmed_Num_Outliers", ascending=False).head(10)
counties_with_outliers
# Plot the counties in the table above, with outliers highlighted.
# The graph_examples() function is defined in util.py.
util.graph_examples(cases, "Confirmed", {}, num_to_pick=10, mask=(cases.index.isin(counties_with_outliers.index)))
```
## Flag time series that drop to zero, then go back up
Another type of anomaly involves the time series dropping down to
zero, then going up again. Since all three time series are supposed
to be cumulative counts, this pattern most likely indicates missing
data.
To correct for this problem, we mark any zero values after the
first nonzero, non-outlier values as outliers, across all time series.
```
def zeros_after_first_nonzero(series: np.array, outliers: np.array):
nonzero_mask = (series != 0)
nonzero_and_not_outlier = nonzero_mask & (~outliers)
first_nonzero = np.argmax(nonzero_and_not_outlier)
if 0 == first_nonzero and series[0] == 0:
# np.argmax(nonzero_mask) will return 0 if there are no nonzeros
return np.zeros_like(series)
after_nonzero_mask = np.zeros_like(series)
after_nonzero_mask[first_nonzero:] = True
return (~nonzero_mask) & after_nonzero_mask
for colname in ["Confirmed", "Deaths", "Recovered"]:
outliers_colname = colname + "_Outlier"
addl_outliers = np.stack([zeros_after_first_nonzero(s.to_numpy(), o.to_numpy())
for s, o in zip(cases[colname], cases[outliers_colname])])
new_outliers = cases[outliers_colname].values.astype(np.bool) | addl_outliers
cases[outliers_colname] = tp.TensorArray(new_outliers.astype(np.int8))
# fips = 47039
# print(cases.loc[fips]["Confirmed"])
# print(cases.loc[fips]["Confirmed_Outlier"])
# print(zeros_after_first_nonzero(cases.loc[fips]["Confirmed"], cases.loc[fips]["Confirmed_Outlier"]))
# Redo our "top 10 by number of outliers" analysis with the additional outliers
df = cases[["State", "County"]].copy()
df["Confirmed_Num_Outliers"] = np.count_nonzero(cases["Confirmed_Outlier"], axis=1)
counties_with_outliers = df.sort_values("Confirmed_Num_Outliers", ascending=False).head(10)
counties_with_outliers
util.graph_examples(cases, "Confirmed", {}, num_to_pick=10, mask=(cases.index.isin(counties_with_outliers.index)))
# The steps we've just done have removed quite a few questionable
# data points, but you will definitely want to flag additional
# outliers by hand before trusting descriptive statistics about
# any county.
# TODO: Incorporate manual whitelists and blacklists of outliers
# into this notebook.
```
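One possible shape for the manual overrides mentioned in the TODO above. This is only a sketch: the FIPS codes and day offsets are placeholders, not curated values.
```
# Hypothetical hand-curated blacklist: FIPS code -> day offsets to force-flag as outliers
MANUAL_CONFIRMED_OUTLIERS = {
    36005: [5, 6],  # placeholder entries only
}
new_outlier_mask = cases["Confirmed_Outlier"].to_numpy().copy()
for fips, day_offsets in MANUAL_CONFIRMED_OUTLIERS.items():
    row = cases.index.get_loc(fips)
    new_outlier_mask[row, day_offsets] = 1
cases["Confirmed_Outlier"] = tp.TensorArray(new_outlier_mask)
```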
# Precompute totals for the last 7 days
Several of the notebooks downstream of this one need the number of cases and deaths
for the last 7 days, so we compute those values here for convenience.
```
def last_week_results(s: pd.Series):
arr = s.to_numpy()
today = arr[:,-1]
week_ago = arr[:,-8]
return today - week_ago
cases["Confirmed_7_Days"] = last_week_results(cases["Confirmed"])
cases["Deaths_7_Days"] = last_week_results(cases["Deaths"])
cases.head()
```
# Write out cleaned time series data
By default, output files go to the `outputs` directory. You can use the `COVID_OUTPUTS_DIR` environment variable to override that location.
## CSV output
Comma separated value (CSV) files are a portable text-base format supported by a wide variety
of different tools. The CSV format does not include type information, so we write a second
file of schema data in JSON format.
```
# Break out our time series into multiple rows again for writing to disk.
cleaned_cases_vertical = util.explode_time_series(cases, dates)
cleaned_cases_vertical
# The outlier masks are stored as integers as a workaround for a Pandas
# bug. Convert them to Boolean values for writing to disk.
cleaned_cases_vertical["Confirmed_Outlier"] = cleaned_cases_vertical["Confirmed_Outlier"].astype(np.bool)
cleaned_cases_vertical["Deaths_Outlier"] = cleaned_cases_vertical["Deaths_Outlier"].astype(np.bool)
cleaned_cases_vertical["Recovered_Outlier"] = cleaned_cases_vertical["Recovered_Outlier"].astype(np.bool)
cleaned_cases_vertical
# Write out the results to a CSV file plus a JSON file of type metadata.
cleaned_cases_vertical_csv_data_file = os.path.join(_OUTPUTS_DIR,"us_counties_clean.csv")
print(f"Writing cleaned data to {cleaned_cases_vertical_csv_data_file}")
cleaned_cases_vertical.to_csv(cleaned_cases_vertical_csv_data_file, index=True)
col_type_mapping = {
key: str(value) for key, value in cleaned_cases_vertical.dtypes.iteritems()
}
cleaned_cases_vertical_json_data_file = os.path.join(_OUTPUTS_DIR,"us_counties_clean_meta.json")
print(f"Writing metadata to {cleaned_cases_vertical_json_data_file}")
with open(cleaned_cases_vertical_json_data_file, "w") as f:
json.dump(col_type_mapping, f)
```
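A minimal sketch of how a downstream notebook could reload this CSV with its type metadata, mirroring the read pattern at the top of this notebook (details of the index columns depend on `util.explode_time_series`):
```
# Sketch: reload the cleaned CSV using the saved column-type metadata
with open(cleaned_cases_vertical_json_data_file) as f:
    clean_meta = json.load(f)
# Apply the same datetime workaround used at the top of this notebook
date_cols = [col for col, dtype in clean_meta.items() if dtype.startswith("datetime64")]
dtype_map = {col: ("object" if col in date_cols else dtype) for col, dtype in clean_meta.items()}
pd.read_csv(cleaned_cases_vertical_csv_data_file, dtype=dtype_map, parse_dates=date_cols).head()
```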
## Feather output
The [Feather](https://arrow.apache.org/docs/python/feather.html) file format supports
fast binary I/O over any data that can be represented using [Apache Arrow](https://arrow.apache.org/).
Feather files also include schema and type information.
```
# Also write out the nested data in Feather format so that downstream
# notebooks don't have to re-nest it.
# No Feather serialization support for Pandas indices currently, so convert
# the index on FIPS code to a normal column
cases_for_feather = cases.reset_index()
cases_for_feather.head()
# Write to Feather and make sure that reading back works too.
# Also write dates that go with the time series
dates_file = os.path.join(_OUTPUTS_DIR, "dates.feather")
cases_file = os.path.join(_OUTPUTS_DIR, "us_counties_clean.feather")
pd.DataFrame({"date": dates}).to_feather(dates_file)
cases_for_feather.to_feather(cases_file)
pd.read_feather(cases_file).head()
# Also make sure the dates can be read back in from a binary file
pd.read_feather(dates_file).head()
```
---
___
<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
___
# Scikit-learn Primer
**Scikit-learn** (http://scikit-learn.org/) is an open-source machine learning library for Python that offers a variety of regression, classification and clustering algorithms.
In this section we'll perform a fairly simple classification exercise with scikit-learn. In the next section we'll leverage the machine learning strength of scikit-learn to perform natural language classifications.
# Installation and Setup
### From the command line or terminal:
> `conda install scikit-learn`
> <br>*or*<br>
> `pip install -U scikit-learn`
Scikit-learn additionally requires that NumPy and SciPy be installed. For more info visit http://scikit-learn.org/stable/install.html
# Perform Imports and Load Data
For this exercise we'll be using the **SMSSpamCollection** dataset from [UCI datasets](https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection) that contains more than 5 thousand SMS phone messages.<br>You can check out the [**sms_readme**](../TextFiles/sms_readme.txt) file for more info.
The file is a [tab-separated-values](https://en.wikipedia.org/wiki/Tab-separated_values) (tsv) file with four columns:
> **label** - every message is labeled as either ***ham*** or ***spam***<br>
> **message** - the message itself<br>
> **length** - the number of characters in each message<br>
> **punct** - the number of punctuation characters in each message
```
import numpy as np
import pandas as pd
df = pd.read_csv('../TextFiles/smsspamcollection.tsv', sep='\t')
df.head()
len(df)
```
## Check for missing values:
Machine learning models usually require complete data.
```
df.isnull().sum()
```
## Take a quick look at the *ham* and *spam* `label` column:
```
df['label'].unique()
df['label'].value_counts()
```
<font color=green>We see that 4825 out of 5572 messages, or 86.6%, are ham.<br>This means that any machine learning model we create has to perform **better than 86.6%** to beat a baseline that simply predicts *ham* for every message.</font>
## Visualize the data:
Since we're not ready to do anything with the message text, let's see if we can predict ham/spam labels based on message length and punctuation counts. We'll look at message `length` first:
```
df['length'].describe()
```
<font color=green>This dataset is extremely skewed. The mean value is 80.5 and yet the max length is 910. Let's plot this on a logarithmic x-axis.</font>
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.xscale('log')
bins = 1.15**(np.arange(0,50))
plt.hist(df[df['label']=='ham']['length'],bins=bins,alpha=0.8)
plt.hist(df[df['label']=='spam']['length'],bins=bins,alpha=0.8)
plt.legend(('ham','spam'))
plt.show()
```
<font color=green>It looks like there's a small range of values where a message is more likely to be spam than ham.</font>
Now let's look at the `punct` column:
```
df['punct'].describe()
plt.xscale('log')
bins = 1.5**(np.arange(0,15))
plt.hist(df[df['label']=='ham']['punct'],bins=bins,alpha=0.8)
plt.hist(df[df['label']=='spam']['punct'],bins=bins,alpha=0.8)
plt.legend(('ham','spam'))
plt.show()
```
<font color=green>This looks even worse - there seem to be no values where one would pick spam over ham. We'll still try to build a machine learning classification model, but we should expect poor results.</font>
___
# Split the data into train & test sets:
If we wanted to divide the DataFrame into two smaller sets, we could use
> `train, test = train_test_split(df)`
For our purposes let's also set up our Features (X) and Labels (y). The Label is simple - we're trying to predict the `label` column in our data. For Features we'll use the `length` and `punct` columns. *By convention, **X** is capitalized and **y** is lowercase.*
## Selecting features
There are two ways to build a feature set from the columns we want. If the number of features is small, then we can pass those in directly:
> `X = df[['length','punct']]`
If the number of features is large, then it may be easier to drop the Label and any other unwanted columns:
> `X = df.drop(['label','message'], axis=1)`
These operations make copies of **df**, but do not change the original DataFrame in place. All the original data is preserved.
```
# Create Feature and Label sets
X = df[['length','punct']] # note the double set of brackets
y = df['label']
```
## Additional train/test/split arguments:
The default test size for `train_test_split` is 30%. Here we'll assign 33% of the data for testing.<br>
Also, we can set a `random_state` seed value to ensure that everyone uses the same "random" training & testing sets.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
print('Training Data Shape:', X_train.shape)
print('Testing Data Shape: ', X_test.shape)
```
Now we can pass these sets into a series of different training & testing algorithms and compare their results.
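To keep those comparisons consistent, a small convenience helper (a sketch, not part of the original lesson) can wrap the repeated fit/predict/report steps:
```
from sklearn import metrics

def fit_and_report(model, X_train, y_train, X_test, y_test):
    """Fit a classifier, then print its confusion matrix, classification report, and accuracy."""
    model.fit(X_train, y_train)
    predictions = model.predict(X_test)
    print(metrics.confusion_matrix(y_test, predictions))
    print(metrics.classification_report(y_test, predictions))
    print('Accuracy:', metrics.accuracy_score(y_test, predictions))
    return predictions
```
For example, `fit_and_report(LogisticRegression(solver='lbfgs'), X_train, y_train, X_test, y_test)` would reproduce the logistic regression results below.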
___
# Train a Logistic Regression classifier
One of the simplest multi-class classification tools is [logistic regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html). Scikit-learn offers a variety of algorithmic solvers; we'll use [L-BFGS](https://en.wikipedia.org/wiki/Limited-memory_BFGS).
```
from sklearn.linear_model import LogisticRegression
lr_model = LogisticRegression(solver='lbfgs')
lr_model.fit(X_train, y_train)
```
## Test the Accuracy of the Model
```
from sklearn import metrics
# Create a prediction set:
predictions = lr_model.predict(X_test)
# Print a confusion matrix
print(metrics.confusion_matrix(y_test,predictions))
# You can make the confusion matrix less confusing by adding labels:
df = pd.DataFrame(metrics.confusion_matrix(y_test,predictions), index=['ham','spam'], columns=['ham','spam'])
df
```
<font color=green>These results are terrible! More spam messages were confused as ham (241) than correctly identified as spam (5), although a relatively small number of ham messages (46) were confused as spam.</font>
```
# Print a classification report
print(metrics.classification_report(y_test,predictions))
# Print the overall accuracy
print(metrics.accuracy_score(y_test,predictions))
```
<font color=green>This model performed *worse* than a classifier that assigned all messages as "ham" would have!</font>
___
# Train a naïve Bayes classifier:
One of the most common - and successful - classifiers is [naïve Bayes](http://scikit-learn.org/stable/modules/naive_bayes.html#naive-bayes).
```
from sklearn.naive_bayes import MultinomialNB
nb_model = MultinomialNB()
nb_model.fit(X_train, y_train)
```
## Run predictions and report on metrics
```
predictions = nb_model.predict(X_test)
print(metrics.confusion_matrix(y_test,predictions))
```
<font color=green>The total number of confusions dropped from **287** to **256**. [241+46=287, 246+10=256]</font>
```
print(metrics.classification_report(y_test,predictions))
print(metrics.accuracy_score(y_test,predictions))
```
<font color=green>Better, but still less accurate than 86.6%</font>
___
# Train a support vector machine (SVM) classifier
Among the SVM options available, we'll use [C-Support Vector Classification (SVC)](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC)
```
from sklearn.svm import SVC
svc_model = SVC(gamma='auto')
svc_model.fit(X_train,y_train)
```
## Run predictions and report on metrics
```
predictions = svc_model.predict(X_test)
print(metrics.confusion_matrix(y_test,predictions))
```
<font color=green>The total number of confusions dropped even further to **209**.</font>
```
print(metrics.classification_report(y_test,predictions))
print(metrics.accuracy_score(y_test,predictions))
```
<font color=green>And finally we have a model that performs *slightly* better than the 86.6% all-*ham* baseline.</font>
Great! Now you should be able to load a dataset, divide it into training and testing sets, and perform simple analyses using scikit-learn.
## Next up: Feature Extraction from Text
---
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Gena/map_center_object.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Gena/map_center_object.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Gena/map_center_object.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Gena/map_center_object.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The magic command `%%capture` can be used to hide output from a specific cell.
```
# %%capture
# !pip install earthengine-api
# !pip install geehydro
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()`
if you are running this notebook for the first time or if you are getting an authentication error.
```
# ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
# get a single feature
countries = ee.FeatureCollection("USDOS/LSIB_SIMPLE/2017")
country = countries.filter(ee.Filter.eq('country_na', 'Ukraine'))
Map.addLayer(country, { 'color': 'orange' }, 'feature collection layer')
# TEST: center feature on a map
Map.centerObject(country, 6)
```
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
---
# Examples for the AbsComponent Class (v1.1)
```
%matplotlib inline
# suppress warnings for these examples
import warnings
warnings.filterwarnings('ignore')
# import
try:
import seaborn as sns; sns.set_style("white")
except:
pass
import numpy as np
from astropy.table import QTable
import astropy.units as u
from linetools.spectralline import AbsLine
from linetools.isgm import utils as ltiu
from linetools.analysis import absline as laa
from linetools.spectra import io as lsio
from linetools.isgm.abscomponent import AbsComponent
import linetools.analysis.voigt as lav
import imp
lt_path = imp.find_module('linetools')[1]
```
## Instantiate
### Standard
```
abscomp = AbsComponent((10.0*u.deg, 45*u.deg), (14,2), 1.0, [-300,300]*u.km/u.s)
abscomp
```
### From AbsLines
#### From one line
```
lya = AbsLine(1215.670*u.AA, z=2.92939)
lya.limits.set([-300.,300.]*u.km/u.s) # vlim
abscomp = AbsComponent.from_abslines([lya])
print(abscomp)
abscomp._abslines
```
#### From multiple
```
lyb = AbsLine(1025.7222*u.AA, z=lya.z)
lyb.limits.set([-300.,300.]*u.km/u.s) # vlim
abscomp = AbsComponent.from_abslines([lya,lyb])
print(abscomp)
abscomp._abslines
```
#### Define from QTable and make a spectrum model
```
# We first create a QTable with the most relevant information for defining AbsComponents
tab = QTable()
tab['ion_name'] = ['HI', 'HI']
tab['z_comp'] = [0.2, 0.15] # you should put the right redshifts here
tab['logN'] = [19., 19.] # you should put the right column densities here
tab['sig_logN'] = [0.1, 0.1] # you should put the right column density uncertainties here
tab['flag_logN'] = [1, 1] # Flags correspond to linetools notation
tab['RA'] = [0, 0]*u.deg # you should put the right coordinates here
tab['DEC'] = [0, 0]*u.deg # you should put the right coordinates here
tab['vmin'] = [-100, -100]*u.km/u.s # This correspond to the velocity lower limit for the absorption components
tab['vmax'] = [100, 100]*u.km/u.s # This correspond to the velocity upper limit for the absorption components
tab['b'] = [20, 20]*u.km/u.s # you should put the right Dopper parameters here
# We now use this table to create a list of AbsComponents
complist = ltiu.complist_from_table(tab)
# Now we need to add AbsLines to the component that are relevant for your spectrum
# This will be done by knowing the observed wavelength limits
wvlim = [1150, 1750]*u.AA
for comp in complist:
comp.add_abslines_from_linelist(llist='HI') # you can also use llist="ISM" if you have other non HI components
# Finally, we can create a model spectrum for each AbsCompontent
wv_array = np.arange(1150,1750, 0.01) * u.AA # This should match your spectrum wavelength array
model_1 = lav.voigt_from_components(wv_array, [complist[0]])
```
## Methods
### Generate a Component Table
```
lya.attrib['logN'] = 14.1
lya.attrib['sig_logN'] = 0.15
lya.attrib['flag_N'] = 1
laa.linear_clm(lya.attrib)
lyb.attrib['logN'] = 14.15
lyb.attrib['sig_logN'] = 0.19
lyb.attrib['flag_N'] = 1
laa.linear_clm(lyb.attrib)
abscomp = AbsComponent.from_abslines([lya,lyb])
comp_tbl = abscomp.build_table()
comp_tbl
```
### Synthesize multiple components
```
SiIItrans = ['SiII 1260', 'SiII 1304', 'SiII 1526']
SiIIlines = []
for trans in SiIItrans:
iline = AbsLine(trans, z=2.92939)
iline.attrib['logN'] = 12.8 + np.random.rand()
iline.attrib['sig_logN'] = 0.15
iline.attrib['flag_N'] = 1
iline.limits.set([-300.,50.]*u.km/u.s) # vlim
_,_ = laa.linear_clm(iline.attrib)
SiIIlines.append(iline)
SiIIcomp = AbsComponent.from_abslines(SiIIlines)
SiIIcomp.synthesize_colm()
SiIIlines2 = []
for trans in SiIItrans:
iline = AbsLine(trans, z=2.92939)
iline.attrib['logN'] = 13.3 + np.random.rand()
iline.attrib['sig_logN'] = 0.15
iline.attrib['flag_N'] = 1
iline.limits.set([50.,300.]*u.km/u.s) # vlim
_,_ = laa.linear_clm(iline.attrib)
SiIIlines2.append(iline)
SiIIcomp2 = AbsComponent.from_abslines(SiIIlines2)
SiIIcomp2.synthesize_colm()
abscomp.synthesize_colm()
[abscomp,SiIIcomp,SiIIcomp2]
synth_SiII = ltiu.synthesize_components([SiIIcomp,SiIIcomp2])
synth_SiII
```
### Generate multiple components from abslines
```
comps = ltiu.build_components_from_abslines([lya,lyb,SiIIlines[0],SiIIlines[1]])
comps
```
### Generate an Ion Table
```
tbl = ltiu.iontable_from_components([abscomp,SiIIcomp,SiIIcomp2])
tbl
```
### Stack plot
#### Load a spectrum
```
xspec = lsio.readspec(lt_path+'/spectra/tests/files/UM184_nF.fits')
lya.analy['spec'] = xspec
lyb.analy['spec'] = xspec
```
#### Show
```
abscomp = AbsComponent.from_abslines([lya,lyb])
abscomp.stack_plot()
```
---
```
"""
Update Parameters Here
"""
CONTRACT_ADDRESS = "0x9A534628B4062E123cE7Ee2222ec20B86e16Ca8F"
COLLECTION = "MekaVerse"
METHOD = "raritytools"
TOKEN_COL = "TOKEN_ID" # Use TOKEN_NAME if you prefer to infer token id from token name
NUMBERS_TO_CHECK = 50 # Number of tokens to search for opportunities
OPENSEA_API_KEY = "YOUR_API_KEY"
import time
import requests
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
# Define variables used throughout
RARITY_DATABASE = f"../metadata/rarity_data/{COLLECTION}_{METHOD}.csv"
ETHER_UNITS = 1e18
"""
Plot params
"""
plt.rcParams.update({"figure.facecolor": "white", "savefig.facecolor": "white"})
# Load rarity database and format
RARITY_DB = pd.read_csv(RARITY_DATABASE)
RARITY_DB = RARITY_DB[RARITY_DB["TOKEN_ID"].duplicated() == False]
if TOKEN_COL == "TOKEN_NAME":
RARITY_DB["TOKEN_ID"] = RARITY_DB["TOKEN_NAME"].str.split("#").str[1].astype(int)
"""
Get open bids from OpenSea and plot.
"""
def getOpenseaOrders(token_id, contract_address):
url = "https://api.opensea.io/wyvern/v1/orders"
querystring = {
"bundled": "false",
"include_bundled": "false",
"is_english": "false",
"include_invalid": "false",
"limit": "50",
"offset": "0",
"order_by": "created_date",
"order_direction": "desc",
"asset_contract_address": contract_address,
"token_ids": [token_id],
}
headers = {"Accept": "application/json", "X-API-KEY": OPENSEA_API_KEY}
response = requests.request("GET", url, headers=headers, params=querystring)
response_json = response.json()
return response_json
def plot_all_bids(bid_db):
series = []
max_listings = bid_db["token_ids"].value_counts().max()
for i in range(1, max_listings + 1):
n_bids = bid_db.groupby("token_ids").filter(lambda x: len(x) == i)
series.append(n_bids)
colors = iter(cm.rainbow(np.linspace(0, 1, len(series))))
for i in range(0, len(series)):
plt.scatter(
series[i]["ranks"], series[i]["bid"], color=next(colors), label=i + 1
)
plt.xlabel("rarity rank")
plt.ylabel("price (ETHER)")
plt.legend(loc="best")
plt.show()
def get_all_bids(rarity_db):
token_ids = []
ranks = []
bids = []
numbersToCheck = []
for x in rarity_db["TOKEN_ID"]:
numbersToCheck.append(x)
if len(numbersToCheck) == 15: # send 15 NFTs at a time to API
orders = getOpenseaOrders(numbersToCheck, CONTRACT_ADDRESS)
numbersToCheck = []
for order in orders["orders"]:
if order["side"] == 0:
tokenId = int(order["asset"]["token_id"])
token_ids.append(tokenId)
ranks.append(
float(rarity_db[rarity_db["TOKEN_ID"] == tokenId]["Rank"])
)
bids.append(float(order["base_price"]) / ETHER_UNITS)
bid_db = pd.DataFrame(columns=["token_ids", "ranks", "bid"])
bid_db["token_ids"] = token_ids
bid_db["ranks"] = ranks
bid_db["bid"] = bids
return bid_db
bid_db = get_all_bids(RARITY_DB.head(NUMBERS_TO_CHECK))
bid_db = bid_db.sort_values(by=["ranks"])
print(bid_db.set_index("token_ids").head(50))
plot_all_bids(bid_db)
"""
Get open offers from OpenSea and plot.
"""
def getOpenseaOrders(token_id, contract_address):
# gets orders, both bids and asks
# divide token_list into limit sized chunks and get output
url = "https://api.opensea.io/wyvern/v1/orders"
querystring = {
"bundled": "false",
"include_bundled": "false",
"is_english": "false",
"include_invalid": "false",
"limit": "50",
"offset": "0",
"order_by": "created_date",
"order_direction": "desc",
"asset_contract_address": contract_address,
"token_ids": [token_id],
}
headers = {"Accept": "application/json", "X-API-KEY": OPENSEA_API_KEY}
response = requests.request("GET", url, headers=headers, params=querystring)
responseJson = response.json()
return responseJson
def display_orders(rarity_db):
print("RANK TOKEN_ID PRICE URL")
numbersToCheck = []
for x in rarity_db["TOKEN_ID"]:
numbersToCheck.append(x)
if len(numbersToCheck) == 15:
orders = getOpenseaOrders(numbersToCheck, CONTRACT_ADDRESS)
numbersToCheck = []
time.sleep(2)
for order in orders["orders"]:
if order["side"] == 1:
tokenId = int(order["asset"]["token_id"])
price = float(order["current_price"]) / 1e18
if price <= 20:
current_order = dict()
current_order["RANK"] = str(
int(rarity_db[rarity_db["TOKEN_ID"] == tokenId]["Rank"])
)
current_order["TOKEN_ID"] = str(tokenId)
current_order["PRICE"] = str(price)
current_order[
"URL"
] = f"https://opensea.io/assets/{CONTRACT_ADDRESS}/{tokenId}"
str_to_print = ""
for x in ["RANK", "TOKEN_ID", "PRICE"]:
str_to_print += f"{current_order[x]}"
str_to_print += " " * (len(x) + 1 - len(current_order[x]))
str_to_print += current_order["URL"]
print(str_to_print)
display_orders(RARITY_DB.head(NUMBERS_TO_CHECK))
import numpy as np
A = -0.9
K = 1
B = 5
v = 1
Q = 1.1
C = 1
RARITY_DB["VALUE"] = A + (
(K - A) / np.power((C + Q * np.exp(-B * (1 / RARITY_DB["Rank"]))), 1 / v)
)
RARITY_DB["VALUE"] = np.where(RARITY_DB["Rank"] > 96 * 2, 0, RARITY_DB["VALUE"])
RARITY_DB[["Rank", "VALUE"]].sort_values("Rank").plot(
x="Rank", y="VALUE", figsize=(14, 7), logx=True, grid=True
)
plt.show()
RARITY_DB = RARITY_DB.sort_values("TOKEN_ID")
RARITY_DB.plot(x="TOKEN_ID", y="VALUE", grid=True, figsize=(14, 7))
RARITY_DB = RARITY_DB.sort_values("TOKEN_ID")
RARITY_DB["EXPANDING_VALUE"] = RARITY_DB["VALUE"].expanding().sum()
RARITY_DB.plot(x="TOKEN_ID", y="EXPANDING_VALUE", grid=True, figsize=(14, 7))
pd.set_option("display.max_rows", 100)
RARITY_DB.sort_values("Rank").head(96)
```
---
Notice: This notebook is not optimized for memory or performance yet. Please use it with caution when handling large datasets.
### Notice: Please ignore the feature engineering part if you are using a ready-made dataset
# Feature engineering
This notebook processes BDSE12_03G_HomeCredit_V2.csv for the bear LGBM final model.
### Prepare work environment
```
# Pandas for managing datasets
import numpy as np
import pandas as pd
np.__version__, pd.__version__
# math for operating numbers
import math
import gc
# Change pd displayg format for float
pd.options.display.float_format = '{:,.4f}'.format
# Matplotlib for additional customization
from matplotlib import pyplot as plt
%matplotlib inline
# Seaborn for plotting and styling
import seaborn as sns
#Seaborn set() to set aesthetic parameters in one step.
sns.set()
```
---
### Read & combine datasets
```
appl_all_df = pd.read_csv('../..//datasets/homecdt_fteng/BDSE12_03G_HomeCredit_V2.csv',index_col=0)
appl_all_df.info()
```
---
```
# appl_all_df.apply(lambda x:x.unique().size).describe()
appl_all_df['TARGET'].unique(), \
appl_all_df['TARGET'].unique().size
appl_all_df['TARGET'].value_counts()
appl_all_df['TARGET'].isnull().sum(), \
appl_all_df['TARGET'].size, \
(appl_all_df['TARGET'].isnull().sum()/appl_all_df['TARGET'].size).round(4)
# Make sure we can use the nullness of 'TARGET' column to separate train & test
# assert appl_all_df['TARGET'].isnull().sum() == appl_test_df.shape[0]
```
---
## Randomized sampleing:
#### If the dataset is too large, consider the following randomized sampling from the original dataset to facilitate development and testing
```
# Randomized sampling from original dataset.
# This is just for simplifying the development process
# After coding is complete, should replace all df-->df, and remove this cell
# Reference: https://yiidtw.github.io/blog/2018-05-29-how-to-shuffle-dataframe-in-pandas/
# df= appl_all_df.sample(n = 1000).reset_index(drop=True)
# df.shape
# df.head()
```
---
## Tool: Get numerical/categorical variables (columns) from a dataframe
```
def get_num_df (data_df, unique_value_threshold: int):
"""
Output: a new dataframe with columns of numerical variables from the input dataframe.
Input:
data_df: original dataframe,
unique_value_threshold(int): number of unique values of each column
e.g. If we define a column with > 3 unique values as being numerical variable, unique_value_threshold = 3
"""
num_mask = data_df.apply(lambda x:x.unique().size > unique_value_threshold,axis=0)
num_df = data_df[data_df.columns[num_mask]]
return num_df
def get_cat_df (data_df, unique_value_threshold: int):
"""
Output: a new dataframe with columns of categorical variables from the input dataframe.
Input:
data_df: original dataframe,
unique_value_threshold(int): number of unique values of each column
e.g. If we define a column with =<3 unique values as being numerical variable, unique_value_threshold = 3
"""
cat_mask = data_df.apply(lambda x:x.unique().size <= unique_value_threshold,axis=0)
cat_df = data_df[data_df.columns[cat_mask]]
return cat_df
# Be careful when doing this assertion with large datasets
# assert get_cat_df(appl_all_df, 3).columns.size + get_num_df(appl_all_df, 3).columns.size == appl_all_df.columns.size
```
---
#### Splitting id_target_df, cat_df, num_df
```
# Separate id and target columns before any further processing
id_target_df = appl_all_df.loc[:, ['SK_ID_CURR','TARGET']]
# Get the operating appl_all_df by removing id and target columns
appl_all_df_opr = appl_all_df.drop(['SK_ID_CURR','TARGET'], axis=1)
# A quick check of their shapes
appl_all_df.shape, id_target_df.shape, appl_all_df_opr.shape
# Spliting the numerical and categorical variable containing columns via the tools decribed above.
# Max identified unique value of categorical column 'ORGANIZATION_TYPE' = 58
cat_df = get_cat_df (appl_all_df_opr, 58)
num_df = get_num_df (appl_all_df_opr, 58)
cat_df.info()
num_df.info()
# A quick check of their shapes
appl_all_df_opr.shape, cat_df.shape, num_df.shape
assert cat_df.shape[1] + num_df.shape[1] + id_target_df.shape[1] \
== appl_all_df_opr.shape[1] + id_target_df.shape[1] \
== appl_all_df.shape[1]
assert cat_df.shape[0] == num_df.shape[0] == id_target_df.shape[0] \
== appl_all_df_opr.shape[0] \
== appl_all_df.shape[0]
# Apply the following gc if memory is running low
appl_all_df_opr.info()
appl_all_df.info()
del appl_all_df_opr
del appl_all_df
gc.collect()
```
---
## Dealing with categorical variables
#### Transform to String (i.e., python object) and fill nan with String 'nan'
```
cat_df_obj = cat_df.astype(str)
assert (cat_df_obj.dtypes == object).all()
# There should be no NAs left
assert (cat_df_obj.isnull().sum() == 0).all()
# The float nan will be transformed to the String 'nan'
# Use this assertion carefully when dealing with extra-large datasets
assert cat_df.isnull().equals(cat_df_obj.isin({'nan'}))
```
#### Dealing with special columns
Replace 'nan' with 'not specified' in column 'FONDKAPREMONT_MODE'
```
# Do the replacement and re-assign the modified column back to the original dataframe
cat_df_obj['FONDKAPREMONT_MODE'] = cat_df_obj['FONDKAPREMONT_MODE'].replace('nan','not specified')
# check the unique values again; the count should be 1 less than in the original cat_df
assert cat_df['FONDKAPREMONT_MODE'].unique().size == cat_df_obj['FONDKAPREMONT_MODE'].unique().size +1
# Apply the following gc if memory is running low
cat_df.info()
del cat_df
gc.collect()
```
#### Do one-hot encoding
Check the input dataframe (i.e., cat_df_obj)
```
cat_df_obj.shape
cat_df_obj.apply(lambda x:x.unique().size).sum()
# ?pd.get_dummies
# pd.get_dummies() method deals only with categorical variables.
# Although it has a built-in argument 'dummy_na' to manage the na value,
# our na values have already been converted to string objects, which the method no longer treats as NA.
# Let's just move forward as planned
cat_df_obj_ohe = pd.get_dummies(cat_df_obj, drop_first=True)
cat_df_obj_ohe.shape
# Make sure the ohe is successful
assert np.all(np.isin(cat_df_obj_ohe.values,[0,1])) == True
# cat_df_obj_ohe.dtypes
assert (cat_df_obj_ohe.dtypes == 'uint8').all()
# make sure the column counts are correct
assert cat_df_obj.apply(lambda x:x.unique().size).sum() == cat_df_obj_ohe.shape[1] + cat_df_obj.shape[1]
cat_df_obj_ohe.info()
# Apply the following gc if memory is running low
del cat_df_obj
gc.collect()
# %timeit np.isin(cat_df_obj_ohe.values,[0,1])
# # 1.86 s ± 133 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# %timeit cat_df_obj_ohe.isin([0 , 1])
# # 3.38 s ± 32.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# %timeit np.all(np.isin(cat_df_obj_ohe.values,[0,1]))
# # 1.85 s ± 28 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# %timeit np.all(cat_df_obj_ohe.isin([0 , 1]))
# # 3.47 s ± 193 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
---
## Dealing with numerical variables
#### Get na flags
```
num_df.shape
# How many columns contain na value.
num_df.isna().any().sum()
num_isna_df = num_df[num_df.columns[num_df.isna().any()]]
num_notna_df = num_df[num_df.columns[num_df.notna().all()]]
assert num_isna_df.shape[1] + num_notna_df.shape[1] == num_df.shape[1]
assert num_isna_df.shape[0] == num_notna_df.shape[0] == num_df.shape[0]
num_isna_df.shape, num_notna_df.shape
# num_df.isna().any(): column names for those na containing columns
# use it to transform values bool to int, and then add suffix on the column names to get the na-flag df
num_naFlag_df = num_isna_df.isna().astype(np.uint8).add_suffix('_na')
num_naFlag_df.info()
```
#### replace na with zero
```
num_isna_df = num_isna_df.fillna(0)
num_isna_df.shape
# How many columns contain na value.
num_isna_df.isna().any().sum()
num_isna_df.info()
assert num_isna_df.shape == num_naFlag_df.shape
num_df = pd.concat([num_notna_df,num_isna_df,num_naFlag_df], axis = 'columns')
assert num_notna_df.shape[1] + num_isna_df.shape[1] + num_naFlag_df.shape[1] == num_df.shape[1]
num_df.info(verbose=False)
# Apply the following gc if memory is running low
del num_notna_df
del num_isna_df
del num_naFlag_df
gc.collect()
```
---
#### Normalization (DO LATER!!)
##### Generally, in tree-based models, the scale of the features does not matter.
https://scikit-learn.org/stable/modules/preprocessing.html#normalization
https://datascience.stackexchange.com/questions/22036/how-does-lightgbm-deal-with-value-scale
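If normalization is added later, a minimal sketch (our illustration based on the links above, not part of the original pipeline) would standardize `num_df` only; the tree-based models used below are largely insensitive to feature scale, so this mainly matters if linear or distance-based models are added:
```
# Hypothetical sketch: z-score scaling of the numerical columns with scikit-learn
from sklearn.preprocessing import StandardScaler
import pandas as pd

scaler = StandardScaler()
num_df_scaled = pd.DataFrame(scaler.fit_transform(num_df),
                             columns=num_df.columns,
                             index=num_df.index)
```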
---
## Combine to a complete, processed dataset
```
frames = [id_target_df, cat_df_obj_ohe, num_df]
id_target_df.shape, cat_df_obj_ohe.shape, num_df.shape
appl_all_processed_df = pd.concat(frames, axis ='columns')
appl_all_processed_df.shape
assert appl_all_processed_df.shape[1] == id_target_df.shape[1] + cat_df_obj_ohe.shape[1] + num_df.shape[1]
appl_all_processed_df.info()
# Apply the following gc if memory is running low
del id_target_df
del cat_df_obj_ohe
del num_df
gc.collect()
```
---
## Export to CSV
```
# Export the dataframe to csv for future use
appl_all_processed_df.to_csv('../../datasets/homecdt_fteng/ss_fteng_fromBDSE12_03G_HomeCredit_V2_20200204a.csv', index = False)
# Export the dtypes Series to csv for future use
appl_all_processed_df.dtypes.to_csv('../../datasets/homecdt_fteng/ss_fteng_fromBDSE12_03G_HomeCredit_V2_20200204a_dtypes_series.csv')
```
---
## Interface connecting fteng & model parts
```
# Assign appl_all_processed_df to final_df for follow-up modeling
final_df = appl_all_processed_df
# Apply the following gc if memory is running low
del appl_all_processed_df
gc.collect()
final_df.columns = ["".join (c if c.isalnum() else "_" for c in str(x)) for x in final_df.columns]
final_df.info()
```
---
## Modeling part. If using a ready dataset, please start here
```
# Reading the saved dtypes Series
final_df_dtypes = \
pd.read_csv('../../datasets/homecdt_fteng/ss_fteng_fromBDSE12_03G_HomeCredit_V2_20200204a_dtypes_series.csv'\
, header=None, index_col=0, squeeze=True)
final_df_dtypes.index.name = None
final_df_dtypes = final_df_dtypes.to_dict()
final_df = \
pd.read_csv('../../datasets/homecdt_fteng/ss_fteng_fromBDSE12_03G_HomeCredit_V2_20200204a.csv'\
, dtype= final_df_dtypes)
final_df.columns = ["".join (c if c.isalnum() else "_" for c in str(x)) for x in final_df.columns]
final_df.info()
```
The following is based on 'bear_Final_model' released 2020/01/23
```
# Forked from excellent kernel : https://www.kaggle.com/jsaguiar/updated-0-792-lb-lightgbm-with-simple-features
# From Kaggler : https://www.kaggle.com/jsaguiar
# Just added a few features so I thought I had to release it as well...
import numpy as np
import pandas as pd
import gc
import time
from contextlib import contextmanager
import lightgbm as lgb
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import KFold, StratifiedKFold
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import csv
lgb.__version__
print(final_df['TARGET'].isna().sum(),
final_df['TARGET'].dtypes)
```
# LightGBM Model
```
@contextmanager
def timer(title):
t0 = time.time()
yield
print("{} - done in {:.0f}s".format(title, time.time() - t0))
def kfold_lightgbm(df, num_folds = 5, stratified = True, debug= False, boosting_type= 'goss', epoch=20000, early_stop=200):
# Divide in training/validation and test data
train_df = df[df['TARGET'].notnull()]
test_df = df[df['TARGET'].isnull()]
print("Starting LightGBM goss. Train shape: {}, test shape: {}".format(train_df.shape, test_df.shape))
del df
gc.collect()
# Cross validation model
if stratified:
folds = StratifiedKFold(n_splits= num_folds, shuffle=True, random_state=924)
else:
folds = KFold(n_splits= num_folds, shuffle=True, random_state=924)
# Create arrays and dataframes to store results
oof_preds = np.zeros(train_df.shape[0])
sub_preds = np.zeros(test_df.shape[0])
feature_importance_df = pd.DataFrame()
feats = [f for f in train_df.columns if f not in ['TARGET','SK_ID_CURR','SK_ID_BUREAU','SK_ID_PREV','index']]
for n_fold, (train_idx, valid_idx) in enumerate(folds.split(train_df[feats], train_df['TARGET'])):
dtrain = lgb.Dataset(data=train_df[feats].iloc[train_idx],
label=train_df['TARGET'].iloc[train_idx],
free_raw_data=False, silent=True)
dvalid = lgb.Dataset(data=train_df[feats].iloc[valid_idx],
label=train_df['TARGET'].iloc[valid_idx],
free_raw_data=False, silent=True)
# LightGBM parameters found by Bayesian optimization
# {'learning_rate': 0.027277797382058662,
# 'max_bin': 252.71833139557864,
# 'max_depth': 19.94051833524931,
# 'min_child_weight': 20.868586608046186,
# 'min_data_in_leaf': 68.98157854879867,
# 'min_split_gain': 0.04938251335634182,
# 'num_leaves': 23.027556285612434,
# 'reg_alpha': 0.9107785355990146,
# 'reg_lambda': 0.15418005208807806,
# 'subsample': 0.7997032951619153}
params = {
'objective': 'binary',
'boosting_type': boosting_type,
'nthread': 4,
'learning_rate': 0.0272778, # 02,
'num_leaves': 23, #20,33
'tree_learner': 'voting',
'colsample_bytree': 0.9497036,
            # 'subsample': 0.8715623,  # duplicate 'subsample' key; the value further below takes effect
'subsample_freq': 0,
'max_depth': 20, #8,7
'reg_alpha': 0.9107785,
'reg_lambda': 0.1541800,
'subsample': 0.7997033,
'min_split_gain': 0.0493825,
'min_data_in_leaf': 69, # ss add
'min_child_weight': 49, # 60,39
'seed': 924,
'verbose': 2000,
'metric': 'auc',
'max_bin': 253,
# 'histogram_pool_size': 20480
# 'device' : 'gpu',
# 'gpu_platform_id': 0,
# 'gpu_device_id':0
}
clf = lgb.train(
params=params,
train_set=dtrain,
num_boost_round=epoch,
valid_sets=[dtrain, dvalid],
early_stopping_rounds=early_stop,
verbose_eval=2000
)
oof_preds[valid_idx] = clf.predict(dvalid.data)
sub_preds += clf.predict(test_df[feats]) / folds.n_splits
fold_importance_df = pd.DataFrame()
fold_importance_df["feature"] = feats
fold_importance_df["importance"] = clf.feature_importance(importance_type='gain')
fold_importance_df["fold"] = n_fold + 1
feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)
print('Fold %2d AUC : %.6f' % (n_fold + 1, roc_auc_score(dvalid.label, oof_preds[valid_idx])))
del clf, dtrain, dvalid
gc.collect()
print('Full AUC score %.6f' % roc_auc_score(train_df['TARGET'], oof_preds))
# Write submission file and plot feature importance
if not debug:
sub_df = test_df[['SK_ID_CURR']].copy()
sub_df['TARGET'] = sub_preds
sub_df[['SK_ID_CURR', 'TARGET']].to_csv('homecdt_submission_LGBM.csv', index= False)
display_importances(feature_importance_df)
return feature_importance_df
# Display/plot feature importance
def display_importances(feature_importance_df_):
cols = feature_importance_df_[["feature", "importance"]].groupby("feature").mean().sort_values(by="importance", ascending=False)[:40].index
best_features = feature_importance_df_.loc[feature_importance_df_.feature.isin(cols)]
plt.figure(figsize=(8, 10))
sns.barplot(x="importance", y="feature", data=best_features.sort_values(by="importance", ascending=False))
plt.title('LightGBM Features (avg over folds)')
    plt.tight_layout()
plt.savefig('lgbm_importances01.png')
```
## boosting_type:goss
```
init_time = time.time()
kfold_lightgbm(final_df,10)
print("Elapsed time={:5.2f} sec.".format(time.time() - init_time))
init_time = time.time()
kfold_lightgbm(final_df,10)
print("Elapsed time={:5.2f} sec.".format(time.time() - init_time))
```
## boosting_type:gbdt
```
# init_time = time.time()
# kfold_lightgbm(final_df, 10, boosting_type= 'gbdt')
# print("Elapsed time={:5.2f} sec.".format(time.time() - init_time))
```
## boosting_type:dart
```
# init_time = time.time()
# kfold_lightgbm(final_df,10, boosting_type= 'dart')
# print("Elapsed time={:5.2f} sec.".format(time.time() - init_time))
```
## boosting_type:rf
```
# init_time = time.time()
# kfold_lightgbm(final_df,10,boosting_type= 'rf')
# print("Elapsed time={:5.2f} sec.".format(time.time() - init_time))
```
# XGBoost Model
```
from numba import cuda
cuda.select_device(0)
cuda.close()
import numpy as np
import pandas as pd
import gc
import time
from contextlib import contextmanager
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import KFold, StratifiedKFold
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import pickle
def kfold_xgb(df, num_folds, stratified = True, debug= False):
# Divide in training/validation and test data
train_df = df[df['TARGET'].notnull()]
test_df = df[df['TARGET'].isnull()]
print("Starting XGBoost. Train shape: {}, test shape: {}".format(train_df.shape, test_df.shape))
del df
gc.collect()
# Cross validation model
if stratified:
folds = StratifiedKFold(n_splits= num_folds, shuffle=True, random_state=1054)
else:
folds = KFold(n_splits= num_folds, shuffle=True, random_state=1054)
# Create arrays and dataframes to store results
oof_preds = np.zeros(train_df.shape[0])
sub_preds = np.zeros(test_df.shape[0])
feature_importance_df = pd.DataFrame()
feats = [f for f in train_df.columns if f not in ['TARGET','SK_ID_CURR','SK_ID_BUREAU','SK_ID_PREV','index']]
for n_fold, (train_idx, valid_idx) in enumerate(folds.split(train_df[feats], train_df['TARGET'])):
#if n_fold == 0: # REmove for full K-fold run
cuda.select_device(0)
cuda.close()
train_x, train_y = train_df[feats].iloc[train_idx], train_df['TARGET'].iloc[train_idx]
valid_x, valid_y = train_df[feats].iloc[valid_idx], train_df['TARGET'].iloc[valid_idx]
clf = XGBClassifier(learning_rate =0.01,
n_estimators=5000,
max_depth=4,
min_child_weight=5,
# tree_method='gpu_hist',
subsample=0.8,
colsample_bytree=0.8,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=2.5,
seed=28,
reg_lambda = 1.2)
# clf = pickle.load(open('test.pickle','rb'))
cuda.select_device(0)
cuda.close()
clf.fit(train_x, train_y, eval_set=[(train_x, train_y), (valid_x, valid_y)],
eval_metric= 'auc', verbose= 1000, early_stopping_rounds= 200)
cuda.select_device(0)
cuda.close()
oof_preds[valid_idx] = clf.predict_proba(valid_x)[:, 1]
sub_preds += clf.predict_proba(test_df[feats])[:, 1] # / folds.n_splits # - Uncomment for K-fold
fold_importance_df = pd.DataFrame()
fold_importance_df["feature"] = feats
fold_importance_df["importance"] = clf.feature_importances_
fold_importance_df["fold"] = n_fold + 1
feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)
print('Fold %2d AUC : %.6f' % (n_fold + 1, roc_auc_score(valid_y, oof_preds[valid_idx])))
del clf, train_x, train_y, valid_x, valid_y
gc.collect()
np.save("xgb_oof_preds_1", oof_preds)
np.save("xgb_sub_preds_1", sub_preds)
cuda.select_device(0)
cuda.close()
clf = pickle.load(open('test.pickle','rb'))
# print('Full AUC score %.6f' % roc_auc_score(train_df['TARGET'], oof_preds))
# Write submission file and plot feature importance
if not debug:
test_df['TARGET'] = sub_preds
test_df[['SK_ID_CURR', 'TARGET']].to_csv('submission_XGBoost_GPU.csv', index= False)
#display_importances(feature_importance_df)
#return feature_importance_df
# Display/plot feature importance
def display_importances(feature_importance_df_):
cols = feature_importance_df_[["feature", "importance"]].groupby("feature").mean().sort_values(by="importance", ascending=False)[:40].index
best_features = feature_importance_df_.loc[feature_importance_df_.feature.isin(cols)]
plt.figure(figsize=(8, 10))
sns.barplot(x="importance", y="feature", data=best_features.sort_values(by="importance", ascending=False))
plt.title('XGBoost Features (avg over folds)')
plt.tight_layout()
plt.savefig('xgb_importances02.png')
init_time = time.time()
kfold_xgb(final_df, 5)
print("Elapsed time={:5.2f} sec.".format(time.time() - init_time))
```
---
Below not executed
## Balance the 'TARGET' column
```
appl_all_processed_df['TARGET'].value_counts()
balanceFactor = ((appl_all_processed_df['TARGET'].value_counts()[0])/(appl_all_processed_df['TARGET'].value_counts()[1])).round(0).astype(int)
balanceFactor
# appl_all_processed_df['TARGET'].value_counts()[0]
# appl_all_processed_df['TARGET'].value_counts()[1]
default_df = appl_all_processed_df[appl_all_processed_df['TARGET']==1]
default_df.shape
default_df_balanced = pd.concat( [default_df] * (balanceFactor - 1), sort=False, ignore_index=True )
default_df_balanced.shape
appl_all_processed_df_balanced = pd.concat([appl_all_processed_df , default_df_balanced], sort=False, ignore_index=True)
appl_all_processed_df_balanced.shape
(appl_all_processed_df_balanced['TARGET'].unique(),
(appl_all_processed_df_balanced['TARGET'].value_counts()[1], \
appl_all_processed_df_balanced['TARGET'].value_counts()[0], \
appl_all_processed_df_balanced['TARGET'].isnull().sum()))
# Apply the following gc if memory is running low
del appl_all_processed_df_balanced
gc.collect()
```
---
---
# Todo
Todo:
* cleaning:
* num_df: normalize with z-score
* feature engineering:
  * make reciprocal and polynomial columns from the existing columns, e.g. 1/x, x^2 (see the sketch after the links below).
  * multiply columns pairwise (two columns at a time).
  * asset items, income items, willingness (history + misc profile) items, loading (principal + interest) items
* Integration from other tables?
https://ithelp.ithome.com.tw/articles/10202059
https://stackoverflow.com/questions/26414913/normalize-columns-of-pandas-data-frame
https://www.kaggle.com/parasjindal96/how-to-normalize-dataframe-pandas
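A minimal sketch of the feature-engineering Todo items above (the column names `AMT_INCOME_TOTAL` and `AMT_CREDIT` are placeholders for whichever numeric columns are chosen; this is an illustration, not part of the original pipeline):
```
# Hypothetical sketch: reciprocal, squared, and pairwise-product columns
import numpy as np
import pandas as pd

def add_derived_features(num_df, col_a, col_b):
    out = num_df.copy()
    # reciprocal of one chosen column (guard against division by zero)
    out[col_a + '_recip'] = 1.0 / num_df[col_a].replace(0, np.nan)
    # simple polynomial term
    out[col_a + '_sq'] = num_df[col_a] ** 2
    # pairwise product of two chosen columns
    out[col_a + '_x_' + col_b] = num_df[col_a] * num_df[col_b]
    return out

# e.g. add_derived_features(num_df, 'AMT_INCOME_TOTAL', 'AMT_CREDIT')
```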
---
## EDA
### Quick check for numerical columns
```
numcol = df['CNT_FAM_MEMBERS']
numcol.describe(), \
numcol.isnull().sum(), \
numcol.size
numcol.value_counts(sort=True), numcol.unique().size
# numcol_toYear = pd.to_numeric(\
# ((numcol.abs() / 365) \
# .round(0)) \
# ,downcast='integer')
# numcol_toYear.describe()
# numcol_toYear.value_counts(sort=True), numcol_toYear.unique().size
```
### Quick check for categorical columns
```
catcol = df['HOUR_APPR_PROCESS_START']
catcol.unique(), \
catcol.unique().size
catcol.value_counts(sort=True)
catcol.isnull().sum(), \
catcol.size
```
## Appendix
### Tool: Getting summary dataframe
```
# might not be very useful at this point
def summary_df (data_df):
"""
Output: a new dataframe with summary info from the input dataframe.
Input: data_df, the original dataframe
"""
summary_df = pd.concat([(data_df.describe(include='all')), \
(data_df.dtypes.to_frame(name='dtypes').T), \
(data_df.isnull().sum().to_frame(name='isnull').T), \
(data_df.apply(lambda x:x.unique().size).to_frame(name='uniqAll').T)])
return summary_df
def data_quality_df (data_df):
"""
Output: a new dataframe with summary info from the input dataframe.
Input: data_df, the original dataframe
"""
data_quality_df = pd.concat([(data_df.describe(include='all')), \
(data_df.dtypes.to_frame(name='dtypes').T), \
(data_df.isnull().sum().to_frame(name='isnull').T), \
(data_df.apply(lambda x:x.unique().size).to_frame(name='uniqAll').T)])
return data_quality_df.iloc[[11,13,12,0,],:]
data_quality_df(appl_all_df)
# df.to_csv(file_name, encoding='utf-8', index=False)
# data_quality_df(df).to_csv("./eda_output/application_train_data_quality.csv")
df['CNT_CHILDREN'].value_counts()
df['CNT_CHILDREN'].value_counts().sum()
df.describe()
summary_df(df)
# df.to_csv(file_name, encoding='utf-8', index=False)
# summary_df(df).to_csv("./eda_output/application_train_summary_df.csv")
```
---
### .nunique() function
```
# nunique() function excludes NaN
# i.e. it does not consider NaN as a "value", therefore NaN is not counted as a "unique value"
df.nunique()
df.nunique() == df.apply(lambda x:x.unique().shape[0])
df['AMT_REQ_CREDIT_BUREAU_YEAR'].unique().shape[0]
df['AMT_REQ_CREDIT_BUREAU_YEAR'].nunique()
df['AMT_REQ_CREDIT_BUREAU_YEAR'].unique().size
```
### .value_counts() function
```
# .value_counts() function has similar viewpoint towards NaN.
# i.e. it does not consider null as a value, therefore not counted in .value_counts()
df['NAME_TYPE_SUITE'].value_counts()
df['AMT_REQ_CREDIT_BUREAU_YEAR'].isnull().sum()
df['AMT_REQ_CREDIT_BUREAU_YEAR'].size
df['AMT_REQ_CREDIT_BUREAU_YEAR'].value_counts().sum() + df['AMT_REQ_CREDIT_BUREAU_YEAR'].isnull().sum() == \
df['AMT_REQ_CREDIT_BUREAU_YEAR'].size
```
### Duplicate values
```
# Counting unique values (cf. .nunique() function, see above section)
# This code was retrieved from HT
df.apply(lambda x:x.unique().shape[0])
# It is the same if you write (df.apply(lambda x:x.unique().size))
assert (df.apply(lambda x:x.unique().shape[0])==df.apply(lambda x:x.unique().size)).all()
# # %timeit showed the performances are similar
# %timeit df.apply(lambda x:x.unique().shape[0])
# %timeit df.apply(lambda x:x.unique().size)
```
### Null values
```
# Proportion of columns that contain null values
print(f"{df.isnull().any().sum()} of {df.shape[1]} columns (ratio: {(df.isnull().any().sum()/df.shape[1]).round(2)}) have empty value(s)")
```
---
## re-casting to reduce memory use (beta)
```
# np.isfinite(num_df).all().value_counts()
# num_df_finite = num_df[num_df.columns[np.isfinite(num_df).all()]]
# num_df_infinite = num_df[num_df.columns[np.isfinite(num_df).all() == False]]
# num_df_finite.shape, num_df_infinite.shape
# assert num_df_finite.shape[0] == num_df_infinite.shape[0] == num_df.shape[0]
# assert num_df_finite.shape[1] + num_df_infinite.shape[1] == num_df.shape[1]
# def reduce_mem_usage(props, finite:bool = True):
# props.info(verbose=False)
# start_mem_usg = props.memory_usage().sum() / 1024**2
# print("Memory usage of properties dataframe is :",start_mem_usg," MB")
# if finite == True:
# props[props.columns[(props.min()>=0) & (props.max()<255)]] = \
# props[props.columns[(props.min()>=0) & (props.max()<255)]].astype(np.uint8, copy=False)
# props.info(verbose=False)
# props[props.columns[(props.min()>=0) &(props.max() >= 255) & (props.max()<65535)]] = \
# props[props.columns[(props.min()>=0) &(props.max() >= 255) & (props.max()<65535)]] \
# .astype(np.uint16, copy=False)
# props.info(verbose=False)
# props[props.columns[(props.min()>=0) &(props.max() >= 65535) & (props.max()<4294967295)]] = \
# props[props.columns[(props.min()>=0) &(props.max() >= 65535) & (props.max()<4294967295)]] \
# .astype(np.uint32, copy=False)
# props.info(verbose=False)
# props[props.columns[(props.min()>=0) &(props.max() >= 4294967295)]] = \
# props[props.columns[(props.min()>=0) &(props.max() >= 4294967295)]] \
# .astype(np.uint64, copy=False)
# props.info(verbose=False)
# else:
# props = props.astype(np.float32, copy=False)
# props.info(verbose=False)
# print("___MEMORY USAGE AFTER COMPLETION:___")
# mem_usg = props.memory_usage().sum() / 1024**2
# print("Memory usage is: ",mem_usg," MB")
# print("This is ",100*mem_usg/start_mem_usg,"% of the initial size")
# return props
# if num_na_df_finite.min()>=0:
# if num_na_df_finite.max() < 255:
# props[col] = props[col].astype(np.uint8)
# elif num_na_df_finite.max() < 65535:
# props[col] = props[col].astype(np.uint16)
# elif num_na_df_finite.max() < 4294967295:
# props[col] = props[col].astype(np.uint32)
# else:
# props[col] = props[col].astype(np.uint64)
# num_df_finite.info()
# num_df_finite = reduce_mem_usage(num_df_finite, finite = True)
# num_df_infinite.info()
# num_df_infinite = reduce_mem_usage(num_df_infinite, finite = False)
# num_df = pd.concat([num_df_finite, num_df_infinite], axis ='columns')
# num_df.info()
# assert num_df_finite.shape[0] == num_df_infinite.shape[0] == num_df.shape[0]
# assert num_df_finite.shape[1] + num_df_infinite.shape[1] == num_df.shape[1]
# del num_df_finite
# del num_df_infinite
# gc.collect()
```
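For reference, a compact alternative to the commented-out `reduce_mem_usage` above is pandas' built-in downcasting; this is a sketch on our part (not code tested against this dataset), with `num_df` standing for any all-numeric dataframe:
```
# Hypothetical sketch: downcast numeric columns to the smallest safe dtype
import pandas as pd

def downcast_numeric(df):
    out = df.copy()
    for col in out.columns:
        if pd.api.types.is_integer_dtype(out[col]):
            downcast = 'unsigned' if (out[col] >= 0).all() else 'integer'
            out[col] = pd.to_numeric(out[col], downcast=downcast)
        elif pd.api.types.is_float_dtype(out[col]):
            out[col] = pd.to_numeric(out[col], downcast='float')
    return out

# before = num_df.memory_usage(deep=True).sum() / 1024**2
# num_df = downcast_numeric(num_df)
# print(before, "MB ->", num_df.memory_usage(deep=True).sum() / 1024**2, "MB")
```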
THE END
# Batch Prediction
## 1. Download demo data
```
cd PhaseNet
wget https://github.com/wayneweiqiang/PhaseNet/releases/download/test_data/test_data.zip
unzip test_data.zip
```
## 2. Run batch prediction
PhaseNet currently supports four data formats: numpy, hdf5, mseed, and sac
- For numpy format:
~~~bash
python phasenet/predict.py --model=model/190703-214543 --data_list=test_data/npz.csv --data_dir=test_data/npz --format=numpy --plot_figure
~~~
- For hdf5 format:
~~~bash
python phasenet/predict.py --model=model/190703-214543 --hdf5_file=test_data/data.h5 --hdf5_group=data --format=hdf5
~~~
- For mseed format:
~~~bash
python phasenet/predict.py --model=model/190703-214543 --data_list=test_data/mseed.csv --data_dir=test_data/mseed --format=mseed
~~~
- For sac format:
~~~bash
python phasenet/predict.py --model=model/190703-214543 --data_list=test_data/sac.csv --data_dir=test_data/sac --format=sac
~~~
- For mseed file of an array of stations (used by [QuakeFlow](https://github.com/wayneweiqiang/QuakeFlow)):
~~~bash
python phasenet/predict.py --model=model/190703-214543 --data_list=test_data/mseed_array.csv --data_dir=test_data/mseed_array --stations=test_data/stations.csv --format=mseed_array --amplitude
~~~
Optional arguments:
```
usage: predict.py [-h] [--batch_size BATCH_SIZE] [--model_dir MODEL_DIR]
[--data_dir DATA_DIR] [--data_list DATA_LIST]
[--hdf5_file HDF5_FILE] [--hdf5_group HDF5_GROUP]
[--result_dir RESULT_DIR] [--result_fname RESULT_FNAME]
[--min_p_prob MIN_P_PROB] [--min_s_prob MIN_S_PROB]
[--mpd MPD] [--amplitude] [--format FORMAT]
[--s3_url S3_URL] [--stations STATIONS] [--plot_figure]
[--save_prob]
optional arguments:
-h, --help show this help message and exit
--batch_size BATCH_SIZE
batch size
--model_dir MODEL_DIR
Checkpoint directory (default: None)
--data_dir DATA_DIR Input file directory
--data_list DATA_LIST
Input csv file
--hdf5_file HDF5_FILE
Input hdf5 file
--hdf5_group HDF5_GROUP
data group name in hdf5 file
--result_dir RESULT_DIR
Output directory
--result_fname RESULT_FNAME
Output file
--min_p_prob MIN_P_PROB
Probability threshold for P pick
--min_s_prob MIN_S_PROB
Probability threshold for S pick
--mpd MPD Minimum peak distance
--amplitude if return amplitude value
--format FORMAT input format
--s3_url S3_URL s3 url
--stations STATIONS seismic station info
--plot_figure If plot figure for test
--save_prob If save result for test
```
## 3. Read P/S picks
PhaseNet currently outputs two formats: **CSV** and **JSON**
```
import pandas as pd
import json
import os
PROJECT_ROOT = os.path.realpath(os.path.join(os.path.abspath(''), ".."))
picks_csv = pd.read_csv(os.path.join(PROJECT_ROOT, "results/picks.csv"), sep="\t")
picks_csv.loc[:, 'p_idx'] = picks_csv["p_idx"].apply(lambda x: x.strip("[]").split(","))
picks_csv.loc[:, 'p_prob'] = picks_csv["p_prob"].apply(lambda x: x.strip("[]").split(","))
picks_csv.loc[:, 's_idx'] = picks_csv["s_idx"].apply(lambda x: x.strip("[]").split(","))
picks_csv.loc[:, 's_prob'] = picks_csv["s_prob"].apply(lambda x: x.strip("[]").split(","))
print(picks_csv.iloc[1])
print(picks_csv.iloc[0])
with open(os.path.join(PROJECT_ROOT, "results/picks.json")) as fp:
picks_json = json.load(fp)
print(picks_json[1])
print(picks_json[0])
```
# Multithreading and Multiprocessing
Recall the phrase "many hands make light work". This is as true in programming as anywhere else.
What if you could engineer your Python program to do four things at once? What would normally take an hour could (almost) take one fourth the time.<font color=green>\*</font>
This is the idea behind parallel processing, or the ability to set up and run multiple tasks concurrently.
<br><font color=green>\* *We say almost, because you do have to take time setting up four processors, and it may take time to pass information between them.*</font>
## Threading vs. Processing
A good illustration of threading vs. processing would be to download an image file and turn it into a thumbnail.
The first part, communicating with an outside source to download a file, involves a thread. Once the file is obtained, the work of converting it involves a process. Essentially, two factors determine how long this will take: the input/output speed of the network communication, or I/O, and the available processor, or CPU.
#### I/O-intensive processes improved with multithreading:
* webscraping
* reading and writing to files
* sharing data between programs
* network communications
#### CPU-intensive processes improved with multiprocessing:
* computations
* text formatting
* image rescaling
* data analysis
## Multithreading Example: Webscraping
Historically, the programming knowledge required to set up multithreading was beyond the scope of this course, as it involved a good understanding of Python's Global Interpreter Lock (the GIL prevents multiple threads from running the same Python code at once). Also, you had to set up special classes that behave like Producers to divvy up the work, Consumers (aka "workers") to perform the work, and a Queue to hold tasks and provide communications. And that was just the beginning.
Fortunately, we've already learned one of the most valuable tools we'll need – the `map()` function. When we apply it using two standard libraries, *multiprocessing* and *multiprocessing.dummy*, setting up parallel processes and threads becomes fairly straightforward.
Here's a classic multithreading example provided by [IBM](http://www.ibm.com/developerworks/aix/library/au-threadingpython/) and adapted by [Chris Kiehl](http://chriskiehl.com/article/parallelism-in-one-line/) where you divide the task of retrieving web pages across multiple threads:
import time
import threading
import Queue
import urllib2
class Consumer(threading.Thread):
def __init__(self, queue):
threading.Thread.__init__(self)
self._queue = queue
def run(self):
while True:
content = self._queue.get()
if isinstance(content, str) and content == 'quit':
break
response = urllib2.urlopen(content)
print 'Thanks!'
def Producer():
urls = [
'http://www.python.org', 'http://www.yahoo.com'
'http://www.scala.org', 'http://www.google.com'
# etc..
]
queue = Queue.Queue()
worker_threads = build_worker_pool(queue, 4)
start_time = time.time()
# Add the urls to process
for url in urls:
queue.put(url)
# Add the poison pill
for worker in worker_threads:
queue.put('quit')
for worker in worker_threads:
worker.join()
print 'Done! Time taken: {}'.format(time.time() - start_time)
def build_worker_pool(queue, size):
workers = []
for _ in range(size):
worker = Consumer(queue)
worker.start()
workers.append(worker)
return workers
if __name__ == '__main__':
Producer()
Using the multithreading library provided by the *multiprocessing.dummy* module and `map()` all of this becomes:
from urllib.request import urlopen  # this was urllib2 in the original Python 2 version
from multiprocessing.dummy import Pool as ThreadPool
pool = ThreadPool(4) # choose a number of workers
urls = [
    'http://www.python.org', 'http://www.yahoo.com',
    'http://www.scala.org', 'http://www.google.com'
    # etc..
    ]
results = pool.map(urlopen, urls)
pool.close()
pool.join()
In the above code, the *multiprocessing.dummy* module provides the parallel threads, and `map(urlopen, urls)` assigns the labor!
## Multiprocessing Example: Monte Carlo
Let's code out an example to see how the parts fit together. We can time our results using the *timeit* module to measure any performance gains. Our task is to apply the Monte Carlo Method to estimate the value of Pi.
### Monte Carlo Method and Estimating Pi
If you draw a circle of radius 1 (a unit circle) and enclose it in a square, the areas of the two shapes are given as
<table>
<caption>Area Formulas</caption>
<tr><td>circle</td><td>$$πr^2$$</td></tr>
<tr><td>square</td><td>$$4 r^2$$</td></tr>
</table>
Therefore, the ratio of the area of the circle to the area of the square is $$\frac{π}{4}$$
The Monte Carlo Method plots a series of random points inside the square. By comparing the number that fall within the circle to those that fall outside, with a large enough sample we should have a good approximation of Pi. You can see a good demonstration of this [here](https://academo.org/demos/estimating-pi-monte-carlo/) (Hit the **Animate** button on the page).
For a given number of points *n*, we have $$π = \frac{4 \cdot points\ inside\ circle}{total\ points\ n}$$
To set up our multiprocessing program, we first derive a function for finding Pi that we can pass to `map()`:
```
from random import random # perform this import outside the function
def find_pi(n):
"""
Function to estimate the value of Pi
"""
inside=0
for i in range(0,n):
x=random()
y=random()
if (x*x+y*y)**(0.5)<=1: # if i falls inside the circle
inside+=1
pi=4*inside/n
return pi
```
Let's test `find_pi` on 5,000 points:
```
find_pi(5000)
```
This ran very quickly, but the results are not very accurate!
Next we'll write a script that sets up a pool of workers, and lets us time the results against varying sized pools. We'll set up two arguments to represent *processes* and *total_iterations*. Inside the script, we'll break *total_iterations* down into the number of iterations passed to each process, by making a processes-sized list.<br>For example:
total_iterations = 1000
processes = 5
iterations = [total_iterations//processes]*processes
iterations
# Output: [200, 200, 200, 200, 200]
This list will be passed to our `map()` function along with `find_pi()`
```
%%writefile test.py
from random import random
from multiprocessing import Pool
import timeit
def find_pi(n):
"""
Function to estimate the value of Pi
"""
inside=0
for i in range(0,n):
x=random()
y=random()
if (x*x+y*y)**(0.5)<=1: # if i falls inside the circle
inside+=1
pi=4*inside/n
return pi
if __name__ == '__main__':
N = 10**5 # total iterations
P = 5 # number of processes
p = Pool(P)
print(timeit.timeit(lambda: print(f'{sum(p.map(find_pi, [N//P]*P))/P:0.7f}'), number=10))
p.close()
p.join()
print(f'{N} total iterations with {P} processes')
! python test.py
```
Great! The above test took under a second on our computer.
Now that we know our script works, let's increase the number of iterations, and compare two different pools. Sit back, this may take awhile!
```
%%writefile test.py
from random import random
from multiprocessing import Pool
import timeit
def find_pi(n):
"""
Function to estimate the value of Pi
"""
inside=0
for i in range(0,n):
x=random()
y=random()
if (x*x+y*y)**(0.5)<=1: # if i falls inside the circle
inside+=1
pi=4*inside/n
return pi
if __name__ == '__main__':
N = 10**7 # total iterations
P = 1 # number of processes
p = Pool(P)
print(timeit.timeit(lambda: print(f'{sum(p.map(find_pi, [N//P]*P))/P:0.7f}'), number=10))
p.close()
p.join()
print(f'{N} total iterations with {P} processes')
P = 5 # number of processes
p = Pool(P)
print(timeit.timeit(lambda: print(f'{sum(p.map(find_pi, [N//P]*P))/P:0.7f}'), number=10))
p.close()
p.join()
print(f'{N} total iterations with {P} processes')
! python test.py
```
Hopefully you saw that with 5 processes our script ran faster!
## More is Better ...to a point.
The gain in speed as you add more parallel processes tends to flatten out at some point. In any collection of tasks, there are going to be one or two that take longer than average, and no amount of added processing can speed them up. This is best described in [Amdahl's Law](https://en.wikipedia.org/wiki/Amdahl%27s_law).
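To make this concrete, Amdahl's Law says that if a fraction *p* of a task can be parallelized over *n* processes, the overall speedup is bounded by $$\frac{1}{(1-p) + \frac{p}{n}}$$
The short sketch below (our illustration, not part of the original lesson) shows how quickly that bound flattens out:
```
# Illustrative only: theoretical speedup predicted by Amdahl's Law
def amdahl_speedup(p, n):
    """p = parallelizable fraction of the work, n = number of processes"""
    return 1 / ((1 - p) + p / n)

for n in [1, 2, 4, 8, 16, 1000]:
    print(f'{n:>5} processes -> speedup {amdahl_speedup(0.9, n):.2f}x')
# even with 1000 processes, a 90%-parallel task tops out near 10x
```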
## Advanced Script
In the example below, we'll add a context manager to shrink these three lines
p = Pool(P)
...
p.close()
p.join()
to one line:
with Pool(P) as p:
And we'll accept command line arguments using the *sys* module.
```
%%writefile test2.py
from random import random
from multiprocessing import Pool
import timeit
import sys
N = int(sys.argv[1]) # these arguments are passed in from the command line
P = int(sys.argv[2])
def find_pi(n):
"""
Function to estimate the value of Pi
"""
inside=0
for i in range(0,n):
x=random()
y=random()
if (x*x+y*y)**(0.5)<=1: # if i falls inside the circle
inside+=1
pi=4*inside/n
return pi
if __name__ == '__main__':
with Pool(P) as p:
print(timeit.timeit(lambda: print(f'{sum(p.map(find_pi, [N//P]*P))/P:0.5f}'), number=10))
print(f'{N} total iterations with {P} processes')
! python test2.py 10000000 500
```
Great! Now you should have a good understanding of multithreading and multiprocessing!
```
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, QiskitError
#from qiskit import execute, BasicAer
import qiskit.ignis.verification.randomized_benchmarking as rb
#import qiskit.test.benchmarks.randomized_benchmarking as br
import pyzx
from pyzx.circuit.qasmparser import QASMParser
from pyzx.circuit.qiskitqasmparser import QiskitQASMParser
#qc = rb.randomized_benchmarking_seq()
qc = rb.randomized_benchmarking_seq(nseeds=1, length_vector=None,
rb_pattern=[[0,1]],
length_multiplier=1, seed_offset=0,
align_cliffs=False,
interleaved_gates=None,
is_purity=False)
qc = qc[0][0][0]
# setting up the backend
# print(BasicAer.backends())
# running the job
# job_sim = execute(qc, BasicAer.get_backend('qasm_simulator'))
# sim_result = job_sim.result()
# print("\nPrint all gates:")
# [print(dat) for dat in qc.data]
qasm = qc.decompose().qasm()
### if you want to remove all barriers
## qasm = '\n'.join(['' if line.startswith("barrier") else line for line in qasm.splitlines()])
qc = qc.from_qasm_str(qasm)
print("\nPrint QASM:")
print(qasm)
# Draw the circuit
print(qc)
p = QiskitQASMParser()
circ_list, whichpyzx = p.qiskitparse(qasm)
print(circ_list)
print(whichpyzx)
print(p.registers)
[print(circ_list[w].__dict__) for w in whichpyzx]
#qasm = qc.decompose().qasm()
pyzx.draw_many(circ_list, whichpyzx)
graph_list = [circ_list[w].to_graph() for w in whichpyzx]
[pyzx.full_reduce(g) for g in graph_list]
pyzx.draw_many(graph_list, range(len(whichpyzx)))
pyzx_circ_list = [pyzx.extract.streaming_extract(g) for g in graph_list]
pyzx_circ_list = [pyzx.optimize.basic_optimization(new_c.to_basic_gates()) for new_c in pyzx_circ_list]
pyzx_qasm = [new_c.to_basic_gates().to_qasm() for new_c in pyzx_circ_list]
passedAll = True
for i in range(len(pyzx_circ_list)):
try:
assert(pyzx.compare_tensors(pyzx_circ_list[i], circ_list[whichpyzx[i]], False))
except AssertionError:
print(i)
print(circ_list[whichpyzx[i]].__dict__)
print(pyzx_circ_list[i].__dict__)
passedAll = False
assert(passedAll)
pyzx_qasm = ["\n".join(['' if line.startswith("qreg") else line for line in circ.splitlines()[2:]]) for circ in pyzx_qasm]
for new_qasm in pyzx_qasm:
[print(line) for line in new_qasm.splitlines()]
print()
#now we need to map registers and glue all the pieces back together
for i in range(len(pyzx_qasm)):
circ_list[whichpyzx[i]] = pyzx_qasm[i]
#print(circ_list)
## join the circuit pieces back into a single QASM string
qasm_string = 'OPENQASM 2.0;\ninclude "qelib1.inc";\n'+"\n".join(circ_list)
qasm_string = qasm_string.replace('q[', 'qr[')
print(qasm_string)
# pqsl = [line + "\n" for line in pyzx_qasm] #took out .splitlines()
# qsl = [line + "\n" for line in qasm.splitlines()]
# # print(pqsl)
# # print(qsl)
# new_qasm = '\n'.join(qsl[0:4]) + ''.join(pqsl[3:]) + ''.join(qsl[-2:])
# new_qasm = new_qasm.replace('q[', 'qr[')
# print(new_qasm)
new_qc = qc.from_qasm_str(qasm_string)
print(new_qc)
print(qc)
import qiskit
from qiskit.providers.basicaer import QasmSimulatorPy
c1 = qiskit.execute(qc, QasmSimulatorPy()).result().get_counts()
c2 = qiskit.execute(new_qc, QasmSimulatorPy()).result().get_counts()
c1
c2
assert(c1 == c2)
qc.depth()
qc.size()
new_qc.depth()
new_qc.size()
new_new_qc = qiskit.transpile(qc, basis_gates=['u3', 'cx'], optimization_level=2)
print(new_new_qc)
new_new_qc.depth()
new_new_qc.size()
doubly_qc = qiskit.transpile(new_qc, basis_gates=['u3', 'cx'], optimization_level=2)
print(doubly_qc)
doubly_qc.depth()
doubly_qc.size()
c3 = qiskit.execute(new_new_qc, QasmSimulatorPy()).result().get_counts()
c4 = qiskit.execute(doubly_qc, QasmSimulatorPy()).result().get_counts()
c3
c4
```
```
import pandas as pd
from textblob import Word
headers = pd.read_csv("header.csv")
headers['Header']
citation = [Word("citation").synsets[2], Word("reference").synsets[1], Word("cite").synsets[3]]
run = [Word("run").synsets[9],Word("run").synsets[34],Word("execute").synsets[4]]
install = [Word("installation").synsets[0],Word("install").synsets[0],Word("setup").synsets[1],Word("prepare").synsets[0],Word("preparation").synsets[0],Word("manual").synsets[0],Word("guide").synsets[2],Word("guide").synsets[9]]
download = [Word("download").synsets[0]]
requirement = [Word("requirement").synsets[2],Word("prerequisite").synsets[0],Word("prerequisite").synsets[1],Word("dependency").synsets[0],Word("dependent").synsets[0]]
contact = [Word("contact").synsets[9]]
description = [Word("description").synsets[0],Word("description").synsets[1],Word("introduction").synsets[3],Word("introduction").synsets[6],Word("basics").synsets[0],Word("initiation").synsets[1],Word("start").synsets[0],Word("start").synsets[4],Word("started").synsets[0],Word("started").synsets[1],Word("started").synsets[7],Word("started").synsets[8],Word("overview").synsets[0],Word("summary").synsets[0],Word("summary").synsets[2]]
contributor = [Word("contributor").synsets[0]]
documentation = [Word("documentation").synsets[1]]
license = [Word("license").synsets[3],Word("license").synsets[0]]
usage = [Word("usage").synsets[0],Word("example").synsets[0],Word("example").synsets[5],Word("implement").synsets[1],Word("implementation").synsets[1],Word("demo").synsets[1],Word("tutorial").synsets[0],Word("tutorial").synsets[1]]
update = [Word("updating").synsets[0],Word("updating").synsets[3]]
issues = [Word("issues").synsets[0],Word("errors").synsets[5],Word("problems").synsets[0],Word("problems").synsets[2]]
support = [Word("support").synsets[7],Word("help").synsets[0],Word("help").synsets[9],Word("report").synsets[0],Word("report").synsets[6]]
group = dict()
group.update({"citation":citation})
group.update({"download":download})
group.update({"run":run})
group.update({"installation":install})
group.update({"requirement":requirement})
group.update({"contact":contact})
group.update({"description":description})
group.update({"contributor":contributor})
group.update({"documentation":documentation})
group.update({"license":license})
group.update({"usage":usage})
group.update({"update":update})
group.update({"issues":issues})
group.update({"support":support})
def find_sim(wordlist,wd): #returns the max path similarity between a word sense and a synonym group
simvalue = []
for sense in wordlist:
if(wd.path_similarity(sense)!=None):
simvalue.append(wd.path_similarity(sense))
if(len(simvalue)!=0):
return max(simvalue)
else:
return 0
def match_group(word_syn,group,threshold):
currmax = 0
maxgroup = ""
simvalues = dict()
for sense in word_syn: #for a given sense of a word
similarities = []
for key, value in group.items(): #value has all the similar words
path_sim = find_sim(value,sense)
# print("Similarity is:",path_sim)
            if(path_sim>threshold): #keep the best-scoring group above the threshold
if(path_sim>currmax):
maxgroup = key
currmax = path_sim
return maxgroup
datadf = pd.DataFrame({'Header': [], 'Group': []})
matchedgroups = []
for h in headers["Header"]:
sentence = h.split(" ")[1:]
for s in sentence:
synn = Word(s).synsets
if(len(synn)>0):
bestgroup = match_group(synn,group,0.6)
if(bestgroup!=""):
datadf = datadf.append({'Header' : h, 'Group' : bestgroup}, ignore_index=True)
print(datadf)
datadf.to_csv('header_groups.csv', index=False)
```
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
```
### Define a function to integrate
```
def func(x):
a = 1.01
b= -3.04
c = 2.07
return a*x**2 + b*x + c
```
### Define its integral so we know the right answer
```
def func_integral(x):
a = 1.01
b= -3.04
c = 2.07
return (a*x**3)/3. + (b*x**2)/2. + c*x
```
### Define core of trapezoid method
```
def trapezoid_core(f,x,h):
    return 0.5*h*(f(x+h)+f(x))
```
### Define the wrapper function to perform the trapezoid method
```
def trapezoid_method(f,a,b,N):
#f == function to integrate
#a == lower limit of integration
#b == upper limit of integration
#N == number of intervals to use
#define x values to perform the trapezoid rule
x = np.linspace(a,b,N)
h = x[1]-x[0]
#define the value of the integral
Fint = 0.0
#perform the integral using the trapezoid method
for i in range(0,len(x)-1,1):
Fint += trapezoid_core(f,x[i],h)
#return the answer
return Fint
```
### Define the core of simpson's method
```
def simpsons_core(f,x,h):
return h*(f(x) + 4*f(x+h) + f(x+2*h))/3
```
### Define a wrapper for simpson's method
```
def simpsons_method(f,a,b,N):
#f == function to integrate
#a == lower limit of integration
#b == upper limit of integration
#N == number of intervals to use
x = np.linspace(a,b,N)
h = x[1]-x[0]
#define the value of the integral
Fint = 0.0
#perform the integral using the simpson's method
for i in range(0,len(x)-2,2):
Fint += simpsons_core(f,x[i],h)
    #apply simpson's rule over the last interval if N is even
if((N%2)==0):
Fint += simpsons_core(f,x[-2],0.5*h)
#return the answer
return Fint
```
### Define Romberg core
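In formula form (our annotation of the code below, not text from the original), each refinement level $i$ uses the spacing $$h_i = \frac{b-a}{2^{i}}$$ and returns the contribution of the new midpoints, $$K \cdot M = \frac{b-a}{2^{\,i+1}} \sum_{j=0}^{2^{i}-1} f\!\left(a + \left(j+\tfrac{1}{2}\right) h_i\right),$$ which the wrapper below combines with the previous estimate as $$I_i = \tfrac{1}{2} I_{i-1} + K \cdot M.$$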
```
def romberg_core(f,a,b,i):
#we need the difference between a and b
h = b-a
    #interval between function evaluations at refinement level i
dh = h/2.**(i)
#we need the cofactor
K = h/2.**(i+1)
#and the function evaluations
M = 0.0
for j in range(2**i):
M += f(a + 0.5*dh +j*dh)
#return the answer
return K*M
```
### Define a wrapper function
```
def romberg_integration(f,a,b,tol):
#define an iteration variable
i=0
#define a max number of iterations
imax = 1000
#define an error estimate
delta = 100.0*np.fabs(tol)
#set an array of integral answers
I = np.zeros(imax,dtype=float)
    #get the zeroth romberg iteration first
I[0] = 0.5*(b-a)*(f(a) + f(b))
#iterate by 1
i += 1
#iterate until we reach tolerance
while(delta>tol):
#find the romberg integration
I[i] = 0.5*I[i-1] + romberg_core(f,a,b,i)
#compute a fractional error estimate
delta = np.fabs((I[i]-I[i-1])/I[i])
print(i,":",I[i],I[i-1],delta)
if(delta>tol):
#iterate
i += 1
            #if we've reached maximum iterations
if(i>imax):
print("Max iterations reached")
raise StopIteration("Stopping iterations after ",i)
#return the answer
return I[i]
```
### Check the integrals
```
Answer = func_integral(1) - func_integral(0)
print(Answer)
print("Trapezoidal method")
print(trapezoid_method(func,0,1,10))
print("Simpson's method")
print(simpsons_method(func,0,1,10))
print("Romberg")
tolerance = 1.0e-4
RI = romberg_integration(func,0,1,tolerance)
print(RI, (RI-Answer)/Answer, tolerance)
```
# Tuning an estimator
[José C. García Alanis (he/him)](https://github.com/JoseAlanis)
Research Fellow - Child and Adolescent Psychology at [Uni Marburg](https://www.uni-marburg.de/de)
Member - [RTG 2271 | Breaking Expectations](https://www.uni-marburg.de/en/fb04/rtg-2271), [Brainhack](https://brainhack.org/)
<img align="left" src="https://raw.githubusercontent.com/G0RELLA/gorella_mwn/master/lecture/static/Twitter%20social%20icons%20-%20circle%20-%20blue.png" alt="logo" title="Twitter" width="30" height="30" /> <img align="left" src="https://raw.githubusercontent.com/G0RELLA/gorella_mwn/master/lecture/static/GitHub-Mark-120px-plus.png" alt="logo" title="Github" width="30" height="30" /> @JoiAlhaniz
<img align="right" src="https://raw.githubusercontent.com/PeerHerholz/ML-DL_workshop_SynAGE/master/lecture/static/ml-dl_workshop.png" alt="logo" title="Github" width="400" height="280" />
### Aim(s) of this section
It's very important to learn when and where it's appropriate to "tweak" your model.
Since we have done all of the previous analysis in our training data, it's fine to try out different models.
But we absolutely cannot "test" it on our *left out data*. If we do, we are in great danger of overfitting.
It is not uncommon to try other models, or tweak hyperparameters. In this case, due to our relatively small sample size, we are probably not powered sufficiently to do so, and we would once again risk overfitting. However, for the sake of demonstration, we will do some tweaking.
We will try a few different examples:
- normalizing our target data
- tweaking our hyperparameters
- trying a more complicated model
- feature selection
### Prepare data for model
Let's bring back our example data set
```
import numpy as np
import pandas as pd
# get the data set
data = np.load('MAIN2019_BASC064_subsamp_features.npz')['a']
# get the labels
info = pd.read_csv('participants.csv')
print('There are %s samples and %s features' % (data.shape[0], data.shape[1]))
```
We'll set `Age` as target
- i.e., we'll look at these from the `regression` perspective
```
# set age as target
Y_con = info['Age']
Y_con.describe()
```
### Model specification
Now let's bring back the model specifications we used last time
```
from sklearn.model_selection import train_test_split
# split the data
X_train, X_test, y_train, y_test = train_test_split(data, Y_con, random_state=0)
# use `AgeGroup` for stratification
age_class2 = info.loc[y_train.index,'AgeGroup']
```
### Normalize the target data
```
# plotting imports (also used in later cells)
import matplotlib.pyplot as plt
import seaborn as sns

# plot the data
sns.displot(y_train,label='train')
plt.legend()
# create a log transformer function and log transform Y (age)
from sklearn.preprocessing import FunctionTransformer
log_transformer = FunctionTransformer(func = np.log, validate=True)
log_transformer.fit(y_train.values.reshape(-1,1))
y_train_log = log_transformer.transform(y_train.values.reshape(-1,1))[:,0]
```
Now let's plot the transformed data
```
import matplotlib.pyplot as plt
import seaborn as sns
sns.displot(y_train_log,label='train log')
plt.legend()
```
and go on with fitting the model to the log-transformed data
```
# split the data
X_train2, X_test, y_train2, y_test = train_test_split(
X_train, # x
y_train, # y
test_size = 0.25, # 75%/25% split
shuffle = True, # shuffle dataset before splitting
stratify = age_class2, # keep distribution of age class consistent
# betw. train & test sets.
random_state = 0 # same shuffle each time
)
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.metrics import r2_score, mean_absolute_error
# re-intialize the model
lin_svr = SVR(kernel='linear')
# predict
y_pred = cross_val_predict(lin_svr, X_train, y_train_log, cv=10)
# scores
acc = r2_score(y_train_log, y_pred)
mae = mean_absolute_error(y_train_log,y_pred)
# check the accuracy
print('R2:', acc)
print('MAE:', mae)
# plot the relationship
sns.regplot(x=y_pred, y=y_train_log, scatter_kws=dict(color='k'))
plt.xlabel('Predicted Log Age')
plt.ylabel('Log Age')
```
Alright, seems like a definite improvement, right? We might agree on that.
But we can't forget about interpretability. The MAE is much less interpretable now
- do you know why?
### Tweak the hyperparameters
Many machine learning algorithms have hyperparameters that can be "tuned" to optimize model fitting.
Careful parameter tuning can really improve a model, but haphazard tuning will often lead to overfitting.
Our SVR model has multiple hyperparameters. Let's explore some approaches for tuning them. First, let's check which parameters (and defaults) the estimator exposes:
```
SVR?
```
Now, how do we know what parameter tuning does?
- One way is to plot a **Validation Curve**; this will let us view changes in training and validation accuracy of a model as we shift its hyperparameters. We can do this easily with sklearn.
We'll fit the same model, but with a range of different values for `C`
- The C parameter tells the SVM optimization how much you want to avoid misclassifying each training example. For large values of C, the optimization will choose a smaller-margin hyperplane if that hyperplane does a better job of getting all the training points classified correctly. Conversely, a very small value of C will cause the optimizer to look for a larger-margin separating hyperplane, even if that hyperplane misclassifies more points. For very tiny values of C, you should get misclassified examples, often even if your training data is linearly separable.
```
from sklearn.model_selection import validation_curve
C_range = 10. ** np.arange(-3, 7)
train_scores, valid_scores = validation_curve(lin_svr, X_train, y_train_log,
param_name= "C",
param_range = C_range,
cv=10,
scoring='neg_mean_squared_error')
# A bit of pandas magic to prepare the data for a seaborn plot
tScores = pd.DataFrame(train_scores).stack().reset_index()
tScores.columns = ['C','Fold','Score']
tScores.loc[:,'Type'] = ['Train' for x in range(len(tScores))]
vScores = pd.DataFrame(valid_scores).stack().reset_index()
vScores.columns = ['C','Fold','Score']
vScores.loc[:,'Type'] = ['Validate' for x in range(len(vScores))]
ValCurves = pd.concat([tScores,vScores]).reset_index(drop=True)
ValCurves.head()
# and plot the results
g = sns.catplot(x='C',y='Score',hue='Type',data=ValCurves,kind='point')
plt.xticks(range(10))
g.set_xticklabels(C_range, rotation=90)
```
It looks like accuracy is better for higher values of `C`, and plateaus somewhere between 0.1 and 1.
The default setting is `C=1`, so it looks like we can't really improve much by changing `C`.
But our SVR model actually has two hyperparameters, `C` and `epsilon`. Perhaps there is an optimal combination of settings for these two parameters.
We can explore that somewhat quickly with a `grid search`, which is once again easily achieved with `sklearn`.
Because we are fitting the model multiple times with cross-validation, this will take some time ...
### Let's tune some hyperparameters
```
from sklearn.model_selection import GridSearchCV
C_range = 10. ** np.arange(-3, 8)
epsilon_range = 10. ** np.arange(-3, 8)
param_grid = dict(epsilon=epsilon_range, C=C_range)
grid = GridSearchCV(lin_svr, param_grid=param_grid, cv=10)
grid.fit(X_train, y_train_log)
```
Now that the grid search has completed, let's find out what was the "best" parameter combination
```
print(grid.best_params_)
```
And what if we redo our cross-validation with this parameter set?
```
y_pred = cross_val_predict(SVR(kernel='linear',
C=grid.best_params_['C'],
epsilon=grid.best_params_['epsilon'],
gamma='auto'),
X_train, y_train_log, cv=10)
# scores
acc = r2_score(y_train_log, y_pred)
mae = mean_absolute_error(y_train_log,y_pred)
# print model performance
print('R2:', acc)
print('MAE:', mae)
# and plot the results
sns.regplot(x=y_pred, y=y_train_log, scatter_kws=dict(color='k'))
plt.xlabel('Predicted Log Age')
plt.ylabel('Log Age')
```
Perhaps unsurprisingly, the model fit is only very slightly improved from what we had with our defaults. **There's a reason they are defaults, you silly**
Grid search can be a powerful and useful tool. But can you think of a way that, if not properly utilized, it could lead to overfitting? Could it be happening here?
You can find a nice set of tutorials with links to very helpful content regarding how to tune hyperparameters while being aware of over- and under-fitting here:
https://scikit-learn.org/stable/modules/learning_curve.html
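One common safeguard (not part of the original workshop material) is nested cross-validation: hyperparameters are tuned in an inner loop while performance is estimated in an outer loop, so the reported score never comes from data that influenced the tuning. A minimal sketch, reusing the `X_train`, `y_train_log`, and `param_grid` objects defined above:
```
# Hedged sketch: nested CV keeps the grid search honest about generalization
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVR

inner_search = GridSearchCV(SVR(kernel='linear'), param_grid=param_grid, cv=5)
outer_scores = cross_val_score(inner_search, X_train, y_train_log,
                               cv=5, scoring='neg_mean_absolute_error')
print('Nested CV MAE: %.3f +/- %.3f' % (-outer_scores.mean(), outer_scores.std()))
```
(Expect this to be slow, since every outer fold runs a full grid search.)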
# Homework 2: classification
Data source: http://archive.ics.uci.edu/ml/datasets/Polish+companies+bankruptcy+data
**Description:** The goal of this HW is to become familiar with the basic classifiers in PML Ch. 3.
For this HW, we continue to use the Polish companies bankruptcy Data Set from the UCI Machine Learning Repository. Download the dataset and put the 4th year file (4year.arff) in your YOUR_GITHUB_ID/PHBS_MLF_2019/HW2/
I did basic preprocessing of the data (loading into a dataframe, creating the bankruptcy column, changing column names, filling in na values, training-vs-test split, standardization, etc). See my github.
# Preparation
## Load, read and clean
```
from scipy.io import arff
import pandas as pd
import numpy as np
data = arff.loadarff('./data/4year.arff')
df = pd.DataFrame(data[0])
df['bankruptcy'] = (df['class']==b'1')
del df['class']
df.columns = ['X{0:02d}'.format(k) for k in range(1,65)] + ['bankruptcy']
df.describe()
sum(df.bankruptcy == True)
from sklearn.impute import SimpleImputer
imp_mean = SimpleImputer(missing_values=np.nan, strategy='mean')
X_imp = imp_mean.fit_transform(df.values)
```
*A dll load error occurred here. Solution recorded in [my blog](https://quoth.win/671.html)*
```
from sklearn.model_selection import train_test_split
X, y = X_imp[:, :-1], X_imp[:, -1]
X_train, X_test, y_train, y_test =\
train_test_split(X, y,
test_size=0.3,
random_state=0,
stratify=y)
from sklearn.preprocessing import StandardScaler
stdsc = StandardScaler()
X_train_std = stdsc.fit_transform(X_train)
X_test_std = stdsc.transform(X_test)
```
## 1. Find the 2 most important features
Select the 2 most important features using LogisticRegression with L1 penalty. **(Adjust C until you see 2 features)**
```
from sklearn.linear_model import LogisticRegression
C = [1, .1, .01, 0.001]
cdf = pd.DataFrame()
for c in C:
lr = LogisticRegression(penalty='l1', C=c, solver='liblinear', random_state=0)
lr.fit(X_train_std, y_train)
print(f'[C={c}] with {lr.coef_[lr.coef_!=0].shape[0]} features: \n {lr.coef_[lr.coef_!=0]} \n') # Python >= 3.7
if lr.coef_[lr.coef_!=0].shape[0] == 2:
cdf = pd.DataFrame(lr.coef_.T , df.columns[:-1], columns=['coef'])
lr = LogisticRegression(penalty='l1', C=0.01, solver='liblinear', random_state=0) # complete
lr.fit(X_train_std, y_train)
cdf = cdf[cdf.coef != 0]
cdf
```
### redefine X_train_std and X_test_std
```
X_train_std = X_train_std[:, lr.coef_[0]!=0]
X_test_std = X_test_std[:, lr.coef_[0]!=0]
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
plt.style.use('ggplot')
plt.scatter(x=X_train_std[:,0], y=X_train_std[:,1], c=y_train, cmap='Set1')
```
## 2. Apply LR / SVM / Decision Tree below
Using the 2 selected features, apply LR / SVM / decision tree. **Try your own hyperparameters (C, gamma, tree depth, etc)** to maximize the prediction accuracy. (Just try several values. You don't need to show your answer is the maximum.)
## LR
```
CLr = np.arange(0.000000000000001, 0.0225, 0.0001)
acrcLr = [] # accuracy
for c in CLr:
lr = LogisticRegression(C=c,penalty='l1',solver='liblinear')
lr.fit(X_train_std, y_train)
acrcLr.append([lr.score(X_train_std, y_train), lr.score(X_test_std, y_test), c])
acrcLr = np.array(acrcLr)
plt.plot(acrcLr[:,2], acrcLr[:,0])
plt.plot(acrcLr[:,2], acrcLr[:,1])
plt.xlabel('C')
plt.ylabel('Accuracy')
plt.title('Logistic Regression')
plt.show()
```
Choose `c=.01`
```
c = .01
lr = LogisticRegression(C=c,penalty='l1',solver='liblinear')
lr.fit(X_train_std, y_train)
print(f'Accuracy when [c={c}] \nTrain {lr.score(X_train_std, y_train)}\nTest {lr.score(X_test_std, y_test)}')
```
## SVM
```
from sklearn.svm import SVC
G = np.arange(0.00001, 0.3, 0.005)
acrcSvm = []
for g in G:
svm = SVC(kernel='rbf', gamma=g, C=1.0, random_state=0)
svm.fit(X_train_std, y_train)
acrcSvm.append([svm.score(X_train_std, y_train), svm.score(X_test_std, y_test), g])
acrcSvm = np.array(acrcSvm)
plt.plot(acrcSvm[:,2], acrcSvm[:,0])
plt.plot(acrcSvm[:,2], acrcSvm[:,1])
plt.xlabel('gamma')
plt.ylabel('Accuracy')
plt.title('SVM')
plt.show()
```
Choose `gamma = 0.2`
```
g = 0.2
svm = SVC(kernel='rbf', gamma=g, C=1.0, random_state=0)
svm.fit(X_train_std, y_train)
print(f'Accuracy when [gamma={g}] \nTrain {svm.score(X_train_std, y_train)}\nTest {svm.score(X_test_std, y_test)}')
```
## Decision Tree
```
from sklearn.tree import DecisionTreeClassifier
depthTree = range(1, 6)
acrcTree = []
for depth in depthTree:
tree = DecisionTreeClassifier(criterion='gini', max_depth=depth, random_state=0)
tree.fit(X_train_std, y_train)
acrcTree.append([tree.score(X_train_std, y_train), tree.score(X_test_std, y_test), depth])
acrcTree = np.array(acrcTree)
plt.plot(acrcTree[:,2], acrcTree[:,0])
plt.plot(acrcTree[:,2], acrcTree[:,1])
plt.xlabel('max_depth')
plt.ylabel('Accuracy')
plt.title('Decision Tree')
plt.show()
```
Choose `max_depth=2`:
```
depth = 2
tree = DecisionTreeClassifier(criterion='gini', max_depth=depth, random_state=0)
tree.fit(X_train_std, y_train)
print(f'Accuracy when [max_depth={depth}] \nTrain {tree.score(X_train_std, y_train)}\nTest {tree.score(X_test_std, y_test)}')
```
## 3. Visualize the classification
Visualize your classifiers using the plot_decision_regions function from PML Ch. 3
```
def plot_decision_regions(X, y, classifier, test_idx=None, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.3, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.8,
c=colors[idx],
marker=markers[idx],
label=cl,
edgecolor='black')
# highlight test samples
if test_idx:
# plot all samples
X_test, y_test = X[test_idx, :], y[test_idx]
plt.scatter(X_test[:, 0],
X_test[:, 1],
c='',
edgecolor='black',
alpha=1.0,
linewidth=1,
marker='o',
s=100,
label='test set')
X_combined_std = np.vstack((X_train_std, X_test_std))
y_combined = np.hstack((y_train, y_test))
```
## LR
`test_idx` removed on purpose
```
plot_decision_regions(X=X_combined_std, y=y_combined,
classifier=lr)
plt.xlabel(cdf.index[0])
plt.ylabel(cdf.index[1])
plt.legend(loc='lower left')
plt.tight_layout()
#plt.savefig('images/03_01.png', dpi=300)
plt.show()
```
## Decision Tree
```
plot_decision_regions(X=X_combined_std, y=y_combined,
classifier=tree)
plt.xlabel(cdf.index[0])
plt.ylabel(cdf.index[1])
plt.legend(loc='lower left')
plt.tight_layout()
#plt.savefig('images/03_01.png', dpi=300)
plt.show()
```
## SVM (samples)
```
# Visualizing the decision regions with the SVM model on all samples is too slow
# because the complexity is very high (source: https://scikit-learn.org/stable/modules/svm.html#complexity),
# so use random samples (n=3000) instead
samples = np.random.randint(0, len(X_combined_std), size=3000)
plot_decision_regions(X=X_combined_std[samples], y=y_combined[samples],
classifier=svm)
plt.xlabel(cdf.index[0] + '[samples]')
plt.ylabel(cdf.index[1] + '[samples]')
plt.legend(loc='lower left')
plt.tight_layout()
#plt.savefig('images/03_01.png', dpi=300)
plt.show()
```
```
"""
Title: Python Crash Course
Author: Rafia Bushra
Description: A python tutorial for DSA/ISE 5113 students
Last Updated: 3/26/21
"""
```
### Topics
This notebook covers the following topics -
1. Basic Concepts
1. [Basic Syntax](#basic-syntax)
2. [Lists](#lists)
3. [String Manipulation](#string)
4. [Decision making (If statement)](#if)
5. Loops
1. [For loop](#for)
2. [While loop](#while)
6. [Function](#func)
7. [Scope](#scope)
8. Miscellaneous
1. [Dictionary](#dict)
2. [Tuples](#tuple)
3. [List Comprehension](#lc)
4. [Error Handling](#eh)
5. [Lambda Expressions](#le)
6. [Mapping Function](#mf)
7. [User Input](#ui)
2. Advanced Concepts
1. [Numpy](#numpy)
2. [Pandas](#pandas)
3. [Matplotlib (Plotting)](#plot)
4. [pdb (Debugging)](#pdb)
5. [Other Useful Libraries](#oul)
# Basic Topics
### Basic Syntax <a class="anchor" id="basic-syntax"></a>
###### Hello World!
```
#A basic print statement to display given message
print("Hello World!")
```
##### Basic Operations
```
#Addition
2 + 10
#Subtraction
2 - 10
#Multiplication
2*10
#Division
3/2
#Integer division
3//2
#Raising to a power
10**3
#Scientific notation - 10e3 is 10*10**3 = 10000.0, not the same as 10**3
10e3
```
##### Defining Variables
You can define variables as `variable_name = value`
- Variable names can be alphanumeric though it can't start with a number.
- Variable names are case sensitive
- The values that you assign to a variable will typically be of these 5 standard data types (In python, you can assign almost anything to a variable and not have to declare what type of variable it is)
- Numbers (floats, integers, complex etc)
- Strings*
- List*
- Tuple*
- Dictionary*
*Discussed in a later section. Will only show how to define them in this section.
```
#Numbers
my_num = 5113 #Example of defining an integer
my_float = 3.0 #Example of defining a float
#Strings
truth = "This crash course is just the tip of the iceberg o_O"
#Lists
same_type_list = [1,2,3,4,5] #A simple list of same type of objects - integers
mixed_list = [1,2,"three", my_num, same_type_list] #A list containing many type of objects - integer, string, variable, another list
#Dictionary
simple_dict = {"red": 1, "blue":2, "green":3} #Similar to a list but enclosed in curly braces {} and consists of key-value pairs
#Tuple
aTuple = (1,2,3) #Similar to a list but enclosed in parenthesis ()
```
##### More print statements
Now we're going to print the variables we defined in the previous cell and look at some more ways to use the print statement
```
#printing a variable
print(my_float)
#printing the truth!
print(truth)
print(simple_dict)
print(mixed_list) #Notice how the 4th & 5th objects got the value of the variables we defined earlier
#Dynamic printing
print("This is DSA {}".format(my_num)) #The value/variable given inside format replaces the curly braces in the string
#When the dynamically set part is a number, we can set the precision
print("Value of pi up to 4 decimal places = {:.4f}".format(3.141592653589793238))
```
###### Variable Type & Conversion
Every variable has a type (int, float, string, list, etc) and some of them can be converted into certain types
```
#Finding out the type of a variable
type(my_float)
#printing the types of some other variables
print(type(my_num), type(simple_dict), type(truth), type(mixed_list))
#Converting anything to string
str(my_float)
str(simple_dict)
str(mixed_list)
#converting string to number
three = "3"
int(three)
float(three)
#Converting tuple to a list
list(aTuple)
#Converting list to a tuple
tuple(same_type_list)
```
### Lists <a class="anchor" id="lists"></a>
A versatile datatype that can be thought of as a collection of comma-separated values.
Each item in a list has an index. The indices start with 0.
The items in a list don't need to be of the same type
```
#Defining some lists
l1 = [1,2,3,4,5,6]
l2 = ["a", "b", "c", "d"]
l3 = list(range(2,50,2)) #Creates a list going from 2 up to and not including 50 in increments of 2
print(l3) #displaying l3
#Length of a list
#The len command gives the size of the list i.e. the total number of items
len(l1)
len(l2)
```
**Accessing list items**
List items can be accessed using their index.
The first item has an index of 0, the next one has 1 and so on
```
#First item of l2 is "a" and third item of l1 is 3
print("First item of l2: {}".format(l2[0])) # l2[0] accesses the item at 0th index of l2
print("Third item of l1: {}".format(l1[2])) # l1[0] accesses the item at 2nd index of l1
```
**Indexing in reverse** List items can be accessed in reverse order using negative indices.
The last item can be accessed with -1, second from last with -2 and so on
```
print("Last item of l3: {}".format(l3[-1]))
print("Third to last item of l1: {}".format(l1[-3]))
```
**Slicing**
Portions of a list can be chosen using some or all of 3 numbers - starting index, stopping index and increment
The syntax is `list_name[start:stop:increment]`
```
#If I want 2,3,4 from list l1, I want to start from index 1 and end at index 3
#The stopping index is not included so we choose 3+1=4 as stopping index
l1[1:4]
#In this example we chose items from index 1 up to index 5, skipping an item every time (increment of 2)
l1[1:6:2]
#If we just indicate starting index, everything after that is kept
l1[2:]
#If we just indicate stopping index, everything up to that is kept
l1[:4]
#Using reverse index
l1[:-2] #Everything except for the last 2 items
```
##### List operations
```
#"adding" two lists results in concatenation
l4 = l1 + l2
l4
#Multiplying a list by a scalar results in repetition
["hello"]*5
l2*3
[2]*7
```
##### Some other popular list manipulation functions
```
#Appending to the end of an existing list
l2.append("e")
l2
#Insert an item at a particular index - list_name.insert(index, value)
l2.insert(2,"f")
l2
#sorting a list
l2.sort()
l2
#removes item by index and returns the removed item
l4.pop(3) #remove the item at index 3
l4
#remove item by matching value
l4.remove("a")
l4
#maximum or minimum value of a list
max(l3)
#min(l3) for minimum
```
### String Manipulation <a class="anchor" id="string"></a>
Strings are values enclosed in single quotes (' ') or double quotes (" ")
These are characters or a series of characters and can be manipulated in a very similar way to lists, though they have their own special functions
```
#Defining some strings
str1 = "I hear Rafia is a harsh grader"
str2 = "NO NEED TO SHOUT"
str3 = "fine, no caps lock"
```
**Accessing & Slicing**
```
#Very similar to lists
print(str1[:12]) #Takes the first 12 characters
print(str1[0]) #Accesses the first character
print(str2[-5:]) #Takes last 5 characters
print(str3[6:13]) #Takes characters at indices 6 through 12
```
**Other popular string manipulation functions**
```
#Splitting a string based on a separator - str_name.split(separator)
print(str1.split(" ")) #separating based on space
print(str2.split()) #If no argument is given to split, default separator is space
print(str3.split(",")) #separating based on comma
#Changing case
print(str2.lower()) #All lower case
print(str3.upper()) #All upper case
print(str3.capitalize()) #Only first letter upper case
print("Red".swapcase()) #swaps cases
#Replace characters by given string
str1.replace("harsh", "good")
#Find a given pattern in a string
str1.find("Rafia") #Returns the index of where the pattern is found
#Concatenating and formating string
print(str2 + " -- " + str3) #adding string concatenates them
str4 = "Strings can be anything, like {} is a string".format(12345)
print(str4)
#Like lists, we can multiply to repeat
"Hi"*4
#Like lists, we can use len command to find the size of a string
len("apples")
```
**Special Characters**
```
#\n makes a new line
print("This is making \n a new line")
#\t inserts a tab
print("This just inserts \t a tab")
```
### If Statement <a class="anchor" id="if"></a>
Executing blocks of code based on whether or not a given condition is true
The syntax is -
```python
if (condition):
    Do something
elif (condition):
    Do some other thing
else:
    Do something else
```
Only one block will execute - the condition that returns true first
You can use as many elif blocks as needed
```
if ("c" in l2):
print("Yes c is in l2")
l2.remove("c")
print("But now it's removed. Here's the new list")
print(l2)
a = 5 #defining a variable
if (a>10):
print("a is greater than 10")
else:
print("a is less than 10")
if (a>5):
print("a is greater than 5")
elif (a<5):
print("a is less than 5")
else:
print("a is equal to 5")
# assigning a value to variable using if statement
str5 = "This is a great class"
b = "yes" if "great" in str5 else "no" #if great is in str5, b will get a value of yes, otherwise it will be no
c = 1 if a>10 else 0 #if the variable a is greater than 10, c will be 1, otherwise 0
print("b = {}, c = {}".format(b,c))
```
## Loops
Loops are an essential tool in python that allow you to repeatedly execute a block of code given certain conditions or based on iterating over a given list or array. There are two main types of loops in python - `For` and `While`. A `Do..While`-style loop can also be emulated in python (e.g. with `while True` and a `break`), but I won't discuss that here.
### For Loop <a class="anchor" id="for"></a>
For loops are useful when you want to iterate a certain number of times or when you want to iterate over a list or array type object
```python
for i in list_name:
do something
```
```
#Looping a certain number of time
for i in range(10): #iterating over a list going from 0 to 9
a = i*5
print("Multiply {} by 5 gives {}".format(i, a))
#Looping over a list
for item in l4:
str_item = str(item)
print("{} - {}".format(str_item, type(str_item)))
```
**Loop Control Statements** You can control the execution of a loop using 3 statements -
- `break` : This breaks out of a loop and moves on to the next segment of your code
- `continue` : This skips any code below it (inside the loop) and moves on to the next iteration
- `pass` : It's used when a statement is required syntactically but you don't want any code to execute
Demonstrating `break`
```
#l4 is a list that contains both integers and numbers
l4
```
So if you try to add numbers to the string elements, you'll get an error.
To avoid it when iterating over this list, you can insert a break statement in your loop so that your code breaks out of the loop when it encounters a string.
```
for i in l4:
if type(i)==str:
print("Encountered a string, breaking out of the loop")
break
tmp = i+10
print("Added 10 to list item {} to get {}".format(i, tmp))
```
Demonstrating `continue`
But now, with the `break` statement, it breaks out of the loop any time it encounters a string element. If the next element after a string element is an integer, we're missing out on it.
That is where the continue statement comes in. If you use `continue` instead of `break` then, instead of breaking out of the loop, you just skip the current iteration and move to the next one, i.e. you move on to the next element and check again whether it's a string or not, and so on.
```
for i in l4:
if type(i)==str:
print("Encountered a string, moving on to the next element")
continue
tmp = i+10
print("Added 10 to list item {} to get {}".format(i, tmp))
```
Demonstrating `pass`
`pass` is more of a placeholder. If you start a loop, you are bound by syntax to write at least one statement inside it. If you don't want to write anything yet, you can use a `pass` statement to avoid getting an error
```
for i in l4:
pass
```
**Popular functions related to loops** There are a lot of useful functions in python that work well with loops e.g. (range, unpack(*), tuple, split etc.) But there are two very important ones that go hand-in-hand with loops - `zip` & `enumerate` - so these are the ones I'm discussing here.
- `zip` : Used when you want to iterate over two lists of equal length (If the lengths are not equal, it only iterates up to the length of the shorter list)
- `enumerate` : Used when you want the index of the list item you're iterating over
```
print(len(l1), len(l3))
for a, b in zip(l1, l3):
print("list 1 item is {}, corresponding list 3 item is {}".format(a,b))
for i, (a,b) in enumerate(zip(l1,l3)):
print("At index {}, list 1 item is {}, corresponding list 3 item is {}".format(i, a, b))
```
### While Loop <a class="anchor" id="while"></a>
While loops are useful when you want to keep iterating a code block **while** a certain condition holds, i.e. until it is no longer satisfied. While loops often need a counter variable that changes as the loop goes on.
```python
while (condition):
do something
```
```
counter = 10
while counter>0:
print("The counter is still positive and right now, it's {}".format(counter))
    counter -= 1 #decrementing the counter, reducing it by 1 in every iteration
```
`pass`, `break` and `continue` statements all work well with a `while` loop. `zip` and `enumerate` don't usually pair with while since it doesn't iterate over list type objects
### Function <a class="anchor" id="func"></a>
In python, apart from using the built-in functions, you can define your own customized functions using the following syntax -
```python
def function_name(arg1, arg2):
value = do something using arg1 & arg2
return value
#calling your function
function_name(value1, value2)
```
This is useful when you find yourself repeating a block of code often.
```
#Defining the function
def arithmatic_operations(num1, num2):
"""
A function to perform a series of arithmatic operations on num1 and num2
Returns the final result as an integer rounded up/down
"""
add = num1 + num2
mltply = add*num2
sbtrct = mltply - num2
divide = sbtrct/num2
result = round(divide)
return result
#Anything put inside triple quotes (""" """) at the top of a function is called a doc-string.
#You can describe your function inside """ """ and then retrieve this information by doing help(function_name)
help(arithmatic_operations)
#Calling the function
resA = arithmatic_operations(10, 5)
resA
arithmatic_operations(10, 15)
```
**Setting default values** You can use default arguments in your parameter list to set default values or optional arguments
Default arguments are optional parameters for a function i.e. you can call the function without these parameters
```python
def new_func(arg1, arg2, arg3=5):
result = arg1 + arg2 + arg3
return result
```
Here, arg3 is the optional argument because you've set it to a default value of 5. If you don't provide arg3 when you call this function, arg3 will assume a value of 5. If you don't provide arg1 or arg2, you'll get an error because they are required/positional arguments
Now imagine if someone were to call the `arithmatic_operations` function using string arguments - they'd get an error, because you can't perform arithmetic operations on a string. In that case, we want to be able to convert the input to a number. Let's introduce a keyword argument `convert` to handle such cases
```
#Defining the function
def new_arith(num1, num2, convert=False):
"""
A new function function that can handle even string arguments
"""
if convert!=False:
num1 = float(num1)
num2 = float(num2)
add = num1 + num2
mltply = add*num2
sbtrct = mltply - num2
divide = sbtrct/num2
result = round(divide)
return result
#Handles numbers as usual
#Function works fine even if we don't specify convert
new_arith(10, 5)
#Since we didn't specify convert, it's assumed to be False
#strings are not converted and we get an error
new_arith("10", "5")
new_arith("10", "5", convert=True)
```
### Scope <a class="anchor" id="scope"></a>
The variables in a program are not accessible by every part of the program. Based on accessibility, there are two types of variables - global variable and local variable.
Global variables are variables that can be accessed by any part of the program. Examples from this notebook would be `str1`, `str2`, `truth`, `l1` etc. These variables can be accessed by this entire notebook.
Local variables are variables that can only be accessed in certain parts of the program, e.g. variables defined inside a function. Examples from this notebook would be `mltply`, `sbtrct`, `add`, `convert`, `result` etc. These variables are only defined inside a function and can only be accessed by the respective functions
```
result
mltply
```
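To make the distinction concrete, here is a small added example (the variable and function names are made up for illustration):
```
greeting = "Hello"                         # global: defined at the notebook level

def make_message(name):
    message = greeting + ", " + name + "!" # 'greeting' is read from the global scope
    return message                         # 'message' is local to this function

print(make_message("DSA 5113"))
#print(message)   # uncommenting this would raise a NameError - 'message' only exists inside the function
```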
## Miscellaneous
### Dictionary <a class="anchor" id="dict"></a>
Dictionaries are another iterable data type that come in comma-separated, key-value pairs.
```
#Defining some dictionaries
dict1 = {} #One way to define an empty dictionary
dict2 = dict() #One way to define an empty dictionary or convert another data type into a dictionary
ou_mascots = {"Name": "Boomer", "Species": "Horse", "Partner": "Sooner", "Represents": "Oklahoma Sooners"}
dict3 = {1:"uno", 2:34, "three": [1,2,3], 4:(4,5), 5:ou_mascots}
ou_mascots
dict3 #Dictionary values can be of any type - string, number, lists, even dictionary
```
###### Accessing elements
```
ou_mascots["Name"]
ou_mascots.get("Partner")
```
###### Updating Dictionary
```
#Adding new element
dict1["new_element"] = 5113
dict1
#Deleting
del dict3[1] #removes the entry with key 1
dict1.clear() #removes all entries
del dict2 #deletes entire dictionary
dict3
dict1
dict2
```
###### Useful Dictionary Functions
```
ou_mascots.keys() #Returns keys
ou_mascots.items() #Returns key-value pairs as tuples
ou_mascots.values() #Returns values
ou_mascots.pop("Species") #removes given key and returns value
len(dict1)
```
### Tuples <a class="anchor" id="tuple"></a>
Tuples are another iterable and sequence data type. Almost everything discussed in the list section can be applied to tuples and they work in the same way - operations, functions etc.
```
#Defining some tuples
tup1 = (20,) #If your tuple has only one element, you still have to use a comma
tup2 = (1,3,4,6,7)
tup3 = ("a", "b", "c")
tup4 = (5,6,7)
#The key difference from lists: you can't change tuple items (the next line raises a TypeError)
tup2[3] = 4
#You can use tuples to define dictionaries
dict(zip(tup3, tup4))
```
### List Comprehension <a class="anchor" id="lc"></a>
List comprehension is a quick way to create a new list from an existing list (or any other iterable like tuples or dictionaries). The syntax is as follows -
```python
new_list = [(x+5) for x in existing_list]
```
The above one line code is the same as writing the following lengthy code block:
```python
new_list=[]
for x in existing_list:
value = x + 5
new_list.append(value)
```
```
print(l3)
#We need a new list where each number is double the corresponding even number in l3
#We already have a list of even numbers up to 48 - l3
#time to create the new list
l5 = [2*i for i in l3]
print(l5)
```
### Error Handling <a class="anchor" id="eh"></a>
Sometimes we might have a code block, especially in a loop or a function, that might not work for all kinds of values. In that case, error handling is something to consider in order to avoid errors and continue on with the rest of the program.
Errors can be handled in many ways depending on your needs but here I'm showing the `try .. except` method.
```
#inserting another string in l4
l4.insert(2, "a")
l4
#let's try running the arithmatic_operations functions on the elements of l4
for item in l4:
try:
        res = arithmatic_operations(item,5)
print("list item {}, result {}".format(item, res))
except:
print("Could not perform arithmatic operations for list item {}".format(item))
```
### Lambda Expression <a class="anchor" id="le"></a>
A quick way to define short anonymous functions - one liner functions.
Handy when you keep repeating an expression and it's too small to define a formal function.
```python
#Defining
x = lambda arg : expression
#calling
x(value)
```
This is equivalent to -
```python
#Defining
def x(arg):
result = expression
return result
#calling
x(value)
```
```
#small function with 1 argument
x = lambda a : a + 10
x(5)
#small function with multiple arguments
x = lambda a,b,c : ((a + 10)*b)/c
x(5,10,2)
```
### Mapping Function <a class="anchor" id="mf"></a>
The `map` function is a quick way to apply a function to many values using an iterable (lists, tuples etc). The function to apply can be a built-in function, a user-defined function or even a lambda expression. In fact, mapping and lambda expressions work really well together. The syntax is as follows:
```python
map(function_name, list_name)
```
The above one line code is equivalent to the lengthy code block below -
```python
for item in list_name:
    function_name(item)
```
**applying the built-in `type` function to the dictionary values**
```
dict3
result = map(type, dict3.values())
list(result)
```
**applying the user-defined `arithmatic_operations` function to two lists**
```
print(l1, l3)
result = map(arithmatic_operations, l1, l3) #mapped up to the shorter of the two lists
list(result)
```
**combining lambda expression and mapping function**
```
numbers1 = [1, 2, 3]
numbers2 = [4, 5, 6]
result = map(lambda x, y: x + y, numbers1, numbers2)
list(result)
```
### User Input <a class="anchor" id="ui"></a>
Sometimes, it is necessary to take user input and you can do that in python using the `input` command.
The `input` command returns the user input as a string, so always remember to convert the input to the data type you need.
```python
input("Your customized prompt goes here")
```
```
inp = input("please input two integers separated by a comma")
inp
#let's apply the arithmatic_operations function to this user input
a,b = inp.split(",")
a
arithmatic_operations(int(a), int(b)) #Need to convert to integers since this one doesn't handle strings
new_arith(a,b, convert=True)
```
# Advanced Topics
### Numpy <a class="anchor" id="numpy"></a>
### Pandas <a class="anchor" id="pandas"></a>
### Plotting <a class="anchor" id="plot"></a>
### Debugging <a class="anchor" id="pdb"></a>
### Other Useful Libraries <a class="anchor" id="oul"></a>
# Web grabber for lists from Wikipedia
```
# Pastry list
import requests
from bs4 import BeautifulSoup
# the list needs a last item, because otherwise further lists below the intended one would be read as well.
def grab_list(url, last_item): # when Wikipedia shows a table
grabbed_list = []
r = requests.get(url)
text = r.text
soup = BeautifulSoup(text, 'lxml')
soup.prettify()
matches = soup.find_all('tr')
for index, row in enumerate(matches):
try:
obj = row.find('td').a.get('title')
if obj.endswith(' (page does not exist)'): obj = obj.replace(' (page does not exist)', '')
grabbed_list.append(obj)
if obj == last_item:
break
except AttributeError:
continue
return grabbed_list
def grab_list2(url, last_item): # when Wikipedia shows a bullet-point list
grabbed_list = []
r = requests.get(url)
text = r.text
soup = BeautifulSoup(text, 'lxml')
soup.prettify()
matches = soup.find_all('li')
for index, row in enumerate(matches):
try:
obj = row.a.get('title')
if obj.endswith(' (page does not exist)'): obj = obj.replace(' (page does not exist)', '')
grabbed_list.append(obj)
if obj == last_item:
break
except AttributeError:
continue
return grabbed_list
url_gebaeck = r'https://en.wikipedia.org/wiki/List_of_pastries'
gebaeckliste = grab_list(url_gebaeck, 'Zlebia')
print(gebaeckliste)
# German desserts
url_deutschedesserts = r'https://en.wikipedia.org/wiki/List_of_German_desserts'
germanpastrylist = grab_list(url_deutschedesserts, 'Zwetschgenkuchen')
print(germanpastrylist)
# Dairy products
url_dairy = r'https://en.wikipedia.org/wiki/List_of_dairy_products'
dairyproductlist = grab_list(url_dairy, 'Yogurt')
print(dairyproductlist)
# Cheeses
url_cheese = r'https://en.wikipedia.org/wiki/List_of_cheeses'
cheeselist = grab_list(url_cheese, 'Rice cheese')
print(cheeselist)
url_fruit = r'https://en.wikipedia.org/wiki/List_of_culinary_fruits'
fruits = grab_list(url_fruit, 'Yantok')
print(fruits)
url_vegetables = r'https://en.wikipedia.org/wiki/List_of_vegetables'
vegetables = grab_list(url_vegetables, 'Wakame')
print(vegetables)
url_seafood = r'https://en.wikipedia.org/wiki/List_of_types_of_seafood'
seafood = grab_list2(url_seafood, 'Nautilus')
print(seafood)
url_seafood = r'https://en.wikipedia.org/wiki/List_of_seafood_dishes'
seafood = grab_list2(url_seafood, 'Cuttlefish')
print(seafood)
```
# Introduction
Now that I have removed the RNA/DNA node and we have fixed many pathways, I will re-visit the things that were raised in issue #37: 'Reaction reversibility'. There were reactions that we couldn't reverse or remove or they would kill the biomass. I will try to see if these problems have been resolved now. If not, I will dig into the underlying cause in a manner similar to what was done in notebook 20.
```
import cameo
import pandas as pd
import cobra.io
import escher
from escher import Builder
from cobra import Reaction
model = cobra.io.read_sbml_model('../model/p-thermo.xml')
model_e_coli = cameo.load_model('iML1515')
model_b_sub = cameo.load_model('iYO844')
```
__ALDD2x__
should be irreversible, but doing so kills the biomass growth completely at this moment. It needs to be changed as we right now have an erroneous energy generating cycle going from aad_c --> ac_c (+atp) --> acald --> accoa_c -->aad_c.
Apparently, I had already unconsciously fixed this problem in notebook 20. So this is fine now.
__GLYO1__ This reaction has already been removed in notebook 20 to fix the glycine pathway.
__DHORDfum__ Has been renamed to DHORD6 in notebook 20 in the first check of fixing dCMP. And the reversibility has been fixed too.
__OMPDC__ This has by chance also been fixed in notebook 20 in the first pass to fix dCMP biosynthesis.
__NADK__ The reaction is currently reversible, but should be irreversible, producing nadp and adp.
Still, when I try to fix the flux in the direction it should be, it kills the biomass production. I will try to figure out why, likely it has to do with co-factor balance.
```
model.reactions.NADK.bounds = (0,1000)
model.reactions.ALAD_L.bounds = (-1000,0)
model.optimize().objective_value
cofactors = ['nad_c', 'nadh_c','', '', '', '']
with model:
# model.add_boundary(model.metabolites.glc__D_c, type = 'sink', reaction_id = 'test')
# model.add_boundary(model.metabolites.r5p_c , type = 'sink', reaction_id = 'test2')
# model.add_boundary(model.metabolites.hco3_c, type = 'sink', reaction_id = 'test3')
for met in model.reactions.biomass.metabolites:
if met.id in cofactors:
coeff = model.reactions.biomass.metabolites[met]
model.reactions.biomass.add_metabolites({met:-coeff})
else:
continue
solution = model.optimize()
#print (model.metabolites.glu__D_c.summary())
#print ('test flux:', solution['test'])
#print ('test2 flux:', solution['test2'])
print (solution.objective_value)
```
It seems that NAD and NADH are the blocked metabolites for biomass generation. Now let's try to figure out where this problem lies.
I think the problem lies in regenerating NAD. The model uses this reaction together with other strange reactions to regenerate NAD, where normally under oxygen-containing conditions I would expect respiration to do this. So let me see how the B. subtilis and E. coli models do this and see if maybe some form of ETC is missing in our model. This would explain why adding the ATP synthase didn't influence our biomass prediction at all.
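As a quick exploratory sketch (not part of the original analysis; it assumes both reference models use the BiGG metabolite id `nadh_c` and have reaction names set), one can list the NADH-linked reactions in the reference models that look like ETC components:
```
# List reactions touching cytosolic NADH whose name mentions a quinone or a
# dehydrogenase, to see how the reference models regenerate NAD via the ETC.
for ref_model in [model_e_coli, model_b_sub]:
    nadh = ref_model.metabolites.get_by_id('nadh_c')
    etc_like = [r.id for r in nadh.reactions
                if 'quinone' in (r.name or '').lower() or 'dehydrogenase' in (r.name or '').lower()]
    print(ref_model.id, etc_like)
```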
__Flavin reductase__
In E. coli we observed that there is a flavin reductase in the genome, contributing to NAD regeneration. We've looked into the genome annotation for our strain, and have found that there is a flavin reductase annotated there as well (https://www.genome.jp/dbget-bin/www_bget?ptl:AOT13_02085), but not in bacillus (fitting the model). Therefore, I will add this reaction into our model, named FADRx.
__NADH dehydrogenase__
The NADH dehydrogenase, transferring reducing equivalents from NADH to quinone, is the first part of the electron transport chain. The quinones can then transfer the electrons onwards to pump out protons, which allows ATP synthase to generate additional energy. In iML1515 this reaction is captured by NADH16pp, NADH17pp and NADH18pp. In B. subtilis, NADH4 reflects this reaction. In our model, we don't currently have anything that resembles this reaction. However, in Beata's thesis (and the genome) we can find EC 1.6.5.3, which performs a reaction similar to NADH16pp. Therefore, I will add this reaction into our model.
In our model, we also have the reactions QH2OR and NADHQOR, which somewhat resemble the NADHDH reaction. However, they either lack proton translocation or are reversible. To prevent these reactions from forming a cycle and having incorrect duplicate reactions in the model, I will remove them.
__CYOR__
The last 'step' in the model's electron transport chain is the transfer of electrons from the quinone to oxygen, pumping protons out of the cell. E. coli has a CYTBO3_4pp reaction that represents this, performed by a cytochrome oxidase. Our model doesn't have this reaction, but from Beata's thesis and the genome annotation one would expect it to be present. We found the reaction in a way similar to the E. coli model. Therefore I will add the CYTBO3 reaction to our model, as indicated in Beata's thesis.
```
model.add_reaction(Reaction(id='FADRx'))
model.reactions.FADRx.name = 'Flavin reductase'
model.reactions.FADRx.annotation = model_e_coli.reactions.FADRx.annotation
model.reactions.FADRx.add_metabolites({
model.metabolites.fad_c:-1,
model.metabolites.h_c: -1,
model.metabolites.nadh_c:-1,
model.metabolites.fadh2_c:1,
model.metabolites.nad_c:1
})
#add NADH dehydrogenase reaction
model.add_reaction(Reaction(id='NADHDH'))
model.reactions.NADHDH.name = 'NADH Dehydrogenase (ubiquinone & 3.5 protons)'
model.reactions.NADHDH.annotation['ec-code'] = '1.6.5.3'
model.reactions.NADHDH.annotation['kegg.reaction'] = 'R11945'
model.reactions.NADHDH.add_metabolites({
model.metabolites.nadh_c:-1, model.metabolites.h_c: -4.5, model.metabolites.ubiquin_c:-1,
model.metabolites.nad_c: 1, model.metabolites.h_e: 3.5, model.metabolites.qh2_c: 1
})
model.remove_reactions(model.reactions.NADHQOR)
model.remove_reactions(model.reactions.QH2OR)
model.add_reaction(Reaction(id='CYTBO3'))
model.reactions.CYTBO3.name = 'Cytochrome oxidase bo3 (ubiquinol: 2.5 protons)'
model.reactions.CYTBO3.add_metabolites({
model.metabolites.o2_c:-0.5, model.metabolites.h_c: -2.5, model.metabolites.qh2_c:-1,
model.metabolites.h2o_c:1, model.metabolites.h_e: 2.5, model.metabolites.ubiquin_c:1
})
```
In looking at the above, I also observed some other reactions that probably should not looked at and modified.
```
model.reactions.MALQOR.id = 'MDH2'
model.reactions.MDH2.bounds = (0,1000)
model.metabolites.hexcoa_c.id = 'hxcoa_c'
model.reactions.HEXOT.id = 'ACOAD2f'
model.metabolites.dccoa_c.id = 'dcacoa_c'
model.reactions.DECOT.id = 'ACOAD4f'
#in the wrong direction and id
model.reactions.GLYCDH_1.id = 'HPYRRx'
model.reactions.HPYRRx.bounds = (-1000,0)
#in the wrong direction
model.reactions.FMNRx.bounds = (-1000,0)
model.metabolites.get_by_id('3hbycoa_c').id = '3hbcoa_c'
```
Even with the changes above we still do not restore growth... Supplying nmn_c restores growth, but supplying aspartate (the beginning of the pathway) doesn't solve the problem. So maybe the problem lies more with the NAD biosynthesis pathway than with the regeneration.
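The rescue tests mentioned above can be reproduced with temporary sink reactions, similar to the `model.add_boundary` calls used earlier (a sketch; the metabolite ids `nmn_c` and `asp__L_c` are assumed to be present in the model):
```
# Temporarily add a reversible sink so the metabolite can be supplied, then re-optimize.
for met_id in ['nmn_c', 'asp__L_c']:
    with model:
        model.add_boundary(model.metabolites.get_by_id(met_id), type='sink')
        print(met_id, model.optimize().objective_value)
```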
```
model.metabolites.nicrnt_c.name = 'Nicotinate ribonucleotide'
model.metabolites.ncam_c.name = 'Niacinamide'
#wrong direction
model.reactions.QULNS.bounds = (-1000,0)
#this rescued biomass accumulation!
#connected to aspartate
model.optimize().objective_value
#save&commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
```
Flux is carried through the
Still, it is strange that flux is not carried through the ETC but is carried through the ATP synthase, as one would expect in the presence of oxygen. Therefore I will investigate where the extracellular protons come from.
It seems all the extracellular protons come from the export of phosphate (pi_c), which is proton-symport coupled. We are producing a lot of phosphate from the biomass reaction. Though in theory, phosphate should not be produced in such amounts, as it is also used for the generation of ATP from ADP. Right now I don't really see how to solve this problem. I've made an issue of it and will look into it at another time.
```
model.optimize()['ATPS4r']
model.metabolites.pi_c.summary()
```
I also noticed that now most ATP comes from dGTP. The production of dGDP should just play a role in supplying nucleotides for biomass, so the flux it carries should be low. I will check where the majority of the dGTP comes from.
What is happening is the following: dgtp is converted to dgdp and atp (reaction ATDGDm). The dgdp then reacts with pep to form dGTP again. Pep formation is somewhat energy-neutral, but it is weird that the metabolism decides to do this instead of channeling the pep into pyruvate via normal glycolysis and into the TCA cycle.
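As a quick check (an added note; the BiGG id `dgtp_c` is assumed), this cycle can be seen in the flux summary of dGTP itself:
```
model.metabolites.dgtp_c.summary()
```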
```
#reaction to be removed
model.remove_reactions(model.reactions.PYRPT)
```
Removing this reaction triggers normal ATP production via the ETC and ATP synthase again. So this may be solved now.
```
model.metabolites.pi_c.summary()
#save & commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
```
# Gradient boosting by hand
**Note:** the assignment text has changed - the number of trees is now 50, the step-size rule in task 3 has changed, and a `random_state` parameter was added to the decision tree. The correct answers have not changed, but they are now easier to obtain. A typo in the `gbm_predict` function has also been fixed.
This assignment uses the `boston` dataset from `sklearn.datasets`. Hold out the last 25% of the objects for quality control by splitting `X` and `y` into `X_train`, `y_train` and `X_test`, `y_test`.
The goal of the assignment is to implement a simple version of gradient boosting over regression trees for the case of a squared loss function.
```
from sklearn import datasets, model_selection
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error
import numpy as np
boston = datasets.load_boston()
X_train, X_test = boston.data[: 380, :], boston.data[381 :, :]
y_train, y_test = boston.target[: 380], boston.target[381 :]
```
## Task 1
As you already know from the lectures, **boosting** is a method of building compositions of base algorithms by sequentially adding a new algorithm to the current composition with some coefficient.
Gradient boosting trains each new algorithm so that it approximates the antigradient of the error with respect to the composition's predictions on the training set. Analogously to minimizing functions with gradient descent, in gradient boosting we adjust the composition by changing the algorithm in the direction of the antigradient of the error.
Use the formula from the lectures that defines the training-set targets on which the new algorithm should be trained (in essence it is just the gradient of the error written out in a bit more detail), and derive its special case when the loss function `L` is the squared deviation of the composition's prediction `a(x)` from the correct answer `y` for a given `x`.
If you haven't computed a derivative by hand in a while, a table of derivatives of elementary functions (easy to find online) and the chain rule will help. After differentiating the square you will get a factor of 2; since we will be choosing the coefficient with which the new base algorithm is added anyway, ignore this factor in the rest of the algorithm.
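Spelling the derivation out (a short note added for clarity, matching the `accent_l` helper defined below): for the squared loss

$$L\big(y_i, a(x_i)\big) = \big(y_i - a(x_i)\big)^2, \qquad \frac{\partial L}{\partial a(x_i)} = -2\,\big(y_i - a(x_i)\big),$$

so the antigradient is $2\,\big(y_i - a(x_i)\big)$; dropping the constant factor of 2, the new base algorithm is trained on the residuals $s_i = y_i - a(x_i)$.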
```
def accent_l(z, y):
'''result = list()
for i in range(0, len(y)):
result.append(-(y[i] - z[i]))
'''
return -1.0*(z - y)
```
## Task 2
Create an array for `DecisionTreeRegressor` objects (we will use them as base algorithms) and one for real numbers (these will be the coefficients in front of the base algorithms).
In a loop, sequentially train 50 decision trees with the parameters `max_depth=5` and `random_state=42` (leave the remaining parameters at their defaults). Boosting often uses hundreds or thousands of trees, but we limit ourselves to 50 so that the algorithm runs faster and is easier to debug (since the goal of the task is to understand how the method works). Each tree must be trained on the same set of objects, but the targets that the tree learns to predict change according to the rule obtained in task 1.
To start with, always take the coefficient equal to 0.9. Usually it is justified to choose a much smaller coefficient - around 0.05 or 0.1 - but since our toy example on a standard dataset uses only 50 trees, we take a larger step for now.
While implementing the training you will need a function that computes the prediction of the tree composition built so far on the sample `X`:
```
def gbm_predict(X):
return [sum([coeff * algo.predict([x])[0] for algo, coeff in zip(base_algorithms_list, coefficients_list)]) for x in X]
# (we assume that base_algorithms_list is the list of base algorithms and coefficients_list is the list of coefficients in front of the algorithms)
```
The same function will help you obtain predictions on the hold-out set and evaluate the quality of your algorithm with `mean_squared_error` from `sklearn.metrics`.
Raise the result to the power 0.5 to obtain the `RMSE`. The resulting `RMSE` value is the **answer for part 2**.
```
base_algorithms_list = list()
coefficients_list = list()
algorithm = DecisionTreeRegressor(max_depth=5, random_state=42)
def gbm_predict(X):
return [sum([coeff * algo.predict([x])[0] for algo, coeff in zip(base_algorithms_list, coefficients_list)]) for x in X]
base_algorithms_list = list()
coefficients_list = list()
b_0 = algorithm.fit(X_train, y_train)
base_algorithms_list.append(b_0)
coefficients_list.append(0.9)
for i in range(1, 50):
algorithm_i = DecisionTreeRegressor(max_depth=5, random_state=42)
s_i = accent_l(gbm_predict(X_train), y_train)
b_i = algorithm_i.fit(X_train, s_i)
base_algorithms_list.append(b_i)
coefficients_list.append(0.9)
print(mean_squared_error(y_test, gbm_predict(X_test))**0.5)
```
## Task 3
You may also be concerned that, moving with a constant step, the predictions on the training set change too abruptly near the error minimum and overshoot it.
Try decreasing the weight in front of each algorithm with every subsequent iteration according to the formula `0.9 / (1.0 + i)`, where `i` is the iteration number (from 0 to 49). Use the resulting quality of the algorithm as the **answer for part 3**.
In practice the following step-selection strategy is often used: once an algorithm has been chosen, tune the coefficient in front of it with a numerical optimization method so that the deviation from the correct answers is minimal. We will not ask you to implement this for the assignment, but we recommend studying this strategy and implementing it for yourself when you get the chance; a sketch is given below.
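As an illustration of that strategy (an added sketch, not required by the assignment), the coefficient for a freshly fitted tree can be chosen by a one-dimensional numerical search; `scipy.optimize.minimize_scalar` is used here, and `gbm_predict` is the function defined above.
```
from scipy.optimize import minimize_scalar

def best_coefficient(new_tree, X, y):
    # Prediction of the composition built so far and of the new base learner.
    current = np.array(gbm_predict(X))
    correction = new_tree.predict(X)
    # Squared error as a function of the coefficient in front of the new tree.
    loss = lambda gamma: np.sum((y - (current + gamma * correction)) ** 2)
    return minimize_scalar(loss, bounds=(0.0, 1.0), method='bounded').x
```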
```
base_algorithms_list = list()
coefficients_list = list()
b_0 = algorithm.fit(X_train, y_train)
base_algorithms_list.append(b_0)
coefficients_list.append(0.9)
for i in range(1, 50):
algorithm_i = DecisionTreeRegressor(max_depth=5, random_state=42)
s_i = accent_l(gbm_predict(X_train), y_train)
b_i = algorithm_i.fit(X_train, s_i)
base_algorithms_list.append(b_i)
coefficients_list.append(0.9/(1.0+i))
#coefficients_list.append(0.05)
print(mean_squared_error(y_test, gbm_predict(X_test))**0.5)
```
## Task 4
The method you have implemented - gradient boosting over trees - is very popular in machine learning. It is available both in the `sklearn` library itself and in the third-party `XGBoost` library, which has its own Python interface. In practice `XGBoost` works noticeably better than `GradientBoostingRegressor` from `sklearn`, but for this assignment you may use either implementation.
Investigate whether gradient boosting overfits as the number of iterations grows (and think about why), and also as the tree depth grows. Based on your observations, write out the numbers of the correct statements from the list below, separated by spaces and in increasing order (this will be the **answer for part 4**):
1. As the number of trees increases, from some point on the quality of gradient boosting does not change substantially.
2. As the number of trees increases, from some point on gradient boosting starts to overfit.
3. As the tree depth grows, from some point on the quality of gradient boosting on the test set starts to deteriorate.
4. As the tree depth grows, from some point on the quality of gradient boosting stops changing substantially.
```
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingRegressor
%pylab inline
n_trees = [1] + list(range(10, 105, 5))
X = boston.data
y = boston.target
estimator = GradientBoostingRegressor(learning_rate=0.1, max_depth=5, n_estimators=100)
estimator.fit(X_train, y_train)
print(mean_squared_error(y_test, estimator.predict(X_test))**0.5)
estimator = XGBClassifier(learning_rate=0.25, max_depth=5, n_estimators=50, min_child_weight=3)
estimator.fit(X_train, y_train)
print(mean_squared_error(y_test, estimator.predict(X_test))**0.5)
%%time
xgb_scoring = []
for n_tree in n_trees:
estimator = XGBClassifier(learning_rate=0.1, max_depth=5, n_estimators=n_tree, min_child_weight=3)
estimator.fit(X_train, y_train)
#estimator = GradientBoostingRegressor(learning_rate=0.25, max_depth=5, n_estimators=n_tree)
#score = cross_val_score(estimator, X, y, scoring = 'accuracy', cv = 3)
score = mean_squared_error(y_test, estimator.predict(X_test))**0.5
xgb_scoring.append(score)
xgb_scoring = np.asmatrix(xgb_scoring)
print(xgb_scoring.reshape(xgb_scoring.shape[1]))
pylab.plot(n_trees, xgb_scoring.reshape(20, 1), marker='.', label='XGBoost')
pylab.grid(True)
pylab.xlabel('n_trees')
pylab.ylabel('RMSE')
pylab.title('RMSE vs number of trees')
pylab.legend(loc='lower right')
%%time
xgb_scoring = []
depths = range(1, 21)
for depth in depths:
estimator = XGBClassifier(learning_rate=0.1, max_depth=depth, n_estimators=50, min_child_weight=3)
estimator.fit(X_train, y_train)
#estimator = GradientBoostingRegressor(learning_rate=0.25, max_depth=5, n_estimators=n_tree)
#score = cross_val_score(estimator, X, y, scoring = 'accuracy', cv = 3)
score = mean_squared_error(y_test, estimator.predict(X_test))**0.5
xgb_scoring.append(score)
xgb_scoring = np.asmatrix(xgb_scoring)
pylab.plot(list(depths), xgb_scoring.reshape(20, 1), marker='.', label='XGBoost')
pylab.grid(True)
pylab.xlabel('max_depth')
pylab.ylabel('RMSE')
pylab.title('RMSE vs tree depth')
pylab.legend(loc='lower right')
```
## Task 5
Compare the quality obtained with gradient boosting to the quality of linear regression.
To do this, train `LinearRegression` from `sklearn.linear_model` (with default parameters) on the training set and evaluate the `RMSE` of its predictions on the test set. The resulting quality is the answer for **part 5**.
In this example the simple model should turn out worse, but keep in mind that this is not always the case. In the assignments for this course you will also encounter an example of the opposite situation.
```
from sklearn.linear_model import LinearRegression
estimator = LinearRegression()
estimator.fit(X_train, y_train)
print(mean_squared_error(y_test, estimator.predict(X_test))**0.5)
```
# Modeling and Simulation in Python
Project 1 example
Copyright 2018 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim library
from modsim import *
from pandas import read_html
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
def plot_results(census, un, timeseries, title):
"""Plot the estimates and the model.
census: TimeSeries of population estimates
un: TimeSeries of population estimates
timeseries: TimeSeries of simulation results
title: string
"""
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
if len(timeseries):
plot(timeseries, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title=title)
un = table2.un / 1e9
census = table2.census / 1e9
empty = TimeSeries()
plot_results(census, un, empty, 'World population estimates')
half = get_first_value(census) / 2
init = State(young=half, old=half)
system = System(birth_rate1 = 1/18,
birth_rate2 = 1/26,
mature_rate = 1/40,
death_rate = 1/40,
t_0 = 1950,
t_end = 2016,
init=init)
def update_func1(state, t, system):
if t < 1970:
births = system.birth_rate1 * state.young
else:
births = system.birth_rate2 * state.young
maturings = system.mature_rate * state.young
deaths = system.death_rate * state.old
young = state.young + births - maturings
old = state.old + maturings - deaths
return State(young=young, old=old)
state = update_func1(init, system.t_0, system)
state = update_func1(state, system.t_0, system)
def run_simulation(system, update_func):
"""Simulate the system using any update function.
init: initial State object
system: System object
update_func: function that computes the population next year
returns: TimeSeries
"""
results = TimeSeries()
state = system.init
results[system.t_0] = state.young + state.old
for t in linrange(system.t_0, system.t_end):
state = update_func(state, t, system)
results[t+1] = state.young + state.old
return results
results = run_simulation(system, update_func1);
plot_results(census, un, results, 'World population estimates')
```
# Trial 2: classification with learned graph filters
We want to classify data by first extracting meaningful features from learned filters.
```
import time
import numpy as np
import scipy.sparse, scipy.sparse.linalg, scipy.spatial.distance
from sklearn import datasets, linear_model
import matplotlib.pyplot as plt
%matplotlib inline
import os
import sys
sys.path.append('..')
from lib import graph
```
# Parameters
# Dataset
* Two digits version of MNIST with N samples of each class.
* Distinguishing 4 from 9 is the hardest.
```
def mnist(a, b, N):
"""Prepare data for binary classification of MNIST."""
folder = os.path.join('..', 'data')
mnist = datasets.fetch_mldata('MNIST original', data_home=folder)
assert N < min(sum(mnist.target==a), sum(mnist.target==b))
M = mnist.data.shape[1]
X = np.empty((M, 2, N))
X[:,0,:] = mnist.data[mnist.target==a,:][:N,:].T
X[:,1,:] = mnist.data[mnist.target==b,:][:N,:].T
y = np.empty((2, N))
y[0,:] = -1
y[1,:] = +1
X.shape = M, 2*N
y.shape = 2*N, 1
return X, y
X, y = mnist(4, 9, 1000)
print('Dimensionality: N={} samples, M={} features'.format(X.shape[1], X.shape[0]))
X -= 127.5
print('X in [{}, {}]'.format(np.min(X), np.max(X)))
def plot_digit(nn):
M, N = X.shape
m = int(np.sqrt(M))
fig, axes = plt.subplots(1,len(nn), figsize=(15,5))
for i, n in enumerate(nn):
n = int(n)
img = X[:,n]
axes[i].imshow(img.reshape((m,m)))
axes[i].set_title('Label: y = {:.0f}'.format(y[n,0]))
plot_digit([0, 1, 1e2, 1e2+1, 1e3, 1e3+1])
```
# Regularized least-square
## Reference: sklearn ridge regression
* With (approximately) centered data, the objective is essentially the same with or without a bias term.
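A short justification (added note): for a fixed $w$, the optimal unregularized intercept of the ridge objective $\|X^\top w + b\mathbf{1} - y\|^2 + \tau_R \|w\|^2$ is

$$ b^\star = \frac{1}{N}\sum_{n=1}^{N}\big(y_n - w^\top x_n\big), $$

which is close to zero once the features and labels are (approximately) centered; this is what the `bias` printout in the cell below checks.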
```
def test_sklearn(tauR):
def L(w, b=0):
return np.linalg.norm(X.T @ w + b - y)**2 + tauR * np.linalg.norm(w)**2
def dL(w):
return 2 * X @ (X.T @ w - y) + 2 * tauR * w
clf = linear_model.Ridge(alpha=tauR, fit_intercept=False)
clf.fit(X.T, y)
w = clf.coef_.T
print('L = {}'.format(L(w, clf.intercept_)))
print('|dLw| = {}'.format(np.linalg.norm(dL(w))))
# Normalized data: intercept should be small.
print('bias: {}'.format(abs(np.mean(y - X.T @ w))))
test_sklearn(1e-3)
```
## Linear classifier
```
def test_optim(clf, X, y, ax=None):
"""Test optimization on full dataset."""
tstart = time.process_time()
ret = clf.fit(X, y)
print('Processing time: {}'.format(time.process_time()-tstart))
print('L = {}'.format(clf.L(*ret, y)))
if hasattr(clf, 'dLc'):
print('|dLc| = {}'.format(np.linalg.norm(clf.dLc(*ret, y))))
if hasattr(clf, 'dLw'):
print('|dLw| = {}'.format(np.linalg.norm(clf.dLw(*ret, y))))
if hasattr(clf, 'loss'):
if not ax:
fig = plt.figure()
ax = fig.add_subplot(111)
ax.semilogy(clf.loss)
ax.set_title('Convergence')
ax.set_xlabel('Iteration number')
ax.set_ylabel('Loss')
if hasattr(clf, 'Lsplit'):
print('Lsplit = {}'.format(clf.Lsplit(*ret, y)))
print('|dLz| = {}'.format(np.linalg.norm(clf.dLz(*ret, y))))
ax.semilogy(clf.loss_split)
class rls:
def __init__(s, tauR, algo='solve'):
s.tauR = tauR
        if algo == 'solve':
            s.fit = s.solve
        elif algo == 'inv':
s.fit = s.inv
def L(s, X, y):
return np.linalg.norm(X.T @ s.w - y)**2 + s.tauR * np.linalg.norm(s.w)**2
def dLw(s, X, y):
return 2 * X @ (X.T @ s.w - y) + 2 * s.tauR * s.w
def inv(s, X, y):
s.w = np.linalg.inv(X @ X.T + s.tauR * np.identity(X.shape[0])) @ X @ y
return (X,)
def solve(s, X, y):
s.w = np.linalg.solve(X @ X.T + s.tauR * np.identity(X.shape[0]), X @ y)
return (X,)
def predict(s, X):
return X.T @ s.w
test_optim(rls(1e-3, 'solve'), X, y)
test_optim(rls(1e-3, 'inv'), X, y)
```
# Feature graph
```
t_start = time.process_time()
z = graph.grid(int(np.sqrt(X.shape[0])))
dist, idx = graph.distance_sklearn_metrics(z, k=4)
A = graph.adjacency(dist, idx)
L = graph.laplacian(A, True)
lmax = graph.lmax(L)
print('Execution time: {:.2f}s'.format(time.process_time() - t_start))
```
# Lanczos basis
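For reference (an added note), the `lanczos` function below builds, for each data column $x$, an orthonormal basis $V = [v_1, \dots, v_K]$ of the Krylov subspace $\mathcal{K}_K(L, x)$ with the standard three-term recurrence, starting from $v_1 = x / \|x\|$:

$$ \alpha_k = v_k^\top L v_k, \qquad \tilde{v} = L v_k - \alpha_k v_k - \beta_k v_{k-1}, \qquad \beta_{k+1} = \|\tilde{v}\|, \qquad v_{k+1} = \tilde{v} / \beta_{k+1}. $$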
```
def lanczos(L, X, K):
M, N = X.shape
a = np.empty((K, N))
b = np.zeros((K, N))
V = np.empty((K, M, N))
V[0,...] = X / np.linalg.norm(X, axis=0)
for k in range(K-1):
W = L.dot(V[k,...])
a[k,:] = np.sum(W * V[k,...], axis=0)
W = W - a[k,:] * V[k,...] - (b[k,:] * V[k-1,...] if k>0 else 0)
b[k+1,:] = np.linalg.norm(W, axis=0)
V[k+1,...] = W / b[k+1,:]
a[K-1,:] = np.sum(L.dot(V[K-1,...]) * V[K-1,...], axis=0)
return V, a, b
def lanczos_H_diag(a, b):
K, N = a.shape
H = np.zeros((K*K, N))
H[:K**2:K+1, :] = a
H[1:(K-1)*K:K+1, :] = b[1:,:]
H.shape = (K, K, N)
Q = np.linalg.eigh(H.T, UPLO='L')[1]
Q = np.swapaxes(Q,1,2).T
return Q
def lanczos_basis_eval(L, X, K):
V, a, b = lanczos(L, X, K)
Q = lanczos_H_diag(a, b)
M, N = X.shape
Xt = np.empty((K, M, N))
for n in range(N):
Xt[...,n] = Q[...,n].T @ V[...,n]
Xt *= Q[0,:,np.newaxis,:]
Xt *= np.linalg.norm(X, axis=0)
return Xt, Q[0,...]
```
# Tests
* Memory arrangement for fastest computations: largest dimensions on the outside, i.e. fastest varying indices.
* The einsum seems to be efficient for three operands.
```
def test():
"""Test the speed of filtering and weighting."""
def mult(impl=3):
if impl is 0:
Xb = Xt.view()
Xb.shape = (K, M*N)
XCb = Xb.T @ C # in MN x F
XCb = XCb.T.reshape((F*M, N))
return (XCb.T @ w).squeeze()
elif impl is 1:
tmp = np.tensordot(Xt, C, (0,0))
return np.tensordot(tmp, W, ((0,2),(1,0)))
elif impl is 2:
tmp = np.tensordot(Xt, C, (0,0))
return np.einsum('ijk,ki->j', tmp, W)
elif impl is 3:
return np.einsum('kmn,fm,kf->n', Xt, W, C)
C = np.random.normal(0,1,(K,F))
W = np.random.normal(0,1,(F,M))
w = W.reshape((F*M, 1))
a = mult(impl=0)
for impl in range(4):
tstart = time.process_time()
for k in range(1000):
b = mult(impl)
print('Execution time (impl={}): {}'.format(impl, time.process_time() - tstart))
np.testing.assert_allclose(a, b)
#test()
```
# GFL classification without weights
* The matrix is singular thus not invertible.
```
class gflc_noweights:
def __init__(s, F, K, niter, algo='direct'):
"""Model hyper-parameters"""
s.F = F
s.K = K
s.niter = niter
        if algo == 'direct':
            s.fit = s.direct
        elif algo == 'sgd':
s.fit = s.sgd
def L(s, Xt, y):
#tmp = np.einsum('kmn,kf,fm->n', Xt, s.C, np.ones((s.F,M))) - y.squeeze()
#tmp = np.einsum('kmn,kf->mnf', Xt, s.C).sum((0,2)) - y.squeeze()
#tmp = (C.T @ Xt.reshape((K,M*N))).reshape((F,M,N)).sum((0,2)) - y.squeeze()
tmp = np.tensordot(s.C, Xt, (0,0)).sum((0,1)) - y.squeeze()
return np.linalg.norm(tmp)**2
def dLc(s, Xt, y):
tmp = np.tensordot(s.C, Xt, (0,0)).sum(axis=(0,1)) - y.squeeze()
return np.dot(Xt, tmp).sum(1)[:,np.newaxis].repeat(s.F,1)
#return np.einsum('kmn,n->km', Xt, tmp).sum(1)[:,np.newaxis].repeat(s.F,1)
def sgd(s, X, y):
Xt, q = lanczos_basis_eval(L, X, s.K)
s.C = np.random.normal(0, 1, (s.K, s.F))
s.loss = [s.L(Xt, y)]
for t in range(s.niter):
s.C -= 1e-13 * s.dLc(Xt, y)
s.loss.append(s.L(Xt, y))
return (Xt,)
def direct(s, X, y):
M, N = X.shape
Xt, q = lanczos_basis_eval(L, X, s.K)
s.C = np.random.normal(0, 1, (s.K, s.F))
W = np.ones((s.F, M))
c = s.C.reshape((s.K*s.F, 1))
s.loss = [s.L(Xt, y)]
Xw = np.einsum('kmn,fm->kfn', Xt, W)
#Xw = np.tensordot(Xt, W, (1,1))
Xw.shape = (s.K*s.F, N)
#np.linalg.inv(Xw @ Xw.T)
c[:] = np.linalg.solve(Xw @ Xw.T, Xw @ y)
s.loss.append(s.L(Xt, y))
return (Xt,)
#test_optim(gflc_noweights(1, 4, 100, 'sgd'), X, y)
#test_optim(gflc_noweights(1, 4, 0, 'direct'), X, y)
```
# GFL classification with weights
```
class gflc_weights():
def __init__(s, F, K, tauR, niter, algo='direct'):
"""Model hyper-parameters"""
s.F = F
s.K = K
s.tauR = tauR
s.niter = niter
        if algo == 'direct':
            s.fit = s.direct
        elif algo == 'sgd':
s.fit = s.sgd
def L(s, Xt, y):
tmp = np.einsum('kmn,kf,fm->n', Xt, s.C, s.W) - y.squeeze()
return np.linalg.norm(tmp)**2 + s.tauR * np.linalg.norm(s.W)**2
def dLw(s, Xt, y):
tmp = np.einsum('kmn,kf,fm->n', Xt, s.C, s.W) - y.squeeze()
return 2 * np.einsum('kmn,kf,n->fm', Xt, s.C, tmp) + 2 * s.tauR * s.W
def dLc(s, Xt, y):
tmp = np.einsum('kmn,kf,fm->n', Xt, s.C, s.W) - y.squeeze()
return 2 * np.einsum('kmn,n,fm->kf', Xt, tmp, s.W)
def sgd(s, X, y):
M, N = X.shape
Xt, q = lanczos_basis_eval(L, X, s.K)
s.C = np.random.normal(0, 1, (s.K, s.F))
s.W = np.random.normal(0, 1, (s.F, M))
s.loss = [s.L(Xt, y)]
for t in range(s.niter):
s.C -= 1e-12 * s.dLc(Xt, y)
s.W -= 1e-12 * s.dLw(Xt, y)
s.loss.append(s.L(Xt, y))
return (Xt,)
def direct(s, X, y):
M, N = X.shape
Xt, q = lanczos_basis_eval(L, X, s.K)
s.C = np.random.normal(0, 1, (s.K, s.F))
s.W = np.random.normal(0, 1, (s.F, M))
#c = s.C.reshape((s.K*s.F, 1))
#w = s.W.reshape((s.F*M, 1))
c = s.C.view()
c.shape = (s.K*s.F, 1)
w = s.W.view()
w.shape = (s.F*M, 1)
s.loss = [s.L(Xt, y)]
for t in range(s.niter):
Xw = np.einsum('kmn,fm->kfn', Xt, s.W)
#Xw = np.tensordot(Xt, s.W, (1,1))
Xw.shape = (s.K*s.F, N)
c[:] = np.linalg.solve(Xw @ Xw.T, Xw @ y)
Z = np.einsum('kmn,kf->fmn', Xt, s.C)
#Z = np.tensordot(Xt, s.C, (0,0))
#Z = s.C.T @ Xt.reshape((K,M*N))
Z.shape = (s.F*M, N)
w[:] = np.linalg.solve(Z @ Z.T + s.tauR * np.identity(s.F*M), Z @ y)
s.loss.append(s.L(Xt, y))
return (Xt,)
def predict(s, X):
Xt, q = lanczos_basis_eval(L, X, s.K)
return np.einsum('kmn,kf,fm->n', Xt, s.C, s.W)
#test_optim(gflc_weights(3, 4, 1e-3, 50, 'sgd'), X, y)
clf_weights = gflc_weights(F=3, K=50, tauR=1e4, niter=5, algo='direct')
test_optim(clf_weights, X, y)
```
# GFL classification with splitting
Solvers
* Closed-form solution.
* Stochastic gradient descent.
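Reading off the `L` and `Lsplit` methods in the class below: the splitting scheme introduces an auxiliary variable $Z$ for the filtered data (`XCb` in the code) and minimizes

$$\|Z^\top w - y\|_2^2 + \tau_F \, \|X_C - Z\|_F^2 + \tau_R \, \|w\|_2^2,$$

while the un-split risk tracked alongside it is $\|X_C^\top w - y\|_2^2 + \tau_R \|w\|_2^2$. The direct solver alternates exact updates of $C$, $Z$ and $w$; the SGD variant takes a gradient step on each block per iteration.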
```
class gflc_split():
def __init__(s, F, K, tauR, tauF, niter, algo='direct'):
"""Model hyper-parameters"""
s.F = F
s.K = K
s.tauR = tauR
s.tauF = tauF
s.niter = niter
        if algo == 'direct':
            s.fit = s.direct
        elif algo == 'sgd':
s.fit = s.sgd
def L(s, Xt, XCb, Z, y):
return np.linalg.norm(XCb.T @ s.w - y)**2 + s.tauR * np.linalg.norm(s.w)**2
def Lsplit(s, Xt, XCb, Z, y):
return np.linalg.norm(Z.T @ s.w - y)**2 + s.tauF * np.linalg.norm(XCb - Z)**2 + s.tauR * np.linalg.norm(s.w)**2
def dLw(s, Xt, XCb, Z, y):
return 2 * Z @ (Z.T @ s.w - y) + 2 * s.tauR * s.w
def dLc(s, Xt, XCb, Z, y):
Xb = Xt.reshape((s.K, -1)).T
Zb = Z.reshape((s.F, -1)).T
return 2 * s.tauF * Xb.T @ (Xb @ s.C - Zb)
def dLz(s, Xt, XCb, Z, y):
return 2 * s.w @ (s.w.T @ Z - y.T) + 2 * s.tauF * (Z - XCb)
def lanczos_filter(s, Xt):
M, N = Xt.shape[1:]
Xb = Xt.reshape((s.K, M*N)).T
#XCb = np.tensordot(Xb, C, (2,1))
XCb = Xb @ s.C # in MN x F
XCb = XCb.T.reshape((s.F*M, N)) # Needs to copy data.
return XCb
def sgd(s, X, y):
M, N = X.shape
Xt, q = lanczos_basis_eval(L, X, s.K)
s.C = np.zeros((s.K, s.F))
s.w = np.zeros((s.F*M, 1))
Z = np.random.normal(0, 1, (s.F*M, N))
XCb = np.empty((s.F*M, N))
s.loss = [s.L(Xt, XCb, Z, y)]
s.loss_split = [s.Lsplit(Xt, XCb, Z, y)]
for t in range(s.niter):
s.C -= 1e-7 * s.dLc(Xt, XCb, Z, y)
XCb[:] = s.lanczos_filter(Xt)
Z -= 1e-4 * s.dLz(Xt, XCb, Z, y)
s.w -= 1e-4 * s.dLw(Xt, XCb, Z, y)
s.loss.append(s.L(Xt, XCb, Z, y))
s.loss_split.append(s.Lsplit(Xt, XCb, Z, y))
return Xt, XCb, Z
def direct(s, X, y):
M, N = X.shape
Xt, q = lanczos_basis_eval(L, X, s.K)
s.C = np.zeros((s.K, s.F))
s.w = np.zeros((s.F*M, 1))
Z = np.random.normal(0, 1, (s.F*M, N))
XCb = np.empty((s.F*M, N))
Xb = Xt.reshape((s.K, M*N)).T
Zb = Z.reshape((s.F, M*N)).T
s.loss = [s.L(Xt, XCb, Z, y)]
s.loss_split = [s.Lsplit(Xt, XCb, Z, y)]
for t in range(s.niter):
s.C[:] = Xb.T @ Zb / np.sum((np.linalg.norm(X, axis=0) * q)**2, axis=1)[:,np.newaxis]
XCb[:] = s.lanczos_filter(Xt)
#Z[:] = np.linalg.inv(s.tauF * np.identity(s.F*M) + s.w @ s.w.T) @ (s.tauF * XCb + s.w @ y.T)
Z[:] = np.linalg.solve(s.tauF * np.identity(s.F*M) + s.w @ s.w.T, s.tauF * XCb + s.w @ y.T)
#s.w[:] = np.linalg.inv(Z @ Z.T + s.tauR * np.identity(s.F*M)) @ Z @ y
s.w[:] = np.linalg.solve(Z @ Z.T + s.tauR * np.identity(s.F*M), Z @ y)
s.loss.append(s.L(Xt, XCb, Z, y))
s.loss_split.append(s.Lsplit(Xt, XCb, Z, y))
return Xt, XCb, Z
def predict(s, X):
Xt, q = lanczos_basis_eval(L, X, s.K)
XCb = s.lanczos_filter(Xt)
return XCb.T @ s.w
#test_optim(gflc_split(3, 4, 1e-3, 1e-3, 50, 'sgd'), X, y)
clf_split = gflc_split(3, 4, 1e4, 1e-3, 8, 'direct')
test_optim(clf_split, X, y)
```
# Filters visualization
Observations:
* Filters learned with the splitting scheme have much smaller amplitudes.
  * Maybe the energy sometimes goes into W? (A quick check follows below.)
* Why are the filters so different?
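A quick, informal way to probe the second observation (a sketch that reuses the `clf_weights` and `clf_split` instances fitted above, not part of the original analysis): compare the norms of the learned filter coefficients and node weights. If the splitting scheme pushes energy into the node weights, its `w` should be large while its `C` stays small.
```
# Rough check of where the energy sits (assumes clf_weights and clf_split
# were fitted in the cells above).
print('weights model: ||C|| = {:.2e}, ||W|| = {:.2e}'.format(
    np.linalg.norm(clf_weights.C), np.linalg.norm(clf_weights.W)))
print('split model:   ||C|| = {:.2e}, ||w|| = {:.2e}'.format(
    np.linalg.norm(clf_split.C), np.linalg.norm(clf_split.w)))
```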
```
lamb, U = graph.fourier(L)
print('Spectrum in [{:1.2e}, {:1.2e}]'.format(lamb[0], lamb[-1]))
def plot_filters(C, spectrum=False):
K, F = C.shape
M, M = L.shape
m = int(np.sqrt(M))
X = np.zeros((M,1))
X[int(m/2*(m+1))] = 1 # Kronecker
Xt, q = lanczos_basis_eval(L, X, K)
Z = np.einsum('kmn,kf->mnf', Xt, C)
Xh = U.T @ X
Zh = np.tensordot(U.T, Z, (1,0))
pmin = int(m/2) - K
pmax = int(m/2) + K + 1
fig, axes = plt.subplots(2,int(np.ceil(F/2)), figsize=(15,5))
for f in range(F):
img = Z[:,0,f].reshape((m,m))[pmin:pmax,pmin:pmax]
im = axes.flat[f].imshow(img, vmin=Z.min(), vmax=Z.max(), interpolation='none')
axes.flat[f].set_title('Filter {}'.format(f))
fig.subplots_adjust(right=0.8)
cax = fig.add_axes([0.82, 0.16, 0.02, 0.7])
fig.colorbar(im, cax=cax)
if spectrum:
ax = plt.figure(figsize=(15,5)).add_subplot(111)
for f in range(F):
ax.plot(lamb, Zh[...,f] / Xh, '.-', label='Filter {}'.format(f))
ax.legend(loc='best')
ax.set_title('Spectrum of learned filters')
ax.set_xlabel('Frequency')
ax.set_ylabel('Amplitude')
ax.set_xlim(0, lmax)
plot_filters(clf_weights.C, True)
plot_filters(clf_split.C, True)
```
# Extracted features
```
def plot_features(C, x):
K, F = C.shape
m = int(np.sqrt(x.shape[0]))
xt, q = lanczos_basis_eval(L, x, K)
Z = np.einsum('kmn,kf->mnf', xt, C)
fig, axes = plt.subplots(2,int(np.ceil(F/2)), figsize=(15,5))
for f in range(F):
img = Z[:,0,f].reshape((m,m))
#im = axes.flat[f].imshow(img, vmin=Z.min(), vmax=Z.max(), interpolation='none')
im = axes.flat[f].imshow(img, interpolation='none')
axes.flat[f].set_title('Filter {}'.format(f))
fig.subplots_adjust(right=0.8)
cax = fig.add_axes([0.82, 0.16, 0.02, 0.7])
fig.colorbar(im, cax=cax)
plot_features(clf_weights.C, X[:,[0]])
plot_features(clf_weights.C, X[:,[1000]])
```
# Performance w.r.t. hyper-parameters
* F plays a big role.
* Both for performance and training time.
  * Larger values lead to over-fitting!
* Order $K \in [3,5]$ seems sufficient.
* $\tau_R$ does not have much influence.
```
def scorer(clf, X, y):
yest = clf.predict(X).round().squeeze()
y = y.squeeze()
yy = np.ones(len(y))
yy[yest < 0] = -1
nerrs = np.count_nonzero(y - yy)
return 1 - nerrs / len(y)
def perf(clf, nfolds=3):
"""Test training accuracy."""
N = X.shape[1]
inds = np.arange(N)
np.random.shuffle(inds)
inds.resize((nfolds, int(N/nfolds)))
folds = np.arange(nfolds)
test = inds[0,:]
train = inds[folds != 0, :].reshape(-1)
fig, axes = plt.subplots(1,3, figsize=(15,5))
test_optim(clf, X[:,train], y[train], axes[2])
axes[0].plot(train, clf.predict(X[:,train]), '.')
axes[0].plot(train, y[train].squeeze(), '.')
axes[0].set_ylim([-3,3])
axes[0].set_title('Training set accuracy: {:.2f}'.format(scorer(clf, X[:,train], y[train])))
axes[1].plot(test, clf.predict(X[:,test]), '.')
axes[1].plot(test, y[test].squeeze(), '.')
axes[1].set_ylim([-3,3])
axes[1].set_title('Testing set accuracy: {:.2f}'.format(scorer(clf, X[:,test], y[test])))
if hasattr(clf, 'C'):
plot_filters(clf.C)
perf(rls(tauR=1e6))
for F in [1,3,5]:
perf(gflc_weights(F=F, K=50, tauR=1e4, niter=5, algo='direct'))
#perf(rls(tauR=1e-3))
#for K in [2,3,5,7]:
# perf(gflc_weights(F=3, K=K, tauR=1e-3, niter=5, algo='direct'))
#for tauR in [1e-3, 1e-1, 1e1]:
# perf(rls(tauR=tauR))
# perf(gflc_weights(F=3, K=3, tauR=tauR, niter=5, algo='direct'))
```
# Classification
* The greater $F$ is, the greater $K$ should be.
```
def cross_validation(clf, nfolds, nvalidations):
M, N = X.shape
scores = np.empty((nvalidations, nfolds))
for nval in range(nvalidations):
inds = np.arange(N)
np.random.shuffle(inds)
inds.resize((nfolds, int(N/nfolds)))
folds = np.arange(nfolds)
for n in folds:
test = inds[n,:]
train = inds[folds != n, :].reshape(-1)
clf.fit(X[:,train], y[train])
scores[nval, n] = scorer(clf, X[:,test], y[test])
return scores.mean()*100, scores.std()*100
#print('Accuracy: {:.2f} +- {:.2f}'.format(scores.mean()*100, scores.std()*100))
#print(scores)
def test_classification(clf, params, param, values, nfolds=10, nvalidations=1):
means = []
stds = []
fig, ax = plt.subplots(1,1, figsize=(15,5))
for i,val in enumerate(values):
params[param] = val
mean, std = cross_validation(clf(**params), nfolds, nvalidations)
means.append(mean)
stds.append(std)
ax.annotate('{:.2f} +- {:.2f}'.format(mean,std), xy=(i,mean), xytext=(10,10), textcoords='offset points')
ax.errorbar(np.arange(len(values)), means, stds, fmt='.', markersize=10)
ax.set_xlim(-.8, len(values)-.2)
ax.set_xticks(np.arange(len(values)))
ax.set_xticklabels(values)
ax.set_xlabel(param)
ax.set_ylim(50, 100)
ax.set_ylabel('Accuracy')
ax.set_title('Parameters: {}'.format(params))
test_classification(rls, {}, 'tauR', [1e8,1e7,1e6,1e5,1e4,1e3,1e-5,1e-8], 10, 10)
params = {'F':1, 'K':2, 'tauR':1e3, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'tauR', [1e8,1e6,1e5,1e4,1e3,1e2,1e-3,1e-8], 10, 10)
params = {'F':2, 'K':10, 'tauR':1e4, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'F', [1,2,3,5])
params = {'F':2, 'K':4, 'tauR':1e4, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'K', [2,3,4,5,8,10,20,30,50,70])
```
# Sampled MNIST
```
Xfull = X
def sample(X, p, seed=None):
M, N = X.shape
z = graph.grid(int(np.sqrt(M)))
# Select random pixels.
np.random.seed(seed)
mask = np.arange(M)
np.random.shuffle(mask)
mask = mask[:int(p*M)]
return z[mask,:], X[mask,:]
X = Xfull
z, X = sample(X, .5)
dist, idx = graph.distance_sklearn_metrics(z, k=4)
A = graph.adjacency(dist, idx)
L = graph.laplacian(A)
lmax = graph.lmax(L)
lamb, U = graph.fourier(L)
print('Spectrum in [{:1.2e}, {:1.2e}]'.format(lamb[0], lamb[-1]))
print(L.shape)
def plot(n):
M, N = X.shape
m = int(np.sqrt(M))
x = X[:,n]
#print(x+127.5)
plt.scatter(z[:,0], -z[:,1], s=20, c=x+127.5)
plot(10)
def plot_digit(nn):
M, N = X.shape
m = int(np.sqrt(M))
fig, axes = plt.subplots(1,len(nn), figsize=(15,5))
for i, n in enumerate(nn):
n = int(n)
img = X[:,n]
axes[i].imshow(img.reshape((m,m)))
axes[i].set_title('Label: y = {:.0f}'.format(y[n,0]))
#plot_digit([0, 1, 1e2, 1e2+1, 1e3, 1e3+1])
#clf_weights = gflc_weights(F=3, K=4, tauR=1e-3, niter=5, algo='direct')
#test_optim(clf_weights, X, y)
#plot_filters(clf_weights.C, True)
#test_classification(rls, {}, 'tauR', [1e1,1e0])
#params = {'F':2, 'K':5, 'tauR':1e-3, 'niter':5, 'algo':'direct'}
#test_classification(gflc_weights, params, 'F', [1,2,3])
test_classification(rls, {}, 'tauR', [1e8,1e7,1e6,1e5,1e4,1e3,1e-5,1e-8], 10, 10)
params = {'F':2, 'K':2, 'tauR':1e3, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'tauR', [1e8,1e5,1e4,1e3,1e2,1e1,1e-3,1e-8], 10, 1)
params = {'F':2, 'K':10, 'tauR':1e5, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'F', [1,2,3,4,5,10])
params = {'F':2, 'K':4, 'tauR':1e5, 'niter':5, 'algo':'direct'}
test_classification(gflc_weights, params, 'K', [2,3,4,5,6,7,8,10,20,30])
```
| github_jupyter |
```
"""
This example demonstrates many of the 2D plotting capabilities
in pyqtgraph. All of the plots may be panned/scaled by dragging with
the left/right mouse buttons. Right click on any plot to show a context menu.
"""
from pyqtgraph.jupyter import GraphicsLayoutWidget
from IPython.display import display
import numpy as np
import pyqtgraph as pg
class CustomGLW(GraphicsLayoutWidget):
def get_frame(self):
# rather than eating up cpu cycles by perpetually updating "Updating plot",
# we will only update it opportunistically on a redraw.
# self.request_draw()
update()
return super().get_frame()
pg.mkQApp()
win = CustomGLW(css_width="1000px", css_height="600px")
# Enable antialiasing for prettier plots
pg.setConfigOptions(antialias=True)
p1 = win.addPlot(title="Basic array plotting", y=np.random.normal(size=100))
p2 = win.addPlot(title="Multiple curves")
p2.plot(np.random.normal(size=100), pen=(255,0,0), name="Red curve")
p2.plot(np.random.normal(size=110)+5, pen=(0,255,0), name="Green curve")
p2.plot(np.random.normal(size=120)+10, pen=(0,0,255), name="Blue curve")
p3 = win.addPlot(title="Drawing with points")
p3.plot(np.random.normal(size=100), pen=(200,200,200), symbolBrush=(255,0,0), symbolPen='w')
win.nextRow()
p4 = win.addPlot(title="Parametric, grid enabled")
x = np.cos(np.linspace(0, 2*np.pi, 1000))
y = np.sin(np.linspace(0, 4*np.pi, 1000))
p4.plot(x, y)
p4.showGrid(x=True, y=True)
p5 = win.addPlot(title="Scatter plot, axis labels, log scale")
x = np.random.normal(size=1000) * 1e-5
y = x*1000 + 0.005 * np.random.normal(size=1000)
y -= y.min()-1.0
mask = x > 1e-15
x = x[mask]
y = y[mask]
p5.plot(x, y, pen=None, symbol='t', symbolPen=None, symbolSize=10, symbolBrush=(100, 100, 255, 50))
p5.setLabel('left', "Y Axis", units='A')
p5.setLabel('bottom', "X Axis", units='s')
p5.setLogMode(x=True, y=False)
p6 = win.addPlot(title="Updating plot")
curve = p6.plot(pen='y')
data = np.random.normal(size=(10,1000))
ptr = 0
def update():
global curve, data, ptr, p6
curve.setData(data[ptr%10])
if ptr == 0:
p6.enableAutoRange('xy', False) ## stop auto-scaling after the first data set is plotted
ptr += 1
win.nextRow()
p7 = win.addPlot(title="Filled plot, axis disabled")
y = np.sin(np.linspace(0, 10, 1000)) + np.random.normal(size=1000, scale=0.1)
p7.plot(y, fillLevel=-0.3, brush=(50,50,200,100))
p7.showAxis('bottom', False)
x2 = np.linspace(-100, 100, 1000)
data2 = np.sin(x2) / x2
p8 = win.addPlot(title="Region Selection")
p8.plot(data2, pen=(255,255,255,200))
lr = pg.LinearRegionItem([400,700])
lr.setZValue(-10)
p8.addItem(lr)
p9 = win.addPlot(title="Zoom on selected region")
p9.plot(data2)
def updatePlot():
p9.setXRange(*lr.getRegion(), padding=0)
def updateRegion():
lr.setRegion(p9.getViewBox().viewRange()[0])
lr.sigRegionChanged.connect(updatePlot)
p9.sigXRangeChanged.connect(updateRegion)
updatePlot()
display(win)
```
| github_jupyter |
## Import libraries
```
# generic tools
import numpy as np
import datetime
# tools from sklearn
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import classification_report
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
# tools from tensorflow
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.datasets import mnist
from tensorflow.keras import backend as K
from tensorflow.keras.utils import plot_model
# matplotlib
import matplotlib.pyplot as plt
# Load the TensorBoard notebook extension
%load_ext tensorboard
# delete logs from previous runs - not always safe!
!rm -rf ./logs/
```
## Download data, train-test split, binarize labels
```
data, labels = fetch_openml('mnist_784', version=1, return_X_y=True)
# scale data to [0, 1]
data = data.astype("float")/255.0
# split data
(trainX, testX, trainY, testY) = train_test_split(data,
labels,
test_size=0.2)
# convert labels to one-hot encoding
lb = LabelBinarizer()
trainY = lb.fit_transform(trainY)
testY = lb.transform(testY)
```
## Define neural network architecture using ```tf.keras```
```
# define architecture 784x256x128x10
model = Sequential()
model.add(Dense(256, input_shape=(784,), activation="sigmoid"))
model.add(Dense(128, activation="sigmoid"))
model.add(Dense(10, activation="softmax")) # generalisation of logistic regression for multiclass task
```
## Show summary of model architecture
```
model.summary()
```
## Visualise model layers
```
plot_model(model, show_shapes=True, show_layer_names=True)
```
## Compile model loss function, optimizer, and preferred metrics
```
# train model using SGD
sgd = SGD(1e-2)
model.compile(loss="categorical_crossentropy",
optimizer=sgd,
metrics=["accuracy"])
```
## Set ```tensorboard``` parameters - not compulsory!
```
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir,
histogram_freq=1)
```
## Train model and save history
```
history = model.fit(trainX, trainY,
validation_data=(testX,testY),
epochs=100,
batch_size=128,
callbacks=[tensorboard_callback])
```
## Visualise using ```matplotlib```
```
plt.style.use("fivethirtyeight")
plt.figure()
plt.plot(np.arange(0, 100), history.history["loss"], label="train_loss")
plt.plot(np.arange(0, 100), history.history["val_loss"], label="val_loss", linestyle=":")
plt.plot(np.arange(0, 100), history.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, 100), history.history["val_accuracy"], label="val_acc", linestyle=":")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.tight_layout()
plt.legend()
plt.show()
```
## Inspect using ```tensorboard```
This won't run on JupyterHub!
```
%tensorboard --logdir logs/fit
```
## Classifier metrics
```
# evaluate network
print("[INFO] evaluating network...")
predictions = model.predict(testX, batch_size=128)
print(classification_report(testY.argmax(axis=1),
predictions.argmax(axis=1),
target_names=[str(x) for x in lb.classes_]))
```
| github_jupyter |
```
# G Oldford Feb 19 2022
# visualize monte carlo results from ecosim Monte Carlo
# uses ggplot2
#
# https://erdavenport.github.io/R-ecology-lesson/05-visualization-ggplot2.html
library(tidyverse)
library(matrixStats)
# No biomass found in the automatically written MC run output files, so biomass was saved directly from the MC plugin plot
# The B's are relative to initialization year!
path_MC_sc1 = "C://Users//Greig//Sync//PSF//EwE//Georgia Strait 2021//UTL_model//6_MRM_SealTKWJuveSlmn//Results//"
file_MC_sc1 = "MRM_SealsTKWJuveSalm_Feb172022_NOTKWforce_graph_MCs_500trials.csv"
n_MC_runs_1 = 500 # sets cols that correspond to seal B
path_MC_sc2 = "C://Users//Greig//Sync//PSF//EwE//Georgia Strait 2021//UTL_model//6_MRM_SealTKWJuveSlmn//Results//SealWCT_Feb172022_midB-WCT//mc_Scenario 2b- WCT Forcing - Mid B//"
file_MC_sc2 = "BiomassDirectSaveMC_test_2022-02-19_500runs.csv"
n_MC_runs_2 = 500
relB_base = 0.134 # base yr seal B hard coded - careful
path_TS = "C://Users//Greig//Sync//PSF//EwE//Georgia Strait 2021//UTL_model//6_MRM_SealTKWJuveSlmn//"
file_TS = "SealWCT_B_timeseries_Scen2b_rev20220217v3_MidB.csv"
# ==== read MC results file ====
header_lines = 1
results_df_sc1 <- read.csv(paste(path_MC_sc1, file_MC_sc1,sep=""), skip = header_lines)
# rename col and get seals B only
results_trim_TKWForcemid_sc1 = results_df_sc1 %>% rename(year = Data) %>%
select(c("year",starts_with("X2..Seals"))) %>%
mutate(year_int = round(year,0)) %>%
  filter(year_int < 2022) # deals with single row w/ erroneous large year at end of TS data
# head(results_trim_TKWForcemid_sc1)
results_df_sc2 <- read.csv(paste(path_MC_sc2, file_MC_sc2,sep=""), skip = header_lines)
# rename col and get seals B only
results_trim_TKWForcemid_sc2 = results_df_sc2 %>% rename(year = Data) %>%
select(c("year",starts_with("X2..Seals"))) %>%
mutate(year_int = round(year,0)) %>%
  filter(year_int < 2022) # deals with single row w/ erroneous large year at end of TS data
# head(results_trim_TKWForcemid_sc2)
# ==== read TS reference file ====
header_lines = 3
sealobs_df <- read.csv(paste(path_TS, file_TS, sep=""), skip = header_lines)
#relB_base = sealobs_df$BiomassAbs[1]
# convert to relative B
sealobs_df$SealsObsRelB = sealobs_df$BiomassAbs / relB_base
seals_obs_relB = sealobs_df %>% rename(year = Type) %>%
select(c("year","SealsObsRelB")) %>%
mutate(source = "surveys")
# pivot tables to long, for scatter plotting
sc1_df = results_trim_TKWForcemid_sc1 %>% select(-year) %>%
pivot_longer(!year_int, names_to = "Mc_run_sc", values_to = "RelB") %>%
mutate(scenario = "No TKW")
sc2_df = results_trim_TKWForcemid_sc2 %>% select(-year) %>%
pivot_longer(!year_int, names_to = "Mc_run_sc", values_to = "RelB") %>%
mutate(scenario = "TKW")
# combine
sc_df = bind_rows(sc1_df,sc2_df)
# # rename 'data' col to Year
# results_trim = results_df %>% rename(year = Data) %>%
# select(c(0:n_MC_runs))
# head(results_trim)
# to do - eliminate scenarios where seals go extinct.
# likely this is due to issues with total catch (forcing) time series.
# EwE doesn't allow for F forcing.
ggplot(data = sc_df, aes(x = year_int, y = RelB)) +
geom_point(alpha = 0.01, aes(color=scenario))
# visualize, scenario 1 vs scenario 2, after year 2000 when seals plateau
sc_2000fwd_df = sc_df %>% filter(year_int >= 2000) %>%
filter(RelB > 0.05)
ggplot(data = sc_2000fwd_df, aes(x = scenario, y = RelB)) +
geom_boxplot()
# OLD CODE BELOW
# for geom_ribbon plots get upper and lower bound
columns <- grep("X2..Seals", colnames(results_trim_TKWForcemid))
results_trim_TKWmid = results_trim_TKWForcemid %>%
mutate(Mean= rowMeans(.[columns],,na.rm = TRUE),
logMean = rowMeans(log(.[columns]),na.rm = TRUE),
stdev=rowSds(as.matrix(.[columns]),na.rm = TRUE),
stdev_log=rowSds(as.matrix(log(.[columns])),na.rm = TRUE)) %>%
# 95% confidence interv https://www.mathsisfun.com/data/confidence-interval.html
mutate(upper_B = Mean + (1.96 * stdev / sqrt(n_MC_runs)),
lower_B = Mean - (1.96 * stdev / sqrt(n_MC_runs))) %>%
mutate(year_int = round(year,0)) %>%
filter(year_int < 2022) %>% #deals with weird super-large year at end of TS data
select(c("year_int","Mean", "stdev", "lower_B","upper_B")) %>%
mutate(source = "EwE") %>%
rename(year = year_int) %>%
# 12 vals per year - average the stats within years
group_by(year) %>% dplyr::summarize(mean_yr = mean(Mean, na.rm=TRUE),
mean_std = mean(stdev, na.rm=TRUE),
mean_lwrB = mean(lower_B, na.rm=TRUE),
mean_uppB = mean(upper_B, na.rm=TRUE))
results_trim_TKWmid
model_obs_binding = bind_rows(results_trim2,seals_obs_relB)
(model_obs_binding)
# # for geom_ribbon plots get upper and lower bound
# columns <- c(2:n_MC_runs)
# Old stuff
# results_trim2 = results_trim_TKWForcemid %>%
# mutate(Mean= rowMeans(.[columns]),
# logMean = rowMeans(log(.[columns])),
# stdev=rowSds(as.matrix(.[columns])),
# stdev_log=rowSds(as.matrix(log(.[columns])))) %>%
# mutate(upper_B = Mean + (1.96 * stdev / sqrt(n_MC_runs)), # 95% confidence interv https://www.mathsisfun.com/data/confidence-interval.html
# lower_B = Mean - (1.96 * stdev / sqrt(n_MC_runs))) %>%
# mutate(year_int = round(year,0)) %>%
# filter(year_int < 2022) %>% #deals with weird super-large year at end of TS data
# select(c("year_int","Mean", "stdev", "lower_B","upper_B")) %>%
# mutate(source = "EwE") %>%
# rename(year = year_int) %>%
# # at this point there are 12 vals per year but these appear to jump every year
# # below will average the stats across each year
# group_by(year) %>% dplyr::summarize(mean_yr = mean(Mean, na.rm=TRUE),
# mean_std = mean(stdev, na.rm=TRUE),
# mean_lwrB = mean(lower_B, na.rm=TRUE),
# mean_uppB = mean(upper_B, na.rm=TRUE))
#mutate(upper_B = exp(upper_logB),
# lower_B = exp(lower_logB))
# pivot wide to long
#results_piv = results_trim2 %>% pivot_longer(
# cols = starts_with("X2"),
# names_to = "Seals",
# names_prefix = "",
# values_to = "B",
# values_drop_na = TRUE
# )
# head(results_trim2)
# I can't find Biomass in the auto written MC run out files, so I'm saving from the plot in the MC plugin
#path = "C://Users//Greig//Sync//PSF//EwE//Georgia Strait 2021//UTL_model//6_MRM_SealTKWJuveSlmn//Results//SealWCT_Feb172022_midB-WCT//mc_Scenario 3c- MonteCarlo TKWForce Mid//"
#file = "BiomassPlotSave_Scen3b_TKWForce_min.csv"
#file = "BiomassPlotSave_Scen3c_TKWForce_mid_500runs.csv"
#file = "BiomassDirectSaveMC_test_2022-02-19.csv"
#starts_with(results_trim_TKWForcemid,"X2..Seals")
#grep("X2..Seals", colnames(results_trim_TKWForcemid))
#results_trim_TKWForcemid
# # for geom_ribbon plots get upper and lower bound
# columns <- grep("X2..Seals", colnames(results_trim_TKWForcemid))
# results_trim_TKWmid = results_trim_TKWForcemid %>%
# mutate(Mean= rowMeans(.[columns],,na.rm = TRUE),
# logMean = rowMeans(log(.[columns]),na.rm = TRUE),
# stdev=rowSds(as.matrix(.[columns]),na.rm = TRUE),
# stdev_log=rowSds(as.matrix(log(.[columns])),na.rm = TRUE)) %>%
# mutate(upper_B = Mean + (1.96 * stdev / sqrt(n_MC_runs)), # 95% confidence interv https://www.mathsisfun.com/data/confidence-interval.html
# lower_B = Mean - (1.96 * stdev / sqrt(n_MC_runs))) %>%
# mutate(year_int = round(year,0)) %>%
# filter(year_int < 2022) %>% #deals with weird super-large year at end of TS data
# select(c("year_int","Mean", "stdev", "lower_B","upper_B")) %>%
# mutate(source = "EwE") %>%
# rename(year = year_int) %>%
# # at this point there are 12 vals per year but these appear to jump every year
# # below will average the stats across each year
# group_by(year) %>% dplyr::summarize(mean_yr = mean(Mean, na.rm=TRUE),
# mean_std = mean(stdev, na.rm=TRUE),
# mean_lwrB = mean(lower_B, na.rm=TRUE),
# mean_uppB = mean(upper_B, na.rm=TRUE))
#mutate(upper_B = exp(upper_logB),
# lower_B = exp(lower_logB))
# pivot wide to long
#results_piv = results_trim2 %>% pivot_longer(
# cols = starts_with("X2"),
# names_to = "Seals",
# names_prefix = "",
# values_to = "B",
# values_drop_na = TRUE
# )
# tail(results_trim_TKWmid)
# read seal time series data
# convert from abs to rel to match MC out
# path = "C://Users//Greig//Sync//PSF//EwE//Georgia Strait 2021//UTL_model//6_MRM_SealTKWJuveSlmn//"
#file = "SealTKW_timeseries_Scen1_NoTKWForcing.csv"
# file = "SealWCT_B_timeseries_Scen2b_rev20220217v3_MidB.csv"
# header_lines = 3
# sealobs_df <- read.csv(paste(path, file,sep=""), skip = header_lines)
# #relB_base = sealobs_df$BiomassAbs[1]
# relB_base = 0.134
# sealobs_df$SealsObsRelB = sealobs_df$BiomassAbs / relB_base
# seals_obs_relB = sealobs_df %>% rename(year = Type) %>%
# select(c("year","SealsObsRelB")) %>%
# mutate(source = "surveys")
# #sealobs_df
# (seals_obs_relB)
# merge two tables
# model_obs_binding = bind_rows(results_trim2,seals_obs_relB)
# (model_obs_binding)
SealsObsRelB_1970on = seals_obs_relB %>% filter(year > 1969)
ggplot(data = results_trim2, aes(x = year, y = mean_yr)) +
geom_ribbon(aes(ymin=mean_lwrB, ymax=mean_uppB),alpha = 0.1, color = "blue") +
geom_ribbon(data = results_trim_TKWmid, aes(y=mean_yr, ymin=mean_lwrB, ymax=mean_uppB),alpha = 0.1, color = "blue") +
geom_point(data = SealsObsRelB_1970on, aes(y=SealsObsRelB, x=year),alpha = 0.8, color = "black") +
ylab("Relative Seal Biomass (SoG)")
# temporary
# read seal time series data
# convert from abs to rel to match MC out
path = "C://Users//Greig//Sync//PSF//EwE//Georgia Strait 2021//UTL_model//6_MRM_SealTKWJuveSlmn//results//"
file = "biomass_annual_justresultsnoMC_scen1scen2b.csv"
header_lines = 0
roughrundata_df <- read.csv(paste(path, file,sep=""), skip = header_lines)
head(roughrundata_df)
# define observation overlay (mt km-2, excluding recent years) before plotting
seals_obs_relB$SealsObsRelB_mt = seals_obs_relB$SealsObsRelB * 0.169
seals_obs_relB_norecent = seals_obs_relB %>% filter(year < 2015)
ggplot(data = roughrundata_df, aes(x = year, y = seals_sc1)) +
  geom_line() +
  geom_ribbon(aes(ymin=seals_sc1_lo, ymax=seals_sc1_up),alpha = 0.1, fill = "blue") +
  geom_line(aes(y=seals_sc2b)) +
  geom_ribbon(aes(y=seals_sc2b, ymin=seals_sc2b_lo, ymax=seals_sc2b_up),alpha = 0.1, fill = "green") +
  geom_point(data = seals_obs_relB_norecent, aes(y=SealsObsRelB_mt, x=year),alpha = 0.8, color = "black") +
  ylab("Seal Biomass Density (mt km-2)")
```
| github_jupyter |
## Dependencies
```
import os
import sys
import cv2
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import multiprocessing as mp
import matplotlib.pyplot as plt
from tensorflow import set_random_seed
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
from keras import backend as K
from keras.models import Model
from keras.utils import to_categorical
from keras import optimizers, applications
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Dropout, GlobalAveragePooling2D, Input
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback, LearningRateScheduler, ModelCheckpoint
def seed_everything(seed=0):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
    set_random_seed(seed)
seed = 0
seed_everything(seed)
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
sys.path.append(os.path.abspath('../input/efficientnet/efficientnet-master/efficientnet-master/'))
from efficientnet import *
```
## Load data
```
fold_set = pd.read_csv('../input/aptos-split-oldnew/5-fold.csv')
X_train = fold_set[fold_set['fold_2'] == 'train']
X_val = fold_set[fold_set['fold_2'] == 'validation']
test = pd.read_csv('../input/aptos2019-blindness-detection/test.csv')
# Preprocess data
test["id_code"] = test["id_code"].apply(lambda x: x + ".png")
print('Number of train samples: ', X_train.shape[0])
print('Number of validation samples: ', X_val.shape[0])
print('Number of test samples: ', test.shape[0])
display(X_train.head())
```
# Model parameters
```
# Model parameters
model_path = '../working/effNetB4_img256_noBen_fold3.h5'
FACTOR = 4
BATCH_SIZE = 8 * FACTOR
EPOCHS = 20
WARMUP_EPOCHS = 5
LEARNING_RATE = 1e-3/2 * FACTOR
WARMUP_LEARNING_RATE = 1e-3/2 * FACTOR
HEIGHT = 256
WIDTH = 256
CHANNELS = 3
TTA_STEPS = 5
ES_PATIENCE = 5
LR_WARMUP_EPOCHS = 5
STEP_SIZE = len(X_train) // BATCH_SIZE
TOTAL_STEPS = EPOCHS * STEP_SIZE
WARMUP_STEPS = LR_WARMUP_EPOCHS * STEP_SIZE
```
# Pre-procecess images
```
old_data_base_path = '../input/diabetic-retinopathy-resized/resized_train/resized_train/'
new_data_base_path = '../input/aptos2019-blindness-detection/train_images/'
test_base_path = '../input/aptos2019-blindness-detection/test_images/'
train_dest_path = 'base_dir/train_images/'
validation_dest_path = 'base_dir/validation_images/'
test_dest_path = 'base_dir/test_images/'
# Making sure directories don't exist
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
# Creating train, validation and test directories
os.makedirs(train_dest_path)
os.makedirs(validation_dest_path)
os.makedirs(test_dest_path)
def crop_image(img, tol=7):
if img.ndim ==2:
mask = img>tol
return img[np.ix_(mask.any(1),mask.any(0))]
elif img.ndim==3:
gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
mask = gray_img>tol
check_shape = img[:,:,0][np.ix_(mask.any(1),mask.any(0))].shape[0]
if (check_shape == 0): # image is too dark so that we crop out everything,
return img # return original image
else:
img1=img[:,:,0][np.ix_(mask.any(1),mask.any(0))]
img2=img[:,:,1][np.ix_(mask.any(1),mask.any(0))]
img3=img[:,:,2][np.ix_(mask.any(1),mask.any(0))]
img = np.stack([img1,img2,img3],axis=-1)
return img
def circle_crop(img):
img = crop_image(img)
height, width, depth = img.shape
largest_side = np.max((height, width))
img = cv2.resize(img, (largest_side, largest_side))
height, width, depth = img.shape
x = width//2
y = height//2
r = np.amin((x, y))
circle_img = np.zeros((height, width), np.uint8)
cv2.circle(circle_img, (x, y), int(r), 1, thickness=-1)
img = cv2.bitwise_and(img, img, mask=circle_img)
img = crop_image(img)
return img
def preprocess_image(image_id, base_path, save_path, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
image = cv2.imread(base_path + image_id)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = circle_crop(image)
image = cv2.resize(image, (HEIGHT, WIDTH))
# image = cv2.addWeighted(image, 4, cv2.GaussianBlur(image, (0,0), sigmaX), -4 , 128)
cv2.imwrite(save_path + image_id, image)
def preprocess_data(df, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
df = df.reset_index()
for i in range(df.shape[0]):
item = df.iloc[i]
image_id = item['id_code']
item_set = item['fold_2']
item_data = item['data']
if item_set == 'train':
if item_data == 'new':
preprocess_image(image_id, new_data_base_path, train_dest_path)
if item_data == 'old':
preprocess_image(image_id, old_data_base_path, train_dest_path)
if item_set == 'validation':
if item_data == 'new':
preprocess_image(image_id, new_data_base_path, validation_dest_path)
if item_data == 'old':
preprocess_image(image_id, old_data_base_path, validation_dest_path)
def preprocess_test(df, base_path=test_base_path, save_path=test_dest_path, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
df = df.reset_index()
for i in range(df.shape[0]):
image_id = df.iloc[i]['id_code']
preprocess_image(image_id, base_path, save_path)
n_cpu = mp.cpu_count()
train_n_cnt = X_train.shape[0] // n_cpu
val_n_cnt = X_val.shape[0] // n_cpu
test_n_cnt = test.shape[0] // n_cpu
# Pre-process train set
pool = mp.Pool(n_cpu)
dfs = [X_train.iloc[train_n_cnt*i:train_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = X_train.iloc[train_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_data, [x_df for x_df in dfs])
pool.close()
# Pre-process validation set
pool = mp.Pool(n_cpu)
dfs = [X_val.iloc[val_n_cnt*i:val_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = X_val.iloc[val_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_data, [x_df for x_df in dfs])
pool.close()
# Pre-process test set
pool = mp.Pool(n_cpu)
dfs = [test.iloc[test_n_cnt*i:test_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = test.iloc[test_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_test, [x_df for x_df in dfs])
pool.close()
```
# Data generator
```
datagen=ImageDataGenerator(rescale=1./255,
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
train_generator=datagen.flow_from_dataframe(
dataframe=X_train,
directory=train_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
valid_generator=datagen.flow_from_dataframe(
dataframe=X_val,
directory=validation_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
test_generator=datagen.flow_from_dataframe(
dataframe=test,
directory=test_dest_path,
x_col="id_code",
batch_size=1,
class_mode=None,
shuffle=False,
target_size=(HEIGHT, WIDTH),
seed=seed)
def classify(x):
if x < 0.5:
return 0
elif x < 1.5:
return 1
elif x < 2.5:
return 2
elif x < 3.5:
return 3
return 4
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR']
def plot_confusion_matrix(train, validation, labels=labels):
train_labels, train_preds = train
validation_labels, validation_preds = validation
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap=sns.cubehelix_palette(8),ax=ax2).set_title('Validation')
plt.show()
def plot_metrics(history, figsize=(20, 14)):
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=figsize)
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
def apply_tta(model, generator, steps=10):
step_size = generator.n//generator.batch_size
preds_tta = []
for i in range(steps):
generator.reset()
preds = model.predict_generator(generator, steps=step_size)
preds_tta.append(preds)
return np.mean(preds_tta, axis=0)
def evaluate_model(train, validation):
train_labels, train_preds = train
validation_labels, validation_preds = validation
print("Train Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds, train_labels, weights='quadratic'))
print("Validation Cohen Kappa score: %.3f" % cohen_kappa_score(validation_preds, validation_labels, weights='quadratic'))
print("Complete set Cohen Kappa score: %.3f" % cohen_kappa_score(np.append(train_preds, validation_preds), np.append(train_labels, validation_labels), weights='quadratic'))
def cosine_decay_with_warmup(global_step,
learning_rate_base,
total_steps,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0):
"""
Cosine decay schedule with warm up period.
In this schedule, the learning rate grows linearly from warmup_learning_rate
to learning_rate_base for warmup_steps, then transitions to a cosine decay
schedule.
:param global_step {int}: global step.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
    :Returns: a float representing the learning rate.
:Raises ValueError: if warmup_learning_rate is larger than learning_rate_base, or if warmup_steps is larger than total_steps.
"""
if total_steps < warmup_steps:
raise ValueError('total_steps must be larger or equal to warmup_steps.')
learning_rate = 0.5 * learning_rate_base * (1 + np.cos(
np.pi *
(global_step - warmup_steps - hold_base_rate_steps
) / float(total_steps - warmup_steps - hold_base_rate_steps)))
if hold_base_rate_steps > 0:
learning_rate = np.where(global_step > warmup_steps + hold_base_rate_steps,
learning_rate, learning_rate_base)
if warmup_steps > 0:
if learning_rate_base < warmup_learning_rate:
raise ValueError('learning_rate_base must be larger or equal to warmup_learning_rate.')
slope = (learning_rate_base - warmup_learning_rate) / warmup_steps
warmup_rate = slope * global_step + warmup_learning_rate
learning_rate = np.where(global_step < warmup_steps, warmup_rate,
learning_rate)
return np.where(global_step > total_steps, 0.0, learning_rate)
class WarmUpCosineDecayScheduler(Callback):
"""Cosine decay with warmup learning rate scheduler"""
def __init__(self,
learning_rate_base,
total_steps,
global_step_init=0,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0,
verbose=0):
"""
Constructor for cosine decay with warmup learning rate scheduler.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param global_step_init {int}: initial global step, e.g. from previous checkpoint.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
        :param verbose {int}: 0: quiet, 1: update messages. (default: {0}).
"""
super(WarmUpCosineDecayScheduler, self).__init__()
self.learning_rate_base = learning_rate_base
self.total_steps = total_steps
self.global_step = global_step_init
self.warmup_learning_rate = warmup_learning_rate
self.warmup_steps = warmup_steps
self.hold_base_rate_steps = hold_base_rate_steps
self.verbose = verbose
self.learning_rates = []
def on_batch_end(self, batch, logs=None):
self.global_step = self.global_step + 1
lr = K.get_value(self.model.optimizer.lr)
self.learning_rates.append(lr)
def on_batch_begin(self, batch, logs=None):
lr = cosine_decay_with_warmup(global_step=self.global_step,
learning_rate_base=self.learning_rate_base,
total_steps=self.total_steps,
warmup_learning_rate=self.warmup_learning_rate,
warmup_steps=self.warmup_steps,
hold_base_rate_steps=self.hold_base_rate_steps)
K.set_value(self.model.optimizer.lr, lr)
if self.verbose > 0:
print('\nBatch %02d: setting learning rate to %s.' % (self.global_step + 1, lr))
```
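As a quick sanity check (a sketch, not required for training), the schedule implemented above can be previewed before fitting by evaluating `cosine_decay_with_warmup` over all steps, reusing `LEARNING_RATE`, `TOTAL_STEPS`, `WARMUP_STEPS` and `STEP_SIZE` from the parameters cell:
```
# Preview the warm-up + cosine decay schedule over all training steps.
preview_lrs = [cosine_decay_with_warmup(step,
                                        learning_rate_base=LEARNING_RATE,
                                        total_steps=TOTAL_STEPS,
                                        warmup_learning_rate=0.0,
                                        warmup_steps=WARMUP_STEPS,
                                        hold_base_rate_steps=3 * STEP_SIZE)
               for step in range(TOTAL_STEPS)]
plt.figure(figsize=(20, 4))
plt.plot(preview_lrs)
plt.title('Warm-up + cosine decay schedule (preview)')
plt.xlabel('Steps')
plt.ylabel('Learning rate')
plt.show()
```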
# Model
```
def create_model(input_shape):
input_tensor = Input(shape=input_shape)
base_model = EfficientNetB4(weights=None,
include_top=False,
input_tensor=input_tensor)
base_model.load_weights('../input/efficientnet-keras-weights-b0b5/efficientnet-b4_imagenet_1000_notop.h5')
x = GlobalAveragePooling2D()(base_model.output)
final_output = Dense(1, activation='linear', name='final_output')(x)
model = Model(input_tensor, final_output)
return model
```
# Train top layers
```
model = create_model(input_shape=(HEIGHT, WIDTH, CHANNELS))
for layer in model.layers:
layer.trainable = False
for i in range(-2, 0):
model.layers[i].trainable = True
metric_list = ["accuracy"]
optimizer = optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
verbose=2).history
```
# Fine-tune the model
```
for layer in model.layers:
layer.trainable = True
checkpoint = ModelCheckpoint(model_path, monitor='val_loss', mode='min', save_best_only=True, save_weights_only=True)
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
cosine_lr = WarmUpCosineDecayScheduler(learning_rate_base=LEARNING_RATE,
total_steps=TOTAL_STEPS,
warmup_learning_rate=0.0,
warmup_steps=WARMUP_STEPS,
hold_base_rate_steps=(3 * STEP_SIZE))
callback_list = [checkpoint, es, cosine_lr]
optimizer = optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
verbose=2).history
fig, ax = plt.subplots(1, 1, sharex='col', figsize=(20, 4))
ax.plot(cosine_lr.learning_rates)
ax.set_title('Fine-tune learning rates')
plt.xlabel('Steps')
plt.ylabel('Learning rate')
sns.despine()
plt.show()
```
# Model loss graph
```
plot_metrics(history)
# Create an empty dataframe to keep the predictions and labels
df_preds = pd.DataFrame(columns=['label', 'pred', 'set'])
train_generator.reset()
valid_generator.reset()
# Add train predictions and labels
for i in range(STEP_SIZE_TRAIN + 1):
im, lbl = next(train_generator)
preds = model.predict(im, batch_size=train_generator.batch_size)
for index in range(len(preds)):
df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'train']
# Add validation predictions and labels
for i in range(STEP_SIZE_VALID + 1):
im, lbl = next(valid_generator)
preds = model.predict(im, batch_size=valid_generator.batch_size)
for index in range(len(preds)):
df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'validation']
df_preds['label'] = df_preds['label'].astype('int')
# Classify predictions
df_preds['predictions'] = df_preds['pred'].apply(lambda x: classify(x))
train_preds = df_preds[df_preds['set'] == 'train']
validation_preds = df_preds[df_preds['set'] == 'validation']
```
# Model Evaluation
## Confusion Matrix
### Original thresholds
```
plot_confusion_matrix((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
## Quadratic Weighted Kappa
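For reference (not from the original notebook): the quadratic weighted kappa penalizes disagreements by the squared distance between the predicted and true grade. With observed agreement matrix $O$, chance-expected matrix $E$ built from the marginals, and $N = 5$ classes here,

$$\kappa = 1 - \frac{\sum_{i,j} w_{ij}\, O_{ij}}{\sum_{i,j} w_{ij}\, E_{ij}}, \qquad w_{ij} = \frac{(i - j)^2}{(N - 1)^2},$$

which matches `cohen_kappa_score(..., weights='quadratic')` used below (the normalization of $w_{ij}$ cancels in the ratio).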
```
evaluate_model((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
## Apply model to test set and output predictions
```
preds = apply_tta(model, test_generator, TTA_STEPS)
predictions = [classify(x) for x in preds]
results = pd.DataFrame({'id_code':test['id_code'], 'diagnosis':predictions})
results['id_code'] = results['id_code'].map(lambda x: str(x)[:-4])
# Cleaning created directories
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
```
# Predictions class distribution
```
fig, ax = plt.subplots(figsize=(24, 8.7))
sns.countplot(x="diagnosis", data=results, palette="GnBu_d").set_title('Test')
sns.despine()
plt.show()
results.to_csv('submission.csv', index=False)
display(results.head())
```
| github_jupyter |
# Translate `dzn` to `smt2` for z3
### Check Versions of Tools
```
import os
import subprocess
my_env = os.environ.copy()
output = subprocess.check_output(f'''/home/{my_env['USER']}/optimathsat/bin/optimathsat -version''', shell=True, universal_newlines=True)
output
output = subprocess.check_output(f'''/home/{my_env['USER']}/minizinc/build/minizinc --version''', shell=True, universal_newlines=True)
output
output = subprocess.check_output(f'''/home/{my_env['USER']}/z3/build/z3 --version''', shell=True, universal_newlines=True)
output
```
First generate the FlatZinc files using the MiniZinc tool. Make sure that an `smt2` folder is located inside `./minizinc/share/minizinc/`. Otherwise, to enable OptiMathSAT's support for global constraints, download the [smt2.tar.gz](http://optimathsat.disi.unitn.it/data/smt2.tar.gz) package and unpack it there using
```zsh
tar xf smt2.tar.gz -C $MINIZINC_PATH/share/minizinc/
```
If the next output shows a list of `.mzn` files, then this dependency is satisfied.
```
output = subprocess.check_output(f'''ls -la /home/{my_env['USER']}/minizinc/share/minizinc/smt2/''', shell=True, universal_newlines=True)
print(output)
```
## Transform `dzn` to `fzn` Using a `mzn` Model
Then transform the desired `.dzn` file to `.fzn` using a `Mz.mzn` MiniZinc model.
First list all `dzn` files contained in the `dzn_path` that should get processed.
```
import os
dzn_files = []
dzn_path = f'''/home/{my_env['USER']}/data/dzn/'''
for filename in os.listdir(dzn_path):
if filename.endswith(".dzn"):
dzn_files.append(filename)
len(dzn_files)
```
#### Model $Mz_1$
```
import sys
fzn_path = f'''/home/{my_env['USER']}/data/fzn/smt2/Mz1-noAbs/'''
minizinc_base_cmd = f'''/home/{my_env['USER']}/minizinc/build/minizinc \
-Werror \
--compile --solver org.minizinc.mzn-fzn \
--search-dir /home/{my_env['USER']}/minizinc/share/minizinc/smt2/ \
/home/{my_env['USER']}/models/mzn/Mz1-noAbs.mzn '''
translate_count = 0
for dzn in dzn_files:
translate_count += 1
minizinc_transform_cmd = minizinc_base_cmd + dzn_path + dzn \
+ ' --output-to-file ' + fzn_path + dzn.replace('.', '-') + '.fzn'
print(f'''\r({translate_count}/{len(dzn_files)}) Translating {dzn_path + dzn} to {fzn_path + dzn.replace('.', '-')}.fzn''', end='')
sys.stdout.flush()
subprocess.check_output(minizinc_transform_cmd, shell=True,
universal_newlines=True)
```
#### Model $Mz_2$
```
import sys
fzn_path = f'''/home/{my_env['USER']}/data/fzn/smt2/Mz2-noAbs/'''
minizinc_base_cmd = f'''/home/{my_env['USER']}/minizinc/build/minizinc \
-Werror \
--compile --solver org.minizinc.mzn-fzn \
--search-dir /home/{my_env['USER']}/minizinc/share/minizinc/smt2/ \
/home/{my_env['USER']}/models/mzn/Mz2-noAbs.mzn '''
translate_count = 0
for dzn in dzn_files:
translate_count += 1
minizinc_transform_cmd = minizinc_base_cmd + dzn_path + dzn \
+ ' --output-to-file ' + fzn_path + dzn.replace('.', '-') + '.fzn'
print(f'''\r({translate_count}/{len(dzn_files)}) Translating {dzn_path + dzn} to {fzn_path + dzn.replace('.', '-')}.fzn''', end='')
sys.stdout.flush()
subprocess.check_output(minizinc_transform_cmd, shell=True,
universal_newlines=True)
```
## Translate `fzn` to `smt2`
The generated `.fzn` files can be used to generate `.smt2` files using the `fzn2smt2.py` script from this [project](https://github.com/PatrickTrentin88/fzn2omt).
**NOTE**: Files `R001` (no cables) and `R002` (one one-sided cable) throw an error while translating.
#### $Mz_1$
```
import os
fzn_files = []
fzn_path = f'''/home/{my_env['USER']}/data/fzn/smt2/Mz1-noAbs/'''
for filename in os.listdir(fzn_path):
if filename.endswith(".fzn"):
fzn_files.append(filename)
len(fzn_files)
smt2_path = f'''/home/{my_env['USER']}/data/smt2/z3/Mz1-noAbs/'''
fzn2smt2_base_cmd = f'''/home/{my_env['USER']}/fzn2omt/bin/fzn2z3.py'''
translate_count = 0
my_env = os.environ.copy()
my_env['PATH'] = f'''/home/{my_env['USER']}/optimathsat/bin/:{my_env['PATH']}'''
my_env['PATH'] = f'''/home/{my_env['USER']}/z3/build/:{my_env['PATH']}'''
for fzn in fzn_files:
translate_count += 1
fzn2smt2_transform_cmd = f'''{fzn2smt2_base_cmd} {fzn_path}{fzn} --smt2 {smt2_path}{fzn.replace('.', '-')}.smt2'''
print(f'''\r({translate_count}/{len(fzn_files)}) Translating {fzn_path + fzn} to {smt2_path + fzn.replace('.', '-')}.smt2''', end='')
try:
output = subprocess.check_output(fzn2smt2_transform_cmd,
shell=True,env=my_env,
universal_newlines=True)
except Exception as e:
output = str(e.output)
print(f'''\r{output}''', end='')
sys.stdout.flush()
```
#### $Mz_2$
```
import os
fzn_files = []
fzn_path = f'''/home/{my_env['USER']}/data/fzn/smt2/Mz2-noAbs/'''
for filename in os.listdir(fzn_path):
if filename.endswith(".fzn"):
fzn_files.append(filename)
len(fzn_files)
smt2_path = f'''/home/{my_env['USER']}/data/smt2/z3/Mz2-noAbs/'''
fzn2smt2_base_cmd = f'''/home/{my_env['USER']}/fzn2omt/bin/fzn2z3.py'''
translate_count = 0
my_env = os.environ.copy()
my_env['PATH'] = f'''/home/{my_env['USER']}/optimathsat/bin/:{my_env['PATH']}'''
my_env['PATH'] = f'''/home/{my_env['USER']}/z3/build/:{my_env['PATH']}'''
for fzn in fzn_files:
translate_count += 1
fzn2smt2_transform_cmd = f'''{fzn2smt2_base_cmd} {fzn_path}{fzn} --smt2 {smt2_path}{fzn.replace('.', '-')}.smt2'''
print(f'''\r({translate_count}/{len(fzn_files)}) Translating {fzn_path + fzn} to {smt2_path + fzn.replace('.', '-')}.smt2''', end='')
try:
output = subprocess.check_output(fzn2smt2_transform_cmd,
shell=True,env=my_env,
universal_newlines=True)
except Exception as e:
output = str(e.output)
print(f'''\r{output}''', end='')
sys.stdout.flush()
```
### Adjust `smt2` Files According to Chapter 5.2
- Add lower and upper bounds for the decision variable `pfc`
- Add number of cavities as comments for later solution extraction (workaround)
```
import os
import re
def adjust_smt2_file(smt2_path: str, file: str, write_path: str):
with open(smt2_path+'/'+file, 'r+') as myfile:
data = "".join(line for line in myfile)
filename = os.path.splitext(file)[0]
newFile = open(os.path.join(write_path, filename +'.smt2'),"w+")
newFile.write(data)
newFile.close()
openFile = open(os.path.join(write_path, filename +'.smt2'))
data = openFile.readlines()
additionalLines = data[-5:]
data = data[:-5]
openFile.close()
newFile = open(os.path.join(write_path, filename +'.smt2'),"w+")
newFile.writelines([item for item in data])
newFile.close()
with open(os.path.join(write_path, filename +'.smt2'),"r") as myfile:
data = "".join(line for line in myfile)
newFile = open(os.path.join(write_path, filename +'.smt2'),"w+")
matches = re.findall(r'\(define-fun .\d\d \(\) Int (\d+)\)', data)
try:
cavity_count = int(matches[0])
newFile.write(f''';; k={cavity_count}\n''')
newFile.write(f''';; Extract pfc from\n''')
for i in range(0,cavity_count):
newFile.write(f''';; X_INTRODUCED_{str(i)}_\n''')
newFile.write(data)
for i in range(1,cavity_count+1):
lb = f'''(define-fun lbound{str(i)} () Bool (> X_INTRODUCED_{str(i-1)}_ 0))\n'''
ub = f'''(define-fun ubound{str(i)} () Bool (<= X_INTRODUCED_{str(i-1)}_ {str(cavity_count)}))\n'''
assertLb = f'''(assert lbound{str(i)})\n'''
assertUb = f'''(assert ubound{str(i)})\n'''
newFile.write(lb)
newFile.write(ub)
newFile.write(assertLb)
newFile.write(assertUb)
except:
print(f'''\nCheck {filename} for completeness - data missing?''')
newFile.writelines([item for item in additionalLines])
newFile.close()
```
#### $Mz_1$
```
import os
smt2_files = []
smt2_path = f'''/home/{my_env['USER']}/data/smt2/z3/Mz1-noAbs'''
for filename in os.listdir(smt2_path):
if filename.endswith(".smt2"):
smt2_files.append(filename)
len(smt2_files)
fix_count = 0
for smt2 in smt2_files:
fix_count += 1
print(f'''\r{fix_count}/{len(smt2_files)} Fixing file {smt2}''', end='')
adjust_smt2_file(smt2_path=smt2_path, file=smt2, write_path=f'''{smt2_path}''')
sys.stdout.flush()
```
#### $Mz_2$
```
import os
smt2_files = []
smt2_path = f'''/home/{my_env['USER']}/data/smt2/z3/Mz2-noAbs'''
for filename in os.listdir(smt2_path):
if filename.endswith(".smt2"):
smt2_files.append(filename)
len(smt2_files)
fix_count = 0
for smt2 in smt2_files:
fix_count += 1
print(f'''\r{fix_count}/{len(smt2_files)} Fixing file {smt2}''', end='')
adjust_smt2_file(smt2_path=smt2_path, file=smt2, write_path=f'''{smt2_path}''')
sys.stdout.flush()
```
## Test Generated `smt2` Files Using `z3`
This should generate the `smt2` files without any errors. If that was the case, the `z3` prover can be called on a file by running
```zsh
z3 output/A001-dzn-smt2-fzn.smt2
```
yielding something similar to
```zsh
z3 output/A001-dzn-smt2-fzn.smt2
sat
(objectives
(obj 41881)
)
(model
(define-fun X_INTRODUCED_981_ () Bool
false)
(define-fun X_INTRODUCED_348_ () Bool
false)
.....
```
#### Test with `smt2` from $Mz_1$
```
command = f'''/home/{my_env['USER']}/z3/build/z3 /home/{my_env['USER']}/data/smt2/z3/Mz1-noAbs/A001-dzn-fzn.smt2'''
print(command)
try:
result = subprocess.check_output(command, shell=True, universal_newlines=True)
except Exception as e:
print(e.output)
print(result)
```
#### Test with `smt2` from $Mz_2$
```
result = subprocess.check_output(
f'''/home/{my_env['USER']}/z3/build/z3 \
/home/{my_env['USER']}/data/smt2/z3/Mz2-noAbs/v3/A004-dzn-fzn_v3.smt2''',
shell=True, universal_newlines=True)
print(result)
```
| github_jupyter |
<p><img alt="Colaboratory logo" height="45px" src="/img/colab_favicon.ico" align="left" hspace="10px" vspace="0px"></p>
<h1>Welcome to Colaboratory!</h1>
Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud.
With Colaboratory you can write and execute code, save and share your analyses, and access powerful computing resources, all for free from your browser.
```
#@title Introducing Colaboratory { display-mode: "form" }
#@markdown This 3-minute video gives an overview of the key features of Colaboratory:
from IPython.display import YouTubeVideo
YouTubeVideo('inN8seMm7UI', width=600, height=400)
```
## Getting Started
The document you are reading is a [Jupyter notebook](https://jupyter.org/), hosted in Colaboratory. It is not a static page, but an interactive environment that lets you write and execute code in Python and other languages.
For example, here is a **code cell** with a short Python script that computes a value, stores it in a variable, and prints the result:
```
seconds_in_a_day = 24 * 60 * 60
seconds_in_a_day
```
To execute the code in the above cell, select it with a click and then either press the play button to the left of the code, or use the keyboard shortcut "Command/Ctrl+Enter".
All cells modify the same global state, so variables that you define by executing a cell can be used in other cells:
```
seconds_in_a_week = 7 * seconds_in_a_day
seconds_in_a_week
```
For more information about working with Colaboratory notebooks, see [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb).
---
# Cells
A notebook is a list of cells. Cells contain either explanatory text or executable code and its output. Click a cell to select it.
## Code cells
Below is a **code cell**. Once the toolbar button indicates CONNECTED, click in the cell to select it and execute the contents in the following ways:
* Click the **Play icon** in the left gutter of the cell;
* Type **Cmd/Ctrl+Enter** to run the cell in place;
* Type **Shift+Enter** to run the cell and move focus to the next cell (adding one if none exists); or
* Type **Alt+Enter** to run the cell and insert a new code cell immediately below it.
There are additional options for running some or all cells in the **Runtime** menu.
```
a = 13
a
```
## Text cells
This is a **text cell**. You can **double-click** to edit this cell. Text cells
use markdown syntax. To learn more, see our [markdown
guide](/notebooks/markdown_guide.ipynb).
You can also add math to text cells using [LaTeX](http://www.latex-project.org/)
to be rendered by [MathJax](https://www.mathjax.org). Just place the statement
within a pair of **\$** signs. For example `$\sqrt{3x-1}+(1+x)^2$` becomes
$\sqrt{3x-1}+(1+x)^2.$
## Adding and moving cells
You can add new cells by using the **+ CODE** and **+ TEXT** buttons that show when you hover between cells. These buttons are also in the toolbar above the notebook where they can be used to add a cell below the currently selected cell.
You can move a cell by selecting it and clicking **Cell Up** or **Cell Down** in the top toolbar.
Consecutive cells can be selected by "lasso selection" by dragging from outside one cell and through the group. Non-adjacent cells can be selected concurrently by clicking one and then holding down Ctrl while clicking another. Similarly, using Shift instead of Ctrl will select all intermediate cells.
# Integration with Drive
Colaboratory is integrated with Google Drive. It allows you to share, comment, and collaborate on the same document with multiple people:
* The **SHARE** button (top-right of the toolbar) allows you to share the notebook and control permissions set on it.
* **File->Make a Copy** creates a copy of the notebook in Drive.
* **File->Save** saves the File to Drive. **File->Save and checkpoint** pins the version so it doesn't get deleted from the revision history.
* **File->Revision history** shows the notebook's revision history.
## Commenting on a cell
You can comment on a Colaboratory notebook like you would on a Google Document. Comments are attached to cells, and are displayed next to the cell they refer to. If you have **comment-only** permissions, you will see a comment button on the top right of the cell when you hover over it.
If you have edit or comment permissions you can comment on a cell in one of three ways:
1. Select a cell and click the comment button in the toolbar above the top-right corner of the cell.
1. Right click a text cell and select **Add a comment** from the context menu.
3. Use the shortcut **Ctrl+Shift+M** to add a comment to the currently selected cell.
You can resolve and reply to comments, and you can target comments to specific collaborators by typing *+[email address]* (e.g., `[email protected]`). Addressed collaborators will be emailed.
The Comment button in the top-right corner of the page shows all comments attached to the notebook.
## More Resources
- [Guide to Markdown](/notebooks/markdown_guide.ipynb)
- Colaboratory is built on top of [Jupyter Notebook](https://jupyter.org/).
---
**Original Sources:**
1. https://colab.research.google.com/notebooks/welcome.ipynb
2. https://colab.research.google.com/notebooks/basic_features_overview.ipynb
| github_jupyter |
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Leveraging-Pre-trained-Word-Embedding-for-Text-Classification" data-toc-modified-id="Leveraging-Pre-trained-Word-Embedding-for-Text-Classification-1"><span class="toc-item-num">1 </span>Leveraging Pre-trained Word Embedding for Text Classification</a></span><ul class="toc-item"><li><span><a href="#Data-Preparation" data-toc-modified-id="Data-Preparation-1.1"><span class="toc-item-num">1.1 </span>Data Preparation</a></span></li><li><span><a href="#Glove" data-toc-modified-id="Glove-1.2"><span class="toc-item-num">1.2 </span>Glove</a></span></li><li><span><a href="#Model" data-toc-modified-id="Model-1.3"><span class="toc-item-num">1.3 </span>Model</a></span><ul class="toc-item"><li><span><a href="#Model-with-Pretrained-Embedding" data-toc-modified-id="Model-with-Pretrained-Embedding-1.3.1"><span class="toc-item-num">1.3.1 </span>Model with Pretrained Embedding</a></span></li><li><span><a href="#Model-without-Pretrained-Embedding" data-toc-modified-id="Model-without-Pretrained-Embedding-1.3.2"><span class="toc-item-num">1.3.2 </span>Model without Pretrained Embedding</a></span></li></ul></li><li><span><a href="#Submission" data-toc-modified-id="Submission-1.4"><span class="toc-item-num">1.4 </span>Submission</a></span></li><li><span><a href="#Summary" data-toc-modified-id="Summary-1.5"><span class="toc-item-num">1.5 </span>Summary</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
```
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import os
import time
import numpy as np
import pandas as pd
from typing import List, Tuple, Dict
from sklearn.model_selection import train_test_split
from keras import layers
from keras.models import Model
from keras.preprocessing.text import Tokenizer
from keras.utils.np_utils import to_categorical
from keras.preprocessing.sequence import pad_sequences
# prevent scientific notations
pd.set_option('display.float_format', lambda x: '%.3f' % x)
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,sklearn,keras
```
# Leveraging Pre-trained Word Embedding for Text Classification
There are two main ways to obtain word embeddings:
- Learn it from scratch: We specify a neural network architecture and learn the word embeddings jointly with the main task at hand (e.g. sentiment classification), i.e. we start off with randomly initialized word embeddings, and they get updated along with the rest of the model during training.
- Transfer Learning: The whole idea behind transfer learning is to avoid reinventing the wheel as much as possible. It gives us the capability to transfer knowledge that was gained/learned in some other task and use it to improve the learning of another related task. In practice, one way to do this is for the embedding part of the neural network architecture, we load some other embeddings that were trained on a different machine learning task than the one we are trying to solve and use that to bootstrap the process.
One area where transfer learning shines is when we have little training data available and using our data alone might not be enough to learn an appropriate task-specific embedding/features for our vocabulary. In this case, leveraging a word embedding that captures generic aspects of the language can prove to be beneficial from both a performance and a time perspective (i.e. we won't have to spend hours/days training a model from scratch to achieve a similar performance). Keep in mind that, as with all machine learning applications, everything is still all about trial and error. What makes an embedding good depends heavily on the task at hand: the word embedding for a movie review sentiment classification model may look very different from that of a legal document classification model, as the semantics of the corpus vary between the two tasks.
## Data Preparation
We'll use the movie review sentiment analysis dataset from [Kaggle](https://www.kaggle.com/c/word2vec-nlp-tutorial/overview) for this example. It's a binary classification problem with AUC as the ultimate evaluation metric. The next few code chunks perform the usual text preprocessing, build up the word vocabulary and perform a train/test split.
```
data_dir = 'data'
submission_dir = 'submission'
input_path = os.path.join(data_dir, 'word2vec-nlp-tutorial', 'labeledTrainData.tsv')
df = pd.read_csv(input_path, delimiter='\t')
print(df.shape)
df.head()
raw_text = df['review'].iloc[0]
raw_text
import re
def clean_str(string: str) -> str:
string = re.sub(r"\\", "", string)
string = re.sub(r"\'", "", string)
string = re.sub(r"\"", "", string)
return string.strip().lower()
from bs4 import BeautifulSoup
def clean_text(df: pd.DataFrame,
text_col: str,
label_col: str) -> Tuple[List[str], List[int]]:
texts = []
labels = []
for raw_text, label in zip(df[text_col], df[label_col]):
text = BeautifulSoup(raw_text).get_text()
cleaned_text = clean_str(text)
texts.append(cleaned_text)
labels.append(label)
return texts, labels
text_col = 'review'
label_col = 'sentiment'
texts, labels = clean_text(df, text_col, label_col)
print('sample text: ', texts[0])
print('corresponding label:', labels[0])
random_state = 1234
val_split = 0.2
labels = to_categorical(labels)
texts_train, texts_val, y_train, y_val = train_test_split(
texts, labels,
test_size=val_split,
random_state=random_state)
print('labels shape:', labels.shape)
print('train size: ', len(texts_train))
print('validation size: ', len(texts_val))
max_num_words = 20000
tokenizer = Tokenizer(num_words=max_num_words, oov_token='<unk>')
tokenizer.fit_on_texts(texts_train)
print('Found %s unique tokens.' % len(tokenizer.word_index))
max_sequence_len = 1000
sequences_train = tokenizer.texts_to_sequences(texts_train)
x_train = pad_sequences(sequences_train, maxlen=max_sequence_len)
sequences_val = tokenizer.texts_to_sequences(texts_val)
x_val = pad_sequences(sequences_val, maxlen=max_sequence_len)
sequences_train[0][:5]
```
## Glove
There are many different pretrained word embeddings online. The one we'll be using is from [Glove](https://nlp.stanford.edu/projects/glove/). Others include, but are not limited to, [FastText](https://fasttext.cc/docs/en/crawl-vectors.html) and [bpemb](https://github.com/bheinzerling/bpemb).
If we look at the project's wiki page, we can find many different pretrained embeddings available for us to experiment with.
<img src="img/pretrained_weights.png" width="100%" height="100%">
```
import requests
from tqdm import tqdm
def download_glove(embedding_type: str='glove.6B.zip'):
"""
download GloVe word vector representations, this step may take a while
Parameters
----------
embedding_type : str, default 'glove.6B.zip'
Specifying different glove embeddings to download if not already there.
{'glove.6B.zip', 'glove.42B.300d.zip', 'glove.840B.300d.zip', 'glove.twitter.27B.zip'}
Be wary of the size. e.g. 'glove.6B.zip' is a 822 MB zipped, 2GB unzipped
"""
base_url = 'http://nlp.stanford.edu/data/'
if not os.path.isfile(embedding_type):
url = base_url + embedding_type
# the following section is a pretty generic http get request for
# saving large files, provides progress bars for checking progress
response = requests.get(url, stream=True)
response.raise_for_status()
content_len = response.headers.get('Content-Length')
total = int(content_len) if content_len is not None else 0
with tqdm(unit='B', total=total) as pbar, open(embedding_type, 'wb') as f:
for chunk in response.iter_content(chunk_size=1024):
if chunk:
pbar.update(len(chunk))
f.write(chunk)
if response.headers.get('Content-Type') == 'application/zip':
from zipfile import ZipFile
with ZipFile(embedding_type, 'r') as f:
f.extractall(embedding_type.strip('.zip'))
download_glove()
```
The way we'll leverage the pretrained embedding is to first read it in as a dictionary lookup, where the key is the word and the value is the corresponding word embedding. Then for each token in our vocabulary, we look it up in this dictionary to see if there's a pretrained embedding available: if there is, we use the pretrained embedding; if there isn't, we leave the embedding for this word in its original randomly initialized form.
The format for this particular pretrained embedding is that every line contains space-delimited values, where the first token is the word and the rest are its corresponding embedding values, e.g. the first line of the file looks like:
```
the -0.038194 -0.24487 0.72812 -0.39961 0.083172 0.043953 -0.39141 0.3344 -0.57545 0.087459 0.28787 -0.06731 0.30906 -0.26384 -0.13231 -0.20757 0.33395 -0.33848 -0.31743 -0.48336 0.1464 -0.37304 0.34577 0.052041 0.44946 -0.46971 0.02628 -0.54155 -0.15518 -0.14107 -0.039722 0.28277 0.14393 0.23464 -0.31021 0.086173 0.20397 0.52624 0.17164 -0.082378 -0.71787 -0.41531 0.20335 -0.12763 0.41367 0.55187 0.57908 -0.33477 -0.36559 -0.54857 -0.062892 0.26584 0.30205 0.99775 -0.80481 -3.0243 0.01254 -0.36942 2.2167 0.72201 -0.24978 0.92136 0.034514 0.46745 1.1079 -0.19358 -0.074575 0.23353 -0.052062 -0.22044 0.057162 -0.15806 -0.30798 -0.41625 0.37972 0.15006 -0.53212 -0.2055 -1.2526 0.071624 0.70565 0.49744 -0.42063 0.26148 -1.538 -0.30223 -0.073438 -0.28312 0.37104 -0.25217 0.016215 -0.017099 -0.38984 0.87424 -0.72569 -0.51058 -0.52028 -0.1459 0.8278 0.27062
```
```
def get_embedding_lookup(embedding_path) -> Dict[str, np.ndarray]:
embedding_lookup = {}
with open(embedding_path) as f:
for line in f:
values = line.split()
word = values[0]
coef = np.array(values[1:], dtype=np.float32)
embedding_lookup[word] = coef
return embedding_lookup
def get_pretrained_embedding(embedding_path: str,
index2word: Dict[int, str],
max_features: int) -> np.ndarray:
embedding_lookup = get_embedding_lookup(embedding_path)
pretrained_embedding = np.stack(list(embedding_lookup.values()))
embedding_dim = pretrained_embedding.shape[1]
embeddings = np.random.normal(pretrained_embedding.mean(),
pretrained_embedding.std(),
(max_features, embedding_dim)).astype(np.float32)
# we track how many tokens in our vocabulary exists in the pre-trained embedding,
# i.e. how many tokens has a pre-trained embedding from this particular file
n_found = 0
# the loop starts from 1 due to keras' Tokenizer reserves 0 for padding index
for i in range(1, max_features):
word = index2word[i]
embedding_vector = embedding_lookup.get(word)
if embedding_vector is not None:
embeddings[i] = embedding_vector
n_found += 1
print('number of words found:', n_found)
return embeddings
glove_path = os.path.join('glove.6B', 'glove.6B.100d.txt')
max_features = max_num_words + 1
pretrained_embedding = get_pretrained_embedding(glove_path, tokenizer.index_word, max_features)
pretrained_embedding.shape
```
## Model
To train our text classifier, we specify a 1D convolutional network. Our embedding layer can either be initialized randomly or loaded from a pre-trained embedding. Note that for the pre-trained embedding case, apart from loading the weights, we also "freeze" the embedding layer, i.e. we set its trainable attribute to False. This idea is often used in transfer learning: when parts of a model are pre-trained (in our case, only the Embedding layer) and parts of it are randomly initialized, the pre-trained part should ideally not be trained together with the randomly initialized part. The rationale is that large gradient updates triggered by the randomly initialized layers would be very disruptive to the pre-trained weights.
Once we have trained the randomly initialized weights for a few iterations, we can then un-freeze the layers that were loaded with pre-trained weights and update the weights of the entire model. The [keras documentation](https://keras.io/applications/#fine-tune-inceptionv3-on-a-new-set-of-classes) also provides an example of how to do this; although that example is for image models, the same idea applies here and is worth experimenting with (a small sketch of this un-freezing step is included after the first training run below).
```
def simple_text_cnn(max_sequence_len: int,
max_features: int,
num_classes: int,
optimizer: str='adam',
metrics: List[str]=['acc'],
pretrained_embedding: np.ndarray=None) -> Model:
sequence_input = layers.Input(shape=(max_sequence_len,), dtype='int32')
if pretrained_embedding is None:
embedded_sequences = layers.Embedding(max_features, 100,
name='embedding')(sequence_input)
else:
embedded_sequences = layers.Embedding(max_features, pretrained_embedding.shape[1],
weights=[pretrained_embedding],
name='embedding',
trainable=False)(sequence_input)
conv1 = layers.Conv1D(128, 5, activation='relu')(embedded_sequences)
pool1 = layers.MaxPooling1D(5)(conv1)
conv2 = layers.Conv1D(128, 5, activation='relu')(pool1)
pool2 = layers.MaxPooling1D(5)(conv2)
conv3 = layers.Conv1D(128, 5, activation='relu')(pool2)
pool3 = layers.MaxPooling1D(35)(conv3)
flatten = layers.Flatten()(pool3)
dense = layers.Dense(128, activation='relu')(flatten)
preds = layers.Dense(num_classes, activation='softmax')(dense)
model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=metrics)
return model
```
### Model with Pretrained Embedding
```
num_classes = 2
model1 = simple_text_cnn(max_sequence_len, max_features, num_classes,
pretrained_embedding=pretrained_embedding)
model1.summary()
```
We can confirm whether our embedding layer is trainable by looping through each layer and checking the trainable attribute.
```
df_model_layers = pd.DataFrame(
[(layer.name, layer.trainable, layer.count_params()) for layer in model1.layers],
columns=['layer', 'trainable', 'n_params']
)
df_model_layers
# time : 70
# test performance : auc 0.93212
start = time.time()
history1 = model1.fit(x_train, y_train,
validation_data=(x_val, y_val),
batch_size=128,
epochs=8)
end = time.time()
elapse1 = end - start
elapse1
```
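As an optional follow-up (a sketch of the un-freezing idea mentioned earlier, not something the original notebook runs), we could now un-freeze the pretrained embedding layer and continue training the whole model with a smaller learning rate. The layer name `'embedding'` matches the name given in `simple_text_cnn`; the optimizer settings and epoch count here are assumptions to tune.
```
# Hypothetical fine-tuning step: un-freeze the pretrained embedding and keep training.
from keras.optimizers import Adam

embedding_layer = model1.get_layer('embedding')
embedding_layer.trainable = True  # allow the pretrained weights to be updated

# re-compile so the trainability change takes effect; a small learning rate
# helps avoid large, disruptive updates to the pretrained weights
model1.compile(loss='categorical_crossentropy',
               optimizer=Adam(lr=1e-4),
               metrics=['acc'])

model1.fit(x_train, y_train,
           validation_data=(x_val, y_val),
           batch_size=128,
           epochs=2)
```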
### Model without Pretrained Embedding
```
num_classes = 2
model2 = simple_text_cnn(max_sequence_len, max_features, num_classes)
model2.summary()
# time : 86 secs
# test performance : auc 0.92310
start = time.time()
history1 = model2.fit(x_train, y_train,
validation_data=(x_val, y_val),
batch_size=128,
epochs=8)
end = time.time()
elapse1 = end - start
elapse1
```
## Submission
For the submission section, we read in and preprocess the test data provided by the competition, then generate the predicted probability column for both the model that uses pretrained embedding and one that doesn't to compare their performance.
```
input_path = os.path.join(data_dir, 'word2vec-nlp-tutorial', 'testData.tsv')
df_test = pd.read_csv(input_path, delimiter='\t')
print(df_test.shape)
df_test.head()
def clean_text_without_label(df: pd.DataFrame, text_col: str) -> List[str]:
texts = []
for raw_text in df[text_col]:
text = BeautifulSoup(raw_text).get_text()
cleaned_text = clean_str(text)
texts.append(cleaned_text)
return texts
texts_test = clean_text_without_label(df_test, text_col)
sequences_test = tokenizer.texts_to_sequences(texts_test)
x_test = pad_sequences(sequences_test, maxlen=max_sequence_len)
len(x_test)
def create_submission(ids, predictions, ids_col, label_col, submission_path) -> pd.DataFrame:
df_submission = pd.DataFrame({
ids_col: ids,
label_col: predictions
}, columns=[ids_col, label_col])
if submission_path is not None:
# create the directory if need be, e.g. if the submission_path = submission/submission.csv
# we'll create the submission directory first if it doesn't exist
directory = os.path.split(submission_path)[0]
        if directory not in ('', '.') and not os.path.isdir(directory):
os.makedirs(directory, exist_ok=True)
df_submission.to_csv(submission_path, index=False, header=True)
return df_submission
ids_col = 'id'
label_col = 'sentiment'
ids = df_test[ids_col]
models = {
'pretrained_embedding': model1,
'without_pretrained_embedding': model2
}
for model_name, model in models.items():
print('generating submission for: ', model_name)
submission_path = os.path.join(submission_dir, '{}_submission.csv'.format(model_name))
predictions = model.predict(x_test, verbose=1)[:, 1]
df_submission = create_submission(ids, predictions, ids_col, label_col, submission_path)
# sanity check to make sure the size and the output of the submission makes sense
print(df_submission.shape)
df_submission.head()
```
## Summary
In this article, we took a look at how to leverage pre-trained word embeddings for our text classification task. There are also various Kaggle kernels [here](https://www.kaggle.com/sudalairajkumar/a-look-at-different-embeddings) and [here](https://www.kaggle.com/sbongo/do-pretrained-embeddings-give-you-the-extra-edge) that experiment with different pre-trained embeddings, or even an ensemble of models each using a different pre-trained embedding, on various text classification tasks to see if they give us an edge.
# Reference
- [Blog: Text Classification, Part I - Convolutional Networks](https://richliao.github.io/supervised/classification/2016/11/26/textclassifier-convolutional/)
- [Blog: Using pre-trained word embeddings in a Keras model](https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html)
- [Jupyter Notebook - Deep Learning with Python - Using Word Embeddings](https://nbviewer.jupyter.org/github/fchollet/deep-learning-with-python-notebooks/blob/master/6.1-using-word-embeddings.ipynb)
| github_jupyter |
<p><font size="6"><b>04 - Pandas: Working with time series data</b></font></p>
> *© 2021, Joris Van den Bossche and Stijn Van Hoey (<mailto:[email protected]>, <mailto:[email protected]>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*
---
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
```
# Introduction: `datetime` module
Standard Python contains the `datetime` module to handle date and time data:
```
import datetime
dt = datetime.datetime(year=2016, month=12, day=19, hour=13, minute=30)
dt
print(dt) # .day,...
print(dt.strftime("%d %B %Y"))
```
# Dates and times in pandas
## The ``Timestamp`` object
Pandas has its own date and time objects, which are compatible with the standard `datetime` objects, but provide some more functionality to work with.
The `Timestamp` object can also be constructed from a string:
```
ts = pd.Timestamp('2016-12-19')
ts
```
Like with `datetime.datetime` objects, there are several useful attributes available on the `Timestamp`. For example, we can get the month (experiment with tab completion!):
```
ts.month
```
There is also a `Timedelta` type, which can e.g. be used to add intervals of time:
```
ts + pd.Timedelta('5 days')
```
## Parsing datetime strings

Unfortunately, when working with real world data, you encounter many different `datetime` formats. Most of the time when you have to deal with them, they come in text format, e.g. from a `CSV` file. To work with those data in Pandas, we first have to *parse* the strings to actual `Timestamp` objects.
<div class="alert alert-info">
<b>REMEMBER</b>: <br><br>
To convert string formatted dates to Timestamp objects: use the `pandas.to_datetime` function
</div>
```
pd.to_datetime("2016-12-09")
pd.to_datetime("09/12/2016")
pd.to_datetime("09/12/2016", dayfirst=True)
pd.to_datetime("09/12/2016", format="%d/%m/%Y")
```
For a detailed overview of how to specify the `format` string, see the table in the Python documentation: https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior
## `Timestamp` data in a Series or DataFrame column
```
s = pd.Series(['2016-12-09 10:00:00', '2016-12-09 11:00:00', '2016-12-09 12:00:00'])
s
```
The `to_datetime` function can also be used to convert a full series of strings:
```
ts = pd.to_datetime(s)
ts
```
Notice the data type of this series has changed: the `datetime64[ns]` dtype. This indicates that we have a series of actual datetime values.
The same attributes as on single `Timestamp`s are also available on a Series with datetime data, using the **`.dt`** accessor:
```
ts.dt.hour
ts.dt.dayofweek
```
To quickly construct some regular time series data, the [``pd.date_range``](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.date_range.html) function comes in handy:
```
pd.Series(pd.date_range(start="2016-01-01", periods=10, freq='3H'))
```
# Time series data: `Timestamp` in the index
## River discharge example data
For the following demonstration of the time series functionality, we use a sample of discharge data of the Maarkebeek (Flanders) with 3 hour averaged values, derived from the [Waterinfo website](https://www.waterinfo.be/).
```
data = pd.read_csv("data/vmm_flowdata.csv")
data.head()
```
We already know how to parse a date column with Pandas:
```
data['Time'] = pd.to_datetime(data['Time'])
```
With `set_index('datetime')`, we set the column with datetime values as the index, which can be done by both `Series` and `DataFrame`.
```
data = data.set_index("Time")
data
```
The steps above are provided as built-in functionality of `read_csv`:
```
data = pd.read_csv("data/vmm_flowdata.csv", index_col=0, parse_dates=True)
```
<div class="alert alert-info">
<b>REMEMBER</b>: <br><br>
`pd.read_csv` provides a lot of built-in functionality to support this kind of transformation when reading in a file! Check the help of the read_csv function...
</div>
## The DatetimeIndex
When we ensure the DataFrame has a `DatetimeIndex`, time-series related functionality becomes available:
```
data.index
```
Similar to a Series with datetime data, there are some attributes of the timestamp values available:
```
data.index.day
data.index.dayofyear
data.index.year
```
The `plot` method will also adapt its labels (when you zoom in, you can see the different levels of detail of the datetime labels):
```
%matplotlib widget
data.plot()
# switching back to static inline plots (the default)
%matplotlib inline
```
We have too much data to sensibly plot on one figure. Let's see how we can easily select part of the data or aggregate the data to other time resolutions in the next sections.
## Selecting data from a time series
We can use label based indexing on a timeseries as expected:
```
data[pd.Timestamp("2012-01-01 09:00"):pd.Timestamp("2012-01-01 19:00")]
```
But, for convenience, indexing a time series also works with strings:
```
data["2012-01-01 09:00":"2012-01-01 19:00"]
```
A nice feature is **"partial string" indexing**, where we can do implicit slicing by providing a partial datetime string.
E.g. all data of 2013:
```
data['2013':]
```
Or all data of January up to March 2012:
```
data['2012-01':'2012-03']
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>select all data starting from 2012</li>
</ul>
</div>
```
data['2012':]
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>select all data in January for all different years</li>
</ul>
</div>
```
data[data.index.month == 1]
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>select all data in April, May and June for all different years</li>
</ul>
</div>
```
data[data.index.month.isin([4, 5, 6])]
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>select all 'daytime' data (between 8h and 20h) for all days</li>
</ul>
</div>
```
data[(data.index.hour > 8) & (data.index.hour < 20)]
```
## The power of pandas: `resample`
A very powerful method is **`resample`: converting the frequency of the time series** (e.g. from hourly to daily data).
The time series has a frequency of 3 hours; I want to change this to daily:
```
data.resample('D').mean().head()
```
Other mathematical methods can also be specified:
```
data.resample('D').max().head()
```
<div class="alert alert-info">
<b>REMEMBER</b>: <br><br>
The string to specify the new time frequency: http://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases <br>
These strings can also be combined with numbers, eg `'10D'`...
</div>
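For example (an illustrative cell, reusing the `data` DataFrame loaded above), a number combined with the `'D'` alias resamples to 10-day bins:
```
# resample to 10-day bins by combining a number with the offset alias 'D'
data.resample('10D').mean().plot()
```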
```
data.resample('M').mean().plot() # 10D
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the monthly standard deviation of the columns</li>
</ul>
</div>
```
data.resample('M').std().plot() # 'A'
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Plot the monthly mean and median values for the years 2011-2012 for 'L06_347'<br><br></li>
</ul>
__Note__ Did you know you can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.agg.html"><code>agg</code></a> to derive multiple statistics at the same time?
</div>
```
subset = data['2011':'2012']['L06_347']
subset.resample('M').agg(['mean', 'median']).plot()
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>plot the monthly mininum and maximum daily average value of the 'LS06_348' column</li>
</ul>
</div>
```
daily = data['LS06_348'].resample('D').mean() # daily averages calculated
daily.resample('M').agg(['min', 'max']).plot() # monthly minimum and maximum values of these daily averages
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Make a bar plot of the mean of the stations in year of 2013</li>
</ul>
</div>
```
data['2013':'2013'].mean().plot(kind='barh')
```
| github_jupyter |
Demonstrating how to get DonkeyCar Tub files into a PyTorch/fastai DataBlock
```
from fastai.data.all import *
from fastai.vision.all import *
from fastai.data.transforms import ColReader, Normalize, RandomSplitter
import torch
from torch import nn
from torch.nn import functional as F
from donkeycar.parts.tub_v2 import Tub
import pandas as pd
from pathlib import Path
from malpi.dk.train import preprocessFileList, get_data, get_learner, get_autoencoder, train_autoencoder
def learn_resnet():
learn2 = cnn_learner(dls, resnet18, loss_func=MSELossFlat(), metrics=[rmse], cbs=ActivationStats(with_hist=True))
learn2.fine_tune(5)
learn2.recorder.plot_loss()
learn2.show_results(figsize=(20,10))
```
The below code is modified from: https://github.com/cmasenas/fastai_navigation_training/blob/master/fastai_train.ipynb.
TODO: Figure out how to have multiple output heads
```
def test_one_transform(name, inputs, df_all, batch_tfms, item_tfms, epochs, lr):
dls = get_data(inputs, df_all=df_all, batch_tfms=batch_tfms, item_tfms=item_tfms)
callbacks = [CSVLogger(f"Transform_{name}.csv", append=True)]
learn = get_learner(dls)
#learn.no_logging() #Try this to block logging when doing many training test runs
learn.fit_one_cycle(epochs, lr, cbs=callbacks)
#learn.recorder.plot_loss()
#learn.show_results(figsize=(20,10))
# Train multiple times using a list of Transforms, one at a time.
# Compare mean/stdev of best validation loss (or rmse?) for each Transform
df_all = get_dataframe("track1_warehouse.txt")
transforms = [None]
transforms.extend( [*aug_transforms(do_flip=False, size=128)] )
for tfm in transforms:
name = "None" if tfm is None else str(tfm.__class__.__name__)
print( f"Transform: {name}" )
for i in range(5):
print( f" Run {i+1}" )
        test_one_transform(name, "track1_warehouse.txt", df_all, tfm, None, 5, 3e-3)
def visualize_learner( learn ):
#dls=nav.dataloaders(df, bs=512)
preds, tgt = learn.get_preds(dl=[dls.one_batch()])
plt.title("Target vs Predicted Steering", fontsize=18, y=1.0)
plt.xlabel("Target", fontsize=14, labelpad=15)
plt.ylabel("Predicted", fontsize=14, labelpad=15)
plt.plot(tgt.T[0], preds.T[0],'bo')
plt.plot([-1,1],[-1,1],'r', linewidth = 4)
plt.show()
plt.title("Target vs Predicted Throttle", fontsize=18, y=1.02)
plt.xlabel("Target", fontsize=14, labelpad=15)
plt.ylabel("Predicted", fontsize=14, labelpad=15)
plt.plot(tgt.T[1], preds.T[1],'bo')
plt.plot([0,1],[0,1],'r', linewidth = 4)
plt.show()
learn.export()
df_all = get_dataframe("track1_warehouse.txt")
dls = get_data("track1_warehouse.txt", df_all=df_all, batch_tfms=None)
learn = get_learner(dls)
learn.fit_one_cycle(15, 3e-3)
visualize_learner(learn)
learn.export('models/track1_v2.pkl')
def clear_pyplot_memory():
plt.clf()
plt.cla()
plt.close()
df_all = get_dataframe("track1_warehouse.txt")
transforms=[None,
RandomResizedCrop(128,p=1.0,min_scale=0.5,ratio=(0.9,1.1)),
RandomErasing(sh=0.2, max_count=6,p=1.0),
Brightness(max_lighting=0.4, p=1.0),
Contrast(max_lighting=0.4, p=1.0),
Saturation(max_lighting=0.4, p=1.0)]
#dls = get_data(None, df_all, item_tfms=item_tfms, batch_tfms=batch_tfms)
for tfm in transforms:
name = "None" if tfm is None else str(tfm.__class__.__name__)
if name == "RandomResizedCrop":
item_tfms = tfm
batch_tfms = None
else:
item_tfms = None
batch_tfms = tfm
dls = get_data("track1_warehouse.txt",
df_all=df_all,
item_tfms=item_tfms, batch_tfms=batch_tfms)
dls.show_batch(unique=True, show=True)
plt.savefig( f'Transform_{name}.png' )
#clear_pyplot_memory()
learn, dls = train_autoencoder( "tracks_all.txt", 5, 3e-3, name="ae_test1", verbose=False )
learn.recorder.plot_loss()
learn.show_results(figsize=(20,10))
#plt.savefig(name + '.png')
idx = 0
idx += 1
im1 = dls.one_batch()[0]
im1_out = learn.model.forward(im1)
show_image(im1[idx])
show_image(im1_out[idx])
from fastai.metrics import rmse
from typing import List, Callable, Union, Any, TypeVar, Tuple
Tensor = TypeVar('torch.tensor')
from abc import abstractmethod
class BaseVAE(nn.Module):
def __init__(self) -> None:
super(BaseVAE, self).__init__()
def encode(self, input: Tensor) -> List[Tensor]:
raise NotImplementedError
def decode(self, input: Tensor) -> Any:
raise NotImplementedError
def sample(self, batch_size:int, current_device: int, **kwargs) -> Tensor:
raise NotImplementedError
def generate(self, x: Tensor, **kwargs) -> Tensor:
raise NotImplementedError
@abstractmethod
def forward(self, *inputs: Tensor) -> Tensor:
pass
@abstractmethod
def loss_function(self, *inputs: Any, **kwargs) -> Tensor:
pass
class VanillaVAE(BaseVAE):
def __init__(self,
in_channels: int,
latent_dim: int,
hidden_dims: List = None,
**kwargs) -> None:
super(VanillaVAE, self).__init__()
self.latent_dim = latent_dim
self.kld_weight = 0.00025 # TODO calculate based on: #al_img.shape[0]/ self.num_train_imgs
modules = []
if hidden_dims is None:
hidden_dims = [32, 64, 128, 256, 512]
# Build Encoder
for h_dim in hidden_dims:
modules.append(
nn.Sequential(
nn.Conv2d(in_channels, out_channels=h_dim,
kernel_size= 3, stride= 2, padding = 1),
nn.BatchNorm2d(h_dim),
nn.LeakyReLU())
)
in_channels = h_dim
self.encoder = nn.Sequential(*modules)
self.fc_mu = nn.Linear(hidden_dims[-1]*4, latent_dim)
self.fc_var = nn.Linear(hidden_dims[-1]*4, latent_dim)
# Build Decoder
modules = []
self.decoder_input = nn.Linear(latent_dim, hidden_dims[-1] * 4)
hidden_dims.reverse()
for i in range(len(hidden_dims) - 1):
modules.append(
nn.Sequential(
nn.ConvTranspose2d(hidden_dims[i],
hidden_dims[i + 1],
kernel_size=3,
stride = 2,
padding=1,
output_padding=1),
nn.BatchNorm2d(hidden_dims[i + 1]),
nn.LeakyReLU())
)
self.decoder = nn.Sequential(*modules)
self.final_layer = nn.Sequential(
nn.ConvTranspose2d(hidden_dims[-1],
hidden_dims[-1],
kernel_size=3,
stride=2,
padding=1,
output_padding=1),
nn.BatchNorm2d(hidden_dims[-1]),
nn.LeakyReLU(),
nn.Conv2d(hidden_dims[-1], out_channels= 3,
kernel_size= 3, padding= 1),
nn.Tanh())
def encode(self, input: Tensor) -> List[Tensor]:
"""
Encodes the input by passing through the encoder network
and returns the latent codes.
:param input: (Tensor) Input tensor to encoder [N x C x H x W]
:return: (Tensor) List of latent codes
"""
result = self.encoder(input)
result = torch.flatten(result, start_dim=1)
# Split the result into mu and var components
# of the latent Gaussian distribution
mu = self.fc_mu(result)
log_var = self.fc_var(result)
return [mu, log_var]
def decode(self, z: Tensor) -> Tensor:
"""
Maps the given latent codes
onto the image space.
:param z: (Tensor) [B x D]
:return: (Tensor) [B x C x H x W]
"""
result = self.decoder_input(z)
result = result.view(-1, 512, 2, 2)
result = self.decoder(result)
result = self.final_layer(result)
return result
def reparameterize(self, mu: Tensor, logvar: Tensor) -> Tensor:
"""
Reparameterization trick to sample from N(mu, var) from
N(0,1).
:param mu: (Tensor) Mean of the latent Gaussian [B x D]
:param logvar: (Tensor) Standard deviation of the latent Gaussian [B x D]
:return: (Tensor) [B x D]
"""
std = torch.exp(0.5 * logvar)
eps = torch.randn_like(std)
return eps * std + mu
def forward(self, input: Tensor, **kwargs) -> List[Tensor]:
mu, log_var = self.encode(input)
z = self.reparameterize(mu, log_var)
return [self.decode(z), input, mu, log_var]
def loss_function(self,
*args,
**kwargs) -> dict:
"""
Computes the VAE loss function.
KL(N(\mu, \sigma), N(0, 1)) = \log \frac{1}{\sigma} + \frac{\sigma^2 + \mu^2}{2} - \frac{1}{2}
:param args:
:param kwargs:
:return:
"""
#print( f"loss_function: {len(args[0])} {type(args[0][0])} {args[1].shape}" )
recons = args[0][0]
input = args[1]
mu = args[0][2]
log_var = args[0][3]
kld_weight = self.kld_weight # kwargs['M_N'] # Account for the minibatch samples from the dataset
recons_loss =F.mse_loss(recons, input)
kld_loss = torch.mean(-0.5 * torch.sum(1 + log_var - mu ** 2 - log_var.exp(), dim = 1), dim = 0)
loss = recons_loss + kld_weight * kld_loss
return loss
#return {'loss': loss, 'Reconstruction_Loss':recons_loss.detach(), 'KLD':-kld_loss.detach()}
def sample(self,
num_samples:int,
current_device: int, **kwargs) -> Tensor:
"""
Samples from the latent space and return the corresponding
image space map.
:param num_samples: (Int) Number of samples
:param current_device: (Int) Device to run the model
:return: (Tensor)
"""
z = torch.randn(num_samples,
self.latent_dim)
z = z.to(current_device)
samples = self.decode(z)
return samples
def generate(self, x: Tensor, **kwargs) -> Tensor:
"""
Given an input image x, returns the reconstructed image
:param x: (Tensor) [B x C x H x W]
:return: (Tensor) [B x C x H x W]
"""
return self.forward(x)[0]
input_file="track1_warehouse.txt"
item_tfms = [Resize(64,method="squish")]
dls = get_data(input_file, item_tfms=item_tfms, verbose=False, autoencoder=True)
vae = VanillaVAE(3, 64)
learn = Learner(dls, vae, loss_func=vae.loss_function)
learn.fit_one_cycle(5, 3e-3)
vae
```
| github_jupyter |
# Popping multiple items from a Redis list in one operation
```
# Connect to Redis
import redis
client = redis.Redis(host='122.51.39.219', port=6379, password='leftright123')
# Note:
# this Redis instance is for practice only; it is wiped every hour, so do not store important data in it.
# Prepare the data
client.lpush('test_batch_pop', *list(range(10000)))
# Reading the items one by one is very slow
import time
start = time.time()
while True:
    data = client.lpop('test_batch_pop')
    if not data:
        break
end = time.time()
delta = end - start
print(f'Reading 10,000 items in a loop with lpop took: {delta}')
```
## Why is reading 10,000 items with `lpop` so slow?
Because `lpop` pops only one item at a time, and every pop is a separate round trip to Redis. Most of the time is wasted on network transfers.
## How can we pop a batch of items and return them in a single network request?
First use `lrange` to read the items, then use `ltrim` to delete the items that were just read.
```
# Recap: how lrange works
datas = client.lrange('test_batch_pop', 0, 9)  # read the first 10 items
datas
# How ltrim works
client.ltrim('test_batch_pop', 10, -1)  # delete the first 10 items
# Check that the items were indeed deleted
length = client.llen('test_batch_pop')
print(f'The list now has {length} items left')
datas = client.lrange('test_batch_pop', 0, 9)  # read the first 10 items
datas
# An approach that *looks* correct
def batch_pop_fake(key, n):
    datas = client.lrange(key, 0, n - 1)
    client.ltrim(key, n, -1)
    return datas
batch_pop_fake('test_batch_pop', 10)
client.lrange('test_batch_pop', 0, 9)
```
## What is wrong with this approach?
When multiple processes call batch_pop_fake at the same time, `lrange` and `ltrim` are executed as two separate statements and therefore as two separate network requests. If process A has just executed `lrange` but has not yet executed `ltrim` when process B runs its `lrange`, both processes receive the same data.
Then, after B has fetched its data, A's `ltrim` arrives and Redis deletes the first n items; next B's `ltrim` arrives and deletes another n items. The end result is that A and B both got the same n items, yet 2n items were deleted.
## Use a pipeline to pack multiple commands into one request
A pipeline is used as follows:
```python
import redis
client = redis.Redis()
pipe = client.pipeline()
pipe.lrange('key', 0, n - 1)
pipe.ltrim('key', n, -1)
result = pipe.execute()
```
pipe.execute() returns a list whose entries correspond, in order, to the results of each queued command. In the example above, result is a two-element list: the first element is the return value of lrange, and the second is True, indicating that ltrim executed successfully.
```
# A batch-pop function that actually works
def batch_pop_real(key, n):
    pipe = client.pipeline()
    pipe.lrange(key, 0, n - 1)
    pipe.ltrim(key, n, -1)
    result = pipe.execute()
    return result[0]
# Clear the list and push 10,000 items again
client.delete('test_batch_pop')
client.lpush('test_batch_pop', *list(range(10000)))
start = time.time()
while True:
    datas = batch_pop_real('test_batch_pop', 1000)
    if not datas:
        break
    for data in datas:
        pass
end = time.time()
print(f'Popping 10,000 items in batches took: {end - start}')
client.llen('test_batch_pop')
```
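As a closing aside (not part of the original tutorial): Redis 6.2 added a `COUNT` argument to `LPOP`, and recent versions of redis-py expose it, so on a new enough server the batching can be done natively without `lrange`/`ltrim`. Treat the exact server and client version requirements as assumptions to verify.
```
# Sketch assuming Redis >= 6.2 and a recent redis-py: pop up to 1000 items in one call.
datas = client.lpop('test_batch_pop', 1000)  # returns a list, or None if the key is empty
print(len(datas) if datas else 0)
```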



| github_jupyter |
```
# import the required libraries
%matplotlib inline
import random
import tsfresh
import os
import math
from scipy import stats
from scipy.spatial.distance import pdist
from math import sqrt, log, floor
from fastdtw import fastdtw
import ipywidgets as widgets
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import pandas as pd
import seaborn as sns
from statistics import mean
from scipy.spatial.distance import euclidean
import scipy.cluster.hierarchy as hac
from scipy.cluster.hierarchy import fcluster
from sklearn.metrics.pairwise import pairwise_distances
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.manifold import TSNE
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score, silhouette_score, silhouette_samples
from sklearn.metrics import mean_squared_error
from scipy.spatial import distance
sns.set(style='white')
# "fix" the randomness for reproducibility
random.seed(42)
!pip install tsfresh
```
### Dataset
The data are time series (weekly Dengue case counts) from different districts of Paraguay.
```
path = "./data/Notificaciones/"
filename_read = os.path.join(path,"normalizado.csv")
notificaciones = pd.read_csv(filename_read,delimiter=",",engine='python')
notificaciones.shape
listaMunicp = notificaciones['distrito_nombre'].tolist()
listaMunicp = list(dict.fromkeys(listaMunicp))
print('There are', len(listaMunicp), 'districts')
listaMunicp.sort()
print(listaMunicp)
```
Next we take the time series we read in and look at what they contain.
```
timeSeries = pd.DataFrame()
for muni in listaMunicp:
municipio=notificaciones['distrito_nombre']==muni
notif_x_municp=notificaciones[municipio]
notif_x_municp = notif_x_municp.reset_index(drop=True)
notif_x_municp = notif_x_municp['incidencia']
notif_x_municp = notif_x_municp.replace('nan', np.nan).fillna(0.000001)
notif_x_municp = notif_x_municp.replace([np.inf, -np.inf], np.nan).fillna(0.000001)
timeSeries = timeSeries.append(notif_x_municp)
ax = sns.tsplot(ax=None, data=notif_x_municp.values, err_style="unit_traces")
plt.show()
#timeseries shape
n=217
timeSeries.shape
timeSeries.describe()
```
### Cluster analysis (Clustering)
Clustering is an important process within machine learning. It plays a fundamental role in allowing machine-learning algorithms to train on and properly understand the data they work with. Its main purpose is to group sets of unlabeled objects in order to build subsets of the data known as clusters. Each cluster is made up of a collection of objects or data points that, in terms of the analysis, are similar to one another, but that have distinguishing features with respect to other objects in the dataset, which may in turn form separate clusters.

Aunque los datos no necesariamente son tan fáciles de agrupar

### Similarity metrics
To measure how similar (or dissimilar) individuals are, there is an enormous number of similarity and dissimilarity (or divergence) indices. All of them have different properties and uses, and we must be aware of these to apply them correctly to the case at hand.
Most of these indices are either indicators based on distance (treating the individuals as vectors in the space of the variables), where a large distance between two individuals indicates a high degree of dissimilarity; indicators based on correlation coefficients; or indicators based on tables recording the presence or absence of a set of attributes.
Below we define the functions for:
* Euclidean distance
* Root mean squared error
* Fast Dynamic Time Warping
* Pearson correlation, and
* Spearman correlation.
There are many other metrics, and which one to use depends on the nature of each problem. For example, *Fast Dynamic Time Warping* is a similarity measure designed specifically for time series.
```
#Euclidean
def euclidean(x, y):
r=np.linalg.norm(x-y)
if math.isnan(r):
r=1
#print(r)
return r
#RMSE
def rmse(x, y):
r=sqrt(mean_squared_error(x,y))
if math.isnan(r):
r=1
#print(r)
return r
#Fast Dynamic time warping
def fast_DTW(x, y):
r, _ = fastdtw(x, y, dist=euclidean)
if math.isnan(r):
r=1
#print(r)
return r
#Correlation
def corr(x, y):
r=np.dot(x-mean(x),y-mean(y))/((np.linalg.norm(x-mean(x)))*(np.linalg.norm(y-mean(y))))
if math.isnan(r):
r=0
#print(r)
return 1 - r
#Spearman
def scorr(x, y):
r = stats.spearmanr(x, y)[0]
if math.isnan(r):
r=0
#print(r)
return 1 - r
# compute distances using LCSS
# function for LCSS computation
# based on implementation from
# https://rosettacode.org/wiki/Longest_common_subsequence
def lcs(a, b):
lengths = [[0 for j in range(len(b)+1)] for i in range(len(a)+1)]
# row 0 and column 0 are initialized to 0 already
for i, x in enumerate(a):
for j, y in enumerate(b):
if x == y:
lengths[i+1][j+1] = lengths[i][j] + 1
else:
lengths[i+1][j+1] = max(lengths[i+1][j], lengths[i][j+1])
x, y = len(a), len(b)
result = lengths[x][y]
return result
def discretise(x):
return int(x * 10)
def multidim_lcs(a, b):
a = a.applymap(discretise)
b = b.applymap(discretise)
rows, dims = a.shape
lcss = [lcs(a[i+2], b[i+2]) for i in range(dims)]
return 1 - sum(lcss) / (rows * dims)
# Distances for k-means
#Euclidean
euclidean_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
euclidean_dist[i,j] = euclidean(timeSeries.iloc[i].values.flatten(), timeSeries.iloc[j].values.flatten())
#RMSE
rmse_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
rmse_dist[i,j] = rmse(timeSeries.iloc[i].values.flatten(), timeSeries.iloc[j].values.flatten())
#Corr
corr_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
corr_dist[i,j] = corr(timeSeries.iloc[i].values.flatten(), timeSeries.iloc[j].values.flatten())
#scorr
scorr_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
scorr_dist[i,j] = scorr(timeSeries.iloc[i].values.flatten(), timeSeries.iloc[j].values.flatten())
#DTW
dtw_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
dtw_dist[i,j] = fast_DTW(timeSeries.iloc[i].values.flatten(), timeSeries.iloc[j].values.flatten())
```
### Determining the number of clusters to form
Most clustering techniques require the number of clusters to form as an *input*, so we run tests with different numbers of clusters and keep the one that yields the lowest overall error. To measure that error we use the **Silhouette score**.
The **Silhouette score** can be used to study the separation between the resulting clusters, especially when there is no prior knowledge of the true group of each object, which is the most common situation in real applications.
The Silhouette score $s(i)$ is computed as:
\begin{equation}
s(i)=\dfrac{b(i)-a(i)}{max(b(i),a(i))}
\end{equation}
Let $a(i)$ be the mean distance from point $(i)$ to all other points in the cluster it was assigned to ($A$). We can interpret $a(i)$ as how well the point is assigned to its cluster: the lower the value, the better the assignment.
Similarly, let $b(i)$ be the mean distance from point $(i)$ to the points of its nearest neighbouring cluster ($B$). Cluster ($B$) is the cluster that point $(i)$ is not assigned to but whose distance is the smallest among all other clusters. $s(i)$ lies in the range [-1, 1].
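As a small illustration (an addition, not part of the original notebook), the per-point silhouette values $s(i)$ can be computed with scikit-learn and averaged to recover the global score; the `euclidean_dist` matrix is the one built earlier, and the 9 clusters match the value chosen further below.
```
# Minimal sketch: per-point silhouette values s(i), using the same inputs as the experiments below.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score

labels = KMeans(n_clusters=9).fit_predict(euclidean_dist)   # 9 clusters, as selected below
s_values = silhouette_samples(euclidean_dist, labels)       # one s(i) per series
print('mean s(i):       ', s_values.mean())                 # equals the global silhouette score
print('silhouette_score:', silhouette_score(euclidean_dist, labels))
```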
```
from yellowbrick.cluster import KElbowVisualizer
model = AgglomerativeClustering()
visualizer = KElbowVisualizer(model, k=(3,20),metric='distortion', timings=False)
visualizer.fit(rmse_dist) # Fit the data to the visualizer
visualizer.show() # Finalize and render the figure
```
So we will form 9 clusters.
```
k=9
```
## Clustering techniques
### K-means
The goal of this algorithm is to find "K" groups (clusters) in the raw data. The algorithm works iteratively, assigning each "point" (each row of our input forms a coordinate) to one of the "K" groups based on its characteristics; points are grouped according to the similarity of their features (the columns). Running the algorithm yields:
* The "centroids" of each group, which are the "coordinates" of each of the K clusters and can be used to label new samples.
* Labels for the training dataset, each label belonging to one of the K groups formed.
The groups are defined "organically", i.e. their positions are adjusted at every iteration of the process until the algorithm converges. Once the centroids have been found, we should analyse them to see which characteristics make each one unique compared with the other groups.
Visually, the data points end up grouped around their nearest centroid (often drawn as a star): the algorithm initializes the centroids randomly and adjusts them at each iteration, and the points closest to a given centroid belong to that group.
### Hierarchical clustering
The hierarchical clustering algorithm groups the data based on the distance between each pair of points, seeking to make the data within a cluster as similar as possible to each other.
In a graphical representation, the elements end up nested in tree-shaped hierarchies (a dendrogram).
### DBSCAN
Density-based spatial clustering of applications with noise (DBSCAN) is a data clustering algorithm. It is a density-based clustering algorithm because it finds a number of clusters starting from an estimate of the density distribution of the corresponding nodes. DBSCAN is one of the most widely used and cited clustering algorithms in the scientific literature.
In the usual illustration, points marked in red are core points; yellow points are density-reachable from the red ones and density-connected to them, and belong to the same cluster; a blue point is a noise point that is neither a core point nor density-reachable.
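As a side note (an addition, not part of the original experiments), scikit-learn's `DBSCAN` can also consume a precomputed pairwise distance matrix directly via `metric='precomputed'`, which fits naturally with the distance matrices built above; the `eps` and `min_samples` values here are illustrative and would need tuning.
```
# Minimal sketch: DBSCAN on a precomputed distance matrix (here the FastDTW distances).
from sklearn.cluster import DBSCAN

db = DBSCAN(eps=3, min_samples=2, metric='precomputed')
labels = db.fit_predict(dtw_dist)
print('clusters found:', len(set(labels)) - (1 if -1 in labels else 0))
print('noise points:  ', list(labels).count(-1))
```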
```
# Experiments
print('Silhouette coefficient')
#HAC + euclidean
Z = hac.linkage(timeSeries, method='complete', metric=euclidean)
clusters = fcluster(Z, k, criterion='maxclust')
print("HAC + euclidean distance: ",silhouette_score(euclidean_dist, clusters))
#HAC + rmse
Z = hac.linkage(timeSeries, method='complete', metric=rmse)
clusters = fcluster(Z, k, criterion='maxclust')
print("HAC + rmse distance: ",silhouette_score( rmse_dist, clusters))
#HAC + corr
Z = hac.linkage(timeSeries, method='complete', metric=corr)
clusters = fcluster(Z, k, criterion='maxclust')
print("HAC + corr distance: ",silhouette_score( corr_dist, clusters))
#HAC + scorr
Z = hac.linkage(timeSeries, method='complete', metric=scorr)
clusters = fcluster(Z, k, criterion='maxclust')
print("HAC + scorr distance: ",silhouette_score( scorr_dist, clusters))
#HAC + LCSS
#Z = hac.linkage(timeSeries, method='complete', metric=multidim_lcs)
#clusters = fcluster(Z, k, criterion='maxclust')
#print("HAC + LCSS distance: ",silhouette_score( timeSeries, clusters, metric=multidim_lcs))
#HAC + DTW
Z = hac.linkage(timeSeries, method='complete', metric=fast_DTW)
clusters = fcluster(Z, k, criterion='maxclust')
print("HAC + DTW distance: ",silhouette_score( dtw_dist, clusters))
km_euc = KMeans(n_clusters=k).fit_predict(euclidean_dist)
silhouette_avg=silhouette_score( euclidean_dist, km_euc)
print("KM + euclidian distance: ",silhouette_score( euclidean_dist, km_euc))
km_rmse = KMeans(n_clusters=k).fit_predict(rmse_dist)
print("KM + rmse distance: ",silhouette_score( rmse_dist, km_rmse))
km_corr = KMeans(n_clusters=k).fit_predict(corr_dist)
print("KM + corr distance: ",silhouette_score( corr_dist, km_corr))
km_scorr = KMeans(n_clusters=k).fit_predict(scorr_dist)
print("KM + scorr distance: ",silhouette_score( scorr_dist, km_scorr))
km_dtw = KMeans(n_clusters=k).fit_predict(dtw_dist)
print("KM + dtw distance: ",silhouette_score( dtw_dist, clusters))
# DBSCAN experiments
DB_euc = DBSCAN(eps=3, min_samples=2).fit_predict(euclidean_dist)
silhouette_avg=silhouette_score( euclidean_dist, DB_euc)
print("DBSCAN + euclidian distance: ",silhouette_score( euclidean_dist, DB_euc))
DB_rmse = DBSCAN(eps=12, min_samples=10).fit_predict(rmse_dist)
#print("DBSCAN + rmse distance: ",silhouette_score( rmse_dist, DB_rmse))
print("DBSCAN + rmse distance: ",0.00000000)
DB_corr = DBSCAN(eps=3, min_samples=2).fit_predict(corr_dist)
print("DBSCAN + corr distance: ",silhouette_score( corr_dist, DB_corr))
DB_scorr = DBSCAN(eps=3, min_samples=2).fit_predict(scorr_dist)
print("DBSCAN + scorr distance: ",silhouette_score( scorr_dist, DB_scorr))
DB_dtw = DBSCAN(eps=3, min_samples=2).fit_predict(dtw_dist)
print("KM + dtw distance: ",silhouette_score( dtw_dist, DB_dtw))
```
## Feature-based clustering
Another approach to clustering is to extract certain properties (features) from our data and perform the grouping based on those; the procedure is then the same as if we were working with the raw data.
```
from tsfresh import extract_features
#features extraction
extracted_features = extract_features(timeSeries, column_id="indice")
extracted_features.shape
list(extracted_features.columns.values)
n=217
features = pd.DataFrame()
Mean=[]
Var=[]
aCF1=[]
Peak=[]
Entropy=[]
Cpoints=[]
for muni in listaMunicp:
municipio=notificaciones['distrito_nombre']==muni
notif_x_municp=notificaciones[municipio]
notif_x_municp = notif_x_municp.reset_index(drop=True)
notif_x_municp = notif_x_municp['incidencia']
notif_x_municp = notif_x_municp.replace('nan', np.nan).fillna(0.000001)
notif_x_municp = notif_x_municp.replace([np.inf, -np.inf], np.nan).fillna(0.000001)
#Features
mean=tsfresh.feature_extraction.feature_calculators.mean(notif_x_municp)
var=tsfresh.feature_extraction.feature_calculators.variance(notif_x_municp)
ACF1=tsfresh.feature_extraction.feature_calculators.autocorrelation(notif_x_municp,1)
peak=tsfresh.feature_extraction.feature_calculators.number_peaks(notif_x_municp,20)
entropy=tsfresh.feature_extraction.feature_calculators.sample_entropy(notif_x_municp)
cpoints=tsfresh.feature_extraction.feature_calculators.number_crossing_m(notif_x_municp,5)
Mean.append(mean)
Var.append(var)
aCF1.append(ACF1)
Peak.append(peak)
Entropy.append(entropy)
Cpoints.append(cpoints)
data_tuples = list(zip(Mean,Var,aCF1,Peak,Entropy,Cpoints))
features = pd.DataFrame(data_tuples, columns =['Mean', 'Var', 'ACF1', 'Peak','Entropy','Cpoints'])
# print the data
features
features.iloc[1]
# Distances for k-means
#Euclidean
f_euclidean_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(1,n):
#print("j",j)
f_euclidean_dist[i,j] = euclidean(features.iloc[i].values.flatten(), features.iloc[j].values.flatten())
#RMSE
f_rmse_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
f_rmse_dist[i,j] = rmse(features.iloc[i].values.flatten(), features.iloc[j].values.flatten())
#Corr
#print(features.iloc[i].values.flatten())
#print(features.iloc[j].values.flatten())
print('-------------------------------')
f_corr_dist = np.zeros((n,n))
#for i in range(0,n):
# print("i",i)
# for j in range(0,n):
# print("j",j)
# print(features.iloc[i].values.flatten())
# print(features.iloc[j].values.flatten())
# f_corr_dist[i,j] = corr(features.iloc[i].values.flatten(), features.iloc[j].values.flatten())
#scorr
f_scorr_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
f_scorr_dist[i,j] = scorr(features.iloc[i].values.flatten(), features.iloc[j].values.flatten())
#DTW
f_dtw_dist = np.zeros((n,n))
for i in range(0,n):
#print("i",i)
for j in range(0,n):
# print("j",j)
f_dtw_dist[i,j] = fast_DTW(features.iloc[i].values.flatten(), features.iloc[j].values.flatten())
from yellowbrick.cluster import KElbowVisualizer

# Elbow plot to help pick the number of clusters
model = AgglomerativeClustering()
visualizer = KElbowVisualizer(model, k=(3, 50), metric='distortion', timings=False)
visualizer.fit(f_scorr_dist)  # fit the data to the visualizer
visualizer.show()             # finalize and render the figure

# K-means experiments
k = 9
km_euc = KMeans(n_clusters=k).fit_predict(f_euclidean_dist)
print("KM + euclidean distance: ", silhouette_score(f_euclidean_dist, km_euc))
km_rmse = KMeans(n_clusters=k).fit_predict(f_rmse_dist)
print("KM + rmse distance: ", silhouette_score(f_rmse_dist, km_rmse))
# km_corr = KMeans(n_clusters=k).fit_predict(f_corr_dist)
# print("KM + corr distance: ", silhouette_score(f_corr_dist, km_corr))
km_scorr = KMeans(n_clusters=k).fit_predict(f_scorr_dist)
print("KM + scorr distance: ", silhouette_score(f_scorr_dist, km_scorr))
km_dtw = KMeans(n_clusters=k).fit_predict(f_dtw_dist)
print("KM + dtw distance: ", silhouette_score(f_dtw_dist, km_dtw))
# HAC experiments
HAC_euc = AgglomerativeClustering(n_clusters=k).fit_predict(f_euclidean_dist)
print("HAC + euclidean distance: ", silhouette_score(f_euclidean_dist, HAC_euc))
HAC_rmse = AgglomerativeClustering(n_clusters=k).fit_predict(f_rmse_dist)
print("HAC + rmse distance: ", silhouette_score(f_rmse_dist, HAC_rmse))
# HAC_corr = AgglomerativeClustering(n_clusters=k).fit_predict(f_corr_dist)
# print("HAC + corr distance: ", silhouette_score(f_corr_dist, HAC_corr))
print("HAC + corr distance: ", 0.0)  # corr distance skipped (see above)
HAC_scorr = AgglomerativeClustering(n_clusters=k).fit_predict(f_scorr_dist)
print("HAC + scorr distance: ", silhouette_score(f_scorr_dist, HAC_scorr))
HAC_dtw = AgglomerativeClustering(n_clusters=k).fit_predict(f_dtw_dist)
print("HAC + dtw distance: ", silhouette_score(f_dtw_dist, HAC_dtw))
# DBSCAN experiments
DB_euc = DBSCAN(eps=3, min_samples=2).fit_predict(f_euclidean_dist)
print("DBSCAN + euclidean distance: ", silhouette_score(f_euclidean_dist, DB_euc))
DB_rmse = DBSCAN(eps=12, min_samples=10).fit_predict(f_rmse_dist)
# print("DBSCAN + rmse distance: ", silhouette_score(f_rmse_dist, DB_rmse))
# DB_corr = DBSCAN(eps=3, min_samples=2).fit_predict(f_corr_dist)
# print("DBSCAN + corr distance: ", silhouette_score(f_corr_dist, DB_corr))
print("DBSCAN + corr distance: ", 0.0)  # corr distance skipped (see above)
DB_scorr = DBSCAN(eps=3, min_samples=2).fit_predict(f_scorr_dist)
print("DBSCAN + scorr distance: ", silhouette_score(f_scorr_dist, DB_scorr))
DB_dtw = DBSCAN(eps=3, min_samples=2).fit_predict(f_dtw_dist)
print("DBSCAN + dtw distance: ", silhouette_score(f_dtw_dist, DB_dtw))
```
| github_jupyter |
<a href="https://colab.research.google.com/github/user9990/Synthetic-data-gen/blob/master/Welcome_To_Colaboratory.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<p><img alt="Colaboratory logo" height="45px" src="/img/colab_favicon.ico" align="left" hspace="10px" vspace="0px"></p>
<h1>What is Colaboratory?</h1>
Colaboratory, or "Colab" for short, allows you to write and execute Python in your browser, with
- Zero configuration required
- Free access to GPUs
- Easy sharing
Whether you're a **student**, a **data scientist** or an **AI researcher**, Colab can make your work easier. Watch [Introduction to Colab](https://www.youtube.com/watch?v=inN8seMm7UI) to learn more, or just get started below!
## **Getting started**
The document you are reading is not a static web page, but an interactive environment called a **Colab notebook** that lets you write and execute code.
For example, here is a **code cell** with a short Python script that computes a value, stores it in a variable, and prints the result:
```
seconds_in_a_day = 24 * 60 * 60
seconds_in_a_day
```
To execute the code in the above cell, select it with a click and then either press the play button to the left of the code, or use the keyboard shortcut "Command/Ctrl+Enter". To edit the code, just click the cell and start editing.
Variables that you define in one cell can later be used in other cells:
```
seconds_in_a_week = 7 * seconds_in_a_day
seconds_in_a_week
```
Colab notebooks allow you to combine **executable code** and **rich text** in a single document, along with **images**, **HTML**, **LaTeX** and more. When you create your own Colab notebooks, they are stored in your Google Drive account. You can easily share your Colab notebooks with co-workers or friends, allowing them to comment on your notebooks or even edit them. To learn more, see [Overview of Colab](/notebooks/basic_features_overview.ipynb). To create a new Colab notebook you can use the File menu above, or use the following link: [create a new Colab notebook](http://colab.research.google.com#create=true).
Colab notebooks are Jupyter notebooks that are hosted by Colab. To learn more about the Jupyter project, see [jupyter.org](https://www.jupyter.org).
## Data science
With Colab you can harness the full power of popular Python libraries to analyze and visualize data. The code cell below uses **numpy** to generate some random data, and uses **matplotlib** to visualize it. To edit the code, just click the cell and start editing.
```
import numpy as np
from matplotlib import pyplot as plt
ys = 200 + np.random.randn(100)
x = [x for x in range(len(ys))]
plt.plot(x, ys, '-')
plt.fill_between(x, ys, 195, where=(ys > 195), facecolor='g', alpha=0.6)
plt.title("Sample Visualization")
plt.show()
```
You can import your own data into Colab notebooks from your Google Drive account, including from spreadsheets, as well as from Github and many other sources. To learn more about importing data, and how Colab can be used for data science, see the links below under [Working with Data](#working-with-data).
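For instance, mounting your Google Drive and reading a CSV with **pandas** takes only a few lines. This is a minimal sketch: `google.colab.drive` is the standard Colab mount helper, while the file path is just a placeholder you would replace with your own.
```
from google.colab import drive
import pandas as pd

drive.mount('/content/drive')  # prompts for authorization on first run

# Placeholder path: point this at a CSV file in your own Drive
df = pd.read_csv('/content/drive/MyDrive/example.csv')
df.head()
```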
## Machine learning
With Colab you can import an image dataset, train an image classifier on it, and evaluate the model, all in just [a few lines of code](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/beginner.ipynb). Colab notebooks execute code on Google's cloud servers, meaning you can leverage the power of Google hardware, including [GPUs and TPUs](#using-accelerated-hardware), regardless of the power of your machine. All you need is a browser.
Colab is used extensively in the machine learning community with applications including:
- Getting started with TensorFlow
- Developing and training neural networks
- Experimenting with TPUs
- Disseminating AI research
- Creating tutorials
To see sample Colab notebooks that demonstrate machine learning applications, see the [machine learning examples](#machine-learning-examples) below.
## More Resources
### Working with Notebooks in Colab
- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)
- [Guide to Markdown](/notebooks/markdown_guide.ipynb)
- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)
- [Saving and loading notebooks in GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb)
- [Interactive forms](/notebooks/forms.ipynb)
- [Interactive widgets](/notebooks/widgets.ipynb)
- <img src="/img/new.png" height="20px" align="left" hspace="4px" alt="New"></img>
[TensorFlow 2 in Colab](/notebooks/tensorflow_version.ipynb)
<a name="working-with-data"></a>
### Working with Data
- [Loading data: Drive, Sheets, and Google Cloud Storage](/notebooks/io.ipynb)
- [Charts: visualizing data](/notebooks/charts.ipynb)
- [Getting started with BigQuery](/notebooks/bigquery.ipynb)
### Machine Learning Crash Course
These are a few of the notebooks from Google's online Machine Learning course. See the [full course website](https://developers.google.com/machine-learning/crash-course/) for more.
- [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb)
- [Tensorflow concepts](/notebooks/mlcc/tensorflow_programming_concepts.ipynb)
- [First steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb)
- [Intro to neural nets](/notebooks/mlcc/intro_to_neural_nets.ipynb)
- [Intro to sparse data and embeddings](/notebooks/mlcc/intro_to_sparse_data_and_embeddings.ipynb)
<a name="using-accelerated-hardware"></a>
### Using Accelerated Hardware
- [TensorFlow with GPUs](/notebooks/gpu.ipynb)
- [TensorFlow with TPUs](/notebooks/tpu.ipynb)
<a name="machine-learning-examples"></a>
## Machine Learning Examples
To see end-to-end examples of the interactive machine learning analyses that Colaboratory makes possible, check out these tutorials using models from [TensorFlow Hub](https://tfhub.dev).
A few featured examples:
- [Retraining an Image Classifier](https://tensorflow.org/hub/tutorials/tf2_image_retraining): Build a Keras model on top of a pre-trained image classifier to distinguish flowers.
- [Text Classification](https://tensorflow.org/hub/tutorials/tf2_text_classification): Classify IMDB movie reviews as either *positive* or *negative*.
- [Style Transfer](https://tensorflow.org/hub/tutorials/tf2_arbitrary_image_stylization): Use deep learning to transfer style between images.
- [Multilingual Universal Sentence Encoder Q&A](https://tensorflow.org/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa): Use a machine learning model to answer questions from the SQuAD dataset.
- [Video Interpolation](https://tensorflow.org/hub/tutorials/tweening_conv3d): Predict what happened in a video between the first and the last frame.
| github_jupyter |
# Practice with conditionals
Before we practice conditionals, let's review:
To execute a command when a condition is true, use `if`:
```
if [condition]:
[command]
```
To execute a command when a condition is true, and execute something else otherwise, use `if/else`:
```
if [condition]:
[command 1]
else:
[command 2]
```
To execute a command when one condition is true, a different command if a second condition is true, and execute something else otherwise, use `if/elif/else`:
```
if [condition 1]:
[command 1]
elif [condition 2]:
[command 2]
else:
[command 3]
```
Remember that commands in an `elif` will only run if the first condition is false AND the second condition is true.
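For instance, in this tiny illustration both conditions are true, but only the `if` branch runs, because the `elif` condition is never even checked once the first condition succeeds:
```
x = 10
if x > 5:
    print("greater than 5")   # this runs
elif x > 8:
    print("greater than 8")   # skipped, even though x > 8 is also true
else:
    print("5 or less")
```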
Let's say we are making a smoothie. In order to make a big enough smoothie, we want at least 4 cups of ingredients.
```
strawberries = 1
bananas = 0.5
milk = 1
# create a variable ingredients that equals the sum of all our ingredients
ingredients = strawberries + bananas + milk
# write an if statement that prints out "We have enough ingredients!" if we have at least 4 cups of ingredients
if ingredients >= 4:
print("We have enough ingredients!")
```
The code above will let us know if we have enough ingredients for our smoothie. But, if we don't have enough ingredients, the code won't print anything. Our code would be more informative if it also told us when we didn't have enough ingredients. Next, let's write code that also lets us know when we _don't_ have enough ingredients.
```
# write code that prints "We have enough ingredients" if we have at least 4 cups of ingredients
# and also prints "We don't have enough ingredients" if we have less than 4 cups of ingredients
if ingredients >=4:
print("We have enough ingredients!")
else:
print("We do not have enough ingredients.")
```
It might also be useful to know if we have exactly 4 cups of ingredients. Add to the code above so that it lets us know when we have more than enough ingredients, exactly enough ingredients, or not enough ingredients.
```
# write code that prints informative messages when we have more than 4 cups of ingredients,
# exactly 4 cups of ingredients, or less than 4 cups of ingredients
if ingredients > 4:
    print("we have more than enough ingredients")
elif ingredients == 4:  # use '==' for value equality, not 'is' (object identity)
    print("we have exactly enough ingredients")
else:
    print("we do not have enough ingredients")
```
**Challenge**: Suppose our blender can only fit up to 6 cups inside. Add to the above code so that it also warns us when we have too many ingredients.
```
# write an if/elif/else style statement that does the following:
# prints a message when we have exactly 4 cups of ingredients saying we have exactly the right amount of ingredients
# prints a message when we have less than 4 cups of ingredients say we do not have enough
# prints a message when we have 4-6 cups of ingredients saying we have more than enough
# prints a message otherwise that says we have too many ingredients
if ingredients == 4:
    print("we have exactly enough ingredients")
elif ingredients < 4:
    print("we do not have enough ingredients")
elif ingredients > 4 and ingredients <= 6:  # the blender holds up to 6 cups
    print("we have more than enough ingredients")
else:
    print("We have too many ingredients")
```
| github_jupyter |
# Using "method chains" to create more readable code
### Game of Thrones example - slicing, group stats, and plotting
I didn't find an off-the-shelf dataset to rerun our seminal analysis from last week, but I found [an analysis](https://www.kaggle.com/dhanushkishore/impact-of-game-of-thrones-on-us-baby-names) that explored whether Game of Thrones prompted parents to start naming their children differently. The following is inspired by that, but uses pandas to acquire and wrangle our data in a "Tidyverse"-style flow (the way R would do it).
```
#TO USE datadotworld PACKAGE:
#1. create account at data.world, then run the next two lines:
#2. in terminal/powershell: pip install datadotworld[pandas]
#
# IF THIS DOESN'T WORK BC YOU GET AN ERROR ABOUT "CCHARDET", RUN:
# conda install -c conda-forge cchardet
# THEN RERUN: pip install datadotworld[pandas]
#
#3. in terminal/powershell: dw configure
#3a. copy in API token from data.world (get from settings > advanced)
import datadotworld as dw
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
baby_names = dw.load_dataset('nkrishnaswami/us-ssa-baby-names-national')
baby_names = baby_names.dataframes['names_ranks_counts']
```
#### Version 1
1. save a slice of the dataset with the names we want (using `.loc`)
2. sometimes a name is used by boys and girls in the same year, so combine the counts so that we have one observation per name per year
3. save the dataset and then call a plot function
```
# restrict by name and only keep years after 2000
somenames = baby_names.loc[( # formating inside this () is just to make it clearer to a reader
( # condition 1: one of these names, | means "or"
(baby_names['name'] == "Sansa") | (baby_names['name'] == "Daenerys") |
(baby_names['name'] == "Brienne") | (baby_names['name'] == "Cersei") | (baby_names['name'] == "Tyrion")
) # end condition 1
& # & means "and"
( # condition 2: these years
baby_names['year'] >= 2000) # end condition 2
)]
# if a name is used by F and M in a given year, combine the count variable
# Q: why is there a "reset_index"?
# A: groupby automatically turns the groups (here name and year) into the index
# reset_index makes the index simple integers 0, 1, 2 and also
# turns the the grouping variables back into normal columns
# A2: instead of reset_index, you can include `as_index=False` inside groupby!
# (I just learned that myself!)
somenames_agg = somenames.groupby(['name','year'])['count'].sum().reset_index().sort_values(['name','year'])
# plot
sns.lineplot(data=somenames_agg, hue='name',x='year',y='count')
plt.axvline(2011, 0,160,color='red') # add a line for when the show debuted
```
#### Version 2 - `query` > `loc`, for readability
Same as V1, but step 1 uses `.query` instead of `.loc` to slice
1. save a slice of the dataset with the names we want (using `.query`)
2. sometimes a name is used by boys and girls in the same year, so combine the counts so that we have one observation per name per year
3. save the dataset and then call a plot function
```
# use query instead to slice, and the rest is the same
somenames = baby_names.query('name in ["Sansa","Daenerys","Brienne","Cersei","Tyrion"] & \
year >= 2000') # this is one string with ' as the string start/end symbol. Inside, I can use
# normal quote marks for strings. Also, I can break it into multiple lines with \
somenames_agg = somenames.groupby(['name','year'])['count'].sum().reset_index().sort_values(['name','year'])
sns.lineplot(data=somenames_agg, hue='name',x='year',y='count')
plt.axvline(2011, 0,160,color='red') # add a line for when the show debuted
```
#### Version 3 - Method chaining!
Method chaining: Call the object (`baby_names`) and then keep calling one method on it after another.
- Python will call the methods from left to right.
- There is no need to store the intermediate dataset (like `somenames` and `somenames_agg` above!)
- --> Easier to read and write without "temp" objects all over the place
- You can always save the dataset at an intermediate step if you need to
So the first two steps are the same; only now the methods will be chained. Then, a bonus trick lets us plot without saving.
1. Slice with `.query` to GoT-related names
2. Combine M and F gender counts if a name is used by both in the same year
3. Plot without saving: "Pipe" in the plotting function
The code below produces a plot identical to V1 and V2, **but it is unreadable. Don't try - I'm about to make this readable!** Just _one more_ iteration...
```
baby_names.query('name in ["Sansa","Daenerys","Brienne","Cersei","Tyrion"] & year >= 2000').groupby(['name','year'])['count'].sum().reset_index().pipe((sns.lineplot, 'data'),hue='name',x='year',y='count')
plt.axvline(2011, 0,160,color='red') # add a line for when the show debuted
```
To make this readable, we wrap the chain in parentheses spanning multiple lines
```
(
and python knows to execute the code inside as one line
)
```
And as a result, we can write a long series of methods that is comprehensible, and if we want we can even comment on each line:
```
(baby_names
.query('name in ["Sansa","Daenerys","Brienne","Cersei","Tyrion"] & \
year >= 2000')
.groupby(['name','year'])['count'].sum() # for each name-year, combine M and F counts
.reset_index() # give us the column names back as they were (makes the plot call easy)
.pipe((sns.lineplot, 'data'),hue='name',x='year',y='count')
)
plt.axvline(2011, 0,160,color='red') # add a line for when the show debuted
plt.title("WOW THAT WAS EASY TO WRITE AND SHARE")
```
**WOW. That's nice code!**
Also: **Naming your baby Daenerys after the hero...**
...is a bad break.
```
(baby_names
.query('name in ["Khaleesi","Ramsay","Lyanna","Ellaria","Meera"] & \
year >= 2000')
.groupby(['name','year'])['count'].sum() # for each name-year, combine M and F counts
.reset_index() # give us the column names back as they were (makes the plot call easy)
.pipe((sns.lineplot, 'data'),hue='name',x='year',y='count')
)
plt.axvline(2011, 0,160,color='red') # add a line for when the show debuted
plt.title("PEOPLE NAMED THEIR KID KHALEESI")
```
**BUT IT COULD BE WORSE**
```
(baby_names
.query('name in ["Krymson"] & year >= 1950')
.groupby(['name','year'])['count'].sum() # for each name-year, combine M and F counts
.reset_index() # give us the column names back as they were (makes the plot call easy)
.pipe((sns.lineplot, 'data'),hue='name',x='year',y='count')
)
plt.title("Alabama, wow...Krymson, really?")
```
| github_jupyter |
Text classification with attention and synthetic gradients.
Imports and set-up:
```
%tensorflow_version 2.x
import numpy as np
import tensorflow as tf
import pandas as pd
import subprocess
from sklearn.model_selection import train_test_split
import gensim
import re
import sys
import time
# TODO: actually implement distribution in the training loop
strategy = tf.distribute.get_strategy()
use_mixed_precision = False
tf.config.run_functions_eagerly(False)
tf.get_logger().setLevel('ERROR')
is_tpu = None
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
is_tpu = True
except ValueError:
is_tpu = False
if is_tpu:
print('TPU available.')
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.TPUStrategy(tpu)
if use_mixed_precision:
policy = tf.keras.mixed_precision.experimental.Policy('mixed_bfloat16')
tf.keras.mixed_precision.experimental.set_policy(policy)
else:
print('No TPU available.')
result = subprocess.run(
['nvidia-smi', '-L'],
stdout=subprocess.PIPE).stdout.decode("utf-8").strip()
if "has failed" in result:
print("No GPU available.")
else:
print(result)
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
tf.distribute.experimental.CollectiveCommunication.NCCL)
if use_mixed_precision:
policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
tf.keras.mixed_precision.experimental.set_policy(policy)
```
Downloading the data
```
# Download the Sentiment140 dataset
!mkdir -p data
!wget -nc https://nyc3.digitaloceanspaces.com/ml-files-distro/v1/sentiment-analysis-is-bad/data/training.1600000.processed.noemoticon.csv.zip -P data
!unzip -n -d data data/training.1600000.processed.noemoticon.csv.zip
```
Loading and splitting the data
```
sen140 = pd.read_csv(
"data/training.1600000.processed.noemoticon.csv", encoding='latin-1',
names=["target", "ids", "date", "flag", "user", "text"])
sen140.head()
sen140 = sen140.sample(frac=1).reset_index(drop=True)
sen140 = sen140[['text', 'target']]
features, targets = sen140.iloc[:, 0].values, sen140.iloc[:, 1].values
print("A random tweet\t:", features[0])
# split between train and test sets
x_train, x_test, y_train, y_test = train_test_split(features,
targets,
test_size=0.33)
y_train = y_train.astype("float32") / 4.0
y_test = y_test.astype("float32") / 4.0
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
```
Preprocessing data
```
def process_tweet(x):
    x = x.strip()
    x = x.lower()
    # keep only whitelisted characters; everything else becomes a space
    x = re.sub(r"[^a-zA-Z0-9üöäÜÖÄß\.,!\?\-%\$€\/ ]+", ' ', x)
    # surround punctuation with spaces so it becomes separate tokens
    x = re.sub('([\.,!\?\-%\$€\/])', r' \1 ', x)
    x = re.sub('\s{2,}', ' ', x)
    x = x.split()
    x.append("[&END&]")  # end-of-sequence / padding token
    return x
tweets_train = []
tweets_test = []
for tweet in x_train:
tweets_train.append(process_tweet(tweet[0]))
for tweet in x_test:
tweets_test.append(process_tweet(tweet[0]))
# Building the initial vocab with all words from the training set
def add_or_update_word(_vocab, word):
entry = None
if word in _vocab:
entry = _vocab[word]
entry = (entry[0], entry[1] + 1)
else:
entry = (len(_vocab), 1)
_vocab[word] = entry
vocab_pre = {}
# "[&END&]" is for padding, "[&UNK&]" for unknown words
add_or_update_word(vocab_pre, "[&END&]")
add_or_update_word(vocab_pre, "[&UNK&]")
for tweet in tweets_train:
for word in tweet:
add_or_update_word(vocab_pre, word)
# limiting the vocabulary to only include words that appear at least 3 times
# in the training data set. Reduces vocab size to about 1/6th.
# This is to make it harder for the model to overfit by focusing on words that
# may only appear in the training data, and also to generally make it learn to
# handle unknown words (more robust)
keys = vocab_pre.keys()
vocab = {}
vocab["[&END&]"] = 0
vocab["[&UNK&]"] = 1
for key in keys:
freq = vocab_pre[key][1]
index = vocab_pre[key][0]
if freq >= 3 and index > 1:
vocab[key] = len(vocab)
# Replace words that have been removed from the vocabulary with "[&UNK&]" in
# both the training and testing data
def filter_unknown(_in, _vocab):
for tweet in _in:
for i in range(len(tweet)):
if not tweet[i] in _vocab:
tweet[i] = "[&UNK&]"
filter_unknown(tweets_train, vocab)
filter_unknown(tweets_test, vocab)
```
Using gensim word2vec to get a good word embedding.
```
# train the embedding
embedding_dims = 128
embedding = gensim.models.Word2Vec(tweets_train,
size=embedding_dims, min_count=0)
def tokenize(_in, _vocab):
_out = []
for i in range(len(_in)):
tweet = _in[i]
wordlist = []
for word in tweet:
wordlist.append(_vocab[word].index)
_out.append(wordlist)
return _out
tokens_train = tokenize(tweets_train, embedding.wv.vocab)
tokens_test = tokenize(tweets_test, embedding.wv.vocab)
```
Creating modules and defining the model.
```
class SequenceCollapseAttention(tf.Module):
'''
Collapses a sequence of arbitrary length into num_out_entries entries from
the sequence according to dot-product attention. So, a variable length
sequence is reduced to a sequence of a fixed, known length.
'''
def __init__(self,
num_out_entries,
initializer=tf.keras.initializers.HeNormal,
name=None):
super().__init__(name=name)
self.is_built = False
self.num_out_entries = num_out_entries
self.initializer = initializer()
def __call__(self, keys, query):
if not self.is_built:
self.weights = tf.Variable(
self.initializer([query.shape[-1], self.num_out_entries]),
trainable=True)
self.biases = tf.Variable(tf.zeros([self.num_out_entries]),
trainable=True)
self.is_built = True
scores = tf.linalg.matmul(query, self.weights) + self.biases
scores = tf.transpose(scores, perm=(0, 2, 1))
scores = tf.nn.softmax(scores)
output = tf.linalg.matmul(scores, keys)
return output
class WordEmbedding(tf.Module):
'''
Creates a word-embedding module from a provided embedding matrix.
'''
def __init__(self, embedding_matrix, trainable=False, name=None):
super().__init__(name=name)
self.embedding = tf.Variable(embedding_matrix, trainable=trainable)
def __call__(self, x):
return tf.nn.embedding_lookup(self.embedding, x)
testvar = None
class PositionalEncoding1D(tf.Module):
'''
Positional encoding as in the Attention Is All You Need paper. I hope.
For experimentation, the weight by which the positional information is mixed
into the input vectors is learned.
'''
def __init__(self, axis=-2, base=1000, name=None):
super().__init__(name=name)
self.axis = axis
self.base = base
self.encoding_weight = tf.Variable([2.0], trainable=True)
testvar = self.encoding_weight
def __call__(self, x):
sequence_length = tf.shape(x)[self.axis]
d = tf.shape(x)[-1]
T = tf.shape(x)[self.axis]
pos_enc = tf.range(0, d / 2, delta=1, dtype=tf.float32)
pos_enc = (-2.0 / tf.cast(d, dtype=tf.float32)) * pos_enc
base = tf.cast(tf.fill(tf.shape(pos_enc), self.base), dtype=tf.float32)
pos_enc = tf.math.pow(base, pos_enc)
pos_enc = tf.expand_dims(pos_enc, axis=0)
pos_enc = tf.tile(pos_enc, [T, 1])
t = tf.expand_dims(tf.range(1, T+1, delta=1, dtype=tf.float32), axis=-1)
pos_enc = tf.math.multiply(pos_enc, t)
pos_enc_sin = tf.expand_dims(tf.math.sin(pos_enc), axis=-1)
pos_enc_cos = tf.expand_dims(tf.math.cos(pos_enc), axis=-1)
pos_enc = tf.concat((pos_enc_sin, pos_enc_cos), axis=-1)
pos_enc = tf.reshape(pos_enc, [T, d])
return x + (pos_enc * self.encoding_weight)
class MLP_Block(tf.Module):
'''
With batch normalization before the activations.
A regular old multilayer perceptron, hidden shapes are defined by the
"shapes" argument.
'''
def __init__(self,
shapes,
initializer=tf.keras.initializers.HeNormal,
name=None,
activation=tf.nn.swish,
trainable_batch_norms=False):
super().__init__(name=name)
self.is_built = False
self.shapes = shapes
self.initializer = initializer()
self.weights = [None] * len(shapes)
self.biases = [None] * len(shapes)
self.bnorms = [None] * len(shapes)
self.activation = activation
self.trainable_batch_norms = trainable_batch_norms
def _build(self, x):
for n in range(0, len(self.shapes)):
in_shape = x.shape[-1] if n == 0 else self.shapes[n - 1]
factor = 1 if self.activation != tf.nn.crelu or n == 0 else 2
self.weights[n] = tf.Variable(
self.initializer([in_shape * factor, self.shapes[n]]),
trainable=True)
self.biases[n] = tf.Variable(tf.zeros([self.shapes[n]]),
trainable=True)
self.bnorms[n] = tf.keras.layers.BatchNormalization(
trainable=self.trainable_batch_norms)
self.is_built = True
def __call__(self, x, training=False):
if not self.is_built:
self._build(x)
h = x
for n in range(len(self.shapes)):
h = tf.linalg.matmul(h, self.weights[n]) + self.biases[n]
h = self.bnorms[n](h, training=training)
h = self.activation(h)
return h
class SyntheticGradient(tf.Module):
'''
An implementation of synthetic gradients. When added to a model, this
module will intercept incoming gradients and replace them by learned,
synthetic ones.
If you encounter NANs, try setting the sg_output_scale parameter to a lower
value, or increase the number of initial_epochs or epochs.
When the model using this module does not learn, the generator might be too
simple, the sg_output_scale might be too low, the learning rate of the
generator might be too large or too low, or the number of epochs might be
too large or too low.
If the number of initial epochs is too large, the generator can get stuck
in a local minimum and fail to learn.
The relative_generator_hidden_shapes list defines the shapes of the hidden
layers of the generator as a multiple of its input dimension. For an affine
    transformation, pass an empty list.
'''
def __init__(self,
initializer=tf.keras.initializers.GlorotUniform,
activation=tf.nn.tanh,
relative_generator_hidden_shapes=[6, ],
learning_rate=0.01,
epochs=1,
initial_epochs=16,
sg_output_scale=1,
name=None):
super().__init__(name=name)
self.is_built = False
self.initializer = initializer
self.activation = activation
self.relative_generator_hidden_shapes = relative_generator_hidden_shapes
self.initial_epochs = initial_epochs
self.epochs = epochs
self.sg_output_scale = sg_output_scale
self.optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
def build(self, xy, dy):
'''
Builds the gradient generator on its first run, and trains on the first
incoming batch of gradients for a number of epochs to avoid bad results
(including NANs) in the first few batches where the generator still
outputs bad approximations. To further reduce NANs due to bad gradients,
a fixed scaler for the outputs of the generator is computed based on the
first batch.
'''
if self.is_built:
return
if len(self.relative_generator_hidden_shapes) > 0:
generator_shape = [
xy.shape[-1] * mult
for mult in
self.relative_generator_hidden_shapes]
self.generator_hidden = MLP_Block(
generator_shape,
activation=self.activation,
initializer=self.initializer,
trainable_batch_norms=False)
else:
self.generator_hidden = tf.identity
self.generator_out = MLP_Block(
[dy.shape[-1]],
activation=tf.identity,
initializer=self.initializer,
trainable_batch_norms=False)
# calculate a static scaler for the generated gradients to avoid
# overflows due to too large gradients
self.generator_out_scale = 1.0
x = self.generate_gradient(xy) / self.sg_output_scale
mag_y = tf.math.sqrt(tf.math.reduce_sum(tf.math.square(dy), axis=-1))
mag_x = tf.math.sqrt(tf.math.reduce_sum(tf.math.square(x), axis=-1))
mag_scale = tf.math.reduce_mean(mag_y / mag_x,
axis=tf.range(0, tf.rank(dy) - 1))
self.generator_out_scale = tf.Variable(mag_scale, trainable=False)
# train for a number of epochs on the first run, by default 16, to avoid
# bad results in the beginning of training.
for i in range(self.initial_epochs):
self.train_generator(xy, dy)
self.is_built = True
def generate_gradient(self, x):
'''
Just an MLP, or an affine transformation if the hidden shape in the
constructor is set to be empty.
'''
x = self.generator_hidden(x)
out = self.generator_out(x)
out = out * self.generator_out_scale
return out * self.sg_output_scale
def train_generator(self, x, target):
'''
        Gradient descent for the gradient generator. This is called every time a
gradient comes in, although in theory (especially with deeper gradient
generators) once the gradients are modeled sufficiently, it could be OK
to stop training on incoming gradients, thus fully decoupling the lower
parts of the network from the upper parts relative to this SG module.
'''
with tf.GradientTape() as tape:
l2_loss = target - self.generate_gradient(x)
l2_loss = tf.math.reduce_sum(tf.math.square(l2_loss), axis=-1)
# l2_loss = tf.math.sqrt(l2_dist)
grads = tape.gradient(l2_loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
@tf.custom_gradient
def sg(self, x, y):
'''
In the forward pass it is essentially a no-op (identity). In the
backwards pass it replaces the incoming gradient by a synthetic one.
'''
x = tf.identity(x)
def grad(dy):
# concat x and the label to be inputs for the generator:
xy = self.concat_x_and_y(x, y)
if not self.is_built:
self.build(xy, dy)
# train the generator on the incoming gradient:
for i in range(self.epochs):
self.train_generator(xy, dy)
# return the gradient. The second return value is the gradient for y
# which should be zero since we only need y (labels) to generate the
# synthetic gradients
dy = self.generate_gradient(xy)
return dy, tf.zeros(tf.shape(y))
return x, grad
def __call__(self, x, y):
return self.sg(x, y)
def concat_x_and_y(self, x, y):
'''
Probably an overly complex yet incomplete solution to a rather small
inconvenience.
Inconvenience: The gradient generators take the output of the last
module AND the target/labels of the network as inputs. But those two
tensors can be of different shapes. The obvious solution would be to
manually reshape the targets so they can be concatenated with the
outputs of the past state. But because i wanted this SG module to be as
"plug-and-play" as possible, i tried to attempt automatic reshaping.
Should work for 1d->1d, and 1d-sequence -> 1d, possibly 1d seq->seq,
unsure about the rest.
'''
# insert as many dims before the last dim of y to give it the same rank
# as x
amount = tf.math.maximum(tf.rank(x) - tf.rank(y), 0)
new_shape = tf.concat((tf.shape(y)[:-1],
tf.tile([1], [amount]),
[tf.shape(y)[-1]]), axis=-1)
y = tf.reshape(y, new_shape)
# tile the added dims such that x and y can be concatenated
        # In order to tile only the added dims, I need to set the dimensions
# with a length of 1 (except the last) to the length of the
# corresponding dimensions in x, while setting the rest to 1.
# This is waiting to break.
mask = tf.cast(tf.math.less_equal(tf.shape(y),
tf.constant([1])), dtype=tf.int32)
# ignore the last dim
mask = tf.concat([mask[:-1], tf.constant([0])], axis=-1)
zeros_to_ones = tf.math.subtract(
tf.ones(tf.shape(mask), dtype=tf.int32),
mask)
# has ones where there is a one in the shape, now the 1s are set to the
# length in x
mask = tf.math.multiply(mask, tf.shape(x))
# add ones to all other dimensions to preserve their shape
mask = tf.math.add(zeros_to_ones, mask)
# tile
y = tf.tile(y, mask)
return tf.concat((x, y), axis=-1)
class FlattenL2D(tf.Module):
"Flattens the last two dimensions only"
def __init__(self, name=None):
super().__init__(name=name)
def __call__(self, x):
new_shape = tf.concat(
(tf.shape(x)[:-2], [(tf.shape(x)[-1]) * (tf.shape(x)[-2])]),
axis=-1)
return tf.reshape(x, new_shape)
initializer = tf.keras.initializers.HeNormal
class SentimentAnalysisWithAttention(tf.Module):
def __init__(self, name=None):
super().__init__(name=name)
# Structure and the idea behind it:
# 1: The input sequence is embedded and is positionally encoded.
# 2.1: An MLP block ('query') computes scores for the following
# attention layer for each entry in the sequence. Ie, it decides
# which words are worth a closer look.
# 2.2: An attention layer selects n positionally encoded word
# embeddings from the input sequence based on the learned queries.
# 3: The result is flattened into a tensor of known shape and a number
# of dense layers compute the final classification.
self.embedding = WordEmbedding(embedding.wv.vectors)
self.batch_norm = tf.keras.layers.BatchNormalization(trainable=True)
self.pos_enc = PositionalEncoding1D()
self.query = MLP_Block([256, 128], initializer=initializer)
self.attention = SequenceCollapseAttention(num_out_entries=9,
initializer=initializer)
self.flatten = FlattenL2D()
self.dense = MLP_Block([512, 256, 128, 64],
initializer=initializer,
trainable_batch_norms=True)
self.denseout = MLP_Block([1],
initializer=initializer,
activation=tf.nn.sigmoid,
trainable_batch_norms=True)
# Synthetic gradient modules for the various layers.
self.sg_query = SyntheticGradient(relative_generator_hidden_shapes=[9])
self.sg_attention = SyntheticGradient()
self.sg_dense = SyntheticGradient()
def __call__(self, x, y=tf.constant([]), training=False):
x = self.embedding(x)
x = self.pos_enc(x)
x = self.batch_norm(x, training=training)
q = self.query(x, training=training)
# q = self.sg_query(q, y) # SG
x = self.attention(x, q)
x = self.flatten(x)
x = self.sg_attention(x, y) # SG
x = self.dense(x, training=training)
x = self.sg_dense(x, y) # SG
output = self.denseout(x, training=training)
return output
model = SentimentAnalysisWithAttention()
class BatchGenerator(tf.keras.utils.Sequence):
'''
Creates batches from the given data, specifically it pads the sequences
per batch only as much as necessary to make every sequence within a batch
be of the same length.
'''
def __init__(self, inputs, labels, padding, batch_size):
self.batch_size = batch_size
self.labels = labels
self.inputs = inputs
self.padding = padding
# self.on_epoch_end()
def __len__(self):
return int(np.floor(len(self.inputs) / self.batch_size))
def __getitem__(self, index):
max_length = 0
start_index = index * self.batch_size
end_index = start_index + self.batch_size
for i in range(start_index, end_index):
l = len(self.inputs[i])
if l > max_length:
max_length = l
out_x = np.empty([self.batch_size, max_length], dtype='int32')
out_y = np.empty([self.batch_size, 1], dtype='float32')
for i in range(self.batch_size):
out_y[i] = self.labels[start_index + i]
tweet = self.inputs[start_index + i]
l = len(tweet)
l = min(l, max_length)
for j in range(0, l):
out_x[i][j] = tweet[j]
for j in range(l, max_length):
out_x[i][j] = self.padding
return out_x, out_y
```
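The core trick in the `SyntheticGradient` module above is `tf.custom_gradient`: the forward pass is an identity, while the backward pass discards the incoming gradient and substitutes its own. Here is a stripped-down sketch of just that interception, using a fixed scaling instead of the learned generator (purely for illustration, not part of the model):
```
import tensorflow as tf

@tf.custom_gradient
def scale_incoming_gradient(x):
    # Forward pass: identity, just like SyntheticGradient.sg.
    def grad(dy):
        # Backward pass: here we simply halve the incoming gradient;
        # the module above instead returns the output of a small learned generator.
        return 0.5 * dy
    return tf.identity(x), grad

x = tf.constant([1.0, 2.0])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.reduce_sum(scale_incoming_gradient(x) ** 2)
print(tape.gradient(y, x))  # [1. 2.] instead of the true gradient [2. 4.]
```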
Training the model
```
def train_validation_loop(model_caller, data_generator, epochs, metrics=[]):
batch_time = -1
for epoch in range(epochs):
start_e = time.time()
start_p = time.time()
num_batches = len(data_generator)
predictions = [None] * num_batches
for b in range(num_batches):
start_b = time.time()
x_batch, y_batch = data_generator[b]
predictions[b] = model_caller(x_batch, y_batch, metrics=metrics)
# progress output
elapsed_t = time.time() - start_b
if batch_time != -1:
batch_time = 0.05 * elapsed_t + 0.95 * batch_time
else:
batch_time = elapsed_t
if int(time.time() - start_p) >= 1 or b == (num_batches - 1):
start_p = time.time()
eta = int((num_batches - b) * batch_time)
ela = int(time.time() - start_e)
out_string = "\rEpoch %d/%d,\tbatch %d/%d,\telapsed: %d/%ds" % (
(epoch + 1), epochs, b + 1, num_batches, ela, ela + eta)
for metric in metrics:
out_string += "\t %s: %f" % (metric.name,
float(metric.result()))
out_length = len(out_string)
sys.stdout.write(out_string)
sys.stdout.flush()
for metric in metrics:
metric.reset_states()
sys.stdout.write("\n")
return np.concatenate(predictions)
def trainer(model, loss, optimizer):
@tf.function(experimental_relax_shapes=True)
def training_step(x_batch,
y_batch,
model=model,
loss=loss,
optimizer=optimizer,
metrics=[]):
with tf.GradientTape() as tape:
predictions = model(x_batch, y_batch, training=True)
losses = loss(y_batch, predictions)
grads = tape.gradient(losses, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
for metric in metrics:
metric.update_state(y_batch, predictions)
return predictions
return training_step
# the model's final layer already applies a sigmoid, so the loss and the
# cross-entropy metric expect probabilities rather than logits
loss = tf.keras.losses.BinaryCrossentropy(from_logits=False)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
metrics = (tf.keras.metrics.BinaryCrossentropy(from_logits=False),
           tf.keras.metrics.BinaryAccuracy())
batch_size = 512
epochs = 4
padding = embedding.wv.vocab["[&END&]"].index
training_generator = BatchGenerator(tokens_train,
y_train,
padding,
batch_size=batch_size)
train_validation_loop(trainer(model, loss, optimizer),
training_generator,
epochs,
metrics)
```
Testing it on validation data
```
def validator(model):
@tf.function(experimental_relax_shapes=True)
def validation_step(x_batch, y_batch, model=model, metrics=[]):
predictions = model(x_batch, training=False)
for metric in metrics:
metric.update_state(y_batch, predictions)
return predictions
return validation_step
testing_generator = BatchGenerator(tokens_test,
y_test,
padding,
batch_size=batch_size)
predictions = train_validation_loop(validator(model),
testing_generator,
1,
metrics)
```
Get some example results from the test data.
```
most_evil_tweet=None
most_evil_evilness=1
most_cool_tweet=None
most_cool_coolness=1
most_angelic_tweet=None
most_angelic_angelicness=0
y_pred = np.concatenate(predictions)
for i in range(0,len(y_pred)):
judgement = y_pred[i]
polarity = abs(judgement-0.5)*2
if judgement>=most_angelic_angelicness:
most_angelic_angelicness = judgement
most_angelic_tweet = x_test[i]
if judgement<=most_evil_evilness:
most_evil_evilness = judgement
most_evil_tweet = x_test[i]
if polarity<=most_cool_coolness:
most_cool_coolness = polarity
most_cool_tweet = x_test[i]
print("The evilest tweet known to humankind:\n\t", most_evil_tweet)
print("Evilness: ", 1.0-most_evil_evilness)
print("\n")
print("The most angelic tweet any mortal has ever laid eyes upon:\n\t",
most_angelic_tweet)
print("Angelicness: ", most_angelic_angelicness)
print("\n")
print("This tweet is too cool for you, don't read:\n\t", most_cool_tweet)
print("Coolness: ", 1.0-most_cool_coolness)
```
| github_jupyter |
# Imports
```
import numpy as np
import xarray as xr
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import cm  # needed for the 'rainbow' colormap option in heatmap()
from matplotlib.colors import LinearSegmentedColormap
from matplotlib.patches import Polygon
from matplotlib import colors as mat_colors
import mpl_toolkits.axisartist as axisartist
from mpl_toolkits.axes_grid1 import Size, Divider
```
# Define Functions
## Performance measurements
```
def BIAS(a1, a2):
return (a1 - a2).mean().item()
def RMSE(a1, a2):
return np.sqrt(((a1 - a2)**2).mean()).item()
def DIFF(a1, a2):
return np.max(np.abs(a1 - a2)).item()
```
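A quick toy example of what these three measures return (note that BIAS can be zero even when the arrays differ, because positive and negative errors cancel):
```
# Toy check of the error measures on small DataArrays
a = xr.DataArray([1.0, 2.0, 3.0])
b = xr.DataArray([1.5, 2.0, 2.5])
print(BIAS(a, b))  # 0.0   (the +0.5 and -0.5 errors cancel)
print(RMSE(a, b))  # ~0.41
print(DIFF(a, b))  # 0.5   (largest absolute deviation)
```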
## Helper function for heatmap axis
```
def setup_axes(fig, rect):
ax = axisartist.Subplot(fig, rect)
fig.add_subplot(ax)
return ax
```
## Heatmap
```
def heatmap(datasets, # first_dataset, second_dataset,
opti_var,
annotation=None,
annotation_x_position=0,
annotation_y_position=1,
fig=None, ax=None,
cmap='vlag',
cmap_levels=None,
grid_color='grey',
grid_linewidth=1.5,
presentation=False,
labels_pad=-360,
xlim=None, # here use it do define max Diff sfc_h
nr_of_iterations=None):
if not ax:
ax = plt.gca()
if not fig:
fig = plt.gcf()
if all(dataset is None for dataset in datasets):
raise ValueError('All datasets are None!')
# define variables for plotting
guess_opti_var = []
first_guess_diff = []
true_opti_var = []
BIAS_opti_var = []
RMSE_opti_var = []
DIFF_opti_var = []
fct_opti_var = []
times = []
maxiters = []
BIAS_sfc = []
RMSE_sfc = []
DIFF_sfc = []
BIAS_w = []
RMSE_w = []
DIFF_w = []
BIAS_fg = []
RMSE_fg = []
DIFF_fg = []
BIAS_sfc_fg = []
RMSE_sfc_fg = []
DIFF_sfc_fg = []
array_length = 0
check_first_guess = None
check_true_opti_var = None
# create data and label variables
for dataset in datasets:
# check if the current dataset contains data or if the data was not available
if dataset is None:
guess_opti_var.append(None)
first_guess_diff.append(None)
true_opti_var.append(None)
BIAS_opti_var.append(None)
RMSE_opti_var.append(None)
DIFF_opti_var.append(None)
fct_opti_var.append(None)
times.append(None)
maxiters.append(None)
BIAS_sfc.append(None)
RMSE_sfc.append(None)
DIFF_sfc.append(None)
BIAS_w.append(None)
RMSE_w.append(None)
DIFF_w.append(None)
elif type(dataset) != xr.core.dataset.Dataset: # if no minimisation possible
guess_opti_var.append('no_minimisation')
first_guess_diff.append(None)
true_opti_var.append(None)
BIAS_opti_var.append(None)
RMSE_opti_var.append(None)
DIFF_opti_var.append(None)
fct_opti_var.append(None)
times.append(None)
maxiters.append(None)
BIAS_sfc.append(None)
RMSE_sfc.append(None)
DIFF_sfc.append(None)
BIAS_w.append(None)
RMSE_w.append(None)
DIFF_w.append(None)
else:
# find index corresponding to max time
max_index = len(dataset['computing_time'].values) - 1
if nr_of_iterations is not None:
max_index = nr_of_iterations - 1
elif xlim is not None:
# calculate all max diff surface_h
all_DIFF_sfc_h = np.array(
[DIFF(dataset.true_surface_h.data,
dataset.surface_h.data[i-1])
for i in dataset.coords['nr_of_iteration'].data])
# only consider as many points until max DIFF is smaller xlim
if all_DIFF_sfc_h[-1] < xlim:
max_index = np.argmax(all_DIFF_sfc_h < xlim)
if opti_var == 'bed_h':
guess_opti_var.append((dataset.guessed_bed_h[max_index] - dataset.true_bed_h).values)
first_guess_diff.append((dataset.first_guessed_bed_h - dataset.true_bed_h).values)
true_opti_var.append(dataset.true_bed_h.values)
BIAS_opti_var.append(BIAS(dataset.guessed_bed_h[max_index], dataset.true_bed_h))
RMSE_opti_var.append(RMSE(dataset.guessed_bed_h[max_index], dataset.true_bed_h))
DIFF_opti_var.append(DIFF(dataset.guessed_bed_h[max_index], dataset.true_bed_h))
if check_first_guess is None:
BIAS_fg = BIAS(dataset.first_guessed_bed_h, dataset.true_bed_h)
RMSE_fg = RMSE(dataset.first_guessed_bed_h, dataset.true_bed_h)
DIFF_fg = DIFF(dataset.first_guessed_bed_h, dataset.true_bed_h)
elif opti_var == 'bed_shape':
guess_opti_var.append((dataset.guessed_bed_shape[-1] - dataset.true_bed_shape).values)
first_guess_diff.append((dataset.first_guessed_bed_shape - dataset.true_bed_shape).values)
true_opti_var.append(dataset.true_bed_shape.values)
BIAS_opti_var.append(BIAS(dataset.guessed_bed_shape[max_index], dataset.true_bed_shape))
RMSE_opti_var.append(RMSE(dataset.guessed_bed_shape[max_index], dataset.true_bed_shape))
DIFF_opti_var.append(DIFF(dataset.guessed_bed_shape[max_index], dataset.true_bed_shape))
if check_first_guess is None:
BIAS_fg = BIAS(dataset.first_guessed_bed_shape, dataset.true_bed_shape)
RMSE_fg = RMSE(dataset.first_guessed_bed_shape, dataset.true_bed_shape)
DIFF_fg = DIFF(dataset.first_guessed_bed_shape, dataset.true_bed_shape)
elif opti_var == 'w0':
guess_opti_var.append((dataset.guessed_w0[-1] - dataset.true_w0).values)
first_guess_diff.append((dataset.first_guessed_w0 - dataset.true_w0).values)
true_opti_var.append(dataset.true_w0.values)
BIAS_opti_var.append(BIAS(dataset.guessed_w0[max_index], dataset.true_w0))
RMSE_opti_var.append(RMSE(dataset.guessed_w0[max_index], dataset.true_w0))
DIFF_opti_var.append(DIFF(dataset.guessed_w0[max_index], dataset.true_w0))
if check_first_guess is None:
BIAS_fg = BIAS(dataset.first_guessed_w0, dataset.true_w0)
RMSE_fg = RMSE(dataset.first_guessed_w0, dataset.true_w0)
DIFF_fg = DIFF(dataset.first_guessed_w0, dataset.true_w0)
else:
raise ValueError('Unknown opti var!')
fct_opti_var.append(dataset.function_calls[max_index].values)
times.append(dataset.computing_time[max_index].values)
maxiters.append(dataset.attrs['maxiter_reached'])
BIAS_sfc.append(BIAS(dataset.surface_h[max_index], dataset.true_surface_h))
RMSE_sfc.append(RMSE(dataset.surface_h[max_index], dataset.true_surface_h))
DIFF_sfc.append(DIFF(dataset.surface_h[max_index], dataset.true_surface_h))
BIAS_w.append(BIAS(dataset.widths[max_index], dataset.true_widths))
RMSE_w.append(RMSE(dataset.widths[max_index], dataset.true_widths))
DIFF_w.append(DIFF(dataset.widths[max_index], dataset.true_widths))
# determine array length for empty lines
if array_length == 0:
array_length = dataset.points_with_ice[-1].values + 1
# check that the arrays have the same number of points with ice
elif array_length != dataset.points_with_ice[-1].values + 1:
                raise ValueError('Not the same length of points with ice!!!')
# check if all experiments start with the same true values and first guess
# in the first round save values
if check_first_guess is None:
check_first_guess = first_guess_diff[-1]
check_true_opti_var = true_opti_var[-1]
# not implemented yet
BIAS_sfc_fg = BIAS(dataset.first_guess_surface_h, dataset.true_surface_h)
RMSE_sfc_fg = RMSE(dataset.first_guess_surface_h, dataset.true_surface_h)
DIFF_sfc_fg = DIFF(dataset.first_guess_surface_h, dataset.true_surface_h)
BIAS_w_fg = BIAS(dataset.first_guess_widths, dataset.true_widths)
RMSE_w_fg = RMSE(dataset.first_guess_widths, dataset.true_widths)
DIFF_w_fg = DIFF(dataset.first_guess_widths, dataset.true_widths)
# after first round compare all values to first ones to make sure comparing the same start conditions
else:
if np.sum(check_true_opti_var - true_opti_var[-1]) != 0:
raise ValueError('Not the same true control variable!!!')
if np.sum(check_first_guess - first_guess_diff[-1]) != 0:
raise ValueError('Not the same first guess!!!')
    # create variables for plotting (data and y label)
data = []
y_labels = []
# first add heading
data.append(np.empty((array_length)) * np.nan)
if not presentation:
if opti_var == 'bed_h':
y_labels.append(r' RMSE_b, DIFF_b, RMSE_s, DIFF_s, fct, $T_{cpu}$')
elif opti_var in ['bed_shape', 'w0']:
y_labels.append(r' RMSE_Ps, DIFF_Ps, RMSE_w, DIFF_w, fct, $T_{cpu}$')
else:
raise ValueError('Unknown opti_var !')
y_label_variable_format = '{:7.2f}, {: 7.2f}, {:7.2f}, {:7.2f}'
else:
if opti_var == 'bed_h':
y_labels.append(' DIFF_b, fct, t')
elif opti_var in ['bed_shape', 'w0']:
y_labels.append(' DIFF DIFF_w fct time')
else:
raise ValueError('Unknown opti_var !')
y_label_variable_format = '{: 6.2f}' #', {:6.2f}'
if not presentation:
# add first guess
data.append(check_first_guess)
if opti_var == 'bed_h':
y_labels.append(('fg:' + y_label_variable_format).format(RMSE_fg, DIFF_fg,
RMSE_sfc_fg, DIFF_sfc_fg))
elif opti_var in ['bed_shape', 'w0']:
y_labels.append(('fg:' + y_label_variable_format).format(RMSE_fg, DIFF_fg,
RMSE_w_fg, DIFF_w_fg))
else:
raise ValueError('Unknown opti_var !')
else:
# add first guess
data.append(check_first_guess)
if opti_var == 'bed_h':
y_labels.append(('fg:' + y_label_variable_format).format(DIFF_fg))
elif opti_var in ['bed_shape', 'w0']:
y_labels.append(('fg:' + y_label_variable_format).format(DIFF_fg, DIFF_w_fg))
else:
raise ValueError('Unknown opti_var !')
# add two format placeholders for fct_calls and time
y_label_variable_format += ', {:4d}, {:4.0f}s'
# add all other data with empty line for None datasets
for i, guess in enumerate(guess_opti_var):
if guess is None:
data.append(np.empty((array_length)) * np.nan)
if i < 9:
y_labels.append((' ' + chr(65+i) + ': NO DATAFILE FOUND'))
else:
y_labels.append((' ' + chr(65+i) + ': NO DATAFILE FOUND'))
elif type(guess) is str:
data.append(np.empty((array_length)) * np.nan)
if i < 9:
y_labels.append((' ' + chr(65+i) + ': NO Minimisation Possible'))
else:
y_labels.append((' ' + chr(65+i) + ': NO Minimisation Possible'))
else:
data.append(guess)
if i < 9:
y_label_text = (' ' + chr(65+i) + ':' + y_label_variable_format)
else:
y_label_text = (' ' + chr(65+i) + ':' + y_label_variable_format)
if maxiters[i] == 'yes':
y_label_text += '+'
if opti_var == 'bed_h':
if not presentation:
y_labels.append(y_label_text.format(RMSE_opti_var[i],
DIFF_opti_var[i],
RMSE_sfc[i],
DIFF_sfc[i],
fct_opti_var[i],
times[i]))
else:
y_labels.append(y_label_text.format(DIFF_opti_var[i],
fct_opti_var[i],
times[i]))
elif opti_var in ['bed_shape', 'w0']:
if not presentation:
y_labels.append(y_label_text.format(RMSE_opti_var[i],
DIFF_opti_var[i],
RMSE_w[i],
DIFF_w[i],
fct_opti_var[i],
times[i]))
else:
y_labels.append(y_label_text.format(DIFF_opti_var[i],
DIFF_w[i],
fct_opti_var[i],
times[i]))
else:
raise ValueError('Unknown opti_var !')
# make data an numpy array
data = np.array(data)
#choose colormap
if not cmap_levels:
color_nr = 100
if opti_var == 'bed_h':
cmap_limit = np.max(np.abs(check_first_guess))
#cmap_limit = np.max(np.array([np.abs(np.floor(np.nanmin(np.array(data)))),
# np.abs(np.ceil(np.nanmax(np.array(data))))]))
elif opti_var in ['bed_shape', 'w0']:
cmap_limit = np.max(np.abs(check_first_guess))
#cmap_limit = np.max(np.array([np.abs(np.floor(np.nanmin(np.array(data)) * 10)),
# np.abs(np.ceil(np.nanmax(np.array(data)) * 10))])) / 10
else:
raise ValueError('Unknown opti var!!')
#if (np.min(data) < 0) & (np.max(data) > 0):
cmap_levels = np.linspace(-cmap_limit, cmap_limit, color_nr, endpoint=True)
#elif (np.min(data) < 0) & (np.max(data) =< 0):
# cmap_levels = np.linspace(-cmap_limit, 0, color_nr, endpoint=True)
#elif (np.min(data) >= 0) & (np.max(data) > 0)
else:
color_nr = len(cmap_levels) - 1
rel_color_steps = np.arange(color_nr)/color_nr
if cmap == 'rainbow':
colors = cm.rainbow(rel_color_steps)
elif cmap == 'vlag':
colors = sns.color_palette('vlag', color_nr)
elif cmap == 'icefire':
colors = sns.color_palette('icefire', color_nr)
elif cmap == 'Spectral':
colors = sns.color_palette('Spectral_r', color_nr)
cmap = LinearSegmentedColormap.from_list('custom', colors, N=color_nr)
cmap.set_bad(color='white')
norm = mat_colors.BoundaryNorm(cmap_levels, cmap.N)
# plot heatmap
im = plt.imshow(data, aspect='auto', interpolation=None, cmap=cmap, norm=norm, alpha=1.)
# Turn spines and ticks off and create white frame.
for key, spine in ax.axis.items():
spine.major_ticks.set_visible(False)
spine.minor_ticks.set_visible(False)
spine.line.set_visible(False)
# spine.line.set_color(grid_color)
# spine.line.set_linewidth(0) #grid_linewidth)
# set y ticks
ax.set_yticks(np.arange(data.shape[0]))
ax.set_yticklabels(y_labels)
#for tick in ax.get_yticklabels():
# tick.set_fontname("Arial")
# align yticklabels left
ax.axis["left"].major_ticklabels.set_ha("left")
# set pad to put labels over heatmap
ax.axis["left"].major_ticklabels.set_pad(labels_pad)
# set y minor grid
ax.set_yticks(np.arange(data.shape[0]+1)-.5, minor=True)
ax.grid(which="minor", axis='y', color=grid_color, linestyle='-', linewidth=grid_linewidth)
# set x ticklabels off
ax.set_xticklabels([])
# create colorbar
cax = ax.inset_axes([1.01, 0.1, 0.03, 0.8])
#cax = fig.add_axes([0.5, 0, 0.01, 1])
cbar = fig.colorbar(im, cax=cax, boundaries=cmap_levels, spacing='proportional',)
cbar.set_ticks([np.min(cmap_levels),0,np.max(cmap_levels)])
if opti_var == 'bed_h':
cbar.set_ticklabels(['{:d}'.format(int(-cmap_limit)), '0' ,'{:d}'.format(int(cmap_limit))])
elif opti_var == 'bed_shape':
cbar.set_ticklabels(['{:.1f}'.format(-cmap_limit), '0' ,'{:.1f}'.format(cmap_limit)])
elif opti_var == 'w0':
cbar.set_ticklabels(['{:d}'.format(int(-cmap_limit)), '0' ,'{:d}'.format(int(cmap_limit))])
else:
raise ValueError('Unknown opti var!!')
#cbar.ax.set_ylabel(cbarlabel,)
# set title
#ax.set_title(title)
if annotation is not None:
# include text
ax.text(annotation_x_position, annotation_y_position,
annotation,
horizontalalignment='left',
verticalalignment='center',
transform=ax.transAxes)
return im
```
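A minimal usage sketch of `heatmap()` (the file names are placeholders; each dataset is assumed to be an xarray Dataset from the inversion experiments containing the variables referenced above):
```
# Hypothetical usage: load two experiment results and compare them for the bed_h control variable
ds_a = xr.open_dataset('experiment_A.nc')  # placeholder file name
ds_b = xr.open_dataset('experiment_B.nc')  # placeholder file name

fig = plt.figure(figsize=(14, 4))
ax = setup_axes(fig, 111)  # axisartist axes, required for the ax.axis[...] calls above
heatmap([ds_a, ds_b], opti_var='bed_h', fig=fig, ax=ax, annotation='(a)')
plt.show()
```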
## Legend plot
```
def add_legend2(ax,
title,
fontsize,
lw,
ms,
labels):
ax.plot([],
[],
'-',
lw=lw,
ms=ms,
c='none',
label=labels[0])
# plot for first gradient scaling
ax.plot([],
[],
'.-',
lw=lw,
ms=ms,
c=color_1,
label=labels[1])
# plot for second gradient scaling
ax.plot([],
[],
'.-',
lw=lw,
ms=ms,
c=color_2,
zorder=5,
label=labels[2])
    # plot for third gradient scaling
ax.plot([],
[],
'.-',
lw=lw,
ms=ms,
c=color_3,
zorder=5,
label=labels[3])
    # plot for fourth gradient scaling
ax.plot([],
[],
'.-',
lw=lw,
ms=ms,
c=color_4,
zorder=5,
label=labels[4])
l = ax.legend(loc='center', fontsize=fontsize, title=title)
plt.setp(l.get_title(), multialignment='center')
ax.axis('off')
def add_legend(ax,
#title,
fontsize,
lw,
ms,
labels):
ax.plot([],
[],
'-',
lw=lw,
ms=ms,
c='none',
label=labels[0])
# plot for first gradient scaling
ax.plot([],
[],
'.-',
lw=lw,
ms=ms,
c='none',
label=labels[1])
# plot for second gradient scaling
ax.plot([],
[],
'.-',
lw=lw,
ms=ms,
c='none',
zorder=5,
label=labels[2])
ax.plot([],
[],
'.-',
lw=lw,
ms=ms,
c='none',
zorder=5,
label=labels[3])
leg = ax.legend(loc='center',
fontsize=fontsize,
#title=title,
handlelength=0,
handletextpad=0,
fancybox=True)
for item in leg.legendHandles:
item.set_visible(False)
ax.axis('off')
```
## Performance plot
```
def performance_plot(ax,
datasets,
fig=None,
# 'bed_h RMSE', 'bed_h Diff', 'bed_h Bias',
# 'bed_shape RMSE', 'bed_shape Diff', 'bed_shape Bias',
# 'w0 RMSE', 'w0 Diff', 'w0 Bias',
# 'sfc_h RMSE', 'sfc_h Diff', 'sfc_h Bias',
# 'widths RMSE', 'widths Diff', 'widths Bias'
performance_measurement='bed_h RMSE',
xlim=5,
y_label='',
annotation=None,
annotation_x_position=-0.2,
annotation_y_position=1,
lw=2,
fontsize=25,
ms=10,
nr_of_iterations=None,
ax_xlim=None
):
if not fig:
fig = plt.gcf()
measure = performance_measurement
all_x = []
all_y = []
for dataset in datasets:
if dataset is not None:
max_index = len(dataset['computing_time'].values) - 1
if nr_of_iterations is not None:
max_index = nr_of_iterations - 1
elif xlim is not None:
# calculate all max diff surface_h
all_DIFF_sfc_h = np.array(
[DIFF(dataset.true_surface_h.data,
dataset.surface_h.data[i-1])
for i in dataset.coords['nr_of_iteration'].data])
# only consider as many points until max DIFF is smaller xlim
if all_DIFF_sfc_h[-1] < xlim:
max_index = np.argmax(all_DIFF_sfc_h < xlim)
# include time 0 for first guess
tmp_x = [0]
# add first guess values
if measure == 'bed_h RMSE':
tmp_y = [RMSE(dataset['first_guessed_bed_h'], dataset['true_bed_h'])]
elif measure == 'bed_h Diff':
tmp_y = [DIFF(dataset['first_guessed_bed_h'], dataset['true_bed_h'])]
elif measure == 'bed_h Bias':
tmp_y = [BIAS(dataset['first_guessed_bed_h'], dataset['true_bed_h'])]
elif measure == 'bed_shape RMSE':
tmp_y = [RMSE(dataset['first_guessed_bed_shape'], dataset['true_bed_shape'])]
elif measure == 'bed_shape Diff':
tmp_y = [DIFF(dataset['first_guessed_bed_shape'], dataset['true_bed_shape'])]
elif measure == 'bed_shape Bias':
tmp_y = [BIAS(dataset['first_guessed_bed_shape'], dataset['true_bed_shape'])]
elif measure == 'w0 RMSE':
tmp_y = [RMSE(dataset['first_guessed_w0'], dataset['true_w0'])]
elif measure == 'w0 Diff':
tmp_y = [DIFF(dataset['first_guessed_w0'], dataset['true_w0'])]
elif measure == 'w0 Bias':
tmp_y = [BIAS(dataset['first_guessed_w0'], dataset['true_w0'])]
elif measure == 'sfc_h RMSE':
tmp_y = [RMSE(dataset['first_guess_surface_h'], dataset['true_surface_h'])]
elif measure == 'sfc_h Diff':
tmp_y = [DIFF(dataset['first_guess_surface_h'], dataset['true_surface_h'])]
elif measure == 'sfc_h Bias':
                tmp_y = [BIAS(dataset['first_guess_surface_h'], dataset['true_surface_h'])]
elif measure == 'widths RMSE':
tmp_y = [RMSE(dataset['first_guess_widths'], dataset['true_widths'])]
elif measure == 'widths Diff':
tmp_y = [DIFF(dataset['first_guess_widths'], dataset['true_widths'])]
elif measure == 'widths Bias':
                tmp_y = [BIAS(dataset['first_guess_widths'], dataset['true_widths'])]
else:
raise ValueError('Unknown performance measurement!')
for i in dataset.coords['nr_of_iteration'].values[:max_index + 1] - 1:
tmp_x.append(dataset['computing_time'][i])
if measure == 'bed_h RMSE':
tmp_y.append(RMSE(dataset['guessed_bed_h'][i], dataset['true_bed_h']))
elif measure == 'bed_h Diff':
tmp_y.append(DIFF(dataset['guessed_bed_h'][i], dataset['true_bed_h']))
elif measure == 'bed_h Bias':
tmp_y.append(BIAS(dataset['guessed_bed_h'][i], dataset['true_bed_h']))
elif measure == 'bed_shape RMSE':
tmp_y.append(RMSE(dataset['guessed_bed_shape'][i], dataset['true_bed_shape']))
elif measure == 'bed_shape Diff':
tmp_y.append(DIFF(dataset['guessed_bed_shape'][i], dataset['true_bed_shape']))
elif measure == 'bed_shape Bias':
tmp_y.append(BIAS(dataset['guessed_bed_shape'][i], dataset['true_bed_shape']))
elif measure == 'w0 RMSE':
tmp_y.append(RMSE(dataset['guessed_w0'][i], dataset['true_w0']))
elif measure == 'w0 Diff':
tmp_y.append(DIFF(dataset['guessed_w0'][i], dataset['true_w0']))
elif measure == 'w0 Bias':
tmp_y.append(BIAS(dataset['guessed_w0'][i], dataset['true_w0']))
elif measure == 'sfc_h RMSE':
tmp_y.append(RMSE(dataset['surface_h'][i], dataset['true_surface_h']))
elif measure == 'sfc_h Diff':
tmp_y.append(DIFF(dataset['surface_h'][i], dataset['true_surface_h']))
elif measure == 'sfc_h Bias':
tmp_y.append(BIAS(dataset['surface_h'][i], dataset['true_surface_h']))
elif measure == 'widths RMSE':
tmp_y.append(RMSE(dataset['widths'][i], dataset['true_widths']))
elif measure == 'widths Diff':
tmp_y.append(DIFF(dataset['widths'][i], dataset['true_widths']))
elif measure == 'widths Bias':
tmp_y.append(BIAS(dataset['widths'][i], dataset['true_widths']))
else:
raise ValueError('Unknown performance measurement!')
else:
tmp_x = []
tmp_y = []
all_x.append(tmp_x)
all_y.append(tmp_y)
colors = [color_1, color_2, color_3, color_4]
for i, (x, y) in enumerate(zip(all_x, all_y)):
ax.plot(x, y,
'.-',
lw=lw,
ms=ms,
c=colors[i])
#ax.legend((),(),title=measure, loc='best')
#ax.axvline(60, alpha=0.5, c='gray', ls='--')
# ax.axvline(20, alpha=0.5, c='gray', ls='--')
#if xlim is not None:
# ax.set_xlim(xlim)
ax.tick_params(axis='both', colors=axis_color, width=lw)
ax.spines['bottom'].set_color(axis_color)
ax.spines['bottom'].set_linewidth(lw)
ax.spines['left'].set_color(axis_color)
ax.spines['left'].set_linewidth(lw)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.set_xlabel(r'$T_{cpu}$', fontsize=fontsize, c=axis_color)
ax.set_ylabel(y_label, fontsize=fontsize, c=axis_color)
if ax_xlim is not None:
ax.set_xlim(ax_xlim)
if annotation is not None:
ax.text(annotation_x_position, annotation_y_position,
annotation,
horizontalalignment='left',
verticalalignment='center',
transform = ax.transAxes)
```
# Define Colors
```
colors = sns.color_palette("colorblind")
colors
axis_color = list(colors[7]) + [1.]
color_1 = list(colors[3]) + [1.]
color_2 = list(colors[0]) + [1.]
color_3 = list(colors[4]) + [1.]
color_4 = list(colors[2]) + [1.]
glacier_color = list(colors[9]) + [.5]
```
# Import Data
```
input_folder = 'plot_data/'
filename_scale_1 = 'par_clif_cons_ret_bed_h_and_bed_shape_at_once_scal1reg11.nc'
filename_scale_1e4 = 'par_clif_cons_ret_bed_h_and_bed_shape_at_once_scal1e4reg11.nc'
filename_separated = 'par_clif_cons_ret_bed_h_and_bed_shape_separatedreg11.nc'
filename_calculated = 'par_clif_cons_ret_bed_h_and_bed_shape_calculatedreg11.nc'
datasets = []
for filename in [filename_scale_1, filename_separated, filename_scale_1e4, filename_calculated]:
with xr.open_dataset(input_folder + filename) as ds:
datasets.append(ds)
```
# Create figure with performance
```
#dataset,
nr_of_iterations = None
facecolor = 'white'
labels_pad = -700
cmap = 'Spectral'
fontsize = 25
lw=2
ms=10
annotation_x_position_spatial = -0.15
annotation_y_position_spatial = 0.9
annotation_x_position_performance = -0.14
annotation_y_position_performance = 1.05
#index_start_first_profil_row = 0
#index_end_first_profil_row = 6
#index_start_second_profil_row = 65
#index_end_second_profil_row = 71
save_file = True
filename = 'par_methods_overview.pdf'
#plt.rcParams['font.family'] = 'monospace'
mpl.rcParams.update({'font.size': fontsize})
fig = plt.figure(figsize=(1,1), facecolor='white')
# define grid
total_width = 10
# define fixed size of spatial subplot
spatial_height = 2.5
spatial_y_separation = 0.5
# define fixed size for performance plot
performance_height = 2.5
performance_width = 8
performance_separation_y = 1
separation_y_performance_spatial = 0.5
# define fixed size for legend
legend_height = 3.5
separation_x_legend_spatial = 0.5
# fixed size in inch
# along x axis x-index for locator
horiz = [Size.Fixed(total_width), # 0
]
# y-index for locator
vert = [Size.Fixed(performance_height), # 0 performance row 2
Size.Fixed(separation_y_performance_spatial),
Size.Fixed(performance_height), # 2 performance row 1
Size.Fixed(separation_y_performance_spatial),
Size.Fixed(spatial_height), # 4 spatial row 2
Size.Fixed(spatial_y_separation),
Size.Fixed(spatial_height), # 6 spatial row 1
Size.Fixed(separation_x_legend_spatial),
Size.Fixed(legend_height), # 8 legend
]
# define indices for subplots for easier changes later
# spatial heatmap
spatial_nx = 0
spatial_nx1 = 1
spatial_ny_row_1 = 6
spatial_ny_row_2 = 4
spatial_annotation = ['(a)', '(b)']
# performance
performance_nx = 0
performance_ny_row_1 = 2
performance_ny_row_2 = 0
# legend
legend_nx = 0
legend_ny = 8
# Position of the grid in the figure
rect = (0., 0., 1., 1.)
# divide the axes rectangle into grid whose size is specified by horiz * vert
divider = Divider(fig, rect, horiz, vert, aspect=False)
with plt.rc_context({'font.family': 'monospace'}):
ax = setup_axes(fig, 111)
im = heatmap(datasets,
opti_var='bed_h',
annotation=spatial_annotation[0],
annotation_x_position=annotation_x_position_spatial,
annotation_y_position=annotation_y_position_spatial,
fig=fig,
ax=ax,
cmap=cmap,
grid_color=facecolor,
presentation=False,
labels_pad=labels_pad,
xlim=5,
nr_of_iterations=nr_of_iterations)
ax.set_axes_locator(divider.new_locator(nx=spatial_nx,
nx1=spatial_nx1,
ny=spatial_ny_row_1))
ax = setup_axes(fig, 111)
im = heatmap(datasets,
opti_var='bed_shape',
annotation=spatial_annotation[1],
annotation_x_position=annotation_x_position_spatial,
annotation_y_position=annotation_y_position_spatial,
fig=fig,
ax=ax,
cmap='vlag',
grid_color=facecolor,
presentation=False,
labels_pad=labels_pad,
xlim=5,
nr_of_iterations=nr_of_iterations)
ax.set_axes_locator(divider.new_locator(nx=spatial_nx,
nx1=spatial_nx1,
ny=spatial_ny_row_2))
    # add performance plot bed_h RMSE
ax = fig.subplots()
performance_plot(ax,
datasets,
fig=None,
# 'bed_h RMSE', 'bed_h Diff', 'bed_h Bias',
# 'bed_shape RMSE', 'bed_shape Diff', 'bed_shape Bias',
# 'w0 RMSE', 'w0 Diff', 'w0 Bias',
# 'sfc_h RMSE', 'sfc_h Diff', 'sfc_h Bias',
# 'widths RMSE', 'widths Diff', 'widths Bias'
performance_measurement='bed_h RMSE',
xlim=5,
y_label='RMSE_b',
annotation='(c)',
annotation_x_position=annotation_x_position_performance,
annotation_y_position=annotation_y_position_performance,
lw=lw,
fontsize=fontsize,
ms=ms,
nr_of_iterations=nr_of_iterations,
ax_xlim=[0, 400]
)
ax.set_axes_locator(divider.new_locator(nx=performance_nx,
ny=performance_ny_row_1))
    # add performance plot bed_shape RMSE
ax = fig.subplots()
performance_plot(ax,
datasets,
fig=None,
# 'bed_h RMSE', 'bed_h Diff', 'bed_h Bias',
# 'bed_shape RMSE', 'bed_shape Diff', 'bed_shape Bias',
# 'w0 RMSE', 'w0 Diff', 'w0 Bias',
# 'sfc_h RMSE', 'sfc_h Diff', 'sfc_h Bias',
# 'widths RMSE', 'widths Diff', 'widths Bias'
performance_measurement='bed_shape RMSE',
xlim=5,
y_label='RMSE_Ps',
annotation='(d)',
annotation_x_position=annotation_x_position_performance,
annotation_y_position=annotation_y_position_performance,
lw=lw,
fontsize=fontsize,
ms=ms,
nr_of_iterations=nr_of_iterations,
ax_xlim=[0, 350]
)
ax.set_axes_locator(divider.new_locator(nx=performance_nx,
ny=performance_ny_row_2))
# add legend
ax = fig.subplots()
add_legend2(ax=ax,
title=(r'$\bf{cliff}$ with $\bf{constant}$ width and $\bf{parabolic}$ shape,' +
'\n' +
r'$\bf{retreating}$ from an $\bf{initial~ glacier~ surface}$,' +
'\n' +
r'regularisation parameters $\lambda_0$ = 1 and $\lambda_1$ = 100'),
fontsize=fontsize,
lw=lw,
ms=ms,
labels=['fg: first guess',
"and A: 'explicit' without scaling",
"and B: 'iterative'",
"and C: 'explicit' with scaling of 1e-4",
"and D: 'implicit' with no limits"])
ax.set_axes_locator(divider.new_locator(nx=legend_nx,
ny=legend_ny))
if save_file:
fig.savefig(filename, format='pdf', bbox_inches='tight', dpi=300);
```
```
# import sys
# #sys.path.insert(0,'../input/dlibpkg/dlib-19.19.0/')
# sys.path.insert(0,'../input/imutils/imutils-0.5.3/')
# !pip install dlib
# import dlib
# from scipy.spatial import distance as dist
# from imutils.video import FileVideoStream
# from imutils.video import VideoStream
# from imutils import face_utils
# import numpy as np
# import imutils
# import time
# import cv2
# def eye_aspect_ratio(eye):
# # compute the euclidean distances between the two sets of vertical eye landmarks (x, y)-coordinates
# A = dist.euclidean(eye[1], eye[5])
# B = dist.euclidean(eye[2], eye[4])
# # compute the euclidean distance between the horizontal-eye landmark (x, y)-coordinates
# C = dist.euclidean(eye[0], eye[3])
# ear = (A + B) / (2.0 * C)
# return ear
# # define two constants, one for the eye aspect ratio to indicate
# # blink and then a second constant for the number of consecutive
# # frames the eye must be below the threshold
# EYE_AR_THRESH = 0.3
# EYE_AR_CONSEC_FRAMES = 3
# # initialize the frame counters and the total number of blinks
# COUNTER = 0
# TOTAL = 0
# print("[INFO] loading facial landmark predictor...")
# detector = dlib.get_frontal_face_detector()
# dlib_path = '../input/face-det/shape_predictor_68_face_landmarks.dat'
# predictor = dlib.shape_predictor(dlib_path)
# # loop over frames from the video stream and grab the indexes of the facial landmarks for the left and right eye, respectively
# (lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
# (rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
# # start the video stream thread
# print("[INFO] starting video stream thread..................")
# video_path = '../input/deepfake-detection-challenge/train_sample_videos/dkzvdrzcnr.mp4'
# vs = FileVideoStream(video_path).start()
# fileStream = True
# time.sleep(1.0)
# while True:
# if fileStream and not vs.more():
# break
# frame = vs.read()
# print(frame.shape)
# frame = imutils.resize(frame, width=450)
# print("after")
# gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# # detect faces in the grayscale frame
# rects = detector(gray, 0)
# print(rects)
# # loop over the face detections
# for rect in rects:
# # determine the facial landmarks for the face region, then convert the facial landmark (x, y)-coordinates to a NumPy array
# shape = predictor(gray, rect)
# shape = face_utils.shape_to_np(shape)
# # extract the left and right eye coordinates, then use the
# # coordinates to compute the eye aspect ratio for both eyes
# leftEye = shape[lStart:lEnd]
# rightEye = shape[rStart:rEnd]
# leftEAR = eye_aspect_ratio(leftEye)
# rightEAR = eye_aspect_ratio(rightEye)
# # average the eye aspect ratio together for both eyes
# ear = (leftEAR + rightEAR) / 2.0
# print("ear")
# # compute the convex hull for the left and right eye, then
# # visualize each of the eyes
# leftEyeHull = cv2.convexHull(leftEye)
# rightEyeHull = cv2.convexHull(rightEye)
# cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1)
# cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1)
# # check to see if the eye aspect ratio is below the blink
# # threshold, and if so, increment the blink frame counter
# if ear < EYE_AR_THRESH:
# COUNTER += 1
# # otherwise, the eye aspect ratio is not below the blink
# # threshold
# else:
# # if the eyes were closed for a sufficient number of
# # then increment the total number of blinks
# if COUNTER >= EYE_AR_CONSEC_FRAMES:
# TOTAL += 1
# # reset the eye frame counter
# COUNTER = 0
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import cv2
import os
from tqdm import tqdm,trange
from sklearn.model_selection import train_test_split
import sklearn.metrics
import torch
import torch.nn as nn
import torch.nn.functional as F
import warnings
warnings.filterwarnings("ignore")
```
# Setup Data
```
#----------Training data set
df_train = [pd.read_json(f'../input/deepfake/metadata{i}.json')
            for i in range(47)]  # metadata0.json ... metadata46.json
train_nums = ["%.2d" % i for i in range(len(df_train))]  # '00' ... '46'; '47'-'49' belong to the validation set below
#--------------Validation data set
df_val1 = pd.read_json('../input/deepfake/metadata47.json')
df_val2 = pd.read_json('../input/deepfake/metadata48.json')
df_val3 = pd.read_json('../input/deepfake/metadata49.json')
df_val = [df_val1, df_val2, df_val3]
val_nums =['47', '48', '49']
# def get_all_paths(df_list,suffixes_list):
# LABELS = {'REAL':0,'FAKE':1}
# paths = []
# labels = []
# for df,suffix in tqdm(zip(df_list,suffixes_list),total=len(df_list)):
# images_names = list(df.columns.values)
# for img_name in images_names:
# try:
# paths.append(get_path(img_name,suffix))
# labels.append(LABELS[df[img_name]['label']])
# except Exception as err:
# #print(err)
# pass
# return paths,labels
def get_orig_fakes(df):
orig_fakes = {}
temp = df.T.groupby(['original',df.T.index,]).count()
for orig,fake in (list(temp.index)):
fakes = []
try:#if key exists
fakes = orig_fakes[orig]
fakes.append(fake)
except KeyError as e:
fakes.append(fake)
finally:
orig_fakes[orig] = fakes
return orig_fakes
def get_path(img_name,suffix):
path = '../input/deepfake/DeepFake'+suffix+'/DeepFake'+suffix+'/' + img_name.replace(".mp4","")+ '.jpg'
if not os.path.exists(path):
raise Exception
return path
def get_all_paths(df_list,suffixes_list):
paths = []
labels = []
count = 0
for df in tqdm(df_list,total=len(df_list)):
orig_fakes = get_orig_fakes(df)
for suffix in suffixes_list:
try:
for orig,fakes in orig_fakes.items():
paths.append(get_path(orig,suffix))
labels.append(0)#processing REAL image
for img_name in fakes:
paths.append(get_path(img_name,suffix))
labels.append(1)#processing FAKES image
except Exception as err:
count+=1
pass
print("Exceptions:",count)
return paths,labels
%%time
val_img_paths, val_img_labels = get_all_paths(df_val,val_nums)
train_img_paths, train_img_labels = get_all_paths(df_train,train_nums)
len(train_img_paths),len(val_img_paths)
#NOT IDEMPOTENT
val_img_labels = val_img_labels[:500]
val_img_paths = val_img_paths[:500]
len(val_img_paths)
```
# Dataset
```
def read_img(path):
return cv2.cvtColor(cv2.imread(path),cv2.COLOR_BGR2RGB)
import random
def shuffle(X,y):
new = []
for m,n in zip(X,y):
new.append([m,n])
random.shuffle(new)
X,y = [],[]
for path,label in new:
X.append(path)
y.append(label)
return X,y
```
## FAKE-->1 REAL-->0
```
def get_data(train_paths, train_y, val_paths, val_y):
train_X=[]
for img in tqdm(train_paths):
train_X.append(read_img(img))
val_X=[]
for img in tqdm(val_paths):
val_X.append(read_img(img))
train_X, train_y = shuffle(train_X,train_y)
val_X, val_y = shuffle(val_X,val_y)
return train_X, val_X, train_y, val_y
'''
def get_random_sampling(paths, y, val_paths, val_y):
real=[]
fake=[]
for path,label in zip(paths,y):
if label==0:
real.append(path)
else:
fake.append(path)
# fake=random.sample(fake,len(real))
paths,y=[],[]
for x in real:
paths.append(x)
y.append(0)
for x in fake:
paths.append(x)
y.append(1)
real=[]
fake=[]
for m,n in zip(val_paths,val_y):
if n==0:
real.append(m)
else:
fake.append(m)
# fake=random.sample(fake,len(real))
val_paths,val_y=[],[]
for x in real:
val_paths.append(x)
val_y.append(0)
for x in fake:
val_paths.append(x)
val_y.append(1)
#training dataset
X=[]
for img in tqdm(paths):
X.append(read_img(img))
#validation dataset
val_X=[]
for img in tqdm(val_paths):
val_X.append(read_img(img))
# Balance with ffhq dataset
ffhq = os.listdir('../input/ffhq-face-data-set/thumbnails128x128')
X_ = []#used for train and val
for file in tqdm(ffhq):
im = read_img(f'../input/ffhq-face-data-set/thumbnails128x128/{file}')
im = cv2.resize(im, (150,150))
X_.append(im)
random.shuffle(X_)
#Appending REAL images from FFHQ dataset
for i in range(64773 - 12130):
X.append(X_[i])
y.append(0)
del X_[0:64773 - 12130]
for i in range(6108 - 1258):
val_X.append(X_[i])
val_y.append(0)
X, y = shuffle(X,y)
val_X, val_y = shuffle(val_X,val_y)
return X, val_X, y, val_y
'''
from torch.utils.data import Dataset, DataLoader
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
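# ImageNet channel statistics: albumentations' Normalize() uses these values by default,
# and they are reused further below to un-normalize images for display.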
class ImageDataset(Dataset):
def __init__(self, X, y, training=True, transform=None):
self.X = X
self.y = y
self.transform = transform
self.training = training
def __len__(self):
return len(self.X)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img = self.X[idx]
if self.transform is not None:
res = self.transform(image=img)
img = res['image']
img = np.rollaxis(img, 2, 0)
# img = np.array(img).astype(np.float32) / 255.
labels = self.y[idx]
labels = np.array(labels).astype(np.float32)
return [img, labels]
```
# Model
```
!pip install pytorchcv --quiet
from pytorchcv.model_provider import get_model
model = get_model("xception", pretrained=True)
# model = get_model("resnet18", pretrained=True)
model = nn.Sequential(*list(model.children())[:-1]) # Remove original output layer
model[0].final_block.pool = nn.Sequential(nn.AdaptiveAvgPool2d(1))
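# Replacing the final pooling with AdaptiveAvgPool2d(1) collapses the spatial dimensions,
# so the backbone returns a (N, 2048, 1, 1) tensor that Head below flattens to 2048 features.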
# model[0].final_pool = nn.Sequential(nn.AdaptiveAvgPool2d(1))
class Head(torch.nn.Module):
def __init__(self, in_f, out_f):
super().__init__()
self.f = nn.Flatten()
self.l = nn.Linear(in_f, 512)
self.d = nn.Dropout(0.30)
self.o = nn.Linear(512, out_f)
self.b1 = nn.BatchNorm1d(in_f)
self.b2 = nn.BatchNorm1d(512)
self.r = nn.ReLU()
def forward(self, x):
x = self.f(x)
x = self.b1(x)
x = self.d(x)
x = self.l(x)
x = self.r(x)
x = self.b2(x)
x = self.d(x)
out = self.o(x)
return out
class FCN(torch.nn.Module):
def __init__(self, base, in_f):
super().__init__()
self.base = base
self.h1 = Head(in_f, 1)
def forward(self, x):
x = self.base(x)
return self.h1(x)
model = FCN(model, 2048)
PATH = './model1.pth'
model.load_state_dict(torch.load(PATH))
model.eval()
```
# Train Functions
```
def calculate_loss(preds, targets):
return F.binary_cross_entropy(F.sigmoid(preds), targets)
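# Note: F.binary_cross_entropy_with_logits (equivalently nn.BCEWithLogitsLoss) fuses the
# sigmoid and the BCE into one numerically more stable call and could be used here instead.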
def train_model(epoch, optimizer, scheduler=None, history=None):
model.train()
total_loss = 0
t = tqdm(train_loader)
for i, (img_batch, y_batch) in enumerate(t):
img_batch = img_batch.cuda().float()
y_batch = y_batch.cuda().float()
optimizer.zero_grad()#to avoid accumulating gradients
preds_batch = model(img_batch)
loss = calculate_loss(preds_batch, y_batch)
total_loss += loss
t.set_description(f'Epoch {epoch+1}/{n_epochs}, LR: %6f, Loss: %.4f'%(optimizer.state_dict()['param_groups'][0]['lr'],total_loss/(i+1)))
if history is not None:
history.loc[epoch + i / len(X), 'train_loss'] = loss.data.cpu().numpy()
history.loc[epoch + i / len(X), 'lr'] = optimizer.state_dict()['param_groups'][0]['lr']
loss.backward()#computing gradients
optimizer.step()#updating parameters
if scheduler is not None:
scheduler.step()
def evaluate_model(epoch, scheduler=None, history=None):
model.eval()
total_loss = 0.0
pred = []
target = []
with torch.no_grad():
for img_batch, y_batch in val_loader:
img_batch = img_batch.cuda().float()
y_batch = y_batch.cuda().float()
preds_batch = model(img_batch)
loss = calculate_loss(preds_batch, y_batch)
total_loss += loss
            pred += [*map(F.sigmoid, preds_batch)]
            target += [*map(lambda i: i.data.cpu(), y_batch)]
pred = [p.data.cpu().numpy() for p in pred]
pred2 = pred
pred = [np.round(p) for p in pred]
pred = np.array(pred)
    # "accuracy" here is the macro-averaged recall over the two classes
acc = sklearn.metrics.recall_score(target, pred, average='macro')
target = [i.item() for i in target]
pred2 = np.array(pred2).clip(0.1, 0.9)
#calculating log-loss after clipping
log_loss = sklearn.metrics.log_loss(target, pred2)
total_loss /= len(val_loader)
if history is not None:
history.loc[epoch, 'dev_loss'] = total_loss.cpu().numpy()
if scheduler is not None:
scheduler.step(total_loss)
print(f'Dev loss: %.4f, Acc: %.6f, log_loss: %.6f'%(total_loss,acc,log_loss))
return total_loss
```
# Dataloaders
```
X, val_X, y, val_y = get_data(train_img_paths, train_img_labels,val_img_paths, val_img_labels)
print('There are '+str(y.count(1))+' fake train samples')
print('There are '+str(y.count(0))+' real train samples')
print('There are '+str(val_y.count(1))+' fake val samples')
print('There are '+str(val_y.count(0))+' real val samples')
import albumentations
from albumentations.augmentations.transforms import ShiftScaleRotate, HorizontalFlip, Normalize, RandomBrightnessContrast, MotionBlur, Blur, GaussNoise, JpegCompression
train_transform = albumentations.Compose([
ShiftScaleRotate(p=0.3, scale_limit=0.25, border_mode=1, rotate_limit=25),
HorizontalFlip(p=0.2),
RandomBrightnessContrast(p=0.3, brightness_limit=0.25, contrast_limit=0.5),
MotionBlur(p=.2),
GaussNoise(p=.2),
JpegCompression(p=.2, quality_lower=50),
Normalize()
])
val_transform = albumentations.Compose([Normalize()])
train_dataset = ImageDataset(X, y, transform=train_transform)
val_dataset = ImageDataset(val_X, val_y, transform=val_transform)
nrow, ncol = 5, 6
fig, axes = plt.subplots(nrow, ncol, figsize=(20, 8))
axes = axes.flatten()
for i, ax in enumerate(axes):
image, label = train_dataset[i]
image = np.rollaxis(image, 0, 3)
image = image*std + mean
image = np.clip(image, 0., 1.)
ax.imshow(image)
ax.set_title(f'label: {label}')
```
# Train
```
import gc
history = pd.DataFrame()
history2 = pd.DataFrame()
torch.cuda.empty_cache()
gc.collect()
best = 1e10
n_epochs = 20
batch_size = 64#BATCH SIZE CHANGED
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, num_workers=4)
val_loader = DataLoader(dataset=val_dataset, batch_size=batch_size, shuffle=False, num_workers=0)
model = model.cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=5, mode='min', factor=0.7, verbose=True, min_lr=1e-5)
for epoch in range(n_epochs):
torch.cuda.empty_cache()
gc.collect()
train_model(epoch, optimizer, scheduler=None, history=history)
loss = evaluate_model(epoch, scheduler=scheduler, history=history2)
if loss < best:
best = loss
print(f'Saving best model...')
torch.save(model.state_dict(), f'model2.pth')
history2.plot()
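# Side note (illustration only): the short demo below shows why optimizer.zero_grad() is
# called at every step in train_model() above -- PyTorch accumulates gradients in the
# .grad attribute across successive backward() calls instead of overwriting them.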
import torch
w = torch.rand(5)
w.requires_grad_()
print(w)
s = w.sum()
print(s)
s.backward()
print(w.grad) # tensor([1., 1., 1., 1., 1.])
s.backward()
print(w.grad) # tensor([2., 2., 2., 2., 2.])
s.backward()
print(w.grad) # tensor([3., 3., 3., 3., 3.])
s.backward()
print(w.grad) # tensor([4., 4., 4., 4., 4.])
```
# Face Recognition
In this assignment, you will build a face recognition system. Many of the ideas presented here are from [FaceNet](https://arxiv.org/pdf/1503.03832.pdf). In lecture, we also talked about [DeepFace](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf).
Face recognition problems commonly fall into two categories:
- **Face Verification** - "is this the claimed person?". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem.
- **Face Recognition** - "who is this person?". For example, the video lecture showed a [face recognition video](https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem.
FaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person.
**In this assignment, you will:**
- Implement the triplet loss function
- Use a pretrained model to map face images into 128-dimensional encodings
- Use these encodings to perform face verification and face recognition
#### Channels-first notation
* In this exercise, we will be using a pre-trained model which represents ConvNet activations using a **"channels first"** convention, as opposed to the "channels last" convention used in lecture and previous programming assignments.
* In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$.
* Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community. A short conversion sketch is shown below.
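Both conventions describe the same data; here is a minimal NumPy sketch (illustration only, not part of the assignment) of converting a channels-last batch into the channels-first layout used here:
```
import numpy as np

# A channels-last batch of 32 RGB images of size 96x96: (m, n_H, n_W, n_C)
batch_channels_last = np.zeros((32, 96, 96, 3))

# Permute the axes into the channels-first layout (m, n_C, n_H, n_W)
batch_channels_first = np.transpose(batch_channels_last, (0, 3, 1, 2))
print(batch_channels_first.shape)  # (32, 3, 96, 96)
```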
## <font color='darkblue'>Updates</font>
#### If you were working on the notebook before this update...
* The current notebook is version "3a".
* You can find your original work saved in the notebook with the previous version name ("v3")
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.
#### List of updates
* `triplet_loss`: Additional Hints added.
* `verify`: Hints added.
* `who_is_it`: corrected hints given in the comments.
* Spelling and formatting updates for easier reading.
#### Load packages
Let's load the required packages.
```
from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format('channels_first')
import cv2
import os
import numpy as np
from numpy import genfromtxt
import pandas as pd
import tensorflow as tf
from fr_utils import *
from inception_blocks_v2 import *
%matplotlib inline
%load_ext autoreload
%autoreload 2
np.set_printoptions(threshold=np.nan)
```
## 0 - Naive Face Verification
In Face Verification, you're given two images and you have to determine if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images is less than a chosen threshold, it may be the same person!
<img src="images/pixel_comparison.png" style="width:380px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u></center></caption>
* Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on.
* You'll see that rather than using the raw image, you can learn an encoding, $f(img)$.
* By using an encoding for each image, an element-wise comparison produces a more accurate judgement as to whether two pictures are of the same person.
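For reference, a minimal sketch of that naive pixel-by-pixel check (illustration only; the `threshold` value is an arbitrary placeholder and nothing else in this notebook uses this function):
```
import numpy as np

def naive_verify(img1, img2, threshold=100.0):
    # L2 distance between the raw pixel values of two same-shaped images
    dist = np.linalg.norm(img1.astype(np.float64) - img2.astype(np.float64))
    # "Same person" if the distance falls below the hand-picked threshold
    return dist, dist < threshold
```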
## 1 - Encoding face images into a 128-dimensional vector
### 1.1 - Using a ConvNet to compute encodings
The FaceNet model takes a lot of data and a long time to train. So following common practice in applied deep learning, let's load weights that someone else has already trained. The network architecture follows the Inception model from [Szegedy *et al.*](https://arxiv.org/abs/1409.4842). We have provided an inception network implementation. You can look in the file `inception_blocks_v2.py` to see how it is implemented (do so by going to "File->Open..." at the top of the Jupyter notebook. This opens the file directory that contains the '.py' file).
The key things you need to know are:
- This network uses 96x96 dimensional RGB images as its input. Specifically, it takes a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$
- It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vector
Run the cell below to create the model for face images.
```
FRmodel = faceRecoModel(input_shape=(3, 96, 96))
print("Total Params:", FRmodel.count_params())
```
** Expected Output **
<table>
<center>
Total Params: 3743280
</center>
</table>
By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings to compare two face images as follows:
<img src="images/distance_kiank.png" style="width:680px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 2**: <br> </u> <font color='purple'> By computing the distance between two encodings and thresholding, you can determine if the two pictures represent the same person</center></caption>
So, an encoding is a good one if:
- The encodings of two images of the same person are quite similar to each other.
- The encodings of two images of different persons are very different.
The triplet loss function formalizes this, and tries to "push" the encodings of two images of the same person (Anchor and Positive) closer together, while "pulling" the encodings of two images of different persons (Anchor, Negative) further apart.
<img src="images/triplet_comparison.png" style="width:280px;height:150px;">
<br>
<caption><center> <u> <font color='purple'> **Figure 3**: <br> </u> <font color='purple'> In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) </center></caption>
### 1.2 - The Triplet Loss
For an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.
<img src="images/f_x.png" style="width:380px;height:150px;">
<!--
We will also add a normalization step at the end of our model so that $\mid \mid f(x) \mid \mid_2 = 1$ (means the vector of encoding should be of norm 1).
!-->
Training will use triplets of images $(A, P, N)$:
- A is an "Anchor" image--a picture of a person.
- P is a "Positive" image--a picture of the same person as the Anchor image.
- N is a "Negative" image--a picture of a different person than the Anchor image.
These triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example.
You'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$ by at least a margin $\alpha$:
$$\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 + \alpha < \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$$
You would thus like to minimize the following "triplet cost":
$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \underbrace{\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2}_\text{(1)} - \underbrace{\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2}_\text{(2)} + \alpha \large ] \small_+ \tag{3}$$
Here, we are using the notation "$[z]_+$" to denote $max(z,0)$.
Notes:
- The term (1) is the squared distance between the anchor "A" and the positive "P" for a given triplet; you want this to be small.
- The term (2) is the squared distance between the anchor "A" and the negative "N" for a given triplet, you want this to be relatively large. It has a minus sign preceding it because minimizing the negative of the term is the same as maximizing that term.
- $\alpha$ is called the margin. It is a hyperparameter that you pick manually. We will use $\alpha = 0.2$.
Most implementations also rescale the encoding vectors to have an L2 norm equal to one (i.e., $\mid \mid f(img)\mid \mid_2$=1); you won't have to worry about that in this assignment.
**Exercise**: Implement the triplet loss as defined by formula (3). Here are the 4 steps:
1. Compute the distance between the encodings of "anchor" and "positive": $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$
2. Compute the distance between the encodings of "anchor" and "negative": $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$
3. Compute the formula per training example: $ \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2 + \alpha$
4. Compute the full formula by taking the max with zero and summing over the training examples:
$$\mathcal{J} = \sum^{m}_{i=1} \large[ \small \mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2 - \mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2+ \alpha \large ] \small_+ \tag{3}$$
#### Hints
* Useful functions: `tf.reduce_sum()`, `tf.square()`, `tf.subtract()`, `tf.add()`, `tf.maximum()`.
* For steps 1 and 2, you will sum over the entries of $\mid \mid f(A^{(i)}) - f(P^{(i)}) \mid \mid_2^2$ and $\mid \mid f(A^{(i)}) - f(N^{(i)}) \mid \mid_2^2$.
* For step 4 you will sum over the training examples.
#### Additional Hints
* Recall that the square of the L2 norm is the sum of the squared differences: $||x - y||_{2}^{2} = \sum_{i=1}^{N}(x_{i} - y_{i})^{2}$
* Note that the `anchor`, `positive` and `negative` encodings are of shape `(m,128)`, where m is the number of training examples and 128 is the number of elements used to encode a single example.
* For steps 1 and 2, you will maintain the number of `m` training examples and sum along the 128 values of each encoding.
[tf.reduce_sum](https://www.tensorflow.org/api_docs/python/tf/math/reduce_sum) has an `axis` parameter. This chooses along which axis the sums are applied.
* Note that one way to choose the last axis in a tensor is to use negative indexing (`axis=-1`).
* In step 4, when summing over training examples, the result will be a single scalar value.
* For `tf.reduce_sum` to sum across all axes, keep the default value `axis=None`.
```
# GRADED FUNCTION: triplet_loss
def triplet_loss(y_true, y_pred, alpha = 0.2):
"""
Implementation of the triplet loss as defined by formula (3)
Arguments:
y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.
y_pred -- python list containing three objects:
anchor -- the encodings for the anchor images, of shape (None, 128)
positive -- the encodings for the positive images, of shape (None, 128)
negative -- the encodings for the negative images, of shape (None, 128)
Returns:
loss -- real number, value of the loss
"""
anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
### START CODE HERE ### (≈ 4 lines)
# Step 1: Compute the (encoding) distance between the anchor and the positive
pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis = -1)
# Step 2: Compute the (encoding) distance between the anchor and the negative
neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis = -1)
# Step 3: subtract the two previous distances and add alpha.
basic_loss = pos_dist - neg_dist + alpha
# Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))
### END CODE HERE ###
return loss
with tf.Session() as test:
tf.set_random_seed(1)
y_true = (None, None, None)
y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),
tf.random_normal([3, 128], mean=1, stddev=1, seed = 1),
tf.random_normal([3, 128], mean=3, stddev=4, seed = 1))
loss = triplet_loss(y_true, y_pred)
print("loss = " + str(loss.eval()))
```
**Expected Output**:
<table>
<tr>
<td>
**loss**
</td>
<td>
528.143
</td>
</tr>
</table>
## 2 - Loading the pre-trained model
FaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run.
```
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)
```
Here are some examples of distances between the encodings between three individuals:
<img src="images/distance_matrix.png" style="width:380px;height:200px;">
<br>
<caption><center> <u> <font color='purple'> **Figure 4**:</u> <br> <font color='purple'> Example of distance outputs between three individuals' encodings</center></caption>
Let's now use this model to perform face verification and face recognition!
## 3 - Applying the model
You are building a system for an office building where the building manager would like to offer facial recognition to allow the employees to enter the building.
You'd like to build a **Face verification** system that gives access to the list of people who live or work there. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the entrance. The face recognition system then checks that they are who they claim to be.
### 3.1 - Face Verification
Let's build a database containing one encoding vector for each person who is allowed to enter the office. To generate the encoding we use `img_to_encoding(image_path, model)`, which runs the forward propagation of the model on the specified image.
Run the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face.
```
database = {}
database["danielle"] = img_to_encoding("images/danielle.png", FRmodel)
database["younes"] = img_to_encoding("images/younes.jpg", FRmodel)
database["tian"] = img_to_encoding("images/tian.jpg", FRmodel)
database["andrew"] = img_to_encoding("images/andrew.jpg", FRmodel)
database["kian"] = img_to_encoding("images/kian.jpg", FRmodel)
database["dan"] = img_to_encoding("images/dan.jpg", FRmodel)
database["sebastiano"] = img_to_encoding("images/sebastiano.jpg", FRmodel)
database["bertrand"] = img_to_encoding("images/bertrand.jpg", FRmodel)
database["kevin"] = img_to_encoding("images/kevin.jpg", FRmodel)
database["felix"] = img_to_encoding("images/felix.jpg", FRmodel)
database["benoit"] = img_to_encoding("images/benoit.jpg", FRmodel)
database["arnaud"] = img_to_encoding("images/arnaud.jpg", FRmodel)
```
Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.
**Exercise**: Implement the verify() function which checks if the front-door camera picture (`image_path`) is actually the person called "identity". You will have to go through the following steps:
1. Compute the encoding of the image from `image_path`.
2. Compute the distance between this encoding and the encoding of the identity image stored in the database.
3. Open the door if the distance is less than 0.7, else do not open it.
* As presented above, you should use the L2 distance [np.linalg.norm](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html).
* (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.)
#### Hints
* `identity` is a string that is also a key in the `database` dictionary.
* `img_to_encoding` has two parameters: the `image_path` and `model`.
```
# GRADED FUNCTION: verify
def verify(image_path, identity, database, model):
"""
Function that verifies if the person on the "image_path" image is "identity".
Arguments:
image_path -- path to an image
identity -- string, name of the person you'd like to verify the identity. Has to be an employee who works in the office.
database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors).
model -- your Inception model instance in Keras
Returns:
dist -- distance between the image_path and the image of "identity" in the database.
door_open -- True, if the door should open. False otherwise.
"""
### START CODE HERE ###
# Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. (≈ 1 line)
encoding = img_to_encoding(image_path, model)
# Step 2: Compute distance with identity's image (≈ 1 line)
dist = np.linalg.norm(encoding - database[identity])
# Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)
if dist < 0.7:
print("It's " + str(identity) + ", welcome in!")
door_open = True
else:
print("It's not " + str(identity) + ", please go away")
door_open = False
### END CODE HERE ###
return dist, door_open
```
Younes is trying to enter the office and the camera takes a picture of him ("images/camera_0.jpg"). Let's run your verification algorithm on this picture:
<img src="images/camera_0.jpg" style="width:100px;height:100px;">
```
verify("images/camera_0.jpg", "younes", database, FRmodel)
```
**Expected Output**:
<table>
<tr>
<td>
**It's younes, welcome in!**
</td>
<td>
(0.65939283, True)
</td>
</tr>
</table>
Benoit, who does not work in the office, stole Kian's ID card and tried to enter the office. The camera took a picture of Benoit ("images/camera_2.jpg"). Let's run the verification algorithm to check if Benoit can enter.
<img src="images/camera_2.jpg" style="width:100px;height:100px;">
```
verify("images/camera_2.jpg", "kian", database, FRmodel)
```
**Expected Output**:
<table>
<tr>
<td>
**It's not kian, please go away**
</td>
<td>
(0.86224014, False)
</td>
</tr>
</table>
### 3.2 - Face Recognition
Your face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the office the next day he couldn't get in!
To solve this, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the building, and the door will unlock for them!
You'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as one of the inputs.
**Exercise**: Implement `who_is_it()`. You will have to go through the following steps:
1. Compute the target encoding of the image from image_path
2. Find the encoding from the database that has smallest distance with the target encoding.
- Initialize the `min_dist` variable to a large enough number (100). It will help you keep track of the closest encoding to the input's encoding.
- Loop over the database dictionary's names and encodings. To loop use `for (name, db_enc) in database.items()`.
- Compute the L2 distance between the target "encoding" and the current "encoding" from the database.
- If this distance is less than the min_dist, then set `min_dist` to `dist`, and `identity` to `name`.
```
# GRADED FUNCTION: who_is_it
def who_is_it(image_path, database, model):
"""
Implements face recognition for the office by finding who is the person on the image_path image.
Arguments:
image_path -- path to an image
database -- database containing image encodings along with the name of the person on the image
model -- your Inception model instance in Keras
Returns:
min_dist -- the minimum distance between image_path encoding and the encodings from the database
identity -- string, the name prediction for the person on image_path
"""
### START CODE HERE ###
## Step 1: Compute the target "encoding" for the image. Use img_to_encoding() see example above. ## (≈ 1 line)
encoding = img_to_encoding(image_path, model)
## Step 2: Find the closest encoding ##
# Initialize "min_dist" to a large value, say 100 (≈1 line)
min_dist = 100
# Loop over the database dictionary's names and encodings.
for (name, db_enc) in database.items():
# Compute L2 distance between the target "encoding" and the current db_enc from the database. (≈ 1 line)
dist = np.linalg.norm(encoding - db_enc)
# If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)
if dist < min_dist:
min_dist = dist
identity = name
### END CODE HERE ###
if min_dist > 0.7:
print("Not in the database.")
else:
print ("it's " + str(identity) + ", the distance is " + str(min_dist))
return min_dist, identity
```
Younes is at the front door and the camera takes a picture of him ("images/camera_0.jpg"). Let's see if your `who_is_it()` algorithm identifies Younes.
```
who_is_it("images/camera_0.jpg", database, FRmodel)
```
**Expected Output**:
<table>
<tr>
<td>
**it's younes, the distance is 0.659393**
</td>
<td>
(0.65939283, 'younes')
</td>
</tr>
</table>
You can change "`camera_0.jpg`" (picture of younes) to "`camera_1.jpg`" (picture of bertrand) and see the result.
#### Congratulations!
* Your face recognition system is working well! It only lets in authorized persons, and people don't need to carry an ID card around anymore!
* You've now seen how a state-of-the-art face recognition system works.
#### Ways to improve your facial recognition model
Although we won't implement them here, here are some ways to further improve the algorithm (a rough sketch of the first idea follows the list):
- Put more images of each person (under different lighting conditions, taken on different days, etc.) into the database. Then given a new image, compare the new face to multiple pictures of the person. This would increase accuracy.
- Crop the images to just contain the face, and less of the "border" region around the face. This preprocessing removes some of the irrelevant pixels around the face, and also makes the algorithm more robust.
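As a rough sketch of the first idea (an illustration under assumptions, not graded code): suppose a hypothetical `multi_database` dictionary maps each name to a *list* of encodings, one per stored photo; verification can then compare against the closest stored encoding, reusing the same `img_to_encoding` helper.
```
def verify_multi(image_path, identity, multi_database, model, threshold=0.7):
    # multi_database: {name: [encoding_1, encoding_2, ...]}, one encoding per stored photo
    encoding = img_to_encoding(image_path, model)
    # Distance to the closest stored encoding for this identity
    dist = min(np.linalg.norm(encoding - db_enc) for db_enc in multi_database[identity])
    return dist, dist < threshold
```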
## Key points to remember
- Face verification solves an easier 1:1 matching problem; face recognition addresses a harder 1:K matching problem.
- The triplet loss is an effective loss function for training a neural network to learn an encoding of a face image.
- The same encoding can be used for verification and recognition. Measuring distances between two images' encodings allows you to determine whether they are pictures of the same person.
Congrats on finishing this assignment!
### References:
- Florian Schroff, Dmitry Kalenichenko, James Philbin (2015). [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/pdf/1503.03832.pdf)
- Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, Lior Wolf (2014). [DeepFace: Closing the gap to human-level performance in face verification](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf)
- The pretrained model we use is inspired by Victor Sy Wang's implementation and was loaded using his code: https://github.com/iwantooxxoox/Keras-OpenFace.
- Our implementation also took a lot of inspiration from the official FaceNet github repository: https://github.com/davidsandberg/facenet
<a href="https://colab.research.google.com/github/DeepInsider/playground-data/blob/master/docs/articles/deeplearningdat.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
##### Copyright 2019 Digital Advantage - Deep Insider.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Notebook for the series "Introduction to Machine Learning & Deep Learning (Data Structures Edition)"
<table valign="middle">
<td>
<a target="_blank" href="https://deepinsider.jp/tutor/deeplearningdat"> <img src="https://re.deepinsider.jp/img/ml-logo/manabu.svg"/>Deep Insiderで記事を読む</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/DeepInsider/playground-data/blob/master/docs/articles/deeplearningdat.ipynb"> <img src="https://re.deepinsider.jp/img/ml-logo/gcolab.svg" />Google Colabで実行する</a>
</td>
<td>
<a target="_blank" href="https://github.com/DeepInsider/playground-data/blob/master/docs/articles/deeplearningdat.ipynb"> <img src="https://re.deepinsider.jp/img/ml-logo/github.svg" />GitHubでソースコードを見る</a>
</td>
</table>
Note: Run the cells in order from the top. Some cells reuse results from earlier ones, so the code will raise errors unless everything is run.
To run all of the code at once, choose [Runtime] → [Run all] from the menu bar.
Note: This notebook can also run on Python 2, but using Python 3 is recommended.
To use Python 3, choose [Runtime] → [Change runtime type] from the menu bar, set [Runtime type] to "Python 3" in the [Notebook settings] dialog that appears, and click the [Save] button at the lower right.
```
# Compatibility with Python 2
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import sys
print(sys.version_info.major) # 3 # major version
print(sys.version_info.minor) # 6 # minor version
```
## Data structures in the Python language
### Representing a "single" piece of data in Python
#### Listing 1-1: Code that represents a "single" value
```
height = 177.2
print(height) # prints 177.2
```
#### Listing 1-2: Writing just the variable name to output the evaluated object
```
height # evaluates to 177.2
```
#### Listing 1-3: The difference between evaluating an object and printing it with print()
```
import numpy as np
array2d = np.array([ [ 165.5, 58.4 ],
[ 177.2, 67.8 ],
[ 183.2, 83.7 ] ])
print(array2d) # [[165.5 58.4]
# [177.2 67.8]
# [183.2 83.7]]
array2d # array([[165.5, 58.4],
# [177.2, 67.8],
# [183.2, 83.7]])
```
#### Listing 2-1: Representing several values with separate "single" variables
```
hana_height = 165.5
taro_height = 177.2
jiro_height = 183.2
hana_height, taro_height, jiro_height # (165.5, 177.2, 183.2)
```
#### Listing 2-2: Code that represents "multiple (1-dimensional)" data
```
heights = [ 165.5, 177.2, 183.2 ]
heights # [165.5, 177.2, 183.2]
```
### Representing "multiple (2-dimensional)" data in Python
#### Listing 3: Code that represents "multiple (2-dimensional)" data
```
people = [ [ 165.5, 58.4 ],
[ 177.2, 67.8 ],
[ 183.2, 83.7 ] ]
people # [[165.5, 58.4], [177.2, 67.8], [183.2, 83.7]]
```
### Representing "multiple (multi-dimensional)" data in Python
#### Listing 4: Code that represents "multiple (3-dimensional)" data
```
list3d = [
[ [ 165.5, 58.4 ], [ 177.2, 67.8 ], [ 183.2, 83.7 ] ],
[ [ 155.5, 48.4 ], [ 167.2, 57.8 ], [ 173.2, 73.7 ] ],
[ [ 145.5, 38.4 ], [ 157.2, 47.8 ], [ 163.2, 63.7 ] ]
]
list3d # [[[165.5, 58.4], [177.2, 67.8], [183.2, 83.7]],
# [[155.5, 48.4], [167.2, 57.8], [173.2, 73.7]],
# [[145.5, 38.4], [157.2, 47.8], [163.2, 63.7]]]
```
## Data structures in AI programs (basics)
### Installing NumPy
#### Listing 5-1: Shell command that installs the `numpy` package
```
!pip install numpy
```
### Importing the numpy module
#### Listing 5-2: Example code that imports the `numpy` module
```
import numpy as np
```
### Creating NumPy's "multi-dimensional array" objects
#### Listing 5-3: Creating a multi-dimensional array with the `array` function (from literal values)
```
array2d = np.array([ [ 165.5, 58.4 ],
[ 177.2, 67.8 ],
[ 183.2, 83.7 ] ])
array2d # array([[165.5, 58.4],
# [177.2, 67.8],
# [183.2, 83.7]])
```
#### Listing 5-4: Creating a multi-dimensional array with the `array` function (from a variable)
```
array3d = np.array(list3d)
array3d # array([[[165.5, 58.4],
# [177.2, 67.8],
# [183.2, 83.7]],
#
# [[155.5, 48.4],
# [167.2, 57.8],
# [173.2, 73.7]],
#
# [[145.5, 38.4],
# [157.2, 47.8],
# [163.2, 63.7]]])
```
#### Listing 5-5: Converting back to a nested list with the `ndarray` class's `tolist()` method
```
tolist3d = array3d.tolist()
tolist3d # [[[165.5, 58.4], [177.2, 67.8], [183.2, 83.7]],
# [[155.5, 48.4], [167.2, 57.8], [173.2, 73.7]],
# [[145.5, 38.4], [157.2, 47.8], [163.2, 63.7]]]
```
## Data structures in AI programs (advanced)
### Installing Pandas
#### Listing 6: Shell command that installs the `pandas` package
```
!pip install pandas
```
#### Figure 7-1: Displaying NumPy data as a table with Pandas
```
import pandas as pd
df = pd.DataFrame(array2d, columns=['身長', '体重'])  # the column labels mean 'height' and 'weight'
df
```
## Computing with data in AI programs
### Why AI and deep learning use mathematics
#### Listing 7-1: Computing the average height of three people (using individual values)
```
# hana_height, taro_height, jiro_height = 165.5, 177.2, 183.2 # already declared in Listing 2-1 of Lesson 1
average_height = (
hana_height +
taro_height +
jiro_height
) / 3
print(average_height) # 175.29999999999998
```
#### Listing 7-2: Computing the average height of the three people (using a NumPy array)
```
import numpy as np
array1d = np.array([ 165.5, 177.2, 183.2 ])
average_height = np.average(array1d)
average_height # 175.29999999999998
```
### Calculations with NumPy
#### Listing 8-1: Displaying various properties of a 3x2 matrix
```
array2d = np.array([ [ 165.5, 58.4 ],
[ 177.2, 67.8 ],
[ 183.2, 83.7 ] ])
print(array2d.shape) # (3, 2)
print(array2d.ndim) # 2
print(array2d.size) # 6
```
#### Listing 8-2: Matrix computation with NumPy
```
diet = np.array([ [ 1.0, 0.0 ],
[ 0.0, 0.9 ] ])
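# The "diet" matrix keeps height unchanged (factor 1.0) and scales weight down by 10% (factor 0.9).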
lose_weights = diet @ array2d.T
# For Python 3.5 and later. On earlier versions such as Python 2, use the matmul function below instead
#lose_weights = np.matmul(diet, array2d.T)
print(lose_weights.T) # [[165.5 52.56]
# [177.2 61.02]
# [183.2 75.33]]
```
#### Listing 8-3: Computing the mean over all elements (not split by height/weight)
```
averages = np.average(array2d)
averages # 122.63333333333334
```
#### Listing 8-4: Computing separate means for height and weight
```
averages = np.average(array2d, axis=0)
averages # array([175.3 , 69.96666667])
```
#### Listing 8-5: Computing per-group height/weight means from a 3-dimensional array
```
array3d = np.array(
[ [ [ 165.5, 58.4 ], [ 177.2, 67.8 ], [ 183.2, 83.7 ] ],
[ [ 155.5, 48.4 ], [ 167.2, 57.8 ], [ 173.2, 73.7 ] ],
[ [ 145.5, 38.4 ], [ 157.2, 47.8 ], [ 163.2, 63.7 ] ] ]
)
avr3d = np.average(array3d, axis=1)
print(avr3d) # [[175.3 69.96666667]
# [165.3 59.96666667]
# [155.3 49.96666667]]
```
## Well done! You have finished the lesson on data structures.
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Time series forecasting
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/structured_data/time_series"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/structured_data/time_series.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/structured_data/time_series.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/structured_data/time_series.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial is an introduction to time series forecasting using TensorFlow. It builds a few different styles of models including Convolutional and Recurrent Neural Networks (CNNs and RNNs).
This is covered in two main parts, with subsections:
* Forecast for a single timestep:
* A single feature.
* All features.
* Forecast multiple steps:
* Single-shot: Make the predictions all at once.
* Autoregressive: Make one prediction at a time and feed the output back to the model.
## Setup
```
import os
import datetime
import IPython
import IPython.display
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
mpl.rcParams['figure.figsize'] = (8, 6)
mpl.rcParams['axes.grid'] = False
```
## The weather dataset
This tutorial uses a <a href="https://www.bgc-jena.mpg.de/wetter/" class="external">weather time series dataset</a> recorded by the <a href="https://www.bgc-jena.mpg.de" class="external">Max Planck Institute for Biogeochemistry</a>.
This dataset contains 14 different features such as air temperature, atmospheric pressure, and humidity. These were collected every 10 minutes, beginning in 2003. For efficiency, you will use only the data collected between 2009 and 2016. This section of the dataset was prepared by François Chollet for his book [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python).
```
zip_path = tf.keras.utils.get_file(
origin='https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip',
fname='jena_climate_2009_2016.csv.zip',
extract=True)
csv_path, _ = os.path.splitext(zip_path)
```
This tutorial will just deal with **hourly predictions**, so start by sub-sampling the data from 10 minute intervals to 1h:
```
df = pd.read_csv(csv_path)
# slice [start:stop:step], starting from index 5 take every 6th record.
df = df[5::6]
date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')
```
Let's take a glance at the data. Here are the first few rows:
```
df.head()
```
Here is the evolution of a few features over time.
```
plot_cols = ['T (degC)', 'p (mbar)', 'rho (g/m**3)']
plot_features = df[plot_cols]
plot_features.index = date_time
_ = plot_features.plot(subplots=True)
plot_features = df[plot_cols][:480]
plot_features.index = date_time[:480]
_ = plot_features.plot(subplots=True)
```
### Inspect and cleanup
Next look at the statistics of the dataset:
```
df.describe().transpose()
```
#### Wind velocity
One thing that should stand out is the `min` value of the wind velocity, `wv (m/s)` and `max. wv (m/s)` columns. This `-9999` is likely erroneous. There's a separate wind direction column, so the velocity should be `>=0`. Replace it with zeros:
```
wv = df['wv (m/s)']
bad_wv = wv == -9999.0
wv[bad_wv] = 0.0
max_wv = df['max. wv (m/s)']
bad_max_wv = max_wv == -9999.0
max_wv[bad_max_wv] = 0.0
# The above inplace edits are reflected in the DataFrame
df['wv (m/s)'].min()
```
### Feature engineering
Before diving in to build a model it's important to understand your data, and be sure that you're passing the model appropriately formatted data.
#### Wind
The last column of the data, `wd (deg)`, gives the wind direction in units of degrees. Angles do not make good model inputs: 360° and 0° should be close to each other and wrap around smoothly. Direction shouldn't matter if the wind is not blowing.
Right now the distribution of wind data looks like this:
```
plt.hist2d(df['wd (deg)'], df['wv (m/s)'], bins=(50, 50), vmax=400)
plt.colorbar()
plt.xlabel('Wind Direction [deg]')
plt.ylabel('Wind Velocity [m/s]')
```
But this will be easier for the model to interpret if you convert the wind direction and velocity columns to a wind **vector**:
```
wv = df.pop('wv (m/s)')
max_wv = df.pop('max. wv (m/s)')
# Convert to radians.
wd_rad = df.pop('wd (deg)')*np.pi / 180
# Calculate the wind x and y components.
df['Wx'] = wv*np.cos(wd_rad)
df['Wy'] = wv*np.sin(wd_rad)
# Calculate the max wind x and y components.
df['max Wx'] = max_wv*np.cos(wd_rad)
df['max Wy'] = max_wv*np.sin(wd_rad)
```
The distribution of wind vectors is much simpler for the model to correctly interpret.
```
plt.hist2d(df['Wx'], df['Wy'], bins=(50, 50), vmax=400)
plt.colorbar()
plt.xlabel('Wind X [m/s]')
plt.ylabel('Wind Y [m/s]')
ax = plt.gca()
ax.axis('tight')
```
#### Time
Similarly the `Date Time` column is very useful, but not in this string form. Start by converting it to seconds:
```
timestamp_s = date_time.map(datetime.datetime.timestamp)
```
Similar to the wind direction, the time in seconds is not a useful model input. Being weather data, it has clear daily and yearly periodicity. There are many ways you could deal with periodicity.
A simple approach to convert it to a usable signal is to use `sin` and `cos` to convert the time to clear "Time of day" and "Time of year" signals:
```
day = 24*60*60
year = (365.2425)*day
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
plt.plot(np.array(df['Day sin'])[:25])
plt.plot(np.array(df['Day cos'])[:25])
plt.xlabel('Time [h]')
plt.title('Time of day signal')
```
This gives the model access to the most important frequency features. In this case you knew ahead of time which frequencies were important.
If you didn't know, you can determine which frequencies are important using an `fft`. To check our assumptions, here is the `tf.signal.rfft` of the temperature over time. Note the obvious peaks at frequencies near `1/year` and `1/day`:
```
fft = tf.signal.rfft(df['T (degC)'])
f_per_dataset = np.arange(0, len(fft))
n_samples_h = len(df['T (degC)'])
hours_per_year = 24*365.2524
years_per_dataset = n_samples_h/(hours_per_year)
f_per_year = f_per_dataset/years_per_dataset
plt.step(f_per_year, np.abs(fft))
plt.xscale('log')
plt.ylim(0, 400000)
plt.xlim([0.1, max(plt.xlim())])
plt.xticks([1, 365.2524], labels=['1/Year', '1/day'])
_ = plt.xlabel('Frequency (log scale)')
```
### Split the data
We'll use a `(70%, 20%, 10%)` split for the training, validation, and test sets. Note the data is **not** being randomly shuffled before splitting. This is for two reasons.
1. It ensures that chopping the data into windows of consecutive samples is still possible.
2. It ensures that the validation/test results are more realistic, being evaluated on data collected after the model was trained.
```
column_indices = {name: i for i, name in enumerate(df.columns)}
n = len(df)
train_df = df[0:int(n*0.7)]
val_df = df[int(n*0.7):int(n*0.9)]
test_df = df[int(n*0.9):]
num_features = df.shape[1]
```
### Normalize the data
It is important to scale features before training a neural network. Normalization is a common way of doing this scaling. Subtract the mean and divide by the standard deviation of each feature.
The mean and standard deviation should only be computed using the training data so that the models have no access to the values in the validation and test sets.
It's also arguable that the model shouldn't have access to future values in the training set when training, and that this normalization should be done using moving averages. That's not the focus of this tutorial, and the validation and test sets ensure that you get (somewhat) honest metrics. So in the interest of simplicity this tutorial uses a simple average.
```
train_mean = train_df.mean()
train_std = train_df.std()
train_df = (train_df - train_mean) / train_std
val_df = (val_df - train_mean) / train_std
test_df = (test_df - train_mean) / train_std
```
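As an aside (not part of this tutorial's pipeline), the moving-average alternative mentioned above could look roughly like the following sketch, which scales each point using only a trailing window of past samples; `ROLL` and the derived frame names are arbitrary choices for illustration:
```
# Sketch only: causal normalization with trailing rolling statistics,
# so no future values leak into the scaling.
ROLL = 24 * 30  # roughly 30 days of hourly samples (arbitrary)
rolling_mean = df.rolling(ROLL, min_periods=1).mean()
rolling_std = df.rolling(ROLL, min_periods=1).std().fillna(1.0)
df_rolling_norm = (df - rolling_mean) / rolling_std
```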
Now peek at the distribution of the features. Some features do have long tails, but there are no obvious errors like the `-9999` wind velocity value.
```
df_std = (df - train_mean) / train_std
df_std = df_std.melt(var_name='Column', value_name='Normalized')
plt.figure(figsize=(12, 6))
ax = sns.violinplot(x='Column', y='Normalized', data=df_std)
_ = ax.set_xticklabels(df.keys(), rotation=90)
```
## Data windowing
The models in this tutorial will make a set of predictions based on a window of consecutive samples from the data.
The main features of the input windows are:
* The width (number of time steps) of the input and label windows
* The time offset between them.
* Which features are used as inputs, labels, or both.
This tutorial builds a variety of models (including Linear, DNN, CNN and RNN models), and uses them for both:
* *Single-output*, and *multi-output* predictions.
* *Single-time-step* and *multi-time-step* predictions.
This section focuses on implementing the data windowing so that it can be reused for all of those models.
Depending on the task and type of model you may want to generate a variety of data windows. Here are some examples:
1. For example, to make a single prediction 24h into the future, given 24h of history you might define a window like this:

2. A model that makes a prediction 1h into the future, given 6h of history would need a window like this:

The rest of this section defines a `WindowGenerator` class. This class can:
1. Handle the indexes and offsets as shown in the diagrams above.
2. Split windows of features into `(features, labels)` pairs.
3. Plot the content of the resulting windows.
4. Efficiently generate batches of these windows from the training, evaluation, and test data, using `tf.data.Dataset`s.
### 1. Indexes and offsets
Start by creating the `WindowGenerator` class. The `__init__` method includes all the necessary logic for the input and label indices.
It also takes the train, eval, and test dataframes as input. These will be converted to `tf.data.Dataset`s of windows later.
```
class WindowGenerator():
def __init__(self, input_width, label_width, shift,
train_df=train_df, val_df=val_df, test_df=test_df,
label_columns=None):
# Store the raw data.
self.train_df = train_df
self.val_df = val_df
self.test_df = test_df
# Work out the label column indices.
self.label_columns = label_columns
if label_columns is not None:
self.label_columns_indices = {name: i for i, name in
enumerate(label_columns)}
self.column_indices = {name: i for i, name in
enumerate(train_df.columns)}
# Work out the window parameters.
self.input_width = input_width
self.label_width = label_width
self.shift = shift
self.total_window_size = input_width + shift
self.input_slice = slice(0, input_width)
self.input_indices = np.arange(self.total_window_size)[self.input_slice]
self.label_start = self.total_window_size - self.label_width
self.labels_slice = slice(self.label_start, None)
self.label_indices = np.arange(self.total_window_size)[self.labels_slice]
def __repr__(self):
return '\n'.join([
f'Total window size: {self.total_window_size}',
f'Input indices: {self.input_indices}',
f'Label indices: {self.label_indices}',
f'Label column name(s): {self.label_columns}'])
```
Here is code to create the 2 windows shown in the diagrams at the start of this section:
```
w1 = WindowGenerator(input_width=24, label_width=1, shift=24,
label_columns=['T (degC)'])
w1
w2 = WindowGenerator(input_width=6, label_width=1, shift=1,
label_columns=['T (degC)'])
w2
```
### 2. Split
Given a list of consecutive inputs, the `split_window` method will convert them to a window of inputs and a window of labels.
The example `w2`, above, will be split like this:

This diagram doesn't show the `features` axis of the data, but this `split_window` function also handles the `label_columns` so it can be used for both the single output and multi-output examples.
```
def split_window(self, features):
inputs = features[:, self.input_slice, :]
labels = features[:, self.labels_slice, :]
if self.label_columns is not None:
labels = tf.stack(
[labels[:, :, self.column_indices[name]] for name in self.label_columns],
axis=-1)
# Slicing doesn't preserve static shape information, so set the shapes
# manually. This way the `tf.data.Datasets` are easier to inspect.
inputs.set_shape([None, self.input_width, None])
labels.set_shape([None, self.label_width, None])
return inputs, labels
WindowGenerator.split_window = split_window
```
Try it out:
```
# Stack three slices, the length of the total window:
example_window = tf.stack([np.array(train_df[:w2.total_window_size]),
np.array(train_df[100:100+w2.total_window_size]),
np.array(train_df[200:200+w2.total_window_size])])
example_inputs, example_labels = w2.split_window(example_window)
print('All shapes are: (batch, time, features)')
print(f'Window shape: {example_window.shape}')
print(f'Inputs shape: {example_inputs.shape}')
print(f'labels shape: {example_labels.shape}')
```
Typically data in TensorFlow is packed into arrays where the outermost index is across examples (the "batch" dimension). The middle indices are the "time" or "space" (width, height) dimension(s). The innermost indices are the features.
The code above took a batch of 3, 7-timestep windows, with 19 features at each time step. It split them into a batch of 6-timestep, 19 feature inputs, and a 1-timestep 1-feature label. The label only has one feature because the `WindowGenerator` was initialized with `label_columns=['T (degC)']`. Initially this tutorial will build models that predict single output labels.
### 3. Plot
Here is a plot method that allows a simple visualization of the split window:
```
w2.example = example_inputs, example_labels
def plot(self, model=None, plot_col='T (degC)', max_subplots=3):
inputs, labels = self.example
plt.figure(figsize=(12, 8))
plot_col_index = self.column_indices[plot_col]
max_n = min(max_subplots, len(inputs))
for n in range(max_n):
plt.subplot(3, 1, n+1)
plt.ylabel(f'{plot_col} [normed]')
plt.plot(self.input_indices, inputs[n, :, plot_col_index],
label='Inputs', marker='.', zorder=-10)
if self.label_columns:
label_col_index = self.label_columns_indices.get(plot_col, None)
else:
label_col_index = plot_col_index
if label_col_index is None:
continue
plt.scatter(self.label_indices, labels[n, :, label_col_index],
edgecolors='k', label='Labels', c='#2ca02c', s=64)
if model is not None:
predictions = model(inputs)
plt.scatter(self.label_indices, predictions[n, :, label_col_index],
marker='X', edgecolors='k', label='Predictions',
c='#ff7f0e', s=64)
if n == 0:
plt.legend()
plt.xlabel('Time [h]')
WindowGenerator.plot = plot
```
This plot aligns inputs, labels, and (later) predictions based on the time that the item refers to:
```
w2.plot()
```
You can plot the other columns, but the example window `w2` configuration only has labels for the `T (degC)` column.
```
w2.plot(plot_col='p (mbar)')
```
### 4. Create `tf.data.Dataset`s
Finally this `make_dataset` method will take a time series `DataFrame` and convert it to a `tf.data.Dataset` of `(input_window, label_window)` pairs using the `preprocessing.timeseries_dataset_from_array` function.
```
def make_dataset(self, data):
data = np.array(data, dtype=np.float32)
ds = tf.keras.preprocessing.timeseries_dataset_from_array(
data=data,
targets=None,
sequence_length=self.total_window_size,
sequence_stride=1,
shuffle=True,
batch_size=32,)
ds = ds.map(self.split_window)
return ds
WindowGenerator.make_dataset = make_dataset
```
The `WindowGenerator` object holds training, validation and test data. Add properties for accessing them as `tf.data.Datasets` using the above `make_dataset` method. Also add a standard example batch for easy access and plotting:
```
@property
def train(self):
return self.make_dataset(self.train_df)
@property
def val(self):
return self.make_dataset(self.val_df)
@property
def test(self):
return self.make_dataset(self.test_df)
@property
def example(self):
"""Get and cache an example batch of `inputs, labels` for plotting."""
result = getattr(self, '_example', None)
if result is None:
# No example batch was found, so get one from the `.train` dataset
result = next(iter(self.train))
# And cache it for next time
self._example = result
return result
WindowGenerator.train = train
WindowGenerator.val = val
WindowGenerator.test = test
WindowGenerator.example = example
```
Now the `WindowGenerator` object gives you access to the `tf.data.Dataset` objects, so you can easily iterate over the data.
The `Dataset.element_spec` property tells you the structure, `dtypes` and shapes of the dataset elements.
```
# Each element is an (inputs, label) pair
w2.train.element_spec
```
Iterating over a `Dataset` yields concrete batches:
```
for example_inputs, example_labels in w2.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')
```
## Single step models
The simplest model you can build on this sort of data is one that predicts a single feature's value, 1 timestep (1h) in the future based only on the current conditions.
So start by building models to predict the `T (degC)` value 1h into the future.

Configure a `WindowGenerator` object to produce these single-step `(input, label)` pairs:
```
single_step_window = WindowGenerator(
input_width=1, label_width=1, shift=1,
label_columns=['T (degC)'])
single_step_window
```
The `window` object creates `tf.data.Datasets` from the training, validation, and test sets, allowing you to easily iterate over batches of data.
```
for example_inputs, example_labels in single_step_window.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')
```
### Baseline
Before building a trainable model it would be good to have a performance baseline as a point for comparison with the later more complicated models.
This first task is to predict temperature 1h in the future given the current value of all features. The current values include the current temperature.
So start with a model that just returns the current temperature as the prediction, predicting "No change". This is a reasonable baseline since temperature changes slowly. Of course, this baseline will work less well if you make a prediction further in the future.

```
class Baseline(tf.keras.Model):
def __init__(self, label_index=None):
super().__init__()
self.label_index = label_index
def call(self, inputs):
if self.label_index is None:
return inputs
result = inputs[:, :, self.label_index]
return result[:, :, tf.newaxis]
```
Instantiate and evaluate this model:
```
baseline = Baseline(label_index=column_indices['T (degC)'])
baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
val_performance = {}
performance = {}
val_performance['Baseline'] = baseline.evaluate(single_step_window.val)
performance['Baseline'] = baseline.evaluate(single_step_window.test, verbose=0)
```
That printed some performance metrics, but those don't give you a feeling for how well the model is doing.
The `WindowGenerator` has a plot method, but the plots won't be very interesting with only a single sample. So, create a wider `WindowGenerator` that generates windows of 24h of consecutive inputs and labels at a time.
The `wide_window` doesn't change the way the model operates. The model still makes predictions 1h into the future based on a single input time step. Here the `time` axis acts like the `batch` axis: Each prediction is made independently with no interaction between time steps.
```
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1,
label_columns=['T (degC)'])
wide_window
```
This expanded window can be passed directly to the same `baseline` model without any code changes. This is possible because the inputs and labels have the same number of timesteps, and the baseline just forwards the input to the output:

```
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', baseline(wide_window.example[0]).shape)
```
Plotting the baseline model's predictions you can see that it is simply the labels, shifted right by 1h.
```
wide_window.plot(baseline)
```
In the above plots of three examples the single step model is run over the course of 24h. This deserves some explanation:
* The blue "Inputs" line shows the input temperature at each time step. The model receives all features; this plot only shows the temperature.
* The green "Labels" dots show the target prediction value. These dots are shown at the prediction time, not the input time. That is why the range of labels is shifted 1 step relative to the inputs.
* The orange "Predictions" crosses are the model's predictions for each output time step. If the model were predicting perfectly, the predictions would land directly on the "labels".
### Linear model
The simplest **trainable** model you can apply to this task is to insert a linear transformation between the input and output. In this case the output from a time step only depends on that step:

A `layers.Dense` with no `activation` set is a linear model. The layer only transforms the last axis of the data from `(batch, time, inputs)` to `(batch, time, units)`; it is applied independently to every item across the `batch` and `time` axes.
```
linear = tf.keras.Sequential([
tf.keras.layers.Dense(units=1)
])
print('Input shape:', single_step_window.example[0].shape)
print('Output shape:', linear(single_step_window.example[0]).shape)
```
This tutorial trains many models, so package the training procedure into a function:
```
MAX_EPOCHS = 20
def compile_and_fit(model, window, patience=2):
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
patience=patience,
mode='min')
model.compile(loss=tf.losses.MeanSquaredError(),
optimizer=tf.optimizers.Adam(),
metrics=[tf.metrics.MeanAbsoluteError()])
history = model.fit(window.train, epochs=MAX_EPOCHS,
validation_data=window.val,
callbacks=[early_stopping])
return history
```
Train the model and evaluate its performance:
```
history = compile_and_fit(linear, single_step_window)
val_performance['Linear'] = linear.evaluate(single_step_window.val)
performance['Linear'] = linear.evaluate(single_step_window.test, verbose=0)
```
Like the `baseline` model, the linear model can be called on batches of wide windows. Used this way the model makes a set of independent predictions on consecutive time steps. The `time` axis acts like another `batch` axis. There are no interactions between the predictions at each time step.

```
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', linear(wide_window.example[0]).shape)
```
Here is the plot of its example predictions on the `wide_window`; note how in many cases the prediction is clearly better than just returning the input temperature, but in a few cases it's worse:
```
wide_window.plot(linear)
```
One advantage to linear models is that they're relatively simple to interpret.
You can pull out the layer's weights, and see the weight assigned to each input:
```
plt.bar(x = range(len(train_df.columns)),
height=linear.layers[0].kernel[:,0].numpy())
axis = plt.gca()
axis.set_xticks(range(len(train_df.columns)))
_ = axis.set_xticklabels(train_df.columns, rotation=90)
```
Sometimes the model doesn't even place the most weight on the input `T (degC)`. This is one of the risks of random initialization.
### Dense
Before applying models that actually operate on multiple time-steps, it's worth checking the performance of deeper, more powerful, single input step models.
Here's a model similar to the `linear` model, except it stacks a few `Dense` layers between the input and the output:
```
dense = tf.keras.Sequential([
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=1)
])
history = compile_and_fit(dense, single_step_window)
val_performance['Dense'] = dense.evaluate(single_step_window.val)
performance['Dense'] = dense.evaluate(single_step_window.test, verbose=0)
```
### Multi-step dense
A single-time-step model has no context for the current values of its inputs. It can't see how the input features are changing over time. To address this issue the model needs access to multiple time steps when making predictions:

The `baseline`, `linear` and `dense` models handled each time step independently. Here the model will take multiple time steps as input to produce a single output.
Create a `WindowGenerator` that will produce batches of 3h of inputs and 1h of labels:
Note that the `Window`'s `shift` parameter is relative to the end of the two windows.
```
CONV_WIDTH = 3
conv_window = WindowGenerator(
input_width=CONV_WIDTH,
label_width=1,
shift=1,
label_columns=['T (degC)'])
conv_window
conv_window.plot()
plt.title("Given 3h as input, predict 1h into the future.")
```
You could train a `dense` model on a multiple-input-step window by adding a `layers.Flatten` as the first layer of the model:
```
multi_step_dense = tf.keras.Sequential([
# Shape: (time, features) => (time*features)
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=1),
# Add back the time dimension.
# Shape: (outputs) => (1, outputs)
tf.keras.layers.Reshape([1, -1]),
])
print('Input shape:', conv_window.example[0].shape)
print('Output shape:', multi_step_dense(conv_window.example[0]).shape)
history = compile_and_fit(multi_step_dense, conv_window)
IPython.display.clear_output()
val_performance['Multi step dense'] = multi_step_dense.evaluate(conv_window.val)
performance['Multi step dense'] = multi_step_dense.evaluate(conv_window.test, verbose=0)
conv_window.plot(multi_step_dense)
```
The main down-side of this approach is that the resulting model can only be executed on input windows of exactly this shape.
```
print('Input shape:', wide_window.example[0].shape)
try:
print('Output shape:', multi_step_dense(wide_window.example[0]).shape)
except Exception as e:
print(f'\n{type(e).__name__}:{e}')
```
The convolutional models in the next section fix this problem.
### Convolution neural network
A convolution layer (`layers.Conv1D`) also takes multiple time steps as input to each prediction.
Below is the **same** model as `multi_step_dense`, re-written with a convolution.
Note the changes:
* The `layers.Flatten` and the first `layers.Dense` are replaced by a `layers.Conv1D`.
* The `layers.Reshape` is no longer necessary since the convolution keeps the time axis in its output.
```
conv_model = tf.keras.Sequential([
tf.keras.layers.Conv1D(filters=32,
kernel_size=(CONV_WIDTH,),
activation='relu'),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=1),
])
```
Run it on an example batch to see that the model produces outputs with the expected shape:
```
print("Conv model on `conv_window`")
print('Input shape:', conv_window.example[0].shape)
print('Output shape:', conv_model(conv_window.example[0]).shape)
```
Train and evaluate it on the `conv_window` and it should give performance similar to the `multi_step_dense` model.
```
history = compile_and_fit(conv_model, conv_window)
IPython.display.clear_output()
val_performance['Conv'] = conv_model.evaluate(conv_window.val)
performance['Conv'] = conv_model.evaluate(conv_window.test, verbose=0)
```
The difference between this `conv_model` and the `multi_step_dense` model is that the `conv_model` can be run on inputs of any length. The convolutional layer is applied to a sliding window of inputs:

If you run it on wider input, it produces wider output:
```
print("Wide window")
print('Input shape:', wide_window.example[0].shape)
print('Labels shape:', wide_window.example[1].shape)
print('Output shape:', conv_model(wide_window.example[0]).shape)
```
Note that the output is shorter than the input. To make training or plotting work, you need the labels and predictions to have the same length. So build a `WindowGenerator` to produce wide windows with a few extra input time steps so the label and prediction lengths match:
```
LABEL_WIDTH = 24
INPUT_WIDTH = LABEL_WIDTH + (CONV_WIDTH - 1)
wide_conv_window = WindowGenerator(
input_width=INPUT_WIDTH,
label_width=LABEL_WIDTH,
shift=1,
label_columns=['T (degC)'])
wide_conv_window
print("Wide conv window")
print('Input shape:', wide_conv_window.example[0].shape)
print('Labels shape:', wide_conv_window.example[1].shape)
print('Output shape:', conv_model(wide_conv_window.example[0]).shape)
```
Now you can plot the model's predictions on a wider window. Note the 3 input time steps before the first prediction. Every prediction here is based on the 3 preceding timesteps:
```
wide_conv_window.plot(conv_model)
```
### Recurrent neural network
A Recurrent Neural Network (RNN) is a type of neural network well-suited to time series data. RNNs process a time series step-by-step, maintaining an internal state from time-step to time-step.
For more details, read the [text generation tutorial](https://www.tensorflow.org/tutorials/text/text_generation) or the [RNN guide](https://www.tensorflow.org/guide/keras/rnn).
In this tutorial, you will use an RNN layer called Long Short Term Memory ([LSTM](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/LSTM)).
An important constructor argument for all keras RNN layers is the `return_sequences` argument. This setting can configure the layer in one of two ways.
1. If `False`, the default, the layer only returns the output of the final timestep, giving the model time to warm up its internal state before making a single prediction:

2. If `True` the layer returns an output for each input. This is useful for:
* Stacking RNN layers.
* Training a model on multiple timesteps simultaneously.

```
lstm_model = tf.keras.models.Sequential([
# Shape [batch, time, features] => [batch, time, lstm_units]
tf.keras.layers.LSTM(32, return_sequences=True),
# Shape => [batch, time, features]
tf.keras.layers.Dense(units=1)
])
```
With `return_sequences=True` the model can be trained on 24h of data at a time.
Note: This will give a pessimistic view of the model's performance. On the first timestep the model has no access to previous steps, and so can't do any better than the simple `linear` and `dense` models shown earlier.
```
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', lstm_model(wide_window.example[0]).shape)
history = compile_and_fit(lstm_model, wide_window)
IPython.display.clear_output()
val_performance['LSTM'] = lstm_model.evaluate(wide_window.val)
performance['LSTM'] = lstm_model.evaluate(wide_window.test, verbose=0)
wide_window.plot(lstm_model)
```
### Performance
With this dataset typically each of the models does slightly better than the one before it.
```
x = np.arange(len(performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in val_performance.values()]
test_mae = [v[metric_index] for v in performance.values()]
plt.ylabel('mean_absolute_error [T (degC), normalized]')
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=performance.keys(),
rotation=45)
_ = plt.legend()
for name, value in performance.items():
print(f'{name:12s}: {value[1]:0.4f}')
```
### Multi-output models
The models so far all predicted a single output feature, `T (degC)`, for a single time step.
All of these models can be converted to predict multiple features just by changing the number of units in the output layer and adjusting the training windows to include all features in the `labels`.
```
single_step_window = WindowGenerator(
# `WindowGenerator` returns all features as labels if you
# don't set the `label_columns` argument.
input_width=1, label_width=1, shift=1)
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1)
for example_inputs, example_labels in wide_window.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')
```
Note above that the `features` axis of the labels now has the same depth as the inputs, instead of 1.
#### Baseline
The same baseline model can be used here, but this time repeating all features instead of selecting a specific `label_index`.
```
baseline = Baseline()
baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
val_performance = {}
performance = {}
val_performance['Baseline'] = baseline.evaluate(wide_window.val)
performance['Baseline'] = baseline.evaluate(wide_window.test, verbose=0)
```
#### Dense
```
dense = tf.keras.Sequential([
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=num_features)
])
history = compile_and_fit(dense, single_step_window)
IPython.display.clear_output()
val_performance['Dense'] = dense.evaluate(single_step_window.val)
performance['Dense'] = dense.evaluate(single_step_window.test, verbose=0)
```
#### RNN
```
%%time
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1)
lstm_model = tf.keras.models.Sequential([
# Shape [batch, time, features] => [batch, time, lstm_units]
tf.keras.layers.LSTM(32, return_sequences=True),
# Shape => [batch, time, features]
tf.keras.layers.Dense(units=num_features)
])
history = compile_and_fit(lstm_model, wide_window)
IPython.display.clear_output()
val_performance['LSTM'] = lstm_model.evaluate( wide_window.val)
performance['LSTM'] = lstm_model.evaluate( wide_window.test, verbose=0)
print()
```
<a id="residual"></a>
#### Advanced: Residual connections
The `Baseline` model from earlier took advantage of the fact that the sequence doesn't change drastically from time step to time step. Every model trained in this tutorial so far was randomly initialized, and then had to learn that the output is a small change from the previous time step.
While you can get around this issue with careful initialization, it's simpler to build this into the model structure.
It's common in time series analysis to build models that instead of predicting the next value, predict how the value will change in the next timestep.
Similarly, "Residual networks" or "ResNets" in deep learning refer to architectures where each layer adds to the model's accumulating result.
That is how you take advantage of the knowledge that the change should be small.

Essentially this initializes the model to match the `Baseline`. For this task it helps models converge faster, with slightly better performance.
This approach can be used in conjunction with any model discussed in this tutorial.
Here it is being applied to the LSTM model; note the use of `tf.initializers.zeros` to ensure that the initial predicted changes are small and don't overpower the residual connection. There are no symmetry-breaking concerns for the gradients here, since the `zeros` are only used on the last layer.
```
class ResidualWrapper(tf.keras.Model):
def __init__(self, model):
super().__init__()
self.model = model
def call(self, inputs, *args, **kwargs):
delta = self.model(inputs, *args, **kwargs)
# The prediction for each timestep is the input
# from the previous time step plus the delta
# calculated by the model.
return inputs + delta
%%time
residual_lstm = ResidualWrapper(
tf.keras.Sequential([
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.Dense(
num_features,
# The predicted deltas should start small
# So initialize the output layer with zeros
kernel_initializer=tf.initializers.zeros)
]))
history = compile_and_fit(residual_lstm, wide_window)
IPython.display.clear_output()
val_performance['Residual LSTM'] = residual_lstm.evaluate(wide_window.val)
performance['Residual LSTM'] = residual_lstm.evaluate(wide_window.test, verbose=0)
print()
```
#### Performance
Here is the overall performance for these multi-output models.
```
x = np.arange(len(performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in val_performance.values()]
test_mae = [v[metric_index] for v in performance.values()]
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=performance.keys(),
rotation=45)
plt.ylabel('MAE (average over all outputs)')
_ = plt.legend()
for name, value in performance.items():
print(f'{name:15s}: {value[1]:0.4f}')
```
The above performances are averaged across all model outputs.
## Multi-step models
Both the single-output and multiple-output models in the previous sections made **single time step predictions**, 1h into the future.
This section looks at how to expand these models to make **multiple time step predictions**.
In a multi-step prediction, the model needs to learn to predict a range of future values. Thus, unlike a single step model, where only a single future point is predicted, a multi-step model predicts a sequence of the future values.
There are two rough approaches to this:
1. Single shot predictions where the entire time series is predicted at once.
2. Autoregressive predictions where the model only makes single step predictions and its output is fed back as its input.
In this section all the models will predict **all the features across all output time steps**.
For the multi-step model, the training data again consists of hourly samples. However, here, the models will learn to predict 24h of the future, given 24h of the past.
Here is a `Window` object that generates these slices from the dataset:
```
OUT_STEPS = 24
multi_window = WindowGenerator(input_width=24,
label_width=OUT_STEPS,
shift=OUT_STEPS)
multi_window.plot()
multi_window
```
### Baselines
A simple baseline for this task is to repeat the last input time step for the required number of output timesteps:

```
class MultiStepLastBaseline(tf.keras.Model):
def call(self, inputs):
return tf.tile(inputs[:, -1:, :], [1, OUT_STEPS, 1])
last_baseline = MultiStepLastBaseline()
last_baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
multi_val_performance = {}
multi_performance = {}
multi_val_performance['Last'] = last_baseline.evaluate(multi_window.val)
multi_performance['Last'] = last_baseline.evaluate(multi_window.test, verbose=0)
multi_window.plot(last_baseline)
```
Since this task is to predict 24h given the previous 24h, another simple approach is to repeat the previous day, assuming tomorrow will be similar:

```
class RepeatBaseline(tf.keras.Model):
def call(self, inputs):
return inputs
repeat_baseline = RepeatBaseline()
repeat_baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
multi_val_performance['Repeat'] = repeat_baseline.evaluate(multi_window.val)
multi_performance['Repeat'] = repeat_baseline.evaluate(multi_window.test, verbose=0)
multi_window.plot(repeat_baseline)
```
### Single-shot models
One high-level approach to this problem is to use a "single-shot" model, where the model makes the entire sequence prediction in a single step.
This can be implemented efficiently as a `layers.Dense` with `OUT_STEPS*features` output units. The model just needs to reshape that output to the required `(OUT_STEPS, features)`.
#### Linear
A simple linear model based on the last input time step does better than either baseline, but is underpowered. The model needs to predict `OUT_STEPS` time steps from a single input time step with a linear projection. It can only capture a low-dimensional slice of the behavior, likely based mainly on the time of day and time of year.

```
multi_linear_model = tf.keras.Sequential([
# Take the last time-step.
# Shape [batch, time, features] => [batch, 1, features]
tf.keras.layers.Lambda(lambda x: x[:, -1:, :]),
# Shape => [batch, 1, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_linear_model, multi_window)
IPython.display.clear_output()
multi_val_performance['Linear'] = multi_linear_model.evaluate(multi_window.val)
multi_performance['Linear'] = multi_linear_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_linear_model)
```
#### Dense
Adding a `layers.Dense` between the input and output gives the linear model more power, but is still only based on a single input timestep.
```
multi_dense_model = tf.keras.Sequential([
# Take the last time step.
# Shape [batch, time, features] => [batch, 1, features]
tf.keras.layers.Lambda(lambda x: x[:, -1:, :]),
# Shape => [batch, 1, dense_units]
tf.keras.layers.Dense(512, activation='relu'),
# Shape => [batch, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_dense_model, multi_window)
IPython.display.clear_output()
multi_val_performance['Dense'] = multi_dense_model.evaluate(multi_window.val)
multi_performance['Dense'] = multi_dense_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_dense_model)
```
#### CNN
A convolutional model makes predictions based on a fixed-width history, which may lead to better performance than the dense model since it can see how things are changing over time:

```
CONV_WIDTH = 3
multi_conv_model = tf.keras.Sequential([
# Shape [batch, time, features] => [batch, CONV_WIDTH, features]
tf.keras.layers.Lambda(lambda x: x[:, -CONV_WIDTH:, :]),
# Shape => [batch, 1, conv_units]
tf.keras.layers.Conv1D(256, activation='relu', kernel_size=(CONV_WIDTH)),
# Shape => [batch, 1, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_conv_model, multi_window)
IPython.display.clear_output()
multi_val_performance['Conv'] = multi_conv_model.evaluate(multi_window.val)
multi_performance['Conv'] = multi_conv_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_conv_model)
```
#### RNN
A recurrent model can learn to use a long history of inputs, if it's relevant to the predictions the model is making. Here the model will accumulate internal state for 24h, before making a single prediction for the next 24h.
In this single-shot format, the LSTM only needs to produce an output at the last time step, so set `return_sequences=False`.

```
multi_lstm_model = tf.keras.Sequential([
# Shape [batch, time, features] => [batch, lstm_units]
# Adding more `lstm_units` just overfits more quickly.
tf.keras.layers.LSTM(32, return_sequences=False),
# Shape => [batch, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_lstm_model, multi_window)
IPython.display.clear_output()
multi_val_performance['LSTM'] = multi_lstm_model.evaluate(multi_window.val)
multi_performance['LSTM'] = multi_lstm_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_lstm_model)
```
### Advanced: Autoregressive model
The above models all predict the entire output sequence in a single step.
In some cases it may be helpful for the model to decompose this prediction into individual time steps. Then the model's output can be fed back into itself at each step, and predictions can be made conditioned on the previous one, like in the classic [Generating Sequences With Recurrent Neural Networks](https://arxiv.org/abs/1308.0850).
One clear advantage to this style of model is that it can be set up to produce output with a varying length.
You could take any of the single-step multi-output models trained in the first half of this tutorial and run them in an autoregressive feedback loop, but here you'll focus on building a model that's been explicitly trained to do that.

#### RNN
This tutorial only builds an autoregressive RNN model, but this pattern could be applied to any model that was designed to output a single timestep.
The model will have the same basic form as the single-step `LSTM` models: An `LSTM` followed by a `layers.Dense` that converts the `LSTM` outputs to model predictions.
A `layers.LSTM` is a `layers.LSTMCell` wrapped in the higher level `layers.RNN` that manages the state and sequence results for you (See [Keras RNNs](https://www.tensorflow.org/guide/keras/rnn) for details).
In this case, the model has to manually manage the inputs for each step, so it uses `layers.LSTMCell` directly for the lower-level, single-time-step interface.
```
class FeedBack(tf.keras.Model):
def __init__(self, units, out_steps):
super().__init__()
self.out_steps = out_steps
self.units = units
self.lstm_cell = tf.keras.layers.LSTMCell(units)
# Also wrap the LSTMCell in an RNN to simplify the `warmup` method.
self.lstm_rnn = tf.keras.layers.RNN(self.lstm_cell, return_state=True)
self.dense = tf.keras.layers.Dense(num_features)
feedback_model = FeedBack(units=32, out_steps=OUT_STEPS)
```
The first method this model needs is a `warmup` method to initialize its internal state based on the inputs. Once trained this state will capture the relevant parts of the input history. This is equivalent to the single-step `LSTM` model from earlier:
```
def warmup(self, inputs):
# inputs.shape => (batch, time, features)
# x.shape => (batch, lstm_units)
x, *state = self.lstm_rnn(inputs)
# predictions.shape => (batch, features)
prediction = self.dense(x)
return prediction, state
FeedBack.warmup = warmup
```
This method returns a single time-step prediction, and the internal state of the LSTM:
```
prediction, state = feedback_model.warmup(multi_window.example[0])
prediction.shape
```
With the `RNN`'s state and an initial prediction, you can now continue iterating the model, feeding the prediction at each step back in as the input.
The simplest approach to collecting the output predictions is to use a python list, and `tf.stack` after the loop.
Note: Stacking a python list like this only works with eager-execution, using `Model.compile(..., run_eagerly=True)` for training, or with a fixed length output. For a dynamic output length you would need to use a `tf.TensorArray` instead of a python list, and `tf.range` instead of the python `range`.
```
def call(self, inputs, training=None):
  # Use a python list to collect the unrolled outputs (see the note above).
predictions = []
# Initialize the lstm state
prediction, state = self.warmup(inputs)
# Insert the first prediction
predictions.append(prediction)
# Run the rest of the prediction steps
for n in range(1, self.out_steps):
# Use the last prediction as input.
x = prediction
# Execute one lstm step.
x, state = self.lstm_cell(x, states=state,
training=training)
# Convert the lstm output to a prediction.
prediction = self.dense(x)
# Add the prediction to the output
predictions.append(prediction)
# predictions.shape => (time, batch, features)
predictions = tf.stack(predictions)
# predictions.shape => (batch, time, features)
predictions = tf.transpose(predictions, [1, 0, 2])
return predictions
FeedBack.call = call
```
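As a hedged sketch (not part of this tutorial), the `tf.TensorArray`/`tf.range` variant mentioned in the note above could look roughly like this; `call_dynamic` is a hypothetical name and is not wired into training here:
```
def call_dynamic(self, inputs, training=None):
  # Collect the unrolled outputs in a TensorArray so the loop can also
  # be traced in graph mode, not just run eagerly.
  predictions = tf.TensorArray(tf.float32, size=self.out_steps)
  prediction, state = self.warmup(inputs)
  predictions = predictions.write(0, prediction)
  for n in tf.range(1, self.out_steps):
    # Use the last prediction as input and run one lstm step.
    x, state = self.lstm_cell(prediction, states=state, training=training)
    prediction = self.dense(x)
    predictions = predictions.write(n, prediction)
  # (time, batch, features) => (batch, time, features)
  return tf.transpose(predictions.stack(), [1, 0, 2])
FeedBack.call_dynamic = call_dynamic
```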
Test run this model on the example inputs:
```
print('Output shape (batch, time, features): ', feedback_model(multi_window.example[0]).shape)
```
Now train the model:
```
history = compile_and_fit(feedback_model, multi_window)
IPython.display.clear_output()
multi_val_performance['AR LSTM'] = feedback_model.evaluate(multi_window.val)
multi_performance['AR LSTM'] = feedback_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(feedback_model)
```
### Performance
There are clearly diminishing returns as a function of model complexity on this problem.
```
x = np.arange(len(multi_performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in multi_val_performance.values()]
test_mae = [v[metric_index] for v in multi_performance.values()]
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=multi_performance.keys(),
rotation=45)
plt.ylabel(f'MAE (average over all times and outputs)')
_ = plt.legend()
```
The metrics for the multi-output models in the first half of this tutorial show the performance averaged across all output features. These performances are similar, but they are also averaged across output timesteps.
```
for name, value in multi_performance.items():
print(f'{name:8s}: {value[1]:0.4f}')
```
The gains achieved going from a dense model to convolutional and recurrent models are only a few percent (if any), and the autoregressive model performed clearly worse. So these more complex approaches may not be worthwhile on **this** problem, but there was no way to know without trying, and these models could be helpful for **your** problem.
## Next steps
This tutorial was a quick introduction to time series forecasting using TensorFlow.
* For further understanding, see:
* Chapter 15 of [Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/), 2nd Edition
* Chapter 6 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python).
* Lesson 8 of [Udacity's intro to TensorFlow for deep learning](https://www.udacity.com/course/intro-to-tensorflow-for-deep-learning--ud187), and the [exercise notebooks](https://github.com/tensorflow/examples/tree/master/courses/udacity_intro_to_tensorflow_for_deep_learning)
* Also remember that you can implement any [classical time series model](https://otexts.com/fpp2/index.html) in TensorFlow; this tutorial just focuses on TensorFlow's built-in functionality.
# Word Embeddings in MySQL
This example uses the official MySQL Connector within Python3 to store and retrieve various amounts of Word Embeddings.
We will use a local MySQL database running as a Docker Container for testing purposes. To start the database run:
```
docker run -ti --rm --name ohmysql -e MYSQL_ROOT_PASSWORD=mikolov -e MYSQL_DATABASE=embeddings -p 3306:3306 mysql:5.7
```
```
import mysql.connector
import io
import time
import numpy
import plotly
from tqdm import tqdm_notebook as tqdm
```
# Dummy Embeddings
For testing purposes we will use randomly generated numpy arrays as dummy embeddings.
```
def embeddings(n=1000, dim=300):
"""
Yield n tuples of random numpy arrays of *dim* length indexed by *n*
"""
idx = 0
while idx < n:
yield (str(idx), numpy.random.rand(dim))
idx += 1
```
# Conversion Functions
Since we can't just save a NumPy array into the database, we will convert it into a BLOB.
```
def adapt_array(array):
"""
Using the numpy.save function to save a binary version of the array,
and BytesIO to catch the stream of data and convert it into a BLOB.
"""
out = io.BytesIO()
numpy.save(out, array)
out.seek(0)
return out.read()
def convert_array(blob):
"""
Using BytesIO to convert the binary version of the array back into a numpy array.
"""
out = io.BytesIO(blob)
out.seek(0)
return numpy.load(out)
connection = mysql.connector.connect(user='root', password='mikolov',
host='127.0.0.1',
database='embeddings')
cursor = connection.cursor()
cursor.execute('CREATE TABLE IF NOT EXISTS `embeddings` (`key` TEXT, `embedding` BLOB);')
connection.commit()
%%time
for key, emb in embeddings():
arr = adapt_array(emb)
cursor.execute('INSERT INTO `embeddings` (`key`, `embedding`) VALUES (%s, %s);', (key, arr))
connection.commit()
%%time
for key, _ in embeddings():
cursor.execute('SELECT embedding FROM `embeddings` WHERE `key`=%s;', (key,))
data = cursor.fetchone()
arr = convert_array(data[0])
```
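As a quick sanity check (not in the original notebook), you can verify that the BLOB round trip through `adapt_array` and `convert_array` preserves an array exactly:
```
# Round-trip one dummy embedding through the BLOB helpers defined above.
original = numpy.random.rand(300)
restored = convert_array(adapt_array(original))
assert numpy.array_equal(original, restored)
print('BLOB round trip OK, shape:', restored.shape)
```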
# Sample some data
To test the I/O we will write and read some data from the database. This may take a while.
```
write_times = []
read_times = []
counts = [500, 1000, 2000, 3000, 4000, 5000]
for c in counts:
print(c)
cursor.execute('DROP TABLE IF EXISTS `embeddings`;')
cursor.execute('CREATE TABLE IF NOT EXISTS `embeddings` (`key` TEXT, `embedding` BLOB);')
connection.commit()
start_time_write = time.time()
for key, emb in tqdm(embeddings(c), total=c):
arr = adapt_array(emb)
cursor.execute('INSERT INTO `embeddings` (`key`, `embedding`) VALUES (%s, %s);', (key, arr))
connection.commit()
write_times.append(time.time() - start_time_write)
start_time_read = time.time()
for key, emb in embeddings(c):
cursor.execute('SELECT embedding FROM `embeddings` WHERE `key`=%s;', (key,))
data = cursor.fetchone()
arr = convert_array(data[0])
read_times.append(time.time() - start_time_read)
print('DONE')
```
# Results
```
plotly.offline.init_notebook_mode(connected=True)
trace = plotly.graph_objs.Scatter(
y = write_times,
x = counts,
mode = 'lines+markers'
)
layout = plotly.graph_objs.Layout(title="MySQL Write Times",
yaxis=dict(title='Time in Seconds'),
xaxis=dict(title='Embedding Count'))
data = [trace]
fig = plotly.graph_objs.Figure(data=data, layout=layout)
plotly.offline.iplot(fig, filename='jupyter-scatter-write')
plotly.offline.init_notebook_mode(connected=True)
trace = plotly.graph_objs.Scatter(
y = read_times,
x = counts,
mode = 'lines+markers'
)
layout = plotly.graph_objs.Layout(title="MySQL Read Times",
yaxis=dict(title='Time in Seconds'),
xaxis=dict(title='Embedding Count'))
data = [trace]
fig = plotly.graph_objs.Figure(data=data, layout=layout)
plotly.offline.iplot(fig, filename='jupyter-scatter-read')
```
```
from future.builtins import next
import os
import csv
import re
import logging
import optparse
import dedupe
from unidecode import unidecode
import pandas as pd
pd.options.display.float_format = '{:20,.2f}'.format
pd.set_option('display.max_rows', 5000)
pd.set_option('display.max_columns', 5000)
pd.set_option('display.width', 1000)
pd.set_option('display.max_colwidth', -1)
amazon_walmart_all_path = (r'/home/ubuntu/jupyter/ServerX/1_Standard Data Integration/Sample Datasets'
r'/Processed Data/product_samples/amazon_walmart_all.csv')
```
## Prepare df and dict corpus
```
fields_of_interest = [
'Id',
'name',
'producer',
'description',
'price',
'category',
'source'
]
amazon_walmart_all_df = pd.read_csv(amazon_walmart_all_path, sep=',', quotechar='"')[fields_of_interest]
amazon_walmart_all_df.dtypes
x = amazon_walmart_all_df[amazon_walmart_all_df['category'].isnull()]
x.head(1)
z = amazon_walmart_all_df[amazon_walmart_all_df['producer'].isnull()]
z.head(1)
y = amazon_walmart_all_df[amazon_walmart_all_df['price'].isnull()]
y.head(1)
h = amazon_walmart_all_df[amazon_walmart_all_df['description'].isnull()]
h.head()
amazon_walmart_all_df[amazon_walmart_all_df['name'].isnull()]
description_corpus = amazon_walmart_all_df['description'].to_list()
description_corpus = [x for x in description_corpus if str(x) != 'nan']
description_corpus[1]
category_corpus = amazon_walmart_all_df.drop_duplicates().to_dict('records')
categories = list(amazon_walmart_all_df['category'].unique())
categories = [x for x in categories if str(x) != 'nan']
producer_corpus = amazon_walmart_all_df.drop_duplicates().to_dict('records')
producers = list(amazon_walmart_all_df['producer'].unique())
producers = [x for x in producers if str(x) != 'nan']
producers.sort()
producers
input_file = amazon_walmart_all_path
output_file = 'amazon_walmart_output3.csv'
settings_file = 'amazon_walmart_learned_settings3'
training_file = 'amazon_walmart_training3.json'
float('1.25')
def preProcess(key, column):
try : # python 2/3 string differences
column = column.decode('utf8')
except AttributeError:
pass
column = unidecode(column)
column = re.sub(' +', ' ', column)
column = re.sub('\n', ' ', column)
column = column.strip().strip('"').strip("'").lower().strip()
column = column.lower()
if not column:
return None
if key == 'price':
column = float(column)
return column
def readData(filename):
data_d = {}
with open(filename) as f:
reader = csv.DictReader(f)
for row in reader:
clean_row = [(k, preProcess(k, v)) for (k, v) in row.items()]
row_id = int(row['Id'])
data_d[row_id] = dict(clean_row)
return data_d
print('importing data ...')
data_d = readData(input_file)
fields = [
{'field' : 'name', 'type': 'Name'},
{'field' : 'name', 'type': 'String'},
{'field' : 'description',
'type': 'Text',
'corpus': description_corpus,
'has_missing': True
},
{'field' : 'category',
'type': 'FuzzyCategorical',
'categories': categories,
'corpus': category_corpus,
'has missing' : True
},
{'field' : 'producer',
'type': 'FuzzyCategorical',
'categories': producers,
'corpus': producer_corpus,
'has_missing': True
},
{'field' : 'price',
'type': 'Price',
'has_missing': True
},
]
deduper = dedupe.Dedupe(fields)
# took about 20 min with blocked proportion 0.8
deduper.prepare_training(data_d)
dedupe.consoleLabel(deduper)
data_d[1]
deduper.train()
threshold = deduper.threshold(data_d, recall_weight=1)
threshold
print('clustering...')
clustered_dupes = deduper.match(data_d, threshold)
print('# duplicate sets', len(clustered_dupes))
for key, values in data_d.items():
values['price'] = str(values['price'])
cluster_membership = {}
cluster_id = 0
for (cluster_id, cluster) in enumerate(clustered_dupes):
id_set, scores = cluster
cluster_d = [data_d[c] for c in id_set]
canonical_rep = dedupe.canonicalize(cluster_d)
for record_id, score in zip(id_set, scores):
cluster_membership[record_id] = {
"cluster id" : cluster_id,
"canonical representation" : canonical_rep,
"confidence": score
}
singleton_id = cluster_id + 1
with open(output_file, 'w') as f_output, open(input_file) as f_input:
writer = csv.writer(f_output)
reader = csv.reader(f_input)
heading_row = next(reader)
heading_row.insert(0, 'confidence_score')
heading_row.insert(0, 'Cluster ID')
canonical_keys = canonical_rep.keys()
for key in canonical_keys:
heading_row.append('canonical_' + key)
writer.writerow(heading_row)
for row in reader:
row_id = int(row[0])
if row_id in cluster_membership:
cluster_id = cluster_membership[row_id]["cluster id"]
canonical_rep = cluster_membership[row_id]["canonical representation"]
row.insert(0, cluster_membership[row_id]['confidence'])
row.insert(0, cluster_id)
for key in canonical_keys:
row.append(canonical_rep[key].encode('utf8'))
else:
row.insert(0, None)
row.insert(0, singleton_id)
singleton_id += 1
for key in canonical_keys:
row.append(None)
writer.writerow(row)
fields_of_interest = ['Cluster ID', 'confidence_score', 'Id', 'name', 'producer', 'description', 'price']
amazon_walmart_output = pd.read_csv('amazon_walmart_output2.csv', sep=',', quotechar='"')[fields_of_interest]
amazon_walmart_output[amazon_walmart_output['confidence_score'].isnull()]
amazon_walmart_output = amazon_walmart_output[fields_of_interest]
amazon_walmart_output[amazon_walmart_output['confidence_score'] > 0.9].sort_values('Cluster ID')
```
| github_jupyter |
```
%matplotlib inline
import io
import os
import matplotlib.pyplot as plt
import matplotlib.patches as patch
import pandas as pd
import numpy as np
import seaborn as sns
sns.set_style('darkgrid')
sns.set_context('notebook')
datafile_AMT = '../data/MTurk_anonymous.xlsx'
#datafile_DTU1 = '../data/DTU1_anonymous.xlsx'
#datafile_DTU2 = '../data/DTU2_anonymous.xlsx'
df_MTurk = pd.DataFrame(pd.read_excel(datafile_AMT))
df_MTurk.drop(df_MTurk.columns[df_MTurk.columns.str.contains('unnamed',case = False)],axis = 1, inplace = True)
#df_DTU1 = pd.DataFrame(pd.read_excel(datafile_DTU1))
#df_DTU1.drop(df_DTU1.columns[df_DTU1.columns.str.contains('unnamed',case = False)],axis = 1, inplace = True)
#df_DTU2 = pd.DataFrame(pd.read_excel(datafile_DTU2))
#df_DTU2.drop(df_DTU2.columns[df_DTU2.columns.str.contains('unnamed',case = False)],axis = 1, inplace = True)
df_MTurk.head()
```
In the following, we want to partition responses according to how many times a player has miscoordinated before. For a start, we could concatenate all three experiments into one dataframe (left commented out below, since this notebook analyses the MTurk data only):
```
#df = pd.concat([
# df_MTurk.join(pd.Series(['MTurk']*len(df_MTurk), name='experiment')),
# df_DTU1.join(pd.Series(['DTU1']*len(df_DTU1), name='experiment')),
# df_DTU2.join(pd.Series(['DTU2']*len(df_DTU2), name='experiment'))],
# ignore_index=True)
#df.head(), len(df)
```
Now we group the data into response pairs:
```
df = df_MTurk.groupby(['session', 'group', 'round'], as_index=False)[['code', 'id_in_group', 'arrival', 'choice', 'certainty']].agg(lambda x: tuple(x))
df.head()
#df0 = pd.DataFrame(columns=df.columns)
#df1_ = pd.DataFrame(columns=df.columns)
#sessions = df.session.unique()
#for session in sessions:
# df_session = df[df.session == session]
# groups = df_session.group.unique()
# for group in groups:
# df_group = df_session[df_session.group == group]
# miss = 0
# for idx, row in df.iterrows():
# if sum(row['choice']) != 1:
# row['miss'] = miss
# df0 = df0.append(row, ignore_index=True)
# else:
# miss += 1
# df1_ = df1_.append(row, ignore_index=True)
#df.head()
# initialize new dataframes that will hold data with a condition. df0 will contain all choices
# having had zero previous miscoordinations, while df1_ will contain all choices having had one
# or more previous miscoordinations
df0 = pd.DataFrame(columns=df.columns)
df1_ = pd.DataFrame(columns=df.columns)
# partition the dataframe into two bins - the first having had zero and the other one or more
# previous miscoordinations (an append-free variant of this loop is sketched after this cell):
sessions = df.session.unique()
for session in sessions:
df_session = df[df.session == session]
groups = df_session.group.unique()
for group in groups:
df_group = df_session[df_session.group == group]
miss = 0
for idx, row in df_group.iterrows():
if sum(row['choice']) != 1:
if miss == 0:
df0 = df0.append(row, ignore_index=True)
else:
df1_ = df1_.append(row, ignore_index=True)
else:
if miss == 0:
df0 = df0.append(row, ignore_index=True)
else:
df1_ = df1_.append(row, ignore_index=True)
miss += 1
```
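The loop above grows `df0` and `df1_` one row at a time with `DataFrame.append`, which is slow on larger data and has been removed in recent pandas versions. A sketch of the same assignment logic that collects rows in plain lists and builds each dataframe once at the end (the `_alt` names are only for the sketch):
```
# Same assignment logic and miss counter as the loop above, collected in lists
rows0, rows1_ = [], []
for session in df.session.unique():
    df_session = df[df.session == session]
    for group in df_session.group.unique():
        df_group = df_session[df_session.group == group]
        miss = 0
        for _, row in df_group.iterrows():
            (rows0 if miss == 0 else rows1_).append(row)
            if sum(row['choice']) == 1:
                miss += 1
df0_alt = pd.DataFrame(rows0)
df1_alt = pd.DataFrame(rows1_)
```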
Now a bit of magic: we need to separate the tuples again and create a row for each person (see the reference at https://stackoverflow.com/questions/53218931/how-to-unnest-explode-a-column-in-a-pandas-dataframe):
```
def unnesting(df, explode):
idx = df.index.repeat(df[explode[0]].str.len())
df1 = pd.concat([
pd.DataFrame({x: np.concatenate(df[x].values)}) for x in explode], axis=1)
df1.index = idx
    return df1.join(df.drop(columns=explode), how='left')
dfnew0 = unnesting(df0,['code', 'id_in_group', 'arrival', 'choice', 'certainty'])
dfnew1_ = unnesting(df1_,['code', 'id_in_group', 'arrival', 'choice', 'certainty'])
dfnew0['arrival'].replace({9.0: 8.6, 9.1: 8.7}, inplace=True)
dfnew1_['arrival'].replace({9.0: 8.6, 9.1: 8.7}, inplace=True)
len(dfnew0), len(dfnew1_)
dfnew0.head()
sns.regplot(x='arrival', y='choice', data=dfnew0, ci=95, logistic=True)
sns.despine()
sns.regplot(x='arrival', y='choice', data=dfnew1_, ci=95, logistic=True)
sns.despine()
dfall = pd.concat([dfnew0.join(pd.Series(['zero']*len(dfnew0), name='miss')),
dfnew1_.join(pd.Series(['one_or_more']*len(dfnew1_), name='miss'))],
ignore_index=True)
dfall.head()
pal = dict(zero="blue", one_or_more="orange")
g = sns.lmplot(x="arrival", y="choice", hue="miss", data=dfall, palette=pal,
logistic=True, ci=95, n_boot=10000, x_estimator=np.mean, x_ci="ci",
y_jitter=.2, legend=False, height=3, aspect=1.5)
#g = sns.FacetGrid(dfall, hue='miscoordinations', palette=pal, height=5);
#g.map(plt.scatter, "arrival", "choice", marker='o', s=50, alpha=.7, linewidth=.5, color='white', edgecolor="white")
#g.map(sns.regplot, "arrival", "choice", scatter_kws={"color": "grey"}, ci=95, n_boot=100, logistic=True, height=3, aspect=1.5)
g.set(xlim=(7.98, 8.72))
g.set(ylabel='frequency of canteen choice')
my_ticks = ["8:00", "8:10", "8:20", "8:30", "8:40", "8:50", "9:00", "9:10"]
g.set(xticks=[8,8.1,8.2,8.3,8.4,8.5,8.6,8.7], yticks=[0,.1,.2,.3,.4,.5,.6,.7,.8,.9,1])
g.set(xticklabels = my_ticks) #, yticklabels = ["office","","","","",".5","","","","","canteen"])
# make my own legend:
name_to_color = {
' 0 (n=2762)': 'blue',
'>0 (n=1498)': 'orange',
}
patches = [patch.Patch(color=v, label=k) for k,v in name_to_color.items()]
plt.legend(title='# miscoordinations', handles=patches, loc='lower left')
plt.title('MTurk', fontsize=16)
plt.tight_layout()
plt.rcParams["font.family"] = "sans-serif"
PLOTS_DIR = '../plots'
if not os.path.exists(PLOTS_DIR):
os.makedirs(PLOTS_DIR)
plt.savefig(os.path.join(PLOTS_DIR, 'figS6_logit_2bins_MTurk.png'),
bbox_inches='tight', transparent=True, dpi=300)
plt.savefig(os.path.join(PLOTS_DIR, 'figS6_logit_2bins_MTurk.pdf'), transparent=True, dpi=300)
sns.despine()
```
| github_jupyter |
# COVID-MultiVent
The COVID MultiVent System was developed through a collaboration between Dr. Arun Agarwal, pulmonary critical care fellow at the University of Missouri, and a group of medical students at the University of Pennsylvania with backgrounds in various types of engineering. The system allows for monitoring of pressure and volume delivered to each patient and incorporates valve components that allow for peak inspiratory pressure (PIP), positive end expiratory pressure (PEEP), and inspiratory volume to be modulated independently for each patient. We have created open source assembly resources for this system to expedite its adoption in hospitals around the world in dire need of ventilation resources. The system is assembled with components that can be readily found in most hospitals. If you have questions regarding the system, please contact us at **[email protected]**
Dr. Arun Agarwal
Alexander Lee
Feyisope Eweje
Stephen Landy
David Gordon
Ryan Gallagher
Alfredo Lucas
```
%%html
<iframe src="https://docs.google.com/presentation/d/e/2PACX-1vT2kAsF54vHJoIti5iQkk8zUTkcQuRBFnbtgAhCoHwxrThIzZ14mCpdgNkWrqS8bRf5xFB2ZoeXlfUk/embed?start=false&loop=false&delayms=3000" frameborder="0" width="480" height="299" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>
```
### Original Video and Facebook Group
Please see Dr. Agarwal's original video below, showing the proposed setup. A link to the MultiVent Facebook group can be found [here](https://facebook.com/COVIDMULTIVENT/).
```
%%html
<iframe width="560" height="315" src="https://www.youtube.com/embed/odnhnnlBlpM" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
```
# Assembly Guide and Component List
For detailed instructions on how to build the MultiVent setup please click [here](https://docs.google.com/document/d/11Z7MpeGSC6m3ipsMVLe-Ofkw4HeqCN6x1Zhqy8FMRpc/edit?usp=sharing)
Below is the proposed parts list with the number of each part required for a single- or 2-patient setup. A list of vendors associated with each part is also included; however, note that almost all of these components will be readily available in many hospitals that already have ventilator setups.
Click [here](https://docs.google.com/spreadsheets/d/1XikdFKNdgZAywoPw4oIWqJ9e0-a059ucXh8DSOGl4GA/edit?usp=sharing) for a Google Sheets version of this list.
```
import pandas as pd
df = pd.read_excel('Website Ventilator splitting components.xlsx')
df
```
## Additional Resources
[US Public Health Service guidelines for ventilator splitting](https://www.hhs.gov/sites/default/files/optimizing-ventilator-use-during-covid19-pandemic.pdf)
[Ventilation Sharing Protocol developed by the Columbia University College of Physicians & Surgeons/New York-Presbyterian Hospital (3/24/20)](https://www.gnyha.org/wp-content/uploads/2020/03/Ventilator-Sharing-Protocol-Dual-Patient-Ventilation-with-a-Single-Mechanical-Ventilator-for-Use-during-Critical-Ventilator-Shortages.pdf)
[PulmCrit - Splitting ventilators to provide titrated support to a large group of patients](https://emcrit.org/pulmcrit/split-ventilators/)
[Medium article explain vent splitting methods - A better way of connecting multiple patients to a single ventilator](https://medium.com/@pinsonhannah/a-better-way-of-connecting-multiple-patients-to-a-single-ventilator-fa9cf42679c6)
# ***DISCLAIMER - This system is currently not endorsed by the University of Missouri nor the University of Pennsylvania. Multi-patient ventilation should be attempted with the greatest of caution as an option of last resort with appropriate clinical guidance. The COVID MultiVent team does not assume any liability for adverse events that may occur that could in any way be tied to use of the system.***
| github_jupyter |
```
import glob
import itertools
from ipywidgets import widgets, Layout
import numpy as np
import os
import pandas as pd
import plotly.io as pio
import plotly.graph_objects as go
from apex_performance_plotter.apex_performance_plotter.load_logfiles import load_logfiles
pio.templates.default = "plotly_white"
from IPython.core.interactiveshell import InteractiveShell
# Define the folders where to look for experiment outputs
os.chdir('../../../../experiment')
logfiles = glob.glob('{}*'.format('log'))
selected_logfiles = widgets.SelectMultiple(
options=logfiles,
description='Experiments',
disabled=False,
layout=Layout(width='100%')
)
display(selected_logfiles)
# Select the experiments to plot
# Display selected experiment properties
InteractiveShell.ast_node_interactivity = "all"
headers, dataframes = load_logfiles(selected_logfiles)
for idx, header in enumerate(headers):
display(header)
InteractiveShell.ast_node_interactivity = "last"
colors = ['#4363d8','#800000','#f58231','#e6beff']
# Plot latencies
figure_latencies = go.FigureWidget()
figure_latencies.layout.xaxis.title = 'Time (s)'
figure_latencies.layout.yaxis.title = 'Latencies (ms)'
for idx, experiment in enumerate(dataframes):
figure_latencies.add_scatter(x=experiment['T_experiment'],
y=experiment['latency_max (ms)'],
mode='markers', marker_color=colors[idx],
marker_symbol='x',
name= 'latency_max',
text=headers[idx]['Logfile name']);
figure_latencies.add_scatter(x=experiment['T_experiment'],
y=experiment['latency_mean (ms)'],
mode='markers', marker_color=colors[idx],
marker_symbol='triangle-up',
name='latency_mean',
text=headers[idx]['Logfile name']);
figure_latencies.add_scatter(x=experiment['T_experiment'],
y=experiment['latency_min (ms)'],
mode='markers', marker_color=colors[idx],
name='latency_min',
text=headers[idx]['Logfile name'])
figure_latencies
# Plot CPU usage
figure_cpu_usage = go.FigureWidget()
figure_cpu_usage.layout.xaxis.title = 'Time (s)'
figure_cpu_usage.layout.yaxis.title = 'CPU usage (%)'
for idx, experiment in enumerate(dataframes):
figure_cpu_usage.add_scatter(x=experiment['T_experiment'],
y=experiment['cpu_usage (%)'],
mode='markers', marker_color=colors[idx],
marker_symbol='x',
name= 'cpu_usage',
text=headers[idx]['Logfile name']);
figure_cpu_usage
# Plot memory consumption
figure_memory_usage = go.FigureWidget()
figure_memory_usage.layout.xaxis.title = 'Time (s)'
figure_memory_usage.layout.yaxis.title = 'Memory consumption (MB)'
for idx, experiment in enumerate(dataframes):
figure_memory_usage.add_scatter(x=experiment['T_experiment'],
y=experiment['ru_maxrss']/1024,
mode='markers', marker_color=colors[idx],
marker_symbol='x',
name= 'ru_maxrss',
text=headers[idx]['Logfile name']);
figure_memory_usage
```
| github_jupyter |
# Homework 10: `SQL`
## Due Date: Thursday, November 16th at 11:59 PM
You will create a database of the NASA polynomial coefficients for each species.
**Please turn in your database with your `Jupyter` notebook!**
# Question 1: Convert XML to a SQL database
Create two tables named `LOW` and `HIGH`, each corresponding to data given for the low and high temperature range.
Each should have the following column names:
- `SPECIES_NAME`
- `TLOW`
- `THIGH`
- `COEFF_1`
- `COEFF_2`
- `COEFF_3`
- `COEFF_4`
- `COEFF_5`
- `COEFF_6`
- `COEFF_7`
Populate the tables using the XML file you created in the last assignment. If you did not complete the last assignment, you may also use the `example_thermo.xml` file.
`TLOW` should refer to the temperature at the low range and `THIGH` should refer to the temperature at the high range. For example, in the `LOW` table, $H$ would have `TLOW` at $200$ and `THIGH` at $1000$ and in the `HIGH` table, $H$ would have `TLOW` at $1000$ and `THIGH` at $3500$.
For both tables, `COEFF_1` through `COEFF_7` should be populated with the corresponding coefficients for the low temperature data and high temperature data.
```
import xml.etree.ElementTree as ET
import sqlite3
import pandas as pd
from IPython.core.display import display
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
# Create and connect to database
db = sqlite3.connect('NASA_coef.sqlite')
cursor = db.cursor()
cursor.execute("DROP TABLE IF EXISTS LOW")
cursor.execute("DROP TABLE IF EXISTS HIGH")
cursor.execute("PRAGMA foreign_keys=1")
# Create the table for low temperature range
cursor.execute('''CREATE TABLE LOW (
id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
SPECIES_NAME TEXT,
TLOW REAL,
THIGH REAL,
COEFF_1 REAL,
COEFF_2 REAL,
COEFF_3 REAL,
COEFF_4 REAL,
COEFF_5 REAL,
COEFF_6 REAL,
COEFF_7 REAL)''')
db.commit()
# Create the table for high temperature range
cursor.execute('''CREATE TABLE HIGH (
id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
SPECIES_NAME TEXT,
TLOW REAL,
THIGH REAL,
COEFF_1 REAL,
COEFF_2 REAL,
COEFF_3 REAL,
COEFF_4 REAL,
COEFF_5 REAL,
COEFF_6 REAL,
COEFF_7 REAL)''')
db.commit()
# The given helper function (from L18) to visualize tables
def viz_tables(cols, query):
q = cursor.execute(query).fetchall()
framelist = []
for i, col_name in enumerate(cols):
framelist.append((col_name, [col[i] for col in q]))
return pd.DataFrame.from_items(framelist)
tree = ET.parse('thermo.xml')
species_data = tree.find('speciesData')
species_list = species_data.findall('species') # a list of all species
for species in species_list:
name = species.get('name')
NASA_list = species.find('thermo').findall('NASA')
for NASA in NASA_list:
T_min = float(NASA.get('Tmin'))
T_max = float(NASA.get('Tmax'))
coefs = NASA.find('floatArray').text.split(',')
vals_to_insert = (name, T_min, T_max, float(coefs[0].strip()), float(coefs[1].strip()),
float(coefs[2].strip()), float(coefs[3].strip()), float(coefs[4].strip()),
float(coefs[5].strip()), float(coefs[6].strip()))
if T_max > 1000: # high range temperature
cursor.execute('''INSERT INTO HIGH
(SPECIES_NAME, TLOW, THIGH, COEFF_1, COEFF_2, COEFF_3, COEFF_4, COEFF_5, COEFF_6, COEFF_7)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)''', vals_to_insert)
else: # low range temperature
cursor.execute('''INSERT INTO LOW
(SPECIES_NAME, TLOW, THIGH, COEFF_1, COEFF_2, COEFF_3, COEFF_4, COEFF_5, COEFF_6, COEFF_7)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)''', vals_to_insert)
LOW_cols = [col[1] for col in cursor.execute("PRAGMA table_info(LOW)")]
display(viz_tables(LOW_cols, '''SELECT * FROM LOW'''))
HIGH_cols = [col[1] for col in cursor.execute("PRAGMA table_info(HIGH)")]
display(viz_tables(HIGH_cols, '''SELECT * FROM HIGH'''))
```
# Question 2: `WHERE` Statements
1. Write a `Python` function `get_coeffs` that returns an array of 7 coefficients.
The function should take in two parameters: 1.) `species_name` and 2.) `temp_range`, an indicator variable ('low' or 'high') to indicate whether the coefficients should come from the low or high temperature range.
The function should use `SQL` commands and `WHERE` statements on the table you just created in Question 1 (rather than taking data from the XML directly).
```python
def get_coeffs(species_name, temp_range):
''' Fill in here'''
return coeffs
```
2. Write a python function `get_species` that returns all species that have a temperature range above or below a given value. The function should take in two parameters: 1.) `temp` and 2.) `temp_range`, an indicator variable ('low' or 'high').
When temp_range is 'low', we are looking for species with a temperature range lower than the given temperature, and for a 'high' temp_range, we want species with a temperature range higher than the given temperature.
This exercise may be useful if different species have different `LOW` and `HIGH` ranges.
And as before, you should accomplish this through `SQL` queries and where statements.
```python
def get_species(temp, temp_range):
''' Fill in here'''
    return species
```
```
def get_coeffs(species_name, temp_range):
query = '''SELECT COEFF_1, COEFF_2, COEFF_3, COEFF_4, COEFF_5, COEFF_6, COEFF_7
FROM {}
WHERE SPECIES_NAME = "{}"'''.format(temp_range.upper(), species_name)
coeffs = list(cursor.execute(query).fetchall()[0])
return coeffs
get_coeffs('H', 'low')
def get_species(temp, temp_range):
if temp_range == 'low': # temp_range == 'low'
query = '''SELECT SPECIES_NAME FROM {} WHERE TLOW < {}'''.format(temp_range.upper(), temp)
else: # temp_range == 'high'
query = '''SELECT SPECIES_NAME FROM {} WHERE THIGH > {}'''.format(temp_range.upper(), temp)
species = []
for s in cursor.execute(query).fetchall():
species.append(s[0])
return species
get_species(500, 'low')
get_species(100, 'low')
get_species(3000, 'high')
get_species(3500, 'high')
```
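As a side note on the two functions above: formatting the species name directly into the query string works here, but `sqlite3` also supports `?` placeholders for values (placeholders cannot stand in for identifiers, so the table name still has to be chosen from a fixed whitelist). A sketch with the illustrative name `get_coeffs_param`:
```
def get_coeffs_param(species_name, temp_range):
    # '?' binds the value safely; the table name comes from a fixed whitelist
    table = 'LOW' if temp_range.lower() == 'low' else 'HIGH'
    query = '''SELECT COEFF_1, COEFF_2, COEFF_3, COEFF_4, COEFF_5, COEFF_6, COEFF_7
               FROM {} WHERE SPECIES_NAME = ?'''.format(table)
    return list(cursor.execute(query, (species_name,)).fetchall()[0])

get_coeffs_param('H', 'low')
```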
# Question 3: `JOIN` STATEMENTS
Create a table named `ALL_TEMPS` that has the following columns:
- `SPECIES_NAME`
- `TEMP_LOW`
- `TEMP_HIGH`
This table should be created by joining the tables `LOW` and `HIGH` on the value `SPECIES_NAME`.
```
# Create the table for ALL_TEMPS
cursor.execute("DROP TABLE IF EXISTS ALL_TEMPS")
cursor.execute('''CREATE TABLE ALL_TEMPS (
id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
SPECIES_NAME TEXT,
TEMP_LOW REAL,
TEMP_HIGH REAL)''')
db.commit()
# Insert TEMP_LOW and TEMP_HIGH of all species to ALL_TEMPS
query = '''SELECT LOW.SPECIES_NAME, LOW.TLOW AS TEMP_LOW, HIGH.THIGH AS TEMP_HIGH
FROM LOW
INNER JOIN HIGH ON LOW.SPECIES_NAME = HIGH.SPECIES_NAME'''
for record in cursor.execute(query).fetchall():
cursor.execute('''INSERT INTO ALL_TEMPS
(SPECIES_NAME, TEMP_LOW, TEMP_HIGH)
VALUES (?, ?, ?)''', record)
ALL_TEMPS_cols = [col[1] for col in cursor.execute("PRAGMA table_info(ALL_TEMPS)")]
display(viz_tables(ALL_TEMPS_cols, '''SELECT * FROM ALL_TEMPS'''))
```
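The fetch-then-insert loop above works, but SQLite can also populate `ALL_TEMPS` with a single `INSERT INTO ... SELECT` statement. A minimal sketch, meant to run instead of the Python loop on a freshly created (empty) `ALL_TEMPS`:
```
# Sketch: let SQLite do the join-and-insert in one statement
cursor.execute('''INSERT INTO ALL_TEMPS (SPECIES_NAME, TEMP_LOW, TEMP_HIGH)
                  SELECT LOW.SPECIES_NAME, LOW.TLOW, HIGH.THIGH
                  FROM LOW
                  INNER JOIN HIGH ON LOW.SPECIES_NAME = HIGH.SPECIES_NAME''')
db.commit()
```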
1. Write a `Python` function `get_range` that returns the range of temperatures for a given species_name.
The range should be computed within the `SQL` query (i.e. you should subtract within the `SELECT` statement in the `SQL` query).
```python
def get_range(species_name):
'''Fill in here'''
return range
```
Note that `TEMP_LOW` is the lowest temperature in the `LOW` range and `TEMP_HIGH` is the highest temperature in the `HIGH` range.
```
def get_range(species_name):
query = '''SELECT (TEMP_HIGH - TEMP_LOW) AS T_range FROM ALL_TEMPS WHERE SPECIES_NAME = "{}"'''.format(species_name)
T_range = cursor.execute(query).fetchall()[0][0]
return T_range
get_range('O')
# Close the Database
db.commit()
db.close()
```
| github_jupyter |
# Results for BERT when applying the synonym transformation to both premise and hypothesis on the MNLI dataset
```
import pandas as pd
import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm
from IPython.display import display, HTML
from lr.analysis.util import get_ts_from_results_folder
from lr.analysis.util import get_rho_stats_from_result_list
from lr.stats.h_testing import get_ks_stats_from_p_values_compared_to_uniform_dist
```
## Get Results
```
all_accs = []
all_transformed_accs = []
all_paired_t_p_values = []
all_dev_plus_diff = []
all_time = []
m_name = "bert_base"
folder = "mnli"
test_repetitions = 5
batchs = range(1, test_repetitions + 1)
for i in tqdm(batchs):
test_accuracy = get_ts_from_results_folder(path="results/{}/{}/syn_p_h/batch{}/".format(folder,m_name, i),
stat="test_accuracy")
transformed_test_accuracy = get_ts_from_results_folder(path="results/{}/{}/syn_p_h/batch{}/".format(folder,m_name, i),
stat="transformed_test_accuracy")
paired_t_p_value = get_ts_from_results_folder(path="results/{}/{}/syn_p_h/batch{}/".format(folder, m_name, i),
stat="paired_t_p_value")
diff = get_ts_from_results_folder(path="results/{}/{}/syn_p_h/batch{}/".format(folder, m_name, i),
stat="dev_plus_accuracy_difference")
t_time = get_ts_from_results_folder(path="results/{}/{}/syn_p_h/batch{}/".format(folder, m_name,i),
stat="test_time")
all_accs.append(test_accuracy)
all_transformed_accs.append(transformed_test_accuracy)
all_paired_t_p_values.append(paired_t_p_value)
all_dev_plus_diff.append(diff)
all_time.append(t_time)
total_time = pd.concat(all_time,1).sum().sum()
n_params = 109484547
print("Time for all experiments = {:.1f} hours".format(total_time))
print("Number of parameters for BERT = {}".format(n_params))
```
## Accuracy
```
rhos, mean_acc, error_acc, _ = get_rho_stats_from_result_list(all_accs)
_, mean_acc_t, error_acc_t, _ = get_rho_stats_from_result_list(all_transformed_accs)
fig, ax = plt.subplots(figsize=(12,6))
ax.errorbar(rhos, mean_acc, yerr=error_acc, fmt='-o', label="original test data");
ax.errorbar(rhos, mean_acc_t, yerr=error_acc_t, fmt='-o', label="transformed test data");
ax.legend(loc="best");
ax.set_xlabel(r"$\rho$", fontsize=14);
ax.set_ylabel("accuracy", fontsize=14);
ax.set_title("BERT accuracy\n\ndataset: MNLI\ntransformation: synonym substitution\ntest repetitions: {}\n".format(test_repetitions));
fig.tight_layout()
fig.savefig('figs/bert_base_acc_mnli_syn_p_h.png', bbox_inches=None, pad_inches=0.5)
```
## P-values
```
rhos, mean_p_values, error_p_values, min_p_values = get_rho_stats_from_result_list(all_paired_t_p_values)
alpha = 0.05
alpha_adj = alpha / test_repetitions
rejected_ids = []
remain_ids = []
for i,p in enumerate(min_p_values):
if p < alpha_adj:
rejected_ids.append(i)
else:
remain_ids.append(i)
rhos_rejected = rhos[rejected_ids]
rhos_remain = rhos[remain_ids]
y_rejected = mean_p_values[rejected_ids]
y_remain = mean_p_values[remain_ids]
error_rejected = error_p_values[rejected_ids]
error_remain = error_p_values[remain_ids]
title_msg = "BERT p-values\n\ndataset: "
title_msg += "MNLI\ntransformation: synonym substitution\ntest repetitions: {}\n".format(test_repetitions)
title_msg += "significance level = {:.1%} \n".format(alpha)
title_msg += "adjusted significance level = {:.2%} \n".format(alpha_adj)
fig, ax = plt.subplots(figsize=(12,6))
ax.errorbar(rhos_rejected, y_rejected, yerr=error_rejected, fmt='o', linewidth=0.50, label="at least one p-value is smaller than {:.2%}".format(alpha_adj));
ax.errorbar(rhos_remain, y_remain, yerr=error_remain, fmt='o', linewidth=0.50, label="all p-values are greater than {:.2%}".format(alpha_adj));
ax.legend(loc="best");
ax.set_xlabel(r"$\rho$", fontsize=14);
ax.set_ylabel("p-value", fontsize=14);
ax.set_title(title_msg);
fig.tight_layout()
fig.savefig('figs/bert_p_values_mnli_syn_p_h.png', bbox_inches=None, pad_inches=0.5)
```
## Accuracy difference
```
rhos, diff, _,_ = get_rho_stats_from_result_list(all_dev_plus_diff)
_, test_acc, _,_ = get_rho_stats_from_result_list(all_accs)
_, test_acc_t, _,_ = get_rho_stats_from_result_list(all_transformed_accs)
test_diff = np.abs(test_acc - test_acc_t)
fig, ax = plt.subplots(figsize=(12,6))
ax.errorbar(rhos, diff, fmt='-o', label="validation");
ax.errorbar(rhos, test_diff, fmt='-o', label="test");
ax.legend(loc="best");
ax.set_xlabel(r"$\rho$", fontsize=14);
ax.set_ylabel("average accuracy difference", fontsize=14);
ax.set_title("BERT accuracy difference\n\ndataset: MNLI\ntransformation: synonym substitution\ntest repetitions: {}\n".format(test_repetitions));
fig.tight_layout()
fig.savefig('figs/bert_acc_diff_mnli_syn_p_h.png', bbox_inches=None, pad_inches=0.5)
```
## Selecting the best $\rho$
```
id_min = np.argmin(diff)
min_rho = rhos[id_min]
min_rho_test_acc = test_acc[id_min]
min_rho_transformed_test_acc = test_acc_t[id_min]
test_accuracy_loss_pct = np.round(((min_rho_test_acc - test_acc[0]) / test_acc[0]) * 100, 1)
analysis = {"dataset":"mnli",
"model": "BERT",
"rho":min_rho,
"test_accuracy_loss_pct": test_accuracy_loss_pct,
"average_test_accuracy": min_rho_test_acc,
"average_transformed_test_accuracy": min_rho_transformed_test_acc,
"combined_accuracy": np.mean([min_rho_test_acc,min_rho_transformed_test_acc])}
analysis = pd.DataFrame(analysis, index=[0])
analysis
```
| github_jupyter |
[View in Colaboratory](https://colab.research.google.com/github/tompollard/buenosaires2018/blob/master/1_intro_to_python.ipynb)
# Programming in Python
In this part of the workshop, we will introduce some basic programming concepts in Python. We will then explore how these concepts allow us to carry out an analysis that can be reproduced.
## Working with variables
You can get output from Python by typing math into a code cell. Try executing a sum below (for example: 3 + 5).
```
3+5
```
However, to do anything useful, we will need to assign values to `variables`. Assign a height in cm to a variable in the cell below.
```
height_cm = 180
```
Now the value has been assigned to our variable, we can print it in the console with `print`.
```
print('Height in cm is:', height_cm)
```
We can also do arithmetic with the variable. Convert the height in cm to metres, then print the new value as before (Warning! In Python 2, dividing an integer by an integer will return an integer.)
```
height_m = height_cm / 100
print('height in metres:',height_m)
```
We can check which variables are available in memory with the special command: `%whos`
```
%whos
```
We can see that each of our variables has a type (in this case `int` and `float`), describing the type of data held by the variable. We can use `type` to check the data type of a variable.
```
type(height_cm)
```
Another data type is a `list`, which can hold a series of items. For example, we might measure a patient's heart rate several times over a period.
```
heartrate = [66,64,63,62,66,69,70,75,76]
print(heartrate)
type(heartrate)
```
## Repeating actions in loops
We can access individual items in a list using an index (note, in Python, indexing begins with 0!). For example, let's view the first `[0]` and second `[1]` heart rate measurements.
```
print(heartrate[0])
print(heartrate[1])
```
We can iterate through a list with the help of a `for` loop. Let's try looping over our list of heart rates, printing each item as we go.
```
for i in heartrate:
print('the heart rate is:', i)
```
## Making choices
Sometimes we want to take different actions depending on a set of conditions. We can do this using an `if/else` statement. Let's write a statement to test if a mean arterial pressure (`meanpressure`) is high or low.
```
meanpressure = 70
if meanpressure < 60:
print('Low pressure')
elif meanpressure > 100:
print('High pressure')
else:
print('Normal pressure')
```
## Writing our own functions
To help organise our code and to avoid replicating the same code again and again, we can create functions.
Let's create a function to convert temperature in fahrenheit to celsius, using the following formula:
`celsius = (fahrenheit - 32) * 5/9`
```
def fahr_to_celsius(temp):
celsius = (temp - 32) * 5/9
return celsius
```
Now we can call the function `fahr_to_celsius` to convert a temperature from celsius to fahrenheit.
```
body_temp_f = 98.6
body_temp_c = fahr_to_celsius(body_temp_f)
print('Patient body temperature is:', body_temp_c, 'celsius')
```
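As a further exercise, we could write the inverse conversion by rearranging the formula:
```
def celsius_to_fahr(temp):
    fahrenheit = temp * 9/5 + 32
    return fahrenheit

print('Patient body temperature is:', celsius_to_fahr(body_temp_c), 'fahrenheit')
```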
## Reusing code with libraries
Python is a popular language for data analysis, so thankfully we can benefit from the hard work of others with the use of libraries. Pandas, a popular library for data analysis, introduces the `DataFrame`, a convenient data structure similar to a spreadsheet. Before using a library, we will need to import it.
```
# let's assign pandas an alias, pd, for brevity
import pandas as pd
```
We have shared a demo dataset online containing physiological data relating to 1000 patients admitted to an intensive care unit in Boston, Massachusetts, USA. Let's load this data into our new data structure.
```
url="https://raw.githubusercontent.com/tompollard/tableone/master/data/pn2012_demo.csv"
data=pd.read_csv(url)
```
The variable `data` should now contain our new dataset. Let's view the first few rows using `head()`. Note: parentheses `"()"` are generally required when we are performing an action/operation. In this case, the action is to select a limited number of rows.
```
data.head()
```
We can perform other operations on the dataframe. For example, using `mean()` to get an average of the columns.
```
data.mean()
```
If we are unsure of the meaning of a method, we can check by adding `?` after the method. For example, what is `max`?
```
data.max?
data.max()
```
We can access a single column in the data by specifying the column name after the variable. For example, we can select a list of ages with `data.Age`, and then find the mean for this column in a similar way to before.
```
print('The mean age of patients is:', data.Age.mean())
```
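Columns can also be selected with bracket notation, which is useful when a column name contains spaces or clashes with a DataFrame method:
```
# Equivalent to data.Age, using bracket notation
print('The mean age of patients is:', data['Age'].mean())
```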
Pandas also provides a convenient method `plot` for plotting data. Let's plot a distribution of the patient ages in our dataset.
```
data.Age.plot(kind='kde', title='Age of patients in years')
```
| github_jupyter |
<a href="https://colab.research.google.com/github/achmfirmansyah/sweet_project/blob/master/ICST2020/03_BUildup_transformation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import pandas as pd
!pip install rasterio
import rasterio
!pip install geopandas
import geopandas as gpd
import numpy as np
from google.colab import drive
drive.mount('/content/gdrive')
import numpy as np
from sklearn.model_selection import train_test_split
!pip install xgboost
from xgboost import XGBClassifier
from sklearn.metrics import classification_report, confusion_matrix
lokus=['jabodetabek','mebidangro','maminasata','kedungsepur']
confussion_matrix_list=[]
classification_report_list=[]
# Create model: train an XGBoost built-up classifier for each metro area on the 2014 data and write the predicted class raster to GeoTIFF
for lokasi in lokus:
print(lokasi)
dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2014_'+lokasi+'.tif')
dataset_bands=pd.DataFrame()
for i in dataset.indexes:
temp=dataset.read(i)
temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
dataset_bands=temp.join(dataset_bands)
dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
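  # LSE_der_landsat7: land-surface emissivity proxy derived from NDVI via a piecewise relation
  # (fixed values at low and high NDVI, logarithmic form for intermediate NDVI)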
dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
dataset=dataset_bands.query('U_class==0').sample(10000).append(dataset_bands.query('U_class==1').sample(10000))
dataset=dataset.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7','U_class']]
X_train, X_test, y_train, y_test = train_test_split(dataset[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDVI_landsat7','NDBI_landsat7','LSE_der_landsat7']],
dataset[['U_class']], test_size = 0.20)
xgclassifier = XGBClassifier(random_state=123,colsample_bytree= 0.7,
learning_rate=0.05, max_depth= 4,
min_child_weight=11, n_estimators= 500, nthread= 4, objective= 'binary:logistic',
seed= 123, silent=1, subsample= 0.8)
xgclassifier.fit(X_train.values,y_train.values)
y_pred=xgclassifier.predict(X_test.values)
#confussion_matrix_list.append(confusion_matrix(y_test, y_pred))
classification_report_list.append(classification_report(y_test, y_pred))
dataset=dataset_bands.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7']]
class_2014_=xgclassifier.predict(dataset.values)
confussion_matrix_list.append(confusion_matrix(dataset_bands[['U_class']],class_2014_))
temp_ = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2014_'+lokasi+'.tif')
temp = temp_.read(1)
array_class_2014=class_2014_.reshape(temp.shape[0],temp.shape[1]).astype(np.uint8)
profile = temp_.profile
profile.update(
dtype=array_class_2014.dtype,
count=1,
compress='lzw')
with rasterio.open(
'/content/gdrive/My Drive/Urban_monitoring/Urban_/GHSL_/GHSL_rev/rev_class_2014_'+lokasi+'.tif','w',**profile) as dst2:
dst2.write(array_class_2014,1)
dst2.close()
confussion_matrix_list[3]
#Create model
for lokasi in lokus:
print(lokasi)
dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2014_'+lokasi+'.tif')
dataset_bands=pd.DataFrame()
for i in dataset.indexes:
temp=dataset.read(i)
temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
dataset_bands=temp.join(dataset_bands)
dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
dataset=dataset_bands.query('U_class==0').sample(10000).append(dataset_bands.query('U_class==1').sample(10000))
dataset=dataset.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7','U_class']]
X_train, X_test, y_train, y_test = train_test_split(dataset[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDVI_landsat7','NDBI_landsat7','LSE_der_landsat7']],
dataset[['U_class']], test_size = 0.20)
xgclassifier = XGBClassifier(random_state=123,colsample_bytree= 0.7,
learning_rate=0.05, max_depth= 4,
min_child_weight=11, n_estimators= 500, nthread= 4, objective= 'binary:logistic',
seed= 123, silent=1, subsample= 0.8)
xgclassifier.fit(X_train.values,y_train.values)
y_pred=xgclassifier.predict(X_test.values)
confussion_matrix_list.append(confusion_matrix(y_test, y_pred))
classification_report_list.append(classification_report(y_test, y_pred))
#classification_2019:
#dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2019_'+lokasi+'.tif')
#dataset_bands=pd.DataFrame()
#for i in dataset.indexes:
# temp=dataset.read(i)
# temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
# dataset_bands=temp.join(dataset_bands)
#dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
# 5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
#dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
#dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
# 0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
#dataset=dataset_bands.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7']]
#class_2019_=xgclassifier.predict(dataset.values)
#temp_ = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2019_'+lokasi+'.tif')
#temp = temp_.read(1)
#array_class_2019=class_2019_.reshape(temp.shape[0],temp.shape[1]).astype(np.uint8)
#profile = temp_.profile
#profile.update(
# dtype=array_class_2019.dtype,
# count=1,
# compress='lzw')
#with rasterio.open(
# '/content/gdrive/My Drive/Urban_monitoring/Urban_/GHSL_/GHSL_rev/rev_class_2019_'+lokasi+'.tif','w',**profile) as dst2:
# dst2.write(array_class_2019,1)
# dst2.close()
#Create model
for lokasi in lokus:
#print(lokasi)
#dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2014_'+lokasi+'.tif')
#dataset_bands=pd.DataFrame()
#for i in dataset.indexes:
# temp=dataset.read(i)
# temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
# dataset_bands=temp.join(dataset_bands)
#dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
# 5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
#dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
#dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
# 0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
#dataset=dataset_bands.query('U_class==0').sample(10000).append(dataset_bands.query('U_class==1').sample(10000))
#dataset=dataset.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7','U_class']]
#X_train, X_test, y_train, y_test = train_test_split(dataset[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDVI_landsat7','NDBI_landsat7','LSE_der_landsat7']],
# dataset[['U_class']], test_size = 0.20)
#xgclassifier = XGBClassifier(random_state=123,colsample_bytree= 0.7,
# learning_rate=0.05, max_depth= 4,
# min_child_weight=11, n_estimators= 500, nthread= 4, objective= 'binary:logistic',
# seed= 123, silent=1, subsample= 0.8)
#xgclassifier.fit(X_train.values,y_train.values)
#y_pred=xgclassifier.predict(X_test.values)
#classification_2015:
#dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2015_'+lokasi+'.tif')
#dataset_bands=pd.DataFrame()
#for i in dataset.indexes:
# temp=dataset.read(i)
# temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
# dataset_bands=temp.join(dataset_bands)
#dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
# 5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
#dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
#dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
# 0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
#dataset=dataset_bands.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7']]
#class_2015_=xgclassifier.predict(dataset.values)
#temp_ = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2015_'+lokasi+'.tif')
#temp = temp_.read(1)
#array_class_2015=class_2015_.reshape(temp.shape[0],temp.shape[1]).astype(np.uint8)
#profile = temp_.profile
#profile.update(
# dtype=array_class_2015.dtype,
# count=1,
# compress='lzw')
#with rasterio.open(
# '/content/gdrive/My Drive/Urban_monitoring/Urban_/GHSL_/GHSL_rev/rev_class_2015_'+lokasi+'.tif','w',**profile) as dst2:
# dst2.write(array_class_2015,1)
# dst2.close()
lokus=['gerbangkertasusila']
#Create model
for lokasi in lokus:
print(lokasi)
dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2014_'+lokasi+'.tif')
dataset_bands=pd.DataFrame()
for i in dataset.indexes:
temp=dataset.read(i)
temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
dataset_bands=temp.join(dataset_bands)
dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
dataset=dataset_bands.query('U_class==0').sample(10000).append(dataset_bands.query('U_class==1').sample(10000))
dataset=dataset.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7','U_class']]
X_train, X_test, y_train, y_test = train_test_split(dataset[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDVI_landsat7','NDBI_landsat7','LSE_der_landsat7']],
dataset[['U_class']], test_size = 0.20)
xgclassifier = XGBClassifier(random_state=123,colsample_bytree= 0.7,
learning_rate=0.05, max_depth= 5,
min_child_weight=11, n_estimators= 500, nthread= 4, objective= 'binary:logistic',
seed= 123, silent=1, subsample= 0.8)
xgclassifier.fit(X_train.values,y_train.values)
#y_pred=xgclassifier.predict(X_test.values)
y_pred=xgclassifier.predict(dataset_bands[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDVI_landsat7','NDBI_landsat7','LSE_der_landsat7']].values)
confussion_matrix_list.append(confusion_matrix(dataset_bands[['U_class']],y_pred))
#confussion_matrix_list.append(confusion_matrix(y_test, y_pred))
#classification_report_list.append(classification_report(y_test, y_pred))
#classification_2015:
#dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2015_'+lokasi+'.tif')
#dataset_bands=pd.DataFrame()
#for i in dataset.indexes:
# temp=dataset.read(i)
# temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
# dataset_bands=temp.join(dataset_bands)
#dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
# 5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
#dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
#dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
# 0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
#dataset=dataset_bands.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7']]
#class_2015_=xgclassifier.predict(dataset.values)
#temp_ = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2015_'+lokasi+'.tif')
#temp = temp_.read(1)
#array_class_2015=class_2015_.reshape(temp.shape[0],temp.shape[1]).astype(np.uint8)
#profile = temp_.profile
#profile.update(
# dtype=array_class_2015.dtype,
# count=1,
# compress='lzw')
#with rasterio.open(
# '/content/gdrive/My Drive/Urban_monitoring/Urban_/GHSL_/GHSL_rev/rev_class_2015_'+lokasi+'.tif','w',**profile) as dst2:
# dst2.write(array_class_2015,1)
# dst2.close()
#Create model
#for lokasi in lokus:
# print(lokasi)
#dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2014_'+lokasi+'.tif')
#dataset_bands=pd.DataFrame()
#for i in dataset.indexes:
# temp=dataset.read(i)
# temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
# dataset_bands=temp.join(dataset_bands)
#dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
# 5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
#dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
#dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
# 0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
#dataset=dataset_bands.query('U_class==0').sample(10000).append(dataset_bands.query('U_class==1').sample(10000))
#dataset=dataset.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7','U_class']]
#X_train, X_test, y_train, y_test = train_test_split(dataset[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDVI_landsat7','NDBI_landsat7','LSE_der_landsat7']],
# dataset[['U_class']], test_size = 0.20)
#xgclassifier = XGBClassifier(random_state=123,colsample_bytree= 0.7,
# learning_rate=0.05, max_depth= 5,
# min_child_weight=11, n_estimators= 500, nthread= 4, objective= 'binary:logistic',
# seed= 123, silent=1, subsample= 0.8)
#xgclassifier.fit(X_train.values,y_train.values)
#y_pred=xgclassifier.predict(X_test.values)
#confussion_matrix_list.append(confusion_matrix(y_test, y_pred))
#classification_report_list.append(classification_report(y_test, y_pred))
#classification_2019:
#dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2019_'+lokasi+'.tif')
#dataset_bands=pd.DataFrame()
#for i in dataset.indexes:
# temp=dataset.read(i)
# temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
# dataset_bands=temp.join(dataset_bands)
#dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
# 5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
#dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
#dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
# 0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
#dataset=dataset_bands.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7']]
#class_2019_=xgclassifier.predict(dataset.values)
#temp_ = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2019_'+lokasi+'.tif')
#temp = temp_.read(1)
#array_class_2019=class_2019_.reshape(temp.shape[0],temp.shape[1]).astype(np.uint8)
#profile = temp_.profile
#profile.update(
# dtype=array_class_2019.dtype,
# count=1,
# compress='lzw')
#with rasterio.open(
# '/content/gdrive/My Drive/Urban_monitoring/Urban_/GHSL_/GHSL_rev/rev_class_2019_'+lokasi+'.tif','w',**profile) as dst2:
# dst2.write(array_class_2019,1)
# dst2.close()
#Create model
#for lokasi in lokus:
#print(lokasi)
#dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2014_'+lokasi+'.tif')
#dataset_bands=pd.DataFrame()
#for i in dataset.indexes:
# temp=dataset.read(i)
# temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
# dataset_bands=temp.join(dataset_bands)
#dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
# 5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
#dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
#dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
# 0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
#dataset=dataset_bands.query('U_class==0').sample(10000).append(dataset_bands.query('U_class==1').sample(10000))
#dataset=dataset.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7','U_class']]
#X_train, X_test, y_train, y_test = train_test_split(dataset[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDVI_landsat7','NDBI_landsat7','LSE_der_landsat7']],
# dataset[['U_class']], test_size = 0.20)
#xgclassifier = XGBClassifier(random_state=123,colsample_bytree= 0.7,
# learning_rate=0.05, max_depth= 5,
# min_child_weight=11, n_estimators= 500, nthread= 4, objective= 'binary:logistic',
# seed= 123, silent=1, subsample= 0.8)
#xgclassifier.fit(X_train.values,y_train.values)
#y_pred=xgclassifier.predict(X_test.values)
#confussion_matrix_list.append(confusion_matrix(y_test, y_pred))
#classification_report_list.append(classification_report(y_test, y_pred))
#classification_2019:
#dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2014_'+lokasi+'.tif')
#dataset_bands=pd.DataFrame()
#for i in dataset.indexes:
# temp=dataset.read(i)
# temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
# dataset_bands=temp.join(dataset_bands)
#dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
# 5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
#dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
#dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
# 0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
#dataset=dataset_bands.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7']]
#class_2014_=xgclassifier.predict(dataset.values)
#temp_ = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2014_'+lokasi+'.tif')
#temp = temp_.read(1)
#array_class_2014=class_2014_.reshape(temp.shape[0],temp.shape[1]).astype(np.uint8)
#profile = temp_.profile
#profile.update(
# dtype=array_class_2014.dtype,
# count=1,
# compress='lzw')
#with rasterio.open(
# '/content/gdrive/My Drive/Urban_monitoring/Urban_/GHSL_/GHSL_rev/rev_class_2014_'+lokasi+'.tif','w',**profile) as dst2:
# dst2.write(array_class_2014,1)
# dst2.close()
confussion_matrix_list[4]
lokus=['bandungraya']
from sklearn.ensemble import RandomForestClassifier
#Create model
for lokasi in lokus:
print(lokasi)
dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2014_'+lokasi+'.tif')
dataset_bands=pd.DataFrame()
for i in dataset.indexes:
temp=dataset.read(i)
temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
dataset_bands=temp.join(dataset_bands)
dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
dataset=dataset_bands.query('U_class==0').sample(10000).append(dataset_bands.query('U_class==1').sample(10000))
dataset=dataset.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7','U_class']]
X_train, X_test, y_train, y_test = train_test_split(dataset[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDVI_landsat7','NDBI_landsat7','LSE_der_landsat7']],
dataset[['U_class']], test_size = 0.20)
rfclassifier = RandomForestClassifier(random_state=123,max_depth=6,n_estimators=500,criterion='entropy')
rfclassifier.fit(X_train.values,y_train.values)
y_pred=rfclassifier.predict(dataset_bands[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7']].values)
confussion_matrix_list.append(confusion_matrix(dataset_bands['U_class'],y_pred))
#y_pred=rfclassifier.predict(X_test.values)
#classification_2014:
#dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2015_'+lokasi+'.tif')
#dataset_bands=pd.DataFrame()
#for i in dataset.indexes:
# temp=dataset.read(i)
# temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
# dataset_bands=temp.join(dataset_bands)
#dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
# 5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
#dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
##dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
# 0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
#dataset=dataset_bands.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7']]
#class_2015_=rfclassifier.predict(dataset.values)
#dataset_bands['U_class']
#temp_ = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2015_'+lokasi+'.tif')
#temp = temp_.read(1)
#array_class_2015=class_2015_.reshape(temp.shape[0],temp.shape[1]).astype(np.uint8)
#profile = temp_.profile
#profile.update(
# dtype=array_class_2015.dtype,
# count=1,
# compress='lzw')
#with rasterio.open(
# '/content/gdrive/My Drive/Urban_monitoring/Urban_/GHSL_/GHSL_rev/rev_class_2015_'+lokasi+'_2.tif','w',**profile) as dst2:
# dst2.write(array_class_2015,1)
# dst2.close()
#Create model
#for lokasi in lokus:
# print(lokasi)
# dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2014_'+lokasi+'.tif')
# dataset_bands=pd.DataFrame()
# for i in dataset.indexes:
# temp=dataset.read(i)
# temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
# dataset_bands=temp.join(dataset_bands)
# dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
# 5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
#dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
#dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
# 0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
#dataset=dataset_bands.query('U_class==0').sample(10000).append(dataset_bands.query('U_class==1').sample(10000))
#dataset=dataset.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7','U_class']]
#X_train, X_test, y_train, y_test = train_test_split(dataset[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDVI_landsat7','NDBI_landsat7','LSE_der_landsat7']],
# dataset[['U_class']], test_size = 0.20)
#rfclassifier = RandomForestClassifier(random_state=123,max_depth=6,n_estimators=500,criterion='entropy')
#rfclassifier.fit(X_train.values,y_train.values)
#y_pred=rfclassifier.predict(X_test.values)
#confussion_matrix_list.append(confusion_matrix(y_test, y_pred))
#classification_report_list.append(classification_report(y_test, y_pred))
#classification_2019:
#dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2019_'+lokasi+'.tif')
#dataset_bands=pd.DataFrame()
#for i in dataset.indexes:
# temp=dataset.read(i)
# temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
# dataset_bands=temp.join(dataset_bands)
#dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
# 5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
#dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
#dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
# 0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
#dataset=dataset_bands.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7']]
#class_2019_=rfclassifier.predict(dataset.values)
#temp_ = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2019_'+lokasi+'.tif')
#temp = temp_.read(1)
#array_class_2019=class_2019_.reshape(temp.shape[0],temp.shape[1]).astype(np.uint8)
#profile = temp_.profile
#profile.update(
# dtype=array_class_2019.dtype,
# count=1,
# compress='lzw')
#with rasterio.open(
# '/content/gdrive/My Drive/Urban_monitoring/Urban_/GHSL_/GHSL_rev/rev_class_2019_'+lokasi+'.tif','w',**profile) as dst2:
# dst2.write(array_class_2019,1)
# dst2.close()
#Create model
#for lokasi in lokus:
# print(lokasi)
# dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2014_'+lokasi+'.tif')
# dataset_bands=pd.DataFrame()
# for i in dataset.indexes:
# temp=dataset.read(i)
# temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
# dataset_bands=temp.join(dataset_bands)
# dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
# 5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
# dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
# dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
# 0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
# dataset=dataset_bands.query('U_class==0').sample(10000).append(dataset_bands.query('U_class==1').sample(10000))
# dataset=dataset.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7','U_class']]
# X_train, X_test, y_train, y_test = train_test_split(dataset[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDVI_landsat7','NDBI_landsat7','LSE_der_landsat7']],
# dataset[['U_class']], test_size = 0.20)
# rfclassifier = RandomForestClassifier(random_state=123,max_depth=6,n_estimators=500,criterion='entropy')
# rfclassifier.fit(X_train.values,y_train.values)
# y_pred=rfclassifier.predict(X_test.values)
# confussion_matrix_list.append(confusion_matrix(y_test, y_pred))
# classification_report_list.append(classification_report(y_test, y_pred))
#classification_2019:
# dataset = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2014_'+lokasi+'.tif')
# dataset_bands=pd.DataFrame()
# for i in dataset.indexes:
# temp=dataset.read(i)
# temp=pd.DataFrame(data=np.array(temp).flatten()).rename(columns={0:i})
# dataset_bands=temp.join(dataset_bands)
# dataset_bands.rename(columns={1:'BU_class',2:'NDVI_landsat7',3:'NDBI_landsat7',4:'MNDWI_landsat7',
# 5:'SAVI_landsat7',6:'LSE_landsat7',7:'rad_VIIRS'},inplace=True)
# dataset_bands['U_class']=dataset_bands.BU_class.apply(lambda y: 1 if y>=3 else 0)
# dataset_bands['LSE_der_landsat7']=dataset_bands.NDVI_landsat7.apply(lambda y: 0.995 if y < -0.185 else (
# 0.970 if y< 0.157 else (1.0098+0.047*np.log(y) if y<0.727 else 0.990)))
# dataset=dataset_bands.reset_index()[['rad_VIIRS','SAVI_landsat7','MNDWI_landsat7','NDBI_landsat7','NDVI_landsat7','LSE_der_landsat7']]
# class_2014_=rfclassifier.predict(dataset.values)
# temp_ = rasterio.open('/content/gdrive/My Drive/Urban_monitoring/Urban_/compiled_GHSL_30_train_2014_'+lokasi+'.tif')
# temp = temp_.read(1)
# array_class_2014=class_2014_.reshape(temp.shape[0],temp.shape[1]).astype(np.uint8)
# profile = temp_.profile
# profile.update(
# dtype=array_class_2014.dtype,
# count=1,
# compress='lzw')
# with rasterio.open(
# '/content/gdrive/My Drive/Urban_monitoring/Urban_/GHSL_/GHSL_rev/rev_class_2014_'+lokasi+'.tif','w',**profile) as dst2:
# dst2.write(array_class_2014,1)
# dst2.close()
confussion_matrix_list[5]
```
| github_jupyter |
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Quantum Neural Networks
This notebook provides an introduction to Quantum Neural Networks (QNNs) using Cirq. The presentation mostly follows [Farhi and Neven](https://arxiv.org/abs/1802.06002). We will construct a simple network for classification to demonstrate its utility on some randomly generated toy data.
First we need to install cirq, which has to be done each time this notebook is run. Executing the following cell will do that.
```
# install published dev version
# !pip install cirq~=0.4.0.dev
# install directly from HEAD:
!pip install git+https://github.com/quantumlib/Cirq.git@8c59dd97f8880ac5a70c39affa64d5024a2364d0
```
To verify that Cirq is installed in your environment, try to `import cirq` and print out a diagram of the Foxtail device. It should produce a 2x11 grid of qubits.
```
import cirq
import numpy as np
import matplotlib.pyplot as plt
print(cirq.google.Foxtail)
```
### The QNN Idea
We'll begin by describing here the QNN model we are pursuing. We'll discuss the quantum circuit describing a very simple neuron, and how it can be trained.
As in an ordinary neural network, a QNN takes in data, processes that data, and then returns an answer. In the quantum case, the data will be encoded into the initial quantum state, and the processing step is the action of a quantum circuit on that quantum state. At the end we will measure one or more of the qubits, and the statistics of those measurements are the output of the net.
#### Classical vs Quantum
An ordinary neural network can only handle classical input. The input to a QNN, though, is a quantum state, which consists of $2^n$ complex amplitudes for $n$ qubits. If you attached your quantum computer directly to some physics experiment, for example, then you could have a QNN do some post-processing on the experimental wavefunction in lieu of a more traditional measurement. There are some very exciting possibilities there, but unfortunately we will not be considering them in this Colab. It requires significantly more quantum background to understand what's going on, and it's harder to give examples because the input states themselves can be quite complicated. For recent examples of that kind of network, though, check out [this](https://arxiv.org/abs/1805.08654) paper and [this](https://arxiv.org/abs/1810.03787) paper. The basic ingredients are similar to what we'll cover here.
In this Colab we'll focus on classical inputs, by which I mean the specification of one of the computational basis states as the initial state. There are a total of $2^n$ of these states for $n$ qubits. Note the crucial difference between this case and the quantum case: in the quantum case the input is $2^n$-*dimensional*, while in the classical case there are $2^n$ *possible* inputs. The quantum neural network can process these inputs in a "quantum" way, meaning that it may be able to evaluate certain functions on these inputs more efficiently than a classical network. Whether the "quantum" processing is actually useful in practice remains to be seen, and in this Colab we will not have time to really get into that aspect of a QNN.
#### Data Processing
Given the classical input state, what will we do with it? At this stage it's helpful to be more specific and definite about the problem we are trying to solve. The problem we're going to focus on in this Colab is __two-category classification__. That means that after the quantum circuit has finished running, we measure one of the qubits, the *readout* qubit, and the value of that qubit will tell us which of the two categories our classical input state belonged to. Since this is quantum, the output of that qubit is going to be random according to some probability distribution. So really we're going to repeat the computation many times and take a majority vote.
Our classical input data is a bitstring that is converted into a computational basis state. We want to influence the readout qubit in a way that depends on this state. Our main tool for this is a gate we call the $ZX$-gate, which acts on two qubits as
$$
\exp(i \pi w Z \otimes X) = \begin{bmatrix}
\cos \pi w & i\sin\pi w &0&0\\
i\sin\pi w & \cos \pi w &0&0\\
0&0& \cos \pi w & -i\sin\pi w \\
0&0 & -i\sin\pi w & \cos\pi w
\end{bmatrix},
$$
where $w$ is a free parameter ($w$ stands for weight). This gate rotates the second qubit around the $X$-axis (on the Bloch sphere) either clockwise or counterclockwise depending on the state of the first qubit as seen in the computational basis. The amount of the rotation is determined by $w$.
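As a quick numerical sanity check of this matrix (a sketch that is not part of the original notebook; it assumes `numpy` and `scipy` are available, which is the case on Colab), we can exponentiate $i\pi w\, Z\otimes X$ directly and compare it with the closed form above:
```
# Sketch: verify the closed-form ZX matrix against a direct matrix exponential.
import numpy as np
from scipy.linalg import expm

w = 0.3  # any weight works here
Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])

exact = expm(1j * np.pi * w * np.kron(Z, X))
closed_form = np.array(
    [[np.cos(np.pi*w), 1j*np.sin(np.pi*w), 0, 0],
     [1j*np.sin(np.pi*w), np.cos(np.pi*w), 0, 0],
     [0, 0, np.cos(np.pi*w), -1j*np.sin(np.pi*w)],
     [0, 0, -1j*np.sin(np.pi*w), np.cos(np.pi*w)]])

assert np.allclose(exact, closed_form)
```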
If we connect each of our input qubits to the readout qubit using one of these gates, then the result is that the readout qubit will be rotated in a way that depends on the initial state in a straightforward way. This rotation is in the $YZ$ plane, so it will change the statistics of measurements in either the $Z$ basis or the $Y$ basis for the readout qubit. We're going to choose the initial state of the readout qubit to be a standard computational basis state as usual, which is a $Z$ eigenstate but "neutral" with respect to $Y$ (i.e., 50/50 probability of $Y=+1$ or $Y=-1$). Then after all of the rotations are complete we'll measure the readout qubit in the $Y$ basis. If all goes well, then the net rotation induced by the $ZX$ gates will place the readout qubit near one of the two $Y$ eigenstates in a way that depends on the initial data.
To summarize, here is our strategy for processing the two-category classification problem:
1) Prepare a computational basis state corresponding to the input that should be categorized.
2) Use $ZX$ gates to rotate the state of the readout qubit in a way that depends on the input.
3) Measure the readout qubit in the $Y$ basis to get the predicted label. Take a majority vote after many repetitions.
This is the simplest possible kind of network, and really only corresponds to a single neuron. We'll talk about more complicated possibilities after understanding how to implement this one.
### Custom Two-Qubit Gate
Our first task is to code up the $ZX$ gate described above, which is given by the matrix
$$
\exp(i \pi w Z \otimes X) = \begin{bmatrix}
\cos \pi w & i\sin\pi w &0&0\\
i\sin\pi w & \cos \pi w &0&0\\
0&0& \cos \pi w & -i\sin\pi w \\
0&0 & -i\sin\pi w & \cos\pi w
\end{bmatrix},
$$
Just from the form of the gate we can see that it performs a rotation by angle $\pm \pi w$ on the second qubit depending on the value of the first qubit. If we only had one or the other of these two blocks, then this gate would literally be a controlled rotation. For example, using the Cirq conventions,
$$
CR_X(\theta) = \begin{bmatrix}
1 & 0 &0&0\\
0 & 1 &0&0\\
0&0& \cos \theta/2 & -i\sin \theta/2 \\
0&0 & -i\sin\theta/2 & \cos\theta/2
\end{bmatrix},
$$
which means that setting $\theta = 2\pi w$ should give us (part of) our desired transformation.
```
a = cirq.NamedQubit("a")
b = cirq.NamedQubit("b")
w = .25 # Put your own weight here.
angle = 2*np.pi*w
circuit = cirq.Circuit.from_ops(cirq.ControlledGate(cirq.Rx(angle)).on(a,b))
print(circuit)
circuit.to_unitary_matrix().round(2)
```
__Question__: The rotation in the upper-left block is by the opposite angle. But how do we get the rotation to happen in the upper-left block of the $4\times 4$ matrix in the first place? What is the circuit?
#### Solution
Switching the upper-left and lower-right blocks of a controlled gate corresponds to activating when the control qubit is in the $|0\rangle$ state instead of the $|1\rangle$ state. We can arrange this to happen by taking the control gate we already have and conjugating the control qubit by $X$ gates (which implement the NOT operation). Don't forget to also rotate by the opposite angle.
```
a = cirq.NamedQubit("a")
b = cirq.NamedQubit("b")
w = 0.25 # Put your own weight here.
angle = 2*np.pi*w
circuit = cirq.Circuit.from_ops([cirq.X(a),
cirq.ControlledGate(cirq.Rx(-angle)).on(a,b),
cirq.X(a)])
print(circuit)
circuit.to_unitary_matrix().round(2)
```
#### The Full $ZX$ Gate
We can put together the two controlled rotations to make the full $ZX$ gate. Having discussed the decomposition already, we can make our own class and specify its action using the `_decompose_` method. Fill in the following code block to implement this gate.
```
class ZXGate(cirq.ops.gate_features.TwoQubitGate):
"""ZXGate with variable weight."""
def __init__(self, weight=1):
"""Initializes the ZX Gate up to phase.
Args:
weight: rotation angle, period 2
"""
self.weight = weight
def _decompose_(self, qubits):
a, b = qubits
## YOUR CODE HERE
# This lets the weight be a Symbol. Useful for parameterization.
def _resolve_parameters_(self, param_resolver):
return ZXGate(weight=param_resolver.value_of(self.weight))
# How should the gate look in ASCII diagrams?
def _circuit_diagram_info_(self, args):
return cirq.protocols.CircuitDiagramInfo(
wire_symbols=('Z', 'X'),
exponent=self.weight)
```
#### Solution
```
class ZXGate(cirq.ops.gate_features.TwoQubitGate):
"""ZXGate with variable weight."""
def __init__(self, weight=1):
"""Initializes the ZX Gate up to phase.
Args:
weight: rotation angle, period 2
"""
self.weight = weight
def _decompose_(self, qubits):
a, b = qubits
yield cirq.ControlledGate(cirq.Rx(2*np.pi*self.weight)).on(a,b)
yield cirq.X(a)
yield cirq.ControlledGate(cirq.Rx(-2*np.pi*self.weight)).on(a,b)
yield cirq.X(a)
# This lets the weight be a Symbol. Useful for parameterization.
def _resolve_parameters_(self, param_resolver):
return ZXGate(weight=param_resolver.value_of(self.weight))
# How should the gate look in ASCII diagrams?
def _circuit_diagram_info_(self, args):
return cirq.protocols.CircuitDiagramInfo(
wire_symbols=('Z', 'X'),
exponent=self.weight)
```
#### EigenGate Implementation
Another way to specify how a gate works is by an explicit eigen-action. In our case that is also easy, since we know that the gate acts as a phase (the eigenvalue) when the first qubit is in a $Z$ eigenstate (i.e., a computational basis state) and the second qubit is in an $X$ eigenstate.
The way we specify eigen-actions in Cirq is through the `_eigen_components` method, where we need to specify the eigenvalue as a phase together with a projector onto the eigenspace of that phase. We also conventionally specify the gate at $w=1$ and set $w$ internally to be the `exponent` of the gate, which automatically implements other values of $w$ for us.
In the case of the $ZX$ gate with $w=1$, one of our eigenvalues is $\exp(+i\pi)$, which is specified as $1$ in Cirq. (Because $1$ is the coefficient of $i\pi$ in the exponential.) This is the phase when the first qubit is in the $Z=+1$ state and the second qubit is in the $X=+1$ state, or when the first qubit is in the $Z=-1$ state and the second qubit is in the $X=-1$ state. The projector onto these states is
$$
\begin{align}
P &= |0+\rangle \langle 0{+}| + |1-\rangle \langle 1{-}|\\
&= \frac{1}{2}\big(|00\rangle \langle 00| +|00\rangle \langle 01|+|01\rangle \langle 00|+|01\rangle \langle 01|+ |10\rangle \langle 10|-|10\rangle \langle 11|-|11\rangle \langle 10|+|11\rangle \langle 11|\big)\\
&=\frac{1}{2}\begin{bmatrix}
1 & 1 &0&0\\
1 & 1 &0&0\\
0&0& 1 & -1 \\
0&0 & -1 & 1
\end{bmatrix}
\end{align}
$$
A similar formula holds for the eigenvalue $\exp(-i\pi)$ with the two blocks in the projector flipped.
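Before coding the gate, here is a small numerical check (a sketch, not part of the original notebook) that these two matrices really are orthogonal projectors that sum to the identity, and that their difference reproduces $Z\otimes X$:
```
import numpy as np

P_plus = 0.5 * np.array([[1, 1, 0, 0],
                         [1, 1, 0, 0],
                         [0, 0, 1, -1],
                         [0, 0, -1, 1]])
P_minus = 0.5 * np.array([[1, -1, 0, 0],
                          [-1, 1, 0, 0],
                          [0, 0, 1, 1],
                          [0, 0, 1, 1]])
ZX_matrix = np.kron(np.diag([1, -1]), np.array([[0, 1], [1, 0]]))  # Z tensor X

assert np.allclose(P_plus @ P_plus, P_plus)       # projector
assert np.allclose(P_minus @ P_minus, P_minus)    # projector
assert np.allclose(P_plus @ P_minus, 0)           # orthogonal
assert np.allclose(P_plus + P_minus, np.eye(4))   # complete
assert np.allclose(P_plus - P_minus, ZX_matrix)   # eigendecomposition of Z (x) X
```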
__Exercise__: Implement the $ZX$ gate as an `EigenGate` using this decomposition. The following codeblock will get you started.
```
class ZXGate(cirq.ops.eigen_gate.EigenGate,
cirq.ops.gate_features.TwoQubitGate):
"""ZXGate with variable weight."""
def __init__(self, weight=1):
"""Initializes the ZX Gate up to phase.
Args:
weight: rotation angle, period 2
"""
self.weight = weight
super().__init__(exponent=weight) # Automatically handles weights other than 1
def _eigen_components(self):
return [
(1, np.array([[0.5, 0.5, 0, 0],
[ 0.5, 0.5, 0, 0],
[0, 0, 0.5, -0.5],
[0, 0, -0.5, 0.5]])),
(??, ??) # YOUR CODE HERE: phase and projector for the other eigenvalue
]
# This lets the weight be a Symbol. Useful for parameterization.
def _resolve_parameters_(self, param_resolver):
return ZXGate(weight=param_resolver.value_of(self.weight))
# How should the gate look in ASCII diagrams?
def _circuit_diagram_info_(self, args):
return cirq.protocols.CircuitDiagramInfo(
wire_symbols=('Z', 'X'),
exponent=self.weight)
```
#### Solution
```
class ZXGate(cirq.ops.eigen_gate.EigenGate,
cirq.ops.gate_features.TwoQubitGate):
"""ZXGate with variable weight."""
def __init__(self, weight=1):
"""Initializes the ZX Gate up to phase.
Args:
weight: rotation angle, period 2
"""
self.weight = weight
super().__init__(exponent=weight) # Automatically handles weights other than 1
def _eigen_components(self):
return [
(1, np.array([[0.5, 0.5, 0, 0],
[ 0.5, 0.5, 0, 0],
[0, 0, 0.5, -0.5],
[0, 0, -0.5, 0.5]])),
(-1, np.array([[0.5, -0.5, 0, 0],
[ -0.5, 0.5, 0, 0],
[0, 0, 0.5, 0.5],
[0, 0, 0.5, 0.5]]))
]
# This lets the weight be a Symbol. Useful for parameterization.
def _resolve_parameters_(self, param_resolver):
return ZXGate(weight=param_resolver.value_of(self.weight))
# How should the gate look in ASCII diagrams?
def _circuit_diagram_info_(self, args):
return cirq.protocols.CircuitDiagramInfo(
wire_symbols=('Z', 'X'),
exponent=self.weight)
```
#### Testing the Gate
__BEFORE MOVING ON__ make sure you've executed the `EigenGate` solution of the $ZX$ gate implementation. That's the one assumed for the code below, though other implementations may work just as well. In general, the cells in this Colab may depend on previous cells.
Let's test out our gate. First we'll make a simple test circuit to see that the ASCII diagrams are displaying properly:
```
a = cirq.NamedQubit("a")
b = cirq.NamedQubit("b")
w = .15 # Put your own weight here. Try using a cirq.Symbol.
circuit = cirq.Circuit.from_ops(ZXGate(w).on(a,b))
print(circuit)
```
We should also check that the matrix is what we expect:
```
test_matrix = np.array([[np.cos(np.pi*w), 1j*np.sin(np.pi*w), 0, 0],
[1j*np.sin(np.pi*w), np.cos(np.pi*w), 0, 0],
[0, 0, np.cos(np.pi*w), -1j*np.sin(np.pi*w)],
[0, 0, -1j*np.sin(np.pi*w),np.cos(np.pi*w)]])
# Test for five digits of accuracy. Won't work with cirq.Symbol
assert (circuit.to_unitary_matrix().round(5) == test_matrix.round(5)).all()
```
### Create Circuit
Now we have to create the QNN circuit. We are simply going to let a $ZX$ gate act between each data qubit and the readout qubit. For simplicity, let's share a single weight between all of the gates. You are invited to experiment with making the weights different, but in our example problem below we can set them all equal by symmetry.
__Question__: What about the order of these actions? Which data qubits should interact with the readout qubit first?
Remember that we also want to measure the readout qubit in the $Y$ basis. Fundamentally speaking, all measurements in Cirq are computational basis measurements, and so we have to implement the change of basis by hand.
__Question__: What is the circuit for a basis transformation from the $Y$ basis to the computational basis? We want to choose our transformation so that an eigenstate with $Y=+1$ becomes an eigenstate with $Z=+1$ prior to measurement.
#### Solutions
* The $ZX$ gates all commute with each other, so the order of implementation doesn't matter!
* We want a transformation that maps $\big(|0\rangle + i |1\rangle\big)/\sqrt{2}$ to $|0\rangle$ and $\big(|0\rangle - i |1\rangle\big)/\sqrt{2}$ to $|1\rangle$. Recall that the phase gate $S$ is given in matrix form by
$$
S = \begin{bmatrix}
1 & 0 \\
0 & i
\end{bmatrix},
$$
and the Hadamard transform is given by
$$
H = \frac{1}{\sqrt{2}}\begin{bmatrix}
1 & 1 \\
1 & -1
\end{bmatrix},
$$
So acting with $S^{-1}$ and then $H$ gives what we want. We'll add these two gates to the end of the circuit on the readout qubit so that the final measurement effectively occurs in the $Y$ basis.
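As a quick check (a sketch, not part of the original notebook) that this basis change does what we claim, we can apply $H S^{-1}$ to the two $Y$ eigenstates and confirm they land on $|0\rangle$ and $|1\rangle$:
```
import numpy as np

S_inv = np.diag([1, -1j])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

y_plus = np.array([1, 1j]) / np.sqrt(2)    # Y = +1 eigenstate
y_minus = np.array([1, -1j]) / np.sqrt(2)  # Y = -1 eigenstate

assert np.allclose(H @ S_inv @ y_plus, [1, 0])
assert np.allclose(H @ S_inv @ y_minus, [0, 1])
```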
#### Make Circuit
A clean way of making circuits is to define generators for logically-related circuit elements, and then `append` those to the circuit you want to make. Here is a code snippet that initializes our qubits and defines a generator for a single layer of $ZX$ gates:
```
# Total number of data qubits
INPUT_SIZE = 9
data_qubits = cirq.LineQubit.range(INPUT_SIZE)
readout = cirq.NamedQubit('r')
# Initialize parameters of the circuit
params = {'w': 0}
def ZX_layer():
"""Adds a ZX gate between each data qubit and the readout.
All gates are given the same cirq.Symbol for a weight."""
for qubit in data_qubits:
yield ZXGate(cirq.Symbol('w')).on(qubit, readout)
```
Use this generator to create the QNN circuit. Don't forget to add the basis change for the readout qubit at the end!
```
qnn = cirq.Circuit()
qnn.append(???) # YOUR CODE HERE
```
#### Solution
```
qnn = cirq.Circuit()
qnn.append(ZX_layer())
qnn.append([cirq.S(readout)**-1, cirq.H(readout)]) # Basis transformation
```
#### View the Circuit
It's usually a good idea to view the ASCII diagram of your circuit to make sure it's doing what you want. This can be displayed by printing the circuit.
```
print(qnn)
```
You can experiment with adding more layers of $ZX$ gates (or adding other kinds of transformations!) to your QNN, but we can use this simplest kind of circuit to analyze a simple toy problem, which is what we will do next.
### A Toy Problem: Biased Coin Flips
As a toy problem, let's get our quantum neuron to decide whether a coin is biased toward heads or toward tails based on a sequence of coin flips.
To be specific, let's try to train a QNN to distinguish between a coin that yields "heads" with probability $p$, and one that yields "heads" with probability $1-p$. Without loss of generality, let's say that $p\leq 0.5$. We don't need a fancy QNN to come up with a winning strategy: given a series of coin flips, you guess $p$ if the majority of flips are "tails" and $1-p$ if the majority are "heads." But for purposes of illustration, let's do it the fancy way.
To translate this problem into our QNN language, we need to encode the sequence of coin flips into a computational basis state. Let's associate $0$ with tails and $1$ with heads. So the sequence of coin flips becomes a sequence of $0$s and $1$s, and these define a computational basis state.
We also need to define a convention for our labeling of the two coins. We'll say that the $p$ coin (majority tails) gets the label $-1$ and the $1-p$ coin (majority heads) gets the label $+1$. So when we measure $Y$ at the end of the computation we can say that the majority-vote of the $Y$ outcome is our predicted label.
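As a tiny illustration of these conventions (a sketch, not part of the original notebook), here is how a single sequence of flips from the majority-tails coin would be encoded; the batch-generation code actually used for training appears later in the notebook:
```
import numpy as np

p = 0.3           # probability of heads for the majority-tails coin
true_label = -1   # convention: majority tails -> -1, majority heads -> +1
flips = np.random.choice(2, size=9, p=[1 - p, p])  # tails -> 0, heads -> 1
print('bitstring (computational basis state):', flips, ' true label:', true_label)
```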
To be a little more nuanced (and to aid the formulation of the problem), let's say that the expectation value $\langle Y \rangle$ for a given input state defines our estimator for the label of that state. We're going to use that to define a loss function for training next.
### Define Loss Function
Suppose we have a collection of $N$ (bitstring, label) pairs. A useful loss function to characterize the effectiveness of our QNN on this collection is
$$
\text{Loss} = \frac{1}{2N}\sum_{j=1}^N (1- \ell_j\langle Y \rangle_j),
$$
where $\ell_j$ is the label of the $j$th pair and $\langle Y \rangle_j$ is the expectation value of $Y$ on the readout qubit using the $j$th bitstring as input. If the network is perfect, the loss is equal to zero. If the network is maximally unsure about the labels (so that $\langle Y \rangle_j = 0$ for all $j$) then the loss is equal to $1/2$. And if the network gets everything wrong, then the loss is equal to $1$. We're going to train our network using this loss function, so next we'll write some functions to compute the loss.
Another useful function to have around is the average classification error. Recall that our prescription was to execute the quantum circuit many times and take a majority vote to compute the predicted label. The majority vote for the readout is the same as $\text{sign}(\langle Y \rangle)$, so we can write a formula for the error in this procedure as
$$
\text{Error} = \frac{1}{2N}\sum_{j=1}^N \big(1- \ell_j\text{sign}\big(\langle Y \rangle_j\big)\big).
$$
This is not so useful as a loss function because it is not smooth and does not provide an incentive to make $|\langle Y \rangle|$ large, but it can be an informative quantity to compute.
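To make the two formulas concrete, here is a plain-numpy illustration (a sketch, not part of the original notebook) using made-up labels and $\langle Y \rangle_j$ values; the circuit-based versions are implemented below:
```
import numpy as np

labels_demo = np.array([+1, -1, +1, -1])
y_expectations_demo = np.array([0.6, -0.4, -0.1, -0.8])  # hypothetical <Y>_j values

loss_demo = np.mean(1 - labels_demo * y_expectations_demo) / 2
error_demo = np.mean(1 - labels_demo * np.sign(y_expectations_demo)) / 2
print('loss = {:.3f}, error = {:.3f}'.format(loss_demo, error_demo))
```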
__Question__: Why would we want $|\langle Y \rangle|$ to be large?
#### Solution
When we implement this algorithm on the actual hardware, $\langle Y \rangle$ can only be estimated by repeatedly executing the circuit and measuring the result. The more measurements we make, the better our estimate of $\langle Y \rangle$ will be. Even if we are only interested in $\text{sign}\big(\langle Y \rangle\big)$, we will need to make enough measurements to be sure that our estimate has the correct sign, and if $|\langle Y \rangle|$ is large then fewer measurements will be required to have high confidence in the sign.
Furthermore, if the machine is noisy (which it will be), then the noise will induce some errors in our estimate of $\langle Y \rangle$. If $|\langle Y \rangle|$ is small then it's likely that the noise will lead to the wrong sign.
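Here is a rough back-of-the-envelope version of that argument (a sketch, not part of the original notebook): each $Y$ measurement returns $\pm 1$, so after $R$ repetitions the standard error of the sampled estimate is roughly $\sqrt{(1-\langle Y\rangle^2)/R}$, and the sign is only trustworthy once this is well below $|\langle Y\rangle|$.
```
import numpy as np

repetitions = 1000
for expectation_Y in [0.02, 0.2, 0.8]:
    std_err = np.sqrt((1 - expectation_Y**2) / repetitions)
    print('<Y> = {:.2f}: standard error of the estimate ~ {:.3f}'.format(expectation_Y, std_err))
```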
#### Expectation Value
Our first function computes the expectation value of the readout qubit for our circuit given a specification of the initial state. Rather than a bitstring, we'll specify the initial state as an array of $0$s and $1$s. These are the outputs of the coin flips in our toy problem. We'll compute the expectation value exactly using the wavefunction for now.
```
def readout_expectation(state):
"""Takes in a specification of a state as an array of 0s and 1s
and returns the expectation value of Z on the readout qubit.
Uses cirq.Simulator to calculate the wavefunction exactly."""
# A convenient representation of the state as an integer
state_num = int(np.sum(state*2**np.arange(len(state))))
resolver = cirq.ParamResolver(params)
simulator = cirq.Simulator()
# Specify an explicit qubit order so that we know which qubit is the readout
result = simulator.simulate(qnn, resolver, qubit_order=[readout]+data_qubits,
initial_state=state_num)
wf = result.final_state
# Because we specified qubit order, the Z value of the readout is the most
# significant bit.
Z_readout = np.append(np.ones(2**INPUT_SIZE), -np.ones(2**INPUT_SIZE))
return np.sum(np.abs(wf)**2 * Z_readout)
```
#### Loss and Error
The next functions take a list of states (each specified as an array of $0$s and $1$s as before) and a corresponding list of labels and computes the loss and error, respectively, of that list.
```
def loss(states, labels):
loss=0
for state, label in zip(states,labels):
loss += 1 - label*readout_expectation(state)
return loss/(2*len(states))
def classification_error(states, labels):
error=0
for state,label in zip(states,labels):
error += 1 - label*np.sign(readout_expectation(state))
return error/(2*len(states))
```
#### Generating Data
For our toy problem we'll want to be able to generate a batch of data. Here is a helper function for that task:
```
def make_batch():
"""Generates a set of labels, then uses those labels to generate inputs.
label = -1 corresponds to majority 0 in the state, label = +1 corresponds to
majority 1.
"""
np.random.seed(0) # For consistency in demo
labels = (-1)**np.random.choice(2, size=100) # Smaller batch sizes will speed up computation
states = []
for label in labels:
states.append(np.random.choice(2, size=INPUT_SIZE, p=[0.5-label*0.2,0.5+label*0.2]))
return states, labels
states, labels = make_batch()
```
### Training
Now we'll try to find the optimal weight to solve our toy problem. For illustration, we'll do both a brute force search of the parameter space as well as a stochastic gradient descent.
#### Brute Force Search
Let's compute both the loss and error rate on a batch of data as a function of the shared weight between all the gates.
```
%%time
# Using cirq.Simulator with the EigenGate implementation of ZX, this takes
# about 30s to run. Using the XmonSimulator took about 40 minutes the last
# time I tried it!
linspace = np.linspace(start=-1, stop=1, num=80)
train_losses = []
error_rates = []
for p in linspace:
params = {'w': p}
train_losses.append(loss(states, labels))
error_rates.append(classification_error(states, labels))
plt.plot(linspace, train_losses)
plt.xlabel('Weight')
plt.ylabel('Loss')
plt.title('Loss as a Function of Weight')
plt.show()
plt.plot(linspace, error_rates)
plt.xlabel('Weight')
plt.ylabel('Error Rate')
plt.title('Error Rate as a Function of Weight')
plt.show()
```
__Question__: Why are the loss and error functions periodic with period $1$ when the $ZX$ gate is periodic with period $2$?
#### Solution
This kind of "halving" of the periodicity of $\langle Y \rangle$ compared to the period of the gates itself is typical of qubit systems. We can analyze how it works mathematically in a simpler setting. Instead of the $ZX$ Gate, let's just imagine that we rotate the readout qubit around the $X$ axis by some fixed amout. This is the effective calculation for a single fixed data input.
$$
\begin{align}
\langle Y \rangle &= \langle 0 |\exp(-i \pi w X) Y \exp(i \pi w X) |0 \rangle\\
&= \langle 0 |\big(\cos \pi w - i X\sin \pi w \big) Y \big(\cos \pi w + i X \sin \pi w \big) |0 \rangle\\
&= \langle 0 |\big(Y\cos 2\pi w +Z \sin 2\pi w \big) |0 \rangle\\
&= \sin 2\pi w.
\end{align}
$$
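A quick numerical confirmation of this result (a sketch, not part of the original notebook), checking $\langle Y\rangle = \sin 2\pi w$ for a range of weights:
```
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

for w in np.linspace(-1, 1, 9):
    U = np.cos(np.pi*w) * np.eye(2) + 1j * np.sin(np.pi*w) * X  # exp(i*pi*w*X)
    psi = U @ np.array([1, 0], dtype=complex)
    assert np.isclose(np.real(np.conj(psi) @ Y @ psi), np.sin(2*np.pi*w))
```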
#### Stochastic Gradient Descent
To train the network we'll use stochastic gradient descent. Note that this isn't necessarily a good idea since the loss function is far from convex, and there's a good chance we'll get stuck in a very inefficient local minimum if we initialize the parameters randomly. But as an exercise we'll do it anyway. In the next section we'll discuss other ways to train these sorts of networks.
We'll compute the gradient of the loss function using a symmetric finite-difference approximation: $f'(x) \approx \big(f(x + \epsilon) - f(x-\epsilon)\big)/(2\epsilon)$. This is the most straightforward way to do it using the quantum computer. We'll also generate a new instance of the problem each time.
```
def stochastic_grad_loss():
"""Generates a new data point and computes the gradient of the loss
using that data point."""
# Randomly generate the data point.
label = (-1)**np.random.choice(2)
state = np.random.choice(2, size=INPUT_SIZE, p=[0.5-label*0.2,0.5+label*0.2])
# Compute the gradient using finite difference
eps = 10**-5 # Discretization of gradient. Try different values.
params['w'] -= eps
loss1 = loss([state],[label])
params['w'] += 2*eps
grad = (loss([state],[label])-loss1)/(2*eps)
params['w'] -= eps # Reset the parameter value
return grad
```
We can apply this function repeatedly to flow toward the minimum:
```
eta = 10**-4 # Learning rate. Try different values.
params = {'w': 0} # Initialize weight. Try different values.
for i in range(201):
if not i%25:
print('Step: {} Loss: {}'.format(i, loss(states, labels)))
grad = stochastic_grad_loss()
params['w'] += -eta*grad
print('Final Weight: {}'.format(params['w']))
```
### Use Sampling Instead of Calculating from the Wavefunction
On real hardware we will have to use sampling to find results instead of computing the exact wavefunction. Rewrite the `readout_expectation` function to compute the expectation value using sampling instead. Unlike with the wavefunction calculation, we also need to build our circuit in a way that accounts for the initial state (we are always assumed to start in the all-$|0\rangle$ state).
```
def readout_expectation_sample(state):
"""Takes in a specification of a state as an array of 0s and 1s
and returns the expectation value of Z on the readout qubit.
Uses the XmonSimulator to sample the final wavefunction."""
# We still need to resolve the parameters in the circuit.
resolver = cirq.ParamResolver(params)
# Make a copy of the QNN to avoid making changes to the global variable.
measurement_circuit = qnn.copy()
# Modify the measurement circuit to account for the desired input state.
# YOUR CODE HERE
# Add appropriate measurement gate(s) to the circuit.
# YOUR CODE HERE
simulator = cirq.google.XmonSimulator()
result = simulator.run(measurement_circuit, resolver, repetitions=10**6) # Try adjusting the repetitions
# Return the Z expectation value
return ((-1)**result.measurements['m']).mean()
```
#### Solution
```
def readout_expectation_sample(state):
"""Takes in a specification of a state as an array of 0s and 1s
and returns the expectation value of Z on the readout qubit.
Uses cirq.Simulator to sample the final measurement outcomes."""
# We still need to resolve the parameters in the circuit.
resolver = cirq.ParamResolver(params)
# Make a copy of the QNN to avoid making changes to the global variable.
measurement_circuit = qnn.copy()
# Modify the measurement circuit to account for the desired input state.
for i, qubit in enumerate(data_qubits):
if state[i]:
measurement_circuit.insert(0,cirq.X(qubit))
# Add appropriate measurement gate(s) to the circuit.
measurement_circuit.append(cirq.measure(readout, key='m'))
simulator = cirq.Simulator()
result = simulator.run(measurement_circuit, resolver, repetitions=10**6) # Try adjusting the repetitions
# Return the Z expectation value
return ((-1)**result.measurements['m']).mean()
```
#### Comparison of Sampling with the Exact Wavefunction
Just to illustrate the difference between sampling and using the wavefunction, try running the two methods several times on identical input:
```
state = [0,0,0,1,0,1,1,0,1] # Try different initial states.
params = {'w': 0.05} # Try different weights.
print("Exact expectation value: {}".format(readout_expectation(state)))
print("Estimates from sampling:")
for _ in range(5):
print(readout_expectation_sample(state))
```
As an exercise, try repeating some of the above calculations (e.g., the SGD optimization) using `readout_expectation_sample` in place of `readout_expectation`. How many repetitions should you use? How should the hyperparameters `eps` and `eta` be adjusted in response to the number of repetitions?
### Optimizing For Hardware
There are more issues to think about if you want to run your network on real hardware. First is the connectivity issue, and second is minimizing the number of two-qubit operations.
Consider the Foxtail device:
```
print(cirq.google.Foxtail)
```
The qubits are arranged in two rows of eleven qubits each, and qubits can only communicate with their nearest neighbors along the horizontal and vertical connections. That does not mesh well with the QNN we designed, where all of the data qubits need to interact with the readout qubit.
There is no *in-principle* restriction on the kinds of algorithms you are allowed to run. The solution to the connectivity problem is to make use of SWAP gates, which have the effect of exchanging the states of two (neighboring) qubits. It's equivalent to what you would get if you physically exchanged the positions of two of the qubits in the grid. The problem is that each SWAP operation is costly, so you want to avoid SWAPing as much as possible. We need to think carefully about our algorithm design to minimize the number of SWAPs performed as the circuit is executed.
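One way to see why SWAPs are costly (a sketch, not part of the original notebook, assuming the same Cirq version installed above): a single SWAP is equivalent to three CNOTs, so every SWAP adds several two-qubit operations to the circuit.
```
import cirq
import numpy as np

a, b = cirq.LineQubit.range(2)
three_cnots = cirq.Circuit.from_ops([cirq.CNOT(a, b), cirq.CNOT(b, a), cirq.CNOT(a, b)])
swap_circuit = cirq.Circuit.from_ops(cirq.SWAP(a, b))
assert np.allclose(three_cnots.to_unitary_matrix(), swap_circuit.to_unitary_matrix())
```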
__Question__: How should we modify our QNN circuit so that it runs efficiently on the Foxtail device?
#### Solution
One strategy is to move the readout qubit around as it talks to the other qubits. Suppose the readout qubit starts in the $(0,0)$ position. First it can interact with the qubits in the $(1,0)$ and $(0,1)$ positions like normal, then SWAP with the $(0,1)$ qubit. Now the readout qubit is in the $(0,1)$ position and can interact with the $(1,1)$ and $(0,2)$ qubits before SWAPping with the $(0,2)$ qubit. It continues down the line in this fashion.
Let's code up this circuit:
```
qnn_fox = cirq.Circuit()
w = 0.2 # Want an explicit numerical weight for later
for i in range(10):
qnn_fox.append([ZXGate(w).on(cirq.GridQubit(1,i), cirq.GridQubit(0,i)),
ZXGate(w).on(cirq.GridQubit(0,i+1), cirq.GridQubit(0,i)),
cirq.SWAP(cirq.GridQubit(0,i), cirq.GridQubit(0,i+1))])
qnn_fox.append(ZXGate(w).on(cirq.GridQubit(1,10), cirq.GridQubit(0,10)))
qnn_fox.append([(cirq.S**-1)(cirq.GridQubit(0,10)),cirq.H(cirq.GridQubit(0,10)),
cirq.measure(cirq.GridQubit(0,10))])
print(qnn_fox)
```
As coded, this circuit still won't run on the Foxtail device. That's because the gates we've defined are not native gates. Cirq has a built-in method that will convert our gates to Xmon gates (which are native for the Foxtail device) and attempt to optimize the circuit by reducing the total number of gates:
```
cirq.google.optimized_for_xmon(qnn_fox, new_device=cirq.google.Foxtail, allow_partial_czs=True)
```
Notice how we were able to pass in the `new_device` argument without getting an error message. That means the circuit will run properly on the Foxtail.
__Question__: We were smart to place the SWAP gates and $ZX$ gates next to each other where possible. Why?
__Question__: Can you see any ways to further optimize this circuit by hand? Hint: not all of the qubits are being measured.
#### Solutions
* Placing the SWAP and $ZX$ gates next to each other lets the optimizer treat the combination of them as a single gate, which leads to fewer total two-qubit gates.
* The state of any qubit which is not being measured does not matter. In particular, any single-qubit gate acting on a non-measured qubit after the last two-qubit gate acting on that qubit will not affect the state of the measured qubit and so can be dropped.
### Exercise: Multiple Weights
Instead of just a single weight, create a neuron with multiple weights. How will you optimize those weights?
### Exercise: Analytic Calculation
Because we stuck to such a simple example, essentially everything in this notebook can be calculated analytically. Do those calculations.
### Exercise: Add More "Quantum" Operations
The neuron we constructed essentially does a classical calculation. You can add more ingredients that make the data processing more "quantum." For example, you can add layers of Hadamard gates in between additional layers of $ZX$ gates. This sort of thing was explored in [Farhi and Neven](https://arxiv.org/abs/1802.06002). Try playing around with it.
| github_jupyter |
----
<img src="../../../files/refinitiv.png" width="20%" style="vertical-align: top;">
# Data Library for Python
----
## Content layer - News
This notebook demonstrates how to retrieve News.
#### Learn more
To learn more about the Refinitiv Data Library for Python please join the Refinitiv Developer Community. By [registering](https://developers.refinitiv.com/iam/register) and [logging](https://developers.refinitiv.com/content/devportal/en_us/initCookie.html) into the Refinitiv Developer Community portal you will have free access to a number of learning materials like
[Quick Start guides](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/quick-start),
[Tutorials](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/learning),
[Documentation](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/docs)
and much more.
#### Getting Help and Support
If you have any questions regarding using the API, please post them on
the [Refinitiv Data Q&A Forum](https://community.developers.refinitiv.com/spaces/321/index.html).
The Refinitiv Developer Community will be happy to help.
## Set the configuration file location
For ease of use, you can set initialization parameters of the Refinitiv Data Library in the _refinitiv-data.config.json_ configuration file. This file must be located beside your notebook, in your user folder, or in a folder defined by the _RD_LIB_CONFIG_PATH_ environment variable. The _RD_LIB_CONFIG_PATH_ environment variable is the option used by this series of examples. The following code sets this environment variable.
```
import os
os.environ["RD_LIB_CONFIG_PATH"] = "../../../Configuration"
```
## Some Imports to start with
```
import refinitiv.data as rd
from refinitiv.data.content import news
from datetime import timedelta
```
## Open the data session
The open_session() function creates and opens a session based on the information contained in the refinitiv-data.config.json configuration file. Please edit this file to set the session type and other parameters required for the session you want to open.
```
rd.open_session('platform.rdp')
```
## Retrieve data
### Headlines
#### Get headlines
```
response = news.headlines.Definition("Apple").get_data()
response.data.df
```
#### Get headlines within a range of dates
```
response = news.headlines.Definition(
query="Refinitiv",
date_from="20.03.2021",
date_to=timedelta(days=-4),
count=3
).get_data()
response.data.df
```
#### Get a limited number of headlines
```
response = news.headlines.Definition(query = "Google", count = 350).get_data()
response.data.df
```
### Story
```
response = news.story.Definition("urn:newsml:reuters.com:20211003:nNRAgvhyiu:1").get_data()
print(response.data.story.title, '\n')
print(response.data.story.content)
```
## Close the session
```
rd.close_session()
```
| github_jupyter |
# Migrating scripts from Framework Mode to Script Mode
This notebook focuses on how to migrate scripts from Framework Mode to Script Mode. The original notebook using Framework Mode can be found here: https://github.com/awslabs/amazon-sagemaker-examples/blob/4c2a93114104e0b9555d7c10aaab018cac3d7c04/sagemaker-python-sdk/tensorflow_distributed_mnist/tensorflow_local_mode_mnist.ipynb
### Set up the environment
```
import os
import subprocess
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
```
### Download the MNIST dataset
```
import utils
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
data_sets = input_data.read_data_sets('data', dtype=tf.uint8, reshape=False, validation_size=5000)
utils.convert_to(data_sets.train, 'train', 'data')
utils.convert_to(data_sets.validation, 'validation', 'data')
utils.convert_to(data_sets.test, 'test', 'data')
```
### Upload the data
We use the ```sagemaker.Session.upload_data``` function to upload our datasets to an S3 location. The return value `inputs` identifies the location -- we will use this later when we start the training job.
```
inputs = sagemaker_session.upload_data(path='data', key_prefix='data/mnist')
```
# Construct an entry point script for training
In this example, we assume that you already have a Framework Mode training script named `mnist.py`:
```
!pygmentize 'mnist.py'
```
The training script `mnist.py` includes the Framework Mode functions ```model_fn```, ```train_input_fn```, ```eval_input_fn```, and ```serving_input_fn```. We need to create an entry point script that uses the functions above to create a ```tf.estimator```:
```
%%writefile train.py
import argparse
# import original framework mode script
import mnist
import tensorflow as tf
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# read hyperparameters as script arguments
parser.add_argument('--training_steps', type=int)
parser.add_argument('--evaluation_steps', type=int)
args, _ = parser.parse_known_args()
# creates a tf.Estimator using `model_fn` that saves models to /opt/ml/model
estimator = tf.estimator.Estimator(model_fn=mnist.model_fn, model_dir='/opt/ml/model')
# creates parameterless input_fn function required by the estimator
def input_fn():
return mnist.train_input_fn(training_dir='/opt/ml/input/data/training', params=None)
train_spec = tf.estimator.TrainSpec(input_fn, max_steps=args.training_steps)
# creates parameterless serving_input_receiver_fn function required by the exporter
def serving_input_receiver_fn():
return mnist.serving_input_fn(params=None)
exporter = tf.estimator.LatestExporter('Servo',
serving_input_receiver_fn=serving_input_receiver_fn)
# creates parameterless input_fn function required by the evaluation
def input_fn():
return mnist.eval_input_fn(training_dir='/opt/ml/input/data/training', params=None)
eval_spec = tf.estimator.EvalSpec(input_fn, steps=args.evaluation_steps, exporters=exporter)
# start training and evaluation
tf.estimator.train_and_evaluate(estimator=estimator, train_spec=train_spec, eval_spec=eval_spec)
```
## Changes in the SageMaker TensorFlow estimator
We need to create a TensorFlow estimator pointing to ```train.py``` as the entrypoint:
```
from sagemaker.tensorflow import TensorFlow
mnist_estimator = TensorFlow(entry_point='train.py',
dependencies=['mnist.py'],
role='SageMakerRole',
framework_version='1.13',
hyperparameters={'training_steps':10, 'evaluation_steps':10},
py_version='py3',
train_instance_count=1,
train_instance_type='local')
mnist_estimator.fit(inputs)
```
# Deploy the trained model to prepare for predictions
The deploy() method creates an endpoint (in this case locally) which serves prediction requests in real-time.
```
mnist_predictor = mnist_estimator.deploy(initial_instance_count=1, instance_type='local')
```
# Invoking the endpoint
```
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
for i in range(10):
data = mnist.test.images[i].tolist()
predict_response = mnist_predictor.predict(data)
print("========================================")
label = np.argmax(mnist.test.labels[i])
print("label is {}".format(label))
print("prediction is {}".format(predict_response))
```
# Clean-up
Deleting the local endpoint when you're finished is important since you can only run one local endpoint at a time.
```
mnist_estimator.delete_endpoint()
```
| github_jupyter |
```
# -*- coding: utf-8 -*-
import sys
import os
import numpy as np
import pandas as pd
import cobra
print('Python version:', sys.version)
print('numpy version:', np.__version__)
print('pandas version:', pd.__version__)
print('cobrapy version:', cobra.__version__)
def AddRxn(model, newRxnFile):
"""Function of adding new reactions to the model."""
n1 = len(model.reactions)
AllAddRxn = pd.read_csv(newRxnFile, sep=',', index_col='RxnID', skipinitialspace=True)
n2 = len(AllAddRxn)
for i in range(n2):
ID = AllAddRxn.index.values[i]
addRxn = cobra.Reaction(ID)
model.add_reactions([addRxn])
addRxnInf = model.reactions[n1 + i]
addRxnInf.name = AllAddRxn.loc[ID, 'RxnName']
addRxnInf.reaction = AllAddRxn.loc[ID, 'RxnFormula']
addRxnInf.subsystem = AllAddRxn.loc[ID, 'Subsystem']
addRxnInf.lower_bound = AllAddRxn.loc[ID, 'LowerBound']
addRxnInf.upper_bound = AllAddRxn.loc[ID, 'UpperBound']
return model
def flux2file(model, product, psw, output_dir='tmp'):
"""Function of exporting flux data."""
n = len(model.reactions)
modelMatrix = np.empty([n, 9], dtype = object)
for i in range(len(model.reactions)):
x = model.reactions[i]
modelMatrix[i, 0] = i + 1
modelMatrix[i, 1] = x.id
modelMatrix[i, 2] = x.name
modelMatrix[i, 3] = x.reaction
modelMatrix[i, 4] = x.subsystem
modelMatrix[i, 5] = x.lower_bound
modelMatrix[i, 6] = x.upper_bound
modelMatrix[i, 7] = x.flux
modelMatrix[i, 8] = abs(x.flux)
df = pd.DataFrame(data = modelMatrix,
columns = ['N', 'RxnID', 'RxnName', 'Reaction', 'SubSystem',
'LowerBound', 'UpperBound', 'Flux-core', 'abs(Flux)'])
if not os.path.exists(output_dir):
os.mkdir(output_dir)
filepath = os.path.join(output_dir, '{}_{}.xlsx'.format(product, psw))
df.to_excel(filepath, index=False)
model = cobra.Model()
AddRxn(model,'CBBrxns.csv')
# the model has a constraint of producing 1 of 3pg (sink_3pg)
# set the objective to minimize ATP cost
model.objective = {model.reactions.DM_atp: 1}
model.objective_direction = 'min'
# set the carboxylation and oxygenation reaction ratio
# of RuBisCO to be 3:1
rubisco_flux = model.problem.Constraint(
model.reactions.RBPC.flux_expression - 3 * model.reactions.RBPO.flux_expression,
lb = 0,
ub = 0
)
model.add_cons_vars(rubisco_flux)
photores = {
'NPR': 'a_NPRrxns.csv', # natural photorespiration
'GLC': 'b_GLCrxns.csv', # glycerate bypass
'OX': 'c_OXrxns.csv', # glycolate oxidation pathway
'A5P': 'd_A5Prxns.csv', # arabinose-5-phosphate shunt
'3OHP': 'e_3OHPrxns.csv', # 3-hydroxypropionate bypass
'TACO': 'f_TACOrxns.csv', # tartronyl-CoA pathway
}
cost_df = pd.DataFrame()
for psw, rxns in photores.items():
with model as m:
AddRxn(m, rxns)
m.optimize()
flux2file(m,'3pg',psw,'output')
for cost in ['DM_atp', 'DM_e', 'Fdr', 'EX_co2', 'RBPC', 'RBPO']:
cost_df.loc[cost, psw] = abs(m.reactions.get_by_id(cost).flux)
# Assuming we cannot improve the GCC hydrolysis
# The GCC M5 hydrolyses 3.9 ATP per carboxylation (Fig. 2c)
with model as m:
AddRxn(m, 'f_TACOrxns.csv')
m.reactions.GCC.add_metabolites({'atp': -2.9, 'adp': 2.9, 'pi': 2.9})
m.optimize()
flux2file(m,'3pg', 'TACO_2','output')
for cost in ['DM_atp', 'DM_e', 'Fdr', 'EX_co2', 'RBPC', 'RBPO']:
cost_df.loc[cost, 'TACO_2'] = abs(m.reactions.get_by_id(cost).flux)
cost_df.to_excel('TaCo costs comparison.xlsx')
cost_df
```
| github_jupyter |
```
import numpy as np
import torch
import os
import pickle
import matplotlib.pyplot as plt
# %matplotlib inline
# plt.rcParams['figure.figsize'] = (20, 20)
# plt.rcParams['image.interpolation'] = 'bilinear'
import sys
sys.path.append('../train/')
from torch.autograd import Variable
from torch.utils.data import DataLoader, Dataset
import torchvision.datasets as datasets
import torchvision
import torchvision.transforms as T
import torch.nn.functional as F
import torch.nn as nn
import collections
import numbers
import random
import math
from PIL import Image, ImageOps, ImageEnhance
import time
from torch.utils.data import Dataset
from networks.DilatedSegUNet_new_version import DilatedSegUNet_new_version
import tool
from tqdm import tqdm
flip_index = ['16', '15', '14', '13', '12', '11', '10']
NUM_CHANNELS = 3
NUM_CLASSES = 2
BATCH_SIZE = 2
W, H = 1918, 1280
STRIDE = 512
IMAGE_SIZE = 1024
test_mask_path = '../../data/test_masks/DilatedSegUNet_new_version/'
weight_path = '../_weights/DilatedSegUNet_new_version-fold0-end.pth'
def load_model(filename, model):
checkpoint = torch.load(filename)
model.load_state_dict(checkpoint['model_state'])
model = DilatedSegUNet_new_version()
model = model.cuda()
model.eval()
load_model(weight_path, model)
test_path = '../../data/images/test/'
if not os.path.exists(test_mask_path):
os.makedirs(test_mask_path)
test_names = os.listdir(test_path)
test_names = sorted(test_names)
coords_full = np.zeros((BATCH_SIZE, 2, H, W), dtype='float32')
xx,yy = np.meshgrid(np.linspace(-0.5,0.5,W), np.linspace(-0.5,0.5,H))
coord = np.concatenate([xx[np.newaxis,...], yy[np.newaxis,...]],0).astype('float32')
for i in range(BATCH_SIZE):
coords_full[i] = coord
with torch.no_grad():
batch_size = BATCH_SIZE
normalize_mean = [.485, .456, .406]
normalize_std = [.229, .224, .225]
test_names = sorted(os.listdir(test_path))
for image_pack in tqdm(range(len(test_names) // batch_size)):
images = np.zeros((batch_size, 3, H, W), dtype='float32')
test_masks = np.zeros((batch_size, 2, H, W), dtype='float32')
ifflip = [False] * batch_size
image_batch_names = test_names[image_pack * batch_size: image_pack * batch_size + batch_size]
mask_names = [input_name.split('.')[0] + '.png' for input_name in image_batch_names]
for idx, image_name in enumerate(image_batch_names):
image = Image.open(os.path.join(test_path, image_name))
angle = image_name.split('.')[0].split('_')[-1]
if angle in flip_index:
ifflip[idx] = True
image = ImageOps.mirror(image)
image = np.array(image).astype('float') / 255
image = image.transpose(2, 0, 1)
for i in range(3):
image[i] = (image[i] - normalize_mean[i]) / normalize_std[i]
images[idx] = image
for h_idx in range(int(math.ceil((H - STRIDE) / STRIDE))):
h_start = h_idx * STRIDE
h_end = h_start + IMAGE_SIZE
if h_end > H:
h_end = H
h_start = h_end - IMAGE_SIZE
for w_idx in range(int(math.ceil((W - STRIDE) / STRIDE))):
w_start = w_idx * STRIDE
w_end = w_start + IMAGE_SIZE
if w_end > W:
w_end = W
w_start = w_end - IMAGE_SIZE
input_batchs = images[:, :, h_start:h_end, w_start:w_end]
input_tensor = torch.from_numpy(input_batchs).cuda()
inputs = Variable(input_tensor)
coord_batchs = coords_full[:, :, h_start:h_end, w_start:w_end]
coord_batchs = coord_batchs[:, :, ::16, ::16]
coord_tensor = torch.from_numpy(coord_batchs).cuda()
coords = Variable(coord_tensor)
outputs = model(inputs, coords)
ouputs = outputs.cpu().data.numpy()
test_masks[:, :, h_start:h_end, w_start:w_end] += ouputs
test_masks = np.argmax(test_masks, axis=1).astype('uint8')
for idx in range(batch_size):
output_PIL = Image.fromarray(test_masks[idx].astype('uint8')*255).convert('1')
if ifflip[idx]:
output_PIL = ImageOps.mirror(output_PIL)
mask_name = mask_names[idx]
output_PIL.save(test_mask_path + mask_name)
```
| github_jupyter |
<a href="https://colab.research.google.com/github/yushan111/analytics-zoo/blob/add-autoestimator-quick-start/docs/docs/colab-notebook/orca/quickstart/autoestimator_pytorch_lenet_mnist.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

---
##### Copyright 2018 Analytics Zoo Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
```
## **Environment Preparation**
**Install Java 8**
Run the cell on the **Google Colab** to install jdk 1.8.
**Note:** if you run this notebook on your computer, root permission is required when running the cell to install Java 8. (You may ignore this cell if Java 8 has already been set up on your computer).
```
# Install jdk8
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
import os
# Set environment variable JAVA_HOME.
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
!update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
!java -version
```
**Install Analytics Zoo**
[Conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) is needed to prepare the Python environment for running this example.
**Note**: The following code cell is specific for setting up conda environment on Colab; for general conda installation, please refer to the [install guide](https://docs.conda.io/projects/conda/en/latest/user-guide/install/) for more details.
```
import sys
# Get current python version
version_info = sys.version_info
python_version = f"{version_info.major}.{version_info.minor}.{version_info.micro}"
# Install Miniconda
!wget https://repo.continuum.io/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh
!chmod +x Miniconda3-4.5.4-Linux-x86_64.sh
!./Miniconda3-4.5.4-Linux-x86_64.sh -b -f -p /usr/local
# Update Conda
!conda install --channel defaults conda python=$python_version --yes
!conda update --channel defaults --all --yes
# Append to the sys.path
_ = (sys.path
.append(f"/usr/local/lib/python{version_info.major}.{version_info.minor}/site-packages"))
os.environ['PYTHONHOME']="/usr/local"
```
You can install the latest pre-release version using `pip install --pre --upgrade analytics-zoo[ray]`.
```
# Install latest pre-release version of Analytics Zoo
# Installing Analytics Zoo from pip will automatically install pyspark, bigdl, and their dependencies.
!pip install --pre --upgrade analytics-zoo[ray]
# Install python dependencies
!pip install torch==1.7.1 torchvision==0.8.2
```
## **Automated hyper-parameter search for PyTorch using Orca APIs**
In this guide we will describe how to enable automated hyper-parameter search for PyTorch using Orca `AutoEstimator` in 5 simple steps.
```
# import necessary libraries and modules
from __future__ import print_function
import os
import argparse
from zoo.orca import init_orca_context, stop_orca_context
from zoo.orca import OrcaContext
```
### **Step 1: Init Orca Context**
```
# recommended to set it to True when running Analytics Zoo in Jupyter notebook.
OrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook).
cluster_mode = "local"
if cluster_mode == "local":
init_orca_context(cores=4, memory="2g", init_ray_on_spark=True) # run in local mode
elif cluster_mode == "k8s":
init_orca_context(cluster_mode="k8s", num_nodes=2, cores=4, init_ray_on_spark=True) # run on K8s cluster
elif cluster_mode == "yarn":
init_orca_context(
cluster_mode="yarn-client", cores=4, num_nodes=2, memory="2g", init_ray_on_spark=True,
driver_memory="10g", driver_cores=1) # run on Hadoop YARN cluster
```
This is the only place where you need to specify local or distributed mode. View [Orca Context](https://analytics-zoo.readthedocs.io/en/latest/doc/Orca/Overview/orca-context.html) for more details.
**Note**: You should `export HADOOP_CONF_DIR=/path/to/hadoop/conf/dir` when you run on a Hadoop YARN cluster.
### **Step 2: Define the Model**
You may define your model, loss and optimizer in the same way as in any standard PyTorch program.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
class LeNet(nn.Module):
def __init__(self, fc1_hidden_size=500):
super(LeNet, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5, 1)
self.conv2 = nn.Conv2d(20, 50, 5, 1)
self.fc1 = nn.Linear(4*4*50, fc1_hidden_size)
self.fc2 = nn.Linear(fc1_hidden_size, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 4*4*50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=1)
criterion = nn.NLLLoss()
```
After defining your model, you need to define a *Model Creator Function* that returns an instance of your model, and an *Optimizer Creator Function* that returns a PyTorch optimizer. Note that both the *Model Creator Function* and the *Optimizer Creator Function* should take `config` as input and read the hyper-parameter values from `config`.
```
def model_creator(config):
model = LeNet(fc1_hidden_size=config["fc1_hidden_size"])
return model
def optim_creator(model, config):
return torch.optim.Adam(model.parameters(), lr=config["lr"])
```
### **Step 3: Define Dataset**
You can define the train and validation datasets using a *Data Creator Function*, which takes `config` as its only parameter and returns a PyTorch `DataLoader`.
```
import torch
from torchvision import datasets, transforms
torch.manual_seed(0)
dir = './dataset'
test_batch_size = 640
def train_loader_creator(config):
train_loader = torch.utils.data.DataLoader(
datasets.MNIST(dir, train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=config["batch_size"], shuffle=True)
return train_loader
def test_loader_creator(config):
test_loader = torch.utils.data.DataLoader(
datasets.MNIST(dir, train=False, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=test_batch_size, shuffle=False)
return test_loader
```
### **Step 4: Define search space**
You should define a dictionary as your hyper-parameter search space. The keys are hyper-parameter names, which should be the same as those used in your creator functions, and the values specify how you want to sample each hyper-parameter. See more about [automl.hp](https://analytics-zoo.readthedocs.io/en/latest/doc/PythonAPI/AutoML/automl.html#orca-automl-hp).
```
from zoo.orca.automl import hp
search_space = {
"fc1_hidden_size": hp.choice([500, 600]),
"lr": hp.choice([0.001, 0.003]),
"batch_size": hp.choice([160, 320, 640]),
}
```
### **Step 5: Automatically fit and search *with* Orca AutoEstimator**
First, create an AutoEstimator. You can refer to [AutoEstimator API doc](https://analytics-zoo.readthedocs.io/en/latest/doc/PythonAPI/AutoML/automl.html#orca-automl-auto-estimator) for more details.
```
from zoo.orca.automl.auto_estimator import AutoEstimator
auto_est = AutoEstimator.from_torch(model_creator=model_creator,
optimizer=optim_creator,
loss=criterion,
logs_dir="/tmp/zoo_automl_logs",
resources_per_trial={"cpu": 2},
name="lenet_mnist")
```
Next, use the auto estimator to fit and search for the best hyper-parameter set.
```
auto_est.fit(data=train_loader_creator,
validation_data=test_loader_creator,
search_space=search_space,
n_sampling=2,
epochs=1,
metric="accuracy")
```
Finally, you can get the best learned model and the best hyper-parameters.
```
best_model = auto_est.get_best_model().model # will change later
best_config = auto_est.get_best_config()
print(best_config)
```
You can use the best learned model and the best hyper-parameters as you want. Here, we demonstrate how to evaluate on the test dataset.
```
test_loader = test_loader_creator(best_config)
best_model.eval()
correct = 0
with torch.no_grad():
for data, target in test_loader:
output = best_model(data)
pred = output.data.max(1, keepdim=True)[1]
correct += pred.eq(target.data.view_as(pred)).sum().numpy()
accuracy = 100. * correct / len(test_loader.dataset)
print(f"accuracy is {accuracy}%")
```
You should find that the accuracy of the best model reaches about 98%.
```
# stop orca context when program finishes
stop_orca_context()
```
| github_jupyter |
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content-dl/blob/main/tutorials/W1D2_LinearDeepLearning/student/W1D2_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# DL Neuromatch Academy: Week 1, Day 2, Tutorial 1
# Gradients and AutoGrad
__Content creators:__ Saeed Salehi, Vladimir Haltakov, Andrew Saxe
__Content reviewers:__ Polina Turishcheva, Atnafu Lambebo, Yu-Fang Yang
__Content editors:__ Anoop Kulkarni
__Production editors:__ Khalid Almubarak, Spiros Chavlis
---
#Tutorial Objectives
Day 2 Tutorial 1 will continue building your PyTorch skill set and motivate its core functionality, Autograd. In this notebook, we will cover the key concepts and ideas of:
* Gradient descent
* PyTorch Autograd
* PyTorch nn module
```
#@markdown Tutorial slides
# you should link the slides for all tutorial videos here (we will store pdfs on osf)
from IPython.display import HTML
HTML('<iframe src="https://docs.google.com/presentation/d/1kfWWYhSIkczYfjebhMaqQILTCu7g94Q-o_ZcWb1QAKs/embed?start=false&loop=false&delayms=3000" frameborder="0" width="960" height="569" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>')
```
---
# Setup
```
# Imports
import numpy as np
import matplotlib.pyplot as plt
import torch
from torch import nn
import time
import random
from tqdm.notebook import tqdm, trange
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Plotting functions
def ex3_plot(epochs, losses, values, gradients):
f, (plot1, plot2, plot3) = plt.subplots(3, 1, sharex=True, figsize=(10, 7))
plot1.set_title("Cross Entropy Loss")
plot1.plot(np.linspace(1, epochs, epochs), losses, color='r')
plot2.set_title("First Parameter value")
plot2.plot(np.linspace(1, epochs, epochs), values, color='c')
plot3.set_title("First Parameter gradient")
plot3.plot(np.linspace(1, epochs, epochs), gradients, color='m')
plot3.set_xlabel("Epoch")
plt.show()
#@title Helper functions
seed = 1943 # McCulloch & Pitts (1943)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
np.random.seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```
---
# Section 1: Gradient Descent Algorithm
Since the goal of most learning algorithms is **minimizing the risk (cost) function**, optimization is the soul of learning! The gradient descent algorithm, along with its variations such as stochastic gradient descent, is one of the most powerful and popular optimization methods used for deep learning.
## 1.1: Gradient Descent
```
#@title Video 1.1: Introduction
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="PFQeUDxQFls", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
#@title Video 1.2: Gradient Descent
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Z3dyRLR8GbM", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```
Gradient Descent (introduced by Augustin-Louis Cauchy in 1847) is an **iterative method** to **minimize** a **continuous** and (ideally) **differentiable function** of **many variables**.
### Definition
Let $f(\mathbf{w}): \mathbb{R}^d \rightarrow \mathbb{R}$ be a differentiable function. Gradient Descent is an iterative algorithm for minimizing the function $f$, starting with an initial value for variables $\mathbf{w}$, taking steps of size $\eta$ in the direction of the negative gradient at the current point to update the variables $\mathbf{w}$.
$$ \mathbf{w}^{(t+1)} = \mathbf{w}^{(t)} - \eta \nabla f (\mathbf{w}^{(t)}) $$
where $\eta > 0$ and $\nabla f (\mathbf{w})= \left( \frac{\partial f(\mathbf{w})}{\partial w_1}, ..., \frac{\partial f(\mathbf{w})}{\partial w_d} \right)$. Since negative gradients always point locally in the direction of steepest descent, the algorithm makes small steps at each point **towards** the minimum.
<br/>
### Vanilla Algorithm
---
> *inputs*: initial guess $\mathbf{w}^{(0)}$, step size $\eta > 0$, number of steps $T$
> *For* *t = 1, 2, ..., T* *do* \
$\qquad$ $\mathbf{w}^{(t+1)} = \mathbf{w}^{(t)} - \eta \nabla f (\mathbf{w}^{(t)})$\
*end*
> *return*: $\mathbf{w}^{(t+1)}$
---
<br/>
To be able to use this algorithm, we need to calculate the gradient of the loss with respect to the learnable parameters.
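As a warm-up (this sketch is ours, not one of the official exercises), here is the vanilla algorithm applied to a toy function $f(w) = (w-3)^2$, whose gradient $f'(w) = 2(w-3)$ we can write down by hand; the step size and number of steps are arbitrary choices.
```
# Minimal gradient descent sketch on f(w) = (w - 3)**2, with a hand-coded gradient
w = 0.0        # initial guess w^(0)
eta = 0.1      # step size
for t in range(50):
    grad = 2 * (w - 3)    # gradient of f at the current point
    w = w - eta * grad    # w^(t+1) = w^(t) - eta * grad
print(w)  # approaches the minimizer w = 3
```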
## 1.2: Calculating Gradients
To minimize the empirical risk (loss) function using gradient descent, we need to calculate the vector of partial derivatives:
$$\dfrac{\partial Loss}{\partial \mathbf{w}} = \left[ \dfrac{\partial Loss}{\partial w_1}, \dfrac{\partial Loss}{\partial w_2} , ..., \dfrac{\partial Loss}{\partial w_d} \right]^{\top} $$
Although PyTorch and other deep learning frameworks (e.g. JAX and TensorFlow) provide us with incredibly powerful and efficient algorithms for automatic differentiation, calculating a few derivatives by hand would be fun.
### Exercise 1.2
1. Given $L(w_1, w_2) = w_1^2 - 2w_1 w_2$, find $\dfrac{\partial L}{\partial w_1}$ and $\dfrac{\partial L}{\partial w_2}$.
<br/>
2. Given $f(x) = sin(x)$ and $g(x) = \ln(x)$, find the derivative of their composite function $\dfrac{d (f \circ g)(x)}{d x}$ (*hint: chain rule*).
**Chain rule**: For a composite function $F(x) = f(g(x)) \equiv (f \circ g)(x)$:
$$F'(x) = f'(g(x)) \cdot g'(x)$$
or differently denoted:
$$ \frac{dF}{dx} = \frac{df}{dg} ~ \frac{dg}{dx} $$
<br/>
3. Given $f(x, y, z) = \tanh \left( \ln \left[1 + z \frac{2x}{sin(y)} \right] \right)$, how easy is it to derive $\dfrac{\partial f}{\partial x}$, $\dfrac{\partial f}{\partial y}$ and $\dfrac{\partial f}{\partial z}$? (*hint: you don't have to actually calculate them!*)
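If you would like to check your by-hand answers (or see the point of question 3 without grinding through it), a computer algebra system can produce these partial derivatives mechanically. This optional SymPy sketch is not part of the exercise:
```
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.tanh(sp.log(1 + z * 2 * x / sp.sin(y)))
for var in (x, y, z):
    # SymPy applies the chain rule for us; the raw expressions come out long and messy
    print(f"df/d{var} =", sp.simplify(sp.diff(f, var)))
```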
## 1.3: Computational Graphs and Backprop
```
#@title Video 1.3: Computational Graph
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="7c8iCHcVgVs", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```
The last function in *Exercise 1.2* is an example of how overwhelming the derivation of gradients can get, as the number of variables and nested functions increases. This function is still extremely simple compared to the loss functions of modern neural networks. So how do PyTorch and similar frameworks approach such beasts?
### 1.3.1: Computational Graphs
Let’s look at the function again:
$$f(x, y, z) = \tanh \left(\ln \left[1 + z \frac{2x}{sin(y)} \right] \right)$$
we can build a so-called computational graph (shown below) to break the original function into smaller and more approachable expressions. If this "reverse engineering" approach seems unintuitive and arbitrary, it's because it is! Usually, the graph is built first.
<center><img src="https://raw.githubusercontent.com/ssnio/statics/main/neuromatch/comput_graph.png" alt="Computation Graph" width="800"/></center>
Starting from $x$, $y$, and $z$ and following the arrows and expressions, you would see that our graph returns the same function as $f$. It does so by calculating intermediate variables $a,b,c,d,$ and $e$. This is called the **forward pass**.
Now, let’s start from $f$, and work our way against the arrows while calculating the gradient of each expression as we go. This is called **backward pass**, which later inspires **backpropagation of errors**.
<center><img src="https://raw.githubusercontent.com/ssnio/statics/main/neuromatch/comput_graph_full.png" alt="Computation Graph full" width="1200"/></center>
Because we've split the computation into simple operations on intermediate variables, I hope you can appreciate how easy it now is to calculate the partial derivatives.
Now we can use chain rule and simply calculate any gradient:
$$ \dfrac{\partial f}{\partial x} = \dfrac{\partial f}{\partial e}~\dfrac{\partial e}{\partial d}~\dfrac{\partial d}{\partial c}~\dfrac{\partial c}{\partial a}~\dfrac{\partial a}{\partial x} = \left( 1-\tanh^2(e) \right) \cdot \frac{1}{d}\cdot z \cdot \frac{1}{b} \cdot 2$$
Conveniently, the values for $e$, $b$, and $d$ are available to us from when we did the forward pass through the graph. That is, the partial derivatives have simple expressions in terms of the intermediate variables $a,b,c,d,e$ that we calculated and stored during the forward pass.
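Here is a small illustrative check of that expression using PyTorch itself: we run the forward pass through the same intermediate variables, let autograd perform the backward pass, and compare against the hand-written product of partials. The sample point is an arbitrary choice, picked so that $\sin(y) \neq 0$ and the argument of the logarithm stays positive.
```
import torch

# Arbitrary sample point (assumption: sin(y) != 0 and 1 + z*2x/sin(y) > 0)
x = torch.tensor(1.5, requires_grad=True)
y = torch.tensor(0.8, requires_grad=True)
z = torch.tensor(2.0, requires_grad=True)

# Forward pass through the intermediate variables of the graph
a = 2 * x
b = torch.sin(y)
c = a / b
d = 1 + z * c
e = torch.log(d)
f = torch.tanh(e)

f.backward()  # backward pass: autograd applies the chain rule for us

# Hand-written chain rule for df/dx, reusing the stored intermediate values
manual_dfdx = (1 - torch.tanh(e)**2) * (1 / d) * z * (1 / b) * 2
print(x.grad, manual_dfdx.detach())  # the two numbers should agree
```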
### Exercise 1.3
For the function above, calculate the $\dfrac{\partial f}{\partial y}$ and $\dfrac{\partial f}{\partial z}$ using the computational graph and chain rule.
For more: [Calculus on Computational Graphs: Backpropagation](https://colah.github.io/posts/2015-08-Backprop/)
---
# Section 2: PyTorch AutoGrad
```
#@title Video 2.1: Automatic Differentiation
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="h8B8Nlcz7yY", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```
Deep learning frameworks such as PyTorch, JAX, and TensorFlow come with a very efficient and sophisticated set of algorithms, commonly known as Automatic differentiation. AutoGrad is PyTorch's automatic differentiation engine. Here we start by covering the essentials of AutoGrad, but you will learn more in the coming days.
## Section 2.1: Forward Propagation
Everything starts with the forward propagation (pass). PyTorch plans the computational graph, as we declare the variables and operations, and it builds the graph when we call the backward pass. PyTorch rebuilds the graph every time we iterate or change it (or simply put, PyTorch uses a dynamic graph).
Before we start our first example, let's recall the gradient descent algorithm. Gradient descent only requires the gradient of the cost function with respect to the variables we can update (change). These variables are often called "learnable parameters", or simply parameters, in PyTorch. In the case of neural networks, the weights and biases are usually the learnable parameters.
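As a tiny illustration (the values here are arbitrary) of what this tracking means before we get to the exercise: any tensor created with `requires_grad=True` makes every operation on it part of the graph, and the result remembers which function the backward pass will need.
```
import torch

w = torch.tensor([0.5], requires_grad=True)   # a learnable parameter
x = torch.tensor([2.0])                       # plain data, not tracked

y = torch.tanh(w * x)     # operations on a tracked tensor extend the graph
print(y.requires_grad)    # True
print(y.grad_fn)          # <TanhBackward...>, the node used during the backward pass
```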
### Exercise 2.1
In PyTorch, we can set the optional argument `requires_grad` in tensors to `True`, so PyTorch would track every operation on them while configuring the computational graph. For this exercise, use the provided tensors to build the following graph.
<br/>
<center><img src="https://raw.githubusercontent.com/ssnio/statics/main/neuromatch/simple_graph.png" alt="Simple nn graph" width="600"/></center>
```
class SimpleGraph:
def __init__(self, w=None, b=None):
"""Initializing the SimpleGraph
Args:
w (float): initial value for weight
b (float): initial value for bias
"""
if w is None:
self.w = torch.randn(1, requires_grad=True)
else:
self.w = torch.tensor([w], requires_grad=True)
if b is None:
self.b = torch.randn(1, requires_grad=True)
else:
self.b = torch.tensor([b], requires_grad=True)
def forward(self, x):
"""Forward pass
Args:
x (torch.Tensor): 1D tensor of features
Returns:
torch.Tensor: model predictions
"""
assert isinstance(x, torch.Tensor)
#################################################
## Implement the forward pass to calculate prediction
## Note that prediction is not the loss, but the value after `tanh`
# Complete the function and remove or comment the line below
raise NotImplementedError("Forward Pass `forward`")
#################################################
prediction = ...
return prediction
def sq_loss(y_true, y_prediction):
"""L2 loss function
Args:
y_true (torch.Tensor): 1D tensor of target labels
y_prediction (torch.Tensor): 1D tensor of predictions
Returns:
torch.Tensor: L2-loss (squared error)
"""
assert isinstance(y_true, torch.Tensor)
assert isinstance(y_prediction, torch.Tensor)
#################################################
## Implement the L2-loss (squared error) given true label and prediction
# Complete the function and remove or comment the line below
raise NotImplementedError("Loss function `sq_loss`")
#################################################
loss = ...
return loss
# # Uncomment to run
# feature = torch.tensor([1]) # input tensor
# target = torch.tensor([7]) # target tensor
# simple_graph = SimpleGraph(-0.5, 0.5)
# print("initial weight = {} \ninitial bias = {}".format(simple_graph.w.item(),
# simple_graph.b.item()))
# prediction = simple_graph.forward(feature)
# square_loss = sq_loss(target, prediction)
# print("for x={} and y={}, prediction={} and L2 Loss = {}".format(feature.item(),
# target.item(),
# prediction.item(),
# square_loss.item()))
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial1_Solution_b9fccdbe.py)
It is important to appreciate the fact that PyTorch can follow our operations even as we move arbitrarily through classes and functions.
## 2.2 Backward Propagation
Here is where all the magic lies. We can first look at the operations that PyTorch kept track of. The tensor property `grad_fn` keeps a reference to the function that will be used during backpropagation.
```
print('Gradient function for prediction =', prediction.grad_fn)
print('Gradient function for loss =', square_loss.grad_fn)
```
Now let's kick off the backward pass to calculate the gradients by calling `.backward()` on the tensor from which we wish to initiate backpropagation. Often, `.backward()` is called on the loss, which is the last node of the graph. Before doing that, let's calculate the loss gradients by hand:
$$\frac{\partial{loss}}{\partial{w}} = - 2 x (y_t - y_p)(1 - y_p^2)$$
$$\frac{\partial{loss}}{\partial{b}} = - 2 (y_t - y_p)(1 - y_p^2)$$
We can then compare it to PyTorch gradients, which can be obtained by calling `.grad` on tensors.
**Important Notes**
* Always keep in mind that PyTorch is tracking all the operations for tensors that require grad. To stop this tracking, we use `.detach()`.
* PyTorch builds the graph only when `.backward()` is called and then it is set free. If you try calling `.backward()` after it is already called, you get the following error:
*`Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time.`*
* Learnable parameters are "contagious". If you recall from our computational graph, we need all the intermediate gradients to be able to use the chain rule. Therefore, we need to `.detach()` any tensor that was on the path of gradient flow (e.g. prediction tensor).
* `.backward()` accumulates gradients in the leaves. For most training methods, we call `.zero_grad()` on the optimizer (or the model) to zero the `.grad` attributes (see [autograd.backward](https://pytorch.org/docs/stable/autograd.html#torch.autograd.backward) for more).
```
# analytical gradients (remember detaching)
ana_dloss_dw = - 2 * feature * (target - prediction.detach())*(1 - prediction.detach()**2)
ana_dloss_db = - 2 * (target - prediction.detach())*(1 - prediction.detach()**2)
square_loss.backward() # first we should call the backward to build the graph
autograd_dloss_dw = simple_graph.w.grad
autograd_dloss_db = simple_graph.b.grad
print(ana_dloss_dw == autograd_dloss_dw)
print(ana_dloss_db == autograd_dloss_db)
```
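To see the accumulation behaviour from the notes above in isolation, here is a small standalone sketch (separate from the comparison we just ran): each call to `.backward()` on a freshly built graph adds the new gradient onto `.grad`, which is why training loops zero the gradients every iteration.
```
import torch

w = torch.tensor([1.0], requires_grad=True)

for step in range(2):
    loss = (3 * w)**2        # rebuild a fresh graph each iteration (dynamic graph)
    loss.backward()
    print(step, w.grad)      # tensor([18.]), then tensor([36.]): gradients accumulate

w.grad.zero_()               # what optimizer.zero_grad() does for every parameter
print(w.grad)                # tensor([0.])
```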
References and more:
* [A GENTLE INTRODUCTION TO TORCH.AUTOGRAD](https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html)
* [AUTOMATIC DIFFERENTIATION PACKAGE - TORCH.AUTOGRAD](https://pytorch.org/docs/stable/autograd.html)
* [AUTOGRAD MECHANICS](https://pytorch.org/docs/stable/notes/autograd.html)
* [AUTOMATIC DIFFERENTIATION WITH TORCH.AUTOGRAD](https://pytorch.org/tutorials/beginner/basics/autogradqs_tutorial.html)
---
# Section 3: PyTorch's Neural Net module (`nn.Module`)
```
#@title Video 3.1: Putting it together
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="rUChBWj9ihw", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```
In this section we will focus on training the simple neural network model from yesterday.
```
#@title Generate the sample dataset
import sklearn.datasets
# Create a dataset of 256 points with a little noise
X_orig, y_orig = sklearn.datasets.make_moons(256, noise=0.1)
# Plot the dataset
plt.figure(figsize=(9, 7))
plt.scatter(X_orig[:,0], X_orig[:,1], s=40, c=y_orig)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.show()
# Select the appropriate device (GPU or CPU)
device = "cuda" if torch.cuda.is_available() else "cpu"
# Convert the 2D points to a float tensor
X = torch.from_numpy(X_orig).type(torch.FloatTensor)
X = X.to(device)
# Convert the labels to a long integer tensor
y = torch.from_numpy(y_orig).type(torch.LongTensor)
y = y.to(device)
```
Let's define the same simple neural network model as in Day 1. This time we will not define a `train` method, but instead implement it outside of the class so we can better inspect it.
```
# Simple neural network with a single hidden layer
class NaiveNet(nn.Module):
def __init__(self):
"""
Initializing the NaiveNet
"""
super(NaiveNet, self).__init__()
self.layers = nn.Sequential(
nn.Linear(2, 16),
nn.ReLU(),
nn.Linear(16, 2),
)
def forward(self, x):
"""Forward pass
Args:
x (torch.Tensor): 2D tensor of features
Returns:
torch.Tensor: model predictions
"""
return self.layers(x)
```
PyTorch provides us with ready to use neural network building blocks, such as linear or recurrent layers, activation functions and loss functions, packed in the [`torch.nn`](https://pytorch.org/docs/stable/nn.html) module. If we build a neural network using the `torch.nn` layers, the weights and biases are already in `requires_grad` mode.
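A quick way to convince yourself of that claim, using a throwaway layer (purely illustrative):
```
from torch import nn

tmp_layer = nn.Linear(2, 16)
for name, p in tmp_layer.named_parameters():
    print(name, tuple(p.shape), p.requires_grad)   # 'weight' and 'bias', both with requires_grad=True
```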
Now let's prepare the training! We need 3 things for that:
* **Model parameters** - Model parameters refer to all the learnable parameters of the model, which are accessible by calling `.parameters()` on the model. Please note that not all tensors with `requires_grad` are seen as model parameters. To create a custom model parameter, you can use [`nn.Parameter`](https://pytorch.org/docs/stable/generated/torch.nn.parameter.Parameter.html) (*A kind of Tensor that is to be considered a module parameter*). When we create a new instance of our model, layer parameters are initialized using a uniform distribution (more on that in the coming tutorials and days).
* **Loss function** - we need to define the loss that we are going to be optimizing. The cross entropy loss is suitable for classification problems.
* **Optimizer** - the optimizer will perform the adaptation of the model parameters according to the chosen loss function. The optimizer takes the parameters of the model (often by calling `.parameters()` on the model) as its input to be adapted.
You will learn more details about choosing the right loss function and optimizer later in the course.
```
# Create an instance of our network
naive_model = NaiveNet().to(device)
# Create a cross entropy loss function
cross_entropy_loss = nn.CrossEntropyLoss()
# Stochastic Gradient Descent optimizer with a learning rate of 1e-1
learning_rate = 1e-1
sgd_optimizer = torch.optim.SGD(naive_model.parameters(), lr=learning_rate)
```
The training process in PyTorch is interactive - you can perform training iterations as you wish and inspect the results after each iteration. We encourage leaving the loss function outside the explicit forward pass function and instead calculating it on the output (prediction).
Let's perform one training iteration. You can run the cell multiple times and see how the parameters are being updated and the loss is reducing. We pick the parameters of the first neuron in the first layer. Please make sure you go through all the commands and discuss their purpose with the pod.
```
# Reset all gradients to 0
sgd_optimizer.zero_grad()
# Forward pass (Compute the output of the model on the data)
y_logits = naive_model(X)
# Compute the loss
loss = cross_entropy_loss(y_logits, y)
print('Loss:', loss.item())
# Perform backpropagation to build the graph and compute the gradients
loss.backward()
# `.parameters()` returns a generator
print('Gradients:', next(naive_model.parameters()).grad[0].detach().cpu().numpy())
# Print model's first learnable parameters
print('Parameters before:', next(naive_model.parameters())[0].detach().cpu().numpy())
# Optimizer takes a step in the steepest direction (negative of gradient)
# and "updates" the weights and biases of the network
sgd_optimizer.step()
# Print model's first learnable parameters
print('Parameters after: ', next(naive_model.parameters())[0].detach().cpu().numpy())
```
## Exercise 3
Following everything we learned so far, we ask you to complete the `train` function below.
```
def train(features, labels, model, loss_fun, optimizer, n_epochs):
"""Training function
Args:
features (torch.Tensor): features (input) with shape torch.Size([n_samples, 2])
labels (torch.Tensor): labels (targets) with shape torch.Size([n_samples])
model (torch nn.Module): the neural network
loss_fun (function): loss function
optimizer(function): optimizer
n_epochs (int): number of training epochs
Returns:
list: record (evolution) of losses
list: record (evolution) of value of the first parameter
list: record (evolution) of gradient of the first parameter
"""
loss_record = []  # keeping records of loss
par_values = []  # keeping records of the first parameter
par_grads = []  # keeping records of the gradient of the first parameter
# we use `tqdm` methods for progress bar
epoch_range = trange(n_epochs, desc='loss: ', leave=True)
for i in epoch_range:
if loss_record:
epoch_range.set_description("loss: {:.4f}".format(loss_record[-1]))
epoch_range.refresh() # to show immediately the update
time.sleep(0.01)
#################################################
## Implement the missing parts of the training loop
# Complete the function and remove or comment the line below
raise NotImplementedError("Training setup `train`")
#################################################
... # Initialize gradients to 0
predictions = ... # Compute model prediction (output)
loss = ... # Compute the loss
... # Compute gradients (backward pass)
... # update parameters (optimizer takes a step)
loss_record.append(loss.item())
par_values.append(next(model.parameters())[0][0].item())
par_grads.append(next(model.parameters()).grad[0][0].item())
return loss_record, par_values, par_grads
# # Uncomment to run
# epochs = 5000
# losses, values, gradients = train(X, y,
# naive_model,
# cross_entropy_loss,
# sgd_optimizer,
# epochs)
# ex3_plot(epochs, losses, values, gradients)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content-dl/tree/main//tutorials/W1D2_LinearDeepLearning/solutions/W1D2_Tutorial1_Solution_364cd4e2.py)
*Example output:*
<img alt='Solution hint' align='left' width=704 height=488 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content-dl/main/tutorials/W1D2_LinearDeepLearning/static/W1D2_Tutorial1_Solution_364cd4e2_2.png>
```
#@title Video 3.2: Wrap-up
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="zFmWs6doqhM", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
```
| github_jupyter |
## Classifying Hourly Timestamps into Day or Night
```
import pandas as pd
import numpy as np
import warnings
from df2gspread import df2gspread as d2g
warnings.simplefilter('ignore')
## Import Raw TTN Data from Google SpreadSheet
url = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vRlXVQ6c3fKWvtQlFRSRUs5TI3soU7EghlypcptOM8paKXcUH8HjYv90VoJBncuEKYIZGLq477xE58C/pub?gid=0&single=true&output=csv'
df_hourly = pd.read_csv(url,parse_dates = ['time'],infer_datetime_format = True,usecols = [0,3])
df_hourly.head()
## Cleaning and re-organizing the DataFrame
df_hourly.rename(columns={'time': 'TimeStamps'}, inplace=True)
df_hourly.rename(columns={'CarCount': 'VehicleCountperHour'}, inplace=True)
## Strip the Microseconds from the time column
df_hourly['TimeStamps'] = df_hourly['TimeStamps'].values.astype('datetime64[s]')
## Reorder the dataframe
df_hourly = df_hourly.reindex(['TimeStamps','VehicleCountperHour'], axis=1)
df_hourly.tail()
```
### Importing SunTime Chart for adding Day or Night Classification
Here we add a Day or Night classification to the hourly vehicle count data using a dataframe of sunrise and sunset times for Saarbrücken. These times were acquired from [timeanddate.com](https://www.timeanddate.com/sun/germany/saarbrucken), and the dataframe was created manually.
```
# Reading SunTime Chart of Saarbrücken
url = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vRJCZQkgUHTCcx-wvJOog87qq4EFiQ1W6T4akxLSpiqCb3KjYDf_43coltDGG0YcjjsDTxjeXE-O_NH/pub?gid=0&single=true&output=csv'
df_suntimes = pd.read_csv(url)
# Modified Sun Timings DataFrame
df_suntimes_mod = pd.DataFrame()
df_suntimes_mod['SunriseTimeStamp']= pd.to_datetime(df_suntimes['Date'] + ' ' + df_suntimes['Sunrise'])
df_suntimes_mod['SunsetTimeStamp']= pd.to_datetime(df_suntimes['Date'] + ' ' + df_suntimes['Sunset'])
# Querying values in the dataframe to selected dates
start_time = '2018-02-21 07:25:00'
df_suntimes_mod = df_suntimes_mod.loc[df_suntimes_mod.SunriseTimeStamp >= start_time,:]
end_time = '2018-02-25 18:15:00'
df_suntimes_mod = df_suntimes_mod.loc[df_suntimes_mod.SunriseTimeStamp <= end_time,:]
df_suntimes_mod = df_suntimes_mod.reset_index()
df_suntimes_mod = df_suntimes_mod.drop(columns=['index'])
df_suntimes_mod
## Creating a new Dataframe from the original dataframe to classify Hourly TimeStamps as 'Day' or 'Night':
df_dn = df_hourly.copy()
## Set Everything to Day First
df_dn['DayorNight'] = 'Day'
## Manually fixing first day's night timestamps to 'Night'
night_index = (df_dn.loc[df_dn.TimeStamps <= (df_suntimes_mod['SunriseTimeStamp'][0]),:]).index
# Select Night Time Traffic Only from Sunset today to next day Sunrise
n_days = len(df_suntimes_mod['SunriseTimeStamp'])
for i,j in zip(range(n_days),range(1,n_days)):
start_time = df_suntimes_mod['SunsetTimeStamp'][i]
end_time = df_suntimes_mod['SunriseTimeStamp'][j]
data = df_dn[(df_dn['TimeStamps'] > start_time) & (df_dn['TimeStamps'] < end_time)]
night_index = night_index.append(data.index)
## Set all the Night TimeStamps to 'Night'
df_dn.loc[night_index, 'DayorNight'] = 'Night'
df_dn.head(15)
# Writing the file as csv
df_dn.to_csv('data/DayorNight.csv', date_format="%d/%m/%Y %H:%M:%S",index=False)
## Write pandas dataframe to a Google Sheet Using df2spread:
# Insert ID of Google Spreadsheet
spreadsheet = '1LTXIPNb7MX0qEOU_DbBKC-OwE080kyRvt-i_ejFM-Yg'
# Insert Sheet Name
wks_name = 'CleanedData'
d2g.upload(df_dn,spreadsheet,wks_name,col_names=True,clean=True)
```
| github_jupyter |
## Coding Matrices
Here are a few exercises to get you started with coding matrices. The exercises start off with vectors and then get more challenging.
### Vectors
```
### TODO: Assign the vector <5, 10, 2, 6, 1> to the variable v
v = []
```
The v variable contains a Python list. This list could also be thought of as a 1x5 matrix with 1 row and 5 columns. How would you represent this list as a matrix?
```
### TODO: Assign the vector <5, 10, 2, 6, 1> to the variable mv
### The difference between a vector and a matrix in Python is that
### a matrix is a list of lists.
### Hint: See the last quiz on the previous page
mv = [[]]
```
How would you represent this vector in its vertical form with 5 rows and 1 column? When defining matrices in Python, each row is a list. So in this case, you have 5 rows and thus will need 5 lists.
As an example, this is what the vector $$<5, 7>$$ would look like as a 1x2 matrix in Python:
```python
matrix1by2 = [
[5, 7]
]
```
And here is what the same vector would look like as a 2x1 matrix:
```python
matrix2by1 = [
[5],
[7]
]
```
```
### TODO: Assign the vector <5, 10, 2, 6, 1> to the variable vT
### vT is a 5x1 matrix
vT = []
```
### Assigning Matrices to Variables
```
### TODO: Assign the following matrix to the variable m
### 8 7 1 2 3
### 1 5 2 9 0
### 8 2 2 4 1
m = [[]]
```
### Accessing Matrix Values
```
### TODO: In matrix m, change the value
### in the second row last column from 0 to 5
### Hint: You do not need to rewrite the entire matrix
```
### Looping through Matrices to do Math
Coding mathematical operations with matrices can be tricky. Because matrices are lists of lists, you will need to use a for loop inside another for loop. The outside for loop iterates over the rows and the inside for loop iterates over the columns.
Here is some pseudo code
```python
for i in number of rows:
for j in number of columns:
mymatrix[i][j]
```
To figure out how many times to loop over the matrix, you need to know the number of rows and number of columns.
If you have a variable with a matrix in it, how could you figure out the number of rows? How could you figure out the number of columns? The [len](https://docs.python.org/2/library/functions.html#len) function in Python might be helpful.
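For example, assuming `m` holds a matrix as a list of lists (like the 3x5 example above), its dimensions can be read off like this:
```python
num_rows = len(m)         # number of inner lists
num_columns = len(m[0])   # length of the first row
print(num_rows, num_columns)
```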
### Scalar Multiplication
```
### TODO: Use for loops to multiply each matrix element by 5
### Store the answer in the r variable. This is called scalar
### multiplication
###
### HINT: First write a for loop that iterates through the rows
### one row at a time
###
### Then write another for loop within the for loop that
### iterates through the columns
###
### If you used the variable i to represent rows and j
### to represent columns, then m[i][j] would give you
### access to each element in the matrix
###
### Because r is an empty list, you cannot directly assign
### a value like r[i][j] = m[i][j]. You might have to
### work on one row at a time and then use r.append(row).
r = []
```
### Printing Out a Matrix
```
### TODO: Write a function called matrix_print()
### that prints out a matrix in
### a way that is easy to read.
### Each element in a row should be separated by a tab
### And each row should have its own line
### You can test our your results with the m matrix
### HINT: You can use a for loop within a for loop
### In Python, the print() function will be useful
### print(5, '\t', end = '') will print out the integer 5,
### then add a tab after the 5. The end = '' makes sure that
### the print function does not print out a new line if you do
### not want a new line.
### Your output should look like this
### 8 7 1 2 3
### 1 5 2 9 5
### 8 2 2 4 1
def matrix_print(matrix):
return
m = [
[8, 7, 1, 2, 3],
[1, 5, 2, 9, 5],
[8, 2, 2, 4, 1]
]
matrix_print(m)
```
### Test Your Results
```
### You can run these tests to see if you have the expected
### results. If everything is correct, this cell has no output
assert v == [5, 10, 2, 6, 1]
assert mv == [
[5, 10, 2, 6, 1]
]
assert vT == [
[5],
[10],
[2],
[6],
[1]]
assert m == [
[8, 7, 1, 2, 3],
[1, 5, 2, 9, 5],
[8, 2, 2, 4, 1]
]
assert r == [
[40, 35, 5, 10, 15],
[5, 25, 10, 45, 25],
[40, 10, 10, 20, 5]
]
```
### Print Out Your Results
```
### Run this cell to print out your answers
print(v)
print(mv)
print(vT)
print(m)
print(r)
```
| github_jupyter |
```
%load_ext sql
%sql sqlite:///flights.db
```
Homework 1
=======
### Instructions:
**_Please read these carefully_**
* The `prettytable` module must be installed before the scripts can run. (Install it with: `pip install --user prettytable`)
* The `flights.db` file must be in the same directory as this homework Jupyter notebook (if you do not have it, download it [here](http://open.gnu.ac.kr/lecslides/2018-2-DB/Assignments1/flights.db.zip)) and unzip it. In the directory containing `flights.db.zip`, run `unzip flights.db.zip`.
* After downloading the `flights.db` database, run the command in the very first cell.
* You are strongly encouraged to create new cells for testing, debugging, exploring, and so on.
* If you run a cell and see `In [*]:` on its left, the cell is still _running_.
* **If a cell seems stuck and produces no output for a long time: restart the Python kernel so that it reconnects to SQL.**
  * To restart the kernel: "Kernel >> Restart & Clear Output", then run the cells one by one from the top.
  * Likewise, to load a different version of the database you also need to create a new connection.
* Remember:
  * `%sql [SQL query;]` is used for _single-line_ SQL queries
  * `%%sql [SQL query;]` is used for _multi-line_ SQL queries (see the short example below)
* Running `submit.py` executes your queries and prints the results
* The expected output is given in the file `correct_output.txt`.
* To compare your results, run `python sanity_check.py`, or run `python submit.py > my_output; diff my_output correct_output.txt` in a terminal.
* **The `submit.py` file you submit must follow the format below exactly.** By "format" we mean:
  * Column names must be **exactly the same** as the names shown in `correct_output.txt`
  * Column order must also be **exactly the same** as the order shown in `correct_output.txt`
### How to submit:
* Do not submit the iPython notebook itself
* Instead, copy and paste each query into `submit.py` under the matching question number
* Do not include the `%sql` or `%%sql` magics in your SQL statements
* Your queries will be graded by running them against the same schema with randomly chosen values, so do not hard-code constants just to reproduce the expected answers
* **Submit your answers following the procedure described in `submission_instructions.txt`**
_Let's have some fun!_
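For example (these two throwaway queries are just illustrations; the graded questions start below), a one-liner can be run as `%sql SELECT COUNT(*) FROM airlines;`, while the cell magic spans multiple lines:
```
%%sql
SELECT *
FROM weekdays
LIMIT 3;
```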
Overview: Flight Delays
------------------------
Nothing is as annoying as a delayed flight, right?
Let's look for new ways to keep our itineraries from being delayed. A dataset we recently found does a good job of explaining why flights get delayed and what we may have to give up.
Let's use SQL to find out those reasons.
----
This assignment uses data on passenger flight delays from July 2017. Let's look at the main relation in the database.
```
%%sql
SELECT *
FROM flight_delays
LIMIT 1;
```
There are quite a lot of columns, but how many rows are there?
```
%%sql
SELECT COUNT(*) AS num_rows
FROM flight_delays
```
That is a lot of data! We will not find the answers with just our hands and heads.
We will not need to load any more data into the database. To find out what the columns mean, follow [this link](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236).
A few additional tables are also included. Using them, you can convert `airline_id`, `airport_id`, and `day_of_week` into human-readable values.
Use the cell below to inspect the contents of `airlines` and `weekdays`:
```
%%sql
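-- Illustrative starter query (feel free to replace it): peek at the airlines lookup table,
-- then try the same for weekdays.
SELECT * FROM airlines LIMIT 10;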
```
Great. Now let's get started.
# SQL Queries
Query 1: What is the average flight delay?
------------------------
To get a feel for the data, let's start with a simple query.
In the cell below, write a query that computes the average delay of all flights during July 2017.
```
%%sql
```
Query 2: What was the longest delay?
------------------------
The average is not that large. But what is the _longest_ delay?
In the cell below, write a query that finds the longest arrival delay during July 2017.
```
%%sql
```
Query 3: Which flight should you avoid for your sanity?
------------------------
Which flight was the latest?
In the cell below, write a query that prints the carrier (`carrier`), flight number, origin city, destination city, and flight date of the latest-arriving flight in July 2017. Do not plug the value obtained above into your query; use a nested query instead.
```
%%sql
```
Query 4: Which day of the week is the worst for traveling?
------------------------
Now that the semester has started you cannot travel far, but business trips are still unavoidable. Which day of the week is the worst for flying?
In the cell below, write a query that outputs the average delay for each day of the week, sorted in descending order. The output schema should be (`weekday_name`, `average_delay`).
**Note: do not output the raw day-of-week IDs.** (Hint: join with the `weekdays` table to output the weekday names.)
```
%%sql
```
Query 5: Which airline departing from SFO has the longest delays?
------------------------
Now that we know which day to avoid, we need to pick one of the airlines departing from SFO. Since we have not said where we are going, let's compute the average delay of the flights of every airline departing from SFO.
In the cell below, write a query that computes, for each airline, the average delay of all of its flights departing from SFO in July 2017, sorted in descending order.
**Note: do not output the raw airline IDs.** (Hint: join with the `airlines` table in a nested query to output the airline names.)
```
%%sql
```
Query 6: What fraction of airlines are frequently delayed?
------------------------
Many flights are delayed. Let's find out which airlines have long delays.
In the cell below, compute the fraction of airlines whose flights were delayed by an average of 10 minutes or more. Do not count the total number of airlines yourself and embed it in the query, and use at least one `HAVING` clause.
Note: sqlite's `COUNT(*)` returns an integer, so to get a floating-point result you need to write `SELECT CAST (COUNT(*) AS float)` or `COUNT(*)*1.0` at least once.
```
%%sql
```
Query 7: How do departure delays affect arrival delays?
------------------------
We want to know how much a late departure affects the arrival time.
The [sample covariance](https://en.wikipedia.org/wiki/Covariance) is a statistic that measures how much two variables vary together and tells us whether they are correlated. The larger the covariance, the stronger the correlation; a negative value indicates an inverse correlation. The sample covariance is computed as:
$$
Cov(X,Y) = \frac{1}{n-1} \sum_{i=1}^n (x_i-\bar{x})(y_i-\bar{y})
$$
where $x_i$ is the $i$-th value of $X$ and $y_i$ is the $i$-th value of $Y$. The means of $X$ and $Y$ are written $\bar{x}$ and $\bar{y}$.
In the cell below, write a single query that computes the covariance between the arrival delays and the departure delays.
*Note: the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) could also be used; it is normalized, reporting the correlation as a value between -1 and 1. However, SQLite has no built-in square root function, so we cannot use that formula here. Other common databases (PostgreSQL and MySQL) do implement a square root function.*
```
%%sql
```
Query 8: It was a rough week...
------------------------
For which airlines was the average delay during the last week of July (from the 24th on) strictly longer than their average delay during the preceding weeks (before the 24th)?
In the cell below, write a query that prints the names of the airlines whose average delay between the 24th and the 31st was strictly longer than their average delay between the 1st and the 23rd.
Note: per [date handling in sqlite](http://www.sqlite.org/lang_datefunc.html), it will be easiest to write the query using `day_of_month`.
Note 2: this may be the hardest query of the assignment; it is best to write small queries that solve one part at a time and then combine them into the final query.
Hint: it can be computed with two subqueries: one computes the average arrival delay on and after the 24th, the other computes it before the 24th, and joining the two lets you compare the delays.
```
%%sql
```
Query 9: Progressive and revolutionary
------------------------
We want to visit both Portland (PDX) and Eugene (EUG), but getting to both in one trip is not easy. To rack up frequent-flyer miles, we want to fly to each city on the same airline. We want to know whether any airline flies both SFO -> PDX and SFO -> EUG.
In the cell below, write a single SQL query that prints the distinct names (no duplicates, and not the IDs) of the airlines that flew both SFO -> PDX and SFO -> EUG in July 2017.
```
%%sql
```
Query 10: Deciding between fatigue and equal distances
------------------------
We are traveling from Chicago to California. We would like to depart from Midway (MDW) or O'Hare (ORD) and arrive at San Francisco (SFO), San Jose (SJC), or Oakland (OAK). Assuming the month is July, which route departing Chicago at 14:00 local time has the shortest delay?
In the cell below, write a single query that computes the average delay of the flights departing MDW or ORD at 14:00 local time (`crs_dep_time`) and arriving at SFO, SJC, or OAK. Group by the departure and arrival airports, and sort by delay in descending order.
Note: the `crs_dep_time` field is an integer in hhmm format (e.g. 4:15pm is 1615).
```
%%sql
```
## All done. Time to submit.
* See the instructions at the top of this notebook for how to submit.
| github_jupyter |
# Generating C code for the right-hand-side of the scalar wave equation, in ***curvilinear*** coordinates, using a reference metric formalism
## Author: Zach Etienne
### Formatting improvements courtesy Brandon Clark
[comment]: <> (Abstract: TODO)
**Notebook Status:** <font color='green'><b> Validated </b></font>
**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). In addition, all expressions have been validated against a trusted code (the [original SENR/NRPy+ code](https://bitbucket.org/zach_etienne/nrpy)).
### NRPy+ Source Code for this module: [ScalarWaveCurvilinear/ScalarWaveCurvilinear_RHSs.py](../edit/ScalarWaveCurvilinear/ScalarWaveCurvilinear_RHSs.py)
[comment]: <> (Introduction: TODO)
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows
0. [Preliminaries](#prelim): Reference Metrics and Picking Best Coordinate System to Solve the PDE
1. [Example](#example): The scalar wave equation in spherical coordinates
1. [Step 1](#contracted_christoffel): Contracted Christoffel symbols $\hat{\Gamma}^i = \hat{g}^{ij}\hat{\Gamma}^k_{ij}$ in spherical coordinates, using NRPy+
1. [Step 2](#rhs_scalarwave_spherical): The right-hand side of the scalar wave equation in spherical coordinates, using NRPy+
1. [Step 3](#code_validation): Code Validation against `ScalarWaveCurvilinear.ScalarWaveCurvilinear_RHSs` NRPy+ Module
1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='prelim'></a>
# Preliminaries: Reference Metrics and Picking Best Coordinate System to Solve the PDE \[Back to [top](#toc)\]
$$\label{prelim}$$
Recall from [NRPy+ tutorial notebook on the Cartesian scalar wave equation](Tutorial-ScalarWave.ipynb), the scalar wave equation in 3D Cartesian coordinates is given by
$$\partial_t^2 u = c^2 \nabla^2 u \text{,}$$
where $u$ (the amplitude of the wave) is a function of time and Cartesian coordinates in space: $u = u(t,x,y,z)$ (spatial dimension as-yet unspecified), and subject to some initial condition
$$u(0,x,y,z) = f(x,y,z),$$
with suitable (sometimes approximate) spatial boundary conditions.
To simplify this equation, let's first choose units such that $c=1$. Alternative wave speeds can be constructed
by simply rescaling the time coordinate, with the net effect being that the time $t$ is replaced with time in dimensions of space; i.e., $t\to c t$:
$$\partial_t^2 u = \nabla^2 u.$$
As we learned in the [NRPy+ tutorial notebook on reference metrics](Tutorial-Reference_Metric.ipynb), reference metrics are a means to pick the best coordinate system for the PDE we wish to solve. However, to take advantage of reference metrics requires first that we generalize the PDE. In the case of the scalar wave equation, this involves first rewriting in [Einstein notation](https://en.wikipedia.org/wiki/Einstein_notation) (with implied summation over repeated indices) via
$$(-\partial_t^2 + \nabla^2) u = \eta^{\mu\nu} u_{,\ \mu\nu} = 0,$$
where $u_{,\mu\nu} = \partial_\mu \partial_\nu u$, and $\eta^{\mu\nu}$ is the contravariant flat-space metric tensor with components $\text{diag}(-1,1,1,1)$.
Next we apply the "comma-goes-to-semicolon rule" and replace $\eta^{\mu\nu}$ with $\hat{g}^{\mu\nu}$ to generalize the scalar wave equation to an arbitrary reference metric $\hat{g}^{\mu\nu}$:
$$\hat{g}^{\mu\nu} u_{;\ \mu\nu} = \hat{\nabla}_{\mu} \hat{\nabla}_{\nu} u = 0,$$
where $\hat{\nabla}_{\mu}$ denotes the [covariant derivative](https://en.wikipedia.org/wiki/Covariant_derivative) with respect to the reference metric basis vectors $\hat{x}^{\mu}$, and $\hat{g}^{\mu \nu} \hat{\nabla}_{\mu} \hat{\nabla}_{\nu} u$ is the covariant
[D'Alembertian](https://en.wikipedia.org/wiki/D%27Alembert_operator) of $u$.
For example, suppose we wish to model a short-wavelength wave that is nearly spherical. In this case, if we were to solve the wave equation PDE in Cartesian coordinates, we would in principle need high resolution in all three cardinal directions. If instead we chose spherical coordinates centered at the center of the wave, we might need high resolution only in the radial direction, with only a few points required in the angular directions. Thus choosing spherical coordinates would be far more computationally efficient than modeling the wave in Cartesian coordinates.
Let's now expand the covariant scalar wave equation in arbitrary coordinates. Since the covariant derivative of a scalar is equivalent to its partial derivative, we have
\begin{align}
0 &= \hat{g}^{\mu \nu} \hat{\nabla}_{\mu} \hat{\nabla}_{\nu} u \\
&= \hat{g}^{\mu \nu} \hat{\nabla}_{\mu} \partial_{\nu} u.
\end{align}
$\partial_{\nu} u$ transforms as a one-form under covariant differentiation, so we have
$$\hat{\nabla}_{\mu} \partial_{\nu} u = \partial_{\mu} \partial_{\nu} u - \hat{\Gamma}^\tau_{\mu\nu} \partial_\tau u,$$
where
$$\hat{\Gamma}^\tau_{\mu\nu} = \frac{1}{2} \hat{g}^{\tau\alpha} \left(\partial_\nu \hat{g}_{\alpha\mu} + \partial_\mu \hat{g}_{\alpha\nu} - \partial_\alpha \hat{g}_{\mu\nu} \right)$$
are the [Christoffel symbols](https://en.wikipedia.org/wiki/Christoffel_symbols) associated with the reference metric $\hat{g}_{\mu\nu}$.
Then the scalar wave equation is written:
$$0 = \hat{g}^{\mu \nu} \left( \partial_{\mu} \partial_{\nu} u - \hat{\Gamma}^\tau_{\mu\nu} \partial_\tau u\right).$$
Define the contracted Christoffel symbols:
$$\hat{\Gamma}^\tau = \hat{g}^{\mu\nu} \hat{\Gamma}^\tau_{\mu\nu}.$$
Then the scalar wave equation is given by
$$0 = \hat{g}^{\mu \nu} \partial_{\mu} \partial_{\nu} u - \hat{\Gamma}^\tau \partial_\tau u.$$
The reference metrics we adopt satisfy
$$\hat{g}^{t \nu} = -\delta^{t \nu},$$
where $\delta^{t \nu}$ is the [Kronecker delta](https://en.wikipedia.org/wiki/Kronecker_delta). Therefore the scalar wave equation in curvilinear coordinates can be written
\begin{align}
0 &= \hat{g}^{\mu \nu} \partial_{\mu} \partial_{\nu} u - \hat{\Gamma}^\tau \partial_\tau u \\
&= -\partial_t^2 u + \hat{g}^{i j} \partial_{i} \partial_{j} u - \hat{\Gamma}^i \partial_i u \\
\implies \partial_t^2 u &= \hat{g}^{i j} \partial_{i} \partial_{j} u - \hat{\Gamma}^i \partial_i u,
\end{align}
where repeated Latin indices denote implied summation over *spatial* components only. This module implements the bottom equation for arbitrary reference metrics satisfying $\hat{g}^{t \nu} = -\delta^{t \nu}$. To gain an appreciation for what NRPy+ accomplishes automatically, let's first work out the scalar wave equation in spherical coordinates by hand:
<a id='example'></a>
# Example: The scalar wave equation in spherical coordinates \[Back to [top](#toc)\]
$$\label{example}$$
For example, the spherical reference metric is written
$$\hat{g}_{\mu\nu} = \begin{pmatrix}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & r^2 & 0 \\
0 & 0 & 0 & r^2 \sin^2 \theta \\
\end{pmatrix}.
$$
Since the inverse of a diagonal matrix is simply the inverse of the diagonal elements, we can write
$$\hat{g}^{\mu\nu} = \begin{pmatrix}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & \frac{1}{r^2} & 0 \\
0 & 0 & 0 & \frac{1}{r^2 \sin^2 \theta} \\
\end{pmatrix}.$$
The scalar wave equation in these coordinates can thus be written
\begin{align}
0 &= \hat{g}^{\mu \nu} \partial_{\mu} \partial_{\nu} u - \hat{\Gamma}^\tau \partial_\tau u \\
&= \hat{g}^{tt} \partial_t^2 u + \hat{g}^{rr} \partial_r^2 u + \hat{g}^{\theta\theta} \partial_\theta^2 u + \hat{g}^{\phi\phi} \partial_\phi^2 u - \hat{\Gamma}^\tau \partial_\tau u \\
&= -\partial_t^2 u + \partial_r^2 u + \frac{1}{r^2} \partial_\theta^2
u + \frac{1}{r^2 \sin^2 \theta} \partial_\phi^2 u - \hat{\Gamma}^\tau \partial_\tau u\\
\implies \partial_t^2 u &= \partial_r^2 u + \frac{1}{r^2} \partial_\theta^2
u + \frac{1}{r^2 \sin^2 \theta} \partial_\phi^2 u - \hat{\Gamma}^\tau \partial_\tau u
\end{align}
The contracted Christoffel symbols
$\hat{\Gamma}^\tau$ can then be computed directly from the metric $\hat{g}_{\mu\nu}$.
It can be shown (exercise to the reader) that the only nonzero
components of $\hat{\Gamma}^\tau$ in static spherical polar coordinates are
given by
\begin{align}
\hat{\Gamma}^r &= -\frac{2}{r} \\
\hat{\Gamma}^\theta &= -\frac{\cos\theta}{r^2 \sin\theta}.
\end{align}
Thus we have found the Laplacian in spherical coordinates is simply:
\begin{align}
\nabla^2 u &=
\partial_r^2 u + \frac{1}{r^2} \partial_\theta^2 u + \frac{1}{r^2 \sin^2 \theta} \partial_\phi^2 u - \hat{\Gamma}^\tau \partial_\tau u\\
&= \partial_r^2 u + \frac{1}{r^2} \partial_\theta^2 u + \frac{1}{r^2 \sin^2 \theta} \partial_\phi^2 u + \frac{2}{r} \partial_r u + \frac{\cos\theta}{r^2 \sin\theta} \partial_\theta u
\end{align}
(cf. http://mathworld.wolfram.com/SphericalCoordinates.html; though note that they defined the angle $\phi$ as $\theta$ and $\theta$ as $\phi$.)
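If you would like to verify the "exercise to the reader" above without NRPy+, the following plain-SymPy sketch (our own variable names, not NRPy+'s) computes the same contracted Christoffel symbols directly from the spatial metric $\hat{g}_{ij} = \text{diag}\left(1,\ r^2,\ r^2\sin^2\theta\right)$:
```
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
xx = [r, th, ph]
ghat = sp.diag(1, r**2, r**2*sp.sin(th)**2)   # spatial spherical reference metric
ghatinv = ghat.inv()

def GammahatUDD(k, i, j):
    # \hat{\Gamma}^k_{ij} = (1/2) ghat^{kl} (d_i ghat_{lj} + d_j ghat_{li} - d_l ghat_{ij})
    return sp.Rational(1, 2)*sum(ghatinv[k, l]*(sp.diff(ghat[l, j], xx[i])
                                                + sp.diff(ghat[l, i], xx[j])
                                                - sp.diff(ghat[i, j], xx[l])) for l in range(3))

for k in range(3):
    contracted = sum(ghatinv[i, j]*GammahatUDD(k, i, j) for i in range(3) for j in range(3))
    print("contracted GammahatU[" + str(k) + "] =", sp.simplify(contracted))
# Expect -2/r, -cos(theta)/(r**2*sin(theta)) (possibly written using tan), and 0,
# matching the by-hand result above and the NRPy+ output in Step 1 below.
```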
<a id='contracted_christoffel'></a>
# Step 1: Contracted Christoffel symbols $\hat{\Gamma}^i = \hat{g}^{ij}\hat{\Gamma}^k_{ij}$ in spherical coordinates, using NRPy+ \[Back to [top](#toc)\]
$$\label{contracted_christoffel}$$
Let's next use NRPy+ to derive the contracted Christoffel symbols
$$\hat{g}^{ij} \hat{\Gamma}^k_{ij}$$
in spherical coordinates, where $i\in\{1,2,3\}$ and $j\in\{1,2,3\}$ are spatial indices.
As discussed in the [NRPy+ tutorial notebook on reference metrics](Tutorial-Reference_Metric.ipynb), several reference-metric-related quantities in spherical coordinates are computed in NRPy+ (provided the parameter **`reference_metric::CoordSystem`** is set to **`"Spherical"`**), including the inverse spatial spherical reference metric $\hat{g}^{ij}$ and the Christoffel symbols from this reference metric $\hat{\Gamma}^{i}_{jk}$.
```
import sympy as sp
import NRPy_param_funcs as par
import indexedexp as ixp
import reference_metric as rfm
# reference_metric::CoordSystem can be set to Spherical, SinhSpherical, SinhSphericalv2,
# Cylindrical, SinhCylindrical, SinhCylindricalv2, etc.
# See reference_metric.py and NRPy+ tutorial notebook on
# reference metrics for full list and description of how
# to extend.
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
par.set_parval_from_str("grid::DIM",3)
rfm.reference_metric()
contractedGammahatU = ixp.zerorank1()
for k in range(3):
for i in range(3):
for j in range(3):
contractedGammahatU[k] += rfm.ghatUU[i][j] * rfm.GammahatUDD[k][i][j]
for k in range(3):
print("contracted GammahatU["+str(k)+"]:")
sp.pretty_print(sp.simplify(contractedGammahatU[k]))
if k<2:
print("\n\n")
```
<a id='rhs_scalarwave_spherical'></a>
# Step 2: The right-hand side of the scalar wave equation in spherical coordinates, using NRPy+ \[Back to [top](#toc)\]
$$\label{rhs_scalarwave_spherical}$$
Following our [implementation of the scalar wave equation in Cartesian coordinates](Tutorial-ScalarWave.ipynb), we will introduce a new variable $v=\partial_t u$ that will enable us to split the second time derivative into two first-order time derivatives:
\begin{align}
\partial_t u &= v \\
\partial_t v &= \hat{g}^{ij} \partial_{i} \partial_{j} u - \hat{\Gamma}^i \partial_i u.
\end{align}
Adding back the sound speed $c$, we have a choice of a single factor of $c$ multiplying both right-hand sides, or a factor of $c^2$ multiplying the second equation only. We'll choose the latter:
\begin{align}
\partial_t u &= v \\
\partial_t v &= c^2 \left(\hat{g}^{ij} \partial_{i} \partial_{j} u - \hat{\Gamma}^i \partial_i u\right).
\end{align}
Now let's generate the C code for the finite-difference representations of the right-hand sides of the above "time evolution" equations for $u$ and $v$. Since the right-hand side of $\partial_t v$ contains implied sums over $i$ and $j$ in the first term, and an implied sum over $k$ in the second term, we'll find it useful to split the right-hand side into two parts
\begin{equation}
\partial_t v = c^2 \left(
{\underbrace {\textstyle \hat{g}^{ij} \partial_{i} \partial_{j} u}_{\text{Part 1}}}
{\underbrace {\textstyle -\hat{\Gamma}^i \partial_i u}_{\text{Part 2}}}\right),
\end{equation}
and perform the implied sums in two pieces:
```
import NRPy_param_funcs as par
import indexedexp as ixp
import grid as gri
import finite_difference as fin
import reference_metric as rfm
from outputC import *
# The name of this module ("scalarwave") is given by __name__:
thismodule = __name__
# Step 0: Read the spatial dimension parameter as DIM.
DIM = par.parval_from_str("grid::DIM")
# Step 1: Set the finite differencing order to 4.
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER",4)
# Step 2a: Reset the gridfunctions list; below we define the
# full complement of gridfunctions needed by this
# tutorial. This line of code enables us to re-run this
# tutorial without resetting the running Python kernel.
gri.glb_gridfcs_list = []
# Step 2b: Register gridfunctions that are needed as input
# to the scalar wave RHS expressions.
uu, vv = gri.register_gridfunctions("EVOL",["uu","vv"])
# Step 3a: Declare the rank-1 indexed expression \partial_{i} u,
# Derivative variables like these must have an underscore
# in them, so the finite difference module can parse the
# variable name properly.
uu_dD = ixp.declarerank1("uu_dD")
# Step 3b: Declare the rank-2 indexed expression \partial_{ij} u,
# which is symmetric about interchange of indices i and j
# Derivative variables like these must have an underscore
# in them, so the finite difference module can parse the
# variable name properly.
uu_dDD = ixp.declarerank2("uu_dDD","sym01")
# Step 4: Define the C parameter wavespeed. The `wavespeed`
# variable is a proper SymPy variable, so it can be
# used in below expressions. In the C code, it acts
# just like a usual parameter, whose value is
# specified in the parameter file.
wavespeed = par.Cparameters("REAL",thismodule,"wavespeed", 1.0)
# Step 5: Define right-hand sides for the evolution.
uu_rhs = vv
# Step 5b: The right-hand side of the \partial_t v equation
# is given by:
# \hat{g}^{ij} \partial_i \partial_j u - \hat{\Gamma}^i \partial_i u.
# ^^^^^^^^^^^^ PART 1 ^^^^^^^^^^^^^^^^ ^^^^^^^^^^ PART 2 ^^^^^^^^^^^
vv_rhs = 0
for i in range(DIM):
# PART 2:
vv_rhs -= contractedGammahatU[i]*uu_dD[i]
for j in range(DIM):
# PART 1:
vv_rhs += rfm.ghatUU[i][j]*uu_dDD[i][j]
vv_rhs *= wavespeed*wavespeed
# Step 6: Generate C code for scalarwave evolution equations,
# print output to the screen (standard out, or stdout).
fin.FD_outputC("stdout",
[lhrh(lhs=gri.gfaccess("rhs_gfs","uu"),rhs=uu_rhs),
lhrh(lhs=gri.gfaccess("rhs_gfs","vv"),rhs=vv_rhs)])
```
<a id='code_validation'></a>
# Step 3: Code Validation against `ScalarWaveCurvilinear.ScalarWaveCurvilinear_RHSs` NRPy+ Module \[Back to [top](#toc)\]
$$\label{code_validation}$$
Here, as a code validation check, we verify agreement in the SymPy expressions for the RHSs of the Curvilinear Scalar Wave equation (i.e., uu_rhs and vv_rhs) between
1. this tutorial and
2. the NRPy+ [ScalarWaveCurvilinear.ScalarWaveCurvilinear_RHSs](../edit/ScalarWaveCurvilinear/ScalarWaveCurvilinear_RHSs.py) module.
By default, we analyze the RHSs in Spherical coordinates, though other coordinate systems may be chosen.
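As a quick sketch (assuming, as in other NRPy+ tutorials, that the `reference_metric::CoordSystem` parameter is what selects the coordinate system), one might switch coordinate systems and rebuild the reference-metric quantities before re-running Steps 1 and 2:
```
# Sketch: choose a different coordinate system, then rebuild the reference-metric
# quantities (ghatUU, GammahatUDD, ...) before re-evaluating the RHS expressions.
# "Cylindrical" is just an example choice here.
par.set_parval_from_str("reference_metric::CoordSystem", "Cylindrical")
rfm.reference_metric()
```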
```
# Step 7: We already have SymPy expressions for uu_rhs and vv_rhs in
# terms of other SymPy variables. Even if we reset the list
# of NRPy+ gridfunctions, these *SymPy* expressions for
# uu_rhs and vv_rhs *will remain unaffected*.
#
# Here, we will use the above-defined uu_rhs and vv_rhs to
# validate against the same expressions in the
# ScalarWaveCurvilinear/ScalarWaveCurvilinear module,
# to ensure consistency between the tutorial and the
# module itself.
#
# Reset the list of gridfunctions, as registering a gridfunction
# twice will spawn an error.
gri.glb_gridfcs_list = []
# Step 8: Call the ScalarWaveCurvilinear_RHSs() function from within the
# ScalarWaveCurvilinear/ScalarWaveCurvilinear_RHSs.py module,
# which should do exactly the same as in Steps 1-6 above.
import ScalarWaveCurvilinear.ScalarWaveCurvilinear_RHSs as swcrhs
swcrhs.ScalarWaveCurvilinear_RHSs()
# Step 9: Consistency check between the tutorial notebook above
# and the ScalarWaveCurvilinear_RHSs() function from within the
# ScalarWaveCurvilinear/ScalarWaveCurvilinear_RHSs.py module.
print("Consistency check between ScalarWaveCurvilinear tutorial and NRPy+ module:")
print("uu_rhs - swcrhs.uu_rhs: "+str(sp.simplify(uu_rhs - swcrhs.uu_rhs))+"\t\t (should be zero)")
print("vv_rhs - swcrhs.vv_rhs: "+str(sp.simplify(vv_rhs - swcrhs.vv_rhs))+"\t\t (should be zero)")
```
<a id='latex_pdf_output'></a>
# Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-ScalarWaveCurvilinear.pdf](Tutorial-ScalarWaveCurvilinear.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-ScalarWaveCurvilinear.ipynb
!pdflatex -interaction=batchmode Tutorial-ScalarWaveCurvilinear.tex
!pdflatex -interaction=batchmode Tutorial-ScalarWaveCurvilinear.tex
!pdflatex -interaction=batchmode Tutorial-ScalarWaveCurvilinear.tex
!rm -f Tut*.out Tut*.aux Tut*.log
```
| github_jupyter |
```
import os
import urllib
from zipfile import ZipFile
import fileinput
import numpy as np
import gc
import urllib.request
if not os.path.exists('glove.840B.300d.txt'):
if not os.path.exists('glove.840B.300d.zip'):
print('downloading GloVe')
urllib.request.urlretrieve("http://nlp.stanford.edu/data/glove.840B.300d.zip", "glove.840B.300d.zip")
zip = ZipFile('glove.840B.300d.zip')
zip.extractall()
import torch
from torchtext import data
from torchtext import datasets
from torchtext.vocab import GloVe
import fileinput
import numpy as np
from cove import MTLSTM
inputs = data.Field(lower=True, include_lengths=True, batch_first=True)
answers = data.Field(sequential=False)
print('Generating train, dev, test splits')
train, dev, test = datasets.SNLI.splits(inputs, answers)
print('Building vocabulary')
inputs.build_vocab(train, dev, test)
g = GloVe(name='840B', dim=300)
gc.collect()
inputs.vocab.load_vectors(vectors=g)
gc.collect()
answers.build_vocab(train)
model = MTLSTM(n_vocab=len(inputs.vocab), vectors=inputs.vocab.vectors)
model.cuda(0)
train_iter, dev_iter, test_iter = data.BucketIterator.splits(
(train, dev, test), batch_size=100, device=0)
train_iter.init_epoch()
from keras.models import load_model
import tensorflow as tf
# To prevent Tensorflow from being greedy and allocating all GPU memory for itself
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
# Load the saved Keras CoVe model
cove_model = load_model('Keras_CoVe.h5')
TOTAL_NUM_TEST_SENTENCE = 10000
print('Comparing Keras CoVe prediction with Pytorch CoVe')
abs_error_per_dim = 0
total_num_of_dim = 0
num_test_sentence = 0
model.train()
for batch_idx, batch in enumerate(train_iter):
if num_test_sentence > TOTAL_NUM_TEST_SENTENCE:
# It takes a long time to run through all examples hence restricting the test set
break
cove_premise = model(*batch.premise)
#cove_hypothesis = model(*batch.hypothesis)
sentence_sparse_vector = batch.premise[0].data.cpu().numpy()
for i in range(len(sentence_sparse_vector)):
sentence = sentence_sparse_vector[i]
sentence_glove = []
for word in sentence:
sentence_glove.append(inputs.vocab.vectors[word].numpy())
sentence_glove = np.expand_dims(np.array(sentence_glove),0)
if np.any(np.sum(sentence_glove,axis=2)==0):
break
keras_cove_sentence = cove_model.predict(sentence_glove)
keras_cove_sentence = np.squeeze(keras_cove_sentence,0)
pytorch_cove_sentence = cove_premise.data.cpu().numpy()[i]
abs_error_per_dim+=np.sum(np.abs(keras_cove_sentence - pytorch_cove_sentence))
total_num_of_dim+=np.prod(sentence_glove.shape)
num_test_sentence+=1
abs_error_per_dim/=total_num_of_dim
print('abs error per dim:'+str(abs_error_per_dim))
```
| github_jupyter |
```
from HARK.ConsumptionSaving.ConsLaborModel import (
LaborIntMargConsumerType,
init_labor_lifecycle,
)
import numpy as np
import matplotlib.pyplot as plt
from time import process_time
mystr = lambda number: "{:.4f}".format(number) # Format numbers as strings
do_simulation = True
# Make and solve a labor intensive margin consumer i.e. a consumer with utility for leisure
LaborIntMargExample = LaborIntMargConsumerType(verbose=0)
LaborIntMargExample.cycles = 0
t_start = process_time()
LaborIntMargExample.solve()
t_end = process_time()
print(
"Solving a labor intensive margin consumer took "
+ str(t_end - t_start)
+ " seconds."
)
t = 0
bMin_orig = 0.0
bMax = 100.0
# Plot the consumption function at various transitory productivity shocks
TranShkSet = LaborIntMargExample.TranShkGrid[t]
bMin = bMin_orig
B = np.linspace(bMin, bMax, 300)
bMin = bMin_orig
for Shk in TranShkSet:
B_temp = B + LaborIntMargExample.solution[t].bNrmMin(Shk)
C = LaborIntMargExample.solution[t].cFunc(B_temp, Shk * np.ones_like(B_temp))
plt.plot(B_temp, C)
bMin = np.minimum(bMin, B_temp[0])
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Normalized consumption level")
plt.xlim(bMin, bMax - bMin_orig + bMin)
plt.ylim(0.0, None)
plt.show()
# Plot the marginal consumption function at various transitory productivity shocks
TranShkSet = LaborIntMargExample.TranShkGrid[t]
bMin = bMin_orig
B = np.linspace(bMin, bMax, 300)
for Shk in TranShkSet:
B_temp = B + LaborIntMargExample.solution[t].bNrmMin(Shk)
C = LaborIntMargExample.solution[t].cFunc.derivativeX(
B_temp, Shk * np.ones_like(B_temp)
)
plt.plot(B_temp, C)
bMin = np.minimum(bMin, B_temp[0])
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Marginal propensity to consume")
plt.xlim(bMin, bMax - bMin_orig + bMin)
plt.ylim(0.0, 1.0)
plt.show()
# Plot the labor function at various transitory productivity shocks
TranShkSet = LaborIntMargExample.TranShkGrid[t]
bMin = bMin_orig
B = np.linspace(0.0, bMax, 300)
for Shk in TranShkSet:
B_temp = B + LaborIntMargExample.solution[t].bNrmMin(Shk)
Lbr = LaborIntMargExample.solution[t].LbrFunc(B_temp, Shk * np.ones_like(B_temp))
bMin = np.minimum(bMin, B_temp[0])
plt.plot(B_temp, Lbr)
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Labor supply")
plt.xlim(bMin, bMax - bMin_orig + bMin)
plt.ylim(0.0, 1.0)
plt.show()
# Plot the marginal value function at various transitory productivity shocks
pseudo_inverse = True
TranShkSet = LaborIntMargExample.TranShkGrid[t]
bMin = bMin_orig
B = np.linspace(0.0, bMax, 300)
for Shk in TranShkSet:
B_temp = B + LaborIntMargExample.solution[t].bNrmMin(Shk)
if pseudo_inverse:
vP = LaborIntMargExample.solution[t].vPfunc.cFunc(
B_temp, Shk * np.ones_like(B_temp)
)
else:
vP = LaborIntMargExample.solution[t].vPfunc(B_temp, Shk * np.ones_like(B_temp))
bMin = np.minimum(bMin, B_temp[0])
plt.plot(B_temp, vP)
plt.xlabel("Beginning of period bank balances")
if pseudo_inverse:
plt.ylabel("Pseudo inverse marginal value")
else:
plt.ylabel("Marginal value")
plt.xlim(bMin, bMax - bMin_orig + bMin)
plt.ylim(0.0, None)
plt.show()
if do_simulation:
t_start = process_time()
LaborIntMargExample.T_sim = 120 # Set number of simulation periods
LaborIntMargExample.track_vars = ["bNrm", 'cNrm']
LaborIntMargExample.initialize_sim()
LaborIntMargExample.simulate()
t_end = process_time()
print(
"Simulating "
+ str(LaborIntMargExample.AgentCount)
+ " intensive-margin labor supply consumers for "
+ str(LaborIntMargExample.T_sim)
+ " periods took "
+ mystr(t_end - t_start)
+ " seconds."
)
N = LaborIntMargExample.AgentCount
CDF = np.linspace(0.0, 1, N)
plt.plot(np.sort(LaborIntMargExample.controls['cNrm']), CDF)
plt.xlabel(
"Consumption cNrm in " + str(LaborIntMargExample.T_sim) + "th simulated period"
)
plt.ylabel("Cumulative distribution")
plt.xlim(0.0, None)
plt.ylim(0.0, 1.0)
plt.show()
plt.plot(np.sort(LaborIntMargExample.controls['Lbr']), CDF)
plt.xlabel(
"Labor supply Lbr in " + str(LaborIntMargExample.T_sim) + "th simulated period"
)
plt.ylabel("Cumulative distribution")
plt.xlim(0.0, 1.0)
plt.ylim(0.0, 1.0)
plt.show()
plt.plot(np.sort(LaborIntMargExample.state_now['aNrm']), CDF)
plt.xlabel(
"End-of-period assets aNrm in "
+ str(LaborIntMargExample.T_sim)
+ "th simulated period"
)
plt.ylabel("Cumulative distribution")
plt.xlim(0.0, 20.0)
plt.ylim(0.0, 1.0)
plt.show()
# Make and solve a labor intensive margin consumer with a finite lifecycle
LifecycleExample = LaborIntMargConsumerType(**init_labor_lifecycle)
LifecycleExample.cycles = (
1 # Make this consumer live a sequence of periods exactly once
)
start_time = process_time()
LifecycleExample.solve()
end_time = process_time()
print(
"Solving a lifecycle labor intensive margin consumer took "
+ str(end_time - start_time)
+ " seconds."
)
LifecycleExample.unpack('cFunc')
bMax = 20.0
# Plot the consumption function in each period of the lifecycle, using median shock
B = np.linspace(0.0, bMax, 300)
b_min = np.inf
b_max = -np.inf
for t in range(LifecycleExample.T_cycle):
TranShkSet = LifecycleExample.TranShkGrid[t]
Shk = TranShkSet[int(len(TranShkSet) // 2)] # Use the median shock, more or less
B_temp = B + LifecycleExample.solution[t].bNrmMin(Shk)
C = LifecycleExample.solution[t].cFunc(B_temp, Shk * np.ones_like(B_temp))
plt.plot(B_temp, C)
b_min = np.minimum(b_min, B_temp[0])
    b_max = np.maximum(b_max, B_temp[-1])
plt.title("Consumption function across periods of the lifecycle")
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Normalized consumption level")
plt.xlim(b_min, b_max)
plt.ylim(0.0, None)
plt.show()
# Plot the marginal consumption function in each period of the lifecycle, using median shock
B = np.linspace(0.0, bMax, 300)
b_min = np.inf
b_max = -np.inf
for t in range(LifecycleExample.T_cycle):
TranShkSet = LifecycleExample.TranShkGrid[t]
Shk = TranShkSet[int(len(TranShkSet) // 2)] # Use the median shock, more or less
B_temp = B + LifecycleExample.solution[t].bNrmMin(Shk)
MPC = LifecycleExample.solution[t].cFunc.derivativeX(
B_temp, Shk * np.ones_like(B_temp)
)
plt.plot(B_temp, MPC)
b_min = np.minimum(b_min, B_temp[0])
    b_max = np.maximum(b_max, B_temp[-1])
plt.title("Marginal consumption function across periods of the lifecycle")
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Marginal propensity to consume")
plt.xlim(b_min, b_max)
plt.ylim(0.0, 1.0)
plt.show()
# Plot the labor supply function in each period of the lifecycle, using median shock
B = np.linspace(0.0, bMax, 300)
b_min = np.inf
b_max = -np.inf
for t in range(LifecycleExample.T_cycle):
TranShkSet = LifecycleExample.TranShkGrid[t]
Shk = TranShkSet[int(len(TranShkSet) // 2)] # Use the median shock, more or less
B_temp = B + LifecycleExample.solution[t].bNrmMin(Shk)
L = LifecycleExample.solution[t].LbrFunc(B_temp, Shk * np.ones_like(B_temp))
plt.plot(B_temp, L)
b_min = np.minimum(b_min, B_temp[0])
    b_max = np.maximum(b_max, B_temp[-1])
plt.title("Labor supply function across periods of the lifecycle")
plt.xlabel("Beginning of period bank balances")
plt.ylabel("Labor supply")
plt.xlim(b_min, b_max)
plt.ylim(0.0, 1.01)
plt.show()
# Plot the marginal value function at various transitory productivity shocks
pseudo_inverse = True
TranShkSet = LifecycleExample.TranShkGrid[t]
B = np.linspace(0.0, bMax, 300)
b_min = np.inf
b_max = -np.inf
for t in range(LifecycleExample.T_cycle):
TranShkSet = LifecycleExample.TranShkGrid[t]
Shk = TranShkSet[int(len(TranShkSet) / 2)] # Use the median shock, more or less
B_temp = B + LifecycleExample.solution[t].bNrmMin(Shk)
if pseudo_inverse:
vP = LifecycleExample.solution[t].vPfunc.cFunc(
B_temp, Shk * np.ones_like(B_temp)
)
else:
vP = LifecycleExample.solution[t].vPfunc(B_temp, Shk * np.ones_like(B_temp))
plt.plot(B_temp, vP)
b_min = np.minimum(b_min, B_temp[0])
    b_max = np.maximum(b_max, B_temp[-1])
plt.xlabel("Beginning of period bank balances")
if pseudo_inverse:
plt.ylabel("Pseudo inverse marginal value")
else:
plt.ylabel("Marginal value")
plt.title("Marginal value across periods of the lifecycle")
plt.xlim(b_min, b_max)
plt.ylim(0.0, None)
plt.show()
```
| github_jupyter |
### Find the top rooms of fire origin and the top materials first ignited in those rooms
```
import psycopg2
import pandas as pd
from IPython.display import display
conn = psycopg2.connect(service='nfirs')
pd.options.display.max_rows = 1000
df = pd.read_sql_query("select * from codelookup where fieldid = 'PROP_USE' and length(code_value) = 3 order by code_value", conn)['code_value']
codes = list(df.values)
```
#### By property use type (batch by property type)
```
# Create a CSV for each property use type
q = """SELECT x.prop_use,
area_orig,
first_ign,
x.civ_inj,
x.civ_death,
x.flame_sprd,
x.item_sprd,
x.cnt
FROM
( SELECT *,
row_number() over (partition BY area_orig
ORDER BY area_orig, w.cnt DESC, first_ign, w.flame_sprd,w.item_sprd, w.civ_death, w.civ_inj DESC) row_num
FROM
(SELECT distinct bf.area_orig,
bf.first_ign,
bf.prop_use,
bf.flame_sprd,
bf.item_sprd,
COALESCE(bf.oth_death, 0) as civ_death,
COALESCE(bf.oth_inj,0) as civ_inj,
count(*) OVER ( PARTITION BY bf.area_orig, bf.first_ign, bf.flame_sprd, bf.item_sprd, COALESCE(bf.oth_death, 0)+COALESCE(bf.oth_inj,0) ) AS cnt,
row_number() OVER ( PARTITION BY bf.area_orig, bf.first_ign, bf.flame_sprd, bf.item_sprd, COALESCE(bf.oth_death, 0)+COALESCE(bf.oth_inj,0) ) AS row_numbers
FROM joint_buildingfires bf
WHERE bf.area_orig IN
( SELECT area_orig
FROM joint_buildingfires
WHERE prop_use = %(use)s
AND area_orig != 'UU'
AND extract(year from inc_date) > 2011
GROUP BY area_orig
ORDER BY count(1) DESC LIMIT 8)
AND bf.prop_use = %(use)s
AND bf.first_ign != 'UU'
AND extract(year from inc_date) > 2011
ORDER BY area_orig,
first_ign ) w
WHERE w.row_numbers = 1) x
ORDER BY area_orig,
x.cnt DESC,
first_ign
"""
# for c in codes[1:2]:
# df = pd.read_sql_query(q, conn, params=dict(use=c))
# display(df)
for c in codes:
df = pd.read_sql_query(q, conn, params=dict(use=c))
df.to_csv('/tmp/{}.csv'.format(c))
# Testing/sanity checks
q = """SELECT bf.prop_use, bf.area_orig,
bf.first_ign,
bf.flame_sprd,
COALESCE(bf.oth_death, 0) + COALESCE(bf.oth_inj,0) as civ_inj_death,
count(*) OVER ( PARTITION BY bf.area_orig, bf.first_ign, bf.flame_sprd, COALESCE(bf.oth_death, 0)+COALESCE(bf.oth_inj,0) ) AS cnt,
row_number() OVER ( PARTITION BY bf.area_orig, bf.first_ign, bf.flame_sprd, COALESCE(bf.oth_death, 0)+COALESCE(bf.oth_inj,0) ) AS row_numbers
FROM buildingfires bf
WHERE bf.area_orig IN
( SELECT area_orig
FROM buildingfires
WHERE prop_use = %(use)s
AND area_orig != 'UU'
GROUP BY area_orig
ORDER BY count(1) DESC LIMIT 8)
AND bf.prop_use = %(use)s
AND bf.first_ign != 'UU'
ORDER BY area_orig,
first_ign,
cnt desc"""
pd.read_sql_query(q, conn, params=dict(use='100'))
q = """
select count(1)
from joint_buildingfires
where prop_use='100'
and area_orig = '00'
and first_ign = '00'
and COALESCE(oth_death, 0) + COALESCE(oth_inj, 0) = 0
and flame_sprd = 'N'
"""
pd.read_sql_query(q, conn)
# Sanity checks
q = """
select area_orig, first_ign, count(1)
from joint_buildingfires
where area_orig != 'UU'
and first_ign != 'UU'
group by area_orig, first_ign
order by count desc
"""
pd.read_sql_query(q, conn)
# More sanity checks, including civ death/inj + flame spread
q = """
select area_orig, first_ign, flame_sprd, COALESCE(oth_death, 0)+COALESCE(oth_inj,0) as civ_death_inj, count(1)
from joint_buildingfires
where area_orig != 'UU'
and first_ign != 'UU'
group by area_orig, first_ign, flame_sprd, civ_death_inj
order by count desc"""
pd.read_sql_query(q, conn)
# For grouped property use codes, keep only the 6 most popular ignition sources per area of origin
q = """
--
SELECT area_orig,
first_ign,
x.cnt
FROM
( SELECT *,
row_number() over (partition BY area_orig
ORDER BY area_orig, w.cnt DESC, first_ign) row_num
FROM
(SELECT bf.area_orig,
bf.first_ign,
count(*) OVER ( PARTITION BY bf.area_orig, bf.first_ign ) AS cnt,
row_number() OVER ( PARTITION BY bf.area_orig, bf.first_ign ) AS row_numbers
FROM joint_buildingfires bf
WHERE bf.area_orig IN
( SELECT area_orig
FROM joint_buildingfires
WHERE prop_use in ('120', '121', '122', '123', '124', '129')
AND area_orig != 'UU'
GROUP BY area_orig
ORDER BY count(1) DESC LIMIT 8)
AND bf.prop_use in ('120', '121', '122', '123', '124', '129')
AND bf.first_ign != 'UU'
ORDER BY area_orig,
first_ign ) w
WHERE w.row_numbers = 1) x
WHERE x.row_num < 7
ORDER BY area_orig,
x.cnt DESC,
first_ign
"""
df = pd.read_sql_query(q, conn)
display(df)
# Pull all from buildingfires to CSV
q = """
select prop_use, area_orig, first_ign, oth_inj, oth_death, flame_sprd
from joint_buildingfires"""
df = pd.read_sql_query(q, conn)
df.to_csv('/tmp/buildingfires.csv')
```
| github_jupyter |
```
import pandas as pd
import spacy
dir(spacy)
```
# Linguistic Features
## Part-of-speech tagging
After tokenization, spaCy can parse and tag a given Doc. This is where the statistical model comes in, which enables spaCy to make a prediction of which tag or label most likely applies in this context. A model consists of binary data and is produced by showing a system enough examples for it to make predictions that generalise across the language – for example, a word following "the" in English is most likely a noun.
Linguistic annotations are available as Token attributes . Like many NLP libraries, spaCy encodes all strings to hash values to reduce memory usage and improve efficiency. So to get the readable string representation of an attribute, we need to add an underscore _ to its name:
```
nlp = spacy.load('en_core_web_sm')
doc = nlp(u'Apple is looking at buying U.K. startup for $1 billion')
for token in doc:
print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
token.shape_, token.is_alpha, token.is_stop)
```
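For instance, here is a small sketch (reusing `nlp` and `doc` from the cell above) showing that the plain attribute is an integer ID which the vocabulary's string store can resolve back to text:
```
token = doc[0]                         # u'Apple'
print(token.lemma, token.lemma_)       # integer hash vs. readable string
print(nlp.vocab.strings[token.lemma])  # resolving the hash through the StringStore
```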
# Dependency parsing
## Noun chunks
Noun chunks are "base noun phrases" – flat phrases that have a noun as their head. You can think of noun chunks as a noun plus the words describing the noun – for example, "the lavish green grass" or "the world’s largest tech fund". To get the noun chunks in a document, simply iterate over Doc.noun_chunks .
```
nlp = spacy.load('en_core_web_sm')
doc = nlp(u"Autonomous cars shift insurance liability toward manufacturers")
for chunk in doc.noun_chunks:
print(chunk.text, chunk.root.text, chunk.root.dep_,
chunk.root.head.text)
```
## Navigating the parse tree
spaCy uses the terms head and child to describe the words connected by a single arc in the dependency tree. The term dep is used for the arc label, which describes the type of syntactic relation that connects the child to the head. As with other attributes, the value of .dep is a hash value. You can get the string value with .dep_
```
nlp = spacy.load('en_core_web_sm')
doc = nlp(u"Autonomous cars shift insurance liability toward manufacturers")
for token in doc:
print(token.text, token.dep_, token.head.text, token.head.pos_,
[child for child in token.children])
```
Because the syntactic relations form a tree, every word has exactly one head. You can therefore iterate over the arcs in the tree by iterating over the words in the sentence. This is usually the best way to match an arc of interest — from below:
```
from spacy.symbols import nsubj, VERB
nlp = spacy.load('en_core_web_sm')
doc = nlp(u"Autonomous cars shift insurance liability toward manufacturers")
# Finding a verb with a subject from below — good
verbs = set()
for possible_subject in doc:
if possible_subject.dep == nsubj and possible_subject.head.pos == VERB:
verbs.add(possible_subject.head)
print(verbs)
```
If you try to match from above, you'll have to iterate twice: once for the head, and then again through the children:
```
# Finding a verb with a subject from above — less good
verbs = []
for possible_verb in doc:
if possible_verb.pos == VERB:
for possible_subject in possible_verb.children:
if possible_subject.dep == nsubj:
verbs.append(possible_verb)
break
verbs
```
## Iterating around the local tree
A few more convenience attributes are provided for iterating around the local tree from the token. The Token.lefts and Token.rights attributes provide sequences of syntactic children that occur before and after the token. Both sequences are in sentence order. There are also two integer-typed attributes, Token.n_rights and Token.n_lefts , that give the number of left and right children.
```
nlp = spacy.load('en_core_web_sm')
doc = nlp(u"bright red apples on the tree")
print([token.text for token in doc[2].lefts]) # ['bright', 'red']
print([token.text for token in doc[2].rights]) # ['on']
print(doc[2].n_lefts) # 2
print(doc[2].n_rights) # 1
```
You can get a whole phrase by its syntactic head using the Token.subtree attribute. This returns an ordered sequence of tokens. You can walk up the tree with the Token.ancestors attribute, and check dominance with Token.is_ancestor() .
```
nlp = spacy.load('en_core_web_sm')
doc = nlp(u"Credit and mortgage account holders must submit their requests")
root = [token for token in doc if token.head == token][0]
subject = list(root.lefts)[0]
for descendant in subject.subtree:
assert subject is descendant or subject.is_ancestor(descendant)
print(descendant.text, descendant.dep_, descendant.n_lefts,
descendant.n_rights,
[ancestor.text for ancestor in descendant.ancestors])
```
Finally, the .left_edge and .right_edge attributes can be especially useful, because they give you the first and last token of the subtree. This is the easiest way to create a Span object for a syntactic phrase. Note that .right_edge gives a token within the subtree — so if you use it as the end-point of a range, don't forget to +1!
```
nlp = spacy.load('en_core_web_sm')
doc = nlp(u"Credit and mortgage account holders must submit their requests")
span = doc[doc[4].left_edge.i : doc[4].right_edge.i+1]
span.merge()
for token in doc:
print(token.text, token.pos_, token.dep_, token.head.text)
```
## Visualizing dependencies
The best way to understand spaCy's dependency parser is interactively. To make this easier, spaCy v2.0+ comes with a visualization module. You can pass a Doc or a list of Doc objects to displaCy and run displacy.serve to run the web server, or displacy.render to generate the raw markup. If you want to know how to write rules that hook into some type of syntactic construction, just plug the sentence into the visualizer and see how spaCy annotates it.
```
from spacy import displacy
nlp = spacy.load('en_core_web_sm')
doc = nlp(u"Autonomous cars shift insurance liability toward manufacturers")
displacy.render(doc, style='dep', jupyter=True)
```
## Disabling the parser
In the default models, the parser is loaded and enabled as part of the standard processing pipeline. If you don't need any of the syntactic information, you should disable the parser. Disabling the parser will make spaCy load and run much faster. If you want to load the parser, but need to disable it for specific documents, you can also control its use on the nlp object.
```
from spacy.lang.en import English
nlp = spacy.load('en', disable=['parser'])
#nlp = English().from_disk('/model', disable=['parser'])
doc = nlp(u"I don't want parsed", disable=['parser'])
```
# Named Entities
spaCy features an extremely fast statistical entity recognition system that assigns labels to contiguous spans of tokens. The default model identifies a variety of named and numeric entities, including companies, locations, organizations and products. You can add arbitrary classes to the entity recognition system, and update the model with new examples.
## Named Entity Recognition 101
A named entity is a "real-world object" that's assigned a name – for example, a person, a country, a product or a book title. spaCy can recognise various types of named entities in a document, by asking the model for a prediction. Because models are statistical and strongly depend on the examples they were trained on, this doesn't always work perfectly and might need some tuning later, depending on your use case.
Named entities are available as the ents property of a Doc:
```
nlp = spacy.load('en_core_web_sm')
doc = nlp(u'Apple is looking at buying U.K. startup for $1 billion')
for ent in doc.ents:
print(ent.text, ent.start_char, ent.end_char, ent.label_)
```
## Accessing entity annotations
The standard way to access entity annotations is the doc.ents property, which produces a sequence of Span objects. The entity type is accessible either as a hash value or as a string, using the attributes ent.label and ent.label_. The Span object acts as a sequence of tokens, so you can iterate over the entity or index into it. You can also get the text form of the whole entity, as though it were a single token.
You can also access token entity annotations using the token.ent_iob and token.ent_type attributes. token.ent_iob indicates whether an entity starts, continues or ends on the tag. If no entity type is set on a token, it will return an empty string.
```
nlp = spacy.load('en_core_web_sm')
doc = nlp(u'San Francisco considers banning sidewalk delivery robots')
# document level
ents = [(e.text, e.start_char, e.end_char, e.label_) for e in doc.ents]
print(ents)
# token level
ent_san = [doc[0].text, doc[0].ent_iob_, doc[0].ent_type_]
ent_francisco = [doc[1].text, doc[1].ent_iob_, doc[1].ent_type_]
print(ent_san) # [u'San', u'B', u'GPE']
print(ent_francisco) # [u'Francisco', u'I', u'GPE']
```
## Setting entity annotations
To ensure that the sequence of token annotations remains consistent, you have to set entity annotations at the document level. However, you can't write directly to the token.ent_iob or token.ent_type attributes, so the easiest way to set entities is to assign to the doc.ents attribute and create the new entity as a Span .
Keep in mind that you need to create a Span with the start and end index of the token, not the start and end index of the entity in the document. In this case, "FB" is token (0, 1) – but at the document level, the entity will have the start and end indices (0, 2).
```
from spacy.tokens import Span
nlp = spacy.load('en_core_web_sm')
doc = nlp(u"FB is hiring a new Vice President of global policy")
ents = [(e.text, e.start_char, e.end_char, e.label_) for e in doc.ents]
print('Before', ents)
# the model didn't recognise "FB" as an entity :(
ORG = doc.vocab.strings[u'ORG'] # get hash value of entity label
fb_ent = Span(doc, 0, 1, label=ORG) # create a Span for the new entity
doc.ents = list(doc.ents) + [fb_ent]
ents = [(e.text, e.start_char, e.end_char, e.label_) for e in doc.ents]
print('After', ents)
# [(u'FB', 0, 2, 'ORG')] 🎉
```
## Setting entity annotations from array
You can also assign entity annotations using the doc.from_array() method. To do this, you should include both the `ENT_TYPE` and the ENT_IOB attributes in the array you're importing from.
```
import numpy
from spacy.attrs import ENT_IOB, ENT_TYPE
nlp = spacy.load('en_core_web_sm')
doc = nlp.make_doc(u'London is a big city in the United Kingdom.')
print('Before', list(doc.ents)) # []
header = [ENT_IOB, ENT_TYPE]
attr_array = numpy.zeros((len(doc), len(header)))
attr_array[0, 0] = 3 # B
attr_array[0, 1] = doc.vocab.strings[u'GPE']
doc.from_array(header, attr_array)
print('After', list(doc.ents)) # [London
```
## Setting entity annotations in Cython
Finally, you can always write to the underlying struct, if you compile a Cython function. This is easy to do, and allows you to write efficient native code.
This code needs cython to work.
```cython
# cython: infer_types=True
from spacy.tokens.doc cimport Doc
cpdef set_entity(Doc doc, int start, int end, int ent_type):
for i in range(start, end):
doc.c[i].ent_type = ent_type
doc.c[start].ent_iob = 3
for i in range(start+1, end):
doc.c[i].ent_iob = 2
```
## Training and updating
To provide training examples to the entity recogniser, you'll first need to create an instance of the GoldParse class. You can specify your annotations in a stand-off format or as token tags. If a character offset in your entity annotations doesn't fall on a token boundary, the GoldParse class will treat that annotation as a missing value. This allows for more realistic training, because the entity recogniser is allowed to learn from examples that may feature tokenizer errors.
```python
train_data = [('Who is Chaka Khan?', [(7, 17, 'PERSON')]),
('I like London and Berlin.', [(7, 13, 'LOC'), (18, 24, 'LOC')])]
doc = Doc(nlp.vocab, [u'rats', u'make', u'good', u'pets'])
gold = GoldParse(doc, entities=[u'U-ANIMAL', u'O', u'O', u'O'])
```
## Visualizing named entities
The displaCy ENT visualizer lets you explore an entity recognition model's behaviour interactively. If you're training a model, it's very useful to run the visualization yourself. To help you do that, spaCy v2.0+ comes with a visualization module. You can pass a Doc or a list of Doc objects to displaCy and run displacy.serve to run the web server, or displacy.render to generate the raw markup.
```
from spacy import displacy
text = """But Google is starting from behind. The company made a late push
into hardware, and Apple’s Siri, available on iPhones, and Amazon’s Alexa
software, which runs on its Echo and Dot devices, have clear leads in
consumer adoption."""
nlp = spacy.load('xx_ent_wiki_sm')
doc = nlp(text)
displacy.serve(doc, style='ent')
```
| github_jupyter |
```
# coding=utf-8
import pandas as pd
import numpy as np
from sklearn import preprocessing
df=pd.read_csv(r'./data/happiness_train_complete.csv',encoding='GB2312',index_col='id')
df = df[df["happiness"]>0] #原表中幸福度非正的都是错误数据,可以剔除12条错误数据
df.dtypes[df.dtypes==object] #查得有四列不是数据类型,需要对其进行转化
for i in range(df.dtypes[df.dtypes==object].shape[0]):
print(df.dtypes[df.dtypes==object].index[i])
# Convert the four non-numeric columns; afterwards df is entirely numeric
df["survey_month"] = df["survey_time"].transform(lambda line:line.split(" ")[0].split("/")[1]).astype("int64") # Survey month: split date from time on the space; field 1 of the date is the month
df["survey_day"] = df["survey_time"].transform(lambda line:line.split(" ")[0].split("/")[2]).astype("int64") # Survey day
df["survey_hour"] = df["survey_time"].transform(lambda line:line.split(" ")[1].split(":")[0]).astype("int64") # Survey hour
df=df.drop(columns='survey_time')
enc1=preprocessing.OrdinalEncoder()
enc2=preprocessing.OrdinalEncoder()
enc3=preprocessing.OrdinalEncoder()
df['edu_other']=enc1.fit_transform(df['edu_other'].fillna(0).transform(lambda x:str(x)).values.reshape(-1,1))
print(enc1.categories_) # Inspect the encoder's categories
df['property_other']=enc2.fit_transform(df['property_other'].fillna(0).transform(lambda x:str(x)).values.reshape(-1,1))
print(enc2.categories_) # Inspect the encoder's categories
df['invest_other']=enc3.fit_transform(df['invest_other'].fillna(0).transform(lambda x:str(x)).values.reshape(-1,1))
print(enc3.categories_) # Inspect the encoder's categories
# Define the features X and the target Y
X=df.drop(columns='happiness').fillna(0)
Y=df.happiness
from sklearn import metrics
from sklearn import linear_model
from sklearn import model_selection
#=============
# 1. Linear regression
#=============
#=============
# 1.1 Ordinary linear regression
#=============
reg1=linear_model.LinearRegression()
# Cross-validate to estimate accuracy; because the predictions will be rounded to integers, the built-in CV helpers are not used
# mes1: unrounded, mes2: rounded to nearest, mes3: rounded down, mes4: rounded up
mes1=[]
mes2=[]
mes3=[]
mes4=[]
kf=model_selection.KFold(10,shuffle=True)
for train,test in kf.split(X):
X_train = X.iloc[train]
y_train = Y.iloc[train]
X_test = X.iloc[test]
y_test = Y.iloc[test]
y_pred=reg1.fit(X_train,y_train).predict(X_test)
e1=metrics.mean_squared_error(y_pred,y_test)
e2=metrics.mean_squared_error(np.round(y_pred),y_test)
e3=metrics.mean_squared_error(np.trunc(y_pred),y_test)
e4=metrics.mean_squared_error(np.ceil(y_pred),y_test)
mes1.append(e1)
mes2.append(e2)
mes3.append(e3)
mes4.append(e4)
print('normal_liner:')
print(mes1)
print(np.mean(mes1))
print('-------------')
print(mes2)
print(np.mean(mes2))
print('-------------')
print(mes3)
print(np.mean(mes3))
print('-------------')
print(mes4)
print(np.mean(mes4))
print()
print()
#=============
# 1.2 Lasso regression (L1)
#=============
reg2=linear_model.Lasso()
# Cross-validate to estimate accuracy; because the predictions will be rounded to integers, the built-in CV helpers are not used
# mes1: unrounded, mes2: rounded to nearest, mes3: rounded down, mes4: rounded up
mes1=[]
mes2=[]
mes3=[]
mes4=[]
kf=model_selection.KFold(10,shuffle=True)
for train,test in kf.split(X):
X_train = X.iloc[train]
y_train = Y.iloc[train]
X_test = X.iloc[test]
y_test = Y.iloc[test]
y_pred=reg2.fit(X_train,y_train).predict(X_test)
e1=metrics.mean_squared_error(y_pred,y_test)
e2=metrics.mean_squared_error(np.round(y_pred),y_test)
e3=metrics.mean_squared_error(np.trunc(y_pred),y_test)
e4=metrics.mean_squared_error(np.ceil(y_pred),y_test)
mes1.append(e1)
mes2.append(e2)
mes3.append(e3)
mes4.append(e4)
print('Lasso:')
print(mes1)
print(np.mean(mes1))
print('-------------')
print(mes2)
print(np.mean(mes2))
print('-------------')
print(mes3)
print(np.mean(mes3))
print('-------------')
print(mes4)
print(np.mean(mes4))
print()
print()
#=============
# 1.3 Ridge regression (L2)
#=============
reg3=linear_model.Ridge()
# Cross-validate to estimate accuracy; because the predictions will be rounded to integers, the built-in CV helpers are not used
# mes1: unrounded, mes2: rounded to nearest, mes3: rounded down, mes4: rounded up
mes1=[]
mes2=[]
mes3=[]
mes4=[]
kf=model_selection.KFold(10,shuffle=True)
for train,test in kf.split(X):
X_train = X.iloc[train]
y_train = Y.iloc[train]
X_test = X.iloc[test]
y_test = Y.iloc[test]
y_pred=reg3.fit(X_train,y_train).predict(X_test)
e1=metrics.mean_squared_error(y_pred,y_test)
e2=metrics.mean_squared_error(np.round(y_pred),y_test)
e3=metrics.mean_squared_error(np.trunc(y_pred),y_test)
e4=metrics.mean_squared_error(np.ceil(y_pred),y_test)
mes1.append(e1)
mes2.append(e2)
mes3.append(e3)
mes4.append(e4)
print('Ridge:')
print(mes1)
print(np.mean(mes1))
print('-------------')
print(mes2)
print(np.mean(mes2))
print('-------------')
print(mes3)
print(np.mean(mes3))
print('-------------')
print(mes4)
print(np.mean(mes4))
print()
print()
#=============
# 1.4 Logistic regression
#=============
reg4=linear_model.LogisticRegression(penalty='l2',solver='saga') # Regularization lowers accuracy here, so it is not emphasized
# Cross-validate to estimate accuracy; because the predictions will be rounded to integers, the built-in CV helpers are not used
mes1=[]
kf=model_selection.KFold(10,shuffle=True)
for train,test in kf.split(X):
X_train = X.iloc[train]
y_train = Y.iloc[train]
X_test = X.iloc[test]
y_test = Y.iloc[test]
y_pred=reg4.fit(X_train,y_train).predict(X_test)
e1=metrics.mean_squared_error(y_pred,y_test)
mes1.append(e1)
print('LR:')
print(mes1)
print(np.mean(mes1))
print()
print()
from sklearn import metrics
from sklearn import svm
from sklearn import model_selection
#=============
# 2. SVM
#=============
clf2=svm.SVC() # gamma and C left at their defaults; no tuning
# Cross-validate to estimate accuracy; because the predictions will be rounded to integers, the built-in CV helpers are not used
mes=[]
kf=model_selection.KFold(10,shuffle=True)
for train,test in kf.split(X):
X_train = X.iloc[train]
y_train = Y.iloc[train]
X_test = X.iloc[test]
y_test = Y.iloc[test]
y_pred=clf2.fit(X_train,y_train).predict(X_test)
e1=metrics.mean_squared_error(y_pred,y_test)
mes.append(e1)
print('SVM:')
print(mes)
print(np.mean(mes))
print()
print()
from sklearn import metrics
from sklearn import neighbors
from sklearn import model_selection
#=============
# 3. KNN
#=============
for n in range(10,101,10): # The choice of K will certainly affect the results
    clf3=neighbors.KNeighborsClassifier(n_neighbors=n)
    # Cross-validate to estimate accuracy; because the predictions will be rounded to integers, the built-in CV helpers are not used
mes=[]
kf=model_selection.KFold(10,shuffle=True)
for train,test in kf.split(X):
X_train = X.iloc[train]
y_train = Y.iloc[train]
X_test = X.iloc[test]
y_test = Y.iloc[test]
y_pred=clf3.fit(X_train,y_train).predict(X_test)
e1=metrics.mean_squared_error(y_pred,y_test)
mes.append(e1)
print('KNN(n=%d):'%n)
print(mes)
print(np.mean(mes))
print()
print()
from sklearn import metrics
from sklearn import naive_bayes
from sklearn import model_selection
X_new=X # Planned to standardize, but standardization made the results worse, so it was skipped
#=============
# 4. Naive Bayes
#=============
clf4=naive_bayes.GaussianNB() # Multinomial naive Bayes would not run (it complains about needing a positive-definite matrix or similar), so Gaussian NB is used here
# Cross-validate to estimate accuracy; because the predictions will be rounded to integers, the built-in CV helpers are not used
mes=[]
kf=model_selection.KFold(10,shuffle=True)
for train,test in kf.split(X):
X_train = X_new.iloc[train]
y_train = Y.iloc[train]
X_test = X_new.iloc[test]
y_test = Y.iloc[test]
y_pred=clf4.fit(X_train,y_train).predict(X_test)
e1=metrics.mean_squared_error(y_pred,y_test)
mes.append(e1)
print('bayes:')
print(mes)
print(np.mean(mes))
print()
print()
from sklearn import metrics
from sklearn import tree
from sklearn import model_selection
#=============
# 5. Decision tree
#=============
clf5=tree.DecisionTreeClassifier()
# Cross-validate to estimate accuracy; because the predictions will be rounded to integers, the built-in CV helpers are not used
mes=[]
kf=model_selection.KFold(10,shuffle=True)
for train,test in kf.split(X):
X_train = X.iloc[train]
y_train = Y.iloc[train]
X_test = X.iloc[test]
y_test = Y.iloc[test]
y_pred=clf5.fit(X_train,y_train).predict(X_test)
e1=metrics.mean_squared_error(y_pred,y_test)
mes.append(e1)
print('Tree:')
print(mes)
print(np.mean(mes))
print()
print()
from sklearn import metrics
from sklearn import neural_network
from sklearn import model_selection
#=============
# 6. MLP
#=============
clf6=neural_network.MLPClassifier(hidden_layer_sizes=(10,8,5,3,2),activation='logistic') # Hidden layer sizes chosen arbitrarily
# Cross-validate to estimate accuracy; because the predictions will be rounded to integers, the built-in CV helpers are not used
mes=[]
kf=model_selection.KFold(10,shuffle=True)
for train,test in kf.split(X):
X_train = X.iloc[train]
y_train = Y.iloc[train]
X_test = X.iloc[test]
y_test = Y.iloc[test]
y_pred=clf6.fit(X_train,y_train).predict(X_test)
e1=metrics.mean_squared_error(y_pred,y_test)
mes.append(e1)
print('MLP:')
print(mes)
print(np.mean(mes))
print()
print()
from sklearn import metrics
from sklearn import ensemble
from sklearn import model_selection
#=============
# 7. Random forest
#=============
clf7=ensemble.RandomForestRegressor(n_estimators=20,n_jobs=-1)
# Cross-validate to estimate accuracy; because the predictions will be rounded to integers, the built-in CV helpers are not used
mes=[]
kf=model_selection.KFold(10,shuffle=True)
for train,test in kf.split(X):
X_train = X.iloc[train]
y_train = Y.iloc[train]
X_test = X.iloc[test]
y_test = Y.iloc[test]
y_pred=clf7.fit(X_train,y_train).predict(X_test)
e1=metrics.mean_squared_error(y_pred,y_test)
mes.append(e1)
print('RandomForest:')
print(mes)
print(np.mean(mes))
print()
print()
#=============
# Rank features by importance
import matplotlib.pyplot as plt
%matplotlib inline
a=ensemble.RandomForestRegressor(n_estimators=20).fit(X,Y).feature_importances_
temp=np.argsort(a) # indices that sort the importances in ascending order
a=list(a)
a.sort()
b=[]
for i in temp:
b.append(X.columns[i])
plt.figure(figsize=(10,40))
plt.grid()
plt.barh(b,a,)
# Conclusions about the features:
# 1. The three encoded columns edu_other, property_other, invest_other are not very important, and the property_* and invest_* variables all seem unimportant as well
# 2. Among the top ten, equity and depression reflect social attitudes and state of mind;
#    class, family_income, floor_area reflect wealth;
#    birth, marital_1st, weight_jin, country reflect objective circumstances
#    Why survey_day also has an influence is the most puzzling of these indicators
from sklearn import metrics
from sklearn import ensemble
from sklearn import model_selection
#=============
# 8. GBDT
#=============
clf8=ensemble.GradientBoostingRegressor(max_features=20) # max_features must be set, otherwise training is far too slow
# Cross-validate to estimate accuracy; because the predictions will be rounded to integers, the built-in CV helpers are not used
mes=[]
kf=model_selection.KFold(10,shuffle=True)
for train,test in kf.split(X):
X_train = X.iloc[train]
y_train = Y.iloc[train]
X_test = X.iloc[test]
y_test = Y.iloc[test]
y_pred=clf8.fit(X_train,y_train).predict(X_test)
e1=metrics.mean_squared_error(y_pred,y_test)
mes.append(e1)
print('GBDT:')
print(mes)
print(np.mean(mes))
print()
print()
#=============
# Rank features by importance
import matplotlib.pyplot as plt
%matplotlib inline
a=ensemble.GradientBoostingClassifier().fit(X,Y).feature_importances_
temp=np.argsort(a) # indices that sort the importances in ascending order
a=list(a)
a.sort()
b=[]
for i in temp:
b.append(X.columns[i])
plt.figure(figsize=(10,40))
plt.grid()
plt.barh(b,a,)
from sklearn import metrics
import xgboost
from sklearn import model_selection
#=============
# 9. XGBoost
#=============
clf9=xgboost.XGBRegressor()
# Cross-validate to estimate accuracy; because the predictions will be rounded to integers, the built-in CV helpers are not used
mes=[]
kf=model_selection.KFold(10,shuffle=True)
for train,test in kf.split(X):
X_train = X.iloc[train]
y_train = Y.iloc[train]
X_test = X.iloc[test]
y_test = Y.iloc[test]
y_pred=clf9.fit(X_train,y_train).predict(X_test)
e1=metrics.mean_squared_error(y_pred,y_test)
mes.append(e1)
print('XGBoost:')
print(mes)
print(np.mean(mes))
print()
print()
#=============
# Rank features by importance
import matplotlib.pyplot as plt
%matplotlib inline
a=xgboost.XGBRegressor().fit(X,Y).feature_importances_
temp=np.argsort(a) # indices that sort the importances in ascending order
a=list(a)
a.sort()
b=[]
for i in temp:
b.append(X.columns[i])
plt.figure(figsize=(10,40))
plt.grid()
plt.barh(b,a,)
from sklearn import metrics
import lightgbm
from sklearn import model_selection
# Work around a LightGBM OpenMP duplicate-library error
import os
os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
#=============
# 10. LightGBM
#=============
clf10=lightgbm.LGBMRegressor()
# Cross-validate to estimate accuracy; because the predictions will be rounded to integers, the built-in CV helpers are not used
mes=[]
kf=model_selection.KFold(10,shuffle=True)
for train,test in kf.split(X):
X_train = X.iloc[train]
y_train = Y.iloc[train]
X_test = X.iloc[test]
y_test = Y.iloc[test]
y_pred=clf10.fit(X_train,y_train).predict(X_test)
e1=metrics.mean_squared_error(y_pred,y_test)
mes.append(e1)
print('LightGBM:')
print(mes)
print(np.mean(mes))
print()
print()
#=============
# Rank features by importance
import matplotlib.pyplot as plt
%matplotlib inline
a=lightgbm.LGBMRegressor().fit(X,Y).feature_importances_
temp=np.argsort(a) # indices that sort the importances in ascending order
a=list(a)
a.sort()
b=[]
for i in temp:
b.append(X.columns[i])
plt.figure(figsize=(10,40))
plt.grid()
plt.barh(b,a,)
df1=pd.read_csv(r'./data/happiness_test_complete.csv',encoding='GB2312',index_col='id')
# Convert the four non-numeric columns; afterwards df1 is entirely numeric
df1["survey_month"] = df1["survey_time"].transform(lambda line:line.split(" ")[0].split("/")[1]).astype("int64") # Survey month: split date from time on the space; field 1 of the date is the month
df1["survey_day"] = df1["survey_time"].transform(lambda line:line.split(" ")[0].split("/")[2]).astype("int64") # Survey day
df1["survey_hour"] = df1["survey_time"].transform(lambda line:line.split(" ")[1].split(":")[0]).astype("int64") # Survey hour
df1=df1.drop(columns='survey_time')
def temp1(a):
if a not in enc1.categories_[0]:
return 0
else:
return a
df1['edu_other']=enc1.transform(df1['edu_other'].transform(temp1).transform(lambda x:str(x)).values.reshape(-1,1))
def temp2(a):
if a not in enc2.categories_[0]:
return 0
else:
return a
df1['property_other']=enc2.transform(df1['property_other'].transform(temp2).transform(lambda x:str(x)).values.reshape(-1,1))
def temp3(a):
if a not in enc3.categories_[0]:
return 0
else:
return a
df1['invest_other']=enc3.transform(df1['invest_other'].transform(temp3).transform(lambda x:str(x)).values.reshape(-1,1))
# Define X_test
X_test=df1.fillna(0)
import xgboost
# Result 1
y_test=xgboost.XGBRegressor().fit(X,Y).predict(X_test)
df1_final=pd.DataFrame({'id':X_test.index,'happiness':y_test}).set_index('id')
df1_final.to_csv(r'df1_final.csv')
# Result 1, rounded to the nearest integer
df1_final_round=pd.DataFrame({'id':X_test.index,'happiness':np.round(y_test)}).set_index('id')
df1_final_round.to_csv(r'df1_final.csv')
# Result 2
from sklearn import metrics
import xgboost
from sklearn import model_selection
from sklearn.externals import joblib
#=============
#xgboost_modified
#=============
clf_xgboost_modified=xgboost.XGBRegressor(max_depth=4,min_child_weight=5,gamma=0,subsample=0.8,colsample_bytree=0.75,reg_alpha=5,reg_lambda=0.1)
# Cross-validate to estimate accuracy; because the predictions will be rounded to integers, the built-in CV helpers are not used
mes=[]
i=0
kf=model_selection.KFold(10,shuffle=True)
for train,test in kf.split(X):
X_train = X.iloc[train]
y_train = Y.iloc[train]
X_test1 = X.iloc[test]
y_test1 = Y.iloc[test]
clf_xgboost_modified.fit(X_train,y_train)
y_pred=clf_xgboost_modified.predict(X_test1)
e1=metrics.mean_squared_error(y_pred,y_test1)
mes.append(e1)
joblib.dump(clf_xgboost_modified,filename='xgboost_%d.pkl'%i)
y_test=clf_xgboost_modified.predict(X_test)
df2_final=pd.DataFrame({'id':X_test.index,'happiness':y_test}).set_index('id')
# df2_final.to_csv('df2_xgboost_%d.csv'%i)
i+=1
print('clf_xgboost_modified:')
print(mes)
print(np.mean(mes))
print()
print()
# Grid-search to tune the final model
clf10=lightgbm.LGBMRegressor(metric='l2') # default metric is l2 for regression
param_test = {
'max_depth':np.array([9]),
'min_child_weight':np.array([0.0001]),
'min_split_gain':np.array([0.4]),
'subsample':np.array([0.5]),
'colsample_bytree':np.array([1]),
'reg_alpha':np.array([1e-05]),
'reg_lambda':np.array([0.0001]) ,
'learning_rate':np.array([0.1]),
}
clf=model_selection.GridSearchCV(clf10,param_test,cv=10,n_jobs=-1,scoring='neg_mean_squared_error')
clf.fit(X_train,y_train)
joblib.dump(clf_xgboost_modified,filename='xgboost_%d.pkl'%i)
print("clf.cv_results_['mean_test_score']:=%s"%clf.cv_results_['mean_test_score'])
print(clf.best_score_)
print(clf.best_params_)
# Conclusion: {'colsample_bytree': 1, 'learning_rate': 0.1, 'max_depth': 9, 'min_child_weight': 0.0001, 'min_split_gain': 0.4, 'reg_alpha': 1e-05, 'reg_lambda': 0.0001, 'subsample': 0.5}
# Grid-search to tune the final model
clf8=ensemble.GradientBoostingRegressor(loss='ls')
param_test = {
'max_depth':np.array([2]),
'min_weight_fraction_leaf':np.array([0.002]),
'min_impurity_split':np.array([0.0001]),
'subsample':np.array([0.96]),
'max_features':np.array([0.88]),
'n_estimators':np.array([80]),
'learning_rate':np.array([0.2]),
}
clf=model_selection.GridSearchCV(clf8,param_test,cv=10,n_jobs=-1,scoring='neg_mean_squared_error')
clf.fit(X_train,y_train)
print("clf.cv_results_['mean_test_score']:=%s"%clf.cv_results_['mean_test_score'])
print(clf.best_score_)
print(clf.best_params_)
# Conclusion: {'colsample_bytree': 1, 'learning_rate': 0.1, 'max_depth': 9, 'min_child_weight': 0.0001, 'min_split_gain': 0.4, 'reg_alpha': 1e-05, 'reg_lambda': 0.0001, 'subsample': 0.5}
# Grid-search to tune the final model
param_test = {
'min_samples_split':np.array([4]),
'min_weight_fraction_leaf':np.array([0.01]),
'min_impurity_decrease':np.array([0]),
'n_estimators':[150],
'max_features':[0.8],
}
clf=model_selection.GridSearchCV(clf7,param_test ,cv=10,n_jobs=-1,scoring='neg_mean_squared_error')
clf.fit(X_train,y_train)
print("clf.cv_results_['mean_test_score']:=%s"%clf.cv_results_['mean_test_score'])
print(clf.best_score_)
print(clf.best_params_)
# Conclusion: {'max_features': 0.8, 'min_impurity_decrease': 0, 'min_samples_split': 4, 'min_weight_fraction_leaf': 0.01, 'n_estimators': 150}
```
| github_jupyter |
# Algorithmic Fairness, Accountability, and Ethics, Spring 2022
# Exercise 5
## Task 0 (Setup)
We use the same dataset as in weeks 3 and 4. If you have not installed the module yet, please carry out the installation steps at <https://github.com/zykls/folktables#basic-installation-instructions>.
After successful installation, you should be able to run the following code to generate a prediction task.
To make your life easier, we made the `BasicProblem`-magic from the `folktables` package (see exercises of week 3) explicit in this task.
This way, you can get access to different encodings of the data.
**Note**: Some Windows users could not run the line `acs_data = data_source.get_data(states=["CA"], download=True)`. The dataset is available as a zip file on learnIT under week 3. The direct link is <https://learnit.itu.dk/mod/resource/view.php?id=155305>. Unzip it in the notebook's location, and set `download` to `False` in the code below.
```
from folktables.acs import adult_filter
from folktables import ACSDataSource
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
data_source = ACSDataSource(survey_year='2018', horizon='1-Year', survey='person')
acs_data = data_source.get_data(states=["CA"], download=True)
feature_names = ['AGEP', # Age
"CIT", # Citizenship status
'COW', # Class of worker
"ENG", # Ability to speak English
'SCHL', # Educational attainment
'MAR', # Marital status
"HINS1", # Insurance through a current or former employer or union
"HINS2", # Insurance purchased directly from an insurance company
"HINS4", # Medicaid
"RAC1P", # Recoded detailed race code
'SEX']
target_name = "PINCP" # Total person's income
def data_processing(data, features, target_name:str, threshold: float = 35000):
df = data
### Adult Filter (STARTS) (from Foltktables)
df = df[~df["SEX"].isnull()]
df = df[~df["RAC1P"].isnull()]
df = df[df['AGEP'] > 16]
df = df[df['PINCP'] > 100]
df = df[df['WKHP'] > 0]
df = df[df['PWGTP'] >= 1]
### Adult Filter (ENDS)
### Groups of interest
sex = df["SEX"].values
### Target
df["target"] = df[target_name] > threshold
target = df["target"].values
df = df[features + ["target", target_name]] ##we want to keep df before one_hot encoding to make Bias Analysis
df_processed = df[features].copy()
cols = [ "HINS1", "HINS2", "HINS4", "CIT", "COW", "SCHL", "MAR", "SEX", "RAC1P"]
df_processed = pd.get_dummies(df_processed, prefix=None, prefix_sep='_', dummy_na=False, columns=cols, drop_first=True)
df_processed = pd.get_dummies(df_processed, prefix=None, prefix_sep='_', dummy_na=True, columns=["ENG"], drop_first=True)
return df_processed, df, target, sex
data, data_original, target, group = data_processing(acs_data, feature_names, target_name)
X_train, X_test, y_train, y_test, group_train, group_test = train_test_split(
data, target, group, test_size=0.2, random_state=0)
```
# Task 1 (Decision tree)
1. Train a decision tree classifier on the training dataset. (You can work on the original dataset or on the one-hot encoded one.) The following parameter choices worked well in our setup: `(DecisionTreeClassifier(min_samples_split = 0.01, min_samples_leaf= 0.01, max_features="auto", max_depth = 15, criterion = "gini", random_state = 0))` Report on its accuracy. Visualize the tree using `plot_tree` from `sklearn`. Which parameters can you change to adapt the size of the tree? Try to find parameters that make the tree easier to understand. (A minimal sketch follows this task list.)
2. For two training examples, explain their classification given the decision tree.
3. Compute feature importance as shown in the lecture. Which features are most important?
4. Compute permuted feature importance using sklearn as shown in the lecture. How does feature importance change?
5. Provide a counterfactual for a feature vector that is predicted negatively. Compare to the counterfactual for logistic regression (last week's exercises). Is it a counterfactual in both models?
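A minimal sketch for item 1 (our own variable names; the parameters are the ones suggested above, and `X_train`, `X_test`, `y_train`, `y_test` come from Task 0):
```
from sklearn.tree import DecisionTreeClassifier, plot_tree
import matplotlib.pyplot as plt

# Fit the suggested tree and report its test accuracy
clf = DecisionTreeClassifier(min_samples_split=0.01, min_samples_leaf=0.01,
                             max_features="auto", max_depth=15,
                             criterion="gini", random_state=0)
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))

# Limiting max_depth here only limits the rendering; a shallow plot is easier to read
plt.figure(figsize=(20, 10))
plot_tree(clf, feature_names=list(X_train.columns), max_depth=3, filled=True, fontsize=8)
plt.show()

# Impurity-based feature importances, largest first
importances = pd.Series(clf.feature_importances_, index=X_train.columns).sort_values(ascending=False)
print(importances.head(10))
```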
# Task 2 (Black-box model)
1. Train a black-box classifier (for example, a random forest, a gradient-boosted decision tree, an SVM, or a neural network). Report on its accuracy. If you have used a tree-based model such as a random forest or gradient-boosted decision trees, report on the feature importance as in Task 1. (A hedged sketch follows this task list.)
2. Both for the decision tree and the black-box classifier, use the `shap` module to explain predictions. Contrast the two models to each other: What are similarities, how do they differ? As shown in the lecture, provide a summary plot, a dependence plot, a force plot for a negatively/positively predicted feature vector, and summary plot on the interaction values.
3. Reflect on the explanations: How does the _decision tree_'s black-box explanation relate to its white-box explanation? Which classifier would you prefer when deploying a model as part of the machine learning pipeline?
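A hedged sketch for items 1–2, using a random forest as one possible black-box model (the exact return type of `shap_values` differs across `shap` versions, hence the defensive indexing):
```
import shap
from sklearn.ensemble import RandomForestClassifier

# One possible black-box model
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
print("Test accuracy:", rf.score(X_test, y_test))

# SHAP values for tree ensembles
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_test)
# Some shap versions return one array per class for classifiers; use the positive class
sv = shap_values[1] if isinstance(shap_values, list) else shap_values
shap.summary_plot(sv, X_test)
shap.dependence_plot("AGEP", sv, X_test)
```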
| github_jupyter |
1 Short Answer (25pts)
1. (5pts) True or False. (And explain your reason.) Mean-variance optimization goes long the highest Sharpe-ratio assets and shorts the lowest Sharpe-ratio assets.
A. False: not necessarily. MV optimization trades off total portfolio return against portfolio variance. The optimization is driven by each asset's marginal correlation with the rest of the assets, so individual-security Sharpe ratios are not decisive. If the asset with the lowest Sharpe ratio brings a large diversification benefit, the MV portfolio could well go long that asset.
2. (5pts) True or False. (And explain your reason.) Investing in an LETF makes more sense for a long-term horizon than a short-term horizon.
A. False: investing in a leveraged ETF makes more sense over a short horizon than a long one. While an LETF can replicate the leveraged index quite closely on a day-to-day basis, over the long term compounding causes its return to drift away from the leveraged index return, so it will not replicate the index over a longer time period.
Example (as in the lecture notes): the two expressions below are not equivalent:
$$\text{levered index return} = w\,(1+r_1)(1+r_2)\cdots - w$$
$$\text{LETF return} = (1+w r_1)(1+w r_2)\cdots - 1$$
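A tiny numerical sketch (with made-up returns) of how far apart the two expressions drift after just two days:
```
import numpy as np

w = 2.0                                 # leverage
r = np.array([0.10, -0.10])             # two daily index returns
levered_index = w * np.prod(1 + r) - w  # w times the two-day index return: -0.02
letf = np.prod(1 + w * r) - 1           # daily-rebalanced LETF return: -0.04
print(levered_index, letf)
```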
3. (5pts) This week ProShares launches BITO on the NYSE. The ETF holds Bitcoin futures contracts. Suppose that a year from now we want to try to replicate BITO using SPY and IEF as regressors in an LFD. Because BITO will only have a year of data, we do not trust that we will have a good estimate of the mean return. Do you suggest that we (in a year) estimate the regression with an intercept or without an intercept? Why?
A. We should estimate the regression with an intercept. Given that we do not have much data and only two regressors (SPY, IEF), it is quite possible that the variation in BITO is not well explained by these two. Including an intercept lets the regression capture a constant that absorbs what the regressors miss. If the two regressors do explain BITO well, we should end up with a small (near-zero) intercept, in which case the result is close to a regression without an intercept anyway.
4. (5pts) Is HDG effective at tracking HFRI in-sample? And out of sample?
A. HDG is effective at tracking HFRI only in-sample, since the estimates come from the LFD fit on the in-sample data. When those same (constant) estimates are applied to out-of-sample data, they do not replicate the movements as well. This motivates using a rolling-window estimation.
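A small sketch of the rolling-window replication idea, on synthetic stand-in series (the real exercise would use HFRI and the style factors):
```
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.regression.rolling import RollingOLS

rng = np.random.default_rng(0)
factors = pd.DataFrame(rng.normal(0, 0.02, size=(240, 3)), columns=['f1', 'f2', 'f3'])
target = 0.5 * factors['f1'] + 0.3 * factors['f2'] + rng.normal(0, 0.01, 240)

X = sm.add_constant(factors)
rolling_betas = RollingOLS(target, X, window=60).fit().params
# Out-of-sample replication: apply last period's betas to this period's factor returns
replication = (rolling_betas.shift(1) * X).sum(axis=1, skipna=False)
print(replication.dropna().head())
```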
5. (5pts) A hedge fund claims to beat the market by having a very high alpha. After regressing the hedge fund returns on the 6 Merrill-Lynch style factors, you find the alpha to be negative. Explain why this discrepancy can happen.
A. This can happen if one of the regressors is strongly correlated with the constant (something with a nearly constant return and ultra-low volatility, for example a short-term Treasury series). The hedge fund's alpha is then captured in the beta on that regressor, which can leave the estimated alpha negative.
2 Allocation (25 pts)
Consider the Merrill-Lynch style factors found in “proshares_analysis_data.xlsx”, sheet “merrill_factors”. We will use “USGG3M Index” as the risk-free rate. Subtract it from the other 5 columns, and proceed with those 5 risky assets.
1. (5pts) What are the weights of the tangency portfolio, $w_{tan}$?
2. (5pts) What are the weights of the optimal portfolio, $w^*$, with a targeted excess mean return of .02 per month? Is the optimal portfolio, $w^*$, invested in the risk-free rate?
3. (5pts) Report the mean, volatility, and Sharpe ratio of the optimized portfolio. Annualize all three statistics. (A small annualization sketch follows this list.)
4. (5pts) Re-calculate the optimal portfolio, $w^*$, with target excess mean of .02 per month. But this time only use data through 2018 in doing the calculation. Calculate the return in 2019-2021 based on those optimal weights. Report the mean, volatility, and Sharpe ratio of the 2019-2021 performance.
5. (5pts) Suppose that instead of optimizing these 5 risky assets, we optimized 5 commodity futures:oil, coffee, cocoa, lumber, cattle, and gold.Do you think the out-of-sample fragility problem would be better or worse than what we haveseen optimizing equities?No calculation is needed for this question–we just want a conceptual (though specific) answer.
```
import pandas as pd
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.rolling import RollingOLS
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
import seaborn as sns
# reading data
factor_data = pd.read_excel('/Users/rkb/Desktop/PortfolioT/data/proshares_analysis_data.xlsx', sheet_name = 'merrill_factors')
factor_data = factor_data.set_index('date')
factor_data.head()
risky_data = factor_data.subtract(factor_data["USGG3M Index"], axis=0)
risky_data = risky_data.drop(columns=["USGG3M Index"])
risky_data.head()
#calc tangency weights
def tangency_weights(returns,dropna=True,scale_cov=1):
if dropna:
returns = returns.dropna()
covmat_full = returns.cov()
covmat_diag = np.diag(np.diag(covmat_full))
covmat = scale_cov * covmat_full + (1-scale_cov) * covmat_diag
weights = np.linalg.solve(covmat,returns.mean())
weights = weights / weights.sum()
return pd.DataFrame(weights, index=returns.columns)
#1. (5pts) What are the weights of the tangency portfolio,wtan?
# assumption = sum of weights = 1, all of the portfolio invested in risky assets and nothing in risk free
weight_v = tangency_weights(risky_data)
weight_v
#2. (5pts) What are the weights of the optimal portfolio,w∗, with a targeted excess mean return of.02 per month?
#Is the optimal portfolio,w∗, invested in the risk-free rate?
def compute_tangency(df_tilde, diagonalize_Sigma=False):
"""Compute tangency portfolio given a set of excess returns.
Also, for convenience, this returns the associated vector of average
returns and the variance-covariance matrix.
Parameters
----------
diagonalize_Sigma: bool
When `True`, set the off diagonal elements of the variance-covariance
matrix to zero.
"""
Sigma = df_tilde.cov()
# N is the number of assets
N = Sigma.shape[0]
Sigma_adj = Sigma.copy()
if diagonalize_Sigma:
Sigma_adj.loc[:,:] = np.diag(np.diag(Sigma_adj))
mu_tilde = df_tilde.mean()
Sigma_inv = np.linalg.inv(Sigma_adj)
weights = Sigma_inv @ mu_tilde / (np.ones(N) @ Sigma_inv @ mu_tilde)
# For convenience, I'll wrap the solution back into a pandas.Series object.
omega_tangency = pd.Series(weights, index=mu_tilde.index)
return omega_tangency, mu_tilde, Sigma_adj
def target_mv_portfolio(df_tilde, target_return=0.02, diagonalize_Sigma=False):
"""Compute MV optimal portfolio, given target return and set of excess returns.
Parameters
----------
diagonalize_Sigma: bool
When `True`, set the off diagonal elements of the variance-covariance
matrix to zero.
"""
omega_tangency, mu_tilde, Sigma = compute_tangency(df_tilde, diagonalize_Sigma=diagonalize_Sigma)
Sigma_adj = Sigma.copy()
if diagonalize_Sigma:
Sigma_adj.loc[:,:] = np.diag(np.diag(Sigma_adj))
Sigma_inv = np.linalg.inv(Sigma_adj)
N = Sigma_adj.shape[0]
delta_tilde = ((np.ones(N) @ Sigma_inv @ mu_tilde)/(mu_tilde @ Sigma_inv @ mu_tilde)) * target_return
omega_star = delta_tilde * omega_tangency
return omega_star, mu_tilde, Sigma_adj
omega_star, mu_tilde, Sigma = target_mv_portfolio(risky_data)
omega_star_df = omega_star.to_frame('MV Portfolio Weights')
omega_star_df
#A. Weights shown below. The optimal portfolio shorts the risk-free rate:
#the tangency portfolio's excess mean (estimated previously) is below 2% per month, so sum(MV weights) > 1.
# Element-wise weight x mean: each asset's contribution to the portfolio's mean excess return
mu_tilde.transpose()*omega_star
#3. (5pts) Report the mean, volatility, and Sharpe ratio of the optimized portfolio.
#Annualize allthree statistics.
def portfolio_stats(omega, mu_tilde, Sigma, annualize_fac):
# Mean
mean = (mu_tilde @ omega) * annualize_fac
# Volatility
vol = np.sqrt(omega @ Sigma @ omega) * np.sqrt(annualize_fac)
# Sharpe ratio
sharpe_ratio = mean / vol
return round(pd.DataFrame(data = [mean, vol, sharpe_ratio],
index = ['Mean', 'Volatility', 'Sharpe'],
columns = ['Portfolio Stats']), 4)
portfolio_stats(omega_star, mu_tilde, Sigma, 12)
#4. (5pts) Re-calculate the optimal portfolio,w∗with target excess mean of .02 per month.
#Butthis time only use data through 2018 in doing the calculation.
#Calculate the return in 2019-2021based on those optimal weights.Report the mean, volatility, and Sharpe ratio of the 2019-2021 performance.
r_data = pd.read_excel('/Users/rkb/Desktop/PortfolioT/data/proshares_analysis_data.xlsx', sheet_name = 'merrill_factors')
r_data['date'] = pd.to_datetime(r_data['date'])
r_data.set_index('date',inplace=True)
r_data.head()
t_data = r_data.loc[:'2018']  # keep only observations through 2018
t_data.tail()
```
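The cell above stops after building the pre-2019 subsample; what follows is a minimal sketch of how question 4 could be completed with the helpers already defined (`target_mv_portfolio`), assuming the same column names and the stated 2% monthly excess-return target:
```
# In-sample (through 2018) excess returns over the 3-month T-bill
risky_is = t_data.subtract(t_data["USGG3M Index"], axis=0).drop(columns=["USGG3M Index"])

# Out-of-sample (2019-2021) excess returns
oos = r_data.loc['2019':'2021']
risky_oos = oos.subtract(oos["USGG3M Index"], axis=0).drop(columns=["USGG3M Index"])

# Optimal weights estimated only on data through 2018
omega_star_is, mu_is, Sigma_is = target_mv_portfolio(risky_is, target_return=0.02)

# Realized 2019-2021 excess returns of that portfolio, annualized
oos_returns = risky_oos @ omega_star_is
mean_oos = oos_returns.mean() * 12
vol_oos = oos_returns.std() * np.sqrt(12)
print(round(pd.Series({'Mean': mean_oos, 'Volatility': vol_oos, 'Sharpe': mean_oos / vol_oos}), 4))
```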
#5. (5pts) Suppose that instead of optimizing these 5 risky assets, we optimized 5 commodity futures: oil, coffee, cocoa, lumber, cattle, and gold.
Do you think the out-of-sample fragility problem would be better or worse than what we have seen optimizing equities?
No calculation is needed for this question–we just want a conceptual (though specific) answer.
#A. Pairwise correlations among equities are generally higher than among commodities (per the classroom discussion). With lower correlations, the optimized portfolio is better diversified and the weights are less sensitive to estimation error, so the out-of-sample fragility problem should be less severe than when optimizing equities.
3 Hedging & Replication (20pts)
Continue to use the same data file from the previous problem.
Suppose we want to invest in EEM, but hedge out SPY. Do this by estimating a regression of EEM on SPY.
- Do NOT include an intercept.
- Use the full sample of data.
1. (5pts) What is the optimal hedge ratio over the full sample of data? That is, for every dollar invested in EEM, what would you invest in SPY?
2. (5pts) What is the mean, volatility, and Sharpe ratio of the hedged position, had we applied that hedge throughout the full sample? Annualize the statistics.
3. (5pts) Does it have the same mean as EEM? Why or why not?
4. (5pts) Suppose we estimated a multifactor regression where in addition to SPY, we had IWM as a regressor. Why might this regression be difficult to use for attribution or even hedging?
```
hedge_data = pd.read_excel('/Users/rkb/Desktop/PortfolioT/data/proshares_analysis_data.xlsx', sheet_name = 'merrill_factors')
hedge_data.set_index('date',inplace=True)
hedge_data.head()
X = risky_data['SPY US Equity']
#X = hedge_data['EEM US Equity']
y = risky_data['EEM US Equity']
static_model_noint = sm.OLS(y,X).fit()
static_model_noint.params
#1 A: Optimal hedge ratio = the regression beta (~0.9257). For every $1 invested in EEM, short about $0.93 of SPY.
def summary_stats(df, annual_fac):
report = pd.DataFrame()
report['Mean'] = df.mean() * annual_fac
report['Vol'] = df.std() * np.sqrt(annual_fac)
report['Sharpe'] = report['Mean'] / report['Vol']
return round(report, 4)
# Stats of the beta-scaled SPY hedge leg (0.92566 * SPY); the hedged position itself is EEM - 0.92566 * SPY
summary_stats(risky_data[['SPY US Equity']]*0.92566,12)
#2 A: Stats below (see also the hedged-position sketch after this cell)
summary_stats(risky_data[['EEM US Equity','SPY US Equity']],12)
#3 A: It does not have the same mean as EEM, because hedging out SPY strips away the part of EEM's mean return earned through its SPY exposure
```
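As a complement, a minimal sketch of the hedged-position statistics asked for in question 2, reusing `static_model_noint` and `summary_stats` from the cell above (the column names follow the earlier cells):
```
# Hedge ratio from the no-intercept regression of EEM on SPY
beta_spy = static_model_noint.params['SPY US Equity']

# Hedged position: long $1 of EEM, short beta dollars of SPY, each month
hedged = risky_data['EEM US Equity'] - beta_spy * risky_data['SPY US Equity']

# Annualized mean, volatility, and Sharpe of the hedged position
summary_stats(hedged.to_frame('EEM hedged with SPY'), 12)
```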
4 Modeling Risk (20pts)
Continue to use the same data file used in the previous problem. But for this problem use the total returns of SPY and EFA. That is, use the returns as given in the spreadsheet–without subtracting USGG3M Index.
1. (10pts) SPY and EFA are highly correlated, yet SPY has had a much higher return. How confident are we that SPY will outperform EFA over the next 10 years? To answer the question,
- use statistical estimates of the total returns of SPY and EFA over the full sample.
- Assume that log returns for both assets are normally distributed.
2. (10pts) Calculate the 60-month rolling volatility of EFA. Use the latest estimate of the volatility (Sep 2021), along with the normality formula, to calculate a Sep 2021 estimate of the 1-month, 1% VaR. In using the VaR formula, assume that the mean is zero.
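No solution cells follow for this section, so here is a minimal sketch under the stated assumptions (total returns from the `merrill_factors` sheet, i.i.d. normal log returns for question 1, zero mean in the VaR formula for question 2). The column names `'SPY US Equity'` and `'EFA US Equity'` follow the naming pattern of the cells above and may need adjusting, and monthly data is assumed for the horizon arithmetic:
```
from scipy import stats

# Total (not excess) monthly returns of SPY and EFA
tot = factor_data[['SPY US Equity', 'EFA US Equity']]
log_ret = np.log(1 + tot)

# Q1: P(SPY outperforms EFA over the next 10 years) under i.i.d. normal log returns.
# The 120-month cumulative log-return difference is normal with mean 120*mu_d and variance 120*sigma_d^2.
diff = log_ret['SPY US Equity'] - log_ret['EFA US Equity']
mu_d, sigma_d = diff.mean(), diff.std()
h = 120  # months
prob_outperform = stats.norm.cdf(np.sqrt(h) * mu_d / sigma_d)
print(f'P(SPY outperforms EFA over 10y): {prob_outperform:.2%}')

# Q2: 60-month rolling volatility of EFA and the Sep 2021 1-month, 1% VaR with zero mean:
# VaR_1% = z_0.01 * sigma (a negative number, i.e. the 1st-percentile return)
rolling_vol = tot['EFA US Equity'].rolling(60).std()
sigma_sep21 = rolling_vol.iloc[-1]
var_1pct = stats.norm.ppf(0.01) * sigma_sep21
print(f'Sep 2021 rolling vol (monthly): {sigma_sep21:.4f}, 1-month 1% VaR: {var_1pct:.4f}')
```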
```
import numpy as np
import matplotlib.pyplot as plt
import csv
dataVals = np.genfromtxt(r'1zncA.txt', delimiter='', dtype=float)
avgCoords = []
x = 0
def ray_dir(E,F):
    # Direction vector of the segment from F to E (not used elsewhere in this notebook)
    d = E-F
    return d
def intersect_line_triangle(q1, q2, p1, p2, p3):
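    # Segment-triangle intersection test using signed tetrahedron volumes:
    # the segment q1-q2 crosses the plane of triangle p1-p2-p3 when s1 != s2,
    # and it passes through the triangle's interior when s3, s4 and s5 all
    # share the same sign; each such crossing is counted in numknots.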
def signed_tetra_volume(a, b, c, d):
return np.sign(np.dot(np.cross(b - a, c - a), d - a) / 6.0)
numknots = 0
s1 = signed_tetra_volume(p1, p2, p3, q1)
s2 = signed_tetra_volume(p1, p2, p3, q2)
if s1 != s2:
s3 = signed_tetra_volume(p1, p2, q1, q2)
s4 = signed_tetra_volume(p2, p3, q1, q2)
s5 = signed_tetra_volume(p3, p1, q1, q2)
if s3==s4 and s4==s5:
# n = np.cross(p2 - p1, p3 - p1)
# t = np.dot(p1 - q1, n) / np.dot(q2 - q1, n)
numknots = numknots + 1
# return None
return numknots
```
# Strategy 1: Once the threshold is reached, remove the point
```
def lineseg_dist(p, a, b):
# normalized tangent vector
d = np.divide(b - a, np.linalg.norm(b - a))
# signed parallel distance components
s = np.dot(a - p, d)
t = np.dot(p - b, d)
# clamped parallel distance
h = np.maximum.reduce([s, t, 0])
# perpendicular distance component
c = np.cross(p - a, d)
return np.hypot(h, np.linalg.norm(c))
```
# Run Code
```
for k in range(0, 50):
nproblem = 0
for i in range(0, len(dataVals) - 2):
xCoord = (dataVals[i][0] + dataVals[i + 1][0] + dataVals[i + 2][0]) / 3
yCoord = (dataVals[i][1] + dataVals[i + 1][1] + dataVals[i + 2][1]) / 3
zCoord = (dataVals[i][2] + dataVals[i + 1][2] + dataVals[i + 2][2]) / 3
avgCoords=[xCoord, yCoord, zCoord];
A = dataVals[i]
B = dataVals[i + 1]
C = avgCoords
nk=0
for j in range(0, i-2):
E = dataVals[j]
F = dataVals[j + 1]
nk += intersect_line_triangle(E, F, A, B, C)
for j in range(i + 2, len(dataVals)-1):
E = dataVals[j]
F = dataVals[j + 1]
nk += intersect_line_triangle(E, F, A, B, C)
A = dataVals[i + 1]
B = avgCoords
C = dataVals[i + 2]
for j in range(0, i-1):
E = dataVals[j]
F = dataVals[j + 1]
nk += intersect_line_triangle(E, F, A, B, C)
for j in range(i + 3, len(dataVals)-1):
E = dataVals[j]
F = dataVals[j + 1]
nk += intersect_line_triangle(E, F, A, B, C)
if nk==0:
dataVals[i + 1] = avgCoords
nproblem += nk
# Check if distance is short enough
distance = lineseg_dist(avgCoords, dataVals[i], dataVals[i+2])
if distance < 0.01:
            dataVals = np.delete(dataVals, i, 0)
print("On iteration:", k)
print("curr possible numknot:", nproblem)
```
# HistGradientBoostingClassifier with MaxAbsScaler
This code template is for classification analysis using a HistGradientBoostingClassifier together with the feature rescaling technique MaxAbsScaler.
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.experimental import enable_hist_gradient_boosting
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import classification_report,plot_confusion_matrix
from sklearn.preprocessing import MaxAbsScaler
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path=""
```
List of features which are required for model training.
```
#x_values
features = []
```
Target feature for prediction.
```
#y_value
target=''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and we use the head function to display the initial rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since most of the machine learning models in the sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace null values. The snippet below has functions which remove null values if any exist, and which convert string class labels in the dataset by encoding them as integer classes.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
def EncodeY(df):
if len(df.unique())<=2:
return df
else:
un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort')
df=LabelEncoder().fit_transform(df)
EncodedT=[xi for xi in range(len(un_EncodedT))]
print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
return df
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
### Data Rescaling
sklearn.preprocessing.MaxAbsScaler is used
Scale each feature by its maximum absolute value.
Read more at [scikit-learn.org](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MaxAbsScaler.html)
```
Scaler=MaxAbsScaler()
x_train=Scaler.fit_transform(x_train)
x_test=Scaler.transform(x_test)
```
### Model
Histogram-based Gradient Boosting Classification Tree. This estimator is much faster than GradientBoostingClassifier for big datasets (n_samples >= 10 000). This estimator has native support for missing values (NaNs).
[Reference](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.HistGradientBoostingClassifier.html#sklearn.ensemble.HistGradientBoostingClassifier)
> **loss**: The loss function to use in the boosting process. ‘binary_crossentropy’ (also known as logistic loss) is used for binary classification and generalizes to ‘categorical_crossentropy’ for multiclass classification. ‘auto’ will automatically choose either loss depending on the nature of the problem.
> **learning_rate**: The learning rate, also known as shrinkage. This is used as a multiplicative factor for the leaves values. Use 1 for no shrinkage.
> **max_iter**: The maximum number of iterations of the boosting process, i.e. the maximum number of trees.
> **max_depth**: The maximum depth of each tree. The depth of a tree is the number of edges to go from the root to the deepest leaf. Depth isn’t constrained by default.
> **l2_regularization**: The L2 regularization parameter. Use 0 for no regularization (default).
> **early_stopping**: If ‘auto’, early stopping is enabled if the sample size is larger than 10000. If True, early stopping is enabled, otherwise early stopping is disabled.
> **n_iter_no_change**: Used to determine when to “early stop”. The fitting process is stopped when none of the last n_iter_no_change scores are better than the n_iter_no_change - 1 -th-to-last one, up to some tolerance. Only used if early stopping is performed.
> **tol**: The absolute tolerance to use when comparing scores during early stopping. The higher the tolerance, the more likely we are to early stop: higher tolerance means that it will be harder for subsequent iterations to be considered an improvement upon the reference score.
> **scoring**: Scoring parameter to use for early stopping.
```
model = HistGradientBoostingClassifier(random_state = 123)
model.fit(x_train, y_train)
```
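For reference, a sketch of how the hyperparameters documented above could be set explicitly; the values are illustrative placeholders rather than tuned recommendations, and the separate `model_custom` name keeps the default fit above intact:
```
# Illustrative (untuned) hyperparameter settings for HistGradientBoostingClassifier
model_custom = HistGradientBoostingClassifier(
    loss='auto',              # picks binary or categorical cross-entropy automatically
    learning_rate=0.1,        # shrinkage applied to the leaf values
    max_iter=200,             # maximum number of boosting iterations (trees)
    max_depth=None,           # tree depth unconstrained by default
    l2_regularization=0.0,    # no L2 penalty
    early_stopping='auto',    # enabled automatically for larger sample sizes
    n_iter_no_change=10,      # patience used by early stopping
    tol=1e-7,                 # score tolerance for early stopping
    scoring='loss',           # metric monitored for early stopping
    random_state=123)
model_custom.fit(x_train, y_train)
```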
#### Model Accuracy
score() method return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A Classification report is used to measure the quality of predictions from a classification algorithm: how many predictions are correct and how many are not.
* **where**:
- Precision:- Accuracy of positive predictions.
- Recall:- Fraction of positives that were correctly identified.
  - f1-score:- the harmonic mean of precision and recall.
- support:- Support is the number of actual occurrences of the class in the specified dataset.
```
print(classification_report(y_test,model.predict(x_test)))
```
#### Creator: Snehaan Bhawal , Github: [Profile](https://github.com/Sbhawal)
# Guided Practice: `GridSearchCV` Demonstration
We will use the iris dataset... which we already know well.
We will see how to use `GridSearchCV` to optimize the hyperparameter `k` of the k-nearest neighbors algorithm.
[Here](http://rcs.chemometrics.ru/Tutorials/classification/Fisher.pdf) is a link to the paper by Ronald Fisher, who used this dataset in 1936.
```
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score, train_test_split
import matplotlib.pyplot as plt
%matplotlib inline
df = load_iris()
X = df.data
y = df.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20, random_state=98)
len(X_train), len(X_test), len(y_train), len(y_test)
```
## 1. Writing the parameters by hand
Of course, depending on the model, the hyperparameters can have a considerable effect on the quality of the prediction.
Let's see how the accuracy varies when predicting the flower species for different values of K.
```
k_range = list(range(1, 100))
k_scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
scores = cross_val_score(knn, X_train, y_train, cv=10, scoring='accuracy')
k_scores.append(scores.mean())
k_scores
plt.plot(k_range, k_scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Cross-Validated Accuracy');
```
As always, we observe that performance changes for different values of the hyperparameter. <br />
How can we systematize this search and add more hyperparameters to the exploration?
## 2. Using `GridSearch`
```
from sklearn.model_selection import GridSearchCV
```
A list of parameters to be tested is defined.
```
k_range = list(range(1, 31))
knn = KNeighborsClassifier()
range(1, 31)
param_grid = dict(n_neighbors=range(1, 31))
print(param_grid)
```
Instantiate the `GridSearchCV` object
```
grid = GridSearchCV(knn, param_grid, cv=10, scoring='accuracy', n_jobs=-1)
```
Fit the model
```
grid.fit(X_train, y_train)
```
`GridSearchCV` returns a dict with a lot of information, from the setting of each parameter to the mean scores (via cross-validation). It also provides the scores on each train and test split of the K-fold cross-validation.
```
grid.cv_results_.keys()
pd.DataFrame(grid.cv_results_).columns
pd.DataFrame(grid.cv_results_)
```
Let's look at the best model:
```
grid.best_params_
grid.best_estimator_, grid.best_score_, grid.best_params_
```
### 2.1 Adding other parameters to tune
Let's add the knn algorithm's binary weight parameter, which determines whether some neighbors carry more weight than others at classification time. The value 'distance' indicates that the weight is inversely proportional to the distance.
GridSearchCV requires that the grid of parameters to check be provided as a dictionary with the parameter names and the list of possible values.
Note that GridSearchCV exposes all the methods that the sklearn API offers for predictive models: fit, predict, predict_proba, etc.
```
k_range = list(range(1, 31))
weight_options = ['uniform', 'distance']
```
Now the optimization will be done by iterating over both `weights` and `k` (the number of nearest neighbors).
```
param_grid = dict(n_neighbors=k_range, weights=weight_options)
print(param_grid)
```
**Check:**
1. How will the search process be carried out?
2. How many times will the algorithm have to be run?
Fit the models
```
grid = GridSearchCV(knn, param_grid, cv=10, scoring='accuracy')
grid.fit(X_train, y_train)
pd.DataFrame(grid.cv_results_)
```
Choose the best model
```
print (grid.best_estimator_)
print(grid.best_score_)
print(grid.best_params_)
```
## 3. Use the best models to run predictions
```
knn = KNeighborsClassifier(n_neighbors=8, weights='uniform')
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
import seaborn as sns
print (classification_report(y_test, y_pred))
sns.heatmap(confusion_matrix(y_test, y_pred),annot=True)
```
We can use the shortcut provided by `GridSearchCV`: calling the `predict` method on the `grid` object.
```
grid.predict(X_test)
```
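As noted in section 2.1, the fitted `GridSearchCV` object also exposes the rest of the estimator API; a small sketch using the `grid` object fitted above:
```
# Class-membership probabilities from the best refitted estimator
print(grid.predict_proba(X_test)[:5])

# Mean accuracy of the best estimator on the held-out test set
print(grid.score(X_test, y_test))
```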
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sqlalchemy import create_engine
import statsmodels.api as sm
import warnings
warnings.filterwarnings('ignore')
postgres_user = 'dsbc_student'
postgres_pw = '7*.8G9QH21'
postgres_host = '142.93.121.174'
postgres_port = '5432'
postgres_db = 'weatherinszeged'
engine = create_engine('postgresql://{}:{}@{}:{}/{}'.format(
postgres_user, postgres_pw, postgres_host, postgres_port, postgres_db))
weather_df = pd.read_sql_query('select * from weatherinszeged',con=engine)
# no need for an open connection, as we're only doing a single query
engine.dispose()
# Y is the target variable
Y = weather_df['apparenttemperature'] - weather_df['temperature']
# X is the feature set
X = weather_df[['humidity','windspeed']]
X = sm.add_constant(X)
results = sm.OLS(Y, X).fit()
results.summary()
weather_df['humidity_windspeed_interaction'] = weather_df.humidity * weather_df.windspeed
# Y is the target variable
Y = weather_df['apparenttemperature'] - weather_df['temperature']
# X is the feature set
X = weather_df[['humidity','windspeed', 'humidity_windspeed_interaction']]
X = sm.add_constant(X)
results = sm.OLS(Y, X).fit()
results.summary()
# Y is the target variable
Y = weather_df['apparenttemperature'] - weather_df['temperature']
# X is the feature set
X = weather_df[['humidity','windspeed', 'visibility']]
X = sm.add_constant(X)
results = sm.OLS(Y, X).fit()
results.summary()
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sqlalchemy import create_engine
import statsmodels.api as sm
import warnings
warnings.filterwarnings('ignore')
postgres_user = 'dsbc_student'
postgres_pw = '7*.8G9QH21'
postgres_host = '142.93.121.174'
postgres_port = '5432'
postgres_db = 'houseprices'
engine = create_engine('postgresql://{}:{}@{}:{}/{}'.format(
postgres_user, postgres_pw, postgres_host, postgres_port, postgres_db))
house_prices_df = pd.read_sql_query('select * from houseprices',con=engine)
# no need for an open connection, as we're only doing a single query
engine.dispose()
house_prices_df = pd.concat([house_prices_df,pd.get_dummies(house_prices_df.mszoning, prefix="mszoning", drop_first=True)], axis=1)
house_prices_df = pd.concat([house_prices_df,pd.get_dummies(house_prices_df.street, prefix="street", drop_first=True)], axis=1)
dummy_column_names = list(pd.get_dummies(house_prices_df.mszoning, prefix="mszoning", drop_first=True).columns)
dummy_column_names = dummy_column_names + list(pd.get_dummies(house_prices_df.street, prefix="street", drop_first=True).columns)
# Y is the target variable
Y = house_prices_df['saleprice']
# X is the feature set
X = house_prices_df[['overallqual', 'grlivarea', 'garagecars', 'garagearea', 'totalbsmtsf'] + dummy_column_names]
X = sm.add_constant(X)
results = sm.OLS(Y, X).fit()
results.summary()
house_prices_df['totalsf'] = house_prices_df['totalbsmtsf'] + house_prices_df['firstflrsf'] + house_prices_df['secondflrsf']
house_prices_df['int_over_sf'] = house_prices_df['totalsf'] * house_prices_df['overallqual']
# Y is the target variable
Y = np.log1p(house_prices_df['saleprice'])
# X is the feature set
X = house_prices_df[['overallqual', 'grlivarea', 'garagecars', 'garagearea', 'totalsf', 'int_over_sf'] + dummy_column_names]
X = sm.add_constant(X)
results = sm.OLS(Y, X).fit()
results.summary()
```
# **Deep-STORM (2D)**
---
<font size = 4>Deep-STORM is a neural network capable of image reconstruction from high-density single-molecule localization microscopy (SMLM) data, first published in 2018 by [Nehme *et al.* in Optica](https://www.osapublishing.org/optica/abstract.cfm?uri=optica-5-4-458). The architecture used here is a U-Net based network without skip connections. This network allows reconstruction of 2D super-resolution images in a supervised training manner. The network is trained using simulated high-density SMLM data for which the ground-truth is available. These simulations are obtained from a random distribution of single molecules in a field-of-view and therefore do not imprint structural priors during training. The network outputs a super-resolution image with increased pixel density (typically an upsampling factor of 8 in each dimension).
Deep-STORM has **two key advantages**:
- SMLM reconstruction at high density of emitters
- fast prediction (reconstruction) once the model is trained appropriately, compared to more common multi-emitter fitting processes.
---
<font size = 4>*Disclaimer*:
<font size = 4>This notebook is part of the *Zero-Cost Deep-Learning to Enhance Microscopy* project (https://github.com/HenriquesLab/DeepLearning_Collab/wiki). Jointly developed by the Jacquemet (link to https://cellmig.org/) and Henriques (https://henriqueslab.github.io/) laboratories.
<font size = 4>This notebook is based on the following paper:
<font size = 4>**Deep-STORM: super-resolution single-molecule microscopy by deep learning**, Optica (2018) by *Elias Nehme, Lucien E. Weiss, Tomer Michaeli, and Yoav Shechtman* (https://www.osapublishing.org/optica/abstract.cfm?uri=optica-5-4-458)
<font size = 4>And source code found in: https://github.com/EliasNehme/Deep-STORM
<font size = 4>**Please also cite this original paper when using or developing this notebook.**
# **How to use this notebook?**
---
<font size = 4>Video describing how to use our notebooks are available on youtube:
- [**Video 1**](https://www.youtube.com/watch?v=GzD2gamVNHI&feature=youtu.be): Full run through of the workflow to obtain the notebooks and the provided test datasets as well as a common use of the notebook
- [**Video 2**](https://www.youtube.com/watch?v=PUuQfP5SsqM&feature=youtu.be): Detailed description of the different sections of the notebook
---
###**Structure of a notebook**
<font size = 4>The notebook contains two types of cell:
<font size = 4>**Text cells** provide information and can be modified by double-clicking the cell. You are currently reading a text cell. You can create a new text cell by clicking `+ Text`.
<font size = 4>**Code cells** contain code and the code can be modified by selecting the cell. To execute the cell, move your cursor over the `[ ]`-mark on the left side of the cell (a play button appears). Click to execute the cell. After execution is done, the animation of the play button stops. You can create a new coding cell by clicking `+ Code`.
---
###**Table of contents, Code snippets** and **Files**
<font size = 4>On the top left side of the notebook you find three tabs which contain from top to bottom:
<font size = 4>*Table of contents* = contains structure of the notebook. Click the content to move quickly between sections.
<font size = 4>*Code snippets* = contain examples how to code certain tasks. You can ignore this when using this notebook.
<font size = 4>*Files* = contain all available files. After mounting your google drive (see section 1.) you will find your files and folders here.
<font size = 4>**Remember that all uploaded files are purged after changing the runtime.** All files saved in Google Drive will remain. You do not need to use the Mount Drive-button; your Google Drive is connected in section 1.2.
<font size = 4>**Note:** The "sample data" in "Files" contains default files. Do not upload anything in here!
---
###**Making changes to the notebook**
<font size = 4>**You can make a copy** of the notebook and save it to your Google Drive. To do this click file -> save a copy in drive.
<font size = 4>To **edit a cell**, double click on the text. This will show you either the source code (in code cells) or the source text (in text cells).
You can use the `#`-mark in code cells to comment out parts of the code. This allows you to keep the original code piece in the cell as a comment.
#**0. Before getting started**
---
<font size = 4> Deep-STORM is able to train on simulated dataset of SMLM data (see https://www.osapublishing.org/optica/abstract.cfm?uri=optica-5-4-458 for more info). Here, we provide a simulator that will generate training dataset (section 3.1.b). A few parameters will allow you to match the simulation to your experimental data. Similarly to what is described in the paper, simulations obtained from ThunderSTORM can also be loaded here (section 3.1.a).
---
<font size = 4>**Important note**
<font size = 4>- If you wish to **Train a network from scratch** using your own dataset (and we encourage everyone to do that), you will need to run **sections 1 - 4**, then use **section 5** to assess the quality of your model and **section 6** to run predictions using the model that you trained.
<font size = 4>- If you wish to **Evaluate your model** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 5** to assess the quality of your model.
<font size = 4>- If you only wish to **run predictions** using a model previously generated and saved on your Google Drive, you will only need to run **sections 1 and 2** to set up the notebook, then use **section 6** to run the predictions on the desired model.
---
# **1. Install Deep-STORM and dependencies**
---
```
Notebook_version = '1.13'
Network = 'Deep-STORM'
from builtins import any as b_any
def get_requirements_path():
# Store requirements file in 'contents' directory
current_dir = os.getcwd()
dir_count = current_dir.count('/') - 1
path = '../' * (dir_count) + 'requirements.txt'
return path
def filter_files(file_list, filter_list):
filtered_list = []
for fname in file_list:
if b_any(fname.split('==')[0] in s for s in filter_list):
filtered_list.append(fname)
return filtered_list
def build_requirements_file(before, after):
path = get_requirements_path()
# Exporting requirements.txt for local run
!pip freeze > $path
# Get minimum requirements file
df = pd.read_csv(path, delimiter = "\n")
mod_list = [m.split('.')[0] for m in after if not m in before]
req_list_temp = df.values.tolist()
req_list = [x[0] for x in req_list_temp]
# Replace with package name and handle cases where import name is different to module name
mod_name_list = [['sklearn', 'scikit-learn'], ['skimage', 'scikit-image']]
mod_replace_list = [[x[1] for x in mod_name_list] if s in [x[0] for x in mod_name_list] else s for s in mod_list]
filtered_list = filter_files(req_list, mod_replace_list)
file=open(path,'w')
for item in filtered_list:
file.writelines(item + '\n')
file.close()
import sys
before = [str(m) for m in sys.modules]
#@markdown ##Install Deep-STORM and dependencies
# %% Model definition + helper functions
!pip install fpdf
# Import keras modules and libraries
from tensorflow import keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Activation, UpSampling2D, Convolution2D, MaxPooling2D, BatchNormalization, Layer
from tensorflow.keras.callbacks import Callback
from tensorflow.keras import backend as K
from tensorflow.keras import optimizers, losses
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.callbacks import ReduceLROnPlateau
from skimage.transform import warp
from skimage.transform import SimilarityTransform
from skimage.metrics import structural_similarity
from skimage.metrics import peak_signal_noise_ratio as psnr
from scipy.signal import fftconvolve
# Import common libraries
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import h5py
import scipy.io as sio
from os.path import abspath
from sklearn.model_selection import train_test_split
from skimage import io
import time
import os
import shutil
import csv
from PIL import Image
from PIL.TiffTags import TAGS
from scipy.ndimage import gaussian_filter
import math
from astropy.visualization import simple_norm
from sys import getsizeof
from fpdf import FPDF, HTMLMixin
from pip._internal.operations.freeze import freeze
import subprocess
from datetime import datetime
# For sliders and dropdown menu, progress bar
from ipywidgets import interact
import ipywidgets as widgets
from tqdm import tqdm
# For Multi-threading in simulation
from numba import njit, prange
# define a function that projects and rescales an image to the range [0,1]
def project_01(im):
im = np.squeeze(im)
min_val = im.min()
max_val = im.max()
return (im - min_val)/(max_val - min_val)
# normalize image given mean and std
def normalize_im(im, dmean, dstd):
im = np.squeeze(im)
im_norm = np.zeros(im.shape,dtype=np.float32)
im_norm = (im - dmean)/dstd
return im_norm
# Define the loss history recorder
class LossHistory(Callback):
def on_train_begin(self, logs={}):
self.losses = []
def on_batch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
# Define a matlab like gaussian 2D filter
def matlab_style_gauss2D(shape=(7,7),sigma=1):
"""
2D gaussian filter - should give the same result as:
MATLAB's fspecial('gaussian',[shape],[sigma])
"""
m,n = [(ss-1.)/2. for ss in shape]
y,x = np.ogrid[-m:m+1,-n:n+1]
h = np.exp( -(x*x + y*y) / (2.*sigma*sigma) )
h.astype(dtype=K.floatx())
h[ h < np.finfo(h.dtype).eps*h.max() ] = 0
sumh = h.sum()
if sumh != 0:
h /= sumh
h = h*2.0
h = h.astype('float32')
return h
# Expand the filter dimensions
psf_heatmap = matlab_style_gauss2D(shape = (7,7),sigma=1)
gfilter = tf.reshape(psf_heatmap, [7, 7, 1, 1])
# Combined MSE + L1 loss
def L1L2loss(input_shape):
def bump_mse(heatmap_true, spikes_pred):
# generate the heatmap corresponding to the predicted spikes
heatmap_pred = K.conv2d(spikes_pred, gfilter, strides=(1, 1), padding='same')
# heatmaps MSE
loss_heatmaps = losses.mean_squared_error(heatmap_true,heatmap_pred)
# l1 on the predicted spikes
loss_spikes = losses.mean_absolute_error(spikes_pred,tf.zeros(input_shape))
return loss_heatmaps + loss_spikes
return bump_mse
# Define the concatenated conv2, batch normalization, and relu block
def conv_bn_relu(nb_filter, rk, ck, name):
def f(input):
conv = Convolution2D(nb_filter, kernel_size=(rk, ck), strides=(1,1),\
padding="same", use_bias=False,\
kernel_initializer="Orthogonal",name='conv-'+name)(input)
conv_norm = BatchNormalization(name='BN-'+name)(conv)
conv_norm_relu = Activation(activation = "relu",name='Relu-'+name)(conv_norm)
return conv_norm_relu
return f
# Define the model architechture
def CNN(input,names):
Features1 = conv_bn_relu(32,3,3,names+'F1')(input)
pool1 = MaxPooling2D(pool_size=(2,2),name=names+'Pool1')(Features1)
Features2 = conv_bn_relu(64,3,3,names+'F2')(pool1)
pool2 = MaxPooling2D(pool_size=(2, 2),name=names+'Pool2')(Features2)
Features3 = conv_bn_relu(128,3,3,names+'F3')(pool2)
pool3 = MaxPooling2D(pool_size=(2, 2),name=names+'Pool3')(Features3)
Features4 = conv_bn_relu(512,3,3,names+'F4')(pool3)
up5 = UpSampling2D(size=(2, 2),name=names+'Upsample1')(Features4)
Features5 = conv_bn_relu(128,3,3,names+'F5')(up5)
up6 = UpSampling2D(size=(2, 2),name=names+'Upsample2')(Features5)
Features6 = conv_bn_relu(64,3,3,names+'F6')(up6)
up7 = UpSampling2D(size=(2, 2),name=names+'Upsample3')(Features6)
Features7 = conv_bn_relu(32,3,3,names+'F7')(up7)
return Features7
# Define the Model building for an arbitrary input size
def buildModel(input_dim, initial_learning_rate = 0.001):
input_ = Input (shape = (input_dim))
act_ = CNN (input_,'CNN')
density_pred = Convolution2D(1, kernel_size=(1, 1), strides=(1, 1), padding="same",\
activation="linear", use_bias = False,\
kernel_initializer="Orthogonal",name='Prediction')(act_)
model = Model (inputs= input_, outputs=density_pred)
opt = optimizers.Adam(lr = initial_learning_rate)
model.compile(optimizer=opt, loss = L1L2loss(input_dim))
return model
# define a function that trains a model for a given data SNR and density
def train_model(patches, heatmaps, modelPath, epochs, steps_per_epoch, batch_size, upsampling_factor=8, validation_split = 0.3, initial_learning_rate = 0.001, pretrained_model_path = '', L2_weighting_factor = 100):
"""
This function trains a CNN model on the desired training set, given the
upsampled training images and labels generated in MATLAB.
# Inputs
# TO UPDATE ----------
# Outputs
function saves the weights of the trained model to a hdf5, and the
normalization factors to a mat file. These will be loaded later for testing
the model in test_model.
"""
# for reproducibility
np.random.seed(123)
X_train, X_test, y_train, y_test = train_test_split(patches, heatmaps, test_size = validation_split, random_state=42)
print('Number of training examples: %d' % X_train.shape[0])
print('Number of validation examples: %d' % X_test.shape[0])
# Setting type
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
y_train = y_train.astype('float32')
y_test = y_test.astype('float32')
#===================== Training set normalization ==========================
# normalize training images to be in the range [0,1] and calculate the
# training set mean and std
mean_train = np.zeros(X_train.shape[0],dtype=np.float32)
std_train = np.zeros(X_train.shape[0], dtype=np.float32)
for i in range(X_train.shape[0]):
X_train[i, :, :] = project_01(X_train[i, :, :])
mean_train[i] = X_train[i, :, :].mean()
std_train[i] = X_train[i, :, :].std()
# resulting normalized training images
mean_val_train = mean_train.mean()
std_val_train = std_train.mean()
X_train_norm = np.zeros(X_train.shape, dtype=np.float32)
for i in range(X_train.shape[0]):
X_train_norm[i, :, :] = normalize_im(X_train[i, :, :], mean_val_train, std_val_train)
# patch size
psize = X_train_norm.shape[1]
# Reshaping
X_train_norm = X_train_norm.reshape(X_train.shape[0], psize, psize, 1)
# ===================== Test set normalization ==========================
# normalize test images to be in the range [0,1] and calculate the test set
# mean and std
mean_test = np.zeros(X_test.shape[0],dtype=np.float32)
std_test = np.zeros(X_test.shape[0], dtype=np.float32)
for i in range(X_test.shape[0]):
X_test[i, :, :] = project_01(X_test[i, :, :])
mean_test[i] = X_test[i, :, :].mean()
std_test[i] = X_test[i, :, :].std()
# resulting normalized test images
mean_val_test = mean_test.mean()
std_val_test = std_test.mean()
X_test_norm = np.zeros(X_test.shape, dtype=np.float32)
for i in range(X_test.shape[0]):
X_test_norm[i, :, :] = normalize_im(X_test[i, :, :], mean_val_test, std_val_test)
# Reshaping
X_test_norm = X_test_norm.reshape(X_test.shape[0], psize, psize, 1)
# Reshaping labels
Y_train = y_train.reshape(y_train.shape[0], psize, psize, 1)
Y_test = y_test.reshape(y_test.shape[0], psize, psize, 1)
# Save datasets to a matfile to open later in matlab
mdict = {"mean_test": mean_val_test, "std_test": std_val_test, "upsampling_factor": upsampling_factor, "Normalization factor": L2_weighting_factor}
sio.savemat(os.path.join(modelPath,"model_metadata.mat"), mdict)
# Set the dimensions ordering according to tensorflow consensous
# K.set_image_dim_ordering('tf')
K.set_image_data_format('channels_last')
# Save the model weights after each epoch if the validation loss decreased
checkpointer = ModelCheckpoint(filepath=os.path.join(modelPath,"weights_best.hdf5"), verbose=1,
save_best_only=True)
# Change learning when loss reaches a plataeu
change_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5, min_lr=0.00005)
# Model building and complitation
model = buildModel((psize, psize, 1), initial_learning_rate = initial_learning_rate)
model.summary()
# Load pretrained model
if not pretrained_model_path:
print('Using random initial model weights.')
else:
print('Loading model weights from '+pretrained_model_path)
model.load_weights(pretrained_model_path)
# Create an image data generator for real time data augmentation
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=0., # randomly rotate images in the range (degrees, 0 to 180)
width_shift_range=0., # randomly shift images horizontally (fraction of total width)
height_shift_range=0., # randomly shift images vertically (fraction of total height)
zoom_range=0.,
shear_range=0.,
horizontal_flip=False, # randomly flip images
vertical_flip=False, # randomly flip images
fill_mode='constant',
data_format=K.image_data_format())
# Fit the image generator on the training data
datagen.fit(X_train_norm)
# loss history recorder
history = LossHistory()
# Inform user training begun
print('-------------------------------')
print('Training model...')
# Fit model on the batches generated by datagen.flow()
train_history = model.fit_generator(datagen.flow(X_train_norm, Y_train, batch_size=batch_size),
steps_per_epoch=steps_per_epoch, epochs=epochs, verbose=1,
validation_data=(X_test_norm, Y_test),
callbacks=[history, checkpointer, change_lr])
# Inform user training ended
print('-------------------------------')
print('Training Complete!')
# Save the last model
model.save(os.path.join(modelPath, 'weights_last.hdf5'))
# convert the history.history dict to a pandas DataFrame:
lossData = pd.DataFrame(train_history.history)
if os.path.exists(os.path.join(modelPath,"Quality Control")):
shutil.rmtree(os.path.join(modelPath,"Quality Control"))
os.makedirs(os.path.join(modelPath,"Quality Control"))
# The training evaluation.csv is saved (overwrites the Files if needed).
lossDataCSVpath = os.path.join(modelPath,"Quality Control/training_evaluation.csv")
with open(lossDataCSVpath, 'w') as f:
writer = csv.writer(f)
writer.writerow(['loss','val_loss','learning rate'])
for i in range(len(train_history.history['loss'])):
writer.writerow([train_history.history['loss'][i], train_history.history['val_loss'][i], train_history.history['lr'][i]])
return
# Normalization functions from Martin Weigert used in CARE
def normalize(x, pmin=3, pmax=99.8, axis=None, clip=False, eps=1e-20, dtype=np.float32):
"""This function is adapted from Martin Weigert"""
"""Percentile-based image normalization."""
mi = np.percentile(x,pmin,axis=axis,keepdims=True)
ma = np.percentile(x,pmax,axis=axis,keepdims=True)
return normalize_mi_ma(x, mi, ma, clip=clip, eps=eps, dtype=dtype)
def normalize_mi_ma(x, mi, ma, clip=False, eps=1e-20, dtype=np.float32):#dtype=np.float32
"""This function is adapted from Martin Weigert"""
if dtype is not None:
x = x.astype(dtype,copy=False)
mi = dtype(mi) if np.isscalar(mi) else mi.astype(dtype,copy=False)
ma = dtype(ma) if np.isscalar(ma) else ma.astype(dtype,copy=False)
eps = dtype(eps)
try:
import numexpr
x = numexpr.evaluate("(x - mi) / ( ma - mi + eps )")
except ImportError:
x = (x - mi) / ( ma - mi + eps )
if clip:
x = np.clip(x,0,1)
return x
def norm_minmse(gt, x, normalize_gt=True):
"""This function is adapted from Martin Weigert"""
"""
normalizes and affinely scales an image pair such that the MSE is minimized
Parameters
----------
gt: ndarray
the ground truth image
x: ndarray
the image that will be affinely scaled
normalize_gt: bool
set to True of gt image should be normalized (default)
Returns
-------
gt_scaled, x_scaled
"""
if normalize_gt:
gt = normalize(gt, 0.1, 99.9, clip=False).astype(np.float32, copy = False)
x = x.astype(np.float32, copy=False) - np.mean(x)
#x = x - np.mean(x)
gt = gt.astype(np.float32, copy=False) - np.mean(gt)
#gt = gt - np.mean(gt)
scale = np.cov(x.flatten(), gt.flatten())[0, 1] / np.var(x.flatten())
return gt, scale * x
# Multi-threaded Erf-based image construction
@njit(parallel=True)
def FromLoc2Image_Erf(xc_array, yc_array, photon_array, sigma_array, image_size = (64,64), pixel_size = 100):
w = image_size[0]
h = image_size[1]
erfImage = np.zeros((w, h))
for ij in prange(w*h):
j = int(ij/w)
i = ij - j*w
for (xc, yc, photon, sigma) in zip(xc_array, yc_array, photon_array, sigma_array):
# Don't bother if the emitter has photons <= 0 or if Sigma <= 0
if (sigma > 0) and (photon > 0):
S = sigma*math.sqrt(2)
x = i*pixel_size - xc
y = j*pixel_size - yc
# Don't bother if the emitter is further than 4 sigma from the centre of the pixel
if (x+pixel_size/2)**2 + (y+pixel_size/2)**2 < 16*sigma**2:
ErfX = math.erf((x+pixel_size)/S) - math.erf(x/S)
ErfY = math.erf((y+pixel_size)/S) - math.erf(y/S)
erfImage[j][i] += 0.25*photon*ErfX*ErfY
return erfImage
@njit(parallel=True)
def FromLoc2Image_SimpleHistogram(xc_array, yc_array, image_size = (64,64), pixel_size = 100):
w = image_size[0]
h = image_size[1]
locImage = np.zeros((image_size[0],image_size[1]) )
n_locs = len(xc_array)
for e in prange(n_locs):
locImage[int(max(min(round(yc_array[e]/pixel_size),w-1),0))][int(max(min(round(xc_array[e]/pixel_size),h-1),0))] += 1
return locImage
def getPixelSizeTIFFmetadata(TIFFpath, display=False):
with Image.open(TIFFpath) as img:
meta_dict = {TAGS[key] : img.tag[key] for key in img.tag.keys()}
# TIFF tags
# https://www.loc.gov/preservation/digital/formats/content/tiff_tags.shtml
# https://www.awaresystems.be/imaging/tiff/tifftags/resolutionunit.html
ResolutionUnit = meta_dict['ResolutionUnit'][0] # unit of resolution
width = meta_dict['ImageWidth'][0]
height = meta_dict['ImageLength'][0]
xResolution = meta_dict['XResolution'][0] # number of pixels / ResolutionUnit
if len(xResolution) == 1:
xResolution = xResolution[0]
elif len(xResolution) == 2:
xResolution = xResolution[0]/xResolution[1]
else:
print('Image resolution not defined.')
xResolution = 1
if ResolutionUnit == 2:
# Units given are in inches
pixel_size = 0.025*1e9/xResolution
elif ResolutionUnit == 3:
# Units given are in cm
pixel_size = 0.01*1e9/xResolution
else:
# ResolutionUnit is therefore 1
print('Resolution unit not defined. Assuming: um')
pixel_size = 1e3/xResolution
if display:
print('Pixel size obtained from metadata: '+str(pixel_size)+' nm')
print('Image size: '+str(width)+'x'+str(height))
return (pixel_size, width, height)
def saveAsTIF(path, filename, array, pixel_size):
"""
Image saving using PIL to save as .tif format
# Input
path - path where it will be saved
filename - name of the file to save (no extension)
array - numpy array conatining the data at the required format
pixel_size - physical size of pixels in nanometers (identical for x and y)
"""
# print('Data type: '+str(array.dtype))
if (array.dtype == np.uint16):
mode = 'I;16'
elif (array.dtype == np.uint32):
mode = 'I'
else:
mode = 'F'
# Rounding the pixel size to the nearest number that divides exactly 1cm.
# Resolution needs to be a rational number --> see TIFF format
# pixel_size = 10000/(round(10000/pixel_size))
if len(array.shape) == 2:
im = Image.fromarray(array)
im.save(os.path.join(path, filename+'.tif'),
mode = mode,
resolution_unit = 3,
resolution = 0.01*1e9/pixel_size)
elif len(array.shape) == 3:
imlist = []
for frame in array:
imlist.append(Image.fromarray(frame))
imlist[0].save(os.path.join(path, filename+'.tif'), save_all=True,
append_images=imlist[1:],
mode = mode,
resolution_unit = 3,
resolution = 0.01*1e9/pixel_size)
return
class Maximafinder(Layer):
def __init__(self, thresh, neighborhood_size, use_local_avg, **kwargs):
super(Maximafinder, self).__init__(**kwargs)
self.thresh = tf.constant(thresh, dtype=tf.float32)
self.nhood = neighborhood_size
self.use_local_avg = use_local_avg
def build(self, input_shape):
if self.use_local_avg is True:
self.kernel_x = tf.reshape(tf.constant([[-1,0,1],[-1,0,1],[-1,0,1]], dtype=tf.float32), [3, 3, 1, 1])
self.kernel_y = tf.reshape(tf.constant([[-1,-1,-1],[0,0,0],[1,1,1]], dtype=tf.float32), [3, 3, 1, 1])
self.kernel_sum = tf.reshape(tf.constant([[1,1,1],[1,1,1],[1,1,1]], dtype=tf.float32), [3, 3, 1, 1])
def call(self, inputs):
# local maxima positions
max_pool_image = MaxPooling2D(pool_size=(self.nhood,self.nhood), strides=(1,1), padding='same')(inputs)
cond = tf.math.greater(max_pool_image, self.thresh) & tf.math.equal(max_pool_image, inputs)
indices = tf.where(cond)
bind, xind, yind = indices[:, 0], indices[:, 2], indices[:, 1]
confidence = tf.gather_nd(inputs, indices)
# local CoG estimator
if self.use_local_avg:
x_image = K.conv2d(inputs, self.kernel_x, padding='same')
y_image = K.conv2d(inputs, self.kernel_y, padding='same')
sum_image = K.conv2d(inputs, self.kernel_sum, padding='same')
confidence = tf.cast(tf.gather_nd(sum_image, indices), dtype=tf.float32)
x_local = tf.math.divide(tf.gather_nd(x_image, indices),tf.gather_nd(sum_image, indices))
y_local = tf.math.divide(tf.gather_nd(y_image, indices),tf.gather_nd(sum_image, indices))
xind = tf.cast(xind, dtype=tf.float32) + tf.cast(x_local, dtype=tf.float32)
yind = tf.cast(yind, dtype=tf.float32) + tf.cast(y_local, dtype=tf.float32)
else:
xind = tf.cast(xind, dtype=tf.float32)
yind = tf.cast(yind, dtype=tf.float32)
return bind, xind, yind, confidence
def get_config(self):
# Implement get_config to enable serialization. This is optional.
base_config = super(Maximafinder, self).get_config()
config = {}
return dict(list(base_config.items()) + list(config.items()))
# ------------------------------- Prediction with postprocessing function-------------------------------
def batchFramePredictionLocalization(dataPath, filename, modelPath, savePath, batch_size=1, thresh=0.1, neighborhood_size=3, use_local_avg = False, pixel_size = None):
"""
This function tests a trained model on the desired test set, given the
tiff stack of test images, learned weights, and normalization factors.
# Inputs
dataPath - the path to the folder containing the tiff stack(s) to run prediction on
filename - the name of the file to process
modelPath - the path to the folder containing the weights file and the mean and standard deviation file generated in train_model
savePath - the path to the folder where to save the prediction
batch_size. - the number of frames to predict on for each iteration
thresh - threshoold percentage from the maximum of the gaussian scaling
neighborhood_size - the size of the neighborhood for local maxima finding
use_local_average - Boolean whether to perform local averaging or not
"""
# load mean and std
matfile = sio.loadmat(os.path.join(modelPath,'model_metadata.mat'))
test_mean = np.array(matfile['mean_test'])
test_std = np.array(matfile['std_test'])
upsampling_factor = np.array(matfile['upsampling_factor'])
upsampling_factor = upsampling_factor.item() # convert to scalar
L2_weighting_factor = np.array(matfile['Normalization factor'])
L2_weighting_factor = L2_weighting_factor.item() # convert to scalar
# Read in the raw file
Images = io.imread(os.path.join(dataPath, filename))
if pixel_size == None:
pixel_size, _, _ = getPixelSizeTIFFmetadata(os.path.join(dataPath, filename), display=True)
pixel_size_hr = pixel_size/upsampling_factor
# get dataset dimensions
(nFrames, M, N) = Images.shape
print('Input image is '+str(N)+'x'+str(M)+' with '+str(nFrames)+' frames.')
# Build the model for a bigger image
model = buildModel((upsampling_factor*M, upsampling_factor*N, 1))
# Load the trained weights
model.load_weights(os.path.join(modelPath,'weights_best.hdf5'))
# add a post-processing module
max_layer = Maximafinder(thresh*L2_weighting_factor, neighborhood_size, use_local_avg)
# Initialise the results: lists will be used to collect all the localizations
frame_number_list, x_nm_list, y_nm_list, confidence_au_list = [], [], [], []
# Initialise the results
Prediction = np.zeros((M*upsampling_factor, N*upsampling_factor), dtype=np.float32)
Widefield = np.zeros((M, N), dtype=np.float32)
# run model in batches
n_batches = math.ceil(nFrames/batch_size)
for b in tqdm(range(n_batches)):
nF = min(batch_size, nFrames - b*batch_size)
Images_norm = np.zeros((nF, M, N),dtype=np.float32)
Images_upsampled = np.zeros((nF, M*upsampling_factor, N*upsampling_factor), dtype=np.float32)
# Upsampling using a simple nearest neighbor interp and calculating - MULTI-THREAD this?
for f in range(nF):
Images_norm[f,:,:] = project_01(Images[b*batch_size+f,:,:])
Images_norm[f,:,:] = normalize_im(Images_norm[f,:,:], test_mean, test_std)
Images_upsampled[f,:,:] = np.kron(Images_norm[f,:,:], np.ones((upsampling_factor,upsampling_factor)))
Widefield += Images[b*batch_size+f,:,:]
# Reshaping
Images_upsampled = np.expand_dims(Images_upsampled,axis=3)
# Run prediction and local amxima finding
predicted_density = model.predict_on_batch(Images_upsampled)
predicted_density[predicted_density < 0] = 0
Prediction += predicted_density.sum(axis = 3).sum(axis = 0)
bind, xind, yind, confidence = max_layer(predicted_density)
# normalizing the confidence by the L2_weighting_factor
confidence /= L2_weighting_factor
# turn indices to nms and append to the results
xind, yind = xind*pixel_size_hr, yind*pixel_size_hr
frmind = (bind.numpy() + b*batch_size + 1).tolist()
xind = xind.numpy().tolist()
yind = yind.numpy().tolist()
confidence = confidence.numpy().tolist()
frame_number_list += frmind
x_nm_list += xind
y_nm_list += yind
confidence_au_list += confidence
# Open and create the csv file that will contain all the localizations
if use_local_avg:
ext = '_avg'
else:
ext = '_max'
with open(os.path.join(savePath, 'Localizations_' + os.path.splitext(filename)[0] + ext + '.csv'), "w", newline='') as file:
writer = csv.writer(file)
writer.writerow(['frame', 'x [nm]', 'y [nm]', 'confidence [a.u]'])
locs = list(zip(frame_number_list, x_nm_list, y_nm_list, confidence_au_list))
writer.writerows(locs)
# Save the prediction and widefield image
Widefield = np.kron(Widefield, np.ones((upsampling_factor,upsampling_factor)))
Widefield = np.float32(Widefield)
# io.imsave(os.path.join(savePath, 'Predicted_'+os.path.splitext(filename)[0]+'.tif'), Prediction)
# io.imsave(os.path.join(savePath, 'Widefield_'+os.path.splitext(filename)[0]+'.tif'), Widefield)
saveAsTIF(savePath, 'Predicted_'+os.path.splitext(filename)[0], Prediction, pixel_size_hr)
saveAsTIF(savePath, 'Widefield_'+os.path.splitext(filename)[0], Widefield, pixel_size_hr)
return
# Colors for the warning messages
class bcolors:
WARNING = '\033[31m'
NORMAL = '\033[0m' # white (normal)
def list_files(directory, extension):
return (f for f in os.listdir(directory) if f.endswith('.' + extension))
# @njit(parallel=True)
def subPixelMaxLocalization(array, method = 'CoM', patch_size = 3):
xMaxInd, yMaxInd = np.unravel_index(array.argmax(), array.shape, order='C')
    centralPatch = array[(xMaxInd-patch_size):(xMaxInd+patch_size+1),(yMaxInd-patch_size):(yMaxInd+patch_size+1)]
if (method == 'MAX'):
x0 = xMaxInd
y0 = yMaxInd
elif (method == 'CoM'):
x0 = 0
y0 = 0
S = 0
for xy in range(patch_size*patch_size):
y = math.floor(xy/patch_size)
x = xy - y*patch_size
x0 += x*array[x,y]
y0 += y*array[x,y]
            S += array[x,y]
x0 = x0/S - patch_size/2 + xMaxInd
y0 = y0/S - patch_size/2 + yMaxInd
elif (method == 'Radiality'):
# Not implemented yet
x0 = xMaxInd
y0 = yMaxInd
return (x0, y0)
@njit(parallel=True)
def correctDriftLocalization(xc_array, yc_array, frames, xDrift, yDrift):
n_locs = xc_array.shape[0]
xc_array_Corr = np.empty(n_locs)
yc_array_Corr = np.empty(n_locs)
for loc in prange(n_locs):
xc_array_Corr[loc] = xc_array[loc] - xDrift[frames[loc]]
yc_array_Corr[loc] = yc_array[loc] - yDrift[frames[loc]]
return (xc_array_Corr, yc_array_Corr)
print('--------------------------------')
print('DeepSTORM installation complete.')
# Check if this is the latest version of the notebook
All_notebook_versions = pd.read_csv("https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_Notebook_versions.csv", dtype=str)
print('Notebook version: '+Notebook_version)
Latest_Notebook_version = All_notebook_versions[All_notebook_versions["Notebook"] == Network]['Version'].iloc[0]
print('Latest notebook version: '+Latest_Notebook_version)
if Notebook_version == Latest_Notebook_version:
print("This notebook is up-to-date.")
else:
print(bcolors.WARNING +"A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki")
# Latest_notebook_version = pd.read_csv("https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/Latest_ZeroCostDL4Mic_Release.csv")
# if Notebook_version == list(Latest_notebook_version.columns):
# print("This notebook is up-to-date.")
# if not Notebook_version == list(Latest_notebook_version.columns):
# print(bcolors.WARNING +"A new version of this notebook has been released. We recommend that you download it at https://github.com/HenriquesLab/ZeroCostDL4Mic/wiki")
def pdf_export(trained = False, raw_data = False, pretrained_model = False):
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
#model_name = 'little_CARE_test'
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Training report for '+Network+' model ('+model_name+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
# add another cell
if trained:
training_time = "Training time: "+str(hours)+ "hour(s) "+str(minutes)+"min(s) "+str(round(seconds))+"sec(s)"
pdf.cell(190, 5, txt = training_time, ln = 1, align='L')
pdf.ln(1)
Header_2 = 'Information for your materials and method:'
pdf.cell(190, 5, txt=Header_2, ln=1, align='L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
#print(all_packages)
#Main Packages
main_packages = ''
version_numbers = []
for name in ['tensorflow','numpy','Keras']:
find_name=all_packages.find(name)
main_packages = main_packages+all_packages[find_name:all_packages.find(',',find_name)]+', '
#Version numbers only here:
version_numbers.append(all_packages[find_name+len(name)+2:all_packages.find(',',find_name)])
cuda_version = subprocess.run('nvcc --version',stdout=subprocess.PIPE, shell=True)
cuda_version = cuda_version.stdout.decode('utf-8')
cuda_version = cuda_version[cuda_version.find(', V')+3:-1]
gpu_name = subprocess.run('nvidia-smi',stdout=subprocess.PIPE, shell=True)
gpu_name = gpu_name.stdout.decode('utf-8')
gpu_name = gpu_name[gpu_name.find('Tesla'):gpu_name.find('Tesla')+10]
#print(cuda_version[cuda_version.find(', V')+3:-1])
#print(gpu_name)
if raw_data == True:
shape = (M,N)
else:
shape = (int(FOV_size/pixel_size),int(FOV_size/pixel_size))
#dataset_size = len(os.listdir(Training_source))
text = 'The '+Network+' model was trained from scratch for '+str(number_of_epochs)+' epochs on '+str(n_patches)+' paired image patches (image dimensions: '+str(patch_size)+', patch size (upsampled): ('+str(int(patch_size))+','+str(int(patch_size))+') with a batch size of '+str(batch_size)+', using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). Losses were calculated using MSE for the heatmaps and L1 loss for the spike prediction. Key python packages used include tensorflow (v '+version_numbers[0]+'), numpy (v '+version_numbers[1]+'), Keras (v '+version_numbers[2]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+' GPU.'
if pretrained_model:
text = 'The '+Network+' model was trained from scratch for '+str(number_of_epochs)+' epochs on '+str(n_patches)+' paired image patches (image dimensions: '+str(patch_size)+', patch size (upsampled): ('+str(int(patch_size))+','+str(int(patch_size))+') with a batch size of '+str(batch_size)+', using the '+Network+' ZeroCostDL4Mic notebook (v '+Notebook_version[0]+') (von Chamier & Laine et al., 2020). Losses were calculated using MSE for the heatmaps and L1 loss for the spike prediction. The model was retrained from a pretrained model. Key python packages used include tensorflow (v '+version_numbers[0]+'), numpy (v '+version_numbers[1]+'), Keras (v '+version_numbers[2]+'), cuda (v '+cuda_version+'). The training was accelerated using a '+gpu_name+' GPU.'
pdf.set_font('')
pdf.set_font_size(10.)
pdf.multi_cell(180, 5, txt = text, align='L')
pdf.ln(1)
pdf.set_font('')
pdf.set_font("Arial", size = 11, style='B')
pdf.ln(1)
pdf.cell(190, 5, txt = 'Training dataset', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
if raw_data==False:
simul_text = 'The training dataset was created in the notebook using the following simulation settings:'
pdf.cell(200, 5, txt=simul_text, align='L')
pdf.ln(1)
html = """
<table width=60% style="margin-left:0px;">
<tr>
<th width = 50% align="left">Setting</th>
<th width = 50% align="left">Simulated Value</th>
</tr>
<tr>
<td width = 50%>FOV_size</td>
<td width = 50%>{0}</td>
</tr>
<tr>
<td width = 50%>pixel_size</td>
<td width = 50%>{1}</td>
</tr>
<tr>
<td width = 50%>ADC_per_photon_conversion</td>
<td width = 50%>{2}</td>
</tr>
<tr>
<td width = 50%>ReadOutNoise_ADC</td>
<td width = 50%>{3}</td>
</tr>
<tr>
<td width = 50%>ADC_offset</td>
<td width = 50%>{4}</td>
</tr>
<tr>
<td width = 50%>emitter_density</td>
<td width = 50%>{5}</td>
</tr>
<tr>
<td width = 50%>emitter_density_std</td>
<td width = 50%>{6}</td>
</tr>
<tr>
<td width = 50%>number_of_frames</td>
<td width = 50%>{7}</td>
</tr>
<tr>
<td width = 50%>sigma</td>
<td width = 50%>{8}</td>
</tr>
<tr>
<td width = 50%>sigma_std</td>
<td width = 50%>{9}</td>
</tr>
<tr>
<td width = 50%>n_photons</td>
<td width = 50%>{10}</td>
</tr>
<tr>
<td width = 50%>n_photons_std</td>
<td width = 50%>{11}</td>
</tr>
</table>
""".format(FOV_size, pixel_size, ADC_per_photon_conversion, ReadOutNoise_ADC, ADC_offset, emitter_density, emitter_density_std, number_of_frames, sigma, sigma_std, n_photons, n_photons_std)
pdf.write_html(html)
else:
simul_text = 'The training dataset was simulated using ThunderSTORM and loaded into the notebook.'
pdf.multi_cell(190, 5, txt=simul_text, align='L')
pdf.set_font("Arial", size = 11, style='B')
#pdf.ln(1)
#pdf.cell(190, 5, txt = 'Training Dataset', align='L', ln=1)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(29, 5, txt= 'ImageData_path', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = ImageData_path, align = 'L')
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(28, 5, txt= 'LocalizationData_path:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = LocalizationData_path, align = 'L')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(28, 5, txt= 'pixel_size:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = str(pixel_size), align = 'L')
#pdf.cell(190, 5, txt=aug_text, align='L', ln=1)
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Parameters', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
# if Use_Default_Advanced_Parameters:
# pdf.cell(200, 5, txt='Default Advanced Parameters were enabled')
pdf.cell(200, 5, txt='The following parameters were used to generate patches:')
pdf.ln(1)
html = """
<table width=70% style="margin-left:0px;">
<tr>
<th width = 50% align="left">Patch Parameter</th>
<th width = 50% align="left">Value</th>
</tr>
<tr>
<td width = 50%>patch_size</td>
<td width = 50%>{0}</td>
</tr>
<tr>
<td width = 50%>upsampling_factor</td>
<td width = 50%>{1}</td>
</tr>
<tr>
<td width = 50%>num_patches_per_frame</td>
<td width = 50%>{2}</td>
</tr>
<tr>
<td width = 50%>min_number_of_emitters_per_patch</td>
<td width = 50%>{3}</td>
</tr>
<tr>
<td width = 50%>max_num_patches</td>
<td width = 50%>{4}</td>
</tr>
<tr>
<td width = 50%>gaussian_sigma</td>
<td width = 50%>{5}</td>
</tr>
<tr>
<td width = 50%>Automatic_normalization</td>
<td width = 50%>{6}</td>
</tr>
<tr>
<td width = 50%>L2_weighting_factor</td>
<td width = 50%>{7}</td>
</tr>
""".format(str(patch_size)+'x'+str(patch_size), upsampling_factor, num_patches_per_frame, min_number_of_emitters_per_patch, max_num_patches, gaussian_sigma, Automatic_normalization, L2_weighting_factor)
pdf.write_html(html)
pdf.ln(3)
pdf.set_font('Arial', size=10)
pdf.cell(200, 5, txt='The following parameters were used for training:')
pdf.ln(1)
html = """
<table width=70% style="margin-left:0px;">
<tr>
<th width = 50% align="left">Training Parameter</th>
<th width = 50% align="left">Value</th>
</tr>
<tr>
<td width = 50%>number_of_epochs</td>
<td width = 50%>{0}</td>
</tr>
<tr>
<td width = 50%>batch_size</td>
<td width = 50%>{1}</td>
</tr>
<tr>
<td width = 50%>number_of_steps</td>
<td width = 50%>{2}</td>
</tr>
<tr>
<td width = 50%>percentage_validation</td>
<td width = 50%>{3}</td>
</tr>
<tr>
<td width = 50%>initial_learning_rate</td>
<td width = 50%>{4}</td>
</tr>
</table>
""".format(number_of_epochs,batch_size,number_of_steps,percentage_validation,initial_learning_rate)
pdf.write_html(html)
pdf.ln(1)
# pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.cell(21, 5, txt= 'Model Path:', align = 'L', ln=0)
pdf.set_font('')
pdf.multi_cell(170, 5, txt = model_path+'/'+model_name, align = 'L')
pdf.ln(1)
pdf.cell(60, 5, txt = 'Example Training Images', ln=1)
pdf.ln(1)
exp_size = io.imread('/content/TrainingDataExample_DeepSTORM2D.png').shape
pdf.image('/content/TrainingDataExample_DeepSTORM2D.png', x = 11, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))
pdf.ln(1)
ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "Democratising deep learning for microscopy with ZeroCostDL4Mic." Nature Communications (2021).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- Deep-STORM: Nehme, Elias, et al. "Deep-STORM: super-resolution single-molecule microscopy by deep learning." Optica 5.4 (2018): 458-464.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
# if Use_Data_augmentation:
# ref_3 = '- Augmentor: Bloice, Marcus D., Christof Stocker, and Andreas Holzinger. "Augmentor: an image augmentation library for machine learning." arXiv preprint arXiv:1708.04680 (2017).'
# pdf.multi_cell(190, 5, txt = ref_3, align='L')
pdf.ln(3)
reminder = 'Important:\nRemember to perform the quality control step on all newly trained models\nPlease consider depositing your training dataset on Zenodo'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(model_path+'/'+model_name+'/'+model_name+'_training_report.pdf')
print('------------------------------')
print('PDF report exported in '+model_path+'/'+model_name+'/')
def qc_pdf_export():
class MyFPDF(FPDF, HTMLMixin):
pass
pdf = MyFPDF()
pdf.add_page()
pdf.set_right_margin(-1)
pdf.set_font("Arial", size = 11, style='B')
Network = 'Deep-STORM'
#model_name = os.path.basename(full_QC_model_path)
day = datetime.now()
datetime_str = str(day)[0:10]
Header = 'Quality Control report for '+Network+' model ('+os.path.basename(QC_model_path)+')\nDate: '+datetime_str
pdf.multi_cell(180, 5, txt = Header, align = 'L')
all_packages = ''
for requirement in freeze(local_only=True):
all_packages = all_packages+requirement+', '
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(2)
pdf.cell(190, 5, txt = 'Loss curves', ln=1, align='L')
pdf.ln(1)
if os.path.exists(savePath+'/lossCurvePlots.png'):
exp_size = io.imread(savePath+'/lossCurvePlots.png').shape
pdf.image(savePath+'/lossCurvePlots.png', x = 11, y = None, w = round(exp_size[1]/10), h = round(exp_size[0]/10))
else:
pdf.set_font('')
pdf.set_font('Arial', size=10)
pdf.cell(190, 5, txt='If you would like to see the evolution of the loss function during training, please play the first cell of the QC section in the notebook.')
pdf.ln(2)
pdf.set_font('')
pdf.set_font('Arial', size = 10, style = 'B')
pdf.ln(3)
pdf.cell(80, 5, txt = 'Example Quality Control Visualisation', ln=1)
pdf.ln(1)
exp_size = io.imread(savePath+'/QC_example_data.png').shape
pdf.image(savePath+'/QC_example_data.png', x = 16, y = None, w = round(exp_size[1]/8), h = round(exp_size[0]/8))
pdf.ln(1)
pdf.set_font('')
pdf.set_font('Arial', size = 11, style = 'B')
pdf.ln(1)
pdf.cell(180, 5, txt = 'Quality Control Metrics', align='L', ln=1)
pdf.set_font('')
pdf.set_font_size(10.)
pdf.ln(1)
html = """
<body>
<font size="7" face="Courier New" >
<table width=94% style="margin-left:0px;">"""
with open(savePath+'/'+os.path.basename(QC_model_path)+'_QC_metrics.csv', 'r') as csvfile:
metrics = csv.reader(csvfile)
header = next(metrics)
image = header[0]
mSSIM_PvsGT = header[1]
mSSIM_SvsGT = header[2]
NRMSE_PvsGT = header[3]
NRMSE_SvsGT = header[4]
PSNR_PvsGT = header[5]
PSNR_SvsGT = header[6]
header = """
<tr>
<th width = 10% align="left">{0}</th>
<th width = 15% align="left">{1}</th>
<th width = 15% align="center">{2}</th>
<th width = 15% align="left">{3}</th>
<th width = 15% align="center">{4}</th>
<th width = 15% align="left">{5}</th>
<th width = 15% align="center">{6}</th>
</tr>""".format(image,mSSIM_PvsGT,mSSIM_SvsGT,NRMSE_PvsGT,NRMSE_SvsGT,PSNR_PvsGT,PSNR_SvsGT)
html = html+header
for row in metrics:
image = row[0]
mSSIM_PvsGT = row[1]
mSSIM_SvsGT = row[2]
NRMSE_PvsGT = row[3]
NRMSE_SvsGT = row[4]
PSNR_PvsGT = row[5]
PSNR_SvsGT = row[6]
cells = """
<tr>
<td width = 10% align="left">{0}</td>
<td width = 15% align="center">{1}</td>
<td width = 15% align="center">{2}</td>
<td width = 15% align="center">{3}</td>
<td width = 15% align="center">{4}</td>
<td width = 15% align="center">{5}</td>
<td width = 15% align="center">{6}</td>
</tr>""".format(image,str(round(float(mSSIM_PvsGT),3)),str(round(float(mSSIM_SvsGT),3)),str(round(float(NRMSE_PvsGT),3)),str(round(float(NRMSE_SvsGT),3)),str(round(float(PSNR_PvsGT),3)),str(round(float(PSNR_SvsGT),3)))
html = html+cells
html = html+"""</body></table>"""
pdf.write_html(html)
pdf.ln(1)
pdf.set_font('')
pdf.set_font_size(10.)
ref_1 = 'References:\n - ZeroCostDL4Mic: von Chamier, Lucas & Laine, Romain, et al. "Democratising deep learning for microscopy with ZeroCostDL4Mic." Nature Communications (2021).'
pdf.multi_cell(190, 5, txt = ref_1, align='L')
ref_2 = '- Deep-STORM: Nehme, Elias, et al. "Deep-STORM: super-resolution single-molecule microscopy by deep learning." Optica 5.4 (2018): 458-464.'
pdf.multi_cell(190, 5, txt = ref_2, align='L')
pdf.ln(3)
reminder = 'To find the parameters and other information about how this model was trained, go to the training_report.pdf of this model which should be in the folder of the same name.'
pdf.set_font('Arial', size = 11, style='B')
pdf.multi_cell(190, 5, txt=reminder, align='C')
pdf.output(savePath+'/'+os.path.basename(QC_model_path)+'_QC_report.pdf')
print('------------------------------')
print('QC PDF report exported as '+savePath+'/'+os.path.basename(QC_model_path)+'_QC_report.pdf')
# Build requirements file for local run
after = [str(m) for m in sys.modules]
build_requirements_file(before, after)
```
# **2. Complete the Colab session**
---
## **2.1. Check for GPU access**
---
By default, the session should be using Python 3 and GPU acceleration, but it is possible to ensure that these are set properly by doing the following:
<font size = 4>Go to **Runtime -> Change the Runtime type**
<font size = 4>**Runtime type: Python 3** *(Python 3 is the programming language in which this program is written)*
<font size = 4>**Accelerator: GPU** *(Graphics processing unit)*
```
#@markdown ##Run this cell to check if you have GPU access
# %tensorflow_version 1.x
import tensorflow as tf
# if tf.__version__ != '2.2.0':
# !pip install tensorflow==2.2.0
if tf.test.gpu_device_name()=='':
print('You do not have GPU access.')
print('Did you change your runtime ?')
print('If the runtime settings are correct then Google did not allocate GPU to your session')
print('Expect slow performance. To access GPU try reconnecting later')
else:
print('You have GPU access')
!nvidia-smi
# from tensorflow.python.client import device_lib
# device_lib.list_local_devices()
# print the tensorflow version
print('Tensorflow version is ' + str(tf.__version__))
```
## **2.2. Mount your Google Drive**
---
<font size = 4> To use this notebook on the data present in your Google Drive, you need to mount your Google Drive to this notebook.
<font size = 4> Play the cell below to mount your Google Drive and follow the link. In the new browser window, select your drive and select 'Allow', copy the code, paste it into the cell and press Enter. This will give Colab access to the data on the drive.
<font size = 4> Once this is done, your data are available in the **Files** tab on the top left of the notebook.
```
#@markdown ##Run this cell to connect your Google Drive to Colab
#@markdown * Click on the URL.
#@markdown * Sign in to your Google Account.
#@markdown * Copy the authorization code.
#@markdown * Enter the authorization code.
#@markdown * Click on "Files" site on the right. Refresh the site. Your Google Drive folder should now be available here as "drive".
#mounts user's Google Drive to Google Colab.
from google.colab import drive
drive.mount('/content/gdrive')
```
# **3. Generate patches for training**
---
For Deep-STORM the training data can be obtained in two ways:
* Simulated using ThunderSTORM or another simulation tool and loaded here (**using Section 3.1.a**)
* Directly simulated in this notebook (**using Section 3.1.b**)
## **3.1.a Load training data**
---
Here you can load your simulated data along with its corresponding localization file.
* The `pixel_size` is defined in nanometer (nm).
```
#@markdown ##Load raw data
load_raw_data = True
# Get user input
ImageData_path = "" #@param {type:"string"}
LocalizationData_path = "" #@param {type: "string"}
#@markdown Get pixel size from file?
get_pixel_size_from_file = True #@param {type:"boolean"}
#@markdown Otherwise, use this value:
pixel_size = 100 #@param {type:"number"}
if get_pixel_size_from_file:
pixel_size,_,_ = getPixelSizeTIFFmetadata(ImageData_path, True)
# load the tiff data
Images = io.imread(ImageData_path)
# get dataset dimensions
if len(Images.shape) == 3:
(number_of_frames, M, N) = Images.shape
elif len(Images.shape) == 2:
(M, N) = Images.shape
number_of_frames = 1
print('Loaded images: '+str(M)+'x'+str(N)+' with '+str(number_of_frames)+' frames')
# Interactive display of the stack
def scroll_in_time(frame):
f=plt.figure(figsize=(6,6))
plt.imshow(Images[frame-1], interpolation='nearest', cmap = 'gray')
plt.title('Training source at frame = ' + str(frame))
plt.axis('off');
if number_of_frames > 1:
interact(scroll_in_time, frame=widgets.IntSlider(min=1, max=Images.shape[0], step=1, value=0, continuous_update=False));
else:
f=plt.figure(figsize=(6,6))
plt.imshow(Images, interpolation='nearest', cmap = 'gray')
plt.title('Training source')
plt.axis('off');
# Load the localization file and display the first
LocData = pd.read_csv(LocalizationData_path, index_col=0)
LocData.tail()
```
## **3.1.b Simulate training data**
---
This simulation tool allows you to generate SMLM data of randomly distributed emitters in a field-of-view.
The assumptions are as follows:
* Gaussian Point Spread Function (PSF) with a standard deviation defined by `sigma`. The nominal value of `sigma` can be evaluated using `sigma = 0.21 x Lambda / NA` (from [Zhang *et al.*, Applied Optics 2007](https://doi.org/10.1364/AO.46.001819)).
* Each emitter emits `n_photons` per frame, with the corresponding Poisson (shot) noise.
* The camera contributes Gaussian noise to the signal with a standard deviation defined by `ReadOutNoise_ADC` (in ADC counts).
* The `emitter_density` is defined as the number of emitters / um^2 on any given frame. Variability in the emitter density can be applied by adjusting `emitter_density_std`, which is the standard deviation of the normal distribution that the density is drawn from for each individual frame.
* The `n_photons` and `sigma` can additionally include some Gaussian variability by setting `n_photons_std` and `sigma_std`.
Important note:
- All dimensions are in nanometers (e.g. `FOV_size` = 6400 represents a field of view of 6.4 um x 6.4 um). A short worked example of the PSF and noise assumptions is given below.
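<font size = 4>As a quick illustration of the assumptions above, the minimal sketch below computes the nominal `sigma` from the `sigma = 0.21 x Lambda / NA` rule and a rough single-emitter SNR under the stated noise model. The wavelength and NA values are illustrative assumptions and are not parameters of this notebook.
```
# Minimal sketch (separate from the simulation cell): nominal PSF sigma from the
# Zhang et al. approximation and a rough single-emitter SNR under the stated
# noise model. Wavelength and NA below are illustrative assumptions only.
import math

wavelength_nm = 647.0   # assumed emission wavelength (nm)
NA = 1.49               # assumed numerical aperture

sigma_nm = 0.21 * wavelength_nm / NA
print('Nominal Gaussian PSF sigma: ' + str(round(sigma_nm, 1)) + ' nm')

# Rough SNR for one emitter: photons in the central pixel vs. combined shot
# noise and camera read-out noise (same settings as the defaults above).
n_photons = 2250.0
ReadOutNoise_ADC = 4.5
ADC_per_photon_conversion = 1.0
pixel_size_nm = 100.0

# Peak pixel signal approximated from the 2D Gaussian peak density x pixel area
peak_signal = n_photons * (pixel_size_nm**2) / (2 * math.pi * sigma_nm**2)
noise = math.sqrt(peak_signal + (ReadOutNoise_ADC / ADC_per_photon_conversion)**2)
print('Approximate peak SNR: ' + str(round(peak_signal / noise, 1)))
```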
```
load_raw_data = False
# ---------------------------- User input ----------------------------
#@markdown Run the simulation
#@markdown ---
#@markdown Camera settings:
FOV_size = 6400#@param {type:"number"}
pixel_size = 100#@param {type:"number"}
ADC_per_photon_conversion = 1 #@param {type:"number"}
ReadOutNoise_ADC = 4.5#@param {type:"number"}
ADC_offset = 50#@param {type:"number"}
#@markdown Acquisition settings:
emitter_density = 6#@param {type:"number"}
emitter_density_std = 0#@param {type:"number"}
number_of_frames = 20#@param {type:"integer"}
sigma = 110 #@param {type:"number"}
sigma_std = 5 #@param {type:"number"}
# NA = 1.1 #@param {type:"number"}
# wavelength = 800#@param {type:"number"}
# wavelength_std = 150#@param {type:"number"}
n_photons = 2250#@param {type:"number"}
n_photons_std = 250#@param {type:"number"}
# ---------------------------- Variable initialisation ----------------------------
# Start the clock to measure how long it takes
start = time.time()
print('-----------------------------------------------------------')
n_molecules = emitter_density*FOV_size*FOV_size/10**6
n_molecules_std = emitter_density_std*FOV_size*FOV_size/10**6
print('Number of molecules / FOV: '+str(round(n_molecules,2))+' +/- '+str((round(n_molecules_std,2))))
# sigma = 0.21*wavelength/NA
# sigma_std = 0.21*wavelength_std/NA
# print('Gaussian PSF sigma: '+str(round(sigma,2))+' +/- '+str(round(sigma_std,2))+' nm')
M = N = round(FOV_size/pixel_size)
FOV_size = M*pixel_size
print('Final image size: '+str(M)+'x'+str(M)+' ('+str(round(FOV_size/1000, 3))+' um x '+str(round(FOV_size/1000,3))+' um)')
np.random.seed(1)
display_upsampling = 8 # used to display the loc map here
NoiseFreeImages = np.zeros((number_of_frames, M, M))
locImage = np.zeros((number_of_frames, display_upsampling*M, display_upsampling*N))
frames = []
all_xloc = []
all_yloc = []
all_photons = []
all_sigmas = []
# ---------------------------- Main simulation loop ----------------------------
print('-----------------------------------------------------------')
for f in tqdm(range(number_of_frames)):
# Define the coordinates of emitters by randomly distributing them across the FOV
n_mol = int(max(round(np.random.normal(n_molecules, n_molecules_std, size=1)[0]), 0))
x_c = np.random.uniform(low=0.0, high=FOV_size, size=n_mol)
y_c = np.random.uniform(low=0.0, high=FOV_size, size=n_mol)
photon_array = np.random.normal(n_photons, n_photons_std, size=n_mol)
sigma_array = np.random.normal(sigma, sigma_std, size=n_mol)
# x_c = np.linspace(0,3000,5)
# y_c = np.linspace(0,3000,5)
all_xloc += x_c.tolist()
all_yloc += y_c.tolist()
frames += ((f+1)*np.ones(x_c.shape[0])).tolist()
all_photons += photon_array.tolist()
all_sigmas += sigma_array.tolist()
locImage[f] = FromLoc2Image_SimpleHistogram(x_c, y_c, image_size = (N*display_upsampling, M*display_upsampling), pixel_size = pixel_size/display_upsampling)
# # Get the approximated locations according to the grid pixel size
# Chr_emitters = [int(max(min(round(display_upsampling*x_c[i]/pixel_size),N*display_upsampling-1),0)) for i in range(len(x_c))]
# Rhr_emitters = [int(max(min(round(display_upsampling*y_c[i]/pixel_size),M*display_upsampling-1),0)) for i in range(len(y_c))]
# # Build Localization image
# for (r,c) in zip(Rhr_emitters, Chr_emitters):
# locImage[f][r][c] += 1
NoiseFreeImages[f] = FromLoc2Image_Erf(x_c, y_c, photon_array, sigma_array, image_size = (M,M), pixel_size = pixel_size)
# ---------------------------- Create DataFrame for localization file ----------------------------
# Table with localization info as dataframe output
LocData = pd.DataFrame()
LocData["frame"] = frames
LocData["x [nm]"] = all_xloc
LocData["y [nm]"] = all_yloc
LocData["Photon #"] = all_photons
LocData["Sigma [nm]"] = all_sigmas
LocData.index += 1 # set indices to start at 1 and not 0 (same as ThunderSTORM)
# ---------------------------- Estimation of SNR ----------------------------
n_frames_for_SNR = 100
M_SNR = 10
x_c = np.random.uniform(low=0.0, high=pixel_size*M_SNR, size=n_frames_for_SNR)
y_c = np.random.uniform(low=0.0, high=pixel_size*M_SNR, size=n_frames_for_SNR)
photon_array = np.random.normal(n_photons, n_photons_std, size=n_frames_for_SNR)
sigma_array = np.random.normal(sigma, sigma_std, size=n_frames_for_SNR)
SNR = np.zeros(n_frames_for_SNR)
for i in range(n_frames_for_SNR):
SingleEmitterImage = FromLoc2Image_Erf(np.array([x_c[i]]), np.array([y_c[i]]), np.array([photon_array[i]]), np.array([sigma_array[i]]), (M_SNR, M_SNR), pixel_size)
Signal_photon = np.max(SingleEmitterImage)
Noise_photon = math.sqrt((ReadOutNoise_ADC/ADC_per_photon_conversion)**2 + Signal_photon)
SNR[i] = Signal_photon/Noise_photon
print('SNR: '+str(round(np.mean(SNR),2))+' +/- '+str(round(np.std(SNR),2)))
# ---------------------------- ----------------------------
# Table with info
simParameters = pd.DataFrame()
simParameters["FOV size (nm)"] = [FOV_size]
simParameters["Pixel size (nm)"] = [pixel_size]
simParameters["ADC/photon"] = [ADC_per_photon_conversion]
simParameters["Read-out noise (ADC)"] = [ReadOutNoise_ADC]
simParameters["Constant offset (ADC)"] = [ADC_offset]
simParameters["Emitter density (emitters/um^2)"] = [emitter_density]
simParameters["STD of emitter density (emitters/um^2)"] = [emitter_density_std]
simParameters["Number of frames"] = [number_of_frames]
# simParameters["NA"] = [NA]
# simParameters["Wavelength (nm)"] = [wavelength]
# simParameters["STD of wavelength (nm)"] = [wavelength_std]
simParameters["Sigma (nm))"] = [sigma]
simParameters["STD of Sigma (nm))"] = [sigma_std]
simParameters["Number of photons"] = [n_photons]
simParameters["STD of number of photons"] = [n_photons_std]
simParameters["SNR"] = [np.mean(SNR)]
simParameters["STD of SNR"] = [np.std(SNR)]
# ---------------------------- Finish simulation ----------------------------
# Calculating the noisy image
Images = ADC_per_photon_conversion * np.random.poisson(NoiseFreeImages) + ReadOutNoise_ADC * np.random.normal(size = (number_of_frames, M, N)) + ADC_offset
Images[Images <= 0] = 0
# Convert to 16-bit or 32-bit integers
if Images.max() < (2**16-1):
Images = Images.astype(np.uint16)
else:
Images = Images.astype(np.uint32)
# ---------------------------- Display ----------------------------
# Displaying the time elapsed for simulation
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds,1),"sec(s)")
# Interactively display the results using Widgets
def scroll_in_time(frame):
f = plt.figure(figsize=(18,6))
plt.subplot(1,3,1)
plt.imshow(locImage[frame-1], interpolation='bilinear', vmin = 0, vmax=0.1)
plt.title('Localization image')
plt.axis('off');
plt.subplot(1,3,2)
plt.imshow(NoiseFreeImages[frame-1], interpolation='nearest', cmap='gray')
plt.title('Noise-free simulation')
plt.axis('off');
plt.subplot(1,3,3)
plt.imshow(Images[frame-1], interpolation='nearest', cmap='gray')
plt.title('Noisy simulation')
plt.axis('off');
interact(scroll_in_time, frame=widgets.IntSlider(min=1, max=Images.shape[0], step=1, value=0, continuous_update=False));
# Display the head of the dataframe with localizations
LocData.tail()
#@markdown ---
#@markdown ##Play this cell to save the simulated stack
#@markdown Please select a path to the folder where the simulated data should be saved. It is not necessary to save the data to run the training, but keeping the simulated data for your own records can be useful to check its validity.
Save_path = "" #@param {type:"string"}
if not os.path.exists(Save_path):
os.makedirs(Save_path)
print('Folder created.')
else:
print('Training data already exists in folder: Data overwritten.')
saveAsTIF(Save_path, 'SimulatedDataset', Images, pixel_size)
# io.imsave(os.path.join(Save_path, 'SimulatedDataset.tif'),Images)
LocData.to_csv(os.path.join(Save_path, 'SimulatedDataset.csv'))
simParameters.to_csv(os.path.join(Save_path, 'SimulatedParameters.csv'))
print('Training dataset saved.')
```
## **3.2. Generate training patches**
---
Training patches need to be created from the training data generated above.
* The `patch_size` needs to give sufficient contextual information, and for most cases a `patch_size` of 26 (corresponding to patches of 26x26 pixels) works fine. **DEFAULT: 26**
* The `upsampling_factor` defines the effective magnification of the final super-resolved image compared to the input image (this is called magnification in ThunderSTORM). It is used to generate the super-resolved patches as the target dataset. Using an `upsampling_factor` of 16 requires more memory, and it may be necessary to decrease the `patch_size` to 16, for example. **DEFAULT: 8**
* The `num_patches_per_frame` defines the number of patches extracted from each frame generated in section 3.1. **DEFAULT: 500**
* The `min_number_of_emitters_per_patch` defines the minimum number of emitters that need to be present in the patch to be a valid patch. An empty patch does not contain useful information for the network to learn from. **DEFAULT: 7**
* The `max_num_patches` defines the maximum number of patches to generate. Fewer may be generated depending on how many patches are rejected and how many frames are available. **DEFAULT: 10000**
* The `gaussian_sigma` defines the Gaussian standard deviation (in magnified pixels) applied to generate the super-resolved target image. **DEFAULT: 1**
* The `L2_weighting_factor` is a normalization factor used in the loss function. It helps balance the contribution of the L2 norm. When using higher densities, this factor should be decreased, and vice-versa. It can be automatically calculated using an empirical formula. **DEFAULT: 100** (a short sketch of how these settings produce a training target follows below)
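<font size = 4>The minimal sketch below illustrates, on a few made-up localizations, how the settings above turn a localization list into a training target: spikes are placed on the upsampled grid and blurred with `gaussian_sigma`, then scaled by `L2_weighting_factor`. It mirrors the approach used by the patch-generation cell that follows, with the default parameter values; the emitter positions are invented for illustration.
```
# Minimal sketch of target generation from toy localizations (not part of the
# patch-generation cell below; values are the defaults listed above).
import numpy as np
from scipy.ndimage import gaussian_filter

pixel_size = 100                       # nm, camera pixel
upsampling_factor = 8
gaussian_sigma = 1                     # in upsampled pixels
L2_weighting_factor = 100
patch_size_hr = 26 * upsampling_factor
pixel_size_hr = pixel_size / upsampling_factor   # nm per upsampled pixel

# Toy emitter positions inside the patch, in nm (illustrative only)
x_nm = np.array([310.0, 1200.0, 1750.0])
y_nm = np.array([450.0, 900.0, 2100.0])

# Place one spike per localization on the high-resolution grid
spikes = np.zeros((patch_size_hr, patch_size_hr), dtype=np.float32)
rows = np.clip(np.round(y_nm / pixel_size_hr).astype(int), 0, patch_size_hr - 1)
cols = np.clip(np.round(x_nm / pixel_size_hr).astype(int), 0, patch_size_hr - 1)
spikes[rows, cols] = 1.0

# Blur the spikes and scale to obtain the heatmap target
heatmap = L2_weighting_factor * gaussian_filter(spikes, gaussian_sigma)
print('Spikes placed:', int(spikes.sum()), '- heatmap max:', round(float(heatmap.max()), 2))
```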
```
#@markdown ## **Provide patch parameters**
# -------------------- User input --------------------
patch_size = 26 #@param {type:"integer"}
upsampling_factor = 8 #@param ["4", "8", "16"] {type:"raw"}
num_patches_per_frame = 500#@param {type:"integer"}
min_number_of_emitters_per_patch = 7#@param {type:"integer"}
max_num_patches = 10000#@param {type:"integer"}
gaussian_sigma = 1#@param {type:"integer"}
#@markdown Estimate the optimal normalization factor automatically?
Automatic_normalization = True #@param {type:"boolean"}
#@markdown Otherwise, it will use the following value:
L2_weighting_factor = 100 #@param {type:"number"}
# -------------------- Prepare variables --------------------
# Start the clock to measure how long it takes
start = time.time()
# Initialize some parameters
pixel_size_hr = pixel_size/upsampling_factor # in nm
n_patches = min(number_of_frames*num_patches_per_frame, max_num_patches)
patch_size = patch_size*upsampling_factor
# Dimensions of the high-res grid
Mhr = upsampling_factor*M # in pixels
Nhr = upsampling_factor*N # in pixels
# Initialize the training patches and labels
patches = np.zeros((n_patches, patch_size, patch_size), dtype = np.float32)
spikes = np.zeros((n_patches, patch_size, patch_size), dtype = np.float32)
heatmaps = np.zeros((n_patches, patch_size, patch_size), dtype = np.float32)
# Run over all frames and construct the training examples
k = 1 # current patch count
skip_counter = 0 # number of patches skipped due to low density
id_start = 0 # id position in LocData for current frame
print('Generating '+str(n_patches)+' patches of '+str(patch_size)+'x'+str(patch_size))
n_locs = len(LocData.index)
print('Total number of localizations: '+str(n_locs))
density = n_locs/(M*N*number_of_frames*(0.001*pixel_size)**2)
print('Density: '+str(round(density,2))+' locs/um^2')
n_locs_per_patch = patch_size**2*density
if Automatic_normalization:
# This empirical formula attempts to balance the L2 loss between the background and the bright spikes
# A value of 100 was originally chosen to balance L2 for a 2.6x2.6 um^2 patch, a 0.1 um pixel size and a density of 3 (hence the 20.28), at upsampling_factor = 8
L2_weighting_factor = 100/math.sqrt(min(n_locs_per_patch, min_number_of_emitters_per_patch)*8**2/(upsampling_factor**2*20.28))
print('Normalization factor: '+str(round(L2_weighting_factor,2)))
# -------------------- Patch generation loop --------------------
print('-----------------------------------------------------------')
for (f, thisFrame) in enumerate(tqdm(Images)):
# Upsample the frame
upsampledFrame = np.kron(thisFrame, np.ones((upsampling_factor,upsampling_factor)))
# Read all the provided high-resolution locations for current frame
DataFrame = LocData[LocData['frame'] == f+1].copy()
# Get the approximated locations according to the high-res grid pixel size
Chr_emitters = [int(max(min(round(DataFrame['x [nm]'][i]/pixel_size_hr),Nhr-1),0)) for i in range(id_start+1,id_start+1+len(DataFrame.index))]
Rhr_emitters = [int(max(min(round(DataFrame['y [nm]'][i]/pixel_size_hr),Mhr-1),0)) for i in range(id_start+1,id_start+1+len(DataFrame.index))]
id_start += len(DataFrame.index)
# Build Localization image
LocImage = np.zeros((Mhr,Nhr))
LocImage[(Rhr_emitters, Chr_emitters)] = 1
# Here, there's a choice between the original Gaussian (classification approach) and using the erf function
HeatMapImage = L2_weighting_factor*gaussian_filter(LocImage, gaussian_sigma)
# HeatMapImage = L2_weighting_factor*FromLoc2Image_MultiThreaded(np.array(list(DataFrame['x [nm]'])), np.array(list(DataFrame['y [nm]'])),
# np.ones(len(DataFrame.index)), pixel_size_hr*gaussian_sigma*np.ones(len(DataFrame.index)),
# Mhr, pixel_size_hr)
# Generate random position for the top left corner of the patch
xc = np.random.randint(0, Mhr-patch_size, size=num_patches_per_frame)
yc = np.random.randint(0, Nhr-patch_size, size=num_patches_per_frame)
for c in range(len(xc)):
if LocImage[xc[c]:xc[c]+patch_size, yc[c]:yc[c]+patch_size].sum() < min_number_of_emitters_per_patch:
skip_counter += 1
continue
else:
# Limit the number of training examples to max_num_patches
if k > max_num_patches:
break
else:
# Assign the patches to the right part of the images
patches[k-1] = upsampledFrame[xc[c]:xc[c]+patch_size, yc[c]:yc[c]+patch_size]
spikes[k-1] = LocImage[xc[c]:xc[c]+patch_size, yc[c]:yc[c]+patch_size]
heatmaps[k-1] = HeatMapImage[xc[c]:xc[c]+patch_size, yc[c]:yc[c]+patch_size]
k += 1 # increment current patch count
# Remove the empty data
patches = patches[:k-1]
spikes = spikes[:k-1]
heatmaps = heatmaps[:k-1]
n_patches = k-1
# -------------------- Failsafe --------------------
# Check if the size of the training set is smaller than 5k to notify user to simulate more images using ThunderSTORM
if ((k-1) < 5000):
# W = '\033[0m' # white (normal)
# R = '\033[31m' # red
print(bcolors.WARNING+'!! WARNING: Training set size is below 5K - Consider simulating more images in ThunderSTORM. !!'+bcolors.NORMAL)
# -------------------- Displays --------------------
print('Number of patches skipped due to low density: '+str(skip_counter))
# dataSize = int((getsizeof(patches)+getsizeof(heatmaps)+getsizeof(spikes))/(1024*1024)) #rounded in MB
# print('Size of patches: '+str(dataSize)+' MB')
print(str(n_patches)+' patches were generated.')
# Displaying the time elapsed for patch generation
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
# Display patches interactively with a slider
def scroll_patches(patch):
f = plt.figure(figsize=(16,6))
plt.subplot(1,3,1)
plt.imshow(patches[patch-1], interpolation='nearest', cmap='gray')
plt.title('Raw data (frame #'+str(patch)+')')
plt.axis('off');
plt.subplot(1,3,2)
plt.imshow(heatmaps[patch-1], interpolation='nearest')
plt.title('Heat map')
plt.axis('off');
plt.subplot(1,3,3)
plt.imshow(spikes[patch-1], interpolation='nearest')
plt.title('Localization map')
plt.axis('off');
plt.savefig('/content/TrainingDataExample_DeepSTORM2D.png',bbox_inches='tight',pad_inches=0)
interact(scroll_patches, patch=widgets.IntSlider(min=1, max=patches.shape[0], step=1, value=0, continuous_update=False));
```
# **4. Train the network**
---
## **4.1. Select your paths and parameters**
---
<font size = 4>**`model_path`**: Enter the path where your model will be saved once trained (for instance your result folder).
<font size = 4>**`model_name`:** Use only my_model-style names, not my-model (use "_", not "-"). Do not use spaces in the name. Avoid using the name of an existing model (saved in the same folder), as it will be overwritten.
<font size = 5>**Training parameters**
<font size = 4>**`number_of_epochs`:** Input how many epochs (rounds) the network will be trained for. Preliminary results can already be observed after a few (10-30) epochs, but a full training should run for ~100 epochs. Evaluate the performance after training (see Section 5). **Default value: 80**
<font size =4>**`batch_size:`** This parameter defines the number of patches seen in each training step. Reducing or increasing the **batch size** may slow or speed up your training, respectively, and can influence network performance. **Default value: 16**
<font size = 4>**`number_of_steps`:** Define the number of training steps per epoch. **If this value is set to 0**, this parameter is calculated by default so that each patch is seen at least once per epoch (see the worked example below). **Default value: Number of patches / batch_size**
<font size = 4>**`percentage_validation`:** Input the percentage of your training dataset you want to use to validate the network during training. **Default value: 30**
<font size = 4>**`initial_learning_rate`:** This parameter represents the initial value to be used as learning rate in the optimizer. **Default value: 0.001**
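<font size = 4>As a worked example of the default `number_of_steps` (assuming, purely for illustration, that Section 3.2 produced 10,000 patches with the default settings above):
```
# Worked example of the default number_of_steps; the patch count is an assumption
# here and in practice comes from the patch-generation step (Section 3.2).
n_patches = 10000
batch_size = 16
percentage_validation = 30 / 100

number_of_steps = int((1 - percentage_validation) * n_patches / batch_size)
print(number_of_steps)   # 437 steps per epoch
```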
```
#@markdown ###Path to training images and parameters
model_path = "" #@param {type: "string"}
model_name = "" #@param {type: "string"}
number_of_epochs = 80#@param {type:"integer"}
batch_size = 16#@param {type:"integer"}
number_of_steps = 0#@param {type:"integer"}
percentage_validation = 30 #@param {type:"number"}
initial_learning_rate = 0.001 #@param {type:"number"}
percentage_validation /= 100
if number_of_steps == 0:
number_of_steps = int((1-percentage_validation)*n_patches/batch_size)
print('Number of steps: '+str(number_of_steps))
# Pretrained model path initialised here so next cell does not need to be run
h5_file_path = ''
Use_pretrained_model = False
if not ('patches' in locals()):
# W = '\033[0m' # white (normal)
# R = '\033[31m' # red
print(bcolors.WARNING+'!! WARNING: No patches were found in memory currently. !!'+bcolors.NORMAL)
Save_path = os.path.join(model_path, model_name)
if os.path.exists(Save_path):
print(bcolors.WARNING+'The model folder already exists and will be overwritten.'+bcolors.NORMAL)
print('-----------------------------')
print('Training parameters set.')
```
## **4.2. Using weights from a pre-trained model as initial weights**
---
<font size = 4> Here, you can set the path to a pre-trained model from which the weights can be extracted and used as a starting point for this training session. **This pre-trained model needs to be a Deep-STORM 2D model**.
<font size = 4> This option allows you to perform training over multiple Colab runtimes or to do transfer learning using models trained outside of ZeroCostDL4Mic. **You do not need to run this section if you want to train a network from scratch**.
<font size = 4> In order to continue training from the point where the pre-trained model left off, it is advisable to also **load the learning rate** that was used when the training ended. This is automatically saved for models trained with ZeroCostDL4Mic and will be loaded here. If no learning rate can be found in the model folder provided, the default learning rate will be used.
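<font size = 4>For reference, the minimal sketch below shows how weights saved as `weights_best.hdf5` or `weights_last.hdf5` could be loaded into a Keras model of the same architecture. The `build_model` call in the usage comment is a hypothetical placeholder; the actual weight loading is handled inside this notebook's training code.
```
# Minimal sketch (not the notebook's own loading code): load pre-trained weights
# into a Keras model with the same architecture as the saved Deep-STORM model.
import os

def load_pretrained_weights(model, pretrained_model_path, weights_choice='best'):
    h5_file = os.path.join(pretrained_model_path, 'weights_' + weights_choice + '.hdf5')
    if os.path.exists(h5_file):
        model.load_weights(h5_file)   # standard Keras API
        print('Loaded weights from ' + h5_file)
    else:
        print('No pretrained weights found; training from scratch.')
    return model

# Usage (assuming `model = build_model(...)` was created elsewhere; build_model is hypothetical):
# model = load_pretrained_weights(model, pretrained_model_path, Weights_choice)
```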
```
# @markdown ##Loading weights from a pre-trained network
Use_pretrained_model = False #@param {type:"boolean"}
pretrained_model_choice = "Model_from_file" #@param ["Model_from_file"]
Weights_choice = "best" #@param ["last", "best"]
#@markdown ###If you chose "Model_from_file", please provide the path to the model folder:
pretrained_model_path = "" #@param {type:"string"}
# --------------------- Check if we load a previously trained model ------------------------
if Use_pretrained_model:
# --------------------- Load the model from the chosen path ------------------------
if pretrained_model_choice == "Model_from_file":
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".hdf5")
# --------------------- Download a model provided in the XXX ------------------------
if pretrained_model_choice == "Model_name":
pretrained_model_name = "Model_name"
pretrained_model_path = "/content/"+pretrained_model_name
print("Downloading the 2D_Demo_Model_from_Stardist_2D_paper")
if os.path.exists(pretrained_model_path):
shutil.rmtree(pretrained_model_path)
os.makedirs(pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
wget.download("", pretrained_model_path)
h5_file_path = os.path.join(pretrained_model_path, "weights_"+Weights_choice+".hdf5")
# --------------------- Add additional pre-trained models here ------------------------
# --------------------- Check the model exist ------------------------
# If the chosen model path does not contain a pretrained model, then Use_pretrained_model is disabled,
if not os.path.exists(h5_file_path):
print(bcolors.WARNING+'WARNING: weights_'+Weights_choice+'.hdf5 pretrained model does not exist'+bcolors.NORMAL)
Use_pretrained_model = False
# If the model path contains a pretrained model, we load the learning rate,
if os.path.exists(h5_file_path):
#Here we check if the learning rate can be loaded from the quality control folder
if os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
with open(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv'),'r') as csvfile:
csvRead = pd.read_csv(csvfile, sep=',')
#print(csvRead)
if "learning rate" in csvRead.columns: #Here we check that the learning rate column exist (compatibility with model trained un ZeroCostDL4Mic bellow 1.4)
print("pretrained network learning rate found")
#find the last learning rate
lastLearningRate = csvRead["learning rate"].iloc[-1]
#Find the learning rate corresponding to the lowest validation loss
min_val_loss = csvRead[csvRead['val_loss'] == min(csvRead['val_loss'])]
#print(min_val_loss)
bestLearningRate = min_val_loss['learning rate'].iloc[-1]
if Weights_choice == "last":
print('Last learning rate: '+str(lastLearningRate))
if Weights_choice == "best":
print('Learning rate of best validation loss: '+str(bestLearningRate))
if not "learning rate" in csvRead.columns: #if the column does not exist, then initial learning rate is used instead
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(bestLearningRate)+' will be used instead.'+bcolors.NORMAL)
#Compatibility with models trained outside ZeroCostDL4Mic but default learning rate will be used
if not os.path.exists(os.path.join(pretrained_model_path, 'Quality Control', 'training_evaluation.csv')):
print(bcolors.WARNING+'WARNING: The learning rate cannot be identified from the pretrained network. Default learning rate of '+str(initial_learning_rate)+' will be used instead'+bcolors.NORMAL)
bestLearningRate = initial_learning_rate
lastLearningRate = initial_learning_rate
# Display info about the pretrained model to be loaded (or not)
if Use_pretrained_model:
print('Weights found in:')
print(h5_file_path)
print('will be loaded prior to training.')
else:
print('No pretrained network will be used.')
h5_file_path = ''
```
## **4.4. Start Training**
---
<font size = 4>When playing the cell below you should see updates after each epoch (round). Network training can take some time.
<font size = 4>* **CRITICAL NOTE:** Google Colab has a time limit for processing (to prevent using GPU power for data mining). Training time must be less than 12 hours! If training takes longer than 12 hours, please decrease the number of epochs or the number of patches.
<font size = 4>Once training is complete, the trained model is automatically saved on your Google Drive, in the **model_path** folder that was selected in Section 4.1. It is however wise to download the folder from Google Drive, as all data can be erased at the next training if the same folder is used.
```
#@markdown ##Start training
# Start the clock to measure how long it takes
start = time.time()
# --------------------- Using pretrained model ------------------------
#Here we ensure that the learning rate is set correctly when using pre-trained models
if Use_pretrained_model:
if Weights_choice == "last":
initial_learning_rate = lastLearningRate
if Weights_choice == "best":
initial_learning_rate = bestLearningRate
# --------------------- ---------------------- ------------------------
#Here we check whether a model with the same name already exists; if so, it is deleted
if os.path.exists(Save_path):
shutil.rmtree(Save_path)
# Create the model folder!
os.makedirs(Save_path)
# Export pdf summary
pdf_export(raw_data = load_raw_data, pretrained_model = Use_pretrained_model)
# Let's go !
train_model(patches, heatmaps, Save_path,
steps_per_epoch=number_of_steps, epochs=number_of_epochs, batch_size=batch_size,
upsampling_factor = upsampling_factor,
validation_split = percentage_validation,
initial_learning_rate = initial_learning_rate,
pretrained_model_path = h5_file_path,
L2_weighting_factor = L2_weighting_factor)
# # Show info about the GPU memory usage
# !nvidia-smi
# Displaying the time elapsed for training
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
# export pdf after training to update the existing document
pdf_export(trained = True, raw_data = load_raw_data, pretrained_model = Use_pretrained_model)
```
# **5. Evaluate your model**
---
<font size = 4>This section allows the user to perform important quality checks on the validity and generalisability of the trained model.
<font size = 4>**We highly recommend to perform quality control on all newly trained models.**
```
# model name and path
#@markdown ###Do you want to assess the model you just trained ?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown ###If not, please provide the path to the model folder:
#@markdown #####During training, the model files are automatically saved inside a folder named after the parameter `model_name` (see Section 4.1). Provide the path to this folder as `QC_model_path`.
QC_model_path = "" #@param {type:"string"}
if (Use_the_current_trained_model):
QC_model_path = os.path.join(model_path, model_name)
if os.path.exists(QC_model_path):
print("The "+os.path.basename(QC_model_path)+" model will be evaluated")
else:
print(bcolors.WARNING+'!! WARNING: The chosen model does not exist !!'+bcolors.NORMAL)
print('Please make sure you provide a valid model path before proceeding further.')
```
## **5.1. Inspection of the loss function**
---
<font size = 4>First, it is good practice to evaluate the training progress by comparing the training loss with the validation loss. The latter is a metric which shows how well the network performs on a subset of unseen data which is set aside from the training dataset. For more information on this, see for example [this review](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6381354/) by Nichols *et al.*
<font size = 4>**Training loss** describes an error value after each epoch for the difference between the model's prediction and its ground-truth target.
<font size = 4>**Validation loss** describes the same error value, but between the model's prediction on a validation image and its target.
<font size = 4>During training both values should decrease before reaching a minimal value which does not decrease further even after more training. Comparing the development of the validation loss with the training loss can give insights into the model's performance.
<font size = 4>Decreasing **Training loss** and **Validation loss** indicates that training is still ongoing and increasing the `number_of_epochs` is recommended. Note that the curves can look flat towards the right side simply because of the y-axis scaling. The network has reached convergence once the curves flatten out; after this point no further training is required. If the **Validation loss** suddenly increases again while the **Training loss** simultaneously goes towards zero, the network is overfitting to the training data. In other words, the network is memorising the exact patterns of the training data and no longer generalises well to unseen data. In this case the training dataset has to be increased.
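<font size = 4>As a complement to the plots below, the minimal sketch that follows reads `training_evaluation.csv` with pandas and flags the overfitting pattern described above numerically. It assumes the loss columns are named `loss` and `val_loss`; only `val_loss` and `learning rate` are confirmed by the rest of this notebook.
```
# Minimal sketch (separate from the plotting cell below): a quick numerical check
# of the loss curves. Assumes columns named 'loss' and 'val_loss' in the CSV.
import os
import pandas as pd

csv_path = os.path.join(QC_model_path, 'Quality Control', 'training_evaluation.csv')
history = pd.read_csv(csv_path)

best_epoch = int(history['val_loss'].idxmin()) + 1
print('Lowest validation loss at epoch ' + str(best_epoch) + ' of ' + str(len(history)))

# Heuristic: recent validation loss well above its minimum while training loss
# keeps decreasing suggests the overfitting pattern described above.
tail = history.iloc[-5:]
if tail['val_loss'].mean() > 1.1 * history['val_loss'].min() and tail['loss'].is_monotonic_decreasing:
    print('Validation loss is rising while training loss still decreases: possible overfitting.')
else:
    print('No obvious overfitting pattern detected.')
```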
```
#@markdown ##Play the cell to show a plot of training errors vs. epoch number
lossDataFromCSV = []
vallossDataFromCSV = []
with open(os.path.join(QC_model_path,'Quality Control/training_evaluation.csv'),'r') as csvfile:
csvRead = csv.reader(csvfile, delimiter=',')
next(csvRead)
for row in csvRead:
if row:
lossDataFromCSV.append(float(row[0]))
vallossDataFromCSV.append(float(row[1]))
epochNumber = range(len(lossDataFromCSV))
plt.figure(figsize=(15,10))
plt.subplot(2,1,1)
plt.plot(epochNumber,lossDataFromCSV, label='Training loss')
plt.plot(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (linear scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.subplot(2,1,2)
plt.semilogy(epochNumber,lossDataFromCSV, label='Training loss')
plt.semilogy(epochNumber,vallossDataFromCSV, label='Validation loss')
plt.title('Training loss and validation loss vs. epoch number (log scale)')
plt.ylabel('Loss')
plt.xlabel('Epoch number')
plt.legend()
plt.savefig(os.path.join(QC_model_path,'Quality Control/lossCurvePlots.png'), bbox_inches='tight', pad_inches=0)
plt.show()
```
## **5.2. Error mapping and quality metrics estimation**
---
<font size = 4>This section will display SSIM maps and RSE maps, as well as calculate total SSIM, NRMSE and PSNR metrics for all the images provided in the "QC_image_folder", using the corresponding localization data contained in "QC_loc_folder".
<font size = 4>**1. The SSIM (structural similarity) map**
<font size = 4>The SSIM metric is used to evaluate whether two images contain the same structures. It is a normalized metric and an SSIM of 1 indicates a perfect similarity between two images. Therefore for SSIM, the closer to 1, the better. The SSIM maps are constructed by calculating the SSIM metric in each pixel by considering the surrounding structural similarity in the neighbourhood of that pixel (currently defined as a window of 11 pixels with a Gaussian weighting of 1.5 pixel standard deviation; see our Wiki for more info).
<font size=4>**mSSIM** is the mean SSIM value calculated across the entire image.
<font size=4>**The output below shows the SSIM maps with the mSSIM**
<font size = 4>**2. The RSE (Root Squared Error) map**
<font size = 4>This is a display of the root of the squared difference between the normalized predicted and target or the source and the target. In this case, a smaller RSE is better. A perfect agreement between target and prediction will lead to an RSE map showing zeros everywhere (dark).
<font size =4>**NRMSE (normalised root mean squared error)** gives the average difference between all pixels in the images compared to each other. Good agreement yields low NRMSE scores.
<font size = 4>**PSNR (Peak signal-to-noise ratio)** is a metric that gives the difference between the ground truth and prediction (or source input) in decibels, using the peak pixel values of the prediction and the MSE between the images. The higher the score the better the agreement.
<font size=4>**The output below shows the RSE maps with the NRMSE and PSNR values.**
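<font size = 4>For reference, the minimal sketch below computes the same three metrics on toy arrays using standard scikit-image functions, following the same conventions as the QC cell that follows. The data are random and purely illustrative.
```
# Minimal sketch of the QC metrics on toy data; arrays are random and only
# illustrate how mSSIM, the RSE map, NRMSE and PSNR are obtained.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
gt = rng.random((64, 64)).astype(np.float32)
pred = np.clip(gt + 0.05 * rng.standard_normal((64, 64)).astype(np.float32), 0, 1)

mSSIM, ssim_map = structural_similarity(gt, pred, data_range=1.0, full=True)
rse_map = np.sqrt((gt - pred) ** 2)      # RSE map: dark where the images agree
nrmse = np.sqrt(np.mean(rse_map))        # same convention as the QC cell below
psnr_value = peak_signal_noise_ratio(gt, pred, data_range=1.0)

print('mSSIM: ' + str(round(float(mSSIM), 3)))
print('NRMSE: ' + str(round(float(nrmse), 3)))
print('PSNR:  ' + str(round(float(psnr_value), 2)) + ' dB')
```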
```
# ------------------------ User input ------------------------
#@markdown ##Choose the folders that contain your Quality Control dataset
QC_image_folder = "" #@param{type:"string"}
QC_loc_folder = "" #@param{type:"string"}
#@markdown Get pixel size from file?
get_pixel_size_from_file = True #@param {type:"boolean"}
#@markdown Otherwise, use this value:
pixel_size = 100 #@param {type:"number"}
if get_pixel_size_from_file:
pixel_size_INPUT = None
else:
pixel_size_INPUT = pixel_size
# ------------------------ QC analysis loop over provided dataset ------------------------
savePath = os.path.join(QC_model_path, 'Quality Control')
# Open and create the csv file that will contain all the QC metrics
with open(os.path.join(savePath, os.path.basename(QC_model_path)+"_QC_metrics.csv"), "w", newline='') as file:
writer = csv.writer(file)
# Write the header in the csv file
writer.writerow(["image #","Prediction v. GT mSSIM","WF v. GT mSSIM", "Prediction v. GT NRMSE","WF v. GT NRMSE", "Prediction v. GT PSNR", "WF v. GT PSNR"])
# These lists will be used to collect all the metrics values per slice
file_name_list = []
slice_number_list = []
mSSIM_GvP_list = []
mSSIM_GvWF_list = []
NRMSE_GvP_list = []
NRMSE_GvWF_list = []
PSNR_GvP_list = []
PSNR_GvWF_list = []
# Let's loop through the provided dataset in the QC folders
for (imageFilename, locFilename) in zip(list_files(QC_image_folder, 'tif'), list_files(QC_loc_folder, 'csv')):
print('--------------')
print(imageFilename)
print(locFilename)
# Get the prediction
batchFramePredictionLocalization(QC_image_folder, imageFilename, QC_model_path, savePath, pixel_size = pixel_size_INPUT)
# test_model(QC_image_folder, imageFilename, QC_model_path, savePath, display=False);
thisPrediction = io.imread(os.path.join(savePath, 'Predicted_'+imageFilename))
thisWidefield = io.imread(os.path.join(savePath, 'Widefield_'+imageFilename))
Mhr = thisPrediction.shape[0]
Nhr = thisPrediction.shape[1]
if pixel_size_INPUT == None:
pixel_size, N, M = getPixelSizeTIFFmetadata(os.path.join(QC_image_folder,imageFilename))
upsampling_factor = int(Mhr/M)
print('Upsampling factor: '+str(upsampling_factor))
pixel_size_hr = pixel_size/upsampling_factor # in nm
# Load the localization file and display the first
LocData = pd.read_csv(os.path.join(QC_loc_folder,locFilename), index_col=0)
x = np.array(list(LocData['x [nm]']))
y = np.array(list(LocData['y [nm]']))
locImage = FromLoc2Image_SimpleHistogram(x, y, image_size = (Mhr,Nhr), pixel_size = pixel_size_hr)
# Remove extension from filename
imageFilename_no_extension = os.path.splitext(imageFilename)[0]
# io.imsave(os.path.join(savePath, 'GT_image_'+imageFilename), locImage)
saveAsTIF(savePath, 'GT_image_'+imageFilename_no_extension, locImage, pixel_size_hr)
# Normalize the images wrt each other by minimizing the MSE between GT and prediction
test_GT_norm, test_prediction_norm = norm_minmse(locImage, thisPrediction, normalize_gt=True)
# Normalize the images wrt each other by minimizing the MSE between GT and Source image
test_GT_norm, test_wf_norm = norm_minmse(locImage, thisWidefield, normalize_gt=True)
# -------------------------------- Calculate the metric maps and save them --------------------------------
# Calculate the SSIM maps
index_SSIM_GTvsPrediction, img_SSIM_GTvsPrediction = structural_similarity(test_GT_norm, test_prediction_norm, data_range=1., full=True)
index_SSIM_GTvsWF, img_SSIM_GTvsWF = structural_similarity(test_GT_norm, test_wf_norm, data_range=1., full=True)
# Save ssim_maps
img_SSIM_GTvsPrediction_32bit = np.float32(img_SSIM_GTvsPrediction)
# io.imsave(os.path.join(savePath,'SSIM_GTvsPrediction_'+imageFilename),img_SSIM_GTvsPrediction_32bit)
saveAsTIF(savePath,'SSIM_GTvsPrediction_'+imageFilename_no_extension, img_SSIM_GTvsPrediction_32bit, pixel_size_hr)
img_SSIM_GTvsWF_32bit = np.float32(img_SSIM_GTvsWF)
# io.imsave(os.path.join(savePath,'SSIM_GTvsWF_'+imageFilename),img_SSIM_GTvsWF_32bit)
saveAsTIF(savePath,'SSIM_GTvsWF_'+imageFilename_no_extension, img_SSIM_GTvsWF_32bit, pixel_size_hr)
# Calculate the Root Squared Error (RSE) maps
img_RSE_GTvsPrediction = np.sqrt(np.square(test_GT_norm - test_prediction_norm))
img_RSE_GTvsWF = np.sqrt(np.square(test_GT_norm - test_wf_norm))
# Save SE maps
img_RSE_GTvsPrediction_32bit = np.float32(img_RSE_GTvsPrediction)
# io.imsave(os.path.join(savePath,'RSE_GTvsPrediction_'+imageFilename),img_RSE_GTvsPrediction_32bit)
saveAsTIF(savePath,'RSE_GTvsPrediction_'+imageFilename_no_extension, img_RSE_GTvsPrediction_32bit, pixel_size_hr)
img_RSE_GTvsWF_32bit = np.float32(img_RSE_GTvsWF)
# io.imsave(os.path.join(savePath,'RSE_GTvsWF_'+imageFilename),img_RSE_GTvsWF_32bit)
saveAsTIF(savePath,'RSE_GTvsWF_'+imageFilename_no_extension, img_RSE_GTvsWF_32bit, pixel_size_hr)
# -------------------------------- Calculate the RSE metrics and save them --------------------------------
# Normalised Root Mean Squared Error (here it's valid to take the mean of the image)
NRMSE_GTvsPrediction = np.sqrt(np.mean(img_RSE_GTvsPrediction))
NRMSE_GTvsWF = np.sqrt(np.mean(img_RSE_GTvsWF))
# We can also measure the peak signal to noise ratio between the images
PSNR_GTvsPrediction = psnr(test_GT_norm,test_prediction_norm,data_range=1.0)
PSNR_GTvsWF = psnr(test_GT_norm,test_wf_norm,data_range=1.0)
writer.writerow([imageFilename,str(index_SSIM_GTvsPrediction),str(index_SSIM_GTvsWF),str(NRMSE_GTvsPrediction),str(NRMSE_GTvsWF),str(PSNR_GTvsPrediction), str(PSNR_GTvsWF)])
# Collect values to display in dataframe output
file_name_list.append(imageFilename)
mSSIM_GvP_list.append(index_SSIM_GTvsPrediction)
mSSIM_GvWF_list.append(index_SSIM_GTvsWF)
NRMSE_GvP_list.append(NRMSE_GTvsPrediction)
NRMSE_GvWF_list.append(NRMSE_GTvsWF)
PSNR_GvP_list.append(PSNR_GTvsPrediction)
PSNR_GvWF_list.append(PSNR_GTvsWF)
# Table with metrics as dataframe output
pdResults = pd.DataFrame(index = file_name_list)
pdResults["Prediction v. GT mSSIM"] = mSSIM_GvP_list
pdResults["Wide-field v. GT mSSIM"] = mSSIM_GvWF_list
pdResults["Prediction v. GT NRMSE"] = NRMSE_GvP_list
pdResults["Wide-field v. GT NRMSE"] = NRMSE_GvWF_list
pdResults["Prediction v. GT PSNR"] = PSNR_GvP_list
pdResults["Wide-field v. GT PSNR"] = PSNR_GvWF_list
# ------------------------ Display ------------------------
print('--------------------------------------------')
@interact
def show_QC_results(file = list_files(QC_image_folder, 'tif')):
plt.figure(figsize=(15,15))
# Target (Ground-truth)
plt.subplot(3,3,1)
plt.axis('off')
img_GT = io.imread(os.path.join(savePath, 'GT_image_'+file))
plt.imshow(img_GT, norm = simple_norm(img_GT, percent = 99.5))
plt.title('Target',fontsize=15)
# Wide-field
plt.subplot(3,3,2)
plt.axis('off')
img_Source = io.imread(os.path.join(savePath, 'Widefield_'+file))
plt.imshow(img_Source, norm = simple_norm(img_Source, percent = 99.5))
plt.title('Widefield',fontsize=15)
#Prediction
plt.subplot(3,3,3)
plt.axis('off')
img_Prediction = io.imread(os.path.join(savePath, 'Predicted_'+file))
plt.imshow(img_Prediction, norm = simple_norm(img_Prediction, percent = 99.5))
plt.title('Prediction',fontsize=15)
#Setting up colours
cmap = plt.cm.CMRmap
#SSIM between GT and Source
plt.subplot(3,3,5)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
img_SSIM_GTvsWF = io.imread(os.path.join(savePath, 'SSIM_GTvsWF_'+file))
imSSIM_GTvsWF = plt.imshow(img_SSIM_GTvsWF, cmap = cmap, vmin=0, vmax=1)
plt.colorbar(imSSIM_GTvsWF,fraction=0.046, pad=0.04)
plt.title('Target vs. Widefield',fontsize=15)
plt.xlabel('mSSIM: '+str(round(pdResults.loc[file]["Wide-field v. GT mSSIM"],3)),fontsize=14)
plt.ylabel('SSIM maps',fontsize=20, rotation=0, labelpad=75)
#SSIM between GT and Prediction
plt.subplot(3,3,6)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
img_SSIM_GTvsPrediction = io.imread(os.path.join(savePath, 'SSIM_GTvsPrediction_'+file))
imSSIM_GTvsPrediction = plt.imshow(img_SSIM_GTvsPrediction, cmap = cmap, vmin=0,vmax=1)
plt.colorbar(imSSIM_GTvsPrediction,fraction=0.046, pad=0.04)
plt.title('Target vs. Prediction',fontsize=15)
plt.xlabel('mSSIM: '+str(round(pdResults.loc[file]["Prediction v. GT mSSIM"],3)),fontsize=14)
#Root Squared Error between GT and Source
plt.subplot(3,3,8)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
img_RSE_GTvsWF = io.imread(os.path.join(savePath, 'RSE_GTvsWF_'+file))
imRSE_GTvsWF = plt.imshow(img_RSE_GTvsWF, cmap = cmap, vmin=0, vmax = 1)
plt.colorbar(imRSE_GTvsWF,fraction=0.046,pad=0.04)
plt.title('Target vs. Widefield',fontsize=15)
plt.xlabel('NRMSE: '+str(round(pdResults.loc[file]["Wide-field v. GT NRMSE"],3))+', PSNR: '+str(round(pdResults.loc[file]["Wide-field v. GT PSNR"],3)),fontsize=14)
plt.ylabel('RSE maps',fontsize=20, rotation=0, labelpad=75)
#Root Squared Error between GT and Prediction
plt.subplot(3,3,9)
#plt.axis('off')
plt.tick_params(
axis='both', # changes apply to the x-axis and y-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
left=False, # ticks along the left edge are off
right=False, # ticks along the right edge are off
labelbottom=False,
labelleft=False)
img_RSE_GTvsPrediction = io.imread(os.path.join(savePath, 'RSE_GTvsPrediction_'+file))
imRSE_GTvsPrediction = plt.imshow(img_RSE_GTvsPrediction, cmap = cmap, vmin=0, vmax=1)
plt.colorbar(imRSE_GTvsPrediction,fraction=0.046,pad=0.04)
plt.title('Target vs. Prediction',fontsize=15)
plt.xlabel('NRMSE: '+str(round(pdResults.loc[file]["Prediction v. GT NRMSE"],3))+', PSNR: '+str(round(pdResults.loc[file]["Prediction v. GT PSNR"],3)),fontsize=14)
plt.savefig(QC_model_path+'/Quality Control/QC_example_data.png', bbox_inches='tight', pad_inches=0)
print('--------------------------------------------')
pdResults.head()
# Export pdf with summary of QC results
qc_pdf_export()
```
# **6. Using the trained model**
---
<font size = 4>In this section, the unseen data is processed using the model trained in section 4. First, your unseen images are uploaded and prepared for prediction. Your trained model from section 4 is then applied to these images and the outputs are saved into your Google Drive.
## **6.1 Generate image prediction and localizations from unseen dataset**
---
<font size = 4>The current trained model (from section 4.2) can now be used to process images. If you want to use an older model, untick the **Use_the_current_trained_model** box and enter the name and path of the model to use. Predicted output images are saved in your **Result_folder** folder as restored image stacks (ImageJ-compatible TIFF images).
<font size = 4>**`Data_folder`:** This folder should contain the images that you want to use your trained network on for processing.
<font size = 4>**`Result_folder`:** This folder will contain the found localizations csv.
<font size = 4>**`batch_size`:** This parameter determines how many frames are processed in any single pass on the GPU. A higher `batch_size` will make the prediction faster but will use more GPU memory. If an OutOfMemory (OOM) error occurs, decrease the `batch_size`. **DEFAULT: 4**
<font size = 4>**`threshold`:** This parameter determines the threshold for local-maximum finding. The value is expected to lie in the range **[0,1]**. A higher `threshold` will result in fewer localizations. **DEFAULT: 0.1**
<font size = 4>**`neighborhood_size`:** This parameter determines the size of the neighborhood within which the prediction needs to be a local maximum, in recovered pixels (CCD pixel size / upsampling_factor). A high `neighborhood_size` will make the prediction slower and can discard nearby localizations. **DEFAULT: 3**
<font size = 4>**`use_local_average`:** This parameter determines whether to locally average the prediction in a 3x3 neighborhood to obtain the final localizations. If set to **True**, it will make inference slightly slower, depending on the size of the FOV. **DEFAULT: True**
```
# ------------------------------- User input -------------------------------
#@markdown ### Data parameters
Data_folder = "" #@param {type:"string"}
Result_folder = "" #@param {type:"string"}
#@markdown Get pixel size from file?
get_pixel_size_from_file = True #@param {type:"boolean"}
#@markdown Otherwise, use this value (in nm):
pixel_size = 100 #@param {type:"number"}
#@markdown ### Model parameters
#@markdown Do you want to use the model you just trained?
Use_the_current_trained_model = True #@param {type:"boolean"}
#@markdown Otherwise, please provide path to the model folder below
prediction_model_path = "" #@param {type:"string"}
#@markdown ### Prediction parameters
batch_size = 4#@param {type:"integer"}
#@markdown ### Post processing parameters
threshold = 0.1#@param {type:"number"}
neighborhood_size = 3#@param {type:"integer"}
#@markdown Do you want to locally average the model output with CoG estimator ?
use_local_average = True #@param {type:"boolean"}
if get_pixel_size_from_file:
pixel_size = None
if (Use_the_current_trained_model):
prediction_model_path = os.path.join(model_path, model_name)
if os.path.exists(prediction_model_path):
print("The "+os.path.basename(prediction_model_path)+" model will be used.")
else:
print(bcolors.WARNING+'!! WARNING: The chosen model does not exist !!'+bcolors.NORMAL)
print('Please make sure you provide a valid model path before proceeding further.')
# inform user whether local averaging is being used
if use_local_average == True:
print('Using local averaging')
if not os.path.exists(Result_folder):
print('Result folder was created.')
os.makedirs(Result_folder)
# ------------------------------- Run predictions -------------------------------
start = time.time()
#%% This script tests the trained fully convolutional network based on the
# saved training weights, and normalization created using train_model.
if os.path.isdir(Data_folder):
for filename in list_files(Data_folder, 'tif'):
# run the testing/reconstruction process
print("------------------------------------")
print("Running prediction on: "+ filename)
batchFramePredictionLocalization(Data_folder, filename, prediction_model_path, Result_folder,
batch_size,
threshold,
neighborhood_size,
use_local_average,
pixel_size = pixel_size)
elif os.path.isfile(Data_folder):
batchFramePredictionLocalization(os.path.dirname(Data_folder), os.path.basename(Data_folder), prediction_model_path, Result_folder,
batch_size,
threshold,
neighborhood_size,
use_local_average,
pixel_size = pixel_size)
print('--------------------------------------------------------------------')
# Displaying the time elapsed for training
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
# ------------------------------- Interactive display -------------------------------
print('--------------------------------------------------------------------')
print('---------------------------- Previews ------------------------------')
print('--------------------------------------------------------------------')
if os.path.isdir(Data_folder):
@interact
def show_QC_results(file = list_files(Data_folder, 'tif')):
plt.figure(figsize=(15,7.5))
# Wide-field
plt.subplot(1,2,1)
plt.axis('off')
img_Source = io.imread(os.path.join(Result_folder, 'Widefield_'+file))
plt.imshow(img_Source, norm = simple_norm(img_Source, percent = 99.5))
plt.title('Widefield', fontsize=15)
# Prediction
plt.subplot(1,2,2)
plt.axis('off')
img_Prediction = io.imread(os.path.join(Result_folder, 'Predicted_'+file))
plt.imshow(img_Prediction, norm = simple_norm(img_Prediction, percent = 99.5))
plt.title('Predicted',fontsize=15)
if os.path.isfile(Data_folder):
plt.figure(figsize=(15,7.5))
# Wide-field
plt.subplot(1,2,1)
plt.axis('off')
img_Source = io.imread(os.path.join(Result_folder, 'Widefield_'+os.path.basename(Data_folder)))
plt.imshow(img_Source, norm = simple_norm(img_Source, percent = 99.5))
plt.title('Widefield', fontsize=15)
# Prediction
plt.subplot(1,2,2)
plt.axis('off')
img_Prediction = io.imread(os.path.join(Result_folder, 'Predicted_'+os.path.basename(Data_folder)))
plt.imshow(img_Prediction, norm = simple_norm(img_Prediction, percent = 99.5))
plt.title('Predicted',fontsize=15)
```
## **6.2 Drift correction**
---
<font size = 4>The visualization above is the raw output of the network, displayed at the `upsampling_factor` chosen during model training. The display is a preview without any drift correction applied. This section performs drift correction by using cross-correlation between time bins to estimate the drift.
<font size = 4>**`Loc_file_path`:** is the path to the localization file to use for visualization.
<font size = 4>**`original_image_path`:** is the path to the original image. This only serves to extract the original image size and pixel size to shape the visualization properly.
<font size = 4>**`visualization_pixel_size`:** This parameter corresponds to the pixel size to use for the image reconstructions used for the drift-correction estimation (in **nm**). A smaller pixel size will be more precise but will take longer to compute. **DEFAULT: 20**
<font size = 4>**`number_of_bins`:** This parameter defines how many temporal bins are used across the full dataset. All localizations in each bin are used to build an image. This image is used to find the drift with respect to the image obtained from the very first bin. A typical value would correspond to about 500 frames per bin (a short sketch of this default is given below). **DEFAULT: Total number of frames / 500**
<font size = 4>**`polynomial_fit_degree`:** The drift obtained for each temporal bin needs to be interpolated to every single frame. This is performed by a polynomial fit, the degree of which is defined here. **DEFAULT: 4**
<font size = 4> The drift-corrected localization data is automatically saved in the `save_path` folder.
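<font size = 4>A rough sketch of how this default could be computed (illustrative only; `nFrames` stands for the total number of frames, which in practice is read from the localization file as in the cell below):
```
# Illustrative sketch: pick number_of_bins so that each bin holds about 500 frames
import math
nFrames = 25000                                    # example value; in practice nFrames = max(LocData['frame'])
number_of_bins = max(1, math.ceil(nFrames / 500))  # -> 50 for this example
print('Suggested number_of_bins:', number_of_bins)
```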
```
# @markdown ##Data parameters
Loc_file_path = "" #@param {type:"string"}
# @markdown Provide information about original data. Get the info automatically from the raw data?
Get_info_from_file = True #@param {type:"boolean"}
# Loc_file_path = "/content/gdrive/My Drive/Colab notebooks testing/DeepSTORM/Glia data from CL/Results from prediction/20200615-M6 with CoM localizations/Localizations_glia_actin_2D - 1-500fr_avg.csv" #@param {type:"string"}
original_image_path = "" #@param {type:"string"}
# @markdown Otherwise, please provide image width, height (in pixels) and pixel size (in nm)
image_width = 256#@param {type:"integer"}
image_height = 256#@param {type:"integer"}
pixel_size = 100 #@param {type:"number"}
# @markdown ##Drift correction parameters
visualization_pixel_size = 20#@param {type:"number"}
number_of_bins = 50#@param {type:"integer"}
polynomial_fit_degree = 4#@param {type:"integer"}
# @markdown ##Saving parameters
save_path = '' #@param {type:"string"}
# Let's go !
start = time.time()
# Get info from the raw file if selected
if Get_info_from_file:
pixel_size, image_width, image_height = getPixelSizeTIFFmetadata(original_image_path, display=True)
# Read the localizations in
LocData = pd.read_csv(Loc_file_path)
# Calculate a few variables
Mhr = int(math.ceil(image_height*pixel_size/visualization_pixel_size))
Nhr = int(math.ceil(image_width*pixel_size/visualization_pixel_size))
nFrames = max(LocData['frame'])
x_max = max(LocData['x [nm]'])
y_max = max(LocData['y [nm]'])
image_size = (Mhr, Nhr)
n_locs = len(LocData.index)
print('Image size: '+str(image_size))
print('Number of frames in data: '+str(nFrames))
print('Number of localizations in data: '+str(n_locs))
blocksize = math.ceil(nFrames/number_of_bins)
print('Number of frames per block: '+str(blocksize))
blockDataFrame = LocData[(LocData['frame'] < blocksize)].copy()
xc_array = blockDataFrame['x [nm]'].to_numpy(dtype=np.float32)
yc_array = blockDataFrame['y [nm]'].to_numpy(dtype=np.float32)
# Preparing the Reference image
photon_array = np.ones(yc_array.shape[0])
sigma_array = np.ones(yc_array.shape[0])
ImageRef = FromLoc2Image_SimpleHistogram(xc_array, yc_array, image_size = image_size, pixel_size = visualization_pixel_size)
ImagesRef = np.rot90(ImageRef, k=2)
xDrift = np.zeros(number_of_bins)
yDrift = np.zeros(number_of_bins)
filename_no_extension = os.path.splitext(os.path.basename(Loc_file_path))[0]
with open(os.path.join(save_path, filename_no_extension+"_DriftCorrectionData.csv"), "w", newline='') as file:
writer = csv.writer(file)
# Write the header in the csv file
writer.writerow(["Block #", "x-drift [nm]","y-drift [nm]"])
for b in tqdm(range(number_of_bins)):
blockDataFrame = LocData[(LocData['frame'] >= (b*blocksize)) & (LocData['frame'] < ((b+1)*blocksize))].copy()
xc_array = blockDataFrame['x [nm]'].to_numpy(dtype=np.float32)
yc_array = blockDataFrame['y [nm]'].to_numpy(dtype=np.float32)
photon_array = np.ones(yc_array.shape[0])
sigma_array = np.ones(yc_array.shape[0])
ImageBlock = FromLoc2Image_SimpleHistogram(xc_array, yc_array, image_size = image_size, pixel_size = visualization_pixel_size)
XC = fftconvolve(ImagesRef, ImageBlock, mode = 'same')
yDrift[b], xDrift[b] = subPixelMaxLocalization(XC, method = 'CoM')
# saveAsTIF(save_path, 'ImageBlock'+str(b), ImageBlock, visualization_pixel_size)
# saveAsTIF(save_path, 'XCBlock'+str(b), XC, visualization_pixel_size)
writer.writerow([str(b), str((xDrift[b]-xDrift[0])*visualization_pixel_size), str((yDrift[b]-yDrift[0])*visualization_pixel_size)])
print('--------------------------------------------------------------------')
# Displaying the time elapsed for training
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
print('Fitting drift data...')
bin_number = np.arange(number_of_bins)*blocksize + blocksize/2
xDrift = (xDrift-xDrift[0])*visualization_pixel_size
yDrift = (yDrift-yDrift[0])*visualization_pixel_size
xDriftCoeff = np.polyfit(bin_number, xDrift, polynomial_fit_degree)
yDriftCoeff = np.polyfit(bin_number, yDrift, polynomial_fit_degree)
xDriftFit = np.poly1d(xDriftCoeff)
yDriftFit = np.poly1d(yDriftCoeff)
bins = np.arange(nFrames)
xDriftInterpolated = xDriftFit(bins)
yDriftInterpolated = yDriftFit(bins)
# ------------------ Displaying the image results ------------------
plt.figure(figsize=(15,10))
plt.plot(bin_number,xDrift, 'r+', label='x-drift')
plt.plot(bin_number,yDrift, 'b+', label='y-drift')
plt.plot(bins,xDriftInterpolated, 'r-', label='x-drift (fit)')
plt.plot(bins,yDriftInterpolated, 'b-', label='y-drift (fit)')
plt.title('Cross-correlation estimated drift')
plt.ylabel('Drift [nm]')
plt.xlabel('Bin number')
plt.legend();
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:", hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
# ------------------ Actual drift correction -------------------
print('Correcting localization data...')
xc_array = LocData['x [nm]'].to_numpy(dtype=np.float32)
yc_array = LocData['y [nm]'].to_numpy(dtype=np.float32)
frames = LocData['frame'].to_numpy(dtype=np.int32)
xc_array_Corr, yc_array_Corr = correctDriftLocalization(xc_array, yc_array, frames, xDriftInterpolated, yDriftInterpolated)
ImageRaw = FromLoc2Image_SimpleHistogram(xc_array, yc_array, image_size = image_size, pixel_size = visualization_pixel_size)
ImageCorr = FromLoc2Image_SimpleHistogram(xc_array_Corr, yc_array_Corr, image_size = image_size, pixel_size = visualization_pixel_size)
# ------------------ Displaying the image results ------------------
plt.figure(figsize=(15,7.5))
# Raw
plt.subplot(1,2,1)
plt.axis('off')
plt.imshow(ImageRaw, norm = simple_norm(ImageRaw, percent = 99.5))
plt.title('Raw', fontsize=15);
# Corrected
plt.subplot(1,2,2)
plt.axis('off')
plt.imshow(ImageCorr, norm = simple_norm(ImageCorr, percent = 99.5))
plt.title('Corrected',fontsize=15);
# ------------------ Table with info -------------------
driftCorrectedLocData = pd.DataFrame()
driftCorrectedLocData['frame'] = frames
driftCorrectedLocData['x [nm]'] = xc_array_Corr
driftCorrectedLocData['y [nm]'] = yc_array_Corr
driftCorrectedLocData['confidence [a.u]'] = LocData['confidence [a.u]']
driftCorrectedLocData.to_csv(os.path.join(save_path, filename_no_extension+'_DriftCorrected.csv'))
print('-------------------------------')
print('Corrected localizations saved.')
```
## **6.3 Visualization of the localizations**
---
<font size = 4>The visualization in section 6.1 is the raw output of the network and displayed at the `upsampling_factor` chosen during model training. This section performs visualization of the result by plotting the localizations as a simple histogram.
<font size = 4>**`Loc_file_path`:** is the path to the localization file to use for visualization.
<font size = 4>**`original_image_path`:** is the path to the original image. This only serves to extract the original image size and pixel size to shape the visualization properly.
<font size = 4>**`visualization_pixel_size`:** This parameter corresponds to the pixel size to use for the final image reconstruction (in **nm**). **DEFAULT: 10**
<font size = 4>**`visualization_mode`:** This parameter defines what visualization method is used to visualize the final image. NOTES: The Integrated Gaussian can be quite slow. **DEFAULT: Simple histogram.**
```
# @markdown ##Data parameters
Use_current_drift_corrected_localizations = True #@param {type:"boolean"}
# @markdown Otherwise provide a localization file path
Loc_file_path = "" #@param {type:"string"}
# @markdown Provide information about original data. Get the info automatically from the raw data?
Get_info_from_file = True #@param {type:"boolean"}
# Loc_file_path = "/content/gdrive/My Drive/Colab notebooks testing/DeepSTORM/Glia data from CL/Results from prediction/20200615-M6 with CoM localizations/Localizations_glia_actin_2D - 1-500fr_avg.csv" #@param {type:"string"}
original_image_path = "" #@param {type:"string"}
# @markdown Otherwise, please provide image width, height (in pixels) and pixel size (in nm)
image_width = 256#@param {type:"integer"}
image_height = 256#@param {type:"integer"}
pixel_size = 100#@param {type:"number"}
# @markdown ##Visualization parameters
visualization_pixel_size = 10#@param {type:"number"}
visualization_mode = "Simple histogram" #@param ["Simple histogram", "Integrated Gaussian (SLOW!)"]
if not Use_current_drift_corrected_localizations:
filename_no_extension = os.path.splitext(os.path.basename(Loc_file_path))[0]
if Get_info_from_file:
pixel_size, image_width, image_height = getPixelSizeTIFFmetadata(original_image_path, display=True)
if Use_current_drift_corrected_localizations:
LocData = driftCorrectedLocData
else:
LocData = pd.read_csv(Loc_file_path)
Mhr = int(math.ceil(image_height*pixel_size/visualization_pixel_size))
Nhr = int(math.ceil(image_width*pixel_size/visualization_pixel_size))
nFrames = max(LocData['frame'])
x_max = max(LocData['x [nm]'])
y_max = max(LocData['y [nm]'])
image_size = (Mhr, Nhr)
print('Image size: '+str(image_size))
print('Number of frames in data: '+str(nFrames))
print('Number of localizations in data: '+str(len(LocData.index)))
xc_array = LocData['x [nm]'].to_numpy()
yc_array = LocData['y [nm]'].to_numpy()
if (visualization_mode == 'Simple histogram'):
locImage = FromLoc2Image_SimpleHistogram(xc_array, yc_array, image_size = image_size, pixel_size = visualization_pixel_size)
elif (visualization_mode == 'Shifted histogram'):
print(bcolors.WARNING+'Method not implemented yet!'+bcolors.NORMAL)
locImage = np.zeros(image_size)
elif (visualization_mode == 'Integrated Gaussian (SLOW!)'):
photon_array = np.ones(xc_array.shape)
sigma_array = np.ones(xc_array.shape)
locImage = FromLoc2Image_Erf(xc_array, yc_array, photon_array, sigma_array, image_size = image_size, pixel_size = visualization_pixel_size)
print('--------------------------------------------------------------------')
# Displaying the time elapsed for training
dt = time.time() - start
minutes, seconds = divmod(dt, 60)
hours, minutes = divmod(minutes, 60)
print("Time elapsed:",hours, "hour(s)",minutes,"min(s)",round(seconds),"sec(s)")
# Display
plt.figure(figsize=(20,10))
plt.axis('off')
# plt.imshow(locImage, cmap='gray');
plt.imshow(locImage, norm = simple_norm(locImage, percent = 99.5));
LocData.head()
# @markdown ---
# @markdown #Play this cell to save the visualization
# @markdown ####Please select a path to the folder where to save the visualization.
save_path = "" #@param {type:"string"}
if not os.path.exists(save_path):
os.makedirs(save_path)
print('Folder created.')
saveAsTIF(save_path, filename_no_extension+'_Visualization', locImage, visualization_pixel_size)
print('Image saved.')
```
## **6.4. Download your predictions**
---
<font size = 4>**Store your data** and ALL its results elsewhere by downloading them from Google Drive, and then clean the original folder tree (datasets, results, trained model, etc.) if you plan to train or use new networks. Please note that otherwise the notebook will **OVERWRITE** all files that have the same name.
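<font size = 4>A minimal sketch of one way to do this from within the notebook (assuming a Google Colab session; the folder path and archive name below are placeholders):
```
# Illustrative only: zip a results folder and download it from Colab
import shutil
from google.colab import files

folder_to_download = '/content/gdrive/My Drive/your_results_folder'   # placeholder path
archive_path = shutil.make_archive('DeepSTORM_results', 'zip', folder_to_download)
files.download(archive_path)
```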
# **7. Version log**
---
<font size = 4>**v1.13**:
* Sections 1 and 2 are now swapped for better export of *requirements.txt*.
* This version also includes a built-in version check and the version log that you are reading now.
---
#**Thank you for using Deep-STORM 2D!**
| github_jupyter |
# Microstructure classification using Neural Networks
In this example, we will generate microstructures of 4 different types with different grain sizes.
Then we will split the dataset into training and testing sets.
Finally, we will train the neural network using CrysX-NN to make predictions.
## Run the following cell for Google colab
then restart runtime
```
! pip install --upgrade --no-cache-dir https://github.com/manassharma07/crysx_nn/tarball/main
! pip install pymks
! pip install IPython==7.7.0
! pip install fsspec>=0.3.3
```
## Import necessary libraries
We will use PyMKS for generating artificial microstructures.
```
from pymks import (
generate_multiphase,
plot_microstructures,
# PrimitiveTransformer,
# TwoPointCorrelation,
# FlattenTransformer,
# GenericTransformer
)
import numpy as np
import matplotlib.pyplot as plt
# For GPU
import cupy as cp
```
## Define some parameters
like number of samples per type, the width and height of a microstructure image in pixels.
[For Google Colab, generating 10,000 samples of each type results in an out-of-memory error; 8,000 seems to work fine.]
```
nSamples_per_type = 10000
width = 100
height = 100
```
## Generate microstructures
The following code will generate microstructures of 4 different types.
The first type has 6 times more grain boundaries along the x-axis than the y-axis.
The second type has 4 times more grain boundaries along the y-axis than the x-axis.
The third type has the same number of grain boundaries along the x-axis and the y-axis.
The fourth type has 6 times more grain boundaries along the y-axis than the x-axis.
```
grain_sizes = [(30, 5), (10, 40), (15, 15), (5, 30)]
seeds = [10, 99, 4, 36]
data_synth = np.concatenate([
generate_multiphase(shape=(nSamples_per_type, width, height), grain_size=grain_size,
volume_fraction=(0.5, 0.5),
percent_variance=0.2,
seed=seed
)
for grain_size, seed in zip(grain_sizes, seeds)
])
```
## Plot a microstructure of each type
```
plot_microstructures(*data_synth[::nSamples_per_type+0], colorbar=True)
# plot_microstructures(*data_synth[::nSamples_per_type+1], colorbar=True)
# plot_microstructures(*data_synth[::nSamples_per_type+2], colorbar=True)
# plot_microstructures(*data_synth[::nSamples_per_type+3], colorbar=True)
#plt.savefig("Microstructures.png",dpi=600,transparent=True)
plt.show()
```
## Check the shape of the data generated
The first dimension corresponds to the total number of samples, the second and third axes are for width and height.
```
# Print shape of the array
print(data_synth.shape)
print(type(data_synth))
```
## Rename the generated data --> `X_data` as it is the input data
```
X_data = np.array(data_synth)
print(X_data.shape)
```
## Create the target/true labels for the data
The microstructure data we have generated is such that the samples of different types are grouped together. Furthermore, their order is the same as the one we provided when generating the data.
Therefore, we can generate the true labels quite easily by making a numpy array whose first `nSamples_per_type` elements correspond to type 0, and so on up to type 3.
```
Y_data = np.concatenate([np.ones(nSamples_per_type)*0,np.ones(nSamples_per_type)*1,np.ones(nSamples_per_type)*2,np.ones(nSamples_per_type)*3])
print(Y_data)
print(Y_data.shape)
```
## Plot some samples taken from the data randomly as well as their labels that we created for confirmation
```
rng = np.random.default_rng()
### Plot examples
fig, axes = plt.subplots(nrows=2, ncols=6, figsize=(15., 6.))
for axes_row in axes:
for ax in axes_row:
test_index = rng.integers(0, len(Y_data))
image = X_data[test_index]
orig_label = Y_data[test_index]
ax.set_axis_off()
ax.imshow(image)
ax.set_title('True: %i' % orig_label)
```
## Use sklearn to split the data into train and test set
```
from sklearn.model_selection import train_test_split
# Split into train and test
X_train_orig, X_test_orig, Y_train_orig, Y_test_orig = train_test_split(X_data, Y_data, test_size=0.20, random_state=1)
```
## Some statistics of the training data
```
print('Training data MIN',X_train_orig.min())
print('Training data MAX',X_train_orig.max())
print('Training data MEAN',X_train_orig.mean())
print('Training data STD',X_train_orig.std())
```
## Check some shapes
```
print(X_train_orig.shape)
print(Y_train_orig.shape)
print(X_test_orig.shape)
print(Y_test_orig.shape)
```
## Flatten the input pixel data by reshaping each sample's 2D array of size `100x100` to a 1D array of size `100*100`
```
X_train = X_train_orig.reshape(X_train_orig.shape[0], width*height)
X_test = X_test_orig.reshape(X_test_orig.shape[0], width*height)
```
## Check the shapes
```
print(X_train.shape)
print(X_test.shape)
```
## Use a utility from CrysX-NN to one-hot encode the target/true labels
This means that a sample of type 3 will be represented as the array [0,0,0,1], as illustrated below.
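As a quick illustration of what this encoding does (plain NumPy, independent of the CrysX-NN utility used below; the labels here are made up):
```
# Illustration only: one-hot encode integer class labels with plain NumPy
import numpy as np
labels = np.array([0, 3, 1, 2])      # example class labels
one_hot = np.eye(4)[labels]          # 4 classes -> one identity row per label
print(one_hot)
# [[1. 0. 0. 0.]
#  [0. 0. 0. 1.]
#  [0. 1. 0. 0.]
#  [0. 0. 1. 0.]]
```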
```
from crysx_nn import mnist_utils as mu
Y_train = mu.one_hot_encode(Y_train_orig, 4)
Y_test = mu.one_hot_encode(Y_test_orig, 4)
print(Y_train.shape)
print(Y_test.shape)
```
## Standardize the training and testing input data using the mean and standard deviation of the training data
```
X_train = (X_train - np.mean(X_train_orig)) / np.std(X_train_orig)
X_test = (X_test - np.mean(X_train_orig)) / np.std(X_train_orig)
# Some statistics after standardization
print('Training data MIN',X_train.min())
print('Training data MAX',X_train.max())
print('Training data MEAN',X_train.mean())
print('Training data STD',X_train.std())
print('Testing data MIN',X_test.min())
print('Testing data MAX',X_test.max())
print('Testing data MEAN',X_test.mean())
print('Testing data STD',X_test.std())
```
## Finally we will begin creating a neural network
Set some important parameters for the Neural Network.
**Note**: In some cases I got NaN values while training. The issue could be circumvented by choosing a different batch size.
```
nInputs = width*height # No. of nodes in the input layer
neurons_per_layer = [500, 4] # Neurons per layer (excluding the input layer)
activation_func_names = ['ReLU', 'Softmax']
nLayers = len(neurons_per_layer)
nEpochs = 4
batchSize = 32 # No. of input samples to process at a time for optimization
```
## Create the neural network model
Use the parameters defined above to create the model.
```
from crysx_nn import network
model = network.nn_model(nInputs=nInputs, neurons_per_layer=neurons_per_layer, activation_func_names=activation_func_names, batch_size=batchSize, device='GPU', init_method='Xavier')
model.lr = 0.02
```
## Check the details of the Neural Network
```
model.details()
```
## Visualize the neural network
```
model.visualize()
```
## Begin optimization/training
We will use `float32` precision, so convert the input and output arrays.
We will use Categorical Cross Entropy for the loss function.
```
inputs = cp.array(X_train.astype(np.float32))
outputs = cp.array(Y_train.astype(np.float32))
# Run optimization
# model.optimize(inputs, outputs, lr=0.02,nEpochs=nEpochs,loss_func_name='CCE', miniterEpoch=1, batchProgressBar=True, miniterBatch=100)
# To get accuracies at each epoch
model.optimize(inputs, outputs, lr=0.02,nEpochs=nEpochs,loss_func_name='CCE', miniterEpoch=1, batchProgressBar=True, miniterBatch=100, get_accuracy=True)
```
## Error at each epoch
```
print(model.errors)
```
## Accuracy at each epoch
```
print(model.accuracy)
```
## Save model weights and biases
```
# Save weights
model.save_model_weights('NN_crysx_microstructure_96_weights_cupy')
# Save biases
model.save_model_biases('NN_crysx_microstructure_96_biases_cupy')
```
## Load model weights and biases from files
```
model.load_model_weights('NN_crysx_microstructure_96_weights_cupy')
model.load_model_biases('NN_crysx_microstructure_96_biases_cupy')
```
## Performance on Test data
```
## Convert to float32 arrays
inputs = cp.array(X_test.astype(np.float32))
outputs = cp.array(Y_test.astype(np.float32))
# predictions, error = model.predict(inputs, outputs, loss_func_name='BCE')
# print('Error:',error)
# print(predictions)
predictions, error, accuracy = model.predict(inputs, outputs, loss_func_name='CCE', get_accuracy=True)
print('Error:',error)
print('Accuracy %:',accuracy*100)
```
## Confusion matrix
```
from crysx_nn import utils
# Convert predictions to numpy array for using the utility function
predictions = cp.asnumpy(predictions)
# Get the indices of the maximum probabilities for each sample in the predictions array
pred_type = np.argmax(predictions, axis=1)
# Get the digit index from the one-hot encoded array
true_type = np.argmax(Y_test, axis=1)
# Calculate the confusion matrix
cm = utils.compute_confusion_matrix(pred_type, true_type)
print('Confusion matrix:\n',cm)
# Plot the confusion matrix
utils.plot_confusion_matrix(cm)
```
## Draw some random images from the test dataset and compare the true labels to the network outputs
```
### Draw some random images from the test dataset and compare the true labels to the network outputs
fig, axes = plt.subplots(nrows=2, ncols=6, figsize=(15., 6.))
### Loop over subplots
for axes_row in axes:
for ax in axes_row:
### Draw the images
test_index = rng.integers(0, len(Y_test_orig))
image = X_test[test_index].reshape(width, height) # Use X_test instead of X_test_orig as X_test_orig is not standardized
orig_label = Y_test_orig[test_index]
### Compute the predictions
input_array = cp.array(image.reshape([1,width*height]))
output = model.predict(input_array)
# Get the maximum probability
certainty = np.max(output)
# Get the index of the maximum probability
output = np.argmax(output)
### Show image
ax.set_axis_off()
ax.imshow(image)
ax.set_title('True: %i, predicted: %i\nat %f ' % (orig_label, output, certainty*100))
```
| github_jupyter |
# <center> Pandas*</center>
*pandas is short for Python Data Analysis Library
<img src="https://welovepandas.club/wp-content/uploads/2019/02/panda-bamboo1550035127.jpg" height=350 width=400>
```
import pandas as pd
```
In pandas you need to work with DataFrames and Series. According to [the documentation of pandas](https://pandas.pydata.org/pandas-docs/stable/):
* **DataFrame**: Two-dimensional, size-mutable, potentially heterogeneous tabular data. Data structure also contains labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pandas data structure.
* **Series**: One-dimensional ndarray with axis labels (including time series).
```
pd.Series([5, 6, 7, 8, 9, 10])
pd.DataFrame([1])
some_data = {'Student': ['1', '2'], 'Name': ['Alice', 'Michael'], 'Surname': ['Brown', 'Williams']}
pd.DataFrame(some_data)
some_data = [{'Student': ['1', '2'], 'Name': ['Alice', 'Michael'], 'Surname': ['Brown', 'Williams']}]
pd.DataFrame(some_data)
pd.DataFrame([{'Student': '1', 'Name': 'Alice', 'Surname': 'Brown'},
{'Student': '2', 'Name': 'Anna', 'Surname': 'White'}])
```
Check how to create it:
* pd.DataFrame().from_records()
* pd.DataFrame().from_dict()
```
pd.DataFrame.from_records(some_data)
pd.DataFrame.from_dict(some_data[0])  # from_dict expects a dict, so take the dict out of the list
```
This data set is too big for github, download it from [here](https://www.kaggle.com/START-UMD/gtd). You will need to register on Kaggle first.
```
df = pd.read_csv('globalterrorismdb_0718dist.csv', encoding='ISO-8859-1')
```
Let's explore the second set of data. How many rows and columns are there?
```
df.shape
```
General information on this data set:
```
df.info()
```
Let's take a look at the dataset information. In `.info()`, you can pass additional parameters (an example call is shown after the list), including:
* **verbose**: whether to print information about the DataFrame in full (if the table is very large, then some information may be lost);
* **memory_usage**: whether to print memory consumption (the default is True, but you can put either False, which will remove memory consumption, or 'deep', which will calculate the memory consumption more accurately);
* **null_counts**: Whether to count the number of empty elements (default is True).
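An example call with these parameters (note that in recent pandas versions `null_counts` has been renamed to `show_counts`):
```
# Example: full column listing, accurate memory usage, and null counts
# (on newer pandas versions use show_counts=True instead of null_counts=True)
df.info(verbose=True, memory_usage='deep', null_counts=True)
```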
```
df.describe()
df.describe(include=['object', 'int'])
```
The describe method shows the basic statistical characteristics of the data for each numeric feature (int64 and float64 types): the number of non-missing values, mean, standard deviation, range, median, 0.25 and 0.75 quartiles.
How to look only at the column names, index:
```
df.columns
df.index
```
How to look at the first 10 lines?
```
df.head(10)
```
How to look at the last 15 lines?
```
df.tail(15)
```
How to request only one particular line (by counting lines)?
```
df.head(4)
#the first 3 lines
df.iloc[:3] # the number of rows by counting them
```
How to request only one particular line by its index?
```
# the first lines till the row with the index 3
df.loc[:3] # 3 is treated as an index
```
Look only at the unique values of some columns.
```
list(df['city'].unique())
```
How many unique values are there in the ```city``` column? That is, on how many cities does this data set hold information about terrorist attacks?
```
df['city'].nunique()
```
In what years did the largest number of terrorist attacks occur (according only to this data set)?
```
df['iyear'].value_counts().head(5)
df['iyear'].value_counts()[:5]
```
How can we sort all the data by year in descending order?
```
df['iyear'].sort_values()
df.sort_values(by='iyear', ascending=False)
```
Which data types do we have in each column?
```
dict(df.dtypes)
```
How to check for missing values?
```
df
df.isna()
dict(df.isna().sum())
df.dropna(axis=1)
df.head(5)
df['attacktype2'].min()
df['attacktype2'].max()
df['attacktype2'].mode()
df['attacktype2'].median()
df['attacktype2'].mean()
df['attacktype2'].fillna(df['attacktype2'].mode()[0])  # use the first modal value, not the whole Series
```
Let's delete the column ```approxdate``` from this data set, because it contains a lot of missing values:
```
df.drop(['approxdate'], axis=1, inplace=True)
```
Create a new variable ```casualties``` by summing up the values in ```nkill``` and ```nwound```.
```
set(df.columns)
df['casualties'] = df['nwound'] + df['nkill']
df.head()
```
Rename a column ```iyear``` to ```Year```:
```
df.rename({'iyear' : 'Year'}, axis='columns', inplace=True)
df
```
How to drop all missing values? Replace these missing values with others?
```
df.dropna(inplace=True)
```
**Task!** Use a function to replace NaNs (= missing values) with the string 'None' in the ```related``` column.
```
# TODO
```
For the selected columns, show their mean, median (and/or mode).
```
df['Year'].mean()
```
Min, max and sum:
```
df['Year'].sum()
sum(df['Year'])
max('word')
```
Filter the dataset to look only at the attacks after 2015 year
```
df[df.Year > 2015]
```
What if we have several conditions? Try it out
```
df[(df.Year > 2015) & (df.extended == 1)]
```
Additional materials:
* https://www.kaggle.com/START-UMD/gtd/code?datasetId=504&sortBy=voteCount
| github_jupyter |
# Tips
### Introduction:
This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html)
The dataset being used is tips from Seaborn.
### Step 1. Import the necessary libraries:
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```
### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv).
### Step 3. Assign it to a variable called tips
```
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url, index_col=0)
tips.reset_index()
tips
```
### Step 4. Delete the Unnamed 0 column
```
# already done
```
### Step 5. Plot the total_bill column histogram
```
sns.set(style='white')
sns.set_context(rc = {'patch.linewidth': 2.0})
ax = sns.histplot(tips['total_bill'], kde=True, stat='density')
ax.set(xlabel='Value', ylabel='Frequency')
ax.set_title('Total Bill', size=15)
sns.despine();
# Original solution:
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set lables and titles
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
```
### Step 6. Create a scatter plot presenting the relationship between total_bill and tip
```
sns.jointplot(x=tips['total_bill'], y=tips['tip'], xlim=(0, 60), ylim=(0, 12));
```
### Step 7. Create one image with the relationship of total_bill, tip and size.
#### Hint: It is just one function.
```
sns.pairplot(data=tips[['total_bill', 'tip', 'size']]);
# Original solution:
#
# sns.pairplot(tips)
```
### Step 8. Present the relationship between days and total_bill value
```
sns.set_style('whitegrid')
plt.figure(figsize=(8, 6))
ax = sns.stripplot(x=tips['day'], y=tips['total_bill'])
ax.set_ylim(0, 60);
# Original solution:
#
# sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
# What a "jitter" is (for demonstration purposes):
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = 0.4);
```
### Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differ the dots by sex
```
sns.set_style("whitegrid")
plt.figure(figsize=(8, 6))
ax = sns.scatterplot(data=tips, x='tip', y='day', hue='sex');
ax.yaxis.grid(False)
ax.legend(title='Sex', framealpha = 1, edgecolor='w');
# Original solution:
#
# sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
```
### Step 10. Create a box plot presenting the total_bill per day, differentiating by the time (Dinner or Lunch)
```
plt.figure(figsize=(12, 6))
sns.boxplot(data=tips, x='day', y='total_bill', hue='time');
```
### Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
```
sns.set_style('ticks')
g = sns.FacetGrid(data=tips, col='time')
g.map(sns.histplot, 'tip', bins=10)
g.set(xlim=(0, 12), ylim=(0, 60), xticks=range(0, 13, 2), yticks=range(0, 61, 10));
sns.despine();
# Original solution:
#
# # better seaborn style
# sns.set(style = "ticks")
# # creates FacetGrid
# g = sns.FacetGrid(tips, col = "time")
# g.map(plt.hist, "tip");
```
### Step 12. Create two scatter plots, one for Male and another for Female, presenting the relationship between total_bill and tip, differentiated by smoker or non-smoker
### They must be side by side.
```
g = sns.FacetGrid(data=tips, col='sex')
g.map_dataframe(sns.scatterplot, x='total_bill', y='tip', hue='smoker')
g.add_legend(title='Smoker')
g.set_axis_labels('Total bill', 'Tip')
g.set(xlim=(0, 60), ylim=(0, 12), xticks=range(0, 61, 10), yticks=range(0, 13, 2));
# Original solution:
#
# g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
# g.map(plt.scatter, "total_bill", "tip", alpha =.7)
# g.add_legend();
```
### BONUS: Create your own question and answer it using a graph.
```
g = sns.FacetGrid(data=tips, col='sex')
g.map(sns.kdeplot, 'total_bill');
sns.kdeplot(tips['total_bill'], hue=tips['sex']);
sns.histplot(data=tips, x='total_bill', hue='sex');
tips.groupby('sex')[['total_bill']].sum()
tips.groupby('sex')[['total_bill']].count()
males = tips[tips['sex'] == 'Male'].sample(87)
males.head()
females = tips[tips['sex'] == 'Female']
females.head()
new_tips = pd.concat([males, females]).reset_index()
new_tips.head()
sns.kdeplot(data=new_tips, x='total_bill', hue='sex');
sns.histplot(data=new_tips, x='total_bill', hue='sex');
g = sns.FacetGrid(data=new_tips, col='sex')
g.map(sns.scatterplot, 'total_bill', 'tip');
```
| github_jupyter |
# Matplotlib
Matplotlib is a powerful tool for generating scientific charts of various sorts.
This presentation only touches on some features of matplotlib. Please see
<a href="https://jakevdp.github.io/PythonDataScienceHandbook/index.html">
https://jakevdp.github.io/PythonDataScienceHandbook/index.html</a> or many other
resources for a more
detailed discussion.
The following notebook shows how to use matplotlib to examine a simple univariate function.
Please refer to the quick reference notebook for introductions to some of the methods used.
Note there are some FILL_IN_THE_BLANK placeholders where you are expected
to change the notebook to make it work. There may also be bugs purposefully
introduced in the code
samples which you will need to fix.
Consider the function
$$
f(x) = 0.1 * x ^ 2 + \sin(x+1) - 0.5
$$
What does it look like between -2 and 2?
```
# Import numpy and matplotlib modules
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
# Get x values between -2 and 2
xs = np.linspace(-2, 2, 21)
xs
# Compute array of f values for x values
fs = 0.2 * xs * xs + np.sin(xs + 1) - 0.5
fs
# Make a figure and plot x values against f values
fig = plt.figure()
ax = plt.axes()
ax.plot(xs, fs);
```
# Solving an equation
At what value of $x$ in $[-2, 2]$ does $f(x) = 0$?
Let's look at different plots for $f$ using functions to automate things.
```
def f(x):
return 0.2 * x ** 2 + np.sin(x + 1) - 0.5
def plot_f(low_x=-2, high_x=2, number_of_samples=30):
# Get an array of x values between low_x and high_x of length number_of_samples
xs = FILL_IN_THE_BLANK
fs = f(xs)
fig = plt.figure()
ax = plt.axes()
ax.plot(xs, fs);
plot_f()
plot_f(-1.5, 0.5)
```
# Interactive plots
We can make an interactive figure where we can try to locate the crossing point visually
```
from ipywidgets import interact
interact(plot_f, low_x=(-2.,2), high_x=(-2.,2))
# But we really should do it using an algorithm like binary search:
def find_x_at_zero(some_function, x_below_zero, x_above_zero, iteration_limit=10):
"""
Given f(x_below_zero)<=0 and f(x_above_zero) >= 0 iteratively use the
midpoint between the current boundary points to approximate f(x) == 0.
"""
for count in range(iteration_limit):
# check arguments
y_below_zero = some_function(x_below_zero)
assert y_below_zero < 0, "y_below_zero should stay at or below zero"
y_above_zero = some_function(x_above_zero)
assert y_above_zero < 0, "y_above_zero should stay at or above zero"
# get x in the middle of x_below and x_above
x_middle = 0.5 * (x_below_zero + x_above_zero)
f_middle = some_function(x_middle)
print(" at ", count, "looking at x=", x_middle, "with f(x)", f_middle)
if f_middle < 0:
FILL_IN_THE_BLANK
else:
FILL_IN_THE_BLANK
print ("final estimate after", iteration_limit, "iterations:")
print ("x at zero is between", x_below_zero, x_above_zero)
print ("with current f(x) at", f_middle)
find_x_at_zero(f, -2, 2)
# Exercise: For the following function:
def g(x):
return np.sqrt(x) + np.cos(x + 1) - 1
# Part1: Make a figure and plot x values against g(x) values
# Part 2: find an approximate value of x where g(x) is near 0.
# Part 3: Use LaTeX math notation to display the function g nicely formatted in a Markdown cell.
```
| github_jupyter |
```
import numpy as np
import sklearn
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report,confusion_matrix
import warnings
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
warnings.filterwarnings('ignore')
# Load data from numpy file
X = np.load('feat.npy')
y = np.load('label.npy').ravel()
# Split data into training and test subsets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)
# Simple SVM
print('fitting...')
clf = SVC(C=20.0, gamma=0.00001)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print("acc=%0.3f" % acc)
# Grid search for best parameters
# Set the parameters by cross-validation
tuned_parameters = [{'kernel': ['poly'], 'gamma': [1e-3, 1e-4, 1e-5],
'C': [1, 10 ,20,30,40,50]}]
scores = ['precision', 'recall']
for score in scores:
print("# Tuning hyper-parameters for %s" % score)
print('')
clf = GridSearchCV(SVC(), tuned_parameters, cv=2,
scoring='%s_macro' % score)
clf.fit(X_train, y_train)
print("Best parameters set found on development set:")
print('')
print(clf.best_params_)
print('')
print("Grid scores on development set:")
print('')
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r"
% (mean, std * 2, params))
print('')
print("Detailed classification report:")
print('')
print("The model is trained on the full development set.")
print("The scores are computed on the full evaluation set.")
print('')
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred))
print('')
labels = [0,1,2,3,4,5,6,7,8,9]
def cm_analysis(y_true, y_pred, filename, labels, ymap=None, figsize=(10,10)):
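# Builds a labeled confusion matrix: diagonal cells are annotated with the
# per-class percentage of correct predictions, off-diagonal cells with raw
# counts; the matrix is drawn as a seaborn heatmap and saved to `filename`.
# `ymap` can optionally remap label values to display names.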
if ymap is not None:
y_pred = [ymap[yi] for yi in y_pred]
y_true = [ymap[yi] for yi in y_true]
labels = [ymap[yi] for yi in labels]
cm = confusion_matrix(y_true, y_pred, labels=labels)
cm_sum = np.sum(cm, axis=1, keepdims=True)
cm_perc = cm / cm_sum.astype(float) * 100
annot = np.empty_like(cm).astype(str)
nrows, ncols = cm.shape
for i in range(nrows):
for j in range(ncols):
c = cm[i, j]
p = cm_perc[i, j]
if i == j:
s = cm_sum[i]
annot[i, j] = '%d' % (p)
elif c == 0:
annot[i, j] = ''
else:
annot[i, j] = '%d' % (c)
cm = pd.DataFrame(cm, index=labels, columns=labels)
cm.index.name = 'Actual'
cm.columns.name = 'Predicted accuracy'
fig, ax = plt.subplots(figsize=figsize)
sns.heatmap(cm, annot=annot, fmt='', ax=ax)
plt.savefig(filename)
cm_analysis(y_test,y_pred,"polynomial", labels, ymap=None, figsize=(10,10))
```
| github_jupyter |
# Leaf Rice Disease Detection using ResNet50V2 Architecture

# Taking Dataset from Drive
```
from google.colab import drive
drive.mount('/content/drive')
```
# Importing Libraries
```
import keras
from keras import Sequential
from keras.applications import MobileNetV2
from keras.layers import Dense
from keras.preprocessing import image
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from sklearn.metrics import classification_report, log_loss, accuracy_score
from sklearn.model_selection import train_test_split
directory = '/content/drive/MyDrive/rice'
```
# Target Class
```
Class=[]
for file in os.listdir(directory):
Class+=[file]
print(Class)
print(len(Class))
```
# Mapping the Images
```
Map=[]
for i in range(len(Class)):
Map = Map+[i]
normal_mapping=dict(zip(Class,Map))
reverse_mapping=dict(zip(Map,Class))
def mapper(value):
return reverse_mapping[value]
set1=[]
set2=[]
count=0
for i in Class:
path=os.path.join(directory,i)
t=0
for image in os.listdir(path):
if image[-4:]=='.jpg':
imagee=load_img(os.path.join(path,image), grayscale=False, color_mode='rgb', target_size=(100,100))
imagee=img_to_array(imagee)
imagee=imagee/255.0
if t<60:
set1.append([imagee,count])
else:
set2.append([imagee,count])
t+=1
count=count+1
```
# Dividing Data and Test
```
data, dataa=zip(*set1)
test, test_test=zip(*set2)
label=to_categorical(dataa)
X=np.array(data)
y=np.array(label)
labell=to_categorical(test_test)
test=np.array(test)
labell=np.array(labell)
print(len(y))
print(len(labell))
```
# Train Test Split
```
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print(X_train.shape,X_test.shape)
print(y_train.shape,y_test.shape)
```
# Image Generator
```
generator = ImageDataGenerator(horizontal_flip=True,vertical_flip=True,rotation_range=20,zoom_range=0.2,
width_shift_range=0.2,height_shift_range=0.2,shear_range=0.1,fill_mode="nearest")
```
# Calling Resnet50V2 Model
```
from tensorflow.keras.applications import ResNet50V2
resnet50v2 = ResNet50V2(input_shape=(100,100,3), include_top=False, weights='imagenet', pooling='avg')
resnet50v2.trainable = False
```
# Making Deep CNN Model
```
model_input = resnet50v2.input
# Stack the dense classification head on top of the frozen backbone
classifier = tf.keras.layers.Dense(128, activation='relu')(resnet50v2.output)
classifier = tf.keras.layers.Dense(64, activation='relu')(classifier)
classifier = tf.keras.layers.Dense(512, activation='relu')(classifier)
classifier = tf.keras.layers.Dense(128, activation='relu')(classifier)
classifier = tf.keras.layers.Dense(256, activation='relu')(classifier)
model_output = tf.keras.layers.Dense(3, activation='sigmoid')(classifier)
model = tf.keras.Model(inputs=model_input, outputs=model_output)
```
# Compiling with ADAM Optimizer and Binary Crossentropy Loss Function
```
model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
```
# Fitting the Dataset into Model
```
history=model.fit(generator.flow(X_train,y_train,batch_size=32),validation_data=(X_test,y_test),epochs=50)
```
# Prediction on Test Set
```
y_pred=model.predict(X_test)
y_pred=np.argmax(y_pred,axis=1)
y_test = np.argmax(y_test,axis=1)
```
# Confusion Matrix
```
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test,y_pred)
print(cm)
plt.subplots(figsize=(15,7))
sns.heatmap(cm, annot= True, linewidth=1, cmap="autumn_r")
```
# Accuracy
```
print("Accuracy : ",accuracy_score(y_test,y_pred))
```
# Classification Report
```
print(classification_report(y_test,y_pred))
```
# Loss vs Validation Loss Plot
```
import plotly.graph_objects as go
fig = go.Figure()
fig.add_trace(go.Scatter(y=history.history['loss'], name='Loss',
line=dict(color='royalblue', width=3)))
fig.add_trace(go.Scatter(y=history.history['val_loss'], name='Validation Loss',
line=dict(color='firebrick', width=2)))
```
# Accuracy vs Validation Accuracy Plot
```
fig = go.Figure()
fig.add_trace(go.Scatter(y=history.history['accuracy'], name='Accuracy',
line=dict(color='royalblue', width=3)))
fig.add_trace(go.Scatter(y=history.history['val_accuracy'], name='Validation Accuracy',
line=dict(color='firebrick', width=3)))
```
# Testing on some Random Images
```
image=load_img("/content/drive/MyDrive/rice/tungro/IMG_0852.jpg",target_size=(100,100))
imagee=load_img("/content/drive/MyDrive/rice/blight/IMG_0936.jpg",target_size=(100,100))
imageee=load_img("/content/drive/MyDrive/rice/blast/IMG_0560.jpg",target_size=(100,100))
imageeee=load_img("/content/drive/MyDrive/rice/blight/IMG_1063.jpg",target_size=(100,100))
imageeeee=load_img("/content/drive/MyDrive/rice/tungro/IMG_0898.jpg",target_size=(100,100))
image=img_to_array(image)
image=image/255.0
prediction_image=np.array(image)
prediction_image= np.expand_dims(image, axis=0)
imagee=img_to_array(imagee)
imagee=imagee/255.0
prediction_imagee=np.array(imagee)
prediction_imagee= np.expand_dims(imagee, axis=0)
imageee=img_to_array(imageee)
imageee=imageee/255.0
prediction_imageee=np.array(imageee)
prediction_imageee= np.expand_dims(imageee, axis=0)
imageeee=img_to_array(imageeee)
imageeee=imageeee/255.0
prediction_imageeee=np.array(imageeee)
prediction_imageeee= np.expand_dims(imageeee, axis=0)
imageeeee=img_to_array(imageeeee)
imageeeee=imageeeee/255.0
prediction_imageeeee=np.array(imageeeee)
prediction_imageeeee= np.expand_dims(imageeeee, axis=0)
prediction=model.predict(prediction_image)
value=np.argmax(prediction)
move_name=mapper(value)
print("This Rice Belongs to", move_name + " class")
prediction=model.predict(prediction_imagee)
value=np.argmax(prediction)
move_name=mapper(value)
print("This Rice Belongs to", move_name + " class")
prediction=model.predict(prediction_imageee)
value=np.argmax(prediction)
move_name=mapper(value)
print("This Rice Belongs to", move_name + " class")
prediction=model.predict(prediction_imageeee)
value=np.argmax(prediction)
move_name=mapper(value)
print("This Rice Belongs to", move_name + " class")
prediction=model.predict(prediction_imageeeee)
value=np.argmax(prediction)
move_name=mapper(value)
print("This Rice Belongs to", move_name + " class")
```
# Prediction on Different Test Set
```
print(test.shape)
predictionn=model.predict(test)
print(predictionn.shape)
test_pred=[]
for item in predictionn:
value=np.argmax(item)
test_pred = test_pred + [value]
```
# Confusion Matrix
```
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(test_test,test_pred)
print(cm)
plt.subplots(figsize=(15,7))
sns.heatmap(cm, annot= True, linewidth=1, cmap="CMRmap")
```
# Accuracy
```
accuracy=accuracy_score(test_test,test_pred)
print("Model Accuracy : ",accuracy)
```
# Classification Report
```
print(classification_report(test_test,test_pred))
```
# This Model can Successfully Detects The Disease of a Rice Leaf with an Accuracy of 97%
```
```
| github_jupyter |
### Stock Prediction using fb Prophet
Prophet is a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. It works best with time series that have strong seasonal effects and several seasons of historical data. Prophet is robust to missing data and shifts in the trend, and typically handles outliers well.
```
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
from alpha_vantage.timeseries import TimeSeries
from fbprophet import Prophet
os.chdir(r'N:\STOCK ADVISOR BOT')
ALPHA_VANTAGE_API_KEY = 'XAGC5LBB1SI9RDLW'
ts = TimeSeries(key= ALPHA_VANTAGE_API_KEY, output_format='pandas')
df_Stock, Stock_info = ts.get_daily('MSFT', outputsize='full')
df_Stock = df_Stock.rename(columns={'1. open' : 'Open', '2. high': 'High', '3. low':'Low', '4. close': 'Close', '5. volume': 'Volume' })
df_Stock = df_Stock.rename_axis(['Date'])
Stock = df_Stock.sort_index(ascending=True, axis=0)
#slicing the data for 15 years from '2004-01-02' to today
Stock = Stock.loc['2004-01-02':]
Stock
Stock = Stock.drop(columns=['Open', 'High', 'Low', 'Volume'])
Stock.index = pd.to_datetime(Stock.index)
Stock.info()
#NFLX.resample('D').ffill()
Stock = Stock.reset_index()
Stock
Stock.columns = ['ds', 'y']
prophet_model = Prophet(yearly_seasonality=True, daily_seasonality=True)
prophet_model.add_country_holidays(country_name='US')
prophet_model.add_seasonality(name='monthly', period=30.5, fourier_order=5)
prophet_model.fit(Stock)
future = prophet_model.make_future_dataframe(periods=30)
future.tail()
forcast = prophet_model.predict(future)
forcast.tail()
prophet_model.plot(forcast);
```
If you want to visualize the individual forecast components, we can use Prophet’s built-in plot_components method like below
```
prophet_model.plot_components(forcast);
forcast.shape
forcast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
```
### Prediction Performance
The performance_metrics utility can be used to compute some useful statistics of the prediction performance (yhat, yhat_lower, and yhat_upper compared to y), as a function of the distance from the cutoff (how far into the future the prediction was). The statistics computed are mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), mean absolute percent error (MAPE), and coverage of the yhat_lower and yhat_upper estimates.
```
from fbprophet.diagnostics import cross_validation, performance_metrics
df_cv = cross_validation(prophet_model, horizon='180 days')
df_cv.head()
df_cv
df_p = performance_metrics(df_cv)
df_p.head()
df_p
from fbprophet.plot import plot_cross_validation_metric
fig = plot_cross_validation_metric(df_cv, metric='mape')
```
### License
MIT License
Copyright (c) 2020 Avinash Chourasiya
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
| github_jupyter |
```
library(repr) ; options(repr.plot.res = 100, repr.plot.width=5, repr.plot.height= 5) # Change plot sizes (in cm) - this bit of code is only relevant if you are using a jupyter notebook - ignore otherwise
```
<!--NAVIGATION-->
< [Multiple Explanatory Variables](16-MulExpl.ipynb) | [Main Contents](Index.ipynb) | [Model Simplification](18-ModelSimp.ipynb)>
# Linear Models: Multiple variables with interactions <span class="tocSkip">
<h1>Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Introduction" data-toc-modified-id="Introduction-1"><span class="toc-item-num">1 </span>Introduction</a></span><ul class="toc-item"><li><span><a href="#Chapter-aims" data-toc-modified-id="Chapter-aims-1.1"><span class="toc-item-num">1.1 </span>Chapter aims</a></span></li><li><span><a href="#Formulae-with-interactions-in-R" data-toc-modified-id="Formulae-with-interactions-in-R-1.2"><span class="toc-item-num">1.2 </span>Formulae with interactions in R</a></span></li></ul></li><li><span><a href="#Model-1:-Mammalian-genome-size" data-toc-modified-id="Model-1:-Mammalian-genome-size-2"><span class="toc-item-num">2 </span>Model 1: Mammalian genome size</a></span></li><li><span><a href="#Model-2-(ANCOVA):-Body-Weight-in-Odonata" data-toc-modified-id="Model-2-(ANCOVA):-Body-Weight-in-Odonata-3"><span class="toc-item-num">3 </span>Model 2 (ANCOVA): Body Weight in Odonata</a></span></li></ul></div>
# Introduction
Here you will build on your skills in fitting linear models with multiple explanatory variables to data. You will learn about another commonly used Linear Model fitting technique: ANCOVA.
We will build two models in this chapter:
* **Model 1**: Is mammalian genome size predicted by interactions between trophic level and whether species are ground dwelling?
* **ANCOVA**: Is body size in Odonata predicted by interactions between genome size and taxonomic suborder?
So far, we have only looked at the independent effects of variables. For example, in the trophic level and ground dwelling model from [the first multiple explanatory variables chapter](16-MulExpl.ipynb), we only looked for specific differences for being a omnivore *or* being ground dwelling, not for being
specifically a *ground dwelling omnivore*. These independent effects of a variable are known as *main effects* and the effects of combinations of variables acting together are known as *interactions* — they describe how the variables *interact*.
## Chapter aims
The aims of this chapter are[$^{[1]}$](#fn1):
* Creating more complex Linear Models with multiple explanatory variables
* Including the effects of interactions between multiple variables in a linear model
* Plotting predictions from more complex (multiple explanatory variables) linear models
## Formulae with interactions in R
We've already seen a number of different model formulae in R. They all use this syntax:
`response variable ~ explanatory variable(s)`
But we are now going to see two extra pieces of syntax:
* `y ~ a + b + a:b`: The `a:b` means the interaction between `a` and `b` — do combinations of these variables lead to different outcomes?
* `y ~ a * b`: This a shorthand for the model above. The means fit `a` and `b` as main effects and their interaction `a:b`.
# Model 1: Mammalian genome size
$\star$ Make sure you have changed the working directory to `Code` in your stats coursework directory.
$\star$ Create a new blank script called 'Interactions.R' and add some introductory comments.
$\star$ Load the data:
```
load('../data/mammals.Rdata')
```
If `mammals.Rdata` is missing, just import the data again using `read.csv`. You will then have to add the log C Value column to the imported data frame again.
Let's refit the model from [the first multiple explanatory variables chapter](16-MulExpl.ipynb), but including the interaction between trophic level and ground dwelling. We'll immediately check the model is appropriate:
```
model <- lm(logCvalue ~ TrophicLevel * GroundDwelling, data= mammals)
par(mfrow=c(2,2), mar=c(3,3,1,1), mgp=c(2, 0.8,0))
plot(model)
```
Now, examine the `anova` and `summary` outputs for the model:
```
anova(model)
```
Compared to the model from [the first multiple explanatory variables chapter](16-MulExpl.ipynb), there is an extra line at the bottom. The top two are the same and show that trophic level and ground dwelling both have independent main effects. The extra line
shows that there is also an interaction between the two. It doesn't explain a huge amount of variation, about half as much as trophic level, but it is significant.
Again, we can calculate the $r^2$ for the model: $\frac{0.81 + 2.75 + 0.43}{0.81+2.75+0.43+12.77} = 0.238$
The model from [the first multiple explanatory variables chapter](16-MulExpl.ipynb) without the interaction had an $r^2 = 0.212$ — our new
model explains 2.6% more of the variation in the data.
The summary table is as follows:
```
summary(model)
```
The lines in this output are:
1. The reference level (intercept) for non ground dwelling carnivores. (The reference level is decided just by the alphabetic order of the levels)
2. Two differences for being in different trophic levels.
3. One difference for being ground dwelling
4. Two new differences that give specific differences for ground dwelling herbivores and omnivores.
The first four lines, as in the model from the [ANOVA chapter](15-anova.ipynb), which would allow us to find the predicted values for each group *if the size of the differences did not vary between levels because of the interactions*. That is, this part of the model only includes a single difference ground and non-ground species, which has to be the same for each trophic group because it ignores interactions between trophic level and ground / non-ground identity of each species. The last two lines then give the estimated coefficients associated with the interaction terms, and allow cause the size of differences to vary
between levels because of the further effects of interactions.
The table below show how these combine to give the predictions for each group combination, with those two new lines show in red:
$\begin{array}{|r|r|r|}
\hline
& \textrm{Not ground} & \textrm{Ground} \\
\hline
\textrm{Carnivore} & 0.96 = 0.96 & 0.96+0.25=1.21 \\
\textrm{Herbivore} & 0.96 + 0.05 = 1.01 & 0.96+0.05+0.25{\color{red}+0.03}=1.29\\
\textrm{Omnivore} & 0.96 + 0.23 = 1.19 & 0.96+0.23+0.25{\color{red}-0.15}=1.29\\
\hline
\end{array}$
So why are there two new coefficients? For interactions between two factors, there are always $(n-1)\times(m-1)$ new coefficients, where $n$ and $m$ are the number of levels in the two factors (Ground dwelling or not: 2 levels and trophic level: 3 levels, in our current example). So in this model, $(3-1) \times (2-1) =2$. It is easier to understand why
graphically: the prediction for the white boxes below can be found by adding the main effects together but for the grey boxes we need to find specific differences and so there are $(n-1)\times(m-1)$ interaction coefficients to add.
<a id="fig:interactionsdiag"></a>
<figure>
<img src="./graphics/interactionsdiag.png" alt="interactionsdiag" style="width:50%">
<small>
<center>
<figcaption>
Figure 2
</figcaption>
</center>
</small>
</figure>
If we put this together, what is the model telling us?
* Herbivores have the same genome sizes as carnivores, but omnivores have larger genomes.
* Ground dwelling mammals have larger genomes.
These two findings suggest that ground dwelling omnivores should have extra big genomes. However, the interaction shows they are smaller than expected and are, in fact, similar to ground dwelling herbivores.
Note that although the interaction term in the `anova` output is significant, neither of the two coefficients in the `summary` has a $p<0.05$. There are two weak differences (one
very weak, one nearly significant) that together explain significant
variance in the data.
$\star$ Copy the code above into your script and run the model.
Make sure you understand the output!
Just to make sure the sums above are correct, we'll use the same code as
in [the first multiple explanatory variables chapter](16-MulExpl.ipynb) to get R to calculate predictions for us, similar to the way we did [before](16-MulExpl.ipynb):
```
# a data frame of combinations of variables
gd <- rep(levels(mammals$GroundDwelling), times = 3)
print(gd)
tl <- rep(levels(mammals$TrophicLevel), each = 2)
print(tl)
# New data frame
predVals <- data.frame(GroundDwelling = gd, TrophicLevel = tl)
# predict using the new data frame
predVals$predict <- predict(model, newdata = predVals)
print(predVals)
```
$\star$ Include and run the code for gererating these predictions in your script.
If we plot these data points onto the barplot from [the first multiple explanatory variables chapter](16-MulExpl.ipynb), they now lie exactly on the mean values, because we've allowed for interactions. The triangle on this plot shows the predictions for ground dwelling omnivores from the main effects ($0.96 + 0.23 + 0.25 = 1.44$), the interaction of $-0.15$ pushes the prediction back down.
<a id="fig:predPlot"></a>
<figure>
<img src="./graphics/predPlot.svg" alt="predPlot" style="width:70%">
</figure>
# Model 2 (ANCOVA): Body Weight in Odonata
We'll go all the way back to the regression analyses from the [Regression chapter](14-regress.ipynb). Remember that we fitted two separate regression lines to the data for damselflies and dragonflies. We'll now use an interaction to fit these in a single model. This kind of linear model — with a mixture of continuous variables and factors — is often called an *analysis of covariance*, or ANCOVA. That is, ANCOVA is a type of linear model that blends ANOVA and regression. ANCOVA evaluates whether population means of a dependent variable are equal across levels of a categorical independent variable, while statistically controlling for the effects of other continuous variables that are not of primary interest, known as covariates.
*Thus, ANCOVA is a linear model with one categorical and one or more continuous predictors*.
We will use the odonates data that we have worked with [before](12-ExpDesign.ipynb).
$\star$ First load the data:
```
odonata <- read.csv('../data/GenomeSize.csv')
```
$\star$ Now create two new variables in the `odonata` data set called `logGS` and `logBW` containing log genome size and log body weight:
```
odonata$logGS <- log(odonata$GenomeSize)
odonata$logBW <- log(odonata$BodyWeight)
```
The models we fitted [before](12-ExpDesign.ipynb) looked like this:
<a id="fig:dragonData"></a>
<figure>
<img src="./graphics/dragonData.svg" alt="dragonData" style="width:60%">
<small>
<center>
<figcaption>
</figcaption>
</center>
</small>
</figure>
We can now fit the model of body weight as a function of both genome size and suborder:
```
odonModel <- lm(logBW ~ logGS * Suborder, data = odonata)
```
Again, we'll look at the <span>anova</span> table first:
```
anova(odonModel)
```
Interpreting this:
* There is no significant main effect of log genome size. The *main* effect is the important thing here — genome size is hugely important but does very different things for the two different suborders. If we ignored `Suborder`, there isn't an overall relationship: the average of those two lines is pretty much flat.
* There is a very strong main effect of Suborder: the mean body weight in the two groups are very different.
* There is a strong interaction between suborder and genome size. This is an interaction between a factor and a continuous variable and shows that the *slopes* are different for the different factor levels.
Now for the summary table:
```
summary(odonModel)
```
* The first thing to note is that the $r^2$ value is really high. The model explains three quarters (0.752) of the variation in the data.
* Next, there are four coefficients:
* The intercept is for the first level of `Suborder`, which is Anisoptera (dragonflies).
* The next line, for `log genome size`, is the slope for Anisoptera.
* We then have a coefficient for the second level of `Suborder`, which is Zygoptera (damselflies). As with the first model, this difference in factor levels is a difference in mean values and shows the difference in the intercept for Zygoptera.
* The last line is the interaction between `Suborder` and `logGS`. This shows how the slope for Zygoptera differs from the slope for Anisoptera.
How do these hang together to give the two lines shown in the model? We can calculate these by hand:
$\begin{aligned}
\textrm{Body Weight} &= -2.40 + 1.01 \times \textrm{logGS} & \textrm{[Anisoptera]}\\
\textrm{Body Weight} &= (-2.40 -2.25) + (1.01 - 2.15) \times \textrm{logGS} & \textrm{[Zygoptera]}\\
&= -4.65 - 1.14 \times \textrm{logGS} \\\end{aligned}$
$\star$ Add the above code into your script and check that you understand the outputs.
We'll use the `predict` function again to get the predicted values from the model and add lines to the plot above.
First, we'll create a set of numbers spanning the range of genome size:
```
#get the range of the data:
rng <- range(odonata$logGS)
#get a sequence from the min to the max with 100 equally spaced values:
LogGSForFitting <- seq(rng[1], rng[2], length = 100)
```
Have a look at these numbers:
```
print(LogGSForFitting)
```
We can now use the model to predict the values of body weight at each of those points for each of the two suborders:
```
#get a data frame of new data for the order
ZygoVals <- data.frame(logGS = LogGSForFitting, Suborder = "Zygoptera")
#get the predictions and standard error
ZygoPred <- predict(odonModel, newdata = ZygoVals, se.fit = TRUE)
#repeat for anisoptera
AnisoVals <- data.frame(logGS = LogGSForFitting, Suborder = "Anisoptera")
AnisoPred <- predict(odonModel, newdata = AnisoVals, se.fit = TRUE)
```
We've added `se.fit=TRUE` to the function to get the standard error around the regression lines. Both `AnisoPred` and `ZygoPred` contain predicted values (called `fit`) and standard error values (called `se.fit`) for each of the values in our generated values in `LogGSForFitting` for each of the two suborders.
We can add the predictions onto a plot like this:
```
# plot the scatterplot of the data
plot(logBW ~ logGS, data = odonata, col = Suborder)
# add the predicted lines
lines(AnisoPred$fit ~ LogGSForFitting, col = "black")
lines(AnisoPred$fit + AnisoPred$se.fit ~ LogGSForFitting, col = "black", lty = 2)
lines(AnisoPred$fit - AnisoPred$se.fit ~ LogGSForFitting, col = "black", lty = 2)
```
$\star$ Copy the prediction code into your script and run the plot above.
Copy and modify the last three lines to add the lines for the Zygoptera. Your final plot should look like this.
<a id="fig:odonPlot"></a>
<figure>
<img src="./graphics/odonPlot.svg" alt="odonPlot" style="width:70%">
<small>
<center>
<figcaption>
Figure 4
</figcaption>
</center>
</small>
</figure>
---
<a id="fn1"></a>
[1]: Here you work with the script file `MulExplInter.R`
| github_jupyter |
# Optimization Methods
Until now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result.
Gradient descent goes "downhill" on a cost function $J$. Think of it as trying to do this:
<img src="images/cost.jpg" style="width:650px;height:300px;">
<caption><center> <u> **Figure 1** </u>: **Minimizing the cost is like finding the lowest point in a hilly landscape**<br> At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point. </center></caption>
**Notations**: As usual, $\frac{\partial J}{\partial a } = $ `da` for any variable `a`.
To get started, run the following code to import the libraries you will need.
```
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets
from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
```
## 1 - Gradient Descent
A simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent.
**Warm-up exercise**: Implement the gradient descent update rule. The gradient descent rule is, for $l = 1, ..., L$:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{1}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{2}$$
where L is the number of layers and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.
```
# GRADED FUNCTION: update_parameters_with_gd
def update_parameters_with_gd(parameters, grads, learning_rate):
"""
Update parameters using one step of gradient descent
Arguments:
parameters -- python dictionary containing your parameters to be updated:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients to update each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
learning_rate -- the learning rate, scalar.
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Update rule for each parameter
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - (learning_rate * grads["dW" + str(l+1)])
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - (learning_rate * grads["db" + str(l+1)])
### END CODE HERE ###
return parameters
parameters, grads, learning_rate = update_parameters_with_gd_test_case()
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table>
<tr>
<td > **W1** </td>
<td > [[ 1.63535156 -0.62320365 -0.53718766]
[-1.07799357 0.85639907 -2.29470142]] </td>
</tr>
<tr>
<td > **b1** </td>
<td > [[ 1.74604067]
[-0.75184921]] </td>
</tr>
<tr>
<td > **W2** </td>
<td > [[ 0.32171798 -0.25467393 1.46902454]
[-2.05617317 -0.31554548 -0.3756023 ]
[ 1.1404819 -1.09976462 -0.1612551 ]] </td>
</tr>
<tr>
<td > **b2** </td>
<td > [[-0.88020257]
[ 0.02561572]
[ 0.57539477]] </td>
</tr>
</table>
A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent.
- **(Batch) Gradient Descent**:
``` python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
# Forward propagation
a, caches = forward_propagation(X, parameters)
# Compute cost.
cost = compute_cost(a, Y)
# Backward propagation.
grads = backward_propagation(a, caches, parameters)
# Update parameters.
parameters = update_parameters(parameters, grads)
```
- **Stochastic Gradient Descent**:
```python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
for j in range(0, m):
# Forward propagation
a, caches = forward_propagation(X[:,j], parameters)
# Compute cost
cost = compute_cost(a, Y[:,j])
# Backward propagation
grads = backward_propagation(a, caches, parameters)
# Update parameters.
parameters = update_parameters(parameters, grads)
```
In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration of this:
<img src="images/kiank_sgd.png" style="width:750px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **SGD vs GD**<br> "+" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). </center></caption>
**Note** also that implementing SGD requires 3 for-loops in total:
1. Over the number of iterations
2. Over the $m$ training examples
3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$)
In practice, you'll often get faster results if you do not use neither the whole training set, nor only one training example, to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples.
<img src="images/kiank_minibatch.png" style="width:750px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 2** </u>: <font color='purple'> **SGD vs Mini-Batch GD**<br> "+" denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization. </center></caption>
<font color='blue'>
**What you should remember**:
- The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step.
- You have to tune a learning rate hyperparameter $\alpha$.
- With a well-turned mini-batch size, usually it outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large).
## 2 - Mini-Batch Gradient descent
Let's learn how to build mini-batches from the training set (X, Y).
There are two steps:
- **Shuffle**: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y. Such that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches.
<img src="images/kiank_shuffle.png" style="width:550px;height:300px;">
- **Partition**: Partition the shuffled (X, Y) into mini-batches of size `mini_batch_size` (here 64). Note that the number of training examples is not always divisible by `mini_batch_size`. The last mini batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full `mini_batch_size`, it will look like this:
<img src="images/kiank_partition.png" style="width:550px;height:300px;">
**Exercise**: Implement `random_mini_batches`. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches:
```python
first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]
second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]
...
```
Note that the last mini-batch might end up smaller than `mini_batch_size=64`. Let $\lfloor s \rfloor$ represents $s$ rounded down to the nearest integer (this is `math.floor(s)` in Python). If the total number of examples is not a multiple of `mini_batch_size=64` then there will be $\lfloor \frac{m}{mini\_batch\_size}\rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be ($m-mini_\_batch_\_size \times \lfloor \frac{m}{mini\_batch\_size}\rfloor$).
```
# GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
"""
Creates a list of random minibatches from (X, Y)
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
mini_batch_size -- size of the mini-batches, integer
Returns:
mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
"""
np.random.seed(seed) # To make your "random" minibatches the same as ours
m = X.shape[1] # number of training examples
mini_batches = []
# Step 1: Shuffle (X, Y)
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation].reshape((1,m))
# Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning
for k in range(0, num_complete_minibatches):
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:, k * mini_batch_size : (k + 1) * mini_batch_size]
mini_batch_Y = shuffled_Y[:, k * mini_batch_size : (k + 1) * mini_batch_size].reshape((1, mini_batch_size))
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
# Handling the end case (last mini-batch < mini_batch_size)
if m % mini_batch_size != 0:
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size : m]
mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size : m].reshape((1, m - num_complete_minibatches * mini_batch_size))
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
return mini_batches
X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)
print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape))
print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3]))
```
**Expected Output**:
<table style="width:50%">
<tr>
<td > **shape of the 1st mini_batch_X** </td>
<td > (12288, 64) </td>
</tr>
<tr>
<td > **shape of the 2nd mini_batch_X** </td>
<td > (12288, 64) </td>
</tr>
<tr>
<td > **shape of the 3rd mini_batch_X** </td>
<td > (12288, 20) </td>
</tr>
<tr>
<td > **shape of the 1st mini_batch_Y** </td>
<td > (1, 64) </td>
</tr>
<tr>
<td > **shape of the 2nd mini_batch_Y** </td>
<td > (1, 64) </td>
</tr>
<tr>
<td > **shape of the 3rd mini_batch_Y** </td>
<td > (1, 20) </td>
</tr>
<tr>
<td > **mini batch sanity check** </td>
<td > [ 0.90085595 -0.7612069 0.2344157 ] </td>
</tr>
</table>
<font color='blue'>
**What you should remember**:
- Shuffling and Partitioning are the two steps required to build mini-batches
- Powers of two are often chosen to be the mini-batch size, e.g., 16, 32, 64, 128.
## 3 - Momentum
Because mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will "oscillate" toward convergence. Using momentum can reduce these oscillations.
Momentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the "velocity" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill.
<img src="images/opt_momentum.png" style="width:400px;height:250px;">
<caption><center> <u><font color='purple'>**Figure 3**</u><font color='purple'>: The red arrows shows the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$.<br> <font color='black'> </center>
**Exercise**: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the `grads` dictionary, that is:
for $l =1,...,L$:
```python
v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
```
**Note** that the iterator l starts at 0 in the for loop while the first parameters are v["dW1"] and v["db1"] (that's a "one" on the superscript). This is why we are shifting l to l+1 in the `for` loop.
```
# GRADED FUNCTION: initialize_velocity
def initialize_velocity(parameters):
"""
Initializes the velocity as a python dictionary with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
Returns:
v -- python dictionary containing the current velocity.
v['dW' + str(l)] = velocity of dWl
v['db' + str(l)] = velocity of dbl
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
# Initialize velocity
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
### END CODE HERE ###
return v
parameters = initialize_velocity_test_case()
v = initialize_velocity(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td > **v["dW1"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
</table>
**Exercise**: Now, implement the parameters update with momentum. The momentum update rule is, for $l = 1, ..., L$:
$$ \begin{cases}
v_{dW^{[l]}} = \beta v_{dW^{[l]}} + (1 - \beta) dW^{[l]} \\
W^{[l]} = W^{[l]} - \alpha v_{dW^{[l]}}
\end{cases}\tag{3}$$
$$\begin{cases}
v_{db^{[l]}} = \beta v_{db^{[l]}} + (1 - \beta) db^{[l]} \\
b^{[l]} = b^{[l]} - \alpha v_{db^{[l]}}
\end{cases}\tag{4}$$
where L is the number of layers, $\beta$ is the momentum and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that's a "one" on the superscript). So you will need to shift `l` to `l+1` when coding.
```
# GRADED FUNCTION: update_parameters_with_momentum
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
"""
Update parameters using Momentum
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' +
str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- python dictionary containing the current velocity:
v['dW' + str(l)] = ...
v['db' + str(l)] = ...
beta -- the momentum hyperparameter, scalar
learning_rate -- the learning rate, scalar
Returns:
parameters -- python dictionary containing your updated parameters
v -- python dictionary containing your updated velocities
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Momentum update for each parameter
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
# compute velocities
v["dW" + str(l+1)] = beta * v["dW" + str(l+1)] + (1 - beta) * grads["dW" + str(l+1)]
v["db" + str(l+1)] = beta * v["db" + str(l+1)] + (1 - beta) * grads["db" + str(l+1)]
# update parameters
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - (learning_rate * v["dW" + str(l+1)])
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - (learning_rate * v["db" + str(l+1)])
### END CODE HERE ###
return parameters, v
parameters, grads, v = update_parameters_with_momentum_test_case()
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td > **W1** </td>
<td > [[ 1.62544598 -0.61290114 -0.52907334]
[-1.07347112 0.86450677 -2.30085497]] </td>
</tr>
<tr>
<td > **b1** </td>
<td > [[ 1.74493465]
[-0.76027113]] </td>
</tr>
<tr>
<td > **W2** </td>
<td > [[ 0.31930698 -0.24990073 1.4627996 ]
[-2.05974396 -0.32173003 -0.38320915]
[ 1.13444069 -1.0998786 -0.1713109 ]] </td>
</tr>
<tr>
<td > **b2** </td>
<td > [[-0.87809283]
[ 0.04055394]
[ 0.58207317]] </td>
</tr>
<tr>
<td > **v["dW1"]** </td>
<td > [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[-0.01228902]
[-0.09357694]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]</td>
</tr>
</table>
**Note** that:
- The velocity is initialized with zeros. So the algorithm will take a few iterations to "build up" velocity and start to take bigger steps.
- If $\beta = 0$, then this just becomes standard gradient descent without momentum.
**How do you choose $\beta$?**
- The larger the momentum $\beta$ is, the smoother the update because the more we take the past gradients into account. But if $\beta$ is too big, it could also smooth out the updates too much.
- Common values for $\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\beta = 0.9$ is often a reasonable default.
- Tuning the optimal $\beta$ for your model might need trying several values to see what works best in term of reducing the value of the cost function $J$.
<font color='blue'>
**What you should remember**:
- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.
- You have to tune a momentum hyperparameter $\beta$ and a learning rate $\alpha$.
## 4 - Adam
Adam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum.
**How does Adam work?**
1. It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction).
2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction).
3. It updates parameters in a direction based on combining information from "1" and "2".
The update rule is, for $l = 1, ..., L$:
$$\begin{cases}
v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial \mathcal{J} }{ \partial W^{[l]} } \\
v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\
s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) (\frac{\partial \mathcal{J} }{\partial W^{[l]} })^2 \\
s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_1)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}
\end{cases}$$
where:
- t counts the number of steps taken of Adam
- L is the number of layers
- $\beta_1$ and $\beta_2$ are hyperparameters that control the two exponentially weighted averages.
- $\alpha$ is the learning rate
- $\varepsilon$ is a very small number to avoid dividing by zero
As usual, we will store all parameters in the `parameters` dictionary
**Exercise**: Initialize the Adam variables $v, s$ which keep track of the past information.
**Instruction**: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for `grads`, that is:
for $l = 1, ..., L$:
```python
v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
s["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
s["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
```
```
# GRADED FUNCTION: initialize_adam
def initialize_adam(parameters) :
"""
Initializes v and s as two python dictionaries with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters["W" + str(l)] = Wl
parameters["b" + str(l)] = bl
Returns:
v -- python dictionary that will contain the exponentially weighted average of the gradient.
v["dW" + str(l)] = ...
v["db" + str(l)] = ...
s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
s["dW" + str(l)] = ...
s["db" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
s = {}
# Initialize v, s. Input: "parameters". Outputs: "v, s".
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
s["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
s["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
### END CODE HERE ###
return v, s
parameters = initialize_adam_test_case()
v, s = initialize_adam(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td > **v["dW1"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **s["dW1"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **s["db1"]** </td>
<td > [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **s["dW2"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **s["db2"]** </td>
<td > [[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
</table>
**Exercise**: Now, implement the parameters update with Adam. Recall the general update rule is, for $l = 1, ..., L$:
$$\begin{cases}
v_{W^{[l]}} = \beta_1 v_{W^{[l]}} + (1 - \beta_1) \frac{\partial J }{ \partial W^{[l]} } \\
v^{corrected}_{W^{[l]}} = \frac{v_{W^{[l]}}}{1 - (\beta_1)^t} \\
s_{W^{[l]}} = \beta_2 s_{W^{[l]}} + (1 - \beta_2) (\frac{\partial J }{\partial W^{[l]} })^2 \\
s^{corrected}_{W^{[l]}} = \frac{s_{W^{[l]}}}{1 - (\beta_2)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{W^{[l]}}}{\sqrt{s^{corrected}_{W^{[l]}}}+\varepsilon}
\end{cases}$$
**Note** that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.
```
# GRADED FUNCTION: update_parameters_with_adam
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
"""
Update parameters using Adam
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
learning_rate -- the learning rate, scalar.
beta1 -- Exponential decay hyperparameter for the first moment estimates
beta2 -- Exponential decay hyperparameter for the second moment estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
Returns:
parameters -- python dictionary containing your updated parameters
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
"""
L = len(parameters) // 2 # number of layers in the neural networks
v_corrected = {} # Initializing first moment estimate, python dictionary
s_corrected = {} # Initializing second moment estimate, python dictionary
# Perform Adam update on all parameters
for l in range(L):
# Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = beta1 * v["dW" + str(l+1)] + (1 - beta1) * grads["dW" + str(l+1)]
v["db" + str(l+1)] = beta1 * v["db" + str(l+1)] + (1 - beta1) * grads["db" + str(l+1)]
### END CODE HERE ###
# Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
### START CODE HERE ### (approx. 2 lines)
v_corrected["dW" + str(l+1)] = v["dW" + str(l+1)] / (1 - beta1 ** t)
v_corrected["db" + str(l+1)] = v["db" + str(l+1)] / (1 - beta1 ** t)
### END CODE HERE ###
# Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
### START CODE HERE ### (approx. 2 lines)
s["dW" + str(l+1)] = beta2 * s["dW" + str(l+1)] + (1 - beta2) * np.square(grads["dW" + str(l+1)])
s["db" + str(l+1)] = beta2 * s["db" + str(l+1)] + (1 - beta2) * np.square(grads["db" + str(l+1)])
### END CODE HERE ###
# Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
### START CODE HERE ### (approx. 2 lines)
s_corrected["dW" + str(l+1)] = s["dW" + str(l+1)] / (1 - beta2 ** t)
s_corrected["db" + str(l+1)] = s["db" + str(l+1)] / (1 - beta2 ** t)
### END CODE HERE ###
# Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - ( learning_rate * v_corrected["dW" + str(l+1)] / ( np.sqrt(s_corrected["dW" + str(l+1)]) + epsilon ) )
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - ( learning_rate * v_corrected["db" + str(l+1)] / ( np.sqrt(s_corrected["db" + str(l+1)]) + epsilon ) )
### END CODE HERE ###
return parameters, v, s
parameters, grads, v, s = update_parameters_with_adam_test_case()
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
```
**Expected Output**:
<table>
<tr>
<td > **W1** </td>
<td > [[ 1.63178673 -0.61919778 -0.53561312]
[-1.08040999 0.85796626 -2.29409733]] </td>
</tr>
<tr>
<td > **b1** </td>
<td > [[ 1.75225313]
[-0.75376553]] </td>
</tr>
<tr>
<td > **W2** </td>
<td > [[ 0.32648046 -0.25681174 1.46954931]
[-2.05269934 -0.31497584 -0.37661299]
[ 1.14121081 -1.09245036 -0.16498684]] </td>
</tr>
<tr>
<td > **b2** </td>
<td > [[-0.88529978]
[ 0.03477238]
[ 0.57537385]] </td>
</tr>
<tr>
<td > **v["dW1"]** </td>
<td > [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[-0.01228902]
[-0.09357694]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.02344157]
[ 0.16598022]
[ 0.07420442]] </td>
</tr>
<tr>
<td > **s["dW1"]** </td>
<td > [[ 0.00121136 0.00131039 0.00081287]
[ 0.0002525 0.00081154 0.00046748]] </td>
</tr>
<tr>
<td > **s["db1"]** </td>
<td > [[ 1.51020075e-05]
[ 8.75664434e-04]] </td>
</tr>
<tr>
<td > **s["dW2"]** </td>
<td > [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04]
[ 1.57413361e-04 4.72206320e-04 7.14372576e-04]
[ 4.50571368e-04 1.60392066e-07 1.24838242e-03]] </td>
</tr>
<tr>
<td > **s["db2"]** </td>
<td > [[ 5.49507194e-05]
[ 2.75494327e-03]
[ 5.50629536e-04]] </td>
</tr>
</table>
You now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let's implement a model with each of these optimizers and observe the difference.
## 5 - Model with different optimization algorithms
Lets use the following "moons" dataset to test the different optimization methods. (The dataset is named "moons" because the data from each of the two classes looks a bit like a crescent-shaped moon.)
```
train_X, train_Y = load_dataset()
```
We have already implemented a 3-layer neural network. You will train it with:
- Mini-batch **Gradient Descent**: it will call your function:
- `update_parameters_with_gd()`
- Mini-batch **Momentum**: it will call your functions:
- `initialize_velocity()` and `update_parameters_with_momentum()`
- Mini-batch **Adam**: it will call your functions:
- `initialize_adam()` and `update_parameters_with_adam()`
```
def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
"""
3-layer neural network model which can be run in different optimizer modes.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
layers_dims -- python list, containing the size of each layer
learning_rate -- the learning rate, scalar.
mini_batch_size -- the size of a mini batch
beta -- Momentum hyperparameter
beta1 -- Exponential decay hyperparameter for the past gradients estimates
beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
num_epochs -- number of epochs
print_cost -- True to print the cost every 1000 epochs
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(layers_dims) # number of layers in the neural networks
costs = [] # to keep track of the cost
t = 0 # initializing the counter required for Adam update
seed = 10 # For grading purposes, so that your "random" minibatches are the same as ours
# Initialize parameters
parameters = initialize_parameters(layers_dims)
# Initialize the optimizer
if optimizer == "gd":
pass # no initialization required for gradient descent
elif optimizer == "momentum":
v = initialize_velocity(parameters)
elif optimizer == "adam":
v, s = initialize_adam(parameters)
# Optimization loop
for i in range(num_epochs):
# Define the random minibatches. We increment the seed to reshuffle differently the dataset after each epoch
seed = seed + 1
minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# Forward propagation
a3, caches = forward_propagation(minibatch_X, parameters)
# Compute cost
cost = compute_cost(a3, minibatch_Y)
# Backward propagation
grads = backward_propagation(minibatch_X, minibatch_Y, caches)
# Update parameters
if optimizer == "gd":
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
elif optimizer == "momentum":
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
elif optimizer == "adam":
t = t + 1 # Adam counter
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
t, learning_rate, beta1, beta2, epsilon)
# Print the cost every 1000 epoch
if print_cost and i % 1000 == 0:
print ("Cost after epoch %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('epochs (per 100)')
plt.title("Learning rate = " + str(learning_rate))
plt.show()
return parameters
```
You will now run this 3 layer neural network with each of the 3 optimization methods.
### 5.1 - Mini-batch Gradient descent
Run the following code to see how the model does with mini-batch gradient descent.
```
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
### 5.2 - Mini-batch gradient descent with momentum
Run the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momemtum are small; but for more complex problems you might see bigger gains.
```
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
### 5.3 - Mini-batch with Adam mode
Run the following code to see how the model does with Adam.
```
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
### 5.4 - Summary
<table>
<tr>
<td>
**optimization method**
</td>
<td>
**accuracy**
</td>
<td>
**cost shape**
</td>
</tr>
<td>
Gradient descent
</td>
<td>
79.7%
</td>
<td>
oscillations
</td>
<tr>
<td>
Momentum
</td>
<td>
79.7%
</td>
<td>
oscillations
</td>
</tr>
<tr>
<td>
Adam
</td>
<td>
94%
</td>
<td>
smoother
</td>
</tr>
</table>
Momentum usually helps, but given the small learning rate and the simplistic dataset, its impact is almost negligeable. Also, the huge oscillations you see in the cost come from the fact that some minibatches are more difficult thans others for the optimization algorithm.
Adam on the other hand, clearly outperforms mini-batch gradient descent and Momentum. If you run the model for more epochs on this simple dataset, all three methods will lead to very good results. However, you've seen that Adam converges a lot faster.
Some advantages of Adam include:
- Relatively low memory requirements (though higher than gradient descent and gradient descent with momentum)
- Usually works well even with little tuning of hyperparameters (except $\alpha$)
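For context, here is a brief sketch of the standard Adam update that the `update_parameters_with_adam` step above implements for each parameter matrix $W$ (same $\beta_1$, $\beta_2$, $\varepsilon$ and learning rate $\alpha$ as in the code; $t$ is the Adam counter):
$$v = \beta_1 v + (1-\beta_1)\, dW \qquad s = \beta_2 s + (1-\beta_2)\, (dW)^2$$
$$v^{corr} = \frac{v}{1-\beta_1^t} \qquad s^{corr} = \frac{s}{1-\beta_2^t}$$
$$W \leftarrow W - \alpha\, \frac{v^{corr}}{\sqrt{s^{corr}} + \varepsilon}$$
The momentum update uses only the first of these moving averages, which is one reason its benefit here is mostly a smoothing of the oscillations rather than faster convergence.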
**References**:
- Adam paper: https://arxiv.org/pdf/1412.6980.pdf
## YOLOv3 - Functions
Changing the pipeline into functions to make the implementation easier to reuse.
```
import os.path
import cv2
import numpy as np
import requests
import matplotlib.pyplot as plt

# Download the YOLO network configuration
yolo_config = 'yolov3.cfg'
if not os.path.isfile(yolo_config):
    url = 'https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg'
    r = requests.get(url)
    with open(yolo_config, 'wb') as f:
        f.write(r.content)

# Download YOLO net weights
# We'll download them from the YOLO author's website
yolo_weights = 'yolov3.weights'
if not os.path.isfile(yolo_weights):
    url = 'https://pjreddie.com/media/files/yolov3.weights'
    r = requests.get(url)
    with open(yolo_weights, 'wb') as f:
        f.write(r.content)

net = cv2.dnn.readNet(yolo_weights, yolo_config)

# Download the COCO class names
classes_file = 'coco.names'
if not os.path.isfile(classes_file):
    url = 'https://raw.githubusercontent.com/pjreddie/darknet/master/data/coco.names'
    r = requests.get(url)
    with open(classes_file, 'wb') as f:
        f.write(r.content)

# load class names
with open(classes_file, 'r') as f:
    classes = [line.strip() for line in f.readlines()]

image_file = 'C:/Users/Billi/repos/Computer_Vision/OpenCV/bdd100k/seg/images/train/00d79c0a-23bea078.jpg'
image = cv2.imread(image_file)
cv2.imshow('img', image)
cv2.waitKey(0)

def get_image(image):
    # scale pixel values to [0, 1], resize to the 416x416 input expected by YOLOv3 and swap BGR to RGB
    blob = cv2.dnn.blobFromImage(image, 1 / 255, (416, 416), (0, 0, 0), True, crop=False)
    return blob

def get_prediction(blob):
    # set as input to the net
    net.setInput(blob)
    # get network output layers
    layer_names = net.getLayerNames()
    output_layers = [layer_names[i[0] - 1] for i in net.getUnconnectedOutLayers()]
    # inference
    # the network outputs multiple lists of anchor boxes,
    # one list per output scale
    outs = net.forward(output_layers)
    return outs
```
After we get the network outputs, we have to pre-process the network outputs and apply non-max suppression over them to produce the final set of detected objects
```
def get_boxes(outs):
    class_ids = []
    confidences = []
    boxes = []
    for out in outs:
        # iterate over the anchor boxes for each class
        for detection in out:
            # box coordinates are relative, so scale them to the image size
            center_x = int(detection[0] * image.shape[1])
            center_y = int(detection[1] * image.shape[0])
            w, h = int(detection[2] * image.shape[1]), int(detection[3] * image.shape[0])
            x, y = center_x - w // 2, center_y - h // 2
            boxes.append([x, y, w, h])
            # confidence
            confidences.append(float(detection[4]))
            # class
            class_ids.append(np.argmax(detection[5:]))
    return boxes, confidences, class_ids

def get_ids(boxes, confidences):
    # non-max suppression keeps only the strongest, non-overlapping boxes
    ids = cv2.dnn.NMSBoxes(boxes, confidences, score_threshold=0.75, nms_threshold=0.5)
    return ids

def colors(image):
    # draw the surviving boxes; uses the notebook-level ids, boxes,
    # confidences and class_ids computed by the functions above
    colors = np.random.uniform(0, 255, size=(len(classes), 3))
    # iterate over all boxes
    for i in ids:
        i = i[0]
        x, y, w, h = boxes[i]
        class_id = class_ids[i]
        color = colors[class_id]
        cv2.rectangle(img=image,
                      pt1=(round(x), round(y)),
                      pt2=(round(x + w), round(y + h)),
                      color=color,
                      thickness=3)
        cv2.putText(img=image,
                    text=f"{classes[class_id]}: {confidences[i]:.2f}",
                    org=(x - 10, y - 10),
                    fontFace=cv2.FONT_HERSHEY_SIMPLEX,
                    fontScale=0.8,
                    color=color,
                    thickness=2)
    return image

image = cv2.imread(image_file)
blob = get_image(image)
outs = get_prediction(blob)
boxes, confidences, class_ids = get_boxes(outs)
ids = get_ids(boxes, confidences)
final = colors(image)

plt.imshow(image)
plt.imshow(final)
cv2.imshow('img', image)
cv2.waitKey(0)

image = cv2.imread(image_file)
blob = get_image(image)
outs = get_prediction(blob)
boxes, confidences, class_ids = get_boxes(outs)
ids = get_ids(boxes, confidences)
final = colors(image)
plt.imshow(final)
```
# <span style='color:darkred'> 2 Protein Visualization </span>
***
For the purposes of this tutorial, we will use the HIV-1 protease structure (PDB ID: 1HSG). It is a homodimer with two chains of 99 residues each. Before starting to perform any simulations and data analysis, we need to observe and familiarize ourselves with the protein of interest.
There are various software packages for visualizing molecular systems, but here we will guide you through using two of them, NGLView and VMD:
* [NGLView](http://nglviewer.org/#nglview): An IPython/Jupyter widget to interactively view molecular structures and trajectories.
* [VMD](https://www.ks.uiuc.edu/Research/vmd/): VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.
You could either take your time to familiarize yourself with both, or pick the one you prefer to delve into.
NGLView is great for looking at things directly within a Jupyter notebook, while VMD is a more powerful tool for visualization, for generating high-quality images and videos, and also for analysing simulation trajectories.
## <span style='color:darkred'> 2.0 Obtain the protein structure </span>
The first step is to obtain the crystal structure of the HIV-1 protease.
Start your web-browser and go to the [protein data bank](https://www.rcsb.org/). Enter the pdb code 1HSG in the site search box at the top and hit the site search button. The protein should come up. Select download from the top right hand menu and save the .pdb file to the current working directory.
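If you prefer to fetch the file without leaving the notebook, here is a minimal sketch using `requests`; it assumes the standard RCSB download URL pattern and saves the structure as `1hsg.pdb` in the current working directory.
```python
import requests

# Assumed RCSB download endpoint for PDB entry 1HSG
url = "https://files.rcsb.org/download/1HSG.pdb"
r = requests.get(url)
r.raise_for_status()

with open("1hsg.pdb", "wb") as f:
    f.write(r.content)
```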
## <span style='color:darkred'> 2.1 VMD (optional) </span>
You can now open the pdb structure with VMD (the following file name might be uppercase depending on how you downloaded it):
`% vmd 1hsg.pdb`
You should experiment with the menu system and try various representations of the protein, such as `Trace`, `NewCartoon` and `Ribbons`.
Go to `Graphics` and then `Graphical Representations` and from the `Drawing Method` drop-down list, select `Trace`. Similarly, you can explore other drawing methods.
<span style='color:Blue'> **Questions** </span>
* Can you find the indinavir drug?
*Hint: At the `Graphical Representations` menu, click `Create Rep` and type "all and not protein" and hit Enter. Change the `Drawing Method` to `Licorice`.*
* Give the protein the Trace representation and then make the polar residues in vdw format as an additional representation. Repeat with the hydrophobic residues. What do you notice?
*Hint: Explore the `Selections` tab and the options provided as singlewords.*
*Hint: To hide a representation, double-click on it. Double-click again if you want to make it reappear.*
Take your time to explore the features of VMD and to observe the protein. Once you are happy, you can exit VMD, either by clicking on `File` and then `Quit` or by typing `quit` in the terminal box.
***
## <span style='color:darkred'> 2.2 NGLView </span>
You have already been introduced to NGLView during the Python tutorial. You can now spend more time to navigate through its features.
```
# Import NGLView
import nglview
# Select as your protein the 1HSG pdb entry
protein_view = nglview.show_pdbid('1hsg')
protein_view.gui_style = 'ngl'
#Uncomment the command below to add a hyperball representation of the crystal water oxygens in grey
#protein_view.add_hyperball('HOH', color='grey', opacity=1.0)
#Uncomment the command below to color the protein according to its secondary structure with opacity 0.6
#protein_view.update_cartoon(color='sstruc', opacity=0.6)
# Let's change the display a little bit
protein_view.parameters = dict(camera_type='orthographic', clip_dist=0)
# Set the background colour to black
protein_view.background = 'black'
# Call protein_view to visualise the trajectory
protein_view
```
<span style='color:Blue'> **Questions** </span>
* When you load the structure, can you see the two subunits that form the dimer?
* Can you locate the drug in the binding pocket?
*Hint: Go to `View` and then `Full screen` to expand the viewing window.*
* Can you hide all the other representations and view only the drug?
*Hint: Use your mouse to rotate, translate and zoom in and out.*
*Hint: You can hide/show a representation by clicking on the "eye" symbol on the right panel.*
***
Explore the [NGLView documentation](http://nglviewer.org/nglview/latest/api.html), and play around with different representations, selections, colors etc. Take as much time as you want in this step.
***
## <span style='color:darkred'> Next Step </span>
You can now open the `03_Running_an_MD_simulation.ipynb` notebook to setup and perform a Molecular Dynamics simulation of your protein.
# Image features exercise
*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*
We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.
All of your work for this exercise will be done in this notebook.
```
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
```
## Load data
Similar to previous exercises, we will load CIFAR-10 data from disk.
```
from cs231n.features import color_histogram_hsv, hog_feature

def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
    # Load the raw CIFAR-10 data
    cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
    X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)

    # Subsample the data
    mask = range(num_training, num_training + num_validation)
    X_val = X_train[mask]
    y_val = y_train[mask]
    mask = range(num_training)
    X_train = X_train[mask]
    y_train = y_train[mask]
    mask = range(num_test)
    X_test = X_test[mask]
    y_test = y_test[mask]

    return X_train, y_train, X_val, y_val, X_test, y_test

X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
```
## Extract Features
For each image we will compute a Histogram of Oriented
Gradients (HOG) as well as a color histogram using the hue channel in HSV
color space. We form our final feature vector for each image by concatenating
the HOG and color histogram feature vectors.
Roughly speaking, HOG should capture the texture of the image while ignoring
color information, and the color histogram represents the color of the input
image while ignoring texture. As a result, we expect that using both together
ought to work better than using either alone. Verifying this assumption would
be a good thing to try for the bonus section.
The `hog_feature` and `color_histogram_hsv` functions both operate on a single
image and return a feature vector for that image. The extract_features
function takes a set of images and a list of feature functions and evaluates
each feature function on each image, storing the results in a matrix where
each column is the concatenation of all feature vectors for a single image.
```
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
```
## Train SVM on features
Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
```
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)

# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
    idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
    idxs = np.random.choice(idxs, examples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
        plt.imshow(X_test[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls_name)
plt.show()
```
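If you want a starting point for the validation sweep above, the sketch below shows one possible shape for the TODO. It assumes the `LinearSVM` interface from earlier in the assignment (a `train` method taking `learning_rate`, `reg` and `num_iters`, and a `predict` method); treat it as an outline, not a reference solution.
```python
# Hypothetical sketch of the grid search over (learning rate, regularization strength)
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=1500)
        train_acc = np.mean(y_train == svm.predict(X_train_feats))
        val_acc = np.mean(y_val == svm.predict(X_val_feats))
        results[(lr, reg)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val = val_acc
            best_svm = svm
```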
### Inline question 1:
Describe the misclassification results that you see. Do they make sense?
## Neural Network on image features
Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels.
For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
```
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print(test_acc)
```
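Similarly, a hypothetical sketch of the neural-network search is shown below; it assumes the `TwoLayerNet.train`/`predict` interface used earlier in the assignment (with `num_iters`, `batch_size`, `learning_rate`, `learning_rate_decay` and `reg` arguments), and the specific values are placeholders to tune.
```python
# Hypothetical sketch: try a few learning rates and regularization strengths
best_val_acc = -1
for lr in [1e-1, 5e-1, 1.0]:
    for reg in [1e-4, 1e-3, 1e-2]:
        candidate = TwoLayerNet(input_dim, hidden_dim, num_classes)
        candidate.train(X_train_feats, y_train, X_val_feats, y_val,
                        num_iters=2000, batch_size=200,
                        learning_rate=lr, learning_rate_decay=0.95,
                        reg=reg, verbose=False)
        val_acc = (candidate.predict(X_val_feats) == y_val).mean()
        if val_acc > best_val_acc:
            best_val_acc, best_net = val_acc, candidate
```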
# Bonus: Design your own features!
You have seen that simple image features can improve classification performance. So far we have tried HOG and color histograms, but other types of features may be able to achieve even better classification performance.
For bonus points, design and implement a new type of feature and use it for image classification on CIFAR-10. Explain how your feature works and why you expect it to be useful for image classification. Implement it in this notebook, cross-validate any hyperparameters, and compare its performance to the HOG + Color histogram baseline.
# Bonus: Do something extra!
Use the material and code we have presented in this assignment to do something interesting. Was there another question we should have asked? Did any cool ideas pop into your head as you were working on the assignment? This is your chance to show off!
# Nuclear Fuel Cycle Overview
The nuclear fuel cycle is the technical and economic system traversed by nuclear fuel during the generation of nuclear power.
## Learning Objectives
By the end of this lesson, you should be able to:
- Categorize types of fission reactors by their fuels and coolants.
- Summarize the history and key characteristics of reactor technology generations.
- Weigh and compare advanced nuclear reactor types.
- Name fuel cycle facilities and technologies that contribute to open and closed cycles.
- Identify categories of nuclear fuel cycle strategies (open, closed, etc.)
- Associate categories of fuel cycle with nations that implement them (USA, France, etc.)
- Order the stages of such a fuel cycle from mining to disposal, including reprocessing.
- Identify the chemical and physical states of nuclear material passed between stages.
## Fission Reactor Types
Let's see what you know already.
[pollev.com/katyhuff](pollev.com/katyhuff)
```
from IPython.display import IFrame
IFrame("https://embed.polleverywhere.com/free_text_polls/YWUBNMDynR0yeiu?controls=none&short_poll=true", width="1000", height="700", frameBorder="0")
from IPython.display import IFrame
IFrame("https://embed.polleverywhere.com/free_text_polls/rhvKnG3a6nKaNdU?controls=none&short_poll=true", width="1000", height="700", frameBorder="0")
from IPython.display import IFrame
IFrame("https://embed.polleverywhere.com/free_text_polls/zdDog6JmDGOQ1hJ?controls=none&short_poll=true", width="1000", height="700", frameBorder="0")
from IPython.display import IFrame
IFrame("https://embed.polleverywhere.com/free_text_polls/YE5bPL6KecA5M3A?controls=none&short_poll=true", width="1000", height="700", frameBorder="0")
from IPython.display import IFrame
IFrame("https://embed.polleverywhere.com/free_text_polls/BLojIJiKtPULpmw?controls=none&short_poll=true", width="1000", height="700", frameBorder="0")
```
A really good summary, with images, can be found [here](https://www.theiet.org/media/1275/nuclear-reactors.pdf).
```
from IPython.display import IFrame
IFrame("https://www.theiet.org/media/1275/nuclear-reactors.pdf", width=1000, height=700)
```
What about fusion?
Fusion devices can use Tritium, Deuterium, Protium, $^3He$, $^4He$.

# Fuel Cycle Strategies
## Once through
Also known as an open fuel cycle, this is the fuel cycle currently underway in the United States. There is no reprocessing or recycling of any kind and all high level spent nuclear fuel is eventually destined for a geologic repository.
```
try:
    import graphviz
except ImportError:
    !y | conda install graphviz
    !pip install graphviz
from graphviz import Digraph
dot = Digraph(comment='The Round Table')
dot.node('A', 'Mine')
dot.node('B', 'Mill')
dot.node('C', 'Conversion')
dot.node('D', 'Enrichment')
dot.node('E', 'Fuel Fabrication')
dot.node('F', 'Reactor')
dot.node('G', 'Wet Storage')
dot.node('H', 'Dry Storage')
dot.node('I', 'Repository')
dot.edge('A', 'B', label='Natural U Ore')
dot.edge('B', 'C', label='U3O8')
dot.edge('C', 'D', label='UF6')
dot.edge('D', 'E', label='Enriched UF6')
dot.edge('E', 'F', label='Fresh Fuel')
dot.edge('F', 'G', label='Spent Fuel')
dot.edge('G', 'H', label='Cooled SNF')
dot.edge('H', 'I', label='Cooled SNF')
dot
```
## Single Pass or Multi Pass Recycle
In this strategy, reprocessing or recycling is added; the separated fission products are high-level waste and are still eventually destined for a geologic repository.
```
dot.node('Z', 'Reprocessing')
dot.edge('H', 'Z', label='Cooled SNF')
dot.edge('G', 'Z', label='Cooled SNF')
dot.edge('Z', 'E', label='Pu')
dot.edge('Z', 'E', label='U')
dot.edge('Z', 'I', label='FP')
dot
```
## Wrap-up
- Reactors of various types can be distinguished by neutron speed, coolant type, fuel type, size, and generation.
- A once through fuel cycle, like that in the US, is identifiable by the immediate storage and ultimate disposal of all spent fuel.
- Single and multi pass recycling schemes can be called "closed" fuel cycles. These involve separations and reprocessing.
## References
This section was developed to complement chapter 1 of [1]. In reference [2] you'll find a video concerning the front end of the fuel cycle that's pretty fun, 5 minutes, and actually quite accurate.
[1] N. Tsoulfanidis, The Nuclear Fuel Cycle. La Grange Park, Illinois, USA: American Nuclear Society, 2013.
[2] D. News. How Uranium Becomes Nuclear Fuel. https://www.youtube.com/watch?v=apODDbgFFPI. 2015.
# Namespace
* To turn the code you write into an executable program, Python uses an interpreter to read your code. The interpreter sorts the variables you name into different scopes, and these scopes are called namespaces.
* Every time you create a variable, the interpreter records the variable name and the memory location of the object it stores in the namespace. When a new variable name appears, the interpreter first checks whether the value it should store is already in the records; if so, the new variable simply points to that location. For example:
```python
a = 2
a = a + 1
b = 2
```
<img src="https://cdn.programiz.com/sites/tutorial2program/files/aEquals2.jpg" align="center" height=400 width=500 >
* The scopes recognized by the interpreter roughly fall into three categories:
    * Built-in: available as soon as the interpreter starts; it contains the built-in functions and the data structures introduced below.
    * Module: functions, variables, etc. added via ```import```.
    * Function: usually variables and functions defined by the user.
<img src="https://cdn.programiz.com/sites/tutorial2program/files/nested-namespaces-python.jpg" align="center" height=300 width=300>
```
a = 2
print(id(a))
b = 2
print(id(b))
```
---
# Data Structures
* A data structure is a "container" for storing data that also lets you manipulate the data efficiently.
## [Sequence](#Sequence)
> _immutable vs. mutable_
* [Lists](#Lists): mutable = contents can be changed
* [Tuples](#Tuples): immutable = contents cannot be changed
* [Range](#Range): immutable
#### [Dictionary](#Dictionary)
#### [Set](#Set)
---
## Sequence
Basic operations:
* Check whether something is in a sequence
```python
x in seq
x not in seq
```
* Join sequences end to end (concatenation)
```python
a + b # a and b must be the same kind of sequence
a * n # repeat n times
```
* Take items out of a sequence
```python
seq[i]
seq[i:j] # take items i through j-1
seq[i:j:k] # from i up to j, take every k-th item
```
* Length, max/min, number of occurrences and position of an item
```python
len(seq), max(seq), min(seq)
seq.index(x)
seq.count(x)
```
* More here: https://docs.python.org/3.6/library/stdtypes.html#typesseq-common
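A quick runnable demo of the operations above on a small list (the values are arbitrary):
```python
seq = [3, 1, 4, 1, 5, 9]

print(4 in seq)                       # True
print([0, 0] + seq)                   # concatenation: [0, 0, 3, 1, 4, 1, 5, 9]
print(seq[1:5:2])                     # every 2nd item from index 1 to 4: [1, 1]
print(len(seq), max(seq), min(seq))   # 6 9 1
print(seq.index(1), seq.count(1))     # 1 2
```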
---
### Lists
``` list = [item1, item2, item3, ...] ```
* Usually used to store a bunch of items of the same kind, similar to an array (to the computer, a row of consecutive storage slots).

| Slot 0 | Slot 1 | Slot 2 |
|:---|:----|:---|
| … | … | … |

* What it actually looks like: the computer uses an array to record the index of each item, so it can find each item's content from its index. [Image source](https://www.hackerrank.com/challenges/variable-sized-arrays/problem)
<img src='images/variable_length_array.png' align="center">

| Slot 0 | Slot 1 | Slot 2 |
|:---|:----|:---|
| note: "the item is on floor 3" | note: "nothing here" | note: "the item is in the basement" |
```
marvel_hero = ["Steve Rogers", "Tony Stark", "Thor Odinson"]
print(type(marvel_hero), marvel_hero)
marvel_hero.append("Hulk")
marvel_hero.insert(2, "Bruce Banner") # insert "Bruce Banner" into index 2
print(marvel_hero)
print(marvel_hero.pop()) # default: pop last item
marvel_hero[0] = "Captain America"
print(marvel_hero[1:-1])
```
##### List comprehension: lets you build and process a list in a single expression without writing a separate for-loop (though it takes about the same amount of time)
```
%timeit list_hero = [i.lower() if i.startswith('T') else i.upper() for i in marvel_hero]
print(list_hero)
%%timeit
list_hero = []
for i in marvel_hero:
    if i.startswith('T'):
        list_hero.append(i.lower())
    else:
        list_hero.append(i.upper())
print(list_hero)
```
##### Lists can be sorted; the time sorting takes grows with the length of the list
```
marvel_hero.sort(reverse=False) # sort in-place
marvel_hero
list_hero_sorted = sorted(list_hero) # return a sorted list
print(list_hero_sorted)
```
##### **Note! To copy a list, you cannot simply assign it to a new variable; that only gives the same list another name (see the example after the next cell)**
* This behaviour is often referred to as a shallow copy
```
a = [1, 2, 3, 4, 5]
b = a
print(id(a), id(b), id(a) == id(b))
b[0] = 8888
print(a)
```
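If you do want an independent copy, a minimal example is shown below (for flat lists; nested lists need `copy.deepcopy`):
```python
a = [1, 2, 3, 4, 5]
b = a.copy()   # or list(a), or a[:]
b[0] = 8888
print(a)       # a is unchanged: [1, 2, 3, 4, 5]
print(b)       # [8888, 2, 3, 4, 5]
```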
---
### Tuples
``` tuples = item1, item2, item3, ...```
* Usually used to store related pieces of data of different kinds.
* The ',' is what makes a tuple; the () is usually there to distinguish it from a function call.
For example:
```python
def f(a, b=0):
    return a[0] * a[1] + b
f((87, 2)) # the tuple (87, 2) is passed as a, so this returns 87*2 + 0
```
```
love_iron = ("Iron Man", 3000)
cap = "Captain America",
print(type(love_iron), love_iron)
print(love_iron + cap)
print("Does {} in the \"love_iron\" tuples?: {}".format("Iron", 'Iron' in love_iron))
print("Length of cap: {}".format(len(cap)))
max(love_iron)
```
* ```enumerate()``` in a for-loop yields (i, the i-th item) one pair at a time, so you don't have to keep a separate counter of which item you are on
```
for e in enumerate(love_iron + cap):
    print(e, type(e))
```
---
### Range
* Produces a sequence of **integers**, usually used in a for-loop to count iterations or as indices.
* To produce a sequence of floats, use numpy.arange() instead (see the example after the next cell).
```range(start, stop[, step])```
```
even_number = [x for x in range(0, 30, 2)]
for i in range(2, 10, 2):
    print("The {}th even number is {}".format(i, even_number[i-1]))
```
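A minimal example of the float version mentioned above, using numpy.arange:
```python
import numpy as np
print(np.arange(0.0, 1.0, 0.25))  # [0.   0.25 0.5  0.75]
```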
---
## Dictionary
``` {key1:value1, key2:value2, key3:value3, ...}```
* Used to store data with a key-to-value mapping.
* ```key```s cannot repeat, and they must be hashable
    * Two conditions:
        1. the value does not change after creation (immutable)
        2. it can be compared with other objects for equality
* What it actually looks like: a hash table
    * The computer runs each key through a function called a hash to encode it as a fixed-length number, then uses that number as the index of the value.
    * Ideally no two of these numbers collide, which keeps the average lookup time independent of how many items are stored.
    * [Image source](https://en.wikipedia.org/wiki/Hash_table)
<img src='images/hash-table.png' height=600 width=400 align="center">
```
hero_id = {"Steve Rogers": 1,
           "Tony Stark": 666,
           "Thor Odinson": 999
           }
hero_code = dict(zip(hero_id.keys(), ["Captain America", "God of Thunder", "Iron Man"]))
print(type(hero_code), hero_code)

# dict[key]: returns the corresponding value; raises a KeyError if key not in dict
# dict.get(key, default=None): returns the corresponding value; returns default if key not in dict
hero_name = "Steve Rogers"
print("The codename of hero_id {} is {}".format(hero_id.get(hero_name), hero_code[hero_name]))

hero_id.update({"Bruce Banner": 87})
print(hero_id)
```
##### Dictionary View
* Used to look at what is currently in a dict; you can iterate over them in a for-loop one item at a time:
    * dict.keys() yields the keys
    * dict.values() yields the values
    * dict.items() yields (key, value) tuples
> **Note! The output order does not necessarily reflect the order in which items were added to the dictionary!**
> The correspondence between keys and values stays consistent, though.
* If you want a fixed output order, use a list or [collections.OrderedDict](https://docs.python.org/3.6/library/collections.html#collections.OrderedDict) (a small example follows the next cell).
```
print(hero_id.keys())
print(hero_id.values())
print(hero_id.items())

for name, code in hero_code.items():
    print("{} is {}".format(name, code))
```
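A minimal example of the collections.OrderedDict mentioned above; it remembers insertion order:
```python
from collections import OrderedDict

ordered = OrderedDict()
ordered["first"] = 1
ordered["second"] = 2
ordered["third"] = 3
print(list(ordered.keys()))  # always in insertion order: ['first', 'second', 'third']
```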
---
## Set
``` set = {item1, item2, item3, ...}```
* Used to store data without duplicates; if you add a duplicate item, only one copy is kept.
* A set can be modified; a frozenset cannot (immutable).
* The supported operations closely mirror mathematical sets (see the demo after the next cell): ([image source](https://www.learnbyexample.org/python-set/))
    * union (```A | B```)
    * intersection (```A & B```)
    * difference (```A - B```)
    * symmetric difference (```A ^ B```)
    * subset (```A < B```)
    * super-set (```A > B```)
<img src='images/set.png' height=600 width=400 align="center">
```
set_example = {"o", 7, 7, 7, 7, 7, 7, 7}
print(type(set_example), set_example)
A = set("Shin Shin ba fei jai")
B = set("Ni may yo may may")
print(A)
print(B)
A ^ B
# lists are not hashable, so this raises a TypeError
a = set([[1],2,3,3,3,3])
a
```
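A quick runnable demo of the remaining operators listed above, reusing the A and B sets from the previous cell:
```python
print(A | B)            # union
print(A & B)            # intersection
print(A - B)            # difference
print(set("may") < B)   # subset check: True, since 'm', 'a' and 'y' are all in B
```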
-----
# Numpy Array
* For data analysis, the most common form of data is the matrix (array), and Python has a package called numpy for working with it.
* This package lets us handle matrices faster and more conveniently.
<img src="https://numpy.org/_static/numpy_logo.png" align="center">
* Full documentation: https://docs.scipy.org/doc/
* Quick tutorial: http://cs231n.github.io/python-numpy-tutorial/#numpy-arrays
```bash
# Installing it takes just one step:
pip3 install numpy
```
```
import numpy as np
b = np.array([[1,2,3],[4,5,6]]) # 2D array
print(b)
print(b.shape, b.dtype)
print(b[0, 0], b[0, 1], b[1, 0]) # array[row_index, col_index]
```
##### There are many convenience functions for creating arrays
* For example all zeros, all ones, the identity matrix, and so on; more functions are listed [here](https://docs.scipy.org/doc/numpy/reference/routines.array-creation.html#routines-array-creation).
```
z = np.zeros((8,7))
z
```
##### Taking rows or columns out of an array
* You can use slice syntax such as ```1:5```, just like with lists.
* You can also use boolean masks to select a subset of the values.
```
from os.path import join

yeee = np.fromfile(join("data", "numpy_sample.txt"), sep=' ')
yeee = yeee.reshape((4,4))
print(yeee)
print(yeee[:2,0]) # take the first two rows of the first column
print(yeee[-1,:]) # take the last row
yeee[yeee > 6]
```
##### Matrix operations
* ``` + - * / ``` are all element-wise, i.e. each value in the matrix is operated on independently.
* For matrix multiplication, use ```dot``` (see the comparison after the next cell).
* More math operations are listed [here](https://docs.scipy.org/doc/numpy/reference/routines.math.html).
```
x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])
print(x.dot(y) == np.dot(x, y))
```
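A small demo of the difference between element-wise `*` and matrix multiplication with `dot`, reusing x and y from the cell above:
```python
print(x * y)      # element-wise: [[ 5 12] [21 32]]
print(x.dot(y))   # matrix product: [[19 22] [43 50]]
```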
##### Broadcasting
* In numpy, when we operate on arrays of different shapes, the operation is applied along the dimensions where the shapes match, and the smaller array is broadcast across the larger one.
```
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = x + v
print(y) # v is added to each row of x
```
##### If you want two arrays with the same values, watch out for the shallow-copy problem.
* What ```x[:]``` gives you is a View of x; in other words, what you see is still x's data, and modifying the View modifies x.
* To make a genuine copy, use ```.copy()```.
```
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
shallow = x[:]
deep = x.copy()
shallow[0, 0] = 9487
deep[0, 0] = 5566
print(x)
```
----
# Files
* Now that you know how to hold data inside a program, you also need to know how to save it to and read it from files.
* Every time you open a file, you need to tell the computer which mode to use:

| Access mode | access_flag | detail |
|:------------|:-----|:-------|
| Read only | r | read from the beginning of the file |
| Write only | w | writing starts from the beginning of the file; any existing content is overwritten |
| Append only | a | whatever is written is appended to the end of the file |
| Enhance | + | read+write, or read+append |

* f is the handler responsible for that file; you operate on the file through f.
* f keeps track of where in the file you are currently reading.
```python
f = open(filepath, access_flag)
f.close()
```
```
from os.path import join

with open(join("data", "heyhey.txt"), 'r', encoding='utf-8') as f:
    print(f.read() + '\n')
    print(f.readlines())

    f.seek(0) # go back to the beginning of the file
    readlines = f.readlines(1)
    print(readlines, type(readlines)) # reads line by line, using '\n' as the separator
    single_line = f.readline()
    print(single_line, type(single_line)) # reads only one line at a time
    print()

    # you can also read line by line inside a for-loop
    for i, line in enumerate(f):
        print("Line: {}: {}".format(i, line))
        break

with open(join("data", "test.txt"), 'w+', encoding='utf-8') as f:
    f.write("Shin Shin ba fei jai")
    f.seek(0)
    print(f.read())
```
----
### Reference
* If you want more Python tutorials, there are plenty of detailed resources online.
* Some Python learning resources:
    * https://www.programiz.com/python-programming
    * https://www.tutorialspoint.com/python3
    * Lots of examples: https://www.learnbyexample.org/python-introduction/
    * Official documentation: https://docs.python.org/3.6/library/
    * Detailed Python design FAQ: https://docs.python.org/3.6/faq/design.html
    * Build good coding habits: https://www.python.org/dev/peps/pep-0008/
```
import os, gc
import pygrib
import numpy as np
import pandas as pd
import xarray as xr
import multiprocessing as mp
from glob import glob
from functools import partial
from datetime import datetime, timedelta
os.environ['OMP_NUM_THREADS'] = '1'
nbm_dir = '/scratch/general/lustre/u1070830/nbm/'
urma_dir = '/scratch/general/lustre/u1070830/urma/'
tmp_dir = '/scratch/general/lustre/u1070830/tmp/'
os.makedirs(tmp_dir, exist_ok=True)
nbm_shape = (1051, 1132)
def unpack_fhr(nbm_file, xthreshold, xinterval, returned=False):
    # WE NEED TO MATCH URMA IN HERE IF WE CAN!
    try:
        with pygrib.open(nbm_file) as grb:
            msgs = grb.read()
            if len(msgs) > 0:
                _init = nbm_file.split('/')[-2:]
                init = datetime.strptime(
                    _init[0] + _init[1].split('.')[1][1:-1],
                    '%Y%m%d%H')
                if init.hour % 6 != 0:
                    init -= timedelta(hours=1)

                lats, lons = grb.message(1).latlons()
                valid = datetime.strptime(
                    str(msgs[0].validityDate) + '%02d'%msgs[0].validityTime,
                    '%Y%m%d%H%M')

                step = valid - init
                lead = int(step.days*24 + step.seconds/3600)

                tmpfile = tmp_dir + '%02dprobX%s_%s_f%03d'%(xinterval, str(xthreshold).replace('.', 'p'), init.strftime('%Y%m%d%H'), lead)

                if not os.path.isfile(tmpfile + '.npy'):
                    print(nbm_file.split('/')[-2:])
                    for msg in msgs:
                        if 'Probability of event above upper limit' in str(msg):
                            interval = msg['stepRange'].split('-')
                            interval = int(interval[1]) - int(interval[0])
                            threshold = msg.upperLimit
                            if ((threshold == xthreshold)&(interval == xinterval)):
                                returned = True
                                agg_data = np.array((init, valid, lead, msg.values), dtype=object)
                                np.save(tmpfile, agg_data, allow_pickle=True)
                                return agg_data

                    if not returned:
                        agg_data = np.array((init, valid, lead, np.full(nbm_shape, fill_value=np.nan)), dtype=object)
                        np.save(tmpfile, agg_data, allow_pickle=True)
                        return agg_data

                else:
                    print(nbm_file.split('/')[-2:], 'from file')
                    return np.load(tmpfile + '.npy', allow_pickle=True)

            else:
                print('%s: No grib messages'%nbm_file.split('/')[-2:])

    except:
        pass

    gc.collect()
# Pass data label to the extractor to pull out the variable we care about
# Do these one at a time and save out the xarray to netcdf to compare w/ URMA
extract_threshold = 0.254
extract_interval = 24
data_label = 'probx_%s_%02dh'%(str(extract_threshold).replace('.', 'p'), extract_interval)
# Build a list of inits
inits = pd.date_range(
datetime(2020, 6, 1, 0),
datetime(2020, 6, 10, 23),
freq='6H')
outfile = '../scripts/' + data_label + '.%s_%s.WR.nc'%(
inits[0].strftime('%Y%m%d%H'),
inits[-1].strftime('%Y%m%d%H'))
if os.path.isfile(outfile):
    os.remove(outfile)  # force the dataset to be rebuilt

if not os.path.isfile(outfile):
    nbm_flist_agg = []
    for init in inits:
        try:
            nbm_flist = sorted(glob(nbm_dir + init.strftime('%Y%m%d') + '/*t%02dz*'%init.hour))
            nbm_flist[0]
        except:
            nbm_flist = sorted(glob(nbm_dir + init.strftime('%Y%m%d') + '/*t%02dz*'%(init+timedelta(hours=1)).hour))

        nbm_flist = [f for f in nbm_flist if 'idx' not in f]
        if len(nbm_flist) > 0:
            nbm_flist_agg.append(nbm_flist)

    nbm_flist_agg = np.hstack(nbm_flist_agg)

    with pygrib.open(nbm_flist_agg[0]) as sample:
        lat, lon = sample.message(1).latlons()

    unpack_fhr_mp = partial(unpack_fhr, xinterval=extract_interval, xthreshold=extract_threshold)

    # 128 workers ~ 1.2GB RAM/worker
    workers = 128
    with mp.get_context('fork').Pool(workers) as p:
        returns = p.map(unpack_fhr_mp, nbm_flist_agg, chunksize=1)
        p.close()
        p.join()

    returns = np.array([r for r in returns if r is not None], dtype=object)
    init = returns[:, 0].astype(np.datetime64)
    valid = returns[:, 1].astype(np.datetime64).reshape(len(np.unique(init)), -1)
    lead = returns[:, 2].astype(np.int16).reshape(len(np.unique(init)), -1)
    data = np.array([r for r in returns[:, 3]], dtype=np.int8).reshape(len(np.unique(init)), -1, nbm_shape[0], nbm_shape[1])

    valid = xr.DataArray(valid, name='valid', dims=('init', 'lead'), coords={'init':np.unique(init), 'lead':np.unique(lead)})
    data = xr.DataArray(data, name=data_label, dims=('init', 'lead', 'y', 'x'), coords={'init':np.unique(init), 'lead':np.unique(lead)})

    data = xr.merge([data, valid])
    data['lat'] = xr.DataArray(lat, dims=('y', 'x'))
    data['lon'] = xr.DataArray(lon, dims=('y', 'x'))
    data.set_coords(['lat', 'lon'])

    data.to_netcdf(outfile)

else:
    data = xr.open_dataset(outfile)
data
valid_unique = np.unique([pd.to_datetime(t).strftime('%Y%m%d%H') for t in data['valid'].values])
urma_flist = np.hstack([[f for f in glob(urma_dir + '*%s*.WR.grib2'%v) if 'idx' not in f] for v in valid_unique])
print(urma_flist[:5])
def open_dataset(f, cfengine='pynio'):
    ds = xr.open_dataset(f, engine=cfengine)
    ds['valid'] = datetime.strptime(f.split('/')[-1].split('.')[1], '%Y%m%d%H')
    return ds

with mp.get_context('fork').Pool(int(len(urma_flist)/2)) as p:
    urma = p.map(open_dataset, urma_flist)
    p.close()
    p.join()
urma = xr.concat(urma, dim='valid').rename({'APCP_P8_L1_GLC0_acc':'apcp6h',
'xgrid_0':'x', 'ygrid_0':'y',
'gridlat_0':'lat', 'gridlon_0':'lon'})
urma
```
## Plotting very large datasets meaningfully, using `datashader`
There are a variety of approaches for plotting large datasets, but most of them are very unsatisfactory. Here we first show some of the issues, then demonstrate how the `datashader` library helps make large datasets truly practical.
We'll use part of the well-studied [NYC Taxi trip database](http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml), with the locations of all NYC taxi pickups and dropoffs from the month of January 2015. Although we know what the data is, let's approach it as if we are doing data mining, and see what it takes to understand the dataset from scratch.
### Load NYC Taxi data
(takes 10-20 seconds, since it's in the inefficient but widely supported CSV file format...)
```
import pandas as pd
%time df = pd.read_csv('../data/nyc_taxi.csv',usecols= \
['pickup_x', 'pickup_y', 'dropoff_x','dropoff_y', 'passenger_count','tpep_pickup_datetime'])
df.tail()
```
As you can see, this file contains about 12 million pickup and dropoff locations (in Web Mercator coordinates), with passenger counts.
### Define a simple plot
```
from bokeh.models import BoxZoomTool
from bokeh.plotting import figure, output_notebook, show
output_notebook()
NYC = x_range, y_range = ((-8242000,-8210000), (4965000,4990000))
plot_width = int(750)
plot_height = int(plot_width//1.2)
def base_plot(tools='pan,wheel_zoom,reset',plot_width=plot_width, plot_height=plot_height, **plot_args):
    p = figure(tools=tools, plot_width=plot_width, plot_height=plot_height,
               x_range=x_range, y_range=y_range, outline_line_color=None,
               min_border=0, min_border_left=0, min_border_right=0,
               min_border_top=0, min_border_bottom=0, **plot_args)
    p.axis.visible = False
    p.xgrid.grid_line_color = None
    p.ygrid.grid_line_color = None
    p.add_tools(BoxZoomTool(match_aspect=True))
    return p
options = dict(line_color=None, fill_color='blue', size=5)
```
### 1000-point scatterplot: undersampling
Any plotting program should be able to handle a plot of 1000 datapoints. Here the points are initially overplotting each other, but if you hit the Reset button (top right of plot) to zoom in a bit, nearly all of them should be clearly visible in the following Bokeh plot of a random 1000-point sample. If you know what to look for, you can even see the outline of Manhattan Island and Central Park from the pattern of dots. We've included geographic map data here to help get you situated, though for a genuine data mining task in an abstract data space you might not have any such landmarks. In any case, because this plot is discarding 99.99% of the data, it reveals very little of what might be contained in the dataset, a problem called *undersampling*.
```
%%time
from bokeh.tile_providers import STAMEN_TERRAIN
samples = df.sample(n=1000)
p = base_plot()
p.add_tile(STAMEN_TERRAIN)
p.circle(x=samples['dropoff_x'], y=samples['dropoff_y'], **options)
show(p)
```
### 10,000-point scatterplot: overplotting
We can of course plot more points to reduce the amount of undersampling. However, even if we only try to plot 0.1% of the data, ignoring the other 99.9%, we will find major problems with *overplotting*, such that the true density of dropoffs in central Manhattan is impossible to see due to occlusion:
```
%%time
samples = df.sample(n=10000)
p = base_plot()
p.circle(x=samples['dropoff_x'], y=samples['dropoff_y'], **options)
show(p)
```
Overplotting is reduced if you zoom in on a particular region (may need to click to enable the wheel-zoom tool in the upper right of the plot first, then use the scroll wheel). However, then the problem switches back to serious undersampling, as the too-sparsely sampled datapoints get revealed for zoomed-in regions, even though much more data is available.
### 100,000-point scatterplot: saturation
If you make the dot size smaller, you can reduce the overplotting that occurs when you try to combat undersampling. Even so, with enough opaque data points, overplotting will be unavoidable in popular dropoff locations. So you can then adjust the alpha (opacity) parameter of most plotting programs, so that multiple points need to overlap before full color saturation is achieved. With enough data, such a plot can approximate the probability density function for dropoffs, showing where dropoffs were most common:
```python
%%time
options = dict(line_color=None, fill_color='blue', size=1, alpha=0.1)
samples = df.sample(n=100000)
p = base_plot(webgl=True)
p.circle(x=samples['dropoff_x'], y=samples['dropoff_y'], **options)
show(p)
```
<img src="../assets/images/nyc_taxi_100k.png">
[*Here we've shown static output as a PNG rather than a live Bokeh plot, to reduce the file size for distributing full notebooks and because some browsers will have trouble with plots this large. The above cell can be converted into code and executed to get the full interactive plot.*]
However, it's very tricky to set the size and alpha parameters. How do we know if certain regions are saturating, unable to show peaks in dropoff density? Here we've manually set the alpha to show a clear structure of streets and blocks, as one would intuitively expect to see, but the density of dropoffs still seems approximately the same on nearly all Manhattan streets (just wider in some locations), which is unlikely to be true. We can of course reduce the alpha value to reduce saturation further, but there's no way to tell when it's been set correctly, and it's already low enough that nothing other than Manhattan and La Guardia is showing up at all. Plus, this alpha value will only work even reasonably well at the one zoom level shown. Try zooming in (may need to enable the wheel zoom tool in the upper right) to see that at higher zooms, there is less overlap between dropoff locations, so that the points *all* start to become transparent due to lack of overlap. Yet without setting the size and alpha to a low value in the first place, the structure is invisible when zoomed out, due to overplotting. Thus even though Bokeh provides rich support for interactively revealing structure by zooming, it is of limited utility for large data; either the data is invisible when zoomed in, or there's no large-scale structure when zoomed out, which is necessary to indicate where zooming would be informative.
Moreover, we're still ignoring 99% of the data. Many plotting programs will have trouble with plots even this large, but Bokeh can handle 100-200,000 points in most browsers. Here we've enabled Bokeh's WebGL support, which gives smoother zooming behavior, but the non-WebGL mode also works well. Still, for such large sizes the plots become slow due to the large HTML file sizes involved, because each of the data points are encoded as text in the web page, and for even larger samples the browser will fail to render the page at all.
### 10-million-point datashaded plots: auto-ranging, but limited dynamic range
To let us work with truly large datasets without discarding most of the data, we can take an entirely different approach. Instead of using a Bokeh scatterplot, which encodes every point into JSON and stores it in the HTML file read by the browser, we can use the [datashader](https://github.com/bokeh/datashader) library to render the entire dataset into a pixel buffer in a separate Python process, and then provide a fixed-size image to the browser containing only the data currently visible. This approach decouples the data processing from the visualization. The data processing is then limited only by the computational power available, while the visualization has much more stringent constraints determined by your display device (a web browser and your particular monitor, in this case). This approach works particularly well when your data is in a far-off server, but it is also useful whenever your dataset is larger than your display device can render easily.
Because the number of points involved is no longer a limiting factor, you can now use the entire dataset (including the full 150 million trips that have been made public, if you download that data separately). Most importantly, because datashader allows computation on the intermediate stages of plotting, you can easily define operations like auto-ranging (which is on by default), so that we can be sure there is no overplotting or saturation and no need to set parameters like alpha.
The steps involved in datashading are (1) create a Canvas object with the shape of the eventual plot (i.e. having one storage bin for collecting points, per final pixel), (2) aggregating all points into that set of bins, incrementally counting them, and (3) mapping the resulting counts into a visible color from a specified range to make an image:
```
import datashader as ds
from datashader import transfer_functions as tf
from datashader.colors import Greys9
Greys9_r = list(reversed(Greys9))[:-2]
%%time
cvs = ds.Canvas(plot_width=plot_width, plot_height=plot_height, x_range=x_range, y_range=y_range)
agg = cvs.points(df, 'dropoff_x', 'dropoff_y', ds.count('passenger_count'))
img = tf.shade(agg, cmap=["white", 'darkblue'], how='linear')
```
The resulting image is similar to the 100,000-point Bokeh plot above, but (a) makes use of all 12 million datapoints, (b) is computed in only a tiny fraction of the time, (c) does not require any magic-number parameters like size and alpha, and (d) automatically ensures that there is no saturation or overplotting:
```
img
```
This plot renders the count at every pixel as a color from the specified range (here from white to dark blue), mapped linearly. If your display device were linear, and the data were distributed evenly across this color range, then the result of such linear, auto-ranged processing would be an effective, parameter-free way to visualize your dataset.
However, real display devices are not typically linear, and more importantly, real data is rarely distributed evenly. Here, it is clear that there are "hotspots" in dropoffs, with a very high count for areas around Penn Station and Madison Square Garden, relatively low counts for the rest of Manhattan's streets, and apparently no dropoffs anywhere else but La Guardia airport. NYC taxis definitely cover a larger geographic range than this, so what is the problem? To see, let's look at the histogram of counts for the above image:
```
import numpy as np
def histogram(x,colors=None):
    hist,edges = np.histogram(x, bins=100)
    p = figure(y_axis_label="Pixels",
               tools='', height=130, outline_line_color=None,
               min_border=0, min_border_left=0, min_border_right=0,
               min_border_top=0, min_border_bottom=0)
    p.quad(top=hist[1:], bottom=0, left=edges[1:-1], right=edges[2:])
    print("min: {}, max: {}".format(np.min(x),np.max(x)))
    show(p)
histogram(agg.values)
```
Clearly, most of the pixels have very low counts (under 3000), while a very few pixels have much larger counts (up to 22000, in this case). When these values are mapped into colors for display, nearly all of the pixels will end up being colored with the lowest colors in the range, i.e. white or nearly white, while the other colors in the available range will be used for only a few dozen pixels at most. Thus most of the pixels in this plot convey very little information about the data, wasting nearly all of dynamic range available on your display device. It's thus very likely that we are missing a lot of the structure in this data that we could be seeing.
### 10-million-point datashaded plots: high dynamic range
For the typical case of data that is distributed nonlinearly over the available range, we can use nonlinear scaling to map the data range into the visible color range. E.g. first transforming the values via a log function will help flatten out this histogram and reveal much more of the structure of this data:
```
histogram(np.log1p(agg.values))
tf.shade(agg, cmap=Greys9_r, how='log')
```
We can now see that there is rich structure throughout this dataset -- geographic features like streets and buildings are clearly modulating the values in both the high-dropoff regions in Manhattan and the relatively low-dropoff regions in the surrounding areas. Still, this choice is arbitrary -- why the log function in particular? It clearly flattened the histogram somewhat, but it was just a guess. We can instead explicitly equalize the histogram of the data before building the image, making structure visible at every data level (and thus at all the geographic locations covered) in a general way:
```
histogram(tf.eq_hist(agg.values))
tf.shade(agg, cmap=Greys9_r, how='eq_hist')
```
The histogram is now fully flat (apart from the spacing of bins caused by the discrete nature of integer counting). Effectively, the visualization now shows a rank-order or percentile distribution of the data. I.e., pixels are now colored according to where their corresponding counts fall in the distribution of all counts, with one end of the color range for the lowest counts, one end for the highest ones, and every colormap step in between having similar numbers of counts. Such a visualization preserves the ordering between count values, faithfully displaying local differences in these counts, but discards absolute magnitudes (as the top 1% of the color range will be used for the top 1% of the data values, whatever those may be).
Now that the data is visible at every level, we can immediately see that there are some clear problems with the quality of the data -- there is a surprising number of trips that claim to drop off in the water or in the roadless areas of Central park, as well as in the middle of most of the tallest buildings in central Manhattan. These locations are likely to be GPS errors being made visible, perhaps partly because of poor GPS performance in between the tallest buildings.
Histogram equalization does not require any magic parameters, and in theory it should convey the maximum information available about the relative values between pixels, by mapping each of the observed ranges of values into visibly discriminable colors. And it's clearly a good start in practice, because it shows both low values (avoiding undersaturation) and relatively high values clearly, without arbitrary settings.
Even so, the results will depend on the nonlinearities of your visual system, your specific display device, and any automatic compensation or calibration being applied to your display device. Thus in practice, the resulting range of colors may not map directly into a linearly perceivable range for your particular setup, and so you may want to further adjust the values to more accurately reflect the underlying structure, by adding additional calibration or compensation steps.
Moreover, at this point you can now bring in your human-centered goals for the visualization -- once the overall structure has been clearly revealed, you can select specific aspects of the data to highlight or bring out, based on your own questions about the data. These questions can be expressed at whatever level of the pipeline is most appropriate, as shown in the examples below. For instance, histogram equalization was done on the counts in the aggregate array, because if we waited until the image had been created, we would have been working with data truncated to the 256 color levels available per channel in most display devices, greatly reducing precision. Or you may want to focus specifically on the highest peaks (as shown below), which again should be done at the aggregate level so that you can use the full color range of your display device to represent the narrow range of data that you are interested in. Throughout, the goal is to map from the data of interest into the visible, clearly perceptible range available on your display device.
### 10-million-point datashaded plots: interactive
Although the above plots reveal the entire dataset at once, the full power of datashading requires an interactive plot, because a big dataset will usually have structure at very many different levels (such as different geographic regions). Datashading allows auto-ranging and other automatic operations to be recomputed dynamically for the specific selected viewport, automatically revealing local structure that may not be visible from a global view. Here we'll embed the generated images into a Bokeh plot to support fully interactive zooming. For the highest detail on large monitors, you should increase the plot width and height above.
```
import datashader as ds
from datashader.bokeh_ext import InteractiveImage
from functools import partial
from datashader.utils import export_image
from datashader.colors import colormap_select, Greys9, Hot, inferno
background = "black"
export = partial(export_image, export_path="export", background=background)
cm = partial(colormap_select, reverse=(background=="black"))
def create_image(x_range, y_range, w=plot_width, h=plot_height):
    cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
    agg = cvs.points(df, 'dropoff_x', 'dropoff_y', ds.count('passenger_count'))
    img = tf.shade(agg, cmap=Hot, how='eq_hist')
    return tf.dynspread(img, threshold=0.5, max_px=4)
p = base_plot(background_fill_color=background)
export(create_image(*NYC),"NYCT_hot")
InteractiveImage(p, create_image)
```
You can now zoom in interactively to this plot, seeing all the points available in that viewport, without ever needing to change the plot parameters for that specific zoom level. Each time you zoom or pan, a new image is rendered (which takes a few seconds for large datasets), and displayed overlaid on any other plot elements, providing full access to all of your data. Here we've used the optional `tf.dynspread` function to automatically enlarge the size of each datapoint once you've zoomed in so far that datapoints no longer have nearby neighbors.
### Customizing datashader
One of the most important features of datashading is that each of the stages of the datashader pipeline can be modified or replaced, either for personal preferences or to highlight specific aspects of the data. Here we'll use a high-level `Pipeline` object that encapsulates the typical series of steps in the above `create_image` function, and then we'll customize it. The default values of this pipeline are the same as the plot above, but here we'll add a special colormap to make the values stand out against an underlying map, and only plot hotspots (defined here as pixels (aggregation bins) that are in the 90th percentile by count):
```
import numpy as np
from functools import partial
def create_image90(x_range, y_range, w=plot_width, h=plot_height):
    cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
    agg = cvs.points(df, 'dropoff_x', 'dropoff_y', ds.count('passenger_count'))
    img = tf.shade(agg.where(agg>np.percentile(agg,90)), cmap=inferno, how='eq_hist')
    return tf.dynspread(img, threshold=0.3, max_px=4)
p = base_plot()
p.add_tile(STAMEN_TERRAIN)
export(create_image(*NYC),"NYCT_90th")
InteractiveImage(p, create_image90)
```
If you zoom in to the plot above, you can see that the 90th-percentile criterion at first highlights the most active areas in the entire dataset, and then highlights the most active areas in each subsequent viewport. Here yellow has been chosen to highlight the strongest peaks, and if you zoom in on one of those peaks you can see the most active areas in that particular geographic region, according to this dynamically evaluated definition of "most active".
The above plots each followed a roughly standard series of steps useful for many datasets, but you can instead fully customize the computations involved. This capability lets you do novel operations on the data once it has been aggregated into pixel-shaped bins. For instance, you might want to plot all the pixels where there were more dropoffs than pickups in blue, and all those where there were more pickups than dropoffs in red. To do this, just write your own function that will create an image, when given x and y ranges, a resolution (w x h), and any optional arguments needed. You can then either call the function yourself, or pass it to `InteractiveImage` to make an interactive Bokeh plot:
```
def merged_images(x_range, y_range, w=plot_width, h=plot_height, how='log'):
    cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
    picks = cvs.points(df, 'pickup_x', 'pickup_y', ds.count('passenger_count'))
    drops = cvs.points(df, 'dropoff_x', 'dropoff_y', ds.count('passenger_count'))
    drops = drops.rename({'dropoff_x': 'x', 'dropoff_y': 'y'})
    picks = picks.rename({'pickup_x': 'x', 'pickup_y': 'y'})
    more_drops = tf.shade(drops.where(drops > picks), cmap=["darkblue", 'cornflowerblue'], how=how)
    more_picks = tf.shade(picks.where(picks > drops), cmap=["darkred", 'orangered'], how=how)
    img = tf.stack(more_picks, more_drops)
    return tf.dynspread(img, threshold=0.3, max_px=4)
p = base_plot(background_fill_color=background)
export(merged_images(*NYC),"NYCT_pickups_vs_dropoffs")
InteractiveImage(p, merged_images)
```
Now you can see that pickups are more common on major roads, as you'd expect, and dropoffs are more common on side streets. In Manhattan, roads running along the island are more common for pickups. If you zoom in to any location, the data will be re-aggregated to the new resolution automatically, again calculating for each newly defined pixel whether pickups or dropoffs were more likely in that pixel. The interactive features of Bokeh are now fully usable with this large dataset, allowing you to uncover new structure at every level.
We can also use other columns in the dataset as additional dimensions in the plot. For instance, if we want to see if certain areas are more likely to have pickups at certain hours (e.g. areas with bars and restaurants might have pickups in the evening, while apartment buildings may have pickups in the morning). One way to do this is to use the hour of the day as a category, and then colorize each hour:
```
df['hour'] = pd.to_datetime(df['tpep_pickup_datetime']).dt.hour.astype('category')
colors = ["#FF0000","#FF3F00","#FF7F00","#FFBF00","#FFFF00","#BFFF00","#7FFF00","#3FFF00",
"#00FF00","#00FF3F","#00FF7F","#00FFBF","#00FFFF","#00BFFF","#007FFF","#003FFF",
"#0000FF","#3F00FF","#7F00FF","#BF00FF","#FF00FF","#FF00BF","#FF007F","#FF003F",]
def colorized_images(x_range, y_range, w=plot_width, h=plot_height, dataset="pickup"):
cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
agg = cvs.points(df, dataset+'_x', dataset+'_y', ds.count_cat('hour'))
img = tf.shade(agg, color_key=colors)
return tf.dynspread(img, threshold=0.3, max_px=4)
p = base_plot(background_fill_color=background)
#p.add_tile(STAMEN_TERRAIN)
export(colorized_images(*NYC, dataset="pickup"),"NYCT_pickup_times")
InteractiveImage(p, colorized_images, dataset="pickup")
export(colorized_images(*NYC, dataset="dropoff"),"NYCT_dropoff_times")
p = base_plot(background_fill_color=background)
InteractiveImage(p, colorized_images, dataset="dropoff")
```
Here the order of colors is roughly red (midnight), yellow (4am), green (8am), cyan (noon), blue (4pm), purple (8pm), and back to red (since hours and colors are both cyclic). There are clearly hotspots by hour that can now be investigated, and perhaps compared with the underlying map data. And you can try first filtering the dataframe to only have weekdays or weekends, or only during certain public events, etc., or filtering the resulting pixels to have only those in a certain range of interest. The system is very flexible, and it should be straightforward to express a very large range of possible queries and visualizations with very little code.
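For example, a minimal sketch of the weekday/weekend filtering mentioned above, using the `tpep_pickup_datetime` column from the cells above (the `df_weekday`/`df_weekend` names are just illustrative); the filtered frame can then be passed to `cvs.points` in place of `df`:
```
pickup_times = pd.to_datetime(df['tpep_pickup_datetime'])
df_weekday = df[pickup_times.dt.dayofweek < 5]   # Monday=0 ... Friday=4
df_weekend = df[pickup_times.dt.dayofweek >= 5]  # Saturday=5, Sunday=6
```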
The above examples each used pre-existing components provided for the datashader pipeline, but you can implement any components you like and substitute them, allowing you to easily explore and highlight specific aspects of your data. Have fun datashading!
| github_jupyter |
```
# nuclio: ignore
import nuclio
%nuclio config kind = "job"
%nuclio config spec.image = "mlrun/ml-models"
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from mlrun.execution import MLClientCtx
from mlrun.datastore import DataItem
from mlrun.artifacts import PlotArtifact, TableArtifact
from mlrun.mlutils import gcf_clear
from typing import List
pd.set_option("display.float_format", lambda x: "%.2f" % x)
def summarize(
context: MLClientCtx,
table: DataItem,
label_column: str = None,
class_labels: List[str] = [],
plot_hist: bool = True,
plots_dest: str = "plots",
update_dataset = False,
) -> None:
"""Summarize a table
:param context: the function context
:param table: MLRun input pointing to pandas dataframe (csv/parquet file path)
:param label_column: ground truth column label
:param class_labels: label for each class in tables and plots
:param plot_hist: (True) set this to False for large tables
:param plots_dest: destination folder of summary plots (relative to artifact_path)
:param update_dataset: when the table is a registered dataset update the charts in-place
"""
df = table.as_df()
header = df.columns.values
extra_data = {}
try:
gcf_clear(plt)
snsplt = sns.pairplot(df, hue=label_column)#, diag_kws={"bw": 1.5})
extra_data["histograms"] = context.log_artifact(PlotArtifact("histograms", body=plt.gcf()),
local_path=f"{plots_dest}/hist.html", db_key=False)
except Exception as e:
context.logger.error(f'Failed to create pairplot histograms due to: {e}')
try:
gcf_clear(plt)
plot_cols = 3
plot_rows = int((len(header) - 1) / plot_cols)+1
fig, ax = plt.subplots(plot_rows, plot_cols, figsize=(15, 4))
fig.tight_layout(pad=2.0)
for i in range(plot_rows * plot_cols):
if i < len(header):
sns.violinplot(x=df[header[i]], ax=ax[int(i / plot_cols)][i % plot_cols],
orient='h', width=0.7, inner="quartile")
else:
fig.delaxes(ax[int(i / plot_cols)][i % plot_cols])
i+=1
extra_data["violin"] = context.log_artifact(PlotArtifact("violin", body=plt.gcf(), title='Violin Plot'),
local_path=f"{plots_dest}/violin.html", db_key=False)
except Exception as e:
context.logger.warn(f'Failed to create violin distribution plots due to: {e}')
if label_column:
labels = df.pop(label_column)
imbtable = labels.value_counts(normalize=True).sort_index()
try:
gcf_clear(plt)
balancebar = imbtable.plot(kind='bar', title='class imbalance - labels')
balancebar.set_xlabel('class')
balancebar.set_ylabel("proportion of total")
extra_data["imbalance"] = context.log_artifact(PlotArtifact("imbalance", body=plt.gcf()),
local_path=f"{plots_dest}/imbalance.html")
except Exception as e:
context.logger.warn(f'Failed to create class imbalance plot due to: {e}')
context.log_artifact(TableArtifact("imbalance-weights-vec",
df=pd.DataFrame({"weights": imbtable})),
local_path=f"{plots_dest}/imbalance-weights-vec.csv", db_key=False)
tblcorr = df.corr()
    mask = np.zeros_like(tblcorr, dtype=bool)
mask[np.triu_indices_from(mask)] = True
dfcorr = pd.DataFrame(data=tblcorr, columns=header, index=header)
dfcorr = dfcorr[np.arange(dfcorr.shape[0])[:, None] > np.arange(dfcorr.shape[1])]
context.log_artifact(TableArtifact("correlation-matrix", df=tblcorr, visible=True),
local_path=f"{plots_dest}/correlation-matrix.csv", db_key=False)
try:
gcf_clear(plt)
ax = plt.axes()
sns.heatmap(tblcorr, ax=ax, mask=mask, annot=False, cmap=plt.cm.Reds)
ax.set_title("features correlation")
extra_data["correlation"] = context.log_artifact(PlotArtifact("correlation", body=plt.gcf(), title='Correlation Matrix'),
local_path=f"{plots_dest}/corr.html", db_key=False)
except Exception as e:
context.logger.warn(f'Failed to create features correlation plot due to: {e}')
gcf_clear(plt)
if update_dataset and table.meta and table.meta.kind == 'dataset':
from mlrun.artifacts import update_dataset_meta
update_dataset_meta(table.meta, extra_data=extra_data)
# nuclio: end-code
```
### mlconfig
```
from mlrun import mlconf
import os
mlconf.dbpath = mlconf.dbpath or 'http://mlrun-api:8080'
mlconf.artifact_path = mlconf.artifact_path or os.path.abspath('./')
```
### save
```
from mlrun import code_to_function
# create job function object from notebook code
fn = code_to_function("describe", handler="summarize",
description="describe and visualizes dataset stats",
categories=["analysis"],
labels = {"author": "yjb"},
code_output='.')
fn.export()
```
## tests
```
from mlrun.platforms import auto_mount
fn.apply(auto_mount())
from mlrun import NewTask, run_local
#DATA_URL = "https://iguazio-sample-data.s3.amazonaws.com/datasets/classifier-data.csv"
DATA_URL = 'https://iguazio-sample-data.s3.amazonaws.com/datasets/iris_dataset.csv'
task = NewTask(
name="tasks-describe",
handler=summarize,
inputs={"table": DATA_URL}, params={'update_dataset': True, 'label_column': 'label'})
```
### run locally
```
run = run_local(task)
```
### run remotely
```
fn.run(task, inputs={"table": DATA_URL})
```
| github_jupyter |
In-Person Workshop --- Programming in Python
===
Hadoop's MapReduce algorithm is shown in the following figure.
<img src="https://raw.githubusercontent.com/jdvelasq/datalabs/master/images/map-reduce.jpg"/>
We want to write a program that performs a word count using the MapReduce algorithm.
```
#
# The /tmp/input and /tmp/output folders and three test files are created below
#
!rm -rf /tmp/input /tmp/output
!mkdir /tmp/input
!mkdir /tmp/output
%%writefile /tmp/input/text0.txt
Analytics is the discovery, interpretation, and communication of meaningful patterns
in data. Especially valuable in areas rich with recorded information, analytics relies
on the simultaneous application of statistics, computer programming and operations research
to quantify performance.
Organizations may apply analytics to business data to describe, predict, and improve business
performance. Specifically, areas within analytics include predictive analytics, prescriptive
analytics, enterprise decision management, descriptive analytics, cognitive analytics, Big
Data Analytics, retail analytics, store assortment and stock-keeping unit optimization,
marketing optimization and marketing mix modeling, web analytics, call analytics, speech
analytics, sales force sizing and optimization, price and promotion modeling, predictive
science, credit risk analysis, and fraud analytics. Since analytics can require extensive
computation (see big data), the algorithms and software used for analytics harness the most
current methods in computer science, statistics, and mathematics
%%writefile /tmp/input/text1.txt
The field of data analysis. Analytics often involves studying past historical data to
research potential trends, to analyze the effects of certain decisions or events, or to
evaluate the performance of a given tool or scenario. The goal of analytics is to improve
the business by gaining knowledge which can be used to make improvements or changes.
%%writefile /tmp/input/text2.txt
Data analytics (DA) is the process of examining data sets in order to draw conclusions
about the information they contain, increasingly with the aid of specialized systems
and software. Data analytics technologies and techniques are widely used in commercial
industries to enable organizations to make more-informed business decisions and by
scientists and researchers to verify or disprove scientific models, theories and
hypotheses.
#
# Write the load_input function, which receives a folder as a parameter and
# returns a list of tuples where the first element of each tuple is the file
# name and the second is a line from that file. The function converts every
# line of every file into a tuple. The function is generic and must read all
# of the files in the folder passed as a parameter.
#
# For example:
# [
# ('text0.txt', 'Analytics is the discovery, inter ...'),
# ('text0.txt', 'in data. Especially valuable in ar...'),
# ...
# ('text2.txt', 'hypotheses.')
# ]
#
def load_input(input_directory):
pass
#
# Write a function called mapper that receives the list of tuples from the
# previous function and returns a list of (key, value) tuples. In this case,
# the key is each word and the value is 1, since we are performing a count.
#
# [
# ('Analytics', 1),
# ('is', 1),
# ...
# ]
#
def mapper(sequence):
pass
#
# Write the shuffle_and_sort function, which receives the list of tuples
# produced by the mapper and returns a list with the same content sorted by
# key.
#
# [
# ('Analytics', 1),
# ('Analytics', 1),
# ...
# ]
#
def shuffle_and_sort(sequence):
pass
#
# Write the reducer function, which receives the result of shuffle_and_sort
# and reduces the values associated with each key by summing them. As a
# result, for example, the reduction indicates how many times the word
# "analytics" appears in the text.
#
def reducer(sequence):
pass
#
# Write the save_output function, which takes the list returned by the reducer
# and writes the files 'part-0.txt', 'part-1.txt', etc. to the /tmp/output/
# folder. The first file contains the first 20 counted words, the second the
# 21st through 40th, and so on. Each line of each file contains the word and
# the number of times it appears, separated by a tab.
#
def save_output(sequence, output_directory):
pass
```
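For reference, here is a minimal sketch of one possible solution to the five functions above; the bodies are illustrative, not the only valid answer:
```
import glob
import os


def load_input(input_directory):
    # Return a list of (filename, line) tuples for every file in the folder
    sequence = []
    for path in glob.glob(os.path.join(input_directory, "*")):
        with open(path) as f:
            for line in f:
                sequence.append((os.path.basename(path), line))
    return sequence


def mapper(sequence):
    # Emit (word, 1) for every word in every line
    pairs = []
    for _, line in sequence:
        for word in line.split():
            pairs.append((word, 1))
    return pairs


def shuffle_and_sort(sequence):
    # Sort the (key, value) pairs by key
    return sorted(sequence, key=lambda kv: kv[0])


def reducer(sequence):
    # Sum the values associated with each key (input is sorted by key)
    result = []
    for key, value in sequence:
        if result and result[-1][0] == key:
            result[-1] = (key, result[-1][1] + value)
        else:
            result.append((key, value))
    return result


def save_output(sequence, output_directory):
    # Write 20 tab-separated (word, count) lines per part-N.txt file
    for part, start in enumerate(range(0, len(sequence), 20)):
        path = os.path.join(output_directory, "part-{}.txt".format(part))
        with open(path, "w") as f:
            for key, value in sequence[start:start + 20]:
                f.write("{}\t{}\n".format(key, value))
```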
| github_jupyter |
<a href="https://colab.research.google.com/github/mkirby1995/DS-Unit-2-Sprint-3-Classification-Validation/blob/master/DS_Unit_2_Sprint_Challenge_3_Classification_Validation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
_Lambda School Data Science Unit 2_
# Classification & Validation Sprint Challenge
Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3.
#### For this Sprint Challenge, you'll predict whether a person's income exceeds $50k/yr, based on census data.
You can read more about the Adult Census Income dataset at the UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/adult
#### Run this cell to load the data:
```
!pip install category_encoders
import category_encoders as ce
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler, OrdinalEncoder
from sklearn.ensemble import RandomForestClassifier
import pandas as pd
columns = ['age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income']
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data',
header=None, names=columns)
df['income'] = df['income'].str.strip()
```
## Part 1 — Begin with baselines
Split the data into an **X matrix** (all the features) and **y vector** (the target).
(You _don't_ need to split the data into train and test sets here. You'll be asked to do that at the _end_ of Part 1.)
```
df['income'].value_counts()
X = df.drop(columns='income')
Y = df['income'].replace({'<=50K':0, '>50K':1})
```
What **accuracy score** would you get here with a **"majority class baseline"?**
(You can answer this question either with a scikit-learn function or with a pandas function.)
```
majority_class = Y.mode()[0]
majority_class_prediction = [majority_class] * len(Y)
accuracy_score(Y, majority_class_prediction)
```
What **ROC AUC score** would you get here with a **majority class baseline?**
(You can answer this question either with a scikit-learn function or with no code, just your understanding of ROC AUC.)
```
roc_auc_score(Y, majority_class_prediction)
```
In this Sprint Challenge, you will use **"Cross-Validation with Independent Test Set"** as your model validation method.
First, **split the data into `X_train, X_test, y_train, y_test`**. You can include 80% of the data in the train set, and hold out 20% for the test set.
```
X_train, X_test, Y_train, Y_test = train_test_split(X,Y, test_size=.2, random_state=42)
```
## Part 2 — Modeling with Logistic Regression!
- You may do exploratory data analysis and visualization, but it is not required.
- You may **use all the features, or select any features** of your choice, as long as you select at least one numeric feature and one categorical feature.
- **Scale your numeric features**, using any scikit-learn [Scaler](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing) of your choice.
- **Encode your categorical features**. You may use any encoding (One-Hot, Ordinal, etc) and any library (category_encoders, scikit-learn, pandas, etc) of your choice.
- You may choose to use a pipeline, but it is not required.
- Use a **Logistic Regression** model.
- Use scikit-learn's [**cross_val_score**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html) function. For [scoring](https://scikit-learn.org/stable/modules/model_evaluation.html#the-scoring-parameter-defining-model-evaluation-rules), use **accuracy**.
- **Print your model's cross-validation accuracy score.**
```
pipeline = make_pipeline(ce.OneHotEncoder(use_cat_names=True),
StandardScaler(),
LogisticRegression(solver='lbfgs', max_iter=1000))
scores = cross_val_score(pipeline, X_train, Y_train, scoring='accuracy', cv=10, n_jobs=1, verbose=10)
print('Cross-Validation accuracy scores:', scores,'\n\n')
print('Average:', scores.mean())
```
## Part 3 — Modeling with Tree Ensembles!
Part 3 is the same as Part 2, except this time, use a **Random Forest** or **Gradient Boosting** classifier. You may use scikit-learn, xgboost, or any other library. Then, print your model's cross-validation accuracy score.
```
pipeline = make_pipeline(ce.OneHotEncoder(use_cat_names=True),
StandardScaler(),
RandomForestClassifier(max_depth=2, n_estimators=40))
scores = cross_val_score(pipeline, X_train, Y_train, scoring='accuracy', cv=10, n_jobs=1, verbose=10)
print('Cross-Validation accuracy scores:', scores,'\n\n')
print('Average:', scores.mean())
```
## Part 4 — Calculate classification metrics from a confusion matrix
Suppose this is the confusion matrix for your binary classification model:
<table>
<tr>
<td colspan="2" rowspan="2"></td>
<td colspan="2">Predicted</td>
</tr>
<tr>
<td>Negative</td>
<td>Positive</td>
</tr>
<tr>
<td rowspan="2">Actual</td>
<td>Negative</td>
<td style="border: solid">85</td>
<td style="border: solid">58</td>
</tr>
<tr>
<td>Positive</td>
<td style="border: solid">8</td>
<td style="border: solid"> 36</td>
</tr>
</table>
```
true_neg = 85
true_pos = 36
false_neg = 8
false_pos = 58
pred_neg = true_neg + false_neg
pred_pos = true_pos + false_pos
actual_pos = false_neg + true_pos
actual_neg = false_pos + true_neg
```
Calculate accuracy
```
accuracy = (true_neg + true_pos)/ (pred_neg + pred_pos)
accuracy
```
Calculate precision
```
precision = (true_pos) / (pred_pos)
precision
```
Calculate recall
```
recall = true_pos / actual_pos
recall
```
## BONUS — How you can earn a score of 3
### Part 1
Do feature engineering, to try improving your cross-validation score.
### Part 2
Experiment with feature selection, preprocessing, categorical encoding, and hyperparameter optimization, to try improving your cross-validation score.
### Part 3
Which model had the best cross-validation score? Refit this model on the train set and do a final evaluation on the held out test set — what is the test score?
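If the logistic regression pipeline above had the best cross-validation score, a sketch of that final evaluation might look like the following (refit on the full train set, then score once on the held-out test set; `final_model` is just an illustrative name):
```
final_model = make_pipeline(ce.OneHotEncoder(use_cat_names=True),
                            StandardScaler(),
                            LogisticRegression(solver='lbfgs', max_iter=1000))
final_model.fit(X_train, Y_train)
print('Test accuracy:', accuracy_score(Y_test, final_model.predict(X_test)))
```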
### Part 4
Calculate F1 score and False Positive Rate.
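Using the counts already defined in Part 4, one way to compute both metrics (a sketch):
```
f1 = 2 * precision * recall / (precision + recall)
false_positive_rate = false_pos / actual_neg
print(f1, false_positive_rate)
```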
| github_jupyter |
```
# check pytorch installation:
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
assert torch.__version__.startswith("1.9") # please manually install torch 1.9 if Colab changes its default version
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger()
# import some common libraries
import numpy as np
import os, json, cv2, random
from PIL import Image
from IPython.display import display
from typing import Tuple
# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog, DatasetCatalog
from detectron2.data.dataset_mapper import DatasetMapper
from detectron2.data import detection_utils as utils
from detectron2.engine import DefaultTrainer
from detectron2.data import build_detection_test_loader, build_detection_train_loader
import copy
import json
from detectron2.data import transforms as T
def get_circles_dataset():
with open("circles.json") as json_file:
data = json.load(json_file)
return data
DatasetCatalog.register("circles_train", get_circles_dataset)
MetadataCatalog.get("circles_train").set(thing_classes=["circle"])
circles_metadata = MetadataCatalog.get("circles_train")
dataset_dicts = get_circles_dataset()
for d in random.sample(dataset_dicts, 2):
img = cv2.imread(d["file_name"])
visualizer = Visualizer(img[:, :, ::-1], metadata=circles_metadata, scale=0.5)
out = visualizer.draw_dataset_dict(d)
display(Image.fromarray(out.get_image()))
class FourChannelMapper(DatasetMapper):
def __call__(self, dataset_dict):
"""
This method is basically a carbon copy of DatasetMapper::__call__,
but each image gets expanded by one empty channel
"""
dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below
image = utils.read_image(dataset_dict["file_name"], format=self.image_format)
#########################################
# MY CODE
# Simply add a 4th empty channel
# image = np.dstack((image, np.zeros_like(image[:,:,0])))
# END OF MY CODE
utils.check_image_size(dataset_dict, image)
# USER: Remove if you don't do semantic/panoptic segmentation.
if "sem_seg_file_name" in dataset_dict:
sem_seg_gt = utils.read_image(dataset_dict.pop("sem_seg_file_name"), "L").squeeze(2)
else:
sem_seg_gt = None
aug_input = T.AugInput(image, sem_seg=sem_seg_gt)
transforms = self.augmentations(aug_input)
image, sem_seg_gt = aug_input.image, aug_input.sem_seg
image_shape = image.shape[:2] # h, w
# Pytorch's dataloader is efficient on torch.Tensor due to shared-memory,
# but not efficient on large generic data structures due to the use of pickle & mp.Queue.
# Therefore it's important to use torch.Tensor.
dataset_dict["image"] = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1)))
if sem_seg_gt is not None:
dataset_dict["sem_seg"] = torch.as_tensor(sem_seg_gt.astype("long"))
if not self.is_train:
# USER: Modify this if you want to keep them for some reason.
dataset_dict.pop("annotations", None)
dataset_dict.pop("sem_seg_file_name", None)
return dataset_dict
if "annotations" in dataset_dict:
# USER: Modify this if you want to keep them for some reason.
for anno in dataset_dict["annotations"]:
if not self.use_instance_mask:
anno.pop("segmentation", None)
if not self.use_keypoint:
anno.pop("keypoints", None)
# USER: Implement additional transformations if you have other types of data
annos = [
utils.transform_instance_annotations(
obj, transforms, image_shape, keypoint_hflip_indices=self.keypoint_hflip_indices
)
for obj in dataset_dict.pop("annotations")
if obj.get("iscrowd", 0) == 0
]
instances = utils.annotations_to_instances(
annos, image_shape, mask_format=self.instance_mask_format
)
# After transforms such as cropping are applied, the bounding box may no longer
# tightly bound the object. As an example, imagine a triangle object
# [(0,0), (2,0), (0,2)] cropped by a box [(1,0),(2,2)] (XYXY format). The tight
# bounding box of the cropped triangle should be [(1,0),(2,1)], which is not equal to
# the intersection of original bounding box and the cropping box.
if self.recompute_boxes:
instances.gt_boxes = instances.gt_masks.get_bounding_boxes()
dataset_dict["instances"] = utils.filter_empty_instances(instances)
return dataset_dict
class Trainer(DefaultTrainer):
@classmethod
def build_train_loader(cls, cfg):
return build_detection_train_loader(cfg, mapper=FourChannelMapper(cfg, True))
@classmethod
def build_test_loader(cls, cfg, dataset_name):
return build_detection_test_loader(cfg, dataset_name, mapper=FourChannelMapper(cfg, False))
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("circles_train",)
cfg.DATASETS.TEST = ()
#cfg.INPUT.FORMAT = "BGR"
cfg.INPUT.FORMAT = "RGB"
cfg.MODEL.PIXEL_MEAN = [103.530, 116.280, 123.675]
#cfg.MODEL.PIXEL_MEAN = [103.530, 116.280, 123.675, 103.530]
cfg.MODEL.PIXEL_STD = [1.0, 1.0, 1.0]
#cfg.MODEL.PIXEL_STD = [1.0, 1.0, 1.0, 1.0]
cfg.DATALOADER.NUM_WORKERS = 2
#cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 1
cfg.SOLVER.BASE_LR = 0.00025 # pick a good LR
cfg.SOLVER.MAX_ITER = 300 # 300 iterations seems good enough for this toy dataset; you will need to train longer for a practical dataset
cfg.SOLVER.STEPS = [] # do not decay learning rate
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128 # faster, and good enough for this toy dataset (default: 512)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # only has one class (circle). (see https://detectron2.readthedocs.io/tutorials/datasets.html#update-the-config-for-new-datasets)
# NOTE: this config means the number of classes, but a few popular unofficial tutorials incorrectly use num_classes+1 here.
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = Trainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
# Inference should use the config with parameters that are used in training
# cfg now already contains everything we've set previously. We changed it a little bit for inference:
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth") # path to the model we just trained
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.4 # set a custom testing threshold
predictor = DefaultPredictor(cfg)
from detectron2.utils.visualizer import ColorMode
dataset_dict = get_circles_dataset()
for d in random.sample(dataset_dicts, 2):
img = utils.read_image(d["file_name"], format=cfg.INPUT.FORMAT)
#stacked = np.dstack((img, np.zeros_like(img[:,:,0])))
stacked = img
display(Image.fromarray(img))
outputs = predictor(stacked)
v = Visualizer(img,
metadata=circles_metadata,
scale=1.0,
)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
display(Image.fromarray(out.get_image()))
```
| github_jupyter |
**author**: [email protected]<br>
**date**: 7 Oct 2017<br>
**language**: Python 3.5<br>
**license**: BSD3<br>
## alpha_diversity_90bp_100bp_150bp.ipynb
```
import pandas as pd
import math
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from empcolors import get_empo_cat_color
%matplotlib inline
```
*** Choose 2k or qc-filtered subset (one or the other) ***
```
path_map = '../../data/mapping-files/emp_qiime_mapping_subset_2k.tsv' # already has 90bp alpha-div data
version = '2k'
path_map = '../../data/mapping-files/emp_qiime_mapping_qc_filtered.tsv' # already has 90bp alpha-div data
version = 'qc'
```
*** Merged mapping file and alpha-div ***
```
path_adiv100 = '../../data/alpha-div/emp.100.min25.deblur.withtax.onlytree_5000.txt'
path_adiv150 = '../../data/alpha-div/emp.150.min25.deblur.withtax.onlytree_5000.txt'
df_map = pd.read_csv(path_map, sep='\t', index_col=0)
df_adiv100 = pd.read_csv(path_adiv100, sep='\t', index_col=0)
df_adiv150 = pd.read_csv(path_adiv150, sep='\t', index_col=0)
df_adiv100.columns = ['adiv_chao1_100bp', 'adiv_observed_otus_100bp', 'adiv_faith_pd_100bp', 'adiv_shannon_100bp']
df_adiv150.columns = ['adiv_chao1_150bp', 'adiv_observed_otus_150bp', 'adiv_faith_pd_150bp', 'adiv_shannon_150bp']
df_merged = pd.concat([df_adiv100, df_adiv150, df_map], axis=1, join='outer')
```
*** Removing all samples without 150bp alpha-div results ***
```
df1 = df_merged[['empo_3', 'adiv_observed_otus', 'adiv_observed_otus_100bp', 'adiv_observed_otus_150bp']]
df1.columns = ['empo_3', 'observed_tag_sequences_90bp', 'observed_tag_sequences_100bp', 'observed_tag_sequences_150bp']
df1.dropna(axis=0, inplace=True)
g = sns.PairGrid(df1, hue='empo_3', palette=get_empo_cat_color(returndict=True))
g = g.map(plt.scatter, alpha=0.5)
for i in [0, 1, 2]:
for j in [0, 1, 2]:
g.axes[i][j].set_xscale('log')
g.axes[i][j].set_yscale('log')
g.axes[i][j].set_xlim([1e0, 1e4])
g.axes[i][j].set_ylim([1e0, 1e4])
g.savefig('adiv_%s_scatter.pdf' % version)
sns.lmplot(x='observed_tag_sequences_90bp', y='observed_tag_sequences_150bp', col='empo_3', hue="empo_3", data=df1,
col_wrap=4, palette=get_empo_cat_color(returndict=True), size=3, markers='o',
scatter_kws={"s": 20, "alpha": 1}, fit_reg=True)
plt.xlim([0, 3000])
plt.ylim([0, 3000])
plt.savefig('adiv_%s_lmplot.pdf' % version)
df1melt = pd.melt(df1, id_vars='empo_3')
empo_list = list(set(df1melt.empo_3))
empo_list = [x for x in empo_list if type(x) is str]
empo_list.sort()
empo_colors = [get_empo_cat_color(returndict=True)[x] for x in empo_list]
for var in ['observed_tag_sequences_90bp', 'observed_tag_sequences_100bp', 'observed_tag_sequences_150bp']:
list_of = [0] * len(empo_list)
df1melt2 = df1melt[df1melt['variable'] == var].drop('variable', axis=1)
for empo in np.arange(len(empo_list)):
list_of[empo] = list(df1melt2.pivot(columns='empo_3')['value'][empo_list[empo]].dropna())
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(2.5,2.5))
plt.hist(list_of, color=empo_colors,
bins=np.logspace(np.log10(1e0),np.log10(1e4), 20),
stacked=True)
plt.xscale('log')
fig.savefig('adiv_%s_hist_%s.pdf' % (version, var))
```
| github_jupyter |
# Value Investing Indicators from SEC Filings
### Data Source: https://www.sec.gov/dera/data/financial-statement-data-sets.html
```
import pandas as pd
import os
import shutil
import glob
import sys
import warnings
import functools
from functools import reduce
import os
pd.set_option('display.max_columns', 999)
warnings.simplefilter("ignore")
os.chdir("..")
dir_root = os.getcwd()
dir_raw = dir_root + u"/sec_filings/Raw"
cik_lookup = pd.read_csv(dir_root + u"/sec_filings/cik_ticker.csv", usecols=['CIK', 'Ticker'], sep="|")
# num_tags = ["PreferredStockValue", "AssetsCurrent", "Liabilities", "EarningsPerShareBasic", "CommonStockSharesOutstanding", "LiabilitiesCurrent", "EarningsPerShareBasic", "SharePrice", "StockholdersEquity", "PreferredStockValue", "CommonStockSharesOutstanding", "NetIncomeLoss", "GrossProfit", "SalesRevenueNet","StockRepurchasedAndRetiredDuringPeriodShares"]
files_num = sorted(glob.glob(dir_raw + u'/*_num.txt'))
files_sub = sorted(glob.glob(dir_raw + u'/*_sub.txt'))
files_pre = sorted(glob.glob(dir_raw + u'/*_pre.txt'))
sec_df = pd.DataFrame([])
for num, sub, pre in zip(files_num, files_sub, files_pre):
df_num = pd.read_csv(num, sep='\t', dtype=str, encoding = "ISO-8859-1")
df_sub = pd.read_csv(sub, sep='\t', dtype=str, encoding = "ISO-8859-1")
df_pre = pd.read_csv(pre, sep='\t', dtype=str, encoding = "ISO-8859-1")
# df_num = df_num[df_num['tag'].isin(num_tags)]
df_pre = df_pre.merge(df_num, on=['adsh', 'tag', 'version'], sort=True)
sec_merge = df_sub.merge(df_pre, on='adsh', how="inner", sort=True)
sec_merge = sec_merge[sec_merge['form'] == "10-Q"]
sec_merge = sec_merge[sec_merge['stmt'] == "BS"]
sec_df = sec_df.append(sec_merge)
sec_curated = sec_df.sort_values(by='ddate').drop_duplicates(subset=['adsh', 'tag', 'version'], keep='last')
sec_curated = sec_curated[['adsh', 'ddate','version','filed', 'form', 'fp', 'fy', 'fye', 'instance', 'period', 'tag', 'uom', 'value']]
sec_curated.to_csv(dir_root + r'/test.csv', index=False)
sec_curated = sec_curated.drop_duplicates(subset=['instance'])
sec_curated[sec_curated['tag'] == 'Share'].dropna()
drop_cols = ['adsh', 'ddate','version','filed', 'form', 'fp', 'fy', 'fye', 'period', 'tag', 'uom']
PreferredStockValue = sec_curated[sec_curated["tag"] == "PreferredStockValue"].rename(columns={"value": "PreferredStockValue"}).drop(columns=drop_cols)
AssetsCurrent = sec_curated[sec_curated["tag"] == "AssetsCurrent"].rename(columns={'value': 'AssetsCurrent'}).drop(columns=drop_cols)
Liabilities = sec_curated[sec_curated["tag"] == "Liabilities"].rename(columns={'value': 'Liabilities'}).drop(columns=drop_cols)
EarningsPerShareBasic = sec_curated[sec_curated["tag"] == "EarningsPerShareBasic"].rename(columns={'value': 'EarningsPerShareBasic'}).drop(columns=drop_cols)
CommonStockSharesOutstanding = sec_curated[sec_curated["tag"] == "CommonStockSharesOutstanding"].rename(columns={'value': 'CommonStockSharesOutstanding'}).drop(columns=drop_cols)
LiabilitiesCurrent = sec_curated[sec_curated["tag"] == "LiabilitiesCurrent"].rename(columns={'value': 'LiabilitiesCurrent'}).drop(columns=drop_cols)
EarningsPerShareBasic = sec_curated[sec_curated["tag"] == "EarningsPerShareBasic"].rename(columns={'value': 'EarningsPerShareBasic'}).drop(columns=drop_cols)
SharePrice = sec_curated[sec_curated["tag"] == "SharePrice"].rename(columns={'value': 'SharePrice'}).drop(columns=drop_cols)
StockholdersEquity = sec_curated[sec_curated["tag"] == "StockholdersEquity"].rename(columns={'value': 'StockholdersEquity'}).drop(columns=drop_cols)
PreferredStockValue = sec_curated[sec_curated["tag"] == "PreferredStockValue"].rename(columns={'value': 'PreferredStockValue'}).drop(columns=drop_cols)
CommonStockSharesOutstanding = sec_curated[sec_curated["tag"] == "CommonStockSharesOutstanding"].rename(columns={'value': 'CommonStockSharesOutstanding'}).drop(columns=drop_cols)
NetIncomeLoss = sec_curated[sec_curated["tag"] == "NetIncomeLoss"].rename(columns={'value': 'NetIncomeLoss'}).drop(columns=drop_cols)
GrossProfit = sec_curated[sec_curated["tag"] == "GrossProfit"].rename(columns={'value': 'GrossProfit'}).drop(columns=drop_cols)
SalesRevenueNet = sec_curated[sec_curated["tag"] == "SalesRevenueNet"].rename(columns={'value': 'SalesRevenueNet'}).drop(columns=drop_cols)
StockRepurchased = sec_curated[sec_curated["tag"] == "StockRepurchasedAndRetiredDuringPeriodShares"].rename(columns={'value': 'StockRepurchased'}).drop(columns=drop_cols)
cols = ['adsh', 'ddate','version','filed', 'form', 'fp', 'fy', 'fye', 'instance', 'period', 'tag', 'uom']
sec_final = sec_curated[cols]
dfs = [sec_curated, PreferredStockValue, AssetsCurrent, Liabilities, EarningsPerShareBasic,
CommonStockSharesOutstanding, LiabilitiesCurrent, EarningsPerShareBasic,
SharePrice, StockholdersEquity, PreferredStockValue, CommonStockSharesOutstanding,
NetIncomeLoss, GrossProfit, SalesRevenueNet,StockRepurchased]
df_final = reduce(lambda left,right: pd.merge(sec_curated,right,on='instance'), dfs)
dfs_final = [df.set_index('instance') for df in dfs]
x = pd.concat(dfs_final, axis=1)
x['SharePriceBasic'].dropna()
```
# Benjamin Graham
## Formulas:
* NCAVPS = (Current Assets - (Total Liabilities + Preferred Stock)) ÷ Shares Outstanding
* Less than 1.10
* Debt to Assets = Current Assets / Current Liabilities
* Greater than 1.50
* Price / Earnings per Share ratio
* Less than 9.0
* PRICE TO BOOK VALUE = (P/BV)
* Where BV = (Total Shareholder Equity−Preferred Stock)/ Total Outstanding Shares
  * Less than 1.20
## References:
Benjamin Graham rules: https://cabotwealth.com/daily/value-investing/benjamin-grahams-value-stock-criteria/
Benjamin Graham rules Modified: https://www.netnethunter.com/16-benjamin-graham-rules/
```
sec_df['NCAVPS'] = (sec_df['AssetsCurrent'] - (sec_df['Liabilities'] + sec_df['PreferredStockValue'])) / sec_df['CommonStockSharesOutstanding']
sec_df['DebtToAssets'] = sec_df['AssetsCurrent'] / sec_df['LiabilitiesCurrent']
sec_df['PE'] = sec_df['SharePrice'] / sec_df['EarningsPerShareBasic']
sec_df['PBV'] = sec_df['SharePrice'] / ((sec_df['StockholdersEquity'] - sec_df['PreferredStockValue']) / sec_df['CommonStockSharesOutstanding'])
```
# Warren Buffet Rules
## Formulas
* Debt/Equity= Total Liabilities / Total Shareholders’ Equity
* Less than 1 and ROE is greater than 10%
* Return on Equity (ROE) = Net Income / Stockholders' Equity
* Is Positive
* Gross Profit Margin = Gross Profit / Revenue
* Greater than 40%
* Quarter over Quarter EPS
* Greater than 10
* Stock Buybacks
* Greater than last period
## References:
https://www.oldschoolvalue.com/tutorial/this-is-how-buffett-interprets-financial-statements/
```
sec_df['DebtEquity'] = sec_df['Liabilities'] / sec_df['StockholdersEquity']
sec_df['ReturnEarnings'] = sec_df['NetIncomeLoss'] / sec_df['StockholdersEquity']
sec_df["GrossProfitMargin"] = sec_df['GrossProfit'] / sec_df['SalesRevenueNet']
sec_df["EPS"] = sec_df["EarningsPerShareBasic"]
sec_df["StockBuybacks"] = sec_df["StockRepurchasedAndRetiredDuringPeriodShares"]
```
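One possible next step (not in the original notebook) is to combine the Graham thresholds above into a simple screen; this assumes the `DebtToAssets`, `PE`, and `PBV` columns created in the cells above, and `graham_screen` is just an illustrative name:
```
graham_screen = sec_df[(sec_df['DebtToAssets'] > 1.50) &
                       (sec_df['PE'] < 9.0) &
                       (sec_df['PBV'] < 1.20)]
graham_screen.head()
```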
| github_jupyter |
# Modeling
@Author: Bruno Vieira
Goal: create a classification model able to identify bot accounts on Twitter using only profile-based features.
```
# Libs
import os
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.metrics import classification_report, precision_score, recall_score, roc_auc_score, average_precision_score, f1_score
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder, MinMaxScaler, StandardScaler, FunctionTransformer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_validate, StratifiedKFold, train_test_split
import cloudpickle
from sklearn.model_selection import learning_curve
import matplotlib.pyplot as plt
from sklearn.svm import SVC
import utils.dev.model as mdl
import importlib
import warnings
warnings.filterwarnings('ignore')
pd.set_option('display.max_columns', 100)
# Paths and Filenames
DATA_INPUT_PATH = 'data/interim'
DATA_INPUT_TRAIN_NAME = 'train_selected_features.csv'
DATA_INPUT_TEST_NAME = 'test.csv'
MODEL_OUTPUT_PATH = 'models'
MODEL_NAME = 'model_bot_classifier_v0.pkl'
df_twitter_train = pd.read_csv(os.path.join('..',DATA_INPUT_PATH, DATA_INPUT_TRAIN_NAME))
df_twitter_test = pd.read_csv(os.path.join('..',DATA_INPUT_PATH, DATA_INPUT_TEST_NAME))
df_twitter_train.replace({False:'FALSE', True:'TRUE'}, inplace=True)
df_twitter_test.replace({False:'FALSE', True:'TRUE'}, inplace=True)
```
# 1) Training
```
X_train = df_twitter_train.drop('label', axis=1)
y_train = df_twitter_train['label']
cat_columns = df_twitter_train.select_dtypes(include=['bool', 'object']).columns.tolist()
num_columns = df_twitter_train.select_dtypes(include=['int32','int64','float32', 'float64']).columns.tolist()
num_columns.remove('label')
skf = StratifiedKFold(n_splits=10)
cat_preprocessor = Pipeline(steps=[('imputer', SimpleImputer(strategy='most_frequent')),
('encoder', OneHotEncoder(handle_unknown='ignore'))])
num_preprocessor = Pipeline(steps=[('imputer', SimpleImputer(strategy='constant', fill_value=0)),
('scaler', StandardScaler())])
pipe_transformer = ColumnTransformer(transformers=[('num_pipe_preprocessor', num_preprocessor, num_columns),
('cat_pipe_preprocessor', cat_preprocessor, cat_columns)])
pipe_model = Pipeline(steps=[('pre_processor', pipe_transformer),
('model', SVC(random_state=23, kernel='rbf', gamma='scale', C=1, probability=True))])
cross_validation_results = cross_validate(pipe_model, X=X_train, y=y_train, scoring=['average_precision', 'roc_auc', 'precision', 'recall'], cv=skf, n_jobs=-1, verbose=0, return_train_score=True)
pipe_model.fit(X_train, y_train)
```
# 2) Evaluation
## 2.1) Cross Validation
```
cross_validation_results = pd.DataFrame(cross_validation_results)
cross_validation_results
1.96*cross_validation_results['train_average_precision'].std()
print(f"Avg Train Avg Precision:{np.round(cross_validation_results['train_average_precision'].mean(), 2)} +/- {np.round(1.96*cross_validation_results['train_average_precision'].std(), 2)}")
print(f"Avg Test Avg Precision:{np.round(cross_validation_results['test_average_precision'].mean(), 2)} +/- {np.round(1.96*cross_validation_results['test_average_precision'].std(), 2)}")
print(f"ROC - AUC Train:{np.round(cross_validation_results['train_roc_auc'].mean(), 2)} +/- {np.round(1.96*cross_validation_results['train_roc_auc'].std(), 2)}")
print(f"ROC - AUC Test:{np.round(cross_validation_results['test_roc_auc'].mean(), 2)} +/- {np.round(1.96*cross_validation_results['test_roc_auc'].std(), 2)}")
```
## 2.2) Test Set
```
def build_features(df):
list_columns_colors = df.filter(regex='color').columns.tolist()
df = df.replace({'false':'FALSE', 'true':'TRUE', False:'FALSE', True:'TRUE'})
df['name'] = df['name'].apply(lambda x: len(x) if x is not np.nan else 0)
df['profile_location'] = df['profile_location'].apply(lambda x: 'TRUE' if x is not np.nan else 'FALSE')
df['rate_friends_followers'] = df['friends_count']/df['followers_count']
    df['rate_friends_followers'] = df['rate_friends_followers'].replace({np.inf: 0, np.nan: 0})
df['unique_colors'] = df[list_columns_colors].stack().groupby(level=0).nunique()
return df
df_twitter_test = build_features(df_twitter_test)
columns_to_predict = df_twitter_train.columns.tolist()
df_twitter_test = df_twitter_test.loc[:,columns_to_predict]
X_test = df_twitter_test.drop('label', axis=1)
y_test = df_twitter_test['label']
```
## 2.3) Metrics
```
y_train_predict = pipe_model.predict_proba(X_train)
y_test_predict = pipe_model.predict_proba(X_test)
df_metrics_train = mdl.eval_thresh(y_real = y_train, y_proba = y_train_predict[:,1])
df_metrics_test = mdl.eval_thresh(y_real = y_test, y_proba = y_test_predict[:,1])
importlib.reload(mdl)
mdl.plot_metrics(df_metrics_train)
mdl.plot_metrics(df_metrics_test)
```
## 2.4) Learning Curve
```
train_sizes, train_scores, validation_scores = learning_curve(estimator = pipe_model,
X = X_train,
y = y_train,
cv = 5,
train_sizes=np.linspace(0.1, 1, 10),
scoring = 'neg_log_loss')
train_scores_mean = train_scores.mean(axis=1)
validation_scores_mean = validation_scores.mean(axis=1)
plt.style.use('seaborn')
plt.plot(train_sizes, train_scores_mean, label = 'Training error')
plt.plot(train_sizes, validation_scores_mean, label = 'Validation error')
plt.ylabel('Negative log-loss', fontsize = 14)
plt.xlabel('Training set size', fontsize = 14)
plt.title('Learning curves', fontsize = 18, y = 1.03)
plt.legend()
plt.show()
```
## 2.5) Ordering
## 2.6) Calibration
# 3) Saving the Model
```
with open(os.path.join('..', MODEL_OUTPUT_PATH, MODEL_NAME), 'wb') as f:
cloudpickle.dump(pipe_model, f)
```
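As a quick sanity check (a sketch, assuming the pickle written above and the test split defined earlier; `reloaded_model` is just an illustrative name), the saved pipeline can be reloaded and scored on the test set:
```
# Reload the serialized pipeline and recompute ROC AUC on the held-out test set
with open(os.path.join('..', MODEL_OUTPUT_PATH, MODEL_NAME), 'rb') as f:
    reloaded_model = cloudpickle.load(f)

print(roc_auc_score(y_test, reloaded_model.predict_proba(X_test)[:, 1]))
```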
| github_jupyter |
```
from IPython import display
from utils import Logger
import torch
from torch import nn
from torch.optim import Adam
from torch.autograd import Variable
from torchvision import transforms, datasets
DATA_FOLDER = './torch_data/VGAN/MNIST'
```
## Load Data
```
def mnist_data():
compose = transforms.Compose(
[transforms.ToTensor(),
         transforms.Normalize((.5,), (.5,))  # MNIST images have a single channel
])
out_dir = '{}/dataset'.format(DATA_FOLDER)
return datasets.MNIST(root=out_dir, train=True, transform=compose, download=True)
data = mnist_data()
batch_size = 100
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True)
num_batches = len(data_loader)
```
## Networks
```
class DiscriminativeNet(torch.nn.Module):
"""
A two hidden-layer discriminative neural network
"""
def __init__(self):
super(DiscriminativeNet, self).__init__()
n_features = 784
n_out = 1
self.hidden0 = nn.Sequential(
nn.Linear(n_features, 1024),
nn.LeakyReLU(0.2),
nn.Dropout(0.3)
)
self.hidden1 = nn.Sequential(
nn.Linear(1024, 512),
nn.LeakyReLU(0.2),
nn.Dropout(0.3)
)
self.hidden2 = nn.Sequential(
nn.Linear(512, 256),
nn.LeakyReLU(0.2),
nn.Dropout(0.3)
)
self.out = nn.Sequential(
torch.nn.Linear(256, n_out),
torch.nn.Sigmoid()
)
def forward(self, x):
x = self.hidden0(x)
x = self.hidden1(x)
x = self.hidden2(x)
x = self.out(x)
return x
def images_to_vectors(images):
return images.view(images.size(0), 784)
def vectors_to_images(vectors):
return vectors.view(vectors.size(0), 1, 28, 28)
class GenerativeNet(torch.nn.Module):
"""
A three hidden-layer generative neural network
"""
def __init__(self):
super(GenerativeNet, self).__init__()
n_features = 100
n_out = 784
self.hidden0 = nn.Sequential(
nn.Linear(n_features, 256),
nn.LeakyReLU(0.2)
)
self.hidden1 = nn.Sequential(
nn.Linear(256, 512),
nn.LeakyReLU(0.2)
)
self.hidden2 = nn.Sequential(
nn.Linear(512, 1024),
nn.LeakyReLU(0.2)
)
self.out = nn.Sequential(
nn.Linear(1024, n_out),
nn.Tanh()
)
def forward(self, x):
x = self.hidden0(x)
x = self.hidden1(x)
x = self.hidden2(x)
x = self.out(x)
return x
# Noise
def noise(size):
n = Variable(torch.randn(size, 100))
    if torch.cuda.is_available(): return n.cuda()
return n
discriminator = DiscriminativeNet()
generator = GenerativeNet()
if torch.cuda.is_available():
discriminator.cuda()
generator.cuda()
```
## Optimization
```
# Optimizers
d_optimizer = Adam(discriminator.parameters(), lr=0.0002)
g_optimizer = Adam(generator.parameters(), lr=0.0002)
# Loss function
loss = nn.BCELoss()
# Number of steps to apply to the discriminator
d_steps = 1  # In Goodfellow et al. (2014) this variable is set to 1
# Number of epochs
num_epochs = 200
```
## Training
```
def real_data_target(size):
'''
Tensor containing ones, with shape = size
'''
data = Variable(torch.ones(size, 1))
if torch.cuda.is_available(): return data.cuda()
return data
def fake_data_target(size):
'''
Tensor containing zeros, with shape = size
'''
data = Variable(torch.zeros(size, 1))
if torch.cuda.is_available(): return data.cuda()
return data
def train_discriminator(optimizer, real_data, fake_data):
# Reset gradients
optimizer.zero_grad()
# 1.1 Train on Real Data
prediction_real = discriminator(real_data)
# Calculate error and backpropagate
error_real = loss(prediction_real, real_data_target(real_data.size(0)))
error_real.backward()
# 1.2 Train on Fake Data
prediction_fake = discriminator(fake_data)
# Calculate error and backpropagate
error_fake = loss(prediction_fake, fake_data_target(real_data.size(0)))
error_fake.backward()
# 1.3 Update weights with gradients
optimizer.step()
# Return error
return error_real + error_fake, prediction_real, prediction_fake
def train_generator(optimizer, fake_data):
# 2. Train Generator
# Reset gradients
optimizer.zero_grad()
    # Run the discriminator on the generated fake data
prediction = discriminator(fake_data)
# Calculate error and backpropagate
error = loss(prediction, real_data_target(prediction.size(0)))
error.backward()
# Update weights with gradients
optimizer.step()
# Return error
return error
```
### Generate Samples for Testing
```
num_test_samples = 16
test_noise = noise(num_test_samples)
```
### Start training
```
logger = Logger(model_name='VGAN', data_name='MNIST')
for epoch in range(num_epochs):
for n_batch, (real_batch,_) in enumerate(data_loader):
# 1. Train Discriminator
real_data = Variable(images_to_vectors(real_batch))
if torch.cuda.is_available(): real_data = real_data.cuda()
# Generate fake data
fake_data = generator(noise(real_data.size(0))).detach()
# Train D
d_error, d_pred_real, d_pred_fake = train_discriminator(d_optimizer,
real_data, fake_data)
# 2. Train Generator
# Generate fake data
fake_data = generator(noise(real_batch.size(0)))
# Train G
g_error = train_generator(g_optimizer, fake_data)
# Log error
logger.log(d_error, g_error, epoch, n_batch, num_batches)
# Display Progress
if (n_batch) % 100 == 0:
display.clear_output(True)
# Display Images
test_images = vectors_to_images(generator(test_noise)).data.cpu()
logger.log_images(test_images, num_test_samples, epoch, n_batch, num_batches);
# Display status Logs
logger.display_status(
epoch, num_epochs, n_batch, num_batches,
d_error, g_error, d_pred_real, d_pred_fake
)
# Model Checkpoints
logger.save_models(generator, discriminator, epoch)
```
| github_jupyter |
```
# Import Dependencies
import pandas as pd
import random
# A gigantic DataFrame of individuals' names, their trainers, their weight, and their days as gym members
training_data = pd.DataFrame({
"Name":["Gino Walker","Hiedi Wasser","Kerrie Wetzel","Elizabeth Sackett","Jack Mitten","Madalene Wayman","Jamee Horvath","Arlena Reddin","Tula Levan","Teisha Dreier","Leslie Carrier","Arlette Hartson","Romana Merkle","Heath Viviani","Andres Zimmer","Allyson Osman","Yadira Caggiano","Jeanmarie Friedrichs","Leann Ussery","Bee Mom","Pandora Charland","Karena Wooten","Elizabet Albanese","Augusta Borjas","Erma Yadon","Belia Lenser","Karmen Sancho","Edison Mannion","Sonja Hornsby","Morgan Frei","Florencio Murphy","Christoper Hertel","Thalia Stepney","Tarah Argento","Nicol Canfield","Pok Moretti","Barbera Stallings","Muoi Kelso","Cicely Ritz","Sid Demelo","Eura Langan","Vanita An","Frieda Fuhr","Ernest Fitzhenry","Ashlyn Tash","Melodi Mclendon","Rochell Leblanc","Jacqui Reasons","Freeda Mccroy","Vanna Runk","Florinda Milot","Cierra Lecompte","Nancey Kysar","Latasha Dalton","Charlyn Rinaldi","Erline Averett","Mariko Hillary","Rosalyn Trigg","Sherwood Brauer","Hortencia Olesen","Delana Kohut","Geoffrey Mcdade","Iona Delancey","Donnie Read","Cesar Bhatia","Evia Slate","Kaye Hugo","Denise Vento","Lang Kittle","Sherry Whittenberg","Jodi Bracero","Tamera Linneman","Katheryn Koelling","Tonia Shorty","Misha Baxley","Lisbeth Goering","Merle Ladwig","Tammie Omar","Jesusa Avilla","Alda Zabala","Junita Dogan","Jessia Anglin","Peggie Scranton","Dania Clodfelter","Janis Mccarthy","Edmund Galusha","Tonisha Posey","Arvilla Medley","Briana Barbour","Delfina Kiger","Nia Lenig","Ricarda Bulow","Odell Carson","Nydia Clonts","Andree Resendez","Daniela Puma","Sherill Paavola","Gilbert Bloomquist","Shanon Mach","Justin Bangert","Arden Hokanson","Evelyne Bridge","Hee Simek","Ward Deangelis","Jodie Childs","Janis Boehme","Beaulah Glowacki","Denver Stoneham","Tarra Vinton","Deborah Hummell","Ulysses Neil","Kathryn Marques","Rosanna Dake","Gavin Wheat","Tameka Stoke","Janella Clear","Kaye Ciriaco","Suk Bloxham","Gracia Whaley","Philomena Hemingway","Claudette Vaillancourt","Olevia Piche","Trey Chiles","Idalia Scardina","Jenine Tremble","Herbert Krider","Alycia Schrock","Miss Weibel","Pearlene Neidert","Kina Callender","Charlotte Skelley","Theodora Harrigan","Sydney Shreffler","Annamae Trinidad","Tobi Mumme","Rosia Elliot","Debbra Putt","Rena Delosantos","Genna Grennan","Nieves Huf","Berry Lugo","Ayana Verdugo","Joaquin Mazzei","Doris Harmon","Patience Poss","Magaret Zabel","Marylynn Hinojos","Earlene Marcantel","Yuki Evensen","Rema Gay","Delana Haak","Patricia Fetters","Vinnie Elrod","Octavia Bellew","Burma Revard","Lakenya Kato","Vinita Buchner","Sierra Margulies","Shae Funderburg","Jenae Groleau","Louetta Howie","Astrid Duffer","Caron Altizer","Kymberly Amavisca","Mohammad Diedrich","Thora Wrinkle","Bethel Wiemann","Patria Millet","Eldridge Burbach","Alyson Eddie","Zula Hanna","Devin Goodwin","Felipa Kirkwood","Kurtis Kempf","Kasey Lenart","Deena Blankenship","Kandra Wargo","Sherrie Cieslak","Ron Atha","Reggie Barreiro","Daria Saulter","Tandra Eastman","Donnell Lucious","Talisha Rosner","Emiko Bergh","Terresa Launius","Margy Hoobler","Marylou Stelling","Lavonne Justice","Kala Langstaff","China Truett","Louanne Dussault","Thomasena Samaniego","Charlesetta Tarbell","Fatimah Lade","Malisa Cantero","Florencia Litten","Francina Fraise","Patsy London","Deloris Mclaughlin"],
"Trainer":['Bettyann Savory','Mariah Barberio','Gordon Perrine','Pa Dargan','Blanch Victoria','Aldo Byler','Aldo Byler','Williams Camire','Junie Ritenour','Gordon Perrine','Bettyann Savory','Mariah Barberio','Aldo Byler','Barton Stecklein','Bettyann Savory','Barton Stecklein','Gordon Perrine','Pa Dargan','Aldo Byler','Brittani Brin','Bettyann Savory','Phyliss Houk','Bettyann Savory','Junie Ritenour','Aldo Byler','Calvin North','Brittani Brin','Junie Ritenour','Blanch Victoria','Brittani Brin','Bettyann Savory','Blanch Victoria','Mariah Barberio','Bettyann Savory','Blanch Victoria','Brittani Brin','Junie Ritenour','Pa Dargan','Gordon Perrine','Phyliss Houk','Pa Dargan','Mariah Barberio','Phyliss Houk','Phyliss Houk','Calvin North','Williams Camire','Brittani Brin','Gordon Perrine','Bettyann Savory','Bettyann Savory','Pa Dargan','Phyliss Houk','Barton Stecklein','Blanch Victoria','Coleman Dunmire','Phyliss Houk','Blanch Victoria','Pa Dargan','Harland Coolidge','Calvin North','Bettyann Savory','Phyliss Houk','Bettyann Savory','Harland Coolidge','Gordon Perrine','Junie Ritenour','Harland Coolidge','Blanch Victoria','Mariah Barberio','Coleman Dunmire','Aldo Byler','Bettyann Savory','Gordon Perrine','Bettyann Savory','Barton Stecklein','Harland Coolidge','Aldo Byler','Aldo Byler','Pa Dargan','Junie Ritenour','Brittani Brin','Junie Ritenour','Gordon Perrine','Mariah Barberio','Mariah Barberio','Mariah Barberio','Bettyann Savory','Brittani Brin','Aldo Byler','Phyliss Houk','Blanch Victoria','Pa Dargan','Phyliss Houk','Brittani Brin','Barton Stecklein','Coleman Dunmire','Bettyann Savory','Bettyann Savory','Gordon Perrine','Blanch Victoria','Junie Ritenour','Phyliss Houk','Coleman Dunmire','Williams Camire','Harland Coolidge','Williams Camire','Aldo Byler','Harland Coolidge','Gordon Perrine','Brittani Brin','Coleman Dunmire','Calvin North','Phyliss Houk','Brittani Brin','Aldo Byler','Bettyann Savory','Brittani Brin','Gordon Perrine','Calvin North','Harland Coolidge','Coleman Dunmire','Harland Coolidge','Aldo Byler','Junie Ritenour','Blanch Victoria','Harland Coolidge','Blanch Victoria','Junie Ritenour','Harland Coolidge','Junie Ritenour','Gordon Perrine','Brittani Brin','Coleman Dunmire','Williams Camire','Junie Ritenour','Brittani Brin','Calvin North','Barton Stecklein','Barton Stecklein','Mariah Barberio','Coleman Dunmire','Bettyann Savory','Mariah Barberio','Pa Dargan','Barton Stecklein','Coleman Dunmire','Brittani Brin','Barton Stecklein','Pa Dargan','Barton Stecklein','Junie Ritenour','Bettyann Savory','Williams Camire','Pa Dargan','Calvin North','Williams Camire','Coleman Dunmire','Aldo Byler','Barton Stecklein','Coleman Dunmire','Blanch Victoria','Mariah Barberio','Mariah Barberio','Harland Coolidge','Barton Stecklein','Phyliss Houk','Pa Dargan','Bettyann Savory','Barton Stecklein','Harland Coolidge','Junie Ritenour','Pa Dargan','Mariah Barberio','Blanch Victoria','Williams Camire','Phyliss Houk','Phyliss Houk','Coleman Dunmire','Mariah Barberio','Gordon Perrine','Coleman Dunmire','Brittani Brin','Pa Dargan','Coleman Dunmire','Brittani Brin','Blanch Victoria','Coleman Dunmire','Gordon Perrine','Coleman Dunmire','Aldo Byler','Aldo Byler','Mariah Barberio','Williams Camire','Phyliss Houk','Aldo Byler','Williams Camire','Aldo Byler','Williams Camire','Coleman Dunmire','Phyliss Houk'],
"Weight":[128,180,193,177,237,166,224,208,177,241,114,161,162,151,220,142,193,193,124,130,132,141,190,239,213,131,172,127,184,157,215,122,181,240,218,205,239,217,234,158,180,131,194,171,177,110,117,114,217,123,248,189,198,127,182,121,224,111,151,170,188,150,137,231,222,186,139,175,178,246,150,154,129,216,144,198,228,183,173,129,157,199,186,232,172,157,246,239,214,161,132,208,187,224,164,177,175,224,219,235,112,241,243,179,208,196,131,207,182,233,191,162,173,197,190,182,231,196,196,143,250,174,138,135,164,204,235,192,114,179,215,127,185,213,250,213,153,217,176,190,119,167,118,208,113,206,200,236,159,218,168,159,156,183,121,203,215,209,179,219,174,220,129,188,217,250,166,157,112,236,182,144,189,243,238,147,165,115,160,134,245,174,238,157,150,184,174,134,134,248,199,165,117,119,162,112,170,224,247,217],
"Membership(Days)":[52,70,148,124,186,157,127,155,37,185,158,129,93,69,124,13,76,153,164,161,48,121,167,69,39,163,7,34,176,169,108,162,195,86,155,77,197,200,80,142,179,67,58,145,188,147,125,15,13,173,125,4,61,29,132,110,62,137,197,135,162,174,32,151,149,65,18,42,63,62,104,200,189,40,38,199,1,12,8,2,195,30,7,72,130,144,2,34,200,143,43,196,22,115,171,54,143,59,14,52,109,115,187,185,26,19,178,18,120,169,45,52,130,69,168,178,96,22,78,152,39,51,118,130,60,156,108,69,103,158,165,142,86,91,117,77,57,169,86,188,97,111,22,83,81,177,163,35,12,164,21,181,171,138,22,107,58,51,38,128,19,193,157,13,104,89,13,10,26,190,179,101,7,159,100,49,120,109,56,199,51,108,47,171,69,162,74,119,148,88,32,159,65,146,140,171,88,18,59,13]
})
training_data.head(10)
# Collecting a summary of all numeric data
# Finding the names of the trainers
# Finding how many students each trainer has
# Finding the average weight of all students
# Finding the combined weight of all students
# Converting the membership days into weeks and then adding a column to the DataFrame
```
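One possible set of answers to the prompts above (a sketch; the variable names and method choices are illustrative):
```
# Collecting a summary of all numeric data
training_data.describe()

# Finding the names of the trainers
trainer_names = training_data['Trainer'].unique()

# Finding how many students each trainer has
students_per_trainer = training_data['Trainer'].value_counts()

# Finding the average weight of all students
average_weight = training_data['Weight'].mean()

# Finding the combined weight of all students
combined_weight = training_data['Weight'].sum()

# Converting the membership days into weeks and adding a column to the DataFrame
training_data['Membership(Weeks)'] = training_data['Membership(Days)'] / 7
training_data.head()
```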
| github_jupyter |
# Self-Driving Car Engineer Nanodegree
## Project: **Finding Lane Lines on the Road**
***
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
---
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
---
**The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
---
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
**Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
## Import Packages
```
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
```
## Read in an Image
```
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
```
## Ideas for Lane Detection Pipeline
**Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
`cv2.inRange()` for color selection
`cv2.fillPoly()` for region selection
`cv2.line()` to draw lines on an image given endpoints
`cv2.addWeighted()` to coadd / overlay two images
`cv2.cvtColor()` to grayscale or change color
`cv2.imwrite()` to output images to file
`cv2.bitwise_and()` to apply a mask to an image
**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
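For example, `cv2.inRange()` and `cv2.bitwise_and()` can be combined into a simple color-selection step. The snippet below is only illustrative; the thresholds and the helper name are assumptions and it is not part of the pipeline used later in this notebook.
```
# Illustrative color selection: keep only near-white pixels (threshold values are guesses)
import numpy as np
import cv2

def select_white(img, lower=(200, 200, 200), upper=(255, 255, 255)):
    """Return a copy of `img` with everything outside the given RGB range blacked out."""
    mask = cv2.inRange(img, np.array(lower, dtype=np.uint8), np.array(upper, dtype=np.uint8))
    return cv2.bitwise_and(img, img, mask=mask)
```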
## Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
```
import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=15):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
# initialize the accumulators
imshape = img.shape
left_slopes = []
left_x = 0
left_y = 0
right_slopes = []
right_x = 0
right_y = 0
# loop over the lines
for line in lines:
for x1,y1,x2,y2 in line:
            slope = (y2 - y1)/(x2 - x1) if x2 != x1 else float('inf')  # guard against vertical segments (avoids ZeroDivisionError)
if slope < -.5 and slope > -1: #left
left_slopes.append(slope)
left_x += x1 + x2
left_y += y1 + y2
elif slope > .5 and slope < .8: #right
right_slopes.append(slope)
right_x += x1 + x2
right_y += y1 + y2
left_slope = sum(left_slopes) / len(left_slopes)
avg_left_x = int(left_x / (2 * len(left_slopes)))
avg_left_y = int(left_y / (2 * len(left_slopes)))
    y_left_1 = imshape[0]
    y_left_2 = int((34/54) * imshape[0])  # top of the region of interest, mirroring y_right_2 below
    x_left_1 = int((y_left_1 - avg_left_y) / left_slope + avg_left_x)
    x_left_2 = int((y_left_2 - avg_left_y) / left_slope + avg_left_x)
right_slope = sum(right_slopes) / len(right_slopes)
avg_right_x = int(right_x / (2 * len(right_slopes)))
avg_right_y = int(right_y / (2 * len(right_slopes)))
y_right_1 = imshape[0]
x_right_1 = int((y_right_1 - avg_right_y) / right_slope + avg_right_x)
y_right_2 = int((34/54) * imshape[0])
x_right_2 = int((y_right_2 - avg_right_y) / right_slope + avg_right_x)
cv2.line(img, (x_left_1, y_left_1), (x_left_2, y_left_2), color, thickness)
cv2.line(img, (x_right_1, y_right_1), (x_right_2, y_right_2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
    `img` is the output of hough_lines(): a blank (all black) image with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ)
```
## Test Images
Build your pipeline to work on the images in the directory "test_images"
**You should make sure your pipeline works well on these images before you try the videos.**
```
import os
os.listdir("test_images/")
```
## Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
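One lightweight way to explore those parameters (a sketch, not part of the original notebook) is to loop over a few Canny threshold pairs and eyeball the resulting edge images before committing to values.
```
# Quick visual sweep over a few Canny threshold pairs (the values are just starting points)
for low, high in [(50, 150), (75, 200), (100, 250)]:
    edges = canny(gaussian_blur(grayscale(image), 5), low, high)
    plt.figure()
    plt.imshow(edges, cmap='gray')
    plt.title('Canny thresholds {}-{}'.format(low, high))
```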
```
def draw(image):
gray=grayscale(image)
blur=gaussian_blur(gray,5)
edge=canny(blur,50,150)
imshape=image.shape
vertices = np.array([[(0, imshape[0]),((43/96) * imshape[1], (33/54) * imshape[0]), ((54/96) * imshape[1], (33/54) * imshape[0]), (imshape[1],imshape[0])]], dtype=np.int32)
masked_edges = region_of_interest(edge, vertices)
rho = 1
theta = np.pi/180
threshold = 5
min_line_length = 3
max_line_gap = 1
    lines = hough_lines(masked_edges, rho, theta, threshold, min_line_length, max_line_gap)
lines_overlayed = weighted_img(lines, image)
return lines_overlayed
for i in os.listdir("test_images/"):
image = mpimg.imread('test_images/'+i)
out=draw(image)
plt.imsave(i[:-4]+"out.jpg",out)
```
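The loop above writes the annotated images next to the notebook; a small variant (an assumption, not part of the original notebook) saves them into the `test_images_output` directory mentioned earlier.
```
# Save annotated copies into test_images_output/, as suggested in the instructions above
os.makedirs("test_images_output", exist_ok=True)
for name in os.listdir("test_images/"):
    annotated = draw(mpimg.imread("test_images/" + name))
    plt.imsave(os.path.join("test_images_output", name[:-4] + "_out.jpg"), annotated)
```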
## Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
`solidWhiteRight.mp4`
`solidYellowLeft.mp4`
**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
**If you get an error that looks like this:**
```
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
```
**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
```
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
result=draw(image)
return result
```
Let's try the one with the solid white lane on the right first ...
```
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
```
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
```
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
```
## Improve the draw_lines() function
**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
**Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
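For reference, the `draw_lines()` helper above already follows this recipe: it splits segments into left and right groups by the sign of their slope, averages each group, and extrapolates with the point-slope form. A minimal sketch of that extrapolation step (the same math as the code above, written out on its own):
```
# Point-slope extrapolation: given an averaged point (x_avg, y_avg) and slope m,
# solve y - y_avg = m * (x - x_avg) for x at a chosen y (image bottom or top of the ROI).
def extrapolate_x(y, x_avg, y_avg, slope):
    return int((y - y_avg) / slope + x_avg)
```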
Now for the one with the solid yellow lane on the left. This one's more tricky!
```
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
```
## Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this IPython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
## Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
```
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
```
| github_jupyter |
```
from simforest import SimilarityForestClassifier, SimilarityForestRegressor
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.datasets import load_svmlight_file
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import f1_score
from scipy.stats import pearsonr
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from bias import create_numerical_feature_classification, create_categorical_feature_classification
from bias import create_numerical_feature_regression, create_categorical_feature_regression
from bias import get_permutation_importances, bias_experiment, plot_bias
sns.set_style('whitegrid')
SEED = 42
import warnings
warnings.filterwarnings('ignore')
```
# Read the data
```
X, y = load_svmlight_file('data/heart')
X = X.toarray().astype(np.float32)
y[y==-1] = 0
features = [f'f{i+1}' for i in range(X.shape[1])]
df = pd.DataFrame(X, columns=features)
df.head()
```
# Add new numerical feature
Create a synthetic column that is strongly correlated with the target.
Each value is calculated according to the formula:
v = y * a + random(-b, b)
so it is the target value scaled by `a` plus some uniform noise.
Then a fraction of the values is permuted to reduce the correlation.
In this case, a=10, b=5, fraction=0.05.
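`create_numerical_feature_classification()` lives in the local `bias` module, so its implementation is not shown here. The sketch below only illustrates what such a helper might do, following the description above; the function name, parameters, and details are assumptions.
```
# Minimal sketch of the synthetic-feature recipe described above (assumed implementation)
def make_numerical_feature(y, a=10, b=5, fraction=0.05, seed=SEED):
    rng = np.random.RandomState(seed)
    feature = y * a + rng.uniform(-b, b, size=len(y))           # scaled target plus uniform noise
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    feature[idx] = rng.permutation(feature[idx])                # permute a fraction to weaken the correlation
    corr = np.corrcoef(feature, y)[0, 1]
    return feature, corr
```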
```
if 'new_feature' in df.columns:
df.pop('new_feature')
new_feature, corr = create_numerical_feature_classification(y, fraction=0.05, seed=SEED, verbose=True)
df = pd.concat([pd.Series(new_feature, name='new_feature'), df], axis=1)
plt.scatter(new_feature, y, alpha=0.3)
plt.xlabel('Feature value')
plt.ylabel('Target')
plt.title('Synthetic numerical feature');
```
# Random Forest feature importance
Random Forest offers a simple way to measure feature importance: a feature is considered important if it frequently reduced node impurity while the trees were being fit.
We can see that adding a feature strongly correlated with the target improved the model's performance compared to the results we obtained without it. What is more, this new feature dominated the predictions: the plot shows it is far more important than any of the original features.
```
X_train, X_test, y_train, y_test = train_test_split(
df, y, test_size=0.3, random_state=SEED)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
rf = RandomForestClassifier(random_state=SEED)
rf.fit(X_train, y_train)
rf_pred = rf.predict(X_test)
print(f'Random Forest f1 score: {round(f1_score(y_test, rf_pred), 3)}')
df_rf_importances = pd.DataFrame(rf.feature_importances_, index=df.columns.values, columns=['importance'])
df_rf_importances = df_rf_importances.sort_values(by='importance', ascending=False)
df_rf_importances.plot()
plt.title('Biased Random Forest feature importance');
```
# Permutation feature importance
The impurity-based feature importance of Random Forests suffers from being computed on statistics derived from the training dataset: the importances can be high even for features that are not predictive of the target variable, as long as the model has the capacity to use them to overfit.
Furthermore, Random Forest feature importance is biased towards high-cardinality numerical features.
In this experiment, we will use permutation feature importance to assess how strongly Random Forest and Similarity Forest
depend on the synthetic feature. This method is more reliable, and it also works for Similarity Forest, which does not expose an impurity-based feature importance.
Source: https://scikit-learn.org/stable/auto_examples/inspection/plot_permutation_importance.html
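`get_permutation_importances()` and `bias_experiment()` are wrappers from the local `bias` module. Under the hood, permutation importance can be computed directly with scikit-learn, roughly as sketched below (not the wrapper's actual code).
```
# Sketch: permutation importance for the fitted Random Forest on the test split
from sklearn.inspection import permutation_importance

perm = permutation_importance(rf, X_test, y_test, n_repeats=10, random_state=SEED)
perm_series = pd.Series(perm.importances_mean, index=df.columns.values).sort_values(ascending=False)
perm_series.plot(kind='bar', title='Permutation importance (test set)');
```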
```
sf = SimilarityForestClassifier(n_estimators=100, random_state=SEED).fit(X_train, y_train)
perm_importance_results = get_permutation_importances(rf, sf,
X_train, y_train, X_test, y_test,
corr, df.columns.values, plot=True)
fraction_range = [0.0, 0.02, 0.05, 0.08, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 1.0]
correlations, rf_scores, sf_scores, permutation_importances = bias_experiment(df, y,
'classification', 'numerical',
fraction_range, SEED)
plot_bias(fraction_range, correlations,
rf_scores, sf_scores,
permutation_importances, 'heart')
```
# New categorical feature
```
if 'new_feature' in df.columns:
df.pop('new_feature')
new_feature, corr = create_categorical_feature_classification(y, fraction=0.05, seed=SEED, verbose=True)
df = pd.concat([pd.Series(new_feature, name='new_feature'), df], axis=1)
df_category = pd.concat([pd.Series(new_feature, name='new_feature'), pd.Series(y, name='y')], axis=1)
fig = plt.figure(figsize=(8, 6))
sns.countplot(data=df_category, x='new_feature', hue='y')
plt.xlabel('Feature value, grouped by class')
plt.ylabel('Count')
plt.title('Synthetic categorical feature', fontsize=16);
X_train, X_test, y_train, y_test = train_test_split(
df, y, test_size=0.3, random_state=SEED)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
rf = RandomForestClassifier(random_state=SEED).fit(X_train, y_train)
sf = SimilarityForestClassifier(n_estimators=100, random_state=SEED).fit(X_train, y_train)
perm_importance_results = get_permutation_importances(rf, sf,
X_train, y_train, X_test, y_test,
corr, df.columns.values, plot=True)
correlations, rf_scores, sf_scores, permutation_importances = bias_experiment(df, y,
'classification', 'categorical',
fraction_range, SEED)
plot_bias(fraction_range, correlations,
rf_scores, sf_scores,
permutation_importances, 'heart')
```
# Regression, numerical feature
```
X, y = load_svmlight_file('data/mpg')
X = X.toarray().astype(np.float32)
features = [f'f{i+1}' for i in range(X.shape[1])]
df = pd.DataFrame(X, columns=features)
df.head()
if 'new_feature' in df.columns:
df.pop('new_feature')
new_feature, corr = create_numerical_feature_regression(y, fraction=0.2, seed=SEED, verbose=True)
df = pd.concat([pd.Series(new_feature, name='new_feature'), df], axis=1)
plt.scatter(new_feature, y, alpha=0.3)
plt.xlabel('Feature value')
plt.ylabel('Target')
plt.title('Synthetic numerical feature');
X_train, X_test, y_train, y_test = train_test_split(
df, y, test_size=0.3, random_state=SEED)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
rf = RandomForestRegressor(random_state=SEED).fit(X_train, y_train)
sf = SimilarityForestRegressor(n_estimators=100, random_state=SEED).fit(X_train, y_train)
perm_importance_results = get_permutation_importances(rf, sf,
X_train, y_train, X_test, y_test,
corr, df.columns.values, plot=True)
correlations, rf_scores, sf_scores, permutation_importances = bias_experiment(df, y,
'regression', 'numerical',
fraction_range, SEED)
plot_bias(fraction_range, correlations,
rf_scores, sf_scores,
permutation_importances, 'mpg')
```
# Regression, categorical feature
```
if 'new_feature' in df.columns:
df.pop('new_feature')
new_feature, corr = create_categorical_feature_regression(y, fraction=0.15, seed=SEED, verbose=True)
df = pd.concat([pd.Series(new_feature, name='new_feature'), df], axis=1)
plt.scatter(new_feature, y, alpha=0.3)
plt.xlabel('Feature value')
plt.ylabel('Target')
plt.title('Synthetic categorical feature');
X_train, X_test, y_train, y_test = train_test_split(
df, y, test_size=0.3, random_state=SEED)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
rf = RandomForestRegressor(random_state=SEED).fit(X_train, y_train)
sf = SimilarityForestRegressor(n_estimators=100, random_state=SEED).fit(X_train, y_train)
perm_importance_results = get_permutation_importances(rf, sf,
X_train, y_train, X_test, y_test,
corr, df.columns.values, plot=True)
correlations, rf_scores, sf_scores, permutation_importances = bias_experiment(df, y,
'regression', 'categorical',
fraction_range, SEED)
plot_bias(fraction_range, correlations,
rf_scores, sf_scores,
permutation_importances, 'mpg')
```
| github_jupyter |