path | concatenated_notebook
---|---
notebook_EDA__0zEZKd8j.ipynb | ###Markdown
**Space X Falcon 9 First Stage Landing Prediction**
Lab 2: Data wrangling
Estimated time needed: **60** minutes
In this lab, we will perform some Exploratory Data Analysis (EDA) to find some patterns in the data and determine what the label for training supervised models should be.
In the data set, there are several different cases where the booster did not land successfully. Sometimes a landing was attempted but failed due to an accident. For example, True Ocean means the mission outcome was a successful landing in a specific region of the ocean, while False Ocean means an unsuccessful landing in a specific region of the ocean. True RTLS means a successful landing on a ground pad and False RTLS means an unsuccessful landing on a ground pad. True ASDS means a successful landing on a drone ship and False ASDS means an unsuccessful landing on a drone ship.
In this lab we will mainly convert those outcomes into training labels, where `1` means the booster landed successfully and `0` means it was unsuccessful.
Falcon 9 first stage will land successfully. Several examples of an unsuccessful landing are shown here.
Objectives
Perform exploratory Data Analysis and determine Training Labels
* Exploratory Data Analysis
* Determine Training Labels
*** Import Libraries and Define Auxiliary Functions
We will import the following libraries.
###Code
# Pandas is a software library written for the Python programming language for data manipulation and analysis.
import pandas as pd
#NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays
import numpy as np
###Output
_____no_output_____
###Markdown
Data Analysis
Load the Space X dataset from the last section.
###Code
df=pd.read_csv("https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/dataset_part_1.csv")
df.head(10)
###Output
_____no_output_____
###Markdown
Identify and calculate the percentage of the missing values in each attribute
###Code
df.isnull().sum()/df.count()*100
###Output
_____no_output_____
###Markdown
Identify which columns are numerical and categorical:
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
TASK 1: Calculate the number of launches on each site
The data contains several Space X launch facilities: Cape Canaveral Space Launch Complex 40 (CCAFS SLC 40), Vandenberg Air Force Base Space Launch Complex 4E (VAFB SLC 4E), and Kennedy Space Center Launch Complex 39A (KSC LC 39A). The location of each launch is placed in the column LaunchSite. Next, let's see the number of launches for each site. Use the method value_counts() on the column LaunchSite to determine the number of launches on each site:
###Code
# Apply value_counts() on column LaunchSite
df.LaunchSite.value_counts()
###Output
_____no_output_____
###Markdown
Each launch aims at a dedicated orbit. Here are some common orbit types:
* LEO: Low Earth orbit (LEO) is an Earth-centred orbit with an altitude of 2,000 km (1,200 mi) or less (approximately one-third of the radius of Earth),\[1] or with at least 11.25 periods per day (an orbital period of 128 minutes or less) and an eccentricity less than 0.25.\[2] Most of the manmade objects in outer space are in LEO \[1].
* VLEO: Very Low Earth Orbits (VLEO) can be defined as the orbits with a mean altitude below 450 km. Operating in these orbits can provide a number of benefits to Earth observation spacecraft, as the spacecraft operates closer to what it observes \[2].
* GTO: A geosynchronous orbit is a high Earth orbit that allows satellites to match Earth's rotation. Located at 22,236 miles (35,786 kilometers) above Earth's equator, this position is a valuable spot for monitoring weather, communications and surveillance. "Because the satellite orbits at the same speed that the Earth is turning, the satellite seems to stay in place over a single longitude, though it may drift north to south," NASA wrote on its Earth Observatory website \[3].
* SSO (or SO): A Sun-synchronous orbit, also called a heliosynchronous orbit, is a nearly polar orbit around a planet, in which the satellite passes over any given point of the planet's surface at the same local mean solar time \[4].
* ES-L1: At the Lagrange points the gravitational forces of the two large bodies cancel out in such a way that a small object placed in orbit there is in equilibrium relative to the center of mass of the large bodies. L1 is one such point between the sun and the earth \[5].
* HEO: A highly elliptical orbit is an elliptic orbit with high eccentricity, usually referring to one around Earth \[6].
* ISS: A modular space station (habitable artificial satellite) in low Earth orbit. It is a multinational collaborative project between five participating space agencies: NASA (United States), Roscosmos (Russia), JAXA (Japan), ESA (Europe), and CSA (Canada) \[7].
* MEO: Geocentric orbits ranging in altitude from 2,000 km (1,200 mi) to just below geosynchronous orbit at 35,786 kilometers (22,236 mi). Also known as an intermediate circular orbit. These are most commonly at 20,200 kilometers (12,600 mi), or 20,650 kilometers (12,830 mi), with an orbital period of 12 hours \[8].
* HEO: Geocentric orbits above the altitude of geosynchronous orbit (35,786 km or 22,236 mi) \[9].
* GEO: A circular geosynchronous orbit 35,786 kilometres (22,236 miles) above Earth's equator and following the direction of Earth's rotation \[10].
* PO: A polar orbit, in which a satellite passes above or nearly above both poles of the body being orbited (usually a planet such as the Earth) \[11].
Some of these are shown in the following plot.
TASK 2: Calculate the number and occurrence of each orbit
Use the method .value_counts() to determine the number and occurrence of each orbit in the column Orbit.
###Code
# Apply value_counts on Orbit column
df.Orbit.value_counts()
###Output
_____no_output_____
###Markdown
TASK 3: Calculate the number and occurrence of mission outcome per orbit type
Use the method .value_counts() on the column Outcome to determine the number of landing outcomes, then assign it to a variable landing_outcomes.
###Code
# landing_outcomes = values on Outcome column
landing_outcomes = df.Outcome.value_counts()
###Output
_____no_output_____
###Markdown
True Ocean means the mission outcome was a successful landing in a specific region of the ocean, while False Ocean means an unsuccessful landing in a specific region of the ocean. True RTLS means a successful landing on a ground pad and False RTLS means an unsuccessful landing on a ground pad. True ASDS means a successful landing on a drone ship and False ASDS means an unsuccessful landing on a drone ship. None ASDS and None None represent a failure to land.
###Code
for i,outcome in enumerate(landing_outcomes.keys()):
print(i,outcome)
###Output
0 True ASDS
1 None None
2 True RTLS
3 False ASDS
4 True Ocean
5 False Ocean
6 None ASDS
7 False RTLS
###Markdown
We create a set of outcomes where the first stage did not land successfully:
###Code
bad_outcomes=set(landing_outcomes.keys()[[1,3,5,6,7]])
bad_outcomes
###Output
_____no_output_____
###Markdown
TASK 4: Create a landing outcome label from the Outcome column
Using Outcome, create a list where the element is zero if the corresponding row in Outcome is in the set bad_outcomes; otherwise, it's one. Then assign it to the variable landing_class:
###Code
# landing_class = 0 if bad_outcome
# landing_class = 1 otherwise
landing_class = np.where(df['Outcome'].isin(set(bad_outcomes)), 0, 1)
###Output
_____no_output_____
###Markdown
This variable will be the classification label that represents the outcome of each launch. If the value is zero, the first stage did not land successfully; one means the first stage landed successfully.
###Code
df['Class']=landing_class
df[['Class']].head(8)
df.head(5)
###Output
_____no_output_____
###Markdown
We can use the following line of code to determine the success rate:
###Code
df["Class"].mean()
###Output
_____no_output_____ |
part04.ipynb | ###Markdown
Migrating from Spark to BigQuery via Dataproc -- Part 4
* [Part 1](01_spark.ipynb): The original Spark code, now running on Dataproc (lift-and-shift).
* [Part 2](02_gcs.ipynb): Replace HDFS with Google Cloud Storage. This enables job-specific clusters. (cloud-native)
* [Part 3](03_automate.ipynb): Automate everything, so that we can run in a job-specific cluster. (cloud-optimized)
* [Part 4](04_bigquery.ipynb): Load CSV into BigQuery, use BigQuery. (modernize)
* [Part 5](05_functions.ipynb): Using Cloud Functions, launch the analysis every time there is a new file in the bucket. (serverless)
Catch-up cell
###Code
# Catch-up cell. Run if you did not do previous notebooks of this sequence
!wget http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz
BUCKET='julio_demo' # CHANGE
!gsutil cp kdd* gs://$BUCKET/
###Output
--2020-03-02 14:10:34-- http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz
Resolving kdd.ics.uci.edu (kdd.ics.uci.edu)... 128.195.1.86
Connecting to kdd.ics.uci.edu (kdd.ics.uci.edu)|128.195.1.86|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2144903 (2.0M) [application/x-gzip]
Saving to: ‘kddcup.data_10_percent.gz.7’
kddcup.data_10_perc 100%[===================>] 2.04M 5.62MB/s in 0.4s
2020-03-02 14:10:34 (5.62 MB/s) - ‘kddcup.data_10_percent.gz.7’ saved [2144903/2144903]
Copying file://kddcup.data_10_percent.gz [Content-Type=application/octet-stream]...
Copying file://kddcup.data_10_percent.gz.1 [Content-Type=application/octet-stream]...
Copying file://kddcup.data_10_percent.gz.2 [Content-Type=application/octet-stream]...
Copying file://kddcup.data_10_percent.gz.3 [Content-Type=application/octet-stream]...
\ [4 files][ 8.2 MiB/ 8.2 MiB]
==> NOTE: You are performing a sequence of gsutil operations that may
run significantly faster if you instead use gsutil -m cp ... Please
see the -m section under "gsutil help options" for further information
about when gsutil -m can be advantageous.
Copying file://kddcup.data_10_percent.gz.4 [Content-Type=application/octet-stream]...
Copying file://kddcup.data_10_percent.gz.5 [Content-Type=application/octet-stream]...
Copying file://kddcup.data_10_percent.gz.6 [Content-Type=application/octet-stream]...
Copying file://kddcup.data_10_percent.gz.7 [Content-Type=application/octet-stream]...
/ [8 files][ 16.4 MiB/ 16.4 MiB]
Operation completed over 8 objects/16.4 MiB.
###Markdown
Load data into BigQuery
###Code
!bq mk sparktobq
BUCKET='julio_demo' # CHANGE
!bq --location=US load --autodetect --source_format=CSV sparktobq.kdd_cup_raw gs://$BUCKET/kddcup.data_10_percent.gz
###Output
Waiting on bqjob_r1bfc91c724bd15a6_000001709b95a31f_1 ... (22s) Current status: DONE
###Markdown
BigQuery queries
We can replace much of the initial exploratory code with SQL statements.
###Code
%%bigquery
SELECT * FROM sparktobq.kdd_cup_raw LIMIT 5
###Output
_____no_output_____
###Markdown
Ooops. There are no column headers. Let's fix this.
###Code
%%bigquery
CREATE OR REPLACE TABLE sparktobq.kdd_cup AS
SELECT
int64_field_0 AS duration,
string_field_1 AS protocol_type,
string_field_2 AS service,
string_field_3 AS flag,
int64_field_4 AS src_bytes,
int64_field_5 AS dst_bytes,
int64_field_6 AS wrong_fragment,
int64_field_7 AS urgent,
int64_field_8 AS hot,
int64_field_9 AS num_failed_logins,
int64_field_11 AS num_compromised,
int64_field_13 AS su_attempted,
int64_field_14 AS num_root,
int64_field_15 AS num_file_creations,
string_field_41 AS label
FROM
sparktobq.kdd_cup_raw
%%bigquery
SELECT * FROM sparktobq.kdd_cup LIMIT 5
###Output
_____no_output_____
###Markdown
Spark analysis
Replace the Spark analysis with BigQuery SQL.
###Code
%%bigquery connections_by_protocol
SELECT COUNT(*) AS count
FROM sparktobq.kdd_cup
GROUP BY protocol_type
ORDER by count ASC
connections_by_protocol
###Output
_____no_output_____
###Markdown
Spark SQL to BigQuery
A pretty clean translation.
###Code
%%bigquery attack_stats
SELECT
protocol_type,
CASE label
WHEN 'normal.' THEN 'no attack'
ELSE 'attack'
END AS state,
COUNT(*) as total_freq,
ROUND(AVG(src_bytes), 2) as mean_src_bytes,
ROUND(AVG(dst_bytes), 2) as mean_dst_bytes,
ROUND(AVG(duration), 2) as mean_duration,
SUM(num_failed_logins) as total_failed_logins,
SUM(num_compromised) as total_compromised,
SUM(num_file_creations) as total_file_creations,
SUM(su_attempted) as total_root_attempts,
SUM(num_root) as total_root_acceses
FROM sparktobq.kdd_cup
GROUP BY protocol_type, state
ORDER BY 3 DESC
%matplotlib inline
ax = attack_stats.plot.bar(x='protocol_type', subplots=True, figsize=(10,25))
###Output
_____no_output_____
###Markdown
Write out report
Copy the output to GCS so that we can safely delete the AI Platform Notebooks instance.
###Code
import google.cloud.storage as gcs
# save locally
ax[0].get_figure().savefig('report.png');
connections_by_protocol.to_csv("connections_by_protocol.csv")
# upload to GCS
bucket = gcs.Client().get_bucket(BUCKET)
for blob in bucket.list_blobs(prefix='sparktobq/'):
blob.delete()
for fname in ['report.png', 'connections_by_protocol.csv']:
bucket.blob('sparktobq/{}'.format(fname)).upload_from_filename(fname)
###Output
_____no_output_____ |
aula-02/alura_learning_aula_02.ipynb | ###Markdown
LESSON 1
Introduction
In this section we just show that google.colab (the notebook) interprets/runs Python code.
###Code
print ("Gilberto Raitz")
print ("aula de data science alura - QUARENTENADOS")
###Output
Gilberto Raitz
aula de data science alura - QUARENTENADOS
###Markdown
Import of the libraries to be used in the notebook.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Loading the files
Data on movie ratings was loaded from the site https://grouplens.org/datasets/movielens/
###Code
filmes = pd.read_csv("https://raw.githubusercontent.com/graitz/alura-data-science/master/aula-01/ml-latest-small/movies.csv")
filmes.head()
# to load directly from the repository, it has to be publicly accessible.
filmes.columns = ["filmeid", "titulo", "genero"]
filmes.head()
avaliacoes = pd.read_csv("https://raw.githubusercontent.com/graitz/alura-data-science/master/aula-01/ml-latest-small/ratings.csv")
avaliacoes.head()
avaliacoes.shape
###Output
_____no_output_____
###Markdown
Renaming the columns of the data.
###Code
avaliacoes.columns = ["userarioid", "filmeid", "nota", "momento"]
avaliacoes.head()
###Output
_____no_output_____
###Markdown
Statistical evaluation of the data.
###Code
avaliacoes.describe()
avaliacoes['nota']
avaliacoes.query("filmeid==1").describe()
avaliacoes.query("filmeid==1")["nota"].mean()
avaliacoes.query("filmeid==1").mean()
###Output
_____no_output_____
###Markdown
Extracting a variable for a single movie so it can be analyzed separately.
###Code
# the code is not well organized; a cleaner approach would be to extract the movie-one variable and do all the analysis of this movie separately
avaliacoes_filme_1 = avaliacoes.query("filmeid==1")
avaliacoes_filme_1.head()
notas_medias_por_filme = avaliacoes.groupby("filmeid")["nota"].mean()
#notas_medias_por_filme.head()
notas_medias_por_filme
filmes
# to join the table (movie name) with the mean, we have to ask: did every movie receive votes?
#filmes["nota_media"] = notas_medias_por_filme
#filmes.head()
# assuming the row counts match between title and nota_media and the order is the same.
# I don't want to run the risk that tomorrow the movies no longer match exactly and I have to change the dataset
###Output
_____no_output_____
###Markdown
CHALLENGE 1
Find which movies have no ratings.
###Code
filmes_com_media = filmes.join(notas_medias_por_filme, on="filmeid")
filmes_com_media.head()
filmes_com_media.sort_values("nota")
###Output
_____no_output_____
###Markdown
CHALLENGE 02
Change the column name to "mean" (média) after the join.
###Code
filmes_com_media.sort_values("nota", ascending=False)
###Output
_____no_output_____
###Markdown
CHALLENGE 03
Add how many ratings each movie received.
###Code
avaliacoes.query("filmeid in [1,2,102084]")
avaliacoes.query("filmeid == 1").plot()
avaliacoes.query("filmeid == 1")['nota'].plot(kind='hist', title='Avaliações do Filme Toy Story')
#plt.title("Avaliação do Filme Toy Story")
#plt.show()
avaliacoes.query("filmeid == 2")['nota'].plot(kind='hist')
avaliacoes.query("filmeid == 102084")['nota'].plot(kind='hist')
###Output
_____no_output_____
###Markdown
CHALLENGES 4 - 7
04 - Round the means (mean rating column) to two decimal places.
05 - Find the movie genres (which unique ones exist) (this is the tricky one).
06 - Count the number of appearances of each genre.
07 - Plot the chart of appearances of each genre. It can be a bar-type chart.
LESSON 2
Start of the quarantine lesson.
Data.
###Code
filmes["genero"].str.get_dummies("|")
filmes["genero"].str.get_dummies("|").sum()
filmes["genero"].str.get_dummies("|").sum(axis=1)
filmes["genero"].str.get_dummies("|").sum(axis=1).value_counts()
filmes["genero"].str.get_dummies("|").sum()
filmes["genero"].str.get_dummies("|").sum().sort_values()
filmes["genero"].str.get_dummies("|").sum().sort_values(ascending=False)
# this data is a Series because it has only one column; drama, comedy, etc. are the index of the table
filmes.index
filmes["genero"].str.get_dummies("|").sum().sort_values(ascending=False).index
filmes["genero"].str.get_dummies("|").sum().sort_values(ascending=False).values
filmes["genero"].str.get_dummies("|").sum().sort_index()
# makes no sense at all for presenting the data
filmes["genero"].str.get_dummies("|").sum().sort_values(ascending=False).plot()
filmes["genero"].str.get_dummies("|").sum().sort_values(ascending=False).plot(
kind='pie',
title='Categorias de Filmes',
figsize=(10,10))
# never give me a pie chart; the brain is not made for comparing areas, e.g. comparing the area of comedy with drama
filmes["genero"].str.get_dummies("|").sum().sort_values(ascending=False).plot(
kind='bar',
title='Numero de Filmes por Categorias',
figsize=(10,10),)
plt.show()
import seaborn as sns
filmes_por_genero = filmes["genero"].str.get_dummies("|").sum().sort_index()
plt.figure(figsize=(20,10))
plt.title("Caterigorias de Filmes")
sns.barplot(x=filmes_por_genero.index,
y=filmes_por_genero.values)
plt.show
filmes_por_genero = filmes["genero"].str.get_dummies("|").sum().sort_values()
plt.figure(figsize=(20,10))
plt.title("Caterigorias de Filmes")
sns.barplot(x=filmes_por_genero.index,
y=filmes_por_genero.values)
plt.show
filmes_por_genero = filmes["genero"].str.get_dummies("|").sum().sort_values(ascending=False)
plt.figure(figsize=(20,10))
plt.title("Caterigorias de Filmes")
sns.barplot(x=filmes_por_genero.index,
y=filmes_por_genero.values)
plt.show
filmes_por_genero = filmes["genero"].str.get_dummies("|").sum().sort_values(ascending=False)
plt.figure(figsize=(20,10))
plt.title("Caterigorias de Filmes")
sns.barplot(x=filmes_por_genero.index,
y=filmes_por_genero.values,
palette=sns.color_palette("BuGn_r", n_colors=len(filmes_por_genero)))
sns.palplot(sns.color_palette("BuGn_r"))
plt.show
filmes_por_genero = filmes["genero"].str.get_dummies("|").sum().sort_values(ascending=False)
plt.figure(figsize=(25,10))
plt.title("Caterigorias de Filmes")
sns.barplot(x=filmes_por_genero.index,
y=filmes_por_genero.values,
palette=sns.color_palette("BuGn_r", n_colors=len(filmes_por_genero)+15))
sns.palplot(sns.color_palette("BuGn_r"))
plt.show
filmes_por_genero = filmes["genero"].str.get_dummies("|").sum().sort_values(ascending=False)
sns.set_style("whitegrid")
plt.figure(figsize=(20,10))
plt.title("Caterigorias de Filmes")
sns.barplot(x=filmes_por_genero.index,
y=filmes_por_genero.values,
palette=sns.color_palette("BuGn_r", n_colors=len(filmes_por_genero)+15))
sns.palplot(sns.color_palette("BuGn_r"))
plt.show
###Output
_____no_output_____
###Markdown
CHALLENGE 1
Rotate the ticks (the genre names).
###Code
avaliacoes_filme_1 = avaliacoes.query("filmeid==1")["nota"]
print(avaliacoes_filme_1.mean())
avaliacoes_filme_1.plot(kind="hist")
plt.show()
avaliacoes_filme_2 = avaliacoes.query("filmeid==2")["nota"]
print(avaliacoes_filme_2.mean())
avaliacoes_filme_2.plot(kind="hist")
plt.show()
avaliacoes_filme_1.describe()
avaliacoes_filme_2.describe()
filmes_com_media.sort_values("nota", ascending=False)[2000:2500]
def plot_filme(n):
avaliacoes_filme = avaliacoes.query(f"filmeid=={n}")["nota"]
avaliacoes_filme.plot(kind="hist")
return avaliacoes_filme.describe()
plot_filme(6242)
###Output
_____no_output_____
###Markdown
CHALLENGE 2
Compare movies with similar ratings but different distributions.
###Code
def plot_filme(n):
avaliacoes_filme = avaliacoes.query(f"filmeid=={n}")["nota"]
avaliacoes_filme.plot(kind="hist")
plt.show()
avaliacoes_filme.plot.box()
plt.show()
return avaliacoes_filme.describe()
plot_filme(6242)
# upper whisker: max value
# lower whisker: min value
# line inside the rectangle: median
# upper edge of the rectangle: 75%
# lower edge of the rectangle: 25%
# pandas library and max and min
###Output
_____no_output_____
###Markdown
CHALLENGE 3
Take the 10 movies with the most votes and make their boxplots.
###Code
sns.boxplot(data = avaliacoes.query("filmeid in [1,2,919,46578]"), x="filmeid", y="nota")
###Output
_____no_output_____
###Markdown
CHALLENGE 4
The boxplot should have an adequate size and the movie names on the ticks.
CHALLENGE 5
Calculate the mode, mean and median of the movies. Explore movies with ratings closest to 0.5, 1, 3 and 5.
CHALLENGE 6
Plot the boxplot and the histogram side by side (in the same figure or in separate figures).
CHALLENGE 7
Chart of mean ratings per year.
Tip: there are entries that do not have a year.
###Code
###Output
_____no_output_____ |
CourseraRL_HW_Week1_SimpleGymProblem_ipynb_.ipynb | ###Markdown
OpenAI Gym
We're going to spend the next several weeks learning algorithms that solve decision processes, so we need some interesting decision problems to test our algorithms on. That's where OpenAI Gym comes into play. It's a Python library that wraps many classical decision problems, including robot control, videogames and board games. So here's how it works:
###Code
import gym
import matplotlib.pyplot as plt  # needed for plt.imshow below
env = gym.make("MountainCar-v0")
env.reset()
plt.imshow(env.render('rgb_array'))
print("Observation space:", env.observation_space)
print("Action space:", env.action_space)
###Output
Observation space: Box(-1.2000000476837158, 0.6000000238418579, (2,), float32)
Action space: Discrete(3)
###Markdown
Note: if you're running this on your local machine, you'll see a window pop up with the image above. Don't close it, just alt-tab away.
Gym interface
The three main methods of an environment are:
* `reset()`: reset environment to the initial state, _return first observation_
* `render()`: show current environment state (a more colorful version :) )
* `step(a)`: commit action `a` and return `(new_observation, reward, is_done, info)`
  * `new_observation`: an observation right after committing the action `a`
  * `reward`: a number representing your reward for committing action `a`
  * `is_done`: True if the MDP has just finished, False if still in progress
  * `info`: some auxiliary stuff about what just happened. For now, ignore it.
###Code
obs0 = env.reset()
print("initial observation code:", obs0)
# Note: in MountainCar, observation is just two numbers: car position and velocity
print("taking action 2 (right)")
new_obs, reward, is_done, _ = env.step(2)
print("new observation code:", new_obs)
print("reward:", reward)
print("is game over?:", is_done)
# Note: as you can see, the car has moved to the right slightly (around 0.0005)
###Output
taking action 2 (right)
new observation code: [-0.49878691 0.00082022]
reward: -1.0
is game over?: False
###Markdown
Play with it
Below is the code that drives the car to the right. However, if you simply use the default policy, the car will not reach the flag at the far right due to gravity.
__Your task__ is to fix it. Find a strategy that reaches the flag. You are not required to build any sophisticated algorithms for now, and you definitely don't need to know any reinforcement learning for this. Feel free to hard-code :)
###Code
from IPython import display
# Create env manually to set time limit. Please don't change this.
TIME_LIMIT = 250
env = gym.wrappers.TimeLimit(
gym.envs.classic_control.MountainCarEnv(),
max_episode_steps=TIME_LIMIT + 1,
)
actions = {'left': 0, 'stop': 1, 'right': 2}
# def policy(obs, t):
# # Write the code for your policy here. You can use the observation
# # (a tuple of position and velocity), the current time step, or both,
# # if you want.
# position, velocity = obs
# # This is an example policy. You can try running it, but it will not work.
# # Your goal is to fix that. You don't need anything sophisticated here,
# # and you can hard-code any policy that seems to work.
# # Hint: think how you would make a swing go farther and faster.
# return actions['right']
def policy(obs, t):
if obs[1] >= 0:
return 2
return 0
plt.figure(figsize=(4, 3))
display.clear_output(wait=True)
obs = env.reset()
for t in range(TIME_LIMIT):
plt.gca().clear()
action = policy(obs, t) # Call your policy
obs, reward, done, _ = env.step(action) # Pass the action chosen by the policy to the environment
# We don't do anything with reward here because MountainCar is a very simple environment,
# and reward is a constant -1. Therefore, your goal is to end the episode as quickly as possible.
# Draw game image on display.
plt.imshow(env.render('rgb_array'))
display.display(plt.gcf())
display.clear_output(wait=True)
if done:
print("Well done!")
break
else:
print("Time limit exceeded. Try again.")
display.clear_output(wait=True)
from submit import submit_interface
submit_interface(policy, '[email protected]', '15m1hALHplrDzt0U')
###Output
Your car ended in state {x=0.5198254648246383, v=0.04033598265487886}.
The flag is located roughly at x=0.46. You reached it!
Submitted to Coursera platform. See results on assignment page!
|
labs/C1_data_analysis/06_eda/laboratorio_06.ipynb | ###Markdown
MAT281 - Laboratorio N°06
Class objectives
* Reinforce the basic concepts of E.D.A.
Contents
* [Problem 01](p1)
Problem 01
The **Iris dataset** is a data set containing samples of three Iris species (Iris setosa, Iris virginica and Iris versicolor). Four traits were measured for each sample: the length and width of the sepal and petal, in centimeters.
The first step is to load the data set and look at the first rows that make it up:
###Code
# libraries
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
pd.set_option('display.max_columns', 500) # Show more columns of the dataframes
# Render matplotlib plots inside jupyter notebook/lab
%matplotlib inline
# load data
df = pd.read_csv(os.path.join("data","iris_contaminados.csv"))
df.columns = ['sepalLength',
'sepalWidth',
'petalLength',
'petalWidth',
'species']
df.head()
###Output
_____no_output_____
###Markdown
Experiment basics
The first step is to identify the variables that influence the study and their nature.
* **species**:
  * Description: name of the Iris species.
  * Data type: *string*
  * Constraints: only three types exist (setosa, virginica and versicolor).
* **sepalLength**:
  * Description: sepal length.
  * Data type: *integer*.
  * Constraints: values are between 4.0 and 7.0 cm.
* **sepalWidth**:
  * Description: sepal width.
  * Data type: *integer*.
  * Constraints: values are between 2.0 and 4.5 cm.
* **petalLength**:
  * Description: petal length.
  * Data type: *integer*.
  * Constraints: values are between 1.0 and 7.0 cm.
* **petalWidth**:
  * Description: petal width.
  * Data type: *integer*.
  * Constraints: values are between 0.1 and 2.5 cm.
Your goal is to carry out a correct **E.D.A.**; to do so, follow these instructions:
1. Count the elements of the **species** column and correct them according to your judgment. Replace the nan values with "default".
###Code
df['species'] = df['species'].str.strip().str.lower()
df['species'] = df['species'].replace(np.nan, 'default')
df['species'].value_counts()
###Output
_____no_output_____
###Markdown
2. Make a box-plot of the length and width of the petals and sepals. Replace the nan values with **0**.
###Code
sns.boxplot(data=df)
###Output
_____no_output_____
###Markdown
3. A range of valid values for the petal and sepal lengths and widths was defined above. Add a column named **label** that identifies which of these values fall outside the valid range.
###Code
# Mark each row as True when all four measurements fall inside their valid ranges,
# and False when at least one value is out of range.
df['label'] = (
    df['sepalLength'].between(4.0, 7.0)
    & df['sepalWidth'].between(2.0, 4.5)
    & df['petalLength'].between(1.0, 7.0)
    & df['petalWidth'].between(0.1, 2.5)
)
df['label'].value_counts()
###Output
_____no_output_____
###Markdown
4. Make one plot of *sepalLength* vs *petalLength* and another of *sepalWidth* vs *petalWidth*, categorized by the **label** column. Draw conclusions from your results.
###Code
sns.lineplot(
x='sepalLength',
y='petalLength',
hue='label',
data=df,
ci = None,
)
sns.lineplot(
x='sepalWidth',
y='petalWidth',
hue='label',
data=df,
ci = None,
)
###Output
_____no_output_____
###Markdown
5. Filter the valid data and make a plot of *sepalLength* vs *petalLength* categorized by the **species** column.
###Code
# Keep only the rows marked as valid and plot sepalLength vs petalLength by species
df_valid = df[df['label']]
sns.lineplot(
    x='sepalLength',
    y='petalLength',
    hue='species',
    data=df_valid,
    ci = None,
)
###Output
_____no_output_____ |
docs/notebooks/GITM.ipynb | ###Markdown
Interpolation
###Code
print(model.rho([[70,30.,440000.],[70,40.,400000.],[90,40.,400000.],[120,40.,440000.]]))
model.rho?
var="rho"
grid = np.ndarray(shape=(4,3), dtype=np.float32)
grid[:,0] = [70, 70, 90, 120]
grid[:,1] = [30, 40, 40, 40]
grid[:,2] = [440000., 400000., 400000., 440000.]
units=model.variables[var]['units']
test = model.variables[var]['interpolator'](grid)
print(units)
print(test)
###Output
_____no_output_____
###Markdown
Plotting
###Code
# Slice at given Altitude
fig=model.get_plot('rho', 400000., '2D-alt', colorscale='Viridis', log="T")
#iplot(fig)
#pio.write_image(fig, 'images/GITM_2D-alt.svg')
fig.data
fig
###Output
_____no_output_____
###Markdown

###Code
# Slice at given latitude
fig=model.get_plot('Tn', 0., '2D-lat', colorscale='Rainbow')
iplot(fig)
#pio.write_image(fig, 'images/GITM_2D-lat.svg')
###Output
_____no_output_____
###Markdown

###Code
# Slice at given longitude
fig=model.get_plot('rho', 180., '2D-lon', colorscale='Rainbow', log='T')
iplot(fig)
#pio.write_image(fig, 'images/GITM_2D-lon.svg')
###Output
_____no_output_____
###Markdown

###Code
# 3D view at given Altitude
fig=model.get_plot('rho', 400000., '3D-alt', colorscale='Rainbow')
iplot(fig)
#pio.write_image(fig, 'images/GITM_3D-alt.svg')
###Output
_____no_output_____
###Markdown

###Code
# Isosurface with slice at Lat=0.
fig=model.get_plot('Tn', 750., 'iso')
iplot(fig)
#scope = PlotlyScope()
#with open("images/GITM_iso.png", "wb") as f:
# f.write(scope.transform(fig, format="png"))
#fig.write_html("GITM_iso.html",full_html=False)
###Output
_____no_output_____
###Markdown
Interpolation
###Code
print(model.rho([[70,30.,440000.],[70,40.,400000.],[90,40.,400000.],[120,40.,440000.]]))
var="rho"
grid = np.ndarray(shape=(4,3), dtype=np.float32)
grid[:,0] = [70, 70, 90, 120]
grid[:,1] = [30, 40, 40, 40]
grid[:,2] = [440000., 400000., 400000., 440000.]
units=model.variables[var]['units']
test = model.variables[var]['interpolator'](grid)
print(units)
print(test)
###Output
kg/m^3
[2.68652554e-14 9.22378231e-14 1.12557827e-13 9.39359154e-14]
###Markdown
Plotting
###Code
# Slice at given Altitude
fig=model.get_plot('rho', 400000., '2D-alt', colorscale='Viridis', log="T")
#iplot(fig)
#pio.write_image(fig, 'images/GITM_2D-alt.svg')
###Output
_____no_output_____
###Markdown

###Code
# Slice at given latitude
fig=model.get_plot('Tn', 0., '2D-lat', colorscale='Rainbow')
iplot(fig)
#pio.write_image(fig, 'images/GITM_2D-lat.svg')
###Output
_____no_output_____
###Markdown

###Code
# Slice at given longitude
fig=model.get_plot('rho', 180., '2D-lon', colorscale='Rainbow', log='T')
iplot(fig)
#pio.write_image(fig, 'images/GITM_2D-lon.svg')
###Output
_____no_output_____
###Markdown

###Code
# 3D view at given Altitude
fig=model.get_plot('rho', 400000., '3D-alt', colorscale='Rainbow')
iplot(fig)
#pio.write_image(fig, 'images/GITM_3D-alt.svg')
###Output
_____no_output_____
###Markdown

###Code
# Isosurface with slice at Lat=0.
fig=model.get_plot('Tn', 750., 'iso')
iplot(fig)
#scope = PlotlyScope()
#with open("images/GITM_iso.png", "wb") as f:
# f.write(scope.transform(fig, format="png"))
#fig.write_html("GITM_iso.html",full_html=False)
###Output
_____no_output_____ |
Materialy/Grupa1/Lab5/lab05.ipynb | ###Markdown
Introduction to Machine Learning - Lab 5
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)
data = pd.read_csv('heart.csv')
data.head()
y = np.array(data['chd'])
X = data.drop(['chd'],axis=1)
map_dict = {'Present': 1, 'Absent':0}
X['famhist'] = X['famhist'].map(map_dict)
X.head()
###Output
_____no_output_____
###Markdown
Naive Bayes Classifier
It is based on the assumption of mutual independence of the features. These assumptions often have no relation to reality, which is exactly why they are called naive.
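For reference, a standard statement of the rule the classifier applies (added here, not part of the original lab text): for a class $y$ and features $x_1, \dots, x_n$, the independence assumption gives $P(y \mid x_1, \dots, x_n) \propto P(y)\prod_{i=1}^{n} P(x_i \mid y)$, and the predicted class is the $y$ that maximizes this product. The GaussianNB model used below additionally assumes each $P(x_i \mid y)$ is Gaussian.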
###Code
from sklearn.naive_bayes import GaussianNB
nb = GaussianNB()
nb.fit(X,y)
y_hat = nb.predict(X)
print('y: ' + str(y_hat[0:10]) + '\ny_hat: ' + str(y[0:10]))
###Output
y: [1 0 0 1 1 1 0 0 0 1]
y_hat: [1 1 0 1 1 0 0 1 0 1]
###Markdown
- What advantages/disadvantages of this algorithm do you see?
Ways of splitting the data
- How do we deal with overfitting?
- What ways of splitting data into training and test sets do you know?
Training and test set
A simple split of the data into a part on which we train the model and a part that we use to check its performance.
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X.shape,X_train.shape,X_test.shape)
###Output
(462, 9) (369, 9) (93, 9)
###Markdown
**Quick task:** Split the data as above and train a logistic regression on the training set (a sketch is given below).
- What drawbacks of the train/test split approach do you see?
Cross-validation
- Can we apply CV by splitting the set so that only a single observation is left in the validation set?
- If we check a model's performance with CV, can we then train the model on the whole data set?
- When tuning a model's parameters, should we set aside an additional test set and run CV only on the training part?
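A minimal sketch of the quick task above (an assumed solution, reusing the X_train/y_train split created earlier and sklearn's LogisticRegression with mostly default settings):
###Code
from sklearn.linear_model import LogisticRegression
# Fit a logistic regression on the training part and check accuracy on the held-out part
logreg = LogisticRegression(max_iter=1000)
logreg.fit(X_train, y_train)
print(logreg.score(X_test, y_test))
###Output
_____no_output_____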
###Code
from sklearn.model_selection import cross_val_score
cross_val_score(nb, X, y, scoring='accuracy', cv = 10)
###Output
_____no_output_____
###Markdown
Classifier quality metrics
- What classifier evaluation metrics do you know?
For the purposes of this exercise, let's generate a set of predictions:
###Code
nb.fit(X_train,y_train)
y_hat = nb.predict(X_test)
print("y_test: "+ str(y_test) + "\n\ny_hat: " + str(y_hat))
###Output
y_test: [0 1 1 0 0 0 0 1 0 1 1 1 0 0 0 0 0 1 0 1 1 1 0 1 0 1 0 0 1 1 0 0 0 0 0 0 0
0 0 0 0 1 0 0 1 0 1 1 0 1 0 0 0 0 1 1 1 1 0 0 0 1 1 1 0 0 0 1 0 0 0 0 0 0
1 1 0 1 0 1 0 0 1 0 0 0 0 1 0 0 1 0 0]
y_hat: [0 0 1 1 0 0 0 1 1 1 1 1 0 1 0 0 0 0 1 1 1 1 0 1 0 0 0 0 1 1 0 0 0 0 0 1 1
0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 1 1 1 0 0 1 0 1 1 1 0 0 1 1 0 0 0 0 0 0
1 1 1 1 1 1 1 0 0 0 0 1 0 1 1 0 1 0 0]
###Markdown
Accuracy
$ACC = \frac{TP+TN}{ALL}$
A very intuitive metric - how many observations we classified correctly.
- What is the problem with accuracy?
###Code
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_hat)
###Output
_____no_output_____
###Markdown
Precision & Recall
**Precision** tells us how accurate the model is within the positive class - how many of the predicted positives are actually positive.
$PREC = \frac{TP}{TP+FP}= \frac{TP}{\text{TOTAL PREDICTED POSITIVE}}$
- What applications of such a metric do you see?
**Recall** tells us how many of the actual positives the model recovers.
$RECALL = \frac{TP}{TP+FN} = \frac{TP}{\text{TOTAL ACTUAL POSITIVE}}$
- What applications of such a metric do you see?
###Code
from sklearn.metrics import precision_score
precision_score(y_test, y_hat)
from sklearn.metrics import recall_score
recall_score(y_test, y_hat)
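# For reference (an added illustration, not part of the original lab): the confusion matrix
# lays out the TP/FP/FN/TN counts that precision and recall above are computed from.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, y_hat))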
###Output
_____no_output_____
###Markdown
F1 Score
Looking for a balance between PRECISION and RECALL:
$F1 = 2\frac{PREC * RECALL}{PREC + RECALL}$
###Code
from sklearn.metrics import f1_score
f1_score(y_test, y_hat)
###Output
_____no_output_____
###Markdown
ROC AUC
The Receiver Operating Characteristic (ROC), or simply the ROC curve, is a plot that illustrates the performance of a binary classifier independently of the discrimination threshold. The Y axis shows TPR, i.e. RECALL, and the X axis shows FPR, i.e. $1 - SPECIFICITY$.
$FPR = 1 - SPECIFICITY = 1 - \frac{TN}{TN+FP}$
SPECIFICITY - example: the fraction of healthy people who are correctly identified as not suffering from the disease.
###Code
y_hat_proba = nb.predict_proba(X_test)[:,1]
from sklearn.metrics import roc_curve, auc
fpr = dict()
tpr = dict()
roc_auc = dict()
fpr, tpr, _ = roc_curve(y_test, y_hat_proba)
roc_auc = auc(fpr, tpr)
plt.figure(figsize=(10, 6))
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
- What advantage of this metric over the previous ones do you see?
###Code
from sklearn.metrics import roc_auc_score
roc_auc_score(y_test,y_hat_proba)
###Output
_____no_output_____
###Markdown
Ensemble Methods
To use the various ensemble learning methods, we will first load 3 models that we will rely on later.
###Code
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
model1 = DecisionTreeClassifier(random_state=1)
model2 = KNeighborsClassifier()
model3 = LogisticRegression(random_state=1, max_iter=1000)
estimators=[('DecisionTree', model1), ('KNN', model2), ('LR', model3)]  # 'LR' should point to the logistic regression (model3), not model2
###Output
_____no_output_____
###Markdown
Max Voting
###Code
from sklearn.ensemble import VotingClassifier
model = VotingClassifier(estimators=estimators, voting='hard')
model.fit(X_train,y_train)
y_hat = model.predict(X_test)
accuracy_score(y_test, y_hat), model.score(X_test,y_test)
###Output
_____no_output_____
###Markdown
Averaging
###Code
model1.fit(X_train,y_train)
model2.fit(X_train,y_train)
model3.fit(X_train,y_train)
pred1=model1.predict_proba(X_test)
pred2=model2.predict_proba(X_test)
pred3=model3.predict_proba(X_test)
# Average
pred_average=(pred1+pred2+pred3)/3
y_hat = np.argmax(pred_average, axis=1)
print(accuracy_score(y_test, y_hat))
# Weighted Average
pred_weighted_average=(pred1*0.05+pred2*0.05+pred3*0.9)
y_hat = np.argmax(pred_weighted_average, axis=1)
print(accuracy_score(y_test, y_hat))
###Output
0.6236559139784946
0.7419354838709677
###Markdown
Stacking
###Code
from sklearn.ensemble import StackingClassifier
clf = StackingClassifier(estimators=estimators, final_estimator=LogisticRegression())
from sklearn.model_selection import train_test_split
clf.fit(X_train, y_train).score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Would stacking several identical models increase their accuracy?
- If yes, give an example.
- If not, do you have any idea how to improve this method?
Bagging (Bootstrap Aggregating)
Bootstrap is a sampling technique in which we create subsets (samples) of observations from the original dataset, **with replacement**. The size of each subset is the same as the size of the original dataset (a small sampling sketch is shown below).
1. We draw N **bootstrap** samples from the training set
2. We train N "weak" classifiers independently
3. We combine the results of the "weak" models
   - **Classification:** majority vote / averaged probability
   - **Regression:** averaged values
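A tiny illustration of drawing a single bootstrap sample (an added sketch; the variable names and the seed are arbitrary assumptions):
###Code
# Draw one bootstrap sample of row indices: same size as the training data, sampled with replacement
rng = np.random.RandomState(0)
boot_idx = rng.choice(len(X_train), size=len(X_train), replace=True)
X_boot, y_boot = X_train.iloc[boot_idx], y_train[boot_idx]
print(X_boot.shape, len(set(boot_idx)) / len(boot_idx))  # about 63% of the rows in a bootstrap sample are unique
###Output
_____no_output_____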
###Code
from sklearn.ensemble import BaggingClassifier
clf = BaggingClassifier(base_estimator=model1,
n_estimators=10, random_state=0)
clf.fit(X_train, y_train)
clf.score(X_test,y_test)
###Output
_____no_output_____
###Markdown
What advantages and disadvantages of this method do you see?
Random Forest
The most popular bagging algorithm.
Features:
- the base algorithm is a Decision Tree (all the advantages of trees, e.g. handling NA)
- a random subset of features is used to split each node (the number can be chosen as a hyperparameter)
- a built-in feature importance method
###Code
from sklearn.ensemble import RandomForestClassifier
model_rf = RandomForestClassifier(n_estimators=1000, # Number of weak estimators
                               max_depth=2, # Maximum depth of each tree in a weak estimator
                               min_samples_split = 2, # Minimum number of observations required to split a node
                               max_features = 3, # Maximum number of features considered when splitting a node
random_state=0,
n_jobs = -1)
model_rf.fit(X_train, y_train)
model_rf.score(X_test,y_test)
###Output
_____no_output_____
###Markdown
Boosting
Boosting works similarly to bagging, with one difference: each subsequent bootstrap sample is created in such a way that **misclassified** observations are drawn with higher probability. In short: boosting learns from the mistakes it made in previous iterations.
AdaBoost
The simplest boosting method.
###Code
from sklearn.ensemble import AdaBoostClassifier
model = AdaBoostClassifier(random_state=1)
model.fit(X_train, y_train)
model.score(X_test,y_test)
###Output
_____no_output_____
###Markdown
Gradient Boosting
Each new "weak" model is trained on the errors of the previous ones.
###Code
from sklearn.ensemble import GradientBoostingClassifier
model= GradientBoostingClassifier(random_state=1,
learning_rate=0.01)
model.fit(X_train, y_train)
model.score(X_test,y_test)
###Output
_____no_output_____
###Markdown
XGBoost
An advanced implementation of gradient boosting.
###Code
from xgboost import XGBClassifier # Inna paczka niż sklearn!
model=XGBClassifier(random_state=1,
                   learning_rate=0.01, # Learning rate
                   booster='gbtree', # Which base model we use (tree - gbtree, linear - gblinear)
                   nround = 100, # Number of boosting iterations
                   max_depth=4 # Maximum tree depth
)
model.fit(X_train, y_train)
model.score(X_test,y_test)
###Output
_____no_output_____ |
code/algorithms/.ipynb_checkpoints/google_algo_course_notes-checkpoint.ipynb | ###Markdown
singly linked list
###Code
"""The LinkedList code from before is provided below.
Add three functions to the LinkedList.
"get_position" returns the element at a certain position.
The "insert" function will add an element to a particular
spot in the list.
"delete" will delete the first element with that
particular value.
Then, use "Test Run" and "Submit" to run the test cases
at the bottom."""
class Element(object):
def __init__(self, value):
self.value = value
self.next = None
class LinkedList(object):
def __init__(self, head=None):
self.head = head
def append(self, new_element):
current = self.head
if self.head:
while current.next:
current = current.next
current.next = new_element
else:
self.head = new_element
def get_position(self, position):
counter = 1
current = self.head
if position < 1:
return None
while current and counter <= position:
if counter == position:
return current
current = current.next
counter += 1
return None
def insert(self, new_element, position):
counter = 1
current = self.head
if position > 1:
while current and counter < position:
if counter == position - 1:
new_element.next = current.next
current.next = new_element
current = current.next
counter += 1
elif position == 1:
new_element.next = self.head
self.head = new_element
def delete(self, value):
current = self.head
previous = None
while current.value != value and current.next:
previous = current
current = current.next
if current.value == value:
if previous:
previous.next = current.next
else:
self.head = current.next
# Test cases
# Set up some Elements
e1 = Element(1)
e2 = Element(2)
e3 = Element(3)
e4 = Element(4)
# Start setting up a LinkedList
ll = LinkedList(e1)
ll.append(e2)
ll.append(e3)
# Test get_position
# Should print 3
print (ll.head.next.next.value)
# Should also print 3
print (ll.get_position(3).value)
# Test insert
ll.insert(e4, 3)
# Should print 4 now
print (ll.get_position(3).value)
# Test delete
ll.delete(1)
# Should print 2 now
print (ll.get_position(1).value)
# Should print 4 now
print (ll.get_position(2).value)
# Should print 3 now
print (ll.get_position(3).value)
print (ll.head.value)
###Output
3
3
4
2
4
3
2
###Markdown
doubly linked list
###Code
# create this
###Output
_____no_output_____
###Markdown
stack in python - can use list
###Code
stack = [3, 4, 5]
stack.append(6)
stack.append(7)
print(stack)
stack.pop()
print(stack)
###Output
[3, 4, 5, 6, 7]
[3, 4, 5, 6]
###Markdown
custom stack
###Code
"""Add a couple methods to our LinkedList class,
and use that to implement a Stack.
You have 4 functions below to fill in:
insert_first, delete_first, push, and pop.
Think about this while you're implementing:
why is it easier to add an "insert_first"
function than just use "append"?"""
class Element(object):
def __init__(self, value):
self.value = value
self.next = None
class LinkedList(object):
def __init__(self, head=None):
self.head = head
def append(self, new_element):
current = self.head
if self.head:
while current.next:
current = current.next
current.next = new_element
else:
self.head = new_element
def insert_first(self, new_element):
new_element.next = self.head
self.head = new_element
def delete_first(self):
node = self.head if self.head else None
if self.head:
self.head = self.head.next
return node
class Stack(object):
def __init__(self,top=None):
self.ll = LinkedList(top)
def push(self, new_element):
self.ll.insert_first(new_element)
def pop(self):
return self.ll.delete_first()
# Test cases
# Set up some Elements
e1 = Element(1)
e2 = Element(2)
e3 = Element(3)
e4 = Element(4)
# Start setting up a Stack
stack = Stack(e1)
# Test stack functionality
stack.push(e2)
stack.push(e3)
print (stack.pop().value)
print (stack.pop().value)
print (stack.pop().value)
print (stack.pop())
stack.push(e4)
print (stack.pop().value)
###Output
3
2
1
None
4
###Markdown
custom queue
###Code
"""Make a Queue class using a list!
Hint: You can use any Python list method
you'd like! Try to write each one in as
few lines as possible.
Make sure you pass the test cases too!"""
class Queue(object):
def __init__(self, head=None):
self.storage = [head]
def enqueue(self, new_element):
self.storage.append(new_element)
def peek(self):
return self.storage[0]
def dequeue(self):
return self.storage.pop(0)
# Setup
q = Queue(1)
q.enqueue(2)
q.enqueue(3)
# Test peek
# Should be 1
print(q.peek())
# Test dequeue
# Should be 1
print(q.dequeue())
# Test enqueue
q.enqueue(4)
# Should be 2
print(q.dequeue())
# Should be 3
print(q.dequeue())
# Should be 4
print(q.dequeue())
q.enqueue(5)
# Should be 5
print(q.peek())
###Output
1
1
2
3
4
5
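###Markdown
As a side note (added for reference), Python's standard library also offers collections.deque, which gives O(1) appends and pops at both ends and is the usual practical choice for a queue:
###Code
from collections import deque
q2 = deque([1])
q2.append(2)          # enqueue
q2.append(3)
print (q2[0])         # peek, should be 1
print (q2.popleft())  # dequeue, should be 1
###Output
_____no_output_____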
###Markdown
binary search
http://www.cs.armstrong.edu/liang/animation/web/BinarySearch.html
https://www.cs.usfca.edu/~galles/visualization/Search.html
###Code
"""You're going to write a binary search function.
You should use an iterative approach - meaning
using loops.
Your function should take two inputs:
a Python list to search through, and the value
you're searching for.
Assume the list only has distinct elements,
meaning there are no repeated values, and
elements are in a strictly increasing order.
Return the index of value, or -1 if the value
doesn't exist in the list."""
def binary_search(inp, value):
"""Your code goes here."""
low = 0
high = len(inp) - 1
while low <= high:
mid = int((low + high) / 2)
if inp[mid] == value:
return int(mid)
elif inp[mid] < value:
low = mid + 1
else:
high = mid - 1
return -1
test_list = [1,3,9,11,15,19,29]
test_val1 = 25
print (binary_search(test_list, test_val1))
test_val2 = 15
print (binary_search(test_list, test_val2))
###Output
-1
4
###Markdown
sort with minimum swaps (similar to a non-comparison sorting algorithm)
###Code
def minimumSwaps(arr):
swaps = 0
i = 0
while i < len(arr) - 1:
if arr[i] != i+1:
temp = arr[i]
loc = arr[i]-1
temp2 = arr[loc]
arr[i] = temp2
arr[loc] = temp
i -= 1
swaps += 1
i += 1
return swaps
arr = list(map(int, '4 3 1 2'.rstrip().split()))
res = minimumSwaps(arr)
print(res)
###Output
3
###Markdown
recursion
###Code
"""Implement a function recursively to get the desired
Fibonacci sequence value.
Your code should have the same input/output as the
iterative code in the instructions."""
"""
fib_seq = []
fib_seq[0] = 0
fib_seq[1] = 1
fib_seq[2] = 1
fib_seq[3] = 2
fib_seq[4] = 3
fib_seq[5] = 5
fib_seq[6] = 8
fib_seq[7] = 13
fib_seq[8] = 21
fib_seq[9] = 34
"""
def get_fib(position):
if position == 0 or position == 1:
return position
return get_fib(position - 1) + get_fib(position - 2)
# Test cases
print (get_fib(9))
print (get_fib(11))
print (get_fib(0))
# # fix this
# def get_fib2(position):
# end_pos = position
# curr_pos = 0
# if end_pos == 0:
# return 0
# value = 1
# if end_pos == curr_pos:
# return value
# curr_pos += 1
# value += value
# return get_fib(curr_pos)
# # Test cases
# print (get_fib2(9))
# print (get_fib2(11))
# print (get_fib2(0))
###Output
34
89
0
###Markdown
merge sort - time complexity: O(nlogn), auxiliary space: O(n)
top-down and bottom-up merge sort
sorting algorithms complexities
https://en.wikipedia.org/wiki/Sorting_algorithm#Comparison_of_algorithms
sorting references
https://algs4.cs.princeton.edu/20sorting/
https://www.toptal.com/developers/sorting-algorithms
http://panthema.net/2013/sound-of-sorting/
sorting considerations
Comparison based sorting - in comparison based sorting, elements of an array are compared with each other to find the sorted array.
Non-comparison based sorting - in non-comparison based sorting, elements of the array are not compared with each other to find the sorted array. Non-comparison sorting includes Counting sort, which sorts using key values, Radix sort, which examines individual digits of keys, and Bucket sort, which examines bits of keys. These are also known as linear sorting algorithms because they sort in O(n) time. They make certain assumptions about the data, hence they don't need to go through a comparison decision tree.
In-place/Out-of-place technique - a sorting technique is in-place if it does not use any extra memory to sort the array. Among the comparison based techniques discussed, only merge sort is an out-of-place technique, as it requires an extra array to merge the sorted subarrays. Among the non-comparison based techniques discussed, all are out-of-place techniques: Counting sort uses a counting array and Bucket sort uses a hash table for sorting the array.
Online/Offline technique - a sorting technique is considered online if it can accept new data while the procedure is ongoing, i.e. complete data is not required to start the sorting operation. Among the comparison based techniques discussed, only Insertion sort qualifies, because of the underlying algorithm it uses: it processes the array from left to right, and if new elements are added to the right, it doesn't impact the ongoing operation.
Stable/Unstable technique - a sorting technique is stable if it does not change the order of elements with the same value. Of the comparison based techniques, bubble sort, insertion sort and merge sort are stable. Selection sort is unstable as it may change the order of elements with the same value. For example, consider the array 4, 4, 1, 3: in the first iteration, the minimum element found is 1 and it is swapped with the 4 at position 0, so the order of that 4 with respect to the 4 at position 1 changes. Similarly, quick sort and heap sort are also unstable. Of the non-comparison based techniques, Counting sort and Bucket sort are stable, whereas Radix sort's stability depends on the underlying algorithm used for sorting.
Analysis of sorting techniques: when the array is almost sorted, insertion sort can be preferred. When the order of the input is not known, merge sort is preferred as it has worst case time complexity of nlogn and is stable as well. When the array is already sorted, insertion and bubble sort give complexity of n, but quick sort gives complexity of n^2.
best for non-parallel sorting
- Only a few items: Insertion Sort
- Items are mostly sorted already: Insertion Sort
- Concerned about worst-case scenarios: Heap Sort
- Interested in a good average-case result: Quicksort
- Items are drawn from a dense universe: Bucket Sort
- Desire to write as little code as possible: Insertion Sort
quick sort
- take last element as pivot, compare to first element
- if first element is greater, move it to the last element position; move last element position one forward; move the element that used to be in this second to last position to the beginning
- repeat until the pivot has been compared with all previous items
- split at pivot, repeat this process for items below and above pivot
- keep repeating until all sorted
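Merge sort is described above but only quicksort is implemented below, so here is a small top-down merge sort sketch for reference (an added example, not part of the original notes):
###Code
def merge_sort(array):
    # Top-down merge sort: O(n log n) time, O(n) auxiliary space, stable
    if len(array) <= 1:
        return array
    mid = len(array) // 2
    left = merge_sort(array[:mid])
    right = merge_sort(array[mid:])
    # Merge the two sorted halves
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
print(merge_sort([21, 4, 1, 1, 3, 9, 20, 25, 6, 21, 14]))
###Output
_____no_output_____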
###Code
def partition(array, begin, end):
pivot = begin
for i in range(begin+1, end+1):
if array[i] <= array[begin]:
pivot += 1
array[i], array[pivot] = array[pivot], array[i]
array[pivot], array[begin] = array[begin], array[pivot]
return pivot
def quicksort(array, begin=0, end=None):
if end is None:
end = len(array) - 1
def _quicksort(array, begin, end):
if begin >= end:
return
pivot = partition(array, begin, end)
_quicksort(array, begin, pivot-1)
_quicksort(array, pivot+1, end)
return _quicksort(array, begin, end)
arr = [21, 4, 1, 1, 3, 9, 20, 25, 6, 21, 14]
quicksort(arr)
print(arr)
def quick_sort_not_inplace(array):
less = []
equal = []
greater = []
if len(array) > 1:
pivot = array[0]
for x in array:
if x < pivot:
less.append(x)
elif x == pivot:
equal.append(x)
else: # x > pivot
greater.append(x)
# Don't forget to return something!
        return quick_sort_not_inplace(less) + equal + quick_sort_not_inplace(greater)  # Just use the + operator to join lists (recurse with this function's own name)
# Note that you want equal ^^^^^ not pivot
else: # You need to hande the part at the end of the recursion - when you only have one element in your array, just return the array.
return array
test = [21, 4, 1, 1, 3, 9, 20, 25, 6, 21, 14]
res = quick_sort_not_inplace(test)
print(res)
###Output
[1, 1, 3, 4, 6, 9, 14, 20, 21, 21, 25]
###Markdown
maps/dictionaries - can retrieve key's value in REAL TIME
###Code
locations = {'North America': {'USA': ['Mountain View']}}
locations['North America']['USA'].append('Atlanta')
locations['Asia'] = {'India': ['Bangalore']}
locations['Asia']['China'] = ['Shanghai']
locations['Africa'] = {'Egypt': ['Cairo']}
print (1)
usa_sorted = sorted(locations['North America']['USA'])
for city in usa_sorted:
print (city)
print (2)
asia_cities = []
for countries, cities in locations['Asia'].items():
city_country = cities[0] + " - " + countries
asia_cities.append(city_country)
asia_sorted = sorted(asia_cities)
for city in asia_sorted:
print (city)
###Output
1
Atlanta
Mountain View
2
Bangalore - India
Shanghai - China
###Markdown
hashing load factor
When we're talking about hash tables, we can define a "load factor":
> Load Factor = Number of Entries / Number of Buckets
The purpose of a load factor is to give us a sense of how "full" a hash table is. For example, if we're trying to store 10 values in a hash table with 1000 buckets, the load factor would be 0.01, and the majority of buckets in the table will be empty. We end up wasting memory by having so many empty buckets, so we may want to rehash, or come up with a new hash function with less buckets. We can use our load factor as an indicator for when to rehash: as the load factor approaches 0, the more empty, or sparse, our hash table is. On the flip side, the closer our load factor is to 1 (meaning the number of values equals the number of buckets), the better it would be for us to rehash and add more buckets. Any table with a load value greater than 1 is guaranteed to have collisions.
hash table
###Code
"""Write a HashTable class that stores strings
in a hash table, where keys are calculated
using the first two letters of the string."""
class HashTable(object):
def __init__(self):
self.table = [None]*10000
def store(self, string):
hv = self.calculate_hash_value(string)
if hv != -1:
if self.table[hv] != None:
self.table[hv].append(string)
else:
self.table[hv] = [string]
def lookup(self, string):
hv = self.calculate_hash_value(string)
if hv != -1:
if self.table[hv] != None:
if string in self.table[hv]:
return hv
return -1
def calculate_hash_value(self, string):
'''
You can assume that the string will have at least two letters,
and the first two characters are uppercase letters (ASCII values from 65 to 90).
You can use the Python function ord() to get the ASCII value of a letter,
and chr() to get the letter associated with an ASCII value.
'''
value = ord(string[0])*100 + ord(string[1])
return value
# Setup
hash_table = HashTable()
# Test calculate_hash_value
# Should be 8568
print (hash_table.calculate_hash_value('UDACITY'))
# Test lookup edge case
# Should be -1
print (hash_table.lookup('UDACITY'))
# Test store
hash_table.store('UDACITY')
# Should be 8568
print (hash_table.lookup('UDACITY'))
# Test store edge case
hash_table.store('UDACIOUS')
# Should be 8568
print (hash_table.lookup('UDACIOUS'))
###Output
8568
-1
8568
8568
###Markdown
trees Traversal
- DFS - Depth First Search
  - pre-order: start at the top, go left all the way to the leaf while checking each node. Check each family on the way back up, repeat for the right side
  - in-order: visit the left child, then the node itself, then the right child - so start from the left-most leaf, check it plus its immediate family, go one level up, repeat
  - post-order: visit the left subtree, then the right subtree, then the node itself - children are always checked before their parent
- BFS - Breadth First Search

efficiency of binary trees (leaves, or nodes with 1 or 2 children)
- search: O(n)
- delete (remove element then rearrange to fill element): O(n)
- insert (every level you double the amount of elements you can add): O(logn)

special case
- if the tree is UNBALANCED, meaning it's skewed more to one side, it is more similar to a linked list, and will have more linear efficiency properties (worst case)

binary tree
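To make the traversal orders concrete, here is a small standalone sketch (its own tiny `Node` and plain functions, separate from the `BinaryTree` class below) showing all three DFS orders on the same tree:
###Code
# Standalone illustration of pre-order, in-order and post-order DFS.
class Node(object):
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def preorder(node):    # node, then left subtree, then right subtree
    if node is None:
        return []
    return [node.value] + preorder(node.left) + preorder(node.right)

def inorder(node):     # left subtree, then node, then right subtree
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

def postorder(node):   # left subtree, then right subtree, then node
    if node is None:
        return []
    return postorder(node.left) + postorder(node.right) + [node.value]

#        1
#      2   3
#     4 5
root = Node(1, Node(2, Node(4), Node(5)), Node(3))
print(preorder(root))   # [1, 2, 4, 5, 3]
print(inorder(root))    # [4, 2, 5, 1, 3]
print(postorder(root))  # [4, 5, 2, 3, 1]
###Output
_____no_output_____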
###Code
class Node():
def __init__(self, value):
self.value = value
self.left = None
self.right = None
class BinaryTree():
def __init__(self, root):
self.root = Node(root)
def search(self, find_val):
        return self.preorder_search(self.root, find_val)
    def print_tree(self):
        return self.preorder_print(self.root, "")[:-1]
def preorder_search(self, start, find_val):
if start:
if start.value == find_val:
return True
else:
return self.preorder_search(start.left, find_val) or self.preorder_search(start.right, find_val)
return False
def preorder_print(self, start, traversal):
if start:
traversal += (str(start.value) + "-")
traversal = self.preorder_print(start.left, traversal)
traversal = self.preorder_print(start.right, traversal)
return traversal
# Set up tree
tree = BinaryTree(1)
tree.root.left = Node(2)
tree.root.right = Node(3)
tree.root.left.left = Node(4)
tree.root.left.right = Node(5)
# Test search
# Should be True
print (tree.search(4))
# Should be False
print (tree.search(6))
# Test print_tree
# Should be 1-2-4-5-3
print (tree.print_tree())
###Output
True
False
1-2-4-5-3
###Markdown
binary search trees (BST)
- numbers are organized left to right so that you can tell which path to go down if your search value is less than or greater than the comparison value
- search and insert efficiency are both O(logn) for a balanced tree
- delete (remove the element then rearrange to fill its spot): also O(logn) when balanced, degrading to O(n) in the worst case of an unbalanced tree
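The implementation below covers insert and search; deletion is the fiddly one, so here is a minimal standalone sketch of it (its own `Node` and plain functions rather than methods - an illustrative assumption, not part of the exercise):
###Code
# Standalone sketch of BST deletion using the in-order successor.
class Node(object):
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def bst_insert(node, value):
    if node is None:
        return Node(value)
    if value < node.value:
        node.left = bst_insert(node.left, value)
    else:
        node.right = bst_insert(node.right, value)
    return node

def bst_delete(node, value):
    # Returns the root of this subtree with `value` removed (if it was present).
    if node is None:
        return None
    if value < node.value:
        node.left = bst_delete(node.left, value)
    elif value > node.value:
        node.right = bst_delete(node.right, value)
    else:
        # Found it. With 0 or 1 child, just splice the child in.
        if node.left is None:
            return node.right
        if node.right is None:
            return node.left
        # With 2 children, copy up the in-order successor, then delete it below.
        successor = node.right
        while successor.left:
            successor = successor.left
        node.value = successor.value
        node.right = bst_delete(node.right, successor.value)
    return node

def inorder(node):
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

root = None
for v in [4, 2, 1, 3, 5]:
    root = bst_insert(root, v)
root = bst_delete(root, 2)   # delete an internal node with two children
print(inorder(root))         # [1, 3, 4, 5]
###Output
_____no_output_____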
###Code
class Node(object):
def __init__(self, value):
self.value = value
self.left = None
self.right = None
class BST(object):
def __init__(self, root):
self.root = Node(root)
def insert(self, new_val):
self.insert_helper(self.root, new_val)
def insert_helper(self, current, new_val):
if current.value < new_val:
if current.right:
self.insert_helper(current.right, new_val)
else:
current.right = Node(new_val)
else:
if current.left:
self.insert_helper(current.left, new_val)
else:
current.left = Node(new_val)
def search(self, find_val):
return self.search_helper(self.root, find_val)
def search_helper(self, current, find_val):
if current:
if current.value == find_val:
return True
elif current.value < find_val:
return self.search_helper(current.right, find_val)
else:
return self.search_helper(current.left, find_val)
return False
# Set up tree
tree = BST(4)
# Insert elements
tree.insert(2)
tree.insert(1)
tree.insert(3)
tree.insert(5)
# Check search
# Should be True
print (tree.search(4))
# Should be False
print (tree.search(6))
###Output
True
False
###Markdown
Heaps
- max heaps: parent always greater than child
- min heaps: parent always less than child
- parents can have any number of children (not max of two like in binary)

search
- max heaps: finding max happens in constant time, O(1)
- min heaps: finding min happens in constant time, O(1)
- searching worst case still O(n), but since can quit search if greater than max or less than min, average is O(n/2)

insertion (Heapify)
- add to end then keep swapping child with parent until fits pattern
- O(logn) worst case, roughly as many operations as the height

deletion
- similar to insertion, except take a random leaf and place it at the place of deletion, then swap where necessary
- O(logn) worst case, roughly as many operations as the height

implementation
- given a sorted array, we know how many heap/tree elements go on each level, so just keep the level as a counter and insert that many
- trees take up more space than an array

self balancing trees
- balances itself out when inserting or deleting
- Red/Black tree: assign Red or Black as a property to each node. Each leaf must end with a null, colored Black. When inserting/deleting, ensure the same number of Black nodes in each path
  - also follows BST rules
  - insert red nodes only, and only change color as needed
  - if the parent and the parent's sibling are red, switch them to black, and the grandparent switches to red
  - if a red, left parent has a red, right child, perform a "left rotation". A "left rotation" is done by moving the child up one, and making the initial parent a left child. Then we will have one of the previous cases so we can continue self balancing
- since at every step the tree won't be too unbalanced, the runtimes won't be too large

graphs
- Trees are a subset of graphs; in a graph, any node/vertex can be connected to any other via an edge
- either nodes or edges can store data
- if we want edges to be one directional, these are "Directed Graphs." If going back and forth between the same two nodes, the two nodes can share two edges directed in opposite ways
- non-directional graphs are called "Undirected Graphs"
- a cycle can happen if you start at one node and can end up at the same node. Could result in an infinite loop
- Directed Acyclic Graph (DAG) is common, it's a directed graph with no cycles
- "Graph Theory" is the study
- connectivity
  - a connected graph has all nodes connected to at least one edge
  - a disconnected graph has at least one disconnection
  - "connectivity" is also the number of edges that can be removed before a graph becomes disconnected

representation in code
- can create an "edge list", a list of node pairs that are connected to each other via an edge
  - `[ [0, 1], [1, 2], [1, 3], [2, 3] ]`
- "adjacency list": each index of the list represents that number node. The items in that index location represent the nodes it's connected to
  - `[ [1], [0, 2, 3], [1, 3], [1, 2] ]`
- adjacency matrix - fastest way to tell whether two particular nodes share an edge (an adjacency list is usually better for listing how many nodes a particular node is connected to)
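Going back to the heaps section above: the notes never get their own code, so here is a minimal sketch using Python's built-in `heapq` module, which keeps a binary min-heap inside a plain list (the scores here are made-up values):
###Code
import heapq

scores = [21, 4, 9, 1, 25, 6]
heapq.heapify(scores)            # O(n): rearrange the list into heap order in place
print(scores[0])                 # 1 - the minimum is always at index 0 (O(1) peek)

heapq.heappush(scores, 3)        # O(log n): bubble the new item up
print(heapq.heappop(scores))     # 1 - pop the min, then sift a leaf down (O(log n))
print(heapq.heappop(scores))     # 3

# heapq only provides a min-heap; a common trick for max-heap behaviour is to negate values.
max_heap = []
for x in [21, 4, 9]:
    heapq.heappush(max_heap, -x)
print(-heapq.heappop(max_heap))  # 21
###Output
_____no_output_____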
###Code
# adjacency matrix example
# The Graph class contains a list of nodes and edges.
# You can sometimes get by with just a list of edges, since edges contain references to the nodes they connect to, or vice versa.
# However, our Graph class is built with both for the following reasons:
# If you're storing a disconnected graph, not every node will be tied to an edge, so you should store a list of nodes.
# We could probably leave it there, but storing an edge list will make our lives much easier when we're trying to print out different types of graph representations.
# Unfortunately, having both makes insertion a bit complicated. We can assume that each value is unique, but we need to be careful about keeping both nodes and edges
# updated when either is inserted.
class Node(object):
def __init__(self, value):
self.value = value
self.edges = []
class Edge(object):
def __init__(self, value, node_from, node_to):
self.value = value
self.node_from = node_from
self.node_to = node_to
class Graph(object):
def __init__(self, nodes=[], edges=[]):
self.nodes = nodes
self.edges = edges
def insert_node(self, new_node_val):
new_node = Node(new_node_val)
self.nodes.append(new_node)
def insert_edge(self, new_edge_val, node_from_val, node_to_val):
from_found = None
to_found = None
for node in self.nodes:
if node_from_val == node.value:
from_found = node
if node_to_val == node.value:
to_found = node
if from_found == None:
from_found = Node(node_from_val)
self.nodes.append(from_found)
if to_found == None:
to_found = Node(node_to_val)
self.nodes.append(to_found)
new_edge = Edge(new_edge_val, from_found, to_found)
from_found.edges.append(new_edge)
to_found.edges.append(new_edge)
self.edges.append(new_edge)
def get_edge_list(self):
edge_list = []
for edge_object in self.edges:
edge = (edge_object.value, edge_object.node_from.value, edge_object.node_to.value)
edge_list.append(edge)
return edge_list
def get_adjacency_list(self):
max_index = self.find_max_index()
adjacency_list = [None] * (max_index + 1)
for edge_object in self.edges:
if adjacency_list[edge_object.node_from.value]:
adjacency_list[edge_object.node_from.value].append((edge_object.node_to.value, edge_object.value))
else:
adjacency_list[edge_object.node_from.value] = [(edge_object.node_to.value, edge_object.value)]
return adjacency_list
def get_adjacency_matrix(self):
max_index = self.find_max_index()
adjacency_matrix = [[0 for i in range(max_index + 1)] for j in range(max_index + 1)]
for edge_object in self.edges:
adjacency_matrix[edge_object.node_from.value][edge_object.node_to.value] = edge_object.value
return adjacency_matrix
def find_max_index(self):
max_index = -1
if len(self.nodes):
for node in self.nodes:
if node.value > max_index:
max_index = node.value
return max_index
graph = Graph()
graph.insert_edge(100, 1, 2)
graph.insert_edge(101, 1, 3)
graph.insert_edge(102, 1, 4)
graph.insert_edge(103, 3, 4)
# Should be [(100, 1, 2), (101, 1, 3), (102, 1, 4), (103, 3, 4)]
print (graph.get_edge_list())
# Should be [None, [(2, 100), (3, 101), (4, 102)], None, [(4, 103)], None]
print (graph.get_adjacency_list())
# Should be [[0, 0, 0, 0, 0], [0, 0, 100, 101, 102], [0, 0, 0, 0, 0], [0, 0, 0, 0, 103], [0, 0, 0, 0, 0]]
print (graph.get_adjacency_matrix())
###Output
[(100, 1, 2), (101, 1, 3), (102, 1, 4), (103, 3, 4)]
[None, [(2, 100), (3, 101), (4, 102)], None, [(4, 103)], None]
[[0, 0, 0, 0, 0], [0, 0, 100, 101, 102], [0, 0, 0, 0, 0], [0, 0, 0, 0, 103], [0, 0, 0, 0, 0]]
###Markdown
graph traversal
- Depth First Search (DFS): go as deep into a node as possible before searching another
  - start anywhere, add visited node to a stack, then go to a connected node, repeat
  - if already seen, pop off stack
  - O(|E| + |V|)
  - can also do with recursion
  - https://www.cs.usfca.edu/~galles/visualization/DFS.html
- Breadth First Search (BFS): search all adjacent nodes first before searching other areas
  - start anywhere, add visited node to a queue, then visit all adjacent nodes and add to a queue
  - when run out of edges, dequeue, and use next node as a starting place
  - O(|E| + |V|)
  - https://www.cs.usfca.edu/~galles/visualization/BFS.html
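Before the full `Graph` implementation below, here is a minimal sketch of the stack vs. queue idea on a plain adjacency-list dict (the graph and node names are made up for illustration):
###Code
# Iterative DFS (stack) and BFS (queue) on a small adjacency-list graph.
from collections import deque

adjacency = {
    'A': ['B', 'C'],
    'B': ['A', 'D'],
    'C': ['A', 'D'],
    'D': ['B', 'C'],
}

def dfs(graph, start):
    visited, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()             # LIFO -> go deep first
        if node not in visited:
            visited.add(node)
            order.append(node)
            # push neighbours; unvisited ones will be explored depth-first
            stack.extend(reversed(graph[node]))
    return order

def bfs(graph, start):
    visited, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()         # FIFO -> expand level by level
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

print(dfs(adjacency, 'A'))   # ['A', 'B', 'D', 'C']
print(bfs(adjacency, 'A'))   # ['A', 'B', 'C', 'D']
###Output
_____no_output_____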
###Code
class Node(object):
def __init__(self, value):
self.value = value
self.edges = []
self.visited = False
class Edge(object):
def __init__(self, value, node_from, node_to):
self.value = value
self.node_from = node_from
self.node_to = node_to
# You only need to change code with docs strings that have TODO.
# Specifically: Graph.dfs_helper and Graph.bfs
# New methods have been added to associate node numbers with names
# Specifically: Graph.set_node_names
# and the methods ending in "_names" which will print names instead
# of node numbers
class Graph(object):
def __init__(self, nodes=None, edges=None):
self.nodes = nodes or []
self.edges = edges or []
self.node_names = []
self._node_map = {}
def set_node_names(self, names):
"""The Nth name in names should correspond to node number N.
Node numbers are 0 based (starting at 0).
"""
self.node_names = list(names)
def insert_node(self, new_node_val):
"Insert a new node with value new_node_val"
new_node = Node(new_node_val)
self.nodes.append(new_node)
self._node_map[new_node_val] = new_node
return new_node
def insert_edge(self, new_edge_val, node_from_val, node_to_val):
"Insert a new edge, creating new nodes if necessary"
nodes = {node_from_val: None, node_to_val: None}
for node in self.nodes:
if node.value in nodes:
nodes[node.value] = node
if all(nodes.values()):
break
for node_val in nodes:
nodes[node_val] = nodes[node_val] or self.insert_node(node_val)
node_from = nodes[node_from_val]
node_to = nodes[node_to_val]
new_edge = Edge(new_edge_val, node_from, node_to)
node_from.edges.append(new_edge)
node_to.edges.append(new_edge)
self.edges.append(new_edge)
def get_edge_list(self):
"""Return a list of triples that looks like this:
(Edge Value, From Node, To Node)"""
return [(e.value, e.node_from.value, e.node_to.value)
for e in self.edges]
def get_edge_list_names(self):
"""Return a list of triples that looks like this:
(Edge Value, From Node Name, To Node Name)"""
return [(edge.value,
self.node_names[edge.node_from.value],
self.node_names[edge.node_to.value])
for edge in self.edges]
def get_adjacency_list(self):
"""Return a list of lists.
        The indices of the outer list represent "from" nodes.
Each section in the list will store a list
of tuples that looks like this:
(To Node, Edge Value)"""
max_index = self.find_max_index()
adjacency_list = [[] for _ in range(max_index)]
for edg in self.edges:
from_value, to_value = edg.node_from.value, edg.node_to.value
adjacency_list[from_value].append((to_value, edg.value))
return [a or None for a in adjacency_list] # replace []'s with None
def get_adjacency_list_names(self):
"""Each section in the list will store a list
of tuples that looks like this:
(To Node Name, Edge Value).
Node names should come from the names set
with set_node_names."""
adjacency_list = self.get_adjacency_list()
def convert_to_names(pair, graph=self):
node_number, value = pair
return (graph.node_names[node_number], value)
def map_conversion(adjacency_list_for_node):
if adjacency_list_for_node is None:
return None
return map(convert_to_names, adjacency_list_for_node)
return [map_conversion(adjacency_list_for_node)
for adjacency_list_for_node in adjacency_list]
def get_adjacency_matrix(self):
"""Return a matrix, or 2D list.
Row numbers represent from nodes,
column numbers represent to nodes.
Store the edge values in each spot,
and a 0 if no edge exists."""
max_index = self.find_max_index()
adjacency_matrix = [[0] * (max_index) for _ in range(max_index)]
for edg in self.edges:
from_index, to_index = edg.node_from.value, edg.node_to.value
adjacency_matrix[from_index][to_index] = edg.value
return adjacency_matrix
def find_max_index(self):
"""Return the highest found node number
Or the length of the node names if set with set_node_names()."""
if len(self.node_names) > 0:
return len(self.node_names)
max_index = -1
if len(self.nodes):
for node in self.nodes:
if node.value > max_index:
max_index = node.value
return max_index
def find_node(self, node_number):
"Return the node with value node_number or None"
return self._node_map.get(node_number)
def _clear_visited(self):
for node in self.nodes:
node.visited = False
def dfs_helper(self, start_node):
"""The helper function for a recursive implementation
of Depth First Search iterating through a node's edges. The
output should be a list of numbers corresponding to the
values of the traversed nodes.
ARGUMENTS: start_node is the starting Node
REQUIRES: self._clear_visited() to be called before
MODIFIES: the value of the visited property of nodes in self.nodes
RETURN: a list of the traversed node values (integers).
"""
ret_list = [start_node.value]
start_node.visited = True
edges_out = [e for e in start_node.edges
if e.node_to.value != start_node.value]
for edge in edges_out:
if not edge.node_to.visited:
ret_list.extend(self.dfs_helper(edge.node_to))
return ret_list
def dfs(self, start_node_num):
"""Outputs a list of numbers corresponding to the traversed nodes
in a Depth First Search.
ARGUMENTS: start_node_num is the starting node number (integer)
MODIFIES: the value of the visited property of nodes in self.nodes
RETURN: a list of the node values (integers)."""
self._clear_visited()
start_node = self.find_node(start_node_num)
return self.dfs_helper(start_node)
def dfs_names(self, start_node_num):
"""Return the results of dfs with numbers converted to names."""
return [self.node_names[num] for num in self.dfs(start_node_num)]
def bfs(self, start_node_num):
"""An iterative implementation of Breadth First Search
iterating through a node's edges. The output should be a list of
numbers corresponding to the traversed nodes.
ARGUMENTS: start_node_num is the node number (integer)
MODIFIES: the value of the visited property of nodes in self.nodes
RETURN: a list of the node values (integers)."""
node = self.find_node(start_node_num)
self._clear_visited()
ret_list = []
# Your code here
queue = [node]
node.visited = True
def enqueue(n, q=queue):
n.visited = True
q.append(n)
def unvisited_outgoing_edge(n, e):
return ((e.node_from.value == n.value) and
(not e.node_to.visited))
while queue:
node = queue.pop(0)
ret_list.append(node.value)
for e in node.edges:
if unvisited_outgoing_edge(node, e):
enqueue(e.node_to)
return ret_list
def bfs_names(self, start_node_num):
"""Return the results of bfs with numbers converted to names."""
return [self.node_names[num] for num in self.bfs(start_node_num)]
graph = Graph()
# You do not need to change anything below this line.
# You only need to implement Graph.dfs_helper and Graph.bfs
graph.set_node_names(('Mountain View', # 0
'San Francisco', # 1
'London', # 2
'Shanghai', # 3
'Berlin', # 4
'Sao Paolo', # 5
'Bangalore')) # 6
graph.insert_edge(51, 0, 1) # MV <-> SF
graph.insert_edge(51, 1, 0) # SF <-> MV
graph.insert_edge(9950, 0, 3) # MV <-> Shanghai
graph.insert_edge(9950, 3, 0) # Shanghai <-> MV
graph.insert_edge(10375, 0, 5) # MV <-> Sao Paolo
graph.insert_edge(10375, 5, 0) # Sao Paolo <-> MV
graph.insert_edge(9900, 1, 3) # SF <-> Shanghai
graph.insert_edge(9900, 3, 1) # Shanghai <-> SF
graph.insert_edge(9130, 1, 4) # SF <-> Berlin
graph.insert_edge(9130, 4, 1) # Berlin <-> SF
graph.insert_edge(9217, 2, 3) # London <-> Shanghai
graph.insert_edge(9217, 3, 2) # Shanghai <-> London
graph.insert_edge(932, 2, 4) # London <-> Berlin
graph.insert_edge(932, 4, 2) # Berlin <-> London
graph.insert_edge(9471, 2, 5) # London <-> Sao Paolo
graph.insert_edge(9471, 5, 2) # Sao Paolo <-> London
# (6) 'Bangalore' is intentionally disconnected (no edges)
# for this problem and should produce None in the
# Adjacency List, etc.
import pprint
pp = pprint.PrettyPrinter(indent=2)
print ("Edge List")
pp.pprint(graph.get_edge_list_names())
print ("\nAdjacency List")
pp.pprint(graph.get_adjacency_list_names())
print ("\nAdjacency Matrix")
pp.pprint(graph.get_adjacency_matrix())
print ("\nDepth First Search")
pp.pprint(graph.dfs_names(2))
# Should print:
# Depth First Search
# ['London', 'Shanghai', 'Mountain View', 'San Francisco', 'Berlin', 'Sao Paolo']
print ("\nBreadth First Search")
pp.pprint(graph.bfs_names(2))
# test error reporting
# pp.pprint(['Sao Paolo', 'Mountain View', 'San Francisco', 'London', 'Shanghai', 'Berlin'])
# Should print:
# Breadth First Search
# ['London', 'Shanghai', 'Berlin', 'Sao Paolo', 'Mountain View', 'San Francisco']
###Output
Edge List
[ (51, 'Mountain View', 'San Francisco'),
(51, 'San Francisco', 'Mountain View'),
(9950, 'Mountain View', 'Shanghai'),
(9950, 'Shanghai', 'Mountain View'),
(10375, 'Mountain View', 'Sao Paolo'),
(10375, 'Sao Paolo', 'Mountain View'),
(9900, 'San Francisco', 'Shanghai'),
(9900, 'Shanghai', 'San Francisco'),
(9130, 'San Francisco', 'Berlin'),
(9130, 'Berlin', 'San Francisco'),
(9217, 'London', 'Shanghai'),
(9217, 'Shanghai', 'London'),
(932, 'London', 'Berlin'),
(932, 'Berlin', 'London'),
(9471, 'London', 'Sao Paolo'),
(9471, 'Sao Paolo', 'London')]
Adjacency List
[ <map object at 0x7f11c8228828>,
<map object at 0x7f11c8228898>,
<map object at 0x7f11c8228908>,
<map object at 0x7f11c8228978>,
<map object at 0x7f11c82289e8>,
<map object at 0x7f11c8228a58>,
None]
Adjacency Matrix
[ [0, 51, 0, 9950, 0, 10375, 0],
[51, 0, 0, 9900, 9130, 0, 0],
[0, 0, 0, 9217, 932, 9471, 0],
[9950, 9900, 9217, 0, 0, 0, 0],
[0, 9130, 932, 0, 0, 0, 0],
[10375, 0, 9471, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]]
Depth First Search
['London', 'Shanghai', 'Mountain View', 'San Francisco', 'Berlin', 'Sao Paolo']
Breadth First Search
['London', 'Shanghai', 'Berlin', 'Sao Paolo', 'Mountain View', 'San Francisco']
courses/Setup/Github-GoogleDrive-GoogleColab.ipynb | ###Markdown
Github-GoogleDrive-GoogleColab Setup
0. GoogleDrive: Go to Google Drive and create a folder called "data_visualization". Create a new Google Colab file, or open one of the Google Colab notebooks from the [data-visualization](https://github.com/visiont3lab/data-visualization) course, save it and move it into this folder.
1. Github: Go to [Github](https://github.com/) and create an account.
2. Github: Create a repository called "seaborn-data-visualization" and create a default README.md.
3. GoogleColab: Edit the following notebook, adding code or comments.
4. GoogleColab: Save the content to Git (Save a Copy in Github).
5. Github: Create an index.html file with "Test" written inside. This will be our starting web page. More information at [Github Web Pages](https://pages.github.com/).
6. Github: Go to settings and look for "Github Pages". Enable it by choosing the master branch. We can also pick a theme in the "Theme Chooser" section. Under "Github Pages" the following should now appear: "Your site is ready to be published at https://visiont3lab.github.io/seaborn-data-visualization/."
7. GoogleColab: Convert the notebook (ipynb) to html and download it locally.
8. Github: Upload the downloaded html file to Github. The important thing is that the file is named "index.html".
###Code
# Requirements for converting the notebook to pdf
!apt-get install texlive texlive-xetex texlive-latex-extra pandoc
!pip install pypandoc
###Output
_____no_output_____
###Markdown
Convert a notebook (ipynb) stored in a GIT (Github) repository to html
###Code
!git clone https://github.com/visiont3lab/test-data-visualization.git
%cd test-data-visualization/
!ls
#jupyter nbconvert --to <output format> <filename.ipynb>
!jupyter nbconvert --to html Github-GoogleDrive-GoogleColab.ipynb
!ls
#jupyter nbconvert --to <output format> <filename.ipynb>
!jupyter nbconvert --to pdf Github-GoogleDrive-GoogleColab.ipynb
!ls
###Output
[NbConvertApp] Converting notebook Github-GoogleDrive-GoogleColab.ipynb to pdf
[NbConvertApp] Writing 26063 bytes to ./notebook.tex
[NbConvertApp] Building PDF
[NbConvertApp] Running xelatex 3 times: [u'xelatex', u'./notebook.tex', '-quiet']
[NbConvertApp] Running bibtex 1 time: [u'bibtex', u'./notebook']
[NbConvertApp] WARNING | bibtex had problems, most likely because there were no citations
[NbConvertApp] PDF successfully created
[NbConvertApp] Writing 30462 bytes to Github-GoogleDrive-GoogleColab.pdf
Github-GoogleDrive-GoogleColab.ipynb Pandas-Esercizio-Soluzione.ipynb
Github-GoogleDrive-GoogleColab.pdf Pandas-Esercizio-Soluzione.pdf
Github.ipynb Seaborn-Esercizio.ipynb
index.html Seaborn.ipynb
###Markdown
Convert a notebook (ipynb) stored in a GOOGLE DRIVE folder to html
###Code
from google.colab import drive
drive.mount('/content/gdrive')
%cd /content/gdrive/My Drive/Colab Notebooks/_Manuel_DataVisualization
!ls
#jupyter nbconvert --to <output format> <filename.ipynb>
!jupyter nbconvert --to html Pandas-Esercizio-Soluzione.ipynb --output index.html
!ls
#jupyter nbconvert --to <output format> <filename.ipynb>
!jupyter nbconvert --to pdf Pandas-Esercizio-Soluzione.ipynb
!ls
###Output
_____no_output_____
DeepLearningFrameworks/MXNet_RNN.ipynb | ###Markdown
High-level RNN MXNet Example
###Code
import os
import sys
import numpy as np
import mxnet as mx
from common.params_lstm import *
from common.utils import *
print("OS: ", sys.platform)
print("Python: ", sys.version)
print("Numpy: ", np.__version__)
print("MXNet: ", mx.__version__)
print("GPU: ", get_gpu_name())
def create_symbol(CUDNN=True):
# https://mxnet.incubator.apache.org/api/python/rnn.html
data = mx.symbol.Variable('data')
embedded_step = mx.symbol.Embedding(data=data, input_dim=MAXFEATURES, output_dim=EMBEDSIZE)
# Fusing RNN layers across time step into one kernel
# Improves speed but is less flexible
# Currently only supported if using cuDNN on GPU
if not CUDNN:
gru_cell = mx.rnn.GRUCell(num_hidden=NUMHIDDEN)
else:
gru_cell = mx.rnn.FusedRNNCell(num_hidden=NUMHIDDEN, num_layers=1, mode='gru')
begin_state = gru_cell.begin_state()
# Call the cell to get the output of one time step for a batch.
# TODO: TNC layout (sequence length, batch size, and feature dimensions) is faster for RNN
outputs, states = gru_cell.unroll(length=MAXLEN, inputs=embedded_step, merge_outputs=False)
fc1 = mx.symbol.FullyConnected(data=outputs[-1], num_hidden=2)
input_y = mx.symbol.Variable('softmax_label')
m = mx.symbol.SoftmaxOutput(data=fc1, label=input_y, name="softmax")
return m
def init_model(m):
if GPU:
ctx = [mx.gpu(0)]
else:
ctx = mx.cpu()
mod = mx.mod.Module(context=ctx, symbol=m)
mod.bind(data_shapes=[('data', (BATCHSIZE, MAXLEN))],
label_shapes=[('softmax_label', (BATCHSIZE, ))])
# Glorot-uniform initializer
mod.init_params(initializer=mx.init.Xavier(rnd_type='uniform'))
mod.init_optimizer(optimizer='Adam',
optimizer_params=(('learning_rate', LR),
('beta1', BETA_1),
('beta2', BETA_2),
('epsilon', EPS)))
return mod
%%time
# Data into format for library
x_train, x_test, y_train, y_test = imdb_for_library(seq_len=MAXLEN, max_features=MAXFEATURES)
# Use custom iterator instead of mx.io.NDArrayIter() for consistency
# Wrap as DataBatch class
wrapper_db = lambda args: mx.io.DataBatch(data=[mx.nd.array(args[0])], label=[mx.nd.array(args[1])])
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)
print(x_train.dtype, x_test.dtype, y_train.dtype, y_test.dtype)
%%time
# Load symbol
sym = create_symbol()
%%time
# Initialise model
model = init_model(sym)
%%time
# 29s
# Train and log accuracy
metric = mx.metric.create('acc')
for j in range(EPOCHS):
#train_iter.reset()
metric.reset()
#for batch in train_iter:
for batch in map(wrapper_db, yield_mb(x_train, y_train, BATCHSIZE, shuffle=True)):
model.forward(batch, is_train=True)
model.update_metric(metric, batch.label)
model.backward()
model.update()
print('Epoch %d, Training %s' % (j, metric.get()))
%%time
y_guess = model.predict(mx.io.NDArrayIter(x_test, batch_size=BATCHSIZE, shuffle=False))
y_guess = np.argmax(y_guess.asnumpy(), axis=-1)
print("Accuracy: ", sum(y_guess == y_test)/len(y_guess))
###Output
Accuracy: 0.85924
src/L2_Most_Sampled_Tests.ipynb | ###Markdown
This file compares distances of vectors in P based on the Tau metric, contrasting those obtained by sampling with those obtained from the most-distant search. This would be considered an auxiliary file, as it is exploratory and not directly concerned with the testbed.
###Code
from sensitivity_tests import *
import pandas as pd
#A programmer's note to themselves. Beautiful
"""###rerun with tau comparison###"""
#D matrix generators
eloTournament = SynthELOTournamentSource(50, 5, 80, 800)
smalleloTournament = SynthELOTournamentSource(4, 5, 80, 800)
l2dm = L2DifferenceMetric("max")
eloMatrix = eloTournament.init_D()
smalleloMatrix = smalleloTournament.init_D()
k, details = pyrankability.search.solve_pair_max_tau(eloMatrix)
print(l2dm._compute(k, [details["perm_x"],details["perm_y"]]))
k, details = pyrankability.search.solve_pair_max_tau(smalleloMatrix)
print(l2dm._compute(k, [details["perm_x"],details["perm_y"]]))
most_dist = []
sampled_dist = []
#Very straightforward. Generate tournament matrices, locate members of P with both methods,
#and place in corresponding arrays.
for i in range(30):
eloMatrix = eloTournament.init_D()
k, details = pyrankability.search.bilp(eloMatrix, num_random_restarts=10, find_pair=True)
sampled_dist.append(l2dm._compute(k, details["P"]))
k_most, details_most = pyrankability.search.solve_pair_max_tau(eloMatrix)
most_dist.append(l2dm._compute(k_most, [details_most["perm_x"],details_most["perm_y"]]))
comp = pd.DataFrame(data={'most_distant': most_dist, 'sampled_most_distant': sampled_dist})
comp
comp.plot.scatter("most_distant", "sampled_most_distant", title="Comparison of L2 Metric")
[(most_dist[i]-sampled_dist[i]) for i in range(len(most_dist))]
sampled_dist
most_dist
comp = pd.DataFrame(data={'most_distant': most_dist, 'sampled_most_distant': sampled_dist})
comp
comp.plot.scatter("most_distant", "sampled_most_distant", title="Comparison of L2 Metric")
#ensuring data matches what is expected/presented
[(most_dist[i]-sampled_dist[i]) for i in range(len(most_dist))]
#ensuring data matches what is expected/presented
comp = pd.DataFrame(data={'most_distant': most_dist, 'sampled_most_distant': sampled_dist})
comp
comp.plot.scatter("most_distant", "sampled_most_distant", title="Comparison of L2 Metric")
with pd.option_context('display.max_rows', None, 'display.max_columns', None): # more options can be specified also
display(comp)
[(most_dist[i]-sampled_dist[i]) for i in range(len(most_dist))]
comp.to_csv(index=True)
comp = pd.DataFrame(data={'most_distant': most_dist, 'sampled_most_distant': sampled_dist})
comp
comp.plot.scatter("most_distant", "sampled_most_distant", title="Comparison of L2 Metric")
[(most_dist[i]-sampled_dist[i]) for i in range(len(most_dist))]
comp.to_csv(index=True)
###Output
_____no_output_____ |
FaceDetection_CNN.ipynb | ###Markdown
**Face Detection using CNNs** Importing the required libraries
###Code
import numpy as np
from keras import layers
from keras import models
from keras import regularizers
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
###Output
Using TensorFlow backend.
###Markdown
Assigning the location of training and test data
###Code
train_data = ''
test_data = ''
###Output
_____no_output_____
###Markdown
Initializing the parameters required for the network
###Code
max_count = 100
reg_val = []
lr_val = []
test_loss = []
test_acc = []
###Output
_____no_output_____
###Markdown
1. Sample the learning rate and regularization strength from a uniform distribution
2. Define the architecture of the model
###Code
for i in range(max_count):
print("*"*30)
print(str(i+1)+"/"+str(max_count))
print("*"*30)
reg = 10**(np.random.uniform(-4,0))
lr=10**(np.random.uniform(-3,-4))
    model = models.Sequential()
model.add(layers.Conv2D(32,(3,3),activation='relu',input_shape=(60,60,3)))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.Conv2D(64,(3,3),activation='relu'))
model.add(layers.MaxPooling2D(2,2))
model.add(layers.Conv2D(128,(3,3),activation='relu'))
model.add(layers.MaxPooling2D(2,2))
model.add(layers.Conv2D(128,(3,3),activation='relu'))
model.add(layers.MaxPooling2D(2,2))
model.add(layers.Flatten())
    model.add(layers.Dense(512,activation='relu',kernel_regularizer=regularizers.l2(reg)))
    model.add(layers.Dense(1,activation='sigmoid',kernel_regularizer=regularizers.l2(reg)))
# Summarizing the model:
model.summary()
    model.compile(loss='binary_crossentropy',optimizer=optimizers.RMSprop(lr=lr),metrics=['acc'])
# Rescale all the images:
train_datagen=ImageDataGenerator(rescale=1./255)
test_datagen=ImageDataGenerator(rescale=1./255)
    train_generator=train_datagen.flow_from_directory(
        train_data,
        target_size=(60,60),
        batch_size=20,
        class_mode='binary')
    test_generator=test_datagen.flow_from_directory(
        test_data,
        target_size=(60,60),
        batch_size=20,
        class_mode='binary')
# Fitting the model
history=model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=5,
validation_data=test_generator,
validation_steps=50)
reg_val.append(reg)
lr_val.append(lr)
test_loss.append(history.history['val_loss'])
test_acc.append(history.history['val_acc'])
# Saving the model
    model.save('face_nonface.h5')
###Output
_____no_output_____
###Markdown
Plotting accuracy and loss
###Code
acc=history.history['acc']
test_acc=history.history['val_acc']
loss=history.history['loss']
test_loss=history.history['val_loss']
epochs=range(1,len(acc)+1)
plt.plot(epochs,acc,'bo',label='TRAINING ACCURACY')
plt.plot(epochs,test_acc,'b',label='TEST ACCURACY')
plt.title('TRAINING VS TEST ACCURACY')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.figure()
plt.plot(epochs,loss,'bo',label='TRAINING LOSS')
plt.plot(epochs,test_loss,'b',label='TEST LOSS')
plt.title('TRAINING AND TESTING LOSS')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Finding the highest and the lowest test accuracy
###Code
print ("Finding the highest Test Accuracy and lowest Test Loss...")
index1=0
index2=0
max_test_acc=max(test_acc)
min_test_loss=min(test_loss)
"""
for i in range(max_count):
temp1=max(test_acc[i])
if(temp1>=max_test_acc):
max_test_acc=temp1
index1=i
temp2=min(test_loss[i])
if(temp2<min_test_loss):
min_test_loss=temp2
index2=i
"""
print ('Maximum Testing Accuracy:',max_test_acc)
print ('Minimum Testing Loss:',min_test_loss)
print ('Value of optimum learning rate :',lr_val[index1])
print ('Value of optimum regularization:',reg_val[index2])
###Output
_____no_output_____ |
dd_1/Part 2/Section 02 - Sequences/07 - In-Place Concatenation and Repetition.ipynb | ###Markdown
In-Place Concatenation and Repetition In-Place Concatenation We saw that using concatenation ended up creating a new sequence object:
###Code
l1 = [1, 2, 3, 4]
l2 = [5, 6]
print(id(l1), l1)
print(id(l2), l2)
l1 = l1 + l2
print(id(l1), l1)
###Output
2674853399624 [1, 2, 3, 4, 5, 6]
###Markdown
But watch what happens when we use the in-place concatenation operator `+=`:
###Code
l1 = [1, 2, 3, 4]
l2 = [5, 6]
print(id(l1), l1)
print(id(l2), l2)
l1 += l2
print(id(l1), l1)
###Output
2674853400520 [1, 2, 3, 4, 5, 6]
###Markdown
Notice how the `id` of `l1` has **not** changed - it is the same object, just mutated! So far in this course I have often said that: `a = a + 1` and `a += 1` are the same thing. And for immutable objects such as integers, that is indeed true. But in fact `+` and `+=` are two different operators. It is interesting to note that the implementation of `+=` for lists will actually extend the list given any iterable, not just another list. This is really just the particular implementation of that operator for lists.
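A small illustration of *why* they are different operators (the class names here are made up for the example): `+=` first looks for an `__iadd__` method and only falls back to `__add__` - which builds a new object - when `__iadd__` is not defined:
###Code
# Hypothetical classes showing the dispatch behind +=.
class WithIAdd:
    def __init__(self, data):
        self.data = list(data)
    def __iadd__(self, other):
        self.data.extend(other)   # mutate in place...
        return self               # ...and return the same object

class WithoutIAdd:
    def __init__(self, data):
        self.data = list(data)
    def __add__(self, other):
        return WithoutIAdd(self.data + list(other))  # always a brand new object

a = WithIAdd([1, 2])
print(id(a))
a += [3, 4]
print(id(a), a.data)   # same id - mutated in place via __iadd__

b = WithoutIAdd([1, 2])
print(id(b))
b += [3, 4]
print(id(b), b.data)   # different id - rebound to a new object via __add__
###Output
_____no_output_____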
###Code
l1 = [1, 2, 3, 4]
t1 = 5, 6, 7
print(id(l1), l1)
print(id(t1), t1)
l1 += t1
print(id(l1), l1)
###Output
2674853566344 [1, 2, 3, 4, 5, 6, 7]
###Markdown
And this will work with other iterables as well:
###Code
l1 += range(8, 11)
print(id(l1), l1)
###Output
2674853566344 [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
###Markdown
or even with iterable non-sequence types:
###Code
l1 += {11, 12, 13}
print(id(l1), l1)
###Output
2674853566344 [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
###Markdown
Of course, this will **not work** with **immutable** sequence types, such as tuples or strings:
###Code
t1 = 1, 2, 3
t2 = 4, 5, 6
print(id(t1), t1)
print(id(t2), t2)
t1 += t2
print(id(t1), t1)
###Output
2674852634768 (1, 2, 3)
###Markdown
We cannot mutate an immutable container! What happens is that `+=` is not actually defined for the `tuple`, and so Python essentially executed this code: `t1 = t1 + t2` which, as we already know, always creates a new object.

In-Place Repetition
A similar result holds for in-place repetition. Let's see this using a list (mutable sequence type) first:
###Code
l = [1, 2, 3]
print(id(l), l)
l *= 2
print(id(l), l)
###Output
2674853567560 [1, 2, 3, 1, 2, 3]
###Markdown
But obviously this operator will work differently if the sequence type is immutable:
###Code
t = (1, 2, 3)
print(id(t), t)
t *= 2
print(id(t), t)
###Output
2674829349224 (1, 2, 3, 1, 2, 3)
|
eda/Step2_TimeSeriesEDA.ipynb | ###Markdown
Step 2: Time Series Exploratory Data Analysis
Get a feel for how time series data looks for various features
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
df = pd.read_csv("../data/device_failure.csv")
df.columns = ['date', 'device', 'failure', 'a1', 'a2','a3','a4','a5','a6','a7','a8','a9']
fcols = ['a1', 'a2','a3','a4','a5','a6','a7','a8','a9']
df.loc[:,'date'] = pd.to_datetime(df['date'])
###Output
_____no_output_____
###Markdown
Ideally, we would save this off in separate data stores for faster retrieval, e.g. in orc/parquet format (hdfs) to retain the column types (especially dates), with float/int stored very efficiently
###Code
failed_devs = pd.DataFrame(df[df['failure'] == 1].device.unique())
failed_devs.columns = ["device"]
failed_devs_hist = pd.merge(df, failed_devs, on=["device"])
good_devs = pd.DataFrame(list(set(df.device.unique()) - set(failed_devs["device"])))
good_devs.columns = ["device"]
good_devs_hist = pd.merge(df, good_devs, on=["device"])
###Output
_____no_output_____
###Markdown
Explore how good vs bad device data looks for various features. Just a preliminary analysis: see which transformations make sense so we can build modules.
###Code
def plot_history(tdf, feature, devname):
fdev = tdf[tdf["device"] == devname]
fdev.set_index("date", inplace=True)
fdev[feature].plot()
def plot_sample_history(tdf, dev_list_df, sample_cnt, feature):
#Get a sample of devices and their history
sample_dev_df = dev_list_df.sample(sample_cnt)
sample_dev_hist = pd.merge(tdf, sample_dev_df, on=["device"])
for device in sample_dev_df["device"]:
fig, axs = plt.subplots(1)
fig.set_size_inches(6,2)
plot_history(sample_dev_hist, feature, device)
plot_sample_history(failed_devs_hist, failed_devs, 3, "a2")
plot_sample_history(good_devs_hist, good_devs, 3, "a2")
###Output
_____no_output_____
###Markdown
As the Good vs Bad analysis for "a2" shows
- Failures happen quickly, within days, once we start seeing signal on a2
- Most good devices don't show any signal activity in a2
- The range of values varies widely. So, we may have to use natural logs, their differentials, and derive features from these
- Note: I have taken samples of 10 many times to come to this conclusion (not just for a2, but for all features)

a1: Cannot seem to find anything different between good and bad devices. Try another approach like using the slope and bias of regression lines as features. TBD later
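As a rough sketch of the log/differential idea mentioned in the a2 notes above (on a tiny made-up frame rather than the real `df`, so the device names and values here are just placeholders):
###Code
import numpy as np
import pandas as pd

# Stand-in for the real df/fcols above; values assumed non-negative like the a* attributes.
toy = pd.DataFrame({
    'device': ['S1F0', 'S1F0', 'S1F0', 'S1F1', 'S1F1', 'S1F1'],
    'a2': [0, 55, 12000, 0, 0, 3]
})
toy['log_a2'] = np.log1p(toy['a2'])                          # log(1 + x) tames the wide range
toy['log_a2_diff'] = toy.groupby('device')['log_a2'].diff()  # day-over-day change per device
print(toy)
###Output
_____no_output_____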
###Code
plot_sample_history(failed_devs_hist, failed_devs, 3, "a1")
plot_sample_history(good_devs_hist, good_devs, 3, "a1")
###Output
_____no_output_____
###Markdown
a7 and a8 are the same feature!
###Code
df[df["a7"] != df["a8"]]
###Output
_____no_output_____
_notebooks/2021-09-15-Polygon_Bridge.ipynb | ###Markdown
Polygon Bridging Behaviour> "What do users do when they bridge to Polygon?"- toc:true- branch: master- badges: true- comments: false- author: Scott Simpson- categories: [polygon]
###Code
#hide
#Imports & settings
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
%matplotlib inline
#%load_ext google.colab.data_table
%load_ext rpy2.ipython
%R options(tidyverse.quiet = TRUE)
%R options(lubridate.quiet = TRUE)
%R options(jsonlite.quiet = TRUE)
%R suppressMessages(library(tidyverse))
%R suppressMessages(library(lubridate))
%R suppressMessages(library(jsonlite))
%R suppressMessages(options(dplyr.summarise.inform = FALSE))
#hide
%%R
#Grab base query from Flipside
df_group1 = fromJSON('https://api.flipsidecrypto.com/api/v2/queries/ce6a4190-132d-46fc-96a3-f70127512f85/data/latest', simplifyDataFrame = TRUE)
df_group2 = fromJSON('https://api.flipsidecrypto.com/api/v2/queries/e391cb58-6f9d-4f90-9fd9-bd07a268997b/data/latest', simplifyDataFrame = TRUE)
df_group3 = fromJSON('https://api.flipsidecrypto.com/api/v2/queries/8eca21f3-20ef-4ce9-88e6-ce03591b3a71/data/latest', simplifyDataFrame = TRUE)
df_group4 = fromJSON('https://api.flipsidecrypto.com/api/v2/queries/c03db498-3094-45f3-93c0-30193e111b14/data/latest', simplifyDataFrame = TRUE)
#union all the three query groups
df <- df_group1 %>%
bind_rows(df_group2) %>%
bind_rows(df_group3) %>%
bind_rows(df_group4)
rm(list = c("df_group1", "df_group2", "df_group3", "df_group4"))
#Change the date to date format
df$BLOCK_TIMESTAMP <- parse_datetime(df$BLOCK_TIMESTAMP)
#lower case the column names
names(df)<-tolower(names(df))
#grab labels
labels <- read_csv("https://raw.githubusercontent.com/scottincrypto/analytics/master/data/bridge_labels.csv")
#join labels
df <- df %>%
left_join(labels)
#replace nas with Unknown
df$Label <- replace_na(df$Label, "Unknown")
df$Class <- replace_na(df$Class, "Unknown")
#Grab the top Coins chart
coin_chart = fromJSON('https://api.flipsidecrypto.com/api/v2/queries/55d223eb-f8f4-4e0a-97bb-6a0eb5a22b7d/data/latest', simplifyDataFrame = TRUE)
coin_chart <- coin_chart %>%
mutate(total = sum(AMOUNT_USD),
Percentage = round(AMOUNT_USD / total * 100,0)) %>%
rename('Bridged Token' = SYMBOL,'USD Amount' = AMOUNT_USD) %>%
select('Bridged Token', 'USD Amount', Percentage) %>%
arrange(desc('USD Amount'))
###Output
Rows: 77 Columns: 3
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (3): nxt_to_address, Label, Class
ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
Joining, by = "nxt_to_address"
###Markdown
Introduction
This post seeks to answer the following questions: *Where do people go when they bridge to Polygon from Ethereum? What are the 10 most popular first destinations for Polygon addresses that have just bridged from Ethereum? What has this been for each day in the past month?*
Polygon operates as a sidechain to Ethereum, with funds moved to & from Polygon via a series of bridges. These bridges accept tokens on the Ethereum side, then create wrapped versions of the same tokens on the Polygon side. Users are attracted to Polygon by the low fees & fast transaction speeds relative to Ethereum. In the absence of fiat onramps, bridges are the only way into the ecosystem. There are a number of bridges in operation by different operators, but this analysis will focus on the official Polygon bridge provided by the protocol.
About the Data
This dataset looks at all of the transactions coming into Polygon via the Polygon Bridge over the last month. It then looks at the *next* transaction each wallet undertook in the currency that was bridged in, and classifies the destination of that transaction. In examining the data, a couple of observations were noted:
- Often the first transaction was a very small swap for MATIC - the token used for gas on Polygon
- Most of the coins bridged were either USDC, USDT, WBTC or WETH. Only 3% were other coins, and there was a very long list of these other coins.
To simplify the analysis, we excluded any next transaction with a value of less than USD 10, and looked at the transaction *after* this instead. This lets us see the intent of the user more than looking at their initial MATIC swap transaction. We also limited our analysis to the 4 major coins above - this accounts for 97% of the funds bridged (as per the table below) and allowed us to simplify the charts for better insights.
All data was sourced from [Flipside Crypto](https://flipsidecrypto.com)
###Code
#hide_input
%R coin_chart
###Output
_____no_output_____
###Markdown
Top Destinations
The destination addresses & contracts were classified by project or protocol to understand where users were going after bridging to Polygon. We managed to classify over 70 of these destination contracts to account for over 75% of the value moved. There was a very long tail (over 23k addresses) accounting for the remaining 25% of the value - these are labelled as "Unknown" in the data going forward.
The graph below shows the next destination of the bridged funds. Aave is the top destination, with a lot of value coming into Polygon to take part in borrowing & lending activities. A number of familiar names from the Ethereum ecosystem are in the list - Curve, 1inch, Balancer, Sushi - but there are a few non-Ethereum specific names in the list - Quickswap, Iron Finance, Polynetwork. An interesting find in this list is the bridges - a significant portion of funds were immediately bridged out of Polygon after being bridged in. This was done via the Polygon Bridge ("Bridge Out") or via the Allbridge facility. It's possible users are using Allbridge to get to chains such as Solana or Binance using Polygon as an intermediate step.
###Code
#hide_input
# Plot the top 10
df_p = %R df %>% group_by(Label) %>% summarise(total = sum(nxt_amount_usd)) %>% arrange(desc(total))
fig = px.bar(df_p
, x = "Label"
, y = "total"
, labels=dict(Label="Destination", total="USD Amount")
, title= "Next Destination of Bridged Funds on Polygon"
, template="simple_white", width=800, height=800/1.618
)
fig.update_yaxes(title_text='Amount (USD)')
fig.update_xaxes(title_text='Destination')
fig.show()
###Output
_____no_output_____
###Markdown
Flow of Funds By Destination
The graph below shows the flow of funds from the bridge, by token bridged, into the destination protocols shown above. We can see the large flows into Aave - interestingly, most of the Aave flow is USDC, USDT or WBTC. WETH makes up only a small portion of the flow. WETH yields are usually low on Aave, but WBTC yields are too. Low yields do not show the full picture of user behaviour here.
###Code
#hide
%%R
#create the RHS of the sankey link table
rhs_link_label <- df %>% group_by(symbol, Label) %>%
summarise(total = sum(nxt_amount_usd)) %>%
rename(source = symbol, target = Label)
#create the LHS of the sankey link table
lhs_link_label <- df %>% group_by(symbol) %>%
summarise(total = sum(amount_usd)) %>%
mutate(source = "Bridge In") %>%
select(source, symbol, total) %>%
rename(target = symbol)
#Join the table together
link_table_label <- lhs_link_label %>% bind_rows(rhs_link_label)
#Create a list of the nodes
nodes_label <- link_table_label %>% rename(node = source) %>%
select(node) %>%
bind_rows(link_table_label %>% rename(node = target) %>% select(node)) %>%
distinct(node) %>%
mutate(index = row_number()-1)
#join the index back into the link table
link_table_label <- link_table_label %>%
left_join(nodes_label, by=c("source" = "node")) %>%
rename(source_index = index)
link_table_label <- link_table_label %>%
left_join(nodes_label, by=c("target" = "node")) %>%
rename(target_index = index)
#hide_input
# Sankey by Label
label = %R nodes_label %>% select(node)
source = %R link_table_label %>% select(source_index)
target = %R link_table_label %>% select(target_index)
value = %R link_table_label%>% select(total)
label = label['node'].to_list()
source = source['source_index'].to_list()
target = target['target_index'].to_list()
value = value['total'].to_list()
# data to dict, dict to sankey
link = dict(source = source, target = target, value = value)
node = dict(label = label, pad=50, thickness=5)
data = go.Sankey(link = link, node=node)
fig = go.Figure(data)
fig.update_layout(width=800, height=800/1.618, template="simple_white", title="Flow of Funds from Bridge to Destination (amounts in USD)")
fig.show()
###Output
_____no_output_____
###Markdown
By Use Case
To simplify the above graph, each destination was classified into a broad use case grouping. This is shown in the graph below. Users coming to Polygon for Defi activities (Aave, Iron Finance etc) account for the largest grouping. Here we see the amount of funds bridged immediately back out of Polygon - 11% of funds bridged in are immediately bridged out. Another observation from this graph is that there is a small amount of hodling occurring - this is seen where the outflows from each token are less than the inflows.
###Code
#hide
%%R
#Do the same graph by Class
#create the RHS of the sankey link table
rhs_link <- df %>% group_by(symbol, Class) %>%
summarise(total = sum(nxt_amount_usd)) %>%
rename(source = symbol, target = Class)
#create the LHS of the sankey link table
lhs_link <- df %>% group_by(symbol) %>%
summarise(total = sum(amount_usd)) %>%
mutate(source = "Bridge In") %>%
select(source, symbol, total) %>%
rename(target = symbol)
#Join the table together
link_table <- lhs_link %>% bind_rows(rhs_link)
#Create a list of the nodes
nodes <- link_table %>% rename(node = source) %>%
select(node) %>%
bind_rows(link_table %>% rename(node = target) %>% select(node)) %>%
distinct(node) %>%
mutate(index = row_number()-1)
#join the index back into the link table
link_table <- link_table %>%
left_join(nodes, by=c("source" = "node")) %>%
rename(source_index = index)
link_table <- link_table %>%
left_join(nodes, by=c("target" = "node")) %>%
rename(target_index = index)
#hide_input
# Sankey by Class
label = %R nodes %>% select(node)
source = %R link_table %>% select(source_index)
target = %R link_table %>% select(target_index)
value = %R link_table%>% select(total)
label = label['node'].to_list()
source = source['source_index'].to_list()
target = target['target_index'].to_list()
value = value['total'].to_list()
# data to dict, dict to sankey
link = dict(source = source, target = target, value = value)
node = dict(label = label, pad=50, thickness=5)
data = go.Sankey(link = link, node=node)
fig = go.Figure(data)
fig.update_layout(width=800, height=800/1.618, template="simple_white", title="Flow of Funds from Bridge to Use Case (amounts in USD)")
fig.show()
###Output
_____no_output_____
###Markdown
Most Popular Destinations by Day
The chart below shows, by day, the ranking of each of the destinations by the USD amount sent to them. This chart is very busy and a little difficult to interpret. To target a particular protocol, double-click on the protocol in the legend and the graph will isolate just that protocol. Clicking on another protocol after this will add it to the graph. Double-click again to return to all values.
Aave is consistently the top performer, always being ranked at number 4 or higher. The unknown category also features highly - indicating that there are a wide variety of destinations on Polygon and it's not just a small handful capturing all the action. The behaviour with bridges continues to yield interesting insight - Allbridge has risen to be a consistent top 5 destination in the last 2 weeks - perhaps this is coincident with the rise in interest in Solana, as Allbridge can be used to bridge to this chain.
###Code
#hide
%%R
#top 10 rank by day
top_10_table <- df %>% mutate(date = floor_date(block_timestamp, unit = "days")) %>%
group_by(date, Label) %>%
summarise(dest_total = sum(nxt_amount_usd)) %>%
ungroup() %>%
group_by(date) %>%
mutate(rank = rank(desc(dest_total))) %>%
ungroup() %>%
# filter(rank <= 10) %>%
arrange(date, rank)
#hide_input
#aave top graph - need rank by day first
#Top 5 pools by week for 6 months
df_p = %R top_10_table %>% arrange(date, rank)
fig = px.line(df_p, x="date", y="rank", color='Label',
template="simple_white", width=1200, height=1200/1.618,
title= 'Most Popular Desination, Ranked, by Day',
labels=dict(date="Date", rank="Rank", Label="Destination"))
fig.update_yaxes(autorange="reversed")
fig.update_traces(mode="lines+markers")
fig.update_yaxes(tick0=1, dtick=1)
#fig.update_layout(legend=dict(
# yanchor="bottom",
# y=0.01,
# xanchor="left",
# x=0.01
#))
fig.show()
###Output
_____no_output_____
###Markdown
Most Popular Use Cases by Day
To simplify the above graph, we used the broader classes of use cases rather than the destination protocols. Here we see that Defi usage is on average the highest - this contains Aave. Users are obviously keen to generate yield from their funds in Polygon, taking advantage of the low fees relative to Ethereum to deposit their funds. We also see that funds jumping straight to dex swaps have dropped off - 3-4 weeks ago this was the highest use case, now it is amongst the lowest.
###Code
#hide
%%R
#top 10 rank by day
top_10_table <- df %>% mutate(date = floor_date(block_timestamp, unit = "days")) %>%
group_by(date, Class) %>%
summarise(dest_total = sum(nxt_amount_usd)) %>%
ungroup() %>%
group_by(date) %>%
mutate(rank = rank(desc(dest_total))) %>%
ungroup() %>%
filter(rank <= 10) %>%
arrange(date, rank)
#hide_input
df_p = %R top_10_table %>% arrange(date, rank)
fig = px.line(df_p, x="date", y="rank", color='Class',
template="simple_white", width=1200, height=1200/1.618,
title= 'Most Popular Use Case, Ranked, by Day',
labels=dict(rank="Rank", date="Date", Class="Use Case"))
fig.update_yaxes(autorange="reversed")
fig.update_traces(mode="lines+markers")
fig.update_yaxes(tick0=1, dtick=1)
#fig.update_layout(legend=dict(
# yanchor="bottom",
# y=0.01,
# xanchor="left",
# x=0.01
#))
fig.show()
###Output
_____no_output_____ |
Time_Series_Forecasting/.ipynb_checkpoints/Energy_Consumption_Exercise-checkpoint.ipynb | ###Markdown
Time Series Forecasting
A time series is data collected periodically, over time. Time series forecasting is the task of predicting future data points, given some historical data. It is commonly used in a variety of tasks from weather forecasting, retail and sales forecasting, stock market prediction, and in behavior prediction (such as predicting the flow of car traffic over a day). There is a lot of time series data out there, and recognizing patterns in that data is an active area of machine learning research! In this notebook, we'll focus on one method for finding time-based patterns: using SageMaker's supervised learning model, [DeepAR](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html).

DeepAR
DeepAR utilizes a recurrent neural network (RNN), which is designed to accept some sequence of data points as historical input and produce a predicted sequence of points. So, how does this model learn? During training, you'll provide a training dataset (made of several time series) to a DeepAR estimator. The estimator looks at *all* the training time series and tries to identify similarities across them. It trains by randomly sampling **training examples** from the training time series.
* Each training example consists of a pair of adjacent **context** and **prediction** windows of fixed, predefined lengths.
* The `context_length` parameter controls how far in the *past* the model can see.
* The `prediction_length` parameter controls how far in the *future* predictions can be made.
* You can find more details, in [this documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_how-it-works.html).
> Since DeepAR trains on several time series, it is well suited for data that exhibit **recurring patterns**.
In any forecasting task, you should choose the context window to provide enough, **relevant** information to a model so that it can produce accurate predictions. In general, data closest to the prediction time frame will contain the information that is most influential in defining that prediction. In many forecasting applications, like forecasting sales month-to-month, the context and prediction windows will be the same size, but sometimes it will be useful to have a larger context window to notice longer-term patterns in data.

Energy Consumption Data
The data we'll be working with in this notebook is data about household electric power consumption, over the globe. The dataset is originally taken from [Kaggle](https://www.kaggle.com/uciml/electric-power-consumption-data-set), and represents power consumption collected over several years from 2006 to 2010. With such a large dataset, we can aim to predict over long periods of time, over days, weeks or months of time. Predicting energy consumption can be a useful task for a variety of reasons including determining seasonal prices for power consumption and efficiently delivering power to people, according to their predicted usage.
**Interesting read**: An inversely-related project, recently done by Google and DeepMind, uses machine learning to predict the *generation* of power by wind turbines and efficiently deliver power to the grid. You can read about that research, [in this post](https://deepmind.com/blog/machine-learning-can-boost-value-wind-energy/). 
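To make the context/prediction window idea above concrete before moving on, here is a small sketch of slicing one training example out of a single series (plain Python with made-up numbers - not DeepAR's internal code):
###Code
import random

series = list(range(100))      # stand-in for one time series
context_length = 12            # how far into the past the model can see
prediction_length = 6          # how far into the future it predicts

# pick a random split point that leaves room for both windows
split = random.randint(context_length, len(series) - prediction_length)
context_window = series[split - context_length:split]         # what the model conditions on
prediction_window = series[split:split + prediction_length]   # what it learns to predict

print(len(context_window), len(prediction_window))  # 12 6
###Output
_____no_output_____
###Markdown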
Machine Learning WorkflowThis notebook approaches time series forecasting in a number of steps:* Loading and exploring the data* Creating training and test sets of time series* Formatting data as JSON files and uploading to S3* Instantiating and training a DeepAR estimator* Deploying a model and creating a predictor* Evaluating the predictor ---Let's start by loading in the usual resources.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load and Explore the DataWe'll be loading in some data about global energy consumption, collected over a few years. The below cell downloads and unzips this data, giving you one text file of data, `household_power_consumption.txt`.
###Code
! wget https://s3.amazonaws.com/video.udacity-data.com/topher/2019/March/5c88a3f1_household-electric-power-consumption/household-electric-power-consumption.zip
! unzip household-electric-power-consumption
###Output
_____no_output_____
###Markdown
Read in the `.txt` FileThe next cell displays the first few lines in the text file, so we can see how it is formatted.
###Code
# display first ten lines of text data
n_lines = 10
with open('household_power_consumption.txt') as file:
head = [next(file) for line in range(n_lines)]
display(head)
###Output
_____no_output_____
###Markdown
Pre-Process the DataThe 'household_power_consumption.txt' file has the following attributes: * Each data point has a date and time (hour:minute:second) of recording * The various data features are separated by semicolons (;) * Some values are 'nan' or '?', and we'll treat these both as `NaN` values Managing `NaN` valuesThis DataFrame does include some data points that have missing values. So far, we've mainly been dropping these values, but there are other ways to handle `NaN` values, as well. One technique is to just fill the missing column values with the **mean** value from that column; this way the added value is likely to be realistic.I've provided some helper functions in `txt_preprocessing.py` that will help to load in the original text file as a DataFrame *and* fill in any `NaN` values, per column, with the mean feature value. This technique will be fine for long-term forecasting; if I wanted to do an hourly analysis and prediction, I'd consider dropping the `NaN` values or taking an average over a small, sliding window rather than an entire column of data.**Below, I'm reading the file in as a DataFrame and filling `NaN` values with feature-level averages.**
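The helper functions live in `txt_preprocessing.py`, which isn't included in this notebook. For reference, the core of the mean-filling step could look something like the sketch below; the function name is my own, and the real helper also handles parsing the raw text file (semicolons, '?' markers, and the datetime index), so treat this as an assumption about its behavior rather than its exact code.
```python
import pandas as pd

def fill_nan_with_mean_sketch(df):
    '''Fill NaN values in each column with that column's mean value.
    Assumes a DataFrame that is already datetime-indexed and holds only
    numeric power-consumption columns.'''
    return df.fillna(df.mean())
```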
###Code
import txt_preprocessing as pprocess
# create df from text file
initial_df = pprocess.create_df('household_power_consumption.txt', sep=';')
# fill NaN column values with *average* column value
df = pprocess.fill_nan_with_mean(initial_df)
# print some stats about the data
print('Data shape: ', df.shape)
df.head()
###Output
_____no_output_____
###Markdown
Global Active Power In this example, we'll want to predict the global active power, which is the household minute-averaged active power (kilowatt), measured across the globe. So, below, I am getting just that column of data and displaying the resultant plot.
###Code
# Select Global active power data
power_df = df['Global_active_power'].copy()
print(power_df.shape)
# display the data
plt.figure(figsize=(12,6))
# all data points
power_df.plot(title='Global active power', color='blue')
plt.show()
###Output
_____no_output_____
###Markdown
Since the data is recorded each minute, the above plot contains *a lot* of values. So, I'm also showing just a slice of data, below.
###Code
# can plot a slice of hourly data
end_mins = 1440 # 1440 mins = 1 day
plt.figure(figsize=(12,6))
power_df[0:end_mins].plot(title='Global active power, over one day', color='blue')
plt.show()
###Output
_____no_output_____
###Markdown
Hourly vs DailyThere is a lot of data, collected every minute, and so I could go one of two ways with my analysis:1. Create many, short time series, say a week or so long, in which I record energy consumption every hour, and try to predict the energy consumption over the following hours or days.2. Create fewer, long time series with data recorded daily that I could use to predict usage in the following weeks or months.Both tasks are interesting! It depends on whether you want to predict time patterns over a day/week or over a longer time period, like a month. With the amount of data I have, I think it would be interesting to see longer, *recurring* trends that happen over several months or over a year. So, I will resample the 'Global active power' values, recording **daily** data points as averages over 24-hr periods.> I can resample according to a specified frequency, by utilizing pandas [time series tools](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html), which allow me to sample at points like every hour ('H') or day ('D'), etc.
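For reference, the hourly alternative (option 1 above) would just use a different resampling frequency; a minimal sketch, assuming `power_df` is datetime-indexed as described earlier, is shown here. The next cell follows the daily option instead.
```python
# the hourly alternative: average the minute-level readings over each hour
hourly_power_df = power_df.resample('H').mean()
print(hourly_power_df.shape)
```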
###Code
# resample over day (D)
freq = 'D'
# calculate the mean active power for a day
mean_power_df = power_df.resample(freq).mean()
# display the mean values
plt.figure(figsize=(15,8))
mean_power_df.plot(title='Global active power, mean per day', color='blue')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
In this plot, we can see that there are some interesting trends that occur over each year. It seems that there are spikes of energy consumption around the end/beginning of each year, which correspond with heat and light usage being higher in winter months. We also see a dip in usage around August, when global temperatures are typically higher.The data is still not very smooth, but it shows noticeable trends, and so makes for a good use case for machine learning models that may be able to recognize these patterns. --- Create Time Series My goal will be to take full years of data, from 2007-2009, and see if I can use it to accurately predict the average Global active power usage for the next several months in 2010!Next, let's make one time series for each complete year of data. This is just a design decision, and I am deciding to use full years of data, starting in January of 2007 because there are not that many data points in 2006 and this split will make it easier to handle leap years; I could have also decided to construct time series starting at the first collected data point, just by changing `t_start` and `t_end` in the function below.The function `make_time_series` will create one pandas `Series` for each year in the passed-in list `['2007', '2008', '2009']`.* All of the time series will start at the same time point `t_start` (or t0). * When preparing data, it's important to use a consistent start point for each time series; DeepAR uses this time-point as a frame of reference, which enables it to learn recurrent patterns e.g. that weekdays behave differently from weekends or that Summer is different than Winter. * You can change the start and end indices to define any time series you create.* We should account for leap years, like 2008, in the creation of time series.* Generally, we create `Series` by getting the relevant global consumption data (from the DataFrame) and date indices.
```
# get global consumption data
data = mean_power_df[start_idx:end_idx]

# create time series for the year
index = pd.date_range(start=t_start, end=t_end, freq='D')
time_series.append(pd.Series(data=data, index=index))
```
###Code
def make_time_series(mean_power_df, years, freq='D', start_idx=16):
'''Creates as many time series as there are complete years. This code
accounts for the leap year, 2008.
:param mean_power_df: A dataframe of global power consumption, averaged by day.
This dataframe should also be indexed by a datetime.
:param years: A list of years to make time series out of, ex. ['2007', '2008'].
:param freq: The frequency of data recording (D = daily)
:param start_idx: The starting dataframe index of the first point in the first time series.
The default, 16, points to '2007-01-01'.
:return: A list of pd.Series(), time series data.
'''
# store time series
time_series = []
# store leap year in this dataset
leap = '2008'
# create time series for each year in years
for i in range(len(years)):
year = years[i]
if(year == leap):
end_idx = start_idx+366
else:
end_idx = start_idx+365
# create start and end datetimes
t_start = year + '-01-01' # Jan 1st of each year = t_start
t_end = year + '-12-31' # Dec 31st = t_end
# get global consumption data
data = mean_power_df[start_idx:end_idx]
# create time series for the year
        index = pd.date_range(start=t_start, end=t_end, freq=freq)
time_series.append(pd.Series(data=data, index=index))
start_idx = end_idx
# return list of time series
return time_series
###Output
_____no_output_____
###Markdown
Test the resultsBelow, let's construct one time series for each complete year of data, and display the results.
###Code
# test out the code above
# yearly time series for our three complete years
full_years = ['2007', '2008', '2009']
freq='D' # daily recordings
# make time series
time_series = make_time_series(mean_power_df, full_years, freq=freq)
# display first time series
time_series_idx = 0
plt.figure(figsize=(12,6))
time_series[time_series_idx].plot()
plt.show()
###Output
_____no_output_____
###Markdown
--- Splitting in TimeWe'll evaluate our model on a test set of data. For machine learning tasks like classification, we typically create train/test data by randomly splitting examples into different sets. For forecasting it's important to do this train/test split in **time** rather than by individual data points. > In general, we can create training data by taking each of our *complete* time series and leaving off the last `prediction_length` data points to create *training* time series. EXERCISE: Create training time seriesComplete the `create_training_series` function, which should take in our list of complete time series data and return a list of truncated, training time series.* In this example, we want to predict about a month's worth of data, and we'll set `prediction_length` to 30 (days).* To create a training set of data, we'll leave out the last 30 points of *each* of the time series we just generated, so we'll use only the first part as training data. * The **test set contains the complete range** of each time series.
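If you get stuck, one possible approach is simply to slice off the last `prediction_length` points of each series. A minimal sketch is below (named `_sketch` so it doesn't collide with the function you'll complete in the next cell).
```python
# one possible sketch: drop the last `prediction_length` points of each series
def create_training_series_sketch(complete_time_series, prediction_length):
    return [ts[:-prediction_length] for ts in complete_time_series]
```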
###Code
# create truncated, training time series
def create_training_series(complete_time_series, prediction_length):
'''Given a complete list of time series data, create training time series.
:param complete_time_series: A list of all complete time series.
:param prediction_length: The number of points we want to predict.
:return: A list of training time series.
'''
# your code here
pass
# test your code!
# set prediction length
prediction_length = 30 # 30 days ~ a month
time_series_training = create_training_series(time_series, prediction_length)
###Output
_____no_output_____
###Markdown
Training and Test SeriesWe can visualize what these series look like, by plotting the train/test series on the same axis. We should see that the test series contains all of our data in a year, and a training series contains all but the last `prediction_length` points.
###Code
# display train/test time series
time_series_idx = 0
plt.figure(figsize=(15,8))
# test data is the whole time series
time_series[time_series_idx].plot(label='test', lw=3)
# train data is all but the last prediction pts
time_series_training[time_series_idx].plot(label='train', ls=':', lw=3)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Convert to JSON According to the [DeepAR documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html), DeepAR expects to see input training data in a JSON format, with the following fields:* **start**: A string that defines the starting date of the time series, with the format 'YYYY-MM-DD HH:MM:SS'.* **target**: An array of numerical values that represent the time series.* **cat** (optional): A numerical array of categorical features that can be used to encode the groups that the record belongs to. This is useful for finding models per class of item, such as in retail sales, where you might have {'shoes', 'jackets', 'pants'} encoded as categories {0, 1, 2}.The input data should be formatted with one time series per line in a JSON file. Each line looks a bit like a dictionary, for example:```{"start":'2007-01-01 00:00:00', "target": [2.54, 6.3, ...], "cat": [1]}{"start": "2012-01-30 00:00:00", "target": [1.0, -5.0, ...], "cat": [0]} ...```In the above example, each time series has one, associated categorical feature and one time series feature. EXERCISE: Formatting Energy Consumption DataFor our data:* The starting date, "start," will be the index of the first row in a time series, Jan. 1st of that year.* The "target" will be all of the energy consumption values that our time series holds.* We will not use the optional "cat" field.Complete the following utility function, which should convert `pandas.Series` objects into the appropriate JSON strings that DeepAR can consume.
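As a hint, one possible sketch is shown below; the "start" value just needs to be a 'YYYY-MM-DD HH:MM:SS' string, which `str()` on the first pandas `Timestamp` in the index already produces. This is one way to do it, not the only correct answer.
```python
# one possible sketch: pull the first timestamp and the raw values
def series_to_json_obj_sketch(ts):
    return {
        "start": str(ts.index[0]),   # e.g. '2007-01-01 00:00:00'
        "target": list(ts.values)    # all consumption values in the series
    }
```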
###Code
def series_to_json_obj(ts):
'''Returns a dictionary of values in DeepAR, JSON format.
:param ts: A single time series.
:return: A dictionary of values with "start" and "target" keys.
'''
# your code here
pass
# test out the code
ts = time_series[0]
json_obj = series_to_json_obj(ts)
print(json_obj)
###Output
_____no_output_____
###Markdown
Saving Data, LocallyThe next helper function will write one series to a single JSON line, using the new line character '\n'. The data is also encoded and written to a filename that we specify.
###Code
# import json for formatting data
import json
import os # and os for saving
def write_json_dataset(time_series, filename):
with open(filename, 'wb') as f:
# for each of our times series, there is one JSON line
for ts in time_series:
json_line = json.dumps(series_to_json_obj(ts)) + '\n'
json_line = json_line.encode('utf-8')
f.write(json_line)
print(filename + ' saved.')
# save this data to a local directory
data_dir = 'json_energy_data'
# make data dir, if it does not exist
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# directories to save train/test data
train_key = os.path.join(data_dir, 'train.json')
test_key = os.path.join(data_dir, 'test.json')
# write train/test JSON files
write_json_dataset(time_series_training, train_key)
write_json_dataset(time_series, test_key)
###Output
_____no_output_____
###Markdown
--- Uploading Data to S3Next, to make this data accessible to an estimator, I'll upload it to S3. Sagemaker resourcesLet's start by specifying:* The sagemaker role and session for training a model.* A default S3 bucket where we can save our training, test, and model data.
###Code
import boto3
import sagemaker
from sagemaker import get_execution_role
# session, role, bucket
sagemaker_session = sagemaker.Session()
role = get_execution_role()
bucket = sagemaker_session.default_bucket()
###Output
_____no_output_____
###Markdown
EXERCISE: Upload *both* training and test JSON files to S3Specify *unique* train and test prefixes that define the location of that data in S3.* Upload training data to a location in S3, and save that location to `train_path`* Upload test data to a location in S3, and save that location to `test_path`
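One possible sketch uses the session's `upload_data` helper; the prefix below is just an example name, and any unique prefix will work.
```python
# one possible sketch: upload both JSON files under a shared prefix
prefix = 'deepar-energy-consumption'   # example prefix

train_path = sagemaker_session.upload_data(train_key, bucket=bucket, key_prefix=prefix)
test_path = sagemaker_session.upload_data(test_key, bucket=bucket, key_prefix=prefix)
```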
###Code
# suggested that you set prefixes for directories in S3
# upload data to S3, and save unique locations
train_path = None
test_path = None
# check locations
print('Training data is stored in: '+ train_path)
print('Test data is stored in: '+ test_path)
###Output
_____no_output_____
###Markdown
--- Training a DeepAR EstimatorSome estimators have specific, SageMaker constructors, but not all. Instead you can create a base `Estimator` and pass in the specific image (or container) that holds a specific model.Next, we configure the container image to be used for the region that we are running in.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
image_name = get_image_uri(boto3.Session().region_name, # get the region
'forecasting-deepar') # specify image
###Output
_____no_output_____
###Markdown
EXERCISE: Instantiate an Estimator You can now define the estimator that will launch the training job. A generic Estimator will be defined by the usual constructor arguments and an `image_name`. > You can take a look at the [estimator source code](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/estimator.py#L595) to view specifics.
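One possible sketch is shown below, using the v1-style argument names (`image_name`, `train_instance_count`, `train_instance_type`) that this notebook's SDK version expects; in SageMaker Python SDK v2 these become `image_uri`, `instance_count`, and `instance_type`. The instance type and output path are example choices, not requirements.
```python
# one possible sketch: a generic Estimator pointed at the DeepAR image
s3_output_path = 's3://{}/deepar-energy-consumption/output'.format(bucket)

estimator_sketch = Estimator(image_name=image_name,
                             role=role,
                             train_instance_count=1,
                             train_instance_type='ml.c4.xlarge',
                             output_path=s3_output_path,
                             sagemaker_session=sagemaker_session)
```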
###Code
from sagemaker.estimator import Estimator
# instantiate a DeepAR estimator
estimator = None
###Output
_____no_output_____
###Markdown
Setting HyperparametersNext, we need to define some DeepAR hyperparameters that define the model size and training behavior. Values for the epochs, frequency, prediction length, and context length are required.* **epochs**: The maximum number of times to pass over the data when training.* **time_freq**: The granularity of the time series in the dataset ('D' for daily).* **prediction_length**: A string; the number of time steps (based off the unit of frequency) that the model is trained to predict. * **context_length**: The number of time points that the model gets to see *before* making a prediction. Context LengthTypically, it is recommended that you start with a `context_length`=`prediction_length`. This is because a DeepAR model also receives "lagged" inputs from the target time series, which allow the model to capture long-term dependencies. For example, a daily time series can have yearly seasonality and DeepAR automatically includes a lag of one year. So, the context length can be shorter than a year, and the model will still be able to capture this seasonality. The lag values that the model picks depend on the frequency of the time series. For example, lag values for daily frequency are the previous week, 2 weeks, 3 weeks, 4 weeks, and year. You can read more about this in the [DeepAR "how it works" documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_how-it-works.html). Optional HyperparametersYou can also configure optional hyperparameters to further tune your model. These include parameters like the number of layers in our RNN model, the number of cells per layer, the likelihood function, and the training options, such as batch size and learning rate. For an exhaustive list of all the different DeepAR hyperparameters you can refer to the DeepAR [hyperparameter documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_hyperparameters.html).
###Code
freq='D'
context_length=30 # same as prediction_length
hyperparameters = {
"epochs": "50",
"time_freq": freq,
"prediction_length": str(prediction_length),
"context_length": str(context_length),
"num_cells": "50",
"num_layers": "2",
"mini_batch_size": "128",
"learning_rate": "0.001",
"early_stopping_patience": "10"
}
# set the hyperparams
estimator.set_hyperparameters(**hyperparameters)
###Output
_____no_output_____
###Markdown
Training JobNow, we are ready to launch the training job! SageMaker will start an EC2 instance, download the data from S3, start training the model and save the trained model.If you provide the `test` data channel, as we do in this example, DeepAR will also calculate accuracy metrics for the trained model on this test data set. This is done by predicting the last `prediction_length` points of each time series in the test set and comparing this to the *actual* value of the time series. The computed error metrics will be included as part of the log output.The next cell may take a few minutes to complete, depending on data size, model complexity, and training options.
###Code
%%time
# train and test channels
data_channels = {
"train": train_path,
"test": test_path
}
# fit the estimator
estimator.fit(inputs=data_channels)
###Output
_____no_output_____
###Markdown
Deploy and Create a PredictorNow that we have trained a model, we can use it to perform predictions by deploying it to a predictor endpoint.Remember to **delete the endpoint** at the end of this notebook. A cell at the very bottom of this notebook will be provided, but it is always good to keep front of mind.
###Code
%%time
# create a predictor
predictor = estimator.deploy(
initial_instance_count=1,
instance_type='ml.t2.medium',
content_type="application/json" # specify that it will accept/produce JSON
)
###Output
_____no_output_____
###Markdown
--- Generating PredictionsAccording to the [inference format](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar-in-formats.html) for DeepAR, the `predictor` expects to see input data in a JSON format, with the following keys:* **instances**: A list of JSON-formatted time series that should be forecast by the model.* **configuration** (optional): A dictionary of configuration information for the type of response desired by the request.Within configuration the following keys can be configured:* **num_samples**: An integer specifying the number of samples that the model generates when making a probabilistic prediction.* **output_types**: A list specifying the type of response. We'll ask for **quantiles**, which look at the list of num_samples generated by the model, and generate [quantile estimates](https://en.wikipedia.org/wiki/Quantile) for each time point based on these values.* **quantiles**: A list that specified which quantiles estimates are generated and returned in the response.Below is an example of what a JSON query to a DeepAR model endpoint might look like.```{ "instances": [ { "start": "2009-11-01 00:00:00", "target": [4.0, 10.0, 50.0, 100.0, 113.0] }, { "start": "1999-01-30", "target": [2.0, 1.0] } ], "configuration": { "num_samples": 50, "output_types": ["quantiles"], "quantiles": ["0.5", "0.9"] }}``` JSON Prediction RequestThe code below accepts a **list** of time series as input and some configuration parameters. It then formats that series into a JSON instance and converts the input into an appropriately formatted JSON_input.
###Code
def json_predictor_input(input_ts, num_samples=50, quantiles=['0.1', '0.5', '0.9']):
'''Accepts a list of input time series and produces a formatted input.
:input_ts: An list of input time series.
:num_samples: Number of samples to calculate metrics with.
:quantiles: A list of quantiles to return in the predicted output.
:return: The JSON-formatted input.
'''
# request data is made of JSON objects (instances)
# and an output configuration that details the type of data/quantiles we want
instances = []
for k in range(len(input_ts)):
# get JSON objects for input time series
instances.append(series_to_json_obj(input_ts[k]))
# specify the output quantiles and samples
configuration = {"num_samples": num_samples,
"output_types": ["quantiles"],
"quantiles": quantiles}
request_data = {"instances": instances,
"configuration": configuration}
json_request = json.dumps(request_data).encode('utf-8')
return json_request
###Output
_____no_output_____
###Markdown
Get a PredictionWe can then use this function to get a prediction for a formatted time series!In the next cell, I'm getting an input time series and known target, and passing the formatted input into the predictor endpoint to get a resultant prediction.
###Code
# get all input and target (test) time series
input_ts = time_series_training
target_ts = time_series
# get formatted input time series
json_input_ts = json_predictor_input(input_ts)
# get the prediction from the predictor
json_prediction = predictor.predict(json_input_ts)
print(json_prediction)
###Output
_____no_output_____
###Markdown
Decoding PredictionsThe predictor returns a JSON-formatted prediction, and so we need to extract the predictions and quantile data that we want for visualizing the result. The function below reads in a JSON-formatted prediction and produces a list of predictions in each quantile.
###Code
# helper function to decode JSON prediction
def decode_prediction(prediction, encoding='utf-8'):
'''Accepts a JSON prediction and returns a list of prediction data.
'''
prediction_data = json.loads(prediction.decode(encoding))
prediction_list = []
for k in range(len(prediction_data['predictions'])):
prediction_list.append(pd.DataFrame(data=prediction_data['predictions'][k]['quantiles']))
return prediction_list
# get quantiles/predictions
prediction_list = decode_prediction(json_prediction)
# should get a list of 30 predictions
# with corresponding quantile values
print(prediction_list[0])
###Output
_____no_output_____
###Markdown
Display the Results!The quantile data will give us all we need to see the results of our prediction.* Quantiles 0.1 and 0.9 represent higher and lower bounds for the predicted values.* Quantile 0.5 represents the median of all sample predictions.
###Code
# display the prediction median against the actual data
def display_quantiles(prediction_list, target_ts=None):
# show predictions for all input ts
for k in range(len(prediction_list)):
plt.figure(figsize=(12,6))
# get the target month of data
if target_ts is not None:
target = target_ts[k][-prediction_length:]
plt.plot(range(len(target)), target, label='target')
# get the quantile values at 10 and 90%
p10 = prediction_list[k]['0.1']
p90 = prediction_list[k]['0.9']
# fill the 80% confidence interval
plt.fill_between(p10.index, p10, p90, color='y', alpha=0.5, label='80% confidence interval')
# plot the median prediction line
prediction_list[k]['0.5'].plot(label='prediction median')
plt.legend()
plt.show()
# display predictions
display_quantiles(prediction_list, target_ts)
###Output
_____no_output_____
###Markdown
Predicting the FutureRecall that we did not give our model any data about 2010, but let's see if it can predict the energy consumption given **no target**, only a known start date! EXERCISE: Format a request for a "future" predictionCreate a formatted input to send to the deployed `predictor` passing in my usual parameters for "configuration". The "instances" will, in this case, just be one instance, defined by the following:* **start**: The start time will be a timestamp that you specify. To predict the first 30 days of 2010, start on Jan. 1st, '2010-01-01'.* **target**: The target will be an empty list because this year has no complete, associated time series; we specifically withheld that information from our model, for testing purposes.
```
{"start": start_time, "target": []}  # empty target
```
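One possible sketch of `request_data` is below; it reuses the same "configuration" fields as the earlier prediction request and supplies a single instance with an empty target (using the `start_time` string built in the next cell).
```python
# one possible sketch: a single instance with an empty target,
# plus the same configuration settings used earlier
request_data = {
    "instances": [
        {"start": start_time, "target": []}   # empty target = no known 2010 values
    ],
    "configuration": {
        "num_samples": 50,
        "output_types": ["quantiles"],
        "quantiles": ["0.1", "0.5", "0.9"]
    }
}
```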
###Code
# Starting my prediction at the beginning of 2010
start_date = '2010-01-01'
timestamp = '00:00:00'
# formatting start_date
start_time = start_date +' '+ timestamp
# format the request_data
# with "instances" and "configuration"
request_data = None
# create JSON input
json_input = json.dumps(request_data).encode('utf-8')
print('Requesting prediction for '+start_time)
###Output
_____no_output_____
###Markdown
Then get and decode the prediction response, as usual.
###Code
# get prediction response
json_prediction = predictor.predict(json_input)
prediction_2010 = decode_prediction(json_prediction)
###Output
_____no_output_____
###Markdown
Finally, I'll compare the predictions to a known target sequence. This target will come from a time series for the 2010 data, which I'm creating below.
###Code
# create 2010 time series
ts_2010 = []
# get global consumption data
# index 1112 is where the 2010 data starts
data_2010 = mean_power_df.values[1112:]
index = pd.date_range(start=start_date, periods=len(data_2010), freq='D')
ts_2010.append(pd.Series(data=data_2010, index=index))
# range of actual data to compare
start_idx=0 # days since Jan 1st 2010
end_idx=start_idx+prediction_length
# get target data
target_2010_ts = [ts_2010[0][start_idx:end_idx]]
# display predictions
display_quantiles(prediction_2010, target_2010_ts)
###Output
_____no_output_____
###Markdown
Delete the EndpointTry your code out on different time series. You may want to tweak your DeepAR hyperparameters and see if you can improve the performance of this predictor.When you're done with evaluating the predictor (any predictor), make sure to delete the endpoint.
###Code
## TODO: delete the endpoint
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Time Series Forecasting A time series is data collected periodically, over time. Time series forecasting is the task of predicting future data points, given some historical data. It is commonly used in a variety of tasks from weather forecasting, retail and sales forecasting, stock market prediction, and in behavior prediction (such as predicting the flow of car traffic over a day). There is a lot of time series data out there, and recognizing patterns in that data is an active area of machine learning research!In this notebook, we'll focus on one method for finding time-based patterns: using SageMaker's supervised learning model, [DeepAR](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html). DeepARDeepAR utilizes a recurrent neural network (RNN), which is designed to accept some sequence of data points as historical input and produce a predicted sequence of points. So, how does this model learn?During training, you'll provide a training dataset (made of several time series) to a DeepAR estimator. The estimator looks at *all* the training time series and tries to identify similarities across them. It trains by randomly sampling **training examples** from the training time series. * Each training example consists of a pair of adjacent **context** and **prediction** windows of fixed, predefined lengths. * The `context_length` parameter controls how far in the *past* the model can see. * The `prediction_length` parameter controls how far in the *future* predictions can be made. * You can find more details, in [this documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_how-it-works.html).> Since DeepAR trains on several time series, it is well suited for data that exhibit **recurring patterns**.In any forecasting task, you should choose the context window to provide enough, **relevant** information to a model so that it can produce accurate predictions. In general, data closest to the prediction time frame will contain the information that is most influential in defining that prediction. In many forecasting applications, like forecasting sales month-to-month, the context and prediction windows will be the same size, but sometimes it will be useful to have a larger context window to notice longer-term patterns in data. Energy Consumption DataThe data we'll be working with in this notebook is data about household electric power consumption, over the globe. The dataset is originally taken from [Kaggle](https://www.kaggle.com/uciml/electric-power-consumption-data-set), and represents power consumption collected over several years from 2006 to 2010. With such a large dataset, we can aim to predict over long periods of time, over days, weeks or months of time. Predicting energy consumption can be a useful task for a variety of reasons including determining seasonal prices for power consumption and efficiently delivering power to people, according to their predicted usage. **Interesting read**: An inversely-related project, recently done by Google and DeepMind, uses machine learning to predict the *generation* of power by wind turbines and efficiently deliver power to the grid. You can read about that research, [in this post](https://deepmind.com/blog/machine-learning-can-boost-value-wind-energy/). 
Machine Learning WorkflowThis notebook approaches time series forecasting in a number of steps:* Loading and exploring the data* Creating training and test sets of time series* Formatting data as JSON files and uploading to S3* Instantiating and training a DeepAR estimator* Deploying a model and creating a predictor* Evaluating the predictor ---Let's start by loading in the usual resources.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load and Explore the DataWe'll be loading in some data about global energy consumption, collected over a few years. The below cell downloads and unzips this data, giving you one text file of data, `household_power_consumption.txt`.
###Code
! wget https://s3.amazonaws.com/video.udacity-data.com/topher/2019/March/5c88a3f1_household-electric-power-consumption/household-electric-power-consumption.zip
! unzip household-electric-power-consumption
###Output
--2020-10-15 13:36:15-- https://s3.amazonaws.com/video.udacity-data.com/topher/2019/March/5c88a3f1_household-electric-power-consumption/household-electric-power-consumption.zip
Resolving s3.amazonaws.com (s3.amazonaws.com)... 54.231.49.220
Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.49.220|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 20805339 (20M) [application/zip]
Saving to: ‘household-electric-power-consumption.zip’
household-electric- 100%[===================>] 19.84M 7.42MB/s in 2.7s
2020-10-15 13:36:18 (7.42 MB/s) - ‘household-electric-power-consumption.zip’ saved [20805339/20805339]
Archive: household-electric-power-consumption.zip
inflating: household_power_consumption.txt
###Markdown
Read in the `.txt` FileThe next cell displays the first few lines in the text file, so we can see how it is formatted.
###Code
# display first ten lines of text data
n_lines = 10
with open('household_power_consumption.txt') as file:
head = [next(file) for line in range(n_lines)]
display(head)
###Output
_____no_output_____
###Markdown
Pre-Process the DataThe 'household_power_consumption.txt' file has the following attributes: * Each data point has a date and time (hour:minute:second) of recording * The various data features are separated by semicolons (;) * Some values are 'nan' or '?', and we'll treat these both as `NaN` values Managing `NaN` valuesThis DataFrame does include some data points that have missing values. So far, we've mainly been dropping these values, but there are other ways to handle `NaN` values, as well. One technique is to just fill the missing column values with the **mean** value from that column; this way the added value is likely to be realistic.I've provided some helper functions in `txt_preprocessing.py` that will help to load in the original text file as a DataFrame *and* fill in any `NaN` values, per column, with the mean feature value. This technique will be fine for long-term forecasting; if I wanted to do an hourly analysis and prediction, I'd consider dropping the `NaN` values or taking an average over a small, sliding window rather than an entire column of data.**Below, I'm reading the file in as a DataFrame and filling `NaN` values with feature-level averages.**
###Code
import txt_preprocessing as pprocess
# create df from text file
initial_df = pprocess.create_df('household_power_consumption.txt', sep=';')
# fill NaN column values with *average* column value
df = pprocess.fill_nan_with_mean(initial_df)
# print some stats about the data
print('Data shape: ', df.shape)
df.head()
df.loc[[1,3,5],['Global_active_power','Sub_metering_2']]
###Output
_____no_output_____
###Markdown
Global Active Power In this example, we'll want to predict the global active power, which is the household minute-averaged active power (kilowatt), measured across the globe. So, below, I am getting just that column of data and displaying the resultant plot.
###Code
# Select Global active power data
power_df = df['Global_active_power'].copy()
print(power_df.shape)
# display the data
plt.figure(figsize=(12,6))
# all data points
power_df.plot(title='Global active power', color='blue')
plt.show()
###Output
_____no_output_____
###Markdown
Since the data is recorded each minute, the above plot contains *a lot* of values. So, I'm also showing just a slice of data, below.
###Code
# can plot a slice of hourly data
end_mins = 1440 # 1440 mins = 1 day
plt.figure(figsize=(12,6))
power_df[0:end_mins].plot(title='Global active power, over one day', color='blue')
plt.show()
###Output
_____no_output_____
###Markdown
Hourly vs DailyThere is a lot of data, collected every minute, and so I could go one of two ways with my analysis:1. Create many, short time series, say a week or so long, in which I record energy consumption every hour, and try to predict the energy consumption over the following hours or days.2. Create fewer, long time series with data recorded daily that I could use to predict usage in the following weeks or months.Both tasks are interesting! It depends on whether you want to predict time patterns over a day/week or over a longer time period, like a month. With the amount of data I have, I think it would be interesting to see longer, *recurring* trends that happen over several months or over a year. So, I will resample the 'Global active power' values, recording **daily** data points as averages over 24-hr periods.> I can resample according to a specified frequency, by utilizing pandas [time series tools](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html), which allow me to sample at points like every hour ('H') or day ('D'), etc.
###Code
# resample over day (D)
freq = 'D'
# calculate the mean active power for a day
mean_power_df = power_df.resample(freq).mean()
# display the mean values
plt.figure(figsize=(15,8))
mean_power_df.plot(title='Global active power, mean per day', color='blue')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
In this plot, we can see that there are some interesting trends that occur over each year. It seems that there are spikes of energy consumption around the end/beginning of each year, which correspond with heat and light usage being higher in winter months. We also see a dip in usage around August, when global temperatures are typically higher.The data is still not very smooth, but it shows noticeable trends, and so makes for a good use case for machine learning models that may be able to recognize these patterns. --- Create Time Series My goal will be to take full years of data, from 2007-2009, and see if I can use it to accurately predict the average Global active power usage for the next several months in 2010!Next, let's make one time series for each complete year of data. This is just a design decision, and I am deciding to use full years of data, starting in January of 2007 because there are not that many data points in 2006 and this split will make it easier to handle leap years; I could have also decided to construct time series starting at the first collected data point, just by changing `t_start` and `t_end` in the function below.The function `make_time_series` will create one pandas `Series` for each year in the passed-in list `['2007', '2008', '2009']`.* All of the time series will start at the same time point `t_start` (or t0). * When preparing data, it's important to use a consistent start point for each time series; DeepAR uses this time-point as a frame of reference, which enables it to learn recurrent patterns e.g. that weekdays behave differently from weekends or that Summer is different than Winter. * You can change the start and end indices to define any time series you create.* We should account for leap years, like 2008, in the creation of time series.* Generally, we create `Series` by getting the relevant global consumption data (from the DataFrame) and date indices.
```
# get global consumption data
data = mean_power_df[start_idx:end_idx]

# create time series for the year
index = pd.date_range(start=t_start, end=t_end, freq='D')
time_series.append(pd.Series(data=data, index=index))
```
###Code
def make_time_series(mean_power_df, years, freq='D', start_idx=16):
'''Creates as many time series as there are complete years. This code
accounts for the leap year, 2008.
:param mean_power_df: A dataframe of global power consumption, averaged by day.
This dataframe should also be indexed by a datetime.
:param years: A list of years to make time series out of, ex. ['2007', '2008'].
:param freq: The frequency of data recording (D = daily)
:param start_idx: The starting dataframe index of the first point in the first time series.
The default, 16, points to '2007-01-01'.
:return: A list of pd.Series(), time series data.
'''
# store time series
time_series = []
# store leap year in this dataset
leap = '2008'
# create time series for each year in years
for i in range(len(years)):
year = years[i]
if(year == leap):
end_idx = start_idx+366
else:
end_idx = start_idx+365
# create start and end datetimes
t_start = year + '-01-01' # Jan 1st of each year = t_start
t_end = year + '-12-31' # Dec 31st = t_end
# get global consumption data
data = mean_power_df[start_idx:end_idx]
# create time series for the year
index = pd.date_range(start=t_start, end=t_end, freq=freq)
time_series.append(pd.Series(data=data, index=index))
start_idx = end_idx
# return list of time series
return time_series
###Output
_____no_output_____
###Markdown
Test the resultsBelow, let's construct one time series for each complete year of data, and display the results.
###Code
# test out the code above
# yearly time series for our three complete years
full_years = ['2007', '2008', '2009']  # 2010 is not a complete year of data
freq='D' # daily recordings
# make time series
time_series = make_time_series(mean_power_df, full_years, freq=freq)
# display one of the yearly time series (index 2 = 2009)
time_series_idx = 2
plt.figure(figsize=(12,6))
time_series[time_series_idx].plot()
plt.show()
###Output
_____no_output_____
###Markdown
--- Splitting in TimeWe'll evaluate our model on a test set of data. For machine learning tasks like classification, we typically create train/test data by randomly splitting examples into different sets. For forecasting it's important to do this train/test split in **time** rather than by individual data points. > In general, we can create training data by taking each of our *complete* time series and leaving off the last `prediction_length` data points to create *training* time series. EXERCISE: Create training time seriesComplete the `create_training_series` function, which should take in our list of complete time series data and return a list of truncated, training time series.* In this example, we want to predict about a month's worth of data, and we'll set `prediction_length` to 30 (days).* To create a training set of data, we'll leave out the last 30 points of *each* of the time series we just generated, so we'll use only the first part as training data. * The **test set contains the complete range** of each time series.
###Code
# create truncated, training time series
def create_training_series(complete_time_series, prediction_length):
'''Given a complete list of time series data, create training time series.
:param complete_time_series: A list of all complete time series.
:param prediction_length: The number of points we want to predict.
:return: A list of training time series.
'''
# your code here
training_time_series=[]
    # truncate each complete series, leaving off the last prediction_length points
for i in range(len(complete_time_series)):
data=complete_time_series[i]
training_time_series.append(data[:len(data)-prediction_length])
return training_time_series
# test your code!
# set prediction length
prediction_length = 30 # 30 days ~ a month
time_series_training = create_training_series(time_series, prediction_length)
###Output
_____no_output_____
###Markdown
Training and Test SeriesWe can visualize what these series look like, by plotting the train/test series on the same axis. We should see that the test series contains all of our data in a year, and a training series contains all but the last `prediction_length` points.
###Code
# display train/test time series
time_series_idx = 0
plt.figure(figsize=(15,8))
# test data is the whole time series
time_series[time_series_idx].plot(label='test', lw=3)
# train data is all but the last prediction pts
time_series_training[time_series_idx].plot(label='train', ls=':', lw=3)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Convert to JSON According to the [DeepAR documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html), DeepAR expects to see input training data in a JSON format, with the following fields:* **start**: A string that defines the starting date of the time series, with the format 'YYYY-MM-DD HH:MM:SS'.* **target**: An array of numerical values that represent the time series.* **cat** (optional): A numerical array of categorical features that can be used to encode the groups that the record belongs to. This is useful for finding models per class of item, such as in retail sales, where you might have {'shoes', 'jackets', 'pants'} encoded as categories {0, 1, 2}.The input data should be formatted with one time series per line in a JSON file. Each line looks a bit like a dictionary, for example:```{"start":'2007-01-01 00:00:00', "target": [2.54, 6.3, ...], "cat": [1]}{"start": "2012-01-30 00:00:00", "target": [1.0, -5.0, ...], "cat": [0]} ...```In the above example, each time series has one, associated categorical feature and one time series feature. EXERCISE: Formatting Energy Consumption DataFor our data:* The starting date, "start," will be the index of the first row in a time series, Jan. 1st of that year.* The "target" will be all of the energy consumption values that our time series holds.* We will not use the optional "cat" field.Complete the following utility function, which should convert `pandas.Series` objects into the appropriate JSON strings that DeepAR can consume.
###Code
def series_to_json_obj(ts):
'''Returns a dictionary of values in DeepAR, JSON format.
:param ts: A single time series.
:return: A dictionary of values with "start" and "target" keys.
'''
# your code here
json_obj ={}
json_obj["start"]=ts.index[0].strftime('%Y-%m-%d %H:%M:%S')
json_obj["target"]=list(ts.values) #ts.array
return json_obj
# test out the code
ts = time_series[0]
json_obj = series_to_json_obj(ts)
print(json_obj)
###Output
{'start': '2007-01-01 00:00:00', 'target': [1.9090305555555664, 0.8814138888888893, 0.7042041666666671, 2.263480555555549, 1.8842805555555506, 1.047484722222222, 1.6997361111111127, 1.5564999999999982, 1.2979541666666659, 1.496388888888886, 1.5661069444444446, 1.0147888888888905, 2.2130652777777766, 2.0895191771086794, 1.4921374999999986, 1.1711138888888888, 1.9775611111111107, 1.2649041666666674, 1.0280833333333352, 2.176202777777775, 2.3661541666666652, 1.5142319444444445, 1.2344722222222224, 2.07489861111111, 1.1085722222222205, 1.1235916666666654, 1.419494444444445, 2.146733065997562, 1.3768500000000001, 1.1859708333333323, 1.6414944444444453, 1.2671944444444443, 1.1581499999999982, 2.798418055555553, 2.4971805555555573, 1.1425555555555553, 0.8843611111111115, 1.6166791666666673, 1.2509819444444445, 1.1251375000000008, 1.9648111111111108, 2.4800194444444434, 1.3038958333333335, 0.9823236111111113, 1.7351888888888864, 1.3787124999999985, 1.1823111111111093, 1.4423000000000017, 2.659556944444449, 1.5184819444444428, 2.187450000000003, 1.4394958333333352, 2.3414314097729143, 0.9121388888888906, 0.4964736111111122, 0.3484305555555563, 0.3947805555555556, 0.3597999999999982, 0.3616486111111118, 0.3594194444444455, 0.358116666666667, 0.5688347222222228, 1.451875000000001, 1.8474527777777783, 0.8656249999999984, 1.6494486111111129, 1.091758333333331, 0.8837583333333331, 1.7254944444444502, 2.417108333333334, 1.354630555555556, 0.8959333333333357, 1.2453916666666665, 1.308898611111112, 1.1819819444444437, 1.4083958333333346, 1.6465597222222195, 1.4948041666666658, 1.448769444444444, 1.4875638888888894, 1.0104013888888888, 0.9090333333333334, 2.0614097222222245, 2.3175108437753487, 0.9833902777777774, 0.7674583333333332, 1.620762500000001, 1.1044486111111105, 0.9738847222222207, 2.437159722222223, 1.9346888888888931, 1.3616513888888915, 1.1408472222222201, 1.4249083333333314, 1.2806097222222221, 1.1119027777777781, 1.079113888888888, 1.1172333333333315, 0.6950833333333352, 0.5398361111111113, 0.616306944444446, 0.3734069444444451, 0.3826541666666676, 0.3849083333333345, 0.5042805555555557, 0.6613819444444443, 0.849815277777779, 0.6932263888888891, 0.7143458333333358, 0.571236111111111, 1.1622611111111096, 1.5511902777777784, 0.7323374999999995, 0.7167361111111107, 0.8778902777777773, 0.8857402777777782, 0.7599527777777767, 1.0914859283295342, 1.091615036500725, 0.9472065219004123, 1.1554569444444436, 0.6968652777777783, 0.7571097222222218, 0.6931444444444462, 1.0854527777777792, 1.4597944444444448, 1.1414333333333337, 1.242563888888889, 1.1567611111111105, 0.49158611111111133, 0.6799680555555554, 1.0010874999999977, 1.1352347222222203, 1.0192930555555566, 0.62805, 1.1756555555555552, 0.8554277777777767, 1.1330069444444428, 0.9926680555555539, 1.3668944444444437, 0.7778208333333345, 0.7867027777777772, 1.0173166666666695, 0.8016583333333344, 1.1209861111111095, 1.0501722222222214, 1.4956999999999978, 1.137130555555556, 0.7370416666666654, 1.1179624999999997, 0.6517708333333337, 0.7263955659975689, 1.030806944444443, 1.463906944444444, 0.6330680555555547, 1.0131194444444456, 1.1768955659975675, 0.6840291666666676, 1.4916986111111106, 0.7647884523521012, 0.9311194444444444, 0.5518430555555551, 0.6380319444444437, 0.6439388888888893, 0.8743638888888887, 0.6872250000000003, 1.4012236111111112, 1.352145833333335, 0.4711625000000016, 0.5864883542173611, 0.7460319444444469, 0.560379166666666, 0.39325972222222155, 0.33164305555555645, 0.49628888888888706, 0.7445847222222217, 0.6561861111111094, 
1.0283513888888873, 0.8996097222222214, 0.906451121553125, 0.9273013888888878, 0.7129611111111108, 0.6715499999999989, 0.7360499999999994, 0.6968125000000016, 0.734229166666665, 0.7427249999999991, 0.9431986111111104, 0.7563055555555565, 0.8195541666666663, 0.6307805555555537, 0.5016333333333299, 0.7427652777777777, 0.5516541666666681, 0.8945624999999994, 0.7353708019063109, 0.5928291666666665, 0.4997444444444466, 0.52575138888889, 0.47075416666666736, 0.7711263888888894, 0.7362833333333343, 0.9561566771086814, 0.6882069444444442, 0.5731138888888893, 0.3281666666666694, 0.6420138888888899, 0.8350791666666663, 1.1262930555555566, 0.2229972222222228, 0.3241777777777761, 0.5641194444444436, 0.7522652192823055, 0.7079277777777766, 0.7384472222222229, 0.7951319444444429, 0.5465958333333305, 0.7567499999999994, 1.0302638888888895, 0.7815847222222178, 0.7442222222222233, 0.7138958333333341, 0.7580902777777812, 0.8624333333333364, 0.8007027777777773, 0.6536041666666672, 0.8276805555555493, 0.6656222222222172, 0.7013138888888948, 0.2416388888888889, 0.9877041666666672, 0.9297513888888894, 0.44621944444444595, 0.8401444444444471, 0.7724388888888885, 1.1509066771086796, 0.7992222222222208, 0.9131291666666661, 0.8414791666666658, 0.692363888888888, 0.5194388888888899, 0.83504027777778, 0.8887652777777755, 1.1815874999999996, 1.154491666666666, 0.7214736111111109, 1.1006555555555562, 0.8587305555555576, 0.6260236111111122, 0.9281222222222196, 1.0206611111111112, 1.1663208333333344, 0.7759555555555581, 0.6830361111111114, 0.8942805555555555, 0.796098611111113, 0.7885208333333332, 0.7606333333333342, 1.1473777777777783, 1.0147124999999988, 0.8107777777777783, 0.9018638888888884, 0.871897222222219, 0.860263888888889, 0.8167861111111107, 0.9847013888888875, 0.9406458333333346, 1.2252375000000006, 1.266977243106251, 0.9670833333333325, 1.1670125000000016, 1.5948069444444422, 1.0529791666666668, 1.1950847222222218, 1.5269888888888836, 0.8892041666666652, 1.0544847222222216, 0.9681194444444439, 1.2196111111111123, 1.4625791666666648, 0.7605847222222217, 1.1721236111111104, 0.8103972222222212, 1.2030875, 1.3295166666666667, 1.3380055555555561, 1.4314250000000006, 0.8793958333333348, 1.3641958333333348, 1.0950902777777778, 1.054534722222222, 1.448366666666666, 1.15294861111111, 1.3485694444444438, 1.147844444444447, 1.3589105764395841, 1.2955805555555562, 1.2874430555555554, 0.9983111111111123, 1.240061111111108, 0.3556652777777766, 0.37876944444444327, 0.5631999999999971, 0.8911180555555571, 0.34459444444444476, 1.002501388888888, 0.34204861111111057, 1.0317263888888877, 1.0420777777777788, 1.446095833333335, 1.2810819444444468, 1.042402777777777, 1.507795833333338, 1.8136111111111097, 1.3105027777777782, 1.4973236111111095, 1.5377763888888916, 1.4222180555555546, 1.0361027777777765, 1.6039958333333328, 1.812565277777774, 1.516568055555557, 1.1969819444444456, 1.4554555555555526, 1.2173108437753466, 1.3419291666666693, 1.751818055555553, 1.318787500000002, 0.9691652777777785, 1.0900805555555562, 1.9451472222222208, 1.1423194444444453, 1.1946677882197925, 1.6192555555555597, 2.177043055555557, 2.0459638888888865, 1.2236527777777777, 1.4358583333333348, 1.4952222222222211, 1.727731944444445, 1.2284833333333336, 1.537086111111109, 1.579572222222225, 1.2213569444444459, 1.6007666666666638, 1.1028208333333336, 1.2831569444444444, 1.3713944444444444, 1.9006083333333303, 1.9182069444444376, 1.2487205659975713, 1.8216625000000024, 1.525036111111112, 1.3539986111111106, 1.376788888888891, 1.8133777777777769, 
1.6729499999999993, 1.5679499999999997, 1.8715055555555555, 1.7918611111111102, 1.7584708333333323, 2.161841666666663, 2.2909416666666647, 1.7770249999999976, 1.5392652777777778]}
###Markdown
Saving Data, LocallyThe next helper function will write one series to a single JSON line, using the new line character '\n'. The data is also encoded and written to a filename that we specify.
###Code
# import json for formatting data
import json
import os # and os for saving
def write_json_dataset(time_series, filename):
with open(filename, 'wb') as f:
# for each of our times series, there is one JSON line
for ts in time_series:
json_line = json.dumps(series_to_json_obj(ts)) + '\n'
json_line = json_line.encode('utf-8')
f.write(json_line)
print(filename + ' saved.')
# save this data to a local directory
data_dir = 'json_energy_data'
# make data dir, if it does not exist
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# directories to save train/test data
train_key = os.path.join(data_dir, 'train.json')
test_key = os.path.join(data_dir, 'test.json')
# write train/test JSON files
write_json_dataset(time_series_training, train_key)
write_json_dataset(time_series, test_key)
###Output
json_energy_data/train.json saved.
json_energy_data/test.json saved.
###Markdown
--- Uploading Data to S3Next, to make this data accessible to an estimator, I'll upload it to S3. Sagemaker resourcesLet's start by specifying:* The sagemaker role and session for training a model.* A default S3 bucket where we can save our training, test, and model data.
###Code
import boto3
import sagemaker
from sagemaker import get_execution_role
# session, role, bucket
sagemaker_session = sagemaker.Session()
role = get_execution_role()
bucket = sagemaker_session.default_bucket()
###Output
_____no_output_____
###Markdown
EXERCISE: Upload *both* training and test JSON files to S3Specify *unique* train and test prefixes that define the location of that data in S3.* Upload training data to a location in S3, and save that location to `train_path`* Upload test data to a location in S3, and save that location to `test_path`
###Code
# suggested that you set prefixes for directories in S3
# upload data to S3, and save unique locations
prefix = 'sagemaker/energy_consumption'
train_path = sagemaker_session.upload_data(path=train_key, bucket=bucket, key_prefix=prefix)
test_path = sagemaker_session.upload_data(path=test_key, bucket=bucket, key_prefix=prefix)
# check locations
print('Training data is stored in: '+ train_path)
print('Test data is stored in: '+ test_path)
###Output
Training data is stored in: s3://sagemaker-eu-west-1-168250291396/sagemaker/energy_consumption/train.json
Test data is stored in: s3://sagemaker-eu-west-1-168250291396/sagemaker/energy_consumption/test.json
###Markdown
--- Training a DeepAR EstimatorSome estimators have specific, SageMaker constructors, but not all. Instead you can create a base `Estimator` and pass in the specific image (or container) that holds a specific model.Next, we configure the container image to be used for the region that we are running in.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
image_name = get_image_uri(boto3.Session().region_name, # get the region
'forecasting-deepar') # specify image
###Output
'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.
###Markdown
EXERCISE: Instantiate an Estimator You can now define the estimator that will launch the training job. A generic Estimator will be defined by the usual constructor arguments and an `image_name`. > You can take a look at the [estimator source code](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/estimator.py#L595) to view specifics.
###Code
from sagemaker.estimator import Estimator
prefix = 'sagemaker/energy_consumption'
output_path='s3://{}/{}/'.format(bucket, prefix)
# instantiate a DeepAR estimator
estimator =Estimator(image_name,
role,
train_instance_count=1,
train_instance_type='ml.c4.xlarge',
output_path=output_path, # specified, above
sagemaker_session=sagemaker_session)
###Output
Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Setting HyperparametersNext, we need to define some DeepAR hyperparameters that define the model size and training behavior. Values for the epochs, frequency, prediction length, and context length are required.* **epochs**: The maximum number of times to pass over the data when training.* **time_freq**: The granularity of the time series in the dataset ('D' for daily).* **prediction_length**: A string; the number of time steps (based off the unit of frequency) that the model is trained to predict. * **context_length**: The number of time points that the model gets to see *before* making a prediction. Context LengthTypically, it is recommended that you start with a `context_length`=`prediction_length`. This is because a DeepAR model also receives "lagged" inputs from the target time series, which allow the model to capture long-term dependencies. For example, a daily time series can have yearly seasonality and DeepAR automatically includes a lag of one year. So, the context length can be shorter than a year, and the model will still be able to capture this seasonality. The lag values that the model picks depend on the frequency of the time series. For example, lag values for daily frequency are the previous week, 2 weeks, 3 weeks, 4 weeks, and year. You can read more about this in the [DeepAR "how it works" documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_how-it-works.html). Optional HyperparametersYou can also configure optional hyperparameters to further tune your model. These include parameters like the number of layers in our RNN model, the number of cells per layer, the likelihood function, and the training options, such as batch size and learning rate. For an exhaustive list of all the different DeepAR hyperparameters you can refer to the DeepAR [hyperparameter documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_hyperparameters.html).
###Code
freq='D'
context_length=30 # same as prediction_length
hyperparameters = {
"epochs": "50",
"time_freq": freq,
"prediction_length": str(prediction_length),
"context_length": str(context_length),
"num_cells": "50",
"num_layers": "2",
"mini_batch_size": "128",
"learning_rate": "0.001",
"early_stopping_patience": "10"
}
# set the hyperparams
estimator.set_hyperparameters(**hyperparameters)
###Output
_____no_output_____
###Markdown
Training JobNow, we are ready to launch the training job! SageMaker will start an EC2 instance, download the data from S3, start training the model and save the trained model.If you provide the `test` data channel, as we do in this example, DeepAR will also calculate accuracy metrics for the trained model on this test data set. This is done by predicting the last `prediction_length` points of each time series in the test set and comparing this to the *actual* value of the time series. The computed error metrics will be included as part of the log output.The next cell may take a few minutes to complete, depending on data size, model complexity, and training options.
###Code
%%time
# train and test channels
data_channels = {
"train": train_path,
"test": test_path
}
# fit the estimator
estimator.fit(inputs=data_channels)
###Output
's3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2.
's3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2.
###Markdown
Deploy and Create a PredictorNow that we have trained a model, we can use it to perform predictions by deploying it to a predictor endpoint.Remember to **delete the endpoint** at the end of this notebook. A cell at the very bottom of this notebook will be provided, but it is always good to keep, front-of-mind.
###Code
%%time
# create a predictor
predictor = estimator.deploy(
initial_instance_count=1,
instance_type='ml.t2.medium',
content_type="application/json" # specify that it will accept/produce JSON
)
###Output
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
--- Generating PredictionsAccording to the [inference format](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar-in-formats.html) for DeepAR, the `predictor` expects to see input data in a JSON format, with the following keys:* **instances**: A list of JSON-formatted time series that should be forecast by the model.* **configuration** (optional): A dictionary of configuration information for the type of response desired by the request.Within configuration the following keys can be configured:* **num_samples**: An integer specifying the number of samples that the model generates when making a probabilistic prediction.* **output_types**: A list specifying the type of response. We'll ask for **quantiles**, which look at the list of num_samples generated by the model, and generate [quantile estimates](https://en.wikipedia.org/wiki/Quantile) for each time point based on these values.* **quantiles**: A list that specified which quantiles estimates are generated and returned in the response.Below is an example of what a JSON query to a DeepAR model endpoint might look like.```{ "instances": [ { "start": "2009-11-01 00:00:00", "target": [4.0, 10.0, 50.0, 100.0, 113.0] }, { "start": "1999-01-30", "target": [2.0, 1.0] } ], "configuration": { "num_samples": 50, "output_types": ["quantiles"], "quantiles": ["0.5", "0.9"] }}``` JSON Prediction RequestThe code below accepts a **list** of time series as input and some configuration parameters. It then formats that series into a JSON instance and converts the input into an appropriately formatted JSON_input.
###Code
def json_predictor_input(input_ts, num_samples=50, quantiles=['0.1', '0.5', '0.9']):
'''Accepts a list of input time series and produces a formatted input.
:input_ts: An list of input time series.
:num_samples: Number of samples to calculate metrics with.
:quantiles: A list of quantiles to return in the predicted output.
:return: The JSON-formatted input.
'''
# request data is made of JSON objects (instances)
# and an output configuration that details the type of data/quantiles we want
instances = []
for k in range(len(input_ts)):
# get JSON objects for input time series
instances.append(series_to_json_obj(input_ts[k]))
# specify the output quantiles and samples
configuration = {"num_samples": num_samples,
"output_types": ["quantiles"],
"quantiles": quantiles}
request_data = {"instances": instances,
"configuration": configuration}
json_request = json.dumps(request_data).encode('utf-8')
return json_request
###Output
_____no_output_____
###Markdown
Get a PredictionWe can then use this function to get a prediction for a formatted time series!In the next cell, I'm getting an input time series and known target, and passing the formatted input into the predictor endpoint to get a resultant prediction.
###Code
# get all input and target (test) time series
input_ts = time_series_training
target_ts = time_series
# get formatted input time series
json_input_ts = json_predictor_input(input_ts)
# get the prediction from the predictor
json_prediction = predictor.predict(json_input_ts)
print(json_prediction)
###Output
b'{"predictions":[{"quantiles":{"0.1":[1.0732749701,1.160982132,1.4139347076,1.2430267334,1.0804938078,1.3301789761,1.4701865911,1.3994019032,1.2371610403,1.2868614197,1.1445115805,1.109082818,1.311300993,1.1515891552,1.3058798313,1.038470149,1.0976593494,1.0263260603,0.860673964,1.0272933245,1.0207802057,0.8440876603,0.9132721424,0.8155958652,0.9279446602,0.7213132381,0.8816479445,0.9329715967,1.0122709274,0.6836285591],"0.9":[1.9491639137,1.7151364088,1.9665014744,1.6717638969,1.688627243,2.025583744,2.1581335068,2.0605378151,1.694137454,1.926402092,1.7702662945,1.7958476543,2.0729393959,2.0563149452,2.0424129963,1.76790905,1.879027009,1.9826424122,1.8695762157,2.0499374866,2.1886467934,2.0472579002,1.7323303223,1.8177461624,1.6395537853,1.5056248903,1.9876739979,2.2252993584,2.2854511738,1.7210767269],"0.5":[1.5050832033,1.397018671,1.683083415,1.4728707075,1.3717074394,1.6938368082,1.8318486214,1.7426275015,1.4535388947,1.6350978613,1.5168379545,1.4699680805,1.6491430998,1.6371824741,1.6244065762,1.3655431271,1.5742114782,1.4178040028,1.3916404247,1.4836207628,1.6142466068,1.5861662626,1.3131304979,1.3152029514,1.2314814329,1.1315762997,1.4303350449,1.508587122,1.4078787565,1.2106872797]}},{"quantiles":{"0.1":[1.1110612154,1.1684799194,1.087495923,0.9613780975,1.2648773193,1.3580272198,1.100632906,1.1214978695,1.2728190422,1.1334471703,1.1395806074,1.2750405073,1.4784226418,1.1089192629,1.1837488413,1.0526570082,1.1640049219,1.1552641392,1.4077328444,1.2646210194,0.8407219648,1.104611516,0.9460157752,0.7994711399,0.9069052935,0.7870396972,0.7477427125,0.7024663687,0.808983922,0.6502761841],"0.9":[1.6734070778,1.7974295616,1.6937521696,1.6920018196,2.0805075169,2.1967058182,1.6029438972,1.7681598663,1.869250536,1.9794979095,1.9873273373,2.3448274136,2.415448904,1.761713028,1.8110653162,1.7660439014,1.8294456005,1.8505930901,2.2991809845,2.2494697571,1.7895103693,1.8230280876,1.7844949961,1.9233208895,1.8544781208,1.8531644344,2.1766111851,1.5863021612,1.7389090061,1.8187289238],"0.5":[1.3312399387,1.4013158083,1.387114048,1.2566481829,1.69459939,1.7086486816,1.3656680584,1.541888237,1.5473628044,1.578951478,1.5237312317,1.6908383369,1.8401408195,1.5636504889,1.4751423597,1.4682052135,1.5178762674,1.4829516411,1.83636415,1.7440556288,1.3912638426,1.4391226768,1.2852457762,1.3388893604,1.3521085978,1.3218683004,1.4823949337,1.2174355984,1.1642144918,1.3063315153]}},{"quantiles":{"0.1":[1.1406818628,0.8892359734,0.879147768,1.2468963861,1.1696062088,1.0544157028,1.0714406967,1.297612071,1.0834032297,1.1926629543,1.4540336132,1.290481925,1.026925087,1.078289628,1.22001791,1.0756703615,1.007373333,1.2909748554,1.2781050205,0.950568378,0.9827567339,0.8651491404,0.9986946583,1.0546956062,1.0034294128,1.0323400497,0.7100723982,0.8442938924,0.8221567869,0.8413610458],"0.9":[1.722215414,1.6057517529,1.7572950125,2.2158243656,2.3104712963,1.5537397861,1.7339893579,1.9301620722,1.6236760616,1.8276309967,2.2904467583,2.0933134556,1.5869296789,1.6727341413,1.9212094545,1.7160105705,1.8005046844,2.0341072083,2.149189949,1.6441940069,1.7722268105,1.9179282188,1.6220242977,1.657648325,2.0226025581,2.2933521271,1.5795642138,1.7972948551,1.8403422832,1.7486526966],"0.5":[1.4382965565,1.2517490387,1.2494604588,1.687615633,1.6291773319,1.3280653954,1.4587786198,1.5994262695,1.4430062771,1.4361715317,1.8355339766,1.6841471195,1.3588910103,1.480298996,1.5402938128,1.3514592648,1.5278197527,1.7001411915,1.5434495211,1.2944327593,1.3596547842,1.3973469734,1.2257122993,1.3043396473,1.5225265026,1.658162117,1.2293
732166,1.3867697716,1.407086134,1.2310637236]}}]}'
###Markdown
Decoding PredictionsThe predictor returns JSON-formatted prediction, and so we need to extract the predictions and quantile data that we want for visualizing the result. The function below, reads in a JSON-formatted prediction and produces a list of predictions in each quantile.
###Code
# helper function to decode JSON prediction
def decode_prediction(prediction, encoding='utf-8'):
'''Accepts a JSON prediction and returns a list of prediction data.
'''
prediction_data = json.loads(prediction.decode(encoding))
prediction_list = []
for k in range(len(prediction_data['predictions'])):
prediction_list.append(pd.DataFrame(data=prediction_data['predictions'][k]['quantiles']))
return prediction_list
# get quantiles/predictions
prediction_list = decode_prediction(json_prediction)
# should get a list of 30 predictions
# with corresponding quantile values
print(prediction_list[0])
###Output
0.1 0.9 0.5
0 1.073275 1.949164 1.505083
1 1.160982 1.715136 1.397019
2 1.413935 1.966501 1.683083
3 1.243027 1.671764 1.472871
4 1.080494 1.688627 1.371707
5 1.330179 2.025584 1.693837
6 1.470187 2.158134 1.831849
7 1.399402 2.060538 1.742628
8 1.237161 1.694137 1.453539
9 1.286861 1.926402 1.635098
10 1.144512 1.770266 1.516838
11 1.109083 1.795848 1.469968
12 1.311301 2.072939 1.649143
13 1.151589 2.056315 1.637182
14 1.305880 2.042413 1.624407
15 1.038470 1.767909 1.365543
16 1.097659 1.879027 1.574211
17 1.026326 1.982642 1.417804
18 0.860674 1.869576 1.391640
19 1.027293 2.049937 1.483621
20 1.020780 2.188647 1.614247
21 0.844088 2.047258 1.586166
22 0.913272 1.732330 1.313130
23 0.815596 1.817746 1.315203
24 0.927945 1.639554 1.231481
25 0.721313 1.505625 1.131576
26 0.881648 1.987674 1.430335
27 0.932972 2.225299 1.508587
28 1.012271 2.285451 1.407879
29 0.683629 1.721077 1.210687
###Markdown
Display the Results!The quantile data will give us all we need to see the results of our prediction.* Quantiles 0.1 and 0.9 represent the lower and upper bounds for the predicted values.* Quantile 0.5 represents the median of all sample predictions.
###Code
# display the prediction median against the actual data
def display_quantiles(prediction_list, target_ts=None):
# show predictions for all input ts
for k in range(len(prediction_list)):
plt.figure(figsize=(12,6))
# get the target month of data
if target_ts is not None:
target = target_ts[k][-prediction_length:]
plt.plot(range(len(target)), target, label='target')
# get the quantile values at 10 and 90%
p10 = prediction_list[k]['0.1']
p90 = prediction_list[k]['0.9']
# fill the 80% confidence interval
plt.fill_between(p10.index, p10, p90, color='y', alpha=0.5, label='80% confidence interval')
# plot the median prediction line
prediction_list[k]['0.5'].plot(label='prediction median')
plt.legend()
plt.show()
# display predictions
display_quantiles(prediction_list, target_ts)
###Output
_____no_output_____
###Markdown
Predicting the FutureRecall that we did not give our model any data about 2010, but let's see if it can predict the energy consumption given **no target**, only a known start date! EXERCISE: Format a request for a "future" predictionCreate a formatted input to send to the deployed `predictor` passing in my usual parameters for "configuration". The "instances" will, in this case, just be one instance, defined by the following:* **start**: The start time will be the time stamp that you specify. To predict the first 30 days of 2010, start on Jan. 1st, '2010-01-01'.* **target**: The target will be an empty list because this year has no complete associated time series; we specifically withheld that information from our model for testing purposes.```{"start": start_time, "target": []} empty target```
###Code
# Starting my prediction at the beginning of 2010
start_date = '2010-01-01'
timestamp = '00:00:00'
# formatting start_date
start_time = start_date +' '+ timestamp
# format the request_data
instances=[{"start": start_time, "target": []}]
# specify the output quantiles and samples
configuration = {"num_samples": 30,
"output_types": ["quantiles"],
"quantiles": ['0.1', '0.5', '0.9']}
request_data = {"instances": instances,
"configuration": configuration}
# create JSON input
json_input = json.dumps(request_data).encode('utf-8')
print('Requesting prediction for '+start_time)
###Output
Requesting prediction for 2010-01-01 00:00:00
###Markdown
Then get and decode the prediction response, as usual.
###Code
# get prediction response
json_prediction = predictor.predict(json_input)
prediction_2010 = decode_prediction(json_prediction)
###Output
_____no_output_____
###Markdown
Finally, I'll compare the predictions to a known target sequence. This target will come from a time series for the 2010 data, which I'm creating below.
###Code
# create 2010 time series
ts_2010 = []
# get global consumption data
# index 1112 is where the 2010 data starts
data_2010 = mean_power_df.values[1112:]
index = pd.date_range(start=start_date, periods=len(data_2010), freq='D')
ts_2010.append(pd.Series(data=data_2010, index=index))
# range of actual data to compare
start_idx=0 # days since Jan 1st 2010
end_idx=start_idx+prediction_length
# get target data
target_2010_ts = [ts_2010[0][start_idx:end_idx]]
# display predictions
display_quantiles(prediction_2010, target_2010_ts)
###Output
_____no_output_____
###Markdown
Delete the EndpointTry your code out on different time series. You may want to tweak your DeepAR hyperparameters and see if you can improve the performance of this predictor.When you're done with evaluating the predictor (any predictor), make sure to delete the endpoint.
###Code
## TODO: delete the endpoint
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Time Series Forecasting A time series is data collected periodically, over time. Time series forecasting is the task of predicting future data points, given some historical data. It is commonly used in a variety of tasks from weather forecasting, retail and sales forecasting, stock market prediction, and in behavior prediction (such as predicting the flow of car traffic over a day). There is a lot of time series data out there, and recognizing patterns in that data is an active area of machine learning research!In this notebook, we'll focus on one method for finding time-based patterns: using SageMaker's supervised learning model, [DeepAR](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html). DeepARDeepAR utilizes a recurrent neural network (RNN), which is designed to accept some sequence of data points as historical input and produce a predicted sequence of points. So, how does this model learn?During training, you'll provide a training dataset (made of several time series) to a DeepAR estimator. The estimator looks at *all* the training time series and tries to identify similarities across them. It trains by randomly sampling **training examples** from the training time series. * Each training example consists of a pair of adjacent **context** and **prediction** windows of fixed, predefined lengths. * The `context_length` parameter controls how far in the *past* the model can see. * The `prediction_length` parameter controls how far in the *future* predictions can be made. * You can find more details, in [this documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_how-it-works.html).> Since DeepAR trains on several time series, it is well suited for data that exhibit **recurring patterns**.In any forecasting task, you should choose the context window to provide enough, **relevant** information to a model so that it can produce accurate predictions. In general, data closest to the prediction time frame will contain the information that is most influential in defining that prediction. In many forecasting applications, like forecasting sales month-to-month, the context and prediction windows will be the same size, but sometimes it will be useful to have a larger context window to notice longer-term patterns in data. Energy Consumption DataThe data we'll be working with in this notebook is data about household electric power consumption, over the globe. The dataset is originally taken from [Kaggle](https://www.kaggle.com/uciml/electric-power-consumption-data-set), and represents power consumption collected over several years from 2006 to 2010. With such a large dataset, we can aim to predict over long periods of time, over days, weeks or months of time. Predicting energy consumption can be a useful task for a variety of reasons including determining seasonal prices for power consumption and efficiently delivering power to people, according to their predicted usage. **Interesting read**: An inversely-related project, recently done by Google and DeepMind, uses machine learning to predict the *generation* of power by wind turbines and efficiently deliver power to the grid. You can read about that research, [in this post](https://deepmind.com/blog/machine-learning-can-boost-value-wind-energy/). 
Machine Learning WorkflowThis notebook approaches time series forecasting in a number of steps:* Loading and exploring the data* Creating training and test sets of time series* Formatting data as JSON files and uploading to S3* Instantiating and training a DeepAR estimator* Deploying a model and creating a predictor* Evaluating the predictor ---Let's start by loading in the usual resources.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load and Explore the DataWe'll be loading in some data about global energy consumption, collected over a few years. The below cell downloads and unzips this data, giving you one text file of data, `household_power_consumption.txt`.
###Code
! wget https://s3.amazonaws.com/video.udacity-data.com/topher/2019/March/5c88a3f1_household-electric-power-consumption/household-electric-power-consumption.zip
! unzip household-electric-power-consumption
###Output
_____no_output_____
###Markdown
Read in the `.txt` FileThe next cell displays the first few lines in the text file, so we can see how it is formatted.
###Code
# display first ten lines of text data
n_lines = 10
with open('household_power_consumption.txt') as file:
head = [next(file) for line in range(n_lines)]
display(head)
###Output
_____no_output_____
###Markdown
Pre-Process the DataThe 'household_power_consumption.txt' file has the following attributes: * Each data point has a date and time (hour:minute:second) of recording * The various data features are separated by semicolons (;) * Some values are 'nan' or '?', and we'll treat these both as `NaN` values Managing `NaN` valuesThis DataFrame does include some data points that have missing values. So far, we've mainly been dropping these values, but there are other ways to handle `NaN` values, as well. One technique is to just fill the missing column values with the **mean** value from that column; this way the added value is likely to be realistic.I've provided some helper functions in `txt_preprocessing.py` that will help to load in the original text file as a DataFrame *and* fill in any `NaN` values, per column, with the mean feature value. This technique will be fine for long-term forecasting; if I wanted to do an hourly analysis and prediction, I'd consider dropping the `NaN` values or taking an average over a small, sliding window rather than an entire column of data.**Below, I'm reading the file in as a DataFrame and filling `NaN` values with feature-level averages.**
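For intuition, here is a rough sketch of what such a column-wise mean-fill can look like in plain pandas. This is only an illustration under my own assumptions (a hypothetical `raw_df` read from the text file, with date/time columns handled separately, e.g. as the index); the actual helper in `txt_preprocessing.py` may be implemented differently.
```
# coerce '?' / 'nan' strings to real NaN values, then fill each column with its mean
coerced = raw_df.apply(pd.to_numeric, errors='coerce')
filled_df = coerced.fillna(coerced.mean())
```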
###Code
import txt_preprocessing as pprocess
# create df from text file
initial_df = pprocess.create_df('household_power_consumption.txt', sep=';')
# fill NaN column values with *average* column value
df = pprocess.fill_nan_with_mean(initial_df)
# print some stats about the data
print('Data shape: ', df.shape)
df.head()
###Output
_____no_output_____
###Markdown
Global Active Power In this example, we'll want to predict the global active power, which is the household minute-averaged active power (kilowatt), measured across the globe. So, below, I am getting just that column of data and displaying the resultant plot.
###Code
# Select Global active power data
power_df = df['Global_active_power'].copy()
print(power_df.shape)
# display the data
plt.figure(figsize=(12,6))
# all data points
power_df.plot(title='Global active power', color='blue')
plt.show()
###Output
_____no_output_____
###Markdown
Since the data is recorded each minute, the above plot contains *a lot* of values. So, I'm also showing just a slice of data, below.
###Code
# can plot a slice of hourly data
end_mins = 1440 # 1440 mins = 1 day
plt.figure(figsize=(12,6))
power_df[0:end_mins].plot(title='Global active power, over one day', color='blue')
plt.show()
###Output
_____no_output_____
###Markdown
Hourly vs DailyThere is a lot of data, collected every minute, and so I could go one of two ways with my analysis:1. Create many, short time series, say a week or so long, in which I record energy consumption every hour, and try to predict the energy consumption over the following hours or days.2. Create fewer, long time series with data recorded daily that I could use to predict usage in the following weeks or months.Both tasks are interesting! It depends on whether you want to predict time patterns over a day/week or over a longer time period, like a month. With the amount of data I have, I think it would be interesting to see longer, *recurring* trends that happen over several months or over a year. So, I will resample the 'Global active power' values, recording **daily** data points as averages over 24-hr periods.> I can resample according to a specified frequency, by utilizing pandas [time series tools](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html), which allow me to sample at points like every hour ('H') or day ('D'), etc.
###Code
# resample over day (D)
freq = 'D'
# calculate the mean active power for a day
mean_power_df = power_df.resample(freq).mean()
# display the mean values
plt.figure(figsize=(15,8))
mean_power_df.plot(title='Global active power, mean per day', color='blue')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
In this plot, we can see that there are some interesting trends that occur over each year. It seems that there are spikes of energy consumption around the end/beginning of each year, which correspond with heat and light usage being higher in winter months. We also see a dip in usage around August, when global temperatures are typically higher.The data is still not very smooth, but it shows noticeable trends, and so, makes for a good use case for machine learning models that may be able to recognize these patterns. --- Create Time Series My goal will be to take full years of data, from 2007-2009, and see if I can use it to accurately predict the average Global active power usage for the next several months in 2010!Next, let's make one time series for each complete year of data. This is just a design decision, and I am deciding to use full years of data, starting in January of 2007 because there are not that many data points in 2006 and this split will make it easier to handle leap years; I could have also decided to construct time series starting at the first collected data point, just by changing `t_start` and `t_end` in the function below.The function `make_time_series` will create pandas `Series` for each of the passed in list of years `['2007', '2008', '2009']`.* All of the time series will start at the same time point `t_start` (or t0). * When preparing data, it's important to use a consistent start point for each time series; DeepAR uses this time-point as a frame of reference, which enables it to learn recurrent patterns e.g. that weekdays behave differently from weekends or that Summer is different than Winter. * You can change the start and end indices to define any time series you create.* We should account for leap years, like 2008, in the creation of time series.* Generally, we create `Series` by getting the relevant global consumption data (from the DataFrame) and date indices.``` get global consumption datadata = mean_power_df[start_idx:end_idx] create time series for the yearindex = pd.date_range(start=t_start, end=t_end, freq='D')time_series.append(pd.Series(data=data, index=index))```
###Code
def make_time_series(mean_power_df, years, freq='D', start_idx=16):
'''Creates as many time series as there are complete years. This code
accounts for the leap year, 2008.
:param mean_power_df: A dataframe of global power consumption, averaged by day.
This dataframe should also be indexed by a datetime.
:param years: A list of years to make time series out of, ex. ['2007', '2008'].
:param freq: The frequency of data recording (D = daily)
:param start_idx: The starting dataframe index of the first point in the first time series.
The default, 16, points to '2007-01-01'.
:return: A list of pd.Series(), time series data.
'''
# store time series
time_series = []
# store leap year in this dataset
leap = '2008'
# create time series for each year in years
for i in range(len(years)):
year = years[i]
if(year == leap):
end_idx = start_idx+366
else:
end_idx = start_idx+365
# create start and end datetimes
t_start = year + '-01-01' # Jan 1st of each year = t_start
t_end = year + '-12-31' # Dec 31st = t_end
# get global consumption data
data = mean_power_df[start_idx:end_idx]
# create time series for the year
index = pd.date_range(start=t_start, end=t_end, freq=freq)
time_series.append(pd.Series(data=data, index=index))
start_idx = end_idx
# return list of time series
return time_series
###Output
_____no_output_____
###Markdown
Test the resultsBelow, let's construct one time series for each complete year of data, and display the results.
###Code
# test out the code above
# yearly time series for our three complete years
full_years = ['2007', '2008', '2009']
freq='D' # daily recordings
# make time series
time_series = make_time_series(mean_power_df, full_years, freq=freq)
# display first time series
time_series_idx = 0
plt.figure(figsize=(12,6))
time_series[time_series_idx].plot()
plt.show()
###Output
_____no_output_____
###Markdown
--- Splitting in TimeWe'll evaluate our model on a test set of data. For machine learning tasks like classification, we typically create train/test data by randomly splitting examples into different sets. For forecasting it's important to do this train/test split in **time** rather than by individual data points. > In general, we can create training data by taking each of our *complete* time series and leaving off the last `prediction_length` data points to create *training* time series. EXERCISE: Create training time seriesComplete the `create_training_series` function, which should take in our list of complete time series data and return a list of truncated, training time series.* In this example, we want to predict about a month's worth of data, and we'll set `prediction_length` to 30 (days).* To create a training set of data, we'll leave out the last 30 points of *each* of the time series we just generated, so we'll use only the first part as training data. * The **test set contains the complete range** of each time series.
###Code
# create truncated, training time series
def create_training_series(complete_time_series, prediction_length):
'''Given a complete list of time series data, create training time series.
:param complete_time_series: A list of all complete time series.
:param prediction_length: The number of points we want to predict.
:return: A list of training time series.
'''
# your code here
pass
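    # One possible solution sketch (assumes each entry is a pandas Series, as created above):
    # keep everything except the final `prediction_length` points of each complete series
    return [ts[:-prediction_length] for ts in complete_time_series]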
# test your code!
# set prediction length
prediction_length = 30 # 30 days ~ a month
time_series_training = create_training_series(time_series, prediction_length)
###Output
_____no_output_____
###Markdown
Training and Test SeriesWe can visualize what these series look like, by plotting the train/test series on the same axis. We should see that the test series contains all of our data in a year, and a training series contains all but the last `prediction_length` points.
###Code
# display train/test time series
time_series_idx = 0
plt.figure(figsize=(15,8))
# test data is the whole time series
time_series[time_series_idx].plot(label='test', lw=3)
# train data is all but the last prediction pts
time_series_training[time_series_idx].plot(label='train', ls=':', lw=3)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Convert to JSON According to the [DeepAR documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html), DeepAR expects to see input training data in a JSON format, with the following fields:* **start**: A string that defines the starting date of the time series, with the format 'YYYY-MM-DD HH:MM:SS'.* **target**: An array of numerical values that represent the time series.* **cat** (optional): A numerical array of categorical features that can be used to encode the groups that the record belongs to. This is useful for finding models per class of item, such as in retail sales, where you might have {'shoes', 'jackets', 'pants'} encoded as categories {0, 1, 2}.The input data should be formatted with one time series per line in a JSON file. Each line looks a bit like a dictionary, for example:```{"start":'2007-01-01 00:00:00', "target": [2.54, 6.3, ...], "cat": [1]}{"start": "2012-01-30 00:00:00", "target": [1.0, -5.0, ...], "cat": [0]} ...```In the above example, each time series has one, associated categorical feature and one time series feature. EXERCISE: Formatting Energy Consumption DataFor our data:* The starting date, "start," will be the index of the first row in a time series, Jan. 1st of that year.* The "target" will be all of the energy consumption values that our time series holds.* We will not use the optional "cat" field.Complete the following utility function, which should convert `pandas.Series` objects into the appropriate JSON strings that DeepAR can consume.
###Code
def series_to_json_obj(ts):
'''Returns a dictionary of values in DeepAR, JSON format.
:param ts: A single time series.
:return: A dictionary of values with "start" and "target" keys.
'''
# your code here
pass
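    # One possible solution sketch (assumes ts is a pandas Series with a DatetimeIndex):
    # "start" is the first timestamp, "target" is the full list of values; "cat" is omitted
    return {"start": str(ts.index[0]), "target": list(ts)}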
# test out the code
ts = time_series[0]
json_obj = series_to_json_obj(ts)
print(json_obj)
###Output
_____no_output_____
###Markdown
Saving Data, LocallyThe next helper function will write one series to a single JSON line, using the new line character '\n'. The data is also encoded and written to a filename that we specify.
###Code
# import json for formatting data
import json
import os # and os for saving
def write_json_dataset(time_series, filename):
with open(filename, 'wb') as f:
# for each of our times series, there is one JSON line
for ts in time_series:
json_line = json.dumps(series_to_json_obj(ts)) + '\n'
json_line = json_line.encode('utf-8')
f.write(json_line)
print(filename + ' saved.')
# save this data to a local directory
data_dir = 'json_energy_data'
# make data dir, if it does not exist
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# directories to save train/test data
train_key = os.path.join(data_dir, 'train.json')
test_key = os.path.join(data_dir, 'test.json')
# write train/test JSON files
write_json_dataset(time_series_training, train_key)
write_json_dataset(time_series, test_key)
###Output
_____no_output_____
###Markdown
--- Uploading Data to S3Next, to make this data accessible to an estimator, I'll upload it to S3. Sagemaker resourcesLet's start by specifying:* The sagemaker role and session for training a model.* A default S3 bucket where we can save our training, test, and model data.
###Code
import boto3
import sagemaker
from sagemaker import get_execution_role
# session, role, bucket
sagemaker_session = sagemaker.Session()
role = get_execution_role()
bucket = sagemaker_session.default_bucket()
###Output
_____no_output_____
###Markdown
EXERCISE: Upload *both* training and test JSON files to S3Specify *unique* train and test prefixes that define the location of that data in S3.* Upload training data to a location in S3, and save that location to `train_path`* Upload test data to a location in S3, and save that location to `test_path`
###Code
# suggested that you set prefixes for directories in S3
# upload data to S3, and save unique locations
train_path = None
test_path = None
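# One possible approach, mirroring the completed cell earlier in this document
# (the prefix below is just a choice; any unique S3 prefix works):
prefix = 'sagemaker/energy_consumption'
train_path = sagemaker_session.upload_data(path=train_key, bucket=bucket, key_prefix=prefix)
test_path = sagemaker_session.upload_data(path=test_key, bucket=bucket, key_prefix=prefix)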
# check locations
print('Training data is stored in: '+ train_path)
print('Test data is stored in: '+ test_path)
###Output
_____no_output_____
###Markdown
--- Training a DeepAR EstimatorSome estimators have specific, SageMaker constructors, but not all. Instead you can create a base `Estimator` and pass in the specific image (or container) that holds a specific model.Next, we configure the container image to be used for the region that we are running in.
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
image_name = get_image_uri(boto3.Session().region_name, # get the region
'forecasting-deepar') # specify image
###Output
_____no_output_____
###Markdown
EXERCISE: Instantiate an Estimator You can now define the estimator that will launch the training job. A generic Estimator will be defined by the usual constructor arguments and an `image_name`. > You can take a look at the [estimator source code](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/estimator.py#L595) to view specifics.
###Code
from sagemaker.estimator import Estimator
# instantiate a DeepAR estimator
estimator = None
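# One possible configuration, mirroring the completed notebook earlier in this document;
# the S3 output path and the (SDK v1) instance settings below are choices, not requirements.
prefix = 'sagemaker/energy_consumption'
output_path = 's3://{}/{}/'.format(bucket, prefix)
estimator = Estimator(image_name,
                      role,
                      train_instance_count=1,
                      train_instance_type='ml.c4.xlarge',
                      output_path=output_path,
                      sagemaker_session=sagemaker_session)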
###Output
_____no_output_____
###Markdown
Setting HyperparametersNext, we need to define some DeepAR hyperparameters that define the model size and training behavior. Values for the epochs, frequency, prediction length, and context length are required.* **epochs**: The maximum number of times to pass over the data when training.* **time_freq**: The granularity of the time series in the dataset ('D' for daily).* **prediction_length**: A string; the number of time steps (based off the unit of frequency) that the model is trained to predict. * **context_length**: The number of time points that the model gets to see *before* making a prediction. Context LengthTypically, it is recommended that you start with a `context_length`=`prediction_length`. This is because a DeepAR model also receives "lagged" inputs from the target time series, which allow the model to capture long-term dependencies. For example, a daily time series can have yearly seasonality and DeepAR automatically includes a lag of one year. So, the context length can be shorter than a year, and the model will still be able to capture this seasonality. The lag values that the model picks depend on the frequency of the time series. For example, lag values for daily frequency are the previous week, 2 weeks, 3 weeks, 4 weeks, and year. You can read more about this in the [DeepAR "how it works" documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_how-it-works.html). Optional HyperparametersYou can also configure optional hyperparameters to further tune your model. These include parameters like the number of layers in our RNN model, the number of cells per layer, the likelihood function, and the training options, such as batch size and learning rate. For an exhaustive list of all the different DeepAR hyperparameters you can refer to the DeepAR [hyperparameter documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_hyperparameters.html).
###Code
freq='D'
context_length=30 # same as prediction_length
hyperparameters = {
"epochs": "50",
"time_freq": freq,
"prediction_length": str(prediction_length),
"context_length": str(context_length),
"num_cells": "50",
"num_layers": "2",
"mini_batch_size": "128",
"learning_rate": "0.001",
"early_stopping_patience": "10"
}
# set the hyperparams
estimator.set_hyperparameters(**hyperparameters)
###Output
_____no_output_____
###Markdown
Training JobNow, we are ready to launch the training job! SageMaker will start an EC2 instance, download the data from S3, start training the model and save the trained model.If you provide the `test` data channel, as we do in this example, DeepAR will also calculate accuracy metrics for the trained model on this test data set. This is done by predicting the last `prediction_length` points of each time series in the test set and comparing this to the *actual* value of the time series. The computed error metrics will be included as part of the log output.The next cell may take a few minutes to complete, depending on data size, model complexity, and training options.
###Code
%%time
# train and test channels
data_channels = {
"train": train_path,
"test": test_path
}
# fit the estimator
estimator.fit(inputs=data_channels)
###Output
_____no_output_____
###Markdown
Deploy and Create a PredictorNow that we have trained a model, we can use it to perform predictions by deploying it to a predictor endpoint.Remember to **delete the endpoint** at the end of this notebook. A cell at the very bottom of this notebook will be provided, but it is always good to keep, front-of-mind.
###Code
%%time
# create a predictor
predictor = estimator.deploy(
initial_instance_count=1,
instance_type='ml.t2.medium',
content_type="application/json" # specify that it will accept/produce JSON
)
###Output
_____no_output_____
###Markdown
--- Generating PredictionsAccording to the [inference format](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar-in-formats.html) for DeepAR, the `predictor` expects to see input data in a JSON format, with the following keys:* **instances**: A list of JSON-formatted time series that should be forecast by the model.* **configuration** (optional): A dictionary of configuration information for the type of response desired by the request.Within configuration the following keys can be configured:* **num_samples**: An integer specifying the number of samples that the model generates when making a probabilistic prediction.* **output_types**: A list specifying the type of response. We'll ask for **quantiles**, which look at the list of num_samples generated by the model, and generate [quantile estimates](https://en.wikipedia.org/wiki/Quantile) for each time point based on these values.* **quantiles**: A list that specified which quantiles estimates are generated and returned in the response.Below is an example of what a JSON query to a DeepAR model endpoint might look like.```{ "instances": [ { "start": "2009-11-01 00:00:00", "target": [4.0, 10.0, 50.0, 100.0, 113.0] }, { "start": "1999-01-30", "target": [2.0, 1.0] } ], "configuration": { "num_samples": 50, "output_types": ["quantiles"], "quantiles": ["0.5", "0.9"] }}``` JSON Prediction RequestThe code below accepts a **list** of time series as input and some configuration parameters. It then formats that series into a JSON instance and converts the input into an appropriately formatted JSON_input.
###Code
def json_predictor_input(input_ts, num_samples=50, quantiles=['0.1', '0.5', '0.9']):
'''Accepts a list of input time series and produces a formatted input.
:input_ts: An list of input time series.
:num_samples: Number of samples to calculate metrics with.
:quantiles: A list of quantiles to return in the predicted output.
:return: The JSON-formatted input.
'''
# request data is made of JSON objects (instances)
# and an output configuration that details the type of data/quantiles we want
instances = []
for k in range(len(input_ts)):
# get JSON objects for input time series
instances.append(series_to_json_obj(input_ts[k]))
# specify the output quantiles and samples
configuration = {"num_samples": num_samples,
"output_types": ["quantiles"],
"quantiles": quantiles}
request_data = {"instances": instances,
"configuration": configuration}
json_request = json.dumps(request_data).encode('utf-8')
return json_request
###Output
_____no_output_____
###Markdown
Get a PredictionWe can then use this function to get a prediction for a formatted time series!In the next cell, I'm getting an input time series and known target, and passing the formatted input into the predictor endpoint to get a resultant prediction.
###Code
# get all input and target (test) time series
input_ts = time_series_training
target_ts = time_series
# get formatted input time series
json_input_ts = json_predictor_input(input_ts)
# get the prediction from the predictor
json_prediction = predictor.predict(json_input_ts)
print(json_prediction)
###Output
_____no_output_____
###Markdown
Decoding PredictionsThe predictor returns JSON-formatted prediction, and so we need to extract the predictions and quantile data that we want for visualizing the result. The function below, reads in a JSON-formatted prediction and produces a list of predictions in each quantile.
###Code
# helper function to decode JSON prediction
def decode_prediction(prediction, encoding='utf-8'):
'''Accepts a JSON prediction and returns a list of prediction data.
'''
prediction_data = json.loads(prediction.decode(encoding))
prediction_list = []
for k in range(len(prediction_data['predictions'])):
prediction_list.append(pd.DataFrame(data=prediction_data['predictions'][k]['quantiles']))
return prediction_list
# get quantiles/predictions
prediction_list = decode_prediction(json_prediction)
# should get a list of 30 predictions
# with corresponding quantile values
print(prediction_list[0])
###Output
_____no_output_____
###Markdown
Display the Results!The quantile data will give us all we need to see the results of our prediction.* Quantiles 0.1 and 0.9 represent the lower and upper bounds for the predicted values.* Quantile 0.5 represents the median of all sample predictions.
###Code
# display the prediction median against the actual data
def display_quantiles(prediction_list, target_ts=None):
# show predictions for all input ts
for k in range(len(prediction_list)):
plt.figure(figsize=(12,6))
# get the target month of data
if target_ts is not None:
target = target_ts[k][-prediction_length:]
plt.plot(range(len(target)), target, label='target')
# get the quantile values at 10 and 90%
p10 = prediction_list[k]['0.1']
p90 = prediction_list[k]['0.9']
# fill the 80% confidence interval
plt.fill_between(p10.index, p10, p90, color='y', alpha=0.5, label='80% confidence interval')
# plot the median prediction line
prediction_list[k]['0.5'].plot(label='prediction median')
plt.legend()
plt.show()
# display predictions
display_quantiles(prediction_list, target_ts)
###Output
_____no_output_____
###Markdown
Predicting the FutureRecall that we did not give our model any data about 2010, but let's see if it can predict the energy consumption given **no target**, only a known start date! EXERCISE: Format a request for a "future" predictionCreate a formatted input to send to the deployed `predictor` passing in my usual parameters for "configuration". The "instances" will, in this case, just be one instance, defined by the following:* **start**: The start time will be the time stamp that you specify. To predict the first 30 days of 2010, start on Jan. 1st, '2010-01-01'.* **target**: The target will be an empty list because this year has no complete associated time series; we specifically withheld that information from our model for testing purposes.```{"start": start_time, "target": []} empty target```
###Code
# Starting my prediction at the beginning of 2010
start_date = '2010-01-01'
timestamp = '00:00:00'
# formatting start_date
start_time = start_date +' '+ timestamp
# format the request_data
# with "instances" and "configuration"
request_data = None
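# One possible request, mirroring the completed notebook earlier in this document:
# a single instance with an empty target, plus the usual output configuration.
instances = [{"start": start_time, "target": []}]
configuration = {"num_samples": 30,
                 "output_types": ["quantiles"],
                 "quantiles": ['0.1', '0.5', '0.9']}
request_data = {"instances": instances,
                "configuration": configuration}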
# create JSON input
json_input = json.dumps(request_data).encode('utf-8')
print('Requesting prediction for '+start_time)
###Output
_____no_output_____
###Markdown
Then get and decode the prediction response, as usual.
###Code
# get prediction response
json_prediction = predictor.predict(json_input)
prediction_2010 = decode_prediction(json_prediction)
###Output
_____no_output_____
###Markdown
Finally, I'll compare the predictions to a known target sequence. This target will come from a time series for the 2010 data, which I'm creating below.
###Code
# create 2010 time series
ts_2010 = []
# get global consumption data
# index 1112 is where the 2010 data starts
data_2010 = mean_power_df.values[1112:]
index = pd.date_range(start=start_date, periods=len(data_2010), freq='D')
ts_2010.append(pd.Series(data=data_2010, index=index))
# range of actual data to compare
start_idx=0 # days since Jan 1st 2010
end_idx=start_idx+prediction_length
# get target data
target_2010_ts = [ts_2010[0][start_idx:end_idx]]
# display predictions
display_quantiles(prediction_2010, target_2010_ts)
###Output
_____no_output_____
###Markdown
Delete the EndpointTry your code out on different time series. You may want to tweak your DeepAR hyperparameters and see if you can improve the performance of this predictor.When you're done with evaluating the predictor (any predictor), make sure to delete the endpoint.
###Code
## TODO: delete the endpoint
predictor.delete_endpoint()
###Output
_____no_output_____ |
notebooks/.ipynb_checkpoints/GNN_Psych_DBS-checkpoint.ipynb | ###Markdown
Graph Neural Networks for Psychiatric DBS Setup
###Code
import torch
from torch_geometric.nn import MessagePassing
from torch_geometric.utils import add_self_loops, degree
class GCNConv(MessagePassing):
def __init__(self, in_channels, out_channels):
super(GCNConv, self).__init__(aggr='add') # "Add" aggregation (Step 5).
self.lin = torch.nn.Linear(in_channels, out_channels)
def forward(self, x, edge_index):
# x has shape [N, in_channels]
# edge_index has shape [2, E]
# Step 1: Add self-loops to the adjacency matrix.
edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))
# Step 2: Linearly transform node feature matrix.
x = self.lin(x)
# Step 3: Compute normalization.
row, col = edge_index
deg = degree(col, x.size(0), dtype=x.dtype)
deg_inv_sqrt = deg.pow(-0.5)
norm = deg_inv_sqrt[row] * deg_inv_sqrt[col]
# Step 4-5: Start propagating messages.
return self.propagate(edge_index, x=x, norm=norm)
def message(self, x_j, norm):
# x_j has shape [E, out_channels]
# Step 4: Normalize node features.
return norm.view(-1, 1) * x_j
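# Quick usage sketch (a toy graph I made up; not part of the original notebook):
# 3 nodes with 4 features each, and E directed edges stored as a [2, E] index tensor.
x = torch.randn(3, 4)
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)
conv = GCNConv(in_channels=4, out_channels=8)
out = conv(x, edge_index)  # -> node embeddings of shape [3, 8]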
###Output
_____no_output_____ |
demos/nei_demo.ipynb | ###Markdown
NEI (Noisy Expected Improvement) DemoYou can also look at the BoTorch implementation, but that requires a lot more understanding of code that involves PyTorch. So I tried to put a simple example together here.
###Code
import numpy as np
import qmcpy as qp
from scipy.linalg import solve_triangular, cho_solve, cho_factor
from scipy.stats import norm
import matplotlib.pyplot as plt
%matplotlib inline
lw = 3
ms = 8
###Output
_____no_output_____
###Markdown
We make some fake data and consider the sequential decision making problem of trying to optimize the function depicted below.
###Code
def yf(x):
return np.cos(10 * x) * np.exp(.2 * x) + np.exp(-5 * (x - .4) ** 2)
xplt = np.linspace(0, 1, 300)
yplt = yf(xplt)
x = np.array([.1, .2, .4, .7, .9])
y = yf(x)
v = np.array([.001, .05, .01, .1, .4])
plt.plot(xplt, yplt, linewidth=lw)
plt.plot(x, y, 'o', markersize=ms, color='orange')
plt.errorbar(x, y, yerr=2 * np.sqrt(v), marker='', linestyle='', color='orange', linewidth=3)
plt.title('Sample data with noise');
###Output
_____no_output_____
###Markdown
We can build a zero-mean Gaussian process model for this data, observed under noise. Below are plots of the posterior distribution. We use the Gaussian (square exponential) kernel as our prior covariance belief.This kernel has a shape parameter and the Gaussian process has a global variance, both of which are chosen fixed here for simplicity. The `fudge_factor` is added here to prevent ill-conditioning of the large matrix.Notice the higher uncertainty in the posterior in locations where the observed noise is greater.
###Code
def gaussian_kernel(x, z, e, pv):
return pv * np.exp(-e ** 2 * (x[:, None] - z[None, :]) ** 2)
shape_parameter = 4.1
process_variance = .9
fudge_factor = 1e-10
kernel_prior_data = gaussian_kernel(x, x, shape_parameter, process_variance)
kernel_cross_matrix = gaussian_kernel(xplt, x, shape_parameter, process_variance)
kernel_prior_plot = gaussian_kernel(xplt, xplt, shape_parameter, process_variance)
prior_cholesky = np.linalg.cholesky(kernel_prior_data + np.diag(v))
partial_cardinal_functions = solve_triangular(prior_cholesky, kernel_cross_matrix.T, lower=True)
posterior_covariance = kernel_prior_plot - np.dot(partial_cardinal_functions.T, partial_cardinal_functions)
posterior_cholesky = np.linalg.cholesky(posterior_covariance + fudge_factor * np.eye(len(xplt)))
full_cardinal_functions = solve_triangular(prior_cholesky.T, partial_cardinal_functions, lower=False)
posterior_mean = np.dot(full_cardinal_functions.T, y)
num_posterior_draws = 123
normal_draws = np.random.normal(size=(num_posterior_draws, len(xplt)))
posterior_draws = posterior_mean[:, None] + np.dot(posterior_cholesky, normal_draws.T)
plt.plot(xplt, posterior_draws, alpha=.1, color='r')
plt.plot(xplt, posterior_mean, color='k', linewidth=lw)
plt.errorbar(x, y, yerr=2 * np.sqrt(v), marker='', linestyle='', color='orange', linewidth=3);
###Output
_____no_output_____
###Markdown
First we take a look at the EI quantity by itself which, despite having a closed form, we will approximate using basic Monte Carlo below. The closed form is very preferable, but not applicable in all situations.Expected improvement is just the expectation (under the posterior distribution) of the improvement beyond the current best value. If we were trying to maximize this function that we are studying then improvement would be defined as$$I(x) = (Y_x|\mathcal{D} - y^*)_+,$$the positive part of the gap between the model $Y_x|\mathcal{D}$ and the current highest value $y^*=\max\{y_1,\ldots,y_N\}$. Since $Y_x|\mathcal{D}$ is a random variable (normally distributed because we have a Gaussian process model), we generally study the expected value of this, which is plotted below. Written as an integral, this would look like$$\mathrm{EI}(x) = \int_{-\infty}^\infty (y - y^*)_+\, p_{Y_x|\mathcal{D}}(y)\; \text{d}y$$**NOTE**: This quantity is written for maximization here, but most of the literature is concerned with minimization. I can rewrite this if needed, but the math is essentially the same.This $EI$ quantity is referred to as an _acquisition function_, a function which defines the utility associated with sampling at a given point. For each acquisition function, there is a balance between exploration and exploitation (as is the focus of most topics involving sequential decision making under uncertainty).
###Code
improvement_draws = np.fmax(posterior_draws - max(y), 0)
plt.plot(xplt, improvement_draws, alpha=.1, color='#96CA4F', linewidth=lw)
plt.ylabel('improvement draws')
ax2 = plt.gca().twinx()
ax2.plot(xplt, np.mean(improvement_draws, axis=1), color='#A23D97', linewidth=lw)
ax2.set_ylabel('expected improvement');
###Output
_____no_output_____
###Markdown
The NEI quantity is then computed using multiple EI computations (each using a different posterior GP draw) computed without noise. In this computation below, I will use the closed form of EI, to speed up the computation -- it is possible to execute the same strategy as above, though.This computation is vectorized so as to compute for multiple $x$ locations at the same time ... the algorithm from the [Facebook paper](https://projecteuclid.org/download/pdfview_1/euclid.ba/1533866666) is written for only a single location. We are omitting the constraints aspect of their paper because the problem can be considered without that. To define the integral, though, we need some more definitions/notation.First, we need to define $\mathrm{EI}(x;\mathbf{y}, \mathcal{X}, \boldsymbol{\epsilon})$ to be the expected improvement at a location $x$, given the $N$ values stored in the vector $\mathbf{y}$ having been evaluated with noise $\boldsymbol{\epsilon}$ at the points $\mathcal{X}$,$$\mathbf{y}=\begin{pmatrix}y_1\\\vdots\\y_N\end{pmatrix},\qquad \mathcal{X}=\{\mathbf{x}_1,\ldots,\mathbf{x}_N\},\qquad \boldsymbol{\epsilon}=\begin{pmatrix}\epsilon_1\\\vdots\\\epsilon_N\end{pmatrix}.$$The noise is assumed to be $\epsilon_i\sim\mathcal{N}(0, \sigma^2)$ for some fixed $\sigma^2$. The noise need not actually be homoscedastic, but it is a standard assumption.We encapsulate this information in $\mathcal{D}=\{\mathbf{y},\mathcal{X},\boldsymbol{\epsilon}\}$. This is omitted from the earlier notation, because the data would be fixed.The point of NEI though is to deal with **noisy** observed values (EI, itself, is notorious for not dealing with noisy data very well). It does this by considering a variety of posterior draws at the locations in $\mathcal{X}$. These have distribution$$Y_{\mathcal{X}}|\mathcal{D}=Y_{\mathcal{X}}|\mathbf{y}, \mathcal{X}, \boldsymbol{\epsilon}\sim \mathcal{N}\left(\mathsf{K}(\mathsf{K}+\mathsf{E})^{-1}\mathbf{y}, \mathsf{K} - \mathsf{K}(\mathsf{K}+\mathsf{E})^{-1}\mathsf{K}\right),$$where$$\mathbf{k}(x)=\begin{pmatrix}K(x,x_1)\\\vdots\\K(x,x_N)\end{pmatrix},\qquad\mathsf{K}=\begin{pmatrix}K(x_1,x_1)&\cdots&K(x_1, x_N)\\&\vdots&\\K(x_N,x_1)&\cdots&K(x_N, x_N)\end{pmatrix}=\begin{pmatrix}\mathbf{k}(x_1)^T\\\vdots\\\mathbf{k}(x_N)^T\end{pmatrix},\qquad\mathsf{E}=\begin{pmatrix}\epsilon_1&&\\&\ddots&\\&&\epsilon_N\end{pmatrix}$$In practice, unless noise has actually been measured at each point, it would be common to simply plug in $\epsilon_1=\ldots=\epsilon_N=\sigma^2$. The term `noisy_predictions_at_data` below is drawn from this distribution (though in a standard iid fashion, not a more awesome QMC fashion).The EI integral, although approximated earlier using Monte Carlo, can actually be written in closed form. 
We do so below to also solidify our newer notation:$$\mathrm{EI}(x;\mathbf{y}, \mathcal{X}, \boldsymbol{\epsilon}) = \int_{-\infty}^\infty (y - y^*)_+\, p_{Y_x|\mathbf{y}, \mathcal{X}, \boldsymbol{\epsilon}}(y)\; \text{d}y = s(z\Phi(z)+\phi(z))$$where $\phi$ and $\Phi$ are the standard normal pdf and cdf, and$$\mu=\mathbf{k}(x)^T(\mathsf{K}+\mathsf{E})^{-1}\mathbf{y},\qquad s^2 = K(x, x)-\mathbf{k}(x)^T(\mathsf{K}+\mathsf{E})^{-1}\mathbf{k}(x),\qquad z=(\mu - y^*)/s.$$It is very important to remember that these quantities are functions of $\mathbf{y},\mathcal{X},\boldsymbol{\epsilon}$ despite the absence of those quantities in the notation.The goal of the NEI integral is to simulate many possible random realizations of what could actually be the truth at the locations $\mathcal{X}$ and then run a *noiseless* EI computation over each of those realizations. The average of these outcomes is the NEI quantity. This would look like:$$\mathrm{NEI}(x) = \int_{\mathbf{f}\in\mathbb{R}^N} \mathrm{EI}(x;\mathbf{f}, \mathcal{X}, 0)\, p_{Y_{\mathcal{X}}|\mathbf{y},\mathcal{X},\boldsymbol{\epsilon}}(\mathbf{f})\;\text{d}\mathbf{f}$$**NOTE**: There are ways to do this computation in a more vectorized fashion, so it would more likely be a loop involving chunks of MC elements at a time. Just so you know.
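To make the closed form above concrete, here is a small, self-contained restatement of it as a helper function (same notation as above; it packages the formula and is equivalent to the inline computation in the next cell).
```
from scipy.stats import norm

def closed_form_ei(mu, s, y_best):
    # EI = s * (z * Phi(z) + phi(z)), with z = (mu - y_best) / s
    z = (mu - y_best) / s
    return s * (z * norm.cdf(z) + norm.pdf(z))
```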
###Code
num_draws_at_data = 109
# These draws are done through QMC in the FB paper
normal_draws_at_data = np.random.normal(size=(num_draws_at_data, len(x)))
partial_cardinal_functions_at_data = solve_triangular(prior_cholesky, kernel_prior_data.T, lower=True)
posterior_covariance_at_data = kernel_prior_data - np.dot(partial_cardinal_functions_at_data.T, partial_cardinal_functions_at_data)
posterior_cholesky_at_data = np.linalg.cholesky(posterior_covariance_at_data + fudge_factor * np.eye(len(x)))
noisy_predictions_at_data = y[:, None] + np.dot(posterior_cholesky_at_data, normal_draws_at_data.T)
prior_cholesky_noiseless = np.linalg.cholesky(kernel_prior_data)
partial_cardinal_functions = solve_triangular(prior_cholesky_noiseless, kernel_cross_matrix.T, lower=True)
full_cardinal_functions = solve_triangular(prior_cholesky.T, partial_cardinal_functions, lower=False)
pointwise_sd = np.sqrt(np.fmax(process_variance - np.sum(partial_cardinal_functions ** 2, axis=0), 1e-100))
all_noiseless_eis = []
for draw in noisy_predictions_at_data.T:
posterior_mean = np.dot(full_cardinal_functions.T, draw)
z = (posterior_mean - max(y)) / pointwise_sd
ei = pointwise_sd * (z * norm.cdf(z) + norm.pdf(z))
all_noiseless_eis.append(ei)
all_noiseless_eis = np.array(all_noiseless_eis)
plt.plot(xplt, all_noiseless_eis.T, alpha=.1, color='#96CA4F', linewidth=lw)
plt.ylabel('expected improvement draws', color='#96CA4F')
ax2 = plt.gca().twinx()
ax2.plot(xplt, np.mean(all_noiseless_eis, axis=0), color='#A23D97', linewidth=lw)
ax2.set_ylabel('noisy expected improvement', color='#A23D97');
###Output
_____no_output_____
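###Markdown
The draws of `noisy_predictions_at_data` above are iid normal. As a small aside (not part of the original algorithm), the sketch below shows how those same draws could instead come from a randomized lattice, reusing the exact `qp.Lattice` / `qp.Gaussian(...).gen_samples(n_min, n_max)` calls that already appear in the QEI cell later in this notebook. The only new assumption is the sample size of 128 (a power of 2, which lattice rules prefer over 109).
###Code
import numpy as np
import qmcpy as qp  # also imported earlier in the notebook; repeated here so the cell stands alone

num_qmc_draws_at_data = 128  # power of 2 instead of 109
distrib_at_data = qp.Lattice(dimension=len(x), randomize=True)
# standard normal samples generated from the randomized lattice, mirroring the QEI cell below
qmc_normal_draws_at_data = qp.Gaussian(distrib_at_data).gen_samples(n_min=0, n_max=num_qmc_draws_at_data)
# these could stand in for normal_draws_at_data in the NEI loop above
qmc_noisy_predictions_at_data = y[:, None] + np.dot(posterior_cholesky_at_data, qmc_normal_draws_at_data.T)
###Output
_____no_output_____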
###Markdown
Goal

What would be really great would be if we could compute integrals like the EI integral or the NEI integral using QMC. If there are opportunities to use the latest research to adaptively study tolerance and truncate, that would be absolutely amazing. I put the NEI example up first because the FB crew has already done a great job showing how QMC can play a role. But, as you can see, NEI is more complicated than EI, and also not yet as popular in the community (though that may change).

Bonus stuff

Even the EI integral, which does have a closed form, might better be considered in a QMC fashion because of interesting use cases. I'm going to reconsider the same problem from above, but here I am not looking to maximize the function -- I want to find the "level set" associated with the value $y=1$. Below you can see how a different outcome might look. In this case, the quantity of relevance is not exactly an integral, but it is a function of the posterior mean and standard deviation, which might need to be estimated through an integral (rather than the closed form, which we do have for a GP situation).
###Code
fig, axes = plt.subplots(1, 3, figsize=(14, 4))
ax = axes[0]
ax.plot(xplt, yplt, linewidth=lw)
ax.plot(x, y, 'o', markersize=ms, color='orange')
ax.errorbar(x, y, yerr=2 * np.sqrt(v), marker='', linestyle='', color='orange', linewidth=3)
ax.set_title('Sample data with noise')
ax.set_ylim(-2.4, 2.4)
ax = axes[1]
ax.plot(xplt, posterior_draws, alpha=.1, color='r')
ax.plot(xplt, posterior_mean, color='k', linewidth=lw)
ax.set_title('Posterior draws')
ax.set_ylim(-2.4, 2.4)
ax = axes[2]
posterior_mean_distance_from_1 = np.mean(np.abs(posterior_draws - 1), axis=1)
posterior_standard_deviation = np.std(posterior_draws, axis=1)
level_set_expected_improvement = norm.cdf(-posterior_mean_distance_from_1 / posterior_standard_deviation)
ax.plot(xplt, level_set_expected_improvement, color='#A23D97', linewidth=lw)
ax.set_title('level set expected improvement')
plt.tight_layout();
###Output
_____no_output_____
###Markdown
Computation of the QEI quantity using `qmcpy`

NEI is an important quantity, but there are other quantities as well which could be considered relevant demonstrations of higher dimensional integrals. One such quantity is a computation involving $q$ "next points" to sample in a BO process; in the standard formulation this quantity might involve just $q=1$, but $q>1$ is also of interest for batched evaluation in parallel. This quantity is defined as

$$\mathrm{EI}_q(x_1, \ldots, x_q;\mathbf{y}, \mathcal{X}, \boldsymbol{\epsilon}) = \int_{\mathbb{R}^q} \max_{1\leq i\leq q}\left[{(y_i - y^*)_+}\right]\, p_{Y_{x_1,\ldots, x_q}|\mathbf{y}, \mathcal{X}, \boldsymbol{\epsilon}}(y_1, \ldots, y_q)\; \text{d}y_1\cdots\text{d}y_q.$$

The example I am considering here uses $q=5$, but this quantity could be made larger. Each of these QEI computations (done in a vectorized fashion in production) would be needed in an optimization loop (likely powered by CMA-ES or some other high dimensional nonconvex optimization tool). This optimization problem would take place in a $qd$ dimensional space, which is one aspect which usually prevents $q$ from being too large. Note that some of this will look much more confusing in $d>1$, but it is written here in a simplified version.
###Code
q = 5 # number of "next points" to be considered simultaneously
next_x = np.array([0.158, 0.416, 0.718, 0.935, 0.465])
def compute_qei(next_x, mc_strat, num_posterior_draws):
q = len(next_x)
kernel_prior_data = gaussian_kernel(x, x, shape_parameter, process_variance)
kernel_cross_matrix = gaussian_kernel(next_x, x, shape_parameter, process_variance)
kernel_prior_plot = gaussian_kernel(next_x, next_x, shape_parameter, process_variance)
prior_cholesky = np.linalg.cholesky(kernel_prior_data + np.diag(v))
partial_cardinal_functions = solve_triangular(prior_cholesky, kernel_cross_matrix.T, lower=True)
posterior_covariance = kernel_prior_plot - np.dot(partial_cardinal_functions.T, partial_cardinal_functions)
posterior_cholesky = np.linalg.cholesky(posterior_covariance + fudge_factor * np.eye(q))
full_cardinal_functions = solve_triangular(prior_cholesky.T, partial_cardinal_functions, lower=False)
posterior_mean = np.dot(full_cardinal_functions.T, y)
if mc_strat == 'numpy':
normal_draws = np.random.normal(size=(num_posterior_draws, q))
elif mc_strat == 'lattice':
distrib = qp.Lattice(dimension=q, randomize=True)
normal_draws = qp.Gaussian(distrib).gen_samples(n_min=0, n_max=num_posterior_draws)
else:
distrib = qp.IIDStdGaussian(dimension=q)
normal_draws = qp.Gaussian(distrib).gen_samples(n=num_posterior_draws)
posterior_draws = posterior_mean[:, None] + np.dot(posterior_cholesky, normal_draws.T)
return np.mean(np.fmax(np.max(posterior_draws[:, :num_posterior_draws] - max(y), axis=0), 0))
num_posterior_draws_to_test = 2 ** np.arange(4, 17)
trials = 10
vals = {}
for mc_strat in ('numpy', 'iid', 'lattice'):
vals[mc_strat] = []
for num_posterior_draws in num_posterior_draws_to_test:
qei_estimate = 0.
for trial in range(trials):
qei_estimate += compute_qei(next_x, mc_strat, num_posterior_draws)
avg_qei_estimate = qei_estimate/float(trials)
vals[mc_strat].append(avg_qei_estimate)
vals[mc_strat] = np.array(vals[mc_strat])
#reference_answer = compute_qei(next_x, 'lattice', 2 ** 7 * max(num_posterior_draws_to_test))
reference_answer = compute_qei(next_x, 'lattice', 2 ** 20)
for name, results in vals.items():
plt.loglog(num_posterior_draws_to_test, abs(results - reference_answer), label=name)
plt.loglog(num_posterior_draws_to_test, .05 * num_posterior_draws_to_test ** -.5, '--k', label='$O(N^{-1/2})$')
plt.loglog(num_posterior_draws_to_test, .3 * num_posterior_draws_to_test ** -1.0, '-.k', label='$O(N^{-1})$')
plt.xlabel('N - number of points')
plt.ylabel('Accuracy')
plt.legend(loc='lower left');
###Output
_____no_output_____
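###Markdown
To make the "optimization loop" remark above concrete, here is a minimal sketch that maximizes the QEI estimate over candidate batches using plain random search. This is only a stand-in for CMA-ES or another serious nonconvex optimizer, and the number of restarts and the sample budget below are arbitrary choices for illustration.
###Code
np.random.seed(0)  # reproducibility of the random search
best_batch, best_qei = None, -np.inf
for _ in range(200):  # 200 random candidate batches; a production loop would use CMA-ES or similar
    candidate_batch = np.random.uniform(0, 1, size=q)  # q candidate locations in [0, 1]
    candidate_qei = compute_qei(candidate_batch, 'lattice', 2 ** 10)
    if candidate_qei > best_qei:
        best_batch, best_qei = candidate_batch, candidate_qei
print(np.sort(best_batch), best_qei)
###Output
_____no_output_____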
###Markdown
NEI (Noisy Expected Improvement) Demo

You can also look at the BoTorch implementation, but that requires a lot more understanding of code that involves PyTorch. So I tried to put a simple example together here.
###Code
import numpy as np
import qmcpy as qp
from scipy.linalg import solve_triangular, cho_solve, cho_factor
from scipy.stats import norm
import matplotlib.pyplot as plt
%matplotlib inline
lw = 3
ms = 8
###Output
_____no_output_____
###Markdown
We make some fake data and consider the sequential decision making problem of trying to optimize the function depicted below.
###Code
def yf(x):
return np.cos(10 * x) * np.exp(.2 * x) + np.exp(-5 * (x - .4) ** 2)
xplt = np.linspace(0, 1, 300)
yplt = yf(xplt)
x = np.array([.1, .2, .4, .7, .9])
y = yf(x)
v = np.array([.001, .05, .01, .1, .4])
plt.plot(xplt, yplt, linewidth=lw)
plt.plot(x, y, 'o', markersize=ms, color='orange')
plt.errorbar(x, y, yerr=2 * np.sqrt(v), marker='', linestyle='', color='orange', linewidth=3)
plt.title('Sample data with noise');
###Output
_____no_output_____
###Markdown
We can fit a zero-mean Gaussian process model to this data, observed under noise. Below are plots of the posterior distribution. We use the Gaussian (squared exponential) kernel as our prior covariance belief. This kernel has a shape parameter and the Gaussian process has a global variance, both of which are held fixed here for simplicity. The `fudge_factor` is added to prevent ill-conditioning of the large covariance matrix. Notice the higher uncertainty in the posterior in locations where the observed noise is greater.
###Code
def gaussian_kernel(x, z, e, pv):
return pv * np.exp(-e ** 2 * (x[:, None] - z[None, :]) ** 2)
shape_parameter = 4.1
process_variance = .9
fudge_factor = 1e-10
kernel_prior_data = gaussian_kernel(x, x, shape_parameter, process_variance)
kernel_cross_matrix = gaussian_kernel(xplt, x, shape_parameter, process_variance)
kernel_prior_plot = gaussian_kernel(xplt, xplt, shape_parameter, process_variance)
prior_cholesky = np.linalg.cholesky(kernel_prior_data + np.diag(v))
partial_cardinal_functions = solve_triangular(prior_cholesky, kernel_cross_matrix.T, lower=True)
posterior_covariance = kernel_prior_plot - np.dot(partial_cardinal_functions.T, partial_cardinal_functions)
posterior_cholesky = np.linalg.cholesky(posterior_covariance + fudge_factor * np.eye(len(xplt)))
full_cardinal_functions = solve_triangular(prior_cholesky.T, partial_cardinal_functions, lower=False)
posterior_mean = np.dot(full_cardinal_functions.T, y)
num_posterior_draws = 123
normal_draws = np.random.normal(size=(num_posterior_draws, len(xplt)))
posterior_draws = posterior_mean[:, None] + np.dot(posterior_cholesky, normal_draws.T)
plt.plot(xplt, posterior_draws, alpha=.1, color='r')
plt.plot(xplt, posterior_mean, color='k', linewidth=lw)
plt.errorbar(x, y, yerr=2 * np.sqrt(v), marker='', linestyle='', color='orange', linewidth=3);
###Output
_____no_output_____
###Markdown
First we take a look at the EI quantity by itself, which, despite having a closed form, we will approximate using basic Monte Carlo below. The closed form is certainly preferable, but it is not applicable in all situations.

Expected improvement is just the expectation (under the posterior distribution) of the improvement beyond the current best value. If we were trying to maximize the function that we are studying, then improvement would be defined as

$$I(x) = (Y_x|\mathcal{D} - y^*)_+,$$

the positive part of the gap between the model $Y_x|\mathcal{D}$ and the current highest value $y^*=\max\{y_1,\ldots,y_N\}$. Since $Y_x|\mathcal{D}$ is a random variable (normally distributed because we have a Gaussian process model), we generally study the expected value of this, which is plotted below. Written as an integral, this would look like

$$\mathrm{EI}(x) = \int_{-\infty}^\infty (y - y^*)_+\, p_{Y_x|\mathcal{D}}(y)\; \text{d}y.$$

**NOTE**: This quantity is written for maximization here, but most of the literature is concerned with minimization. I can rewrite this if needed, but the math is essentially the same.

This $\mathrm{EI}$ quantity is referred to as an _acquisition function_, a function which defines the utility associated with sampling at a given point. For each acquisition function, there is a balance between exploration and exploitation (as is the focus of most topics involving sequential decision making under uncertainty).
###Code
improvement_draws = np.fmax(posterior_draws - max(y), 0)
plt.plot(xplt, improvement_draws, alpha=.1, color='#96CA4F', linewidth=lw)
plt.ylabel('improvement draws')
ax2 = plt.gca().twinx()
ax2.plot(xplt, np.mean(improvement_draws, axis=1), color='#A23D97', linewidth=lw)
ax2.set_ylabel('expected improvement');
###Output
_____no_output_____
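###Markdown
As a quick sanity check (an addition to the original flow), the cell below evaluates the closed form $s\,(z\Phi(z)+\phi(z))$, quoted in the next markdown cell, directly on the plotting grid using the posterior mean and variance computed earlier. It should agree with the Monte Carlo average above up to sampling error.
###Code
# closed-form EI on the plotting grid, compared against the Monte Carlo estimate above
pointwise_sd_plot = np.sqrt(np.fmax(np.diag(posterior_covariance), 1e-100))  # posterior sd at each plot point
z_plot = (posterior_mean - max(y)) / pointwise_sd_plot
closed_form_ei = pointwise_sd_plot * (z_plot * norm.cdf(z_plot) + norm.pdf(z_plot))
plt.plot(xplt, np.mean(improvement_draws, axis=1), color='#96CA4F', linewidth=lw, label='Monte Carlo estimate')
plt.plot(xplt, closed_form_ei, '--', color='#A23D97', linewidth=lw, label='closed form')
plt.legend();
###Output
_____no_output_____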
###Markdown
The NEI quantity is then computed using multiple EI computations (each using a different posterior GP draw) computed without noise. In the computation below, I will use the closed form of EI to speed up the computation -- it is possible to execute the same strategy as above, though. This computation is vectorized so as to compute for multiple $x$ locations at the same time ... the algorithm from the [Facebook paper](https://projecteuclid.org/download/pdfview_1/euclid.ba/1533866666) is written for only a single location. We are omitting the constraints aspect of their paper because the problem can be considered without that.

To define the integral, though, we need some more definitions/notation. First, we need to define $\mathrm{EI}(x;\mathbf{y}, \mathcal{X}, \boldsymbol{\epsilon})$ to be the expected improvement at a location $x$, given the $N$ values stored in the vector $\mathbf{y}$ having been evaluated with noise $\boldsymbol{\epsilon}$ at the points $\mathcal{X}$,

$$\mathbf{y}=\begin{pmatrix}y_1\\\vdots\\y_N\end{pmatrix},\qquad \mathcal{X}=\{\mathbf{x}_1,\ldots,\mathbf{x}_N\},\qquad \boldsymbol{\epsilon}=\begin{pmatrix}\epsilon_1\\\vdots\\\epsilon_N\end{pmatrix}.$$

The noise is assumed to be $\epsilon_i\sim\mathcal{N}(0, \sigma^2)$ for some fixed $\sigma^2$. The noise need not actually be homoscedastic, but it is a standard assumption. We encapsulate this information in $\mathcal{D}=\{\mathbf{y},\mathcal{X},\boldsymbol{\epsilon}\}$; this was omitted from the earlier notation because the data would be fixed.

The point of NEI, though, is to deal with **noisy** observed values (EI, itself, is notorious for not dealing with noisy data very well). It does this by considering a variety of posterior draws at the locations in $\mathcal{X}$. These have distribution

$$Y_{\mathcal{X}}|\mathcal{D}=Y_{\mathcal{X}}|\mathbf{y}, \mathcal{X}, \boldsymbol{\epsilon}\sim \mathcal{N}\left(\mathsf{K}(\mathsf{K}+\mathsf{E})^{-1}\mathbf{y}, \mathsf{K} - \mathsf{K}(\mathsf{K}+\mathsf{E})^{-1}\mathsf{K}\right),$$

where

$$\mathbf{k}(x)=\begin{pmatrix}K(x,x_1)\\\vdots\\K(x,x_N)\end{pmatrix},\qquad\mathsf{K}=\begin{pmatrix}K(x_1,x_1)&\cdots&K(x_1, x_N)\\&\vdots&\\K(x_N,x_1)&\cdots&K(x_N, x_N)\end{pmatrix}=\begin{pmatrix}\mathbf{k}(x_1)^T\\\vdots\\\mathbf{k}(x_N)^T\end{pmatrix},\qquad\mathsf{E}=\begin{pmatrix}\epsilon_1&&\\&\ddots&\\&&\epsilon_N\end{pmatrix}.$$

In practice, unless noise has actually been measured at each point, it would be common to simply plug in $\epsilon_1=\ldots=\epsilon_N=\sigma^2$. The term `noisy_predictions_at_data` below is drawn from this distribution (though in a standard iid fashion, not a more awesome QMC fashion).

The EI integral, although approximated earlier using Monte Carlo, can actually be written in closed form. We do so below to also solidify our newer notation:

$$\mathrm{EI}(x;\mathbf{y}, \mathcal{X}, \boldsymbol{\epsilon}) = \int_{-\infty}^\infty (y - y^*)_+\, p_{Y_x|\mathbf{y}, \mathcal{X}, \boldsymbol{\epsilon}}(y)\; \text{d}y = s(z\Phi(z)+\phi(z)),$$

where $\phi$ and $\Phi$ are the standard normal pdf and cdf, and

$$\mu=\mathbf{k}(x)^T(\mathsf{K}+\mathsf{E})^{-1}\mathbf{y},\qquad s^2 = K(x, x)-\mathbf{k}(x)^T(\mathsf{K}+\mathsf{E})^{-1}\mathbf{k}(x),\qquad z=(\mu - y^*)/s.$$

It is very important to remember that these quantities are functions of $\mathbf{y},\mathcal{X},\boldsymbol{\epsilon}$ despite the absence of those quantities in the notation.

The goal of the NEI integral is to simulate many possible random realizations of what could actually be the truth at the locations $\mathcal{X}$ and then run a *noiseless* EI computation over each of those realizations. The average of these outcomes is the NEI quantity. This would look like:

$$\mathrm{NEI}(x) = \int_{\mathbf{f}\in\mathbb{R}^N} \mathrm{EI}(x;\mathbf{f}, \mathcal{X}, 0)\, p_{Y_{\mathcal{X}}|\mathbf{y},\mathcal{X},\boldsymbol{\epsilon}}(\mathbf{f})\;\text{d}\mathbf{f}.$$

**NOTE**: There are ways to do this computation in a more vectorized fashion, so it would more likely be a loop involving chunks of MC elements at a time. Just so you know.
###Code
num_draws_at_data = 109
# These draws are done through QMC in the FB paper
normal_draws_at_data = np.random.normal(size=(num_draws_at_data, len(x)))
partial_cardinal_functions_at_data = solve_triangular(prior_cholesky, kernel_prior_data.T, lower=True)
posterior_covariance_at_data = kernel_prior_data - np.dot(partial_cardinal_functions_at_data.T, partial_cardinal_functions_at_data)
posterior_cholesky_at_data = np.linalg.cholesky(posterior_covariance_at_data + fudge_factor * np.eye(len(x)))
noisy_predictions_at_data = y[:, None] + np.dot(posterior_cholesky_at_data, normal_draws_at_data.T)
prior_cholesky_noiseless = np.linalg.cholesky(kernel_prior_data)
partial_cardinal_functions = solve_triangular(prior_cholesky_noiseless, kernel_cross_matrix.T, lower=True)
full_cardinal_functions = solve_triangular(prior_cholesky.T, partial_cardinal_functions, lower=False)
pointwise_sd = np.sqrt(np.fmax(process_variance - np.sum(partial_cardinal_functions ** 2, axis=0), 1e-100))
all_noiseless_eis = []
for draw in noisy_predictions_at_data.T:
posterior_mean = np.dot(full_cardinal_functions.T, draw)
z = (posterior_mean - max(y)) / pointwise_sd
ei = pointwise_sd * (z * norm.cdf(z) + norm.pdf(z))
all_noiseless_eis.append(ei)
all_noiseless_eis = np.array(all_noiseless_eis)
plt.plot(xplt, all_noiseless_eis.T, alpha=.1, color='#96CA4F', linewidth=lw)
plt.ylabel('expected improvement draws', color='#96CA4F')
ax2 = plt.gca().twinx()
ax2.plot(xplt, np.mean(all_noiseless_eis, axis=0), color='#A23D97', linewidth=lw)
ax2.set_ylabel('noisy expected improvement', color='#A23D97');
###Output
_____no_output_____
###Markdown
Goal

What would be really great would be if we could compute integrals like the EI integral or the NEI integral using QMC. If there are opportunities to use the latest research to adaptively study tolerance and truncate, that would be absolutely amazing. I put the NEI example up first because the FB crew has already done a great job showing how QMC can play a role. But, as you can see, NEI is more complicated than EI, and also not yet as popular in the community (though that may change).

Bonus stuff

Even the EI integral, which does have a closed form, might better be considered in a QMC fashion because of interesting use cases. I'm going to reconsider the same problem from above, but here I am not looking to maximize the function -- I want to find the "level set" associated with the value $y=1$. Below you can see how a different outcome might look. In this case, the quantity of relevance is not exactly an integral, but it is a function of the posterior mean and standard deviation, which might need to be estimated through an integral (rather than the closed form, which we do have for a GP situation).
###Code
fig, axes = plt.subplots(1, 3, figsize=(14, 4))
ax = axes[0]
ax.plot(xplt, yplt, linewidth=lw)
ax.plot(x, y, 'o', markersize=ms, color='orange')
ax.errorbar(x, y, yerr=2 * np.sqrt(v), marker='', linestyle='', color='orange', linewidth=3)
ax.set_title('Sample data with noise')
ax.set_ylim(-2.4, 2.4)
ax = axes[1]
ax.plot(xplt, posterior_draws, alpha=.1, color='r')
ax.plot(xplt, posterior_mean, color='k', linewidth=lw)
ax.set_title('Posterior draws')
ax.set_ylim(-2.4, 2.4)
ax = axes[2]
posterior_mean_distance_from_1 = np.mean(np.abs(posterior_draws - 1), axis=1)
posterior_standard_deviation = np.std(posterior_draws, axis=1)
level_set_expected_improvement = norm.cdf(-posterior_mean_distance_from_1 / posterior_standard_deviation)
ax.plot(xplt, level_set_expected_improvement, color='#A23D97', linewidth=lw)
ax.set_title('level set expected improvement')
plt.tight_layout();
###Output
_____no_output_____
###Markdown
Computation of the QEI quantity using `qmcpy`

NEI is an important quantity, but there are other quantities as well which could be considered relevant demonstrations of higher dimensional integrals. One such quantity is a computation involving $q$ "next points" to sample in a BO process; in the standard formulation this quantity might involve just $q=1$, but $q>1$ is also of interest for batched evaluation in parallel. This quantity is defined as

$$\mathrm{EI}_q(x_1, \ldots, x_q;\mathbf{y}, \mathcal{X}, \boldsymbol{\epsilon}) = \int_{\mathbb{R}^q} \max_{1\leq i\leq q}\left[{(y_i - y^*)_+}\right]\, p_{Y_{x_1,\ldots, x_q}|\mathbf{y}, \mathcal{X}, \boldsymbol{\epsilon}}(y_1, \ldots, y_q)\; \text{d}y_1\cdots\text{d}y_q.$$

The example I am considering here uses $q=5$, but this quantity could be made larger. Each of these QEI computations (done in a vectorized fashion in production) would be needed in an optimization loop (likely powered by CMA-ES or some other high dimensional nonconvex optimization tool). This optimization problem would take place in a $qd$ dimensional space, which is one aspect which usually prevents $q$ from being too large. Note that some of this will look much more confusing in $d>1$, but it is written here in a simplified version.
###Code
q = 5 # number of "next points" to be considered simultaneously
next_x = np.array([0.158, 0.416, 0.718, 0.935, 0.465])
def compute_qei(next_x, mc_strat, num_posterior_draws):
q = len(next_x)
kernel_prior_data = gaussian_kernel(x, x, shape_parameter, process_variance)
kernel_cross_matrix = gaussian_kernel(next_x, x, shape_parameter, process_variance)
kernel_prior_plot = gaussian_kernel(next_x, next_x, shape_parameter, process_variance)
prior_cholesky = np.linalg.cholesky(kernel_prior_data + np.diag(v))
partial_cardinal_functions = solve_triangular(prior_cholesky, kernel_cross_matrix.T, lower=True)
posterior_covariance = kernel_prior_plot - np.dot(partial_cardinal_functions.T, partial_cardinal_functions)
posterior_cholesky = np.linalg.cholesky(posterior_covariance + fudge_factor * np.eye(q))
full_cardinal_functions = solve_triangular(prior_cholesky.T, partial_cardinal_functions, lower=False)
posterior_mean = np.dot(full_cardinal_functions.T, y)
if mc_strat == 'numpy':
normal_draws = np.random.normal(size=(num_posterior_draws, q))
elif mc_strat == 'lattice':
g = qp.Gaussian(qp.Lattice(dimension=q, randomize=True))
normal_draws = g.gen_samples(n=num_posterior_draws)
else:
g = qp.Gaussian(qp.IIDStdUniform(dimension=q))
normal_draws = g.gen_samples(n = num_posterior_draws)
posterior_draws = posterior_mean[:, None] + np.dot(posterior_cholesky, normal_draws.T)
return np.mean(np.fmax(np.max(posterior_draws[:, :num_posterior_draws] - max(y), axis=0), 0))
num_posterior_draws_to_test = 2 ** np.arange(4, 17)
trials = 10
vals = {}
for mc_strat in ('numpy', 'iid', 'lattice'):
vals[mc_strat] = []
for num_posterior_draws in num_posterior_draws_to_test:
qei_estimate = 0.
for trial in range(trials):
qei_estimate += compute_qei(next_x, mc_strat, num_posterior_draws)
avg_qei_estimate = qei_estimate/float(trials)
vals[mc_strat].append(avg_qei_estimate)
vals[mc_strat] = np.array(vals[mc_strat])
#reference_answer = compute_qei(next_x, 'lattice', 2 ** 7 * max(num_posterior_draws_to_test))
reference_answer = compute_qei(next_x, 'lattice', 2 ** 20)
for name, results in vals.items():
plt.loglog(num_posterior_draws_to_test, abs(results - reference_answer), label=name)
plt.loglog(num_posterior_draws_to_test, .05 * num_posterior_draws_to_test ** -.5, '--k', label='$O(N^{-1/2})$')
plt.loglog(num_posterior_draws_to_test, .3 * num_posterior_draws_to_test ** -1.0, '-.k', label='$O(N^{-1})$')
plt.xlabel('N - number of points')
plt.ylabel('Accuracy')
plt.legend(loc='lower left');
###Output
_____no_output_____
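###Markdown
The Goal above asks for adaptive, tolerance-driven QMC. As a hedged sketch of what that could look like for the QEI integrand, the cell below wraps the posterior-draw computation for `next_x` in `qp.CustomFun` and hands it to the adaptive lattice cubature `qp.CubQMCLatticeG`. This follows the usage pattern from qmcpy's documentation, but class names, defaults, and signatures may differ between qmcpy versions, so treat it as illustrative rather than definitive.
###Code
# recompute the posterior for the q candidate points (compute_qei keeps these local to the function)
kernel_prior_data_q = gaussian_kernel(x, x, shape_parameter, process_variance)
kernel_cross_q = gaussian_kernel(next_x, x, shape_parameter, process_variance)
kernel_prior_next = gaussian_kernel(next_x, next_x, shape_parameter, process_variance)
prior_cholesky_q = np.linalg.cholesky(kernel_prior_data_q + np.diag(v))
partial_q = solve_triangular(prior_cholesky_q, kernel_cross_q.T, lower=True)
posterior_cov_q = kernel_prior_next - np.dot(partial_q.T, partial_q)
posterior_chol_q = np.linalg.cholesky(posterior_cov_q + fudge_factor * np.eye(q))
full_q = solve_triangular(prior_cholesky_q.T, partial_q, lower=False)
posterior_mean_q = np.dot(full_q.T, y)

def qei_integrand(z):
    # z has shape (n, q): standard normal samples supplied by the Gaussian true measure
    draws = posterior_mean_q[None, :] + np.dot(z, posterior_chol_q.T)
    return np.fmax(np.max(draws - max(y), axis=1), 0)

# assumed qmcpy interface (CustomFun + CubQMCLatticeG), as in the package documentation
qei_fun = qp.CustomFun(true_measure=qp.Gaussian(qp.Lattice(dimension=q, randomize=True)), g=qei_integrand)
qei_solution, qei_data = qp.CubQMCLatticeG(qei_fun, abs_tol=1e-4).integrate()
print(qei_solution)
###Output
_____no_output_____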
lect08_requests_BS/2021_DPO_8_2_intro_to_parsing.ipynb | ###Markdown
Parsing – continued
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import requests
url = 'http://books.toscrape.com/catalogue/page-1.html'
response = requests.get(url)
response
response.content[:1000]
from bs4 import BeautifulSoup
tree = BeautifulSoup(response.content, 'html.parser')
tree.html.head.title.text.strip()
books = tree.find_all('article', {'class' : 'product_pod'})
books[0]
books[0].find('p', {'class': 'price_color'}).text
books[0].p.get('class')[1]
books[0].a.get('href')
books[0].h3.a.get('title')
def get_page(p):
url = 'http://books.toscrape.com/catalogue/page-{}.html'.format(p)
response = requests.get(url)
tree = BeautifulSoup(response.content, 'html.parser')
books = tree.find_all('article', {'class' : 'product_pod'})
info = []
for book in books:
info.append({'price': book.find('p', {'class': 'price_color'}).text,
'href': book.h3.a.get('href'),
'title': book.h3.a.get('title'),
'rating': book.p.get('class')[1]})
return info
import time
infa = []
for p in range(1,51):
try:
infa.extend(get_page(p))
time.sleep(5)
except:
print(p)
import pandas as pd
df = pd.DataFrame(infa)
print(df.shape)
df.head()
df.to_csv('books_parsed.csv', index=False)
df.to_excel('books_parsed.xlsx', index=False)
df.info()
float(df.loc[0, 'price'][1:])
def get_price(price):
return float(price[1:])
df['price'] = df['price'].apply(get_price)
sns.histplot(data=df, x='price', bins=30);
def get_rating(r):
if r == "One":
return 1
elif r == "Two":
return 2
elif r == 'Three':
return 3
elif r == 'Four':
return 4
else:
return 5
df['rating'] = df['rating'].apply(get_rating)
df.rating.value_counts()
###Output
_____no_output_____
###Markdown
Parsing – assignment Following the same approach as in the seminar, collect data from https://quotes.toscrape.com/. You need to build a pandas DataFrame with the columns: * `quote` – the quote * `author` – the author * `tag_name` – 1 if the quote has that tag and 0 if not; there is one such column for every tag on the site. Finally, print all quotes that have the "truth" tag.
###Code
url = 'https://quotes.toscrape.com/page/1/'
response = requests.get(url)
response
tree = BeautifulSoup(response.content, 'html.parser')
quotes = tree.find_all('div', {'class' : 'quote'})
quotes[0]
quotes[0].span.text
quotes[0].find('small', {'class':'author'}).text
quotes[0].find_all('a', {'class': 'tag'})
quotes[0].find_all('a', {'class': 'tag'})[0].text
tags = []
for tag in quotes[0].find_all('a', {'class': 'tag'}):
tags.append(tag.text)
tags
info = []
for q in quotes:
tags = []
for tag in q.find_all('a', {'class': 'tag'}):
tags.append(tag.text)
info.append({'quote': q.span.text,
'author': q.find('small', {'class':'author'}).text,
'tags': tags})
info
response.content[:1000]
def get_page(p):
url = 'https://quotes.toscrape.com/page/{}/'.format(p)
response = requests.get(url)
tree = BeautifulSoup(response.content, 'html.parser')
quotes = tree.find_all('div', {'class' : 'quote'})
info = []
for q in quotes:
tags = []
for tag in q.find_all('a', {'class': 'tag'}):
tags.append(tag.text)
info.append({'quote': q.span.text,
'author': q.find('small', {'class':'author'}).text,
'tags': tags})
return info
info = []
for p in range(1,11):
info.extend(get_page(p))
len(info)
df = pd.DataFrame(info)
df.head()
tags_set = set(df['tags'].explode().values)
tags_set
for tag in tags_set:
df[tag] = [tag in df['tags'].loc[i] for i in df.index]
pd.set_option('display.max_columns', 500)
df.head()
df.columns
for q in df[df['truth']]['quote'].values:
print(q)
###Output
“The reason I talk to myself is because I’m the only one whose answers I accept.”
“A lie can travel half way around the world while the truth is putting on its shoes.”
“The truth." Dumbledore sighed. "It is a beautiful and terrible thing, and should therefore be treated with great caution.”
“Never tell the truth to people who are not worthy of it.”
###Markdown
Working with JSON files Create a pandas DataFrame with the following columns: * `username` * `changed_lines` – the number of changed lines * `commits` – the number of commits * `new_files` – the number of new files created by that developer. Sort the result by `username`. pandas
###Code
from pandas import json_normalize
import json
with open('commits.json', 'r') as f:
data = json.load(f)
data[0]
data[0]['username']
data = json_normalize(data, 'files', ['username', 'commit_time'])
data
import pandas as pd
data['commit_time'] = pd.to_datetime(data['commit_time'])
data.shape
data.info()
# commits
res = data.groupby('username')[['commit_time']].nunique().reset_index()
res
# changed_lines
data.groupby('username')['changed_lines'].sum().values
res['changed_lines'] = data.groupby('username')['changed_lines'].sum().values
agg = data.groupby(['name', 'username'])[['commit_time']].min().sort_values(['name', 'commit_time'])
agg
d = {}
for file in agg.reset_index()['name'].unique():
d[file] = agg.loc[file].iloc[0].name
d
pd.DataFrame([d]).T.reset_index().groupby(0).count()['index'].values
res['new_files'] = pd.DataFrame([d]).T.reset_index().groupby(0).count()['index'].values
res.sort_values('username', inplace=True)
res
###Output
_____no_output_____
###Markdown
Dictionaries
###Code
from collections import defaultdict
d = defaultdict()
for k in [1, 2, 3]:
d[k] = 1
with open('commits.json', 'r') as f:
data = json.load(f)
data = sorted(data, key=lambda x: pd.to_datetime(x['commit_time']))
data[0]
somedict = {}
print(somedict[3]) # KeyError
someddict = defaultdict(int)
print(someddict[3]) # print int(), thus 0
someddict
table = defaultdict(lambda: {'commits': 0, 'changed_lines':0, 'new_files':0})
new_files = set()
for commit in data:
user = commit['username']
table[user]['commits'] += 1
for file in commit['files']:
table[user]['changed_lines'] += file['changed_lines']
if file['name'] not in new_files:
new_files.add(file['name'])
table[user]['new_files'] += 1
table
fin = pd.DataFrame(table).T.reset_index().rename(columns={'index': 'username'}).sort_values('username')
fin
###Output
_____no_output_____ |
python/PCA_SVM_position copy.ipynb | ###Markdown
Test the model of PCA+logistic regression
###Code
from cnn_utils import *
from sklearn.preprocessing import LabelEncoder,OneHotEncoder
import matplotlib.pyplot as plt
import tensorflow as tf
from keras.models import Sequential,Model
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense, InputLayer, BatchNormalization, Dropout
from keras.optimizers import Adam
import seaborn as sns
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
df = create_dataframe('/../raw_data/dataset_071220.json',image_size=(25,50))
df.head()
###Output
_____no_output_____
###Markdown
1. Determine how many dimensions we need to capture 99% of the variance
###Code
first_idx = []
for i in range(100):
X_train, X_test, y_train, y_test = train_test_split(df.loc[:,df.columns!="y"],df.loc[:,df.columns=="y"], test_size=0.3)
eyeImage_train = np.stack(X_train['eyeImage'].to_numpy())
eyeImage_test = np.stack(X_test['eyeImage'].to_numpy())
# reshape to make it possible to feed into SVM
eyeImage_train = eyeImage_train.reshape(eyeImage_train.shape[0],eyeImage_train.shape[1]*eyeImage_train.shape[2]*eyeImage_train.shape[3])
eyeImage_test = eyeImage_test.reshape(eyeImage_test.shape[0],eyeImage_test.shape[1]*eyeImage_test.shape[2]*eyeImage_test.shape[3])
pca = PCA(n_components=150, whiten=True, random_state=42)
pca.fit(eyeImage_train)
first_idx.append(np.where(pca.explained_variance_ratio_.cumsum() > 0.99)[0][0])
np.mean(first_idx)
np.std(first_idx)
pca.transform(eyeImage_train).shape
###Output
_____no_output_____
###Markdown
Conclusion: across the 100 iterations, the mean of the first index at which the cumulative explained variance exceeds 99% is 117.38, with a std of 1.074988372030135. Thus, keeping 125 dimensions is sufficient in our case. 2. Experiment with adding left/right eye positions We do not need to scale the images before feeding them into PCA, since the previous experiments show the performance is similar either way. Find the best C for this task
###Code
X_train, X_test, y_train, y_test = train_test_split(df.loc[:,df.columns!="y"],df.loc[:,df.columns=="y"], test_size=0.01)
y_train_binary = create_binary_labels(y_train)
y_test_binary = create_binary_labels(y_test)
eyeImage_train = np.stack(X_train['eyeImage'].to_numpy())
eyeImage_test = np.stack(X_test['eyeImage'].to_numpy())
leftEye_train = np.stack(X_train['leftEye'].to_numpy())
leftEye_test = np.stack(X_test['leftEye'].to_numpy())
rightEye_train = np.stack(X_train['rightEye'].to_numpy())
rightEye_test = np.stack(X_test['rightEye'].to_numpy())
# reshape to make it possible to feed into SVM
eyeImage_train = eyeImage_train.reshape(eyeImage_train.shape[0],eyeImage_train.shape[1]*eyeImage_train.shape[2]*eyeImage_train.shape[3])
eyeImage_test = eyeImage_test.reshape(eyeImage_test.shape[0],eyeImage_test.shape[1]*eyeImage_test.shape[2]*eyeImage_test.shape[3])
# model part
pca = PCA(n_components=125, whiten=True)
scalar = StandardScaler()
logit = LogisticRegression(solver='saga')
# prepare the input training data
pca.fit(eyeImage_train)
eyeImage_train = pca.transform(eyeImage_train)
input_train = np.concatenate((eyeImage_train, leftEye_train, rightEye_train), axis=1)
# input_train =eyeImage_train
scalar.fit(input_train)
input_train = scalar.transform(input_train)
grid = GridSearchCV(estimator=logit, param_grid={'C':[i*0.01 for i in range(1,100)],
'penalty':['l1']})
grid.fit(input_train, y_train_binary)
# svc.fit(input_train, y_train_binary)
# train_score = svc.score(input_train, y_train_binary, sample_weight=None)
# # prepare the input testing data
# eyeImage_test = pca.transform(eyeImage_test)
# input_test = np.concatenate((eyeImage_test, leftEye_test, rightEye_test), axis=1)
# # input_test =eyeImage_test
# input_test = scalar.transform(input_test)
# test_score = svc.score(input_test, y_test_binary, sample_weight=None)
grid.best_score_
from scipy import stats
stats.mode(Cs)[0][0]
def test_robustness(df):
"""
    For testing the robustness of our PCA + L1 logistic regression model when we split the train and test randomly
"""
train_scores, test_scores = [], []
for i in range(50):
X_train, X_test, y_train, y_test = train_test_split(df.loc[:,df.columns!="y"],df.loc[:,df.columns=="y"], test_size=0.3)
eyeImage_train = np.stack(X_train['eyeImage'].to_numpy())
eyeImage_test = np.stack(X_test['eyeImage'].to_numpy())
y_train_binary = create_binary_labels(y_train)
y_test_binary = create_binary_labels(y_test)
# reshape to make it possible to feed into SVM
eyeImage_train = eyeImage_train.reshape(eyeImage_train.shape[0],eyeImage_train.shape[1]*eyeImage_train.shape[2]*eyeImage_train.shape[3])
eyeImage_test = eyeImage_test.reshape(eyeImage_test.shape[0],eyeImage_test.shape[1]*eyeImage_test.shape[2]*eyeImage_test.shape[3])
pca = PCA(n_components=125, whiten=True, random_state=42)
logit = LogisticRegression(penalty='l1', solver='saga',C=0.2)
        svm_model = make_pipeline(pca, logit)
svm_model.fit(eyeImage_train, y_train_binary)
# param_grid = {'svc__C': [100,50,10,5,1,0.5,0.1, 0.05, 0.01, 0.005, 0.001]}
# grid = GridSearchCV(svm_model, param_grid)
# %time grid.fit(eyeImage_train, y_train_binary)
# print(grid.best_params_)
# svm_model = grid.best_estimator_
train_score = svm_model.score(eyeImage_train, y_train_binary, sample_weight=None)
test_score = svm_model.score(eyeImage_test, y_test_binary, sample_weight=None)
train_scores.append(train_score)
test_scores.append(test_score)
svm_model = None
data = {'train_score':train_scores, 'test_score':test_scores}
return pd.DataFrame(data)
logit_df = test_robustness(df)
plt.figure()
logit_df.plot()
plt.legend(loc="best")
plt.title("Acc of PCA+logitL1 in 50 rand splits")
# plt.savefig("./results/PCA_SVM_eye_acc_0716.png")
logit_df.mean(axis=0)
logit_df.std(axis=0)
def pca_svm_positions_robustness(df):
train_scores, test_scores = [], []
for i in range(100):
X_train, X_test, y_train, y_test = train_test_split(df.loc[:,df.columns!="y"],df.loc[:,df.columns=="y"], test_size=0.2)
y_train_binary = create_binary_labels(y_train)
y_test_binary = create_binary_labels(y_test)
eyeImage_train = np.stack(X_train['eyeImage'].to_numpy())
eyeImage_test = np.stack(X_test['eyeImage'].to_numpy())
leftEye_train = np.stack(X_train['leftEye'].to_numpy())
leftEye_test = np.stack(X_test['leftEye'].to_numpy())
rightEye_train = np.stack(X_train['rightEye'].to_numpy())
rightEye_test = np.stack(X_test['rightEye'].to_numpy())
# reshape to make it possible to feed into SVM
eyeImage_train = eyeImage_train.reshape(eyeImage_train.shape[0],eyeImage_train.shape[1]*eyeImage_train.shape[2]*eyeImage_train.shape[3])
eyeImage_test = eyeImage_test.reshape(eyeImage_test.shape[0],eyeImage_test.shape[1]*eyeImage_test.shape[2]*eyeImage_test.shape[3])
# model part
pca = PCA(n_components=125, whiten=True)
scalar = StandardScaler()
svc = LogisticRegression(penalty='l1',solver='saga',C=0.2) #SVC(kernel='linear', C=0.1)
# prepare the input training data
pca.fit(eyeImage_train)
eyeImage_train = pca.transform(eyeImage_train)
input_train = np.concatenate((eyeImage_train, leftEye_train, rightEye_train), axis=1)
# input_train =eyeImage_train
scalar.fit(input_train)
input_train = scalar.transform(input_train)
svc.fit(input_train, y_train_binary)
train_score = svc.score(input_train, y_train_binary, sample_weight=None)
# prepare the input testing data
eyeImage_test = pca.transform(eyeImage_test)
input_test = np.concatenate((eyeImage_test, leftEye_test, rightEye_test), axis=1)
# input_test =eyeImage_test
input_test = scalar.transform(input_test)
test_score = svc.score(input_test, y_test_binary, sample_weight=None)
svc,pca, scalar = None, None, None
train_scores.append(train_score)
test_scores.append(test_score)
data = {'train_score':train_scores, 'test_score':test_scores}
return pd.DataFrame(data)
position_df = pca_svm_positions_robustness(df)
position_df.mean(axis=0)
position_df.std(axis=0)
plt.figure()
position_df.plot()
plt.legend(loc="best")
plt.title("Acc of PCA+SVM+eyePosition in 100 rand splits")
# plt.savefig("./results/PCA_SVM_eye_acc_0716.png")
###Output
_____no_output_____
###Markdown
Without using eye positions information
###Code
def pca_svm_positions_robustness(df):
train_scores, test_scores = [], []
for i in range(100):
X_train, X_test, y_train, y_test = train_test_split(df.loc[:,df.columns!="y"],df.loc[:,df.columns=="y"], test_size=0.2)
y_train_binary = create_binary_labels(y_train)
y_test_binary = create_binary_labels(y_test)
eyeImage_train = np.stack(X_train['eyeImage'].to_numpy())
eyeImage_test = np.stack(X_test['eyeImage'].to_numpy())
leftEye_train = np.stack(X_train['leftEye'].to_numpy())
leftEye_test = np.stack(X_test['leftEye'].to_numpy())
rightEye_train = np.stack(X_train['rightEye'].to_numpy())
rightEye_test = np.stack(X_test['rightEye'].to_numpy())
# reshape to make it possible to feed into SVM
eyeImage_train = eyeImage_train.reshape(eyeImage_train.shape[0],eyeImage_train.shape[1]*eyeImage_train.shape[2]*eyeImage_train.shape[3])
eyeImage_test = eyeImage_test.reshape(eyeImage_test.shape[0],eyeImage_test.shape[1]*eyeImage_test.shape[2]*eyeImage_test.shape[3])
# model part
pca = PCA(n_components=125, whiten=True)
# scalar = StandardScaler()
svc = SVC(kernel='linear', C=0.1)
# prepare the input training data
pca.fit(eyeImage_train)
eyeImage_train = pca.transform(eyeImage_train)
# input_train = np.concatenate((eyeImage_train, leftEye_train, rightEye_train), axis=1)
input_train =eyeImage_train
# scalar.fit(input_train)
# input_train = scalar.transform(input_train)
svc.fit(input_train, y_train_binary)
train_score = svc.score(input_train, y_train_binary, sample_weight=None)
# prepare the input testing data
eyeImage_test = pca.transform(eyeImage_test)
# input_test = np.concatenate((eyeImage_test, leftEye_test, rightEye_test), axis=1)
input_test =eyeImage_test
# input_test = scalar.transform(input_test)
test_score = svc.score(input_test, y_test_binary, sample_weight=None)
svc,pca, scalar = None, None, None
train_scores.append(train_score)
test_scores.append(test_score)
data = {'train_score':train_scores, 'test_score':test_scores}
return pd.DataFrame(data)
position_df = pca_svm_positions_robustness(df)
position_df.mean(axis=0)
position_df.std(axis=0)
plt.figure()
position_df.plot()
plt.legend(loc="best")
plt.title("Acc of PCA+SVM+eyePosition in 100 rand train-test splits")
def test_robustness(df):
"""
For testing the robustness of our PCA+SVM model when we split the train and test randomly
"""
train_scores, test_scores = [], []
for i in range(200):
X_train, X_test, y_train, y_test = train_test_split(df.loc[:,df.columns!="y"],df.loc[:,df.columns=="y"], test_size=0.3)
eyeImage_train = np.stack(X_train['eyeImage'].to_numpy())
eyeImage_test = np.stack(X_test['eyeImage'].to_numpy())
y_train_binary = create_binary_labels(y_train)
y_test_binary = create_binary_labels(y_test)
# reshape to make it possible to feed into SVM
eyeImage_train = eyeImage_train.reshape(eyeImage_train.shape[0],eyeImage_train.shape[1]*eyeImage_train.shape[2]*eyeImage_train.shape[3])
eyeImage_test = eyeImage_test.reshape(eyeImage_test.shape[0],eyeImage_test.shape[1]*eyeImage_test.shape[2]*eyeImage_test.shape[3])
pca = PCA(n_components=125, whiten=True, random_state=42)
svc = SVC(kernel='linear', C=0.1)
svm_model = make_pipeline(pca, svc)
svm_model.fit(eyeImage_train, y_train_binary)
# param_grid = {'svc__C': [100,50,10,5,1,0.5,0.1, 0.05, 0.01, 0.005, 0.001]}
# grid = GridSearchCV(svm_model, param_grid)
# %time grid.fit(eyeImage_train, y_train_binary)
# print(grid.best_params_)
# svm_model = grid.best_estimator_
train_score = svm_model.score(eyeImage_train, y_train_binary, sample_weight=None)
test_score = svm_model.score(eyeImage_test, y_test_binary, sample_weight=None)
train_scores.append(train_score)
test_scores.append(test_score)
svm_model = None
data = {'train_score':train_scores, 'test_score':test_scores}
return pd.DataFrame(data)
result_df = test_robustness(df)
result_df
result_df.to_csv("results/pca_svm_cv.csv")
result_df.mean(axis=0)
result_df.std(axis=0)
y_train.index
print(classification_report(y_test_binary, yfit))
yfit_train = svm_model.predict(eyeImage_train)
print(classification_report(y_train_binary, yfit_train))
plt.figure()
result_df.plot()
plt.legend(loc="best")
plt.title("Acc of PCA+SVM in 200 rand splits")
plt.savefig("results/pca_svm_cv_acc_0716.png")
###Output
_____no_output_____
###Markdown
2. Only use far left and far right data
###Code
# create a extreme df
is_far_right = df["y"].map(lambda x: x[0]>0.7)
is_far_left = df["y"].map(lambda x: x[0]<-0.7)
df_extreme = df.loc[is_far_right | is_far_left]
result_df = test_robustness(df_extreme)
result_df.mean(axis=0)
result_df.std()
plt.figure()
result_df.plot()
plt.legend(loc="best")
plt.title("Acc of PCA+SVM in 200 rand train-test splits, data is mild left/right")
# plt.savefig("results/pca_svm_cv_acc_mild_0707.png")
###Output
_____no_output_____ |
00_mt_SQL_Oracle_core_builds.ipynb | ###Markdown
SQL Basics, build tables and queries.Sources for this notebook:[Learning PostgreSQL](http://shop.oreilly.com/product/9781783989188.do)[David Berry, Pluralsight](http://buildingbettersoftware.blogspot.com/)[SQL Pocket Guide 3rd Edition, Jonathan Gennick, 2011](http://shop.oreilly.com/product/0636920013471.do)
###Code
##Table Basics: Oracle
'''
Syntax to create table.
NULL values.
Default values.
Naming Rules: Max 30 chars, tables and view names have to be unique within the same schema
Columns must be unique within the same table. Do not use SQL reserved words.
Diagram relationships between tables, list PKs and FKs at a minumum.
---Names, Unquoted:
Characters allowed: alphanumeric, underscore, $ and #.
Unquoted names are case insensitive.
Quoted identifiers allow a wider range of name options
Column definition rules:
Max of 1000 columns, >255 will start row chaining.
Max of 30 characters for column name, can be reused in other tables.
Minimum info = name & data type; NULL and DEFAULT are optional
---NULLs
CHAR, VARCHAR, NCHAR, NVARCHAR treat empty strings as NULL value
NULLs vs DEFAULTS, dates could be misleading e.g. 1900-01-01 for a future order
or missing phone 000-000-0000 instead of <NULL>
---Virtual column values are computed from other table columns
Normal columns store data on disk
Virtual columns : value is computed in result query set,
cannot INSERT or UPDATE virtual columns,
can only use columns in same table,
indexes can be created over virtual column values,
Useful when a derived value is needed.
'''
%%writefile pirate_school_schema_oracle.sql
CREATE TABLE pirate_class
(
ship_deck VARCHAR2(2) NOT NULL,
number_o_course NUMBER(3,0) NOT NULL,
title_o_course VARCHAR2(66) NOT NULL,
desc_yer_course VARCHAR2(666) NOT NULL,
doubloons NUMBER(3,1) NOT NULL,
CONSTRAINT pk_ship_deck PRIMARY KEY
(ship_deck, number_o_course),
CONSTRAINT fk_pirate_class_ship_deck FOREIGN KEY
(ship_deck) REFERENCES decks (ship_deck)
)
TABLESPACE users
PCTFREE 75;
%%writefile port_code_schema.sql
CREATE TABLE port_codes
(
port_code VARCHAR2(4) NOT NULL,
city VARCHAR2(30) NOT NULL,
state VARCHAR2(30) NOT NULL,
country_code3 VARCHAR2(3) NOT NULL
);
!cat port_code_schema.sql
###Output
CREATE TABLE port_codes
(
port_code VARCHAR2(4) NOT NULL,
city VARCHAR2(30) NOT NULL,
state VARCHAR2(30) NOT NULL,
country_code3 VARCHAR2(3) NOT NULL
);
###Markdown
Select statements for the above: SELECT PORT_CODE, CITY, STATE FROM PORT_CODES; --OR-- SELECT Port_Code, City, State FROM Port_Codes;
###Code
%%writefile port_code_quoted.sql
CREATE TABLE "PortCodes_Q"
(
"port code" VARCHAR2(4) NOT NULL,
"city.name" VARCHAR2(30) NOT NULL,
"state-abbr" VARCHAR2(2) NOT NULL,
"country code3" VARCHAR2(3) NOT NULL
);
###Output
Writing zip_code_quoted.sql
###Markdown
Select statement for the above quoted table: SELECT "port code", "city.name", "state-abbr" FROM "PortCodes_Q" WHERE "port code" = '1234'
###Code
%%writefile pirates_table.sql
CREATE TABLE pirates
(
pirate_id NUMBER(7) NOT NULL,
nick_name VARCHAR2(31) NOT NULL,
last_name VARCHAR2(31) NOT NULL,
eye_patch VARCHAR2(1) DEFAULT 'T' NOT NULL,
email VARCHAR2(128) NOT NULL,
email_domain VARCHAR2(60) AS (
SUBSTR(email, INSTR(email, '@', 1,1)+1)
    ) VIRTUAL,
phone VARCHAR2(21) NOT NULL,
berth_date DATE NULL,
home_port VARCHAR2(31) NULL,
port_country VARCHAR2(3) NULL,
active_code VARCHAR2(1) DEFAULT 'A' NOT NULL,
    CONSTRAINT pk_pirates PRIMARY KEY (pirate_id),
    CONSTRAINT ck_pirates_table_eye_patch
        CHECK (eye_patch IN ('T', 'F'))
);
%%writefile treasure_map_yorders.sql
CREATE TABLE treasure_map_yorders
(
yorder_id NUMBER(13) NOT NULL,
yorder_date DATE NOT NULL,
pirate_id NUMBER(7) NOT NULL,
subtotal NUMBER(10,2),
tax NUMBER(10,2),
shipping NUMBER(10,2),
invoice_total NUMBER(10,2)
AS (subtotal + tax + shipping) VIRTUAL
);
%%writefile view_grades_students_oracle.sql
--------------------------------------------------------
-- DDL for View V_STUDENT_GRADES
--------------------------------------------------------
CREATE OR REPLACE VIEW V_STUDENT_GRADES AS
SELECT
ce.student_id, co.term_code,
c.department_code, c.course_number,
c.course_title, c.credits,
ce.grade_code, g.points
FROM course_enrollments ce
INNER JOIN course_offerings co
ON ce.course_offering_id = co.course_offering_id
INNER JOIN courses c
ON c.department_code = co.department_code
AND c.course_number = co.course_number
INNER JOIN grades g
ON ce.grade_code = g.grade_code;
%%writefile prospective_pirates_import_oracle.sql
--create a practice schema to import into the db
CREATE TABLE prospective_pirates_IO_table
(
first_name VARCHAR2(50),
last_name VARCHAR2(50),
slip_number VARCHAR2(5),
port VARCHAR2(60),
govenar VARCHAR2(50),
snail_mail VARCHAR2(50) ,
date_of_piercing DATE,
grog_ration NUMBER(3,2)
)
ORGANIZATION EXTERNAL --the section below is what makes this an external table
( TYPE ORACLE_LOADER --tells Oracle we are importing a flat or text file, i.e. not data pump files
  DEFAULT DIRECTORY data_import --specify dir object
  ACCESS PARAMETERS --importing a fixed-width file
  ( RECORDS FIXED 149 --this is the length of each line in the file in bytes, includes new line chars at end
LOGFILE data_import:'prospective_students_fw.log' --will do default, better to name them
BADFILE data_import:'prospective_students_fw.bad' --rejected records not imported go here
FIELDS --most important part, specify the structure of import file, tells Oracle how to read it
( first_name CHAR(22), --Oracle will map these names to the above table def,
middle_init CHAR(1), --note this is not needed in the create table statement, db still needs to know
last_name CHAR(22),
street_address CHAR(33),
city CHAR(22),
state CHAR(2),
email_address CHAR(66),
date_of_birth CHAR(10), DATE_FORMAT DATE MASK "MM/DD/YYYY", --do not import as text
gpa CHAR(4)
)
)
LOCATION ('prospective_students.dat') --no absolute path, referenced above
)
REJECT LIMIT UNLIMITED; --bad data is all rejected, or import and filter out later
###Output
Writing prospective_pirates_import_oracle.sql
|
Multi_agent_planning.ipynb | ###Markdown
###Code
import sys
import matplotlib.pyplot as plt
import numpy as np
import torch
from torch import autograd
from torch.utils.data import DataLoader
from torch.nn.parameter import Parameter
import torch.nn.functional as F
!ls
from google.colab import files
uploaded = files.upload()
def action(S,val):
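    # Added note: the action indices map to 4-neighborhood moves on the (row, col) grid --
    # 0: col - 1, 1: row - 1, 2: col + 1, 3: row + 1; any other value (e.g. 4) means "stay put".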
if val == 0:
return torch.tensor([S[0].item(),S[1].item()-1])
elif val == 1:
return torch.tensor([S[0].item()-1,S[1].item()])
elif val == 2:
return torch.tensor([S[0].item(),S[1].item()+1])
elif val == 3:
return torch.tensor([S[0].item()+1,S[1].item()])
else :
return S
def rand_play(X,a_s,imsize):
#find possible action
Action = []
if (a_s[1] -1) >= 0:
if X[0][a_s[0]][a_s[1]-1] and (not X[2][a_s[0]][a_s[1]-1]):
Action.append(0)
if (a_s[0] -1 ) >= 0:
if X[0][a_s[0]-1][a_s[1]] and (not X[2][a_s[0]-1][a_s[1]]):
Action.append(1)
if (a_s[1] +1 ) < imsize:
if X[0][a_s[0]][a_s[1]+1] and (not X[2][a_s[0]][a_s[1]+1]):
Action.append(2)
if (a_s[0] + 1) < imsize:
if X[0][a_s[0]+1][a_s[1]] and (not X[2][a_s[0]+1][a_s[1]]):
Action.append(3)
if len(Action) != 0:
act = np.random.choice(Action)
new_state = action(a_s,act)
else:
new_state = a_s
return new_state
def goal_reach_check(agent_state,goal,goal_count,goal_reached):
token = 0
v = 0
for z in goal:
#check goal is already reached
if goal_reached[v] == 0:
#if goal is not reached and compare with agent state
if agent_state[0].item() == z[0] and agent_state[1].item() == z[1]:
goal_reached[v] = 1
goal_count += 1
token = 1
v = v+1
return token,goal_count,goal_reached
def agent_predict(model,dom,goal,imsize,max_random_play):
X = dom.clone()
model.eval()
correct_goal,total_goal,traj_step_error,goal_count = 0.0,0.0,0.0,0.0
#store agent initial state
inter = np.where(X[2] == 1.0)
agent_state = np.column_stack([inter[0],inter[1]])
n_agent = agent_state.shape[0]
agent_traj = [[agent_state[i].copy()] for i in range(n_agent)]
#store number agent still playing
current_play_agent = [1.0 for i in range(n_agent)]
current_play_agent = np.array(current_play_agent)
#keep which is reached
goal_reached = np.zeros([goal.size])
#store each agent random play chances
random_play = np.zeros(n_agent)
max_random_play = max_random_play
#run untill all player finish
while current_play_agent.sum() != 0.0:
for i in range(n_agent):
if current_play_agent[i] == 1.0:
S1 = torch.tensor(agent_state[i][0])
S2 = torch.tensor(agent_state[i][1])
S1 = S1.to(device)
S2 = S2.to(device)
Input = X.clone()
Input = Input.reshape([1,3,imsize,imsize])
Input = Input.float()
Input = Input.to(device)
output,prediction = model(Input,S1,S2)
_,index = torch.max(output,dim=1)
#check agent (index = 4) means stop planning
if index == 4:
#token whether agent reach goal or not
token,goal_count,goal_reached = goal_reach_check(agent_state[i],goal,goal_count,goal_reached)
if token:
current_play_agent[i] = 0.0
else:
if random_play[i] > max_random_play:
current_play_agent[i] = 0.0
else:
X[2][agent_state[i][0]][agent_state[i][1]] = 0.0
agent_state[i] =rand_play(X,agent_state[i].copy(),imsize)
random_play[i] += 1
X[2][agent_state[i][0]][agent_state[i][1]] = 1.0
agent_traj[i].append(agent_state[i].copy())
else:
X[2][agent_state[i][0]][agent_state[i][1]] = 0.0
agent_state[i] = action(agent_state[i],index)
#check agent reach any obstacle and push it into stop
i_1 = agent_state[i][0]
i_2 = agent_state[i][1]
if X[0][i_1][i_2].item() == 0.0 and X[1][i_1][i_2].item() == 0.0:
current_play_agent[i] = 0.0
X[2][agent_state[i][0]][agent_state[i][1]] = 1.0
agent_traj[i].append(agent_state[i].copy())
#stop agent which moves back and forward two on the goal
if len(agent_traj[i]) > 3 and (agent_traj[i][-1] == agent_traj[i][-3]).sum() == 2 and (agent_traj[i][-2] == agent_traj[i][-4]).sum() == 2:
inter = agent_traj[i][-1]
token,goal_count,goal_reached = goal_reach_check(inter,goal,goal_count,goal_reached)
if token:
current_play_agent[i] = 0.0
else:
inter = agent_traj[i][-2]
token,goal_count,goal_reached = goal_reach_check(inter,goal,goal_count,goal_reached)
if token:
current_play_agent[i] = 0.0
agent_traj[i].pop(-1)
else:
if random_play[i] < max_random_play:
X[2][agent_state[i][0]][agent_state[i][1]] = 0.0
agent_state[i] =rand_play(X,agent_state[i].copy(),imsize)
random_play[i] += 1
X[2][agent_state[i][0]][agent_state[i][1]] = 1.0
agent_traj[i].append(agent_state[i].copy())
else:
current_play_agent[i] = 0.0
correct_goal = goal_count
if n_agent > len(goal):
total_goal = len(goal)
else:
total_goal = n_agent
return correct_goal,total_goal,agent_traj
class MA_VIN(torch.nn.Module):
def __init__(self, k,imsize):
super(MA_VIN, self).__init__()
self.k = k
self.h = torch.nn.Conv2d(
in_channels=3,
out_channels=150,
kernel_size=(3, 3),
stride=1,
padding=1,
bias=True)
self.r = torch.nn.Conv2d(
in_channels=150,
out_channels=1,
kernel_size=(1, 1),
stride=1,
padding=0,
bias=False)
self.q = torch.nn.Conv2d(
in_channels=1,
out_channels=5,
kernel_size=(3, 3),
stride=1,
padding=1,
bias=False)
self.fc_1 = torch.nn.Linear(in_features=5, out_features=5, bias=True)
self.w = Parameter(torch.zeros(5, 1, 3, 3), requires_grad=True)
self.sm = torch.nn.Softmax(dim=1)
def forward(self, X, S1, S2):
h = self.h(X)
r = self.r(h)
q = self.q(r)
v, _ = torch.max(q, dim=1, keepdim=True)
for i in range(0, self.k - 1):
q = F.conv2d(
torch.cat([r, v], 1),
torch.cat([self.q.weight, self.w], 1),
stride=1,
padding=1)
v, _ = torch.max(q, dim=1, keepdim=True)
q = F.conv2d(
torch.cat([r, v], 1),
torch.cat([self.q.weight, self.w], 1),
stride=1,
padding=1)
slice_s1 = S1.long().expand(imsize, 1, 5, q.size(0))
slice_s1 = slice_s1.permute(3, 2, 1, 0)
q_out = q.gather(2, slice_s1).squeeze(2)
slice_s2 = S2.long().expand(1, 5, q.size(0))
slice_s2 = slice_s2.permute(2, 1, 0)
q_out = q_out.gather(2, slice_s2).squeeze(2)
logits = self.fc_1(q_out)
return logits, self.sm(logits)
test = []
with np.load('Env_28x28.npz', mmap_mode='r') as f:
test = f['arr_0']
test_data = []
for i in test:
env = -1 *(i[0] == 0) + 1.0
goal = (i[0] == 0.5) + 0.0
Agent = i[1]
domain = np.array([env,goal,Agent])
test_data.append([torch.tensor(domain.copy())])
model_28x28 = MA_VIN(60,28)
model_28x28.load_state_dict(torch.load('MA_VIN_6.pth'))
model_28x28.cuda()
device = torch.device("cuda:0" if torch.cuda.is_available() else "CPU")
correct_goal,total_goal = 0.0,0.0
imsize = 28
max_rand_play= 5
for i in range(50):
X = test_data[i][0].clone()
go = np.where(X[1] == 1.0)
goal = np.column_stack([go[0],go[1]])
current_goal,current_total,traj = agent_predict(model_28x28,X.clone(),goal,imsize,max_rand_play)
correct_goal += current_goal
total_goal += current_total
print(f'Goal Accuracy: {correct_goal/total_goal}')
print(correct_goal,total_goal)
test = []
with np.load('Env_64x64.npz', mmap_mode='r') as f:
test = f['arr_0']
###Output
_____no_output_____
###Markdown
###Code
test_data = []
for i in test:
env = -1 *(i[0] == 0) + 1.0
goal = (i[0] == 0.5) + 0.0
Agent = i[1]
domain = np.array([env,goal,Agent])
test_data.append([torch.tensor(domain.copy())])
model_64x64 = MA_VIN(125,64)
model_64x64.load_state_dict(torch.load('MA_VIN_6.pth'))
model_64x64.cuda()
correct_goal,total_goal = 0.0,0.0
imsize = 64
max_rand_play= 5
for i in range(50):
X = test_data[i][0].clone()
go = np.where(X[1] == 1.0)
goal = np.column_stack([go[0],go[1]])
current_goal,current_total,traj = agent_predict(model_64x64,X.clone(),goal,imsize,max_rand_play)
correct_goal += current_goal
total_goal += current_total
print(f'Goal Accuracy: {correct_goal/total_goal}')
print(correct_goal,total_goal)
test = []
with np.load('Env_80x80.npz', mmap_mode='r') as f:
test = f['arr_0']
test_data = []
for i in test:
env = -1 *(i[0] == 0) + 1.0
goal = (i[0] == 0.5) + 0.0
Agent = i[1]
domain = np.array([env,goal,Agent])
test_data.append([torch.tensor(domain.copy())])
model_80x80 = MA_VIN(135,80)
model_80x80.load_state_dict(torch.load('MA_VIN_6.pth'))
model_80x80.cuda()
correct_goal,total_goal = 0.0,0.0
imsize = 80
max_rand_play= 5
for i in range(50):
X = test_data[i][0].clone()
go = np.where(X[1] == 1.0)
goal = np.column_stack([go[0],go[1]])
current_goal,current_total,traj = agent_predict(model_80x80,X.clone(),goal,imsize,max_rand_play)
correct_goal += current_goal
total_goal += current_total
print(f'Goal Accuracy: {correct_goal/total_goal}')
print(correct_goal,total_goal)
test = []
with np.load('Env_128x128.npz', mmap_mode='r') as f:
test = f['arr_0']
test_data = []
for i in test:
env = -1 *(i[0] == 0) + 1.0
goal = (i[0] == 0.5) + 0.0
Agent = i[1]
domain = np.array([env,goal,Agent])
test_data.append([torch.tensor(domain.copy())])
model_128x128 = MA_VIN(190,128)
model_128x128.load_state_dict(torch.load('MA_VIN_6.pth'))
model_128x128.cuda()
correct_goal,total_goal = 0.0,0.0
imsize = 128
max_rand_play= 5
for i in range(50):
X = test_data[i][0].clone()
go = np.where(X[1] == 1.0)
goal = np.column_stack([go[0],go[1]])
current_goal,current_total,traj = agent_predict(model_128x128,X.clone(),goal,imsize,max_rand_play)
correct_goal += current_goal
total_goal += current_total
print(f'Goal Accuracy: {correct_goal/total_goal}')
print(correct_goal,total_goal)
###Output
_____no_output_____ |
Python for Data Science and AI/PY0101EN-2-2-Lists.ipynb | ###Markdown
Lists in Python Welcome! This notebook will teach you about the lists in the Python Programming Language. By the end of this lab, you'll know the basics list operations in Python, including indexing, list operations and copy/clone list. Table of Contents About the Dataset Lists Indexing List Content List Operations Copy and Clone List Quiz on Lists Estimated time needed: 15 min About the Dataset Imagine you received album recommendations from your friends and compiled all of the recommandations into a table, with specific information about each album.The table has one row for each movie and several columns:- **artist** - Name of the artist- **album** - Name of the album- **released_year** - Year the album was released- **length_min_sec** - Length of the album (hours,minutes,seconds)- **genre** - Genre of the album- **music_recording_sales_millions** - Music recording sales (millions in USD) on [SONG://DATABASE](http://www.song-database.com/)- **claimed_sales_millions** - Album's claimed sales (millions in USD) on [SONG://DATABASE](http://www.song-database.com/)- **date_released** - Date on which the album was released- **soundtrack** - Indicates if the album is the movie soundtrack (Y) or (N)- **rating_of_friends** - Indicates the rating from your friends from 1 to 10The dataset can be seen below: Artist Album Released Length Genre Music recording sales (millions) Claimed sales (millions) Released Soundtrack Rating (friends) Michael Jackson Thriller 1982 00:42:19 Pop, rock, R&B 46 65 30-Nov-82 10.0 AC/DC Back in Black 1980 00:42:11 Hard rock 26.1 50 25-Jul-80 8.5 Pink Floyd The Dark Side of the Moon 1973 00:42:49 Progressive rock 24.2 45 01-Mar-73 9.5 Whitney Houston The Bodyguard 1992 00:57:44 Soundtrack/R&B, soul, pop 26.1 50 25-Jul-80 Y 7.0 Meat Loaf Bat Out of Hell 1977 00:46:33 Hard rock, progressive rock 20.6 43 21-Oct-77 7.0 Eagles Their Greatest Hits (1971-1975) 1976 00:43:08 Rock, soft rock, folk rock 32.2 42 17-Feb-76 9.5 Bee Gees Saturday Night Fever 1977 1:15:54 Disco 20.6 40 15-Nov-77 Y 9.0 Fleetwood Mac Rumours 1977 00:40:01 Soft rock 27.9 40 04-Feb-77 9.5 Lists Indexing We are going to take a look at lists in Python. A list is a sequenced collection of different objects such as integers, strings, and other lists as well. The address of each element within a list is called an index. An index is used to access and refer to items within a list. To create a list, type the list within square brackets [ ], with your content inside the parenthesis and separated by commas. Let’s try it!
###Code
# Create a list
L = ["Michael Jackson", 10.1, 1982]
L
###Output
_____no_output_____
###Markdown
We can use negative and regular indexing with a list :
###Code
# Print the elements on each index
print('the same element using negative and positive indexing:\n Positive:',L[0],
      '\n Negative:' , L[-3] )
print('the same element using negative and positive indexing:\n Positive:',L[1],
      '\n Negative:' , L[-2] )
print('the same element using negative and positive indexing:\n Positive:',L[2],
      '\n Negative:' , L[-1] )
###Output
_____no_output_____
###Markdown
List Content Lists can contain strings, floats, and integers. We can nest other lists, and we can also nest tuples and other data structures. The same indexing conventions apply for nesting:
###Code
# Sample List
["Michael Jackson", 10.1, 1982, [1, 2], ("A", 1)]
###Output
_____no_output_____
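###Markdown
 The nested items can be reached by chaining indexes; for example (a short added illustration using the sample list above):
###Code
# Index 3 holds the nested list [1, 2]; a second index selects inside it
NL = ["Michael Jackson", 10.1, 1982, [1, 2], ("A", 1)]
NL[3][1]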
###Markdown
List Operations We can also perform slicing in lists. For example, if we want the last two elements, we use the following command:
###Code
# Sample List
L = ["Michael Jackson", 10.1,1982,"MJ",1]
L
###Output
_____no_output_____
###Markdown
###Code
# List slicing
L[3:5]
###Output
_____no_output_____
###Markdown
We can use the method extend to add new elements to the list:
###Code
# Use extend to add elements to list
L = [ "Michael Jackson", 10.2]
L.extend(['pop', 10])
L
###Output
_____no_output_____
###Markdown
Another similar method is append. If we apply append instead of extend, we add one element to the list:
###Code
# Use append to add elements to list
L = [ "Michael Jackson", 10.2]
L.append(['pop', 10])
L
###Output
_____no_output_____
###Markdown
Each time we apply a method, the list changes. If we apply extend we add two new elements to the list. The list L is then modified by adding two new elements:
###Code
# Use extend to add elements to list
L = [ "Michael Jackson", 10.2]
L.extend(['pop', 10])
L
###Output
_____no_output_____
###Markdown
If we append the list ['a','b'] we have one new element consisting of a nested list:
###Code
# Use append to add elements to list
L.append(['a','b'])
L
###Output
_____no_output_____
###Markdown
As lists are mutable, we can change them. For example, we can change the first element as follows:
###Code
# Change the element based on the index
A = ["disco", 10, 1.2]
print('Before change:', A)
A[0] = 'hard rock'
print('After change:', A)
###Output
_____no_output_____
###Markdown
We can also delete an element of a list using the del command:
###Code
# Delete the element based on the index
print('Before change:', A)
del(A[0])
print('After change:', A)
###Output
_____no_output_____
###Markdown
We can convert a string to a list using split. For example, the method split translates every group of characters separated by a space into an element in a list:
###Code
# Split the string, default is by space
'hard rock'.split()
###Output
_____no_output_____
###Markdown
We can use the split function to separate strings on a specific character. We pass the character we would like to split on into the argument, which in this case is a comma. The result is a list, and each element corresponds to a set of characters that have been separated by a comma:
###Code
# Split the string by comma
'A,B,C,D'.split(',')
###Output
_____no_output_____
###Markdown
Copy and Clone List When we set one variable B equal to A; both A and B are referencing the same list in memory:
###Code
# Copy (copy by reference) the list A
A = ["hard rock", 10, 1.2]
B = A
print('A:', A)
print('B:', B)
###Output
_____no_output_____
###Markdown
Initially, the value of the first element in B is set as hard rock. If we change the first element in A to banana, we get an unexpected side effect. As A and B are referencing the same list, if we change list A, then list B also changes. If we check the first element of B we get banana instead of hard rock:
###Code
# Examine the copy by reference
print('B[0]:', B[0])
A[0] = "banana"
print('B[0]:', B[0])
###Output
_____no_output_____
###Markdown
This is demonstrated in the following figure: You can clone list **A** by using the following syntax:
###Code
# Clone (clone by value) the list A
B = A[:]
B
###Output
_____no_output_____
###Markdown
Variable **B** references a new copy or clone of the original list; this is demonstrated in the following figure: Now if you change A, B will not change:
###Code
print('B[0]:', B[0])
A[0] = "hard rock"
print('B[0]:', B[0])
###Output
_____no_output_____
###Markdown
Quiz on List Create a list a_list, with the following elements 1, hello, [1,2,3] and True.
###Code
# Write your code below and press Shift+Enter to execute
a_list = [1, 'hello',[1,2,3],True]
###Output
_____no_output_____
###Markdown
Double-click here for the solution.<!-- Your answer is below:a_list = [1, 'hello', [1, 2, 3] , True]a_list--> Find the value stored at index 1 of a_list.
###Code
# Write your code below and press Shift+Enter to execute
a_list[1]
###Output
_____no_output_____
###Markdown
Double-click here for the solution.<!-- Your answer is below:a_list[1]--> Retrieve the elements stored at index 1, 2 and 3 of a_list.
###Code
# Write your code below and press Shift+Enter to execute
a_list[1:4]
###Output
_____no_output_____
###Markdown
Double-click here for the solution.<!-- Your answer is below:a_list[1:4]--> Concatenate the following lists A = [1, 'a'] and B = [2, 1, 'd']:
###Code
# Write your code below and press Shift+Enter to execute
A = [1, 'a']
B = [2, 1, 'd']
A + B
###Output
_____no_output_____ |
notebooks/image_models/solutions/2_mnist_models.ipynb | ###Markdown
MNIST Image Classification with TensorFlow on Cloud AI PlatformThis notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). Learning Objectives1. Understand how to build a Dense Neural Network (DNN) for image classification2. Understand how to use dropout (DNN) for image classification3. Understand how to use Convolutional Neural Networks (CNN)4. Know how to deploy and use an image classification model using Google Cloud's [AI Platform](https://cloud.google.com/ai-platform/)First things first. Configure the parameters below to match your own Google Cloud project details.
###Code
from datetime import datetime
import os
REGION = 'us-central1'
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
BUCKET = PROJECT
MODEL_TYPE = "cnn"  # "linear", "dnn", "dnn_dropout", or "cnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["TFVERSION"] = "2.3" # Tensorflow version
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
###Output
_____no_output_____
###Markdown
Building a dynamic modelIn the previous notebook, mnist_linear.ipynb, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder, `trainer`, and is designated as a python package with the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.Let's start with the trainer file first. This file parses command line arguments to feed into the model.
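For orientation, the package assembled by the cells below looks roughly like this (the `Dockerfile` is added further down; `test.py` ships with the boilerplate):
mnist_models/
    Dockerfile
    trainer/
        __init__.py
        model.py
        task.py
        util.py
        test.py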
###Code
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
    output_path = args.job_dir if not trial_id else args.job_dir + '/' + trial_id
    model_layers = model.get_layers(args.model_type)
    image_model = model.build_model(model_layers, output_path)
    model_history = model.train_and_evaluate(
        image_model, args.epochs, args.steps_per_epoch, output_path)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
###Code
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
###Output
_____no_output_____
###Markdown
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The below file has two functions: `get_layers` and `create_and_train_model`. We will build the structure of our model in `get_layers`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`.**TODO 1**: Define the Keras layers for a DNN model **TODO 2**: Define the Keras layers for a dropout model **TODO 3**: Define the Keras layers for a CNN model Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
###Code
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
Flatten(),
Dense(hidden_layer_1_neurons, activation='relu'),
Dense(hidden_layer_2_neurons, activation='relu'),
Dense(nclasses),
Softmax()
],
'dnn_dropout': [
Flatten(),
Dense(hidden_layer_1_neurons, activation='relu'),
Dense(hidden_layer_2_neurons, activation='relu'),
Dropout(dropout_rate),
Dense(nclasses),
Softmax()
],
'cnn': [
Conv2D(num_filters_1, kernel_size=kernel_size_1,
activation='relu', input_shape=(WIDTH, HEIGHT, 1)),
MaxPooling2D(pooling_size_1),
Conv2D(num_filters_2, kernel_size=kernel_size_2,
activation='relu'),
MaxPooling2D(pooling_size_2),
Flatten(),
Dense(hidden_layer_1_neurons, activation='relu'),
Dense(hidden_layer_2_neurons, activation='relu'),
Dropout(dropout_rate),
Dense(nclasses),
Softmax()
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
###Output
_____no_output_____
###Markdown
Local TrainingWith everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script `mnist_models/trainer/test.py` to make sure the model still passes our previous checks. On `line 13`, you can specify which model types you would like to check. `line 14` and `line 15` have the number of epochs and steps per epoch, respectively.Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
###Code
!python3 -m mnist_models.trainer.test
###Output
_____no_output_____
###Markdown
Now that we know that our models are working as expected, let's run them on the [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). We can run the code as a python module locally first using the command line.The below cell transfers some of our variables to the command line as well as creates a job directory that includes a timestamp.
###Code
current_time = datetime.now().strftime("%Y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time)
###Output
_____no_output_____
###Markdown
The cell below runs the local version of the code. The epochs and steps_per_epoch flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
###Code
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Training on the cloudWe will use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) to train this model on AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TensorFlow 2.3 environment.
###Code
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu.2-3
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
###Output
_____no_output_____
###Markdown
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
###Code
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
###Output
_____no_output_____
###Markdown
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our Docker image using the `master-image-uri` flag.
###Code
current_time = datetime.now().strftime("%Y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Deploying and predicting with modelOnce you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. Below we use the keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path. Uncomment the delete commands below if you are getting an "already exists" error and want to deploy a new model.
###Code
%%bash
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --region $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=2.3
###Output
_____no_output_____
###Markdown
To predict with the model, let's take one of the example images.**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
###Code
import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
###Output
_____no_output_____
###Markdown
Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! You've completed the lab!
###Code
%%bash
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json
###Output
_____no_output_____
###Markdown
MNIST Image Classification with TensorFlow on Cloud AI PlatformThis notebook demonstrates how to implement different image models on MNIST using the [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras). Learning Objectives1. Understand how to build a Dense Neural Network (DNN) for image classification2. Understand how to use dropout (DNN) for image classification3. Understand how to use Convolutional Neural Networks (CNN)4. Know how to deploy and use an image classification model using Google Cloud's [AI Platform](https://cloud.google.com/ai-platform/)First things first. Configure the parameters below to match your own Google Cloud project details.
###Code
import os
from datetime import datetime
REGION = "us-central1"
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
BUCKET = PROJECT
MODEL_TYPE = "cnn" # "linear", "dnn", "dnn_dropout", or "cnn"
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_TYPE"] = MODEL_TYPE
os.environ["IMAGE_URI"] = os.path.join("gcr.io", PROJECT, "mnist_models")
###Output
_____no_output_____
###Markdown
Building a dynamic modelIn the previous notebook, mnist_linear.ipynb, we ran our code directly from the notebook. In order to run it on the AI Platform, it needs to be packaged as a python module.The boilerplate structure for this module has already been set up in the folder `mnist_models`. The module lives in the sub-folder, `trainer`, and is designated as a python package with the empty `__init__.py` (`mnist_models/trainer/__init__.py`) file. It still needs the model and a trainer to run it, so let's make them.Let's start with the trainer file first. This file parses command line arguments to feed into the model.
###Code
%%writefile mnist_models/trainer/task.py
import argparse
import json
import os
import sys
from . import model
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_type',
help='Which model type to use',
type=str, default='linear')
parser.add_argument(
'--epochs',
help='The number of epochs to train',
type=int, default=10)
parser.add_argument(
'--steps_per_epoch',
help='The number of steps per epoch to train',
type=int, default=100)
parser.add_argument(
'--job-dir',
help='Directory where to save the given model',
type=str, default='mnist_models/')
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# Configure path for hyperparameter tuning.
trial_id = json.loads(
os.environ.get('TF_CONFIG', '{}')).get('task', {}).get('trial', '')
output_path = args.job_dir if not trial_id else args.job_dir + '/'
model_layers = model.get_layers(args.model_type)
image_model = model.build_model(model_layers, args.job_dir)
model_history = model.train_and_evaluate(
image_model, args.epochs, args.steps_per_epoch, args.job_dir)
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
Next, let's group non-model functions into a util file to keep the model file simple. We'll copy over the `scale` and `load_dataset` functions from the previous lab.
###Code
%%writefile mnist_models/trainer/util.py
import tensorflow as tf
def scale(image, label):
"""Scales images from a 0-255 int range to a 0-1 float range"""
image = tf.cast(image, tf.float32)
image /= 255
image = tf.expand_dims(image, -1)
return image, label
def load_dataset(
data, training=True, buffer_size=5000, batch_size=100, nclasses=10):
"""Loads MNIST dataset into a tf.data.Dataset"""
(x_train, y_train), (x_test, y_test) = data
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, nclasses)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(batch_size)
if training:
dataset = dataset.shuffle(buffer_size).repeat()
return dataset
###Output
_____no_output_____
###Markdown
Finally, let's code the models! The [tf.keras API](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras) accepts an array of [layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) into a [model object](https://www.tensorflow.org/api_docs/python/tf/keras/Model), so we can create a dictionary of layers based on the different model types we want to use. The below file has three functions: `get_layers`, `build_model`, and `train_and_evaluate`. We will build the structure of our model in `get_layers` and compile the model in `build_model`. Last but not least, we'll copy over the training code from the previous lab into `train_and_evaluate`. **TODO 1**: Define the Keras layers for a DNN model **TODO 2**: Define the Keras layers for a dropout model **TODO 3**: Define the Keras layers for a CNN model Hint: These models progressively build on each other. Look at the imported `tensorflow.keras.layers` modules and the default values for the variables defined in `get_layers` for guidance.
###Code
%%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
Flatten(),
Dense(hidden_layer_1_neurons, activation='relu'),
Dense(hidden_layer_2_neurons, activation='relu'),
Dense(nclasses),
Softmax()
],
'dnn_dropout': [
Flatten(),
Dense(hidden_layer_1_neurons, activation='relu'),
Dense(hidden_layer_2_neurons, activation='relu'),
Dropout(dropout_rate),
Dense(nclasses),
Softmax()
],
'cnn': [
Conv2D(num_filters_1, kernel_size=kernel_size_1,
activation='relu', input_shape=(WIDTH, HEIGHT, 1)),
MaxPooling2D(pooling_size_1),
Conv2D(num_filters_2, kernel_size=kernel_size_2,
activation='relu'),
MaxPooling2D(pooling_size_2),
Flatten(),
Dense(hidden_layer_1_neurons, activation='relu'),
Dense(hidden_layer_2_neurons, activation='relu'),
Dropout(dropout_rate),
Dense(nclasses),
Softmax()
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
###Output
_____no_output_____
###Markdown
Local TrainingWith everything set up, let's run locally to test the code. Some of the previous tests have been copied over into a testing script `mnist_models/trainer/test.py` to make sure the model still passes our previous checks. On `line 13`, you can specify which model types you would like to check. `line 14` and `line 15` have the number of epochs and steps per epoch respectively. Moment of truth! Run the code below to check your models against the unit tests. If you see "OK" at the end when it's finished running, congrats! You've passed the tests!
###Code
!python3 -m mnist_models.trainer.test
###Output
_____no_output_____
###Markdown
Now that we know that our models are working as expected, let's run them on the [Google Cloud AI Platform](https://cloud.google.com/ml-engine/docs/). We can first run the training module locally from the command line. The cell below transfers some of our variables to the command line and creates a job directory that includes a timestamp.
###Code
current_time = datetime.now().strftime("%Y%m%d_%H%M%S")
model_type = "cnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time
)
###Output
_____no_output_____
###Markdown
The cell below runs the local version of the code. The epochs and steps_per_epoch flags can be changed to run for longer or shorter, as defined in our `mnist_models/trainer/task.py` file.
###Code
%%bash
python3 -m mnist_models.trainer.task \
--job-dir=$JOB_DIR \
--epochs=5 \
--steps_per_epoch=50 \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Training on the cloudWe will use a [Deep Learning Container](https://cloud.google.com/ai-platform/deep-learning-containers/docs/overview) to train this model on AI Platform. Below is a simple [Dockerfile](https://docs.docker.com/engine/reference/builder/) which copies our code to be used in a TensorFlow 2.3 environment.
###Code
%%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu.2-3
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"]
###Output
_____no_output_____
###Markdown
The below command builds the image and ships it off to Google Cloud so it can be used for AI Platform. When built, it will show up [here](http://console.cloud.google.com/gcr) with the name `mnist_models`. ([Click here](https://console.cloud.google.com/cloud-build) to enable Cloud Build)
###Code
!docker build -f mnist_models/Dockerfile -t $IMAGE_URI ./
!docker push $IMAGE_URI
###Output
_____no_output_____
###Markdown
Finally, we can kick off the [AI Platform training job](https://cloud.google.com/sdk/gcloud/reference/ai-platform/jobs/submit/training). We can pass in our Docker image using the `master-image-uri` flag.
###Code
current_time = datetime.now().strftime("%Y%m%d_%H%M%S")
model_type = "cnn"
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time
)
os.environ["JOB_NAME"] = f"mnist_{model_type}_{current_time}"
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE
###Output
_____no_output_____
###Markdown
Deploying and predicting with modelOnce you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. Below uses the keras export path of the previous job, but `${JOB_DIR}keras_export/` can always be changed to a different path.
###Code
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
MODEL_NAME = f"mnist_{TIMESTAMP}"
%env MODEL_NAME = $MODEL_NAME
%%bash
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
gcloud ai-platform models create ${MODEL_NAME} --region $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=2.3
###Output
_____no_output_____
###Markdown
To predict with the model, let's take one of the example images.**TODO 4**: Write a `.json` file with image data to send to an AI Platform deployed model
###Code
import codecs
import json
import matplotlib.pyplot as plt
import tensorflow as tf
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding="utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH));
###Output
_____no_output_____
###Markdown
Finally, we can send it to the prediction service. The output will have a 1 in the index of the corresponding digit it is predicting. Congrats! You've completed the lab!
###Code
%%bash
gcloud ai-platform predict \
--model=${MODEL_NAME} \
--version=${MODEL_TYPE} \
--json-instances=./test.json
###Output
_____no_output_____ |
examples/Practical_tutorial/Combinations_practical.ipynb | ###Markdown
The Combinatorial Explosion---***"During the past century, science has developed a limited capability to design materials, but we are still too dependent on serendipity"*** - [Eberhart and Clougherty, Looking for design in materials design (2004)](http://www.nature.com/nmat/journal/v3/n10/abs/nmat1229.html)This practical explores how materials design can be approached by using the simplest of rules in order to narrow down the combinations to those that might be considered legitimate. It will demonstrate the scale of the problem, even after some chemical rules are applied. TASKS- Section 1: 1 task- Section 2 i: 2 tasks- Section 2 ii: 2 tasks- Section 2 iii: 2 tasks- Section 2 iv: 3 tasks- Section 3 & 4: information only NOTES ON USING THE NOTEBOOK- This notebook is divided into "cells" which either contain Markdown (text, equations and images) or Python code- A cell can be "run" by selecting it and either - pressing the Run button in the toolbar above (triangle/arrow symbol) - Using Cell > Run in the menu above - Holding the Ctrl key and pressing Enter- Running Markdown cells just displays them nicely (like this text!) Running Python code cells runs the code and displays any output below.- When you run a cell and it appears to not be doing anything, if there is no number in the square brackets and instead you see ```In [*] ``` it is still running!- If the output produces a lot of lines, you can minimise the output box by clicking on the white space to the left of it.- You can clear the output of a cell or all cells by going to Cell > Current output/All output > Clear. 1. Back to basics: Forget your chemistry(From the blog of Anubhav Jain: [www.hackingmaterials.com](http://www.hackingmaterials.com))1. You have the first 50 elements of the periodic table2. You also have a 10 x 10 x 10 grid 3. You are allowed to arrange 30 of the elements at a time in some combination in the grid to make a 'compound'4. How many different arrangements (different compounds) could you make?The answer is about $10^{108}$, *over a googol of compounds!***TASK: Use the cell below to arrive at the conclusion above. Hints for the formula required are below the cell.**
###Code
from math import factorial as factorial
grid_points = 1000.0
atoms = 30.0
elements = 50.0
##########
# A. Show that assigning each of the 30 atoms as one of 50 elements is ~ 9e50 (permutations)
element_assignment = 0
print(f'Number of possible element assignments is: {element_assignment}')
# B. Show that the number of possible arrangements of 30 atoms on a grid of 10x10x10 is ~2e57 (combinations)
atom_arrangements = 0
print(f'Number of atom arrangements is: {atom_arrangements}')
# C. Finally, show that the total number of potential "materials" is ~ 2e108
total_materials = 0
print(f'Total number of "materials" is: {total_materials}')
###Output
_____no_output_____
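###Markdown
One way to complete the cell above (a sketch of our own, not the notebook's official solution) uses the counting formulas hinted at: each of the 30 atoms can be assigned any of the 50 elements, giving $50^{30}$ permutations, and the 30 occupied sites can be chosen from the 1000 grid points in $\binom{1000}{30}$ ways. The variable names below mirror those in the cell above; integer values are used so that `factorial` can be applied.
###Code
# A possible solution sketch (ours, not the official one)
from math import factorial
grid_points = 1000
atoms = 30
elements = 50
# A. Each of the 30 atoms can be any of the 50 elements: 50**30 (~9e50)
element_assignment = elements ** atoms
print(f'Number of possible element assignments is: {element_assignment:.2e}')
# B. Choose which 30 of the 1000 grid points are occupied: C(1000, 30) (~2e57)
atom_arrangements = factorial(grid_points) // (factorial(atoms) * factorial(grid_points - atoms))
print(f'Number of atom arrangements is: {atom_arrangements:.2e}')
# C. Total number of potential "materials" is the product of A and B (~2e108)
total_materials = element_assignment * atom_arrangements
print(f'Total number of "materials" is: {total_materials:.2e}')
###Output
_____no_output_____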
###Markdown
2. Counting combinations: Remember your chemistryWe will use well-known elemental properties along with the criterion that compounds must not have an overall charge in order to sequentially apply different levels of screening and count the possible combinations:i. Setting up the search space - Defining which elements we want to include ii. Element combination counting - Considering combinations of elements and ignore oxidation statesiii. Ion combination counting - Considering combinations of elements in their allowed oxidation statesiv. Charge neutrality - Discarding any combinations that would not make a charge neutral compoundv. Electronegativity - Discarding any combinations which exhibit a cation which is more electronegative than an anion i. Setting up and choosing the search-spaceThe code below imports the element data that we need in order to do our counting. The main variable in the cell below for this practical is the ```max_atomic_number``` which dictates how many elements to consider. For example, when ```max_atomic_number = 10``` the elements from H to Ne are considered in the search.- ** TASK 1: Change the variable ```max_atomic_number``` so that it includes elements from H to Ar **- ** TASK 2: Get the code to print out the actual list of elements that will be considered **
###Code
# Imports the SMACT toolkit for later on #
import smact
# Gets element data from file and puts into a list #
with open('Counting/element_data.txt','r') as f:
data = f.readlines()
list_of_elements = []
# Specify the range of elements to include #
### EDIT BELOW ###
max_atomic_number = 10
##################
# Populates a list with the elements we are concerned with #
for line in data:
if not line.startswith('#'):
# Grab first three items from table row
symbol, name, Z = line.split()[:3]
if int(Z) > 0 and int(Z) < max_atomic_number + 1:
list_of_elements.append(symbol)
print(f'--- Considering the {len(list_of_elements)} elements '
f'from {list_of_elements[0]} to {list_of_elements[-1]} ---')
###Output
_____no_output_____
###Markdown
ii. Element combination countingThis first procedure simply counts how many binary combinations are possible for a given set of elements. This is a numerical (combinations) problem, as we are not considering element properties in any way for the time being.- **TASK 1: Increase the number of elements to consider (max_atomic_number in the cell above) to see how this affects the number of combinations**- **TASK 2: If you can, add another for statement (e.g. ```for k, ele_c...```) to make the cell count up ternary combinations. It is advisable to change the number of elements to include back to 10 first! Hint: The next exercise is set up for ternary counting so you could come back and do this after looking at that.**
###Code
# Counts up possibilities and prints the output #
element_count = 0
for i, ele_a in enumerate(list_of_elements):
for j, ele_b in enumerate(list_of_elements[i+1:]):
element_count += 1
print(f'{ele_a} {ele_b}')
# Prints the total number of combinations found
print(f'Number of combinations = {element_count}')
###Output
_____no_output_____
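###Markdown
For TASK 2 above, one possible sketch (ours, feel free to write your own) nests a third loop using the same slicing pattern that the ion-counting cell below uses; it reuses `list_of_elements` from the earlier cell.
###Code
# Sketch for TASK 2: counting ternary element combinations (oxidation states still ignored)
element_count = 0
for i, ele_a in enumerate(list_of_elements):
    for j, ele_b in enumerate(list_of_elements[i+1:]):
        for k, ele_c in enumerate(list_of_elements[i+j+2:]):
            element_count += 1
            print(f'{ele_a} {ele_b} {ele_c}')
# Prints the total number of combinations found
print(f'Number of combinations = {element_count}')
###Output
_____no_output_____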
###Markdown
iii. Ion combination countingWe now consider each known oxidation state of an element (so strictly speaking we are not dealing with 'ions'). The procedure incorporates a library of known oxidation states for each element and is this time already set up to search for ternary combinations. The code prints out the combination of elements including their oxidation states. There is also a timer so that you can see how long it takes to run the program. - ** TASK 1: Reset the search space to ~10 elements, read through (feel free to ask if you don't understand any parts!) and run the code below. **- ** TASK 2: change ```max_atomic_number``` again in the cell above and see how this affects the number of combinations. Hint: It is advisable to increase the search-space gradually and see how long the calculation takes. Big numbers mean you could be waiting a while for the calculation to run....**
###Code
# Sets up the timer to see how long the program takes to run #
import time
start_time = time.time()
ion_count = 0
for i, ele_a in enumerate(list_of_elements):
for ox_a in smact.Element(ele_a).oxidation_states:
for j, ele_b in enumerate(list_of_elements[i+1:]):
for ox_b in smact.Element(ele_b).oxidation_states:
for k, ele_c in enumerate(list_of_elements[i+j+2:]):
for ox_c in smact.Element(ele_c).oxidation_states:
ion_count += 1
print(f'{ele_a} {ox_a} \t {ele_b} {ox_b} \t {ele_c} {ox_c}')
# Prints the total number of combinations found and the time taken to run.
print(f'Number of combinations = {ion_count}')
print(f'--- {time.time() - start_time} seconds to run ---')
###Output
_____no_output_____
###Markdown
All we seem to have done is make matters worse! We are introducing many more species by further splitting each element in our search-space into separate ions, one for each allowed oxidation state. When we get to max_atomic_number > 20, we are including the transition metals and their many oxidation states. iv. Charge neutralityThe previous step is necessary to incorporate our filter that viable compounds must be charge neutral overall. Scrolling through the output from above, it is easy to see that the vast majority of the combinations are not charge neutral overall. We can discard these combinations to start narrowing our search down to more 'sensible' (or at least not totally unreasonable) ones. In this cell, we will use the `neutral_ratios` function in smact to do this.- ** TASK 1: Reset the search space to ~10 elements, read through (feel free to ask if you don't understand any parts!) and run the code below. **- ** TASK 2: Edit the code so that it also prints out the oxidation state next to each element **- ** TASK 3: Increase the number of elements to consider again (```max_atomic_number``` in the cell above) and compare the output of i. and ii. with that of the below cell**
###Code
import time
from smact import neutral_ratios
start_time = time.time()
charge_neutral_count = 0
for i, ele_a in enumerate(list_of_elements):
for ox_a in smact.Element(ele_a).oxidation_states:
for j, ele_b in enumerate(list_of_elements[i+1:]):
for ox_b in smact.Element(ele_b).oxidation_states:
for k, ele_c in enumerate(list_of_elements[i+j+2:]):
for ox_c in smact.Element(ele_c).oxidation_states:
# Checks if the combination is charge neutral before printing it out! #
cn_e, cn_r = neutral_ratios([ox_a, ox_b, ox_c], threshold=1)
if cn_e:
charge_neutral_count += 1
print(f'{ele_a} \t {ele_b} \t {ele_c}')
print(f'Number of combinations = {charge_neutral_count}')
print(f'--- {time.time() - start_time} seconds to run ---')
###Output
_____no_output_____
###Markdown
This drastically reduces the number of combinations we get out and we can even begin to see some compounds that we recognise and know exist. v. ElectronegativityThe last step is to incorporate the key chemical property of electronegativity, i.e. the propensity of an element to attract electron density to itself in a bond. This is a logical step as inspection of the output from above reveals that some combinations feature a species in a higher (more positive) oxidation state which is more electronegative than other species present. With this in mind, we now incorporate another filter which checks that the species with higher oxidation states have lower electronegativities. The library of values used is the widely accepted electronegativity scale developed by Linus Pauling. The scale is based on the dissociation energies of heteronuclear diatomic molecules and their corresponding homonuclear diatomic molecules:
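For reference (this equation does not appear in the cell above and is added here from the standard definition), Pauling's relation for two elements A and B, with bond dissociation energies $E_d$ expressed in eV, is approximately: $$\chi_A - \chi_B = \sqrt{E_d(AB) - \tfrac{1}{2}\left[E_d(AA) + E_d(BB)\right]}$$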
###Code
import time
from smact.screening import pauling_test
start_time = time.time()
pauling_count = 0
for i, ele_a in enumerate(list_of_elements):
paul_a = smact.Element(ele_a).pauling_eneg
for ox_a in smact.Element(ele_a).oxidation_states:
for j, ele_b in enumerate(list_of_elements[i+1:]):
paul_b = smact.Element(ele_b).pauling_eneg
for ox_b in smact.Element(ele_b).oxidation_states:
for k, ele_c in enumerate(list_of_elements[i+j+2:]):
paul_c = smact.Element(ele_c).pauling_eneg
for ox_c in smact.Element(ele_c).oxidation_states:
# Puts elements, oxidation states and electronegativites into lists for convenience #
elements = [ele_a, ele_b, ele_c]
oxidation_states = [ox_a, ox_b, ox_c]
pauling_electro = [paul_a, paul_b, paul_c]
# Checks if the electronegativity makes sense and if the combination is charge neutral #
electroneg_makes_sense = pauling_test(oxidation_states, pauling_electro, elements)
cn_e, cn_r = smact.neutral_ratios([ox_a, ox_b, ox_c], threshold=1)
if cn_e:
if electroneg_makes_sense:
pauling_count += 1
print(f'{ele_a}{ox_a} \t {ele_b}{ox_b} \t {ele_c}{ox_c}')
print(f'Number of combinations = {pauling_count}')
print(f'--- {time.time() - start_time} seconds to run ---')
###Output
_____no_output_____ |
tutorials/W3D3_NetworkCausality/W3D3_Tutorial3.ipynb | ###Markdown
Neuromatch Academy 2020 -- Week 3 Day 3 Tutorial 3 Causality Day - Simultaneous fitting/regression**Content creators**: Ari Benjamin, Tony Liu, Konrad Kording**Content reviewers**: Mike X Cohen, Madineh Sarvestani, Ella Batty, Michael Waskom --- Tutorial objectivesThis is tutorial 3 on our day of examining causality. Below is the high level outline of what we'll cover today, with the sections we will focus on in this notebook in bold:1. Master definitions of causality2. Understand that estimating causality is possible3. Learn 4 different methods and understand when they fail 1. perturbations 2. correlations 3. **simultaneous fitting/regression** 4. instrumental variables Notebook 3 objectivesIn tutorial 2 we explored correlation as an approximation for causation and learned that correlation $\neq$ causation for larger networks. However, computing correlations is a rather simple approach, and you may be wondering: will more sophisticated techniques allow us to better estimate causality? Can't we control for things? Here we'll use some common advanced (but controversial) methods that estimate causality from observational data. These methods rely on fitting a function to our data directly, instead of trying to use perturbations or correlations. Since we have the full closed-form equation of our system, we can try these methods and see how well they work in estimating causal connectivity when there are no perturbations. Specifically, we will:- Learn about more advanced (but also controversial) techniques for estimating causality - conditional probabilities (**regression**)- Explore limitations and failure modes - understand the problem of **omitted variable bias** --- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.multioutput import MultiOutputRegressor
from sklearn.linear_model import Lasso
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def sigmoid(x):
"""
Compute sigmoid nonlinearity element-wise on x.
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with sigmoid nonlinearity applied
"""
return 1 / (1 + np.exp(-x))
def logit(x):
"""
Applies the logit (inverse sigmoid) transformation
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with logit nonlinearity applied
"""
return np.log(x/(1-x))
def create_connectivity(n_neurons, random_state=42, p=0.9):
"""
Generate our nxn causal connectivity matrix.
Args:
n_neurons (int): the number of neurons in our system.
random_state (int): random seed for reproducibility
Returns:
A (np.ndarray): our 0.1 sparse connectivity matrix
"""
np.random.seed(random_state)
A_0 = np.random.choice([0, 1], size=(n_neurons, n_neurons), p=[p, 1 - p])
# set the timescale of the dynamical system to about 100 steps
_, s_vals, _ = np.linalg.svd(A_0)
A = A_0 / (1.01 * s_vals[0])
# _, s_val_test, _ = np.linalg.svd(A)
# assert s_val_test[0] < 1, "largest singular value >= 1"
return A
def get_regression_estimate_full_connectivity(X):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
neuron_idx (int): optionally provide a neuron idx to compute connectivity for
Returns:
V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).
if neuron_idx is specified, V is of shape (n_neurons,).
"""
n_neurons = X.shape[0]
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[:, 1:].transpose()
# apply inverse sigmoid transformation
Y = logit(Y)
# fit multioutput regression
reg = MultiOutputRegressor(Lasso(fit_intercept=False,
alpha=0.01, max_iter=250 ), n_jobs=-1)
reg.fit(W, Y)
V = np.zeros((n_neurons, n_neurons))
for i, estimator in enumerate(reg.estimators_):
V[i, :] = estimator.coef_
return V
def get_regression_corr_full_connectivity(n_neurons, A, X, observed_ratio, regression_args):
"""
A wrapper function for our correlation calculations between A and the V estimated
from regression.
Args:
n_neurons (int): number of neurons
A (np.ndarray): connectivity matrix
X (np.ndarray): dynamical system
observed_ratio (float): the proportion of n_neurons observed, must be betweem 0 and 1.
regression_args (dict): dictionary of lasso regression arguments and hyperparameters
Returns:
A single float correlation value representing the similarity between A and R
"""
assert (observed_ratio > 0) and (observed_ratio <= 1)
sel_idx = np.clip(int(n_neurons*observed_ratio), 1, n_neurons)
sel_X = X[:sel_idx, :]
sel_A = A[:sel_idx, :sel_idx]
sel_V = get_regression_estimate_full_connectivity(sel_X)
return np.corrcoef(sel_A.flatten(), sel_V.flatten())[1,0], sel_V
def see_neurons(A, ax, ratio_observed=1, arrows=True):
"""
Visualizes the connectivity matrix.
Args:
A (np.ndarray): the connectivity matrix of shape (n_neurons, n_neurons)
ax (plt.axis): the matplotlib axis to display on
Returns:
Nothing, but visualizes A.
"""
n = len(A)
ax.set_aspect('equal')
thetas = np.linspace(0, np.pi * 2, n, endpoint=False)
x, y = np.cos(thetas), np.sin(thetas),
if arrows:
for i in range(n):
for j in range(n):
if A[i, j] > 0:
ax.arrow(x[i], y[i], x[j] - x[i], y[j] - y[i], color='k', head_width=.05,
width = A[i, j] / 25,shape='right', length_includes_head=True,
alpha = .2)
if ratio_observed < 1:
nn = int(n * ratio_observed)
ax.scatter(x[:nn], y[:nn], c='r', s=150, label='Observed')
ax.scatter(x[nn:], y[nn:], c='b', s=150, label='Unobserved')
ax.legend(fontsize=15)
else:
ax.scatter(x, y, c='k', s=150)
ax.axis('off')
def simulate_neurons(A, timesteps, random_state=42):
"""
Simulates a dynamical system for the specified number of neurons and timesteps.
Args:
A (np.array): the connectivity matrix
timesteps (int): the number of timesteps to simulate our system.
random_state (int): random seed for reproducibility
Returns:
- X has shape (n_neurons, timeteps).
"""
np.random.seed(random_state)
n_neurons = len(A)
X = np.zeros((n_neurons, timesteps))
for t in range(timesteps - 1):
# solution
epsilon = np.random.multivariate_normal(np.zeros(n_neurons), np.eye(n_neurons))
X[:, t + 1] = sigmoid(A.dot(X[:, t]) + epsilon)
assert epsilon.shape == (n_neurons,)
return X
def correlation_for_all_neurons(X):
"""Computes the connectivity matrix for the all neurons using correlations
Args:
X: the matrix of activities
Returns:
estimated_connectivity (np.ndarray): estimated connectivity for the selected neuron, of shape (n_neurons,)
"""
n_neurons = len(X)
S = np.concatenate([X[:, 1:], X[:, :-1]], axis=0)
R = np.corrcoef(S)[:n_neurons, n_neurons:]
return R
def get_sys_corr(n_neurons, timesteps, random_state=42, neuron_idx=None):
"""
A wrapper function for our correlation calculations between A and R.
Args:
n_neurons (int): the number of neurons in our system.
timesteps (int): the number of timesteps to simulate our system.
random_state (int): seed for reproducibility
neuron_idx (int): optionally provide a neuron idx to slice out
Returns:
A single float correlation value representing the similarity between A and R
"""
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
R = correlation_for_all_neurons(X)
return np.corrcoef(A.flatten(), R.flatten())[0, 1]
def get_regression_corr(n_neurons, A, X, observed_ratio, regression_args, neuron_idx=None):
"""
A wrapper function for our correlation calculations between A and the V estimated
from regression.
Args:
n_neurons (int): the number of neurons in our system.
A (np.array): the true connectivity
X (np.array): the simulated system
observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.
regression_args (dict): dictionary of lasso regression arguments and hyperparameters
neuron_idx (int): optionally provide a neuron idx to compute connectivity for
Returns:
A single float correlation value representing the similarity between A and R
"""
assert (observed_ratio > 0) and (observed_ratio <= 1)
sel_idx = np.clip(int(n_neurons * observed_ratio), 1, n_neurons)
selected_X = X[:sel_idx, :]
selected_connectivity = A[:sel_idx, :sel_idx]
estimated_selected_connectivity = get_regression_estimate(selected_X, neuron_idx=neuron_idx)
if neuron_idx is None:
return np.corrcoef(selected_connectivity.flatten(),
estimated_selected_connectivity.flatten())[1, 0], estimated_selected_connectivity
else:
return np.corrcoef(selected_connectivity[neuron_idx, :],
estimated_selected_connectivity)[1, 0], estimated_selected_connectivity
def plot_connectivity_matrix(A, ax=None):
"""Plot the (weighted) connectivity matrix A as a heatmap
Args:
A (ndarray): connectivity matrix (n_neurons by n_neurons)
ax: axis on which to display connectivity matrix
"""
if ax is None:
ax = plt.gca()
lim = np.abs(A).max()
ax.imshow(A, vmin=-lim, vmax=lim, cmap="coolwarm")
###Output
_____no_output_____
###Markdown
--- Section 1: Regression
###Code
#@title Video 1: Regression approach
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Av4LaXZdgDo", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/Av4LaXZdgDo
###Markdown
You may be familiar with the idea that correlation only implies causation when there are no hidden *confounders*. This aligns with our intuition that correlation only implies causality when no alternative variables could explain away a correlation.**A confounding example**:Suppose you observe that people who sleep more do better in school. It's a nice correlation. But what else could explain it? Maybe people who sleep more are richer, don't work a second job, and have time to actually do homework. If you want to ask if sleep *causes* better grades, and want to answer that with correlations, you have to control for all possible confounds.A confound is any variable that affects both the outcome and your original covariate. In our example, confounds are things that affect both sleep and grades. **Controlling for a confound**: Confounds can be controlled for by adding them as covariates in a regression. But for your coefficients to be causal effects, you need three things: 1. **All** confounds are included as covariates2. Your regression assumes the same mathematical form of how covariates relate to outcomes (linear, GLM, etc.)3. No covariates are caused *by* both the treatment (original variable) and the outcome. These are [colliders](https://en.wikipedia.org/wiki/Collider_(statistics)); we won't introduce them today (but Google it on your own time! Colliders are very counterintuitive.)In the real world it is very hard to guarantee these conditions are met. In the brain it's even harder (as we can't measure all neurons). Luckily today we simulated the system ourselves.
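To make this concrete, below is a minimal toy sketch (our own illustration, not part of the original tutorial) in which a hidden variable `z` drives both the covariate `x` and the outcome `y`. Regressing `y` on `x` alone yields a spurious nonzero coefficient, while adding `z` as a covariate recovers the true causal effect, which is zero here.
###Code
# Toy demonstration of confounding (illustrative sketch only)
import numpy as np
from sklearn.linear_model import LinearRegression
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)            # confound (e.g., wealth)
x = 0.8 * z + rng.normal(size=n)  # covariate (e.g., sleep), driven by z
y = 1.5 * z + rng.normal(size=n)  # outcome (e.g., grades), driven only by z
# Omitting the confound: the coefficient on x is biased away from zero
naive = LinearRegression().fit(x[:, None], y)
print(f"Omitting z, estimated effect of x on y: {naive.coef_[0]:.2f}")
# Controlling for the confound: the coefficient on x is close to the true value, 0
controlled = LinearRegression().fit(np.column_stack([x, z]), y)
print(f"Controlling for z, estimated effect of x on y: {controlled.coef_[0]:.2f}")
###Output
_____no_output_____
###Markdown
With that intuition in hand, let's see whether regression can recover the connectivity of our simulated system, where for now we can observe every neuron.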
###Code
#@title Video 2: Fitting a GLM
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GvMj9hRv5Ak", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/GvMj9hRv5Ak
###Markdown
Section 1.1: Recovering connectivity by model fittingRecall that in our system each neuron affects every other via:$$\vec{x}_{t+1} = \sigma(A\vec{x}_t + \epsilon_t), $$where $\sigma$ is our sigmoid nonlinearity from before: $\sigma(x) = \frac{1}{1 + e^{-x}}$Our system is a closed system, too, so there are no omitted variables. The regression coefficients should be the causal effect. Are they? We will use a regression approach to estimate the causal influence of all neurons on neuron 1. Specifically, we will use linear regression to determine the $A$ in:$$\sigma^{-1}(\vec{x}_{t+1}) = A\vec{x}_t + \epsilon_t ,$$where $\sigma^{-1}$ is the inverse sigmoid transformation, also sometimes referred to as the **logit** transformation: $\sigma^{-1}(x) = \log(\frac{x}{1-x})$.Let $W$ be the $\vec{x}_t$ values, up to the second-to-last timestep $T-1$:$$W = \begin{bmatrix}\mid & \mid & ... & \mid \\ \vec{x}_0 & \vec{x}_1 & ... & \vec{x}_{T-1} \\ \mid & \mid & ... & \mid\end{bmatrix}_{n \times (T-1)}$$Let $Y$ be the $\vec{x}_{t+1}$ values for a selected neuron, indexed by $i$, starting from the second timestep up to the last timestep $T$:$$Y = \begin{bmatrix}x_{i,1} & x_{i,2} & ... & x_{i, T} \\ \end{bmatrix}_{1 \times (T-1)}$$You will then fit the following model:$$\sigma^{-1}(Y^T) = W^TV$$where $V$ is the $n \times 1$ coefficient matrix of this regression, which will be the estimated connectivity matrix between the selected neuron and the rest of the neurons.**Review**: As you learned Friday of Week 1, *lasso* a.k.a. **$L_1$ regularization** causes the coefficients to be sparse, containing mostly zeros. Think about why we want this here. Exercise 1: Use linear regression plus lasso to estimate causal connectivitiesYou will now create a function to fit the above regression model and estimate $V$. We will then call this function to examine how close the regression vs the correlation is to true causality.**Code**:You'll notice that we've transposed both $Y$ and $W$ here and in the code we've already provided below. Why is that? This is because the machine learning models provided in scikit-learn expect the *rows* of the input data to be the observations, while the *columns* are the variables. We have that inverted in our definitions of $Y$ and $W$, with the timesteps of our system (the observations) as the columns. So we transpose both matrices to make the matrix orientation correct for scikit-learn.- Because of the abstraction provided by scikit-learn, fitting this regression will just be a call to initialize the `Lasso()` estimator and a call to the `fit()` function- Use the following hyperparameters for the `Lasso` estimator: - `alpha = 0.01` - `fit_intercept = False`- How do we obtain $V$ from the fitted model?
###Code
def get_regression_estimate(X, neuron_idx):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
neuron_idx (int): a neuron index to compute connectivity for
Returns:
V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).
if neuron_idx is specified, V is of shape (n_neurons,).
"""
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[[neuron_idx], 1:].transpose()
# Apply inverse sigmoid transformation
Y = logit(Y)
############################################################################
## TODO: Insert your code here to fit a regressor with Lasso. Lasso captures
## our assumption that most connections are precisely 0.
## Fill in function and remove
# raise NotImplementedError("Please complete the regression exercise")
############################################################################
# Initialize regression model with no intercept and alpha=0.01
regression = Lasso(alpha=0.01, fit_intercept=False)
# Fit regression to the data
regression.fit(W, Y)
V = regression.coef_
return V
# Parameters
n_neurons = 50 # the size of our system
timesteps = 10000 # the number of timesteps to take
random_state = 42
neuron_idx = 1
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
# Uncomment below to test your function
V = get_regression_estimate(X, neuron_idx)
print("Regression: correlation of estimated connectivity with true connectivity: {:.3f}".format(np.corrcoef(A[neuron_idx, :], V)[1, 0]))
print("Lagged correlation of estimated connectivity with true connectivity: {:.3f}".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))
# to_remove solution
def get_regression_estimate(X, neuron_idx):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
neuron_idx (int): a neuron index to compute connectivity for
Returns:
V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).
if neuron_idx is specified, V is of shape (n_neurons,).
"""
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[[neuron_idx], 1:].transpose()
# Apply inverse sigmoid transformation
Y = logit(Y)
# Initialize regression model with no intercept and alpha=0.01
regression = Lasso(fit_intercept=False, alpha=0.01)
# Fit regression to the data
regression.fit(W, Y)
V = regression.coef_
return V
# Parameters
n_neurons = 50 # the size of our system
timesteps = 10000 # the number of timesteps to take
random_state = 42
neuron_idx = 1
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
# Uncomment below to test your function
V = get_regression_estimate(X, neuron_idx)
print("Regression: correlation of estimated connectivity with true connectivity: {:.3f}".format(np.corrcoef(A[neuron_idx, :], V)[1, 0]))
print("Lagged correlation of estimated connectivity with true connectivity: {:.3f}".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))
###Output
Regression: correlation of estimated connectivity with true connectivity: 0.865
Lagged correlation of estimated connectivity with true connectivity: 0.703
###Markdown
You should find that the regression estimate of the connectivity has a correlation of 0.865 with the true connectivity matrix, whereas the correlation-based estimate has a correlation of only 0.703 with the true connectivity matrix. We can see from these numbers that multiple regression is better than simple correlation for estimating connectivity.

--- Section 2: Omitted Variable Bias

If we are unable to observe the entire system, **omitted variable bias** becomes a problem. If we don't have access to all the neurons, and therefore can't control for them, can we still estimate the causal effect accurately?

Section 2.1: Visualizing subsets of the connectivity matrix

We first visualize different subsets of the connectivity matrix when we observe 75% of the neurons vs 25%.

Recall the meaning of entries in our connectivity matrix: $A[i,j] = 1$ means a connection **from** neuron $i$ **to** neuron $j$ with strength $1$.
###Code
#@markdown Execute this cell to visualize subsets of connectivity matrix
# Run this cell to visualize the subsets of variables we observe
n_neurons = 25
A = create_connectivity(n_neurons)
fig, axs = plt.subplots(2, 2, figsize=(10, 10))
ratio_observed = [0.75, 0.25] # the proportion of neurons observed in our system
for i, ratio in enumerate(ratio_observed):
sel_idx = int(n_neurons * ratio)
offset = np.zeros((n_neurons, n_neurons))
axs[i,1].title.set_text("{}% neurons observed".format(int(ratio * 100)))
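# add 1 to the observed block so observed connections stand out from the zero (unobserved) background in the colormap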
offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]
im = axs[i, 1].imshow(offset, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
axs[i, 1].set_xlabel("Connectivity from")
axs[i, 1].set_ylabel("Connectivity to")
plt.colorbar(im, ax=axs[i, 1], fraction=0.046, pad=0.04)
see_neurons(A,axs[i, 0],ratio)
plt.suptitle("Visualizing subsets of the connectivity matrix", y = 1.05)
plt.show()
###Output
_____no_output_____
###Markdown
Section 2.2: Effects of partial observability
###Code
#@title Video 3: Omitted variable bias
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="5CCib6CTMac", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/5CCib6CTMac
###Markdown
**Video correction**: the labels "connectivity from"/"connectivity to" are swapped in the video but fixed in the figures/demos below.

Interactive Demo: Regression performance as a function of the number of observed neurons

We will first change the number of observed neurons in the network and inspect the resulting estimates of connectivity in this interactive demo. How does the estimated connectivity differ?

**Note:** the plots will take a moment or so to update after moving the slider.
###Code
#@markdown Execute this cell to enable demo
n_neurons = 50
A = create_connectivity(n_neurons, random_state=42)
X = simulate_neurons(A, 4000, random_state=42)
reg_args = {
"fit_intercept": False,
"alpha": 0.001
}
@widgets.interact
def plot_observed(n_observed=(5, 45, 5)):
to_neuron = 0
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
sel_idx = n_observed
ratio = (n_observed) / n_neurons
offset = np.zeros((n_neurons, n_neurons))
axs[0].title.set_text("{}% neurons observed".format(int(ratio * 100)))
offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]
im = axs[1].imshow(offset, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
plt.colorbar(im, ax=axs[1], fraction=0.046, pad=0.04)
see_neurons(A,axs[0], ratio, False)
corr, R = get_regression_corr_full_connectivity(n_neurons,
A,
X,
ratio,
reg_args)
#rect = patches.Rectangle((-.5,to_neuron-.5),n_observed,1,linewidth=2,edgecolor='k',facecolor='none')
#axs[1].add_patch(rect)
big_R = np.zeros(A.shape)
big_R[:sel_idx, :sel_idx] = 1 + R
#big_R[to_neuron, :sel_idx] = 1 + R
im = axs[2].imshow(big_R, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
plt.colorbar(im, ax=axs[2],fraction=0.046, pad=0.04)
c = 'w' if n_observed<(n_neurons-3) else 'k'
axs[2].text(0,n_observed+3,"Correlation : {:.2f}".format(corr), color=c, size=15)
#axs[2].axis("off")
axs[1].title.set_text("True connectivity")
axs[1].set_xlabel("Connectivity from")
axs[1].set_ylabel("Connectivity to")
axs[2].title.set_text("Estimated connectivity")
axs[2].set_xlabel("Connectivity from")
#axs[2].set_ylabel("Connectivity to")
###Output
_____no_output_____
###Markdown
Next, we will inspect a plot of the correlation between the true and estimated connectivity matrices vs the percent of neurons observed, over multiple trials. What relationship do you see between performance and the number of neurons observed?

**Note:** the cell below will take about 25-30 seconds to run.
###Code
#@title
#@markdown Plot correlation vs. subsampling
import warnings
warnings.filterwarnings('ignore')
# we'll simulate many systems for various ratios of observed neurons
n_neurons = 50
timesteps = 5000
ratio_observed = [1, 0.75, 0.5, .25, .12] # the proportion of neurons observed in our system
n_trials = 3 # run it this many times to get variability in our results
reg_args = {
"fit_intercept": False,
"alpha": 0.001
}
corr_data = np.zeros((n_trials, len(ratio_observed)))
for trial in range(n_trials):
A = create_connectivity(n_neurons, random_state=trial)
X = simulate_neurons(A, timesteps)
print("simulating trial {} of {}".format(trial + 1, n_trials))
for j, ratio in enumerate(ratio_observed):
result,_ = get_regression_corr_full_connectivity(n_neurons,
A,
X,
ratio,
reg_args)
corr_data[trial, j] = result
corr_mean = np.nanmean(corr_data, axis=0)
corr_std = np.nanstd(corr_data, axis=0)
plt.plot(np.asarray(ratio_observed) * 100, corr_mean)
plt.fill_between(np.asarray(ratio_observed) * 100,
corr_mean - corr_std,
corr_mean + corr_std,
alpha=.2)
plt.xlim([100, 10])
plt.xlabel("Percent of neurons observed")
plt.ylabel("connectivity matrices correlation")
plt.title("Performance of regression as a function of the number of neurons observed");
###Output
simulating trial 1 of 3
simulating trial 2 of 3
simulating trial 3 of 3
###Markdown
--- Summary
###Code
#@title Video 4: Summary
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="T1uGf1H31wE", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/T1uGf1H31wE
###Markdown
Neuromatch Academy 2020 -- Week 3 Day 3 Tutorial 3 Causality Day - Simultaneous fitting/regression

**Content creators**: Ari Benjamin, Tony Liu, Konrad Kording
**Content reviewers**: Mike X Cohen, Madineh Sarvestani, Ella Batty, Michael Waskom

--- Tutorial objectives

This is tutorial 3 on our day of examining causality. Below is the high-level outline of what we'll cover today, with the sections we will focus on in this notebook in bold:

1. Master definitions of causality
2. Understand that estimating causality is possible
3. Learn 4 different methods and understand when they fail
   1. perturbations
   2. correlations
   3. **simultaneous fitting/regression**
   4. instrumental variables

Notebook 3 objectives

In tutorial 2 we explored correlation as an approximation for causation and learned that correlation $\neq$ causation for larger networks. However, computing correlations is a rather simple approach, and you may be wondering: will more sophisticated techniques allow us to better estimate causality? Can't we control for things?

Here we'll use some common advanced (but controversial) methods that estimate causality from observational data. These methods rely on fitting a function to our data directly, instead of trying to use perturbations or correlations. Since we have the full closed-form equation of our system, we can try these methods and see how well they work in estimating causal connectivity when there are no perturbations. Specifically, we will:

- Learn about more advanced (but also controversial) techniques for estimating causality
  - conditional probabilities (**regression**)
- Explore limitations and failure modes
  - understand the problem of **omitted variable bias**

--- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.multioutput import MultiOutputRegressor
from sklearn.linear_model import Ridge, LinearRegression, ElasticNet, Lasso
import matplotlib.patches as patches
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("/share/dataset/COMMON/nma.mplstyle.txt")
# @title Helper Functions
def sigmoid(x):
"""
Compute sigmoid nonlinearity element-wise on x.
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with sigmoid nonlinearity applied
"""
return 1 / (1 + np.exp(-x))
def logit(x):
"""
Applies the logit (inverse sigmoid) transformation
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with logit nonlinearity applied
"""
return np.log(x/(1-x))
def create_connectivity(n_neurons, random_state=42):
"""
Generate our nxn causal connectivity matrix.
Args:
n_neurons (int): the number of neurons in our system.
random_state (int): random seed for reproducibility
Returns:
A (np.ndarray): our 0.1 sparse connectivity matrix
"""
np.random.seed(random_state)
A_0 = np.random.choice([0,1], size=(n_neurons, n_neurons), p=[0.9, 0.1])
# set the timescale of the dynamical system to about 100 steps
_, s_vals , _ = np.linalg.svd(A_0)
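# dividing by 1.01 * (largest singular value) scales A so its spectral norm is just below 1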
A = A_0 / (1.01 * s_vals[0])
# _, s_val_test, _ = np.linalg.svd(A)
# assert s_val_test[0] < 1, "largest singular value >= 1"
return A
def get_regression_estimate_full_connectivity(X):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
Returns:
V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).
"""
n_neurons = X.shape[0]
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[:, 1:].transpose()
# apply inverse sigmoid transformation
Y = logit(Y)
# fit multioutput regression
reg = MultiOutputRegressor(Lasso(fit_intercept=False,
alpha=0.01, max_iter=250 ), n_jobs=-1)
reg.fit(W,Y)
V = np.zeros((n_neurons, n_neurons))
for i, estimator in enumerate(reg.estimators_):
V[i, :] = estimator.coef_
return V
def get_regression_corr_full_connectivity(n_neurons, A, X, observed_ratio, regression_args):
"""
A wrapper function for our correlation calculations between A and the V estimated
from regression.
Args:
n_neurons (int): number of neurons
A (np.ndarray): connectivity matrix
X (np.ndarray): dynamical system
observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.
regression_args (dict): dictionary of lasso regression arguments and hyperparameters
Returns:
A single float correlation value representing the similarity between A and R
"""
assert (observed_ratio > 0) and (observed_ratio <= 1)
sel_idx = np.clip(int(n_neurons*observed_ratio), 1, n_neurons)
sel_X = X[:sel_idx, :]
sel_A = A[:sel_idx, :sel_idx]
sel_V = get_regression_estimate_full_connectivity(sel_X)
return np.corrcoef(sel_A.flatten(), sel_V.flatten())[1,0], sel_V
def see_neurons(A, ax, ratio_observed=1, arrows=True):
"""
Visualizes the connectivity matrix.
Args:
A (np.ndarray): the connectivity matrix of shape (n_neurons, n_neurons)
ax (plt.axis): the matplotlib axis to display on
Returns:
Nothing, but visualizes A.
"""
n = len(A)
ax.set_aspect('equal')
thetas = np.linspace(0,np.pi*2,n,endpoint=False )
x,y = np.cos(thetas),np.sin(thetas),
if arrows:
for i in range(n):
for j in range(n):
if A[i,j]>0:
ax.arrow(x[i],y[i],x[j]-x[i],y[j]-y[i],color='k',head_width=.05,
width = A[i,j]/25,shape='right', length_includes_head=True,
alpha = .2)
if ratio_observed<1:
nn = int(n*ratio_observed)
ax.scatter(x[:nn],y[:nn],c='r',s=150, label='Observed')
ax.scatter(x[nn:],y[nn:],c='b',s=150, label='Unobserved')
ax.legend(fontsize=15)
else:
ax.scatter(x,y,c='k',s=150)
ax.axis('off')
def simulate_neurons(A, timesteps, random_state=42):
"""
Simulates a dynamical system for the specified number of neurons and timesteps.
Args:
A (np.array): the connectivity matrix
timesteps (int): the number of timesteps to simulate our system.
random_state (int): random seed for reproducibility
Returns:
- X (np.ndarray): the simulated system, of shape (n_neurons, timesteps).
"""
np.random.seed(random_state)
n_neurons = len(A)
X = np.zeros((n_neurons, timesteps))
for t in range(timesteps-1):
# solution
epsilon = np.random.multivariate_normal(np.zeros(n_neurons), np.eye(n_neurons))
X[:, t+1] = sigmoid(A.dot(X[:,t]) + epsilon)
assert epsilon.shape == (n_neurons,)
return X
def correlation_for_all_neurons(X):
"""Computes the connectivity matrix for the all neurons using correlations
Args:
X: the matrix of activities
Returns:
R (np.ndarray): the estimated connectivity matrix computed from lagged correlations, of shape (n_neurons, n_neurons)
"""
n_neurons = len(X)
S = np.concatenate([X[:, 1:], X[:, :-1]], axis=0)
R = np.corrcoef(S)[:n_neurons, n_neurons:]
return R
def get_sys_corr(n_neurons, timesteps, random_state=42, neuron_idx=None):
"""
A wrapper function for our correlation calculations between A and R.
Args:
n_neurons (int): the number of neurons in our system.
timesteps (int): the number of timesteps to simulate our system.
random_state (int): seed for reproducibility
neuron_idx (int): optionally provide a neuron idx to slice out
Returns:
A single float correlation value representing the similarity between A and R
"""
A = create_connectivity(n_neurons,random_state)
X = simulate_neurons(A, timesteps)
R = correlation_for_all_neurons(X)
return np.corrcoef(A.flatten(), R.flatten())[0,1]
def get_regression_corr(n_neurons, A, X, observed_ratio, regression_args, neuron_idx=None):
"""
A wrapper function for our correlation calculations between A and the V estimated
from regression.
Args:
n_neurons (int): the number of neurons in our system.
A (np.array): the true connectivity
X (np.array): the simulated system
observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.
regression_args (dict): dictionary of lasso regression arguments and hyperparameters
neuron_idx (int): optionally provide a neuron idx to compute connectivity for
Returns:
A single float correlation value representing the similarity between A and R
"""
assert (observed_ratio > 0) and (observed_ratio <= 1)
sel_idx = np.clip(int(n_neurons*observed_ratio), 1, n_neurons)
selected_X = X[:sel_idx, :]
selected_connectivity = A[:sel_idx, :sel_idx]
estimated_selected_connectivity = get_regression_estimate(selected_X, neuron_idx=neuron_idx)
if neuron_idx is None:
return np.corrcoef(selected_connectivity.flatten(),
estimated_selected_connectivity.flatten())[1,0], estimated_selected_connectivity
else:
return np.corrcoef(selected_connectivity[neuron_idx, :],
estimated_selected_connectivity)[1,0], estimated_selected_connectivity
def plot_connectivity_matrix(A, ax=None):
"""Plot the (weighted) connectivity matrix A as a heatmap
Args:
A (ndarray): connectivity matrix (n_neurons by n_neurons)
ax: axis on which to display connectivity matrix
"""
if ax is None:
ax = plt.gca()
lim = np.abs(A).max()
ax.imshow(A, vmin=-lim, vmax=lim, cmap="coolwarm")
###Output
_____no_output_____
###Markdown
--- Section 1: Regression
###Code
#@title Video 1: Regression approach
# Insert the ID of the corresponding youtube video
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1m54y1q78b', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1m54y1q78b
###Markdown
You may be familiar with the idea that correlation only implies causation when there are no hidden *confounders*. This aligns with our intuition that correlation only implies causality when no alternative variables could explain away a correlation.

**A confounding example**: Suppose you observe that people who sleep more do better in school. It's a nice correlation. But what else could explain it? Maybe people who sleep more are richer, don't work a second job, and have time to actually do homework. If you want to ask whether sleep *causes* better grades, and want to answer that with correlations, you have to control for all possible confounds. A confound is any variable that affects both the outcome and your original covariate. In our example, confounds are things that affect both sleep and grades.

**Controlling for a confound**: Confounds can be controlled for by adding them as covariates in a regression. But for your coefficients to be causal effects, you need three things:

1. **All** confounds are included as covariates.
2. Your regression assumes the correct mathematical form for how covariates relate to outcomes (linear, GLM, etc.).
3. No covariates are caused *by* both the treatment (original variable) and the outcome. These are [colliders](https://en.wikipedia.org/wiki/Collider_(statistics)); we won't introduce them today (but Google them on your own time! Colliders are very counterintuitive.)

In the real world it is very hard to guarantee these conditions are met. In the brain it's even harder (as we can't measure all neurons). Luckily, today we simulated the system ourselves.
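To make the three conditions above concrete, here is a minimal toy sketch of controlling for a confound. It is purely illustrative and not part of the tutorial exercises: the variable names (`confound`, `sleep`, `grades`) and the effect sizes are made up, and the cell assumes `numpy` and scikit-learn's `LinearRegression` from the Setup imports.
###Code
# Toy illustration with made-up numbers: a confound drives both the covariate and the outcome.
# The true causal effect of `sleep` on `grades` is set to 0.5.
from sklearn.linear_model import LinearRegression  # already imported in Setup; repeated so the cell is self-contained
np.random.seed(0)
n = 5000
confound = np.random.randn(n)                      # e.g. free time / wealth
sleep = 0.8 * confound + np.random.randn(n)
grades = 0.5 * sleep + 1.0 * confound + np.random.randn(n)
# Regression WITHOUT the confound: the sleep coefficient is biased away from 0.5
naive = LinearRegression().fit(sleep[:, None], grades)
# Regression WITH the confound as a covariate: the sleep coefficient is close to 0.5
controlled = LinearRegression().fit(np.stack([sleep, confound], axis=1), grades)
print("naive estimate of sleep effect     : {:.2f}".format(naive.coef_[0]))
print("controlled estimate of sleep effect: {:.2f}".format(controlled.coef_[0]))
###Output
_____no_output_____
###Markdown
With the confound included as a covariate, the coefficient on `sleep` should land close to its true value of 0.5; without it, the estimate absorbs part of the confound's effect.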
###Code
#@title Video 2: Fitting a GLM
# Insert the ID of the corresponding youtube video
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV16p4y1S7yE', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV16p4y1S7yE
###Markdown
Section 1.1: Recovering connectivity by model fitting

Recall that in our system each neuron affects every other via:

$$\vec{x}_{t+1} = \sigma(A\vec{x}_t + \epsilon_t),$$

where $\sigma$ is our sigmoid nonlinearity from before: $\sigma(x) = \frac{1}{1 + e^{-x}}$.

Our system is a closed system, so there are no omitted variables. The regression coefficients should therefore be the causal effects. Are they?

We will use a regression approach to estimate the causal influence of all neurons on neuron 1. Specifically, we will use linear regression to determine the $A$ in:

$$\sigma^{-1}(\vec{x}_{t+1}) = A\vec{x}_t + \epsilon_t ,$$

where $\sigma^{-1}$ is the inverse sigmoid transformation, also sometimes referred to as the **logit** transformation: $\sigma^{-1}(x) = \log(\frac{x}{1-x})$.

Let $W$ be the $\vec{x}_t$ values, up to the second-to-last timestep $T-1$:

$$W = \begin{bmatrix}\mid & \mid & ... & \mid \\ \vec{x}_0 & \vec{x}_1 & ... & \vec{x}_{T-1} \\ \mid & \mid & ... & \mid\end{bmatrix}_{n \times (T-1)}$$

Let $Y$ be the $\vec{x}_{t+1}$ values for a selected neuron, indexed by $i$, starting from the second timestep up to the last timestep $T$:

$$Y = \begin{bmatrix}x_{i,1} & x_{i,2} & ... & x_{i, T} \\ \end{bmatrix}_{1 \times (T-1)}$$

You will then fit the following model:

$$\sigma^{-1}(Y^T) = W^TV$$

where $V$ is the $n \times 1$ coefficient matrix of this regression, which will be the estimated connectivity between the selected neuron and the rest of the neurons.

**Review**: As you learned on Friday of Week 1, *lasso*, a.k.a. **$L_1$ regularization**, causes the coefficients to be sparse, containing mostly zeros. Think about why we want this here.

Exercise 1: Use linear regression plus lasso to estimate causal connectivities

You will now create a function that fits the above regression model and returns $V$. We will then call this function to compare how close the regression estimate and the correlation estimate each come to the true causal connectivity.

**Code**: You'll notice that we've transposed both $Y$ and $W$ here and in the code we've already provided below. Why is that? The machine learning models provided in scikit-learn expect the *rows* of the input data to be the observations, while the *columns* are the variables. We have that inverted in our definitions of $Y$ and $W$, with the timesteps of our system (the observations) as the columns. So we transpose both matrices to make the matrix orientation correct for scikit-learn.

- Because of the abstraction provided by scikit-learn, fitting this regression is just a call to initialize the `Lasso()` estimator and a call to its `fit()` method.
- Use the following hyperparameters for the `Lasso` estimator:
  - `alpha = 0.01`
  - `fit_intercept = False`
- How do we obtain $V$ from the fitted model?
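If you are wondering what the $L_1$ penalty buys us here, the optional cell below is a small sketch, with made-up data and not part of the exercise, comparing ordinary least squares to lasso on a problem whose true coefficients are mostly zero, just like our sparse connectivity matrix. It assumes `numpy`, `Lasso`, and `LinearRegression` from the Setup imports.
###Code
# Optional sketch: lasso recovers sparsity that ordinary least squares does not.
from sklearn.linear_model import Lasso, LinearRegression  # already imported in Setup
np.random.seed(1)
n_samples, n_features = 200, 50
true_coef = np.zeros(n_features)
true_coef[:5] = 1.0                                   # only 5 of the 50 predictors matter
X_toy = np.random.randn(n_samples, n_features)
y_toy = X_toy @ true_coef + 0.1 * np.random.randn(n_samples)
ols_coef = LinearRegression(fit_intercept=False).fit(X_toy, y_toy).coef_
lasso_coef = Lasso(alpha=0.01, fit_intercept=False).fit(X_toy, y_toy).coef_
print("coefficients that are exactly zero (OLS)  :", np.sum(ols_coef == 0))
print("coefficients that are exactly zero (lasso):", np.sum(lasso_coef == 0))
###Output
_____no_output_____
###Markdown
Lasso sets many of the irrelevant coefficients exactly to zero, matching our prior that most connections in the network are absent. Now fill in the exercise below.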
###Code
def get_regression_estimate(X, neuron_idx):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
neuron_idx (int): a neuron index to compute connectivity for
Returns:
V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).
if neuron_idx is specified, V is of shape (n_neurons,).
"""
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[[neuron_idx], 1:].transpose()
# Apply inverse sigmoid transformation
Y = logit(Y)
############################################################################
## TODO: Insert your code here to fit a regressor with Lasso. Lasso captures
## our assumption that most connections are precisely 0.
## Fill in function and remove
raise NotImplementedError("Please complete the regression exercise")
############################################################################
# Initialize regression model with no intercept and alpha=0.01
regression = ...
# Fit regression to the data
regression.fit(...)
V = regression.coef_
return V
# Parameters
n_neurons = 50 # the size of our system
timesteps = 10000 # the number of timesteps to take
random_state = 42
neuron_idx = 1
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
# Uncomment below to test your function
# V = get_regression_estimate(X, neuron_idx)
#print("Regression: correlation of estimated connectivity with true connectivity: {:.3f}".format(np.corrcoef(A[neuron_idx, :], V)[1,0]))
#print("Lagged correlation of estimated connectivity with true connectivity: {:.3f}".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))
# to_remove solution
def get_regression_estimate(X, neuron_idx):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
neuron_idx (int): a neuron index to compute connectivity for
Returns:
V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).
if neuron_idx is specified, V is of shape (n_neurons,).
"""
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[[neuron_idx], 1:].transpose()
# Apply inverse sigmoid transformation
Y = logit(Y)
# Initialize regression model with no intercept and alpha=0.01
regression = Lasso(fit_intercept=False, alpha=0.01)
# Fit regression to the data
regression.fit(W, Y)
V = regression.coef_
return V
# Parameters
n_neurons = 50 # the size of our system
timesteps = 10000 # the number of timesteps to take
random_state = 42
neuron_idx = 1
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
# Uncomment below to test your function
V = get_regression_estimate(X, neuron_idx)
print("Regression: correlation of estimated connectivity with true connectivity: {:.3f}".format(np.corrcoef(A[neuron_idx, :], V)[1,0]))
print("Lagged correlation of estimated connectivity with true connectivity: {:.3f}".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))
###Output
Regression: correlation of estimated connectivity with true connectivity: 0.865
###Markdown
You should find that the regression estimate of the connectivity has a correlation of 0.865 with the true connectivity matrix, whereas the correlation-based estimate has a correlation of only 0.703 with the true connectivity matrix. We can see from these numbers that multiple regression is better than simple correlation for estimating connectivity.

--- Section 2: Omitted Variable Bias

If we are unable to observe the entire system, **omitted variable bias** becomes a problem. If we don't have access to all the neurons, and therefore can't control for them, can we still estimate the causal effect accurately?

Section 2.1: Visualizing subsets of the connectivity matrix

We first visualize different subsets of the connectivity matrix when we observe 75% of the neurons vs 25%.

Recall the meaning of entries in our connectivity matrix: $A[i,j] = 1$ means a connection **from** neuron $i$ **to** neuron $j$ with strength $1$.
###Code
#@markdown Execute this cell to visualize subsets of connectivity matrix
# Run this cell to visualize the subsets of variables we observe
n_neurons = 25
A = create_connectivity(n_neurons)
fig, axs = plt.subplots(2,2, figsize=(10,10))
ratio_observed = [0.75, 0.25] # the proportion of neurons observed in our system
for i, ratio in enumerate(ratio_observed):
sel_idx = int(n_neurons*ratio)
offset = np.zeros((n_neurons, n_neurons))
axs[i,1].title.set_text("{}% neurons observed".format(int(ratio * 100)))
offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]
im = axs[i, 1].imshow(offset, cmap="coolwarm", vmin=0, vmax=A.max()+1)
axs[i, 1].set_xlabel("Connectivity from")
axs[i, 1].set_ylabel("Connectivity to")
plt.colorbar(im, ax=axs[i,1],fraction=0.046, pad=0.04)
see_neurons(A,axs[i,0],ratio)
plt.suptitle("Visualizing subsets of the connectivity matrix", y = 1.05)
plt.show()
###Output
_____no_output_____
###Markdown
Section 2.2: Effects of partial observability
###Code
#@title Video 3: Omitted variable bias
# Insert the ID of the corresponding youtube video
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1ov411i7dc', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1ov411i7dc
###Markdown
**Video correction**: the labels "connectivity from"/"connectivity to" are swapped in the video but fixed in the figures/demos below.

Interactive Demo: Regression performance as a function of the number of observed neurons

We will first change the number of observed neurons in the network and inspect the resulting estimates of connectivity in this interactive demo. How does the estimated connectivity differ?

**Note:** the plots will take a moment or so to update after moving the slider.
###Code
#@markdown Execute this cell to enable demo
n_neurons = 50
A = create_connectivity(n_neurons, random_state=42)
X = simulate_neurons(A, 4000, random_state=42)
reg_args = {
"fit_intercept": False,
"alpha": 0.001
}
@widgets.interact
def plot_observed(n_observed=(5,n_neurons,5)):
to_neuron = 0
fig, axs = plt.subplots(1,3, figsize=(15,5))
sel_idx = n_observed
ratio = (n_observed)/n_neurons
offset = np.zeros((n_neurons, n_neurons))
axs[0].title.set_text("{}% neurons observed".format(int(ratio * 100)))
offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]
im = axs[1].imshow(offset, cmap="coolwarm", vmin=0, vmax=A.max()+1)
plt.colorbar(im, ax=axs[1],fraction=0.046, pad=0.04)
see_neurons(A,axs[0],ratio, False)
corr, R = get_regression_corr_full_connectivity(n_neurons,
A,
X,
ratio,
reg_args)
#rect = patches.Rectangle((-.5,to_neuron-.5),n_observed,1,linewidth=2,edgecolor='k',facecolor='none')
#axs[1].add_patch(rect)
big_R = np.zeros(A.shape)
big_R[:sel_idx, :sel_idx] = 1 + R
#big_R[to_neuron, :sel_idx] = 1 + R
im = axs[2].imshow(big_R, cmap="coolwarm", vmin=0, vmax=A.max()+1)
plt.colorbar(im, ax=axs[2],fraction=0.046, pad=0.04)
c = 'w' if n_observed<(n_neurons-3) else 'k'
axs[2].text(0,n_observed+3,"Correlation : {:.2f}".format(corr), color=c,size=15)
#axs[2].axis("off")
axs[1].title.set_text("True connectivity")
axs[1].set_xlabel("Connectivity from")
axs[1].set_ylabel("Connectivity to")
axs[2].title.set_text("Estimated connectivity")
axs[2].set_xlabel("Connectivity from")
#axs[2].set_ylabel("Connectivity to")
###Output
_____no_output_____
###Markdown
Next, we will inspect a plot of the correlation between the true and estimated connectivity matrices vs the percent of neurons observed, over multiple trials. What relationship do you see between performance and the number of neurons observed?

**Note:** the cell below will take about 25-30 seconds to run.
###Code
#@title
#@markdown Plot correlation vs. subsampling
import warnings
warnings.filterwarnings('ignore')
# we'll simulate many systems for various ratios of observed neurons
n_neurons = 50
timesteps = 5000
ratio_observed = [1, 0.75, 0.5, .25, .12] # the proportion of neurons observed in our system
n_trials = 3 # run it this many times to get variability in our results
reg_args = {
"fit_intercept": False,
"alpha": 0.001
}
corr_data = np.zeros((n_trials, len(ratio_observed)))
for trial in range(n_trials):
A = create_connectivity(n_neurons, random_state=trial)
X = simulate_neurons(A, timesteps)
print("simulating trial {} of {}".format(trial+1, n_trials))
for j, ratio in enumerate(ratio_observed):
result,_ = get_regression_corr_full_connectivity(n_neurons,
A,
X,
ratio,
reg_args)
corr_data[trial, j] = result
corr_mean = np.nanmean(corr_data, axis=0)
corr_std = np.nanstd(corr_data, axis=0)
plt.plot(np.asarray(ratio_observed)*100, corr_mean)
plt.fill_between(np.asarray(ratio_observed)*100,
corr_mean - corr_std,
corr_mean + corr_std,
alpha=.2)
plt.xlim([100, 10])
plt.xlabel("Percent of neurons observed")
plt.ylabel("connectivity matrices correlation")
plt.title("Performance of regression as a function of the number of neurons observed");
###Output
simulating trial 1 of 3
###Markdown
--- Summary
###Code
#@title Video 4: Summary
# Insert the ID of the corresponding youtube video
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1bh411o73r', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1bh411o73r
###Markdown
Neuromatch Academy 2020 -- Week 3 Day 3 Tutorial 3 Causality Day - Simultaneous fitting/regression

**Content creators**: Ari Benjamin, Tony Liu, Konrad Kording
**Content reviewers**: Mike X Cohen, Madineh Sarvestani, Ella Batty, Michael Waskom

--- Tutorial objectives

This is tutorial 3 on our day of examining causality. Below is the high-level outline of what we'll cover today, with the sections we will focus on in this notebook in bold:

1. Master definitions of causality
2. Understand that estimating causality is possible
3. Learn 4 different methods and understand when they fail
   1. perturbations
   2. correlations
   3. **simultaneous fitting/regression**
   4. instrumental variables

Notebook 3 objectives

In tutorial 2 we explored correlation as an approximation for causation and learned that correlation $\neq$ causation for larger networks. However, computing correlations is a rather simple approach, and you may be wondering: will more sophisticated techniques allow us to better estimate causality? Can't we control for things?

Here we'll use some common advanced (but controversial) methods that estimate causality from observational data. These methods rely on fitting a function to our data directly, instead of trying to use perturbations or correlations. Since we have the full closed-form equation of our system, we can try these methods and see how well they work in estimating causal connectivity when there are no perturbations. Specifically, we will:

- Learn about more advanced (but also controversial) techniques for estimating causality
  - conditional probabilities (**regression**)
- Explore limitations and failure modes
  - understand the problem of **omitted variable bias**

--- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.multioutput import MultiOutputRegressor
from sklearn.linear_model import Lasso
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def sigmoid(x):
"""
Compute sigmoid nonlinearity element-wise on x.
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with sigmoid nonlinearity applied
"""
return 1 / (1 + np.exp(-x))
def logit(x):
"""
Applies the logit (inverse sigmoid) transformation
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with logit nonlinearity applied
"""
return np.log(x/(1-x))
def create_connectivity(n_neurons, random_state=42, p=0.9):
"""
Generate our nxn causal connectivity matrix.
Args:
n_neurons (int): the number of neurons in our system.
random_state (int): random seed for reproducibility
Returns:
A (np.ndarray): our 0.1 sparse connectivity matrix
"""
np.random.seed(random_state)
A_0 = np.random.choice([0, 1], size=(n_neurons, n_neurons), p=[p, 1 - p])
# set the timescale of the dynamical system to about 100 steps
_, s_vals, _ = np.linalg.svd(A_0)
A = A_0 / (1.01 * s_vals[0])
# _, s_val_test, _ = np.linalg.svd(A)
# assert s_val_test[0] < 1, "largest singular value >= 1"
return A
def get_regression_estimate_full_connectivity(X):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
Returns:
V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).
"""
n_neurons = X.shape[0]
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[:, 1:].transpose()
# apply inverse sigmoid transformation
Y = logit(Y)
# fit multioutput regression
reg = MultiOutputRegressor(Lasso(fit_intercept=False,
alpha=0.01, max_iter=250 ), n_jobs=-1)
reg.fit(W, Y)
V = np.zeros((n_neurons, n_neurons))
for i, estimator in enumerate(reg.estimators_):
V[i, :] = estimator.coef_
return V
def get_regression_corr_full_connectivity(n_neurons, A, X, observed_ratio, regression_args):
"""
A wrapper function for our correlation calculations between A and the V estimated
from regression.
Args:
n_neurons (int): number of neurons
A (np.ndarray): connectivity matrix
X (np.ndarray): dynamical system
observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.
regression_args (dict): dictionary of lasso regression arguments and hyperparameters
Returns:
A single float correlation value representing the similarity between A and R
"""
assert (observed_ratio > 0) and (observed_ratio <= 1)
sel_idx = np.clip(int(n_neurons*observed_ratio), 1, n_neurons)
sel_X = X[:sel_idx, :]
sel_A = A[:sel_idx, :sel_idx]
sel_V = get_regression_estimate_full_connectivity(sel_X)
return np.corrcoef(sel_A.flatten(), sel_V.flatten())[1,0], sel_V
def see_neurons(A, ax, ratio_observed=1, arrows=True):
"""
Visualizes the connectivity matrix.
Args:
A (np.ndarray): the connectivity matrix of shape (n_neurons, n_neurons)
ax (plt.axis): the matplotlib axis to display on
Returns:
Nothing, but visualizes A.
"""
n = len(A)
ax.set_aspect('equal')
thetas = np.linspace(0, np.pi * 2, n, endpoint=False)
x, y = np.cos(thetas), np.sin(thetas),
if arrows:
for i in range(n):
for j in range(n):
if A[i, j] > 0:
ax.arrow(x[i], y[i], x[j] - x[i], y[j] - y[i], color='k', head_width=.05,
width = A[i, j] / 25,shape='right', length_includes_head=True,
alpha = .2)
if ratio_observed < 1:
nn = int(n * ratio_observed)
ax.scatter(x[:nn], y[:nn], c='r', s=150, label='Observed')
ax.scatter(x[nn:], y[nn:], c='b', s=150, label='Unobserved')
ax.legend(fontsize=15)
else:
ax.scatter(x, y, c='k', s=150)
ax.axis('off')
def simulate_neurons(A, timesteps, random_state=42):
"""
Simulates a dynamical system for the specified number of neurons and timesteps.
Args:
A (np.array): the connectivity matrix
timesteps (int): the number of timesteps to simulate our system.
random_state (int): random seed for reproducibility
Returns:
- X (np.ndarray): the simulated system, of shape (n_neurons, timesteps).
"""
np.random.seed(random_state)
n_neurons = len(A)
X = np.zeros((n_neurons, timesteps))
for t in range(timesteps - 1):
# solution
epsilon = np.random.multivariate_normal(np.zeros(n_neurons), np.eye(n_neurons))
X[:, t + 1] = sigmoid(A.dot(X[:, t]) + epsilon)
assert epsilon.shape == (n_neurons,)
return X
def correlation_for_all_neurons(X):
"""Computes the connectivity matrix for the all neurons using correlations
Args:
X: the matrix of activities
Returns:
R (np.ndarray): the estimated connectivity matrix computed from lagged correlations, of shape (n_neurons, n_neurons)
"""
n_neurons = len(X)
S = np.concatenate([X[:, 1:], X[:, :-1]], axis=0)
R = np.corrcoef(S)[:n_neurons, n_neurons:]
return R
def get_sys_corr(n_neurons, timesteps, random_state=42, neuron_idx=None):
"""
A wrapper function for our correlation calculations between A and R.
Args:
n_neurons (int): the number of neurons in our system.
timesteps (int): the number of timesteps to simulate our system.
random_state (int): seed for reproducibility
neuron_idx (int): optionally provide a neuron idx to slice out
Returns:
A single float correlation value representing the similarity between A and R
"""
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
R = correlation_for_all_neurons(X)
return np.corrcoef(A.flatten(), R.flatten())[0, 1]
def get_regression_corr(n_neurons, A, X, observed_ratio, regression_args, neuron_idx=None):
"""
A wrapper function for our correlation calculations between A and the V estimated
from regression.
Args:
n_neurons (int): the number of neurons in our system.
A (np.array): the true connectivity
X (np.array): the simulated system
observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.
regression_args (dict): dictionary of lasso regression arguments and hyperparameters
neuron_idx (int): optionally provide a neuron idx to compute connectivity for
Returns:
A single float correlation value representing the similarity between A and R
"""
assert (observed_ratio > 0) and (observed_ratio <= 1)
sel_idx = np.clip(int(n_neurons * observed_ratio), 1, n_neurons)
selected_X = X[:sel_idx, :]
selected_connectivity = A[:sel_idx, :sel_idx]
estimated_selected_connectivity = get_regression_estimate(selected_X, neuron_idx=neuron_idx)
if neuron_idx is None:
return np.corrcoef(selected_connectivity.flatten(),
estimated_selected_connectivity.flatten())[1, 0], estimated_selected_connectivity
else:
return np.corrcoef(selected_connectivity[neuron_idx, :],
estimated_selected_connectivity)[1, 0], estimated_selected_connectivity
def plot_connectivity_matrix(A, ax=None):
"""Plot the (weighted) connectivity matrix A as a heatmap
Args:
A (ndarray): connectivity matrix (n_neurons by n_neurons)
ax: axis on which to display connectivity matrix
"""
if ax is None:
ax = plt.gca()
lim = np.abs(A).max()
ax.imshow(A, vmin=-lim, vmax=lim, cmap="coolwarm")
###Output
_____no_output_____
###Markdown
--- Section 1: Regression
###Code
#@title Video 1: Regression approach
# Insert the ID of the corresponding youtube video
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1m54y1q78b', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1m54y1q78b
###Markdown
You may be familiar with the idea that correlation only implies causation when there are no hidden *confounders*. This aligns with our intuition that correlation only implies causality when no alternative variables could explain away a correlation.

**A confounding example**: Suppose you observe that people who sleep more do better in school. It's a nice correlation. But what else could explain it? Maybe people who sleep more are richer, don't work a second job, and have time to actually do homework. If you want to ask whether sleep *causes* better grades, and want to answer that with correlations, you have to control for all possible confounds. A confound is any variable that affects both the outcome and your original covariate. In our example, confounds are things that affect both sleep and grades.

**Controlling for a confound**: Confounds can be controlled for by adding them as covariates in a regression. But for your coefficients to be causal effects, you need three things:

1. **All** confounds are included as covariates.
2. Your regression assumes the correct mathematical form for how covariates relate to outcomes (linear, GLM, etc.).
3. No covariates are caused *by* both the treatment (original variable) and the outcome. These are [colliders](https://en.wikipedia.org/wiki/Collider_(statistics)); we won't introduce them today (but Google them on your own time! Colliders are very counterintuitive.)

In the real world it is very hard to guarantee these conditions are met. In the brain it's even harder (as we can't measure all neurons). Luckily, today we simulated the system ourselves.
###Code
#@title Video 2: Fitting a GLM
# Insert the ID of the corresponding youtube video
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV16p4y1S7yE', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV16p4y1S7yE
###Markdown
Section 1.1: Recovering connectivity by model fitting

Recall that in our system each neuron affects every other via:

$$\vec{x}_{t+1} = \sigma(A\vec{x}_t + \epsilon_t),$$

where $\sigma$ is our sigmoid nonlinearity from before: $\sigma(x) = \frac{1}{1 + e^{-x}}$.

Our system is a closed system, so there are no omitted variables. The regression coefficients should therefore be the causal effects. Are they?

We will use a regression approach to estimate the causal influence of all neurons on neuron 1. Specifically, we will use linear regression to determine the $A$ in:

$$\sigma^{-1}(\vec{x}_{t+1}) = A\vec{x}_t + \epsilon_t ,$$

where $\sigma^{-1}$ is the inverse sigmoid transformation, also sometimes referred to as the **logit** transformation: $\sigma^{-1}(x) = \log(\frac{x}{1-x})$.

Let $W$ be the $\vec{x}_t$ values, up to the second-to-last timestep $T-1$:

$$W = \begin{bmatrix}\mid & \mid & ... & \mid \\ \vec{x}_0 & \vec{x}_1 & ... & \vec{x}_{T-1} \\ \mid & \mid & ... & \mid\end{bmatrix}_{n \times (T-1)}$$

Let $Y$ be the $\vec{x}_{t+1}$ values for a selected neuron, indexed by $i$, starting from the second timestep up to the last timestep $T$:

$$Y = \begin{bmatrix}x_{i,1} & x_{i,2} & ... & x_{i, T} \\ \end{bmatrix}_{1 \times (T-1)}$$

You will then fit the following model:

$$\sigma^{-1}(Y^T) = W^TV$$

where $V$ is the $n \times 1$ coefficient matrix of this regression, which will be the estimated connectivity between the selected neuron and the rest of the neurons.

**Review**: As you learned on Friday of Week 1, *lasso*, a.k.a. **$L_1$ regularization**, causes the coefficients to be sparse, containing mostly zeros. Think about why we want this here.

Exercise 1: Use linear regression plus lasso to estimate causal connectivities

You will now create a function that fits the above regression model and returns $V$. We will then call this function to compare how close the regression estimate and the correlation estimate each come to the true causal connectivity.

**Code**: You'll notice that we've transposed both $Y$ and $W$ here and in the code we've already provided below. Why is that? The machine learning models provided in scikit-learn expect the *rows* of the input data to be the observations, while the *columns* are the variables. We have that inverted in our definitions of $Y$ and $W$, with the timesteps of our system (the observations) as the columns. So we transpose both matrices to make the matrix orientation correct for scikit-learn.

- Because of the abstraction provided by scikit-learn, fitting this regression is just a call to initialize the `Lasso()` estimator and a call to its `fit()` method.
- Use the following hyperparameters for the `Lasso` estimator:
  - `alpha = 0.01`
  - `fit_intercept = False`
- How do we obtain $V$ from the fitted model?
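As a quick orientation check (optional, not part of the exercise), the cell below builds $W$ and $Y$ exactly as described above and prints their shapes, so you can see that after transposing, rows are timesteps (observations) and columns are neurons (variables), which is what scikit-learn expects. It assumes the `create_connectivity` and `simulate_neurons` helpers from the Setup section.
###Code
# Optional shape check: after transposing, rows are observations (timesteps) and columns are variables (neurons).
A_check = create_connectivity(50, random_state=42)
X_check = simulate_neurons(A_check, 1000)
W_check = X_check[:, :-1].transpose()    # all neurons, all but the last timestep
Y_check = X_check[[1], 1:].transpose()   # neuron 1, all but the first timestep
print("X:", X_check.shape)  # (n_neurons, timesteps)
print("W:", W_check.shape)  # (timesteps - 1, n_neurons)
print("Y:", Y_check.shape)  # (timesteps - 1, 1)
###Output
_____no_output_____
###Markdown
With the observations along the rows, `W_check` and `Y_check` can be passed directly to a scikit-learn estimator's `fit` method. Now fill in the exercise below.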
###Code
def get_regression_estimate(X, neuron_idx):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
neuron_idx (int): a neuron index to compute connectivity for
Returns:
V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).
if neuron_idx is specified, V is of shape (n_neurons,).
"""
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[[neuron_idx], 1:].transpose()
# Apply inverse sigmoid transformation
Y = logit(Y)
############################################################################
## TODO: Insert your code here to fit a regressor with Lasso. Lasso captures
## our assumption that most connections are precisely 0.
## Fill in function and remove
raise NotImplementedError("Please complete the regression exercise")
############################################################################
# Initialize regression model with no intercept and alpha=0.01
regression = ...
# Fit regression to the data
regression.fit(...)
V = regression.coef_
return V
# Parameters
n_neurons = 50 # the size of our system
timesteps = 10000 # the number of timesteps to take
random_state = 42
neuron_idx = 1
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
# Uncomment below to test your function
# V = get_regression_estimate(X, neuron_idx)
#print("Regression: correlation of estimated connectivity with true connectivity: {:.3f}".format(np.corrcoef(A[neuron_idx, :], V)[1, 0]))
#print("Lagged correlation of estimated connectivity with true connectivity: {:.3f}".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))
# to_remove solution
def get_regression_estimate(X, neuron_idx):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
neuron_idx (int): a neuron index to compute connectivity for
Returns:
V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).
if neuron_idx is specified, V is of shape (n_neurons,).
"""
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[[neuron_idx], 1:].transpose()
# Apply inverse sigmoid transformation
Y = logit(Y)
# Initialize regression model with no intercept and alpha=0.01
regression = Lasso(fit_intercept=False, alpha=0.01)
# Fit regression to the data
regression.fit(W, Y)
V = regression.coef_
return V
# Parameters
n_neurons = 50 # the size of our system
timesteps = 10000 # the number of timesteps to take
random_state = 42
neuron_idx = 1
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
# Uncomment below to test your function
V = get_regression_estimate(X, neuron_idx)
print("Regression: correlation of estimated connectivity with true connectivity: {:.3f}".format(np.corrcoef(A[neuron_idx, :], V)[1, 0]))
print("Lagged correlation of estimated connectivity with true connectivity: {:.3f}".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))
###Output
Regression: correlation of estimated connectivity with true connectivity: 0.865
###Markdown
You should find that the regression estimate of the connectivity has a correlation of 0.865 with the true connectivity matrix, whereas the correlation-based estimate has a correlation of only 0.703 with the true connectivity matrix. We can see from these numbers that multiple regression is better than simple correlation for estimating connectivity.

--- Section 2: Omitted Variable Bias

If we are unable to observe the entire system, **omitted variable bias** becomes a problem. If we don't have access to all the neurons, and therefore can't control for them, can we still estimate the causal effect accurately?

Section 2.1: Visualizing subsets of the connectivity matrix

We first visualize different subsets of the connectivity matrix when we observe 75% of the neurons vs 25%.

Recall the meaning of entries in our connectivity matrix: $A[i,j] = 1$ means a connection **from** neuron $i$ **to** neuron $j$ with strength $1$.
###Code
#@markdown Execute this cell to visualize subsets of connectivity matrix
# Run this cell to visualize the subsets of variables we observe
n_neurons = 25
A = create_connectivity(n_neurons)
fig, axs = plt.subplots(2, 2, figsize=(10, 10))
ratio_observed = [0.75, 0.25] # the proportion of neurons observed in our system
for i, ratio in enumerate(ratio_observed):
sel_idx = int(n_neurons * ratio)
offset = np.zeros((n_neurons, n_neurons))
axs[i,1].title.set_text("{}% neurons observed".format(int(ratio * 100)))
offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]
im = axs[i, 1].imshow(offset, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
axs[i, 1].set_xlabel("Connectivity from")
axs[i, 1].set_ylabel("Connectivity to")
plt.colorbar(im, ax=axs[i, 1], fraction=0.046, pad=0.04)
see_neurons(A,axs[i, 0],ratio)
plt.suptitle("Visualizing subsets of the connectivity matrix", y = 1.05)
plt.show()
###Output
_____no_output_____
###Markdown
Section 2.2: Effects of partial observability
###Code
#@title Video 3: Omitted variable bias
# Insert the ID of the corresponding youtube video
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1ov411i7dc', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1ov411i7dc
###Markdown
**Video correction**: the labels "connectivity from"/"connectivity to" are swapped in the video but fixed in the figures/demos below.

Interactive Demo: Regression performance as a function of the number of observed neurons

We will first change the number of observed neurons in the network and inspect the resulting estimates of connectivity in this interactive demo. How does the estimated connectivity differ?

**Note:** the plots will take a moment or so to update after moving the slider.
###Code
#@markdown Execute this cell to enable demo
n_neurons = 50
A = create_connectivity(n_neurons, random_state=42)
X = simulate_neurons(A, 4000, random_state=42)
reg_args = {
"fit_intercept": False,
"alpha": 0.001
}
@widgets.interact
def plot_observed(n_observed=(5, 45, 5)):
to_neuron = 0
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
sel_idx = n_observed
ratio = (n_observed) / n_neurons
offset = np.zeros((n_neurons, n_neurons))
axs[0].title.set_text("{}% neurons observed".format(int(ratio * 100)))
offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]
im = axs[1].imshow(offset, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
plt.colorbar(im, ax=axs[1], fraction=0.046, pad=0.04)
see_neurons(A,axs[0], ratio, False)
corr, R = get_regression_corr_full_connectivity(n_neurons,
A,
X,
ratio,
reg_args)
#rect = patches.Rectangle((-.5,to_neuron-.5),n_observed,1,linewidth=2,edgecolor='k',facecolor='none')
#axs[1].add_patch(rect)
big_R = np.zeros(A.shape)
big_R[:sel_idx, :sel_idx] = 1 + R
#big_R[to_neuron, :sel_idx] = 1 + R
im = axs[2].imshow(big_R, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
plt.colorbar(im, ax=axs[2],fraction=0.046, pad=0.04)
c = 'w' if n_observed<(n_neurons-3) else 'k'
axs[2].text(0,n_observed+3,"Correlation : {:.2f}".format(corr), color=c, size=15)
#axs[2].axis("off")
axs[1].title.set_text("True connectivity")
axs[1].set_xlabel("Connectivity from")
axs[1].set_ylabel("Connectivity to")
axs[2].title.set_text("Estimated connectivity")
axs[2].set_xlabel("Connectivity from")
#axs[2].set_ylabel("Connectivity to")
###Output
_____no_output_____
###Markdown
Next, we will inspect a plot of the correlation between the true and estimated connectivity matrices vs the percent of neurons observed, over multiple trials. What relationship do you see between performance and the number of neurons observed?

**Note:** the cell below will take about 25-30 seconds to run.
###Code
#@title
#@markdown Plot correlation vs. subsampling
import warnings
warnings.filterwarnings('ignore')
# we'll simulate many systems for various ratios of observed neurons
n_neurons = 50
timesteps = 5000
ratio_observed = [1, 0.75, 0.5, .25, .12] # the proportion of neurons observed in our system
n_trials = 3 # run it this many times to get variability in our results
reg_args = {
"fit_intercept": False,
"alpha": 0.001
}
corr_data = np.zeros((n_trials, len(ratio_observed)))
for trial in range(n_trials):
A = create_connectivity(n_neurons, random_state=trial)
X = simulate_neurons(A, timesteps)
print("simulating trial {} of {}".format(trial + 1, n_trials))
for j, ratio in enumerate(ratio_observed):
result,_ = get_regression_corr_full_connectivity(n_neurons,
A,
X,
ratio,
reg_args)
corr_data[trial, j] = result
corr_mean = np.nanmean(corr_data, axis=0)
corr_std = np.nanstd(corr_data, axis=0)
plt.plot(np.asarray(ratio_observed) * 100, corr_mean)
plt.fill_between(np.asarray(ratio_observed) * 100,
corr_mean - corr_std,
corr_mean + corr_std,
alpha=.2)
plt.xlim([100, 10])
plt.xlabel("Percent of neurons observed")
plt.ylabel("connectivity matrices correlation")
plt.title("Performance of regression as a function of the number of neurons observed");
###Output
simulating trial 1 of 3
###Markdown
--- Summary
###Code
#@title Video 4: Summary
# Insert the ID of the corresponding youtube video
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1bh411o73r', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1bh411o73r
###Markdown
Neuromatch Academy 2020 -- Week 3 Day 3 Tutorial 3 Causality Day - Simultaneous fitting/regression**Content creators**: Ari Benjamin, Tony Liu, Konrad Kording**Content reviewers**: Mike X Cohen, Madineh Sarvestani, Ella Batty, Michael Waskom --- Tutorial objectivesThis is tutorial 3 on our day of examining causality. Below is the high level outline of what we'll cover today, with the sections we will focus on in this notebook in bold:1. Master definitions of causality2. Understand that estimating causality is possible3. Learn 4 different methods and understand when they fail 1. perturbations 2. correlations 3. **simultaneous fitting/regression** 4. instrumental variables Notebook 3 objectivesIn tutorial 2 we explored correlation as an approximation for causation and learned that correlation $\neq$ causation for larger networks. However, computing correlations is a rather simple approach, and you may be wondering: will more sophisticated techniques allow us to better estimate causality? Can't we control for things? Here we'll use some common advanced (but controversial) methods that estimate causality from observational data. These methods rely on fitting a function to our data directly, instead of trying to use perturbations or correlations. Since we have the full closed-form equation of our system, we can try these methods and see how well they work in estimating causal connectivity when there are no perturbations. Specifically, we will:- Learn about more advanced (but also controversial) techniques for estimating causality - conditional probabilities (**regression**)- Explore limitations and failure modes - understand the problem of **omitted variable bias** --- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.multioutput import MultiOutputRegressor
from sklearn.linear_model import Lasso
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def sigmoid(x):
"""
Compute sigmoid nonlinearity element-wise on x.
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with sigmoid nonlinearity applied
"""
return 1 / (1 + np.exp(-x))
def logit(x):
"""
Applies the logit (inverse sigmoid) transformation
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with logit nonlinearity applied
"""
return np.log(x/(1-x))
def create_connectivity(n_neurons, random_state=42, p=0.9):
"""
Generate our nxn causal connectivity matrix.
Args:
n_neurons (int): the number of neurons in our system.
        random_state (int): random seed for reproducibility
        p (float): probability that any given connection is absent, so 1 - p is the connection density
    Returns:
        A (np.ndarray): our sparse connectivity matrix (10% dense by default)
"""
np.random.seed(random_state)
A_0 = np.random.choice([0, 1], size=(n_neurons, n_neurons), p=[p, 1 - p])
# set the timescale of the dynamical system to about 100 steps
_, s_vals, _ = np.linalg.svd(A_0)
A = A_0 / (1.01 * s_vals[0])
# _, s_val_test, _ = np.linalg.svd(A)
# assert s_val_test[0] < 1, "largest singular value >= 1"
return A
def get_regression_estimate_full_connectivity(X):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
    Returns:
        V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).
"""
n_neurons = X.shape[0]
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[:, 1:].transpose()
# apply inverse sigmoid transformation
Y = logit(Y)
# fit multioutput regression
reg = MultiOutputRegressor(Lasso(fit_intercept=False,
alpha=0.01, max_iter=250 ), n_jobs=-1)
reg.fit(W, Y)
V = np.zeros((n_neurons, n_neurons))
for i, estimator in enumerate(reg.estimators_):
V[i, :] = estimator.coef_
return V
def get_regression_corr_full_connectivity(n_neurons, A, X, observed_ratio, regression_args):
"""
A wrapper function for our correlation calculations between A and the V estimated
from regression.
Args:
n_neurons (int): number of neurons
A (np.ndarray): connectivity matrix
X (np.ndarray): dynamical system
        observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.
regression_args (dict): dictionary of lasso regression arguments and hyperparameters
Returns:
A single float correlation value representing the similarity between A and R
"""
assert (observed_ratio > 0) and (observed_ratio <= 1)
sel_idx = np.clip(int(n_neurons*observed_ratio), 1, n_neurons)
sel_X = X[:sel_idx, :]
sel_A = A[:sel_idx, :sel_idx]
sel_V = get_regression_estimate_full_connectivity(sel_X)
return np.corrcoef(sel_A.flatten(), sel_V.flatten())[1,0], sel_V
def see_neurons(A, ax, ratio_observed=1, arrows=True):
"""
Visualizes the connectivity matrix.
Args:
A (np.ndarray): the connectivity matrix of shape (n_neurons, n_neurons)
ax (plt.axis): the matplotlib axis to display on
Returns:
Nothing, but visualizes A.
"""
n = len(A)
ax.set_aspect('equal')
thetas = np.linspace(0, np.pi * 2, n, endpoint=False)
x, y = np.cos(thetas), np.sin(thetas),
if arrows:
for i in range(n):
for j in range(n):
if A[i, j] > 0:
ax.arrow(x[i], y[i], x[j] - x[i], y[j] - y[i], color='k', head_width=.05,
width = A[i, j] / 25,shape='right', length_includes_head=True,
alpha = .2)
if ratio_observed < 1:
nn = int(n * ratio_observed)
ax.scatter(x[:nn], y[:nn], c='r', s=150, label='Observed')
ax.scatter(x[nn:], y[nn:], c='b', s=150, label='Unobserved')
ax.legend(fontsize=15)
else:
ax.scatter(x, y, c='k', s=150)
ax.axis('off')
def simulate_neurons(A, timesteps, random_state=42):
"""
Simulates a dynamical system for the specified number of neurons and timesteps.
Args:
A (np.array): the connectivity matrix
timesteps (int): the number of timesteps to simulate our system.
random_state (int): random seed for reproducibility
Returns:
        - X has shape (n_neurons, timesteps).
"""
np.random.seed(random_state)
n_neurons = len(A)
X = np.zeros((n_neurons, timesteps))
for t in range(timesteps - 1):
# solution
epsilon = np.random.multivariate_normal(np.zeros(n_neurons), np.eye(n_neurons))
X[:, t + 1] = sigmoid(A.dot(X[:, t]) + epsilon)
assert epsilon.shape == (n_neurons,)
return X
def correlation_for_all_neurons(X):
"""Computes the connectivity matrix for the all neurons using correlations
Args:
X: the matrix of activities
Returns:
estimated_connectivity (np.ndarray): estimated connectivity for the selected neuron, of shape (n_neurons,)
"""
n_neurons = len(X)
S = np.concatenate([X[:, 1:], X[:, :-1]], axis=0)
R = np.corrcoef(S)[:n_neurons, n_neurons:]
return R
def get_sys_corr(n_neurons, timesteps, random_state=42, neuron_idx=None):
"""
A wrapper function for our correlation calculations between A and R.
Args:
n_neurons (int): the number of neurons in our system.
timesteps (int): the number of timesteps to simulate our system.
random_state (int): seed for reproducibility
neuron_idx (int): optionally provide a neuron idx to slice out
Returns:
A single float correlation value representing the similarity between A and R
"""
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
R = correlation_for_all_neurons(X)
return np.corrcoef(A.flatten(), R.flatten())[0, 1]
def get_regression_corr(n_neurons, A, X, observed_ratio, regression_args, neuron_idx=None):
"""
A wrapper function for our correlation calculations between A and the V estimated
from regression.
Args:
n_neurons (int): the number of neurons in our system.
A (np.array): the true connectivity
X (np.array): the simulated system
observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.
regression_args (dict): dictionary of lasso regression arguments and hyperparameters
neuron_idx (int): optionally provide a neuron idx to compute connectivity for
Returns:
A single float correlation value representing the similarity between A and R
"""
assert (observed_ratio > 0) and (observed_ratio <= 1)
sel_idx = np.clip(int(n_neurons * observed_ratio), 1, n_neurons)
selected_X = X[:sel_idx, :]
selected_connectivity = A[:sel_idx, :sel_idx]
estimated_selected_connectivity = get_regression_estimate(selected_X, neuron_idx=neuron_idx)
if neuron_idx is None:
return np.corrcoef(selected_connectivity.flatten(),
estimated_selected_connectivity.flatten())[1, 0], estimated_selected_connectivity
else:
return np.corrcoef(selected_connectivity[neuron_idx, :],
estimated_selected_connectivity)[1, 0], estimated_selected_connectivity
def plot_connectivity_matrix(A, ax=None):
"""Plot the (weighted) connectivity matrix A as a heatmap
Args:
A (ndarray): connectivity matrix (n_neurons by n_neurons)
ax: axis on which to display connectivity matrix
"""
if ax is None:
ax = plt.gca()
lim = np.abs(A).max()
ax.imshow(A, vmin=-lim, vmax=lim, cmap="coolwarm")
###Output
_____no_output_____
###Markdown
--- Section 1: Regression
###Code
#@title Video 1: Regression approach
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Av4LaXZdgDo", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/Av4LaXZdgDo
###Markdown
You may be familiar with the idea that correlation only implies causation when there are no hidden *confounders*. This aligns with our intuition that correlation only implies causality when no alternative variables could explain away a correlation. **A confounding example**: Suppose you observe that people who sleep more do better in school. It's a nice correlation. But what else could explain it? Maybe people who sleep more are richer, don't work a second job, and have time to actually do homework. If you want to ask if sleep *causes* better grades, and want to answer that with correlations, you have to control for all possible confounds. A confound is any variable that affects both the outcome and your original covariate. In our example, confounds are things that affect both sleep and grades. **Controlling for a confound**: Confounds can be controlled for by adding them as covariates in a regression. But for your coefficients to be causal effects, you need three things: 1. **All** confounds are included as covariates 2. Your regression assumes the same mathematical form of how covariates relate to outcomes (linear, GLM, etc.) 3. No covariates are caused *by* both the treatment (original variable) and the outcome. These are [colliders](https://en.wikipedia.org/wiki/Collider_(statistics)); we won't introduce them today (but Google them on your own time! Colliders are very counterintuitive.) In the real world it is very hard to guarantee these conditions are met. In the brain it's even harder (as we can't measure all neurons). Luckily, today we simulated the system ourselves.
###Code
#@title Video 2: Fitting a GLM
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GvMj9hRv5Ak", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/GvMj9hRv5Ak
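###Markdown
The sleep/grades story above can be made concrete in a few lines. This is a minimal, hypothetical sketch (the variables `wealth`, `sleep`, `grades` and every coefficient are invented for illustration and are not part of this tutorial's system): a confound drives both the covariate and the outcome, so a naive regression finds a spurious "effect" of sleep, while adding the confound as a covariate brings the coefficient back close to its true causal value of zero.
###Code
# Hypothetical confounding sketch: wealth drives both sleep and grades,
# but sleep has no causal effect on grades.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
wealth = rng.normal(size=n)                 # the confound
sleep = 0.5 * wealth + rng.normal(size=n)   # caused by wealth
grades = 0.8 * wealth + rng.normal(size=n)  # caused by wealth only

# Naive regression of grades on sleep: the coefficient is biased away from zero
naive = LinearRegression().fit(sleep.reshape(-1, 1), grades)

# Regression that controls for the confound: the sleep coefficient is near zero
adjusted = LinearRegression().fit(np.column_stack([sleep, wealth]), grades)

print("naive sleep coefficient: {:.2f}".format(naive.coef_[0]))
print("sleep coefficient controlling for wealth: {:.2f}".format(adjusted.coef_[0]))
###Output
_____no_output_____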
###Markdown
Section 1.1: Recovering connectivity by model fitting Recall that in our system each neuron affects every other via:$$\vec{x}_{t+1} = \sigma(A\vec{x}_t + \epsilon_t), $$where $\sigma$ is our sigmoid nonlinearity from before: $\sigma(x) = \frac{1}{1 + e^{-x}}$ Our system is a closed system, too, so there are no omitted variables. The regression coefficients should be the causal effect. Are they? We will use a regression approach to estimate the causal influence of all neurons on neuron 1. Specifically, we will use linear regression to determine the $A$ in:$$\sigma^{-1}(\vec{x}_{t+1}) = A\vec{x}_t + \epsilon_t ,$$where $\sigma^{-1}$ is the inverse sigmoid transformation, also sometimes referred to as the **logit** transformation: $\sigma^{-1}(x) = \log(\frac{x}{1-x})$. Let $W$ be the $\vec{x}_t$ values up to the second-to-last timestep $T-2$ (where $T$ is the total number of timesteps):$$W = \begin{bmatrix}\mid & \mid & ... & \mid \\ \vec{x}_0 & \vec{x}_1 & ... & \vec{x}_{T-2} \\ \mid & \mid & ... & \mid\end{bmatrix}_{n \times (T-1)}$$Let $Y$ be the $\vec{x}_{t+1}$ values for a selected neuron, indexed by $i$, starting from the second timestep up to the last timestep $T-1$:$$Y = \begin{bmatrix}x_{i,1} & x_{i,2} & ... & x_{i, T-1} \\ \end{bmatrix}_{1 \times (T-1)}$$You will then fit the following model:$$\sigma^{-1}(Y^T) = W^TV$$where $V$ is the $n \times 1$ coefficient vector of this regression, which will be the estimated connectivity between the selected neuron and the rest of the neurons.**Review**: As you learned Friday of Week 1, *lasso* a.k.a. **$L_1$ regularization** causes the coefficients to be sparse, containing mostly zeros. Think about why we want this here. Exercise 1: Use linear regression plus lasso to estimate causal connectivities You will now create a function that fits the above regression model and returns the estimated $V$. We will then call this function to examine how close the regression estimate, compared with the lagged-correlation estimate, comes to the true causal connectivity.**Code**: You'll notice that we've transposed both $Y$ and $W$ here and in the code we've already provided below. Why is that? This is because the machine learning models provided in scikit-learn expect the *rows* of the input data to be the observations, while the *columns* are the variables. We have that inverted in our definitions of $Y$ and $W$, with the timesteps of our system (the observations) as the columns. So we transpose both matrices to make the matrix orientation correct for scikit-learn. - Because of the abstraction provided by scikit-learn, fitting this regression will just be a call to initialize the `Lasso()` estimator and a call to the `fit()` function - Use the following hyperparameters for the `Lasso` estimator: - `alpha = 0.01` - `fit_intercept = False` - How do we obtain $V$ from the fitted model?
###Code
def get_regression_estimate(X, neuron_idx):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
neuron_idx (int): a neuron index to compute connectivity for
Returns:
        V (np.ndarray): estimated connectivity for the selected neuron, of shape (n_neurons,).
"""
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[[neuron_idx], 1:].transpose()
# Apply inverse sigmoid transformation
Y = logit(Y)
############################################################################
## TODO: Insert your code here to fit a regressor with Lasso. Lasso captures
## our assumption that most connections are precisely 0.
    ## Fill in the function and remove the NotImplementedError below
raise NotImplementedError("Please complete the regression exercise")
############################################################################
# Initialize regression model with no intercept and alpha=0.01
regression = ...
# Fit regression to the data
regression.fit(...)
V = regression.coef_
return V
# Parameters
n_neurons = 50 # the size of our system
timesteps = 10000 # the number of timesteps to take
random_state = 42
neuron_idx = 1
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
# Uncomment below to test your function
# V = get_regression_estimate(X, neuron_idx)
#print("Regression: correlation of estimated connectivity with true connectivity: {:.3f}".format(np.corrcoef(A[neuron_idx, :], V)[1, 0]))
#print("Lagged correlation of estimated connectivity with true connectivity: {:.3f}".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))
# to_remove solution
def get_regression_estimate(X, neuron_idx):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
neuron_idx (int): a neuron index to compute connectivity for
Returns:
        V (np.ndarray): estimated connectivity for the selected neuron, of shape (n_neurons,).
"""
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[[neuron_idx], 1:].transpose()
# Apply inverse sigmoid transformation
Y = logit(Y)
# Initialize regression model with no intercept and alpha=0.01
regression = Lasso(fit_intercept=False, alpha=0.01)
# Fit regression to the data
regression.fit(W, Y)
V = regression.coef_
return V
# Parameters
n_neurons = 50 # the size of our system
timesteps = 10000 # the number of timesteps to take
random_state = 42
neuron_idx = 1
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
# Uncomment below to test your function
V = get_regression_estimate(X, neuron_idx)
print("Regression: correlation of estimated connectivity with true connectivity: {:.3f}".format(np.corrcoef(A[neuron_idx, :], V)[1, 0]))
print("Lagged correlation of estimated connectivity with true connectivity: {:.3f}".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))
###Output
Regression: correlation of estimated connectivity with true connectivity: 0.865
###Markdown
You should find that using regression, our estimated connectivity matrix has a correlation of 0.865 with the true connectivity matrix, whereas the lagged-correlation estimate has a correlation of only 0.703 with the true connectivity matrix. We can see from these numbers that multiple regression is better than simple correlation for estimating connectivity. --- Section 2: Omitted Variable Bias If we are unable to observe the entire system, **omitted variable bias** becomes a problem. If we don't have access to all the neurons, and therefore can't control for them, can we still estimate the causal effect accurately? Section 2.1: Visualizing subsets of the connectivity matrix We first visualize different subsets of the connectivity matrix when we observe 75% of the neurons vs 25%. Recall the meaning of entries in our connectivity matrix: $A[i,j] = 1$ means a connectivity **from** neuron $i$ **to** neuron $j$ with strength $1$.
###Code
#@markdown Execute this cell to visualize subsets of connectivity matrix
# Run this cell to visualize the subsets of variables we observe
n_neurons = 25
A = create_connectivity(n_neurons)
fig, axs = plt.subplots(2, 2, figsize=(10, 10))
ratio_observed = [0.75, 0.25] # the proportion of neurons observed in our system
for i, ratio in enumerate(ratio_observed):
sel_idx = int(n_neurons * ratio)
offset = np.zeros((n_neurons, n_neurons))
axs[i,1].title.set_text("{}% neurons observed".format(int(ratio * 100)))
offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]
im = axs[i, 1].imshow(offset, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
axs[i, 1].set_xlabel("Connectivity from")
axs[i, 1].set_ylabel("Connectivity to")
plt.colorbar(im, ax=axs[i, 1], fraction=0.046, pad=0.04)
see_neurons(A,axs[i, 0],ratio)
plt.suptitle("Visualizing subsets of the connectivity matrix", y = 1.05)
plt.show()
###Output
_____no_output_____
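###Markdown
Section 2.2 below explores omitted variable bias in our simulated neural system. As a warm-up, here is a tiny, generic sketch of the same phenomenon in ordinary linear regression (all variables and coefficients are hypothetical): when a regressor that also drives the outcome is left out, the coefficient of the remaining, correlated regressor absorbs part of its effect.
###Code
# Hypothetical omitted variable bias sketch
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 5000
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(size=n)            # correlated with x1
y = 1.0 * x1 + 2.0 * x2 + rng.normal(size=n)  # true coefficients: 1.0 and 2.0

full = LinearRegression().fit(np.column_stack([x1, x2]), y)
omitted = LinearRegression().fit(x1.reshape(-1, 1), y)  # x2 omitted

print("coefficients with both regressors:", np.round(full.coef_, 2))      # roughly [1.0, 2.0]
print("x1 coefficient with x2 omitted: {:.2f}".format(omitted.coef_[0]))  # roughly 1.0 + 2.0 * 0.7 = 2.4
###Output
_____no_output_____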
###Markdown
Section 2.2: Effects of partial observability
###Code
#@title Video 3: Omitted variable bias
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="5CCib6CTMac", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/5CCib6CTMac
###Markdown
**Video correction**: the labels "connectivity from"/"connectivity to" are swapped in the video but fixed in the figures/demos below Interactive Demo: Regression performance as a function of the number of observed neuronsWe will first change the number of observed neurons in the network and inspect the resulting estimates of connectivity in this interactive demo. How does the estimated connectivity differ?**Note:** the plots will take a moment or so to update after moving the slider.
###Code
#@markdown Execute this cell to enable demo
n_neurons = 50
A = create_connectivity(n_neurons, random_state=42)
X = simulate_neurons(A, 4000, random_state=42)
reg_args = {
"fit_intercept": False,
"alpha": 0.001
}
@widgets.interact
def plot_observed(n_observed=(5, 45, 5)):
to_neuron = 0
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
sel_idx = n_observed
ratio = (n_observed) / n_neurons
offset = np.zeros((n_neurons, n_neurons))
axs[0].title.set_text("{}% neurons observed".format(int(ratio * 100)))
offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]
im = axs[1].imshow(offset, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
plt.colorbar(im, ax=axs[1], fraction=0.046, pad=0.04)
see_neurons(A,axs[0], ratio, False)
corr, R = get_regression_corr_full_connectivity(n_neurons,
A,
X,
ratio,
reg_args)
#rect = patches.Rectangle((-.5,to_neuron-.5),n_observed,1,linewidth=2,edgecolor='k',facecolor='none')
#axs[1].add_patch(rect)
big_R = np.zeros(A.shape)
big_R[:sel_idx, :sel_idx] = 1 + R
#big_R[to_neuron, :sel_idx] = 1 + R
im = axs[2].imshow(big_R, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
plt.colorbar(im, ax=axs[2],fraction=0.046, pad=0.04)
c = 'w' if n_observed<(n_neurons-3) else 'k'
axs[2].text(0,n_observed+3,"Correlation : {:.2f}".format(corr), color=c, size=15)
#axs[2].axis("off")
axs[1].title.set_text("True connectivity")
axs[1].set_xlabel("Connectivity from")
axs[1].set_ylabel("Connectivity to")
axs[2].title.set_text("Estimated connectivity")
axs[2].set_xlabel("Connectivity from")
#axs[2].set_ylabel("Connectivity to")
###Output
_____no_output_____
###Markdown
Next, we will inspect a plot of the correlation between true and estimated connectivity matrices vs the percent of neurons observed over multiple trials.What is the relationship that you see between performance and the number of neurons observed?**Note:** the cell below will take about 25-30 seconds to run.
###Code
#@title
#@markdown Plot correlation vs. subsampling
import warnings
warnings.filterwarnings('ignore')
# we'll simulate many systems for various ratios of observed neurons
n_neurons = 50
timesteps = 5000
ratio_observed = [1, 0.75, 0.5, .25, .12] # the proportion of neurons observed in our system
n_trials = 3 # run it this many times to get variability in our results
reg_args = {
"fit_intercept": False,
"alpha": 0.001
}
corr_data = np.zeros((n_trials, len(ratio_observed)))
for trial in range(n_trials):
A = create_connectivity(n_neurons, random_state=trial)
X = simulate_neurons(A, timesteps)
print("simulating trial {} of {}".format(trial + 1, n_trials))
for j, ratio in enumerate(ratio_observed):
result,_ = get_regression_corr_full_connectivity(n_neurons,
A,
X,
ratio,
reg_args)
corr_data[trial, j] = result
corr_mean = np.nanmean(corr_data, axis=0)
corr_std = np.nanstd(corr_data, axis=0)
plt.plot(np.asarray(ratio_observed) * 100, corr_mean)
plt.fill_between(np.asarray(ratio_observed) * 100,
corr_mean - corr_std,
corr_mean + corr_std,
alpha=.2)
plt.xlim([100, 10])
plt.xlabel("Percent of neurons observed")
plt.ylabel("connectivity matrices correlation")
plt.title("Performance of regression as a function of the number of neurons observed");
###Output
simulating trial 1 of 3
###Markdown
--- Summary
###Code
#@title Video 4: Summary
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="T1uGf1H31wE", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/T1uGf1H31wE
###Markdown
Neuromatch Academy 2020 -- Week 3 Day 3 Tutorial 3 Causality Day - Simultaneous fitting/regression**Content creators**: Ari Benjamin, Tony Liu, Konrad Kording**Content reviewers**: Mike X Cohen, Madineh Sarvestani, Ella Batty, Michael Waskom --- Tutorial objectivesThis is tutorial 3 on our day of examining causality. Below is the high level outline of what we'll cover today, with the sections we will focus on in this notebook in bold:1. Master definitions of causality2. Understand that estimating causality is possible3. Learn 4 different methods and understand when they fail 1. perturbations 2. correlations 3. **simultaneous fitting/regression** 4. instrumental variables Notebook 3 objectivesIn tutorial 2 we explored correlation as an approximation for causation and learned that correlation $\neq$ causation for larger networks. However, computing correlations is a rather simple approach, and you may be wondering: will more sophisticated techniques allow us to better estimate causality? Can't we control for things? Here we'll use some common advanced (but controversial) methods that estimate causality from observational data. These methods rely on fitting a function to our data directly, instead of trying to use perturbations or correlations. Since we have the full closed-form equation of our system, we can try these methods and see how well they work in estimating causal connectivity when there are no perturbations. Specifically, we will:- Learn about more advanced (but also controversial) techniques for estimating causality - conditional probabilities (**regression**)- Explore limitations and failure modes - understand the problem of **omitted variable bias** --- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.multioutput import MultiOutputRegressor
from sklearn.linear_model import Ridge, LinearRegression, ElasticNet, Lasso
import matplotlib.patches as patches
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper Functions
def sigmoid(x):
"""
Compute sigmoid nonlinearity element-wise on x.
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with sigmoid nonlinearity applied
"""
return 1 / (1 + np.exp(-x))
def logit(x):
"""
Applies the logit (inverse sigmoid) transformation
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with logit nonlinearity applied
"""
return np.log(x/(1-x))
def create_connectivity(n_neurons, random_state=42):
"""
Generate our nxn causal connectivity matrix.
Args:
n_neurons (int): the number of neurons in our system.
random_state (int): random seed for reproducibility
Returns:
A (np.ndarray): our 0.1 sparse connectivity matrix
"""
np.random.seed(random_state)
A_0 = np.random.choice([0,1], size=(n_neurons, n_neurons), p=[0.9, 0.1])
# set the timescale of the dynamical system to about 100 steps
_, s_vals , _ = np.linalg.svd(A_0)
A = A_0 / (1.01 * s_vals[0])
# _, s_val_test, _ = np.linalg.svd(A)
# assert s_val_test[0] < 1, "largest singular value >= 1"
return A
def get_regression_estimate_full_connectivity(X):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
    Returns:
        V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).
"""
n_neurons = X.shape[0]
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[:, 1:].transpose()
# apply inverse sigmoid transformation
Y = logit(Y)
# fit multioutput regression
reg = MultiOutputRegressor(Lasso(fit_intercept=False,
alpha=0.01, max_iter=250 ), n_jobs=-1)
reg.fit(W,Y)
V = np.zeros((n_neurons, n_neurons))
for i, estimator in enumerate(reg.estimators_):
V[i, :] = estimator.coef_
return V
def get_regression_corr_full_connectivity(n_neurons, A, X, observed_ratio, regression_args):
"""
A wrapper function for our correlation calculations between A and the V estimated
from regression.
Args:
n_neurons (int): number of neurons
A (np.ndarray): connectivity matrix
X (np.ndarray): dynamical system
        observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.
regression_args (dict): dictionary of lasso regression arguments and hyperparameters
Returns:
A single float correlation value representing the similarity between A and R
"""
assert (observed_ratio > 0) and (observed_ratio <= 1)
sel_idx = np.clip(int(n_neurons*observed_ratio), 1, n_neurons)
sel_X = X[:sel_idx, :]
sel_A = A[:sel_idx, :sel_idx]
sel_V = get_regression_estimate_full_connectivity(sel_X)
return np.corrcoef(sel_A.flatten(), sel_V.flatten())[1,0], sel_V
def see_neurons(A, ax, ratio_observed=1, arrows=True):
"""
Visualizes the connectivity matrix.
Args:
A (np.ndarray): the connectivity matrix of shape (n_neurons, n_neurons)
ax (plt.axis): the matplotlib axis to display on
Returns:
Nothing, but visualizes A.
"""
n = len(A)
ax.set_aspect('equal')
thetas = np.linspace(0,np.pi*2,n,endpoint=False )
x,y = np.cos(thetas),np.sin(thetas),
if arrows:
for i in range(n):
for j in range(n):
if A[i,j]>0:
ax.arrow(x[i],y[i],x[j]-x[i],y[j]-y[i],color='k',head_width=.05,
width = A[i,j]/25,shape='right', length_includes_head=True,
alpha = .2)
if ratio_observed<1:
nn = int(n*ratio_observed)
ax.scatter(x[:nn],y[:nn],c='r',s=150, label='Observed')
ax.scatter(x[nn:],y[nn:],c='b',s=150, label='Unobserved')
ax.legend(fontsize=15)
else:
ax.scatter(x,y,c='k',s=150)
ax.axis('off')
def simulate_neurons(A, timesteps, random_state=42):
"""
Simulates a dynamical system for the specified number of neurons and timesteps.
Args:
A (np.array): the connectivity matrix
timesteps (int): the number of timesteps to simulate our system.
random_state (int): random seed for reproducibility
Returns:
        - X has shape (n_neurons, timesteps).
"""
np.random.seed(random_state)
n_neurons = len(A)
X = np.zeros((n_neurons, timesteps))
for t in range(timesteps-1):
# solution
epsilon = np.random.multivariate_normal(np.zeros(n_neurons), np.eye(n_neurons))
X[:, t+1] = sigmoid(A.dot(X[:,t]) + epsilon)
assert epsilon.shape == (n_neurons,)
return X
def correlation_for_all_neurons(X):
"""Computes the connectivity matrix for the all neurons using correlations
Args:
X: the matrix of activities
Returns:
estimated_connectivity (np.ndarray): estimated connectivity for the selected neuron, of shape (n_neurons,)
"""
n_neurons = len(X)
S = np.concatenate([X[:, 1:], X[:, :-1]], axis=0)
R = np.corrcoef(S)[:n_neurons, n_neurons:]
return R
def get_sys_corr(n_neurons, timesteps, random_state=42, neuron_idx=None):
"""
A wrapper function for our correlation calculations between A and R.
Args:
n_neurons (int): the number of neurons in our system.
timesteps (int): the number of timesteps to simulate our system.
random_state (int): seed for reproducibility
neuron_idx (int): optionally provide a neuron idx to slice out
Returns:
A single float correlation value representing the similarity between A and R
"""
A = create_connectivity(n_neurons,random_state)
X = simulate_neurons(A, timesteps)
R = correlation_for_all_neurons(X)
return np.corrcoef(A.flatten(), R.flatten())[0,1]
def get_regression_corr(n_neurons, A, X, observed_ratio, regression_args, neuron_idx=None):
"""
A wrapper function for our correlation calculations between A and the V estimated
from regression.
Args:
n_neurons (int): the number of neurons in our system.
A (np.array): the true connectivity
X (np.array): the simulated system
observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.
regression_args (dict): dictionary of lasso regression arguments and hyperparameters
neuron_idx (int): optionally provide a neuron idx to compute connectivity for
Returns:
A single float correlation value representing the similarity between A and R
"""
assert (observed_ratio > 0) and (observed_ratio <= 1)
sel_idx = np.clip(int(n_neurons*observed_ratio), 1, n_neurons)
selected_X = X[:sel_idx, :]
selected_connectivity = A[:sel_idx, :sel_idx]
estimated_selected_connectivity = get_regression_estimate(selected_X, neuron_idx=neuron_idx)
if neuron_idx is None:
return np.corrcoef(selected_connectivity.flatten(),
estimated_selected_connectivity.flatten())[1,0], estimated_selected_connectivity
else:
return np.corrcoef(selected_connectivity[neuron_idx, :],
estimated_selected_connectivity)[1,0], estimated_selected_connectivity
def plot_connectivity_matrix(A, ax=None):
"""Plot the (weighted) connectivity matrix A as a heatmap
Args:
A (ndarray): connectivity matrix (n_neurons by n_neurons)
ax: axis on which to display connectivity matrix
"""
if ax is None:
ax = plt.gca()
lim = np.abs(A).max()
ax.imshow(A, vmin=-lim, vmax=lim, cmap="coolwarm")
###Output
_____no_output_____
###Markdown
--- Section 1: Regression
###Code
#@title Video 1: Regression approach
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Av4LaXZdgDo", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/Av4LaXZdgDo
###Markdown
You may be familiar with the idea that correlation only implies causation when there are no hidden *confounders*. This aligns with our intuition that correlation only implies causality when no alternative variables could explain away a correlation. **A confounding example**: Suppose you observe that people who sleep more do better in school. It's a nice correlation. But what else could explain it? Maybe people who sleep more are richer, don't work a second job, and have time to actually do homework. If you want to ask if sleep *causes* better grades, and want to answer that with correlations, you have to control for all possible confounds. A confound is any variable that affects both the outcome and your original covariate. In our example, confounds are things that affect both sleep and grades. **Controlling for a confound**: Confounds can be controlled for by adding them as covariates in a regression. But for your coefficients to be causal effects, you need three things: 1. **All** confounds are included as covariates 2. Your regression assumes the same mathematical form of how covariates relate to outcomes (linear, GLM, etc.) 3. No covariates are caused *by* both the treatment (original variable) and the outcome. These are [colliders](https://en.wikipedia.org/wiki/Collider_(statistics)); we won't introduce them today (but Google them on your own time! Colliders are very counterintuitive.) In the real world it is very hard to guarantee these conditions are met. In the brain it's even harder (as we can't measure all neurons). Luckily, today we simulated the system ourselves.
###Code
#@title Video 2: Fitting a GLM
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GvMj9hRv5Ak", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/GvMj9hRv5Ak
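###Markdown
The sleep/grades story above can be made concrete in a few lines. This is a minimal, hypothetical sketch (the variables `wealth`, `sleep`, `grades` and every coefficient are invented for illustration and are not part of this tutorial's system): a confound drives both the covariate and the outcome, so a naive regression finds a spurious "effect" of sleep, while adding the confound as a covariate brings the coefficient back close to its true causal value of zero.
###Code
# Hypothetical confounding sketch: wealth drives both sleep and grades,
# but sleep has no causal effect on grades.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
wealth = rng.normal(size=n)                 # the confound
sleep = 0.5 * wealth + rng.normal(size=n)   # caused by wealth
grades = 0.8 * wealth + rng.normal(size=n)  # caused by wealth only

# Naive regression of grades on sleep: the coefficient is biased away from zero
naive = LinearRegression().fit(sleep.reshape(-1, 1), grades)

# Regression that controls for the confound: the sleep coefficient is near zero
adjusted = LinearRegression().fit(np.column_stack([sleep, wealth]), grades)

print("naive sleep coefficient: {:.2f}".format(naive.coef_[0]))
print("sleep coefficient controlling for wealth: {:.2f}".format(adjusted.coef_[0]))
###Output
_____no_output_____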
###Markdown
Section 1.1: Recovering connectivity by model fitting Recall that in our system each neuron affects every other via:$$\vec{x}_{t+1} = \sigma(A\vec{x}_t + \epsilon_t), $$where $\sigma$ is our sigmoid nonlinearity from before: $\sigma(x) = \frac{1}{1 + e^{-x}}$ Our system is a closed system, too, so there are no omitted variables. The regression coefficients should be the causal effect. Are they? We will use a regression approach to estimate the causal influence of all neurons on neuron 1. Specifically, we will use linear regression to determine the $A$ in:$$\sigma^{-1}(\vec{x}_{t+1}) = A\vec{x}_t + \epsilon_t ,$$where $\sigma^{-1}$ is the inverse sigmoid transformation, also sometimes referred to as the **logit** transformation: $\sigma^{-1}(x) = \log(\frac{x}{1-x})$. Let $W$ be the $\vec{x}_t$ values up to the second-to-last timestep $T-2$ (where $T$ is the total number of timesteps):$$W = \begin{bmatrix}\mid & \mid & ... & \mid \\ \vec{x}_0 & \vec{x}_1 & ... & \vec{x}_{T-2} \\ \mid & \mid & ... & \mid\end{bmatrix}_{n \times (T-1)}$$Let $Y$ be the $\vec{x}_{t+1}$ values for a selected neuron, indexed by $i$, starting from the second timestep up to the last timestep $T-1$:$$Y = \begin{bmatrix}x_{i,1} & x_{i,2} & ... & x_{i, T-1} \\ \end{bmatrix}_{1 \times (T-1)}$$You will then fit the following model:$$\sigma^{-1}(Y^T) = W^TV$$where $V$ is the $n \times 1$ coefficient vector of this regression, which will be the estimated connectivity between the selected neuron and the rest of the neurons.**Review**: As you learned Friday of Week 1, *lasso* a.k.a. **$L_1$ regularization** causes the coefficients to be sparse, containing mostly zeros. Think about why we want this here. Exercise 1: Use linear regression plus lasso to estimate causal connectivities You will now create a function that fits the above regression model and returns the estimated $V$. We will then call this function to examine how close the regression estimate, compared with the lagged-correlation estimate, comes to the true causal connectivity.**Code**: You'll notice that we've transposed both $Y$ and $W$ here and in the code we've already provided below. Why is that? This is because the machine learning models provided in scikit-learn expect the *rows* of the input data to be the observations, while the *columns* are the variables. We have that inverted in our definitions of $Y$ and $W$, with the timesteps of our system (the observations) as the columns. So we transpose both matrices to make the matrix orientation correct for scikit-learn. - Because of the abstraction provided by scikit-learn, fitting this regression will just be a call to initialize the `Lasso()` estimator and a call to the `fit()` function - Use the following hyperparameters for the `Lasso` estimator: - `alpha = 0.01` - `fit_intercept = False` - How do we obtain $V$ from the fitted model?
###Code
def get_regression_estimate(X, neuron_idx):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
neuron_idx (int): a neuron index to compute connectivity for
Returns:
        V (np.ndarray): estimated connectivity for the selected neuron, of shape (n_neurons,).
"""
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[[neuron_idx], 1:].transpose()
# Apply inverse sigmoid transformation
Y = logit(Y)
############################################################################
## TODO: Insert your code here to fit a regressor with Lasso. Lasso captures
## our assumption that most connections are precisely 0.
    ## Fill in the function and remove the NotImplementedError below
raise NotImplementedError("Please complete the regression exercise")
############################################################################
# Initialize regression model with no intercept and alpha=0.01
regression = ...
# Fit regression to the data
regression.fit(...)
V = regression.coef_
return V
# Parameters
n_neurons = 50 # the size of our system
timesteps = 10000 # the number of timesteps to take
random_state = 42
neuron_idx = 1
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
# Uncomment below to test your function
# V = get_regression_estimate(X, neuron_idx)
#print("Regression: correlation of estimated connectivity with true connectivity: {:.3f}".format(np.corrcoef(A[neuron_idx, :], V)[1,0]))
#print("Lagged correlation of estimated connectivity with true connectivity: {:.3f}".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))
# to_remove solution
def get_regression_estimate(X, neuron_idx):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
neuron_idx (int): a neuron index to compute connectivity for
Returns:
        V (np.ndarray): estimated connectivity for the selected neuron, of shape (n_neurons,).
"""
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[[neuron_idx], 1:].transpose()
# Apply inverse sigmoid transformation
Y = logit(Y)
# Initialize regression model with no intercept and alpha=0.01
regression = Lasso(fit_intercept=False, alpha=0.01)
# Fit regression to the data
regression.fit(W, Y)
V = regression.coef_
return V
# Parameters
n_neurons = 50 # the size of our system
timesteps = 10000 # the number of timesteps to take
random_state = 42
neuron_idx = 1
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
# Uncomment below to test your function
V = get_regression_estimate(X, neuron_idx)
print("Regression: correlation of estimated connectivity with true connectivity: {:.3f}".format(np.corrcoef(A[neuron_idx, :], V)[1,0]))
print("Lagged correlation of estimated connectivity with true connectivity: {:.3f}".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))
###Output
Regression: correlation of estimated connectivity with true connectivity: 0.865
###Markdown
You should find that using regression, our estimated connectivity matrix has a correlation of 0.865 with the true connectivity matrix, whereas the lagged-correlation estimate has a correlation of only 0.703 with the true connectivity matrix. We can see from these numbers that multiple regression is better than simple correlation for estimating connectivity. --- Section 2: Omitted Variable Bias If we are unable to observe the entire system, **omitted variable bias** becomes a problem. If we don't have access to all the neurons, and therefore can't control for them, can we still estimate the causal effect accurately? Section 2.1: Visualizing subsets of the connectivity matrix We first visualize different subsets of the connectivity matrix when we observe 75% of the neurons vs 25%. Recall the meaning of entries in our connectivity matrix: $A[i,j] = 1$ means a connectivity **from** neuron $i$ **to** neuron $j$ with strength $1$.
###Code
#@markdown Execute this cell to visualize subsets of connectivity matrix
# Run this cell to visualize the subsets of variables we observe
n_neurons = 25
A = create_connectivity(n_neurons)
fig, axs = plt.subplots(2,2, figsize=(10,10))
ratio_observed = [0.75, 0.25] # the proportion of neurons observed in our system
for i, ratio in enumerate(ratio_observed):
sel_idx = int(n_neurons*ratio)
offset = np.zeros((n_neurons, n_neurons))
axs[i,1].title.set_text("{}% neurons observed".format(int(ratio * 100)))
offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]
im = axs[i, 1].imshow(offset, cmap="coolwarm", vmin=0, vmax=A.max()+1)
axs[i, 1].set_xlabel("Connectivity from")
axs[i, 1].set_ylabel("Connectivity to")
plt.colorbar(im, ax=axs[i,1],fraction=0.046, pad=0.04)
see_neurons(A,axs[i,0],ratio)
plt.suptitle("Visualizing subsets of the connectivity matrix", y = 1.05)
plt.show()
###Output
_____no_output_____
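###Markdown
Section 2.2 below explores omitted variable bias in our simulated neural system. As a warm-up, here is a tiny, generic sketch of the same phenomenon in ordinary linear regression (all variables and coefficients are hypothetical): when a regressor that also drives the outcome is left out, the coefficient of the remaining, correlated regressor absorbs part of its effect.
###Code
# Hypothetical omitted variable bias sketch
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 5000
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(size=n)            # correlated with x1
y = 1.0 * x1 + 2.0 * x2 + rng.normal(size=n)  # true coefficients: 1.0 and 2.0

full = LinearRegression().fit(np.column_stack([x1, x2]), y)
omitted = LinearRegression().fit(x1.reshape(-1, 1), y)  # x2 omitted

print("coefficients with both regressors:", np.round(full.coef_, 2))      # roughly [1.0, 2.0]
print("x1 coefficient with x2 omitted: {:.2f}".format(omitted.coef_[0]))  # roughly 1.0 + 2.0 * 0.7 = 2.4
###Output
_____no_output_____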
###Markdown
Section 2.2: Effects of partial observability
###Code
#@title Video 3: Omitted variable bias
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="5CCib6CTMac", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/5CCib6CTMac
###Markdown
**Video correction**: the labels "connectivity from"/"connectivity to" are swapped in the video but fixed in the figures/demos below Interactive Demo: Regression performance as a function of the number of observed neuronsWe will first change the number of observed neurons in the network and inspect the resulting estimates of connectivity in this interactive demo. How does the estimated connectivity differ?**Note:** the plots will take a moment or so to update after moving the slider.
###Code
#@markdown Execute this cell to enable demo
n_neurons = 50
A = create_connectivity(n_neurons, random_state=42)
X = simulate_neurons(A, 4000, random_state=42)
reg_args = {
"fit_intercept": False,
"alpha": 0.001
}
@widgets.interact
def plot_observed(n_observed=(5,n_neurons,5)):
to_neuron = 0
fig, axs = plt.subplots(1,3, figsize=(15,5))
sel_idx = n_observed
ratio = (n_observed)/n_neurons
offset = np.zeros((n_neurons, n_neurons))
axs[0].title.set_text("{}% neurons observed".format(int(ratio * 100)))
offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]
im = axs[1].imshow(offset, cmap="coolwarm", vmin=0, vmax=A.max()+1)
plt.colorbar(im, ax=axs[1],fraction=0.046, pad=0.04)
see_neurons(A,axs[0],ratio, False)
corr, R = get_regression_corr_full_connectivity(n_neurons,
A,
X,
ratio,
reg_args)
#rect = patches.Rectangle((-.5,to_neuron-.5),n_observed,1,linewidth=2,edgecolor='k',facecolor='none')
#axs[1].add_patch(rect)
big_R = np.zeros(A.shape)
big_R[:sel_idx, :sel_idx] = 1 + R
#big_R[to_neuron, :sel_idx] = 1 + R
im = axs[2].imshow(big_R, cmap="coolwarm", vmin=0, vmax=A.max()+1)
plt.colorbar(im, ax=axs[2],fraction=0.046, pad=0.04)
c = 'w' if n_observed<(n_neurons-3) else 'k'
axs[2].text(0,n_observed+3,"Correlation : {:.2f}".format(corr), color=c,size=15)
#axs[2].axis("off")
axs[1].title.set_text("True connectivity")
axs[1].set_xlabel("Connectivity from")
axs[1].set_ylabel("Connectivity to")
axs[2].title.set_text("Estimated connectivity")
axs[2].set_xlabel("Connectivity from")
#axs[2].set_ylabel("Connectivity to")
###Output
_____no_output_____
###Markdown
Next, we will inspect a plot of the correlation between true and estimated connectivity matrices vs the percent of neurons observed over multiple trials.What is the relationship that you see between performance and the number of neurons observed?**Note:** the cell below will take about 25-30 seconds to run.
###Code
#@title
#@markdown Plot correlation vs. subsampling
import warnings
warnings.filterwarnings('ignore')
# we'll simulate many systems for various ratios of observed neurons
n_neurons = 50
timesteps = 5000
ratio_observed = [1, 0.75, 0.5, .25, .12] # the proportion of neurons observed in our system
n_trials = 3 # run it this many times to get variability in our results
reg_args = {
"fit_intercept": False,
"alpha": 0.001
}
corr_data = np.zeros((n_trials, len(ratio_observed)))
for trial in range(n_trials):
A = create_connectivity(n_neurons, random_state=trial)
X = simulate_neurons(A, timesteps)
print("simulating trial {} of {}".format(trial+1, n_trials))
for j, ratio in enumerate(ratio_observed):
result,_ = get_regression_corr_full_connectivity(n_neurons,
A,
X,
ratio,
reg_args)
corr_data[trial, j] = result
corr_mean = np.nanmean(corr_data, axis=0)
corr_std = np.nanstd(corr_data, axis=0)
plt.plot(np.asarray(ratio_observed)*100, corr_mean)
plt.fill_between(np.asarray(ratio_observed)*100,
corr_mean - corr_std,
corr_mean + corr_std,
alpha=.2)
plt.xlim([100, 10])
plt.xlabel("Percent of neurons observed")
plt.ylabel("connectivity matrices correlation")
plt.title("Performance of regression as a function of the number of neurons observed");
###Output
simulating trial 1 of 3
###Markdown
--- Summary
###Code
#@title Video 4: Summary
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="T1uGf1H31wE", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/T1uGf1H31wE
###Markdown
Neuromatch Academy 2020 -- Week 3 Day 3 Tutorial 3 Causality Day - Simultaneous fitting/regression**Content creators**: Ari Benjamin, Tony Liu, Konrad Kording**Content reviewers**: Mike X Cohen, Madineh Sarvestani, Ella Batty, Michael Waskom --- Tutorial objectivesThis is tutorial 3 on our day of examining causality. Below is the high level outline of what we'll cover today, with the sections we will focus on in this notebook in bold:1. Master definitions of causality2. Understand that estimating causality is possible3. Learn 4 different methods and understand when they fail 1. perturbations 2. correlations 3. **simultaneous fitting/regression** 4. instrumental variables Notebook 3 objectivesIn tutorial 2 we explored correlation as an approximation for causation and learned that correlation $\neq$ causation for larger networks. However, computing correlations is a rather simple approach, and you may be wondering: will more sophisticated techniques allow us to better estimate causality? Can't we control for things? Here we'll use some common advanced (but controversial) methods that estimate causality from observational data. These methods rely on fitting a function to our data directly, instead of trying to use perturbations or correlations. Since we have the full closed-form equation of our system, we can try these methods and see how well they work in estimating causal connectivity when there are no perturbations. Specifically, we will:- Learn about more advanced (but also controversial) techniques for estimating causality - conditional probabilities (**regression**)- Explore limitations and failure modes - understand the problem of **omitted variable bias** --- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.multioutput import MultiOutputRegressor
from sklearn.linear_model import Lasso
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def sigmoid(x):
"""
Compute sigmoid nonlinearity element-wise on x.
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with sigmoid nonlinearity applied
"""
return 1 / (1 + np.exp(-x))
def logit(x):
"""
Applies the logit (inverse sigmoid) transformation
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with logit nonlinearity applied
"""
return np.log(x/(1-x))
def create_connectivity(n_neurons, random_state=42, p=0.9):
"""
Generate our nxn causal connectivity matrix.
Args:
n_neurons (int): the number of neurons in our system.
        random_state (int): random seed for reproducibility
        p (float): probability that any given connection is absent, so 1 - p is the connection density
    Returns:
        A (np.ndarray): our sparse connectivity matrix (10% dense by default)
"""
np.random.seed(random_state)
A_0 = np.random.choice([0, 1], size=(n_neurons, n_neurons), p=[p, 1 - p])
# set the timescale of the dynamical system to about 100 steps
_, s_vals, _ = np.linalg.svd(A_0)
A = A_0 / (1.01 * s_vals[0])
# _, s_val_test, _ = np.linalg.svd(A)
# assert s_val_test[0] < 1, "largest singular value >= 1"
return A
def get_regression_estimate_full_connectivity(X):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
    Returns:
        V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).
"""
n_neurons = X.shape[0]
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[:, 1:].transpose()
# apply inverse sigmoid transformation
Y = logit(Y)
# fit multioutput regression
reg = MultiOutputRegressor(Lasso(fit_intercept=False,
alpha=0.01, max_iter=250 ), n_jobs=-1)
reg.fit(W, Y)
V = np.zeros((n_neurons, n_neurons))
for i, estimator in enumerate(reg.estimators_):
V[i, :] = estimator.coef_
return V
def get_regression_corr_full_connectivity(n_neurons, A, X, observed_ratio, regression_args):
"""
A wrapper function for our correlation calculations between A and the V estimated
from regression.
Args:
n_neurons (int): number of neurons
A (np.ndarray): connectivity matrix
X (np.ndarray): dynamical system
        observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.
regression_args (dict): dictionary of lasso regression arguments and hyperparameters
Returns:
A single float correlation value representing the similarity between A and R
"""
assert (observed_ratio > 0) and (observed_ratio <= 1)
sel_idx = np.clip(int(n_neurons*observed_ratio), 1, n_neurons)
sel_X = X[:sel_idx, :]
sel_A = A[:sel_idx, :sel_idx]
sel_V = get_regression_estimate_full_connectivity(sel_X)
return np.corrcoef(sel_A.flatten(), sel_V.flatten())[1,0], sel_V
def see_neurons(A, ax, ratio_observed=1, arrows=True):
"""
Visualizes the connectivity matrix.
Args:
A (np.ndarray): the connectivity matrix of shape (n_neurons, n_neurons)
ax (plt.axis): the matplotlib axis to display on
Returns:
Nothing, but visualizes A.
"""
n = len(A)
ax.set_aspect('equal')
thetas = np.linspace(0, np.pi * 2, n, endpoint=False)
x, y = np.cos(thetas), np.sin(thetas),
if arrows:
for i in range(n):
for j in range(n):
if A[i, j] > 0:
ax.arrow(x[i], y[i], x[j] - x[i], y[j] - y[i], color='k', head_width=.05,
width = A[i, j] / 25,shape='right', length_includes_head=True,
alpha = .2)
if ratio_observed < 1:
nn = int(n * ratio_observed)
ax.scatter(x[:nn], y[:nn], c='r', s=150, label='Observed')
ax.scatter(x[nn:], y[nn:], c='b', s=150, label='Unobserved')
ax.legend(fontsize=15)
else:
ax.scatter(x, y, c='k', s=150)
ax.axis('off')
def simulate_neurons(A, timesteps, random_state=42):
"""
Simulates a dynamical system for the specified number of neurons and timesteps.
Args:
A (np.array): the connectivity matrix
timesteps (int): the number of timesteps to simulate our system.
random_state (int): random seed for reproducibility
Returns:
        - X has shape (n_neurons, timesteps).
"""
np.random.seed(random_state)
n_neurons = len(A)
X = np.zeros((n_neurons, timesteps))
for t in range(timesteps - 1):
# solution
epsilon = np.random.multivariate_normal(np.zeros(n_neurons), np.eye(n_neurons))
X[:, t + 1] = sigmoid(A.dot(X[:, t]) + epsilon)
assert epsilon.shape == (n_neurons,)
return X
def correlation_for_all_neurons(X):
"""Computes the connectivity matrix for the all neurons using correlations
Args:
X: the matrix of activities
Returns:
estimated_connectivity (np.ndarray): estimated connectivity for the selected neuron, of shape (n_neurons,)
"""
n_neurons = len(X)
S = np.concatenate([X[:, 1:], X[:, :-1]], axis=0)
R = np.corrcoef(S)[:n_neurons, n_neurons:]
return R
def get_sys_corr(n_neurons, timesteps, random_state=42, neuron_idx=None):
"""
A wrapper function for our correlation calculations between A and R.
Args:
n_neurons (int): the number of neurons in our system.
timesteps (int): the number of timesteps to simulate our system.
random_state (int): seed for reproducibility
neuron_idx (int): optionally provide a neuron idx to slice out
Returns:
A single float correlation value representing the similarity between A and R
"""
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
R = correlation_for_all_neurons(X)
return np.corrcoef(A.flatten(), R.flatten())[0, 1]
def get_regression_corr(n_neurons, A, X, observed_ratio, regression_args, neuron_idx=None):
"""
A wrapper function for our correlation calculations between A and the V estimated
from regression.
Args:
n_neurons (int): the number of neurons in our system.
A (np.array): the true connectivity
X (np.array): the simulated system
observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.
regression_args (dict): dictionary of lasso regression arguments and hyperparameters
neuron_idx (int): optionally provide a neuron idx to compute connectivity for
Returns:
A single float correlation value representing the similarity between A and R
"""
assert (observed_ratio > 0) and (observed_ratio <= 1)
sel_idx = np.clip(int(n_neurons * observed_ratio), 1, n_neurons)
selected_X = X[:sel_idx, :]
selected_connectivity = A[:sel_idx, :sel_idx]
estimated_selected_connectivity = get_regression_estimate(selected_X, neuron_idx=neuron_idx)
if neuron_idx is None:
return np.corrcoef(selected_connectivity.flatten(),
estimated_selected_connectivity.flatten())[1, 0], estimated_selected_connectivity
else:
return np.corrcoef(selected_connectivity[neuron_idx, :],
estimated_selected_connectivity)[1, 0], estimated_selected_connectivity
def plot_connectivity_matrix(A, ax=None):
"""Plot the (weighted) connectivity matrix A as a heatmap
Args:
A (ndarray): connectivity matrix (n_neurons by n_neurons)
ax: axis on which to display connectivity matrix
"""
if ax is None:
ax = plt.gca()
lim = np.abs(A).max()
ax.imshow(A, vmin=-lim, vmax=lim, cmap="coolwarm")
###Output
_____no_output_____
###Markdown
--- Section 1: Regression
###Code
#@title Video 1: Regression approach
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Av4LaXZdgDo", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/Av4LaXZdgDo
###Markdown
You may be familiar with the idea that correlation only implies causation when there are no hidden *confounders*. This aligns with our intuition that correlation only implies causality when no alternative variables could explain away a correlation. **A confounding example**: Suppose you observe that people who sleep more do better in school. It's a nice correlation. But what else could explain it? Maybe people who sleep more are richer, don't work a second job, and have time to actually do homework. If you want to ask if sleep *causes* better grades, and want to answer that with correlations, you have to control for all possible confounds. A confound is any variable that affects both the outcome and your original covariate. In our example, confounds are things that affect both sleep and grades. **Controlling for a confound**: Confounds can be controlled for by adding them as covariates in a regression. But for your coefficients to be causal effects, you need three things: 1. **All** confounds are included as covariates 2. Your regression assumes the same mathematical form of how covariates relate to outcomes (linear, GLM, etc.) 3. No covariates are caused *by* both the treatment (original variable) and the outcome. These are [colliders](https://en.wikipedia.org/wiki/Collider_(statistics)); we won't introduce them today (but Google them on your own time! Colliders are very counterintuitive.) In the real world it is very hard to guarantee these conditions are met. In the brain it's even harder (as we can't measure all neurons). Luckily, today we simulated the system ourselves.
###Code
#@title Video 2: Fitting a GLM
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GvMj9hRv5Ak", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/GvMj9hRv5Ak
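###Markdown
To make the confounding discussion above concrete, here is a small hypothetical simulation (not part of the original tutorial): income drives both sleep and grades, so regressing grades on sleep alone overstates the effect of sleep, while adding income as a covariate recovers a coefficient close to the true value of 0.5.
###Code
# Hypothetical illustration of confounding and of controlling for it via regression
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_samples = 10_000
income = rng.normal(size=n_samples)                               # the confound
sleep = 0.8 * income + rng.normal(size=n_samples)                 # sleep is partly driven by income
grades = 0.5 * sleep + 1.0 * income + rng.normal(size=n_samples)  # true causal effect of sleep = 0.5

naive = LinearRegression().fit(sleep[:, None], grades)
controlled = LinearRegression().fit(np.column_stack([sleep, income]), grades)
print("naive sleep coefficient:", naive.coef_[0])            # biased upward
print("controlled sleep coefficient:", controlled.coef_[0])  # close to 0.5
###Output
_____no_output_____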
###Markdown
Section 1.1: Recovering connectivity by model fittingRecall that in our system each neuron affects every other via:$$\vec{x}_{t+1} = \sigma(A\vec{x}_t + \epsilon_t), $$where $\sigma$ is our sigmoid nonlinearity from before: $\sigma(x) = \frac{1}{1 + e^{-x}}$Our system is a closed system, too, so there are no omitted variables. The regression coefficients should be the causal effect. Are they? We will use a regression approach to estimate the causal influence of all neurons on neuron 1. Specifically, we will use linear regression to determine the $A$ in:$$\sigma^{-1}(\vec{x}_{t+1}) = A\vec{x}_t + \epsilon_t ,$$where $\sigma^{-1}$ is the inverse sigmoid transformation, also sometimes referred to as the **logit** transformation: $\sigma^{-1}(x) = \log(\frac{x}{1-x})$.Let $T$ be the total number of timesteps, so the simulated states are $\vec{x}_0, \dots, \vec{x}_{T-1}$. Let $W$ be the $\vec{x}_t$ values, up to the second-to-last timestep:$$W = \begin{bmatrix}\mid & \mid & ... & \mid \\ \vec{x}_0 & \vec{x}_1 & ... & \vec{x}_{T-2} \\ \mid & \mid & ... & \mid\end{bmatrix}_{n \times (T-1)}$$Let $Y$ be the $\vec{x}_{t+1}$ values for a selected neuron, indexed by $i$, starting from the second timestep up to the last timestep:$$Y = \begin{bmatrix}x_{i,1} & x_{i,2} & ... & x_{i, T-1} \\ \end{bmatrix}_{1 \times (T-1)}$$You will then fit the following model:$$\sigma^{-1}(Y^T) = W^TV$$where $V$ is the $n \times 1$ coefficient vector of this regression, which will be the estimated connectivity between the selected neuron and the rest of the neurons.**Review**: As you learned Friday of Week 1, *lasso* a.k.a. **$L_1$ regularization** causes the coefficients to be sparse, containing mostly zeros. Think about why we want this here. Exercise 1: Use linear regression plus lasso to estimate causal connectivitiesYou will now create a function to fit the above regression model and recover $V$. We will then call this function to compare how close the regression and correlation estimates each come to the true causal connectivity.**Code**:You'll notice that we've transposed both $Y$ and $W$ here and in the code we've already provided below. Why is that? This is because the machine learning models provided in scikit-learn expect the *rows* of the input data to be the observations, while the *columns* are the variables. We have that inverted in our definitions of $Y$ and $W$, with the timesteps of our system (the observations) as the columns. So we transpose both matrices to make the matrix orientation correct for scikit-learn.- Because of the abstraction provided by scikit-learn, fitting this regression will just be a call to initialize the `Lasso()` estimator and a call to the `fit()` function- Use the following hyperparameters for the `Lasso` estimator: - `alpha = 0.01` - `fit_intercept = False`- How do we obtain $V$ from the fitted model?
###Code
# to_remove solution
def get_regression_estimate(X, neuron_idx):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
neuron_idx (int): a neuron index to compute connectivity for
Returns:
V (np.ndarray): estimated connectivity weights between the selected neuron and all neurons (one coefficient per neuron)
"""
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[[neuron_idx], 1:].transpose()
# Apply inverse sigmoid transformation
Y = logit(Y)
# Initialize regression model with no intercept and alpha=0.01
regression = Lasso(fit_intercept=False, alpha=0.01)
# Fit regression to the data
regression.fit(W, Y)
V = regression.coef_
return V
# Parameters
n_neurons = 100 # the size of our system
timesteps = 10000 # the number of timesteps to take
random_state = 42
neuron_idx = 1
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
# Uncomment below to test your function
V = get_regression_estimate(X, neuron_idx)
print("Regression: correlation of estimated connectivity with true connectivity: {:.3f}".format(np.corrcoef(A[neuron_idx, :], V)[1, 0]))
print("Lagged correlation of estimated connectivity with true connectivity: {:.3f}".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))
###Output
Regression: correlation of estimated connectivity with true connectivity: 0.708
Lagged correlation of estimated connectivity with true connectivity: 0.470
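###Markdown
As a quick sanity check on the sparsity assumption behind the lasso penalty, the sketch below (assuming `A`, `V`, and `neuron_idx` from the cell above) compares the number of nonzero entries in the relevant row of the true connectivity matrix with the number of nonzero estimated coefficients.
###Code
# Compare the sparsity of the ground truth with the sparsity of the lasso estimate
true_nonzero = np.count_nonzero(A[neuron_idx, :])
estimated_nonzero = np.count_nonzero(np.abs(V) > 1e-6)
print("nonzero entries in A[{}, :]: {} of {}".format(neuron_idx, true_nonzero, A.shape[0]))
print("nonzero lasso coefficients in V: {} of {}".format(estimated_nonzero, np.asarray(V).size))
###Output
_____no_output_____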
###Markdown
With the parameters used above (100 neurons, 10000 timesteps), you should find that the regression estimate has a correlation of about 0.71 with the true connectivity matrix, whereas the lagged-correlation estimate reaches only about 0.47 (with 50 neurons, as in the original exercise, the corresponding values are roughly 0.865 and 0.703).We can see from these numbers that multiple regression is better than simple correlation for estimating connectivity. --- Section 2: Omitted Variable BiasIf we are unable to observe the entire system, **omitted variable bias** becomes a problem. If we don't have access to all the neurons, and therefore can't control for them, can we still estimate the causal effect accurately? Section 2.1: Visualizing subsets of the connectivity matrixWe first visualize different subsets of the connectivity matrix when we observe 75% of the neurons vs 25%.Recall the meaning of entries in our connectivity matrix: $A[i,j] = 1$ means a connectivity **from** neuron $i$ **to** neuron $j$ with strength $1$.
###Code
#@markdown Execute this cell to visualize subsets of connectivity matrix
# Run this cell to visualize the subsets of variables we observe
n_neurons = 25
A = create_connectivity(n_neurons)
fig, axs = plt.subplots(2, 2, figsize=(10, 10))
ratio_observed = [0.75, 0.25] # the proportion of neurons observed in our system
for i, ratio in enumerate(ratio_observed):
sel_idx = int(n_neurons * ratio)
offset = np.zeros((n_neurons, n_neurons))
axs[i,1].title.set_text("{}% neurons observed".format(int(ratio * 100)))
offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]
im = axs[i, 1].imshow(offset, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
axs[i, 1].set_xlabel("Connectivity from")
axs[i, 1].set_ylabel("Connectivity to")
plt.colorbar(im, ax=axs[i, 1], fraction=0.046, pad=0.04)
see_neurons(A,axs[i, 0],ratio)
plt.suptitle("Visualizing subsets of the connectivity matrix", y = 1.05)
plt.show()
###Output
_____no_output_____
###Markdown
Section 2.2: Effects of partial observability
###Code
#@title Video 3: Omitted variable bias
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="5CCib6CTMac", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/5CCib6CTMac
###Markdown
**Video correction**: the labels "connectivity from"/"connectivity to" are swapped in the video but fixed in the figures/demos below Interactive Demo: Regression performance as a function of the number of observed neuronsWe will first change the number of observed neurons in the network and inspect the resulting estimates of connectivity in this interactive demo. How does the estimated connectivity differ?**Note:** the plots will take a moment or so to update after moving the slider.
###Code
#@markdown Execute this cell to enable demo
n_neurons = 50
A = create_connectivity(n_neurons, random_state=42)
X = simulate_neurons(A, 4000, random_state=42)
reg_args = {
"fit_intercept": False,
"alpha": 0.001
}
@widgets.interact
def plot_observed(n_observed=(5, 45, 5)):
to_neuron = 0
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
sel_idx = n_observed
ratio = (n_observed) / n_neurons
offset = np.zeros((n_neurons, n_neurons))
axs[0].title.set_text("{}% neurons observed".format(int(ratio * 100)))
offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]
im = axs[1].imshow(offset, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
plt.colorbar(im, ax=axs[1], fraction=0.046, pad=0.04)
see_neurons(A,axs[0], ratio, False)
corr, R = get_regression_corr_full_connectivity(n_neurons,
A,
X,
ratio,
reg_args)
#rect = patches.Rectangle((-.5,to_neuron-.5),n_observed,1,linewidth=2,edgecolor='k',facecolor='none')
#axs[1].add_patch(rect)
big_R = np.zeros(A.shape)
big_R[:sel_idx, :sel_idx] = 1 + R
#big_R[to_neuron, :sel_idx] = 1 + R
im = axs[2].imshow(big_R, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
plt.colorbar(im, ax=axs[2],fraction=0.046, pad=0.04)
c = 'w' if n_observed<(n_neurons-3) else 'k'
axs[2].text(0,n_observed+3,"Correlation : {:.2f}".format(corr), color=c, size=15)
#axs[2].axis("off")
axs[1].title.set_text("True connectivity")
axs[1].set_xlabel("Connectivity from")
axs[1].set_ylabel("Connectivity to")
axs[2].title.set_text("Estimated connectivity")
axs[2].set_xlabel("Connectivity from")
#axs[2].set_ylabel("Connectivity to")
###Output
_____no_output_____
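###Markdown
If the interactive widget above does not render, the same comparison can be run directly. A minimal sketch, assuming `n_neurons`, `A`, `X`, `reg_args`, and the helper `get_regression_corr_full_connectivity` from the cells above:
###Code
# Estimate connectivity at a few fixed observation ratios, without the slider
for ratio in (1.0, 0.5, 0.2):
    corr, _ = get_regression_corr_full_connectivity(n_neurons, A, X, ratio, reg_args)
    print("{:.0f}% of neurons observed -> correlation with true connectivity: {:.2f}".format(ratio * 100, corr))
###Output
_____no_output_____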
###Markdown
Next, we will inspect a plot of the correlation between true and estimated connectivity matrices vs the percent of neurons observed over multiple trials.What is the relationship that you see between performance and the number of neurons observed?**Note:** the cell below will take about 25-30 seconds to run.
###Code
#@title
#@markdown Plot correlation vs. subsampling
import warnings
warnings.filterwarnings('ignore')
# we'll simulate many systems for various ratios of observed neurons
n_neurons = 50
timesteps = 5000
ratio_observed = [1, 0.75, 0.5, .25, .12] # the proportion of neurons observed in our system
n_trials = 3 # run it this many times to get variability in our results
reg_args = {
"fit_intercept": False,
"alpha": 0.001
}
corr_data = np.zeros((n_trials, len(ratio_observed)))
for trial in range(n_trials):
A = create_connectivity(n_neurons, random_state=trial)
X = simulate_neurons(A, timesteps)
print("simulating trial {} of {}".format(trial + 1, n_trials))
for j, ratio in enumerate(ratio_observed):
result,_ = get_regression_corr_full_connectivity(n_neurons,
A,
X,
ratio,
reg_args)
corr_data[trial, j] = result
corr_mean = np.nanmean(corr_data, axis=0)
corr_std = np.nanstd(corr_data, axis=0)
plt.plot(np.asarray(ratio_observed) * 100, corr_mean)
plt.fill_between(np.asarray(ratio_observed) * 100,
corr_mean - corr_std,
corr_mean + corr_std,
alpha=.2)
plt.xlim([100, 10])
plt.xlabel("Percent of neurons observed")
plt.ylabel("connectivity matrices correlation")
plt.title("Performance of regression as a function of the number of neurons observed");
###Output
simulating trial 1 of 3
simulating trial 2 of 3
simulating trial 3 of 3
###Markdown
--- Summary
###Code
#@title Video 4: Summary
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="T1uGf1H31wE", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
Video available at https://youtu.be/T1uGf1H31wE
###Markdown
Neuromatch Academy 2020 -- Week 3 Day 3 Tutorial 3 Causality Day - Simultaneous fitting/regression**Content creators**: Ari Benjamin, Tony Liu, Konrad Kording**Content reviewers**: Mike X Cohen, Madineh Sarvestani, Ella Batty, Michael Waskom --- Tutorial objectivesThis is tutorial 3 on our day of examining causality. Below is the high level outline of what we'll cover today, with the sections we will focus on in this notebook in bold:1. Master definitions of causality2. Understand that estimating causality is possible3. Learn 4 different methods and understand when they fail 1. perturbations 2. correlations 3. **simultaneous fitting/regression** 4. instrumental variables Notebook 3 objectivesIn tutorial 2 we explored correlation as an approximation for causation and learned that correlation $\neq$ causation for larger networks. However, computing correlations is a rather simple approach, and you may be wondering: will more sophisticated techniques allow us to better estimate causality? Can't we control for things? Here we'll use some common advanced (but controversial) methods that estimate causality from observational data. These methods rely on fitting a function to our data directly, instead of trying to use perturbations or correlations. Since we have the full closed-form equation of our system, we can try these methods and see how well they work in estimating causal connectivity when there are no perturbations. Specifically, we will:- Learn about more advanced (but also controversial) techniques for estimating causality - conditional probabilities (**regression**)- Explore limitations and failure modes - understand the problem of **omitted variable bias** --- Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.multioutput import MultiOutputRegressor
from sklearn.linear_model import Lasso
#@title Figure settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def sigmoid(x):
"""
Compute sigmoid nonlinearity element-wise on x.
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with sigmoid nonlinearity applied
"""
return 1 / (1 + np.exp(-x))
def logit(x):
"""
Applies the logit (inverse sigmoid) transformation
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with logit nonlinearity applied
"""
return np.log(x/(1-x))
def create_connectivity(n_neurons, random_state=42, p=0.9):
"""
Generate our nxn causal connectivity matrix.
Args:
n_neurons (int): the number of neurons in our system.
random_state (int): random seed for reproducibility
p (float): probability that any given entry of the matrix is zero, so roughly a fraction 1 - p of entries are nonzero (10% by default)
Returns:
A (np.ndarray): sparse connectivity matrix, scaled so that its largest singular value is just below 1
"""
np.random.seed(random_state)
A_0 = np.random.choice([0, 1], size=(n_neurons, n_neurons), p=[p, 1 - p])
# set the timescale of the dynamical system to about 100 steps
_, s_vals, _ = np.linalg.svd(A_0)
A = A_0 / (1.01 * s_vals[0])
# _, s_val_test, _ = np.linalg.svd(A)
# assert s_val_test[0] < 1, "largest singular value >= 1"
return A
def get_regression_estimate_full_connectivity(X):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
Returns:
V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons)
"""
n_neurons = X.shape[0]
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[:, 1:].transpose()
# apply inverse sigmoid transformation
Y = logit(Y)
# fit multioutput regression
reg = MultiOutputRegressor(Lasso(fit_intercept=False,
alpha=0.01, max_iter=250 ), n_jobs=-1)
reg.fit(W, Y)
V = np.zeros((n_neurons, n_neurons))
for i, estimator in enumerate(reg.estimators_):
V[i, :] = estimator.coef_
return V
def get_regression_corr_full_connectivity(n_neurons, A, X, observed_ratio, regression_args):
"""
A wrapper function for our correlation calculations between A and the V estimated
from regression.
Args:
n_neurons (int): number of neurons
A (np.ndarray): connectivity matrix
X (np.ndarray): dynamical system
observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.
regression_args (dict): dictionary of lasso regression arguments and hyperparameters
Returns:
A tuple of (correlation between A and the regression estimate, the estimated connectivity matrix)
"""
assert (observed_ratio > 0) and (observed_ratio <= 1)
sel_idx = np.clip(int(n_neurons*observed_ratio), 1, n_neurons)
sel_X = X[:sel_idx, :]
sel_A = A[:sel_idx, :sel_idx]
sel_V = get_regression_estimate_full_connectivity(sel_X)
return np.corrcoef(sel_A.flatten(), sel_V.flatten())[1,0], sel_V
def see_neurons(A, ax, ratio_observed=1, arrows=True):
"""
Visualizes the connectivity matrix.
Args:
A (np.ndarray): the connectivity matrix of shape (n_neurons, n_neurons)
ax (plt.axis): the matplotlib axis to display on
Returns:
Nothing, but visualizes A.
"""
n = len(A)
ax.set_aspect('equal')
thetas = np.linspace(0, np.pi * 2, n, endpoint=False)
x, y = np.cos(thetas), np.sin(thetas),
if arrows:
for i in range(n):
for j in range(n):
if A[i, j] > 0:
ax.arrow(x[i], y[i], x[j] - x[i], y[j] - y[i], color='k', head_width=.05,
width = A[i, j] / 25,shape='right', length_includes_head=True,
alpha = .2)
if ratio_observed < 1:
nn = int(n * ratio_observed)
ax.scatter(x[:nn], y[:nn], c='r', s=150, label='Observed')
ax.scatter(x[nn:], y[nn:], c='b', s=150, label='Unobserved')
ax.legend(fontsize=15)
else:
ax.scatter(x, y, c='k', s=150)
ax.axis('off')
def simulate_neurons(A, timesteps, random_state=42):
"""
Simulates a dynamical system for the specified number of neurons and timesteps.
Args:
A (np.array): the connectivity matrix
timesteps (int): the number of timesteps to simulate our system.
random_state (int): random seed for reproducibility
Returns:
- X (np.ndarray): simulated activity of shape (n_neurons, timesteps).
"""
np.random.seed(random_state)
n_neurons = len(A)
X = np.zeros((n_neurons, timesteps))
for t in range(timesteps - 1):
# solution
epsilon = np.random.multivariate_normal(np.zeros(n_neurons), np.eye(n_neurons))
X[:, t + 1] = sigmoid(A.dot(X[:, t]) + epsilon)
assert epsilon.shape == (n_neurons,)
return X
def correlation_for_all_neurons(X):
"""Computes the connectivity matrix for the all neurons using correlations
Args:
X: the matrix of activities
Returns:
R (np.ndarray): estimated connectivity matrix for all neurons, of shape (n_neurons, n_neurons)
"""
n_neurons = len(X)
S = np.concatenate([X[:, 1:], X[:, :-1]], axis=0)
R = np.corrcoef(S)[:n_neurons, n_neurons:]
return R
def get_sys_corr(n_neurons, timesteps, random_state=42, neuron_idx=None):
"""
A wrapper function for our correlation calculations between A and R.
Args:
n_neurons (int): the number of neurons in our system.
timesteps (int): the number of timesteps to simulate our system.
random_state (int): seed for reproducibility
neuron_idx (int): optionally provide a neuron idx to slice out
Returns:
A single float correlation value representing the similarity between A and R
"""
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
R = correlation_for_all_neurons(X)
return np.corrcoef(A.flatten(), R.flatten())[0, 1]
def get_regression_corr(n_neurons, A, X, observed_ratio, regression_args, neuron_idx=None):
"""
A wrapper function for our correlation calculations between A and the V estimated
from regression.
Args:
n_neurons (int): the number of neurons in our system.
A (np.array): the true connectivity
X (np.array): the simulated system
observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.
regression_args (dict): dictionary of lasso regression arguments and hyperparameters
neuron_idx (int): optionally provide a neuron idx to compute connectivity for
Returns:
A tuple of (correlation between A and the regression estimate, the estimated connectivity)
"""
assert (observed_ratio > 0) and (observed_ratio <= 1)
sel_idx = np.clip(int(n_neurons * observed_ratio), 1, n_neurons)
selected_X = X[:sel_idx, :]
selected_connectivity = A[:sel_idx, :sel_idx]
estimated_selected_connectivity = get_regression_estimate(selected_X, neuron_idx=neuron_idx)
if neuron_idx is None:
return np.corrcoef(selected_connectivity.flatten(),
estimated_selected_connectivity.flatten())[1, 0], estimated_selected_connectivity
else:
return np.corrcoef(selected_connectivity[neuron_idx, :],
estimated_selected_connectivity)[1, 0], estimated_selected_connectivity
def plot_connectivity_matrix(A, ax=None):
"""Plot the (weighted) connectivity matrix A as a heatmap
Args:
A (ndarray): connectivity matrix (n_neurons by n_neurons)
ax: axis on which to display connectivity matrix
"""
if ax is None:
ax = plt.gca()
lim = np.abs(A).max()
ax.imshow(A, vmin=-lim, vmax=lim, cmap="coolwarm")
###Output
_____no_output_____
###Markdown
--- Section 1: Regression
###Code
#@title Video 1: Regression approach
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Av4LaXZdgDo", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
You may be familiar with the idea that correlation only implies causation when there are no hidden *confounders*. This aligns with our intuition that correlation only implies causality when no alternative variables could explain away a correlation.**A confounding example**:Suppose you observe that people who sleep more do better in school. It's a nice correlation. But what else could explain it? Maybe people who sleep more are richer, don't work a second job, and have time to actually do homework. If you want to ask if sleep *causes* better grades, and want to answer that with correlations, you have to control for all possible confounds.A confound is any variable that affects both the outcome and your original covariate. In our example, confounds are things that affect both sleep and grades. **Controlling for a confound**: Confounds can be controlled for by adding them as covariates in a regression. But for your coefficients to be causal effects, you need three things: 1. **All** confounds are included as covariates2. Your regression assumes the same mathematical form of how covariates relate to outcomes (linear, GLM, etc.)3. No covariates are caused *by* both the treatment (original variable) and the outcome. These are [colliders](https://en.wikipedia.org/wiki/Collider_(statistics)); we won't introduce them today (but Google them on your own time! Colliders are very counterintuitive.)In the real world it is very hard to guarantee these conditions are met. In the brain it's even harder (as we can't measure all neurons). Luckily today we simulated the system ourselves.
###Code
#@title Video 2: Fitting a GLM
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GvMj9hRv5Ak", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
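###Markdown
Section 1.1 below fits the model after applying the inverse sigmoid (logit) transformation. As a quick numerical check, the sketch below verifies that the `logit` helper from the Setup really inverts `sigmoid` (both helpers and `np` are assumed to be in scope from the Setup cells):
###Code
# Check that logit(sigmoid(x)) recovers x up to floating-point error
x_test = np.linspace(-3, 3, 7)
print(np.allclose(logit(sigmoid(x_test)), x_test))  # expected: True
###Output
_____no_output_____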
###Markdown
Section 1.1: Recovering connectivity by model fittingRecall that in our system each neuron affects every other via:$$\vec{x}_{t+1} = \sigma(A\vec{x}_t + \epsilon_t), $$where $\sigma$ is our sigmoid nonlinearity from before: $\sigma(x) = \frac{1}{1 + e^{-x}}$Our system is a closed system, too, so there are no omitted variables. The regression coefficients should be the causal effect. Are they? We will use a regression approach to estimate the causal influence of all neurons on neuron 1. Specifically, we will use linear regression to determine the $A$ in:$$\sigma^{-1}(\vec{x}_{t+1}) = A\vec{x}_t + \epsilon_t ,$$where $\sigma^{-1}$ is the inverse sigmoid transformation, also sometimes referred to as the **logit** transformation: $\sigma^{-1}(x) = \log(\frac{x}{1-x})$.Let $T$ be the total number of timesteps, so the simulated states are $\vec{x}_0, \dots, \vec{x}_{T-1}$. Let $W$ be the $\vec{x}_t$ values, up to the second-to-last timestep:$$W = \begin{bmatrix}\mid & \mid & ... & \mid \\ \vec{x}_0 & \vec{x}_1 & ... & \vec{x}_{T-2} \\ \mid & \mid & ... & \mid\end{bmatrix}_{n \times (T-1)}$$Let $Y$ be the $\vec{x}_{t+1}$ values for a selected neuron, indexed by $i$, starting from the second timestep up to the last timestep:$$Y = \begin{bmatrix}x_{i,1} & x_{i,2} & ... & x_{i, T-1} \\ \end{bmatrix}_{1 \times (T-1)}$$You will then fit the following model:$$\sigma^{-1}(Y^T) = W^TV$$where $V$ is the $n \times 1$ coefficient vector of this regression, which will be the estimated connectivity between the selected neuron and the rest of the neurons.**Review**: As you learned Friday of Week 1, *lasso* a.k.a. **$L_1$ regularization** causes the coefficients to be sparse, containing mostly zeros. Think about why we want this here. Exercise 1: Use linear regression plus lasso to estimate causal connectivitiesYou will now create a function to fit the above regression model and recover $V$. We will then call this function to compare how close the regression and correlation estimates each come to the true causal connectivity.**Code**:You'll notice that we've transposed both $Y$ and $W$ here and in the code we've already provided below. Why is that? This is because the machine learning models provided in scikit-learn expect the *rows* of the input data to be the observations, while the *columns* are the variables. We have that inverted in our definitions of $Y$ and $W$, with the timesteps of our system (the observations) as the columns. So we transpose both matrices to make the matrix orientation correct for scikit-learn.- Because of the abstraction provided by scikit-learn, fitting this regression will just be a call to initialize the `Lasso()` estimator and a call to the `fit()` function- Use the following hyperparameters for the `Lasso` estimator: - `alpha = 0.01` - `fit_intercept = False`- How do we obtain $V$ from the fitted model?
###Code
def get_regression_estimate(X, neuron_idx):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
neuron_idx (int): a neuron index to compute connectivity for
Returns:
V (np.ndarray): estimated connectivity weights between the selected neuron and all neurons (one coefficient per neuron)
"""
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[[neuron_idx], 1:].transpose()
# Apply inverse sigmoid transformation
Y = logit(Y)
############################################################################
## TODO: Insert your code here to fit a regressor with Lasso. Lasso captures
## our assumption that most connections are precisely 0.
## Fill in the function and remove the NotImplementedError below
raise NotImplementedError("Please complete the regression exercise")
############################################################################
# Initialize regression model with no intercept and alpha=0.01
regression = ...
# Fit regression to the data
regression.fit(...)
V = regression.coef_
return V
# Parameters
n_neurons = 50 # the size of our system
timesteps = 10000 # the number of timesteps to take
random_state = 42
neuron_idx = 1
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
# Uncomment below to test your function
# V = get_regression_estimate(X, neuron_idx)
#print("Regression: correlation of estimated connectivity with true connectivity: {:.3f}".format(np.corrcoef(A[neuron_idx, :], V)[1, 0]))
#print("Lagged correlation of estimated connectivity with true connectivity: {:.3f}".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))
# to_remove solution
def get_regression_estimate(X, neuron_idx):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
neuron_idx (int): a neuron index to compute connectivity for
Returns:
V (np.ndarray): estimated connectivity weights between the selected neuron and all neurons (one coefficient per neuron)
"""
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[[neuron_idx], 1:].transpose()
# Apply inverse sigmoid transformation
Y = logit(Y)
# Initialize regression model with no intercept and alpha=0.01
regression = Lasso(fit_intercept=False, alpha=0.01)
# Fit regression to the data
regression.fit(W, Y)
V = regression.coef_
return V
# Parameters
n_neurons = 50 # the size of our system
timesteps = 10000 # the number of timesteps to take
random_state = 42
neuron_idx = 1
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
# Uncomment below to test your function
V = get_regression_estimate(X, neuron_idx)
print("Regression: correlation of estimated connectivity with true connectivity: {:.3f}".format(np.corrcoef(A[neuron_idx, :], V)[1, 0]))
print("Lagged correlation of estimated connectivity with true connectivity: {:.3f}".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))
###Output
_____no_output_____
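###Markdown
Beyond a single neuron, the Setup also defines `get_regression_estimate_full_connectivity`, which fits one lasso per neuron. A minimal sketch comparing its output against the full true matrix, assuming `A` and `X` from the previous cell (this fit may take a little while):
###Code
# Estimate the full connectivity matrix and correlate it with the ground truth
V_full = get_regression_estimate_full_connectivity(X)
full_corr = np.corrcoef(A.flatten(), V_full.flatten())[0, 1]
print("Full-matrix correlation with true connectivity: {:.3f}".format(full_corr))
###Output
_____no_output_____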
###Markdown
You should find that using regression, our estimated connectivity matrix has a correlation of 0.865 with the true connectivity matrix. With correlation, our estimated connectivity matrix has a correlation of 0.703 with the true connectivity matrix.We can see from these numbers that multiple regression is better than simple correlation for estimating connectivity. --- Section 2: Omitted Variable BiasIf we are unable to observe the entire system, **omitted variable bias** becomes a problem. If we don't have access to all the neurons, and therefore can't control for them, can we still estimate the causal effect accurately? Section 2.1: Visualizing subsets of the connectivity matrixWe first visualize different subsets of the connectivity matrix when we observe 75% of the neurons vs 25%.Recall the meaning of entries in our connectivity matrix: $A[i,j] = 1$ means a connectivity **from** neuron $i$ **to** neuron $j$ with strength $1$.
###Code
#@markdown Execute this cell to visualize subsets of connectivity matrix
# Run this cell to visualize the subsets of variables we observe
n_neurons = 25
A = create_connectivity(n_neurons)
fig, axs = plt.subplots(2, 2, figsize=(10, 10))
ratio_observed = [0.75, 0.25] # the proportion of neurons observed in our system
for i, ratio in enumerate(ratio_observed):
sel_idx = int(n_neurons * ratio)
offset = np.zeros((n_neurons, n_neurons))
axs[i,1].title.set_text("{}% neurons observed".format(int(ratio * 100)))
offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]
im = axs[i, 1].imshow(offset, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
axs[i, 1].set_xlabel("Connectivity from")
axs[i, 1].set_ylabel("Connectivity to")
plt.colorbar(im, ax=axs[i, 1], fraction=0.046, pad=0.04)
see_neurons(A,axs[i, 0],ratio)
plt.suptitle("Visualizing subsets of the connectivity matrix", y = 1.05)
plt.show()
###Output
_____no_output_____
###Markdown
Section 2.2: Effects of partial observability
###Code
#@title Video 3: Omitted variable bias
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="5CCib6CTMac", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____
###Markdown
**Video correction**: the labels "connectivity from"/"connectivity to" are swapped in the video but fixed in the figures/demos below Interactive Demo: Regression performance as a function of the number of observed neuronsWe will first change the number of observed neurons in the network and inspect the resulting estimates of connectivity in this interactive demo. How does the estimated connectivity differ?**Note:** the plots will take a moment or so to update after moving the slider.
###Code
#@markdown Execute this cell to enable demo
n_neurons = 50
A = create_connectivity(n_neurons, random_state=42)
X = simulate_neurons(A, 4000, random_state=42)
reg_args = {
"fit_intercept": False,
"alpha": 0.001
}
@widgets.interact
def plot_observed(n_observed=(5, 45, 5)):
to_neuron = 0
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
sel_idx = n_observed
ratio = (n_observed) / n_neurons
offset = np.zeros((n_neurons, n_neurons))
axs[0].title.set_text("{}% neurons observed".format(int(ratio * 100)))
offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]
im = axs[1].imshow(offset, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
plt.colorbar(im, ax=axs[1], fraction=0.046, pad=0.04)
see_neurons(A,axs[0], ratio, False)
corr, R = get_regression_corr_full_connectivity(n_neurons,
A,
X,
ratio,
reg_args)
#rect = patches.Rectangle((-.5,to_neuron-.5),n_observed,1,linewidth=2,edgecolor='k',facecolor='none')
#axs[1].add_patch(rect)
big_R = np.zeros(A.shape)
big_R[:sel_idx, :sel_idx] = 1 + R
#big_R[to_neuron, :sel_idx] = 1 + R
im = axs[2].imshow(big_R, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
plt.colorbar(im, ax=axs[2],fraction=0.046, pad=0.04)
c = 'w' if n_observed<(n_neurons-3) else 'k'
axs[2].text(0,n_observed+3,"Correlation : {:.2f}".format(corr), color=c, size=15)
#axs[2].axis("off")
axs[1].title.set_text("True connectivity")
axs[1].set_xlabel("Connectivity from")
axs[1].set_ylabel("Connectivity to")
axs[2].title.set_text("Estimated connectivity")
axs[2].set_xlabel("Connectivity from")
#axs[2].set_ylabel("Connectivity to")
###Output
_____no_output_____
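###Markdown
The `plot_connectivity_matrix` helper from the Setup can also be used to eyeball a single case. A minimal sketch, assuming `n_neurons`, `A`, `X`, and `reg_args` from the demo cell above:
###Code
# Visualize the observed subset of the true matrix next to its regression estimate
corr, V_est = get_regression_corr_full_connectivity(n_neurons, A, X, 0.5, reg_args)
n_obs = V_est.shape[0]
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
plot_connectivity_matrix(A[:n_obs, :n_obs], ax=axes[0])
axes[0].set_title("True connectivity (observed subset)")
plot_connectivity_matrix(V_est, ax=axes[1])
axes[1].set_title("Estimated (50% observed, corr = {:.2f})".format(corr))
plt.show()
###Output
_____no_output_____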
###Markdown
Next, we will inspect a plot of the correlation between true and estimated connectivity matrices vs the percent of neurons observed over multiple trials.What is the relationship that you see between performance and the number of neurons observed?**Note:** the cell below will take about 25-30 seconds to run.
###Code
#@title
#@markdown Plot correlation vs. subsampling
import warnings
warnings.filterwarnings('ignore')
# we'll simulate many systems for various ratios of observed neurons
n_neurons = 50
timesteps = 5000
ratio_observed = [1, 0.75, 0.5, .25, .12] # the proportion of neurons observed in our system
n_trials = 3 # run it this many times to get variability in our results
reg_args = {
"fit_intercept": False,
"alpha": 0.001
}
corr_data = np.zeros((n_trials, len(ratio_observed)))
for trial in range(n_trials):
A = create_connectivity(n_neurons, random_state=trial)
X = simulate_neurons(A, timesteps)
print("simulating trial {} of {}".format(trial + 1, n_trials))
for j, ratio in enumerate(ratio_observed):
result,_ = get_regression_corr_full_connectivity(n_neurons,
A,
X,
ratio,
reg_args)
corr_data[trial, j] = result
corr_mean = np.nanmean(corr_data, axis=0)
corr_std = np.nanstd(corr_data, axis=0)
plt.plot(np.asarray(ratio_observed) * 100, corr_mean)
plt.fill_between(np.asarray(ratio_observed) * 100,
corr_mean - corr_std,
corr_mean + corr_std,
alpha=.2)
plt.xlim([100, 10])
plt.xlabel("Percent of neurons observed")
plt.ylabel("connectivity matrices correlation")
plt.title("Performance of regression as a function of the number of neurons observed");
###Output
_____no_output_____
###Markdown
--- Summary
###Code
#@title Video 4: Summary
# Insert the ID of the corresponding youtube video
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="T1uGf1H31wE", width=854, height=480, fs=1)
print("Video available at https://youtu.be/" + video.id)
video
###Output
_____no_output_____ |
Assignments/ASSIGNMENT-2.ipynb | ###Markdown
Assignment 2: Containers**Deadline: Tuesday, September 15, 2020 before 20:00** - Please name your files: * ASSIGNMENT_2_FIRSTNAME_LASTNAME.ipynb * assignment2_utils.py- Please store the two files in a folder called ASSIGNMENT_2_FIRSTNAME_LASTNAME- Please zip your folder and please follow the following naming convention for the zip file: ASSIGNMENT_2_FIRSTNAME_LASTNAME.zip- Please submit your assignment on Canvas: Assignment 2- If you have **questions** about this topic, please contact us **([email protected])**. Questions and answers will be collected on Piazza, so please check if your question has already been answered first.In this block, we covered the following chapters:- Chapter 05 - Core concepts of containers- Chapter 06 - Lists- Chapter 07 - Sets- Chapter 08 - Comparison of lists and sets- Chapter 09 - Looping over containers.- Chapter 10 - Dictionaries- Chapter 11 - Functions and scopeIn this assignment, you will be asked to show what you have learned from the topics above! **Finding solutions online**Very often, you can find good solutions online. We encourage you to use online resources when you get stuck. However, please always try to understand the code you find and indicate that it is not your own. Use the following format to mark code written by someone else:Taken from [link] [date][code]\Please use a similar format to indicate that you have worked with a classmate (e.g. mention the name instead of the link). *Please stick to this strategy for all course assignments.* Exercise 1: Beersong*99 Bottles of Beer* is a traditional song in the United States and Canada. Write a Python program that generates the lyrics to the song. The song's simple lyrics are as follows: 99 bottles of beer on the wall, 99 bottles of beer. Take one down, pass it around, 98 bottles of beer on the wall.The same verse is repeated, each time with one fewer bottle. The song is completed when the singer or singers reach zero. After the last bottle is taken down and passed around, there is a special verse: No more bottles of beer on the wall, no more bottles of beer. Go to the store and buy some more, 99 bottles of beer on the wall. Notes:* Leave a blank line between verses.* Make sure that you print the singular form of "bottles" when the counter is at one. Hint:* While debugging the program, start from a small number, and change it to 99 when you are done (as shown below).* Use variables to prevent code repetition. You can use the following code snippet as a start:
###Code
for number in range(4, 0, -1): # change 4 to 99 when you're done with debugging
print(number, 'bottles of beer on the wall,')
###Output
4 bottles of beer on the wall,
3 bottles of beer on the wall,
2 bottles of beer on the wall,
1 bottles of beer on the wall,
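###Markdown
A hedged hint for the singular/plural requirement (an illustration of the pattern, not a full solution): the word form can be chosen from the counter with a conditional expression.
###Code
# Choosing 'bottle' vs 'bottles' based on the counter (illustration only)
for number in range(3, 0, -1):
    word = 'bottle' if number == 1 else 'bottles'
    print(number, word, 'of beer on the wall,')
###Output
_____no_output_____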
###Markdown
Exercise 2: list methods In this exercise, we will focus on the following list methods:a.) append b.) count c.) index d.) insert e.) pop. For each of the aforementioned list methods:* explain the positional parameters* explain the keyword parameters* you can exclude *self* from your explanation* explain what the goal of the method is and what data type it returns, e.g., string, list, set, etc.* give a working example. Also provide an example in which you provide a value for a keyword parameter (assuming the method has one or more keyword parameters). Exercise 3: set methods In this exercise, we will focus on the following set methods:* update* pop* remove* clear. For each of the aforementioned set methods:* explain the positional parameters* explain the keyword parameters* you can exclude *self* from your explanation* explain what the goal of the method is and what data type it returns, e.g., string, list, set, etc.* give a working example. Also provide an example in which you provide a value for a keyword parameter (assuming the method has one or more keyword parameters). Please fill in your answers here: Exercise 4: Analyzing vocabulary using setsPlease consider the following two texts: These stories were copied from [here](http://www.english-for-students.com/).
###Code
a_story = """In a far away kingdom, there was a river. This river was home to many golden swans. The swans spent most of their time on the banks of the river. Every six months, the swans would leave a golden feather as a fee for using the lake. The soldiers of the kingdom would collect the feathers and deposit them in the royal treasury.
One day, a homeless bird saw the river. "The water in this river seems so cool and soothing. I will make my home here," thought the bird.
As soon as the bird settled down near the river, the golden swans noticed her. They came shouting. "This river belongs to us. We pay a golden feather to the King to use this river. You can not live here."
"I am homeless, brothers. I too will pay the rent. Please give me shelter," the bird pleaded. "How will you pay the rent? You do not have golden feathers," said the swans laughing. They further added, "Stop dreaming and leave once." The humble bird pleaded many times. But the arrogant swans drove the bird away.
"I will teach them a lesson!" decided the humiliated bird.
She went to the King and said, "O King! The swans in your river are impolite and unkind. I begged for shelter but they said that they had purchased the river with golden feathers."
The King was angry with the arrogant swans for having insulted the homeless bird. He ordered his soldiers to bring the arrogant swans to his court. In no time, all the golden swans were brought to the King’s court.
"Do you think the royal treasury depends upon your golden feathers? You can not decide who lives by the river. Leave the river at once or you all will be beheaded!" shouted the King.
The swans shivered with fear on hearing the King. They flew away never to return. The bird built her home near the river and lived there happily forever. The bird gave shelter to all other birds in the river. """
print(a_story)
another_story = """Long time ago, there lived a King. He was lazy and liked all the comforts of life. He never carried out his duties as a King. "Our King does not take care of our needs. He also ignores the affairs of his kingdom." The people complained.
One day, the King went into the forest to hunt. After having wandered for quite sometime, he became thirsty. To his relief, he spotted a lake. As he was drinking water, he suddenly saw a golden swan come out of the lake and perch on a stone. "Oh! A golden swan. I must capture it," thought the King.
But as soon as he held his bow up, the swan disappeared. And the King heard a voice, "I am the Golden Swan. If you want to capture me, you must come to heaven."
Surprised, the King said, "Please show me the way to heaven." Do good deeds, serve your people and the messenger from heaven would come to fetch you to heaven," replied the voice.
The selfish King, eager to capture the Swan, tried doing some good deeds in his Kingdom. "Now, I suppose a messenger will come to take me to heaven," he thought. But, no messenger came.
The King then disguised himself and went out into the street. There he tried helping an old man. But the old man became angry and said, "You need not try to help. I am in this miserable state because of out selfish King. He has done nothing for his people."
Suddenly, the King heard the golden swan’s voice, "Do good deeds and you will come to heaven." It dawned on the King that by doing selfish acts, he will not go to heaven.
He realized that his people needed him and carrying out his duties was the only way to heaven. After that day he became a responsible King.
"""
###Output
_____no_output_____
###Markdown
Exercise 4a: preprocessing textBefore analyzing the two texts, we are first going to preprocess them. Please use a particular string method multiple times to replace the following characters by empty strings in both **a_story** and **another_story**:* newlines: '\n'* commas: ','* dots: '.'* quotes: '"'Please assign the processed texts to the variables **cleaned_story** and **cleaned_another_story**.
###Code
# your code here
###Output
_____no_output_____
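###Markdown
A small, hedged illustration of the string method intended here, shown on a toy string rather than on the stories themselves:
###Code
# str.replace returns a new string, so calls can be chained
toy = 'Hello,\n"world".'
cleaned_toy = toy.replace('\n', '').replace(',', '').replace('.', '').replace('"', '')
print(cleaned_toy)  # Helloworld
###Output
_____no_output_____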
###Markdown
Exercise 4b: from text to a listFor each text (**cleaned_story** and **cleaned_another_story**), please use a string method to convert **cleaned_story** and **cleaned_another_story** into lists by splitting using spaces. Please call the lists **list_cleaned_story** and **list_cleaned_another_story**.
###Code
# your code here
###Output
_____no_output_____
###Markdown
Exercise 4c: from a list to a vocabulary (a set)Please create a set for the words in each text by adding each word to a set. In the end, you should have two variables **vocab_a_story** and **vocab_another_story**, each containing the unique words in each story. Please use the output of Exercise 4b as the input for this exercise.
###Code
vocab_a_story = set()
for word in list_cleaned_story:
# insert your code here
###Output
_____no_output_____
###Markdown
do the same for the other text
###Code
# your code
###Output
_____no_output_____
###Markdown
Exercise 4d: analyzing vocabulariesPlease analyze the vocabularies by using set methods to determine:* which words occur in both texts* which words only occur in **a_story*** which words only occur in **another_story**
###Code
# your code
###Output
_____no_output_____
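###Markdown
A hedged illustration of the set operations that are useful here, shown on two toy vocabularies rather than on the story vocabularies:
###Code
# intersection (&) gives words in both sets; difference (-) gives words in only one of them
vocab_one = {'king', 'river', 'swan', 'bird'}
vocab_two = {'king', 'swan', 'heaven'}
print(vocab_one & vocab_two)   # words in both
print(vocab_one - vocab_two)   # words only in the first
print(vocab_two - vocab_one)   # words only in the second
###Output
_____no_output_____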
###Markdown
Exercise 5: countingBelow you find a list called **words**, which is a list of strings. a.) Please create a dictionary in which the **key** is the word, and the **value** is the frequency of the word. Exclude all words which meet at least one of the following requirements: * end with the letter e * start with the letter t * start with the letter c and end with the letter w (both conditions must be met) * have six or more lettersYou are not allowed to use the **collections** module to do this.
###Code
words = ['there',
'was',
'a',
'village',
'near',
'a',
'jungle',
'the',
'village',
'cows',
'used',
'to',
'go',
'up',
'to',
'the',
'jungle',
'in',
'search',
'of',
'food.',
'in',
'the',
'forest',
'there',
'lived',
'a',
'wicked',
'lion',
'he',
'used',
'to',
'kill',
'a',
'cow',
'now',
'and',
'then',
'and',
'eat',
'her',
'this',
'was',
'happening',
'for',
'quite',
'sometime',
'the',
'cows',
'were',
'frightened',
'one',
'day',
'all',
'the',
'cows',
'held',
'a',
'meeting',
'an',
'old',
'cow',
'said',
'listen',
'everybody',
'the',
'lion',
'eats',
'one',
'of',
'us',
'only',
'because',
'we',
'go',
'into',
'the',
'jungle',
'separately',
'from',
'now',
'on',
'we',
'will',
'all',
'be',
'together',
'from',
'then',
'on',
'all',
'the',
'cows',
'went',
'into',
'the',
'jungle',
'in',
'a',
'herd',
'when',
'they',
'heard',
'or',
'saw',
'the',
'lion',
'all',
'of',
'them',
'unitedly',
'moo',
'and',
'chased',
'him',
'away',
'moral',
'divided',
'we',
'fall',
'united',
'we',
'stand']
###Output
_____no_output_____
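###Markdown
A hedged sketch of the basic counting pattern for part a.) (without the exclusion rules), shown on a toy list rather than on `words`:
###Code
# Word-frequency counting with a plain dictionary (no collections module)
toy_words = ['cow', 'lion', 'cow', 'jungle', 'cow', 'lion']
frequencies = {}
for word in toy_words:
    if word not in frequencies:
        frequencies[word] = 0
    frequencies[word] += 1
print(frequencies)  # {'cow': 3, 'lion': 2, 'jungle': 1}
###Output
_____no_output_____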
###Markdown
b.) Analyze your dictionary by printing:* how many keys it has * what the highest word frequency is* the sum of all values. c.) In addition, print the frequencies of the following words using your dictionary (if the word does not occur in the dictionary, print 'WORD does not occur')* up* near* together* lion* cow
###Code
for word in ['up', 'near' , 'together', 'lion', 'cow']:
# print frequency
###Output
_____no_output_____
###Markdown
Exercise 6: Functions Exercise 6a: the beersongPlease write a function that prints the beersong when it is called.The function:* is called `print_beersong`* has one positional parameter `start_number` (this is 99 in the original song) * prints the beer song
###Code
def print_beersong(start_number):
"""
"""
###Output
_____no_output_____
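###Markdown
For the keyword-parameter variants in the exercises below, here is a toy example of a function with one positional and one keyword parameter (hypothetical names, for illustration only):
###Code
# 'count' is positional, 'drink' is a keyword parameter with a default value
def describe_wall(count, drink='beer'):
    print(count, 'bottles of', drink, 'on the wall')

describe_wall(99)                 # uses the default value: beer
describe_wall(99, drink='water')  # overrides the keyword parameter
###Output
_____no_output_____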
###Markdown
Exercise 6b: the whatever can be in a bottle songThere are other liquids than beer that can be placed in a bottle, e.g., *99 bottles of water on the wall*.Please write a function that prints a variation of the beersong when it is called. All occurrences of **beer** will be replaced by what the user provides as an argument, e.g., *water*.The function:* is called `print_liquids`* has one positional parameter: `start_number` (this is **99** in the original song) * has one keyword parameter: `liquid` (set the default value to **beer**)* prints a liquids song Exercise 6c: preprocessing textPlease write the answer to Exercise 4a as a function. The function replaces the following characters by empty strings in a text (as in Exercise 4a):* newlines: '\n'* commas: ','* dots: '.'* quotes: '"'The function:* is called `clean_text`* has one positional parameter `text`* returns a string, e.g., the cleaned text
###Code
def clean_text(text):
""""""
###Output
_____no_output_____
###Markdown
Exercise 6d: preprocessing text in a more general wayPlease write a function that replaces all characters that the user provides by empty strings.The function:* is called `clean_text_general`* has one positional parameter `text`* has one keyword parameter `chars_to_remove`, which is a set (set the default to {'\n', ',', '.', '"'})* returns a string, e.g., the cleaned textWhen the user provides a different value for `chars_to_remove`, e.g., {'a'}, then only those characters should be replaced by empty strings in the text.
###Code
def clean_text_general(text, chars_to_remove={'\n', ',', '.', '"'}):
    # your code here
###Output
_____no_output_____
###Markdown
Please store this function in a file called **assignment2_utils.py**. Please import the function and call it in this notebook. Exercise 6e: including and excluding wordsPlease write Exercise 5a as a function. The function:* is called `exclude_and_count`* has one positional parameter `words`, which is a list of strings.* creates a dictionary in which the **key** is a word and the **value** is the frequency of that word. * words are excluded if they meet one of the following criteria: * end with the letter e * start with the letter t * start with the letter c and end with the letter w (both conditions must be met) * have six or more letters* returns a dictionary in which the **key** is the word and the **value** is the frequency of the word.
###Code
def exclude_and_count(words):
    # your code here
###Output
_____no_output_____
###Markdown
Assignment 2: Containers**Deadline: Tuesday, September 21, 2021 before 20:00** - Please name your files: * ASSIGNMENT_2_FIRSTNAME_LASTNAME.ipynb * assignment2_utils.py- Please store the two files in a folder called ASSIGNMENT_2_FIRSTNAME_LASTNAME- Please zip your folder and please follow the following naming convention for the zip file: ASSIGNMENT_2_FIRSTNAME_LASTNAME.zip- Please submit your assignment on Canvas: Assignment 2- If you have **questions** about this topic, please contact us **([email protected])**. Questions and answers will be collected on Piazza, so please check if your question has already been answered first.In this block, we covered the following chapters:- Chapter 05 - Core concepts of containers- Chapter 06 - Lists- Chapter 07 - Sets- Chapter 08 - Comparison of lists and sets- Chapter 09 - Looping over containers.- Chapter 10 - Dictionaries- Chapter 11 - Functions and scopeIn this assignment, you will be asked to show what you have learned from the topics above! **Finding solutions online**Very often, you can find good solutions online. We encourage you to use online resources when you get stuck. However, please always try to understand the code you find and indicate that it is not your own. Use the following format to mark code written by someone else:Taken from [link] [date][code]\Please use a similar format to indicate that you have worked with a classmate (e.g. mention the name instead of the link). *Please stick to this strategy for all course assignments.* Exercise 1: Beersong*99 Bottles of Beer* is a traditional song in the United States and Canada. Write a Python program that generates the lyrics to the song. The song's simple lyrics are as follows: 99 bottles of beer on the wall, 99 bottles of beer. Take one down, pass it around, 98 bottles of beer on the wall.The same verse is repeated, each time with one fewer bottle. The song is completed when the singer or singers reach zero. After the last bottle is taken down and passed around, there is a special verse: No more bottles of beer on the wall, no more bottles of beer. Go to the store and buy some more, 99 bottles of beer on the wall. Notes:* Leave a blank line between verses.* Make sure that you print the singular form of "bottles" when the counter is at one. Hint:* While debugging the program, start from a small number, and change it to 99 when you are done (as shown below).* Use variables to prevent code repetition. You can use the following code snippet as a start:
###Code
for number in range(4, 0, -1): # change 4 to 99 when you're done with debugging
print(number, 'bottles of beer on the wall,')
###Output
4 bottles of beer on the wall,
3 bottles of beer on the wall,
2 bottles of beer on the wall,
1 bottles of beer on the wall,
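###Markdown
A hedged hint for the singular/plural requirement (an illustration of the pattern, not a full solution): the word form can be chosen from the counter with a conditional expression.
###Code
# Choosing 'bottle' vs 'bottles' based on the counter (illustration only)
for number in range(3, 0, -1):
    word = 'bottle' if number == 1 else 'bottles'
    print(number, word, 'of beer on the wall,')
###Output
_____no_output_____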
###Markdown
Exercise 2: list methods In this exercise, we will focus on the following list methods:a.) append b.) count c.) index d.) insert e.) pop. For each of the aforementioned list methods:* explain the positional parameters* explain the keyword parameters* you can exclude *self* from your explanation* explain what the goal of the method is and what data type it returns, e.g., string, list, set, etc.* give a working example. Also provide an example in which you provide a value for a keyword parameter (assuming the method has one or more keyword parameters). Exercise 3: set methods In this exercise, we will focus on the following set methods:* update* pop* remove* clear. For each of the aforementioned set methods:* explain the positional parameters* explain the keyword parameters* you can exclude *self* from your explanation* explain what the goal of the method is and what data type it returns, e.g., string, list, set, etc.* give a working example. Also provide an example in which you provide a value for a keyword parameter (assuming the method has one or more keyword parameters). Please fill in your answers here: Exercise 4: Analyzing vocabulary using setsPlease consider the following two texts: These stories were copied from [here](http://www.english-for-students.com/).
###Code
a_story = """In a far away kingdom, there was a river. This river was home to many golden swans. The swans spent most of their time on the banks of the river. Every six months, the swans would leave a golden feather as a fee for using the lake. The soldiers of the kingdom would collect the feathers and deposit them in the royal treasury.
One day, a homeless bird saw the river. "The water in this river seems so cool and soothing. I will make my home here," thought the bird.
As soon as the bird settled down near the river, the golden swans noticed her. They came shouting. "This river belongs to us. We pay a golden feather to the King to use this river. You can not live here."
"I am homeless, brothers. I too will pay the rent. Please give me shelter," the bird pleaded. "How will you pay the rent? You do not have golden feathers," said the swans laughing. They further added, "Stop dreaming and leave once." The humble bird pleaded many times. But the arrogant swans drove the bird away.
"I will teach them a lesson!" decided the humiliated bird.
She went to the King and said, "O King! The swans in your river are impolite and unkind. I begged for shelter but they said that they had purchased the river with golden feathers."
The King was angry with the arrogant swans for having insulted the homeless bird. He ordered his soldiers to bring the arrogant swans to his court. In no time, all the golden swans were brought to the King’s court.
"Do you think the royal treasury depends upon your golden feathers? You can not decide who lives by the river. Leave the river at once or you all will be beheaded!" shouted the King.
The swans shivered with fear on hearing the King. They flew away never to return. The bird built her home near the river and lived there happily forever. The bird gave shelter to all other birds in the river. """
print(a_story)
another_story = """Long time ago, there lived a King. He was lazy and liked all the comforts of life. He never carried out his duties as a King. "Our King does not take care of our needs. He also ignores the affairs of his kingdom." The people complained.
One day, the King went into the forest to hunt. After having wandered for quite sometime, he became thirsty. To his relief, he spotted a lake. As he was drinking water, he suddenly saw a golden swan come out of the lake and perch on a stone. "Oh! A golden swan. I must capture it," thought the King.
But as soon as he held his bow up, the swan disappeared. And the King heard a voice, "I am the Golden Swan. If you want to capture me, you must come to heaven."
Surprised, the King said, "Please show me the way to heaven." Do good deeds, serve your people and the messenger from heaven would come to fetch you to heaven," replied the voice.
The selfish King, eager to capture the Swan, tried doing some good deeds in his Kingdom. "Now, I suppose a messenger will come to take me to heaven," he thought. But, no messenger came.
The King then disguised himself and went out into the street. There he tried helping an old man. But the old man became angry and said, "You need not try to help. I am in this miserable state because of out selfish King. He has done nothing for his people."
Suddenly, the King heard the golden swan’s voice, "Do good deeds and you will come to heaven." It dawned on the King that by doing selfish acts, he will not go to heaven.
He realized that his people needed him and carrying out his duties was the only way to heaven. After that day he became a responsible King.
"""
###Output
_____no_output_____
###Markdown
Exercise 4a: preprocessing textBefore analyzing the two texts, we are first going to preprocess them. Please use a particular string method multiple times to replace the following characters by empty strings in both **a_story** and **another_story**:* newlines: '\n'* commas: ','* dots: '.'* quotes: '"'Please assign the processed texts to the variables **cleaned_story** and **cleaned_another_story**.
###Code
# your code here
###Output
_____no_output_____
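###Markdown
A small, hedged illustration of the string method intended here, shown on a toy string rather than on the stories themselves:
###Code
# str.replace returns a new string, so calls can be chained
toy = 'Hello,\n"world".'
cleaned_toy = toy.replace('\n', '').replace(',', '').replace('.', '').replace('"', '')
print(cleaned_toy)  # Helloworld
###Output
_____no_output_____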
###Markdown
Exercise 4b: from text to a listFor each text (**cleaned_story** and **cleaned_another_story**), please use a string method to convert **cleaned_story** and **cleaned_another_story** into lists by splitting using spaces. Please call the lists **list_cleaned_story** and **list_cleaned_another_story**.
###Code
# your code here
###Output
_____no_output_____
###Markdown
Exercise 4c: from a list to a vocabulary (a set)Please create a set for the words in each text by adding each word to a set. In the end, you should have two variables **vocab_a_story** and **vocab_another_story**, each containing the unique words in each story. Please use the output of Exercise 4b as the input for this exercise.
###Code
vocab_a_story = set()
for word in list_cleaned_story:
# insert your code here
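    vocab_a_story.add(word)  # one possible completion: add each word to the set of unique words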
###Output
_____no_output_____
###Markdown
do the same for the other text
###Code
# your code
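# A sketch mirroring the loop above, this time for the second story:
vocab_another_story = set()
for word in list_cleaned_another_story:
    vocab_another_story.add(word)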
###Output
_____no_output_____
###Markdown
Exercise 4d: analyzing vocabulariesPlease analyze the vocabularies by using set methods to determine:* which words occur in both texts* which words only occur in **a_story*** which words only occur in **another_story**
###Code
# your code
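# A possible approach (a sketch) using set operators:
print('in both texts:', vocab_a_story & vocab_another_story)          # intersection
print('only in a_story:', vocab_a_story - vocab_another_story)        # difference
print('only in another_story:', vocab_another_story - vocab_a_story)  # difference the other way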
###Output
_____no_output_____
###Markdown
Exercise 5: countingBelow you find a list called **words**, which is a list of strings. a.) Please create a dictionary in which the **key** is the word, and the **value** is the frequency of the word. Exclude all words which meet at least one of the following requirements: * end with the letter e * start with the letter t * start with the letter c and end with the letter w (both conditions must be met) * have six or more lettersYou are not allowed to use the **collections** module to do this.
###Code
words = ['there',
'was',
'a',
'village',
'near',
'a',
'jungle',
'the',
'village',
'cows',
'used',
'to',
'go',
'up',
'to',
'the',
'jungle',
'in',
'search',
'of',
'food.',
'in',
'the',
'forest',
'there',
'lived',
'a',
'wicked',
'lion',
'he',
'used',
'to',
'kill',
'a',
'cow',
'now',
'and',
'then',
'and',
'eat',
'her',
'this',
'was',
'happening',
'for',
'quite',
'sometime',
'the',
'cows',
'were',
'frightened',
'one',
'day',
'all',
'the',
'cows',
'held',
'a',
'meeting',
'an',
'old',
'cow',
'said',
'listen',
'everybody',
'the',
'lion',
'eats',
'one',
'of',
'us',
'only',
'because',
'we',
'go',
'into',
'the',
'jungle',
'separately',
'from',
'now',
'on',
'we',
'will',
'all',
'be',
'together',
'from',
'then',
'on',
'all',
'the',
'cows',
'went',
'into',
'the',
'jungle',
'in',
'a',
'herd',
'when',
'they',
'heard',
'or',
'saw',
'the',
'lion',
'all',
'of',
'them',
'unitedly',
'moo',
'and',
'chased',
'him',
'away',
'moral',
'divided',
'we',
'fall',
'united',
'we',
'stand']
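# A sketch of one way to build the frequency dictionary while applying the exclusions listed
# above (ends with 'e', starts with 't', starts with 'c' and ends with 'w', six or more letters).
# The name word2freq is only illustrative, not prescribed by the assignment.
word2freq = {}
for word in words:
    if word.endswith('e') or word.startswith('t'):
        continue
    if word.startswith('c') and word.endswith('w'):
        continue
    if len(word) >= 6:
        continue
    word2freq[word] = word2freq.get(word, 0) + 1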
###Output
_____no_output_____
###Markdown
b.) Analyze your dictionary by printing:* how many keys it has * what the highest word frequency is* the sum of all values. c.) In addition, print the frequencies of the following words using your dictionary (if the word does not occur in the dictionary, print 'WORD does not occur')* up* near* together* lion* cow
###Code
for word in ['up', 'near' , 'together', 'lion', 'cow']:
# print frequency
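    # one possible completion (a sketch), assuming the dictionary from 5a is called word2freq:
    if word in word2freq:
        print(word, word2freq[word])
    else:
        print(word, 'does not occur')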
###Output
_____no_output_____
###Markdown
Exercise 6: Functions Exercise 6a: the beersongPlease write a function that prints the beersong when it is called.The function:* is called `print_beersong`* has one positional parameter `start_number` (this is 99 in the original song) * prints the beer song
###Code
def print_beersong(start_number):
"""
"""
###Output
_____no_output_____
###Markdown
Exercise 6b: the whatever can be in a bottle songThere are other liquids than beer that can be placed in a bottle, e.g., *99 bottles of water on the wall.*.Please write a function that prints a variation of the beersong when it is called. All occurrences of **beer** will be replaced by what the user provides as an argument, e.g., *water*.The function:* is called `print_liquids`* has one positional parameter: `start_number` (this is **99** in the original song) * has one keyword parameter: `liquid` (set the default value to **beer**)* prints a liquids song Exercise 6c: preprocessing textPlease write the answer to Exercise 4a as a function. The function replaces the following characters by empty spaces in a text:* newlines: '\n'* commas: ','* dots: '.'* quotes: '"'The function is:* is called `clean_text`* has one positional parameter `text`* return a string, e.g., the cleaned text
###Code
def clean_text(text):
""""""
###Output
_____no_output_____
###Markdown
Exercise 6d: preprocessing text in a more general wayPlease write a function that replaces all characters that the user provides by empty spaces.The function is:* is called `clean_text_general`* has one positional parameter `text`* has one keyword parameter `chars_to_remove`, which is a set (set the default to {'\n', ',', '.', '"'})* return a string, e.g., the cleaned textWhen the user provides a different value to `chars_to_remove`, e.g., {'a'}, then only those characters should be replaced by empty spaces in the text.
###Code
def clean_text_general(text, chars_to_remove={'\n', ',', '.', '"'}):
    """Replace every character in chars_to_remove by a space and return the cleaned text."""
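    # A sketch of one possible body (the mutable default above follows the assignment text):
    for char in chars_to_remove:
        text = text.replace(char, ' ')
    return text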
###Output
_____no_output_____
###Markdown
Please store this function in a file called **assignment2_utils.py**. Please import the function and call it in this notebook. Exercise 6e: including and excluding wordsPlease write Exercise 5a as a function. The function:* is called `exclude_and_count`* has one positional parameter `words`, which is a list of strings.* creates a dictionary in which the **key** is a word and the **value** is the frequency of that word. * words are excluded if they meet one of the following criteria: * end with the letter e * start with the letter t * start with the letter c and end with the letter w (both conditions must be met) * have six or more letters* returns a dictionary in which the **key** is the word and the **value** is the frequency of the word.
###Code
def exclude_and_count(words):
    """Return a {word: frequency} dictionary for words, excluding words that match the criteria above."""
###Output
_____no_output_____
###Markdown
Assignment 2: Containers**Deadline: Tuesday, September 17, 2019 before 20:00** - Please name your files: * ASSIGNMENT_2_FIRSTNAME_LASTNAME.ipynb * assignment2_utils.py- Please store the two files in a folder called ASSIGNMENT_2_FIRSTNAME_LASTNAME- Please zip your folder and please follow the following naming convention for the zip file: ASSIGNMENT_2_FIRSTNAME_LASTNAME.zip- Please submit your assignment using [this google form](https://forms.gle/CeLm2rfQWGsD9S7v6) - If you have **questions** about this topic, please contact **Marten ([email protected])**.In this block, we covered the following chapters:- Chapter 05 - Core concepts of containers- Chapter 06 - Lists- Chapter 07 - Sets- Chapter 08 - Comparison of lists and sets- Chapter 09 - Looping over containers.- Chapter 10 - Dictionaries- Chapter 11 - Functions and scopeIn this assignment, you will be asked to show what you have learned from the topics above! **Finding solutions online**Very often, you can find good solutions online. We encourage you to use online resources when you get stuck. However, please always try to understand the code you find and indicate that it is not your own. Use the following format to mark code written by someone else: Taken from [link] [date][code]\Please use a similar format to indicate that you have worked with a classmate (e.g. mention the name instead of the link). *Please stick to this strategy for all course assignments.* Exercise 1: Beersong*99 Bottles of Beer* is a traditional song in the United States and Canada. Write a python program that generates the lyrics to the song. The song's simple lyrics are as follows: 99 bottles of beer on the wall, 99 bottles of beer. Take one down, pass it around, 98 bottles of beer on the wall.The same verse is repeated, each time with one fewer bottle. The songis completed when the singer or singers reach zero. After the last bottleis taken down and passed around, there is a special verse: No more bottles of beer on the wall, no more bottles of beer. Go to the store and buy some more, 99 bottles of beer on the wall. Notes:* Leave a blank line between verses.* Make sure that you print the singular form of "bottles" when the counter is at one. Hint:* While debugging the program, start from a small number, andchange it to 100 when you are done (as shown below).* Use variables to prevent code repetitionYou can use the following code snippet as a start:
###Code
for number in reversed(range(1, 5)): # change 5 to 100 when you're done with debugging
print(number, 'bottles of beer on the wall,')
###Output
_____no_output_____
###Markdown
Exercise 2: list methods In this exercise, we will focus on the following list methods:a.) appendb.) countc.) indexd.) inserte.) popFor each of the aforementioned list methods:* explain the positional arguments* explain the keyword arguments* explain what the goal of the method is and what it returns* give a working example. Provide also an example with a keyword argument (assuming the method has one or more keyword arguments). Exercise 3: set methods In this exercise, we will focus on the following set methods:* update* pop* remove* clearFor each of the aforementioned set methods:* explain the positional arguments* explain the keyword arguments* explain what the goal of the method is and what it returns* give a working example. Please fill in your answers here: Exercise 4: Analyzing vocabulary using setsPlease consider the following two texts: These stories were copied from [here](http://www.english-for-students.com/).
###Code
a_story = """In a far away kingdom, there was a river. This river was home to many golden swans. The swans spent most of their time on the banks of the river. Every six months, the swans would leave a golden feather as a fee for using the lake. The soldiers of the kingdom would collect the feathers and deposit them in the royal treasury.
One day, a homeless bird saw the river. "The water in this river seems so cool and soothing. I will make my home here," thought the bird.
As soon as the bird settled down near the river, the golden swans noticed her. They came shouting. "This river belongs to us. We pay a golden feather to the King to use this river. You can not live here."
"I am homeless, brothers. I too will pay the rent. Please give me shelter," the bird pleaded. "How will you pay the rent? You do not have golden feathers," said the swans laughing. They further added, "Stop dreaming and leave once." The humble bird pleaded many times. But the arrogant swans drove the bird away.
"I will teach them a lesson!" decided the humiliated bird.
She went to the King and said, "O King! The swans in your river are impolite and unkind. I begged for shelter but they said that they had purchased the river with golden feathers."
The King was angry with the arrogant swans for having insulted the homeless bird. He ordered his soldiers to bring the arrogant swans to his court. In no time, all the golden swans were brought to the King’s court.
"Do you think the royal treasury depends upon your golden feathers? You can not decide who lives by the river. Leave the river at once or you all will be beheaded!" shouted the King.
The swans shivered with fear on hearing the King. They flew away never to return. The bird built her home near the river and lived there happily forever. The bird gave shelter to all other birds in the river. """
print(a_story)
another_story = """Long time ago, there lived a King. He was lazy and liked all the comforts of life. He never carried out his duties as a King. "Our King does not take care of our needs. He also ignores the affairs of his kingdom." The people complained.
One day, the King went into the forest to hunt. After having wandered for quite sometime, he became thirsty. To his relief, he spotted a lake. As he was drinking water, he suddenly saw a golden swan come out of the lake and perch on a stone. "Oh! A golden swan. I must capture it," thought the King.
But as soon as he held his bow up, the swan disappeared. And the King heard a voice, "I am the Golden Swan. If you want to capture me, you must come to heaven."
Surprised, the King said, "Please show me the way to heaven." "Do good deeds, serve your people and the messenger from heaven would come to fetch you to heaven," replied the voice.
The selfish King, eager to capture the Swan, tried doing some good deeds in his Kingdom. "Now, I suppose a messenger will come to take me to heaven," he thought. But, no messenger came.
The King then disguised himself and went out into the street. There he tried helping an old man. But the old man became angry and said, "You need not try to help. I am in this miserable state because of our selfish King. He has done nothing for his people."
Suddenly, the King heard the golden swan’s voice, "Do good deeds and you will come to heaven." It dawned on the King that by doing selfish acts, he will not go to heaven.
He realized that his people needed him and carrying out his duties was the only way to heaven. After that day he became a responsible King.
"""
###Output
_____no_output_____
###Markdown
Exercise 4a: preprocessing textBefore analyzing the two texts, we are first going to preprocess them. Please use a particular string method multiple times to replace the following characters by empty strings in both **a_story** and **another_story**:* newlines: '\n'* commas: ','* dots: '.'* quotes: '"'After preprocessing both texts, please call the cleaned stories **cleaned_story** and **cleaned_another_story** Exercise 4b: from text to a listFor each text (**cleaned_story** and **cleaned_another_story**), please use a string method to convert **cleaned_story** and **cleaned_another_story** into lists by splitting using spaces. Please call the lists **list_cleaned_story** and **list_cleaned_another_story**. Exercise 4c: from a list to a vocabulary (a set)Please create a set for the words in each text by adding each word to a set. At the end, you should have two variables **vocab_a_story** and **vocab_another_story**, each containing the unique words in each story. Please use the output of Exercise 4b as the input for this exercise.
###Code
vocab_a_story = set()
for word in list_cleaned_story:
# insert your code here
###Output
_____no_output_____
###Markdown
do the same for the other text
###Code
# your code
###Output
_____no_output_____
###Markdown
Exercise 4d: analyzing vocabulariesPlease analyze the vocabularies by using set methods to determine:* which words occur in both texts* which words only occur in **a_story*** which words only occur in **another_story**
###Code
# your code
###Output
_____no_output_____
###Markdown
Exercise 5: countingBelow you find a list called **words**, which is a list of strings. a.) Please create a dictionary in which the **key** is the word and the **value** is the frequency of the word. However, do not include words that: * end with the letter e * start with the letter t * start with the letter c and end with the letter w (both conditions must be met) * have five or more lettersYou are not allowed to use the **collections** module to do this.
###Code
words = ['there',
'was',
'a',
'village',
'near',
'a',
'jungle',
'the',
'village',
'cows',
'used',
'to',
'go',
'up',
'to',
'the',
'jungle',
'in',
'search',
'of',
'food.',
'in',
'the',
'forest',
'there',
'lived',
'a',
'wicked',
'lion',
'he',
'used',
'to',
'kill',
'a',
'cow',
'now',
'and',
'then',
'and',
'eat',
'her',
'this',
'was',
'happening',
'for',
'quite',
'sometime',
'the',
'cows',
'were',
'frightened',
'one',
'day',
'all',
'the',
'cows',
'held',
'a',
'meeting',
'an',
'old',
'cow',
'said',
'listen',
'everybody',
'the',
'lion',
'eats',
'one',
'of',
'us',
'only',
'because',
'we',
'go',
'into',
'the',
'jungle',
'separately',
'from',
'now',
'on',
'we',
'will',
'all',
'be',
'together',
'from',
'then',
'on',
'all',
'the',
'cows',
'went',
'into',
'the',
'jungle',
'in',
'a',
'herd',
'when',
'they',
'heard',
'or',
'saw',
'the',
'lion',
'all',
'of',
'them',
'unitedly',
'moo',
'and',
'chased',
'him',
'away',
'moral',
'divided',
'we',
'fall',
'united',
'we',
'stand']
###Output
_____no_output_____
###Markdown
b.) Analyze your dictionary by printing:* how many keys it has * what the highest word frequency is* the sum of all values. In addition, print the frequencies of the following words using your dictionary (if the word does not occur in the dictionary, print 'WORD does not occur')* up* near* together* lion* cow
###Code
for word in ['up', 'near' , 'together', 'lion', 'cow']:
# print frequency
###Output
_____no_output_____
###Markdown
Exercise 6: Functions Exercise 6a: the beersongPlease write a function that prints the beersong when it is called.The function:* is called `print_beersong`* has one positional parameter `start_number` (this is 99 in the original song) * prints the beer song
###Code
def print_beersong(start_number):
"""
"""
###Output
_____no_output_____
###Markdown
Exercise 6b: the whatever can be in a bottle songThere are other liquids than beer that can be placed in a bottle, e.g., *99 bottles of water on the wall.*.Please write a function that prints a variation of the beersong when it is called. All occurrences of **beer** will be replaced by what the user provides as an argument, e.g., *water*.The function:* is called `print_liquids`* has one positional parameter: `start_number` (this is **99** in the original song) * has one keyword parameter: `liquid` (set the default value to **beer**)* prints a liquids song Exercise 6c: preprocessing textPlease write the answer to Exercise 4a as a function. The function replaces the following characters by empty spaces in a text:* newlines: '\n'* commas: ','* dots: '.'* quotes: '"'The function is:* is called `clean_text`* has one positional parameter `text`* return a string, e.g., the cleaned text
###Code
def clean_text(text):
""""""
###Output
_____no_output_____
###Markdown
Exercise 6d: preprocessing text in a more general wayPlease write a function that replaces all characters that the user provides by empty spaces.The function is:* is called `clean_text_general`* has one positional parameter `text`* has one keyword parameter `chars_to_remove`, which is a set (set the default to {'\n', ',', '.', '"'})* return a string, e.g., the cleaned textWhen the user provides a different value to `chars_to_remove`, e.g., {'a'}, then only those characters should be replaced by empty spaces in the text.
###Code
def clean_text_general(text, chars_to_remove={'\n', ',', '.', '"'}):
    """Replace every character in chars_to_remove by a space and return the cleaned text."""
###Output
_____no_output_____
###Markdown
Please store this function in a file called **assignment2_utils.py**. Please import the function and call it in this notebook. Exercise 6e: including and excluding wordsPlease write Exercise 5a as a function. The function:* is called `exclude_and_count`* has one positional parameter `words`, which is a list of strings.* creates a dictionary in which the **key** is a word and the **value** is the frequency of that word. * words are excluded if they meet one of the following criteria: * end with the letter e * start with the letter t * start with the letter c and end with the letter w (both conditions must be met) * have five or more letters* returns a dictionary in which the **key** is the word and the **value** is the frequency of the word. * make sure you use a number of `assert` statements to determine that the function behaves as you expect it to behave. This means that you have to try it with different lists of words.
###Code
def exclude_and_count(words):
    """Return a {word: frequency} dictionary for words, excluding words that match the criteria above."""
###Output
_____no_output_____
###Markdown
Assignment 2: Containers** Due: Tuesday the 13th of November 2018 20:00 p.m.**Please name your ipython notebook with the following naming convention: ASSIGNMENT_2_FIRSTNAME_LASTNAME.ipynb Please submit your assignment using [this google form](https://docs.google.com/forms/d/e/1FAIpQLSeEdPV6Gv0Joa1kkU9DonM2nHOI0luFoNNjBogUNsMTkZyr0A/viewform?usp=sf_link)If you have **questions** about this topic, please **post them in the Canvas forum**. Exercise 1: Beersong*99 Bottles of Beer* is a traditional song in the United States and Canada. Write a python program that generates the lyrics to the song. The song's simple lyrics are as follows: 99 bottles of beer on the wall, 99 bottles of beer. Take one down, pass it around, 98 bottles of beer on the wall.The same verse is repeated, each time with one fewer bottle. The songis completed when the singer or singers reach zero. After the last bottleis taken down and passed around, there is a special verse: No more bottles of beer on the wall, no more bottles of beer. Go to the store and buy some more, 99 bottles of beer on the wall. Notes:* Leave a blank line between verses.* Make sure that you print the singular form of "bottles" when the counter is at one. Hint:* While debugging the program, start from a small number, andchange it to 99 when you are done (as shown below).* You can substract from a number with `number = number - 1`* Use variables to prevent code repetitionYou can use the following code snippet as a start:
###Code
for number in reversed(range(1, 5)): # change 5 to 99 when you're done with debugging
print(number, 'bottles of beer on the wall,')
###Output
_____no_output_____
###Markdown
Exercise 2: list methods In this exercise, we will focus on the following list methods:a.) appendb.) countc.) indexd.) inserte.) popFor each of the aforementioned list methods:* explain the positional arguments* explain the keyword arguments* explain what the goal of the method is and what it returns* give a working example. Provide also an example with a keyword argument (assuming the method has one or more keyword arguments). Exercise 3: set methods In this exercise, we will focus on the following set methods:* update* pop* remove* clearFor each of the aforementioned set methods:* explain the positional arguments* explain the keyword arguments* explain what the goal of the method is and what it returns* give a working example. Please fill in your answers here: Exercise 4: Analyzing vocabulary using setsPlease consider the following two texts: These stories were copied from [here](http://www.english-for-students.com/).
###Code
a_story = """In a far away kingdom, there was a river. This river was home to many golden swans. The swans spent most of their time on the banks of the river. Every six months, the swans would leave a golden feather as a fee for using the lake. The soldiers of the kingdom would collect the feathers and deposit them in the royal treasury.
One day, a homeless bird saw the river. "The water in this river seems so cool and soothing. I will make my home here," thought the bird.
As soon as the bird settled down near the river, the golden swans noticed her. They came shouting. "This river belongs to us. We pay a golden feather to the King to use this river. You can not live here."
"I am homeless, brothers. I too will pay the rent. Please give me shelter," the bird pleaded. "How will you pay the rent? You do not have golden feathers," said the swans laughing. They further added, "Stop dreaming and leave once." The humble bird pleaded many times. But the arrogant swans drove the bird away.
"I will teach them a lesson!" decided the humiliated bird.
She went to the King and said, "O King! The swans in your river are impolite and unkind. I begged for shelter but they said that they had purchased the river with golden feathers."
The King was angry with the arrogant swans for having insulted the homeless bird. He ordered his soldiers to bring the arrogant swans to his court. In no time, all the golden swans were brought to the King’s court.
"Do you think the royal treasury depends upon your golden feathers? You can not decide who lives by the river. Leave the river at once or you all will be beheaded!" shouted the King.
The swans shivered with fear on hearing the King. They flew away never to return. The bird built her home near the river and lived there happily forever. The bird gave shelter to all other birds in the river. """
print(a_story)
another_story = """Long time ago, there lived a King. He was lazy and liked all the comforts of life. He never carried out his duties as a King. “Our King does not take care of our needs. He also ignores the affairs of his kingdom." The people complained.
One day, the King went into the forest to hunt. After having wandered for quite sometime, he became thirsty. To his relief, he spotted a lake. As he was drinking water, he suddenly saw a golden swan come out of the lake and perch on a stone. “Oh! A golden swan. I must capture it," thought the King.
But as soon as he held his bow up, the swan disappeared. And the King heard a voice, “I am the Golden Swan. If you want to capture me, you must come to heaven."
Surprised, the King said, “Please show me the way to heaven." “Do good deeds, serve your people and the messenger from heaven would come to fetch you to heaven," replied the voice.
The selfish King, eager to capture the Swan, tried doing some good deeds in his Kingdom. “Now, I suppose a messenger will come to take me to heaven," he thought. But, no messenger came.
The King then disguised himself and went out into the street. There he tried helping an old man. But the old man became angry and said, “You need not try to help. I am in this miserable state because of our selfish King. He has done nothing for his people."
Suddenly, the King heard the golden swan’s voice, “Do good deeds and you will come to heaven." It dawned on the King that by doing selfish acts, he will not go to heaven.
He realized that his people needed him and carrying out his duties was the only way to heaven. After that day he became a responsible King.
"""
###Output
_____no_output_____
###Markdown
Exercise 4a: preprocessing textBefore analyzing the two texts, we are first going to preprocess them. Please use a particular string method multiple times to replace the following characters by empty strings in both **a_story** and **another_story**:* newlines: '\n'* commas: ','* dots: '.'* quotes: '"'After preprocessing both texts, please call the cleaned stories **cleaned_story** and **cleaned_another_story** Exercise 4b: from text to a listFor each text (**cleaned_story** and **cleaned_another_story**), please use a string method to convert **cleaned_story** and **cleaned_another_story** into lists by splitting using spaces. Exercise 4c: from a list to a vocabulary (a set)Please create a set for the words in each text by adding each word to a set. At the end, you should have two variables **vocab_a_story** and **vocab_another_story**, each containing the unique words in each story.
###Code
vocab_a_story = set()
for word in cleaned_story:
# insert your code here
###Output
_____no_output_____
###Markdown
do the same for the other text
###Code
# your code
###Output
_____no_output_____
###Markdown
Exercise 4d: analyzing vocabulariesPlease analyze the vocabularies by using set methods to determine:* which words occur in both texts* which words only occur in **a_story*** which words only occur in **another_story**
###Code
# your code
###Output
_____no_output_____
###Markdown
Exercise 5: countingBelow you find a list called **words**, which is a list of strings. a.) Please create a dictionary in which the **key** is the word and the **value** is the frequency of the word. However, do not include words that: * end with the letter e * start with the letter t * start with the letter c and end with the letter w (both conditions must be met) * have five or more lettersYou are not allowed to use the **collections** module to do this.
###Code
words = ['there',
'was',
'a',
'village',
'near',
'a',
'jungle',
'the',
'village',
'cows',
'used',
'to',
'go',
'up',
'to',
'the',
'jungle',
'in',
'search',
'of',
'food.',
'in',
'the',
'forest',
'there',
'lived',
'a',
'wicked',
'lion',
'he',
'used',
'to',
'kill',
'a',
'cow',
'now',
'and',
'then',
'and',
'eat',
'her',
'this',
'was',
'happening',
'for',
'quite',
'sometime',
'the',
'cows',
'were',
'frightened',
'one',
'day',
'all',
'the',
'cows',
'held',
'a',
'meeting',
'an',
'old',
'cow',
'said',
'listen',
'everybody',
'the',
'lion',
'eats',
'one',
'of',
'us',
'only',
'because',
'we',
'go',
'into',
'the',
'jungle',
'separately',
'from',
'now',
'on',
'we',
'will',
'all',
'be',
'together',
'from',
'then',
'on',
'all',
'the',
'cows',
'went',
'into',
'the',
'jungle',
'in',
'a',
'herd',
'when',
'they',
'heard',
'or',
'saw',
'the',
'lion',
'all',
'of',
'them',
'unitedly',
'moo',
'and',
'chased',
'him',
'away',
'moral',
'divided',
'we',
'fall',
'united',
'we',
'stand']
###Output
_____no_output_____
###Markdown
b.) Analyze your dictionary by printing:* how many keys it has * what the highest word frequency is* the sum of all values. In addition, print the frequencies of the following words using your dictionary (if the word does not occur in the dictionary, print 'WORD does not occur')* up* near* together* lion* cow
###Code
for word in ['up', 'near' , 'lion']:
# print frequency
###Output
_____no_output_____ |
OOPS/OOP Challenge.ipynb | ###Markdown
Content Copyright by Pierian Data Object Oriented Programming ChallengeFor this challenge, create a bank account class that has two attributes:* owner* balanceand two methods:* deposit* withdrawAs an added requirement, withdrawals may not exceed the available balance.Instantiate your class, make several deposits and withdrawals, and test to make sure the account can't be overdrawn.
###Code
class Account:
def __init__(self, owner, balance):
self.owner=owner
print(f"Account owner: {self.owner}")
self.balance=balance
print(f"Account balance: {self.balance}")
def deposit(self, deposit_amt):
self.balance=self.balance+deposit_amt
print(f"Deposit Accepted")
def withdraw(self, withdraw_amt):
if self.balance>=withdraw_amt:
self.balance=self.balance-withdraw_amt
print(f"Withdrawal Accepted")
else:
print(f"Funds Unavailable!")
# 1. Instantiate the class
acct1 = Account('Jose',100)
# 2. Print the object
print(acct1)
# 3. Show the account owner attribute
acct1.owner
# 4. Show the account balance attribute
acct1.balance
# 5. Make a series of deposits and withdrawals
acct1.deposit(50)
acct1.withdraw(75)
# 6. Make a withdrawal that exceeds the available balance
acct1.withdraw(500)
###Output
Funds Unavailable!
|
project-face-generation/.ipynb_checkpoints/dlnd_face_generation-checkpoint.ipynb | ###Markdown
Face GenerationIn this project, you'll define and train a DCGAN on a dataset of faces. Your goal is to get a generator network to generate *new* images of faces that look as realistic as possible!The project will be broken down into a series of tasks from **loading in data to defining and training adversarial networks**. At the end of the notebook, you'll be able to visualize the results of your trained Generator to see how it performs; your generated samples should look like fairly realistic faces with small amounts of noise. Get the DataYou'll be using the [CelebFaces Attributes Dataset (CelebA)](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) to train your adversarial networks.This dataset is more complex than the number datasets (like MNIST or SVHN) you've been working with, and so, you should prepare to define deeper networks and train them for a longer time to get good results. It is suggested that you utilize a GPU for training. Pre-processed DataSince the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. Some sample data is shown below.> If you are working locally, you can download this data [by clicking here](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/November/5be7eb6f_processed-celeba-small/processed-celeba-small.zip)This is a zip file that you'll need to extract in the home directory of this notebook for further loading and processing. After extracting the data, you should be left with a directory of data `processed_celeba_small/`
###Code
# can comment out after executing
#!unzip processed_celeba_small.zip
data_dir = 'processed_celeba_small/'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
import problem_unittests as tests
#import helper
%matplotlib inline
###Output
_____no_output_____
###Markdown
Visualize the CelebA DataThe [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations, you'll only need the images. Note that these are color images with [3 color channels (RGB)](https://en.wikipedia.org/wiki/Channel_(digital_image)RGB_Images) each. Pre-process and Load the DataSince the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. This *pre-processed* dataset is a smaller subset of the very large CelebA data.> There are a few other steps that you'll need to **transform** this data and create a **DataLoader**. Exercise: Complete the following `get_dataloader` function, such that it satisfies these requirements:* Your images should be square, Tensor images of size `image_size x image_size` in the x and y dimension.* Your function should return a DataLoader that shuffles and batches these Tensor images. ImageFolderTo create a dataset given a directory of images, it's recommended that you use PyTorch's [ImageFolder](https://pytorch.org/docs/stable/torchvision/datasets.htmlimagefolder) wrapper, with a root directory `processed_celeba_small/` and data transformation passed in.
###Code
# necessary imports
import torch
from torchvision import datasets
from torchvision import transforms
from torch.utils.data import DataLoader
def get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):
"""
Batch the neural network data using DataLoader
:param batch_size: The size of each batch; the number of images in a batch
:param img_size: The square size of the image data (x, y)
:param data_dir: Directory where image data is located
:return: DataLoader with batched data
"""
# TODO: Implement function and return a dataloader
#create transform
transform = transforms.Compose([transforms.Resize(image_size), transforms.ToTensor()])
#define dataset using ImageFolder
data = datasets.ImageFolder(data_dir, transform)
#create DataLoader
d_loader = DataLoader(dataset=data, batch_size=batch_size, shuffle=True)
return d_loader
###Output
_____no_output_____
###Markdown
Create a DataLoader Exercise: Create a DataLoader `celeba_train_loader` with appropriate hyperparameters.Call the above function and create a dataloader to view images. * You can decide on any reasonable `batch_size` parameter* Your `image_size` **must be** `32`. Resizing the data to a smaller size will make for faster training, while still creating convincing images of faces!
###Code
# Define function hyperparameters(for clarity)
batch_size = 128
img_size = 32
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Call your function and get a dataloader
celeba_train_loader = get_dataloader(batch_size, img_size)
###Output
_____no_output_____
###Markdown
Next, you can view some images! You should see square images of somewhat-centered faces.Note: You'll need to convert the Tensor images into a NumPy type and transpose the dimensions to correctly display an image, suggested `imshow` code is below, but it may not be perfect.
###Code
# helper display function
def imshow(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# obtain one batch of training images
dataiter = iter(celeba_train_loader)
images, _ = dataiter.next() # _ for no labels
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(20, 4))
plot_size=20
for idx in np.arange(plot_size):
ax = fig.add_subplot(2, plot_size/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
###Output
_____no_output_____
###Markdown
Exercise: Pre-process your image data and scale it to a pixel range of -1 to 1You need to do a bit of pre-processing; you know that the output of a `tanh` activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)
###Code
# TODO: Complete the scale function
def scale(x, feature_range=(-1, 1)):
''' Scale takes in an image x and returns that image, scaled
with a feature_range of pixel values from -1 to 1.
This function assumes that the input x is already scaled from 0-1.'''
# assume x is scaled to (0, 1)
# scale to feature_range and return scaled x
min, max = feature_range
x = x * (max-min) + min
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# check scaled range
# should be close to -1 to 1
img = images[0]
scaled_img = scale(img)
print('Min: ', scaled_img.min())
print('Max: ', scaled_img.max())
###Output
Min: tensor(-0.9922)
Max: tensor(0.9137)
###Markdown
--- Define the ModelA GAN is comprised of two adversarial networks, a discriminator and a generator. DiscriminatorYour first task will be to define the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers. To deal with this complex data, it's suggested you use a deep network with **normalization**. You are also allowed to create any helper functions that may be useful. Exercise: Complete the Discriminator class* The inputs to the discriminator are 32x32x3 tensor images* The output should be a single value that will indicate whether a given image is real or fake
###Code
import torch.nn as nn
import torch.nn.functional as F
#helper conv function - lets us add batch norm easily.
def conv(in_chan, out_chan, kernel, stride=2, padding=1, batch_norm=True):
layers = []
conv_layer = nn.Conv2d(in_channels=in_chan, out_channels=out_chan, kernel_size=kernel,
stride=stride, padding=padding, bias=False)
layers.append(conv_layer)
if batch_norm:
layers.append(nn.BatchNorm2d(out_chan))
return nn.Sequential(*layers)
class Discriminator(nn.Module):
def __init__(self, conv_dim=64):
"""
Initialize the Discriminator Module
:param conv_dim: The depth of the first convolutional layer
"""
super(Discriminator, self).__init__()
self.conv_dim = conv_dim
# complete init function
self.conv1 = conv(3, conv_dim, 4, batch_norm=False) #(16x16, depth=64)
self.conv2 = conv(conv_dim, conv_dim*2, 4) #8*8, 128
self.conv3 = conv(conv_dim*2, conv_dim*4, 4) #4*4, 256
#self.conv4 = conv(conv_dim*4, conv_dim*8, 4) #2*2, 512
#self.conv5 = conv(conv_dim*8, conv_dim*16, 4) #1*1, 1024
#classification layer
self.fc = nn.Linear(conv_dim*4*4*4, 1)
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: Discriminator logits; the output of the neural network
"""
# define feedforward behavior
out = F.leaky_relu(self.conv1(x), 0.2)
out = F.leaky_relu(self.conv2(out), 0.2)
out = F.leaky_relu(self.conv3(out), 0.2)
#out = F.leaky_relu(self.conv4(out))
#out = F.leaky_relu(self.conv5(out))#need to reshape from (50, 1, 1, 1) to (50, 1)
#flatten
out = out.view(-1, self.conv_dim*4*4*4)#keep track of the dimensions
out = self.fc(out)
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(Discriminator)
###Output
Tests Passed
###Markdown
GeneratorThe generator should upsample an input and generate a *new* image of the same size as our training data `32x32x3`. This should be mostly transpose convolutional layers with normalization applied to the outputs. Exercise: Complete the Generator class* The inputs to the generator are vectors of some length `z_size`* The output should be a image of shape `32x32x3`
###Code
def transpose_conv(in_chan, out_chan, k_size, stride=2, padding=1, batch_norm=True):
layers= []
transpose_conv_layer = nn.ConvTranspose2d(in_chan, out_chan, k_size, stride, padding, bias=False)
layers.append(transpose_conv_layer)
if batch_norm:
layers.append(nn.BatchNorm2d(out_chan))
return nn.Sequential(*layers)
class Generator(nn.Module):
def __init__(self, z_size, conv_dim=32):
"""
Initialize the Generator Module
:param z_size: The length of the input latent vector, z
:param conv_dim: The depth of the inputs to the *last* transpose convolutional layer
"""
super(Generator, self).__init__()
# complete init function
self.conv_dim = conv_dim
self.fc = nn.Linear(z_size, conv_dim*4*4*4)
#self.t_conv1 = transpose_conv(conv_dim*16, conv_dim*8, 4)#outputs 2*2
#self.t_conv2 = transpose_conv(conv_dim*8, conv_dim*4, 4)#4*4
self.t_conv3 = transpose_conv(conv_dim*4, conv_dim*2, 4)#8*8
self.t_conv4 = transpose_conv(conv_dim*2, conv_dim, 4)#16*16
self.t_conv5 = transpose_conv(conv_dim, 3, 4, batch_norm=False)#32*32
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: A 32x32x3 Tensor image as output
"""
# define feedforward behavior
out = self.fc(x)
out = out.view(-1, self.conv_dim*4,4,4)
#out = F.relu(self.t_conv1(out))
#out = F.relu(self.t_conv2(out))
out = F.relu(self.t_conv3(out))
out = F.relu(self.t_conv4(out))
        out = self.t_conv5(out)  # no ReLU on the output layer, otherwise tanh could never produce negative values
        out = F.tanh(out)
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
print(Generator(100,32))
tests.test_generator(Generator)
###Output
Generator(
(fc): Linear(in_features=100, out_features=2048, bias=True)
(t_conv3): Sequential(
(0): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(t_conv4): Sequential(
(0): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(t_conv5): Sequential(
(0): ConvTranspose2d(32, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
)
)
Tests Passed
###Markdown
Initialize the weights of your networksTo help your models converge, you should initialize the weights of the convolutional and linear layers in your model. From reading the [original DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf), they say:> All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02.So, your next task will be to define a weight initialization function that does just this!You can refer back to the lesson on weight initialization or even consult existing model code, such as that from [the `networks.py` file in CycleGAN Github repository](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/models/networks.py) to help you complete this function. Exercise: Complete the weight initialization function* This should initialize only **convolutional** and **linear** layers* Initialize the weights to a normal distribution, centered around 0, with a standard deviation of 0.02.* The bias terms, if they exist, may be left alone or set to 0.
###Code
from torch.nn import init
def weights_init_normal(m):
"""
Applies initial weights to certain layers in a model .
The weights are taken from a normal distribution
with mean = 0, std dev = 0.02.
:param m: A module or layer in a network
"""
classname = m.__class__.__name__
if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
init.normal_(m.weight.data, 0.0, 0.02)
if hasattr(m, 'bias') and m.bias is not None:
init.constant_(m.bias.data, 0.0)
elif classname.find('BatchNorm2d') != -1:
init.normal_(m.weight.data, 1.0, 0.02)
init.constant_(m.bias.data, 0.0)
###Output
_____no_output_____
###Markdown
Build complete networkDefine your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
def build_network(d_conv_dim, g_conv_dim, z_size):
# define discriminator and generator
D = Discriminator(d_conv_dim)
G = Generator(z_size=z_size, conv_dim=g_conv_dim)
# initialize model weights
D.apply(weights_init_normal)
G.apply(weights_init_normal)
print(D)
print()
print(G)
return D, G
###Output
_____no_output_____
###Markdown
Exercise: Define model hyperparameters
###Code
# Define model hyperparams
d_conv_dim = 32
g_conv_dim = 32
z_size = 100
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
D, G = build_network(d_conv_dim, g_conv_dim, z_size)
###Output
Discriminator(
(conv1): Sequential(
(0): Conv2d(3, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
)
(conv2): Sequential(
(0): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv3): Sequential(
(0): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(fc): Linear(in_features=2048, out_features=1, bias=True)
)
Generator(
(fc): Linear(in_features=100, out_features=2048, bias=True)
(t_conv3): Sequential(
(0): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(t_conv4): Sequential(
(0): ConvTranspose2d(64, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(t_conv5): Sequential(
(0): ConvTranspose2d(32, 3, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1), bias=False)
)
)
###Markdown
Training on GPUCheck if you can train on GPU. Here, we'll set this as a boolean variable `train_on_gpu`. Later, you'll be responsible for making sure that >* Models,* Model inputs, and* Loss function argumentsAre moved to GPU, where appropriate.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
else:
print('Training on GPU!')
###Output
Training on GPU!
###Markdown
--- Discriminator and Generator LossesNow we need to calculate the losses for both types of adversarial networks. Discriminator Losses> * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`. * Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Generator LossThe generator loss will look similar only with flipped labels. The generator's goal is to get the discriminator to *think* its generated images are *real*. Exercise: Complete real and fake loss functions**You may choose to use either cross entropy or a least squares error loss to complete the following `real_loss` and `fake_loss` functions.**
###Code
def real_loss(D_out):
'''Calculates how close discriminator outputs are to being real.
param, D_out: discriminator logits
return: real loss'''
batch_length = D_out.size(0)
labels = torch.ones(batch_length)
if train_on_gpu:
labels = labels.cuda()
criterion = nn.BCEWithLogitsLoss()
loss = criterion(D_out.squeeze(), labels)
return loss
def fake_loss(D_out):
'''Calculates how close discriminator outputs are to being fake.
param, D_out: discriminator logits
return: fake loss'''
batch_length = D_out.size(0)
labels = torch.zeros(batch_length)
if train_on_gpu:
labels = labels.cuda()
criterion = nn.BCEWithLogitsLoss()
loss = criterion(D_out.squeeze(), labels)
return loss
###Output
_____no_output_____
###Markdown
Optimizers Exercise: Define optimizers for your Discriminator (D) and Generator (G)Define optimizers for your models with appropriate hyperparameters.
###Code
import torch.optim as optim
#learning rate
lr = 0.0002
beta1 = 0.5
beta2 = 0.999
# Create optimizers for the discriminator D and generator G
d_optimizer = optim.Adam(D.parameters(), lr, betas=[beta1, beta2])
g_optimizer = optim.Adam(G.parameters(), lr, betas=[beta1, beta2])
###Output
_____no_output_____
###Markdown
--- TrainingTraining will involve alternating between training the discriminator and the generator. You'll use your functions `real_loss` and `fake_loss` to help you calculate the discriminator losses.* You should train the discriminator by alternating on real and fake images* Then the generator, which tries to trick the discriminator and should have an opposing loss function Saving SamplesYou've been given some code to print out some loss statistics and save some generated "fake" samples. Exercise: Complete the training functionKeep in mind that, if you've moved your models to GPU, you'll also have to move any model inputs to GPU.
###Code
def train(D, G, n_epochs, print_every=50):
'''Trains adversarial networks for some number of epochs
param, D: the discriminator network
param, G: the generator network
param, n_epochs: number of epochs to train for
param, print_every: when to print and record the models' losses
return: D and G losses'''
# move models to GPU
if train_on_gpu:
D.cuda()
G.cuda()
# keep track of loss and generated, "fake" samples
samples = []
losses = []
# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# move z to GPU if available
if train_on_gpu:
fixed_z = fixed_z.cuda()
# epoch training loop
for epoch in range(n_epochs):
# batch training loop
for batch_i, (real_images, _) in enumerate(celeba_train_loader):
batch_size = real_images.size(0)
real_images = scale(real_images)
# ===============================================
# YOUR CODE HERE: TRAIN THE NETWORKS
# ===============================================
# 1. Train the discriminator on real and fake images
d_optimizer.zero_grad()
if train_on_gpu:
real_images = real_images.cuda()
D_real = D(real_images)
d_real_loss = real_loss(D_real)
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
D_fake = D(fake_images)
d_fake_loss = fake_loss(D_fake)
d_loss = d_real_loss + d_fake_loss
#perform backpropagation
d_loss.backward()
d_optimizer.step()
# 2. Train the generator with an adversarial loss
g_optimizer.zero_grad()
# Generate fake images
z = np.random.uniform(-1, 1, size=(batch_size, z_size))
z = torch.from_numpy(z).float()
if train_on_gpu:
z = z.cuda()
fake_images = G(z)
# Compute the discriminator losses on fake images
# using flipped labels!
D_fake = D(fake_images)
g_loss = real_loss(D_fake) # use real loss to flip labels
# perform backprop
g_loss.backward()
g_optimizer.step()
# ===============================================
# END OF YOUR CODE
# ===============================================
# Print some loss stats
if batch_i % print_every == 0:
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, n_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# this code assumes your generator is named G, feel free to change the name
# generate and save sample, fake images
G.eval() # for generating samples
if train_on_gpu:
fixed_z = fixed_z.cuda()
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to training mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
# finally return losses
return losses
###Output
_____no_output_____
###Markdown
Set your number of training epochs and train your GAN!
###Code
# set number of epochs
n_epochs = 20
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# call training function
losses = train(D, G, n_epochs=n_epochs)
###Output
C:\Users\44774\Anaconda3\envs\deep-learning\lib\site-packages\torch\nn\functional.py:1320: UserWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.
warnings.warn("nn.functional.tanh is deprecated. Use torch.tanh instead.")
###Markdown
Training lossPlot the training losses for the generator and discriminator, recorded after each epoch.
###Code
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
###Output
_____no_output_____
###Markdown
Generator samples from trainingView samples of images from the generator, and answer a question about the strengths and weaknesses of your trained models.
###Code
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(16,4), nrows=2, ncols=8, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
img = img.detach().cpu().numpy()
img = np.transpose(img, (1, 2, 0))
img = ((img + 1)*255 / (2)).astype(np.uint8)
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((32,32,3)))
# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
###Output
_____no_output_____
###Markdown
Face GenerationIn this project, you'll define and train a DCGAN on a dataset of faces. Your goal is to get a generator network to generate *new* images of faces that look as realistic as possible!The project will be broken down into a series of tasks from **loading in data to defining and training adversarial networks**. At the end of the notebook, you'll be able to visualize the results of your trained Generator to see how it performs; your generated samples should look like fairly realistic faces with small amounts of noise. Get the DataYou'll be using the [CelebFaces Attributes Dataset (CelebA)](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) to train your adversarial networks.This dataset is more complex than the number datasets (like MNIST or SVHN) you've been working with, and so, you should prepare to define deeper networks and train them for a longer time to get good results. It is suggested that you utilize a GPU for training. Pre-processed DataSince the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. Some sample data is shown below.> If you are working locally, you can download this data [by clicking here](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/November/5be7eb6f_processed-celeba-small/processed-celeba-small.zip)This is a zip file that you'll need to extract in the home directory of this notebook for further loading and processing. After extracting the data, you should be left with a directory of data `processed_celeba_small/`
###Code
# can comment out after executing
!unzip processed_celeba_small.zip
data_dir = 'processed_celeba_small/'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
import problem_unittests as tests
#import helper
%matplotlib inline
###Output
_____no_output_____
###Markdown
Visualize the CelebA DataThe [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations, you'll only need the images. Note that these are color images with [3 color channels (RGB)](https://en.wikipedia.org/wiki/Channel_(digital_image)RGB_Images) each. Pre-process and Load the DataSince the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. This *pre-processed* dataset is a smaller subset of the very large CelebA data.> There are a few other steps that you'll need to **transform** this data and create a **DataLoader**. Exercise: Complete the following `get_dataloader` function, such that it satisfies these requirements:* Your images should be square, Tensor images of size `image_size x image_size` in the x and y dimension.* Your function should return a DataLoader that shuffles and batches these Tensor images. ImageFolderTo create a dataset given a directory of images, it's recommended that you use PyTorch's [ImageFolder](https://pytorch.org/docs/stable/torchvision/datasets.htmlimagefolder) wrapper, with a root directory `processed_celeba_small/` and data transformation passed in.
###Code
# necessary imports
import torch
from torchvision import datasets
from torchvision import transforms
def get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):
"""
Batch the neural network data using DataLoader
:param batch_size: The size of each batch; the number of images in a batch
:param img_size: The square size of the image data (x, y)
:param data_dir: Directory where image data is located
:return: DataLoader with batched data
"""
# TODO: Implement function and return a dataloader
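    # A possible sketch (commented out so the template still returns None), mirroring the
    # completed version of this function earlier in this file:
    #   transform = transforms.Compose([transforms.Resize(image_size), transforms.ToTensor()])
    #   data = datasets.ImageFolder(data_dir, transform)
    #   return torch.utils.data.DataLoader(dataset=data, batch_size=batch_size, shuffle=True)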
return None
###Output
_____no_output_____
###Markdown
Create a DataLoader Exercise: Create a DataLoader `celeba_train_loader` with appropriate hyperparameters.Call the above function and create a dataloader to view images. * You can decide on any reasonable `batch_size` parameter* Your `image_size` **must be** `32`. Resizing the data to a smaller size will make for faster training, while still creating convincing images of faces!
###Code
# Define function hyperparameters
batch_size =
img_size =
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Call your function and get a dataloader
celeba_train_loader = get_dataloader(batch_size, img_size)
###Output
_____no_output_____
###Markdown
Next, you can view some images! You should see square images of somewhat-centered faces.Note: You'll need to convert the Tensor images into a NumPy type and transpose the dimensions to correctly display an image; suggested `imshow` code is below, but it may not be perfect.
###Code
# helper display function
def imshow(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# obtain one batch of training images
dataiter = iter(celeba_train_loader)
images, _ = dataiter.next() # _ for no labels
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(20, 4))
plot_size=20
for idx in np.arange(plot_size):
ax = fig.add_subplot(2, plot_size/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
###Output
_____no_output_____
###Markdown
Exercise: Pre-process your image data and scale it to a pixel range of -1 to 1You need to do a bit of pre-processing; you know that the output of a `tanh` activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)
###Code
# TODO: Complete the scale function
def scale(x, feature_range=(-1, 1)):
''' Scale takes in an image x and returns that image, scaled
with a feature_range of pixel values from -1 to 1.
This function assumes that the input x is already scaled from 0-1.'''
# assume x is scaled to (0, 1)
# scale to feature_range and return scaled x
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# check scaled range
# should be close to -1 to 1
img = images[0]
scaled_img = scale(img)
print('Min: ', scaled_img.min())
print('Max: ', scaled_img.max())
###Output
_____no_output_____
###Markdown
--- Define the ModelA GAN is comprised of two adversarial networks, a discriminator and a generator. DiscriminatorYour first task will be to define the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers. To deal with this complex data, it's suggested you use a deep network with **normalization**. You are also allowed to create any helper functions that may be useful. Exercise: Complete the Discriminator class* The inputs to the discriminator are 32x32x3 tensor images* The output should be a single value that will indicate whether a given image is real or fake
###Code
import torch.nn as nn
import torch.nn.functional as F
class Discriminator(nn.Module):
def __init__(self, conv_dim):
"""
Initialize the Discriminator Module
:param conv_dim: The depth of the first convolutional layer
"""
super(Discriminator, self).__init__()
# complete init function
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: Discriminator logits; the output of the neural network
"""
# define feedforward behavior
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(Discriminator)
###Output
_____no_output_____
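###Markdown
For reference, here is a hedged sketch of one architecture that satisfies the requirements above (32x32x3 in, one logit out). It is an illustrative example, not the graded solution; the class and helper names (`SketchDiscriminator`, `_conv`) and the choice of three stride-2 convolutions are assumptions.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F

def _conv(in_c, out_c, batch_norm=True):
    # 4x4 convolution with stride 2: halves the spatial size each time
    layers = [nn.Conv2d(in_c, out_c, kernel_size=4, stride=2, padding=1, bias=False)]
    if batch_norm:
        layers.append(nn.BatchNorm2d(out_c))
    return nn.Sequential(*layers)

class SketchDiscriminator(nn.Module):
    def __init__(self, conv_dim=64):
        super(SketchDiscriminator, self).__init__()
        self.conv1 = _conv(3, conv_dim, batch_norm=False)   # 32x32 -> 16x16
        self.conv2 = _conv(conv_dim, conv_dim * 2)          # 16x16 -> 8x8
        self.conv3 = _conv(conv_dim * 2, conv_dim * 4)      # 8x8   -> 4x4
        self.fc = nn.Linear(conv_dim * 4 * 4 * 4, 1)        # single real/fake logit

    def forward(self, x):
        x = F.leaky_relu(self.conv1(x), 0.2)
        x = F.leaky_relu(self.conv2(x), 0.2)
        x = F.leaky_relu(self.conv3(x), 0.2)
        x = x.view(x.size(0), -1)
        return self.fc(x)

# quick shape check on random data: expect torch.Size([4, 1])
print(SketchDiscriminator()(torch.randn(4, 3, 32, 32)).shape)
###Output
_____no_output_____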
###Markdown
GeneratorThe generator should upsample an input and generate a *new* image of the same size as our training data `32x32x3`. This should be mostly transpose convolutional layers with normalization applied to the outputs. Exercise: Complete the Generator class* The inputs to the generator are vectors of some length `z_size`* The output should be an image of shape `32x32x3`
###Code
class Generator(nn.Module):
def __init__(self, z_size, conv_dim):
"""
Initialize the Generator Module
:param z_size: The length of the input latent vector, z
:param conv_dim: The depth of the inputs to the *last* transpose convolutional layer
"""
super(Generator, self).__init__()
# complete init function
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: A 32x32x3 Tensor image as output
"""
# define feedforward behavior
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(Generator)
###Output
_____no_output_____
###Markdown
Initialize the weights of your networksTo help your models converge, you should initialize the weights of the convolutional and linear layers in your model. From reading the [original DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf), they say:> All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02.So, your next task will be to define a weight initialization function that does just this!You can refer back to the lesson on weight initialization or even consult existing model code, such as that from [the `networks.py` file in CycleGAN Github repository](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/models/networks.py) to help you complete this function. Exercise: Complete the weight initialization function* This should initialize only **convolutional** and **linear** layers* Initialize the weights to a normal distribution, centered around 0, with a standard deviation of 0.02.* The bias terms, if they exist, may be left alone or set to 0.
###Code
def weights_init_normal(m):
"""
Applies initial weights to certain layers in a model .
The weights are taken from a normal distribution
with mean = 0, std dev = 0.02.
:param m: A module or layer in a network
"""
# classname will be something like:
# `Conv`, `BatchNorm2d`, `Linear`, etc.
classname = m.__class__.__name__
# TODO: Apply initial weights to convolutional and linear layers
###Output
_____no_output_____
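###Markdown
A hedged sketch of one way to complete `weights_init_normal` (the function name `sketch_weights_init_normal` is hypothetical; it mirrors the DCGAN-paper initialization described above but is not necessarily the graded solution):
###Code
import torch.nn as nn

def sketch_weights_init_normal(m):
    classname = m.__class__.__name__
    # only convolutional (Conv/ConvTranspose) and linear layers are touched;
    # BatchNorm layers are left at their defaults
    if hasattr(m, 'weight') and ('Conv' in classname or 'Linear' in classname):
        nn.init.normal_(m.weight.data, mean=0.0, std=0.02)
        if m.bias is not None:
            nn.init.constant_(m.bias.data, 0.0)

# usage (assuming a model instance): some_model.apply(sketch_weights_init_normal)
###Output
_____no_output_____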
###Markdown
Build complete networkDefine your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
def build_network(d_conv_dim, g_conv_dim, z_size):
# define discriminator and generator
D = Discriminator(d_conv_dim)
G = Generator(z_size=z_size, conv_dim=g_conv_dim)
# initialize model weights
D.apply(weights_init_normal)
G.apply(weights_init_normal)
print(D)
print()
print(G)
return D, G
###Output
_____no_output_____
###Markdown
Exercise: Define model hyperparameters
###Code
# Define model hyperparams
d_conv_dim =
g_conv_dim =
z_size =
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
D, G = build_network(d_conv_dim, g_conv_dim, z_size)
###Output
_____no_output_____
###Markdown
Training on GPUCheck if you can train on GPU. Here, we'll set this as a boolean variable `train_on_gpu`. Later, you'll be responsible for making sure that >* Models,* Model inputs, and* Loss function argumentsAre moved to GPU, where appropriate.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
else:
print('Training on GPU!')
###Output
_____no_output_____
###Markdown
--- Discriminator and Generator LossesNow we need to calculate the losses for both types of adversarial networks. Discriminator Losses> * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`. * Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Generator LossThe generator loss will look similar only with flipped labels. The generator's goal is to get the discriminator to *think* its generated images are *real*. Exercise: Complete real and fake loss functions**You may choose to use either cross entropy or a least squares error loss to complete the following `real_loss` and `fake_loss` functions.**
###Code
def real_loss(D_out):
'''Calculates how close discriminator outputs are to being real.
param, D_out: discriminator logits
return: real loss'''
loss =
return loss
def fake_loss(D_out):
'''Calculates how close discriminator outputs are to being fake.
param, D_out: discriminator logits
return: fake loss'''
loss =
return loss
###Output
_____no_output_____
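###Markdown
A hedged sketch of BCE-with-logits versions of the two losses (a least-squares variant would also satisfy the exercise). The function names are hypothetical, and the optional label smoothing is an assumption rather than part of the original notebook.
###Code
import torch
import torch.nn as nn

def sketch_real_loss(D_out, smooth=False):
    batch_size = D_out.size(0)
    # real images get label 1 (or 0.9 with label smoothing)
    labels = torch.ones(batch_size) * (0.9 if smooth else 1.0)
    if D_out.is_cuda:
        labels = labels.cuda()
    return nn.BCEWithLogitsLoss()(D_out.squeeze(), labels)

def sketch_fake_loss(D_out):
    batch_size = D_out.size(0)
    # fake images get label 0
    labels = torch.zeros(batch_size)
    if D_out.is_cuda:
        labels = labels.cuda()
    return nn.BCEWithLogitsLoss()(D_out.squeeze(), labels)
###Output
_____no_output_____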
###Markdown
Optimizers Exercise: Define optimizers for your Discriminator (D) and Generator (G)Define optimizers for your models with appropriate hyperparameters.
###Code
import torch.optim as optim
# Create optimizers for the discriminator D and generator G
d_optimizer =
g_optimizer =
###Output
_____no_output_____
###Markdown
--- TrainingTraining will involve alternating between training the discriminator and the generator. You'll use your functions `real_loss` and `fake_loss` to help you calculate the discriminator losses.* You should train the discriminator by alternating on real and fake images* Then the generator, which tries to trick the discriminator and should have an opposing loss function Saving SamplesYou've been given some code to print out some loss statistics and save some generated "fake" samples. Exercise: Complete the training functionKeep in mind that, if you've moved your models to GPU, you'll also have to move any model inputs to GPU.
###Code
def train(D, G, n_epochs, print_every=50):
'''Trains adversarial networks for some number of epochs
param, D: the discriminator network
param, G: the generator network
param, n_epochs: number of epochs to train for
param, print_every: when to print and record the models' losses
return: D and G losses'''
# move models to GPU
if train_on_gpu:
D.cuda()
G.cuda()
# keep track of loss and generated, "fake" samples
samples = []
losses = []
# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# move z to GPU if available
if train_on_gpu:
fixed_z = fixed_z.cuda()
# epoch training loop
for epoch in range(n_epochs):
# batch training loop
for batch_i, (real_images, _) in enumerate(celeba_train_loader):
batch_size = real_images.size(0)
real_images = scale(real_images)
# ===============================================
# YOUR CODE HERE: TRAIN THE NETWORKS
# ===============================================
# 1. Train the discriminator on real and fake images
d_loss =
# 2. Train the generator with an adversarial loss
g_loss =
# ===============================================
# END OF YOUR CODE
# ===============================================
# Print some loss stats
if batch_i % print_every == 0:
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, n_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# this code assumes your generator is named G, feel free to change the name
# generate and save sample, fake images
G.eval() # for generating samples
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to training mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
# finally return losses
return losses
###Output
_____no_output_____
###Markdown
Set your number of training epochs and train your GAN!
###Code
# set number of epochs
n_epochs =
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# call training function
losses = train(D, G, n_epochs=n_epochs)
###Output
_____no_output_____
###Markdown
Training lossPlot the training losses for the generator and discriminator, recorded after each epoch.
###Code
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
###Output
_____no_output_____
###Markdown
Generator samples from trainingView samples of images from the generator, and answer a question about the strengths and weaknesses of your trained models.
###Code
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(16,4), nrows=2, ncols=8, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
img = img.detach().cpu().numpy()
img = np.transpose(img, (1, 2, 0))
img = ((img + 1)*255 / (2)).astype(np.uint8)
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((32,32,3)))
# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
###Output
_____no_output_____
###Markdown
Face GenerationIn this project, you'll define and train a DCGAN on a dataset of faces. Your goal is to get a generator network to generate *new* images of faces that look as realistic as possible!The project will be broken down into a series of tasks from **loading in data to defining and training adversarial networks**. At the end of the notebook, you'll be able to visualize the results of your trained Generator to see how it performs; your generated samples should look like fairly realistic faces with small amounts of noise. Get the DataYou'll be using the [CelebFaces Attributes Dataset (CelebA)](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) to train your adversarial networks.This dataset is more complex than the number datasets (like MNIST or SVHN) you've been working with, and so, you should prepare to define deeper networks and train them for a longer time to get good results. It is suggested that you utilize a GPU for training. Pre-processed DataSince the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. Some sample data is shown below.> If you are working locally, you can download this data [by clicking here](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/November/5be7eb6f_processed-celeba-small/processed-celeba-small.zip)This is a zip file that you'll need to extract in the home directory of this notebook for further loading and processing. After extracting the data, you should be left with a directory of data `processed_celeba_small/`
###Code
# can comment out after executing
# !unzip processed_celeba_small.zip
data_dir = 'processed_celeba_small/'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
import problem_unittests as tests
#import helper
%matplotlib inline
###Output
_____no_output_____
###Markdown
Visualize the CelebA DataThe [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations, you'll only need the images. Note that these are color images with [3 color channels (RGB)](https://en.wikipedia.org/wiki/Channel_(digital_image)#RGB_Images) each. Pre-process and Load the DataSince the project's main focus is on building the GANs, we've done *some* of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. This *pre-processed* dataset is a smaller subset of the very large CelebA data.> There are a few other steps that you'll need to **transform** this data and create a **DataLoader**. Exercise: Complete the following `get_dataloader` function, such that it satisfies these requirements:* Your images should be square, Tensor images of size `image_size x image_size` in the x and y dimension.* Your function should return a DataLoader that shuffles and batches these Tensor images. ImageFolderTo create a dataset given a directory of images, it's recommended that you use PyTorch's [ImageFolder](https://pytorch.org/docs/stable/torchvision/datasets.html#imagefolder) wrapper, with a root directory `processed_celeba_small/` and data transformation passed in.
###Code
# necessary imports
import torch
from torchvision import datasets
from torchvision import transforms
import os
from torch.utils.data import DataLoader
def get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):
"""
Batch the neural network data using DataLoader
:param batch_size: The size of each batch; the number of images in a batch
:param img_size: The square size of the image data (x, y)
:param data_dir: Directory where image data is located
:return: DataLoader with batched data
"""
# TODO: Implement function and return a dataloader
transform = transforms.Compose([transforms.Resize(image_size), transforms.ToTensor()])
image_path = "./" + data_dir
train_dataset = datasets.ImageFolder(image_path, transform)
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, num_workers=0)
return train_loader
###Output
_____no_output_____
###Markdown
Create a DataLoader Exercise: Create a DataLoader `celeba_train_loader` with appropriate hyperparameters.Call the above function and create a dataloader to view images. * You can decide on any reasonable `batch_size` parameter* Your `image_size` **must be** `32`. Resizing the data to a smaller size will make for faster training, while still creating convincing images of faces!
###Code
# Define function hyperparameters
batch_size = 32
img_size = 32
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Call your function and get a dataloader
celeba_train_loader = get_dataloader(batch_size, img_size)
###Output
_____no_output_____
###Markdown
Next, you can view some images! You should see square images of somewhat-centered faces.Note: You'll need to convert the Tensor images into a NumPy type and transpose the dimensions to correctly display an image; suggested `imshow` code is below, but it may not be perfect.
###Code
# helper display function
def imshow(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# obtain one batch of training images
dataiter = iter(celeba_train_loader)
images, _ = dataiter.next() # _ for no labels
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(20, 4))
plot_size=20
for idx in np.arange(plot_size):
ax = fig.add_subplot(2, plot_size/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
###Output
_____no_output_____
###Markdown
Exercise: Pre-process your image data and scale it to a pixel range of -1 to 1You need to do a bit of pre-processing; you know that the output of a `tanh` activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)
###Code
# TODO: Complete the scale function
def scale(x, feature_range=(-1, 1)):
''' Scale takes in an image x and returns that image, scaled
with a feature_range of pixel values from -1 to 1.
This function assumes that the input x is already scaled from 0-1.'''
# assume x is scaled to (0, 1)
# scale to feature_range and return scaled x
min, max = feature_range
x = x * (max - min) + min
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# check scaled range
# should be close to -1 to 1
img = images[0]
scaled_img = scale(img)
print('Min: ', scaled_img.min())
print('Max: ', scaled_img.max())
###Output
Min: tensor(-0.9765)
Max: tensor(1.)
###Markdown
--- Define the ModelA GAN is comprised of two adversarial networks, a discriminator and a generator. DiscriminatorYour first task will be to define the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers. To deal with this complex data, it's suggested you use a deep network with **normalization**. You are also allowed to create any helper functions that may be useful. Exercise: Complete the Discriminator class* The inputs to the discriminator are 32x32x3 tensor images* The output should be a single value that will indicate whether a given image is real or fake
###Code
import torch.nn as nn
import torch.nn.functional as F
class Discriminator(nn.Module):
def __init__(self, conv_dim):
"""
Initialize the Discriminator Module
:param conv_dim: The depth of the first convolutional layer
"""
super(Discriminator, self).__init__()
# complete init function
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: Discriminator logits; the output of the neural network
"""
# define feedforward behavior
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(Discriminator)
###Output
_____no_output_____
###Markdown
GeneratorThe generator should upsample an input and generate a *new* image of the same size as our training data `32x32x3`. This should be mostly transpose convolutional layers with normalization applied to the outputs. Exercise: Complete the Generator class* The inputs to the generator are vectors of some length `z_size`* The output should be an image of shape `32x32x3`
###Code
class Generator(nn.Module):
def __init__(self, z_size, conv_dim):
"""
Initialize the Generator Module
:param z_size: The length of the input latent vector, z
:param conv_dim: The depth of the inputs to the *last* transpose convolutional layer
"""
super(Generator, self).__init__()
# complete init function
def forward(self, x):
"""
Forward propagation of the neural network
:param x: The input to the neural network
:return: A 32x32x3 Tensor image as output
"""
# define feedforward behavior
return x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(Generator)
###Output
_____no_output_____
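###Markdown
For reference, a hedged sketch of a generator that mirrors the requirements above (latent vector in, 32x32x3 tanh image out). The names (`SketchGenerator`, `_deconv`) and the specific layer sizes are illustrative assumptions, not the graded solution.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F

def _deconv(in_c, out_c, batch_norm=True):
    # 4x4 transpose convolution with stride 2: doubles the spatial size each time
    layers = [nn.ConvTranspose2d(in_c, out_c, kernel_size=4, stride=2, padding=1, bias=False)]
    if batch_norm:
        layers.append(nn.BatchNorm2d(out_c))
    return nn.Sequential(*layers)

class SketchGenerator(nn.Module):
    def __init__(self, z_size=100, conv_dim=32):
        super(SketchGenerator, self).__init__()
        self.conv_dim = conv_dim
        self.fc = nn.Linear(z_size, conv_dim * 4 * 4 * 4)      # project z to a 4x4 feature map
        self.deconv1 = _deconv(conv_dim * 4, conv_dim * 2)     # 4x4   -> 8x8
        self.deconv2 = _deconv(conv_dim * 2, conv_dim)         # 8x8   -> 16x16
        self.deconv3 = _deconv(conv_dim, 3, batch_norm=False)  # 16x16 -> 32x32

    def forward(self, x):
        x = self.fc(x).view(-1, self.conv_dim * 4, 4, 4)
        x = F.relu(self.deconv1(x))
        x = F.relu(self.deconv2(x))
        return torch.tanh(self.deconv3(x))  # pixel values in [-1, 1], matching scale()

# quick shape check on random latent vectors: expect torch.Size([4, 3, 32, 32])
print(SketchGenerator()(torch.randn(4, 100)).shape)
###Output
_____no_output_____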
###Markdown
Initialize the weights of your networksTo help your models converge, you should initialize the weights of the convolutional and linear layers in your model. From reading the [original DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf), they say:> All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02.So, your next task will be to define a weight initialization function that does just this!You can refer back to the lesson on weight initialization or even consult existing model code, such as that from [the `networks.py` file in CycleGAN Github repository](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/models/networks.py) to help you complete this function. Exercise: Complete the weight initialization function* This should initialize only **convolutional** and **linear** layers* Initialize the weights to a normal distribution, centered around 0, with a standard deviation of 0.02.* The bias terms, if they exist, may be left alone or set to 0.
###Code
def weights_init_normal(m):
"""
Applies initial weights to certain layers in a model .
The weights are taken from a normal distribution
with mean = 0, std dev = 0.02.
:param m: A module or layer in a network
"""
# classname will be something like:
# `Conv`, `BatchNorm2d`, `Linear`, etc.
classname = m.__class__.__name__
# TODO: Apply initial weights to convolutional and linear layers
###Output
_____no_output_____
###Markdown
Build complete networkDefine your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
def build_network(d_conv_dim, g_conv_dim, z_size):
# define discriminator and generator
D = Discriminator(d_conv_dim)
G = Generator(z_size=z_size, conv_dim=g_conv_dim)
# initialize model weights
D.apply(weights_init_normal)
G.apply(weights_init_normal)
print(D)
print()
print(G)
return D, G
###Output
_____no_output_____
###Markdown
Exercise: Define model hyperparameters
###Code
# Define model hyperparams
d_conv_dim =
g_conv_dim =
z_size =
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
D, G = build_network(d_conv_dim, g_conv_dim, z_size)
###Output
_____no_output_____
###Markdown
Training on GPUCheck if you can train on GPU. Here, we'll set this as a boolean variable `train_on_gpu`. Later, you'll be responsible for making sure that >* Models,* Model inputs, and* Loss function argumentsAre moved to GPU, where appropriate.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
else:
print('Training on GPU!')
###Output
_____no_output_____
###Markdown
--- Discriminator and Generator LossesNow we need to calculate the losses for both types of adversarial networks. Discriminator Losses> * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`. * Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Generator LossThe generator loss will look similar only with flipped labels. The generator's goal is to get the discriminator to *think* its generated images are *real*. Exercise: Complete real and fake loss functions**You may choose to use either cross entropy or a least squares error loss to complete the following `real_loss` and `fake_loss` functions.**
###Code
def real_loss(D_out):
'''Calculates how close discriminator outputs are to being real.
param, D_out: discriminator logits
return: real loss'''
loss =
return loss
def fake_loss(D_out):
'''Calculates how close discriminator outputs are to being fake.
param, D_out: discriminator logits
return: fake loss'''
loss =
return loss
###Output
_____no_output_____
###Markdown
Optimizers Exercise: Define optimizers for your Discriminator (D) and Generator (G)Define optimizers for your models with appropriate hyperparameters.
###Code
import torch.optim as optim
# Create optimizers for the discriminator D and generator G
d_optimizer =
g_optimizer =
###Output
_____no_output_____
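###Markdown
A hedged sketch of typical optimizer choices for this exercise: Adam with a small learning rate and beta1 = 0.5, following common DCGAN practice. It assumes `D` and `G` were instantiated above, and the exact hyperparameter values are illustrative.
###Code
import torch.optim as optim

sketch_d_optimizer = optim.Adam(D.parameters(), lr=0.0002, betas=(0.5, 0.999))
sketch_g_optimizer = optim.Adam(G.parameters(), lr=0.0002, betas=(0.5, 0.999))
###Output
_____no_output_____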
###Markdown
--- TrainingTraining will involve alternating between training the discriminator and the generator. You'll use your functions `real_loss` and `fake_loss` to help you calculate the discriminator losses.* You should train the discriminator by alternating on real and fake images* Then the generator, which tries to trick the discriminator and should have an opposing loss function Saving SamplesYou've been given some code to print out some loss statistics and save some generated "fake" samples. Exercise: Complete the training functionKeep in mind that, if you've moved your models to GPU, you'll also have to move any model inputs to GPU.
###Code
def train(D, G, n_epochs, print_every=50):
'''Trains adversarial networks for some number of epochs
param, D: the discriminator network
param, G: the generator network
param, n_epochs: number of epochs to train for
param, print_every: when to print and record the models' losses
return: D and G losses'''
# move models to GPU
if train_on_gpu:
D.cuda()
G.cuda()
# keep track of loss and generated, "fake" samples
samples = []
losses = []
# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# move z to GPU if available
if train_on_gpu:
fixed_z = fixed_z.cuda()
# epoch training loop
for epoch in range(n_epochs):
# batch training loop
for batch_i, (real_images, _) in enumerate(celeba_train_loader):
batch_size = real_images.size(0)
real_images = scale(real_images)
# ===============================================
# YOUR CODE HERE: TRAIN THE NETWORKS
# ===============================================
# 1. Train the discriminator on real and fake images
d_loss =
# 2. Train the generator with an adversarial loss
g_loss =
# ===============================================
# END OF YOUR CODE
# ===============================================
# Print some loss stats
if batch_i % print_every == 0:
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, n_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# this code assumes your generator is named G, feel free to change the name
# generate and save sample, fake images
G.eval() # for generating samples
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to training mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
# finally return losses
return losses
###Output
_____no_output_____
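###Markdown
A hedged sketch of the two update steps that belong inside the batch loop above. The helper name `sketch_train_step` is hypothetical, and it assumes the exercise pieces (`D`, `G`, `d_optimizer`, `g_optimizer`, `real_loss`, `fake_loss`, `z_size`, `train_on_gpu`) have already been completed; it is one possible implementation, not the graded solution.
###Code
import numpy as np
import torch

def sketch_train_step(real_images):
    batch_size = real_images.size(0)
    if train_on_gpu:
        real_images = real_images.cuda()

    # 1. discriminator step: sum of real and fake losses
    d_optimizer.zero_grad()
    d_real = real_loss(D(real_images))
    z = torch.from_numpy(np.random.uniform(-1, 1, size=(batch_size, z_size))).float()
    if train_on_gpu:
        z = z.cuda()
    fake_images = G(z).detach()  # detach so this step does not backprop into G
    d_loss = d_real + fake_loss(D(fake_images))
    d_loss.backward()
    d_optimizer.step()

    # 2. generator step: flipped labels (G wants D to call its fakes "real")
    g_optimizer.zero_grad()
    z = torch.from_numpy(np.random.uniform(-1, 1, size=(batch_size, z_size))).float()
    if train_on_gpu:
        z = z.cuda()
    g_loss = real_loss(D(G(z)))
    g_loss.backward()
    g_optimizer.step()
    return d_loss, g_loss
###Output
_____no_output_____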
###Markdown
Set your number of training epochs and train your GAN!
###Code
# set number of epochs
n_epochs =
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# call training function
losses = train(D, G, n_epochs=n_epochs)
###Output
_____no_output_____
###Markdown
Training lossPlot the training losses for the generator and discriminator, recorded after each epoch.
###Code
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
###Output
_____no_output_____
###Markdown
Generator samples from trainingView samples of images from the generator, and answer a question about the strengths and weaknesses of your trained models.
###Code
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(16,4), nrows=2, ncols=8, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
img = img.detach().cpu().numpy()
img = np.transpose(img, (1, 2, 0))
img = ((img + 1)*255 / (2)).astype(np.uint8)
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((32,32,3)))
# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
_ = view_samples(-1, samples)
###Output
_____no_output_____ |
Notebooks/fork-of-17-90508-notebook.ipynb | ###Markdown
**It's based on tanlikesmath's starter notebook at https://www.kaggle.com/tanlikesmath/petfinder-pawpularity-eda-fastai-starter. My teammate Stefano did most of the work.** Petfinder.my - Pawpularity Contest: Simple EDA and fastai starterIn this competition, we will use machine learning to predict the "pawpularity" of a pet using images and metadata. If successful, solutions will be adapted into AI tools that will guide shelters and rescuers around the world to improve the appeal of their pet profiles, automatically enhancing photo quality and recommending composition improvements. As a result, stray dogs and cats can find families much faster, and these tools will help improve animal welfare.In this notebook, I will present a quick 'n dirty EDA and an (image-only, for now) fastai starter. **As of 10/26, it's currently the best-scoring notebook for the competition, beating bigger 10-fold ensemble models while using only a single, smaller model.**V1: Change get_data(fold) to correct K-Fold, use is_valid for validation data This is a fork of the notebook scoring 0.190508, which the original author removed from public view, a practice I disagree with. I am making this fork public so those who haven't seen it can study the work and learn the technique. A look at the dataLet's start out by setting up our environment by importing the required modules and setting a random seed:
###Code
import sys
sys.path.append('../input/timm-pytorch-image-models/pytorch-image-models-master')
from timm import create_model
from fastai.vision.all import *
set_seed(999, reproducible=True)
BATCH_SIZE = 8 #was 32
###Output
_____no_output_____
###Markdown
Let's check what data is available to us:
###Code
dataset_path = Path('../input/petfinder-pawpularity-score/')
dataset_path.ls()
###Output
_____no_output_____
###Markdown
We can see that we have our train csv file with the train image names, metadata and labels, the test csv file with test image names and metadata, the sample submission csv with the test image names, and the train and test image folders.Let's check the train csv file:
###Code
train_df = pd.read_csv(dataset_path/'train.csv')
train_df.head()
###Output
_____no_output_____
###Markdown
The metadata provided includes information about key visual quality and composition parameters of the photos. The Pawpularity Score is derived from the profile's page view statistics. This is the target we are aiming to predict. Let's do some quick processing of the image filenames to make it easier to access:
###Code
train_df['path'] = train_df['Id'].map(lambda x:str(dataset_path/'train'/x)+'.jpg')
train_df = train_df.drop(columns=['Id'])
train_df = train_df.sample(frac=1).reset_index(drop=True) #shuffle dataframe
train_df.head()
###Output
_____no_output_____
###Markdown
Okay, let's check how many images are available in the training dataset:
###Code
len_df = len(train_df)
print(f"There are {len_df} images")
###Output
_____no_output_____
###Markdown
Let's check the distribution of the Pawpularity Score:
###Code
train_df['Pawpularity'].hist(figsize = (10, 5))
print(f"The mean Pawpularity score is {train_df['Pawpularity'].mean()}")
print(f"The median Pawpularity score is {train_df['Pawpularity'].median()}")
print(f"The standard deviation of the Pawpularity score is {train_df['Pawpularity'].std()}")
print(f"There are {len(train_df['Pawpularity'].unique())} unique values of Pawpularity score")
###Output
_____no_output_____
###Markdown
Note that the Pawpularity score is an integer, so in addition to being a regression problem, it could also be treated as a 100-class classification problem. Alternatively, it can be framed as a binary cross-entropy problem (regressing a probability) if the Pawpularity Score is normalized to the 0-1 range:
###Code
train_df['norm_score'] = train_df['Pawpularity']/100
train_df['norm_score']
###Output
_____no_output_____
###Markdown
Let's check an example image to see what it looks like:
###Code
im = Image.open(train_df['path'][1])
width, height = im.size
print(width,height)
im
###Output
_____no_output_____
###Markdown
Data loadingAfter my quick 'n dirty EDA, let's load the data into fastai as DataLoaders objects. We're using the normalized score as the label. I use some fairly basic augmentations here.
###Code
if not os.path.exists('/root/.cache/torch/hub/checkpoints/'):
os.makedirs('/root/.cache/torch/hub/checkpoints/')
!cp '../input/swin-transformer/swin_large_patch4_window7_224_22kto1k.pth' '/root/.cache/torch/hub/checkpoints/swin_large_patch4_window7_224_22kto1k.pth'
seed=999
set_seed(seed, reproducible=True)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.use_deterministic_algorithms = True
#Sturges' rule
num_bins = int(np.floor(1+(3.3)*(np.log2(len(train_df)))))
# num_bins
train_df['bins'] = pd.cut(train_df['norm_score'], bins=num_bins, labels=False)
train_df['bins'].hist()
from sklearn.model_selection import KFold
from sklearn.model_selection import StratifiedKFold
train_df['fold'] = -1
N_FOLDS = 5 #was10
strat_kfold = StratifiedKFold(n_splits=N_FOLDS, random_state=seed, shuffle=True)
for i, (_, train_index) in enumerate(strat_kfold.split(train_df.index, train_df['bins'])):
train_df.iloc[train_index, -1] = i
train_df['fold'] = train_df['fold'].astype('int')
train_df.fold.value_counts().plot.bar()
train_df[train_df['fold']==0].head()
train_df[train_df['fold']==0]['bins'].value_counts()
train_df[train_df['fold']==1]['bins'].value_counts()
def petfinder_rmse(input,target):
return 100*torch.sqrt(F.mse_loss(F.sigmoid(input.flatten()), target))
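# (The model outputs a single logit per image; sigmoid() maps it to the 0-1 norm_score space used
#  as the training target, and the factor of 100 reports the RMSE in original Pawpularity points.)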
def get_data(fold):
# train_df_no_val = train_df.query(f'fold != {fold}')
# train_df_val = train_df.query(f'fold == {fold}')
# train_df_bal = pd.concat([train_df_no_val,train_df_val.sample(frac=1).reset_index(drop=True)])
train_df_f = train_df.copy()
# add is_valid for validation fold
train_df_f['is_valid'] = (train_df_f['fold'] == fold)
dls = ImageDataLoaders.from_df(train_df_f, #pass in train DataFrame
# valid_pct=0.2, #80-20 train-validation random split
valid_col='is_valid', #
seed=999, #seed
fn_col='path', #filename/path is in the second column of the DataFrame
label_col='norm_score', #label is in the first column of the DataFrame
y_block=RegressionBlock, #The type of target
bs=BATCH_SIZE, #pass in batch size
num_workers=8,
item_tfms=Resize(224), #pass in item_tfms
batch_tfms=setup_aug_tfms([Brightness(), Contrast(), Hue(), Saturation()])) #pass in batch_tfms
return dls
#Valid Kfolder size
the_data = get_data(0)
assert (len(the_data.train) + len(the_data.valid)) == (len(train_df)//BATCH_SIZE)
def get_learner(fold_num):
data = get_data(fold_num)
model = create_model('swin_large_patch4_window7_224', pretrained=True, num_classes=data.c)
learn = Learner(data, model, loss_func=BCEWithLogitsLossFlat(), metrics=petfinder_rmse).to_fp16()
return learn
test_df = pd.read_csv(dataset_path/'test.csv')
test_df.head()
test_df['Pawpularity'] = [1]*len(test_df)
test_df['path'] = test_df['Id'].map(lambda x:str(dataset_path/'test'/x)+'.jpg')
test_df = test_df.drop(columns=['Id'])
train_df['norm_score'] = train_df['Pawpularity']/100
get_learner(fold_num=0).lr_find(end_lr=3e-2) #was-2
import gc
all_preds = []
for i in range(N_FOLDS):
print(f'Fold {i} results')
learn = get_learner(fold_num=i)
learn.fit_one_cycle(5, 2e-5, cbs=[SaveModelCallback(), EarlyStoppingCallback(monitor='petfinder_rmse', comp=np.less, patience=2)])
learn.recorder.plot_loss()
#learn = learn.to_fp32()
#learn.export(f'model_fold_{i}.pkl')
#learn.save(f'model_fold_{i}.pkl')
dls = ImageDataLoaders.from_df(train_df, #pass in train DataFrame
valid_pct=0.2, #80-20 train-validation random split
seed=999, #seed
fn_col='path', #filename/path is in the second column of the DataFrame
label_col='norm_score', #label is in the first column of the DataFrame
y_block=RegressionBlock, #The type of target
bs=BATCH_SIZE, #pass in batch size
num_workers=8,
item_tfms=Resize(224), #pass in item_tfms
batch_tfms=setup_aug_tfms([Brightness(), Contrast(), Hue(), Saturation()]))
test_dl = dls.test_dl(test_df)
preds, _ = learn.tta(dl=test_dl, n=5, beta=0)
all_preds.append(preds)
del learn
torch.cuda.empty_cache()
gc.collect()
all_preds
np.mean(np.stack(all_preds)) * 100  # mean of the fold predictions, rescaled to the 0-100 Pawpularity range
sample_df = pd.read_csv(dataset_path/'sample_submission.csv')
preds = np.mean(np.stack(all_preds), axis=0)
sample_df['Pawpularity'] = preds*100
sample_df.to_csv('submission.csv',index=False)
pd.read_csv('submission.csv').head()
#path = './models'
#learn1 = load_learner('model_fold_2.pkl')
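# NOTE: loading 'model_fold_2.pkl' below assumes the learn.export(f'model_fold_{i}.pkl') call in the
# training loop above was uncommented when the folds were trained; otherwise the file will not exist.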
path = './models'
learn1 = load_learner('model_fold_2.pkl')
###Output
_____no_output_____
###Markdown
Model trainingLet's train a Swin Transformer model as a baseline. We will use the wonderful timm package by Ross Wightman to define the model. Since this competition doesn't allow internet access, I have added the pretrained weights from timm as a dataset, and the below code cell will allow timm to find the file: Let's now define the model. Let's also define the metric we will use. Note that we multiply by 100 to get a relevant RMSE for Pawpularity Score prediction, not prediction of the normalized score. In fastai, the trainer class is the `Learner`, which takes in the data, model, optimizer, loss function, etc. and allows you to train models, make predictions, etc. Let's define the `Learner` for this task, and also use mixed precision. Note that we use `BCEWithLogitsLoss` to treat this as a classification problem. We are now provided with a Learner object. In order to train a model, we need to find a good learning rate, which can be done with fastai's learning rate finder: Let's now fine-tune the model with the desired learning rate of 2e-5. We'll save the best model and use the early stopping callback. We plotted the loss, put the model back to fp32, and now we can export the model if we want to use it later (e.g. for an inference kernel): InferenceIt's very simple to perform inference with fastai. We preprocess the test CSV in the same way as the train CSV, and the `dls.test_dl` function allows you to create a test dataloader using the same pipeline we defined earlier.
###Code
# test_df = pd.read_csv(dataset_path/'test.csv')
# test_df.head()
test_df = pd.read_csv(dataset_path/'test.csv')
test_df.head()
# test_df['Pawpularity'] = [1]*len(test_df)
# test_df['path'] = test_df['Id'].map(lambda x:str(dataset_path/'test'/x)+'.jpg')
# test_df = test_df.drop(columns=['Id'])
# train_df['norm_score'] = train_df['Pawpularity']/100
test_df['Pawpularity'] = [1]*len(test_df)
test_df['path'] = test_df['Id'].map(lambda x:str(dataset_path/'test'/x)+'.jpg')
test_df = test_df.drop(columns=['Id'])
train_df['norm_score'] = train_df['Pawpularity']/100
# dls = ImageDataLoaders.from_df(train_df, #pass in train DataFrame
# valid_pct=0.2, #80-20 train-validation random split
# seed=999, #seed
# fn_col='path', #filename/path is in the second column of the DataFrame
# label_col='norm_score', #label is in the first column of the DataFrame
# y_block=RegressionBlock, #The type of target
# bs=32, #pass in batch size
# num_workers=8,
# item_tfms=Resize(224), #pass in item_tfms
# batch_tfms=setup_aug_tfms([Brightness(), Contrast(), Hue(), Saturation()]))
# test_dl = dls.test_dl(test_df)
dls = ImageDataLoaders.from_df(train_df, #pass in train DataFrame
valid_pct=0.2, #80-20 train-validation random split
seed=999, #seed
fn_col='path', #filename/path is in the second column of the DataFrame
label_col='norm_score', #label is in the first column of the DataFrame
y_block=RegressionBlock, #The type of target
bs=8, #was32, #pass in batch size
num_workers=8,
item_tfms=Resize(224), #pass in item_tfms
batch_tfms=setup_aug_tfms([Brightness(), Contrast(), Hue(), Saturation()]))
test_dl = dls.test_dl(test_df)
# test_dl.show_batch()
test_dl.show_batch()
###Output
_____no_output_____
###Markdown
We can easily confirm that the test_dl is correct (the example test images provided are just noise so this is expected): Now let's pass the dataloader to the model and get predictions. Here I am using 5x test-time augmentation which further improves model performance.
###Code
#preds, _ = learn1.tta(dl=test_dl, n=5, beta=0)
###Output
_____no_output_____
###Markdown
Let's make a submission with these predictions!
###Code
preds, _ = learn1.tta(dl=test_dl, n=5, beta=0)
# sample_df = pd.read_csv(dataset_path/'sample_submission.csv')
# sample_df['Pawpularity'] = preds.float().numpy()*100
# sample_df.to_csv('submission.csv',index=False)
sample_df = pd.read_csv(dataset_path/'sample_submission.csv')
sample_df['Pawpularity'] = preds.float().numpy()*100
sample_df.to_csv('submission.csv',index=False)
pd.read_csv('submission.csv').head()
#pd.read_csv('submission.csv').head()
###Output
_____no_output_____ |
artificial_neural_network_101_pytorch.ipynb | ###Markdown
Introduction* The **goal** of this Artificial Neural Network (ANN) 101 session is twofold: * To build an ANN model that will be able to predict the y value from the x value. * In other words, we want our ANN model to perform a regression analysis. * To observe three important KPIs when dealing with ANNs: * The size of the network (called *trainable_params* in our code) * The duration of the training step (called *training_duration:* in our code) * The efficiency of the ANN model (called *evaluated_loss* in our code) * The data used here are exceptionally simple: * X represents the interesting feature (i.e. will serve as input X for our ANN). * Here, each x sample is a one-dimension single scalar value. * Y represents the target (i.e. will serve as the expected output Y of our ANN). * Here, each y sample is also a one-dimension single scalar value.* Note that in real life: * You will never have such clean, noise-free and simple data. * You will have more samples, i.e. bigger data (better for statistically meaningful results). * You may have more dimensions in your feature and/or target (e.g. spatial data, temporal data...). * You may also have multiple features and even multiple targets. * Hence your ANN model will be more complex than the one studied here Work to be done:For Exercises A to E, the only lines of code that need to be added or modified are in the **create_model()** Python function. Exercise A* Run the whole code, Jupyter cell by Jupyter cell, without modifying any line of code.* Write down the values for: * *trainable_params:* * *training_duration:* * *evaluated_loss:* * In the last Jupyter cell, what is the relationship between the predicted x samples and y samples? Try to explain it based on the ANN model. Exercise B* Add a first hidden layer called "hidden_layer_1" containing 8 units in the model of the ANN. * Restart and execute everything again. * Write down the obtained values for: * *trainable_params:* * *training_duration:* * *evaluated_loss:* * How much better is it compared to Exercise A? * Worse? Not better? Better? Strongly better? Exercise C* Modify the hidden layer called "hidden_layer_1" so that it contains 128 units instead of 8.* Restart and execute everything again.* Write down the obtained values for: * *trainable_params:* * *training_duration:* * *evaluated_loss:* * How much better is it compared to Exercise B? * Worse? Not better? Better? Strongly better? Exercise D* Add a second hidden layer called "hidden_layer_2" containing 32 units in the model of the ANN. * Write down the obtained values for: * *trainable_params:* * *training_duration:* * *evaluated_loss:* * How much better is it compared to Exercise C? * Worse? Not better? Better? Strongly better? Exercise E* Add a third hidden layer called "hidden_layer_3" containing 4 units in the model of the ANN. * Restart and execute everything again.* Look at the graph in the last Jupyter cell. Is it better?* Write down the obtained values for: * *trainable_params:* * *training_duration:* * *evaluated_loss:* * How much better is it compared to Exercise D? * Worse? Not better? Better? Strongly better? Exercise F* If you still have time, you can also play with the number of training epochs, the number of training samples (or just exchange the training datasets with the test datasets), the type of runtime hardware (GPU or TPU), and so on... Python Code Import the tools
###Code
# library to store and manipulate neural-network input and output data
import numpy as np
# library to graphically display any data
import matplotlib.pyplot as plt
# library to manipulate neural-network models
import torch
import torch.nn as nn
import torch.optim as optim
# display the PyTorch version in use
print("Pytorch version:", torch.__version__)
# To check whether your code will use a GPU or not, uncomment the following two
# lines of code. You should either see:
# * an "XLA_GPU",
# * or better a "K80" GPU
# * or even better a "T100" GPU
if torch.cuda.is_available():
print('GPU support (%s)' % torch.cuda.get_device_name(0))
else:
print('no GPU support')
import time
# trivial "debug" function to display the duration between time_1 and time_2
def get_duration(time_1, time_2):
duration_time = time_2 - time_1
m, s = divmod(duration_time, 60)
h, m = divmod(m, 60)
s,m,h = int(round(s, 0)), int(round(m, 0)), int(round(h, 0))
duration = "duration: " + "{0:02d}:{1:02d}:{2:02d}".format(h, m, s)
return duration
###Output
_____no_output_____
###Markdown
Get the data
###Code
# DO NOT MODIFY THIS CODE
# IT HAS JUST BEEN WRITTEN TO GENERATE THE DATA
# library for generating random numbers
#import random
# secret relationship between X data and Y data
#def generate_random_output_data_correlated_from_input_data(nb_samples):
# generate nb_samples random x between 0 and 1
# X = np.array( [random.random() for i in range(nb_samples)] )
# generate nb_samples y correlated with x
# Y = np.tan(np.sin(X) + np.cos(X))
# return X, Y
#def get_new_X_Y(nb_samples, debug=False):
# X, Y = generate_random_output_data_correlated_from_input_data(nb_samples)
# if debug:
# print("generate %d X and Y samples:" % nb_samples)
# X_Y = zip(X, Y)
# for i, x_y in enumerate(X_Y):
# print("data sample %d: x=%.3f, y=%.3f" % (i, x_y[0], x_y[1]))
# return X, Y
# Number of samples for the training dataset and the test dataset
#nb_samples=50
# Get some data for training the future neural-network model
#X_train, Y_train = get_new_X_Y(nb_samples)
# Get some other data for evaluating the future neural-network model
#X_test, Y_test = get_new_X_Y(nb_samples)
# In most cases, it will be necessary to normalize X and Y data with code like:
# X_centered -= X.mean(axis=0)
# X_normalized /= X_centered.std(axis=0)
#def mstr(X):
# my_str ='['
# for x in X:
# my_str += str(float(int(x*1000)/1000)) + ','
# my_str += ']'
# return my_str
## Call get_new_X_Y to get an idea of the data it returns
#generate_data = False
#if generate_data:
# nb_samples = 50
# X_train, Y_train = get_new_X_Y(nb_samples)
# print('X_train = np.array(%s)' % mstr(X_train))
# print('Y_train = np.array(%s)' % mstr(Y_train))
# X_test, Y_test = get_new_X_Y(nb_samples)
# print('X_test = np.array(%s)' % mstr(X_test))
# print('Y_test = np.array(%s)' % mstr(Y_test))
X_train = np.array([0.765,0.838,0.329,0.277,0.45,0.833,0.44,0.634,0.351,0.784,0.589,0.816,0.352,0.591,0.04,0.38,0.816,0.732,0.32,0.597,0.908,0.146,0.691,0.75,0.568,0.866,0.705,0.027,0.607,0.793,0.864,0.057,0.877,0.164,0.729,0.291,0.324,0.745,0.158,0.098,0.113,0.794,0.452,0.765,0.983,0.001,0.474,0.773,0.155,0.875,])
Y_train = np.array([6.322,6.254,3.224,2.87,4.177,6.267,4.088,5.737,3.379,6.334,5.381,6.306,3.389,5.4,1.704,3.602,6.306,6.254,3.157,5.446,5.918,2.147,6.088,6.298,5.204,6.147,6.153,1.653,5.527,6.332,6.156,1.766,6.098,2.236,6.244,2.96,3.183,6.287,2.205,1.934,1.996,6.331,4.188,6.322,5.368,1.561,4.383,6.33,2.192,6.108,])
X_test = np.array([0.329,0.528,0.323,0.952,0.868,0.931,0.69,0.112,0.574,0.421,0.972,0.715,0.7,0.58,0.69,0.163,0.093,0.695,0.493,0.243,0.928,0.409,0.619,0.011,0.218,0.647,0.499,0.354,0.064,0.571,0.836,0.068,0.451,0.074,0.158,0.571,0.754,0.259,0.035,0.595,0.245,0.929,0.546,0.901,0.822,0.797,0.089,0.924,0.903,0.334,])
Y_test = np.array([3.221,4.858,3.176,5.617,6.141,5.769,6.081,1.995,5.259,3.932,5.458,6.193,6.129,5.305,6.081,2.228,1.912,6.106,4.547,2.665,5.791,3.829,5.619,1.598,2.518,5.826,4.603,3.405,1.794,5.23,6.26,1.81,4.18,1.832,2.208,5.234,6.306,2.759,1.684,5.432,2.673,5.781,5.019,5.965,6.295,6.329,1.894,5.816,5.951,3.258,])
print('X_train contains %d samples' % X_train.shape)
print('Y_train contains %d samples' % Y_train.shape)
print('')
print('X_test contains %d samples' % X_test.shape)
print('Y_test contains %d samples' % Y_test.shape)
# Graphically display our training data
plt.scatter(X_train, Y_train, color='green', alpha=0.5)
plt.title('Scatter plot of the training data')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# Graphically display our test data
plt.scatter(X_test, Y_test, color='blue', alpha=0.5)
plt.title('Scatter plot of the testing data')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
###Output
_____no_output_____
###Markdown
Build the artificial neural-network
###Code
# THIS IS THE ONLY CELL WHERE YOU HAVE TO ADD AND/OR MODIFY CODE
from collections import OrderedDict
def create_model():
# This returns a tensor
model = nn.Sequential(OrderedDict([
('hidden_layer_1', nn.Linear(1,128)), ('hidden_layer_1_act', nn.ReLU()),
('hidden_layer_2', nn.Linear(128,32)), ('hidden_layer_2_act', nn.ReLU()),
('hidden_layer_3', nn.Linear(32,4)), ('hidden_layer_3_act', nn.ReLU()),
('output_layer', nn.Linear(4,1))
]))
# NO COMPILATION AS IN TENSORFLOW
#model.compile(optimizer=tf.keras.optimizers.RMSprop(0.01),
# loss='mean_squared_error',
# metrics=['mean_absolute_error', 'mean_squared_error'])
return model
ann_model = create_model()
# Display a textual summary of the newly created model
# Pay attention to size (a.k.a. total parameters) of the network
print(ann_model)
print('params:', sum(p.numel() for p in ann_model.parameters()))
print('trainable_params:', sum(p.numel() for p in ann_model.parameters() if p.requires_grad))
%%html
As a reminder for understanding, the following ANN unit contains <b>m + 1</b> trainable parameters:<br>
<img src='https://www.degruyter.com/view/j/nanoph.2017.6.issue-3/nanoph-2016-0139/graphic/j_nanoph-2016-0139_fig_002.jpg' alt="perceptron" width="400" />
###Output
_____no_output_____
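###Markdown
Following the m + 1 rule above, the *trainable_params* value printed by the model summary can be hand-checked (a quick sanity check, assuming the layer sizes currently defined in `create_model()`):
###Code
# each nn.Linear(in_features, out_features) contributes in*out weights + out biases
layer_shapes = [(1, 128), (128, 32), (32, 4), (4, 1)]
print(sum(i * o + o for i, o in layer_shapes))  # 4521, matching the summary above
###Output
_____no_output_____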
###Markdown
Train the artificial neural-network model
###Code
# Object for storing training results (similar to Tensorflow object)
class Results:
history = {
'train_loss': [],
'valid_loss': []
}
# No Pytorch model.fit() function as it is the case in Tensorflow
# but we can implement it by ourselves.
# validation_split=0.2 means that 20% of the X_train samples will be used
# for a validation test and that "only" 80% will be used for training
def fit(ann_model, X, Y, verbose=False,
batch_size=1, epochs=500, validation_split=0.2):
n_samples = X.shape[0]
n_samples_test = n_samples - int(n_samples * validation_split)
X = torch.from_numpy(X).unsqueeze(1).float()
Y = torch.from_numpy(Y).unsqueeze(1).float()
X_train = X[0:n_samples_test]
Y_train = Y[0:n_samples_test]
X_valid = X[n_samples_test:]
Y_valid = Y[n_samples_test:]
loss_fn = nn.MSELoss()
optimizer = optim.RMSprop(ann_model.parameters(), lr=0.01)
results = Results()
for epoch in range(0, epochs):
Ŷ_train = ann_model(X_train)
train_loss = loss_fn(Ŷ_train, Y_train)
Ŷ_valid = ann_model(X_valid)
valid_loss = loss_fn(Ŷ_valid, Y_valid)
optimizer.zero_grad()
train_loss.backward()
optimizer.step()
results.history['train_loss'].append(float(train_loss))
results.history['valid_loss'].append(float(valid_loss))
if verbose:
if epoch % 1000 == 0:
print('epoch:%d, train_loss:%.3f, valid_loss:%.3f' \
% (epoch, float(train_loss), float(valid_loss)))
return results
# Train the model with the input data and the output_values
# validation_split=0.2 means that 20% of the X_train samples will be used
# for a validation test and that "only" 80% will be used for training
t0 = time.time()
results = fit(ann_model, X_train, Y_train, verbose=True,
batch_size=1, epochs=10000, validation_split=0.2)
t1 = time.time()
print('training_%s' % get_duration(t0, t1))
plt.plot(results.history['train_loss'], label = 'train_loss')
plt.plot(results.history['valid_loss'], label = 'validation_loss')
plt.legend()
plt.show()
# If you can write a file locally (i.e. If Google Drive available on Colab environnement)
# then, you can save your model in a file for future reuse.
# Only uncomment the following file if you can write a file
#torch.save(ann_model.state_dict(), 'ann_101.pt')
###Output
_____no_output_____
###Markdown
Evaluate the model
###Code
# No Pytorch model.evaluate() function as it is the case in Tensorflow
# but we can implement it by ourselves.
def evaluate(ann_model, X_, Y_, verbose=False):
X = torch.from_numpy(X_).unsqueeze(1).float()
Y = torch.from_numpy(Y_).unsqueeze(1).float()
Ŷ = ann_model(X)
# let's calculate the mean square error
# (could also be calculated with sklearn.metrics.mean_squared_error()
# or we could also calculate other errors like in 5% ok
mean_squared_error = torch.sum((Ŷ - Y) ** 2)/Y.shape[0]
if verbose:
print("mean_squared_error:%.3f" % mean_squared_error)
return mean_squared_error
test_loss = evaluate(ann_model, X_test, Y_test, verbose=True)
###Output
mean_squared_error:0.011
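###Markdown
For intuition, the square root of that mean squared error gives a typical prediction error in y units (a quick check that reuses the test_loss value computed above):
###Code
print("approximate RMSE on the test set: %.3f" % (float(test_loss) ** 0.5))
###Output
_____no_output_____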
###Markdown
Predict new output data
###Code
X_new_values = torch.Tensor([0., 0.2, 0.4, 0.6, 0.8, 1.0]).unsqueeze(1).float()
Y_predicted_values = ann_model(X_new_values).detach().numpy()
Y_predicted_values
# Display training data and predicted data graphically
plt.title('Training data (green color) + Predicted data (red color)')
# training data in green color
plt.scatter(X_train, Y_train, color='green', alpha=0.5)
# training data in green color
#plt.scatter(X_test, Y_test, color='blue', alpha=0.5)
# predicted data in blue color
plt.scatter(X_new_values, Y_predicted_values, color='red', alpha=0.5)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
###Output
_____no_output_____ |
Index_builder.ipynb | ###Markdown
Import packages
###Code
import sys, os, lucene, threading, time
from java.nio.file import Paths
from org.apache.lucene.analysis.miscellaneous import LimitTokenCountAnalyzer
from org.apache.lucene.analysis.standard import StandardAnalyzer
from org.apache.lucene.document import \
Document, Field, FieldType ,TextField,StringField,LatLonPoint,FloatPoint,IntPoint,StoredField
from org.apache.lucene.index import FieldInfo, IndexWriter, IndexWriterConfig ,DirectoryReader,IndexReader
from org.apache.lucene.store import SimpleFSDirectory
from org.apache.lucene.util import Version
###Output
_____no_output_____
###Markdown
Initialization and Config
###Code
lucene.initVM()
PATH = './data1/index' #Index Path
analyzer = StandardAnalyzer() # Standard analyzer
directory = SimpleFSDirectory(Paths.get(PATH))
config = IndexWriterConfig(analyzer)
config.setOpenMode(IndexWriterConfig.OpenMode.CREATE)
index_writer = IndexWriter(directory, config)
###Output
_____no_output_____
###Markdown
Build an index entry for each field
###Code
for i in range(len(rest_info)):
doc = Document()
doc.add(Field("business_id", str(rest_info['business_id'][i]),StringField.TYPE_STORED))
doc.add(Field("name", str(rest_info['name'][i]),TextField.TYPE_STORED))
doc.add(Field("address", str(rest_info['address'][i]),TextField.TYPE_STORED))
doc.add(Field("categories", str(rest_info['categories'][i]),TextField.TYPE_STORED))
doc.add(Field("attributes", str(rest_info['attributes'][i]),TextField.TYPE_STORED))
doc.add(Field("city", str(rest_info['city'][i]),TextField.TYPE_STORED))
doc.add(Field("state", str(rest_info['state'][i]),TextField.TYPE_STORED))
doc.add(Field("postal_code", str(rest_info['postal_code'][i]),TextField.TYPE_STORED))
doc.add(Field("hours", str(rest_info['hours'][i]),TextField.TYPE_STORED))
doc.add(StringField("lat",str(rest_info['latitude'][i]),Field.Store.YES))
doc.add(StringField("long",str(rest_info['longitude'][i]),Field.Store.YES))
doc.add(LatLonPoint("location",float(rest_info['latitude'][i]),
float(rest_info['longitude'][i])))
doc.add(FloatPoint("stars", float(rest_info['stars'][i]) ))
doc.add(StoredField('stars',float(rest_info['stars'][i]) ))
doc.add(IntPoint("review_count", int(rest_info['review_count'][i]) ))
doc.add(StoredField("review_count", int(rest_info['review_count'][i]) ))
doc.add(Field("review", str(rest_info['tip_text'][i]),TextField.TYPE_STORED))
index_writer.addDocument(doc)
index_writer.close()
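# Quick smoke test of the finished index (a sketch, not part of the original notebook;
# it assumes the standard Lucene search classes are available in this PyLucene build):
# from org.apache.lucene.search import IndexSearcher
# from org.apache.lucene.queryparser.classic import QueryParser
# reader = DirectoryReader.open(SimpleFSDirectory(Paths.get(PATH)))
# searcher = IndexSearcher(reader)
# hits = searcher.search(QueryParser("categories", analyzer).parse("pizza"), 5)
# for sd in hits.scoreDocs:
#     print(searcher.doc(sd.doc).get("name"))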
###Output
_____no_output_____ |
3. Series de tiempo.ipynb | ###Markdown
3. Time Series and ARIMA Quantitative Finance and Data Science Rodrigo Lugo Frias and León Berdichevsky Acosta ITAM Spring 2019 This notebook shows, end to end, how to work with time series and implement an ARIMA-based forecasting model.---_INSTRUCTIONS:_* Every cell is run with __Shift + Enter__ or __Ctrl + Enter___NOTES:_* _Notebook adapted from various sources and projects_
###Code
%matplotlib inline
# Core libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(font_scale=1.5)
import datetime as dt
#Silence all warnings
import warnings
warnings.filterwarnings('ignore')
stocks = ['data/ALSEA MM Equity.csv','data/AMXL MM Equity.csv', 'data/BIMBOA MM Equity.csv', 'data/PE&OLES MM Equity.csv']
alsea = pd.read_csv(stocks[0])
amxl = pd.read_csv(stocks[1])
bimbo = pd.read_csv(stocks[2])
penoles = pd.read_csv(stocks[3])
penoles.info()
def change_date( df ):
df.Date = df.Date.apply(lambda x : pd.to_datetime(str(x), format = "%Y%m%d"))
df.set_index(df.Date, inplace = True)
df = df.copy()[df.columns[1:]]
return df
penoles = change_date(penoles)
penoles.tail()
penoles.info()
penoles.describe()
alsea = change_date(alsea)
amxl = change_date(amxl)
bimbo = change_date(bimbo)
x = 'Last'
df = pd.concat([alsea[x],amxl[x],bimbo[x],penoles[x]],axis=1)
df.columns = ['ALSEA', 'AMXL', 'BIMBO', 'PENOLES']
df = df.copy().tail(1000)
fig, ax = plt.subplots()
ax.set_xlabel(' ')
ax.set_ylabel('Price ($ MXN)')
ax.set_title('Mexican companies stocks')
df.plot(ax = ax, figsize = (10,7))
plt.show()
# Yearly average number of shares
shares = {'2019': 172e6, '2018': 168e6, '2017': 162e6, '2016': 144e6, '2015': 128e6}
# Create a year column
df['Year'] = df.index.year
# Take Dates from index and move to Date column
df.reset_index(level=0, inplace = True)
df['MktCap_ALSEA'] = 0
df['MktCap_AMXL'] = 0
df['MktCap_BIMBO'] = 0
df['MktCap_PENOLES'] = 0
df.info()
df.tail()
# Calculate market cap for all years
for i, year in enumerate(df['Year']):
# Retrieve the shares for the year
shares_ = shares[str(year)]
# Update the cap column to shares times the price
df.loc[i, 'MktCap_ALSEA'] = (shares_ * df.loc[i, 'ALSEA'])/1e9
df.loc[i, 'MktCap_AMXL'] = (shares_ * df.loc[i, 'AMXL'])/1e9
df.loc[i, 'MktCap_BIMBO'] = (shares_ * df.loc[i, 'BIMBO'])/1e9
df.loc[i, 'MktCap_PENOLES'] = (shares_ * df.loc[i, 'PENOLES'])/1e9
df.info()
df.sample(5)
market_cap = df.copy()[['Date','MktCap_ALSEA','MktCap_AMXL','MktCap_BIMBO']]
market_cap.columns = ['Date','ALSEA', 'AMXL', 'BIMBO']
market_cap.set_index('Date',inplace=True)
market_cap.tail()
fig, ax = plt.subplots()
ax.set_xlabel(' ')
ax.set_ylabel('Market Cap ($ Bn)')
ax.set_title('Mexican companies stocks')
market_cap.plot(ax = ax, figsize = (10,7))
plt.show()
###Output
_____no_output_____
###Markdown
Under this analysis, is AMXL still an attractive company to invest in?
###Code
amxl_corp = df.copy()[['Date','AMXL','MktCap_AMXL']]
amxl_corp.set_index('Date',inplace=True)
amxl_corp.columns = ['Price','MktCap']
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
ax1.set_xlabel(' ')
ax1.set_ylabel('Price ($)')
ax2.set_ylabel('Market Cap ($ Bn)')
ax1.set_title('América Móvil (AMXL)')
amxl_corp.Price.plot(ax = ax1, figsize = (10,7), legend=False, color='r')
amxl_corp.MktCap.plot(ax = ax2, figsize = (10,7), legend=False, color='g')
plt.show()
fig, ax = plt.subplots()
ax.set_xlabel('Price ($)')
ax.set_ylabel('Prob. Density')
ax.set_title('Technology companies stocks')
amxl_corp.Price.plot.density(ax = ax, figsize = (10,7))
plt.show()
###Output
_____no_output_____
###Markdown
ARIMA
###Code
from pandas.plotting import autocorrelation_plot
amxl_sample = amxl_corp.copy().Price.head(60)
fig, ax = plt.subplots()
autocorrelation_plot(amxl_sample, ax=ax)
plt.show()
from statsmodels.tsa.arima_model import ARIMA
model = ARIMA(amxl_sample, order=(2,1,0))
model_fit = model.fit(disp=0)
print(model_fit.summary())
residuals = pd.DataFrame(model_fit.resid)
residuals.plot()
plt.show()
residuals.plot(kind='kde')
plt.show()
print(residuals.describe())
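# A possible next step (a sketch, not in the original notebook): an out-of-sample forecast
# with the fitted ARIMA(2,1,0). In statsmodels' old arima_model API, forecast() returns
# point forecasts, standard errors and confidence intervals.
fc, se, conf = model_fit.forecast(steps=5)
print(fc)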
###Output
_____no_output_____ |
docs/fintopy_prices.ipynb | ###Markdown
Fintopy *Pandas extensions for financial markets* Prices module
###Code
import sys
sys.path.append('..')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
from xbbg import blp
import fintopy
plt.style.use('seaborn-darkgrid')
###Output
_____no_output_____
###Markdown
Series accessor *Data*
###Code
s = blp.bdh('MSFT US Equity', 'PX_LAST', '2020-06-30', '2021-01-31')
s.columns = s.columns.droplevel(1)
s = s.iloc[:, 0]
s.head()
s.plot(title=s.name);
###Output
_____no_output_____
###Markdown
*Methods* set_frequency()
###Code
# Set the frequency of the series to Business Day
bdaily = s.prices.set_frequency()
bdaily.head()
bdaily.plot(title=f'{bdaily.name} - Frequency: {bdaily.index.freq}');
# Set the frequency of the series to Business Week
bweekly = s.prices.set_frequency('BW')
bweekly.head()
bweekly.plot(title=f'{bweekly.name} - Frequency: {bweekly.index.freq}');
###Output
_____no_output_____
###Markdown
rebase()
###Code
# Rebases the series to 100
rebased = s.prices.set_frequency().prices.rebase()
rebased.name = f'{s.name} rebased'
rebased.head()
pd.concat((bdaily, rebased), axis=1).plot(title=f'{s.name} original and rebased');
###Output
_____no_output_____
###Markdown
log_returns()
###Code
# Daily log returns
lrdaily = s.prices.set_frequency().prices.log_returns()
lrdaily.head()
lrdaily.plot.hist(title=f'Histogram of {lrdaily.name} log daily returns', label='Daily returns');
plt.axvline(lrdaily.mean(), color='C1', linestyle='--', label='Mean return');
plt.text(lrdaily.mean() + 0.001, 52.5, f'{lrdaily.mean():.2%}', color='C1');
plt.legend();
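# Quick consistency check (a sketch, assuming log_returns() is the natural log of price ratios):
# compounding the daily log returns should roughly reproduce the security's absolute price return.
print(f'Compounded log returns: {np.exp(lrdaily.sum()) - 1:.2%}')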
###Output
_____no_output_____
###Markdown
pct_returns()
###Code
# Weekly percent returns
prweekly = s.prices.set_frequency('BW').prices.pct_returns()
prweekly.head()
prweekly.plot.bar(title=f'{prweekly.name} bar plot of weekly percent returns', label='Percent returns');
plt.gca().yaxis.set_major_formatter(mtick.StrMethodFormatter('{x:.0%}'))
plt.gca().xaxis.set_major_formatter(mtick.FixedFormatter(prweekly.index.strftime('%b %d')))
plt.xticks(rotation=70);
plt.axhline(prweekly.mean(), color='C1', linestyle='--', label='Mean return');
plt.text(26, prweekly.mean() + 0.003, f'{prweekly.mean():.2%}', color='C1');
plt.legend();
###Output
_____no_output_____
###Markdown
abs_return()
###Code
# Absolute return
print(f'Absolute return: {s.prices.abs_return():.2%}')
s.prices.set_frequency().plot(title=f'Absolute return of {s.name}');
plt.axhline(s.iat[0], color='C1')
plt.axhline(s.iat[-1], color='C1')
plt.arrow(s.index[-12], s.iat[0], 0, s.iat[-1] - s.iat[0], color='C1', head_width=4, head_length=2, length_includes_head=True);
plt.text(s.index[-12], s.iat[-1] + 1, f'{s.prices.abs_return():.2%}', color='C1', ha='center');
###Output
_____no_output_____
###Markdown
annualized_return()
###Code
# Annualized return
print(f'Annualized return: {s.prices.annualized_return():.2%}')
idx = pd.date_range(s.index[0], periods=366)
sreal = s.prices.rebase().reindex(idx).fillna(method='bfill')
sreal.name = 'Real data'
retlin = s.prices.abs_return() / (s.index[-1] - s.index[0]).days * (sreal.index - sreal.index[0]).days
slin = pd.Series(index=idx, data=sreal.iat[0] * (1 + retlin))
slin.name = 'Yearly projection'
pd.concat((sreal, slin), axis=1).plot(title=f'Annualized return of {s.name}', style=['-', '--']);
plt.text(slin.index[-1], slin.iat[-1], f'{s.prices.annualized_return():.2%}', color='C1');
###Output
_____no_output_____
###Markdown
cagr()
###Code
# Composite annual growth return
print(f'CAGR: {s.prices.cagr():.2%}')
retlog = (1 + s.prices.abs_return()) ** ((sreal.index - sreal.index[0]).days / (s.index[-1] - s.index[0]).days) - 1
slog = pd.Series(index=idx, data=sreal.iat[0] * (1 + retlog))
slog.name = 'Yearly projection'
pd.concat((sreal, slog), axis=1).plot(title=f'CAGR of {s.name}', style=['-', '--']);
plt.text(slog.index[-1], slog.iat[-1], f'{s.prices.cagr():.2%}', color='C1');
###Output
_____no_output_____
###Markdown
drawdown() and max_drawdown()
###Code
# Calculates the drawdown
dd = s.prices.set_frequency().prices.drawdown(negative=True)
dd.head()
# Max drawdown
print(f'Max drawdown: {s.prices.max_drawdown(negative=True):.2%}')
dd.plot(title=f'Drawdown and max drawdown of {dd.name}', label='Drawdown');
ddate = dd.idxmin()
plt.gca().yaxis.set_major_formatter(mtick.StrMethodFormatter('{x:.0%}'))
plt.vlines(ddate, 0, s.prices.max_drawdown(negative=True), color='C1', linestyle='--', label='Max drawdown');
plt.text(ddate, s.prices.max_drawdown(negative=True) - 0.005, f'{s.prices.max_drawdown(negative=True):.2%}', color='C1');
plt.legend();
###Output
_____no_output_____ |
tratamento-dados/tratamento-proposicoes.ipynb | ###Markdown
Processing of the collected data on legislative proposals. The data on legislative proposals were collected manually from the static files available at [https://dadosabertos.camara.leg.br/swagger/api.htmlstaticfile](https://dadosabertos.camara.leg.br/swagger/api.htmlstaticfile). The files used in this project are available at [../dados/proposicoes](../dados/proposicoes). We found that the collected data needed cleaning before it could be used in the analyses, so this notebook describes the required pre-processing. We gather all the files into a single pandas dataframe
###Code
lista_proposicoes = glob.glob('../dados/proposicoes/propo*')
tipos_dados = {
'id': object,
'uri': object,
'siglaTipo': object,
'numero': object,
'ano': int,
'codTipo': object,
'descricaoTipo': object,
'ementa': object,
'ementaDetalhada': object,
'keywords': object,
'uriOrgaoNumerador': object,
'uriPropAnterior': object,
'uriPropPrincipal': object,
'uriPropPosterior': object,
'urlInteiroTeor': object,
'urnFinal': object,
'ultimoStatus_sequencia': object,
'ultimoStatus_uriRelator': object,
'ultimoStatus_idOrgao': object,
'ultimoStatus_siglaOrgao': object,
'ultimoStatus_uriOrgao': object,
'ultimoStatus_regime': object,
'ultimoStatus_descricaoTramitacao': object,
'ultimoStatus_idTipoTramitacao': object,
'ultimoStatus_descricaoSituacao': object,
'ultimoStatus_idSituacao': object,
'ultimoStatus_despacho': object,
'ultimoStatus_url': object
}
tipo_data = ['dataApresentacao', 'ultimoStatus_dataHora']
lista_df = []
for proposicao in lista_proposicoes:
df_proposicao = pd.read_csv(proposicao, sep=';', dtype=tipos_dados, parse_dates=tipo_data)
lista_df.append(df_proposicao)
df_proposicao_1934_2021 = pd.concat(lista_df, axis=0, ignore_index=True)
df_proposicao_1934_2021.shape
###Output
_____no_output_____
###Markdown
Selecting the data for the legislative proposal types of interest. We keep only the proposals of the following types (official Portuguese names kept):- Projeto de Decreto Legislativo [SF] (PDL)- Projeto de Decreto Legislativo [CD] (PDC)- Projeto de Decreto Legislativo [CN] (PDN)- Projeto de Decreto Legislativo [SF] (PDS)- Proposta de Emenda à Constituição (PEC)- Projeto de Lei (PL)- Projeto de Lei da Câmara (PLC)- Projeto de Lei Complementar (PLP)- Projeto de Lei de Conversão (PLV)- Projeto de Resolução da Câmara dos Deputados (PRC)
###Code
tipos_proposicoes = ['PDS', 'PDC', 'PDN', 'PEC', 'PL', 'PLC', 'PLP', 'PLV', 'PRC']
df_proposicoes_tipos_desejados = df_proposicao_1934_2021[df_proposicao_1934_2021['siglaTipo'].isin(tipos_proposicoes)].copy()
df_proposicoes_tipos_desejados.shape
###Output
_____no_output_____
###Markdown
Selecting the attributes of interest for the analysis
###Code
df_proposicoes = df_proposicoes_tipos_desejados[['id','siglaTipo','ano', 'codTipo', 'descricaoTipo',
'ementa', 'ementaDetalhada', 'keywords']].copy()
df_proposicoes.shape
###Output
_____no_output_____
###Markdown
Handling missing values
###Code
df_proposicoes.isnull().sum(axis = 0)
df_proposicoes[
(df_proposicoes['ementa'].isnull()) &
(df_proposicoes['ementaDetalhada'].isnull()) &
(df_proposicoes['keywords'].isnull())].head()
df_proposicoes[(df_proposicoes['ementa'].isnull())].head()
df_proposicoes.dropna(axis=0, subset=['ementa'], inplace=True)
df_proposicoes.shape
###Output
_____no_output_____
###Markdown
Clean the "keywords" column. Identify the legislative proposals that already have "keywords"
###Code
df_proposicoes_com_keywords = df_proposicoes[df_proposicoes['keywords'].notna()].copy()
df_proposicoes[df_proposicoes['keywords'].notna()]
###Output
_____no_output_____
###Markdown
Download the NLTK packages for stopwords and punctuation
###Code
nltk.download('punkt')
nltk.download('stopwords')
###Output
[nltk_data] Downloading package punkt to /home/cecivieira/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to
[nltk_data] /home/cecivieira/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
###Markdown
Remove punctuation, prepositions and articles (stopwords)
###Code
meses = ['janeiro', 'fevereiro', 'março', 'abril', 'maio', 'junho', 'julho','agosto', 'setembro', 'outubro', 'novembro', 'dezembro']
def define_stopwords_punctuation():
stopwords = nltk.corpus.stopwords.words('portuguese') + meses
pontuacao = list(punctuation)
stopwords.extend(pontuacao)
return stopwords
###Output
_____no_output_____
###Markdown
Add to `keywords` every word that is neither a stopword nor a number
###Code
def remove_stopwords_punctuation_da_sentenca(texto):
padrao_digitos = r'[0-9]'
texto = re.sub(padrao_digitos, '', texto)
palavras = nltk.tokenize.word_tokenize(texto.lower())
stopwords = define_stopwords_punctuation()
keywords = [palavra for palavra in palavras if palavra not in stopwords]
return keywords
df_proposicoes_com_keywords['keywords'] = df_proposicoes_com_keywords['keywords'].apply(remove_stopwords_punctuation_da_sentenca)
###Output
_____no_output_____
###Markdown
Convert list to string
###Code
def converte_lista_string(lista):
return ','.join([palavra for palavra in lista])
df_proposicoes_com_keywords['keywords'] = df_proposicoes_com_keywords['keywords'].apply(converte_lista_string)
###Output
_____no_output_____
###Markdown
Drop from the dataframe the proposals whose `keywords` became empty after cleaning
###Code
df_proposicoes_com_keywords = df_proposicoes_com_keywords[df_proposicoes_com_keywords['keywords'] != '']
df_proposicoes_com_keywords.head()
###Output
_____no_output_____
###Markdown
Extracting keywords from the ementas (summaries) when needed. We found that some legislative proposals had no keywords at all in the collected data, so we extract them from the `ementa` field. Identify the legislative proposals with an empty "keywords" field
###Code
df_proposicoes_sem_keywords = df_proposicoes[df_proposicoes['keywords'].isna()].copy()
###Output
_____no_output_____
###Markdown
Removal of punctuation, prepositions and articles (stopwords)
###Code
df_proposicoes_sem_keywords['keywords'] = df_proposicoes_sem_keywords['ementa'].apply(remove_stopwords_punctuation_da_sentenca)
###Output
_____no_output_____
###Markdown
Identify semantically irrelevant characters and abbreviations still present in the "keywords" column
###Code
lista_keywords = []
lista_keywords_temp = df_proposicoes_sem_keywords['keywords'].tolist()
_ = [lista_keywords.extend(item) for item in lista_keywords_temp]
palavras_para_descarte = [item for item in set(lista_keywords) if len(item) <= 3]
###Output
_____no_output_____
###Markdown
Remove the meaningful nouns from the list of semantically irrelevant characters and abbreviations
###Code
substantivos_nao_descartaveis = ['cão', 'mãe', 'oab', 'boa', 'pré', 'voz', 'rui', 'uva', 'gás', 'glp', 'apa']
###Output
_____no_output_____
###Markdown
Remove the list of semantically irrelevant characters and abbreviations from the "keywords" column
###Code
palavras_para_descarte_refinada = [palavra for palavra in palavras_para_descarte if palavra not in substantivos_nao_descartaveis]
def remove_palavras_para_descarte_da_sentenca(texto):
keywords = []
for palavra in texto:
if palavra not in palavras_para_descarte_refinada:
keywords.append(palavra)
return keywords
df_proposicoes_sem_keywords['keywords'] = df_proposicoes_sem_keywords['keywords'].apply(remove_palavras_para_descarte_da_sentenca)
###Output
_____no_output_____
###Markdown
Identify, in the "keywords" column, words with no semantic relevance, for example: "altera", "dispõe" and "sobre".
###Code
def gera_n_grams(texto, ngram=2):
temporario = zip(*[texto[indice:] for indice in range(0,ngram)])
resultado = [' '.join(ngram) for ngram in temporario]
return resultado
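# Quick illustration of gera_n_grams (hypothetical tokens, not taken from the dataset):
# gera_n_grams(['cria', 'programa', 'nacional', 'habitacao'])
# -> ['cria programa', 'programa nacional', 'nacional habitacao']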
df_proposicoes_sem_keywords['bigrams'] = df_proposicoes_sem_keywords['keywords'].apply(gera_n_grams)
lista_ngrams = []
lista_ngrams_temp = df_proposicoes_sem_keywords['bigrams'].tolist()
_ = [lista_ngrams.extend(item) for item in lista_ngrams_temp]
bigrams_comuns = nltk.FreqDist(lista_ngrams).most_common(50)
lista_bigramas_comuns = [bigrama for bigrama, frequencia in bigrams_comuns]
###Output
_____no_output_____
###Markdown
The 50 most frequent bigrams were reviewed and the semantically irrelevant ones were identified for the creation of `keywords`
###Code
lista_bigramas_comuns_limpa = ['dispõe sobre', 'outras providências', 'nova redação', 'poder executivo', 'distrito federal',
'autoriza poder', 'federal outras','redação constituição', 'dispõe sôbre', 'código penal', 'artigo constituição',
'disposições constitucionais', 'altera dispõe', 'decreto-lei código', 'constitucionais transitórias', 'altera redação',
'abre ministério', 'executivo abrir', 'redação artigo', 'sobre criação', 'acrescenta parágrafo', 'parágrafo único',
'concede isenção', 'altera dispositivos', 'altera complementar', 'dispondo sobre', 'código processo', 'outras providências.',
'providências. historico', 'ministério fazenda', 'altera leis', 'programa nacional', 'quadro permanente', 'outras providencias',
'inciso constituição', 'abrir ministério', 'estabelece normas', 'ministério justiça', 'tempo serviço', 'instituto nacional',
'institui sistema', 'operações crédito', 'altera institui', 'dispõe sôbre']
palavras_para_descarte_origem_bigramas = []
_ = [palavras_para_descarte_origem_bigramas.extend(bigrama.split(' ')) for bigrama in lista_bigramas_comuns_limpa]
palavras_para_descarte_origem_bigramas_unicas = set(palavras_para_descarte_origem_bigramas)
###Output
_____no_output_____
###Markdown
Remove the irrelevant words that came from the bigrams
###Code
def remove_palavras_origem_bigramas_da_sentenca(texto):
keywords = []
for palavra in texto:
if palavra not in palavras_para_descarte_origem_bigramas_unicas:
keywords.append(palavra)
return keywords
df_proposicoes_sem_keywords['keywords'] = df_proposicoes_sem_keywords['keywords'].apply(remove_palavras_origem_bigramas_da_sentenca)
###Output
_____no_output_____
###Markdown
Convert list to string
###Code
df_proposicoes_sem_keywords['keywords'] = df_proposicoes_sem_keywords['keywords'].apply(converte_lista_string)
###Output
_____no_output_____
###Markdown
Drop the "bigrams" column
###Code
df_proposicoes_sem_keywords = df_proposicoes_sem_keywords.drop(columns=['bigrams'])
###Output
_____no_output_____
###Markdown
Remove the proposals whose "keywords" field became empty after cleaning the ementa and extracting keywords
###Code
df_proposicoes_sem_keywords = df_proposicoes_sem_keywords[df_proposicoes_sem_keywords['keywords'] != '']
df_proposicoes_sem_keywords[df_proposicoes_sem_keywords['keywords']== '']
###Output
_____no_output_____
###Markdown
Gather the data into a single dataframe
###Code
df_proposicoes_v_final = pd.concat([df_proposicoes_com_keywords, df_proposicoes_sem_keywords])
df_proposicoes_v_final.shape
df_proposicoes_v_final.info()
df_proposicoes_v_final.to_csv('../dados/proposicoes_legislativas_limpas.csv', index=False)
###Output
_____no_output_____ |
HomeWork/Day_056_HW.ipynb | ###Markdown
K-Means observation: using silhouette analysis [Assignment goal]- Following the example code, generate 5 random Gaussian clusters and use silhouette analysis to compare K-Means clustering for different values of K [Key points]- Use the silhouette plots together with the actual cluster scatter plots to observe how the K-Means clustering changes as K varies (In[3], Out[3]) Assignment* Simulate 5 Gaussian clusters of data and use them to inspect the K-Means and silhouette-analysis results
###Code
# Load the required packages
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn import datasets
from sklearn.metrics import silhouette_samples, silhouette_score
np.random.seed(5)
%matplotlib inline
# Generate 5 clusters of data
X, y = make_blobs(n_samples=500,
n_features=2,
centers=5,
cluster_std=1,
center_box=(-10.0, 10.0),
shuffle=True,
random_state=123)
# Define the set of K values to evaluate
range_n_clusters = [2, 3, 4, 5, 6, 7, 8]
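# (Optional sketch, not part of the original homework): to pick K programmatically,
# collect the averages produced inside the loop below, e.g.
#   avg_scores = {}                                  # before the loop
#   avg_scores[n_clusters] = silhouette_avg          # inside the loop
#   best_k = max(avg_scores, key=avg_scores.get)     # after the loop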
# Compute and plot the silhouette analysis results
# (the block below is written as a single loop, so it cannot be split into smaller cells)
for n_clusters in range_n_clusters:
# Arrange the subplots as 1 row x 2 columns
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.set_size_inches(18, 7)
# The left plot is the silhouette analysis; the coefficient lies in (-1, 1), but all values in this example are positive, so we limit the displayed range to (-0.1, 1)
ax1.set_xlim([-0.1, 1])
# The (n_clusters+1)*10 term inserts blank space between the silhouette plots of the individual clusters to make the figure easier to read
ax1.set_ylim([0, len(X) + (n_clusters + 1) * 10])
# Instantiate the KMeans clusterer, then fit and predict on X
clusterer = KMeans(n_clusters=n_clusters, random_state=10)
cluster_labels = clusterer.fit_predict(X)
# Compute the average silhouette_score over all points
silhouette_avg = silhouette_score(X, cluster_labels)
print("For n_clusters =", n_clusters,
"The average silhouette_score is :", silhouette_avg)
# Compute the silhouette score of every individual sample
sample_silhouette_values = silhouette_samples(X, cluster_labels)
y_lower = 10
for i in range(n_clusters):
# Collect the silhouette scores of the samples in cluster i and sort them
ith_cluster_silhouette_values = \
sample_silhouette_values[cluster_labels == i]
ith_cluster_silhouette_values.sort()
size_cluster_i = ith_cluster_silhouette_values.shape[0]
y_upper = y_lower + size_cluster_i
color = cm.nipy_spectral(float(i) / n_clusters)
ax1.fill_betweenx(np.arange(y_lower, y_upper),
0, ith_cluster_silhouette_values,
facecolor=color, edgecolor=color, alpha=0.7)
# Label the middle of each cluster band with its index i
ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
# Compute the y_lower position for the next cluster
y_lower = y_upper + 10
ax1.set_title("The silhouette plot for the various clusters.")
ax1.set_xlabel("The silhouette coefficient values")
ax1.set_ylabel("Cluster label")
# Draw a vertical line at the average silhouette score
ax1.axvline(x=silhouette_avg, color="red", linestyle="--")
ax1.set_yticks([]) # clear the y-axis ticks
ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
# The right plot shows the cluster assignment of every sample point, another way to judge whether the clustering is appropriate
colors = cm.nipy_spectral(cluster_labels.astype(float) / n_clusters)
ax2.scatter(X[:, 0], X[:, 1], marker='.', s=30, lw=0, alpha=0.7,
c=colors, edgecolor='k')
# Draw a circle at each cluster centre in the right plot and annotate it with the cluster index
centers = clusterer.cluster_centers_
ax2.scatter(centers[:, 0], centers[:, 1], marker='o',
c="white", alpha=1, s=200, edgecolor='k')
for i, c in enumerate(centers):
ax2.scatter(c[0], c[1], marker='$%d$' % i, alpha=1,
s=50, edgecolor='k')
ax2.set_title("The visualization of the clustered data.")
ax2.set_xlabel("Feature space for the 1st feature")
ax2.set_ylabel("Feature space for the 2nd feature")
plt.suptitle(("Silhouette analysis for KMeans clustering on sample data "
"with n_clusters = %d" % n_clusters),
fontsize=14, fontweight='bold')
plt.show()
###Output
For n_clusters = 2 The average silhouette_score is : 0.5027144446956527
For n_clusters = 3 The average silhouette_score is : 0.6105565451092732
For n_clusters = 4 The average silhouette_score is : 0.6270122040179333
For n_clusters = 5 The average silhouette_score is : 0.6115749260799671
For n_clusters = 6 The average silhouette_score is : 0.5499388428924794
For n_clusters = 7 The average silhouette_score is : 0.4695416652197068
For n_clusters = 8 The average silhouette_score is : 0.4231800504179843
|
Lecture 11/Reading_CensusShapefiles_wKey.ipynb | ###Markdown
Reading Shapefiles from a URL into GeoPandas. Shapefiles are probably the most commonly used vector geospatial data format. However, because a single Shapefile consists of multiple files (at least 3 and up to 15), they are often transferred as a single zip file. In this post I demonstrate how to read a zipped shapefile from a server into a GeoPandas GeoDataFrame (with coordinate reference system information). We read geospatial data from the web: U.S. county geographic boundaries from the [Census FTP site](http://www2.census.gov). Adapted from: http://andrewgaidus.com/Reading_Zipped_Shapefiles/
###Code
from zipfile import ZipFile
import geopandas as gpd
from shapely.geometry import shape
import osr
import pandas as pd
import requests
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
! pip install pyshp
import shapefile
import sys
print("The Python version is %s.%s.%s" % sys.version_info[:3])
###Output
The Python version is 3.7.7
###Markdown
Use conda install -c conda-forge gdal. If we were simply reading a Shapefile from disk with ```PyShp``` we would call ```r = shapefile.Reader("myshp.shp")```. Here, however, the Shapefile lives inside a zip archive on a remote server, so we first download the archive with ```requests```, wrap the response bytes in an ```io.BytesIO``` object so Python can treat them as an in-memory file, open it with ```zipfile.ZipFile```, extract the component files, and finally read the ```.shp``` with GeoPandas. This is done in the code below:
###Code
import geopandas as gpd
import requests
import zipfile
import io
import matplotlib.pyplot as plt
%matplotlib inline
print(gpd.__version__)
url = 'http://www2.census.gov/geo/tiger/GENZ2015/shp/cb_2015_us_county_500k.zip'
local_path = 'tmp/'
print('Downloading shapefile...')
r = requests.get(url)
z = zipfile.ZipFile(io.BytesIO(r.content))
print("Done")
z.extractall(path=local_path) # extract to folder
filenames = [y for y in sorted(z.namelist()) for ending in ['dbf', 'prj', 'shp', 'shx'] if y.endswith(ending)]
print(filenames)
dbf, prj, shp, shx = [filename for filename in filenames]
usa = gpd.read_file(local_path + shp)
print("Shape of the dataframe: {}".format(usa.shape))
print("Projection of dataframe: {}".format(usa.crs))
usa.tail() #last 5 records in dataframe
###Output
Shape of the dataframe: (3233, 10)
Projection of dataframe: epsg:4269
###Markdown
Now this zipfile object can be treated like any other zipfile read from disk. Above we identified the filenames of the 4 necessary components of the Shapefile. Note that the ```prj``` file is not strictly necessary, but it contains the coordinate reference system information, which is really nice to have, and will be used below.
###Code
print(len(usa))
nj = usa[usa.STATEFP=='44']  # note: FIPS code '44' is Rhode Island, so despite the variable name this selects Rhode Island counties
ax = nj.plot(color = 'green', figsize=(10,10),linewidth=2)
ax.set(xticks=[], yticks=[])
plt.savefig("NJ_Counties.png", bbox_inches='tight')
print(nj.head())
###Output
STATEFP COUNTYFP COUNTYNS AFFGEOID GEOID NAME LSAD \
645 44 009 01219782 0500000US44009 44009 Washington 06
724 44 005 01219779 0500000US44005 44005 Newport 06
871 44 007 01219781 0500000US44007 44007 Providence 06
1514 44 001 01219777 0500000US44001 44001 Bristol 06
2106 44 003 01219778 0500000US44003 44003 Kent 06
ALAND AWATER geometry
645 852834829 604715929 MULTIPOLYGON (((-71.61313 41.16028, -71.61053 ...
724 265220482 546983257 MULTIPOLYGON (((-71.28802 41.64558, -71.28647 ...
871 1060637931 67704263 POLYGON ((-71.79924 42.00806, -71.76601 42.009...
1514 62550804 53350670 POLYGON ((-71.35390 41.75130, -71.34718 41.756...
2106 436515567 50720026 POLYGON ((-71.78967 41.72457, -71.75435 41.725...
###Markdown
Modify here to extract census tracts of CA
###Code
url = 'http://www2.census.gov/geo/tiger/GENZ2015/shp/cb_2015_06_tract_500k.zip'
local_path = 'tmp/'
print('Downloading shapefile...')
r = requests.get(url)
z = zipfile.ZipFile(io.BytesIO(r.content))
print("Done")
z.extractall(path=local_path) # extract to folder
filenames = [y for y in sorted(z.namelist()) for ending in ['dbf', 'prj', 'shp', 'shx'] if y.endswith(ending)]
print(filenames)
dbf, prj, shp, shx = [filename for filename in filenames]
usa = gpd.read_file(local_path + shp)
print("Shape of the dataframe: {}".format(usa.shape))
print("Projection of dataframe: {}".format(usa.crs))
usa.tail() #last 5 records in dataframe
len(usa)
alameda = usa[usa.COUNTYFP=='001']
ax = alameda.plot(color = 'green', figsize=(10,10),linewidth=2)
ax.set(xticks=[], yticks=[])
plt.savefig("ALAMEDA_tracts.png", bbox_inches='tight')
ax = alameda.plot(figsize=(10,10), column='ALAND', cmap="tab20b", scheme='quantiles', legend=True)
ax.set(xticks=[], yticks=[]) #removes axes
ax.set_title("Alameda tracts by Land Area", fontsize='large')
#add the legend and specify its location
leg = ax.get_legend()
leg.set_bbox_to_anchor((1.0,0.3))
plt.savefig("Alameda_tracts.png", bbox_inches='tight')
alameda_tract_geo = alameda[alameda.COUNTYFP=='001']
#alameda_tract_geo = alameda.set_index("GEOID")['geometry'].to_crs(epsg=3310)
alameda_tract_geo.plot()
alameda_tract_geo.head()
#current coordinate dataframe
print(alameda_tract_geo.crs)
alameda_tract_geo['geometry'].head()
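# Possible follow-up (sketch): reproject to an equal-area CRS such as California Albers
# (EPSG:3310) before computing areas in metres, e.g.
# alameda_albers = alameda_tract_geo.to_crs(epsg=3310)
# alameda_albers['area_km2'] = alameda_albers.geometry.area / 1e6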
###Output
_____no_output_____
###Markdown
Create an empty column alameda_tract_geo['income'] = None
###Code
alameda_tract_geo['income'] = None
###Output
_____no_output_____
###Markdown
Get a census API KEY here: https://api.census.gov/data/key_signup.htmlTutorial in Geopandas dataframes here https://github.com/Automating-GIS-processes/Lesson-2-Geo-DataFrames/blob/master/Lesson/pandas-geopandas.md Here we work with the Census Tracts of Alameda County
###Code
from census import Census
from us import states
import csv
MY_API_KEY=''
c = Census(MY_API_KEY)
c.acs5.get(('NAME', 'B19013_001E'),
{'for': 'state:{}'.format(states.CA.fips)})
request = c.acs5.state_county_tract('B19013_001E', '06', '001', Census.ALL)
# Note: state_county_tract() already returns a list of dicts, so no extra JSON parsing is needed
f = csv.writer(open("income.csv", "w"))
alameda_tract_geo['income'] = None
for index, row in alameda_tract_geo.iterrows():
# Update the value in 'area' column with area information at index
poly_area = row['geometry'].area
# Print information for the user
temp2=row['GEOID']
#print("Polygon area at index {0} is: {1:.6f}".format(index, poly_area),'',temp2)
for line in request:
temp=line["state"]+line["county"]+line["tract"];
if(temp==temp2):
print(temp,' ',temp2,' ',line["B19013_001E"]);
if line["B19013_001E"] > 0:
alameda_tract_geo.loc[index, 'income']=line["B19013_001E"]
else:
alameda_tract_geo.loc[index, 'income']=0.0
alameda_tract_geo.tail()
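# A vectorised alternative to the loop above (sketch, assumes `request` is the same list of dicts):
# income_df = pd.DataFrame(request)
# income_df['GEOID'] = income_df['state'] + income_df['county'] + income_df['tract']
# alameda_income = alameda_tract_geo.merge(income_df[['GEOID', 'B19013_001E']], on='GEOID', how='left')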
ax = alameda_tract_geo.plot(figsize=(10,10), column='income', cmap='Purples', scheme='quantiles', legend=True)
ax.set(xticks=[], yticks=[]) #removes axes
ax.set_title("Alameda tracts by Land Area", fontsize='large')
#add the legend and specify its location
leg = ax.get_legend()
leg.set_bbox_to_anchor((1.0,0.3))
plt.savefig("Alameda_tracts.png", bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Testing GeoDataFrames
###Code
alameda_tract_geo.geometry.name
alameda_tract_geo['geometry'].head()
alameda_gdf = gpd.GeoDataFrame(geometry = alameda_tract_geo['geometry'], data = alameda_tract_geo['income'])
alameda_tract_geo['income'].min()
fig, ax = plt.subplots(figsize=(10,10))
ax.set(aspect='equal', xticks=[], yticks=[])
#alameda_gdf.plot(column= 'income', ax = ax)
alameda_gdf.plot(column= 'income', ax = ax, scheme='QUANTILES', cmap='Purples', legend=True)
plt.title('Alameda County, CA - Median Household Income by Census Tract', size = 14)
###Output
_____no_output_____ |
Webscraping/guru99/SeleniumExceptions_Webscraping.ipynb | ###Markdown
3.Scrape the details of selenium exception from guru99.com.Url = https://www.guru99.com/You need to find following details:A) NameB) DescriptionNote: - From guru99 home page you have to reach to selenium exception handling page through code.
###Code
#Connect to web driver
driver=webdriver.Chrome(r"D://chromedriver.exe") #r converts string to raw string
#If not r, we can use executable_path = "C:/path name"
#Getting the website to driver
driver.get('https://www.guru99.com/')
#When we run this line, automatically the webpage will be opened
#Clicking on the selenium tutorial page
driver.find_element_by_xpath("//div[@class='srch']/span[8]/a").click()
#Clicking on the selenium exception tutorial page
driver.find_element_by_xpath("//table[@class='table']/tbody/tr[34]/td/a").click()
#Creating the empty lists to store the scraped data
Name=[]
Description=[]
#Scrapping the data having exception names
name=driver.find_elements_by_xpath("//table[@class='table table-striped']/tbody/tr/td[1]")
for i in name:
Name.append(i.text)
#Scrapping the data having the description details
desc=driver.find_elements_by_xpath("//table[@class='table table-striped']/tbody/tr/td[2]")
for i in desc:
Description.append(i.text)
#Checking the length of the scraped data
print(len(Name),len(Description))
#Creating a dataframe for the scraped data
guru99=pd.DataFrame({})
guru99['Exception Name']=Name[1:]
guru99['Description']=Description[1:]
guru99
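#Optionally persist the scraped table to disk (illustrative file name, not part of the assignment)
#guru99.to_csv('selenium_exceptions.csv', index=False)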
#Closing the driver
driver.close()
###Output
_____no_output_____ |
EDA_video3_screencast.ipynb | ###Markdown
**IMPORTANT:** You will not be able to run this notebook at coursera platform, as the dataset is not there. The notebook is in read-only mode.But you can run the notebook locally and download the dataset using [this link](https://habrastorage.org/storage/stuff/special/beeline/00.beeline_bigdata.zip) to explore the data interactively.
###Code
pd.set_option('max_columns', 100)
###Output
_____no_output_____
###Markdown
Load the data
###Code
train = pd.read_csv('./train.csv')
train.head()
###Output
_____no_output_____
###Markdown
Build a quick baseline
###Code
from sklearn.ensemble import RandomForestClassifier
# Create a copy to work with
X = train.copy()
# Save and drop labels
y = train.y
X = X.drop('y', axis=1)
# fill NANs
X = X.fillna(-999)
# Label encoder
for c in train.columns[train.dtypes == 'object']:
X[c] = X[c].factorize()[0]
rf = RandomForestClassifier()
rf.fit(X,y)
plt.plot(rf.feature_importances_)
plt.xticks(np.arange(X.shape[1]), X.columns.tolist(), rotation=90);
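# A named view of the importances can make the spike easier to spot (sketch):
# pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False).head()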
###Output
/home/dulyanov/miniconda2/lib/python2.7/site-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family [u'serif'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
There is something interesting about `x8`.
###Code
# we see it was standard scaled, most likely; if we concat train and test, we will get exactly mean=0 and std=1
print 'Mean:', train.x8.mean()
print 'std:', train.x8.std()
# And we see that it has a lot of repeated values
train.x8.value_counts().head(15)
# It's very hard to work with scaled feature, so let's try to scale them back
# Let's first take a look at difference between neighbouring values in x8
x8_unique = train.x8.unique()
x8_unique_sorted = np.sort(x8_unique)
np.diff(x8_unique_sorted)
# The most of the diffs are 0.04332159!
# The data is scaled, so we don't know what was the diff value for the original feature
# But let's assume it was 1.0
# Let's devide all the numbers by 0.04332159 to get the right scaling
# note, that feature will still have zero mean
np.diff(x8_unique_sorted/0.04332159)
(train.x8/0.04332159).head(10)
# Ok, now we see .102468 in every value
# this looks like a part of a mean that was subtracted during standard scaling
# If we subtract it, the values become almost integers
(train.x8/0.04332159 - .102468).head(10)
# let's round them
x8_int = (train.x8/0.04332159 - .102468).round()
x8_int.head(10)
# Ok, what's next? In fact it is not obvious how to find shift parameter,
# and how to understand what the data this feature actually store
# But ...
x8_int.value_counts()
# do you see this -1968? Doesn't it look like a year? ... So my hypothesis is that this feature is a year of birth!
# Maybe it was a textbox where users enter their year of birth, and someone entered 0000 instead
# The hypothesis looks plausible, isn't it?
(x8_int + 1968.0).value_counts().sort_index()
# After the competition ended the organisers told it was really a year of birth
###Output
_____no_output_____ |
In_Db2_Machine_Learning/Building ML Models with Db2/Notebooks/Clustering_Demo.ipynb | ###Markdown
K-Means Clustering with IBM DB2 Imports
###Code
# Database connectivity
import ibm_db
import ibm_db_dbi
# Pandas for loading values into memory for later visualization
import pandas as pd
from IPython.display import display
import numpy as np
# import scipy.stats as ss
from itertools import combinations_with_replacement, combinations
# Visualization
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(rc={'figure.figsize':(12,8)})
%config InlineBackend.figure_format = 'retina'
# Import custom functions
from InDBMLModules import connect_to_schema, plot_cdf_from_runstats_quartiles, create_correlation_matrix, \
drop_object, plot_histogram, connect_to_db, close_connection_to_db
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
3. Connect to DB
###Code
# Connect to DB
conn_str = "DATABASE=in_db;" + \
"HOSTNAME=*****;"+ \
"PROTOCOL=TCPIP;" + \
"PORT=50000;" + \
"UID=****;" + \
"PWD=*********;"
ibm_db_conn = ibm_db.connect(conn_str,"","")
conn = ibm_db_dbi.Connection(ibm_db_conn)
print('Connection to Db2 Instance Created!')
rc = ibm_db.close(ibm_db_conn)
###Output
Connection to Db2 Instance Created!
###Markdown
Create a schema for this demo
###Code
ibm_db_conn = ibm_db.connect(conn_str, "", "")
ibm_db_dbi_conn = ibm_db_dbi.Connection(ibm_db_conn)
schema = "CLUSTER"
drop_object("CLUSTER", "SCHEMA", ibm_db_conn, verbose = True)
sql ="create schema CLUSTER authorization MLP"
stmt = ibm_db.exec_immediate(ibm_db_conn, sql)
print("Schema CLUSTER was created.")
rc = close_connection_to_db(ibm_db_conn, verbose=False)
###Output
Pre-existing TABLE CLUSTER.TPCDS_KM_2_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_2_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_COL_PROP was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_2_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_2_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_2_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_3_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_3_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_3_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_3_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_3_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_4_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_4_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_4_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_4_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_4_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_5_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_5_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_5_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_5_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_5_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_6_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_6_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_6_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_6_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_6_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_7_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_7_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_7_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_7_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_7_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_8_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_8_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_8_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_8_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_8_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_9_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_9_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_9_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_9_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_9_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_10_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_10_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_10_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_10_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_10_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_COL_PROP2 was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_4_PREDICT was dropped.
Pre-existing VIEW CLUSTER.TPCDS_VIEW was dropped.
Pre-existing SCHEMA CLUSTER was dropped.
Schema CLUSTER was created.
###Markdown
4. Data Exploration Let's use the IDAX.COLUMN_PROPERTIES stored procedure to collect statistics
###Code
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
drop_object("TPCDS_COL_PROP", "TABLE", ibm_db_conn, verbose = True)
sql = "CALL IDAX.COLUMN_PROPERTIES('intable=DATA.TPCDS_50K, outtable=TPCDS_COL_PROP, withstatistics=true,"
sql+= "incolumn=CLIENT_ID:id;NUM_DEPENDANTS:nom;NUM_DEPENDANTS_EMPLOYED:nom;NUM_DEPENDANTS_COUNT:nom;HOUSEHOLD_INCOME:nom ')"
stmt = ibm_db.exec_immediate(ibm_db_conn, sql)
print("TABLE TPCDS_COL_PROP was created.")
rc = close_connection_to_db(ibm_db_conn, verbose=False)
# Identify Columns with missing values
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql = "SELECT * FROM TPCDS_COL_PROP"
col_prop = pd.read_sql(sql,conn)
rc = ibm_db.close(ibm_db_conn)
col_prop.sort_values('COLNO')
###Output
_____no_output_____
###Markdown
**Observation**:- FIRST_NAME and LAST_NAME can be dropped - they have too many unique values (high cardinality)
###Code
# Identify Columns with missing values
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql = "SELECT COLNO, NAME, TYPE,NUMMISSING,NUMMISSING+NUMINVALID+NUMVALID as NUMBER_OF_VALUES, "
sql+= "ROUND(dec(NUMMISSING,10,2)/(dec(NUMMISSING, 10,2)+dec(NUMINVALID, 10,2)+dec(NUMVALID, 10,2))*100,2) as PERCENT_NULL "
sql+= "from TPCDS_COL_PROP where NUMMISSING > 0 order by PERCENT_NULL DESC"
missing_vals = pd.read_sql(sql,conn)
rc = ibm_db.close(ibm_db_conn)
missing_vals
###Output
_____no_output_____
###Markdown
**Observation**: missing values in AGE should be imputed. 5. Data Transformation. Select all columns except FIRST_NAME and LAST_NAME into a view; these features are not relevant as they contain too many unique values (i.e. high cardinality) to be useful to us.
###Code
# Create a view without the FIRST_NAME and LAST_NAME columns
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql= "CREATE VIEW TPCDS_VIEW AS SELECT CLIENT_ID, AGE, GENDER, MARITAL_STATUS, EDUCATION, PURCHASE_ESTIMATE,"
sql +="CREDIT_RATING, NUM_DEPENDANTS, NUM_DEPENDANTS_EMPLOYED, NUM_DEPENDANTS_COUNT, HOUSEHOLD_INCOME, "
sql+="HOUSEHOLD_BUY_POTENTIAL FROM DATA.TPCDS_50K"
stmt = ibm_db.exec_immediate(ibm_db_conn, sql)
rc = ibm_db.close(ibm_db_conn)
###Output
_____no_output_____
###Markdown
We will impute missing values in the AGE column with the mean value.
###Code
# Impute AGE columns w/ mean value
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql = "CALL IDAX.IMPUTE_DATA('intable=TPCDS_VIEW,method=mean,inColumn=AGE')"
stmt = ibm_db.exec_immediate(ibm_db_conn, sql)
rc = ibm_db.close(ibm_db_conn)
# Verify imputation with col_prop table
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
#Create new col_prop table
drop_object("TPCDS_COL_PROP2", "TABLE", ibm_db_conn, verbose = True)
sql = "CALL IDAX.COLUMN_PROPERTIES('intable=TPCDS_VIEW, outtable=TPCDS_COL_PROP2, withstatistics=true,"
sql+= "incolumn=CLIENT_ID:id;NUM_DEPENDANTS:nom;NUM_DEPENDANTS_EMPLOYED:nom;NUM_DEPENDANTS_COUNT:nom;HOUSEHOLD_INCOME:nom')"
stmt = ibm_db.exec_immediate(ibm_db_conn, sql)
print("TABLE TPCDS_COL_PROP was created.")
rc = close_connection_to_db(ibm_db_conn, verbose=False)
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql = "SELECT * FROM TPCDS_COL_PROP2"
col_prop2 = pd.read_sql(sql,conn)
col_prop2.sort_values('COLNO')
rc = close_connection_to_db(ibm_db_conn, verbose=False)
col_prop2
###Output
_____no_output_____
###Markdown
7. Model Training. We now train a K-Means model using the cleaned data
###Code
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
drop_object("TPCDS_KM_3", "MODEL", ibm_db_conn, verbose = True)
drop_object("TPCDS_KM_3_OUT", "TABLE", ibm_db_conn, verbose = True)
sql = "CALL IDAX.KMEANS('model=TPCDS_KM_3, intable=TPCDS_VIEW, outtable=TPCDS_KM_3_OUT, id=CLIENT_ID,"
sql+= "colPropertiesTable=TPCDS_COL_PROP2, randseed=42, k=3, distance=euclidean')"
stmt = ibm_db.exec_immediate(ibm_db_conn, sql)
print("Model trained successfully!")
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql = "SELECT * FROM TPCDS_KM_3_MODEL"
model = pd.read_sql(sql,conn)
sql = "SELECT * FROM TPCDS_KM_3_CLUSTERS ORDER BY CLUSTERID"
clusters = pd.read_sql(sql,conn)
print('Model Table:')
display(model)
print('Model Clusters')
display(clusters)
rc = close_connection_to_db(ibm_db_conn, verbose=False)
###Output
Model Table:
###Markdown
8. Hyperparameter Tuning Hypertune the parameter "k" by using the elbow method - plot the mean sum of squared distances for each cluster vs. k
###Code
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
ss_list = []
for k in range(2,11):
model_name = "TPCDS_KM_"+str(k)
outtable_name = model_name + "_OUT"
drop_object(model_name, "MODEL", ibm_db_conn, verbose = True)
drop_object(outtable_name, "TABLE", ibm_db_conn, verbose = True)
# Train models for k=2 to k=10
sql = "CALL IDAX.KMEANS('model="+model_name+", intable=TPCDS_VIEW, outtable="+outtable_name+", id=CLIENT_ID,"
sql+= "colPropertiesTable=TPCDS_COL_PROP2, randseed=42, k="+str(k)+"')"
stmt = ibm_db.exec_immediate(ibm_db_conn, sql)
print("Model "+ model_name+ " trained successfully!")
# Select the mean sum of squared distances and append to a list for later plotting
sql = "SELECT AVG(WITHINSS) as MEAN_SS FROM "+model_name+"_CLUSTERS"
mean_SS = pd.read_sql(sql,conn)
value = mean_SS.iloc[0]['MEAN_SS']
ss_list.append(value)
# Plot avg sum of squared distances vs. k
k=range(2,11)
plt.plot(k,ss_list);
plt.xlabel('k')
plt.ylabel('Mean sum of squared distances')
plt.title('Elbow method of determining optimal k value');
###Output
_____no_output_____
###Markdown
**Observation:** We select k=4 as the optimal model
###Code
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql = "SELECT * FROM TPCDS_KM_4_MODEL"
model = pd.read_sql(sql,conn)
sql = "SELECT * FROM TPCDS_KM_4_CLUSTERS"
clusters = pd.read_sql(sql,conn)
print('Model Table:')
display(model)
print('Model Clusters')
display(clusters)
rc = close_connection_to_db(ibm_db_conn, verbose=False)
###Output
Model Table:
###Markdown
Apply the K-Means model to the data
###Code
# Use the PREDICT_KMEANS procedure to apply the clusters to the data
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
drop_object('TPCDS_KM_4_PREDICT', "TABLE", ibm_db_conn, verbose = True)
sql = "CALL IDAX.PREDICT_KMEANS('model=TPCDS_KM_4, intable=TPCDS_VIEW, outtable=TPCDS_KM_4_PREDICT, id=CLIENT_ID')"
stmt = ibm_db.exec_immediate(ibm_db_conn, sql)
print("The model has made its predictions")
rc = close_connection_to_db(ibm_db_conn, verbose=False)
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql="select * FROM TPCDS_KM_4_PREDICT;"
predictions = pd.read_sql(sql,conn)
rc = close_connection_to_db(ibm_db_conn, verbose=False)
predictions
###Output
_____no_output_____
###Markdown
Visualize the Clusters
###Code
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql="SELECT a.*, b.cluster_id FROM TPCDS_VIEW as a INNER JOIN TPCDS_KM_4_PREDICT as b ON a.CLIENT_ID=b.ID order by client_id;;"
results = pd.read_sql(sql,conn)
rc = close_connection_to_db(ibm_db_conn, verbose=False)
results
# Convert the "CLUSTER_ID" datatype to 'category' for plotting
results['CLUSTER_ID']=results['CLUSTER_ID'].astype('category')
# Plot all column combinations
# columns=results.columns[1:-1]
# columns
# for combo in combinations(columns, 2):
# plt.figure()
# col1=combo[0]
# col2=combo[1]
# print(col1,col2)
# sns.scatterplot( x = col1 ,y = col2 , data = results , hue='CLUSTER_ID');
# plt.show()
# Plot PURCHASE ESTIMATE VS AGE
plt.figure()
sns.scatterplot( x = 'PURCHASE_ESTIMATE' ,y = 'AGE' , data = results , hue='CLUSTER_ID');
plt.show()
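# Quick sanity check (sketch): how many clients fall into each of the 4 clusters
# results['CLUSTER_ID'].value_counts().sort_index()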
###Output
_____no_output_____
###Markdown
K-Means Clustering with IBM DB2 Imports
###Code
# Database connectivity
import ibm_db
import ibm_db_dbi
# Pandas for loading values into memory for later visualization
import pandas as pd
from IPython.display import display
import numpy as np
# import scipy.stats as ss
from itertools import combinations_with_replacement, combinations
# Visualization
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(rc={'figure.figsize':(12,8)})
%config InlineBackend.figure_format = 'retina'
# Import custom functions
import sys
sys.path.insert(1, '../lib/')
from InDBMLModules import connect_to_schema, drop_object, plot_histogram, close_connection_to_db
%load_ext autoreload
%autoreload 2
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
3. Connect to DB
###Code
# Connect to DB
conn_str = "DATABASE=mydb;" + \
"HOSTNAME=*****;"+ \
"PROTOCOL=TCPIP;" + \
"PORT=*****;" + \
"UID=*****;" + \
"PWD=*****;"
ibm_db_conn = ibm_db.connect(conn_str,"","")
conn = ibm_db_dbi.Connection(ibm_db_conn)
print('Connection to Db2 Instance Created!')
rc = ibm_db.close(ibm_db_conn)
###Output
Connection to Db2 Instance Created!
###Markdown
Create a schema for this demo
###Code
ibm_db_conn = ibm_db.connect(conn_str, "", "")
ibm_db_dbi_conn = ibm_db_dbi.Connection(ibm_db_conn)
schema = "CLUSTER"
drop_object("CLUSTER", "SCHEMA", ibm_db_conn, verbose = True)
sql ="create schema CLUSTER authorization MLP"
stmt = ibm_db.exec_immediate(ibm_db_conn, sql)
print("Schema CLUSTER was created.")
rc = close_connection_to_db(ibm_db_conn, verbose=False)
###Output
Pre-existing TABLE CLUSTER.TPCDS_KM_2_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_2_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_COL_PROP was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_2_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_2_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_2_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_3_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_3_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_3_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_3_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_3_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_4_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_4_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_4_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_4_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_4_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_5_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_5_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_5_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_5_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_5_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_6_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_6_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_6_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_6_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_6_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_7_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_7_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_7_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_7_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_7_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_8_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_8_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_8_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_8_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_8_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_9_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_9_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_9_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_9_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_9_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_10_COLUMN_STATISTICS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_10_MODEL was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_10_COLUMNS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_10_CLUSTERS was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_10_OUT was dropped.
Pre-existing TABLE CLUSTER.TPCDS_COL_PROP2 was dropped.
Pre-existing TABLE CLUSTER.TPCDS_KM_4_PREDICT was dropped.
Pre-existing VIEW CLUSTER.TPCDS_VIEW was dropped.
Pre-existing SCHEMA CLUSTER was dropped.
Schema CLUSTER was created.
###Markdown
4. Data Exploration Let's use the IDAX.COLUMN_PROPERTIES stored procedure to collect statistics
###Code
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
drop_object("TPCDS_COL_PROP", "TABLE", ibm_db_conn, verbose = True)
sql = "CALL IDAX.COLUMN_PROPERTIES('intable=DATA.TPCDS_50K, outtable=TPCDS_COL_PROP, withstatistics=true,"
sql+= "incolumn=CLIENT_ID:id;NUM_DEPENDANTS:nom;NUM_DEPENDANTS_EMPLOYED:nom;NUM_DEPENDANTS_COUNT:nom;HOUSEHOLD_INCOME:nom ')"
stmt = ibm_db.exec_immediate(ibm_db_conn, sql)
print("TABLE TPCDS_COL_PROP was created.")
rc = close_connection_to_db(ibm_db_conn, verbose=False)
# Identify Columns with missing values
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql = "SELECT * FROM TPCDS_COL_PROP"
col_prop = pd.read_sql(sql,conn)
rc = ibm_db.close(ibm_db_conn)
col_prop.sort_values('COLNO')
###Output
_____no_output_____
###Markdown
**Observation**:- FIRST_NAME and LAST_NAME can be dropped - they have too many unique values (high cardinality)
###Code
# Identify Columns with missing values
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql = "SELECT COLNO, NAME, TYPE,NUMMISSING,NUMMISSING+NUMINVALID+NUMVALID as NUMBER_OF_VALUES, "
sql+= "ROUND(dec(NUMMISSING,10,2)/(dec(NUMMISSING, 10,2)+dec(NUMINVALID, 10,2)+dec(NUMVALID, 10,2))*100,2) as PERCENT_NULL "
sql+= "from TPCDS_COL_PROP where NUMMISSING > 0 order by PERCENT_NULL DESC"
missing_vals = pd.read_sql(sql,conn)
rc = ibm_db.close(ibm_db_conn)
missing_vals
###Output
_____no_output_____
###Markdown
**Observation**: missing values in AGE should be imputed. 5. Data Transformation. Select all columns except FIRST_NAME and LAST_NAME into a view; these features are not relevant as they contain too many unique values (i.e. high cardinality) to be useful to us.
###Code
# Create a view without the FIRST_NAME and LAST_NAME features
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql= "CREATE VIEW TPCDS_VIEW AS SELECT CLIENT_ID, AGE, GENDER, MARITAL_STATUS, EDUCATION, PURCHASE_ESTIMATE,"
sql +="CREDIT_RATING, NUM_DEPENDANTS, NUM_DEPENDANTS_EMPLOYED, NUM_DEPENDANTS_COUNT, HOUSEHOLD_INCOME, "
sql+="HOUSEHOLD_BUY_POTENTIAL FROM DATA.TPCDS_50K"
stmt = ibm_db.exec_immediate(ibm_db_conn, sql)
rc = ibm_db.close(ibm_db_conn)
###Output
_____no_output_____
###Markdown
We will impute missing values in the AGE column with the mean value.
###Code
# Impute AGE columns w/ mean value
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql = "CALL IDAX.IMPUTE_DATA('intable=TPCDS_VIEW,method=mean,inColumn=AGE')"
stmt = ibm_db.exec_immediate(ibm_db_conn, sql)
rc = ibm_db.close(ibm_db_conn)
# Verify imputation with col_prop table
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
#Create new col_prop table
drop_object("TPCDS_COL_PROP2", "TABLE", ibm_db_conn, verbose = True)
sql = "CALL IDAX.COLUMN_PROPERTIES('intable=TPCDS_VIEW, outtable=TPCDS_COL_PROP2, withstatistics=true,"
sql+= "incolumn=CLIENT_ID:id;NUM_DEPENDANTS:nom;NUM_DEPENDANTS_EMPLOYED:nom;NUM_DEPENDANTS_COUNT:nom;HOUSEHOLD_INCOME:nom')"
stmt = ibm_db.exec_immediate(ibm_db_conn, sql)
print("TABLE TPCDS_COL_PROP2 was created.")
rc = close_connection_to_db(ibm_db_conn, verbose=False)
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql = "SELECT * FROM TPCDS_COL_PROP2"
col_prop2 = pd.read_sql(sql,conn)
col_prop2.sort_values('COLNO')
rc = close_connection_to_db(ibm_db_conn, verbose=False)
col_prop2
###Output
_____no_output_____
###Markdown
7. Model TrainingWe now look to train a K-Means model using the cleaned data
###Code
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
drop_object("TPCDS_KM_3", "MODEL", ibm_db_conn, verbose = True)
drop_object("TPCDS_KM_3_OUT", "TABLE", ibm_db_conn, verbose = True)
sql = "CALL IDAX.KMEANS('model=TPCDS_KM_3, intable=TPCDS_VIEW, outtable=TPCDS_KM_3_OUT, id=CLIENT_ID,"
sql+= "colPropertiesTable=TPCDS_COL_PROP2, randseed=42, k=3, distance=euclidean')"
stmt = ibm_db.exec_immediate(ibm_db_conn, sql)
print("Model trained successfully!")
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql = "SELECT * FROM TPCDS_KM_3_MODEL"
model = pd.read_sql(sql,conn)
sql = "SELECT * FROM TPCDS_KM_3_CLUSTERS ORDER BY CLUSTERID"
clusters = pd.read_sql(sql,conn)
print('Model Table:')
display(model)
print('Model Clusters')
display(clusters)
rc = close_connection_to_db(ibm_db_conn, verbose=False)
###Output
Model Table:
###Markdown
8. Hyperparameter Tuning Hypertune the parameter "k" by using the elbow method - plot the mean sum of squared distances for each cluster vs. k
###Code
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
ss_list = []
for k in range(2,11):
model_name = "TPCDS_KM_"+str(k)
outtable_name = model_name + "_OUT"
drop_object(model_name, "MODEL", ibm_db_conn, verbose = True)
drop_object(outtable_name, "TABLE", ibm_db_conn, verbose = True)
# Train models for k=2 to k=10
sql = "CALL IDAX.KMEANS('model="+model_name+", intable=TPCDS_VIEW, outtable="+outtable_name+", id=CLIENT_ID,"
sql+= "colPropertiesTable=TPCDS_COL_PROP2, randseed=42, k="+str(k)+"')"
stmt = ibm_db.exec_immediate(ibm_db_conn, sql)
print("Model "+ model_name+ " trained successfully!")
# Select the mean sum of squared distances and append to a list for later plotting
sql = "SELECT AVG(WITHINSS) as MEAN_SS FROM "+model_name+"_CLUSTERS"
mean_SS = pd.read_sql(sql,conn)
value = mean_SS.iloc[0]['MEAN_SS']
ss_list.append(value)
# Plot avg sum of squared distances vs. k
k=range(2,11)
plt.plot(k,ss_list);
plt.xlabel('k')
plt.ylabel('Mean sum of squared distances')
plt.title('Elbow method of determining optimal k value');
###Output
_____no_output_____
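###Markdown
As a rough cross-check of the visual elbow, one can look at where the curve bends most sharply. The snippet below is only a heuristic sketch (it assumes the `ss_list` values computed in the loop above are still in memory); the final choice of k should still be confirmed by eye.
###Code
# Heuristic elbow estimate: k with the largest second difference (curvature) of the curve
import numpy as np
ss_arr = np.array(ss_list)
second_diff = np.diff(ss_arr, n=2)      # second differences correspond to k = 3 .. 9
k_candidates = np.arange(3, 10)
print("Heuristic elbow at k =", k_candidates[np.argmax(second_diff)])
###Output
_____no_output_____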
###Markdown
**Observation:** We select k=4 as the optimal model
###Code
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql = "SELECT * FROM TPCDS_KM_4_MODEL"
model = pd.read_sql(sql,conn)
sql = "SELECT * FROM TPCDS_KM_4_CLUSTERS"
clusters = pd.read_sql(sql,conn)
print('Model Table:')
display(model)
print('Model Clusters')
display(clusters)
rc = close_connection_to_db(ibm_db_conn, verbose=False)
###Output
Model Table:
###Markdown
Apply the K-Means model to the data
###Code
# Use the PREDICT_KMEANS procedure to apply the clusters to the data
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
drop_object('TPCDS_KM_4_PREDICT', "TABLE", ibm_db_conn, verbose = True)
sql = "CALL IDAX.PREDICT_KMEANS('model=TPCDS_KM_4, intable=TPCDS_VIEW, outtable=TPCDS_KM_4_PREDICT, id=CLIENT_ID')"
stmt = ibm_db.exec_immediate(ibm_db_conn, sql)
print("The model has made its predictions")
rc = close_connection_to_db(ibm_db_conn, verbose=False)
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql="select * FROM TPCDS_KM_4_PREDICT;"
predictions = pd.read_sql(sql,conn)
rc = close_connection_to_db(ibm_db_conn, verbose=False)
predictions
###Output
_____no_output_____
###Markdown
Visualize the Clusters
###Code
ibm_db_conn, conn = connect_to_schema(schema,conn_str)
sql="SELECT a.*, b.cluster_id FROM TPCDS_VIEW as a INNER JOIN TPCDS_KM_4_PREDICT as b ON a.CLIENT_ID=b.ID order by client_id;"
results = pd.read_sql(sql,conn)
rc = close_connection_to_db(ibm_db_conn, verbose=False)
results
# Convert the "CLUSTER_ID" datatype to 'category' for plotting
results['CLUSTER_ID']=results['CLUSTER_ID'].astype('category')
# Plot PURCHASE ESTIMATE VS AGE
plt.figure()
sns.scatterplot( x = 'PURCHASE_ESTIMATE' ,y = 'AGE' , data = results , hue='CLUSTER_ID');
plt.show()
###Output
_____no_output_____ |
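###Markdown
Beyond the scatter plot, a quick way to characterize the segments is to aggregate a few numeric columns per cluster. This is only a sketch and assumes the `results` dataframe from the cell above is still available; it only uses columns already present in it.
###Code
# Mean profile and size of each cluster
cluster_profile = results.groupby('CLUSTER_ID')[['AGE', 'PURCHASE_ESTIMATE']].mean()
cluster_sizes = results['CLUSTER_ID'].value_counts().sort_index()
display(cluster_profile)
print(cluster_sizes)
###Output
_____no_output_____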
Sorting-Problems.ipynb | ###Markdown
Sorting Algorithms: Bubble sort:
###Code
def bubble_sort(arr):
n = len(arr)-1
for i in range(n):
for j in range(0, n-i):
if arr[j]>arr[j+1]:
temp = arr[j]
arr[j] = arr[j+1]
arr[j+1] = temp
return arr
arr = [45,89,12,75,106,5]
bubble_sort(arr)
arr2 = [23,1,45,56,12,34,44,11,10,9]
bubble_sort(arr2)
###Output
_____no_output_____
###Markdown
Selection Sort:
###Code
def selec_sort(arr):
for i in range(0,len(arr)):
min_idx = i
for j in range(i+1,len(arr)):
if arr[min_idx]>arr[j]:
min_idx = j
arr[i],arr[min_idx] = arr[min_idx],arr[i]
return arr
arr1 = [34,1,23,59,21,78,32]
selec_sort(arr1)
def selection_sort(L):
for i in range(len(L)):
min_index = i
for j in range(i+1, len(L)):
if L[j] < L[min_index]:
min_index = j
temp = L[i]
L[i] = L[min_index]
L[min_index] = temp
return L
arr1 = [34,1,23,59,21,78,32]
selection_sort(arr1)
###Output
_____no_output_____
###Markdown
Insertion Sort:
###Code
def insertion_sort(arr):
for i in range(1,len(arr)):
current_value = arr[i]
position = i
while position>0 and arr[position-1]>current_value:
arr[position] = arr[position-1]
position = position-1
arr[position] = current_value
return arr
arr = [50,30,10,80,20,40]
insertion_sort(arr)
###Output
_____no_output_____
###Markdown
Merge Sort:
###Code
def merge_sort(arr):
if len(arr)>1:
mid = int(len(arr)/2)
lefthalf = arr[:mid]
righthalf = arr[mid:]
merge_sort(lefthalf)
merge_sort(righthalf)
i=0
j=0
k=0
while i<len(lefthalf) and j<len(righthalf):
if lefthalf[i]<righthalf[j]:
arr[k] = lefthalf[i]
i +=1
else:
arr[k] = righthalf[j]
j +=1
k +=1
while i<len(lefthalf):
arr[k] = lefthalf[i]
i +=1
k +=1
while j<len(righthalf):
arr[k] = righthalf[j]
j +=1
k +=1
print('Merging : ',arr)
return arr
arr = [34,6,2,68,1,7,4,7,21]
merge_sort(arr)
###Output
Merging : [34]
Merging : [6]
Merging : [6, 34]
Merging : [2]
Merging : [68]
Merging : [2, 68]
Merging : [2, 6, 34, 68]
Merging : [1]
Merging : [7]
Merging : [1, 7]
Merging : [4]
Merging : [7]
Merging : [21]
Merging : [7, 21]
Merging : [4, 7, 21]
Merging : [1, 4, 7, 7, 21]
Merging : [1, 2, 4, 6, 7, 7, 21, 34, 68]
###Markdown
Quick Sort:
###Code
def quick_sort(arr):
    quick_sort_help(arr, 0, len(arr)-1)
def quick_sort_help(arr, first, last):
    if first < last:
        splitpoint = partition(arr, first, last)
        quick_sort_help(arr, first, splitpoint-1)
        quick_sort_help(arr, splitpoint+1, last)
def partition(arr, first, last):
    # pivot on the first element; move the two marks towards each other
    pivotvalue = arr[first]
    leftmark, rightmark = first+1, last
    while True:
        while leftmark <= rightmark and arr[leftmark] <= pivotvalue:
            leftmark += 1
        while rightmark >= leftmark and arr[rightmark] >= pivotvalue:
            rightmark -= 1
        if rightmark < leftmark:
            break
        arr[leftmark], arr[rightmark] = arr[rightmark], arr[leftmark]
    arr[first], arr[rightmark] = arr[rightmark], arr[first]  # pivot to its final position
    return rightmark
###Output
_____no_output_____ |
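###Markdown
A quick sanity check of the completed `quick_sort` on a small example list (the input values here are arbitrary); comparing against Python's built-in `sorted` confirms the in-place result.
###Code
arr = [50, 30, 10, 80, 20, 40, 5]   # arbitrary test input
expected = sorted(arr)
quick_sort(arr)
print(arr)
print(arr == expected)
###Output
_____no_output_____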
KerasNN/1200_CNN_BN_CIFAR10.ipynb | ###Markdown
Introduction This notebook demonstrates the use of **Batch Normalization** in a simple **ConvNet** applied to the [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. **Note:** The original [batch norm paper](https://arxiv.org/abs/1502.03167) explains its effectiveness with "internal covariate shift", but [this recent paper](https://arxiv.org/abs/1805.11604) shows it is actually due to batch norm making the "optimization landscape significantly smoother". Both might be worth reading. **Contents*** [CIFAR-10 Dataset](#CIFAR-10-Dataset) - load and preprocess dataset* [BN Before Activation](#BN-Before-Activation) - as per the original [batch norm paper](https://arxiv.org/abs/1502.03167)* [BN After Activation](#BN-After-Activation) - sometimes gives slightly better results Imports
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Limit TensorFlow GPU memory usage
###Code
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
with tf.Session(config=config):
    pass # init session with allow_growth
###Output
_____no_output_____
###Markdown
CIFAR-10 Dataset Load dataset and show example images
###Code
(x_train_raw, y_train_raw), (x_test_raw, y_test_raw) = tf.keras.datasets.cifar10.load_data()
class2txt = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
###Output
_____no_output_____
###Markdown
Show example images
###Code
fig, axes = plt.subplots(nrows=1, ncols=6, figsize=[16, 9])
for i in range(len(axes)):
axes[i].set_title(class2txt[y_train_raw[i, 0]])
axes[i].imshow(x_train_raw[i])
###Output
_____no_output_____
###Markdown
Normalize features
###Code
x_train = (x_train_raw - x_train_raw.mean()) / x_train_raw.std()
x_test = (x_test_raw - x_train_raw.mean()) / x_train_raw.std()
print('x_train.shape', x_train.shape)
print('x_test.shape', x_test.shape)
###Output
x_train.shape (50000, 32, 32, 3)
x_test.shape (10000, 32, 32, 3)
###Markdown
One-hot encode labels
###Code
y_train = tf.keras.utils.to_categorical(y_train_raw, num_classes=10)
y_test = tf.keras.utils.to_categorical(y_test_raw, num_classes=10)
print('y_train.shape', y_train.shape)
print(y_train[:3])
###Output
y_train.shape (50000, 10)
[[0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
###Markdown
BN Before Activation Create model* apply Conv2D without activation or bias* apply batch normalization* apply activation* apply max-pool There is some confusion around the BatchNormalization **axis=-1** parameter. As per the original batch norm paper, in convolutional layers we want to apply batch norm per channel (as opposed to per feature in dense layers). Batch norm creates 4 parameters for each distinct feature it normalizes, so a good sanity check is to ensure that the **Param # equals 4× the number of filters** (see the programmatic check after the model summary below).
###Code
from tensorflow.keras.layers import InputLayer, Conv2D, BatchNormalization, MaxPooling2D, Activation
from tensorflow.keras.layers import Flatten, Dense, Dropout
model = tf.keras.Sequential()
model.add(InputLayer(input_shape=[32, 32, 3]))
model.add(Conv2D(filters=16, kernel_size=3, padding='same', activation=None, use_bias=False))
model.add(BatchNormalization()) # leave default axis=-1 in all BN layers
model.add(Activation('elu'))
model.add(MaxPooling2D(pool_size=[2,2], strides=[2, 2], padding='same'))
model.add(Conv2D(filters=32, kernel_size=3, padding='same', activation=None, use_bias=False))
model.add(BatchNormalization())
model.add(Activation('elu'))
model.add(MaxPooling2D(pool_size=[2,2], strides=[2, 2], padding='same'))
model.add(Conv2D(filters=64, kernel_size=3, padding='same', activation=None, use_bias=False))
model.add(BatchNormalization())
model.add(Activation('elu'))
model.add(MaxPooling2D(pool_size=[2,2], strides=[2, 2], padding='same'))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(512, activation='elu'))
model.add(Dropout(0.3))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 32, 32, 16) 432
_________________________________________________________________
batch_normalization (BatchNo (None, 32, 32, 16) 64
_________________________________________________________________
activation (Activation) (None, 32, 32, 16) 0
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 16, 16, 16) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 16, 16, 32) 4608
_________________________________________________________________
batch_normalization_1 (Batch (None, 16, 16, 32) 128
_________________________________________________________________
activation_1 (Activation) (None, 16, 16, 32) 0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 8, 8, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 8, 8, 64) 18432
_________________________________________________________________
batch_normalization_2 (Batch (None, 8, 8, 64) 256
_________________________________________________________________
activation_2 (Activation) (None, 8, 8, 64) 0
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 4, 4, 64) 0
_________________________________________________________________
dropout (Dropout) (None, 4, 4, 64) 0
_________________________________________________________________
flatten (Flatten) (None, 1024) 0
_________________________________________________________________
dense (Dense) (None, 512) 524800
_________________________________________________________________
dropout_1 (Dropout) (None, 512) 0
_________________________________________________________________
dense_1 (Dense) (None, 10) 5130
=================================================================
Total params: 553,850
Trainable params: 553,626
Non-trainable params: 224
_________________________________________________________________
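###Markdown
As a programmatic version of the sanity check mentioned above, each `BatchNormalization` layer should report 4× the number of channels it normalizes. This is just a sketch that inspects the `model` defined above.
###Code
# Verify: every BatchNormalization layer has 4 parameters per normalized channel
for layer in model.layers:
    if isinstance(layer, BatchNormalization):
        n_channels = layer.output_shape[-1]
        print(layer.name, layer.count_params(), '== 4 x', n_channels, ':',
              layer.count_params() == 4 * n_channels)
###Output
_____no_output_____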
###Markdown
Train model
###Code
hist = model.fit(x=x_train, y=y_train, batch_size=250, epochs=10, validation_data=[x_test, y_test], verbose=2)
###Output
Train on 50000 samples, validate on 10000 samples
Epoch 1/10
- 5s - loss: 1.6958 - acc: 0.4322 - val_loss: 1.5946 - val_acc: 0.4415
Epoch 2/10
- 3s - loss: 1.2068 - acc: 0.5734 - val_loss: 1.0327 - val_acc: 0.6338
Epoch 3/10
- 3s - loss: 1.0316 - acc: 0.6362 - val_loss: 0.9225 - val_acc: 0.6782
Epoch 4/10
- 3s - loss: 0.9267 - acc: 0.6723 - val_loss: 0.8463 - val_acc: 0.7028
Epoch 5/10
- 3s - loss: 0.8477 - acc: 0.7029 - val_loss: 0.7955 - val_acc: 0.7211
Epoch 6/10
- 3s - loss: 0.8002 - acc: 0.7199 - val_loss: 0.7940 - val_acc: 0.7192
Epoch 7/10
- 3s - loss: 0.7457 - acc: 0.7363 - val_loss: 0.7269 - val_acc: 0.7410
Epoch 8/10
- 3s - loss: 0.7109 - acc: 0.7502 - val_loss: 0.6987 - val_acc: 0.7558
Epoch 9/10
- 3s - loss: 0.6815 - acc: 0.7612 - val_loss: 0.7015 - val_acc: 0.7530
Epoch 10/10
- 3s - loss: 0.6490 - acc: 0.7704 - val_loss: 0.6899 - val_acc: 0.7590
###Markdown
Final results
###Code
loss, acc = model.evaluate(x_train, y_train, batch_size=250, verbose=0)
print(f'Accuracy on train set: {acc:.3f}')
loss, acc = model.evaluate(x_test, y_test, batch_size=250, verbose=0)
print(f'Accuracy on test set: {acc:.3f}')
###Output
Accuracy on train set: 0.836
Accuracy on test set: 0.759
###Markdown
BN After Activation Create model* apply Conv2D with activation but without bias* apply batch normalization* apply max-pool
###Code
model = tf.keras.Sequential()
model.add(InputLayer(input_shape=[32, 32, 3]))
model.add(Conv2D(filters=16, kernel_size=3, padding='same', activation='elu', use_bias=False))
#model.add(Activation('elu'))
model.add(BatchNormalization()) # leave default axis=-1 in all BN layers
model.add(MaxPooling2D(pool_size=[2,2], strides=[2, 2], padding='same'))
model.add(Conv2D(filters=32, kernel_size=3, padding='same', activation='elu', use_bias=False))
#model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=[2,2], strides=[2, 2], padding='same'))
model.add(Conv2D(filters=64, kernel_size=3, padding='same', activation='elu', use_bias=False))
#model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=[2,2], strides=[2, 2], padding='same'))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(512, activation='elu'))
model.add(Dropout(0.3))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
hist = model.fit(x=x_train, y=y_train, batch_size=250, epochs=10, validation_data=[x_test, y_test], verbose=2)
###Output
Train on 50000 samples, validate on 10000 samples
Epoch 1/10
- 4s - loss: 1.6445 - acc: 0.4548 - val_loss: 1.3127 - val_acc: 0.5355
Epoch 2/10
- 3s - loss: 1.1684 - acc: 0.5909 - val_loss: 0.9892 - val_acc: 0.6480
Epoch 3/10
- 3s - loss: 0.9867 - acc: 0.6526 - val_loss: 0.8886 - val_acc: 0.6874
Epoch 4/10
- 3s - loss: 0.8836 - acc: 0.6876 - val_loss: 0.8381 - val_acc: 0.7117
Epoch 5/10
- 3s - loss: 0.8051 - acc: 0.7147 - val_loss: 0.7951 - val_acc: 0.7197
Epoch 6/10
- 3s - loss: 0.7546 - acc: 0.7344 - val_loss: 0.7497 - val_acc: 0.7329
Epoch 7/10
- 3s - loss: 0.7003 - acc: 0.7528 - val_loss: 0.7559 - val_acc: 0.7369
Epoch 8/10
- 4s - loss: 0.6519 - acc: 0.7679 - val_loss: 0.7065 - val_acc: 0.7536
Epoch 9/10
- 3s - loss: 0.6203 - acc: 0.7804 - val_loss: 0.6992 - val_acc: 0.7536
Epoch 10/10
- 3s - loss: 0.5876 - acc: 0.7916 - val_loss: 0.6985 - val_acc: 0.7583
###Markdown
Final results
###Code
loss, acc = model.evaluate(x_train, y_train, batch_size=250, verbose=0)
print(f'Accuracy on train set: {acc:.3f}')
loss, acc = model.evaluate(x_test, y_test, batch_size=250, verbose=0)
print(f'Accuracy on test set: {acc:.3f}')
###Output
Accuracy on train set: 0.869
Accuracy on test set: 0.758
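###Markdown
To compare the two variants it can also help to look at the learning curves. The sketch below plots the history of the most recently trained model only (`hist` was overwritten by the second `fit` call); the metric keys `acc`/`val_acc` are assumed from the training logs above and may be named differently in other TF versions.
###Code
# Learning curves of the last trained model
plt.figure(figsize=[8, 4])
plt.plot(hist.history['acc'], label='train acc')
plt.plot(hist.history['val_acc'], label='val acc')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____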
|
demos/stocks/05-stream-viewer.ipynb | ###Markdown
Real-Time Stream Viewer (HTTP)the following function responds to HTTP requests with the list of the last 10 processed twitter messages + sentiments in reverse order (newest on top): it reads records from the enriched stream, takes the most recent 10 messages, and reverse sorts them. The function uses the nuclio context to store the last results and stream pointers for maximum efficiency. The code is automatically converted into a nuclio (serverless) function and responds to HTTP requests. The example demonstrates the use of `%nuclio` magic commands to specify environment variables, package dependencies, configurations, and to deploy functions automatically onto a cluster. Initialize nuclio emulation, environment variables and configurationuse ` nuclio: ignore` for sections that don't need to be copied to the function
###Code
# nuclio: ignore
# if the nuclio-jupyter package is not installed run !pip install nuclio-jupyter
import nuclio
%nuclio env -c V3IO_ACCESS_KEY=${V3IO_ACCESS_KEY}
%nuclio env -c V3IO_USERNAME=${V3IO_USERNAME}
%nuclio env -c V3IO_API=${V3IO_API}
###Output
_____no_output_____
###Markdown
Set function configuration use an HTTP trigger with a fixed API port and define the base image; for more details check [nuclio function configuration reference](https://github.com/nuclio/nuclio/blob/master/docs/reference/function-configuration/function-configuration-reference.md)
###Code
%%nuclio config
# ask for a specific (fixed) API port
spec.triggers.web.kind = "http"
spec.triggers.web.attributes.port = 30100
# define the function base docker image
spec.build.baseImage = "python:3.6-jessie"
###Output
%nuclio: setting spec.triggers.web.kind to 'http'
%nuclio: setting spec.triggers.web.attributes.port to 30099
%nuclio: setting spec.build.baseImage to 'python:3.6-jessie'
###Markdown
Install required packages`%nuclio cmd` allows you to run image build instructions and install packagesNote: `-c` option will only install in nuclio, not locally
###Code
%nuclio cmd pip install git+https://github.com/yaronha/v3io-py-http.git
###Output
_____no_output_____
###Markdown
Nuclio function implementationthis function can run in Jupyter or in nuclio (real-time serverless)
###Code
import v3io
import base64
import json
import os
v3 = v3io.V3io(container='bigdata')
def handler(context, event):
resp = v3.getrecords('stock_stream','0',context.next_location,10)
json_resp = resp.json()
context.next_location = json_resp['NextLocation']
context.logger.info('location: %s', context.next_location)
for rec in json_resp['Records'] :
rec_data = base64.b64decode(rec['Data']).decode('utf-8')
rec_json = json.loads(rec_data)['text']
context.data += [rec_json]
context.data = context.data[-10:]
return context.Response(body=json.dumps(context.data[::-1]),
headers={'Access-Control-Allow-Origin': '*'},
content_type='text/plain',
status_code=200)
def init_context(context):
resp = v3.seek('stock_stream','0','EARLIEST')
context.next_location = resp.json()['Location']
context.data = []
###Output
_____no_output_____
###Markdown
Function invocationthe following section simulates nuclio function invocation and will emit the function results
###Code
# nuclio: ignore
# create a test event and invoke the function locally
init_context(context)
event = nuclio.Event(body='')
handler(context, event)
###Output
Python> 2019-03-05 11:33:24,910 [info] location: AQAAAAEAAAAAAECLAQAAAA==
###Markdown
Deploy a function onto a clusterthe `%nuclio deploy` command deploy functions into a cluster, make sure the notebook is saved prior to running it !check the help (`%nuclio help deploy`) for more information
###Code
%nuclio deploy -p stocks -n stream-view
# nuclio: ignore
# test the new API end point, take the address from the deploy log above
!curl 3.122.204.208:30100
###Output
["RT @Sophiemcneill: Just in - @Google, siding with Saudi Arabia, refuses to remove widely-criticized government app which lets men track wom\u2026", "Diversity and Inclusion Consultants @prdgm, whose CEO @joelle_emerson criticized @Google for going for \"Equality\" r\u2026 https://t.co/Ih7l8EX4mu", "RT @MyWhiteNinja_: OOF https://t.co/3BsS1O5UFI", "RT @AAPPres: Right now, pediatricians are watching our worst fears realized as measles outbreaks spread across the country. My home state o\u2026", "RT @MohapatraHemant: So @lyft is paying $8m/mo to @AWS -- almost $100m/yr! Each ride costs $.14 in AWS rent. I keep hearing they could buil\u2026"]
notebooks/Python101/Python101.1.ipynb | ###Markdown
Python 101.1 Getting Started with Python This notebook presents the main characteristics of the Python programming language. If you do not know what a programming language is, or what the essential difference is between a compiled/interpreted language and strong/weak typing, I suggest studying these concepts through online CS (Computer Science) courses.- [Compiler](https://en.wikipedia.org/wiki/Compiler)- [Interpreter](https://en.wikipedia.org/wiki/Interpreter_(computing)) To start with, we will discuss the main characteristics of the language and its easter eggs. Zen of Python----------------------
###Code
import this
###Output
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
###Markdown
Antigravity----------------------
###Code
# import antigravity
###Output
_____no_output_____
###Markdown
PEPs---------------------- [PEP 1](https://www.python.org/dev/peps/pep-0001/): *"PEP stands for **Python Enhancement Proposal**. A PEP is a design document providing information to the Python community, or describing a new feature for Python or its processes or environment. The PEP should provide a concise technical specification of the feature and a rationale for the feature."*[PEP 8](https://www.python.org/dev/peps/pep-0008/): *"This document gives **coding conventions** for the Python code comprising the standard library in the main Python distribution."* PyPI----------------------*"The Python Package Index (PyPI) is a **repository of software** for the Python programming language."* [PyPI](https://pypi.org/) It recently went through a major overhaul, and in recent months it has suffered some kinds of attacks along the same lines as npm. Tools and IDEs---------------------- Interpreter The most basic way to run Python code. Open a shell and type:```bash$> python``` IDLE The simplest "IDE" you can use to program in Python; it ships with the language itself. VIM / EMACS Shell text editors used by programmers in the Linux environment. They are widely used tools and have plugins for working with the Python language. PyCharm A complete IDE, in the same style as Eclipse for Java. A tool created by JetBrains, the same company that created IntelliJ. Visual Studio Code Microsoft's new text editor, quite complete, with plugins for several languages. It has been gaining market share over the last 2 years, and its Python support is very strong, probably because one of the language's core committers works at Microsoft developing the tool. Jupyter Not an IDE, but widely used in the Data Science world; it allows incremental execution of Python code (and other languages too). It follows the same notebook concept found in other tools such as Mathematica and Matlab. Legacy Python In the late 2000s the Python programming language reached version 3.0, which, among the problems it faced, brought a compatibility break with earlier versions. Language Syntax Like other programming languages, Python has many similar syntactic structures; however, what may surprise programmers is the way the language is structured. Unlike languages rooted in C, Python does not use braces (*brackets*) to delimit its structures, nor a semicolon at the end to mark the end of a statement. As will be seen, this is a way to avoid unnecessary repetition and to force programmers to keep their code consistently indented. Hello World To run the most basic program in Python (a simple "Hello World") you do not need much code or big structures.
###Code
print("Hello World")
###Output
Hello World
###Markdown
Variables*"... a variable is an object (a location, frequently in memory) capable of holding and representing a value or expression. While variables only "exist" at run time, they are associated with "names", called identifiers, during development time."* [wiki](https://pt.wikipedia.org/wiki/Vari%C3%A1vel_(programa%C3%A7%C3%A3o)) In short, it is a component that stores a value or values in memory, and the stored value can be replaced by another one (in the case of the Python language).
###Code
x = 1
print(x)
x = 2
print(x)
k = 1
k = 2
print(k)
###Output
1
2
2
###Markdown
Data Types Python, being a dynamically typed language, does not force the developer to declare which type of value a given variable must accept. This allows greater flexibility, at the cost of making the code harder to read later on, especially in large projects.**In Python the primitive types are classes!****Main types** - Boolean - Integer - Float - Complex - String - Byte - None For example, the Integer type is an object: it has attributes and methods.
###Code
x = 10_000_000
print(type(x))
print(x.real, x.imag)
print(x.bit_length())
###Output
<class 'int'>
10000000 0
24
###Markdown
Examples:
###Code
x = True
y = True
z = False
print("x and y are equal : ", x == y)
print("x is of type Boolean : ", type(x))
print("is x bool? ", bool is type(x))
print("is y bool? ", bool is type(y))
print('AND: ', x and z) # Logical AND; prints "False"
print('OR: ', x or z) # Logical OR; prints "True"
print('NOT: ', not x) # Logical NOT; prints "False"
print('XOR: ', x != z) # Logical XOR; prints "True"
x = 1
y = 2
print("x and y are equal : ", x == y)
print("x is of type Integer : ", type(x))
print("is x int? ", int is type(x))
print("is y int? ", int is type(y))
x = 1.1
y = 2.2
print("x and y are equal : ", x == y)
print("x is of type Float : ", type(x))
print("is x float? ", float is type(x))
print("is y float? ", float is type(y))
x = complex(2, 1)
y = 2 + 1j
print("x and y are equal : ", x == y)
print("x is of type Complex : ", type(x))
print("is x complex? ", complex is type(x))
print("is y complex? ", complex is type(y))
# Strings can be defined with single or double quotes
x = "hello world"
y = "hello world"
print("x and y are equal : ", x == y)
print("x is of type String : ", type(x))
print("is x str? ", str is type(x))
print("is y str? ", str is type(y))
x = bytes(10)
y = bytes("abc", encoding="utf-8")
print("x and y are equal : ", x == y)
print("x is of type Bytes : ", type(x))
print("is x bytes? ", bytes is type(x))
print("is y bytes? ", bytes is type(y))
# Represents null in Python
x = None
print("x is of type None : ", type(x))
print("is x None? ", x is None)
###Output
x is of type None :  <class 'NoneType'>
is x None?  True
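###Markdown
A side note on the checks above: comparing `type(x)` with `is` works for exact types, but the more idiomatic way to test a value's type in Python is `isinstance`, which also respects inheritance. A small illustrative sketch:
###Code
x = 3
print(isinstance(x, int))      # True
print(isinstance(x, float))    # False
print(isinstance(True, int))   # True -> bool is a subclass of int
###Output
_____no_output_____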
###Markdown
Operations The same operators that exist in other programming languages, especially the mathematical ones, also exist in Python. Likewise, there are logical operators.
###Code
x = 4
y = 2
print(x + y)
print(x - y)
print(x * y)
print(x / y)
x = 4.1
y = 2.5
print(x + y)
print(x - y)
print(x * y)
print(x / y)
x = True
y = False
print(x and y)
print(x or y)
###Output
False
True
###Markdown
**Operators on Strings** Strings in Python can be transformed by some operators such as '+' and '*' **operator '+'** String concatenation
###Code
x = "hello world"
y = "world"
print(x + y)
###Output
hello worldworld
###Markdown
**operator '*'** String 'replication'
###Code
x = "hello"
print(x * 3)
###Output
hellohellohello
###Markdown
**operator 'in'** Checks whether one String contains the other
###Code
x = "hello world"
y = "world"
print(y in x)
print('orl' in x)
###Output
True
True
###Markdown
Conditional structures and loops As in all other programming languages, Python has the main conditional structures and loops. - if, elif and else - for - while And to build the conditions we use the comparison operators. >Operator | Type | Meaning>--- | --- | ---> == | Equality | Checks whether two values are equal.> != | Inequality | Checks whether two values are different.> > | Comparison | Checks whether value A is greater than value B.> < | Comparison | Checks whether value A is less than value B.> >= | Comparison | Checks whether value A is greater than or equal to value B.> <= | Comparison | Checks whether value A is less than or equal to value B.> in | Sequence | Checks whether value A is contained in a collection.> is | Comparison | Checks whether A is the same object as B **Conditional: if, elif and else**
###Code
x = 5
if x == 5:
print("x:", x)
else:
print("erro")
x = "hello world"
if "hello2" in x:
print("if")
elif "hello" in x:
print("elif");k = 10 * 5;print(k)
else:
print("else")
###Output
elif
50
###Markdown
**and and or** Unlike other programming languages, which use the characters & and | to represent the boolean "AND" and "OR" operations, Python uses the English words "and" and "or".
###Code
x = 10
y = 20
if x < 10 or y > 15:
print("Hello World")
if x <= 10 or x > 20:
print(f"x = {x}")
if 10 <= x < 20:
print("It works!")
###Output
Hello World
x = 10
It works!
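###Markdown
The comparison table above also lists the `is` operator; it checks identity (whether two names refer to the same object) rather than equality of values, as this short sketch illustrates.
###Code
a = [1, 2, 3]
b = [1, 2, 3]
c = a
print(a == b)   # True  -> same values
print(a is b)   # False -> two different objects in memory
print(a is c)   # True  -> same object
###Output
_____no_output_____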
###Markdown
**Loop: while**
###Code
x = 1
while x <= 5:
print(x)
x += 1
###Output
1
2
3
4
5
###Markdown
**Loop: for**
###Code
for i in range(1, 5):
print(i)
###Output
1
2
3
4
###Markdown
**enumerate** Usually in programming languages we have access to the integer index we are iterating with, for example:```javascriptfor(var i=0; i<10; i++) { // the value of i goes from 0 ... 9}```To replicate this same behaviour in Python, we have the built-in **enumerate**.
###Code
for i, a in enumerate("hello"):
print(i, a)
###Output
0 h
1 e
2 l
3 l
4 o
###Markdown
Functions A structure created to encapsulate blocks of code so that they can be reused later. Like mathematical functions, they can receive parameters. They may or may not return values; when there is no explicit return, the function returns **None**.
###Code
def sum(x, y):
return x + y
print(sum(5, 6))
###Output
11
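###Markdown
As mentioned above, a function without an explicit `return` gives back `None`; a minimal illustration:
###Code
def greet(name):
    print("Hello,", name)    # no explicit return
result = greet("World")
print(result is None)        # True
###Output
_____no_output_____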
###Markdown
**args and kwargs** Because of its dynamic nature, the language has some constructs that can greatly simplify the creation of extremely generic functions, which can become quite complex. The use of args and kwargs illustrates just how dynamic the language is.
###Code
def soma(*args, **kwargs):
print(args)
print(kwargs)
x = float(args[0])
y = int(kwargs["y"])
return x + y
print(soma(5.5, 10, 90, y=1, a="b"))
###Output
(5.5, 10, 90)
{'y': 1, 'a': 'b'}
6.5
###Markdown
**unpacking** In Python it is possible to unpack the multiple return values of a function, and we can do the same when creating variables.
###Code
a, b = 1, 2
print(f"a={a}; b={b}")
def m_pow(*args):
return [m**2 for m in args]
a, b, c, d = m_pow(2, 3, 4, 5)
print(f"a={a}; b={b}; c={c}; d={d}")
###Output
a=1; b=2
a=4; b=9; c=16; d=25
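###Markdown
The same star syntax also works in the opposite direction, unpacking a sequence or a dict when *calling* a function. A small sketch reusing the `soma` function defined above:
###Code
values = (5.5, 10, 90)
named = {'y': 1, 'a': 'b'}
print(soma(*values, **named))   # equivalent to the explicit call shown earlier
###Output
_____no_output_____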
###Markdown
**Exercise 1:**[FizzBuzz](https://en.wikipedia.org/wiki/Fizz_buzz) Write a function that solves the FizzBuzz game.
###Code
# SOLUTION
def fizzbuzz(x):
if x % 3 == 0 and x % 5 == 0:
x = "fizzbuzz"
elif x % 3 == 0:
x = "fizz"
elif x % 5 == 0:
x = "buzz"
return x
x = 1
while x <= 30:
print(fizzbuzz(x))
x += 1
###Output
1
2
fizz
4
buzz
fizz
7
8
fizz
buzz
11
fizz
13
14
fizzbuzz
16
17
fizz
19
buzz
fizz
22
23
fizz
buzz
26
fizz
28
29
fizzbuzz
###Markdown
**Exercise 2:**[Factorial](https://pt.wikipedia.org/wiki/Fatorial) Write a function that computes the factorial of any number.
###Code
# SOLUTION
def fat(x=0):
    m = x
    # Stopping condition: x <= 1 (note that 0! = 1 by definition)
    if x > 1:
        # Recursion happens here...
        m = fat(x-1) * x
    elif x == 0:
        m = 1
    return m
print('factorial of 0: ', fat(0))
print('factorial of 1: ', fat(1))
assert fat(1) == 1
print('factorial of 20: ', fat(20))
assert fat(20) == 2432902008176640000
print('factorial of 50: ', fat(50))
assert fat(50) == 30414093201713378043612608166064768844377641568960512000000000000
###Output
factorial of 0:  1
factorial of 1:  1
factorial of 20:  2432902008176640000
factorial of 50:  30414093201713378043612608166064768844377641568960512000000000000
|
notebooks/.ipynb_checkpoints/06c-s1000-EvolutionaryAlgorithm-Climb,TO,Hover_new-checkpoint.ipynb | ###Markdown
Results
###Code
from IPython.display import display, clear_output
from ipywidgets import widgets
button = widgets.Button(description="Calculate")
display(button)
output = widgets.Output()
@output.capture()
def on_button_clicked(b):
clear_output()
    # objective and constraint wrappers around the sizing code
    contrainte=lambda x: SizingCode(x, 'Const')
    objectif=lambda x: SizingCode(x, 'Obj')
    # Differential evolution optimisation
start = time.time()
result = scipy.optimize.differential_evolution(func=objectif,
bounds=bounds,maxiter=500,
tol=1e-12)
# Final characteristics after optimization
end = time.time()
print("Operation time: %.5f s" %(end - start))
print("-----------------------------------------------")
print("Final characteristics after optimization :")
data=SizingCode(result.x, 'Prt')[0]
data_opt=SizingCode(result.x, 'Prt')[1]
pd.options.display.float_format = '{:,.3f}'.format
def view(x=''):
#if x=='All': return display(df)
if x=='Optimization' : return display(data_opt)
return display(data[data['Type']==x])
items = sorted(data['Type'].unique().tolist())+['Optimization']
w = widgets.Select(options=items)
return display(interactive(view, x=w))
# display(data)
button.on_click(on_button_clicked)
display(output)
###Output
_____no_output_____ |
solutions/3.ipynb | ###Markdown
Exercise Sheet 3: sWeights * [Exercise 1](#Exercise-1) * [Exercise 2](#Exercise-2)---
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.stats import norm
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
---An experimental distribution in the variables $(x, m)$ has a signal component $s(x, m)$ = $s(x)s(m)$ and a background component $b(x,m)$ = $b(x)b(m)$. The allowed range is $0 < x < 1$ and $0 < m < 1$. Let $s(m)$ be a Gaussian with mean $\mu = 0.5$ and standard deviation $\sigma = 0.05$. The distributions of the other components are obtained from uniformly distributed random numbers $z$. For $s(x)$ use $x = -0.2\ln{z}$, for $b(m)$ use $m = \sqrt{z}$, and for $b(x)$ the transformation $x = 1 - \sqrt{z}$. For the two assumed efficiency functions * $\varepsilon(x, m) = 1$ * $\varepsilon(x, m) = (x + m) / 2$ generate data sets of pairs $(x, m)$ comprising 20000 accepted signal events and 100000 accepted background events. Now consider the combined $m$ distribution and parametrize it by\begin{equation} f(m) = s(m) + b(m)\end{equation}with\begin{equation} s(m) = p_0 \exp\left(-\frac{(m - p_1)^2}{2p_2^2}\right)\end{equation}and\begin{equation} b(m) = p_3 + p_4m + p_5m^2 + p_6\sqrt{m} \,.\end{equation} For the case $\varepsilon(x, m) = (x + m)/2$, also use the parametrization above to describe the $m_c$ and $m_{cc}$ distributions, in which every $m$ value is weighted with $1/\varepsilon(x, m)$ or $1/\varepsilon^2(x, m)$ respectively; these are needed for the correct treatment of non-constant efficiencies.
###Code
def generate_sx(size):
xs = -0.2 * np.log(np.random.uniform(size=2 * size))
xs = xs[xs < 1]
return xs[:size]
def generate_sm(size):
return np.random.normal(0.5, 0.05, size=size)
def generate_s(size):
return np.array([generate_sx(size), generate_sm(size)])
def generate_bx(size):
return 1 - np.sqrt(np.random.uniform(size=size))
def generate_bm(size):
return np.sqrt(np.random.uniform(size=size))
def generate_b(size):
return np.array([generate_bx(size), generate_bm(size)])
def generate_sample(sig_size=20000, bkg_size=100000):
return np.append(generate_s(sig_size), generate_b(bkg_size), axis=1)
def efficiency(x, m):
return (x + m) / 2
def generate_with_efficiency(generator, efficiency, size):
def reset():
xs, ms = generator(size)
effs = efficiency(xs, ms)
accept = np.random.uniform(size=size) > effs
return np.array([xs[accept], ms[accept]])
sample = reset()
while sample.shape[1] < size:
sample = np.append(sample, reset(), axis=1)
return sample[:size]
def generate_sample_with_efficiency(efficiency, sig_size=20000, bkg_size=100000):
return np.append(generate_with_efficiency(generate_s, efficiency, sig_size),
generate_with_efficiency(generate_b, efficiency, bkg_size),
axis=1)
n = 20000
xs, ms = generate_sample()
xs_s, xs_b = xs[:n], xs[n:]
ms_s, ms_b = ms[:n], ms[n:]
plt.hist([xs_s, xs_b], bins=40, histtype='barstacked', label=['Signal', 'Background'])
plt.xlabel(r'$x$')
plt.legend()
plt.show()
plt.hist([ms_s, ms_b], bins=40, histtype='barstacked', label=['Signal', 'Background'])
plt.xlabel(r'$m$')
plt.legend()
plt.show()
effs = efficiency(xs, ms)
effs_s, effs_b = effs[:n], effs[n:]
plt.hist([effs_s, effs_b], bins=40, histtype='barstacked', label=['Signal', 'Background'])
plt.xlabel(r'$\varepsilon$')
plt.legend()
plt.show()
exs, ems = generate_sample_with_efficiency(efficiency)
exs_s, exs_b = exs[:n], exs[n:]
ems_s, ems_b = ems[:n], ems[n:]
plt.hist([exs_s, exs_b], bins=40, histtype='barstacked', label=['Signal', 'Background'])
plt.xlabel(r'$x$')
plt.legend()
plt.show()
plt.hist([ems_s, ems_b], bins=40, histtype='barstacked', label=['Signal', 'Background'])
plt.xlabel(r'$m$')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
--- Exercise 1 For both efficiency functions, determine the sWeights $w(m)$ from the observed $m$ distributions, and use $w(m)/\varepsilon(x, m)$ to project the distribution $N_{s}s(x)$ out of the data. Compare the result with the expectation for both efficiency functions.--- First we fit the combined mass distribution of signal and background to our two data sets. In doing so we have to remember to treat the numbers of signal and background events as fit parameters. Let us first consider the case $\varepsilon = 1$.
###Code
hist, medges, xedges = np.histogram2d(ms, xs, bins=(100, 140), range=((0, 1), (0, 1)))
mwidth = medges[1] - medges[0]
mcentres = medges[:-1] + mwidth / 2
xwidth = xedges[1] - xedges[0]
xcentres = xedges[:-1] + xwidth / 2
mhist = np.sum(hist, axis=1)
xhist = np.sum(hist, axis=0)
plt.plot(mcentres, mhist, '.', label='$m$')
plt.plot(xcentres, xhist, '.', label='$x$')
plt.ylabel('Absolute frequency')
plt.legend()
plt.show()
def pdf_ms(m, p0, p1, p2):
return p0 * np.exp(-(m - p1) ** 2 / 2 / p2 ** 2)
def pdf_mb(m, p3, p4, p5, p6):
return p3 + p4 * m + p5 * m ** 2 + p6 * np.sqrt(m)
def pdf_m(m, p0, p1, p2, p3, p4, p5, p6):
return pdf_ms(m, p0, p1, p2) + pdf_mb(m, p3, p4, p5, p6)
def fit_mass(centres, ns, pars=None):
if pars is None:
pars = [20000, 0.5, 0.5, 100000, 0.1, 0, 1]
return curve_fit(pdf_m, centres, ns, p0=pars)
popt, _ = fit_mass(mcentres, mhist)
plt.plot(mcentres, mhist, '.')
plt.plot(mcentres, pdf_m(mcentres, *popt))
plt.plot(mcentres, pdf_ms(mcentres, *popt[:3]), '--')
plt.plot(mcentres, pdf_mb(mcentres, *popt[3:]), '--')
plt.xlabel('$m$')
plt.show()
###Output
_____no_output_____
###Markdown
Next, we can use the fitted parameters to determine the sWeights.
###Code
def sweights(centres, mhist, popt):
s = pdf_ms(centres, *popt[:3])
b = pdf_mb(centres, *popt[3:])
n = mhist
    # Normalization of the PDFs
s = s / np.sum(s)
b = b / np.sum(b)
Wss = np.sum((s * s) / n)
Wsb = np.sum((s * b) / n)
Wbb = np.sum((b * b) / n)
alpha = Wbb / (Wss * Wbb - Wsb ** 2)
beta = -Wsb / (Wss * Wbb - Wsb ** 2)
weights = (alpha * s + beta * b) / n
return weights
sw = sweights(mcentres, mhist, popt)
plt.plot(mcentres, sw, '.')
plt.xlabel('$m$')
plt.ylabel('sWeight')
plt.show()
###Output
_____no_output_____
###Markdown
We can now use these to project out the signal component $s(x)$.
###Code
def apply_sweights(sweights, hist):
return np.array([w * row for w, row in zip(sweights, hist)]).sum(axis=0)
xweighted = apply_sweights(sw, hist)
plt.plot(xcentres, xweighted, '.', label='sWeighted')
plt.plot(xcentres, xhist, '.', label='s+b')
plt.xlabel('$x$')
plt.ylabel('Frequency')
plt.yscale('log')
plt.legend(loc='lower left')
plt.show()
###Output
_____no_output_____
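###Markdown
As a rough visual check against the expectation: the generated signal follows $s(x) \propto e^{-x/0.2}$ truncated to $0 < x < 1$, so the sWeighted histogram should scatter around $N_s\,s(x)\,\Delta x$. This is only a sketch; the normalization below uses the known number of generated signal events ($N_s = 20000$) rather than the fitted yield.
###Code
# expected N_s * s(x) per bin (truncated exponential with scale 0.2 on [0, 1])
norm = 0.2 * (1 - np.exp(-1 / 0.2))   # integral of exp(-x/0.2) over [0, 1]
expected = 20000 * np.exp(-xcentres / 0.2) / norm * xwidth
plt.plot(xcentres, xweighted, '.', label='sWeighted')
plt.plot(xcentres, expected, '-', label='expected $N_s s(x)$')
plt.xlabel('$x$')
plt.yscale('log')
plt.legend()
plt.show()
###Output
_____no_output_____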
###Markdown
For $\varepsilon = (x + m) / 2$ we now, incorrectly, apply exactly the same procedure.
###Code
ehist, emedges, exedges = np.histogram2d(ems, exs, bins=(100, 140), range=((0, 1), (0, 1)))
exwidth = exedges[1] - exedges[0]
emwidth = emedges[1] - emedges[0]
excentres = exedges[:-1] + exwidth / 2
emcentres = emedges[:-1] + emwidth / 2
emhist = np.sum(ehist, axis=1)
exhist = np.sum(ehist, axis=0)
plt.plot(emcentres, emhist, '.', label='$m$')
plt.plot(excentres, exhist, '.', label='$x$')
plt.ylabel('Häufigkeit')
plt.legend()
plt.show()
epopt, _ = fit_mass(emcentres, emhist, pars=[20000, 0.5, 0.5, 1, 1, -0.1, 100])
plt.plot(emcentres, emhist, '.')
plt.plot(emcentres, pdf_m(emcentres, *epopt))
plt.plot(emcentres, pdf_ms(emcentres, *epopt[:3]), '--')
plt.plot(emcentres, pdf_mb(emcentres, *epopt[3:]), '--')
plt.xlabel('$m$')
plt.show()
esw = sweights(emcentres, emhist, epopt)
plt.plot(emcentres, esw, '.')
plt.xlabel('$m$')
plt.ylabel('sWeight')
plt.show()
exweighted = apply_sweights(esw, ehist)
plt.plot(excentres, exweighted, '.', label='sWeighted')
plt.plot(excentres, exhist, '.', label='s+b')
plt.plot(xcentres, xweighted, '.', label='sWeighted correct')
plt.xlabel('$x$')
plt.ylabel('Häufigkeit')
plt.yscale('log')
plt.legend(loc='lower left')
plt.show()
###Output
_____no_output_____
###Markdown
--- Exercise 2 For $\varepsilon(x, m) = (x + m)/2$, determine the correct sWeights from the data weighted with $1/\varepsilon(x, m)$, taking the function $\varepsilon(x, m)$ into account in the determination of $w(m)$. Use the correct sWeights with $w(m)/\varepsilon(x, m)$ to extract the distribution $N_{s}s(x)$.
###Code
eeffs = efficiency(exs, ems)
ehist, emedges, exedges = np.histogram2d(
ems,
exs,
bins=(100, 140),
range=((0, 1), (0, 1)),
weights=1 / eeffs
)
emwidth = emedges[1] - emedges[0]
emcentres = emedges[:-1] + emwidth / 2
exwidth = exedges[1] - exedges[0]
excentres = exedges[:-1] + exwidth / 2
emhist = np.sum(ehist, axis=1)
exhist = np.sum(ehist, axis=0)
plt.plot(emcentres, emhist, 'o', label='$m$')
plt.plot(excentres, exhist, 's', label='$x$')
plt.ylabel('Gewichtete Häufigkeit')
plt.legend()
plt.show()
epopt, _ = fit_mass(emcentres, emhist,
pars=[2000, 0.5, 0.5, 1, 1, -0.1, 10])
plt.plot(emcentres, emhist, '.')
plt.plot(emcentres, pdf_m(emcentres, *epopt))
plt.plot(emcentres, pdf_ms(emcentres, *epopt[:3]), '--')
plt.plot(emcentres, pdf_mb(emcentres, *epopt[3:]), '--')
plt.xlabel('$m$')
plt.show()
eeffhist = efficiency(*np.meshgrid(excentres, emcentres))
def sweights_q(centres, qs, popt):
s = pdf_ms(centres, *popt[:3])
s = s / np.sum(s)
b = pdf_mb(centres, *popt[3:])
b = b / np.sum(b)
Wss = np.sum((s * s) / qs)
Wsb = np.sum((s * b) / qs)
Wbb = np.sum((b * b) / qs)
alpha = Wbb / (Wss * Wbb - Wsb ** 2)
beta = -Wsb / (Wss * Wbb - Wsb ** 2)
weights = (alpha * s + beta * b) / qs
return weights
qs = np.sum(ehist / eeffhist, axis=1)
esw = sweights_q(emcentres, qs, epopt)
plt.plot(emcentres, esw, '.')
plt.xlabel('$m$')
plt.ylabel('sWeight')
plt.show()
exweighted = apply_sweights(esw, ehist)  # project with the corrected sWeights computed above
plt.plot(excentres, exweighted, '.', label='sWeighted')
plt.plot(excentres, exhist, '.', label='s+b')
plt.plot(xcentres, xweighted, '.', label='sWeighted correct')
plt.xlabel('$x$')
plt.ylabel('Häufigkeit')
plt.yscale('log')
plt.legend(loc='lower left')
plt.show()
###Output
_____no_output_____ |
Project Workshop/data_cleaning_for_ml_lab_EXERCISES.ipynb | ###Markdown
Data Cleaning for Machine Learning Lab w/ Template [Full Project Checklist Here](https://docs.google.com/spreadsheets/d/1y4EdxeAliOQw9CDHx0_brjmk-LUb3gfX52zLGSqLg_g/edit?usp=sharing) In this notebook, we will be cleaning, exploring, preprocessing, and modeling [Airbnb listing data](http://insideairbnb.com/get-the-data.html) from Boston and Cambridge. The purpose of this notebook is to 1. practice data cleaning for ML and 2. show how to effectively use this template to bring some structure to your ML projects. Instructions 1. Edit all cells that say "TO DO"
###Code
## TO DO: Add 1 + 1
###Output
_____no_output_____
###Markdown
2. Read, but do not edit, cells that say "DO NOT CHANGE"
###Code
## DO NOT CHANGE:
10%2==0
###Output
_____no_output_____
###Markdown
Prerequisite: Business and Data Understanding Before doing any data cleaning or exploration, do the best you can to identify your goals, questions, and purpose of this analysis. Additionally, try to get your hands on a Data Dictionary or schema if you can. Ideally, you will be able to answer questions like this...- Business Questions: - What's the goal of this analysis? - What're some questions I want to answer? - Do I need machine learning?- Data Questions: - How many features should I expect? - How much text, categorical, or image data do I have? All of these need to be turned into numbers somehow. - Do I already have the datasets that I need? Honestly, taking 1-2 hours to answer these can go a long way. > **One of the worst feelings you can get in these situations is feeling overwhelmed and lost while trying to understand a big and messy dataset. You're doing yourself a favor by studying the data before you dive in.** --- Let's Get Started...--- Table of Contents I. [Import Data & Libraries](idl) II. [Exploratory Data Analysis](eda) III. [Train/Test Split](tts) IV. [Prepare for ML](pfm) V. [Pick your Models](pym) VI. [Model Selection](ms) VII. [Model Tuning](mt) VIII. [Pick the Best Model](pbm) I. Import Data & Libraries Import Libraries
###Code
## DO NOT CHANGE
# Data manipulation
import pandas as pd
import numpy as np
# More Data Preprocessing & Machine Learning
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MultiLabelBinarizer, OneHotEncoder, StandardScaler
from sklearn.impute import SimpleImputer
# Data Viz
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Downloading/Importing Data
###Code
## DO NOT CHANGE
boston_url = "https://github.com/pdeguzman96/data_cleaning_workshop/blob/master/boston.csv?raw=true"
cambridge_url = "https://github.com/pdeguzman96/data_cleaning_workshop/blob/master/cambridge.csv?raw=true"
## TO DO: import the data using the links above (Hint: pd.read_csv may be helpful)
## if you have the csv files saved into your directory (which you should if you downloaded the whole Github repo)
## Just replace the urls with the filepath
boston_df =
cambridge_df =
## DO NOT CHANGE
## TO DO: Skim through all the columns. There are a lot of columns that we don't need right now.
pd.options.display.max_rows = boston_df.shape[1]
boston_df.head(2).T
###Output
_____no_output_____
###Markdown
Dropping columns that we're not going to use for this notebook.
###Code
## DO NOT CHANGE
## These are urls, irrelevant dates, text data, names, zipcode, repetitive information, columns with 1 value
drop = ['listing_url', 'scrape_id', 'last_scraped', 'summary', 'space', 'description', 'neighborhood_overview',
'notes', 'transit', 'access', 'interaction', 'house_rules', 'thumbnail_url', 'medium_url', 'picture_url',
'xl_picture_url', 'host_id', 'host_url', 'host_about', 'host_thumbnail_url', 'host_picture_url',
'calendar_updated', 'calendar_last_scraped', 'license', 'name', 'host_name', 'zipcode', 'id','city', 'state',
'market','jurisdiction_names', 'host_location', 'street', 'experiences_offered','country_code','country',
'has_availability','is_business_travel_ready', 'host_neighbourhood','neighbourhood_cleansed','smart_location',
'neighbourhood']
## TO DO: drop the columns above from boston_df and cambridge_df
## TO DO: concatenate the dataframes together (hint: pd.concat using axis=0 and ignore_index=True). Store in df
df =
###Output
_____no_output_____
###Markdown
II. Exploratory Data Analysis**[Back to top](toc)**This section is where you're going to really try to get a feel of what you're dealing with. You'll be doing lots of cleaning and visualizing before you're ready for ML. This is usually the most time-consuming section before you get to a simple working ML algorithm. A. Duplicate Value Check We don't need/want any rows that are purely identical to one another.
###Code
## TO DO: drop any duplicate rows
###Output
_____no_output_____
###Markdown
Were there any duplicates? B. Separate Data Types Generally, there are 5-6 types of data you will run into.1. Numerical2. Categorical3. Date/Time4. Text5. Image6. Sound We don't have any Image or Sound data, and we removed Text data to make this simple and easier, so we're going to have to deal with Numerical, Categorical, and Date/Time. **Let's start by separating our data into Numerical and Categorical.**
###Code
## TO DO: create a dataframe of only categorical variables (Hint: df.select_dtypes(['object', 'bool']))
cat_df =
## TO DO: create a dataframe of only numerical variables (Hint: data types "int" or "float")
num_df =
###Output
_____no_output_____
###Markdown
---So now we need to account for all of the following possible data types...1. Numerical *(1.3, -2.345, 6,423.1)*2. Categorical - Binary *(True/False, 0/1, Heads/Tails)* - Ordinal *(Low, Medium, High)* - Nominal *(Red, Blue, Purple)*3. Date/Time ---> As you take an inventory of your data, use this next section to look through your data to **identify anything you have to fix** in order for your data to be **ready for EDA**. Skim through the Numerical data
###Code
# Glance at the numerical data
num_df.head().T
###Output
_____no_output_____
###Markdown
The numerical features should look OK. Nothing obvious that we have to fix other than missing values, which we will deal with later. Skim through the Categorical Data Most (if not all) problems will come from this subset.
###Code
# Skim the output to look for things to fix
cat_df.head().T
###Output
_____no_output_____
###Markdown
**We have a lot of work to do for these categorical columns.****Here's what we're going to take care of below...**1. Numerical data stored as Categorical (strings) - Convert some of these to numerical columns (e.g. the `price` features and `host_response_rate`)2. Binary data needs to be binarized into 1's and 0's - We can binarize the Binary/Boolean columns (such as `requires_license`)3. Ordinal data should generally be encoded to retain its ordering (e.g. {1,2,3} to encode {low, med, high}) - The only ordinal-looking column I see is `host_response_time`, but let's treat it as nominal for simplicity4. Nominal data to be unpacked, then later one-hot encoded - `host_verifications` and `amenities` have multiple items that need to be extrapolated into their own columns - All other categorical columns, like `neighborhood`, `cancellation_policy`, and `property_type`, should be one-hot encoded.5. Date/Time features need to be engineered - Using these dates we can engineer features from the date columns. We'll do this later
###Code
## DO NOT CHANGE
# Getting all the features that should be numerical, but are typed as objects (strings)
cat_to_num = ['host_response_rate', 'price', 'weekly_price',
'monthly_price', 'security_deposit', 'cleaning_fee', 'extra_people']
# Keeping changes in a temporary copied DataFrame
# Setting deep=True creates a "deepcopy", which guarantees that you're creating a new object
# Sometimes when you copy an object into a new variable, this new variable just points back to the copied object
# This can have unintended consequences - if you edit one variable, you also might edit the other
cat_to_num_df = cat_df[cat_to_num].copy(deep=True)
## TO DO: Take a peek at the data in cat_to_num_df using head(), What do you see?
## TO DO: remove the percent sign, then convert to a number (Hint: str.replace() & astype() will be useful)
## Overwrite the old values with the updated values in 'host_response_rate'
###Output
_____no_output_____
###Markdown
For the rest of the columns regarding price, remove the "$" and "," then convert to float.
###Code
## DO NOT CHANGE
price_cols = ['price', 'weekly_price','monthly_price', 'security_deposit', 'cleaning_fee', 'extra_people']
## TO DO: For each of the price columns, remove commas and dollar signs, then convert it to float
for col in price_cols:
## TO DO: Append the new cat_to_num_df data to the num_df DataFrame using pd.concat and axis=1
num_df =
## TO DO: Drop the old columns from the cat_df DataFrame using the appropriate axis
cat_df =
###Output
_____no_output_____
###Markdown
Convert Binary Columns to Boolean (Not necessary for exploration, but we have to do this later anyway)
###Code
bi_cols = []
## TO DO: Loop through each column and store all columns with only 2 values in the bi_cols list
## Hint: the nunique() method will be helpful
for col in cat_df.columns:
## TO DO: Take a peek at first few rows of the columns in bi_cols. What do you see?
## TO DO: Convert all binary columns to 1's and 0's. (Hint: the .map() method with a dictionary is helpful and fast)
## Make sure you overwrite the old columns with these new ones in cat_df
## TO DO: Take a peek at the bi_cols in cat_df using head to see if everything looks okay
###Output
_____no_output_____
###Markdown
Nominal (Extrapolating Multiple Values in one Feature)- host_verifications- amenities
###Code
## DO NOT CHANGE
## Next, we need to unpack these values in order for them to be meaningful
cat_df[['host_verifications', 'amenities']].head(2)
###Output
_____no_output_____
###Markdown
> **Let's start by turning these features into lists.**Looking at the first few rows above, it looks like we need to remove the brackets, curly brackets, and quotes. For example, we need to turn the string `"['email', 'phone', 'reviews', 'kba']"` into a Python list `[email, phone, reviews, kba]` for each row.
###Code
## DO NOT CHANGE
## This function is meant to be used with the apply method
def striplist(l):
'''
To be used with the apply method on a packed feature
'''
return([x.strip() for x in l])
## DO NOT CHANGE
# These steps turn the string into lists
# Note: You can break code lines using "\"
cat_df['host_verifications'] = cat_df['host_verifications'].str.replace('[', '') \
.str.replace(']', '') \
.str.replace("'",'') \
.str.lower() \
.str.split(',') \
.apply(striplist)
## TO DO: turn the amenities column of strings into a column of lists (similar to what we did above)
###Output
_____no_output_____
###Markdown
Binarizing the lists (sklearn has a handy transformer, `MultiLabelBinarizer`, that can do this for you efficiently). This transformer will turn the lists within each column into dummy features.
###Code
## TO DO: instantiate the MultiLabelBinarizer() (make sure you include the parentheses to create the object)
mlb =
## TO DO: Use the MultiLabelBinarizer to fit and transform host_verifications
## TO DO: Store this result in an object called host_verif_matrix (note that sklearn transformers output numpy arrays)
host_verif_matrix =
## DO NOT CHANGE
# This is what the output looks like when you use this transformer.
# The below code converts the matrix to a DataFrame
host_verif_df = pd.DataFrame(host_verif_matrix, columns = mlb.classes_)
host_verif_df.head(2)
## TO DO: Use the MultiLabelBinarizer to fit and transform amenities
amenities_matrix =
## TO DO: Store this result in a DataFrame called amenities_df (similar to what we did above with host_verif_df)
amenities_df =
## TO DO: Print the first few rows of amenities_df using head()
###Output
_____no_output_____
###Markdown
Does something look weird about the very first column after the index? It looks like we picked up a blank column with an empty string as the name. This probably happened because there were blanks in the `amenities` lists. Let's just drop this column.
###Code
## TO DO: in the amenities_df, drop the column that's named '' (Hint: amenities_df.drop())
###Output
_____no_output_____
###Markdown
Now we need to drop the original columns and concatenate the new DataFrames onto the original `cat_df` DataFrame.
###Code
## TO DO: drop the old host_verifications and amenities features from cat_df
cat_df =
## TO DO: concatenate amenities_df and host_verif_df to the original cat_df DataFrame
cat_df =
###Output
_____no_output_____
###Markdown
Date/Time Feature Engineering Typically, date/time data is used in time-series analysis. We're not dealing with time-series analysis, so we can get rid of these columns. However, before we get rid of them, let's create some features that might be useful for us later.
###Code
## DO NOT CHANGE
## Here are our date features
dt_cols = ['host_since', 'first_review', 'last_review']
cat_df[dt_cols].head(1)
## TO DO: Loop through the columns, converting them to the datetime dtype (hint: pd.to_datetime() is useful)
###Output
_____no_output_____
###Markdown
Converting the date features to "days since" features so we have numerical values to work with.
###Code
## TO DO: capture today's date using pd.to_datetime (Hint: you can pass the string "today" into to_datetime)
today =
## TO DO: Create one new date feature that counts number of days since today's date for each of the three date features
## Hint 1: You can subtract dates from one another
## Hint 2: the Pandas datetime data type has useful attributes that you can access (e.g. datetime.days)
## TO DO: Drop the original date columns from cat_df
## TO DO: combine your num_df and cat_df into one new DataFrame named cleaned_df
cleaned_df =
###Output
_____no_output_____
###Markdown
D. Visualize & Understand (EDA)> **Now we're in a good position for Exploratory Data Analysis**. Use this section as an opportunity to explore any interesting questions you can think of. > Note that many of the questions you may want to ask might require Machine Learning, which we can't perform until the end of this notebook. ---**Example EDA** Here's a simple example of something we can try to answer...*Do hosts with no recent reviews have different pricing from hosts with recent reviews?*
###Code
## DO NOT CHANGE - Here's an example of a way to answer the EDA question above
# Creating a discrete feature based on how recent the last review was
bins = [0, 90, 180, 365, 730, np.inf]
labels = ['last 90 days', 'last 180 days','last year', 'last 2 years', 'more than 2 years']
cleaned_df['last_review_discrete'] = pd.cut(num_df['last_review_days'], bins=bins, labels=labels)
# Filling the Null values in this new column with "no reviews", assuming Null means there are no reviews
cleaned_df['last_review_discrete'] = np.where(cleaned_df['last_review_discrete'].isnull(),
'no reviews',
cleaned_df['last_review_discrete'])
###Output
_____no_output_____
###Markdown
> Sometimes you might want to edit your data while you explore it, so it may be a good idea to copy your cleaned data into a new DataFrame just for exploration.
###Code
## DO NOT CHANGE
# The colon ensures a true copy is made
eda = cleaned_df[:]
## DO NOT CHANGE
# Let's separate this out between the room types. Let's ignore the last two
eda['room_type'].value_counts()
## DO NOT CHANGE
eda_viz = eda[eda['room_type'].isin(['Entire home/apt', 'Private room'])]
## DO NOT CHANGE
# Let's also remove the price outliers
plt.figure(figsize=(10,4))
sns.distplot(eda['price'])
plt.show()
## DO NOT CHANGE
# Filtering out prices that are greater than 3 sample standard deviations from the mean
price_mean = np.mean(eda['price'])
price_std = np.std(eda['price'])
price_cutoff = price_mean + price_std*3
## DO NOT CHANGE
eda_viz = eda_viz[eda_viz['price'] < price_cutoff]
## DO NOT CHANGE
fgrid = sns.FacetGrid(eda_viz, col='room_type', height=6,)
fgrid.map(sns.boxplot, 'last_review_discrete', 'price', 'host_is_superhost',
order=labels, hue_order = [0,1])
for ax in fgrid.axes.flat:
plt.setp(ax.get_xticklabels(), rotation=45)
ax.set(xlabel=None, ylabel=None)
l = plt.legend(loc='upper right')
l.get_texts()[0].set_text('Is not Superhost')
l.get_texts()[1].set_text('Is Superhost')
fgrid.fig.tight_layout(w_pad=1)
###Output
_____no_output_____
###Markdown
Looking at the faceted plots above, it seems that units that haven't been reviewed for a long time are priced slightly higher than units with more recent reviews. Also, it appears that superhosts' pricing (dark blue) is higher than non-superhosts' (light blue), suggesting that hosts who are verified as superhosts (hosts who are top-rated and most experienced) price their listings higher than those who are not. However, we haven't verified any of these observations with meaningful statistical tests. This is all descriptive analysis. ---
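For instance, a quick nonparametric check of the superhost price gap could look like the sketch below (a sketch only, assuming `scipy` is available and reusing the `eda_viz` frame from above):

```python
from scipy import stats

# Compare superhost vs. non-superhost nightly prices with a Mann-Whitney U test
sup = eda_viz.loc[eda_viz['host_is_superhost'] == 1, 'price']
non = eda_viz.loc[eda_viz['host_is_superhost'] == 0, 'price']
stat, pval = stats.mannwhitneyu(sup, non, alternative='two-sided')
print(f'Mann-Whitney U p-value: {pval:.4f}')
```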
###Code
## SKIP FOR NOW. Come back to this when you've finished the notebook.
## Depending on what you want to try, this might take a while
## TO DO: Think of your own EDA question. Try to answer it below.
## You can create features for this, but don't add them to cleaned_df, add them to a copied DF called eda
## Some ideas...
## What kinds of amenities do the expensive listings usually have?
## Do hosts with many listings have higher or lower reviews than hosts with only a few listings?
###Output
_____no_output_____
###Markdown
E. Assess Missing Values> **Do not fill or impute them yet at this point! We want to fill missing values after we train/test split.** In this section, we need to come up with a strategy for how we're going to tackle our missing values. Most ML algorithms (except fancy ones like [XGBoost](https://xgboost.readthedocs.io/en/latest/index.html)) cannot handle NA values, so we need to deal with them. You have two options, and I'll describe some strategies for each option below:1. **Remove them** - Are there many missing values in a particular **column**? Perhaps it's not very useful if too many are missing. - Are there many missing values in a particular **row**? Perhaps this missingness was caused by something reasonable or a data-collecting failure. Investigate these in case you can reasonably identify a reason why they're missing before you drop them. - Do some rows not contain your *response variable* of interest? Perhaps you want to predict the `price` of an Airbnb listing. If so, supervised learning methods require the label (`price`) to be there, so we can disregard these rows.2. **Fill them** (**Warning**: it's generally **not great practice** to fill missing values **before** you **train/test split** your data. You *can* fill missing values now if it's a one-off analysis, but if this is something you want to implement in practice, you want to be able to test your entire preprocessing workflow to evaluate how good it is. Think of your strategy for filling missing values as another hyperparameter that you want to tune.) - Infer the value of missing values from other columns. (e.g. If `state` is missing, but `city` is San Francisco, `state` is probably `CA`) - Fill numerical values with mean, median, or mode. - Fill categorical values with the most frequent value. - Use machine learning techniques to predict missing values. (Check out [IterativeImputer](https://scikit-learn.org/stable/auto_examples/impute/plot_iterative_imputer_variants_comparison.html) from sklearn for a method of doing this.) ---**Here's our approach for the section below...**1. We assess missing values per column to see if we can drop any features.2. We assess missing values per row to see if we can find any patterns in how these values may be missing.3. We strategize how we want to fill our categorical features.4. We strategize how we want to fill our remaining numerical features. 1. Assessing Missing Values per Column
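Both assessments boil down to one-liners in pandas; a minimal sketch (variable name is illustrative):

```python
# Fraction of missing values per feature; isna().sum(axis=1) would give the per-row counts used later
na_per_column = cleaned_df.isna().mean().sort_values(ascending=False)
```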
###Code
## TO DO: Let's assume we want to predict the price feature.
## If price is the variable we want to predict, then we have to disregard rows that don't have it
## Drop all rows with missing values in the 'price' column
cleaned_df =
## TO DO: Calculate the proportion/percentage of NA values per column
## TO DO: There should be 5 columns with much more than 80% of their values missing. Which 5 columns are they?
## TO DO (OPTIONAL): Create a bar chart to visualize the proportion of NA values per column
# Hint: matplotlib's bar or barh are useful for this.
## TO DO: Drop the 5 missing columns identified above from cleaned_df
###Output
_____no_output_____
###Markdown
2. Assessing Missing Values per Row
###Code
## TO DO: Create a temporary column called "sum_na_row" in cleaned_df that contains the number of NA values per row
cleaned_df['sum_na_row'] =
## TO DO: Use matplotlib or seaborn to plot the distribution of this new column.
## Is there anything in this distribution that looks odd to you??
## Hint: seaborn's distplot() function is nice and easy for this. Alternatively, countplot() may work, too
###Output
_____no_output_____
###Markdown
Look at the distribution of missing values per row. This distribution looks a little odd. Look at how few missing values per row we have between 5-9, then hundreds more from 10-14. This pattern may have been created systematically. Let's investigate whether there are specific columns that are consistently empty for these rows.
###Code
## Note: If you were able to spot the same sudden jump in missing values that I did,
## you may have noticed that there are a lot of rows with 10 or more missing values,
## but there's very few with 5-9 missing values.
## TO DO: filter cleaned_df for only rows with 10 or more missing values. store this in a temporary DataFrame
temp =
## TO DO: get the names of the columns that contain missing values from this temporary DF
## Hint: DF.isna().any() can be useful here.
na_cols =
# Take a peek at what these features look like. Transposed for readability
temp[na_cols].transpose()
###Output
_____no_output_____
###Markdown
Do you see any patterns in the missing-ness of our data? Which features contain many missing values? > It looks like a huge portion of missing values are coming from `review`-related features. Why could this be?---**My Guess:** NA values related to reviews are most likely missing because these particular listings do not have any reviews. This could be useful information, so I'll encode these values as `0` so they're distinct from the values that we do have. Although `0` may be misleading, I believe filling with `0` is better than simply removing the rows or imputing from other values, because it keeps these listings distinguishable from the units that actually have reviews.
###Code
## DO NOT CHANGE
# Collecting the numerical review-related columns
zero_fill_cols = ['review_scores_rating', 'review_scores_accuracy', 'review_scores_cleanliness',
'review_scores_checkin', 'review_scores_communication', 'review_scores_location',
'review_scores_value', 'reviews_per_month', 'first_review_days', 'last_review_days']
###Output
_____no_output_____
###Markdown
3. Categorical Features with Missing Values Dealing with missing categorical data can be tricky. Here are some ways you can deal with them:- Fill with mode/most frequent value (e.g. if 70% of a column is "red", maybe you fill the remaining NA values with "red")- Infer their value from other columns (e.g. if one feature helps you make an educated guess about the missing value)- Create a dummy variable (e.g. if the value is missing, another dummy feature will have 1 for the missing value. Else, it will be 0)> Let's take the simple most-frequent approach. We'll tackle this using the `SimpleImputer` from sklearn after we train/test split our data.
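The dummy-variable option can be sketched in one line (illustrative column choice; we stick with the most-frequent strategy in this lab):

```python
# 1 if host_response_time is missing for the listing, 0 otherwise
cleaned_df['host_response_time_missing'] = cleaned_df['host_response_time'].isna().astype(int)
```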
###Code
## TO DO (OPTIONAL): isolate all categorical columns (i.e. columns of dtype 'object'),
## then make countplots or barplots for each one to visualize how these features are distributed
###Output
_____no_output_____
###Markdown
4. Now what should we do about imputing the rest of our missing numerical features below? - A lot of people like the simple approach of filling them with the **mean** or **median** of the features.- There are also some more advanced methods of imputing missing values using Machine Learning. An example is a neat experimental estimator in sklearn called `IterativeImputer` that uses machine learning to predict and impute many features at once.
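For completeness, a minimal sketch of how `IterativeImputer` is switched on (we will not use it in this lab; `X_numeric` is a placeholder for a purely numerical DataFrame):

```python
# IterativeImputer is still experimental, so it has to be enabled explicitly
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

imputer = IterativeImputer(random_state=0)
X_numeric_filled = imputer.fit_transform(X_numeric)  # models each feature from the others
```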
###Code
## DO NOT CHANGE
# Getting indices of columns that still contain missing values
columns_idxs_missing = np.where(cleaned_df.isna().any())[0]
# Getting the names of these columns
cols_missing = cleaned_df.columns[columns_idxs_missing]
# Taking a peek at what's left
cleaned_df[cols_missing].head()
###Output
_____no_output_____
###Markdown
> To keep things simple, let's just fill the rest of these values with the median.
###Code
## TO DO: Drop the sum_na_row feature we made from cleaned_df
## We don't need this column anymore
cleaned_df =
## TO DO: notice how we kept the review-related columns and the categorical columns in the
## arrays "zero_fill_cols" and "cat_cols".
## Identify all remaining columns that aren't in these two lists and store them in an array called median_fill_cols
## Also remove "price" from this array.
## I'll explain why we do these steps later in the notebook when we fill our missing values.
median_fill_cols =
###Output
_____no_output_____
###Markdown
III. Train/Test Split**[Back to top](toc)**Here we split our data into training, testing, and (optionally) validation sets. However, if you plan to use a validation set or K-Fold Cross-Validation, just create your validation sets later when you're evaluating your ML models.
###Code
## TO DO: store cleaned_df without the price column in a variable called X.
X =
## TO DO: store cleaned_df['price'] in a variable called y
y =
## TO DO: Split your data using train_test_split using a train_size of 80%
## TO DO: store all these in the variables below
X_train, X_test, y_train, y_test =
## RUN THIS, BUT DO NOT CHANGE
# Setting this option to None to suppress a warning that we don't need to worry about right now
pd.options.mode.chained_assignment = None
###Output
_____no_output_____
###Markdown
IV. Prepare for ML **[Back to top](toc)** Now that we've already split our data and engineered the features that we want, all we have to do is prepare our data for our models. A. Dealing with Missing Data The reason we want to deal with missing data *after* we've split our data is that we want to simulate real-world conditions as much as we can when we test. When data is coming/streaming in, we have to be ready with our methods for dealing with missing data. Below, rather than using pandas' `fillna` method, we will take advantage of sklearn's `SimpleImputer` estimator (imputing is just another way of saying you're going to fill/infer missing values in this case). ---****A Brief Note on sklearn Estimators/Transformers****Many of sklearn's objects are called "estimators", and many estimators are also "transformers": they estimate some parameters from your data and then use them to transform your data in some way, producing a prediction or a transformed (e.g. normalized, standardized, NA-filled) version of your data.--- We will *fit* three `SimpleImputer` objects on **`X_train` only** according to each of our three strategies above. Then, we will use these imputers to transform **both our `X_train` and `X_test`.** As a reminder, this is what we will do...1. Fill categorical features stored in `cat_cols` with their mode/most frequent value2. Fill review-related features stored in `zero_fill_cols` with a constant value: 0.3. Fill all remaining numerical features stored in `median_fill_cols` with their median.> This is why we stored these column names in the **Assess Missing Values** section. We want to easily change each of these columns for both our X_train and X_test datasets. Also, remember how we dropped `price` from `median_fill_cols`? We needed to remove it because there is no `price` in `X_train` and `X_test`. **First, let's start with imputing our categorical variables.**
###Code
## DO NOT CHANGE - use this as an example of you have to do in the cells below for numerical variables
## Notice how we're looping through our columns, imputing one at a time.
## Normally, we would fit and transform features all at once with sklearn's ColumnTransformer, but
## this is fine since we're just practicing
# looping through our columns
for col in cat_cols:
# instantiating/creating an imputer with an impute strategy of "most frequent"
imputer = SimpleImputer(strategy='most_frequent')
# fit this imputer to the training column.
# This stores the most frequent value in the imputer for transforming
imputer.fit(X_train[[col]])
# using the transform method to fill NA values with the most frequent value, then updating our DFs
X_train[col] = imputer.transform(X_train[[col]])
X_test[col] = imputer.transform(X_test[[col]])
###Output
_____no_output_____
###Markdown
--- **Now let's impute our numerical variables.**
###Code
## TO DO: impute the zero_fil_cols features using an imputer with strategy = "constant" and fill_value = 0
## Use what we did above for cat_cols as a reference
## TO DO: impute the median_fill_cols using an imputer with strategy = "median"
## Use what we did above for cat_cols as a reference
###Output
_____no_output_____
###Markdown
B. Feature Engineering > Use this section as an opportunity to create useful features for your ML model. Note that any features you create might create NA or Infinite values, which have to be taken care of before using the data in most ML models.**An easy idea**: Ratio of capacity to beds.
###Code
## TO DO: In X_train, create a new feature called "capacity_to_beds" by dividing the "accomodates" feature by "beds"
## TO DO: Do the same thing for X_test. Can you think of anything that can go wrong if you do this?
###Output
_____no_output_____
###Markdown
Be careful with ratios because 1. dividing by zero might create infinite values and 2. any operation with NA values creates more NA values. This *shouldn't* be a problem because we already took care of NA values, but try to remember this. I think filling these values with zero is reasonable for now.
###Code
## TO DO: Fill infinite values in this new column with zero in X_train and X_test.
## (Hint: np.where and np.isinf can be helpful)
###Output
_____no_output_____
###Markdown
C. Transform Data **Transforming Numerical Data - Log Transform** Now is a good time to do any numerical data transformations if you haven't done them already. An example could be to log-transform salary or price fields to make the distributions look more normal. Here's one way you can do that.
###Code
## DO NOT CHANGE
plt.figure(figsize=(12,4))
# Creating plot on the left
plt.subplot(121)
sns.distplot(X_train['cleaning_fee'])
plt.title('Before Log-Transform')
# Creating plot on the right
plt.subplot(122)
log_transform_train = np.where(np.isinf(np.log(X_train['cleaning_fee'])), 0, np.log(X_train['cleaning_fee']))
log_transform_test = np.where(np.isinf(np.log(X_test['cleaning_fee'])), 0, np.log(X_test['cleaning_fee']))
sns.distplot(log_transform_train)
plt.title('After Log-Transform')
plt.show()
## TO DO: update the "cleaning_fee" features in X_train and X_test with their log-transformed value
## Careful if you try to compute the log yourself - taking the log of 0 will create an infinite value
## if you choose to compute log yourself with np.log, make sure you fill np.inf values with 0
###Output
_____no_output_____
###Markdown
**Transforming Numerical Data - Standardization** A lot of Machine Learning models, such as Linear Regression, Logistic Regression, and Neural Networks, perform better after you've standardized the features. It may not always be required (it doesn't really matter for Random Forests). In this section, we will standardize our data anyway. We don't have to standardize our binary features since they're either {0,1}, but we should standardize everything else.
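The key rule here is the same as for imputation: fit the scaler on the training data only, then apply it to both splits. A generic sketch (the column list is a placeholder):

```python
scaler = StandardScaler()
scaler.fit(X_train[numeric_cols])                               # learn mean/std from training data only
X_train[numeric_cols] = scaler.transform(X_train[numeric_cols])
X_test[numeric_cols] = scaler.transform(X_test[numeric_cols])   # reuse the training statistics
```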
###Code
## DO NOT CHANGE
temp_df = X_train.select_dtypes(['float', 'int'])
# Gathering binary features
bi_cols = []
for col in temp_df.columns:
if temp_df[col].nunique() == 2:
bi_cols.append(col)
## TO DO: store all the columns we need to standardize in cols_to_standardize.
## Hint: bi_cols contains all the columns that you don't need to standardize. np.setdiff1d may be helpful
cols_to_standardize =
## TO DO: instantiate the StandardScaler() (make sure you include the parenthesis to create the object)
## store this object in scaler below
scaler =
## TO DO: fit the scaler to X_train's cols_to_standardize only
## TO DO: transform (DO NOT FIT_TRANSFORM) X_train and X_test's cols_to_standardize and update the DataFrames
X_train[cols_to_standardize] =
X_test[cols_to_standardize] =
###Output
_____no_output_____
###Markdown
Encoding Categorical Data We still have to convert our categorical data into numbers. Here we're going to simply OneHotEncode (very similar to pd.get_dummies) our `cat_cols`.
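The generic pattern for a single column looks like this (a sketch; `room_type` is just used as an example column):

```python
ohe = OneHotEncoder(handle_unknown='ignore', sparse=False)
dummies = ohe.fit_transform(X_train[['room_type']])   # dense array, one column per category
dummy_names = list(ohe.categories_[0])                # category labels, in column order
```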
###Code
## DO NOT CHANGE
print('Unique Values per categorical column...')
for col in cat_cols:
print(f'{col}: {X_train[col].nunique()}')
## TO DO: Finish the code
for col in cat_cols:
## TO DO: instantiate the OneHotEncoder with handle_unknown = 'ignore' and sparse=False. Store object in ohe
ohe =
## TO DO: fit ohe to the current column "col" in X_train
ohe.fit
# This extracts the names of the dummy columns from ohe
dummy_cols = list(ohe.categories_[0])
# This creates new dummy columns in X_train and X_test that we will fill
for dummy in dummy_cols:
X_train[dummy] = 0
X_test[dummy] = 0
## TO DO: transform the X_train and X_test column "col" and update the dummy_cols we created above
X_train[dummy_cols] =
X_test[dummy_cols] =
## TO DO: drop the original cat_cols from X_train and X_test
###Output
_____no_output_____
###Markdown
**Text Data** We omitted text data at the beginning of the notebook, but a good place to start when working with text data is [NLTK](https://www.nltk.org/), the Natural Language Toolkit library. D. Feature Selection Depending on how big your dataset is, you may want to reduce the number of features you have for performance purposes. Here we will simply reduce the number of features by removing highly correlated features.
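Since the exercise below is flagged as tough, here is one common pattern as a hedge (a sketch, not the official solution; the 0.8 threshold and which member of each pair gets dropped are judgment calls):

```python
corr = X_train.corr().abs()
# keep only the upper triangle so each pair is considered once
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.8).any()]
X_train = X_train.drop(columns=to_drop)
X_test = X_test.drop(columns=to_drop)
```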
###Code
## TO DO (this may be tough- check solution for help)
## identify pairs of features that have a correlation higher than 0.8 or lower than -0.8 from X_train
## remove ONLY ONE of these features from both X_train and X_test
## Hint 1: df.corr() and np.triu() can be helpful here
## Hint 2: you should end up removing 33 features (unless you added extra features than what was given)
###Output
_____no_output_____
###Markdown
**Congrats! Now you have train and test datasets that are ready for Machine Learning modeling!** > **This is the end of the exercises. Below is some very simple ML modeling with the data that we prepared together** --- V. Pick your Models **[Back to top](toc)**
###Code
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, mean_absolute_error
rf = RandomForestRegressor()
gbr = GradientBoostingRegressor()
svr = SVR()
models = [rf, gbr, svr]
###Output
_____no_output_____
###Markdown
VI. Model Selection **[Back to top](toc)**Evaluate your models, and pick the 2-3 best performing ones for tuning.
###Code
results = []
for model in models:
model.fit(X_train, y_train)
y_preds = model.predict(X_test)
mse = mean_squared_error(y_test, y_preds)
mae = mean_absolute_error(y_test, y_preds)
metrics = {}
metrics['model'] = model.__class__.__name__
metrics['mse'] = mse
metrics['mae'] = mae
results.append(metrics)
pd.set_option('display.float_format', lambda x: '%7.2f' % x)
pd.DataFrame(results, index=np.arange(len(results))).round(50)
###Output
_____no_output_____ |
PowerPlantGradientDescentProject.ipynb | ###Markdown
adding features
###Code
temp_col = X.shape[1]
temp_df = pd.DataFrame(X)
for i in range(temp_col):
temp_df[temp_col + i] = temp_df[i] ** 2
extended_x = temp_df.values
# test
temp_col_test = power_test.shape[1]
temp_df_test = pd.DataFrame(power_test)
for i in range(temp_col_test):
temp_df_test[temp_col_test + i] = temp_df_test[i] ** 2
extended_test = temp_df_test.values
temp_df = pd.DataFrame(X)
temp_df[4] = temp_df[0] ** 2
temp_df[5] = temp_df[1] ** 2
temp_df[6] = temp_df[3] ** 2
extended_x = temp_df.values
# test
temp_df_test = pd.DataFrame(power_test)
temp_df_test[4] = temp_df_test[0] ** 2
temp_df_test[5] = temp_df_test[1] ** 2
temp_df_test[6] = temp_df_test[3] ** 2
extended_test = temp_df_test.values
###Output
_____no_output_____
###Markdown
scaling
###Code
from sklearn import preprocessing
scaler = preprocessing.StandardScaler()
scaler.fit(X)
d = scaler.transform(X)
d_test = scaler.transform(power_test)
scaler1 = preprocessing.StandardScaler()
scaler1.fit(extended_x)
de = scaler1.transform(extended_x)
de_test = scaler1.transform(extended_test)
###Output
_____no_output_____
###Markdown
adding col of 1's
###Code
last_col = X.shape[1]
no_rows = Y.shape[0]
no_rows_test = power_test.shape[0]
add_c = np.ones((no_rows,1))
add_c_test = np.ones((no_rows_test,1))
d = np.concatenate((d,add_c), axis = 1)
d_test = np.concatenate((d_test,add_c_test), axis = 1)
#de = np.concatenate((de,add_c), axis = 1)
#de_test = np.concatenate((de_test,add_c_test), axis = 1)
print(d.shape)
print(d_test.shape)
#print(de.shape)
#print(de_test.shape)
###Output
(7176, 5)
(2392, 5)
###Markdown
algo
###Code
def step_gradient(points, op, learning_rate, m):
N = points.shape[1]
m_slope = np.zeros(N)
M = len(points)
for i in range(M):
x = points[i,:]
y = op[i]
for j in range(N):
temp = (-2/M) * ((y - (m*x).sum())*x[j])
m_slope[j] = temp + m_slope[j]
new_m = m - learning_rate*m_slope
#print(new_m, new_c)
return new_m
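# Alternative sketch: the same gradient step computed with matrix operations instead of
# the double Python loop (assumes points is a 2-D numpy array and op a 1-D array, as above).
def step_gradient_vectorized(points, op, learning_rate, m):
    M = len(points)
    residuals = op - points @ m                     # shape (M,)
    m_slope = (-2 / M) * (points.T @ residuals)     # shape (N,)
    return m - learning_rate * m_slope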
def gd(points, op, learning_rate, num_iterations):
N = points.shape[1]
m = np.zeros(N)
for i in range(num_iterations):
m = step_gradient(points, op, learning_rate, m)
#print(i, "cost: ", cost(points, m, c))
return m
def run(s,l):
learning_rate = 0.04
num_iteration = 700
m = gd(s, l, learning_rate, num_iteration)
#print(m)
return m
###Output
_____no_output_____
###Markdown
splitting
###Code
from sklearn import model_selection
x_train, x_test, y_train, y_test = model_selection.train_test_split(d, Y)
###Output
_____no_output_____
###Markdown
main
###Code
final_m = run(x_train, y_train)
#final_m = run(d, Y)
final_m
###Output
_____no_output_____
###Markdown
predicting values for test
###Code
list1 = list()
for i in range(len(x_test)):
x = x_test[i,:]
y_pred = (final_m*x).sum()
list1.append(y_pred)
list2 = np.array(list1)
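# Optional sanity check (a sketch): coefficient of determination R^2 of these
# predictions against the held-out y_test.
ss_res = ((y_test - list2) ** 2).sum()
ss_tot = ((y_test - y_test.mean()) ** 2).sum()
print("R^2 on the held-out split:", 1 - ss_res / ss_tot)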
###Output
_____no_output_____
###Markdown
predicting on final_test.csv without extended features
###Code
list_test = list()
for i in range(len(d_test)):
x = d_test[i,:]
y_pred = (final_m*x).sum()
list_test.append(y_pred)
list_final = np.array(list_test)
###Output
_____no_output_____
###Markdown
predicting on final_test.csv with extended features
###Code
list_test = list()
for i in range(len(de_test)):
x = de_test[i,:]
y_pred = (final_m*x).sum()
list_test.append(y_pred)
list_final = np.array(list_test)
import matplotlib.pyplot as plt
#kl = kl.reshape(379)
plt.scatter(list2,y_test)
plt.show()
list_final.shape
np.savetxt("power_gradientdescent_project_predicted_values5.csv", list_final, delimiter = ",", fmt = '%.5f')
###Output
_____no_output_____ |
docs/archive/documentation-OLD/02 - Advanced seamless.ipynb | ###Markdown
Saving and loading seamless contexts
###Code
#Download basic example context
import urllib.request
url = "https://raw.githubusercontent.com/sjdv1982/seamless/master/examples/basic.seamless"
urllib.request.urlretrieve(url, filename = "basic.seamless")
import seamless
from seamless import cell, pythoncell, reactor, transformer
ctx = seamless.fromfile("basic.seamless")
await ctx.computation()
ctx.tofile("basic-copy.seamless", backup=False)
###Output
_____no_output_____
###Markdown
Registrars In the basic example, the code for the fibonacci function is defined in-line within the transformer. For a larger project that uses *fibonacci* in multiple places, you should define it separately. The standard way is to put it in a module and import it:
###Code
fib_module = open("fib.py", "w")
fib_module.write("""
def fibonacci(n):
def fib(n):
if n <= 1:
return [1]
elif n == 2:
return [1, 1]
else:
fib0 = fib(n-1)
return fib0 + [ fib0[-1] + fib0[-2] ]
fib0 = fib(n)
return fib0[-1]
""")
fib_module.close()
ctx.formula.set("""
from fib import fibonacci # Bad!
return fibonacci(a) + fibonacci(b)
""")
###Output
_____no_output_____
###Markdown
But if we do this, we immediately lose live feedback. There is no way that seamless can guess that a change in fib.py should trigger a re-execution of ctx.formula's transformer. Even if you manually force a re-execution, with `ctx.formula.touch()`, this will not change anything: the `fib` module has already been imported by Python. Python's import mechanism is rather hostile to live code changes, and it is difficult to reload any kind of module. While possible to force manually (e.g. using %autoreload), it does not always work. Anyway, all this manual forcing is against the spirit of seamless.**With seamless, only use `import` for external libraries. Avoid importing any project code.** Instead of Python imports, seamless has a different mechanism: **registrars**. First, let's link fib.py to a cell:
###Code
from seamless.lib import link, edit
ctx.fib = pythoncell()
ctx.link_fib = link(ctx.fib, ".", "fib.py") #Loads the cell from the existing fib.py
ctx.ed_fib = edit(ctx.fib, "Fib module")
###Output
_____no_output_____
###Markdown
Then, we will register the fib cell with the Python registrar, and connect the fibonacci Python function object from the Python registrar to the transformer. This will re-establish live feedback: whenever fib.py gets changed, the transformer will execute with the new code.
###Code
rpy = ctx.registrar.python
rpy.register(ctx.fib)
rpy.connect("fibonacci", ctx.transform)
ctx.formula.set("return fibonacci(a) + fibonacci(b)")
###Output
_____no_output_____
###Markdown
For the next section, we will build a new context. You can destroy a context cleanly with `context.destroy()`. (Just re-defining ctx should also work, but not inside the Jupyter Notebook.)
###Code
ctx.destroy()
###Output
_____no_output_____
###Markdown
Array cells
###Code
import seamless
from seamless import cell, reactor, transformer
ctx = seamless.context()
ctx.x = cell("array")
ctx.y = cell("array")
import numpy as np
arr = np.linspace(0, 100, 200)
ctx.x.set(arr)
ctx.x.value[:10]
arr2 = -0.5 * arr**2 + 32 * arr - 12
ctx.y.set(arr2)
import bqplot
from bqplot import pyplot as plt
fig = plt.figure()
plt.plot(ctx.x.value, ctx.y.value)
plt.show()
###Output
_____no_output_____
###Markdown
**Warning** While cell.value, inputpins and editpins return numpy arrays, seamless assumes that you don't modify them in-place. Preliminary transformer results Let's pretend that `ctx.computation` performs some complicated scientific computation:
###Code
t = ctx.computation = transformer({
"amplitude": {"pin": "input", "dtype": "float"},
"frequency": {"pin": "input", "dtype": "float"},
"gravity": {"pin": "input", "dtype": "float"},
"temperature": {"pin": "input", "dtype": "float"},
"mutation_rate": {"pin": "input", "dtype": "float"},
"x": {"pin": "input", "dtype": "array"},
"y": {"pin": "output", "dtype": "array"},
})
ctx.amplitude = cell("float").set(4)
ctx.amplitude.connect(t.amplitude)
ctx.frequency = cell("float").set(21)
ctx.frequency.connect(t.frequency)
ctx.gravity = cell("float").set(9.8)
ctx.gravity.connect(t.gravity)
ctx.temperature = cell("float").set(298)
ctx.temperature.connect(t.temperature)
ctx.mutation_rate = cell("float").set(42)
ctx.mutation_rate.connect(t.mutation_rate)
ctx.x.connect(t.x)
t.y.connect(ctx.y)
ctx.computation.code.cell().set("""
import numpy as np
import time
y = np.sin(x/100 * frequency) * amplitude
for n in range(1, 20):
pos = int(n/20*len(y))
return_preliminary(y[:pos])
time.sleep(1)
return y
""")
ctx.computation.code.cell().touch()
###Output
_____no_output_____
###Markdown
Run the cell above, then repeatedly run the cell below
###Code
v = len(ctx.y.value)
print(v)
plt.clear()
plt.plot(ctx.x.value[:v], ctx.y.value)
plt.xlim(0,100)
plt.show()
###Output
200
###Markdown
Simple macros Now let's assume that in the example above, we forgot a parameter "radius". To implement it, we would have to re-declare the transformer with the extra input pin, re-declare the connections, and re-define the code cells. This is very annoying, and it is easy to make a mistake! However, `transformer` and `reactor` are macros, which means that they accept cells as input. So we can declare the computation parameters as a cell, and when we want to modify them, we just modify the cell. Below is a refactored version:
###Code
ctx.computation_params = cell("json").set({
"amplitude": {"pin": "input", "dtype": "float"},
"frequency": {"pin": "input", "dtype": "float"},
"gravity": {"pin": "input", "dtype": "float"},
"temperature": {"pin": "input", "dtype": "float"},
"mutation_rate": {"pin": "input", "dtype": "float"},
"x": {"pin": "input", "dtype": "array"},
"y": {"pin": "output", "dtype": "array"},
})
t = ctx.computation = transformer(ctx.computation_params)
###Output
_____no_output_____
###Markdown
and then the same as before...
###Code
ctx.amplitude = cell("float").set(4)
ctx.amplitude.connect(t.amplitude)
ctx.frequency = cell("float").set(21)
ctx.frequency.connect(t.frequency)
ctx.gravity = cell("float").set(9.8)
ctx.gravity.connect(t.gravity)
ctx.temperature = cell("float").set(298)
ctx.temperature.connect(t.temperature)
ctx.mutation_rate = cell("float").set(42)
ctx.mutation_rate.connect(t.mutation_rate)
ctx.x.connect(t.x)
t.y.connect(ctx.y)
ctx.computation.code.cell().set("""
import numpy as np
import time
y = np.sin(x/100 * frequency) * amplitude
for n in range(1, 20):
pos = int(n/20*len(y))
return_preliminary(y[:pos])
time.sleep(1)
return y
""")
ctx.computation.code.cell().touch()
###Output
_____no_output_____
###Markdown
Again, to see the plot, run the cell above, then repeatedly run the cell below
###Code
v = len(ctx.y.value)
print(v)
plt.clear()
plt.plot(ctx.x.value[:v], ctx.y.value)
plt.xlim(0,100)
plt.show()
###Output
200
###Markdown
Now we can add a parameter, and seamless will re-connect everything.
###Code
d = ctx.computation_params.value
d["radius"] = {"pin": "input", "dtype": "float"}
ctx.computation_params.set(d)
ctx.radius = cell("float").set(10)
ctx.radius.connect(ctx.computation.radius)
###Output
_____no_output_____
###Markdown
This will restart the computation. If you like, you can now modify the code of `ctx.computation.code.cell()` to take the value of *radius* into account. `ctx.computation_params` is a JSON cell. It can be linked to a file on disk and then edited there like any other cell:
###Code
from seamless.lib import link
ctx.link1 = link(ctx.computation_params, ".", "computation_params.json")
###Output
_____no_output_____
###Markdown
Now, whenever you modify "computation_params.json", the transformer macro will be re-executed. However, JSON is very unforgiving when it comes to commas and braces. Therefore, it is recommended that you declare `ctx.computation_params` as `cell("cson")` instead. In seamless, JSON and CSON have a special relationship: you can provide a CSON cell whenever a JSON cell is expected, and seamless will make the conversion implicitly.
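As a minimal sketch of that recommendation (assuming a `computation_params.cson` file exists on disk; the rest of the setup stays the same, since the CSON cell is accepted wherever a JSON cell is expected):

```python
ctx.computation_params = cell("cson")
ctx.link1 = link(ctx.computation_params, ".", "computation_params.cson")
```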
###Code
from seamless import macro
@macro("json")
def create_computation(ctx, params):
from seamless import transformer, cell, pythoncell
ctx.computation = transformer(params)
ctx.computation_code = pythoncell().set("""
import numpy as np
import time
y = np.sin(x/100 * frequency) * amplitude
for n in range(1, 20):
pos = int(n/20*len(y))
return_preliminary(y[:pos])
time.sleep(1)
return y
""")
ctx.computation_code.connect(ctx.computation.code)
ctx.export(ctx.computation) #creates a pin on ctx for every unconnected pin on ctx.computation
ctx.computation = create_computation(ctx.computation_params)
###Output
_____no_output_____
###Markdown
Let's add a little convenience function to reconnect the computation pins:
###Code
def connect_computation(t):
ctx.amplitude = cell("float").set(4)
ctx.amplitude.connect(t.amplitude)
ctx.frequency = cell("float").set(21)
ctx.frequency.connect(t.frequency)
ctx.gravity = cell("float").set(9.8)
ctx.gravity.connect(t.gravity)
ctx.temperature = cell("float").set(298)
ctx.temperature.connect(t.temperature)
ctx.mutation_rate = cell("float").set(42)
ctx.mutation_rate.connect(t.mutation_rate)
ctx.radius = cell("float").set(10)
ctx.radius.connect(ctx.computation.radius)
ctx.x.connect(t.x)
t.y.connect(ctx.y)
connect_computation(ctx.computation)
###Output
_____no_output_____
###Markdown
... and plot the results
###Code
v = len(ctx.y.value)
print(v)
plt.clear()
plt.plot(ctx.x.value[:v], ctx.y.value)
plt.xlim(0,100)
plt.show()
###Output
200
###Markdown
The source code of the macro is added to the context, and it will be saved when the context is saved. Whenever `ctx.computation_params` changes, it will be re-executed. In the next version of seamless, you will be able to edit the macro source code inside a cell. But for now, we just have to re-define it. Let's assume that our scientific computation consists of two parts: a slow computation that depends only on *amplitude* and *frequency*, and a fast analysis of the result that depends on everything else. Using a macro, we can split the computation, and optionally omit the analysis.
###Code
@macro({"params": "json", "run_analysis": "bool"})
def create_computation(ctx, params, run_analysis):
from seamless import transformer, cell, pythoncell
from seamless.core.worker import ExportedOutputPin
# Slow computation
params_computation = {
"amplitude": {"pin": "input", "dtype": "float"},
"frequency": {"pin": "input", "dtype": "float"},
"x": {"pin": "input", "dtype": "array"},
"y": {"pin": "output", "dtype": "array"},
}
ctx.computation = transformer(params_computation)
ctx.computation_code = pythoncell().set("""
import numpy as np
import time
print("start slow computation")
y = np.sin(x/100 * frequency) * amplitude
for n in range(1, 5):
pos = int(n/5*len(y))
return_preliminary(y[:pos])
time.sleep(1)
return y
""")
ctx.computation_code.connect(ctx.computation.code)
ctx.computation_result = cell("array")
ctx.computation.y.connect(ctx.computation_result)
# Fast analysis
params2 = params.copy()
for k in params_computation:
if k not in ("x", "y"):
params2.pop(k, None)
ctx.analysis = transformer(params2)
ctx.analysis_code = pythoncell().set("print('start analysis'); return x")
ctx.analysis_code.connect(ctx.analysis.code)
# Final result
ctx.result = cell("array")
if run_analysis:
ctx.computation_result.connect(ctx.analysis.x)
ctx.analysis.y.connect(ctx.result)
else:
ctx.computation_result.connect(ctx.result)
ctx.y = ExportedOutputPin(ctx.result)
ctx.export(ctx.computation, skipped=["y"])
ctx.export(ctx.analysis, skipped=["x","y"])
ctx.run_analysis = cell("bool").set(True)
ctx.computation = create_computation(
params=ctx.computation_params,
run_analysis=ctx.run_analysis
)
connect_computation(ctx.computation)
###Output
_____no_output_____
###Markdown
As you see, the slow computation starts immediately. Every second, for five seconds, the computation returns the results so far. The results are forwarded to the analysis (which, in this dummy example, does nothing).Now, if we change the *radius* parameter (or *gravity*, or *temperature*, or *mutation_rate*), the analysis will be re-executed, but not the slow computation
###Code
ctx.radius.set(2)
###Output
start slow computation
###Markdown
On the other hand, changing *amplitude* or *frequency* re-launches the entire computation
###Code
ctx.amplitude.set(21)
###Output
start analysis
###Markdown
We can toggle *run_analysis* on and off, and the macro will re-build the computation context
###Code
ctx.run_analysis.set(False)
ctx.run_analysis.set(True)
###Output
start slow computation
Macro object re-computation Seamless cell: .run_analysis run_analysis .computation
DONE DESTROY
CONNECTION: mode 'input', source Seamless cell: .gravity, dest ('analysis', 'gravity')
CONNECTION: mode 'input', source Seamless cell: .mutation_rate, dest ('analysis', 'mutation_rate')
CONNECTION: mode 'input', source Seamless cell: .radius, dest ('analysis', 'radius')
CONNECTION: mode 'input', source Seamless cell: .temperature, dest ('analysis', 'temperature')
CONNECTION: mode 'input', source Seamless cell: .frequency, dest ('computation', 'frequency')
CONNECTION: mode 'input', source Seamless cell: .x, dest ('computation', 'x')
CONNECTION: mode 'input', source Seamless cell: .amplitude, dest ('computation', 'amplitude')
CONNECTION: mode 'alias', source ('result',), dest Seamless cell: .y
###Markdown
Unfortunately, re-building the computation context also re-launches the slow computation. However, seamless has (experimental!) caching for macros, which does not re-execute transformers whose inputs have not changed. It can be enabled with the *with_caching* parameter:
###Code
@macro({"params": "json", "run_analysis": "bool"}, with_caching = True)
def create_computation(ctx, params, run_analysis):
# For the rest of the cell, as before ....
# ...
# ...
from seamless import transformer, cell, pythoncell
from seamless.core.worker import ExportedOutputPin
# Slow computation
params_computation = {
"amplitude": {"pin": "input", "dtype": "float"},
"frequency": {"pin": "input", "dtype": "float"},
"x": {"pin": "input", "dtype": "array"},
"y": {"pin": "output", "dtype": "array"},
}
ctx.computation = transformer(params_computation)
ctx.computation_code = pythoncell().set("""
import numpy as np
import time
print("start slow computation")
y = np.sin(x/100 * frequency) * amplitude
for n in range(1, 5):
pos = int(n/5*len(y))
return_preliminary(y[:pos])
time.sleep(1)
return y
""")
ctx.computation_code.connect(ctx.computation.code)
ctx.computation_result = cell("array")
ctx.computation.y.connect(ctx.computation_result)
# Fast analysis
params2 = params.copy()
for k in params_computation:
if k not in ("x", "y"):
params2.pop(k, None)
ctx.analysis = transformer(params2)
ctx.analysis_code = pythoncell().set("print('start analysis'); return x")
ctx.analysis_code.connect(ctx.analysis.code)
# Final result
ctx.result = cell("array")
if run_analysis:
ctx.computation_result.connect(ctx.analysis.x)
ctx.analysis.y.connect(ctx.result)
else:
ctx.computation_result.connect(ctx.result)
ctx.y = ExportedOutputPin(ctx.result)
ctx.export(ctx.computation, skipped=["y"])
ctx.export(ctx.analysis, skipped=["x","y"])
ctx.run_analysis = cell("bool").set(True)
ctx.computation = create_computation(
params=ctx.computation_params,
run_analysis=ctx.run_analysis
)
connect_computation(ctx.computation)
await ctx.computation()
###Output
start slow computation
start slow computation
start analysis
Waiting for: ['.computation.computation']
start analysis
start analysis
start analysis
start analysis
###Markdown
Now, when we toggle `run_analysis`, it will no longer re-run the computation
###Code
ctx.run_analysis.set(False)
await ctx.computation()
ctx.run_analysis.set(True)
await ctx.computation()
###Output
Macro object re-computation Seamless cell: .run_analysis run_analysis .computation
DONE DESTROY
CONNECTION: mode 'input', source Seamless cell: .gravity, dest ('analysis', 'gravity')
CONNECTION: mode 'input', source Seamless cell: .mutation_rate, dest ('analysis', 'mutation_rate')
CONNECTION: mode 'input', source Seamless cell: .radius, dest ('analysis', 'radius')
CONNECTION: mode 'input', source Seamless cell: .temperature, dest ('analysis', 'temperature')
CONNECTION: mode 'input', source Seamless cell: .frequency, dest ('computation', 'frequency')
CONNECTION: mode 'input', source Seamless cell: .x, dest ('computation', 'x')
CONNECTION: mode 'input', source Seamless cell: .amplitude, dest ('computation', 'amplitude')
CONNECTION: mode 'alias', source ('result',), dest Seamless cell: .y
start analysis
###Markdown
Creating an interactive dashboard

Jupyter has its own widget library for IPython kernels, called `ipywidgets`. In turn, several other visualization libraries, e.g. `bqplot`, are built upon `ipywidgets`. `ipywidgets` uses `traitlets` to perform data synchronization. Below is a code snippet that uses `seamless.observer` and `traitlets.observe` to link a seamless cell to a traitlet (it will be included in the next seamless release).
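As a minimal, self-contained illustration of the `traitlets` observer mechanism that the snippet below relies on (the `Slider` class here is made up for this example and is not part of seamless or ipywidgets):

```python
import traitlets

class Slider(traitlets.HasTraits):
    # a single observable trait
    value = traitlets.Float(0.0)

def on_change(change):
    # `change` is a dict with (among others) the keys 'old' and 'new'
    print("value changed from", change["old"], "to", change["new"])

s = Slider()
s.observe(on_change, names=["value"])
s.value = 3.14  # triggers on_change
```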
###Code
import traitlets
from collections import namedtuple
import traceback
def traitlink(c, t):
    # Link a seamless cell `c` to a traitlet: `t` is a (HasTraits object, trait name) tuple.
    assert isinstance(c, seamless.core.Cell)
    assert isinstance(t, tuple) and len(t) == 2
    assert isinstance(t[0], traitlets.HasTraits)
    assert t[0].has_trait(t[1])
    handler = lambda d: c.set(d["new"])  # traitlet -> cell
    value = c.value
    if value is not None:
        setattr(t[0], t[1], value)       # initialize the traitlet from the cell
    else:
        c.set(getattr(t[0], t[1]))       # initialize the cell from the traitlet
    def set_traitlet(value):             # cell -> traitlet
        try:
            setattr(t[0], t[1], value)
        except:
            traceback.print_exc()
    t[0].observe(handler, names=[t[1]])
    obs = seamless.observer(c, set_traitlet)
    result = namedtuple('Traitlink', ["unobserve"])
    def unobserve():
        # Break the link in both directions
        nonlocal obs
        t[0].unobserve(handler)
        del obs
    result.unobserve = unobserve
    return result
###Output
_____no_output_____
###Markdown
With this, we can create a nice little interactive dashboard for our scientific protocol:
###Code
# Clean up any old traitlinks, created by repeated execution of this cell
try:
for t in traitlinks:
t.unobserve()
except NameError:
pass
from IPython.display import display
from ipywidgets import Checkbox, FloatSlider
w_amp = FloatSlider(description = "Amplitude")
w_freq = FloatSlider(description = "Frequency")
w_ana = Checkbox(description="Run analysis")
traitlinks = [] # You need to hang on to the object returned by traitlink
traitlinks.append( traitlink(ctx.amplitude, (w_amp, "value")) )
traitlinks.append( traitlink(ctx.frequency, (w_freq, "value")) )
traitlinks.append( traitlink(ctx.run_analysis, (w_ana, "value")) )
import bqplot
from bqplot import pyplot as plt
fig = plt.figure()
plt.plot(np.zeros(1), np.zeros(1))
plt.xlim(0,100)
plt.ylim(-100,100)
traitlinks.append( traitlink(ctx.x, (fig.marks[0], "x")) )
traitlinks.append( traitlink(ctx.y, (fig.marks[0], "y")) )
display(w_amp)
display(w_freq)
display(w_ana)
display(fig)
ctx.run_analysis.set(False)
await ctx.computation()
###Output
_____no_output_____ |
examples/reference/panes/DeckGL.ipynb | ###Markdown
[Deck.gl](https://deck.gl//) is a very powerful WebGL-powered framework for visual exploratory data analysis of large datasets. The `DeckGL` *pane* renders Deck.gl JSON specifications as well as `PyDeck` plots inside a panel. If data is encoded in the deck.gl layers, the pane will extract the data and send it across the websocket in a binary format, speeding up rendering.

The [`PyDeck`](https://deckgl.readthedocs.io/en/latest/) *package* provides Python bindings. Please follow the [installation instructions](https://github.com/uber/deck.gl/blob/master/bindings/pydeck/README.md) closely to get it working in this Jupyter Notebook.

Parameters:

For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).

* **``mapbox_api_key``** (string): The Mapbox API key, if not supplied by a PyDeck object.
* **``object``** (object, dict or string): The Deck.gl JSON or PyDeck object being displayed.
* **``tooltips``** (bool or dict, default=True): Whether to enable tooltips, or custom tooltip formatters.

In addition to parameters which control how the object is displayed, the DeckGL pane also exposes a number of parameters which receive updates from the plot:

* **``click_state``** (dict): Contains the last click event on the DeckGL plot.
* **``hover_state``** (dict): Contains information about the current hover location on the DeckGL plot.
* **``view_state``** (dict): Contains information about the current view port of the DeckGL plot.

____

In order to use Deck.gl you need a Mapbox key, which you can acquire for free for limited use at [mapbox.com](https://account.mapbox.com/access-tokens/).

Now we can define a JSON spec and pass it to the DeckGL pane along with the Mapbox key:
###Code
MAPBOX_KEY = "pk.eyJ1IjoicGFuZWxvcmciLCJhIjoiY2s1enA3ejhyMWhmZjNobjM1NXhtbWRrMyJ9.B_frQsAVepGIe-HiOJeqvQ"
json_spec = {
"initialViewState": {
"bearing": -27.36,
"latitude": 52.2323,
"longitude": -1.415,
"maxZoom": 15,
"minZoom": 5,
"pitch": 40.5,
"zoom": 6
},
"layers": [{
"@@type": "HexagonLayer",
"autoHighlight": True,
"coverage": 1,
"data": "https://raw.githubusercontent.com/uber-common/deck.gl-data/master/examples/3d-heatmap/heatmap-data.csv",
"elevationRange": [0, 3000],
"elevationScale": 50,
"extruded": True,
"getPosition": "@@=[lng, lat]",
"id": "8a553b25-ef3a-489c-bbe2-e102d18a3211",
"pickable": True
}],
"mapStyle": "mapbox://styles/mapbox/dark-v9",
"views": [
{"@@type": "MapView", "controller": True}
]
}
deck_gl = pn.pane.DeckGL(json_spec, mapbox_api_key=MAPBOX_KEY, sizing_mode='stretch_width', height=600)
deck_gl
###Output
_____no_output_____
###Markdown
Like other panes the DeckGL object can be replaced or updated. In this example we will change the `colorRange` of the HexagonLayer and then trigger an update:
###Code
COLOR_RANGE = [
[1, 152, 189],
[73, 227, 206],
[216, 254, 181],
[254, 237, 177],
[254, 173, 84],
[209, 55, 78]
]
json_spec['layers'][0]['colorRange'] = COLOR_RANGE
deck_gl.param.trigger('object')
###Output
_____no_output_____
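Besides mutating the existing dictionary and calling `param.trigger`, you can also assign a completely new specification to the pane's `object` parameter, which re-renders the plot. A minimal sketch (the alternative map style is only an example):

```python
# Assigning to `object` replaces the displayed specification outright
new_spec = dict(json_spec, mapStyle="mapbox://styles/mapbox/light-v9")
deck_gl.object = new_spec
```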
###Markdown
Tooltips

By default tooltips can be disabled and enabled by setting `tooltips=True/False`. For more customization it is possible to pass in a dictionary defining the formatting. Let us start by declaring a plot with two layers:
###Code
DATA_URL = 'https://raw.githubusercontent.com/uber-common/deck.gl-data/master/examples/geojson/vancouver-blocks.json'
LAND_COVER = [[[-123.0, 49.196], [-123.0, 49.324], [-123.306, 49.324], [-123.306, 49.196]]]
json_spec = {
"initialViewState": {
'latitude': 49.254,
'longitude': -123.13,
'zoom': 11,
'maxZoom': 16,
'pitch': 45,
'bearing': 0
},
"layers": [{
'@@type': 'GeoJsonLayer',
'id': 'geojson',
'data': DATA_URL,
'opacity': 0.8,
'stroked': True,
'filled': True,
'extruded': True,
'wireframe': True,
'fp64': True,
'getLineColor': [255, 255, 255],
'getElevation': "@@=properties.valuePerSqm / 20",
'getFillColor': "@@=[255, 255, properties.growth * 255]",
'pickable': True,
}, {
'@@type': 'PolygonLayer',
'id': 'landcover',
'data': LAND_COVER,
'stroked': True,
'pickable': True,
# processes the data as a flat longitude-latitude pair
'getPolygon': '@@=-',
'getFillColor': [0, 0, 0, 20]
}],
"mapStyle": "mapbox://styles/mapbox/dark-v9",
"views": [
{"@@type": "MapView", "controller": True}
]
}
###Output
_____no_output_____
###Markdown
We have explicitly given these layers the `id`s `'landcover'` and `'geojson'`. Ordinarily we wouldn't enable the `pickable` property on the `'landcover'` polygon, and if we only have a single `pickable` layer it is sufficient to declare a tooltip like this:
###Code
geojson_tooltip = {
"html": """
<b>Value per Square meter:</b> {properties.valuePerSqm}<br>
<b>Growth:</b> {properties.growth}
""",
"style": {
"backgroundColor": "steelblue",
"color": "white"
}
}
###Output
_____no_output_____
###Markdown
Here we created an HTML template which is populated by the `properties` in the GeoJSON and then has the `style` applied. In general the dictionary may contain:

- `html` - Set the innerHTML of the tooltip.
- `text` - Set the innerText of the tooltip.
- `style` - A dictionary of CSS styles that will modify the default style of the tooltip.

If we have multiple pickable layers we can declare distinct tooltips by nesting the tooltips dictionary, indexed by the layer `id` or the index of the layer in the list of layers (note that the dictionary must be either integer indexed or string indexed, not both).
###Code
tooltip = {
"geojson": geojson_tooltip,
"landcover": {
"html": "The background",
"style": {
"backgroundColor": "red",
"color": "white"
}
}
}
pn.pane.DeckGL(json_spec, tooltips=tooltip, mapbox_api_key=MAPBOX_KEY, sizing_mode='stretch_width', height=600)
###Output
_____no_output_____
###Markdown
When hovering on the area around Vancouver you should now see a tooltip saying `'The background'` colored red, while the hover tooltip should show information about each property when hovering over one of the property polygons.

PyDeck

Instead of writing out raw JSON-like dictionaries, the `DeckGL` pane may also be given a PyDeck object to render:
###Code
import pydeck
DATA_URL = "https://raw.githubusercontent.com/uber-common/deck.gl-data/master/examples/geojson/vancouver-blocks.json"
LAND_COVER = [[[-123.0, 49.196], [-123.0, 49.324], [-123.306, 49.324], [-123.306, 49.196]]]
INITIAL_VIEW_STATE = pydeck.ViewState(
latitude=49.254,
longitude=-123.13,
zoom=11,
max_zoom=16,
pitch=45,
bearing=0
)
polygon = pydeck.Layer(
'PolygonLayer',
LAND_COVER,
stroked=False,
# processes the data as a flat longitude-latitude pair
get_polygon='-',
get_fill_color=[0, 0, 0, 20]
)
geojson = pydeck.Layer(
'GeoJsonLayer',
DATA_URL,
opacity=0.8,
stroked=False,
filled=True,
extruded=True,
wireframe=True,
get_elevation='properties.valuePerSqm / 20',
get_fill_color='[255, 255, properties.growth * 255]',
get_line_color=[255, 255, 255],
pickable=True
)
r = pydeck.Deck(
api_keys={'mapbox': MAPBOX_KEY},
layers=[polygon, geojson],
initial_view_state=INITIAL_VIEW_STATE
)
# Tooltip (you can get the id directly from the layer object)
tooltips = {geojson.id: geojson_tooltip}
pn.pane.DeckGL(r, sizing_mode='stretch_width', tooltips=tooltips, height=600)
###Output
_____no_output_____
###Markdown
Controls

The `DeckGL` pane exposes a number of options which can be changed from both Python and Javascript. Try out the effect of these parameters interactively:
###Code
pn.Row(deck_gl.controls(), deck_gl)
###Output
_____no_output_____
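The `click_state`, `hover_state` and `view_state` parameters listed at the top of this page update as you interact with the plot. As a minimal sketch (the callback name is made up for illustration), assuming the `deck_gl` pane defined above, you can react to viewport changes from Python like this:

```python
def on_view_change(event):
    # event.new holds the current viewport as a dict
    # (longitude, latitude, zoom, pitch, bearing, ...)
    print("new view state:", event.new)

# Register a watcher on the pane's view_state parameter
watcher = deck_gl.param.watch(on_view_change, 'view_state')

# Remove the watcher again when it is no longer needed
# deck_gl.param.unwatch(watcher)
```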
###Markdown
[Deck.gl](https://deck.gl//) is a very powerful WebGL-powered framework for visual exploratory data analysis of large datasets. The `DeckGL` *pane* renders JSON Deck.gl JSON specification as well as `PyDeck` plots inside a panel. If data is encoded in the deck.gl layers the pane will extract the data and send it across the websocket in a binary format speeding up rendering.The [`PyDeck`](https://deckgl.readthedocs.io/en/latest/) *package* provides Python bindings. Please follow the [installation instructions](https://github.com/uber/deck.gl/blob/master/bindings/pydeck/README.md) closely to get it working in this Jupyter Notebook. Parameters:For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).* **``mapbox_api_key``** (string): The MapBox API key if not supplied by a PyDeck object.* **``object``** (object, dict or string): The deck.GL JSON or PyDeck object being displayed* **``tooltips``** (boolean, default=True): Whether to enable tooltipsIn addition to parameters which control how the object is displayed the DeckGL pane also exposes a number of parameters which receive updates from the plot:* **``click_state``** (dict): Contains the last click event on the DeckGL plot.* **``hover_state``** (dict): Contains information about the current hover location on the DeckGL plot.* **``view_state``** (dict): Contains information about the current view port of the DeckGL plot.____ In order to use Deck.gl you need a MAP BOX Key which you can acquire for free for limited use at [mapbox.com](https://account.mapbox.com/access-tokens/).Now we can define a JSON spec and pass it to the DeckGL pane along with the Mapbox key:
###Code
MAPBOX_KEY = "pk.eyJ1IjoicGFuZWxvcmciLCJhIjoiY2s1enA3ejhyMWhmZjNobjM1NXhtbWRrMyJ9.B_frQsAVepGIe-HiOJeqvQ"
json_spec = {
"initialViewState": {
"bearing": -27.36,
"latitude": 52.2323,
"longitude": -1.415,
"maxZoom": 15,
"minZoom": 5,
"pitch": 40.5,
"zoom": 6
},
"layers": [{
"@@type": "HexagonLayer",
"autoHighlight": True,
"coverage": 1,
"data": "https://raw.githubusercontent.com/uber-common/deck.gl-data/master/examples/3d-heatmap/heatmap-data.csv",
"elevationRange": [0, 3000],
"elevationScale": 50,
"extruded": True,
"getPosition": "@@=[lng, lat]",
"id": "8a553b25-ef3a-489c-bbe2-e102d18a3211", "pickable": True
}],
"mapStyle": "mapbox://styles/mapbox/dark-v9",
"views": [{"@@type": "MapView", "controller": True}]
}
deck_gl = pn.pane.DeckGL(json_spec, mapbox_api_key=MAPBOX_KEY, sizing_mode='stretch_width', height=600)
deck_gl
###Output
_____no_output_____
###Markdown
Like other panes the DeckGL object can be replaced or updated. In this example we will change the `colorRange` of the HexagonLayer and then trigger an update:
###Code
COLOR_RANGE = [
[1, 152, 189],
[73, 227, 206],
[216, 254, 181],
[254, 237, 177],
[254, 173, 84],
[209, 55, 78]
]
json_spec['layers'][0]['colorRange'] = COLOR_RANGE
deck_gl.param.trigger('object')
###Output
_____no_output_____
###Markdown
Alternatively the `DeckGL` pane can also be given a PyDeck object to render:
###Code
import pydeck
DATA_URL = "https://raw.githubusercontent.com/uber-common/deck.gl-data/master/examples/geojson/vancouver-blocks.json"
LAND_COVER = [[[-123.0, 49.196], [-123.0, 49.324], [-123.306, 49.324], [-123.306, 49.196]]]
INITIAL_VIEW_STATE = pydeck.ViewState(
latitude=49.254,
longitude=-123.13,
zoom=11,
max_zoom=16,
pitch=45,
bearing=0
)
polygon = pydeck.Layer(
'PolygonLayer',
LAND_COVER,
stroked=False,
# processes the data as a flat longitude-latitude pair
get_polygon='-',
get_fill_color=[0, 0, 0, 20]
)
geojson = pydeck.Layer(
'GeoJsonLayer',
DATA_URL,
opacity=0.8,
stroked=False,
filled=True,
extruded=True,
wireframe=True,
get_elevation='properties.valuePerSqm / 20',
get_fill_color='[255, 255, properties.growth * 255]',
get_line_color=[255, 255, 255],
pickable=True
)
r = pydeck.Deck(
mapbox_key=MAPBOX_KEY,
layers=[polygon, geojson],
initial_view_state=INITIAL_VIEW_STATE
)
pn.pane.DeckGL(r, sizing_mode='stretch_width', height=600)
###Output
_____no_output_____
###Markdown
[Deck.gl](https://deck.gl//) is a very powerful WebGL-powered framework for visual exploratory data analysis of large datasets. The `DeckGL` *pane* renders JSON Deck.gl JSON specification as well as `PyDeck` plots inside a panel. If data is encoded in the deck.gl layers the pane will extract the data and send it across the websocket in a binary format speeding up rendering.The [`PyDeck`](https://deckgl.readthedocs.io/en/latest/) *package* provides Python bindings. Please follow the [installation instructions](https://github.com/uber/deck.gl/blob/master/bindings/pydeck/README.md) closely to get it working in this Jupyter Notebook. Parameters:For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).* **``mapbox_api_key``** (string): The MapBox API key if not supplied by a PyDeck object.* **``object``** (object, dict or string): The deck.GL JSON or PyDeck object being displayed* **``tooltips``** (bool or dict, default=True): Whether to enable tooltips or custom tooltip formattersIn addition to parameters which control how the object is displayed the DeckGL pane also exposes a number of parameters which receive updates from the plot:* **``click_state``** (dict): Contains the last click event on the DeckGL plot.* **``hover_state``** (dict): Contains information about the current hover location on the DeckGL plot.* **``view_state``** (dict): Contains information about the current view port of the DeckGL plot.____ In order to use Deck.gl you need a MAP BOX Key which you can acquire for free for limited use at [mapbox.com](https://account.mapbox.com/access-tokens/).Now we can define a JSON spec and pass it to the DeckGL pane along with the Mapbox key:
###Code
MAPBOX_KEY = "pk.eyJ1IjoicGFuZWxvcmciLCJhIjoiY2s1enA3ejhyMWhmZjNobjM1NXhtbWRrMyJ9.B_frQsAVepGIe-HiOJeqvQ"
json_spec = {
"initialViewState": {
"bearing": -27.36,
"latitude": 52.2323,
"longitude": -1.415,
"maxZoom": 15,
"minZoom": 5,
"pitch": 40.5,
"zoom": 6
},
"layers": [{
"@@type": "HexagonLayer",
"autoHighlight": True,
"coverage": 1,
"data": "https://raw.githubusercontent.com/uber-common/deck.gl-data/master/examples/3d-heatmap/heatmap-data.csv",
"elevationRange": [0, 3000],
"elevationScale": 50,
"extruded": True,
"getPosition": "@@=[lng, lat]",
"id": "8a553b25-ef3a-489c-bbe2-e102d18a3211",
"pickable": True
}],
"mapStyle": "mapbox://styles/mapbox/dark-v9",
"views": [
{"@@type": "MapView", "controller": True}
]
}
deck_gl = pn.pane.DeckGL(json_spec, mapbox_api_key=MAPBOX_KEY, sizing_mode='stretch_width', height=600)
deck_gl
###Output
_____no_output_____
###Markdown
Like other panes the DeckGL object can be replaced or updated. In this example we will change the `colorRange` of the HexagonLayer and then trigger an update:
###Code
COLOR_RANGE = [
[1, 152, 189],
[73, 227, 206],
[216, 254, 181],
[254, 237, 177],
[254, 173, 84],
[209, 55, 78]
]
json_spec['layers'][0]['colorRange'] = COLOR_RANGE
deck_gl.param.trigger('object')
###Output
_____no_output_____
###Markdown
TooltipsBy default tooltips can be disabled and enabled by setting `tooltips=True/False`. For more customization it is possible to pass in a dictionary defining the formatting. Let us start by declaring a plot with two layers:
###Code
DATA_URL = 'https://raw.githubusercontent.com/uber-common/deck.gl-data/master/examples/geojson/vancouver-blocks.json'
LAND_COVER = [[[-123.0, 49.196], [-123.0, 49.324], [-123.306, 49.324], [-123.306, 49.196]]]
json_spec = {
"initialViewState": {
'latitude': 49.254,
'longitude': -123.13,
'zoom': 11,
'maxZoom': 16,
'pitch': 45,
'bearing': 0
},
"layers": [{
'@@type': 'GeoJsonLayer',
'id': 'geojson',
'data': DATA_URL,
'opacity': 0.8,
'stroked': True,
'filled': True,
'extruded': True,
'wireframe': True,
'fp64': True,
'getLineColor': [255, 255, 255],
'getElevation': "@@=properties.valuePerSqm / 20",
'getFillColor': "@@=[255, 255, properties.growth * 255]",
'pickable': True,
}, {
'@@type': 'PolygonLayer',
'id': 'landcover',
'data': LAND_COVER,
'stroked': True,
'pickable': True,
# processes the data as a flat longitude-latitude pair
'getPolygon': '@@=-',
'getFillColor': [0, 0, 0, 20]
}],
"mapStyle": "mapbox://styles/mapbox/dark-v9",
"views": [
{"@@type": "MapView", "controller": True}
]
}
###Output
_____no_output_____
###Markdown
We have explicitly given these layers the `id` `'landcover'` and `'geojson'`. Ordinarily we wouldn't enable `pickable` property on the 'landcover' polygon and if we only have a single `pickable` layer it is sufficient to declare a tooltip like this:
###Code
geojson_tooltip = {
"html": """
<b>Value per Square meter:</b> {properties.valuePerSqm}<br>
<b>Growth:</b> {properties.growth}
""",
"style": {
"backgroundColor": "steelblue",
"color": "white"
}
}
###Output
_____no_output_____
###Markdown
Here we created an HTML template which is populated by the `properties` in the GeoJSON and then has the `style` applied. In general the dictionary may contain:- `html` - Set the innerHTML of the tooltip.- `text` - Set the innerText of the tooltip.- `style` - A dictionary of CSS styles that will modify the default style of the tooltip.If we have multiple pickable layers we can declare distinct tooltips by nesting the tooltips dictionary, indexed by the layer `id` or the index of the layer in the list of layers (note that the dictionary must be either integer indexed or string indexed not both).
###Code
tooltip = {
"geojson": geojson_tooltip,
"landcover": {
"html": "The background",
"style": {
"backgroundColor": "red",
"color": "white"
}
}
}
pn.pane.DeckGL(json_spec, tooltips=tooltip, mapbox_api_key=MAPBOX_KEY, sizing_mode='stretch_width', height=600)
###Output
_____no_output_____
###Markdown
When hovering on the area around Vancouver you should now see a tooltip saying `'The background'` colored red, while the hover tooltip should show information about each property when hovering over one of the property polygons. PyDeckInstead of writing out raw JSON-like dictionaries the `DeckGL` pane may also be given a PyDeck object to render:
###Code
import pydeck
DATA_URL = "https://raw.githubusercontent.com/uber-common/deck.gl-data/master/examples/geojson/vancouver-blocks.json"
LAND_COVER = [[[-123.0, 49.196], [-123.0, 49.324], [-123.306, 49.324], [-123.306, 49.196]]]
INITIAL_VIEW_STATE = pydeck.ViewState(
latitude=49.254,
longitude=-123.13,
zoom=11,
max_zoom=16,
pitch=45,
bearing=0
)
polygon = pydeck.Layer(
'PolygonLayer',
LAND_COVER,
stroked=False,
# processes the data as a flat longitude-latitude pair
get_polygon='-',
get_fill_color=[0, 0, 0, 20]
)
geojson = pydeck.Layer(
'GeoJsonLayer',
DATA_URL,
opacity=0.8,
stroked=False,
filled=True,
extruded=True,
wireframe=True,
get_elevation='properties.valuePerSqm / 20',
get_fill_color='[255, 255, properties.growth * 255]',
get_line_color=[255, 255, 255],
pickable=True
)
r = pydeck.Deck(
mapbox_key=MAPBOX_KEY,
layers=[polygon, geojson],
initial_view_state=INITIAL_VIEW_STATE
)
# Tooltip (you can get the id directly from the layer object)
tooltips = {geojson.id: geojson_tooltip}
pn.pane.DeckGL(r, sizing_mode='stretch_width', tooltips=tooltips, height=600)
###Output
_____no_output_____
###Markdown
ControlsThe `DeckGL` pane exposes a number of options which can be changed from both Python and Javascript. Try out the effect of these parameters interactively:
###Code
pn.Row(deck_gl.controls(), deck_gl)
###Output
_____no_output_____
###Markdown
[Deck.gl](https://deck.gl//) is a very powerful WebGL-powered framework for visual exploratory data analysis of large datasets. The `DeckGL` *pane* renders Deck.gl JSON specifications as well as `PyDeck` plots inside a panel. If data is encoded in the deck.gl layers, the pane will extract the data and send it across the websocket in a binary format, speeding up rendering.

The [`PyDeck`](https://deckgl.readthedocs.io/en/latest/) *package* provides Python bindings. Please follow the [installation instructions](https://github.com/uber/deck.gl/blob/master/bindings/pydeck/README.md) closely to get it working in this Jupyter Notebook.

Parameters:

For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).

* **``mapbox_api_key``** (string): The Mapbox API key, if not supplied by a PyDeck object.
* **``object``** (object, dict or string): The Deck.gl JSON or PyDeck object being displayed.
* **``tooltips``** (bool or dict, default=True): Whether to enable tooltips, or custom tooltip formatters.
* **``throttle``** (dict, default={'view': 200, 'hover': 200}): Throttling timeouts (in milliseconds) for view state and hover events.

In addition to parameters which control how the object is displayed, the DeckGL pane also exposes a number of parameters which receive updates from the plot:

* **``click_state``** (dict): Contains the last click event on the DeckGL plot.
* **``hover_state``** (dict): Contains information about the current hover location on the DeckGL plot.
* **``view_state``** (dict): Contains information about the current view port of the DeckGL plot.

____

In order to use Deck.gl you need a Mapbox key, which you can acquire for free for limited use at [mapbox.com](https://account.mapbox.com/access-tokens/).

Now we can define a JSON spec and pass it to the DeckGL pane along with the Mapbox key:
###Code
MAPBOX_KEY = "pk.eyJ1IjoicGFuZWxvcmciLCJhIjoiY2s1enA3ejhyMWhmZjNobjM1NXhtbWRrMyJ9.B_frQsAVepGIe-HiOJeqvQ"
json_spec = {
"initialViewState": {
"bearing": -27.36,
"latitude": 52.2323,
"longitude": -1.415,
"maxZoom": 15,
"minZoom": 5,
"pitch": 40.5,
"zoom": 6
},
"layers": [{
"@@type": "HexagonLayer",
"autoHighlight": True,
"coverage": 1,
"data": "https://raw.githubusercontent.com/uber-common/deck.gl-data/master/examples/3d-heatmap/heatmap-data.csv",
"elevationRange": [0, 3000],
"elevationScale": 50,
"extruded": True,
"getPosition": "@@=[lng, lat]",
"id": "8a553b25-ef3a-489c-bbe2-e102d18a3211",
"pickable": True
}],
"mapStyle": "mapbox://styles/mapbox/dark-v9",
"views": [
{"@@type": "MapView", "controller": True}
]
}
deck_gl = pn.pane.DeckGL(json_spec, mapbox_api_key=MAPBOX_KEY, sizing_mode='stretch_width', height=600)
deck_gl
###Output
_____no_output_____
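The `throttle` parameter listed above controls how frequently `view_state` and `hover_state` updates are sent from the browser. As a minimal sketch (the timeout values are illustrative), you could relax both rates when constructing the pane:

```python
# Send view-state updates at most every 500 ms and hover-state updates
# at most every 100 ms (the defaults are 200 ms for both).
throttled_deck = pn.pane.DeckGL(
    json_spec,
    mapbox_api_key=MAPBOX_KEY,
    throttle={'view': 500, 'hover': 100},
    sizing_mode='stretch_width',
    height=600,
)
throttled_deck
```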
###Markdown
Like other panes the DeckGL object can be replaced or updated. In this example we will change the `colorRange` of the HexagonLayer and then trigger an update:
###Code
COLOR_RANGE = [
[1, 152, 189],
[73, 227, 206],
[216, 254, 181],
[254, 237, 177],
[254, 173, 84],
[209, 55, 78]
]
json_spec['layers'][0]['colorRange'] = COLOR_RANGE
deck_gl.param.trigger('object')
###Output
_____no_output_____
###Markdown
TooltipsBy default tooltips can be disabled and enabled by setting `tooltips=True/False`. For more customization it is possible to pass in a dictionary defining the formatting. Let us start by declaring a plot with two layers:
###Code
DATA_URL = 'https://raw.githubusercontent.com/uber-common/deck.gl-data/master/examples/geojson/vancouver-blocks.json'
LAND_COVER = [[[-123.0, 49.196], [-123.0, 49.324], [-123.306, 49.324], [-123.306, 49.196]]]
json_spec = {
"initialViewState": {
'latitude': 49.254,
'longitude': -123.13,
'zoom': 11,
'maxZoom': 16,
'pitch': 45,
'bearing': 0
},
"layers": [{
'@@type': 'GeoJsonLayer',
'id': 'geojson',
'data': DATA_URL,
'opacity': 0.8,
'stroked': True,
'filled': True,
'extruded': True,
'wireframe': True,
'fp64': True,
'getLineColor': [255, 255, 255],
'getElevation': "@@=properties.valuePerSqm / 20",
'getFillColor': "@@=[255, 255, properties.growth * 255]",
'pickable': True,
}, {
'@@type': 'PolygonLayer',
'id': 'landcover',
'data': LAND_COVER,
'stroked': True,
'pickable': True,
# processes the data as a flat longitude-latitude pair
'getPolygon': '@@=-',
'getFillColor': [0, 0, 0, 20]
}],
"mapStyle": "mapbox://styles/mapbox/dark-v9",
"views": [
{"@@type": "MapView", "controller": True}
]
}
###Output
_____no_output_____
###Markdown
We have explicitly given these layers the `id` `'landcover'` and `'geojson'`. Ordinarily we wouldn't enable `pickable` property on the 'landcover' polygon and if we only have a single `pickable` layer it is sufficient to declare a tooltip like this:
###Code
geojson_tooltip = {
"html": """
<b>Value per Square meter:</b> {properties.valuePerSqm}<br>
<b>Growth:</b> {properties.growth}
""",
"style": {
"backgroundColor": "steelblue",
"color": "white"
}
}
###Output
_____no_output_____
###Markdown
Here we created an HTML template which is populated by the `properties` in the GeoJSON and then has the `style` applied. In general the dictionary may contain:- `html` - Set the innerHTML of the tooltip.- `text` - Set the innerText of the tooltip.- `style` - A dictionary of CSS styles that will modify the default style of the tooltip.If we have multiple pickable layers we can declare distinct tooltips by nesting the tooltips dictionary, indexed by the layer `id` or the index of the layer in the list of layers (note that the dictionary must be either integer indexed or string indexed not both).
###Code
tooltip = {
"geojson": geojson_tooltip,
"landcover": {
"html": "The background",
"style": {
"backgroundColor": "red",
"color": "white"
}
}
}
pn.pane.DeckGL(json_spec, tooltips=tooltip, mapbox_api_key=MAPBOX_KEY, sizing_mode='stretch_width', height=600)
###Output
_____no_output_____
###Markdown
When hovering on the area around Vancouver you should now see a tooltip saying `'The background'` colored red, while the hover tooltip should show information about each property when hovering over one of the property polygons. PyDeckInstead of writing out raw JSON-like dictionaries the `DeckGL` pane may also be given a PyDeck object to render:
###Code
import pydeck
DATA_URL = "https://raw.githubusercontent.com/uber-common/deck.gl-data/master/examples/geojson/vancouver-blocks.json"
LAND_COVER = [[[-123.0, 49.196], [-123.0, 49.324], [-123.306, 49.324], [-123.306, 49.196]]]
INITIAL_VIEW_STATE = pydeck.ViewState(
latitude=49.254,
longitude=-123.13,
zoom=11,
max_zoom=16,
pitch=45,
bearing=0
)
polygon = pydeck.Layer(
'PolygonLayer',
LAND_COVER,
stroked=False,
# processes the data as a flat longitude-latitude pair
get_polygon='-',
get_fill_color=[0, 0, 0, 20]
)
geojson = pydeck.Layer(
'GeoJsonLayer',
DATA_URL,
opacity=0.8,
stroked=False,
filled=True,
extruded=True,
wireframe=True,
get_elevation='properties.valuePerSqm / 20',
get_fill_color='[255, 255, properties.growth * 255]',
get_line_color=[255, 255, 255],
pickable=True
)
r = pydeck.Deck(
api_keys={'mapbox': MAPBOX_KEY},
layers=[polygon, geojson],
initial_view_state=INITIAL_VIEW_STATE
)
# Tooltip (you can get the id directly from the layer object)
tooltips = {geojson.id: geojson_tooltip}
pn.pane.DeckGL(r, sizing_mode='stretch_width', tooltips=tooltips, height=600)
###Output
_____no_output_____
###Markdown
ControlsThe `DeckGL` pane exposes a number of options which can be changed from both Python and Javascript. Try out the effect of these parameters interactively:
###Code
pn.Row(deck_gl.controls(), deck_gl)
###Output
_____no_output_____ |
docs/_build/jupyter_execute/full_first_comp.ipynb | ###Markdown
Integral Comparison
=======================

Now that we have gained a firm understanding of the PML and of how we can use NGSolve to implement and solve forward passes, we can compare the far field obtained from a first-order approximation with the one obtained from a full far-field transformation. We will use a test set-up very similar to the 'Test Problem', with the only difference being a small $\eta(x)$, as that is required by the derivation.
###Code
%matplotlib notebook
from netgen.geom2d import SplineGeometry
from ngsolve import *
import matplotlib.pyplot as plt
import numpy as np
import math
geo = SplineGeometry()
geo.AddCircle( (0,0), 2.25, leftdomain=3, bc="outerbnd")
geo.AddCircle( (0,0), 1.75, leftdomain=2, rightdomain=3, bc="midbnd")
geo.AddCircle( (0,0), 1, leftdomain=1, rightdomain=2, bc="innerbnd")
geo.SetMaterial(1, "inner")
geo.SetMaterial(2, "mid")
geo.SetMaterial(3, "pmlregion")
mesh = Mesh(geo.GenerateMesh (maxh=0.1))
mesh.Curve(3)
mesh.SetPML(pml.Radial(rad=1.75,alpha=1j,origin=(0,0)), "pmlregion") #Alpha is the strength of the PML.
omega_0 = 20
omega_tilde = 20.3 #-18.5*exp(-(x**2+y**2)) + 20 #70*exp(-(((x*x)*np.log(7/2))+((y*y)*np.log(7/2)))) #Gaussian function for our test Omega.
domain_values = {'inner': omega_tilde, 'mid': omega_0, 'pmlregion': omega_0}
values_list = [domain_values[mat] for mat in mesh.GetMaterials()]
omega = CoefficientFunction(values_list)
fes = H1(mesh, complex=True, order=5)
def forward_pass(theta):
u_in =exp(1j*omega_0*(cos(theta)*x + sin(theta)*y)) #Can use any vector as long as it is on the unit circle.
#Defining our test and solution functions.
u = fes.TrialFunction()
v = fes.TestFunction()
#Defining our LHS of the problem.
a = BilinearForm(fes)
a += grad(u)*grad(v)*dx - omega**2*u*v*dx
a += -omega*1j*u*v * ds("innerbnd")
a.Assemble()
#Defining the RHS of our problem.
f = LinearForm(fes)
f += -u_in * (omega**2 - omega_0**2) * v * dx
f.Assemble()
#Solving our problem.
u_s = GridFunction(fes, name="u")
u_s.vec.data = a.mat.Inverse() * f.vec
u_tot = u_in + u_s
return [u_in, u_s, u_tot]
s = 1*np.pi
calc = forward_pass(s)
u_in = calc[0]
u_s = calc[1]
u_tot = calc[2]
###Output
_____no_output_____
###Markdown
We first derive the integral form. The first steps of this derivation can be seen earlier in this work; we skip to the point where $u^s$ is written as an integral:

$$u^s(y) = \int_{\Omega} G_0(y-x)\,\eta(x)\,e^{i\omega\hat{s}\cdot x}\, dx\tag{3}$$

with $G_0$ being the Green's function (integral kernel) of the free-space Helmholtz operator $L_0$. Here the $o(\eta(x))$ terms have been dropped, since we are considering $\eta(x)$ close to $0$. We can now replace $G_0$ by the fundamental solution of the 2D Helmholtz equation. This is a Hankel function of the first kind [1], whose asymptotic expansion at infinity is of the form:

$$G_0(z) = \dfrac{i}{4} H^{(1)}_0(\omega|z|) = \dfrac{i}{4}\left(\dfrac{2}{\pi \omega |z|}\right)^{\frac{1}{2}} \left(e^{i\omega|z|-i\frac{\pi}{4}}+o(1)\right)$$

Applying this to (3) and combining with equation (1), we can compute $\hat{u}^s$:

$$\hat{u}^s(\hat{r})=\lim_{\rho\to\infty} \sqrt{\rho}\, e^{-i\omega\rho}\, u^s(\rho\hat{r}) \approx \lim_{\rho\to\infty} \sqrt{\rho}\, e^{-i\omega\rho} \int_{\Omega} \dfrac{i}{4} \sqrt{\dfrac{2}{\pi \omega \rho}}\, e^{-i\frac{\pi}{4}}\, e^{i\omega(\rho - \hat{r}\cdot x)}\, \eta(x)\, e^{i\omega\hat{s}\cdot x}\, dx$$

Grouping all the exponential terms and simplifying gives us

$$= \dfrac{i}{4} \sqrt{\dfrac{2}{\pi \omega}} e^{-i\frac{\pi}{4}} \int_{\Omega} e^{-i\omega(\hat{r}-\hat{s})\cdot x} \eta(x)\, dx = \dfrac{e^{\frac{\pi i}{4}}}{\sqrt{8\pi \omega}} \int_{\Omega} e^{-i\omega(\hat{r}-\hat{s})\cdot x} \eta(x)\, dx =: d_1(s,r)$$

with $d_1(s,r)$ denoting the first-order approximation to $d(s,r)$ with respect to $\eta(x)$.

Similarly, to compute the full far-field transformation we use the formula from Chapter 3, equation $(3.64)$, of [1]:

$$u^s_{\infty}(\hat{r}) = \dfrac{e^{\frac{\pi i}{4}}}{\sqrt{8\pi \omega}} \int_{\partial\Omega}\left(u^s(x)\dfrac{\partial e^{-i\omega\hat{r}\cdot x}}{\partial n} - \dfrac{\partial u^s(x)}{\partial n}e^{-i\omega\hat{r}\cdot x}\right)dS(x)$$

[1]: "Inverse Acoustic and Electromagnetic Scattering Theory" by D. Colton and R. Kress.
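As a quick sanity check for the implementation below: since in this test $\eta(x) = \tilde\omega^2 - \omega_0^2$ is constant on the unit disc and zero outside, the first-order integral can also be evaluated in closed form using the standard Fourier transform of the indicator of the unit disc ($J_1$ denotes the Bessel function of the first kind; the fraction tends to $\pi$ as $\hat{r}\to\hat{s}$):

$$d_1(s,r) = \dfrac{e^{\frac{\pi i}{4}}}{\sqrt{8\pi \omega_0}}\,(\tilde\omega^2-\omega_0^2)\int_{|x|\le 1} e^{-i\omega_0(\hat{r}-\hat{s})\cdot x}\, dx = \dfrac{e^{\frac{\pi i}{4}}}{\sqrt{8\pi \omega_0}}\,(\tilde\omega^2-\omega_0^2)\,\dfrac{2\pi J_1(\omega_0|\hat{r}-\hat{s}|)}{\omega_0|\hat{r}-\hat{s}|}$$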
###Code
def f1_field(r,s):
    # First-order approximation d_1(s,r):
    #   e^{i pi/4}/sqrt(8 pi omega_0) * \int_Omega e^{-i omega_0 (r_hat - s_hat).x} eta(x) dx,
    # with eta = omega^2 - omega_0^2 supported on the inner disc.
    temp = Integrate(exp(-1j*omega_0*((cos(r)-cos(s))*x+(sin(r)-sin(s))*y))*(omega**2 - omega_0**2),
                     mesh, definedon=mesh.Materials("inner"))
    # The constant phase prefactor e^{i pi/4} does not affect the magnitudes plotted below.
    return (np.exp(1j*np.pi/4)/np.sqrt(8*np.pi*omega_0))*temp

def ff_field(r):
    # Full far-field transformation: boundary integral of
    #   u_s * d/dn(e^{-i omega_0 r_hat.x}) - (du_s/dn) * e^{-i omega_0 r_hat.x}
    # over the inner boundary.
    n = specialcf.normal(2)
    us_n = BoundaryFromVolumeCF(Grad(u_s)*n)
    ecomp = exp(-1j*omega_0*(cos(r)*x + sin(r)*y))
    ecomp_n = CoefficientFunction((ecomp.Diff(x),ecomp.Diff(y)))*n
    temp = Integrate(u_s*ecomp_n - us_n*ecomp, mesh, definedon=mesh.Boundaries("innerbnd"))
    return (np.exp(1j*np.pi/4)/np.sqrt(8*np.pi*omega_0))*temp
r = np.pi
print("Full Comp:", abs(ff_field(r)), "\nFirst Order:", abs(f1_field(r,s)))
theta = np.arange(0, 2*np.pi, 0.01)
mag1 = []
mag2 = []
for r in theta:
mag1.append(abs(ff_field(r)))
mag2.append(abs(f1_field(r,s)))
d = np.pi
maxf = math.ceil(abs(ff_field(d)))
max1 = math.ceil(abs(f1_field(d,s)))
#print(maxf, max1)
fig = plt.figure(figsize=(9, 5))
ax = plt.subplot(1, 2, 1, projection='polar')
plt.title('Full Far-field pattern for s = pi')
ax.plot(theta, mag1)
ax.set_rmax(maxf)
#ax.set_rticks([0.1, .2, .3, 2, .4, .5])# Less radial ticks
ax.set_rlabel_position(-22.5) # Move radial labels away from plotted line
ax.grid(True)
ax = plt.subplot(1, 2, 2, projection='polar')
plt.title('First Order Far-field pattern for s = pi')
ax.plot(theta, mag2)
ax.set_rmax(max1)
#ax.set_rticks([0.1, .2, .3, 2, .4, .5])# Less radial ticks
ax.set_rlabel_position(-22.5) # Move radial labels away from plotted line
ax.grid(True)
plt.show()
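# Sanity check (illustrative): quantify the agreement between the two
# far-field magnitudes via a relative L2 difference over all observation angles.
mag1_arr = np.array(mag1)
mag2_arr = np.array(mag2)
rel_diff = np.linalg.norm(mag1_arr - mag2_arr) / np.linalg.norm(mag1_arr)
print("Relative L2 difference between full and first-order far fields:", rel_diff)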
###Output
_____no_output_____ |
stock-analysis/code/03-stream-viewer.ipynb | ###Markdown
Real-Time Stream Viewer (HTTP)

The following function responds to HTTP requests with the list of the last 10 processed messages + sentiments in reverse order (newest on top). It reads records from the enriched stream, takes the most recent 10 messages, and reverse sorts them. The function uses the nuclio context to store the last results and stream pointers for maximum efficiency. The code is automatically converted into a nuclio (serverless) function that responds to HTTP requests.

The example demonstrates the use of `%nuclio` magic commands to specify environment variables, package dependencies, configurations, and to deploy functions automatically onto a cluster.

Initialize nuclio emulation, environment variables and configuration

Use `# nuclio: ignore` for sections that don't need to be copied to the function
###Code
# nuclio: ignore
# if the nuclio-jupyter package is not installed run !pip install nuclio-jupyter
import nuclio
%nuclio env -c V3IO_ACCESS_KEY=${V3IO_ACCESS_KEY}
%nuclio env -c V3IO_USERNAME=${V3IO_USERNAME}
%nuclio env -c V3IO_API=${V3IO_API}
###Output
_____no_output_____
###Markdown
Set function configuration

Define the function kind and the base image. For more details check the [nuclio function configuration reference](https://github.com/nuclio/nuclio/blob/master/docs/reference/function-configuration/function-configuration-reference.md)
###Code
%%nuclio config
kind = "nuclio"
spec.build.baseImage = "mlrun/mlrun"
###Output
%nuclio: setting kind to 'nuclio'
%nuclio: setting spec.build.baseImage to 'mlrun/mlrun'
###Markdown
Install required packages

`%nuclio cmd` allows you to run image build instructions and install packages.

Note: the `-c` option will only install in nuclio, not locally
###Code
%nuclio cmd -c pip install v3io
###Output
_____no_output_____
###Markdown
Nuclio function implementation

This function can run in Jupyter or in nuclio (real-time serverless)
###Code
import v3io.dataplane
import json
import os
def init_context(context):
access_key = os.getenv('V3IO_ACCESS_KEY', None)
setattr(context, 'container', os.getenv('V3IO_CONTAINER', 'users'))
setattr(context, 'stream_path', os.getenv('STOCKS_STREAM',os.getenv('V3IO_USERNAME') + '/stocks/stocks_stream'))
v3io_client = v3io.dataplane.Client(endpoint=os.getenv('V3IO_API', None), access_key=access_key)
setattr(context, 'data', [])
setattr(context, 'v3io_client', v3io_client)
setattr(context, 'limit', os.getenv('LIMIT', 10))
def handler(context, event):
resp = context.v3io_client.seek_shard(container=context.container, path=f'{context.stream_path}/0', seek_type='EARLIEST')
setattr(context, 'next_location', resp.output.location)
resp = context.v3io_client.get_records(container=context.container, path=f'{context.stream_path}/0', location=context.next_location, limit=context.limit)
# context.next_location = resp.output.next_location
context.logger.info('location: %s', context.next_location)
for rec in resp.output.records:
rec_data = rec.data.decode('utf-8')
rec_json = json.loads(rec_data)
context.data.append({'Time': rec_json['time'],
'Symbol': rec_json['symbol'],
'Sentiment': rec_json['sentiment'],
'Link': rec_json['link'],
'Content': rec_json['content']})
context.data = context.data[-context.limit:]
columns = [{'text': key, 'type': 'object'} for key in ['Time', 'Symbol', 'Sentiment', 'Link', 'Content']]
data = [list(item.values()) for item in context.data]
response = [{'columns': columns,
'rows': data,
'type': 'table'}]
return response
# nuclio: end-code
###Output
_____no_output_____
###Markdown
Function invocation

The following section simulates nuclio function invocation and will emit the function results
###Code
# create a test event and invoke the function locally
init_context(context)
event = nuclio.Event(body='')
resp = handler(context, event)
###Output
Python> 2021-03-25 14:01:20,229 [info] location: AQAAAGYAAABHAEBeFwAAAA==
###Markdown
Deploy a function onto a cluster

The `%nuclio deploy` command deploys functions onto a cluster. Make sure the notebook is saved prior to running it! Check the help (`%nuclio help deploy`) for more information
###Code
import os
from mlrun import code_to_function
# Export the bare function
fn = code_to_function('stream-viewer',
handler='handler')
fn.export('03-stream-viewer.yaml')
# Set parameters for current deployment
fn.set_envs({'V3IO_CONTAINER': 'users',
'STOCKS_STREAM': os.getenv('V3IO_USERNAME') + '/stocks/stocks_stream'})
fn.spec.max_replicas = 2
project_name = "stocks-" + os.getenv('V3IO_USERNAME')
addr = fn.deploy(project=project_name)
# nuclio: ignore
# test the new API end point, take the address from the deploy log above
!curl {addr}
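# Illustrative alternative (assumes the `requests` package is available):
# call the endpoint from Python and inspect the table-shaped response
# that the handler builds.
import requests
url = addr if addr.startswith('http') else 'http://' + addr
r = requests.get(url)
print(r.status_code)
print([c['text'] for c in r.json()[0]['columns']])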
###Output
_____no_output_____
###Markdown
Real-Time Stream Viewer (HTTP)the following function responds to HTTP requests with the list of last 10 processed twitter messages + sentiments in reverse order (newest on top), it reads records from the enriched stream, take the recent 10 messages, and reverse sort them. the function is using nuclio context to store the last results and stream pointers for max efficiency. The code is automatically converted into a nuclio (serverless) function and and respond to HTTP requeststhe example demonstrate the use of `%nuclio` magic commands to specify environment variables, package dependencies,configurations, and to deploy functions automatically onto a cluster. Initialize nuclio emulation, environment variables and configurationuse ` nuclio: ignore` for sections that don't need to be copied to the function
###Code
# nuclio: ignore
# if the nuclio-jupyter package is not installed run !pip install nuclio-jupyter
import nuclio
%nuclio env -c V3IO_ACCESS_KEY=${V3IO_ACCESS_KEY}
%nuclio env -c V3IO_USERNAME=${V3IO_USERNAME}
%nuclio env -c V3IO_API=${V3IO_API}
###Output
_____no_output_____
###Markdown
Set function configuration use a cron trigger with 5min interval and define the base imagefor more details check [nuclio function configuration reference](https://github.com/nuclio/nuclio/blob/master/docs/reference/function-configuration/function-configuration-reference.md)
###Code
%%nuclio config
kind = "nuclio"
spec.build.baseImage = "mlrun/mlrun"
###Output
%nuclio: setting kind to 'nuclio'
%nuclio: setting spec.build.baseImage to 'mlrun/mlrun'
###Markdown
Install required packages`%nuclio cmd` allows you to run image build instructions and install packagesNote: `-c` option will only install in nuclio, not locally
###Code
%nuclio cmd -c pip install v3io
###Output
_____no_output_____
###Markdown
Nuclio function implementationthis function can run in Jupyter or in nuclio (real-time serverless)
###Code
import v3io.dataplane
import json
import os
def init_context(context):
access_key = os.getenv('V3IO_ACCESS_KEY', None)
setattr(context, 'container', os.getenv('V3IO_CONTAINER', 'bigdata'))
setattr(context, 'stream_path', os.getenv('STOCKS_STREAM', 'stocks/stocks_stream'))
v3io_client = v3io.dataplane.Client(endpoint=os.getenv('V3IO_API', None), access_key=access_key)
setattr(context, 'data', [])
setattr(context, 'v3io_client', v3io_client)
setattr(context, 'limit', os.getenv('LIMIT', 10))
try:
resp = v3io_client.seek_shard(container=context.container, path=f'{context.stream_path}/0', seek_type='EARLIEST')
setattr(context, 'next_location', resp.output.location)
except:
context.logger.info('Stream not updated yet')
def handler(context, event):
if hasattr(context, 'next_location'):
resp = context.v3io_client.get_records(container=context.container, path=f'{context.stream_path}/0', location=context.next_location, limit=context.limit)
else:
resp = context.v3io_client.seek_shard(container=context.container, path=f'{context.stream_path}/0', seek_type='EARLIEST')
setattr(context, 'next_location', resp.output.location)
context.next_location = resp.output.next_location
context.logger.info('location: %s', context.next_location)
for rec in resp.output.records:
rec_data = rec.data.decode('utf-8')
rec_json = json.loads(rec_data)
context.data.append({'Time': rec_json['time'],
'Symbol': rec_json['symbol'],
'Sentiment': rec_json['sentiment'],
'Link': rec_json['link'],
'Content': rec_json['content']})
context.data = context.data[-context.limit:]
columns = [{'text': key, 'type': 'object'} for key in ['Time', 'Symbol', 'Sentiment', 'Link', 'Content']]
data = [list(item.values()) for item in context.data]
response = [{'columns': columns,
'rows': data,
'type': 'table'}]
return response
# nuclio: end-code
###Output
_____no_output_____
###Markdown
Function invocationthe following section simulates nuclio function invocation and will emit the function results
###Code
# create a test event and invoke the function locally
init_context(context)
event = nuclio.Event(body='')
handler(context, event)
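# Illustrative: inspect the table-shaped response returned by the handler;
# each row holds Time, Symbol, Sentiment, Link and Content.
out = handler(context, event)
for row in out[0]['rows']:
    print(row[0], row[1], row[2])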
###Output
_____no_output_____
###Markdown
Deploy a function onto a clusterthe `%nuclio deploy` command deploy functions into a cluster, make sure the notebook is saved prior to running it !check the help (`%nuclio help deploy`) for more information
###Code
from mlrun import code_to_function
# Export the bare function
fn = code_to_function('stream-viewer',
handler='handler')
fn.export('03-stream-viewer.yaml')
# Set parameters for current deployment
fn.set_envs({'V3IO_CONTAINER': 'bigdata',
'STOCKS_STREAM': 'stocks/stocks_stream'})
fn.spec.max_replicas = 2
addr = fn.deploy(project='stocks')
# nuclio: ignore
# test the new API end point, take the address from the deploy log above
!curl {addr}
###Output
[{"columns": [{"text": "Time", "type": "object"}, {"text": "Symbol", "type": "object"}, {"text": "Sentiment", "type": "object"}, {"text": "Link", "type": "object"}, {"text": "Content", "type": "object"}], "rows": [["2020-10-04 12:11:05", "GOOGL", -0.12, "https://www.investing.com/news/world-news/shorthanded-us-supreme-court-returns-with-major-challenges-ahead-2315028", "By Lawrence Hurley\nWASHINGTON (Reuters) - The U.S. Supreme Court begins its new nine-month term on Monday buffeted by the death of liberal Justice Ruth Bader Ginsburg, a Senate confirmation battle over her successor, the coronavirus pandemic and the approaching presidential election whose outcome the justices may be called upon to help decide.\nAmid the maelstrom, the shorthanded court - with eight justices rather than a full complement of nine - also has a series of major cases to tackle, including a Republican bid to invalidate the Obamacare healthcare law set to be argued on Nov. 10, a week after Election Day.\nIf President Donald Trump s nominee to replace Ginsburg, federal appeals court judge Amy Coney Barrett, is confirmed as expected by a Senate controlled by his fellow Republicans, the court s ideological balance would tilt further rightward with a potent 6-3 conservative majority.\nThe court kicks off its term according to custom on the first Monday of October. It will begin unlike any other, with two cases being argued by teleconference due to the coronavirus pandemic. The court for the first time began hearing cases that way in May, and will continue doing so at least at the term s outset.\nThe court building, where large crowds of mourners gathered outside after Ginsburg s death of Sept. 18, remains closed to the public because of the pandemic.\nThe confluence of events is a test of leadership for conservative Chief Justice John Roberts, who in February also presided over a Senate impeachment trial that ended in Trump s acquittal on charges of abuse of power and obstruction of Congress for pressing Ukraine to investigate his Democratic election rival Joe Biden. \nRoberts is known as a institutionalist who prizes the court s independence.\n\"He would like to be a steady hand and wants the court to be on a steady path,\" said Nicole Saharsky, a lawyer who argues cases before the justices.\nThe most anticipated case in the term s first week comes on Wednesday, when the justices weigh a multibillion-dollar software copyright dispute between Alphabet Inc s Google and Oracle Corp . The case involves Oracle s accusation that Google infringed its software copyrights to build the Android operating system used in smartphones.\nIn the Obamacare case, Barrett could cast a pivotal vote.\nA group of Democratic-led states including California and New York are striving to preserve the 2010 law, formally known as the Affordable Care Act, in a case in which Republican-led states and Trump s administration are trying to strike it down.\nObamacare has helped roughly 20 million Americans obtain medical insurance either through government programs or through policies from private insurers made available in Obamacare marketplaces. It also bars insurers from refusing to cover people with pre-existing medical conditions. Republican opponents have called the law an unwarranted intervention by government in health insurance markets.\nThe Supreme Court previously upheld it 5-4 in a 2012 ruling in which Roberts cast the crucial vote. It rejected another challenge 6-3 in 2015. 
Ginsburg was in the majority both times.\nBarrett in the past criticized those two rulings. Democrats opposing her nomination have emphasized that she might vote to strike down Obamacare, although legal experts think the court is unlikely to do so.\nRELIGIOUS RIGHTS\nThe court hears another major case on Nov. 4 concerning the scope of religious-rights exemptions to certain federal laws. The dispute arose from Philadelphia s decision to bar a local Roman Catholic entity from participating in the city s foster-care program because the organization prohibits same-sex couples from serving as foster parents.\nThe justices already have tackled multiple election-related emergency requests this year, some related to rules changes prompted by the pandemic. More are likely.\nThe conservative majority has sided with state officials opposed to courts imposing changes to election procedures to make it easier to vote during the pandemic.\nTrump has said he wants Barrett to be confirmed before Election Day so she could cast a decisive vote in any election-related dispute, potentially in his favor. He has said he expects the Supreme Court to decide the outcome of the election, though it has done so only once - the disputed 2000 contest ultimately awarded to Republican George W. Bush.\nDemocrats have said they will query Barrett during confirmation hearings set to begin on Oct. 12 on whether she should recuse herself in certain election-related cases. Justices have the final say on whether they step aside in a case.\nJeffrey Rosen, president of the nonprofit National Constitution Center, said at an event on Friday hosted by the libertarian Pacific Legal Foundation that he expects the court either to stay out of major election cases or, if unable to do so, to try to reach a unanimous outcome.\n\"The court s legitimacy is crucially important to all the justices in this extraordinarily fragile time,\" Rosen added. \n\nIf the court is divided 4-4 in any cases argued before a new justice is seated, it could hold a second round of oral arguments so the new justice could participate. "], ["2020-10-04 09:00:16", "GOOGL", -0.5, "https://www.investing.com/news/cryptocurrency-news/xrp-ledger-blockchain-energizes-decarbonization-but-tokenization-a-challenge-2315010", "As tech giants like Google and Facebook announce plans to become carbon-neutral businesses by 2030, smaller companies are doing the same. The only difference is that innovative startups are taking clever approaches that seek to be more effective than those implemented by large, centralized companies.\nFor example, Ripple \u2014 a fintech company that allows banks, payment providers and digital asset exchanges to send money using blockchain \u2014 has committed to becoming carbon net-zero by 2030. In order to meet this goal, Ripple has unveiled a set of initiatives driven largely by blockchain technology."], ["2020-10-04 07:05:43", "GOOGL", -0.20833333333333334, "https://www.investing.com/news/stock-market-news/trumps-diagnosis-fuels-uncertainty-for-skittish-us-stock-market-2314978", "By April Joyner and Lewis Krauskopf\n(Reuters) - Investors are gauging how a potential deterioration in President Donald Trump s health could impact asset prices in coming weeks, as the U.S. leader remains hospitalized after being diagnosed with COVID-19.\nSo far, markets have been comparatively sanguine: hopes of a breakthrough in talks among U.S. 
lawmakers on another stimulus package took the edge off a stock market selloff on Friday, with the S&P 500 losing less than 1% and so-called safe-haven assets seeing limited demand. News of Trump s hospitalization at a military medical center outside Washington, where he remained on Saturday, came after trading ended on Friday. \nMany investors are concerned, however, that a serious decline in Trump\u2019s health less than a month before Americans go to the polls on Nov. 3 could roil a U.S. stock market that recently notched its worst monthly performance since its selloff in March while causing turbulence in other assets. \nIf the president\u2019s health is in jeopardy, there s \"too much uncertainty in the situation for the markets just to shrug it off,\" said Willie Delwiche, investment strategist at Baird. \nThe various outcomes investors currently envision run the gamut from a quick recovery that bolsters Trump s image as a fighter to a drawn-out illness or death stoking uncertainty and drying up risk appetite across markets. \nShould uncertainty persist, technology and momentum stocks that have led this year s rally may be particularly vulnerable to a selloff, some investors said. The tech-heavy Nasdaq fell more than 2% on Friday, double the S&P 500 s decline. \n\"If people ... get nervous right now, probably it manifests itself in crowded trades like tech and mega-cap being unwound a bit,\" Delwiche said\nA record 80% of fund managers surveyed last month by BofA Global Research said that buying technology stocks was the market s \"most crowded\" trade. \nThe concentration of investors in big tech stocks has also raised concerns over their outsized sway on moves in the broader market.\nThe largest five U.S. companies \u2013 Google parent Alphabet , Amazon , Apple , Facebook , and Microsoft \u2013 now account for almost 25% of the S&P 500 s market capitalization, according to research firm Oxford Economics.\nFISCAL STIMULUS TALKS\nTrump\u2019s diagnosis has intensified the spotlight on the fiscal stimulus talks in Washington, with investors saying agreement on another aid package could act as a stabilizing force on markets in the face of election-related uncertainty. \nU.S. House of Representatives Speaker Nancy Pelosi, a Democrat, said on Friday that negotiations were continuing, but she is waiting for a response from the White House on key areas. \nFresh stimulus could speed economic healing from the impact of the pandemic, which has put millions of Americans out of work, and benefit economically-sensitive companies whose stock performance has lagged this year, investors said. 
\nFor those who are underweight stocks, \"we would be using this volatility as an opportunity to increase equities because we think we re in an early-stage economic recovery,\" said Keith Lerner, chief market strategist at Truist/SunTrust Advisory.\nMarket action on Friday suggested some investors may have been positioning for a stimulus announcement in the midst of the selloff.\nThe S&P 500 sectors representing industrials and financials, two groups that are more sensitive to a broad economic recovery, rallied 1.1% and 0.7%, respectively, while the broader index declined.\nEven with worries over Trump s condition, \"the fiscal program has been the loudest noise in the market,\" said Arnim Holzer, macro and correlation defense strategist at EAB Investment Group.\nInvestor hedges against election-related market swings put in place over the last few months may have softened Friday s decline and could, to a degree, mitigate future volatility, said Christopher Stanton of hedge fund Sunrise Capital Partners LLC.\nDespite Trump s illness, futures on the Cboe Volatility Index continued to show expectations of elevated volatility after the Nov. 3 vote, a pattern consistent with concerns of a contested election.\nNagging doubts over whether the Republican president would agree to hand over the keys to the White House if he loses have grown in recent weeks. During his first debate with Democratic challenger Joe Biden on Tuesday, Trump declined to commit to accepting the results, repeating his unfounded complaint that mail-in ballots would lead to election fraud.\n\n\"If Trump s health does not recover ... then he might give up on contesting the election,\" said Michael Purves, chief executive of Tallbacken Capital Advisors. But \"markets are not shifting off the contested election thing right now.\""], ["2020-10-03 12:37:59", "GOOGL", -0.0625, "https://www.investing.com/news/stock-market-news/pointcounterpoint-the-case-for-palantir-2314278", "By Peter Nurse and Yasin Ebrahim\nInvesting.com -- It\u2019s taken some time in coming but Palantir Technologies Inc has finally gone public, providing investors with an excellent opportunity to benefit from growth at a major player in the key area of data analysis.\nAs American management consultant Geoffrey Moore said, \u201cwithout big data, you are blind and deaf and in the middle of a freeway.\u201d\nFollowing years of secrecy, given its ties with customers including spy, law enforcement and military agencies, Palantir\u2019s foray into the public market has forced it to pull back the curtain on its operations and revealed fundamentals that have some scratching their hands.\nInvesting.com s Peter Nurse argues the bull case for the newly minted stock, while Yasin Ebrahim explains why it s a wait-and-see. This is Point/Counterpoint.\nThe Bull Case\nPalantir provides governments and corporations with the tools required to organize and glean insights from mounds of data, helping in areas as varied as detailing the spread of the novel coronavirus to tracking the activities of terrorists.\nThe U.S. data analytics firm made its debut on New York Stock Exchange debut on Wednesday, after years of speculation, via a direct listing and without the usual razzmatazz surrounding a traditional IPO. 
\nIt\u2019s true the stock is trading below its $10 opening price, but this is still well above its reference price of $7.25 and things are likely to look very different when the Street starts its coverage.\nThe company has yet to make a profit, but surely that won t be long in coming given its losses in the first six months of 2020 totaled $164 million, down from $280 million the same period a year prior.\nPalantir anticipates 42% revenue growth in 2020 to about $1.06 billion, according to a filing earlier this week, with gross margins for the first half of this year at an impressive 73%. It also forecasts revenue growth of more than 30% next year. \nPalantir was formed in 2003 in the wake of the 9-11 attacks, with its first major backer \u2013 the CIA\u2019s venture arm, In-Q-Tel. It was one of the first companies in this space and thus has had a head start in developing the required technology.\nWhile Palantir still analyzes large amounts of data for U.S. government defense and intelligence agencies -- contracts which tend to be frequently rolled over -- the private sector has become increasingly more important. \nIn fact, Palantir now says that a little more than half of its customers come from the private sector instead of governments. And there will undoubtedly be increasing demand from companies given the massive amounts of data they generate.\n\"Broadly speaking Covid has been a tailwind for our business,\" Chief Operating Officer Shyam Sankar said in a recent interview. \"We started 83 new engagements with customers in the first three weeks of Covid without getting on a plane.\"\nThe coronavirus pandemic has forced companies to revisit how they do business, and the Covid era doesn\u2019t look like ending anytime soon."], ["2020-10-03 11:30:27", "GOOGL", -0.76, "https://www.investing.com/news/stock-market-news/paytm-other-indian-startups-vow-to-fight-big-daddy-googles-clout-sources-2314771", "By Aditya Kalra\nNEW DELHI (Reuters) - Dozens of India s technology startups, chafing at Google s local dominance of key apps, are banding together to consider ways to challenge the U.S. tech giant, including by lodging complaints with the government and courts, executives told Reuters.\nAlthough Google, owned by Alphabet Inc , has worked closely with India s booming startup sector and is ramping up its investments, it has recently angered many tech companies with what they say are unfair practices.\nSetting the stage for a potential showdown, entrepreneurs held two video conferences this week to strategise, three executives told Reuters.\n\"It s definitely going to be a bitter fight,\" said Dinesh Agarwal, CEO of e-commerce firm IndiaMART . \"Google will lose this battle. It s just a matter of time.\"\nHe said executives have discussed forming a new startup association aimed chiefly at lodging protests with the Indian government and courts against the Silicon Valley company.\nNearly 99% of the smartphones of India s half a billion users run on Google s Android mobile operating system. Some Indian startups say that allows Google to exert excessive control over the types of apps and other services they can offer, an allegation the company denies.\nThe uproar began last month when Google removed popular payments app Paytm from its Play Store, citing policy violations. 
This led to a sharp rebuke from the Indian firm s founder, Vijay Shekhar Sharma, whose app returned to the Google platform a few hours later, after Paytm made certain changes.\nIn a video call on Tuesday, Sharma called Google the \"big daddy\" that controls the \"oxygen supply of (app) distribution\" on Android phones, according to an attendee. He urged the roughly 50 executives on the call to join hands to \"stop this tsunami.\"\n\"If we together don t do anything, then history will not be kind to us. We have to control our digital destiny,\" Sharma said.\nOne idea raised was to launch a local rival to Google s app store, but Sharma said this would not be immediately effective given Google s dominance, one source said.\nSharma and Paytm, which is backed by Japan s SoftBank Group Corp (T:9984), did not respond to requests for comment.\nGoogle declined to comment. It has previously said its policies aim to protect Android users and that it applies and enforces them consistently on developers.\nSTRAINING TIES\nThis week the U.S. company angered some Indian startups by deciding to enforce a 30% commission it charges on payments made within apps on the Android store.\nTwo dozen executives were on a call on Friday where many slammed that decision. They discussed filing antitrust complaints and approaching Google s India head for discussions, said two sources with direct knowledge of the call.\nParticipants included sports technology firm Dream Sports, backed by U.S. hedge fund Tiger Global, social media company ShareChat and digital payments firm PhonePe, the sources said. None of those companies responded to requests for comment.\nGoogle defends the policy, saying 97% of apps worldwide comply with it.\nGoogle already faces an antitrust case related to its payments app in India and a competition investigation into claims it abused Android s dominant position. The company says it complies with all laws. \nThese spats strain Google s strong ties to Indian startups. It has invested in some and helped hundreds with product development. In July, its Indian-born CEO Sundar Pichai committed $10 billion in new investments over five to seven years.\nThe conflict \"is counterproductive to what Google has been doing - it s an odd place for them to be,\" said a senior tech executive familiar with Google s thinking. \"It s a reputation issue. It s in the interest of Google to resolve this issue.\"\nGoogle looms over every aspect of the industry.\nPaytm on Saturday told several startup founders, in a communication seen by Reuters, that it was collating input on challenges to Google Play Store and its policies to submit to the authorities.\n\nTo craft their attack, they are using a shared Google document."], ["2020-10-03 08:40:14", "GOOGL", -0.5, "https://www.investing.com/news/cryptocurrency-news/xrp-ledger-blockchain-energizes-decarbonization-but-tokenization-a-challenge-2314731", "As tech giants like Google and Facebook announce plans to become carbon-neutral businesses by 2030, smaller companies are doing the same. The only difference is that innovative startups are taking clever approaches that seek to be more effective than those implemented by large, centralized companies.\nFor example, Ripple \u2014 a fintech company that allows banks, payment providers and digital asset exchanges to send money using blockchain \u2014 has committed to becoming carbon net-zero by 2030. 
In order to meet this goal, Ripple has unveiled a set of initiatives driven largely by blockchain technology."], ["2020-10-03 00:35:40", "GOOGL", -0.4, "https://www.investing.com/news/technology-news/twitter-ceo-dorsey-will-testify-before-us-senate-committee-on-october-28-2314628", "By David Shepardson and Nandita Bose\nWASHINGTON (Reuters) - The chief executives of Facebook TWTR) and Alphabet-owned Google have agreed to voluntarily testify at a hearing before the Senate Commerce Committee on Oct. 28 about a key law protecting internet companies. \nFacebook and Twitter confirmed on Friday that their CEOs, Mark Zuckerberg and Jack Dorsey, respectively, will appear, while a source said that Google s Sundar Pichai will appear. That came a day after the committee unanimously voted to approve a plan to subpoena the three CEOs to appear before the panel.\nTwitter s Dorsey tweeted on Friday that the hearing \"must be constructive & focused on what matters most to the American people: how we work together to protect elections.\"\nThe CEOs are to appear virtually. \nIn addition to discussions on reforming the law called Section 230 of the Communications Decency Act, which protects internet companies from liability over content posted by users, the hearing will bring up issues about consumer privacy and media consolidation.\nRepublican President Donald Trump has made holding tech companies accountable for allegedly stifling conservative voices a theme of his administration. As a result, calls for a reform of Section 230 have been intensifying ahead of the Nov. 3 elections, but there is little chance of approval by Congress this year.\nLast week Trump met with nine Republican state attorneys general to discuss the fate of Section 230 after the Justice Department unveiled a legislative proposal aimed at reforming the law.\n\nThe chief executives of Google, Facebook, Apple Inc and Amazon.com Inc recently testified before the House of Representatives Judiciary Committee\u2019s antitrust panel. The panel, which is investigating how the companies\u2019 practices hurt rivals, is expected to release its report as early as next Monday."], ["2020-10-04 07:05:43", "MSFT", -0.20833333333333334, "https://www.investing.com/news/stock-market-news/trumps-diagnosis-fuels-uncertainty-for-skittish-us-stock-market-2314978", "By April Joyner and Lewis Krauskopf\n(Reuters) - Investors are gauging how a potential deterioration in President Donald Trump s health could impact asset prices in coming weeks, as the U.S. leader remains hospitalized after being diagnosed with COVID-19.\nSo far, markets have been comparatively sanguine: hopes of a breakthrough in talks among U.S. lawmakers on another stimulus package took the edge off a stock market selloff on Friday, with the S&P 500 losing less than 1% and so-called safe-haven assets seeing limited demand. News of Trump s hospitalization at a military medical center outside Washington, where he remained on Saturday, came after trading ended on Friday. \nMany investors are concerned, however, that a serious decline in Trump\u2019s health less than a month before Americans go to the polls on Nov. 3 could roil a U.S. stock market that recently notched its worst monthly performance since its selloff in March while causing turbulence in other assets. \nIf the president\u2019s health is in jeopardy, there s \"too much uncertainty in the situation for the markets just to shrug it off,\" said Willie Delwiche, investment strategist at Baird. 
\nThe various outcomes investors currently envision run the gamut from a quick recovery that bolsters Trump s image as a fighter to a drawn-out illness or death stoking uncertainty and drying up risk appetite across markets. \nShould uncertainty persist, technology and momentum stocks that have led this year s rally may be particularly vulnerable to a selloff, some investors said. The tech-heavy Nasdaq fell more than 2% on Friday, double the S&P 500 s decline. \n\"If people ... get nervous right now, probably it manifests itself in crowded trades like tech and mega-cap being unwound a bit,\" Delwiche said\nA record 80% of fund managers surveyed last month by BofA Global Research said that buying technology stocks was the market s \"most crowded\" trade. \nThe concentration of investors in big tech stocks has also raised concerns over their outsized sway on moves in the broader market.\nThe largest five U.S. companies \u2013 Google parent Alphabet , Amazon , Apple , Facebook , and Microsoft \u2013 now account for almost 25% of the S&P 500 s market capitalization, according to research firm Oxford Economics.\nFISCAL STIMULUS TALKS\nTrump\u2019s diagnosis has intensified the spotlight on the fiscal stimulus talks in Washington, with investors saying agreement on another aid package could act as a stabilizing force on markets in the face of election-related uncertainty. \nU.S. House of Representatives Speaker Nancy Pelosi, a Democrat, said on Friday that negotiations were continuing, but she is waiting for a response from the White House on key areas. \nFresh stimulus could speed economic healing from the impact of the pandemic, which has put millions of Americans out of work, and benefit economically-sensitive companies whose stock performance has lagged this year, investors said. \nFor those who are underweight stocks, \"we would be using this volatility as an opportunity to increase equities because we think we re in an early-stage economic recovery,\" said Keith Lerner, chief market strategist at Truist/SunTrust Advisory.\nMarket action on Friday suggested some investors may have been positioning for a stimulus announcement in the midst of the selloff.\nThe S&P 500 sectors representing industrials and financials, two groups that are more sensitive to a broad economic recovery, rallied 1.1% and 0.7%, respectively, while the broader index declined.\nEven with worries over Trump s condition, \"the fiscal program has been the loudest noise in the market,\" said Arnim Holzer, macro and correlation defense strategist at EAB Investment Group.\nInvestor hedges against election-related market swings put in place over the last few months may have softened Friday s decline and could, to a degree, mitigate future volatility, said Christopher Stanton of hedge fund Sunrise Capital Partners LLC.\nDespite Trump s illness, futures on the Cboe Volatility Index continued to show expectations of elevated volatility after the Nov. 3 vote, a pattern consistent with concerns of a contested election.\nNagging doubts over whether the Republican president would agree to hand over the keys to the White House if he loses have grown in recent weeks. During his first debate with Democratic challenger Joe Biden on Tuesday, Trump declined to commit to accepting the results, repeating his unfounded complaint that mail-in ballots would lead to election fraud.\n\n\"If Trump s health does not recover ... then he might give up on contesting the election,\" said Michael Purves, chief executive of Tallbacken Capital Advisors. 
But \"markets are not shifting off the contested election thing right now.\""], ["2020-10-02 21:25:59", "MSFT", -0.5, "https://www.investing.com/news/coronavirus/futures-sink-as-trump-tests-positive-for-covid19-2313994", "By Stephen Culp\nNEW YORK (Reuters) - U.S. stocks closed lower on Friday as news that U.S. President Donald Trump tested positive for COVID-19 put investors in a risk-off mood and added to mounting uncertainties surrounding the looming election.\nTech shares weighed heaviest on the indexes, but the blue-chip Dow s losses were mitigated by gains in economically sensitive cyclical stocks.\nDespite Friday s sell-off, the S&P and the Nasdaq both gained 1.5% on the week, while the Dow ended the session 1.9% higher than last Friday s close.\nTrump tweeted late Thursday that he had contracted the coronavirus and would be placed under quarantine, compounding the unknowns for an already volatile market. \nBut stocks pared losses after the White House provided assurances that Trump, while experiencing mild symptoms, is not incapacitated. \n\"This injects further uncertainty into the outcome of the election,\" said Roberto Perli, head of global policy research at Cornerstone Macro in Washington. \"My read is that markets have demonstrated an aversion of late especially to uncertainty, not so much to one or the other candidate winning.\" \nEquities also got a brief boost after U.S. House of Representatives Speaker Nancy Pelosi s announcement that an agreement to provide another $25 billion in government assistance to the airline industry was \"imminent.\" \n\"Markets are also paying attention to the likelihood that another stimulus package will pass soon,\" Perli added. \"If that happens it could offset at least in part the uncertainty generated by the COVID news.\" \nHouse Democrats passed a $2.2 trillion fiscal aid package on Thursday, but the bill is unlikely to be approved in the Republican-controlled Senate.\nPartisan wrangling over the size and details of a new round of stimulus have stalled, over two months after emergency unemployment benefits expired for millions of Americans. \nData released on Friday showed the recovery of the labor market could be losing steam. The U.S. economy added 661,000 jobs in September, fewer than expected and the slowest increase since the recovery began in May.\nPayrolls remain a long way from regaining the 22 million jobs lost since the initial shutdown, and the ranks of the permanently unemployed are swelling. 
\nThe Dow Jones Industrial Average (DJI) fell 134.09 points, or 0.48%, to 27,682.81, the S&P 500 (SPX) lost 32.36 points, or 0.96%, to 3,348.44 and the Nasdaq Composite (IXIC) dropped 251.49 points, or 2.22%, to 11,075.02.\nOf the 11 major sectors in the S&P 500, tech (SPLRCT) suffered the biggest loss, while real estate <.SPLRCR> and utilities (SPLRCU) enjoyed the largest percentage gains.\nIn a reversal from recent sessions, market leaders Apple Inc Amazon.com and Microsoft Corp were the heaviest drags on the S&P and the Nasdaq.\nCommercial air carriers rose on news off a possible new round of government aid, with the S&P 1500 Airline index <.SPCOMAIR> rising 2.3%.\nTesla Inc shares plunged 7.4% after the electric car maker s third quarter vehicle deliveries, while reaching a new record, underwhelmed investors.\nAdvancing issues outnumbered declining ones on the NYSE by a 1.45-to-1 ratio; on Nasdaq, a 1.13-to-1 ratio favored decliners.\nThe S&P 500 posted six new 52-week highs and one new low; the Nasdaq Composite recorded 56 new highs and 34 new lows. \nVolume on U.S. exchanges was 9.30 billion shares, compared with the 9.93 billion average over the last 20 trading days. \n"], ["2020-10-02 21:25:35", "MSFT", -0.23076923076923078, "https://www.investing.com/news/stock-market-news/us-stocks-lower-at-close-of-trade-dow-jones-industrial-average-down-048-2314575", "Investing.com \u2013 U.S. stocks were lower after the close on Friday, as losses in the Technology, Consumer Services and Consumer Goods sectors led shares lower.\nAt the close in NYSE, the Dow Jones Industrial Average lost 0.48%, while the S&P 500 index declined 0.96%, and the NASDAQ Composite index declined 2.22%.\nThe best performers of the session on the Dow Jones Industrial Average were Dow Inc , which rose 2.60% or 1.20 points to trade at 47.31 at the close. Meanwhile, Caterpillar Inc added 2.20% or 3.23 points to end at 149.94 and McDonald\u2019s Corporation was up 1.40% or 3.08 points to 222.67 in late trade.\nThe worst performers of the session were Amgen Inc , which fell 3.91% or 9.98 points to trade at 245.41 at the close. Apple Inc declined 3.23% or 3.77 points to end at 113.02 and Microsoft Corporation was down 2.95% or 6.27 points to 206.19.\nThe top performers on the S&P 500 were LyondellBasell Industries NV which rose 6.02% to 72.24, Macerich Company which was up 5.71% to settle at 7.41 and United Rentals Inc which gained 5.51% to close at 185.02.\nThe worst performers were Activision Blizzard Inc which was down 5.30% to 78.30 in late trade, Vertex Pharmaceuticals Inc which lost 4.65% to settle at 260.80 and Netflix Inc which was down 4.63% to 503.06 at the close.\nThe top performers on the NASDAQ Composite were Nano X Imaging Ltd which rose 56.20% to 37.44, Westwater Resources Inc which was up 49.32% to settle at 4.420 and ProPhase Labs Inc which gained 32.05% to close at 5.150.\nThe worst performers were Benitec Biopharma Ltd ADR which was down 36.74% to 3.03 in late trade, Mesoblast Ltd which lost 35.27% to settle at 12.03 and Oasis Petroleum Inc which was down 19.10% to 0.171 at the close.\nRising stocks outnumbered declining ones on the New York Stock Exchange by 1907 to 1162 and 96 ended unchanged; on the Nasdaq Stock Exchange, 1486 fell and 1380 advanced, while 78 ended unchanged.\nShares in United Rentals Inc rose to 52-week highs; rising 5.51% or 9.67 to 185.02. Shares in Benitec Biopharma Ltd ADR fell to 52-week lows; down 36.74% or 1.76 to 3.03. 
Shares in ProPhase Labs Inc rose to 52-week highs; up 32.05% or 1.250 to 5.150. Shares in Oasis Petroleum Inc fell to all time lows; losing 19.10% or 0.040 to 0.171. \nThe CBOE Volatility Index, which measures the implied volatility of S&P 500 options, was up 3.48% to 27.63.\nGold Futures for December delivery was down 0.64% or 12.30 to $1904.00 a troy ounce. Elsewhere in commodities trading, Crude oil for delivery in November fell 4.52% or 1.75 to hit $36.97 a barrel, while the December Brent oil contract fell 4.25% or 1.74 to trade at $39.19 a barrel.\nEUR/USD was down 0.26% to 1.1716, while USD/JPY fell 0.19% to 105.30."]], "type": "table"}] |
ProducingUnsupervisedClustering.ipynb | ###Markdown
Producing unsupervised analysis of days of week (step by step)
Treating bridge crossings each day as features to learn about the relationships between various days. This notebook is a re-run of the tutorial analysis by [Jake Vanderplas](https://github.com/jakevdp/JupyterWorkflow) with an updated data set.
0. Python packages
It is customary to put all of the required imports at the top of the notebook, like so:
```python
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
```
But for the purposes of following the original workflow I'll leave them at the points below at which they were first needed.
1. Data
1.1 Access to data so that the analysis is reproducible
###Code
URL = 'https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD'
from urllib.request import urlretrieve
urlretrieve(URL, 'Fremont.csv')
###Output
_____no_output_____
###Markdown
1.2 Formatting data
###Code
import pandas as pd
data = pd.read_csv('Fremont.csv', index_col='Date', parse_dates=True)
data.head()
###Output
_____no_output_____
###Markdown
1.2.1 Creating a function for loading data
So that we don't have to download the file every time (if we already have it), we can create this function instead:
```python
import os
from urllib.request import urlretrieve
import pandas as pd

FREMONT_URL = 'https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD'

def get_fremont_data(filename='Fremont.csv', url=FREMONT_URL, force_download=False):
    """Download and cache the fremont data

    Parameters
    ----------
    filename : string (optional)
        location to save the data
    url : string (optional)
        web location of the data
    force_download : bool (optional)
        if True, force redownload of data

    Returns
    -------
    data : pandas.DataFrame
        The fremont bridge data
    """
    if force_download or not os.path.exists(filename):
        urlretrieve(url, filename)
    data = pd.read_csv(filename, index_col='Date')
    try:
        data.index = pd.to_datetime(data.index, format='%m/%d/%Y %I:%M:%S %p')
    except TypeError:
        data.index = pd.to_datetime(data.index)
    data.columns = ['West', 'East']
    data['Total'] = data['West'] + data['East']
    return data
```
- We will need to import 'os' to check whether the file already exists on disk
- We will need to import 'urlretrieve' to download the file if we don't have it
- We will need to import pandas and parse the data with 'read_csv' as before
- "force_download" is there if we want to force the download (e.g. the data set has been updated). So the function says: if we force the download, or we don't have the file, go and download it.
1.2.1.1 We can put the above function into a Python package
This will allow us to use it in another notebook, for example. [Jake's tutorial.](https://www.youtube.com/watch?v=DjpCHNYQodY&t=342s) The code then looks like this:
```python
from packages.data import get_fremont_data
```
Where packages.data = folder_name.file_name we chose for the package. 1.3 Visual data analysis
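If we do create that package, one possible minimal layout (the names below are simply the ones used above; nothing about them is required) could look like this:
```python
# Hypothetical project layout for the helper described above:
#
#   packages/
#   ├── __init__.py   # can be empty; it just marks the folder as a package
#   └── data.py       # contains get_fremont_data
#
# Any notebook sitting next to the packages/ folder can then do:
from packages.data import get_fremont_data

data = get_fremont_data()
print(data.head())
```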
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn') # We can choose from matplotlib style library
data.plot()
# Resampling the data by weekly sum as the above plot is too dense
data.resample('W').sum().plot()
# And we can change the legend to simplify it
data.columns = ['West', 'East']
# Rolling sum over 365 days (represented by the 'D') - the the two '.sum()' in the code
data.resample('D').sum().rolling(365).sum().plot()
# Same chart as above, but...
ax = data.resample('D').sum().rolling(365).sum().plot()
# Change the y axis to zero so that it doesn't exaggerate the trends
ax.set_ylim(0, None)
# Add total
data['Total'] = data['West'] + data['East']
ax = data.resample('D').sum().rolling(365).sum().plot()
ax.set_ylim(0, None)
###Output
_____no_output_____
###Markdown
1.3.1 Take a look at the trend within individual days
###Code
# Group by time of day, then take the mean, and plot it
# This gives the average number of bridge crossings at each time of day, averaged over all days in the data set
data.groupby(data.index.time).mean().plot()
###Output
_____no_output_____
###Markdown
The chart above is indicative of the commuting pattern of people crossing the bridge: - West side of the bridge peaks in the morning - into the city - East side of the bridge peaks in the afternoon - back from the city - Total peaks at rush hour times 1.3.2 Seeing the full data setLet's see the whole data set in the same way as the sums above.
###Code
# The arguments inside the pivot_table (method?)
# Total: we want the total counts as values
# index (i.e. the rows) is the time of day, taken from our data.index column (remember, we imported the csv with: index_col='Date')
# columns = dates from our data.index column
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
# Let's look at the first five by five blocks of the pivoted table
pivoted.iloc[:5,:5]
###Output
_____no_output_____
###Markdown
We now have a two dimensional data frame where each column is a day in the data set and each row corresponds to an hour during that day.
###Code
# Let's look at that data without the legend in the plot
pivoted.plot(legend=False)
# Above we have a line for each day in each year so it's hard to see any patterns
# So let's introduce some transparency to the lines with the alpha= parameter
pivoted.plot(legend=False, alpha=0.01)
###Output
_____no_output_____
###Markdown
Thanks to the transparency of the lines we can now see that many days show the "two spikes" commuting pattern, but there is also a bunch of days that do NOT have that pattern; they peak mid-day instead.
###Code
pivoted.shape
# Transposed pivot shape - i.e. swapping the axes
pivoted.T.shape
###Output
_____no_output_____
###Markdown
2. Principal Component Analysis Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. We can view the data as 1885 observations, each with 24 features (one per hour of the day). We'll use scikit-learn to do PCA.
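A compact sketch of what the next few cells build up step by step (using the default `svd_solver`; the trial-and-error below arrives at essentially the same result):
```python
from sklearn.decomposition import PCA

# Each row is one day, each column one hour of that day; missing hours become 0
X = pivoted.fillna(0).T.values        # shape: (number_of_days, 24)

pca = PCA(2)
X2 = pca.fit_transform(X)             # shape: (number_of_days, 2)

# How much of the day-to-day variation do two components capture?
print(pca.explained_variance_ratio_)
```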
###Code
from sklearn.decomposition import PCA
PCA(2)
# Now we need to get the pivoted data into a numpy array (going from the transposed shape above to an (n_days, 24) matrix)
X = pivoted.T.values
X.shape
# Simply PCA(2).fit(x) produces an error:
# ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
# So we need to alter this x = pivoted.T.values so that "no values" are zero
X = pivoted.fillna(0).T.values
# This worked for me but not in the video PCA(2).fit(x)
# So he added another parameter to it PCA(2, svd_solver='dense').fit(x) - but 'dense' was actually the wrong parameter anyway.
# But that didn't work for me because it looks like they've altered the package by adding svd_solver='auto'
PCA(2).fit(X)
# PCA(2).fit(X) only returns the fitted estimator; we need fit_transform to get the projected 2D coordinates (otherwise X2 has no .shape to inspect)
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:,0], X2[:,1])
###Output
_____no_output_____
###Markdown
3. Unsupervised ClusteringWe'll use a Gaussian mixture model.
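Before fixing the number of clusters at two, one quick sanity check is to compare mixtures with different numbers of components, for example via the Bayesian information criterion (a sketch using the `X` array built above; lower BIC is better):
```python
from sklearn.mixture import GaussianMixture

# Compare 1-6 mixture components on the daily profiles in X
for k in range(1, 7):
    gm = GaussianMixture(k, random_state=0).fit(X)
    print(k, round(gm.bic(X), 1))
```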
###Code
from sklearn.mixture import GaussianMixture
gmm = GaussianMixture(2)
gmm.fit(X)
labels = gmm.predict(X)
labels
###Output
_____no_output_____
###Markdown
The 0s and 1s in the array represent the two clusters.
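A quick way to see how many days ended up in each cluster (a small sketch using the `labels` computed above):
```python
import numpy as np

# Number of days assigned to cluster 0 and to cluster 1
print(np.bincount(labels))
```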
###Code
plt.scatter(X2[:,0], X2[:,1], c=labels, cmap='viridis')
plt.colorbar()
# Putting the two charts on the same line
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
# [labels == 0] means that these are only the data from one (of the two clusters, the other value is 1)
# The (legend=False, alpha=0.1) is there because we have too many lines otherwise
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1]);
# Adding headers
ax[0].set_title('Purple Cluster')
ax[1].set_title('Yellow Cluster');
###Output
_____no_output_____
###Markdown
4. Comparing with Day of Week
###Code
pivoted.columns
# we want to convert the dates to days of week
pd.DatetimeIndex(pivoted.columns)
# because DatetimeIndex has an attribute 'dayofweek'
pd.DatetimeIndex(pivoted.columns).dayofweek
# Let's save the dayofweek as a shorthand
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
# As the scatter plot before but change color from c=labels to c=dayofweek
plt.scatter(X2[:,0], X2[:,1], c=dayofweek, cmap='viridis')
plt.colorbar()
###Output
_____no_output_____
###Markdown
5. Analysing OutliersThe following points are weekdays that show a weekend-like pattern. Are they holidays?
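One way to check the holiday hypothesis is to compare these dates against the US federal holiday calendar that ships with pandas (a sketch; `dates`, `labels` and `dayofweek` are the objects computed in the cells around this one, and the date range below is simply assumed to cover the data set):
```python
from pandas.tseries.holiday import USFederalHolidayCalendar

cal = USFederalHolidayCalendar()
holidays = cal.holidays(start='2012-01-01', end='2021-12-31')

# Weekdays that behave like weekends, flagged True if they are federal holidays
odd_days = dates[(labels == 1) & (dayofweek < 5)]
print(odd_days.isin(holidays))
```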
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek < 5)]
###Output
_____no_output_____ |
Coursera/Applied Data Science with Python Specialization/Python Social Network Analysis/Network Centrality.ipynb | ###Markdown
Question 5Apply the Scaled Page Rank Algorithm to this network. Find the Page Rank of node 'realclearpolitics.com' with damping value 0.85.*This function should return a float.*
###Code
def answer_five():
return nx.pagerank(G2, alpha=0.85)['realclearpolitics.com']
answer_five()
###Output
_____no_output_____
###Markdown
Question 6Apply the Scaled Page Rank Algorithm to this network with damping value 0.85. Find the 5 nodes with highest Page Rank. *This function should return a list of the top 5 blogs in descending order of Page Rank.*
###Code
def answer_six():
import numpy as np
pageranks = nx.pagerank(G2, alpha=0.85)
pages = np.array(list(pageranks.keys()))
ranks = np.array(list(pageranks.values()))
return list(pages[ranks.argsort()[-5:][::-1]])
answer_six()
###Output
_____no_output_____
###Markdown
Question 7Apply the HITS Algorithm to the network to find the hub and authority scores of node 'realclearpolitics.com'. *Your result should return a tuple of floats `(hub_score, authority_score)`.*
###Code
def answer_seven():
hubs, authorities = nx.hits(G2)
return (hubs['realclearpolitics.com'], authorities['realclearpolitics.com'])
answer_seven()
###Output
_____no_output_____
###Markdown
Question 8 Apply the HITS Algorithm to the network to find the 5 nodes with highest hub scores.*This function should return a list of the top 5 blogs in descending order of hub scores.*
###Code
def answer_eight():
import numpy as np
hubs, authorities = nx.hits(G2)
blogs = np.array(list(hubs.keys()))
hubs = np.array(list(hubs.values()))
return list(blogs[hubs.argsort()[-5:][::-1]])
answer_eight()
###Output
_____no_output_____
###Markdown
Question 9 Apply the HITS Algorithm to this network to find the 5 nodes with highest authority scores.*This function should return a list of the top 5 blogs in descending order of authority scores.*
###Code
def answer_nine():
import numpy as np
hubs, auth = nx.hits(G2)
blogs = np.array(list(auth.keys()))
auth = np.array(list(auth.values()))
return list(blogs[auth.argsort()[-5:][::-1]])
answer_nine()
###Output
_____no_output_____
###Markdown
---_You are currently looking at **version 1.2** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-social-network-analysis/resources/yPcBs) course resource._--- Assignment 3In this assignment you will explore measures of centrality on two networks, a friendship network in Part 1, and a blog network in Part 2. Part 1Answer questions 1-4 using the network `G1`, a network of friendships at a university department. Each node corresponds to a person, and an edge indicates friendship. *The network has been loaded as networkx graph object `G1`.*
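Before computing any centrality measures it can help to confirm what was actually loaded (a small sketch, assuming `friendships.gml` sits next to the notebook; the next cell performs the same load):
```python
import networkx as nx

G1 = nx.read_gml('friendships.gml')
print(G1.number_of_nodes(), G1.number_of_edges(), G1.is_directed())
```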
###Code
import networkx as nx
G1 = nx.read_gml('friendships.gml')
###Output
_____no_output_____
###Markdown
Question 1Find the degree centrality, closeness centrality, and normalized betweeness centrality (excluding endpoints) of node 100.*This function should return a tuple of floats `(degree_centrality, closeness_centrality, betweenness_centrality)`.*
###Code
def answer_one():
centrality = nx.degree_centrality(G1)[100]
closeness = nx.closeness_centrality(G1)[100]
betweenness = nx.betweenness_centrality(G1)[100]
return (centrality, closeness, betweenness)
answer_one()
###Output
_____no_output_____
###Markdown
For Questions 2, 3, and 4, assume that you do not know anything about the structure of the network, except for the all the centrality values of the nodes. That is, use one of the covered centrality measures to rank the nodes and find the most appropriate candidate. Question 2Suppose you are employed by an online shopping website and are tasked with selecting one user in network G1 to send an online shopping voucher to. We expect that the user who receives the voucher will send it to their friends in the network. You want the voucher to reach as many nodes as possible. The voucher can be forwarded to multiple users at the same time, but the travel distance of the voucher is limited to one step, which means if the voucher travels more than one step in this network, it is no longer valid. Apply your knowledge in network centrality to select the best candidate for the voucher. *This function should return an integer, the name of the node.*
###Code
def answer_two():
centrality = nx.degree_centrality(G1)
return max(centrality.keys(), key=(lambda key: centrality[key]))
answer_two()
###Output
_____no_output_____
###Markdown
Question 3Now the limit of the voucher’s travel distance has been removed. Because the network is connected, regardless of who you pick, every node in the network will eventually receive the voucher. However, we now want to ensure that the voucher reaches the nodes in the lowest average number of hops.How would you change your selection strategy? Write a function to tell us who is the best candidate in the network under this condition.*This function should return an integer, the name of the node.*
###Code
def answer_three():
closeness = nx.closeness_centrality(G1)
return max(closeness.keys(), key=(lambda key: closeness[key]))
answer_three()
###Output
_____no_output_____
###Markdown
Question 4Assume the restriction on the voucher’s travel distance is still removed, but now a competitor has developed a strategy to remove a person from the network in order to disrupt the distribution of your company’s voucher. Your competitor is specifically targeting people who are often bridges of information flow between other pairs of people. Identify the single riskiest person to be removed under your competitor’s strategy?*This function should return an integer, the name of the node.*
###Code
def answer_four():
betweenness = nx.betweenness_centrality(G1)
return max(betweenness.keys(), key=(lambda key: betweenness[key]))
answer_four()
###Output
_____no_output_____
###Markdown
Part 2`G2` is a directed network of political blogs, where nodes correspond to a blog and edges correspond to links between blogs. Use your knowledge of PageRank and HITS to answer Questions 5-9.
###Code
G2 = nx.read_gml('blogs.gml')
###Output
_____no_output_____ |
programacioi/chapters/Tema6/Tema6_Fitxers.ipynb | ###Markdown
Topic 6: Files
So far our programs have always received their information from a single source: the keyboard. Likewise, when we produce information we only know how to show it on the screen. This limits the kind of programs we can build, because the information we produce is lost every time we run the program again or switch off the computer. We can overcome these problems if we know how to use files.
What is a file?
Before we can work with files, it is important to understand what they are and how modern operating systems manage some of their aspects. Basically, a file is a contiguous set of bytes used to store data. This data is organised in a specific format and can be as simple as a sequence of text or as complex as a program executable. Files can be stored in secondary memory (hard disk), so the information we save in them persists beyond the time the computer is switched on. Files on most modern systems are made up of three parts:
* **Header**: metadata about the contents of the file (file name, internal organisation, type, etc.).
* **Data**: the contents of the file written by its creator or editor.
* **End of file** (EOF): a special character that marks the end of the file; in our case this character will be the empty string "" (not to be confused with the space " ").
Location (Path)
When we access a file of our operating system from a program, we have to indicate the path to follow from the location of the program to the file. The path of the file is represented in our program by a *string* when we build an object of the file class. The location of a file is divided into three main parts:
* **Folder route**: the location of the folder in the file system, where successive folders are separated by a forward slash `/`. To go up to a parent folder we use the symbol `..`.
* **File name**: the actual name of the file. As an anecdote, Windows did not allow names starting with a dot until version 10, since these were reserved for files of the system itself.
* **Extension**: the end of the file path, marked with a dot (.), which indicates the type of file. It does not in any way constrain the contents: a file with a `.txt` extension can hold data describing the structure of an image.
Paths can be:
* **Absolute**: with respect to the root of the system, what we know as C: (Windows) or `/` (UNIX).
* **Relative**: with respect to the folder we are currently in.
```
/                      ← root (C: on Windows)
│
├── path/
│   │
│   ├── to/
│   │   ├── main.py
│   │   └── cats.txt
│   │
│   ├── main2.py
│   │
│   └── dogs.txt
│
└── animals.csv
```
If we are in the folder **to** and want to access the file **cats.txt**, we use the following _strings_: path_relatiu = "cats.txt"; path_absolut = "/path/to/cats.txt". If we are in the folder **path** and want to access **cats.txt**: path_relatiu = "to/cats.txt"; path_absolut = "/path/to/cats.txt". If we are in the folder **to** and want to access **dogs.txt**: path_relatiu = "../dogs.txt"; path_absolut = "/path/dogs.txt". (To reach **animals.csv** from **to** we would use "../../animals.csv".)
Using files in Python
When we want to work with a file, the first thing to do is open it. This is done by calling the built-in function `open`, whose single mandatory argument is the path to the file. `open` **returns an object of the** _file_ **class**.
```python
fitxer = open("harry_potter.txt")  # harry_potter.txt is in the same folder as the code
```
After opening a file, the next thing to learn is how to close it. We can do that with `close`:
```python
fitxer.close()  # close the file
```
**Warning!!!** We must make sure that an open file gets closed properly.
It is important to know that closing the file is our responsibility as programmers. In most cases the file will be closed once the program, or the method in which it was opened, finishes. However, there is no guarantee that this will happen, and we can end up holding on to memory we no longer need. Making sure our code behaves in a well-defined way, without any unwanted behaviour, is good programming practice. One of the recommended ways to guarantee that a file is closed correctly is the following code structure:
```python
with open('harry_potter.txt') as fitxer:
    # process the file
```
The `with` statement takes care of closing the file automatically, even when an error occurs. It is advisable to use the `with` block whenever possible, since it gives cleaner code and makes handling any unexpected error easier.
The mode of a file
When opening a file we will also need the second argument of `open`: `mode`. This argument is a _string_ that can contain several combinations of characters describing how we want to open the file. The default value is `r`, which opens the file in read-only mode, as we did in the example above.
| Character | Meaning |
|---|---|
| 'r' | Open for reading (default) |
| 'w' | Open for writing, overwriting what was there before |
| 'a' | Open for writing, appending the information at the end |
| 'r+' | Open for both reading and writing |
Reading and writing open files
Once a file is open, we can read information from it or write information to it. There are several methods of the file class that can be used for this task:
| Method | Purpose |
|---|---|
| read(size) | Reads up to `size` bytes from the file. If no argument is passed, the whole file is read. Just as we do with keyboard input, we will read files character by character. |
| readline(size=-1) | Reads at most `size` characters of the line. If no argument is passed, the whole line (or the rest of the line) is read. |
| readlines() | Reads the remaining lines of the file object and returns them as a list. |
Now we will focus on writing files. Just as with reading, file objects have several useful methods for writing to a file:
| Method | Purpose |
|---|---|
| write(string) | Writes the string to the file. We have to take care of the newline ourselves. |
| writelines(seq) | Writes the sequence to the file. No line ending is added to each element of the sequence; it is up to us to add the appropriate newlines. |
**Let's look at some examples**
Reading all the characters of a file.
###Code
with open("harry_potter.txt", 'r') as harry:
lletra = harry.read(1)
    while lletra != "": # End of file: read() returns an empty string
print(lletra, end="")
lletra = harry.read(1)
with open("harry_potter.txt", 'r') as fitxer:
for linia in fitxer:
print(linia, end="-")
###Output
Juro solemnemente que esto es una travesura
-Me voy a la cama antes de que a alguno de los dos se os ocurra otra genial idea y acabemos muertos. O peor: expulsados
-Dobby no mata, solo mutila o hiere de gravedad
-Es Leviosa, no leviosa-
###Markdown
Let's now see how to write to a file
###Code
text = ["En", "un" , "agujero" , "en" , "el" , "suelo" , "habitaba" , "un" , "hobbit"]
with open("resultat.txt", 'w') as res:
for element in text:
res.write(element)
res.write("\n")
###Output
_____no_output_____ |
project_notebooks/cokriging_gullfaks_poroseismic_R.ipynb | ###Markdown
Co-kriging for Porosity from Seismic in R Install libraries
###Code
install.packages("gstat")
install.packages("ggplot2")
install.packages("sp")
library(gstat)
library(ggplot2)
library(sp)
###Output
_____no_output_____
###Markdown
Load porosity top data
###Code
filepath = "https://raw.githubusercontent.com/yohanesnuwara/geostatistics/main/results/Top%20Ness%20Porosity.txt"
download.file(filepath, destfile="/content/Top Ness porosity.txt", method="wget")
# Define column names
colnames <- c("WELL", "UTMX", "UTMY", "AVGPOR")
# Read well tops data
portops <- read.table("/content/Top Ness porosity.txt", header=T, col.names=colnames)
print(head(portops, n=10))
# Well name column
portops$WELL
###Output
_____no_output_____
###Markdown
Load amplitude top data
###Code
f1 = "https://raw.githubusercontent.com/yohanesnuwara/geostatistics/main/results/Ness_extracted_amplitude.txt"
download.file(f1, destfile="/content/Top Ness amplitude.txt", method="wget")
colnames <- c("UTMX", "UTMY", "TWT", "AMP")
amptops <- read.table("Top Ness amplitude.txt", col.names=colnames)
print(tail(amptops, 10))
# Remove NaNs in data
amptops <- amptops[complete.cases(amptops), ]
print(head(amptops, 10))
###Output
UTMX UTMY TWT AMP
50 453687.7 6780306 2010.79 -2528.5624
51 453700.2 6780305 2024.79 2013.7127
52 453712.7 6780305 2024.79 1790.7166
53 453725.2 6780305 2020.79 313.1070
54 453762.7 6780304 2020.79 1620.3771
55 453775.2 6780304 2016.79 1300.0781
56 453900.2 6780302 2017.79 666.6587
57 453912.7 6780302 2019.79 1030.5086
58 453925.2 6780302 2023.79 2200.9768
59 453975.2 6780302 2023.79 -1413.9375
###Markdown
Plot Scatter plot with native R
###Code
# control size of figure
options(repr.plot.width=8, repr.plot.height=7)
# Scatter plot of all wells in the top structure
plot(portops$UTMX, portops$UTMY, xlab="UTM X", ylab="UTM Y",
main="Well Coordinates in Top Ness",
pch=1, col="red", cex=1, # pch is point types, cex is point size
xlim=c(min(portops$UTMX), max(portops$UTMX)+100),
ylim=c(min(portops$UTMY), max(portops$UTMY)+100)) # + 100 so that "C3" is visible
# Annotate each point
text(portops$UTMX, portops$UTMY, labels=portops$WELL, pos=4, col="blue")
###Output
_____no_output_____
###Markdown
Scatter plot with ggplot2
###Code
# Scatter plot with ggplot2
p <- ggplot(portops, aes(UTMX, UTMY, color=AVGPOR, label=WELL)) +
labs(x="UTM X [m]", y="UTM Y [m]", title="Well Coordinates in Top Ness") +
theme(plot.title = element_text(hjust = 0.5, size=20, vjust=2)) +
geom_point(size=5, shape=20) + # point size and style
geom_text(size=6, vjust=-0.5) + # annotate points
scale_color_gradientn(colours = rainbow(10))
print(p)
###Output
_____no_output_____
###Markdown
Variogram analysis Well top data
###Code
coordinates(portops)<-~UTMX+UTMY
pp<-gstat(id="AVGPOR", formula=AVGPOR~1, data=portops)
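# Quick look at the omnidirectional experimental variogram before trying
# directional ones (sketch; the cutoff mirrors the directional runs below)
plot(variogram(pp, cutoff=10000))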
###Output
_____no_output_____
###Markdown
Two possible variogram models. Model 1:
###Code
portops.vv<-variogram(pp, cutoff=10000, width=20, alpha=c(140, 150, 160, 170))
portops.vm<-vgm(model="Gau", psill=0.0018, range=4000, nugget=0)
plot(portops.vv, model=portops.vm)
###Output
_____no_output_____
###Markdown
Model 2:
###Code
portops.vv<-variogram(pp, cutoff=10000, width=20, alpha=c(50, 60, 70, 80))
portops.vm<-vgm(model="Gau", psill=0.005, range=2000, nugget=0)
plot(portops.vv, model=portops.vm)
###Output
_____no_output_____
###Markdown
Model 2 looks better, with direction of anisotropy 70 degrees.
###Code
portops.vv<-variogram(pp, cutoff=10000, width=20, alpha=70)
portops.vm<-vgm(model="Gau", psill=0.0047, range=2000, nugget=0)
plot(portops.vv, model=portops.vm)
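# Optionally let gstat refine the eyeballed sill/range values (sketch);
# the fitted model could then be used in place of portops.vm further down
portops.vmf <- fit.variogram(portops.vv, portops.vm)
print(portops.vmf)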
###Output
_____no_output_____
###Markdown
Seismic amplitude data
###Code
coordinates(amptops)<-~UTMX+UTMY
pp<-gstat(id="AMP", formula=AMP~1, data=amptops)
# Plot variogram at various directions
# amptops.vv<-variogram(pp, cutoff=15000, alpha=c(0, 45, 90, 135))
amptops.vv<-variogram(pp, cutoff=15000, alpha=c(60, 70, 80, 90))
plot(amptops.vv)
amptops.vv<-variogram(pp, cutoff=15000, alpha=70)
amptops.vm<-vgm(model="Gau", psill=7e+6, range=500, nugget=0)
plot(amptops.vv, model=amptops.vm)
# Fit the model parameters to the experimental amplitude variogram
# (the original cell referred to seis_creta objects that are not defined in this notebook)
amptops.vmf <- fit.variogram(amptops.vv, amptops.vm)
plot(amptops.vv, model=amptops.vmf)
print(amptops.vmf)
###Output
_____no_output_____
###Markdown
Cross-variogram between porosity and seismic amplitude (Top Ness)
###Code
pp <- gstat(id="AVGPOR", formula=AVGPOR~1, data=portops)
pp <- gstat(pp, id="AMP", formula=AMP~1, data=amptops)
crvv <- variogram(pp, cutoff=15000)
plot(crvv)
pp<-gstat(pp, id="AVGPOR", model=portops.vm, fill.all=T)
pp<-fit.lmc(crvv, pp)
plot(crvv, model=pp)
###Output
_____no_output_____
###Markdown
Top Etive
###Code
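# NOTE: well_etive/seis_etive here (and the well_*/seis_* tables in the next two
# cells) are assumed to be loaded beforehand in the same way as portops/amptops;
# they are not created anywhere in this notebook.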
coordinates(well_etive)<-~UTMX+UTMY
coordinates(seis_etive)<-~UTMX+UTMY
pp <- gstat(id="TVD", formula=TVD~1, data=well_etive)
pp <- gstat(pp, id="TWT", formula=TWT~1, data=seis_etive)
crvv <- variogram(pp, cutoff=15000)
plot(crvv, ylim=c(0, 25000))
###Output
_____no_output_____
###Markdown
Top Ness
###Code
coordinates(well_ness)<-~UTMX+UTMY
coordinates(seis_ness)<-~UTMX+UTMY
pp <- gstat(id="TVD", formula=TVD~1, data=well_ness)
pp <- gstat(pp, id="TWT", formula=TWT~1, data=seis_ness)
crvv <- variogram(pp, cutoff=15000)
plot(crvv, ylim=c(0, 25000))
###Output
_____no_output_____
###Markdown
Top Tarbert
###Code
coordinates(well_tarb)<-~UTMX+UTMY
coordinates(seis_tarb)<-~UTMX+UTMY
pp <- gstat(id="TVD", formula=TVD~1, data=well_tarb)
pp <- gstat(pp, id="TWT", formula=TWT~1, data=seis_tarb)
crvv <- variogram(pp, cutoff=15000)
plot(crvv, ylim=c(0, 25000))
###Output
_____no_output_____
###Markdown
Co-kriging
###Code
print(pp)
minx = min(portops$UTMX)
maxx = max(portops$UTMX)
miny = min(portops$UTMY)
maxy = max(portops$UTMY)
xx<-seq(from=minx, to=maxx, by=100)
yy<-seq(from=miny, to=maxy, by=100)
xy<-expand.grid(x=xx,y=yy)
coordinates(xy)<- ~ x+y
gridded(xy)<-T
# Restrict the prediction grid to the area of interest:
# UTM X 456000-458000, UTM Y 6782000-6785000
minx = 456000
maxx = 458000
miny = 6782000
maxy = 6785000
xx<-seq(from=minx, to=maxx, by=100)
yy<-seq(from=miny, to=maxy, by=100)
xy<-expand.grid(x=xx,y=yy)
coordinates(xy)<- ~ x+y
gridded(xy)<-T
print(pp)
pp$set=list(nocheck=1)
cokr <- predict(pp, xy)
print(cokr)
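# Visualise the co-kriged porosity and its variance (sketch): with id="AVGPOR",
# gstat normally names the prediction columns AVGPOR.pred and AVGPOR.var
spplot(cokr, "AVGPOR.pred", main="Co-kriged porosity, Top Ness")
spplot(cokr, "AVGPOR.var", main="Co-kriging variance")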
###Output
_____no_output_____ |
Big-Data-Clusters/CU6/Public/content/install/sop064-packman-uninstall-azdata.ipynb | ###Markdown
SOP064 - Uninstall azdata CLI (using package manager)=====================================================Steps----- Common functionsDefine helper functions used in this notebook.
###Code
# Define `run` function for transient fault handling, hyperlinked suggestions, and scrolling updates on Windows
import sys
import os
import re
import json
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
first_run = True
rules = None
debug_logging = False
def run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False):
"""Run shell command, stream stdout, print stderr and optionally return output
NOTES:
1. Commands that need this kind of ' quoting on Windows e.g.:
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}
Need to actually pass in as '"':
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}
The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:
`iter(p.stdout.readline, b'')`
The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
"""
MAX_RETRIES = 5
output = ""
retry = False
global first_run
global rules
if first_run:
first_run = False
rules = load_rules()
# When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
#
# ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
#
if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
cmd = cmd.replace("\n", " ")
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
# When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
#
if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")
# To aid supportability, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary == None:
which_binary = shutil.which(cmd_actual[0])
# Display an install HINT, so the user can click on a SOP to install the missing binary
#
if which_binary == None:
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
    # Work around an infinite hang when a notebook generates a non-zero return code, break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if rules is not None:
apply_expert_rules(line)
if wait:
p.wait()
except FileNotFoundError as e:
if install_hint is not None:
display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
try:
line_decoded = line.decode()
except UnicodeDecodeError:
# NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
#
# \xa0
#
# For example see this in the response from `az group create`:
#
# ERROR: Get Token request returned http error: 400 and server
# response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
# The refresh token has expired due to inactivity.\xa0The token was
# issued on 2018-10-25T23:35:11.9832872Z
#
# which generates the exception:
#
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
#
print("WARNING: Unable to decode stderr line, printing raw bytes:")
print(line)
line_decoded = ""
pass
else:
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
# inject HINTs to next TSG/SOP based on output in stderr
#
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
# apply expert rules (to run follow-on notebooks), based on output
#
if rules is not None:
apply_expert_rules(line_decoded)
# Verify if a transient error, if so automatically retry (recursive)
#
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: we avoided the infinite hang above in the `azdata notebook run` failure case by inferring success (from stdout output), so
# don't wait here if success is already known from above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
if exit_code_workaround != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
def load_json(filename):
"""Load a json file from disk and return the contents"""
with open(filename, encoding="utf8") as json_file:
return json.load(json_file)
def load_rules():
"""Load any 'expert rules' from the metadata of this notebook (.ipynb) that should be applied to the stderr of the running executable"""
# Load this notebook as json to get access to the expert rules in the notebook metadata.
#
try:
j = load_json("sop064-packman-uninstall-azdata.ipynb")
except:
pass # If the user has renamed the notebook, we can't load ourselves. NOTE: Is there a way in Jupyter to know your own filename?
else:
if "metadata" in j and \
"azdata" in j["metadata"] and \
"expert" in j["metadata"]["azdata"] and \
"expanded_rules" in j["metadata"]["azdata"]["expert"]:
rules = j["metadata"]["azdata"]["expert"]["expanded_rules"]
rules.sort() # Sort rules, so they run in priority order (the [0] element). Lowest value first.
# print (f"EXPERT: There are {len(rules)} rules to evaluate.")
return rules
def apply_expert_rules(line):
"""Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so
inject a 'HINT' to the follow-on SOP/TSG to run"""
global rules
for rule in rules:
notebook = rule[1]
cell_type = rule[2]
output_type = rule[3] # i.e. stream or error
output_type_name = rule[4] # i.e. ename or name
output_type_value = rule[5] # i.e. SystemExit or stdout
details_name = rule[6] # i.e. evalue or text
expression = rule[7].replace("\\*", "*") # Undo the escaping of '*' (something put a backslash in front of it)
if debug_logging:
print(f"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.")
if re.match(expression, line, re.DOTALL):
if debug_logging:
print("EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{4}'".format(output_type_name, output_type_value, expression, notebook))
match_found = True
display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.'))
print('Common functions defined successfully.')
# Hints for binary (transient fault) retry, (known) error and install guide
#
retry_hints = {'azdata': ['Endpoint sql-server-master does not exist', 'Endpoint livy does not exist', 'Failed to get state for cluster', 'Endpoint webhdfs does not exist', 'Adaptive Server is unavailable or does not exist', 'Error: Address already in use', 'Login timeout expired (0) (SQLDriverConnect)']}
error_hints = {'azdata': [['The token is expired', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Reason: Unauthorized', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Max retries exceeded with url: /api/v1/bdc/endpoints', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Look at the controller logs for more details', 'TSG027 - Observe cluster deployment', '../diagnose/tsg027-observe-bdc-create.ipynb'], ['provided port is already allocated', 'TSG062 - Get tail of all previous container logs for pods in BDC namespace', '../log-files/tsg062-tail-bdc-previous-container-logs.ipynb'], ['Create cluster failed since the existing namespace', 'SOP061 - Delete a big data cluster', '../install/sop061-delete-bdc.ipynb'], ['Failed to complete kube config setup', 'TSG067 - Failed to complete kube config setup', '../repair/tsg067-failed-to-complete-kube-config-setup.ipynb'], ['Error processing command: "ApiError', 'TSG110 - Azdata returns ApiError', '../repair/tsg110-azdata-returns-apierror.ipynb'], ['Error processing command: "ControllerError', 'TSG036 - Controller logs', '../log-analyzers/tsg036-get-controller-logs.ipynb'], ['ERROR: 500', 'TSG046 - Knox gateway logs', '../log-analyzers/tsg046-get-knox-logs.ipynb'], ['Data source name not found and no default driver specified', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ["Can't open lib 'ODBC Driver 17 for SQL Server", 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Control plane upgrade failed. Failed to upgrade controller.', 'TSG108 - View the controller upgrade config map', '../diagnose/tsg108-controller-failed-to-upgrade.ipynb'], ["[Errno 2] No such file or directory: '..\\\\", 'TSG053 - ADS Provided Books must be saved before use', '../repair/tsg053-save-book-first.ipynb'], ["NameError: name 'azdata_login_secret_name' is not defined", 'SOP013 - Create secret for azdata login (inside cluster)', '../common/sop013-create-secret-for-azdata-login.ipynb'], ['ERROR: No credentials were supplied, or the credentials were unavailable or inaccessible.', "TSG124 - 'No credentials were supplied' error from azdata login", '../repair/tsg124-no-credentials-were-supplied.ipynb']]}
install_hint = {'azdata': ['SOP063 - Install azdata CLI (using package manager)', '../install/sop063-packman-install-azdata.ipynb']}
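# Example usage of the run() helper defined above (illustrative only; the commands
# shown are placeholders and assume the azdata CLI is installed):
#
#   run('azdata --version')                                # stream output into the notebook
#   version = run('azdata --version', return_output=True)  # capture the output as a string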
###Output
_____no_output_____
###Markdown
Uninstall azdata CLI using OS specific package manager
###Code
import os
import sys
import platform
from pathlib import Path
import webbrowser # needed for the Linux branch below, which opens the install documentation
if platform.system() == "Darwin":
run('brew uninstall azdata-cli')
elif platform.system() == "Windows":
# Get the product guid to be able to do the .msi uninstall (this can take 2 or 3 minutes)
#
product_guid = run("""powershell -Command "$product = get-wmiobject Win32_Product | Where {$_.Name -match 'Azure Data CLI'}; $product.IdentifyingNumber" """, return_output=True)
print (f"The product guid is: {product_guid}")
# Uninstall using the product guid
#
# NOTES:
# 1. This will pop up the User Access Control dialog, press 'Yes'
# 2. The installer dialog will appear (it may start as a background window)
#
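# 3. The /passive switch runs the uninstall unattended, showing only a progress bar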
run(f'msiexec /uninstall {product_guid} /passive')
elif platform.system() == "Linux":
webbrowser.open('https://docs.microsoft.com/en-us/sql/big-data-cluster/deploy-install-azdata-linux-package')
else:
raise SystemExit(f"Platform '{platform.system()}' is not recognized, must be 'Darwin', 'Windows' or 'Linux'")
###Output
_____no_output_____
###Markdown
Related (SOP063, SOP054)
###Code
print('Notebook execution complete.')
###Output
_____no_output_____ |
jwst_validation_notebooks/extract_1d/jwst_extract_1d_miri_test/extract_1d-spec2-miri-lrs-slit.ipynb | ###Markdown
JWST Pipeline Validation Testing Notebook: MIRI LRS Slit Spec2: Extract1d() **Instruments Affected**: MIRI Table of Contents [Imports](imports_ID) [Introduction](intro_ID) [Get Documentation String for Markdown Blocks](markdown_from_docs) [Loading Data](data_ID) [Run JWST Pipeline](pipeline_ID) [Create Figure or Print Output](residual_ID) [About This Notebook](about_ID) ImportsList the library imports and why they are relevant to this notebook.* os, glob for general OS operations* numpy* astropy.io for opening fits files* inspect to get the docstring of our objects.* IPython.display for printing markdown output* jwst.datamodels for building model for JWST Pipeline* jwst.module.PipelineStep is the pipeline step being tested* matplotlib.pyplot to generate plot* json for editing json files* crds for retrieving reference files as needed[Top of Page](title_ID)
###Code
# Create a temporary directory to hold notebook output, and change the working directory to that directory.
from tempfile import TemporaryDirectory
import os
data_dir = TemporaryDirectory()
os.chdir(data_dir.name)
import numpy as np
from numpy.testing import assert_allclose
import os
from glob import glob
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
import astropy.io.fits as fits
import astropy.units as u
import jwst.datamodels as datamodels
from jwst.datamodels import RampModel, ImageModel
from jwst.pipeline import Detector1Pipeline, Spec2Pipeline
from jwst.extract_1d import Extract1dStep
from gwcs.wcstools import grid_from_bounding_box
from jwst.associations.asn_from_list import asn_from_list
from jwst.associations.lib.rules_level2_base import DMSLevel2bBase
import json
import crds
from ci_watson.artifactory_helpers import get_bigdata
%matplotlib inline
###Output
_____no_output_____
###Markdown
IntroductionIn this notebook we will test the **extract1d()** step of Spec2Pipeline() for **LRS slit** observations.Step description: https://jwst-pipeline.readthedocs.io/en/stable/jwst/extract_1d/index.htmlPipeline code: https://github.com/spacetelescope/jwst/tree/master/jwst/extract_1d Short description of the algorithmThe extract1d() step does the following for POINT source observations:* the code searches for the ROW that is the centre of the bounding box y-range* using the WCS information attached to the data in assign_wcs() it will determine the COLUMN location of the target* it computes the difference between this location and the centre of the bounding box x-range* BY DEFAULT the extraction aperture will be centred on the target location, and the flux in the aperture will be summed row by row. The default extraction width is a constant value of 11 PIXELS.In **Test 1** below, we load the json file that contains the default parameters, and override these to perform extraction over the full bounding box width for a single exposure. This tests the basic arithmetic of the extraction.In **Test 2**, we use the pair of nodded exposures and pass these in an association to the Spec2Pipeline, allowing extract1d() to perform the default extraction on a nodded pair. This is a very typical LRS science use case.[Top of Page](title_ID) Loading DataWe are using here a simulated LRS slit observation, generated with MIRISim v2.3.0 (as of Dec 2020). It is a simple along-slit-nodded observation of a point source (the input was modelled on the flux calibrator BD+60). LRS slit observations cover the full array. [Top of Page](title_ID)
###Code
Slitfile1 = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_spec2',
'spec2_miri_test',
'miri_lrs_slit_pt_nod1_v2.3.fits')
Slitfile2 = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_spec2',
'spec2_miri_test',
'miri_lrs_slit_pt_nod2_v2.3.fits')
files = [Slitfile1, Slitfile2]
###Output
_____no_output_____
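###Markdown
Before running the pipeline stages, the next cell gives a minimal sketch of the default aperture arithmetic described in the introduction: the extraction window is centred on the target column found from the WCS and spans the default width of 11 pixels. All numbers and variable names below are invented purely for illustration; this is not the pipeline implementation.
###Code
# Minimal illustrative sketch (hypothetical numbers) -- NOT the jwst pipeline code.
bbox_x = (303.5, 346.5) # assumed bounding-box column limits
target_col = 330.4 # assumed target column derived from the WCS
offset = target_col - 0.5 * (bbox_x[0] + bbox_x[1]) # shift of the target from the bbox centre
half_width = 11 / 2.0 # default extract_width is 11 pixels
xstart, xstop = target_col - half_width, target_col + half_width
print(f"offset from bbox centre = {offset:.2f} px; default window spans columns {xstart:.1f} to {xstop:.1f}")
###Output
_____no_output_____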
###Markdown
Run JWST PipelineFirst we run the data through the Detector1() pipeline to convert the raw counts into slopes. This should use the calwebb_detector1.cfg file. The output of this stage will then be run through the Spec2Pipeline. Extract_1d is the final step of this pipeline stage, so we will just run through the whole pipeline.[Top of Page](title_ID) Detector1Pipeline
###Code
det1 = []
# Run pipeline on both files
for ff in files:
d1 = Detector1Pipeline.call(ff, save_results=True)
det1.append(d1)
print(det1)
###Output
_____no_output_____
###Markdown
Spec2PipelineNext we go ahead to the Spec2 pipeline. At this stage we perform 2 tests:1. run the Spec2 pipeline on one single exposure, extracting over the full bounding box width. We compare this with the manual extraction over the same aperture. This tests whether the pipeline is performing the correct arithmetic in the extraction procedure.2. run the Spec2 pipeline on the nodded set of exposures. This mimics more closely how the pipeline will be run in an automated way during routine operations. This will test whether the pipeline is finding the source positions, and is able to extract both nodded observations in the same way.The initial steps will be the same for both tests and will be run on both initially.First we run the Spec2Pipeline() **skipping** the extract1d() step.
###Code
spec2 = []
for dd in det1:
s2 = Spec2Pipeline.call(dd.meta.filename,save_results=True, steps={"extract_1d": {"skip": True}})
spec2.append(s2)
calfiles = glob('*_cal.fits')
print(calfiles)
photom = []
nods = []
for cf in calfiles:
if 'nod1' in cf:
nn = 'nod1'
else:
nn = 'nod2'
ph = datamodels.open(cf)
photom.append(ph)
nods.append(nn)
print(photom)
###Output
_____no_output_____
###Markdown
Retrieve the wcs information from the PHOTOM output file so we know the coordinates of the bounding box and the wavelength grid. We use the ``grid_from_bounding_box`` function to generate these grids. We convert the wavelength grid into a wavelength vector by averaging over each row. This works because LRS distortion is minimal, so lines of equal wavelength run along rows (not 100% accurate but for this purpose this is correct).This cell performs a check that both nods have the same wavelength assignment over the full bounding box, which is expected.
###Code
lams = []
for ph,nn in zip(photom, nods):
bbox_w = ph.meta.wcs.bounding_box[0][1] - ph.meta.wcs.bounding_box[0][0]
bbox_ht = ph.meta.wcs.bounding_box[1][1] - ph.meta.wcs.bounding_box[1][0]
print('Model bbox ({1}) = {0} '.format(ph.meta.wcs.bounding_box, nn))
print('Model: Height x width of bounding box ({2})= {0} x {1} pixels'.format(bbox_ht, bbox_w, nn))
x,y = grid_from_bounding_box(ph.meta.wcs.bounding_box)
ra, dec, lam = ph.meta.wcs(x, y)
lam_vec = np.mean(lam, axis=1)
lams.append(lam_vec)
# check that the wavelength vectors for the nods are equal, then we can just work with one going forward
assert np.array_equal(lams[0], lams[1], equal_nan=True), "Arrays not equal!"
###Output
_____no_output_____
###Markdown
Test 1: Single exposure, full width extractionTo enable the extraction over the full width of the LRS slit bounding box, we have to edit the json parameters file and run the step with an override to the config file. We first run the Spec2Pipeline with its default settings, skipping the extract_1d() step. **The next few steps will be executed with one of the nods only.** Next we perform a manual extraction by first extracting the bounding box portion of the array, and then summing up the values in each row over the full BB width. This returns the flux in MJy/sr, which we convert to Jy using the pixel area. A MIRI imager pixel measures 0.11" on the side.**NOTE: as per default, the extract_1d() pipeline step will find the location of the target and offset the extraction window to be centred on the target. To extract the full slit, we want this to be disabled, so we set use_source_posn to False in the cfg input file.**
###Code
ph1 = photom[0]
nn = nods[0]
print('The next steps will be run only on {0}, the {1} exposure'.format(ph1.meta.filename, nn))
photom_sub = ph1.data[int(np.min(y)):int(np.max(y)+1), int(np.min(x)):int(np.max(x)+1)]
print('Cutout has dimensions ({0})'.format(np.shape(photom_sub)))
print('The cutout was taken from pixel {0} to pixel {1} in x'.format(int(np.min(x)),int(np.max(x)+1)))
xsub = np.sum(photom_sub, axis=1)
#remove some nans
lam_vec = lams[0]
xsub = xsub[~np.isnan(lam_vec)]
lam_vec = lam_vec[~np.isnan(lam_vec)]
# calculate the pixel area in sr
pix_scale = 0.11 * u.arcsec
pixar_as2 = pix_scale**2
pixar_sr = pixar_as2.to(u.sr)
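# for reference: (0.11 arcsec)**2 = 0.0121 arcsec**2, which is approximately 2.84e-13 sr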
###Output
_____no_output_____
###Markdown
Next we have to apply the aperture correction (new from B7.6). We load in the aperture correction reference file, identify the correct values, and multiply the calibrated spectrum by the correct numbers.
###Code
apcorr_file = 'jwst_miri_apcorr_0007.fits'
# retrieve this file
basename = crds.core.config.pop_crds_uri(apcorr_file)
filepath = crds.locate_file(basename, "jwst")
acref = datamodels.open(filepath)
# check that list item 0 is for the slit mode (subarray = FULL)
ind = 0
assert acref.apcorr_table[ind]['subarray']=='FULL', "index does not correspond to the correct subarray!"
xwidth = int(np.max(x)) - int(np.min(x))
print("extraction width = {0}".format(xwidth))
# first identify where the aperture width is in the "size" array
if xwidth >= np.max(acref.apcorr_table[ind]['size']):
size_ind = np.argwhere(acref.apcorr_table[ind]['size'] == np.max(acref.apcorr_table[ind]['size']))
else:
size_ind = np.argwhere(acref.apcorr_table[ind]['size'] == xwidth)
# take the vector from the apcorr_table at this location and extract.
apcorr_vec = acref.apcorr_table[ind]['apcorr'][:,size_ind[0][0]]
print(np.shape(apcorr_vec))
print(np.shape(acref.apcorr_table[ind]['wavelength']))
# now we create an interpolated vector of values corresponding to the lam_vec wavelengths.
# NOTE: the wavelengths are running in descending order so make sure assume_sorted = FALSE
intp_ac = interp1d(acref.apcorr_table[ind]['wavelength'], apcorr_vec, assume_sorted=False)
iapcorr = intp_ac(lam_vec)
plt.figure(figsize=[10,6])
plt.plot(acref.apcorr_table[ind]['wavelength'], apcorr_vec, 'g-', label='ref file')
plt.plot(lam_vec, iapcorr, 'r-', label='interpolated')
#plt.plot(lam_vec, ac_vals, 'r-', label='aperture corrections for {} px ap'.format(xwidth))
plt.show()
###Output
_____no_output_____
###Markdown
Now multiply the manually extracted spectra by the flux scaling and aperture correction vectors.
###Code
# now convert flux from MJy/sr to Jy using the pixel area, and apply the aperture correction
if (ph1.meta.bunit_data == 'MJy/sr'):
xsub_cal = xsub * pixar_sr.value * 1e6 * iapcorr
###Output
_____no_output_____
###Markdown
Next we run the ``extract_1d()`` step on the same file, editing the configuration to sum up over the entire aperture as we did above. We load in the json file, make adjustments and run the step with a config file override option.
###Code
extreffile='jwst_miri_extract1d_0004.json'
basename=crds.core.config.pop_crds_uri(extreffile)
path=crds.locate_file(basename,"jwst")
with open(path) as json_ref:
jsreforig = json.load(json_ref)
jsrefdict = jsreforig.copy()
jsrefdict['apertures'][0]['xstart'] = int(np.min(x))
jsrefdict['apertures'][0]['xstop'] = int(np.max(x)) + 1
#jsrefdict['apertures'][0]['use_source_posn'] = False
for element in jsrefdict['apertures']:
element.pop('extract_width', None)
element.pop('nod2_offset', None)
with open('extract1d_slit_full_spec2.json','w') as jsrefout:
json.dump(jsrefdict,jsrefout,indent=4)
with open('extract1d_slit_full_spec2.cfg','w') as cfg:
cfg.write('name = "extract_1d"'+'\n')
cfg.write('class = "jwst.extract_1d.Extract1dStep"'+'\n')
cfg.write(''+'\n')
cfg.write('log_increment = 50'+'\n')
cfg.write('smoothing_length = 0'+'\n')
cfg.write('use_source_posn = False' + '\n')
cfg.write('override_extract1d="extract1d_slit_full_spec2.json"'+'\n')
xsub_pipe = Extract1dStep.call(ph1, config_file='extract1d_slit_full_spec2.cfg', save_results=True)
###Output
_____no_output_____
###Markdown
If the step ran successfully, we can now look at the output and compare to our manual extraction spectrum. To ratio the 2 spectra we interpolate the manually extracted spectrum ``xsub_cal`` onto the pipeline-generated wavelength grid.
###Code
fig, ax = plt.subplots(ncols=3, nrows=1, figsize=[14,5])
ax[0].imshow(photom_sub, origin='lower', interpolation='None')
ax[1].plot(lam_vec, xsub_cal, 'b-', label='manual extraction')
ax[1].plot(xsub_pipe.spec[0].spec_table['WAVELENGTH'], xsub_pipe.spec[0].spec_table['FLUX'], 'g-', label='pipeline extraction')
ax[1].set_title('{0}, Full BBox extraction'.format(nn))
#interpolate the two onto the same grid so we can look at the difference
f = interp1d(lam_vec, xsub_cal, fill_value='extrapolate')
ixsub_cal = f(xsub_pipe.spec[0].spec_table['WAVELENGTH'])
diff = ((xsub_pipe.spec[0].spec_table['FLUX'] - ixsub_cal) / xsub_pipe.spec[0].spec_table['FLUX']) * 100.
ax[2].plot(xsub_pipe.spec[0].spec_table['WAVELENGTH'], diff, 'k-')
ax[2].set_title('Difference manual - pipeline extracted spectra')
ax[2].set_ylim([-20., 20.])
fig.show()
###Output
_____no_output_____
###Markdown
We check that the difference between the two is on average <= 1 per cent in the core range between 5 and 10 micron.
###Code
inds = (xsub_pipe.spec[0].spec_table['WAVELENGTH'] >= 5.0) & (xsub_pipe.spec[0].spec_table['WAVELENGTH'] <= 10.)
print(np.mean(diff[inds]))
try:
assert np.mean(diff[inds]) <= 1.0, "Mean difference between pipeline and manual extraction >= 1 per cent in 5-10 um. CHECK."
except AssertionError as e:
print("****************************************************")
print("")
print("ERROR: {}".format(e))
print("")
print("****************************************************")
###Output
_____no_output_____
###Markdown
------------------------------**END OF TEST PART 1**---------------------------------------------- Test 2: Nodded observation, two exposuresIn this second test we use both nodded observations. In this scenario, the nods are used as each other's background observations and we need to ensure that the extraction aperture is placed in the right position with a realistic aperture for both nods.We will re-run the first steps of the Spec2Pipeline, so that the nods are used as each other's backgrounds. This requires creation of an association from which the Spec2Pipeline will be run. Then we will run them both through the extract_1d() step with the default parameters, checking:* the location of the aperture* the extraction width
###Code
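# Build a Level-2b association in which each nod is a science exposure and the opposite nod is attached as its background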
asn_files = [det1[0].meta.filename, det1[1].meta.filename]
asn = asn_from_list(asn_files, rule=DMSLevel2bBase, meta={'program':'test', 'target':'bd60', 'asn_pool':'test'})
# now add the opposite nod as background exposure:
asn['products'][0]['members'].append({'expname': 'miri_lrs_slit_pt_nod2_v2.3_rate.fits', 'exptype':'background'})
asn['products'][1]['members'].append({'expname': 'miri_lrs_slit_pt_nod1_v2.3_rate.fits', 'exptype':'background'})
# write this out to a json file
with open('lrs-slit-test_asn.json', 'w') as fp:
fp.write(asn.dump()[1])
###Output
_____no_output_____
###Markdown
Now run the Spec2Pipeline with this association file as input, instead of the individual FITS files or datamodels.
###Code
sp2 = Spec2Pipeline.call('lrs-slit-test_asn.json', save_results=True)
x1dfiles = glob('*_x1d.fits')
print(x1dfiles)
x1ds = [datamodels.open(xf) for xf in x1dfiles]
fig = plt.figure(figsize=[15,5])
for x1 in x1ds:
if 'nod1' in x1.meta.filename:
nn = 'nod1'
else:
nn = 'nod2'
plt.plot(x1.spec[0].spec_table['WAVELENGTH'], x1.spec[0].spec_table['FLUX'], label=nn)
plt.plot(xsub_pipe.spec[0].spec_table['WAVELENGTH'], xsub_pipe.spec[0].spec_table['FLUX'], label='nod 1, full bbox extracted')
plt.legend(fontsize='large')
plt.grid()
plt.xlim([4.5, 12.5])
plt.xlabel('wavelength (micron)', fontsize='large')
plt.ylabel('Jy', fontsize='large')
fig.show()
###Output
_____no_output_____
###Markdown
What we will test for:* the extracted spectra should be near-identical (chosen criterion: the mean deviation of their ratio from 1 is <= 5%)**Further tests to add:*** perform a full verification of the extraction at the source position and with the same extraction width as in the parameters file.**If the ``assert`` statement below passes, we consider the test a success.**
###Code
inds = x1ds[0].spec[0].spec_table['FLUX'] > 0.00
ratio = x1ds[0].spec[0].spec_table['FLUX'][inds] / x1ds[1].spec[0].spec_table['FLUX'][inds]
infs = np.isinf(ratio-1.0)
try:
assert np.mean(np.abs(ratio[~infs] - 1.)) <= 0.05, "Extracted spectra don't match! CHECK!"
except AssertionError as e:
print("****************************************************")
print("")
print("ERROR: {}".format(e))
print("")
print("****************************************************")
###Output
_____no_output_____
###Markdown
JWST Pipeline Validation Testing Notebook: MIRI LRS Slit Spec2: Extract1d() **Instruments Affected**: MIRI Table of Contents [Imports](imports_ID) [Introduction](intro_ID) [Get Documentation String for Markdown Blocks](markdown_from_docs) [Loading Data](data_ID) [Run JWST Pipeline](pipeline_ID) [Create Figure or Print Output](residual_ID) [About This Notebook](about_ID) ImportsList the library imports and why they are relevant to this notebook.* os, glob for general OS operations* numpy* astropy.io for opening fits files* inspect to get the docstring of our objects.* IPython.display for printing markdown output* jwst.datamodels for building model for JWST Pipeline* jwst.module.PipelineStep is the pipeline step being tested* matplotlib.pyplot to generate plot* json for editing json files* crds for retrieving reference files as needed[Top of Page](title_ID)
###Code
# Create a temporary directory to hold notebook output, and change the working directory to that directory.
from tempfile import TemporaryDirectory
import os
data_dir = TemporaryDirectory()
os.chdir(data_dir.name)
import numpy as np
from numpy.testing import assert_allclose
import os
from glob import glob
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
import astropy.io.fits as fits
import astropy.units as u
import jwst.datamodels as datamodels
from jwst.datamodels import RampModel, ImageModel
from jwst.pipeline import Detector1Pipeline, Spec2Pipeline
from jwst.extract_1d import Extract1dStep
from gwcs.wcstools import grid_from_bounding_box
from jwst.associations.asn_from_list import asn_from_list
from jwst.associations.lib.rules_level2_base import DMSLevel2bBase
import json
import crds
from ci_watson.artifactory_helpers import get_bigdata
%matplotlib inline
###Output
_____no_output_____
###Markdown
IntroductionIn this notebook we will test the **extract1d()** step of Spec2Pipeline() for **LRS slit** observations.Step description: https://jwst-pipeline.readthedocs.io/en/stable/jwst/extract_1d/index.htmlPipeline code: https://github.com/spacetelescope/jwst/tree/master/jwst/extract_1d Short description of the algorithmThe extract1d() step does the following for POINT source observations:* the code searches for the ROW that is the centre of the bounding box y-range* using the WCS information attached to the data in assign_wcs() it will determine the COLUMN location of the target* it computes the difference between this location and the centre of the bounding box x-range* BY DEFAULT the extraction aperture will be centred on the target location, and the flux in the aperture will be summed row by row. The default extraction width is a constant value of 11 PIXELS.In **Test 1** below, we load the json file that contains the default parameters, and override these to perform extraction over the full bounding box width for a single exposure. This tests the basic arithmetic of the extraction.In **Test 2**, we use the pair of nodded exposures and pass these in an association to the Spec2Pipeline, allowing extract1d() to perform the default extraction on a nodded pair. This is a very typical LRS science use case.[Top of Page](title_ID) Loading DataWe are using here a simulated LRS slit observation, generated with MIRISim v2.3.0 (as of Dec 2020). It is a simple along-slit-nodded observation of a point source (the input was modelled on the flux calibrator BD+60). LRS slit observations cover the full array. [Top of Page](title_ID)
###Code
Slitfile1 = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_spec2',
'spec2_miri_test',
'miri_lrs_slit_pt_nod1_v2.3.fits')
Slitfile2 = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_spec2',
'spec2_miri_test',
'miri_lrs_slit_pt_nod2_v2.3.fits')
files = [Slitfile1, Slitfile2]
###Output
_____no_output_____
###Markdown
Run JWST PipelineFirst we run the data through the Detector1() pipeline to convert the raw counts into slopes. This should use the calwebb_detector1.cfg file. The output of this stage will then be run through the Spec2Pipeline. Extract_1d is the final step of this pipeline stage, so we will just run through the whole pipeline.[Top of Page](title_ID) Detector1Pipeline
###Code
det1 = []
# Run pipeline on both files
for ff in files:
d1 = Detector1Pipeline.call(ff, save_results=True)
det1.append(d1)
print(det1)
###Output
_____no_output_____
###Markdown
Spec2PipelineNext we go ahead to the Spec2 pipeline. At this stage we perform 2 tests:1. run the Spec2 pipeline on one single exposure, extracting over the full bounding box width. We compare this with the manual extraction over the same aperture. This tests whether the pipeline is performing the correct arithmetic in the extraction procedure.2. run the Spec2 pipeline on the nodded set of exposures. This mimics more closely how the pipeline will be run in an automated way during routine operations. This will test whether the pipeline is finding the source positions, and is able to extract both nodded observations in the same way.The initial steps will be the same for both tests and will be run on both initially. Spectral extraction is performed on the output file of the 2D resampled images (\_s2d.fits)First we run the Spec2Pipeline() **skipping** the extract1d() step.
###Code
spec2 = []
for dd in det1:
s2 = Spec2Pipeline.call(dd.meta.filename,save_results=True, steps={"extract_1d": {"skip": True}})
spec2.append(s2)
#calfiles = glob('*_cal.fits')
calfiles = glob('*_s2d.fits')
print(calfiles)
s2d = []
nods = []
for cf in calfiles:
if 'nod1' in cf:
nn = 'nod1'
else:
nn = 'nod2'
ph = datamodels.open(cf)
s2d.append(ph)
nods.append(nn)
print(s2d)
###Output
_____no_output_____
###Markdown
Retrieve the wcs information from the S2D output file so we know the coordinates of the bounding box and the wavelength grid. We use the ``grid_from_bounding_box`` function to generate these grids. We convert the wavelength grid into a wavelength vector by averaging over each row. This works because LRS distortion is minimal, so lines of equal wavelength run along rows (not 100% accurate but for this purpose this is correct).This cell performs a check that both nods have the same wavelength assignment over the full bounding box, which is expected.
###Code
lams = []
for ss,nn in zip(s2d, nods):
bbox_w = ss.meta.wcs.bounding_box[0][1] - ss.meta.wcs.bounding_box[0][0]
bbox_ht = ss.meta.wcs.bounding_box[1][1] - ss.meta.wcs.bounding_box[1][0]
print('Model bbox ({1}) = {0} '.format(ss.meta.wcs.bounding_box, nn))
print('Model: Height x width of bounding box ({2})= {0} x {1} pixels'.format(bbox_ht, bbox_w, nn))
x,y = grid_from_bounding_box(ss.meta.wcs.bounding_box)
ra, dec, lam = ss.meta.wcs(x, y)
lam_vec = np.mean(lam, axis=1)
lams.append(lam_vec)
# check that the wavelength vectors for the nods are equal, then we can just work with one going forward
assert np.array_equal(lams[0], lams[1], equal_nan=True), "Arrays not equal!"
###Output
_____no_output_____
###Markdown
Test 1: Single exposure, full width extractionTo enable the extraction over the full width of the LRS slit bounding box, we have to edit the json parameters file and run the step with an override to the config file. **The next few steps will be executed with one of the nods only.** Next we perform a manual extraction by first extracting the bounding box portion of the array, and then summing up the values in each row over the full BB width. This returns the flux in MJy/sr, which we convert to Jy using the pixel area. A MIRI imager pixel measures 0.11" on the side.**NOTE: as per default, the extract_1d() pipeline step will find the location of the target and offset the extraction window to be centred on the target. To extract the full slit, we want this to be disabled, so we set use_source_posn to False in the cfg input file.**
###Code
s1 = s2d[0]
nn = nods[0]
print('The next steps will be run only on {0}, the {1} exposure'.format(s1.meta.filename, nn))
#photom_sub = ph1.data[int(np.min(y)):int(np.max(y)+1), int(np.min(x)):int(np.max(x)+1)]
s2d_sub = s2d[0].data
print('Cutout has dimensions ({0})'.format(np.shape(s2d_sub)))
print('The cutout was taken from pixel {0} to pixel {1} in x'.format(int(np.min(x)),int(np.max(x)+1)))
xsub = np.sum(s2d_sub, axis=1)
#remove some nans
lam_vec = lams[0]
xsub = xsub[~np.isnan(lam_vec)]
lam_vec = lam_vec[~np.isnan(lam_vec)]
# calculate the pixel area in sr
pix_scale = 0.11 * u.arcsec
pixar_as2 = pix_scale**2
pixar_sr = pixar_as2.to(u.sr)
###Output
_____no_output_____
###Markdown
Next we have to apply the aperture correction (new from B7.6). We load in the aperture correction reference file, identify the correct values, and multiply the calibrated spectrum by the correct numbers. When we are extracting over the full aperture, the correction should effectively be 1 as we are not losing any of the flux.
###Code
apcorr_file = 'jwst_miri_apcorr_0007.fits'
# retrieve this file and open as datamodel
basename = crds.core.config.pop_crds_uri(apcorr_file)
filepath = crds.locate_file(basename, "jwst")
acref = datamodels.open(filepath)
# check that list item 0 is for the slit mode (subarray = FULL)
ind = 0
assert acref.apcorr_table[ind]['subarray']=='FULL', "index does not correspond to the correct subarray!"
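# number of columns spanned by the bounding box (the +1 counts both end columns)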
xwidth = int(np.max(x))+1 - int(np.min(x))
print("extraction width = {0}".format(xwidth))
# first identify where the aperture width is in the "size" array
if xwidth >= np.max(acref.apcorr_table[ind]['size']):
size_ind = np.argwhere(acref.apcorr_table[ind]['size'] == np.max(acref.apcorr_table[ind]['size']))
else:
size_ind = np.argwhere(acref.apcorr_table[ind]['size'] == xwidth)
# take the vector from the apcorr_table at this location and extract.
apcorr_vec = acref.apcorr_table[ind]['apcorr'][:,size_ind[0][0]]
print(np.shape(apcorr_vec))
print(np.shape(acref.apcorr_table[ind]['wavelength']))
# now we create an interpolated vector of values corresponding to the lam_vec wavelengths.
# NOTE: the wavelengths are running in descending order so make sure assume_sorted = FALSE
intp_ac = interp1d(acref.apcorr_table[ind]['wavelength'], apcorr_vec, kind='linear', assume_sorted=False)
iapcorr = intp_ac(lam_vec)
plt.figure(figsize=[10,6])
plt.plot(acref.apcorr_table[ind]['wavelength'], apcorr_vec, 'g-', label='ref file')
plt.plot(lam_vec, iapcorr, 'r-', label='interpolated')
#plt.plot(lam_vec, ac_vals, 'r-', label='aperture corrections for {} px ap'.format(xwidth))
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Now multiply the manually extracted spectra by the flux scaling and aperture correction vectors.
###Code
# now convert flux from MJy/sr to Jy using the pixel area, and apply the aperture correction
if (s1.meta.bunit_data == 'MJy/sr'):
xsub_cal = xsub * pixar_sr.value * 1e6 * iapcorr
###Output
_____no_output_____
###Markdown
Next we run the ``extract_1d()`` step on the same file, editing the configuration to sum up over the entire aperture as we did above. We load in the json file, make adjustments and run the step with a config file override option.
###Code
extreffile='jwst_miri_extract1d_0004.json'
basename=crds.core.config.pop_crds_uri(extreffile)
path=crds.locate_file(basename,"jwst")
with open(path) as json_ref:
jsreforig = json.load(json_ref)
jsrefdict = jsreforig.copy()
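# widen the default aperture so that the extraction spans the full bounding-box width, as described above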
jsrefdict['apertures'][0]['xstart'] = int(np.min(x))
jsrefdict['apertures'][0]['xstop'] = int(np.max(x)) + 1
#jsrefdict['apertures'][0]['use_source_posn'] = False
for element in jsrefdict['apertures']:
element.pop('extract_width', None)
element.pop('nod2_offset', None)
with open('extract1d_slit_full_spec2.json','w') as jsrefout:
json.dump(jsrefdict,jsrefout,indent=4)
with open('extract1d_slit_full_spec2.cfg','w') as cfg:
cfg.write('name = "extract_1d"'+'\n')
cfg.write('class = "jwst.extract_1d.Extract1dStep"'+'\n')
cfg.write(''+'\n')
cfg.write('log_increment = 50'+'\n')
cfg.write('smoothing_length = 0'+'\n')
cfg.write('use_source_posn = False' + '\n')
cfg.write('override_extract1d="extract1d_slit_full_spec2.json"'+'\n')
xsub_pipe = Extract1dStep.call(s1, config_file='extract1d_slit_full_spec2.cfg', save_results=True)
###Output
_____no_output_____
###Markdown
If the step ran successfully, we can now look at the output and compare to our manual extraction spectrum. To ratio the 2 spectra we interpolate the manually extracted spectrum ``xsub_cal`` onto the pipeline-generated wavelength grid.
###Code
fig, ax = plt.subplots(ncols=3, nrows=1, figsize=[14,5])
ax[0].imshow(s2d_sub, origin='lower', interpolation='None')
ax[1].plot(lam_vec, xsub_cal, 'b-', label='manual extraction')
ax[1].plot(xsub_pipe.spec[0].spec_table['WAVELENGTH'], xsub_pipe.spec[0].spec_table['FLUX'], 'g-', label='pipeline extraction')
ax[1].set_title('{0}, Full BBox extraction'.format(nn))
ax[1].legend()
ax[1].set_xlim([5, 12])
#ax[1].set_ylim([0.004, 0.008])
#interpolate the two onto the same grid so we can look at the difference
f = interp1d(lam_vec, xsub_cal, kind='linear', fill_value='extrapolate')
ixsub_cal = f(xsub_pipe.spec[0].spec_table['WAVELENGTH'])
#ax[1].plot(xsub_pipe.spec[0].spec_table['WAVELENGTH'], ixsub_cal, 'r-', label='manual, interpolated')
diff = ((xsub_pipe.spec[0].spec_table['FLUX'] - ixsub_cal) / xsub_pipe.spec[0].spec_table['FLUX']) * 100.
ax[2].plot(xsub_pipe.spec[0].spec_table['WAVELENGTH'], diff, 'k-')
ax[2].axhline(y=1., xmin=0, xmax=1, color='r', ls='--')
ax[2].axhline(y=-1., xmin=0, xmax=1, color='r', ls='--')
ax[2].set_title('Difference manual - pipeline extracted spectra')
ax[2].set_ylim([-5., 5.])
ax[2].set_xlim([4.8, 10.2])
fig.show()
###Output
_____no_output_____
###Markdown
We check that the difference between the two is on average <= 1 per cent in the core range between 5 and 10 micron.
###Code
inds = (xsub_pipe.spec[0].spec_table['WAVELENGTH'] >= 5.0) & (xsub_pipe.spec[0].spec_table['WAVELENGTH'] <= 10.)
print('Mean difference between pipeline and manual extraction = {:.4f} per cent'.format(np.mean(diff[inds])))
try:
assert np.mean(diff[inds]) <= 1.0, "Mean difference between pipeline and manual extraction >= 1 per cent in 5-10 um. CHECK."
except AssertionError as e:
print("****************************************************")
print("")
print("ERROR: {}".format(e))
print("")
print("****************************************************")
###Output
_____no_output_____
###Markdown
------------------------------**END OF TEST PART 1**---------------------------------------------- Test 2: Nodded observation, two exposuresIn this second test we use both nodded observations. In this scenario, the nods are used as each other's background observations and we need to ensure that the extraction aperture is placed in the right position with a realistic aperture for both nods.We will re-run the first steps of the Spec2Pipeline, so that the nods are used as each other's backgrounds. This requires creation of an association from which the Spec2Pipeline will be run. Then we will run them both through the extract_1d() step with the default parameters, checking:* the location of the aperture* the extraction width
###Code
asn_files = [det1[0].meta.filename, det1[1].meta.filename]
asn = asn_from_list(asn_files, rule=DMSLevel2bBase, meta={'program':'test', 'target':'bd60', 'asn_pool':'test'})
# now add the opposite nod as background exposure:
asn['products'][0]['members'].append({'expname': 'miri_lrs_slit_pt_nod2_v2.3_rate.fits', 'exptype':'background'})
asn['products'][1]['members'].append({'expname': 'miri_lrs_slit_pt_nod1_v2.3_rate.fits', 'exptype':'background'})
# write this out to a json file
with open('lrs-slit-test_asn.json', 'w') as fp:
fp.write(asn.dump()[1])
###Output
_____no_output_____
###Markdown
Now run the Spec2Pipeline with this association file as input, instead of the individual FITS files or datamodels.
###Code
sp2 = Spec2Pipeline.call('lrs-slit-test_asn.json', save_results=True)
x1dfiles = glob('*_x1d.fits')
print(x1dfiles)
x1ds = [datamodels.open(xf) for xf in x1dfiles]
fig = plt.figure(figsize=[15,5])
for x1 in x1ds:
if 'nod1' in x1.meta.filename:
nn = 'nod1'
else:
nn = 'nod2'
plt.plot(x1.spec[0].spec_table['WAVELENGTH'], x1.spec[0].spec_table['FLUX'], label=nn)
plt.plot(xsub_pipe.spec[0].spec_table['WAVELENGTH'], xsub_pipe.spec[0].spec_table['FLUX'], label='nod 1, full bbox extracted')
plt.legend(fontsize='large')
plt.grid()
plt.xlim([4.5, 12.5])
plt.xlabel('wavelength (micron)', fontsize='large')
plt.ylabel('Jy', fontsize='large')
fig.show()
###Output
_____no_output_____
###Markdown
What we will test for:* the extracted spectra should be near-identical (chosen criterion: the mean deviation of their ratio from 1 is <= 5%)**Further tests to add:*** perform a full verification of the extraction at the source position and with the same extraction width as in the parameters file.**If the ``assert`` statement below passes, we consider the test a success.**
###Code
inds = x1ds[0].spec[0].spec_table['FLUX'] > 0.00
ratio = x1ds[0].spec[0].spec_table['FLUX'][inds] / x1ds[1].spec[0].spec_table['FLUX'][inds]
infs = np.isinf(ratio-1.0)
try:
assert np.mean(np.abs(ratio[~infs] - 1.)) <= 0.05, "Extracted spectra don't match! CHECK!"
except AssertionError as e:
print("****************************************************")
print("")
print("ERROR: {}".format(e))
print("")
print("****************************************************")
###Output
_____no_output_____
###Markdown
JWST Pipeline Validation Testing Notebook: MIRI LRS Slit Spec2: Extract1d() **Instruments Affected**: MIRI Table of Contents [Imports](imports_ID) [Introduction](intro_ID) [Get Documentation String for Markdown Blocks](markdown_from_docs) [Loading Data](data_ID) [Run JWST Pipeline](pipeline_ID) [Create Figure or Print Output](residual_ID) [About This Notebook](about_ID) ImportsList the library imports and why they are relevant to this notebook.* os, glob for general OS operations* numpy* astropy.io for opening fits files* inspect to get the docstring of our objects.* IPython.display for printing markdown output* jwst.datamodels for building model for JWST Pipeline* jwst.module.PipelineStep is the pipeline step being tested* matplotlib.pyplot to generate plot* json for editing json files* crds for retrieving reference files as needed[Top of Page](title_ID)
###Code
# Create a temporary directory to hold notebook output, and change the working directory to that directory.
from tempfile import TemporaryDirectory
import os
data_dir = TemporaryDirectory()
os.chdir(data_dir.name)
import os
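# If a CRDS cache location is requested via CRDS_CACHE_TYPE, point CRDS_PATH at it before any reference files are fetched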
if 'CRDS_CACHE_TYPE' in os.environ:
if os.environ['CRDS_CACHE_TYPE'] == 'local':
os.environ['CRDS_PATH'] = os.path.join(os.environ['HOME'], 'crds', 'cache')
elif os.path.isdir(os.environ['CRDS_CACHE_TYPE']):
os.environ['CRDS_PATH'] = os.environ['CRDS_CACHE_TYPE']
print('CRDS cache location: {}'.format(os.environ['CRDS_PATH']))
import numpy as np
from numpy.testing import assert_allclose
import os
from glob import glob
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
import astropy.io.fits as fits
import astropy.units as u
import jwst.datamodels as datamodels
from jwst.datamodels import RampModel, ImageModel
from jwst.pipeline import Detector1Pipeline, Spec2Pipeline
from jwst.extract_1d import Extract1dStep
from gwcs.wcstools import grid_from_bounding_box
from jwst.associations.asn_from_list import asn_from_list
from jwst.associations.lib.rules_level2_base import DMSLevel2bBase
import json
import crds
from ci_watson.artifactory_helpers import get_bigdata
%matplotlib inline
###Output
_____no_output_____
###Markdown
IntroductionIn this notebook we will test the **extract1d()** step of Spec2Pipeline() for **LRS slit** observations.Step description: https://jwst-pipeline.readthedocs.io/en/stable/jwst/extract_1d/index.htmlPipeline code: https://github.com/spacetelescope/jwst/tree/master/jwst/extract_1d Short description of the algorithmThe extract1d() step does the following for POINT source observations:* the code searches for the ROW that is the centre of the bounding box y-range* using the WCS information attached to the data in assign_wcs() it will determine the COLUMN location of the target* it computes the difference between this location and the centre of the bounding box x-range* BY DEFAULT the extraction aperture will be centred on the target location, and the flux in the aperture will be summed row by row. The default extraction width is a constant value of 11 PIXELS.In **Test 1** below, we load the json file that contains the default parameters, and override these to perform extraction over the full bounding box width for a single exposure. This tests the basic arithmetic of the extraction.In **Test 2**, we use the pair of nodded exposures and pass these in an association to the Spec2Pipeline, allowing extract1d() to perform the default extraction on a nodded pair. This is a very typical LRS science use case.[Top of Page](title_ID) Loading DataWe are using here a simulated LRS slit observation, generated with MIRISim v2.3.0 (as of Dec 2020). It is a simple along-slit-nodded observation of a point source (the input was modelled on the flux calibrator BD+60). LRS slit observations cover the full array. [Top of Page](title_ID)
###Code
Slitfile1 = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_spec2',
'spec2_miri_test',
'miri_lrs_slit_pt_nod1_v2.3.fits')
Slitfile2 = get_bigdata('jwst_validation_notebooks',
'validation_data',
'calwebb_spec2',
'spec2_miri_test',
'miri_lrs_slit_pt_nod2_v2.3.fits')
files = [Slitfile1, Slitfile2]
###Output
_____no_output_____
###Markdown
Run JWST PipelineFirst we run the data through the Detector1() pipeline to convert the raw counts into slopes. This should use the calwebb_detector1.cfg file. The output of this stage will then be run through the Spec2Pipeline. Extract_1d is the final step of this pipeline stage, so we will just run through the whole pipeline.[Top of Page](title_ID) Detector1Pipeline
###Code
det1 = []
# Run pipeline on both files
for ff in files:
d1 = Detector1Pipeline.call(ff, save_results=True)
det1.append(d1)
print(det1)
###Output
_____no_output_____
###Markdown
Spec2PipelineNext we go ahead to the Spec2 pipeline. At this stage we perform 2 tests:1. run the Spec2 pipeline on one single exposure, extracting over the full bounding box width. We compare this with the manual extraction over the same aperture. This tests whether the pipeline is performing the correct arithmetic in the extraction procedure.2. run the Spec2 pipeline on the nodded set of exposures. This mimics more closely how the pipeline will be run in an automated way during routine operations. This will test whether the pipeline is finding the source positions, and is able to extract both nodded observations in the same way.The initial steps will be the same for both tests and will be run on both initially. Spectral extraction is performed on the output file of the 2D resampled images (\_s2d.fits)First we run the Spec2Pipeline() **skipping** the extract1d() step.
###Code
spec2 = []
for dd in det1:
s2 = Spec2Pipeline.call(dd.meta.filename,save_results=True, steps={"extract_1d": {"skip": True}})
spec2.append(s2)
#calfiles = glob('*_cal.fits')
calfiles = glob('*_s2d.fits')
print(calfiles)
s2d = []
nods = []
for cf in calfiles:
if 'nod1' in cf:
nn = 'nod1'
else:
nn = 'nod2'
ph = datamodels.open(cf)
s2d.append(ph)
nods.append(nn)
print(s2d)
###Output
_____no_output_____
###Markdown
Retrieve the wcs information from the S2D output file so we know the coordinates of the bounding box and the wavelength grid. We use the ``grid_from_bounding_box`` function to generate these grids. We convert the wavelength grid into a wavelength vector by averaging over each row. This works because LRS distortion is minimal, so lines of equal wavelength run along rows (not 100% accurate but for this purpose this is correct).This cell performs a check that both nods have the same wavelength assignment over the full bounding box, which is expected.
###Code
lams = []
for ss,nn in zip(s2d, nods):
bbox_w = ss.meta.wcs.bounding_box[0][1] - ss.meta.wcs.bounding_box[0][0]
bbox_ht = ss.meta.wcs.bounding_box[1][1] - ss.meta.wcs.bounding_box[1][0]
print('Model bbox ({1}) = {0} '.format(ss.meta.wcs.bounding_box, nn))
print('Model: Height x width of bounding box ({2})= {0} x {1} pixels'.format(bbox_ht, bbox_w, nn))
x,y = grid_from_bounding_box(ss.meta.wcs.bounding_box)
ra, dec, lam = ss.meta.wcs(x, y)
lam_vec = np.mean(lam, axis=1)
lams.append(lam_vec)
# check that the wavelength vectors for the nods are equal, then we can just work with one going forward
assert np.array_equal(lams[0], lams[1], equal_nan=True), "Arrays not equal!"
###Output
_____no_output_____
###Markdown
Test 1: Single exposure, full width extractionTo enable the extraction over the full width of the LRS slit bounding box, we have to edit the json parameters file and run the step with an override to the config file. **The next few steps will be executed with one of the nods only.** Next we perform a manual extraction by first extracting the bounding box portion of the array, and then summing up the values in each row over the full BB width. This returns the flux in MJy/sr, which we convert to Jy using the pixel area. A MIRI imager pixel measures 0.11" on the side.**NOTE: as per default, the extract_1d() pipeline step will find the location of the target and offset the extraction window to be centred on the target. To extract the full slit, we want this to be disabled, so we set use_source_posn to False in the cfg input file.**
###Code
s1 = s2d[0]
nn = nods[0]
print('The next steps will be run only on {0}, the {1} exposure'.format(s1.meta.filename, nn))
#photom_sub = ph1.data[int(np.min(y)):int(np.max(y)+1), int(np.min(x)):int(np.max(x)+1)]
s2d_sub = s2d[0].data
print('Cutout has dimensions ({0})'.format(np.shape(s2d_sub)))
print('The cutout was taken from pixel {0} to pixel {1} in x'.format(int(np.min(x)),int(np.max(x)+1)))
xsub = np.sum(s2d_sub, axis=1)
#remove some nans
lam_vec = lams[0]
xsub = xsub[~np.isnan(lam_vec)]
lam_vec = lam_vec[~np.isnan(lam_vec)]
# calculate the pixel area in sr
pix_scale = 0.11 * u.arcsec
pixar_as2 = pix_scale**2
pixar_sr = pixar_as2.to(u.sr)
###Output
_____no_output_____
###Markdown
Next we have to apply the aperture correction (new from B7.6). We load in the aperture correction reference file, identify the correct values, and multiply the calibrated spectrum by the correct numbers. When we are extracting over the full aperture, the correction should effectively be 1 as we are not losing any of the flux.
###Code
apcorr_file = 'jwst_miri_apcorr_0007.fits'
# retrieve this file and open as datamodel
basename = crds.core.config.pop_crds_uri(apcorr_file)
filepath = crds.locate_file(basename, "jwst")
acref = datamodels.open(filepath)
# check that list item 0 is for the slit mode (subarray = FULL)
ind = 0
assert acref.apcorr_table[ind]['subarray']=='FULL', "index does not correspond to the correct subarray!"
xwidth = int(np.max(x))+1 - int(np.min(x))
print("extraction width = {0}".format(xwidth))
# first identify where the aperture width is in the "size" array
if xwidth >= np.max(acref.apcorr_table[ind]['size']):
size_ind = np.argwhere(acref.apcorr_table[ind]['size'] == np.max(acref.apcorr_table[ind]['size']))
else:
size_ind = np.argwhere(acref.apcorr_table[ind]['size'] == xwidth)
# take the vector from the apcorr_table at this location and extract.
apcorr_vec = acref.apcorr_table[ind]['apcorr'][:,size_ind[0][0]]
print(np.shape(apcorr_vec))
print(np.shape(acref.apcorr_table[ind]['wavelength']))
# now we create an interpolated vector of values corresponding to the lam_vec wavelengths.
# NOTE: the wavelengths are running in descending order so make sure assume_sorted = FALSE
intp_ac = interp1d(acref.apcorr_table[ind]['wavelength'], apcorr_vec, kind='linear', assume_sorted=False)
iapcorr = intp_ac(lam_vec)
plt.figure(figsize=[10,6])
plt.plot(acref.apcorr_table[ind]['wavelength'], apcorr_vec, 'g-', label='ref file')
plt.plot(lam_vec, iapcorr, 'r-', label='interpolated')
#plt.plot(lam_vec, ac_vals, 'r-', label='aperture corrections for {} px ap'.format(xwidth))
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Now multiply the manually extracted spectra by the flux scaling and aperture correction vectors.
###Code
# now convert flux from MJy/sr to Jy using the pixel area, and apply the aperture correction
if (s1.meta.bunit_data == 'MJy/sr'):
xsub_cal = xsub * pixar_sr.value * 1e6 * iapcorr
###Output
_____no_output_____
###Markdown
Next we run the ``extract_1d()`` step on the same file, editing the configuration to sum up over the entire aperture as we did above. We load in the json file, make adjustments and run the step with a config file override option.
###Code
extreffile='jwst_miri_extract1d_0004.json'
basename=crds.core.config.pop_crds_uri(extreffile)
path=crds.locate_file(basename,"jwst")
with open(path) as json_ref:
jsreforig = json.load(json_ref)
jsrefdict = jsreforig.copy()
jsrefdict['apertures'][0]['xstart'] = int(np.min(x))
jsrefdict['apertures'][0]['xstop'] = int(np.max(x)) + 1
#jsrefdict['apertures'][0]['use_source_posn'] = False
for element in jsrefdict['apertures']:
element.pop('extract_width', None)
element.pop('nod2_offset', None)
with open('extract1d_slit_full_spec2.json','w') as jsrefout:
json.dump(jsrefdict,jsrefout,indent=4)
with open('extract1d_slit_full_spec2.cfg','w') as cfg:
cfg.write('name = "extract_1d"'+'\n')
cfg.write('class = "jwst.extract_1d.Extract1dStep"'+'\n')
cfg.write(''+'\n')
cfg.write('log_increment = 50'+'\n')
cfg.write('smoothing_length = 0'+'\n')
cfg.write('use_source_posn = False' + '\n')
cfg.write('override_extract1d="extract1d_slit_full_spec2.json"'+'\n')
xsub_pipe = Extract1dStep.call(s1, config_file='extract1d_slit_full_spec2.cfg', save_results=True)
###Output
_____no_output_____
###Markdown
If the step ran successfully, we can now look at the output and compare to our manual extraction spectrum. To ratio the 2 spectra we interpolate the manually extracted spectrum ``xsub_cal`` onto the pipeline-generated wavelength grid.
###Code
fig, ax = plt.subplots(ncols=3, nrows=1, figsize=[14,5])
ax[0].imshow(s2d_sub, origin='lower', interpolation='None')
ax[1].plot(lam_vec, xsub_cal, 'b-', label='manual extraction')
ax[1].plot(xsub_pipe.spec[0].spec_table['WAVELENGTH'], xsub_pipe.spec[0].spec_table['FLUX'], 'g-', label='pipeline extraction')
ax[1].set_title('{0}, Full BBox extraction'.format(nn))
ax[1].legend()
ax[1].set_xlim([5, 12])
#ax[1].set_ylim([0.004, 0.008])
#interpolate the two onto the same grid so we can look at the difference
f = interp1d(lam_vec, xsub_cal, kind='linear', fill_value='extrapolate')
ixsub_cal = f(xsub_pipe.spec[0].spec_table['WAVELENGTH'])
#ax[1].plot(xsub_pipe.spec[0].spec_table['WAVELENGTH'], ixsub_cal, 'r-', label='manual, interpolated')
diff = ((xsub_pipe.spec[0].spec_table['FLUX'] - ixsub_cal) / xsub_pipe.spec[0].spec_table['FLUX']) * 100.
ax[2].plot(xsub_pipe.spec[0].spec_table['WAVELENGTH'], diff, 'k-')
ax[2].axhline(y=1., xmin=0, xmax=1, color='r', ls='--')
ax[2].axhline(y=-1., xmin=0, xmax=1, color='r', ls='--')
ax[2].set_title('Difference manual - pipeline extracted spectra')
ax[2].set_ylim([-5., 5.])
ax[2].set_xlim([4.8, 10.2])
fig.show()
###Output
_____no_output_____
###Markdown
We check that the difference between the two is on average <= 1 per cent in the core range between 5 and 10 micron.
###Code
inds = (xsub_pipe.spec[0].spec_table['WAVELENGTH'] >= 5.0) & (xsub_pipe.spec[0].spec_table['WAVELENGTH'] <= 10.)
print('Mean difference between pipeline and manual extraction = {:.4f} per cent'.format(np.mean(diff[inds])))
try:
assert np.mean(diff[inds]) <= 1.0, "Mean difference between pipeline and manual extraction >= 1 per cent in 5-10 um. CHECK."
except AssertionError as e:
print("****************************************************")
print("")
print("ERROR: {}".format(e))
print("")
print("****************************************************")
###Output
_____no_output_____
###Markdown
------------------------------**END OF TEST PART 1**---------------------------------------------- Test 2: Nodded observation, two exposuresIn this second test we use both nodded observations. In this scenario, the nods are used as each other's background observations and we need to ensure that the extraction aperture is placed in the right position with a realistic aperture for both nods.We will re-run the first steps of the Spec2Pipeline, so that the nods are used as each other's backgrounds. This requires creation of an association from which the Spec2Pipeline will be run. Then we will run them both through the extract_1d() step with the default parameters, checking:* the location of the aperture* the extraction width
###Code
asn_files = [det1[0].meta.filename, det1[1].meta.filename]
asn = asn_from_list(asn_files, rule=DMSLevel2bBase, meta={'program':'test', 'target':'bd60', 'asn_pool':'test'})
# now add the opposite nod as background exposure:
asn['products'][0]['members'].append({'expname': 'miri_lrs_slit_pt_nod2_v2.3_rate.fits', 'exptype':'background'})
asn['products'][1]['members'].append({'expname': 'miri_lrs_slit_pt_nod1_v2.3_rate.fits', 'exptype':'background'})
# write this out to a json file
with open('lrs-slit-test_asn.json', 'w') as fp:
fp.write(asn.dump()[1])
###Output
_____no_output_____
###Markdown
Now run the Spec2Pipeline with this association files as input, instead of the individual FITS files or datamodels.
###Code
sp2 = Spec2Pipeline.call('lrs-slit-test_asn.json', save_results=True)
x1dfiles = glob('*_x1d.fits')
print(x1dfiles)
x1ds = [datamodels.open(xf) for xf in x1dfiles]
fig = plt.figure(figsize=[15,5])
for x1 in x1ds:
if 'nod1' in x1.meta.filename:
nn = 'nod1'
else:
nn = 'nod2'
plt.plot(x1.spec[0].spec_table['WAVELENGTH'], x1.spec[0].spec_table['FLUX'], label=nn)
plt.plot(xsub_pipe.spec[0].spec_table['WAVELENGTH'], xsub_pipe.spec[0].spec_table['FLUX'], label='nod 1, full bbox extracted')
plt.legend(fontsize='large')
plt.grid()
plt.xlim([4.5, 12.5])
plt.xlabel('wavelength (micron)', fontsize='large')
plt.ylabel('Jy', fontsize='large')
fig.show()
###Output
_____no_output_____
###Markdown
What we will test for: * the extracted spectra should be near-identical (chosen criterion: the ratio between the two deviates from 1 by <= 5% on average). **Further tests to add:** * perform a full verification of the extraction at the source position and with the same extraction width as in the parameters file. **If the ``assert`` statement below passes, we consider the test a success.**
###Code
inds = x1ds[0].spec[0].spec_table['FLUX'] > 0.00
ratio = x1ds[0].spec[0].spec_table['FLUX'][inds] / x1ds[1].spec[0].spec_table['FLUX'][inds]
infs = np.isinf(ratio-1.0)
try:
assert np.mean(np.abs(ratio[~infs] - 1.)) <= 0.05, "Extracted spectra don't match! CHECK!"
except AssertionError as e:
print("****************************************************")
print("")
print("ERROR: {}".format(e))
print("")
print("****************************************************")
###Output
_____no_output_____ |
10.Applied Data Science Capstone/Solution Notebooks/Data Collection with web scrapping Solution.ipynb | ###Markdown
**Space X Falcon 9 First Stage Landing Prediction** Web scraping Falcon 9 and Falcon Heavy Launches Records from Wikipedia Estimated time needed: **40** minutes In this lab, you will be performing web scraping to collect Falcon 9 historical launch records from a Wikipedia page titled `List of Falcon 9 and Falcon Heavy launches`[https://en.wikipedia.org/wiki/List_of_Falcon\_9\_and_Falcon_Heavy_launches](https://en.wikipedia.org/wiki/List_of_Falcon\_9\_and_Falcon_Heavy_launches?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDS0321ENSkillsNetwork26802033-2021-01-01)  Falcon 9 first stage will land successfully  Several examples of an unsuccessful landing are shown here:  More specifically, the launch records are stored in an HTML table shown below:  Objectives: Web scrape Falcon 9 launch records with `BeautifulSoup`: * Extract the Falcon 9 launch records HTML table from Wikipedia * Parse the table and convert it into a Pandas data frame. First, let's import the required packages for this lab
###Code
!pip3 install beautifulsoup4
!pip3 install requests
import sys
import requests
from bs4 import BeautifulSoup
import re
import unicodedata
import pandas as pd
###Output
_____no_output_____
###Markdown
and we will provide some helper functions for you to process the web-scraped HTML table
###Code
def date_time(table_cells):
"""
    This function returns the date and time from the HTML table cell
    Input: the element of a table data cell
"""
return [data_time.strip() for data_time in list(table_cells.strings)][0:2]
def booster_version(table_cells):
"""
This function returns the booster version from the HTML table cell
Input: the element of a table data cell extracts extra row
"""
out=''.join([booster_version for i,booster_version in enumerate( table_cells.strings) if i%2==0][0:-1])
return out
def landing_status(table_cells):
"""
This function returns the landing status from the HTML table cell
Input: the element of a table data cell extracts extra row
"""
out=[i for i in table_cells.strings][0]
return out
def get_mass(table_cells):
mass=unicodedata.normalize("NFKD", table_cells.text).strip()
if mass:
mass.find("kg")
new_mass=mass[0:mass.find("kg")+2]
else:
new_mass=0
return new_mass
def extract_column_from_header(row):
"""
    This function returns the column name from the HTML table header cell
    Input: a <th> element from the table header row
"""
if (row.br):
row.br.extract()
if row.a:
row.a.extract()
if row.sup:
row.sup.extract()
    column_name = ' '.join(row.contents)
    # Filter out digit-only and empty names
    if not(column_name.strip().isdigit()):
        column_name = column_name.strip()
        return column_name
###Output
_____no_output_____
###Markdown
To keep the lab tasks consistent, you will be asked to scrape the data from a snapshot of the `List of Falcon 9 and Falcon Heavy launches` Wiki page updated on `9th June 2021`
###Code
static_url = "https://en.wikipedia.org/w/index.php?title=List_of_Falcon_9_and_Falcon_Heavy_launches&oldid=1027686922"
###Output
_____no_output_____
###Markdown
Next, request the HTML page from the above URL and get a `response` object. TASK 1: Request the Falcon9 Launch Wiki page from its URL. First, let's perform an HTTP GET request for the Falcon9 Launch HTML page and capture the HTTP response.
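Before parsing, it is also worth confirming that the request succeeded (a minimal optional check, assuming the `response` object created in the next cell):
```python
# A status code of 200 means the page was retrieved successfully
assert response.status_code == 200, f"Request failed with status {response.status_code}"
```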
###Code
# use requests.get() method with the provided static_url
# assign the response to a object
response = requests.get(static_url)
HTML = response.text
###Output
_____no_output_____
###Markdown
Create a `BeautifulSoup` object from the HTML `response`
###Code
# Use BeautifulSoup() to create a BeautifulSoup object from a response text content
soup = BeautifulSoup(HTML,'html.parser')
###Output
_____no_output_____
###Markdown
Print the page title to verify if the `BeautifulSoup` object was created properly
###Code
# Use soup.title attribute
soup.title.string
###Output
_____no_output_____
###Markdown
TASK 2: Extract all column/variable names from the HTML table header Next, we want to collect all relevant column names from the HTML table header Let's try to find all tables on the wiki page first. If you need to refresh your memory about `BeautifulSoup`, please check the external reference link towards the end of this lab
###Code
# Use the find_all function in the BeautifulSoup object, with element type `table`
# Assign the result to a list called `html_tables`
html_tables = soup.find_all('table')
###Output
_____no_output_____
###Markdown
The third table is our target table; it contains the actual launch records.
###Code
# Let's print the third table and check its content
first_launch_table = html_tables[2]
print(first_launch_table)
###Output
<table class="wikitable plainrowheaders collapsible" style="width: 100%;">
<tbody><tr>
<th scope="col">Flight No.
</th>
<th scope="col">Date and<br/>time (<a href="/wiki/Coordinated_Universal_Time" title="Coordinated Universal Time">UTC</a>)
</th>
<th scope="col"><a href="/wiki/List_of_Falcon_9_first-stage_boosters" title="List of Falcon 9 first-stage boosters">Version,<br/>Booster</a> <sup class="reference" id="cite_ref-booster_11-0"><a href="#cite_note-booster-11">[b]</a></sup>
</th>
<th scope="col">Launch site
</th>
<th scope="col">Payload<sup class="reference" id="cite_ref-Dragon_12-0"><a href="#cite_note-Dragon-12">[c]</a></sup>
</th>
<th scope="col">Payload mass
</th>
<th scope="col">Orbit
</th>
<th scope="col">Customer
</th>
<th scope="col">Launch<br/>outcome
</th>
<th scope="col"><a href="/wiki/Falcon_9_first-stage_landing_tests" title="Falcon 9 first-stage landing tests">Booster<br/>landing</a>
</th></tr>
<tr>
<th rowspan="2" scope="row" style="text-align:center;">1
</th>
<td>4 June 2010,<br/>18:45
</td>
<td><a href="/wiki/Falcon_9_v1.0" title="Falcon 9 v1.0">F9 v1.0</a><sup class="reference" id="cite_ref-MuskMay2012_13-0"><a href="#cite_note-MuskMay2012-13">[7]</a></sup><br/>B0003.1<sup class="reference" id="cite_ref-block_numbers_14-0"><a href="#cite_note-block_numbers-14">[8]</a></sup>
</td>
<td><a href="/wiki/Cape_Canaveral_Space_Force_Station" title="Cape Canaveral Space Force Station">CCAFS</a>,<br/><a href="/wiki/Cape_Canaveral_Space_Launch_Complex_40" title="Cape Canaveral Space Launch Complex 40">SLC-40</a>
</td>
<td><a href="/wiki/Dragon_Spacecraft_Qualification_Unit" title="Dragon Spacecraft Qualification Unit">Dragon Spacecraft Qualification Unit</a>
</td>
<td>
</td>
<td><a href="/wiki/Low_Earth_orbit" title="Low Earth orbit">LEO</a>
</td>
<td><a href="/wiki/SpaceX" title="SpaceX">SpaceX</a>
</td>
<td class="table-success" style="background: LightGreen; color: black; vertical-align: middle; text-align: center;">Success
</td>
<td class="table-failure" style="background: #ffbbbb; color: black; vertical-align: middle; text-align: center;">Failure<sup class="reference" id="cite_ref-ns20110930_15-0"><a href="#cite_note-ns20110930-15">[9]</a></sup><sup class="reference" id="cite_ref-16"><a href="#cite_note-16">[10]</a></sup><br/><small>(parachute)</small>
</td></tr>
<tr>
<td colspan="9">First flight of Falcon 9 v1.0.<sup class="reference" id="cite_ref-sfn20100604_17-0"><a href="#cite_note-sfn20100604-17">[11]</a></sup> Used a boilerplate version of Dragon capsule which was not designed to separate from the second stage.<small>(<a href="#First_flight_of_Falcon_9">more details below</a>)</small> Attempted to recover the first stage by parachuting it into the ocean, but it burned up on reentry, before the parachutes even deployed.<sup class="reference" id="cite_ref-parachute_18-0"><a href="#cite_note-parachute-18">[12]</a></sup>
</td></tr>
<tr>
<th rowspan="2" scope="row" style="text-align:center;">2
</th>
<td>8 December 2010,<br/>15:43<sup class="reference" id="cite_ref-spaceflightnow_Clark_Launch_Report_19-0"><a href="#cite_note-spaceflightnow_Clark_Launch_Report-19">[13]</a></sup>
</td>
<td><a href="/wiki/Falcon_9_v1.0" title="Falcon 9 v1.0">F9 v1.0</a><sup class="reference" id="cite_ref-MuskMay2012_13-1"><a href="#cite_note-MuskMay2012-13">[7]</a></sup><br/>B0004.1<sup class="reference" id="cite_ref-block_numbers_14-1"><a href="#cite_note-block_numbers-14">[8]</a></sup>
</td>
<td><a href="/wiki/Cape_Canaveral_Space_Force_Station" title="Cape Canaveral Space Force Station">CCAFS</a>,<br/><a href="/wiki/Cape_Canaveral_Space_Launch_Complex_40" title="Cape Canaveral Space Launch Complex 40">SLC-40</a>
</td>
<td><a href="/wiki/SpaceX_Dragon" title="SpaceX Dragon">Dragon</a> <a class="mw-redirect" href="/wiki/COTS_Demo_Flight_1" title="COTS Demo Flight 1">demo flight C1</a><br/>(Dragon C101)
</td>
<td>
</td>
<td><a href="/wiki/Low_Earth_orbit" title="Low Earth orbit">LEO</a> (<a href="/wiki/International_Space_Station" title="International Space Station">ISS</a>)
</td>
<td><div class="plainlist">
<ul><li><a href="/wiki/NASA" title="NASA">NASA</a> (<a href="/wiki/Commercial_Orbital_Transportation_Services" title="Commercial Orbital Transportation Services">COTS</a>)</li>
<li><a href="/wiki/National_Reconnaissance_Office" title="National Reconnaissance Office">NRO</a></li></ul>
</div>
</td>
<td class="table-success" style="background: LightGreen; color: black; vertical-align: middle; text-align: center;">Success<sup class="reference" id="cite_ref-ns20110930_15-1"><a href="#cite_note-ns20110930-15">[9]</a></sup>
</td>
<td class="table-failure" style="background: #ffbbbb; color: black; vertical-align: middle; text-align: center;">Failure<sup class="reference" id="cite_ref-ns20110930_15-2"><a href="#cite_note-ns20110930-15">[9]</a></sup><sup class="reference" id="cite_ref-20"><a href="#cite_note-20">[14]</a></sup><br/><small>(parachute)</small>
</td></tr>
<tr>
<td colspan="9">Maiden flight of <a class="mw-redirect" href="/wiki/Dragon_capsule" title="Dragon capsule">Dragon capsule</a>, consisting of over 3 hours of testing thruster maneuvering and reentry.<sup class="reference" id="cite_ref-spaceflightnow_Clark_unleashing_Dragon_21-0"><a href="#cite_note-spaceflightnow_Clark_unleashing_Dragon-21">[15]</a></sup> Attempted to recover the first stage by parachuting it into the ocean, but it disintegrated upon reentry, before the parachutes were deployed.<sup class="reference" id="cite_ref-parachute_18-1"><a href="#cite_note-parachute-18">[12]</a></sup> <small>(<a href="#COTS_demo_missions">more details below</a>)</small> It also included two <a href="/wiki/CubeSat" title="CubeSat">CubeSats</a>,<sup class="reference" id="cite_ref-NRO_Taps_Boeing_for_Next_Batch_of_CubeSats_22-0"><a href="#cite_note-NRO_Taps_Boeing_for_Next_Batch_of_CubeSats-22">[16]</a></sup> and a wheel of <a href="/wiki/Brou%C3%A8re" title="Brouère">Brouère</a> cheese.
</td></tr>
<tr>
<th rowspan="2" scope="row" style="text-align:center;">3
</th>
<td>22 May 2012,<br/>07:44<sup class="reference" id="cite_ref-BBC_new_era_23-0"><a href="#cite_note-BBC_new_era-23">[17]</a></sup>
</td>
<td><a href="/wiki/Falcon_9_v1.0" title="Falcon 9 v1.0">F9 v1.0</a><sup class="reference" id="cite_ref-MuskMay2012_13-2"><a href="#cite_note-MuskMay2012-13">[7]</a></sup><br/>B0005.1<sup class="reference" id="cite_ref-block_numbers_14-2"><a href="#cite_note-block_numbers-14">[8]</a></sup>
</td>
<td><a href="/wiki/Cape_Canaveral_Space_Force_Station" title="Cape Canaveral Space Force Station">CCAFS</a>,<br/><a href="/wiki/Cape_Canaveral_Space_Launch_Complex_40" title="Cape Canaveral Space Launch Complex 40">SLC-40</a>
</td>
<td><a href="/wiki/SpaceX_Dragon" title="SpaceX Dragon">Dragon</a> <a class="mw-redirect" href="/wiki/Dragon_C2%2B" title="Dragon C2+">demo flight C2+</a><sup class="reference" id="cite_ref-C2_24-0"><a href="#cite_note-C2-24">[18]</a></sup><br/>(Dragon C102)
</td>
<td>525 kg (1,157 lb)<sup class="reference" id="cite_ref-25"><a href="#cite_note-25">[19]</a></sup>
</td>
<td><a href="/wiki/Low_Earth_orbit" title="Low Earth orbit">LEO</a> (<a href="/wiki/International_Space_Station" title="International Space Station">ISS</a>)
</td>
<td><a href="/wiki/NASA" title="NASA">NASA</a> (<a href="/wiki/Commercial_Orbital_Transportation_Services" title="Commercial Orbital Transportation Services">COTS</a>)
</td>
<td class="table-success" style="background: LightGreen; color: black; vertical-align: middle; text-align: center;">Success<sup class="reference" id="cite_ref-26"><a href="#cite_note-26">[20]</a></sup>
</td>
<td class="table-noAttempt" style="background: #ececec; color: black; vertical-align: middle; white-space: nowrap; text-align: center;">No attempt
</td></tr>
<tr>
<td colspan="9">Dragon spacecraft demonstrated a series of tests before it was allowed to approach the <a href="/wiki/International_Space_Station" title="International Space Station">International Space Station</a>. Two days later, it became the first commercial spacecraft to board the ISS.<sup class="reference" id="cite_ref-BBC_new_era_23-1"><a href="#cite_note-BBC_new_era-23">[17]</a></sup> <small>(<a href="#COTS_demo_missions">more details below</a>)</small>
</td></tr>
<tr>
<th rowspan="3" scope="row" style="text-align:center;">4
</th>
<td rowspan="2">8 October 2012,<br/>00:35<sup class="reference" id="cite_ref-SFN_LLog_27-0"><a href="#cite_note-SFN_LLog-27">[21]</a></sup>
</td>
<td rowspan="2"><a href="/wiki/Falcon_9_v1.0" title="Falcon 9 v1.0">F9 v1.0</a><sup class="reference" id="cite_ref-MuskMay2012_13-3"><a href="#cite_note-MuskMay2012-13">[7]</a></sup><br/>B0006.1<sup class="reference" id="cite_ref-block_numbers_14-3"><a href="#cite_note-block_numbers-14">[8]</a></sup>
</td>
<td rowspan="2"><a href="/wiki/Cape_Canaveral_Space_Force_Station" title="Cape Canaveral Space Force Station">CCAFS</a>,<br/><a href="/wiki/Cape_Canaveral_Space_Launch_Complex_40" title="Cape Canaveral Space Launch Complex 40">SLC-40</a>
</td>
<td><a href="/wiki/SpaceX_CRS-1" title="SpaceX CRS-1">SpaceX CRS-1</a><sup class="reference" id="cite_ref-sxManifest20120925_28-0"><a href="#cite_note-sxManifest20120925-28">[22]</a></sup><br/>(Dragon C103)
</td>
<td>4,700 kg (10,400 lb)
</td>
<td><a href="/wiki/Low_Earth_orbit" title="Low Earth orbit">LEO</a> (<a href="/wiki/International_Space_Station" title="International Space Station">ISS</a>)
</td>
<td><a href="/wiki/NASA" title="NASA">NASA</a> (<a href="/wiki/Commercial_Resupply_Services" title="Commercial Resupply Services">CRS</a>)
</td>
<td class="table-success" style="background: LightGreen; color: black; vertical-align: middle; text-align: center;">Success
</td>
<td rowspan="2" style="background:#ececec; text-align:center;"><span class="nowrap">No attempt</span>
</td></tr>
<tr>
<td><a href="/wiki/Orbcomm_(satellite)" title="Orbcomm (satellite)">Orbcomm-OG2</a><sup class="reference" id="cite_ref-Orbcomm_29-0"><a href="#cite_note-Orbcomm-29">[23]</a></sup>
</td>
<td>172 kg (379 lb)<sup class="reference" id="cite_ref-gunter-og2_30-0"><a href="#cite_note-gunter-og2-30">[24]</a></sup>
</td>
<td><a href="/wiki/Low_Earth_orbit" title="Low Earth orbit">LEO</a>
</td>
<td><a href="/wiki/Orbcomm" title="Orbcomm">Orbcomm</a>
</td>
<td class="table-partial" style="background: wheat; color: black; vertical-align: middle; text-align: center;">Partial failure<sup class="reference" id="cite_ref-nyt-20121030_31-0"><a href="#cite_note-nyt-20121030-31">[25]</a></sup>
</td></tr>
<tr>
<td colspan="9">CRS-1 was successful, but the <a href="/wiki/Secondary_payload" title="Secondary payload">secondary payload</a> was inserted into an abnormally low orbit and subsequently lost. This was due to one of the nine <a href="/wiki/SpaceX_Merlin" title="SpaceX Merlin">Merlin engines</a> shutting down during the launch, and NASA declining a second reignition, as per <a href="/wiki/International_Space_Station" title="International Space Station">ISS</a> visiting vehicle safety rules, the primary payload owner is contractually allowed to decline a second reignition. NASA stated that this was because SpaceX could not guarantee a high enough likelihood of the second stage completing the second burn successfully which was required to avoid any risk of secondary payload's collision with the ISS.<sup class="reference" id="cite_ref-OrbcommTotalLoss_32-0"><a href="#cite_note-OrbcommTotalLoss-32">[26]</a></sup><sup class="reference" id="cite_ref-sn20121011_33-0"><a href="#cite_note-sn20121011-33">[27]</a></sup><sup class="reference" id="cite_ref-34"><a href="#cite_note-34">[28]</a></sup>
</td></tr>
<tr>
<th rowspan="2" scope="row" style="text-align:center;">5
</th>
<td>1 March 2013,<br/>15:10
</td>
<td><a href="/wiki/Falcon_9_v1.0" title="Falcon 9 v1.0">F9 v1.0</a><sup class="reference" id="cite_ref-MuskMay2012_13-4"><a href="#cite_note-MuskMay2012-13">[7]</a></sup><br/>B0007.1<sup class="reference" id="cite_ref-block_numbers_14-4"><a href="#cite_note-block_numbers-14">[8]</a></sup>
</td>
<td><a href="/wiki/Cape_Canaveral_Space_Force_Station" title="Cape Canaveral Space Force Station">CCAFS</a>,<br/><a href="/wiki/Cape_Canaveral_Space_Launch_Complex_40" title="Cape Canaveral Space Launch Complex 40">SLC-40</a>
</td>
<td><a href="/wiki/SpaceX_CRS-2" title="SpaceX CRS-2">SpaceX CRS-2</a><sup class="reference" id="cite_ref-sxManifest20120925_28-1"><a href="#cite_note-sxManifest20120925-28">[22]</a></sup><br/>(Dragon C104)
</td>
<td>4,877 kg (10,752 lb)
</td>
<td><a href="/wiki/Low_Earth_orbit" title="Low Earth orbit">LEO</a> (<a class="mw-redirect" href="/wiki/ISS" title="ISS">ISS</a>)
</td>
<td><a href="/wiki/NASA" title="NASA">NASA</a> (<a href="/wiki/Commercial_Resupply_Services" title="Commercial Resupply Services">CRS</a>)
</td>
<td class="table-success" style="background: LightGreen; color: black; vertical-align: middle; text-align: center;">Success
</td>
<td class="table-noAttempt" style="background: #ececec; color: black; vertical-align: middle; white-space: nowrap; text-align: center;">No attempt
</td></tr>
<tr>
<td colspan="9">Last launch of the original Falcon 9 v1.0 <a href="/wiki/Launch_vehicle" title="Launch vehicle">launch vehicle</a>, first use of the unpressurized trunk section of Dragon.<sup class="reference" id="cite_ref-sxf9_20110321_35-0"><a href="#cite_note-sxf9_20110321-35">[29]</a></sup>
</td></tr>
<tr>
<th rowspan="2" scope="row" style="text-align:center;">6
</th>
<td>29 September 2013,<br/>16:00<sup class="reference" id="cite_ref-pa20130930_36-0"><a href="#cite_note-pa20130930-36">[30]</a></sup>
</td>
<td><a href="/wiki/Falcon_9_v1.1" title="Falcon 9 v1.1">F9 v1.1</a><sup class="reference" id="cite_ref-MuskMay2012_13-5"><a href="#cite_note-MuskMay2012-13">[7]</a></sup><br/>B1003<sup class="reference" id="cite_ref-block_numbers_14-5"><a href="#cite_note-block_numbers-14">[8]</a></sup>
</td>
<td><a class="mw-redirect" href="/wiki/Vandenberg_Air_Force_Base" title="Vandenberg Air Force Base">VAFB</a>,<br/><a href="/wiki/Vandenberg_Space_Launch_Complex_4" title="Vandenberg Space Launch Complex 4">SLC-4E</a>
</td>
<td><a href="/wiki/CASSIOPE" title="CASSIOPE">CASSIOPE</a><sup class="reference" id="cite_ref-sxManifest20120925_28-2"><a href="#cite_note-sxManifest20120925-28">[22]</a></sup><sup class="reference" id="cite_ref-CASSIOPE_MDA_37-0"><a href="#cite_note-CASSIOPE_MDA-37">[31]</a></sup>
</td>
<td>500 kg (1,100 lb)
</td>
<td><a href="/wiki/Polar_orbit" title="Polar orbit">Polar orbit</a> <a href="/wiki/Low_Earth_orbit" title="Low Earth orbit">LEO</a>
</td>
<td><a href="/wiki/Maxar_Technologies" title="Maxar Technologies">MDA</a>
</td>
<td class="table-success" style="background: LightGreen; color: black; vertical-align: middle; text-align: center;">Success<sup class="reference" id="cite_ref-pa20130930_36-1"><a href="#cite_note-pa20130930-36">[30]</a></sup>
</td>
<td class="table-no2" style="background: #ffdddd; color: black; vertical-align: middle; text-align: center;">Uncontrolled<br/><small>(ocean)</small><sup class="reference" id="cite_ref-ocean_landing_38-0"><a href="#cite_note-ocean_landing-38">[d]</a></sup>
</td></tr>
<tr>
<td colspan="9">First commercial mission with a private customer, first launch from Vandenberg, and demonstration flight of Falcon 9 v1.1 with an improved 13-tonne to LEO capacity.<sup class="reference" id="cite_ref-sxf9_20110321_35-1"><a href="#cite_note-sxf9_20110321-35">[29]</a></sup> After separation from the second stage carrying Canadian commercial and scientific satellites, the first stage booster performed a controlled reentry,<sup class="reference" id="cite_ref-39"><a href="#cite_note-39">[32]</a></sup> and an <a href="/wiki/Falcon_9_first-stage_landing_tests" title="Falcon 9 first-stage landing tests">ocean touchdown test</a> for the first time. This provided good test data, even though the booster started rolling as it neared the ocean, leading to the shutdown of the central engine as the roll depleted it of fuel, resulting in a hard impact with the ocean.<sup class="reference" id="cite_ref-pa20130930_36-2"><a href="#cite_note-pa20130930-36">[30]</a></sup> This was the first known attempt of a rocket engine being lit to perform a supersonic retro propulsion, and allowed SpaceX to enter a public-private partnership with <a href="/wiki/NASA" title="NASA">NASA</a> and its Mars entry, descent, and landing technologies research projects.<sup class="reference" id="cite_ref-40"><a href="#cite_note-40">[33]</a></sup> <small>(<a href="#Maiden_flight_of_v1.1">more details below</a>)</small>
</td></tr>
<tr>
<th rowspan="2" scope="row" style="text-align:center;">7
</th>
<td>3 December 2013,<br/>22:41<sup class="reference" id="cite_ref-sfn_wwls20130624_41-0"><a href="#cite_note-sfn_wwls20130624-41">[34]</a></sup>
</td>
<td><a href="/wiki/Falcon_9_v1.1" title="Falcon 9 v1.1">F9 v1.1</a><br/>B1004
</td>
<td><a href="/wiki/Cape_Canaveral_Space_Force_Station" title="Cape Canaveral Space Force Station">CCAFS</a>,<br/><a href="/wiki/Cape_Canaveral_Space_Launch_Complex_40" title="Cape Canaveral Space Launch Complex 40">SLC-40</a>
</td>
<td><a href="/wiki/SES-8" title="SES-8">SES-8</a><sup class="reference" id="cite_ref-sxManifest20120925_28-3"><a href="#cite_note-sxManifest20120925-28">[22]</a></sup><sup class="reference" id="cite_ref-spx-pr_42-0"><a href="#cite_note-spx-pr-42">[35]</a></sup><sup class="reference" id="cite_ref-aw20110323_43-0"><a href="#cite_note-aw20110323-43">[36]</a></sup>
</td>
<td>3,170 kg (6,990 lb)
</td>
<td><a href="/wiki/Geostationary_transfer_orbit" title="Geostationary transfer orbit">GTO</a>
</td>
<td><a href="/wiki/SES_S.A." title="SES S.A.">SES</a>
</td>
<td class="table-success" style="background: LightGreen; color: black; vertical-align: middle; text-align: center;">Success<sup class="reference" id="cite_ref-SNMissionStatus7_44-0"><a href="#cite_note-SNMissionStatus7-44">[37]</a></sup>
</td>
<td class="table-noAttempt" style="background: #ececec; color: black; vertical-align: middle; white-space: nowrap; text-align: center;">No attempt<br/><sup class="reference" id="cite_ref-sf10120131203_45-0"><a href="#cite_note-sf10120131203-45">[38]</a></sup>
</td></tr>
<tr>
<td colspan="9">First <a href="/wiki/Geostationary_transfer_orbit" title="Geostationary transfer orbit">Geostationary transfer orbit</a> (GTO) launch for Falcon 9,<sup class="reference" id="cite_ref-spx-pr_42-1"><a href="#cite_note-spx-pr-42">[35]</a></sup> and first successful reignition of the second stage.<sup class="reference" id="cite_ref-46"><a href="#cite_note-46">[39]</a></sup> SES-8 was inserted into a <a href="/wiki/Geostationary_transfer_orbit" title="Geostationary transfer orbit">Super-Synchronous Transfer Orbit</a> of 79,341 km (49,300 mi) in apogee with an <a href="/wiki/Orbital_inclination" title="Orbital inclination">inclination</a> of 20.55° to the <a href="/wiki/Equator" title="Equator">equator</a>.
</td></tr></tbody></table>
###Markdown
You should be able to see the column names embedded in the table header elements `<th>` as follows: ```Flight No. | Date and time (UTC) | Version, Booster [b] | Launch site | Payload [c] | Payload mass | Orbit | Customer | Launch outcome | Booster landing``` Next, we just need to iterate through the `<th>` elements and apply the provided `extract_column_from_header()` to extract the column names one by one
###Code
column_names = []
# Apply find_all() function with `th` element on first_launch_table
first_launch_table.find_all('th')
# Iterate each th element and apply the provided extract_column_from_header() to get a column name
# Append the Non-empty column name (`if name is not None and len(name) > 0`) into a list called column_names
for column in first_launch_table.find_all('th'):
col_name = extract_column_from_header(column)
    if col_name is not None and len(col_name) > 0:
column_names.append(col_name)
###Output
_____no_output_____
###Markdown
Check the extracted column names
###Code
print(column_names)
###Output
['Flight No.', 'Date and time ( )', 'Launch site', 'Payload', 'Payload mass', 'Orbit', 'Customer', 'Launch outcome']
###Markdown
TASK 3: Create a data frame by parsing the launch HTML tables We will create an empty dictionary with keys from the extracted column names in the previous task. Later, this dictionary will be converted into a Pandas dataframe
###Code
launch_dict= dict.fromkeys(column_names)
# Remove an irrelevant column
del launch_dict['Date and time ( )']
# Let's initial the launch_dict with each value to be an empty list
launch_dict['Flight No.'] = []
launch_dict['Launch site'] = []
launch_dict['Payload'] = []
launch_dict['Payload mass'] = []
launch_dict['Orbit'] = []
launch_dict['Customer'] = []
launch_dict['Launch outcome'] = []
# Added some new columns
launch_dict['Version Booster']=[]
launch_dict['Booster landing']=[]
launch_dict['Date']=[]
launch_dict['Time']=[]
###Output
_____no_output_____
###Markdown
Next, we just need to fill up the `launch_dict` with launch records extracted from the table rows. HTML tables in Wiki pages are likely to contain unexpected annotations and other kinds of noise, such as reference links `B0004.1[8]`, missing values `N/A [e]`, inconsistent formatting, etc. To simplify the parsing process, we have provided an incomplete code snippet below to help you fill up the `launch_dict`. Please complete the TODOs in the following code snippet, or you can choose to write your own logic to parse all launch tables:
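As an illustration of how such noise could be handled, a small regex helper like the one below can strip bracketed reference markers from a scraped string (this is a hypothetical helper shown for illustration only; it is not part of the provided snippet and the solution code does not rely on it):
```python
import re

def strip_refs(text):
    # Remove Wikipedia-style reference markers such as "[8]" or "[e]"
    return re.sub(r'\[[^\]]*\]', '', text).strip()

strip_refs('B0004.1[8]')   # -> 'B0004.1'
strip_refs('N/A [e]')      # -> 'N/A'
```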
###Code
extracted_row = 0
#Extract each table
for table_number,table in enumerate(soup.find_all('table',"wikitable plainrowheaders collapsible")):
# get table row
for rows in table.find_all("tr"):
        # check to see if the first table heading is a number corresponding to a launch number
if rows.th:
if rows.th.string:
flight_number=rows.th.string.strip()
flag=flight_number.isdigit()
else:
flag=False
#get table element
row=rows.find_all('td')
        # if it is a number, save the cells in a dictionary
if flag:
extracted_row += 1
# Flight Number value
# TODO: Append the flight_number into launch_dict with key `Flight No.`
launch_dict['Flight No.'].append(flight_number)
print(flight_number)
datatimelist=date_time(row[0])
# Date value
# TODO: Append the date into launch_dict with key `Date`
date = datatimelist[0].strip(',')
launch_dict['Date'].append(date)
print(date)
# Time value
# TODO: Append the time into launch_dict with key `Time`
time = datatimelist[1]
launch_dict['Time'].append(time)
print(time)
# Booster version
# TODO: Append the bv into launch_dict with key `Version Booster`
bv=booster_version(row[1])
if not(bv):
bv=row[1].a.string
launch_dict["Version Booster"].append(bv)
print(bv)
# Launch Site
            # TODO: Append the launch_site into launch_dict with key `Launch site`
launch_site = row[2].a.string
launch_dict["Launch site"].append(launch_site)
print(launch_site)
# Payload
# TODO: Append the payload into launch_dict with key `Payload`
payload = row[3].a.string
launch_dict["Payload"].append(payload)
print(payload)
# Payload Mass
# TODO: Append the payload_mass into launch_dict with key `Payload mass`
payload_mass = get_mass(row[4])
launch_dict["Payload mass"].append(payload_mass)
            print(payload_mass)
# Orbit
# TODO: Append the orbit into launch_dict with key `Orbit`
orbit = row[5].a.string
launch_dict["Orbit"].append(orbit)
print(orbit)
# Customer
# TODO: Append the customer into launch_dict with key `Customer`
customer = row[6].a.string
launch_dict["Customer"].append(customer)
print(customer)
# Launch outcome
# TODO: Append the launch_outcome into launch_dict with key `Launch outcome`
launch_outcome = list(row[7].strings)[0]
launch_dict["Launch outcome"].append(launch_outcome)
print(launch_outcome)
# Booster landing
# TODO: Append the launch_outcome into launch_dict with key `Booster landing`
booster_landing = landing_status(row[8])
launch_dict["Booster landing"].append(booster_landing)
print(booster_landing)
###Output
1
4 June 2010
18:45
F9 v1.0B0003.1
CCAFS
Dragon Spacecraft Qualification Unit
Dragon Spacecraft Qualification Unit
LEO
SpaceX
Success
Failure
2
8 December 2010
15:43
F9 v1.0B0004.1
CCAFS
Dragon
Dragon
LEO
NASA
Success
Failure
3
22 May 2012
07:44
F9 v1.0B0005.1
CCAFS
Dragon
Dragon
LEO
NASA
Success
No attempt
4
8 October 2012
00:35
F9 v1.0B0006.1
CCAFS
SpaceX CRS-1
SpaceX CRS-1
LEO
NASA
Success
No attempt
5
1 March 2013
15:10
F9 v1.0B0007.1
CCAFS
SpaceX CRS-2
SpaceX CRS-2
LEO
NASA
Success
No attempt
6
29 September 2013
16:00
F9 v1.1B1003
VAFB
CASSIOPE
CASSIOPE
Polar orbit
MDA
Success
Uncontrolled
7
3 December 2013
22:41
F9 v1.1
CCAFS
SES-8
SES-8
GTO
SES
Success
No attempt
8
6 January 2014
22:06
F9 v1.1
CCAFS
Thaicom 6
Thaicom 6
GTO
Thaicom
Success
No attempt
9
18 April 2014
19:25
F9 v1.1
Cape Canaveral
SpaceX CRS-3
SpaceX CRS-3
LEO
NASA
Success
Controlled
10
14 July 2014
15:15
F9 v1.1
Cape Canaveral
Orbcomm-OG2
Orbcomm-OG2
LEO
Orbcomm
Success
Controlled
11
5 August 2014
08:00
F9 v1.1
Cape Canaveral
AsiaSat 8
AsiaSat 8
GTO
AsiaSat
Success
No attempt
12
7 September 2014
05:00
F9 v1.1
Cape Canaveral
AsiaSat 6
AsiaSat 6
GTO
AsiaSat
Success
No attempt
13
21 September 2014
05:52
F9 v1.1
Cape Canaveral
SpaceX CRS-4
SpaceX CRS-4
LEO
NASA
Success
Uncontrolled
14
10 January 2015
09:47
F9 v1.1
Cape Canaveral
SpaceX CRS-5
SpaceX CRS-5
LEO
NASA
Success
Failure
15
11 February 2015
23:03
F9 v1.1
Cape Canaveral
DSCOVR
DSCOVR
HEO
USAF
Success
Controlled
16
2 March 2015
03:50
F9 v1.1
Cape Canaveral
ABS-3A
ABS-3A
GTO
ABS
Success
No attempt
17
14 April 2015
20:10
F9 v1.1
Cape Canaveral
SpaceX CRS-6
SpaceX CRS-6
LEO
NASA
Success
Failure
18
27 April 2015
23:03
F9 v1.1
Cape Canaveral
TürkmenÄlem 52°E / MonacoSAT
TürkmenÄlem 52°E / MonacoSAT
GTO
None
Success
No attempt
19
28 June 2015
14:21
F9 v1.1
Cape Canaveral
SpaceX CRS-7
SpaceX CRS-7
LEO
NASA
Failure
Precluded
20
22 December 2015
01:29
F9 FT
Cape Canaveral
Orbcomm-OG2
Orbcomm-OG2
LEO
Orbcomm
Success
Success
21
17 January 2016
18:42
F9 v1.1
VAFB
Jason-3
Jason-3
LEO
NASA
Success
Failure
22
4 March 2016
23:35
F9 FT
Cape Canaveral
SES-9
SES-9
GTO
SES
Success
Failure
23
8 April 2016
20:43
F9 FT
Cape Canaveral
SpaceX CRS-8
SpaceX CRS-8
LEO
NASA
Success
Success
24
6 May 2016
05:21
F9 FT
Cape Canaveral
JCSAT-14
JCSAT-14
GTO
SKY Perfect JSAT Group
Success
Success
25
27 May 2016
21:39
F9 FT
Cape Canaveral
Thaicom 8
Thaicom 8
GTO
Thaicom
Success
Success
26
15 June 2016
14:29
F9 FT
Cape Canaveral
ABS-2A
ABS-2A
GTO
ABS
Success
Failure
27
18 July 2016
04:45
F9 FT
Cape Canaveral
SpaceX CRS-9
SpaceX CRS-9
LEO
NASA
Success
Success
28
14 August 2016
05:26
F9 FT
Cape Canaveral
JCSAT-16
JCSAT-16
GTO
SKY Perfect JSAT Group
Success
Success
29
14 January 2017
17:54
F9 FT
VAFB
Iridium NEXT
Iridium NEXT
Polar
Iridium Communications
Success
Success
30
19 February 2017
14:39
F9 FT
KSC
SpaceX CRS-10
SpaceX CRS-10
LEO
NASA
Success
Success
31
16 March 2017
06:00
F9 FT
KSC
EchoStar 23
EchoStar 23
GTO
EchoStar
Success
No attempt
32
30 March 2017
22:27
F9 FT♺
KSC
SES-10
SES-10
GTO
SES
Success
Success
33
1 May 2017
11:15
F9 FT
KSC
NROL-76
NROL-76
LEO
NRO
Success
Success
34
15 May 2017
23:21
F9 FT
KSC
Inmarsat-5 F4
Inmarsat-5 F4
GTO
Inmarsat
Success
No attempt
35
3 June 2017
21:07
F9 FT
KSC
SpaceX CRS-11
SpaceX CRS-11
LEO
NASA
Success
Success
36
23 June 2017
19:10
F9 FTB1029.2
KSC
BulgariaSat-1
BulgariaSat-1
GTO
Bulsatcom
Success
Success
37
25 June 2017
20:25
F9 FT
VAFB
Iridium NEXT
Iridium NEXT
LEO
Iridium Communications
Success
Success
38
5 July 2017
23:38
F9 FT
KSC
Intelsat 35e
Intelsat 35e
GTO
Intelsat
Success
No attempt
39
14 August 2017
16:31
F9 B4
KSC
SpaceX CRS-12
SpaceX CRS-12
LEO
NASA
Success
Success
40
24 August 2017
18:51
F9 FT
VAFB
Formosat-5
Formosat-5
SSO
NSPO
Success
Success
41
7 September 2017
14:00
F9 B4
KSC
Boeing X-37B
Boeing X-37B
LEO
USAF
Success
Success
42
9 October 2017
12:37
F9 B4
VAFB
Iridium NEXT
Iridium NEXT
Polar
Iridium Communications
Success
Success
43
11 October 2017
22:53:00
F9 FTB1031.2
KSC
SES-11
SES-11
GTO
SES S.A.
Success
Success
44
30 October 2017
19:34
F9 B4
KSC
Koreasat 5A
Koreasat 5A
GTO
KT Corporation
Success
Success
45
15 December 2017
15:36
F9 FTB1035.2
Cape Canaveral
SpaceX CRS-13
SpaceX CRS-13
LEO
NASA
Success
Success
46
23 December 2017
01:27
F9 FTB1036.2
VAFB
Iridium NEXT
Iridium NEXT
Polar
Iridium Communications
Success
Controlled
47
8 January 2018
01:00
F9 B4
CCAFS
Zuma
Zuma
LEO
Northrop Grumman
Success
Success
48
31 January 2018
21:25
F9 FTB1032.2
CCAFS
GovSat-1
GovSat-1
GTO
SES
Success
Controlled
49
22 February 2018
14:17
F9 FTB1038.2
VAFB
Paz
Paz
SSO
Hisdesat
Success
No attempt
50
6 March 2018
05:33
F9 B4
CCAFS
Hispasat 30W-6
Hispasat 30W-6
GTO
Hispasat
Success
No attempt
51
30 March 2018
14:14
F9 B4B1041.2
VAFB
Iridium NEXT
Iridium NEXT
Polar
Iridium Communications
Success
No attempt
52
2 April 2018
20:30
F9 B4B1039.2
CCAFS
SpaceX CRS-14
SpaceX CRS-14
LEO
NASA
Success
No attempt
53
18 April 2018
22:51
F9 B4
CCAFS
Transiting Exoplanet Survey Satellite
Transiting Exoplanet Survey Satellite
HEO
NASA
Success
Success
54
11 May 2018
20:14
F9 B5B1046.1
KSC
Bangabandhu-1
Bangabandhu-1
GTO
Thales-Alenia
Success
Success
55
22 May 2018
19:47
F9 B4B1043.2
VAFB
Iridium NEXT
Iridium NEXT
Polar
Iridium Communications
Success
No attempt
56
4 June 2018
04:45
F9 B4B1040.2
CCAFS
SES-12
SES-12
GTO
SES
Success
No attempt
57
29 June 2018
09:42
F9 B4B1045.2
CCAFS
SpaceX CRS-15
SpaceX CRS-15
LEO
NASA
Success
No attempt
58
22 July 2018
05:50
F9 B5
CCAFS
Telstar 19V
Telstar 19V
GTO
Telesat
Success
Success
59
25 July 2018
11:39
F9 B5B1048
VAFB
Iridium NEXT
Iridium NEXT
Polar
Iridium Communications
Success
Success
60
7 August 2018
05:18
F9 B5B1046.2
CCAFS
Merah Putih
Merah Putih
GTO
Telkom Indonesia
Success
Success
61
10 September 2018
04:45
F9 B5
CCAFS
Telstar 18V
Telstar 18V
GTO
Telesat
Success
Success
62
8 October 2018
02:22
F9 B5B1048.2
VAFB
SAOCOM 1A
SAOCOM 1A
SSO
CONAE
Success
Success
63
15 November 2018
20:46
F9 B5B1047.2
KSC
Es'hail 2
Es'hail 2
GTO
Es'hailSat
Success
Success
64
3 December 2018
18:34:05
F9 B5B1046.3
VAFB
SSO-A
SSO-A
SSO
Spaceflight Industries
Success
Success
65
5 December 2018
18:16
F9 B5
CCAFS
SpaceX CRS-16
SpaceX CRS-16
LEO
NASA
Success
Failure
66
23 December 2018
13:51
F9 B5
CCAFS
GPS III
GPS III
MEO
USAF
Success
No attempt
67
11 January 2019
15:31
F9 B5B1049.2
VAFB
Iridium NEXT
Iridium NEXT
Polar
Iridium Communications
Success
Success
68
22 February 2019
01:45
F9 B5B1048.3
CCAFS
Nusantara Satu
Nusantara Satu
GTO
PSN
Success
Success
69
2 March 2019
07:49
F9 B5[268]
KSC
Crew Dragon Demo-1
Crew Dragon Demo-1
LEO
NASA
Success
Success
70
4 May 2019
06:48
F9 B5
CCAFS
SpaceX CRS-17
SpaceX CRS-17
LEO
NASA
Success
Success
71
24 May 2019
02:30
F9 B5B1049.3
CCAFS
Starlink
Starlink
LEO
SpaceX
Success
Success
72
12 June 2019
14:17
F9 B5B1051.2
VAFB
RADARSAT Constellation
RADARSAT Constellation
SSO
Canadian Space Agency
Success
Success
73
25 July 2019
22:01
F9 B5B1056.2
CCAFS
SpaceX CRS-18
SpaceX CRS-18
LEO
NASA
Success
Success
74
6 August 2019
23:23
F9 B5B1047.3
CCAFS
AMOS-17
AMOS-17
GTO
Spacecom
Success
No attempt
75
11 November 2019
14:56
F9 B5
CCAFS
Starlink
Starlink
LEO
SpaceX
Success
Success
76
5 December 2019
17:29
F9 B5
CCAFS
SpaceX CRS-19
SpaceX CRS-19
LEO
NASA
Success
Success
77
17 December 2019
00:10
F9 B5B1056.3
CCAFS
JCSat-18
JCSat-18
GTO
Sky Perfect JSAT
Success
Success
78
7 January 2020
02:19:21
F9 B5
CCAFS
Starlink
Starlink
LEO
SpaceX
Success
Success
79
19 January 2020
15:30
F9 B5
KSC
Crew Dragon in-flight abort test
Crew Dragon in-flight abort test
Sub-orbital
NASA
Success
No attempt
80
29 January 2020
14:07
F9 B5
CCAFS
Starlink
Starlink
LEO
SpaceX
Success
Success
81
17 February 2020
15:05
F9 B5
CCAFS
Starlink
Starlink
LEO
SpaceX
Success
Failure
82
7 March 2020
04:50
F9 B5
CCAFS
SpaceX CRS-20
SpaceX CRS-20
LEO
NASA
Success
Success
83
18 March 2020
12:16
F9 B5
KSC
Starlink
Starlink
LEO
SpaceX
Success
Failure
84
22 April 2020
19:30
F9 B5
KSC
Starlink
Starlink
LEO
SpaceX
Success
Success
85
30 May 2020
19:22
F9 B5
KSC
Crew Dragon Demo-2
Crew Dragon Demo-2
LEO
NASA
Success
Success
86
4 June 2020
01:25
F9 B5
CCAFS
Starlink
Starlink
LEO
SpaceX
Success
Success
87
13 June 2020
09:21
F9 B5
CCAFS
Starlink
Starlink
LEO
SpaceX
Success
Success
88
30 June 2020
20:10:46
F9 B5
CCAFS
GPS III
GPS III
MEO
U.S. Space Force
Success
Success
89
20 July 2020
21:30
F9 B5B1058.2
CCAFS
ANASIS-II
ANASIS-II
GTO
Republic of Korea Army
Success
Success
90
7 August 2020
05:12
F9 B5
KSC
Starlink
Starlink
LEO
SpaceX
Success
Success
91
18 August 2020
14:31
F9 B5B1049.6
CCAFS
Starlink
Starlink
LEO
SpaceX
Success
Success
92
30 August 2020
23:18
F9 B5
CCAFS
SAOCOM 1B
SAOCOM 1B
SSO
CONAE
Success
Success
93
3 September 2020
12:46:14
F9 B5B1060.2
KSC
Starlink
Starlink
LEO
SpaceX
Success
Success
94
6 October 2020
11:29:34
F9 B5B1058.3
KSC
Starlink
Starlink
LEO
SpaceX
Success
Success
95
18 October 2020
12:25:57
F9 B5B1051.6
KSC
Starlink
Starlink
LEO
SpaceX
Success
Success
96
24 October 2020
15:31:34
F9 B5
CCAFS
Starlink
Starlink
LEO
SpaceX
Success
Success
97
5 November 2020
23:24:23
F9 B5
CCAFS
GPS III
GPS III
MEO
USSF
Success
Success
98
16 November 2020
00:27
F9 B5
KSC
Crew-1
Crew-1
LEO
NASA
Success
Success
99
21 November 2020
17:17:08
F9 B5
VAFB
Sentinel-6 Michael Freilich (Jason-CS A)
Sentinel-6 Michael Freilich (Jason-CS A)
LEO
NASA
Success
Success
100
25 November 2020
02:13
F9 B5 ♺
CCAFS
Starlink
Starlink
LEO
SpaceX
Success
Success
101
6 December 2020
16:17:08
F9 B5 ♺
KSC
SpaceX CRS-21
SpaceX CRS-21
LEO
NASA
Success
Success
102
13 December 2020
17:30:00
F9 B5 ♺
CCSFS
SXM-7
SXM-7
GTO
Sirius XM
Success
Success
103
19 December 2020
14:00:00
F9 B5 ♺
KSC
NROL-108
NROL-108
LEO
NRO
Success
Success
104
8 January 2021
02:15
F9 B5
CCSFS
Türksat 5A
Türksat 5A
GTO
Türksat
Success
Success
105
20 January 2021
13:02
F9 B5B1051.8
KSC
Starlink
Starlink
LEO
SpaceX
Success
Success
106
24 January 2021
15:00
F9 B5B1058.5
CCSFS
Transporter-1
Transporter-1
SSO
###Markdown
After you have filled in the parsed launch record values into `launch_dict`, you can create a dataframe from it.
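If you want to keep the scraped records for later use, you can also export the dataframe created below to a CSV file (the file name here is only an example):
```python
# Optional: persist the scraped launch records (illustrative file name)
df.to_csv('spacex_web_scraped.csv', index=False)
```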
###Code
df=pd.DataFrame(launch_dict)
df
###Output
_____no_output_____ |
notebooks/Grouped Editing.ipynb | ###Markdown
Grouping Edits for File Classifications. In the case that you might want to edit a field uniformly, we can write a script that groups the acquisition files by certain fields. In this example, we show how to edit data by grouping. First, we query Flywheel for the full project:
###Code
import flywheel
import pandas as pd
from pandas.io.json.normalize import nested_to_record
import re
# add the script to the path
import sys
import os
sys.path.append(os.path.abspath("/home/ttapera/bids-on-flywheel/flywheel_bids_tools"))
import query_bids
import upload_bids
from tqdm import tqdm
import math
fw = flywheel.Client()
result = query_bids.query_fw("Q7 DSI", fw)
###Output
_____no_output_____
###Markdown
Convert this to a dataframe:
###Code
view = fw.View(columns='subject')
subject_df = fw.read_view_dataframe(view, result.id)
sessions = []
view = fw.View(columns='acquisition')
for ind, row in tqdm(subject_df.iterrows(), total=subject_df.shape[0]):
session = fw.read_view_dataframe(view, row["subject.id"])
if(session.shape[0] > 0):
sessions.append(session)
acquisitions = pd.concat(sessions)
###Output
_____no_output_____
###Markdown
And next, extract the acquisition's BIDS data. A slight modification we make to the BIDS extractor function is adding the file classification, series name, and TR (which we assume will be useful grouping criteria).
###Code
acquisitions
def unlist_item(ls):
if type(ls) is list:
ls.sort()
return(', '.join(x for x in ls))
else:
return float('nan')
def process_acquisition(acq_id, client):
'''
Extract an acquisition
This function extracts an acquisition object and collects the important
file classification information. These data are processed and returned as
a pandas dataframe that can then be manipulated
'''
# get the acquisition object
acq = client.get(acq_id)
# convert to dictionary, and flatten the dictionary to avoid nested dicts
files = [x.to_dict() for x in acq.files]
flat_files = [nested_to_record(my_dict, sep='_') for my_dict in files]
# define desirable columns in regex
cols = r'(classification)|(^type$)|(^modality$)|(BIDS)|(RepetitionTime)|(SequenceName)|(SeriesDescription)'
# filter the dict keys for the columns names
flat_files = [
{k: v for k, v in my_dict.items() if re.search(cols, k)}
for my_dict in flat_files
]
# add acquisition ID for reference
for x in flat_files:
x.update({'acquisition.id': acq_id})
# to data frame
df = pd.DataFrame(flat_files)
# lastly, only pull niftis and dicoms; also convert list to string
if 'type' in df.columns:
df = df[df.type.str.contains(r'(nifti)|dicom')].reset_index(drop=True)
list_cols = (df.applymap(type) == list).all()
df.loc[:, list_cols] = df.loc[:, list_cols].applymap(unlist_item)
return df
acq_dfs = []
for index, row in tqdm(acquisitions.iterrows(), total=acquisitions.shape[0]):
try:
temp = process_acquisition(row["acquisition.id"], fw)
acq_dfs.append(temp)
except:
continue
bids_data=pd.concat(acq_dfs, sort=False)
bids_data.head()
###Output
_____no_output_____
###Markdown
Now let's assume we want to group by the following:
###Code
groups_list = ['classification_Intent', 'classification_Measurement']
###Output
_____no_output_____
###Markdown
In order to reference back to our original data frame, we create a group ID based on the groupings. These can be as granular as necessary and have as many different groups as you'd like.
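To see what `ngroup()` produces, here is a toy sketch on hypothetical values (the Intent/Measurement entries below are made up for illustration and are not from this project):
```python
# Toy example: rows sharing the same grouping values get the same group_id
toy = pd.DataFrame({'Intent': ['Structural', 'Functional', 'Structural'],
                    'Measurement': ['T1', 'T2*', 'T1']})
toy['group_id'] = toy.groupby(['Intent', 'Measurement']).ngroup().add(1)
# rows 0 and 2 end up with the same group_id because they share both grouping values
```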
###Code
bids_data2 = bids_data.copy()
# chain pandas operations inside parentheses, similar to R's "%>%" pipe
bids_data2['group_id'] = (bids_data
# groupby and keep the columns as columns
.groupby(groups_list, as_index=False)
# index the groups
.ngroup()
.add(1))
bids_data2.head()
###Output
_____no_output_____
###Markdown
So this is where we group the data and select one exemplar row from each group (`.nth(1)` takes the second row of each group rather than a random one):
###Code
grouped_data = bids_data2.groupby(groups_list, as_index=False).nth(1).reset_index(drop=True)
grouped_data
###Output
_____no_output_____
###Markdown
This is what you would download from a grouped query. Note that this isn't *strictly* a grouped dataframe; we have effectively emulated grouping by dropping duplicate rows on specific columns. Now, we modify some data in a copy of the query:
###Code
grouped_data_modified = grouped_data.copy()
grouped_data_modified.loc[grouped_data_modified['classification_Measurement'].isnull(), 'classification_Measurement'] = "Diffusion"
grouped_data_modified.loc[2, 'info_SequenceName'] = "SomeT1wSequence"
grouped_data_modified
###Output
_____no_output_____
###Markdown
We use our function to index the cells that have changed between the source and modified:
###Code
diff = upload_bids.get_unequal_cells(grouped_data_modified, grouped_data)
diff
###Output
_____no_output_____
###Markdown
Here, we loop through each of the changes and create a dictionary where the `key` is the group that the change needs to be applied to, and the value is a tuple of the `column:new_value` pair.
###Code
changes = {}
for x in diff:
key = grouped_data_modified.loc[x[0], 'group_id']
val = (grouped_data_modified.columns[x[1]], grouped_data_modified.iloc[x[0], x[1]])
changes.update({key: val})
changes
###Output
_____no_output_____
###Markdown
Now, using these indices, we can apply the changes to the groups in the full dataset:
###Code
for group, change in changes.items():
bids_data2.loc[bids_data2['group_id'] == group, change[0]] = change[1]
bids_data2.head(10)
###Output
_____no_output_____ |
notebooks/ranking-policy-eval_eric.ipynb | ###Markdown
Ability of direct method to measure on-policy
###Code
from sklearn.linear_model import LinearRegression
# split into samples
def get_value(df, model, model_type):
true_value = df['label'].mean()
if model_type in {'boost', 'lambda', 'rf'}:
pred_value = model.predict(df[model_features].astype('float'))
if model_type == 'lm':
pred_value = model.predict(df[model_features].astype('float').fillna(0))
pred_value = np.where(pred_value > 4, 4, np.where(pred_value < 0, 0, pred_value))
bias = (true_value - pred_value).mean()
rmse = np.sqrt(np.mean((true_value - pred_value) ** 2))
return true_value, bias, rmse
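# Note on off_policy_estimate below: it repeatedly (10x) splits policy v0's logged data
# into train/test halves by search_request_id, fits the chosen reward model on the
# training half, and reports the bias (and its spread across splits) and RMSE of the
# direct-method value estimate both on-policy (held-out half of v0) and off-policy
# (all of policy v1's data).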
def off_policy_estimate(v0, v1, model_type, name):
bias_v0s = []
bias_v1s = []
true_value_v0s = []
true_value_v1s = []
rmse_v0s = []
rmse_v1s = []
for i in range(10):
v0_train_requests = v0['search_request_id'].drop_duplicates().sample(frac=0.5)
v0_train = v0[v0['search_request_id'].isin(v0_train_requests)].sort_values('search_request_id')
v0_test = v0[~v0['search_request_id'].isin(v0_train_requests)]
# train model
if model_type == 'boost':
model = xgb.XGBRegressor(n_estimators=50)
model.fit(v0_train[model_features].astype('float'), v0_train['label'])
if model_type == 'lm':
model = LinearRegression()
model.fit(v0_train[model_features].astype('float').fillna(0), v0_train['label'])
if model_type == 'lambda':
model = xgb.XGBRanker(n_estimators=50)
model.fit(v0_train[model_features].astype('float'),
v0_train['label'],
v0_train['search_request_id'].value_counts(sort=False).sort_index())
if model_type == 'rf':
model = xgb.XGBRFRegressor(n_estimators=50)
model.fit(v0_train[model_features].astype('float'), v0_train['label'])
# predict and evaluate on-policy value
true_value_v0, bias_v0, rmse_v0 = get_value(v0_test, model, model_type)
bias_v0s.append(bias_v0)
true_value_v0s.append(true_value_v0)
rmse_v0s.append(rmse_v0)
# predict and evaluate off-policy value
true_value_v1, bias_v1, rmse_v1 = get_value(v1, model, model_type)
bias_v1s.append(bias_v1)
true_value_v1s.append(true_value_v1)
rmse_v1s.append(rmse_v1)
results = {}
results['Name'] = name
results['On Policy Value'] = np.mean(true_value_v0s)
results['Off Policy Value'] = np.mean(true_value_v1s)
results['On Policy Bias'] = np.mean(bias_v0s)
results['Off Policy Bias'] = np.mean(bias_v1s)
results['On Policy Std'] = np.std(bias_v0s)
results['Off Policy Std'] = np.std(bias_v1s)
results['On Policy RMSE'] = np.mean(rmse_v0s)
results['Off Policy RMSE'] = np.mean(rmse_v1s)
return results
v0 = data[data['variant'] == 'original']
v1 = data[data['variant'] == 'alternative']
performance = []
performance.append(off_policy_estimate(v0, v1, 'boost', 'OP - Boosting'))
performance.append(off_policy_estimate(v0, v1, 'lm', 'OP - Linear Regression'))
performance.append(off_policy_estimate(v0, v1, 'lambda', 'OP - LambdaMART'))
performance.append(off_policy_estimate(v0, v1, 'rf', 'OP - Random Forest'))
performance.append(off_policy_estimate(v1, v0, 'boost', 'NP - Boosting'))
performance.append(off_policy_estimate(v1, v0, 'lm', 'NP - Linear Regression'))
performance.append(off_policy_estimate(v1, v0, 'lambda', 'NP - LambdaMART'))
performance.append(off_policy_estimate(v1, v0, 'rf', 'NP - Random Forest'))
performance_df = pd.DataFrame(performance)
print(performance_df[['Name', 'On Policy Value', 'On Policy Bias', 'On Policy Std', 'On Policy RMSE']].to_latex())
print(performance_df[['Name', 'Off Policy Value', 'Off Policy Bias', 'Off Policy Std', 'Off Policy RMSE']].to_latex())
###Output
\begin{tabular}{llrrrr}
\toprule
{} & Name & Off Policy Value & Off Policy Bias & Off Policy Std & Off Policy RMSE \\
\midrule
0 & OP - Boosting & 0.186435 & -0.009941 & 0.001531 & 0.212893 \\
1 & OP - Linear Regression & 0.186435 & 0.002677 & 0.000688 & 0.172784 \\
2 & OP - LambdaMART & 0.186435 & -0.442453 & 0.004265 & 0.699071 \\
3 & OP - Random Forest Regression & 0.186435 & -0.006005 & 0.001467 & 0.191687 \\
4 & NP - Boosting & 0.137336 & -0.010458 & 0.001054 & 0.158688 \\
5 & NP - Linear Regression & 0.137336 & -0.012108 & 0.000564 & 0.116803 \\
6 & NP - LambdaMART & 0.137336 & -0.115359 & 0.004427 & 0.418053 \\
7 & NP - Random Forest & 0.137336 & -0.023485 & 0.000603 & 0.132441 \\
\bottomrule
\end{tabular}
|
OOI/from_ooi_json-pwrsys.ipynb | ###Markdown
Convert OOI Parsed pwrsys JSON to NetCDF fileusing CF-1.6, Discrete Sampling Geometry (DSG) conventions, **`featureType=timeSeries`**
###Code
%matplotlib inline
import json
import pandas as pd
import numpy as np
from pyaxiom.netcdf.sensors import TimeSeries
# infile = '/usgs/data2/notebook/data/20170130.superv.json'  # leftover superv path, superseded by the pwrsys file below
infile = '/sand/usgs/users/rsignell/data/ooi/endurance/cg_proc/ce02shsm/D00004/buoy/pwrsys/20170208.pwrsys.json'
outfile = '/usgs/data2/notebook/data/20170208.pwrsys.nc'
with open(infile) as jf:
js = json.load(jf)
df = pd.DataFrame({})
for k, v in js.items():
df[k] = v
df['time'] = pd.to_datetime(df.time, unit='s')
df['depth'] = 0.
df.head()
df['solar_panel4_voltage'].plot();
df.index = df['time']
df['solar_panel4_voltage'].plot();
###Output
_____no_output_____
###Markdown
Define the NetCDF global attributes
###Code
global_attributes = {
'institution':'Oregon State University',
'title':'OOI CE02SHSM Pwrsys Data',
'summary':'OOI Pwrsys data from Coastal Endurance Oregon Shelf Surface Mooring',
'creator_name':'Chris Wingard',
'creator_email':'[email protected]',
'creator_url':'http://ceoas.oregonstate.edu/ooi'
}
###Output
_____no_output_____
###Markdown
Create initial file
###Code
ts = TimeSeries(
output_directory='.',
latitude=44.64,
longitude=-124.31,
station_name='ce02shsm',
global_attributes=global_attributes,
times=df.time.values.astype(np.int64) // 10**9,
verticals=df.depth.values,
output_filename=outfile,
vertical_positive='down'
)
###Output
_____no_output_____
###Markdown
Add data variables
###Code
df.columns.tolist()
# create a dictionary of variable attributes
atts = {
'main_current':{'units':'volts', 'long_name':'main current'},
'solar_panel3_voltage':{'units':'volts', 'long_name':'solar panel 3 voltage'}
}
print(atts.get('main_current'))
# if we ask for a key that doesn't exist, we get a value of "None"
print(atts.get('foobar'))
for c in df.columns:
if c in ts._nc.variables:
print("Skipping '{}' (already in file)".format(c))
continue
if c in ['time', 'lat', 'lon', 'depth', 'cpm_date_time_string']:
print("Skipping axis '{}' (already in file)".format(c))
continue
if 'object' in df[c].dtype.name:
print("Skipping object {}".format(c))
continue
print("Adding {}".format(c))
# add variable values and variable attributes here
ts.add_variable(c, df[c].values, attributes=atts.get(c))
df['error_flag3'][0]
ts.ncd
import netCDF4
nc = netCDF4.Dataset(outfile)
nc['main_current']
nc.close()
###Output
_____no_output_____ |
data/exchange_rates/exchange_rates.ipynb | ###Markdown
Read Data
###Code
rates = pd.read_csv(
    'input/Export_GBP.csv', sep=';', header=None,
    names=['month_start', 'month_end', 'exchange', 'rate'],
)
rates.shape
rates.head()
###Output
_____no_output_____
###Markdown
Fix Types
###Code
rates.month_start = pd.to_datetime(rates.month_start, dayfirst=True)
rates.drop('month_end', axis=1, inplace=True)
rates.plot.line('month_start', 'rate')
rates.month_start.unique().shape
rates.month_start.describe()
###Output
_____no_output_____
###Markdown
Trim: Exclude the current month (incomplete) and the pre-EURO data, which is before our datasets start.
###Code
rates = rates[rates.month_start < '2018-08-01'].copy()
rates.exchange.unique()
rates[rates.exchange == 'ECU/GBP'].month_start.max()
rates = rates[rates.exchange == 'EUR/GBP'].copy()
rates.drop('exchange', axis=1, inplace=True)
rates.head()
###Output
_____no_output_____
###Markdown
Extrapolate
###Code
rates.head()
mean_rate = rates[rates.month_start >= '2016-07-01'].rate.mean()
mean_rate
future_month_starts = pd.to_datetime([
'{:4d}-{:02d}-01'.format(year, month)
for year in range(2018, 2026)
for month in range(1, 13)
])
future_month_starts = future_month_starts[future_month_starts > rates.month_start.max()]
future_month_starts
future_rates = pd.DataFrame({
'month_start': future_month_starts,
'rate': mean_rate
})
future_rates.head()
all_rates = pd.concat([rates, future_rates]).sort_values('month_start')
all_rates.shape
all_rates.head()
all_rates.tail()
all_rates.plot.line('month_start', 'rate')
all_rates.to_pickle('output/exchange_rates.pkl.gz')
###Output
_____no_output_____ |
Chapter 4/R Lab/4.6.5 K-Nearest Neighbors.ipynb | ###Markdown
**KNN without standardisation (K = 1)**
###Code
from sklearn.neighbors import KNeighborsClassifier
knn_1 = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
knn_1_pred = knn_1.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, knn_1_pred))
print(classification_report(y_test, knn_1_pred))
###Output
precision recall f1-score support
Down 0.44 0.46 0.45 118
Up 0.51 0.49 0.50 134
micro avg 0.48 0.48 0.48 252
macro avg 0.48 0.48 0.47 252
weighted avg 0.48 0.48 0.48 252
###Markdown
**KNN without standardisation (K = 3)**
###Code
from sklearn.neighbors import KNeighborsClassifier
knn_3 = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
knn_3_pred = knn_3.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, knn_3_pred))
print(classification_report(y_test, knn_3_pred))
###Output
precision recall f1-score support
Down 0.47 0.43 0.45 118
Up 0.53 0.57 0.55 134
micro avg 0.50 0.50 0.50 252
macro avg 0.50 0.50 0.50 252
weighted avg 0.50 0.50 0.50 252
###Markdown
*As we can see, increasing K marginally improves the precision of the model.* **KNN with standardisation (K = 1)** **Why standardise?** *Because the KNN classifier computes distances between observations whose features can sit on very different absolute scales (e.g. house prices, where distances are in '000s of £, versus age, where distances are a few years). Without standardisation, the large-scale feature dominates the distance; standardising puts every feature on a comparable scale.*
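Concretely, `StandardScaler` applies z = (x - mean) / std to each column. A small sketch with made-up numbers (not from the Smarket data) shows the effect:
```python
import numpy as np
# Two features on very different scales: price in £ and age in years (hypothetical values)
x = np.array([[200_000., 25.],
              [350_000., 40.],
              [500_000., 60.]])
z = (x - x.mean(axis=0)) / x.std(axis=0)   # what StandardScaler computes per column
# After scaling, both columns have mean 0 and unit variance, so neither dominates the distance
```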
###Code
from sklearn.preprocessing import StandardScaler
scaler_1 = StandardScaler()
scaler_1.fit(Smarket.drop(columns = 'Direction', axis = 1).astype(float))
scaled_features_1 = scaler_1.transform(Smarket.drop(columns = 'Direction', axis = 1).astype(float))
df_1 = pd.DataFrame(scaled_features_1, columns = Smarket.columns[:-1] )
df_1.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(scaled_features_1,Smarket['Direction'],
test_size=0.30)
from sklearn.neighbors import KNeighborsClassifier
knn_s_1 = KNeighborsClassifier(n_neighbors=1)
knn_s_1.fit(X_train, y_train)
knn_s_1_pred = knn_s_1.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, knn_s_1_pred))
print(classification_report(y_test, knn_s_1_pred))
###Output
precision recall f1-score support
Down 0.81 0.77 0.79 191
Up 0.77 0.82 0.79 184
micro avg 0.79 0.79 0.79 375
macro avg 0.79 0.79 0.79 375
weighted avg 0.79 0.79 0.79 375
###Markdown
**KNN with standardisation (K = 3)**
###Code
from sklearn.preprocessing import StandardScaler
scaler_3 = StandardScaler()
scaler_3.fit(Smarket.drop(columns='Direction', axis = 1).astype(float))
scaled_features_3 = scaler_3.transform(Smarket.drop(columns='Direction', axis = 1).astype(float))
df_3 = pd.DataFrame(scaled_features_3, columns = Smarket.columns[:-1] )
df_3.head()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(scaled_features_3,Smarket['Direction'],
test_size=0.30)
from sklearn.neighbors import KNeighborsClassifier
knn_s_3 = KNeighborsClassifier(n_neighbors=3)
knn_s_3.fit(X_train, y_train)
knn_s_3_pred = knn_s_3.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, knn_s_3_pred))
print(classification_report(y_test, knn_s_3_pred))
###Output
precision recall f1-score support
Down 0.87 0.84 0.86 181
Up 0.86 0.89 0.87 194
micro avg 0.86 0.86 0.86 375
macro avg 0.86 0.86 0.86 375
weighted avg 0.86 0.86 0.86 375
|
colab_notebooks/turker_ensemble_original_updated.ipynb | ###Markdown
###Code
%tensorflow_version 1.x
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import *
from tensorflow.python.lib.io import file_io
%matplotlib inline
import keras
from keras import backend as K
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras.models import load_model
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import plot_model
from sklearn.metrics import *
from keras.engine import Model
from keras.layers import Input, Flatten, Dense, Activation, Conv2D, MaxPool2D, BatchNormalization, Dropout, MaxPooling2D
import skimage
from skimage.transform import rescale, resize
from google.colab import drive
drive.mount('/content/drive')
Resize_pixelsize = 197
# Function that reads the data from the csv file, increases the size of the images and returns the images and their labels
# dataset: Data path
def get_data(dataset):
file_stream = file_io.FileIO(dataset, mode='r')
data = pd.read_csv(file_stream)
data[' pixels'] = data[' pixels'].apply(lambda x: [int(pixel) for pixel in x.split()])
X, Y = data[' pixels'].tolist(), data['emotion'].values
X = np.array(X, dtype='float32').reshape(-1,48,48,1)
X = X/255.0
X_res = np.zeros((X.shape[0], Resize_pixelsize,Resize_pixelsize,3))
for ind in range(X.shape[0]):
sample = X[ind]
sample = sample.reshape(48, 48)
image_resized = resize(sample, (Resize_pixelsize, Resize_pixelsize), anti_aliasing=True)
X_res[ind,:,:,:] = image_resized.reshape(Resize_pixelsize,Resize_pixelsize,1)
Y_res = np.zeros((Y.size, 7))
Y_res[np.arange(Y.size),Y] = 1
return X, X_res, Y_res
local_path = "/content/drive/My Drive/Personal projects/emotion_recognition_paper/data/fer_csv/"
dev_dataset_dir = local_path +"dev.csv"
test_dataset_dir = local_path + "test.csv"
X_dev, X_res_dev, Y_dev = get_data(dev_dataset_dir)
X_test, X_res_test, Y_test = get_data(test_dataset_dir)
model = load_model('/content/drive/My Drive//Personal projects/emotion_recognition_paper/cs230 project/models/soa-SGD_LR_0.01000-EPOCHS_300-BS_128-DROPOUT_0.3test_acc_0.663.h5')
print('\n# Evaluate on dev data')
results_dev = model.evaluate(X_dev,Y_dev)
print('dev loss, dev acc:', results_dev)
print('\n# Evaluate on test data')
results_test = model.evaluate(X_test,Y_test)
print('test loss, test acc:', results_test)
model2 = load_model('/content/drive/My Drive/Personal projects/emotion_recognition_paper/cs230 project/models/soa-SGD_LR_0.01000-EPOCHS_300-BS_128-DROPOUT_0.4test_acc_0.657.h5')
print('\n# Evaluate on dev data')
results_dev = model2.evaluate(X_dev,Y_dev)
print('dev loss, dev acc:', results_dev)
print('\n# Evaluate on test data')
results_test = model2.evaluate(X_test,Y_test)
print('test loss, test acc:', results_test)
Resnet_model = load_model('/content/drive/My Drive/Personal projects/emotion_recognition_paper/cs230 project/models/tl/ResNet-BEST-73.2.h5')
print('\n# Evaluate on dev data')
results_dev = Resnet_model.evaluate(X_res_dev,Y_dev)
print('dev loss, dev acc:', results_dev)
print('\n# Evaluate on test data')
results_test = Resnet_model.evaluate(X_res_test,Y_test)
print('test loss, test acc:', results_test)
Resnet_model_wcw = load_model('/content/drive/My Drive/Personal projects/emotion_recognition_paper/cs230 project/models/tl/ResNet-BEST-WCW-0.677.h5')
print('\n# Evaluate on dev data')
results_dev = Resnet_model_wcw.evaluate(X_res_dev,Y_dev)
print('dev loss, dev acc:', results_dev)
print('\n# Evaluate on test data')
results_test = Resnet_model_wcw.evaluate(X_res_test,Y_test)
print('test loss, test acc:', results_test)
# Senet_model = load_model('/content/drive/My Drive/cs230 project/models/tl/SeNet50-BEST-69.8.h5')
# print('\n# Evaluate on dev data')
# results_dev = Senet_model.evaluate(X_res_dev,Y_dev)
# print('dev loss, dev acc:', results_dev)
# print('\n# Evaluate on test data')
# results_test = Senet_model.evaluate(X_res_test,Y_test)
# print('test loss, test acc:', results_test)
Senet_model_wcw = load_model('/content/drive/My Drive/Personal projects/emotion_recognition_paper/cs230 project/models/tl/SeNet50-WCW-BEST-68.9.h5')
print('\n# Evaluate on dev data')
results_dev = Senet_model_wcw.evaluate(X_res_dev,Y_dev)
print('dev loss, dev acc:', results_dev)
print('\n# Evaluate on test data')
results_test = Senet_model_wcw.evaluate(X_res_test,Y_test)
print('test loss, test acc:', results_test)
# VGG100_model = load_model('/content/drive/My Drive/cs230 project/models/tl/VGG-BEST-69.5.h5')
# print('\n# Evaluate on dev data')
# results_dev = VGG100_model.evaluate(X_res_dev,Y_dev)
# print('dev loss, dev acc:', results_dev)
# print('\n# Evaluate on test data')
# results_test = VGG100_model.evaluate(X_res_test,Y_test)
# print('test loss, test acc:', results_test)
# VGG100_model_wcw = load_model("/content/drive/My Drive/cs230 project/models/tl/vgg100-WCW-BEST-70.h5")
# print('\n# Evaluate on dev data')
# results_dev = VGG100_model_wcw.evaluate(X_res_dev,Y_dev)
# print('dev loss, dev acc:', results_dev)
# print('\n# Evaluate on test data')
# results_test = VGG100_model_wcw.evaluate(X_res_test,Y_test)
# print('test loss, test acc:', results_test)
Resnet_Aux_model = load_model("/content/drive/My Drive/Personal projects/emotion_recognition_paper/cs230 project/models/auxiliary/RESNET50-AUX-BEST-72.7.h5")
print('\n# Evaluate on dev data')
results_dev = Resnet_Aux_model.evaluate(X_res_dev,Y_dev)
print('dev loss, dev acc:', results_dev)
print('\n# Evaluate on test data')
results_test = Resnet_Aux_model.evaluate(X_res_test,Y_test)
print('test loss, test acc:', results_test)
Resnet_Aux_model_wcw = load_model("/content/drive/My Drive/Personal projects/emotion_recognition_paper/cs230 project/models/auxiliary/RESNET50-WCW-AUX-BEST-72.4.h5")
print('\n# Evaluate on dev data')
results_dev = Resnet_Aux_model_wcw.evaluate(X_res_dev,Y_dev)
print('dev loss, dev acc:', results_dev)
print('\n# Evaluate on test data')
results_test = Resnet_Aux_model_wcw.evaluate(X_res_test,Y_test)
print('test loss, test acc:', results_test)
Senet_Aux_model = load_model('/content/drive/My Drive/Personal projects/emotion_recognition_paper/cs230 project/models/auxiliary/SENET50-AUX-BEST-72.5.h5')
print('\n# Evaluate on dev data')
results_dev = Senet_Aux_model.evaluate(X_res_dev,Y_dev)
print('dev loss, dev acc:', results_dev)
print('\n# Evaluate on test data')
results_test = Senet_Aux_model.evaluate(X_res_test,Y_test)
print('test loss, test acc:', results_test)
Senet_Aux_model_wcw = load_model('/content/drive/My Drive/Personal projects/emotion_recognition_paper/cs230 project/models/auxiliary/SENET50-WCW-AUX-BEST-71.6.h5')
print('\n# Evaluate on dev data')
results_dev = Senet_Aux_model_wcw.evaluate(X_res_dev,Y_dev)
print('dev loss, dev acc:', results_dev)
print('\n# Evaluate on test data')
results_test = Senet_Aux_model_wcw.evaluate(X_res_test,Y_test)
print('test loss, test acc:', results_test)
# !ls "/content/drive/My Drive/Personal projects/emotion_recognition_paper/cs230 project/models/final/"
VGG_Aux_model = load_model("/content/drive/My Drive/Personal projects/emotion_recognition_paper/cs230 project/models/auxiliary/VGG16-AUX-BEST-70.2.h5")
print('\n# Evaluate on dev data')
results_dev = VGG_Aux_model.evaluate(X_res_dev,Y_dev)
print('dev loss, dev acc:', results_dev)
print('\n# Evaluate on test data')
results_test = VGG_Aux_model.evaluate(X_res_test,Y_test)
print('test loss, test acc:', results_test)
models_SOA = [model, model2]
models_TL = [Resnet_model, Resnet_Aux_model_wcw, Senet_Aux_model, Senet_Aux_model_wcw, VGG_Aux_model]
# make an ensemble prediction for multi-class classification
def ensemble_predictions(models_SOA, testX, models_TL, testresX):
# make predictions
yhats = np.zeros((len(models_SOA)+len(models_TL),testX.shape[0],7))
for model_ind in range(len(models_SOA)):
yhat = models_SOA[model_ind].predict(testX)
yhats[model_ind,:,:] = yhat
for model_ind in range(len(models_TL)):
yhat = models_TL[model_ind].predict(testresX)
yhats[len(models_SOA)+model_ind,:,:] = yhat
summed = np.sum(yhats, axis=0)
result = np.argmax(summed, axis=1)
return result
# evaluate a specific number of members in an ensemble
def evaluate_n_members(models_SOA, testX, models_TL, testresX, testy):
# select a subset of members
#subset = members[:n_members]
#print(len(subset))
# make prediction
yhat = ensemble_predictions(models_SOA, testX, models_TL, testresX)
# calculate accuracy
return accuracy_score(testy, yhat)
ens_acc = evaluate_n_members(models_SOA, X_test, models_TL, X_res_test, np.argmax(Y_test, axis=1))
print(ens_acc)
# ens_acc = evaluate_n_members(models_SOA, X_dev, models_TL, X_res_dev, np.argmax(Y_dev, axis=1))
# print(ens_acc)
###Output
_____no_output_____ |
test/.ipynb_checkpoints/test_video_loading-checkpoint.ipynb | ###Markdown
Test video loading 1. Setup
###Code
import sys
# sys.path.append('/Users/zhenyvlu/work/sesame')
sys.path.append(r'C:\Users\luzhe\deep_learning\sesame')
import numpy
import torch
%matplotlib inline
import matplotlib.pyplot as plt
import sesame.utils.logger as logger
from sesame.datasets.ava_dataset import Ava
from sesame.config.defaults import get_cfg
log = logger.get_logger('notebook')
logger.setup_logging('.')
cfg = get_cfg()
cfg.merge_from_file("../configs/AVA/MVIT_B_16x4_CONV.yaml")
# cfg = CN.load_cfg(open("../configs/Kinetics/MVIT_B_16x4_CONV.yaml", "r", encoding="UTF-8"))
ava_dataset_train = Ava(cfg, 'train')
###Output
[01/24 15:41:58][INFO] ava_dataset.py: 47: Finished loading annotations from: %s
[01/24 15:41:58][INFO] ava_dataset.py: 48: Detection threshold: 0.9
[01/24 15:41:58][INFO] ava_dataset.py: 49: Number of unique boxes: 1132
[01/24 15:41:58][INFO] ava_dataset.py: 50: Number of annotations: 3009
[01/24 15:41:58][INFO] ava_dataset.py: 98: 706 keyframes used.
[01/24 15:41:58][INFO] ava_dataset.py: 243: === AVA dataset summary ===
[01/24 15:41:58][INFO] ava_dataset.py: 244: Split: train
|
notebooks/nn_pass_difficulty.ipynb | ###Markdown
Pass Difficulty with Neural Networks: Using convolutional neural networks to create pass difficulty surfaces. This work is highly motivated by the recent [SoccerMap](https://whova.com/embedded/subsession/ecmlp_202009/1194275/1194279/) paper published by [Javier Fernandez](https://twitter.com/JaviOnData) and [Luke Bornn](https://twitter.com/LukeBornn), which provided a deep learning architecture for producing probability surfaces from raw tracking data. It's really brilliant, and you should take a look. This notebook is an exercise in applying some of the major learnings from their paper, in particular the evaluation of a loss function at a single output pixel, to the less-exclusive event-based domain. Using StatsBomb's Open Data from the 2018 World Cup, we aim to produce continuous pass completion probability surfaces given the originating location of the pass. Naturally, our predictions offer less flexibility and precision than those produced with the richer context found in full tracking data, but I think this offers a possible improvement on many existing pass probability models in the event-based domain. And it produces some really attractive plots along the way.---
###Code
# This builds the soccerutils module in the Analytics Handbook so you can import it
!pip install git+https://github.com/devinpleuler/analytics-handbook.git
from soccerutils.statsbomb import get_events
from soccerutils.pitch import Pitch
###Output
_____no_output_____
###Markdown
Like previous examples, we import `get_events` from the `soccerutils.statsbomb` module, which loads all events from a single competition/season into a simple list. We also import the `Pitch` class from `soccerutils.pitch`, which we can use to plot field lines.
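For reference, the `Pitch` helper is used later inside `draw_map`; on its own it can be used roughly like this (a sketch assuming the same `create_pitch(ax)` interface used further down in this notebook):

```python
import matplotlib.pyplot as plt
from soccerutils.pitch import Pitch

fig, ax = plt.subplots(figsize=(8, 6))
pitch = Pitch(title=None)
pitch.create_pitch(ax)   # draw the field lines onto the matplotlib axes
ax.set_aspect(1)
ax.axis('off')
plt.show()
```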
###Code
import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
events = get_events(competition_id=43, season_id=3)
###Output
_____no_output_____
###Markdown
Get all 2018 World Cup events and put them into the events list.---
###Code
def transform(coords, x_bins=52, y_bins=34):
x, y = coords
x_bin = np.digitize(x, np.linspace(0, 120, x_bins))
y_bin = np.digitize(y, np.linspace(0, 80, y_bins))
matrix = np.zeros((x_bins, y_bins))
try:
matrix[x_bin][y_bin] = 1
except IndexError:
pass
return matrix
def build_tensor(point, x_bins=52, y_bins=34):
xx = np.linspace(0, 120, x_bins)
yy = np.linspace(0, 80, y_bins)
xv, yv = np.meshgrid(xx, yy, sparse=False, indexing='ij')
coords = np.dstack([xv, yv])
origin = np.array(point)
goal = np.array([120,40])
pos = transform(point)
r_origin = np.linalg.norm(origin - coords, axis=2)
r_goal = np.linalg.norm(goal - coords, axis=2)
tensor = np.dstack([pos, r_origin, r_goal])
return tensor
###Output
_____no_output_____
###Markdown
We use `transform()` and `build_tensor()` to help construct our training data. You can see the outputs of these functions in spatial form a little bit further down in the notebook. I highly recommend using `numpy` vectorized approaches to the construction of these surfaces, otherwise you're gonna be sitting around for a while waiting for your training data to populate. Note: we borrow the matrix dimensions from `SoccerMap`, working at 1/2 the resolution (1/4 the number of cells) of their 104 x 68 representation.---
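A quick sanity check of what these helpers return (the coordinates here are arbitrary, chosen just for illustration):

```python
# An arbitrary pass origin in StatsBomb coordinates (x in [0, 120], y in [0, 80])
t = build_tensor([60.0, 40.0])
print(t.shape)            # (52, 34, 3): one-hot origin, distance to origin, distance to goal
print(t[:, :, 0].sum())   # 1.0 -> exactly one cell marks the origin

d = transform([100.0, 40.0])
print(d.shape)            # (52, 34): sparse mask used for pass destinations
```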
###Code
from tqdm import tqdm
passes= []
for e in tqdm(events):
if e['type']['id'] == 30: # Events with type ID == 30 are Passes
passes.append({
'player': e['player'],
'origin': build_tensor(e['location']),
'dest': transform(e['pass']['end_location']),
'outcome': 0 if 'outcome' in e['pass'].keys() else 1
})
Xp = np.asarray([p['origin'] for p in passes])
Xd = np.asarray([p['dest'] for p in passes])
Y = np.asarray([p['outcome'] for p in passes], dtype=np.float32)
from sklearn.model_selection import train_test_split
Xp_train, Xp_test, \
Xd_train, Xd_test, \
Y_train, Y_test = train_test_split(Xp, Xd, Y,
test_size=0.1,
random_state = 1,
shuffle=True)
###Output
_____no_output_____
###Markdown
This splits our data 90/10 into separate training and testing groups.---
###Code
n = 500 # We use this sample index through the entire notebook
fig, axs = plt.subplots(1,3, figsize=(10,8))
for i, ax in enumerate(axs):
ax.matshow(Xp_test[n][:,:,i])
###Output
_____no_output_____
###Markdown
Like the approach suggested in `SoccerMap`, we produce an `l x w x c` matrix, where `l` (*length*) and `w` (*width*) represent coarsened locations on a field and `c` represents *channels* of information that are binned into the spatial representation in the first two dimensions. Unlike `SoccerMap`, we don't have to represent player-by-player locations and velocities, but we do construct various channels that represent spatial relationships between the origin of the pass and the goal. We also include a sparse channel that represents the cell that belongs to the origin of the pass.---
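For instance, the three channels of a single training sample can be inspected directly (`n` is the same sample index used throughout this notebook):

```python
sample = Xp_test[n]
print(sample.shape)            # (52, 34, 3)
print(sample[:, :, 0].sum())   # 1.0 -> channel 0 is the sparse one-hot pass origin
print(sample[:, :, 1].min())   # near 0 at the origin cell, growing with distance from it
print(sample[:, :, 2].min())   # smallest near the goal at (120, 40)
```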
###Code
avg_completion_rate = np.mean(Y_train)
print(avg_completion_rate)
###Output
0.7975862
###Markdown
We initialize the final prediction surface with the `avg_completion_rate` to speed up training.---
###Code
from keras.layers import Input, Concatenate, Conv2D, MaxPooling2D, UpSampling2D, Lambda
from keras.models import Model
from keras.initializers import Constant
import keras.backend as K
from tensorflow import pad, constant
# I couldn't figure out how to do padding with the Keras backend
def symmetric_pad(x):
paddings = constant([[0, 0], [1, 1], [1, 1], [0, 0]])
return pad(x, paddings, "SYMMETRIC")
###Output
_____no_output_____
###Markdown
Getting the padding right is critical for reducing artifacting along the edges of our probability surfaces. As convolutions reduce the size of your representations (depending on their kernel size and stride), you need to return your representations to the original size. `Symmetric Padding` is particularly useful for this situation as it fills the padding cells with values similar to those around them, unlike `same` padding which fills those cells with a constant value. We apply this sort of padding after any convolution layers that have kernel sizes other than `(1,1)`.---
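The difference between the two padding modes is easy to see on a tiny array (a plain `numpy` sketch; the model itself pads via the `symmetric_pad` Lambda defined above):

```python
import numpy as np

a = np.arange(9, dtype=float).reshape(3, 3)

# Symmetric padding mirrors the border values, so edge cells get realistic neighbours
print(np.pad(a, 1, mode='symmetric'))

# Constant padding (what zero-filled 'same' padding effectively does) fills the border with a fixed value
print(np.pad(a, 1, mode='constant', constant_values=0))
```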
###Code
def pixel_layer(x):
surface = x[:,:,:,0]
mask = x[:,:,:,1]
masked = surface * mask
value = K.sum(masked, axis=(2,1))
return value
###Output
_____no_output_____
###Markdown
This custom single-pixel layer is a critical piece for evaluating loss during model training. As you can see in the model structure below, we pass in two separate inputs. One is the training data that we construct via `build_tensor()`, but we also pass in the sparse spatial representation of the destination as a separate input. We use that second input to *mask* (via multiplication) the final prediction surface so that we can grab the prediction at the cell on the surface that matches the actual destination of the pass and compare it to the true value with log loss.
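Conceptually, the masking trick is just an element-wise multiply followed by a sum; a plain `numpy` sketch with a hypothetical 3x3 surface (not the Keras layer itself):

```python
import numpy as np

surface = np.array([[0.9, 0.8, 0.7],
                    [0.6, 0.5, 0.4],
                    [0.3, 0.2, 0.1]])   # hypothetical completion probabilities

mask = np.zeros_like(surface)
mask[1, 2] = 1.0                        # one-hot destination of the pass

prediction = (surface * mask).sum()     # keeps only the cell we can actually evaluate
print(prediction)                       # 0.4
```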
###Code
pass_input = Input(shape=(52,34,3), name='pass_input')
dest_input = Input(shape=(52,34,1), name='dest_input')
x = Conv2D(16, (3, 3), activation='relu', padding='valid')(pass_input)
x = Lambda(symmetric_pad)(x)
x = Conv2D(1, (1, 1), activation='linear')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(32, (3, 3), activation='relu', padding='valid')(x)
x = Lambda(symmetric_pad)(x)
x = Conv2D(1, (1, 1), activation='linear')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu', padding='valid')(x)
x = Lambda(symmetric_pad)(x)
out = Conv2D(1, (1,1), activation='sigmoid',
kernel_initializer=Constant(avg_completion_rate))(x)
combined = Concatenate()([out, dest_input])
pixel = Lambda(pixel_layer)(combined)
model = Model([pass_input, dest_input], combined)
full = Model([pass_input, dest_input], pixel)
###Output
_____no_output_____
###Markdown
Notice that we define two separate models, `model` and `full`. One is a subset of the other. We will use `model` to produce surfaces, while we use `full` to produce predictions at just the destination coordinates of the pass.---
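In other words (a sketch of how the two heads get used later in this notebook):

```python
# `model` returns the full surface plus the destination mask it was given
surfaces = model.predict([Xp_test, Xd_test])
print(surfaces.shape)   # (n_samples, 52, 34, 2)

# `full` collapses each surface to the single probability at the true destination
preds = full.predict([Xp_test, Xd_test])
print(preds.shape)      # one completion probability per pass
```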
###Code
# full.summary()
# If you're interested in seeing the layer-by-layer dimensions, run this cell.
full.compile(loss="binary_crossentropy", optimizer="adam")
fit = full.fit(
[Xp_train, Xd_train], Y_train,
epochs=30,
validation_data=([Xp_test, Xd_test], Y_test))
train_loss = fit.history['loss']
test_loss = fit.history['val_loss']
plt.plot(train_loss, label='Training Loss')
plt.plot(test_loss, label="Validation Loss")
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
Plotting the model loss during training for the training and validation data sets.---
###Code
surfaces = model.predict([Xp_test, Xd_test])
###Output
_____no_output_____
###Markdown
Construct surfaces from the test data---
###Code
s = plt.matshow(surfaces[n][:,:,0])
plt.colorbar(s, shrink=0.8)
###Output
_____no_output_____
###Markdown
Notice the dimples in pass difficulty around the origin of the pass. This is due to survivorship bias caused by incomplete passes being cut short before they reach their intended destination. In particular, this happens frequently with blocked passes near the origin, causing this interesting artifact. In an ideal world, we would be using the intended destination of the pass as opposed to the actual destination, but that's impossible in the event-based domain.
###Code
from scipy.ndimage import gaussian_filter
def draw_map(img, pass_, title=None, dims=(52,34)):
image = gaussian_filter((img).reshape(dims), sigma=1.8)
fig, ax = plt.subplots(figsize=(8,6))
pitch = Pitch(title=title)
pitch.create_pitch(ax)
z = np.rot90(image, 1)
xx = np.linspace(0, 120, 52)
yy = np.linspace(0, 80, 34)
c = ax.contourf(
xx, yy, z,
zorder=2,
levels=np.linspace(0.2, 1.0, 17),
alpha=0.8,
antialiased=True,
cmap='RdBu')
x,y = np.unravel_index(pass_.argmax(), pass_.shape)
y = dims[1] - y
origin = np.asarray([x,y])*[120/dims[0],80/dims[1]]
cosmetics = {
'linewidth': 1,
'facecolor': "yellow",
'edgecolor': "black",
'radius': 1.5,
'zorder': 5
}
pitch.draw_points(ax, [origin], cosmetics=cosmetics)
ax.set_aspect(1)
ax.axis('off')
plt.tight_layout()
plt.colorbar(c, ax=ax, shrink=0.6)
plt.show()
surface = surfaces[n][:,:,0]
draw_map(surface, Xp_test[n][:,:,0])
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.calibration import calibration_curve
predictions = full.predict([Xp_test, Xd_test])
###Output
_____no_output_____
###Markdown
Notice that we pull predictions from the `full` model, since we're only interested in the predictions at the destination coordinates, not the full surface.---
###Code
fraction_of_positives, mean_predicted_value = calibration_curve(Y_test, predictions, n_bins=10)
plt.plot([0, 1], [0, 1], "k--", label="Perfectly calibrated")
plt.plot(fraction_of_positives, mean_predicted_value, label="Our model")
plt.xlabel('Actual Completion Percentage')
plt.ylabel('Predicted Completion Percentage')
plt.title('Model Calibration')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
The model struggles a bit at the bottom end, which isn't entirely surprising: there isn't a huge sample of low-probability passes down there.---
###Code
fpr, tpr, _ = roc_curve(Y_test, predictions)
auc = roc_auc_score(Y_test, predictions)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr, tpr, label="AUC: {:.3f}".format(auc))
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____ |
Sequence Models/week1/Dinosaurus+Island+--+Character+level+language+model+final+-+v3.ipynb | ###Markdown
Character level language model - Dinosaurus landWelcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely! Luckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this [dataset](dinos.txt). (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath! By completing this assignment you will learn:- How to store text data for processing using an RNN - How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit- How to build a character-level text generation recurrent neural network- Why clipping the gradients is importantWe will begin by loading in some functions that we have provided for you in `rnn_utils`. Specifically, you have access to functions such as `rnn_forward` and `rnn_backward` which are equivalent to those you've implemented in the previous assignment.
###Code
import numpy as np
from utils import *
import random
###Output
_____no_output_____
###Markdown
1 - Problem Statement 1.1 - Dataset and PreprocessingRun the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.
###Code
data = open('dinos.txt', 'r').read()
data= data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
###Output
There are 19909 total characters and 27 unique characters in your data.
###Markdown
The characters are a-z (26 characters) plus the "\n" (or newline character), which in this assignment plays a role similar to the `` (or "End of sentence") token we had discussed in lecture, only here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. We also create a second python dictionary that maps each index back to the corresponding character. This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. Below, `char_to_ix` and `ix_to_char` are the python dictionaries.
###Code
char_to_ix = { ch:i for i,ch in enumerate(sorted(chars)) }
ix_to_char = { i:ch for i,ch in enumerate(sorted(chars)) }
print(ix_to_char)
###Output
{0: '\n', 1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e', 6: 'f', 7: 'g', 8: 'h', 9: 'i', 10: 'j', 11: 'k', 12: 'l', 13: 'm', 14: 'n', 15: 'o', 16: 'p', 17: 'q', 18: 'r', 19: 's', 20: 't', 21: 'u', 22: 'v', 23: 'w', 24: 'x', 25: 'y', 26: 'z'}
###Markdown
1.2 - Overview of the modelYour model will have the following structure: - Initialize parameters - Run the optimization loop - Forward propagation to compute the loss function - Backward propagation to compute the gradients with respect to the loss function - Clip the gradients to avoid exploding gradients - Using the gradients, update your parameter with the gradient descent update rule.- Return the learned parameters **Figure 1**: Recurrent Neural Network, similar to what you had built in the previous notebook "Building a RNN - Step by Step". At each time-step, the RNN tries to predict what is the next character given the previous characters. The dataset $X = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is a list of characters in the training set, while $Y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$ is such that at every time-step $t$, we have $y^{\langle t \rangle} = x^{\langle t+1 \rangle}$. 2 - Building blocks of the modelIn this part, you will build two important blocks of the overall model:- Gradient clipping: to avoid exploding gradients- Sampling: a technique used to generate charactersYou will then apply these two functions to build the model. 2.1 - Clipping the gradients in the optimization loopIn this section you will implement the `clip` function that you will call inside of your optimization loop. Recall that your overall loop structure usually consists of a forward pass, a cost computation, a backward pass, and a parameter update. Before updating the parameters, you will perform gradient clipping when needed to make sure that your gradients are not "exploding," meaning taking on overly large values. In the exercise below, you will implement a function `clip` that takes in a dictionary of gradients and returns a clipped version of gradients if needed. There are different ways to clip gradients; we will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N]. More generally, you will provide a `maxValue` (say 10). In this example, if any component of the gradient vector is greater than 10, it would be set to 10; and if any component of the gradient vector is less than -10, it would be set to -10. If it is between -10 and 10, it is left alone. **Figure 2**: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into slight "exploding gradient" problems. **Exercise**: Implement the function below to return the clipped gradients of your dictionary `gradients`. Your function takes in a maximum threshold and returns the clipped versions of your gradients. You can check out this [hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.clip.html) for examples of how to clip in numpy. You will need to use the argument `out = ...`.
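For reference, a tiny demonstration of in-place clipping with `np.clip` and the `out=` argument (toy numbers only):

```python
import numpy as np

g = np.array([12.0, -3.0, 0.5, -47.0])
np.clip(g, -10, 10, out=g)   # clip in place, writing the result back into g
print(g)                     # [ 10.  -3.   0.5 -10.]
```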
###Code
### GRADED FUNCTION: clip
def clip(gradients, maxValue):
'''
Clips the gradients' values between minimum and maximum.
Arguments:
gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue
Returns:
gradients -- a dictionary with the clipped gradients.
'''
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']
### START CODE HERE ###
# clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)
for gradient in [dWax, dWaa, dWya, db, dby]:
np.clip(gradient,-maxValue,maxValue,out = gradient)
### END CODE HERE ###
gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}
return gradients
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, 10)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
###Output
gradients["dWaa"][1][2] = 10.0
gradients["dWax"][3][1] = -10.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 10.]
gradients["dby"][1] = [ 8.45833407]
###Markdown
** Expected output:** **gradients["dWaa"][1][2] ** 10.0 **gradients["dWax"][3][1]** -10.0 **gradients["dWya"][1][2]** 0.29713815361 **gradients["db"][4]** [ 10.] **gradients["dby"][1]** [ 8.45833407] 2.2 - SamplingNow assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below: **Figure 3**: In this picture, we assume the model is already trained. We pass in $x^{\langle 1\rangle} = \vec{0}$ at the first time step, and have the network then sample one character at a time. **Exercise**: Implement the `sample` function below to sample characters. You need to carry out 4 steps:- **Step 1**: Pass the network the first "dummy" input $x^{\langle 1 \rangle} = \vec{0}$ (the vector of zeros). This is the default input before we've generated any characters. We also set $a^{\langle 0 \rangle} = \vec{0}$- **Step 2**: Run one step of forward propagation to get $a^{\langle 1 \rangle}$ and $\hat{y}^{\langle 1 \rangle}$. Here are the equations:$$ a^{\langle t+1 \rangle} = \tanh(W_{ax} x^{\langle t \rangle } + W_{aa} a^{\langle t \rangle } + b)\tag{1}$$$$ z^{\langle t + 1 \rangle } = W_{ya} a^{\langle t + 1 \rangle } + b_y \tag{2}$$$$ \hat{y}^{\langle t+1 \rangle } = softmax(z^{\langle t + 1 \rangle })\tag{3}$$Note that $\hat{y}^{\langle t+1 \rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1). $\hat{y}^{\langle t+1 \rangle}_i$ represents the probability that the character indexed by "i" is the next character. We have provided a `softmax()` function that you can use.- **Step 3**: Carry out sampling: Pick the next character's index according to the probability distribution specified by $\hat{y}^{\langle t+1 \rangle }$. This means that if $\hat{y}^{\langle t+1 \rangle }_i = 0.16$, you will pick the index "i" with 16% probability. To implement it, you can use [`np.random.choice`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.choice.html).Here is an example of how to use `np.random.choice()`:```pythonnp.random.seed(0)p = np.array([0.1, 0.0, 0.7, 0.2])index = np.random.choice([0, 1, 2, 3], p = p.ravel())```This means that you will pick the `index` according to the distribution: $P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.- **Step 4**: The last step to implement in `sample()` is to overwrite the variable `x`, which currently stores $x^{\langle t \rangle }$, with the value of $x^{\langle t + 1 \rangle }$. You will represent $x^{\langle t + 1 \rangle }$ by creating a one-hot vector corresponding to the character you've chosen as your prediction. You will then forward propagate $x^{\langle t + 1 \rangle }$ in Step 1 and keep repeating the process until you get a "\n" character, indicating you've reached the end of the dinosaur name.
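For Step 4, building the one-hot input for a sampled index looks roughly like this (toy values; the vocabulary size of 27 matches this assignment):

```python
import numpy as np

vocab_size = 27
idx = 13                        # e.g. a sampled index (the character 'm')
x = np.zeros((vocab_size, 1))   # fresh input vector for the next time-step
x[idx] = 1                      # one-hot encode the sampled character
```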
###Code
# GRADED FUNCTION: sample
def sample(parameters, char_to_ix, seed):
"""
Sample a sequence of characters according to a sequence of probability distributions output of the RNN
Arguments:
parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
char_to_ix -- python dictionary mapping each character to an index.
seed -- used for grading purposes. Do not worry about it.
Returns:
indices -- a list of length n containing the indices of the sampled characters.
"""
# Retrieve parameters and relevant shapes from "parameters" dictionary
Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
vocab_size = by.shape[0]
n_a = Waa.shape[1]
### START CODE HERE ###
# Step 1: Create the one-hot vector x for the first character (initializing the sequence generation). (≈1 line)
x = np.zeros((vocab_size,1))
# Step 1': Initialize a_prev as zeros (≈1 line)
a_prev = np.zeros((n_a,1))
# Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)
indices = []
# Idx is a flag to detect a newline character, we initialize it to -1
idx = -1
# Loop over time-steps t. At each time-step, sample a character from a probability distribution and append
# its index to "indices". We'll stop if we reach 50 characters (which should be very unlikely with a well
# trained model), which helps debugging and prevents entering an infinite loop.
counter = 0
newline_character = char_to_ix['\n']
while (idx != newline_character and counter != 50):
# Step 2: Forward propagate x using the equations (1), (2) and (3)
a = np.tanh(np.dot(Waa,a_prev)+np.dot(Wax,x)+b)
z = np.dot(Wya,a)+by
y = softmax(z)
# for grading purposes
np.random.seed(counter+seed)
# Step 3: Sample the index of a character within the vocabulary from the probability distribution y
idx = np.random.choice(list(range(vocab_size)), p = y.ravel())
# Append the index to "indices"
indices.append(idx)
# Step 4: Overwrite the input character as the one corresponding to the sampled index.
x = np.zeros((vocab_size, 1))
x[idx] = 1
# Update "a_prev" to be "a"
a_prev = a
# for grading purposes
seed += 1
counter +=1
### END CODE HERE ###
if (counter == 50):
indices.append(char_to_ix['\n'])
return indices
np.random.seed(2)
_, n_a = 20, 100
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
indices = sample(parameters, char_to_ix, 0)
print("Sampling:")
print("list of sampled indices:", indices)
print("list of sampled characters:", [ix_to_char[i] for i in indices])
###Output
Sampling:
list of sampled indices: [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 5, 6, 12, 25, 0, 0]
list of sampled characters: ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'e', 'f', 'l', 'y', '\n', '\n']
###Markdown
** Expected output:** **list of sampled indices:** [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 5, 6, 12, 25, 0, 0] **list of sampled characters:** ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'e', 'f', 'l', 'y', '\n', '\n'] 3 - Building the language model It is time to build the character-level language model for text generation. 3.1 - Gradient descent In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. As a reminder, here are the steps of a common optimization loop for an RNN:- Forward propagate through the RNN to compute the loss- Backward propagate through time to compute the gradients of the loss with respect to the parameters- Clip the gradients if necessary - Update your parameters using gradient descent **Exercise**: Implement this optimization process (one step of stochastic gradient descent). We provide you with the following functions: ```pythondef rnn_forward(X, Y, a_prev, parameters): """ Performs the forward propagation through the RNN and computes the cross-entropy loss. It returns the loss' value as well as a "cache" storing values to be used in the backpropagation.""" .... return loss, cache def rnn_backward(X, Y, parameters, cache): """ Performs the backward propagation through time to compute the gradients of the loss with respect to the parameters. It returns also all the hidden states.""" ... return gradients, adef update_parameters(parameters, gradients, learning_rate): """ Updates parameters using the Gradient Descent Update Rule.""" ... return parameters```
###Code
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
"""
Execute one step of the optimization to train the model.
Arguments:
X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
Y -- list of integers, exactly the same as X but shifted one index to the left.
a_prev -- previous hidden state.
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
b -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
learning_rate -- learning rate for the model.
Returns:
loss -- value of the loss function (cross-entropy)
gradients -- python dictionary containing:
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
db -- Gradients of bias vector, of shape (n_a, 1)
dby -- Gradients of output bias vector, of shape (n_y, 1)
a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
"""
### START CODE HERE ###
# Forward propagate through time (≈1 line)
loss, cache = rnn_forward(X,Y,a_prev,parameters)
# Backpropagate through time (≈1 line)
gradients, a = rnn_backward(X,Y,parameters, cache)
# Clip your gradients between -5 (min) and 5 (max) (≈1 line)
gradients = clip(gradients,5)
# Update parameters (≈1 line)
parameters = update_parameters(parameters,gradients,learning_rate)
### END CODE HERE ###
return loss, gradients, a[len(X)-1]
np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])
###Output
Loss = 126.503975722
gradients["dWaa"][1][2] = 0.194709315347
np.argmax(gradients["dWax"]) = 93
gradients["dWya"][1][2] = -0.007773876032
gradients["db"][4] = [-0.06809825]
gradients["dby"][1] = [ 0.01538192]
a_last[4] = [-1.]
###Markdown
** Expected output:** **Loss ** 126.503975722 **gradients["dWaa"][1][2]** 0.194709315347 **np.argmax(gradients["dWax"])** 93 **gradients["dWya"][1][2]** -0.007773876032 **gradients["db"][4]** [-0.06809825] **gradients["dby"][1]** [ 0.01538192] **a_last[4]** [-1.] 3.2 - Training the model Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example. Every 100 steps of stochastic gradient descent, you will sample 10 randomly chosen names to see how the algorithm is doing. Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order. **Exercise**: Follow the instructions and implement `model()`. When `examples[index]` contains one dinosaur name (string), to create an example (X, Y), you can use this:```python index = j % len(examples) X = [None] + [char_to_ix[ch] for ch in examples[index]] Y = X[1:] + [char_to_ix["\n"]]```Note that we use: `index= j % len(examples)`, where `j = 1....num_iterations`, to make sure that `examples[index]` is always a valid statement (`index` is smaller than `len(examples)`).The first entry of `X` being `None` will be interpreted by `rnn_forward()` as setting $x^{\langle 0 \rangle} = \vec{0}$. Further, this ensures that `Y` is equal to `X` but shifted one step to the left, and with an additional "\n" appended to signify the end of the dinosaur name.
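For instance, with a hypothetical list of three names, the modulo index simply cycles through the training examples (toy values, just to illustrate `index = j % len(examples)`):

```python
examples_demo = ['aachenosaurus', 'aardonyx', 'abelisaurus']  # toy subset
for j in range(5):
    index = j % len(examples_demo)
    print(j, index, examples_demo[index])   # indices 0, 1, 2 then wrap back to 0, 1
```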
###Code
# GRADED FUNCTION: model
def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):
"""
Trains the model and generates dinosaur names.
Arguments:
data -- text corpus
ix_to_char -- dictionary that maps the index to a character
char_to_ix -- dictionary that maps a character to an index
num_iterations -- number of iterations to train the model for
n_a -- number of units of the RNN cell
dino_names -- number of dinosaur names you want to sample at each iteration.
vocab_size -- number of unique characters found in the text, size of the vocabulary
Returns:
parameters -- learned parameters
"""
# Retrieve n_x and n_y from vocab_size
n_x, n_y = vocab_size, vocab_size
# Initialize parameters
parameters = initialize_parameters(n_a, n_x, n_y)
# Initialize loss (this is required because we want to smooth our loss, don't worry about it)
loss = get_initial_loss(vocab_size, dino_names)
# Build list of all dinosaur names (training examples).
with open("dinos.txt") as f:
examples = f.readlines()
examples = [x.lower().strip() for x in examples]
# Shuffle list of all dinosaur names
np.random.seed(0)
np.random.shuffle(examples)
# Initialize the hidden state of your LSTM
a_prev = np.zeros((n_a, 1))
# Optimization loop
for j in range(num_iterations):
### START CODE HERE ###
# Use the hint above to define one training example (X,Y) (≈ 2 lines)
index = j % len(examples)
X = [None] + [char_to_ix[ch] for ch in examples[index]]
Y = X[1:] + [char_to_ix['\n']]
# Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
# Choose a learning rate of 0.01
curr_loss, gradients, a_prev = optimize(X,Y,a_prev,parameters)
### END CODE HERE ###
# Use a latency trick to keep the loss smooth. It happens here to accelerate the training.
loss = smooth(loss, curr_loss)
# Every 2000 Iteration, generate "n" characters thanks to sample() to check if the model is learning properly
if j % 2000 == 0:
print('Iteration: %d, Loss: %f' % (j, loss) + '\n')
# The number of dinosaur names to print
seed = 0
for name in range(dino_names):
# Sample indices and print them
sampled_indices = sample(parameters, char_to_ix, seed)
print_sample(sampled_indices, ix_to_char)
seed += 1 # To get the same result for grading purposed, increment the seed by one.
print('\n')
return parameters
###Output
_____no_output_____
###Markdown
Run the following cell; you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.
###Code
parameters = model(data, ix_to_char, char_to_ix)
###Output
Iteration: 0, Loss: 23.087336
Nkzxwtdmfqoeyhsqwasjkjvu
Kneb
Kzxwtdmfqoeyhsqwasjkjvu
Neb
Zxwtdmfqoeyhsqwasjkjvu
Eb
Xwtdmfqoeyhsqwasjkjvu
Iteration: 2000, Loss: 27.884160
Liusskeomnolxeros
Hmdaairus
Hytroligoraurus
Lecalosapaus
Xusicikoraurus
Abalpsamantisaurus
Tpraneronxeros
Iteration: 4000, Loss: 25.901815
Mivrosaurus
Inee
Ivtroplisaurus
Mbaaisaurus
Wusichisaurus
Cabaselachus
Toraperlethosdarenitochusthiamamumamaon
Iteration: 6000, Loss: 24.608779
Onwusceomosaurus
Lieeaerosaurus
Lxussaurus
Oma
Xusteonosaurus
Eeahosaurus
Toreonosaurus
Iteration: 8000, Loss: 24.070350
Onxusichepriuon
Kilabersaurus
Lutrodon
Omaaerosaurus
Xutrcheps
Edaksoje
Trodiktonus
Iteration: 10000, Loss: 23.844446
Onyusaurus
Klecalosaurus
Lustodon
Ola
Xusodonia
Eeaeosaurus
Troceosaurus
Iteration: 12000, Loss: 23.291971
Onyxosaurus
Kica
Lustrepiosaurus
Olaagrraiansaurus
Yuspangosaurus
Eealosaurus
Trognesaurus
Iteration: 14000, Loss: 23.382339
Meutromodromurus
Inda
Iutroinatorsaurus
Maca
Yusteratoptititan
Ca
Troclosaurus
Iteration: 16000, Loss: 23.288447
Meuspsangosaurus
Ingaa
Iusosaurus
Macalosaurus
Yushanis
Daalosaurus
Trpandon
Iteration: 18000, Loss: 22.823526
Phytrolonhonyg
Mela
Mustrerasaurus
Peg
Ytronorosaurus
Ehalosaurus
Trolomeehus
Iteration: 20000, Loss: 23.041871
Nousmofonosaurus
Loma
Lytrognatiasaurus
Ngaa
Ytroenetiaudostarmilus
Eiafosaurus
Troenchulunosaurus
Iteration: 22000, Loss: 22.728849
Piutyrangosaurus
Midaa
Myroranisaurus
Pedadosaurus
Ytrodon
Eiadosaurus
Trodoniomusitocorces
Iteration: 24000, Loss: 22.683403
Meutromeisaurus
Indeceratlapsaurus
Jurosaurus
Ndaa
Yusicheropterus
Eiaeropectus
Trodonasaurus
Iteration: 26000, Loss: 22.554523
Phyusaurus
Liceceron
Lyusichenodylus
Pegahus
Yustenhtonthosaurus
Elagosaurus
Trodontonsaurus
Iteration: 28000, Loss: 22.484472
Onyutimaerihus
Koia
Lytusaurus
Ola
Ytroheltorus
Eiadosaurus
Trofiashates
Iteration: 30000, Loss: 22.774404
Phytys
Lica
Lysus
Pacalosaurus
Ytrochisaurus
Eiacosaurus
Trochesaurus
Iteration: 32000, Loss: 22.209473
Mawusaurus
Jica
Lustoia
Macaisaurus
Yusolenqtesaurus
Eeaeosaurus
Trnanatrax
Iteration: 34000, Loss: 22.396744
Mavptokekus
Ilabaisaurus
Itosaurus
Macaesaurus
Yrosaurus
Eiaeosaurus
Trodon
###Markdown
ConclusionYou can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like `maconucon`, `marloralus` and `macingsersaurus`. Your model hopefully also learned that dinosaur names tend to end in `saurus`, `don`, `aura`, `tor`, etc. If your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, `dromaeosauroides` is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest! This assignment used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the English language requires a much bigger dataset, usually needs much more computation, and could run for many hours on GPUs. We ran our dinosaur name generator for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus! 4 - Writing like ShakespeareThe rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative. A similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of Dinosaur names you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer term dependencies that span many characters in the text--e.g., where a character appearing somewhere in a sequence can influence what should be a different character much, much later in the sequence. These long term dependencies were less important with dinosaur names, since the names were quite short. Let's become poets! We have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes.
###Code
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import io
###Output
Using TensorFlow backend.
###Markdown
To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called [*"The Sonnets"*](shakespeare.txt). Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run `generate_output`, which will prompt you for an input (`<`40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try "Forsooth this maketh no sense " (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well.
###Code
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])
# Run this cell to try with different inputs without having to re-train the model
generate_output()
###Output
_____no_output_____
###Markdown
Character level language model - Dinosaurus landWelcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go beserk, so choose wisely! Luckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this [dataset](dinos.txt). (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath! By completing this assignment you will learn:- How to store text data for processing using an RNN - How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit- How to build a character-level text generation recurrent neural network- Why clipping the gradients is importantWe will begin by loading in some functions that we have provided for you in `rnn_utils`. Specifically, you have access to functions such as `rnn_forward` and `rnn_backward` which are equivalent to those you've implemented in the previous assignment.
###Code
import numpy as np
from utils import *
import random
###Output
_____no_output_____
###Markdown
1 - Problem Statement 1.1 - Dataset and PreprocessingRun the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.
###Code
data = open('dinos.txt', 'r').read()
data= data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
###Output
There are 19909 total characters and 27 unique characters in your data.
###Markdown
The characters are a-z (26 characters) plus the "\n" (or newline character), which in this assignment plays a role similar to the `` (or "End of sentence") token we had discussed in lecture, only here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. We also create a second python dictionary that maps each index back to the corresponding character character. This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. Below, `char_to_ix` and `ix_to_char` are the python dictionaries.
###Code
char_to_ix = { ch:i for i,ch in enumerate(sorted(chars)) }
ix_to_char = { i:ch for i,ch in enumerate(sorted(chars)) }
print(ix_to_char)
###Output
{0: '\n', 1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e', 6: 'f', 7: 'g', 8: 'h', 9: 'i', 10: 'j', 11: 'k', 12: 'l', 13: 'm', 14: 'n', 15: 'o', 16: 'p', 17: 'q', 18: 'r', 19: 's', 20: 't', 21: 'u', 22: 'v', 23: 'w', 24: 'x', 25: 'y', 26: 'z'}
###Markdown
1.2 - Overview of the modelYour model will have the following structure: - Initialize parameters - Run the optimization loop - Forward propagation to compute the loss function - Backward propagation to compute the gradients with respect to the loss function - Clip the gradients to avoid exploding gradients - Using the gradients, update your parameter with the gradient descent update rule.- Return the learned parameters **Figure 1**: Recurrent Neural Network, similar to what you had built in the previous notebook "Building a RNN - Step by Step". At each time-step, the RNN tries to predict what is the next character given the previous characters. The dataset $X = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is a list of characters in the training set, while $Y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$ is such that at every time-step $t$, we have $y^{\langle t \rangle} = x^{\langle t+1 \rangle}$. 2 - Building blocks of the modelIn this part, you will build two important blocks of the overall model:- Gradient clipping: to avoid exploding gradients- Sampling: a technique used to generate charactersYou will then apply these two functions to build the model. 2.1 - Clipping the gradients in the optimization loopIn this section you will implement the `clip` function that you will call inside of your optimization loop. Recall that your overall loop structure usually consists of a forward pass, a cost computation, a backward pass, and a parameter update. Before updating the parameters, you will perform gradient clipping when needed to make sure that your gradients are not "exploding," meaning taking on overly large values. In the exercise below, you will implement a function `clip` that takes in a dictionary of gradients and returns a clipped version of gradients if needed. There are different ways to clip gradients; we will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N]. More generally, you will provide a `maxValue` (say 10). In this example, if any component of the gradient vector is greater than 10, it would be set to 10; and if any component of the gradient vector is less than -10, it would be set to -10. If it is between -10 and 10, it is left alone. **Figure 2**: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into slight "exploding gradient" problems. **Exercise**: Implement the function below to return the clipped gradients of your dictionary `gradients`. Your function takes in a maximum threshold and returns the clipped versions of your gradients. You can check out this [hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.clip.html) for examples of how to clip in numpy. You will need to use the argument `out = ...`.
###Code
### GRADED FUNCTION: clip
def clip(gradients, maxValue):
'''
Clips the gradients' values between minimum and maximum.
Arguments:
gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue
Returns:
gradients -- a dictionary with the clipped gradients.
'''
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']
### START CODE HERE ###
# clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)
for gradient in [dWax, dWaa, dWya, db, dby]:
np.clip(gradient, -maxValue, maxValue, out=gradient)
### END CODE HERE ###
gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}
return gradients
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, 10)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
###Output
gradients["dWaa"][1][2] = 10.0
gradients["dWax"][3][1] = -10.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 10.]
gradients["dby"][1] = [ 8.45833407]
###Markdown
** Expected output:** **gradients["dWaa"][1][2] ** 10.0 **gradients["dWax"][3][1]** -10.0 **gradients["dWya"][1][2]** 0.29713815361 **gradients["db"][4]** [ 10.] **gradients["dby"][1]** [ 8.45833407] 2.2 - SamplingNow assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below: **Figure 3**: In this picture, we assume the model is already trained. We pass in $x^{\langle 1\rangle} = \vec{0}$ at the first time step, and have the network then sample one character at a time. **Exercise**: Implement the `sample` function below to sample characters. You need to carry out 4 steps:- **Step 1**: Pass the network the first "dummy" input $x^{\langle 1 \rangle} = \vec{0}$ (the vector of zeros). This is the default input before we've generated any characters. We also set $a^{\langle 0 \rangle} = \vec{0}$- **Step 2**: Run one step of forward propagation to get $a^{\langle 1 \rangle}$ and $\hat{y}^{\langle 1 \rangle}$. Here are the equations:$$ a^{\langle t+1 \rangle} = \tanh(W_{ax} x^{\langle t \rangle } + W_{aa} a^{\langle t \rangle } + b)\tag{1}$$$$ z^{\langle t + 1 \rangle } = W_{ya} a^{\langle t + 1 \rangle } + b_y \tag{2}$$$$ \hat{y}^{\langle t+1 \rangle } = softmax(z^{\langle t + 1 \rangle })\tag{3}$$Note that $\hat{y}^{\langle t+1 \rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1). $\hat{y}^{\langle t+1 \rangle}_i$ represents the probability that the character indexed by "i" is the next character. We have provided a `softmax()` function that you can use.- **Step 3**: Carry out sampling: Pick the next character's index according to the probability distribution specified by $\hat{y}^{\langle t+1 \rangle }$. This means that if $\hat{y}^{\langle t+1 \rangle }_i = 0.16$, you will pick the index "i" with 16% probability. To implement it, you can use [`np.random.choice`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.choice.html).Here is an example of how to use `np.random.choice()`:```pythonnp.random.seed(0)p = np.array([0.1, 0.0, 0.7, 0.2])index = np.random.choice([0, 1, 2, 3], p = p.ravel())```This means that you will pick the `index` according to the distribution: $P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.- **Step 4**: The last step to implement in `sample()` is to overwrite the variable `x`, which currently stores $x^{\langle t \rangle }$, with the value of $x^{\langle t + 1 \rangle }$. You will represent $x^{\langle t + 1 \rangle }$ by creating a one-hot vector corresponding to the character you've chosen as your prediction. You will then forward propagate $x^{\langle t + 1 \rangle }$ in Step 1 and keep repeating the process until you get a "\n" character, indicating you've reached the end of the dinosaur name.
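Here is a minimal standalone sketch of Steps 3 and 4 (separate from the graded function): sampling an index from a probability vector and building the one-hot input for the next time step. The small `softmax` helper below is only a stand-in for the one provided by the assignment utilities:
```python
import numpy as np

def softmax(z):
    # stand-in for the softmax() utility provided by the assignment
    e = np.exp(z - np.max(z))
    return e / e.sum(axis=0)

vocab_size = 27
np.random.seed(0)
z = np.random.randn(vocab_size, 1)                             # stands in for Wya.a + by
y = softmax(z)                                                 # probabilities, sum to 1
idx = np.random.choice(list(range(vocab_size)), p=y.ravel())   # Step 3: sample an index
x = np.zeros((vocab_size, 1))                                  # Step 4: one-hot vector for x<t+1>
x[idx] = 1
print(idx, x.T)
```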
###Code
# GRADED FUNCTION: sample
def sample(parameters, char_to_ix, seed):
"""
Sample a sequence of characters according to a sequence of probability distributions output of the RNN
Arguments:
parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
char_to_ix -- python dictionary mapping each character to an index.
seed -- used for grading purposes. Do not worry about it.
Returns:
indices -- a list of length n containing the indices of the sampled characters.
"""
# Retrieve parameters and relevant shapes from "parameters" dictionary
Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
vocab_size = by.shape[0]
n_a = Waa.shape[1]
### START CODE HERE ###
# Step 1: Create the one-hot vector x for the first character (initializing the sequence generation). (≈1 line)
x = np.zeros((vocab_size,1))
# Step 1': Initialize a_prev as zeros (≈1 line)
a_prev = np.zeros((n_a,1))
# Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)
indices = []
# Idx is a flag to detect a newline character, we initialize it to -1
idx = -1
# Loop over time-steps t. At each time-step, sample a character from a probability distribution and append
# its index to "indices". We'll stop if we reach 50 characters (which should be very unlikely with a well
# trained model), which helps debugging and prevents entering an infinite loop.
counter = 0
newline_character = char_to_ix['\n']
while (idx != newline_character and counter != 50):
# Step 2: Forward propagate x using the equations (1), (2) and (3)
a = np.tanh((np.dot(Wax,x)+np.dot(Waa,a_prev))+b)
z = np.dot(Wya,a)+by
y = softmax(z)
# for grading purposes
np.random.seed(counter + seed)
# Step 3: Sample the index of a character within the vocabulary from the probability distribution y
idx = np.random.choice(list(range(vocab_size)), p=y.ravel())
# Append the index to "indices"
indices.append(idx)
# Step 4: Overwrite the input character as the one corresponding to the sampled index.
x = np.zeros((vocab_size, 1))
x[idx] = 1
# Update "a_prev" to be "a"
a_prev = a
# for grading purposes
seed += 1
counter +=1
### END CODE HERE ###
if (counter == 50):
indices.append(char_to_ix['\n'])
return indices
np.random.seed(2)
_, n_a = 20, 100
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
indices = sample(parameters, char_to_ix, 0)
print("Sampling:")
print("list of sampled indices:", indices)
print("list of sampled characters:", [ix_to_char[i] for i in indices])
###Output
Sampling:
list of sampled indices: [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 5, 6, 12, 25, 0, 0]
list of sampled characters: ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'e', 'f', 'l', 'y', '\n', '\n']
###Markdown
** Expected output:** **list of sampled indices:** [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 5, 6, 12, 25, 0, 0] **list of sampled characters:** ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'e', 'f', 'l', 'y', '\n', '\n'] 3 - Building the language model It is time to build the character-level language model for text generation. 3.1 - Gradient descent In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. As a reminder, here are the steps of a common optimization loop for an RNN:- Forward propagate through the RNN to compute the loss- Backward propagate through time to compute the gradients of the loss with respect to the parameters- Clip the gradients if necessary - Update your parameters using gradient descent **Exercise**: Implement this optimization process (one step of stochastic gradient descent). We provide you with the following functions: ```pythondef rnn_forward(X, Y, a_prev, parameters): """ Performs the forward propagation through the RNN and computes the cross-entropy loss. It returns the loss' value as well as a "cache" storing values to be used in the backpropagation.""" .... return loss, cache def rnn_backward(X, Y, parameters, cache): """ Performs the backward propagation through time to compute the gradients of the loss with respect to the parameters. It returns also all the hidden states.""" ... return gradients, adef update_parameters(parameters, gradients, learning_rate): """ Updates parameters using the Gradient Descent Update Rule.""" ... return parameters```
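For reference, the gradient descent update rule used by the provided `update_parameters` helper amounts to subtracting the learning rate times each gradient from the corresponding parameter. A minimal sketch (assuming the parameter and gradient dictionary keys used throughout this notebook) might look like:
```python
def update_parameters_sketch(parameters, gradients, learning_rate):
    # parameters keys: "Wax", "Waa", "Wya", "b", "by"; gradients keys: "dWax", "dWaa", ...
    for name in ["Wax", "Waa", "Wya", "b", "by"]:
        parameters[name] -= learning_rate * gradients["d" + name]
    return parameters
```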
###Code
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
"""
Execute one step of the optimization to train the model.
Arguments:
X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
Y -- list of integers, exactly the same as X but shifted one index to the left.
a_prev -- previous hidden state.
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
b -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
learning_rate -- learning rate for the model.
Returns:
loss -- value of the loss function (cross-entropy)
gradients -- python dictionary containing:
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
db -- Gradients of bias vector, of shape (n_a, 1)
dby -- Gradients of output bias vector, of shape (n_y, 1)
a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
"""
### START CODE HERE ###
# Forward propagate through time (≈1 line)
loss, cache = rnn_forward(X, Y, a_prev, parameters)
# Backpropagate through time (≈1 line)
gradients, a = rnn_backward(X, Y, parameters, cache)
# Clip your gradients between -5 (min) and 5 (max) (≈1 line)
gradients = clip(gradients, 5)
# Update parameters (≈1 line)
parameters = update_parameters(parameters, gradients, learning_rate)
### END CODE HERE ###
return loss, gradients, a[len(X)-1]
np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])
###Output
Loss = 126.503975722
gradients["dWaa"][1][2] = 0.194709315347
np.argmax(gradients["dWax"]) = 93
gradients["dWya"][1][2] = -0.007773876032
gradients["db"][4] = [-0.06809825]
gradients["dby"][1] = [ 0.01538192]
a_last[4] = [-1.]
###Markdown
** Expected output:** **Loss ** 126.503975722 **gradients["dWaa"][1][2]** 0.194709315347 **np.argmax(gradients["dWax"])** 93 **gradients["dWya"][1][2]** -0.007773876032 **gradients["db"][4]** [-0.06809825] **gradients["dby"][1]** [ 0.01538192] **a_last[4]** [-1.] 3.2 - Training the model Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example. Every 100 steps of stochastic gradient descent, you will sample 10 randomly chosen names to see how the algorithm is doing. Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order. **Exercise**: Follow the instructions and implement `model()`. When `examples[index]` contains one dinosaur name (string), to create an example (X, Y), you can use this:```python index = j % len(examples) X = [None] + [char_to_ix[ch] for ch in examples[index]] Y = X[1:] + [char_to_ix["\n"]]```Note that we use: `index= j % len(examples)`, where `j = 1....num_iterations`, to make sure that `examples[index]` is always a valid statement (`index` is smaller than `len(examples)`).The first entry of `X` being `None` will be interpreted by `rnn_forward()` as setting $x^{\langle 0 \rangle} = \vec{0}$. Further, this ensures that `Y` is equal to `X` but shifted one step to the left, and with an additional "\n" appended to signify the end of the dinosaur name.
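As a concrete illustration of the (X, Y) construction above (a sketch using a made-up example name, not part of the graded code), for the name "abc" we would get:
```python
# assumes char_to_ix from earlier in this notebook: '\n' -> 0, 'a' -> 1, 'b' -> 2, 'c' -> 3, ...
example = "abc"                                   # made-up training example
X = [None] + [char_to_ix[ch] for ch in example]   # [None, 1, 2, 3]
Y = X[1:] + [char_to_ix["\n"]]                    # [1, 2, 3, 0]
print(X, Y)
```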
###Code
# GRADED FUNCTION: model
def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):
"""
Trains the model and generates dinosaur names.
Arguments:
data -- text corpus
ix_to_char -- dictionary that maps the index to a character
char_to_ix -- dictionary that maps a character to an index
num_iterations -- number of iterations to train the model for
n_a -- number of units of the RNN cell
dino_names -- number of dinosaur names you want to sample at each iteration.
vocab_size -- number of unique characters found in the text, size of the vocabulary
Returns:
parameters -- learned parameters
"""
# Retrieve n_x and n_y from vocab_size
n_x, n_y = vocab_size, vocab_size
# Initialize parameters
parameters = initialize_parameters(n_a, n_x, n_y)
# Initialize loss (this is required because we want to smooth our loss, don't worry about it)
loss = get_initial_loss(vocab_size, dino_names)
# Build list of all dinosaur names (training examples).
with open("dinos.txt") as f:
examples = f.readlines()
examples = [x.lower().strip() for x in examples]
# Shuffle list of all dinosaur names
np.random.seed(0)
np.random.shuffle(examples)
# Initialize the hidden state of your LSTM
a_prev = np.zeros((n_a, 1))
# Optimization loop
for j in range(num_iterations):
### START CODE HERE ###
# Use the hint above to define one training example (X,Y) (≈ 2 lines)
index = j % len(examples)
X = [None] + [char_to_ix[ch] for ch in examples[index]]
Y = X[1:] + [char_to_ix["\n"]]
# Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
# Choose a learning rate of 0.01
curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters)
### END CODE HERE ###
# Use a latency trick to keep the loss smooth. It happens here to accelerate the training.
loss = smooth(loss, curr_loss)
# Every 2000 Iteration, generate "n" characters thanks to sample() to check if the model is learning properly
if j % 2000 == 0:
print('Iteration: %d, Loss: %f' % (j, loss) + '\n')
# The number of dinosaur names to print
seed = 0
for name in range(dino_names):
# Sample indices and print them
sampled_indices = sample(parameters, char_to_ix, seed)
print_sample(sampled_indices, ix_to_char)
seed += 1 # To get the same result for grading purposed, increment the seed by one.
print('\n')
return parameters
###Output
_____no_output_____
###Markdown
Run the following cell. You should observe your model outputting random-looking characters at the first iteration; after a few thousand iterations, it should learn to generate reasonable-looking names.
###Code
parameters = model(data, ix_to_char, char_to_ix)
###Output
Iteration: 0, Loss: 23.087336
Nkzxwtdmfqoeyhsqwasjkjvu
Kneb
Kzxwtdmfqoeyhsqwasjkjvu
Neb
Zxwtdmfqoeyhsqwasjkjvu
Eb
Xwtdmfqoeyhsqwasjkjvu
Iteration: 2000, Loss: 27.884160
Liusskeomnolxeros
Hmdaairus
Hytroligoraurus
Lecalosapaus
Xusicikoraurus
Abalpsamantisaurus
Tpraneronxeros
Iteration: 4000, Loss: 25.901927
Mivrosaurus
Inee
Ivtroplisaurus
Mbaaisaurus
Wusichisaurus
Cabaselachus
Toraperlethosdarenitochusthiamamumamaon
Iteration: 6000, Loss: 24.610543
Onwusceomosaurus
Lieeaerosaurus
Lwtrolonnonx
Oma
Xusteonosaurus
Eeahosaurus
Toreonosaurus
Iteration: 8000, Loss: 24.072900
Onxusichepriuon
Kilabersaurus
Lutrodon
Omaaerosaurus
Xutrcheps
Edaiskol
Trodiktonterltasaurus
Iteration: 10000, Loss: 23.836164
Onyusaurus
Klecalosaurus
Kwtrocherhauiosaurus
Ola
Xusodonaraverlocopansannthyhecelater
Daadosaurus
Troceosaurus
Iteration: 12000, Loss: 23.321827
Onyvrophes
Kidbalosaurus
Lustrie
Ola
Xustephopevesaurus
Edanosaurus
Tosaurus
Iteration: 14000, Loss: 23.342593
Onyuronogopeurus
Incechus
Kwrsonooropurstratoneuraotiurnathotelia
Olaaiosaurus
Yuspeoloneviosaurus
Daacorai
Usocoosaurus
Iteration: 16000, Loss: 23.251887
Leusineraspavblos
Inee
Iusosaurus
Macaesjacaptos
Yusia
Cacdos
Tosaurus
###Markdown
ConclusionYou can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like `maconucon`, `marloralus` and `macingsersaurus`. Your model hopefully also learned that dinosaur names tend to end in `saurus`, `don`, `aura`, `tor`, etc.If your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, `dromaeosauroides` is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest! This assignment used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the English language requires a much bigger dataset, usually needs much more computation, and can run for many hours on GPUs. We ran our dinosaur name generator for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus! 4 - Writing like ShakespeareThe rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative. A similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of dinosaur names, you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer-term dependencies that span many characters in the text--e.g., a character appearing somewhere in a sequence can influence what a different character should be much later in the sequence. These long-term dependencies were less important with dinosaur names, since the names were quite short. Let's become poets! We have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes.
###Code
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import io
###Output
_____no_output_____
###Markdown
To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called [*"The Sonnets"*](shakespeare.txt). Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run `generate_output`, which will prompt you for an input (fewer than 40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try "Forsooth this maketh no sense " (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well.
###Code
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])
# Run this cell to try with different inputs without having to re-train the model
generate_output()
###Output
_____no_output_____ |
Python_Stock/Time_Series_Forecasting/Stock_Forecasting_PyAF_Example.ipynb | ###Markdown
Stock Forecasting using PyAF (Python Automatic Forecasting) https://github.com/antoinecarme/pyaf
###Code
# Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pyaf.ForecastEngine as autof
import yfinance as yf
yf.pdr_override()
stock = 'AMD' # input
start = '2017-01-01' # input
end = '2021-11-08' # input
df = yf.download(stock, start, end)['Adj Close']
df.head()
plt.figure(figsize=(16,8))
plt.plot(df)
plt.title('Stock Price')
plt.ylabel('Price')
plt.show()
df = df.reset_index()
df.tail()
lEngine = autof.cForecastEngine()
# get the best time series model for predicting one week
lEngine.train(iInputDS = df, iTime = 'Date', iSignal = 'Adj Close', iHorizon = 7);
lEngine.getModelInfo()
lEngine.standardPlots()
df_forecast = lEngine.forecast(iInputDS = df, iHorizon = 7)
print(df_forecast.columns)
print(df_forecast['Date'].tail(7).values)
df_forecast
df_forecast.columns
print(df_forecast['Adj Close_Forecast'].tail(5).values)
###Output
[136.33999634 136.33999634 136.33999634 136.33999634 136.33999634]
|
solutions-by-authors/challenge-4/challenge-4.ipynb | ###Markdown
IBM Quantum Challenge Fall 2021 Challenge 4: Battery revenue optimization We recommend that you switch to **light** workspace theme under the Account menu in the upper right corner for optimal experience. Introduction to QAOAWhen it comes to optimization problems, a well-known algorithm for finding approximate solutions to combinatorial-optimization problems is **QAOA (Quantum approximate optimization algorithm)**. You may have already used it once in the finance exercise of Challenge-1, but still don't know what it is. In this challlenge we will further learn about QAOA----how does it work? Why we need it?First off, what is QAOA? Simply put, QAOA is a classical-quantum hybrid algorithm that combines a parametrized quantum circuit known as ansatz, and a classical part to optimize those circuits proposed by Farhi, Goldstone, and Gutmann (2014)[**[1]**](https://arxiv.org/abs/1411.4028). It is a variational algorithm that uses a unitary $U(\boldsymbol{\beta}, \boldsymbol{\gamma})$ characterized by the parameters $(\boldsymbol{\beta}, \boldsymbol{\gamma})$ to prepare a quantum state $|\psi(\boldsymbol{\beta}, \boldsymbol{\gamma})\rangle$. The goal of the algorithm is to find optimal parameters $(\boldsymbol{\beta}_{opt}, \boldsymbol{\gamma}_{opt})$ such that the quantum state $|\psi(\boldsymbol{\beta}_{opt}, \boldsymbol{\gamma}_{opt})\rangle$ encodes the solution to the problem. The unitary $U(\boldsymbol{\beta}, \boldsymbol{\gamma})$ has a specific form and is composed of two unitaries $U(\boldsymbol{\beta}) = e^{-i \boldsymbol{\beta} H_B}$ and $U(\boldsymbol{\gamma}) = e^{-i \boldsymbol{\gamma} H_P}$ where $H_{B}$ is the mixing Hamiltonian and $H_{P}$ is the problem Hamiltonian. Such a choice of unitary drives its inspiration from a related scheme called quantum annealing.The state is prepared by applying these unitaries as alternating blocks of the two unitaries applied $p$ times such that $$\lvert \psi(\boldsymbol{\beta}, \boldsymbol{\gamma}) \rangle = \underbrace{U(\boldsymbol{\beta}) U(\boldsymbol{\gamma}) \cdots U(\boldsymbol{\beta}) U(\boldsymbol{\gamma})}_{p \; \text{times}} \lvert \psi_0 \rangle$$where $|\psi_{0}\rangle$ is a suitable initial state.The QAOA implementation of Qiskit directly extends VQE and inherits VQE’s general hybrid optimization structure.To learn more about QAOA, please refer to the [**QAOA chapter**](https://qiskit.org/textbook/ch-applications/qaoa.html) of Qiskit Textbook. **Goal**Implement the quantum optimization code for the battery revenue problem. **Plan**First, you will learn about QAOA and knapsack problem.**Challenge 4a** - Simple knapsack problem with QAOA: familiarize yourself with a typical knapsack problem and find the optimized solution with QAOA.**Final Challenge 4b** - Battery revenue optimization with Qiskit knapsack class: learn the battery revenue optimization problem and find the optimized solution with QAOA. You can receive a badge for solving all the challenge exercises up to 4b.**Final Challenge 4c** - Battery revenue optimization with your own quantum circuit: implement the battery revenue optimization problem to find the lowest circuit cost and circuit depth. Achieve better accuracy with smaller circuits. 
you can obtain a score with ranking by solving this exercise. Before you begin, we recommend watching the [**Qiskit Optimization Demo Session with Atsushi Matsuo**](https://youtu.be/claoY57eVIc?t=104) and checking out the corresponding [**demo notebook**](https://github.com/qiskit-community/qiskit-application-modules-demo-sessions/tree/main/qiskit-optimization) to learn how to solve optimization problems with Qiskit. As we just mentioned, QAOA is an algorithm which can be used to find approximate solutions to combinatorial optimization problems, a family that includes many specific problems such as the TSP (Traveling Salesman Problem), the vehicle routing problem, the set cover problem, the knapsack problem, scheduling problems, etc. Some of them are hard to solve (in other words, they are NP-hard problems), and it is impractical to find their exact solutions in a reasonable amount of time, which is why we need approximate algorithms. Next, we will introduce an instance of using QAOA to solve one of these combinatorial optimization problems: the **knapsack problem**. Knapsack Problem The [**Knapsack Problem**](https://en.wikipedia.org/wiki/Knapsack_problem) is an optimization problem that goes like this: given a list of items, each with a weight and a value, and a knapsack that can hold a maximum weight, determine which items to put in the knapsack so as to maximize the total value taken without exceeding the maximum weight the knapsack can hold. The fastest approach would be a greedy one, but that is not guaranteed to give the best result. Image source: [Knapsack.svg.](https://commons.wikimedia.org/w/index.php?title=File:Knapsack.svg&oldid=457280382)Note: the knapsack problem has many variations; here we will only discuss the 0-1 knapsack problem, in which each item is either taken or not (0-1 property), which is an NP-hard problem. We cannot divide an item, or take multiple copies of the same item. Challenge 4a: Simple knapsack problem with QAOA **Challenge 4a** You are given a knapsack with a capacity of 18 and 5 pieces of luggage. When the weights of the pieces of luggage $W$ are $w_i = [4,5,6,7,8]$ and the values $V$ are $v_i = [5,6,7,8,9]$, find the packing that maximizes the sum of the values of the luggage within the capacity limit of 18.
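To see why a greedy strategy can fall short on the instance above, here is a small sketch (plain Python, not part of the graded code) that picks items by value-to-weight ratio; for these numbers it packs items 0, 1, 2 for a total value of 18, below the optimum of 21 found below:
```python
val = [5, 6, 7, 8, 9]
wt = [4, 5, 6, 7, 8]
W = 18

# greedy by value/weight ratio: fast, but not guaranteed to be optimal
order = sorted(range(len(val)), key=lambda i: val[i] / wt[i], reverse=True)
total_w, total_v, picks = 0, 0, []
for i in order:
    if total_w + wt[i] <= W:
        picks.append(i)
        total_w += wt[i]
        total_v += val[i]
print(picks, total_v)   # [0, 1, 2] 18
```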
###Code
from qiskit_optimization.algorithms import MinimumEigenOptimizer
from qiskit import Aer
from qiskit.utils import algorithm_globals, QuantumInstance
from qiskit.algorithms import QAOA, NumPyMinimumEigensolver
import numpy as np
###Output
_____no_output_____
###Markdown
Dynamic Programming Approach A typical classical method for finding an exact solution is the Dynamic Programming approach, shown below:
###Code
val = [5,6,7,8,9]
wt = [4,5,6,7,8]
W = 18
def dp(W, wt, val, n):
k = [[0 for x in range(W + 1)] for x in range(n + 1)]
for i in range(n + 1):
for w in range(W + 1):
if i == 0 or w == 0:
k[i][w] = 0
elif wt[i-1] <= w:
k[i][w] = max(val[i-1] + k[i-1][w-wt[i-1]], k[i-1][w])
else:
k[i][w] = k[i-1][w]
picks=[0 for x in range(n)]
volume=W
for i in range(n,-1,-1):
if (k[i][volume]>k[i-1][volume]):
picks[i-1]=1
volume -= wt[i-1]
return k[n][W],picks
n = len(val)
print("optimal value:", dp(W, wt, val, n)[0])
print('\n index of the chosen items:')
for i in range(n):
if dp(W, wt, val, n)[1][i]:
print(i,end=' ')
###Output
optimal value: 21
index of the chosen items:
1 2 3
###Markdown
The time complexity of this method is $O(N*W)$, where $N$ is the number of items and $W$ is the maximum weight of the knapsack. We can solve this instance with an exact approach in a reasonable time since the number of combinations is limited, but when the number of items becomes huge, an exact approach becomes impractical. QAOA approach Qiskit provides application classes for various optimization problems, including the knapsack problem, so that users can easily try various optimization problems on quantum computers. In this exercise, we are going to use the application class for the `Knapsack` problem. There are application classes for other optimization problems available as well. See [**Application Classes for Optimization Problems**](https://qiskit.org/documentation/optimization/tutorials/09_application_classes.html#Knapsack-problem) for details.
###Code
# import packages necessary for application classes.
from qiskit_optimization.applications import Knapsack
###Output
_____no_output_____
###Markdown
To represent the knapsack problem as an optimization problem that can be solved by QAOA, we need to formulate its cost function.
###Code
def knapsack_quadratic_program():
# Put values, weights and max_weight parameter for the Knapsack()
##############################
# Provide your code here
# prob = Knapsack('Insert parameters here')
prob = Knapsack(values = val, weights = wt, max_weight=W)
#
##############################
# to_quadratic_program generates a corresponding QuadraticProgram of the instance of the knapsack problem.
kqp = prob.to_quadratic_program()
return prob, kqp
prob,quadratic_program=knapsack_quadratic_program()
quadratic_program
###Output
_____no_output_____
###Markdown
We can solve the problem using the classical `NumPyMinimumEigensolver` to find the minimum eigenvalue exactly, which is useful as a reference solution without resorting to Dynamic Programming; we can also apply QAOA.
###Code
# Numpy Eigensolver
meo = MinimumEigenOptimizer(min_eigen_solver=NumPyMinimumEigensolver())
result = meo.solve(quadratic_program)
print('result:\n', result)
print('\n index of the chosen items:', prob.interpret(result))
# QAOA
seed = 123
algorithm_globals.random_seed = seed
qins = QuantumInstance(backend=Aer.get_backend('qasm_simulator'), shots=1000, seed_simulator=seed, seed_transpiler=seed)
meo = MinimumEigenOptimizer(min_eigen_solver=QAOA(reps=1, quantum_instance=qins))
result = meo.solve(quadratic_program)
print('result:\n', result)
print('\n index of the chosen items:', prob.interpret(result))
###Output
result:
optimal function value: 21.0
optimal value: [0. 1. 1. 1. 0.]
status: SUCCESS
index of the chosen items: [1, 2, 3]
###Markdown
You will submit the quadratic program created by your `knapsack_quadratic_program` function.
###Code
# Check your answer and submit using the following code
from qc_grader import grade_ex4a
grade_ex4a(quadratic_program)
###Output
Grading your answer for 4a. Please wait...
Congratulations 🎉! Your answer is correct.
###Markdown
Note: QAOA finds the approximate solutions, so the solution by QAOA is not always optimal. Battery Revenue Optimization Problem In this exercise we will use a quantum algorithm to solve a real-world instance of a combinatorial optimization problem: Battery revenue optimization problem. Battery storage systems have provided a solution to flexibly integrate large-scale renewable energy (such as wind and solar) in a power system. The revenues from batteries come from different types of services sold to the grid. The process of energy trading of battery storage assets is as follows: A regulator asks each battery supplier to choose a market in advance for each time window. Then, the batteries operator will charge the battery with renewable energy and release the energy to the grid depending on pre-agreed contracts. The supplier makes therefore forecasts on the return and the number of charge/discharge cycles for each time window to optimize its overall return. How to maximize the revenue of battery-based energy storage is a concern of all battery storage investors. Choose to let the battery always supply power to the market which pays the most for every time window might be a simple guess, but in reality, we have to consider many other factors. What we can not ignore is the aging of batteries, also known as **degradation**. As the battery charge/discharge cycle progresses, the battery capacity will gradually degrade (the amount of energy a battery can store, or the amount of power it can deliver will permanently reduce). After a number of cycles, the battery will reach the end of its usefulness. Since the performance of a battery decreases while it is used, choosing the best cash return for every time window one after the other, without considering the degradation, does not lead to an optimal return over the lifetime of the battery, i.e. before the number of charge/discharge cycles reached.Therefore, in order to optimize the revenue of the battery, what we have to do is to select the market for the battery in each time window taking both **the returns on these markets (value)**, based on price forecast, as well as expected battery **degradation over time (cost)** into account ——It sounds like solving a common optimization problem, right?We will investigate how quantum optimization algorithms could be adapted to tackle this problem.Image source: [pixabay](https://pixabay.com/photos/renewable-energy-environment-wind-1989416/) Problem SettingHere, we have referred to the problem setting in de la Grand'rive and Hullo's paper [**[2]**](https://arxiv.org/abs/1908.02210).Considering two markets $M_{1}$ , $M_{2}$, during every time window (typically a day), the battery operates on one or the other market, for a maximum of $n$ time windows. Every day is considered independent and the intraday optimization is a standalone problem: every morning the battery starts with the same level of power so that we don’t consider charging problems. Forecasts on both markets being available for the $n$ time windows, we assume known for each time window $t$ (day) and for each market:- the daily returns $\lambda_{1}^{t}$ , $\lambda_{2}^{t}$- the daily degradation, or health cost (number of cycles), for the battery $c_{1}^{t}$, $c_{2}^{t}$ We want to find the optimal schedule, i.e. optimize the life time return with a cost less than $C_{max}$ cycles. 
We introduce $d = max_{t}\left\{c_{1}^{t}, c_{2}^{t}\right\} $.We introduce the decision variable $z_{t}, \forall t \in [1, n]$ such that $z_{t} = 0$ if the supplier chooses $M_{1}$ , $z_{t} = 1$ if choose $M_{2}$, with every possible vector $z = [z_{1}, ..., z_{n}]$ being a possible schedule. The previously formulated problem can then be expressed as:\begin{equation}\underset{z \in \left\{0,1\right\}^{n}}{max} \displaystyle\sum_{t=1}^{n}(1-z_{t})\lambda_{1}^{t}+z_{t}\lambda_{2}^{t}\end{equation}\begin{equation} s.t. \sum_{t=1}^{n}[(1-z_{t})c_{1}^{t}+z_{t}c_{2}^{t}]\leq C_{max}\end{equation} This does not look like one of the well-known combinatorial optimization problems, but no worries! we are going to give hints on how to solve this problem with quantum computing step by step. Challenge 4b: Battery revenue optimization with Qiskit knapsack class **Challenge 4b** We will optimize the battery schedule using Qiskit optimization knapsack class with QAOA to maximize the total return with a cost within $C_{max}$ under the following conditions; - the time window $t = 7$- the daily return $\lambda_{1} = [5, 3, 3, 6, 9, 7, 1]$- the daily return $\lambda_{2} = [8, 4, 5, 12, 10, 11, 2]$- the daily degradation for the battery $c_{1} = [1, 1, 2, 1, 1, 1, 2]$- the daily degradation for the battery $c_{2} = [3, 2, 3, 2, 4, 3, 3]$- $C_{max} = 16$ Your task is to find the argument, `values`, `weights`, and `max_weight` used for the Qiskit optimization knapsack class, to get a solution which "0" denote the choice of market $M_{1}$, and "1" denote the choice of market $M_{2}$. We will check your answer with another data set of $\lambda_{1}, \lambda_{2}, c_{1}, c_{2}, C_{max}$. You can receive a badge for solving all the challenge exercises up to 4b.
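One way to see how this maps onto a knapsack instance (and why the solution below passes `L2 - L1`, `C2 - C1`, and `C_max - sum(C1)` to the `Knapsack` class): the total return is `sum(L1)` plus the extra return `L2[t] - L1[t]` for every window where we pick $M_{2}$, and the total cost is `sum(C1)` plus the extra cost `C2[t] - C1[t]` for those same windows. A small numeric sanity check (plain Python, with an arbitrarily chosen schedule) is shown below:
```python
L1 = [5, 3, 3, 6, 9, 7, 1]
L2 = [8, 4, 5, 12, 10, 11, 2]
C1 = [1, 1, 2, 1, 1, 1, 2]
C2 = [3, 2, 3, 2, 4, 3, 3]
z = [1, 0, 1, 0, 0, 1, 0]   # arbitrary schedule: 1 = market M2, 0 = market M1

ret = sum((1 - zt) * l1 + zt * l2 for zt, l1, l2 in zip(z, L1, L2))
cost = sum((1 - zt) * c1 + zt * c2 for zt, c1, c2 in zip(z, C1, C2))

# the same numbers via the knapsack reformulation
ret_knap = sum(L1) + sum(zt * (l2 - l1) for zt, l1, l2 in zip(z, L1, L2))
cost_knap = sum(C1) + sum(zt * (c2 - c1) for zt, c1, c2 in zip(z, C1, C2))
print(ret == ret_knap, cost == cost_knap)   # True True
```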
###Code
L1 = [5,3,3,6,9,7,1]
L2 = [8,4,5,12,10,11,2]
C1 = [1,1,2,1,1,1,2]
C2 = [3,2,3,2,4,3,3]
C_max = 16
def knapsack_argument(L1, L2, C1, C2, C_max):
##############################
# Provide your code here
values = list(map(lambda x, y: x - y, L2, L1))
weights = list(map(lambda x, y: x - y, C2, C1))
max_weight = C_max - sum(C1)
#
##############################
return values, weights, max_weight
values, weights, max_weight = knapsack_argument(L1, L2, C1, C2, C_max)
print(values, weights, max_weight)
prob = Knapsack(values = values, weights = weights, max_weight = max_weight)
qp = prob.to_quadratic_program()
qp
# Check your answer and submit using the following code
from qc_grader import grade_ex4b
grade_ex4b(knapsack_argument)
###Output
Running "knapsack_argument" (1/3)...
Running "knapsack_argument" (2/3)...
Running "knapsack_argument" (3/3)...
Grading your answer for 4b. Please wait...
Congratulations 🎉! Your answer is correct.
###Markdown
We can solve the problem using QAOA.
###Code
# QAOA
seed = 123
algorithm_globals.random_seed = seed
qins = QuantumInstance(backend=Aer.get_backend('qasm_simulator'), shots=1000, seed_simulator=seed, seed_transpiler=seed)
meo = MinimumEigenOptimizer(min_eigen_solver=QAOA(reps=1, quantum_instance=qins))
result = meo.solve(qp)
print('result:', result.x)
item = np.array(result.x)
revenue=0
for i in range(len(item)):
if item[i]==0:
revenue+=L1[i]
else:
revenue+=L2[i]
print('total revenue:', revenue)
###Output
result: [1. 1. 1. 1. 0. 1. 0.]
total revenue: 50
###Markdown
Challenge 4c: Battery revenue optimization with adiabatic quantum computationHere we come to the final exercise! The final challenge is for people to compete in ranking. BackgroundQAOA was developed with inspiration from adiabatic quantum computation. In adiabatic quantum computation, based on the quantum adiabatic theorem, the ground state of a given Hamiltonian can ideally be obtained. Therefore, by mapping the optimization problem to this Hamiltonian, it is possible to solve the optimization problem with adiabatic quantum computation.Although the computational equivalence of adiabatic quantum computation and quantum circuits has been shown, simulating adiabatic quantum computation on quantum circuits involves a large number of gate operations, which is difficult to achieve with current noisy devices. QAOA solves this problem by using a quantum-classical hybrid approach.In this extra challenge, you will be asked to implement a quantum circuit that solves an optimization problem without classical optimization, based on this adiabatic quantum computation framework. In other words, the circuit you build is expected to give a good approximate solution in a single run.Instead of using the Qiskit Optimization Module and Knapsack class, let's try to implement a quantum circuit with as few gate operations as possible, that is, as small as possible. By relaxing the constraints of the optimization problem, it is possible to find the optimum solution with a smaller circuit. We recommend that you follow the solution tips.**Challenge 4c**We will optimize the battery schedule using the adiabatic quantum computation to maximize the total return with a cost within $C_{max}$ under the following conditions;- the time window $t = 11$- the daily return $\lambda_{1} = [3, 7, 3, 4, 2, 6, 2, 2, 4, 6, 6]$- the daily return $\lambda_{2} = [7, 8, 7, 6, 6, 9, 6, 7, 6, 7, 7]$- the daily degradation for the battery $c_{1} = [2, 2, 2, 3, 2, 4, 2, 2, 2, 2, 2]$- the daily degradation for the battery $c_{2} = [4, 3, 3, 4, 4, 5, 3, 4, 4, 3, 4]$- $C_{max} = 33$ - **Note:** $\lambda_{1}[i] < \lambda_{2}[i]$ and $c_{1}[i] < c_{2}[i]$ holds for $i \in \{1,2,...,t\}$ Let "0" denote the choice of market $M_{1}$ and "1" denote the choice of market $M_{2}$, the optimal solutions are "00111111000", and "10110111000" with return value $67$ and cost $33$.Your task is to implement adiabatic quantum computation circuit to meet the accuracy below. We will check your answer with other data set of $\lambda_{1}, \lambda_{2}, c_{1}, c_{2}, C_{max}$. We show examples of inputs for checking below. We will use similar inputs with these examples.
###Code
instance_examples = [
{
'L1': [3, 7, 3, 4, 2, 6, 2, 2, 4, 6, 6],
'L2': [7, 8, 7, 6, 6, 9, 6, 7, 6, 7, 7],
'C1': [2, 2, 2, 3, 2, 4, 2, 2, 2, 2, 2],
'C2': [4, 3, 3, 4, 4, 5, 3, 4, 4, 3, 4],
'C_max': 33
},
{
'L1': [4, 2, 2, 3, 5, 3, 6, 3, 8, 3, 2],
'L2': [6, 5, 8, 5, 6, 6, 9, 7, 9, 5, 8],
'C1': [3, 3, 2, 3, 4, 2, 2, 3, 4, 2, 2],
'C2': [4, 4, 3, 5, 5, 3, 4, 5, 5, 3, 5],
'C_max': 38
},
{
'L1': [5, 4, 3, 3, 3, 7, 6, 4, 3, 5, 3],
'L2': [9, 7, 5, 5, 7, 8, 8, 7, 5, 7, 9],
'C1': [2, 2, 4, 2, 3, 4, 2, 2, 2, 2, 2],
'C2': [3, 4, 5, 4, 4, 5, 3, 3, 5, 3, 5],
'C_max': 35
}
]
###Output
_____no_output_____
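###Markdown
As a quick sanity check of the optimal solutions quoted above (a plain-Python sketch, not part of the quantum solution), both bitstrings for the first instance indeed give a return of 67 at a cost of exactly 33:
```python
L1 = [3, 7, 3, 4, 2, 6, 2, 2, 4, 6, 6]
L2 = [7, 8, 7, 6, 6, 9, 6, 7, 6, 7, 7]
C1 = [2, 2, 2, 3, 2, 4, 2, 2, 2, 2, 2]
C2 = [4, 3, 3, 4, 4, 5, 3, 4, 4, 3, 4]

for bits in ["00111111000", "10110111000"]:
    z = [int(b) for b in bits]
    ret = sum(l2 if zt else l1 for zt, l1, l2 in zip(z, L1, L2))
    cost = sum(c2 if zt else c1 for zt, c1, c2 in zip(z, C1, C2))
    print(bits, ret, cost)   # both print return 67 and cost 33
```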
###Markdown
IMPORTANT: Final exercise submission rulesFor solving this problem:- Do not optimize with classical methods.- Create a quantum circuit by filling source code in the functions along the following steps.- As for the parameters $p$ and $\alpha$, please **do not change the values from $p=5$ and $\alpha=1$.**- Please implement the quantum circuit within 28 qubits.- You should submit a function that takes (L1, L2, C1, C2, C_max) as inputs and returns a QuantumCircuit. (You can change the name of the function in your way.)- Your circuit should be able to solve different input values. We will validate your circuit with several inputs. - Create a circuit that gives precision of 0.8 or better with lower cost. The precision is explained below. The lower the cost, the better.- Please **do not run jobs in succession** even if you are concerned that your job is not running properly. This can create a long queue and clog the backend. You can check whether your job is running properly at:[**https://quantum-computing.ibm.com/jobs**](https://quantum-computing.ibm.com/jobs) - Judges will check top 10 solutions manually to see if their solutions adhere to the rules. **Please note that your ranking is subject to change after the challenge period as a result of the judging process.**- Top 10 participants will be recognized and asked to submit a write up on how they solved the exercise. **Note: In this challenge, please be aware that you should solve the problem with a quantum circuit, otherwise you will not have a rank in the final ranking.** Scoring RuleThe score of submitted function is computed by two steps.1. In the first step, the precision of output of your quantum circuit is checked.To pass this step, your circuit should output a probability distribution whose **average precision is more than 0.80** for eight instances; four of them are fixed data, while the remaining four are randomly selected data from multiple datasets.If your circuit cannot satisfy this threshold **0.8**, you will not obtain a score.We will explain how the precision of a probability distribution will be calculated when the submitted quantum circuit solves one instance. 1. This precision evaluates how the values of measured feasible solutions are close to the value of optimal solutions. 2. Firstly the number of measured feasible solutions is very low, the precision will be 0 (Please check **"The number of feasible solutions"** below). Before calculating precision, the values of solutions will be normalized so that the precision of the solution whose value is the lowest would be always 0 by subtracting the lowest value. Let $N_s$, $N_f$, and $\lambda_{opt}$ be the total shots (the number of execution), the shots of measured feasible solutions, the optimial solution value. Also let $R(x)$ and $C(x)$ be value and cost of a solution $x\in\{0,1\}^n$ respectively. We normalize the values by subtracting the lowest value of instance, which can be calculated by the summation of $\lambda_{1}$. Given a probability distribution, the precision is computed with the following formula: \begin{equation*} \text{precision} = \frac 1 {N_f\cdot (\lambda_{opt}-\mathrm{sum}(\lambda_{1}) )} \sum_{x, \text{$\mathrm{shots}_x$}\in \text{ prob.dist.}} (R(x)-\mathrm{sum}(\lambda_{1})) \cdot \text{$\mathrm{shots}_x$} \cdot 1_{C(x) \leq C_{max}} \end{equation*} Here, $\mathrm{shots}_x$ is the counts of measuring the solution $x$. 
For example, given a probability distribution {"1000101": 26, "1000110": 35, "1000111": 12, "1001000": 16, "1001001": 11} with shots $N_s = 100$, the value and the cost of each solution are listed below. | Solution | Value | Cost | Feasible or not | Shot counts | |:-------:|:-------:|:-------:|:-------:|:--------------:| | 1000101 | 46 | 16 | 1 | 26 | | 1000110 | 48 | 17 | 0 | 35 | | 1000111 | 45 | 15 | 1 | 12 | | 1001000 | 45 | 18 | 0 | 16 | | 1001001 | 42 | 16 | 1 | 11 | Since $C_{max}= 16$, the solutions "1000101", "1000111", and "1001001" are feasible, but the solutions "1000110" and "1001000" are infeasible. So, the shots of measured feasbile solutions $N_f$ is calculated as $N_f = 26+12+11=49$. And the lowest value is $ \mathrm{sum}(\lambda_{1}) = 5+3+3+6+9+7+1=34$. Therefore, the precision becomes $$((46-34) \cdot 26 \cdot 1 + (48-34) \cdot 35 \cdot 0 + (45-34) \cdot 12 \cdot 1 + (45-34) \cdot 16 \cdot 0 + (42-34) \cdot 11 \cdot 1) / (49\cdot (50-34)) = 0.68$$ **The number of feasible solutions**: If $N_f$ is less than 20 ($ N_f < 20$), the precision will be calculated as 0.2. In the second step, the score of your quantum circuit will be evaluated only if your solution passes the first step.The score is the sum of circuit costs of four instances, where the circuit cost is calculated as below. 1. Transpile the quantum circuit without gate optimization and decompose the gates into the basis gates of "rz", "sx", "cx". 2. Then the score is calculated by \begin{equation*} \text{score} = 50 \cdot depth + 10 \cdot \(cx) + \(rz) + \(sx) \end{equation*} where $\(gate)$ denotes the number of $gate$ in the circuit. Your circuit will be executed 512 times, which means $N_s = 512$ here.The smaller the score become, the higher you will be ranked. General ApproachHere we are making the answer according to the way shown in [**[2]**](https://arxiv.org/abs/1908.02210), which is solving the "relaxed" formulation of knapsack problem.The relaxed problem can be defined as follows:\begin{equation*}\text{maximize } f(z)=return(z)+penalty(z)\end{equation*}\begin{equation*}\text{where} \quad return(z)=\sum_{t=1}^{n} return_{t}(z) \quad \text{with} \quad return_{t}(z) \equiv\left(1-z_{t}\right) \lambda_{1}^{t}+z_{t} \lambda_{2}^{t}\end{equation*}\begin{equation*}\quad \quad \quad \quad \quad \quad penalty(z)=\left\{\begin{array}{ll}0 & \text{if}\quad cost(z)<C_{\max } \\-\alpha\left(cost(z)-C_{\max }\right) & \text{if} \quad cost(z) \geq C_{\max }, \alpha>0 \quad \text{constant}\end{array}\right.\end{equation*}A non-Ising target function to compute a linear penalty is used here.This may reduce the depth of the circuit while still achieving high accuracy. The basic unit of relaxed approach consisits of the following items.1. Phase Operator $U(C, \gamma_i)$ 1. return part 2. penalty part 1. Cost calculation (data encoding) 2. Constraint testing (marking the indices whose data exceed $C_{max}$) 3. Penalty dephasing (adding penalty to the marked indices) 4. Reinitialization of constraint testing and cost calculation (clean the data register and flag register)2. Mixing Operator $U(B, \beta_i)$This procedure unit $U(B, \beta_i)U(C, \gamma_i)$ will be totally repeated $p$ times in the whole relaxed QAOA procedure.Let's take a look at each function one by one. 
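Before doing so, here is a small plain-Python sketch that reproduces the precision calculation from the worked example above (values, costs, and shot counts taken directly from that table):
```python
# values, costs, and shot counts from the worked example above (C_max = 16)
values = [46, 48, 45, 45, 42]
costs = [16, 17, 15, 18, 16]
shots = [26, 35, 12, 16, 11]
lambda1_sum, lambda_opt = 34, 50
C_max = 16

feasible = [c <= C_max for c in costs]
N_f = sum(s for s, ok in zip(shots, feasible) if ok)                                 # 49
numerator = sum((v - lambda1_sum) * s for v, s, ok in zip(values, shots, feasible) if ok)
precision = numerator / (N_f * (lambda_opt - lambda1_sum))
print(round(precision, 2))   # 0.68
```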
The quantum circuit we are going to make consists of three types of registers: index register, data register, and flag register.Index register and data register are used for QRAM which contain the cost data for every possible choice of battery.Here these registers appear in the function templates named as follows:- `qr_index`: a quantum register representing the index (the choice of 0 or 1 in each time window)- `qr_data`: a quantum register representing the total cost associated with each index- `qr_f`: a quantum register that store the flag for penalty dephasingWe also use the following variables to represent the number of qubits in each register.- `index_qubits`: the number of qubits in `qr_index`- `data_qubits`: the number of qubits in `qr_data` **Challenge 4c - Step 1** Phase Operator $U(C, \gamma_i)$ Return PartThe return part $return (z)$ can be transformed as follows:\begin{equation*}\begin{aligned}e^{-i \gamma_i . return(z)}\left|z\right\rangle &=\prod_{t=1}^{n} e^{-i \gamma_i return_{t}(z)}\left|z\right\rangle \\&=e^{i \theta} \bigotimes_{t=1}^{n} e^{-i \gamma_i z_{t}\left(\lambda_{2}^{t}-\lambda_{1}^{t}\right)}\left|z_{t}\right\rangle \\\text{with}\quad \theta &=\sum_{t=1}^{n} \lambda_{1}^{t}\quad \text{constant}\end{aligned}\end{equation*}Since we can ignore the constant phase rotation, the return part $return (z)$ can be realized by rotation gate $U_1\left(\gamma_i \left(\lambda_{2}^{t}-\lambda_{1}^{t}\right)\right)= e^{-i \frac{\gamma_i \left(\lambda_{2}^{t}-\lambda_{1}^{t}\right)} 2}$ for each qubit.Fill in the blank in the following cell to complete the `phase_return` function.
###Code
from typing import List, Union
import math
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, assemble
from qiskit.compiler import transpile
from qiskit.circuit import Gate
from qiskit.circuit.library.standard_gates import *
from qiskit.circuit.library import QFT
def phase_return(index_qubits: int, gamma: float, L1: list, L2: list, to_gate=True) -> Union[Gate, QuantumCircuit]:
qr_index = QuantumRegister(index_qubits, "index")
qc = QuantumCircuit(qr_index)
##############################
### U_1(gamma * (lambda2 - lambda1)) for each qubit ###
# Provide your code here
qr_index = QuantumRegister(index_qubits, "index")
qc = QuantumCircuit(qr_index)
for i, (l1, l2) in enumerate(zip(L1, L2)):
qc.p(- gamma * (l2 - l1), qr_index[i])
##############################
return qc.to_gate(label=" phase return ") if to_gate else qc
###Output
_____no_output_____
###Markdown
Phase Operator $U(C, \gamma_i)$ Penalty PartIn this part, we consider how to add a penalty to the quantum states in the index register whose total cost exceeds the constraint $C_{max}$.As shown above, this can be realized by the following four steps.1. Cost calculation (data encoding)2. Constraint testing (marking the indices whose data value exceeds $C_{max}$)3. Penalty dephasing (adding a penalty to the marked indices)4. Reinitialization of constraint testing and cost calculation (cleaning the data register and flag register) **Challenge 4c - Step 2** Cost calculation (data encoding)To represent the sum of costs for every choice of answer, we can use a QRAM structure.In order to implement QRAM as a quantum circuit, an addition function is helpful, so we will first prepare a function for adding a constant value.To add a constant value to data we can use- Series of full adders- Plain adder network [**[3]**](https://arxiv.org/abs/quant-ph/9511018)- Ripple carry adder [**[4]**](https://arxiv.org/abs/quant-ph/0410184)- QFT adder **[[5](https://arxiv.org/abs/quant-ph/0008033), [6](https://arxiv.org/abs/1411.5949)]**- etc...Each adder has its own characteristics. Here, for example, we will briefly explain how to implement the QFT adder, which is less likely to increase the circuit cost when the number of additions increases.1. QFT on the target quantum register2. Local phase rotations on the target quantum register determined by the bits of the constant3. IQFT on the target quantum registerFill in the blank in the following cell to complete the `const_adder` and `subroutine_add_const` functions.
###Code
def subroutine_add_const(data_qubits: int, const: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
qc = QuantumCircuit(data_qubits)
##############################
### Phase Rotation ###
# Provide your code here
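# Draper-style addition: with the register already in the Fourier basis, adding a
# classical constant only requires single-qubit phase rotations. Qubit 0 of the data
# register is treated as the most significant bit here, which matches
# constraint_testing below (it reads the overflow from qr_data[0]).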
const = const % (2 ** data_qubits)
for i in range(data_qubits):
for j in range(data_qubits - i):
if const >> (data_qubits - 1 - (i + j)) & 1:
qc.p(math.pi / (2 ** j), i)
##############################
return qc.to_gate(label=" [+"+str(const)+"] ") if to_gate else qc
def const_adder(data_qubits: int, const: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
qr_data = QuantumRegister(data_qubits, "data")
qc = QuantumCircuit(qr_data)
##############################
appr = 0
qc.append( QFT(data_qubits, approximation_degree=appr, do_swaps=False, inverse=False, name='QFT').to_gate(), qr_data[::-1] )
qc.append(subroutine_add_const(data_qubits, const), qr_data)
qc.append( QFT(data_qubits, approximation_degree=appr, do_swaps=False, inverse=True, name='IQFT').to_gate(), qr_data[::-1] )
##############################
return qc.to_gate(label=" [ +" + str(const) + "] ") if to_gate else qc
###Output
_____no_output_____
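###Markdown
As an optional local sanity check of the adder above (a sketch only, not part of the graded solution), one can simulate it on a small register and inspect which basis state ends up holding the amplitude:
```python
from qiskit import QuantumCircuit, QuantumRegister
from qiskit.quantum_info import Statevector

data_qubits = 4
qr = QuantumRegister(data_qubits, "data")
qc = QuantumCircuit(qr)
qc.append(const_adder(data_qubits, 3), qr)       # add the constant 3 to |0000>
print(Statevector(qc).probabilities_dict())      # essentially all probability on one basis state
```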
###Markdown
**Challenge 4c - Step 3** Here we want to store the cost in a QRAM form: \begin{equation*}\sum_{x\in\{0,1\}^t} \left|x\right\rangle\left|cost(x)\right\rangle\end{equation*}where $t$ is the number of time window (=size of index register), and $x$ is the pattern of battery choice through all the time window.Given two lists $C^1 = \left[c_0^1, c_1^1, \cdots\right]$ and $C^2 = \left[c_0^2, c_1^2, \cdots\right]$, we can encode the "total sum of each choice" of these data using controlled gates by each index qubit.If we want to add $c_i^1$ to the data whose $i$-th index qubit is $0$ and $c_i^2$ to the data whose $i$-th index qubit is $1$, then we can add $C_i^1$ to data register when the $i$-th qubit in index register is $0$,and $C_i^2$ to data register when the $i$-th qubit in index register is $1$.These operation can be realized by controlled gates.If you want to create controlled gate from gate with type `qiskit.circuit.Gate`, the `control()` method might be useful.Fill in the blank in the following cell to complete the `cost_calculation` function.
###Code
def cost_calculation(index_qubits: int, data_qubits: int, list1: list, list2: list, to_gate = True) -> Union[Gate, QuantumCircuit]:
qr_index = QuantumRegister(index_qubits, "index")
qr_data = QuantumRegister(data_qubits, "data")
qc = QuantumCircuit(qr_index, qr_data)
approx = 0
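# Wrap all the controlled additions in a single QFT ... IQFT pair and use
# subroutine_add_const (Fourier-basis phases only) instead of the full const_adder,
# so the QFT/IQFT do not have to be repeated for every item.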
qc.append( QFT(data_qubits, approximation_degree=approx, do_swaps=False, inverse=False, name='QFT').to_gate(), qr_data[::-1] )
for i, (val1, val2) in enumerate(zip(list1, list2)):
##############################
### Add val2 using const_adder controlled by i-th index register (set to 1) ###
# Provide your code here
qc.append(subroutine_add_const(data_qubits, val2).control(1), [qr_index[i]] + qr_data[:])
##############################
qc.x(qr_index[i])
##############################
### Add val1 using const_adder controlled by i-th index register (set to 0) ###
# Provide your code here
qc.append(subroutine_add_const(data_qubits, val1).control(1), [qr_index[i]] + qr_data[:])
##############################
qc.x(qr_index[i])
qc.append( QFT(data_qubits, approximation_degree=approx, do_swaps=False, inverse=True, name='IQFT').to_gate(), qr_data[::-1] )
return qc.to_gate(label=" Cost Calculation ") if to_gate else qc
###Output
_____no_output_____
###Markdown
**Challenge 4c - Step 4** Constraint TestingAfter the cost calculation process, we have gained the entangled QRAM with flag qubits set to zero for all indices:\begin{equation*}\sum_{x\in\{0,1\}^t} \left|x\right\rangle\left|cost(x)\right\rangle\left|0\right\rangle\end{equation*}In order to selectively add penalty to those indices with cost values larger than $C_{max}$, we have to prepare the following state:\begin{equation*}\sum_{x\in\{0,1\}^t} \left|x\right\rangle\left|cost(x)\right\rangle\left|cost(x)\geq C_{max}\right\rangle\end{equation*}Fill in the blank in the following cell to complete the `constraint_testing` function.
###Code
def constraint_testing(data_qubits: int, C_max: int, to_gate = True) -> Union[Gate, QuantumCircuit]:
qr_data = QuantumRegister(data_qubits, "data")
qr_f = QuantumRegister(1, "flag")
qc = QuantumCircuit(qr_data, qr_f)
##############################
### Set the flag register for indices with costs larger than C_max ###
# Provide your code here
    # Shift the cost register so that the most significant data qubit is set
    # exactly when the accumulated cost crosses the C_max threshold.
    value_c = 2 ** (data_qubits - 1) - C_max - 1
    qc.append(const_adder(data_qubits, value_c), qr_data)
    # Copy that most significant qubit into the flag register with a CNOT.
    qc.append(XGate().control(), [qr_data[0], qr_f])
##############################
return qc.to_gate(label=" Constraint Testing ") if to_gate else qc
###Output
_____no_output_____
###Markdown
**Challenge 4c - Step 5** Penalty DephasingWe also have to add a penalty to the indices whose total cost is no smaller than $C_{max}$ in the following way.\begin{equation*}\quad \quad \quad \quad \quad \quad penalty(z)=\left\{\begin{array}{ll}0 & \text{if}\quad cost(z)<C_{\max } \\-\alpha\left(cost(z)-C_{\max }\right) & \text{if} \quad cost(z) \geq C_{\max }, \alpha>0 \quad \text{constant}\end{array}\right.\end{equation*}This penalty can be described as the quantum operator $e^{i \gamma \alpha\left(cost(z)-C_{\max }\right)}$.To realize this unitary operator as a quantum circuit, we focus on the following property.\begin{equation*}\alpha\left(cost(z)-C_{\max}\right)=\sum_{j=0}^{k-1} 2^{j} \alpha A_{1}[j]-2^{c} \alpha\end{equation*}where $A_1$ is the quantum register holding the QRAM data, $A_1[j]$ is the $j$-th qubit of $A_1$, and $k$ and $c$ are appropriate constants.Using this property, the penalty rotation can be realized as phase rotations on each qubit of the QRAM data register, controlled by the flag register.Fill in the blank in the following cell to complete the `penalty_dephasing` function.
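The binary-digit property above can be checked directly on a small statevector (illustrative values only): applying a phase of $2^j\theta$ to the $j$-th qubit of a basis state $\left|v\right\rangle$ accumulates a total phase of $v\theta$, which is why per-qubit rotations implement a cost-proportional phase:

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

theta, m, v = 0.3, 3, 5                 # made-up values: 3 qubits, basis state |101> (v = 5)
qc = QuantumCircuit(m)
for j in range(m):                      # prepare |v>; qubit j holds bit j of v (little-endian)
    if (v >> j) & 1:
        qc.x(j)
for j in range(m):                      # phase 2^j * theta on digit j
    qc.p((2 ** j) * theta, j)
sv = Statevector.from_instruction(qc)
print(np.angle(sv.data[v]), v * theta)  # both print 1.5: total phase equals v * theta
```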
###Code
def penalty_dephasing(data_qubits: int, alpha: float, gamma: float, to_gate = True) -> Union[Gate, QuantumCircuit]:
qr_data = QuantumRegister(data_qubits, "data")
qr_f = QuantumRegister(1, "flag")
qc = QuantumCircuit(qr_data, qr_f)
##############################
### Phase Rotation ###
# Provide your code here
    num_carry = 1  # the top (carry/sign) qubit is handled separately below
    for i in range(data_qubits - num_carry):
        # Rotation of 2^i * alpha * gamma on the i-th binary digit, controlled by the flag qubit.
        qc.append(PhaseGate(2 ** i * alpha * gamma).control(), [qr_f[:], qr_data[data_qubits - 1 - i]])
    # Constant offset term -2^(data_qubits-1) * alpha * gamma applied as a phase on the flag qubit.
    qc.append(PhaseGate(- (2 ** (data_qubits - 1)) * alpha * gamma), qr_f)
##############################
return qc.to_gate(label=" Penalty Dephasing ") if to_gate else qc
###Output
_____no_output_____
###Markdown
**Challenge 4c - Step 6** ReinitializationThe ancillary qubits, namely the data register and the flag register, should be reinitialized to the zero state when the operator $U(C, \gamma_i)$ finishes.If you want to apply the inverse unitary of a `qiskit.circuit.Gate`, the `inverse()` method might be useful.Fill in the blank in the following cell to complete the `reinitialization` function.
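A minimal illustration of the `inverse()` pattern (using the library `QFT` merely as a stand-in for any gate built earlier): appending a gate and then its inverse uncomputes it, which is what the reinitialization step does for the comparison offset and the cost calculation:

```python
from qiskit import QuantumCircuit
from qiskit.circuit.library import QFT

n = 3
g = QFT(n).to_gate()               # any qiskit.circuit.Gate works the same way
qc = QuantumCircuit(n)
qc.append(g, range(n))
qc.append(g.inverse(), range(n))   # undoes g, leaving the register as it started
```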
###Code
def reinitialization(index_qubits: int, data_qubits: int, C1: list, C2: list, C_max: int, to_gate = True) -> Union[Gate, QuantumCircuit]:
qr_index = QuantumRegister(index_qubits, "index")
qr_data = QuantumRegister(data_qubits, "data")
qr_f = QuantumRegister(1, "flag")
qc = QuantumCircuit(qr_index, qr_data, qr_f)
##############################
### Reinitialization Circuit ###
# Provide your code here
value_c = 2 ** (data_qubits - 1) - C_max - 1
qc.append(XGate().control(), [qr_data[0], qr_f])
qc.append(const_adder(data_qubits, value_c).inverse(), qr_data)
qc.append(cost_calculation(index_qubits, data_qubits, C1, C2).inverse(), qr_index[:] + qr_data[:])
##############################
return qc.to_gate(label=" Reinitialization ") if to_gate else qc
###Output
_____no_output_____
###Markdown
**Challenge 4c - Step 7** Mixing Operator $U(B, \beta_i)$Finally, we have to add the mixing operator $U(B,\beta_i)$ after the phase operator $U(C,\gamma_i)$.The mixing operator can be represented as follows.\begin{equation*}U(B, \beta_i)=\exp (-i \beta_i B)=\prod_{j=1}^{n} \exp \left(-i \beta_i \sigma_{j}^{x}\right)\end{equation*}This operator can be realized by an $R_x(2\beta_i)$ gate on each qubit in the index register.Fill in the blank in the following cell to complete the `mixing_operator` function.
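A quick numerical check (illustrative only) of the identity $R_x(2\beta) = e^{-i\beta\sigma_x}$ that justifies using a single `rx(2*beta, ...)` per index qubit:

```python
import numpy as np
from scipy.linalg import expm
from qiskit.circuit.library import RXGate

beta = 0.7                                # arbitrary example angle
sigma_x = np.array([[0, 1], [1, 0]])
lhs = expm(-1j * beta * sigma_x)          # exp(-i * beta * sigma_x)
rhs = RXGate(2 * beta).to_matrix()        # Rx(2 * beta)
print(np.allclose(lhs, rhs))              # True
```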
###Code
def mixing_operator(index_qubits: int, beta: float, to_gate = True) -> Union[Gate, QuantumCircuit]:
qr_index = QuantumRegister(index_qubits, "index")
qc = QuantumCircuit(qr_index)
##############################
### Mixing Operator ###
# Provide your code here
    # The registers are already created above; just apply Rx(2*beta) to every index qubit.
    qc.rx(2 * beta, qr_index)
##############################
return qc.to_gate(label=" Mixing Operator ") if to_gate else qc
###Output
_____no_output_____
###Markdown
**Challenge 4c - Step 8** Finally, using the functions we have created above, we will make the submit function `solver_function` for the whole relaxed QAOA process.Fill in the TODO blanks in the following cell to complete the answer function.- You can copy and paste the functions you have made above.- You may also adjust the number of qubits and their arrangement if needed.
###Code
def solver_function(L1: list, L2: list, C1: list, C2: list, C_max: int) -> QuantumCircuit:
# the number of qubits representing answers
index_qubits = len(L1)
# the maximum possible total cost
max_c = sum([max(l0, l1) for l0, l1 in zip(C1, C2)])
# the number of qubits representing data values can be defined using the maximum possible total cost as follows:
data_qubits = math.ceil(math.log(max_c, 2)) + 1 if not max_c & (max_c - 1) == 0 else math.ceil(math.log(max_c, 2)) + 2
### Phase Operator ###
# return part
def phase_return(index_qubits: int, gamma: float, L1: list, L2: list, to_gate=True) -> Union[Gate, QuantumCircuit]:
### TODO ###
### Paste answer here ###
qr_index = QuantumRegister(index_qubits, "index")
qc = QuantumCircuit(qr_index)
for i, (l1, l2) in enumerate(zip(L1, L2)):
qc.p(- gamma * (l2 - l1), qr_index[i])
return qc.to_gate(label="phase return") if to_gate else qc
def subroutine_add_const(data_qubits: int, const: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
const = const % (2 ** data_qubits)
qc = QuantumCircuit(data_qubits)
for i in range(data_qubits):
for j in range(data_qubits - i):
if const >> (data_qubits - 1 - (i + j)) & 1:
qc.p(math.pi / (2 ** j), i)
return qc.to_gate(label=" [+"+str(const)+"] ") if to_gate else qc
# penalty part
def const_adder(data_qubits: int, const: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
### TODO ###
### Paste your answer here ###
qr_data = QuantumRegister(data_qubits, "data")
qc = QuantumCircuit(qr_data)
appr = 0
qc.append( QFT(data_qubits, approximation_degree=appr, do_swaps=False, inverse=False, name='QFT').to_gate(), qr_data[::-1] )
qc.append(subroutine_add_const(data_qubits, const), qr_data)
qc.append( QFT(data_qubits, approximation_degree=appr, do_swaps=False, inverse=True, name='IQFT').to_gate(), qr_data[::-1] )
return qc.to_gate(label=" [ +" + str(const) + "] ") if to_gate else qc
# penalty part
def cost_calculation(index_qubits: int, data_qubits: int, list1: list, list2: list, to_gate = True) -> Union[Gate, QuantumCircuit]:
### TODO ###
### Paste your answer here ###
qr_index = QuantumRegister(index_qubits, "index")
qr_data = QuantumRegister(data_qubits, "data")
qc = QuantumCircuit(qr_index, qr_data)
approx = 0
qc.append( QFT(data_qubits, approximation_degree=approx, do_swaps=False, inverse=False, name='QFT').to_gate(), qr_data[::-1] )
for i, (val1, val2) in enumerate(zip(list1, list2)):
qc.append(subroutine_add_const(data_qubits, val2).control(1), [qr_index[i]] + qr_data[:])
qc.x(qr_index[i])
qc.append(subroutine_add_const(data_qubits, val1).control(1), [qr_index[i]] + qr_data[:])
qc.x(qr_index[i])
qc.append( QFT(data_qubits, approximation_degree=approx, do_swaps=False, inverse=True, name='IQFT').to_gate(), qr_data[::-1] )
return qc.to_gate(label=" Cost Calculation ") if to_gate else qc
# penalty part
def constraint_testing(data_qubits: int, C_max: int, to_gate = True) -> Union[Gate, QuantumCircuit]:
### TODO ###
### Paste your answer here ###
qr_data = QuantumRegister(data_qubits, "data")
qr_f = QuantumRegister(1, "flag")
qc = QuantumCircuit(qr_data, qr_f)
value_c = 2 ** (data_qubits - 1) - C_max - 1
qc.append(const_adder(data_qubits, value_c), qr_data)
qc.append(XGate().control(), [qr_data[0], qr_f])
return qc.to_gate(label=" Constraint Testing ") if to_gate else qc
# penalty part
def penalty_dephasing(data_qubits: int, alpha: float, gamma: float, to_gate = True) -> Union[Gate, QuantumCircuit]:
### TODO ###
### Paste your answer here ###
qr_data = QuantumRegister(data_qubits, "data")
qr_f = QuantumRegister(1, "flag")
qc = QuantumCircuit(qr_data, qr_f)
num_carry = 1
for i in range(data_qubits - num_carry):
qc.append(PhaseGate(2 ** i * alpha * gamma).control(), [qr_f[:], qr_data[data_qubits - 1 - i]])
qc.append(PhaseGate(- (2 ** (data_qubits - 1)) * alpha * gamma), qr_f)
return qc.to_gate(label=" Penalty Dephasing ") if to_gate else qc
# penalty part
def reinitialization(index_qubits: int, data_qubits: int, C1: list, C2: list, C_max: int, to_gate = True) -> Union[Gate, QuantumCircuit]:
### TODO ###
### Paste your answer here ###
qr_index = QuantumRegister(index_qubits, "index")
qr_data = QuantumRegister(data_qubits, "data")
qr_f = QuantumRegister(1, "flag")
qc = QuantumCircuit(qr_index, qr_data, qr_f)
value_c = 2 ** (data_qubits - 1) - C_max - 1
qc.append(XGate().control(), [qr_data[0], qr_f])
qc.append(const_adder(data_qubits, value_c).inverse(), qr_data)
qc.append(cost_calculation(index_qubits, data_qubits, C1, C2).inverse(), qr_index[:] + qr_data[:])
return qc.to_gate(label=" Reinitialization ") if to_gate else qc
### Mixing Operator ###
def mixing_operator(index_qubits: int, beta: float, to_gate = True) -> Union[Gate, QuantumCircuit]:
### TODO ###
### Paste your answer here ###
qr_index = QuantumRegister(index_qubits, "index")
qc = QuantumCircuit(qr_index)
qc.rx(2 * beta, qr_index)
return qc.to_gate(label=" Mixing Operator ") if to_gate else qc
qr_index = QuantumRegister(index_qubits, "index") # index register
qr_data = QuantumRegister(data_qubits, "data") # data register
qr_f = QuantumRegister(1, "flag") # flag register
cr_index = ClassicalRegister(index_qubits, "c_index") # classical register storing the measurement result of index register
qc = QuantumCircuit(qr_index, qr_data, qr_f, cr_index)
### initialize the index register with uniform superposition state ###
qc.h(qr_index)
### DO NOT CHANGE THE CODE BELOW
p = 5
alpha = 1
for i in range(p):
### set fixed parameters for each round ###
beta = 1 - (i + 1) / p
gamma = (i + 1) / p
### return part ###
qc.append(phase_return(index_qubits, gamma, L1, L2), qr_index)
### step 1: cost calculation ###
qc.append(cost_calculation(index_qubits, data_qubits, C1, C2), qr_index[:] + qr_data[:])
### step 2: Constraint testing ###
qc.append(constraint_testing(data_qubits, C_max), qr_data[:] + qr_f[:])
### step 3: penalty dephasing ###
qc.append(penalty_dephasing(data_qubits, alpha, gamma), qr_data[:] + qr_f[:])
### step 4: reinitialization ###
qc.append(reinitialization(index_qubits, data_qubits, C1, C2, C_max), qr_index[:] + qr_data[:] + qr_f[:])
### mixing operator ###
qc.append(mixing_operator(index_qubits, beta), qr_index)
### measure the index ###
### since the default measurement outcome is shown in big endian, it is necessary to reverse the classical bits in order to unify the endian ###
qc.measure(qr_index, cr_index[::-1])
return qc
###Output
_____no_output_____
###Markdown
The validation function contains four input instances.The output should pass the precision threshold of 0.80 for the inputs before being scored.
###Code
# Execute your circuit with following prepare_ex4c() function.
# The prepare_ex4c() function works like the execute() function with only QuantumCircuit as an argument.
from qc_grader import prepare_ex4c
job = prepare_ex4c(solver_function)
result = job.result()
# Check your answer and submit using the following code
from qc_grader import grade_ex4c
grade_ex4c(job)
###Output
Grading your answer for 4c. Please wait...
Congratulations 🎉! Your answer is correct.
Your score is 1672604.
examples/reference/streams/bokeh/PolyDraw.ipynb | ###Markdown
**Title**: PolyDraw **Description**: A linked streams example demonstrating how to use the PolyDraw stream. **Backends**: Bokeh **Tags**: streams, linked, position, interactive
###Code
import holoviews as hv
from holoviews import opts, streams
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
The ``PolyDraw`` stream adds a bokeh tool to the source plot, which allows drawing, dragging and deleting polygons and making the drawn data available to Python. The tool supports the following actions:**Add patch/multi-line** Double tap to add the first vertex, then use tap to add each subsequent vertex, to finalize the draw action double tap to insert the final vertex or press the ESC key to stop drawing.**Move patch/multi-line** Tap and drag an existing patch/multi-line, the point will be dropped once you let go of the mouse button.**Delete patch/multi-line** Tap a patch/multi-line to select it then press BACKSPACE key while the mouse is within the plot area. Properties* **``drag``** (boolean): Whether to enable dragging of paths and polygons* **``empty_value``**: Value to add to non-coordinate columns when adding new path or polygon* **``num_objects``** (int): Maximum number of paths or polygons to draw before deleting the oldest object* **``show_vertices``** (boolean): Whether to show the vertices of the paths or polygons* **``styles``** (dict): Dictionary of style properties (e.g. line_color, line_width etc.) to apply to each path and polygon. If values are lists the values will cycle over the values) * **``vertex_style``** (dict): Dictionary of style properties (e.g. fill_color, line_width etc.) to apply to vertices if ``show_vertices`` enabled As a simple example we will create simple ``Path`` and ``Polygons`` elements and attach each to a ``PolyDraw`` stream. We will also enable the ``drag`` option on the stream to enable dragging of existing glyphs. Additionally we can enable the ``show_vertices`` option which shows the vertices of the drawn polygons/lines and adds the ability to snap to them. Finally the ``num_objects`` option limits the number of lines/polygons that can be drawn by dropping the first glyph when the limit is exceeded.
###Code
path = hv.Path([[(1, 5), (9, 5)]])
poly = hv.Polygons([[(2, 2), (5, 8), (8, 2)]])
path_stream = streams.PolyDraw(source=path, drag=True, show_vertices=True)
poly_stream = streams.PolyDraw(source=poly, drag=True, num_objects=4,
show_vertices=True, styles={
'fill_color': ['red', 'green', 'blue']
})
(path * poly).opts(
opts.Path(color='red', height=400, line_width=5, width=400),
opts.Polygons(fill_alpha=0.3, active_tools=['poly_draw']))
###Output
_____no_output_____
###Markdown
Whenever the data source is edited the data is synced with Python, both in the notebook and when deployed on the bokeh server. The data is made available as a dictionary of columns:
###Code
path_stream.data
###Output
_____no_output_____
###Markdown
Alternatively we can use the ``element`` property to get an Element containing the returned data:
###Code
path_stream.element * poly_stream.element
###Output
_____no_output_____
###Markdown
**Title**: PolyDraw **Description**: A linked streams example demonstrating how to use the PolyDraw stream. **Backends**: Bokeh **Tags**: streams, linked, position, interactive
###Code
import holoviews as hv
from holoviews import opts, streams
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
The ``PolyDraw`` stream adds a bokeh tool to the source plot, which allows drawing, dragging and deleting polygons and making the drawn data available to Python. The tool supports the following actions:**Add patch/multi-line** Double tap to add the first vertex, then use tap to add each subsequent vertex, to finalize the draw action double tap to insert the final vertex or press the ESC key to stop drawing.**Move patch/multi-line** Tap and drag an existing patch/multi-line, the point will be dropped once you let go of the mouse button.**Delete patch/multi-line** Tap a patch/multi-line to select it then press BACKSPACE key while the mouse is within the plot area. As a simple example we will create simple ``Path`` and ``Polygons`` elements and attach each to a ``PolyDraw`` stream. We will also enable the ``drag`` option on the stream to enable dragging of existing glyphs. Additionally we can enable the ``show_vertices`` option which shows the vertices of the drawn polygons/lines and adds the ability to snap to them. Finally the ``num_objects`` option limits the number of lines/polygons that can be drawn by dropping the first glyph when the limit is exceeded.
###Code
path = hv.Path([[(1, 5), (9, 5)]])
poly = hv.Polygons([[(2, 2), (5, 8), (8, 2)]])
path_stream = streams.PolyDraw(source=path, drag=True, show_vertices=True)
poly_stream = streams.PolyDraw(source=poly, drag=True, num_objects=2, show_vertices=True)
(path * poly).opts(
opts.Path(color='red', height=400, line_width=5, width=400),
opts.Polygons(fill_alpha=0.3, active_tools=['poly_draw']))
###Output
_____no_output_____
###Markdown
Whenever the data source is edited the data is synced with Python, both in the notebook and when deployed on the bokeh server. The data is made available as a dictionary of columns:
###Code
path_stream.data
###Output
_____no_output_____
###Markdown
Alternatively we can use the ``element`` property to get an Element containing the returned data:
###Code
path_stream.element * poly_stream.element
###Output
_____no_output_____
###Markdown
**Title**: PolyDraw **Description**: A linked streams example demonstrating how to use the PolyDraw stream. **Backends**: Bokeh **Tags**: streams, linked, position, interactive
###Code
import numpy as np
import holoviews as hv
from holoviews import streams
from matplotlib.path import Path
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
The ``PolyDraw`` stream adds a bokeh tool to the source plot, which allows drawing, dragging and deleting polygons and making the drawn data available to Python. The tool supports the following actions:**Add patch/multi-line** Double tap to add the first vertex, then use tap to add each subsequent vertex, to finalize the draw action double tap to insert the final vertex or press the ESC key to stop drawing.**Move patch/multi-line** Tap and drag an existing patch/multi-line, the point will be dropped once you let go of the mouse button.**Delete patch/multi-line** Tap a patch/multi-line to select it then press BACKSPACE key while the mouse is within the plot area. As a simple example we will create simple ``Path`` and ``Polygons`` elements and attach each to a ``PolyDraw`` stream. We will also enable the ``drag`` option on the stream to enable dragging of existing glyphs.
###Code
%%opts Path [width=400 height=400] (line_width=5 color='red') Polygons (fill_alpha=0.3)
path = hv.Path([[(1, 5), (9, 5)]])
poly = hv.Polygons([[(2, 2), (5, 8), (8, 2)]])
path_stream = streams.PolyDraw(source=path, drag=True)
poly_stream = streams.PolyDraw(source=poly, drag=True)
path * poly
###Output
_____no_output_____
###Markdown
Whenever the data source is edited the data is synced with Python, both in the notebook and when deployed on the bokeh server. The data is made available as a dictionary of columns:
###Code
path_stream.data
###Output
_____no_output_____
###Markdown
Alternatively we can use the ``element`` property to get an Element containing the returned data:
###Code
path_stream.element * poly_stream.element
###Output
_____no_output_____
###Markdown
**Title**: PolyDraw **Description**: A linked streams example demonstrating how to use the PolyDraw stream. **Dependencies**: Bokeh **Backends**: [Bokeh](./PolyDraw.ipynb)
###Code
import holoviews as hv
from holoviews import opts, streams
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
The ``PolyDraw`` stream adds a bokeh tool to the source plot, which allows drawing, dragging and deleting polygons and making the drawn data available to Python. The tool supports the following actions:**Add patch/multi-line** Double tap to add the first vertex, then use tap to add each subsequent vertex, to finalize the draw action double tap to insert the final vertex or press the ESC key to stop drawing.**Move patch/multi-line** Tap and drag an existing patch/multi-line; the point will be dropped once you let go of the mouse button.**Delete patch/multi-line** Tap a patch/multi-line to select it then press BACKSPACE key while the mouse is within the plot area. Properties* **``drag``** (boolean): Whether to enable dragging of paths and polygons* **``empty_value``**: Value to add to non-coordinate columns when adding new path or polygon* **``num_objects``** (int): Maximum number of paths or polygons to draw before deleting the oldest object* **``show_vertices``** (boolean): Whether to show the vertices of the paths or polygons* **``styles``** (dict): Dictionary of style properties (e.g. line_color, line_width etc.) to apply to each path and polygon. If values are lists the values will cycle over the values) * **``vertex_style``** (dict): Dictionary of style properties (e.g. fill_color, line_width etc.) to apply to vertices if ``show_vertices`` enabled As a simple example we will create simple ``Path`` and ``Polygons`` elements and attach each to a ``PolyDraw`` stream. We will also enable the ``drag`` option on the stream to enable dragging of existing glyphs. Additionally we can enable the ``show_vertices`` option which shows the vertices of the drawn polygons/lines and adds the ability to snap to them. Finally the ``num_objects`` option limits the number of lines/polygons that can be drawn by dropping the first glyph when the limit is exceeded.
###Code
path = hv.Path([[(1, 5), (9, 5)]])
poly = hv.Polygons([[(2, 2), (5, 8), (8, 2)]])
path_stream = streams.PolyDraw(source=path, drag=True, show_vertices=True)
poly_stream = streams.PolyDraw(source=poly, drag=True, num_objects=4,
show_vertices=True, styles={
'fill_color': ['red', 'green', 'blue']
})
(path * poly).opts(
opts.Path(color='red', height=400, line_width=5, width=400),
opts.Polygons(fill_alpha=0.3, active_tools=['poly_draw']))
###Output
_____no_output_____
###Markdown
Whenever the data source is edited the data is synced with Python, both in the notebook and when deployed on the bokeh server. The data is made available as a dictionary of columns:
###Code
path_stream.data
###Output
_____no_output_____
###Markdown
Alternatively we can use the ``element`` property to get an Element containing the returned data:
###Code
path_stream.element * poly_stream.element
###Output
_____no_output_____
###Markdown
**Title**: PolyDraw **Description**: A linked streams example demonstrating how to use the PolyDraw stream. **Backends**: Bokeh **Tags**: streams, linked, position, interactive
###Code
import holoviews as hv
from holoviews import opts, streams
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
The ``PolyDraw`` stream adds a bokeh tool to the source plot, which allows drawing, dragging and deleting polygons and making the drawn data available to Python. The tool supports the following actions:**Add patch/multi-line** Double tap to add the first vertex, then use tap to add each subsequent vertex, to finalize the draw action double tap to insert the final vertex or press the ESC key to stop drawing.**Move patch/multi-line** Tap and drag an existing patch/multi-line; the point will be dropped once you let go of the mouse button.**Delete patch/multi-line** Tap a patch/multi-line to select it then press BACKSPACE key while the mouse is within the plot area. Properties* **``drag``** (boolean): Whether to enable dragging of paths and polygons* **``empty_value``**: Value to add to non-coordinate columns when adding new path or polygon* **``num_objects``** (int): Maximum number of paths or polygons to draw before deleting the oldest object* **``show_vertices``** (boolean): Whether to show the vertices of the paths or polygons* **``styles``** (dict): Dictionary of style properties (e.g. line_color, line_width etc.) to apply to each path and polygon. If values are lists the values will cycle over the values) * **``vertex_style``** (dict): Dictionary of style properties (e.g. fill_color, line_width etc.) to apply to vertices if ``show_vertices`` enabled As a simple example we will create simple ``Path`` and ``Polygons`` elements and attach each to a ``PolyDraw`` stream. We will also enable the ``drag`` option on the stream to enable dragging of existing glyphs. Additionally we can enable the ``show_vertices`` option which shows the vertices of the drawn polygons/lines and adds the ability to snap to them. Finally the ``num_objects`` option limits the number of lines/polygons that can be drawn by dropping the first glyph when the limit is exceeded.
###Code
path = hv.Path([[(1, 5), (9, 5)]])
poly = hv.Polygons([[(2, 2), (5, 8), (8, 2)]])
path_stream = streams.PolyDraw(source=path, drag=True, show_vertices=True)
poly_stream = streams.PolyDraw(source=poly, drag=True, num_objects=4,
show_vertices=True, styles={
'fill_color': ['red', 'green', 'blue']
})
(path * poly).opts(
opts.Path(color='red', height=400, line_width=5, width=400),
opts.Polygons(fill_alpha=0.3, active_tools=['poly_draw']))
###Output
_____no_output_____
###Markdown
Whenever the data source is edited the data is synced with Python, both in the notebook and when deployed on the bokeh server. The data is made available as a dictionary of columns:
###Code
path_stream.data
###Output
_____no_output_____
###Markdown
Alternatively we can use the ``element`` property to get an Element containing the returned data:
###Code
path_stream.element * poly_stream.element
###Output
_____no_output_____
###Markdown
**Title**: PolyDraw **Description**: A linked streams example demonstrating how to use the PolyDraw stream. **Backends**: Bokeh **Tags**: streams, linked, position, interactive
###Code
import holoviews as hv
from holoviews import streams
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
The ``PolyDraw`` stream adds a bokeh tool to the source plot, which allows drawing, dragging and deleting polygons and making the drawn data available to Python. The tool supports the following actions:**Add patch/multi-line** Double tap to add the first vertex, then use tap to add each subsequent vertex, to finalize the draw action double tap to insert the final vertex or press the ESC key to stop drawing.**Move patch/multi-line** Tap and drag an existing patch/multi-line, the point will be dropped once you let go of the mouse button.**Delete patch/multi-line** Tap a patch/multi-line to select it then press BACKSPACE key while the mouse is within the plot area. As a simple example we will create simple ``Path`` and ``Polygons`` elements and attach each to a ``PolyDraw`` stream. We will also enable the ``drag`` option on the stream to enable dragging of existing glyphs. Additionally we can enable the ``show_vertices`` option which shows the vertices of the drawn polygons/lines and adds the ability to snap to them. Finally the ``num_objects`` option limits the number of lines/polygons that can be drawn by dropping the first glyph when the limit is exceeded.
###Code
%%opts Path [width=400 height=400] (line_width=5 color='red') Polygons (fill_alpha=0.3)
path = hv.Path([[(1, 5), (9, 5)]])
poly = hv.Polygons([[(2, 2), (5, 8), (8, 2)]])
path_stream = streams.PolyDraw(source=path, drag=True, show_vertices=True)
poly_stream = streams.PolyDraw(source=poly, drag=True, num_objects=2, show_vertices=True)
path * poly
###Output
_____no_output_____
###Markdown
Whenever the data source is edited the data is synced with Python, both in the notebook and when deployed on the bokeh server. The data is made available as a dictionary of columns:
###Code
path_stream.data
###Output
_____no_output_____
###Markdown
Alternatively we can use the ``element`` property to get an Element containing the returned data:
###Code
path_stream.element * poly_stream.element
###Output
_____no_output_____
###Markdown
**Title**: PolyDraw **Description**: A linked streams example demonstrating how to use the PolyDraw stream. **Backends**: Bokeh **Tags**: streams, linked, position, interactive
###Code
import holoviews as hv
from holoviews import streams
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
The ``PolyDraw`` stream adds a bokeh tool to the source plot, which allows drawing, dragging and deleting polygons and making the drawn data available to Python. The tool supports the following actions:**Add patch/multi-line** Double tap to add the first vertex, then use tap to add each subsequent vertex, to finalize the draw action double tap to insert the final vertex or press the ESC key to stop drawing.**Move patch/multi-line** Tap and drag an existing patch/multi-line, the point will be dropped once you let go of the mouse button.**Delete patch/multi-line** Tap a patch/multi-line to select it then press BACKSPACE key while the mouse is within the plot area. As a simple example we will create simple ``Path`` and ``Polygons`` elements and attach each to a ``PolyDraw`` stream. We will also enable the ``drag`` option on the stream to enable dragging of existing glyphs.
###Code
%%opts Path [width=400 height=400] (line_width=5 color='red') Polygons (fill_alpha=0.3)
path = hv.Path([[(1, 5), (9, 5)]])
poly = hv.Polygons([[(2, 2), (5, 8), (8, 2)]])
path_stream = streams.PolyDraw(source=path, drag=True)
poly_stream = streams.PolyDraw(source=poly, drag=True)
path * poly
###Output
_____no_output_____
###Markdown
Whenever the data source is edited the data is synced with Python, both in the notebook and when deployed on the bokeh server. The data is made available as a dictionary of columns:
###Code
path_stream.data
###Output
_____no_output_____
###Markdown
Alternatively we can use the ``element`` property to get an Element containing the returned data:
###Code
path_stream.element * poly_stream.element
###Output
_____no_output_____ |
labs/laboratorio_02.ipynb | ###Markdown
MAT281 - Laboratorio N°02 Objetivos de la clase* Reforzar los conceptos básicos de numpy. Contenidos* [Problema 01](p1)* [Problema 02](p2)* [Problema 03](p3) Problema 01Una **media móvil simple** (SMA) es el promedio de los últimos $k$ datos anteriores, es decir, sea $a_1$,$a_2$,...,$a_n$ un arreglo $n$-dimensional, entonces la SMA se define por:$$\displaystyle sma(k) =\dfrac{1}{k}(a_{n}+a_{n-1}+...+a_{n-(k-1)}) = \dfrac{1}{k}\sum_{i=0}^{k-1}a_{n-i} $$ Por otro lado podemos definir el SMA con una venta móvil de $n$ si el resultado nos retorna la el promedio ponderado avanzando de la siguiente forma:* $a = [1,2,3,4,5]$, la SMA con una ventana de $n=2$ sería: * sma(2) = [promedio(1,2), promedio(2,3), promedio(3,4), promedio(4,5)] = [1.5, 2.5, 3.5, 4.5]* $a = [1,2,3,4,5]$, la SMA con una ventana de $n=3$ sería: * sma(3) = [promedio(1,2,3), promedio(2,3,4), promedio(3,4,5)] = [2.,3.,4.]Implemente una función llamada `sma` cuyo input sea un arreglo unidimensional $a$ y un entero $n$, y cuyo ouput retorne el valor de la media móvil simple sobre el arreglo de la siguiente forma:* **Ejemplo**: *sma([5,3,8,10,2,1,5,1,0,2], 2)* = $[4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ]$En este caso, se esta calculando el SMA para un arreglo con una ventana de $n=2$.**Hint**: utilice la función `numpy.cumsum`
###Code
# importar librerias
import numpy as np
###Output
_____no_output_____
###Markdown
Definir Función
###Code
def sma(ar:np.array,window:int):
l=np.zeros((1,ar.shape[0]-window+1))
for j in range(ar.shape[0]-window+1):
suma=0
for i in range(j,window+j):
suma=suma+ar[i]
l[0][j]=suma/window
return l
###Output
_____no_output_____
###Markdown
Verificar ejemplos
###Code
# ejemplo 01
a = np.array([1,2,3,4,5])
np.testing.assert_array_equal(
sma(a,2),
np.array([[1.5, 2.5, 3.5, 4.5]])
)
# ejemplo 02
a = np.array([5,3,8,10,2,1,5,1,0,2])
np.testing.assert_array_equal(
sma(a, 2),
np.array([[4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ]])
)
###Output
_____no_output_____
###Markdown
Problema 02La función **strides($a,n,p$)**, corresponde a transformar un arreglo unidimensional $a$ en una matriz de $n$ columnas, en el cual las filas se van construyendo desfasando la posición del arreglo en $p$ pasos hacia adelante.* Para el arreglo unidimensional $a$ = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], la función strides($a,4,2$), corresponde a crear una matriz de $4$ columnas, cuyos desfaces hacia adelante se hacen de dos en dos. El resultado tendría que ser algo así:$$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$$Implemente una función llamada `strides(a,n,p)` cuyo input sea:* $a$: un arreglo unidimensional, * $n$: el número de columnas,* $p$: el número de pasos hacia adelante y retorne la matriz de $n$ columnas, cuyos desfaces hacia adelante se hacen de $p$ en $p$ pasos. * **Ejemplo**: *strides($a$,4,2)* =$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$ Definir Función
###Code
def strides(a:np.array,n:int,p:int):
m=a.shape[0]
matrx=np.zeros((1,n))
mat_ag=np.zeros((1,n))
for i in range(m):
k=0
k=k+(p*i)
for j in range(n):
matrx[i,j]=a[k]
k=k+1
if k == m:
return matrx
matrx=np.r_[matrx,mat_ag]
###Output
_____no_output_____
###Markdown
Verificar ejemplos
###Code
# ejemplo 01
a = np.array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
n=4
p=2
np.testing.assert_array_equal(
strides(a,n,p),
np.array([
[ 1, 2, 3, 4],
[ 3, 4, 5, 6],
[ 5, 6, 7, 8],
[ 7, 8, 9, 10]
])
)
###Output
_____no_output_____
###Markdown
Problema 03Un **cuadrado mágico** es una matriz de tamaño $n \times n$ de números enteros positivos tal que la suma de los números por columnas, filas y diagonales principales sea la misma. Usualmente, los números empleados para rellenar las casillas son consecutivos, de 1 a $n^2$, siendo $n$ el número de columnas y filas del cuadrado mágico.Si los números son consecutivos de 1 a $n^2$, la suma de los números por columnas, filas y diagonales principales es igual a : $$\displaystyle M_{n} = \dfrac{n(n^2+1)}{2}$$Por ejemplo, * $A= \begin{pmatrix} 4& 9 &2 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$,es un cuadrado mágico.* $B= \begin{pmatrix} 4& 2 &9 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$, no es un cuadrado mágico.Implemente una función llamada `es_cudrado_magico` cuyo input sea una matriz cuadrada de tamaño $n$ con números consecutivos de $1$ a $n^2$ y cuyo ouput retorne *True* si es un cuadrado mágico o 'False', en caso contrario* **Ejemplo**: *es_cudrado_magico($A$)* = True, *es_cudrado_magico($B$)* = False**Hint**: Cree una función que valide la mariz es cuadrada y que sus números son consecutivos del 1 a $n^2$. Definir Función
###Code
def es_cuadrado_magico(A:np.array):
n=A.shape[0]
M=(n*(n**2)+n)/2
for i in range(n):
suma=0
for j in range(n):
suma=suma+A[i,j]
if suma!=M:
return False
for j in range(n):
suma=0
for i in range(n):
suma=suma+A[i,j]
if suma!=M:
return False
suma=0
for i in range(n):
suma=suma+A[i,i]
if suma!=M:
return False
suma=0
for i in range(n):
suma=suma+A[n-i-1,i]
if suma!=M:
return False
return True
###Output
_____no_output_____
###Markdown
Verificar ejemplos
###Code
# ejemplo 01
A = np.array([[4,9,2],[3,5,7],[8,1,6]])
assert es_cuadrado_magico(A) == True, "ejemplo 01 incorrecto"
# ejemplo 02
B = np.array([[4,2,9],[3,5,7],[8,1,6]])
assert es_cuadrado_magico(B) == False, "ejemplo 02 incorrecto"
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°02 Objetivos de la clase* Reforzar los conceptos básicos de numpy. Contenidos* [Problema 01](p1)* [Problema 02](p2)* [Problema 03](p3)
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Problema 01Una **media móvil simple** (SMA) es el promedio de los últimos $k$ datos anteriores, es decir, sea $a_1$,$a_2$,...,$a_n$ un arreglo $n$-dimensional, entonces la SMA se define por:$$sma(k) =\dfrac{1}{k}(a_{n}+a_{n-1}+...+a_{n-(k-1)}) = \dfrac{1}{k}\sum_{i=0}^{k-1}a_{n-i} $$ Por otro lado podemos definir el SMA con una venta móvil de $n$ si el resultado nos retorna la el promedio ponderado avanzando de la siguiente forma:* $a = [1,2,3,4,5]$, la SMA con una ventana de $n=2$ sería: * sma(2): [mean(1,2),mean(2,3),mean(3,4)] = [1.5, 2.5, 3.5, 4.5] * sma(3): [mean(1,2,3),mean(2,3,4),mean(3,4,5)] = [2.,3.,4.]Implemente una función llamada `sma` cuyo input sea un arreglo unidimensional $a$ y un entero $n$, y cuyo ouput retorne el valor de la media móvil simple sobre el arreglo de la siguiente forma:* **Ejemplo**: *sma([5,3,8,10,2,1,5,1,0,2], 2)* = $[4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ]$En este caso, se esta calculando el SMA para un arreglo con una ventana de $n=2$.**Hint**: utilice la función `numpy.cumsum`
###Code
def sma(lista, n):
"""
sma(lista,n)
Calcula el promedio de n datos a lo largo de una lista
Parameters
----------
lista: list
Lista a calculador promedios
n : int
Ventana de términos
Returns
-------
output : list
Promedios calculados
Examples
--------
>>> sma([1,2,3,4,5], 2)
sma([1,2,3,4,5], 2)
"""
sol = np.empty(len(lista)+1-n)
aux = np.empty(len(lista)+1)
aux[0] =np.array([0])
aux[1:]=np.cumsum(lista)
for i in range(0, len(lista)+1-n):
sol[i]=(aux[i+n]-aux[i])/n
return list(sol)
sma([1,2,3,4,5], 2)
###Output
_____no_output_____
###Markdown
Problema 02La función **strides($a,n,p$)**, corresponde a transformar un arreglo unidimensional $a$ en una matriz de $n$ columnas, en el cual las filas se van construyendo desfasando la posición del arreglo en $p$ pasos hacia adelante.* Para el arreglo unidimensional $a$ = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], la función strides($a,4,2$), corresponde a crear una matriz de $4$ columnas, cuyos desfaces hacia adelante se hacen de dos en dos. El resultado tendría que ser algo así:$$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$$Implemente una función llamada `strides(a,4,2)` cuyo input sea un arreglo unidimensional y retorne la matriz de $4$ columnas, cuyos desfaces hacia adelante se hacen de dos en dos. * **Ejemplo**: *strides($a$,4,2)* =$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$
###Code
def strides(lista, n, p):
"""
strides(lista,n, p)
Redimensiona una lista en una matriz de n columnas
Parameters
----------
lista: list
Lista a redimensionar
n : int
número de columnas
p: paso de repetición
Returns
-------
output : list
matriz buscada
Examples
--------
>>> strides( [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],4,2)
[[1, 2, 3, 4], [3, 4, 5, 6], [5, 6, 7, 8], [7, 8, 9, 10]]
"""
lista_aux = np.zeros(len(lista)+len(lista)%p)
matrix=np.zeros(((len(lista_aux)-n)//p+1,n)) #(len(lista_aux)-n)//p+1 es el número de filas de la matriz
lista_aux[:len(lista)]=lista
for i in range((len(lista_aux)-n)//p+1):
matrix[i]=lista_aux[i*p:n+i*p]
return matrix
strides( [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],4,4)
###Output
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]]
<class 'numpy.ndarray'>
###Markdown
Problema 03Un **cuadrado mágico** es una matriz de tamaño $n \times n$ de números enteros positivos tal que la suma de los números por columnas, filas y diagonales principales sea la misma. Usualmente, los números empleados para rellenar las casillas son consecutivos, de 1 a $n^2$, siendo $n$ el número de columnas y filas del cuadrado mágico.Si los números son consecutivos de 1 a $n^2$, la suma de los números por columnas, filas y diagonales principales es igual a : $$M_{n} = \dfrac{n(n^2+1)}{2}$$Por ejemplo, * $A= \begin{pmatrix} 4& 9 &2 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$,es un cuadrado mágico.* $B= \begin{pmatrix} 4& 2 &9 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$, no es un cuadrado mágico.Implemente una función llamada `es_cudrado_magico` cuyo input sea una matriz cuadrada de tamaño $n$ con números consecutivos de $1$ a $n^2$ y cuyo ouput retorne *True* si es un cuadrado mágico o 'False', en caso contrario* **Ejemplo**: *es_cudrado_magico($A$)* = True, *es_cudrado_magico($B$)* = False**Hint**: Cree una función que valide la mariz es cuadrada y que sus números son consecutivos del 1 a $n^2$.
###Code
def es_cuadrado_magico(matrix):
"""
es_cuadrado_magico(matrix)
Determina si una matriz es o no un cuadrado mágico.
Parameters
----------
matrix: list
Matriz a corroborar
Returns
-------
output : boolean
Valor de verdad sobre si matrix es cuadrado mágico
Examples
--------
>>> es_cuadrado_magico([[4,9,2],[3,5,7],[8,1,6]])
True
>>> es_cuadrado_magico([[4,2,9],[3,5,7],[8,1,6]])
False
"""
matrix = np.array(matrix)
n = len(matrix)
#Validamos que la matriz sea cuadrada
if np.shape(matrix) != (n,n):
return False
#Parte que valida si son numeros consecutivos del 1 al n^2
aux = np.arange(1,len(matrix)**2+1)
uni_matrix = matrix.ravel()
for i in uni_matrix:
if i not in aux:
return False
#Parte que valida si es cuadrado mágico
    M = n*(n**2+1)//2
    if not (np.all(matrix.sum(axis=1) == M) and np.all(matrix.sum(axis=0) == M)): #todas las filas y columnas deben sumar M
        return False
if np.trace(matrix) == np.trace(np.fliplr(matrix)) == n*(n**2+1)//2:
return True
return False
es_cuadrado_magico([[4,9,2],[3,5,7],[8,1,6]])
es_cuadrado_magico([[4,2,9],[3,5,7],[8,1,6]])
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°02 Objetivos de la clase* Reforzar los conceptos básicos de numpy. Contenidos* [Problema 01](p1)* [Problema 02](p2)* [Problema 03](p3) Problema 01Una **media móvil simple** (SMA) es el promedio de los últimos $k$ datos anteriores, es decir, sea $a_1$,$a_2$,...,$a_n$ un arreglo $n$-dimensional, entonces la SMA se define por:$$sma(k) =\dfrac{1}{k}(a_{n}+a_{n-1}+...+a_{n-(k-1)}) = \dfrac{1}{k}\sum_{i=0}^{k-1}a_{n-i} $$ Por otro lado podemos definir el SMA con una venta móvil de $n$ si el resultado nos retorna la el promedio ponderado avanzando de la siguiente forma:* $a = [1,2,3,4,5]$, la SMA con una ventana de $n=2$ sería: * sma(2): [mean(1,2),mean(2,3),mean(3,4)] = [1.5, 2.5, 3.5, 4.5] * sma(3): [mean(1,2,3),mean(2,3,4),mean(3,4,5)] = [2.,3.,4.]Implemente una función llamada `sma` cuyo input sea un arreglo unidimensional $a$ y un entero $n$, y cuyo ouput retorne el valor de la media móvil simple sobre el arreglo de la siguiente forma:* **Ejemplo**: *sma([5,3,8,10,2,1,5,1,0,2], 2)* = $[4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ]$En este caso, se esta calculando el SMA para un arreglo con una ventana de $n=2$.**Hint**: utilice la función `numpy.cumsum`
###Code
# importar librerias
import numpy as np
def sma(a:np.ndarray,n:int):
"""
sma(arreglo,n)
    Calcula la media móvil simple (SMA) de un arreglo con una ventana de tamaño n
Parameters
----------
n : int
Ventana para calcular la media.
a : np.ndarray
Arreglo al que se le calculara la media movil.
Returns
-------
output : np.ndarray
Valor de la media movil con una ventana n.
Examples
--------
>>> sma([1,2,3,4,5],2)
[1.5, 2.5, 3.5, 4.5]
>>> sma([5,3,8,10,2,1,5,1,0,2],2)
[4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ]
"""
l = len(a)
if l <= n: #Primero analisamos el caso de que el arreglo tenga menos elemntos que la ventana
a_1 = np.zeros(1) #Definimos un arreglo de un elemento
v = np.cumsum(a) #Sumamos los elementos del arreglo
a_1[0] = v[l-1] #Calculamos su media movil
return a_1
else:
v = np.zeros((l-n+1))
a_1 = np.zeros(n)
contador = 0 #Iniciamos un contador
while contador != l-n+1: #Mientras no se realicen todas las operaciones continua el while
for i in range(0,n):
a_1[i] = a[contador + i]
v[contador] = (np.cumsum(a_1)[n-1]/n) #Calculamos la media movil y la agregamos a v
contador += 1
return v #Retornamos el arreglo con las medias
###Output
_____no_output_____
###Markdown
Verificar ejemplos:
###Code
# ejemplo 01
a = [1,2,3,4,5]
np.testing.assert_array_equal(
sma(a, 2),
np.array([1.5, 2.5, 3.5, 4.5])
)
# ejemplo 02
a = [5,3,8,10,2,1,5,1,0,2]
np.testing.assert_array_equal(
sma(a, 2),
np.array([4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ])
)
###Output
_____no_output_____
###Markdown
Problema 02La función **strides($a,n,p$)**, corresponde a transformar un arreglo unidimensional $a$ en una matriz de $n$ columnas, en el cual las filas se van construyendo desfasando la posición del arreglo en $p$ pasos hacia adelante.* Para el arreglo unidimensional $a$ = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], la función strides($a,4,2$), corresponde a crear una matriz de $4$ columnas, cuyos desfaces hacia adelante se hacen de dos en dos. El resultado tendría que ser algo así:$$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$$Implemente una función llamada `strides(a,4,2)` cuyo input sea un arreglo unidimensional y retorne la matriz de $4$ columnas, cuyos desfaces hacia adelante se hacen de dos en dos. * **Ejemplo**: *strides($a$,4,2)* =$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$
###Code
# importar librerias
import numpy as np
def strides(a:np.ndarray,columnas:int,saltos:int):
"""
stride(arreglo,columnas,saltos)
Crea una matriz a partir de un arreglo
Parameters
----------
columnas : int
Cantidad de columnas de la matriz por construir.
a : np.ndarray
Arreglo al que se le creara la matriz.
saltos : int
Saltos que tendra cada fila respecto al arreglo
Returns
-------
output : np.ndarray
Matriz construida a partir del arreglo con una cantidad de columnas dada.
Examples
--------
>>> strides(np.array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]),4,2)
array([[ 1., 2., 3., 4.],
[ 3., 4., 5., 6.],
[ 5., 6., 7., 8.],
[ 7., 8., 9., 10.]])
"""
if len(a) <= columnas: #Primero verificamos el caso de que la cantidad de columnas sea mayor al largo del arreglo
        matriz = np.zeros((1,columnas)) #Creamos una matriz de una fila con ceros
        for i in range(0,len(a)):
            matriz[0,i] = a[i] #Agregamos todos los elementos del arreglo a la primera fila; el resto queda en 0
        return matriz #Retornamos la matriz
for i in range(0,len(a)-1,saltos):
if i == 0:
matriz = np.zeros((1,columnas)) #Para la primera iteracion creamos una matriz con 0
for k in range(0,columnas):
matriz[0,k]=a[k] #Agregamos los primeros elementos a la primera fila de la matriz
elif i + columnas <= len(a): #Verificamos que se puedan seguir agregando elementos
a_1 = np.zeros((1,columnas)) #Creamos un arreglo que luego se agregara a la matriz inicial
for k in range(0,columnas):
a_1[0,k] = a[i+k] #Agregamos los elementos al arreglo
a_1.shape
matriz = np.r_[matriz,a_1] #Agregamos el arreglo a la matriz
return matriz #Retornamos la matriz con los elementos del arreglo
###Output
_____no_output_____
###Markdown
Verificar ejemplos:
###Code
# ejemplo 01
a = np.array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
n=4
p=2
np.testing.assert_array_equal(
strides(a,n,p),
np.array([
[ 1, 2, 3, 4],
[ 3, 4, 5, 6],
[ 5, 6, 7, 8],
[ 7, 8, 9, 10]])
)
###Output
_____no_output_____
###Markdown
Problema 03Un **cuadrado mágico** es una matriz de tamaño $n \times n$ de números enteros positivos tal que la suma de los números por columnas, filas y diagonales principales sea la misma. Usualmente, los números empleados para rellenar las casillas son consecutivos, de 1 a $n^2$, siendo $n$ el número de columnas y filas del cuadrado mágico.Si los números son consecutivos de 1 a $n^2$, la suma de los números por columnas, filas y diagonales principales es igual a : $$M_{n} = \dfrac{n(n^2+1)}{2}$$Por ejemplo, * $A= \begin{pmatrix} 4& 9 &2 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$,es un cuadrado mágico.* $B= \begin{pmatrix} 4& 2 &9 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$, no es un cuadrado mágico.Implemente una función llamada `es_cudrado_magico` cuyo input sea una matriz cuadrada de tamaño $n$ con números consecutivos de $1$ a $n^2$ y cuyo ouput retorne *True* si es un cuadrado mágico o 'False', en caso contrario* **Ejemplo**: *es_cudrado_magico($A$)* = True, *es_cudrado_magico($B$)* = False**Hint**: Cree una función que valide la mariz es cuadrada y que sus números son consecutivos del 1 a $n^2$.
###Code
# importar librerias
import numpy as np
def es_cuadrado_magico(A:np.ndarray):
"""
es_cuadrado_magico(arreglo)
Determina si la matriz ingresada es un cuadrado magico o no
Parameters
----------
a : np.ndarray
Matriz a determinar si es cuadrado magico o no.
Returns
-------
output : bolean
Valor de verdad para determinar si es un cuadrado magico o no.
Examples
--------
>>> es_cuadrado_magico(np.array([[4,9,2],[3,5,7],[8,1,6]]))
True
>>> es_cuadrado_magico(np.array([[4,2,9],[3,5,7],[8,1,6]]))
False
"""
size = A.shape
if size[0] == size[1]: #Verificamos que la matriz sea cuadrada
sum = 0 #Iniciamos una suma
Valor = True #Suponemos que si es cuadrado magico
for i in range(0,size[0]-1):
a = A[i]
if i == 0:
sum = np.cumsum(a)[size[0]-1] #Sumamos la primera fila de la matriz
if sum != np.cumsum(a)[size[0]-1]: #Verificamos que el resto de filas sume lo mismo que la primera
return False
for j in range(0,size[1]-1): #Verificamos que las columnas sumen lo mismo que la primera fila
a = A[:,j]
if sum != np.cumsum(a)[size[0]-1]: #En caso contrario retornar False
return False
if np.trace(A) != sum: #Verificamos que la diagonal sume lo mismo que la primera fila
return False
return True #Retornamos True en caso de ser cuadrado magico
else: #Retornamos False en caso de que no sea cuadrada
return False
###Output
_____no_output_____
###Markdown
Verificar ejemplos:
###Code
# ejemplo 01
A = np.array([[4,9,2],[3,5,7],[8,1,6]])
assert es_cuadrado_magico(A) == True, "ejemplo 01 incorrecto"
# ejemplo 02
B = np.array([[4,2,9],[3,5,7],[8,1,6]])
assert es_cuadrado_magico(B) == False, "ejemplo 02 incorrecto"
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°02 Objetivos de la clase* Reforzar los conceptos básicos de numpy. Contenidos* [Problema 01](p1)* [Problema 02](p2)* [Problema 03](p3) Problema 01Una **media móvil simple** (SMA) es el promedio de los últimos $k$ datos anteriores, es decir, sea $a_1$,$a_2$,...,$a_n$ un arreglo $n$-dimensional, entonces la SMA se define por:$$sma(k) =\dfrac{1}{k}(a_{n}+a_{n-1}+...+a_{n-(k-1)}) = \dfrac{1}{k}\sum_{i=0}^{k-1}a_{n-i} $$ Por otro lado podemos definir el SMA con una venta móvil de $n$ si el resultado nos retorna la el promedio ponderado avanzando de la siguiente forma:* $a = [1,2,3,4,5]$, la SMA con una ventana de $n=2$ sería: * sma(2): [mean(1,2),mean(2,3),mean(3,4)] = [1.5, 2.5, 3.5, 4.5] * sma(3): [mean(1,2,3),mean(2,3,4),mean(3,4,5)] = [2.,3.,4.]Implemente una función llamada `sma` cuyo input sea un arreglo unidimensional $a$ y un entero $n$, y cuyo ouput retorne el valor de la media móvil simple sobre el arreglo de la siguiente forma:* **Ejemplo**: *sma([5,3,8,10,2,1,5,1,0,2], 2)* = $[4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ]$En este caso, se esta calculando el SMA para un arreglo con una ventana de $n=2$.**Hint**: utilice la función `numpy.cumsum`
###Code
import numpy as np
def sma(array:np.array,n):
lista = np.zeros(len(array)-n+1)
for i in range(len(lista)):
lista[i]= np.mean(array[i:i+n])
return lista
sma([5,3,8,10,2,1,5,1,0,2], 2)
###Output
_____no_output_____
###Markdown
Problema 02La función **strides($a,n,p$)**, corresponde a transformar un arreglo unidimensional $a$ en una matriz de $n$ columnas, en el cual las filas se van construyendo desfasando la posición del arreglo en $p$ pasos hacia adelante.* Para el arreglo unidimensional $a$ = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], la función strides($a,4,2$), corresponde a crear una matriz de $4$ columnas, cuyos desfaces hacia adelante se hacen de dos en dos. El resultado tendría que ser algo así:$$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$$Implemente una función llamada `strides(a,4,2)` cuyo input sea un arreglo unidimensional y retorne la matriz de $4$ columnas, cuyos desfaces hacia adelante se hacen de dos en dos. * **Ejemplo**: *strides($a$,4,2)* =$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$
###Code
import numpy as np
def strides(a:np.array,n,p):
mat = np.array([a[0:n]])
for i in range(p,n*p,p):
mat = np.vstack( (mat, np.array(a[i:i+n])) )
return mat
a = np.array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
A = strides(a,4,2)
print(A)
###Output
[[ 1 2 3 4]
[ 3 4 5 6]
[ 5 6 7 8]
[ 7 8 9 10]]
###Markdown
Problema 03Un **cuadrado mágico** es una matriz de tamaño $n \times n$ de números enteros positivos tal que la suma de los números por columnas, filas y diagonales principales sea la misma. Usualmente, los números empleados para rellenar las casillas son consecutivos, de 1 a $n^2$, siendo $n$ el número de columnas y filas del cuadrado mágico.Si los números son consecutivos de 1 a $n^2$, la suma de los números por columnas, filas y diagonales principales es igual a : $$M_{n} = \dfrac{n(n^2+1)}{2}$$Por ejemplo, * $A= \begin{pmatrix} 4& 9 &2 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$,es un cuadrado mágico.* $B= \begin{pmatrix} 4& 2 &9 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$, no es un cuadrado mágico.Implemente una función llamada `es_cudrado_magico` cuyo input sea una matriz cuadrada de tamaño $n$ con números consecutivos de $1$ a $n^2$ y cuyo ouput retorne *True* si es un cuadrado mágico o 'False', en caso contrario* **Ejemplo**: *es_cudrado_magico($A$)* = True, *es_cudrado_magico($B$)* = False**Hint**: Cree una función que valide la mariz es cuadrada y que sus números son consecutivos del 1 a $n^2$.
###Code
# suma por columna, fila y diagonales sean iguales
import numpy as np
def es_consecutiva(A:np.array):
n = A.shape[0]
k=0
mat_consecutiva = np.zeros((n,n))
# se verifica que la matriz es cuadrada
if A.shape[0] != A.shape[1]:
print("no es cuadrada")
return False
# se genera la matriz consecutiva para comparación (poco práctico)
for i in range(0,n**2,n):
mat_consecutiva[k] = np.arange(i+1,i+n+1)
k+=1
for i in A == mat_consecutiva:
for k in i:
if not i.all():
return False
return True
def es_cuadrado_magico(A:np.array):
dim = A.shape[0]
suma_cols = np.zeros(dim)
suma_fila = np.zeros(dim)
suma_diag_principal = 0
suma_diag_secundaria = 0
#calculo de todas las sumas pertinentes
for i in range(0,dim):
suma_cols[i] = sum(A[i])
suma_fila[i] = sum(A.transpose()[i])
suma_diag_principal+=A[i,i]
suma_diag_secundaria+=A[dim-i-1,i]
#comparación de las sumas
if suma_diag_principal != suma_diag_secundaria :
print( " las diagonales no cumplen la suma")
return False
for i in range(0,dim):
if (suma_diag_principal != suma_cols)[i]:
print(" columna" ,i, "no es igual a la diagonal principal")
return False
if (suma_diag_principal != suma_fila)[i]:
print(" fila" ,i, "no es igual a la diagonal principal")
return False
return True
A = np.array([ [4,9,2],
[3,5,7],
[8,1,6]
])
es_cuadrado_magico(A)
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°02 Objetivos de la clase* Reforzar los conceptos básicos de numpy. Contenidos* [Problema 01](p1)* [Problema 02](p2)* [Problema 03](p3) Problema 01Una **media móvil simple** (SMA) es el promedio de los últimos $k$ datos anteriores, es decir, sea $a_1$,$a_2$,...,$a_n$ un arreglo $n$-dimensional, entonces la SMA se define por:$$sma(k) =\dfrac{1}{k}(a_{n}+a_{n-1}+...+a_{n-(k-1)}) = \dfrac{1}{k}\sum_{i=0}^{k-1}a_{n-i} $$ Por otro lado podemos definir el SMA con una venta móvil de $n$ si el resultado nos retorna la el promedio ponderado avanzando de la siguiente forma:* $a = [1,2,3,4,5]$, la SMA con una ventana de $n=2$ sería: * sma(2): [mean(1,2),mean(2,3),mean(3,4)] = [1.5, 2.5, 3.5, 4.5] * sma(3): [mean(1,2,3),mean(2,3,4),mean(3,4,5)] = [2.,3.,4.]Implemente una función llamada `sma` cuyo input sea un arreglo unidimensional $a$ y un entero $n$, y cuyo ouput retorne el valor de la media móvil simple sobre el arreglo de la siguiente forma:* **Ejemplo**: *sma([5,3,8,10,2,1,5,1,0,2], 2)* = $[4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ]$En este caso, se esta calculando el SMA para un arreglo con una ventana de $n=2$.**Hint**: utilice la función `numpy.cumsum`
###Code
import numpy as np
import time
import sys
def sma(a:list, n:int):
lista = np.array([]) #Se requiere el arreglo vacío con el cual se retornará
for elem in range(len(a)):
if len(a)-elem<n: #Se quiebra en caso de que el largo menos la posicion del elemento sea menor a n
break
aux = np.array([a[elem+i] for i in range(n)]) #Dado un elemento, se almacena una lista de n elementos hasta completar.
lista = np.append(lista,np.mean(aux)) #Se calcula para cada ventana la media ponderada, y se añade al arreglo final
return list(lista)
#ejemplo n°1:
sma([5,3,8,10,2,1,5,1,0,2], 2)
#observación: el resultado es el mismo que en el enunciado, a excepción que deja los enteros como n.0 en lugar de n.
#ejemplo n°2 (cambiamos el paso):
sma([5,3,8,10,2,1,5,1,0,2], 3)
#ejemplo n°3:
sma([1,2,3,4,5],2)
###Output
_____no_output_____
###Markdown
Problema 02La función **strides($a,n,p$)**, corresponde a transformar un arreglo unidimensional $a$ en una matriz de $n$ columnas, en el cual las filas se van construyendo desfasando la posición del arreglo en $p$ pasos hacia adelante.* Para el arreglo unidimensional $a$ = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], la función strides($a,4,2$), corresponde a crear una matriz de $4$ columnas, cuyos desfaces hacia adelante se hacen de dos en dos. El resultado tendría que ser algo así:$$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$$Implemente una función llamada `strides(a,4,2)` cuyo input sea un arreglo unidimensional y retorne la matriz de $4$ columnas, cuyos desfaces hacia adelante se hacen de dos en dos. * **Ejemplo**: *strides($a$,4,2)* =$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$
###Code
def strides(a:np.array, n:int, p:int)->np.array:
a = np.array(a)
if len(a.shape)!=1: #Se requiere un arreglo que sea unidimensional
return "Error: Se requiere un arreglo que sea unidimensional" #hay que anunciarlo en un error
else:
m_final = np.zeros((int((len(a)-n+p)/p),n)) #Se crea la matriz nula para reemplazar después
for fila in range(int((len(a)-n+p)/p)+1):
if fila>0: #Si no se está en la siguiente fila, se hace lo que sigue
if len(a[fila*p:fila*p+n])<n:
break #De llegar al final, si sobran menos de n elementos se quiebra
else:
m_final[fila] = np.array(a[fila*p:fila*p+n]) #Aquí se hace el proceso de agregar la fila de largo n en base
#al desplazamiento p
print("Fila "+str(fila+1)+": "+str(m_final[fila]))
else:
m_final[fila] = np.array(a[:n]) # la primera fila son los primeros n terminos del arreglo
print("Fila "+str(fila+1)+": "+str(m_final[fila]))
return m_final
a=[1,2,3,4,5,6,7,8,9,10]
strides(a,4,2)
#Observación: Hace lo que pide, pero queda en formato float en lugar de int's
###Output
Fila 1: [1. 2. 3. 4.]
Fila 2: [3. 4. 5. 6.]
Fila 3: [5. 6. 7. 8.]
Fila 4: [ 7. 8. 9. 10.]
###Markdown
Problem 03A **magic square** is an $n \times n$ matrix of positive integers in which the sums along every column, every row, and both main diagonals are all equal. Usually the entries are the consecutive numbers from 1 to $n^2$, where $n$ is the number of rows and columns of the magic square.If the numbers are consecutive from 1 to $n^2$, the common sum along each column, row, and main diagonal equals: $$M_{n} = \dfrac{n(n^2+1)}{2}$$For example, * $A= \begin{pmatrix} 4& 9 &2 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$ is a magic square.* $B= \begin{pmatrix} 4& 2 &9 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$ is not a magic square.Implement a function called `es_cuadrado_magico` whose input is a square matrix of size $n$ with consecutive numbers from $1$ to $n^2$ and whose output is *True* if it is a magic square and *False* otherwise.* **Example**: *es_cuadrado_magico($A$)* = True, *es_cuadrado_magico($B$)* = False**Hint**: Write a helper function that checks that the matrix is square and that its entries are the consecutive numbers from 1 to $n^2$.
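For reference, a compact NumPy-based check is sketched below; it is an illustrative alternative to the step-by-step solutions that follow, with the English name `is_magic_square` used only for clarity.

```python
import numpy as np

def is_magic_square(A):
    A = np.asarray(A)
    n, m = A.shape
    # Must be square and contain exactly the numbers 1..n^2
    if n != m or not np.array_equal(np.sort(A, axis=None), np.arange(1, n * n + 1)):
        return False
    target = n * (n * n + 1) // 2
    return (np.all(A.sum(axis=0) == target)        # columns
            and np.all(A.sum(axis=1) == target)    # rows
            and np.trace(A) == target              # main diagonal
            and np.trace(np.fliplr(A)) == target)  # secondary diagonal

print(is_magic_square([[4, 9, 2], [3, 5, 7], [8, 1, 6]]))  # True
print(is_magic_square([[4, 2, 9], [3, 5, 7], [8, 1, 6]]))  # False
```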
###Code
#Vamos a crear una función que avise si la matriz es cuadrada.
def cuadrada(A=np.array):
size=np.shape(A)
if size[0]==size[1]:
return True
else:
return False
#Basta usar shape, corroborar si los elementos de la tupla son iguales y retornar la veracidad.
A=np.zeros((3,3)) #ejemplo con matriz de 3x3
cuadrada(A)
B=np.zeros((2,3)) #ejemplo con matriz de 2x3
cuadrada(B)
#Ahora hay que checkear si solo hay números consecutivos del 1 al n^2
def consecutivos(A=np.array):
l=[] #Vamos a dejar la matriz en una lista y usar sort
for i in range(np.shape(A)[0]):
for j in range(np.shape(A)[1]):
l.append(A[i][j]) #se recorre la matriz completa y se van agregando los términos
l_ordenada=sorted(l) #se deja una lista ordenada
check= list(range(1, np.size(A)+1)) #Se crea la lista con los números ordenados.
for i in range(len(l)):
if l_ordenada[i]!= check[i]:
return False #Basta que haya un solo término mal para que falle el programa
return True #Si logró pasar hasta acá es porque las listas son iguales, así que la matriz sólo tenía números consecutivos
#ejemplo n°1: tiene números consecutivos (evidentemente)
A=np.array([[1,2,3],[4,5,6],[7,8,9]])
consecutivos(A)
#ejemplo n°2: no tiene números consecutivos
B=np.array([[2,3,4],[4,5,8],[1,2,3]])
consecutivos(B)
def es_cuadrado_magico(A=np.array):
if cuadrada(A)== False:
return "La matriz debe ser cuadrada" #Se debe chequear si la matriz es cuadrada
if consecutivos(A)== False:
return "La matriz no tiene números consecutivos" #Se debe chequear si la matriz está compuesta por números consecutivos
n=np.shape(A)[0] #Se necesita la dimensión de la matriz para corroborar el cuadrado
num_magico= n*(n**2 +1)/2
fila=0
col=0
diag_p=0
diag_s=0 #son las variables para corroborar el número mágico
for i in range(n):
for j in range(n):
fila+=A[i][j]
if fila != num_magico: #Aquí se chequean que las filas cumplan la condición del número mágico
return "El cuadrado no es mágico"
fila=0 #hay que resetear la fila para volver a iterar
for i in range(n):
for j in range(n):
col+=A[j][i] #Aquí que las columnas cumplan la condición del número mágico
if col != num_magico:
return "El cuadrado no es mágico"
col=0 #hay que resetear la columna para iterar otra vez
for i in range(n):
diag_p+=A[i][i]
if diag_p != num_magico: #Ahora la diagonal principal
return "El cuadrado no es mágico"
    for i in range(n):
        diag_s+=A[n-i-1][i] # secondary (anti-)diagonal; the original index summed the main diagonal again
    if diag_s != num_magico: # Finally, the secondary diagonal
        return "El cuadrado no es mágico"
return "El cuadrado es mágico!"
#Ejemplo n°1: El cuadrado mágico del ejemplo
A= np.array([[4,9,2],[3,5,7],[8,1,6]])
es_cuadrado_magico(A)
#Ejemplo n°2: El tipico 1,2,..,9 evidentemente no es mágico
B= np.array([[1,2,3],[4,5,6],[7,8,9]])
es_cuadrado_magico(B)
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°02 Class objectives* Reinforce the basic concepts of numpy. Contents* [Problem 01](p1)* [Problem 02](p2)* [Problem 03](p3)
###Code
import numpy as np
import time
import sys
###Output
_____no_output_____
###Markdown
Problem 01A **simple moving average** (SMA) is the average of the last $k$ previous values; that is, if $a_1$,$a_2$,...,$a_n$ is an $n$-dimensional array, the SMA is defined by:$$sma(k) =\dfrac{1}{k}(a_{n}+a_{n-1}+...+a_{n-(k-1)}) = \dfrac{1}{k}\sum_{i=0}^{k-1}a_{n-i} $$ We can also apply the SMA with a moving window of size $n$, in which case the result is the sequence of window averages obtained as follows:* $a = [1,2,3,4,5]$, the SMA with a window of $n=2$ would be: * sma(2): [mean(1,2),mean(2,3),mean(3,4),mean(4,5)] = [1.5, 2.5, 3.5, 4.5] * sma(3): [mean(1,2,3),mean(2,3,4),mean(3,4,5)] = [2.,3.,4.]Implement a function called `sma` whose input is a one-dimensional array $a$ and an integer $n$, and whose output is the simple moving average over the array, as follows:* **Example**: *sma([5,3,8,10,2,1,5,1,0,2], 2)* = $[4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ]$In this case, the SMA is computed over the array with a window of $n=2$.**Hint**: use the function `numpy.cumsum`
###Code
def sma(a:np.ndarray,n:int)->list:
"""
sma(a,n)
Calcula medias moviles simples de todos los grupos de n numeros susecivos con saltos de 1 en cada grupo.
Parameters
----------
a : np.array
lista de numeros
n : int
Numero de terminos.
Returns
-------
output : np.array
Medias moviles simples sucesivas.
Examples
--------
>>> sma([5,3,8,10,2,1,5,1,0,2], 2) = [4.,5.5,9.,6.,1.5,3.,3.,0.5,1.]
"""
b=np.cumsum(a)
l=np.linspace(0,0,np.size(a)-n+1)
l[0]=float(b[n-1]/n)#El primer termino es delicado.
for i in range (0,np.size(a)-n): #Agregamos los siguiendes siguiendo la regla de los pasos indicada
d=float((b[n+i]-b[i])/n)
l[i+1]=d
return(l)
sma([5,3,8,10,2,1,5,1,0,2], 2)
###Output
_____no_output_____
###Markdown
Problem 02The function **strides($a,n,p$)** transforms a one-dimensional array $a$ into a matrix of $n$ columns, in which each new row is built by shifting the starting position within the array $p$ steps forward.* For the one-dimensional array $a$ = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], strides($a,4,2$) builds a matrix of $4$ columns whose rows are shifted forward two positions at a time. The result should look like this:$$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$$Implement a function called `strides(a,n,p)` whose input is a one-dimensional array and which, for strides($a$,4,2), returns the matrix of $4$ columns whose rows are shifted forward two positions at a time. * **Example**: *strides($a$,4,2)* =$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$
###Code
def strides(a: np.ndarray,n:int,p:int)->np.ndarray:
"""
strides(a,n,p)
Transforma un arreglo unidimensional 𝑎 en una matriz de 𝑛 columnas, en el cual las filas se van construyendo desfasando la posición del arreglo en 𝑝 pasos
Parameters
----------
a : n.array
Arreglo a transformar
n : int
Numero de columnas del matriz resultante.
p : int
Numero de pasos
Returns
-------
output : no.array
Matriz de n columnas cuyas filas son posiciones consecutivas y desfasadas en p pasos del arreglo a.
"""
    A=np.zeros([(np.size(a)-n)//p+1,n]) # Create the all-zeros matrix with one row per window of length n taken every p steps
    for i in range (0,(np.size(a)-n)//p+1): # Iterate over the rows
        for j in range(0,n): # iterate over the columns to reach every element of the matrix
            A[i,j]=a[j+p*i] # Assign the element of the array a at the offset given by p steps
    return(A)
#EJEMPLO
a=np.array([1,2,3,4,5,6,7,8,9,10])
strides(a,4,2)
###Output
_____no_output_____
###Markdown
Problem 03A **magic square** is an $n \times n$ matrix of positive integers in which the sums along every column, every row, and both main diagonals are all equal. Usually the entries are the consecutive numbers from 1 to $n^2$, where $n$ is the number of rows and columns of the magic square.If the numbers are consecutive from 1 to $n^2$, the common sum along each column, row, and main diagonal equals: $$M_{n} = \dfrac{n(n^2+1)}{2}$$For example, * $A= \begin{pmatrix} 4& 9 &2 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$ is a magic square.* $B= \begin{pmatrix} 4& 2 &9 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$ is not a magic square.Implement a function called `es_cuadrado_magico` whose input is a square matrix of size $n$ with consecutive numbers from $1$ to $n^2$ and whose output is *True* if it is a magic square and *False* otherwise.* **Example**: *es_cuadrado_magico($A$)* = True, *es_cuadrado_magico($B$)* = False**Hint**: Write a helper function that checks that the matrix is square and that its entries are the consecutive numbers from 1 to $n^2$.
###Code
def Validar_Matriz(A: np.ndarray)->bool: #Funcion que va a validar si la Matriz es cuadrada y si conttiene los numeros del 1 al n^2
if A.shape[0] != A.shape[1]: #chekeamos si es cuadrada
return False
n=A.shape[1]
a=np.ravel(A) #Transformamaos la matriz en una array plano para trabajar mas comodos
for i in range (1 , n**2+1): #Chekeamos que esten los numeros del 1 al n^2
if i not in a:
return False
return True
B=np.array([[4,2,9],[3,5,7],[8,1,6]])
Validar_Matriz(B)
def es_cuadrado_magico(A: np.ndarray)->bool:
"""
es_cuadrado_magico(A)
Valid si la matris A es un cuadrado magico
Parameters
----------
A : n.array
Matriz a validar
Returns
-------
output : Bool
Verdadero si la matriz es un cuadrado magico , False en caso contrario.
"""
if Validar_Matriz(A)==True:
n=A.shape[1]
a=A.ravel
for i in range (0,n): #Validamos las sumas de las filas
if np.sum(A[:,i]) == (n*(n**2+1))/2:
pass
else:
return(False)
for i in range (0,n):
if np.sum(A[i,:]) == (n*(n**2+1))/2:#Validamos las sumas de las columnas
pass
else:
return(False)
if np.sum(np.diag(A)) == (n*(n**2+1))/2: #Validamos diagonal principal
pass
else:
return(False)
if np.sum(np.diag(np.fliplr(A))) == (n*(n**2+1))/2: #Validamos diagonal secundaria
pass
else:
return(False)
return(True)#Validamos que sea cuadrada
else:
return False
A=np.array([[4,9,2],[3,5,7],[8,1,6]])
es_cuadrado_magico(A)
A
B=np.array([[4,2,9],[3,5,7],[8,1,6]])
es_cuadrado_magico(B)
###Output
_____no_output_____
###Markdown
MAT281 - Laboratorio N°02 Class objectives* Reinforce the basic concepts of numpy. Contents* [Problem 01](p1)* [Problem 02](p2)* [Problem 03](p3) Problem 01A **simple moving average** (SMA) is the average of the last $k$ previous values; that is, if $a_1$,$a_2$,...,$a_n$ is an $n$-dimensional array, the SMA is defined by:$$sma(k) =\dfrac{1}{k}(a_{n}+a_{n-1}+...+a_{n-(k-1)}) = \dfrac{1}{k}\sum_{i=0}^{k-1}a_{n-i} $$ We can also apply the SMA with a moving window of size $n$, in which case the result is the sequence of window averages obtained as follows:* $a = [1,2,3,4,5]$, the SMA with a window of $n=2$ would be: * sma(2): [mean(1,2),mean(2,3),mean(3,4),mean(4,5)] = [1.5, 2.5, 3.5, 4.5] * sma(3): [mean(1,2,3),mean(2,3,4),mean(3,4,5)] = [2.,3.,4.]Implement a function called `sma` whose input is a one-dimensional array $a$ and an integer $n$, and whose output is the simple moving average over the array, as follows:* **Example**: *sma([5,3,8,10,2,1,5,1,0,2], 2)* = $[4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ]$In this case, the SMA is computed over the array with a window of $n=2$.**Hint**: use the function `numpy.cumsum`
###Code
import numpy as np
def sma(a:np.array, n:int)->list:
"""
sma(a,n)
Calcula el promedio considerando n datos sobre un arreglo a
Parameters
----------
a : np.array
Arreglo unidimensional de números.
n : int
Número de terminos.
Returns
-------
output : list
lista con los promedios calculados considerando n términos.
Examples
--------
>>> sma([5,3,8,10,2,1,5,1,0,2], 2)
[4.,5.5,9.,6.,1.5,3.,3.,0.5,1.]
"""
cum_a= np.cumsum(a) #retorna un arreglo de la suma acumulada de los numeros en el arreglo
arr=np.empty(len(a)-n+1, dtype=float)
arr[0]=cum_a[n-1]/n
for i in range(0,len(cum_a)-n):
arr[i+1]=(cum_a[i+n]-cum_a[i])/n
return list(arr)
print(sma([5,3,8,10,2,1,5,1,0,2], n=2))
###Output
[4.0, 5.5, 9.0, 6.0, 1.5, 3.0, 3.0, 0.5, 1.0]
###Markdown
Problem 02The function **strides($a,n,p$)** transforms a one-dimensional array $a$ into a matrix of $n$ columns, in which each new row is built by shifting the starting position within the array $p$ steps forward.* For the one-dimensional array $a$ = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], strides($a,4,2$) builds a matrix of $4$ columns whose rows are shifted forward two positions at a time. The result should look like this:$$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$$Implement a function called `strides(a,n,p)` whose input is a one-dimensional array and which, for strides($a$,4,2), returns the matrix of $4$ columns whose rows are shifted forward two positions at a time. * **Example**: *strides($a$,4,2)* =$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$
###Code
def strides(a:np.array, n:int, p:int)->np.array:
"""
strides(a,n,p)
Transforma un arreglo unidimensional en una matriz de n columnas, considerando elementos con desfase p
Parameters
----------
a : np.array
Arreglo unidimensional de números.
n : int
Número de terminos.
p : int
Número de desfase.
Returns
-------
output : np.array
matriz de n columnas con desfase p.
Examples
--------
>>> strides([1, 2, 3, 4, 5, 6, 7, 8, 9, 10],4,2)
[[ 1. 2. 3. 4.]
[ 3. 4. 5. 6.]
[ 5. 6. 7. 8.]
[ 7. 8. 9. 10.]]
"""
arr=np.empty([(len(a)-n)//p+1,n])
k=0
for i in range(0,(len(a)-n)//p+1):
for j in range(n):
arr[i][j]=a[j+k]
k+=p
return arr
print(strides([1, 2, 3, 4, 5, 6, 7, 8, 9, 10],4,2))
###Output
[[ 1. 2. 3. 4.]
[ 3. 4. 5. 6.]
[ 5. 6. 7. 8.]
[ 7. 8. 9. 10.]]
###Markdown
Problem 03A **magic square** is an $n \times n$ matrix of positive integers in which the sums along every column, every row, and both main diagonals are all equal. Usually the entries are the consecutive numbers from 1 to $n^2$, where $n$ is the number of rows and columns of the magic square.If the numbers are consecutive from 1 to $n^2$, the common sum along each column, row, and main diagonal equals: $$M_{n} = \dfrac{n(n^2+1)}{2}$$For example, * $A= \begin{pmatrix} 4& 9 &2 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$ is a magic square.* $B= \begin{pmatrix} 4& 2 &9 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$ is not a magic square.Implement a function called `es_cuadrado_magico` whose input is a square matrix of size $n$ with consecutive numbers from $1$ to $n^2$ and whose output is *True* if it is a magic square and *False* otherwise.* **Example**: *es_cuadrado_magico($A$)* = True, *es_cuadrado_magico($B$)* = False**Hint**: Write a helper function that checks that the matrix is square and that its entries are the consecutive numbers from 1 to $n^2$.
###Code
#Función que verifica si la matriz es cuadrada y tiene números consecutivos de 1 a n^2
def cuadrada(a:np.array)->bool:
"""
cuadrada(a)
Verifica si una matriz es cuadrada con números consecutivos desde 1 a n^2.
Parameters
----------
a : np.array
Matriz de números.
Returns
-------
output : bool
Retorna True si la matriz es cuadrada y tiene numeros consecutivos de 1 a n^2.
Donde n es el número de filas de la matriz.
Retorna False si alguna de las condiciones no se cumple
Examples
--------
>>> cuadrada(np.array([[4, 9, 2], [3, 5, 7],[8, 1, 6]]))
True
>>> cuadrada(np.array([[4, 9, 2], [3, 5, 7]))
False
"""
(n,m)=a.shape
if n==m:
flag=True
k=1
while flag and k<=n**2:
flag=False
for row in a:
if k in row:
flag=True
k+=1
return flag
else:
return False
def es_cuadrado_magico(a:np.array)->bool:
"""
es_cuadrado_magico(a)
Verifica si una matriz cuadrada con números consecutivos desde 1 a n^2, es cuadrado mágico.
Donde n es el número de filas de la matriz
Parameters
----------
a : np.array
Matriz de números.
Returns
-------
output : bool
Retorna True si la matriz es cuadrado mágico o False si no lo es.
Examples
--------
>>> es_cuadrado_magico(np.array([[4, 9, 2], [3, 5, 7],[8, 1, 6]]))
True
>>> es_cuadrado_magico(np.array([[4, 2, 9], [3, 5, 7],[8, 1, 6]]))
False
"""
if cuadrada(a)==True:
n= np.size(a, 0) #entrega el numero de filas de la matriz a
sum= n*(n**2+1)/2
for i in range(0,len(a.sum(axis=0))):
if a.sum(axis=0)[i]!= sum:
return False
for i in range(0,len(a.sum(axis=1))):
if a.sum(axis=1)[i]!= sum:
return False
if np.trace(a)!= sum:
return False
if np.trace(np.fliplr(a)) != sum:
return False
return True
else:
return False
print(es_cuadrado_magico(np.array([[4, 9, 2], [3, 5, 7],[8, 1, 6]])))
###Output
True
|
02-increment-train/a2i-audio-classification-and-retraining.ipynb | ###Markdown
Amazon Augmented AI (Amazon A2I) integration with Amazon SageMaker Hosted Endpoint for Audio Classification and Model Retraining Architecture 5. A2I Setup a. [Introduction](Introduction)b. [Setup](Setup)c. [Create Control Plane Resources](Create-Control-Plane-Resources) 6. Setup workforce and Labeling Manually a. [Starting Human Loops](Starting-Human-Loops)b. [Configure a2i status change to SQS](sqs_a2i)c. [Wait For Workers to Complete Task](Wait-For-Workers-to-Complete-Task)d. [Check Status of Human Loop](Check-Status-of-Human-Loop)e. [View Task Results](View-Task-Results) 7. Retrain and Redeploy [Incremental training with SageMaker](Incremental-training-with-SageMaker) 8. Configure Lambda and Api gateway[Create Lambda Function triggering a2i process](lambda) IntroductionAmazon Augmented AI (Amazon A2I) makes it easy to build the workflows required for human review of ML predictions. Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers. You can create your own workflows for ML models built on Amazon SageMaker or any other tools. Using Amazon A2I, you can allow human reviewers to step in when a model is unable to make a high confidence prediction or to audit its predictions on an on-going basis. Learn more here: https://aws.amazon.com/augmented-ai/In this tutorial, we will show how you can use **Amazon A2I with an Amazon SageMaker Hosted Endpoint.** We will be using an exisiting audio classification model in this notebook. We will also demonstrate how to manipulate the A2I output to perform incremental training to improve the model accuracy with the newly labeled data using A2I.For more in depth instructions, visit https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-getting-started.html To incorporate Amazon A2I into your human review workflows, you need three resources:* A **worker task template** to create a worker UI. The worker UI displays your input data, such as documents or images, and instructions to workers. It also provides interactive tools that the worker uses to complete your tasks. For more information, see https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-instructions-overview.html* A **human review workflow**, also referred to as a flow definition. You use the flow definition to configure your human workforce and provide information about how to accomplish the human review task. You can create a flow definition in the Amazon Augmented AI console or with Amazon A2I APIs. To learn more about both of these options, see https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-create-flow-definition.html* A **human loop** to start your human review workflow. When you use one of the built-in task types, the corresponding AWS service creates and starts a human loop on your behalf when the conditions specified in your flow definition are met or for each object if no conditions were specified. When a human loop is triggered, human review tasks are sent to the workers as specified in the flow definition.When using a custom task type, as this tutorial will show, you start a human loop using the Amazon Augmented AI Runtime API. When you call `start_human_loop()` in your custom application, a task is sent to human reviewers. SetupThis notebook is developed and tested in a SageMaker Notebook Instance with a `ml.t2.medium` instance with SageMaker Python SDK v2. It is recommended to execute the notebook in the same environment for best experience. 
Install Latest SDK
###Code
!pip install -U sagemaker==2.23.1
import sagemaker
from pkg_resources import parse_version
assert parse_version(sagemaker.__version__) >= parse_version('2'), \
'''This notebook is only compatible with sagemaker python SDK >= 2.
Current version is %s. Please make sure you upgrade the library.''' % sagemaker.__version__
print('SageMaker python SDK version: %s' % sagemaker.__version__)
###Output
SageMaker python SDK version: 2.23.1
###Markdown
We need to set up the following data:* `region` - Region to call A2I.* `BUCKET` - An S3 bucket accessible by the given role * Used to store the sample audio clips & output results * Must be within the same region A2I is called from* `role` - The IAM role used as part of StartHumanLoop. By default, this notebook will use the execution role* `workteam` - Group of people to send the work to
###Code
import boto3
my_session = boto3.session.Session()
region = my_session.region_name
%store -r endpoint_name
endpoint_name
###Output
_____no_output_____
###Markdown
Role and PermissionsThe AWS IAM Role used to execute the notebook needs to have the following permissions:* AmazonSageMakerFullAccess* AmazonSageMakerMechanicalTurkAccess (if using Mechanical Turk as your workforce)
###Code
from sagemaker import get_execution_role
import sagemaker
# Setting Role to the default SageMaker Execution Role
role = get_execution_role()
display(role)
import os
import boto3
import botocore
sess = sagemaker.Session()
BUCKET = sess.default_bucket()
TRAIN_PATH = f's3://{BUCKET}/tomofun'
OUTPUT_PATH = f's3://{BUCKET}/a2i-results'
###Output
_____no_output_____
###Markdown
Setup Bucket and Paths**Important**: The bucket you specify for `BUCKET` must have CORS enabled. You can enable CORS by adding a policy similar to the following to your Amazon S3 bucket. To learn how to add CORS to an S3 bucket, see [CORS Permission Requirement](https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-permissions-security.html#a2i-cors-update) in the Amazon A2I documentation. ```[{ "AllowedHeaders": [], "AllowedMethods": ["GET"], "AllowedOrigins": ["*"], "ExposeHeaders": []}]```If you do not add a CORS configuration to the S3 bucket that contains your input data (the audio clips here), human review tasks for those input data objects will fail.
###Code
cors_configuration = {
'CORSRules': [{
"AllowedHeaders": [],
"AllowedMethods": ["GET"],
"AllowedOrigins": ["*"],
"ExposeHeaders": []
}]
}
# Set the CORS configuration
s3 = boto3.client('s3')
s3.put_bucket_cors(Bucket=BUCKET,
CORSConfiguration=cors_configuration)
###Output
_____no_output_____
###Markdown
Audio Classification with Amazon SageMakerTo demonstrate A2I with an Amazon SageMaker hosted endpoint, we will take a trained audio classification model from an S3 bucket and host it on a SageMaker endpoint for real-time prediction. Load the model and create an endpointThe next cell will set up an endpoint from a trained model. It will take about 3 minutes.
###Code
import boto3
my_session = boto3.session.Session()
client = boto3.client("sts")
account_id = client.get_caller_identity()["Account"]
# algorithm_name = "vgg16-audio"
algorithm_name = "vgg-audio"
image_uri=f"{account_id}.dkr.ecr.{region}.amazonaws.com/{algorithm_name}"
image_uri
###Output
_____no_output_____
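The endpoint referenced by `endpoint_name` was created in the previous (01-byoc) notebook. If it does not exist yet, a deployment with the SageMaker Python SDK could look roughly like the sketch below; the `model_data` location and the instance type are assumptions, not values taken from this notebook.

```python
# Sketch only: deploy the custom audio-classification container to a real-time endpoint.
from sagemaker.model import Model

audio_model = Model(
    image_uri=image_uri,                            # custom inference container built above
    model_data=f"{TRAIN_PATH}/model/model.tar.gz",  # assumed S3 location of the trained model artifact
    role=role,
    sagemaker_session=sess,
)
predictor = audio_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",                   # assumed instance type
    endpoint_name=endpoint_name,
)
```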
###Markdown
Helper functions
###Code
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.image as mpimg
import random
import numpy as np
import json
runtime_client = boto3.client('runtime.sagemaker')
def load_and_predict(file_name):
"""
load an audio file, make audio classification to an predictor
Parameters:
----------
file_name : str
image file location, in str format
predictor : sagemaker.predictor.RealTimePredictor
a predictor loaded from hosted endpoint
threshold : float
score threshold for bounding box display
"""
with open(file_name, 'rb') as image:
f = image.read()
b = bytearray(f)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='application/octet-stream',
Body=b)
results = response['Body'].read().decode('utf-8')
print(results)
detections = json.loads(results)
return results, detections
# object_categories = ["Barking", "Howling", "Crying", "COSmoke","GlassBreaking","Other"]
object_categories = ["Barking", "Howling", "Crying", "COSmoke","GlassBreaking","Other",
"Doorbell", 'Bird', 'Music_Instrument', 'Laugh_Shout_Scream']
###Output
_____no_output_____
###Markdown
Sample DataLet's take a look at how the audio classification model performs on a few audio clips we have on hand. The predicted class and the class probabilities are presented.
###Code
# !mkdir audios
!cp ../01-byoc/train_data/train_00001.wav audios
!cp ../01-byoc/train_data/train_00010.wav audios
!cp ../01-byoc/train_data/train_00021.wav audios
test_audios = ['audios/train_00001.wav',
               'audios/train_00010.wav',
               'audios/train_00021.wav']
import IPython.display as ipd
ipd.Audio(test_audios[0], autoplay=True)
for audio in test_audios:
results, detections = load_and_predict(audio)
print(detections)
###Output
{"label": 0, "probability": [0.9903320670127869, 0.0001989333686651662, 0.0007428666576743126, 0.00016775673429947346, 3.222770828870125e-05, 3.8894515455467626e-05, 0.0003140326589345932, 0.0003089535457547754, 3.332734195282683e-05, 0.007830953225493431]}
{'label': 0, 'probability': [0.9903320670127869, 0.0001989333686651662, 0.0007428666576743126, 0.00016775673429947346, 3.222770828870125e-05, 3.8894515455467626e-05, 0.0003140326589345932, 0.0003089535457547754, 3.332734195282683e-05, 0.007830953225493431]}
{"label": 0, "probability": [0.8528938293457031, 0.0019418800948187709, 0.0011375222820788622, 0.0007829791866242886, 0.013402803801000118, 0.0073211463168263435, 0.03736455738544464, 0.00028037233278155327, 0.0017033849144354463, 0.083171546459198]}
{'label': 0, 'probability': [0.8528938293457031, 0.0019418800948187709, 0.0011375222820788622, 0.0007829791866242886, 0.013402803801000118, 0.0073211463168263435, 0.03736455738544464, 0.00028037233278155327, 0.0017033849144354463, 0.083171546459198]}
{"label": 0, "probability": [0.999336838722229, 1.1203339454368688e-05, 4.069593342137523e-05, 1.0619601198413875e-05, 0.000325192348100245, 8.879670531314332e-06, 1.770690687408205e-05, 2.0911184037686326e-05, 4.967172117176233e-06, 0.0002231001853942871]}
{'label': 0, 'probability': [0.999336838722229, 1.1203339454368688e-05, 4.069593342137523e-05, 1.0619601198413875e-05, 0.000325192348100245, 8.879670531314332e-06, 1.770690687408205e-05, 2.0911184037686326e-05, 4.967172117176233e-06, 0.0002231001853942871]}
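To make the raw response easier to read, the numeric label can be mapped back to its class name with the `object_categories` list defined above; a small illustrative snippet:

```python
# Map the numeric label in the endpoint response to its class name,
# together with the model's confidence for that class.
result, detections = load_and_predict(test_audios[0])
predicted = object_categories[detections['label']]
confidence = detections['probability'][detections['label']]
print(f"{test_audios[0]}: {predicted} (p={confidence:.3f})")
```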
###Markdown
A maximum class probability of around 0.85, as returned for the second clip above, is comparatively low, and that clip may well be mislabeled. Cases like this are a good example of bringing in human reviewers when a model is unable to make a high confidence prediction. Creating human review Workteam or Workforce A workforce is the group of workers that you have selected to label your dataset. You can choose either the Amazon Mechanical Turk workforce, a vendor-managed workforce, or you can create your own private workforce for human reviews. Whichever workforce type you choose, Amazon Augmented AI takes care of sending tasks to workers. When you use a private workforce, you also create work teams, a group of workers from your workforce that are assigned to Amazon Augmented AI human review tasks. You can have multiple work teams and can assign one or more work teams to each job. To create your Workteam, visit the instructions here: https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-management.htmlAfter you have created your workteam, replace YOUR_WORKTEAM_ARN below
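If you prefer to create the private work team programmatically instead of in the console, a sketch with boto3 is shown below; every Cognito identifier in it is a placeholder you must replace with your own values, and the team name is arbitrary.

```python
# Sketch only: programmatic work team creation backed by an existing Cognito user pool.
import boto3

sm = boto3.client("sagemaker")
create_workteam_response = sm.create_workteam(
    WorkteamName="audio-review-team",                 # arbitrary example name
    Description="Private team reviewing low-confidence audio predictions",
    MemberDefinitions=[{
        "CognitoMemberDefinition": {
            "UserPool": "us-west-2_XXXXXXXXX",        # placeholder user pool id
            "UserGroup": "audio-reviewers",           # placeholder user group
            "ClientId": "XXXXXXXXXXXXXXXXXXXXXXXXXX", # placeholder app client id
        }
    }],
)
print(create_workteam_response["WorkteamArn"])
```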
###Code
my_session = boto3.session.Session()
my_region = my_session.region_name
client = boto3.client("sts")
account_id = client.get_caller_identity()["Account"]
# WORKTEAM_ARN = "arn:aws:sagemaker:{}:{}:workteam/private-crowd/seal-squad".format(my_region, account_id)
WORKTEAM_ARN = "arn:aws:sagemaker:{}:{}:workteam/private-crowd/fish-squad".format(my_region, account_id)
WORKTEAM_ARN
###Output
_____no_output_____
###Markdown
Visit: https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-permissions-security.html to add the necessary permissions to your role. Client Setup Here we are going to set up the rest of our clients.
###Code
import io
import uuid
import time
timestamp = time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
# Amazon SageMaker client
sagemaker_client = boto3.client('sagemaker', region)
s3_client = boto3.client('s3')
# Amazon Augment AI (A2I) client
a2i = boto3.client('sagemaker-a2i-runtime')
# Amazon S3 client
s3 = boto3.client('s3', region)
# Flow definition name - this value is unique per account and region. You can also provide your own value here.
flowDefinitionName = 'fd-sagemaker-audio-classification-demo-' + timestamp
# Task UI name - this value is unique per account and region. You can also provide your own value here.
taskUIName = 'ui-sagemaker-audio-classification-demo-' + timestamp
###Output
_____no_output_____
###Markdown
Create Control Plane Resources Create Human Task UICreate a human task UI resource, giving a UI template in liquid HTML. This template will be rendered to the human workers whenever a human loop is required.For over 70 pre-built UIs, check: https://github.com/aws-samples/amazon-a2i-sample-task-uis.We will be taking an [audio classification UI](https://github.com/aws-samples/amazon-sagemaker-ground-truth-task-uis/blob/master/audio/audio-classification.liquid.html) and filling in the audio categories in the `categories` attribute of the `crowd-classifier` element in the template.
###Code
# task.input.taskObject
template = r"""
<script src="https://assets.crowd.aws/crowd-html-elements.js"></script>
<crowd-form>
<crowd-classifier
name="sentiment"
categories="['Barking', 'Howling', 'Crying', 'COSmoke','GlassBreaking','Other','Doorbell', 'Bird', 'Music_Instrument', 'Laugh_Shout_Scream']"
header="What class does this audio represent?"
>
<classification-target>
<audio controls>
<source src="{{ task.input.taskObject | grant_read_access }}" type="audio/wav">
Your browser does not support the audio element.
</audio>
</classification-target>
<full-instructions header="Audio Classification Analysis Instructions">
<p><strong>Barking</strong>Barking </p>
<p><strong>Howling</strong>Howling</p>
<p><strong>Crying</strong>Crying</p>
<p><strong>COSmoke</strong>COSmoke</p>
<p><strong>GlassBreaking</strong>GlassBreaking</p>
<p><strong>Other</strong>Other</p>
<p><strong>Doorbell</strong>Doorbell</p>
<p><strong>Bird</strong>Bird</p>
<p><strong>Music_Instrument</strong>Music_Instrument</p>
<p><strong>Laugh_Shout_Scream</strong>Laugh_Shout_Scream</p>
</full-instructions>
<short-instructions>
<p>Choose the primary sentiment that is expressed by the audio.</p>
</short-instructions>
</crowd-classifier>
</crowd-form>
"""
def create_task_ui():
'''
Creates a Human Task UI resource.
Returns:
struct: HumanTaskUiArn
'''
response = sagemaker_client.create_human_task_ui(
HumanTaskUiName=taskUIName,
UiTemplate={'Content': template})
return response
# Create task UI
humanTaskUiResponse = create_task_ui()
humanTaskUiArn = humanTaskUiResponse['HumanTaskUiArn']
print(humanTaskUiArn)
###Output
arn:aws:sagemaker:us-west-2:355444812467:human-task-ui/ui-sagemaker-audio-classification-demo-2021-07-17-03-32-38
###Markdown
Create the Flow Definition In this section, we're going to create a flow definition. Flow Definitions allow us to specify:* The workforce that your tasks will be sent to.* The instructions that your workforce will receive. This is called a worker task template.* The configuration of your worker tasks, including the number of workers that receive a task and time limits to complete tasks.* Where your output data will be stored.This demo is going to use the API, but you can optionally create this workflow definition in the console as well. For more details and instructions, see: https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-create-flow-definition.html.
###Code
create_workflow_definition_response = sagemaker_client.create_flow_definition(
FlowDefinitionName= flowDefinitionName,
RoleArn= role,
HumanLoopConfig= {
"WorkteamArn": WORKTEAM_ARN,
"HumanTaskUiArn": humanTaskUiArn,
"TaskCount": 1,
"TaskDescription": "Classify the audio category.",
"TaskTitle": "Audio Classification"
},
OutputConfig={
"S3OutputPath" : OUTPUT_PATH
}
)
flowDefinitionArn = create_workflow_definition_response['FlowDefinitionArn'] # let's save this ARN for future use
# Describe flow definition - status should be active
for x in range(60):
describeFlowDefinitionResponse = sagemaker_client.describe_flow_definition(FlowDefinitionName=flowDefinitionName)
print(describeFlowDefinitionResponse['FlowDefinitionStatus'])
if (describeFlowDefinitionResponse['FlowDefinitionStatus'] == 'Active'):
print("Flow Definition is active")
break
time.sleep(2)
###Output
Initializing
Active
Flow Definition is active
###Markdown
Create an SQS queue and route A2I task status change events to it
###Code
sqs = boto3.resource('sqs')
queue_name = 'a2itasks'
queue_arn = "arn:aws:sqs:{}:{}:{}".format(region, account_id, queue_name)
policy = '''{
"Version": "2012-10-17",
"Id": "MyQueuePolicy",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Service": ["events.amazonaws.com",
"sqs.amazonaws.com"]
},
"Action": "sqs:SendMessage"
}]}'''
policy_obj = json.loads(policy)
policy_obj['Statement'][0]['Resource'] = queue_arn
policy = json.dumps(policy_obj)
# queue = sqs.create_queue(QueueName=queue_name, Attributes={'DelaySeconds': '0',
# 'Policy': policy})
queue = sqs.get_queue_by_name(QueueName=queue_name)
print(queue.url)
sqs_client = boto3.client('sqs')
sqs_client.add_permission(
QueueUrl=queue.url,
Label="a2i",
AWSAccountIds=[
account_id,
],
Actions=[
'SendMessage',
]
)
iam = boto3.client("iam")
role_name = "AmazonSageMaker-SageMakerExecutionRole"
assume_role_policy_document = {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": ["sagemaker.amazonaws.com", "events.amazonaws.com"]
},
"Action": "sts:AssumeRole"
}
]
}
# create_role_response = iam.create_role(
# RoleName = role_name,
# AssumeRolePolicyDocument = json.dumps(assume_role_policy_document)
# )
# Now add S3 support
iam.attach_role_policy(
PolicyArn='arn:aws:iam::aws:policy/AmazonS3FullAccess',
RoleName=role_name
)
time.sleep(60) # wait for a minute to allow IAM role policy attachment to propagate
# sm_role_arn = create_role_response["Role"]["Arn"]
sm_role_arn = 'arn:aws:iam::355444812467:role/AmazonSageMaker-SageMakerExecutionRole'
print(sm_role_arn)
%%bash -s "$sm_role_arn" "$my_region"
aws events put-rule --name "A2IHumanLoopStatusChanges" \
--event-pattern "{\"source\":[\"aws.sagemaker\"],\"detail-type\":[\"SageMaker A2I HumanLoop Status Change\"]}" \
--role-arn "$1" \
--region $2
!sed "s/<account_id>/$account_id/g" targets-template.json > targets-tmp.json
!sed "s/<region>/$my_region/g" targets-tmp.json > targets.json
!aws events put-targets --rule A2IHumanLoopStatusChanges \
--targets file://$PWD/targets.json
###Output
{
"FailedEntryCount": 0,
"FailedEntries": []
}
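For reference, the `targets-template.json` consumed by the `sed` and `put-targets` commands above is expected to hold a standard EventBridge target list; a minimal example of what it might contain is shown below (the `Id` value is an arbitrary assumption, and the placeholders are the ones the `sed` commands fill in).

```json
[
  {
    "Id": "a2i-humanloop-to-sqs",
    "Arn": "arn:aws:sqs:<region>:<account_id>:a2itasks"
  }
]
```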
###Markdown
The newly created SQS queue is now a target of the rule we just defined. Starting Human Loops Now that we have set up our flow definition, we are ready to call our audio classification endpoint on SageMaker and start our human loops. In this tutorial, we are interested in starting a HumanLoop only if the highest class probability returned by our model is below a threshold (set to 0.9 in the code below). So, with a bit of logic, we can check the response for each call to the SageMaker endpoint using the `load_and_predict` helper function, and if the highest score is below the threshold, we will kick off a HumanLoop to engage our workforce for a human review.
###Code
# Upload the sample audio clips to the S3 bucket so the A2I UI can display them
!aws s3 sync ./audios/ s3://{BUCKET}/audios/
human_loops_started = []
SCORE_THRESHOLD = 0.9
import json
for fname in test_audios:
    # Call the SageMaker endpoint to classify this audio clip
    # and inspect the highest class probability it returns
    result, detections = load_and_predict(fname)
max_p = max(detections['probability'])
# Our condition for triggering a human review
if max_p < SCORE_THRESHOLD:
s3_fname='s3://%s/%s' % (BUCKET, fname)
print(s3_fname)
humanLoopName = str(uuid.uuid4())
inputContent = {
"initialValue": max_p,
"taskObject": s3_fname # the s3 object will be passed to the worker task UI to render
}
# start an a2i human review loop with an input
start_loop_response = a2i.start_human_loop(
HumanLoopName=humanLoopName,
FlowDefinitionArn=flowDefinitionArn,
HumanLoopInput={
"InputContent": json.dumps(inputContent)
}
)
print(start_loop_response)
human_loops_started.append(humanLoopName)
        print('Audio classification confidence score of %s is less than the threshold of %.2f' % (max_p, SCORE_THRESHOLD))
        print(f'Starting human loop with name: {humanLoopName} \n')
    else:
        print('Audio classification confidence score of %s is above the threshold of %.2f' % (max_p, SCORE_THRESHOLD))
print('No human loop created. \n')
###Output
{"label": 0, "probability": [0.999336838722229, 1.1203339454368688e-05, 4.069593342137523e-05, 1.0619601198413875e-05, 0.000325192348100245, 8.879670531314332e-06, 1.770690687408205e-05, 2.0911184037686326e-05, 4.967172117176233e-06, 0.0002231001853942871]}
Object detection Confidence Score of 0.999336838722229 is above than the threshold of 0.90
No human loop created.
{"label": 0, "probability": [0.999336838722229, 1.1203339454368688e-05, 4.069593342137523e-05, 1.0619601198413875e-05, 0.000325192348100245, 8.879670531314332e-06, 1.770690687408205e-05, 2.0911184037686326e-05, 4.967172117176233e-06, 0.0002231001853942871]}
Object detection Confidence Score of 0.999336838722229 is above than the threshold of 0.90
No human loop created.
{"label": 0, "probability": [0.999336838722229, 1.1203339454368688e-05, 4.069593342137523e-05, 1.0619601198413875e-05, 0.000325192348100245, 8.879670531314332e-06, 1.770690687408205e-05, 2.0911184037686326e-05, 4.967172117176233e-06, 0.0002231001853942871]}
Object detection Confidence Score of 0.999336838722229 is above than the threshold of 0.90
No human loop created.
###Markdown
Check Status of Human Loop
###Code
completed_human_loops = []
for human_loop_name in human_loops_started:
resp = a2i.describe_human_loop(HumanLoopName=human_loop_name)
print(resp)
print(f'HumanLoop Name: {human_loop_name}')
print(f'HumanLoop Status: {resp["HumanLoopStatus"]}')
print(f'HumanLoop Output Destination: {resp["HumanLoopOutput"]}')
print('\n')
if resp["HumanLoopStatus"] == "Completed":
completed_human_loops.append(resp)
###Output
_____no_output_____
###Markdown
Wait For Workers to Complete TaskSince we are using a private workteam, we should go to the labeling UI to perform the inspection ourselves.
###Code
workteamName = WORKTEAM_ARN[WORKTEAM_ARN.rfind('/') + 1:]
print("Navigate to the private worker portal and do the tasks. Make sure you've invited yourself to your workteam!")
print('https://' + sagemaker_client.describe_workteam(WorkteamName=workteamName)['Workteam']['SubDomain'])
completed_human_loops = []
for human_loop_name in human_loops_started:
resp = a2i.describe_human_loop(HumanLoopName=human_loop_name)
print(resp)
print(f'HumanLoop Name: {human_loop_name}')
print(f'HumanLoop Status: {resp["HumanLoopStatus"]}')
print(f'HumanLoop Output Destination: {resp["HumanLoopOutput"]}')
print('\n')
if resp["HumanLoopStatus"] == "Completed":
completed_human_loops.append(resp)
###Output
_____no_output_____
###Markdown
Collect data from a2i to build the training data for the next round
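Once the cell below has populated `completed_human_loops`, each `output.json` can be turned into a labeled example for the next training round. The sketch below assumes the usual A2I output layout and uses the `sentiment` key because that is the `crowd-classifier` name in the worker template above; verify the field names against one of your own output files.

```python
import json

# Sketch: build (audio S3 URI, human label) pairs from the completed human loops.
labeled_examples = []
for name, output_s3 in completed_human_loops:
    bucket, key = output_s3.replace("s3://", "").split("/", 1)
    output = json.loads(s3_client.get_object(Bucket=bucket, Key=key)["Body"].read())
    audio_uri = output["inputContent"]["taskObject"]                          # clip the reviewer listened to
    label = output["humanAnswers"][0]["answerContent"]["sentiment"]["label"]  # reviewer's class choice
    labeled_examples.append((audio_uri, label))

print(labeled_examples[:5])
```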
###Code
queue.url
sqs = boto3.client('sqs')
completed_human_loops = []
while True:
response = sqs.receive_message(
QueueUrl=queue.url,
MaxNumberOfMessages=10,
MessageAttributeNames=[
'All'
],
VisibilityTimeout=10,
WaitTimeSeconds=0
)
if 'Messages' not in response:
break
messages = response['Messages']
for m in messages:
task = json.loads(m['Body'])['detail']
name = task['humanLoopName']
output_s3 = task['humanLoopOutput']['outputS3Uri']
completed_human_loops.append((name, output_s3))
receipt_handle = m['ReceiptHandle']
# Delete received message from queue
sqs.delete_message(
QueueUrl=queue.url,
ReceiptHandle=receipt_handle
)
print(completed_human_loops)
###Output
[('bc0db63b-7082-4e58-8930-2863c5880d78', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/56/18/bc0db63b-7082-4e58-8930-2863c5880d78/output.json'), ('2ab79815-1ce9-4f04-b244-ad50a1157371', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/11/27/2ab79815-1ce9-4f04-b244-ad50a1157371/output.json'), ('8c8771be-2c3d-4512-9ecc-dc1d7d6fa731', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/09/43/8c8771be-2c3d-4512-9ecc-dc1d7d6fa731/output.json'), ('6b208faf-24ac-4907-86d6-8422b0404d1a', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/40/17/6b208faf-24ac-4907-86d6-8422b0404d1a/output.json'), ('1e5d8a14-e3ab-42cf-95e6-418795df996a', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/13/59/1e5d8a14-e3ab-42cf-95e6-418795df996a/output.json'), ('a4f724e7-65f1-4381-9736-84a00966d6e2', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/22/30/a4f724e7-65f1-4381-9736-84a00966d6e2/output.json'), ('5a1ded75-49de-40f6-be19-253ce1c8fee2', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/57/12/5a1ded75-49de-40f6-be19-253ce1c8fee2/output.json'), ('e27e38c8-07ea-4cc2-9205-3748b6af412e', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/35/23/e27e38c8-07ea-4cc2-9205-3748b6af412e/output.json'), ('2b0ac6b4-13c5-41fd-a91c-7f4e18d0e0d2', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/17/04/2b0ac6b4-13c5-41fd-a91c-7f4e18d0e0d2/output.json'), ('9c056523-3269-4072-a618-548a6c414abc', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/09/49/9c056523-3269-4072-a618-548a6c414abc/output.json'), ('c30fca79-cffc-4448-9af5-d6973ed22f4f', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/08/53/c30fca79-cffc-4448-9af5-d6973ed22f4f/output.json'), ('0d62dcd9-ad48-4636-b2d2-e86fdedeef1d', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/06/37/0d62dcd9-ad48-4636-b2d2-e86fdedeef1d/output.json'), ('1b75103c-9bf2-47ad-8527-259b711f8c3d', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/14/45/1b75103c-9bf2-47ad-8527-259b711f8c3d/output.json'), ('162ab659-c98d-4feb-ad51-40d9b09484e1', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/36/27/162ab659-c98d-4feb-ad51-40d9b09484e1/output.json'), ('2f6a5f48-838d-4636-bbcc-58e7e0505e14', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/50/50/2f6a5f48-838d-4636-bbcc-58e7e0505e14/output.json'), ('5117cde5-591b-4715-8a43-603159228ac6', 
's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/44/43/5117cde5-591b-4715-8a43-603159228ac6/output.json'), ('efee8a1d-5f93-4cc5-a751-befe6a18fff6', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/26/38/efee8a1d-5f93-4cc5-a751-befe6a18fff6/output.json'), ('354e0e74-8097-43fd-a3ba-74b2a10afdbc', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/47/25/354e0e74-8097-43fd-a3ba-74b2a10afdbc/output.json'), ('368337da-8af2-44e2-9c53-7724b2eac0ab', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/23/58/368337da-8af2-44e2-9c53-7724b2eac0ab/output.json'), ('6201f7d5-5320-4b2c-b331-d519b76d6647', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/24/30/6201f7d5-5320-4b2c-b331-d519b76d6647/output.json'), ('42f175b0-e3c2-4038-9511-0764c41450d5', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/03/41/42f175b0-e3c2-4038-9511-0764c41450d5/output.json'), ('a5ee8cb3-3b48-44b8-90a7-497b48f79fca', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/07/51/a5ee8cb3-3b48-44b8-90a7-497b48f79fca/output.json'), ('fefafd37-7749-45f1-a9d7-0e4d0fd7daea', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/09/21/fefafd37-7749-45f1-a9d7-0e4d0fd7daea/output.json'), ('81f9571a-584d-4eef-bd06-af383eee969c', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/08/23/81f9571a-584d-4eef-bd06-af383eee969c/output.json'), ('e99e2f19-8c97-4653-9290-9706104e02f2', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/12/57/e99e2f19-8c97-4653-9290-9706104e02f2/output.json'), ('99f07cfb-1429-4562-9794-62e0fa6b21f2', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/25/24/99f07cfb-1429-4562-9794-62e0fa6b21f2/output.json'), ('b4d0046f-326b-4216-81f3-7d41c1bb6a65', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/23/20/b4d0046f-326b-4216-81f3-7d41c1bb6a65/output.json'), ('5ef07468-fda8-4097-a747-7cec7b68a476', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/55/36/5ef07468-fda8-4097-a747-7cec7b68a476/output.json'), ('22861c83-c3dc-41f6-9cb9-efc3c9475b5a', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/47/56/22861c83-c3dc-41f6-9cb9-efc3c9475b5a/output.json'), ('edcaf64c-ee8e-4410-b784-83f2dc5b2ee1', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/29/58/edcaf64c-ee8e-4410-b784-83f2dc5b2ee1/output.json'), ('d9f733bd-0463-455f-a0b6-7ac344428e2b', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/41/07/d9f733bd-0463-455f-a0b6-7ac344428e2b/output.json'), 
('6a95d54f-4886-4587-b230-3e0375b07a1a', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/25/20/6a95d54f-4886-4587-b230-3e0375b07a1a/output.json'), ('c71e7351-1beb-4fee-bd15-157f381f006d', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/21/05/c71e7351-1beb-4fee-bd15-157f381f006d/output.json'), ('3114dac3-bb5d-4489-ac51-c6e29547dc21', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/58/26/3114dac3-bb5d-4489-ac51-c6e29547dc21/output.json'), ('10021d62-9590-4ff7-8188-a73bb0b6daff', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/28/18/10021d62-9590-4ff7-8188-a73bb0b6daff/output.json'), ('6c4efdfe-caca-4c92-86f2-0da397533690', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/25/34/6c4efdfe-caca-4c92-86f2-0da397533690/output.json'), ('d3a9bc9a-ee32-4c62-9f51-efc031cdc74f', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/21/51/d3a9bc9a-ee32-4c62-9f51-efc031cdc74f/output.json'), ('123896f0-de12-4e2e-9d61-d5e2087640ae', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/25/52/123896f0-de12-4e2e-9d61-d5e2087640ae/output.json'), ('386d1f1c-fb4e-4e16-8899-540daf793f47', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/32/04/386d1f1c-fb4e-4e16-8899-540daf793f47/output.json'), ('82a0c6aa-9ae0-48af-b272-d0859adc5105', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/59/28/82a0c6aa-9ae0-48af-b272-d0859adc5105/output.json'), ('5487470a-1412-439f-8a08-d5cb3d5997c3', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/35/17/5487470a-1412-439f-8a08-d5cb3d5997c3/output.json'), ('02490abf-e03e-4a5e-bd9d-2454a63d4f6f', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/58/08/02490abf-e03e-4a5e-bd9d-2454a63d4f6f/output.json'), ('3265f9f0-20cb-43db-b440-8c9c7e47fd2a', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/15/17/3265f9f0-20cb-43db-b440-8c9c7e47fd2a/output.json'), ('e64ba890-4bb9-4670-82d8-fa061476522f', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/00/10/e64ba890-4bb9-4670-82d8-fa061476522f/output.json'), ('ed39c849-cf50-4633-be0f-b2e011984dd2', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/43/05/ed39c849-cf50-4633-be0f-b2e011984dd2/output.json'), ('69418824-86b3-4ef8-ad72-3ad119408ef0', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/50/22/69418824-86b3-4ef8-ad72-3ad119408ef0/output.json'), ('ee6b1a04-92b2-4812-a922-bd5cb2df8d28', 
's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/30/55/ee6b1a04-92b2-4812-a922-bd5cb2df8d28/output.json'), ('8ed7d0b4-e1f3-4eea-84c8-609892c2d56d', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/28/02/8ed7d0b4-e1f3-4eea-84c8-609892c2d56d/output.json'), ('24cc9f14-2b57-4b3b-b869-50da3a974c41', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/00/56/24cc9f14-2b57-4b3b-b869-50da3a974c41/output.json'), ('b61356a8-11bc-470f-bb34-cb5f40cfc662', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/55/20/b61356a8-11bc-470f-bb34-cb5f40cfc662/output.json'), ('6612d1a0-1206-4f0d-ad30-c3327cb9b026', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/54/16/6612d1a0-1206-4f0d-ad30-c3327cb9b026/output.json'), ('5ab5feb0-470c-4a79-b51b-03a0f65d5c52', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/09/35/5ab5feb0-470c-4a79-b51b-03a0f65d5c52/output.json'), ('f3b67b27-3a56-46a3-8f93-d51f13760e77', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/50/40/f3b67b27-3a56-46a3-8f93-d51f13760e77/output.json'), ('63d584aa-8641-41f1-bf4d-83fcae459aba', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/07/41/63d584aa-8641-41f1-bf4d-83fcae459aba/output.json'), ('98b65acf-93a3-42b0-9d72-7abeb759c125', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/35/45/98b65acf-93a3-42b0-9d72-7abeb759c125/output.json'), ('5c6a6a3d-02f0-42d4-8358-37961c7b7b42', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/58/44/5c6a6a3d-02f0-42d4-8358-37961c7b7b42/output.json'), ('9b166cf2-2f3e-4e86-9d06-52405b09f72b', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/07/17/9b166cf2-2f3e-4e86-9d06-52405b09f72b/output.json'), ('084643f5-0e0b-4bb3-af6f-5f6aacfba571', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/56/48/084643f5-0e0b-4bb3-af6f-5f6aacfba571/output.json'), ('41a85d33-8c1b-45d8-96f4-115597b3dbc0', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/57/16/41a85d33-8c1b-45d8-96f4-115597b3dbc0/output.json'), ('53d85c4f-5083-4968-b261-51ccc4bc9a9a', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/29/52/53d85c4f-5083-4968-b261-51ccc4bc9a9a/output.json'), ('10e3f4fa-70d6-48ba-b691-1523a991147d', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/03/18/10e3f4fa-70d6-48ba-b691-1523a991147d/output.json'), ('fda6a10f-f16f-4e7b-a454-42c50209c9db', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/37/17/fda6a10f-f16f-4e7b-a454-42c50209c9db/output.json'), 
('bf9aaca2-eac9-47a7-8d67-a638413483ce', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/58/40/bf9aaca2-eac9-47a7-8d67-a638413483ce/output.json'), ('603ff63f-68c5-4cce-b0ae-0cd9063b3437', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/45/39/603ff63f-68c5-4cce-b0ae-0cd9063b3437/output.json'), ('2a5e08d1-5d4f-4880-a7bc-27c3982549b8', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/11/57/2a5e08d1-5d4f-4880-a7bc-27c3982549b8/output.json'), ('323cb90f-aced-4342-a061-bd92dcedb6f4', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/39/17/323cb90f-aced-4342-a061-bd92dcedb6f4/output.json'), ('0256276a-502f-472e-b5bc-a3317bc473d1', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/24/36/0256276a-502f-472e-b5bc-a3317bc473d1/output.json'), ('16aa6079-7c63-4fcc-9e03-3f09bf80730f', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/37/06/16aa6079-7c63-4fcc-9e03-3f09bf80730f/output.json'), ('e263e79f-0843-4e60-844a-c75f3ad8500d', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/10/49/e263e79f-0843-4e60-844a-c75f3ad8500d/output.json'), ('d662b090-1884-40f2-8358-6c52eb3e541e', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/01/48/d662b090-1884-40f2-8358-6c52eb3e541e/output.json'), ('2d13ea78-07c3-42dd-b137-dbc58dae9f7b', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/09/37/2d13ea78-07c3-42dd-b137-dbc58dae9f7b/output.json'), ('f1eb8012-e1a8-42d5-8ac6-3a2c1fc4396c', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/41/33/f1eb8012-e1a8-42d5-8ac6-3a2c1fc4396c/output.json'), ('4edcb011-ae88-4143-a351-ef96452f7a15', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/21/18/4edcb011-ae88-4143-a351-ef96452f7a15/output.json'), ('01dd3a0d-c095-4f52-9994-5251eb0c5d24', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/38/11/01dd3a0d-c095-4f52-9994-5251eb0c5d24/output.json'), ('af539bf7-5031-4353-98b0-9bac5f54b16c', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/32/28/af539bf7-5031-4353-98b0-9bac5f54b16c/output.json'), ('75539cde-7d37-4174-9d88-dcfe6a41ec72', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/21/24/75539cde-7d37-4174-9d88-dcfe6a41ec72/output.json'), ('9b8a703d-3e04-4f8c-ba49-bdd1ad27e661', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/42/53/9b8a703d-3e04-4f8c-ba49-bdd1ad27e661/output.json'), ('20fc39d3-4cd7-4b0c-8875-cd73e12733c7', 
's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/00/24/20fc39d3-4cd7-4b0c-8875-cd73e12733c7/output.json'), ('17d62d20-d4c6-4ad0-bacb-daed34f815f1', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/03/46/17d62d20-d4c6-4ad0-bacb-daed34f815f1/output.json'), ('5a0fac14-ab35-42f9-9636-a9002f48a8e8', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/43/35/5a0fac14-ab35-42f9-9636-a9002f48a8e8/output.json'), ('473c03f9-341c-4d89-8c65-a6c5de489fa7', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/30/56/473c03f9-341c-4d89-8c65-a6c5de489fa7/output.json'), ('8e70f938-def1-42e4-923f-db432d9f5ed0', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/42/29/8e70f938-def1-42e4-923f-db432d9f5ed0/output.json'), ('bdaf9360-f775-4020-b2bd-3cd5ad84f96d', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/57/10/bdaf9360-f775-4020-b2bd-3cd5ad84f96d/output.json'), ('720f3f4c-ff17-41a0-94fb-3f4cf6a298d4', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/40/17/720f3f4c-ff17-41a0-94fb-3f4cf6a298d4/output.json'), ('451b559a-6da0-48ef-9732-07f1332db061', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/03/30/451b559a-6da0-48ef-9732-07f1332db061/output.json'), ('e4106920-9570-420a-8395-a76413310775', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/04/32/e4106920-9570-420a-8395-a76413310775/output.json'), ('84eb488b-3d52-4653-a5a3-edb3e42aa3f7', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/34/03/84eb488b-3d52-4653-a5a3-edb3e42aa3f7/output.json'), ('e082c5b8-ddce-4820-b0db-36a05576a72a', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/19/20/e082c5b8-ddce-4820-b0db-36a05576a72a/output.json'), ('0a0cae74-d0b1-4143-84cd-bcc64f57ec7e', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/50/20/0a0cae74-d0b1-4143-84cd-bcc64f57ec7e/output.json'), ('77222148-9a49-4869-a607-f24caad6b53e', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/20/36/77222148-9a49-4869-a607-f24caad6b53e/output.json'), ('f6a936b8-e18a-4bff-8fd5-24fb81a3aa32', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/29/56/f6a936b8-e18a-4bff-8fd5-24fb81a3aa32/output.json'), ('f95604d3-dcf0-4883-b7bd-36c7fe83a755', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/05/49/f95604d3-dcf0-4883-b7bd-36c7fe83a755/output.json'), ('85a19cf3-bde2-4f02-ba09-e9fa363cfb7e', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/55/52/85a19cf3-bde2-4f02-ba09-e9fa363cfb7e/output.json'), 
('51d81ddf-aa10-4118-837d-4d4e0c05503d', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/45/21/51d81ddf-aa10-4118-837d-4d4e0c05503d/output.json'), ('d618be95-f33f-4ea9-988d-ee01926306c5', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/43/59/d618be95-f33f-4ea9-988d-ee01926306c5/output.json'), ('3656c27a-f15b-45c2-b961-a9681cf36736', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/20/08/3656c27a-f15b-45c2-b961-a9681cf36736/output.json'), ('08e94ca3-f877-4e15-837f-2ee7505fd4bb', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/23/10/08e94ca3-f877-4e15-837f-2ee7505fd4bb/output.json'), ('07e68d3f-f0b3-4fd6-ba4a-cbbc0ea0c3cc', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/06/02/07e68d3f-f0b3-4fd6-ba4a-cbbc0ea0c3cc/output.json'), ('04c3ddfc-3ead-405e-b9cf-5c11f6f9e4d5', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/43/39/04c3ddfc-3ead-405e-b9cf-5c11f6f9e4d5/output.json'), ('80ee275d-70fa-455f-9b4f-8dda37743fc1', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/54/24/80ee275d-70fa-455f-9b4f-8dda37743fc1/output.json'), ('6fac7a22-87f7-45bc-a98f-9b5421c4b164', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/14/53/6fac7a22-87f7-45bc-a98f-9b5421c4b164/output.json'), ('25d7c618-fdd9-446c-9964-82cb98f9682e', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/29/44/25d7c618-fdd9-446c-9964-82cb98f9682e/output.json'), ('8ac1eb68-82b0-43ba-b66d-b05b3fdc7495', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/28/20/8ac1eb68-82b0-43ba-b66d-b05b3fdc7495/output.json'), ('9ae42e89-1230-4055-bfef-8d4b0951c2df', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/45/05/9ae42e89-1230-4055-bfef-8d4b0951c2df/output.json'), ('6864ed0a-f6eb-40f8-9a90-15b51abf95a9', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/40/45/6864ed0a-f6eb-40f8-9a90-15b51abf95a9/output.json'), ('b970b027-892c-434b-898c-39a4670443ef', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/41/45/b970b027-892c-434b-898c-39a4670443ef/output.json'), ('c6e97af6-a7b3-42b8-b772-0b8d114497bd', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/57/06/c6e97af6-a7b3-42b8-b772-0b8d114497bd/output.json'), ('eb08ed39-2836-4e64-8df5-f82f7b1895e6', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/58/20/eb08ed39-2836-4e64-8df5-f82f7b1895e6/output.json'), ('3927bbb4-b23e-4c75-a668-3bd74fac4c66', 
's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/07/17/3927bbb4-b23e-4c75-a668-3bd74fac4c66/output.json'), ('39690ce1-5313-4632-b346-3df250f6165a', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/50/46/39690ce1-5313-4632-b346-3df250f6165a/output.json'), ('dd9f0c08-3e60-4580-bb07-79e3f5ca2471', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/55/32/dd9f0c08-3e60-4580-bb07-79e3f5ca2471/output.json'), ('05c75be3-eada-4ea5-9d97-149b0bcf1b6b', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/12/35/05c75be3-eada-4ea5-9d97-149b0bcf1b6b/output.json'), ('4d972b41-ab99-4e38-8187-090cbffafb01', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/02/20/4d972b41-ab99-4e38-8187-090cbffafb01/output.json'), ('c4f69dbe-8c10-42d0-b2a6-b6032a7abc70', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/17/01/c4f69dbe-8c10-42d0-b2a6-b6032a7abc70/output.json'), ('8ebcce2c-98d5-4586-948e-373ffe61ea7d', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/02/47/8ebcce2c-98d5-4586-948e-373ffe61ea7d/output.json'), ('3574cddd-8e3e-430e-8fa7-5b338cb54b42', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/06/16/3574cddd-8e3e-430e-8fa7-5b338cb54b42/output.json'), ('6df9bda6-b8d0-42b9-b911-1ca7196d8b66', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/46/29/6df9bda6-b8d0-42b9-b911-1ca7196d8b66/output.json'), ('417c001b-3155-4ba8-a3d3-f0581118e4ad', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/11/17/417c001b-3155-4ba8-a3d3-f0581118e4ad/output.json'), ('5cb5be2d-38c4-4627-b882-720973c53a10', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/48/01/5cb5be2d-38c4-4627-b882-720973c53a10/output.json'), ('4df22b01-83ab-41a6-8994-1c84fd8f2274', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/06/07/4df22b01-83ab-41a6-8994-1c84fd8f2274/output.json'), ('cc33c6cc-7476-423a-8c55-09a7d7fac44c', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/16/53/cc33c6cc-7476-423a-8c55-09a7d7fac44c/output.json'), ('4abeb96e-c402-4192-9e35-c6ed6370c7f9', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/16/48/4abeb96e-c402-4192-9e35-c6ed6370c7f9/output.json'), ('4ea5b71b-411e-4a55-98b8-da3dd17797e6', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/25/48/4ea5b71b-411e-4a55-98b8-da3dd17797e6/output.json'), ('5bb8c2da-92b7-47bc-93ea-0d995fb9fb91', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/44/37/5bb8c2da-92b7-47bc-93ea-0d995fb9fb91/output.json'), 
('c17121de-228b-4384-bbc9-aef8e2d88fe5', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/00/54/c17121de-228b-4384-bbc9-aef8e2d88fe5/output.json'), ('de76ec67-b5ad-4316-95f1-5df06e63eb2c', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/11/45/de76ec67-b5ad-4316-95f1-5df06e63eb2c/output.json'), ('d68dde70-c0ff-453f-ae08-0f42f9ed5bd3', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/05/29/d68dde70-c0ff-453f-ae08-0f42f9ed5bd3/output.json'), ('dfaf1afd-0705-47d9-9ba7-5c86d7890e8c', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/18/45/dfaf1afd-0705-47d9-9ba7-5c86d7890e8c/output.json'), ('7315184d-490d-418a-a5ae-537f74ed31c9', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/57/20/7315184d-490d-418a-a5ae-537f74ed31c9/output.json'), ('82ee5ace-bd47-4127-933a-e94c6e55e864', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/04/52/82ee5ace-bd47-4127-933a-e94c6e55e864/output.json'), ('6adc2e8f-36b9-4c9f-9b7c-883719e5b12d', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/14/25/6adc2e8f-36b9-4c9f-9b7c-883719e5b12d/output.json'), ('925cd4c2-17e5-4ade-afe1-69b2c5b3f4f4', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/42/01/925cd4c2-17e5-4ade-afe1-69b2c5b3f4f4/output.json'), ('30154726-5f04-419a-bf4b-1a214ed928cd', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/12/56/30154726-5f04-419a-bf4b-1a214ed928cd/output.json'), ('7be70627-8b01-42f9-8f7b-cb75af19d24b', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/08/21/7be70627-8b01-42f9-8f7b-cb75af19d24b/output.json'), ('7c209ed6-9353-4b20-9a8b-091d81e8b732', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/16/03/7c209ed6-9353-4b20-9a8b-091d81e8b732/output.json'), ('6f5d642e-e640-43bf-8acd-736cdb7caaee', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/29/00/6f5d642e-e640-43bf-8acd-736cdb7caaee/output.json'), ('f378835c-a113-4f13-840d-71e616c60d77', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/23/50/f378835c-a113-4f13-840d-71e616c60d77/output.json'), ('e1ca7597-8490-4496-8c66-0c0fe287381f', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/44/25/e1ca7597-8490-4496-8c66-0c0fe287381f/output.json'), ('d9021928-6219-490f-b23d-01336f4df374', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/31/08/d9021928-6219-490f-b23d-01336f4df374/output.json'), ('7a25ee8f-8dca-4620-a6d7-b875ca726a97', 
's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/47/32/7a25ee8f-8dca-4620-a6d7-b875ca726a97/output.json'), ('f20eaa6c-ba93-4be7-9e7d-0e4ee69f9c88', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/01/16/f20eaa6c-ba93-4be7-9e7d-0e4ee69f9c88/output.json'), ('d96e318d-3f7c-44de-891c-09baf6264358', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/19/13/d96e318d-3f7c-44de-891c-09baf6264358/output.json'), ('9f69293c-c736-4ad0-a4f7-0a190d6bffba', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/56/22/9f69293c-c736-4ad0-a4f7-0a190d6bffba/output.json'), ('20ac8721-ae80-4ef5-a1a7-bb2826b6de72', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/25/06/20ac8721-ae80-4ef5-a1a7-bb2826b6de72/output.json'), ('456e4a21-cd48-4c29-9c0b-0cfcfb0db3ac', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/53/38/456e4a21-cd48-4c29-9c0b-0cfcfb0db3ac/output.json'), ('7aad352a-759f-4d92-bea4-9ad2f8aaa2d7', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/39/45/7aad352a-759f-4d92-bea4-9ad2f8aaa2d7/output.json'), ('790b6acd-6711-4858-9716-e94b4ee540a3', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/25/06/790b6acd-6711-4858-9716-e94b4ee540a3/output.json'), ('85ea1997-bff7-4d77-8758-6f3f5bb2bf12', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/03/33/85ea1997-bff7-4d77-8758-6f3f5bb2bf12/output.json'), ('e444bdf5-0abc-41ef-836a-152c17460c3e', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/14/15/e444bdf5-0abc-41ef-836a-152c17460c3e/output.json'), ('b82538ed-09f6-46bb-86b9-ae59f550db9d', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/24/34/b82538ed-09f6-46bb-86b9-ae59f550db9d/output.json'), ('8ae62e10-ac90-429a-b416-2bf0d1a23870', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/47/53/8ae62e10-ac90-429a-b416-2bf0d1a23870/output.json'), ('5e6b86c9-28e2-4519-a705-6bda03827a75', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/14/13/5e6b86c9-28e2-4519-a705-6bda03827a75/output.json'), ('f8e9d709-5ff5-4ad6-86b1-ec54171fc413', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/00/26/f8e9d709-5ff5-4ad6-86b1-ec54171fc413/output.json'), ('e764da84-a28c-4d9b-8c5d-5541f5a6bed4', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/03/31/e764da84-a28c-4d9b-8c5d-5541f5a6bed4/output.json'), ('a9583da4-c638-4688-80ea-e01b9ae29500', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/24/16/a9583da4-c638-4688-80ea-e01b9ae29500/output.json'), 
('fa8c243d-a58f-4f41-b3a1-5d09563e132b', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/24/46/fa8c243d-a58f-4f41-b3a1-5d09563e132b/output.json'), ('3bb97556-a9ac-4c67-8e13-3de971b322b6', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/56/10/3bb97556-a9ac-4c67-8e13-3de971b322b6/output.json'), ('474da7c6-a9b3-4458-848f-da792ddf91ce', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/44/39/474da7c6-a9b3-4458-848f-da792ddf91ce/output.json'), ('e0e5a790-ab09-4af1-b88c-f6f2b02e95bc', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/24/44/e0e5a790-ab09-4af1-b88c-f6f2b02e95bc/output.json'), ('54a1ef4d-4ad2-46b2-9376-ac3f595b7764', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/15/55/54a1ef4d-4ad2-46b2-9376-ac3f595b7764/output.json'), ('3d70cbb1-d76b-43cb-be98-aa44f5eb4426', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/33/03/3d70cbb1-d76b-43cb-be98-aa44f5eb4426/output.json'), ('e9fd02ae-b388-4431-be7b-071dbd3e680f', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/24/06/e9fd02ae-b388-4431-be7b-071dbd3e680f/output.json'), ('f097ab2f-05d9-4b6f-bc5d-fc6b6ab0e675', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/55/22/f097ab2f-05d9-4b6f-bc5d-fc6b6ab0e675/output.json'), ('dc99830b-dc99-43bf-b6cc-69fa0a921ada', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/05/01/dc99830b-dc99-43bf-b6cc-69fa0a921ada/output.json'), ('4fee89fd-4ff5-490d-be9d-53f2e9911dc5', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/33/56/4fee89fd-4ff5-490d-be9d-53f2e9911dc5/output.json'), ('620df582-2e6c-452f-a364-d92d3ea755ea', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/06/22/620df582-2e6c-452f-a364-d92d3ea755ea/output.json'), ('5b17d368-4644-4cad-a43f-73b25c3ce6c5', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/27/26/5b17d368-4644-4cad-a43f-73b25c3ce6c5/output.json'), ('aa32948f-f470-455c-8eda-d770fc279bac', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/30/20/aa32948f-f470-455c-8eda-d770fc279bac/output.json'), ('f669cdbf-f1cc-4f46-8cd0-854f09d18d29', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/59/00/f669cdbf-f1cc-4f46-8cd0-854f09d18d29/output.json'), ('6ddd7f18-abe3-4b59-842c-85333098d8be', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/42/15/6ddd7f18-abe3-4b59-842c-85333098d8be/output.json'), ('337ee688-8536-45c1-9f9b-bc44f25fd248', 
's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/37/54/337ee688-8536-45c1-9f9b-bc44f25fd248/output.json'), ('825e6f3f-ae35-46fe-af9d-6438fea93826', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/18/58/825e6f3f-ae35-46fe-af9d-6438fea93826/output.json'), ('86ee3ffe-4317-49c2-9d39-1bdfb04dff2e', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/37/26/86ee3ffe-4317-49c2-9d39-1bdfb04dff2e/output.json'), ('f36d5e1a-b6e5-461d-942d-1a07343ae16e', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/44/51/f36d5e1a-b6e5-461d-942d-1a07343ae16e/output.json'), ('c0a23e61-7f69-4ef2-be23-f90d29bb4649', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/15/29/c0a23e61-7f69-4ef2-be23-f90d29bb4649/output.json'), ('6e445693-0156-489f-8b0d-f36655742ac4', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/39/15/6e445693-0156-489f-8b0d-f36655742ac4/output.json'), ('0c7b68ab-5a9f-4210-8876-9bc9bf0ec24c', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/57/44/0c7b68ab-5a9f-4210-8876-9bc9bf0ec24c/output.json'), ('c50170be-5ec0-42b4-a38c-be23513dd372', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/48/03/c50170be-5ec0-42b4-a38c-be23513dd372/output.json'), ('e8dc76bc-b8ba-4695-8e00-938e6b2bf523', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/25/08/e8dc76bc-b8ba-4695-8e00-938e6b2bf523/output.json'), ('5ef4c493-6f26-45f5-8b55-b8bd2eb14b72', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/32/16/5ef4c493-6f26-45f5-8b55-b8bd2eb14b72/output.json'), ('53a5453c-5cb2-4725-9587-9db4943aff75', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/06/03/53a5453c-5cb2-4725-9587-9db4943aff75/output.json'), ('a2f88761-28b7-4535-8ed8-550d4a4bc366', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/07/43/a2f88761-28b7-4535-8ed8-550d4a4bc366/output.json'), ('02ecd466-e2eb-45ca-95e7-b7a080a0fabc', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/31/43/02ecd466-e2eb-45ca-95e7-b7a080a0fabc/output.json'), ('082fb1ea-41cb-416d-ae8e-5b05d20c37d3', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/39/31/082fb1ea-41cb-416d-ae8e-5b05d20c37d3/output.json'), ('0107c305-be51-4bfd-b30f-6b39f744f974', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/18/22/0107c305-be51-4bfd-b30f-6b39f744f974/output.json'), ('52f24cd0-201a-44bf-bf47-842314e188f3', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/58/58/52f24cd0-201a-44bf-bf47-842314e188f3/output.json'), 
('b5a074e1-ae21-461f-aab1-4f079de2353c', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/42/13/b5a074e1-ae21-461f-aab1-4f079de2353c/output.json'), ('06f8dd54-3602-40ea-8331-52922947ea09', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/48/26/06f8dd54-3602-40ea-8331-52922947ea09/output.json'), ('0c014202-ab06-4693-b760-3aef160a4d04', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/58/22/0c014202-ab06-4693-b760-3aef160a4d04/output.json'), ('b4d604d7-f0ec-4281-a675-0e851072986f', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/23/30/b4d604d7-f0ec-4281-a675-0e851072986f/output.json'), ('66a563e1-4e85-4218-8587-48dc71f9d140', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/46/37/66a563e1-4e85-4218-8587-48dc71f9d140/output.json'), ('36fc733c-65a6-4158-8c87-79f71a3217e6', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/41/01/36fc733c-65a6-4158-8c87-79f71a3217e6/output.json'), ('3a98d371-3647-4cd1-9de6-da596eb6e794', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/13/47/3a98d371-3647-4cd1-9de6-da596eb6e794/output.json'), ('b0bf6128-6c22-478f-a2b3-10b6588660f6', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/51/41/b0bf6128-6c22-478f-a2b3-10b6588660f6/output.json'), ('44e010f6-2a73-4ec7-bff8-17c7dc877aa3', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/00/18/44e010f6-2a73-4ec7-bff8-17c7dc877aa3/output.json'), ('db059f76-a38c-43d6-9878-024d2eafcb28', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/39/49/db059f76-a38c-43d6-9878-024d2eafcb28/output.json'), ('098e6ed5-abb3-492c-a0d3-e11d57e67dca', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/31/06/098e6ed5-abb3-492c-a0d3-e11d57e67dca/output.json'), ('c4a2cb60-cfa6-445d-9a57-6dbd7e16261a', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/22/24/c4a2cb60-cfa6-445d-9a57-6dbd7e16261a/output.json'), ('ebccf8d7-7b77-48e1-96e9-fa18dfaf7bb3', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/31/44/ebccf8d7-7b77-48e1-96e9-fa18dfaf7bb3/output.json'), ('b91c6efe-ddd3-4379-8787-b90c9b40c6d7', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/01/52/b91c6efe-ddd3-4379-8787-b90c9b40c6d7/output.json'), ('6cbaf2ef-8417-414e-ae38-ec90803c71e8', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/03/02/6cbaf2ef-8417-414e-ae38-ec90803c71e8/output.json'), ('efc9d503-4eb5-4cdd-9d04-348695736515', 
's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/47/15/efc9d503-4eb5-4cdd-9d04-348695736515/output.json'), ('79ad5d9e-2288-4e0a-aacf-bea79f4c5b6c', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/08/46/79ad5d9e-2288-4e0a-aacf-bea79f4c5b6c/output.json'), ('4bc57cd2-7137-43df-bb71-d72762dccd7e', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/26/24/4bc57cd2-7137-43df-bb71-d72762dccd7e/output.json'), ('d1f8a830-fca0-4411-a9dd-e02c3422f1d8', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/08/49/d1f8a830-fca0-4411-a9dd-e02c3422f1d8/output.json'), ('566f30c1-04af-4b65-bf07-f70e0b5e0f11', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/02/11/566f30c1-04af-4b65-bf07-f70e0b5e0f11/output.json'), ('28b4087d-c199-48e0-85f6-fefd2e405e0c', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/20/42/28b4087d-c199-48e0-85f6-fefd2e405e0c/output.json'), ('0b272f03-5db2-42e7-8c8d-5ef19405d062', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/01/06/0b272f03-5db2-42e7-8c8d-5ef19405d062/output.json'), ('65c4a520-03a8-4d3a-9b88-59328ab8fb16', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/35/38/65c4a520-03a8-4d3a-9b88-59328ab8fb16/output.json'), ('28c744e3-6266-48e6-ad4c-7e04b1512ba2', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/42/27/28c744e3-6266-48e6-ad4c-7e04b1512ba2/output.json'), ('b2ee6dbc-dc16-45a5-8932-63a93b0ce476', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/08/43/b2ee6dbc-dc16-45a5-8932-63a93b0ce476/output.json'), ('519ba6b8-0c32-4ed7-a376-ba850be1f7f8', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/43/07/519ba6b8-0c32-4ed7-a376-ba850be1f7f8/output.json'), ('43bafbec-236d-438a-ab93-f906ab431c11', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/16/57/43bafbec-236d-438a-ab93-f906ab431c11/output.json'), ('bd5ad95b-e7e9-4c43-a26d-7a3d5e59426b', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/52/28/bd5ad95b-e7e9-4c43-a26d-7a3d5e59426b/output.json'), ('e1ce91ce-15be-4eac-b9a3-e1c06ba43ead', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/41/55/e1ce91ce-15be-4eac-b9a3-e1c06ba43ead/output.json'), ('3d61ec96-29b7-42ca-bf98-1d9f93aa7f54', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/06/45/3d61ec96-29b7-42ca-bf98-1d9f93aa7f54/output.json'), ('f8806d2c-781f-4aac-8d6c-e74b248206de', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/30/04/f8806d2c-781f-4aac-8d6c-e74b248206de/output.json'), 
('4dac44ae-a240-4712-9e90-096ecdd12772', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/57/24/4dac44ae-a240-4712-9e90-096ecdd12772/output.json'), ('e5198170-73f0-43ff-9d6e-450879c05010', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/45/45/e5198170-73f0-43ff-9d6e-450879c05010/output.json'), ('dac7df83-c640-438d-8f3a-aa9d55cc8490', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/35/50/dac7df83-c640-438d-8f3a-aa9d55cc8490/output.json'), ('f70e9818-1637-4e83-a726-55aa82fc5da8', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/22/22/f70e9818-1637-4e83-a726-55aa82fc5da8/output.json'), ('70b796fe-6f83-451d-8b98-9b83c4ec2822', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/06/51/70b796fe-6f83-451d-8b98-9b83c4ec2822/output.json'), ('62542ff8-816e-4f4d-8cf1-16ca9d98a193', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/22/41/62542ff8-816e-4f4d-8cf1-16ca9d98a193/output.json'), ('40e3a804-fe94-4f2a-b7bb-ccac3b4c0ad3', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/19/48/40e3a804-fe94-4f2a-b7bb-ccac3b4c0ad3/output.json'), ('ee3a10bc-20ac-4ad9-bb32-b67f985f6365', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/48/37/ee3a10bc-20ac-4ad9-bb32-b67f985f6365/output.json'), ('f54cc976-5a80-421a-8434-d7559e40db05', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/57/26/f54cc976-5a80-421a-8434-d7559e40db05/output.json'), ('9ba62ff5-bd6a-474d-afe3-ddab18638bfc', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/19/47/9ba62ff5-bd6a-474d-afe3-ddab18638bfc/output.json'), ('162f4e76-2fe4-4665-9069-391ca43b0171', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/52/36/162f4e76-2fe4-4665-9069-391ca43b0171/output.json'), ('ba6d2859-e594-43a0-8358-cc8c5e431840', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/18/46/ba6d2859-e594-43a0-8358-cc8c5e431840/output.json'), ('95b1c474-0883-4772-9cc7-5f60a4803c99', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/34/27/95b1c474-0883-4772-9cc7-5f60a4803c99/output.json'), ('11ead62f-7548-4ae4-b6c2-9c7aee89c75d', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/06/26/11ead62f-7548-4ae4-b6c2-9c7aee89c75d/output.json'), ('dafbb808-10c3-4758-8b83-3e11dfbb4d7b', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/37/49/dafbb808-10c3-4758-8b83-3e11dfbb4d7b/output.json'), ('5034e9f3-28a5-44d9-a07a-ca75ec25e908', 
's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/20/43/5034e9f3-28a5-44d9-a07a-ca75ec25e908/output.json'), ('b925d4a4-548a-4d27-a08d-42338c7ca1ac', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/35/00/b925d4a4-548a-4d27-a08d-42338c7ca1ac/output.json'), ('f1c4a494-76cc-4086-a617-d97dfc17a306', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/58/48/f1c4a494-76cc-4086-a617-d97dfc17a306/output.json'), ('67864eef-f085-4bcc-8033-0677dafdb61c', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/27/30/67864eef-f085-4bcc-8033-0677dafdb61c/output.json'), ('ca5da798-3474-4d54-93e1-dea22b1c86ba', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/00/52/ca5da798-3474-4d54-93e1-dea22b1c86ba/output.json'), ('9489f32e-7cbd-43ac-b407-0fe48c017c28', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/02/57/02/9489f32e-7cbd-43ac-b407-0fe48c017c28/output.json'), ('b18489f1-282a-4890-bec0-2fe6245422fa', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/20/28/b18489f1-282a-4890-bec0-2fe6245422fa/output.json'), ('725e49c7-b7b4-464a-86e2-5cd8dbcd56eb', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/27/24/725e49c7-b7b4-464a-86e2-5cd8dbcd56eb/output.json'), ('2790ec02-61d4-4a75-8cf7-6ebd66010515', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/32/36/2790ec02-61d4-4a75-8cf7-6ebd66010515/output.json'), ('261e9cdd-d1ae-487e-b8e2-1d7f2fe2cad9', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/29/26/261e9cdd-d1ae-487e-b8e2-1d7f2fe2cad9/output.json'), ('ed1b6d8c-98ff-42f7-9223-85141e8500f5', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/14/15/ed1b6d8c-98ff-42f7-9223-85141e8500f5/output.json'), ('2b15c6c1-e1d1-4a94-9abf-f771dccf9bb1', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/30/52/2b15c6c1-e1d1-4a94-9abf-f771dccf9bb1/output.json'), ('716de3c7-04eb-48ce-940f-fbd37a017152', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/12/37/716de3c7-04eb-48ce-940f-fbd37a017152/output.json'), ('4d81ccba-9c12-412a-8734-0e8010e975a9', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/04/40/03/4d81ccba-9c12-412a-8734-0e8010e975a9/output.json'), ('19fa0976-d2cf-4a35-bcbb-18dc40b2febe', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/39/05/19fa0976-d2cf-4a35-bcbb-18dc40b2febe/output.json'), ('fbdd336f-6bff-4790-8391-6859098d96c6', 's3://sagemaker-us-west-2-355444812467/a2i-results/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03/2021/07/17/03/58/46/fbdd336f-6bff-4790-8391-6859098d96c6/output.json')]
###Markdown
View Task Results Once work is completed, Amazon A2I stores the results in your S3 bucket and sends a CloudWatch event. The results are available under the S3 OUTPUT_PATH when all work is completed. Note that the human answer (the label chosen by the reviewer) is returned and saved in the output JSON file.
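For reference, a minimal sketch of how a list like `completed_human_loops` can be built by polling the A2I runtime for completion (the `human_loops_started` name is an assumption; use whatever list of human loop names was recorded when the loops were activated):

```
import boto3

a2i_runtime = boto3.client('sagemaker-a2i-runtime')

completed_human_loops = []
for human_loop_name in human_loops_started:  # names recorded when the loops were started (assumption)
    resp = a2i_runtime.describe_human_loop(HumanLoopName=human_loop_name)
    if resp['HumanLoopStatus'] == 'Completed':
        # keep the loop name together with the S3 URI of its output.json
        completed_human_loops.append((human_loop_name, resp['HumanLoopOutput']['OutputS3Uri']))
```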
###Code
import json
import re
import pprint

pp = pprint.PrettyPrinter(indent=4)

# Fetch and pretty-print the human review result for each completed human loop
for name, s3_output_path in completed_human_loops:
    # Strip the "s3://<bucket>/" prefix to get the object key
    splitted_string = re.split('s3://' + BUCKET + '/', s3_output_path)
    output_bucket_key = splitted_string[1]
    response = s3.get_object(Bucket=BUCKET, Key=output_bucket_key)
    content = response["Body"].read()
    json_output = json.loads(content)
    pp.pprint(json_output)
    print('\n')
###Output
{ 'flowDefinitionArn': 'arn:aws:sagemaker:us-west-2:355444812467:flow-definition/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03',
'humanAnswers': [ { 'acceptanceTime': '2021-07-17T04:22:06.174Z',
'answerContent': { 'sentiment': { 'label': 'COSmoke'}},
'submissionTime': '2021-07-17T04:22:14.679Z',
'timeSpentInSeconds': 8.505,
'workerId': 'b4f9ba1756e931d5',
'workerMetadata': { 'identityData': { 'identityProviderType': 'Cognito',
'issuer': 'https://cognito-idp.us-west-2.amazonaws.com/us-west-2_9iI2T8Z22',
'sub': 'c4224d8d-9cb8-4a80-8267-daa3ec89dc27'}}}],
'humanLoopName': 'bc0db63b-7082-4e58-8930-2863c5880d78',
'inputContent': { 'initialValue': 0.9999856948852539,
'taskObject': 's3://sagemaker-us-west-2-355444812467/a2i-demo/bc0db63b-7082-4e58-8930-2863c5880d78.wav'}}
{ 'flowDefinitionArn': 'arn:aws:sagemaker:us-west-2:355444812467:flow-definition/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03',
'humanAnswers': [ { 'acceptanceTime': '2021-07-17T04:22:35.245Z',
'answerContent': {'sentiment': {'label': 'Bird'}},
'submissionTime': '2021-07-17T04:22:51.528Z',
'timeSpentInSeconds': 16.283,
'workerId': 'b4f9ba1756e931d5',
'workerMetadata': { 'identityData': { 'identityProviderType': 'Cognito',
'issuer': 'https://cognito-idp.us-west-2.amazonaws.com/us-west-2_9iI2T8Z22',
'sub': 'c4224d8d-9cb8-4a80-8267-daa3ec89dc27'}}}],
'humanLoopName': '2ab79815-1ce9-4f04-b244-ad50a1157371',
'inputContent': { 'initialValue': 0.6937770843505859,
'taskObject': 's3://sagemaker-us-west-2-355444812467/a2i-demo/2ab79815-1ce9-4f04-b244-ad50a1157371.wav'}}
{ 'flowDefinitionArn': 'arn:aws:sagemaker:us-west-2:355444812467:flow-definition/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03',
'humanAnswers': [ { 'acceptanceTime': '2021-07-17T04:23:28.537Z',
'answerContent': { 'sentiment': { 'label': 'Howling'}},
'submissionTime': '2021-07-17T04:23:34.212Z',
'timeSpentInSeconds': 5.675,
'workerId': 'b4f9ba1756e931d5',
'workerMetadata': { 'identityData': { 'identityProviderType': 'Cognito',
'issuer': 'https://cognito-idp.us-west-2.amazonaws.com/us-west-2_9iI2T8Z22',
'sub': 'c4224d8d-9cb8-4a80-8267-daa3ec89dc27'}}}],
'humanLoopName': '8c8771be-2c3d-4512-9ecc-dc1d7d6fa731',
'inputContent': { 'initialValue': 0.9999551773071289,
'taskObject': 's3://sagemaker-us-west-2-355444812467/a2i-demo/8c8771be-2c3d-4512-9ecc-dc1d7d6fa731.wav'}}
{ 'flowDefinitionArn': 'arn:aws:sagemaker:us-west-2:355444812467:flow-definition/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03',
'humanAnswers': [ { 'acceptanceTime': '2021-07-17T04:24:25.021Z',
'answerContent': { 'sentiment': { 'label': 'Barking'}},
'submissionTime': '2021-07-17T04:24:34.241Z',
'timeSpentInSeconds': 9.22,
'workerId': 'b4f9ba1756e931d5',
'workerMetadata': { 'identityData': { 'identityProviderType': 'Cognito',
'issuer': 'https://cognito-idp.us-west-2.amazonaws.com/us-west-2_9iI2T8Z22',
'sub': 'c4224d8d-9cb8-4a80-8267-daa3ec89dc27'}}}],
'humanLoopName': '6b208faf-24ac-4907-86d6-8422b0404d1a',
'inputContent': { 'initialValue': 0.734894871711731,
'taskObject': 's3://sagemaker-us-west-2-355444812467/a2i-demo/6b208faf-24ac-4907-86d6-8422b0404d1a.wav'}}
{ 'flowDefinitionArn': 'arn:aws:sagemaker:us-west-2:355444812467:flow-definition/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03',
'humanAnswers': [ { 'acceptanceTime': '2021-07-17T04:25:22.536Z',
'answerContent': { 'sentiment': { 'label': 'Laugh_Shout_Scream'}},
'submissionTime': '2021-07-17T04:25:26.936Z',
'timeSpentInSeconds': 4.4,
'workerId': 'b4f9ba1756e931d5',
'workerMetadata': { 'identityData': { 'identityProviderType': 'Cognito',
'issuer': 'https://cognito-idp.us-west-2.amazonaws.com/us-west-2_9iI2T8Z22',
'sub': 'c4224d8d-9cb8-4a80-8267-daa3ec89dc27'}}}],
'humanLoopName': '1e5d8a14-e3ab-42cf-95e6-418795df996a',
'inputContent': { 'initialValue': 0.9936985373497009,
'taskObject': 's3://sagemaker-us-west-2-355444812467/a2i-demo/1e5d8a14-e3ab-42cf-95e6-418795df996a.wav'}}
{ 'flowDefinitionArn': 'arn:aws:sagemaker:us-west-2:355444812467:flow-definition/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03',
'humanAnswers': [ { 'acceptanceTime': '2021-07-17T04:25:27.018Z',
'answerContent': { 'sentiment': { 'label': 'Laugh_Shout_Scream'}},
'submissionTime': '2021-07-17T04:25:32.679Z',
'timeSpentInSeconds': 5.661,
'workerId': 'b4f9ba1756e931d5',
'workerMetadata': { 'identityData': { 'identityProviderType': 'Cognito',
'issuer': 'https://cognito-idp.us-west-2.amazonaws.com/us-west-2_9iI2T8Z22',
'sub': 'c4224d8d-9cb8-4a80-8267-daa3ec89dc27'}}}],
'humanLoopName': 'a4f724e7-65f1-4381-9736-84a00966d6e2',
'inputContent': { 'initialValue': 0.9961455464363098,
'taskObject': 's3://sagemaker-us-west-2-355444812467/a2i-demo/a4f724e7-65f1-4381-9736-84a00966d6e2.wav'}}
{ 'flowDefinitionArn': 'arn:aws:sagemaker:us-west-2:355444812467:flow-definition/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03',
'humanAnswers': [ { 'acceptanceTime': '2021-07-17T04:26:18.067Z',
'answerContent': { 'sentiment': { 'label': 'COSmoke'}},
'submissionTime': '2021-07-17T04:26:24.556Z',
'timeSpentInSeconds': 6.489,
'workerId': 'b4f9ba1756e931d5',
'workerMetadata': { 'identityData': { 'identityProviderType': 'Cognito',
'issuer': 'https://cognito-idp.us-west-2.amazonaws.com/us-west-2_9iI2T8Z22',
'sub': 'c4224d8d-9cb8-4a80-8267-daa3ec89dc27'}}}],
'humanLoopName': '5a1ded75-49de-40f6-be19-253ce1c8fee2',
'inputContent': { 'initialValue': 0.9999578595161438,
'taskObject': 's3://sagemaker-us-west-2-355444812467/a2i-demo/5a1ded75-49de-40f6-be19-253ce1c8fee2.wav'}}
{ 'flowDefinitionArn': 'arn:aws:sagemaker:us-west-2:355444812467:flow-definition/fd-sagemaker-audio-classification-demo-2021-07-17-02-03-03',
'humanAnswers': [ { 'acceptanceTime': '2021-07-17T04:26:54.194Z',
'answerContent': { 'sentiment': { 'label': 'Laugh_Shout_Scream'}},
'submissionTime': '2021-07-17T04:26:58.567Z',
'timeSpentInSeconds': 4.373,
'workerId': 'b4f9ba1756e931d5',
'workerMetadata': { 'identityData': { 'identityProviderType': 'Cognito',
'issuer': 'https://cognito-idp.us-west-2.amazonaws.com/us-west-2_9iI2T8Z22',
'sub': 'c4224d8d-9cb8-4a80-8267-daa3ec89dc27'}}}],
'humanLoopName': 'e27e38c8-07ea-4cc2-9205-3748b6af412e',
'inputContent': { 'initialValue': 0.9998497366905212,
'taskObject': 's3://sagemaker-us-west-2-355444812467/a2i-demo/e27e38c8-07ea-4cc2-9205-3748b6af412e.wav'}}
###Markdown
Incremental training with SageMaker

We have used the model to generate predictions on some random out-of-sample audio clips and got unsatisfactory predictions (low probability), and we demonstrated how to use Amazon Augmented AI to review and label those clips based on custom criteria. The next step in a typical machine learning life cycle is to include the cases the model has trouble with in the next batch of training data, so that the model can learn from them and improve. In machine learning this is called [incremental training](https://docs.aws.amazon.com/sagemaker/latest/dg/incremental-training.html).

We can now take the results of the A2I tasks and format them into the layout of our training data:
* the metadata in CSV format
```
Filename,Label,Remark
train_00021,1,Howling
```
* and the associated audio files on S3
###Code
# Map each class name to its numeric label id
object_categories_dict = {j: i for i, j in enumerate(object_categories)}

def convert_a2i_to_augmented_manifest(a2i_output):
    """Turn one A2I output JSON into a 'Filename,Label,Remark' CSV row plus the clip's S3 path."""
    label = a2i_output['humanAnswers'][0]['answerContent']['sentiment']['label']
    s3_path = a2i_output['inputContent']['taskObject']
    filename = s3_path.split('/')[-1][:-4]  # file name without the '.wav' extension
    label_id = str(object_categories_dict[label])
    return '{},{},{}'.format(filename, label_id, label), s3_path

object_categories_dict
###Output
_____no_output_____
###Markdown
This function takes an A2I output JSON and produces a metadata row in the same `Filename,Label,Remark` format used by our training data. To build a cohort of training samples from all the audio clips re-labeled by human reviewers in the A2I console, you can loop through all the A2I outputs, convert each JSON file, and concatenate the rows into a single metadata CSV file, with each line representing one audio clip.
###Code
s3_paths = []
with open('augmented.manifest', 'w') as outfile:
    outfile.write("Filename,Label,Remark\n")
    # convert each human loop output JSON into one CSV metadata row
    for name, s3_output_path in completed_human_loops:
        splitted_string = re.split('s3://' + BUCKET + '/', s3_output_path)
        output_bucket_key = splitted_string[1]
        response = s3.get_object(Bucket=BUCKET, Key=output_bucket_key)
        content = response["Body"].read()
        json_output = json.loads(content)
        print(json_output)
        # convert using the helper defined above
        augmented_manifest, s3_path = convert_a2i_to_augmented_manifest(json_output)
        s3_paths.append(s3_path)
        outfile.write(augmented_manifest)
        outfile.write('\n')
# take a look at the resulting CSV metadata file
!head -n1000 augmented.manifest
# upload the metadata file and the associated audio files to S3
import time

ts = time.time()
train_path = f"{TRAIN_PATH}/{ts}/competition"
print(train_path)
print(s3_paths)
!aws s3 cp augmented.manifest {train_path}/meta_train.csv
for s3_path in s3_paths:
    filename = s3_path.split('/')[-1]
    !aws s3 cp {s3_path} {train_path}/train/{filename}
###Output
upload: ./augmented.manifest to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/meta_train.csv
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/bc0db63b-7082-4e58-8930-2863c5880d78.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/bc0db63b-7082-4e58-8930-2863c5880d78.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/2ab79815-1ce9-4f04-b244-ad50a1157371.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/2ab79815-1ce9-4f04-b244-ad50a1157371.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/8c8771be-2c3d-4512-9ecc-dc1d7d6fa731.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/8c8771be-2c3d-4512-9ecc-dc1d7d6fa731.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/6b208faf-24ac-4907-86d6-8422b0404d1a.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/6b208faf-24ac-4907-86d6-8422b0404d1a.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/1e5d8a14-e3ab-42cf-95e6-418795df996a.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/1e5d8a14-e3ab-42cf-95e6-418795df996a.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/a4f724e7-65f1-4381-9736-84a00966d6e2.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/a4f724e7-65f1-4381-9736-84a00966d6e2.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/5a1ded75-49de-40f6-be19-253ce1c8fee2.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/5a1ded75-49de-40f6-be19-253ce1c8fee2.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/e27e38c8-07ea-4cc2-9205-3748b6af412e.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/e27e38c8-07ea-4cc2-9205-3748b6af412e.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/2b0ac6b4-13c5-41fd-a91c-7f4e18d0e0d2.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/2b0ac6b4-13c5-41fd-a91c-7f4e18d0e0d2.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/9c056523-3269-4072-a618-548a6c414abc.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/9c056523-3269-4072-a618-548a6c414abc.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/c30fca79-cffc-4448-9af5-d6973ed22f4f.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/c30fca79-cffc-4448-9af5-d6973ed22f4f.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/0d62dcd9-ad48-4636-b2d2-e86fdedeef1d.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/0d62dcd9-ad48-4636-b2d2-e86fdedeef1d.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/1b75103c-9bf2-47ad-8527-259b711f8c3d.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/1b75103c-9bf2-47ad-8527-259b711f8c3d.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/162ab659-c98d-4feb-ad51-40d9b09484e1.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/162ab659-c98d-4feb-ad51-40d9b09484e1.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/2f6a5f48-838d-4636-bbcc-58e7e0505e14.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/2f6a5f48-838d-4636-bbcc-58e7e0505e14.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/5117cde5-591b-4715-8a43-603159228ac6.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/5117cde5-591b-4715-8a43-603159228ac6.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/efee8a1d-5f93-4cc5-a751-befe6a18fff6.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/efee8a1d-5f93-4cc5-a751-befe6a18fff6.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/354e0e74-8097-43fd-a3ba-74b2a10afdbc.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/354e0e74-8097-43fd-a3ba-74b2a10afdbc.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/368337da-8af2-44e2-9c53-7724b2eac0ab.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/368337da-8af2-44e2-9c53-7724b2eac0ab.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/6201f7d5-5320-4b2c-b331-d519b76d6647.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/6201f7d5-5320-4b2c-b331-d519b76d6647.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/42f175b0-e3c2-4038-9511-0764c41450d5.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/42f175b0-e3c2-4038-9511-0764c41450d5.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/a5ee8cb3-3b48-44b8-90a7-497b48f79fca.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/a5ee8cb3-3b48-44b8-90a7-497b48f79fca.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/fefafd37-7749-45f1-a9d7-0e4d0fd7daea.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/fefafd37-7749-45f1-a9d7-0e4d0fd7daea.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/81f9571a-584d-4eef-bd06-af383eee969c.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/81f9571a-584d-4eef-bd06-af383eee969c.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/e99e2f19-8c97-4653-9290-9706104e02f2.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/e99e2f19-8c97-4653-9290-9706104e02f2.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/99f07cfb-1429-4562-9794-62e0fa6b21f2.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/99f07cfb-1429-4562-9794-62e0fa6b21f2.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/b4d0046f-326b-4216-81f3-7d41c1bb6a65.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/b4d0046f-326b-4216-81f3-7d41c1bb6a65.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/5ef07468-fda8-4097-a747-7cec7b68a476.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/5ef07468-fda8-4097-a747-7cec7b68a476.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/22861c83-c3dc-41f6-9cb9-efc3c9475b5a.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/22861c83-c3dc-41f6-9cb9-efc3c9475b5a.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/edcaf64c-ee8e-4410-b784-83f2dc5b2ee1.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/edcaf64c-ee8e-4410-b784-83f2dc5b2ee1.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/d9f733bd-0463-455f-a0b6-7ac344428e2b.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/d9f733bd-0463-455f-a0b6-7ac344428e2b.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/6a95d54f-4886-4587-b230-3e0375b07a1a.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/6a95d54f-4886-4587-b230-3e0375b07a1a.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/c71e7351-1beb-4fee-bd15-157f381f006d.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/c71e7351-1beb-4fee-bd15-157f381f006d.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/3114dac3-bb5d-4489-ac51-c6e29547dc21.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/3114dac3-bb5d-4489-ac51-c6e29547dc21.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/10021d62-9590-4ff7-8188-a73bb0b6daff.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/10021d62-9590-4ff7-8188-a73bb0b6daff.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/6c4efdfe-caca-4c92-86f2-0da397533690.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/6c4efdfe-caca-4c92-86f2-0da397533690.wav
copy: s3://sagemaker-us-west-2-355444812467/a2i-demo/d3a9bc9a-ee32-4c62-9f51-efc031cdc74f.wav to s3://sagemaker-us-west-2-355444812467/tomofun/1626497155.52871/competition/train/d3a9bc9a-ee32-4c62-9f51-efc031cdc74f.wav
###Markdown
---- Run the notebook only up to this point -----

Similar to training with a Ground Truth augmented manifest output, as outlined in this [blog](https://aws.amazon.com/blogs/machine-learning/easily-train-models-using-datasets-labeled-by-amazon-sagemaker-ground-truth/), once we have collected enough data points we can construct a new `Estimator` for incremental training. For incremental training, the choice of hyperparameters becomes critical. Since we are continuing the learning and optimization from the last model, an appropriate starting `learning_rate`, for example, would again need to be determined. As a rule of thumb, even with the introduction of new, unseen data, we should start incremental training with a smaller `learning_rate` and a different learning rate schedule (`lr_scheduler_factor` and `lr_scheduler_step`) than those of the previous training job, because the optimization had already reached a more stable state at a reduced learning rate. In the first epoch of incremental training we should see performance on the original validation dataset similar to the previous job. Here we use the same hyperparameters as in the original [training notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/object_detection_pascalvoc_coco/object_detection_recordio_format.ipynb), with the following exceptions:
- a smaller learning rate (`learning_rate` was 0.001, now 0.0001)
- using the weights from our trained model instead of the pre-trained weights that come with the algorithm (`use_pretrained_model=0`)

Note that the following working code snippet is meant to demonstrate how to set up the A2I output for incremental training in SageMaker. Incremental training with merely one or two new samples and untuned hyperparameters would not yield a meaningful model, and may even suffer from [catastrophic forgetting](https://en.wikipedia.org/wiki/Catastrophic_interference).

*The next cell takes about 5 minutes.*
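For illustration, a hedged sketch of what such adjusted hyperparameters could look like for a built-in algorithm (these names and values follow the rule of thumb and the referenced example; they are not the settings used by the custom audio container trained in the next cell):

```
# Illustration only -- hyperparameters one might pass for incremental training
# of a SageMaker built-in algorithm, following the discussion above.
incremental_hyperparameters = {
    "learning_rate": 0.0001,       # reduced from the original 0.001
    "lr_scheduler_step": "10",     # epochs at which to decay the learning rate (example value)
    "lr_scheduler_factor": 0.1,
    "use_pretrained_model": 0,     # continue from our own trained weights
}
```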
###Code
%store -r model_s3_path
# path definition
s3_train_data = train_path
# Reusing the training data for validation here for demonstration purposes
# but in practice you should provide a set of data that you want to validate the training against
s3_validation_data = train_path
s3_output_location = f'{OUTPUT_PATH}/incremental-training'
# num_training_samples = len(output)
num_training_samples = 3
# Create an Estimator for incremental training; we use "File" input mode to pass the CSV metadata and audio files.
new_od_model = sagemaker.estimator.Estimator(image_uri, # same training image that we used for model hosting
role,
instance_count=1,
instance_type='ml.p3.2xlarge',
volume_size = 50,
max_run = 360000,
input_mode = 'File',
output_path=s3_output_location,
sagemaker_session=sess)
# same set of hyperparameters from the original training job
new_od_model.set_hyperparameters(batch_size = 1)
# setting the input data
train_data = sagemaker.inputs.TrainingInput(s3_train_data)
validation_data = sagemaker.inputs.TrainingInput(s3_validation_data)
# Use the output model from the original training job.
model_data = sagemaker.inputs.TrainingInput(model_s3_path)
data_channels = {'competition': train_data,
'model': model_data}
new_od_model.fit(inputs=data_channels, logs=True, wait=False)
###Output
_____no_output_____
###Markdown
After training, you will get a new model in `s3_output_location`. You can deploy it to a new endpoint, or modify an existing endpoint without taking models that are already deployed into production out of service. For example, you can add new model variants, update the ML compute instance configurations of existing model variants, or change the distribution of traffic among model variants. To modify an endpoint, you provide a new endpoint configuration; Amazon SageMaker implements the changes without any downtime. For more information, see [UpdateEndpoint](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UpdateEndpoint.html) and [UpdateEndpointWeightsAndCapacities](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UpdateEndpointWeightsAndCapacities.html).
###Code
new_od_model.model_data
incremented_model = sagemaker.model.Model(image_uri,
model_data = new_od_model.model_data,
role = role,
predictor_cls = sagemaker.predictor.Predictor,
sagemaker_session = sess)
new_detector = sagemaker.predictor.Predictor(endpoint_name = endpoint_name)
new_detector.update_endpoint(model_name=incremented_model.name, initial_instance_count = 1,
instance_type = 'ml.p2.xlarge', wait=False)
###Output
_____no_output_____
###Markdown
Create a Lambda function to pass samples with low confidence to A2I
###Code
%%bash -s "$BUCKET"
cd invoke_endpoint_a2i
zip -r invoke_endpoint_a2i.zip .
aws s3 cp invoke_endpoint_a2i.zip s3://$1/lambda/
%store -r lambda_role_arn
lambda_role_arn
import os
cwd = os.getcwd()
!aws lambda create-function --function-name invoke_endpoint_a2i --zip-file fileb://$cwd/invoke_endpoint_a2i/invoke_endpoint_a2i.zip --handler lambda_function.lambda_handler --runtime python3.7 --role $lambda_role_arn
###Output
An error occurred (ResourceConflictException) when calling the CreateFunction operation: Function already exist: invoke_endpoint_a2i
###Markdown
Configure the Lambda function - invoke_endpoint_a2i * you can also do it from the command line, e.g. - ```aws lambda update-function-configuration --function-name invoke_endpoint_a2i \ --environment "Variables={BUCKET=my-bucket,KEY=file.txt}"``` 
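If you prefer to stay in Python, the same environment variables can also be set with boto3. This is only an equivalent sketch of what the CLI call in the next cell does, reusing the `flowDefinitionArn`, `BUCKET` and `endpoint_name` variables defined in this notebook.

```python
import boto3

# Equivalent to `aws lambda update-function-configuration`; values reuse
# variables already defined in this notebook.
lambda_client = boto3.client("lambda")
lambda_client.update_function_configuration(
    FunctionName="invoke_endpoint_a2i",
    Environment={"Variables": {
        "A2IFLOW_DEF": flowDefinitionArn,
        "BUCKET": BUCKET,
        "ENDPOINT_NAME": endpoint_name,
        "KEY": "a2i-demo",
    }},
)
```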
###Code
bucket_key = "a2i-demo"
variables = f"A2IFLOW_DEF={flowDefinitionArn},BUCKET={BUCKET},ENDPOINT_NAME={endpoint_name},KEY={bucket_key}"
env = "Variables={"+variables+"}"
!aws lambda update-function-configuration --function-name invoke_endpoint_a2i --environment "$env"
!aws lambda add-permission \
--function-name invoke_endpoint_a2i \
--action lambda:InvokeFunction \
--statement-id apigateway \
--principal apigateway.amazonaws.com
###Output
{
"Statement": "{\"Sid\":\"apigateway\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"apigateway.amazonaws.com\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:us-west-2:355444812467:function:invoke_endpoint_a2i\"}"
}
###Markdown
Integrate the Lambda with API Gateway * refer to the previous notebook. Advanced material - use a SageMaker pipeline to manage the training / deployment process
###Code
from sagemaker.workflow.parameters import (
ParameterInteger,
ParameterString,
)
train_data = ParameterString(
name="TrainData",
default_value=s3_train_data,
)
validation_data = ParameterString(
name="ValidationData",
default_value=s3_validation_data,
)
model_data = ParameterString(
name="ModelData",
default_value=model_s3_path,
)
model_approval_status = ParameterString(
name="ModelApprovalStatus",
default_value="Approved"
)
from sagemaker.workflow.steps import TrainingStep
step_train = TrainingStep(
name="AudioClassificationTraining",
estimator=new_od_model,
inputs={
"competition": sagemaker.inputs.TrainingInput(train_data,
distribution='FullyReplicated'),
"validation":sagemaker.inputs.TrainingInput(validation_data,
distribution='FullyReplicated'),
"model":sagemaker.inputs.TrainingInput(model_data,
distribution='FullyReplicated')
},
)
import time
from sagemaker.workflow.step_collections import CreateModelStep
model_name='audio-vgg16-'+str(int(time.time()))
model = sagemaker.model.Model(
name=model_name,
image_uri=step_train.properties.AlgorithmSpecification.TrainingImage,
model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
sagemaker_session=sess,
role=role
)
inputs = sagemaker.inputs.CreateModelInput(
instance_type="ml.m4.xlarge"
)
create_model_step = CreateModelStep(
name="ModelPreDeployment",
model=model,
inputs=inputs
)
from sagemaker.workflow.step_collections import RegisterModel
model_package_group_name = f"AudioClassificationGroupModel"
step_register = RegisterModel(
name="AudioClassificationModel",
estimator=new_od_model,
model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
content_types=["application/octet-stream"],
response_types=["application/json"],
inference_instances=["ml.t2.medium", "ml.m5.xlarge"],
transform_instances=["ml.m5.xlarge"],
model_package_group_name=model_package_group_name,
approval_status=model_approval_status,
)
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.workflow.steps import ProcessingStep
deploy_model_processor = SKLearnProcessor(
framework_version='0.23-1',
role=role,
instance_type="ml.m5.large",
instance_count=1,
sagemaker_session=sess)
deploy_step = ProcessingStep(
name='DeployModel',
processor=deploy_model_processor,
job_arguments=[
"--model-name", create_model_step.properties.ModelName,
"--endpoint-name", endpoint_name,
"--region", region],
code="./deploy_model.py")
endpoint_name
pipeline_name="AudioClassification"
from sagemaker.workflow.pipeline import Pipeline
pipeline = Pipeline(
name=pipeline_name,
parameters=[
train_data, validation_data, model_data, model_approval_status
],
steps=[ step_train, step_register, create_model_step, deploy_step],
)
json.loads(pipeline.definition())
pipeline.upsert(role_arn=role)
execution = pipeline.start()
###Output
_____no_output_____
###Markdown
More on incremental training: It is recommended to perform a search over the hyperparameter space for your incremental training with [hyperparameter tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html) to find an optimal set of hyperparameters, especially the ones related to the learning rate: `learning_rate`, `lr_scheduler_factor` and `lr_scheduler_step` from the SageMaker object detection algorithm. We have an [example](https://github.com/aws/amazon-sagemaker-examples/blob/master/hyperparameter_tuning/image_classification_early_stopping/hpo_image_classification_early_stopping.ipynb) of running a hyperparameter tuning job using the Amazon SageMaker Automatic Model Tuning feature. Please try it out! The End, but....! This is the end of the example. Remember to execute the next cell to delete the endpoint, otherwise it will continue to incur charges.
###Code
%store flowDefinitionArn
%store endpoint_name
%store model_package_group_name
%store pipeline_name
%store role
%store lambda_role_arn
# new_detector.delete_endpoint()  # uncomment to delete the endpoint and stop incurring charges
###Output
_____no_output_____
###Markdown
Amazon Augmented AI (Amazon A2I) integration with Amazon SageMaker Hosted Endpoint for Audio Classification and Model Retraining Architecture 5. A2I Setup a. [Introduction](Introduction) b. [Setup](Setup) c. [Create Control Plane Resources](Create-Control-Plane-Resources) 6. Setup workforce and Labeling Manually a. [Starting Human Loops](Starting-Human-Loops) b. [Configure a2i status change to SQS](sqs_a2i) c. [Wait For Workers to Complete Task](Wait-For-Workers-to-Complete-Task) d. [Check Status of Human Loop](Check-Status-of-Human-Loop) e. [View Task Results](View-Task-Results) 7. Retrain and Redeploy [Incremental training with SageMaker](Incremental-training-with-SageMaker) 8. Configure Lambda and API Gateway [Create Lambda Function triggering a2i process](lambda) Introduction: Amazon Augmented AI (Amazon A2I) makes it easy to build the workflows required for human review of ML predictions. Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers. You can create your own workflows for ML models built on Amazon SageMaker or any other tools. Using Amazon A2I, you can allow human reviewers to step in when a model is unable to make a high-confidence prediction or to audit its predictions on an ongoing basis. Learn more here: https://aws.amazon.com/augmented-ai/ In this tutorial, we will show how you can use **Amazon A2I with an Amazon SageMaker Hosted Endpoint.** We will be using an existing audio classification model in this notebook. We will also demonstrate how to manipulate the A2I output to perform incremental training to improve the model accuracy with the newly labeled data from A2I. For more in-depth instructions, visit https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-getting-started.html To incorporate Amazon A2I into your human review workflows, you need three resources:* A **worker task template** to create a worker UI. The worker UI displays your input data, such as documents or images, and instructions to workers. It also provides interactive tools that the worker uses to complete your tasks. For more information, see https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-instructions-overview.html * A **human review workflow**, also referred to as a flow definition. You use the flow definition to configure your human workforce and provide information about how to accomplish the human review task. You can create a flow definition in the Amazon Augmented AI console or with the Amazon A2I APIs. To learn more about both of these options, see https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-create-flow-definition.html * A **human loop** to start your human review workflow. When you use one of the built-in task types, the corresponding AWS service creates and starts a human loop on your behalf when the conditions specified in your flow definition are met, or for each object if no conditions were specified. When a human loop is triggered, human review tasks are sent to the workers as specified in the flow definition. When using a custom task type, as this tutorial will show, you start a human loop using the Amazon Augmented AI Runtime API. When you call `start_human_loop()` in your custom application, a task is sent to human reviewers. Setup: This notebook is developed and tested in a SageMaker Notebook Instance on an `ml.t2.medium` instance with SageMaker Python SDK v2. It is recommended to execute the notebook in the same environment for the best experience. 
Install Latest SDK
###Code
!pip install -U sagemaker==2.23.1
import sagemaker
from pkg_resources import parse_version
assert parse_version(sagemaker.__version__) >= parse_version('2'), \
'''This notebook is only compatible with sagemaker python SDK >= 2.
Current version is %s. Please make sure you upgrade the library.''' % sagemaker.__version__
print('SageMaker python SDK version: %s' % sagemaker.__version__)
###Output
_____no_output_____
###Markdown
We need to set up the following data:* `region` - Region to call A2I.* `BUCKET` - An S3 bucket accessible by the given role * Used to store the sample audio files & output results * Must be within the same region A2I is called from* `role` - The IAM role used as part of StartHumanLoop. By default, this notebook will use the execution role* `workteam` - Group of people to send the work to
###Code
import boto3
my_session = boto3.session.Session()
region = my_session.region_name
%store -r endpoint_name
###Output
_____no_output_____
###Markdown
Role and Permissions: The AWS IAM Role used to execute the notebook needs to have the following permissions:* SagemakerFullAccess* AmazonSageMakerMechanicalTurkAccess (if using MechanicalTurk as your Workforce)
###Code
from sagemaker import get_execution_role
import sagemaker
# Setting Role to the default SageMaker Execution Role
role = get_execution_role()
display(role)
import os
import boto3
import botocore
sess = sagemaker.Session()
BUCKET = sess.default_bucket()
TRAIN_PATH = f's3://{BUCKET}/tomofun'
OUTPUT_PATH = f's3://{BUCKET}/a2i-results'
###Output
_____no_output_____
###Markdown
Setup Bucket and Paths. **Important**: The bucket you specify for `BUCKET` must have CORS enabled. You can enable CORS by adding a policy similar to the following to your Amazon S3 bucket. To learn how to add CORS to an S3 bucket, see [CORS Permission Requirement](https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-permissions-security.html#a2i-cors-update) in the Amazon A2I documentation. ```[{ "AllowedHeaders": [], "AllowedMethods": ["GET"], "AllowedOrigins": ["*"], "ExposeHeaders": []}]``` If you do not add a CORS configuration to the S3 bucket that contains your input data, human review tasks for those input data objects will fail.
###Code
cors_configuration = {
'CORSRules': [{
"AllowedHeaders": [],
"AllowedMethods": ["GET"],
"AllowedOrigins": ["*"],
"ExposeHeaders": []
}]
}
# Set the CORS configuration
s3 = boto3.client('s3')
s3.put_bucket_cors(Bucket=BUCKET,
CORSConfiguration=cors_configuration)
###Output
_____no_output_____
###Markdown
Audio Classification with Amazon SageMaker: To demonstrate A2I with an Amazon SageMaker hosted endpoint, we take a trained audio classification model from an S3 bucket and host it on a SageMaker endpoint for real-time prediction. Load the model and create an endpoint. The next cells reference the trained model image and the `endpoint_name` restored earlier; creating the endpoint itself takes about 3 minutes.
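If you need to create the endpoint yourself from the trained model artifact, a minimal sketch is shown below. It is not executed in this walkthrough: `image_uri` is built in the next cell, `model_s3_path`, `role` and `sess` come from the training notebook and the setup above, and the instance type and endpoint name are placeholders.

```python
import sagemaker

# Hypothetical sketch: host the trained audio model behind a real-time endpoint.
# Instance type and endpoint name are placeholders; adjust to your account limits.
audio_model = sagemaker.model.Model(
    image_uri,
    model_data=model_s3_path,
    role=role,
    predictor_cls=sagemaker.predictor.Predictor,
    sagemaker_session=sess,
)
predictor = audio_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
    endpoint_name="vgg16-audio-demo",
)
```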
###Code
import boto3
my_session = boto3.session.Session()
client = boto3.client("sts")
account_id = client.get_caller_identity()["Account"]
algorithm_name = "vgg16-audio"
image_uri=f"{account_id}.dkr.ecr.{region}.amazonaws.com/{algorithm_name}"
###Output
_____no_output_____
###Markdown
Helper functions
###Code
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.image as mpimg
import random
import numpy as np
import json
runtime_client = boto3.client('runtime.sagemaker')
def load_and_predict(file_name):
"""
    Load an audio file and run audio classification against the hosted SageMaker endpoint.
    Parameters:
    ----------
    file_name : str
        audio file location, in str format
    Returns:
    -------
    results : str
        raw JSON string returned by the endpoint
    detections : dict
        parsed response containing the class probabilities
"""
with open(file_name, 'rb') as image:
f = image.read()
b = bytearray(f)
response = runtime_client.invoke_endpoint(EndpointName=endpoint_name,
ContentType='application/octet-stream',
Body=b)
results = response['Body'].read().decode('utf-8')
print(results)
detections = json.loads(results)
return results, detections
object_categories = ["Barking", "Howling", "Crying", "COSmoke","GlassBreaking","Other"]
###Output
_____no_output_____
###Markdown
Sample Data: Let's take a look at how the audio classification model performs on some audio clips we have on hand. The predicted class and the prediction probability are presented.
###Code
!mkdir audios
!cp ../01-byoc/input/data/competition/train/train_00001.wav audios
!cp ../01-byoc/input/data/competition/train/train_00010.wav audios
!cp ../01-byoc/input/data/competition/train/train_00021.wav audios
test_audios = ['audios/train_00001.wav',
               'audios/train_00010.wav',
               'audios/train_00021.wav']
import IPython.display as ipd
ipd.Audio(test_audios[0], autoplay=True)
for audio in test_audios:
results, detections = load_and_predict(audio)
print(detections)
###Output
_____no_output_____
###Markdown
A prediction probability of 0.465 is considered quite low, and the clip is mislabeled. This is due to the fact that the model was under-trained for demonstration purposes (see the [training notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/object_detection_pascalvoc_coco/object_detection_recordio_format.ipynb)). However, this under-trained model serves as a perfect example of bringing in human reviewers when a model is unable to make a high-confidence prediction. Creating human review Workteam or Workforce: A workforce is the group of workers that you have selected to label your dataset. You can choose either the Amazon Mechanical Turk workforce, a vendor-managed workforce, or you can create your own private workforce for human reviews. Whichever workforce type you choose, Amazon Augmented AI takes care of sending tasks to workers. When you use a private workforce, you also create work teams, groups of workers from your workforce that are assigned to Amazon Augmented AI human review tasks. You can have multiple work teams and can assign one or more work teams to each job. To create your Workteam, visit the instructions here: https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-management.html After you have created your workteam, replace the `WORKTEAM_ARN` below.
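If you would rather create the private work team programmatically than in the console, a minimal sketch using the `create_workteam` API is shown below. This cell is optional, and all Cognito identifiers are placeholders that must come from your own private workforce setup (see the link above).

```python
import boto3

# Hypothetical sketch: create a private work team via the SageMaker API.
# UserPool, UserGroup and ClientId are placeholders from your own Cognito-based
# private workforce configuration.
sm_client = boto3.client("sagemaker", region)
resp = sm_client.create_workteam(
    WorkteamName="seal-squad",
    MemberDefinitions=[{
        "CognitoMemberDefinition": {
            "UserPool": "us-west-2_EXAMPLE",
            "UserGroup": "seal-squad-group",
            "ClientId": "EXAMPLECLIENTID",
        }
    }],
    Description="Private reviewers for the audio classification A2I demo",
)
print(resp["WorkteamArn"])
```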
###Code
my_session = boto3.session.Session()
my_region = my_session.region_name
client = boto3.client("sts")
account_id = client.get_caller_identity()["Account"]
WORKTEAM_ARN = "arn:aws:sagemaker:{}:{}:workteam/private-crowd/seal-squad".format(my_region, account_id)
WORKTEAM_ARN
###Output
_____no_output_____
###Markdown
Visit https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-permissions-security.html to add the necessary permissions to your role. Client Setup: Here we are going to set up the rest of our clients.
###Code
import io
import uuid
import time
timestamp = time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
# Amazon SageMaker client
sagemaker_client = boto3.client('sagemaker', region)
s3_client = boto3.client('s3')
# Amazon Augment AI (A2I) client
a2i = boto3.client('sagemaker-a2i-runtime')
# Amazon S3 client
s3 = boto3.client('s3', region)
# Flow definition name - this value is unique per account and region. You can also provide your own value here.
flowDefinitionName = 'fd-sagemaker-audio-classification-demo-' + timestamp
# Task UI name - this value is unique per account and region. You can also provide your own value here.
taskUIName = 'ui-sagemaker-audio-classification-demo-' + timestamp
###Output
_____no_output_____
###Markdown
Create Control Plane Resources Create Human Task UI: Create a human task UI resource, giving a UI template in liquid HTML. This template will be rendered to the human workers whenever a human loop is required. For over 70 pre-built UIs, check: https://github.com/aws-samples/amazon-a2i-sample-task-uis. We will be taking an [audio classification UI](https://github.com/aws-samples/amazon-sagemaker-ground-truth-task-uis/blob/master/audio/audio-classification.liquid.html) and filling in the audio categories in the `categories` attribute of the template.
###Code
# task.input.taskObject
template = r"""
<script src="https://assets.crowd.aws/crowd-html-elements.js"></script>
<crowd-form>
<crowd-classifier
name="sentiment"
categories="['Barking', 'Howling', 'Crying', 'COSmoke','GlassBreaking','Other']"
header="What class does this audio represent?"
>
<classification-target>
<audio controls>
<source src="{{ task.input.taskObject | grant_read_access }}" type="audio/wav">
Your browser does not support the audio element.
</audio>
</classification-target>
<full-instructions header="Audio Classification Analysis Instructions">
<p><strong>Barking</strong>Barking </p>
<p><strong>Howling</strong>Howling</p>
<p><strong>Crying</strong>Crying</p>
<p><strong>COSmoke</strong>COSmoke</p>
<p><strong>GlassBreaking</strong>GlassBreaking</p>
<p><strong>Other</strong>Other</p>
</full-instructions>
<short-instructions>
<p>Choose the category that best describes the audio clip.</p>
</short-instructions>
</crowd-classifier>
</crowd-form>
"""
def create_task_ui():
'''
Creates a Human Task UI resource.
Returns:
struct: HumanTaskUiArn
'''
response = sagemaker_client.create_human_task_ui(
HumanTaskUiName=taskUIName,
UiTemplate={'Content': template})
return response
# Create task UI
humanTaskUiResponse = create_task_ui()
humanTaskUiArn = humanTaskUiResponse['HumanTaskUiArn']
print(humanTaskUiArn)
###Output
_____no_output_____
###Markdown
Create the Flow Definition In this section, we're going to create a flow definition. Flow Definitions allow us to specify:* The workforce that your tasks will be sent to.* The instructions that your workforce will receive. This is called a worker task template.* The configuration of your worker tasks, including the number of workers that receive a task and time limits to complete tasks.* Where your output data will be stored. This demo is going to use the API, but you can optionally create this workflow definition in the console as well. For more details and instructions, see: https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-create-flow-definition.html.
###Code
create_workflow_definition_response = sagemaker_client.create_flow_definition(
FlowDefinitionName= flowDefinitionName,
RoleArn= role,
HumanLoopConfig= {
"WorkteamArn": WORKTEAM_ARN,
"HumanTaskUiArn": humanTaskUiArn,
"TaskCount": 1,
"TaskDescription": "Classify the audio category.",
"TaskTitle": "Audio Classification"
},
OutputConfig={
"S3OutputPath" : OUTPUT_PATH
}
)
flowDefinitionArn = create_workflow_definition_response['FlowDefinitionArn'] # let's save this ARN for future use
# Describe flow definition - status should be active
for x in range(60):
describeFlowDefinitionResponse = sagemaker_client.describe_flow_definition(FlowDefinitionName=flowDefinitionName)
print(describeFlowDefinitionResponse['FlowDefinitionStatus'])
if (describeFlowDefinitionResponse['FlowDefinitionStatus'] == 'Active'):
print("Flow Definition is active")
break
time.sleep(2)
###Output
_____no_output_____
###Markdown
Create an SQS queue and pass A2I task status change events to the queue
###Code
sqs = boto3.resource('sqs')
queue_name = 'a2itasks'
queue_arn = "arn:aws:sqs:{}:{}:{}".format(region, account_id, queue_name)
policy = '''{
"Version": "2012-10-17",
"Id": "MyQueuePolicy",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Service": ["events.amazonaws.com",
"sqs.amazonaws.com"]
},
"Action": "sqs:SendMessage"
}]}'''
policy_obj = json.loads(policy)
policy_obj['Statement'][0]['Resource'] = queue_arn
policy = json.dumps(policy_obj)
queue = sqs.create_queue(QueueName=queue_name, Attributes={'DelaySeconds': '0',
'Policy': policy})
print(queue.url)
print(queue)
sqs_client = boto3.client('sqs')
sqs_client.add_permission(
QueueUrl=queue.url,
Label="a2i",
AWSAccountIds=[
account_id,
],
Actions=[
'SendMessage',
]
)
iam = boto3.client("iam")
role_name = "AmazonSageMaker-SageMakerExecutionRole"
assume_role_policy_document = {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": ["sagemaker.amazonaws.com", "events.amazonaws.com"]
},
"Action": "sts:AssumeRole"
}
]
}
create_role_response = iam.create_role(
RoleName = role_name,
AssumeRolePolicyDocument = json.dumps(assume_role_policy_document)
)
# Now add S3 support
iam.attach_role_policy(
PolicyArn='arn:aws:iam::aws:policy/AmazonS3FullAccess',
RoleName=role_name
)
time.sleep(60) # wait for a minute to allow IAM role policy attachment to propagate
sm_role_arn = create_role_response["Role"]["Arn"]
print(sm_role_arn)
%%bash -s "$sm_role_arn" "$my_region"
aws events put-rule --name "A2IHumanLoopStatusChanges" \
--event-pattern "{\"source\":[\"aws.sagemaker\"],\"detail-type\":[\"SageMaker A2I HumanLoop Status Change\"]}" \
--role-arn "$1" \
--region $2
!sed "s/<account_id>/$account_id/g" targets-template.json > targets-tmp.json
!sed "s/<region>/$my_region/g" targets-tmp.json > targets.json
!aws events put-targets --rule A2IHumanLoopStatusChanges \
--targets file://$PWD/targets.json
###Output
_____no_output_____
###Markdown
Add the newly created SQS queue as a target of the rule we just defined. Starting Human Loops: Now that we have set up our Flow Definition, we are ready to call our audio classification endpoint on SageMaker and start our human loops. In this tutorial, we are interested in starting a HumanLoop only if the highest prediction probability returned by our model for a clip is less than 50%. So, with a bit of logic, we can check the response for each call to the SageMaker endpoint using the `load_and_predict` helper function, and if the highest score is less than 50%, we will kick off a HumanLoop to engage our workforce for a human review.
###Code
# Get the sample images to s3 bucket for a2i UI to display
!aws s3 sync ./audios/ s3://{BUCKET}/audios/
human_loops_started = []
SCORE_THRESHOLD = .50
import json
for fname in test_audios:
    # Call the SageMaker endpoint for this clip and read back the class probabilities.
    # The highest probability is compared against the threshold below.
    result, detections = load_and_predict(fname)
max_p = max(detections['probability'])
# Our condition for triggering a human review
if max_p < SCORE_THRESHOLD:
s3_fname='s3://%s/%s' % (BUCKET, fname)
print(s3_fname)
humanLoopName = str(uuid.uuid4())
inputContent = {
"initialValue": max_p,
"taskObject": s3_fname # the s3 object will be passed to the worker task UI to render
}
# start an a2i human review loop with an input
start_loop_response = a2i.start_human_loop(
HumanLoopName=humanLoopName,
FlowDefinitionArn=flowDefinitionArn,
HumanLoopInput={
"InputContent": json.dumps(inputContent)
}
)
print(start_loop_response)
human_loops_started.append(humanLoopName)
        print('Classification confidence score of %s is less than the threshold of %.2f' % (max_p, SCORE_THRESHOLD))
print(f'Starting human loop with name: {humanLoopName} \n')
else:
        print('Classification confidence score of %s is above the threshold of %.2f' % (max_p, SCORE_THRESHOLD))
print('No human loop created. \n')
###Output
_____no_output_____
###Markdown
Check Status of Human Loop
###Code
completed_human_loops = []
for human_loop_name in human_loops_started:
resp = a2i.describe_human_loop(HumanLoopName=human_loop_name)
print(resp)
print(f'HumanLoop Name: {human_loop_name}')
print(f'HumanLoop Status: {resp["HumanLoopStatus"]}')
print(f'HumanLoop Output Destination: {resp["HumanLoopOutput"]}')
print('\n')
if resp["HumanLoopStatus"] == "Completed":
completed_human_loops.append(resp)
###Output
_____no_output_____
###Markdown
Wait For Workers to Complete Task: Since we are using a private workteam, we should go to the labeling UI to perform the inspection ourselves.
###Code
workteamName = WORKTEAM_ARN[WORKTEAM_ARN.rfind('/') + 1:]
print("Navigate to the private worker portal and do the tasks. Make sure you've invited yourself to your workteam!")
print('https://' + sagemaker_client.describe_workteam(WorkteamName=workteamName)['Workteam']['SubDomain'])
completed_human_loops = []
for human_loop_name in human_loops_started:
resp = a2i.describe_human_loop(HumanLoopName=human_loop_name)
print(resp)
print(f'HumanLoop Name: {human_loop_name}')
print(f'HumanLoop Status: {resp["HumanLoopStatus"]}')
print(f'HumanLoop Output Destination: {resp["HumanLoopOutput"]}')
print('\n')
if resp["HumanLoopStatus"] == "Completed":
completed_human_loops.append(resp)
###Output
_____no_output_____
###Markdown
Collect data from a2i to build the training data for the next round
###Code
queue.url
sqs = boto3.client('sqs')
completed_human_loops = []
while True:
response = sqs.receive_message(
QueueUrl=queue.url,
MaxNumberOfMessages=10,
MessageAttributeNames=[
'All'
],
VisibilityTimeout=10,
WaitTimeSeconds=0
)
if 'Messages' not in response:
break
messages = response['Messages']
for m in messages:
task = json.loads(m['Body'])['detail']
name = task['humanLoopName']
output_s3 = task['humanLoopOutput']['outputS3Uri']
completed_human_loops.append((name, output_s3))
receipt_handle = m['ReceiptHandle']
# Delete received message from queue
sqs.delete_message(
QueueUrl=queue.url,
ReceiptHandle=receipt_handle
)
print(completed_human_loops)
###Output
_____no_output_____
###Markdown
View Task Results: Once work is completed, Amazon A2I stores the results in your S3 bucket and sends a CloudWatch event. Your results should be available in the S3 OUTPUT_PATH when all work is completed. Note that the human answer (the selected label) is returned and saved in the JSON file.
###Code
import re
import pprint
pp = pprint.PrettyPrinter(indent=4)
for name, s3_output_path in completed_human_loops:
splitted_string = re.split('s3://' + BUCKET + '/',s3_output_path)
output_bucket_key = splitted_string[1]
response = s3.get_object(Bucket=BUCKET, Key=output_bucket_key)
content = response["Body"].read()
json_output = json.loads(content)
pp.pprint(json_output)
print('\n')
###Output
_____no_output_____
###Markdown
Incremental training with SageMaker: We have used the model to generate predictions on some out-of-sample audio clips and obtained unsatisfactory predictions (low probability), and we have demonstrated how to use Amazon Augmented AI to review and label the clips based on custom criteria. The next step in a typical machine learning life cycle is to include these cases that the model has trouble with in the next batch of training data, so that the model can learn from the new data and improve. In machine learning we call this [incremental training](https://docs.aws.amazon.com/sagemaker/latest/dg/incremental-training.html). Now we can take the results of the A2I tasks and format the information into our training data layout - * the metadata in CSV file format (header `Filename,Label,Remark`, e.g. `train_00021,1,Howling`) * and the associated audio files on S3
###Code
object_categories_dict = {j: i for i, j in enumerate(object_categories)}
def convert_a2i_to_augmented_manifest(a2i_output):
label = a2i_output['humanAnswers'][0]['answerContent']['sentiment']['label']
s3_path = a2i_output['inputContent']['taskObject']
filename = s3_path.split('/')[-1][:-4]
label_id = str(object_categories_dict[label])
return '{},{},{}'.format(filename, label_id, label), s3_path
object_categories_dict
###Output
_____no_output_____
###Markdown
This function takes an A2I output JSON and turns it into a metadata row in the same CSV format used for the original training data. To create a cohort of training samples from all the clips re-labeled by human reviewers in the A2I console, you can loop through all the A2I outputs, convert each JSON file, and concatenate the rows into a single metadata file, with each line representing the result of one audio clip.
###Code
s3_paths=[]
with open('augmented.manifest', 'w') as outfile:
outfile.write("Filename,Label,Remark\n")
# convert the a2i json to augmented manifest for each human loop output
for name, s3_output_path in completed_human_loops:
splitted_string = re.split('s3://' + BUCKET + '/', s3_output_path)
output_bucket_key = splitted_string[1]
response = s3.get_object(Bucket=BUCKET, Key=output_bucket_key)
content = response["Body"].read()
json_output = json.loads(content)
print(json_output)
# convert using the function
augmented_manifest, s3_path = convert_a2i_to_augmented_manifest(json_output)
s3_paths.append(s3_path)
outfile.write(augmented_manifest)
outfile.write('\n')
# take a look at the first lines of the metadata file
!head -n2 augmented.manifest
# upload the manifest file to S3
import time;
ts = time.time()
train_path = f"{TRAIN_PATH}/{ts}/competition"
!aws s3 cp augmented.manifest {train_path}/meta_train.csv
for s3_path in s3_paths:
filename = s3_path.split('/')[-1]
!aws s3 cp {s3_path} {train_path}/train/{filename}
###Output
_____no_output_____
###Markdown
Similar to training with the Ground Truth output augmented manifest file outlined in this [blog](https://aws.amazon.com/blogs/machine-learning/easily-train-models-using-datasets-labeled-by-amazon-sagemaker-ground-truth/), once we have collected enough data points we can construct a new `Estimator` for incremental training. For incremental training, the choice of hyperparameters becomes critical. Since we are continuing the learning and optimization from the last model, an appropriate starting `learning_rate`, for example, again needs to be determined. As a rule of thumb, even with the introduction of new, unseen data, we should start the incremental training with a smaller `learning_rate` and a different learning rate schedule (`lr_scheduler_factor` and `lr_scheduler_step`) than those of the previous training job, because the optimization had already reached a more stable state with a reduced learning rate. We should then see a similar performance on the original validation dataset in the first epoch of the incremental training. Here we use exactly the same hyperparameters that trained the first model in the [training notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/object_detection_pascalvoc_coco/object_detection_recordio_format.ipynb), with the following exceptions: a smaller learning rate (`learning_rate` was 0.001, now 0.0001), and using the weights from the trained model instead of the pre-trained weights that come with the algorithm (`use_pretrained_model=0`). Note that the following working code snippet is meant to demonstrate how to set up the A2I output for incremental training in SageMaker; training with merely 1 or 2 new samples and untuned hyperparameters would not yield a meaningful model, and may even suffer from [catastrophic forgetting](https://en.wikipedia.org/wiki/Catastrophic_interference). *The next cell takes about 5 minutes.*
###Code
%store -r model_s3_path
# path definition
s3_train_data = train_path
# Reusing the training data for validation here for demonstration purposes
# but in practice you should provide a set of data that you want to validate the training against
s3_validation_data = train_path
s3_output_location = f'{OUTPUT_PATH}/incremental-training'
# num_training_samples = len(output)
num_training_samples = 3
# Create an Estimator for incremental training; we use "File" input mode to pass the CSV metadata and audio files.
new_od_model = sagemaker.estimator.Estimator(image_uri, # same training image that we used for model hosting
role,
instance_count=1,
instance_type='ml.p3.2xlarge',
volume_size = 50,
max_run = 360000,
input_mode = 'File',
output_path=s3_output_location,
sagemaker_session=sess)
# same set of hyperparameters from the original training job
new_od_model.set_hyperparameters(batch_size = 1)
# setting the input data
train_data = sagemaker.inputs.TrainingInput(s3_train_data)
validation_data = sagemaker.inputs.TrainingInput(s3_validation_data)
# Use the output model from the original training job.
model_data = sagemaker.inputs.TrainingInput(model_s3_path)
data_channels = {'competition': train_data,
'model': model_data}
new_od_model.fit(inputs=data_channels, logs=True, wait=False)
###Output
_____no_output_____
###Markdown
After training, you will get a new model in `s3_output_location`. You can deploy it to a new endpoint, or modify an existing endpoint without taking models that are already deployed into production out of service. For example, you can add new model variants, update the ML compute instance configurations of existing model variants, or change the distribution of traffic among model variants. To modify an endpoint, you provide a new endpoint configuration; Amazon SageMaker implements the changes without any downtime. For more information, see [UpdateEndpoint](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UpdateEndpoint.html) and [UpdateEndpointWeightsAndCapacities](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_UpdateEndpointWeightsAndCapacities.html).
###Code
new_od_model.model_data
incremented_model = sagemaker.model.Model(image_uri,
model_data = new_od_model.model_data,
role = role,
predictor_cls = sagemaker.predictor.Predictor,
sagemaker_session = sess)
new_detector = sagemaker.predictor.Predictor(endpoint_name = endpoint_name)
new_detector.update_endpoint(model_name=incremented_model.name, initial_instance_count = 1,
instance_type = 'ml.p2.xlarge', wait=False)
###Output
_____no_output_____
###Markdown
Create a Lambda function to pass samples with low confidence to A2I
###Code
%%bash -s "$BUCKET"
cd invoke_endpoint_a2i
zip -r invoke_endpoint_a2i.zip .
aws s3 cp invoke_endpoint_a2i.zip s3://$1/lambda/
%store -r lambda_role_arn
import os
cwd = os.getcwd()
!aws lambda create-function --function-name invoke_endpoint_a2i --zip-file fileb://$cwd/invoke_endpoint_a2i/invoke_endpoint_a2i.zip --handler lambda_function.lambda_handler --runtime python3.7 --role $lambda_role_arn
###Output
_____no_output_____
###Markdown
Configure the Lambda function - invoke_endpoint_a2i * you can also do it from the command line, e.g. - ```aws lambda update-function-configuration --function-name invoke_endpoint_a2i \ --environment "Variables={BUCKET=my-bucket,KEY=file.txt}"``` 
###Code
bucket_key = "a2i-demo"
variables = f"A2IFLOW_DEF={flowDefinitionArn},BUCKET={BUCKET},ENDPOINT_NAME={endpoint_name},KEY={bucket_key}"
env = "Variables={"+variables+"}"
!aws lambda update-function-configuration --function-name invoke_endpoint_a2i --environment "$env"
!aws lambda add-permission \
--function-name invoke_endpoint_a2i \
--action lambda:InvokeFunction \
--statement-id apigateway \
--principal apigateway.amazonaws.com
###Output
_____no_output_____
###Markdown
Integrate the Lambda with API Gateway * refer to the previous notebook. Advanced material - use a SageMaker pipeline to manage the training / deployment process
###Code
from sagemaker.workflow.parameters import (
ParameterInteger,
ParameterString,
)
train_data = ParameterString(
name="TrainData",
default_value=s3_train_data,
)
validation_data = ParameterString(
name="ValidationData",
default_value=s3_validation_data,
)
model_data = ParameterString(
name="ModelData",
default_value=model_s3_path,
)
model_approval_status = ParameterString(
name="ModelApprovalStatus",
default_value="Approved"
)
from sagemaker.workflow.steps import TrainingStep
step_train = TrainingStep(
name="AudioClassificationTraining",
estimator=new_od_model,
inputs={
"competition": sagemaker.inputs.TrainingInput(train_data,
distribution='FullyReplicated'),
"validation":sagemaker.inputs.TrainingInput(validation_data,
distribution='FullyReplicated'),
"model":sagemaker.inputs.TrainingInput(model_data,
distribution='FullyReplicated')
},
)
import time
from sagemaker.workflow.step_collections import CreateModelStep
model_name='audio-vgg16-'+str(int(time.time()))
model = sagemaker.model.Model(
name=model_name,
image_uri=step_train.properties.AlgorithmSpecification.TrainingImage,
model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
sagemaker_session=sess,
role=role
)
inputs = sagemaker.inputs.CreateModelInput(
instance_type="ml.m4.xlarge"
)
create_model_step = CreateModelStep(
name="ModelPreDeployment",
model=model,
inputs=inputs
)
from sagemaker.workflow.step_collections import RegisterModel
model_package_group_name = f"AudioClassificationGroupModel"
step_register = RegisterModel(
name="AudioClassificationModel",
estimator=new_od_model,
model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
content_types=["application/octet-stream"],
response_types=["application/json"],
inference_instances=["ml.t2.medium", "ml.m5.xlarge"],
transform_instances=["ml.m5.xlarge"],
model_package_group_name=model_package_group_name,
approval_status=model_approval_status,
)
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.workflow.steps import ProcessingStep
deploy_model_processor = SKLearnProcessor(
framework_version='0.23-1',
role=role,
instance_type="ml.m5.large",
instance_count=1,
sagemaker_session=sess)
deploy_step = ProcessingStep(
name='DeployModel',
processor=deploy_model_processor,
job_arguments=[
"--model-name", create_model_step.properties.ModelName,
"--endpoint-name", endpoint_name,
"--region", region],
code="./deploy_model.py")
endpoint_name
pipeline_name="AudioClassification"
from sagemaker.workflow.pipeline import Pipeline
pipeline = Pipeline(
name=pipeline_name,
parameters=[
train_data, validation_data, model_data, model_approval_status
],
steps=[ step_train, step_register, create_model_step, deploy_step],
)
json.loads(pipeline.definition())
pipeline.upsert(role_arn=role)
execution = pipeline.start()
###Output
_____no_output_____
###Markdown
More on incremental training: It is recommended to perform a search over the hyperparameter space for your incremental training with [hyperparameter tuning](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning.html) to find an optimal set of hyperparameters, especially the ones related to the learning rate: `learning_rate`, `lr_scheduler_factor` and `lr_scheduler_step` from the SageMaker object detection algorithm. We have an [example](https://github.com/aws/amazon-sagemaker-examples/blob/master/hyperparameter_tuning/image_classification_early_stopping/hpo_image_classification_early_stopping.ipynb) of running a hyperparameter tuning job using the Amazon SageMaker Automatic Model Tuning feature. Please try it out! The End, but....! This is the end of the example. Remember to execute the next cell to delete the endpoint, otherwise it will continue to incur charges.
###Code
%store flowDefinitionArn
%store endpoint_name
%store model_package_group_name
%store pipeline_name
%store role
%store lambda_role_arn
# new_detector.delete_endpoint()  # uncomment to delete the endpoint and stop incurring charges
###Output
_____no_output_____ |
simple_model_implementations/multiple_regression.ipynb | ###Markdown
Multiple Linear Regression $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \varepsilon$
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data_url = 'https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2021/2021-07-27/olympics.csv'
df = pd.read_csv(data_url, sep=",")[:100]
df.dropna(subset = ["height", "weight", "age"], inplace=True)
###Output
_____no_output_____
###Markdown
Create data matrix
###Code
X_1 = df.height.values.reshape((len(df.height),1))
X_2 = df.weight.values.reshape((len(df.weight),1))
y = df.age.values.reshape((len(df.age),1))
ones = np.ones((len(X_1),1))
X_dat = np.concatenate((X_1, X_2), axis=1)
X = np.concatenate((ones, X_dat), axis=1)
fig = plt.figure(1)
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:, 1], X[:, 2], y)
###Output
_____no_output_____
###Markdown
Compute best set of weights
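The weights are computed with the closed-form ordinary least squares solution (the normal equations), which is exactly what the next cell implements:

$$\hat{\boldsymbol{\beta}} = (X^\top X)^{-1} X^\top \mathbf{y}, \qquad \hat{\mathbf{y}} = X \hat{\boldsymbol{\beta}}$$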
###Code
hat_β = np.linalg.inv((X.T @ X)) @ X.T @ y
hat_y = X @ hat_β
###Output
_____no_output_____
###Markdown
Create predictions in a grid format to plot in 3D.
###Code
xx, yy, zz = np.meshgrid(X[:, 0], X[:, 1], X[:, 2])
grid = np.vstack((xx.flatten(), yy.flatten(), zz.flatten())).T
Z = grid @ hat_β
fig = plt.figure(2)
ax = fig.add_subplot(111, projection='3d')
ax.scatter(X[:, 1], X[:, 2], y, color='r')
ax.plot_trisurf(grid[:, 1], grid[:, 2], Z.reshape((len(Z),)), alpha=0.9)
plt.show()
###Output
_____no_output_____
###Markdown
Polynomial Multiple Linear Regression $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1^2 + \beta_4 x_2^2 + \varepsilon$
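In the code below `polynomial_degree = 4`, so `create_phi_matrix` actually expands each observation to powers up to 4 of both predictors. Each row of the design matrix $\Phi$ is

$$\Phi_i = \bigl[\,1,\; x_{1i},\; x_{2i},\; x_{1i}^{2},\; x_{1i}^{3},\; x_{1i}^{4},\; x_{2i}^{2},\; x_{2i}^{3},\; x_{2i}^{4}\,\bigr],$$

and the weights are again the least squares solution $\hat{\boldsymbol{\beta}} = (\Phi^\top \Phi)^{-1} \Phi^\top \mathbf{y}$.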
###Code
def create_phi_matrix(data, degrees):
copy = data.copy()
polynomials = np.ones((data.shape[0],1))
for i in range(data.shape[1]):
for j in range(2, degrees+1):
polynomials = np.append(polynomials, (copy[:,i]**j).reshape((len(data),1)), axis=1)
df = np.append(data.reshape((len(data),2)), polynomials[:,1:], axis=1)
return df
polynomial_degree = 4
Φ_mat = create_phi_matrix(data=X_dat, degrees=polynomial_degree)
ones = np.ones((len(Φ_mat),1))
Φ = np.append(ones, Φ_mat, axis=1)
hat_β = np.linalg.inv((Φ.T @ Φ)) @ Φ.T @ y
hat_y = Φ @ hat_β
rmse = round(float(np.sqrt(np.mean((y - hat_y)**2))), 2)  # root mean squared error of the polynomial fit
mn = np.min(X_dat, axis=0)
mx = np.max(X_dat, axis=0)
X_ax,Y_ax = np.meshgrid(np.linspace(mn[0], mx[0], 20), np.linspace(mn[1], mx[1], 20))
XX = X_ax.flatten()
YY = Y_ax.flatten()
# use the same function to create the Φ matrix in the form that the 3D graph needs them.
X_flat = np.c_[XX,YY]
Φ_flat = create_phi_matrix(data=X_flat, degrees=polynomial_degree)
Z = ((np.c_[np.ones(XX.shape), Φ_flat]) @ hat_β).reshape(X_ax.shape)
import warnings
warnings.filterwarnings("ignore")
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(X_ax, Y_ax, Z, rstride=1, cstride=1, alpha=0.2)
ax.scatter(X_dat[:,0], X_dat[:,1], y)
plt.title(f'Polynomial fit of degree {polynomial_degree}', fontsize=15)
###Output
_____no_output_____ |
TrainAgent.ipynb | ###Markdown
Train agent: All the code required to train the agent is defined in this notebook. For the running environment setup, see the [README.md](./README.md). Define neural network models for agent
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # GPU is not tested.
class A2C(nn.Module):
"""Advantage Actor Critic Model"""
def __init__(self, state_size, action_size, seed, fc1_units=128, fc2_units=128):
"""Initialize parameters and build model.
Parameters
----------
state_size : int
Dimension of each state
action_size : int
Dimension of each action
seed : int
Random seed
fc1_units : int
Number of nodes in the first hidden layer
fc2_units : int
Number of nodes in the second hidden layer
"""
super(A2C, self).__init__()
self.seed = torch.manual_seed(seed)
self.fc1 = nn.Linear(state_size, fc1_units)
self.fc2 = nn.Linear(fc1_units, fc2_units)
# Actor and critic networks share hidden layers
self.fc_actor = nn.Linear(fc2_units, action_size)
self.fc_critic = nn.Linear(fc2_units, 1)
# Standard deviation for distribution to generate actions
self.std = nn.Parameter(torch.ones(action_size))
def forward(self, state):
"""In this A2C model, a batch of state from all agents shall be processed.
Parameters
----------
state : array_like
State of agent
Returns
-------
value : torch.Tensor
V value of the current state.
action : torch.Tensor
Action generated from the current policy and state
log_prob : torch.Tensor
Log of the probability density/mass function evaluated at value.
entropy : torch.Tensor
entropy of the distribution
"""
state = torch.tensor(state, device=DEVICE, dtype=torch.float32)
out1 = F.relu(self.fc1(state))
out2 = F.relu(self.fc2(out1))
# mean of the Gaussian distribution range in [-1, 1]
mean = torch.tanh(self.fc_actor(out2))
# V value
value = self.fc_critic(out2)
# Create distribution from mean and standard deviation
# Use softplus function to make deviation always positive
# SoftPlus is a smooth approximation to ReLU function
# i.e. softplus(1.0) = 1.4189
# softplus(0.0) = 0.6931
# softplus(-1.0) = 0.3133
# softplus(-2.0) = 0.1269
dist = torch.distributions.Normal(mean, F.softplus(self.std))
# Sample next action from the distribution.
# [[action_(1,1), action_(1,2), .., action_(1,action_size)],
# [action_(2,1), action_(2,2), .., action_(2,action_size)],
# ...
# [action_(NumOfAgents,1), action_(NumOfAgents,2), .., action_(NumOfAgents,action_size)]]
action = dist.sample()
action = torch.clamp(action, min=-1.0, max=1.0)
# Create the log of the probability at the actions
# Sum up them, and recover 1 dimention
# --> dist.log_prob(action)
# [[lp_(1,1), lp_(1,2), .., lp_(1,action_size)],
# [lp_(2,1), lp_(2,2), .., lp_(2,action_size)],
# ...
# [lp_(NumOfAgents,1), lp_(NumOfAgents,2), .., lp_(NumOfAgents,action_size)]]
# --> sum(-1)
# [sum_1,sum_2, .., sum_NumOfAgents]
# --> unsqueeze(-1)
# [[sum_1],
# [sum_2],
# ...
# [sum_NumOfAgents]]
# Todo: Check theory of multiple actions. Is it okay to sum up?
log_prob = dist.log_prob(action).sum(-1).unsqueeze(-1)
# When std is fixed, entropy is same value.
entropy = dist.entropy().sum(-1).unsqueeze(-1)
return value, action, log_prob, entropy
###Output
_____no_output_____
###Markdown
Define environment and agent
###Code
from unityagents import UnityEnvironment
import gym
import numpy as np
import random
import torch
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
GAMMA = 0.99 # discount factor
LR = 5e-4 # learning rate
class BallTrackEnv():
"""Reacher environment"""
def __init__(self, seed = 0):
"""Initialize environment."""
env = UnityEnvironment(file_name='Reacher.app', seed=seed)
self.env = env
self.brain_name = env.brain_names[0]
self.brain = env.brains[self.brain_name]
self.env_info = self.env.reset(train_mode=True)[self.brain_name]
self.action_size = self.brain.vector_action_space_size
print('Size of action:', self.action_size)
states = self.env_info.vector_observations
self.state_size = states.shape[1]
print('Size of state:', self.state_size)
self.num_agents = len(self.env_info.agents)
print('Number of agents:', self.num_agents)
def reset(self, train_mode = True):
"""Reset environment and return initial states.
Returns
-------
state : numpy.ndarray
Initial agents' states of the environment (20 agents x 33 states)
"""
self.env_info = self.env.reset(train_mode)[self.brain_name]
return self.env_info.vector_observations
def one_step_forward(self, actions):
"""Take one step with actions.
Parameters
----------
actions : numpy.ndarray
Agents' actions a_t (20 agents x 4 actions)
Returns
-------
new_states : numpy.ndarray of numpy.float64
New states s_(t+1)
rewards : list of float
Rewards r(t)
dones : list of bool
True if a episode is done
"""
self.env_info = self.env.step(actions)[self.brain_name]
new_states = self.env_info.vector_observations
rewards = self.env_info.rewards
dones = self.env_info.local_done
return new_states, rewards, dones
def close(self):
self.env.close()
class Agent():
"""Interacts with and learns from the environment."""
def __init__(self, env, seed):
"""Initialize an Agent object.
Parameters
----------
env : BallTrackEnv
environment for the agent to interacts
seed :
random seed
        Returns
        -------
        none
        """
self.env = env
self.state_size = env.state_size
self.action_size = env.action_size
self.seed = random.seed(seed)
self.a2c = A2C(self.state_size, self.action_size, seed).to(DEVICE)
self.optimizer = optim.Adam(self.a2c.parameters(), lr=LR)
self.log_probs = []
self.values = []
self.rewards = []
self.episode_score = 0
self.rollout = 5
self.actor_loss = 0.0
self.critic_loss = 0.0
self.entropy = 0.0
self.loss = 0.0
self.critic_loss_coef = 3.0
    def collect_experiences(self, state, step, num_steps):
        """Collect experiences for the number of rollout steps
Parameters
----------
state : numpy.ndarray (20 x 33)
Initial states
        step : int
            Current step
num_steps :
Maximum number of steps for an episode
Returns
-------
rewards : list
Rewards r(t)
dones : list
True if a episode is done
values : list of torch.Tensor
V values
log_probs : list of torch.Tensor
Log of the probability density/mass function evaluated at value.
entropys : list of torch.Tensor
entropy of the distribution
step : int
current step
state: numpy.ndarray (20 x 33)
Current states
episode_done : bool
If episode is done, True
"""
rewards = []
dones = []
values = []
log_probs = []
entropys = []
for rollout in range(self.rollout):
value, action, log_prob, entropy = self.a2c.forward(state)
new_state, reward, done = self.env.one_step_forward(action.detach().squeeze().numpy())
self.episode_score += np.sum(reward)/self.env.num_agents # accumulate mean of rewards
reward = torch.tensor(reward).unsqueeze(-1)
rewards.append(reward)
dones.append(done)
values.append(value)
log_probs.append(log_prob)
entropys.append(entropy)
state = new_state
episode_done = np.any(done) or step == num_steps-1
if episode_done or rollout == self.rollout-1:
step += 1
break
step += 1
return rewards, dones, values, log_probs, entropys, step, state, episode_done
def learn(self, state, rewards, dones, values, log_probs, entropys):
"""Learn from collected experiences
Parameters
----------
state : numpy.ndarray (20 x 33)
Current states
rewards : list
Rewards r(t)
dones : list
True if a episode is done
values : list of torch.Tensor
V values
log_probs : list of torch.Tensor
Log of the probability density/mass function evaluated at value.
entropys : list of torch.Tensor
entropy of the distribution
Returns
-------
none
"""
length = len(rewards)
# Create area to store advantages and returns over trajectory
advantages = [torch.FloatTensor(np.zeros((self.env.num_agents, 1)))]*length
returns = [torch.FloatTensor(np.zeros((self.env.num_agents, 1)))]*length
        # Calculate V(t_end+1)
value, _, _, _ = self.a2c.forward(state)
# Set V(t_end+1) to temporal return value
_return = value.detach()
# Calculate advantages and returns backwards
for i in reversed(range(length)):
# Return(t) = reward(t) + gamma * Return(t+1) if not last step
_return = rewards[i] + GAMMA * _return * torch.FloatTensor(1 - np.array(dones[i])).unsqueeze(-1)
# Advantage(t) = Return(t) - Value(t)
advantages[i] = _return.detach() - values[i].detach()
returns[i] = _return.detach()
# Flatten all agents results in one list
log_probs = torch.cat(log_probs, dim=0)
advantages = torch.cat(advantages, dim=0)
returns = torch.cat(returns, dim=0)
values = torch.cat(values, dim=0)
entropys = torch.cat(entropys, dim=0)
# Calculate losses
self.actor_loss = -(log_probs * advantages).mean()
self.critic_loss = (0.5 * (returns - values).pow(2)).mean()
self.entropy = entropys.mean()
# Sum-up all losses with weights
self.loss = self.actor_loss + self.critic_loss_coef * self.critic_loss - 0.001 * self.entropy
# Update model
self.optimizer.zero_grad()
self.loss.backward()
self.optimizer.step()
def run_episode(self, num_steps):
"""Initialize an Agent object.
Parameters
----------
num_steps : int
maximum steps of one episode
show_result :
random seed
Returns
-------
episode_reward : float
Mean of total rewards that all agents collected
"""
state = self.env.reset()
self.episode_score = 0
episode_done = False
step = 0
while not episode_done:
            rewards, dones, values, log_probs, entropys, step, state, episode_done = self.collect_experiences(state, step, num_steps)
self.learn(state, rewards, dones, values, log_probs, entropys)
return self.episode_score
def save(self, filename):
torch.save(self.a2c.state_dict(), filename)
def load(self, filename):
state_dict = torch.load(filename, map_location=lambda storage, loc: storage)
self.a2c.load_state_dict(state_dict)
###Output
_____no_output_____
###Markdown
Train agent Sample output:
```
Size of action: 4
Size of state: 33
Number of agents: 20
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Linear-1               [-1, 1, 128]           4,352
            Linear-2               [-1, 1, 128]          16,512
            Linear-3                 [-1, 1, 4]             516
            Linear-4                 [-1, 1, 1]             129
================================================================
Total params: 21,509
Trainable params: 21,509
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.00
Params size (MB): 0.08
Estimated Total Size (MB): 0.08
----------------------------------------------------------------
episode,  score,   total_loss,  actor_loss,  critic_loss,  entropy
1,        0.1695,  0.0073,      0.2254,      0.0073,       6.7658
2,        0.0640,  0.0007,      -0.0526,     0.0007,       6.7658
3,        0.1455,  0.0005,      -0.0927,     0.0005,       6.7658
4,        0.1285,  0.0003,      0.0020,      0.0003,       6.7658
5,        0.1435,  0.0002,      -0.0154,     0.0002,       6.7658
6,        0.0620,  0.0002,      -0.0384,     0.0002,       6.7658
7,        0.1350,  0.0005,      -0.1349,     0.0005,       6.7658
8,        0.0565,  0.0001,      0.0107,      0.0001,       6.7658
9,        0.0460,  0.0002,      0.0019,      0.0002,       6.7658
10,       0.0810,  0.0001,      0.0102,      0.0001,       6.7658
```
###Code
from torchsummary import summary
seed = 1
env = BallTrackEnv()
agent = Agent(env, seed)
summary(agent.a2c, (1, agent.state_size, 0))
max_episodes = 200
num_steps = 10000 # The maximum number of steps in the environment is 1000, so this parameter effectively does nothing.
print("episode, \tscore, \ttotal_loss, \tactor_loss, \tcritic_loss, \tentropy")
for i in range(max_episodes):
episode = i + 1
episode_score = agent.run_episode(num_steps)
if episode % 1 == 0:
print("{}, \t{:.4f}, \t{:.4f}, \t{:.4f}, \t{:.4f}, \t{:.4f}".format(episode, episode_score, agent.loss, agent.actor_loss, agent.critic_loss, agent.entropy))
if episode % 50 == 0 and episode != 0:
agent.save("check_point_{}".format(episode))
###Output
_____no_output_____
###Markdown
Plot trained result
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def plotScore(data_paths):
plt.rcParams["figure.dpi"] = 100.0
num=100
b=np.ones(num)/num
a = np.zeros(num-1)
for path in data_paths:
data = np.loadtxt(path[1], skiprows=1, usecols=1, delimiter=', \t', dtype='float')
length = len(data)
data_mean = np.convolve(np.hstack((a, data)), b, mode='valid')
print('At '+ str(np.where(data > 30.0)[0][0] + 1), 'episode', path[0], 'achieved score 30.0')
print('At '+ str(np.where(data_mean > 30.0)[0][0] + 1), 'episode', path[0], 'achieved average score 30.0')
plt.plot(np.linspace(1, length, length, endpoint=True), data, label=path[0])
plt.plot(np.linspace(1, length, length, endpoint=True), data_mean, label=path[0]+"_Average")
plt.xlabel('Episode #')
plt.ylabel('score')
plt.legend()
plt.xlim(0, length)
plt.ylim(0.0, 40.0)
plt.grid()
plt.hlines([30.0], 0, length, "green")
plt.show()
data_paths = []
data_paths.append(['A2C','results/critic3_en-3_std1.txt'])
plotScore(data_paths)
###Output
At 73 episode A2C achieved score 30.0
At 151 episode A2C achieved average score 30.0
###Markdown
Watch trained agent and take screenshots
###Code
from PIL import ImageGrab
take_screenshot = False
# Watch trained agent
seed = 100
env = BallTrackEnv(seed)
agent = Agent(env, seed)
agent.load('results/check_point_200_critic3_en-3')
# softplus(-100) = 1.00000e-44 * 3.7835, which means the policy is nearly deterministic
agent.a2c.std = nn.Parameter(-100*torch.ones(agent.action_size))
state = agent.env.reset(train_mode=False) # reset the environment and get the initial state
score = np.zeros(agent.env.num_agents) # initialize the score
step = 0
while True:
_, action, _, _ = agent.a2c.forward(state) # select an action
new_state, reward, done = agent.env.one_step_forward(action.detach().squeeze().numpy())
score = score + reward
state = new_state
if take_screenshot:
if step % 1 == 0: # create screenshot in each step
filename = "results/step" + str(step) + ".png"
ImageGrab.grab(bbox=(0, 88, 1280, 804)).save(filename) #Screenshot area to be adjusted in your environment
step += 1
if np.any(done):
break
print("Score of each agent: {}".format(score))
print("Average score: {}".format(np.sum(score)/agent.env.num_agents))
agent.env.close()
###Output
INFO:unityagents:
'Academy' started successfully!
Unity Academy name: Academy
Number of Brains: 1
Number of External Brains : 1
Lesson number : 0
Reset Parameters :
goal_size -> 5.0
goal_speed -> 1.0
Unity brain name: ReacherBrain
Number of Visual Observations (per agent): 0
Vector Observation space type: continuous
Vector Observation space size (per agent): 33
Number of stacked Vector Observation: 1
Vector Action space type: continuous
Vector Action space size (per agent): 4
Vector Action descriptions: , , ,
###Markdown
Visualize network structure by [graphviz library](https://graphviz.readthedocs.io)
###Code
# pip install graphviz
# conda install python-graphviz
import torch
from torchviz import make_dot
seed = 1
env = BallTrackEnv()
agent = Agent(env, seed)
x = agent.env.reset(train_mode=False)
out = agent.a2c(x)
dot = make_dot(out, params=dict(agent.a2c.named_parameters()))
dot.format = 'png'
dot.render('content/model')
agent.env.close()
###Output
_____no_output_____
###Markdown
Train Agent
###Code
from unityagents import UnityEnvironment
import numpy as np
env = UnityEnvironment(file_name="Banana.app")
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
print("brain_name: ", brain_name)
print("brain: ", brain)
# For detailes: https://github.com/udacity/deep-reinforcement-learning/blob/master/python/unityagents/brain.py
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents in the environment
print('Number of agents:', len(env_info.agents))
# number of actions
action_size = brain.vector_action_space_size
print('Number of actions:', action_size)
# examine the state space
state = env_info.vector_observations[0]
print('States look like:', state)
state_size = len(state)
print('States have length:', state_size)
import random
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
from src.dqn_agent import Agent
agent = Agent(state_size=37, action_size=4, seed=0)
def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
"""Deep Q-Learning.
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
eps_start (float): starting value of epsilon, for epsilon-greedy action selection
eps_end (float): minimum value of epsilon
eps_decay (float): multiplicative factor (per episode) for decreasing epsilon
"""
scores = [] # list containing scores from each episode
scores_window = deque(maxlen=100) # last 100 scores
eps = eps_start # initialize epsilon
for i_episode in range(1, n_episodes+1):
env_info = env.reset(train_mode=True)[brain_name] # reset the environment
state = env_info.vector_observations[0] # get the current state
score = 0
#for t in range(max_t):
while True:
action = agent.act(state, eps)
env_info = env.step(action)[brain_name]
next_state = env_info.vector_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0] # see if episode has finished
agent.step(state, action, reward, next_state, done)
state = next_state
score += reward
if done:
break
scores_window.append(score) # save most recent score
scores.append(score) # save most recent score
eps = max(eps_end, eps_decay*eps) # decrease epsilon
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="")
if i_episode % 10 == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))
torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')
return scores
scores = dqn()
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
# Watch trained agent
from PIL import ImageGrab
agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth'))
take_screenshot = False
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
state = env_info.vector_observations[0] # get the current state
score = 0 # initialize the score
step = 0
while True:
action = agent.act(state) # select an action
env_info = env.step(action)[brain_name] # send the action to the environment
next_state = env_info.vector_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0] # see if episode has finished
score += reward # update the score
state = next_state # roll over the state to next time step
if take_screenshot:
if step % 1 == 0: # create screenshot in each step
filename = "banana_step" + str(step) + ".png"
ImageGrab.grab(bbox=(0, 88, 1286, 844)).save(filename) #Screen shot area to be adjusted in your environment
step += 1
if done: # exit loop if episode finished
break
print("Score: {}".format(score))
env.close()
###Output
_____no_output_____
###Markdown
Training Function
###Code
n_episodes=800
print_every=100
goal_score = 13
score_window_size = 100
keep_training = True
epsilon_start = 1.0
epsilon_end = 0.001
epsilon_decay = 0.995
scores = train_dqn(env, agent, n_episodes=n_episodes, goal_score=goal_score,
score_window_size=score_window_size,
keep_training=keep_training, print_every=print_every,
eps_start=epsilon_start, eps_end=epsilon_end, eps_decay=epsilon_decay)
plot_training_scores(scores, goal_score, window=score_window_size, agent_name=agent.name)
# demo the current notebook's agent to watch performance in real time
# uncomment the lines below to run demo
#from demos import demo_agent
#demo_agent(env, agent, n_episodes=3, epsilon=0.05, seed=SEED)
# demo a saved agent by loading it from disk
# uncomment the lines below to run demo
#from demos import demo_saved_agent
#saved_agent_name = 'Agent Naners'
#demo_saved_agent(env, saved_agent_name, n_episodes=1, epsilon=0.05, seed=SEED)
# close the environment when finished
env.close()
###Output
_____no_output_____
###Markdown
Create Agent
###Code
# parameters used for the provided agent
agent_params = {
'name': 'Agent OmegaPong',
'buffer_size': int(1e6),
'batch_size': 256,
'layers_actor': [512, 256],
'lr_actor': 5e-4,
'layers_critic': [512, 256, 256],
'lr_critic': 1e-3,
'learn_every': 5,
'learn_passes':5,
'gamma': 0.99,
'tau': 5e-3,
'batch_norm': True,
'weight_decay':0.0
}
# create the agent
agent = DDPG_Agent(state_size, action_size, brain_name, seed=SEED,
params=agent_params)
print(agent.display_params())
###Output
{'name': 'Agent OmegaPong', 'buffer_size': 1000000, 'batch_size': 256, 'layers_actor': [512, 256], 'layers_critic': [512, 256, 256], 'lr_actor': 0.0005, 'lr_critic': 0.001, 'gamma': 0.99, 'tau': 0.005, 'weight_decay': 0.0, 'learn_every': 5, 'learn_passes': 5, 'batch_norm': True}
###Markdown
Train Agent
###Code
# train the agent
n_episodes = 3000
max_t = 2000
print_every = 50
goal_score = 0.5
score_window_size = 100
keep_training = True
scores = train_ddpg(env, agent, num_agents,
n_episodes=n_episodes, max_t=max_t,
print_every=print_every,
goal_score=goal_score, score_window_size=score_window_size,
keep_training=keep_training)
# plot training results
plot_training_scores(scores, goal_score, window=score_window_size,
ylabel='Max Score for all Agents',
agent_name=agent.name)
###Output
_____no_output_____
###Markdown
Demo Trained or Saved Agents
###Code
# demo the agent trained in this notebook by uncommenting the cells below
#from demos import demo_agent_cont
#demo_scores = demo_agent_cont(env, agent, num_agents, n_episodes=3)
# load a saved agent and run demo
from demos import demo_saved_agent_cont
demo_agent_name = 'Agent OmegaPong'
demo_saved_agent_cont(env, demo_agent_name, n_episodes=3)
# close the environment when complete
env.close()
###Output
_____no_output_____ |
Dichromatic_pattern_CSL.ipynb | ###Markdown
Produce Lists of CSL boundaries for any given rotation axis (hkl) :
###Code
# for example: [1, 0, 0], [1, 1, 0] or [1, 1, 1]
axis = np.array([1,1,1])
# list Sigma boundaries < 50
csl.print_list(axis,50)
###Output
Sigma: 1 Theta: 0.00
Sigma: 3 Theta: 60.00
Sigma: 7 Theta: 38.21
Sigma: 13 Theta: 27.80
Sigma: 19 Theta: 46.83
Sigma: 21 Theta: 21.79
Sigma: 31 Theta: 17.90
Sigma: 37 Theta: 50.57
Sigma: 39 Theta: 32.20
Sigma: 43 Theta: 15.18
Sigma: 49 Theta: 43.57
###Markdown
Select a sigma and get the characteristics of the GB:
###Code
# pick a sigma for this axis, ex: 7.
sigma = 7
theta, m, n = csl.get_theta_m_n_list(axis, sigma)[0]
R = csl.rot(axis, theta)
# Minimal CSL cells. The plane orientations and the orthogonal cells
# will be produced from these original cells.
M1, M2 = csl.Create_minimal_cell_Method_1(sigma, axis, R)
print('Angle:', degrees(theta), '\n', 'Sigma:', sigma,'\n',
'Minimal cells:','\n', M1,'\n', M2, '\n')
###Output
Angle: 38.21321070173819
Sigma: 7
Minimal cells:
[[ 1 2 1]
[ 0 -1 1]
[-2 0 1]]
[[ 0 2 1]
[ 1 0 1]
[-2 -1 1]]
###Markdown
Produce Lists of GB planes for the chosen boundary :
###Code
# the higher the limit the higher the indices of GB planes produced.
lim = 5
V1, V2, M, Gb = csl.Create_Possible_GB_Plane_List(axis, m,n,lim)
# the following data frame shows the created list of GB planes and their corresponding types
df = pd.DataFrame(
{'GB1': list(V1),
'GB2': list(V2),
'Type': Gb
})
df.head()
# Only show the twist boundaries in this system:
df[df['Type'] == 'Twist'].head()
###Output
_____no_output_____
###Markdown
Select a GB plane and go on: You only need to pick the GB1 plane
###Code
# choose a basis and a boundary plane:
basis = 'fcc'
v1 = np.array([1, 1, 1])
# find its orthogonal cell
O1, O2, Num = csl.Find_Orthogonal_cell(basis, axis, m, n, v1)
# function to plot the dichromatic pattern on
# plane v1 (atoms from grain one, grain two and the overlapped CSL points)
def PlotPlane(v1, lim=6, plane_thickness=0.1):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
v = dot(R, v1)
x = np.arange(-lim,lim)
y = np.arange(-lim,lim)
z = np.arange(-lim,lim)
V = len(x)*len(y)*len(z)
indice = (np.stack(np.meshgrid(x, y, z)).T).reshape(V, 3)
# import basis
Base = csl.Basis(str(basis))
Atoms1 = []
vecs = []
tol = 0.001
# create lattice
for i in range(V):
for j in range(len(Base)):
Atoms1.append(indice[i,0:3] + Base[j,0:3])
Atoms1 = np.array(Atoms1)
# plot atoms of one grain on a given plane
for i in range(len(Atoms1)):
if abs(dot(Atoms1[i],v)) <= plane_thickness:
ax.scatter(Atoms1[i,0],Atoms1[i,1],Atoms1[i,2],'s', s = 5,
facecolor = 'k',edgecolor='k')
# plot atoms of the other grain on a given plane
Atoms2 = dot(R,Atoms1.T).T
for i in range(len(Atoms2)):
if abs(dot(Atoms2[i],v)) <= plane_thickness:
ax.scatter(Atoms2[i,0], Atoms2[i,1], Atoms2[i,2],'s', s = 50,
facecolor = 'g',edgecolor='g', alpha=0.2)
# create the CSL lattice
csl_cell = np.round(dot(R,csl.CSL_vec(basis, M1)),7)
for i in range(V):
vector = (indice[i, 0] * csl_cell[:, 0] + indice[i, 1] * csl_cell[:, 1] + indice[i, 2] * csl_cell[:, 2])
vecs.append(vector)
# plot the CSL atoms on a given plane
vecs = np.array(vecs)
for i in range(len(vecs)):
if abs(dot(vecs[i],v)) <= plane_thickness :
ax.scatter(vecs[i,0],vecs[i,1],vecs[i,2],'o', s=100,
facecolor = 'y',edgecolor='y', alpha=0.3)
ax.set_proj_type('ortho')
ax.axis('scaled')
ax.set_xlim(-lim, lim)
ax.set_ylim(-lim, lim)
ax.set_zlim(-lim, lim)
ax.grid(False)
# view direction: normal to the plane
az = degrees(atan2(v[1],v[0]))
el = degrees(asin(v[2]/norm(v)))
ax.view_init(azim = az, elev = el)
ax.set_xlabel('X axis')
ax.set_ylabel('Y axis')
ax.set_zlabel('Z axis')
return ax
###Output
_____no_output_____
###Markdown
Plot the twist boundary plane with the DSC vectors (the small CSL repeat vectors) _DSC vectors are the smallest vectors that keep the symmetry of the CSL lattice intact. They are therefore possible Burgers vectors of grain boundary dislocations. To know more about this I refer you to: 'Interfaces in crystalline materials', Sutton and Balluffi, clarendon press, 1996._
###Code
%matplotlib notebook
ax = PlotPlane(v1, lim=4)
# function to plot a line
def PlotLine(Vec, origin=[0,0,0], length=1, color='k'):
return (ax.plot([origin[0], length*Vec[0] + origin[0]],
[origin[1], length*Vec[1] + origin[1]],
[origin[2], length*Vec[2]+ origin[2]], str(color)))
# plot the DSC vectors network, if you want the projected vectors on the plane use csl.DSC_on_plane
dsc_cell = np.round(dot(R,1/sigma*csl.DSC_vec(basis,sigma, M1)),7)
PlotLine(dsc_cell[:,0], color='r')
PlotLine(dsc_cell[:,1], color='b')
PlotLine(dsc_cell[:,2], color='g')
#Plot the orthogonal CSL vectors on the v1 plane
PlotLine(O2[:,1], color='k')
PlotLine(O2[:,2], color='m')
###Output
_____no_output_____
###Markdown
A more advanced example of the usage of the CSL lattice in creating large facets: _This is how I created a large two-faceted structure by decomposing the high index boundary plane onto two lower energy facets. All the 3 planes are CSL planes and form a triangle. To know more follow the link to the paper:_ __(https://journals.aps.org/prmaterials/abstract/10.1103/PhysRevMaterials.2.043601)__
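As a quick sanity check of the decomposition used in the next cell (a standalone sketch, independent of the `csl` helpers), the integer combination can be verified directly:
```python
import numpy as np

# 7 units of (1, 2, -3) plus 2 units of (0, 1, -4) should reproduce (7, 16, -29)
facet_sum = 7 * np.array([1, 2, -3]) + 2 * np.array([0, 1, -4])
print(facet_sum)                                # [  7  16 -29]
print(np.array_equal(facet_sum, [7, 16, -29]))  # True
```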
###Code
df[df['Type'] == 'Tilt'].head()
v1 = np.array([-5, 4, 1])
O1, O2, Num = csl.Find_Orthogonal_cell(basis,axis,m,n,v1)
ax = PlotPlane(v1, lim=27)
# I intended to decompose the mixed grain boundary (7, 16, -29) into two lower energy facets.
# 1 unit vector of (7, 16, -29) = 7 units of (1 2 3) plane + 2 units of (0 1 4).
PlotLine(dot(R,[7, 16, -29]), color='r')
PlotLine(dot(R,[1, 2, -3]), length=7, color='r')
PlotLine(dot(R,[0, 1, -4]), origin=dot(R,7*np.array([1, 2, -3])), length=2, color='r')
ax.set_xlim(-5, 20)
ax.set_ylim(-5, 20)
ax.set_zlim(-20, 3)
###Output
_____no_output_____ |
Working Notebooks/nb3_NBA_random_forest_nb3.ipynb | ###Markdown
Imports
###Code
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import pandas as pd
import itertools
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier, VotingClassifier,
AdaBoostClassifier, BaggingRegressor, StackingClassifier)
from sklearn.metrics import precision_score, recall_score, precision_recall_curve,f1_score, fbeta_score, classification_report
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score, make_scorer, confusion_matrix
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split, GridSearchCV, cross_validate
%config InlineBackend.figure_formats = ['retina'] # or svg
%matplotlib inline
plt.rcParams['figure.figsize'] = (9, 6)
###Output
_____no_output_____
###Markdown
Initialization
###Code
#Load the data into the notebook as a dataframe.
all_seasons_df = pd.read_csv('/Users/johnmetzger/Desktop/Coding/Projects/Project3/all_seasons_df')
###Output
_____no_output_____
###Markdown
Train-Test Split
###Code
X_train, X_test, y_train, y_test = train_test_split(all_seasons_df.drop('WL',axis=1),
all_seasons_df.WL,test_size=0.2, random_state=42)
randomforest = RandomForestClassifier(n_estimators=1000)
randomforest.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predict and Score
###Code
randomforest = RandomForestClassifier(n_estimators=300)
randomforest.fit(X_train, y_train)
print("The accuracy score for Random Forest is")
print("Training: {:6.2f}%".format(100*randomforest.score(X_train, y_train)))
print("Test set: {:6.2f}%".format(100*randomforest.score(X_test, y_test)))
randomforest.score(X_test,y_test)
###Output
_____no_output_____
###Markdown
Cross-Validation
###Code
#10-fold cross-validation.
scores = cross_val_score(randomforest, X_train, y_train, cv=10, scoring='accuracy')
print(scores)
# use average accuracy as an estimate of out-of-sample accuracy
print(scores.mean())
###Output
_____no_output_____
###Markdown
Other Metrics
###Code
y_true, y_pred = y_test, randomforest.predict(X_test)
print(classification_report(y_true, y_pred))
###Output
precision recall f1-score support
0 0.72 0.72 0.72 4657
1 0.72 0.72 0.72 4599
accuracy 0.72 9256
macro avg 0.72 0.72 0.72 9256
weighted avg 0.72 0.72 0.72 9256
###Markdown
Precision-Recall and ROC AUC Curves
###Code
#Let's use predicted probabilities to make a curve showing how precision
# and recall trade off as the decision threshold changes
precision_curve, recall_curve, threshold_curve = precision_recall_curve(y_test, randomforest.predict_proba(X_test)[:,1] )
plt.figure(dpi=80)
plt.plot(threshold_curve, precision_curve[1:],label='precision')
plt.plot(threshold_curve, recall_curve[1:], label='recall')
plt.legend(loc='lower left')
plt.xlabel('Threshold (above this probability, label as Wrong)');
plt.title('Precision and Recall Curves');
###Output
_____no_output_____
###Markdown
* The intersection above is the threshold value where precision and recall are balanced.* Choose the threshold per use case. * Think about whether I want to weight precision or recall more heavily for my use case (see the sketch below).
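A minimal sketch (assuming `precision_curve`, `recall_curve`, `threshold_curve`, and the fitted `randomforest` from the cells above) of selecting the balanced threshold and applying it instead of the default 0.5:
```python
import numpy as np

# Threshold where precision and recall are closest to each other
balance_idx = np.argmin(np.abs(precision_curve[1:] - recall_curve[1:]))
chosen_threshold = threshold_curve[balance_idx]
print("Balanced threshold: {:.3f}".format(chosen_threshold))

# Label as a win only when the predicted probability clears the chosen threshold
y_prob = randomforest.predict_proba(X_test)[:, 1]
y_pred_custom = (y_prob >= chosen_threshold).astype(int)
```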
###Code
from sklearn.metrics import roc_auc_score, roc_curve
fpr, tpr, thresholds = roc_curve(y_test, randomforest.predict_proba(X_test)[:,1])
taste = fpr, tpr, thresholds
#ROC Curve Figure
plt.plot(fpr, tpr,lw=2)
plt.plot([0,1],[0,1],c='violet',ls='--')
plt.xlim([-0.05,1.05])
plt.ylim([-0.05,1.05])
plt.rcParams["figure.figsize"] = (4,4)
plt.savefig('test2png', dpi=300, format='pdf')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve: Predict NBA Winner at Halftime');
print("ROC AUC score = ", roc_auc_score(y_test, randomforest.predict_proba(X_test)[:,1]))
###Output
ROC AUC score = 0.7983613713300354
###Markdown
GridSearch *Due to computational constraints, Grid search hyperparameter ranges were entered manually and then combined for runs later. The hyperparameters are:1. *n_estimators*2. *max_depth*3. *min_samples_split*4. *min_samples_leaf*
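One possible way to cover the full ranges in a single, cheaper pass (a sketch, assuming the `randomforest`, `X_train`, and `y_train` objects defined above; the candidate values mirror the ranges explored in the cells below) is a randomized search instead of an exhaustive grid:
```python
from sklearn.model_selection import RandomizedSearchCV

param_dist = {
    'n_estimators': [10, 30, 100, 500, 1000],
    'max_depth': [5, 8, 10, 12, 14, 16, 18],
    'min_samples_split': [2, 5, 7, 9],
    'min_samples_leaf': [2, 5, 7, 9, 11, 13],
}
rand_search = RandomizedSearchCV(randomforest, param_dist, n_iter=20,
                                 cv=3, n_jobs=-1, random_state=42, verbose=1)
rand_search.fit(X_train, y_train)
print(rand_search.best_score_, rand_search.best_params_)
```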
###Code
max_depth = [14,16,18]
min_samples_leaf = [7,9,11]
min_samples_split = [2,5,7]
n_estimators = [100]
hyperF = dict(n_estimators = n_estimators, max_depth = max_depth,
min_samples_split = min_samples_split,
min_samples_leaf = min_samples_leaf)
gridF = GridSearchCV(randomforest, hyperF, cv = 3, verbose = 1,
n_jobs = -1)
#This makes bestF your random forest model to do .predict on.
gridF.fit(X_train, y_train)
'''
max_depth = [14,16,18]
min_samples_leaf = [7,9,11]
min_samples_split = [2,5,7]
n_estimators = [100]
'''
gridF.best_score_
'''
max_depth = [14,16,18]
min_samples_leaf = [7,9,11]
min_samples_split = [2,5,7]
n_estimators = [100]
'''
gridF.best_params_
'''
max_depth = [12,14,16,18]
min_samples_leaf = [7,9,11,13]
min_samples_split = [5]
n_estimators = [100]
'''
gridF.best_score_
'''
max_depth = [12,14,16,18]
min_samples_leaf = [7,9,11,13]
min_samples_split = [5]
n_estimators = [100]
'''
gridF.best_params_
'''
max_depth = [5,8,10,12,14]
min_samples_leaf = [2,5,7,9]
min_samples_split = [2,5,7,9]
n_estimators = [100]
'''
gridF.best_score_
'''
max_depth = [5,8,10,12,14]
min_samples_leaf = [2,5,7,9]
min_samples_split = [2,5,7,9]
n_estimators = [100]
'''
gridF.best_params_
'''
max_depth = [10]
min_samples_leaf = [7]
min_samples_split = [2]
n_estimators = [10,30,100,500,1000]
'''
gridF.best_score_
'''
max_depth = [10]
min_samples_leaf = [7]
min_samples_split = [2]
n_estimators = [10,30,100,500,1000]
'''
gridF.best_params_
'''
max_depth = [5, 8, 10, 12]
min_samples_leaf = [2,5,7,9]
min_samples_split = [2,5,7,9]
n_estimators = [100]
'''
gridF.best_score_
'''
max_depth = [5, 8, 10, 12]
min_samples_leaf = [5]
min_samples_split = [2,5]
n_estimators = [100]
'''
gridF.best_params_
'''
max_depth = [5, 8, 10, 12]
min_samples_leaf = [5]
min_samples_split = [2,5]
n_estimators = [100]
'''
gridF.best_score_
'''
max_depth = [5, 8, 10, 12]
min_samples_leaf = [2,5,7,9]
min_samples_split = [2,5,7,9]
n_estimators = [100]
'''
gridF.best_params_
'''
max_depth = [5, 8, 10, 12]
min_samples_leaf = [5]
min_samples_split = [2]
n_estimators = [100]
'''
gridF.best_score_
'''
max_depth = [5, 8, 10, 12]
min_samples_leaf = [5]
min_samples_split = [2]
n_estimators = [100]
'''
gridF.best_params_
###Output
_____no_output_____
###Markdown
Best Hyperparameters Note: not all combinations were tested all at once.Range tested:* max_depth = [5,8,10,12]* min_samples_leaf = [2,5,7,9] * min_samples_split = [2,5,7,9]* n_estimators = [10,30,100,500,1000]
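A short sketch (assuming `gridF`, `X_train`, and `X_test` from the cells above) of refitting and scoring a model with the best hyperparameters found:
```python
# gridF is refit on the full training set by default, so best_estimator_ can be used directly
best_rf = gridF.best_estimator_
print("Tuned test accuracy: {:.4f}".format(best_rf.score(X_test, y_test)))

# Alternatively, rebuild the model explicitly from the best parameters
tuned_rf = RandomForestClassifier(**gridF.best_params_, random_state=42)
tuned_rf.fit(X_train, y_train)
print("Rebuilt test accuracy: {:.4f}".format(tuned_rf.score(X_test, y_test)))
```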
###Code
'''
max_depth = [14,16,18]
min_samples_leaf = [7,9,11]
min_samples_split = [2,5,7]
n_estimators = [100]'''
gridF.best_score_
'''
max_depth = [14,16,18]
min_samples_leaf = [7,9,11]
min_samples_split = [2,5,7]
n_estimators = [100]'''
gridF.best_params_
###Output
_____no_output_____ |
experiments_and_development/SimCLRv2-PyTorch/data_processing.ipynb | ###Markdown
CLOVER Data Processing DevelopmentNotebook for data processing we'll need to run models for CLOVER.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import pandas as pd
import os
import sys
from torchvision.datasets import ImageFolder
sys.path.append('/Users/kaipak/dev/clover_datasets/src/')
from datasets import CloverDatasets
df = pd.read_csv(r"/Users/kaipak/dev/clover_datasets/metadata/msl/msl_synset_words-indexed.txt",
names=['idx', 'name'], delimiter='\s{4,}', index_col=False)
df
foo = CloverDatasets(data_path='/Users/kaipak/datasets/CLOVER/',
out_path='/Users/kaipak/datasets/CLOVER/processed')
foo.generate_mslv2_dataset(train_file='/Users/kaipak/dev/clover_datasets/metadata/msl/90pctTrain.txt')
df = pd.read_csv('/Users/kaipak/datasets/CLOVER/msl-labeled-data-set-v2.1/90pctTrain.txt',
sep='\t', names=['img', 'label'])
df = pd.read_csv('/Users/kaipak/dev/clover_datasets/metadata/msl/train-set-v2.1.txt',
delimiter='\s', names=['foo', 'bar'])
df.bar.unique()
foo = ImageFolder('/Users/kaipak/datasets/CLOVER/processed/msl_dataset/train/')
foo.make_dataset(class_to_idx='/Users/kaipak/dev/clover_datasets/metadata/msl/msl_synset_words-indexed.txt')
foo.classes
###Output
_____no_output_____ |
.ipynb_checkpoints/Terry Stops v7-checkpoint.ipynb | ###Markdown
Seattle Terry Stops Final Project Submission* Student name: Rebecca Mih* Student pace: Part Time Online* Scheduled project review date/time: * Instructor name: James Irving* Blog post URL: * **Data Source:** https://www.kaggle.com/city-of-seattle/seattle-terry-stops * Date of last update to the datasource: April 15, 2020* **Key references:*** https://assets.documentcloud.org/documents/6136893/SPDs-2019-Annual-Report-on-Stops-and-Detentions.pdf * https://www.seattletimes.com/seattle-news/crime/federal-monitor-finds-seattle-police-are-conducting-proper-stops-and-frisks/ * https://catboost.ai/docs/concepts/python-reference_catboost_grid_search.html* https://towardsdatascience.com/catboost-vs-light-gbm-vs-xgboost-5f93620723db <img src= "Seattle Police Dept.jpg" width=200"/></div Backgroundhttps://caselaw.findlaw.com/us-supreme-court/392/1.htmlThis data represents records of police reported stops under Terry v. Ohio, 392 U.S. 1 (1968). Each row represents a unique stop. A Terry stop is a seizure under both state and federal law. A Terry stop isdefined in policy as a brief, minimally intrusive seizure of a subject based upon**articulable reasonable suspicion (ARS) in order to investigate possible criminal activity.**The stop can apply to people as well as to vehicles. The subject of a Terry stop is**not** free to leave.Section 6.220 of the Seattle Police Department (SPD) Manual defines Reasonable Suspicion as:Specific, objective, articulable facts which, taken together with rational inferences, wouldcreate a **well-founded suspicion that there is a substantial possibility that a subject hasengaged, is engaging or is about to engage in criminal conduct.**- Each record contains perceived demographics of the subject, as reported by the officer making the stop and officer demographics as reported to the Seattle Police Department, for employment purposes.- Where available, data elements from the associated Computer Aided Dispatch (CAD) event (e.g. Call Type, Initial Call Type, Final Call Type) are included. Notes on Concealed Weapons in the State of WashingtonWHAT ARE WASHINGTON’S CONCEALED CARRY LAWS?Open carry of a firearm is lawful without a permit in the state of Washington except, according to the law, “under circumstances, and at a time and place that either manifests an intent to intimidate another or that warrants alarm for the safety of other persons.”**However, open carry of a loaded handgun in a vehicle is legal only with a concealed pistol license. Open carry of a loaded long gun in a vehicle is illegal.**The criminal charge of “carrying a concealed firearm” happens in this state when someone carries a concealed firearm **without a concealed pistol license**. It does not matter if the weapon was discovered in the defendant’s home, vehicle, or on his or her person. Objectives Target: * Identify Terry Stops which lead to Arrest or Prosecution (Binary Classification) Features: * Location (Precinct) * Day of the Week (Date) * Shift (Time) * Initial Call Type * Final Call Type * Stop Resolution * Weapon type * Officer Squad * Age of officer * Age of detainee Optional Features: * Race of officer * Race of detainee * Gender of officer * Gender of detainee Definition of Features Provided Column Names and descriptions provided in the SPD dataset * **Subject Age Group** Subject Age Group (10 year increments) as reported by the officer. * **Subject ID** Key, generated daily, identifying unique subjects in the dataset using a character to character match of first name and last name. 
"Null" values indicate an "anonymous" or "unidentified" subject. Subjects of a Terry Stop are not required to present identification. **Not Used** * **GO / SC Num**General Offense or Street Check number, relating the Terry Stop to the parent report. This field may have a one to many relationship in the data. **Not Used** * **Terry Stop ID**Key identifying unique Terry Stop reports. **Not Used*** **Stop Resolution**Resolution of the stop**One hot encoding** * **Weapon Type** Type of weapon, if any, identified during a search or frisk of the subject. Indicates "None" if no weapons was found. * **Officer ID** Key identifying unique officers in the dataset.**Not Used** * **Officer YOB** Year of birth, as reported by the officer. * **Officer Gender** Gender of the officer, as reported by the officer. * **Officer Race** Race of the officer, as reported by the officer. * **Subject Perceived Race** Perceived race of the subject, as reported by the officer. * **Subject Perceived Gender** Perceived gender of the subject, as reported by the officer. * **Reported Date** Date the report was filed in the Records Management System (RMS). Not necessarily the date the stop occurred but generally within 1 day. * **Reported Time** Time the stop was reported in the Records Management System (RMS). Not the time the stop occurred but generally within 10 hours. * **Initial Call Type** Initial classification of the call as assigned by 911. * **Final Call Type** Final classification of the call as assigned by the primary officer closing the event. * **Call Type** How the call was received by the communication center.* **Officer Squad** Functional squad assignment (not budget) of the officer as reported by the Data Analytics Platform (DAP). * **Arrest Flag** Indicator of whether a "physical arrest" was made, of the subject, during the Terry Stop. Does not necessarily reflect a report of an arrest in the Records Management System (RMS). * **Frisk Flag** Indicator of whether a "frisk" was conducted, by the officer, of the subject, during the Terry Stop. * **Precinct** Precinct of the address associated with the underlying Computer Aided Dispatch (CAD) event. Not necessarily where the Terry Stop occurred. * **Sector** Sector of the address associated with the underlying Computer Aided Dispatch (CAD) event. Not necessarily where the Terry Stop occurred. * **Beat** Beat of the address associated with the underlying Computer Aided Dispatch (CAD) event. Not necessarily where the Terry Stop occurred. Analysis Workflow (OSEMN) 1. **Obtain and Pre-process** - [x] Import data - [x] Remove unused columns - [x] Check data size, NaNs, and of non-null values which are not valid data - [x] Clean up missing values by imputing values or dropping - [x] Replace ? or other non-valid data by imputing values or dropping data - [x] Check for duplicates and remove if appropriate - [x] Change datatypes of columns as appropriate - [x] Note which features are continuous and which are categorical2. **Data Scoping** - [x] Use value_counts() to identify dummy categories such as "-", or "?" for later re-mapping - [x] Identify most common word data - [x] Decide on which columns (features) to keep for further feature engineering 3. 
**Transformation of data (Feature Engineering)** - [x] Re-bin categories to reduce noise - [x] Re-map categories as needed - [x] Engineer text data to extract common word information - [x] Transform categoricals using 1-hot encoding or label encoding/ - [x] Perform log transformations on continuous variables (if applicable) - [x] Normalize continuous variables - [x] Use re-sampling if needed to balance the dataset 4. **Further Feature Selection** - [x] Use .describe() and .hist() histograms - [x] Identify outliers (based on auto-scaling of plots) and remove or inpute as needed - [x] Perform visualizations on key features to understand - [x] Inspect feature correlations (Pearson correlation) to identify co-linear features**5. **Create a Vanilla Machine Learning Model** - [x] Split into train and test data - [x] Run the model - [x] Review Quality indicators of the model 6. **Run more advanced models** - [x] Compare the model quality - [x] Choose one or more models for grid searching 7. **Revise data inputs if needed to improve quality indicators** - [x] By adding created features, and removing colinear features - [x] By improving unbalanced datasets through oversampling or undersampling - [x] by removing outliers through filters - [x] through use of subject matter knowledge 8. **Write the Report** - [X] Explain key findings and recommended next steps 1. Obtain and Pre-Process the Data 1. **Obtain and Pre-process** - [x] Import data - [x] Remove unused columns - [x] Check data size, NaNs, and of non-null values which are not valid data - [x] Clean up missing values by imputing values or dropping - [x] Replace ? or other non-valid data by imputing values or dropping data - [x] Check for duplicates and remove if appropriate - [x] Change datatypes of columns as appropriate - [x] Decide the target column, if not already decided - [x] Determine if some data is not relevent to the question (drop columns or rows) - [x] Note which features which will need to be re-mapped or encoded - [x] Note which features might require feature engineering (example - date, time)
###Code
#!pip install -U fsds_100719
from fsds_100719.imports import *
#import pandas as pd
#import numpy as np
#import matplotlib.pyplot as plt
#import seaborn as sns
import copy
import sklearn
import math
import datetime
#import plotly.express as px
#import plotly.graphy_objects as go
import warnings
warnings.filterwarnings('ignore')
import sklearn.metrics as metrics
pd.options.display.float_format = '{:.2f}'.format
pd.set_option('display.max_columns',0)
pd.set_option('display.max_info_rows',200)
%matplotlib inline
def plot_importance2(tree, top_n=20,figsize=(10,10)):
df_importance = pd.Series(tree.feature_importances_,index=X_train.columns)
df_importance.sort_values(ascending=True).tail(top_n).plot(kind='barh',figsize=figsize)
return df_importance
#check = evaluate_model(y_test,y_hat_test, X_test, xgb_rf)
plot_importance2(xgb_rf)
# Write a function which evaluates the model and returns the feature importances
def evaluate_model(y_true, y_pred,X_true,clf,cm_kws=dict(cmap="Blues",
normalize='true'),figsize=(10,4),plot_roc_auc=True):
## Reporting Scores
print('Accuracy Score :',accuracy_score(y_true, y_pred))
print(metrics.classification_report(y_true,y_pred,
target_names = ['Not Arrested', 'Arrested']))
if plot_roc_auc:
num_cols=2
else:
num_cols=1
fig, ax = plt.subplots(figsize=figsize,ncols=num_cols)
if not isinstance(ax,np.ndarray):
ax=[ax]
metrics.plot_confusion_matrix(clf,X_true,y_true,ax=ax[0],**cm_kws)
ax[0].set(title='Confusion Matrix')
if plot_roc_auc:
try:
y_score = clf.predict_proba(X_true)[:,1]
fpr,tpr,thresh = metrics.roc_curve(y_true,y_score)
# print(f"ROC-area-under-the-curve= {}")
roc_auc = round(metrics.auc(fpr,tpr),3)
ax[1].plot(fpr,tpr,color='darkorange',label=f'ROC Curve (AUC={roc_auc})')
ax[1].plot([0,1],[0,1],ls=':')
ax[1].legend()
ax[1].grid()
ax[1].set(ylabel='True Positive Rate',xlabel='False Positive Rate',
title='Receiver operating characteristic (ROC) Curve')
plt.tight_layout()
plt.show()
except:
pass
try:
df_important = plot_importance(clf)
except:
df_important = None
return df_important
def plot_importance(tree, top_n=20,figsize=(10,10),expt_name='Model'):
'''Feature Selection tool, which plots the feature importance based on results
Inputs:
tree: classification learning function utilized
top_n: top n features contributing to the model, default = 20
figsize: size of the plot, default=(10,10)
expt_name: Pass in the experiment name, so that the saved feature importance image will be unique
default = Model
Returns: df_importance - series of the model features sorted by importance
Saves: Feature importance figure as "Feature expt_name.png", Default expt_name = "Model" '''
df_importance = pd.Series(tree.feature_importances_,index=X_train.columns)
df_importance.sort_values(ascending=True).tail(top_n).plot(kind='barh',figsize=figsize)
plt.savefig(("Feature {}.png").format(expt_name))
#plt.savefig("Feature Importance 2.png", transparent = True)
return df_importance
#check = evaluate_model(y_test,y_hat_test, X_test, xgb_rf)
plot_importance(xgb_rf)
# Write a function which evaluates the model and returns a dataframe of the classification metrics
def evaluate_model(y_true, y_pred,X_true,clf,metrics_df,
cm_kws=dict(cmap="Greens",normalize='true'),figsize=(10,4),plot_roc_auc=True,
expt_name='Model'):
'''Function which evaluates each model, stores the result and figures
Inputs:
y_true: target output of the model based on test data
y_pred: target input to the model based on train data
X_true: result output of the model based on test data
clf: classification learning function utilized for the model (examples: xgb-rf, Catboost)
metrics_df: dataframe which contains the classification metrics
(precision, recall, f1-score, weighted average)
cm_kws: keyword settings for plotting and normalization
Defaults: cmap="Blues", normalize = "true"
figsize: size of the plot, default=(10,10)
expt_name: Pass in the experiment name, so that the saved feature importance image will be unique
default = 'Model'
Outputs: df_important - series of the model features sorted by importance
Saves: roc_auc plot - plot of AUC for the model
Feature importance plot
'''
## Reporting Scores
accuracy_result = accuracy_score(y_true, y_pred)
print('Accuracy Score for {}: {}'.format(expt_name, accuracy_result))
metrics_report = metrics.classification_report(y_true,y_pred,
target_names = ['Not Arrested', 'Arrested'],
output_dict=True)
#print(metrics_report)
## Save scores into the results dataframe
result_df = pd.DataFrame(metrics_report).transpose()
#display(result_df)
result_df.drop(labels='macro avg',axis = 0, inplace=True)
result_df.drop(labels='support', axis = 1, inplace=True)
#display(result_df)
# Swap Rows https://stackoverflow.com/questions/55439469/swapping-two-rows-together-with-index-within-the-same-pandas-dataframe
result_df = result_df.iloc[np.r_[0:len(result_df) - 2, -1, -2]]
result_df.rename(index= {'weighted avg':'Weighted Avg', 'accuracy':'Accuracy'}, inplace=True)
result_df.rename(columns = {'precision': 'Precision', 'recall':'Recall',
'f1-score':'F1 Score'}, inplace=True)
column_list = result_df.columns
display(result_df)
if plot_roc_auc:
num_cols=2
else:
num_cols=1
fig, ax = plt.subplots(figsize=figsize,ncols=num_cols)
if not isinstance(ax,np.ndarray):
ax=[ax]
metrics.plot_confusion_matrix(clf,X_true,y_true,ax=ax[0],**cm_kws)
ax[0].set(title='Confusion Matrix')
plt.savefig("Confusion Matrix {}.png").format(expt_name)
if plot_roc_auc:
try:
y_score = clf.predict_proba(X_true)[:,1]
fpr,tpr,thresh = metrics.roc_curve(y_true,y_score)
roc_auc = round(metrics.auc(fpr,tpr),3)
ax[1].plot(fpr,tpr,color='darkorange',label=f'ROC Curve (AUC={roc_auc})')
ax[1].plot([0,1],[0,1],ls=':')
ax[1].legend()
ax[1].grid()
ax[1].set(ylabel='True Positive Rate',xlabel='False Positive Rate',
title='Receiver operating characteristic (ROC) Curve')
plt.tight_layout()
plt.show()
plt.savefig("ROC Curve {}.png").format(expt_name)
# #res = result_df.set_value(len(res), roc_auc, roc_auc, roc_auc)
except:
print('ROC-AUC not working')
try:
df_important = plot_importance(clf)
except:
df_important = None
print('importance plotting not working')
return result_df
#def evaluate_model(y_true, y_pred,X_true,clf,metrics_df,
# cm_kws=dict(cmap="Greens",normalize='true'),figsize=(10,4),plot_roc_auc=True,
# expt_name='Model'):
''' metrics_report = metrics.classification_report(y_test,y_hat_test,
target_names = ['Not Arrested', 'Arrested'],
output_dict=True)
#print(metrics_report)
## Save scores into the results dataframe
result_df = pd.DataFrame(metrics_report).transpose()
#display(result_df)
result_df.drop(labels='macro avg',axis = 0, inplace=True)
result_df.drop(labels='support', axis = 1, inplace=True)
#display(result_df)
# Swap Rows https://stackoverflow.com/questions/55439469/swapping-two-rows-together-with-index-within-the-same-pandas-dataframe
result_df.iloc[np.r_[0:len(result_df) - 2, -1, -2]]
result_df.rename(index= {'weighted avg':'Weighted Avg', 'accuracy':'Accuracy'}, inplace=True)
result_df.rename(columns = {'precision': 'Precision', 'recall':'Recall',
'f1-score':'F1 Score'}, inplace=True)
column_list = result_df.columns
#result_df = result_df.set_value(len(result_df), 'aoc', 'aoc', 'aoc')
display(result_df) '''
df = pd.read_csv('Terry_Stops.csv',low_memory=False)
df.duplicated().sum()
df.head()
###Output
_____no_output_____
###Markdown
* Drop Columns which contain IDs, which are not useful features.
###Code
df.drop(columns = ['Subject ID', 'GO / SC Num', 'Terry Stop ID', 'Officer ID'], inplace=True)
df.duplicated().sum()
# After dropping some of the columns, some rows appear to be duplicated.
# However, since the date and time of the incident are NOT exact (i.e. the date could be 24 hours later, and the
# time could be 10 hours later), it's possible to get some that are similar on different consecutive dates.
df.columns
col_names = df.columns
print(col_names)
df.shape
# The rationale for this is to understand how big the dataset is, how many features are contained in the data
# This helps with planning for function vs lambda functions, and whether certain kinds of visualizations will be feasible
# for the analysis (with my computer hardware). With compute limitations, types of correlation plots cause the kernal to die,
# if there are more than 11 features.
###Output
_____no_output_____
###Markdown
* df.isna().sum() determines how many values are missing from each feature* df.info() helps you determine if there are missing values or datatypes that need to be modified* Handy alternate checks if needed: - [x] df.isna().any() - [x] df.isnull().any() - [x] df.shape
###Code
df.isna().sum()
df['Officer Squad'].fillna('Unknown', inplace=True)
###Output
_____no_output_____
###Markdown
* Findings from isna().sum() ** Officer Squad has 535 missing data (1.3% of the data) * Impute "Unknown"
###Code
df.isna().sum()
df.info()
df.duplicated().sum()
duplicates = df[df.duplicated(keep = False)]
#duplicates.head(118)
###Output
_____no_output_____
###Markdown
Use value_counts() - inspect for dummy variables, and determine next steps for data cleaning1. Rationale: This analysis is useful for flushing out missing values in the form of question marks, dashes or other symbols or dummy variables 2. It also gives a preliminary view of the number and distribution of categories in each feature, albeit by numbers rather than graphics 3. For text data, value_counts serves as a preliminary investigation of the common important word data
###Code
for col in df.columns:
print(col, '\n', df[col].value_counts(), '\n')
###Output
Subject Age Group
26 - 35 13615
36 - 45 8547
18 - 25 8509
46 - 55 5274
56 and Above 1996
1 - 17 1876
- 1287
Name: Subject Age Group, dtype: int64
Stop Resolution
Field Contact 16287
Offense Report 13976
Arrest 9957
Referred for Prosecution 728
Citation / Infraction 156
Name: Stop Resolution, dtype: int64
Weapon Type
None 32565
- 6213
Lethal Cutting Instrument 1482
Knife/Cutting/Stabbing Instrument 308
Handgun 262
Firearm Other 100
Club, Blackjack, Brass Knuckles 49
Blunt Object/Striking Implement 37
Firearm 18
Firearm (unk type) 15
Other Firearm 13
Mace/Pepper Spray 12
Club 9
Rifle 5
Taser/Stun Gun 4
None/Not Applicable 4
Shotgun 3
Automatic Handgun 2
Brass Knuckles 1
Blackjack 1
Fire/Incendiary Device 1
Name: Weapon Type, dtype: int64
Officer YOB
1986 2930
1987 2600
1984 2558
1991 2356
1985 2331
1992 2033
1990 1892
1988 1831
1989 1753
1982 1733
1983 1587
1979 1351
1981 1268
1971 1177
1993 1113
1978 1043
1977 933
1976 904
1973 856
1980 754
1995 753
1967 684
1994 591
1968 583
1970 544
1974 522
1969 511
1975 475
1962 447
1965 403
1972 394
1964 391
1996 252
1963 227
1958 218
1966 216
1961 202
1959 174
1997 170
1960 154
1954 43
1957 41
1953 32
1955 21
1956 16
1948 11
1900 9
1952 9
1949 5
1946 2
1951 1
Name: Officer YOB, dtype: int64
Officer Gender
M 36504
F 4593
N 7
Name: Officer Gender, dtype: int64
Officer Race
White 31805
Hispanic or Latino 2255
Two or More Races 2158
Black or African American 1674
Asian 1563
Not Specified 912
Nat Hawaiian/Oth Pac Islander 419
American Indian/Alaska Native 309
Unknown 9
Name: Officer Race, dtype: int64
Subject Perceived Race
White 20192
Black or African American 12243
Unknown 2073
Hispanic 1684
- 1422
Asian 1278
American Indian or Alaska Native 1224
Multi-Racial 809
Other 152
Native Hawaiian or Other Pacific Islander 27
Name: Subject Perceived Race, dtype: int64
Subject Perceived Gender
Male 32049
Female 8468
Unable to Determine 326
- 253
Unknown 7
Gender Diverse (gender non-conforming and/or transgender) 1
Name: Subject Perceived Gender, dtype: int64
Reported Date
2015-10-01T00:00:00 101
2015-09-29T00:00:00 66
2015-05-28T00:00:00 57
2015-07-18T00:00:00 55
2019-04-26T00:00:00 54
...
2015-03-28T00:00:00 1
2015-04-28T00:00:00 1
2015-05-10T00:00:00 1
2015-03-24T00:00:00 1
2015-03-15T00:00:00 1
Name: Reported Date, Length: 1860, dtype: int64
Reported Time
19:18:00 51
03:13:00 50
02:56:00 50
03:09:00 50
18:51:00 49
..
09:25:29 1
12:40:10 1
00:38:28 1
19:20:21 1
20:36:41 1
Name: Reported Time, Length: 7644, dtype: int64
Initial Call Type
- 12743
SUSPICIOUS PERSON, VEHICLE OR INCIDENT 2492
SUSPICIOUS STOP - OFFICER INITIATED ONVIEW 2489
DISTURBANCE, MISCELLANEOUS/OTHER 2116
ASLT - IP/JO - WITH OR W/O WPNS (NO SHOOTINGS) 1714
...
VICE - PORNOGRAPHY 1
ESCAPE - PRISONER 1
PHONE - OBSCENE OR NUISANCE PHONE CALLS 1
ALARM - RESIDENTIAL - SILENT/AUD PANIC/DURESS 1
KNOWN KIDNAPPNG 1
Name: Initial Call Type, Length: 161, dtype: int64
Final Call Type
- 12743
--SUSPICIOUS CIRCUM. - SUSPICIOUS PERSON 2991
--PROWLER - TRESPASS 2672
--DISTURBANCE - OTHER 2318
--ASSAULTS, OTHER 1967
...
MVC - WITH INJURIES (INCLUDES HIT AND RUN) 1
--PREMISE CHECKS - REQUEST TO WATCH 1
-ASSIGNED DUTY - FOOT BEAT (FROM ASSIGNED CAR) 1
-ASSIGNED DUTY - STAKEOUT 1
ORDER - VIOLATION OF COURT ORDER (NON DV) 1
Name: Final Call Type, Length: 193, dtype: int64
Call Type
911 17857
- 12743
ONVIEW 7445
TELEPHONE OTHER, NOT 911 2828
ALARM CALL (NOT POLICE ALARM) 226
PROACTIVE (OFFICER INITIATED) 2
TEXT MESSAGE 2
SCHEDULED EVENT (RECURRING) 1
Name: Call Type, dtype: int64
Officer Squad
TRAINING - FIELD TRAINING SQUAD 4310
WEST PCT 1ST W - DAVID/MARY 1273
NORTH PCT 2ND WATCH - NORTH BEATS 879
WEST PCT 2ND W - D/M RELIEF 874
SOUTHWEST PCT 2ND W - FRANK 838
...
HR - BLEA - ACADEMY RECRUITS 1
TRAINING - ADVANCED - SQUAD C 1
ZOLD CRIME ANALYSIS UNIT - ANALYSTS 1
VICE - GENERAL INVESTIGATIONS SQUAD 1
TRAF - MOTORCYCLE UNIT - T2 SQUAD 1
Name: Officer Squad, Length: 158, dtype: int64
Arrest Flag
N 39355
Y 1749
Name: Arrest Flag, dtype: int64
Frisk Flag
N 31577
Y 9049
- 478
Name: Frisk Flag, dtype: int64
Precinct
- 9485
North 9166
West 8937
East 5515
South 4893
Southwest 2320
SouthWest 558
Unknown 200
OOJ 18
FK ERROR 12
Name: Precinct, dtype: int64
Sector
- 9664
E 2337
M 2270
N 2191
K 1762
B 1658
L 1639
D 1512
R 1455
F 1378
S 1348
U 1302
O 1161
J 1119
G 1087
C 1037
Q 967
K 950
W 941
E 596
M 569
D 532
N 377
Q 375
O 344
F 337
R 302
S 282
G 256
B 236
J 231
U 223
W 222
C 202
L 189
99 53
Name: Sector, dtype: int64
Beat
- 9630
N3 1175
E2 1092
M2 852
M3 792
...
C2 45
N1 42
OOJ 17
99 15
S 2
Name: Beat, Length: 107, dtype: int64
###Markdown
Findings from value_counts() and Next Steps:1. The "-" is used as a substitute for unknown, in many cases. Perhaps it would be good to build a function to impute "unknown" for the "-" for multiple features2. Race and gender need re-mapping3. Call Types, Weapons need re-binning4. Officer Squad text can be split and provide the precinct, and the watch.**Next steps:**- [x] Investigation of the Stop Resolution, to determine whether the target should be "Stop Resolution - Arrests" or "Arrest Flag", and whether "Frisk Flag" is useful for predicting arrests.- [x] Decide whether time and location information can be extracted from the "Officer Squad" column instead of the columns for time, Precinct, Sector and Beats
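A minimal sketch of such a helper (the column list is only an example drawn from the counts above; the cells below ultimately handle the "-" values through explicit mapping dictionaries):
```python
def impute_unknown(data, columns, placeholder='-', fill_value='Unknown'):
    """Replace a placeholder string with a fill value in the given columns."""
    for col in columns:
        data[col] = data[col].replace(placeholder, fill_value)
    return data

# Example usage (hypothetical column selection):
# df = impute_unknown(df, ['Subject Perceived Race', 'Weapon Type', 'Precinct'])
```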
###Code
# Viewing the data to get a sense of which Stop Resolutions are correlated to the "Arrest Flag"
df.sort_values(by=['Stop Resolution'], ascending=True).head(100)
# Check out what are the differences between a Stop Resolution of "Arrest" and the "Arrest Flag"
df.loc[(df['Stop Resolution']=='Arrest') & (df['Arrest Flag']=="N")].shape
# This is the number of cases where the final stop resolution as reported by the officer, was "Arrest" and the
# Arrest Flag was N. This indicates that many arrests are finalized after the actual Terry Stop
df.loc[(df['Stop Resolution']!='Arrest') & (df['Arrest Flag']=="Y")].shape
# Number of times an arrest was not made, but the arrest flag was yes (an arrest was made during the Terry Stop)
df.loc[(df['Stop Resolution']=='Arrest') & (df['Arrest Flag']=="Y")].shape
# These are the number of arrests DURING the Terry stop, that had a final resolution of arrest
# Conclusion: Use the Stop Resolution of Arrest to capture all the arrests made arising from a Terry stop
# The total number of arrests as repored by the officers is 8210 + 1747 or ~ 25% of the total # of Terry stops
# Check to see whether the Frisk Flag has usefulness
df.loc[(df['Stop Resolution']=='Arrest') & (df['Frisk Flag']=="Y")].shape
# Out of ~10,000 arrests (and ~9,000 frisks), only about 30% of arrests involved a frisk
# It would appear that the 'Frisk Flag' is not helpful for predicting arrests. Drop the 'Frisk Flag'
# Check whether 'Call Type' has usefulness
df.loc[(df['Stop Resolution']=='Arrest') & (df['Call Type']=="911")].shape
# Out of ~10,000 arrests roughly 50% came through 911. Doesn't appear to be particularly useful for predicting arrests
# Drop the 'Call Type'
df.head()
###Output
_____no_output_____
###Markdown
2. Data Scoping 1. Which is better to use the "Arrest Flag" column or the "Stop Resolution column as the target?: * Arrest Flag is a'1' only when there was an actual arrest during the Terry Stop. Which may not be easy to do, resulting in a lower number (1747) * Stop Resolution records ~10,000 arrests, roughly 25% of the total dataset. Since Stop Resolution is about officers recording the resolution of the Terry Stop, and with a likely performance target for officers, they are likely to record this more accurately. * A quick check of "Frisk Flag" which is an indicator of those Terry stops where a Frisk was performed, does not seem well correlated with arrests. Recomend to drop "Frisk Flag" Conclusion: Use "Stop Resolution" Arrests as the target - [x] Create a new column called "Arrests" which encodes Stop Resolution Arrests as a "1" and all others "0". - [x] Drop the "Arrest Flag" column - [x] Drop the "Frisk Flag" column 2. Location data, there are a number of columns which relate to location such as "Precincts", "Officer Squad", "Sector", "Beat", but are indirect measures of the actual location of the Terry Stop. Inspection of the "Officer Squad" text shows the Location assignment of the officer making the report. In ~10% of cases, Terry stops were performed by field training units or other units which are not captured by precinct (hence roughly 25% of the precincps are unknown). The training unit information is captured in the "Officer Squad" column. 3. For time data there is a "Reported Time" -- which is the time when the officer report was submitted, and according to the documentation could be delayed up to 10 hours, rather than the time of the actual Terry stop. However, inspection of the text in "Officer Squad" shows that the reporting officer's watch is recorded. In the Seattle police squad there are 3 watches to cover each 24 hour period. Watch 1 (03:00 - 11:00), Watch 2 (11:00 - 19:00), and Watch 3 (19:00 - 03:00). Since officer performance is rated based on number of cases and crimes prevented or apprehended, likely the "Officer Squad" data which comes from the report is likely to be the most reliable in terms of time. Conclusion: Use "Officer Squad" text data for time and location- [x] Parse the "Officer Squad" data to capture the location and time based on officer assignments, creating columns for location and watch. - [x] Drop the "Reported Time", "Precincts", "Sector", and "Beat" columns
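As a quick illustration (a sketch that must run before the columns are dropped in the next cell), the relationships checked above could also be summarized with cross-tabulations:
```python
# Share of each Stop Resolution category that involved a frisk
display(pd.crosstab(df['Stop Resolution'], df['Frisk Flag'], normalize='index'))

# Share of each Stop Resolution category by how the call was received
display(pd.crosstab(df['Stop Resolution'], df['Call Type'], normalize='index'))
```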
###Code
df.drop(columns=['Arrest Flag', 'Frisk Flag', 'Reported Time', 'Precinct', 'Sector', 'Beat'], inplace = True)
# Re-Check for duplicates
#duplicates = seattle_df[seattle_df.duplicated(subset =['id'], keep = False)]
#duplicates.sort_values(by=['id']).head()
duplicates = df[df.duplicated(keep = False)]
df.duplicated().sum()
###Output
_____no_output_____
###Markdown
Finding from duplicated():- If you look at the beginning of the analysis, I checked for duplications with the entire dataset (before removing columns of data, such as "ID"), there were no duplicates. But after dropping the ID, there are 118 rows in duplication, 59 pairs. - Because the date and time are not exact (the documentation says sometimes the date could have been entered 24 hours later, or the time could be off by 10 hours, so that actually unique Terry stops could have the same data (when the ID columns are removed).- There are a few that are arrests. Still open to decide whether to remove the duplicated data or not. - What is curious is that the index number is not always consecutive between different pairs of duplicates. This suggests that perhaps the data was input twice -- maybe due to some computer or internet glitches? 3. Data Transformation * Officer data: YOB, race, gender * Subject data- Age Group, race, gender * Stop Resolution (target column) * Weapons * Type of potential crime: Call type Initial and Final * Date to day of week * Location and time: from Officer Squad A. Transform Gender Using Dictionary Mapping .map()
###Code
# Re-mapping gender categories. 0 = Male, 1 = Female, 2 = Unknown
# officer_gender
officer_gender = {'M':0, 'F':1, 'N':2}
df['Officer Gender'] = df['Officer Gender'].map(officer_gender)
# subject perceived gender
subject_gender = {'Male':0, 'Female':1, 'Unknown':2, '-':2,
'Unable to Determine':2, 'Gender Diverse (gender non-conforming and/or transgender)':2}
df['Subject Perceived Gender'] = df['Subject Perceived Gender'].map(subject_gender)
#Check the mapping
df.loc[(df['Officer Gender']== 0.0)].shape, df.loc[(df['Subject Perceived Gender']== 0.0)].shape
df['Officer Gender'].value_counts()
df['Subject Perceived Gender'].value_counts()
df.loc[(df['Stop Resolution']=='Arrest') & (df['Subject Perceived Gender'].isna())].shape
# Checking whether any of those arrested have a missing gender after mapping (== np.nan never matches, so use .isna()). In this case none
# Check the mapping
df['Officer Gender'].isna().sum(), df['Subject Perceived Gender'].isna().sum() #NAs are not found
###Output
_____no_output_____
###Markdown
B. Transform Age Using Dictionary Mapping .map() and binning (.cut)
###Code
# Re-mapping subject age categories
subject_age = {'1 - 17':1, '18 - 25':2, '26 - 35':3, '36 - 45':4, '46 - 55':5, '56 and Above':6, '-':0}
df['Subject Age Group'] = df['Subject Age Group'].map(subject_age)
df['Subject Age Group'].isna().sum()
df['Subject Age Group'].value_counts()
# Checking to see of those arrested, how many had an unknown age group
# There are 193 arrests of people whose age is unknown
df.loc[(df['Stop Resolution']=='Arrest') & (df['Subject Age Group']== 0)].shape
# Calculated the Officers Age, and bin into same bins as the subject age
df['Reported Year']=pd.to_datetime(df['Reported Date']).dt.year
df['Reported Year']
df['Officer Age'] = df['Reported Year'] - df['Officer YOB']
df['Officer Age'].value_counts(dropna=False)
#subject_age = {'1 - 17':1, '18 - 25':2, '26 - 35':3, '36 - 45':4, '46 - 55':5, '56 and Above':6, '-':0}
#bins = [0, 17, 25, 35, 45, 55,85]
#age_bins = pd.cut(df['Officer Age'], bins)
#age_bins.cat.as_ordered()
#age_bins.head()
df['Officer Age'] =pd.cut(x=df['Officer Age'], bins=[1,18,25,35,45,55,70,120], labels = [1,2,3,4,5,6,0])
df['Officer Age'].value_counts(dropna=False)
df.head()
###Output
_____no_output_____
###Markdown
C. Transform Gender using Dictionary Mapping
###Code
# Check how many arrested had unknown race (or - or other)
df.loc[(df['Stop Resolution']=='Arrest') & (df['Subject Perceived Race']== "Unknown")].shape
#df.loc[(df['Stop Resolution']=='Arrest') & (df['Subject Perceived Race']== "-")].shape
#df.loc[(df['Stop Resolution']=='Arrest') & (df['Subject Perceived Race']== "Other")].shape
df['Subject Perceived Race'].value_counts()
race_map = {'White': 'White', 'Black or African American':'African American', 'Hispanic':'Hispanic',
'Hispanic or Latino':'Hispanic', 'Two or More Races':'Multi-Racial','Multi-Racial':'Multi-Racial',
'American Indian or Alaska Native':'Native', 'American Indian/Alaska Native':'Native',
'Native Hawaiian or Other Pacific Islander':'Native', 'Nat Hawaiian/Oth Pac Islander':'Native',
'-':'Unknown', 'Other':'Unknown', 'Not Specified':'Unknown','Unknown':'Unknown',
'Asian': 'Asian',}
df['Subject Perceived Race'] = df['Subject Perceived Race'].map(race_map)
df['Officer Race'] = df['Officer Race'].map(race_map)
df['Officer Race'].value_counts()
df['Subject Perceived Race'].value_counts()
###Output
_____no_output_____
###Markdown
D. Transform Stop Resolution Using Dictionary Mapping .map()
###Code
# Now address the Stop Resolution categories
df['Stop Resolution'].value_counts()
# Re-map the Stop Resolution, to combine categories Arrest and Referred for Prosecution
# Map Arrest and Referred for Prosecution to 1, and all others 0
stop_resolution = {'Field Contact': 0, 'Offense Report': 0, 'Arrest': 1,
'Referred for Prosecution': 1, 'Citation / Infraction': 0}
df['Stop Resolution']=df['Stop Resolution'].map(stop_resolution)
df['Stop Resolution'].value_counts()
###Output
_____no_output_____
###Markdown
E. Transform Weapon Type Using a Dictionary and .map()
###Code
df.head()
# Now re-map Weapon Type feature. First check the categories of Weapons
df['Weapon Type'].value_counts()
weapon_type = {'None':'None', 'None/Not Applicable':'None', 'Fire/Incendiary Device':'Incendiary',
'Lethal Cutting Instrument':'Lethal Blade', 'Knife/Cutting/Stabbing Instrument':'Lethal Blade',
'Handgun':'Firearm', 'Firearm Other':'Firearm','Firearm':'Firearm', 'Firearm (unk type)':'Firearm',
'Other Firearm':'Firearm', 'Rifle':'Firearm', 'Shotgun':'Firearm', 'Automatic Handgun':'Firearm',
'Club, Blackjack, Brass Knuckles':'Blunt Force', 'Club':'Blunt Force',
'Brass Knuckles':'Blunt Force', 'Blackjack':'Blunt Force',
'Blunt Object/Striking Implement':'Blunt Force', '-':'Unknown',
'Taser/Stun gun':'Taser', 'Mace/Pepper Spray':'Spray',}
df['Weapon Type']=df['Weapon Type'].map(weapon_type)
df['Weapon Type'].value_counts()
###Output
_____no_output_____
###Markdown
F. Transform the Date using to_datetime, .weekday, and .day* Calculate the reported day of the week - [x] Day of the week: 0 = Monday, 6 = Sunday * Calculate the first, middle and last weeks of the month, because perhaps more crimes / arrests are made when the bills come due - [x] Time of month: 1 = first week, 2 = 2nd and 3rd weeks, 3 = last week of the month (an equivalent pd.cut formulation is sketched after the next cell)
###Code
df['Reported Date'].head()
# Transform the Reported date into a day of the week, or the time of month
# Day of the week: 0 = Monday, 6 = Sunday
# Time of month: 1 = first week, 2 = 2nd and 3rd weeks, 3 = last week of the month
df['Reported Date']=pd.to_datetime(df['Reported Date']) # Processed earlier for Officer YOB calculation
df['Weekday']=df['Reported Date'].dt.weekday
df['Time of Month'] = df['Reported Date'].dt.day
month_map = {1:1, 2:1,3:1,4:1, 5:1, 6:1, 7:1,8:2, 9:2, 10:2, 11:2, 12:2, 13:2, 14:2, 15:2,
16:2, 17:2, 18:2, 19:2, 20:2, 21:2, 22:2, 23:3, 24:3, 25:3, 26:3, 27:3, 28:3, 29:3, 30:3, 31:3}
df['Time of Month'] = df['Time of Month'].map(month_map)
df.isna().sum()
df.head()
###Output
_____no_output_____
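###Markdown
As an aside, the explicit month_map above can also be written with pd.cut; a hedged sketch of that equivalent formulation (not used further in this notebook):
###Code
# Same 1/2/3 buckets as month_map, driven by bin edges instead of a dictionary:
# (0, 7] -> first week, (7, 22] -> middle weeks, (22, 31] -> last week
time_of_month_alt = pd.cut(df['Reported Date'].dt.day, bins=[0, 7, 22, 31], labels=[1, 2, 3])
time_of_month_alt.value_counts()
###Output
_____no_output_____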
###Markdown
G. Use Officer Squad data to create the location information (Precinct or Officer Team) and the time of day of the arrest (Officer Watch)* Use the pandas regex method .str.extract to get the name of the precinct and the Watch if available (a small worked example follows the next cell)* Analyse whether some precincts / units never make arrests * The Officer Squad text is likely a more reliable estimate, assuming the information provided is the squad name / location and the watch that handled the report, rather than a specific person's schedule or squad. * Since the reports can come in a day or 10 hours later, the recorded Reported Date and Time is not the actual Terry stop time. * Features created from Officer Squad: - [x] Precinct or Squad name following the Terry stop - [x] Watch: 0 = Unknown, if the watch is not normally recorded 1 = Watch 1 03:00 - 11:00 2 = Watch 2 11:00 - 19:00 3 = Watch 3 19:00 - 03:00
###Code
df.head()
# Use Python Regex commands to clean up the Call Types and Officer Squad
df['Officer Squad'].value_counts()
df['Precinct'] = df['Officer Squad'].str.extract(r'(\w+)')
df['Watch'] = df['Officer Squad'].str.extract(pat=r'(\d)').fillna(0)
df.head(100)
# Some Officer Squads do not record the Watch number
# Don't leave the NaNs in the Watch column, fill with 0
# Watch definition: 0 = Unknown, 1 = 1st Watch, 2 = 2nd Watch, 3 = 3rd Watch
df.isna().sum()
# Identify the Precincts that are not typically making arrests, by comparing the number of arrests (Stop Resolution = Arrest)
# to the total number of Terry stops.
arrest_df = df.loc[df['Stop Resolution'] == 1] # Dataframe only for those Terry stops that resulted in arrests
arrest_df['Precinct'].value_counts(), df['Precinct'].value_counts() # compare the value_counts for both dataframes
# Subsetting to only the Stop Resolution of arrest
# Calculate the arrest rate per precinct by dividing the arrest_df counts by the total number of Terry stops
arrest_percentage = arrest_df['Precinct'].value_counts() / df['Precinct'].value_counts()
print(f'The percentage of arrests based on terry stops, by squad \n\n',arrest_percentage)
# Create a dictionary for mapping the squads which have successful arrest. Those officer squads which have
# reported Terry stops with no arrests will be dropped from the dataset
successful_arrest_map=arrest_percentage.to_dict()
# successful_arrest_map # Take a look at the dictionary
df['Precinct Success']=df['Precinct'].map(successful_arrest_map)
df.isna().sum()
# There are 36 units / precincts which do not have any arrests since 2015
# Likely these units are not expected to make arrests
#df.to_csv('terry_stops_cleanup3.csv') #save with all manipulations except for Call Types, without dropping
# Drop out the units Terry stops which do not routinely make arrests
df.dropna(inplace=True) # Drop the squads with no arrests
df.reset_index(inplace=True) # Reset the Index
df.drop(columns=['Call Type', 'Reported Date', 'Officer Squad'], inplace = True) # Drop Processed Columns
df.to_csv('terry_stops_cleanup4.csv') #Save after dropping squads with no arrests and columns and reset index
df.head()
###Output
_____no_output_____
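###Markdown
To make the two regular expressions above concrete, here is a hedged sketch on made-up squad strings (the example values are illustrative, not actual entries from the dataset):
###Code
# Hypothetical examples of what Officer Squad values can look like
sample_squads = pd.Series(['NORTH PCT 2ND WATCH - SAMPLE', 'TRAINING - FIELD TRAINING SQUAD'])
# r'(\w+)' grabs the first word (precinct / unit name); r'(\d)' grabs the first digit (watch number, if any)
sample_squads.str.extract(r'(\w+)'), sample_squads.str.extract(r'(\d)').fillna(0)
###Output
_____no_output_____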
###Markdown
H. Transform Initial or Final Call Types
###Code
def clean_call_types(df_to_clean, col_name, new_col):
'''Transform Call Type text into a single identifier
Inputs: df, col_name - column which has the Call type, and a new column name
Outputs: The dataframe with a new column name, and a map'''
idx = df_to_clean[col_name] == '-' # Create an index of the true and false values for the condition == '-'
df_to_clean.loc[idx, col_name] = 'Unknown'
column_series = df_to_clean[col_name]
df_to_clean[new_col] = column_series.apply(lambda x:x.replace('--','').split('-')[0].strip())
#df_to_clean[new_col].value_counts(dropna=False).sort_index()
#df_to_clean.isna().sum()
df_to_clean[new_col] = df_to_clean[new_col].str.extract(r'(\w+)')
df_to_clean[new_col] = df_to_clean[new_col].str.lower()
last_map = df_to_clean[new_col].value_counts().to_dict()
return last_map
final_map = clean_call_types(df,'Final Call Type', 'Final Call Re-map')
initial_map = clean_call_types(df, 'Initial Call Type', 'Initial Call Re-map')
final_map
initial_map
# Check to see if keys of the two dictionaries are the same
diff = set(final_map) - set(initial_map) # the keys in final_map and not in initial_map
diff2 = set(initial_map) - set(final_map) # the keys that are in initial_map, and not in final_map
diff, diff2
# Expand the existing call map to include additional keys
# This call dictionary was built on the final calls, not the initial calls text. So add the initial calls and input values
call_dictionary = {'unknown': 'unknown',
'suspicious': 'suspicious',
'assaults': 'assault',
'disturbance': 'disturbance',
'prowler': 'trespass',
'dv': 'domestic violence',
'warrant': 'warrant',
'theft': 'theft',
'narcotics': 'under influence',
'robbery': 'theft',
'burglary': 'theft',
'traffic': 'traffic',
'property': 'property damage',
'weapon': 'weapon',
'crisis': 'person in crisis',
'automobiles': 'auto',
'assist': 'assist others',
'sex': 'vice',
'mischief': 'mischief',
'arson': 'arson',
'fraud': 'fraud',
'vice': 'vice',
'drive': 'auto',
'misc': 'misdemeanor',
'premise': 'trespass',
'alarm': 'suspicious',
'intox': 'under influence',
'rape': 'rape',
'child': 'child',
'trespass': 'trespass',
'person': 'person in crisis',
'homicide': 'homicide',
'burg': 'theft',
'kidnap': 'kidnap',
'animal': 'animal',
'hazards': 'hazard',
'aslt': 'assault',
'casualty': 'homicide',
'fight': 'disturbance',
'shoplift': 'theft',
'auto': 'auto',
'haras': 'disturbance',
'purse': 'theft',
'weapn': 'weapon',
'fireworks': 'arson',
'follow': 'disturbance',
'dist': 'disturbance',
'haz': 'hazard',
'nuisance': 'mischief',
'threats': 'disturbance',
'liquor': 'under influence',
'mvc': 'auto',
'shots': 'weapon',
'harbor': 'auto',
'down': 'homicide',
'service': 'unknown',
'hospital': 'unknown',
'bomb': 'arson',
'undercover': 'under influence',
'burn': 'arson',
'lewd': 'vice',
'dui': 'under influence',
'crowd': 'unknown',
'order': 'assist',
'escape': 'assist',
'commercial': 'trespass',
'noise': 'disturbance',
'awol': 'kidnap',
'bias': 'unknown',
'carjacking': 'kidnap',
'demonstrations':'disturbance',
'directed':'unknown',
'doa':'assist',
'explosion':'arson',
'foot': 'trespass',
'found':'unknown',
'gambling': 'vice',
'help':'assist',
'illegal':'assist',
'injured':'assist',
'juvenile':'child',
'littering': 'nuisance',
'missing': 'kidnap',
'off':'suspicious',
'open':'unknown',
'overdose':'under influence',
'panhandling':'disturbance',
'parking':'disturbance',
'parks':'disturbance',
'peace':'disturbance',
'pedestrian':'disturbance',
'phone':'disturbance',
'request':'assist',
'sfd':'assist',
'sick':'assist',
'sleeper':'disturbance',
'suicide':'assist'}
df['Final Call Re-map'] = df['Final Call Re-map'].map(call_dictionary)
df['Final Call Re-map'].value_counts(dropna=False)
df['Initial Call Re-map'] = df['Initial Call Re-map'].map(call_dictionary)
df['Initial Call Re-map'].value_counts(dropna=False)
df.isna().sum()
#Drop all NaNs
df.dropna(inplace=True)
df.reset_index(inplace=True)
df.to_csv('terry_stops_cleanup4.csv')
df.head(100)
df.drop(columns = ['Initial Call Type', 'Final Call Type', 'Precinct Success', 'Officer YOB',
'Reported Year', 'level_0', 'index'], inplace=True)
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 41028 entries, 0 to 41027
Data columns (total 14 columns):
# Column Dtype
--- ------ -----
0 Subject Age Group int64
1 Stop Resolution int64
2 Weapon Type object
3 Officer Gender int64
4 Officer Race object
5 Subject Perceived Race object
6 Subject Perceived Gender int64
7 Officer Age category
8 Weekday int64
9 Time of Month int64
10 Precinct object
11 Watch object
12 Final Call Re-map object
13 Initial Call Re-map object
dtypes: category(1), int64(6), object(7)
memory usage: 4.1+ MB
###Markdown
4. Vanilla Model XGB + Initial Call Type
###Code
df_to_split = df.drop(columns = 'Final Call Re-map')
category_cols = df_to_split.columns
target_col = ['Stop Resolution']
df.info()
df_to_split = pd.DataFrame()
from sklearn.preprocessing import MinMaxScaler
# Convert categories to cat.codes
for header in category_cols:
df_to_split[header] = df[header].astype('category').cat.codes
df_to_split.info()
df_to_split.head()
# Check the correlation matrix to see the autocorrelated variables and plot it out
# Will run the correlation matrix for the last kernel run
sns.axes_style("white")
pearson = df_to_split.corr(method = 'pearson')
sns.set(rc={'figure.figsize':(20,12)})
# Generate a mask for the upper triangle
mask = np.zeros_like(pearson)
mask[np.triu_indices_from(mask)] = True
ax = sns.heatmap(data=pearson, mask=mask, cmap="YlGnBu",
linewidth=0.5, annot=True, square=True, cbar_kws={'shrink': 0.5})
# Save the correlations information
plt.savefig("Correlation.png")
plt.savefig("Correlation 2.png", transparent = True)
y = df_to_split['Stop Resolution']
X = df_to_split.drop('Stop Resolution',axis=1)
from sklearn.model_selection import train_test_split
## Train test split
X_train, X_test, y_train,y_test = train_test_split(X,y,test_size=.3,
random_state=42,)#,stratify=y)
display(y_train.value_counts(normalize=False),y_test.value_counts(normalize=False))
#!pip3 install xgboost
import xgboost as xgb
from xgboost import XGBRFClassifier,XGBClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import roc_auc_score, roc_curve
xgb_rf = XGBRFClassifier()
xgb_rf.fit(X_train, y_train)
print('Training score: ' ,round(xgb_rf.score(X_train,y_train),2))
print('Test score: ',round(xgb_rf.score(X_test,y_test),2))
y_hat_test = xgb_rf.predict(X_test)
check = evaluate_model(y_test,y_hat_test, X_test, xgb_rf)
# Importance Check
check
def cramers_corrected_stat(df, column1, column2):
""" Calculate Cramers V statistic for categorial-categorial association.
uses correction from Bergsma and Wicher,
Journal of the Korean Statistical Society 42 (2013): 323-328
Reference: https://stackoverflow.com/questions/20892799/using-pandas-calculate-cram%C3%A9rs-coefficient-matrix
Inputs: df and the names of the two categorical columns
Outputs: the bias-corrected Cramer's V statistic
"""
confusion_matrix = pd.crosstab(df[column1], df[column2])
print(confusion_matrix)
chi2 = ss.chi2_contingency(confusion_matrix)[0]
n = confusion_matrix.to_numpy().sum()  # total count over the whole table (pd.crosstab returns a DataFrame)
phi2 = chi2/n
r,k = confusion_matrix.shape
phi2corr = max(0, phi2 - ((k-1)*(r-1))/(n-1))
rcorr = r - ((r-1)**2)/(n-1)
kcorr = k - ((k-1)**2)/(n-1)
return np.sqrt(phi2corr / min( (kcorr-1), (rcorr-1)))
###Output
_____no_output_____
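###Markdown
The Cramer's V helper above is defined but not called in this cell; a short usage sketch on two of the categorical columns (it relies on scipy.stats being available as ss):
###Code
import scipy.stats as ss # used inside cramers_corrected_stat via ss.chi2_contingency
# Strength of association between weapon type and whether the stop ended in an arrest
cramers_corrected_stat(df, 'Weapon Type', 'Stop Resolution')
###Output
_____no_output_____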
###Markdown
5. Vanilla Model Results & Experimental Plan* Results: - [x] "Initial Call Type" is the most important feature, with "Weapon" and "Officer Age" as the 2nd and 3rd most important features, respectively. - [x] Training accuracy of 0.76, and testing accuracy of 0.74 - [x] However, the Confusion Matrix shows the main reason is that the "Non-arrests" are better classified than the arrests. For arrests, the true negatives were well predicted (97%), while the true positives were poorly predicted (11%) and false negatives were 89%. - [x] This seems to make sense given the class imbalance (only 25% of the data were arrests) - [x] The AUC was well above random chance B - XGB + Final Call Type* **The Next Steps will be a set of experiments to look at how the models can improve based on:** - [1] Feature Selection: Initial Call Type Versus Final Call Type - [2] Model type: XGBoost-RF vs CatBoost - [3] Balancing the dataset from best model of [1] and [2] - [4] HyperParameter tuning for [3] * **The Next Experiment will be (bold type):** - A = Vanilla Model = XGB + Initial Call Type - **B = XGB + Final Call Type** - C = Cat + Initial Call Type - D = Cat + Final Call Type - E = SMOTE + Best of (A, B, C, D) - F = Gridsearch on E
###Code
# Setup a results dataframe to capture all the results
result_idx = ['Accuracy','Precision - no Arrest', 'Precision - Arrest', 'Precision-wt Avg',
'Recall - no Arrest', 'Recall - Arrest', 'Recall - wt Avg', 'F1 - no Arrest',
'F1 - Arrest', 'F1 - wt Avg', 'Training AUC', 'Test AUC']
result_cols = ['XGB + initial', 'XGB + final', 'CB + initial', 'CB + final', 'CBC + initial',
'CBC + final', "SMOTE+XGB+final", "SMOTE+CB+final", "SMOTE+CBC+final"]
results_df = pd.DataFrame(index = result_idx, columns = result_cols)
#results_df
# Save the initial results
# Change input to drop Initial Call Type and keep Final Call Type
df_to_split = df.drop(columns = 'Initial Call Re-map')
category_cols = df_to_split.columns
target_col = ['Stop Resolution']
df_to_split = pd.DataFrame()
# Convert categories to cat.codes
for header in category_cols:
df_to_split[header] = df[header].astype('category').cat.codes
df_to_split.head()
y = df_to_split['Stop Resolution']
X = df_to_split.drop('Stop Resolution',axis=1)
X_train, X_test, y_train,y_test = train_test_split(X,y,test_size=.3,
random_state=42,)#,stratify=y)
display(y_train.value_counts(normalize=False),y_test.value_counts(normalize=False))
xgb_rf = XGBRFClassifier()
xgb_rf.fit(X_train, y_train)
print('Training score: ' ,round(xgb_rf.score(X_train,y_train),2))
print('Test score: ',round(xgb_rf.score(X_test,y_test),2))
y_hat_test = xgb_rf.predict(X_test)
evaluate_model(y_test,y_hat_test, X_test, xgb_rf)
###Output
_____no_output_____
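###Markdown
The results_df table above is set up but not yet populated; a hedged sketch of how a single metric from this run could be recorded (the remaining rows would be filled in the same way from evaluate_model's output):
###Code
# Record the test accuracy of experiment B in the results table
results_df.loc['Accuracy', 'XGB + final'] = round(xgb_rf.score(X_test, y_test), 2)
results_df.head()
###Output
_____no_output_____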
###Markdown
6. CatBoost with Final Call Type D - CatBoost + Final Call Type* **The Next Steps will be a set of experiments to look at how the models can improve based on:** - [1] Feature Selection: Initial Call Type Versus Final Call Type - [2] Model type: XGBoost-RF vs CatBoost - [3] Balancing the dataset from best model of [1] and [2] - [4] HyperParameter tuning for [3] * **The Next Experiment will be (Bold Type):** - A = Vanilla Model = XGB + Initial Call Type - B = XGB + Final Call Type - C = CatBoost + Initial Call Type - **D = CatBoost + Final Call Type** - E = SMOTE + Best of (A, B, C, D) - F = Gridsearch on E
###Code
#!pip install -U catboost
from catboost import CatBoostClassifier
clf = CatBoostClassifier()
clf.fit(X_train,y_train,logging_level='Silent')
print('Training score: ' ,round(clf.score(X_train,y_train),2))
print('Test score: ',round(clf.score(X_test,y_test),2))
y_hat_test = clf.predict(X_test)
evaluate_model(y_test,y_hat_test,X_test,clf)
###Output
_____no_output_____
###Markdown
7. CatBoost with Initial Call Type C - CatBoost + Initial Call Type* **The Next Steps will be a set of experiments to look at how the models can improve based on:** - [1] Feature Selection: Initial Call Type Versus Final Call Type - [2] Model type: XGBoost-RF vs CatBoost - [3] Balancing the dataset from best model of [1] and [2] - [4] HyperParameter tuning for [3] * **The Next Experiment will be (Bold Type):** - A = Vanilla Model = XGB + Initial Call Type - B = XGB + Final Call Type - **C = CatBoost + Initial Call Type** - D = CatBoost + Final Call Type - E = SMOTE + Best of (A, B, C, D) - F = Gridsearch on E
###Code
df_to_split = df.drop(columns = 'Final Call Re-map')
category_cols = df_to_split.columns
target_col = ['Stop Resolution']
df_to_split = pd.DataFrame()
# Convert categories to cat.codes
for header in category_cols:
df_to_split[header] = df[header].astype('category').cat.codes
y = df_to_split['Stop Resolution']
X = df_to_split.drop('Stop Resolution',axis=1)
X_train, X_test, y_train,y_test = train_test_split(X,y,test_size=.3,
random_state=42,)#,stratify=y)
clf = CatBoostClassifier()
clf.fit(X_train,y_train,logging_level='Silent')
print('Training score: ' ,round(clf.score(X_train,y_train),2))
print('Test score: ',round(clf.score(X_test,y_test),2))
y_hat_test = clf.predict(X_test)
evaluate_model(y_test,y_hat_test,X_test,clf)
#fig, ax = plt.subplots(1, 1, figsize=(5, 3))
#ax_arr = (ax1, ax2, ax3, ax4)
#weights_arr = ((0.01, 0.01, 0.98), (0.01, 0.05, 0.94),
# (0.2, 0.1, 0.7), (0.33, 0.33, 0.33))
#for ax, weights in zip(ax_arr, weights_arr):
#X, y = create_dataset(n_samples=1000, weights=weights)
# clf = CatBoostClassifier()
# clf.fit(X_train,y_train,logging_level='Silent')
#clf = LinearSVC().fit(X, y)
#plot_decision_function(X_train, y_train, clf, ax)
#ax.set_title('Catboost with Final Call Type')
#fig.tight_layout()
#y_train
###Output
_____no_output_____
###Markdown
8. SMOTE + CatBoost + Final Call E - SMOTE + Best of (A, B, C, D)* **The Next Steps will be a set of experiments to look at how the models can improve based on:** - [1] Feature Selection: Initial Call Type Versus Final Call Type - [2] Model type: XGBoost-RF vs CatBoost - [3] Balancing the dataset from best model of [1] and [2] - [4] HyperParameter tuning for [3] * **The Next Experiment will be (Bold Type):** - A = Vanilla Model = XGB + Initial Call Type - B = XGB + Final Call Type - C = CatBoost + Initial Call Type - D = CatBoost + Final Call Type - **E = SMOTE + Best of (A, B, C, D)** - F = Gridsearch on E
###Code
#!pip install -U imbalanced-learn
from imblearn.over_sampling import SMOTE
smote = SMOTE()
df_to_split = df.drop(columns = 'Initial Call Re-map')
category_cols = df_to_split.columns
target_col = ['Stop Resolution']
df_to_split = pd.DataFrame()
# Convert categories to cat.codes
for header in category_cols:
df_to_split[header] = df[header].astype('category').cat.codes
y = df_to_split['Stop Resolution']
X = df_to_split.drop('Stop Resolution',axis=1)
X_train, X_test, y_train,y_test = train_test_split(X,y,test_size=.3,
random_state=42,stratify=y)
X_train, y_train = smote.fit_resample(X_train, y_train)  # fit_sample was renamed fit_resample in recent imbalanced-learn releases
display(y_train.value_counts(normalize=False),y_test.value_counts(normalize=False))
clf = CatBoostClassifier()
clf.fit(X_train,y_train,logging_level='Silent')
print('Training score: ' ,round(clf.score(X_train,y_train),2))
print('Test score: ',round(clf.score(X_test,y_test),2))
y_hat_test = clf.predict(X_test)
evaluate_model(y_test,y_hat_test,X_test,clf)
# The SMOTED data on XGB-RF, just for fun
xgb_rf = XGBRFClassifier()
xgb_rf.fit(X_train, y_train)
print('Training score: ' ,round(xgb_rf.score(X_train,y_train),2))
print('Test score: ',round(xgb_rf.score(X_test,y_test),2))
y_hat_test = xgb_rf.predict(X_test)
evaluate_model(y_test,y_hat_test, X_test, xgb_rf)
# Try a Support Vector Machine, for the heck of it
from sklearn.svm import SVC,LinearSVC,NuSVC
clf = SVC()
clf.fit(X_train,y_train)
y_hat_test = clf.predict(X_test)
evaluate_model(y_test,y_hat_test,X_test,clf)
# Try Categorical SMOTE
from imblearn.over_sampling import SMOTENC
smote_nc = SMOTENC(categorical_features = [0,11])
df_to_split = df.drop(columns = 'Initial Call Re-map')
category_cols = df_to_split.columns
target_col = ['Stop Resolution']
df_to_split = pd.DataFrame()
# Convert categories to cat.codes
for header in category_cols:
df_to_split[header] = df[header].astype('category').cat.codes
y = df_to_split['Stop Resolution']
X = df_to_split.drop('Stop Resolution',axis=1)
X_train, X_test, y_train,y_test = train_test_split(X,y,test_size=.3,
random_state=42,stratify=y)
# Now modify the training data by oversampling (SMOTENC)
X_train, y_train = smote_nc.fit_resample(X_train, y_train)
display(y_train.value_counts(normalize=False),y_test.value_counts(normalize=False))
clf = CatBoostClassifier()
clf.fit(X_train,y_train,logging_level='Silent')
print('Training score: ' ,round(clf.score(X_train,y_train),2))
print('Test score: ',round(clf.score(X_test,y_test),2))
y_hat_test = clf.predict(X_test)
evaluate_model(y_test,y_hat_test,X_test,clf)
###Output
_____no_output_____
###Markdown
9. SMOTE + CatBoostClassifier + Final Call Type E - SMOTE + CatBoostClassifier + Final Call Type* **The Next Steps will be a set of experiments to look at how the models can improve based on:** - [1] Feature Selection: Initial Call Type Versus Final Call Type - [2] Model type: XGBoost-RF vs CatBoost - [3] Balancing the dataset from best model of [1] and [2] - [4] HyperParameter tuning for [3] * **The Next Experiment will be (Bold Type):** - A = Vanilla Model = XGB + Initial Call Type - B = XGB + Final Call Type - C = CatBoost + Initial Call Type - D = CatBoost + Final Call Type = CatBoostClassifier + Final Call Type - **E = SMOTE + Best of (A, B, C, D)** - F = Gridsearch on E Reference: https://catboost.ai/docs/concepts/python-reference_catboost_grid_search.html `from catboost import CatBoost; model = CatBoostClassifier(); grid = {'learning_rate': [0.01, 0.04, 0.8], 'depth': [3, 5, 8, 12], 'l2_leaf_reg': [1, 3, 7, 9]}; grid_search_result = model.grid_search(grid, X=X_train, y=y_train, plot=True); print('Training score: ', round(model.score(X_train, y_train), 2)); print('Test score: ', round(model.score(X_test, y_test), 2))`
###Code
#y_hat_test = model.predict(X_test)
#evaluate_model(y_test,y_hat_test,X_test,model)
###Output
_____no_output_____
###Markdown
model = CatBoost()grid = {'learning_rate': [0.03, 0.1], 'depth': [4, 6, 10], 'l2_leaf_reg': [1, 3, 5, 7, 9]}grid_search_result = model.grid_search(grid, X=X_train, y=y_train, plot=True)
###Code
#df_to_split5 = pd.DataFrame()
df_to_split = df.drop(columns = ['Initial Call Re-map','Stop Resolution'])
df_to_split.head()
X = pd.get_dummies(df_to_split, drop_first=True)
y = df['Stop Resolution']
X
#from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4)
from catboost import Pool, CatBoostClassifier
category_cols = X.columns
train_pool = Pool(data=X_train, label=y_train, cat_features=category_cols)
test_pool = Pool(data=X_test, label=y_test, cat_features=category_cols)
cb_base = CatBoostClassifier(iterations=500, depth=12,
boosting_type='Ordered',
learning_rate=0.03,
thread_count=-1,
eval_metric='AUC',
silent=True,
allow_const_label=True)#,
#task_type='GPU')
cb_base.fit(train_pool,eval_set=test_pool, plot=True, early_stopping_rounds=10)
cb_base.best_score_
# Plotting Feature Importances
important_feature_names = cb_base.feature_names_
important_feature_scores = cb_base.feature_importances_
important_features = pd.Series(important_feature_scores, index = important_feature_names)
important_features.sort_values().plot(kind='barh');
print('Training score: ' ,round(cb_base.score(X_train,y_train),2))
print('Test score: ',round(cb_base.score(X_test,y_test),2))
y_hat_test = cb_base.predict(X_test)
evaluate_model(y_test,y_hat_test,X_test,cb_base)
# Try the same approach, with SMOTENC first
smote_nc = SMOTENC(categorical_features = [0,11])
df_to_split = df.drop(columns = ['Initial Call Re-map', 'Stop Resolution'])
category_cols = df_to_split.columns
#target_col = ['Stop Resolution']
#df_to_split = pd.DataFrame()
# Convert categories to cat.codes
X = pd.get_dummies(df_to_split, drop_first=True)
y = df['Stop Resolution']
#for header in category_cols:
# df_to_split[header] = df[header].astype('category').cat.codes
X_train, X_test, y_train,y_test = train_test_split(X,y,test_size=.3,
random_state=42,stratify=y)#
X_train, y_train = smote_nc.fit_resample(X_train, y_train)
display(y_train.value_counts(normalize=False),y_test.value_counts(normalize=False))
X
clf = CatBoostClassifier()
clf.fit(X_train,y_train,logging_level='Silent')
print('Training score: ' ,round(clf.score(X_train,y_train),2))
print('Test score: ',round(clf.score(X_test,y_test),2))
y_hat_test = clf.predict(X_test)
evaluate_model(y_test,y_hat_test,X_test,clf)
category_cols = X.columns
train_pool = Pool(data=X_train, label=y_train, cat_features=category_cols)
test_pool = Pool(data=X_test, label=y_test, cat_features=category_cols)
cb_base = CatBoostClassifier(iterations=500, depth=12,
boosting_type='Ordered',
learning_rate=0.03,
thread_count=-1,
eval_metric='AUC',
silent=True,
allow_const_label=True)#,
#task_type='GPU')
cb_base.fit(train_pool,eval_set=test_pool, plot=True, early_stopping_rounds=10)
cb_base.best_score_
print('Training score: ' ,round(cb_base.score(X_train,y_train),2))
print('Test score: ',round(cb_base.score(X_test,y_test),2))
y_hat_test = cb_base.predict(X_test)
evaluate_model(y_test,y_hat_test,X_test,cb_base)
###Output
_____no_output_____
###Markdown
10. Gridsearch on Best Model F - Gridsearch on E* **The Next Steps will be a set of experiments to look at how the models can improve based on:** - [1] Feature Selection: Initial Call Type Versus Final Call Type - [2] Model type: XGBoost-RF vs CatBoost - [3] Balancing the dataset from best model of [1] and [2] - [4] HyperParameter tuning for [3] * **The Next Experiment will be (Bold Type):** - A = Vanilla Model = XGB + Initial Call Type - B = XGB + Final Call Type - C = CatBoost + Initial Call Type - D = CatBoost + Final Call Type - E = SMOTE + Best of (A, B, C, D) - **F = Gridsearch on E** (a commented grid-search sketch is included at the end of the next code cell) 11. Optional Feature Engineering for Training data only
###Code
### The key concept: in training we know that some precincts are more successful than others at getting to an arrest. Instead of imputing a one-hot encoded value, use the percentage of successful arrests as the value for the precinct.
# Calculate how successful particular precincts were at making arrests
arrest_percentage = arrest_df['Precinct'].value_counts() / df['Precinct'].value_counts()
print(f'The percentage of arrests based on terry stops, by squad \n\n',arrest_percentage)
### Create a dictionary for mapping the squads which have successful arrests. Those officer squads which have
### reported Terry stops with no arrests will be dropped from the dataset
successful_arrest_map=arrest_percentage.to_dict()
### successful_arrest_map # Take a look at the dictionary
df['Precinct Success']=df['Precinct'].map(successful_arrest_map) # map the dictionary to the dataframe as a new column
### Perform the same analysis to see which call types lead to more arrests
### (note: this optional block assumes the 'Final Call Type' column has not yet been dropped from df)
arrest_df = df.loc[df['Stop Resolution'] == 1] # Re-create the arrest_df (Stop Resolution was re-mapped to 1 = arrest earlier)
arrest_df['Final Call Type'].value_counts(), df['Final Call Type'].value_counts()
arrest_categories = arrest_df['Final Call Type'].value_counts() / df['Final Call Type'].value_counts()
arrest_map = arrest_categories.to_dict()
arrest_map # look at the dictionary
df['Final Call Success'] = df['Final Call Type'].map(arrest_map)
results_df = pd.DataFrame(
{'Expt Name': ["Accuracy", "Precision Not Arrested", 'Precision Arrested',
'Precision Weighted Avg', "RecalL Not Arrested", 'Recall Arrested',
'Recall Weighted Avg', 'F1 Not Arrested', 'F1 Arrested', 'F1 Weighted Avg',
'AUC'],})
# Placeholder for logging a single experiment's metrics (to be completed with actual values after a run):
# results_df = pd.DataFrame(
#     {'Expt Name': ['xgb-rf-initial call'], 'Accuracy': [np.nan]})
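# Experiment F (grid search on the best model): a commented sketch only, since the search is slow.
# Hyperparameter values below are illustrative; see
# https://catboost.ai/docs/concepts/python-reference_catboost_grid_search.html
# grid_model = CatBoostClassifier(logging_level='Silent')
# grid = {'learning_rate': [0.03, 0.1], 'depth': [4, 6, 10], 'l2_leaf_reg': [1, 3, 5, 7, 9]}
# grid_search_result = grid_model.grid_search(grid, X=X_train, y=y_train, plot=False)
# print('Best parameters:', grid_search_result['params'])
# print('Test score: ', round(grid_model.score(X_test, y_test), 2))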
###Output
_____no_output_____ |
notebooks/3_classification_colab.ipynb | ###Markdown
Whale Sound ClassificationIn this notebook we will discuss ways to classify signals using annotated data. Objectives:* train a machine learning classifier with the scikit-learn package* learn how to evaluate machine learning models In the Kaggle competition we had the privilege of having annotated data, so let's use those labels. We are in fact trying to solve a classification problem: we want to build a classifier which correctly identifies whale calls. Data Loading---
###Code
# ignore warnings
import warnings
warnings.filterwarnings("ignore")
# importing multiple visualization libraries
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import mlab
import pylab as pl
#import seaborn
# setting figure size
from matplotlib import pyplot as plt
plt.rcParams['figure.figsize'] = (8,5)
# importing libraries to manipulate the data files
import os
from glob import glob
# import numpy
import numpy as np
!wget http://oceanhackweek2018.s3.amazonaws.com/oceans_data/X.npy
!wget http://oceanhackweek2018.s3.amazonaws.com/oceans_data/y.npy
# loading the data
X = np.load('X.npy')
y = np.load('y.npy')
###Output
_____no_output_____
###Markdown
Data Splitting---We will organize the data for traning. We will select a testing data set which we will not touch, until we are happy with our algorithm and we want to evaluate the error. The rest will be used for training. During training we can use that dataset as much as we want to improve our algorithms. We can further subset it into a training set and validate set, and use the validate set to evaluate different hyperparameters and models (or use formal cross-vslidation).  Since we have already left out a big part of the dataset (10000:30000), we will split $X$ and $y$ in two parts: a training set and a validation set.
###Code
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.20, random_state=2018)
###Output
_____no_output_____
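###Markdown
The formal cross-validation mentioned above can be done with cross_val_score; a minimal sketch on the training split, using a logistic regression (introduced in the next section) as the estimator and an arbitrary cv=5:
###Code
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
# 5-fold cross-validated accuracy on the training split (can be slow on the full feature set)
cv_scores = cross_val_score(LogisticRegression(), X_train, y_train, cv=5)
print(cv_scores, cv_scores.mean())
###Output
_____no_output_____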
###Markdown
Model Fitting---
###Code
from IPython.display import Image
Image("https://raw.githack.com/oceanhackweek/ohw19-tutorial-machine-learning/master/img/MLmap.png", height=500)
###Output
_____no_output_____
###Markdown
Source: http://scikit-learn.org/stable/tutorial/machine_learning_map/index.html `scikit learn` has many built-in classification algorithms. It is a good strategy to first try a linear classifier and create a baseline, and then try more complex methods. Here are some good candidates:* [Logistic Regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html), Support Vector Classifier ([SVC](http://scikit-learn.org/stable/modules/svm.html))* [Random Forests](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html), [Gradient Boosting](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html) - nonlinear classifiers (ensemble methods)
###Code
%%time
# Fitting a Logistic Regression
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(X_train, y_train)
###Output
CPU times: user 1min 35s, sys: 504 ms, total: 1min 35s
Wall time: 1min 35s
###Markdown
Evaluation and Model Selection--- **Accuracy** Ok, we fitted a classifier, but how should we evaluate performance? First let's look at the accuracy on the train dataset: it better be good!!!
###Code
# prediction on the training dataset
train_accuracy = 1 - np.sum(np.abs(clf.predict(X_train) - y_train))/len(y_train)
print('Accuracy on the train dataset is '+ str(train_accuracy))
###Output
Accuracy on the train dataset is 0.8322499999999999
###Markdown
But that does not matter, what we want to know is how the method performs on the validation dataset, whose labels we have not seen in training.
###Code
# prediction on the validation dataset
val_accuracy = 1 - np.sum(np.abs(clf.predict(X_val)-y_val))/len(y_val)
print('Accuracy on the validation dataset is '+ str(val_accuracy))
###Output
Accuracy on the validation dataset is 0.7775
###Markdown
Ok, it is lower, as expected, but still decent. **Receiver Operating Characteristic (ROC) Curves** Note: the Logistic Regression is a probabilistic algorithm and in fact can output a score for the chance of belonging to a class.
###Code
# predicting class
y_pred = clf.predict(X_val)
# predicting score for each class
y_score = clf.predict_proba(X_val)[:,1]
###Output
_____no_output_____
###Markdown
Warning: `clf.predict` by default uses 0.5 as a decision threshold: `y_score>0.5`: right whale upcall; `y_score<0.5`: no right whale upcall. But we can adjust this threshold to improve the performance (a short threshold-adjustment sketch follows the ROC plot below). To study this performance we can use the [Receiver Operating Characteristic](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) and the Area Under the Curve (ROC AUC).
###Code
from sklearn.metrics import roc_auc_score, roc_curve
roc_auc = roc_auc_score(y_val, y_score)
fpr, tpr, _ = roc_curve(y_val, y_score)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate\n predicted upcall when no upcall')
plt.ylabel('True Positive Rate\n predicted upcall when upcall')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
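###Markdown
A hedged sketch of moving the decision threshold away from the default 0.5 (the 0.3 value is arbitrary and only illustrates the trade-off):
###Code
# Lowering the threshold flags more snippets as upcalls: recall goes up, precision goes down
custom_threshold = 0.3
y_pred_low = (y_score > custom_threshold).astype(int)
print('Predicted upcalls at threshold 0.5:', y_pred.sum(), ' at threshold 0.3:', y_pred_low.sum())
###Output
_____no_output_____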
###Markdown
threshold == 0: we will identify all whale upcalls, but all the non-upcalls will also be identified as upcalls. threshold == 1: we will miss all the whale upcalls, but won't have wrongly identified any non-upcalls. The area under the curve (AUC) gives us a measure of the performance under different thresholds. It is good when close to 1. But is this enough? *Question:* what will be the accuracy if we always claim there is no whale call? *Hint:* what is the percentage of snippets with whale calls? So we can achieve pretty decent accuracy with a crappy classifier that detects none of the upcalls. We definitely need to look at other metrics. **Confusion Matrix** It is useful to look at all the types of errors the algorithm makes. For that, we compute the confusion matrix.
###Code
from sklearn.metrics import confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
plt.ylim([1.5, -.5])
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
#plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_val, y_pred)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
class_names = ['no_right_upcall','right_upcall']
plot_confusion_matrix(cnf_matrix, classes=class_names,normalize=True,
title='Confusion matrix, with normalization')
###Output
Normalized confusion matrix
[[0.93 0.07]
[0.7 0.3 ]]
###Markdown
**Precision and Recall** Precision: $\frac{\textrm{correctly predicted upcalls}}{\textrm{predicted upcalls}}$ Recall: $\frac{\textrm{correctly predicted upcalls}}{\textrm{actual upcalls}}$ (a recall computation sketch follows the next cell)
###Code
# calculate precision with formula
predicted_upcalls = (y_pred==1)
correctly_predicted_upcalls = y_val[predicted_upcalls]
precision = sum(correctly_predicted_upcalls)/sum(predicted_upcalls)
print('Precision: {0:0.2f}'.format(precision))
# calculate precision with scikit-learn function
from sklearn.metrics import precision_score
precision_score(y_val, y_pred)
from sklearn.metrics import average_precision_score
average_precision = average_precision_score(y_pred, y_val)
print('Average precision-recall score: {0:0.2f}'.format(
average_precision))
from sklearn.metrics import precision_recall_curve
import matplotlib.pyplot as plt
precision, recall, _ = precision_recall_curve(y_val, y_score)
plt.step(recall, precision, color='b', alpha=0.2,
where='post')
plt.fill_between(recall, precision, step='post', alpha=0.2,
color='b')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('2-class Precision-Recall curve: AP={0:0.2f}'.format(
average_precision))
###Output
_____no_output_____
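###Markdown
Recall, the fraction of actual upcalls that we correctly flag, can be computed in the same two ways; a short sketch:
###Code
# calculate recall with the formula: correctly predicted upcalls / actual upcalls
actual_upcalls = (y_val == 1)
recall_manual = np.sum(y_pred[actual_upcalls]) / np.sum(actual_upcalls)
print('Recall: {0:0.2f}'.format(recall_manual))
# calculate recall with the scikit-learn function
from sklearn.metrics import recall_score
recall_score(y_val, y_pred)
###Output
_____no_output_____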
###Markdown
[F1 score](https://en.wikipedia.org/wiki/F1_score) is a measure which combines both precision and recall. $\textrm{F1} = 2\,\frac{\textrm{precision} \times \textrm{recall}}{\textrm{precision} + \textrm{recall}}$
###Code
from sklearn.metrics import f1_score
print('F1 score: {0:0.2f}'.format(f1_score(y_val, y_pred)))
###Output
F1 score: 0.40
###Markdown
**How can we improve**?* perform [cross validation](http://scikit-learn.org/stable/modules/cross_validation.html)* try other classifiers: [Random Forest](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html), [gradient boosting](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html), [SVC](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) and compare their ROC/PR curves* balance the training set, not the validation one* stratify the samples: so you have similar proportions in the subsamples* apply dimensionality reduction first and then classify* account for time shifting* ??? Too slow? Have more cores? Check out [dask-ml](https://dask-ml.readthedocs.io/en/latest/) for parallelizing some machine learning functions. References:---* https://github.com/jaimeps/whale-sound-classification * https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Scikit_Learn_Cheat_Sheet_Python.pdf Exercises:--- **Exercise 1:** select a different threshold for the predictions, and calculate the precision and recall. **Exercise 2:** 1. Fit the data with a RandomForest classifier 2. Calculate the ROC curve 3. Plot it together with the ROC curve for Logistic Regression (a commented sketch is included in the exercise cell below)
###Code
from sklearn.ensemble import RandomForestClassifier
...
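# One possible sketch for Exercise 2 (kept as comments so you can try your own approach first;
# hyperparameters are arbitrary, and fpr/tpr/roc_auc are reused from the logistic regression above):
# rf = RandomForestClassifier(n_estimators=100, random_state=2018)
# rf.fit(X_train, y_train)
# rf_score = rf.predict_proba(X_val)[:, 1]
# rf_fpr, rf_tpr, _ = roc_curve(y_val, rf_score)
# plt.plot(fpr, tpr, label='Logistic Regression (AUC = %0.2f)' % roc_auc)
# plt.plot(rf_fpr, rf_tpr, label='Random Forest (AUC = %0.2f)' % roc_auc_score(y_val, rf_score))
# plt.plot([0, 1], [0, 1], color='navy', linestyle='--')
# plt.xlabel('False Positive Rate'); plt.ylabel('True Positive Rate')
# plt.legend(loc='lower right'); plt.show()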
###Output
_____no_output_____ |
LinearRegressionDL.ipynb | ###Markdown
Datasets
###Code
# Imports assumed by the cells in this notebook (Dataset/DataLoader, SGD, torch, numpy and matplotlib are all used below)
import numpy as np
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from torch.optim import SGD
from torch.utils.data import Dataset, DataLoader
class SimpleTrainDataset(Dataset):
def __init__(self):
self.x = torch.arange(-3,3,0.01).view(-1,1)
self.y = -3 * self.x + torch.randn(self.x.size())  # use self.x rather than an undefined global X
self.len=self.x.shape[0]
def __getitem__(self, index):
return self.x[index], self.y[index]
def __len__(self):
return self.len
dataset = SimpleTrainDataset()
plt.plot(dataset.x.numpy(), dataset.y.numpy(), "ro")
plt.title("Training set")
plt.show()
class SimpleValidationDataset(Dataset):
def __init__(self):
self.x = torch.arange(-3,3,0.01).view(-1,1)
self.y = -2*self.x + torch.randn(self.x.size()) + 0.42*torch.randn(self.x.size())
self.len=self.x.shape[0]
def __getitem__(self, index):
return self.x[index], self.y[index]
def __len__(self):
return self.len
dataset = SimpleValidationDataset()
plt.plot(dataset.x.numpy(), dataset.y.numpy(), "ro")
plt.title("Validation set")
plt.show()
class SimpleTestDataset(Dataset):
def __init__(self):
self.x = torch.arange(-3,3,0.01).view(-1,1)
self.y = -3.44*self.x + torch.randn(self.x.size()) + 42*torch.randn(self.x.size())*0.1
self.len=self.x.shape[0]
def __getitem__(self, index):
return self.x[index], self.y[index]
def __len__(self):
return self.len
dataset = SimpleTestDataset()
plt.plot(dataset.x.numpy(), dataset.y.numpy(), "ro")
plt.title("Test set")
plt.show()
###Output
_____no_output_____
###Markdown
Model Definition
###Code
class LR(nn.Module):
def __init__(self, in_size, out_size):
super(LR, self).__init__()
self.linear1 = nn.Linear(in_size, out_size)
def forward(self, x):
out = self.linear1(x)
return out
###Output
_____no_output_____
###Markdown
Training Training loop
###Code
def train(dataset, learning_rate, desired_batch_size, number_of_epochs=4):
trainloader = DataLoader(dataset=dataset, batch_size=desired_batch_size)
model = LR(in_size=1, out_size=1)
model.train()
optimizer = SGD(model.parameters(),lr=learning_rate)
criterion = nn.MSELoss()
losses = []
for epoch in range(number_of_epochs):
for x, y in trainloader:
yhat = model(x)
loss = criterion(yhat, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
losses.append(loss.item())
return losses, model
train_dataset = SimpleTrainDataset()
###Output
_____no_output_____
###Markdown
Batch Size Analysis
###Code
learning_rate=0.01
number_of_epochs=10
# train() returns (losses, model); keep only the loss curves here
batch_size_1, _ = train(train_dataset, learning_rate, desired_batch_size=1, number_of_epochs=number_of_epochs)
batch_size_5, _ = train(train_dataset, learning_rate, desired_batch_size=5, number_of_epochs=number_of_epochs)
batch_size_25, _ = train(train_dataset, learning_rate, desired_batch_size=25, number_of_epochs=number_of_epochs)
batch_size_100, _ = train(train_dataset, learning_rate, desired_batch_size=100, number_of_epochs=number_of_epochs)
n=20
plt.plot(batch_size_1[0:n], "r", label="batch size = 1")
plt.plot(batch_size_5[0:n], "g", label="batch size = 5")
plt.plot(batch_size_25[0:n], "b", label="batch size = 25")
plt.plot(batch_size_100[0:n], "c", label="batch size = 100")
plt.legend()
plt.title("Training loss")
plt.show()
###Output
_____no_output_____
###Markdown
Learning Rates Analysis
###Code
validation_dataset = SimpleValidationDataset()
validationloader = DataLoader(dataset=validation_dataset, batch_size=100)
learning_rates=[0.001, 0.005, 0.01, 0.05, 0.1]
number_of_epochs=400
learning_rates_losses = {}
criterion = nn.MSELoss()  # loss used to score each trained model on the validation set
for learning_rate in learning_rates:
_, model = train(train_dataset, learning_rate, desired_batch_size=25, number_of_epochs=number_of_epochs)  # fit on the training set, score on the validation set
losses = []
for x, y in validationloader:
yhat = model(x)
loss = criterion(yhat, y)
losses.append(loss.item())
learning_rates_losses[learning_rate] = np.mean(losses)
print(f"Best learning rate : {min(learning_rates_losses, key=learning_rates_losses.get)}")
###Output
Best learning rate : 0.001
|
DL_TF20/Part 7 - Comparing TensorFlow20 with PyTorch.ipynb | ###Markdown
TensorFlow 2.0
###Code
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import datasets
###Output
_____no_output_____
###Markdown
Hyperparameter Tuning
###Code
num_epochs = 1
batch_size = 64
learning_rate = 0.001
dropout_rate = 0.7
input_shape = (28, 28, 1)
num_classes = 10
###Output
_____no_output_____
###Markdown
Preprocess
###Code
(train_x, train_y), (test_x, test_y) = datasets.mnist.load_data()
train_x = train_x[..., tf.newaxis]
test_x = test_x[..., tf.newaxis]
train_x = train_x / 255.
test_x = test_x / 255.
###Output
_____no_output_____
###Markdown
Build Model
###Code
inputs = layers.Input(input_shape)
net = layers.Conv2D(32, (3, 3), padding='SAME')(inputs)
net = layers.Activation('relu')(net)
net = layers.Conv2D(32, (3, 3), padding='SAME')(net)
net = layers.Activation('relu')(net)
net = layers.MaxPooling2D(pool_size=(2, 2))(net)
net = layers.Dropout(dropout_rate)(net)
net = layers.Conv2D(64, (3, 3), padding='SAME')(net)
net = layers.Activation('relu')(net)
net = layers.Conv2D(64, (3, 3), padding='SAME')(net)
net = layers.Activation('relu')(net)
net = layers.MaxPooling2D(pool_size=(2, 2))(net)
net = layers.Dropout(dropout_rate)(net)
net = layers.Flatten()(net)
net = layers.Dense(512)(net)
net = layers.Activation('relu')(net)
net = layers.Dropout(dropout_rate)(net)
net = layers.Dense(num_classes)(net)
net = layers.Activation('softmax')(net)
model = tf.keras.Model(inputs=inputs, outputs=net, name='Basic_CNN')
# Model is the full model w/o custom layers
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate), # Optimization
loss='sparse_categorical_crossentropy', # Loss Function
metrics=['accuracy']) # Metrics / Accuracy
###Output
_____no_output_____
###Markdown
Training
###Code
model.fit(train_x, train_y,
batch_size=batch_size,
shuffle=True)
model.evaluate(test_x, test_y, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
PyTorch
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
seed = 1
lr = 0.001
momentum = 0.5
batch_size = 64
test_batch_size = 64
epochs = 5
no_cuda = False
log_interval = 100
###Output
_____no_output_____
###Markdown
Model
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5, 1)
self.conv2 = nn.Conv2d(20, 50, 5, 1)
self.fc1 = nn.Linear(4*4*50, 500)
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 4*4*50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=1)
###Output
_____no_output_____
###Markdown
Preprocess
###Code
torch.manual_seed(seed)
use_cuda = not no_cuda and torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=False, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=test_batch_size, shuffle=True, **kwargs)
###Output
_____no_output_____
###Markdown
Optimization
###Code
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=lr, momentum=momentum)
###Output
_____no_output_____
###Markdown
Training
###Code
for epoch in range(1, epochs + 1):
# Train Mode
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad() # zero the gradients before computing backpropagation
output = model(data)
loss = F.nll_loss(output, target) # https://pytorch.org/docs/stable/nn.html#nll-loss
loss.backward() # backpropagate to compute the gradients
optimizer.step()
if batch_idx % log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
# Test mode
model.eval() # switch layers such as batch norm and dropout to evaluation mode
test_loss = 0
correct = 0
with torch.no_grad(): # turn off the autograd engine (no backprop / gradient tracking) to reduce memory usage and speed up evaluation
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item() # count how many predictions match the target
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
###Output
_____no_output_____ |
courses/machine_learning/deepdive2/recommendation_systems/labs/deep_recommenders.ipynb | ###Markdown
Building deep retrieval models**Learning Objectives**1. Converting raw input examples into feature embeddings.2. Splitting the data into a training set and a testing set.3. Configuring the deeper model with losses and metrics. Introduction In [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) we incorporated multiple features into our models, but the models consist of only an embedding layer. We can add more dense layers to our models to increase their expressive power.In general, deeper models are capable of learning more complex patterns than shallower models. For example, our [user model](fhttps://www.tensorflow.org/recommenders/examples/featurizationuser_model) incorporates user ids and timestamps to model user preferences at a point in time. A shallow model (say, a single embedding layer) may only be able to learn the simplest relationships between those features and movies: a given movie is most popular around the time of its release, and a given user generally prefers horror movies to comedies. To capture more complex relationships, such as user preferences evolving over time, we may need a deeper model with multiple stacked dense layers.Of course, complex models also have their disadvantages. The first is computational cost, as larger models require both more memory and more computation to fit and serve. The second is the requirement for more data: in general, more training data is needed to take advantage of deeper models. With more parameters, deep models might overfit or even simply memorize the training examples instead of learning a function that can generalize. Finally, training deeper models may be harder, and more care needs to be taken in choosing settings like regularization and learning rate.Finding a good architecture for a real-world recommender system is a complex art, requiring good intuition and careful [hyperparameter tuning](https://en.wikipedia.org/wiki/Hyperparameter_optimization). For example, factors such as the depth and width of the model, activation function, learning rate, and optimizer can radically change the performance of the model. Modelling choices are further complicated by the fact that good offline evaluation metrics may not correspond to good online performance, and that the choice of what to optimize for is often more critical than the choice of model itself.Each learning objective will correspond to a _TODO_ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/deep_recommenders.ipynb) PreliminariesWe first import the necessary packages.
###Code
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
###Output
_____no_output_____
###Markdown
**NOTE: Please ignore any incompatibility warnings and errors and re-run the above cell before proceeding.**
###Code
import os
import tempfile
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
plt.style.use('seaborn-whitegrid')
###Output
_____no_output_____
###Markdown
This notebook uses TF2.x.Please check your tensorflow version using the cell below.
###Code
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.5.0
###Markdown
In this tutorial we will use the models from [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) to generate embeddings. Hence we will only be using the user id, timestamp, and movie title features.
###Code
ratings = tfds.load("movielens/100k-ratings", split="train")
movies = tfds.load("movielens/100k-movies", split="train")
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
"timestamp": x["timestamp"],
})
movies = movies.map(lambda x: x["movie_title"])
###Output
[1mDownloading and preparing dataset movielens/100k-ratings/0.1.0 (download: 4.70 MiB, generated: 32.41 MiB, total: 37.10 MiB) to /home/kbuilder/tensorflow_datasets/movielens/100k-ratings/0.1.0...[0m
###Markdown
We also do some housekeeping to prepare feature vocabularies.
###Code
timestamps = np.concatenate(list(ratings.map(lambda x: x["timestamp"]).batch(100)))
max_timestamp = timestamps.max()
min_timestamp = timestamps.min()
timestamp_buckets = np.linspace(
min_timestamp, max_timestamp, num=1000,
)
unique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000))))
unique_user_ids = np.unique(np.concatenate(list(ratings.batch(1_000).map(
lambda x: x["user_id"]))))
###Output
_____no_output_____
###Markdown
Model definition Query modelWe start with the user model defined in [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) as the first layer of our model, tasked with converting raw input examples into feature embeddings.
###Code
class UserModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.user_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
tf.keras.layers.Embedding(len(unique_user_ids) + 1, 32),
])
self.timestamp_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32),
])
self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization()
self.normalized_timestamp.adapt(timestamps)
def call(self, inputs):
# Take the input dictionary, pass it through each input layer,
# and concatenate the result.
return tf.concat([
self.user_embedding(inputs["user_id"]),
self.timestamp_embedding(inputs["timestamp"]),
self.normalized_timestamp(inputs["timestamp"]),
], axis=1)
###Output
_____no_output_____
###Markdown
Defining deeper models will require us to stack more layers on top of this first input. A progressively narrower stack of layers, separated by an activation function, is a common pattern:``` +----------------------+ | 128 x 64 | +----------------------+ | relu +--------------------------+ | 256 x 128 | +--------------------------+ | relu +------------------------------+ | ... x 256 | +------------------------------+```Since the expressive power of deep linear models is no greater than that of shallow linear models, we use ReLU activations for all but the last hidden layer. The final hidden layer does not use any activation function: using an activation function would limit the output space of the final embeddings and might negatively impact the performance of the model. For instance, if ReLUs are used in the projection layer, all components in the output embedding would be non-negative.We're going to try something similar here. To make experimentation with different depths easy, let's define a model whose depth (and width) is defined by a set of constructor parameters. (If you get stuck on the TODOs in the next cell, a hedged sketch follows it.)
###Code
class QueryModel(tf.keras.Model):
"""Model for encoding user queries."""
def __init__(self, layer_sizes):
"""Model for encoding user queries.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
"""
super().__init__()
# We first use the user model for generating embeddings.
# TODO 1a -- your code goes here
# Then construct the layers.
# TODO 1b -- your code goes here
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
###Output
_____no_output_____
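###Markdown
If you get stuck on the TODOs above, here is a minimal sketch of what the two missing lines might look like, mirroring the CandidateModel defined below (see the solution notebook for the reference implementation):
###Code
# Possible completion of the TODOs in QueryModel.__init__ (sketch only, kept as comments):
#
# # TODO 1a: use the user model for generating embeddings
# self.embedding_model = UserModel()
#
# # TODO 1b: construct the stack of dense layers
# self.dense_layers = tf.keras.Sequential()
###Output
_____no_output_____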
###Markdown
The `layer_sizes` parameter gives us the depth and width of the model. We can vary it to experiment with shallower or deeper models. Candidate modelWe can adopt the same approach for the movie model. Again, we start with the `MovieModel` from the [featurization](https://www.tensorflow.org/recommenders/examples/featurization) tutorial:
###Code
class MovieModel(tf.keras.Model):
def __init__(self):
super().__init__()
max_tokens = 10_000
self.title_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_movie_titles,mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, 32)
])
self.title_vectorizer = tf.keras.layers.experimental.preprocessing.TextVectorization(
max_tokens=max_tokens)
self.title_text_embedding = tf.keras.Sequential([
self.title_vectorizer,
tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),
tf.keras.layers.GlobalAveragePooling1D(),
])
self.title_vectorizer.adapt(movies)
def call(self, titles):
return tf.concat([
self.title_embedding(titles),
self.title_text_embedding(titles),
], axis=1)
###Output
_____no_output_____
###Markdown
And expand it with hidden layers:
###Code
class CandidateModel(tf.keras.Model):
"""Model for encoding movies."""
def __init__(self, layer_sizes):
"""Model for encoding movies.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
"""
super().__init__()
self.embedding_model = MovieModel()
# Then construct the layers.
self.dense_layers = tf.keras.Sequential()
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
###Output
_____no_output_____
###Markdown
Combined modelWith both `QueryModel` and `CandidateModel` defined, we can put together a combined model and implement our loss and metrics logic. To make things simple, we'll enforce that the model structure is the same across the query and candidate models.
###Code
class MovielensModel(tfrs.models.Model):
def __init__(self, layer_sizes):
super().__init__()
self.query_model = QueryModel(layer_sizes)
self.candidate_model = CandidateModel(layer_sizes)
self.task = tfrs.tasks.Retrieval(
metrics=tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128).map(self.candidate_model),
),
)
def compute_loss(self, features, training=False):
# We only pass the user id and timestamp features into the query model. This
# is to ensure that the training inputs would have the same keys as the
# query inputs. Otherwise the discrepancy in input structure would cause an
# error when loading the query model after saving it.
query_embeddings = self.query_model({
"user_id": features["user_id"],
"timestamp": features["timestamp"],
})
movie_embeddings = self.candidate_model(features["movie_title"])
return self.task(
query_embeddings, movie_embeddings, compute_metrics=not training)
###Output
_____no_output_____
###Markdown
Training the model Prepare the dataWe first split the data into a training set and a testing set.
###Code
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
# Split the data into a training set and a testing set
# TODO 2a -- your code goes here
###Output
_____no_output_____
###Markdown
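Before training, TODO 2a above has to produce the `cached_train` and `cached_test` datasets that the cells below consume. One possible completion is sketched here; the 80/20 split is conventional for the 100k ratings and the batch sizes are an assumption, not the official solution.

```python
# Sketch of TODO 2a (assumed split and batch sizes, not the official solution).
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)

# Batch and cache the datasets consumed by model.fit() in the cells below.
cached_train = train.shuffle(100_000).batch(2048).cache()
cached_test = test.batch(4096).cache()
```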
Shallow modelWe're ready to try out our first, shallow, model! **NOTE: The cell below will take approximately 15-20 minutes to run to completion.**
###Code
num_epochs = 300
model = MovielensModel([32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
one_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int32
###Markdown
This gives us a top-100 accuracy of around 0.27. We can use this as a reference point for evaluating deeper models. Deeper modelWhat about a deeper model with two layers? **NOTE: The cell below will take approximately 15-20 minutes to run to completion.**
###Code
model = MovielensModel([64, 32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
two_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int32
###Markdown
The accuracy here is 0.29, quite a bit better than the shallow model.We can plot the validation accuracy curves to illustrate this:
###Code
num_validation_runs = len(one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"])
epochs = [(x + 1)* 5 for x in range(num_validation_runs)]
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
###Output
_____no_output_____
###Markdown
Even early in training, the larger model has a clear and stable lead over the shallow model, suggesting that adding depth helps the model capture more nuanced relationships in the data.However, even deeper models are not necessarily better. The following model extends the depth to three layers: **NOTE: The cell below will take approximately 15-20 minutes to run to completion.**
###Code
# Model extends the depth to three layers
# TODO 3a -- your code goes here
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int32
###Markdown
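One possible completion of TODO 3a is sketched below; the `[128, 64, 32]` layer sizes are an assumption, not the official solution. It trains a three-layer model exactly like the shallower models above and stores its history in `three_layer_history`, which the plotting cell below expects.

```python
# Sketch of TODO 3a: a three-layer model, trained like the shallower ones.
model = MovielensModel([128, 64, 32])  # layer sizes are an assumption
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))

three_layer_history = model.fit(
    cached_train,
    validation_data=cached_test,
    validation_freq=5,
    epochs=num_epochs,
    verbose=0)

accuracy = three_layer_history.history[
    "val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
```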
In fact, we don't see improvement over the shallow model:
###Code
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.plot(epochs, three_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="3 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
###Output
_____no_output_____
###Markdown
Building deep retrieval models**Learning Objectives**1. Converting raw input examples into feature embeddings.2. Splitting the data into a training set and a testing set.3. Configuring the deeper model with losses and metrics. Introduction In [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) we incorporated multiple features into our models, but the models consist of only an embedding layer. We can add more dense layers to our models to increase their expressive power.In general, deeper models are capable of learning more complex patterns than shallower models. For example, our [user model](fhttps://www.tensorflow.org/recommenders/examples/featurizationuser_model) incorporates user ids and timestamps to model user preferences at a point in time. A shallow model (say, a single embedding layer) may only be able to learn the simplest relationships between those features and movies: a given movie is most popular around the time of its release, and a given user generally prefers horror movies to comedies. To capture more complex relationships, such as user preferences evolving over time, we may need a deeper model with multiple stacked dense layers.Of course, complex models also have their disadvantages. The first is computational cost, as larger models require both more memory and more computation to fit and serve. The second is the requirement for more data: in general, more training data is needed to take advantage of deeper models. With more parameters, deep models might overfit or even simply memorize the training examples instead of learning a function that can generalize. Finally, training deeper models may be harder, and more care needs to be taken in choosing settings like regularization and learning rate.Finding a good architecture for a real-world recommender system is a complex art, requiring good intuition and careful [hyperparameter tuning](https://en.wikipedia.org/wiki/Hyperparameter_optimization). For example, factors such as the depth and width of the model, activation function, learning rate, and optimizer can radically change the performance of the model. Modelling choices are further complicated by the fact that good offline evaluation metrics may not correspond to good online performance, and that the choice of what to optimize for is often more critical than the choice of model itself.Each learning objective will correspond to a _TODO_ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/deep_recommenders.ipynb) PreliminariesWe first import the necessary packages.
###Code
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
###Output
_____no_output_____
###Markdown
**NOTE: Please ignore any incompatibility warnings and errors and re-run the above cell before proceeding.**
###Code
!pip install tensorflow==2.5.0
###Output
Collecting tensorflow==2.5.0
Downloading tensorflow-2.5.0-cp37-cp37m-manylinux2010_x86_64.whl (454.3 MB)
[K |████████████████████████████▍ | 402.9 MB 84.1 MB/s eta 0:00:013
###Markdown
**NOTE: Please ignore any incompatibility warnings and errors.** **NOTE: Restart your kernel to use updated packages.**
###Code
import os
import tempfile
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
plt.style.use('seaborn-whitegrid')
###Output
_____no_output_____
###Markdown
This notebook uses TF2.x.Please check your tensorflow version using the cell below.
###Code
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.5.0
###Markdown
In this tutorial we will use the models from [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) to generate embeddings. Hence we will only be using the user id, timestamp, and movie title features.
###Code
ratings = tfds.load("movielens/100k-ratings", split="train")
movies = tfds.load("movielens/100k-movies", split="train")
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
"timestamp": x["timestamp"],
})
movies = movies.map(lambda x: x["movie_title"])
###Output
Downloading and preparing dataset movielens/100k-ratings/0.1.0 (download: 4.70 MiB, generated: 32.41 MiB, total: 37.10 MiB) to /home/kbuilder/tensorflow_datasets/movielens/100k-ratings/0.1.0...
###Markdown
We also do some housekeeping to prepare feature vocabularies.
###Code
timestamps = np.concatenate(list(ratings.map(lambda x: x["timestamp"]).batch(100)))
max_timestamp = timestamps.max()
min_timestamp = timestamps.min()
timestamp_buckets = np.linspace(
min_timestamp, max_timestamp, num=1000,
)
unique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000))))
unique_user_ids = np.unique(np.concatenate(list(ratings.batch(1_000).map(
lambda x: x["user_id"]))))
###Output
_____no_output_____
###Markdown
Model definition Query modelWe start with the user model defined in [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) as the first layer of our model, tasked with converting raw input examples into feature embeddings.
###Code
class UserModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.user_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
tf.keras.layers.Embedding(len(unique_user_ids) + 1, 32),
])
self.timestamp_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32),
])
self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization()
self.normalized_timestamp.adapt(timestamps)
def call(self, inputs):
# Take the input dictionary, pass it through each input layer,
# and concatenate the result.
return tf.concat([
self.user_embedding(inputs["user_id"]),
self.timestamp_embedding(inputs["timestamp"]),
self.normalized_timestamp(inputs["timestamp"]),
], axis=1)
###Output
_____no_output_____
###Markdown
Defining deeper models will require us to stack more layers on top of this first input. A progressively narrower stack of layers, separated by an activation function, is a common pattern:``` +----------------------+ | 128 x 64 | +----------------------+ | relu +--------------------------+ | 256 x 128 | +--------------------------+ | relu +------------------------------+ | ... x 256 | +------------------------------+```Since the expressive power of deep linear models is no greater than that of shallow linear models, we use ReLU activations for all but the last hidden layer. The final hidden layer does not use any activation function: using an activation function would limit the output space of the final embeddings and might negatively impact the performance of the model. For instance, if ReLUs are used in the projection layer, all components in the output embedding would be non-negative.We're going to try something similar here. To make experimentation with different depths easy, let's define a model whose depth (and width) is defined by a set of constructor parameters.
###Code
class QueryModel(tf.keras.Model):
"""Model for encoding user queries."""
def __init__(self, layer_sizes):
"""Model for encoding user queries.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
"""
super().__init__()
# We first use the user model for generating embeddings.
# TODO 1a -- your code goes here
# Then construct the layers.
# TODO 1b -- your code goes here
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
###Output
_____no_output_____
###Markdown
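For reference, here is one way the two TODOs in the cell above could be filled in. This is a sketch rather than the official solution: it mirrors the `CandidateModel` cell later in this notebook, reusing the `UserModel` class defined earlier as the embedding layer (TODO 1a) and collecting the stacked `Dense` layers in a `tf.keras.Sequential` container (TODO 1b).

```python
# A possible completion of TODOs 1a and 1b (a sketch, not the official
# solution); it assumes the UserModel class and imports from the cells above.
class QueryModel(tf.keras.Model):
    """Model for encoding user queries."""

    def __init__(self, layer_sizes):
        super().__init__()

        # TODO 1a: reuse the user model to turn raw query features into embeddings.
        self.embedding_model = UserModel()

        # TODO 1b: collect the stacked dense layers in a Sequential container.
        self.dense_layers = tf.keras.Sequential()

        # Use the ReLU activation for all but the last layer.
        for layer_size in layer_sizes[:-1]:
            self.dense_layers.add(
                tf.keras.layers.Dense(layer_size, activation="relu"))

        # No activation for the last layer.
        for layer_size in layer_sizes[-1:]:
            self.dense_layers.add(tf.keras.layers.Dense(layer_size))

    def call(self, inputs):
        return self.dense_layers(self.embedding_model(inputs))
```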
The `layer_sizes` parameter gives us the depth and width of the model. We can vary it to experiment with shallower or deeper models. Candidate modelWe can adopt the same approach for the movie model. Again, we start with the `MovieModel` from the [featurization](https://www.tensorflow.org/recommenders/examples/featurization) tutorial:
###Code
class MovieModel(tf.keras.Model):
def __init__(self):
super().__init__()
max_tokens = 10_000
self.title_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_movie_titles,mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, 32)
])
self.title_vectorizer = tf.keras.layers.experimental.preprocessing.TextVectorization(
max_tokens=max_tokens)
self.title_text_embedding = tf.keras.Sequential([
self.title_vectorizer,
tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),
tf.keras.layers.GlobalAveragePooling1D(),
])
self.title_vectorizer.adapt(movies)
def call(self, titles):
return tf.concat([
self.title_embedding(titles),
self.title_text_embedding(titles),
], axis=1)
###Output
_____no_output_____
###Markdown
And expand it with hidden layers:
###Code
class CandidateModel(tf.keras.Model):
"""Model for encoding movies."""
def __init__(self, layer_sizes):
"""Model for encoding movies.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
"""
super().__init__()
self.embedding_model = MovieModel()
# Then construct the layers.
self.dense_layers = tf.keras.Sequential()
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
###Output
_____no_output_____
###Markdown
Combined modelWith both `QueryModel` and `CandidateModel` defined, we can put together a combined model and implement our loss and metrics logic. To make things simple, we'll enforce that the model structure is the same across the query and candidate models.
###Code
class MovielensModel(tfrs.models.Model):
def __init__(self, layer_sizes):
super().__init__()
self.query_model = QueryModel(layer_sizes)
self.candidate_model = CandidateModel(layer_sizes)
self.task = tfrs.tasks.Retrieval(
metrics=tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128).map(self.candidate_model),
),
)
def compute_loss(self, features, training=False):
# We only pass the user id and timestamp features into the query model. This
# is to ensure that the training inputs would have the same keys as the
# query inputs. Otherwise the discrepancy in input structure would cause an
# error when loading the query model after saving it.
query_embeddings = self.query_model({
"user_id": features["user_id"],
"timestamp": features["timestamp"],
})
movie_embeddings = self.candidate_model(features["movie_title"])
return self.task(
query_embeddings, movie_embeddings, compute_metrics=not training)
###Output
_____no_output_____
###Markdown
Training the model Prepare the dataWe first split the data into a training set and a testing set.
###Code
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
# Split the data into a training set and a testing set
# TODO 2a -- your code goes here
###Output
_____no_output_____
###Markdown
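Before training, TODO 2a above has to produce the `cached_train` and `cached_test` datasets that the cells below consume. One possible completion is sketched here; the 80/20 split is conventional for the 100k ratings and the batch sizes are an assumption, not the official solution.

```python
# Sketch of TODO 2a (assumed split and batch sizes, not the official solution).
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)

# Batch and cache the datasets consumed by model.fit() in the cells below.
cached_train = train.shuffle(100_000).batch(2048).cache()
cached_test = test.batch(4096).cache()
```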
Shallow modelWe're ready to try out our first, shallow, model! **NOTE: The cell below will take approximately 15-20 minutes to run to completion.**
###Code
num_epochs = 300
model = MovielensModel([32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
one_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int32
###Markdown
This gives us a top-100 accuracy of around 0.27. We can use this as a reference point for evaluating deeper models. Deeper modelWhat about a deeper model with two layers? **NOTE: The cell below will take approximately 15-20 minutes to run to completion.**
###Code
model = MovielensModel([64, 32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
two_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int32
###Markdown
The accuracy here is 0.29, quite a bit better than the shallow model.We can plot the validation accuracy curves to illustrate this:
###Code
num_validation_runs = len(one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"])
epochs = [(x + 1)* 5 for x in range(num_validation_runs)]
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
###Output
_____no_output_____
###Markdown
Even early in training, the larger model has a clear and stable lead over the shallow model, suggesting that adding depth helps the model capture more nuanced relationships in the data.However, even deeper models are not necessarily better. The following model extends the depth to three layers: **NOTE: The cell below will take approximately 15-20 minutes to run to completion.**
###Code
# Model extends the depth to three layers
# TODO 3a -- your code goes here
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int32
###Markdown
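One possible completion of TODO 3a is sketched below; the `[128, 64, 32]` layer sizes are an assumption, not the official solution. It trains a three-layer model exactly like the shallower models above and stores its history in `three_layer_history`, which the plotting cell below expects.

```python
# Sketch of TODO 3a: a three-layer model, trained like the shallower ones.
model = MovielensModel([128, 64, 32])  # layer sizes are an assumption
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))

three_layer_history = model.fit(
    cached_train,
    validation_data=cached_test,
    validation_freq=5,
    epochs=num_epochs,
    verbose=0)

accuracy = three_layer_history.history[
    "val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
```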
In fact, we don't see improvement over the shallow model:
###Code
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.plot(epochs, three_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="3 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
###Output
_____no_output_____
###Markdown
Building deep retrieval models**Learning Objectives**1. Converting raw input examples into feature embeddings.2. Splitting the data into a training set and a testing set.3. Configuring the deeper model with losses and metrics. Introduction In [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) we incorporated multiple features into our models, but the models consist of only an embedding layer. We can add more dense layers to our models to increase their expressive power.In general, deeper models are capable of learning more complex patterns than shallower models. For example, our [user model](fhttps://www.tensorflow.org/recommenders/examples/featurizationuser_model) incorporates user ids and timestamps to model user preferences at a point in time. A shallow model (say, a single embedding layer) may only be able to learn the simplest relationships between those features and movies: a given movie is most popular around the time of its release, and a given user generally prefers horror movies to comedies. To capture more complex relationships, such as user preferences evolving over time, we may need a deeper model with multiple stacked dense layers.Of course, complex models also have their disadvantages. The first is computational cost, as larger models require both more memory and more computation to fit and serve. The second is the requirement for more data: in general, more training data is needed to take advantage of deeper models. With more parameters, deep models might overfit or even simply memorize the training examples instead of learning a function that can generalize. Finally, training deeper models may be harder, and more care needs to be taken in choosing settings like regularization and learning rate.Finding a good architecture for a real-world recommender system is a complex art, requiring good intuition and careful [hyperparameter tuning](https://en.wikipedia.org/wiki/Hyperparameter_optimization). For example, factors such as the depth and width of the model, activation function, learning rate, and optimizer can radically change the performance of the model. Modelling choices are further complicated by the fact that good offline evaluation metrics may not correspond to good online performance, and that the choice of what to optimize for is often more critical than the choice of model itself.Each learning objective will correspond to a _TODO_ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/deep_recommenders.ipynb) PreliminariesWe first import the necessary packages.
###Code
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
###Output
_____no_output_____
###Markdown
**NOTE: Please ignore any incompatibility warnings and errors and re-run the above cell before proceeding.**
###Code
import os
import tempfile
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
plt.style.use('seaborn-whitegrid')
###Output
_____no_output_____
###Markdown
This notebook uses TF2.x.Please check your tensorflow version using the cell below.
###Code
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.3.0
###Markdown
In this tutorial we will use the models from [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) to generate embeddings. Hence we will only be using the user id, timestamp, and movie title features.
###Code
ratings = tfds.load("movielens/100k-ratings", split="train")
movies = tfds.load("movielens/100k-movies", split="train")
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
"timestamp": x["timestamp"],
})
movies = movies.map(lambda x: x["movie_title"])
###Output
Downloading and preparing dataset movielens/100k-ratings/0.1.0 (download: 4.70 MiB, generated: 32.41 MiB, total: 37.10 MiB) to /home/kbuilder/tensorflow_datasets/movielens/100k-ratings/0.1.0...
###Markdown
We also do some housekeeping to prepare feature vocabularies.
###Code
timestamps = np.concatenate(list(ratings.map(lambda x: x["timestamp"]).batch(100)))
max_timestamp = timestamps.max()
min_timestamp = timestamps.min()
timestamp_buckets = np.linspace(
min_timestamp, max_timestamp, num=1000,
)
unique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000))))
unique_user_ids = np.unique(np.concatenate(list(ratings.batch(1_000).map(
lambda x: x["user_id"]))))
###Output
_____no_output_____
###Markdown
Model definition Query modelWe start with the user model defined in [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) as the first layer of our model, tasked with converting raw input examples into feature embeddings.
###Code
class UserModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.user_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
tf.keras.layers.Embedding(len(unique_user_ids) + 1, 32),
])
self.timestamp_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32),
])
self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization()
self.normalized_timestamp.adapt(timestamps)
def call(self, inputs):
# Take the input dictionary, pass it through each input layer,
# and concatenate the result.
return tf.concat([
self.user_embedding(inputs["user_id"]),
self.timestamp_embedding(inputs["timestamp"]),
self.normalized_timestamp(inputs["timestamp"]),
], axis=1)
###Output
_____no_output_____
###Markdown
Defining deeper models will require us to stack more layers on top of this first input. A progressively narrower stack of layers, separated by an activation function, is a common pattern:``` +----------------------+ | 128 x 64 | +----------------------+ | relu +--------------------------+ | 256 x 128 | +--------------------------+ | relu +------------------------------+ | ... x 256 | +------------------------------+```Since the expressive power of deep linear models is no greater than that of shallow linear models, we use ReLU activations for all but the last hidden layer. The final hidden layer does not use any activation function: using an activation function would limit the output space of the final embeddings and might negatively impact the performance of the model. For instance, if ReLUs are used in the projection layer, all components in the output embedding would be non-negative.We're going to try something similar here. To make experimentation with different depths easy, let's define a model whose depth (and width) is defined by a set of constructor parameters.
###Code
class QueryModel(tf.keras.Model):
"""Model for encoding user queries."""
def __init__(self, layer_sizes):
"""Model for encoding user queries.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
"""
super().__init__()
# We first use the user model for generating embeddings.
# TODO 1a -- your code goes here
# Then construct the layers.
# TODO 1b -- your code goes here
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
###Output
_____no_output_____
###Markdown
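For reference, here is one way the two TODOs in the cell above could be filled in. This is a sketch rather than the official solution: it mirrors the `CandidateModel` cell later in this notebook, reusing the `UserModel` class defined earlier as the embedding layer (TODO 1a) and collecting the stacked `Dense` layers in a `tf.keras.Sequential` container (TODO 1b).

```python
# A possible completion of TODOs 1a and 1b (a sketch, not the official
# solution); it assumes the UserModel class and imports from the cells above.
class QueryModel(tf.keras.Model):
    """Model for encoding user queries."""

    def __init__(self, layer_sizes):
        super().__init__()

        # TODO 1a: reuse the user model to turn raw query features into embeddings.
        self.embedding_model = UserModel()

        # TODO 1b: collect the stacked dense layers in a Sequential container.
        self.dense_layers = tf.keras.Sequential()

        # Use the ReLU activation for all but the last layer.
        for layer_size in layer_sizes[:-1]:
            self.dense_layers.add(
                tf.keras.layers.Dense(layer_size, activation="relu"))

        # No activation for the last layer.
        for layer_size in layer_sizes[-1:]:
            self.dense_layers.add(tf.keras.layers.Dense(layer_size))

    def call(self, inputs):
        return self.dense_layers(self.embedding_model(inputs))
```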
The `layer_sizes` parameter gives us the depth and width of the model. We can vary it to experiment with shallower or deeper models. Candidate modelWe can adopt the same approach for the movie model. Again, we start with the `MovieModel` from the [featurization](https://www.tensorflow.org/recommenders/examples/featurization) tutorial:
###Code
class MovieModel(tf.keras.Model):
def __init__(self):
super().__init__()
max_tokens = 10_000
self.title_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_movie_titles,mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, 32)
])
self.title_vectorizer = tf.keras.layers.experimental.preprocessing.TextVectorization(
max_tokens=max_tokens)
self.title_text_embedding = tf.keras.Sequential([
self.title_vectorizer,
tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),
tf.keras.layers.GlobalAveragePooling1D(),
])
self.title_vectorizer.adapt(movies)
def call(self, titles):
return tf.concat([
self.title_embedding(titles),
self.title_text_embedding(titles),
], axis=1)
###Output
_____no_output_____
###Markdown
And expand it with hidden layers:
###Code
class CandidateModel(tf.keras.Model):
"""Model for encoding movies."""
def __init__(self, layer_sizes):
"""Model for encoding movies.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
"""
super().__init__()
self.embedding_model = MovieModel()
# Then construct the layers.
self.dense_layers = tf.keras.Sequential()
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
###Output
_____no_output_____
###Markdown
Combined modelWith both `QueryModel` and `CandidateModel` defined, we can put together a combined model and implement our loss and metrics logic. To make things simple, we'll enforce that the model structure is the same across the query and candidate models.
###Code
class MovielensModel(tfrs.models.Model):
def __init__(self, layer_sizes):
super().__init__()
self.query_model = QueryModel(layer_sizes)
self.candidate_model = CandidateModel(layer_sizes)
self.task = tfrs.tasks.Retrieval(
metrics=tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128).map(self.candidate_model),
),
)
def compute_loss(self, features, training=False):
# We only pass the user id and timestamp features into the query model. This
# is to ensure that the training inputs would have the same keys as the
# query inputs. Otherwise the discrepancy in input structure would cause an
# error when loading the query model after saving it.
query_embeddings = self.query_model({
"user_id": features["user_id"],
"timestamp": features["timestamp"],
})
movie_embeddings = self.candidate_model(features["movie_title"])
return self.task(
query_embeddings, movie_embeddings, compute_metrics=not training)
###Output
_____no_output_____
###Markdown
Training the model Prepare the dataWe first split the data into a training set and a testing set.
###Code
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
# Split the data into a training set and a testing set
# TODO 2a -- your code goes here
###Output
_____no_output_____
###Markdown
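Before training, TODO 2a above has to produce the `cached_train` and `cached_test` datasets that the cells below consume. One possible completion is sketched here; the 80/20 split is conventional for the 100k ratings and the batch sizes are an assumption, not the official solution.

```python
# Sketch of TODO 2a (assumed split and batch sizes, not the official solution).
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)

# Batch and cache the datasets consumed by model.fit() in the cells below.
cached_train = train.shuffle(100_000).batch(2048).cache()
cached_test = test.batch(4096).cache()
```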
Shallow modelWe're ready to try out our first, shallow, model! **NOTE: The cell below will take approximately 15-20 minutes to run to completion.**
###Code
num_epochs = 300
model = MovielensModel([32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
one_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int32
###Markdown
This gives us a top-100 accuracy of around 0.27. We can use this as a reference point for evaluating deeper models. Deeper modelWhat about a deeper model with two layers? **NOTE: The cell below will take approximately 15-20 minutes to run to completion.**
###Code
model = MovielensModel([64, 32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
two_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int32
###Markdown
The accuracy here is 0.29, quite a bit better than the shallow model.We can plot the validation accuracy curves to illustrate this:
###Code
num_validation_runs = len(one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"])
epochs = [(x + 1)* 5 for x in range(num_validation_runs)]
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
###Output
_____no_output_____
###Markdown
Even early in training, the larger model has a clear and stable lead over the shallow model, suggesting that adding depth helps the model capture more nuanced relationships in the data.However, even deeper models are not necessarily better. The following model extends the depth to three layers: **NOTE: The cell below will take approximately 15-20 minutes to run to completion.**
###Code
# Model extends the depth to three layers
# TODO 3a -- your code goes here
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int32
###Markdown
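One possible completion of TODO 3a is sketched below; the `[128, 64, 32]` layer sizes are an assumption, not the official solution. It trains a three-layer model exactly like the shallower models above and stores its history in `three_layer_history`, which the plotting cell below expects.

```python
# Sketch of TODO 3a: a three-layer model, trained like the shallower ones.
model = MovielensModel([128, 64, 32])  # layer sizes are an assumption
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))

three_layer_history = model.fit(
    cached_train,
    validation_data=cached_test,
    validation_freq=5,
    epochs=num_epochs,
    verbose=0)

accuracy = three_layer_history.history[
    "val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
```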
In fact, we don't see improvement over the shallow model:
###Code
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.plot(epochs, three_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="3 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
###Output
_____no_output_____
###Markdown
Building deep retrieval models**Learning Objectives**1. Converting raw input examples into feature embeddings.2. Splitting the data into a training set and a testing set.3. Configuring the deeper model with losses and metrics. Introduction In [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) we incorporated multiple features into our models, but the models consist of only an embedding layer. We can add more dense layers to our models to increase their expressive power.In general, deeper models are capable of learning more complex patterns than shallower models. For example, our [user model](fhttps://www.tensorflow.org/recommenders/examples/featurizationuser_model) incorporates user ids and timestamps to model user preferences at a point in time. A shallow model (say, a single embedding layer) may only be able to learn the simplest relationships between those features and movies: a given movie is most popular around the time of its release, and a given user generally prefers horror movies to comedies. To capture more complex relationships, such as user preferences evolving over time, we may need a deeper model with multiple stacked dense layers.Of course, complex models also have their disadvantages. The first is computational cost, as larger models require both more memory and more computation to fit and serve. The second is the requirement for more data: in general, more training data is needed to take advantage of deeper models. With more parameters, deep models might overfit or even simply memorize the training examples instead of learning a function that can generalize. Finally, training deeper models may be harder, and more care needs to be taken in choosing settings like regularization and learning rate.Finding a good architecture for a real-world recommender system is a complex art, requiring good intuition and careful [hyperparameter tuning](https://en.wikipedia.org/wiki/Hyperparameter_optimization). For example, factors such as the depth and width of the model, activation function, learning rate, and optimizer can radically change the performance of the model. Modelling choices are further complicated by the fact that good offline evaluation metrics may not correspond to good online performance, and that the choice of what to optimize for is often more critical than the choice of model itself.Each learning objective will correspond to a _TODO_ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/deep_recommenders.ipynb) PreliminariesWe first import the necessary packages.
###Code
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
###Output
_____no_output_____
###Markdown
**NOTE: Please ignore any incompatibility warnings and errors and re-run the above cell before proceeding.**
###Code
!pip install tensorflow==2.5.0
###Output
Collecting tensorflow==2.5.0
Downloading tensorflow-2.5.0-cp37-cp37m-manylinux2010_x86_64.whl (454.3 MB)
[K |████████████████████████████▍ | 402.9 MB 84.1 MB/s eta 0:00:013
###Markdown
**NOTE: Please ignore any incompatibility warnings and errors.** **NOTE: Restart your kernel to use updated packages.**
###Code
import os
import tempfile
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
plt.style.use('seaborn-whitegrid')
###Output
_____no_output_____
###Markdown
This notebook uses TF2.x.Please check your tensorflow version using the cell below.
###Code
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.5.0
###Markdown
In this tutorial we will use the models from [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) to generate embeddings. Hence we will only be using the user id, timestamp, and movie title features.
###Code
ratings = tfds.load("movielens/100k-ratings", split="train")
movies = tfds.load("movielens/100k-movies", split="train")
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
"timestamp": x["timestamp"],
})
movies = movies.map(lambda x: x["movie_title"])
###Output
Downloading and preparing dataset movielens/100k-ratings/0.1.0 (download: 4.70 MiB, generated: 32.41 MiB, total: 37.10 MiB) to /home/kbuilder/tensorflow_datasets/movielens/100k-ratings/0.1.0...
###Markdown
We also do some housekeeping to prepare feature vocabularies.
###Code
timestamps = np.concatenate(list(ratings.map(lambda x: x["timestamp"]).batch(100)))
max_timestamp = timestamps.max()
min_timestamp = timestamps.min()
timestamp_buckets = np.linspace(
min_timestamp, max_timestamp, num=1000,
)
unique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000))))
unique_user_ids = np.unique(np.concatenate(list(ratings.batch(1_000).map(
lambda x: x["user_id"]))))
###Output
_____no_output_____
###Markdown
Model definition Query modelWe start with the user model defined in [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) as the first layer of our model, tasked with converting raw input examples into feature embeddings.
###Code
class UserModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.user_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
tf.keras.layers.Embedding(len(unique_user_ids) + 1, 32),
])
self.timestamp_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32),
])
self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization()
self.normalized_timestamp.adapt(timestamps)
def call(self, inputs):
# Take the input dictionary, pass it through each input layer,
# and concatenate the result.
return tf.concat([
self.user_embedding(inputs["user_id"]),
self.timestamp_embedding(inputs["timestamp"]),
self.normalized_timestamp(inputs["timestamp"]),
], axis=1)
###Output
_____no_output_____
###Markdown
Defining deeper models will require us to stack more layers on top of this first input. A progressively narrower stack of layers, separated by an activation function, is a common pattern:``` +----------------------+ | 128 x 64 | +----------------------+ | relu +--------------------------+ | 256 x 128 | +--------------------------+ | relu +------------------------------+ | ... x 256 | +------------------------------+```Since the expressive power of deep linear models is no greater than that of shallow linear models, we use ReLU activations for all but the last hidden layer. The final hidden layer does not use any activation function: using an activation function would limit the output space of the final embeddings and might negatively impact the performance of the model. For instance, if ReLUs are used in the projection layer, all components in the output embedding would be non-negative.We're going to try something similar here. To make experimentation with different depths easy, let's define a model whose depth (and width) is defined by a set of constructor parameters.
###Code
class QueryModel(tf.keras.Model):
"""Model for encoding user queries."""
def __init__(self, layer_sizes):
"""Model for encoding user queries.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
"""
super().__init__()
# We first use the user model for generating embeddings.
# TODO 1a -- your code goes here
# Then construct the layers.
# TODO 1b -- your code goes here
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
###Output
_____no_output_____
###Markdown
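For reference, here is one way the two TODOs in the cell above could be filled in. This is a sketch rather than the official solution: it mirrors the `CandidateModel` cell later in this notebook, reusing the `UserModel` class defined earlier as the embedding layer (TODO 1a) and collecting the stacked `Dense` layers in a `tf.keras.Sequential` container (TODO 1b).

```python
# A possible completion of TODOs 1a and 1b (a sketch, not the official
# solution); it assumes the UserModel class and imports from the cells above.
class QueryModel(tf.keras.Model):
    """Model for encoding user queries."""

    def __init__(self, layer_sizes):
        super().__init__()

        # TODO 1a: reuse the user model to turn raw query features into embeddings.
        self.embedding_model = UserModel()

        # TODO 1b: collect the stacked dense layers in a Sequential container.
        self.dense_layers = tf.keras.Sequential()

        # Use the ReLU activation for all but the last layer.
        for layer_size in layer_sizes[:-1]:
            self.dense_layers.add(
                tf.keras.layers.Dense(layer_size, activation="relu"))

        # No activation for the last layer.
        for layer_size in layer_sizes[-1:]:
            self.dense_layers.add(tf.keras.layers.Dense(layer_size))

    def call(self, inputs):
        return self.dense_layers(self.embedding_model(inputs))
```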
The `layer_sizes` parameter gives us the depth and width of the model. We can vary it to experiment with shallower or deeper models. Candidate modelWe can adopt the same approach for the movie model. Again, we start with the `MovieModel` from the [featurization](https://www.tensorflow.org/recommenders/examples/featurization) tutorial:
###Code
class MovieModel(tf.keras.Model):
def __init__(self):
super().__init__()
max_tokens = 10_000
self.title_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_movie_titles,mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, 32)
])
self.title_vectorizer = tf.keras.layers.experimental.preprocessing.TextVectorization(
max_tokens=max_tokens)
self.title_text_embedding = tf.keras.Sequential([
self.title_vectorizer,
tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),
tf.keras.layers.GlobalAveragePooling1D(),
])
self.title_vectorizer.adapt(movies)
def call(self, titles):
return tf.concat([
self.title_embedding(titles),
self.title_text_embedding(titles),
], axis=1)
###Output
_____no_output_____
###Markdown
And expand it with hidden layers:
###Code
class CandidateModel(tf.keras.Model):
"""Model for encoding movies."""
def __init__(self, layer_sizes):
"""Model for encoding movies.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
"""
super().__init__()
self.embedding_model = MovieModel()
# Then construct the layers.
self.dense_layers = tf.keras.Sequential()
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
###Output
_____no_output_____
###Markdown
Combined modelWith both `QueryModel` and `CandidateModel` defined, we can put together a combined model and implement our loss and metrics logic. To make things simple, we'll enforce that the model structure is the same across the query and candidate models.
###Code
class MovielensModel(tfrs.models.Model):
def __init__(self, layer_sizes):
super().__init__()
self.query_model = QueryModel(layer_sizes)
self.candidate_model = CandidateModel(layer_sizes)
self.task = tfrs.tasks.Retrieval(
metrics=tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128).map(self.candidate_model),
),
)
def compute_loss(self, features, training=False):
# We only pass the user id and timestamp features into the query model. This
# is to ensure that the training inputs would have the same keys as the
# query inputs. Otherwise the discrepancy in input structure would cause an
# error when loading the query model after saving it.
query_embeddings = self.query_model({
"user_id": features["user_id"],
"timestamp": features["timestamp"],
})
movie_embeddings = self.candidate_model(features["movie_title"])
return self.task(
query_embeddings, movie_embeddings, compute_metrics=not training)
###Output
_____no_output_____
###Markdown
Training the model Prepare the dataWe first split the data into a training set and a testing set.
###Code
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
# Split the data into a training set and a testing set
# TODO 2a -- your code goes here
###Output
_____no_output_____
###Markdown
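Before training, TODO 2a above has to produce the `cached_train` and `cached_test` datasets that the cells below consume. One possible completion is sketched here; the 80/20 split is conventional for the 100k ratings and the batch sizes are an assumption, not the official solution.

```python
# Sketch of TODO 2a (assumed split and batch sizes, not the official solution).
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)

# Batch and cache the datasets consumed by model.fit() in the cells below.
cached_train = train.shuffle(100_000).batch(2048).cache()
cached_test = test.batch(4096).cache()
```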
Shallow modelWe're ready to try out our first, shallow, model! **NOTE: The cell below will take approximately 15-20 minutes to run to completion.**
###Code
num_epochs = 300
model = MovielensModel([32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
one_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int32
###Markdown
This gives us a top-100 accuracy of around 0.27. We can use this as a reference point for evaluating deeper models. Deeper modelWhat about a deeper model with two layers? **NOTE: The cell below will take approximately 15-20 minutes to run to completion.**
###Code
model = MovielensModel([64, 32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
two_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int32
###Markdown
The accuracy here is 0.29, quite a bit better than the shallow model.We can plot the validation accuracy curves to illustrate this:
###Code
num_validation_runs = len(one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"])
epochs = [(x + 1)* 5 for x in range(num_validation_runs)]
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
###Output
_____no_output_____
###Markdown
Even early in training, the larger model has a clear and stable lead over the shallow model, suggesting that adding depth helps the model capture more nuanced relationships in the data.However, even deeper models are not necessarily better. The following model extends the depth to three layers: **NOTE: The cell below will take approximately 15-20 minutes to run to completion.**
###Code
# Model extends the depth to three layers
# TODO 3a -- your code goes here
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int32
###Markdown
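One possible completion of TODO 3a is sketched below; the `[128, 64, 32]` layer sizes are an assumption, not the official solution. It trains a three-layer model exactly like the shallower models above and stores its history in `three_layer_history`, which the plotting cell below expects.

```python
# Sketch of TODO 3a: a three-layer model, trained like the shallower ones.
model = MovielensModel([128, 64, 32])  # layer sizes are an assumption
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))

three_layer_history = model.fit(
    cached_train,
    validation_data=cached_test,
    validation_freq=5,
    epochs=num_epochs,
    verbose=0)

accuracy = three_layer_history.history[
    "val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
```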
In fact, we don't see improvement over the shallow model:
###Code
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.plot(epochs, three_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="3 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
###Output
_____no_output_____
###Markdown
Building deep retrieval models**Learning Objectives**1. Converting raw input examples into feature embeddings.2. Splitting the data into a training set and a testing set.3. Configuring the deeper model with losses and metrics. Introduction In [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) we incorporated multiple features into our models, but the models consist of only an embedding layer. We can add more dense layers to our models to increase their expressive power.In general, deeper models are capable of learning more complex patterns than shallower models. For example, our [user model](fhttps://www.tensorflow.org/recommenders/examples/featurizationuser_model) incorporates user ids and timestamps to model user preferences at a point in time. A shallow model (say, a single embedding layer) may only be able to learn the simplest relationships between those features and movies: a given movie is most popular around the time of its release, and a given user generally prefers horror movies to comedies. To capture more complex relationships, such as user preferences evolving over time, we may need a deeper model with multiple stacked dense layers.Of course, complex models also have their disadvantages. The first is computational cost, as larger models require both more memory and more computation to fit and serve. The second is the requirement for more data: in general, more training data is needed to take advantage of deeper models. With more parameters, deep models might overfit or even simply memorize the training examples instead of learning a function that can generalize. Finally, training deeper models may be harder, and more care needs to be taken in choosing settings like regularization and learning rate.Finding a good architecture for a real-world recommender system is a complex art, requiring good intuition and careful [hyperparameter tuning](https://en.wikipedia.org/wiki/Hyperparameter_optimization). For example, factors such as the depth and width of the model, activation function, learning rate, and optimizer can radically change the performance of the model. Modelling choices are further complicated by the fact that good offline evaluation metrics may not correspond to good online performance, and that the choice of what to optimize for is often more critical than the choice of model itself.Each learning objective will correspond to a _TODO_ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/recommendation_systems/soulutions/deep_recommenders.ipynb) PreliminariesWe first import the necessary packages.
###Code
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
###Output
_____no_output_____
###Markdown
**NOTE: Please ignore any incompatibility warnings and errors and re-run the above cell before proceeding.**
###Code
import os
import tempfile
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
plt.style.use('seaborn-whitegrid')
###Output
_____no_output_____
###Markdown
This notebook uses TF2.x.Please check your tensorflow version using the cell below.
###Code
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
###Output
TensorFlow version: 2.3.0
###Markdown
In this tutorial we will use the models from [the featurization tutorial](featurization) to generate embeddings. Hence we will only be using the user id, timestamp, and movie title features.
###Code
ratings = tfds.load("movielens/100k-ratings", split="train")
movies = tfds.load("movielens/100k-movies", split="train")
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
"timestamp": x["timestamp"],
})
movies = movies.map(lambda x: x["movie_title"])
###Output
Downloading and preparing dataset movielens/100k-ratings/0.1.0 (download: 4.70 MiB, generated: 32.41 MiB, total: 37.10 MiB) to /home/kbuilder/tensorflow_datasets/movielens/100k-ratings/0.1.0...
###Markdown
We also do some housekeeping to prepare feature vocabularies.
###Code
timestamps = np.concatenate(list(ratings.map(lambda x: x["timestamp"]).batch(100)))
max_timestamp = timestamps.max()
min_timestamp = timestamps.min()
timestamp_buckets = np.linspace(
min_timestamp, max_timestamp, num=1000,
)
unique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000))))
unique_user_ids = np.unique(np.concatenate(list(ratings.batch(1_000).map(
lambda x: x["user_id"]))))
###Output
_____no_output_____
###Markdown
Model definition Query modelWe start with the user model defined in [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) as the first layer of our model, tasked with converting raw input examples into feature embeddings.
###Code
class UserModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.user_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_user_ids, mask_token=None),
tf.keras.layers.Embedding(len(unique_user_ids) + 1, 32),
])
self.timestamp_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32),
])
self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization()
self.normalized_timestamp.adapt(timestamps)
def call(self, inputs):
# Take the input dictionary, pass it through each input layer,
# and concatenate the result.
return tf.concat([
self.user_embedding(inputs["user_id"]),
self.timestamp_embedding(inputs["timestamp"]),
self.normalized_timestamp(inputs["timestamp"]),
], axis=1)
###Output
_____no_output_____
###Markdown
Defining deeper models will require us to stack more layers on top of this first input. A progressively narrower stack of layers, separated by an activation function, is a common pattern:
```
+----------------------+
|       128 x 64       |
+----------------------+
          | relu
+--------------------------+
|        256 x 128         |
+--------------------------+
          | relu
+------------------------------+
|          ... x 256           |
+------------------------------+
```
Since the expressive power of deep linear models is no greater than that of shallow linear models, we use ReLU activations for all but the last hidden layer. The final hidden layer does not use any activation function: using an activation function would limit the output space of the final embeddings and might negatively impact the performance of the model. For instance, if ReLUs are used in the projection layer, all components in the output embedding would be non-negative.We're going to try something similar here. To make experimentation with different depths easy, let's define a model whose depth (and width) is defined by a set of constructor parameters.
###Code
class QueryModel(tf.keras.Model):
"""Model for encoding user queries."""
def __init__(self, layer_sizes):
"""Model for encoding user queries.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
"""
super().__init__()
# We first use the user model for generating embeddings.
# TODO 1a -- your code goes here
# Then construct the layers.
# TODO 1b -- your code goes here
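    # One possible completion of TODOs 1a and 1b (a sketch mirroring the
    # CandidateModel class defined below): reuse UserModel to generate the
    # feature embeddings and stack the dense layers in a Sequential container.
    self.embedding_model = UserModel()
    self.dense_layers = tf.keras.Sequential()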
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
###Output
_____no_output_____
###Markdown
The `layer_sizes` parameter gives us the depth and width of the model. We can vary it to experiment with shallower or deeper models. Candidate modelWe can adopt the same approach for the movie model. Again, we start with the `MovieModel` from the [featurization](https://www.tensorflow.org/recommenders/examples/featurization) tutorial:
###Code
class MovieModel(tf.keras.Model):
def __init__(self):
super().__init__()
max_tokens = 10_000
self.title_embedding = tf.keras.Sequential([
tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=unique_movie_titles,mask_token=None),
tf.keras.layers.Embedding(len(unique_movie_titles) + 1, 32)
])
self.title_vectorizer = tf.keras.layers.experimental.preprocessing.TextVectorization(
max_tokens=max_tokens)
self.title_text_embedding = tf.keras.Sequential([
self.title_vectorizer,
tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),
tf.keras.layers.GlobalAveragePooling1D(),
])
self.title_vectorizer.adapt(movies)
def call(self, titles):
return tf.concat([
self.title_embedding(titles),
self.title_text_embedding(titles),
], axis=1)
###Output
_____no_output_____
###Markdown
And expand it with hidden layers:
###Code
class CandidateModel(tf.keras.Model):
"""Model for encoding movies."""
def __init__(self, layer_sizes):
"""Model for encoding movies.
Args:
layer_sizes:
A list of integers where the i-th entry represents the number of units
the i-th layer contains.
"""
super().__init__()
self.embedding_model = MovieModel()
# Then construct the layers.
self.dense_layers = tf.keras.Sequential()
# Use the ReLU activation for all but the last layer.
for layer_size in layer_sizes[:-1]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))
# No activation for the last layer.
for layer_size in layer_sizes[-1:]:
self.dense_layers.add(tf.keras.layers.Dense(layer_size))
def call(self, inputs):
feature_embedding = self.embedding_model(inputs)
return self.dense_layers(feature_embedding)
###Output
_____no_output_____
###Markdown
Combined modelWith both `QueryModel` and `CandidateModel` defined, we can put together a combined model and implement our loss and metrics logic. To make things simple, we'll enforce that the model structure is the same across the query and candidate models.
###Code
class MovielensModel(tfrs.models.Model):
def __init__(self, layer_sizes):
super().__init__()
self.query_model = QueryModel(layer_sizes)
self.candidate_model = CandidateModel(layer_sizes)
self.task = tfrs.tasks.Retrieval(
metrics=tfrs.metrics.FactorizedTopK(
candidates=movies.batch(128).map(self.candidate_model),
),
)
def compute_loss(self, features, training=False):
# We only pass the user id and timestamp features into the query model. This
# is to ensure that the training inputs would have the same keys as the
# query inputs. Otherwise the discrepancy in input structure would cause an
# error when loading the query model after saving it.
query_embeddings = self.query_model({
"user_id": features["user_id"],
"timestamp": features["timestamp"],
})
movie_embeddings = self.candidate_model(features["movie_title"])
return self.task(
query_embeddings, movie_embeddings, compute_metrics=not training)
###Output
_____no_output_____
###Markdown
Training the model Prepare the dataWe first split the data into a training set and a testing set.
###Code
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
# Split the data into a training set and a testing set
# TODO 2a -- your code goes here
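# One possible completion (a sketch following the standard MovieLens split used
# in the TFRS tutorials): 80k examples for training and 20k for testing, then
# batched and cached because the later cells expect `cached_train`/`cached_test`.
train = shuffled.take(80_000)
test = shuffled.skip(80_000).take(20_000)
cached_train = train.shuffle(100_000).batch(2048).cache()
cached_test = test.batch(4096).cache()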
###Output
_____no_output_____
###Markdown
Shallow modelWe're ready to try out our first, shallow, model! **NOTE: The below cell will take approximately 15~20 minutes to get executed completely.**
###Code
num_epochs = 300
model = MovielensModel([32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
one_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int32
###Markdown
This gives us a top-100 accuracy of around 0.27. We can use this as a reference point for evaluating deeper models. Deeper modelWhat about a deeper model with two layers? **NOTE: The below cell will take approximately 15~20 minutes to get executed completely.**
###Code
model = MovielensModel([64, 32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
two_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int32
###Markdown
The accuracy here is 0.29, quite a bit better than the shallow model.We can plot the validation accuracy curves to illustrate this:
###Code
num_validation_runs = len(one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"])
epochs = [(x + 1)* 5 for x in range(num_validation_runs)]
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
###Output
_____no_output_____
###Markdown
Even early on in the training, the larger model has a clear and stable lead over the shallow model, suggesting that adding depth helps the model capture more nuanced relationships in the data.However, even deeper models are not necessarily better. The following model extends the depth to three layers: **NOTE: The below cell will take approximately 15~20 minutes to get executed completely.**
###Code
# Model extends the depth to three layers
# TODO 3a -- your code goes here
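# One possible completion (a sketch mirroring the one- and two-layer runs above);
# the [128, 64, 32] layer sizes are an assumption for the three-layer variant.
model = MovielensModel([128, 64, 32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
three_layer_history = model.fit(
    cached_train,
    validation_data=cached_test,
    validation_freq=5,
    epochs=num_epochs,
    verbose=0)
accuracy = three_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")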
###Output
WARNING:tensorflow:The dtype of the source tensor must be floating (e.g. tf.float32) when calling GradientTape.gradient, got tf.int32
###Markdown
In fact, we don't see improvement over the shallow model:
###Code
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.plot(epochs, three_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="3 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
###Output
_____no_output_____ |
nbs/display.ipynb | ###Markdown
[discord.Message.created_at returns a datetime.datetime](https://discordpy.readthedocs.io/en/stable/api.html#discord.Message.created_at)
###Code
from datetime import datetime
testts = datetime(year=2021, month=12, day=7, hour=12, minute=31, second=15, microsecond=12345 )
testts.strftime("%b %d %Y %H:%M:%S")
#export
import json
import discord  # needed for the isinstance checks on discord objects below
def encode(u):
if isinstance(u, discord.Message):
ts = u.created_at
serialized = "({ts}){author}: {content}".format(ts=ts.strftime("%b %d %Y %H:%M:%S"),
author=u.author.name,
content=u.content)
return serialized
elif isinstance(u, discord.Thread):
return 'Thread: {}'.format(u.name)
elif isinstance(u, discord.TextChannel):
return 'Channel: {}'.format(u.name)
elif isinstance(u, discord.Guild):
return 'Guild: {}'.format(u.name)
else:
type_name = u.__class__.__name__
raise TypeError("Unexpected type {0}".format(type_name))
class DiscordEncoder(json.JSONEncoder):
def default(self, u):
if isinstance(u, discord.Message):
"""
serialized = {
"id": u.id,
"content": u.content,
"author": u.author.name,
"created_at": u.created_at.isoformat()
}
"""
serialized = "({ts}){author}: {content}".format(ts=u.created_at.isoformat(),
author=u.author.name,
content=u.content)
return serialized
elif isinstance(u, discord.Thread):
return 'Thread: {}'.format(u.name)
elif isinstance(u, discord.TextChannel):
return 'Channel: {}'.format(u.name)
elif isinstance(u, discord.Guild):
return 'Guild: {}'.format(u.name)
else:
#type_name = u.__class__.__name__
#raise TypeError("Unexpected type {0}".format(type_name))
            return json.JSONEncoder.default(self, u)
class Formatter:
def __init__(self):
self.lines = []
def add(self, thing):
#entry = json.dumps(thing, cls=DiscordEncoder)
entry = encode(thing)
self.lines.append(entry)
#export
#TODO change the data model for this to something more standard.
# use only strings for the keywords rather than discord objects
def serialize_content(guild_content):
fmt = Formatter()
print('--------- content summary -------------')
for guild, channels_d in guild_content.items():
fmt.add(guild)
for channel_obj, thread_d in channels_d.items():
fmt.add(channel_obj)
for thread, msg_list in thread_d.items():
if msg_list:
fmt.add(thread)
for msg in msg_list:
fmt.add(msg)
return fmt.lines
def html_content(guild_content):
lines = serialize_content(guild_content)
print(lines)
return '\n<br>'.join(lines)
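# Hypothetical usage sketch (illustrative only; the nested dict keys are the
# discord objects handled by encode() above):
# guild_content = {guild: {channel: {thread: [message, ...]}}}
# html = html_content(guild_content)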
###Output
_____no_output_____ |
DeepLearning-RealTimeFaceRecognition.ipynb | ###Markdown
Real time face recognition application using a deep neural network Below is the implementation of a face detector and recognizer which can identify the face of the person showing on a web cam. We'll be implementing it in the Keras framework.The deep neural network we'll be using here is based on [FaceNet](https://arxiv.org/pdf/1503.03832.pdf), which was published by Google in 2015 and achieved 99.57% accuracy on a popular face recognition dataset named "Labeled Faces in the Wild" (LFW). You can find its open-source Keras version [here](https://github.com/iwantooxxoox/Keras-OpenFace) and TensorFlow version [here](https://github.com/davidsandberg/facenet), and play around to build your own models. Import libraries
###Code
import numpy as np
from numpy import genfromtxt
import pandas as pd
import os
import glob
import cv2
from mtcnn.mtcnn import MTCNN
import utils
#keras imported in utils.py file
%load_ext autoreload
%autoreload 2
import sys
np.set_printoptions(threshold=sys.maxsize)  # threshold=np.nan is rejected by recent NumPy versions
###Output
_____no_output_____
###Markdown
How to let computers tell whether two pictures are the same person?Looking at the two photos below with our naked eyes, we can easily tell it is the same person, although the hairstyle, clothing and distance from the camera are different. But how can we let computers tell whether it is the same person or not? Notice that when computers 'see' pictures, an RGB picture is 'seen' as values in three (RGB) channels at each pixel of the picture. If it is a pixel_size*pixel_size RGB picture, it will be a (pixel_size, pixel_size, 3) matrix. Then, how can computers tell whether two such matrices represent the same person? At first, we might think of reshaping the (pixel_size, pixel_size, 3) matrix into a 1-dimensional vector and verifying whether the pictures show the same person based on the distance between the vectors. However, when she took the different pictures, she might have been wearing different clothes and accessories, standing at different distances from the camera, etc. All these variations can significantly mislead the computer's judgement. Because of this, a direct comparison of the corresponding 1-d vectors of two pictures is not an ideal strategy. Instead, we'll approach this problem by encoding the input picture into a 128-dimensional embedding by passing it through a deep neural network, and use this 128-dimensional embedding as the representation of each picture. The model architecture is shown below. If the distance between two 128-d vectors is larger than a chosen threshold, then the two pictures are not the same person, and vice versa. We'll talk about the triplet loss function in a later section; first, let's implement the deep neural network.
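For intuition, the final comparison is just a distance check between two embedding vectors. A minimal sketch (the function name and the threshold value are illustrative; 0.8 is the threshold used later in this notebook):
```python
import numpy as np

def is_same_person(embedding_a, embedding_b, threshold=0.8):
    # L2 (Euclidean) distance between the two 128-d embeddings
    distance = np.linalg.norm(embedding_a - embedding_b)
    return distance < threshold
```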
###Code
#import facenet model
#see inception_blocks.py for model implementation
from utils import LRN2D
import utils
from inception_blocks import *
#show the architecture of the network
model = faceRecoModel((96, 96, 3))
model.summary()
###Output
_____no_output_____
###Markdown
Triplet loss functionThe FaceNet model converts input images into 128-d embeddings to represent them. The parameters are then trained by minimizing the triplet loss. The triplet loss minimizes the distance between an anchor and a positive, both of which have the same identity, and maximizes the distance between the anchor and a negative of a different identity. As shown below:The training process requires a GPU and a large amount of training data; you could also use transfer learning and fine-tune the weights. Here, however, we'll load previously trained weights, which are available [here](https://github.com/iwantooxxoox/Keras-OpenFace) in the "weights" folder and are also provided with this source.
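For reference, the triplet loss over an anchor A, positive P and negative N with margin alpha is max(||f(A)-f(P)||^2 - ||f(A)-f(N)||^2 + alpha, 0). A minimal TensorFlow sketch (the function name and the margin value alpha=0.2 are illustrative assumptions, not settings taken from the pretrained weights):
```python
import tensorflow as tf

def triplet_loss(anchor, positive, negative, alpha=0.2):
    # squared L2 distances between the 128-d embeddings
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    # hinge loss: positives must be closer than negatives by at least the margin
    return tf.reduce_sum(tf.maximum(pos_dist - neg_dist + alpha, 0.0))
```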
###Code
# load weights(this process will take a few minutes)
import utils
weights = utils.weights
weights_dict = utils.load_weights()
for name in weights:
if model.get_layer(name) != None:
model.get_layer(name).set_weights(weights_dict[name])
elif model.get_layer(name) != None:
model.get_layer(name).set_weights(weights_dict[name])
###Output
_____no_output_____
###Markdown
Capture, crop, align and resize identity face images in real time using OpenCVIn this section, we'll be using OpenCV (make sure you've installed it) to open a web camera, detect and outline the face area with a blue rectangle, and then capture 15 face images of the person in front of the camera. These cropped face snapshots are stored in the **"images"** folder with the names NameHere_1 to NameHere_15. Select only one well-captured face image from these 15 images for each person. Rename it with the person's name and delete the rest. Repeat this process for different people, with each person keeping only one picture in this folder. Later in this program, when a person shows up in front of the camera, it will calculate the distance from each stored picture and return the name of the most likely match.
###Code
cap = cv2.VideoCapture(0)
detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
count = 0
while(True):
# capture frame by frame
ret, img = cap.read()
# detect the face, you can change the scaleFactor according to your case
faces = detector.detectMultiScale(img, scaleFactor= 1.5, minNeighbors= 5)
for (x,y,w,h) in faces:
# outline the face area by a blue rectangle
cv2.rectangle(img, (x,y), (x+w,y+h), (255,0,0),2)
count += 1
# save the cropped face image into the datasets folder
cv2.imwrite("images/NameHere_" + str(count) + ".jpg", img[y:y+h,x:x+w])
cv2.imshow('image', img)
# Press 'ESC' for exiting video
k = cv2.waitKey(200) & 0xff
if k == 27:
break
elif count >= 8:
break
cap.release()
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
Or detect faces with a Multi-task CNNBesides using OpenCV to detect faces, we can also use Dlib or a deep-learning Multi-task CNN (MTCNN). Here we show how to use MTCNN to detect the face in an image. After running this section, you may go to the "pictures" folder to check the detected face images.
###Code
from mtcnn.mtcnn import MTCNN
image= cv2.imread('pictures/yifei.jpg')
detector1= MTCNN()
result=detector1.detect_faces(image)
print(result)
count=0
for person in result:
bounding_box = person['box']
x=bounding_box[0]
y=bounding_box[1]
w=bounding_box[2]
h=bounding_box[3]
keypoints = person['keypoints']
cv2.rectangle(image, (x, y), (x+w, y+h), (255,0,255), 2)
cv2.circle(image,(keypoints['left_eye']), 2, (0,155,255), 2)
cv2.circle(image,(keypoints['right_eye']), 2, (0,155,255), 2)
cv2.circle(image,(keypoints['nose']), 2, (0,155,255), 2)
cv2.circle(image,(keypoints['mouth_left']), 2, (0,155,255), 2)
cv2.circle(image,(keypoints['mouth_right']), 2, (0,155,255), 2)
cv2.imwrite("pictures/" + str(count)+ "_detected.jpg", image)
cv2.imwrite("pictures/" + str(count)+ ".jpg", image[y:y+h,x:x+w])
count +=1
###Output
_____no_output_____
###Markdown
Steps to recognize faces: First, encode one single image into embeddings. Second, build a database containing embeddings for all images by passing all images through the weighted Facenet model. Third, identify images by using the embeddings (find the minimum L2 Euclidean distance between embeddings).
###Code
#First, encode one single image into embeddings
def image_to_embedding(image, model):
image = cv2.resize(image, (96, 96))
img = image[...,::-1]
img = np.around(np.transpose(img, (0,1,2))/255.0, decimals=12)
x_train = np.array([img])
embedding = model.predict_on_batch(x_train)
return embedding
#Second, build a database containing embeddings for all images
def build_database_dict():
database = {}
for file in glob.glob("/Users/Olivia/Documents/ML/Face-recognition-using-deep-learning-master/images/*"):
database_name = os.path.splitext(os.path.basename(file))[0]
image_file = cv2.imread(file, 1)
database[database_name] = image_to_embedding(image_file, model)
return database
#Third, identify images by using the embeddings(find the minimum L2 euclidean distance between embeddings)
def recognize_face(face_image, database, model):
embedding = image_to_embedding(face_image, model)
minimum_distance = 200
name = None
# Loop over names and encodings.
for (database_name, database_embedding) in database.items():
euclidean_distance = np.linalg.norm(embedding-database_embedding)
print('Euclidean distance from %s is %s' %(database_name, euclidean_distance))
if euclidean_distance < minimum_distance:
minimum_distance = euclidean_distance
name = database_name
if minimum_distance < 0.8:
return str(name)+str(' ')+str(round(minimum_distance,14))
else:
return 'Unknown'
###Output
_____no_output_____
###Markdown
Try an image
###Code
database= build_database_dict()
image= cv2.imread('images/Obama.jpg')
recognize_face(image, database, model)
###Output
_____no_output_____
###Markdown
Recognize faces in real time using webcam
###Code
cv2.namedWindow("Face Recognizer")
vc = cv2.VideoCapture(0)
font = cv2.FONT_HERSHEY_SIMPLEX
detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
while True:
ret, frame = vc.read()
height, width, channels = frame.shape
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, 1.3, 5)
# loop through all the faces detected
for (x, y, w, h) in faces:
face_image = frame[max(0, y):min(height, y+h), max(0, x):min(width, x+w)]
identity = recognize_face(face_image, database, model)
if identity is not None:
img = cv2.rectangle(frame,(x, y),(x+w, y+h),(255,0,0),2)
cv2.putText(frame, str(identity), (x+5,y-5), cv2.FONT_HERSHEY_SIMPLEX, 1, (255,0,255), 2)
key = cv2.waitKey(100)
cv2.imshow("Face Recognizer", frame)
if key == 27: # exit on ESC
break
vc.release()
cv2.destroyAllWindows()
###Output
_____no_output_____ |
examples/smokes_friends_cancer/smokes_friends_cancer.ipynb | ###Markdown
Language
###Code
embedding_size = 5
g1 = {l:ltn.constant(np.random.uniform(low=0.0,high=1.0,size=embedding_size),trainable=True) for l in 'abcdefgh'}
g2 = {l:ltn.constant(np.random.uniform(low=0.0,high=1.0,size=embedding_size),trainable=True) for l in 'ijklmn'}
g = {**g1,**g2}
Smokes = ltn.Predicate.MLP([embedding_size],hidden_layer_sizes=(8,8))
Friends = ltn.Predicate.MLP([embedding_size,embedding_size],hidden_layer_sizes=(8,8))
Cancer = ltn.Predicate.MLP([embedding_size],hidden_layer_sizes=(8,8))
friends = [('a','b'),('a','e'),('a','f'),('a','g'),('b','c'),('c','d'),('e','f'),('g','h'),
('i','j'),('j','m'),('k','l'),('m','n')]
smokes = ['a','e','f','g','j','n']
cancer = ['a','e']
Not = ltn.Wrapper_Connective(ltn.fuzzy_ops.Not_Std())
And = ltn.Wrapper_Connective(ltn.fuzzy_ops.And_Prod())
Or = ltn.Wrapper_Connective(ltn.fuzzy_ops.Or_ProbSum())
Implies = ltn.Wrapper_Connective(ltn.fuzzy_ops.Implies_Reichenbach())
Forall = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMeanError(p=2),semantics="forall")
Exists = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMean(p=6),semantics="exists")
formula_aggregator = ltn.fuzzy_ops.Aggreg_pMeanError()
# defining the theory
@tf.function
def axioms(p_exists):
"""
NOTE: we update the embeddings at each step
-> we should re-compute the variables.
"""
p = ltn.variable("p",tf.stack(list(g.values())))
q = ltn.variable("q",tf.stack(list(g.values())))
axioms = []
# Friends: knowledge incomplete in that
# Friend(x,y) with x<y may be known
# but Friend(y,x) may not be known
axioms.append(formula_aggregator(tf.stack(
[Friends([g[x],g[y]]) for (x,y) in friends])))
axioms.append(formula_aggregator(tf.stack(
[Not(Friends([g[x],g[y]])) for x in g1 for y in g1 if (x,y) not in friends and x<y ]+\
[Not(Friends([g[x],g[y]])) for x in g2 for y in g2 if (x,y) not in friends and x<y ])))
# Smokes: knowledge complete
axioms.append(formula_aggregator(tf.stack(
[Smokes(g[x]) for x in smokes])))
axioms.append(formula_aggregator(tf.stack(
[Not(Smokes(g[x])) for x in g if x not in smokes])))
# Cancer: knowledge complete in g1 only
axioms.append(formula_aggregator(tf.stack(
[Cancer(g[x]) for x in cancer])))
axioms.append(formula_aggregator(tf.stack(
[Not(Cancer(g[x])) for x in g1 if x not in cancer])))
# friendship is anti-reflexive
axioms.append(Forall(p,Not(Friends([p,p])),p=5))
# friendship is symmetric
axioms.append(Forall((p,q),Implies(Friends([p,q]),Friends([q,p])),p=5))
# everyone has a friend
axioms.append(Forall(p,Exists(q,Friends([p,q]),p=p_exists)))
# smoking propagates among friends
axioms.append(Forall((p,q),Implies(And(Friends([p,q]),Smokes(p)),Smokes(q))))
# smoking causes cancer + not smoking causes not cancer
axioms.append(Forall(p,Implies(Smokes(p),Cancer(p))))
axioms.append(Forall(p,Implies(Not(Smokes(p)),Not(Cancer(p)))))
# computing sat_level
axioms = tf.stack([tf.squeeze(ax) for ax in axioms])
sat_level = formula_aggregator(axioms)
return sat_level, axioms
axioms(p_exists=tf.constant(1.))
###Output
_____no_output_____
###Markdown
Training
###Code
trainable_variables = \
Smokes.trainable_variables \
+ Friends.trainable_variables \
+ Cancer.trainable_variables \
+ list(g.values())
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
for epoch in range(2000):
if 0 <= epoch < 400:
p_exists = tf.constant(1.)
else:
p_exists = tf.constant(6.)
with tf.GradientTape() as tape:
loss_value = 1. - axioms(p_exists=p_exists)[0]
grads = tape.gradient(loss_value, trainable_variables)
optimizer.apply_gradients(zip(grads, trainable_variables))
if epoch%200 == 0:
print("Epoch %d: Sat Level %.3f"%(epoch, axioms(p_exists=p_exists)[0]))
print("Training finished at Epoch %d with Sat Level %.3f"%(epoch, axioms(p_exists=p_exists)[0]))
###Output
Epoch 0: Sat Level 0.570
Epoch 200: Sat Level 0.684
Epoch 400: Sat Level 0.764
Epoch 600: Sat Level 0.822
Epoch 800: Sat Level 0.838
Epoch 1000: Sat Level 0.846
Epoch 1200: Sat Level 0.863
Epoch 1400: Sat Level 0.875
Epoch 1600: Sat Level 0.876
Epoch 1800: Sat Level 0.876
Training finished at Epoch 1999 with Sat Level 0.876
###Markdown
ResultsPartial facts
###Code
df_smokes_cancer_facts = pd.DataFrame(
np.array([[(x in smokes), (x in cancer) if x in g1 else math.nan] for x in g]),
columns=["Smokes","Cancer"],
index=list('abcdefghijklmn'))
df_friends_ah_facts = pd.DataFrame(
np.array([[((x,y) in friends) if x<y else math.nan for x in g1] for y in g1]),
index = list('abcdefgh'),
columns = list('abcdefgh'))
df_friends_in_facts = pd.DataFrame(
np.array([[((x,y) in friends) if x<y else math.nan for x in g2] for y in g2]),
index = list('ijklmn'),
columns = list('ijklmn'))
p = ltn.variable("p",tf.stack(list(g.values())))
q = ltn.variable("q",tf.stack(list(g.values())))
df_smokes_cancer = pd.DataFrame(
tf.stack([Smokes(p),Cancer(p)],axis=1).numpy(),
columns=["Smokes","Cancer"],
index=list('abcdefghijklmn'))
pred_friends = tf.squeeze(Friends([p,q]))
df_friends_ah = pd.DataFrame(
pred_friends[:8,:8].numpy(),
index=list('abcdefgh'),
columns=list('abcdefgh'))
df_friends_in = pd.DataFrame(
pred_friends[8:,8:].numpy(),
index=list('ijklmn'),
columns=list('ijklmn'))
plt.rcParams['font.size'] = 12
plt.rcParams['axes.linewidth'] = 1
###Output
_____no_output_____
###Markdown
Facts given in the "groundtruth". Notice that the facts are not all compatible with the axioms. For instance, `f` is said to smoke but not to have cancer, while one rule states that every smoker has cancer.
###Code
plt.figure(figsize=(12,3))
plt.subplot(131)
plt_heatmap(df_smokes_cancer_facts, vmin=0, vmax=1)
plt.subplot(132)
plt.title("Friend(x,y) in Group 1")
plt_heatmap(df_friends_ah_facts, vmin=0, vmax=1)
plt.subplot(133)
plt.title("Friend(x,y) in Group 2")
plt_heatmap(df_friends_in_facts, vmin=0, vmax=1)
#plt.savefig('ex_smokes_givenfacts.pdf')
plt.show()
###Output
_____no_output_____
###Markdown
Facts inferred by the LTN system.
###Code
plt.figure(figsize=(12,3))
plt.subplot(131)
plt_heatmap(df_smokes_cancer, vmin=0, vmax=1)
plt.subplot(132)
plt.title("Friend(x,y) in Group 1")
plt_heatmap(df_friends_ah, vmin=0, vmax=1)
plt.subplot(133)
plt.title("Friend(x,y) in Group 2")
plt_heatmap(df_friends_in, vmin=0, vmax=1)
#plt.savefig('ex_smokes_inferfacts.pdf')
plt.show()
###Output
_____no_output_____
###Markdown
Satisfiability of the axioms.
###Code
print("forall p: ~Friends(p,p) : %.2f" % Forall(p,Not(Friends([p,p]))))
print("forall p,q: Friends(p,q) -> Friends(q,p) : %.2f" % Forall((p,q),Implies(Friends([p,q]),Friends([q,p]))))
print("forall p: exists q: Friends(p,q) : %.2f" % Forall(p,Exists(q,Friends([p,q]))))
print("forall p,q: Friends(p,q) -> (Smokes(p)->Smokes(q)) : %.2f" % Forall((p,q),Implies(Friends([p,q]),Implies(Smokes(p),Smokes(q)))))
print("forall p: Smokes(p) -> Cancer(p) : %.2f" % Forall(p,Implies(Smokes(p),Cancer(p))))
###Output
forall p: ~Friends(p,p) : 1.00
forall p,q: Friends(p,q) -> Friends(q,p) : 0.99
forall p: exists q: Friends(p,q) : 0.78
forall p,q: Friends(p,q) -> (Smokes(p)->Smokes(q)) : 0.79
forall p: Smokes(p) -> Cancer(p) : 0.76
###Markdown
We can query unknown formulas.
###Code
print("forall p: Cancer(p) -> Smokes(p): %.2f" % Forall(p,Implies(Cancer(p),Smokes(p)),p=5))
print("forall p,q: (Cancer(p) or Cancer(q)) -> Friends(p,q): %.2f" % Forall((p,q), Implies(Or(Cancer(p),Cancer(q)),Friends([p,q])),p=5))
###Output
forall p: Cancer(p) -> Smokes(p): 0.96
forall p,q: (Cancer(p) or Cancer(q)) -> Friends(p,q): 0.21
###Markdown
Visualize the embeddings
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
x = [v.numpy() for v in g.values()]
x_norm = StandardScaler().fit_transform(x)
pca = PCA(n_components=2)
pca_transformed = pca.fit_transform(x_norm)
var_x = ltn.variable("x",x)
var_x1 = ltn.variable("x1",x)
var_x2 = ltn.variable("x2",x)
plt.figure(figsize=(8,5))
plt.subplot(221)
plt.scatter(pca_transformed[:len(g1.values()),0],pca_transformed[:len(g1.values()),1],label="Group 1")
plt.scatter(pca_transformed[len(g1.values()):,0],pca_transformed[len(g1.values()):,1],label="Group 2")
names = list(g.keys())
for i in range(len(names)):
plt.annotate(names[i].upper(),pca_transformed[i])
plt.title("Embeddings")
plt.legend()
plt.subplot(222)
plt.scatter(pca_transformed[:,0],pca_transformed[:,1],c=Smokes(var_x))
plt.title("Smokes")
plt.colorbar()
plt.subplot(224)
plt.scatter(pca_transformed[:,0],pca_transformed[:,1],c=Cancer(var_x))
plt.title("Cancer")
plt.colorbar()
plt.subplot(223)
plt.scatter(pca_transformed[:len(g1.values()),0],pca_transformed[:len(g1.values()),1],label="Group 1")
plt.scatter(pca_transformed[len(g1.values()):,0],pca_transformed[len(g1.values()):,1],label="Group 2")
res = Friends([var_x1,var_x2]).numpy()
for i1 in range(len(x)):
for i2 in range(i1,len(x)):
if (names[i1] in g1 and names[i2] in g2) \
or (names[i1] in g2 and names[i2] in g1):
continue
plt.plot(
[pca_transformed[i1,0],pca_transformed[i2,0]],
[pca_transformed[i1,1],pca_transformed[i2,1]],
alpha=res[i1,i2],c="black")
plt.title("Friendships per group")
plt.tight_layout()
#plt.savefig("ex_smokes_embeddings.pdf")
###Output
_____no_output_____
###Markdown
Smokes Friends CancerA classic example of Statistical Relational Learning is the smokers-friends-cancer example introduced in the [Markov Logic Networks paper (2006)](https://homes.cs.washington.edu/~pedrod/papers/mlj05.pdf).There are 14 people divided into two groups $\{a,b,\dots,h\}$ and $\{i,j,\dots,n\}$. - Within each group, there is complete knowledge about smoking habits. - In the first group, there is complete knowledge about who has and who does not have cancer. - Knowledge about the friendship relation is complete within each group only if symmetry is assumed, that is, $\forall x,y \ (friends(x,y) \rightarrow friends(y,x))$. Otherwise, knowledge about friendship is incomplete in that it may be known that e.g. $a$ is a friend of $b$, and it may not be known whether $b$ is a friend of $a$.- Finally, there is general knowledge about smoking, friendship and cancer, namely that smoking causes cancer, friendship is normally symmetric and anti-reflexive, everyone has a friend, and smoking propagates (actively or passively) among friends. One can formulate this task easily in LTN as follows.
###Code
import logging; logging.basicConfig(level=logging.INFO)
import pdb
import tensorflow as tf
import numpy as np
import math
import matplotlib.pyplot as plt
import pandas as pd
import logictensornetworks as ltn
np.set_printoptions(suppress=True)
pd.options.display.max_rows=999
pd.options.display.max_columns=999
pd.set_option('display.width',1000)
pd.options.display.float_format = '{:,.2f}'.format
def plt_heatmap(df, vmin=None, vmax=None):
plt.pcolor(df, vmin=vmin, vmax=vmax)
plt.yticks(np.arange(0.5,len(df.index),1),df.index)
plt.xticks(np.arange(0.5,len(df.columns),1),df.columns)
plt.colorbar()
pd.set_option('precision',2)
###Output
_____no_output_____
###Markdown
Language- LTN constants are used to denote the individuals. Each is grounded as a trainable embedding.- The `Smokes`, `Friends`, `Cancer` predicates are grounded as simple MLPs.- All the rules in the preamble are formulated in the knowledge base.
###Code
embedding_size = 5
g1 = {l:ltn.constant(np.random.uniform(low=0.0,high=1.0,size=embedding_size),trainable=True) for l in 'abcdefgh'}
g2 = {l:ltn.constant(np.random.uniform(low=0.0,high=1.0,size=embedding_size),trainable=True) for l in 'ijklmn'}
g = {**g1,**g2}
Smokes = ltn.Predicate.MLP([embedding_size],hidden_layer_sizes=(8,8))
Friends = ltn.Predicate.MLP([embedding_size,embedding_size],hidden_layer_sizes=(8,8))
Cancer = ltn.Predicate.MLP([embedding_size],hidden_layer_sizes=(8,8))
friends = [('a','b'),('a','e'),('a','f'),('a','g'),('b','c'),('c','d'),('e','f'),('g','h'),
('i','j'),('j','m'),('k','l'),('m','n')]
smokes = ['a','e','f','g','j','n']
cancer = ['a','e']
Not = ltn.Wrapper_Connective(ltn.fuzzy_ops.Not_Std())
And = ltn.Wrapper_Connective(ltn.fuzzy_ops.And_Prod())
Or = ltn.Wrapper_Connective(ltn.fuzzy_ops.Or_ProbSum())
Implies = ltn.Wrapper_Connective(ltn.fuzzy_ops.Implies_Reichenbach())
Forall = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMeanError(p=2),semantics="forall")
Exists = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMean(p=6),semantics="exists")
formula_aggregator = ltn.fuzzy_ops.Aggreg_pMeanError()
###Output
_____no_output_____
###Markdown
Notice that the knowledge-base is not satisfiable in the strict logical sense of the word.For instance, the individual $f$ is said to smoke but not to have cancer, which is inconsistent with the rule $\forall x \ (S(x) \rightarrow C(x))$.Hence, it is important to adopt a probabilistic approach as done with MLN or a many-valued fuzzy logic interpretation as done with LTN.
###Code
# defining the theory
@tf.function
def axioms(p_exists):
"""
NOTE: we update the embeddings at each step
-> we should re-compute the variables.
"""
p = ltn.variable("p",tf.stack(list(g.values())))
q = ltn.variable("q",tf.stack(list(g.values())))
axioms = []
# Friends: knowledge incomplete in that
# Friend(x,y) with x<y may be known
# but Friend(y,x) may not be known
axioms.append(formula_aggregator(tf.stack(
[Friends([g[x],g[y]]) for (x,y) in friends])))
axioms.append(formula_aggregator(tf.stack(
[Not(Friends([g[x],g[y]])) for x in g1 for y in g1 if (x,y) not in friends and x<y ]+\
[Not(Friends([g[x],g[y]])) for x in g2 for y in g2 if (x,y) not in friends and x<y ])))
# Smokes: knowledge complete
axioms.append(formula_aggregator(tf.stack(
[Smokes(g[x]) for x in smokes])))
axioms.append(formula_aggregator(tf.stack(
[Not(Smokes(g[x])) for x in g if x not in smokes])))
# Cancer: knowledge complete in g1 only
axioms.append(formula_aggregator(tf.stack(
[Cancer(g[x]) for x in cancer])))
axioms.append(formula_aggregator(tf.stack(
[Not(Cancer(g[x])) for x in g1 if x not in cancer])))
# friendship is anti-reflexive
axioms.append(Forall(p,Not(Friends([p,p])),p=5))
# friendship is symmetric
axioms.append(Forall((p,q),Implies(Friends([p,q]),Friends([q,p])),p=5))
# everyone has a friend
axioms.append(Forall(p,Exists(q,Friends([p,q]),p=p_exists)))
# smoking propagates among friends
axioms.append(Forall((p,q),Implies(And(Friends([p,q]),Smokes(p)),Smokes(q))))
# smoking causes cancer + not smoking causes not cancer
axioms.append(Forall(p,Implies(Smokes(p),Cancer(p))))
axioms.append(Forall(p,Implies(Not(Smokes(p)),Not(Cancer(p)))))
# computing sat_level
axioms = tf.stack([tf.squeeze(ax) for ax in axioms])
sat_level = formula_aggregator(axioms)
return sat_level, axioms
axioms(p_exists=tf.constant(1.))
###Output
_____no_output_____
###Markdown
Training
###Code
trainable_variables = \
Smokes.trainable_variables \
+ Friends.trainable_variables \
+ Cancer.trainable_variables \
+ list(g.values())
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
for epoch in range(2000):
if 0 <= epoch < 400:
p_exists = tf.constant(1.)
else:
p_exists = tf.constant(6.)
with tf.GradientTape() as tape:
loss_value = 1. - axioms(p_exists=p_exists)[0]
grads = tape.gradient(loss_value, trainable_variables)
optimizer.apply_gradients(zip(grads, trainable_variables))
if epoch%200 == 0:
print("Epoch %d: Sat Level %.3f"%(epoch, axioms(p_exists=p_exists)[0]))
print("Training finished at Epoch %d with Sat Level %.3f"%(epoch, axioms(p_exists=p_exists)[0]))
###Output
Epoch 0: Sat Level 0.570
Epoch 200: Sat Level 0.684
Epoch 400: Sat Level 0.764
Epoch 600: Sat Level 0.822
Epoch 800: Sat Level 0.838
Epoch 1000: Sat Level 0.846
Epoch 1200: Sat Level 0.863
Epoch 1400: Sat Level 0.875
Epoch 1600: Sat Level 0.876
Epoch 1800: Sat Level 0.876
Training finished at Epoch 1999 with Sat Level 0.876
###Markdown
Results
###Code
df_smokes_cancer_facts = pd.DataFrame(
np.array([[(x in smokes), (x in cancer) if x in g1 else math.nan] for x in g]),
columns=["Smokes","Cancer"],
index=list('abcdefghijklmn'))
df_friends_ah_facts = pd.DataFrame(
np.array([[((x,y) in friends) if x<y else math.nan for x in g1] for y in g1]),
index = list('abcdefgh'),
columns = list('abcdefgh'))
df_friends_in_facts = pd.DataFrame(
np.array([[((x,y) in friends) if x<y else math.nan for x in g2] for y in g2]),
index = list('ijklmn'),
columns = list('ijklmn'))
p = ltn.variable("p",tf.stack(list(g.values())))
q = ltn.variable("q",tf.stack(list(g.values())))
df_smokes_cancer = pd.DataFrame(
tf.stack([Smokes(p),Cancer(p)],axis=1).numpy(),
columns=["Smokes","Cancer"],
index=list('abcdefghijklmn'))
pred_friends = tf.squeeze(Friends([p,q]))
df_friends_ah = pd.DataFrame(
pred_friends[:8,:8].numpy(),
index=list('abcdefgh'),
columns=list('abcdefgh'))
df_friends_in = pd.DataFrame(
pred_friends[8:,8:].numpy(),
index=list('ijklmn'),
columns=list('ijklmn'))
plt.rcParams['font.size'] = 12
plt.rcParams['axes.linewidth'] = 1
###Output
_____no_output_____
###Markdown
Incomplete facts in the knowledge-base: axioms for smokers for individuals $a$ to $n$ and for cancer for individuals $a$ to $h$ (left), friendship relations in group 1 (middle), and friendship relations in group 2 (right).
###Code
plt.figure(figsize=(12,3))
plt.subplot(131)
plt_heatmap(df_smokes_cancer_facts, vmin=0, vmax=1)
plt.subplot(132)
plt.title("Friend(x,y) in Group 1")
plt_heatmap(df_friends_ah_facts, vmin=0, vmax=1)
plt.subplot(133)
plt.title("Friend(x,y) in Group 2")
plt_heatmap(df_friends_in_facts, vmin=0, vmax=1)
#plt.savefig('ex_smokes_givenfacts.pdf')
plt.show()
###Output
_____no_output_____
###Markdown
Querying all the truth-values using LTN after training: smokers and cancer (left), friendship relations (middle and right).
###Code
plt.figure(figsize=(12,3))
plt.subplot(131)
plt_heatmap(df_smokes_cancer, vmin=0, vmax=1)
plt.subplot(132)
plt.title("Friend(x,y) in Group 1")
plt_heatmap(df_friends_ah, vmin=0, vmax=1)
plt.subplot(133)
plt.title("Friend(x,y) in Group 2")
plt_heatmap(df_friends_in, vmin=0, vmax=1)
#plt.savefig('ex_smokes_inferfacts.pdf')
plt.show()
###Output
_____no_output_____
###Markdown
Satisfiability of the axioms.
###Code
print("forall p: ~Friends(p,p) : %.2f" % Forall(p,Not(Friends([p,p]))))
print("forall p,q: Friends(p,q) -> Friends(q,p) : %.2f" % Forall((p,q),Implies(Friends([p,q]),Friends([q,p]))))
print("forall p: exists q: Friends(p,q) : %.2f" % Forall(p,Exists(q,Friends([p,q]))))
print("forall p,q: Friends(p,q) -> (Smokes(p)->Smokes(q)) : %.2f" % Forall((p,q),Implies(Friends([p,q]),Implies(Smokes(p),Smokes(q)))))
print("forall p: Smokes(p) -> Cancer(p) : %.2f" % Forall(p,Implies(Smokes(p),Cancer(p))))
###Output
forall p: ~Friends(p,p) : 1.00
forall p,q: Friends(p,q) -> Friends(q,p) : 0.99
forall p: exists q: Friends(p,q) : 0.78
forall p,q: Friends(p,q) -> (Smokes(p)->Smokes(q)) : 0.79
forall p: Smokes(p) -> Cancer(p) : 0.76
###Markdown
We can query unknown formulas.
###Code
print("forall p: Cancer(p) -> Smokes(p): %.2f" % Forall(p,Implies(Cancer(p),Smokes(p)),p=5))
print("forall p,q: (Cancer(p) or Cancer(q)) -> Friends(p,q): %.2f" % Forall((p,q), Implies(Or(Cancer(p),Cancer(q)),Friends([p,q])),p=5))
###Output
forall p: Cancer(p) -> Smokes(p): 0.96
forall p,q: (Cancer(p) or Cancer(q)) -> Friends(p,q): 0.21
###Markdown
Visualize the embeddings
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
x = [v.numpy() for v in g.values()]
x_norm = StandardScaler().fit_transform(x)
pca = PCA(n_components=2)
pca_transformed = pca.fit_transform(x_norm)
var_x = ltn.variable("x",x)
var_x1 = ltn.variable("x1",x)
var_x2 = ltn.variable("x2",x)
plt.figure(figsize=(8,5))
plt.subplot(221)
plt.scatter(pca_transformed[:len(g1.values()),0],pca_transformed[:len(g1.values()),1],label="Group 1")
plt.scatter(pca_transformed[len(g1.values()):,0],pca_transformed[len(g1.values()):,1],label="Group 2")
names = list(g.keys())
for i in range(len(names)):
plt.annotate(names[i].upper(),pca_transformed[i])
plt.title("Embeddings")
plt.legend()
plt.subplot(222)
plt.scatter(pca_transformed[:,0],pca_transformed[:,1],c=Smokes(var_x))
plt.title("Smokes")
plt.colorbar()
plt.subplot(224)
plt.scatter(pca_transformed[:,0],pca_transformed[:,1],c=Cancer(var_x))
plt.title("Cancer")
plt.colorbar()
plt.subplot(223)
plt.scatter(pca_transformed[:len(g1.values()),0],pca_transformed[:len(g1.values()),1],label="Group 1")
plt.scatter(pca_transformed[len(g1.values()):,0],pca_transformed[len(g1.values()):,1],label="Group 2")
res = Friends([var_x1,var_x2]).numpy()
for i1 in range(len(x)):
for i2 in range(i1,len(x)):
if (names[i1] in g1 and names[i2] in g2) \
or (names[i1] in g2 and names[i2] in g1):
continue
plt.plot(
[pca_transformed[i1,0],pca_transformed[i2,0]],
[pca_transformed[i1,1],pca_transformed[i2,1]],
alpha=res[i1,i2],c="black")
plt.title("Friendships per group")
plt.tight_layout()
#plt.savefig("ex_smokes_embeddings.pdf")
###Output
_____no_output_____
###Markdown
Smokes Friends CancerA classic example of Statistical Relational Learning is the smokers-friends-cancer example introduced in the [Markov Logic Networks paper (2006)](https://homes.cs.washington.edu/~pedrod/papers/mlj05.pdf).There are 14 people divided into two groups $\{a,b,\dots,h\}$ and $\{i,j,\dots,n\}$. - Within each group, there is complete knowledge about smoking habits. - In the first group, there is complete knowledge about who has and who does not have cancer. - Knowledge about the friendship relation is complete within each group only if symmetry is assumed, that is, $\forall x,y \ (friends(x,y) \rightarrow friends(y,x))$. Otherwise, knowledge about friendship is incomplete in that it may be known that e.g. $a$ is a friend of $b$, and it may not be known whether $b$ is a friend of $a$.- Finally, there is general knowledge about smoking, friendship and cancer, namely that smoking causes cancer, friendship is normally symmetric and anti-reflexive, everyone has a friend, and smoking propagates (actively or passively) among friends. One can formulate this task easily in LTN as follows.
###Code
import logging; logging.basicConfig(level=logging.INFO)
import math
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import logictensornetworks as ltn
np.set_printoptions(suppress=True)
pd.options.display.max_rows=999
pd.options.display.max_columns=999
pd.set_option('display.width',1000)
pd.options.display.float_format = '{:,.2f}'.format
def plt_heatmap(df, vmin=None, vmax=None):
plt.pcolor(df, vmin=vmin, vmax=vmax)
plt.yticks(np.arange(0.5,len(df.index),1),df.index)
plt.xticks(np.arange(0.5,len(df.columns),1),df.columns)
plt.colorbar()
pd.set_option('precision',2)
###Output
_____no_output_____
###Markdown
Language- LTN constants are used to denote the individuals. Each is grounded as a trainable embedding.- The `Smokes`, `Friends`, `Cancer` predicates are grounded as simple MLPs.- All the rules in the preamble are formulated in the knowledge base.
###Code
embedding_size = 5
g1 = {l:ltn.Constant(np.random.uniform(low=0.0,high=1.0,size=embedding_size),trainable=True) for l in 'abcdefgh'}
g2 = {l:ltn.Constant(np.random.uniform(low=0.0,high=1.0,size=embedding_size),trainable=True) for l in 'ijklmn'}
g = {**g1,**g2}
Smokes = ltn.Predicate.MLP([embedding_size],hidden_layer_sizes=(8,8))
Friends = ltn.Predicate.MLP([embedding_size,embedding_size],hidden_layer_sizes=(8,8))
Cancer = ltn.Predicate.MLP([embedding_size],hidden_layer_sizes=(8,8))
friends = [('a','b'),('a','e'),('a','f'),('a','g'),('b','c'),('c','d'),('e','f'),('g','h'),
('i','j'),('j','m'),('k','l'),('m','n')]
smokes = ['a','e','f','g','j','n']
cancer = ['a','e']
Not = ltn.Wrapper_Connective(ltn.fuzzy_ops.Not_Std())
And = ltn.Wrapper_Connective(ltn.fuzzy_ops.And_Prod())
Or = ltn.Wrapper_Connective(ltn.fuzzy_ops.Or_ProbSum())
Implies = ltn.Wrapper_Connective(ltn.fuzzy_ops.Implies_Reichenbach())
Forall = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMeanError(p=2),semantics="forall")
Exists = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMean(p=6),semantics="exists")
formula_aggregator = ltn.Wrapper_Formula_Aggregator(ltn.fuzzy_ops.Aggreg_pMeanError())
###Output
_____no_output_____
###Markdown
Notice that the knowledge-base is not satisfiable in the strict logical sense of the word.For instance, the individual $f$ is said to smoke but not to have cancer, which is inconsistent with the rule $\forall x \ (S(x) \rightarrow C(x))$.Hence, it is important to adopt a probabilistic approach as done with MLN or a many-valued fuzzy logic interpretation as done with LTN.
###Code
# defining the theory
@tf.function
def axioms(p_exists):
"""
NOTE: we update the embeddings at each step
-> we should re-compute the variables.
"""
p = ltn.Variable.from_constants("p",list(g.values()))
q = ltn.Variable.from_constants("q",list(g.values()))
axioms = []
# Friends: knowledge incomplete in that
# Friend(x,y) with x<y may be known
# but Friend(y,x) may not be known
axioms.append(formula_aggregator(
[Friends([g[x],g[y]]) for (x,y) in friends]))
axioms.append(formula_aggregator(
[Not(Friends([g[x],g[y]])) for x in g1 for y in g1 if (x,y) not in friends and x<y ]+\
[Not(Friends([g[x],g[y]])) for x in g2 for y in g2 if (x,y) not in friends and x<y ]))
# Smokes: knowledge complete
axioms.append(formula_aggregator(
[Smokes(g[x]) for x in smokes]))
axioms.append(formula_aggregator(
[Not(Smokes(g[x])) for x in g if x not in smokes]))
# Cancer: knowledge complete in g1 only
axioms.append(formula_aggregator(
[Cancer(g[x]) for x in cancer]))
axioms.append(formula_aggregator(
[Not(Cancer(g[x])) for x in g1 if x not in cancer]))
# friendship is anti-reflexive
axioms.append(Forall(p,Not(Friends([p,p])),p=5))
# friendship is symmetric
axioms.append(Forall((p,q),Implies(Friends([p,q]),Friends([q,p])),p=5))
# everyone has a friend
axioms.append(Forall(p,Exists(q,Friends([p,q]),p=p_exists)))
# smoking propagates among friends
axioms.append(Forall((p,q),Implies(And(Friends([p,q]),Smokes(p)),Smokes(q))))
# smoking causes cancer + not smoking causes not cancer
axioms.append(Forall(p,Implies(Smokes(p),Cancer(p))))
axioms.append(Forall(p,Implies(Not(Smokes(p)),Not(Cancer(p)))))
# computing sat_level
sat_level = formula_aggregator(axioms).tensor
return sat_level
axioms(p_exists=tf.constant(1.))
###Output
2021-08-31 03:54:24.143335: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
###Markdown
Training
###Code
trainable_variables = \
Smokes.trainable_variables \
+ Friends.trainable_variables \
+ Cancer.trainable_variables \
+ ltn.as_tensors(list(g.values()))
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
for epoch in range(2000):
if 0 <= epoch < 400:
p_exists = tf.constant(1.)
else:
p_exists = tf.constant(6.)
with tf.GradientTape() as tape:
loss_value = 1. - axioms(p_exists=p_exists)
grads = tape.gradient(loss_value, trainable_variables)
optimizer.apply_gradients(zip(grads, trainable_variables))
if epoch%200 == 0:
print("Epoch %d: Sat Level %.3f"%(epoch, axioms(p_exists=p_exists)))
print("Training finished at Epoch %d with Sat Level %.3f"%(epoch, axioms(p_exists=p_exists)))
###Output
Epoch 0: Sat Level 0.572
Epoch 200: Sat Level 0.698
Epoch 400: Sat Level 0.784
Epoch 600: Sat Level 0.829
Epoch 800: Sat Level 0.844
Epoch 1000: Sat Level 0.851
Epoch 1200: Sat Level 0.856
Epoch 1400: Sat Level 0.856
Epoch 1600: Sat Level 0.856
Epoch 1800: Sat Level 0.857
Training finished at Epoch 1999 with Sat Level 0.857
###Markdown
Results
###Code
df_smokes_cancer_facts = pd.DataFrame(
np.array([[(x in smokes), (x in cancer) if x in g1 else math.nan] for x in g]),
columns=["Smokes","Cancer"],
index=list('abcdefghijklmn'))
df_friends_ah_facts = pd.DataFrame(
np.array([[((x,y) in friends) if x<y else math.nan for x in g1] for y in g1]),
index = list('abcdefgh'),
columns = list('abcdefgh'))
df_friends_in_facts = pd.DataFrame(
np.array([[((x,y) in friends) if x<y else math.nan for x in g2] for y in g2]),
index = list('ijklmn'),
columns = list('ijklmn'))
p = ltn.Variable.from_constants("p",list(g.values()))
q = ltn.Variable.from_constants("q",list(g.values()))
df_smokes_cancer = pd.DataFrame(
tf.stack([Smokes(p).tensor,Cancer(p).tensor],axis=1).numpy(),
columns=["Smokes","Cancer"],
index=list('abcdefghijklmn'))
pred_friends = Friends([p,q]).tensor
df_friends_ah = pd.DataFrame(
pred_friends[:8,:8].numpy(),
index=list('abcdefgh'),
columns=list('abcdefgh'))
df_friends_in = pd.DataFrame(
pred_friends[8:,8:].numpy(),
index=list('ijklmn'),
columns=list('ijklmn'))
plt.rcParams['font.size'] = 12
plt.rcParams['axes.linewidth'] = 1
###Output
_____no_output_____
###Markdown
Incomplete facts in the knowledge-base: axioms for smokers for individuals $a$ to $n$ and for cancer for individuals $a$ to $h$ (left), friendship relations in group 1 (middle), and friendship relations in group 2 (right).
###Code
plt.figure(figsize=(12,3))
plt.subplot(131)
plt_heatmap(df_smokes_cancer_facts, vmin=0, vmax=1)
plt.subplot(132)
plt.title("Friend(x,y) in Group 1")
plt_heatmap(df_friends_ah_facts, vmin=0, vmax=1)
plt.subplot(133)
plt.title("Friend(x,y) in Group 2")
plt_heatmap(df_friends_in_facts, vmin=0, vmax=1)
#plt.savefig('ex_smokes_givenfacts.pdf')
plt.show()
###Output
_____no_output_____
###Markdown
Querying all the truth-values using LTN after training: smokers and cancer (left), friendship relations (middle and right).
###Code
plt.figure(figsize=(12,3))
plt.subplot(131)
plt_heatmap(df_smokes_cancer, vmin=0, vmax=1)
plt.subplot(132)
plt.title("Friend(x,y) in Group 1")
plt_heatmap(df_friends_ah, vmin=0, vmax=1)
plt.subplot(133)
plt.title("Friend(x,y) in Group 2")
plt_heatmap(df_friends_in, vmin=0, vmax=1)
#plt.savefig('ex_smokes_inferfacts.pdf')
plt.show()
###Output
_____no_output_____
###Markdown
Satisfiability of the axioms.
###Code
print("forall p: ~Friends(p,p) : %.2f" % Forall(p,Not(Friends([p,p]))).tensor)
print("forall p,q: Friends(p,q) -> Friends(q,p) : %.2f" % Forall((p,q),Implies(Friends([p,q]),Friends([q,p]))).tensor)
print("forall p: exists q: Friends(p,q) : %.2f" % Forall(p,Exists(q,Friends([p,q]))).tensor)
print("forall p,q: Friends(p,q) -> (Smokes(p)->Smokes(q)) : %.2f" % Forall((p,q),Implies(Friends([p,q]),Implies(Smokes(p),Smokes(q)))).tensor)
print("forall p: Smokes(p) -> Cancer(p) : %.2f" % Forall(p,Implies(Smokes(p),Cancer(p))).tensor)
###Output
forall p: ~Friends(p,p) : 1.00
forall p,q: Friends(p,q) -> Friends(q,p) : 0.95
forall p: exists q: Friends(p,q) : 0.80
forall p,q: Friends(p,q) -> (Smokes(p)->Smokes(q)) : 0.78
forall p: Smokes(p) -> Cancer(p) : 0.76
###Markdown
We can query unknown formulas.
###Code
print("forall p: Cancer(p) -> Smokes(p): %.2f" % Forall(p,Implies(Cancer(p),Smokes(p)),p=5).tensor)
print("forall p,q: (Cancer(p) or Cancer(q)) -> Friends(p,q): %.2f" % Forall((p,q), Implies(Or(Cancer(p),Cancer(q)),Friends([p,q])),p=5).tensor)
###Output
forall p: Cancer(p) -> Smokes(p): 0.96
forall p,q: (Cancer(p) or Cancer(q)) -> Friends(p,q): 0.22
###Markdown
Visualize the embeddings
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
x = [c.tensor.numpy() for c in g.values()]
x_norm = StandardScaler().fit_transform(x)
pca = PCA(n_components=2)
pca_transformed = pca.fit_transform(x_norm)
var_x = ltn.Variable("x",x)
var_x1 = ltn.Variable("x1",x)
var_x2 = ltn.Variable("x2",x)
plt.figure(figsize=(8,5))
plt.subplot(221)
plt.scatter(pca_transformed[:len(g1.values()),0],pca_transformed[:len(g1.values()),1],label="Group 1")
plt.scatter(pca_transformed[len(g1.values()):,0],pca_transformed[len(g1.values()):,1],label="Group 2")
names = list(g.keys())
for i in range(len(names)):
plt.annotate(names[i].upper(),pca_transformed[i])
plt.title("Embeddings")
plt.legend()
plt.subplot(222)
plt.scatter(pca_transformed[:,0],pca_transformed[:,1],c=Smokes(var_x).tensor)
plt.title("Smokes")
plt.colorbar()
plt.subplot(224)
plt.scatter(pca_transformed[:,0],pca_transformed[:,1],c=Cancer(var_x).tensor)
plt.title("Cancer")
plt.colorbar()
plt.subplot(223)
plt.scatter(pca_transformed[:len(g1.values()),0],pca_transformed[:len(g1.values()),1],label="Group 1")
plt.scatter(pca_transformed[len(g1.values()):,0],pca_transformed[len(g1.values()):,1],label="Group 2")
res = Friends([var_x1,var_x2]).tensor.numpy()
for i1 in range(len(x)):
for i2 in range(i1,len(x)):
if (names[i1] in g1 and names[i2] in g2) \
or (names[i1] in g2 and names[i2] in g1):
continue
plt.plot(
[pca_transformed[i1,0],pca_transformed[i2,0]],
[pca_transformed[i1,1],pca_transformed[i2,1]],
alpha=res[i1,i2],c="black")
plt.title("Friendships per group")
plt.tight_layout()
#plt.savefig("ex_smokes_embeddings.pdf")
###Output
_____no_output_____ |
Breast Cancer/Cancer prediction.ipynb | ###Markdown
0 - Malignant, 1 - Benign Train and Test Split
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y)
print(Y.shape, Y_train.shape, Y_test.shape)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.1)
# test_size --> to specify the percentage of test data needed
print(Y.shape, Y_train.shape, Y_test.shape)
print(Y.mean(), Y_train.mean(), Y_test.mean())
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.1, stratify=Y)
# stratify --> for correct distribution of data as of the original data
print(Y.mean(), Y_train.mean(), Y_test.mean())
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.1, stratify=Y, random_state=1)
# random_state --> specific split of data. each value of random_state splits the data differently
print(X_train.mean(), X_test.mean(), X.mean())
print(X_train)
###Output
[[1.490e+01 2.253e+01 1.021e+02 ... 2.475e-01 2.866e-01 1.155e-01]
[1.205e+01 1.463e+01 7.804e+01 ... 6.548e-02 2.747e-01 8.301e-02]
[1.311e+01 1.556e+01 8.721e+01 ... 1.986e-01 3.147e-01 1.405e-01]
...
[1.258e+01 1.840e+01 7.983e+01 ... 8.772e-03 2.505e-01 6.431e-02]
[1.349e+01 2.230e+01 8.691e+01 ... 1.282e-01 2.871e-01 6.917e-02]
[1.919e+01 1.594e+01 1.263e+02 ... 1.777e-01 2.443e-01 6.251e-02]]
###Markdown
Logistic Regression
###Code
# import Logistic Regression from sklearn
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression() # loading the logistic regression model to the variable "classifier"
classifier.fit?
# training the model on training data
classifier.fit(X_train, Y_train)
###Output
C:\Users\ASUS\anaconda3\lib\site-packages\sklearn\linear_model\_logistic.py:764: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
###Markdown
Evaluation of the model
###Code
# import accuracy_score
from sklearn.metrics import accuracy_score
prediction_on_training_data = classifier.predict(X_train)
accuracy_on_training_data = accuracy_score(Y_train, prediction_on_training_data)
print('Accuracy on training data : ', accuracy_on_training_data)
# prediction on test_data
prediction_on_test_data = classifier.predict(X_test)
accuracy_on_test_data = accuracy_score(Y_test, prediction_on_test_data)
print('Accuracy on test data : ', accuracy_on_test_data)
input_data = (13.54,14.36,87.46,566.3,0.09779,0.08129,0.06664,0.04781,0.1885,0.05766,0.2699,0.7886,2.058,23.56,0.008462,0.0146,0.02387,0.01315,0.0198,0.0023,15.11,19.26,99.7,711.2,0.144,0.1773,0.239,0.1288,0.2977,0.07259)
# change the input_data to numpy_array to make prediction
input_data_as_numpy_array = np.asarray(input_data)
print(input_data)
# reshape the array as we are predicting the output for one instance
input_data_reshaped = input_data_as_numpy_array.reshape(1,-1)
#prediction
prediction = classifier.predict(input_data_reshaped)
print(prediction) # returns a list with element [0] if malignant; returns a list with element [1] if benign.
if (prediction[0]==0):
print('The breast Cancer is Malignant')
else:
print('The breast cancer is Benign')
###Output
(13.54, 14.36, 87.46, 566.3, 0.09779, 0.08129, 0.06664, 0.04781, 0.1885, 0.05766, 0.2699, 0.7886, 2.058, 23.56, 0.008462, 0.0146, 0.02387, 0.01315, 0.0198, 0.0023, 15.11, 19.26, 99.7, 711.2, 0.144, 0.1773, 0.239, 0.1288, 0.2977, 0.07259)
[1]
The breast cancer is Benign
|
docs/annotationsets.ipynb | ###Markdown
Annotation Sets. See also: [Python Documentation](pythondoc/gatenlp/annotation_set.html). Annotation sets group annotations that belong together in some way. How to group annotations is entirely up to the user. Annotation sets are identified by names and there can be as many different sets as needed. The annotation set with the empty string as name is called the "default annotation set". There are no strict limitations on annotation set names, but it is recommended that, apart from the default set, all names follow Java or Python name conventions. Annotation sets are represented by the `AnnotationSet` class and created by fetching a set from the document.
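For orientation, here is a minimal, self-contained sketch of fetching both the default set and a named set; the variable names are illustrative, and the no-argument `annset()` call for the default set is inferred from the API description later on this page, so check it against the API docs:

```python
from gatenlp import Document

tmp_doc = Document("an example text")
default_set = tmp_doc.annset()       # empty name: the "default annotation set"
named_set = tmp_doc.annset("MySet")  # a named set, created on first access
```

The next cell creates the actual document and annotation set used throughout this page.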
###Code
from gatenlp import Document
doc = Document("some document with some text so we can add annotations.")
annset = doc.annset("MySet")
###Output
_____no_output_____
###Markdown
Once an annotation set has been created, it can be used to create and add as many annotations as needed to it:
###Code
ann_tok1 = annset.add(0,4,"Token")
ann_tok2 = annset.add(5,13,"Token")
ann_all = annset.add(0,13,"Document")
ann_vowel1 = annset.add(1,2,"Vowel")
ann_vowel2 = annset.add(3,4,"Vowel")
###Output
_____no_output_____
###Markdown
Annotations can overlap arbitrarily and there are methods to check the overlapping and location relative to each other through the [Annotation](annotations) methods. The AnnotationSet instance has methods to retrieve annotations which relate to an annotation span or offset span in some specific way, e.g. are contained in the annotation span, overlap the annotation span or contain the annotation span:
###Code
anns_intok1 = annset.within(ann_tok1)
print(anns_intok1)
# AnnotationSet([
# Annotation(1,2,Vowel,id=3,features=None),
# Annotation(3,4,Vowel,id=4,features=None)])
anns_intok1 = annset.within(0,4)
print(anns_intok1)
# AnnotationSet([
# Annotation(0,4,Token,id=0,features=None),
# Annotation(1,2,Vowel,id=3,features=None),
# Annotation(3,4,Vowel,id=4,features=None)])
###Output
AnnotationSet([Annotation(1,2,Vowel,features=Features({}),id=3), Annotation(3,4,Vowel,features=Features({}),id=4)])
AnnotationSet([Annotation(0,4,Token,features=Features({}),id=0), Annotation(1,2,Vowel,features=Features({}),id=3), Annotation(3,4,Vowel,features=Features({}),id=4)])
###Markdown
In the example above, the annotation `ann_tok1`, which has offsets (0,4), is not included in the result of `annset.within(ann_tok1)`: if an annotation is passed to any of these functions, by default that same annotation is not included in the result annotation set. This behaviour can be changed by using `include_self=True`. Result Annotation Sets. There are three ways to obtain an annotation set in `gatenlp`: * From the document, using `annset()` or `annset("name")`: this is how to get a handle to an annotation set that is stored with the document and known by some name (which can be the empty string). Whenever annotations are added to or deleted from such a set, this modifies what is stored with the document. Such sets are called "attached". * As the result of many of the AnnotationSet methods, e.g. `annset.covering(span)`: such annotation sets are by default immutable: they do not allow adding or deleting annotations, but they can be changed to be mutable. Once mutable, annotations can get added or deleted, but none of these changes are visible in the document: the set returned from the method is a "*detached*" set. * With the `AnnotationSet` constructor: such a set is empty and "detached". A "detached" annotation set returned from an AnnotationSet method contains annotations from the original attached set, and while the list of annotations is separate, the annotations themselves are identical to the ones in the original attached set. So if you change features of those annotations, they will modify the annotations in the document. In order to get a completely independent copy of all the annotations from a result set (which is a detached set), the method `clone_anns()` can be used. After this, all the annotations are deep copies of the originals and can be modified without affecting the annotations in the original attached set. In order to get a completely independent copy of all the annotations from an original attached set, the method `deepcopy()` can be used. See examples below under "Accessing Annotations by Type". Indexing by offset and type. AnnotationSet objects initially just contain the annotations, which are stored in some arbitrary order internally. But as soon as any method is used that has to check how the start or end offsets compare between annotations, or that requires processing annotations in offset order, an index is created internally for accessing annotations in order of start or end offset. Similarly, any method that retrieves annotations by type creates an index to speed up retrieval. Index creation is done automatically as needed. Index creation can require a lot of time if it is done for a large corpus of documents. Iterating over Annotations. Any AnnotationSet can be iterated over, as shown in the cell after the short sketch below:
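To make the attached/detached distinction concrete, here is a small sketch using the `annset` and `ann_tok1` created above. It relies only on calls mentioned in the text (`covering()`, the `include_self` keyword, `clone_anns()`); their exact signatures and the in-place behaviour of `clone_anns()` should be double-checked against the API documentation:

```python
# a detached, immutable result set: annotations whose span covers the offset span (1,2)
covering_anns = annset.covering(1, 2)

# include the annotation itself in the result of a span query
within_incl = annset.within(ann_tok1, include_self=True)

# the detached set still shares its annotation objects with the attached set;
# clone_anns() replaces them with independent deep copies
covering_anns.clone_anns()
```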
###Code
annset.add(20,25,"X")
annset.add(20,21,"X")
annset.add(20,27,"X")
for ann in annset:
print(ann)
###Output
Annotation(0,4,Token,features=Features({}),id=0)
Annotation(0,13,Document,features=Features({}),id=2)
Annotation(1,2,Vowel,features=Features({}),id=3)
Annotation(3,4,Vowel,features=Features({}),id=4)
Annotation(5,13,Token,features=Features({}),id=1)
Annotation(20,25,X,features=Features({}),id=5)
Annotation(20,21,X,features=Features({}),id=6)
Annotation(20,27,X,features=Features({}),id=7)
###Markdown
The default sorting order of annotations is by start offset, then by annotation id. So the end offset is not involved in the order, but annotations at the same offset are ordered by annotation id. Annotation ids are always incremented when annotations get added. The default iterator first needs to create the index for sorting annotations in offset order. If this is not relevant, it is possible to avoid creating the index by using `fast_iter()`, which iterates over the annotations in the order they were added to the set.
###Code
for ann in annset.fast_iter():
print(ann)
###Output
Annotation(0,4,Token,features=Features({}),id=0)
Annotation(5,13,Token,features=Features({}),id=1)
Annotation(0,13,Document,features=Features({}),id=2)
Annotation(1,2,Vowel,features=Features({}),id=3)
Annotation(3,4,Vowel,features=Features({}),id=4)
Annotation(20,25,X,features=Features({}),id=5)
Annotation(20,21,X,features=Features({}),id=6)
Annotation(20,27,X,features=Features({}),id=7)
###Markdown
Annotations can be iterated over in reverse offset order using `reverse_iter()`:
###Code
for ann in annset.reverse_iter():
print(ann)
###Output
Annotation(20,27,X,features=Features({}),id=7)
Annotation(20,21,X,features=Features({}),id=6)
Annotation(20,25,X,features=Features({}),id=5)
Annotation(5,13,Token,features=Features({}),id=1)
Annotation(3,4,Vowel,features=Features({}),id=4)
Annotation(1,2,Vowel,features=Features({}),id=3)
Annotation(0,13,Document,features=Features({}),id=2)
Annotation(0,4,Token,features=Features({}),id=0)
###Markdown
Accessing Annotations by Type. Each annotation has an annotation type, which can be an arbitrary string, but using something that follows Java or Python naming conventions is recommended. To retrieve all annotations with some specific type, use `with_type()`:
###Code
anns_vowel = annset.with_type("Vowel")
print(anns_vowel)
###Output
AnnotationSet([Annotation(1,2,Vowel,features=Features({}),id=3), Annotation(3,4,Vowel,features=Features({}),id=4)])
###Markdown
The result set is a *detached* and *immutable* annotation set:
###Code
print(anns_vowel.immutable)
print(anns_vowel.isdetached())
try:
anns_vowel.add(2,3,"SomeNew")
except:
print("Cannot add a new annotation")
###Output
True
True
Cannot add a new annotation
###Markdown
After making the result set mutable, we can add annotations:
###Code
anns_vowel.immutable = False
anns_vowel.add(2,3,"SomeNew")
print(anns_vowel)
###Output
AnnotationSet([Annotation(1,2,Vowel,features=Features({}),id=3), Annotation(2,3,SomeNew,features=Features({}),id=8), Annotation(3,4,Vowel,features=Features({}),id=4)])
###Markdown
But since the result set is detached, the added annotation does not become part of the original annotation set stored with the document:
###Code
print(annset)
###Output
AnnotationSet([Annotation(0,4,Token,features=Features({}),id=0), Annotation(0,13,Document,features=Features({}),id=2), Annotation(1,2,Vowel,features=Features({}),id=3), Annotation(3,4,Vowel,features=Features({}),id=4), Annotation(5,13,Token,features=Features({}),id=1), Annotation(20,25,X,features=Features({}),id=5), Annotation(20,21,X,features=Features({}),id=6), Annotation(20,27,X,features=Features({}),id=7)])
###Markdown
In order to add annotations to the set stored with the document, that set needs to be used directly, not a result set obtained from it. Note that if an annotation is added to the original set, this does not affect any result set already obtained:
###Code
annset.add(2,3,"SomeOtherNew")
print(annset)
###Output
AnnotationSet([Annotation(0,4,Token,features=Features({}),id=0), Annotation(0,13,Document,features=Features({}),id=2), Annotation(1,2,Vowel,features=Features({}),id=3), Annotation(2,3,SomeOtherNew,features=Features({}),id=8), Annotation(3,4,Vowel,features=Features({}),id=4), Annotation(5,13,Token,features=Features({}),id=1), Annotation(20,25,X,features=Features({}),id=5), Annotation(20,21,X,features=Features({}),id=6), Annotation(20,27,X,features=Features({}),id=7)])
###Markdown
Overview of the AnnotationSet API
###Code
# get the annotation set name
print(annset.name)
# Get the number of annotations in the set: these two are equivalent
print(annset.size, len(annset))
# Get the document the annotation set belongs to:
print(annset.document)
# But a detached set does not have a document it belongs to:
print(anns_vowel.document)
# Get the start and end offsets for the whole annotation set
print(annset.start, annset.end)
# or return a tuple directly
print(annset.span)
# get an annotation by annotation id
a1 = anns_vowel.get(8)
print(a1)
# add an annotation that looks exactly as a given annotation:
# the given annotation itself is not becoming a member of the set
# It gets a new annotation id and a new identity. However the features are shared.
# An annotation can be added multiple times this way:
a2 = annset.add_ann(a1)
print(a2)
a3 = annset.add_ann(a1)
print(a3)
# Remove an annotation from the set.
# This can be done by annotation id:
annset.remove(a3.id)
# or by passing the annotation to remove
annset.remove(a2)
print(annset)
# Check if an annotation is in the set
print("ann_tok1 in the set:", ann_tok1 in annset)
tmpid = ann_tok1.id
print("ann_tok1 in the set, by id:", tmpid in annset)
# Get all annotation type names in an annotation set
print("Types:", annset.type_names)
###Output
Types: dict_keys(['Token', 'Document', 'Vowel', 'X', 'SomeOtherNew', 'SomeNew'])
|
analysis/analysis-within-city-casestudies.ipynb | ###Markdown
Visualization and data analysis of output indicators. This notebook presents data visualization and analysis for output indicators from the Global indicator project. - Uses 4 sample cities, plots different indicators, and compares and interprets the within-city variations and how they may or may not represent the real-world situation. **Note: Refer to the [workflow documentation](https://github.com/gboeing/global-indicators/blob/master/documentation/workflow.md) for the indicator tables and descriptions**
###Code
import geopandas as gpd
import json
import os
import matplotlib.pyplot as plt
import osmnx as ox
%matplotlib inline
image_path = './images'
dpi = 300
process_folder = '../process'
process_config_path = '../process/configuration/cities.json'
with open(process_config_path) as json_file:
config = json.load(json_file)
output_folder = os.path.join(process_folder, config['folder'])
input_folder = os.path.join(process_folder, config['input_folder'])
# the path of "global_indicators_hex_250m.gpkg"
gpkgOutput_hex250 = os.path.join(output_folder, config['output_hex_250m'])
# create the path of "global_indicators_city.gpkg"
gpkgOutput_cities = os.path.join(output_folder, config['global_indicators_city'])
###Output
_____no_output_____
###Markdown
Plot Example Cities
###Code
scheme = 'NaturalBreaks'
k = 5
cmap = 'plasma'
edgecolor = 'none'
city_color = 'none'
city_edge = 'w'
city_edge_lw = 0.2
title_y = 1.02
title_fontsize = 16
title_weight = 'bold'
fontcolor = 'w'
params = {"text.color" : fontcolor,
"ytick.color" : fontcolor,
"xtick.color" : fontcolor}
plt.rcParams.update(params)
def plot_within(gpkgOutput_hex250, gpkgOutput_cities, filepath, figsize=(8, 8), facecolor="k", nrows=2, ncols=2, projected=True):
cols=['all_cities_walkability', 'all_cities_z_nh_population_density', 'all_cities_z_nh_intersection_density',
'all_cities_z_daily_living']
fig, axes = plt.subplots(figsize=figsize, facecolor=facecolor, nrows=nrows, ncols=ncols,)
for ax, col in zip(axes.flatten(), cols):
# the path of "global_indicators_hex_250m.gpkg"
gpkgOutput_hex250 = os.path.join(output_folder, config['output_hex_250m'])
# create the path of "global_indicators_city.gpkg"
gpkgOutput_cities = os.path.join(output_folder, config['global_indicators_city'])
# from filepaths, extract city-level data
hex250 = gpd.read_file(gpkgOutput_hex250, layer=city)
city_bound = gpd.read_file(gpkgOutput_cities, layer=city)
# plot hexplot and city boundaries
_ = hex250.plot(ax=ax, column=col, scheme=scheme, k=k, cmap=cmap, edgecolor=edgecolor,
label=city, legend=False, legend_kwds=None)
_ = city_bound.plot(ax=ax, color=city_color, edgecolor=city_edge, linewidth=city_edge_lw)
# add titles
fig.suptitle(f"{city} Within-city Indicators", color=fontcolor, fontsize=20, weight='bold')
ax.set_title(col, color=fontcolor, fontsize=10)
ax.set_axis_off()
# save to disk
save_path = os.path.join(image_path, f"{city}-within-maps.png")
fig.savefig(save_path, dpi=dpi, bbox_inches='tight', facecolor=fig.get_facecolor())
plt.close()
print(ox.ts(), f'figures saved to disk at "{filepath}"')
return fig, axes
cities = ["phoenix", "bern", "vic", "hong_kong"]
for city in cities:
print(ox.ts(), f"begin mapping {city}")
fp = image_path.format(city=city)
fig, axes = plot_within(gpkgOutput_hex250, gpkgOutput_cities, fp)
print(ox.ts(), f'all done, saved figures"')
###Output
2020-09-12 06:34:50 begin mapping phoenix
2020-09-12 06:35:08 figures saved to disk at "./images"
2020-09-12 06:35:22 figures saved to disk at "./images"
2020-09-12 06:35:37 figures saved to disk at "./images"
2020-09-12 06:35:50 figures saved to disk at "./images"
2020-09-12 06:35:50 begin mapping bern
2020-09-12 06:35:53 figures saved to disk at "./images"
2020-09-12 06:35:55 figures saved to disk at "./images"
2020-09-12 06:35:56 figures saved to disk at "./images"
2020-09-12 06:35:57 figures saved to disk at "./images"
2020-09-12 06:35:57 begin mapping vic
2020-09-12 06:35:59 figures saved to disk at "./images"
2020-09-12 06:36:01 figures saved to disk at "./images"
2020-09-12 06:36:02 figures saved to disk at "./images"
2020-09-12 06:36:03 figures saved to disk at "./images"
2020-09-12 06:36:03 begin mapping hong_kong
2020-09-12 06:36:10 figures saved to disk at "./images"
2020-09-12 06:36:15 figures saved to disk at "./images"
2020-09-12 06:36:22 figures saved to disk at "./images"
2020-09-12 06:36:28 figures saved to disk at "./images"
2020-09-12 06:36:28 all done, saved figures"
###Markdown
Visualization and data analysis of output indicators. This notebook presents data visualization and analysis for output indicators from the Global indicator project. - Uses 4 sample cities, plots different indicators, and compares and interprets the within-city variations and how they may or may not represent the real-world situation. **Note: Refer to the [workflow documentation](https://github.com/gboeing/global-indicators/blob/master/documentation/workflow.md) for the indicator tables and descriptions**
###Code
import geopandas as gpd
import json
import os
import matplotlib.pyplot as plt
import osmnx as ox
%matplotlib inline
image_path = './images'
dpi = 300
process_folder = '../process'
process_config_path = '../process/configuration/cities.json'
with open(process_config_path) as json_file:
config = json.load(json_file)
output_folder = os.path.join(process_folder, config['folder'])
input_folder = os.path.join(process_folder, config['input_folder'])
# the path of "global_indicators_hex_250m.gpkg"
gpkgOutput_hex250 = os.path.join(output_folder, config['output_hex_250m'])
# create the path of "global_indicators_city.gpkg"
gpkgOutput_cities = os.path.join(output_folder, config['global_indicators_city'])
cities = ['adelaide',
'auckland',
'baltimore',
'bangkok',
'barcelona',
'belfast',
'bern',
'chennai',
'mexico_city',
'cologne',
'ghent',
'graz',
'hanoi',
'hong_kong',
'lisbon',
'melbourne',
'odense',
'olomouc',
'sao_paulo',
'phoenix',
'seattle',
'sydney',
'valencia',
'vic']
###Output
_____no_output_____
###Markdown
Plot Example Cities
###Code
scheme = 'NaturalBreaks'
k = 5
cmap = 'plasma'
edgecolor = 'none'
city_color = 'none'
city_edge = 'w'
city_edge_lw = 0.2
title_y = 1.02
title_fontsize = 16
title_weight = 'bold'
fontcolor = 'w'
params = {"text.color" : fontcolor,
"ytick.color" : fontcolor,
"xtick.color" : fontcolor}
plt.rcParams.update(params)
def plot_within(gpkgOutput_hex250, gpkgOutput_cities, filepath, figsize=(14, 8), facecolor="k", nrows=2, ncols=3, projected=True):
cols=['all_cities_walkability',
'pct_access_500m_public_open_space_any_binary',
'pct_access_500m_public_open_space_large_binary',
'pct_access_500m_pt_gtfs_any_binary',
'pct_access_500m_pt_gtfs_freq_20_binary',
'pct_access_500m_pt_gtfs_freq_30_binary']
fig, axes = plt.subplots(figsize=figsize, facecolor=facecolor, nrows=nrows, ncols=ncols,)
for ax, col in zip(axes.flatten(), cols):
# the path of "global_indicators_hex_250m.gpkg"
gpkgOutput_hex250 = os.path.join(output_folder, config['output_hex_250m'])
# create the path of "global_indicators_city.gpkg"
gpkgOutput_cities = os.path.join(output_folder, config['global_indicators_city'])
# from filepaths, extract city-level data
hex250 = gpd.read_file(gpkgOutput_hex250, layer=city)
city_bound = gpd.read_file(gpkgOutput_cities, layer=city)
# plot hexplot and city boundaries
_ = hex250.plot(ax=ax, column=col, scheme=scheme, k=k, cmap=cmap, edgecolor=edgecolor,
label=city, legend=False, legend_kwds=None)
_ = city_bound.plot(ax=ax, color=city_color, edgecolor=city_edge, linewidth=city_edge_lw)
# add titles
fig.suptitle(f"{city} Within-city Indicators", color=fontcolor, fontsize=20, weight='bold')
ax.set_title(col, color=fontcolor, fontsize=10)
ax.set_axis_off()
# save to disk
save_path = os.path.join(image_path, f"{city}-within-maps.png")
fig.savefig(save_path, dpi=dpi, bbox_inches='tight', facecolor=fig.get_facecolor())
plt.close()
print(ox.ts(), f'figures saved to disk at "{filepath}"')
return fig, axes
for city in cities:
print(ox.ts(), f"begin mapping {city}")
fp = image_path.format(city=city)
fig, axes = plot_within(gpkgOutput_hex250, gpkgOutput_cities, fp)
print(ox.ts(), f'all done, saved figures"')
###Output
2020-10-05 06:57:54 begin mapping adelaide
2020-10-05 06:58:07 figures saved to disk at "./images"
2020-10-05 06:58:17 figures saved to disk at "./images"
2020-10-05 06:58:27 figures saved to disk at "./images"
2020-10-05 06:58:37 figures saved to disk at "./images"
2020-10-05 06:58:47 figures saved to disk at "./images"
2020-10-05 06:58:57 figures saved to disk at "./images"
2020-10-05 06:58:57 begin mapping auckland
2020-10-05 06:59:06 figures saved to disk at "./images"
2020-10-05 06:59:17 figures saved to disk at "./images"
2020-10-05 06:59:27 figures saved to disk at "./images"
2020-10-05 06:59:37 figures saved to disk at "./images"
2020-10-05 06:59:48 figures saved to disk at "./images"
2020-10-05 06:59:58 figures saved to disk at "./images"
2020-10-05 06:59:58 begin mapping baltimore
2020-10-05 07:00:11 figures saved to disk at "./images"
2020-10-05 07:00:24 figures saved to disk at "./images"
2020-10-05 07:00:36 figures saved to disk at "./images"
2020-10-05 07:00:48 figures saved to disk at "./images"
2020-10-05 07:01:02 figures saved to disk at "./images"
2020-10-05 07:01:16 figures saved to disk at "./images"
2020-10-05 07:01:16 begin mapping bangkok
2020-10-05 07:01:37 figures saved to disk at "./images"
2020-10-05 07:01:58 figures saved to disk at "./images"
2020-10-05 07:02:21 figures saved to disk at "./images"
|
examples/SC20/09-HandsOn-oneTBB-with-SYCL.ipynb | ###Markdown
Using oneTBB with SYCL. Sections: - [oneTBB Generic Algorithms](oneTBB-Generic-Algorithms) - [Calculating pi with tbb::parallel_reduce](Calculating-pi-with-tbb::parallel_reduce) - [Using SYCL on GPU and oneTBB on CPU consecutively](Using-SYCL-on-GPU-and-oneTBB-on-CPU-consecutively) - [Using tbb::task_group to dispatch GPU and CPU code in parallel](Using-tbb::task_group-to-dispatch-GPU-and-CPU-code-in-parallel) - [Using resumable tasks or async_node to share the workload across the CPU and GPU](Using-resumable-tasks-or-async_node-to-share-the-workload-across-the-CPU-and-GPU) Learning Objectives: * Gain experience with oneTBB generic algorithms * Use tbb::parallel_reduce to estimate pi as the area of a unit circle * Learn how to use oneTBB and SYCL together. * Learn how to use a resumable task or async_node to avoid blocking a oneTBB worker thread. oneTBB Generic Algorithms. While it's possible to implement a parallel application by using oneTBB to specify each individual task that can run concurrently, it is more common to make use of one of its data-parallel generic algorithms. The oneTBB library provides a number of [generic parallel algorithms](https://spec.oneapi.com/versions/latest/elements/oneTBB/source/algorithms.html), including `parallel_for`, `parallel_reduce`, `parallel_scan`, `parallel_invoke` and `parallel_pipeline`. These functions capture many of the common parallel patterns that are key to unlocking multithreaded performance on the CPU. In this section, we provide an exercise that will introduce you to one example algorithm, `parallel_reduce`. Calculating pi with tbb::parallel_reduce. In this exercise, we calculate pi using the approach shown in the figure below. The idea is to compute the area of a unit circle, which is equal to pi. We do this by approximating the area of 1/4th of a unit circle, summing up the areas of ``num_intervals`` rectangles that have a height of ``sqrt(1-x*x)`` and a width of ``dx == 1.0/num_intervals``. This sum is multiplied by 4 to compute the total area of the unit circle, providing us with an approximation for pi. Run the sequential baseline implementation. Before we add any parallelism, let's validate this approach by running a baseline sequential implementation. Inspect the sequential code below - there are no modifications necessary. Run the first cell to create the file, then run the cell below it to compile and execute the code. This represents the baseline sequential result and time for our pi computation exercise. 1. Inspect the code cell below, then click run ▶ to save the code to a file. 2. Run ▶ the cell in the __Build and Run the baseline__ section below the code snippet to compile and execute the code in the saved file.
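For reference, the quantity accumulated by the code below is exactly the midpoint-rectangle sum described above (a restatement of the code, with no additional assumptions):

$$\pi \approx 4\sum_{i=0}^{N-1}\sqrt{1-x_i^{2}}\,\Delta x, \qquad x_i=(i+0.5)\,\Delta x,\qquad \Delta x=\frac{1}{N},$$

where $N$ is ``num_intervals``.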
###Code
%%writefile lab/pi-serial.cpp
//==============================================================
// Copyright (c) 2020 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
// =============================================================
#include <chrono>
#include <cmath>
#include <iostream>
#include <limits>
double calc_pi(int num_intervals) {
double dx = 1.0 / num_intervals;
double sum = 0.0;
for (int i = 0; i < num_intervals; ++i) {
double x = (i+0.5)*dx;
double h = std::sqrt(1-x*x);
sum += h*dx;
}
double pi = 4 * sum;
return pi;
}
int main() {
const int num_intervals = std::numeric_limits<int>::max();
double serial_time = 0.0;
{
auto st0 = std::chrono::high_resolution_clock::now();
double pi = calc_pi(num_intervals);
serial_time = 1e-9*(std::chrono::high_resolution_clock::now() - st0).count();
std::cout << "serial pi == " << pi << std::endl;
}
std::cout << "serial_time == " << serial_time << " seconds" << std::endl;
return 0;
}
###Output
_____no_output_____
###Markdown
Build and Run the baseline. Select the cell below and click Run ▶ to compile and execute the code above:
###Code
! chmod 755 q; chmod 755 ./scripts/run_pi-serial.sh; if [ -x "$(command -v qsub)" ]; then ./q scripts/run_pi-serial.sh; else ./scripts/run_pi-serial.sh; fi
###Output
_____no_output_____
###Markdown
Implement a parallel version with tbb::parallel_reduce. Our sequential code accumulates values into a single final sum, making it a reduction operation and a match for ``tbb::parallel_reduce``. You can find detailed documentation for ``parallel_reduce`` [here](https://software.intel.com/content/www/us/en/develop/documentation/tbb-documentation/top/intel-threading-building-blocks-developer-reference/algorithms/parallelreduce-template-function.html). Briefly though, a ``parallel_reduce`` runs a user-provided function on chunks of the iteration space, potentially concurrently, resulting in several partial results. In our example, these partial results will be partial sums. These partial results are combined using a user-provided reduction function; in our pi example, `std::plus` might be used (hint). The interface of ``parallel_reduce`` needed for this example is shown below:
```cpp
template<typename Range, typename Value, typename Func, typename Reduction>
Value parallel_reduce( const Range& range, const Value& identity, const Func& func, const Reduction& reduction );
```
The ``range`` object provides the iteration space, which in our example is 0 to num_intervals - 1. ``identity`` is the identity value for the operation that is being parallelized; for a summation, the identity value is 0, since ``sum == sum + 0``. We provide a lambda expression for ``func`` to compute the partial results, which in our example will return a partial sum for a given range ``r``, accumulating into the starting value ``init``. Finally, ``reduction`` is the operation to use to combine the partial results. For this exercise, complete the following steps: 1. Inspect the code cell below and make the following modifications. 1. Fix the upper bound in the ``tbb::blocked_range``. 2. Fix the identity value. 3. Add the loop body code. 4. Fix the reduction function. 2. When the modifications are complete, click run ▶ to save the code to a file. 3. Run ▶ the cell in the __Build and Run the modified code__ section below the code snippet to compile and execute the code in the saved file.
###Code
%%writefile lab/pi-parallel.cpp
//==============================================================
// Copyright (c) 2020 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
// =============================================================
#include <chrono>
#include <cmath>
#include <iostream>
#include <limits>
#include <thread>
#include <tbb/tbb.h>
#define INCORRECT_VALUE 1
#define INCORRECT_FUNCTION std::minus<double>()
double calc_pi(int num_intervals) {
double dx = 1.0 / num_intervals;
double sum = tbb::parallel_reduce(
/* STEP 1: fix the upper bound: */ tbb::blocked_range<int>(0, INCORRECT_VALUE),
/* STEP 2: provide a proper identity value for summation */ INCORRECT_VALUE,
/* func */
[=](const tbb::blocked_range<int>& r, double init) -> double {
for (int i = r.begin(); i != r.end(); ++i) {
// STEP 3: Add the loop body code:
        // Hint: it will look a lot like the sequential code.
// the returned value should be (init + the_partial_sum)
}
return init;
},
// STEP 4: provide the reduction function
// Hint, maybe std::plus<double>{}
INCORRECT_FUNCTION
);
double pi = 4 * sum;
return pi;
}
static void warmupTBB() {
int num_threads = std::thread::hardware_concurrency();
tbb::parallel_for(0, num_threads,
[](unsigned int) {
std::this_thread::sleep_for(std::chrono::milliseconds(10));
});
}
int main() {
const int num_intervals = std::numeric_limits<int>::max();
double parallel_time = 0.0;
warmupTBB();
{
auto pt0 = std::chrono::high_resolution_clock::now();
double pi = calc_pi(num_intervals);
parallel_time = 1e-9*(std::chrono::high_resolution_clock::now() - pt0).count();
std::cout << "parallel pi == " << pi << std::endl;
}
std::cout << "parallel_time == " << parallel_time << " seconds" << std::endl;
return 0;
}
###Output
_____no_output_____
###Markdown
Build and Run the modified code. Select the cell below and click Run ▶ to compile and execute the code that you modified above:
###Code
! chmod 755 q; chmod 755 ./scripts/run_pi-parallel.sh; if [ -x "$(command -v qsub)" ]; then ./q scripts/run_pi-parallel.sh; else ./scripts/run_pi-parallel.sh; fi
###Output
_____no_output_____
###Markdown
Pi Example Solution (Don't peek unless you have to)
###Code
%%writefile solutions/pi-parallel.cpp
//==============================================================
// Copyright (c) 2020 Intel Corporation
//
// SPDX-License-Identifier: Apache-2.0
// =============================================================
#include <chrono>
#include <cmath>
#include <iostream>
#include <limits>
#include <thread>
#include <tbb/tbb.h>
double calc_pi(int num_intervals) {
double dx = 1.0 / num_intervals;
double sum = tbb::parallel_reduce(
/* range = */ tbb::blocked_range<int>(0, num_intervals ),
/* identity = */ 0.0,
/* func */
[=](const tbb::blocked_range<int>& r, double init) -> double {
for (int i = r.begin(); i != r.end(); ++i) {
double x = (i+0.5)*dx;
double h = std::sqrt(1-x*x);
init += h*dx;
}
return init;
},
std::plus<double>{}
);
double pi = 4 * sum;
return pi;
}
static void warmupTBB() {
int num_threads = std::thread::hardware_concurrency();
tbb::parallel_for(0, num_threads,
[](unsigned int) {
std::this_thread::sleep_for(std::chrono::milliseconds(10));
});
}
int main() {
const int num_intervals = std::numeric_limits<int>::max();
double parallel_time = 0.0;
warmupTBB();
{
auto pt0 = std::chrono::high_resolution_clock::now();
double pi = calc_pi(num_intervals);
parallel_time = 1e-9*(std::chrono::high_resolution_clock::now() - pt0).count();
std::cout << "parallel pi == " << pi << std::endl;
}
std::cout << "parallel_time == " << parallel_time << " seconds" << std::endl;
return 0;
}
! chmod 755 q; chmod 755 ./scripts/run_pi-solution.sh; if [ -x "$(command -v qsub)" ]; then ./q scripts/run_pi-solution.sh; else ./scripts/run_pi-solution.sh; fi
###Output
_____no_output_____
###Markdown
Using SYCL on GPU and oneTBB on CPU consecutively. Now we can look at using oneTBB algorithms in combination with SYCL. Let's start by computing `c = a + alpha * b` (usually known as a `triad` operation), first using a SYCL parallel_for and then a TBB parallel_for. On the **GPU**, we compute `c_sycl = a_array + b_array * alpha`, whereas on the **CPU**, we write to a different result array and compute `c_tbb = a_array + b_array * alpha`. In this example, we are executing these algorithms one after the other, and not overlapping the use of the GPU with the use of the CPU. 1. Inspect the code cell below, then click run ▶ to save the code to a file. 2. Run ▶ the cell in the __Build and Run the baseline__ section below the code snippet to compile and execute the code in the saved file.
###Code
%%writefile lab/triad-consecutive.cpp
//==============================================================
// Copyright (c) 2020 Intel Corporation
//
// SPDX-License-Identifier: Apache 2.0
// =============================================================
#include "../common/common_utils.hpp"
#include <array>
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
int main() {
const float alpha = 0.5; // alpha for triad calculation
const size_t array_size = 16;
std::array<float, array_size> a_array, b_array, c_sycl, c_tbb;
// sets array values to 0..N
common::init_input_arrays(a_array, b_array);
std::cout << "executing on the GPU using SYCL\n";
{
sycl::buffer a_buffer{a_array}, b_buffer{b_array}, c_buffer{c_sycl};
sycl::queue q{sycl::default_selector{}};
q.submit([&](sycl::handler& h) {
sycl::accessor a_accessor{a_buffer, h, sycl::read_only};
sycl::accessor b_accessor{b_buffer, h, sycl::read_only};
sycl::accessor c_accessor{c_buffer, h, sycl::write_only};
h.parallel_for(sycl::range<1>{array_size}, [=](sycl::id<1> index) {
c_accessor[index] = a_accessor[index] + b_accessor[index] * alpha;
});
}).wait(); //Wait here
}
std::cout << "executing on the CPU using TBB\n";
tbb::parallel_for(tbb::blocked_range<int>(0, a_array.size()),
[&](tbb::blocked_range<int> r) {
for (int index = r.begin(); index < r.end(); ++index) {
c_tbb[index] = a_array[index] + b_array[index] * alpha;
}
});
common::validate_results(alpha, a_array, b_array, c_sycl, c_tbb);
common::print_results(alpha, a_array, b_array, c_sycl, c_tbb);
}
###Output
_____no_output_____
###Markdown
Build and Run the baseline. Select the cell below and click Run ▶ to compile and execute the code above:
###Code
! chmod 755 q; chmod 755 ./scripts/run_consecutive.sh; if [ -x "$(command -v qsub)" ]; then ./q scripts/run_consecutive.sh; else ./scripts/run_consecutive.sh; fi
###Output
_____no_output_____
###Markdown
Using tbb::task_group to dispatch GPU and CPU code in parallel. Of course the CPU and the GPU can work in parallel. Our first approach will be to use `tbb::task_group` to spawn a task for the GPU and another concurrent one for the CPU. It will look like this. The class `tbb::task_group` is quite easy to use:
```
tbb::task_group g; // Create a task_group object g
g.run([]{cout << "One task passed to g.run as a lambda\n";});
g.run([]{cout << "Another concurrent task in this lambda\n";});
g.wait(); // Wait for both tasks to complete
```
For this exercise, complete the following steps: 1. Inspect the code cell below and make the following modifications. 1. Complete the body of the lambda for the first call to run, offloading the code to the GPU. 2. Complete the body of the lambda for the second call to run, executing the code on the CPU. 2. When the modifications are complete, click run ▶ to save the code to a file. 3. Run ▶ the cell in the __Build and Run the modified code__ section below the code snippet to compile and execute the code in the saved file.
###Code
%%writefile lab/triad-task_group.cpp
//==============================================================
// Copyright (c) 2020 Intel Corporation
//
// SPDX-License-Identifier: Apache 2.0
// =============================================================
#include "../common/common_utils.hpp"
#include <array>
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
#include "tbb/task_group.h"
int main() {
const float alpha = 0.5; // coeff for triad calculation
const size_t array_size = 16;
std::array<float, array_size> a_array, b_array, c_sycl, c_tbb;
// sets array values to 0..N
common::init_input_arrays(a_array, b_array);
// create task_group
tbb::task_group tg;
// Run a TBB task that uses SYCL to offload to GPU, function run does not block
tg.run([&, alpha]() {
std::cout << "executing on the GPU using SYCL\n";
{
// STEP A: Complete the body to offload to the GPU
// Hint: look at (copy from) the consecutive calls sample
}
});
  // Run a TBB task that computes c_tbb on the CPU using tbb::parallel_for
tg.run([&, alpha]() {
std::cout << "executing on the CPU using TBB\n";
// STEP B: Complete the body to offload to the CPU
// Hint: look at (copy from) the consecutive calls sample
});
// wait for both TBB tasks to complete
tg.wait();
common::validate_results(alpha, a_array, b_array, c_sycl, c_tbb);
common::print_results(alpha, a_array, b_array, c_sycl, c_tbb);
}
###Output
_____no_output_____
###Markdown
Build and Run the modified code. Select the cell below and click Run ▶ to compile and execute the code that you modified above:
###Code
! chmod 755 q; chmod 755 ./scripts/run_tasks.sh; if [ -x "$(command -v qsub)" ]; then ./q scripts/run_tasks.sh; else ./scripts/run_tasks.sh; fi
###Output
_____no_output_____
###Markdown
Solution (Don't peek unless you have to)
###Code
%%writefile solutions/triad-task_group-solved.cpp
//==============================================================
// Copyright (c) 2020 Intel Corporation
//
// SPDX-License-Identifier: Apache 2.0
// =============================================================
#include "../common/common_utils.hpp"
#include <array>
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
#include "tbb/task_group.h"
int main() {
const float alpha = 0.5; // coeff for triad calculation
const size_t array_size = 16;
std::array<float, array_size> a_array, b_array, c_sycl, c_tbb;
// sets array values to 0..N
common::init_input_arrays(a_array, b_array);
// create task_group
tbb::task_group tg;
// Run a TBB task that uses SYCL to offload to GPU, function run does not block
tg.run([&, alpha]() {
std::cout << "executing on the GPU using SYCL\n";
{
sycl::buffer a_buffer{a_array}, b_buffer{b_array}, c_buffer{c_sycl};
sycl::queue q{sycl::default_selector{}};
q.submit([&](sycl::handler& h) {
sycl::accessor a_accessor{a_buffer, h, sycl::read_only};
sycl::accessor b_accessor{b_buffer, h, sycl::read_only};
sycl::accessor c_accessor{c_buffer, h, sycl::write_only};
h.parallel_for(sycl::range<1>{array_size}, [=](sycl::id<1> index) {
c_accessor[index] = a_accessor[index] + b_accessor[index] * alpha;
});
}).wait();
}
});
  // Run a TBB task that computes c_tbb on the CPU using tbb::parallel_for
tg.run([&, alpha]() {
std::cout << "executing on the CPU using TBB\n";
tbb::parallel_for(tbb::blocked_range<int>(0, a_array.size()),
[&](tbb::blocked_range<int> r) {
for (int index = r.begin(); index < r.end(); ++index) {
c_tbb[index] = a_array[index] + b_array[index] * alpha;
}
});
});
// wait for both TBB tasks to complete
tg.wait();
common::validate_results(alpha, a_array, b_array, c_sycl, c_tbb);
common::print_results(alpha, a_array, b_array, c_sycl, c_tbb);
}
! chmod 755 q; chmod 755 ./scripts/run_tasks-solved.sh; if [ -x "$(command -v qsub)" ]; then ./q scripts/run_tasks-solved.sh; else ./scripts/run_tasks-solved.sh; fi
###Output
_____no_output_____
###Markdown
Using resumable tasks or async_node to share the workload across the CPU and GPU. Let's say we only have to compute a single result array. But we want to get the most out of both the CPU and the GPU by sharing the workload. The simplest alternative is to statically partition the iteration space into two sub-regions and assign the first partition to the GPU and the second one to the CPU. In the next code we introduce several changes: 1. We use the `offload_ratio=0.5` variable to indicate that we want to offload to the GPU (using a SYCL queue) 50% of the iteration space and the other 50% to the CPU (that gets processed by a `tbb::parallel_for`). 2. We use a different `alpha` for the GPU (`alpha_sycl = 0.5`) and for the CPU (`alpha_tbb = 1.0`). That way, when printing the C array we can easily identify the sub-array updated on the GPU (the *.5 values) and the sub-array updated on the CPU (all integer values). 3. We use USM host-allocated arrays (also accessible from the GPU), instead of using sycl::buffer. That way: i) we provide another example that uses USM; ii) the resulting code is simpler; and iii) it may exhibit performance improvements on integrated GPUs that share the global memory with the CPU. 4. We use two C arrays (`c_sycl` and `c_tbb`) as in the previous examples, but after the GPU and the CPU are done with their respective duties, we combine the GPU part into the CPU array `c_tbb`. In some cases (USM with fine-grained sharing capabilities) a single C array would do, but for portability's sake, we decided to use the safest approach that avoids having the CPU and the GPU concurrently writing in the same array (even if it is in different non-overlapping regions). This is a simple fine-grained CPU+GPU demonstration that may not perform better than a CPU-only or GPU-only alternative, but for other coarser-grained problems this static partitioning of the iteration space can improve performance and/or reduce energy consumption. Resumable tasks. In the next code, we use `tbb::task::suspend()` instead of `tbb::task_group::run()` to avoid blocking a TBB working thread while waiting for the GPU task. Here you can find detailed information about [tbb::suspend_task](https://www.threadingbuildingblocks.org/docs/help/reference/appendices/preview_features/resumable_tasks.html), but you can also refer to slide 27 of the previous presentation. In the current state, a user-defined `AsyncActivity` is created in the `main()` function. At construction time, AsyncActivity starts a thread that waits until `submit_flag==true`, then offloads the computation to the GPU, and when the GPU has finished, it sets `submit_flag=false`. `AsyncActivity::submit()` is called in the `main()` function after starting the CPU computation. This member function is the one setting `submit_flag=true` and then spin-waits until the thread completes the GPU work and sets `submit_flag=false`. There is useless thread spinning here, so let's fix it and simplify it using `tbb::task::suspend()`. 1. Inspect the code cell below and make the following modifications. 1. STEP A: Inside `main()`, put the call to submit inside of a call to `tbb::task::suspend()`, as in Slide 27 from the previous presentation. 2. STEP B: Inside the thread body, remove the `submit_flag=false` and instead use `tbb::task::resume()`. 3. STEP C: Inside `AsyncActivity::submit()` remove the idle spin loop waiting for the GPU to finish (now `tbb::task::suspend()` takes care of waiting). 2. Run ▶ the cell in the __Build and Run the modified code__ section below the code snippet to compile and execute the code in the saved file.
###Code
%%writefile lab/triad-hetero-suspend-resume.cpp
//==============================================================
// Copyright (c) 2020 Intel Corporation
//
// SPDX-License-Identifier: Apache 2.0
// =============================================================
#include "../common/common_utils.hpp"
#include <array>
#include <atomic>
#include <cmath>
#include <iostream>
#include <thread>
#include <algorithm>
#include <CL/sycl.hpp>
#include <tbb/blocked_range.h>
#include <tbb/task.h>
#include <tbb/task_group.h>
#include <tbb/parallel_for.h>
template<size_t array_size>
class AsyncActivity {
float alpha;
const float *a_array, *b_array;
float *c_sycl;
sycl::queue& q;
float offload_ratio;
std::atomic<bool> submit_flag;
tbb::task::suspend_point suspend_point;
std::thread service_thread;
public:
AsyncActivity(float alpha_sycl, const float *a, const float *b, float *c, sycl::queue &queue) :
alpha{alpha_sycl}, a_array{a}, b_array{b}, c_sycl{c}, q{queue},
offload_ratio{0}, submit_flag{false},
service_thread([this] {
// We are in the constructor so this thread is dispatched at AsyncActivity construction
// Wait until the job is submitted into the tbb::suspend_task()
while(!submit_flag) std::this_thread::yield();
// Here submit_flag==true --> DISPATCH GPU computation
std::size_t array_size_sycl = std::ceil(array_size * offload_ratio);
float l_alpha=alpha;
const float *la=a_array, *lb=b_array;
float *lc=c_sycl;
q.submit([&](sycl::handler& h) {
h.parallel_for(sycl::range<1>{array_size_sycl}, [=](sycl::id<1> index) {
lc[index] = la[index] + lb[index] * l_alpha;
});
}).wait(); //The thread may spin or block here.
// Pass a signal into the main thread that the GPU work is completed
// STEP B: remove the submit_flag=false and instead use tbb::task::resume().
// See https://www.threadingbuildingblocks.org/docs/help/reference/appendices/preview_features/resumable_tasks.html
submit_flag = false;
}) {}
~AsyncActivity() {
service_thread.join();
}
void submit( float ratio, tbb::task::suspend_point sus_point ) {
offload_ratio = ratio;
suspend_point = sus_point;
submit_flag = true;
// STEP C: remove the idle spin loop on the submit_flag
    // this becomes unnecessary once suspend / resume is used
    // For now it is necessary to avoid this function returning before the GPU has finished
    while (submit_flag) // Wait until submit_flag==false (the service thread sets it after the GPU has finished)
std::this_thread::yield();
}
}; // class AsyncActivity
int main() {
constexpr float ratio = 0.5; // CPU or GPU offload ratio
// We use different alpha coefficients so that
//we can identify the GPU and CPU part if we print c_array result
const float alpha_sycl = 0.5, alpha_tbb = 1.0;
constexpr size_t array_size = 16;
sycl::queue q{sycl::gpu_selector{}};
std::cout << "Using device: " << q.get_device().get_info<sycl::info::device::name>() << '\n';
  // This host allocation of c comes in handy, especially for integrated GPUs (CPU and GPU share mem)
float *a_array = malloc_host<float>(array_size, q);
float *b_array = malloc_host<float>(array_size, q);
float *c_sycl = malloc_host<float>(array_size, q);
float *c_tbb = new float[array_size];
// sets array values to 0..N
std::iota(a_array, a_array+array_size,0);
std::iota(b_array, b_array+array_size,0);
tbb::task_group tg;
AsyncActivity<array_size> activity{alpha_sycl, a_array, b_array, c_sycl, q};
//Spawn a task that runs a parallel_for on the CPU
tg.run([&, alpha_tbb]{
std::size_t i_start = static_cast<std::size_t>(std::ceil(array_size * ratio));
std::size_t i_end = array_size;
tbb::parallel_for(i_start, i_end, [=]( std::size_t index ) {
c_tbb[index] = a_array[index] + alpha_tbb * b_array[index];
});
});
  //Spawn another task that asynchronously offloads computation to the GPU
// STEP A: Put the call to submit inside of a call to tbb::task::suspend, as in Slide 27 from the previous presentation
activity.submit(ratio, tbb::task::suspend_point{});
tg.wait();
//Merge GPU result into CPU array
std::size_t gpu_end = static_cast<std::size_t>(std::ceil(array_size * ratio));
std::copy(c_sycl, c_sycl+gpu_end, c_tbb);
common::validate_usm_results(ratio, alpha_sycl, alpha_tbb, a_array, b_array, c_tbb, array_size);
if(array_size<64)
common::print_usm_results(ratio, alpha_sycl, alpha_tbb, a_array, b_array, c_tbb, array_size);
free(a_array,q);
free(b_array,q);
free(c_sycl,q);
delete[] c_tbb;
}
###Output
_____no_output_____
###Markdown
Build and Run the modified code. Select the cell below and click Run ▶ to compile and execute the code above:
###Code
! chmod 755 q; chmod 755 ./scripts/run_suspend-resume.sh; if [ -x "$(command -v qsub)" ]; then ./q scripts/run_suspend-resume.sh; else ./scripts/run_suspend-resume.sh; fi
###Output
_____no_output_____
###Markdown
Solution (Don't peek unless you have to)
###Code
%%writefile solutions/triad-hetero-suspend-resume-solved.cpp
//==============================================================
// Copyright (c) 2020 Intel Corporation
//
// SPDX-License-Identifier: Apache 2.0
// =============================================================
#include "../common/common_utils.hpp"
#include <array>
#include <atomic>
#include <cmath>
#include <iostream>
#include <thread>
#include <algorithm>
#include <CL/sycl.hpp>
#include <tbb/blocked_range.h>
#include <tbb/task.h>
#include <tbb/task_group.h>
#include <tbb/parallel_for.h>
template<size_t array_size>
class AsyncActivity {
float alpha;
const float *a_array, *b_array;
float *c_sycl;
sycl::queue& q;
float offload_ratio;
std::atomic<bool> submit_flag;
tbb::task::suspend_point suspend_point;
std::thread service_thread;
public:
AsyncActivity(float alpha_sycl, const float *a, const float *b, float *c, sycl::queue &queue) :
alpha{alpha_sycl}, a_array{a}, b_array{b}, c_sycl{c}, q{queue},
offload_ratio{0}, submit_flag{false},
service_thread([this] {
// Wait until the job will be submitted into the async activity
while(!submit_flag) std::this_thread::yield();
// Here submit_flag==true --> DISPATCH GPU computation
std::size_t array_size_sycl = std::ceil(array_size * offload_ratio);
float l_alpha=alpha;
const float *la=a_array, *lb=b_array;
float *lc=c_sycl;
q.submit([&](sycl::handler& h) {
h.parallel_for(sycl::range<1>{array_size_sycl}, [=](sycl::id<1> index) {
lc[index] = la[index] + lb[index] * l_alpha;
});
}).wait(); //The thread may spin or block here.
// Pass a signal into the main thread that the GPU work is completed
tbb::task::resume(suspend_point);
}) {}
~AsyncActivity() {
service_thread.join();
}
void submit( float ratio, tbb::task::suspend_point sus_point ) {
offload_ratio = ratio;
suspend_point = sus_point;
submit_flag = true;
}
}; // class AsyncActivity
int main() {
constexpr float ratio = 0.5; // CPU or GPU offload ratio
// We use different alpha coefficients so that
//we can identify the GPU and CPU part if we print c_array result
const float alpha_sycl = 0.5, alpha_tbb = 1.0;
constexpr size_t array_size = 16;
sycl::queue q{sycl::gpu_selector{}};
std::cout << "Using device: " << q.get_device().get_info<sycl::info::device::name>() << '\n';
  // This host allocation of c comes in handy, especially for integrated GPUs (CPU and GPU share mem)
float *a_array = malloc_host<float>(array_size, q);
float *b_array = malloc_host<float>(array_size, q);
float *c_sycl = malloc_host<float>(array_size, q);
float *c_tbb = new float[array_size];
// sets array values to 0..N
std::iota(a_array, a_array+array_size,0);
std::iota(b_array, b_array+array_size,0);
tbb::task_group tg;
AsyncActivity<array_size> activity{alpha_sycl, a_array, b_array, c_sycl, q};
//Spawn a task that runs a parallel_for on the CPU
tg.run([&, alpha_tbb]{
std::size_t i_start = static_cast<std::size_t>(std::ceil(array_size * ratio));
std::size_t i_end = array_size;
tbb::parallel_for(i_start, i_end, [=]( std::size_t index ) {
c_tbb[index] = a_array[index] + alpha_tbb * b_array[index];
});
});
  //Spawn another task that asynchronously offloads computation to the GPU
tbb::task::suspend([&]( tbb::task::suspend_point suspend_point ) {
activity.submit(ratio, suspend_point);
});
tg.wait();
//Merge GPU result into CPU array
std::size_t gpu_end = static_cast<std::size_t>(std::ceil(array_size * ratio));
std::copy(c_sycl, c_sycl+gpu_end, c_tbb);
common::validate_usm_results(ratio, alpha_sycl, alpha_tbb, a_array, b_array, c_tbb, array_size);
if(array_size<64)
common::print_usm_results(ratio, alpha_sycl, alpha_tbb, a_array, b_array, c_tbb, array_size);
free(a_array,q);
free(b_array,q);
free(c_sycl,q);
delete[] c_tbb;
}
! chmod 755 q; chmod 755 ./scripts/run_suspend-resume-solved.sh; if [ -x "$(command -v qsub)" ]; then ./q scripts/run_suspend-resume-solved.sh; else ./scripts/run_suspend-resume-solved.sh; fi
###Output
_____no_output_____
###Markdown
Using flow::async_node. Now let's assume we have a stream of data that is going to be processed in a TBB Flow Graph, and so use a `tbb::flow::async_node` instead. As you can see in the figure, our graph has several nodes: 1. The `tbb::flow::input_node` (**in_node**) initializes a struct with the arrays and companion information, initializes A and B, and passes a pointer to that structure to two nodes that will process the arrays in parallel. 2. The `tbb::flow::function_node` (**cpu_node**) computes a sub-region of the arrays on the CPU, using a nested `tbb::parallel_for` to distribute the CPU load among the available CPU cores. 3. The `tbb::flow::async_node` (**a_node**) dispatches to an AsyncActivity, quite similar to the previous example. As a reference, you can look at the [reference of tbb::flow::async_node](https://www.threadingbuildingblocks.org/docs/help/index.htmreference/appendices/preview_features/resumable_tasks.html) or at an easier [example](https://link.springer.com/chapter/10.1007/978-1-4842-4398-5_18). 4. The `tbb::flow::join_node` (**node_join**) waits until the CPU and the GPU are done. 5. The `tbb::flow::function_node` (**out_node**) receives the pointer to the message that contains the resulting array, which is checked and printed. In the following code, the `AsyncActivity` wastes a TBB working thread by spinning until the GPU has finished processing its region of the arrays, much like in the previous exercise. We can certainly do it better: 1. Inspect the code cell below and make the following modifications. 1. STEP A: Inside `main()`, in the body of the `a_node`, remove the `try_put` that in this code is necessary to keep it working (it sends a message to the `node_join`). This `try_put` should now be moved to the `AsyncActivity` thread, as we do in the next STEP. 2. STEP B: Inside the `AsyncActivity` thread body, remove the `submit_flag=false` and instead use `gateway->try_put(msg)`. We also have to call `gateway->release_wait()` so that we inform the graph, `g`, that there is no need to wait any longer for the `AsyncActivity`. 3. STEP C: Inside `AsyncActivity::submit()` add a call to `gateway.reserve_wait()` to notify the graph that you are dispatching to an asynchronous activity and that the graph has to wait for it. 4. STEP D: Inside `AsyncActivity::submit()` remove the idle spin loop waiting for the GPU to finish (now the `reserve_wait/release_wait` pair takes care of the necessary synchronization). 2. Run ▶ the cell in the __Build and Run the modified code__ section below the code snippet to compile and execute the code in the saved file.
###Code
%%writefile lab/triad-hetero-async_node.cpp
//==============================================================
// Copyright (c) 2020 Intel Corporation
//
// SPDX-License-Identifier: Apache 2.0
// =============================================================
#include "../common/common_utils.hpp"
#include <cmath> //for std::ceil
#include <array>
#include <atomic>
#include <iostream>
#include <thread>
#include <CL/sycl.hpp>
#include <tbb/blocked_range.h>
#include <tbb/flow_graph.h>
#include <tbb/global_control.h>
#include <tbb/parallel_for.h>
constexpr size_t array_size = 16;
template<size_t ARRAY_SIZE>
struct msg_t {
static constexpr size_t array_size = ARRAY_SIZE;
const float offload_ratio = 0.5;
const float alpha_0 = 0.5;
const float alpha_1 = 1.0;
std::array<float, array_size> a_array; // input
std::array<float, array_size> b_array; // input
std::array<float, array_size> c_sycl; // GPU output
std::array<float, array_size> c_tbb; // CPU output
};
using msg_ptr = std::shared_ptr<msg_t<array_size>>;
using async_node_t = tbb::flow::async_node<msg_ptr, msg_ptr>;
using gateway_t = async_node_t::gateway_type;
class AsyncActivity {
msg_ptr msg;
gateway_t* gateway_ptr;
std::atomic<bool> submit_flag;
std::thread service_thread;
public:
AsyncActivity() : msg{nullptr}, gateway_ptr{nullptr}, submit_flag{false},
service_thread( [this] {
//Wait until other thread sets submit_flag=true
while( !submit_flag ) std::this_thread::yield();
// Here we go! Dispatch code to the GPU
// Execute the kernel over a portion of the array range
size_t array_size_sycl = std::ceil(msg->a_array.size() * msg->offload_ratio);
{
sycl::buffer a_buffer{msg->a_array}, b_buffer{msg->b_array}, c_buffer{msg->c_sycl};
sycl::queue q{sycl::gpu_selector{}};
float alpha = msg->alpha_0;
q.submit([&, alpha](sycl::handler& h) {
sycl::accessor a_accessor{a_buffer, h, sycl::read_only};
sycl::accessor b_accessor{b_buffer, h, sycl::read_only};
sycl::accessor c_accessor{c_buffer, h, sycl::write_only};
h.parallel_for(sycl::range<1>{array_size_sycl}, [=](sycl::id<1> index) {
c_accessor[index] = a_accessor[index] + b_accessor[index] * alpha;
});
}).wait();
}
// STEP B: Remove the set of submit_flag and replace with
// a call to try_put on the gateway
// and a call to release_wait on the gateway
submit_flag = false;
} ) {}
~AsyncActivity() {
service_thread.join();
}
void submit(msg_ptr m, gateway_t& gateway) {
// STEP C: add a call to gateway.reserve_wait()
msg = m;
gateway_ptr = &gateway;
submit_flag = true;
// STEP D: remove the idle spin loop on the submit_flag
    // this becomes unnecessary once reserve_wait / release_wait is used
while (submit_flag)
std::this_thread::yield();
}
};
int main() {
tbb::flow::graph g;
// Input node:
tbb::flow::input_node<msg_ptr> in_node{g,
[&](tbb::flow_control& fc) -> msg_ptr {
static bool has_run = false;
if (has_run) fc.stop();
has_run = true; // This example only creates a message to feed the Flow Graph
msg_ptr msg = std::make_shared<msg_t<array_size>>();
common::init_input_arrays(msg->a_array, msg->b_array);
return msg;
}
};
// CPU node
tbb::flow::function_node<msg_ptr, msg_ptr> cpu_node{
g, tbb::flow::unlimited, [&](msg_ptr msg) -> msg_ptr {
size_t i_start = static_cast<size_t>(std::ceil(msg->array_size * msg->offload_ratio));
size_t i_end = static_cast<size_t>(msg->array_size);
auto &a_array = msg->a_array, &b_array = msg->b_array, &c_tbb = msg->c_tbb;
float alpha = msg->alpha_1;
tbb::parallel_for(tbb::blocked_range<size_t>{i_start, i_end},
[&, alpha](const tbb::blocked_range<size_t>& r) {
for (size_t i = r.begin(); i < r.end(); ++i)
c_tbb[i] = a_array[i] + alpha * b_array[i];
}
);
return msg;
}};
// async node -- GPU
AsyncActivity async_act;
async_node_t a_node{g, tbb::flow::unlimited,
[&async_act](msg_ptr msg, gateway_t& gateway) {
async_act.submit(msg, gateway);
// STEP A: remove the try_put below since submit will not block
// In STEP B you will modify AsyncActivity so that it makes the call to try_put instead
gateway.try_put(msg);
}
};
// join node
using join_t = tbb::flow::join_node<std::tuple<msg_ptr, msg_ptr>>;
join_t node_join{g};
// out node
tbb::flow::function_node<join_t::output_type> out_node{g, tbb::flow::unlimited,
[&](const join_t::output_type& two_msgs) {
msg_ptr msg = std::get<0>(two_msgs); //Both msg's point to the same data
//Merge GPU result into CPU array
std::size_t gpu_end = static_cast<std::size_t>(std::ceil(msg->array_size * msg->offload_ratio));
std::copy(msg->c_sycl.begin(), msg->c_sycl.begin()+gpu_end, msg->c_tbb.begin());
common::validate_hetero_results(msg->offload_ratio, msg->alpha_0, msg->alpha_1,
msg->a_array, msg->b_array, msg->c_tbb);
if(msg->array_size<=64)
common::print_hetero_results(msg->offload_ratio, msg->alpha_0, msg->alpha_1,
msg->a_array, msg->b_array, msg->c_tbb);
}
}; // end of out node
// construct graph
tbb::flow::make_edge(in_node, a_node);
tbb::flow::make_edge(in_node, cpu_node);
tbb::flow::make_edge(a_node, tbb::flow::input_port<0>(node_join));
tbb::flow::make_edge(cpu_node, tbb::flow::input_port<1>(node_join));
tbb::flow::make_edge(node_join, out_node);
in_node.activate();
g.wait_for_all();
return 0;
}
###Output
_____no_output_____
###Markdown
Build and Run the modified codeSelect the cell below and click Run ▶ to compile and execute the code that you modified above:
###Code
! chmod 755 q; chmod 755 ./scripts/run_async_node.sh; if [ -x "$(command -v qsub)" ]; then ./q scripts/run_async_node.sh; else ./scripts/run_async_node.sh; fi
###Output
_____no_output_____
###Markdown
Solution (Don't peek unless you have to)
###Code
%%writefile solutions/triad-hetero-async_node-solved.cpp
//==============================================================
// Copyright (c) 2020 Intel Corporation
//
// SPDX-License-Identifier: Apache 2.0
// =============================================================
#include "../common/common_utils.hpp"
#include <cmath> //for std::ceil
#include <array>
#include <atomic>
#include <iostream>
#include <thread>
#include <CL/sycl.hpp>
#include <tbb/blocked_range.h>
#include <tbb/flow_graph.h>
#include <tbb/global_control.h>
#include <tbb/parallel_for.h>
constexpr size_t array_size = 16;
template<size_t ARRAY_SIZE>
struct msg_t {
static constexpr size_t array_size = ARRAY_SIZE;
const float offload_ratio = 0.5;
const float alpha_0 = 0.5;
const float alpha_1 = 1.0;
std::array<float, array_size> a_array; // input
std::array<float, array_size> b_array; // input
std::array<float, array_size> c_sycl; // GPU output
std::array<float, array_size> c_tbb; // CPU output
};
using msg_ptr = std::shared_ptr<msg_t<array_size>>;
using async_node_t = tbb::flow::async_node<msg_ptr, msg_ptr>;
using gateway_t = async_node_t::gateway_type;
class AsyncActivity {
msg_ptr msg;
gateway_t* gateway_ptr;
std::atomic<bool> submit_flag;
std::thread service_thread;
public:
AsyncActivity() : msg{nullptr}, gateway_ptr{nullptr}, submit_flag{false},
service_thread( [this] {
//Wait until other thread sets submit_flag=true
while( !submit_flag ) std::this_thread::yield();
// Here we go! Dispatch code to the GPU
// Execute the kernel over a portion of the array range
size_t array_size_sycl = std::ceil(msg->a_array.size() * msg->offload_ratio);
{
sycl::buffer a_buffer{msg->a_array}, b_buffer{msg->b_array}, c_buffer{msg->c_sycl};
sycl::queue q{sycl::gpu_selector{}};
float alpha = msg->alpha_0;
q.submit([&, alpha](sycl::handler& h) {
sycl::accessor a_accessor{a_buffer, h, sycl::read_only};
sycl::accessor b_accessor{b_buffer, h, sycl::read_only};
sycl::accessor c_accessor{c_buffer, h, sycl::write_only};
h.parallel_for(sycl::range<1>{array_size_sycl}, [=](sycl::id<1> index) {
c_accessor[index] = a_accessor[index] + b_accessor[index] * alpha;
});
}).wait();
}
gateway_ptr->try_put(msg);
gateway_ptr->release_wait();
} ) {}
~AsyncActivity() {
service_thread.join();
}
void submit(msg_ptr m, gateway_t& gateway) {
gateway.reserve_wait();
msg = m;
gateway_ptr = &gateway;
submit_flag = true;
}
};
int main() {
tbb::flow::graph g;
// Input node:
tbb::flow::input_node<msg_ptr> in_node{g,
[&](tbb::flow_control& fc) -> msg_ptr {
static bool has_run = false;
if (has_run) fc.stop();
has_run = true; // This example only creates a message to feed the Flow Graph
msg_ptr msg = std::make_shared<msg_t<array_size>>();
common::init_input_arrays(msg->a_array, msg->b_array);
return msg;
}
};
// CPU node
tbb::flow::function_node<msg_ptr, msg_ptr> cpu_node{
g, tbb::flow::unlimited, [&](msg_ptr msg) -> msg_ptr {
size_t i_start = static_cast<size_t>(std::ceil(msg->array_size * msg->offload_ratio));
size_t i_end = static_cast<size_t>(msg->array_size);
auto &a_array = msg->a_array, &b_array = msg->b_array, &c_tbb = msg->c_tbb;
float alpha = msg->alpha_1;
tbb::parallel_for(tbb::blocked_range<size_t>{i_start, i_end},
[&, alpha](const tbb::blocked_range<size_t>& r) {
for (size_t i = r.begin(); i < r.end(); ++i)
c_tbb[i] = a_array[i] + alpha * b_array[i];
}
);
return msg;
}};
// async node -- GPU
AsyncActivity async_act;
async_node_t a_node{g, tbb::flow::unlimited,
[&async_act](msg_ptr msg, gateway_t& gateway) {
async_act.submit(msg, gateway);
}
};
// join node
using join_t = tbb::flow::join_node<std::tuple<msg_ptr, msg_ptr>>;
join_t node_join{g};
// out node
tbb::flow::function_node<join_t::output_type> out_node{g, tbb::flow::unlimited,
[&](const join_t::output_type& two_msgs) {
msg_ptr msg = std::get<0>(two_msgs); //Both msg's point to the same data
//Merge GPU result into CPU array
std::size_t gpu_end = static_cast<std::size_t>(std::ceil(msg->array_size * msg->offload_ratio));
std::copy(msg->c_sycl.begin(), msg->c_sycl.begin()+gpu_end, msg->c_tbb.begin());
common::validate_hetero_results(msg->offload_ratio, msg->alpha_0, msg->alpha_1,
msg->a_array, msg->b_array, msg->c_tbb);
if(msg->array_size<=64)
common::print_hetero_results(msg->offload_ratio, msg->alpha_0, msg->alpha_1,
msg->a_array, msg->b_array, msg->c_tbb);
}
}; // end of out node
// construct graph
tbb::flow::make_edge(in_node, a_node);
tbb::flow::make_edge(in_node, cpu_node);
tbb::flow::make_edge(a_node, tbb::flow::input_port<0>(node_join));
tbb::flow::make_edge(cpu_node, tbb::flow::input_port<1>(node_join));
tbb::flow::make_edge(node_join, out_node);
in_node.activate();
g.wait_for_all();
return 0;
}
! chmod 755 q; chmod 755 ./scripts/run_async_node-solved.sh; if [ -x "$(command -v qsub)" ]; then ./q scripts/run_async_node-solved.sh; else ./scripts/run_async_node-solved.sh; fi
###Output
_____no_output_____ |
arrow-3.ipynb | ###Markdown
Introduction to Apache Arrow RA Tech Forum Goal of this talkUnderstand the purpose of Apache Arrow through the lens of Numpy and Pandas Our Daily Adventure- `Tabular` data- `Tools` for Backtesting, drifting, portfolio construction, analysis, reporting- `Time consuming` processes- `Memory intensive` processes- Sophisticated techniques like `multi-processing` etc There must be a better way - Raymond Hettinger There `might` be a better way A Hypothetical Wish List- Highly Performant- Optimized memory footprint- Copy-on-demand (or Zero-copy when possible)/Share Memory- Efficiently move/share data between multi-process boundaries - Tabular representation - Rich data types - In-memory analytics - Memory-mapped access- Distributed- Language Agnostic (maybe you want to use Julia!) Tools We Are Already Familiar With Numpy A Hypothetical Wish List- Highly Performant- Optimized memory footprint- Copy-on-demand (or Zero-copy when possible)/Share Memory- Efficiently move/share data between multi-process boundaries - Tabular representation - Rich data types - In-memory analytics - Memory-mapped access- Distributed- Language Agnostic (maybe you want to use Julia!) What makes Numpy fast?- Numpy Array stored in contiguous memory- Accessing elements in an array is constant time- Meta-data, Values isolation- Zero-copy slicing
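To make the last bullet concrete, here is a minimal sketch (not part of the original slides) showing that a NumPy slice is a view that shares memory with its parent array:

```python
import numpy as np

a = np.arange(10)
b = a[2:7]                      # slicing returns a view, not a copy
b[0] = 99
print(a[2])                     # 99 -- the parent array sees the change
print(np.shares_memory(a, b))   # True
```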
###Code
array1 = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
print('strides =', array1.strides)
print(array1.dtype)
print(array1.data.tobytes())
###Output
strides = (24, 8)
int64
b'\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\t\x00\x00\x00\x00\x00\x00\x00'
###Markdown
What makes Numpy fast?- Numpy Array stored in contiguous memory- Accessing elements in an array is constant time- Meta-data, Values isolation- Zero-copy slicing
###Code
array2 = array1.T # zero-copy of actual array
print(array2)
print('strides =', array2.strides)
print(array1.data.tobytes())
print(id(array2.data) == id(array1.data))
###Output
[[1 4 7]
[2 5 8]
[3 6 9]]
strides = (8, 24)
b'\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x07\x00\x00\x00\x00\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\t\x00\x00\x00\x00\x00\x00\x00'
True
###Markdown
Numpy Object Array
###Code
# We have to resort to `object` here since a Numpy Array needs to hold values of homogeneous types
array3 = np.array([['sec1', True, 100.],
['sec2', False, 200.],
['sec3', True, 300]], dtype=object)
print('strides =', array3.strides)
print(array3.dtype)
print(array3.data.tobytes())
###Output
strides = (24, 8)
object
b'0\xd3\\(Q\x7f\x00\x00\xe0]\xf9p\\U\x00\x00\xd06T(Q\x7f\x00\x00\xf0\xdb\\(Q\x7f\x00\x00\x00^\xf9p\\U\x00\x00\x105T(Q\x7f\x00\x00\xb0 ](Q\x7f\x00\x00\xe0]\xf9p\\U\x00\x0006T(Q\x7f\x00\x00'
###Markdown
Numpy - Is it really a reference?- Is it really a reference that is stored when `dtype=object`?
###Code
print(id(True))
array4 = np.array([True],dtype=object)
z=array4.data.tobytes()
print(f'{z=!r}')
print(z[-3::-1])
print((id(True)))
hex_value = hex(id(True))
print(hex(id(True)))
array4 = array2[0:2, 1:3] # zero-copy memory view
array4
###Output
_____no_output_____
###Markdown
A word about serialization/deserialization
###Code
print('array1 =', array1)
print('\n\n')
print('array3 =', array3)
###Output
array1 = [[1 2 3]
[4 5 6]
[7 8 9]]
array3 = [['sec1' True 100.0]
['sec2' False 200.0]
['sec3' True 300]]
###Markdown
A Hypothetical Wish List Numpy- Highly Performant- Optimized memory footprint- Numpy slicing - providing zero-copy views (sharing memory)- Efficiently move (but not share) data between multi-process boundaries- In-memory analytics (without notion of indices)- Memory-mapped access- Rich data types (via structured arrays) - Performance characteristics probably not the same No Support- Tabular representation - Rich data types - Distributed- Language Agnostic (maybe you want to use Julia!) There `might` be a better way! Tools We are already familiar with Numpy Static Frame / Pandas Static Frame / Pandas Things we gain- Tabular representation - Index/Index Hierarchy - Columnar- Declarative in-memory analytics - Selection semantics (.loc, iloc) - Group by - Row-wise/Column-wise function application The trade-offs- Memory footprint higher than plain numpy- More expensive to move data between processes (compared to Numpy) A Hypothetical Wish List Numpy- Highly Performant- Optimized memory footprint- Numpy slicing - providing zero-copy views (sharing memory)- Efficiently move data between multi-process boundaries- In-memory analytics (without notion of indices)- Memory-mapped access- Rich data types (via structured arrays) - Performance characteristics not the same Pandas- Tabular representation - Better in-memory analytics No Support- Rich data types - Distributed- Language Agnostic (maybe you want to use Julia!) There `still might` be a better way! Apache Arrow- Specifies columnar-data representation (aimed at better speed and storage)- Efficiently move data between processes/machines- Logical types for Tabular representation- Rich data type support (without compromising speed/storage characteristics)- Language agnostic representation Some trade-offs... We will talk about them soon Apache Arrow - Data Types- Primitive DataType (int32, float64, string)- Array
###Code
# PyArray
import pyarrow as pa
py_array = pa.array([1, 2, 3, 4, 5], type=pa.int32())
py_array
###Output
_____no_output_____
###Markdown
Columnar Storagearray = [1, null, 2, 4, 8]-------------------------------------------------------------------------------------------------------------------```* Length: 5, Null count: 1* Validity bitmap buffer: |Byte 0 (validity bitmap) | Bytes 1-63 | |-------------------------|-----------------------| | 00011101 | 0 (padding) |* Value Buffer: |Bytes 0-3 | Bytes 4-7 | Bytes 8-11| Bytes 12-15| Bytes 16-19| Bytes 20-63 | |----------|-------------|-----------|------------|------------|-------------| | 1 | unspecified | 2 | 4 | 8 | unspecified | ``` Source: https://arrow.apache.org/docs/format/Columnar.html Nested List Layout Examplearray = [[12, -7, 25], null, [0, -127, 127, 50], []]--------------------------------------------------```* Length: 4, Null count: 1* Validity bitmap buffer: | Byte 0 (validity bitmap) | Bytes 1-63 | |--------------------------|-----------------------| | 00001101 | 0 (padding) |* Offsets buffer (int32) | Bytes 0-3| Bytes 4-7| Bytes 8-11| Bytes 12-15| Bytes 16-19| Bytes 20-63 | |----------|----------|-----------|------------|------------|-------------| | 0 | 3 | 3 | 7 | 7 | unspecified |* Values array (Int8array): * Length: 7, Null count: 0 * Validity bitmap buffer: Not required * Values buffer (int8) | Bytes 0-6 | Bytes 7-63 | |------------------------------|-------------| | 12, -7, 25, 0, -127, 127, 50 | unspecified |```Source: https://arrow.apache.org/docs/format/Columnar.html Why is Columnar-Storage format important- Efficient storage- Fast lookup (offset based lookup)- Easy to move around bytes (no serialization/deserialization required)- Same performance characteristics for heterogeneous data types- Support for Rich data types without compromising on performance Record Batch (equivalent to Frames)
###Code
# all elements should be of same length
schema = pa.schema([('id', pa.int32()),
('name', pa.string()),
('is_valid', pa.bool_())])
data = [
pa.array([1,2,3,4]),
pa.array(['foo', 'bar','baz', None]),
pa.array([True, None, False, True])
]
batch = pa.RecordBatch.from_arrays(data, schema=schema)
print(f'{batch.num_columns=}, \n{batch.num_rows=}, \n{batch.schema=}, \n\n{batch[1]=}')
print(batch[1])
print(batch[1:3][1]) # zero copy slicing
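# (Added sketch) The validity-bitmap / value-buffer layout described in the previous
# markdown cell can be inspected directly on any Arrow array, for example:
#   nullable = pa.array([1, None, 2, 4, 8])
#   nullable.null_count    # 1
#   nullable.buffers()     # [validity bitmap buffer, values buffer]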
###Output
[
"foo",
"bar",
"baz",
null
]
[
"bar",
"baz"
]
###Markdown
A Hypothetical Wish List Numpy- Highly Performant- Optimized memory footprint- Numpy slicing - providing zero-copy views (sharing memory)- Efficiently move data between multi-process boundaries- In-memory analytics (without notion of indices)- Memory-mapped access- Rich data types (via structured arrays) - Performance characteristics not the same Pandas- Tabular representation - Better in-memory analytics No Support- Rich data types - Distributed- Language Agnostic (maybe you want to use Julia!) A Hypothetical Wish List PyArrow- Highly Performant- Optimized memory footprint- Sharing memory - Zero-Copy slicing- Efficiently move data between multi-process boundaries- Memory-mapped access- Rich data types (with same performance characteristics)- Distributed- Language Agnostic (maybe you `can` use Julia!) Some Trade-offs- Tabular representation (lack of indices)- In-memory analytics (current API not very Pythonic) Performance Tests Performance Test 1- N x 100 Numpy Array (np.float64)- N x 100 PyArrow Array (pyarrow.float64)- Multi-process each column and calculate Sum
###Code
plot_test('/tmp/test1.txt', 'batch_vs_numpy_only_numeric', 'duration', 'Time (seconds)')
plot_test('/tmp/memory_test1.txt', 'batch_vs_numpy_only_numeric_memory', 'size', 'avg memory used per process (MB)')
###Output
_____no_output_____
###Markdown
Performance Test 2- Store `object` type in Numpy Array in 3 columns (int, bool, string)- Same equivalent type in PyArrow Array in 3 columns (pyarrow.int, pyarrow.bool_, pyarrow.string)- Measure performance across - 1_000_000 rows x 3 columns - 10_000_000 rows x 3 columns - 20_000_000 rows x 3 columns - Demonstrates heterogeneous data support in Apache Arrow
###Code
plot_test('test2.txt', 'batch_vs_numpy_with_python_objects', 'duration', 'Time (seconds)')
plot_test('memory_test2.txt', 'batch_vs_numpy_with_python_objects_memory', 'size', 'avg memory used per process (MB)')
###Output
_____no_output_____
###Markdown
Performance Test 3- Static Frame / Pandas with alternating bool/float types as columns- Similar columns in PyArrow- Measure performance across - 10_000 rows x 200 cols - 50_000 rows x 200 cols - 100_000 rows x 200 cols- Multi-process the `sum` calculation of each column
###Code
plot_test('test3.txt', 'batch_vs_pandas_vs_staticframe', 'duration', 'Time (seconds)')
plot_test('memory_test3.txt', 'batch_vs_pandas_vs_staticframe_memory', 'size', 'avg memory used per process (MB)')
###Output
_____no_output_____
###Markdown
Performance Test 4- Compare RecordBatch, ArrowFile and MappedArrowFile- Measure performance across - 10K - 1 million rows x 200 columns
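If you want to reproduce the Arrow-file and memory-mapped variants yourself, here is a minimal sketch (not the benchmark code behind the plots below; the file path is illustrative and it reuses the `batch` created earlier):

```python
import pyarrow as pa

# Write the record batch to an Arrow IPC file on disk ...
with pa.OSFile('/tmp/example.arrow', 'wb') as sink:
    with pa.ipc.new_file(sink, batch.schema) as writer:
        writer.write_batch(batch)

# ... then read it back through a memory map: the data is referenced
# from the mapped pages rather than copied into process memory.
with pa.memory_map('/tmp/example.arrow', 'r') as source:
    table = pa.ipc.open_file(source).read_all()
```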
###Code
plot_test('test4.txt', 'arrow_file_vs_mapped_arrow_file', 'duration', 'Time (seconds)')
plot_test('memory_test4.txt', 'arrow_file_vs_mapped_arrow_file_memory', 'size', 'avg mem used/process (MB)')
###Output
_____no_output_____
###Markdown
Why not use Apache Arrow? A Hypothetical Wish List PyArrow- Highly Performant- Optimized memory footprint- Efficiently move data between multi-process boundaries- Memory-mapped access- Rich data types (with same performance characteristics)- Distributed- Language Agnostic (maybe you `can` use Julia!) Some Trade-offs- Tabular representation (lack of indices)- In-memory analytics (current API not very Pythonic)- Lack of friendly user interface for Linear Algebra operations Compute Functions - Math functions
###Code
import pyarrow.compute as pc
print(f'{ batch[0] = }\n')
a = pc.sum(batch[0])
print(f'{a = }')
b = pc.multiply(batch[0], batch[0])
print(f'{b = }')
print(f'{pc.add(batch[0], batch[0])=}')
###Output
batch[0] = <pyarrow.lib.Int32Array object at 0x7f262ae1a280>
[
1,
2,
3,
4
]
a = <pyarrow.Int64Scalar: 10>
b = <pyarrow.lib.Int32Array object at 0x7f262a60b3a0>
[
1,
4,
9,
16
]
pc.add(batch[0], batch[0])=<pyarrow.lib.Int32Array object at 0x7f262a60b400>
[
2,
4,
6,
8
]
###Markdown
Compute Functions - Containment
###Code
print(f'{batch[0] = }\n')
print(f'{pc.equal(batch[0], 2)=}')
l = pc.SetLookupOptions(value_set=pa.array([2]))
print(f'{pc.is_in(batch[0], options=l) = }')
###Output
batch[0] = <pyarrow.lib.Int32Array object at 0x7f262a60b160>
[
1,
2,
3,
4
]
pc.equal(batch[0], 2)=<pyarrow.lib.BooleanArray object at 0x7f262a60b5e0>
[
false,
true,
false,
false
]
pc.is_in(batch[0], options=l) = <pyarrow.lib.BooleanArray object at 0x7f262a60b640>
[
false,
true,
false,
false
]
###Markdown
Rich Data Types (Structs/Union/Dictionary)
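The cell below demonstrates struct types. As a complementary sketch (not from the original talk), dictionary-encoded arrays (Arrow's analogue of categorical data) look like this:

```python
import pyarrow as pa

tickers = pa.array(['AAPL', 'MSFT', 'AAPL', 'GOOG', 'AAPL']).dictionary_encode()
print(tickers.type)        # dictionary<values=string, indices=int32, ordered=0>
print(tickers.indices)     # int32 indices into the unique-value dictionary
print(tickers.dictionary)  # the unique values: ["AAPL", "MSFT", "GOOG"]
```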
###Code
f1 = pa.field('security_id', pa.int32())
f2 = pa.field('price', pa.float64())
f3 = pa.field('is_active', pa.bool_())
new_struct = pa.struct([f1, f2, f3])
new_struct
f1 = pa.field('security_id', pa.int32())
f2 = pa.field('price', pa.float64())
f3 = pa.field('is_active', pa.bool_())
new_struct = pa.struct([f1, f2, f3])
row = pa.array([(1, 100, True), (2, 200., False)], type=new_struct)
print(row)
###Output
-- is_valid: all not null
-- child 0 type: int32
[
1,
2
]
-- child 1 type: double
[
100,
200
]
-- child 2 type: bool
[
true,
false
]
|
Module_C.ipynb | ###Markdown
Section 19.1: Root Finding Problem Statement The root or zero of a function, $f(x)$, is an $x_r$ such that $f(x_r)=0$. For functions such as $f(x)=x^2−9$, the roots are clearly 3 and −3. However, for other functions such as $f(x)=\cos(x)−x$, determining an analytic, or exact, solution for the roots can be difficult. For these cases, it is useful to generate numerical approximations of the roots of $f$ and understand the limitations in doing so.**Example**: Use the `fsolve` function from scipy to compute the root of $f(x)=\cos(x)−x$ near −2. Verify that the solution is a root (or close enough).
###Code
import numpy as np
from scipy import optimize
f = lambda x: np.cos(x) - x
r = optimize.fsolve(f, -2)
print("r =", r)
# Verify the solution is a root
result = f(r)
print("result=", result)
###Output
r = [0.73908513]
result= [0.]
###Markdown
**Example**: The function $f(x)=\frac{1}{x}$ has no root. Use the `fsolve` function to try to compute the root of $f(x)=\frac{1}{x}$. Turn on `full_output` to see what’s going on. Remember to check the documentation for details.
###Code
f = lambda x: 1/x
r, infodict, ier, mesg = optimize.fsolve(f, -2, full_output=True)
print("r =", r)
result = f(r)
print("result=", result)
print(mesg)
###Output
r = [-3.52047359e+83]
result= [-2.84052692e-84]
The number of calls to function has reached maxfev = 400.
###Markdown
We can see that the value r we got is not a root, even though f(r) is a very small number. Since we turned on full_output, we have more information: a message is returned if no solution is found, and the mesg details give the cause of failure - "The number of calls to function has reached $maxfev = 400$." Section 19.2: Tolerance **Tolerance** is the level of error that is acceptable for an engineering application. We say that a computer program has converged to a solution when it has found a solution with an error smaller than the tolerance. When computing roots numerically, or conducting any other kind of numerical analysis, it is important to establish both a metric for error and a tolerance that is suitable for a given engineering/science application.For computing roots, we want an $x_r$ such that $f(x_r)$ is very close to 0. Therefore $|f(x)|$ is a possible choice for the measure of error since the smaller it is, the closer we are to a root. Also if we assume that $x_i$ is the $i^{th}$ guess of an algorithm for finding a root, then $|x_{i+1}−x_i|$ is another possible choice for measuring error, since we expect the improvements between subsequent guesses to diminish as it approaches a solution. As will be demonstrated in the following examples, these different choices have their advantages and disadvantages.**Example**: Let error be measured by $e=|f(x)|$ and $tol$ be the acceptable level of error. The function $f(x)=x^2+tol/2$ has no real roots. However, $|f(0)|=tol/2$ and is therefore acceptable as a solution for a root finding program.Let error be measured by $e=|x_{i+1}−x_i|$ and $tol$ be the acceptable level of error. The function $f(x)=\frac{1}{x}$ has no real roots, but the guesses $x_i=−tol/4$ and $x_{i+1}=tol/4$ have an error of $e=tol/2$ and are an acceptable solution for a computer program.Based on these observations, the use of tolerance and converging criteria must be done very carefully and in the context of the program that uses them. Section 19.3: Bisection Method The **Intermediate Value Theorem** says that if $f(x)$ is a continuous function between $a$ and $b$, and $sign(f(a))≠sign(f(b))$, then there must be a $c$, such that $a<c<b$ and $f(c)=0$. This is illustrated in the following figure. The **bisection method** uses the intermediate value theorem iteratively to find roots. Let $f(x)$ be a continuous function, and $a$ and $b$ be real scalar values such that $a<b$. Assume, without loss of generality, that $f(a)>0$ and $f(b)<0$. Then by the intermediate value theorem, there must be a root on the open interval $(a,b)$. Now let $m=\frac{b+a}{2}$ be the midpoint between $a$ and $b$. If $f(m)=0$ or is close enough, then $m$ is a root. If $f(m)>0$, then $m$ is an improvement on the left bound, $a$, and there is guaranteed to be a root on the open interval $(m,b)$. If $f(m)<0$, then $m$ is an improvement on the right bound, $b$, and there is guaranteed to be a root on the open interval $(a,m)$. This scenario is depicted in the following figure.The process of updating $a$ and $b$ can be repeated until the error is acceptably low. **Example**: Program a function my_bisection(f, a, b, tol) that approximates a root $r$ of $f$, bounded by $a$ and $b$ to within $|f(\frac{a+b}{2})|<tol$.
###Code
import numpy as np
def my_bisection(f, a, b, tol):
# approximates a root, R, of f bounded
# by a and b to within tolerance
# | f(m) | < tol with m the midpoint
# between a and b Recursive implementation
# check if a and b bound a root
if np.sign(f(a)) == np.sign(f(b)):
raise Exception(
"The scalars a and b do not bound a root")
# get midpoint
m = (a + b)/2
if np.abs(f(m)) < tol:
# stopping condition, report m as root
return m
elif np.sign(f(a)) == np.sign(f(m)):
# case where m is an improvement on a.
# Make recursive call with a = m
return my_bisection(f, m, b, tol)
elif np.sign(f(b)) == np.sign(f(m)):
# case where m is an improvement on b.
# Make recursive call with b = m
return my_bisection(f, a, m, tol)
###Output
_____no_output_____
###Markdown
The $\sqrt{2}$ can be computed as the root of the function $f(x)=x^2−2$. Starting at $a=0$ and $b=2$, use `my_bisection` to approximate the $\sqrt{2}$ to a tolerance of $|f(x)|<0.1$ and $|f(x)|<0.01$. Verify that the results are close to a root by plugging the root back into the function.
###Code
f = lambda x: x**2 - 2
r1 = my_bisection(f, 0, 2, 0.1)
print("r1 =", r1)
r01 = my_bisection(f, 0, 2, 0.01)
print("r01 =", r01)
print("f(r1) =", f(r1))
print("f(r01) =", f(r01))
###Output
r1 = 1.4375
r01 = 1.4140625
f(r1) = 0.06640625
f(r01) = -0.00042724609375
###Markdown
Section 19.4: Newton-Raphson Method Let $f(x)$ be a smooth and continuous function and $x_r$ be an unknown root of $f(x)$. Now assume that $x_0$ is a guess for $x_r$. Unless $x_0$ is a very lucky guess, $f(x_0)$ will not be 0. Given this scenario, we want to find an $x_1$ that is an improvement on $x_0$ (i.e., closer to $x_r$ than $x_0$). If we assume that $x_0$ is "close enough" to $x_r$, then we can improve upon it by taking the linear approximation of $f(x)$ around $x_0$, which is a line, and finding the intersection of this line with the x-axis. Written out, the linear approximation of $f(x)$ around $x_0$ is $f(x)≈f(x_0)+f′(x_0)(x−x_0)$. Using this approximation, we find $x_1$ such that $f(x_1)=0$. Plugging these values into the linear approximation results in the equation$0=f(x_0)+f′(x_0)(x_1−x_0),$which when solved for $x_1$ is $x_1=x_0−\frac{f(x_0)}{f′(x_0)}.$An illustration of how this linear approximation improves an initial guess is shown in the following figure. Written generally, a Newton step computes an improved guess, $x_i$, using a previous guess $x_{i-1}$, and is given by the equation$x_i=x_{i-1}−\frac{g(x_{i-1})}{g′(x_{i-1})}.$The Newton-Raphson Method of finding roots iterates Newton steps from $x_0$ until the error is less than the tolerance.
###Code
import numpy as np
f = lambda x: x**2 - 2
f_prime = lambda x: 2*x
newton_raphson = 1.4 - (f(1.4))/(f_prime(1.4))
print("newton_raphson =", newton_raphson)
print("sqrt(2) =", np.sqrt(2))
###Output
newton_raphson = 1.4142857142857144
sqrt(2) = 1.4142135623730951
###Markdown
**Example**: Write a function `my_newton(f, df, x0, tol)`, where the output is an estimation of the root of $f$, $f$ is a function object computing $f(x)$, $df$ is a function object computing $f′(x)$, `x0` is an initial guess, and `tol` is the error tolerance. The error measurement should be $|f(x)|$.
###Code
def my_newton(f, df, x0, tol):
if abs(f(x0)) < tol:
return x0
else:
return my_newton(f, df, x0 - f(x0)/df(x0), tol)
###Output
_____no_output_____
###Markdown
Use `my_newton` to compute $\sqrt{2}$ to within a tolerance of 1e-6, starting at $x_0 = 1.5$.
###Code
estimate = my_newton(f, f_prime, 1.5, 1e-6)
print("estimate =", estimate)
print("sqrt(2) =", np.sqrt(2))
###Output
estimate = 1.4142135623746899
sqrt(2) = 1.4142135623730951
###Markdown
**Note**: If $x_0$ is close to $x_r$, then it can be proven that, in general, the Newton-Raphson method converges to $x_r$ much faster than the bisection method. However, since $x_r$ is initially unknown, there is no way to know if the initial guess is close enough to the root to get this behavior unless some special information about the function is known a priori (e.g., the function has a root close to $x=0$). In addition to this initialization problem, the Newton-Raphson method has other serious limitations. For example, if the derivative at a guess is close to 0, then the Newton step will be very large and probably lead far away from the root. Also, depending on the behavior of the function derivative between $x_0$ and $x_r$, the Newton-Raphson method may converge to a different root than $x_r$ that may not be useful for our engineering application (a short numerical illustration of this follows after this cell). **Example**: Write a function `my_fixed_point(f, g, tol, max_iter)`, where $f$ and $g$ are function objects and `tol` and `max_iter` are strictly positive scalars. The input argument, `max_iter`, is also an integer. The output argument, $X$, should be a scalar satisfying $|f(X)−g(X)|<tol$; that is, $X$ is a point that (almost) satisfies $f(X)=g(X)$. To find $X$, you should use the Bisection method with error metric, $|F(m)|<tol$. The function `my_fixed_point` should "give up" after `max_iter` iterations and return $X=[]$ if this occurs.
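A quick numerical sketch of the last limitation noted above (converging to a root other than the one nearest the initial guess): the polynomial $f(x)=x^3−100x^2−x+100$ has roots at $−1$, $1$, and $100$, yet starting from $x_0=0$ a single Newton step lands on the distant root at $100$ rather than the nearby root at $1$.

```python
f = lambda x: x**3 - 100*x**2 - x + 100
f_prime = lambda x: 3*x**2 - 200*x - 1
x0 = 0
x1 = x0 - f(x0)/f_prime(x0)
print(x1)  # 100.0 -- a root, but not the one closest to the initial guess
```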
###Code
def my_fixed_point(f,g,x0,tol,max_iter):
xn = x0
for n in range(0,max_iter):
fxn = f(xn)
if abs(fxn) < tol:
print('Solution found after',n,'iterations.')
return xn
gxn = g(xn)
if gxn == 0:
print('No derivative. No solution found.')
return None
xn = xn - fxn/gxn
print('Maximum iterations. No solution found.')
return None
f = lambda x: x**3 - x**2 - 1
g = lambda x: 3*x**2 - 2*x
my_fixed_point(f,g,1,1e-10,100)
f = lambda x: 5*x**3-4*x**2-3*x+9
g = lambda x: 15*x**2-8*x-3
my_fixed_point(f,g,1,1e-10,100)
###Output
Solution found after 6 iterations.
###Markdown
Section 19.5: Root Finding in Python Python has existing root-finding functions for us to use to make things easy. The function we will use to find the root is `fsolve` from `scipy.optimize`.**Example**: Compute the roots of the function $f(x)=5x^3−4x^2−3x+9$ using `fsolve`.
###Code
from scipy.optimize import fsolve
f = lambda x: 5*x**3-4*x**2-3*x+9
fsolve(f, [1, 1])
###Output
_____no_output_____ |
generating-functions.ipynb | ###Markdown
SageMath Demonstration: Generating functions of partitionsIn this notebook, we establish some techniques for computing generating functions of partitions using SageMath and verifying infinite product formulas up to a finite number of terms.See the [documentation](https://doc.sagemath.org/html/en/reference/combinat/sage/combinat/partition.html) of SageMath's `Partition` module for explanations of the methods relating to partitions used for calculations throughout this notebook.See the following resources for an introduction to the theory of ordinary generating functions and partitions:- Wikipedia [article](https://en.wikipedia.org/wiki/Generating_function) on generating functions- Mark Haiman's [notes](https://math.berkeley.edu/~mhaiman/math172-spring10/partitions.pdf) on partitions and their generating functionsFurther references:- Mike Zabrocki's [ebook](http://garsia.math.yorku.ca/~zabrocki/MMM1/MMM1Intro2OGFs.pdf) introduction to ordinary generating functions and accompanying [website](http://garsia.math.yorku.ca/~zabrocki/MMM1/)- Mike Zabrocki's [lecture notes](http://garsia.math.yorku.ca/~zabrocki/math4160f19/notes/ch4_generating_functions.pdf) on generating functions ConfigurationWe must first declare the power series ring $R = \mathbb{Z}[[q, t]]$ that contains our generating functions as elements.See the SageMath documentation on [Multivariate Power Series Rings] and [Multivariate Power Series]; and [Power Series Rings] and [Power Series] for more information.[Multivariate Power Series Rings]: https://doc.sagemath.org/html/en/reference/power_series/sage/rings/multi_power_series_ring.html[Multivariate Power Series]: https://doc.sagemath.org/html/en/reference/power_series/sage/rings/multi_power_series_ring_element.html[Power Series Rings]: https://doc.sagemath.org/html/en/reference/power_series/sage/rings/power_series_ring.html[Power Series]: https://doc.sagemath.org/html/en/reference/power_series/sage/rings/power_series_ring_element.html
###Code
# Precision variable to limit number of terms calculated in power series.
PREC = 20 # Feel free to take 100 or more
# Declare a power series ring R with integer coefficients over two variables t and q and precision as defined above
R.<t,q> = PowerSeriesRing(ZZ, default_prec=PREC)
###Output
_____no_output_____
###Markdown
General structure of generating functionsLet $\mathcal{P}$ denote the set of all integer partitions, and let $\mathcal{P}(n)$ denote the partitions of size $n$. Let $X$ be a subset of $\mathcal{P}$ and let $X(n) = X \cap \mathcal{P}(n)$.We wish to calculate the generating function $f(q,t)$ over partitions in the set $X$ where the summand is a generic term $T(q,t;\lambda)$ in two variables $q,t$ which depends on the partition $\lambda \in X$.$$f(q,t) = \sum_{\lambda \in X}T(q,t;\lambda)$$Let $\varphi_{X} : \mathcal{P} \to \{0,1\}$ be the characteristic function describing the subset $X \subset \mathcal{P}$,so that $\varphi_{X}(X) = \{1\}$, and $\varphi_{X}(\mathcal{P} \setminus X) = \{0\}$.The function implemented below calculates a finite approximation $f_{N}$ to the infinite power series/generating function $f$$$f_{N}(q,t) = \sum_{n=0}^{N}\sum_{\lambda \in X(n)}T(q,t;\lambda) = \sum_{n=0}^{N}\sum_{\lambda \in \mathcal{P}(n)}\varphi_{X}(\lambda)T(q,t;\lambda)$$given the summand expression function $T$, the maximum size $N$ of partitions which contribute to the partial sum, and the condition $\varphi_{X}$ describing membership of the set $X$.
###Code
def PartitionGenFunc(summand_expr=lambda p : 1, max_size=20, condition=lambda p : True):
r"""Returns a power series based on a sum over partitions p satisfying a condition, with configurable summand
With the default settings, this will count all partitions of size less than 20.
"""
return sum(sum(summand_expr(p) for p in Partitions(n) if condition(p)) for n in range(max_size))
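# Example usage (a sketch, assuming the power series ring R.<t,q> declared above):
#   PartitionGenFunc(summand_expr=lambda p: q^p.size() * t^len(p), max_size=10)
# reproduces the first terms of the size/length generating function computed directly below.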
###Output
_____no_output_____
###Markdown
Euler product formula for generating function of partition sizes and lengths In the code cell below we calculate the 2D (partial) generating function for partitions$$N \mapsto f_{N}(q,t) := \sum_{n = 0}^{N}\sum_{\lambda \in \mathcal{P}(n)}q^{\mathrm{size}(\lambda)}t^{\mathrm{length}(\lambda)} \tof(q,t) = \sum_{\lambda \in \mathcal{P}}q^{\mathrm{size}(\lambda)}t^{\mathrm{length}(\lambda)} \text{ as } N \to \infty$$up to the given precision $N = \verb|PREC|$ directly using Sage's `Partitions` class and methods.
###Code
partitions_genfunc = lambda max_size : sum(sum(q^n * t^len(p) for p in Partitions(n)) for n in range(max_size))
print(partitions_genfunc(PREC))
###Output
1 + t*q + t*q^2 + t^2*q^2 + t*q^3 + t^2*q^3 + t*q^4 + t^3*q^3 + 2*t^2*q^4 + t*q^5 + t^3*q^4 + 2*t^2*q^5 + t*q^6 + t^4*q^4 + 2*t^3*q^5 + 3*t^2*q^6 + t*q^7 + t^4*q^5 + 3*t^3*q^6 + 3*t^2*q^7 + t*q^8 + t^5*q^5 + 2*t^4*q^6 + 4*t^3*q^7 + 4*t^2*q^8 + t*q^9 + t^5*q^6 + 3*t^4*q^7 + 5*t^3*q^8 + 4*t^2*q^9 + t*q^10 + t^6*q^6 + 2*t^5*q^7 + 5*t^4*q^8 + 7*t^3*q^9 + 5*t^2*q^10 + t*q^11 + t^6*q^7 + 3*t^5*q^8 + 6*t^4*q^9 + 8*t^3*q^10 + 5*t^2*q^11 + t*q^12 + t^7*q^7 + 2*t^6*q^8 + 5*t^5*q^9 + 9*t^4*q^10 + 10*t^3*q^11 + 6*t^2*q^12 + t*q^13 + t^7*q^8 + 3*t^6*q^9 + 7*t^5*q^10 + 11*t^4*q^11 + 12*t^3*q^12 + 6*t^2*q^13 + t*q^14 + t^8*q^8 + 2*t^7*q^9 + 5*t^6*q^10 + 10*t^5*q^11 + 15*t^4*q^12 + 14*t^3*q^13 + 7*t^2*q^14 + t*q^15 + t^8*q^9 + 3*t^7*q^10 + 7*t^6*q^11 + 13*t^5*q^12 + 18*t^4*q^13 + 16*t^3*q^14 + 7*t^2*q^15 + t*q^16 + t^9*q^9 + 2*t^8*q^10 + 5*t^7*q^11 + 11*t^6*q^12 + 18*t^5*q^13 + 23*t^4*q^14 + 19*t^3*q^15 + 8*t^2*q^16 + t*q^17 + t^9*q^10 + 3*t^8*q^11 + 7*t^7*q^12 + 14*t^6*q^13 + 23*t^5*q^14 + 27*t^4*q^15 + 21*t^3*q^16 + 8*t^2*q^17 + t*q^18 + t^10*q^10 + 2*t^9*q^11 + 5*t^8*q^12 + 11*t^7*q^13 + 20*t^6*q^14 + 30*t^5*q^15 + 34*t^4*q^16 + 24*t^3*q^17 + 9*t^2*q^18 + t*q^19 + t^10*q^11 + 3*t^9*q^12 + 7*t^8*q^13 + 15*t^7*q^14 + 26*t^6*q^15 + 37*t^5*q^16 + 39*t^4*q^17 + 27*t^3*q^18 + 9*t^2*q^19 + t^11*q^11 + 2*t^10*q^12 + 5*t^9*q^13 + 11*t^8*q^14 + 21*t^7*q^15 + 35*t^6*q^16 + 47*t^5*q^17 + 47*t^4*q^18 + 30*t^3*q^19 + t^11*q^12 + 3*t^10*q^13 + 7*t^9*q^14 + 15*t^8*q^15 + 28*t^7*q^16 + 44*t^6*q^17 + 57*t^5*q^18 + 54*t^4*q^19 + t^12*q^12 + 2*t^11*q^13 + 5*t^10*q^14 + 11*t^9*q^15 + 22*t^8*q^16 + 38*t^7*q^17 + 58*t^6*q^18 + 70*t^5*q^19 + t^12*q^13 + 3*t^11*q^14 + 7*t^10*q^15 + 15*t^9*q^16 + 29*t^8*q^17 + 49*t^7*q^18 + 71*t^6*q^19 + t^13*q^13 + 2*t^12*q^14 + 5*t^11*q^15 + 11*t^10*q^16 + 22*t^9*q^17 + 40*t^8*q^18 + 65*t^7*q^19 + t^13*q^14 + 3*t^12*q^15 + 7*t^11*q^16 + 15*t^10*q^17 + 30*t^9*q^18 + 52*t^8*q^19 + t^14*q^14 + 2*t^13*q^15 + 5*t^12*q^16 + 11*t^11*q^17 + 22*t^10*q^18 + 41*t^9*q^19 + t^14*q^15 + 3*t^13*q^16 + 7*t^12*q^17 + 15*t^11*q^18 + 30*t^10*q^19 + t^15*q^15 + 2*t^14*q^16 + 5*t^13*q^17 + 11*t^12*q^18 + 22*t^11*q^19 + t^15*q^16 + 3*t^14*q^17 + 7*t^13*q^18 + 15*t^12*q^19 + t^16*q^16 + 2*t^15*q^17 + 5*t^14*q^18 + 11*t^13*q^19 + t^16*q^17 + 3*t^15*q^18 + 7*t^14*q^19 + t^17*q^17 + 2*t^16*q^18 + 5*t^15*q^19 + t^17*q^18 + 3*t^16*q^19 + t^18*q^18 + 2*t^17*q^19 + t^18*q^19 + t^19*q^19
###Markdown
In the code cell below we calculate the same generating function using the Euler product formula, where the factor for each part size $i \geq 1$ accounts for the number of parts of size $i$ in any given partition$$N \mapsto g_{N}(q,t) = \prod_{i=1}^{N}\frac{1}{1-tq^{i}} \to \prod_{i=1}^{\infty}\frac{1}{1-tq^{i}} = f(q,t) \text{ as } N \to \infty$$
###Code
eulerprod = lambda max_part_size : prod(1/(1 - t*q^i) for i in range(1,max_part_size))
print(eulerprod(PREC))
###Output
1 + t*q + t*q^2 + t^2*q^2 + t*q^3 + t^2*q^3 + t*q^4 + t^3*q^3 + 2*t^2*q^4 + t*q^5 + t^3*q^4 + 2*t^2*q^5 + t*q^6 + t^4*q^4 + 2*t^3*q^5 + 3*t^2*q^6 + t*q^7 + t^4*q^5 + 3*t^3*q^6 + 3*t^2*q^7 + t*q^8 + t^5*q^5 + 2*t^4*q^6 + 4*t^3*q^7 + 4*t^2*q^8 + t*q^9 + t^5*q^6 + 3*t^4*q^7 + 5*t^3*q^8 + 4*t^2*q^9 + t*q^10 + t^6*q^6 + 2*t^5*q^7 + 5*t^4*q^8 + 7*t^3*q^9 + 5*t^2*q^10 + t*q^11 + t^6*q^7 + 3*t^5*q^8 + 6*t^4*q^9 + 8*t^3*q^10 + 5*t^2*q^11 + t*q^12 + t^7*q^7 + 2*t^6*q^8 + 5*t^5*q^9 + 9*t^4*q^10 + 10*t^3*q^11 + 6*t^2*q^12 + t*q^13 + t^7*q^8 + 3*t^6*q^9 + 7*t^5*q^10 + 11*t^4*q^11 + 12*t^3*q^12 + 6*t^2*q^13 + t*q^14 + t^8*q^8 + 2*t^7*q^9 + 5*t^6*q^10 + 10*t^5*q^11 + 15*t^4*q^12 + 14*t^3*q^13 + 7*t^2*q^14 + t*q^15 + t^8*q^9 + 3*t^7*q^10 + 7*t^6*q^11 + 13*t^5*q^12 + 18*t^4*q^13 + 16*t^3*q^14 + 7*t^2*q^15 + t*q^16 + t^9*q^9 + 2*t^8*q^10 + 5*t^7*q^11 + 11*t^6*q^12 + 18*t^5*q^13 + 23*t^4*q^14 + 19*t^3*q^15 + 8*t^2*q^16 + t*q^17 + t^9*q^10 + 3*t^8*q^11 + 7*t^7*q^12 + 14*t^6*q^13 + 23*t^5*q^14 + 27*t^4*q^15 + 21*t^3*q^16 + 8*t^2*q^17 + t*q^18 + O(t, q)^20
###Markdown
Now we verify that $f_{N}$ and $g_{N}$ agree for all terms of total degree up to `PREC`
###Code
# Compare these two power series up to the specified precision
print(partitions_genfunc(PREC) == eulerprod(PREC))
###Output
True
###Markdown
Core partition generating functions We introduce a function on partitions called the $(k,r)$-weighted hook count$$\lambda \mapsto |\{\square \in \lambda : l(\square) + k(a(\square) + 1) \equiv 0 \mod r\}| =: \mathrm{WHC_{k,r}}(\lambda)$$which we calculate using the [upper hook](https://doc.sagemath.org/html/en/reference/combinat/sage/combinat/partition.htmlsage.combinat.partition.Partition.upper_hook) method of SageMath's `Partition` class.[Click here](https://edwardmpearce.github.io/tutorial-partitions/intro/visualization/table-of-functions) for a description (with illustrations) of various functions on cells in a partition.
###Code
def WeightedHookCount(p,k,r):
"""For a partition p and parameters k and r, we count the number of cells c in p for which leg(c) + k * (arm(c) + 1) = 0 (mod r)"""
return sum(p.upper_hook(c[0],c[1], k) % r == 0 for c in p.cells())
###Output
_____no_output_____
###Markdown
In the code cell below we calculate a 2D (partial) generating function over partitions where the $q$ exponent indicates partition size and the $t$ component indicates the $(k,r)$-weighted hook count$$(k,r,N) \mapsto \sum_{n = 0}^{N}\sum_{\lambda \in \mathcal{P}(n)}q^{\mathrm{size}(\lambda)}t^{\mathrm{WHC_{k,r}}(\lambda)}$$
###Code
# 2D generating function for partitions (up to given precision) where q exponent indicates partition size
# and t component indicates the (k,r)-weighted hook count |{c in p : leg(c) + k * (arm(c) + 1) = 0 (mod r)}|
WHC_genfunc = lambda k, r, max_size : sum(sum(q^n * t^WeightedHookCount(p,k,r) for p in Partitions(n)) for n in range(max_size))
print(WHC_genfunc(-1, 5, PREC))
###Output
1 + q + q^2 + t*q^2 + 2*q^3 + t*q^3 + 2*q^4 + 2*t*q^4 + 2*q^5 + t^2*q^4 + 4*t*q^5 + 3*q^6 + t^2*q^5 + 5*t*q^6 + 3*q^7 + 2*t^2*q^6 + 6*t*q^7 + 3*q^8 + t^3*q^6 + 5*t^2*q^7 + 9*t*q^8 + 4*q^9 + t^3*q^7 + 7*t^2*q^8 + 10*t*q^9 + 4*q^10 + 2*t^3*q^8 + 10*t^2*q^9 + 11*t*q^10 + 4*q^11 + t^4*q^8 + 5*t^3*q^9 + 17*t^2*q^10 + 14*t*q^11 + 5*q^12 + t^4*q^9 + 7*t^3*q^10 + 21*t^2*q^11 + 15*t*q^12 + 5*q^13 + 2*t^4*q^10 + 11*t^3*q^11 + 26*t^2*q^12 + 16*t*q^13 + 5*q^14 + t^5*q^10 + 5*t^4*q^11 + 21*t^3*q^12 + 35*t^2*q^13 + 19*t*q^14 + 6*q^15 + t^5*q^11 + 7*t^4*q^12 + 28*t^3*q^13 + 40*t^2*q^14 + 20*t*q^15 + 6*q^16 + 2*t^5*q^12 + 11*t^4*q^13 + 39*t^3*q^14 + 45*t^2*q^15 + 21*t*q^16 + 6*q^17 + t^6*q^12 + 5*t^5*q^13 + 22*t^4*q^14 + 58*t^3*q^15 + 55*t^2*q^16 + 24*t*q^17 + 7*q^18 + t^6*q^13 + 7*t^5*q^14 + 30*t^4*q^15 + 72*t^3*q^16 + 60*t^2*q^17 + 25*t*q^18 + 7*q^19 + 2*t^6*q^14 + 11*t^5*q^15 + 45*t^4*q^16 + 88*t^3*q^17 + 65*t^2*q^18 + 26*t*q^19 + t^7*q^14 + 5*t^6*q^15 + 22*t^5*q^16 + 72*t^4*q^17 + 114*t^3*q^18 + 75*t^2*q^19 + t^7*q^15 + 7*t^6*q^16 + 30*t^5*q^17 + 96*t^4*q^18 + 131*t^3*q^19 + 2*t^7*q^16 + 11*t^6*q^17 + 46*t^5*q^18 + 128*t^4*q^19 + t^8*q^16 + 5*t^7*q^17 + 22*t^6*q^18 + 76*t^5*q^19 + t^8*q^17 + 7*t^7*q^18 + 30*t^6*q^19 + 2*t^8*q^18 + 11*t^7*q^19 + t^9*q^18 + 5*t^8*q^19 + t^9*q^19
###Markdown
Define a (k,r)-core partition to be a partition $\lambda$ which has (k,r)-weighted hook count equal to zero.That is, $\mathrm{WHC_{k,r}}(\lambda) = |\{\square \in \lambda : l(\square) + k(a(\square) + 1) \equiv 0 \mod r\}| = 0$.The code below calculates the generating function for $(k,r)$-core partitions up to a given size $N$.$$(k,r,N) \mapsto \sum_{n = 0}^{N}|\mathcal{C}_{k,r}(n)|q^{n}$$
###Code
# Generating function for the number of (k,r)-core partitions of a given size (i.e. partitions which have (k,r)-weighted hook count equal to zero)
krcore_genfunc = lambda k, r, max_size : sum(sum(q^n for p in Partitions(n) if WeightedHookCount(p,k,r) == 0) for n in range(max_size))
###Output
_____no_output_____
###Markdown
Below we examine the $(k,r)$-core generating functions for particular choices of the parameters $(k,r)$.In particular, we observe evidence that $$\sum_{n = 0}^{\infty}|\mathcal{C}_{1,2}(n)|q^{n} = \sum_{k = 0}^{\infty}q^{k(k+1)/2} = 1 + q + q^{3} + q^{6} + q^{10} + q^{15} + \ldots$$$$\sum_{n = 0}^{\infty}|\mathcal{C}_{2,3}(n)|q^{n} = \frac{1}{1-q} = \sum_{n = 0}^{\infty}q^{n} = 1 + q + q^{2} + q^{3} + q^{4} + q^{5} + \ldots$$$$\sum_{n = 0}^{\infty}|\mathcal{C}_{2,5}(n)|q^{n} = \frac{1}{1-q}\ \cdot \frac{1}{1-q^{2}} = (1+q)\sum_{n = 0}^{\infty}(n+1)q^{2n} = 1 + q + 2q^{2} + 2q^{3} + 3q^{4} + 3q^{5} + \ldots$$
###Code
twocores_genfunc = krcore_genfunc(1,2, PREC)
twocores_genfunc
# Calculate the generating function for (2,3)-core partitions up to the specified precision
twothreecores_genfunc = krcore_genfunc(2,3, PREC)
# Examine our particular generating function and notice a pattern in the terms
print(twothreecores_genfunc)
# Experimentally check that the generating function is equal to \sum_{i=0}^{\infty}q^{i} = 1/(1-q)
print(twothreecores_genfunc^-1)
print(twothreecores_genfunc - sum(q^n for n in range(PREC)))
print(twothreecores_genfunc * (1-q)) # Equal to 1 up to precision
# Calculate the generating function for (2,5)-core partitions up to the specified precision
twofivecores_genfunc = krcore_genfunc(2,5, PREC)
# Examine our particular generating function and notice a pattern in the terms
print(twofivecores_genfunc)
# Experimentally check that the generating function is equal to (1 + q)\sum_{i=0}^{\infty}(i+1)q^{2i} = (1 + q)/(1 - q^2)^2 = 1/((1-q)(1-q^2))
print(twofivecores_genfunc^-1)
print(twofivecores_genfunc - (1+q) * sum((i+1)*q^(2*i) for i in range(PREC // 2)))
print(twofivecores_genfunc * (1-q)*(1-q^2)) # Equal to 1 up to precision
###Output
1 + q + 2*q^2 + 2*q^3 + 3*q^4 + 3*q^5 + 4*q^6 + 4*q^7 + 5*q^8 + 5*q^9 + 6*q^10 + 6*q^11 + 7*q^12 + 7*q^13 + 8*q^14 + 8*q^15 + 9*q^16 + 9*q^17 + 10*q^18 + 10*q^19
1 - q - q^2 + q^3 + O(t, q)^20
0
1 - 11*q^20 + 10*q^22
###Markdown
Other modified hook residuesThe following series of calculations are based on the question posed [here](https://ptwiddle.github.io/Partitions-Lab/LaTeX/Introduction.pdf).For $k=0,1,2$ we calculate an initial number of terms for the generating functions$$\mathcal{G}_{k}(q,t) := \sum_{\lambda \in \mathcal{P}}q^{|\lambda|}t^{d_{k}(\lambda)}$$where $d_{k}(\lambda) = |\{\square \in \lambda : l(\square) - a(\square) - 1 \equiv k - 1 \mod 3\}|$Note that $\mathcal{G}_{1} = \mathcal{G}_{2}$ because $d_{-k}(\lambda) = d_{k}(\mathrm{conjugate}(\lambda))$ where the index $k$ is taken modulo $3$, and conjugation is an involution (in particular, a bijection) on the set $\mathcal{P}$ of partitions.
###Code
G0 = 0; G1 = 0; G2 = 0
for n in range(30):
for p in Partitions(n):
t_counts = [0, 0, 0]
for c in p.cells():
t_counts[p.upper_hook(c[0],c[1], -1) % 3] += 1
G1 += q^n * t^t_counts[0]
G2 += q^n * t^t_counts[1]
G0 += q^n * t^t_counts[2]
print(G1)
print(G1 == G2)
print(G0)
print(G1^-1)
G1 - (1+3*t*q^4)/(1-q)
###Output
_____no_output_____ |
Notebooks_DataScience/Demo40_Plotting_Matplotlib.ipynb | ###Markdown
MATPLOTLIB FOR VISUALIZATIONWelcome to the first exciting colorful data visualization tutorial. Matplotlib is the default library most data scientists use for plotting. It is an excellent tool and a must-know for any Python-proficient data scientist.- Python Basics- Object Oriented Python- Python for Data Science- NumPy- Pandas- **Plotting** - **Matplotlib** - Seaborn Let's get visualizing!
###Code
import matplotlib.pyplot as plt
import math
%matplotlib inline
import numpy as np
x = np.linspace(-1,1,100)
y = x*x
plt.plot(x,y)
plt.show()
plt.plot(x,y,'k')
plt.ylabel('y label')
plt.xlabel('x label')
plt.title('Title')
plt.show()
###Output
_____no_output_____
###Markdown
MULTIPLE PLOTS
###Code
x = np.linspace(-10,10,100)
y = []
for i in x:
y.append(math.sin(i))
plt.subplot(1,3,1)
plt.plot(x,x*x, 'r')
plt.subplot(1,3,2)
plt.plot(x,x*x*x, 'k')
plt.subplot(1,3,3)
plt.plot(x,y,'b')
###Output
_____no_output_____
###Markdown
OBJECT ORIENTED PLOTTINGSo remember the classes and attributes and objects and instances and all that jazz?This is where we use it for the very first time. Object oriented plotting is one of the most important features matplotlib has to offer. we instantiate a figure which is basically a plot, then we graph on it by calling different methods on it. Sounds complicated Mahnoor, let's break it down
###Code
figure = plt.figure()
left, bottom, width, height = 0, 0, 0.4, 0.9
ax = figure.add_axes([left, bottom, width, height])
ax.plot(x,y)
ax.set_xlabel("X label")
ax.set_ylabel("Y label")
ax.set_title("Title")
ax.set_xlim([-10,10])
ax.set_ylim([-1,1])
figure = plt.figure()
left, bottom, width, height = 0, 0, 0.9, 0.9
left1, bottom1, width1, height1 = 0.3, 0.5, 0.2, 0.3
ax = figure.add_axes([left, bottom, width, height])
ax1 = figure.add_axes([left1, bottom1, width1, height1])
ax.plot(x,y)
ax1.plot(x,y)
###Output
_____no_output_____
###Markdown
WHAT. JUST. HAPPENEDIf you're wondering what just happened. You. are. not. alone. It took me a while too. So here's how I visualize the above code:- **paper = plt.figure()** - creates a canvas. Think of it as the paper you're drawing the graph on.- **box = paper.add_axes()** - basically creates the box in which you want to draw your graph.- **box.plot()** - creates the plot !
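Putting the metaphor into a single sketch (variable names chosen to match the bullets above; not part of the original demo):

```python
paper = plt.figure()                          # the canvas / sheet of paper
box = paper.add_axes([0.1, 0.1, 0.8, 0.8])    # [left, bottom, width, height] as fractions of the canvas
box.plot(x, y)                                # draw the graph inside the box
```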
###Code
figure,axes = plt.subplots(nrows=1, ncols=5)
plt.tight_layout()
figure,axes = plt.subplots(nrows=1, ncols=2)
for ax in axes:
ax.plot(x,y)
plt.tight_layout()
axes[0].set_title('Title graph 0')
axes[1].set_title('Title graph 1')
# [8.0, 6.0] is the default figsize
figure = plt.figure(figsize=(3,4))
ax = figure.add_axes([0,0,1,1])
ax.plot(x,y)
fig, axes = plt.subplots(nrows=3, ncols=1,figsize=(10,4))
axes[1].plot(x,x*x*x, label="x*x*x")
axes[1].plot(x,x*x, label="x*x")
plt.tight_layout()
axes[1].legend()
fig.savefig("Figure1", dpi=300)
###Output
_____no_output_____
###Markdown
CUSTOMIZE THE APPEARANCE
###Code
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y,color='purple')#Can use RGB hex codes
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y,color='purple', linewidth= 10, alpha=.4)
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y,color='purple', linestyle='-.', marker='o',markersize=10)
###Output
_____no_output_____ |
6 复试/2 笔试/9 机器学习/统计学习方法课件/统计学习方法-代码/第1章 统计学习方法概论(LeastSquaresMethod)/least_sqaure_method.ipynb | ###Markdown
Original code author: https://github.com/wzyonggege/statistical-learning-method Chinese annotations by: Machine Learning for Beginners (WeChat official account ID: ai-start-com) Environment: python 3.6; all code has been tested. Chapter 1: Introduction to Statistical Learning Methods In 1823, under the assumption that the errors $e_1, \ldots, e_n$ are independent and identically distributed, Gauss proved an optimality property of the least squares method: among all unbiased linear estimators, the least squares estimator is the one with the smallest variance! Fitting a curve with the least squares method: for data $(x_i, y_i)(i=1, 2, 3...,m)$, the fitted function $h(x)$ has errors, i.e. residuals: $r_i=h(x_i)-y_i$. When the L2 norm (the residual sum of squares) is smallest, $h(x)$ and $y$ are most similar, i.e. the fit is best. In general $H(x)$ is a polynomial of degree $n$, $H(x)=w_0+w_1x+w_2x^2+...+w_nx^n$, with parameters $w(w_0,w_1,w_2,...,w_n)$. The least squares method finds the set of parameters $w(w_0,w_1,w_2,...,w_n)$ that makes $\sum_{i=1}^n(h(x_i)-y_i)^2$ (the residual sum of squares) smallest, i.e. it solves $\min\sum_{i=1}^n(h(x_i)-y_i)^2$---- Example: we take the target function $y=\sin 2{\pi}x$, add normally distributed noise, and fit it with a polynomial [Example 1.1, page 11]
###Code
import numpy as np
import scipy as sp
from scipy.optimize import leastsq
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
*ps: numpy.poly1d([1,2,3]) generates $1x^2+2x^1+3x^0$*
###Code
# target function
def real_func(x):
return np.sin(2*np.pi*x)
# polynomial to fit
def fit_func(p, x):
f = np.poly1d(p)
return f(x)
# residuals
def residuals_func(p, x, y):
ret = fit_func(p, x) - y
return ret
# ten sample points
x = np.linspace(0, 1, 10)
x_points = np.linspace(0, 1, 1000)
# values of the target function with normally distributed noise added
y_ = real_func(x)
y = [np.random.normal(0, 0.1)+y1 for y1 in y_]
def fitting(M=0):
"""
    M is the degree of the polynomial
"""
    # randomly initialize the polynomial parameters
p_init = np.random.rand(M+1)
    # least squares fit
p_lsq = leastsq(residuals_func, p_init, args=(x, y))
print('Fitting Parameters:', p_lsq[0])
    # visualization
plt.plot(x_points, real_func(x_points), label='real')
plt.plot(x_points, fit_func(p_lsq[0], x_points), label='fitted curve')
plt.plot(x, y, 'bo', label='noise')
plt.legend()
return p_lsq
# M=0
p_lsq_0 = fitting(M=0)
# M=1
p_lsq_1 = fitting(M=1)
# M=3
p_lsq_3 = fitting(M=3)
# M=9
p_lsq_9 = fitting(M=9)
###Output
Fitting Parameters: [-7.35300865e+03 3.20446626e+04 -5.87661832e+04 5.89723258e+04
-3.52349521e+04 1.27636926e+04 -2.70301291e+03 2.80321069e+02
-3.97563291e+00 -2.00783231e-02]
###Markdown
When M=9, the polynomial curve passes through every data point, but this causes overfitting. Regularization The results show overfitting, so we introduce a regularization term (regularizer) to reduce it: $Q(x)=\sum_{i=1}^n(h(x_i)-y_i)^2+\lambda||w||^2$. In regression problems the loss function is the squared loss, and the regularization term can be either the L2 norm or the L1 norm of the parameter vector.- L1: regularization\*abs(p)- L2: 0.5 \* regularization \* np.square(p)
###Code
regularization = 0.0001
def residuals_func_regularization(p, x, y):
ret = fit_func(p, x) - y
    ret = np.append(ret, np.sqrt(0.5*regularization*np.square(p))) # the L2 norm as the regularization term
return ret
# least squares with a regularization term
p_init = np.random.rand(9+1)
p_lsq_regularization = leastsq(residuals_func_regularization, p_init, args=(x, y))
plt.plot(x_points, real_func(x_points), label='real')
plt.plot(x_points, fit_func(p_lsq_9[0], x_points), label='fitted curve')
plt.plot(x_points, fit_func(p_lsq_regularization[0], x_points), label='regularization')
plt.plot(x, y, 'bo', label='noise')
plt.legend()
###Output
_____no_output_____ |
magnolia/sandbox/notebooks/source-separation/pit/PitCnn_evaluation.ipynb | ###Markdown
PIT-S-CNN BSS Eval example notebookThis notebook contains an example of computing SDR, SIR, and SAR improvements on signals separated using Lab41's model.
###Code
# Generic imports
import sys
import time
import numpy as np
import tensorflow as tf
# Plotting imports
import IPython
from IPython.display import Audio
from matplotlib import pyplot as plt
fig_size = [0,0]
fig_size[0] = 8
fig_size[1] = 4
plt.rcParams["figure.figsize"] = fig_size
# Import Lab41's separation model
from magnolia.dnnseparate.pit import PITModel
# Import utilities for using the model
from magnolia.iterate.hdf5_iterator import SplitsIterator
from magnolia.iterate.supervised_iterator import SupervisedMixer
from magnolia.utils.clustering_utils import clustering_separate, preprocess_signal
from magnolia.iterate.mixer import FeatureMixer
from magnolia.features.spectral_features import istft, scale_spectrogram
from magnolia.utils.postprocessing import reconstruct
from magnolia.features.preprocessing import undo_preemphasis
from magnolia.utils.bss_eval import bss_eval_sources
###Output
_____no_output_____
###Markdown
Paths
###Code
libritest = "** Path to librispeech test hdf5 **"
model_path = "** Path to model checkpoint **"
libritrain = "** path to LibriSpeech train hdf5 **"
female_speakers = '** path to list of female speakers in train set (available in repo) **'
male_speakers = '** path to list of male speakers in train set (in repo) **'
female_speakers_test = 'data/librispeech/authors/test-clean-F.txt'
male_speakers_test = 'data/librispeech/authors/test-clean-M.txt'
###Output
_____no_output_____
###Markdown
Hyperparameters fft_size : Number of samples in the fft window overlap : Amount of overlap in the fft windows sample_rate : Number of samples per second in the input signals numsources : Number of sources datashape : (Number of time steps, number of frequency bins) preemp_coef : Preemphasis coefficient
###Code
fft_size = 512
overlap = 0.0256
sample_rate = 10000
numsources = 2
datashape = (51, fft_size//2 + 1)
preemp_coef = 0.95
###Output
_____no_output_____
###Markdown
Initialize and load an instance of Lab41's source separation model
###Code
tf.reset_default_graph()
model = PITModel(method='pit-s-cnn', num_steps=datashape[0], num_freq_bins=datashape[1], num_srcs=numsources)
config = tf.ConfigProto()
config.allow_soft_placement = True
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
model.load(model_path, sess)
###Output
_____no_output_____
###Markdown
Define some helper functions for evaluating BSS metrics
###Code
def bss_eval_sample(mixer, num_sources):
"""
Function to generate a sample from mixer and evaluate BSS metrics on it
"""
# Generate a sample
data = next(mixer)
# Get the waveforms for the mixed signal and the true sources
mixes = [reconstruct(data[0], data[0], sample_rate, None, overlap, preemphasis=preemp_coef)] * num_sources
sources = [reconstruct(src, src, sample_rate, None, overlap, preemphasis=preemp_coef) for metadata, src in data[1:]]
# Stack the input mix and the true sources into arrays
input_mix = np.stack(mixes)
reference_sources = np.stack(sources)
# Use the model to separate the signal into the desired number of sources
spec = data[0]
spec_mag, spec_phase = scale_spectrogram(spec)
sources_spec = model.separate(spec_mag, sess)
estimated_sources = np.stack([reconstruct(x, spec, sample_rate, None, overlap,
square=True, preemphasis=preemp_coef) for x in sources_spec])
# Compute the SDR, SIR, SAR of the input mixes
do_nothing = bss_eval_sources(reference_sources, input_mix)
# Compute the SDR, SIR, SAR of the separated sources
do_something = bss_eval_sources(reference_sources, estimated_sources)
# Compute the SDR, SIR, SAR improvement due to separation
sdr = do_something[0] - do_nothing[0]
sir = do_something[1] - do_nothing[1]
sar = do_something[2] - do_nothing[2]
return {'SDR': sdr, 'SIR': sir, 'SAR': sar}
###Output
_____no_output_____
###Markdown
Evaluation of in set BSS metricsThis section shows the evaluation of SDR, SIR, and SAR on mixtures of speakers that are in the training set. Get the speaker keys corresponding to F and M speakers in the training set
###Code
with open(female_speakers,'r') as speakers:
keys = speakers.read().splitlines()
speaker_keys = keys[:]
in_set_F = keys[:]
with open(male_speakers,'r') as speakers:
keys = speakers.read().splitlines()
speaker_keys += keys
in_set_M = keys[:]
###Output
_____no_output_____
###Markdown
Create mixers for in set FF, FM, MM, and all speaker mixes.The splits used in creating each SplitsIterator should be the same as the ones used in training the model.
###Code
# Create an iterator over the male speakers in set and set the active split to the test split
maleiter = SplitsIterator([0.8,0.1,0.1], libritrain, speaker_keys=in_set_M, shape=(150,fft_size//2+1), return_key=True)
maleiter.set_split(2)
# Create an iterator over the female speakers in set and set the active split to the test split
femaleiter = SplitsIterator([0.8,0.1,0.1], libritrain, speaker_keys=in_set_F, shape=(150,fft_size//2+1), return_key=True)
femaleiter.set_split(2)
# Create mixers for each type of possible speaker mixes
MMmixer = SupervisedMixer([maleiter,maleiter], shape=(150,fft_size//2+1),
mix_method='add', diffseed=True)
FFmixer = SupervisedMixer([femaleiter,femaleiter], shape=(150,fft_size//2+1),
mix_method='add', diffseed=True)
MFmixer = SupervisedMixer([maleiter,femaleiter], shape=(150,fft_size//2+1),
mix_method='add', diffseed=True)
FMmixer = SupervisedMixer([femaleiter,maleiter], shape=(150,fft_size//2+1),
mix_method='add', diffseed=True)
mixers = [MMmixer, FFmixer, MFmixer, FMmixer]
# Some bookkeeping in preparation for evaluating on samples from the mixers
mixerdesc = ['MM','FF','MF','FM']
mixersSDR = [[],[],[],[]]
mixersSIR = [[],[],[],[]]
mixersSAR = [[],[],[],[]]
i=0
###Output
_____no_output_____
###Markdown
Evaluate BSS metrics on 500 samples from each mixer
###Code
# Number of samples to evaluate
num_samples = 500
# Resume from the last completed sample index if this cell is re-run after an interruption
try:
    starti = i
except NameError:
    starti = 0
# Iterate over samples, computing BSS metrics for samples from each mixer
for i in range(starti, num_samples):
for j,mixer in enumerate(mixers):
# Compute SDR, SIR, SAR for this mixer
evals = bss_eval_sample(mixer, 2)
# Store the results
mixersSDR[j].append( 1/(2)*(evals['SDR'][0] + evals['SDR'][1]) )
mixersSIR[j].append( 1/(2)*(evals['SIR'][0] + evals['SIR'][1]) )
mixersSAR[j].append( 1/(2)*(evals['SAR'][0] + evals['SAR'][1]) )
# Compute the mean SDR, SIR, SAR
MMSDR = np.mean(mixersSDR[0])
FFSDR = np.mean(mixersSDR[1])
MFSDR = np.mean(mixersSDR[2])
FMSDR = np.mean(mixersSDR[3])
# Clear the display and show the progress so far
IPython.display.clear_output(wait=True)
print(str(i)+':' +
' MM: ' + str(MMSDR) +
', FF: ' + str(FFSDR) +
', MF: ' + str((MFSDR+FMSDR)/2) +
', All: '+ str((MMSDR+FMSDR+MFSDR+FFSDR)/4))
###Output
_____no_output_____
###Markdown
Evaluation of out-of-set BSS metrics. This section shows the evaluation of SDR, SIR, and SAR on mixtures of speakers that were not in the training set. First, get the speaker keys for the female (F) and male (M) speakers from the test set.
###Code
with open(female_speakers_test,'r') as speakers:
out_set_F = speakers.read().splitlines()
with open(male_speakers_test,'r') as speakers:
out_set_M = speakers.read().splitlines()
all_speakers = out_set_F + out_set_M
###Output
_____no_output_____
###Markdown
Create mixers for out-of-set FF, FM, MM, and all-speaker mixes.
###Code
# Make an iterator over female speakers
Fiterator = SplitsIterator([1], libritest, speaker_keys=out_set_F, shape=datashape, return_key=True)
Fiterator.set_split(0)
# Make an iterator over male speakers
Miterator = SplitsIterator([1], libritest, speaker_keys=out_set_M, shape=datashape, return_key=True)
Miterator.set_split(0)
# Make an iterator over all speakers
Aiterator = SplitsIterator([1], libritest, speaker_keys=all_speakers, shape=datashape, return_key=True)
# Create mixers for each combination of speakers
outsetFFmixer = SupervisedMixer([Fiterator,Fiterator], shape=datashape,
mix_method='add', diffseed=True)
outsetFMmixer = SupervisedMixer([Fiterator,Miterator], shape=datashape,
mix_method='add', diffseed=True)
outsetMMmixer = SupervisedMixer([Miterator,Miterator], shape=datashape,
mix_method='add', diffseed=True)
outsetAAmixer = SupervisedMixer([Aiterator,Aiterator], shape=datashape,
mix_method='add', diffseed=True)
###Output
_____no_output_____ |
ZebraKet/archive/profit-optimization.ipynb | ###Markdown
Whatever constants we want
###Code
budget = 10000
###Output
_____no_output_____
###Markdown
First let's read in our data. We can start off with the small data set
###Code
profit, cost = read_profit_optimization_data(standard_mock_data['small'])
# TODO: Somehow visualize the data
###Output
_____no_output_____
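###Markdown
One simple way to tackle the visualization TODO above (a sketch only; it assumes `profit` and `cost` are equal-length sequences of per-item values, which is how they are used in the rest of the notebook):
###Code
# Sketch: scatter of per-item cost against per-item profit for the small mock dataset.
import matplotlib.pyplot as plt

plt.figure(figsize=(6, 4))
plt.scatter(cost, profit)
plt.xlabel('Cost per item')
plt.ylabel('Profit per item')
plt.title('Small mock dataset: cost vs. profit')
plt.show()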
###Markdown
Next we try the classical solution. Note that we have a discrete method (not very applicable to the real world) and a binary method (more applicable); the discrete method is kept just for completeness. We start with the binary optimizer.
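To give a feel for what a binary (buy-or-don't-buy) optimizer does here, a minimal greedy sketch is shown first. This is only an illustration and is not the repo's `binary_profit_optimizer`, which is called right after; it assumes `profit` and `cost` are equal-length sequences of strictly positive per-item values.
###Code
# Hypothetical greedy sketch of a 0/1 profit optimizer under a budget
# (for illustration only; the repo's binary_profit_optimizer is used below).
def greedy_binary_optimizer(profit, cost, budget):
    # Consider items in order of decreasing profit-to-cost ratio.
    order = sorted(range(len(profit)), key=lambda i: profit[i] / cost[i], reverse=True)
    chosen, total_cost, total_profit = [], 0.0, 0.0
    for i in order:
        if total_cost + cost[i] <= budget:  # each item can be bought at most once
            chosen.append(i)
            total_cost += cost[i]
            total_profit += profit[i]
    return tuple(sorted(chosen)), total_cost, total_profit

print(greedy_binary_optimizer(profit, cost, budget))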
###Code
binary_solution, binary_cost, binary_profit = binary_profit_optimizer(profit=profit, cost=cost, budget=budget)
print('Found binary (crude) profit optimization solution', binary_solution, binary_cost, binary_profit)
###Output
Found binary (crude) profit optimization solution (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19) 356.73150914723203 499.42411280612487
###Markdown
Next we use the discrete solution
###Code
discrete_profit = discrete_profit_optimizer(profit=profit, cost=cost, budget=budget)
print('Found discrete (crude) profit optimization solution', discrete_profit)
# TODO: Somehow visualize the result and make it comparable to the next steps
# TODO: fix this function to make it yield appropriate data (i.e. solution set, cost)
###Output
Found discrete (crude) profit optimization solution 14130.2
###Markdown
Great! Now let's use the QUBO formulation. We start with simulated annealing.
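To make the QUBO idea concrete before calling the repo's `ProfitQubo`, here is a minimal, self-contained sketch (illustrative only, and not the repo's formulation). It maximizes profit on a tiny hand-made instance with a simplified quadratic penalty that pushes total cost towards the budget; a faithful `cost <= budget` inequality would need slack variables, which is presumably what the extra y variables in the solution sets below correspond to.
###Code
# Illustrative toy QUBO for profit maximization under a budget (not the repo's ProfitQubo).
# A real "cost <= budget" constraint needs slack variables; here we only penalize
# (total cost - budget)^2 to keep the sketch short.
import dimod

toy_profit = [10.0, 7.0, 4.0]
toy_cost = [6.0, 5.0, 3.0]
toy_budget = 8.0
lam = 5.0  # penalty strength; chosen so the feasible optimum wins on this toy instance

Q = {}
n_items = len(toy_profit)
for i in range(n_items):
    # Linear part: -profit (we minimize energy) plus the diagonal terms of the penalty.
    Q[(i, i)] = -toy_profit[i] + lam * (toy_cost[i] ** 2 - 2 * toy_budget * toy_cost[i])
    for j in range(i + 1, n_items):
        # Off-diagonal penalty terms from expanding (sum_i cost_i * x_i - budget)^2.
        Q[(i, j)] = 2 * lam * toy_cost[i] * toy_cost[j]

best = dimod.ExactSolver().sample_qubo(Q).first
print(best.sample, best.energy)  # expect items 1 and 2 (cost 8, profit 11) for this toy data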
###Code
# TODO make the ProfitQubo take a list of budgets (i.e. max_number_of_products)
qubo = ProfitQubo(profits=profit, costs=cost, budget=budget, max_number_of_products=10)
sampler = SimulatedAnnealingSampler().sample_dqm
qubo.solve(sampler, **{"num_reads":100, "num_sweeps": 100000})
print(qubo.solution_set)
###Output
_____no_output_____
###Markdown
OK, let's try the hybrid solver now.
###Code
sampler = LeapHybridDQMSampler().sample_dqm
qubo.solve(sampler)
print(qubo.solution_set)
###Output
x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 x10 x11 x12 x13 ... y13 energy num_oc.
12 10 10 10 10 10 10 10 9 10 10 9 10 10 10 ... 1 -4205.553113 1
3 0 0 1 0 10 5 8 4 10 0 9 0 8 10 ... 1 -3218.261363 1
22 10 10 10 10 10 8 9 4 2 2 10 2 1 3 ... 1 -3123.25136 1
10 0 10 10 10 0 0 8 4 0 0 7 6 10 10 ... 1 -2893.952026 1
20 10 0 10 2 4 7 8 8 5 9 7 9 0 7 ... 0 -2742.28209 1
4 0 0 1 0 4 5 0 4 0 0 7 0 10 10 ... 1 -2674.632111 1
8 0 0 0 0 0 7 5 0 10 0 7 6 10 10 ... 1 -2585.862854 1
11 10 10 10 10 10 10 10 10 10 10 10 10 10 10 ... 0 -2568.389841 1
13 10 10 10 10 10 10 10 10 10 10 10 10 10 10 ... 0 -2568.389841 1
14 0 0 0 0 0 0 0 0 0 0 6 0 10 10 ... 1 -2541.815244 1
21 0 0 0 0 0 0 0 0 0 0 4 0 10 10 ... 1 -2538.177225 1
19 0 0 0 0 0 0 0 0 0 0 1 0 10 10 ... 1 -2535.618553 1
15 0 0 1 2 2 5 5 0 10 0 8 6 8 9 ... 0 -2254.845684 1
6 0 0 10 0 4 5 5 0 0 0 7 0 10 3 ... 1 -2177.510903 1
1 0 0 1 2 0 0 0 9 5 0 7 0 10 10 ... 0 -2154.264835 1
2 0 0 1 0 4 7 0 0 10 0 8 0 8 3 ... 0 -2044.836689 1
23 0 0 9 0 6 7 5 0 0 10 10 5 4 6 ... 0 -1969.87312 1
7 0 0 10 2 0 0 5 9 0 10 7 6 1 10 ... 0 -1845.243642 1
0 0 0 10 0 0 7 5 0 0 10 9 0 1 10 ... 0 460.936119 1
5 0 10 10 10 0 0 8 9 0 0 9 0 10 10 ... 1 842.225072 1
9 0 0 10 2 0 0 8 9 0 0 9 0 10 10 ... 1 2795.673726 1
18 10 10 10 10 10 10 10 10 10 10 10 10 10 10 ... 1 4893.276212 1
17 10 10 10 10 10 10 10 10 10 10 10 10 10 10 ... 1 726259.759128 1
16 10 10 10 10 10 10 10 10 10 10 10 10 10 10 ... 0 2430630.295183 1
['DISCRETE', 24 rows, 24 samples, 34 variables]
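###Markdown
Assuming `qubo.solution_set` is a `dimod.SampleSet` (which is what the printed summary above looks like), the best assignment can be pulled out directly:
###Code
# Hypothetical: inspect the lowest-energy sample returned by the hybrid solver.
best = qubo.solution_set.first
print(best.sample)   # variable assignments (item quantities plus auxiliary y variables)
print(best.energy)   # corresponding objective value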
|
Estimation.ipynb | ###Markdown
###Code
import numpy as np
import math
import random
from matplotlib import pyplot as plt
from IPython.display import clear_output
#np.random.seed(19680801)
grid=np.zeros((200,200)) # 200 x 200 grid, i.e. 40,000 people; change the size as needed
x0=np.random.randint(0,200) # randomly choose one person to start with the virus (high bound is exclusive)
y0=np.random.randint(0,200)
grid[x0,y0]=1
plt.imshow(grid, interpolation='none', vmin=0, vmax=1, aspect='equal')
ax = plt.gca();
ax.set_xticks(np.arange(0, 200, 1));
ax.set_yticks(np.arange(0, 200, 1));
ax.set_xticklabels(np.arange(1, 201, 1));
ax.set_yticklabels(np.arange(1, 201, 1));
num_sim=30 # number of time steps (hours) to simulate
plot_list=[]
infectioncount=[]
for a in range(0,num_sim): # each hour, every infected person infects one of the 8 neighbouring cells (or stays put), each with probability 1/9
    for i in range(0,200):
        for j in range(0,200):
            if grid[i,j]==1:
                # pick a random step of -1, 0 or +1 in each direction,
                # clipping at the grid edges so the infection does not wrap around
                x_step=np.clip(i+np.random.randint(-1,2), 0, 199)
                y_step=np.clip(j+np.random.randint(-1,2), 0, 199)
                grid[x_step,y_step]=1
    infectioncount.append(np.count_nonzero(grid))
    plt.figure()
    plt.imshow(grid, interpolation='none', vmin=0, vmax=1, aspect='equal')
    plt.show()
plt.plot(np.arange(0,num_sim), infectioncount, '.')
plt.xlabel('Time (hours of class)')
plt.ylabel('Number of Infections')
plt.show()
infectioncount
###Output
_____no_output_____ |
notebooks/experiments/icews/Integrated_crisis_early_warning_system_CVX_optimization.ipynb | ###Markdown
Imports
###Code
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import
import cvxpy as cp
import time
import collections
from typing import Dict
from typing import List
import pandas as pd
import numpy as np
import datetime
import matplotlib.pyplot as plt
import seaborn as sns
import networkx as nx
import imp
import os
import pickle as pk
%matplotlib inline
import sys
sys.path.insert(0, '../../../src/')
import network_utils
import utils
###Output
_____no_output_____
###Markdown
Helper functions
###Code
def reload():
imp.reload(network_utils)
imp.reload(utils)
def get_array_of_138(a):
r = a
if len(a) < 138:
r = np.array(list(a) + [0 for i in range(138 - len(a))])
return r
def get_matrix_stochastic(a):
a = a / np.sum(a)
return np.matrix(a)
###Output
_____no_output_____
###Markdown
Body
###Code
triad_map, triad_list = network_utils.generate_all_possible_sparse_triads()
unique_triad_num = len(triad_list)
transitives = []
for triad in triad_list:
transitives.append(network_utils.is_sparsely_transitive_balanced(triad))
transitives = np.array(transitives)
t = np.sum(transitives)
print('{} transitive and {} nontransitive.'.format(t, 138-t))
ch = []
for triad in triad_list:
ch.append(network_utils.is_sparsely_cartwright_harary_balanced(triad))
ch = np.array(ch)
t = np.sum(ch)
print('{} C&H balance and {} non C&H balance.'.format(t, 138-t))
cluster = []
for triad in triad_list:
cluster.append(network_utils.is_sparsely_clustering_balanced(triad))
cluster = np.array(cluster)
t = np.sum(cluster)
print('{} clustering balance and {} non clustering balance.'.format(t, 138-t))
###Output
93 transitive and 45 nontransitive.
24 C&H balance and 114 non C&H balance.
44 clustering balance and 94 non clustering balance.
###Markdown
Convex optimization problem
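Concretely, and matching the constraints coded below: given the observed (row-normalized) triad-distribution vectors $r_1, \dots, r_L$, we fit a sequence of row-stochastic transition matrices $P_i$ that reproduces each one-step evolution while changing as little as possible from one period to the next,
$$
\begin{aligned}
\min_{\{P_i\}} \quad & \sum_{i \ge 2} \left\lVert P_i - P_{i-1} \right\rVert_2 \\
\text{s.t.} \quad & 0 < P_i \le 1, \qquad P_i \mathbf{1} = \mathbf{1}, \\
& r_i P_i = r_{i+1}, \qquad \left\lVert r_{i+1} P_i - r_{i+1} \right\rVert_2 < \varepsilon,
\end{aligned}
$$
with $\varepsilon = 0.01$ in the code and the last few periods held out for testing.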
###Code
loaded_d = utils.load_it('/home/omid/Downloads/DT/cvx_data.pk')
obs = loaded_d['obs']
T = loaded_d['T']
obs_mat = []
for o in obs:
obs_mat.append(np.matrix(o))
obs_normalized = []
for o in obs:
obs_normalized.append(get_matrix_stochastic(o))
dists = []
for i in range(len(T) - 1):
dists.append(np.linalg.norm(T[i] - T[i+1]))
plt.plot(dists);
dists = []
for i in range(len(T) - 1):
st1 = network_utils.get_stationary_distribution(T[i])
st2 = network_utils.get_stationary_distribution(T[i + 1])
dists.append(np.linalg.norm(st1 - st2))
plt.plot(dists);
sts = []
for i in range(len(T)):
sts.append(network_utils.get_stationary_distribution(T[i]))
dists = []
mean_st = np.mean(sts, axis=0)
for st in sts:
dists.append(np.linalg.norm(st - mean_st))
plt.plot(dists);
mts = []
for i in range(len(T)):
mts.append(network_utils.get_mixing_time_range(T[i]))
plt.plot(mts);
l = len(T)
test_numbers = 10
# l = 20
# test_numbers = 5
l
# l = l - 1 # one less than actual value
r = obs_normalized
start_time = time.time()
n = 138
eps = 0.01
# lam1 = 0.5
errs = []
for test_number in np.arange(test_numbers, 0, -1):
print(test_number)
P = [cp.Variable(n, n) for _ in range(l - test_number - 1)]
term1 = 0
for i in range(1, l - test_number - 1):
term1 += cp.norm2(P[i] - P[i - 1])
# term2 = 0
# for i in range(1, l - test_number - 1):
# term2 += cp.norm1(P[i] - P[i - 1])
objective = cp.Minimize(term1) # + term2 * lam1)
# Constraints.
constraints = []
for i in range(l - test_number - 1):
constraints += (
[0 < P[i],
P[i] <= 1,
P[i] * np.ones(n) == np.ones(n),
r[i] * P[i] == r[i + 1],
# r[i + 1] * P[i] == r[i + 1]])
cp.norm2(r[i + 1] * P[i] - r[i + 1]) < eps])
# Problem.
prob = cp.Problem(objective, constraints)
# Solving the problem.
res = prob.solve(cp.MOSEK)
err = np.linalg.norm(r[l - test_number] - (r[l - test_number - 1] * P[l - test_number - 2].value), 2)
errs.append(err)
duration = time.time() - start_time
print('It took :{} mins.'.format(round(duration/60, 2)))
print('Errors: {} +- {}'.format(round(np.mean(errs), 4), round(np.std(errs)), 6))
print(errs)
# Baselines.
mean_errs = []
for test_number in np.arange(test_numbers, 0, -1):
mean_err = np.linalg.norm(r[l - test_number] - np.mean(r[:l - test_number - 1], axis=0)[0], 2)
mean_errs.append(mean_err)
last_errs = []
for test_number in np.arange(test_numbers, 0, -1):
last_err = np.linalg.norm(r[l - test_number] - r[l - test_number - 1], 2)
last_errs.append(last_err)
# rnd_errs = []
# for test_number in np.arange(test_numbers, 0, -1):
# rnd_err = np.linalg.norm(r[l - test_number] - (1/138) * np.ones(138), 2)
# rnd_errs.append(rnd_err)
sns.set(rc={'figure.figsize': (6, 4)})
plt.plot(errs, '-p')
plt.plot(mean_errs, '-o')
plt.plot(last_errs, '-*')
plt.legend(['Time-varying Markov Chains', 'Average Ratio', 'Last Ratio'])
plt.xlabel('Period')
plt.ylabel('Root Mean Square Error (RMSE)')
# plt.savefig('country_RMSE_20_5.png');
###Output
_____no_output_____
###Markdown
Save P
###Code
# estimated_matrices = []
# for i in range(len(P)):
# estimated_matrices.append(P[i].value)
# # Saves the transitions.
# with open('pickles/estimated_matrices_icew.pk', 'wb') as f:
# pk.dump(estimated_matrices, f)
# Loading.
with open('pickles/estimated_matrices_icew.pk', 'rb') as f:
estimated_matrices = pk.load(f)
errs
mean_errs
last_errs
f = plt.figure()
plt.plot(errs)
# plt.plot(rnd_errs)
plt.plot(mean_errs)
plt.plot(last_errs)
plt.legend(['Time-varying Markov Chains', 'Average Ratio', 'Last Ratio']);
###Output
_____no_output_____
###Markdown
Results log for different objective and constraint choices (l = number of periods, tn = number of held-out test periods):
- l = 10, tn = 3, norm2 objective, soft constraint norm2(r[i + 1] * P[i] - r[i + 1]) < eps: took 0.73 mins. Errors: 0.0188 +- 0.0 [0.013755268962595392, 0.010296676466642061, 0.032355268242071876]
- l = 10, tn = 3, norm2 objective, hard constraint r[i + 1] * P[i] == r[i + 1]: took 0.73 mins. Errors: 0.0196 +- 0.0 [0.01451470613135414, 0.01066150620601352, 0.033668886819223885]
- l = 10, tn = 3, 0.5 * norm1 + norm2 objective, soft constraint norm2(r[i + 1] * P[i] - r[i + 1]) < eps: took 7.17 mins. Errors: 0.0191 +- 0.0 [0.013871543936858752, 0.010700561801106873, 0.0326669902633813]
- l = 10, tn = 3, norm1 objective, soft constraint norm2(r[i + 1] * P[i] - r[i + 1]) < eps: took 7.06 mins. Errors: 0.0198 +- 0.0 [0.014775100256169692, 0.01158931803653387, 0.0331656784507608]
- l = 15, tn = 3, norm2 objective, soft constraint norm2(r[i + 1] * P[i] - r[i + 1]) < eps: took 1.75 mins. Errors: 0.02 +- 0.0 [0.030233537368111404, 0.010352427211241055, 0.024087536308780952]
- l = 15, tn = 3, norm2 objective, hard constraint r[i + 1] * P[i] == r[i + 1]: took 1.84 mins. Errors: 0.021174134391456965 +- 0.009946572226333857 [0.03176729638619647, 0.007862558834175212, 0.023892547953999217]
###Code
# BAK:
# start_time = time.time()
# n = 138
# # lam1 = 0.5
# eps = 0.01
# P = [cp.Variable(n, n) for _ in range(l-1)]
# term1 = 0
# for i in range(1, l-1):
# term1 += cp.norm2(P[i] - P[i - 1])
# # term2 = 0
# # for i in range(1, l-1):
# # term2 += cp.norm1(P[i] - P[i - 1])
# objective = cp.Minimize(term1) # + term2 * lam1)
# # Constraints.
# constraints = []
# for i in range(l-1):
# constraints += (
# [0 < P[i],
# P[i] <= 1,
# P[i] * np.ones(n) == np.ones(n),
# r[i] * P[i] == r[i + 1],
# r[i + 1] * P[i] == r[i + 1]])
# # cp.norm2(r[i + 1] * P[i] - r[i + 1]) < eps])
# # Problem.
# prob = cp.Problem(objective, constraints)
# # Solving the problem.
# res = prob.solve(cp.MOSEK)
# v = np.linalg.norm(r[l] - (r[l-1] * P[l-2].value), 2)
# print(v)
# duration = time.time() - start_time
# print('It took :{} mins.'.format(round(duration/60, 2)))
sns.set(rc={'figure.figsize': (6, 4)})
diff = []
for i in range(1, l-2):
diff.append(np.linalg.norm(P[i].value - P[i-1].value))
plt.plot(diff);
sns.set(rc={'figure.figsize': (14, 6)})
legends = []
for i, transition_matrix in enumerate(P):
st_dist = network_utils.get_stationary_distribution(np.asarray(transition_matrix.value))
plt.plot(st_dist)
# legends.append(i)
# plt.legend(legends)
self_transitive_means = []
self_nontransitive_means = []
nontransitive_to_transitive_means = []
transitive_to_nontransitive_means = []
self_transitive_stds = []
self_nontransitive_stds = []
nontransitive_to_transitive_stds = []
transitive_to_nontransitive_stds = []
for matrix in P:
trans_matrix = matrix.value
probs = np.sum(trans_matrix[transitives, :][:, transitives], axis=1)
self_transitive_means.append(np.mean(probs))
self_transitive_stds.append(np.std(probs))
probs = np.sum(trans_matrix[~transitives, :][:, transitives], axis=1)
nontransitive_to_transitive_means.append(np.mean(probs))
nontransitive_to_transitive_stds.append(np.std(probs))
probs = np.sum(trans_matrix[~transitives, :][:, ~transitives], axis=1)
transitive_to_nontransitive_means.append(np.mean(probs))
transitive_to_nontransitive_stds.append(np.std(probs))
probs = np.sum(trans_matrix[transitives, :][:, ~transitives], axis=1)
self_nontransitive_means.append(np.mean(probs))
self_nontransitive_stds.append(np.std(probs))
plt.errorbar(x=np.arange(l-2), y=self_transitive_means, yerr=self_transitive_stds, fmt='r')
plt.errorbar(x=np.arange(l-2), y=nontransitive_to_transitive_means, yerr=nontransitive_to_transitive_stds, fmt='g')
plt.errorbar(x=np.arange(l-2), y=self_nontransitive_means, yerr=self_nontransitive_stds, fmt='b')
plt.errorbar(x=np.arange(l-2), y=transitive_to_nontransitive_means, yerr=transitive_to_nontransitive_stds, fmt='k')
plt.legend(['self transitive', 'nontransitive to transitive', 'self nontransitive', 'transitive to nontransitive']);
# plt.errorbar(x=np.arange(39), y=self_transitive_means) #, yerr=self_transitive_stds)
# plt.errorbar(x=np.arange(39), y=nontransitive_to_transitive_means) #, yerr=nontransitive_to_transitive_stds)
# plt.errorbar(x=np.arange(39), y=self_nontransitive_means) #, yerr=self_nontransitive_stds)
# plt.errorbar(x=np.arange(39), y=transitive_to_nontransitive_means) #, yerr=transitive_to_nontransitive_stds)
# plt.legend(['self transitive', 'nontransitive to transitive', 'self nontransitive', 'transitive to nontransitive']);
trans_matrix = P[-1].value
# trans_matrix = estimated_matrices[-1]
###Output
_____no_output_____
###Markdown
KDE PLOTS STARTS
###Code
sns.set(rc={'figure.figsize': (6, 4)})
probs = np.sum(trans_matrix[transitives, :][:, transitives], axis=1)
sns.distplot(probs)
print('Transition probability of "transitive to self": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[~transitives, :][:, transitives], axis=1)
sns.distplot(probs)
print('Transition probability of "not transitive to transitive": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[transitives, :][:, ~transitives], axis=1)
sns.distplot(probs)
print('Transition probability of "transitive to not transitive": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[~transitives, :][:, ~transitives], axis=1)
sns.distplot(probs)
print('Transition probability of "not transitive to self": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
plt.xlabel('Probability')
plt.ylabel('#Triads');
plt.legend([r'B $\rightarrow$ B', r'U $\rightarrow$ B', r'B $\rightarrow$ U', r'U $\rightarrow$ U'], loc='upper center')
plt.savefig('ICEWS_transitivity_transitionprobabilities_kde.pdf');
sns.set(rc={'figure.figsize': (6, 4)})
probs = np.sum(trans_matrix[ch, :][:, ch], axis=1)
sns.distplot(probs)
print('Transition probability of "C&H balance to self": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[~ch, :][:, ch], axis=1)
sns.distplot(probs)
print('Transition probability of "not C&H balance to C&H balance": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[ch, :][:, ~ch], axis=1)
sns.distplot(probs)
print('Transition probability of "C&H balance to not C&H balance": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[~ch, :][:, ~ch], axis=1)
sns.distplot(probs)
print('Transition probability of "not C&H balance to not C&H balance": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
plt.xlabel('Probability')
plt.ylabel('#Triads');
plt.legend([r'B $\rightarrow$ B', r'U $\rightarrow$ B', r'B $\rightarrow$ U', r'U $\rightarrow$ U'], loc='upper center')
plt.savefig('ICEWS_classical_transitionprobabilities_kde.pdf');
sns.set(rc={'figure.figsize': (6, 4)})
probs = np.sum(trans_matrix[cluster, :][:, cluster], axis=1)
sns.distplot(probs)
print('Transition probability of "clustering to self": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[~cluster, :][:, cluster], axis=1)
sns.distplot(probs)
print('Transition probability of "not clustering to clustering": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[~cluster, :][:, ~cluster], axis=1)
sns.distplot(probs)
print('Transition probability of "not clustering to self": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[cluster, :][:, ~cluster], axis=1)
sns.distplot(probs)
print('Transition probability of "clustering to not clustering": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
plt.xlabel('Probability')
plt.ylabel('#Triads');
plt.legend([r'B $\rightarrow$ B', r'U $\rightarrow$ B', r'B $\rightarrow$ U', r'U $\rightarrow$ U'], loc='upper center');
plt.savefig('ICEWS_clustering_transitionprobabilities_kde.pdf');
###Output
/home/omid/.local/lib/python3.5/site-packages/scipy/stats/stats.py:1706: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
KDE PLOTS ENDS
###Code
sns.set(rc={'figure.figsize': (6, 4)})
probs = np.sum(trans_matrix[transitives, :][:, transitives], axis=1)
plt.hist(probs)
print('Transition probability of "transitive to self": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[~transitives, :][:, transitives], axis=1)
plt.hist(probs)
print('Transition probability of "not transitive to transitive": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[transitives, :][:, ~transitives], axis=1)
plt.hist(probs)
print('Transition probability of "transitive to not transitive": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[~transitives, :][:, ~transitives], axis=1)
plt.hist(probs)
print('Transition probability of "not transitive to self": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
plt.xlabel('Probability')
plt.ylabel('#Triads');
plt.legend(['balanced -> balanced', 'unbalanced -> balanced', 'balanced -> unbalanced', 'unbalanced -> unbalanced'])
# plt.title('(a)', weight='bold')
plt.savefig('ICEWS_transitivity_transitionprobabilities.pdf');
sns.set(rc={'figure.figsize': (6, 4)})
probs = np.sum(trans_matrix[ch, :][:, ch], axis=1)
plt.hist(probs)
print('Transition probability of "C&H balance to self": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[~ch, :][:, ch], axis=1)
plt.hist(probs)
print('Transition probability of "not C&H balance to C&H balance": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[ch, :][:, ~ch], axis=1)
plt.hist(probs)
print('Transition probability of "C&H balance to not C&H balance": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[~ch, :][:, ~ch], axis=1)
plt.hist(probs)
print('Transition probability of "not C&H balance to not C&H balance": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
plt.legend(['balanced -> balanced', 'unbalanced -> balanced', 'balanced -> unbalanced', 'unbalanced -> unbalanced'])
# plt.title('(c)', weight='bold')
plt.savefig('ICEWS_classical_transitionprobabilities.pdf');
sns.set(rc={'figure.figsize': (6, 4)})
probs = np.sum(trans_matrix[cluster, :][:, cluster], axis=1)
plt.hist(probs)
print('Transition probability of "clustering to self": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[~cluster, :][:, cluster], axis=1)
plt.hist(probs)
print('Transition probability of "not clustering to clustering": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[~cluster, :][:, ~cluster], axis=1)
plt.hist(probs)
print('Transition probability of "not clustering to self": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[cluster, :][:, ~cluster], axis=1)
plt.hist(probs)
print('Transition probability of "clustering to not clustering": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
plt.xlabel('Probability')
plt.ylabel('#Triads');
plt.legend(['balanced -> balanced', 'unbalanced -> balanced', 'unbalanced -> unbalanced', 'balanced -> unbalanced'])
# plt.title('(b)', weight='bold')
plt.savefig('ICEWS_clustering_transitionprobabilities.pdf');
###Output
Transition probability of "clustering to self": 0.87 +- 0.19
Transition probability of "not clustering to clustering": 0.91 +- 0.11
Transition probability of "not clustering to self": 0.09 +- 0.11
Transition probability of "clustering to not clustering": 0.13 +- 0.19
###Markdown
Specific triads transitions in different transition probability matrices
###Code
def print_those(from_triad, to_triad):
probs = []
for l in range(len(T)):
probs.append(
T[l][from_triad, to_triad])
print('{} +- {}'.format(np.mean(probs), np.std(probs)))
probs = []
for l in range(len(P)):
probs.append(
P[l].value[from_triad, to_triad])
print('{} +- {}\n'.format(np.mean(probs), np.std(probs)))
# transitivity balanced
print_those(from_triad=8, to_triad=22)
#classically balanced
print_those(from_triad=18, to_triad=33)
print_those(from_triad=15, to_triad=26)
print_those(from_triad=11, to_triad=37)
np.where(P[-1].value > 0.99)
np.where(P[-1].value[:, 22] > 0.006)
np.where(P[-1].value[:, 33] > 0.006)
np.where(P[-1].value[:, 26] > 0.006)
np.where(P[-1].value[:, 37] > 0.006)
# reload()
# utils.plot_box_plot_for_transitions(
# estimated_matrices[-1], transitives, True, 'ICEWS_transitivity', 'ICEWS')
reload()
sns.set(font_scale=1.3)
sns.set_style("white")
utils.plot_box_plot_for_transitions(
estimated_matrices[-1], transitives, True, 'ICEWS_transitivity')
# reload()
# utils.plot_box_plot_for_transitions(
# estimated_matrices[-1], cluster, True, 'ICEWS_clustering', 'ICEWS')
reload()
sns.set(font_scale=1.3)
sns.set_style("white")
utils.plot_box_plot_for_transitions(
estimated_matrices[-1], cluster, True, 'ICEWS_clustering')
reload()
sns.set(font_scale=1.3)
sns.set_style("white")
utils.plot_box_plot_for_transitions(
estimated_matrices[-1], ch, True, 'ICEWS_classical')
sns.set_style('white', rc={'figure.figsize': (6, 4)})
probs1 = np.sum(trans_matrix[transitives, :][:, transitives], axis=1)
probs2 = np.sum(trans_matrix[~transitives, :][:, transitives], axis=1)
probs3 = np.sum(trans_matrix[transitives, :][:, ~transitives], axis=1)
probs4 = np.sum(trans_matrix[~transitives, :][:, ~transitives], axis=1)
colors = ['#e66101', '#fdb863', '#b2abd2', '#5e3c99']
plt.hist([probs1, probs2, probs3, probs4], color=colors)
plt.xlabel('Probability')
plt.ylabel('#Triads');
plt.legend([r'B $\rightarrow$ B', r'U $\rightarrow$ B', r'B $\rightarrow$ U', r'U $\rightarrow$ U'], loc='upper center')
plt.savefig('ICEWS_transitivity_transitionprobabilities_binbeside.pdf');
colors = ['#e66101', '#b2abd2', '#fdb863', '#5e3c99']
probs = np.sum(trans_matrix[cluster, :][:, cluster], axis=1)
sns.distplot(probs, color=colors[0])
print('Transition probability of "clustering to self": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[~cluster, :][:, cluster], axis=1)
sns.distplot(probs, color=colors[1])
print('Transition probability of "not clustering to clustering": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[~cluster, :][:, ~cluster], axis=1)
sns.distplot(probs, color=colors[2])
print('Transition probability of "not clustering to self": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
probs = np.sum(trans_matrix[cluster, :][:, ~cluster], axis=1)
sns.distplot(probs, color=colors[3])
print('Transition probability of "clustering to not clustering": {} +- {}'.format(
round(np.mean(probs), 2), round(np.std(probs), 2)))
plt.xlabel('Probability')
plt.ylabel('#Triads');
plt.legend([r'B $\rightarrow$ B', r'U $\rightarrow$ B', r'B $\rightarrow$ U', r'U $\rightarrow$ U'], loc='upper center');
def set_the_hatch(bars, hatch):
for patch in bars.patches:
if not patch.get_hatch():
patch.set_hatch(hatch)
sns.set_style('white', rc={'figure.figsize': (6, 4)})
ax = plt.gca()
bins = np.arange(0, 1, 0.05)
alpha = 1
# Define some hatches
hatches = ['-', '+', 'x', '\\', '*', 'o']
probs1 = np.sum(trans_matrix[transitives, :][:, transitives], axis=1)
probs2 = np.sum(trans_matrix[~transitives, :][:, transitives], axis=1)
probs3 = np.sum(trans_matrix[transitives, :][:, ~transitives], axis=1)
probs4 = np.sum(trans_matrix[~transitives, :][:, ~transitives], axis=1)
bars = sns.distplot(probs1, bins=bins, norm_hist=False, hist_kws={"linewidth": 3, "alpha": alpha})
set_the_hatch(bars, hatches[1])
bars = sns.distplot(probs2, bins=bins, norm_hist=False, hist_kws={"linewidth": 3, "alpha": alpha})
set_the_hatch(bars, hatches[4])
bars = sns.distplot(probs3, bins=bins, norm_hist=False, hist_kws={"linewidth": 3, "alpha": alpha})
set_the_hatch(bars, hatches[3])
bars = sns.distplot(probs4, bins=bins, norm_hist=False, hist_kws={"linewidth": 3, "alpha": alpha})
set_the_hatch(bars, hatches[5])
ax.set_xlim([0, 1])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.xaxis.grid(b=True, which='major', linestyle='--')
ax.xaxis.grid(b=True, which='minor', linestyle=':')
ax.yaxis.grid(b=True, which='major', linestyle='--')
plt.tight_layout()
plt.xlabel('Probability')
plt.ylabel('#Transitions');
plt.legend([r'B $\rightarrow$ B', r'U $\rightarrow$ B', r'B $\rightarrow$ U', r'U $\rightarrow$ U'], loc='upper center');
plt.savefig('ICEWS_transitivity_transitionprobabilities_kde2.pdf');
sns.set_style('white', rc={'figure.figsize': (6, 4)})
ax = plt.gca()
bins = np.arange(0, 1, 0.05)
alpha = 0.5
# Define some hatches
hatches = ['-', '+', 'x', '\\', '*', 'o']
probs1 = np.sum(trans_matrix[transitives, :][:, transitives], axis=1)
probs2 = np.sum(trans_matrix[~transitives, :][:, transitives], axis=1)
probs3 = np.sum(trans_matrix[transitives, :][:, ~transitives], axis=1)
probs4 = np.sum(trans_matrix[~transitives, :][:, ~transitives], axis=1)
sns.distplot(probs1, bins=bins, norm_hist=False, hist_kws={"linewidth": 3, "alpha": alpha})
sns.distplot(probs2, bins=bins, norm_hist=False, hist_kws={"linewidth": 3, "alpha": alpha})
sns.distplot(probs3, bins=bins, norm_hist=False, hist_kws={"linewidth": 3, "alpha": alpha})
sns.distplot(probs4, bins=bins, norm_hist=False, hist_kws={"linewidth": 3, "alpha": alpha})
ax.set_xlim([0, 1])
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.xaxis.grid(b=True, which='major', linestyle='--')
ax.xaxis.grid(b=True, which='minor', linestyle=':')
ax.yaxis.grid(b=True, which='major', linestyle='--')
plt.tight_layout(pad=1.5)
plt.xlabel('Probability')
plt.ylabel('#Transitions');
plt.legend([r'B $\rightarrow$ B', r'U $\rightarrow$ B', r'B $\rightarrow$ U', r'U $\rightarrow$ U'], loc='upper center');
plt.savefig('ICEWS_transitivity_transitionprobabilities_kde.pdf');
###Output
/home/omid/.local/lib/python3.5/site-packages/scipy/stats/stats.py:1706: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
|
simulate_directional_sound.ipynb | ###Markdown
###Code
!pip install sofasonix
# press "Mount Drive"
import numpy as np
import matplotlib.pyplot as plt
from SOFASonix import SOFAFile
import scipy.io.wavfile as wav
# replace with your path!
loadsofa = SOFAFile.load('/content/drive/My Drive/directional_sound/HRIR_FULL2DEG.sofa')
data = loadsofa.data_ir
direction = loadsofa.SourcePosition
direction = direction[:,0:2] # the first two columns are the azimuth and elevation angles in degrees
sr = int(loadsofa.Data_SamplingRate[0]) # sampling_rate in Hz
#loadsofa.view() # if interested, you can explore the whole dataset
## create noise signal
duration = 0.5 #seconds
sample_n = int(duration*sr)
noise = np.random.uniform(-1,1,sample_n)
## create speech signal
# You can take the 'hallo2.wav' file from the repository or record your own voice e.g. with Audacity and sampling rate of 48kHz
load_speech = wav.read('/content/drive/My Drive/directional_sound/hallo2.wav')
speech = load_speech[1]
sampling_rate = load_speech[0]
if sampling_rate != sr:
print('Warning: sampling_rate != sr')
def get_hrir(az_wish, el_wish):
m_altered = np.abs(direction[:,0]- az_wish) + np.abs(direction[:,1]- el_wish)
m_min = np.amin(m_altered, axis=0)
i_row = np.argwhere(m_altered == m_min)[0][0]
return data[i_row][0], data[i_row][1], i_row
def get_stereo(signal, az_wish, el_wish):
'''
signal: numpy 1D array, e.g. signal=noise or signal=speech
az_wish: azimuth angle in degree at which sound should be virtually placed
el_wish: elevation angle in degree at which sound should be virtually placed
'''
hrir_l, hrir_r, i_row = get_hrir(az_wish, el_wish)
left = np.convolve(signal, hrir_l, mode='valid') # 'valid': avoid boundary effects; The convolution product is only given for points where the signals overlap completely
right = np.convolve(signal, hrir_r, mode='valid')
audio = np.hstack((left.reshape(-1,1), right.reshape(-1,1)))
scaled = np.int16(audio/np.max(np.abs(audio)) * 32767)
file_name = 'stereo['+str(direction[i_row][0].round(1))+', '+str(direction[i_row][1].round(1))+'].wav'
wav.write('/content/drive/My Drive/directional_sound/'+file_name, sr, scaled)
# az (azimuth) is the angle lying in the horizontal plane. It goes from 0° - nose direction, to 90° - left ear, to 180° - back, to 270° - right ear, and again to 360° - nose direction
# el (elevation) is the angle lying in the vertical plane. It goes from -88° - down, to 88° - up.
get_stereo(signal=speech, az_wish=90, el_wish=0)
# You have to use ear phones! Loudspeaker will not make the illusion of sound coming from a certain direction.
def get_roundabout(signal, az_begin, az_end, step_size, el=0):
'''
signal: numpy 1D array, e.g. signal=noise or signal=speech
az_begin: azimuth angle in degree at which the roundabout starts
az_end: azimuth angle in degree at which the roundabout ends
step_size:azimuth angle step in degree
el: elevation in degree at which the sound travels horizontally around the head
'''
hrir_l, hrir_r, i_row = get_hrir(0, el_wish=0)
left = np.convolve(signal, hrir_l, mode='valid')
right = np.convolve(signal, hrir_r, mode='valid')
audio = np.hstack((left.reshape(-1,1),right.reshape(-1,1)))
    print('Simulated azimuth angles: ', np.arange(az_begin, az_end+1, step_size))
for az_wish in np.arange(az_begin, az_end+1, step_size):
hrir_l, hrir_r, i_row = get_hrir(az_wish, el_wish=el)
left = np.convolve(signal, hrir_l, mode='valid')
right = np.convolve(signal, hrir_r, mode='valid')
audio_n = np.hstack((left.reshape(-1,1),right.reshape(-1,1)))
audio = np.vstack((audio,audio_n))
scaled = np.int16(audio/np.max(np.abs(audio)) * 32767)
file_name = 'new_audio_roundabout_hallo.wav'
wav.write('/content/drive/My Drive/directional_sound/'+file_name, sr, scaled)
get_roundabout(signal=speech, az_begin=0, az_end=360, step_size=45, el=0)
###Output
Simulated azimuth angles:  [ 0 45 90 135 180 225 270 315 360]
|